| content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35-137 chars) |
|---|---|---|---|---|---|---|---|---|
Q:
How do I determine which PIP library to download manually? (Parsing pypi.org json output)
I have all the details of my machine (Linux, Python 3.8). How can I determine which wheel file PIP would have downloaded without using pip install or pip download? The goal is to get the package details from (example) https://pypi.org/pypi/pyarrow/json and then get the size of the wheel.
The json output is given, but there can be multiple versions for a single release. Are there standardized values that I can match? Or how does pip determine which version? Or is there some way to know what my machine would have downloaded?
Seems like python_version can have a variety of values like py2.py3, py3, py2, cp36...
A:
Try:
Import statements:
import requests
import re
Function to compare versions using the pattern from setup.py (to the best of my knowledge):
def _version_tuple(ver: str) -> tuple:
    # Compare versions numerically; plain string comparison would rank "3.10" below "3.9".
    return tuple(int(part) for part in ver.split("."))

def compare_versions(compare_statement: str) -> bool:
    try:
        decompose = re.search(r"(\d+(\.[\d*]+)*)([^\d*.]+)(\d+(\.[\d*]+)*)", compare_statement)
        gr = decompose.groups()
        print("GR", gr)
        ver_l, op, ver_r = gr[0], gr[2], gr[3]
        print("components", ver_l, op, ver_r)
        ver_l = _version_tuple(ver_l.rstrip(".*"))
        ver_r = _version_tuple(ver_r.rstrip(".*"))
        if op == ">":
            return ver_l > ver_r
        if op == "<":
            return ver_l < ver_r
        if op == ">=":
            return ver_l >= ver_r
        if op == "<=":
            return ver_l <= ver_r
        if op == "!=":
            return ver_l != ver_r
        if op in ("~=", "=="):
            # ~= is a bit more complex, but "approximately" :) - treat a
            # trailing wildcard (e.g. "3.*") as a prefix match
            return ver_l[:len(ver_r)] == ver_r
        return False
    except Exception as err:
        print("ERROR!", compare_statement, err)
        return False
Initialize basic variables:
url = r"https://pypi.org/pypi/{package}/json"
packages = ["pandas", "pyarrow", "numpy", "awswrangler", "apache-airflow"]
versions = dict()
Parse json to get most standardized pattern for python compatibility comparison (from requires_python):
for package in packages:
    versions[package] = dict(
        map(
            lambda x: (
                x[0],
                max(x[1], key=lambda a: a["upload_time"])["requires_python"] if x[1] else []
            ),
            requests.get(url.format(package=package)).json()["releases"].items()
        )
    )
Actual check (might be integrated in the above loop) - my goal was to keep code clear, not concise:
my_python_version = "3.7"
versions_suitable = dict()
for package in packages:
    versions_suitable[package] = []
    for v in versions[package]:
        conditions = versions[package][v]
        if not conditions:
            continue
        # every comma-separated condition must hold, not just one of them
        if all(compare_versions(my_python_version + c) for c in conditions.split(",")):
            print(f"For {package} version {v}, with conditions {conditions} is suitable!")
            versions_suitable[package].append(v)
    if versions_suitable[package]:
        print(f"Best version suitable: {max(versions_suitable[package])}")
The prints could be removed; there's a lot of parsing and broad assumptions here, hence I kept the code verbose.
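A different angle, for what it's worth: pip's actual wheel selection is driven by platform compatibility tags, not only requires_python. Below is a minimal sketch using the packaging library (the library pip itself builds on); it assumes the per-release JSON at https://pypi.org/pypi/{package}/{version}/json with its urls, filename and size keys, and the pick_wheel helper name is mine:
import requests
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

def pick_wheel(package: str, version: str):
    # "urls" lists the files uploaded for this exact release
    files = requests.get(f"https://pypi.org/pypi/{package}/{version}/json").json()["urls"]
    wheels = [f for f in files if f["filename"].endswith(".whl")]
    # sys_tags() yields this interpreter's compatibility tags, most
    # preferred first; this is the ordering pip uses to rank wheels
    for tag in sys_tags():
        for f in wheels:
            _, _, _, tags = parse_wheel_filename(f["filename"])
            if tag in tags:
                return f["filename"], f["size"]
    return None  # no compatible wheel; pip would fall back to the sdist

This reproduces the platform/ABI half of pip's decision, so the returned size should be the wheel pip would actually fetch on the machine running the script.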
|
[
0
] |
[] |
[] |
[
"pip",
"python"
] |
stackoverflow_0074565967_pip_python.txt
|
Q:
How to capture the output of loop through a dictionary for use outside the loop in Kivy/python using the .update() method?
I can't figure out how to print the whole output of a loop from the tested Kivy Minimal Reproducible Example below:
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout

class DataTable(BoxLayout):
    def __init__(self, table='', **kwargs):
        super().__init__(**kwargs)

        def build(self):
            datas = {}
            for i in range(3):
                data = {
                    '1': {i: 'TESTa'},
                    '2': {i: 'TESTb'},
                    '3': {i: 'TESTc'},
                }  # data store
                datas.update(data)
            print(datas)  # outside the for loop

        test = build(self)

class DataTableApp(App):
    def build(self):
        return DataTable()

if __name__ == '__main__':
    DataTableApp().run()
It outputs only the last dictionary item:
{'1': {2: 'TESTa'}, '2': {2: 'TESTb'}, '3': {2: 'TESTc'}}
While with print(datas) moved inside the for loop, as in this version:
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout

class DataTable(BoxLayout):
    def __init__(self, table='', **kwargs):
        super().__init__(**kwargs)

        def build(self):
            datas = {}
            for i in range(3):
                data = {
                    '1': {i: 'TESTa'},
                    '2': {i: 'TESTb'},
                    '3': {i: 'TESTc'},
                }  # data store
                datas.update(data)
                print(datas)  # inside the for loop

        test = build(self)

class DataTableApp(App):
    def build(self):
        return DataTable()

if __name__ == '__main__':
    DataTableApp().run()
it does print the whole dictionary:
{'1': {0: 'TESTa'}, '2': {0: 'TESTb'}, '3': {0: 'TESTc'}}
{'1': {1: 'TESTa'}, '2': {1: 'TESTb'}, '3': {1: 'TESTc'}}
{'1': {2: 'TESTa'}, '2': {2: 'TESTb'}, '3': {2: 'TESTc'}}
I tried what these answers suggest (initializing the empty dictionary variable datas = {} outside and above the for loop scope):
https://stackoverflow.com/a/65779535/10789707
https://stackoverflow.com/a/66901779/10789707
https://stackoverflow.com/a/61539025/10789707
https://stackoverflow.com/a/56638281/10789707
I was expecting the 2nd output above even when the print statement print(datas) is outside the loop scope:
{'1': {0: 'TESTa'}, '2': {0: 'TESTb'}, '3': {0: 'TESTc'}}
{'1': {1: 'TESTa'}, '2': {1: 'TESTb'}, '3': {1: 'TESTc'}}
{'1': {2: 'TESTa'}, '2': {2: 'TESTb'}, '3': {2: 'TESTc'}}
The end goal is to then use the whole dictionary (not just the last key-value pair) for display in the Kivy window app, as per the source example from this GitHub script:
https://github.com/mfazrinizar/Datatable-Kivy/blob/master/datatable.py
The data dictionary's key-value pairs must all be accessible outside of the loop in order to then display them in the Kivy window.
Here's other documentation and resources I've consulted:
https://towardsdatascience.com/3-ways-to-iterate-over-python-dictionaries-using-for-loops-14e789992ce5
https://realpython.com/iterate-through-dictionary-python/
for loop printing out last item in my array only
https://qr.ae/pv0JvZ
Why does declaring the variable outside the for loop print all the elements of an array but declaring first inside the for loop prints only the last?
Why python only printing the last key from dictionary?
Append a dictionary to a dictionary
https://www.youtube.com/watch?v=nCdcFOLww_c
https://www.geeksforgeeks.org/iterate-over-a-dictionary-in-python/
Python dictionary only keeps information from last iteration of loop
A:
From the documentation: the update method overwrites the values for the keys.
To keep things simple, consider only what happens to the key '1'.
On the first iteration the value is set to {0: 'TESTa'}. On the second iteration the value is set to {1: 'TESTa'}. Remember, the method overwrites the values; it doesn't "update" them.
If you want to update the values, you will need to do something like the following.
datas['1'].update({i: 'TESTa'})
Edit based on comments
I'm not exactly sure what the contents of the datas dictionary represents. But if you're trying to use loops to simplify the construction, then the following creates what you want.
datas = {
    '1': {i: 'TESTa' for i in range(3)},
    '2': {i: 'TESTb' for i in range(3)},
    '3': {i: 'TESTc' for i in range(3)}
}
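If the loop itself has to stay (for example because i comes from real data), a loop-based sketch that merges into the inner dicts instead of replacing them:
datas = {'1': {}, '2': {}, '3': {}}
for i in range(3):
    # update the nested dicts, so keys from earlier iterations survive
    datas['1'].update({i: 'TESTa'})
    datas['2'].update({i: 'TESTb'})
    datas['3'].update({i: 'TESTc'})
print(datas)
# {'1': {0: 'TESTa', 1: 'TESTa', 2: 'TESTa'}, '2': {0: 'TESTb', ...}, '3': {0: 'TESTc', ...}}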
|
[
0
] |
[] |
[] |
[
"dictionary",
"kivy",
"loops",
"python",
"python_3.x"
] |
stackoverflow_0074566425_dictionary_kivy_loops_python_python_3.x.txt
|
Q:
How to make FOR run together with while?
I have the code below working up to the WHILE; I needed to add the WHILE to capture all the information on the page.
However, the code stops working there, so I need to know how I can make the rest of the code run after the while loop.
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument("start-maximized")
options.add_argument('--disable-notifications')
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 1)

url = "https://www.google.com.br/maps/search/contabilidade+balneario+camboriu/@-26.9905418,-48.6289914,15z"
driver.get(url)

while True:
    try:
        wait.until(EC.visibility_of_element_located((By.XPATH, "//span[contains(text(),'reached the end')]")))
        barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[@aria-label]")))
        driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", barraRolagem)
        break
    except:
        barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[@aria-label]")))
        driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", barraRolagem)
        time.sleep(0.5)

        classe_empresas = driver.find_elements(By.CLASS_NAME, "hfpxzc")
        for empresa in classe_empresas:
            urls = empresa.get_attribute("href")
            links.append(urls)

        for paginas_individuais in links:
            driver.get(paginas_individuais)
            try:
                print("Telefone")
                tel = driver.find_element(By.XPATH, "//div[contains(text(),'("+ddd+")')]").get_attribute("innerHTML")
                print("Endereco")
                endereco = driver.find_element(By.XPATH, "/html[1]/body[1]/div[3]/div[9]/div[9]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[1]/div[7]/div[3]/button[1]/div[1]/div[2]/div[1]").get_attribute("innerHTML")
                print("Nome")
                nome = driver.find_element(By.TAG_NAME, "title").get_attribute("innerHTML")
            except:
                print("erro")
A:
The for paginas_individuais in links: block did not run as intended due to unintended indentation: it is nested inside the while True: loop.
This block should have the same indentation as the while True: statement so that it is performed after the while True: loop completes.
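A sketch of the corrected layout (assuming links is initialized first; the scrolling and extraction details from the question are elided):
links = []  # must exist before anything appends to it

while True:
    # ... scrolling logic from the question; `break` once the
    # "reached the end" marker becomes visible ...
    break

# dedented to the same level as `while True:`, so this part only
# runs after the scrolling loop has finished
classe_empresas = driver.find_elements(By.CLASS_NAME, "hfpxzc")
for empresa in classe_empresas:
    links.append(empresa.get_attribute("href"))

for paginas_individuais in links:
    driver.get(paginas_individuais)
    # ... per-page extraction from the question ...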
|
[
1
] |
[] |
[] |
[
"indentation",
"python",
"selenium"
] |
stackoverflow_0074566390_indentation_python_selenium.txt
|
Q:
Image.open() cannot identify image file - Python?
I am running Python 2.7 in Visual Studio 2013. The code previously worked fine in Spyder, but when I run:
import numpy as np
import scipy as sp
import math as mt
import matplotlib.pyplot as plt
import Image
import random
# (0, 1) is N
SCALE = 2.2666 # the scale is chosen to be 1 m = 2.266666666 pixels
MIN_LENGTH = 150 # pixels
PROJECT_PATH = 'C:\\cimtrack_v1'
im = Image.open(PROJECT_PATH + '\\ST.jpg')
I end up with the following errors:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\cimtrack_v1\PythonApplication1\dr\trajgen.py", line 19, in <module>
im = Image.open(PROJECT_PATH + '\\ST.jpg')
File "C:\Python27\lib\site-packages\PIL\Image.py", line 2020, in open
raise IOError("cannot identify image file")
IOError: cannot identify image file
Why is it so and how may I fix it?
As suggested, I have used the Pillow installer to my Python 2.7. But weirdly, I end up with this:
>>> from PIL import Image
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named PIL
>>> from pil import Image
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pil
>>> import PIL.Image
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named PIL.Image
>>> import PIL
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named PIL
All fail!
A:
I had the same issue.
from PIL import Image
instead of
import Image
fixed the issue
A:
So after struggling with this issue for quite some time, this is what could help you:
from PIL import Image
instead of
import Image
Also, if your Image file is not loading and you're getting an error "No file or directory" then you should do this:
path=r'C:\ABC\Users\Pictures\image.jpg'
and then open the file
image=Image.open(path)
A:
In my case, I already had "from PIL import Image" in my code.
The error occurred for me because the image file was still in use (locked) by a previous operation in my code. I had to add a small delay or attempt to open the file in append mode in a loop, until that did not fail. Once that did not fail, it meant the file was no longer in use and I could continue and let PIL open the file. Here are the functions I used to check if the file is in use and wait for it to be available.
import os
import time

def is_locked(filepath):
    locked = None
    file_object = None
    if os.path.exists(filepath):
        try:
            buffer_size = 8
            # Try opening the file in append mode; this raises IOError
            # while another process still holds the file.
            file_object = open(filepath, 'a', buffer_size)
            if file_object:
                locked = False
        except IOError as message:
            locked = True
        finally:
            if file_object:
                file_object.close()
    return locked

def wait_for_file(filepath):
    wait_time = 1
    while is_locked(filepath):
        time.sleep(wait_time)
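A usage sketch for the two helpers above (the filename is hypothetical):
from PIL import Image

path = 'frame_0001.png'  # hypothetical file written by an earlier step
wait_for_file(path)      # blocks until the writer releases the file
img = Image.open(path)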
A:
First, check your Pillow version:
python -c 'import PIL; print PIL.PILLOW_VERSION'
Using pip install --upgrade pillow to upgrade the version from 2.7 to 2.9 (or 3.0) fixed this for me.
A:
In my case, the image was corrupted during download (using wget with a GitHub URL).
Try with multiple images from different sources:
from PIL import Image
Image.open()
A:
In my case, it was because the images I used were stored on a Mac, which generates many hidden files like .image_file.png, so they turned out to not even be the actual images I needed and I could safely ignore the warning or delete the hidden files. It was just an oversight in my case.
A:
Just a note for people having the same problem as me.
I've been using OpenCV/cv2 to export numpy arrays into TIFFs, but I had problems opening these TIFFs with PIL's Image.open and got the same error as in the title.
The problem turned out to be that PIL's Image.open could not open TIFFs created by exporting numpy float64 arrays. When I changed it to float32, PIL could open the TIFF again.
A:
Often it is because the image file was not closed by the last program.
It is better to use
with Image.open(file_path) as img:
    # do something
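A slightly fuller sketch of that pattern (hypothetical filename):
from PIL import Image

with Image.open('photo.jpg') as img:
    print(img.size, img.mode)
# the file handle is closed here, so other code can reopen or delete the file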
A:
If you are using Anaconda on windows then you can open Anaconda Navigator app and go to Environment section and search for pillow in installed libraries and mark it for upgrade to latest version by right clicking on the checkbox.
Screenshot for reference:
This has fixed the following error:
PermissionError: [WinError 5] Access is denied: 'e:\\work\\anaconda\\lib\\site-packages\\pil\\_imaging.cp36-win_amd64.pyd'
A:
Seems like a Permissions Issue. I was facing the same error. But when I ran it from the root account, it worked. So either give the read permission to the file using chmod (in linux) or run your script after logging in as a root user.
A:
In my case there was an empty picture in the folder. After deleting the empty .jpg's it worked normally.
A:
This error can also occur when trying to open a multi-band image with PIL. It seems to do fine with 4 bands (probably because it assumes an alpha channel) but anything more than that and this error pops out. In my case, I fixed it by using tifffile.imread instead.
A:
In my case the image file had just been written to and needed to be flushed before opening, like so:
img_file.flush()
img = Image.open(img_file.name)
A:
For anyone doing this at a bigger scale: you might also check how many file descriptors you have. PIL will throw this error if you run out at a bad moment.
A:
For whoever reaches here with the error PIL UnidentifiedImageError: cannot identify image file in Google Colab, with a new PIL version, when none of the previous solutions works:
Simply restart the environment; your installed PIL version is probably outdated.
A:
For me it was fixed by downloading the image data set I was using again (in fact I forwarded the copy I had locally using vs-code's SFTP). Here is the jupyter notebook I used (in vscode) with its output:
from pathlib import Path
import PIL
import PIL.Image as PILI
#from PIL import Image
print(PIL.__version__)
img_path = Path('PATH_UR_DATASET/miniImagenet/train/n03998194/n0399819400000585.jpg')
print(img_path.exists())
img = PILI.open(img_path).convert('RGB')
print(img)
output:
7.0.0
True
<PIL.Image.Image image mode=RGB size=158x160 at 0x7F4AD0A1E050>
note that open always opens in r mode and even has a check to throw an error if that mode is changed.
A:
In my case the error was caused by alpha channels in a TIFF file.
A:
I had the same issue. In my case, the image file size was 0 (zero). Check the file size before opening the image.
fsize = os.path.getsize(fname_image)
if fsize > 0:
    img = Image.open(fname_image)
    # do something
A:
I'll add my particular case.
I was processing images uploaded through multipart/form-data using AWS API Gateway. When I uploaded images that had not been giving this error locally, I observed an UnidentifiedImageError exception thrown by PIL when loading the uploaded image. In order to fix this error I had to add multipart/form-data to the settings of the service.
A:
I'm working in Google Colab, and I had the same problem.
UnidentifiedImageError: cannot identify image file '/content/drive/MyDrive/Python/test.jpg'
The problem is that the default version of PIL in Colab (as of today, 24/11/2022) is 9.3.0, but when you run !pip install pillow, the version that gets installed is 7.1.2.
So, what I did was open a new Colab notebook and NOT pip-install pillow. It worked.
|
[
69,
14,
12,
8,
4,
2,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"python",
"python_imaging_library"
] |
stackoverflow_0019230991_python_python_imaging_library.txt
|
Q:
force re.search to include # and $
I am trying to get a substring between two markers using re in Python, for example:
import re
test_str = "#$ -N model_simulation 2022"
# these two lines work
# the output is: model_simulation
print(re.search("-N(.*)2022",test_str).group(1))
print(re.search(" -N(.*)2022",test_str).group(1))
# these two lines give the error: 'NoneType' object has no attribute 'group'
print(re.search("$ -N(.*)2022",test_str).group(1))
print(re.search("#$ -N(.*)2022",test_str).group(1))
I read the documentation of re here. It says that "#" is intentionally ignored so that the outputs look neater.
But in my case, I do need to include "#" and "$". I need them to identify the part of the string that I want, because the "-N" is not unique in my entire text string for real work.
Is there a way to force re to include those? Or is there a different way without using re?
Thanks.
A:
You can escape both with \, for example,
print(re.search("\#\$ -N(.*)2022",test_str).group(1))
# output model_simulation
A:
You can get rid of the special meaning by using the backslash prefix: \$. This way, you can match the dollar symbol in a given string.
# add backslash before # and $
# the output is: model_simulation
print(re.search("\$ -N(.*)2022",test_str).group(1))
print(re.search("\#\$ -N(.*)2022",test_str).group(1))
A:
In regular expressions, $ signals the end of the string. So 'foo' would match foo anywhere in the string, but 'foo$' only matches foo if it appears at the end. To solve this, you need to escape it by prefixing it with a backslash; that way it will match a literal $ character.
# only starts a comment in verbose mode (re.VERBOSE, which also ignores spaces); otherwise it just matches a literal #.
In general, it is also good practice to use raw string literals for regular expressions (r'foo'), which means Python will leave backslashes alone so they don't conflict with regular expressions (that way you don't have to type \\\\ to match a single backslash \).
Instead of re.search, it looks like you actually want re.fullmatch, which matches only if the whole string matches.
So I would write your code like this:
print(re.search(r"\$ -N(.*)2022", test_str).group(1)) # This one would not work with fullmatch, because it doesn't match at the start
print(re.fullmatch(r"#\$ -N(.*)2022", test_str).group(1))
In a comment you mentioned that the string you need to match changes all the time. In that case, re.escape may prove useful.
Example:
prefix = '#$ -N'
postfix = '2022'
print(re.fullmatch(re.escape(prefix) + '(.*)' + re.escape(postfix), test_str).group(1))
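A quick end-to-end check of that approach against the question's test string (the .strip() just trims the spaces around the captured group):
import re

test_str = "#$ -N model_simulation 2022"
prefix, postfix = "#$ -N", "2022"

pattern = re.escape(prefix) + "(.*)" + re.escape(postfix)
print(re.fullmatch(pattern, test_str).group(1).strip())  # model_simulation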
|
[
1,
1,
1
] |
[] |
[] |
[
"python",
"python_3.x",
"python_re"
] |
stackoverflow_0074566491_python_python_3.x_python_re.txt
|
Q:
ValueError: operands could not be broadcast together with shapes (2,1000) (2,)
I've created a function to define a test statistic, which I want to test in python. It resamples 1000 times from an existing sample (ex. matrix2, which is just a column) and takes the mode of these samples. Basically it bootstraps with the mode to create a sampling distribution of modes for both matrix2 and matrix3. Then, it compares these distributions using the KS test.
import numpy as np
from scipy import stats
from scipy.stats import ks_2samp, permutation_test

def newTestStat(matrix2, matrix3):
    num_samples = 1000

    sample_size_2 = len(matrix2)
    replications_2 = np.array([np.random.choice(matrix2, sample_size_2, replace=True) for _ in range(num_samples)])
    mode_2 = stats.mode(replications_2, axis=1)
    sampleModes2 = mode_2.mode.flatten().tolist()

    sample_size_3 = len(matrix3)
    replications_3 = np.array([np.random.choice(matrix3, sample_size_3, replace=True) for _ in range(num_samples)])
    mode_3 = stats.mode(replications_3, axis=1)
    sampleModes3 = mode_3.mode.flatten().tolist()

    return ks_2samp(np.array(sampleModes2), np.array(sampleModes3))

dataToUseMatrix = (matrix2, matrix3)
pTest = permutation_test(dataToUseMatrix, newTestStat, n_resamples=1000)
print('exact p-value:', pTest.pvalue)
However, I'm currently getting the following error, despite the fact that np.array(sampleModes2) and np.array(sampleModes3) have the same shape (1000,):
Traceback (most recent call last):
pTest = permutation_test(dataToUseMatrix,newTestStat,n_resamples=1000)
pvalues = compare[alternative](null_distribution, observed)
pvalues_less = less(null_distribution, observed)
cmps = null_distribution <= observed + gamma
ValueError: operands could not be broadcast together with shapes (2,1000) (2,)
Does anyone see what the problem could be here?
A:
The problem was that ks_2samp returns a KstestResult object (a statistic/p-value pair) rather than a plain number; one pair per resample is where the (2, 1000) shape in the traceback comes from. If you want a numeric return value, you need to extract the statistic:
ks_2samp(np.array(sampleModes2), np.array(sampleModes3)).statistic
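To see the type issue in isolation, a small self-contained sketch (synthetic data, any recent SciPy):
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
a, b = rng.normal(size=1000), rng.normal(loc=0.5, size=1000)

res = ks_2samp(a, b)
print(type(res).__name__)  # KstestResult: a (statistic, pvalue) pair
print(res.statistic)       # a plain float, safe to return to permutation_test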
|
[
0
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0074565459_numpy_python.txt
|
Q:
Working with TIFFs (import, export) in Python using numpy
I need a python method to open and import TIFF images into numpy arrays so I can analyze and modify the pixel data and then save them as TIFFs again. (They are basically light intensity maps in greyscale, representing the respective values per pixel)
I couldn't find any documentation on PIL methods concerning TIFF. I tried to figure it out, but only got "bad mode" or "file type not supported" errors.
What do I need to use here?
A:
First, I downloaded a test TIFF image from this page called a_image.tif. Then I opened with PIL like this:
>>> from PIL import Image
>>> im = Image.open('a_image.tif')
>>> im.show()
This showed the rainbow image. To convert to a numpy array, it's as simple as:
>>> import numpy
>>> imarray = numpy.array(im)
We can see that the size of the image and the shape of the array match up:
>>> imarray.shape
(44, 330)
>>> im.size
(330, 44)
And the array contains uint8 values:
>>> imarray
array([[ 0, 1, 2, ..., 244, 245, 246],
[ 0, 1, 2, ..., 244, 245, 246],
[ 0, 1, 2, ..., 244, 245, 246],
...,
[ 0, 1, 2, ..., 244, 245, 246],
[ 0, 1, 2, ..., 244, 245, 246],
[ 0, 1, 2, ..., 244, 245, 246]], dtype=uint8)
Once you're done modifying the array, you can turn it back into a PIL image like this:
>>> Image.fromarray(imarray)
<Image.Image image mode=L size=330x44 at 0x2786518>
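To finish the round trip described in the question, the rebuilt image can be written back to disk:
>>> im2 = Image.fromarray(imarray)
>>> im2.save('a_image_modified.tif')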
A:
I use matplotlib for reading TIFF files:
import matplotlib.pyplot as plt
I = plt.imread(tiff_file)
and I will be of type ndarray.
According to the documentation though it is actually PIL that works behind the scenes when handling TIFFs as matplotlib only reads PNGs natively, but this has been working fine for me.
There's also a plt.imsave function for saving.
A:
You could also use GDAL to do this. I realize that it is a geospatial toolkit, but nothing requires you to have a cartographic product.
Link to precompiled GDAL binaries for windows (assuming windows here)
To access the array:
from osgeo import gdal

dataset = gdal.Open("path/to/dataset.tiff", gdal.GA_ReadOnly)
for x in range(1, dataset.RasterCount + 1):
    band = dataset.GetRasterBand(x)
    array = band.ReadAsArray()
A:
PyLibTiff worked better for me than PIL, which as of May 2022 still doesn't support color images with more than 8 bits per color.
from libtiff import TIFF

tif = TIFF.open('filename.tif')  # open tiff file in read mode
# read an image in the current TIFF directory as a numpy array
image = tif.read_image()

# read all images in a TIFF file:
for image in tif.iter_images():
    pass

tif = TIFF.open('filename.tif', mode='w')
tif.write_image(image)
You can install PyLibTiff with
pip3 install numpy libtiff
The readme of PyLibTiff also mentions the tifffile library but I haven't tried it.
A:
In case of image stacks, I find it easier to use scikit-image to read, and matplotlib to show or save. I have handled 16-bit TIFF image stacks with the following code.
from skimage import io
import matplotlib.pyplot as plt
# read the image stack
img = io.imread('a_image.tif')
# show the image
plt.imshow(img,cmap='gray')
plt.axis('off')
# save the image
plt.savefig('output.tif', transparent=True, dpi=300, bbox_inches="tight", pad_inches=0.0)
A:
You can also use pytiff of which I am the author.
import pytiff

with pytiff.Tiff("filename.tif") as handle:
    part = handle[100:200, 200:400]

# multipage tif
with pytiff.Tiff("multipage.tif") as handle:
    for page in handle:
        part = page[100:200, 200:400]
It's a fairly small module and may not have as many features as other modules, but it supports tiled TIFFs and BigTIFF, so you can read parts of large images.
A:
There is a nice package called tifffile which makes working with .tif or .tiff files very easy.
Install package with pip
pip install tifffile
Now, to read .tif/.tiff file in numpy array format:
from tifffile import tifffile
image = tifffile.imread('path/to/your/image')
# type(image) = numpy.ndarray
If you want to save a numpy array as a .tif/.tiff file:
tifffile.imwrite('my_image.tif', my_numpy_data, photometric='rgb')
or
tifffile.imsave('my_image.tif', my_numpy_data)
You can read more about this package here.
A:
Using cv2:
import cv2

image = cv2.imread('tiff_file.tif')
cv2.imshow('tif image', image)
cv2.waitKey(0)  # imshow needs a waitKey call before the window is drawn
A:
If you want to save a TIFF encoded as GeoTIFF, you can use the rasterio package.
A simple example:
import numpy as np
import rasterio

out = np.random.randint(low=10, high=20, size=(360, 720)).astype('float64')
new_dataset = rasterio.open('test.tiff', 'w', driver='GTiff',
                            height=out.shape[0], width=out.shape[1],
                            count=1, dtype=str(out.dtype),
                            )
new_dataset.write(out, 1)
new_dataset.close()
For more detail about writing a numpy array to GeoTIFF, see: https://gis.stackexchange.com/questions/279953/numpy-array-to-gtiff-using-rasterio-without-source-raster
A:
I recommend using the Python bindings to OpenImageIO; it's the standard for dealing with various image formats in the vfx world. I've often found it more reliable than PIL in reading various compression types.
import OpenImageIO as oiio
input = oiio.ImageInput.open("/path/to/image.tif")
A:
Another method of reading TIFF files is using the TensorFlow API:
import tensorflow as tf
import tensorflow_io as tfio

image = tf.io.read_file(image_path)
tf_image = tfio.experimental.image.decode_tiff(image)
print(tf_image.shape)
Output:
(512, 512, 4)
tensorflow documentation can be found here
For this module to work, a Python package called tensorflow-io has to be installed.
Although I couldn't find a way to look at the output tensor (after converting to an nd.array), as the output image had 4 channels. I tried to convert it using cv2.cvtColor() with the flag cv2.COLOR_BGRA2BGR after looking at this post, but still wasn't able to view the image.
|
[
136,
65,
21,
17,
13,
9,
7,
2,
1,
0,
0
] |
[
"no answers to this question did not work for me. so i found another way to view tif/tiff files:\nimport rasterio\nfrom matplotlib import pyplot as plt\nsrc = rasterio.open(\"ch4.tif\")\nplt.imshow(src.read(1), cmap='gray')\n\nthe code above will help you to view the tif files. also check below to be sure:\ntype(src.read(1)) #see that src.read(1) is a numpy array\n\n\nsrc.read(1) #prints the matrix\n\n"
] |
[
-1
] |
[
"numpy",
"python",
"python_imaging_library",
"tiff"
] |
stackoverflow_0007569553_numpy_python_python_imaging_library_tiff.txt
|
Q:
Understanding Principal Components Analyse (PCA) for Dimension Downscaling in EEG signals
I have read dozens of scientific articles, and wherever a large number of channels is used to read EEG signals, the Principal Component Analysis (PCA) method is used to reduce the dimension.
I have read the theory behind Principal Component Analysis many times and think I understand how it works: each component is a new coordinate axis for the data.
I implemented this method in Python for my data, but I ended up with new EEG data modeled relative to the new components (coordinates), and the data became larger since I used a few components.
Therefore, why is this method considered to reduce the dimension, given that we end up with even more data if we use several components?
I attached an image for my 9 channels and the result after PCA:
my data before and after PCA
I cannot understand: in the end PCA doesn't decrease the dimension of the EEG data. Where am I wrong?
A:
Note that I am no expert when it comes to using PCA for EEG signal analysis.
PCA does decrease dimensions IF you choose to. Usually only first few components are important and you can discard all the others. How many - it depends on your needs.
PCA creates new, independent dimensions, with first being most important, "describing" as much of the data variance as it is possible for a single variable to describe. Second PCA dimension is second important and so on.
In your case it may be ok to keep only first 3-4 PCA components. But I am guessing. Best is to check this in some further step, for example checking "how many components can I remove before classification task accuracy drops" or something like that. You could also take a look at signals reconstructed using inverse PCA transform (available in sklearn)
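To make this concrete, here is a minimal sketch with scikit-learn (an illustration added here, not part of the original answer; the 4-component choice and the (samples, channels) data shape are assumptions):
import numpy as np
from sklearn.decomposition import PCA

eeg = np.random.randn(1000, 9)        # stand-in for EEG data: 1000 samples, 9 channels

pca = PCA(n_components=4)             # keep only the first 4 components
reduced = pca.fit_transform(eeg)      # shape (1000, 4): the dimension actually drops
print(pca.explained_variance_ratio_)  # variance "described" by each kept component

reconstructed = pca.inverse_transform(reduced)  # back to (1000, 9), with some loss
Note that PCA() without n_components keeps all 9 components, so nothing is discarded and the output is as large as the input, which may be what happened in the question.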
|
Understanding Principal Component Analysis (PCA) for Dimension Reduction in EEG signals
|
I have read dozens of scientific articles, and wherever a large number of channels are used to read EEG signals, the Principal Component Analysis method is used to reduce the dimension.
I have read the theory behind Principal Component Analysis many times and think I understand how it works: each component is an axis of a new coordinate system for the data.
I implemented this method in Python for my data, but I ended up with new EEG data modeled relative to the new components (coordinates), and the data became larger even though I used only a few components.
Why, then, is this method considered to reduce the dimension, if we end up with even more data when we use several components?
I attached an image of my 9 channels and the result after PCA
my data before and after PCA
I cannot understand it: in the end, PCA doesn't seem to decrease the dimension of the EEG data. Where am I wrong?
|
[
"Note that I am no expert as it comes to using PCA for EEG signals analysis.\nPCA does decrease dimensions IF you choose to. Usually only first few components are important and you can discard all the others. How many - it depends on your needs.\nPCA creates new, independent dimensions, with first being most important, \"describing\" as much of the data variance as it is possible for a single variable to describe. Second PCA dimension is second important and so on.\nIn your case it may be ok to keep only first 3-4 PCA components. But I am guessing. Best is to check this in some further step, for example checking \"how many components can I remove before classification task accuracy drops\" or something like that. You could also take a look at signals reconstructed using inverse PCA transform (available in sklearn)\n"
] |
[
0
] |
[] |
[] |
[
"pca",
"python",
"signal_processing"
] |
stackoverflow_0074566258_pca_python_signal_processing.txt
|
Q:
How to insert multiple values at a time in Auto incremented column using SQLAlchemy
I am using Postgres database and sqlalchemy core. I have below table
CREATE TABLE IF NOT EXISTS id_generation (id SERIAL PRIMARY KEY)
and I am trying to insert multiple values in the id column using the below query. number_of_ids can be in multiples of 1000.
for _ in range(number_of_ids):
conn.execute('INSERT INTO id_generation VALUES (DEFAULT)')
I just want to know whether there is any other way to write this query using SQLAlchemy Core. How can we optimize it?
A:
Assuming that you are using psycopg2 as the connector, you can pass a list of empty dictionaries to conn.execute, corresponding to the number of rows to be inserted, and SQLAlchemy will emit a single INSERT statement with multiple VALUES clauses.
import sqlalchemy as sa
...
vals = [{} for _ in range(100)]
with engine.begin() as conn:
conn.execute(sa.text("INSERT INTO id_generation VALUES (DEFAULT)"), vals)
SQLAlchemy makes use of psycopg2's fast execution helpers, as documented here.
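To verify the batching, a quick sketch (an addition; the connection URL is a placeholder) is to create the engine with echo=True and watch the emitted SQL:
import sqlalchemy as sa

engine = sa.create_engine("postgresql+psycopg2://user:pw@localhost/db", echo=True)

vals = [{} for _ in range(1000)]
with engine.begin() as conn:
    conn.execute(sa.text("INSERT INTO id_generation VALUES (DEFAULT)"), vals)
With echo on, the log should show one executemany-style statement rather than 1000 separate INSERTs.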
|
How to insert multiple values at a time in Auto incremented column using SQLAlchemy
|
I am using Postgres database and sqlalchemy core. I have below table
CREATE TABLE IF NOT EXISTS id_generation (id SERIAL PRIMARY KEY)
and I am trying to insert multiple values in the id column using the below query. number_of_ids can be in multiples of 1000.
for _ in range(number_of_ids):
conn.execute('INSERT INTO id_generation VALUES (DEFAULT)')
I just want to know whether there is any other way to write this query using SQLAlchemy Core. How can we optimize it?
|
[
"Assuming that you are using psycopg2 as the connector, you can pass a list of empty dictionaries to conn.execute, corresponding to the number of rows to be inserted, and SQLAlchemy will emit a single INSERT statement with multiple VALUES clauses.\nimport sqlalchemy as sa\n\n...\n\nvals = [{} for _ in range(100)]\n\nwith engine.begin() as conn:\n conn.execute(sa.text(INSERT INTO id_generation VALUES (DEFAULT)'), vals)\n\nSQLAlchemy makes use of psycopg2's fast execution helpers, as documented here.\n"
] |
[
0
] |
[] |
[] |
[
"postgresql",
"python",
"sqlalchemy"
] |
stackoverflow_0074566383_postgresql_python_sqlalchemy.txt
|
Q:
Groupby with brackets vs. Groupby with ".agg"?
what is the exact difference between
data_sex1= data_suicide.groupby(by=["year", "sex"])["suicides_no"].sum()
and
data_sex2 = data_suicide.groupby(by=['year', 'sex']).agg({'suicides_no': ['sum']})
?
My problem is that I have to modify both to plot them in seaborn.
The line for seaborn is this
sns.barplot(x="year", y="suicides_no", hue="sex", data=data_sex1)
sns.barplot(x="year", y="suicides_no", hue="sex", data=data_sex2)
Now for the upper example data_sex1 I have to add following line to make the plot work. Because that is the way to not only reset the indexes, but also add the name "suicides_no" to the column. The column had no name before.
data_sex1=data_sex1.reset_index()
Now for the second example data_sex2 I have to add following code to make the plot work. And please also note, that if I reset the index first and then rename the column, it also give me an error.
data_sex2.columns=["suicide_no"]
data_sex2=data_sex2.reset_index()
So I hope someone can really help me with this super confusing problem. Thanks so much!
A:
You would like to use agg when you want to apply different aggregation functions for different columns:
df.groupby('id').agg({'x': ['mean', 'sum', 'max'], 'y': ['sum', 'min']})
The other option gives you less flexibility in terms of the columns / aggregation logic to apply.
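To see the structural difference directly, here is a small sketch with made-up data; the two different return shapes are exactly what forced the two different fixes in the question:
import pandas as pd

df = pd.DataFrame({"year": [2000, 2000, 2001, 2001],
                   "sex": ["m", "f", "m", "f"],
                   "suicides_no": [10, 7, 12, 9]})

s1 = df.groupby(["year", "sex"])["suicides_no"].sum()
print(type(s1))    # a Series named 'suicides_no' with a MultiIndex: reset_index() suffices

d2 = df.groupby(["year", "sex"]).agg({"suicides_no": ["sum"]})
print(d2.columns)  # MultiIndex columns ('suicides_no', 'sum'): must be flattened first

flat1 = s1.reset_index()
flat2 = d2.droplevel(1, axis=1).reset_index()
In short: the bracket version returns a Series, while agg with a list of functions returns a DataFrame with two-level column labels, which seaborn cannot use until you flatten them.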
|
Groupby with brackets vs. Groupby with ".agg"?
|
what is the exact difference between
data_sex1= data_suicide.groupby(by=["year", "sex"])["suicides_no"].sum()
and
data_sex2 = data_suicide.groupby(by=['year', 'sex']).agg({'suicides_no': ['sum']})
?
My problem is that I have to modify both to plot them in seaborn.
The line for seaborn is this
sns.barplot(x="year", y="suicides_no", hue="sex", data=data_sex1)
sns.barplot(x="year", y="suicides_no", hue="sex", data=data_sex2)
Now for the upper example data_sex1 I have to add following line to make the plot work. Because that is the way to not only reset the indexes, but also add the name "suicides_no" to the column. The column had no name before.
data_sex1=data_sex1.reset_index()
Now for the second example data_sex2 I have to add following code to make the plot work. And please also note, that if I reset the index first and then rename the column, it also give me an error.
data_sex2.columns=["suicide_no"]
data_sex2=data_sex2.reset_index()
So I hope someone can really help me with this super confusing problem. Thanks so much!
|
[
"You would like to use agg when you want to apply different aggregation functions for different columns:\ndf.groupby('id').agg({'x': ['mean', 'sum', 'max'], 'y': ['sum', 'min']})\n\nThe other option gives you less flexibility in terms of columns / aggregation logics to apply.\n"
] |
[
1
] |
[] |
[] |
[
"group_by",
"pandas",
"python"
] |
stackoverflow_0074566528_group_by_pandas_python.txt
|
Q:
How to return forms to app in method POST
I would like to send data using the form to the application. Unfortunately something is wrong. I'm still wondering if it's
name = request.form
is good because it doesn't even show me
print(name).
@app.route('/register', methods=['GET', 'POST'])
def register():
form = RegisterForm(request.form)
if request.method == 'POST' and form.validate():
name = request.form("name")
email = request.form("email")
username = request.form("username")
password = request.form("password")
cur = mysql.connect.cursor()
cur.execute("INSERT INTO users(name, email, username, password) VALUES(%s, %s, %s, %s)",
(name, email, username, password))
cur.connection.commit()
print(name)
cur.close()
return "true"
I tried to give
name = request.form.get("name")
but that doesn't work either
A:
You have made a mistake with the syntax: request.form is a dict-like object, not a callable, so you cannot read a value using form().
Try the following:
name = request.form["name"]
or
name = request.form.get("name", fallBackValue)
I hope this helps
|
How to return forms to app in method POST
|
I would like to send data using the form to the application. Unfortunately something is wrong. I'm still wondering if it's
name = request.form
is good because it doesn't even show me
print(name).
@app.route('/register', methods=['GET', 'POST'])
def register():
form = RegisterForm(request.form)
if request.method == 'POST' and form.validate():
name = request.form("name")
email = request.form("email")
username = request.form("username")
password = request.form("password")
cur = mysql.connect.cursor()
cur.execute("INSERT INTO users(name, email, username, password) VALUES(%s, %s, %s, %s)",
(name, email, username, password))
cur.connection.commit()
print(name)
cur.close()
return "true"
I tried to give
name = request.form.get("name")
but that doesn't work either
|
[
"You have made a mistake with the syntax. the form is not callable so you cannot call it using form().\nTry the following:\nname = request.form[\"name\"]\n\nor\nname = request.form.get(\"name\", fallBackValue)\n\nI hope this helps\n"
] |
[
1
] |
[] |
[] |
[
"flask",
"http",
"post",
"python"
] |
stackoverflow_0074566123_flask_http_post_python.txt
|
Q:
For loop not working stating the endpoint is a float
So for context, I'm working on a program that requires the Gauss formula. It's used to find for example, 5 + 4 + 3 + 2 + 1, or, 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1.
The formula is (n*(n + 1))/2,
I tried to incorporate this into a for loop, but I'm getting an error stating:
"'float' object cannot be interpreted as an integer"
This is my code:
# Defining Variables #
print("Give me a start")
x = int(input())
print("Give me a delta")
y = int(input())
print("Give me an amount of rows")
z = int(input())
archive_list = []
f = z + 1
stop = z*f
final_stop = stop/2
# Main Logic #
for loop in range(1,final_stop,1):
print("hi")
I would appreciate a response on why it wasn't working as well as a fixed code.
Thanks in advance!
A:
As @ForceBru noted in his excellent comment, the problem is that the endpoint final_stop is a float, instead of an int.
The reason is that when computing it you used true division (/), which always produces a float, instead of floor division (//).
If you replace
final_stop = stop/2
with
final_stop = stop//2
then it should work fine.
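As a quick sketch of the fixed computation (the value of z is just an example):
z = 5
final_stop = z * (z + 1) // 2   # floor division keeps the endpoint an int
print(final_stop)               # 15

for loop in range(1, final_stop, 1):
    print("hi")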
|
For loop not working stating the endpoint is a float
|
So for context, I'm working on a program that requires the Gauss formula. It's used to find for example, 5 + 4 + 3 + 2 + 1, or, 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1.
The formula is (n*(n + 1))/2,
I tried to incorporate this into a for loop, but I'm getting an error stating:
"'float' object cannot be interpreted as an integer"
This is my code:
# Defining Variables #
print("Give me a start")
x = int(input())
print("Give me a delta")
y = int(input())
print("Give me an amount of rows")
z = int(input())
archive_list = []
f = z + 1
stop = z*f
final_stop = stop/2
# Main Logic #
for loop in range(1,final_stop,1):
print("hi")
I would appreciate a response on why it wasn't working as well as a fixed code.
Thanks in advance!
|
[
"As @ForceBru noted in his excellent comment, the problem is that the endpoint final_stop is a float, instead of an int.\nThe reason is because when computing it you used a single / instead of double.\nIf you replace\nfinal_stop = stop/2\nwith\nfinal_stop = stop//2,\nthen it should work fine.\n"
] |
[
1
] |
[] |
[] |
[
"loops",
"python"
] |
stackoverflow_0074566620_loops_python.txt
|
Q:
Array length does not match index when creating a Dataframe
I am constructing a dataframe from:
datetoday = (pd.to_datetime(files[-1]['file_published'], format='%d.%m.%Y %H:%M')).strftime('%Y-%m-%d')
datetoday
Out[66]: '2022-11-23'
dates = pd.Series(np.arange(1, 337, 1))
dates
Out[68]:
0 1
1 2
2 3
3 4
4 5
...
331 332
332 333
333 334
334 335
335 336
Length: 336, dtype: int64
And then adding a data column:
data = pd.read_excel(files[0]['file_path'], sheet_name='Sheet1', engine='openpyxl').iloc[1:, 3:].astype(
float).dropna(axis=1).values.flatten()
len(data)
Out[73]: 336
But when I create the final dataframe:
df = pd.DataFrame({'datecreated': datetoday, 'timestamp': dates, 'ipto_weekly_forecast': data})
I get the following error:
ValueError: array length 0 does not match index length 336
The strange thing is that the error happens on Jupyter but locally on PyCharm the df gets built without issues.
How can I fix this?
A:
Conjecture
The older version of pandas you are using on Jupyter is fussing about the way you are specifying the datecreated column using a scalar value (note for the other two columns, you specified using lists/arrays).
Solution
The following fix will work on any version of pandas (given that the dates and data lists are, in fact, length 336):
df = pd.DataFrame({
'datecreated': [datetoday]*336,
'timestamp': dates,
'ipto_weekly_forecast': data
})
Here is an example using mocked data.
dates = pd.Series(np.arange(1, 337, 1))
data = pd.Series(np.arange(1, 337, 1))
datetoday = '2022-11-23'
df = pd.DataFrame({'datecreated': [datetoday]*336, 'timestamp': dates, 'ipto_weekly_forecast': data})
Explanation
The solution works because the expression [datetoday]*336 evaluates to a list of length 336 with every value equal to datetoday. Now, we are providing pandas with data of the same length for every column.
Note: it was my intention to provide this information in more condensed format as a comment, but I do not have enough reputation to comment.
|
Array length does not match index when creating a Dataframe
|
I am constructing a dataframe from:
datetoday = (pd.to_datetime(files[-1]['file_published'], format='%d.%m.%Y %H:%M')).strftime('%Y-%m-%d')
datetoday
Out[66]: '2022-11-23'
dates = pd.Series(np.arange(1, 337, 1))
dates
Out[68]:
0 1
1 2
2 3
3 4
4 5
...
331 332
332 333
333 334
334 335
335 336
Length: 336, dtype: int64
And then adding a data column:
data = pd.read_excel(files[0]['file_path'], sheet_name='Sheet1', engine='openpyxl').iloc[1:, 3:].astype(
float).dropna(axis=1).values.flatten()
len(data)
Out[73]: 336
But when I create the final dataframe:
df = pd.DataFrame({'datecreated': datetoday, 'timestamp': dates, 'ipto_weekly_forecast': data})
I get the following error:
ValueError: array length 0 does not match index length 336
The strange thing is that the error happens on Jupyter but locally on PyCharm the df gets built without issues.
How can I fix this?
|
[
"Conjecture\nThe older version of pandas you are using on Jupyter is fussing about the way you are specifying the datecreated column using a scalar value (note for the other two columns, you specified using lists/arrays).\nSolution\nThe following fix will work on any version of pandas (given that the dates and data lists are, in fact, length 336):\ndf = pd.DataFrame({\n 'datecreated': [datetoday]*336,\n 'timestamp': dates,\n 'ipto_weekly_forecast': data\n})\n\nHere is an example using mocked data.\ndates = pd.Series(np.arange(1, 337, 1))\ndata = pd.Series(np.arange(1, 337, 1))\ndatetoday = '2022-11-23'\n\ndf = pd.DataFrame({'datecreated': [datetoday]*336, 'timestamp': dates, 'ipto_weekly_forecast': data})\n\nExplanation\nThe solution works because the expression [datetoday]*336 evaluates to a list of length 336 with every value equal to datetoday. Now, we are providing pandas with data of the same length for every column.\n\nNote: it was my intention to provide this information in more condensed format as a comment, but I do not have enough reputation to comment.\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074566398_pandas_python.txt
|
Q:
Appending data to empty pandas dataframe
To start, I am very new to Python and stackoverflow. I am sorry if this question has come up before, but I could not find it in the forum. Could someone please explain why this output happens when I try to append data to a Dataframe.
A B C D E F G 0 1 2 3 4 5 6
0 NaN NaN NaN NaN NaN NaN NaN 1.0 2.0 3.0 4.0 5.0 6.0 7.0
dftest = pd.DataFrame(columns=["A","B","C","D","E","F","G"])
testdata = [1,2,3,4,5,6,7]
dftest = dftest.append([testdata])
I want the testdata to appear under the columns like this.
A B C D E F G
1 2 3 4 5 6 7
A:
Appending a bare list creates new integer column labels 0-6 that don't line up with A-G, which is why you get the NaN-padded row you showed. Assign by position with .loc instead (note that DataFrame.append itself was removed in pandas 2.0):
import pandas as pd
dftest = pd.DataFrame(columns=["A","B","C","D","E","F","G"])
testdata = [1,2,3,4,5,6,7]

dftest.loc[len(dftest)] = testdata
dftest
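Since append is gone in current pandas, a concat-based sketch (an alternative, not part of the original answer) achieves the same result:
import pandas as pd

dftest = pd.DataFrame(columns=["A","B","C","D","E","F","G"])
testdata = [1,2,3,4,5,6,7]

# build a one-row frame with the right column labels, then concatenate
row = pd.DataFrame([testdata], columns=dftest.columns)
dftest = pd.concat([dftest, row], ignore_index=True)
print(dftest)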
|
Appending data to empty pandas dataframe
|
To start, I am very new to Python and stackoverflow. I am sorry if this question has come up before, but I could not find it in the forum. Could someone please explain why this output happens when I try to append data to a Dataframe.
A B C D E F G 0 1 2 3 4 5 6
0 NaN NaN NaN NaN NaN NaN NaN 1.0 2.0 3.0 4.0 5.0 6.0 7.0
dftest = pd.DataFrame(columns=["A","B","C","D","E","F","G"])
testdata = [1,2,3,4,5,6,7]
dftest = dftest.append([testdata])
I want the testdata to appear under the columns like this.
A B C D E F G
1 2 3 4 5 6 7
|
[
"Try this?\nimport pandas as pd\ndftest = pd.DataFrame(columns=[\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\"])\ntestdata = [1,2,3,4,5,6,7]\n\n\ndftest.loc[len(dftest)] = testdata\ndftest\n\n"
] |
[
0
] |
[] |
[] |
[
"append",
"pandas",
"python"
] |
stackoverflow_0074566714_append_pandas_python.txt
|
Q:
WTForms Field shows up when run, but has Unbound problems
I'm new to Flask and JS, so I'm really not sure what the problem is here.
app.py
@app.route('/email', methods=["GET", "POST"])
def email():
email_form = EmailForm(csrf_enabled=False)
return render_template("email-form.html", template_form=email_form, action='/appliance2', method='POST')
forms.py (I created that validator using the template from the wtForms docs)
class DBPresenceCheck(object):
def __init__(self, table, message="Field is invalid"):
self.table = table
self.message = message
def __call__(self, form, field):
presence = Connector().query('SELECT 1 FROM %(table)s WHERE %(col_name)s = %(data)s LIMIT 1;',
{'col_name': field.label, 'data': field.data, 'table': self.table})
if presence == 1:
raise ValidationError(self.message)
class EmailForm(FlaskForm):
title = "Enter Household Info"
subtitle = "Please enter your email address:"
email_input = StringField("email", validators=[email(),
DBPresenceCheck('Household', 'That email is already present in the database.')])
# print(email_input.data)
submit = SubmitField("Submit")
email-form.html
{% extends "base.html" %}
{% block content%}
<div class="container py-4">
<div class="p-5 mb-4 bg-light rounded-3">
<h2 class="display-5 fw-bold">{{ template_form.title }}</h2>
<p class="col-md-8 fs-4">{{ template_form.subtitle }}</p>
<form action="{{ action }}" method="{{ method }}">
<div id="main_elements">
{{ template_form['email_input']() }}
</div>
<div id="submit_element">
<br>
{{ template_form["submit"](class_="btn btn-primary") }}
</div>
</form>
</div>
</div>
{%endblock%}
I first realized something was wrong when I was testing the validation, and neither validator ever threw. I added the print statement in forms, and now, when I use flask run, I get the following error.
Traceback (most recent call last):
File "C:\Languages\Python\3.9\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Languages\Python\3.9\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\Scripts\flask.exe\__main__.py", line 7, in <module>
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 1047, in main
cli.main()
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 911, in run_command
raise e from None
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 897, in run_command
app = info.load_app()
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 312, in load_app
app = locate_app(import_name, None, raise_if_not_found=False)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 218, in locate_app
__import__(module_name)
File "C:\Users\askat\PycharmProjects\cs6400-2022-03-Team060\Phase_3\app.py", line 4, in <module>
from forms import *
File "C:\Users\askat\PycharmProjects\cs6400-2022-03-Team060\Phase_3\forms.py", line 33, in <module>
class EmailForm(AddForm):
File "C:\Users\askat\PycharmProjects\cs6400-2022-03-Team060\Phase_3\forms.py", line 37, in EmailForm
print(email_input.data)
AttributeError: 'UnboundField' object has no attribute 'data'
If the print statement is just print(email_input), then when I run it, I get the following output
<UnboundField(StringField, ('email',), {'validators': [<wtforms.validators.Email object at 0x0000024145A4F460>, <forms.DBPresenceCheck object at 0x0000024145A4F610>]})>
but no errors are thrown. It seems strange to me that the error is thrown before the field is even created.
A:
In the validators list you need to pass in an instance of the Email class; the lowercase email() in your code is not the WTForms validator. So your field should read:
from wtforms.validators import Email

email_input = StringField("email", validators=[Email(), ...

and don't forget to install the email_validator package (pip install email-validator), since the Email validator no longer bundles its own email parsing. The AttributeError in your traceback is a separate issue: at class-definition time fields are still UnboundField objects, so they have no .data until the form is instantiated; remove the print(email_input.data) from the class body.
|
WTForms Field shows up when run, but has Unbound problems
|
I'm new to Flask and JS, so I'm really not sure what the problem is here.
app.py
@app.route('/email', methods=["GET", "POST"])
def email():
email_form = EmailForm(csrf_enabled=False)
return render_template("email-form.html", template_form=email_form, action='/appliance2', method='POST')
forms.py (I created that validator using the template from the wtForms docs)
class DBPresenceCheck(object):
def __init__(self, table, message="Field is invalid"):
self.table = table
self.message = message
def __call__(self, form, field):
presence = Connector().query('SELECT 1 FROM %(table)s WHERE %(col_name)s = %(data)s LIMIT 1;',
{'col_name': field.label, 'data': field.data, 'table': self.table})
if presence == 1:
raise ValidationError(self.message)
class EmailForm(FlaskForm):
title = "Enter Household Info"
subtitle = "Please enter your email address:"
email_input = StringField("email", validators=[email(),
DBPresenceCheck('Household', 'That email is already present in the database.')])
# print(email_input.data)
submit = SubmitField("Submit")
email-form.html
{% extends "base.html" %}
{% block content%}
<div class="container py-4">
<div class="p-5 mb-4 bg-light rounded-3">
<h2 class="display-5 fw-bold">{{ template_form.title }}</h2>
<p class="col-md-8 fs-4">{{ template_form.subtitle }}</p>
<form action="{{ action }}" method="{{ method }}">
<div id="main_elements">
{{ template_form['email_input']() }}
</div>
<div id="submit_element">
<br>
{{ template_form["submit"](class_="btn btn-primary") }}
</div>
</form>
</div>
</div>
{%endblock%}
I first realized something was wrong when I was testing the validation, and neither validator ever threw. I added the print statement in forms, and now, when I use flask run, I get the following error.
Traceback (most recent call last):
File "C:\Languages\Python\3.9\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Languages\Python\3.9\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\Scripts\flask.exe\__main__.py", line 7, in <module>
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 1047, in main
cli.main()
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 911, in run_command
raise e from None
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 897, in run_command
app = info.load_app()
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 312, in load_app
app = locate_app(import_name, None, raise_if_not_found=False)
File "C:\Users\askat\.virtualenvs\askat-YHGTcgZo\lib\site-packages\flask\cli.py", line 218, in locate_app
__import__(module_name)
File "C:\Users\askat\PycharmProjects\cs6400-2022-03-Team060\Phase_3\app.py", line 4, in <module>
from forms import *
File "C:\Users\askat\PycharmProjects\cs6400-2022-03-Team060\Phase_3\forms.py", line 33, in <module>
class EmailForm(AddForm):
File "C:\Users\askat\PycharmProjects\cs6400-2022-03-Team060\Phase_3\forms.py", line 37, in EmailForm
print(email_input.data)
AttributeError: 'UnboundField' object has no attribute 'data'
If the print statement is just print(email_input), then when I run it, I get the following output
<UnboundField(StringField, ('email',), {'validators': [<wtforms.validators.Email object at 0x0000024145A4F460>, <forms.DBPresenceCheck object at 0x0000024145A4F610>]})>
but no errors are thrown. It seems strange to me that the error is thrown before the field is even created.
|
[
"In the validators list you need to pass in an instance of class Email. So you field should read:\nfrom wtforms.validators import Email\n\nemail_input = StringField(\"email\", validators=[Email(), ...\n\nand don't forget to install the email package as Flask-WTF doesn't include anymore the Email in their validators.\n"
] |
[
0
] |
[] |
[] |
[
"flask",
"flask_wtforms",
"python",
"windows",
"wtforms"
] |
stackoverflow_0074565824_flask_flask_wtforms_python_windows_wtforms.txt
|
Q:
how to distinguish axes between image, line plot and colorbar?
N.B.: I have edited the question as it was probably unclear: I am looking for the best method to understand the type of plot in a given axis.
QUESTION:
I am trying to make a generic function which can arrange multiple figures as subplots.
As I loop over the subplots to set some properties (e.g. axis range) iterating over fig.axes, I need to understand which type every plot is in order to determine which properties I want to set for each of them (e.g. I want to set x range on images and line plots, but not on colorbar, otherwise my plot will explode).
My question is then how I can distinguish between different types.
I tried to play with try and except and select on the basis of different properties for different plot types, but they seem to be the same for all of them, so, at the moment, the best way I found is to check the content of each axis: in particular ax.images is a non empty list if a plot is an image, and ax.lines is not empty if it is a line plot, (and a colorbar has both empty).
This works for simple plots, but I wonder if this is still the best way and still working for more complex cases (e.g. insets, overlapped lines and images, subclasses)?
This is just an example to illustrate how the different type of plots can be accessed, with the following code creating three axes l, i and cb (respectively line, image, colorbar):
# create test figure
plt.figure()
b = np.arange(12).reshape([4,3])
plt.subplot(121)
plt.plot([1,2,3],[4,5,6])
plt.subplot(122)
plt.imshow(b)
plt.colorbar()
# create test objects
ax=plt.gca()
fig=plt.gcf()
l,i,cb = fig.axes
# do a simple test, images are different:
for o in l,i,cb: print(len(o.images))
# this also doesn't work in finding properties not in common between lines and colobars, gives empty list.
[a for a in dir(l) if a not in dir(cb)]
A:
I have to remind you that
Matplotlib provides you with many different container objects,
You can store the Axes destination in a list, or a dictionary, when you use it — you can even say ax.ax_type = 'lineplot'.
That said, e.g.,
from matplotlib.pyplot import subplots, plot
fig, ax = subplots()
plot((1, 2), (2, 1))
...
axes_types = []
for ax_i in fig.axes:
try:
getattr(ax_i, 'get_clabel')
axes_types.append('colorbar')
except AttributeError:
axes_types.append('lineplot')
...
In other words, choose a method that is unique to each of the different types you're testing and check if it's available.
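Putting that idea into one helper, here is a sketch (it relies on matplotlib labelling the axes it creates for a colorbar '<colorbar>', plus the ax.images / ax.lines checks from the question):
def classify_axes(ax):
    if ax.get_label() == '<colorbar>':  # matplotlib sets this label itself
        return 'colorbar'
    if ax.images:                       # non-empty when imshow() drew here
        return 'image'
    if ax.lines:                        # non-empty when plot() drew here
        return 'lineplot'
    return 'other'

for ax_i in fig.axes:
    print(classify_axes(ax_i))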
A:
After creating the image above in IPython
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import ScalarMappable
fig, ax = plt.subplots()
ax.imshow(((0,1),(2,3)))
ax.scatter((0,1),(0,1), fc='w', ec='k')
ax.plot((0,1),(0,1))
fig.colorbar(ScalarMappable(), ax=ax)
plt.show()
I tried to investigate
In [48]: fig.axes
Out[48]: [<AxesSubplot:>, <AxesSubplot:label='<colorbar>'>]
I can recognize that one of the two axes is a colorbar — but it's easy to inspect the content of the individual axes
In [49]: fig.axes[0]._children
Out[49]:
[<matplotlib.image.AxesImage at 0x7fad9dda2b30>,
<matplotlib.collections.PathCollection at 0x7fad9dad04f0>,
<matplotlib.lines.Line2D at 0x7fad9dad09d0>]
In [50]: fig.axes[1]._children
Out[50]:
[<matplotlib.patches.Polygon at 0x7fad9db525f0>,
<matplotlib.collections.LineCollection at 0x7fad9db52830>,
<matplotlib.collections.QuadMesh at 0x7fad9dad2320>]
A:
You could analyze the types of objects drawn on axes using something like get_whats_on_axis below.
In your example, its outputs will be like
[<class 'matplotlib.lines.Line2D'>]
[<class 'matplotlib.image.AxesImage'>]
[<class 'matplotlib.patches.Polygon'>, <class 'matplotlib.collections.LineCollection'>, <class 'matplotlib.collections.QuadMesh'>]
Note that this function might be too restrictive, i.e. it will eliminate the text objects added by the user, as well as the text objects used to display axes labels etc.
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.spines import Spine as plt_spine
from matplotlib.axis import XAxis as plt_xaxis, YAxis as plt_yaxis
from matplotlib.text import Text as plt_text
from matplotlib.patches import Rectangle as plt_rectangle
def get_whats_on_axis(ax):
result = [type(c) for c in ax.get_children()
if not isinstance(c, (plt_spine, plt_xaxis, plt_yaxis, plt_text, plt_rectangle))]
return result
fig = plt.figure()
ax1 = plt.subplot(121)
ax1.plot([1,2,3],[4,5,6])
ax2 = plt.subplot(122)
plt.imshow(np.arange(12).reshape([4,3]))
cbar = plt.colorbar()
ax3 = cbar.ax
# create test objects
ax=plt.gca()
fig=plt.gcf()
l,i,cb = fig.axes
# do a simple test, images are different:
print('original method')
for o in l,i,cb:
print(len(o.images), o)
print('suggested method')
for o in (ax1, ax2, ax3):
print(get_whats_on_axis(o))
# this also doesn't work in finding properties not in common between lines and colobars, gives empty list.
[a for a in dir(l) if a not in dir(cb)]
plt.show()
|
how to distinguish axes between image, line plot and colorbar?
|
N.B.: I have edited the question as it was probably unclear: I am looking for the best method to understand the type of plot in a given axis.
QUESTION:
I am trying to make a generic function which can arrange multiple figures as subplots.
As I loop over the subplots to set some properties (e.g. axis range) iterating over fig.axes, I need to understand which type every plot is in order to determine which properties I want to set for each of them (e.g. I want to set x range on images and line plots, but not on colorbar, otherwise my plot will explode).
My question is then how I can distinguish between different types.
I tried to play with try and except and select on the basis of different properties for different plot types, but they seem to be the same for all of them, so, at the moment, the best way I found is to check the content of each axis: in particular ax.images is a non empty list if a plot is an image, and ax.lines is not empty if it is a line plot, (and a colorbar has both empty).
This works for simple plots, but I wonder if this is still the best way and still working for more complex cases (e.g. insets, overlapped lines and images, subclasses)?
This is just an example to illustrate how the different type of plots can be accessed, with the following code creating three axes l, i and cb (respectively line, image, colorbar):
# create test figure
plt.figure()
b = np.arange(12).reshape([4,3])
plt.subplot(121)
plt.plot([1,2,3],[4,5,6])
plt.subplot(122)
plt.imshow(b)
plt.colorbar()
# create test objects
ax=plt.gca()
fig=plt.gcf()
l,i,cb = fig.axes
# do a simple test, images are different:
for o in l,i,cb: print(len(o.images))
# this also doesn't work in finding properties not in common between lines and colobars, gives empty list.
[a for a in dir(l) if a not in dir(cb)]
|
[
"I have to remind you that\n\nMatplotib provides you with many different container objects,\nYou can store the Axes destination in a list, or a dictionary, when you use it — you can even say ax.ax_type = 'lineplot'.\n\nThat said, e.g.,\nfrom matplotlib.pyplot import subplots, plot\nfig, ax = subplots()\nplot((1, 2), (2, 1))\n...\naxes_types = []\nfor ax_i in fig.axes:\n try:\n ax_i.__getattr__('get_clabel')\n axes_types.append('colorbar')\n except AttributeError:\n axes_types.append('lineplot')\n...\n\nIn other word, chose a method that is unique to each one of the differnt types you're testing and check if it's available.\n",
"\nAfter creating the image above in IPython\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.cm import ScalarMappable\n\nfig, ax = plt.subplots()\nax.imshow(((0,1),(2,3)))\nax.scatter((0,1),(0,1), fc='w', ec='k')\nax.plot((0,1),(0,1))\nfig.colorbar(ScalarMappable(), ax=ax)\nplt.show()\n\nI tried to investigate\nIn [48]: fig.axes\nOut[48]: [<AxesSubplot:>, <AxesSubplot:label='<colorbar>'>]\n\nI can recognize that one of the two axes is a colorbar — but it's easy to inspect the content of the individual axes\nIn [49]: fig.axes[0]._children\nOut[49]: \n[<matplotlib.image.AxesImage at 0x7fad9dda2b30>,\n <matplotlib.collections.PathCollection at 0x7fad9dad04f0>,\n <matplotlib.lines.Line2D at 0x7fad9dad09d0>]\n\nIn [50]: fig.axes[1]._children\nOut[50]: \n[<matplotlib.patches.Polygon at 0x7fad9db525f0>,\n <matplotlib.collections.LineCollection at 0x7fad9db52830>,\n <matplotlib.collections.QuadMesh at 0x7fad9dad2320>]\n\n",
"You could analyze the types of objects drawn on axes using something like get_whats_on_axis below.\nIn your example, its outputs will be like\n[<class 'matplotlib.lines.Line2D'>]\n[<class 'matplotlib.image.AxesImage'>]\n[<class 'matplotlib.patches.Polygon'>, <class 'matplotlib.collections.LineCollection'>, <class 'matplotlib.collections.QuadMesh'>]\n\nNote that this function might be too restrictive, i.e. it will eliminate the text objects added by the user, as well as the text objects used to display axes labels etc.\nimport matplotlib.pyplot as plt\nimport numpy as np \nfrom matplotlib.spines import Spine as plt_spine\nfrom matplotlib.axis import XAxis as plt_xaxis, YAxis as plt_yaxis\nfrom matplotlib.text import Text as plt_text\nfrom matplotlib.patches import Rectangle as plt_rectangle\n\ndef get_whats_on_axis(ax):\n result = [type(c) for c in ax.get_children() \n if not isinstance(c, (plt_spine, plt_xaxis, plt_yaxis, plt_text, plt_rectangle))]\n return result\n\nfig = plt.figure()\nax1 = plt.subplot(121)\nax1.plot([1,2,3],[4,5,6])\nax2 = plt.subplot(122)\nplt.imshow(np.arange(12).reshape([4,3]))\ncbar = plt.colorbar()\nax3 = cbar.ax\n\n# create test objects\nax=plt.gca()\nfig=plt.gcf()\nl,i,cb = fig.axes\n\n# do a simple test, images are different:\nprint('original method')\nfor o in l,i,cb: \n print(len(o.images), o)\nprint('suggested method')\nfor o in (ax1, ax2, ax3): \n print(get_whats_on_axis(o))\n\n# this also doesn't work in finding properties not in common between lines and colobars, gives empty list.\n[a for a in dir(l) if a not in dir(cb)]\n\nplt.show()\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074551664_matplotlib_python.txt
|
Q:
DataFrame with multi-index - which team has a larger number?
After using .groupby(['match_id', 'team']).sum() I'm left with this multi-index dataframe:
visionScore
match_id team
EUW1_5671848066 blue 212
red 127
EUW1_5671858853 blue 146
red 170
EUW1_5672206092 blue 82
... ...
How do I add a new boolean column that will tell whether blue or red team has larger visionScore? If there's a draw, consider both teams to be winning.
A:
This would work:
import pandas as pd
df = pd.DataFrame(
{"visionScore": [212, 127, 146, 170, 82, 82]},
index=pd.MultiIndex.from_product([["EUW1_5671848066", "EUW1_5671858853", "EUW1_5672206092"], ["blue", "red"]], names=["match_id", "team"])
)
df["winner"] = df.groupby("match_id").transform(lambda x: [x[0] >= x[1], x[1] >= x[0]])
# df:
# visionScore winner
# match_id team
# EUW1_5671848066 blue 212 True
# red 127 False
# EUW1_5671858853 blue 146 False
# red 170 True
# EUW1_5672206092 blue 82 True
# red 82 True
though I can't help but think that there's a better way ,:)
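A possibly simpler variant (a sketch on the same frame): compare each row against its group's maximum, which also counts draws as wins for both teams:
df["winner"] = df["visionScore"] == df.groupby("match_id")["visionScore"].transform("max")
No lambda and no positional indexing, and it generalizes to groups of any size.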
A:
I did a bit of a workaround to get a slightly different but still sufficient result by joining the dataframe on itself and getting visionScore on the same row:
df = df.reset_index(drop=True)
df = df.loc[df['team'] == 'blue'].merge(df.loc[df['team'] == 'red'], on=['match_id'], suffixes=('_blue', '_red'))
df['blueWins'] = df['visionScore_blue'] >= df['visionScore_red']
results in:
blueWins visionScore_blue visionScore_red
False 156 189
False 83 90
True 142 102
True 185 161
True 147 94
...
|
DataFrame with multi-index - which team has a larger number?
|
After using .groupby(['match_id', 'team']).sum() I'm left with this multi-index dataframe:
visionScore
match_id team
EUW1_5671848066 blue 212
red 127
EUW1_5671858853 blue 146
red 170
EUW1_5672206092 blue 82
... ...
How do I add a new boolean column that will tell whether blue or red team has larger visionScore? If there's a draw, consider both teams to be winning.
|
[
"This would work:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\"visionScore\": [212, 127, 146, 170, 82, 82]},\n index=pd.MultiIndex.from_product([[\"EUW1_5671848066\", \"EUW1_5671858853\", \"EUW1_5672206092\"], [\"blue\", \"red\"]], names=[\"match_id\", \"team\"]) \n)\n\ndf[\"winner\"] = df.groupby(\"match_id\").transform(lambda x: [x[0] >= x[1], x[1] >= x[0]])\n\n# df:\n# visionScore winner\n# match_id team \n# EUW1_5671848066 blue 212 True\n# red 127 False\n# EUW1_5671858853 blue 146 False\n# red 170 True\n# EUW1_5672206092 blue 82 True\n# red 82 True\n\nthough I can't help but think that there's a better way ,:)\n",
"I did bit of a work around to get slightly different but still sufficient result by joining the dataframe on itself and getting visionScore on the same row:\ndf = df.reset_index(drop=True)\ndf = df.loc[df['team'] == 'blue'].merge(df.loc[df['team'] == 'red'], on=['match_id'], suffixes=('_blue', '_red'))\ndf['blueWins'] = df['visionScore_blue'] >= df['visionScore_red']\n\nresults in:\nblueWins visionScore_blue visionScore_red\nFalse 156 189\nFalse 83 90\nTrue 142 102\nTrue 185 161\nTrue 147 94\n...\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"multi_index",
"pandas",
"python"
] |
stackoverflow_0074566302_multi_index_pandas_python.txt
|
Q:
Python - Class from Serial
how do I create a class that inherits from Serial using the python serial module?
I need to create a module so another user can import to his code and create an object by just passing the COM.
This is my module, called py232
from serial import Serial
class serialPort(Serial):
def __init__(self, COM):
serial.__init__(self)
self.COMPort = COM
self.baudrate = 9600
self.bytesize = 8
self.stopbits = serial.STOPBITS_ONE
self.parity = serial.PARITY_NONE
self.timeout = 2
#serialPort = serial.Serial(port=self.COMPort, baudrate= self.baudrate, stopbits= self.stopbits, parity= self.parity, timeout= self.timeout)
self.createSerial()
def createSerial(self):
sPort = serial.Serial(port = self.COMPort, baudrate=self.baudrate, bytesize=self.bytesize, timeout= self.timeout, stopbits = self.stopbits, parity=self.parity)
return sPort
I am trying to use it in another script:
import py232
s = py232.serialPort('COM6')
s.sndSerial('test')
A:
Why don't you just create an instance of serial in a function. No need to create a sub-class.
from serial import Serial
def create_serial(port):
s = Serial(
port,
baudrate=9600,
bytesize=8,
timeout=2,
# etc
)
return s
It can then be used in another script.
import py232
s = py232.create_serial('COM6')
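If you really do want a subclass, as the question asks, a minimal sketch is to forward everything to the parent constructor via super() instead of setting attributes by hand (the port name and settings mirror the question's values; note that sndSerial is not a pyserial method, write is):
from serial import Serial, STOPBITS_ONE, PARITY_NONE

class SerialPort(Serial):
    def __init__(self, com):
        # let pyserial open and configure the port itself
        super().__init__(port=com, baudrate=9600, bytesize=8,
                         stopbits=STOPBITS_ONE, parity=PARITY_NONE,
                         timeout=2)

s = SerialPort('COM6')
s.write(b'test')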
|
Python - Class from Serial
|
how do I create a class that inherits from Serial using the python serial module?
I need to create a module so another user can import to his code and create an object by just passing the COM.
This is my module, called py232
from serial import Serial
class serialPort(Serial):
def __init__(self, COM):
serial.__init__(self)
self.COMPort = COM
self.baudrate = 9600
self.bytesize = 8
self.stopbits = serial.STOPBITS_ONE
self.parity = serial.PARITY_NONE
self.timeout = 2
#serialPort = serial.Serial(port=self.COMPort, baudrate= self.baudrate, stopbits= self.stopbits, parity= self.parity, timeout= self.timeout)
self.createSerial()
def createSerial(self):
sPort = serial.Serial(port = self.COMPort, baudrate=self.baudrate, bytesize=self.bytesize, timeout= self.timeout, stopbits = self.stopbits, parity=self.parity)
return sPort
I am trying to use it in another script:
import py232
s = py232.serialPort('COM6')
s.sndSerial('test')
|
[
"Why don't you just create an instance of serial in a function. No need to create a sub-class.\nfrom serial import Serial\n\ndef create_serial(port):\n s = Serial(\n port,\n baudrate=9600,\n bytesize=8,\n timeout=2,\n # etc\n )\n return s\n\nIt can then be used in another script.\nimport py232\n\ns = py232.create_serial('COM6')\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"serial_port"
] |
stackoverflow_0074566626_python_serial_port.txt
|
Q:
Is there any way to store instance variable name inside of instance string?
I'm making a Matrix class and when I pretty-print the matrix, I would like there to be the matrix name. So for example
Bob = Matrix("2&3&4@4&5&6@6&7&8")
print(Bob)
Output:
---Matrix Bob---
| 2 3 4 |
| 4 5 6 |
| 6 7 8 |
----------------
Is there any way to do this, without passing the name as a parameter?
I have no idea except for code file scraping aaand... not a good idea.
|
Is there any way to store instance variable name inside of instance string?
|
I'm making a Matrix class and when I pretty-print the matrix, I would like there to be the matrix name. So for example
Bob = Matrix("2&3&4@4&5&6@6&7&8")
print(Bob)
Output:
---Matrix Bob---
| 2 3 4 |
| 4 5 6 |
| 6 7 8 |
----------------
Is there any way to do this, without passing the name as a parameter?
I have no idea except for code file scraping aaand... not a good idea.
|
[] |
[] |
[
"Objects aren't able to see the the variable they're assigned to\n"
] |
[
-1
] |
[
"class",
"python"
] |
stackoverflow_0074566757_class_python.txt
|
Q:
Problem with two concurrent workers accessing mysql with tornado
I have this simple program composed of two workers: Worker1 inserts records that Worker2 should read. The problem is that during execution Worker2 reads 0 records. Launched separately from the CLI they work correctly. The "culprit" seems to be tornado. Any idea?
import time
import munch
from tornado import concurrent
from tornado.ioloop import IOLoop
import logging
import pymysql
config = munch.munchify({"mysql_host": "127.0.0.1", "mysql_port": 3306, "mysql_user": "", "mysql_password": "", "mysql_db": ""})
executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)
class Worker1():
CONST_TYPE_ID = 'worker1'
def __init__(self):
self.conn = pymysql.connect(host=config.mysql_host, port=config.mysql_port, user=config.mysql_user,
passwd=config.mysql_password, db=config.mysql_db, charset='UTF8MB4',
local_infile=True)
self.cursor = self.conn.cursor(pymysql.cursors.DictCursor)
def single_run(self):
print("Single run of worker:", self.CONST_TYPE_ID)
query = "insert into readwrite (timestamp) values (now())"
self.conn.ping(True)
self.cursor.execute(query)
self.conn.commit()
class Worker2():
CONST_TYPE_ID = 'worker2'
def __init__(self):
self.conn = pymysql.connect(host=config.mysql_host, port=config.mysql_port, user=config.mysql_user,
passwd=config.mysql_password, db=config.mysql_db, charset='UTF8MB4',
local_infile=True)
self.cursor = self.conn.cursor(pymysql.cursors.DictCursor)
def single_run(self):
print("Single run of worker:", self.CONST_TYPE_ID)
query = "select * from readwrite"
self.conn.ping(True)
self.cursor.execute(query)
for row in self.cursor.fetchall():
print(row)
def run_worker(worker):
i = 0
instance = worker()
while True:
try:
i += 1
print("Starting {name} - run {iter}".format(name=instance.CONST_TYPE_ID, iter=i))
instance.single_run()
except Exception as e:
logging.exception('Worker {} got error {!r}, errno is {}'.format(instance.CONST_TYPE_ID, e, e.args[0]))
print("Waiting...")
time.sleep(1)
def main():
workers = [Worker1, Worker2]
executor.map(run_worker, workers)
IOLoop.current().start()
if __name__ == '__main__':
main()
A:
Solved by adding self.conn.commit() after self.cursor.execute(query) in Worker2: with pymysql, autocommit is off by default, so the first SELECT opens a transaction, and under InnoDB's default REPEATABLE READ isolation every later SELECT keeps reading the same snapshot and never sees Worker1's new rows. Committing ends the transaction, so the next read starts from a fresh snapshot.
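An alternative sketch for the same fix: open the reader's connection with autocommit=True (a standard pymysql connect option), so each SELECT runs in its own transaction and always sees freshly committed rows:
self.conn = pymysql.connect(host=config.mysql_host, port=config.mysql_port,
                            user=config.mysql_user, passwd=config.mysql_password,
                            db=config.mysql_db, charset='UTF8MB4',
                            local_infile=True, autocommit=True)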
|
Problem with two concurrent workers accessing mysql with tornado
|
I have this simple program composed of two workers: Worker1 inserts records that Worker2 should read. The problem is that during execution Worker2 reads 0 records. Launched separately from the CLI they work correctly. The "culprit" seems to be tornado. Any idea?
import time
import munch
from tornado import concurrent
from tornado.ioloop import IOLoop
import logging
import pymysql
config = munch.munchify({"mysql_host": "127.0.0.1", "mysql_port": 3306, "mysql_user": "", "mysql_password": "", "mysql_db": ""})
executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)
class Worker1():
CONST_TYPE_ID = 'worker1'
def __init__(self):
self.conn = pymysql.connect(host=config.mysql_host, port=config.mysql_port, user=config.mysql_user,
passwd=config.mysql_password, db=config.mysql_db, charset='UTF8MB4',
local_infile=True)
self.cursor = self.conn.cursor(pymysql.cursors.DictCursor)
def single_run(self):
print("Single run of worker:", self.CONST_TYPE_ID)
query = "insert into readwrite (timestamp) values (now())"
self.conn.ping(True)
self.cursor.execute(query)
self.conn.commit()
class Worker2():
CONST_TYPE_ID = 'worker2'
def __init__(self):
self.conn = pymysql.connect(host=config.mysql_host, port=config.mysql_port, user=config.mysql_user,
passwd=config.mysql_password, db=config.mysql_db, charset='UTF8MB4',
local_infile=True)
self.cursor = self.conn.cursor(pymysql.cursors.DictCursor)
def single_run(self):
print("Single run of worker:", self.CONST_TYPE_ID)
query = "select * from readwrite"
self.conn.ping(True)
self.cursor.execute(query)
for row in self.cursor.fetchall():
print(row)
def run_worker(worker):
i = 0
instance = worker()
while True:
try:
i += 1
print("Starting {name} - run {iter}".format(name=instance.CONST_TYPE_ID, iter=i))
instance.single_run()
except Exception as e:
logging.exception('Worker {} got error {!r}, errno is {}'.format(instance.CONST_TYPE_ID, e, e.args[0]))
print("Waiting...")
time.sleep(1)
def main():
workers = [Worker1, Worker2]
executor.map(run_worker, workers)
IOLoop.current().start()
if __name__ == '__main__':
main()
|
[
"Solved by adding self.conn.commit() after self.cursor.execute(query) in Worker2.\n"
] |
[
1
] |
[] |
[] |
[
"mysql",
"python",
"python_multiprocessing",
"threadpoolexecutor",
"tornado"
] |
stackoverflow_0074548959_mysql_python_python_multiprocessing_threadpoolexecutor_tornado.txt
|
Q:
Finding all the prime numbers in a list in Python
I want to loop through a list and find all the numbers that are prime
arr = [1,2,3]
for i in range(len(arr)):
if arr[i] > 1:
for j in range(2, int(arr[i]/2)+1):
if (arr[i] % j) == 0:
print(arr[i], "is not prime")
else:
print(arr[i], "is prime")
else:
print(arr[i], "is not prime")
This only prints out "1 is not prime." I am guessing it has something to do with the range(len()) of the for loop.
A:
The problem with your code is as follows:
For 2 and 3, range(2, int(arr[i]/2)+1) has no elements, because int(arr[i]/2)+1 is not greater than 2, so the inner for loop never executes. These two cases need to be treated apart.
The second problem is that for greater numbers you print a verdict on every iteration of the inner loop, instead of deciding once after all candidate divisors have been checked.
Here is a slight modification of your code that should work:
arr = range(20)

for i in range(len(arr)):
    if arr[i] > 3:
        is_prime = True
        for j in range(2, int(arr[i]/2)+1):
            if (arr[i] % j) == 0:
                is_prime = False
                break
        print(arr[i], "is prime" if is_prime else "is not prime")
    elif arr[i] in [2,3]:
        print(arr[i], "is prime")
    else:
        print(arr[i], "is not prime")
|
Finding all the prime numbers in a list in Python
|
I want to loop through a list and find all the numbers that are prime
arr = [1,2,3]
for i in range(len(arr)):
if arr[i] > 1:
for j in range(2, int(arr[i]/2)+1):
if (arr[i] % j) == 0:
print(arr[i], "is not prime")
else:
print(arr[i], "is prime")
else:
print(arr[i], "is not prime")
This only prints out "1 is not prime." I am guessing it has something to do with the range(len()) of the for loop.
|
[
"The problem with your code is as follows\nint(arr[i]/2)+1) is smaller than 2, thenceforth range(2, int(arr[i]/2)+1)) has no elements. The for loop doesn't execute for 2 and 3. These two cases need to be treated apart.\nThe second problem is that for greater numbers, you're deciding for every iteration in the innerloop whether the number is prime or not.\nHere is a slight modification of your code that should work:\narr = range(20)\n\nfor i in range(len(arr)):\n if arr[i] > 3:\n is_prime = True\n for j in range(2, int(arr[i]/2)+1):\n if (arr[i] % j) == 0:\n is_prime = False\n break\n else:\n continue\n print(arr[i], \"is prime\" if is_prime else \"is not prime\")\n elif arr[i] in [2,3]:\n print(arr[i], \"is prime\")\n else:\n print(arr[i], \"is not prime\")\n\n"
] |
[
0
] |
[
"arr = list(range(20))\n\ndef is_prime(n):\n if n < 2:\n return False\n for i in range(2, int(n**0.5)+1):\n if n % i == 0:\n return False\n return True\n\ndef find_primes(array):\n return list(filter(is_prime, array))\n\nprint(find_primes(arr))\n\nreturns: [2, 3, 5, 7, 11, 13, 17, 19]\n"
] |
[
-1
] |
[
"iteration",
"list",
"primes",
"python",
"range"
] |
stackoverflow_0074566677_iteration_list_primes_python_range.txt
|
Q:
how to delete rows that contain a word from a list in python
As stated in the title I have a pandas data frame with string sentences in the column "title". I now want to filter all rows where the title column contains one of the words specified in the list "keywords".
keywords = ["Simon", "Mustermann"]
df =
Title
Bla
Simon is a python beginner
...
Second balaola
...
Simon
...
Since "Simon" is found in rows with index 0 and 2, they should be retained.
My code atm is the following:
new_df = df[df["title"].isin(keywords)]
However, it only contains the third row but not the first one. How can I fix this? Thanks a lot for your support and time!
A:
This snippet should work for you
keywords = ["Simon", "Mustermann"]
# filter rows where column title contains one of the keywords
df_filtered = df[df["title"].str.contains("|".join(keywords))]
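One caveat worth adding (not in the original snippet): str.contains interprets the joined string as a regular expression, so if the keywords may contain regex metacharacters, escape them first:
import re

pattern = "|".join(map(re.escape, keywords))
df_filtered = df[df["title"].str.contains(pattern)]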
|
how to delete rows that contain a word from a list in python
|
As stated in the title I have a pandas data frame with string sentences in the column "title". I now want to filter all rows where the title column contains one of the words specified in the list "keywords".
keywords = ["Simon", "Mustermann"]
df =
Title
Bla
Simon is a python beginner
...
Second balaola
...
Simon
...
Since "Simon" is found in rows with index 0 and 2, they should be retained.
My code atm is the following:
new_df = df[df["title"].isin(keywords)]
However, it only contains the third row but not the first one. How can I fix this? Thanks a lot for your support and time!
|
[
"This snippet should work for you\nkeywords = [\"Simon\", \"Mustermann\"]\n\n# filter rows where column title contains one of the keywords\ndf_filtered = df[df[\"title\"].str.contains(\"|\".join(keywords))]\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"list",
"list_comprehension",
"pandas",
"python"
] |
stackoverflow_0074566836_dataframe_list_list_comprehension_pandas_python.txt
|
Q:
Time interval calculation for consecutive days in rows
I have a dataframe that looks like this:
Path_Version commitdates Year-Month API Age api_spec_id
168 NaN 2018-10-19 2018-10 39 521
169 NaN 2018-10-19 2018-10 39 521
170 NaN 2018-10-12 2018-10 39 521
171 NaN 2018-10-12 2018-10 39 521
172 NaN 2018-10-12 2018-10 39 521
173 NaN 2018-10-11 2018-10 39 521
174 NaN 2018-10-11 2018-10 39 521
175 NaN 2018-10-11 2018-10 39 521
176 NaN 2018-10-11 2018-10 39 521
177 NaN 2018-10-11 2018-10 39 521
178 NaN 2018-09-26 2018-09 39 521
179 NaN 2018-09-25 2018-09 39 521
I want to calculate the days elapsed from the first commitdate till the last, after sorting the commit dates first, so something like this:
Path_Version commitdates Year-Month API Age api_spec_id Days_difference
168 NaN 2018-10-19 2018-10 39 521 25
169 NaN 2018-10-19 2018-10 39 521 25
170 NaN 2018-10-12 2018-10 39 521 18
171 NaN 2018-10-12 2018-10 39 521 18
172 NaN 2018-10-12 2018-10 39 521 18
173 NaN 2018-10-11 2018-10 39 521 16
174 NaN 2018-10-11 2018-10 39 521 16
175 NaN 2018-10-11 2018-10 39 521 16
176 NaN 2018-10-11 2018-10 39 521 16
177 NaN 2018-10-11 2018-10 39 521 16
178 NaN 2018-09-26 2018-09 39 521 1
179 NaN 2018-09-25 2018-09 39 521 0
I tried first sorting the commitdates by api_spec_id since it is unique for every API, and then calculating the diff
final_api['commitdates'] = final_api.groupby('api_spec_id')['commitdate'].apply(lambda x: x.sort_values())
final_api['diff'] = final_api.groupby('api_spec_id')['commitdates'].diff() / np.timedelta64(1, 'D')
final_api['diff'] = final_api['diff'].fillna(0)
It just returns me a zero for the entire column. I don't want to group them, I only want to calculate the difference based on the sorted commitdates: starting from the first commitdate till the last in the entire dataset, in days
Any idea how can I achieve this?
A:
Use pandas.to_datetime, sub, min and dt.days:
t = pd.to_datetime(df['commitdates'])
df['Days_difference'] = t.sub(t.min()).dt.days
If you need to group per API:
t = pd.to_datetime(df['commitdates'])
df['Days_difference'] = t.sub(t.groupby(df['api_spec_id']).transform('min')).dt.days
Output:
Path_Version commitdates Year-Month API Age api_spec_id Days_difference
168 NaN 2018-10-19 2018-10 39 521 24
169 NaN 2018-10-19 2018-10 39 521 24
170 NaN 2018-10-12 2018-10 39 521 17
171 NaN 2018-10-12 2018-10 39 521 17
172 NaN 2018-10-12 2018-10 39 521 17
173 NaN 2018-10-11 2018-10 39 521 16
174 NaN 2018-10-11 2018-10 39 521 16
175 NaN 2018-10-11 2018-10 39 521 16
176 NaN 2018-10-11 2018-10 39 521 16
177 NaN 2018-10-11 2018-10 39 521 16
178 NaN 2018-09-26 2018-09 39 521 1
179 NaN 2018-09-25 2018-09 39 521 0
Q:
No module names 'src' when importing from parent folder in jupyter notebook
I have the following folder structure in my project
my_project
notebook
|-- some_notebook.ipynb
src
|-- preprocess
|-- __init__.py
|-- some_processing.py
__init__.py
Now, inside some_notebook.ipynb I simply want to get the methods from some_processing.py. Now when I run
from src.preprocess import some_processing
from some_notebook.ipynb it always throws
ModuleNotFoundError: No module named 'src'
I found multiple questions regarding this and played around with sys.path.append(<path-to-src>). But I couldn't solve it. Which path do I provide? Something like ../src didn't work.
I checked for example the AlphaFold project from DeepMind and they are using it also with this structure. I tried to replicate exactly like they did.
How can I solve this? Which path do I provide in sys.path.append()?
I appreciate any help!
A:
I found the answer. Running
import os
import sys

sys.path.insert(1, os.path.join(sys.path[0], '../src'))

made it possible to import anything from the parent module src.
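An alternative that can be more robust to where the kernel was started is to build the path with pathlib; a minimal sketch, assuming the notebook's working directory is my_project/notebook/:
import sys
from pathlib import Path

# my_project/ is one level above the notebook/ folder
project_root = Path.cwd().parent
sys.path.insert(0, str(project_root))

from src.preprocess import some_processing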
Q:
Unpacking arrays into arrow plot
I have a strange-looking function that calls plots based on attribute names; if a function with that name exists in the class, it is selected. Then I am trying to call it. In this example I use pyplot.arrow; however, I cannot seem to unpack all the values. It should take four parameters, but I get the following error:
ValueError: too many values to unpack (expected 2)
I cannot see how I am passing too many values, given that I unpack with *. Here is what I have tried:
import numpy as np
import matplotlib.pyplot as plt
test_array = np.array([ [1, 2] , [5, 4] , [5, 2] , [1, 2] , [5, 2] , [1, 2]])
test_array = np.column_stack((test_array,[ [ 0 , 0 ] ]*test_array.shape[0]))
test_split = np.split(test_array, 6)
for b in test_split:
b[ : , [ 0 , 1 , -2 , -1 ] ] = b[ : , [ -2 , -1 , 0 ,1] ]
def plot(size: list,plType, *args, **kwargs):
figs, axs = plt.subplots(size[0], size[1], figsize=(8,8))
xy = np.array(args)
for A , ax in zip( xy , axs.flat ):
X = np.hsplit( A , xy.shape[2] )
if isinstance(X, list):
for ind , Z in enumerate(zip(*X)):
ax.__getattribute__( plType )( *Z, **kwargs)
plt.show()
print(np.array(test_split).shape)
plot([2, 3], 'arrow', *test_split)
A:
I feel like you are doing several unnecessary things which has made it confusing.
The main point is you want to do ax.__getattribute__("arrow")(x, y, dx, dy, **kwargs). To keep it simple:
import numpy as np
import matplotlib.pyplot as plt
test_array = np.array([
[1, 2],
[5, 4],
[5, 2],
[1, 2],
[5, 2],
[1, 2]
])
test_array = np.column_stack((test_array, [[0, 0]] * test_array.shape[0]))
test_split = np.split(test_array, 6)
for b in test_split:
b[:, [0, 1, -2 , -1]] = b[:, [-2, -1, 0 ,1]]
Here, test_split is a list of six 1×4 arrays; assuming they represent (x, y, dx, dy).
[array([[0, 0, 1, 2]]),
array([[0, 0, 5, 4]]),
array([[0, 0, 5, 2]]),
array([[0, 0, 1, 2]]),
array([[0, 0, 5, 2]]),
array([[0, 0, 1, 2]])]
We can plot them in the subplots as follows:
def plot(size: list, plType, *args, **kwargs):
figs, axs = plt.subplots(size[0], size[1], figsize=(8, 8))
for A, ax in zip(args, axs.flat):
ax.__getattribute__(plType)(*A.ravel().tolist(), **kwargs)
plt.show()
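Calling it the same way as the question's final line then draws one arrow per subplot:
plot([2, 3], 'arrow', *test_split)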
A:
You were missing another level of obfuscation.
import numpy as np
import matplotlib.pyplot as plt
test_array = np.array([ [1, 2] , [5, 4] , [5, 2] , [1, 2] , [5, 2] , [1, 2]])
test_array = np.column_stack((test_array,[ [ 0 , 0 ] ]*test_array.shape[0]))
test_split = np.split(test_array, 6)
for b in test_split:
b[ : , [ 0 , 1 , -2 , -1 ] ] = b[ : , [ -2 , -1 , 0 ,1] ]
def plot(size: list,plType, *args, **kwargs):
figs, axs = plt.subplots(size[0], size[1], figsize=(8,8))
xy = np.array(args)
for A , ax in zip( xy , axs.flat ):
X = np.hsplit( A , xy.shape[2] )
if isinstance(X, list):
# Start of the extra level of obfuscation
for i, Y in enumerate(zip(*X)):
if isinstance(Y, tuple):
# End of the extra level of obfuscation
for j, Z in enumerate(zip(*Y)):
ax.__getattribute__( plType )( *Z, **kwargs)
plt.show()
print(np.array(test_split).shape)
plot([2, 3], 'arrow', *test_split)
I said obfuscation because, e.g., the enumerate is completely unnecessary — but this is just one of many examples.
Q:
How to have input in Python only take in string and not number or anything else only letters
I am a beginner in Python so kindly do not use complex or advanced code.
contact = {}
def display_contact():
for name, number in sorted((k,v) for k, v in contact.items()):
print(f'Name: {name}, Number: {number}')
#def display_contact():
# print("Name\t\tContact Number")
# for key in contact:
# print("{}\t\t{}".format(key,contact.get(key)))
while True:
choice = int(input(" 1. Add new contact \n 2. Search contact \n 3. Display contact\n 4. Edit contact \n 5. Delete contact \n 6. Print \n 7. Exit \n Enter "))
#I have already tried
if choice == 1:
while True:
try:
name = str(input("Enter the contact name "))
if name != str:
except ValueError:
continue
else:
break
while True:
try:
phone = int(input("Enter number "))
except ValueError:
print("Sorry you can only enter a phone number")
continue
else:
break
contact[name] = phone
elif choice == 2:
search_name = input("Enter contact name ")
if search_name in contact:
print(search_name, "'s contact number is ", contact[search_name])
else:
print("Name is not found in contact book")
elif choice == 3:
if not contact:
print("Empty Phonebook")
else:
display_contact()
elif choice == 4:
edit_contact = input("Enter the contact to be edited ")
if edit_contact in contact:
phone = input("Enter number")
contact[edit_contact]=phone
print("Contact Updated")
display_contact()
else:
print("Name is not found in contact book")
elif choice == 5:
del_contact = input("Enter the contact to be deleted ")
if del_contact in contact:
confirm = input("Do you want to delete this contact Yes or No? ")
if confirm == 'Yes' or confirm == 'yes':
contact.pop(del_contact)
display_contact
else:
print("Name is not found in phone book")
elif choice == 6:
sort_contact = input("Enter yes to print your contact")
if sort_contact in contact:
confirm = input("Do you want to print your contact Yes or No? ")
if confirm == 'Yes' or confirm == 'yes':
strs = [display_contact]
print(sorted(strs))
else:
print("Phone book is printed.")
else:
break
I tried but keep getting errors, and I can't figure out how to make it accept only letters as input, not numbers.
if choice == 1:
while True:
try:
name = str(input("Enter the contact name "))
if name != str:
except ValueError:
continue
else:
break
It is not working; my code still accepts both numbers and letters.
I am a beginner so I might have made a lot of mistakes. Your patience would be appreciated.
A:
You can use a regex with re.fullmatch:
import re
while True:
name = input("Enter the contact name ")
if re.fullmatch(r'[a-zA-Z]+', name):
break
Or use the case-insensitive flag: re.fullmatch(r'[a-z]+', name, flags=re.I).
A:
As you noted that you are a beginner, I'm adding this piece of code
as a "custom-made" validation, just so you can see how you could do something like this on your own.
Note: @mozway gave a MUCH BETTER solution, that is super clean, and I recommend it over this one.
def valid_input(text: str):  # name the parameter "text", not "input", to avoid shadowing the builtin
    # Check if any char is a number
    for char in text:
        if char.isdigit():
            print('Numbers are not allowed!')
            return False
    return True
while True:
name = input("Enter data:")
if valid_input(name):
break
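For reference, the built-in str.isalpha() covers the same check without a loop or regex (note it also accepts non-ASCII letters such as 'é'):
while True:
    name = input("Enter the contact name ")
    if name.isalpha():
        break
    print("Please enter letters only.")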
Q:
Function that generates airflow dags dinamically not creating them
I have a function that will generate dags dynamically, from a database with the dag configs (I know, it's expensive to do that). The thing is, it only generates dags when I call this function in the same file where I define it; if I import it in another file and execute it, it won't generate my dags.
Eg:
def generate_dags_dinamically():
dags = get_dag_configs()
# this 'dags' variable, contains some configs for generating the dags
for dag in dags:
# Defines dag and adds to globals
with DAG(
dag_id=dag.dag_id,
tags=dag.configs['tags'],
start_date=dag.start,
schedule_interval=dag.schedule,
default_args={'owner': dag.configs['owner']},
catchup=False
) as cur_dag:
globals()[dag.dag_id] = cur_dag
task_start = EmptyOperator(
task_id='task_start',
dag=cur_dag
)
task_end = EmptyOperator(
task_id='task_end',
dag=cur_dag
)
python_task = PythonOperator(
task_id=dag.task_id,
python_callable=dag.callable,
            op_kwargs=dag.kwargs,
            retries=dag.task_retries
)
task_start >> python_task >> task_end
# When I call here, at the same file, airflow creates the dags.
generate_dags_dinamically()
But if I import it in another file and call the function, it won't create the dags.
from dags.dynamic_dags import generate_dags_dinamically
# This wont create my dags!
generate_dags_dinamically()
So I don't know how to solve this. Maybe it's something related to the global scope?
(I have some reasons to not call on the same file, like folder structure pattern, reusability and so on)
A:
Airflow will only 'see' the dag objects that are in the global namespace.
In order to correct your code your generate_dags_dinamically() function should return a list of dag objects and then you should add them to the global scope like so:
from dags.dynamic_dags import generate_dags_dinamically
dags_list = generate_dags_dinamically()
for dag in dags_list:
globals()[dag.dag_id] = dag
See the documentation for similar code.
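For illustration, a minimal sketch of the refactored function itself (hypothetical: get_dag_configs and the task-building code stay exactly as in the question; only the collection and return change):
from airflow import DAG

def generate_dags_dinamically():
    created = []
    for dag in get_dag_configs():
        with DAG(
            dag_id=dag.dag_id,
            start_date=dag.start,
            schedule_interval=dag.schedule,
            catchup=False,
        ) as cur_dag:
            # build task_start >> python_task >> task_end exactly as before
            ...
        created.append(cur_dag)
    return created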
Q:
How to install libraries in python without pip to make sure the integrity of their code?
I am working at a company that does not permit installing libraries without cybersecurity department permission. So I had to download the libraries, for example from pypi.org, send them for authorization, and install them by calling setup. However, I wonder if there are better solutions/practices to guarantee the integrity of the libraries. For example, I tried to download Google ortools, and this is not as easy as it appears.
A:
You can package your project with the wheel (.whl) files it needs on the platform it will be running on. That way, your IT dept. can sign off on those specific binaries and allow their installation, and you can be guaranteed they will work the same every time the software is installed.
However, it does mean that your software can no longer be installed just anywhere. Picking, downloading (or building) and including specific wheels means locking yourself into a limited range of Python versions, for a specific hardware architecture.
A:
Can you use pip to install your downloaded and approved libraries from a local path? ...or is the use of pip prohibited?
link to python pip documentation to do that
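To illustrate the local-path workflow (a sketch: ortools stands in for any approved package, and the directory name is arbitrary):
# on a machine with internet access: fetch the wheels for review
python -m pip download ortools -d ./approved_wheels

# after approval: install offline, using only the local directory
python -m pip install --no-index --find-links ./approved_wheels ortools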
Q:
how to download this zip file using python requests?
I am trying to download a zip file that is stored here:
http://e4ftl01.cr.usgs.gov/MEASURES/SRTMGL1.003/2000.02.11/N45W074.SRTMGL1.hgt.zip
If you paste this into the browser and hit enter, it will download the .zip folder.
If you inspect the browser while this is happening, you will see that there is an internal redirect going on:
And eventually the zip gets downloaded.
I am trying to automate this downloading using the python requests library by doing the following:
import requests
requests.get(url,
allow_redirects=True,
headers={'User-Agent':'Chrome/107.0.0.0'})
I've tried tons of combinations, using the full header string from the HTML inspection, forcing verify=True, with and without redirects, adding a HTTPBasicAuth user/pass that says is required although the file seems to download fine without any credentials.
Honestly no clue what I'm missing, this is not my expertise. I keep getting this error:
>>> requests.get(url,
... allow_redirects=True,
... headers={'User-Agent':'Chrome/107.0.0.0'})
Traceback (most recent call last):
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
raise err
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine
actively refused it
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\connectionpool.py", line 398, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\connection.py", line 239, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "C:\Users\dere\Miniconda3\envs\blender\lib\http\client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\dere\Miniconda3\envs\blender\lib\http\client.py", line 1328, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\dere\Miniconda3\envs\blender\lib\http\client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\dere\Miniconda3\envs\blender\lib\http\client.py", line 1037, in _send_output
self.send(msg)
File "C:\Users\dere\Miniconda3\envs\blender\lib\http\client.py", line 975, in send
self.connect()
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\connection.py", line 205, in connect
conn = self._new_conn()
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x0000016B76A6FE50>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\requests\adapters.py", line 489, in send
resp = conn.urlopen(
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='e4ftl01.cr.usgs.gov', port=80): Max retries exceeded with url: /MEASURES/SRTMGL1.003/2000.02.11/N45W074.SRTMGL1.hgt.zip (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000016B76A6FE50>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\requests\api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "C:\Users\dere\Miniconda3\envs\blender\lib\site-packages\requests\adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='e4ftl01.cr.usgs.gov', port=80): Max retries exceeded with url: /MEASURES/SRTMGL1.003/2000.02.11/N45W074.SRTMGL1.hgt.zip (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000016B76A6FE50>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Can someone help me arrive at the code that will result in a successful request? I know how to write the response into a zip file afterwards.
A:
Change http to https. This should work:
import requests
# download zip file from url
url = "https://e4ftl01.cr.usgs.gov/MEASURES/SRTMGL1.003/2000.02.11/N45W074.SRTMGL1.hgt.zip"
r = requests.get(url)
with open("N45W074.SRTMGL1.hgt.zip", "wb") as f:
f.write(r.content)
A:
Thanks to @cnemri for the tip.
It was a combination of changing http to https and also including this specific cookie header in the request header:
headers = {'Cookie': '_gid=GA1.2.48775707.1669266346; _hjSessionUser_606685=eyJpZCI6IjQ1MzEzM2QzLTI3MWEtNWM0YS04M2YzLWRmMmMzNDk4NjY1ZSIsImNyZWF0ZWQiOjE2NjkyNjYzNDU2MjUsImV4aXN0aW5nIjp0cnVlfQ==; ERS_production_2=b7dfade669180a6d6d250b030ffc3cf2UZZa8%2F471MfcV%2FaNuFtu6Pli%2BpP8jKOIwR4JvQjm%2B6DLPjl679vVf5SCDk7C5TLQsj7qckIev6lmtGb6Mes5RKDHUs%2BBp3EAjKW2%2BMCUpV%2Fnx0z1pdaCQQ%3D%3D; EROS_SSO_production_secure=eyJjcmVhdGVkIjoxNjY5MjY5MjEzLCJ1cGRhdGVkIjoiMjAyMi0xMS0yMyAyMzo1MzoyOCIsImF1dGhUeXBlIjoiRVJTIiwiYXV0aFNlcnZpY2UiOiJFUk9TIiwidmVyc2lvbiI6MS4xLCJzdGF0ZSI6ImVjMDY3YmMzYmRhMzBiZTUyNTkxYTNiZTYwMTMwZWNmNjAwMWU1Y2JlMGMxZmNkYTU4Y2Y4OTY0YjRlNTJkOTEiLCJpZCI6InY5U1Q4YVl3M1hRKCsyIiwic2VjcmV0IjoiPl8lMCZPYiY9MUVkclRLZTt4fHVZLVcyU1VvTiA1SE17KiQqaG12OSxvPl5%2BdyJ9; _ga_0YWDZEJ295=GS1.1.1669269458.1.0.1669269460.0.0.0; _ga=GA1.2.1055277358.1669266346; _ga_71JPYV1CCS=GS1.1.1669306300.2.1.1669306402.0.0.0; DATA=Y3_gg9uD6LmunsfDHneR9wAAARQ'}
then, this worked:
requests.get(url, headers=headers)
I'm not sure if this is the "solution" or just a workaround I've discovered. Still appreciate any input. I think it may just be credentials that are stored?
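If it is stored credentials, the usual pattern for this host (USGS/NASA data behind an Earthdata login) is a ~/.netrc entry, which requests consults even across the login redirect, together with a Session to carry the cookies. A sketch with placeholder credentials:
# ~/.netrc (permissions 600), placeholder account:
# machine urs.earthdata.nasa.gov
#     login YOUR_USERNAME
#     password YOUR_PASSWORD

import requests

url = ("https://e4ftl01.cr.usgs.gov/MEASURES/SRTMGL1.003/"
       "2000.02.11/N45W074.SRTMGL1.hgt.zip")

with requests.Session() as session:
    response = session.get(url, allow_redirects=True)
    response.raise_for_status()
    with open("N45W074.SRTMGL1.hgt.zip", "wb") as f:
        f.write(response.content)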
Q:
beautifulsoup get last tag from snippet, if tag exists
Here's html snippet 1:
<td class="firstleft lineupopt-name" style=""><a href="/link/link_url?id=222" title="Donald Trump" target="_blank">Trump, Donald</a> <span style="color:#666;font-size:10px;">B</span> <span style="color:#cc1100;font-size:10px;font-weight:bold;">TTT</span></td>
Here's html snippet 2:
<td class="firstleft lineupopt-name" style=""><a href="/link/link_url2?id=221" title="Hillary Clinton" target="_blank">Clinton, Hillary</a> <span style="color:#cc1100;font-size:10px;font-weight:bold;">TTT</span></td>
Here's my relevant code:
all = cols[1].find_all('span')
for ele in all:
if (ele is not None):
ttt = cols[1].span.text
else:
ttt = 'none'
Issue: my code works in both instances, but for html snippet 2, it grabs content from the first span tag. In both instances, if the tag exists, I'd like to grab content from only the last span tag. How can this be done?
A:
BS4 now supports last-child so a possible approach could be:
soup.select('td span:last-child')
To get the texts out, just iterate the ResultSet.
Example
from bs4 import BeautifulSoup
html='''
<td class="firstleft lineupopt-name" style=""><a href="/link/link_url?id=222" title="Donald Trump" target="_blank">Trump, Donald</a> <span style="color:#666;font-size:10px;">B</span> <span style="color:#cc1100;font-size:10px;font-weight:bold;">TTT</span></td>
<td class="firstleft lineupopt-name" style=""><a href="/link/link_url2?id=221" title="Hillary Clinton" target="_blank">Clinton, Hillary</a> <span style="color:#cc1100;font-size:10px;font-weight:bold;">TTT</span></td>
'''
soup = BeautifulSoup(html, 'html.parser')
[t.text for t in soup.select('td span:last-child')]
Output
['TTT', 'TTT']
A:
A straightforward approach would be to get the last element by -1 index:
ttt = all[-1].text if all else 'none'
I've also tried to approach it with a CSS selector, but BeautifulSoup does not support last-child, last-of-type or nth-last-of-type and supports only nth-of-type pseudo-class.
A:
I tested it in a conda env with bs4 v4.9.1, and nth-last-of-type(1) now works.
A:
If you are trying to select only direct children from an already selected element, use the :scope pseudo-class to make the selector reference the element itself, so you can use the >, ~, and + operators. Docs
selected_element.select_one(":scope > :last-child")
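Tying this back to the question's loop, the simplest fix is to take the last match directly from the cols variable:
spans = cols[1].find_all('span')
ttt = spans[-1].text if spans else 'none'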
Q:
Double iteration in for comprehension without 2d list
I would like to perform a double 'for' loop within a for-comprehension. However, I do not want to do it under the typical conditions, such as:
sentences = ['hello what are you doing?', 'trying to figure this out!']
[c for word in sentences for c in word]
Instead, I would like to perform this double iteration with a condition, but in a for-comprehension:
words = ["snake", "porcupine", "lizard"]
substrings = ["sn", "o", "ke"]
new = []
for word in words:
for substr in substrings:
if substr in word:
new.append(word)
new = set(new)
print(new)
Any help is appreciated!
A:
Just figured it out, nevermind. Simply use any():
new = [word for word in words if any(substr in word for substr in substrings)]
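Since the original loop ends with new = set(new), a set comprehension gives the deduplicated result directly:
words = ["snake", "porcupine", "lizard"]
substrings = ["sn", "o", "ke"]

new = {word for word in words if any(substr in word for substr in substrings)}
print(new)  # {'snake', 'porcupine'}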
Q:
File not Found in Directory Python
Ok, I tried everything and I'm starting to get frustrated :/
I want to open the .mp3 file and change it to a .wav.
The thing is I don't even get that far, because the code doesn't find the .mp3.
At first I worked with the complete path of the directory, but after several failed attempts I started to use the os lib.
import os
print(os.getcwd())
path_folder = "Kurisu\Data\correct"
os.chdir(path_folder)
print(os.getcwd())
for file in os.listdir(os.getcwd()):
Src = os.path.join(os.getcwd(),file)
print(Src)
sound = AudioSegment.from_mp3(Src)
It looks a bit chaotic but the main idea should come through.
FYI: path_folder had the complete path before. After that didn't work I changed it to this, and I already tried the version with the r in front of the path :/
All paths are correctly printed and the file is in the directory too.
Even checked if i have read and writing access to that file :D
I appreciate every help
A:
As you've mentioned in your question, you need to add r before your path (a raw string literal) or double the backslashes, so that \ is not treated as an escape character (e.g. r"Kurisu\Data\correct").
You can then open each file in binary mode and load it:
from pydub import AudioSegment

with open(Src, 'rb') as f: 
    sound = AudioSegment.from_file(f, format="mp3")
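To complete the original goal of converting .mp3 to .wav, a minimal sketch, assuming pydub and ffmpeg are installed and keeping the question's folder layout:
import os
from pydub import AudioSegment

path_folder = r"Kurisu\Data\correct"  # raw string so the backslashes survive

for file in os.listdir(path_folder):
    if file.lower().endswith(".mp3"):
        src = os.path.join(path_folder, file)
        dst = os.path.splitext(src)[0] + ".wav"
        AudioSegment.from_mp3(src).export(dst, format="wav")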
Q:
How to check if files are still created?
I would like to make a script to check whether or not files are still being created inside a folder. For our purposes, assume there are no more files being created if, say, the list of files present in that folder remains unchanged for 5 seconds. Can anyone help me with this issue?
A:
You can use inotifywait to watch for events on a file or a directory.
inotifywait -m -e create /path/to/your/dir
It will print each event as it happens. To make it exit when no more events happen within 5 seconds, add a timeout:
inotifywait --timeout 5 -qm -e create /path/to/your/dir
The command above uses 5 seconds, but you can change that via the --timeout argument.
A:
By created, I'm assuming you mean existing; anyway, to do that:
import os
entries = os.listdir('my_directory/')
then check the length of entries with len(entries)
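A pure-Python version of the 5-second rule from the question (a sketch: poll the listing and stop once it is unchanged for one full interval):
import os
import time

def wait_until_stable(folder, interval=5):
    # Block until the set of files in the folder stops changing
    previous = set(os.listdir(folder))
    while True:
        time.sleep(interval)
        current = set(os.listdir(folder))
        if current == previous:
            return current  # unchanged for `interval` seconds
        previous = current

files = wait_until_stable('my_directory/')
print(f"No new files for 5 s; {len(files)} files present.")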
Q:
Removing Duplicates out of List of Np-Arrays
I am working on a project for which I am analyzing and comparing several different binary matrices which represent combinatorial objects. For this, I need to generate and analyze datasets and I have turned to python to do so.
Basically, I have a list of np.arrays and I need to filter out duplicates, i.e. take out all np.arrays which occur more than once. The standard methods do not work (np.arrays are not hashable, and normal indexing does not work on them either).
What I attempted was the following, which also did not produce the desired result:
res = [y[i] for i in range(len(y)) if i == y.index(y[i]) ]
I receive an error telling me I need the all() or any() function, which kind of eliminates the point of the entire operation. Does anybody have an idea on how to solve this?
Remark: I know there are several questions on this topic with normal lists and I know how to approach those, but using np.arrays makes this a lot more difficult.
A:
Works with numpy arrays:
Given
import numpy as np

a = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3],
              [4, 5, 6],
              [4, 5, 6]])
You can do:
b = np.unique(a, axis=0)
This gives you a numpy array with the same columns as a, but with all duplicate rows removed. So in this example:
print(b)
array([[1, 2, 3],
[4, 5, 6]])
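Since the question starts from a Python list of arrays rather than a single 2-D array, stack the list first (this assumes all arrays share one shape, as same-size binary matrices would):
import numpy as np

y = [np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([1, 2, 3])]
unique_rows = np.unique(np.stack(y), axis=0)
print(unique_rows)
# [[1 2 3]
#  [4 5 6]]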
Q:
Vscode error "zsh: command not found: python" (on macOs Monterrey 12.3.1)
I have no idea on how to solve this, I've tried to
echo "alias python=/usr/bin/python3" >> ~/.zshrc
And also
brew install python
I'm new to this, so I really don't know what I'm doing; if someone could explain why I'm supposed to write those lines in my terminal, I'd be very grateful.
A:
After updating to macOS Monterey 12.6.1 I had a similar problem.
Before this update, python referred to python2.7, which was removed completely in the latest Monterey versions.
I used the following fix:
sudo ln -s /Applications/Xcode.app/Contents/Developer/usr/bin/python3 /Applications/Xcode.app/Contents/Developer/usr/bin/python
sudo ln -s /usr/bin/python3 /usr/local/bin/python
Result:
python -V
Python 3.9.6
A:
Try python3 --version in your terminal window; if it returns the version details, then Python 3 is installed on your system.
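If you'd rather keep the alias approach from the question, note that ~/.zshrc is only read by new shells, so reload it (or open a fresh terminal) after adding the line:
# append the alias, then reload the shell config in the current session
echo "alias python=/usr/bin/python3" >> ~/.zshrc
source ~/.zshrc
python --version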
Q:
Physically Based Rendering shows discontinuity on closed surface
I added a mesh to a pyvista.Plotter() with
p.add_mesh(mesh, show_edges=True, color='linen', pbr=True, metallic=0.8, roughness=0.1, diffuse=1)
but it displays with a discontinuity (where the mesh started and ended)
Why is this junction of cells different from similar ones around this toroid?
A:
It's probably due to how surface normals are computed, and that your mesh connectivity is off along that edge.
The way you often generate such closed surfaces is to parametrise with respect to some generalised coordinates, one of which in this case is the azimuthal angle. Where your azimuthal angle sweep starts (following the physics convention: phi = 0 corresponds to the +x axis) and ends (phi = 2*pi, again the +x axis) overlaps, but if you don't take care to ensure connectivity across this boundary then you don't have a closed toroid, but rather an open tube turned back on itself that has a seam where the two open ends meet. This affects how surface normals are estimated on the boundary faces.
If you check mesh.extract_feature_edges(boundary_edges=True, non_manifold_edges=True, feature_edges=False, manifold_edges=False).plot() you will likely get a vertical ring right where the artifact happens. Depending on your mesh it's probably enough to call mesh = mesh.clean() with a small tolerance value, because if your points at the seam properly overlap then this can merge your boundary points and fuse your boundary faces to remove the spurious edge. (You might also have to recompute the normals yourself, but probably not; I'm not entirely sure.)
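If the shading still looks wrong after cleaning, recomputing the normals explicitly is cheap to try; a sketch (compute_normals re-estimates point and cell normals on the merged mesh):
# hypothetical follow-up after merging the seam
mesh = mesh.clean(tolerance=1e-12)
mesh = mesh.compute_normals(point_normals=True, cell_normals=True)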
Case in point: here's a concrete example using an extruded polygon (which is what your mesh seems to be, or close enough):
import pyvista as pv
# generate toroid
square = pv.Polygon(n_sides=4).translate((0, -2, 0))
toroid = square.extrude_rotate(resolution=8, rotation_axis=(1, 0, 0), capping=False)
# let's see what we've got
plotter = pv.Plotter()
plotter.add_mesh(toroid, color='lightblue', smooth_shading=True)
plotter.view_yz()
plotter.show()
From the shading it's obvious that something's amiss. It's the hidden boundary edge(s):
print(toroid.n_open_edges)
# 8
open_edges = toroid.extract_feature_edges(
boundary_edges=True,
non_manifold_edges=True,
feature_edges=False,
manifold_edges=False,
)
plotter = pv.Plotter()
plotter.add_mesh(toroid, color='lightblue', smooth_shading=True)
plotter.add_mesh(open_edges, color='red', line_width=5, render_lines_as_tubes=True)
plotter.view_yz()
plotter.show()
And cleaning with a small tolerance (to allow for floating-point errors) solves the problem:
cleaned = toroid.clean(tolerance=1e-12)
print(cleaned.n_open_edges)
# 0
plotter = pv.Plotter()
plotter.add_mesh(cleaned, color='lightblue', smooth_shading=True)
plotter.view_yz()
plotter.show()
A:
Good answer. The first and last groups of points were not exactly the same due to floating-point errors:
[[ 4.8000000e+01 0.0000000e+00 0.0000000e+00]
[ ... ... ... ]
[ 4.8000000e+01 -3.9188699e-15 1.1756609e-14]]
I fixed it by rounding the coordinates when they were generated (to create an STL file):
return (x.round(5), y.round(5), z.round(5))
Not sure why such tiny differences had such a noticeable visible effect...
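If the mesh has already been generated, a similar fix can be applied after the fact. A minimal sketch (not from the original answers; the file name is a placeholder): round the point coordinates on the existing mesh so the seam points coincide exactly, then merge them with clean():
import pyvista as pv

mesh = pv.read("toroid.stl")        # hypothetical input file
mesh.points = mesh.points.round(5)  # snap near-identical seam points onto each other
mesh = mesh.clean()                 # merge the now-coincident points
print(mesh.n_open_edges)            # expect 0 once the seam is fused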
Q:
I'm trying to install some dependencies, but I ran into an error
I typed in "sudo apt-get install -y wiringpi python-pigpio python3-pigpio" but got the error "Temporary failure resolving 'archive.raspberrypi.org' "
Here is a picture of the error
I set up a fixed address in sudo nano /etc/dhcpcd.conf. I tried looking for solutions online and have tried some of them, such as changing the DNS server to 8.8.8.8, but they do not work. Other solutions I have seen I do not understand, as I am new to Raspberry Pi.
Also, I connected it to the internet and it's still connected.
A:
I would suggest adding
nameserver 1.1.1.1
nameserver 1.0.0.1
to /etc/resolv.conf.
Note these are two DNS resolvers, which will only be used if your normal DNS (from your ISP) is not responding. They are provided by Cloudflare and are excellent.
Then try your command again.
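To confirm this is a DNS problem rather than a general connectivity failure, a quick check from Python may help (a minimal sketch, not part of the original answer):
import socket

try:
    # attempt to resolve the host that apt-get failed on
    print(socket.gethostbyname("archive.raspberrypi.org"))
except socket.gaierror as err:
    print("DNS resolution failed:", err)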
Q:
Deploying a new model to a sagemaker endpoint without updating the config?
I want to deploy a new model to an existing AWS SageMaker endpoint. The model is trained by a different pipeline and stored as a model.tar.gz in S3. The SageMaker endpoint config points to this as the model data URL. SageMaker, however, doesn't reload the model, and I don't know how to convince it to do so.
I want to deploy a new model to an AWS SageMaker endpoint. The model is trained by a different pipeline and stored as a model.tar.gz in S3. I provisioned the SageMaker endpoint using AWS CDK. Now, within the training pipeline, I want to allow the data scientists to optionally upload their newly trained model to the endpoint for testing. I don't want to create a new model or an endpoint config. Also, I don't want to change the infrastructure (AWS CDK) code.
The model is uploaded to the S3 location that the SageMaker endpoint config is using as the
model_data_url. Hence it should use the new model. But it doesn't load it. I know that SageMaker caches models inside the container, but I don't know how to force a new load.
This documentation suggests storing the model tarball under another name in the same S3 folder and altering the invocation code. This is not possible for my application. And I don't want SageMaker to default to an old model once the TargetModel parameter is not present.
Here is what I am currently doing after uploading the model to S3. Even though the endpoint transitions into Updating state, it does not force a model reload:
def update_sm_endpoint(endpoint_name: str) -> Dict[str, Any]:
"""Forces the sagemaker endpoint to reload model from s3"""
sm = boto3.client("sagemaker")
return sm.update_endpoint_weights_and_capacities(
EndpointName=endpoint_name,
DesiredWeightsAndCapacities=[
{"VariantName": "main", "DesiredWeight": 1},
],
)
Any ideas?
A:
If you want to modify the model served by a SageMaker endpoint, you have to create a new model object and a new endpoint configuration, then call update_endpoint. This will not change the name of the endpoint.
Comments on your question and the SageMaker docs:
the documentation you mention ("This documentation suggests to store the model tarball with another name in the same S3 folder, and alter the code to invoke the model") is for SageMaker Multi-Model Endpoints, a service to host multiple models in the same endpoint in parallel. This is not what you need. You need a single-model SageMaker endpoint, which you update with a new endpoint configuration via update_endpoint.
also, the API you mention, sm.update_endpoint_weights_and_capacities, is not needed for what you want (unless you want a progressive rollout of the traffic from model 1 to model 2).
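For illustration, a hedged boto3 sketch of that flow; the model, config, and endpoint names plus inference_image_uri and role_arn are placeholders, not values from the question. It registers the uploaded artifact as a new model, creates a config pointing at it, and updates the existing endpoint in place:
import boto3

sm = boto3.client("sagemaker")

# 1. Register the new artifact as a SageMaker model object
sm.create_model(
    ModelName="my-model-v2",
    PrimaryContainer={
        "Image": inference_image_uri,  # same serving container as before
        "ModelDataUrl": "s3://my-bucket/models/model.tar.gz",
    },
    ExecutionRoleArn=role_arn,
)

# 2. Create an endpoint config that references the new model
sm.create_endpoint_config(
    EndpointConfigName="my-endpoint-config-v2",
    ProductionVariants=[{
        "VariantName": "main",
        "ModelName": "my-model-v2",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# 3. Update the existing endpoint in place; the endpoint name stays the same
sm.update_endpoint(
    EndpointName="my-endpoint",
    EndpointConfigName="my-endpoint-config-v2",
)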
Q:
Why does my program get stuck on the function move_down()?
I am making a moving "X in a grid" - learning the keyboard module.
For some reason, when debugging in vscode, my program won't leave the line:
while j < len(board[0]):
if board[i][j] == "X":
Are you seeing what's going on, on the left?
"Thead-8 (process) : PAUSED ON BREAKPOINT"
(next move / step-info press)
"Theard-8 (process) PAUSED ON STEP
What isVscode trying to tell me?
import keyboard
import os
import platform
def clear():
if platform.system() == "Windows": os.system('cls')
if platform.system() == "Darwin" or platform.system() == "Linux": os.system('clear')
board = [["-", "-", "-", "-", "-", "-", ],
["-", "-", "-", "-", "-", "-", ],
["-", "-", "-", "-", "-", "-", ],
["-", "X", "-", "-", "-", "-", ]]
clear()
for x in board:
print(x)
def move_right():
keyboard.send("Backspace") #Remove written character
i = 0
j = 0
while i < len(board):
j = 0
while j < len(board[0]):
if board[i][j] == "X":
if j+1 == len(board[0]): #IF AT THE END, CONTINUE
break
else: #Swap, and exit second loop - to prevent x constantly moving to the end
board[i][j], board[i][j+1] = board[i][j+1], board[i][j]
break
j += 1
i += 1
clear()
for x in board:
print("%s\n" % x)
def move_left():
keyboard.send("Backspace") #Remove written character
i = 0
j = 0
while i < len(board):
j = 0
while j < len(board[0]):
if board[i][j] == "X":
if j == 0: #IF AT THE END, CONTINUE
break
else: #Swap, and exit second loop - to prevent x constantly moving to the end
board[i][j], board[i][j-1] = board[i][j-1], board[i][j]
break
j += 1
i += 1
clear()
for x in board:
print("%s\n" % x)
def move_up():
keyboard.send("Backspace") #Remove written character
i = 0
j = 0
while i < len(board):
j = 0
while j < len(board[0]):
if board[i][j] == "X":
if i == 0: #IF AT THE END, CONTINUE
break
else: #Swap, and exit second loop - to prevent x constantly moving to the end
board[i][j], board[i-1][j] = board[i-1][j], board[i][j]
break
j += 1
i += 1
clear()
for x in board:
print("%s\n" % x)
def move_down():
keyboard.send("Backspace") #Remove written character
i = 0
j = 0
while i < len(board):
j = 0
while j < len(board[0]):
if board[i][j] == "X":
if i == len(board): #IF AT THE END, CONTINUE
break
else: #Swap, and exit BOTH loops, to prevent X moving to the next line, then tracking him again by mistake.
board[i][j], board[i+1][j] = board[i+1][j], board[i][j]
outer_loop_break = True
break
if outer_loop_break == True:
break
else:
j += 1
i += 1
clear()
for x in board:
print("%s\n" % x)
keyboard.add_hotkey("W", lambda: move_up())
keyboard.add_hotkey("A", lambda: move_left())
keyboard.add_hotkey("S", lambda: move_down())
keyboard.add_hotkey("D", lambda: move_right())
keyboard.wait("Esc")
I am not sure if this is a bug or something wrong in my code. For some reason, this doesn't happen with the very similar move_up() function.
A:
You are not changing the while condition:
while j < len(board[0]):
if board[i][j] == "X":
# (...)
This is your whole loop.
j never changes and len(board[0]) never changes, so while the if statement stays false it just loops indefinitely.
In the move_up() function you do change j, with the line j += 1 running on every iteration of the while loop; in move_down() the increment is reached only through the else branch.
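For completeness, a minimal sketch (not from the original answer) of a restructured move_down that initializes the flag before the loops and uses for loops so the indices always advance. Note the original check if i == len(board) can never be true inside while i < len(board); the bottom row is index len(board) - 1, so board[i+1][j] would raise an IndexError there:
def move_down():
    moved = False
    for i in range(len(board) - 1):  # stop one row early so i + 1 stays in bounds
        for j in range(len(board[0])):
            if board[i][j] == "X":
                board[i][j], board[i+1][j] = board[i+1][j], board[i][j]
                moved = True
                break
        if moved:
            break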
Q:
NetworkX find root_node for a particular node in a directed graph
Suppose I have a directed graph G in NetworkX such that:
G has multiple trees in it
Every node N in G has exactly 1 or 0 parents.
For a particular node N1, I want to find the root node of the tree it resides in (its ancestor that has an in-degree of 0). Is there an easy way to do this in NetworkX?
I looked at:
Getting the root (head) of a DiGraph in networkx (Python)
But there are multiple root nodes in my graph. I want only the one root node that happens to be in the same tree as N1.
A:
edit Nov 2017 note that this was written before networkx 2.0 was released. There is a migration guide for updating 1.x code into 2.0 code (and in particular making it compatible for both)
Here's a simple recursive algorithm. It assumes there is at most a single parent. If something doesn't have a parent, it's the root. Otherwise, it returns the root of its parent.
def find_root(G,node):
if G.predecessors(node): #True if there is a predecessor, False otherwise
root = find_root(G,G.predecessors(node)[0])
else:
root = node
return root
If the graph is a directed acyclic graph, this will still find a root, though it might not be the only root, or even the only root ancestor of a given node.
A:
I took the liberty of updating @Joel's script. His original post did not work for me.
def find_root(G,child):
parent = list(G.predecessors(child))
if len(parent) == 0:
print(f"found root: {child}")
return child
else:
return find_root(G, parent[0])
Here's a test:
G = nx.DiGraph(data = [('glu', 'skin'), ('glu', 'bmi'), ('glu', 'bp'), ('glu', 'age'), ('npreg', 'glu')])
test = find_root(G, "age")
age
glu
npreg
found root: npreg
A:
Networkx - 2.5.1
The root/leaf node can be found using the edges.
for node_id in graph.nodes:
if len(graph.in_edges(node_id)) == 0:
print("root node")
if len(graph.out_edges(node_id)) == 0:
print("leaf node")
A:
In case of multiple roots, we can do something like this:
def find_multiple_roots(G, nodes):
    list_roots = []
    for node in nodes:
        predecessors = list(G.predecessors(node))
        if len(predecessors) > 0:
            for predecessor in predecessors:
                # recurse on the parent (the original snippet called
                # find_root here, which expects a bare node, not a list)
                list_roots.extend(find_multiple_roots(G, [predecessor]))
        else:
            list_roots.append(node)
    return list_roots
Usage:
# the node needs to be passed as a list
find_multiple_roots(G, [node])
Warning: This recursive function can explode pretty quickly (the number of recursive calls can grow exponentially with the number of nodes between the current node and its roots), so use it with care.
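An iterative variant avoids recursion depth issues entirely; a minimal sketch using the networkx 2.x API, assuming every node has at most one parent and the graph has no cycles:
def find_root_iterative(G, node):
    # walk up parent links until a node with no predecessors is reached
    while True:
        preds = list(G.predecessors(node))
        if not preds:
            return node
        node = preds[0]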
Q:
Python type annotations with TypeVar that excludes types
I'm trying to use @overload to communicate the different ways of calling a function, but what is easily communicated in the code with a simple else statement is not possible in the type annotations. Without the "else" MyPy (correctly) complains that the overload versions mismatch (see the snippet below for example).
error: Overloaded function signatures 1 and 2 overlap with incompatible return types
Did I understand correctly that there is no good solution for this problem?
eg. here is a simple example:
ListOrTuple = TypeVar("ListOrTuple", List, Tuple)
# unfortunately, typing doesn't support "anything else" at the moment
# https://github.com/python/typing/issues/599#issuecomment-586007066
AnythingElse = TypeVar("AnythingElse")
# what I would like to have is something like AnythingElse= TypeVar("AnythingElse", Not[List,Tuple])
@overload
def as_list(val: ListOrTuple) -> ListOrTuple:
...
@overload
def as_list(val: AnythingElse) -> List[AnythingElse]:
...
def as_list(val):
"""Return list/tuple as is, otherwise wrap in a list
>>> as_list("test")
['test']
"""
return val if isinstance(val, (list, tuple)) else [val]
A:
This is the work-around that I have. It works well enough for me but I don't like it at all.
# attempt to list all the "other" possible types
AnythingElse = TypeVar("AnythingElse", Set, Mapping, type, int, str, None, Callable, Deque, ByteString)
ListOrTuple = TypeVar("ListOrTuple", List, Tuple, Sequence)
@overload
def as_list(val: ListOrTuple) -> ListOrTuple:
...
@overload
def as_list(val: AnythingElse) -> List[AnythingElse]:
...
def as_list(val):
"""Return list/tuple as is, otherwise wrap in a list
>>> as_list("test")
['test']
"""
return val if isinstance(val, (list, tuple)) else [val]
A:
Use bound="Union[List, Tuple]":
from typing import Any, List, Tuple, TypeVar, Union, overload
ListOrTuple = TypeVar("ListOrTuple", bound="Union[List, Tuple]")
AnythingElse = TypeVar("AnythingElse")
@overload
def as_list(val: ListOrTuple) -> ListOrTuple:
pass
@overload
def as_list(val: AnythingElse) -> List[AnythingElse]:
pass
def as_list(val: Any) -> Any:
"""Return list/tuple as is, otherwise wrap in a list
>>> as_list("test")
['test']
"""
return val if isinstance(val, (list, tuple)) else [val]
a = as_list(2) # it's List[int]
b = as_list('2') # it's List[str]
c = as_list(['2', '3']) # it's List[str]
A:
This question is old enough, but this problem arises in multiple questions. This is intended to be a duplicate target, because the question is formulated in generic and reusable way. Other questions asking about overloads with type "X or not X" can be closed as dupes of this.
This problem (expressing type not X or Any \ X) bound to overloads is not planned for any mypy milestone.
First, the solution: just write both overloads as you have done, e.g. (I reduced imports and the overall complexity to simplify the example)
from typing import Any, TypeVar
ListOrTuple = TypeVar("ListOrTuple", list[Any], tuple[Any, ...])
AnythingElse = TypeVar("AnythingElse")
@overload
def as_list(val: ListOrTuple) -> ListOrTuple: ... # type: ignore[misc]
@overload
def as_list(val: AnythingElse) -> list[AnythingElse]: ...
... and this works as intended. The error message about overlapping signatures is more like a lint warning than a real type error [1-2]. All you need is to ignore the error line like in example above. And everything will work great. Almost: see details below.
This is also explained in mypy official docs.
How does it work
In short ([4]):
When overloaded function is called, mypy goes over every overload and checks if there is a match. If arguments passed to function are compatible with types of overload parameters, mypy picks this overload and processes it further as regular function (if TypeVars are involved, things are getting much more difficult - in fact mypy tries to substitute them and picks an overload only if this procedure succeeded). Here we consider A compatible to B if and only if A is a (nominal or structural, non-strict) subtype of B. What does it mean for us? If there are multiple matches, the first one will be used. Thus to say (int) -> str; (float which is not int) -> float we have to define two overloads, (int) -> str; (float) -> float in this order. Are you happy? I'm not, mypy fails to treat int as a float subtype in overloads, which is a reported bug.
When an overloaded definition is parsed and interpreted by mypy, it verifies its formal correctness. If the arguments are not overlapping, the overloads are OK. Otherwise, the narrower the argument types, the narrower the return type should be; if not, mypy complains. Two different errors may be emitted here (numbers may differ with more signatures):
Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader. This is a bad sign: your second overload is just completely ignored. Basically it means that you need to swap the signatures to make both work and to get another message instead:
Overloaded function signatures 1 and 2 overlap with incompatible return types. This is much better! Basically it says "Are you sure that it is what you want? Strictly speaking, without implementation-specific assumptions any of overloads can be picked by PEP484-compatible type checker, plus there are some edge cases!". You can safely say "yes, I mean this!" and ignore the message. But beware (see below section)!
Why is the error message there?
Well, I cheated a bit when saying "everything will be fine" above. The main problem is that you can declare a variable to have a wider type. If the return types are not compatible, then you have actually tweaked the type checker into producing output that does not match run-time behaviour. Like this:
class A: pass
class B(A): pass
@overload
def maybe_ok_1(x: B) -> int: ... # E: Overloaded function signatures 1 and 2 overlap with incompatible return types
@overload
def maybe_ok_1(x: A) -> str: ...
def maybe_ok_1(x): return 0 if isinstance(x, B) else 'A'
# So far, so good:
reveal_type(maybe_ok_1(B())) # N: revealed type is "builtins.int"
reveal_type(maybe_ok_1(A())) # N: revealed type is "builtins.str"
# But this can be dangerous:
# This is `B`, and actual return is `int` - but `mypy` doesn't think so.
x: A = B()
reveal_type(maybe_ok_1(x)) # N: Revealed type is "builtins.str" # Ooops!
Examples
Finally, check out a few examples for some practical view:
class A: pass
class B(A): pass
# Absolutely fine, no overlaps
@overload
def ok_1(x: str) -> int: ...
@overload
def ok_1(x: int) -> str: ...
def ok_1(x): return '1' if isinstance(x, int) else 1
reveal_type(ok_1(1))
reveal_type(ok_1('1'))
# Should use `TypeVar` instead, but this is a synthetic example - forgive me:)
@overload
def ok_2(x: B) -> B: ...
@overload
def ok_2(x: A) -> A: ...
def ok_2(x): return x
reveal_type(ok_2(A())) # N: revealed type is "__main__.A"
reveal_type(ok_2(B())) # N: revealed type is "__main__.B"
# But try to reverse the previous example - it is much worse!
@overload
def bad_1(x: A) -> A: ...
@overload
def bad_1(x: B) -> B: ... # E: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
def bad_1(x): return x
reveal_type(bad_1(B())) # N: Revealed type is "__main__.A" # Oops! Though, still true
reveal_type(bad_1(A())) # N: Revealed type is "__main__.A"
# Now let's make it completely invalid:
@overload
def bad_2(x: A) -> int: ...
@overload
def bad_2(x: B) -> str: ... # E: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
def bad_2(x): return 'B' if isinstance(x, B) else 1
reveal_type(bad_2(B())) # N: Revealed type is "builtins.int" # Oops! The actual return is 'B'
reveal_type(bad_2(A())) # N: Revealed type is "builtins.int"
# Now watch something similar to ok_2, but with incompatible returns (we may want to ignore defn line)
@overload
def maybe_ok_1(x: B) -> int: ... # E: Overloaded function signatures 1 and 2 overlap with incompatible return types
@overload
def maybe_ok_1(x: A) -> str: ...
def maybe_ok_1(x): return 0 if isinstance(x, B) else 'A'
# So far, so good:
reveal_type(maybe_ok_1(B())) # N: revealed type is "builtins.int"
reveal_type(maybe_ok_1(A())) # N: revealed type is "builtins.str"
# But this can be dangerous:
# This is `B`, and actual return is `int` - but `mypy` doesn't think so.
x: A = B()
reveal_type(maybe_ok_1(x)) # N: Revealed type is "builtins.str" # Ooops!
Here's playground link so you can tweak the code to see what can be changed.
References from mypy issue tracker:
Just go on #12759
Just go on #13805
Special cases aren't special enough to break the rules? But they still keep it open...
Read me! This contains a short explanation of overload implementation
Ordering matters!
Q:
Returning a line of a .txt file that has a word with more than 6 characters and starts with "A" in Python
I have a task to accomplish in Python with only one sentence:
I need to return lines of my txt-file that include words which have more than 6 characters and start with the letter "A".
My code is the following:
[line for line in open('test.txt') if line.split().count('A') > 6]
I am not sure how to implement another condition in order to say that my word starts with "A" and has more than 6 characters. That is as far as I got. I thank you for your time.
Greetings
A:
I would split up your for loop so that it's not a list comprehension, to make it easier to understand what's going on. Once you do that, it should be clearer what you're missing so you can assemble it back into a list comprehension.
lines = []
with open('test.txt', 'r') as f:
for line in f: # this line reads each line in the file
add_line = False
for word in line.split():
if (word.startswith('A') and len(word) > 6):
add_line = True
break
if (add_line):
lines.append(line)
This roughly translates to
[line for line in open('test.txt', 'r') if any(len(word) > 6 and word.startswith('A') for word in line.split())]
A:
You should split each line and check each word separately:
[line for line in open('test.txt') if any(word[:1].lower() == 'a' and len(word) > 6 for word in line.split())]
Q:
Getting nested named results from pyparsing
I am modifying the pyparsing fourFn example to accept variables. Evaluation already works; now I want to be able to parse a string and output a list of the required variables. Here's how I would like it to work:
from my_module.parser import FormulaParser
formula = '(x + y) * z'
fp = FormulaParser()
fp.get_variables()
# => ['x', 'y', 'z']
I've added a set_results_name call like so:
ident = Word(alphas, alphanums + "_$").set_results_name('identifier', list_all_matches=True)
Now this works when the formula doesn't contain any nesting:
>>> formula = 'x * y + z'
>>> res = fp.parse_string(formula)
>>> res.identifier
ParseResults(['x', 'y', 'z'], {})
But I can't figure out how to get at the nested results:
>>> formula = '(x * y - (a/b)) + z'
>>> res = fp.parse_string(formula)
>>> res.identifier
ParseResults(['z'], {})
>>> print(res.dump())
[['x', '*', 'y', '-', ['a', '/', 'b']], '+', 'z']
- identifier: ['z']
[0]:
['x', '*', 'y', '-', ['a', '/', 'b']]
- identifier: ['x', 'y']
[0]:
x
[1]:
*
[2]:
y
[3]:
-
[4]:
['a', '/', 'b']
- identifier: ['a', 'b']
[1]:
+
[2]:
z
>>>
I see that all the variables are there at different levels. I could brute-force iterate over everything, but that would duplicate the parser's effort.
A:
There may be a better way but scan_string sort of works:
>>> from pyparsing import alphas, alphanums
>>> identifier = Word(alphas, alphanums + "_$")
>>> formula = '(a * sin(x + y)) / (galaxy - 3)'
>>> myvars = [var[0][0] for var in identifier.scan_string(formula)]
>>> myvars
['a', 'sin', 'x', 'y', 'galaxy']
Problems with this approach:
the [0][0] is a bit ugly
'PI' isn't excluded. e and pi are defined in the grammar as caseless keywords. parse_string correctly excludes them from identifier
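A further option that reuses the parser's own pass: attach a parse action to the ident expression so identifiers are collected during the same parse_string call. This is a sketch, assuming the fourFn-style grammar is built on a single ident expression; since e and pi match their own caseless keywords before ident is tried, they never reach this action. (Parse actions can also fire inside alternatives that later backtrack, so treat the result as a superset in ambiguous grammars.)
from pyparsing import Word, alphas, alphanums

ident = Word(alphas, alphanums + "_$")
seen = set()
ident.add_parse_action(lambda toks: seen.add(toks[0]))
# ... build the rest of the grammar on top of this ident ...
# after parse_string(formula) succeeds, `seen` holds every identifier matched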
Q:
How to make a dictionary from lists
I want to add these two lists into a dictionary
list_id= ['2000391314791P', '2000391314715P', '2000383032443P', '2000387592776P', '2000391314760P', '2000387592813P', '2000383032511P', '2000391314784P', '2000387592738P', '2000387592806P', '2000387592769P', '2000387592790P', '2000387592752P', '2000391314746P', '2000391314777P', '2000391314753P', '2000391314814P', '2000387592783P', '2000383032429P', '2000383032467P', 'MPM00043444018', 'MPM00040888375']
productos= ['APPLE MACBOOK PRO 13,3" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 256GB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 8NUC) / 8GB RAM / 256GB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,3" / CHIP M1 (CPU 8NUC Y GPU 7NUC) / 8GB RAM / 256GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14" / CHIP M1 PRO (CPU 8NUC Y GPU 14NUC) / 16GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 13,3" / CHIP M1 (CPU 8NUC Y GPU 8NUC) / 8GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 13,3" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 256GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 16" / CHIP M1 MAX (CPU 10NUC Y GPU 32NUC) / 32GB RAM / 1TB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / AZUL MEDIANOCHE', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / BLANCO ESTELAR', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 13,3" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,3" / CHIP M1 (CPU 8NUC Y GPU 7NUC) / 8GB RAM / 256GB SSD / COLOR ORO', 'APPLE MACBOOK AIR 13,3" / CHIP M1 (CPU 8NUC Y GPU 8NUC) / 8GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SILVER', 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY']
datos = { id_: { 'name': i for i in productos} for id_ in list_id}
{'2000391314791P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314715P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032443P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592776P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314760P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592813P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032511P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314784P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592738P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592806P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592769P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592790P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592752P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314746P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314777P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314753P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314814P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592783P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032429P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032467P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'MPM00043444018': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'MPM00040888375': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'}}
I need each id mapped to its own product, not the same product for every id, like this:
{'2000391314791P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314715P': {'nombre': 'APPLE MACBOOK AIR 13,6 CHIP M2 CPU 8NUC Y GPU 8NUC 8GB RAM 256GB SSD COLOR PLATA'}
A:
Assuming that list_id and productos are in the same order and have the same number of items, you can simply use enumerate in your dictionary comprehension:
datos = {id_: productos[i] for i, id_ in enumerate(list_id)}
I hope that's what you were looking for.
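If you also want the nested {'nombre': ...} structure shown in the question, zip pairs the two lists directly (same assumption as above: both lists align one-to-one):
datos = {id_: {'nombre': producto} for id_, producto in zip(list_id, productos)}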
|
How to make a dictionary from lists
|
I want to add this two list into a dictionary
list_id= ['2000391314791P', '2000391314715P', '2000383032443P', '2000387592776P', '2000391314760P', '2000387592813P', '2000383032511P', '2000391314784P', '2000387592738P', '2000387592806P', '2000387592769P', '2000387592790P', '2000387592752P', '2000391314746P', '2000391314777P', '2000391314753P', '2000391314814P', '2000387592783P', '2000383032429P', '2000383032467P', 'MPM00043444018', 'MPM00040888375']
productos= ['APPLE MACBOOK PRO 13,3" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 256GB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 8NUC) / 8GB RAM / 256GB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,3" / CHIP M1 (CPU 8NUC Y GPU 7NUC) / 8GB RAM / 256GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14" / CHIP M1 PRO (CPU 8NUC Y GPU 14NUC) / 16GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 13,3" / CHIP M1 (CPU 8NUC Y GPU 8NUC) / 8GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 13,3" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 256GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 1TB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 16" / CHIP M1 MAX (CPU 10NUC Y GPU 32NUC) / 32GB RAM / 1TB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / AZUL MEDIANOCHE', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / BLANCO ESTELAR', 'APPLE MACBOOK AIR 13,6" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 13,3" / CHIP M2 (CPU 8NUC Y GPU 10NUC) / 8GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK PRO 16" / CHIP M1 PRO (CPU 10NUC Y GPU 16NUC) / 16GB RAM / 512GB SSD / COLOR PLATA', 'APPLE MACBOOK AIR 13,3" / CHIP M1 (CPU 8NUC Y GPU 7NUC) / 8GB RAM / 256GB SSD / COLOR ORO', 'APPLE MACBOOK AIR 13,3" / CHIP M1 (CPU 8NUC Y GPU 8NUC) / 8GB RAM / 512GB SSD / GRIS ESPACIAL', 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SILVER', 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY']
datos = { id_: { 'name': i for i in productos} for id_ in lista_id}
{'2000391314791P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314715P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032443P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592776P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314760P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592813P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032511P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314784P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592738P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592806P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592769P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592790P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592752P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314746P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314777P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314753P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314814P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000387592783P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032429P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000383032467P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'MPM00043444018': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'MPM00040888375': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'}}
I need each id mapped to its own product, not the same product for every id, like this:
{'2000391314791P': {'nombre': 'APPLE MACBOOK PRO 14.2 1TB M1 PRO 10C GPU 16C SPACE GREY'},
'2000391314715P': {'nombre': 'APPLE MACBOOK AIR 13,6 CHIP M2 CPU 8NUC Y GPU 8NUC 8GB RAM 256GB SSD COLOR PLATA'}
|
[
"Assuming that list_id and productos are in the same order and have the same number of items, you can simply use enumerate in your dictionary comprehension:\ndatos = {id: productos[i] for i, id in enumerate(list_id)}\n\nI hope that's what you were looking for.\n"
] |
[
0
] |
[] |
[] |
[
"jupyter_notebook",
"python"
] |
stackoverflow_0074567214_jupyter_notebook_python.txt
|
Q:
Columnar permutations in Python
How can I find all the permutations of just the columns in a matrix? For example, if I had a square 6x6 matrix like so:
a b c d e f
1: 75 62 82 85 91 85
2: 64 74 74 82 74 64
3: 85 81 91 83 91 62
4: 91 63 81 75 75 72
5: 81 91 74 74 91 63
6: 91 72 81 64 75 72
All the numbers in each column - abcdef - would stay with that column as it moved through the columnar permutations.
A:
Here's how I represented your data... your data arrives as rows, but the permutation you want is over columns. Here it is as a list of lists, rows first:
row_matrix = [[75, 62, 82, 85, 91, 85],
[64, 74, 74, 82, 74, 64],
[85, 81, 91, 83, 91, 62],
[91, 63, 81, 75, 75, 72],
[81, 91, 74, 74, 91, 63],
[91, 72, 81, 64, 75, 72]]
You can use numpy to easily transpose a 2D array of (rows x columns) --> (columns x rows) by converting the list to a numpy array and applying .T. I'll use that here, but will push things back into Python lists using .tolist() to keep it simpler for you.
import numpy as np
column_matrix = np.array(row_matrix).T.tolist()
Next, you need a list of the column numbers -- 0 through 5 in your case. I used integers instead of calling them "a" through "f" to facilitate indexing below... you can always assign the alpha names to them later (...but if you'd rather keep that labeling around the whole time, you can work with Python dictionaries instead):
columns = list(range(len(column_matrix)))
# print(columns)
# shows --> [0, 1, 2, 3, 4, 5]
For the permutations, there is a built-in library for generating them, as part of itertools. The primary index of the list of lists is now the column numbers, so if we generate the permutations of the column number order, we can build all the 2D matrices from that. After that, you can just transpose them again to get it back to the original row-wise data structure:
from itertools import permutations
all_permutations = []
for perm in permutations(columns):
shuffled_column_matrix = []
for idx in perm:
shuffled_column_matrix.append(column_matrix[idx])
all_permutations.append( np.array(shuffled_column_matrix).T.tolist() )
The above can be done in a slightly more compact way using a list-comprehension:
#all_permutations = []
#for perm in permutations(columns):
# shuffled_column_matrix = np.array( [ column_matrix[idx] for idx in perm ] )
# all_permutations.append( shuffled_column_matrix.T.tolist() )
When it finishes, all_permutations is a list of all the column-wise permutations, represented as row-wise matrices like the input.
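As a quick sanity check (a sketch, using the names defined above), the number of generated matrices should be the factorial of the column count:
import math
assert len(all_permutations) == math.factorial(len(columns))  # 720 for 6 columns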
A:
From an array:
import numpy as np
arr = np.arange(9).reshape(3,3)
For columns:
import itertools
for i in itertools.permutations(arr.T):
i = np.asarray(i).T
For rows (if anyone needs rows instead):
import itertools
for i in itertools.permutations(arr):
i = np.asarray(i)
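To collect every column permutation of arr into a single array instead of iterating (a sketch combining the pieces above):
perms = np.array([np.asarray(p).T for p in itertools.permutations(arr.T)])
print(perms.shape)  # (6, 3, 3) for the 3x3 example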
|
Columnar permutations in Python
|
How can I find all the permutations of just the columns in a matrix? For example, if I had a square 6x6 matrix like so:
a b c d e f
1: 75 62 82 85 91 85
2: 64 74 74 82 74 64
3: 85 81 91 83 91 62
4: 91 63 81 75 75 72
5: 81 91 74 74 91 63
6: 91 72 81 64 75 72
All the numbers in each column - abcdef - would stay with that column as it moved through the columnar permutations.
|
[
"Here's how I represented your data... you seem to want rows, but the permutation you want is in columns. Here it is as a list of lists, of the rows first:\nrow_matrix = [[75, 62, 82, 85, 91, 85],\n [64, 74, 74, 82, 74, 64],\n [85, 81, 91, 83, 91, 62],\n [91, 63, 81, 75, 75, 72],\n [81, 91, 74, 74, 91, 63],\n [91, 72, 81, 64, 75, 72]]\n\nYou can use numpy to easily transpose a 2D array of (rows x columns) --> (columns x rows) by converting the list to a numpy array and applying .T. I'll use that here, but will push things back into Python lists using .tolist() to keep it simpler for you.\nimport numpy as np\n\ncolumn_matrix = np.array(row_matrix).T.tolist()\n\nNext, you need a list of the column numbers -- 0 through 5 in your case. I used integers instead of calling them \"a\" through \"f\" to facilitate indexing below... you can always assign the alpha names to them later (...but if you'd rather keep that labeling around the whole time, you can work with Python dictionaries instead):\ncolumns = list(range(len(column_matrix)))\n\n# print(columns)\n# shows --> [0, 1, 2, 3, 4, 5]\n\nFor the permutations, there is a built-in library for generating them, as part of itertools. The primary index of the list of lists is now the column numbers, so if we generate the permutations of the column number order, we can build all the 2D matrices from that. After that, you can just transpose them again to get it back to the original row-wise data structure:\nfrom itertools import permutations\n\nall_permutations = []\nfor perm in itertools.permutations(columns):\n shuffled_column_matrix = []\n for idx in perm:\n shuffled_column_matrix.append(column_matrix[idx])\n all_permutations.append( np.array(shuffled_column_matrix).T.tolist() )\n\nThe above can be done in a slightly more compact way using a list-comprehension:\n#all_permutations = []\n#for perm in itertools.permutations(columns):\n# shuffled_column_matrix = np.array( [ column_matrix[idx] for idx in perm ] )\n# all_permutations.append( shuffled_column_matrix.T.tolist() ) \n\nWhen it finishes all_permutations is a list of all the column-wise permutations, represented as row-wise matrices similar to the input.\n",
"From an array:\nimport numpy as np\narr = np.arange(9).reshape(3,3)\n\nFor columns:\nimport itertools\n\nfor i in itertools.permutations(arr.T):\n i = np.asarray(i).T\n\nFor rows (if anyone needs rows instead):\nimport itertools\n\nfor i in itertools.permutations(arr):\n i = np.asarray(i)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"matrix",
"multiple_columns",
"permutation",
"python"
] |
stackoverflow_0066226597_matrix_multiple_columns_permutation_python.txt
|
Q:
Python, overloading magic methods, multiple usage for same magic method
Is it possible to overload magic methods, or get a result similar to method overloading in e.g. C#, for magic methods in Python? Or is it simply another handicap of this language and impossible at the moment, as "types" used to be in the past ;)
def __init__(self, x:float, y:float, z:float) -> None:
self.x = x
self.y = y
self.z = z
def __add__(self, other:'Vector') -> 'Vector':
return Vector(self.x + other.x, self.y + other.y, self.z + other.z)
def __add__(self, other:float) -> 'Vector':
return Vector(self.x + other, self.y + other, self.z + other)
I have tried to test it and...
vector_one = Vector(1, 2, 3)
vector_two = Vector(4, 5, 6)
print(vector_one + vector_two)
A:
No, it's not possible to automatically overload, but you can check for type and proceed accordingly:
from typing import Union
class Vector:
# ...
def __add__(self, other: Union['Vector', float]) -> 'Vector':
if type(other) == Vector:
return Vector(self.x + other.x, self.y + other.y, self.z + other.z)
elif type(other) == float:
return Vector(self.x + other, self.y + other, self.z + other)
else:
raise TypeError # or something else
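A more defensive variant (a sketch, not the only approach) uses isinstance, which also accepts ints, and returns NotImplemented so Python can fall back to the reflected operation:
class Vector:
    def __init__(self, x: float, y: float, z: float) -> None:
        self.x, self.y, self.z = x, y, z

    def __add__(self, other):
        if isinstance(other, Vector):
            return Vector(self.x + other.x, self.y + other.y, self.z + other.z)
        if isinstance(other, (int, float)):
            return Vector(self.x + other, self.y + other, self.z + other)
        return NotImplemented  # Python then raises a proper TypeError if nothing handles it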
|
Python, overloading magic methods, multiple usage for same magic method
|
Is it possible to overload magic methods, or get a result similar to method overloading in e.g. C#, for magic methods in Python? Or is it simply another handicap of this language and impossible at the moment, as "types" used to be in the past ;)
def __init__(self, x:float, y:float, z:float) -> None:
self.x = x
self.y = y
self.z = z
def __add__(self, other:'Vector') -> 'Vector':
return Vector(self.x + other.x, self.y + other.y, self.z + other.z)
def __add__(self, other:float) -> 'Vector':
return Vector(self.x + other, self.y + other, self.z + other)
I have tried to test it and...
vector_one = Vector(1, 2, 3)
vector_two = Vector(4, 5, 6)
print(vector_one + vector_two)
|
[
"No, it's not possible to automatically overload, but you can check for type and proceed accordingly:\nfrom typing import Union\n\nclass Vector:\n # ...\n\n def __add__(self, other: Union['Vector', float]) -> 'Vector':\n if type(other) == Vector:\n return Vector(self.x + other.x, self.y + other.y, self.z + other.z)\n elif type(other) == float:\n return Vector(self.x + other, self.y + other, self.z + other)\n else:\n raise TypeError # or something else\n\n"
] |
[
1
] |
[] |
[] |
[
"magic_methods",
"overloading",
"overriding",
"python",
"python_3.x"
] |
stackoverflow_0074567336_magic_methods_overloading_overriding_python_python_3.x.txt
|
Q:
I am trying to create a Dash app for a restaurant directory; the dropdown filter works, I know, but the map doesn't update
I have restaurant name, cusines, lat, long as columns. The map I am trying to show on the HTML page is not updating as per the filter; it shows all the restaurants.
import dash
from dash import dcc
from dash import html
from dash.dependencies import Input, Output
import pandas as pd
import plotly.express as px
app = dash.Dash('app')
df = pd.read_excel('Resturants.XLSX')
app.layout = html.Div([
html.H1('Resturant map by cusine', style={'text-align': 'center'} ),
dcc.Dropdown(id='cusine_dd',
options=[{'label': i, 'value': i} for i in df.cusines.unique()],
value = 'Chinese',
style={'width':'40%'}),
html.Div(id= 'output', children =[] ),
dcc.Graph(id='mapbycusine', figure={})
])
@app.callback(
[Output(component_id='output', component_property='children'),
Output(component_id='mapbycusine', component_property='figure')],
[Input(component_id='cusine_dd', component_property='value')]
)
def update_figure(input_cusine):
#cusine_filter = 'All'
container = f"Currently showing {input_cusine}"
df_copy = df.copy(deep=True)
if input_cusine:
df_copy = df_copy[df_copy['cusines'] == input_cusine]
print(df_copy.head())
fig = px.scatter_mapbox(data_frame=df_copy, lat='Latitude', lon='Longitude', zoom=3, height=300, hover_data= ['cusines'])
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(height=1000, margin={"r":100,"t":100,"l":100,"b":300})
return container, fig
if __name__ == '__main__':
app.run_server(debug=True)
I am checking every step just to make sure the input and output are working: I have used the container updates on the HTML page and printed df_copy after filtering the data as per the input cuisine. The container gives the correct output and df_copy.head() shows only the filtered data in the console, but the map in the Dash app is not updating.
I want the map to show the restaurants for just the selected cuisine
A:
I cannot reproduce your error, but perhaps some of the steps I took in debugging might prove helpful:
First, I took the first 100 rows of a restaurant dataset from kaggle, and then randomly assigned three different cuisine types to each unique restaurant name. Then when creating the fig object with px.scatter_mapbox, I passed the cuisine column to the color argument and a mapping between cuisine and color to the color_discrete_map argument so that the marker colors will be different when you select different cuisine types from the dropdown.
This is the result, with visible changes between the marker locations as well as the marker colors (depending on the dropdown selection). Also, as you mentioned, the print outs of df_copy from inside your update_figure function match the dropdown selections as expected.
As far as I can tell, the only thing fundamentally different between your dashboard and mine would be the data that was used. Maybe it is possible that you have a bunch of duplicate rows that span multiple cuisines (i.e. imagine I took a data set of only Mexican restaurants, then duplicated all of the rows but switched the cuisine to Italian – then if you select Mexican or Italian from the dropdown, the result would be the same because you're plotting all of the same latitudes and longitudes in each case so the figure wouldn't appear to change, and you might not catch this from examining the df_copy print outs).
Can you verify that your data set doesn't contain duplicate latitude and longitude points with different cuisines assigned to the same point?
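One quick diagnostic for that scenario (a sketch, assuming your column names) is to look for coordinates shared by more than one cuisine:
dupes = df[df.duplicated(subset=['Latitude', 'Longitude'], keep=False)]
print(dupes.groupby(['Latitude', 'Longitude'])['cusines'].nunique().sort_values(ascending=False).head())
The full test dashboard follows: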
import dash
from dash import dcc
from dash import html
from dash.dependencies import Input, Output
import numpy as np
import pandas as pd
import plotly.express as px
app = dash.Dash('app')
# df = pd.read_excel('Resturants.XLSX')
## create a small restaurant dataset similar to yours
## downloaded from: https://www.kaggle.com/datasets/khushishahh/fast-food-restaurants-across-us?resource=download
df = pd.read_csv('Fast_Food_Restaurants_US.csv', nrows=100, usecols=['latitude','longitude','name'])
df.rename(columns={'latitude':'Latitude', 'longitude':'Longitude'}, inplace=True)
## randomly assign some cuisine types to unique restaurant names
unique_names = df['name'].unique()
np.random.seed(42)
name_cuisine_map = {
name:cuisine for name,cuisine in
zip(unique_names, np.random.choice(['American','Mexican','Chinese'], size=len(unique_names)))
}
cuisine_color_map = {
'American':'blue',
'Mexican':'green',
'Chinese':'red'
}
df['cusines'] = df['name'].map(name_cuisine_map)
df['color'] = df['cusines'].map(cuisine_color_map)
app.layout = html.Div([
html.H1('Resturant map by cusine', style={'text-align': 'center'} ),
dcc.Dropdown(id='cusine_dd',
options=[{'label': i, 'value': i} for i in df.cusines.unique()],
value = 'Chinese',
style={'width':'40%'}),
html.Div(id= 'output', children =[] ),
dcc.Graph(id='mapbycusine', figure={})
])
@app.callback(
[Output(component_id='output', component_property='children'),
Output(component_id='mapbycusine', component_property='figure')],
[Input(component_id='cusine_dd', component_property='value')]
)
def update_figure(input_cusine):
#cusine_filter = 'All'
container = f"Currently showing {input_cusine}"
df_copy = df.copy(deep=True)
if input_cusine:
df_copy = df_copy[df_copy['cusines'] == input_cusine]
print("\n")
print(df_copy.head())
fig = px.scatter_mapbox(data_frame=df_copy, lat='Latitude', lon='Longitude', zoom=3, height=300, hover_data= ['cusines'], color='cusines', color_discrete_map=cuisine_color_map)
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(height=1000, margin={"r":100,"t":100,"l":100,"b":300})
return container, fig
if __name__ == '__main__':
app.run_server(debug=True)
|
I am trying to create a Dash app for a restaurant directory; the dropdown filter works, I know, but the map doesn't update
|
I have restaurant name, cusines, lat, long as columns. The map I am trying to show on the HTML page is not updating as per the filter; it shows all the restaurants.
import dash
from dash import dcc
from dash import html
from dash.dependencies import Input, Output
import pandas as pd
import plotly.express as px
app = dash.Dash('app')
df = pd.read_excel('Resturants.XLSX')
app.layout = html.Div([
html.H1('Resturant map by cusine', style={'text-align': 'center'} ),
dcc.Dropdown(id='cusine_dd',
options=[{'label': i, 'value': i} for i in df.cusines.unique()],
value = 'Chinese',
style={'width':'40%'}),
html.Div(id= 'output', children =[] ),
dcc.Graph(id='mapbycusine', figure={})
])
@app.callback(
[Output(component_id='output', component_property='children'),
Output(component_id='mapbycusine', component_property='figure')],
[Input(component_id='cusine_dd', component_property='value')]
)
def update_figure(input_cusine):
#cusine_filter = 'All'
container = f"Currently showing {input_cusine}"
df_copy = df.copy(deep=True)
if input_cusine:
df_copy = df_copy[df_copy['cusines'] == input_cusine]
print(df_copy.head())
fig = px.scatter_mapbox(data_frame=df_copy, lat='Latitude', lon='Longitude', zoom=3, height=300, hover_data= ['cusines'])
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(height=1000, margin={"r":100,"t":100,"l":100,"b":300})
return container, fig
if __name__ == '__main__':
app.run_server(debug=True)
I am checking every step just to make sure the input and output are working: I have used the container updates on the HTML page and printed df_copy after filtering the data as per the input cuisine. The container gives the correct output and df_copy.head() shows only the filtered data in the console, but the map in the Dash app is not updating.
I want the map to show the restaurants for just the selected cuisine
|
[
"I cannot reproduce your error, but perhaps some of the steps I took in debugging might prove helpful:\nFirst, I took the first 100 rows of a restaurant dataset from kaggle, and then randomly assigned three different cuisine types to each unique restaurant name. Then when creating the fig object with px.scatter_mapbox, I passed the cuisine column to the color argument and a mapping between cuisine and color to the color_discrete_map argument so that the marker colors will be different when you select different cuisine types from the dropdown.\nThis is the result, with visible changes between the marker locations as well as the marker colors (depending on the dropdown selection). Also, as you mentioned, the print outs of df_copy from inside your update_figure function match the dropdown selections as expected.\n\nAs far as I can tell, the only thing fundamentally different between your dashboard and mine would be the data that was used. Maybe it is possible that you have a bunch of duplicate rows that span multiple cuisines (i.e. imagine I took a data set of only Mexican restaurants, then duplicated all of the rows but switched the cuisine to Italian – then if you select Mexican or Italian from the dropdown, the result would be the same because you're plotting all of the same latitudes and longitudes in each case so the figure wouldn't appear to change, and you might not catch this from examining the df_copy print outs).\nCan you verify that your data set doesn't contain duplicate latitude and longitude points with different cuisines assigned to the same point?\nimport dash\nfrom dash import dcc\nfrom dash import html\nfrom dash.dependencies import Input, Output\n\nimport numpy as np\nimport pandas as pd\nimport plotly.express as px \n\n\napp = dash.Dash('app')\n\n# df = pd.read_excel('Resturants.XLSX')\n\n## create a small restaurant dataset similar to yours\n## downloaded from: https://www.kaggle.com/datasets/khushishahh/fast-food-restaurants-across-us?resource=download\ndf = pd.read_csv('Fast_Food_Restaurants_US.csv', nrows=100, usecols=['latitude','longitude','name'])\ndf.rename(columns={'latitude':'Latitude', 'longitude':'Longitude'}, inplace=True)\n\n## randomly assign some cuisine types to unique restaurant names\nunique_names = df['name'].unique()\nnp.random.seed(42)\nname_cuisine_map = {\n name:cuisine for name,cuisine in \n zip(unique_names, np.random.choice(['American','Mexican','Chinese'], size=len(unique_names)))\n}\ncuisine_color_map = {\n 'American':'blue',\n 'Mexican':'green',\n 'Chinese':'red'\n}\ndf['cusines'] = df['name'].map(name_cuisine_map)\ndf['color'] = df['cusines'].map(cuisine_color_map)\n\napp.layout = html.Div([\n html.H1('Resturant map by cusine', style={'text-align': 'center'} ),\n\n dcc.Dropdown(id='cusine_dd', \n options=[{'label': i, 'value': i} for i in df.cusines.unique()],\n value = 'Chinese',\n style={'width':'40%'}),\n \n html.Div(id= 'output', children =[] ),\n\n dcc.Graph(id='mapbycusine', figure={})\n])\n\n@app.callback(\n [Output(component_id='output', component_property='children'),\n Output(component_id='mapbycusine', component_property='figure')],\n [Input(component_id='cusine_dd', component_property='value')]\n)\ndef update_figure(input_cusine):\n #cusine_filter = 'All'\n container = f\"Currently showing {input_cusine}\"\n df_copy = df.copy(deep=True)\n if input_cusine:\n df_copy = df_copy[df_copy['cusines'] == input_cusine] \n print(\"\\n\")\n print(df_copy.head())\n fig = px.scatter_mapbox(data_frame=df_copy, lat='Latitude', lon='Longitude', 
zoom=3, height=300, hover_data= ['cusines'], color='cusines', color_discrete_map=cuisine_color_map)\n fig.update_layout(mapbox_style=\"open-street-map\")\n fig.update_layout(height=1000, margin={\"r\":100,\"t\":100,\"l\":100,\"b\":300})\n return container, fig\n\nif __name__ == '__main__':\n app.run_server(debug=True)\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"dropdown",
"plotly",
"plotly_dash",
"python"
] |
stackoverflow_0074566463_dataframe_dropdown_plotly_plotly_dash_python.txt
|
Q:
n dimensional tic tac toe finding all possible win conditions or lines in a n dimensional matrix
I am trying to implement an n-dimensional tic tac toe game. To do this I want to be able to see if any player has won yet. Given an n-dimensional matrix of edge size n, is there a way to check all the lines?
I've tried doing this in 2 dimensions and 3 dimensions but don't know how to algorithmically find all the win lines. I hope the below explains the problem in more detail.
So if we are playing 2D tic tac toe as below on a 3x3 board:
0 1 2
3 4 5
6 7 8
The possible lines (tic tac toe win conditions) are:
horizontal and vertical wins:
[0 1 2]
[3 4 5]
[6 7 8]
[0 3 6]
[1 4 7]
[2 5 8]
Diagonal wins:
[0 4 8]
[2 4 6]
If we are playing in 3 dimensions on a 4x4x4 board, the horizontal and vertical lines are:
[0 1 2 3]
[4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]
[ 0 4 8 12]
[16 20 24 28]
[32 36 40 44]
[48 52 56 60]
[ 0 16 32 48]
[ 1 17 33 49]
[ 2 18 34 50]
[ 3 19 35 51]
and there are further diagonal lines (the list is non-exhaustive), e.g.:
[0 20 40 60]
[0 21 42 63]
The first one is an edge diagonal that stays in one slice and the second goes corner to corner. Things like [2 5 8] are also win conditions on the 4x4x4 board because they form a line of length 3 on the board.
Is there a way to find all possible lines that are of length l or longer (horizontal, vertical, and diagonal) on an m-dimensional board of edge size n? n will be larger than l and m will at least be equal to 2.
I am currently using the following code to find all the horizontal and vertical cases:
import numpy as np

def winEvaluation(boardInput):
    dimensions = len(boardInput.shape)
    edgeLength = boardInput.shape[0]  # cubic board assumed
    for i in range(dimensions):
        flatMatrix = np.transpose(boardInput, np.roll(np.arange(dimensions), i)).flatten().reshape(edgeLength**(dimensions-2), edgeLength, edgeLength)
        for array in flatMatrix[0]:
            print(array)
but I don't have a strong grasp of how to find all the diagonals as the dimensionality increases.
A:
Probably done really badly, but here's a solution. It could possibly be modified to any a×b×c×d... board size.
The key insight is that winnability is transitive: if there is a winning sequence between A and B, and between B and C, then there is one between A and C.
Thus we use the disjoint-set structure (this implementation is somewhat memory-inefficient, maybe) and whenever you have the two endpoints of a line you take their union.
from collections import defaultdict
def ticTacToeWins(rowLength, dimensions):
board = [0]
wins = DisjSet(rowLength ** dimensions)
for n in range(dimensions):
winAdd = (rowLength - 1) * len(board)
if 0 < n < rowLength:
for i in range(1, rowLength - 1):
for s, e in temp:
wins.Union(s + i * len(board), e + i * len(board))
increment = (rowLength ** n)
for b in board:
wins.Union(b, winAdd + b)
temp = wins.unionsSoFar.copy()
new = []
for s in range(1, rowLength):
toAdd = s * increment
for i in board:
new.append(i + toAdd)
board.extend(new)
return wins.all_wins(rowLength)
# taken from https://www.geeksforgeeks.org/disjoint-set-data-structures/
class DisjSet:
def __init__(self, n):
self.rank = [1] * n
self.parent = [i for i in range(n)]
self.unionsSoFar = []
def find(self, x):
if (self.parent[x] != x):
self.parent[x] = self.find(self.parent[x])
return self.parent[x]
def Union(self, x, y):
self.unionsSoFar.append((x, y))
xset = self.find(x)
yset = self.find(y)
if xset == yset:
return
if self.rank[xset] < self.rank[yset]:
self.parent[xset] = yset
elif self.rank[xset] > self.rank[yset]:
self.parent[yset] = xset
else:
self.parent[yset] = xset
self.rank[xset] = self.rank[xset] + 1
def all_wins(self, rowLength):
sets = defaultdict(list)
for i, parent in enumerate(self.parent):
sets[parent].append(i)
print(sets)
return [get_win(node1, node2, rowLength) for vals in sets.values() for node2 in vals for node1 in vals if node1 < node2]
def get_win(start, end, rowLength):
increment = (end - start) // (rowLength - 1)
return [start + increment * i for i in range(rowLength)]
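An alternative, more direct approach (a sketch, independent of the disjoint-set solution above) enumerates every start cell and every direction vector in {-1, 0, 1}^m, keeping each line of length l that stays on the board:
from itertools import product

def all_lines(n, m, l):
    # all straight lines of length l on an m-dimensional board with edge size n;
    # cells are coordinate tuples, and opposite directions are deduplicated by sorting
    lines = set()
    for start in product(range(n), repeat=m):
        for d in product((-1, 0, 1), repeat=m):
            if all(v == 0 for v in d):
                continue  # the zero vector is not a direction
            cells = [tuple(s + k * v for s, v in zip(start, d)) for k in range(l)]
            if all(0 <= c < n for cell in cells for c in cell):
                lines.add(tuple(sorted(cells)))
    return lines

print(len(all_lines(3, 2, 3)))  # 8 win lines on the classic 3x3 board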
|
n dimensional tic tac toe finding all possible win conditions or lines in a n dimensional matrix
|
I am trying to implement an n-dimensional tic tac toe game. To do this I want to be able to see if any player has won yet. Given an n-dimensional matrix of edge size n, is there a way to check all the lines?
I've tried doing this in 2 dimensions and 3 dimensions but don't know how to algorithmically find all the win lines. I hope the below explains the problem in more detail.
So if we are playing 2D tic tac toe as below on a 3x3 board:
0 1 2
3 4 5
6 7 8
The possible lines (tic tac toe win conditions) are:
horizontal and vertical wins:
[0 1 2]
[3 4 5]
[6 7 8]
[0 3 6]
[1 4 7]
[2 5 8]
Diagonal wins:
[0 4 8]
[2 4 6]
If we are playing in 3 dimensions on a 4x4x4 board, the horizontal and vertical lines are:
[0 1 2 3]
[4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]
[ 0 4 8 12]
[16 20 24 28]
[32 36 40 44]
[48 52 56 60]
[ 0 16 32 48]
[ 1 17 33 49]
[ 2 18 34 50]
[ 3 19 35 51]
and there are further diagonal lines (the list is non-exhaustive), e.g.:
[0 20 40 60]
[0 21 42 63]
The first one is an edge diagonal that stays in one slice and the second goes corner to corner. Things like [2 5 8] are also win conditions on the 4x4x4 board because they form a line of length 3 on the board.
Is there a way to find all possible lines that are of length l or longer (horizontal, vertical, and diagonal) on an m-dimensional board of edge size n? n will be larger than l and m will at least be equal to 2.
I am currently using the following code to find all the horizontal and vertical cases:
import numpy as np

def winEvaluation(boardInput):
    dimensions = len(boardInput.shape)
    edgeLength = boardInput.shape[0]  # cubic board assumed
    for i in range(dimensions):
        flatMatrix = np.transpose(boardInput, np.roll(np.arange(dimensions), i)).flatten().reshape(edgeLength**(dimensions-2), edgeLength, edgeLength)
        for array in flatMatrix[0]:
            print(array)
but I don't have a strong grasp of how to find all the diagonals as the dimensionality increases.
|
[
"Probably done really badly, but here's a solution. Possibly modifiable to any axbxcxd.. board size.\nThe key insight is that winnability is transitive: if there is a winning sequence between A and B, and B and C, so is there between A and C.\nThus we use the disjoint set structure (this implementation is somewhat memory-inefficient, maybe) and whenever you have two endpoints you take their union.\nfrom collections import defaultdict\n\ndef ticTacToeWins(rowLength, dimensions):\n board = [0]\n wins = DisjSet(rowLength ** dimensions):\n for n in range(dimensions):\n winAdd = (rowLength - 1) * len(board)\n if 0 < n < rowLength:\n for i in range(1, rowLength - 1):\n for s, e in temp:\n wins.Union(s + i * len(board), e + i * len(board))\n increment = (rowLength ** n)\n for b in board:\n wins.Union(b, winAdd + b)\n temp = wins.unionsSoFar.copy()\n new = []\n for s in range(1, rowLength):\n toAdd = s * increment\n for i in board:\n new.append(i + toAdd)\n board.extend(new)\n return wins.all_wins(rowLength)\n\n# taken from https://www.geeksforgeeks.org/disjoint-set-data-structures/\nclass DisjSet:\n def __init__(self, n):\n self.rank = [1] * n\n self.parent = [i for i in range(n)]\n self.unionsSoFar = []\n \n def find(self, x):\n if (self.parent[x] != x):\n self.parent[x] = self.find(self.parent[x])\n return self.parent[x]\n \n def Union(self, x, y):\n self.unionsSoFar.append((x, y))\n xset = self.find(x)\n yset = self.find(y)\n if xset == yset:\n return\n if self.rank[xset] < self.rank[yset]:\n self.parent[xset] = yset\n elif self.rank[xset] > self.rank[yset]:\n self.parent[yset] = xset\n else:\n self.parent[yset] = xset\n self.rank[xset] = self.rank[xset] + 1\n\n def all_wins(self, rowLength):\n sets = defaultdict(list);\n for i, parent in enumerate(self.parent):\n sets[parent].append(i)\n print(sets)\n return [get_win(node1, node2, rowLength) for vals in sets.values() for node2 in vals for node1 in vals if node1 < node2]\n \ndef get_win(start, end, rowLength):\n increment = (end - start) // (rowLength - 1)\n return [start + increment * i for i in range(rowLength)]\n\n\n"
] |
[
0
] |
[] |
[] |
[
"algorithm",
"matrix",
"python"
] |
stackoverflow_0074566532_algorithm_matrix_python.txt
|
Q:
How to use all GPUs in SageMaker real-time inference?
I have deployed a model on real-time inference in a single gpu instance, it works fine.
Now I want to use multiple GPUs to decrease the inference time; what do I need to change in my inference.py to make it work?
Here is some of my code:
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
def model_fn(model_dir):
logger.info("Loading first model...")
model = Model().to(DEVICE)
with open(os.path.join(model_dir, "checkpoint.pth"), "rb") as f:
model.load_state_dict(torch.load(f, map_location=DEVICE)['state_dict'])
model = model.eval()
logger.info("Loading second model...")
model_2 = Model_2()
model_2.to(DEVICE)
checkpoint = torch.load('checkpoint_2.pth', map_location=DEVICE)
model_2.load_state_dict(remove_prefix_state_dict(checkpoint['state_dict']), strict=True)
model_2 = model_2.eval()
logger.info('Done loading models')
return {'first_model': model, 'second_model': model_2}
def input_fn(request_body, request_content_type):
assert request_content_type=='application/json'
url = json.loads(request_body)['url']
save_name = json.loads(request_body)['save_name']
logger.info(f'Image url: {url}')
img = Image.open(requests.get(url, stream=True).raw).convert('RGB')
w, h = img.size
input_tensor = preprocess(img)
input_batch = input_tensor.unsqueeze(0).to(DEVICE)
logger.info('Image ready to predict!')
return {'tensor':input_batch, 'w':w,'h':h,'image':img, 'save_name':save_name}
def predict_fn(input_object, model):
data = input_object['tensor']
logger.info('Generating prediction based on the input image')
model_1 = model['first_model']
model_2 = model['second_model']
d0, d1, d2, d3, d4, d5, d6 = model_1(data)
torch.cuda.empty_cache()
mask = torch.argmax(d0[0], axis=0).cpu().numpy()
mask = np.where(mask==2, 255, mask)
mask = np.where(mask==1, 128, mask)
img = input_object['image']
final_image = Image.fromarray(mask).resize((input_object['w'], input_object['h'])).convert('L')
img = np.array(img)[:,:,::-1]
final_image = np.array(final_image)
image_dict = to_dict(img, final_image)
final_image = model_2_process(model_2, image_dict)
torch.cuda.empty_cache()
return {"final_ouput": final_image, 'image':input_object['image'], 'save_name': input_object['save_name']}
I was thinking that maybe with torch multiprocessing, any tips?
A:
The answer mentioning Torch DDP and DP is not exactly appropriate, since the value of those libraries is to conduct multi-GPU gradient descent (averaging the gradient across GPUs in particular), which, as mentioned in point 1 below, does not happen at inference. Actually, a well-done, optimized inference ideally doesn't even use PyTorch or TensorFlow at all, but instead a prediction-only optimized runtime such as SageMaker Neo, ONNXRuntime or NVIDIA TensorRT, to reduce memory footprint and latency.
1. To infer a single model that fits in a GPU, multi-GPU instances are generally not advised: inference is a share-nothing task, so you can use N single-GPU instances and things are simpler and equally performant.
2. Inference on a multi-GPU host is useful in 2 cases: (1) if you do model-parallel inference (not your case) or (2) if your service inference consists of a graph of models that are calling each other. In that case, the proximity of the various models called in the DAG can reduce latency. That seems to be your situation.
My recommendations are the following:
Try using NVIDIA Triton, which supports those DAG use-cases well and is supported on SageMaker. https://aws.amazon.com/fr/blogs/machine-learning/deploy-fast-and-scalable-ai-with-nvidia-triton-inference-server-in-amazon-sagemaker/
If you want to do things custom, you could try assigning the 2 models to different CUDA device ids in PyTorch. Because CUDA kernels run asynchronously, this could be enough to get some parallelism and a bit of acceleration vs 1 GPU if your models can run in parallel.
I saw multiprocessing used once (with MXNet) to load-balance inference requests across GPUs (in this AWS blog post), but that was for a share-nothing, map-style distribution of batches of inferences. In your case you seem to have a connection between your models, so Triton is probably a better fit.
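For the custom option above, a minimal sketch (assuming two GPUs and the Model/Model_2 classes from the question) pins each model to its own device; inputs must then be moved to the matching device before each forward pass:
import torch
DEVICE_1 = torch.device('cuda:0')
DEVICE_2 = torch.device('cuda:1' if torch.cuda.device_count() > 1 else 'cuda:0')
model = Model().to(DEVICE_1).eval()
model_2 = Model_2().to(DEVICE_2).eval()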
Eventually, if your goal is to reduce latency, there are other ideas:
Fix any CPU bottleneck. Your code seems to have a lot of CPU work (pre-processing, numpy...). Are you sure the GPU is the bottleneck? If CPU is at 80%+, try a large single-GPU G5, such as g5.16xlarge. They are great for computer vision inference.
Use a better GPU: if you are using a P2, P3 or G4dn, try G5 instead.
Optimize code. 2 things to try, depending on the bottleneck:
If you do the inference in Torch, try to avoid doing algebra with Numpy, and do as much as possible with torch tensors on GPU.
If GPU is the bottleneck, try to replace PyTorch with ONNXRuntime or NVIDIA TensorRT.
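If you try the ONNX route, a minimal export sketch (the input shape below is a hypothetical example, not taken from your model) is:
import torch
dummy = torch.randn(1, 3, 224, 224, device=DEVICE)  # hypothetical input shape
torch.onnx.export(model['first_model'], dummy, 'first_model.onnx', opset_version=13)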
A:
You must use torch.nn.DataParallel or torch.nn.parallel.DistributedDataParallel (read "Multi-GPU Examples" and "Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel").
You must call the function by passing at least these three parameters:
module (Module) – module to be parallelized (your model)
device_ids (list of python:int or torch.device) – CUDA devices. For single-device modules, device_ids can contain exactly one device id, which represents the only CUDA device where the input module corresponding to this process resides. Alternatively, device_ids can also be None. For multi-device modules and CPU modules, device_ids must be None. When device_ids is None for both cases, both the input data for the forward pass and the actual module must be placed on the correct device. (default: None)
output_device (int or torch.device) – Device location of output for single-device CUDA modules. For multi-device modules and CPU modules, it must be None, and the module itself dictates the output location. (default: device_ids[0] for single-device modules)
for example:
from torch.nn.parallel import DistributedDataParallel
model = DistributedDataParallel(model, device_ids=[i], output_device=i)  # i = this process's local GPU rank
|
How to use all GPUs in SageMaker real-time inference?
|
I have deployed a model on real-time inference in a single gpu instance, it works fine.
Now I want to use multiple GPUs to decrease the inference time; what do I need to change in my inference.py to make it work?
Here is some of my code:
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
def model_fn(model_dir):
logger.info("Loading first model...")
model = Model().to(DEVICE)
with open(os.path.join(model_dir, "checkpoint.pth"), "rb") as f:
model.load_state_dict(torch.load(f, map_location=DEVICE)['state_dict'])
model = model.eval()
logger.info("Loading second model...")
model_2 = Model_2()
model_2.to(DEVICE)
checkpoint = torch.load('checkpoint_2.pth', map_location=DEVICE)
model_2.load_state_dict(remove_prefix_state_dict(checkpoint['state_dict']), strict=True)
model_2 = model_2.eval()
logger.info('Done loading models')
return {'first_model': model, 'second_model': model_2}
def input_fn(request_body, request_content_type):
assert request_content_type=='application/json'
url = json.loads(request_body)['url']
save_name = json.loads(request_body)['save_name']
logger.info(f'Image url: {url}')
img = Image.open(requests.get(url, stream=True).raw).convert('RGB')
w, h = img.size
input_tensor = preprocess(img)
input_batch = input_tensor.unsqueeze(0).to(DEVICE)
logger.info('Image ready to predict!')
return {'tensor':input_batch, 'w':w,'h':h,'image':img, 'save_name':save_name}
def predict_fn(input_object, model):
data = input_object['tensor']
logger.info('Generating prediction based on the input image')
model_1 = model['first_model']
model_2 = model['second_model']
d0, d1, d2, d3, d4, d5, d6 = model_1(data)
torch.cuda.empty_cache()
mask = torch.argmax(d0[0], axis=0).cpu().numpy()
mask = np.where(mask==2, 255, mask)
mask = np.where(mask==1, 128, mask)
img = input_object['image']
final_image = Image.fromarray(mask).resize((input_object['w'], input_object['h'])).convert('L')
img = np.array(img)[:,:,::-1]
final_image = np.array(final_image)
image_dict = to_dict(img, final_image)
final_image = model_2_process(model_2, image_dict)
torch.cuda.empty_cache()
return {"final_ouput": final_image, 'image':input_object['image'], 'save_name': input_object['save_name']}
I was thinking that maybe with torch multiprocessing, any tips?
|
[
"The answer mentioning Torch DDP and DP is not exactly appropriate since the value of those libraries is to conduct multi-GPU gradient descent (averaging the gradient inter-GPU in particular), which, as mentioned in 1., does not happen at inference. Actually, a well-done, optimized inference ideally doesn't even use PyTorch or TensorFlow at all, but instead a prediction-only optimized runtime such as SageMaker Neo, ONNXRuntime or NVIDIA TensorRT, to reduce memory footprint and latency.\nto infer a single model that fits in a GPU, multi-GPU instances are generally not advised: inference is a share-nothing task, so that you can use N single-GPU instance and things are simpler and equally performant.\nInference on Multi-GPU host is useful in 2 cases: (1) if you do model parallel inference (not your case) or (2) if your service inference consists of a graph of models that are calling each other. In which case, the proximity of the various models called in the DAG can reduce latency. That seems to be your situation\nMy recommendations are the following:\n\nTry using NVIDIA Triton, that supports well those DAG use-cases and is supported on SageMaker. https://aws.amazon.com/fr/blogs/machine-learning/deploy-fast-and-scalable-ai-with-nvidia-triton-inference-server-in-amazon-sagemaker/\n\nIf you want to do things custom, you could try assigning the 2 models to different cuda device id in PyTorch. Because cuda kernels are run asynchronously this could be enough to have some parallelism and a bit of acceleration vs 1 GPU if your models can run parallel\n\n\nI saw multiprocessing used once (with MXNet) to load-balance inference requests across GPUs (in this AWS blog post) but it was for share-nothing, map-style distribution of batches of inferences. In your case you seem to have to connection between your model so Triton is probably a better fit.\nEventually, if your goal is to reduce latency, there are other ideas:\n\nFix any CPU bottleneck Your code seem to have a lot of CPU work (pre-processing, numpy...). Are you sure GPU is the bottleneck? If CPU is at 80%+, try large single-GPU G5, such as G5.16xlarge. They are great for computer vision inference\n\nUse a better GPU if you are using a P2, P3 or G4dn, try G5 instead\n\nOptimize code. 2 things to try, depending on the bottleneck:\n\nIf you do the inference in Torch, try to avoid doing algebra with Numpy, and do as much as possible with torch tensors on GPU.\nIf GPU is the bottleneck, try to replace PyTorch by ONNXRuntime or NVIDIA TensorRT.\n\n\n\n",
"You must use torch.nn.DataParallel or torch.nn.parallel.DistributedDataParallel (read \"Multi-GPU Examples\" and \"Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel\").\nYou must call the function by passing at least these three parameters:\n\nmodule (Module) – module to be parallelized (your model)\ndevice_ids (list of python:int or torch.device) – CUDA devices.\n\n\nFor single-device modules, device_ids can contain\nexactly one device id, which represents the only CUDA device where the\ninput module corresponding to this process resides. Alternatively,\ndevice_ids can also be None.\nFor multi-device modules and CPU\nmodules, device_ids must be None.\nWhen device_ids is None for both cases, both the input data for the\nforward pass and the actual module must be placed on the correct\ndevice. (default: None)\n\n\noutput_device (int or torch.device) – Device location of output for single-device CUDA modules.\n\nFor multi-device modules and CPU modules, it must be None, and the module itself dictates the output location. (default: device_ids[0] for single-device modules)\n\n\nfor example:\nfrom torch.nn.parallel import DistributedDataParallel\nmodel = DistributedDataParallel(model, device_ids=[i], output_device=i)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"amazon_sagemaker",
"machine_learning",
"python"
] |
stackoverflow_0074436974_amazon_sagemaker_machine_learning_python.txt
|
Q:
No module named 'graphviz' in Jupyter Notebook
I tried to draw a decision tree in Jupyter Notebook this way.
mglearn.plots.plot_animal_tree()
But I didn't make it in the right way and got the following error message.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-65-45733bae690a> in <module>()
1
----> 2 mglearn.plots.plot_animal_tree()
~\Desktop\introduction_to_ml_with_python\mglearn\plot_animal_tree.py in plot_animal_tree(ax)
4
5 def plot_animal_tree(ax=None):
----> 6 import graphviz
7 if ax is None:
8 ax = plt.gca()
ModuleNotFoundError: No module named 'graphviz'
So I downloaded Graphviz Windows Packages and installed it.
And I added the install path (C:\Program Files (x86)\Graphviz2.38\bin) to the USER PATH and (C:\Program Files (x86)\Graphviz2.38\bin\dot.exe) to the SYSTEM PATH.
And restarted my PC. But it didn't work. I still can't get it working.
So I searched over the internet and got another solution that, I can add the PATH in my code like this.
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin'
But it didn't work.
So I do not know how to figure it out now.
I use the Python 3.6 integrated into Anaconda3.
And I ALSO tried installing graphviz via PIP like this.
pip install graphviz
BUT it still doesn't work.
Hope someone can help me, sincerely.
A:
In Anaconda, install
python-graphviz
pydot
This will fix your problem
A:
As @grrr answered above, here is the code:
conda install -c anaconda python-graphviz
conda install -c anaconda pydot
A:
If your operating system is Ubuntu, I recommend trying the command:
sudo apt-get install -y graphviz libgraphviz-dev
A:
I know the question has already been answered, but for future readers: I came here with the same Jupyter notebook issue; after installing python-graphviz and pydot I still had the same issue. Here's what worked for me: make sure that the Python version of your terminal matches that of the Jupyter notebook, so run this both in Python in your terminal and then in your Jupyter notebook. If you are using a conda environment, load the environment before checking the Python version.
import sys
print(sys.version)
If they do not match, e.g. Python 3.6.x vs Python 3.7.x, then give your Jupyter notebook the ability to find the version of Python you want.
conda install nb_conda_kernels
conda install ipykernel
and if you are using a conda environment,
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
where myenv is the name of your environment. Then go into your jupyter notebook, and in kernel -> change kernel, select the correct version of python. Fixed the issue!
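A quick way to confirm, from inside a notebook cell, which interpreter the kernel is actually running (a sketch):
import sys
print(sys.executable)  # interpreter behind this kernel; compare with the one in your terminal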
A:
I installed the graphviz package using conda. However, I kept getting "module not found" errors even after restarting the kernel multiple times.
Then following the suggestions on this page, I even installed "PyDot", however, it didn't really help. Finally, I installed the package using
pip install graphviz
and finally I can import it.
|
No module named 'graphviz' in Jupyter Notebook
|
I tried to draw a decision tree in Jupyter Notebook this way.
mglearn.plots.plot_animal_tree()
But I didn't make it in the right way and got the following error message.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-65-45733bae690a> in <module>()
1
----> 2 mglearn.plots.plot_animal_tree()
~\Desktop\introduction_to_ml_with_python\mglearn\plot_animal_tree.py in plot_animal_tree(ax)
4
5 def plot_animal_tree(ax=None):
----> 6 import graphviz
7 if ax is None:
8 ax = plt.gca()
ModuleNotFoundError: No module named 'graphviz'
So I downloaded Graphviz Windows Packages and installed it.
And I added the install path (C:\Program Files (x86)\Graphviz2.38\bin) to the USER PATH and (C:\Program Files (x86)\Graphviz2.38\bin\dot.exe) to the SYSTEM PATH.
And restarted my PC. But it didn't work. I still can't get it working.
So I searched over the internet and got another solution that, I can add the PATH in my code like this.
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin'
But it didn't work.
So I do not know how to figure it out now.
I use the Python 3.6 integrated into Anaconda3.
And I ALSO tried installing graphviz via PIP like this.
pip install graphviz
BUT it still doesn't work.
Hope someone can help me, sincerely.
|
[
"in Anaconda install \n\npython-graphviz\npydot\n\nThis will fix your problem\n",
"As @grrr answered above, here is the code:\nconda install -c anaconda python-graphviz\n\nconda install -c anaconda pydot\n\n",
"In case if your operation system is Ubuntu I recommend to try command: \nsudo apt-get install -y graphviz libgraphviz-dev\n\n",
"I know the question has already been answered, but for future readers, I came here with the same jupyter notebook issue; after installing python-graphviz and pydot I still had the same issue. Here's what worked for me: Make sure that the python version of your terminal matches that of the jupyter notebook, so run this both in python in your terminal and then in your juypter notebook. If you are using a conda environment, load the environment before checking the python version.\nimport sys\nprint(sys.version)\nIf they do not match, i.e. python 3.6.x vs python 3.7.x then give your jupyter notebook the ability to find the version of python you want.\nconda install nb_conda_kernels\nconda install ipykernel\nand if you are using a conda environment,\npython -m ipykernel install --user --name myenv--display-name \"Python (myenv)\"\nwhere myenv is the name of your environment. Then go into your jupyter notebook, and in kernel -> change kernel, select the correct version of python. Fixed the issue!\n",
"I installed the grphviz package using conda. However, I kept getting \"module not found error\" even after restart kernel multiple times.\nThen following the suggestions on this page, I even installed \"PyDot\", however, it didn't really help. Finally, I installed the package using\npip install graphviz\n\nand finally I can import it.\n"
] |
[
62,
13,
5,
0,
0
] |
[] |
[] |
[
"graphviz",
"jupyter_notebook",
"python"
] |
stackoverflow_0052566756_graphviz_jupyter_notebook_python.txt
|
Q:
How to get indices of rows in a given column in which a value from a different given column appears?
I have two columns. The first one is longer and has multiple values such as:
0 'A'
1 'B'
2 'B'
3 'C'
4 'A'
5 'A'
All the values in the first column are listed in the second column:
0 'A'
1 'B'
2 'C'
The result I want is a list/series/column (whatever) of the indices of the values from the first column in the second column, as such:
0 'A' 0
1 'B' 1
2 'B' 1
3 'C' 2
4 'A' 0
5 'A' 0
Edit: in the actual problem the number of values is very large, so listing them by hand is not an option
A:
Working with pandas Series:
new_s = s1.map(dict(zip(s2, s2.index)))
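A self-contained example of that mapping (a sketch using the data from the question):
import pandas as pd

s1 = pd.Series(['A', 'B', 'B', 'C', 'A', 'A'])
s2 = pd.Series(['A', 'B', 'C'])
print(s1.map(dict(zip(s2, s2.index))))
# 0    0
# 1    1
# 2    1
# 3    2
# 4    0
# 5    0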
A:
If both columns are Dataframes:
try this:
mapper = dict(df2.reset_index().values[:, ::-1])
out = df1.assign(result=df1.replace(mapper))
print(out)
|
How to get indices of rows in a given column in which a value from a different given column appears?
|
I have two columns. The first one is longer and has multiple values such as:
0 'A'
1 'B'
2 'B'
3 'C'
4 'A'
5 'A'
All the values in the first column are listed in the second column:
0 'A'
1 'B'
2 'C'
The result I want is a list/series/column (whatever) of the indices of the values from the first column in the second column, as such:
0 'A' 0
1 'B' 1
2 'B' 1
3 'C' 2
4 'A' 0
5 'A' 0
Edit: in the actual problem the number of values is very large, so listing them by hand is not an option
|
[
"Working with pandas Series:\nnew_s = s1.map(dict(zip(s2, s2.index)))\n\n",
"If both columns are Dataframes:\ntry this:\nmaper = dict(df2.reset_index().values[:, ::-1])\nout = df1.assign(result=df1.replace(maper))\nprint(out)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074567334_dataframe_pandas_python.txt
|
Q:
Why are all Python packages suddenly gone?
Today I wanted to run a (self-written) Python script on my OSX laptop, but all of a sudden, all the imports returned an ImportError. The script was running fine about a month ago, and in the meantime I didn't change anything about Python. Furthermore, I'm sure that I didn't use a virtualenv back then.
So I just started reinstalling all the packages again (even pip needed a reinstall). I also need OpenCV, for which I run brew install opencv3, but that gives me:
Warning: homebrew/science/opencv3 3.2.0 is already installed
even though I still can't import it in Python:
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cv2
I can of course uninstall and reinstall OpenCV, but it really makes me wonder; how can this have happened? What could possibly erase all Python packages?
All tips are welcome!
EDIT
Ok, I just found out that I was previously using the Python installed by brew, but that the python command somehow links back to /usr/bin/python instead of /usr/local/Cellar/python/2.7.13_1/bin/python2, in which all packages are still installed correctly.
So to link python back to the brew version I ran brew unlink python && brew link python, but which python still refers to /usr/bin/python
Which brilliant soul can guide me back to using the brew Python?
EDIT2
I just checked out this list of suggestions to link python to the brew version again, but nothing seems to work. Let me show you what I did:
$ echo $PATH
/usr/local/opt/opencv3/bin:/opt/local/bin:/opt/local/sbin:/usr/local/heroku/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/bin:/Users/hielke/Library/Android/sdk:/Users/hielke/Library/Android/sdk/tools:/Users/hielke/Library/Android/sdk/platform-tools:/usr/local/mysql/bin:/Users/hielke/.composer/vendor/bin
# which shows `/usr/local/bin` before `/usr/bin`
$ brew link --overwrite python
Warning: Already linked: /usr/local/Cellar/python/2.7.13_1
To relink: brew unlink python && brew link python
$ which python
/usr/bin/python # <= STILL RUNNING THE SYSTEM PYTHON
$ brew unlink python && brew link python
Unlinking /usr/local/Cellar/python/2.7.13_1... 26 symlinks removed
Linking /usr/local/Cellar/python/2.7.13_1... 26 symlinks created
$ which python
/usr/bin/python # <= STILL RUNNING THE SYSTEM PYTHON
$ cat /etc/paths
/usr/local/bin
/usr/bin # THIS SEEMS TO BE CORRECT
/bin
/usr/sbin
/sbin
I then restarted the terminal, but which python still gives me /usr/bin/python.
So then I restarted the whole OS, but frustratingly which python still gives me /usr/bin/python.
Who can get me out of this brew mess?!
A:
Ok, after a lot of messing around, I found that the folder /usr/local/Cellar/python/2.7.13_1/bin/ didn't contain a symlink called python, just python2 and python2.7.
So finally I solved it by creating a new symlink in /usr/local/Cellar/python/2.7.13_1/bin/ like this:
ln -s ../Frameworks/Python.framework/Versions/2.7/bin/python python
After that I ran
brew unlink python && brew link python
which solved all my problems.
Thanks for your attention and continuous inspiration!
ps. Although this was a solution to my troubles, I'm still unsure how this could have happened. If anybody can enlighten me that is of course still very welcome!
A:
In my case the installed modules seemed to disappear because macOS installed a new minor version and the python3 symlink was updated to point to that new version.
This can be checked by running: ls -laFG /usr/local/bin. As you can see, python3 is pointing to v3.11:
...which is the new version without any modules installed:
However, by explicitly pointing to the old version, we can see all the modules are still there:
|
Why are all Python packages suddenly gone?
|
Today I wanted to run a (self-written) Python script on my OSX laptop, but all of a sudden, all the imports returned an ImportError. The script was running fine about a month ago, and in the meantime I didn't change anything about Python. Furthermore, I'm sure that I didn't use a virtualenv back then.
So I just started reinstalling all the packages again (even pip needed a reinstall). I also need OpenCV, for which I run brew install opencv3, but that gives me:
Warning: homebrew/science/opencv3 3.2.0 is already installed
even though I still can't import it in Python:
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cv2
I can of course uninstall and reinstall OpenCV, but it really makes me wonder; how can this have happened? What could possibly erase all Python packages?
All tips are welcome!
EDIT
Ok, I just found out that I was previously using the Python installed by brew, but that the python command somehow links back to /usr/bin/python instead of /usr/local/Cellar/python/2.7.13_1/bin/python2, in which all packages are still installed correctly.
So to link python back to the brew version I ran brew unlink python && brew link python, but which python still refers to /usr/bin/python
Which brilliant soul can guide me back to using the brew Python?
EDIT2
I just checked out this list of suggestions to link python to the brew version again, but nothing seems to work. Let me show you what I did:
$ echo $PATH
/usr/local/opt/opencv3/bin:/opt/local/bin:/opt/local/sbin:/usr/local/heroku/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/bin:/Users/hielke/Library/Android/sdk:/Users/hielke/Library/Android/sdk/tools:/Users/hielke/Library/Android/sdk/platform-tools:/usr/local/mysql/bin:/Users/hielke/.composer/vendor/bin
# which shows `/usr/local/bin` before `/usr/bin`
$ brew link --overwrite python
Warning: Already linked: /usr/local/Cellar/python/2.7.13_1
To relink: brew unlink python && brew link python
$ which python
/usr/bin/python # <= STILL RUNNING THE SYSTEM PYTHON
$ brew unlink python && brew link python
Unlinking /usr/local/Cellar/python/2.7.13_1... 26 symlinks removed
Linking /usr/local/Cellar/python/2.7.13_1... 26 symlinks created
$ which python
/usr/bin/python # <= STILL RUNNING THE SYSTEM PYTHON
$ cat /etc/paths
/usr/local/bin
/usr/bin # THIS SEEMS TO BE CORRECT
/bin
/usr/sbin
/sbin
I then restarted the terminal, but which python still gives me /usr/bin/python.
So then I restarted the whole OS, but frustratingly which python still gives me /usr/bin/python.
Who can get me out of this brew mess?!
|
Answer scores: [2, 0]
Tags: easy_install, opencv, package, pip, python
Source: stackoverflow_0045257009_easy_install_opencv_package_pip_python.txt
|
Q:
Problem with Unalignable boolean Series with two columns filter
In this case I am working with 2 columns taken from 2 DataFrames. The columns are ["# Externo", "Nro Envio ML"]
My goal is to receive the numbers that exist in "# Externo" but do not exist in "Nro Envio ML", i.e. only the number or numbers that fulfill that condition.
To take a look what I am talking about:
dfn.info()
dfn
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1143 entries, 0 to 2151
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 # Externo 404 non-null object
1 Nro Envio ML 894 non-null object
dtypes: object(2)
memory usage: 26.8+ KB
# Externo Nro Envio ML
0 41764660663 NaN
1 41765189264 NaN
2 41765105927 NaN
3 41765931626 NaN
4 41766474810 NaN
... ... ...
2143 NaN 41768876815
2146 NaN 41784067107
2147 NaN 41784051958
2149 NaN 41785977098
2151 NaN 41796142562
1143 rows × 2 columns
# Externo True
Nro Envio ML True
dtype: bool
I am expecting to receive 41764660663 if it exists in column "# Externo" and not in "Nro Envio ML".
This is What I tried:
df1 = df1[df1['Unnamed: 26'] == 'Flex']
df2= pd.concat([df, df1], axis=1)
df2
import numpy as np
df2['Nro Envio ML']=df2['Unnamed: 13']
dfn=df2[["# Externo","Nro Envio ML"]]
print(dfn.notnull().any(axis=0))
dfn= dfn.loc[:,dfn.notnull().any(axis=0)]
print(dfn)
print(dfn.dropna(axis=1,how='all'))
dfn.loc[~df['# Externo'].isin(dfn['Nro Envio ML'].tolist())]
The error I receive:
IndexingError Traceback (most recent call last)
<ipython-input-144-54d975e5ad81> in <module>
3 print(dfn)
4 print(dfn.dropna(axis=1,how='all'))
----> 5 dfn.loc[~df['# Externo'].isin(dfn['Nro Envio ML'].tolist())]
3 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in check_bool_indexer(index, key)
2387 if mask.any():
2388 raise IndexingError(
-> 2389 "Unalignable boolean Series provided as "
2390 "indexer (index of the boolean Series and of "
2391 "the indexed object do not match)."
IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
Here the result is not filtered:
dfn3= dfn.loc[~ dfn['# Externo'].isin(dfn['Nro Envio ML']),'# Externo'].values
dfn3
array(['41764660663', '41765189264', '41765105927', '41765931626',
'41766474810', '41766693570', '41767023186', '41766664967',
'41765527475', '41766933520', '41758431387', '41767065141',
'41766834461', '41763758747', '41767007000', '41764139836',
'41767128958', '41767958453', '41768439109', '41767519460',
'41768746394', '41767537504', '41768245931', '41768435988',
'41768710593', '41767850751', '41769343996', '41768163019',
'41767792365', '41769430226', '41769362435', '41767613260',
'41767399871', '41769237788', '41769335922', '41768743591',
'41768970216', '41768972816', '41767801455', '41767351959',
'41768856005', '41769069211', '41768960289', '41768876815',
'41768796242', '41768606054', '41769594117', '41768301217',
'41769316065', '41769275644', '41768747851', '41768992109',
'41767973684', '41768588967', '41769021462', '41768655275',
'41769195649', '41771323517', '41770997916', '41770624787',
'41771124135', '41767953692', '41771990757', '41771503073',
'41771518432', '41770587159', '41771770302', '41771264986',
'41770622684', '41771712719', '41770043750', '41769920549',
'41771890393', '41771093881', '41770335018', '41769851289',
'41769691702', '41770178002', '41770083356', '41771478219',
'41771689312', '41770310781', '41770503120', '41771320102',
'41770872304', '41772333923', '41773077420', '41774107375',
'41774470025', '41772354195', '41774278154', '41774516055',
'41764063012', '41773238895', '41770358839', '41773410325',
'41772497677', '41772207643', '41774095335', '41774540961',
'41773924133', '41772759005', '41772493934', '41773676496',
'41772632879', '41772582155', '41772586341', '41772592180',
'41774973883', '41775140655', '41775320466', '41775999294',
'41775447715', '41776040324', '41774633931', '41775257392',
'41775471162', '41771934549', '41775499496', '41774856789',
'41775136607', '41775410928', '41776142924', '41776094067',
'41775191189', '41775749633', '41775907614', '41774792841',
'41776033160', '41775490223', '41778623933', '41777644508',
'41780014741', '41778994962', '41777323701', '41776219972',
'41780552222', '41777798847', '41779796901', '41780799923',
'41780472850', '41772305897', '41780180889', '41780555214',
'41778280294', '41780767290', '41779889603', '41780667613',
'41778248797', '41778766814', '41780236744', '41779887066',
'41776670687', '41777525040', '41780960139', 'nan', '41777644374',
'41779923800', '41777002840', '41777753678', '41778182378',
'41776301694', '41779886597', '41779667714', '41781000946',
'41777189468', '41780087137', '41780155654', '41780775906',
'41778329111', '41783067184', '41782721889', '41781632703',
'41783780618', '41783873395', '41783998100', '41783931503',
'41782490708', '41778620781', '41776593233', '41783231988',
'41782256463', '41783528314', '41782914027', '41784027619',
'41781822829', '41784004699', '41783211341', '41784033505',
'41782545928', '41784051958', '41781766311', '41783040125',
'41783951875', '41784068580', '41783813820', '41783067755',
'41783016716', '41784060487', '41783803363', '41782531020',
'41781388743', '41785977098', '41786030848', '41786287968',
'41784805290', '41786552267', '41786879966', '41786460175',
'41786610058', '41785551493', '41786710599', '41786958316',
'41781724264', '41785445012', '41786594197', '41785477465',
'41786482621', '41784728916', '41786163574', '41785240433',
'41784798439', '41786406137', '41786330557', '41787005790',
'41786634121', '41786210955', '41784198119', '41786024295',
'41785069315', '41782349052', '41786708909', '41788240277',
'41788955033', '41789046308', '41784596066', '41788063455',
'41787694599', '41789136771', '41787403317', '41787409226',
'41789241747', '41787555666', '41787430932', '41787309404',
'41788910204', '41787568748', '41789414846', '41788177940',
'41789528530', '41789382342', '41789803654', '41788514458',
'41784831727', '41787377624', '41787828042', '41789205824',
'41789308552', '41789288899', '41789701434', '41787553674',
'41787681573', '41789442389', '41789190629', '41780044631',
'41789895907', '41788809900', '41789122350', '41788438919',
'41787977304', '41788642761', '41789281426', '41791796789',
'41791686344', '41790988049', '41787229497', '41790708372',
'41791150645', '41790453941', '41791020142', '41790384927',
'41790434960', '41791900221', '41791780863', '41792045890',
'41789979877', '41790213389', '41792328962', '41791367184',
'41791135752', '41792275060', '41791890035', '41792546856',
'41791884595', '41790134693', '41792095927', '41790458720',
'41791526022', '41792143565', '41791680878', '41790832413',
'41792463288', '41791972322', '41791084950', '41791591750',
'41792018279', '41791891437', '41790340322', '41792490749',
'41791949185', '41792273084', '41792942400', '41793195303',
'41793116161', '41793560497', '41793420765', '41793390721',
'41792995107', '41792853373', '41794017254', '41792829460',
'41794146341', '41794097400', '41793917806', '41793085795',
'41793153713', '41793479285', '41793672321', '41794163188',
'41792913806', '41795638686', '41796745322', '41796518007',
'41796793000', '41795214845', '41796220240', '41796073319',
'41796781702', '41795312941', '41797871757', '41797732193',
'41796831262', '41798441839', '41792712332', '41794174553',
'41798690031', '41798308119', '41798875026', '41798261237',
'41796142562', '41794298123', '41798116617', '41798838185',
'41798387675', '41794457006', '41797766954', '41798516007',
'41797807112', '41797868790', '41797073652', '41798109141',
'41797925241', '41798587922', '41798206365', '41795797834',
'41798921136', '41798844409', '41797860445', '41798137866',
'41798816124', '41794976940', '41795115092', '41794826346',
'41798335167', '41797220545', '41797338131', '41798519643',
'41798503487', '41796975986', '41796122923', '41797229414',
'41799541385', '41800565067', '41801544241', '41800619941',
'41800606765', '41801602923', '41800814367', '41799433986',
'41800528875', '41798885157', '41799587807', '41800708489',
'41799422642', '41801323370', '41799993602', '41800526158',
'41801058190', '41799946619', '41800698887', '41801171856',
'41801569361', '41800715567', '41800154420'], dtype=object)
I also tried this, without luck:
newdf = dfn.drop_duplicates(
subset = ['Nro Envio ML', '# Externo'],
keep = 'last').reset_index(drop = True)
newdf
A:
I think you were quite close. Does this achieve what you're trying to do?
dfn['# Externo'][~dfn['# Externo'].isin(dfn['Nro Envio ML'])].dropna().tolist()
It returns all non-NaN values in the '# Externo' column that are not in the 'Nro Envio ML' column as a list.
I think the IndexingError you received may have been due to using df instead of dfn in the last line of your code, but I'm not 100% sure because df is not defined in your snippet.
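To make the behaviour concrete, here is a minimal self-contained sketch of the same pattern on toy data (column names copied from the question, values shortened):
import numpy as np
import pandas as pd

dfn = pd.DataFrame({
    '# Externo':    ['41764660663', '41765189264', np.nan],
    'Nro Envio ML': [np.nan, '41765189264', '41768876815'],
})

# keep '# Externo' values that never appear in 'Nro Envio ML'
result = dfn['# Externo'][~dfn['# Externo'].isin(dfn['Nro Envio ML'])].dropna().tolist()
print(result)  # ['41764660663']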
|
Answer scores: [1]
Tags: filter, indexing, multiple_columns, pandas, python
Source: stackoverflow_0074564869_filter_indexing_multiple_columns_pandas_python.txt
|
Q:
ImportError: cannot import name 'bigquery'
This must be a super trivial issue, but I've updated my Windows virtual machine with:
pip install --upgrade google-cloud-storage
However, when I run the script I still receive the following error:
Traceback (most recent call last):
File "file.py", line 6, in <module>
from google.cloud import bigquery, storage
ImportError: cannot import name 'bigquery'
Any suggestions or workarounds?
Thanks,
Neel R
A:
I was facing the same problem, but applying every answer, nothing was working.
Then I noticed that pip needed to be upgraded, so I upgraded pip first:
python -m pip install --upgrade pip
Then I tried this solution, changing it a little bit: https://stackoverflow.com/a/60895009/5393858
pip install --upgrade google-cloud
pip install --upgrade google-cloud-bigquery
pip install --upgrade google-cloud-storage
A:
From a fresh setup of the VM;
Failed - pip3 install --upgrade google-cloud
Worked - pip3 install --upgrade google-cloud-bigquery
Worked - pip3 install --upgrade google-cloud-storage
It appears that individual product solutions should be installed instead of the generic google-cloud.
If you're still stuck, this helped!
A:
Maybe try loading pip as a python module to ensure you're using the pip instance that your python executable is linked to.
python -m pip install --upgrade google-cloud-storage
A:
I didn't need to upgrade anything. The resolution was installing both google-cloud and google-cloud-bigquery pip dependencies.
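Whichever install route you take, a quick sanity check is a minimal script like the one below. The import itself is the real test; printing __version__ is an extra assumption about the packages:
from google.cloud import bigquery, storage

print(bigquery.__version__)  # should run without an ImportError
print(storage.__version__)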
|
Answer scores: [24, 10, 2, 0]
Tags: google_bigquery, python
Source: stackoverflow_0060894798_google_bigquery_python.txt
|
Q:
Parsing CSV file finding specific value in list
I'm parsing a CSV file using Python. I have two problems:
My list is being treated as a string
Is there a way to make my parsing more "elegant"?
Example CSV file
Name, Address
host1,['192.168.x.10', '127.0.0.1']
host2,['192.168.x.12', '127.0.0.1']
host3,['192.168.x.14', '127.0.0.1']
My code:
import csv

with open('myFile') as csv_file:
    csv_reader = csv.DictReader(csv_file, delimiter=',')
    for row in csv_reader:
        for i in row['Address'].strip("][").replace("'", "").split(', '):
            if '192.168' in i:
                break
        print(row['Name'], i)
Output:
host1 192.168.x.10
host2 192.168.x.12
host3 192.168.x.14
A:
pandas should help to read your csv and ast.literal_eval should help you transform your arrays, interpreted as strings, to be arrays again. If you don't want to use pandas, simply stick to ast.literal_eval only.
import ast
import pandas as pd
df = pd.read_csv('test.csv')
df['Address'] = df['Address'].apply(ast.literal_eval)
Note that my test.csv file only has the mere contents that you provided in your example.
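If you prefer to stay with the standard library, here is a minimal sketch. It assumes the file looks exactly like the sample; note that the unquoted comma inside the bracketed list splits that field, so the pieces are rejoined before parsing:
import ast
import csv

with open('myFile') as f:
    reader = csv.reader(f, skipinitialspace=True)
    next(reader)  # skip the "Name, Address" header
    for row in reader:
        name = row[0]
        addresses = ast.literal_eval(','.join(row[1:]))  # rejoin, then parse the list text
        for address in addresses:
            if address.startswith('192.168'):
                print(name, address)
                break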
A:
A little late, and ast.literal_eval is a nice solution, but as an alternative you could read the CSV file with the standard reader and then pattern match each row to construct a dictionary for each host. After which you can access the list as required. This may fail on your point 2 though...
import csv, re
address_pattern = r'\w+\.\w+\.\w+\.\w+'
with open('myFile') as file:
csv_reader = csv.reader(file, skipinitialspace=True)
headers = next(csv_reader)
address_list = [dict(zip(headers, (row[0], re.findall(address_pattern, ''.join(row))))) for row in csv_reader]
print('\n'.join(f"{item['Name']} {address}" for item in address_list for address in item['Address'] if address.startswith('192.168')))
# output:
# host1 192.168.x.10
# host2 192.168.x.12
# host3 192.168.x.14
|
Answer scores: [1, 0]
Tags: python, python_3.x
Source: stackoverflow_0074566537_python_python_3.x.txt
|
Q:
How do I separate parts of a list into multiple lists, and then put them all together into one big nested list?
I have a list that holds the names and ranks of five different cards (e.g. 4 of Spades, 2 of Hearts, etc.)
I need to be able to collect the first and third words of each 'section' in order to use it further. I had an idea to use nested lists, which would keep the name and rank of each card in a list of its own, with all 5 lists inside one outer list.
It'd look something like this:
[['King of Hearts'], ['4 of Clubs'], ['8 of Clubs'], ['Queen of Clubs'], ['9 of Diamonds']]
This way, using for loops, I can call the first word of each list (the rank) and do it for all 5 lists in 2 lines of code.
However, whenever I try to append each individual name to a list, it just ends up as one big flat list.
I've tried using for loops that take the original list and append each of its items individually as a list; however, I don't really know what I'm doing.
temp_card_list = list(p1cards)
card_list1 = []
for i in range(5):
    print(temp_card_list[i])
    card_list1.append(temp_card_list[i])
A:
I don't know exactly what you want as an output, but based on your question here is how you would access the first and third element of every string, without a nested list:
cards = ['King of Hearts', '4 of Clubs', '8 of Clubs', 'Queen of Clubs', '9 of Diamonds']
for card in cards:
string_list = card.split(' ')
rank = string_list[0] # 1st word
suit = string_list[2] # 3rd word
print(f'rank: {rank} | suit:{suit}')
print('----------')
Output:
rank: King | suit:Hearts
----------
rank: 4 | suit:Clubs
----------
rank: 8 | suit:Clubs
----------
rank: Queen | suit:Clubs
----------
rank: 9 | suit:Diamonds
----------
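And if you do want the intermediate nested list the question describes (each card pre-split into words), a small sketch:
cards = ['King of Hearts', '4 of Clubs', '8 of Clubs', 'Queen of Clubs', '9 of Diamonds']
nested = [card.split(' ') for card in cards]  # [['King', 'of', 'Hearts'], ...]

for rank, _, suit in nested:  # unpack the first and third words
    print(rank, suit)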
|
Answer scores: [1]
Tags: nested, nested_lists, python
Source: stackoverflow_0074567468_nested_nested_lists_python.txt
|
Q:
My write() function is not working, why?
So, I'm new to coding and I'm making a registration system for a fictional hospital that gets the user's name, the procedure they had, and the date. After that it adds some days to the date (to calculate the return visit) and then writes to a .txt file, but the write part is not working. How can I solve it? Sorry that the prints and variables are in Portuguese.
from datetime import datetime, timedelta

def cadastrar(arq, nomep, proc, x, y, z, w):
    datas = datetime.strptime(w, '%Y-%m-%d')
    l = 0
    m = 0
    n = 0
    o = 0
    p = 0
    try:
        a = open(arq, 'r+')
        for linha in a:
            dados = linha.split(';')
            if dados[1] in ['Procedimento X']:
                # return-visit dates at 15, 152, 304 and 456 days
                l = datas + timedelta(days=15)
                m = datas + timedelta(days=152)
                n = datas + timedelta(days=304)
                o = datas + timedelta(days=456)
                try:
                    a.write(f'{nomep};{proc};{x}-{y}-{z}\n;{l};{m};{n};{o}')
                except:
                    print('\033[31mErro ao escrever.\033[m')  # "Error while writing."
                else:
                    print(f'\033[92m{nomep} foi cadastrado com sucesso.\033[m')  # "registered successfully"
        a.close()
    finally:
        print('')
I want it to write to the txt file, but suddenly it just stopped working and I don't know why.
A:
Change
a = open(arq, 'r+')
to
a = open(arq, 'w+')
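One caveat worth noting: 'w+' truncates the file on open, so the for linha in a loop would then read nothing. If the goal is to read the existing records and append a new one, a sketch of a read-then-append pattern (names kept from the question; this is an illustration, not the original answer's code):
from datetime import datetime, timedelta

def cadastrar(arq, nomep, proc, x, y, z, w):
    datas = datetime.strptime(w, '%Y-%m-%d')
    with open(arq, 'r') as a:          # read everything first
        linhas = a.readlines()
    for linha in linhas:
        dados = linha.split(';')
        if dados[1] == 'Procedimento X':
            retornos = [datas + timedelta(days=d) for d in (15, 152, 304, 456)]
            with open(arq, 'a') as a:  # then append, preserving existing records
                a.write(f'{nomep};{proc};{x}-{y}-{z};'
                        + ';'.join(str(r) for r in retornos) + '\n')
            print(f'{nomep} foi cadastrado com sucesso.')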
|
Answer scores: [0]
Tags: datetime, fopen, fwrite, python, python_3.x
Source: stackoverflow_0074567519_datetime_fopen_fwrite_python_python_3.x.txt
|
Q:
Python Tkinter: 6x5 entry boxes accepting input which will automatically become uppercase
I am working on a Wordle clone as a project to get familiar with Python and tkinter. I have made a 6x5 grid of entry boxes that each accept one letter. I am trying to make it so that each box automatically converts that letter to uppercase, but I am having issues with that. Only the very last entry will be uppercase.
import tkinter as tk
from tkinter import Entry, Frame, StringVar

window = tk.Tk()

def validate(P):
    if len(P) == 0:
        # empty Entry is ok
        return True
    elif len(P) == 1 and not P.isdigit():
        # Entry with a single non-digit character is ok
        return True
    else:
        # Anything else, reject it
        return False

# sets v to uppercase
def caps(event):
    v.set(v.get().upper())
vcmd = (window.register(validate), '%P')
canvas1 = tk.Canvas(window, width = 400, height = 300)
borderColor = Frame(window, background="#A3F299")
# 2D array of entries to create the 6x5 grid
entries = [[]]
inner = []
# Will loop to create 6 rows, 5 columns
# v - user input
# inner - row of entries
# entries - entire grid of entries
# validate ensures that only 1 character is entered in the box
for x in range(6):
for i in range(5):
v = StringVar()
inner.insert(i, Entry(window, validate="key", width = 2, validatecommand=vcmd, font=("Helvetica, 35"), justify='center',
textvariable= v, bg='white'))
inner[i].bind("<KeyRelease>", caps)
entries.insert(x, inner)
entries[x][i].grid(row= x, column= i)
I have sort of deduced that it is because v is being set as the text variable for each entry and it will make it uppercase when a key is released; however, because the loop keeps running, it keeps binding v to a different entry, and only that last entry gets set to uppercase. E.g.:
If I only loop through it for one row, the output will be [a, a, a, a, A]
If I loop through it a second time, it will be [a, a, a, a, a], [a, a, a, a, A]
I have also tried making v a 2D array and setting each index as the text variable and calling it in caps; however, it would only work for one row and it was very redundant because I would have to call v.set(v.get.... 5 different times, and it would not work if I tried to loop through it.
A:
You can pass your widget name with %W into your validation callback:
vcmd = (window.register(validate), '%P', '%W')
Now let's come to the validation part. There is a little problem here: you cannot change the entry's value from inside the validation callback itself; the value only changes when you return True, and none of your code runs after that return.
The solution is to use the .after method. By passing a time in milliseconds, you can run a function that changes the value right after validation has returned True. If you pass a larger time like 1000, it will be capitalized after 1 second.
def validate(P,nameofwidget):
if len(P) == 0:
# empty Entry is ok
return True
elif len(P) == 1 and not P.isdigit():
entry = window.nametowidget(nameofwidget)
def makeBigger(entry):
text = entry.get()
print("run:",text)
if(not text.isupper()):
entry.delete(0,"end")
entry.insert(0,text.upper())
entry.after(1,makeBigger,entry)
# Entry with 1 digit is ok
return True
else:
# Anything else, reject it
return False
So if you choose a small value, people won't even realize the value changed after they typed. However, passing too small a value can cause makeBigger to run even before the entry value is updated, so be careful about that.
A:
Since v will be the reference to the tkinter variable created in the last iteration of the nested for loop, caps() will always modify the last entry box.
You can pass v as an argument of caps() instead using lambda and default value of optional argument:
...
def caps(v):
v.set(v.get().upper())
...
entries = []
for x in range(6):
inner = []
for i in range(5):
v = StringVar()
inner.append(Entry(window, validate="key", width=2, validatecommand=vcmd,
font=("Helvetica, 35"), justify='center',
textvariable=v, bg='white'))
inner[i].bind("<KeyRelease>", lambda e, v=v: caps(v)) # pass v to caps()
inner[i].grid(row=x, column=i)
entries.append(inner)
...
Also, if you want only letters to be allowed in those entry boxes, use P.isalpha() instead of not P.isdigit() inside validate().
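The v=v default argument works because Python closures capture variables, not values; a tiny standalone sketch of the pitfall and the fix:
# Late binding: every lambda sees the final value of i ...
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])  # [2, 2, 2]

# ... unless i is frozen as a default argument at definition time.
funcs = [lambda i=i: i for i in range(3)]
print([f() for f in funcs])  # [0, 1, 2]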
|
Answer scores: [0, 0]
Tags: python, tkinter
Source: stackoverflow_0074562862_python_tkinter.txt
|
Q:
How to pass a list as an environment variable?
I use a list as part of a Python program, and wanted to convert that to an environment variable.
So, it's like this:
list1 = ['a.1','b.2','c.3']
for items in list1:
alpha,number = items.split('.')
print(alpha,number)
which gives me, as expected:
a 1
b 2
c 3
But when I try to set it as an environment variable, as:
export LIST_ITEMS = 'a.1', 'b.2', 'c.3'
and do:
list1 = [os.environ.get("LIST_ITEMS")]
for items in list1:
alpha,number = items.split('.')
print(alpha,number)
I get an error: ValueError: too many values to unpack
How do I modify the way I pass the list, or get it so that I have the same output as without using env variables?
A:
The rationale
I recommend using JSON if you want to have data structured in an environment variable. JSON is simple to write / read, can be written in a single line, parsers exist, developers know it.
The solution
To test, execute this in your shell:
$ export ENV_LIST_EXAMPLE='["Foo", "bar"]'
Python code to execute in the same shell:
import os
import json
env_list = json.loads(os.environ['ENV_LIST_EXAMPLE'])
print(env_list)
print(type(env_list))
gives
['Foo', 'bar']
<class 'list'>
Package
Chances are high that you are interested in cfg_load
Debugging
If you see
JSONDecodeError: Expecting value: line 1 column 2 (char 1)
You might have used single-quotes instead of double-quotes. While some JSON libraries accept that, the JSON standard clearly states that you need to use double-quotes:
Wrong: "['foo', 'bar']"
Right: '["foo", "bar"]'
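To round-trip the other way, i.e. handing a list from Python to a child process, you can serialize with json.dumps before spawning. A minimal sketch (child.py is a hypothetical script that reads the variable as shown above):
import json
import os
import subprocess

env = os.environ.copy()
env["ENV_LIST_EXAMPLE"] = json.dumps(["Foo", "bar"])  # double quotes, valid JSON
subprocess.run(["python", "child.py"], env=env)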
A:
I'm not sure why you'd do it through the environment variables, but you can do this:
export LIST_ITEMS="a.1 b.2 c.3"
And in Python:
list1 = [i.split(".") for i in os.environ.get("LIST_ITEMS").split(" ")]
for k, v in list1:
print(k, v)
A:
The environs PyPI package handles my use case well: load a single setting from env var and coerce it to a list, int, etc:
from environs import Env
env = Env()
env.read_env() # read .env file, if it exists
# required variables
# env: GITHUB_USER=sloria
gh_user = env("GITHUB_USER") # => 'sloria'
# env: <unset>
secret = env("SECRET") # => raises error if not set
# casting
# env: MAX_CONNECTIONS=100
max_connections = env.int("MAX_CONNECTIONS") # => 100
# env: SHIP_DATE='1984-06-25'
ship_date = env.date("SHIP_DATE") # => datetime.date(1984, 6, 25)
# env: TTL=42
ttl = env.timedelta("TTL") # => datetime.timedelta(0, 42)
# providing a default value
# env: ENABLE_LOGIN=true
enable_login = env.bool("ENABLE_LOGIN", False) # => True
# env: <unset>
enable_feature_x = env.bool("ENABLE_FEATURE_X", False) # => False
# parsing lists
# env: GITHUB_REPOS=webargs,konch,ped
gh_repos = env.list("GITHUB_REPOS") # => ['webargs', 'konch', 'ped']
# env: COORDINATES=23.3,50.0
coords = env.list("COORDINATES", subcast=float) # => [23.3, 50.0]
A:
If the variable is set to the comma-separated format from the question, i.e. export LIST_ITEMS="'a.1', 'b.2', 'c.3'", this would work:
from ast import literal_eval

list1 = [literal_eval(e.strip()) for e in os.environ["LIST_ITEMS"].split(',')]
for item in list1:
    alpha, number = item.split('.')
    print(alpha, number)
Output:
a 1
b 2
c 3
A:
YOUR_VARIABLE="val1, val2"
In your Python code, Just split them as a list
import os
os.environ.get('YOUR_VARIABLE').split(', ')
|
Answer scores: [112, 32, 27, 6, 0]
Tags: python, python_2.7
Source: stackoverflow_0031352317_python_python_2.7.txt
|
Q:
How to get multiple dictionary values?
I have a dictionary in Python, and what I want to do is get some values from it as a list, but I don't know if this is supported by the implementation.
myDictionary.get('firstKey') # works fine
myDictionary.get('firstKey','secondKey')
# gives me a KeyError -> OK, get is not defined for multiple keys
myDictionary['firstKey','secondKey'] # doesn't work either
Is there any way I can achieve this? In my example it looks easy, but let's say I have a dictionary of 20 entries, and I want to get 5 keys. Is there any other way than doing the following?
myDictionary.get('firstKey')
myDictionary.get('secondKey')
myDictionary.get('thirdKey')
myDictionary.get('fourthKey')
myDictionary.get('fifthKey')
A:
There already exists a function for this:
from operator import itemgetter
my_dict = {x: x**2 for x in range(10)}
itemgetter(1, 3, 2, 5)(my_dict)
#>>> (1, 9, 4, 25)
itemgetter will return a tuple if more than one argument is passed. To pass a list to itemgetter, use
itemgetter(*wanted_keys)(my_dict)
Keep in mind that itemgetter does not wrap its output in a tuple when only one key is requested, and does not support zero keys being requested.
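A quick illustration of that single-key caveat, continuing the snippet above:
itemgetter(1)(my_dict)        # 1 -- a bare value, not a 1-tuple
itemgetter(*[1, 3])(my_dict)  # (1, 9) -- a list unpacks into separate arguments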
A:
Use a for loop:
keys = ['firstKey', 'secondKey', 'thirdKey']
for key in keys:
myDictionary.get(key)
or a list comprehension:
[myDictionary.get(key) for key in keys]
A:
I'd suggest the very useful map function, which allows a function to operate element-wise on a list:
mydictionary = {'a': 'apple', 'b': 'bear', 'c': 'castle'}
keys = ['b', 'c']
values = list( map(mydictionary.get, keys) )
# values = ['bear', 'castle']
A:
You can use At from pydash:
from pydash import at
my_dict = {'a': 1, 'b': 2, 'c': 3}
my_list = at(my_dict, 'a', 'b')
my_list == [1, 2]
A:
As I see no similar answer here - it is worth pointing out that with the usage of a (list / generator) comprehension, you can unpack those multiple values and assign them to multiple variables in a single line of code:
first_val, second_val = (myDict.get(key) for key in [first_key, second_key])
A:
I think list comprehension is one of the cleanest ways that doesn't need any additional imports:
>>> d={"foo": 1, "bar": 2, "baz": 3}
>>> a = [d.get(k) for k in ["foo", "bar", "baz"]]
>>> a
[1, 2, 3]
Or if you want the values as individual variables then use multiple-assignment:
>>> a,b,c = [d.get(k) for k in ["foo", "bar", "baz"]]
>>> a,b,c
(1, 2, 3)
A:
If you have pandas installed you can turn it into a series with the keys as the index. So something like
import pandas as pd
s = pd.Series(my_dict)
s[['key1', 'key3', 'key2']]
A:
def get_all_values(nested_dictionary):
for key, value in nested_dictionary.items():
if type(value) is dict:
get_all_values(value)
else:
print(key, ":", value)
nested_dictionary = {'ResponseCode': 200, 'Data': {'256': {'StartDate': '2022-02-07', 'EndDate': '2022-02-27', 'IsStoreClose': False, 'StoreTypeMsg': 'Manual Processing Stopped', 'is_sync': False}}}
get_all_values(nested_dictionary)
A:
If you want to retain the mapping of the values to the keys, you should use a dict comprehension instead:
{key: myDictionary[key] for key in [
'firstKey',
'secondKey',
'thirdKey',
'fourthKey',
'fifthKey'
]}
A:
Slightly different variation of the list comprehension approach.
#doc
[dict[key] for key in (tuple_of_searched_keys)]
#example
my_dict = {x: x**2 for x in range(10)}
print([my_dict[key] for key in (8,9)])
A:
%timeit results for the answers listed above. My apologies if I missed some of the solutions; I used my judgment to group similar answers. itemgetter seems to be the winner to me. pydash ran fewer loops and reported a far higher per-loop time, so it is clearly not the fastest here. Note also that the generator-expression line only times building the generator, not consuming it. Your thoughts?
from operator import itemgetter
my_dict = {x: x**2 for x in range(10)}
req_keys = [1, 3, 2, 5]
%timeit itemgetter(1, 3, 2, 5)(my_dict)
257 ns ± 4.61 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit [my_dict.get(key) for key in req_keys]
604 ns ± 6.94 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit list( map(my_dict.get, req_keys) )
529 ns ± 34.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
!pip install pydash
from pydash import at
%timeit at(my_dict, 1, 3, 2, 5)
22.2 µs ± 572 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit (my_dict.get(key) for key in req_keys)
308 ns ± 6.53 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
import pandas as pd

s = pd.Series(my_dict)
%timeit s[req_keys]
334 µs ± 58.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
|
Answer scores: [158, 101, 52, 11, 8, 6, 5, 1, 0, 0, 0]
Non-answers (scores -1 and -2):

If the fallback keys are not too many you can do something like this:
value = my_dict.get('first_key') or my_dict.get('second_key')

def get_all_values(nested_dictionary):
    for key, val in nested_dictionary.items():
        data_list = []
        if type(val) is dict:
            for key1, val1 in val.items():
                data_list.append(val1)

        return data_list

Tags: dictionary, python
Source: stackoverflow_0024204087_dictionary_python.txt
|
Q:
How do I loop through a dictionary, and apply a function using key: value pairs as arguments
Dictionary = {File1: "location1", File2: "location2", File3: "location3"}
def fancy_function1(location, file):
df = pd.read_csv(location)
df["new_column"] = df[file]
return df
I need help writing this for loop; any other suggestions are welcome.
for key in Dictionary:
##pass key value pairs into function
df = fancy_function(key, value)
return df
I want to then merge all 3 dataframes (created from fancy_function()) or assign each dataframe to variables e.g. df1, df2, df3 etc.
A:
I don't know if this helps.
for key in Dictionary:
value = Dictionary[key]
df = fancy_function(key, value)
return df
For me this is strange because you are returning outside a function. If you want to create multiple data frames, I suggest the following.
dataframes = []
for key in Dictionary:
value = Dictionary[key]
df = fancy_function(key, value)
dataframes.append(df)
The append method adds to a list, so you end up with a list containing all your data frames. Be careful, because this can consume a lot of RAM if you have many data frames or some are big.
I hope this helps.
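If you then want to merge everything into one dataframe (the stated goal), pandas can concatenate the collected list in one call. A minimal sketch, assuming pandas is imported as pd and the frames share compatible columns:
import pandas as pd

# stack all collected frames into one; ignore_index renumbers the rows
merged = pd.concat(dataframes, ignore_index=True)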
A:
for key, value in Dictionary.items():
    # pass each key-value pair into the function
    if 'df' not in locals():
        df = fancy_function(key, value)
    else:
        df = pd.concat([df, fancy_function(key, value)])
|
How do I loop through a dictionary, and apply a function using key: value pairs as arguments
|
Dictionary = {File1: "location1", File2: "location2", File3: "location3"}
def fancy_function1(location, file):
df = pd.read_csv(location)
df["new_column"] = df[file]
return df
I need help writing this for loop; any other suggestions are welcome.
for key in Dictionary:
##pass key value pairs into function
df = fancy_function(key, value)
return df
I want to then merge all 3 dataframes (created from fancy_function()) or assign each dataframe to variables e.g. df1, df2, df3 etc.
|
[
"I don't know if this helps.\nfor key in Dictionary:\n value = Dictionary[key]\n df = fancy_function(key, value)\n return df\n\nFor me this is strange because you are returning outside a function, if you want to create multiple data frames I suggest the following.\ndataframes = []\nfor key in Dictionary:\n value = Dictionary[key]\n df = fancy_function(key, value)\n dataframes.append(df)\n\nThe append method appends to a list so you have a list with all your data frames, be careful because this can consume a lot of ram if you have a lot of data frames or some are big.\nI hope this helps.\n",
"for key in Dictionary:\n##pass key value pairs into function\n if df in locals():\n df = fancy_function(key, value)\n else\n df=pd.concat([df, fancy_function(key, value)])\n return df\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dictionary",
"loops",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074567578_dictionary_loops_pandas_python_python_3.x.txt
|
Q:
I have installed all the python libraries i want to use in the vscode terminal but when i call to import, it won't work
[{
"resource": "/d:/Users/Home/Desktop/Python/estudos/pratices.py",
"owner": "_generated_diagnostic_collection_name_#0",
"code": {
"value": "reportMissingModuleSource",
"target": {
"$mid": 1,
"external": "https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportMissingModuleSource",
"path": "/microsoft/pyright/blob/main/docs/configuration.md",
"scheme": "https",
"authority": "github.com",
"fragment": "reportMissingModuleSource"
}
},
"severity": 4,
"message": "Import \"pandas\" could not be resolved from source",
"source": "Pylance",
"startLineNumber": 1,
"startColumn": 8,
"endLineNumber": 1,
"endColumn": 14
}]
is the message given
The cmd shell tells me I have all the libraries I want installed and they're in the project folder. I'm running a virtual environment, but whenever I try to run something in a .py file, it says the module is not defined. I have Anaconda installed but don't mean to use it right now; if I open a Jupyter file it imports with no problem, but trying to run pip doesn't work at all.
What I have tried: reinstalling VS Code, making sure Python is installed, and making sure pip is installed.
A:
I had a similar issue before, and the solution I found is that you have to make sure the Python interpreter for the current VS Code window is your virtual environment instead of the system-wide Python interpreter. On Windows:
Press F1
Search for "interpreter".
Click the python one
Click "Enter interpreter path".
Finally locate your virtual environment.
A:
You should make sure that the current interpreter and the environment where the pandas library is installed are the same.
The current interpreter can be output with the following code.
import sys
print(sys.executable)
Then install the pandas library with the resulting interpreter path.
<the path obtained above> -m pip install pandas
|
I have installed all the python libraries i want to use in the vscode terminal but when i call to import, it won't work
|
[{
"resource": "/d:/Users/Home/Desktop/Python/estudos/pratices.py",
"owner": "_generated_diagnostic_collection_name_#0",
"code": {
"value": "reportMissingModuleSource",
"target": {
"$mid": 1,
"external": "https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportMissingModuleSource",
"path": "/microsoft/pyright/blob/main/docs/configuration.md",
"scheme": "https",
"authority": "github.com",
"fragment": "reportMissingModuleSource"
}
},
"severity": 4,
"message": "Import \"pandas\" could not be resolved from source",
"source": "Pylance",
"startLineNumber": 1,
"startColumn": 8,
"endLineNumber": 1,
"endColumn": 14
}]
is the message given
The cmd shell tells me I have all the libraries I want installed and they're in the project folder. I'm running a virtual environment, but whenever I try to run something in a .py file, it says the module is not defined. I have Anaconda installed but don't mean to use it right now; if I open a Jupyter file it imports with no problem, but trying to run pip doesn't work at all.
What I have tried: reinstalling VS Code, making sure Python is installed, and making sure pip is installed.
|
[
"I had a similar issue before and the solution I found is that you have to make sure the python interpreter for the current VSC window is your virtual environment instead of the system-wide python interpreter. On windows:\n\nPress F1\nSearch for \"interpreter\".\nClick the python one\nClick \"Enter interpreter path\".\nFinally locate your virtual environment.\n\n",
"You should make sure that the current interpreter and the pandas library installed are the same interpreter environment.\nThe current interpreter can be output with the following code.\nimport sys\nprint(sys.executable)\n\nThen install the pandas library with the resulting interpreter path.\n<the path obtained above> -m pip install pandas\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"pip",
"python",
"visual_studio_code"
] |
stackoverflow_0074564636_pip_python_visual_studio_code.txt
|
Q:
How do I get Python and Python in Visual Studio Code to output the same thing when using os.getcwd()?
I wanted to make it easier to edit my code on different devices with different usernames, so I decided to change how my code knows where my files are. Instead of using the entire file path, I decided to use os.getcwd but when I run it in Visual Studio Code I only get C:\Users\Name while when I run it with just python I get C:\Users\Name\Yuna-Discord-Bot. And for both I am using the same code:
import os
test=(os.getcwd())
print(test)
input("Enter to close")
Is there any way to get Visual Studio Code to output the same as plain python, or vice versa?
A:
os.getcwd() returns the current working directory. When you run the file directly in a terminal or by double-clicking it, that is typically the directory you launched it from (often the folder the file is in).
However, if you run the file in VS Code, note that no matter where the file is located, the cwd you get will be the workspace folder instead of the folder containing the file (even if the file is in a deeply nested directory).
That is determined by the way VS Code launches files.
This is also the reason for the running error in your comments: your {filePath} is different depending on how the script is run.
You can try interactive Window which is more suitable for your needs.
or you can use debug mode, and add the following line to your launch.json:
"cwd": "${fileDirname}"
|
How do I get Python and Python in Visual Studio Code to output the same thing when using os.getcwd()?
|
I wanted to make it easier to edit my code on different devices with different usernames, so I decided to change how my code knows where my files are. Instead of using the entire file path, I decided to use os.getcwd but when I run it in Visual Studio Code I only get C:\Users\Name while when I run it with just python I get C:\Users\Name\Yuna-Discord-Bot. And for both I am using the same code:
import os
test=(os.getcwd())
print(test)
input("Enter to close")
Is there any way to get Visual Studio Code to output the same as plain python, or vice versa?
|
[
"os.getcwd() returns the directory where the running file is located if you run the file directly in the terminal or double-click.\nHowever if you run the file in vscode it should be noted that no matter where the file you are running, the cwd you get will still be the workspace instead of the folder where the file is located (even the file is in the deep directory).\nIt is determined by the way vscode retrieves files.\nThis is also the reason for the running error in your comments, because your {filePath} is different for the different running way.\nYou can try interactive Window which is more suitable for your needs.\n\nor you can use debug mode, and add the following line to your launch.json:\n\"cwd\": \"${fileDirname}\"\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"visual_studio_code"
] |
stackoverflow_0074566483_python_visual_studio_code.txt
|
Q:
Trying to pass the list of names and numbers from my contact Python code, but it only saves the very last input
import re
contact = {}
def display_contact():
for name, number in sorted((k,v) for k, v in contact.items()):
print(f'Name: {name}, Number: {number}')
#def display_contact():
# print("Name\t\tContact Number")
# for key in contact:
# print("{}\t\t{}".format(key,contact.get(key)))
while True:
choice = int(input(" 1. Add new contact \n 2. Search contact \n 3. Display contact\n 4. Edit contact \n 5. Delete contact \n 6. Save your contact as a file \n 7. Update Saved List \n 8. Exit \n Your choice: "))
if choice == 1:
while True:
name = input("Enter the contact name ")
if re.fullmatch(r'[a-zA-Z]+', name):
break
while True:
try:
phone = int(input("Enter number "))
except ValueError:
print("Sorry you can only enter a phone number")
continue
else:
break
contact[name] = phone
elif choice == 2:
search_name = input("Enter contact name ")
if search_name in contact:
print(search_name, "'s contact number is ", contact[search_name])
else:
print("Name is not found in contact book")
elif choice == 3:
if not contact:
print("Empty Phonebook")
else:
display_contact()
elif choice == 4:
edit_contact = input("Enter the contact to be edited ")
if edit_contact in contact:
phone = input("Enter number")
contact[edit_contact]=phone
print("Contact Updated")
display_contact()
else:
print("Name is not found in contact book")
elif choice == 5:
del_contact = input("Enter the contact to be deleted ")
if del_contact in contact:
confirm = input("Do you want to delete this contact Yes or No? ")
if confirm == 'Yes' or confirm == 'yes':
contact.pop(del_contact)
display_contact
else:
print("Name is not found in phone book")
elif choice == 6:
confirm = input("Do you want to save your contact-book Yes or No?")
if confirm == 'Yes' or confirm == 'yes':
with open('contact_list.txt','w') as file:
file.write(str(contact))
print("Your contact-book is saved!")
else:
print("Your contact book was not saved.")
# else:
elif choice == 7:
confirm = input("Do you want to update your saved contact-book Yes or No?")
if confirm == 'Yes' or confirm == 'yes':
f = open("Saved_Contact_List.txt" , "a")
f.write("Name = " + str(name))
f.write(" Number = " + str(phone))
f.close()
#with open('contact_list.txt','a') as file:
# file.write(str(contact))
print("Your contact-book has been updated!")
else:
print("Your contact book was not updated.")
else:
break
I have tried, but I only get it to save the last input and not the whole contact list. Any ideas on how to save them all? I have been trying different code, as I have commented some out to try a different way, but it only prints the last input. I would like the first save to write an output file with all the contacts, and then, if a contact is added or updated, to save it as an updated file like choice 7. But I only get it to save the last input. I am still learning how Python works and this is over my head.
A:
You're looking for serialization, which is (usually) best left to libraries. The json library easily handles reading and writing dictionaries to a file.
To write a dictionary, take a look at json.dump():
with open("Saved_Contact_List.txt", "w") as f:
json.dump(contact, f)
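For completeness, reading the contact book back is the mirror image with json.load; a minimal sketch, assuming the file was written as above:
import json

with open("Saved_Contact_List.txt") as f:
    contact = json.load(f)  # restores the full dict, not just the last entry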
|
Trying to pass the list of names and numbers from my contact Python code, but it only saves the very last input
|
import re
contact = {}
def display_contact():
for name, number in sorted((k,v) for k, v in contact.items()):
print(f'Name: {name}, Number: {number}')
#def display_contact():
# print("Name\t\tContact Number")
# for key in contact:
# print("{}\t\t{}".format(key,contact.get(key)))
while True:
choice = int(input(" 1. Add new contact \n 2. Search contact \n 3. Display contact\n 4. Edit contact \n 5. Delete contact \n 6. Save your contact as a file \n 7. Update Saved List \n 8. Exit \n Your choice: "))
if choice == 1:
while True:
name = input("Enter the contact name ")
if re.fullmatch(r'[a-zA-Z]+', name):
break
while True:
try:
phone = int(input("Enter number "))
except ValueError:
print("Sorry you can only enter a phone number")
continue
else:
break
contact[name] = phone
elif choice == 2:
search_name = input("Enter contact name ")
if search_name in contact:
print(search_name, "'s contact number is ", contact[search_name])
else:
print("Name is not found in contact book")
elif choice == 3:
if not contact:
print("Empty Phonebook")
else:
display_contact()
elif choice == 4:
edit_contact = input("Enter the contact to be edited ")
if edit_contact in contact:
phone = input("Enter number")
contact[edit_contact]=phone
print("Contact Updated")
display_contact()
else:
print("Name is not found in contact book")
elif choice == 5:
del_contact = input("Enter the contact to be deleted ")
if del_contact in contact:
confirm = input("Do you want to delete this contact Yes or No? ")
if confirm == 'Yes' or confirm == 'yes':
contact.pop(del_contact)
display_contact
else:
print("Name is not found in phone book")
elif choice == 6:
confirm = input("Do you want to save your contact-book Yes or No?")
if confirm == 'Yes' or confirm == 'yes':
with open('contact_list.txt','w') as file:
file.write(str(contact))
print("Your contact-book is saved!")
else:
print("Your contact book was not saved.")
# else:
elif choice == 7:
confirm = input("Do you want to update your saved contact-book Yes or No?")
if confirm == 'Yes' or confirm == 'yes':
f = open("Saved_Contact_List.txt" , "a")
f.write("Name = " + str(name))
f.write(" Number = " + str(phone))
f.close()
#with open('contact_list.txt','a') as file:
# file.write(str(contact))
print("Your contact-book has been updated!")
else:
print("Your contact book was not updated.")
else:
break
I have tried, but I only get it to save the last input and not the whole contact list. Any ideas on how to save them all? I have been trying different code, as I have commented some out to try a different way, but it only prints the last input. I would like the first save to write an output file with all the contacts, and then, if a contact is added or updated, to save it as an updated file like choice 7. But I only get it to save the last input. I am still learning how Python works and this is over my head.
|
[
"You're looking for serialization, which is (usually) best left to libraries. The json library easily handles reading and writing dictionaries to a file.\nTo write a dictionary, take a look at json.dump():\nwith open(\"Saved_Contact_List.txt\", \"w\") as f:\n json.dump(contact, f)\n\n"
] |
[
1
] |
[] |
[] |
[
"data_structures",
"project",
"python",
"python_3.x"
] |
stackoverflow_0074567668_data_structures_project_python_python_3.x.txt
|
Q:
datetime json dump or load problem during writing and reading file
I have this piece of code in testing currently:
from_api_response_data = [
{
"active": True,
"available": True,
"test1": True,
"test2": "Testing Only",
"test3": False,
"test_name": "Tester 1",
"id": "12345abcxyz",
"test_url": {
"url": "/something/others/api/v1/abc123"
}
},
{
"active": True,
"available": True,
"test1": False,
"test2": "This also a test",
"test3": False,
"test_name": "Tester 2",
"id": "12345abcxyz678",
"test_url": {
"url": "/something/others/api/v1/abc1234"
}
}
]
filename = 'testingfile.json'
today = datetime.datetime.now().isoformat()
from_api_response_data.append(
{
'last_updated_date': today
}
)
Path(filename).write_text(
json.dumps(from_api_response_data, default=vars, indent=2)
)
test_file_json_read = json.loads(
Path(filename).read_text()
)
for test in test_file_json_read:
if test['available']:
print("true available")
What I am trying to simulate is getting data from an API, appending an updated date, and writing the data to a JSON file. If I remove the part that appends the date, my code works fine when finding test['available'].
from console output
true available
true available
but with the date appended, I get this error:
if test['available']:
KeyError: 'available'
I am not sure why I am not able to read test['available'] when the date is appended.
This is what my testingfile.json is showing:
[
{
"active": true,
"available": true,
"test1": true,
"test2": "Testing Only",
"test3": false,
"test_name": "Tester 1",
"id": "12345abcxyz",
"test_url": {
"url": "/something/others/api/v1/abc123"
}
},
{
"active": true,
"available": true,
"test1": false,
"test2": "This also a test",
"test3": false,
"test_name": "Tester 2",
"id": "12345abcxyz678",
"test_url": {
"url": "/something/others/api/v1/abc1234"
}
},
{
"last_updated_date": "2022-11-25T09:48:12.765296"
}
]
A:
This is happening because the last dictionary, { "last_updated_date": "2022-11-25T09:48:12.765296" }, doesn't contain the key 'available', so a KeyError exception will be thrown. To get around it, use get, which returns None when the key is not found:
for test in test_file_json_read:
if test.get('available'):
print("true available")
A:
It's just doing what you wrote. You have an array of objects in JSON, and you are trying to get the key "available" from each one. When you add a new object to the array
from_api_response_data.append(
{
'last_updated_date': today
}
)
you are adding a new object with just one key, "last_updated_date", so further down the code can't find the key "available" in this added object.
You should either add "last_updated_date" to every object in your array (if it's applicable to each object)
for obj in from_api_response_data:
obj["last_updated_date"] = today
or restructure the JSON to be like
{
"data": [
...your objects
],
"last_updated_date": today
}
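A minimal sketch of that second option, assuming the imports and variables from the question and that the date dict has not already been appended to the list:
payload = {
    "data": from_api_response_data,  # the original records, untouched
    "last_updated_date": today,
}
Path(filename).write_text(json.dumps(payload, indent=2))

loaded = json.loads(Path(filename).read_text())
for test in loaded["data"]:  # iterate only the records
    if test["available"]:
        print("true available")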
|
datetime json dump or load problem during writing and reading file
|
I have this piece of code in testing currently:
from_api_response_data = [
{
"active": True,
"available": True,
"test1": True,
"test2": "Testing Only",
"test3": False,
"test_name": "Tester 1",
"id": "12345abcxyz",
"test_url": {
"url": "/something/others/api/v1/abc123"
}
},
{
"active": True,
"available": True,
"test1": False,
"test2": "This also a test",
"test3": False,
"test_name": "Tester 2",
"id": "12345abcxyz678",
"test_url": {
"url": "/something/others/api/v1/abc1234"
}
}
]
filename = 'testingfile.json'
today = datetime.datetime.now().isoformat()
from_api_response_data.append(
{
'last_updated_date': today
}
)
Path(filename).write_text(
json.dumps(from_api_response_data, default=vars, indent=2)
)
test_file_json_read = json.loads(
Path(filename).read_text()
)
for test in test_file_json_read:
if test['available']:
print("true available")
What I am trying to simulate is getting data from an API, appending an updated date, and writing the data to a JSON file. If I remove the part that appends the date, my code works fine when finding test['available'].
from console output
true available
true available
but with the date appended, I get this error:
if test['available']:
KeyError: 'available'
I am not sure why I am not able to read test['available'] when the date is appended.
This is what my testingfile.json is showing:
[
{
"active": true,
"available": true,
"test1": true,
"test2": "Testing Only",
"test3": false,
"test_name": "Tester 1",
"id": "12345abcxyz",
"test_url": {
"url": "/something/others/api/v1/abc123"
}
},
{
"active": true,
"available": true,
"test1": false,
"test2": "This also a test",
"test3": false,
"test_name": "Tester 2",
"id": "12345abcxyz678",
"test_url": {
"url": "/something/others/api/v1/abc1234"
}
},
{
"last_updated_date": "2022-11-25T09:48:12.765296"
}
]
|
[
"This is happening because the last dictionary this { \"last_updated_date\": \"2022-11-25T09:48:12.765296\" } doesn't contain the key 'available'. So the exception keyerror will be thrown. To get around it use get which return None when the key is not found\nfor test in test_file_json_read:\n if test.get('available'):\n print(\"true available\")\n\n",
"It's just doing what you wrote. You have an array for objects in json, and you are trying to get key \"available\" from each one. When you're adding new object to the array\nfrom_api_response_data.append(\n {\n 'last_updated_date': today\n }\n)\n\nyou are adding new object with just one key \"last_updated_date\", so further down the code can't find key \"available\" in this added object.\nYou should ether add \"last_updated_date\" to every object in your array (if it's applicable to each object)\nfor obj in from_api_response_data:\n obj[\"last_updated_date\"] = today\n\nor to have json structure to be like\n{\n \"data\": [\n ...your objects\n ],\n \"last_updated_date\": today\n}\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074567638_python.txt
|
Q:
Is it possible to create an empty csv file in Django database?
I am wondering if it is possible to create an empty csv file in the Django database. Basically, what I am trying to do is allow the user to upload a text file as a TextUpload model object, then run code in the backend to process the text file and save it as a ProcessedTextToCsv model object. My views.py code looks something like this:
def upload_view(request):
form = TextUploadsModelform(request.POST or None, request.FILES or None)
if form.is_valid():
form.save()
form = TextUploadsModelform()
ProcessedTextToCsv().save()
I am wondering if I can create an empty csv here, so I can use file_name.path as an argument in the function below:
processTextToCsv(TextUploads.objects.latest('uploaded').file_name.path,ProcessedTextToCsv.objects.latest('uploaded').file_name.path)
return render(request, 'main_app/example.html',{'form': form})
A:
There is no such thing as a "Django database"; Django is a framework that works with several databases.
Back to your question: yes, it is possible. All the steps are described in the Django documentation on basic file uploads; what you want is something similar to this:
forms.py:
from django import forms
class UploadFileForm(forms.Form):
title = forms.CharField(max_length=50)
file = forms.FileField()
views.py:
from django.http import HttpResponseRedirect
from django.shortcuts import render
from .forms import UploadFileForm
# Imaginary function to handle an uploaded file.
from somewhere import handle_uploaded_file
def upload_file(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
handle_uploaded_file(request.FILES['file'])
return HttpResponseRedirect('/success/url/')
else:
form = UploadFileForm()
return render(request, 'upload.html', {'form': form})
"somewhere".py:
def handle_uploaded_file(f):
    # f is an UploadedFile object, not a path, so read it directly
    lines = f.read().decode('utf-8').splitlines()
    # Treat the data to CSV format
    with open("/path/to/your/file.csv", 'w') as outfile:
        # write your data
        ...
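As a concrete sketch of the conversion step, using the standard csv module and assuming the uploaded text is tab-separated (the separator and output path are assumptions):
import csv

def handle_uploaded_file(f):
    # decode the uploaded bytes into text lines
    lines = f.read().decode('utf-8').splitlines()
    with open("/path/to/your/file.csv", 'w', newline='') as outfile:
        writer = csv.writer(outfile)
        for line in lines:
            writer.writerow(line.split('\t'))  # assumed tab-separated input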
|
Is it possible to create an empty csv file in Django database?
|
I am wondering if it is possible to create an empty csv file in the Django database. Basically, what I am trying to do is allow the user to upload a text file as a TextUpload model object, then run code in the backend to process the text file and save it as a ProcessedTextToCsv model object. My views.py code looks something like this:
def upload_view(request):
form = TextUploadsModelform(request.POST or None, request.FILES or None)
if form.is_valid():
form.save()
form = TextUploadsModelform()
ProcessedTextToCsv().save()
I am wondering if I can create an empty csv here, so I can use file_name.path as an argument in the function below:
processTextToCsv(TextUploads.objects.latest('uploaded').file_name.path,ProcessedTextToCsv.objects.latest('uploaded').file_name.path)
return render(request, 'main_app/example.html',{'form': form})
|
[
"There is no such thing as Django database. Django is a framework that uses several databases.\nBack to your question, yes it is possible, all steps are described in the Django tutorial for basic-file-uploads, what you want is something similar to this:\nforms.py:\nfrom django import forms\n\nclass UploadFileForm(forms.Form):\n title = forms.CharField(max_length=50)\n file = forms.FileField()\n\nviews.py:\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import render\nfrom .forms import UploadFileForm\n\n# Imaginary function to handle an uploaded file.\nfrom somewhere import handle_uploaded_file\n\ndef upload_file(request):\n if request.method == 'POST':\n form = UploadFileForm(request.POST, request.FILES)\n if form.is_valid():\n handle_uploaded_file(request.FILES['file'])\n return HttpResponseRedirect('/success/url/')\n else:\n form = UploadFileForm()\n return render(request, 'upload.html', {'form': form})\n\n\"somewhere\".py:\ndef handle_uploaded_file(f):\n with open(f, 'r') as file:\n # read lines\n # Treat the data to CSV format\n with open(\"/path/to/your/file.csv\",'w') as outfile:\n # write your data\n\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"django",
"python"
] |
stackoverflow_0074567609_csv_django_python.txt
|
Q:
Convert Python dict into a dataframe
I have a Python dictionary like the following:
{u'2012-06-08': 388,
u'2012-06-09': 388,
u'2012-06-10': 388,
u'2012-06-11': 389,
u'2012-06-12': 389,
u'2012-06-13': 389,
u'2012-06-14': 389,
u'2012-06-15': 389,
u'2012-06-16': 389,
u'2012-06-17': 389,
u'2012-06-18': 390,
u'2012-06-19': 390,
u'2012-06-20': 390,
u'2012-06-21': 390,
u'2012-06-22': 390,
u'2012-06-23': 390,
u'2012-06-24': 390,
u'2012-06-25': 391,
u'2012-06-26': 391,
u'2012-06-27': 391,
u'2012-06-28': 391,
u'2012-06-29': 391,
u'2012-06-30': 391,
u'2012-07-01': 391,
u'2012-07-02': 392,
u'2012-07-03': 392,
u'2012-07-04': 392,
u'2012-07-05': 392,
u'2012-07-06': 392}
The keys are Unicode dates and the values are integers. I would like to convert this into a pandas dataframe by having the dates and their corresponding values as two separate columns. Example: col1: Dates col2: DateValue (the dates are still Unicode and datevalues are still integers)
Date DateValue
0 2012-07-01 391
1 2012-07-02 392
2 2012-07-03 392
. 2012-07-04 392
. ... ...
. ... ...
Any help in this direction would be much appreciated. I am unable to find resources on the pandas docs to help me with this.
I know one solution might be to convert each key-value pair in this dict, into a dict so the entire structure becomes a dict of dicts, and then we can add each row individually to the dataframe. But I want to know if there is an easier way and a more direct way to do this.
So far I have tried converting the dict into a series object but this doesn't seem to maintain the relationship between the columns:
s = Series(my_dict,index=my_dict.keys())
A:
The error here, is since calling the DataFrame constructor with scalar values (where it expects values to be a list/dict/... i.e. have multiple columns):
pd.DataFrame(d)
ValueError: If using all scalar values, you must must pass an index
You could take the items from the dictionary (i.e. the key-value pairs):
In [11]: pd.DataFrame(d.items()) # or list(d.items()) in python 3
Out[11]:
0 1
0 2012-07-02 392
1 2012-07-06 392
2 2012-06-29 391
3 2012-06-28 391
...
In [12]: pd.DataFrame(d.items(), columns=['Date', 'DateValue'])
Out[12]:
Date DateValue
0 2012-07-02 392
1 2012-07-06 392
2 2012-06-29 391
But I think it makes more sense to pass the Series constructor:
In [21]: s = pd.Series(d, name='DateValue')
Out[21]:
2012-06-08 388
2012-06-09 388
2012-06-10 388
In [22]: s.index.name = 'Date'
In [23]: s.reset_index()
Out[23]:
Date DateValue
0 2012-06-08 388
1 2012-06-09 388
2 2012-06-10 388
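If you also want the rows in chronological order, as in the desired output, you can sort after construction; ISO-format date strings like these sort correctly even as plain strings:
df = pd.DataFrame(d.items(), columns=['Date', 'DateValue'])
df = df.sort_values('Date').reset_index(drop=True)  # lexicographic == chronological for ISO dates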
A:
When converting a dictionary into a pandas dataframe where you want the keys to be the columns of said dataframe and the values to be the row values, you can do simply put brackets around the dictionary like this:
>>> dict_ = {'key 1': 'value 1', 'key 2': 'value 2', 'key 3': 'value 3'}
>>> pd.DataFrame([dict_])
key 1 key 2 key 3
0 value 1 value 2 value 3
It's saved me some headaches so I hope it helps someone out there!
EDIT: In the pandas docs one option for the data parameter in the DataFrame constructor is a list of dictionaries. Here we're passing a list with one dictionary in it.
A:
As explained on another answer using pandas.DataFrame() directly here will not act as you think.
What you can do is use pandas.DataFrame.from_dict with orient='index':
In[7]: pandas.DataFrame.from_dict({u'2012-06-08': 388,
u'2012-06-09': 388,
u'2012-06-10': 388,
u'2012-06-11': 389,
u'2012-06-12': 389,
.....
u'2012-07-05': 392,
u'2012-07-06': 392}, orient='index', columns=['foo'])
Out[7]:
foo
2012-06-08 388
2012-06-09 388
2012-06-10 388
2012-06-11 389
2012-06-12 389
........
2012-07-05 392
2012-07-06 392
A:
Pass the items of the dictionary to the DataFrame constructor, and give the column names. After that parse the Date column to get Timestamp values.
Note the difference between python 2.x and 3.x:
In python 2.x:
df = pd.DataFrame(data.items(), columns=['Date', 'DateValue'])
df['Date'] = pd.to_datetime(df['Date'])
In Python 3.x: (requiring an additional 'list')
df = pd.DataFrame(list(data.items()), columns=['Date', 'DateValue'])
df['Date'] = pd.to_datetime(df['Date'])
A:
P.S. In particular, I've found the row-oriented examples helpful, since that's often how records are stored externally.
https://pbpython.com/pandas-list-dict.html
A:
Pandas have built-in function for conversion of dict to data frame.
pd.DataFrame.from_dict(dictionaryObject,orient='index')
For your data you can convert it like below:
import pandas as pd
your_dict={u'2012-06-08': 388,
u'2012-06-09': 388,
u'2012-06-10': 388,
u'2012-06-11': 389,
u'2012-06-12': 389,
u'2012-06-13': 389,
u'2012-06-14': 389,
u'2012-06-15': 389,
u'2012-06-16': 389,
u'2012-06-17': 389,
u'2012-06-18': 390,
u'2012-06-19': 390,
u'2012-06-20': 390,
u'2012-06-21': 390,
u'2012-06-22': 390,
u'2012-06-23': 390,
u'2012-06-24': 390,
u'2012-06-25': 391,
u'2012-06-26': 391,
u'2012-06-27': 391,
u'2012-06-28': 391,
u'2012-06-29': 391,
u'2012-06-30': 391,
u'2012-07-01': 391,
u'2012-07-02': 392,
u'2012-07-03': 392,
u'2012-07-04': 392,
u'2012-07-05': 392,
u'2012-07-06': 392}
your_df_from_dict=pd.DataFrame.from_dict(your_dict,orient='index')
print(your_df_from_dict)
A:
This is what worked for me, since I wanted to have a separate index column
df = pd.DataFrame.from_dict(some_dict, orient="index").reset_index()
df.columns = ['A', 'B']
A:
pd.DataFrame({'date' : dict_dates.keys() , 'date_value' : dict_dates.values() })
A:
This is how it worked for me :
df= pd.DataFrame([d.keys(), d.values()]).T
df.columns= ['keys', 'values'] # call them whatever you like
I hope this helps
A:
The simplest way I found is to create an empty dataframe and append the dict.
You need to tell pandas not to care about the index, otherwise you'll get the error: TypeError: Can only append a dict if ignore_index=True. (Note: DataFrame.append was deprecated and later removed in pandas 2.x; pd.concat is the modern replacement.)
import pandas as pd
mydict = {'foo': 'bar'}
df = pd.DataFrame()
df = df.append(mydict, ignore_index=True)
A:
You can also just pass the keys and values of the dictionary to the new dataframe, like so:
import pandas as pd
myDict = {<the_dict_from_your_example>}
df = pd.DataFrame()
df['Date'] = myDict.keys()
df['DateValue'] = myDict.values()
A:
In my case I wanted keys and values of a dict to be columns and values of DataFrame. So the only thing that worked for me was:
data = {'adjust_power': 'y', 'af_policy_r_submix_prio_adjust': '[null]', 'af_rf_info': '[null]', 'bat_ac': '3500', 'bat_capacity': '75'}
columns = list(data.keys())
values = list(data.values())
arr_len = len(values)
pd.DataFrame(np.array(values, dtype=object).reshape(1, arr_len), columns=columns)
A:
Accepts a dict as argument and returns a dataframe with the keys of the dict as index and values as a column.
def dict_to_df(d):
df=pd.DataFrame(d.items())
df.set_index(0, inplace=True)
return df
A:
I think that you can make some changes to your data format when you create the dictionary; then you can easily convert it to a DataFrame:
input:
a={'Dates':['2012-06-08','2012-06-10'],'Date_value':[388,389]}
output:
{'Date_value': [388, 389], 'Dates': ['2012-06-08', '2012-06-10']}
input:
aframe=DataFrame(a)
output: will be your DataFrame
You just need to do some text editing somewhere like Sublime or maybe Excel.
A:
d = {'Date': list(yourDict.keys()),'Date_Values': list(yourDict.values())}
df = pandas.DataFrame(data=d)
If you don't encapsulate yourDict.keys() inside of list() , then you will end up with all of your keys and values being placed in every row of every column. Like this:
Date \
0 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...
1 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...
2 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...
3 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...
4 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...
But by adding list() then the result looks like this:
Date Date_Values
0 2012-06-08 388
1 2012-06-09 388
2 2012-06-10 388
3 2012-06-11 389
4 2012-06-12 389
...
A:
The point is how to put each element into a DataFrame.
Row-wise:
'pd.DataFrame(dic.items(), columns=['Date', 'Value'])'
or columns-wise:
'pd.DataFrame([dic])'
A:
I have run into this several times and have an example dictionary that I created from a function get_max_path(), which returns this sample dictionary:
{2: 0.3097502930247044,
3: 0.4413177909384636,
4: 0.5197224051562838,
5: 0.5717654946470984,
6: 0.6063959031223476,
7: 0.6365209824708223,
8: 0.655918861281035,
9: 0.680844386645206}
To convert this to a dataframe, I ran the following:
df = pd.DataFrame.from_dict(get_max_path(2), orient = 'index').reset_index()
Returns a simple two column dataframe with a separate index:
index 0
0 2 0.309750
1 3 0.441318
Just rename the columns using df.rename(columns={'index': 'Column1', 0: 'Column2'}, inplace=True)
A:
%timeit result on a common dictionary and pd.DataFrame.from_dict() is the clear winner.
%timeit cols_df = pd.DataFrame.from_dict(clu_meta,orient='index',columns=['Columns_fromUser'])
214 µs ± 9.38 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit pd.DataFrame([clu_meta])
943 µs ± 10.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit pd.DataFrame(clu_meta.items(), columns=['Default_colNames', 'Columns_fromUser'])
285 µs ± 7.91 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
|
Convert Python dict into a dataframe
|
I have a Python dictionary like the following:
{u'2012-06-08': 388,
u'2012-06-09': 388,
u'2012-06-10': 388,
u'2012-06-11': 389,
u'2012-06-12': 389,
u'2012-06-13': 389,
u'2012-06-14': 389,
u'2012-06-15': 389,
u'2012-06-16': 389,
u'2012-06-17': 389,
u'2012-06-18': 390,
u'2012-06-19': 390,
u'2012-06-20': 390,
u'2012-06-21': 390,
u'2012-06-22': 390,
u'2012-06-23': 390,
u'2012-06-24': 390,
u'2012-06-25': 391,
u'2012-06-26': 391,
u'2012-06-27': 391,
u'2012-06-28': 391,
u'2012-06-29': 391,
u'2012-06-30': 391,
u'2012-07-01': 391,
u'2012-07-02': 392,
u'2012-07-03': 392,
u'2012-07-04': 392,
u'2012-07-05': 392,
u'2012-07-06': 392}
The keys are Unicode dates and the values are integers. I would like to convert this into a pandas dataframe by having the dates and their corresponding values as two separate columns. Example: col1: Dates col2: DateValue (the dates are still Unicode and datevalues are still integers)
Date DateValue
0 2012-07-01 391
1 2012-07-02 392
2 2012-07-03 392
. 2012-07-04 392
. ... ...
. ... ...
Any help in this direction would be much appreciated. I am unable to find resources on the pandas docs to help me with this.
I know one solution might be to convert each key-value pair in this dict, into a dict so the entire structure becomes a dict of dicts, and then we can add each row individually to the dataframe. But I want to know if there is an easier way and a more direct way to do this.
So far I have tried converting the dict into a series object but this doesn't seem to maintain the relationship between the columns:
s = Series(my_dict,index=my_dict.keys())
|
[
"The error here, is since calling the DataFrame constructor with scalar values (where it expects values to be a list/dict/... i.e. have multiple columns):\npd.DataFrame(d)\nValueError: If using all scalar values, you must must pass an index\n\nYou could take the items from the dictionary (i.e. the key-value pairs):\nIn [11]: pd.DataFrame(d.items()) # or list(d.items()) in python 3\nOut[11]:\n 0 1\n0 2012-07-02 392\n1 2012-07-06 392\n2 2012-06-29 391\n3 2012-06-28 391\n...\n\nIn [12]: pd.DataFrame(d.items(), columns=['Date', 'DateValue'])\nOut[12]:\n Date DateValue\n0 2012-07-02 392\n1 2012-07-06 392\n2 2012-06-29 391\n\nBut I think it makes more sense to pass the Series constructor:\nIn [21]: s = pd.Series(d, name='DateValue')\nOut[21]:\n2012-06-08 388\n2012-06-09 388\n2012-06-10 388\n\nIn [22]: s.index.name = 'Date'\n\nIn [23]: s.reset_index()\nOut[23]:\n Date DateValue\n0 2012-06-08 388\n1 2012-06-09 388\n2 2012-06-10 388\n\n",
"When converting a dictionary into a pandas dataframe where you want the keys to be the columns of said dataframe and the values to be the row values, you can do simply put brackets around the dictionary like this:\n>>> dict_ = {'key 1': 'value 1', 'key 2': 'value 2', 'key 3': 'value 3'}\n>>> pd.DataFrame([dict_])\n\n key 1 key 2 key 3\n0 value 1 value 2 value 3\n\nIt's saved me some headaches so I hope it helps someone out there!\nEDIT: In the pandas docs one option for the data parameter in the DataFrame constructor is a list of dictionaries. Here we're passing a list with one dictionary in it.\n",
"As explained on another answer using pandas.DataFrame() directly here will not act as you think.\nWhat you can do is use pandas.DataFrame.from_dict with orient='index': \nIn[7]: pandas.DataFrame.from_dict({u'2012-06-08': 388,\n u'2012-06-09': 388,\n u'2012-06-10': 388,\n u'2012-06-11': 389,\n u'2012-06-12': 389,\n .....\n u'2012-07-05': 392,\n u'2012-07-06': 392}, orient='index', columns=['foo'])\nOut[7]: \n foo\n2012-06-08 388\n2012-06-09 388\n2012-06-10 388\n2012-06-11 389\n2012-06-12 389\n........\n2012-07-05 392\n2012-07-06 392\n\n",
"Pass the items of the dictionary to the DataFrame constructor, and give the column names. After that parse the Date column to get Timestamp values.\nNote the difference between python 2.x and 3.x:\nIn python 2.x:\ndf = pd.DataFrame(data.items(), columns=['Date', 'DateValue'])\ndf['Date'] = pd.to_datetime(df['Date'])\n\nIn Python 3.x: (requiring an additional 'list')\ndf = pd.DataFrame(list(data.items()), columns=['Date', 'DateValue'])\ndf['Date'] = pd.to_datetime(df['Date'])\n\n",
"\np.s. in particular, I've found Row-Oriented examples helpful; since often that how records are stored externally.\nhttps://pbpython.com/pandas-list-dict.html\n",
"Pandas have built-in function for conversion of dict to data frame.\n\npd.DataFrame.from_dict(dictionaryObject,orient='index')\n\nFor your data you can convert it like below:\nimport pandas as pd\nyour_dict={u'2012-06-08': 388,\n u'2012-06-09': 388,\n u'2012-06-10': 388,\n u'2012-06-11': 389,\n u'2012-06-12': 389,\n u'2012-06-13': 389,\n u'2012-06-14': 389,\n u'2012-06-15': 389,\n u'2012-06-16': 389,\n u'2012-06-17': 389,\n u'2012-06-18': 390,\n u'2012-06-19': 390,\n u'2012-06-20': 390,\n u'2012-06-21': 390,\n u'2012-06-22': 390,\n u'2012-06-23': 390,\n u'2012-06-24': 390,\n u'2012-06-25': 391,\n u'2012-06-26': 391,\n u'2012-06-27': 391,\n u'2012-06-28': 391,\n u'2012-06-29': 391,\n u'2012-06-30': 391,\n u'2012-07-01': 391,\n u'2012-07-02': 392,\n u'2012-07-03': 392,\n u'2012-07-04': 392,\n u'2012-07-05': 392,\n u'2012-07-06': 392}\n\nyour_df_from_dict=pd.DataFrame.from_dict(your_dict,orient='index')\nprint(your_df_from_dict)\n\n",
"This is what worked for me, since I wanted to have a separate index column\ndf = pd.DataFrame.from_dict(some_dict, orient=\"index\").reset_index()\ndf.columns = ['A', 'B']\n\n",
"pd.DataFrame({'date' : dict_dates.keys() , 'date_value' : dict_dates.values() })\n\n",
"This is how it worked for me : \ndf= pd.DataFrame([d.keys(), d.values()]).T\ndf.columns= ['keys', 'values'] # call them whatever you like\n\nI hope this helps\n",
"The simplest way I found is to create an empty dataframe and append the dict.\nYou need to tell panda's not to care about the index, otherwise you'll get the error: TypeError: Can only append a dict if ignore_index=True\nimport pandas as pd\nmydict = {'foo': 'bar'}\ndf = pd.DataFrame()\ndf = df.append(mydict, ignore_index=True)\n\n",
"You can also just pass the keys and values of the dictionary to the new dataframe, like so:\nimport pandas as pd\n\nmyDict = {<the_dict_from_your_example>]\ndf = pd.DataFrame()\ndf['Date'] = myDict.keys()\ndf['DateValue'] = myDict.values()\n\n",
"In my case I wanted keys and values of a dict to be columns and values of DataFrame. So the only thing that worked for me was:\ndata = {'adjust_power': 'y', 'af_policy_r_submix_prio_adjust': '[null]', 'af_rf_info': '[null]', 'bat_ac': '3500', 'bat_capacity': '75'} \n\ncolumns = list(data.keys())\nvalues = list(data.values())\narr_len = len(values)\n\npd.DataFrame(np.array(values, dtype=object).reshape(1, arr_len), columns=columns)\n\n",
"Accepts a dict as argument and returns a dataframe with the keys of the dict as index and values as a column.\ndef dict_to_df(d):\n df=pd.DataFrame(d.items())\n df.set_index(0, inplace=True)\n return df\n\n",
"I think that you can make some changes in your data format when you create dictionary, then you can easily convert it to DataFrame:\ninput:\na={'Dates':['2012-06-08','2012-06-10'],'Date_value':[388,389]}\n\noutput:\n{'Date_value': [388, 389], 'Dates': ['2012-06-08', '2012-06-10']}\n\ninput:\naframe=DataFrame(a)\n\noutput: will be your DataFrame\nYou just need to use some text editing in somewhere like Sublime or maybe Excel.\n",
"d = {'Date': list(yourDict.keys()),'Date_Values': list(yourDict.values())}\ndf = pandas.DataFrame(data=d)\n\nIf you don't encapsulate yourDict.keys() inside of list() , then you will end up with all of your keys and values being placed in every row of every column. Like this:\nDate \\\n0 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...\n1 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...\n2 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...\n3 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...\n4 (2012-06-08, 2012-06-09, 2012-06-10, 2012-06-1...\nBut by adding list() then the result looks like this:\nDate Date_Values\n0 2012-06-08 388\n1 2012-06-09 388\n2 2012-06-10 388\n3 2012-06-11 389\n4 2012-06-12 389\n...\n",
"The point is how to put each element in a dataFarame.\nRow-wise:\n'pd.DataFrame(dic.items(), columns=['Date', 'Value'])'\n\nor columns-wise:\n'pd.DataFrame([dic])'\n\n",
"I have run into this several times and have an example dictionary that I created from a function get_max_Path(), and it returns the sample dictionary:\n{2: 0.3097502930247044,\n 3: 0.4413177909384636,\n 4: 0.5197224051562838,\n 5: 0.5717654946470984,\n 6: 0.6063959031223476,\n 7: 0.6365209824708223,\n 8: 0.655918861281035,\n 9: 0.680844386645206}\nTo convert this to a dataframe, I ran the following:\ndf = pd.DataFrame.from_dict(get_max_path(2), orient = 'index').reset_index()\nReturns a simple two column dataframe with a separate index:\nindex 0\n0 2 0.309750\n1 3 0.441318\nJust rename the columns using f.rename(columns={'index': 'Column1', 0: 'Column2'}, inplace=True)\n",
"%timeit result on a common dictionary and pd.DataFrame.from_dict() is the clear winner.\n%timeit cols_df = pd.DataFrame.from_dict(clu_meta,orient='index',columns=['Columns_fromUser'])\n214 µs ± 9.38 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\n%timeit pd.DataFrame([clu_meta])\n943 µs ± 10.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\n%timeit pd.DataFrame(clu_meta.items(), columns=['Default_colNames', 'Columns_fromUser'])\n285 µs ± 7.91 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\n"
] |
[
782,
326,
165,
84,
56,
16,
16,
13,
9,
9,
6,
6,
5,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0018837262_dataframe_pandas_python.txt
|
Q:
How to split complex JSON file into multiple files by Python
I am currently splitting a JSON file.
The structure of the JSON file is like this:
{
"id": 2131424,
"file": "video_2131424_1938263.mp4",
"metadata": {
"width": 3840,
"height": 2160,
"duration": 312.83,
"fps": 30,
"frames": 9385,
"created": "Sun Jan 17 17:48:52 2021"
},
"frames": [
{
"number": 207,
"image": "frame_207.jpg",
"annotations": [
{
"label": {
"x": 730,
"y": 130,
"width": 62,
"height": 152
},
"category": {
"code": "child",
"attributes": [
{
"code": "global_id",
"value": "7148"
}
]
}
},
{
"label": {
"x": 815,
"y": 81,
"width": 106,
"height": 197
},
"category": {
"code": "person",
"attributes": []
}
}
]
},
{
"number": 221,
"image": "frame_221.jpg",
"annotations": [
{
"label": {
"x": 730,
"y": 130,
"width": 64,
"height": 160
},
"category": {
"code": "child",
"attributes": [
{
"code": "global_id",
"value": "7148"
}
]
}
},
{
"label": {
"x": 819,
"y": 82,
"width": 106,
"height": 200
},
"category": {
"code": "person",
"attributes": []
}
}
]
},
{
"number": 236,
"image": "frame_236.jpg",
"annotations": [
{
"label": {
"x": 731,
"y": 135,
"width": 74,
"height": 160
},
"category": {
"code": "child",
"attributes": [
{
"code": "global_id",
"value": "7148"
}
]
}
},
{
"label": {
"x": 821,
"y": 83,
"width": 106,
"height": 206
},
"category": {
"code": "person",
"attributes": []
}
}
]
},
I have to extract [x, y, width, height] from each label.
I tried some code like this:
file = json.load(open('annotation_2131424.json'))
file['frames'][i]['annotations'][j]['label']['x']
But I cannot split the JSON.
I tried it like this but it doesn't run...
A:
I hope I've understood your question right. To get x, y, width, height from each label (dct is your dictionary from the question):
out = [
[
[
a["label"]["x"],
a["label"]["y"],
a["label"]["width"],
a["label"]["height"],
]
for a in frame["annotations"]
]
for frame in dct["frames"]
]
print(out)
Prints:
[
[[730, 130, 62, 152], [815, 81, 106, 197]],
[[730, 130, 64, 160], [819, 82, 106, 200]],
[[731, 135, 74, 160], [821, 83, 106, 206]],
]
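And since the title mentions splitting into multiple files, a minimal sketch that writes one JSON file per frame (the file-name pattern is just an assumption):
import json

for frame in dct["frames"]:
    with open(f"frame_{frame['number']}.json", "w") as out:
        json.dump(frame, out, indent=2)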
|
How to split complex JSON file into multiple files by Python
|
I am currently splitting a JSON file.
The structure of the JSON file is like this:
{
"id": 2131424,
"file": "video_2131424_1938263.mp4",
"metadata": {
"width": 3840,
"height": 2160,
"duration": 312.83,
"fps": 30,
"frames": 9385,
"created": "Sun Jan 17 17:48:52 2021"
},
"frames": [
{
"number": 207,
"image": "frame_207.jpg",
"annotations": [
{
"label": {
"x": 730,
"y": 130,
"width": 62,
"height": 152
},
"category": {
"code": "child",
"attributes": [
{
"code": "global_id",
"value": "7148"
}
]
}
},
{
"label": {
"x": 815,
"y": 81,
"width": 106,
"height": 197
},
"category": {
"code": "person",
"attributes": []
}
}
]
},
{
"number": 221,
"image": "frame_221.jpg",
"annotations": [
{
"label": {
"x": 730,
"y": 130,
"width": 64,
"height": 160
},
"category": {
"code": "child",
"attributes": [
{
"code": "global_id",
"value": "7148"
}
]
}
},
{
"label": {
"x": 819,
"y": 82,
"width": 106,
"height": 200
},
"category": {
"code": "person",
"attributes": []
}
}
]
},
{
"number": 236,
"image": "frame_236.jpg",
"annotations": [
{
"label": {
"x": 731,
"y": 135,
"width": 74,
"height": 160
},
"category": {
"code": "child",
"attributes": [
{
"code": "global_id",
"value": "7148"
}
]
}
},
{
"label": {
"x": 821,
"y": 83,
"width": 106,
"height": 206
},
"category": {
"code": "person",
"attributes": []
}
}
]
},
I have to extract [x, y, width, height] from each label.
I tried some code like this:
file = json.load(open('annotation_2131424.json'))
file['frames'][i]['annotations'][j]['label']['x']
But I cannot split the JSON.
I tried it like this but it doesn't run...
|
[
"I hope I've understood your question right. To get x, y, width, height from each label (dct is your dictionary from the question):\nout = [\n [\n [\n a[\"label\"][\"x\"],\n a[\"label\"][\"y\"],\n a[\"label\"][\"width\"],\n a[\"label\"][\"height\"],\n ]\n for a in frame[\"annotations\"]\n ]\n for frame in dct[\"frames\"]\n]\n\nprint(out)\n\nPrints:\n[\n [[730, 130, 62, 152], [815, 81, 106, 197]],\n [[730, 130, 64, 160], [819, 82, 106, 200]],\n [[731, 135, 74, 160], [821, 83, 106, 206]],\n]\n\n"
] |
[
0
] |
[] |
[] |
[
"json",
"python"
] |
stackoverflow_0074567704_json_python.txt
|
Q:
how to save parsed data into two different lists
I have this code:
lokk = []
nums = 7
for _ in range(nums):
inner = driver.find_element_by_xpath(
"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute(
"innerHTML")
lokk.append(inner)
time.sleep()
print(lokk)
which provides me with this data:
['1', '2', '3', '4', '5', '6', '7']
What I want to do is save that data into two different lists: the first list containing the first six values, e.g. ['1', '2', '3', '4', '5', '6'], and the second list containing all seven values, e.g. ['1', '2', '3', '4', '5', '6', '7']. However, I want the next sample of data collected to have the last value of the second list as the first value of the next pair of lists, like so: ['7', '8', '9', '10', '11', '12', '13'].
I thought the code below would enable me to get the data into the different lists like I wanted, but I soon realized that by the time it goes to fetch the second set of data for the second list of seven values, the data would have changed, and that's not what I want.
lok = []
num = 6
for _ in range(num):
inner = driver.find_element_by_xpath(
"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute(
"innerHTML")
lok.append(inner)
time.sleep(10)
print(lok)
lokk = []
nums = 7
for _ in range(nums):
inner = driver.find_element_by_xpath(
"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute(
"innerHTML")
lokk.append(inner)
time.sleep()
print(lokk)
Another flaw I saw in this is that when it is time to run the process again later, the seventh value would not be the first value for the new set of lists.
Meaning that instead of:
listA = ['1', '2', '3', '4', '5', '6']
listB = ['1', '2', '3', '4', '5', '6', '7']
ListC = ['7', '8', '9', '10', '11', '12']
listD = ['7', '8', '9', '10', '11', '12', '13']
it would be:
listA = ['1', '2', '3', '4', '5', '6']
listB = ['1', '2', '3', '4', '5', '6', '7']
ListC = ['8', '9', '10', '11', '12', '13']
listD = ['8', '9', '10', '11', '12', '13', '14']`
I really hope I was clear enough about what I am looking for assistance with; if not, please let me know.
Please help :(
A:
You can achieve this in many ways, try this:
For ListC:
lok_c = []
num_c = 13
for i in range(num_c):
    inner = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute("innerHTML")
    if i >= 7:  # only keep the values past the ones already collected
        lok_c.append(inner)
    time.sleep(10)
print(lok_c)
For ListD:
lok_d = []
num_d = 14
for i in range(num_d):
    inner = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute("innerHTML")
    if i >= 7:
        lok_d.append(inner)
    time.sleep(10)
print(lok_d)
Adjust the indentation and the index threshold if I missed anything.
|
how to save parsed data into two different lists
|
I have this code:
lokk = []
nums = 7
for _ in range(nums):
inner = driver.find_element_by_xpath(
"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute(
"innerHTML")
lokk.append(inner)
time.sleep()
print(lokk)
which provides me with this data:
['1', '2', '3', '4', '5', '6', '7']
What I want to do is save that data into two different lists: the first list containing the first six values, e.g. ['1', '2', '3', '4', '5', '6'], and the second list containing all seven values, e.g. ['1', '2', '3', '4', '5', '6', '7']. However, I want the next sample of data collected to have the last value of the second list as the first value of the next pair of lists, like so: ['7', '8', '9', '10', '11', '12', '13'].
I thought the code below would enable me to get the data into the different lists like I wanted, but I soon realized that by the time it goes to fetch the second set of data for the second list of seven values, the data would have changed, and that's not what I want.
lok = []
num = 6
for _ in range(num):
inner = driver.find_element_by_xpath(
"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute(
"innerHTML")
lok.append(inner)
time.sleep(10)
print(lok)
lokk = []
nums = 7
for _ in range(nums):
inner = driver.find_element_by_xpath(
"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]").get_attribute(
"innerHTML")
lokk.append(inner)
time.sleep()
print(lokk)
Another flaw I saw in this is that when it is time to run the process again later, the seventh value would not be the first value for the new set of lists.
Meaning that instead of:
listA = ['1', '2', '3', '4', '5', '6']
listB = ['1', '2', '3', '4', '5', '6', '7']
ListC = ['7', '8', '9', '10', '11', '12']
listD = ['7', '8', '9', '10', '11', '12', '13']
it would be:
listA = ['1', '2', '3', '4', '5', '6']
listB = ['1', '2', '3', '4', '5', '6', '7']
ListC = ['8', '9', '10', '11', '12', '13']
listD = ['8', '9', '10', '11', '12', '13', '14']`
I really hope I was clear enough about what I am looking for assistance with; if not, please let me know.
Please help :(
|
[
"You can achieve this in many ways, try this:\nFor ListC:\n lok_c = []\n num_c = 13\n for _ in range(num_c):\n inner = driver.find_element_by_xpath(\"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]\").\nget_attribute(\"innerHTML\")\n if num_c > 7:\n lok_c.append(inner)\n time.sleep()\n print(lok_c)\n\nFor ListD:\n lok_d = []\n num_d = 14\n for _ in range(num_d):\n inner = driver.find_element_by_xpath(\"/html/body/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div[2]/div/div/div[2]/div[5]/span[1]\").\nget_attribute(\"innerHTML\")\n if num_d > 7:\n lok_c.append(inner)\n time.sleep()\n print(lok_d)\n\nCorrect the indentation and variable name, if I missed anything.\n"
] |
[
0
] |
[] |
[] |
[
"list",
"parsing",
"python",
"selenium"
] |
stackoverflow_0074566611_list_parsing_python_selenium.txt
|
Q:
Order a string by number in the string - Python
I was told to solve this, but I don't have an optimal solution.
Let's say I have one string. This string is something like this:
string= 'House 1 - New & Painted
House 6
House 2 - Used
House 4'
Now, I have to build a function that orders this string taking the house number into account, so the new string has to be something like this:
string= 'House 1 - New & Painted
House 2 - Used
House 4
House 6'
How can I do this in a function?
A:
You can use .splitlines() to get list of lines and use str.split to find and convert the number to integer in key function:
s = """\
House 1 - New & Painted
House 6
House 2 - Used
House 4"""
s = "\n".join(sorted(s.splitlines(), key=lambda v: int(v.split()[1])))
print(s)
Prints:
House 1 - New & Painted
House 2 - Used
House 4
House 6
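If some lines might not always have the number as the second token, a regex-based key is a bit more robust; a sketch using re:
import re

def house_number(line):
    m = re.search(r'\d+', line)
    return int(m.group()) if m else 0  # lines without a number sort first

s = "\n".join(sorted(s.splitlines(), key=house_number))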
|
Order a string by number in the string - Python
|
I was told to solve this, but I don't have an optimal solution.
Let's say I have one string. This string is something like this:
string= 'House 1 - New & Painted
House 6
House 2 - Used
House 4'
Now, I have to build a function that orders this string taking the house number into account, so the new string has to be something like this:
string= 'House 1 - New & Painted
House 2 - Used
House 4
House 6'
How can I do this in a function?
|
[
"You can use .splitlines() to get list of lines and use str.split to find and convert the number to integer in key function:\ns = \"\"\"\\\nHouse 1 - New & Painted\nHouse 6\nHouse 2 - Used \nHouse 4\"\"\"\n\ns = \"\\n\".join(sorted(s.splitlines(), key=lambda v: int(v.split()[1])))\nprint(s)\n\nPrints:\nHouse 1 - New & Painted\nHouse 2 - Used \nHouse 4\nHouse 6\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"sorting"
] |
stackoverflow_0074567723_python_sorting.txt
|
Q:
Tkinter change paste command
I'm trying to change the paste command in my program. When we copy a table value from Excel, whether it's a vertical or horizontal line, it is converted to a vertical list of entries. The problem is that when I only want to paste a single value into a random entry line, it always pastes the value starting from the first entry line and not from the entry line that I selected. Is it also possible to create a function to select all entries with the mouse?
This is my code:
from tkinter import *
root=Tk()
d=[]
for i in range(4):
e=Entry(root,)
e.grid(row=i)
d.append(e)
def paste(event):
for entry in d:
entry.delete(0,'end')
data=root.clipboard_get().split()
for entry,i in zip(d,data):
if '\n':
entry.insert(0, i.split('\n'))
print(data)
elif '\t':
entry.insert(0, i.split('\t'))
print(data)
return 'break'
root.bind_all("<<Paste>>", paste)
root.mainloop()
Can you help me solve this problem?
Thank you!!
A:
It is because the for loop always starts from the first entry box. You need to find the index of the selected entry in the entry list d and paste the clipboard data starts from it:
def paste(event):
try:
# get selected entry
w = root.focus_get()
# get the index of the selected entry in the entry list
idx = d.index(w)
# get the data from clipboard and split them into list
data = root.clipboard_get()#.rstrip() # strip the trailing '\n'
#print(repr(data)) # for debug purpose
if '\t' in data:
data = data.split('\t')
elif '\n' in data:
data = data.split('\n')
# paste the data starts from the selected entry
for entry, txt in zip(d[idx:], data):
entry.delete(0, 'end')
entry.insert('end', txt)
return 'break'
except Exception as ex:
# something wrong, like no entry is selected
print(ex)
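As for the second part of the question, selecting all entries with one action: a hedged sketch, assuming "select all" means highlighting the text in every entry, and using Control-A as an arbitrary choice of binding:
def select_all(event=None):
    # highlight the full contents of every entry in the list d
    for entry in d:
        entry.select_range(0, 'end')
    return 'break'

root.bind_all('<Control-a>', select_all)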
|
Tkinter change paste command
|
I'm trying to change the paste command in my program. When we copy a table value from Excel, whether it's a vertical or horizontal line, it is converted to a vertical list of entries. The problem is that when I only want to paste a single value into a random entry line, it always pastes the value starting from the first entry line and not from the entry line that I selected. Is it also possible to create a function to select all entries with the mouse?
This is my code:
from tkinter import *
root=Tk()
d=[]
for i in range(4):
e=Entry(root,)
e.grid(row=i)
d.append(e)
def paste(event):
for entry in d:
entry.delete(0,'end')
data=root.clipboard_get().split()
for entry,i in zip(d,data):
if '\n':
entry.insert(0, i.split('\n'))
print(data)
elif '\t':
entry.insert(0, i.split('\t'))
print(data)
return 'break'
root.bind_all("<<Paste>>", paste)
root.mainloop()
Can you help me solve this problem?
Thank you!!
|
[
"It is because the for loop always starts from the first entry box. You need to find the index of the selected entry in the entry list d and paste the clipboard data starts from it:\ndef paste(event):\n try:\n # get selected entry\n w = root.focus_get()\n # get the index of the selected entry in the entry list\n idx = d.index(w)\n # get the data from clipboard and split them into list\n data = root.clipboard_get()#.rstrip() # strip the trailing '\\n'\n #print(repr(data)) # for debug purpose\n if '\\t' in data:\n data = data.split('\\t')\n elif '\\n' in data:\n data = data.split('\\n')\n # paste the data starts from the selected entry\n for entry, txt in zip(d[idx:], data):\n entry.delete(0, 'end')\n entry.insert('end', txt)\n return 'break'\n except Exception as ex:\n # something wrong, like no entry is selected\n print(ex)\n\n"
] |
[
0
] |
[] |
[] |
[
"paste",
"python",
"tkinter"
] |
stackoverflow_0074567678_paste_python_tkinter.txt
|
Q:
Numpy: How to unwrap of a matrix
I am hoping to reshape matrices in such a form
A =
[[1,2,3],
[4,5,6],
[7,8,9]]
B =
[[10,11,12],
[13,14,15],
[16,17,18]]
Z = [[1, 2, 3, 10, 11, 12],
[4, 5, 6, 13, 14, 15],
[7, 8, 9, 16, 17, 18]]
Where A and B are 3x3 matrices but Z is a 3x6 matrix. I'd like to be able to apply it to higher dimensions.
np.ravel returns a flattened array so I can't use that because the output matrix will then be
Z =
[[1 ,2 ,3, 4, 5, 6],
[7, 8, 9, 10 ,11, 12],
[13, 14, 15, 16, 17, 18]]
I can't use np.reshape to (6,6) either because it would flatten the array before, since it results in the same matrix as above. Looking for some way to implement this.
A:
Assume A and B are always the same shape; stacking them side by side along the second axis gives the desired Z:
np.hstack((A, B))
#array([[ 1,  2,  3, 10, 11, 12],
#       [ 4,  5,  6, 13, 14, 15],
#       [ 7,  8,  9, 16, 17, 18]])
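For higher-dimensional arrays the same idea generalizes with np.concatenate along the last axis (a sketch; A and B must agree on every axis except the one being joined):
Z = np.concatenate((A, B), axis=-1)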
|
Numpy: How to unwrap of a matrix
|
I am hoping to reshape matrices in such a form
A =
[[1,2,3],
[4,5,6],
[7,8,9]]
B =
[[10,11,12],
[13,14,15],
[16,17,18]]
Z = [[1, 2, 3, 10, 11, 12],
[4, 5, 6, 13, 14, 15],
[7, 8, 9, 16, 17, 18]]
Where A and B are 3x3 matrices but Z is a 3x6 matrix. I'd like to be able to apply it to higher dimensions.
np.ravel returns a flattened array so I can't use that because the output matrix will then be
Z =
[[1 ,2 ,3, 4, 5, 6],
[7, 8, 9, 10 ,11, 12],
[13, 14, 15, 16, 17, 18]]
I can't use np.reshape to (6,6) either because it would flatten the array before, since it results in the same matrix as above. Looking for some way to implement this.
|
[
"Assume A and B are always the same shape:\nnp.vstack((A, B)).reshape(len(A), -1)\n\n#array([[ 1, 2, 3, 4, 5, 6],\n# [ 7, 8, 9, 10, 11, 12],\n# [13, 14, 15, 16, 17, 18]])\n\n"
] |
[
1
] |
[] |
[] |
[
"matrix",
"numpy",
"python"
] |
stackoverflow_0074567647_matrix_numpy_python.txt
|
Q:
Python 3 - How to change the syntax of a "datetime.timedelta(seconds=xxx)" object?
below a simple example using the Python interpreter:
>>> import datetime
>>>
>>> time=datetime.timedelta(seconds=10)
>>> str(time)
'0:00:10'
>>>
how can I change the syntax of the time object when I convert it to a string? As a result, I want to see '00:00:10' and not '0:00:10'. I really don't understand why the leading zero in the hours block is missing; I want to always see two digits in each block. How can I do it?
A:
You have to create your custom formatter to do this.
Python3 variant of @gumption solution which is written in python2.
This is the link to his solution
Custom Formatter
from string import Template
class TimeDeltaTemp(Template):
delimiter = "%"
def strfdtime(dtime, formats):
day = {"D": dtime.days}
hours, rem = divmod(dtime.seconds, 3600)
minutes, seconds = divmod(rem, 60)
day["H"] = '{:02d}'.format(hours)
day["M"] = '{:02d}'.format(minutes)
day["S"] = '{:02d}'.format(seconds)
time = TimeDeltaTemp(formats.upper())
return time.substitute(**day)
Usage
import datetime
time=datetime.timedelta(seconds=10)
print(strfdtime(time, '%H:%M:%S'))
time=datetime.timedelta(seconds=30)
print(strfdtime(time, '%h:%m:%s'))
Output
00:00:10
00:00:30
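If the Template machinery is more than you need, a plain divmod-based formatter produces the same zero-padded output (a minimal sketch; note it folds days into hours via total_seconds()):
def strfdtime_simple(dtime):
    # same zero-padded HH:MM:SS output, without the custom Template class
    hours, rem = divmod(int(dtime.total_seconds()), 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(strfdtime_simple(datetime.timedelta(seconds=10)))  # 00:00:10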
|
Python 3 - How to change the syntax of a "datetime.timedelta(seconds=xxx)" object?
|
below a simple example using the Python interpreter:
>>> import datetime
>>>
>>> time=datetime.timedelta(seconds=10)
>>> str(time)
'0:00:10'
>>>
how can I change the syntax of the time object when I convert it to a string? As a result, I want to see '00:00:10' and not '0:00:10'. I really don't understand why the leading zero in the hours block is missing; I want to always see two digits in each block. How can I do it?
|
[
"You have to create your custom formatter to do this.\nPython3 variant of @gumption solution which is written in python2.\nThis is the link to his solution\nCustom Formatter\nfrom string import Template\n\nclass TimeDeltaTemp(Template):\n delimiter = \"%\"\n\ndef strfdtime(dtime, formats):\n day = {\"D\": dtime.days}\n hours, rem = divmod(dtime.seconds, 3600)\n minutes, seconds = divmod(rem, 60)\n day[\"H\"] = '{:02d}'.format(hours)\n day[\"M\"] = '{:02d}'.format(minutes)\n day[\"S\"] = '{:02d}'.format(seconds)\n time = TimeDeltaTemp(formats.upper())\n return time.substitute(**day)\n\nUsage\n\nimport datetime\n\ntime=datetime.timedelta(seconds=10)\nprint(strfdtime(time, '%H:%M:%S'))\ntime=datetime.timedelta(seconds=30)\nprint(strfdtime(time, '%h:%m:%s'))\n\n\nOutput\n00:00:10\n00:00:30\n\n"
] |
[
2
] |
[] |
[] |
[
"datetime",
"python",
"python_3.x",
"timedelta"
] |
stackoverflow_0074567716_datetime_python_python_3.x_timedelta.txt
|
Q:
Cannot install lightgbm==3.3.3 on Apple Silicon
Here is the full log of pip3 install lightgbm==3.3.3.
me % pip3 install lightgbm==3.3.3
Collecting lightgbm==3.3.3
Using cached lightgbm-3.3.3.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: wheel in /opt/homebrew/lib/python3.10/site-packages (from lightgbm==3.3.3) (0.37.1)
Collecting numpy
Using cached numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl (13.4 MB)
Collecting scipy
Using cached scipy-1.9.3-cp310-cp310-macosx_12_0_arm64.whl (28.5 MB)
Collecting scikit-learn!=0.22.0
Downloading scikit_learn-1.1.3-cp310-cp310-macosx_12_0_arm64.whl (7.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.7/7.7 MB 4.7 MB/s eta 0:00:00
Collecting threadpoolctl>=2.0.0
Downloading threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Collecting joblib>=1.0.0
Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Building wheels for collected packages: lightgbm
Building wheel for lightgbm (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [86 lines of output]
running bdist_wheel
/opt/homebrew/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib
creating build/lib/lightgbm
copying lightgbm/callback.py -> build/lib/lightgbm
copying lightgbm/compat.py -> build/lib/lightgbm
copying lightgbm/plotting.py -> build/lib/lightgbm
copying lightgbm/__init__.py -> build/lib/lightgbm
copying lightgbm/engine.py -> build/lib/lightgbm
copying lightgbm/dask.py -> build/lib/lightgbm
copying lightgbm/basic.py -> build/lib/lightgbm
copying lightgbm/libpath.py -> build/lib/lightgbm
copying lightgbm/sklearn.py -> build/lib/lightgbm
running egg_info
writing lightgbm.egg-info/PKG-INFO
writing dependency_links to lightgbm.egg-info/dependency_links.txt
writing requirements to lightgbm.egg-info/requires.txt
writing top-level names to lightgbm.egg-info/top_level.txt
reading manifest file 'lightgbm.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'build'
warning: no files found matching '*.so' under directory 'lightgbm'
warning: no files found matching '*.so' under directory 'compile'
warning: no files found matching '*.dll' under directory 'compile/Release'
warning: no files found matching '*.dll' under directory 'compile/windows/x64/DLL'
warning: no previously-included files matching '*.py[co]' found anywhere in distribution
warning: no previously-included files found matching 'compile/external_libs/compute/.git'
adding license file 'LICENSE'
writing manifest file 'lightgbm.egg-info/SOURCES.txt'
copying lightgbm/VERSION.txt -> build/lib/lightgbm
installing to build/bdist.macosx-13-arm64/wheel
running install
INFO:LightGBM:Starting to compile the library.
INFO:LightGBM:Starting to compile with CMake.
Traceback (most recent call last):
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 95, in silent_call
subprocess.check_call(cmd, stderr=log, stdout=log)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 364, in check_call
retcode = call(*popenargs, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call
with Popen(*popenargs, **kwargs) as p:
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 334, in <module>
setup(name='lightgbm',
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/opt/homebrew/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 335, in run
self.run_command('install')
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
self.distribution.run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 248, in run
compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi,
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 198, in compile_cpp
silent_call(cmake_cmd, raise_error=True, error_msg='Please install CMake and all required dependencies first')
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 99, in silent_call
raise Exception("\n".join((error_msg, LOG_NOTICE)))
Exception: Please install CMake and all required dependencies first
The full version of error log was saved into /Users/me/LightGBM_compilation.log
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for lightgbm
Running setup.py clean for lightgbm
Failed to build lightgbm
Installing collected packages: threadpoolctl, numpy, joblib, scipy, scikit-learn, lightgbm
Running setup.py install for lightgbm ... error
error: subprocess-exited-with-error
× Running setup.py install for lightgbm did not run successfully.
│ exit code: 1
╰─> [45 lines of output]
running install
/opt/homebrew/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
INFO:LightGBM:Starting to compile the library.
INFO:LightGBM:Starting to compile with CMake.
Traceback (most recent call last):
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 95, in silent_call
subprocess.check_call(cmd, stderr=log, stdout=log)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 364, in check_call
retcode = call(*popenargs, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call
with Popen(*popenargs, **kwargs) as p:
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 334, in <module>
setup(name='lightgbm',
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 248, in run
compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi,
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 198, in compile_cpp
silent_call(cmake_cmd, raise_error=True, error_msg='Please install CMake and all required dependencies first')
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 99, in silent_call
raise Exception("\n".join((error_msg, LOG_NOTICE)))
Exception: Please install CMake and all required dependencies first
The full version of error log was saved into /Users/me/LightGBM_compilation.log
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lightgbm
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
Is there a workaround for this?
A:
When you run pip install lightgbm and see this message in logs:
Building wheels for collected packages: lightgbm
it means that there is not a pre-compiled binary (i.e. wheel) available matching your platform (operating system + architecture + Python version), and that LightGBM needs to be built from source.
lightgbm is a Python package wrapping lib_lightgbm, a C++ library with a C API. So "built from source" for lightgbm means compiling that C/C++ code, which for LightGBM requires:
C and C++ compilers
the CMake build system
an installation of OpenMP
Those components are the "CMake and all required dependencies" referred to in the error message.
On macOS, you should already have a C/C++ compiler installed (clang) by default. To get CMake and OpenMP, run the following.
brew install cmake libomp
NOTE: lightgbm v3.3.3 and older does not support the newest version of OpenMP available as of this writing (v15.x). That was fixed in microsoft/LightGBM#5563. If you end up with OpenMP >=15.0 and lightgbm>=4.0 is not yet available from PyPI, either downgrade OpenMP or build a development version of lightgbm (see "Install from GitHub" in the docs).
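After installing the build dependencies, you can quickly confirm that pip's build subprocess will be able to find cmake before retrying the install (a small sanity check, not part of LightGBM's own tooling):
import shutil
# should print a path such as /opt/homebrew/bin/cmake, not None
print(shutil.which("cmake"))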
|
Cannot install lightgbm==3.3.3 on Apple Silicon
|
Here is the full log of pip3 install lightgbm==3.3.3.
me % pip3 install lightgbm==3.3.3
Collecting lightgbm==3.3.3
Using cached lightgbm-3.3.3.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: wheel in /opt/homebrew/lib/python3.10/site-packages (from lightgbm==3.3.3) (0.37.1)
Collecting numpy
Using cached numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl (13.4 MB)
Collecting scipy
Using cached scipy-1.9.3-cp310-cp310-macosx_12_0_arm64.whl (28.5 MB)
Collecting scikit-learn!=0.22.0
Downloading scikit_learn-1.1.3-cp310-cp310-macosx_12_0_arm64.whl (7.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.7/7.7 MB 4.7 MB/s eta 0:00:00
Collecting threadpoolctl>=2.0.0
Downloading threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Collecting joblib>=1.0.0
Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Building wheels for collected packages: lightgbm
Building wheel for lightgbm (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [86 lines of output]
running bdist_wheel
/opt/homebrew/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib
creating build/lib/lightgbm
copying lightgbm/callback.py -> build/lib/lightgbm
copying lightgbm/compat.py -> build/lib/lightgbm
copying lightgbm/plotting.py -> build/lib/lightgbm
copying lightgbm/__init__.py -> build/lib/lightgbm
copying lightgbm/engine.py -> build/lib/lightgbm
copying lightgbm/dask.py -> build/lib/lightgbm
copying lightgbm/basic.py -> build/lib/lightgbm
copying lightgbm/libpath.py -> build/lib/lightgbm
copying lightgbm/sklearn.py -> build/lib/lightgbm
running egg_info
writing lightgbm.egg-info/PKG-INFO
writing dependency_links to lightgbm.egg-info/dependency_links.txt
writing requirements to lightgbm.egg-info/requires.txt
writing top-level names to lightgbm.egg-info/top_level.txt
reading manifest file 'lightgbm.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'build'
warning: no files found matching '*.so' under directory 'lightgbm'
warning: no files found matching '*.so' under directory 'compile'
warning: no files found matching '*.dll' under directory 'compile/Release'
warning: no files found matching '*.dll' under directory 'compile/windows/x64/DLL'
warning: no previously-included files matching '*.py[co]' found anywhere in distribution
warning: no previously-included files found matching 'compile/external_libs/compute/.git'
adding license file 'LICENSE'
writing manifest file 'lightgbm.egg-info/SOURCES.txt'
copying lightgbm/VERSION.txt -> build/lib/lightgbm
installing to build/bdist.macosx-13-arm64/wheel
running install
INFO:LightGBM:Starting to compile the library.
INFO:LightGBM:Starting to compile with CMake.
Traceback (most recent call last):
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 95, in silent_call
subprocess.check_call(cmd, stderr=log, stdout=log)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 364, in check_call
retcode = call(*popenargs, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call
with Popen(*popenargs, **kwargs) as p:
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 334, in <module>
setup(name='lightgbm',
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/opt/homebrew/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 335, in run
self.run_command('install')
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
self.distribution.run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 248, in run
compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi,
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 198, in compile_cpp
silent_call(cmake_cmd, raise_error=True, error_msg='Please install CMake and all required dependencies first')
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 99, in silent_call
raise Exception("\n".join((error_msg, LOG_NOTICE)))
Exception: Please install CMake and all required dependencies first
The full version of error log was saved into /Users/me/LightGBM_compilation.log
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for lightgbm
Running setup.py clean for lightgbm
Failed to build lightgbm
Installing collected packages: threadpoolctl, numpy, joblib, scipy, scikit-learn, lightgbm
Running setup.py install for lightgbm ... error
error: subprocess-exited-with-error
× Running setup.py install for lightgbm did not run successfully.
│ exit code: 1
╰─> [45 lines of output]
running install
/opt/homebrew/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
INFO:LightGBM:Starting to compile the library.
INFO:LightGBM:Starting to compile with CMake.
Traceback (most recent call last):
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 95, in silent_call
subprocess.check_call(cmd, stderr=log, stdout=log)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 364, in check_call
retcode = call(*popenargs, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call
with Popen(*popenargs, **kwargs) as p:
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 334, in <module>
setup(name='lightgbm',
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 248, in run
compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi,
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 198, in compile_cpp
silent_call(cmake_cmd, raise_error=True, error_msg='Please install CMake and all required dependencies first')
File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 99, in silent_call
raise Exception("\n".join((error_msg, LOG_NOTICE)))
Exception: Please install CMake and all required dependencies first
The full version of error log was saved into /Users/me/LightGBM_compilation.log
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lightgbm
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
Is there a workaround for this?
|
[
"When you run pip install lightgbm and see this message in logs:\n\nBuilding wheels for collected packages: lightgbm\n\nit means that there is not a pre-compiled binary (i.e. wheel) available matching your platform (operating system + architecture + Python version), and that LightGBM needs to be built from source.\nlightgbm is a Python package wrapping lib_lightgbm, a C++ library with a C API. So \"built from source\" for lightgbm means compiling that C/C++ code, which for LightGBM requires:\n\nC and C++ compilers\nthe CMake build system\nan installation of OpenMP\n\nThose components are the \"CMake and all required dependencies\" referred to in the error message.\nOn macOS, you should already have a C/C++ compiler installed (clang) by default. To get CMake and OpenMP, run the following.\nbrew install cmake libomp\n\nNOTE: lightgbm v3.3.3 and older does not support the newest version of OpenMP available as of this writing (v15.x). That was fixed in microsoft/LightGBM#5563. If you end up with OpenMP >=15.0 and lightgbm>=4.0 is not yet available from PyPI, either downgrade OpenMP or build a development version of lightgbm (see \"Install from GitHub\" in the docs).\n"
] |
[
0
] |
[] |
[] |
[
"apple_silicon",
"numpy",
"pip",
"python"
] |
stackoverflow_0074566704_apple_silicon_numpy_pip_python.txt
|
Q:
For a new Python project using the latest version of Python should I declare my types as upper or lower case
According to pep-0585 for the latest versions of Python, it appears we can use List and list interchangeably for type declarations. So which should I use?
Assume:
no requirement for backward compatibility
using the latest version of python
from typing import List
def hello_world_1(animals: list[str]) -> list[str]:
return animals
def hello_world_2(animals: List[str]) -> List[str]:
return animals
How can I set up a python linter to enforce consistency and only allow either upper or lowercase List for example?
A:
According to the current Python docs, typing.List and similar are deprecated.
The docs further state
The deprecated types will be removed from the typing module in the first Python version released 5 years after the release of Python 3.9.0. See details in PEP 585—Type Hinting Generics In Standard Collections.
Concerning enforcing consistency, it says
It is expected that type checkers will flag the deprecated types when the checked program targets Python 3.9 or newer.
However, I use Pylint personally (not a type checker, admittedly) and I don't believe I've noticed it flagging the deprecated types yet.
A:
PEP585 (linked in the OP) described the imports from typing as cumbersome and confusing. This seems to answer the question of which style is preferred.
A:
In general ... it is your choice. AFAIK, there is no PEP recommendation on which one to use.
If you have decided to use generic type hinting:
Prior to python 3.9 you had to use typing.List (for example)
From python 3.9 onward you can use either list or typing.List. Again ... your choice.
(If you are asking for our recommendations, that is explicitly off-topic for StackOverflow! Maybe discuss it with your co-workers, other people working on your projects, ...)
|
For a new Python project using the latest version of Python should I declare my types as upper or lower case
|
According to pep-0585 for the latest versions of Python, it appears we can use List and list interchangeably for type declarations. So which should I use?
Assume:
no requirement for backward compatibility
using the latest version of python
from typing import List
def hello_world_1(animals: list[str]) -> list[str]:
return animals
def hello_world_2(animals: List[str]) -> List[str]:
return animals
How can I set up a python linter to enforce consistency and only allow either upper or lowercase List for example?
|
[
"According to the current Python docs, typing.List and similar are deprecated.\nThe docs further state\n\nThe deprecated types will be removed from the typing module in the first Python version released 5 years after the release of Python 3.9.0. See details in PEP 585—Type Hinting Generics In Standard Collections.\n\nConcerning enforcing consistency, it says\n\nIt is expected that type checkers will flag the deprecated types when the checked program targets Python 3.9 or newer.\n\nHowever, I use Pylint personally (not a type checker, admittedly) and I don't believe I've noticed it flagging the deprecated types yet.\n",
"PEP585 (linked in the OP) described the imports from typing as cumbersome and confusing. This seems to answer the question of which style is preferred.\n",
"In general ... it is your choice. AFAIK, there is no PEP recommendation on which one to use.\nIf you have decided to use generic type hinting:\n\nPrior to python 3.9 you had to use typing.List (for example)\nFrom python 3.9 onward you can use either list or typing.List. Again ... your choice.\n\n\n(If you are asking for our recommendations, that is explicitly off-topic for StackOverflow! Maybe discuss it with your co-workers, other people working on your projects, ...)\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074567273_python.txt
|
Q:
print pandas dataframe diff to new column
I have a dataframe that looks like this. There are two rows for each id. These represent a game where the row with the highest points is the winner:
id points
677 5
677 15
678 25
678 6
I would like to generate a new column 'win' in the dataframe so that the row with the same id with the higher points gets the value 1 and the lesser 0.
Like this:
id points win
677 5 0
677 15 1
678 25 1
678 6 0
I think I could do something like this, but can't figure out how you would get the diff to output a value based on the condition of greater or less and then push to a new column.
print(df.set_index('id').groupby(level=0).diff().query('points' > 0).index.unique().tolist())
A:
Find the max points for each id and mark it as win:
df['win'] = (df.points.groupby(df['id']).transform('max') == df.points).astype(int)
df
id points win
0 677 5 0
1 677 15 1
2 678 25 1
3 678 6 0
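A self-contained run of the same one-liner, reconstructing the sample frame from the question:
import pandas as pd

df = pd.DataFrame({'id': [677, 677, 678, 678], 'points': [5, 15, 25, 6]})
# mark the row holding each id's maximum points with 1, the other with 0
df['win'] = (df.points.groupby(df['id']).transform('max') == df.points).astype(int)
print(df)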
|
print pandas dataframe diff to new column
|
I have a dataframe that looks like this. There are two rows for each id. These represent a game where the row with the highest points is the winner:
id points
677 5
677 15
678 25
678 6
I would like to generate a new column 'win' in the dataframe so that the row with the same id with the higher points gets the value 1 and the lesser 0.
Like this:
id points win
677 5 0
677 15 1
678 25 1
678 6 0
I think I could do something like this, but can't figure out how you would get the diff to output a value based on the condition of greater or less and then push to a new column.
print(df.set_index('id').groupby(level=0).diff().query('points' > 0).index.unique().tolist())
|
[
"Find the max points for each id and mark it as win:\ndf['win'] = (df.points.groupby(df['id']).transform('max') == df.points).astype(int)\ndf\n id points win\n0 677 5 0\n1 677 15 1\n2 678 25 1\n3 678 6 0\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074567843_pandas_python.txt
|
Q:
What model.predict() in Keras or Tensorflow is doing numerically?
What I would like to know:
I have built and trained a CNN model in keras and have been able to calculate predictions, but I would like to know the details of the process of what is happening numerically in the prediction step.
What I tried:
(1) Calculate the score with the trained Keras model.
(2) Coded my CNN and calculated the score and compared it with (1) (→ discrepancy).
(3) Checked the source code of Keras and Tensorflow on GitHub and investigated the difference between predict() of Keras and my CNN in terms of numerical processing.
I built such a model in keras.
model = Sequential()
model.add(Conv2D(20,kernel_size=(1,5), strides=(1,1), padding='valid', input_shape=(24,13,1), activation='relu'))
model.add(Conv2D(20,kernel_size=(1,9), strides=(1,1), padding='valid', activation='relu'))
model.add(Conv2D(20,kernel_size=(24,1), strides=(1,1), padding='valid', activation='relu'))
model.add(Flatten())
model.add(Dense(20,activation='relu'))
model.add(Dense(2,activation='sigmoid'))
model.load_weights(model_path / '[model_filename].h5')
model.summary()
On the other hand, I created my predictive CNN code this way.
I omit the code but I have succeeded in getting the values of kernel and bias.
My CNN code (only Dense and Convolution):
def mydense(ind, ker, bia, func):
tem_dense = np.dot(ker.T, ind) + bia
return func(tem_dense)
def myconv(ind, ker, bia, func):
# input sample data's shape
H = ind.shape[0]
W = ind.shape[1]
# kernel' shape
Hf = ker.shape[0]
Wf = ker.shape[1]
K = ker.shape[3]
# result of convolution
u = np.zeros((H - Hf + 1, W - Wf + 1, K))
# output
z = np.zeros((u.shape))
for k in range(0,K):
for n in range(0,W - Wf + 1):
for m in range(0, H - Hf + 1):
u[m,n,k] = (ind[m:m+Hf, n:n+Wf, :] * ker[:,:,:,k] + bia[k]).sum()
z = np.array([func(i) for i in u])
return z
The results are very different from model.predict().
What I have already looked into:
I have looked further at predict() and other source code, but I needed help from those in the know.
As in the convolution process (Conv2D or Conv), I would like to know in detail what is happening numerically in the Dense process as well.
Any help would be appreciated.
A:
I solved it by myself.
I was mistaken; the computation is handled just as described.
It didn't look like Keras was doing anything else.
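For anyone comparing a hand-rolled convolution against model.predict(), one likely culprit in the code above (an assumption, not a confirmed diagnosis) is that the bias is added inside the elementwise product before .sum(), so it is counted once per kernel element instead of once per output element. The conventional computation is:
u[m, n, k] = (ind[m:m+Hf, n:n+Wf, :] * ker[:, :, :, k]).sum() + bia[k]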
|
What model.predict() in Keras or Tensorflow is doing numerically?
|
What I would like to know:
I have built and trained a CNN model in keras and have been able to calculate predictions, but I would like to know the details of the process of what is happening numerically in the prediction step.
What I tried:
(1) Calculate the score with the trained Keras model.
(2) Coded my CNN and calculated the score and compared it with (1) (→ discrepancy).
(3) Checked the source code of Keras and Tensorflow on GitHub and investigated the difference between predict() of Keras and my CNN in terms of numerical processing.
I built such a model in keras.
model = Sequential()
model.add(Conv2D(20,kernel_size=(1,5), strides=(1,1), padding='valid', input_shape=(24,13,1), activation='relu'))
model.add(Conv2D(20,kernel_size=(1,9), strides=(1,1), padding='valid', activation='relu'))
model.add(Conv2D(20,kernel_size=(24,1), strides=(1,1), padding='valid', activation='relu'))
model.add(Flatten())
model.add(Dense(20,activation='relu'))
model.add(Dense(2,activation='sigmoid'))
model.load_weights(model_path / '[model_filename].h5')
model.summary()
On the other hand, I created my predictive CNN code this way.
I omit the code but I have succeeded in getting the values of kernel and bias.
My CNN code (only Dense and Convolution):
def mydense(ind, ker, bia, func):
tem_dense = np.dot(ker.T, ind) + bia
return func(tem_dense)
def myconv(ind, ker, bia, func):
# input sample data's shape
H = ind.shape[0]
W = ind.shape[1]
# kernel' shape
Hf = ker.shape[0]
Wf = ker.shape[1]
K = ker.shape[3]
# result of convolution
u = np.zeros((H - Hf + 1, W - Wf + 1, K))
# output
z = np.zeros((u.shape))
for k in range(0,K):
for n in range(0,W - Wf + 1):
for m in range(0, H - Hf + 1):
u[m,n,k] = (ind[m:m+Hf, n:n+Wf, :] * ker[:,:,:,k] + bia[k]).sum()
z = np.array([func(i) for i in u])
return z
The results are very different from model.predict().
What I have already looked into:
I have looked further at predict() and other source code, but I needed help from those in the know.
As in the convolution process (Conv2D or Conv), I would like to know in detail what is happening numerically in the Dense process as well.
Any help would be appreciated.
|
[
"I solved it by myself.\nI was mistaken and it was handled as per the text.\nIt didn't look like keras was doing anything else.\n"
] |
[
0
] |
[] |
[] |
[
"deep_learning",
"keras",
"python",
"tensorflow"
] |
stackoverflow_0072300556_deep_learning_keras_python_tensorflow.txt
|
Q:
String to numpy array image
I want to convert a string to an image.
What I want is:
input = string
output = 2D or 3D numpy array image consisting of 0 and 255 values
Is there any package or module that does this?
Thank you.
A:
You can use cv2.putText:
https://docs.opencv.org/4.x/dc/da5/tutorial_py_drawing_functions.html
opencv-python works with numpy.ndarray.
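A minimal sketch of that approach (it assumes opencv-python and numpy are installed; with the default line type the rendered pixels are essentially just 0 and 255):
import numpy as np
import cv2

img = np.zeros((100, 400), dtype=np.uint8)  # black 2D canvas
# draw the text in white (255) with thickness 3
cv2.putText(img, "hello", (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, 255, 3)
print(img.shape, img.max())  # (100, 400) 255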
|
String to numpy array image
|
I want to convert a string to an image.
What I want is:
input = string
output = 2D or 3D numpy array image consisting of 0 and 255 values
Is there any package or module that does this?
Thank you.
|
[
"you can use opencv2.putText\nhttps://docs.opencv.org/4.x/dc/da5/tutorial_py_drawing_functions.html\nopencv-python workwith numpy.ndarray\n"
] |
[
1
] |
[] |
[] |
[
"image",
"numpy",
"python",
"string",
"text"
] |
stackoverflow_0074567880_image_numpy_python_string_text.txt
|
Q:
How to make this sorting faster?
Task:
Submit a file containing the sort_people(people) function. It gets a list of people and returns a sorted list of people. Sort primarily by date of birth (oldest person to youngest), if there is a match by last name (ascending according to the usual Python string comparison), and finally by first name (also ascending). If any two persons match in all three characteristics, their mutual order can be arbitrary.
I solved this task using a key function for Python's sort() method, but it is not the best approach, since I only got 5/10 points because of the time limit.
Problem: Help me make this algorithm faster.
My code:
import functools
from datetime import datetime

def compare_people(p1, p2):
date1 = datetime.strptime(p1.birth_date, '%d.%m.%Y')
date2 = datetime.strptime(p2.birth_date, '%d.%m.%Y')
if date1>date2:
return 1
elif date1<date2:
return -1
if p1.last_name > p2.last_name:
return 1
elif p1.last_name < p2.last_name:
return -1
if p1.first_name > p2.first_name:
return 1
elif p1.first_name < p2.first_name:
return -1
else:
return 0
def sort_people(people):
cmp1 = functools.cmp_to_key(compare_people)
people.sort(key=cmp1)
return people
A:
A tuple (and list) comparison compares its items in order, and returns the comparison of the first non-equal member. Thus, if you construct a key that provides a series of values to be compared in order, you can get the equivalent of your code like this:
people.sort(key=lambda p: (
    datetime.strptime(p.birth_date, '%d.%m.%Y'),
p.last_name,
p.first_name,
))
This has several advantages:
cmp_to_key is invoked many times; key is invoked exactly once per element
Comparison of tuples is implemented in C, so it will be much faster than your compare_people
It is brief and easy to read
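If strptime itself turns out to be the bottleneck (it is comparatively slow), the fixed 'dd.mm.yyyy' format can be turned into a sortable tuple by string splitting alone; a sketch, assuming days and months are always zero-padded:
def sort_people(people):
    def key(p):
        # zero-padded strings in (year, month, day) order compare like dates
        day, month, year = p.birth_date.split('.')
        return (year, month, day, p.last_name, p.first_name)
    people.sort(key=key)
    return people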
|
How to make this sorting faster?
|
Task:
Submit a file containing the sort_people(people) function. It gets a list of people and returns a sorted list of people. Sort primarily by date of birth (oldest person to youngest), if there is a match by last name (ascending according to the usual Python string comparison), and finally by first name (also ascending). If any two persons match in all three characteristics, their mutual order can be arbitrary.
I solved this task using a key function for Python's sort() method, but it is not the best approach, since I only got 5/10 points because of the time limit.
Problem: Help me make this algorithm faster.
My code:
import functools
from datetime import datetime

def compare_people(p1, p2):
date1 = datetime.strptime(p1.birth_date, '%d.%m.%Y')
date2 = datetime.strptime(p2.birth_date, '%d.%m.%Y')
if date1>date2:
return 1
elif date1<date2:
return -1
if p1.last_name > p2.last_name:
return 1
elif p1.last_name < p2.last_name:
return -1
if p1.first_name > p2.first_name:
return 1
elif p1.first_name < p2.first_name:
return -1
else:
return 0
def sort_people(people):
cmp1 = functools.cmp_to_key(compare_people)
people.sort(key=cmp1)
return people
|
[
"A tuple (and list) comparison compares its items in order, and returns the comparison of the first non-equal member. Thus, if you construct a key that provides a series of values to be compared in order, you can get the equivalent of your code like this:\npeople.sort(key=lambda p: (\n datetime.strptime(p1.birth_date, '%d.%m.%Y'),\n p.last_name,\n p.first_name,\n))\n\nThis has several advantages:\n\ncmp_to_key is invoked many times; key is invoked exactly once per element\nComparison of tuples is implemented in C, so it will be much faster than your compare_people\nIt is brief and easy to read\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"sorting"
] |
stackoverflow_0074567908_python_sorting.txt
|
Q:
Django Can't See Where I Typed in User ID
So I'm creating a web app in Django, and I encountered this error:
my urls.py:
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
path('<int:user_id>/', views.profile, name="profile"),
#path('signup/', views.signup, name="signup"),
path("signup/", views.signup, name="signup")
]
my views.py:
from django.shortcuts import render, get_object_or_404
from django.contrib.auth import forms
from django.urls import reverse_lazy
from django.http import HttpResponse, Http404
from django.template import loader
from .models import User
from .forms import SignUpForm
from datetime import datetime
def index(request):
cool_people_list = User.objects.order_by("-username")[:5]
_template = loader.get_template("front_page.html")
context = {
"cool_people_list" : cool_people_list,
}
return HttpResponse(_template.render(context, request))
def profile(request, user_id):
try:
user = get_object_or_404(User, pk=user_id)
_template = loader.get_template("profile.html")
context = {
"user" : user
}
return HttpResponse(_template.render(context, request))
except:
raise Http404("The user you are looking for doesn't exist.")
def signup(request):
if request.method == "POST":
form = SignUpForm(request.POST)
if form.is_valid():
rn = str(datetime.today().strftime("%Y-%m-%D"))
rn2 = str(datetime.today)
"""usr_list = User.objects.order_by('-join_date')
latest_usr = usr_list.first()"""
new_user = User(3, str(form.cleaned_data.get("username")), str(form.cleaned_data.get("password")), rn, rn2)
new_user.save()
return render(request, "signup.html")
my models.py:
from django.db import models
from django.core.validators import MinLengthValidator
import datetime
class User(models.Model):
user_id = models.IntegerField(unique=True)
username = models.CharField(max_length=25, validators=[MinLengthValidator(3)])
password = models.CharField(max_length=25, validators=[MinLengthValidator(7)])
join_date = models.DateField()
last_online = models.DateTimeField()
def __str__(self):
return self.username
I kept trying different methods, like manually adding the user ID (a temporary fix), but Django can't see where I type in the ID! It doesn't register when I type it in what I believe is the correct format for my 'User' model.
A:
You need to save the object in the User model using the actual field names (join_date and last_online), and remember to call datetime.today():
def signup(request):
    if request.method == "POST":
        form = SignUpForm(request.POST)
        if form.is_valid():
            # user_id is still required by the model as declared; see the note below
            new_user = User(username=form.cleaned_data["username"],
                            password=form.cleaned_data["password"],
                            join_date=datetime.today().date(),
                            last_online=datetime.today())
            new_user.save()
        else:
            print(form.errors)
    return render(request, "signup.html")
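Note also that the model declares user_id = models.IntegerField(unique=True) with no default, so every save must supply a value for it, which is likely the original problem. A hedged alternative sketch: drop the manual field and let Django's automatic primary key handle ids, with the dates filled in automatically:
class User(models.Model):
    # Django adds an auto-incrementing primary key named `id` on its own
    username = models.CharField(max_length=25, validators=[MinLengthValidator(3)])
    password = models.CharField(max_length=25, validators=[MinLengthValidator(7)])
    join_date = models.DateField(auto_now_add=True)    # set once, on creation
    last_online = models.DateTimeField(auto_now=True)  # updated on every save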
|
Django Can't See Where I Typed in User ID
|
So I'm creating a web app in Django, and I encountered this error:
my urls.py:
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
path('<int:user_id>/', views.profile, name="profile"),
#path('signup/', views.signup, name="signup"),
path("signup/", views.signup, name="signup")
]
my views.py:
from django.shortcuts import render, get_object_or_404
from django.contrib.auth import forms
from django.urls import reverse_lazy
from django.http import HttpResponse, Http404
from django.template import loader
from .models import User
from .forms import SignUpForm
from datetime import datetime
def index(request):
cool_people_list = User.objects.order_by("-username")[:5]
_template = loader.get_template("front_page.html")
context = {
"cool_people_list" : cool_people_list,
}
return HttpResponse(_template.render(context, request))
def profile(request, user_id):
try:
user = get_object_or_404(User, pk=user_id)
_template = loader.get_template("profile.html")
context = {
"user" : user
}
return HttpResponse(_template.render(context, request))
except:
raise Http404("The user you are looking for doesn't exist.")
def signup(request):
if request.method == "POST":
form = SignUpForm(request.POST)
if form.is_valid():
rn = str(datetime.today().strftime("%Y-%m-%D"))
rn2 = str(datetime.today)
"""usr_list = User.objects.order_by('-join_date')
latest_usr = usr_list.first()"""
new_user = User(3, str(form.cleaned_data.get("username")), str(form.cleaned_data.get("password")), rn, rn2)
new_user.save()
return render(request, "signup.html")
my models.py:
from django.db import models
from django.core.validators import MinLengthValidator
import datetime
class User(models.Model):
user_id = models.IntegerField(unique=True)
username = models.CharField(max_length=25, validators=[MinLengthValidator(3)])
password = models.CharField(max_length=25, validators=[MinLengthValidator(7)])
join_date = models.DateField()
last_online = models.DateTimeField()
def __str__(self):
return self.username
I kept trying different methods, like manually adding the user ID (temporary fix), but Django can't see where I type in the ID! It doesn't register it when I typed it in what I believe is the correct format for my 'User' model.
|
[
"You need to save an object in User Model like this...\ndef signup(request):\n if request.method == \"POST\":\n form = SignUpForm(request.POST)\n if form.is_valid():\n rn = datetime.today().strftime(\"%Y-%m-%D\")\n rn2 = datetime.today\n new_user = User(username= form.cleaned_data[\"username\"], password=form.cleaned_data[\"password\"], rn=rn, rn2=rn2)\n new_user.save()\n else:\n form.errors\n return render(request, \"signup.html\")\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_models",
"django_views",
"python",
"python_3.x"
] |
stackoverflow_0074566011_django_django_models_django_views_python_python_3.x.txt
|
Q:
`int('10**2')` raises `ValueError: invalid literal for int() with base 10: '10**2'` despite `type(10**2)` being `<class 'int'>`
int('10**2') raises ValueError: invalid literal for int() with base 10: '10**2' despite type(10**2) being <class 'int'>.
I take input n as n = input(), then I do int(n). When I input 10**2, I get ValueError: invalid literal for int() with base 10: '10**2'.
I'm guessing the issue is that 10**2 is not a literal - it has to be evaluated first, but I'm hesitant to do int(eval(n)) since n can be any string.
By contrast, float('1e2') despite being very similar, doesn't raise an error. I guess 1e2 is considered a literal...? and doesn't have to be evaluated?
My current workaround is to check whether the string contains '**' and if it does, handle it accordingly:
n = input()
if '**' in n:
base, exp, *a = n.split('**')
if a:
raise ValueError(f"This input, {n}, can't be interpreted as an integer")
n = int(base)**int(exp)
else:
n = int(n)
or to support expressions like 3**3**3:
n = input()
if '**' in n:
operands = n.split('**')
# '**' associates to the right
exp = 1
while operands:
base = int(operands.pop())
exp = base ** exp
n = exp
else:
n = int(n)
A:
Yes, 10**2 must be evaluated while 1e2 is a constant. I suggest taking a look at Evaluating a mathematical expression in a string for some options regarding parsing mathematical expressions in strings.
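If you do want to accept expressions like 10**2 without eval(), one option is to walk the AST and allow only integer literals and the ** operator (a minimal sketch):
import ast

def safe_int(expr):
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Pow):
            return ev(node.left) ** ev(node.right)
        raise ValueError(f"unsupported expression: {expr!r}")
    return ev(ast.parse(expr, mode='eval').body)

print(safe_int('10**2'))    # 100
print(safe_int('3**3**3'))  # right-associative, as in Python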
|
`int('10**2')` raises `ValueError: invalid literal for int() with base 10: '10**2'` despite `type(10**2)` being `<class 'int'>`
|
int('10**2') raises ValueError: invalid literal for int() with base 10: '10**2' despite type(10**2) being <class 'int'>.
I take input n as n = input(), then I do int(n). When I input 10**2, I get ValueError: invalid literal for int() with base 10: '10**2'.
I'm guessing the issue is that 10**2 is not a literal - it has to be evaluated first, but I'm hesitant to do int(eval(n)) since n can be any string.
By contrast, float('1e2') despite being very similar, doesn't raise an error. I guess 1e2 is considered a literal...? and doesn't have to be evaluated?
My current workaround is to check whether the string contains '**' and if it does, handle it accordingly:
n = input()
if '**' in n:
base, exp, *a = n.split('**')
if a:
raise ValueError(f"This input, {n}, can't be interpreted as an integer")
n = int(base)**int(exp)
else:
n = int(n)
or to support expressions like 3**3**3:
n = input()
if '**' in n:
operands = n.split('**')
# '**' associates to the right
exp = 1
while operands:
base = int(operands.pop())
exp = base ** exp
n = exp
else:
n = int(n)
|
[
"Yes, 10**2 must be evaluated while 1e2 is a constant. I suggest taking a look at Evaluating a mathematical expression in a string for some options regarding parsing mathematical expressions in strings.\n"
] |
[
2
] |
[] |
[] |
[
"input",
"integer",
"literals",
"python",
"user_input"
] |
stackoverflow_0074567924_input_integer_literals_python_user_input.txt
|
Q:
Can I paint on the tkinter canvas twice simultaneously?
I want the cursor's x and y coordinates to be tracked by two sliding lines when the cursor is over a canvas. One on the top of the canvas constrained to x, and one at the left of the canvas constrained to y.
I have actually achieved this, almost:
import tkinter as tk
def callback(event):
draw_y_marker(event.y)
draw_x_marker(event.x)
def draw_x_marker(x):
paint.coords(line, x, 0, x, 20)
def draw_y_marker(y):
paint.coords(line, 0, y, 20, y)
root = tk.Tk()
paint = tk.Canvas(root)
paint.bind('<Motion>', callback)
paint.pack()
line = paint.create_line(x, 0, x, height)
root.mainloop()
If I comment out the draw_y_marker call in callback I get the line constrained to x sliding along the top of the screen, marking the cursor position. If I comment out draw_x_marker I get the line constrained to y sliding along the side of the screen.
But not both which is what I want! If I uncomment both, only the draw_x_marker method works. How can I paint two things on the canvas simultaneously?
A:
You have only created one line item in the canvas, so how can you show two sliding lines?
You need to create the two sliding lines for x and y and update them in callback():
import tkinter as tk
def callback(event):
draw_y_marker(event.y)
draw_x_marker(event.x)
def draw_x_marker(x):
paint.coords(xline, x, 0, x, 20)
def draw_y_marker(y):
paint.coords(yline, 0, y, 20, y)
root = tk.Tk()
paint = tk.Canvas(root)
paint.bind('<Motion>', callback)
paint.pack()
# create the two sliding lines initially
xline = paint.create_line(0, 0, 0, 0)
yline = paint.create_line(0, 0, 0, 0)
root.mainloop()
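If you want the markers to span the whole canvas rather than a fixed 20 px, one possible variation (not part of the original answer) reads the widget's current size inside the callback:
def callback(event):
    w, h = paint.winfo_width(), paint.winfo_height()
    paint.coords(xline, event.x, 0, event.x, h)  # vertical marker, full height
    paint.coords(yline, 0, event.y, w, event.y)  # horizontal marker, full width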
|
Can I paint on the tkinter canvas twice simultaneously?
|
I want the cursor's x and y coordinates to be tracked by two sliding lines when the cursor is over a canvas. One on the top of the canvas constrained to x, and one at the left of the canvas constrained to y.
I have actually achieved this, almost:
import tkinter as tk
def callback(event):
draw_y_marker(event.y)
draw_x_marker(event.x)
def draw_x_marker(x):
paint.coords(line, x, 0, x, 20)
def draw_y_marker(y):
paint.coords(line, 0, y, 20, y)
root = tk.Tk()
paint = tk.Canvas(root)
paint.bind('<Motion>', callback)
paint.pack()
line = paint.create_line(x, 0, x, height)
root.mainloop()
If I comment out the draw_y_marker call in callback I get the line constrained to x sliding along the top of the screen, marking the cursor position. If I comment out draw_x_marker I get the line constrained to y sliding along the side of the screen.
But not both which is what I want! If I uncomment both, only the draw_x_marker method works. How can I paint two things on the canvas simultaneously?
|
[
"You have only created one line item in the canvas, so how can you show two sliding lines?\nYou need to create the two sliding lines for x and y and update them in callback():\nimport tkinter as tk\n\ndef callback(event):\n draw_y_marker(event.y)\n draw_x_marker(event.x)\n\ndef draw_x_marker(x):\n paint.coords(xline, x, 0, x, 20)\n\ndef draw_y_marker(y):\n paint.coords(yline, 0, y, 20, y)\n\nroot = tk.Tk()\npaint = tk.Canvas(root)\npaint.bind('<Motion>', callback)\npaint.pack()\n\n# create the two sliding lines initially\nxline = paint.create_line(0, 0, 0, 0)\nyline = paint.create_line(0, 0, 0, 0)\n\nroot.mainloop()\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0074567944_python_tkinter.txt
|
Q:
Converting a dataframe's datetime64[ns] index to a comparable datetime dtype
I'm trying to create a mask for my dataframe but cannot compare the upper bound / lower bound datetimes to the index of the dataframe because it is datetime64[ns]. I have seen solutions that convert via pd.Timestamp; however, I still get a value error.
Additionally I have tried to convert the index and am thrown the error:
"Cannot convert input ... series... to timestamp"
INPUT:
x = yf.Ticker('^GSPC').history(period='max',interval='1d').loc[:,['Open']]
stdate = pd.Timestamp(2015,12,31)
edate = dt.datetime.today()
y = x.index > stdate
ACTUAL OUTPUT:
*"Invalid comparison between dtype=datetime64[ns, TIMEZONE] and Timestamp"*
EXPECTED OUTPUT:
[FALSE, FALSE, FALSE, TRUE, TRUE... TRUE]
A:
A DatetimeIndex can be reduced to plain dates with .date:
df.index.date >= stdate.date()
would work
A:
Use numpy.datetime64.
NOTE: it seems you can't compare a timezone-aware datetime with a naive one;
use tz_localize(None) to remove its timezone,
or you can just convert the datetimes to integer timestamps.
import numpy as np
import pandas as pd
s = pd.Series([0, 1669345200 * 10**9], dtype="datetime64[ns]").dt.tz_localize("UTC")
print(s.info())
stdate = np.datetime64("2015-12-31T00:00:00")
edate = np.datetime64("now")
print(stdate)
print(edate)
print(s.dt.tz_localize(None) > stdate)
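Alternatively, matching the original error (which complains about comparing a datetime64[ns, TIMEZONE] index with a naive Timestamp), you can make the bound timezone-aware instead of stripping the index. A sketch, assuming the yfinance index carries a timezone:
import pandas as pd

stdate = pd.Timestamp(2015, 12, 31, tz=x.index.tz)  # localize the bound to the index's zone
y = x.index > stdate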
|
Converting a dataframe's datetime64[ns] index to a comparable datetime dtype
|
I'm trying to create a mask for my dataframe but cannot compare the upper bound / lower bound datetimes to the index of the dataframe because it is datetime64[ns]. I have seen solutions that convert via pd.Timestamp; however, I still get a value error.
Additionally I have tried to convert the index and am thrown the error:
"Cannot convert input ... series... to timestamp"
INPUT:
x = yf.Ticker('^GSPC').history(period='max',interval='1d').loc[:,['Open']]
stdate = pd.Timestamp(2015,12,31)
edate = dt.datetime.today()
y = x.index > stdate
ACTUAL OUTPUT:
*"Invalid comparison between dtype=datetime64[ns, TIMEZONE] and Timestamp"*
EXPECTED OUTPUT:
[FALSE, FALSE, FALSE, TRUE, TRUE... TRUE]
|
[
"Datetime64 Indexes can be refined to just the date by .date\ndf.index.date >= date or df.index.datetime >= datetime \n\nwould work\n",
"use numpy.datetime64\nNOTE: seem can't compare between time-aware datetime,\nuse tz_localize to remove it's timezone\nor you can just convert datetime to timestamp (int)\nimport numpy as np\nimport pandas as pd\n\ns = pd.Series([0, 1669345200 * 10**9], dtype=\"datetime64[ns]\").dt.tz_localize(\"UTC\")\nprint(s.info())\n\nstdate = np.datetime64(\"2015-12-31T00:00:00\")\nedate = np.datetime64(\"now\")\nprint(stdate)\nprint(edate)\nprint(s.dt.tz_localize(None) > stdate)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"date",
"datetime64",
"pandas",
"python"
] |
stackoverflow_0074567875_date_datetime64_pandas_python.txt
|
Q:
How to solve AttributeError: 'Tensor' object has no attribute 'zero_grad' in pytorch
Still working through video tutorial https://www.youtube.com/watch?v=weQ5pShEVic&list=PLbMqOoYQ3Mxw1Sl5iAAV4SJmvnAGAhFvK&index=2 about pytorch but hit another error.
lossFunc = torch.nn.MSELoss()
for i in range(epoch):
output = net(x)
loss = lossFunc(output, y)
loss.zero_grad()
loss.backward()
for f in net.parameters():
    f.data.sub_(learning_rate * f.grad.data)
print(output, loss)
Created the network and loss function and wanted to iterate before backpropagation,
but get this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/v_/yq26pm194xj5ckqy8p_njwc00000gn/T/ipykernel_9995/2476130544.py in <module>
3 output = net(x)
4 loss = lossFunc(output, y)
----> 5 loss.zero_grad()
6 loss.backward()
7
AttributeError: 'Tensor' object has no attribute 'zero_grad'
What gives?
A:
You should use zero grad for your optimizer.
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
lossFunc = torch.nn.MSELoss()
for i in range(epoch):
optimizer.zero_grad()
output = net(x)
loss = lossFunc(output, y)
loss.backward()
optimizer.step()
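If you'd rather keep the manual update loop from the video instead of an optimizer, a rough equivalent (reusing the question's net, x, y, epoch and learning_rate) is to zero the gradients on the network, since gradients live on the parameters rather than on the loss tensor, and to update inside torch.no_grad():
lossFunc = torch.nn.MSELoss()
for i in range(epoch):
    net.zero_grad()              # clears .grad on every parameter
    output = net(x)
    loss = lossFunc(output, y)
    loss.backward()
    with torch.no_grad():        # keep the in-place update out of autograd
        for f in net.parameters():
            f.sub_(learning_rate * f.grad)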
|
How to solve AttributeError: 'Tensor' object has no attribute 'zero_grad' in pytorch
|
Still working through video tutorial https://www.youtube.com/watch?v=weQ5pShEVic&list=PLbMqOoYQ3Mxw1Sl5iAAV4SJmvnAGAhFvK&index=2 about pytorch but hit another error.
lossFunc = torch.nn.MSELoss()
for i in range(epoch):
output = net(x)
loss = lossFunc(output, y)
loss.zero_grad()
loss.backward()
for f in net.parameters():
    f.data.sub_(learning_rate * f.grad.data)
print(output, loss)
Created the network and loss function and wanted to iterate before backpropagation,
but get this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/v_/yq26pm194xj5ckqy8p_njwc00000gn/T/ipykernel_9995/2476130544.py in <module>
3 output = net(x)
4 loss = lossFunc(output, y)
----> 5 loss.zero_grad()
6 loss.backward()
7
AttributeError: 'Tensor' object has no attribute 'zero_grad'
What gives?
|
[
"You should use zero grad for your optimizer.\noptimizer = torch.optim.Adam(net.parameters(), lr=0.001)\nlossFunc = torch.nn.MSELoss()\nfor i in range(epoch):\n optimizer.zero_grad()\n output = net(x)\n loss = lossFunc(output, y)\n loss.backward()\n optimizer.step()\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"pytorch"
] |
stackoverflow_0074567865_python_pytorch.txt
|
Q:
Merge rows based on 2 field match
I'm trying to merge a specific column if the rows are similar, for example here "l.instagram.com" and "instagram.com" is actually the same source so I would like to merge activeUsers into instagram.com.
Given:
sessionSource dateRange activeUsers
0 snapchat.com previous 1
1 snapchat.com current 1
2 l.instagram.com previous 71
3 l.instagram.com current 23
4 instagram.com previous 5
5 instagram.com current 0
Each sessionSource has a row for "current" and "previous" period. But I want to merge l.instagram.com into instagram.com activeUsers since they are from the same source.
The desired result would look like this:
sessionSource dateRange activeUsers
0 snapchat.com previous 1
1 snapchat.com current 1
4 instagram.com previous 76
5 instagram.com current 23
I have tried few answers but I couldn't get to that result.
Thank you for your help.
A:
Replace the value 'l.instagram.com' with 'instagram.com': df['sessionSource'] = df['sessionSource'].replace('l.instagram.com', 'instagram.com')
And then group by the columns 'sessionSource' & 'dateRange' and sum 'activeUsers':
sum_df = df.groupby(['sessionSource','dateRange']).agg({'activeUsers': 'sum'})
sum_df=sum_df.reset_index()
sum_df
This sum_df will give what you want. Hope this helps.
(image attached of the solution and output)
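A slightly more general variant (an assumption on my part: it treats any leading 'l.' as a redirect prefix, which may be too broad for other domains) normalizes the source with a regex before grouping:
df['sessionSource'] = df['sessionSource'].str.replace(r'^l\.', '', regex=True)
sum_df = df.groupby(['sessionSource', 'dateRange'], as_index=False)['activeUsers'].sum()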
|
Merge rows based on 2 field match
|
I'm trying to merge a specific column if the rows are similar, for example here "l.instagram.com" and "instagram.com" is actually the same source so I would like to merge activeUsers into instagram.com.
Given:
sessionSource dateRange activeUsers
0 snapchat.com previous 1
1 snapchat.com current 1
2 l.instagram.com previous 71
3 l.instagram.com current 23
4 instagram.com previous 5
5 instagram.com current 0
Each sessionSource has a row for "current" and "previous" period. But I want to merge l.instagram.com into instagram.com activeUsers since they are from the same source.
The desired result would look like this:
sessionSource dateRange activeUsers
0 snapchat.com previous 1
1 snapchat.com current 1
4 instagram.com previous 76
5 instagram.com current 23
I have tried few answers but I couldn't get to that result.
Thank you for your help.
|
[
"Replace the value l.instagram.com'with instagram.com: df['sessionSource']=df['sessionSource'].replace('l.instagram.com','instagram.com')\nAnd then group by the columns 'sessionSource' & 'dataRange' and sum 'activeUsers':\nsum_df = df.groupby(['sessionSource','dataRange']).agg({'activeUsers': 'sum'})\n\n\nsum_df=sum_df.reset_index()\nsum_df\n\nThis sum_df will give what you want. Hope this helps.\n(image attached of the solution and output)\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074567695_dataframe_pandas_python.txt
|
Q:
Is there any difference between the type of the expressions "python" and ’python’?
Is word "python" and 'python' are different expression??
I want to know the differnecce between the type of the expressions "python" and ’python’
A:
They are both the same. By convention, double quotes are often used for human-readable strings, while single quotes are used for things like regular expressions, dict keys, and SQL; but both single and double quotes produce ordinary strings in Python, and we may simply need to use one over the other at times.
For more clarity:
type("python") and type('python') if you check this both will output as string.
Even if you write this:
if("python" == 'python'):
print("True")
else:
print("False")
You will see this code snippet prints True, so there is no difference between "python" and 'python'.
A:
No there is no difference between "python" and 'python'. Both of them will get treated as strings. One advantage could be if your string needs to have one of those quotes in them, eg if John's car is your string then you do "John's car".
A:
In Python, strings can be defined between a pair of ' or a pair of ", so there is no actual difference between 'python' and "python".
BUT there is a difference between 'python's' and "python's".
'python's'
will raise an error, because Python thinks the string ends at the second ', so the trailing s is not a part of the string.
"python's"
does not raise an error, because the ' inside it is considered part of the string. So, using "" to define strings can be better than '' thanks to the ability to include apostrophes within your string.
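Whichever quote style you pick, you can also escape the quote with a backslash:
s1 = 'python\'s'   # escaped single quote inside single quotes
s2 = "python's"    # no escaping needed inside double quotes
print(s1 == s2)    # True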
|
Is there any difference between the type of the expressions "python" and ’python’?
|
Is word "python" and 'python' are different expression??
I want to know the differnecce between the type of the expressions "python" and ’python’
|
[
"They both are same. Double quotes are typically used for string representation, while single quotes are used for regular expressions, dict keys, and SQL. As a result, both single quotes and double quotes represent strings in Python, but we may need to use one over the other at times.\nFor more clarity:\ntype(\"python\") and type('python') if you check this both will output as string.\nEven if you write this:\nif(\"python\" == 'python'):\n print(\"True\")\nelse:\n print(\"False\")\nYou will see this code snippet will print true. So there is no differences between \"python\" and 'python'\n",
"No there is no difference between \"python\" and 'python'. Both of them will get treated as strings. One advantage could be if your string needs to have one of those quotes in them, eg if John's car is your string then you do \"John's car\".\n",
"In python, strings can be defined between a a pair of ' or a pair of \". So there is no actual difference between 'python' and \"python\"\nBUT there is a difference between 'python's' and \"python's\".\n'python's' \n\nwill spit back an error, because it thinks there is a missing ', and the s is not apart of the string because of this.\n\"python's\"\n\ndoes not spit back an error, because the ' in this object is considered part of the string. So, using \"\" to define strings can be better than '' due to ability to add apostrophe's within your string.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074567993_python.txt
|
Q:
What is a 'NoneType' object?
I'm getting this error when I run my python script:
TypeError: cannot concatenate 'str' and 'NoneType' objects
I'm pretty sure 'str' means string, but I don't know what a 'NoneType' object is. My script craps out on the second line; I know the first one works because the commands from that line are in my ASA as I would expect. At first I thought it might be because I'm using variables and user input inside send_command.
Everything in 'CAPS' are variables, everything in 'lower case' is input from 'parser.add_option' options.
I'm using pexpect, and optparse
send_command(child, SNMPGROUPCMD + group + V3PRIVCMD)
send_command(child, SNMPSRVUSRCMD + snmpuser + group + V3AUTHCMD + snmphmac + snmpauth + PRIVCMD + snmpencrypt + snmppriv)
A:
NoneType is the type for the None object, which is an object that indicates no value. None is the return value of functions that "don't return anything". It is also a common default return value for functions that search for something and may or may not find it; for example, it's returned by re.search when the regex doesn't match, or dict.get when the key has no entry in the dict. You cannot add None to strings or other objects.
One of your variables is None, not a string. Maybe you forgot to return in one of your functions, or maybe the user didn't provide a command-line option and optparse gave you None for that option's value. When you try to add None to a string, you get that exception:
send_command(child, SNMPGROUPCMD + group + V3PRIVCMD)
One of group or SNMPGROUPCMD or V3PRIVCMD has None as its value.
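A quick way to find the culprit (this print line is purely illustrative) is to repr each operand before concatenating; a None value shows up immediately:
print(repr(SNMPGROUPCMD), repr(group), repr(V3PRIVCMD))
# whichever operand prints as None is the variable to fix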
A:
For the sake of defensive programming, objects should be checked against nullity before using.
if obj is None:
or
if obj is not None:
A:
NoneType is simply the type of the None singleton:
>>> type(None)
<type 'NoneType'>
From the latter link above:
None
The sole value of the type NoneType. None is frequently used to represent the absence of a value, as when default arguments are not passed to a function. Assignments to None are illegal and raise a SyntaxError.
In your case, it looks like one of the items you are trying to concatenate is None, hence your error.
A:
It means you're trying to concatenate a string with something that is None.
None is the "null" of Python, and NoneType is its type.
This code will raise the same kind of error:
>>> bar = "something"
>>> foo = None
>>> print foo + bar
TypeError: cannot concatenate 'str' and 'NoneType' objects
A:
In Python
NoneType is the type of the None object.
There is only one such object.
Therefore, "a None object" and "the None object" and
"None" are three equivalent ways of saying the same thing.
Since all Nones are identical and not only equal,
you should prefer x is None over x == None in your code.
You will get None in many places in regular Python
code as pointed out by the accepted answer.
You will also get None in your own code when you
use the function result of a function that does not end with
return myvalue or the like.
Representation:
There is a type NoneType in some but not all versions of Python,
see below.
When you execute print(type(None)), you will get
<type 'NoneType'>.
This is produced by the __repr__ method of NoneType.
See the documentation of repr
and that of
magic functions
(or "dunder functions" for the double underscores in their names) in general.
In Python 2.7
NoneType is a type defined in the
standard library module types
In Python 3.0 to 3.9
NoneType has been
removed
from
module types,
presumably because there is only a single value of this type.
It effectively exists nevertheless, it only has no built-in name:
You can access NoneType by writing type(None).
If you want NoneType back, just define
NoneType = type(None).
In Python 3.10+
NoneType is again a type defined in the
standard library module types,
introduced in order to
help type checkers do their work
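A quick interactive check of the 3.10+ behaviour described above:
>>> from types import NoneType   # Python 3.10+
>>> NoneType is type(None)
True
>>> isinstance(None, NoneType)
True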
A:
In Python, the absence of a value is represented by the None value; its type is types.NoneType.
A:
In the error message, instead of telling you that you can't concatenate two objects by showing their values (a string and None in this example), the Python interpreter tells you this by showing the types of the objects that you tried to concatenate. The type of every string is str while the type of the single None instance is called NoneType.
You normally do not need to concern yourself with NoneType, but in this example it is necessary to know that type(None) == NoneType.
A:
Your error's occurring due to something like this:
>>> None + "hello world"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
>>>
Python's None object is roughly equivalent to null, nil, etc. in other languages.
A:
If you're getting type None for an object, make sure you're returning in the method. For example:
class Node:
# node definition
then,
def some_func():
# some code
node = Node(self, self.head)
self.head = node
if you do not return anything from some_func(), the return type will be NoneType because it did not return anything.
Instead, if you return the node itself, which is a Node object, it will return the Node-object type.
def some_func(self):
node = Node(self, self.head)
self.head = node
return node
A:
One of the variables has not been given any value, thus it is a NoneType. You'll have to look into why this is, it's probably a simple logic error on your part.
A:
NoneType is the type of None.
See the Python 2 docs here:
https://docs.python.org/2/library/types.html#types.NoneType
A:
It's returned when you have for instance print as your last statement in a function instead of return:
def add(a, b):
    print(a + b)

x = add(5,5)
print(x)
print(type(x))

y = x + 545
print(y)

10
None
<class 'NoneType'>
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
def add(a, b):
    return (a + b)

x = add(5,5)
print(x)
print(type(x))

y = x + 545
print(y)

10
<class 'int'>
555
A:
NoneType is the type of None. Basically, a NoneType value occurs for a couple of common reasons:
Firstly, when a function only returns inside a condition, it implicitly returns None whenever that condition is not met.
Ex:-
def dummy(x, y):
    if x > y:
        return x

res = dummy(10, 20)
print(res)  # None, because the condition isn't met

To solve this, return a fallback value, i.e. return 0, so the function ends with 0 instead of None when the condition is not satisfied.
Secondly, when you explicitly assign a variable to a built-in method that mutates in place and returns None:
my_list = [1, 2, 3]
my_list = my_list.sort()
print(my_list)  # None; sort() mutates the list but returns nothing
Or
lis = None
re = lis.something()
# raises AttributeError: 'NoneType' object has no attribute 'something'
|
What is a 'NoneType' object?
|
I'm getting this error when I run my python script:
TypeError: cannot concatenate 'str' and 'NoneType' objects
I'm pretty sure 'str' means string, but I don't know what a 'NoneType' object is. My script craps out on the second line; I know the first one works because the commands from that line are in my ASA as I would expect. At first I thought it might be because I'm using variables and user input inside send_command.
Everything in 'CAPS' are variables, everything in 'lower case' is input from 'parser.add_option' options.
I'm using pexpect, and optparse
send_command(child, SNMPGROUPCMD + group + V3PRIVCMD)
send_command(child, SNMPSRVUSRCMD + snmpuser + group + V3AUTHCMD + snmphmac + snmpauth + PRIVCMD + snmpencrypt + snmppriv)
|
[
"NoneType is the type for the None object, which is an object that indicates no value. None is the return value of functions that \"don't return anything\". It is also a common default return value for functions that search for something and may or may not find it; for example, it's returned by re.search when the regex doesn't match, or dict.get when the key has no entry in the dict. You cannot add None to strings or other objects.\nOne of your variables is None, not a string. Maybe you forgot to return in one of your functions, or maybe the user didn't provide a command-line option and optparse gave you None for that option's value. When you try to add None to a string, you get that exception:\nsend_command(child, SNMPGROUPCMD + group + V3PRIVCMD)\n\nOne of group or SNMPGROUPCMD or V3PRIVCMD has None as its value.\n",
"For the sake of defensive programming, objects should be checked against nullity before using.\nif obj is None:\n\nor\nif obj is not None:\n\n",
"NoneType is simply the type of the None singleton:\n>>> type(None)\n<type 'NoneType'>\n\nFrom the latter link above:\n\nNone\nThe sole value of the type NoneType. None is frequently used to represent the absence of a value, as when default arguments are not passed to a function. Assignments to None are illegal and raise a SyntaxError. \n\nIn your case, it looks like one of the items you are trying to concatenate is None, hence your error.\n",
"It means you're trying to concatenate a string with something that is None.\nNone is the \"null\" of Python, and NoneType is its type.\nThis code will raise the same kind of error:\n>>> bar = \"something\"\n>>> foo = None\n>>> print foo + bar\nTypeError: cannot concatenate 'str' and 'NoneType' objects\n\n",
"In Python\n\nNoneType is the type of the None object.\nThere is only one such object.\nTherefore, \"a None object\" and \"the None object\" and\n\"None\" are three equivalent ways of saying the same thing.\nSince all Nones are identical and not only equal,\nyou should prefer x is None over x == None in your code.\nYou will get None in many places in regular Python\ncode as pointed out by the accepted answer.\nYou will also get None in your own code when you\nuse the function result of a function that does not end with\nreturn myvalue or the like.\n\nRepresentation:\n\nThere is a type NoneType in some but not all versions of Python,\nsee below.\nWhen you execute print(type(None)), you will get\n<type 'NoneType'>.\nThis is produced by the __repr__ method of NoneType.\nSee the documentation of repr\nand that of\nmagic functions\n(or \"dunder functions\" for the double underscores in their names) in general.\n\nIn Python 2.7\n\nNoneType is a type defined in the\nstandard library module types\n\nIn Python 3.0 to 3.9\n\nNoneType has been\nremoved\nfrom\nmodule types,\npresumably because there is only a single value of this type.\nIt effectively exists nevertheless, it only has no built-in name:\nYou can access NoneType by writing type(None).\nIf you want NoneType back, just define\nNoneType = type(None).\n\nIn Python 3.10+\n\nNoneType is again a type defined in the\nstandard library module types,\nintroduced in order to\nhelp type checkers do their work\n\n",
"In Python, to represent the absence of a value, you can use the None value types.NoneType.None\n",
"In the error message, instead of telling you that you can't concatenate two objects by showing their values (a string and None in this example), the Python interpreter tells you this by showing the types of the objects that you tried to concatenate. The type of every string is str while the type of the single None instance is called NoneType.\nYou normally do not need to concern yourself with NoneType, but in this example it is necessary to know that type(None) == NoneType.\n",
"Your error's occurring due to something like this:\n>>> None + \"hello world\"\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'str'\n>>> \nPython's None object is roughly equivalent to null, nil, etc. in other languages.\n",
"If you're getting type None for an object, make sure you're returning in the method. For example:\nclass Node:\n # node definition\n\nthen,\ndef some_funct():\n # some code\n node = Node(self, self.head)\n self.head = node\n\nif you do not return anything from some_func(), the return type will be NoneType because it did not return anything.\nInstead, if you return the node itself, which is a Node object, it will return the Node-object type.\ndef some_func(self):\n node = Node(self, self.head)\n self.head = node\n return node\n\n",
"One of the variables has not been given any value, thus it is a NoneType. You'll have to look into why this is, it's probably a simple logic error on your part.\n",
"NoneType is the type of None.\nSee the Python 2 docs here:\nhttps://docs.python.org/2/library/types.html#types.NoneType\n",
"It's returned when you have for instance print as your last statement in a function instead of return:\ndef add(a, b):\n print(a+ b)\n\nx = add(5,5)\nprint(x)\nprint(type(x))\n\ny = x + 545\nprint(y)\n\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'int'\n<class 'NoneType'>\ndef add(a, b):\n return (a+ b)\n\nx = add(5,5)\nprint(x)\nprint(type(x))\n\n10\n<class 'int'>\n555\n\n",
"NoneType is type of None. Basically, The NoneType occurs for multiple reasons,\n\nFirstly when you have a function and a condition inside (for instance), it will return None if that condition is not met.\nEx:-\ndef dummy(x, y): if x > y: return x res = dummy(10, 20) print(res) # Will give None as the condition doesn't meet.\n\nTo solve this return the function with 0, I.e return 0, the function will end with 0 instead of None if the condition is not satisfied.\n\nSecondly, When you explicitly assign a variable to a built-in method, which doesn't return any value but None.\nmy_list = [1,2,3]\nmy_list = my_list.sort()\nprint(my_list) #None sort() mutate the DS but returns nothing if you print it.\nOr\nlis = None\nre = lis.something())\nprint(re) # returns attribute error NonType object has no attribute something\n\n"
] |
[
111,
30,
29,
17,
9,
4,
2,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"nonetype",
"null",
"python"
] |
stackoverflow_0021095654_nonetype_null_python.txt
|
Q:
How to get the class name of a method in Python?
Here's my problem and the code:
I try to use a decorator to record the time cost, but I cannot get the class name.
import functools
import time
def log_time(func):
@functools.wraps(func)
def record(*args, **kwargs):
# print(func)
# print(func.__name__)
# print(func.__class__)
# print(func.__class__.__name__)
func_name = func.__name__
start_time = time.time()
result = func(*args, **kwargs)
print(f"{func_name} costs time: {time.time() - start_time:.2f}s")
return result
return record
class FakeProject:
@log_time
def __init__(self, value):
self.load_data = [i for i in range(value)]
fake_project = FakeProject(100000000)
The above code logs the message __init__ costs time: 3.65s;
But I want FakeProject.__init__ costs time: 3.65s instead.
How can I get the class name and print it? Anyone can help? Thanks anyway
I try to print
print(func)
print(func.__name__)
print(func.__class__)
print(func.__class__.__name__)
and I get
<function FakeProject.__init__ at 0x0000022F3F365318>
__init__
<class 'function'>
function
A:
In this particular case, you can just use the __qualname__:
import functools
import time
def log_time(func):
@functools.wraps(func)
def record(*args, **kwargs):
func_name = func.__qualname__
start_time = time.time()
result = func(*args, **kwargs)
print(f"{func_name} costs time: {time.time() - start_time:.2f}s")
return result
return record
class FakeProject:
@log_time
def __init__(self, value):
self.load_data = [i for i in range(value)]
fake_project = FakeProject(100000000)
This outputs:
FakeProject.__init__ costs time: 5.26s
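Note that __qualname__ is the qualified name, i.e. the dotted path from the module's top level, so it also handles nested definitions:
class Outer:
    class Inner:
        def method(self):
            pass

print(Outer.Inner.method.__qualname__)  # Outer.Inner.method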
|
How to get the class name of a method in Python?
|
Here's my problem and the code:
I try to use a decorator to record the time cost, but I cannot get the class name.
import functools
import time
def log_time(func):
@functools.wraps(func)
def record(*args, **kwargs):
# print(func)
# print(func.__name__)
# print(func.__class__)
# print(func.__class__.__name__)
func_name = func.__name__
start_time = time.time()
result = func(*args, **kwargs)
print(f"{func_name} costs time: {time.time() - start_time:.2f}s")
return result
return record
class FakeProject:
@log_time
def __init__(self, value):
self.load_data = [i for i in range(value)]
fake_project = FakeProject(100000000)
The above code logs the message __init__ costs time: 3.65s;
But I want FakeProject.__init__ costs time: 3.65s instead.
How can I get the class name and print it? Anyone can help? Thanks anyway
I try to print
print(func)
print(func.__name__)
print(func.__class__)
print(func.__class__.__name__)
and I get
<function FakeProject.__init__ at 0x0000022F3F365318>
__init__
<class 'function'>
function
|
[
"In this particular case, you can just use the __qualname__:\nimport functools\nimport time\n\n\ndef log_time(func):\n @functools.wraps(func)\n def record(*args, **kwargs):\n func_name = func.__qualname__\n start_time = time.time()\n result = func(*args, **kwargs)\n print(f\"{func_name} costs time: {time.time() - start_time:.2f}s\")\n return result\n\n return record\n\n\nclass FakeProject:\n @log_time\n def __init__(self, value):\n self.load_data = [i for i in range(value)]\n\nfake_project = FakeProject(100000000)\n\nThis outputs:\nFakeProject.__init__ costs time: 5.26s\n\n"
] |
[
0
] |
[] |
[] |
[
"class",
"decorator",
"printing",
"python"
] |
stackoverflow_0074568082_class_decorator_printing_python.txt
|
Q:
Seaborn lineplot Y-axis values to 1 decimal place - code not working but not sure why
I have the following code.
I am trying to plot a lineplot using seaborn.
However, I want all the Y-values to be to 1 decimal place.
When I try to set all these values to 1 decimal place, this does not seem to work.
I would be so grateful for a helping hand!
listedvariables = ['gender']
newestdf[['distance']] = newestdf[['distance']].round(1)
for i in range(0,len(listedvariables)):
plt.figure(figsize=(50,50))
ax = sns.lineplot(data=newestdf,x=listedvariables[i],y="distance",errorbar ='se',err_style='bars',linewidth=2)
ax.set_xlabel(listedvariables[i],labelpad = 40,fontsize=70,weight='bold')
ax.set_ylabel("Wayfinding Distance",labelpad = 40,fontsize=70,weight='bold')
ax.set_xticklabels(ax.get_xticks(),rotation = 30, ha="right",fontsize=60,weight='bold')
ax.set_yticklabels(ax.get_yticks(),ha="right",fontsize=60,weight='bold')
title = (listedvariables[i] + ' ' + 'plot')
ax.set_title(title,fontsize=70,pad=40,weight='bold')
dir_name = "/Users/macbook/Desktop/PhD Work/"
plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name))
plt.savefig(listedvariables[i]+' '+'scatterplot')
plt.show()
Plot:
As an example:
newestdf[['distance']].round(1)
0 -0.3
1 0.1
2 -0.2
3 -0.6
5 1.1
...
911 -0.2
912 0.2
913 -0.3
914 -0.4
915 -0.3
A:
Add these two lines (before title variable setup in your code):
ylabels = ['{:,.1f}'.format(x) for x in ax.get_yticks()]
ax.set_yticklabels(ylabels)
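An alternative that avoids set_yticklabels entirely is a tick formatter, which keeps the labels correct even if the limits change later; a sketch using matplotlib's FormatStrFormatter:
from matplotlib.ticker import FormatStrFormatter
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))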
Hope this helps!
|
Seaborn lineplot Y-axis values to 1 decimal place - code not working but not sure why
|
I have the following code.
I am trying to plot a lineplot using seaborn.
However, I want all the Y-values to be to 1 decimal place.
When I try to set all these values to 1 decimal place, this does not seem to work.
I would be so grateful for a helping hand!
listedvariables = ['gender']
newestdf[['distance']] = newestdf[['distance']].round(1)
for i in range(0,len(listedvariables)):
plt.figure(figsize=(50,50))
ax = sns.lineplot(data=newestdf,x=listedvariables[i],y="distance",errorbar ='se',err_style='bars',linewidth=2)
ax.set_xlabel(listedvariables[i],labelpad = 40,fontsize=70,weight='bold')
ax.set_ylabel("Wayfinding Distance",labelpad = 40,fontsize=70,weight='bold')
ax.set_xticklabels(ax.get_xticks(),rotation = 30, ha="right",fontsize=60,weight='bold')
ax.set_yticklabels(ax.get_yticks(),ha="right",fontsize=60,weight='bold')
title = (listedvariables[i] + ' ' + 'plot')
ax.set_title(title,fontsize=70,pad=40,weight='bold')
dir_name = "/Users/macbook/Desktop/PhD Work/"
plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name))
plt.savefig(listedvariables[i]+' '+'scatterplot')
plt.show()
Plot:
As an example:
newestdf[['distance']].round(1)
0 -0.3
1 0.1
2 -0.2
3 -0.6
5 1.1
...
911 -0.2
912 0.2
913 -0.3
914 -0.4
915 -0.3
|
[
"Add these two lines (before title variable setup in your code):\nylabels = ['{:,.2f}'.format(x) for x in ax.get_yticks()]\nax.set_yticklabels(ylabels)\n\nHope this helps!\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074567301_matplotlib_python.txt
|
Q:
Python expression difference
What is the difference between the expressions
c = 299792458 and c = 2.99792458 * 10 ** 8 in python?
Is it the same or different?
A:
c = 299792458 this is an integer and 2.99792458 * 10 ** 8 will evaluate to a float. So to answer your question, no they are not the same.
type(299792458)
> <class 'int'>
type(2.99792458 * 10 ** 8)
> <class 'float'>
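Related: Python also has scientific-notation literals, which give a float of the same magnitude directly, with no ** evaluated at runtime:
c = 2.99792458e8
type(c)
> <class 'float'>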
|
Python expression difference
|
What is the difference between the expressions
c = 299792458 and c = 2.99792458 * 10 ** 8 in python?
Is it the same or different?
|
[
"c = 299792458 this is an integer and 2.99792458 * 10 ** 8 will evaluate to a float. So to answer your question, no they are not the same.\ntype(299792458)\n> <class 'int'>\n\ntype(2.99792458 * 10 ** 8) \n> <class 'float'>\n\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074568067_python.txt
|
Q:
Is LightGBM available for Mac M1?
My goal is to work through a notebook. It reaches 97% recall while I am stuck at a 77.9% F1 score for 'Attrited Customer'. The problem is that the notebook uses LightGBM, and I am unable to install LightGBM.
What I've tried:
pip install lightgbm -> it throws the error python setup.py egg_info did not run successfully.
Then, I did pip install wheel -> now it throws the error python setup.py bdist_wheel did not run successfully.
Then, I did pip install cmake, pip install --upgrade pip setuptools, and brew install libomp -> the error persisted.
The full error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [80 lines of output]
INFO:root:running bdist_wheel
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
INFO:root:running build
INFO:root:running build_py
INFO:root:creating build
INFO:root:creating build/lib
INFO:root:creating build/lib/lightgbm
A:
As of this writing, no official release of lightgbm (the Python package for LightGBM) supports the M1 Macs (which use ARM chips).
osx-arm64 builds of lightgbm are supported by the lightgbm conda-forge feedstock, so you can install lightgbm on an M1 Mac using conda.
conda install \
--yes \
-c conda-forge \
'lightgbm>=3.3.3'
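Once installed, a quick sanity check that the package imports on the M1 (run inside the same conda environment):
import lightgbm
print(lightgbm.__version__)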
Progress towards officially supporting M1 Mac builds of LightGBM can be tracked in microsoft/LightGBM#5269 and microsoft/LightGBM#5328.
|
Is LightGBM available for Mac M1?
|
My goal is to work through a notebook. It reaches 97% recall while I am stuck at a 77.9% F1 score for 'Attrited Customer'. The problem is that the notebook uses LightGBM, and I am unable to install LightGBM.
What I've tried:
pip install lightgbm -> it throws the error python setup.py egg_info did not run successfully.
Then, I did pip install wheel -> now it throws the error python setup.py bdist_wheel did not run successfully.
Then, I did pip install cmake, pip install --upgrade pip setuptools, and brew install libomp -> the error persisted.
The full error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [80 lines of output]
INFO:root:running bdist_wheel
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
INFO:root:running build
INFO:root:running build_py
INFO:root:creating build
INFO:root:creating build/lib
INFO:root:creating build/lib/lightgbm
|
[
"As of this writing, no official release of lightgbm (the Python package for LightGBM) supports the M1 Macs (which us ARM chips).\nosx-arm64 builds of lightgbm are supported by the lightgbm conda-forge feedstock, so you can install lightgbm on an M1 Mac using conda.\nconda install \\\n --yes \\\n -c conda-forge \\\n 'lightgbm>=3.3.3'\n\nProgress towards officially supporting M1 Mac builds of LightGBM can be tracked in microsoft/LightGBM#5269 and microsoft/LightGBM#5328.\n"
] |
[
1
] |
[] |
[] |
[
"apple_m1",
"kaggle",
"lightgbm",
"pip",
"python"
] |
stackoverflow_0074568115_apple_m1_kaggle_lightgbm_pip_python.txt
|
Q:
Determine most utilized location for a specific date using Pandas
I would like to find out the most utilized location for the date of 2/1/2022.
Data
ID location total marks_free marks_utilized date
1 NY 6 5 1 2/1/2022
2 NY 10 5 5 2/1/2022
3 NY 2 1 1 2/1/2022
4 CA 5 4 1 2/1/2022
5 CA 6 5 1 2/1/2022
6 CA 10 10 0 2/1/2022
7 NY 6 6 0 3/1/2022
8 NY 10 10 0 3/1/2022
9 NY 2 1 1 3/1/2022
10 CA 5 4 1 3/1/2022
11 CA 6 5 1 3/1/2022
12 CA 10 10 0 3/1/2022
Desired
location marks_utilized date
NY 38% 2/1/2022
Logic
filter to 2/1/2022, groupby location
for instance lets take NY
sum(marks_utilized) / sum(total) * 100
7/18 *100 = 38%
Doing
# filter to 2/1/2022
df1 = df.groupby(['location', 'date']).agg({'marks_utilized': 'sum', 'total': 'sum'})
df1['marks_utilized'] = df['marks_utilized'] / df['total'] * 100
Still researching this.
A:
Your attempt just needs a simple modification to work:
df1['marks_utilized'] = df['marks_utilized'] / df['total'] * 100 should be df1['marks_utilized'] = df1['marks_utilized'] / df1['total'] * 100
If you only want the result for 2/1/2022, you could filter the df first and do the groupby afterwards. Also, you could use df1.to_string(formatters={'marks_utilized': '{:,.2f}'.format}) to format the float as a percentage string.
ID,location,total,marks_free,marks_utilized,date
1,NY,6,5,1,2/1/2022
2,NY,10,5,5,2/1/2022
3,NY,2,1,1,2/1/2022
4,CA,5,4,1,3/1/2022
5,CA,6,5,1,3/1/2022
6,CA,10,10,0,3/1/2022
import pandas as pd
df = pd.read_csv("test.csv")
df1 = df.groupby(['location', 'date']).agg({'marks_utilized': 'sum', 'total': 'sum'})
df1['marks_utilized'] = df1['marks_utilized'] / df1['total']
max_row = df1.loc[df1['marks_utilized'].idxmax()]
print(max_row)
marks_utilized 0.388889
total 18.000000
Name: (NY, 2/1/2022), dtype: float64
A:
We could try
df.groupby(['location','date']).apply(lambda x : x['marks_utilized'].sum()/x['total'].sum()).\
mul(100).reset_index(name = 'marks_utilized')
Out[279]:
location date marks_utilized
0 CA 3/1/2022 9.523810
1 NY 2/1/2022 38.888889
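Combining both ideas: filter to the date first, then take the location with the highest ratio (a sketch reusing the question's column names):
day = df[df['date'] == '2/1/2022']
util = day.groupby('location').apply(lambda g: g['marks_utilized'].sum() / g['total'].sum() * 100)
print(util.idxmax(), util.max())  # NY 38.888...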
|
Determine most utilized location for a specific date using Pandas
|
I would like to find out the most utilized location for the date of 2/1/2022.
Data
ID location total marks_free marks_utilized date
1 NY 6 5 1 2/1/2022
2 NY 10 5 5 2/1/2022
3 NY 2 1 1 2/1/2022
4 CA 5 4 1 2/1/2022
5 CA 6 5 1 2/1/2022
6 CA 10 10 0 2/1/2022
7 NY 6 6 0 3/1/2022
8 NY 10 10 0 3/1/2022
9 NY 2 1 1 3/1/2022
10 CA 5 4 1 3/1/2022
11 CA 6 5 1 3/1/2022
12 CA 10 10 0 3/1/2022
Desired
location marks_utilized date
NY 38% 2/1/2022
Logic
filter to 2/1/2022, groupby location
for instance lets take NY
sum(marks_utilized) / sum(total) * 100
7/18 *100 = 38%
Doing
# filter to 2/1/2022
df1 = df.groupby(['location', 'date']).agg({'marks_utilized': 'sum', 'total': 'sum'})
df1['marks_utilized'] = df['marks_utilized'] / df['total'] * 100
Still researching this.
|
[
"just need a simple modification on your attempt, it would work.\ndf1['marks_utilized'] = df['marks_utilized'] / df['total'] * 100 should be df1['marks_utilized'] = df1['marks_utilized'] / df1['total'] * 100\nIf you only want result in 2/1/2022, you could filter the df and do groupby afterwards. Also, could use df1.to_string(formatters={'marks_utilized': '{:,.2f}'.format} to format the float to percentage string.\nID,location,total,marks_free,marks_utilized,date\n1,NY,6,5,1,2/1/2022\n2,NY,10,5,5,2/1/2022\n3,NY,2,1,1,2/1/2022\n4,CA,5,4,1,3/1/2022\n5,CA,6,5,1,3/1/2022\n6,CA,10,10,0,3/1/2022\n\nimport pandas as pd\n\ndf = pd.read_csv(\"test.csv\")\ndf1 = df.groupby(['location', 'date']).agg({'marks_utilized': 'sum', 'total': 'sum'})\ndf1['marks_utilized'] = df1['marks_utilized'] / df1['total']\nmax_row = df1.loc[df1['marks_utilized'].idxmax()]\nprint(max_row)\n\nmarks_utilized 0.388889\ntotal 18.000000\nName: (NY, 2/1/2022), dtype: float64\n\n",
"We could try\ndf.groupby(['location','date']).apply(lambda x : x['marks_utilized'].sum()/x['total'].sum()).\\\n mul(100).reset_index(name = 'marks_utilized')\nOut[279]: \n location date marks_utilized\n0 CA 3/1/2022 9.523810\n1 NY 2/1/2022 38.888889\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"group_by",
"numpy",
"pandas",
"python"
] |
stackoverflow_0074568063_group_by_numpy_pandas_python.txt
|
Q:
Is there a way to use laptop's built-in biometric sensors in python applications?
I am trying to make an application using Python that registers students' attendance. I'm planning to use my laptop's built-in fingerprint device to identify the students and register the attendance.
I've tried some web searches but couldn't find any way to use built-in fingerprint devices in Python applications. Do you know a way to do it?
The device I want to use for fingerprints is a Lenovo ThinkPad L540.
I managed to find some stuff like windows biometric framework but those things were to be used with other languages.
https://learn.microsoft.com/en-us/windows/win32/secbiomet/biometric-service-api-portal?redirectedfrom=MSDN
A:
This cannot be done for now. The fingerprint sensor on a laptop/mobile can be used for authentication purposes only. That means you can enrol multiple fingerprints that are eligible to access the device; the device will then allow any one of them to unlock it, but it will not record whose fingerprint it is. It will just say whether a fingerprint is authenticated or not.
For recording attendance, you must go with a time-attendance system. If you want to build a software-based attendance system with the help of a scanner, then you have to go with external fingerprint scanners like the MFS100, ZK7500, etc.
A:
From what I can tell, this absolutely can be done. The following link is for a python wrapper around the Windows Biometric Framework. It is around 4 years old, but the functionality it offers still seems to work fine.
https://github.com/luspock/FingerPrint
The identify function in this wrapper prints out the Sub Factor value whenever someone places a matching finger on the scanner. In my experimentation, the returned Sub Factor is unique to each finger that is stored. In the first day you use this, you would just fill a dictionary with sub factors and student names, then that is everything you need for your use case.
Considering that this wrapper only makes use of the system biometric unit pool, the drawback here is that you have to add all of your student's fingers to your PC through the windows sign-in options, meaning they would be able to unlock it. If you are okay with that, it seems like this will suit your needs.
It would also be possible for you to disable login with fingerprint and only use the system pool for this particular use case. That would give you what you want and keep your PC safe from anyone that has their fingerprint stored in the system pool.
If you want to make use of a private pool, you would have to add that functionality to the wrapper yourself. That's totally possible, but it would be a lot of work.
One thing to note about the Windows Biometric Framework is that it requires the process calling the function to have focus. In order for me to test the wrapper, I used the command-line through the Windows Console Host. Windows Terminal doesn't work, because it doesn't properly acquire focus. You can also use tkinter and call the functions with a button.
|
Is there a way to use laptop's built-in biometric sensors in python applications?
|
I am trying to make an application using Python that registers students' attendance. I'm planning to use my laptop's built-in fingerprint device to identify the students and register the attendance.
I've tried some web searches but couldn't find any way to use built-in fingerprint devices in Python applications. Do you know a way to do it?
The device I want to use for fingerprints is a Lenovo ThinkPad L540.
I managed to find some stuff like windows biometric framework but those things were to be used with other languages.
https://learn.microsoft.com/en-us/windows/win32/secbiomet/biometric-service-api-portal?redirectedfrom=MSDN
|
[
"This can not be done for now. The fingerprint sensor associated with laptop/mobile can be used for authentication purpose only. Means, you can add the more number of fingerprints who are eligible to access the device. Then, device will allow any one of them to unlock the device. It will not record whose fingerprint it is. It will just say, a fingerprint is authenticated or not.\nFor recording the attendance, you must go with the time attendances systems. if you want to build software based attendance system with the help of scanner, then you have to go with the fingerprint scanners like mfs100, zk7500 and etc.\n",
"From what I can tell, this absolutely can be done. The following link is for a python wrapper around the Windows Biometric Framework. It is around 4 years old, but the functionality it offers still seems to work fine.\nhttps://github.com/luspock/FingerPrint\nThe identify function in this wrapper prints out the Sub Factor value whenever someone places a matching finger on the scanner. In my experimentation, the returned Sub Factor is unique to each finger that is stored. In the first day you use this, you would just fill a dictionary with sub factors and student names, then that is everything you need for your use case.\nConsidering that this wrapper only makes use of the system biometric unit pool, the drawback here is that you have to add all of your student's fingers to your PC through the windows sign-in options, meaning they would be able to unlock it. If you are okay with that, it seems like this will suit your needs.\nIt would also be possible for you to disable login with fingerprint and only use the system pool for this particular use case. That would give you what you want and keep your PC safe from anyone that has their fingerprint stored in the system pool.\nIf you want to make use of a private pool, you would have to add that functionality to the wrapper yourself. That's totally possible, but it would be a lot of work.\nOne thing to note about the Windows Biometric Framework is that it requires the process calling the function to have focus. In order for me to test the wrapper, I used the command-line through the Windows Console Host. Windows Terminal doesn't work, because it doesn't properly acquire focus. You can also use tkinter and call the functions with a button.\n"
] |
[
2,
0
] |
[] |
[] |
[
"biometrics",
"fingerprint",
"hardware",
"python"
] |
stackoverflow_0070875299_biometrics_fingerprint_hardware_python.txt
|
Q:
Removing an empty line in a file
I have been trying to delete lines from a file without loading the whole file into memory, because it's too large (~1 GB). How do I do it without leaving a blank line in the file?
For example:
I want this
foo bar
this is the line to be removed
foo bar
foo bar
To this:
foo bar
foo bar
foo bar
But I get this:
foo bar
foo bar
foo bar
So I have managed to delete the line, but I also want to remove the blank line it leaves behind. The way I did it so far: I move the file pointer (cursor) to the line I want and then overwrite it by writing ' ' (spaces).
a = f.tell()
f.readline()
b = f.tell()
f.seek(a)
l2 = b-a-1
blank = " "*l2
f.write(blank)
f.seek(a)
A:
A much simpler approach to filtering a file in-place would be to open the same file twice, once for reading and another for writing, output only what needs to be kept, and truncate the output in the end. This way, none of tell or seek or any file position calculations would be needed:
with open('file.txt') as file, open('file.txt', 'r+') as output:
for line in file:
if line != 'this is the line to be removed\n':
output.write(line)
output.truncate()
Demo: https://replit.com/@blhsing/SeagreenSlushyAutoresponder
A:
If you do need to remove the lines in place, which can be fraught with danger, then you could try the following. Basically, it keeps track of the latest line read and the latest line written, and truncates from the end of the last line written once the input is exhausted. Please test before use!
with open('file.txt', 'r+') as f:
r_pos = w_pos = f.tell()
while True:
f.seek(r_pos)
line = f.readline()
if not line:
break
r_pos = f.tell()
if 'remove' not in line: # or your criteria
f.seek(w_pos)
f.write(line)
w_pos = f.tell()
f.seek(w_pos)
f.truncate()
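For completeness, a third option that also avoids loading the file into memory and sidesteps the risks of in-place editing: stream the kept lines into a temporary file in the same directory (so the final rename stays on one filesystem) and atomically replace the original. The filter condition here is illustrative:
import os
import tempfile

with open('file.txt') as src, tempfile.NamedTemporaryFile(
        'w', dir='.', delete=False) as tmp:
    for line in src:
        if 'remove' not in line:  # your criteria
            tmp.write(line)
os.replace(tmp.name, 'file.txt')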
|
Removing an empty line in a file
|
I have been trying to delete lines from a file without loading the whole file into memory, because it's too large (~1 GB). How do I do it without leaving a blank line in the file?
For example:
I want this
foo bar
this is the line to be removed
foo bar
foo bar
To this:
foo bar
foo bar
foo bar
But I get this:
foo bar
foo bar
foo bar
So I have managed to delete the line, but I also want to remove the blank line it leaves behind. The way I did it so far: I move the file pointer (cursor) to the line I want and then overwrite it by writing ' ' (spaces).
a = f.tell()
f.readline()
b = f.tell()
f.seek(a)
l2 = b-a-1
blank = " "*l2
f.write(blank)
f.seek(a)
|
[
"A much simpler approach to filtering a file in-place would be to open the same file twice, once for reading and another for writing, output only what needs to be kept, and truncate the output in the end. This way, none of tell or seek or any file position calculations would be needed:\nwith open('file.txt') as file, open('file.txt', 'r+') as output:\n for line in file:\n if line != 'this is the line to be removed\\n':\n output.write(line)\n output.truncate()\n\nDemo: https://replit.com/@blhsing/SeagreenSlushyAutoresponder\n",
"If you do need to remove the lines in place, which can be fraught with danger, then you could try the following. Basically, it keeps track of the latest line read and the latest line written, and truncates from the end of the last line written once the input is exhausted. Please test before use!\nwith open('file.txt', 'r+') as f:\n r_pos = w_pos = f.tell()\n while True:\n f.seek(r_pos)\n line = f.readline()\n if not line:\n break\n r_pos = f.tell()\n if 'remove' not in line: # or your criteria\n f.seek(w_pos)\n f.write(line)\n w_pos = f.tell()\n f.seek(w_pos)\n f.truncate()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"file",
"large_files",
"python"
] |
stackoverflow_0074567654_file_large_files_python.txt
|
Q:
Sorting a list using sorted, key and lambda
i have this list which is a z3 model:
list = [x_2 = 0, x_1 = 1, x_3 = 27, x_11 = 1, x_18 = 4, x_17 = 6, x_26 = 4, x_12 = 4, x_7 = 2, x_22 = 8, x_23 = 27, x_21 = 1, x_28 = 4,x_6 = 1, x_16 = 4, x_27 = 9, x_13 = 27, x_8 = 27, x_29 = 1, x_24 = 19, x_19 = 2, x_14 = 13, x_9 = 20, x_4 = 23, x_25 = 5, x_20 = 4, x_15 = 3, x_10 = 2,x_5 = 1, x_0 = 0]
i have tried to use:
solved = solver.model()
list = sorted ([(i, solved[i]) for i in solved], key = lambda x: str(x[0]))
to sort the list so x_0 is first, x_1 is second, x_2 is next .....etc, but i get the result:
[(x_0, 0), (x_1, 1), (x_10, 2), (x_11, 1), (x_12, 4), (x_13, 27), (x_14, 13), (x_15, 3), (x_16, 4), (x_17, 6), (x_18, 4), (x_19, 2), (x_2, 0), (x_20, 4), (x_21, 1), (x_22, 8), (x_23, 27), (x_24, 19), (x_25, 5), (x_26, 4), (x_27, 9), (x_28, 4), (x_29, 1), (x_3, 27), (x_4, 23), (x_5, 1), (x_6, 1), (x_7, 2), (x_8, 27), (x_9, 20)]
where it has all the ones with 1's at the start first but its still quite out of order but i feel like this is close
A:
Try sorting on the numeric suffix instead of the whole string:
list = sorted([(i, solved[i]) for i in solved], key=lambda x: int(str(x[0])[2:]))
The approach: since every name starts with the x_ prefix, slice it off ([2:]) and sort on the basis of the remaining integer, so x_10 sorts after x_9 instead of after x_1.
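A variant that doesn't hard-code the two-character prefix: the declarations z3 yields when iterating a model expose their name via .name(), so you can split on the underscore instead (and, as a side note, avoid calling the result list, which shadows the built-in):
solved = solver.model()
pairs = sorted(((d, solved[d]) for d in solved),
               key=lambda p: int(p[0].name().split('_')[-1]))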
|
Sorting a list using sorted, key and lambda
|
i have this list which is a z3 model:
list = [x_2 = 0, x_1 = 1, x_3 = 27, x_11 = 1, x_18 = 4, x_17 = 6, x_26 = 4, x_12 = 4, x_7 = 2, x_22 = 8, x_23 = 27, x_21 = 1, x_28 = 4,x_6 = 1, x_16 = 4, x_27 = 9, x_13 = 27, x_8 = 27, x_29 = 1, x_24 = 19, x_19 = 2, x_14 = 13, x_9 = 20, x_4 = 23, x_25 = 5, x_20 = 4, x_15 = 3, x_10 = 2,x_5 = 1, x_0 = 0]
i have tried to use:
solved = solver.model()
list = sorted ([(i, solved[i]) for i in solved], key = lambda x: str(x[0]))
to sort the list so x_0 is first, x_1 is second, x_2 is next .....etc, but i get the result:
[(x_0, 0), (x_1, 1), (x_10, 2), (x_11, 1), (x_12, 4), (x_13, 27), (x_14, 13), (x_15, 3), (x_16, 4), (x_17, 6), (x_18, 4), (x_19, 2), (x_2, 0), (x_20, 4), (x_21, 1), (x_22, 8), (x_23, 27), (x_24, 19), (x_25, 5), (x_26, 4), (x_27, 9), (x_28, 4), (x_29, 1), (x_3, 27), (x_4, 23), (x_5, 1), (x_6, 1), (x_7, 2), (x_8, 27), (x_9, 20)]
where it has all the ones with 1's at the start first but its still quite out of order but i feel like this is close
|
[
"just wanted to ask.. is this working in your code.? Please let me know.\nlist = sorted ([(i, solved[i]) for i in solved], key = lambda x: int(str(x[0])[2:]))\n\nMy approach:- as all contains (x_) slicing this part (x_)-> [2:] and than sorting on the basis of integer. Just don't know is it working or not.\n"
] |
[
1
] |
[] |
[] |
[
"list",
"python",
"sorting",
"z3"
] |
stackoverflow_0074567697_list_python_sorting_z3.txt
|
Q:
Why only id column is shown in migration file when I create model in Python?
from django.db import models
class Town(models.Model):
name: models.CharField(max_length=70,unique=True)
country: models.CharField(max_length=30,unique=True)
class Meta:
pass
This is my model Town with two attributes: name and country. When I create a migration, the 0001_initial.py file shows only the id column.
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Town',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
]
What kind of problem could it be?
A:
The ID field is always automatically generated by Django when making migrations; you can specify your own ID field as well, but an auto-incrementing one like this is fine for your use case. The reason the other fields are missing is that name: models.CharField(...) is a type annotation, not an assignment; Django only picks up fields assigned with =.
You also might want to get rid of the unique=True on country, as it would prevent adding multiple towns from the same country.
Create your model like this (note the = instead of :) and redo your migration process:
class Town(models.Model):
name = models.CharField(max_length=70)
country = models.CharField(max_length=30)
When makemigrations reports 'No changes detected', I usually follow these steps:
First, delete the migrations folder in the project.
Then delete the django_migrations entries in your database with this query:
DELETE from django_migrations WHERE app='yourAppName'
Then create a new 'migrations' folder in the app folder containing an empty __init__.py.
Lastly, redo the migration process:
py manage.py makemigrations
py manage.py migrate --run-syncdb
A:
I had the same problem, and this solved it. Hope it helps.
Instead of : it should be =. A colon only creates a type annotation, not a field assignment, so Django never picks the fields up.
You also need to provide a default value so the previously created rows can be populated.
Then run python manage.py makemigrations
Run python manage.py migrate
class Town(models.Model):
name = models.CharField(max_length=70,unique=True, default='')
country = models.CharField(max_length=30,unique=True, default='')
class Meta:
pass
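For anyone wondering why the colon version silently produces nothing, here is a minimal plain-Python sketch (illustrative, not from either answer): an annotation without an assignment never creates a class attribute, so Django's model metaclass has nothing to pick up.
class Broken:
    name: str          # annotation only; recorded in __annotations__
class Works:
    name = "a value"   # real class attribute
print(hasattr(Broken, "name"))   # False, so Django would see no field
print(hasattr(Works, "name"))    # True
print(Broken.__annotations__)    # {'name': <class 'str'>}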
|
Why is only the id column shown in the migration file when I create a model in Python?
|
from django.db import models
class Town(models.Model):
name: models.CharField(max_length=70,unique=True)
country: models.CharField(max_length=30,unique=True)
class Meta:
pass
This is my model Town with two attributes: name and country. When I create a migration, only the id column is shown in the 0001_initial.py file:
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Town',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
]
What kind of problem could it be?
|
[
"The ID-Field is always automatically generated by Django when making migrations. You can specifiy your own ID field aswell, but using an auto-incremented like this is fine for your use case.\nYou also might want to get rid of the unique=True, as it would prevent adding multiple towns from the same country.\nCreate your model like this and redo your migration process:\nclass Town(models.Model):\n\n name = models.CharField(max_length=70)\n country = models.CharField(max_length=30)\n\nWhen you have 'no changes detected' when making migrations, I usually follow these steps:\nFirst, Delete migrations folder in project.\nThen delete django-migration entries in your Database with this query:\nDELETE from django_migrations WHERE app='yourAppName'\n\nThen, create new folder ‘migrations” in app folder + init.py\nLastly, redo the migration process:\npy manage.py makemigrations\npy manage.py migrate --run-syncdb\n\n",
"I had the same problem, and this solved it. Hope it helps.\n\nInstead of : it should be =\nYou also need to provide a default value for all the previous rows that were created\nThen run python manage.py makemigrations\nRun python manage.py migrate\n\n class Town(models.Model):\n \n name = models.CharField(max_length=70,unique=True, default='')\n country = models.CharField(max_length=30,unique=True, default='')\n \n class Meta:\n pass\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"django_models",
"migration",
"python"
] |
stackoverflow_0072461496_django_models_migration_python.txt
|
Q:
Why is the time complexity (n*k) instead of ((n-k)*k) for this algorithm?
I am wondering why this brute force approach to a Maximum Sum Subarray of Size K problem is of time complexity nk instead of (n-k)k. Given that we are subtracting k elements in the outermost loop, wouldn't the latter be more appropriate? The text's solution mentions nk, which confuses me slightly.
I have included the short code snippet below!
Thank you
def max_sub_array_of_size_k(k, arr):
max_sum = 0
window_sum = 0
for i in range(len(arr) - k + 1):
window_sum = 0
for j in range(i, i+k):
window_sum += arr[j]
max_sum = max(max_sum, window_sum)
return max_sum
I haven't actually tried to fix this, I just want to understand.
A:
In time-complexity calculations, O(n) = O(n-1) = O(n-k); all of these represent linear growth, thus O(n-k) × O(k) = O(n*k). Of course, this problem can be optimized to O(n) time complexity by using prefix sums.
def max_sub_array_of_size_k(k, arr):
s = [0]
for i in range(len(arr)):
# sum[i] = sum of arr[0] + ... + arr[i]
s.append(s[-1] + arr[i])
max_sum = float("-inf")
for i in range(1, len(s) + 1 - k):
max_sum = max(max_sum, s[i + k - 1] - s[i - 1])
return max_sum
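For instance, with an illustrative input:
print(max_sub_array_of_size_k(3, [2, 1, 5, 1, 3, 2]))  # 9, from the window [5, 1, 3]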
A:
It's O(k(n-k+1)), actually, and you are right that that is a tighter bound than O(nk).
Big-O gives an upper bound, though, so O(k(n-k+1)) is a subset of O(nk), and saying that the complexity is in O(nk) is also correct.
It's just a question of whether or not the person making the statement about the algorithm's complexity cares about, or cares to communicate, the fact that it can be faster when k is close to n.
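A quick illustrative check of that count (not from either answer): the brute force does k units of inner-loop work for each of the n-k+1 windows.
def brute_force_inner_iterations(n, k):
    count = 0
    for i in range(n - k + 1):      # one pass per window
        for j in range(i, i + k):   # k additions per window
            count += 1
    return count

n, k = 10, 4
print(brute_force_inner_iterations(n, k), k * (n - k + 1))  # 28 28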
|
Why is the time complexity (n*k) instead of ((n-k)*k) for this algorithm?
|
I am wondering why this brute force approach to a Maximum Sum Subarray of Size K problem is of time complexity nk instead of (n-k)k. Given that we are subtracting k elements in the outermost loop, wouldn't the latter be more appropriate? The text's solution mentions nk, which confuses me slightly.
I have included the short code snippet below!
Thank you
def max_sub_array_of_size_k(k, arr):
max_sum = 0
window_sum = 0
for i in range(len(arr) - k + 1):
window_sum = 0
for j in range(i, i+k):
window_sum += arr[j]
max_sum = max(max_sum, window_sum)
return max_sum
I haven't actually tried to fix this, I just want to understand.
|
[
"In the calculation of time complexity, O(n)=O(n-1)=O(n-k) ,both represent the complexity of linear growth, thus O(n-k)✖️O(k) = O(n*k). Of course, this question can be optimized to O(n) time complexity by using the sum of prefixes.\ndef max_sub_array_of_size_k(k, arr):\n s = [0]\n for i in range(len(arr)):\n # sum[i] = sum of arr[0] + ... + arr[i]\n s.append(s[-1] + arr[i]) \n \n max_sum = float(\"-inf\")\n for i in range(1, len(s) + 1 - k):\n max_sum = max(max_sum, s[i + k - 1] - s[i - 1])\n return max_sum\n\n",
"It's O(k(n-k+1)), actually, and you are right that that is a tighter bound than O(nk).\nBig-O gives an upper bound, though, so O(k(n-k+1)) is a subset of O(nk), and saying that the complexity is in O(nk) is also correct.\nIt's just a question of whether or not the person making the statement about the algorithm's complexity cares about, or cares to communicate, the fact that it can be faster when k is close to n.\n"
] |
[
1,
0
] |
[] |
[] |
[
"algorithm",
"arrays",
"python",
"sliding_window",
"time_complexity"
] |
stackoverflow_0074567521_algorithm_arrays_python_sliding_window_time_complexity.txt
|
Q:
Python Lists and Files
I need help figuring out how to output every word in a list that contains whatever letter the user picks.
For example, if my list were ["Bob", "Mary", "Jezebel"] and I asked the user to pick any letter and they picked the letter z, I want to find out how I can output only Jezebel from the list using a for loop.
import os.path
def name_file():
# asking user name of file
file_name = input("What is the name of the file to read the names from?")
while not os.path.exists(file_name):
print("This file does not exist")
file_name = input("What is the name of the file to read the names from?")
return file_name
name_file()
file_opener = open("5letterwords.txt","r")
read_line_by_line = file_opener.readlines()
word_list = []
for line in read_line_by_line:
word_list.append(line.strip())
print(word_list)
letter = input("Pick a letter of your choosing and every word with that letter will be outputted")
for letter in word_list:
print (letter in word_list)
Above is my current code; the last 3 lines are the part I'm struggling with. I want to output whichever words contain the letter picked by the user.
A:
Given a word list such as ["Bob", "Mary", "Jezebel"]:
Code:
word_list=["Bob", "Mary", "Jezebel"]
letter = input("Pick a letter of your choosing and every word with that letter will be outputted")
for word in word_list:
if letter in word:
print(word)
Output:
Pick a letter of your choosing and every word with that letter will be outputted
z
Jezebel
Note: every word that contains "z" will be printed. If you want only the first matching word in word_list to be printed, add a break statement:
for word in word_list:
if letter in word:
print(word)
break #Like this
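To tie this back to the file-reading code from the question, a sketch (note the question's code discards name_file()'s return value and opens a hardcoded file; using the returned name avoids that):
file_name = name_file()  # the validated name from the question's helper
with open(file_name) as f:
    word_list = [line.strip() for line in f]

letter = input("Pick a letter of your choosing and every word with that letter will be outputted")
for word in word_list:
    if letter in word:
        print(word)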
|
Python Lists and Files
|
I need help figuring out how to output every word in a list that contains whatever letter the user picks.
For example, if my list were ["Bob", "Mary", "Jezebel"] and I asked the user to pick any letter and they picked the letter z, I want to find out how I can output only Jezebel from the list using a for loop.
import os.path
def name_file():
# asking user name of file
file_name = input("What is the name of the file to read the names from?")
while not os.path.exists(file_name):
print("This file does not exist")
file_name = input("What is the name of the file to read the names from?")
return file_name
name_file()
file_opener = open("5letterwords.txt","r")
read_line_by_line = file_opener.readlines()
word_list = []
for line in read_line_by_line:
word_list.append(line.strip())
print(word_list)
letter = input("Pick a letter of your choosing and every word with that letter will be outputted")
for letter in word_list:
print (letter in word_list)
Above is my current code; the last 3 lines are the part I'm struggling with. I want to output whichever words contain the letter picked by the user.
|
[
"As the word list contains.. [\"Bob\", \"Mary\", \"Jezebel\"]\nCode:-\nword_list=[\"Bob\", \"Mary\", \"Jezebel\"]\nletter = input(\"Pick a letter of your choosing and every word with that letter will be outputted\")\nfor word in word_list:\n if letter in word:\n print(word)\n\nOutput:-\nPick a letter of your choosing and every word with that letter will be outputted\nz\nJezebel\n\nNote-: All the word which contains \"z\" will be printed. if you wanted that only the first word in a word_list should be printed which contains letter \"z\" than you should apply break statement.\nfor word in word_list:\n if letter in word:\n print(word)\n break #Like this\n\n"
] |
[
0
] |
[] |
[] |
[
"file",
"for_loop",
"list",
"python"
] |
stackoverflow_0074568222_file_for_loop_list_python.txt
|
Q:
How to convert an array of specific keys to an object in Python
I am writing a script in Python that uses the pyreadstat library. I call its read_sas7bdat function, which returns a dataframe. The code:
df = pyreadstat.read_sas7bdat(FILE_LOC, row_offset=START_FROM_ROW, row_limit=PAGE_SIZE)
finalList = []
for key in df[0]:
l = list(map(lambda x: str(x) if str(x) == "nan" else x, df[0][key].tolist()))
nparray = {key: l}
finalList.append(nparray)
return json.dumps(finalList)
Following is the response I get, an array of objects like this:
[
{
"fruits": [
"banana",
"apple",
"oranges",
"kiwi",
"pineapple"
]
},
{
"calories": [
"10",
"20",
"10",
"15",
"60"
]
}
]
And I want to convert it into:
[
{
"fruits": "banana",
"calories": "10"
},
{
"fruits": "apple",
"calories": "20"
},
{
"fruits": "oranges",
"calories": "10"
},
{
"fruits": "kiwi",
"calories": "15"
},
{
"fruits": "pineapple",
"calories": "60"
}
]
This is what df looks like:
( fruits calories
0 banana 10
1 apple 20, <pyreadstat._readstat_parser.metadata_container object at 0x7f2e38f5f100>)
How can it be done in Python?
A:
import json

data = finalList  # the list of single-key dicts built in the question
row_count = len(list(data[0].values())[0])  # derive the row count instead of hardcoding 5

newJsonList = []
for i in range(row_count):
    aDict = {}
    for j in range(len(data)):
        aDict[list(data[j].keys())[0]] = list(data[j].values())[0][i]
    newJsonList.append(aDict)
print(json.dumps(newJsonList))
But df[0].to_dict(orient='records') gives the same result (df[0] because read_sas7bdat returns a (dataframe, metadata) tuple) <- Thanks @tdelaney
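A minimal sketch of that pandas route, with a stand-in frame since the SAS file isn't available here:
import json
import pandas as pd

# Stand-in for the dataframe half of pyreadstat's (dataframe, metadata) tuple.
frame = pd.DataFrame({"fruits": ["banana", "apple"], "calories": ["10", "20"]})
print(json.dumps(frame.to_dict(orient="records")))
# [{"fruits": "banana", "calories": "10"}, {"fruits": "apple", "calories": "20"}]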
|
How to convert an array of specific keys to an object in Python
|
I am writing a script in Python that uses the pyreadstat library. I call its read_sas7bdat function, which returns a dataframe. The code:
df = pyreadstat.read_sas7bdat(FILE_LOC, row_offset=START_FROM_ROW, row_limit=PAGE_SIZE)
finalList = []
for key in df[0]:
l = list(map(lambda x: str(x) if str(x) == "nan" else x, df[0][key].tolist()))
nparray = {key: l}
finalList.append(nparray)
return json.dumps(finalList)
Following is the response I get, an array of objects like this:
[
{
"fruits": [
"banana",
"apple",
"oranges",
"kiwi",
"pineapple"
]
},
{
"calories": [
"10",
"20",
"10",
"15",
"60"
]
}
]
And I want to convert it into:
[
{
"fruits": "banana",
"calories": "10"
},
{
"fruits": "apple",
"calories": "20"
},
{
"fruits": "oranges",
"calories": "10"
},
{
"fruits": "kiwi",
"calories": "15"
},
{
"fruits": "pineapple",
"calories": "60"
}
]
This is what df looks like:
( fruits calories
0 banana 10
1 apple 20, <pyreadstat._readstat_parser.metadata_container object at 0x7f2e38f5f100>)
How can it be done in Python?
|
[
"newJsonList = []\nfor i in range(5):\n aDict = {}\n for j in range(len(data)):\n aDict[ list(data[j].keys())[0] ] = list(data[j].values())[0][i]\n newJsonList.append(aDict)\nprint(json.dumps(newJsonList))\n\nBut also df.to_dict(orient='records') results the same <- Thanks @tdelaney\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074541812_python.txt
|
Q:
Python: How to pre-calculate the slope of a line segment that will be on the graph in Matplotlib?
Given two points,
two_points = [(Timestamp('2022-11-25 01:15:00', freq='15T'), 0.08124),
(Timestamp('2022-11-25 02:15:00', freq='15T'), 0.08041)]
I use these two points to draw a line on a candlestick graph. For instance:
[image: candlestick graph with a line]
I want to know the slope of this green line before I draw it on the graph.
But the calculation of the slope depends on the scale of the y-axis (based on price) and the width of each candlestick (based on the timeframe).
Is there any good way to calculate the slope of the line? Thanks.
A:
As you have noted, the slope is going to be "Price Change" divided by "Time Change".
That said, you need first to decide what units you want the slope to be in.
Let's say, for example, that the Price is in dollars. If so, do you want to know the slope in dollars per second? dollars per minute? dollars per hour? dollars per day?
First break up your two points into Price and Time, then calculate the slope as
slope = (price2-price1) / (time2-time1)
Here is some example code to find the slope in dollars per hour:
import pandas as pd
two_points = [(pd.Timestamp('2022-11-25 01:15:00'), 0.08124),
(pd.Timestamp('2022-11-25 02:15:00'), 0.08041)]
point1 = two_points[0]
point2 = two_points[1]
time1 = point1[0]
time2 = point2[0]
price1 = point1[1]
price2 = point2[1]
price_change = price2 - price1
print('price change: %10.6f' % price_change)
time_change = time2 - time1
print('time change : Timedelta('+str(time_change)+')')
print()
print('time change in seconds: %6.1f' % time_change.total_seconds())
# total_seconds() (rather than .seconds, which ignores whole days) stays correct for longer spans
time_change_in_hours = time_change.total_seconds()/(60*60)
print('time change in hours: %10.4f' % time_change_in_hours)
slope = price_change / time_change_in_hours
print()
print('the slope is %10.6f' % slope,'dollars per hour')
The output from the above code:
price change: -0.000830
time change : Timedelta(0 days 01:00:00)
time change in seconds: 3600.0
time change in hours: 1.0000
the slope is -0.000830 dollars per hour
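For comparison, a more compact equivalent (illustrative, same two points):
import pandas as pd

t1, p1 = pd.Timestamp('2022-11-25 01:15:00'), 0.08124
t2, p2 = pd.Timestamp('2022-11-25 02:15:00'), 0.08041
slope = (p2 - p1) / ((t2 - t1).total_seconds() / 3600)  # dollars per hour
print(round(slope, 6))  # -0.00083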
|
Python: How to pre-calculate the slope of a line segment that will be on the graph in Matplotlib?
|
Given two points,
two_points = [(Timestamp('2022-11-25 01:15:00', freq='15T'), 0.08124),
(Timestamp('2022-11-25 02:15:00', freq='15T'), 0.08041)]
I use these two points to draw a line on a candlestick graph. For instance:
[image: candlestick graph with a line]
I want to know the slope of this green line before I draw it on the graph.
But the calculation of the slope depends on the scale of the y-axis (based on price) and the width of each candlestick (based on the timeframe).
Is there any good way to calculate the slope of the line? Thanks.
|
[
"As you have noted, the slope is going to be \"Price Change\" divided by \"Time Change\".\nThat said, you need first to decide what units you want the slope to be in.\nLet's say, for example, that the Price is in dollars. If so, do you want to know the slope in dollars per second? dollars per minute? dollars per hour? dollars per day?\nFirst break up your two points into Price and Time, then calculate the slope as\nslope = (price2-price1) / (time2-time1)\n\nHere is some example code to find the slope in dollars per hour:\nimport pandas as pd\n\ntwo_points = [(pd.Timestamp('2022-11-25 01:15:00'), 0.08124),\n (pd.Timestamp('2022-11-25 02:15:00'), 0.08041)]\n\npoint1 = two_points[0]\npoint2 = two_points[1]\n\ntime1 = point1[0]\ntime2 = point2[0]\n\nprice1 = point1[1]\nprice2 = point2[1]\n\nprice_change = price2 - price1\n\nprint('price change: %10.6f' % price_change)\n\ntime_change = time2 - time1\n\nprint('time change : Timedelta('+str(time_change)+')')\n\nprint()\nprint('time change in seconds: %6.1f' % time_change.seconds)\n\ntime_change_in_hours = time_change.seconds/(60*60)\n\nprint('time change in hours: %10.4f' % time_change_in_hours)\n\nslope = price_change / time_change_in_hours\n\nprint()\nprint('the slope is %10.6f' % slope,'dollars per hour')\n\n\nThe output from the above code:\nprice change: -0.000830\ntime change : Timedelta(0 days 01:00:00)\n\ntime change in seconds: 3600.0\ntime change in hours: 1.0000\n\nthe slope is -0.000830 dollars per hour\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"mplfinance",
"python",
"quantitative_finance"
] |
stackoverflow_0074568051_matplotlib_mplfinance_python_quantitative_finance.txt
|