Q:
Print 1..N² in NxN matrix, starting at bottom-right and zig-zag
Given an input n, I want to print n lines of n numbers each, such that the numbers 1 through n² are displayed in a zig-zag pattern, starting with 1 at the bottom-right corner of the output matrix, 2 at the end of the next-to-last row, and so on.
Examples:
Given Input 3.
Print:
9 4 3
8 5 2
7 6 1
Given Input 1.
Print:
1
Given Input 4.
Print:
13 12 5 4
14 11 6 3
15 10 7 2
16 9 8 1
Attempt
n = int(input("Enter dimensions of matrix :"))
m = n
x = 1
columns = []
for row in range(n):
    inner_column = []
    for col in range(m):
        inner_column.append(x)
        x = x + 1
    columns.append(inner_column)
for inner_column in columns:
    print(' '.join(map(str, inner_column)))
I've tried something like this, but it prints out the array incorrectly. Any ideas?
A:
Your code explicitly performs x = 1 and then x = x + 1 in a loop. Since you need the first column in reverse order, and there are n*n numbers to output, the top-left value should instead be x = n * n, and down the first column it should decrease with x = x - 1. The next column should then be filled from bottom to top, the one after that from top to bottom, and so on.
I would suggest making an iterator that visits rows in that zig-zag manner: 0, 1, 2, ..., n - 1, then n - 1, n - 2, ..., 0, and back from the start. With that iterator you know exactly which row the next x value should be appended to:
# Helper function to generate row numbers in zig-zag order, for as
# long as needed.
def zigzag(n):
    if n % 2:
        yield from range(n)
    while True:
        yield from range(n - 1, -1, -1)
        yield from range(n)

n = int(input("Enter dimensions of matrix :"))
matrix = [[] for _ in range(n)]
visit = zigzag(n)
for x in range(n*n, 0, -1):
    matrix[next(visit)].append(x)
Then print it:
for row in matrix:
    print(' '.join(map(str, row)))
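For comparison, here is a minimal sketch of the column-wise idea described at the top of this answer (fill each column with decreasing numbers, reverse every other one, then transpose); the variable names are illustrative, not from the original:
n = int(input("Enter dimensions of matrix :"))
cols = []
x = n * n
for c in range(n):
    col = list(range(x, x - n, -1))  # e.g. [9, 8, 7] for the first column when n = 3
    if c % 2:                        # every other column is filled bottom-up
        col.reverse()
    cols.append(col)
    x -= n
for row in zip(*cols):               # transpose the columns into printable rows
    print(' '.join(map(str, row)))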
Q:
Image Zoom Using SciPy - wrong dimensions
The code from the answer to this question produces an error for some values of the zoom factor.
As mentioned in comments by @kg_sYy, "The rounding in int(np.round(h * zoom_factor)) seems to sometimes cause the resulting image to be 1 pixel smaller than target. The calculation then gets -1 as diff and you get image pixel size 1 for output. Changing to np.ceil() instead of np.round() seems to fix it."
However, changing to np.ceil() does not fix the error.
As an example, zoom_factor = 1.1317 and image shape (331, 331, 3) result in zoomed_img being one pixel in size.
What other rounding function could fix the problem and why does it occur?
A:
Why it becomes a single pixel
The code you linked in the question makes the assumption that:
out might still be slightly larger than img due to rounding, so trim off any extra pixels at the edges
and then proceeds to trim without checking anything. The zoomed image becomes 1 pixel whenever out is actually smaller than img (due to bad rounding, as the comment mentions). Then the following calculations fail:
trim_top = ((out.shape[0] - h) // 2)
trim_left = ((out.shape[1] - w) // 2)
Since the new image is actually smaller than (h, w), trim_top and trim_left end up negative (-1, empirically).
So the final trim is actually:
out = out[-1: h - 1, -1: w - 1]
This is nothing but the bottom-right pixel at [-1, -1].
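A tiny illustration of that collapse (a made-up sketch, assuming out came back one pixel smaller than the (h, w) target):
import numpy as np

h = w = 331
out = np.zeros((330, 330))              # pretend zoom returned one pixel less than the target
print(out[-1: h - 1, -1: w - 1].shape)  # (1, 1): only the bottom-right pixel survives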
How to fix
Ensure that out is equal to or larger than img. Using ceil() should handle this. (See if you can find any image size and zoom factor for which out is still smaller than img.)
Check whether out is actually larger than img before trying to trim it.
Handle any rounding difference when doing the trim.
def clipped_zoom(img, zoom_factor, **kwargs):
    h, w = img.shape[:2]

    zoom_tuple = (zoom_factor,) * 2 + (1,) * (img.ndim - 2)

    # Zooming out
    if zoom_factor < 1:

        # Bounding box of the zoomed-out image within the output array
        zh = int(np.round(h * zoom_factor))
        zw = int(np.round(w * zoom_factor))
        top = (h - zh) // 2
        left = (w - zw) // 2

        # Zero-padding
        out = np.zeros_like(img)
        out[top:top+zh, left:left+zw] = zoom(img, zoom_tuple, **kwargs)

    # Zooming in
    elif zoom_factor > 1:

        # Bounding box of the zoomed-in region within the input array
        zh = int(np.ceil(h / zoom_factor))
        zw = int(np.ceil(w / zoom_factor))

        top = (h - zh) // 2
        left = (w - zw) // 2

        out = zoom(img[top: top + zh, left: left + zw], zoom_tuple, **kwargs)

        # >>> Check out against img before trying to trim
        # >>> trim safely, accounting for rounding

        if out.shape[0] > h:
            h_diff = out.shape[0] - h
            trim_top = h_diff // 2
            trim_bottom = h_diff - trim_top
            out = out[trim_top: out.shape[0] - trim_bottom, :]

        if out.shape[1] > w:
            w_diff = out.shape[1] - w
            trim_left = w_diff // 2
            trim_right = w_diff - trim_left
            out = out[:, trim_left: out.shape[1] - trim_right]

    # If zoom_factor == 1, just return the input array
    else:
        out = img
    return out
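To sanity-check the fixed function against the failing case from the question, something like the following should now preserve the shape (a sketch; it assumes numpy and scipy.ndimage.zoom are imported, as the code above already requires):
import numpy as np
from scipy.ndimage import zoom  # the `zoom` used inside clipped_zoom

img = np.random.rand(331, 331, 3)
out = clipped_zoom(img, 1.1317)
print(out.shape)  # (331, 331, 3) instead of a single pixel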
Q:
Print the last longest string PYTHON
I'm currently facing the problem of not being able to print the last longest string.
Strings example:
banica
pizza
kiufte
The first and the third are the same length, but I want the last of the longest strings.
def longest(list1):
    longest_list = max(len(elem) for elem in list1)
    return longest_list

somelist = []
while True:
    s = input()
    if s == "END":
        break
    somelist.append(s)
longest_string = max(somelist, key=len)
print(longest_string)
A:
I don't know exactly what you are trying to achieve, but since
longest_string = max(somelist, key=len)
gives you the first element with the maximum length, you can just reverse the list and get the last one:
longest_string = max(somelist[::-1], key=len)
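For the sample strings from the question, this indeed returns the later of the two six-letter entries (a quick check with a hard-coded list):
somelist = ["banica", "pizza", "kiufte"]
print(max(somelist[::-1], key=len))  # kiufte (the last longest), not banica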
A:
This will work as you want:
def longest(list1):
    longest_list = max(len(elem) for elem in list1)
    return longest_list

somelist = []
while True:
    s = input()
    if s == "END":
        break
    somelist.append(s)
longest_string = max(somelist[::-1], key=len)
print(longest_string)
A:
To employ your longest() function:
longest_length = longest(somelist)
longest_string = [s for s in somelist if len(s) == longest_length][-1]
(Index [-1] extracts the last element.)
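Alternatively, a single max call can keep the last of the ties by using the index as a tie-breaker (a sketch not taken from the question, shown with a hard-coded list):
words = ["banica", "pizza", "kiufte"]
# max keeps the first maximum, so add the index to the key to prefer later elements
print(max(enumerate(words), key=lambda t: (len(t[1]), t[0]))[1])  # kiufte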
Q:
Not able to locate the send files button
I am trying to locate a button that uploads a file and get the output result by clicking the button on the page itself. I know how to upload a file with send_keys.
The website is https://huggingface.co/spaces/vaibhavsharda/semantic_clustering
My code is
import csv
import time
from selenium import webdriver
import chromedriver_autoinstaller
import datetime
from bs4 import BeautifulSoup
from selenium.webdriver import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
chromedriver_autoinstaller.install() # Check if the current version of chromedriver exists
# and if it doesn't exist, download it automatically,
# then add chromedriver to path
driver = webdriver.Chrome()
count = 0
entire_data = []
driver.get("https://huggingface.co/spaces/vaibhavsharda/semantic_clustering")
driver.maximize_window()
time.sleep(10)
s = driver.find_element(By.CSS_SELECTOR,'.exg6vvm15 .edgvbvh9') # the error is here.
s.send_keys("small_test.txt")
I am trying to locate the element with Selenium. I don't know whether the element doesn't load or something else is the problem; I just want to locate the "Browse Files" button. Feel free to ask me anything.
A:
The element you are trying to click is inside an iframe, so you first need to switch into the iframe in order to access it.
The following code works:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 5)
url = "https://huggingface.co/spaces/vaibhavsharda/semantic_clustering"
driver.get(url)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe[title]")))
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[@kind='primary'][not(@disabled)]"))).click()
When finished don't forget to switch to the default content with:
driver.switch_to.default_content()
UPD
Uploading a file with Selenium is done by sending the file path to a special element. This is not the element you click as a user in the GUI to upload files. The element actually receiving uploaded files normally matches this XPath: //input[@type='file']
This is the fully working code; I tried it on my PC uploading a text file.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 5)
url = "https://huggingface.co/spaces/vaibhavsharda/semantic_clustering"
driver.get(url)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe[title]")))
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input[type='file']"))).send_keys("C:/project_name/.gitignore")
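For reference, a condensed sketch that combines the question's chromedriver_autoinstaller setup with the approach above (small_test.txt is the file name from the question; send_keys generally needs an absolute path):
import os
import chromedriver_autoinstaller
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

chromedriver_autoinstaller.install()
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get("https://huggingface.co/spaces/vaibhavsharda/semantic_clustering")
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe[title]")))
upload = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input[type='file']")))
upload.send_keys(os.path.abspath("small_test.txt"))  # absolute path to the local file
driver.switch_to.default_content()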
Q:
How to web scrape this page and turn it into a csv file?
My name is João, I'm a law student from Brazil and I'm new to this. I have been trying to web scrape this page for a week to help me with my undergraduate thesis and other researchers.
I want to make a csv file with all the results from a search in a court database (this link). As you can see at the link, there are 404 results (processo) divided over 41 pages. Each result has its own html page with its information (as in a marketplace).
The result html is divided into two main tables. The first one holds the result's general information and will probably have the same structure in all results. The second table contains the result's files (decisions in an administrative process), which may vary in number and may even include files with the same name but different dates. From this second table I just need the link to the oldest "relatório/voto" and its date, and the link to the oldest "acórdão" and its date.
The head of the csv file should look like the following image, and each result should be one line.
I'm working with Python on Google Colab and I've tried many ways to scrape, but none worked well. My most complete approach was adapting a product scraping tutorial: video and corresponding code on GitHub.
My adaptation does not work in Colab; it results in neither an error message nor a csv file. In the following code, I identified some problems in the adaptation by comparing the pages with the lesson; they are:
While extracting the result links from one of the 41 pages, I believe I should create a list of the extracted result links, but it extracted the text too and I'm not sure how to correct it.
While trying to extract the data from the result html, I fail. Whenever I tried to create a list with these, it only returned one result.
Beyond the tutorial, I would also like to extract data from the second table in the result html: the link to the oldest "relatório/voto" and its date, and the link to the oldest "acórdão" and its date. I'm not sure how and where in the code I should do that.
ADAPTED CODE
from requests_html import HTMLSession
import csv

s = HTMLSession()

# STEP 01: take the result links
def get_results_links(page):
    url = f"https://www.tce.sp.gov.br/jurisprudencia/pesquisar?txtTdPalvs=munic%C3%ADpio+pessoal+37&txtExp=temporari&txtQqUma=admiss%C3%A3o+contrata%C3%A7%C3%A3o&txtNenhPalvs=&txtNumIni=&txtNumFim=&tipoBuscaTxt=Documento&_tipoBuscaTxt=on&quantTrechos=1&processo=&exercicio=&dataAutuacaoInicio=&dataAutuacaoFim=&dataPubInicio=01%2F01%2F2021&dataPubFim=31%2F12%2F2021&_relator=1&_auditor=1&_materia=1&tipoDocumento=2&_tipoDocumento=1&acao=Executa&offset={page}"
    links = []
    r = s.get(url)
    results = r.html.find('td.small a')
    for item in results:
        links.append(item.find('a', first=True).attrs['href'])  # Problem 01: I believe it should create a list of the result links extracted from the page, but it extracted the text too.
    return links

# STEP 02: extracting relevant information from the result html extracted before
def parse_result(url):
    r = s.get(url)
    numero = r.html.find('td.small', first=True).text.strip()
    data_autuacao = r.html.find('td.small', first=True).text.strip()
    try:
        parte_1 = r.html.find('td.small', first=True).text.strip()
    except AttributeError as err:
        sku = 'Não há'
    try:
        parte_2 = r.html.find('td.small', first=True).text.strip()
    except AttributeError as err:
        parte_2 = 'Não há'
    materia = r.html.find('td.small', first=True).text.strip()
    exercicio = r.html.find('td.small', first=True).text.strip()
    objeto = r.html.find('td.small', first=True).text.strip()
    relator = r.html.find('td.small', first=True).text.strip()
    # Problem 02

    # STEP 03: creating a dict based on the objects created before
    product = {
        'Nº do Processo': numero,
        "Link do Processo": r,
        'Data de Autuação': data_autuacao,
        'Parte 1': parte_1,
        'Parte 2': parte_2,
        'Exercício': exercicio,
        'Matéria': materia,
        'Objeto': objeto,
        'Relator': relator
        #'Relatório/Voto' :
        #'Data Relatório/Voto' :
        #'Acórdão' :
        #'Data Acórdão' :
    }  # Problem 03
    return product

# STEP 04: saving as csv
def save_csv(final):
    keys = final[0].keys()
    with open('products.csv', 'w') as f:
        dict_writer = csv.DictWriter(f, keys)
        dict_writer.writeheader()
        dict_writer.writerows(final)

# STEP 05: main - joining the functions
def main():
    final = []
    for x in range(0, 410, 10):
        print('Getting Page ', x)
        urls = get_results_links(x)
        for url in urls:
            final.append(parse_result(url))
    print('Total: ', len(final))
    save_csv(final)
Thank you, @shelter, for your help so far. I have tried to make the question more specific.
A:
There are better (albeit more complex) ways of obtaining that information, like scrapy or an async solution. Nonetheless, here is one way of getting the information you're after, as well as saving it into a csv file. I only scraped the first 2 pages (20 results); you can increase the range if you wish:
from bs4 import BeautifulSoup as bs
import requests
from tqdm.notebook import tqdm
import pandas as pd

pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
}

s = requests.Session()
s.headers.update(headers)

big_list = []
detailed_list = []
for x in tqdm(range(0, 20, 10)):
    url = f'https://www.tce.sp.gov.br/jurisprudencia/pesquisar?txtTdPalvs=munic%C3%ADpio+pessoal+37&txtExp=temporari&txtQqUma=admiss%C3%A3o+contrata%C3%A7%C3%A3o&txtNenhPalvs=&txtNumIni=&txtNumFim=&tipoBuscaTxt=Documento&_tipoBuscaTxt=on&quantTrechos=1&processo=&exercicio=&dataAutuacaoInicio=&dataAutuacaoFim=&dataPubInicio=01%2F01%2F2021&dataPubFim=31%2F12%2F2021&_relator=1&_auditor=1&_materia=1&tipoDocumento=2&_tipoDocumento=1&acao=Executa&offset={x}'
    r = s.get(url)
    urls = bs(r.text, 'html.parser').select('tr[class="borda-superior"] td:nth-of-type(2) a')
    big_list.extend(['https://www.tce.sp.gov.br/jurisprudencia/' + x.get('href') for x in urls])
for x in tqdm(big_list):
    r = s.get(x)
    soup = bs(r.text, 'html.parser')
    n_proceso = soup.select_one('td:-soup-contains("N° Processo:")').find_next('td').text if soup.select('td:-soup-contains("N° Processo:")') else None
    link_proceso = x
    autoacao = soup.select_one('td:-soup-contains("Autuação:")').find_next('td').text if soup.select('td:-soup-contains("Autuação:")') else None
    parte_1 = soup.select_one('td:-soup-contains("Parte 1:")').find_next('td').text if soup.select('td:-soup-contains("Parte 1:")') else None
    parte_2 = soup.select_one('td:-soup-contains("Parte 2:")').find_next('td').text if soup.select('td:-soup-contains("Parte 2:")') else None
    materia = soup.select_one('td:-soup-contains("Matéria:")').find_next('td').text if soup.select('td:-soup-contains("Matéria:")') else None
    exercicio = soup.select_one('td:-soup-contains("Exercício:")').find_next('td').text if soup.select('td:-soup-contains("Exercício:")') else None
    objeto = soup.select_one('td:-soup-contains("Objeto:")').find_next('td').text if soup.select('td:-soup-contains("Objeto:")') else None
    relator = soup.select_one('td:-soup-contains("Relator:")').find_next('td').text if soup.select('td:-soup-contains("Relator:")') else None
    relatorio_voto = soup.select_one('td:-soup-contains("Relatório / Voto ")').find_previous('a').get('href') if soup.select('td:-soup-contains("Relatório / Voto")') else None
    data_relatorio = soup.select_one('td:-soup-contains("Relatório / Voto ")').find_previous('td').text if soup.select('td:-soup-contains("Relatório / Voto")') else None
    acordao = soup.select_one('td:-soup-contains("Acórdão ")').find_previous('a').get('href') if soup.select('td:-soup-contains("Acórdão ")') else None
    data_acordao = soup.select_one('td:-soup-contains("Acórdão ")').find_previous('td').text if soup.select('td:-soup-contains("Acórdão ")') else None
    detailed_list.append((n_proceso, link_proceso, autoacao, parte_1, parte_2,
                          materia, exercicio, objeto, relator, relatorio_voto,
                          data_relatorio, acordao, data_acordao))
detailed_df = pd.DataFrame(detailed_list, columns=['n_proceso', 'link_proceso', 'autoacao', 'parte_1',
                                                   'parte_2', 'materia', 'exercicio', 'objeto', 'relator',
                                                   'relatorio_voto', 'data_relatorio', 'acordao', 'data_acordao'])
display(detailed_df)
detailed_df.to_csv('legal_br_stuffs.csv')
Result in terminal:
100%
2/2 [00:04<00:00, 1.78s/it]
100%
20/20 [00:07<00:00, 2.56it/s]
n_proceso link_proceso autoacao parte_1 parte_2 materia exercicio objeto relator relatorio_voto data_relatorio acordao data_acordao
0 18955/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=18955/989/20&offset=0 31/07/2020 ELVES SCIARRETTA CARREIRA PREFEITURA MUNICIPAL DE BRODOWSKI RECURSO ORDINARIO 2020 Recurso Ordinário Protocolado em anexo. EDGARD CAMARGO RODRIGUES https://www2.tce.sp.gov.br/arqs_juri/pdf/801385.pdf 20/01/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/801414.pdf 20/01/2021
1 13614/989/18 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=13614/989/18&offset=0 11/06/2018 PREFEITURA MUNICIPAL DE SERRA NEGRA RECURSO ORDINARIO 2014 Recurso Ordinário ANTONIO ROQUE CITADINI https://www2.tce.sp.gov.br/arqs_juri/pdf/797986.pdf 05/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/800941.pdf 05/02/2021
2 6269/989/19 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=6269/989/19&offset=0 19/02/2019 PREFEITURA MUNICIPAL DE TREMEMBE ADMISSAO DE PESSOAL - CONCURSO PROCESSO SELETIVO 2018 INTERESSADO: Rafael Varejão Munhos e outros. EDITAL Nº: 01/2017. CONCURSO PÚBLICO: 01/2017. None https://www2.tce.sp.gov.br/arqs_juri/pdf/804240.pdf 06/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804258.pdf 06/02/2021
3 14011/989/19 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=14011/989/19&offset=0 11/06/2019 RUBENS EDUARDO DE SOUZA AROUCA PREFEITURA MUNICIPAL DE TREMEMBE RECURSO ORDINARIO 2019 Recurso Ordinário RENATO MARTINS COSTA https://www2.tce.sp.gov.br/arqs_juri/pdf/804240.pdf 06/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804258.pdf 06/02/2021
4 14082/989/19 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=14082/989/19&offset=0 12/06/2019 PREFEITURA MUNICIPAL DE TREMEMBE RECURSO ORDINARIO 2019 Recurso Ordinário nos autos do TC n° 6269.989.19 - Admissão de pessoal - Concurso Público RENATO MARTINS COSTA https://www2.tce.sp.gov.br/arqs_juri/pdf/804240.pdf 06/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804258.pdf 06/02/2021
5 14238/989/19 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=14238/989/19&offset=0 13/06/2019 MARCELO VAQUELI PREFEITURA MUNICIPAL DE TREMEMBE RECURSO ORDINARIO 2019 Recurso Ordinário RENATO MARTINS COSTA https://www2.tce.sp.gov.br/arqs_juri/pdf/804240.pdf 06/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804258.pdf 06/02/2021
6 14141/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=14141/989/20&offset=0 28/05/2020 PREFEITURA MUNICIPAL DE BIRIGUI CRISTIANO SALMEIRAO RECURSO ORDINARIO 2018 Recurso Ordinário RENATO MARTINS COSTA https://www2.tce.sp.gov.br/arqs_juri/pdf/804259.pdf 06/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804262.pdf 06/02/2021
7 15371/989/19 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=15371/989/19&offset=0 02/07/2019 PREFEITURA MUNICIPAL DE BIRIGUI ADMISSAO DE PESSOAL - TEMPO DETERMINADO 2018 INTERESSADOS: ADRIANA PEREIRA CRISTAL E OUTROS. PROCESSOS SELETIVOS/EDITAIS Nºs:002/2016, 004/2017, 05/2017, 06/2017,001/2018 e 002/2018. LEIS AUTORIZADORAS: Nº 5134/2009 e Nº 3946/2001. None https://www2.tce.sp.gov.br/arqs_juri/pdf/804259.pdf 06/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804262.pdf 06/02/2021
8 15388/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=15388/989/20&offset=0 04/06/2020 MARIA ANGELICA MIRANDA FERNANDES RECURSO ORDINARIO 2018 Recurso Ordinário RENATO MARTINS COSTA https://www2.tce.sp.gov.br/arqs_juri/pdf/804259.pdf 06/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804262.pdf 06/02/2021
9 12911/989/16 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=12911/989/16&offset=0 20/07/2016 MARCELO CANDIDO DE SOUZA PREFEITURA MUNICIPAL DE SUZANO RECURSO ORDINARIO 2016 Recurso Ordinário Ref. Atos de Admissão de Pessoal - Exercício 2012. objetivando o preenchimento temporário dos cargos de Médico Cardiologista 20h, Fotógrafo, Médico Clínico Geral 20lt, Médico Gineco DIMAS RAMALHO https://www2.tce.sp.gov.br/arqs_juri/pdf/814599.pdf 27/04/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/814741.pdf 27/04/2021
10 1735/002/11 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=1735/002/11&offset=10 22/11/2011 FUNDACAO DE APOIO AOS HOSP VETERINARIOS DA UNESP ADMISSAO DE PESSOAL - TEMPO DETERMINADO 2010 ADMISSAO DE PESSOAL POR TEMPO DETERMINADO COM CONCURSO/PROCESSO SELETIVO ANTONIO ROQUE CITADINI https://www2.tce.sp.gov.br/arqs_juri/pdf/800893.pdf 21/01/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/800969.pdf 21/01/2021
11 23494/989/18 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=23494/989/18&offset=10 20/11/2018 HAMILTON LUIS FOZ RECURSO ORDINARIO 2018 Recurso Ordinário DIMAS RAMALHO https://www2.tce.sp.gov.br/arqs_juri/pdf/816918.pdf 13/05/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/817317.pdf 13/05/2021
12 24496/989/19 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=24496/989/19&offset=10 25/11/2019 PREFEITURA MUNICIPAL DE LORENA RECURSO ORDINARIO 2017 Recurso Ordinário em face de sentença proferida nos autos de TC 00006265.989.19-4 DIMAS RAMALHO https://www2.tce.sp.gov.br/arqs_juri/pdf/814660.pdf 27/04/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/814805.pdf 27/04/2021
13 17110/989/18 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=17110/989/18&offset=10 03/08/2018 JORGE ABISSAMRA PREFEITURA MUNICIPAL DE FERRAZ DE VASCONCELOS RECURSO ORDINARIO 2018 Recurso Ordinário DIMAS RAMALHO https://www2.tce.sp.gov.br/arqs_juri/pdf/814633.pdf 27/04/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/814774.pdf 27/04/2021
14 24043/989/19 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=24043/989/19&offset=10 18/11/2019 PREFEITURA MUNICIPAL DE IRAPURU RECURSO ORDINARIO 2018 Recurso ordinário ROBSON MARINHO https://www2.tce.sp.gov.br/arqs_juri/pdf/817014.pdf 12/05/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/817269.pdf 12/05/2021
15 2515/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=2515/989/20&offset=10 03/02/2020 PREFEITURA MUNICIPAL DE IPORANGA RECURSO ORDINARIO 2020 Recurso interposto em face da sentença proferida nos autos do TC 15791/989/19-7. ROBSON MARINHO https://www2.tce.sp.gov.br/arqs_juri/pdf/817001.pdf 12/05/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/817267.pdf 12/05/2021
16 1891/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=1891/989/20&offset=10 24/01/2020 PREFEITURA MUNICIPAL DE IPORANGA RECURSO ORDINARIO 2020 RECURSO ORDINÁRIO DIMAS RAMALHO https://www2.tce.sp.gov.br/arqs_juri/pdf/802484.pdf 03/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/802620.pdf 03/02/2021
17 15026/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=15026/989/20&offset=10 02/06/2020 DIXON RONAN CARVALHO PREFEITURA MUNICIPAL DE PAULINIA RECURSO ORDINARIO 2018 RECURSO ORDINÁRIO ANTONIO ROQUE CITADINI https://www2.tce.sp.gov.br/arqs_juri/pdf/802648.pdf 05/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/803361.pdf 05/02/2021
18 9070/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=9070/989/20&offset=10 09/03/2020 PREFEITURA MUNICIPAL DE FLORIDA PAULISTA RECURSO ORDINARIO 2017 Recurso Ordinário ROBSON MARINHO https://www2.tce.sp.gov.br/arqs_juri/pdf/817006.pdf 12/05/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/817296.pdf 12/05/2021
19 21543/989/20 https://www.tce.sp.gov.br/jurisprudencia/exibir?proc=21543/989/20&offset=10 11/09/2020 PREFEITURA MUNICIPAL DE JERIQUARA RECURSO ORDINARIO 2020 RECURSO ORDINÁRIO SIDNEY ESTANISLAU BERALDO https://www2.tce.sp.gov.br/arqs_juri/pdf/802997.pdf 13/02/2021 https://www2.tce.sp.gov.br/arqs_juri/pdf/804511.pdf 13/02/2021
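One more detail about the adapted code in the question: main() is defined but never called, which by itself would explain getting neither an error message nor a csv file. A standard entry-point guard at the bottom makes the script actually run:
if __name__ == '__main__':
    main()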
If you will need coding in your career, I strongly suggest building some foundational knowledge first, and only then trying to write or adapt other people's code.
Q:
Huggingface: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
I am confused about my fine-tuned model implemented with a Huggingface model. I am able to train the model, but when I try to predict with it, I always get this error. The most similar problem is this. My transformers version is 4.24.0, but that didn't seem to help me. I also tried this. Below is my code snippet.
from transformers import AutoTokenizer
from transformers import DataCollatorForSeq2Seq
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
from transformers import pipeline
from tqdm import tqdm
from datasets import Dataset
import pandas as pd
import numpy as np
import pyarrow as pa
import gc
import torch as t
import pickle

PATH = './datas/Batch_answers - train_data (no-blank).csv'
EPOCH = 1
LEARNING_RATE = 2e-5
TRAIN_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 16
DEVICE = 'cuda' if t.cuda.is_available() else 'cpu'

df = pd.read_csv(PATH)
df = df.drop(labels='s', axis=1)
df = df.iloc[:, 1:5]
df = df.to_numpy()

qData = []
for i in tqdm(range(len(df))):
    argument = df[i][0][1:-1]
    response = df[i][1][1:-1]
    qprime = df[i][2][1:-1]
    qData.append({'statement': argument + '\n' + response, 'argument_sentence_summary': qprime})

qtable = pa.Table.from_pylist(qData)
qDataset = Dataset(qtable)
qDataset = qDataset.train_test_split(train_size=0.8)

qModel = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
qTokenizer = AutoTokenizer.from_pretrained("t5-small")
qData_collator = DataCollatorForSeq2Seq(tokenizer=qTokenizer, model=qModel)

def Qpreprocessing(data):
    model_input = qTokenizer(data['statement'], max_length=250, truncation=True)
    labels = qTokenizer(text_target=data['argument_sentence_summary'], max_length=75, truncation=True)
    model_input['labels'] = labels['input_ids']
    return model_input

qToken = qDataset.map(Qpreprocessing, batched=True)

qTraining_args = Seq2SeqTrainingArguments(
    output_dir="./result",
    evaluation_strategy="epoch",
    learning_rate=LEARNING_RATE,
    per_device_train_batch_size=TRAIN_BATCH_SIZE,
    per_device_eval_batch_size=EVAL_BATCH_SIZE,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=EPOCH,
    fp16=True,
)

qTrainer = Seq2SeqTrainer(
    model=qModel,
    args=qTraining_args,
    train_dataset=qToken['train'],
    eval_dataset=qToken['test'],
    tokenizer=qTokenizer,
    data_collator=qData_collator
)

old_collator = qTrainer.data_collator
qTrainer.data_collator = lambda data: dict(old_collator(data))
qTrainer.train()

qp = pipeline('summarization', model=qModel, tokenizer=qTokenizer)
qp(qDataset['test'][0]['statement'])  # breaks on this line
The full traceback:
RuntimeError Traceback (most recent call last)
Cell In [20], line 3
1 qp = pipeline('summarization', model=qModel, tokenizer=qTokenizer)
2 # temp = t.tensor(qDataset['test'][0]['statement']).to(DEVICE)
----> 3 qp(qDataset['train'][0]['statement'])
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\text2text_generation.py:250, in SummarizationPipeline.__call__(self, *args, **kwargs)
226 def __call__(self, *args, **kwargs):
227 r"""
228 Summarize the text(s) given as inputs.
229
(...)
248 ids of the summary.
249 """
--> 250 return super().__call__(*args, **kwargs)
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\text2text_generation.py:150, in Text2TextGenerationPipeline.__call__(self, *args, **kwargs)
121 def __call__(self, *args, **kwargs):
122 r"""
123 Generate the output text(s) using text(s) given as inputs.
124
(...)
147 ids of the generated text.
148 """
--> 150 result = super().__call__(*args, **kwargs)
151 if (
152 isinstance(args[0], list)
153 and all(isinstance(el, str) for el in args[0])
154 and all(len(res) == 1 for res in result)
155 ):
156 return [res[0] for res in result]
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\base.py:1074, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1073 else:
-> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\base.py:1081, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1079 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
1080 model_inputs = self.preprocess(inputs, **preprocess_params)
-> 1081 model_outputs = self.forward(model_inputs, **forward_params)
1082 outputs = self.postprocess(model_outputs, **postprocess_params)
1083 return outputs
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\base.py:990, in Pipeline.forward(self, model_inputs, **forward_params)
988 with inference_context():
989 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
--> 990 model_outputs = self._forward(model_inputs, **forward_params)
991 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
992 else:
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\text2text_generation.py:172, in Text2TextGenerationPipeline._forward(self, model_inputs, **generate_kwargs)
170 generate_kwargs["max_length"] = generate_kwargs.get("max_length", self.model.config.max_length)
171 self.check_inputs(input_length, generate_kwargs["min_length"], generate_kwargs["max_length"])
--> 172 output_ids = self.model.generate(**model_inputs, **generate_kwargs)
173 out_b = output_ids.shape[0]
174 if self.framework == "pt":
File ~\anaconda3\envs\ame\lib\site-packages\torch\autograd\grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File ~\anaconda3\envs\ame\lib\site-packages\transformers\generation_utils.py:1339, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, penalty_alpha, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, suppress_tokens, begin_suppress_tokens, forced_decoder_ids, **model_kwargs)
1331 logger.warning(
1332 "A decoder-only architecture is being used, but right-padding was detected! For correct "
1333 "generation results, please set `padding_side='left'` when initializing the tokenizer."
1334 )
1336 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
1337 # if model is encoder decoder encoder_outputs are created
1338 # and added to `model_kwargs`
-> 1339 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
1340 inputs_tensor, model_kwargs, model_input_name
1341 )
1343 # 4. Prepare `input_ids` which will be used for auto-regressive generation
1344 if self.config.is_encoder_decoder:
File ~\anaconda3\envs\ame\lib\site-packages\transformers\generation_utils.py:583, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
581 encoder_kwargs["return_dict"] = True
582 encoder_kwargs[model_input_name] = inputs_tensor
--> 583 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
585 return model_kwargs
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\envs\ame\lib\site-packages\transformers\models\t5\modeling_t5.py:941, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
939 if inputs_embeds is None:
940 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings"
--> 941 inputs_embeds = self.embed_tokens(input_ids)
943 batch_size, seq_length = input_shape
945 # required mask seq length can be calculated via length of past
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\modules\sparse.py:158, in Embedding.forward(self, input)
157 def forward(self, input: Tensor) -> Tensor:
--> 158 return F.embedding(
159 input, self.weight, self.padding_idx, self.max_norm,
160 self.norm_type, self.scale_grad_by_freq, self.sparse)
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\functional.py:2199, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2193 # Note [embedding_renorm set_grad_enabled]
2194 # XXX: equivalent to
2195 # with torch.no_grad():
2196 # torch.embedding_renorm_
2197 # remove once script supports set_grad_enabled
2198 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2199 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
Does that mean I need another way to run predictions on my test dataset instead of using a pipeline? Big thanks for any help.
A:
I got the idea from the comments. The way I solved this is that I can still train my qModel on 'cuda', but if I want to run predictions, I need to move my qModel to 'cpu' first. So I modified my last few lines of code as below:
qTrainer.train()
qModel = qModel.to('cpu') #put my model to cpu
qp = pipeline('summarization', model=qModel, tokenizer=qTokenizer)
print(qp(qDataset['test'][0]['statement']))
And it works.
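An alternative (my own suggestion, not part of the original answer) is to keep the model on the GPU and instead tell the pipeline which device to run on via its device argument, so inference also happens on cuda:0:
qp = pipeline('summarization', model=qModel, tokenizer=qTokenizer, device=0)  # device=0 selects cuda:0
print(qp(qDataset['test'][0]['statement']))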
|
Huggingface: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
|
I am confused about my fine-tuned model implemented with a Huggingface model. I am able to train my model, but whenever I want to run predictions with it, I always get this error. The most similar problem is this. My transformers version is 4.24.0, but that didn't seem to help me. I also tried this. Below is my code snippet.
from transformers import AutoTokenizer
from transformers import DataCollatorForSeq2Seq
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
from transformers import pipeline
from tqdm import tqdm
from datasets import Dataset
import pandas as pd
import numpy as np
import pyarrow as pa
import gc
import torch as t
import pickle
PATH = './datas/Batch_answers - train_data (no-blank).csv'
EPOCH = 1
LEARNING_RATE = 2e-5
TRAIN_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 16
DEVICE = 'cuda' if t.cuda.is_available() else 'cpu'
df = pd.read_csv(PATH)
df = df.drop(labels='s', axis=1)
df = df.iloc[:, 1:5]
df = df.to_numpy()
qData = []
for i in tqdm(range(len(df))):
argument = df[i][0][1:-1]
response = df[i][1][1:-1]
qprime = df[i][2][1:-1]
qData.append({'statement':argument+'\n'+response, 'argument_sentence_summary':qprime})
qtable = pa.Table.from_pylist(qData)
qDataset = Dataset(qtable)
qDataset = qDataset.train_test_split(train_size=0.8)
qModel = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
qTokenizer = AutoTokenizer.from_pretrained("t5-small")
qData_collator = DataCollatorForSeq2Seq(tokenizer=qTokenizer, model=qModel)
def Qpreprocessing(data):
model_input = qTokenizer(data['statement'], max_length=250, truncation=True)
labels = qTokenizer(text_target=data['argument_sentence_summary'], max_length=75, truncation=True)
model_input['labels'] = labels['input_ids']
return model_input
qToken = qDataset.map(Qpreprocessing, batched=True)
qTraining_args = Seq2SeqTrainingArguments(
output_dir="./result",
evaluation_strategy="epoch",
learning_rate=LEARNING_RATE,
per_device_train_batch_size=TRAIN_BATCH_SIZE,
per_device_eval_batch_size=EVAL_BATCH_SIZE,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=EPOCH,
fp16=True,
)
qTrainer = Seq2SeqTrainer(
model=qModel,
args=qTraining_args,
train_dataset=qToken['train'],
eval_dataset=qToken['test'],
tokenizer=qTokenizer,
data_collator=qData_collator
)
old_collator = qTrainer.data_collator
qTrainer.data_collator = lambda data: dict(old_collator(data))
qTrainer.train()
qp = pipeline('summarization', model=qModel, tokenizer=qTokenizer)
qp(qDataset['test'][0]['statement']) #break in this line
The full traceback:
RuntimeError Traceback (most recent call last)
Cell In [20], line 3
1 qp = pipeline('summarization', model=qModel, tokenizer=qTokenizer)
2 # temp = t.tensor(qDataset['test'][0]['statement']).to(DEVICE)
----> 3 qp(qDataset['train'][0]['statement'])
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\text2text_generation.py:250, in SummarizationPipeline.__call__(self, *args, **kwargs)
226 def __call__(self, *args, **kwargs):
227 r"""
228 Summarize the text(s) given as inputs.
229
(...)
248 ids of the summary.
249 """
--> 250 return super().__call__(*args, **kwargs)
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\text2text_generation.py:150, in Text2TextGenerationPipeline.__call__(self, *args, **kwargs)
121 def __call__(self, *args, **kwargs):
122 r"""
123 Generate the output text(s) using text(s) given as inputs.
124
(...)
147 ids of the generated text.
148 """
--> 150 result = super().__call__(*args, **kwargs)
151 if (
152 isinstance(args[0], list)
153 and all(isinstance(el, str) for el in args[0])
154 and all(len(res) == 1 for res in result)
155 ):
156 return [res[0] for res in result]
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\base.py:1074, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1073 else:
-> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\base.py:1081, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1079 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
1080 model_inputs = self.preprocess(inputs, **preprocess_params)
-> 1081 model_outputs = self.forward(model_inputs, **forward_params)
1082 outputs = self.postprocess(model_outputs, **postprocess_params)
1083 return outputs
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\base.py:990, in Pipeline.forward(self, model_inputs, **forward_params)
988 with inference_context():
989 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
--> 990 model_outputs = self._forward(model_inputs, **forward_params)
991 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
992 else:
File ~\anaconda3\envs\ame\lib\site-packages\transformers\pipelines\text2text_generation.py:172, in Text2TextGenerationPipeline._forward(self, model_inputs, **generate_kwargs)
170 generate_kwargs["max_length"] = generate_kwargs.get("max_length", self.model.config.max_length)
171 self.check_inputs(input_length, generate_kwargs["min_length"], generate_kwargs["max_length"])
--> 172 output_ids = self.model.generate(**model_inputs, **generate_kwargs)
173 out_b = output_ids.shape[0]
174 if self.framework == "pt":
File ~\anaconda3\envs\ame\lib\site-packages\torch\autograd\grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File ~\anaconda3\envs\ame\lib\site-packages\transformers\generation_utils.py:1339, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, penalty_alpha, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, suppress_tokens, begin_suppress_tokens, forced_decoder_ids, **model_kwargs)
1331 logger.warning(
1332 "A decoder-only architecture is being used, but right-padding was detected! For correct "
1333 "generation results, please set `padding_side='left'` when initializing the tokenizer."
1334 )
1336 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
1337 # if model is encoder decoder encoder_outputs are created
1338 # and added to `model_kwargs`
-> 1339 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
1340 inputs_tensor, model_kwargs, model_input_name
1341 )
1343 # 4. Prepare `input_ids` which will be used for auto-regressive generation
1344 if self.config.is_encoder_decoder:
File ~\anaconda3\envs\ame\lib\site-packages\transformers\generation_utils.py:583, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
581 encoder_kwargs["return_dict"] = True
582 encoder_kwargs[model_input_name] = inputs_tensor
--> 583 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
585 return model_kwargs
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\envs\ame\lib\site-packages\transformers\models\t5\modeling_t5.py:941, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
939 if inputs_embeds is None:
940 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings"
--> 941 inputs_embeds = self.embed_tokens(input_ids)
943 batch_size, seq_length = input_shape
945 # required mask seq length can be calculated via length of past
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\modules\sparse.py:158, in Embedding.forward(self, input)
157 def forward(self, input: Tensor) -> Tensor:
--> 158 return F.embedding(
159 input, self.weight, self.padding_idx, self.max_norm,
160 self.norm_type, self.scale_grad_by_freq, self.sparse)
File ~\anaconda3\envs\ame\lib\site-packages\torch\nn\functional.py:2199, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2193 # Note [embedding_renorm set_grad_enabled]
2194 # XXX: equivalent to
2195 # with torch.no_grad():
2196 # torch.embedding_renorm_
2197 # remove once script supports set_grad_enabled
2198 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2199 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
Does that mean I need another way to run predictions on my test dataset instead of using a pipeline? Big thanks for any help.
|
[
"I do get the idea from the comment. The way I solve this is I can still train my qModel on 'cuda', but if I want to do the prediction, I'll need to put my qModel to 'cpu'. So I modify my last few lines code to below:\nqTrainer.train()\n\nqModel = qModel.to('cpu') #put my model to cpu\n\nqp = pipeline('summarization', model=qModel, tokenizer=qTokenizer)\nprint(qp(qDataset['test'][0]['statement']))\n\nAnd it works.\n"
] |
[
0
] |
[] |
[] |
[
"data_science",
"huggingface_transformers",
"python",
"pytorch"
] |
stackoverflow_0074497166_data_science_huggingface_transformers_python_pytorch.txt
|
Q:
_append_dispatcher() missing 1 required positional argument: 'values'
I'm getting this error; how can I resolve it?
I don't know why I'm getting this error.
A:
The np.append function takes two required arguments: the first is the input array and the second is the values to append. In the problem described, you didn't pass the second argument to np.append.
See the NumPy documentation for details.
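For illustration, a minimal sketch of a correct call (the array contents here are hypothetical):
import numpy as np

arr = np.array([1, 2, 3])
arr = np.append(arr, [4, 5])  # pass both the input array and the values to append
print(arr)  # [1 2 3 4 5]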
|
_append_dispatcher() missing 1 required positional argument: 'values'
|
I'm getting this error; how can I resolve it?
I don't know why I'm getting this error.
|
[
"np.append function gets 2 arguments. first argument is input array and second one is values. In mentioned problem, you didn't pass the second argument to np.append function.\ndetails in site\n"
] |
[
0
] |
[] |
[] |
[
"jupyter_notebook",
"python"
] |
stackoverflow_0074506538_jupyter_notebook_python.txt
|
Q:
Spacy - AttributeError: 'getset_descriptor' object has no attribute 'setdefault'
I am trying to run this main.py file but I have the following error that I don't understand:
Traceback (most recent call last):
File "/Users/tyler/Desktop/Working Folder/trending-stories/main.py", line 4, in <module>
from news_processor import NewsProcessor
File "/Users/tyler/Desktop/Working Folder/trending-stories/news_processor.py", line 2, in <module>
from keywords_extraction import KeywordsExtract
File "/Users/tyler/Desktop/Working Folder/trending-stories/keywords_extraction.py", line 12, in <module>
class KeywordsExtract:
File "/Users/tyler/Desktop/Working Folder/trending-stories/keywords_extraction.py", line 13, in KeywordsExtract
MODEL = spacy.load("en_core_web_sm")
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/__init__.py", line 30, in load
return util.load_model(name, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 164, in load_model
return load_model_from_package(name, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 185, in load_model_from_package
return cls.load(**overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/en_core_web_sm/__init__.py", line 12, in load
return load_model_from_init_py(__file__, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 228, in load_model_from_init_py
return load_model_from_path(data_path, meta, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 211, in load_model_from_path
return nlp.from_disk(model_path, exclude=disable)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/language.py", line 947, in from_disk
util.from_disk(path, deserializers, exclude)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 654, in from_disk
reader(path / key)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/language.py", line 932, in <lambda>
) and _fix_pretrained_vectors_name(self)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/language.py", line 1071, in _fix_pretrained_vectors_name
proc.cfg.setdefault("deprecation_fixes", {})
AttributeError: 'getset_descriptor' object has no attribute 'setdefault'
The keywords_extraction.py function is like this:
class KeywordsExtract:
MODEL = spacy.load("en_core_web_sm")
allow_types = ['PERSON', 'GPE', 'ORG', 'NORP', 'LOC', 'FAC', 'WORK_OF_ART', 'EVENT', 'LAW', 'PRODUCT']
remove_words = ['new', 'time', 'matter', 'source', 'people', 'story', 'reuters story']
remove_entities = ['REUTERS', 'Reuters', 'Thomson Reuters', 'CNBC']
months = [cd.month_name[i] for i in range(1, 13)] + [cd.month_abbr[i] for i in range(1, 13)]
lookups = Lookups()
lemma_keep = ["data"]
lemma_exc = MODEL.vocab.lookups.get_table("lemma_exc")
for w in lemma_keep:
del lemma_exc[MODEL.vocab.strings["noun"]][w]
lookups.add_table("lemma_exc", lemma_exc)
lookups.add_table("lemma_rules", MODEL.vocab.lookups.get_table("lemma_rules"))
lookups.add_table("lemma_index", MODEL.vocab.lookups.get_table("lemma_index"))
lemmatizer = Lemmatizer(lookups)
I tried reinstalling en_core_web_sm but it doesn't work. How can I solve this problem? Could it be because of the version of Python I am using? My Python version is 3.9.2.
A:
Yes, it's because of your Python version.
You can either downgrade your Python to a version supported by spaCy 2.x (3.8 or lower) or upgrade your spaCy version to 3.x or greater.
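For example, upgrading spaCy and re-downloading the model might look like this (a sketch; pin versions as appropriate for your project):
pip install -U spacy
python -m spacy download en_core_web_sm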
|
Spacy - AttributeError: 'getset_descriptor' object has no attribute 'setdefault'
|
I am trying to run this main.py file but I have the following error that I don't understand:
Traceback (most recent call last):
File "/Users/tyler/Desktop/Working Folder/trending-stories/main.py", line 4, in <module>
from news_processor import NewsProcessor
File "/Users/tyler/Desktop/Working Folder/trending-stories/news_processor.py", line 2, in <module>
from keywords_extraction import KeywordsExtract
File "/Users/tyler/Desktop/Working Folder/trending-stories/keywords_extraction.py", line 12, in <module>
class KeywordsExtract:
File "/Users/tyler/Desktop/Working Folder/trending-stories/keywords_extraction.py", line 13, in KeywordsExtract
MODEL = spacy.load("en_core_web_sm")
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/__init__.py", line 30, in load
return util.load_model(name, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 164, in load_model
return load_model_from_package(name, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 185, in load_model_from_package
return cls.load(**overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/en_core_web_sm/__init__.py", line 12, in load
return load_model_from_init_py(__file__, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 228, in load_model_from_init_py
return load_model_from_path(data_path, meta, **overrides)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 211, in load_model_from_path
return nlp.from_disk(model_path, exclude=disable)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/language.py", line 947, in from_disk
util.from_disk(path, deserializers, exclude)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/util.py", line 654, in from_disk
reader(path / key)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/language.py", line 932, in <lambda>
) and _fix_pretrained_vectors_name(self)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/language.py", line 1071, in _fix_pretrained_vectors_name
proc.cfg.setdefault("deprecation_fixes", {})
AttributeError: 'getset_descriptor' object has no attribute 'setdefault'
The keywords_extraction.py function is like this:
class KeywordsExtract:
MODEL = spacy.load("en_core_web_sm")
allow_types = ['PERSON', 'GPE', 'ORG', 'NORP', 'LOC', 'FAC', 'WORK_OF_ART', 'EVENT', 'LAW', 'PRODUCT']
remove_words = ['new', 'time', 'matter', 'source', 'people', 'story', 'reuters story']
remove_entities = ['REUTERS', 'Reuters', 'Thomson Reuters', 'CNBC']
months = [cd.month_name[i] for i in range(1, 13)] + [cd.month_abbr[i] for i in range(1, 13)]
lookups = Lookups()
lemma_keep = ["data"]
lemma_exc = MODEL.vocab.lookups.get_table("lemma_exc")
for w in lemma_keep:
del lemma_exc[MODEL.vocab.strings["noun"]][w]
lookups.add_table("lemma_exc", lemma_exc)
lookups.add_table("lemma_rules", MODEL.vocab.lookups.get_table("lemma_rules"))
lookups.add_table("lemma_index", MODEL.vocab.lookups.get_table("lemma_index"))
lemmatizer = Lemmatizer(lookups)
I tried reinstalling en_core_web_sm but it doesn't work. How can I solve this problem? Could it be because of the version of Python I am using? My Python version is 3.9.2.
|
[
"Yes, It's because of your python version.\nyou can downgrade your python version to lower than 3.6 or upgrade your spaCy version to greater than 3.x.x\n"
] |
[
0
] |
[] |
[] |
[
"machine_learning",
"nlp",
"nltk",
"python",
"spacy"
] |
stackoverflow_0074506503_machine_learning_nlp_nltk_python_spacy.txt
|
Q:
Error installing Streamlit on Python 3.11
I am trying to install Streamlit on Python 3.11 and I keep getting this error (screenshot of the error attached).
I found solutions saying to do pip install pyarrow, but that also fails with a similar 'failed to build wheel' error.
A:
Sorry, this should be a comment, but I am a new user and cannot do it yet. "Streamlit" currently doesn't officially support python 3.11. So, it is more or less a gray area.
You can also find this link useful: Installation error streamlit: Building wheel for pyarrow (pyproject.toml) ... error
"Streamlit currently supports versions 3.7, 3.8, 3.9, and 3.10 of Python."
https://docs.streamlit.io/knowledge-base/using-streamlit/sanity-checks
Judging by how support for 3.10 was added, they will likely add 3.11 support sometime in January 2023.
https://docs.streamlit.io/library/changelog
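In the meantime, one workaround (my suggestion, not part of the original answer) is to install Streamlit into an environment that runs a supported Python version, for example with conda:
conda create -n streamlit-env python=3.10
conda activate streamlit-env
pip install streamlit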
|
Error installing Streamlit on Python 3.11
|
I am trying to install Streamlit on Python 3.11 and I keep getting this error (screenshot of the error attached).
I found solutions saying to do pip install pyarrow, but that also fails with a similar 'failed to build wheel' error.
|
[
"Sorry, this should be a comment, but I am a new user and cannot do it yet. \"Streamlit\" currently doesn't officially support python 3.11. So, it is more or less a gray area.\nYou can also find this link useful: Installation error streamlit: Building wheel for pyarrow (pyproject.toml) ... error\n\"Streamlit currently supports versions 3.7, 3.8, 3.9, and 3.10 of Python.\"\nhttps://docs.streamlit.io/knowledge-base/using-streamlit/sanity-checks\nIf we use analogy with 3.10, they will add support some time in January 2023.\nhttps://docs.streamlit.io/library/changelog\n"
] |
[
0
] |
[] |
[] |
[
"python",
"streamlit"
] |
stackoverflow_0074505468_python_streamlit.txt
|
Q:
Why is my square not stopping at the obstacle I created?
import pygame
import keyboard
screen = pygame.display.set_mode((800,600))
screen.fill((255,255,255))
blue = (20,40,200)
gray = (100,100,100)
x=200
y=200
w=60
h=60
p=0
OL=0
vel=0.1
#variables
def player():
pygame.draw.rect(screen,blue,pygame.Rect(p+x,p+y,p+w,p+h))
def obst():
pygame.draw.rect(screen,gray,pygame.Rect(OL+100,OL+100,OL+100,OL+10))
run=True
pygame.init()
while run:
screen.fill((255,255,255))
player()
obst()
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
keys = pygame.key.get_pressed()
if keys[pygame.K_LEFT] and x>0 and x>OL:
x-= vel
elif x==OL:
vel=0
if keys[pygame.K_RIGHT]and x<800-w and x>OL:
x+= vel
elif x==OL:
vel=0
if keys[pygame.K_UP]and y>0 and y>OL:
y-= vel
elif y==OL:
vel=0
if keys[pygame.K_DOWN]and y<600-h and y>OL:
y+= vel
elif y==OL:
vel=0
pygame.display.update()
pygame.quit()
I tried putting variables in the coordinates of my obstacle and it just didn't work; I'm lost. There isn't a whole lot I can do with my little-to-no knowledge of Python. I just want my square to stop when it hits the obstacle, and it is very hard to get it to do that. I have tried everything I know, please help. Putting OL in the obstacle coordinates did not seem to help anything. I want to make it so that all the shapes I create are treated as "OL", so that I can have complex or moving obstacles that stop the player. I would also like the player to be able to move away from the obstacle.
A:
Read How do I detect collision in pygame?. I suggest using pygame.Rect objects and colliderect for the collision detection. Create pygame.Rect objects for the player and the obstacle:
player_rect = pygame.Rect(200, 200, 60, 60)
obstacle_rect = pygame.Rect(100, 100, 100, 10)
Test for collision when the player has been moved and restrict the player's boundaries to the boundaries of the obstacle:
if keys[pygame.K_LEFT] and player_rect.left > 0:
player_rect.x -= vel
if player_rect.colliderect(obstacle_rect):
player_rect.left = obstacle_rect.right
Complete example:
import pygame
pygame.init()
screen = pygame.display.set_mode((800,600))
clock = pygame.time.Clock()
blue = (20,40,200)
gray = (100,100,100)
player_rect = pygame.Rect(200, 200, 60, 60)
obstacle_rect = pygame.Rect(100, 100, 100, 10)
vel = 5
def player():
pygame.draw.rect(screen, blue, player_rect)
def obst():
    pygame.draw.rect(screen, gray, obstacle_rect)
run=True
while run:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
keys = pygame.key.get_pressed()
if keys[pygame.K_LEFT] and player_rect.left > 0:
player_rect.x -= vel
if player_rect.colliderect(obstacle_rect):
player_rect.left = obstacle_rect.right
if keys[pygame.K_RIGHT] and player_rect.right < 800:
player_rect.x += vel
if player_rect.colliderect(obstacle_rect):
player_rect.right = obstacle_rect.left
if keys[pygame.K_UP] and player_rect.top > 0:
player_rect.y -= vel
if player_rect.colliderect(obstacle_rect):
player_rect.top = obstacle_rect.bottom
if keys[pygame.K_DOWN] and player_rect.bottom < 600:
player_rect.y += vel
if player_rect.colliderect(obstacle_rect):
player_rect.bottom = obstacle_rect.top
screen.fill((255,255,255))
player()
obst()
pygame.display.update()
pygame.quit()
|
Why is my square not stopping at the obstacle I created?
|
import pygame
import keyboard
screen = pygame.display.set_mode((800,600))
screen.fill((255,255,255))
blue = (20,40,200)
gray = (100,100,100)
x=200
y=200
w=60
h=60
p=0
OL=0
vel=0.1
#variables
def player():
pygame.draw.rect(screen,blue,pygame.Rect(p+x,p+y,p+w,p+h))
def obst():
pygame.draw.rect(screen,gray,pygame.Rect(OL+100,OL+100,OL+100,OL+10))
run=True
pygame.init()
while run:
screen.fill((255,255,255))
player()
obst()
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
keys = pygame.key.get_pressed()
if keys[pygame.K_LEFT] and x>0 and x>OL:
x-= vel
elif x==OL:
vel=0
if keys[pygame.K_RIGHT]and x<800-w and x>OL:
x+= vel
elif x==OL:
vel=0
if keys[pygame.K_UP]and y>0 and y>OL:
y-= vel
elif y==OL:
vel=0
if keys[pygame.K_DOWN]and y<600-h and y>OL:
y+= vel
elif y==OL:
vel=0
pygame.display.update()
pygame.quit()
I tried putting variables in the coordinates of my obstacle and it just didn't work; I'm lost. There isn't a whole lot I can do with my little-to-no knowledge of Python. I just want my square to stop when it hits the obstacle, and it is very hard to get it to do that. I have tried everything I know, please help. Putting OL in the obstacle coordinates did not seem to help anything. I want to make it so that all the shapes I create are treated as "OL", so that I can have complex or moving obstacles that stop the player. I would also like the player to be able to move away from the obstacle.
|
[
"Read *How do I detect collision in pygame?. I suggest to use pygame.Rect objects and colliderect for the collision detection. Create pygame.Rect objects for the player and the obstacle:\nplayer_rect = pygame.Rect(200, 200, 60, 60)\nobstacle_rect = pygame.Rect(100, 100, 100, 10)\n\nTest for collision when the player has been moved and restrict the player's boundaries to the boundaries of the obstacle:\nif keys[pygame.K_LEFT] and player_rect.left > 0:\n player_rect.x -= vel\n if player_rect.colliderect(obstacle_rect):\n player_rect.left = obstacle_rect.right\n\n\nComplete example:\n\nimport pygame\n\npygame.init()\nscreen = pygame.display.set_mode((800,600))\nclock = pygame.time.Clock()\nblue = (20,40,200)\ngray = (100,100,100)\nplayer_rect = pygame.Rect(200, 200, 60, 60)\nobstacle_rect = pygame.Rect(100, 100, 100, 10)\nvel = 5\n\ndef player():\n pygame.draw.rect(screen, blue, player_rect)\ndef obst():\n pygame.draw.rect(screen, gray, pygame.Rect(obstacle_rect))\n\nrun=True\nwhile run:\n clock.tick(60)\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n run = False\n keys = pygame.key.get_pressed()\n \n if keys[pygame.K_LEFT] and player_rect.left > 0:\n player_rect.x -= vel\n if player_rect.colliderect(obstacle_rect):\n player_rect.left = obstacle_rect.right\n if keys[pygame.K_RIGHT] and player_rect.right < 800:\n player_rect.x += vel\n if player_rect.colliderect(obstacle_rect):\n player_rect.right = obstacle_rect.left\n if keys[pygame.K_UP] and player_rect.top > 0:\n player_rect.y -= vel\n if player_rect.colliderect(obstacle_rect):\n player_rect.top = obstacle_rect.bottom\n if keys[pygame.K_DOWN] and player_rect.bottom < 600:\n player_rect.y += vel\n if player_rect.colliderect(obstacle_rect):\n player_rect.bottom = obstacle_rect.top\n\n screen.fill((255,255,255))\n player()\n obst()\n pygame.display.update()\npygame.quit()\n\n"
] |
[
0
] |
[] |
[] |
[
"pygame",
"python"
] |
stackoverflow_0074505199_pygame_python.txt
|
Q:
Python variables behave differently after the value 256
In Python 3.8.10
pit@pit-desktop:~$ python
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> x1 = 256
>>> x2 = 256
>>> print(f'id(x1) = {id(x1)}, id(x2) = {id(x2)}')
id(x1) = 9809408, id(x2) = 9809408
>>> print(f'x1 is x2 = {x1 is x2}')
x1 is x2 = True
>>>
>>> y1 = 257
>>> y2 = 257
>>> print(f'id(y1) = {id(y1)}, id(y2) = {id(y2)}')
id(y1) = 140250419837264, id(y2) = 140250419837488
>>> print(f'y1 is y2 = {y1 is y2}')
y1 is y2 = False
>>>
But in Python 3.6 at https://pythontutor.com/visualize.html
image from pythontutor.com
The behavior is radically different. Why is that?
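For context (an editorial note, not part of the original post): CPython pre-allocates and caches the small integers from -5 to 256, so every reference to a value in that range is the same object, while larger integers created at runtime are usually fresh objects. Within a single compiled unit (one script or one REPL line), the compiler may additionally merge equal constants, which is why pythontutor can show two 257s sharing one id. A quick demonstration:
a = 256; b = 256
print(a is b)   # True: 256 lives in CPython's small-int cache (-5..256)
c = 257; d = 257
print(c is d)   # True here, since both literals are compiled together; may differ elsewhere
x = int('257'); y = int('257')
print(x is y)   # False: runtime-created ints outside the cache are distinct objects
Either way, use == to compare integer values; `is` only checks object identity.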
|
Python variables behave differently after the value 256
|
In Python 3.8.10
pit@pit-desktop:~$ python
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> x1 = 256
>>> x2 = 256
>>> print(f'id(x1) = {id(x1)}, id(x2) = {id(x2)}')
id(x1) = 9809408, id(x2) = 9809408
>>> print(f'x1 is x2 = {x1 is x2}')
x1 is x2 = True
>>>
>>> y1 = 257
>>> y2 = 257
>>> print(f'id(y1) = {id(y1)}, id(y2) = {id(y2)}')
id(y1) = 140250419837264, id(y2) = 140250419837488
>>> print(f'y1 is y2 = {y1 is y2}')
y1 is y2 = False
>>>
But in Python 3.6 at https://pythontutor.com/visualize.html
image from pythontutor.com
The behavior is radically different. Why is that?
|
[] |
[] |
[
"The IDs remain the same only as long as the specific object in memory referenced remains the same. With smaller numbers (less than 257?), it's more likely that they'll remain the cache. Try the same game using the number 42506, and you'll get the same False result as with 257, whereas when I did it with the number 32, the result was True.\n"
] |
[
-1
] |
[
"integer",
"python",
"variables"
] |
stackoverflow_0074506296_integer_python_variables.txt
|
Q:
Deploying a python Flask application with Jenkins and executing it
I am trying to do auto-deployment of a Python Flask application using Jenkins and then run it by using shell command on a Raspberry Pi server.
Here are some background info,
Before using Jenkins, my deployment and execution process was manual described below:
FTP to the directory where my Python scripts and Python venv are located
Replace Flask application scripts using FTP
Activate virtual environment to of Python(3.5) through the terminal on Raspberry Pi ("./venv/bin/activate")
Run myFlaskApp.py by executing "python myFlaskApp.py" in terminal
Now I have integrated Jenkins with the deployment/execution process described below:
Code change pushed to github
Jenkins automatically pulls from github
Jenkins deploy files to specified directories by executing shell commands
Jenkins then activates virtual environment and run myFlaskApp.py by bashing a .sh script in the shell terminal.
Now the problem that I am having is on step 4, because a Flask app has to always be alive, my Jenkins will never "finish building successfully", it will always be in a loading state as the Flask app is running on the shell terminal Jenkins is using.
Now my question:
What is the correct approach that I should be taking in order to activate myFlaskApp.py with Jenkins after deploying the files while not causing it to be "locked down" by the build process?
I have read up about Docker, SubShell and the Linux utility "Screen". Will any of these tools be useful to assist me in my situation right now and which approach should I be taking?
A:
The simple and robust solution (in my opinion) is to use Supervisor, which is available in Debian as the supervisor package. It lets you turn a script like your app into a daemon; it can spawn multiple processes, watch whether the app crashes and, if it does, start it again.
A note about virtualenv: you don't need to activate the venv to use it. You just need to point to the appropriate Python executable (your_venv/bin/python) instead of the default one. For example:
$ ./venv/bin/python myFlaskApp.py
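A minimal Supervisor program definition might look like the following sketch, placed under /etc/supervisor/conf.d/ (all paths and names here are hypothetical; adapt them to your setup):
[program:myflaskapp]
command=/home/pi/myapp/venv/bin/python /home/pi/myapp/myFlaskApp.py
directory=/home/pi/myapp
autostart=true
autorestart=true
stderr_logfile=/var/log/myflaskapp.err.log
stdout_logfile=/var/log/myflaskapp.out.log
After adding the file, reload Supervisor with supervisorctl reread && supervisorctl update. With this in place, the Jenkins job only needs to copy the files and run supervisorctl restart myflaskapp, so the build finishes instead of blocking on the running app.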
A:
You need to create these files for deployment over jenkins.
Code can be found: https://github.com/ishwar6/django_ci_cd
This will work for both flask as well as django.
initial-setup.sh - This file is the first file to look at when setting up this project. It installs the required packages to make this project work such as Nginx, Jenkins, Python etc. Refer to the youtube video to see how and when it is used.
Jenkinsfile - This file contains the definition of the stages in the pipeline. The stages in this project's pipeline are Setup Python Virtual Environment, Setup gunicorn service and Setup Nginx. Each stage in this pipeline does just two things: first it makes a file executable, and then it runs the file. The file carries out the commands described by the stage description.
envsetup.sh - This file sets up the python virtual environment, installs the python packages and then creates log files that will be used by Nginx.
gunicorn.sh - This file runs some Django management commands like migration commands and static files collection commands. It also sets up the gunicorn service that will be running the gunicorn server in the background.
nginx.sh - This file sets up Nginx with a configuration file that points Nginx to the gunicorn service running our application. This allows Nginx to serve our application. I followed a DigitalOcean article to set up this file. You can go through the video once to replicate the sites-available and sites-enabled scenario.
app.conf - This is an Nginx server configuration file. It is used to set up Nginx as a proxy server to gunicorn. For this configuration to work, change the value of server_name to the IP address or domain name of your server; a minimal sketch of such a file follows below.
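A minimal sketch of such an app.conf (an illustration only; the socket path is hypothetical and the real file lives in the linked repo):
server {
    listen 80;
    server_name your_server_ip_or_domain;  # change to your server's IP or domain

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;  # hypothetical gunicorn socket path
    }
}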
|
Deploying a python Flask application with Jenkins and executing it
|
I am trying to do auto-deployment of a Python Flask application using Jenkins and then run it by using shell command on a Raspberry Pi server.
Here are some background info,
Before using Jenkins, my deployment and execution process was manual described below:
FTP to the directory where my Python scripts and Python venv are located
Replace Flask application scripts using FTP
Activate virtual environment to of Python(3.5) through the terminal on Raspberry Pi ("./venv/bin/activate")
Run myFlaskApp.py by executing "python myFlaskApp.py" in terminal
Now I have integrated Jenkins with the deployment/execution process described below:
Code change pushed to github
Jenkins automatically pulls from github
Jenkins deploy files to specified directories by executing shell commands
Jenkins then activates virtual environment and run myFlaskApp.py by bashing a .sh script in the shell terminal.
Now the problem that I am having is on step 4, because a Flask app has to always be alive, my Jenkins will never "finish building successfully", it will always be in a loading state as the Flask app is running on the shell terminal Jenkins is using.
Now my question:
What is the correct approach that I should be taking in order to activate myFlaskApp.py with Jenkins after deploying the files while not causing it to be "locked down" by the build process?
I have read up about Docker, SubShell and the Linux utility "Screen". Will any of these tools be useful to assist me in my situation right now and which approach should I be taking?
|
[
"The simple and robust solution (in my opinion) is to use Supervisor which is available in Debian as supervisor package. It allows you do make a daemon from script like your app, it can spawn multiple processes, watch if app doesn't crash and if it does it can start it again.\nNote about virtualenv - you don't need to activate venv to use it. You just need to point appropriate Python executable (your_venv/bin/python) instead of default one. For example:\n$ ./venv/bin/python myFlaskApp.py\n\n",
"You need to create these files for deployment over jenkins.\nCode can be found: https://github.com/ishwar6/django_ci_cd\n\nThis will work for both flask as well as django.\n\ninitial-setup.sh - This file is the first file to look at when setting up this project. It installs the required packages to make this project work such as Nginx, Jenkins, Python etc. Refer to the youtube video to see how and when it is used.\n\nJenkinsfile - This file contains the definition of the stages in the pipeline. The stages in this project's pipeline are Setup Python Virtual Environment, Setup gunicorn service and Setup Nginx. The stages in this pipeline just does two things. First it makes a file executable and then runs the file. The file carries out the commands that is described by the stage description.\n\nenvsetup.sh - This file sets up the python virtual environment, installs the python packages and then creates log files that will be used by Nginx.\n\ngunicorn.sh - This file runs some Django management commands like migration commands and static files collection commands. It also sets up the gunicorn service that will be running the gunicorn server in the background.\n\nnginx.sh - This file sets up Nginx with a configuration file that points Nginx to the gunicorn service that is running our application. This allows Nginx serve our application. I have followed a digital ocean article to setup this file. You can go through the video once to replicate sites-available and sites-enabled scanerio.\n\napp.conf - This is an Nginx server configuration file. This file is used to setup Nginx as proxy server to gunicorn. For this configuration to work, change the value of server_name to the IP address or domain name of your server.\n\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"flask",
"jenkins",
"linux",
"python",
"raspberry_pi"
] |
stackoverflow_0060681521_flask_jenkins_linux_python_raspberry_pi.txt
|
Q:
how to run bash code in a loop in google colab
I am trying to run a loop that requires the bash command --
!python3 -m runner.player_1
but when I make it into loop:
for player1 in range(0, 100, 1):
!python3 -m "runner.player_" + str(player1)
it doesn't work and returns the error:
/bin/bash: -c: line 0: syntax error near unexpected token `('
/bin/bash: -c: line 0: `python3 -m "runner.player_" + str(player1)'
How can I fix this? Thank you.
A:
A native Bash loop would look like
for i in {0..99}; do python3 -m runner.player_$i; done
You can replace the semicolons with newlines, and/or add a newline after do if you like. I'm guessing you will want it literally as a one-liner.
This seems like an XY problem, though; surely it would be better if whatever code implements these 100 modules would be refactored so that you can run them all sequentially in one go.
A:
You will have to call it as a Python function using os.system, not as a magic command.
import os
for player1 in range(0, 100, 1):
os.system("python3 -m runner.player_" + str(player1))
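As a side note (my addition, not part of the original answer), subprocess.run is generally preferred over os.system in modern Python, and passing the arguments as a list avoids shell-quoting issues:
import subprocess

for player1 in range(100):
    subprocess.run(['python3', '-m', f'runner.player_{player1}'])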
|
how to run bash code in a loop in google colab
|
I am trying to run a loop that requires the bash command --
!python3 -m runner.player_1
but when I make it into loop:
for player1 in range(0, 100, 1):
!python3 -m "runner.player_" + str(player1)
it doesn't work and returns the error:
/bin/bash: -c: line 0: syntax error near unexpected token `('
/bin/bash: -c: line 0: `python3 -m "runner.player_" + str(player1)'
How can I fix this? Thank you.
|
[
"A native Bash loop would look like\nfor i in {0..99}; do python3 -m runner.player_$i; done\n\nYou can replace the semicolons with newlines, and/or add a newline after do if you like. I'm guessing you will want it literally as a one-liner.\nThis seems like an XY problem, though; surely it would be better if whatever code implements these 100 modules would be refactored so that you can run them all sequentially in one go.\n",
"You will have to call it as python function using os.system, not as magic command.\nimport os\nfor player1 in range(0, 100, 1):\n os.system(\"python3 -m runner.player_\" + str(player1))\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"bash",
"google_colaboratory",
"loops",
"python"
] |
stackoverflow_0074506646_bash_google_colaboratory_loops_python.txt
|
Q:
'pygame.Surface' object has no attribute 'update'
I get the message: 'pygame.Surface' object has no attribute 'update'. But as you can see, I have an update function in the code. What did I do wrong? I looked around but didn't find a similar question.
class Createparticle:
def __init__(self, xx, yy,img):
self.x = xx
self.y = yy
self.img = img
self.particlelist = []
self.verzoegerung = 0
self.scale_k = 0.1
self.img = scale(img, self.scale_k)
self.alpha = 255
self.alpha_rate = 3
self.alive = True
self.vx = 0
self.vy = 4 + random.randint(-10, 10) / 10
self.k = 0.01 * random.random() * random.choice([-1, 1])
def update(self):
self.x += self.vx
self.vx += self.k
self.y -= self.vy
self.vy *= 0.99
self.scale_k += 0.005
self.alpha -= self.alpha_rate
self.img = scale(self.img, self.scale_k)
self.img.set_alpha(self.alpha)
self.particlelist = [i for i in self.particlelist if i.alive]
self.verzoegerung += 1
if self.verzoegerung % 2 == 0:
self.verzoegerung = 0
self.particlelist.append(self.img)
for i in self.particlelist:
i.update()
def draw(self):
for i in self.particlelist:
screen.blit(self.img, self.img.get_rect(center=(self.x, self.y)))
createparticle = Createparticle(500,300,basisbild)
while True:
screen.fill((0, 0, 0))
createparticle.update()
createparticle.draw()
pygame.display.update()
clock.tick(FPS)
A:
The error is caused by i.update(). i is an element of self.particlelist, and in your case the elements of self.particlelist are images (pygame.Surface objects). A pygame.Surface object has no update method. i probably should not be a pygame.Surface, but you add pygame.Surface objects to the list:
self.particlelist.append(self.img)
So this line of code is obviously wrong and should be like this instead (note: Particle is a guess of mine, but I don't know how you named your classes.):
self.particlelist.append(Particle(self.img))
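For illustration, a minimal sketch of what such a Particle class could look like (the class name and attributes are assumptions, mirroring the per-particle state your code would need):
class Particle:
    def __init__(self, img):
        self.img = img
        self.alive = True

    def update(self):
        # per-particle movement, scaling and fading logic goes here
        pass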
A:
You have a list of surfaces called particlelist; in your update function you call the update function on each item in that list. Since these items are of the Surface type, they don't have an update function. This is where the error comes from.
|
'pygame.Surface' object has no attribute 'update'
|
I get the message: 'pygame.Surface' object has no attribute 'update'. But as you can see, I have an update function in the code. What did I do wrong? I looked around but didn't find a similar question.
class Createparticle:
def __init__(self, xx, yy,img):
self.x = xx
self.y = yy
self.img = img
self.particlelist = []
self.verzoegerung = 0
self.scale_k = 0.1
self.img = scale(img, self.scale_k)
self.alpha = 255
self.alpha_rate = 3
self.alive = True
self.vx = 0
self.vy = 4 + random.randint(-10, 10) / 10
self.k = 0.01 * random.random() * random.choice([-1, 1])
def update(self):
self.x += self.vx
self.vx += self.k
self.y -= self.vy
self.vy *= 0.99
self.scale_k += 0.005
self.alpha -= self.alpha_rate
self.img = scale(self.img, self.scale_k)
self.img.set_alpha(self.alpha)
self.particlelist = [i for i in self.particlelist if i.alive]
self.verzoegerung += 1
if self.verzoegerung % 2 == 0:
self.verzoegerung = 0
self.particlelist.append(self.img)
for i in self.particlelist:
i.update()
def draw(self):
for i in self.particlelist:
screen.blit(self.img, self.img.get_rect(center=(self.x, self.y)))
createparticle = Createparticle(500,300,basisbild)
while True:
screen.fill((0, 0, 0))
createparticle.update()
createparticle.draw()
pygame.display.update()
clock.tick(FPS)
|
[
"The error is caused by i.update(). i is an element from self.particlelist. In your case self.particlelist is an image (pygame.Surface). A pygame.Surface object has no update method. Probably i should not be a pygame.Surface, but you add pygame.Surface objects to the list:\n\nself.particlelist.append(self.img)\n\n\nSo this line of code is obviously wrong and should be like this instead (note: Particle is a guess of mine, but I don't know how you named your classes.):\nself.particlelist.append(Particle(self.img))\n\n",
"you have a list of surfaces called particlelist, in your update function you are calling the update function on each item in that list. Since theses are of the 'surface' type they don't have a update function. This is where the error is coming from.\n"
] |
[
1,
0
] |
[] |
[] |
[
"pygame",
"python"
] |
stackoverflow_0074505581_pygame_python.txt
|
Q:
Split dataframe into grouped chunks
I would like to split a dataframe into chunks. I have created a function which is able to split a dataframe into equal-size chunks; however, I am unable to figure out how to split by groups.
Each split of dataframe must include all instances of a grouping variable, I'd like flexibility on how many groups could be included (as they are relatively small).
Example dataframe:
A 1
A 2
B 3
C 1
D 9
D 10
Target splits (include at least two groups):
Split 1:
A 1
A 2
B 3
Split 2:
C 1
D 9
D 10
If helpful, my current function looks like the following:
def split_frame(sequence, size=10000):
return (sequence[position:position + size] for position in range(0, len(sequence), size))
Help appreciated!
A:
Works in Python 2 and 3:
df = pd.DataFrame(data=['a', 'a', 'b', 'c', 'a', 'a', 'b', 'v', 'v', 'f'], columns=['A'])
def iter_by_group(df, column, num_groups):
groups = []
for i, group in df.groupby(column):
groups.append(group)
if len(groups) == num_groups:
yield pd.concat(groups)
groups = []
if groups:
yield pd.concat(groups)
for group in iter_by_group(df, 'A', 2):
print(group)
A
0 a
1 a
4 a
5 a
2 b
6 b
A
3 c
9 f
A
7 v
8 v
A:
The answer from Dennis Golomazov was too slow for my dataframes.
Storing the groups in a list and returning them with pd.concat() is a performance killer.
Here is a slightly faster version.
It enumerates the groups and returns them via their group number.
import pandas as pd
def group_chunks(df, column, chunk_size):
df["n_group"] = df.groupby(column).ngroup()
lower_group_index = 0
upper_group_index = chunk_size - 1
max_group_index = df["n_group"].max()
while lower_group_index <= max_group_index:
yield df.loc[:, df.columns != "n_group"][
df["n_group"].between(lower_group_index, upper_group_index)
]
lower_group_index = upper_group_index + 1
upper_group_index = upper_group_index + chunk_size
df = pd.DataFrame(data=['a', 'a', 'b', 'c', 'a', 'a', 'b', 'v', 'v', 'f'], columns=['A'])
for chunk in group_chunks(df, 'A', 2):
print(f"{chunk.sort_values(by='A')}\n")
A
0 a
1 a
4 a
5 a
2 b
6 b
A
3 c
9 f
A
7 v
8 v
A:
This should work as well:
n = 2
splits = {g:df for g,df in df.groupby(df.groupby('A').ngroup().floordiv(n))}
Each df can be accessed by the key in the dictionary. It would also then be possible to concat them back to one df, which now shows the group in the index
pd.concat(splits,names = ['groups'])
A:
A look at the various methods
def golomazov(df, column, num_groups):
groups = []
for i, group in df.groupby(column):
groups.append(group)
if len(groups) == num_groups:
yield pd.concat(groups)
groups = []
if groups:
yield pd.concat(groups)
def arigion(df, column, chunk_size):
df["n_group"] = df.groupby(column).ngroup()
lower_group_index = 0
upper_group_index = chunk_size - 1
max_group_index = df["n_group"].max()
while lower_group_index <= max_group_index:
yield df.loc[:, df.columns != "n_group"][
df["n_group"].between(lower_group_index, upper_group_index)
]
lower_group_index = upper_group_index + 1
upper_group_index = upper_group_index + chunk_size
def rhug123(df, column, n):
return {g: df for g, df in df.groupby(df.groupby('Symbol').ngroup().floordiv(n))}
def misantroop(df, column, num_groups):
symbol_groups = df.groupby(column)
groups = np.array_split(list(symbol_groups.groups), num_groups)
for group in groups:
yield pd.concat([symbol_groups.get_group(name) for name in group])
%timeit golomazov(df, 'Symbol', n)
157 ns ± 0.647 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
from pympler.asizeof = 414176
%timeit arigion(df, 'Symbol', n)
160 ns ± 0.903 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
from pympler.asizeof = 414176
%timeit rhug123(df, 'Symbol', n)
5.53 ms ± 28 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
from pympler.asizeof = 57534096
%timeit misantroop(df, 'Symbol', num_groups=n*40)
191 ns ± 2.09 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
from pympler.asizeof = 414176
(Note: for the generator-based functions, %timeit measures only the cost of creating the generator object, not of iterating over it, which is why those timings are in nanoseconds while the dict-building version is in milliseconds.)
|
Split dataframe into grouped chunks
|
I would like to split a dataframe into chunks. I have created a function which is able to split a dataframe into equal-size chunks; however, I am unable to figure out how to split by groups.
Each split of dataframe must include all instances of a grouping variable, I'd like flexibility on how many groups could be included (as they are relatively small).
Example dataframe:
A 1
A 2
B 3
C 1
D 9
D 10
Target splits (include at least two groups):
Split 1:
A 1
A 2
B 3
Split 2:
C 1
D 9
D 10
If helpful, my current function looks like the following:
def split_frame(sequence, size=10000):
return (sequence[position:position + size] for position in range(0, len(sequence), size))
Help appreciated!
|
[
"Works in Python 2 and 3:\ndf = pd.DataFrame(data=['a', 'a', 'b', 'c', 'a', 'a', 'b', 'v', 'v', 'f'], columns=['A']) \n\ndef iter_by_group(df, column, num_groups):\n groups = []\n for i, group in df.groupby(column):\n groups.append(group)\n if len(groups) == num_groups:\n yield pd.concat(groups)\n groups = []\n if groups:\n yield pd.concat(groups)\n\nfor group in iter_by_group(df, 'A', 2):\n print(group)\n\nA\n0 a\n1 a\n4 a\n5 a\n2 b\n6 b\n\nA\n3 c\n9 f\n\nA\n7 v\n8 v\n\n",
"The answer from Dennis Golomazov was too slow for my dataframes.\nStoring the groups in a list and returning them with pd.concat() is a performance killer.\nHere is a slightly faster version.\nIt enumerates the groups and returns them via their group number.\nimport pandas as pd\n\ndef group_chunks(df, column, chunk_size):\n df[\"n_group\"] = df.groupby(column).ngroup()\n lower_group_index = 0\n upper_group_index = chunk_size - 1\n max_group_index = df[\"n_group\"].max()\n while lower_group_index <= max_group_index:\n yield df.loc[:, df.columns != \"n_group\"][\n df[\"n_group\"].between(lower_group_index, upper_group_index)\n ]\n lower_group_index = upper_group_index + 1\n upper_group_index = upper_group_index + chunk_size\n\ndf = pd.DataFrame(data=['a', 'a', 'b', 'c', 'a', 'a', 'b', 'v', 'v', 'f'], columns=['A']) \nfor chunk in group_chunks(df, 'A', 2):\n print(f\"{chunk.sort_values(by='A')}\\n\")\n\n A\n0 a\n1 a\n4 a\n5 a\n2 b\n6 b\n\n A\n3 c\n9 f\n\n A\n7 v\n8 v\n\n\n",
"This should work as well:\nn = 2\nsplits = {g:df for g,df in df.groupby(df.groupby('A').ngroup().floordiv(n))}\n\nEach df can be accessed by the key in the dictionary. It would also then be possible to concat them back to one df, which now shows the group in the index\npd.concat(splits,names = ['groups'])\n\n",
"A look at the various methods\ndef golomazov(df, column, num_groups):\n groups = []\n for i, group in df.groupby(column):\n groups.append(group)\n if len(groups) == num_groups:\n yield pd.concat(groups)\n groups = []\n if groups:\n yield pd.concat(groups)\n \ndef arigion(df, column, chunk_size):\n df[\"n_group\"] = df.groupby(column).ngroup()\n lower_group_index = 0\n upper_group_index = chunk_size - 1\n max_group_index = df[\"n_group\"].max()\n while lower_group_index <= max_group_index:\n yield df.loc[:, df.columns != \"n_group\"][\n df[\"n_group\"].between(lower_group_index, upper_group_index)\n ]\n lower_group_index = upper_group_index + 1\n upper_group_index = upper_group_index + chunk_size\n\ndef rhug123(df, column, n):\n return {g: df for g, df in df.groupby(df.groupby('Symbol').ngroup().floordiv(n))}\n\ndef misantroop(df, column, num_groups):\n symbol_groups = df.groupby(column)\n groups = np.array_split(list(symbol_groups.groups), num_groups)\n for group in groups:\n yield pd.concat([symbol_groups.get_group(name) for name in group])\n\n%timeit golomazov(df, 'Symbol', n)\n157 ns ± 0.647 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)\nfrom pympler.asizeof = 414176\n\n%timeit arigion(df, 'Symbol', n)\n160 ns ± 0.903 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)\nfrom pympler.asizeof = 414176\n\n%timeit rhug123(df, 'Symbol', n)\n5.53 ms ± 28 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\nfrom pympler.asizeof = 57534096\n\n%timeit misantroop(df, 'Symbol', num_groups=n*40)\n191 ns ± 2.09 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)\nfrom pympler.asizeof = 414176\n\n"
] |
[
6,
2,
1,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0051411506_pandas_python.txt
|
Q:
fillna with a condition (time limitation)
Thanks in advance for checking the question.
I have a dataset with a lot of missing values in the column "bond_yield".
My first question has been solved, which required me to fill the NaNs with previous data. My code is like this:
#sort the data first by company and then by time
df_dataset = df_dataset.sort_values(by=['gvkey','date'])
#fill in missing bond yields with previous day's data with a condition:
#fill the missing data only if the company of the row is as same as the that of the above row. otherwise, not fill in.
#this is an important condition to avoid filling the first row of company B with the data from the last row of company A.
df_dataset['bond_yield']= df_dataset.groupby('gvkey')['bond_yield'].fillna(method='ffill')
df_dataset.head()
company gvkey date market_cds_spread bond_yield
34315 AMCN.AIRLNS.GP.INC 1045 20040101 NaN NaN
34316 AMCN.AIRLNS.GP.INC 1045 20040102 NaN NaN
34317 AMCN.AIRLNS.GP.INC 1045 20040105 NaN NaN
34318 AMCN.AIRLNS.GP.INC 1045 20040106 NaN NaN
34319 AMCN.AIRLNS.GP.INC 1045 20040107 NaN NaN
(Yes, the head rows are all NaN, but there are values later on.)
Now I'm asked to fill the data with a condition: fill in missing bond yields with values from previous days only when the previous bond yield is from no more than 15 days ago.
I was thinking about isnull().sum(), but it didn't work: it just counts all missing values, while I would like to count from a certain row back to the previous available value. (I had no idea, so I just tried everything that might help, and my attempt was not very direct.)
How can I fillna with the 15-day limitation?
A:
Sorry, this should be a comment, but I cannot leave one yet. fillna has a keyword limit. I think it can do what you want.
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.fillna.html#pandas.DataFrame.fillna
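For illustration, a minimal sketch combining the groupwise forward fill from the question with that limit keyword. One caveat (an assumption of mine, not stated in the answer): limit counts consecutive NaN rows, so it only matches a 15-day window if there is exactly one row per day per company.
import numpy as np
import pandas as pd
# Toy data: two companies, one row per day.
df = pd.DataFrame({
    "gvkey": [1045, 1045, 1045, 2001, 2001],
    "bond_yield": [5.0, np.nan, np.nan, np.nan, 7.0],
})
# Forward-fill within each company, but across at most 15 rows.
df["bond_yield"] = df.groupby("gvkey")["bond_yield"].ffill(limit=15)
print(df)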
|
fillna with a condition (time limitation)
|
Thanks in advance for checking the question.
I have a dataset with a lot of missing values in the column "bond_yield".
My first question has been solved, which required me to fill the NaNs with previous data. My code is like this:
#sort the data first by company and then by time
df_dataset = df_dataset.sort_values(by=['gvkey','date'])
#fill in missing bond yields with previous day's data with a condition:
#fill the missing data only if the company of the row is as same as the that of the above row. otherwise, not fill in.
#this is an important condition to avoid filling the first row of company B with the data from the last row of company A.
df_dataset['bond_yield']= df_dataset.groupby('gvkey')['bond_yield'].fillna(method='ffill')
df_dataset.head()
company gvkey date market_cds_spread bond_yield
34315 AMCN.AIRLNS.GP.INC 1045 20040101 NaN NaN
34316 AMCN.AIRLNS.GP.INC 1045 20040102 NaN NaN
34317 AMCN.AIRLNS.GP.INC 1045 20040105 NaN NaN
34318 AMCN.AIRLNS.GP.INC 1045 20040106 NaN NaN
34319 AMCN.AIRLNS.GP.INC 1045 20040107 NaN NaN
(Yes, the head rows are all NaN, but there are values later on.)
Now I'm asked to fill the data with a condition: fill in missing bond yields with values from previous days only when the previous bond yield is from no more than 15 days ago.
I was thinking about isnull().sum(), but it didn't work: it just counts all missing values, while I would like to count from a certain row back to the previous available value. (I had no idea, so I just tried everything that might help, and my attempt was not very direct.)
How can I fillna with the 15-day limitation?
|
[
"Sorry, this should be a comment, but I cannot leave one yet. fillna has a keyword limit. I think it can do what you want.\nhttps://pandas.pydata.org/docs/reference/api/pandas.DataFrame.fillna.html#pandas.DataFrame.fillna\n"
] |
[
0
] |
[] |
[] |
[
"conditional_statements",
"fillna",
"pandas",
"python"
] |
stackoverflow_0074504721_conditional_statements_fillna_pandas_python.txt
|
Q:
How to get all token allowances in ethereum in python?
Is there a way to get all the approvals granted by an ethereum address along with the contract it granted permission to in python?
I want to obtain them programmatically instead of using token approval checker websites.
Tried pulling the data using the requests made by websites like revoke.cash, but getting blocked often.
A:
You will need an indexed source in any case, whether your own or a hosted one, e.g. ette.
From there you can get all the tokens the user holds, and then query the latest allowance, allowance(address owner, address spender) → uint256 (which is standard for most ERC20 tokens), for every token.
Some indexers (e.g. ette) allow you to query by event, so you could fetch all Approval events directly, which is faster.
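To make the event route concrete, here is a minimal sketch using web3.py that pulls Approval events for an owner via eth_getLogs. The RPC endpoint and owner address are hypothetical placeholders, and a full-range scan like this generally needs an archive node or an indexer behind the endpoint.
from web3 import Web3
w3 = Web3(Web3.HTTPProvider("https://eth-mainnet.example/rpc"))  # hypothetical endpoint
owner = "0x0000000000000000000000000000000000000001"  # hypothetical owner address
# topic0 is the keccak hash of the ERC-20 Approval event signature;
# topic1 is the indexed owner address, left-padded to 32 bytes.
approval_topic = w3.keccak(text="Approval(address,address,uint256)").hex()
owner_topic = "0x" + owner[2:].lower().rjust(64, "0")
logs = w3.eth.get_logs({
    "fromBlock": 0,  # in practice, scan in block-range chunks
    "toBlock": "latest",
    "topics": [approval_topic, owner_topic],
})
for log in logs:
    token = log["address"]                         # the ERC-20 contract that emitted the event
    spender = "0x" + log["topics"][2].hex()[-40:]  # the indexed spender address
    print(token, spender)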
|
How to get all token allowances in ethereum in python?
|
Is there a way to get all the approvals granted by an ethereum address along with the contract it granted permission to in python?
I want to obtain them programmatically instead of using token approval checker websites.
Tried pulling the data using the requests made by websites like revoke.cash, but getting blocked often.
|
[
"You will need an indexed source in any case, whether your own or a hosted on e.g. ette.\nFrom there you can get all tokens the user holds, and then you would get the latest allowance allowance(address owner, address spender) → uint256 (which is standard for most ERC20 tokens) for every token.\nSome indexers (e.g ette) allow you to query by event, so you could get all Approvals, this is faster.\n"
] |
[
1
] |
[] |
[] |
[
"ethereum",
"python",
"web3py"
] |
stackoverflow_0074502208_ethereum_python_web3py.txt
|
Q:
Create Pandas DataFrame from a string
In order to test some functionality I would like to create a DataFrame from a string. Let's say my test data looks like:
TESTDATA="""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""
What is the simplest way to read that data into a Pandas DataFrame?
A:
A simple way to do this is to use StringIO.StringIO (python2) or io.StringIO (python3) and pass that to the pandas.read_csv function. E.g:
import sys
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
import pandas as pd
TESTDATA = StringIO("""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
""")
df = pd.read_csv(TESTDATA, sep=";")
A:
Split Method
data = input_string
df = pd.DataFrame([x.split(';') for x in data.split('\n')])
print(df)
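A caveat worth noting (my addition, not part of the original answer): this split leaves the header in row 0 and every cell as a string. A variant sketch that promotes the header row and converts the numeric columns, using TESTDATA as defined in the question:
import pandas as pd
TESTDATA = """col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""
rows = [line.split(";") for line in TESTDATA.strip().split("\n")]
df = pd.DataFrame(rows[1:], columns=rows[0]).apply(pd.to_numeric)
print(df)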
A:
In one line, but first import io
import pandas as pd
import io
TESTDATA="""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""
df = pd.read_csv(io.StringIO(TESTDATA), sep=";")
print(df)
A:
A quick and easy solution for interactive work is to copy-and-paste the text by loading the data from the clipboard.
Select the content of the string with your mouse:
In the Python shell use read_clipboard()
>>> pd.read_clipboard()
col1;col2;col3
0 1;4.4;99
1 2;4.5;200
2 3;4.7;65
3 4;3.2;140
Use the appropriate separator:
>>> pd.read_clipboard(sep=';')
col1 col2 col3
0 1 4.4 99
1 2 4.5 200
2 3 4.7 65
3 4 3.2 140
>>> df = pd.read_clipboard(sep=';') # save to dataframe
A:
This answer applies when a string is manually entered, not when it's read from somewhere.
A traditional variable-width CSV is unreadable for storing data as a string variable. Especially for use inside a .py file, consider fixed-width pipe-separated data instead. Various IDEs and editors may have a plugin to format pipe-separated text into a neat table.
Using read_csv
Store the following in a utility module, e.g. util/pandas.py. An example is included in the function's docstring.
import io
import re
import pandas as pd
def read_psv(str_input: str, **kwargs) -> pd.DataFrame:
"""Read a Pandas object from a pipe-separated table contained within a string.
Input example:
| int_score | ext_score | eligible |
| | 701 | True |
| 221.3 | 0 | False |
| | 576 | True |
| 300 | 600 | True |
The leading and trailing pipes are optional, but if one is present,
so must be the other.
`kwargs` are passed to `read_csv`. They must not include `sep`.
In PyCharm, the "Pipe Table Formatter" plugin has a "Format" feature that can
be used to neatly format a table.
Ref: https://stackoverflow.com/a/46471952/
"""
substitutions = [
('^ *', ''), # Remove leading spaces
(' *$', ''), # Remove trailing spaces
(r' *\| *', '|'), # Remove spaces between columns
]
if all(line.lstrip().startswith('|') and line.rstrip().endswith('|') for line in str_input.strip().split('\n')):
substitutions.extend([
(r'^\|', ''), # Remove redundant leading delimiter
(r'\|$', ''), # Remove redundant trailing delimiter
])
for pattern, replacement in substitutions:
str_input = re.sub(pattern, replacement, str_input, flags=re.MULTILINE)
return pd.read_csv(io.StringIO(str_input), sep='|', **kwargs)
Non-working alternatives
The code below doesn't work properly because it adds an empty column on both the left and right sides.
df = pd.read_csv(io.StringIO(df_str), sep=r'\s*\|\s*', engine='python')
As for read_fwf, it doesn't actually use so many of the optional kwargs that read_csv accepts and uses. As such, it shouldn't be used at all for pipe-separated data.
A:
Objective: take a string, make a dataframe.
Solution
def str2frame(estr, sep = ',', lineterm = '\n', set_header = True):
dat = [x.split(sep) for x in estr.split(lineterm)][1:-1]
df = pd.DataFrame(dat)
if set_header:
df = df.T.set_index(0, drop = True).T # flip, set ix, flip back
return df
Example
estr = """
sym,date,strike,genus
APPLE,20MAY20,50.0,Malus
ORANGE,22JUL20,50.0,Rutaceae
"""
df = str2frame(estr)
print(df)
0 sym date strike genus
1 APPLE 20MAY20 50.0 Malus
2 ORANGE 22JUL20 50.0 Rutaceae
A:
Example:
text = [ ['This is the NLP TASKS ARTICLE written by Anjum**'] ,['IN this article I”ll be explaining various DATA-CLEANING techniques '], ['So stay tuned for FURther More && '],['Nah I dont think he goes to usf ; he lives around']]
df = pd.DataFrame({'text':text})
Output
|
Create Pandas DataFrame from a string
|
In order to test some functionality I would like to create a DataFrame from a string. Let's say my test data looks like:
TESTDATA="""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""
What is the simplest way to read that data into a Pandas DataFrame?
|
[
"A simple way to do this is to use StringIO.StringIO (python2) or io.StringIO (python3) and pass that to the pandas.read_csv function. E.g:\nimport sys\nif sys.version_info[0] < 3: \n from StringIO import StringIO\nelse:\n from io import StringIO\n\nimport pandas as pd\n\nTESTDATA = StringIO(\"\"\"col1;col2;col3\n 1;4.4;99\n 2;4.5;200\n 3;4.7;65\n 4;3.2;140\n \"\"\")\n\ndf = pd.read_csv(TESTDATA, sep=\";\")\n\n",
"Split Method\ndata = input_string\ndf = pd.DataFrame([x.split(';') for x in data.split('\\n')])\nprint(df)\n\n",
"In one line, but first import io\nimport pandas as pd\nimport io \n\nTESTDATA=\"\"\"col1;col2;col3\n1;4.4;99\n2;4.5;200\n3;4.7;65\n4;3.2;140\n\"\"\"\n\ndf = pd.read_csv(io.StringIO(TESTDATA), sep=\";\")\nprint(df)\n\n",
"A quick and easy solution for interactive work is to copy-and-paste the text by loading the data from the clipboard.\nSelect the content of the string with your mouse:\n\nIn the Python shell use read_clipboard()\n>>> pd.read_clipboard()\n col1;col2;col3\n0 1;4.4;99\n1 2;4.5;200\n2 3;4.7;65\n3 4;3.2;140\n\nUse the appropriate separator:\n>>> pd.read_clipboard(sep=';')\n col1 col2 col3\n0 1 4.4 99\n1 2 4.5 200\n2 3 4.7 65\n3 4 3.2 140\n\n>>> df = pd.read_clipboard(sep=';') # save to dataframe\n\n",
"This answer applies when a string is manually entered, not when it's read from somewhere.\nA traditional variable-width CSV is unreadable for storing data as a string variable. Especially for use inside a .py file, consider fixed-width pipe-separated data instead. Various IDEs and editors may have a plugin to format pipe-separated text into a neat table.\nUsing read_csv\nStore the following in a utility module, e.g. util/pandas.py. An example is included in the function's docstring.\nimport io\nimport re\n\nimport pandas as pd\n\n\ndef read_psv(str_input: str, **kwargs) -> pd.DataFrame:\n \"\"\"Read a Pandas object from a pipe-separated table contained within a string.\n\n Input example:\n | int_score | ext_score | eligible |\n | | 701 | True |\n | 221.3 | 0 | False |\n | | 576 | True |\n | 300 | 600 | True |\n\n The leading and trailing pipes are optional, but if one is present,\n so must be the other.\n\n `kwargs` are passed to `read_csv`. They must not include `sep`.\n\n In PyCharm, the \"Pipe Table Formatter\" plugin has a \"Format\" feature that can \n be used to neatly format a table.\n\n Ref: https://stackoverflow.com/a/46471952/\n \"\"\"\n\n substitutions = [\n ('^ *', ''), # Remove leading spaces\n (' *$', ''), # Remove trailing spaces\n (r' *\\| *', '|'), # Remove spaces between columns\n ]\n if all(line.lstrip().startswith('|') and line.rstrip().endswith('|') for line in str_input.strip().split('\\n')):\n substitutions.extend([\n (r'^\\|', ''), # Remove redundant leading delimiter\n (r'\\|$', ''), # Remove redundant trailing delimiter\n ])\n for pattern, replacement in substitutions:\n str_input = re.sub(pattern, replacement, str_input, flags=re.MULTILINE)\n return pd.read_csv(io.StringIO(str_input), sep='|', **kwargs)\n\n\nNon-working alternatives\nThe code below doesn't work properly because it adds an empty column on both the left and right sides.\ndf = pd.read_csv(io.StringIO(df_str), sep=r'\\s*\\|\\s*', engine='python')\n\nAs for read_fwf, it doesn't actually use so many of the optional kwargs that read_csv accepts and uses. As such, it shouldn't be used at all for pipe-separated data.\n",
"Object: Take string make dataframe.\nSolution\ndef str2frame(estr, sep = ',', lineterm = '\\n', set_header = True):\n dat = [x.split(sep) for x in estr.split(lineterm)][1:-1]\n df = pd.DataFrame(dat)\n if set_header:\n df = df.T.set_index(0, drop = True).T # flip, set ix, flip back\n return df\n\nExample\nestr = \"\"\"\nsym,date,strike,genus\nAPPLE,20MAY20,50.0,Malus\nORANGE,22JUL20,50.0,Rutaceae\n\"\"\"\n\ndf = str2frame(estr)\n\n\nprint(df)\n0 sym date strike genus\n1 APPLE 20MAY20 50.0 Malus\n2 ORANGE 22JUL20 50.0 Rutaceae\n\n",
"Emample:\ntext = [ ['This is the NLP TASKS ARTICLE written by Anjum**'] ,['IN this article I”ll be explaining various DATA-CLEANING techniques '], ['So stay tuned for FURther More && '],['Nah I dont think he goes to usf ; he lives around']]\ndf = pd.DataFrame({'text':text})\n\nOutput\n\n"
] |
[
752,
46,
46,
24,
9,
4,
0
] |
[] |
[] |
[
"csv",
"csv_import",
"pandas",
"python",
"string"
] |
stackoverflow_0022604564_csv_csv_import_pandas_python_string.txt
|
Q:
Removing items in a list from a string
I'm teaching myself Python and I have been stuck on this issue for a few days now.
The idea is to ask a user to input a sentence and then ask them for 5 characters that they would like to remove from the sentence.
For example the sentence input by the user is:
user_string = "The quick brown fox jumps over the lazy dog"
The characters they want to remove is:
lst = ["a", "b", "c", "d", "e"]
I have reached the point where I have the user string and the user list that needs to be removed, what I'm stuck on is figuring out how to loop through the list and check if each character is present in the string and then to strip it from the string.
I have tried to use a for loop but I'm not yet proficient in loops so I might be going about it the wrong way, this is my for loop so far:
for char in user_string[:]:
if char[0] in user_string:
removed_string = user_string.strip(char)
print(removed_string)
A:
The method user_string.strip(char) only removes leading and trailing characters, so it cannot remove occurrences of char from the middle of the string.
Here are 2 ways:
user_string = "The quick brown fox jumps over the lazy dog"
lst = ["a", "b", "c", "d", "e"]
# either collect valid chars
result = ""
for c in user_string:
if c not in lst:
result += c
print(result)
# either remove invalid chars
result = user_string[:]
for to_remove in lst:
result = result.replace(to_remove, "")
print(result)
A:
Iterate over the list of user input characters and replace their occurrence in the string with a null character ""
user_string = "the quick brown fox jumps over the lazy dog"
chars_to_remove = ["a", "b", "c", "d", "e"]
for char in chars_to_remove:
user_string = user_string.replace(char, "")
print(user_string)
Output:
th quik rown fox jumps ovr th lzy og
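As a supplementary sketch (my addition, not part of either answer), str.translate removes all of the listed characters in a single pass over the string:
user_string = "the quick brown fox jumps over the lazy dog"
chars_to_remove = ["a", "b", "c", "d", "e"]
# str.maketrans's third argument lists the characters to delete.
table = str.maketrans("", "", "".join(chars_to_remove))
print(user_string.translate(table))
# th quik rown fox jumps ovr th lzy og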
|
Removing items in a list from a string
|
I'm teaching myself Python and I have been stuck on this issue for a few days now.
The idea is to ask a user to input a sentence and then ask them for 5 characters that they would like to remove from the sentence.
For example the sentence input by the user is:
user_string = "The quick brown fox jumps over the lazy dog"
The characters they want to remove is:
lst = ["a", "b", "c", "d", "e"]
I have reached the point where I have the user string and the user list that needs to be removed, what I'm stuck on is figuring out how to loop through the list and check if each character is present in the string and then to strip it from the string.
I have tried to use a for loop but I'm not yet proficient in loops so I might be going about it the wrong way, this is my for loop so far:
for char in user_string[:]:
if char[0] in user_string:
removed_string = user_string.strip(char)
print(removed_string)
|
[
"The method user_string.strip(char) only removes leading and padding char, so you can't call it agin and again on the initial char\nHere's 2 ways :\nuser_string = \"The quick brown fox jumps over the lazy dog\"\nlst = [\"a\", \"b\", \"c\", \"d\", \"e\"]\n\n# either collect valid chars\nresult = \"\"\nfor c in user_string:\n if c not in lst:\n result += c\nprint(result)\n\n# either remove invalid chars\nresult = user_string[:]\nfor to_remove in lst:\n result = result.replace(to_remove, \"\")\nprint(result)\n\n",
"Iterate over the list of user input characters and replace their occurrence in the string with a null character \"\"\nuser_string = \"the quick brown fox jumps over the lazy dog\"\n\nchars_to_remove = [\"a\", \"b\", \"c\", \"d\", \"e\"]\n\nfor char in chars_to_remove:\n user_string = user_string.replace(char, \"\")\n\nprint(user_string)\n\nOutput:\nth quik rown fox jumps ovr th lzy of\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074506772_python.txt
|
Q:
list indices must be integers or slices, not str Django
I'm getting this error. I am new to Django. I am trying to send mail with Django.
Traceback:
response = self.process_exception_by_middleware(e, request)
File "/home/bari/Desktop/email_send/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/bari/Desktop/email_send/Simple_Email_Send_Project/email_app/views.py", line 36, in send_mail
message_body = form.changed_data["message_body"]
TypeError: list indices must be integers or slices, not str
[05/Jun/2020 17:55:22] "POST / HTTP/1.1" 500 69463
my views.py
def send_mail(request):
form = SendMailForm(request.POST)
template = 'send_mail.html'
if form.is_valid():
subject = form.cleaned_data["subject"]
message_body = form.changed_data["message_body"]
email_address = form.cleaned_data["email_address"]
try:
mail = EmailMessage(subject, message_body, settings.EMAIL_HOST_USER, [email_address])
mail.send()
return render(request, template,
{'email_form': form, 'error_message': 'Sent mail to {}'.format(email_address)})
except:
return render(request, template,
{'email_form': form, 'error_message': 'Email Send failed. Please try again later'})
How can I solve this? Help will be highly appreciated...
A:
According to the Django documentation at this link,
form.changed_data returns the names of the fields whose data has changed from the form's initial values. As you don't have any initial parameters in your code, I think it's a typo at:
message_body = form.cleaned_data["message_body"]
A:
Try using single quotes rather than double quotes. I think by saying "slices" it means it wants characters rather than a string.
This worked for me; I don't know why either, I'm also new to Django.
so like this:
message_body = form.cleaned_data['message_body']
^ ^
|
list indices must be integers or slices, not str Django
|
I'm getting this error. I am new to Django. I am trying to send mail with Django.
Traceback:
response = self.process_exception_by_middleware(e, request)
File "/home/bari/Desktop/email_send/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/bari/Desktop/email_send/Simple_Email_Send_Project/email_app/views.py", line 36, in send_mail
message_body = form.changed_data["message_body"]
TypeError: list indices must be integers or slices, not str
[05/Jun/2020 17:55:22] "POST / HTTP/1.1" 500 69463
my views.py
def send_mail(request):
form = SendMailForm(request.POST)
template = 'send_mail.html'
if form.is_valid():
subject = form.cleaned_data["subject"]
message_body = form.changed_data["message_body"]
email_address = form.cleaned_data["email_address"]
try:
mail = EmailMessage(subject, message_body, settings.EMAIL_HOST_USER, [email_address])
mail.send()
return render(request, template,
{'email_form': form, 'error_message': 'Sent mail to {}'.format(email_address)})
except:
return render(request, template,
{'email_form': form, 'error_message': 'Email Send failed. Please try again later'})
How can I solve this? Help will be highly appreciated...
|
[
"According to Django documentation at this link\nform.changed_data returns the name of fields in the models where the data has changed provided that the initial form. As you don't have any initial parameters in your code I think it's a typo at.\nmessage_body = form.cleaned_data[\"message_body\"]\n\n",
"try using single quotes rather that double quotes.I think it saying slices means it wants characters rather that a string.\nThis worked for me i don't know why either im also new to django\nso like this:\nmessage_body = form.cleaned_data['message_body']\n ^ ^\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0062221678_django_python.txt
|
Q:
How can I have only one api value?
everyone
I started programming in Python yesterday to create a project. It consists of taking data from an API using the "Requests" library.
So far I have had no trouble getting familiar with the library, but I can't get results for what I'm specifically looking for.
My idea is just to get the name of the account.
Here is the code:
import requests
user = 'example'
payload = {'data': 'username'}
r = requests.get('https://api.imvu.com/user/user-'+user, params=payload)
json = r.json()
print(json)
My idea is that, out of all the data that can be obtained, I obtain only the name of the account. Just the name.
The code works perfectly, but it returns all of the account data.
For example:
{
"https://api.imvu.com/user/user-x?data=created": {
"data": {
"created": "2020-11-30T17:56:31Z",
"registered": "x",
"gender": "f",
"display_name": " ",
"age": "None",
"country": "None",
"state": "None",
"avatar_image": "x",
"avatar_portrait_image": "https://......",
"is_vip": false,
"is_ap": true,
"is_creator": false,
"is_adult": true,
"is_ageverified": true,
"is_staff": false,
"is_greeter": false,
"greeter_score": 0,
"badge_level": 0,
"username": "=== ONLY THIS I NEED ==="
}
}
}
As you can see, I only need one thing from all that data.
Sorry for bothering and I hope I can learn from your answers. Thanks so much for reading
A:
Unless the API allows you to specify exactly what data to return (some do), you have no control over the API's behavior nor over what data (and in what shape) a given endpoint returns. The publicly exposed API is all you have, and sometimes you get tons of useless data with basically nothing you can do about that.
A:
You might check whether there is an alternative REST method that only provides you with the username.
You cannot modify the REST response as it is sent from the server, so you need to parse the response, e.g. like here:
Extract value from json response python?
python
A:
To get specific item from json, you can simply make few changes in your code.
r = requests.get('https://api.imvu.com/user/user-'+user, params=payload)
json = r.json()
username = json["https://api.imvu.com/user/user-x?data=created"]["data"]["username"]
print(username)
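For reference, a small variant sketch that avoids hardcoding the wrapper key, assuming (my assumption) that the response always contains exactly one top-level entry:
import requests
user = "example"
r = requests.get("https://api.imvu.com/user/user-" + user,
                 params={"data": "username"})
entry = next(iter(r.json().values()))  # grab the single top-level entry
print(entry["data"]["username"])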
|
How can I have only one api value?
|
everyone
I started programming in Python yesterday to create a project. It consists of taking data from an API using the "Requests" library.
So far I have had no trouble getting familiar with the library, but I can't get results for what I'm specifically looking for.
My idea is just to get the name of the account.
Here is the code:
import requests
user = 'example'
payload = {'data': 'username'}
r = requests.get('https://api.imvu.com/user/user-'+user, params=payload)
json = r.json()
print(json)
My idea is that, out of all the data that can be obtained, I obtain only the name of the account. Just the name.
The code works perfectly, but it returns all of the account data.
For example:
{
"https://api.imvu.com/user/user-x?data=created": {
"data": {
"created": "2020-11-30T17:56:31Z",
"registered": "x",
"gender": "f",
"display_name": " ",
"age": "None",
"country": "None",
"state": "None",
"avatar_image": "x",
"avatar_portrait_image": "https://......",
"is_vip": false,
"is_ap": true,
"is_creator": false,
"is_adult": true,
"is_ageverified": true,
"is_staff": false,
"is_greeter": false,
"greeter_score": 0,
"badge_level": 0,
"username": "=== ONLY THIS I NEED ==="
}
}
}
As you can see, I only need one thing from all that data.
Sorry for bothering and I hope I can learn from your answers. Thanks so much for reading
|
[
"Unless API allows you to specify exactly what data to return (some does) then you got no control about the API behavior nor what data (and how) given endpoint returns. Publicly exposed API is all you can have in hand and sometimes you may get tons of useless data and there's basically nothing you can do about that.\n",
"you might check whether there is an alternative REST method that only provides you with the username.\nThe REST response you cannot modify as it is sent from the server, so you need to parse the response e.g. like here\nExtract value from json response python?\npython\n",
"To get specific item from json, you can simply make few changes in your code.\nr = requests.get('https://api.imvu.com/user/user-'+user, params=payload)\n\njson = r.json()\n\nusername = json[\"https://api.imvu.com/user/user-x?data=created\"][\"data\"][\"username\"]\n\nprint(username)\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python",
"python_requests"
] |
stackoverflow_0074506807_python_python_requests.txt
|
Q:
How to add rows to a dataframe when values are recursively dependent?
I have a data frame with columns a and b
df = pd.DataFrame(data = [[3, 6], [5, 10], [9, 18], [17, 34]], columns = ["a", "b"])
The structure of this data is as follows,
if a_t denotes the value of column a at row t, and the same for b_t, then
b_t = 2 * a_t
a_t = b_(t-1) - 1
See how the values of a are determined by the previous values of b and the values of b are determined by a. This recursive dependency means that I can't simply use the .shift() command.
We are given the value of a at row 0. How do I approach generating this data frame for a specified n rows efficiently, preferably without loops?
I attempted using loops. They're inefficient once the calculations get more complicated and as n increases. Are there better ways to generate recursively related columns?
A:
Here is my take on your interesting question, for instance with 3 as the value of a at row 0 and 10 as n:
import pandas as pd
A = 3
N = 10
dfs = [pd.DataFrame(data=[[A, 2 * A]], columns=["a", "b"])]
for _ in range(N - 1):
dfs = dfs + [
(dfs[-1].shift(-1, axis=1) - 1).pipe(
lambda df_: df_.fillna(df_["a"].values[0] * 2)
)
]
df = pd.concat(dfs, ignore_index=True)
Then:
print(df)
# Output
a b
0 3.0 6.0
1 5.0 10.0
2 9.0 18.0
3 17.0 34.0
4 33.0 66.0
5 65.0 130.0
6 129.0 258.0
7 257.0 514.0
8 513.0 1026.0
9 1025.0 2050.0
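For this particular recurrence a fully loop-free sketch is also possible (my addition): substituting b_t into the next row gives a_{t+1} = 2*a_t - 1, which has the closed form a_t = (a_0 - 1)*2^t + 1. This exploits the specific relation; a more complicated recursion may not admit a closed form.
import numpy as np
import pandas as pd
A = 3
N = 10
a = (A - 1) * 2 ** np.arange(N) + 1  # closed form of the recurrence
df = pd.DataFrame({"a": a, "b": 2 * a})
print(df)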
|
How to add rows to a dataframe when values are recursively dependent?
|
I have a data frame with columns a and b
df = pd.DataFrame(data = [[3, 6], [5, 10], [9, 18], [17, 34]], columns = ["a", "b"])
The structure of this data is as follows,
if a_t denotes the value of column a at row t, and the same for b_t, then
b_t = 2 * a_t
a_t = b_(t-1) - 1
See how the values of a are determined by the previous values of b and the values of b are determined by a. This recursive dependency means that I can't simply use the .shift() command.
We are given the value of a at row 0. How do I approach generating this data frame for a specified n rows efficiently, preferably without loops?
I attempted using loops. They're inefficient once the calculations get more complicated and as n increases. Are there better ways to generate recursively related columns?
|
[
"Here is my take on your interesting question, for instance with 3 as the value of a at row 0 and 10 as n:\nimport pandas as pd\n\nA = 3\nN = 10\n\ndfs = [pd.DataFrame(data=[[A, 2 * A]], columns=[\"a\", \"b\"])]\nfor _ in range(N - 1):\n dfs = dfs + [\n (dfs[-1].shift(-1, axis=1) - 1).pipe(\n lambda df_: df_.fillna(df_[\"a\"].values[0] * 2)\n )\n ]\n\ndf = pd.concat(dfs, ignore_index=True)\n\nThen:\nprint(df)\n# Output\n a b\n0 3.0 6.0\n1 5.0 10.0\n2 9.0 18.0\n3 17.0 34.0\n4 33.0 66.0\n5 65.0 130.0\n6 129.0 258.0\n7 257.0 514.0\n8 513.0 1026.0\n9 1025.0 2050.0\n\n"
] |
[
1
] |
[] |
[] |
[
"data_analysis",
"pandas",
"python",
"time_series"
] |
stackoverflow_0074468502_data_analysis_pandas_python_time_series.txt
|
Q:
Node mul_1 required broadcastable shapes
As a reference / follow-up to my question here: previously asked but no answers
I could compile my model by refraining from creating model objects, adding an additional dimension, and specifying the axis to concatenate on:
def make_model(input_shape, input_shape_feat):
base_input_layer = tf.keras.layers.Input(input_shape)
base_input_layer = normalizer(base_input_layer)
conv0 = keras.layers.Conv1D(filters=512, kernel_size=16, padding="same")(base_input_layer)
conv0 = keras.layers.BatchNormalization()(conv0)
conv0 = keras.layers.ReLU()(conv0)
conv0 = keras.layers.Dropout(0.5)(conv0)
conv1 = keras.layers.Conv1D(filters=256, kernel_size=8, padding="same")(conv0)
conv1 = keras.layers.BatchNormalization()(conv1)
conv1 = keras.layers.ReLU()(conv1)
conv1 = keras.layers.Dropout(0.5)(conv1)
conv2 = keras.layers.Conv1D(filters=128, kernel_size=8, padding="same")(conv1)
conv2 = keras.layers.BatchNormalization()(conv2)
conv2 = keras.layers.ReLU()(conv2)
conv2 = keras.layers.Dropout(0.5)(conv2)
conv3 = keras.layers.Conv1D(filters=64, kernel_size=8, padding="same")(conv2)
conv3 = keras.layers.BatchNormalization()(conv3)
conv3 = keras.layers.ReLU()(conv3)
conv3 = keras.layers.Dropout(0.5)(conv3)
conv4 = keras.layers.Conv1D(filters=32, kernel_size=4, padding="same")(conv3)
conv4 = keras.layers.BatchNormalization()(conv4)
conv4 = keras.layers.ReLU()(conv4)
conv4 = keras.layers.Dropout(0.5)(conv4)
gap = keras.layers.GlobalAveragePooling1D()(conv4)
gap = keras.layers.Flatten()(gap)
gap = keras.layers.Reshape(target_shape=(1, 32))(gap)
additional_input_layer = keras.Input(input_shape_feat)
additional_input_layer = normalizer_feat(additional_input_layer)
Y = keras.layers.Dense(32, activation='relu')(additional_input_layer)
Y = keras.layers.BatchNormalization()(Y)
Y = keras.layers.Dense(32, activation='relu')(Y)
Y = keras.layers.BatchNormalization()(Y)
Y = keras.layers.Dense(32, activation='relu')(Y)
Y = keras.layers.BatchNormalization()(Y)
Z = keras.layers.concatenate([gap, Y], axis=1)
Z = keras.layers.Dense(10, activation='relu')(Z)
Z = keras.layers.BatchNormalization()(Z)
Z = keras.layers.Dense(5, activation='relu')(Z)
Z = keras.layers.BatchNormalization()(Z)
Z = keras.layers.Dense(4, activation='relu')(Z)
Z = keras.layers.BatchNormalization()(Z)
Z = keras.layers.Dense(num_classes, activation="softmax")(Z)
return keras.Model([base_input_layer, additional_input_layer], Z)
model = make_model(input_shape=x_train.shape[1:], input_shape_feat=x_train_feat.shape[1:])
keras.utils.plot_model(model, show_shapes=True)
it actually compiles and shows me the following graph:
Now I would actually like to fit my model
epochs = 100
batch_size = 8
callbacks = [
keras.callbacks.ModelCheckpoint(
"best_model.h5", save_best_only=True, monitor="val_loss"
),
keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.5, patience=20, min_lr=0.0001
),
keras.callbacks.EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=[_get_f1], # "sparse_categorical_accuracy"
)
history = model.fit(
[x_train, x_train_feat],
y_train,
batch_size=batch_size,
epochs=epochs,
callbacks=callbacks,
validation_split=0.2,
verbose=1,
)
but I get the following error
---> 18 history = model.fit(...) Node: 'mul_1' required broadcastable shapes ...
A:
I've figured it out.
The problem arises from a mismatch in dimensions: the model's output keeps an extra axis from the axis-1 concatenation (Dense layers only act on the last axis), so it is 2-D per sample, while the target is just a 1-D class label.
The solution is to flatten before the final output layer:
Z = keras.layers.Flatten()(Z)
Z = keras.layers.Dense(num_classes, activation="softmax")(Z)
|
Node mul_1 required broadcastable shapes
|
As a reference / follow-up to my question here: previously asked but no answers
I could compile my model by refraining from creating model objects, adding an additional dimension, and specifying the axis to concatenate on:
def make_model(input_shape, input_shape_feat):
base_input_layer = tf.keras.layers.Input(input_shape)
base_input_layer = normalizer(base_input_layer)
conv0 = keras.layers.Conv1D(filters=512, kernel_size=16, padding="same")(base_input_layer)
conv0 = keras.layers.BatchNormalization()(conv0)
conv0 = keras.layers.ReLU()(conv0)
conv0 = keras.layers.Dropout(0.5)(conv0)
conv1 = keras.layers.Conv1D(filters=256, kernel_size=8, padding="same")(conv0)
conv1 = keras.layers.BatchNormalization()(conv1)
conv1 = keras.layers.ReLU()(conv1)
conv1 = keras.layers.Dropout(0.5)(conv1)
conv2 = keras.layers.Conv1D(filters=128, kernel_size=8, padding="same")(conv1)
conv2 = keras.layers.BatchNormalization()(conv2)
conv2 = keras.layers.ReLU()(conv2)
conv2 = keras.layers.Dropout(0.5)(conv2)
conv3 = keras.layers.Conv1D(filters=64, kernel_size=8, padding="same")(conv2)
conv3 = keras.layers.BatchNormalization()(conv3)
conv3 = keras.layers.ReLU()(conv3)
conv3 = keras.layers.Dropout(0.5)(conv3)
conv4 = keras.layers.Conv1D(filters=32, kernel_size=4, padding="same")(conv3)
conv4 = keras.layers.BatchNormalization()(conv4)
conv4 = keras.layers.ReLU()(conv4)
conv4 = keras.layers.Dropout(0.5)(conv4)
gap = keras.layers.GlobalAveragePooling1D()(conv4)
gap = keras.layers.Flatten()(gap)
gap = keras.layers.Reshape(target_shape=(1, 32))(gap)
additional_input_layer = keras.Input(input_shape_feat)
additional_input_layer = normalizer_feat(additional_input_layer)
Y = keras.layers.Dense(32, activation='relu')(additional_input_layer)
Y = keras.layers.BatchNormalization()(Y)
Y = keras.layers.Dense(32, activation='relu')(Y)
Y = keras.layers.BatchNormalization()(Y)
Y = keras.layers.Dense(32, activation='relu')(Y)
Y = keras.layers.BatchNormalization()(Y)
Z = keras.layers.concatenate([gap, Y], axis=1)
Z = keras.layers.Dense(10, activation='relu')(Z)
Z = keras.layers.BatchNormalization()(Z)
Z = keras.layers.Dense(5, activation='relu')(Z)
Z = keras.layers.BatchNormalization()(Z)
Z = keras.layers.Dense(4, activation='relu')(Z)
Z = keras.layers.BatchNormalization()(Z)
Z = keras.layers.Dense(num_classes, activation="softmax")(Z)
return keras.Model([base_input_layer, additional_input_layer], Z)
model = make_model(input_shape=x_train.shape[1:], input_shape_feat=x_train_feat.shape[1:])
keras.utils.plot_model(model, show_shapes=True)
it actually compiles and shows me the following graph:
Now I would actually like to fit my model
epochs = 100
batch_size = 8
callbacks = [
keras.callbacks.ModelCheckpoint(
"best_model.h5", save_best_only=True, monitor="val_loss"
),
keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.5, patience=20, min_lr=0.0001
),
keras.callbacks.EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=[_get_f1], # "sparse_categorical_accuracy"
)
history = model.fit(
[x_train, x_train_feat],
y_train,
batch_size=batch_size,
epochs=epochs,
callbacks=callbacks,
validation_split=0.2,
verbose=1,
)
but I get the following error
---> 18 history = model.fit(...) Node: 'mul_1' required broadcastable shapes ...
|
[
"I've figured it out.\nThe problem arises from the mismatch in dimensions of input (2D) and output (1D) as I have just a class label as output.\nThe solution is to flatten before final output layer\nZ = keras.layers.Flatten()(Z)\nZ = keras.layers.Dense(num_classes, activation=\"softmax\")(Z)\n\n"
] |
[
0
] |
[] |
[] |
[
"keras",
"python",
"tensorflow"
] |
stackoverflow_0074502240_keras_python_tensorflow.txt
|
Q:
Is there a way to kill uvicorn cleanly?
Is there a way to kill uvicorn cleanly?
I.e., I can type ^C at it, if it is running in the foreground on a terminal. This causes the uvivorn process to die and all of the worker processes to be cleaned up. (I.e., they go away.)
On the other hand, if uvicorn is running in the background without a terminal, then I can't figure out a way to kill it cleanly. It seems to ignore SIGTERM, SIGINT, and SIGHUP. I can kill it with SIGKILL (i.e. -9), but then the worker processes remain alive, and I have to track all the worker processes down and kill them too. This is not ideal.
I am using uvicorn with CPython 3.7.4, uvicorn version 0.11.2, and FastAPI 0.46.0 on Red Hat Enterprise Linux Server 7.3 (Maipo).
A:
That's because you're running uvicorn as your only server. uvicorn is not a process manager and, as such, it does not manage its workers' life cycle. That's why they recommend running uvicorn using gunicorn+UvicornWorker for production.
That said, you can kill the spawned workers and trigger their shutdown using the command below:
$ kill $(pgrep -P $uvicorn_pid)
The reason why this works but not the kill on the parent pid is because when you ^C something, the signal is transmitted throughout all of its spawned processes that are attached to the stdin.
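For reference, the gunicorn+UvicornWorker setup mentioned above would look something like this (main:app is a hypothetical module and app object; substitute your own):
gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker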
A:
lsof -i :8000
This will list the processes using port 8000. If you are using a different port for FastAPI, change the port number. I was using Postman and Python for FastAPI, so look for the process running python, then copy the PID (usually 4-5 digits).
Then run
kill -9 PID
Where PID is the PID number you copied
|
Is there a way to kill uvicorn cleanly?
|
Is there a way to kill uvicorn cleanly?
I.e., I can type ^C at it, if it is running in the foreground on a terminal. This causes the uvivorn process to die and all of the worker processes to be cleaned up. (I.e., they go away.)
On the other hand, if uvicorn is running in the background without a terminal, then I can't figure out a way to kill it cleanly. It seems to ignore SIGTERM, SIGINT, and SIGHUP. I can kill it with SIGKILL (i.e. -9), but then the worker processes remain alive, and I have to track all the worker processes down and kill them too. This is not ideal.
I am using uvicorn with CPython 3.7.4, uvicorn version 0.11.2, and FastAPI 0.46.0 on Red Hat Enterprise Linux Server 7.3 (Maipo).
|
[
"That's because you're running uvicorn as your only server. uvicorn is not a process manager and, as so, it does not manage its workers life cycle. That's why they recommend running uvicorn using gunicorn+UvicornWorker for production.\nThat said, you can kill the spawned workers and trigger it's shutdown using the script below:\n$ kill $(pgrep -P $uvicorn_pid)\n\nThe reason why this works but not the kill on the parent pid is because when you ^C something, the signal is transmitted throughout all of its spawned processes that are attached to the stdin.\n",
"lsof -i :8000\n\nThis will check processes using port :8000. If you are using different port for fastAPI then change the port number. I was using postman and python for fastAPI. So check process with python, then copy the PID usually 4-5 numbers.\nThen run\nkill -9 PID\n\nWhere PID is the PID number you copied\n"
] |
[
12,
0
] |
[
"In my case uvicorn managed to spawn new processes while pgrep -P was killing old ones,\nso I decided to kill the whole process group at once, just like ^C does:\nPID=\"$(pgrep -f example:app)\"\nif [[ -n \"$PID\" ]]\nthen\n PGID=\"$(ps --no-headers -p $PID -o pgid)\"\n kill -SIGINT -- -${PGID// /}\nfi\n\nEach line explained:\n\npgrep -f example:app gets the PID of the parent uvicorn ... example:app\n[[ -n \"$PID\" ]] checks this PID is not empty, to avoid further steps when uvicorn is not running\nps --no-headers -p $PID -o pgid gets PGID (Process Group ID) this PID is part of\nkill -SIGINT is similar to polite ^C (you may use kill -9 for non-polite instant kill)\n-- means the next token is a positional argument, not a named option, even if it starts with -\n-${PGID - negative value lets kill know it is PGID, not PID\n${PGID// /} removes all spaces ps added to PGID to align a column\n\n"
] |
[
-1
] |
[
"fastapi",
"python",
"python_3.x",
"uvicorn"
] |
stackoverflow_0060424390_fastapi_python_python_3.x_uvicorn.txt
|
Q:
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden while using local mode in AWS SageMaker
trainer = PyTorch(
entry_point="train.py",
source_dir= "source_dir", # directory of your training script
role=role,
framework_version="1.5.0",
py_version="py3",
instance_type= "local",
instance_count=1,
output_path=output_path,
hyperparameters = hyperparameters
)
This code is running on a SageMaker Notebook instance.
Error
Creating dsd3faq5lq-algo-1-ouews ...
Creating dsd3faq5lq-algo-1-ouews ... done
Attaching to dsd3faq5lq-algo-1-ouews
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,444 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,475 sagemaker-training-toolkit INFO No GPUs detected (normal if no gpus installed)
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,494 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,507 sagemaker_pytorch_container.training INFO Invoking user training script.
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,673 sagemaker-training-toolkit ERROR Reporting training FAILURE
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,674 sagemaker-training-toolkit ERROR framework error:
dsd3faq5lq-algo-1-ouews | Traceback (most recent call last):
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/trainer.py", line 85, in train
dsd3faq5lq-algo-1-ouews | entrypoint()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 121, in main
dsd3faq5lq-algo-1-ouews | train(environment.Environment())
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 73, in train
dsd3faq5lq-algo-1-ouews | runner_type=runner_type)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/entry_point.py", line 92, in run
dsd3faq5lq-algo-1-ouews | files.download_and_extract(uri=uri, path=environment.code_dir)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/files.py", line 131, in download_and_extract
dsd3faq5lq-algo-1-ouews | s3_download(uri, dst)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/files.py", line 167, in s3_download
dsd3faq5lq-algo-1-ouews | s3.Bucket(bucket).download_file(key, dst)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/inject.py", line 246, in bucket_download_file
dsd3faq5lq-algo-1-ouews | ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/inject.py", line 172, in download_file
dsd3faq5lq-algo-1-ouews | extra_args=ExtraArgs, callback=Callback)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/transfer.py", line 307, in download_file
dsd3faq5lq-algo-1-ouews | future.result()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/futures.py", line 106, in result
dsd3faq5lq-algo-1-ouews | return self._coordinator.result()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/futures.py", line 265, in result
dsd3faq5lq-algo-1-ouews | raise self._exception
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/tasks.py", line 255, in _main
dsd3faq5lq-algo-1-ouews | self._submit(transfer_future=transfer_future, **kwargs)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/download.py", line 343, in _submit
dsd3faq5lq-algo-1-ouews | **transfer_future.meta.call_args.extra_args
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
dsd3faq5lq-algo-1-ouews | return self._make_api_call(operation_name, kwargs)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/botocore/client.py", line 676, in _make_api_call
dsd3faq5lq-algo-1-ouews | raise error_class(parsed_response, operation_name)
dsd3faq5lq-algo-1-ouews | botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
dsd3faq5lq-algo-1-ouews |
dsd3faq5lq-algo-1-ouews | An error occurred (403) when calling the HeadObject operation: Forbidden
dsd3faq5lq-algo-1-ouews exited with code 1
1
Aborting on container exit...
The same code worked a couple of days back but it is not working from yesterday onwards.
Sagemaker Version: 2.115.0
I tried changing the SageMaker version to 2.0 but to no avail.
I also uploaded the training code to S3 and used its s3_uri in "source_dir" but still got the same error.
The problem only occurs while using instance_type = "local"
It works perfectly fine if I use a remote instance such as instance_type = "ml.c4.xlarge"
I also tried uploading and downloading objects directly from S3 using boto3 and it worked perfectly fine.
e.g.:
session = boto3.Session()
s3 = session.resource('s3')
s3_obj = s3.Object(bucket, data_key)
s3_obj.put(Body=data)
s3client = boto3.client('s3')
response = s3client.get_object(Bucket= bucket, Key= data_key)
body = response['Body'].read()
The above code works perfectly fine in the same instance.
I am not sure, but if it were just a permissions issue, wouldn't the above code also fail?
A:
The fact that the code worked until a few days ago does not make the problem reproducible.
At this point, it is strictly dependent on the settings related to your AWS account.
Looking at the error log, it appears that you do not have access permissions to the S3 bucket.
Look at this question, it talks about this very same error:
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
Check the IAM policies associated with the IAM role that the Lambda
function is using.
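One way to check this (a sketch with a hypothetical bucket and key; substitute the S3 URI from the error message) is to reproduce the failing HeadObject call with boto3 and confirm which identity is in use:
import boto3
s3 = boto3.client("s3")
# This raises a ClientError with status 403 if the active role lacks
# permission to read (HeadObject) the object.
s3.head_object(Bucket="my-bucket", Key="my-prefix/sourcedir.tar.gz")
# Confirm which role/credentials the calls are actually using.
print(boto3.client("sts").get_caller_identity()["Arn"])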
A:
I changed the code to the version below, replacing sagemaker.pytorch.PyTorch with sagemaker.estimator.Estimator and providing a reference to the relevant Docker image URI.
from sagemaker.estimator import Estimator
TRAIN_IMAGE_URI= "763104351884.dkr.ecr.ap-south-1.amazonaws.com/pytorch-training:1.12.1-cpu-py38"
trainer = Estimator(
image_uri = TRAIN_IMAGE_URI,
entry_point="train.py",
source_dir="source_dir", # directory of your training script
role=role,
base_job_name = base_job_name,
instance_type= "local",
instance_count=1,
output_path=output_path,
hyperparameters = hyperparameters
)
trainer.fit()
This seems to work perfectly in my case.
Although I still do not understand why.
|
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden while using local mode in AWS SageMaker
|
trainer = PyTorch(
entry_point="train.py",
source_dir= "source_dir", # directory of your training script
role=role,
framework_version="1.5.0",
py_version="py3",
instance_type= "local",
instance_count=1,
output_path=output_path,
hyperparameters = hyperparameters
)
This code is running on a SageMaker Notebook instance.
Error
Creating dsd3faq5lq-algo-1-ouews ...
Creating dsd3faq5lq-algo-1-ouews ... done
Attaching to dsd3faq5lq-algo-1-ouews
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,444 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,475 sagemaker-training-toolkit INFO No GPUs detected (normal if no gpus installed)
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,494 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,507 sagemaker_pytorch_container.training INFO Invoking user training script.
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,673 sagemaker-training-toolkit ERROR Reporting training FAILURE
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,674 sagemaker-training-toolkit ERROR framework error:
dsd3faq5lq-algo-1-ouews | Traceback (most recent call last):
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/trainer.py", line 85, in train
dsd3faq5lq-algo-1-ouews | entrypoint()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 121, in main
dsd3faq5lq-algo-1-ouews | train(environment.Environment())
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 73, in train
dsd3faq5lq-algo-1-ouews | runner_type=runner_type)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/entry_point.py", line 92, in run
dsd3faq5lq-algo-1-ouews | files.download_and_extract(uri=uri, path=environment.code_dir)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/files.py", line 131, in download_and_extract
dsd3faq5lq-algo-1-ouews | s3_download(uri, dst)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/files.py", line 167, in s3_download
dsd3faq5lq-algo-1-ouews | s3.Bucket(bucket).download_file(key, dst)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/inject.py", line 246, in bucket_download_file
dsd3faq5lq-algo-1-ouews | ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/inject.py", line 172, in download_file
dsd3faq5lq-algo-1-ouews | extra_args=ExtraArgs, callback=Callback)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/transfer.py", line 307, in download_file
dsd3faq5lq-algo-1-ouews | future.result()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/futures.py", line 106, in result
dsd3faq5lq-algo-1-ouews | return self._coordinator.result()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/futures.py", line 265, in result
dsd3faq5lq-algo-1-ouews | raise self._exception
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/tasks.py", line 255, in _main
dsd3faq5lq-algo-1-ouews | self._submit(transfer_future=transfer_future, **kwargs)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/download.py", line 343, in _submit
dsd3faq5lq-algo-1-ouews | **transfer_future.meta.call_args.extra_args
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
dsd3faq5lq-algo-1-ouews | return self._make_api_call(operation_name, kwargs)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/botocore/client.py", line 676, in _make_api_call
dsd3faq5lq-algo-1-ouews | raise error_class(parsed_response, operation_name)
dsd3faq5lq-algo-1-ouews | botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
dsd3faq5lq-algo-1-ouews |
dsd3faq5lq-algo-1-ouews | An error occurred (403) when calling the HeadObject operation: Forbidden
dsd3faq5lq-algo-1-ouews exited with code 1
1
Aborting on container exit...
The same code worked a couple of days back but it is not working from yesterday onwards.
Sagemaker Version: 2.115.0
I tried changing the SageMaker version to 2.0 but to no avail.
I also uploaded the training code to S3 and used its s3_uri in "source_dir" but still got the same error.
The problem only occurs while using instance_type = "local"
It works perfectly fine if I use a remote instance such as instance_type = "ml.c4.xlarge"
I also tried uploading and downloading objects directly from S3 using boto3 and it worked perfectly fine.
e.g.:
session = boto3.Session()
s3 = session.resource('s3')
s3_obj = s3.Object(bucket, data_key)
s3_obj.put(Body=data)
s3client = boto3.client('s3')
response = s3client.get_object(Bucket= bucket, Key= data_key)
body = response['Body'].read()
The above code works perfectly fine in the same instance.
I am not sure, but if it were just a permissions issue, wouldn't the above code also fail?
|
[
"The fact that the code worked until a few days ago does not make the problem reproducible.\nAt this point, it is strictly dependent on the settings related to your AWS account.\nLooking at the error log, it appears that you do not have access permissions to the S3 bucket.\nLook at this question, it talks about this very same error:\nClientError: An error occurred (403) when calling the HeadObject operation: Forbidden\n\nCheck the IAM policies associated with the IAM role that the Lambda\nfunction is using.\n\n",
"I changed the code to below, replacing Sagwmaker.pytorch.Pytorch with sagemaker.estimator.Estimator, providing reference to the relevant Docker Image URI.\nfrom sagemaker.estimator import Estimator\n\nTRAIN_IMAGE_URI= \"763104351884.dkr.ecr.ap-south-1.amazonaws.com/pytorch-training:1.12.1-cpu-py38\"\n\ntrainer = Estimator(\n image_uri = TRAIN_IMAGE_URI,\n entry_point=\"train.py\",\n source_dir=\"source_dir\", # directory of your training script\n role=role,\n base_job_name = base_job_name,\n instance_type= \"local\",\n instance_count=1,\n output_path=output_path,\n hyperparameters = hyperparameters\n)\ntrainer.fit()\n\n\nThis seems to work perfectly in my case.\nAlthough I still do not understand why.\n"
] |
[
0,
0
] |
[] |
[] |
[
"amazon_sagemaker",
"python",
"pytorch"
] |
stackoverflow_0074384817_amazon_sagemaker_python_pytorch.txt
|
Q:
Use Flask path string to refer to variables
I have the following Flask app. It renders a html page with a form for each cell of the dataframe and allows the user to edit the cells and post the form data. The app then updates the dataframe.
'''
from flask import Flask, render_template, url_for, request, redirect
import pandas
app = Flask(__name__)
df_abc = pandas.read_excel('source1.xlsx')
@app.route('/modify/', methods=['POST', 'GET'])
def modify():
    if request.method == 'POST':
        global df_abc
        df_abc = update_df_function()  # this function returns an updated df based on the POST data
        return redirect(url_for('modify'))
    else:
        table_data = df_abc.to_dict(orient='records')
        return render_template('modify.html', table_data=table_data)
'''
However, I would like the following to work:
from flask import Flask, render_template, url_for, request, redirect
import pandas
app = Flask(__name__)
df_abc = pandas.read_excel('source1.xlsx')
df_xyz = pandas.read_excel('source2.xlsx')
@app.route('/modify/<name>', methods=['POST', 'GET'])
def modify(name):
    if request.method == 'POST':
        global name
        name = update_df_function()  # this function returns an updated df based on the POST data
        return redirect(url_for('modify'))
    else:
        table_data = name.to_dict(orient='records')
        return render_template('modify.html', table_data=table_data)
'''
This app would get the variable name from the Flask path. How can I set the variable names in the modify function (e.g. global df_abc) by using the string <name> from the Flask path? I.e., posting data from www.site.com/modify/df_abc should update df_abc, ./modify/df_xyz should update df_xyz, etc.
Any help would be greatly appreciated. Thanks!
A:
The answer is using globals() with globals()[name], as shown below, and explained in this answer https://stackoverflow.com/a/1373201
from flask import Flask, render_template, url_for, request, redirect
import pandas
app = Flask(__name__)
df_abc = pandas.read_excel('source1.xlsx')
df_xyz = pandas.read_excel('source2.xlsx')
@app.route('/modify/<name>', methods=['POST', 'GET'])
def modify(name):
    if request.method == 'POST':
        globals()[name] = update_df_function()  # this function returns an updated df based on the POST data
        return redirect(url_for('modify', name=name))  # the route needs <name> to build the URL
    else:
        table_data = globals()[name].to_dict(orient='records')
        return render_template('modify.html', table_data=table_data)
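A safer variant (a sketch of mine, not part of the accepted answer, reusing the imports above) keeps the dataframes in an explicit dict, so only registered names are reachable from the URL and no module globals are mutated:

dataframes = {
    'df_abc': pandas.read_excel('source1.xlsx'),
    'df_xyz': pandas.read_excel('source2.xlsx'),
}

@app.route('/modify/<name>', methods=['POST', 'GET'])
def modify(name):
    if name not in dataframes:
        return 'Unknown dataframe', 404  # reject names that are not registered
    if request.method == 'POST':
        dataframes[name] = update_df_function()
        return redirect(url_for('modify', name=name))
    table_data = dataframes[name].to_dict(orient='records')
    return render_template('modify.html', table_data=table_data)

This avoids letting arbitrary URL strings overwrite any global name in the module.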
|
Use Flask path string to refer to variables
|
I have the following Flask app. It renders a html page with a form for each cell of the dataframe and allows the user to edit the cells and post the form data. The app then updates the dataframe.
'''
from flask import Flask, render_template, url_for, request, redirect
import pandas
app = Flask(__name__)
df_abc = pandas.read_excel('source1.xlsx')
@app.route('/modify/', methods=['POST', 'GET'])
def modify():
if request.method == 'POST':
global df_abc
df_abc = update_df_function() # this function returns an updated df based on the POST data
return redirect(url_for('modify'))
else:
table_data = df_abc.to_dict(orient='records')
return render_template('modify.html', table_data=table_data)
'''
However, I would like the following to work:
from flask import Flask, render_template, url_for, request, redirect
import pandas
app = Flask(__name__)
df_abc = pandas.read_excel('source1.xlsx')
df_xyz = pandas.read_excel('source2.xlsx')
@app.route('/modify/<name>', methods=['POST', 'GET'])
def modify(name):
if request.method == 'POST':
global name
name = update_df_function() # this function returns an updated df based on the POST data
return redirect(url_for('modify'))
else:
table_data = name.to_dict(orient='records')
return render_template('modify.html', table_data=table_data)
'''
This app would get the variable name from the Flask path. How can I set the variable names in the modify function (e.g. global df_abc) by using the string < name > from the Flask path? I.e. posting data from www.site.com/modify/df_abc should update df_abc, ./modify/df_xyz should update df_xyz etc.
Any help would be greatly appreciated. Thanks!
|
[
"The answer is using globals() with globals()[name], as shown below, and explained in this answer https://stackoverflow.com/a/1373201\nfrom flask import Flask, render_template, url_for, request, redirect\nimport pandas\n\napp = Flask(__name__)\n\ndf_abc = pandas.read_excel('source1.xlsx')\ndf_xyz = pandas.read_excel('source2.xlsx')\n\n@app.route('/modify/<name>', methods=['POST', 'GET'])\ndef modify(name):\n if request.method == 'POST':\n globals()[name] = update_df_function() # this function returns an updated df based on the POST data\n return redirect(url_for('modify'))\n \n else:\n table_data = globals()[name].to_dict(orient='records')\n return render_template('modify.html', table_data=table_data)\n\n"
] |
[
0
] |
[] |
[] |
[
"flask",
"global",
"path",
"python",
"variables"
] |
stackoverflow_0074503379_flask_global_path_python_variables.txt
|
Q:
cmake error 'the source does not appear to contain CMakeLists.txt'
I'm installing OpenCV on Ubuntu 16.04. After installing the necessary prerequisites, I used the following commands:
kvs@Hunter:~/opencv_contrib$ mkdir build
kvs@Hunter:~/opencv_contrib$ cd build
kvs@Hunter:~/opencv_contrib/build$
kvs@Hunter:~/opencv_contrib/build$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX+/usr/local -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules -D BUILD_EXAMPLES=ON ..
but it produced an error:
CMake Error: The source directory "/home/kvs/opencv_contrib" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
I used the command provided in the folder 'module' documentation. How do I solve it?
I tried the answers here at Stack Overflow and a few other questions but still can't figure it out.
Project Git repository here.
A:
You should do mkdir build and cd build while inside opencv folder, not the opencv-contrib folder. The CMakeLists.txt is there.
A:
Since you add .. after cmake, it points one directory up (just like cd ..). If you want to run cmake in the same folder as the CMakeLists.txt, use . instead of ..
A:
This reply may be late, but it may help users having a similar problem.
The opencv-contrib (available at https://github.com/opencv/opencv_contrib/releases) contains extra modules but the build procedure has to be done from core opencv (available at from https://github.com/opencv/opencv/releases) modules.
Follow below steps (assuming you are building it using CMake GUI)
Download openCV (from https://github.com/opencv/opencv/releases) and unzip it somewhere on your computer. Create build folder inside it
Download exra modules from OpenCV. (from https://github.com/opencv/opencv_contrib/releases). Ensure you download the same version.
Unzip the folder.
Open CMake
Click Browse Source and navigate to your openCV folder.
Click Browse Build and navigate to your build Folder.
Click the configure button. You will be asked how you would like to generate the files. Choose Unix-Makefile from the drop down menu and Click OK. CMake will perform some tests and return a set of red boxes appear in the CMake Window.
Search for "OPENCV_EXTRA_MODULES_PATH" and provide the path to modules folder (e.g. /Users/purushottam_d/Programs/OpenCV3_4_5_contrib/modules)
Click Configure again, then Click Generate.
Go to build folder
# cd build
# make
# sudo make install
This will install the opencv libraries on your computer.
A:
An easier way to build OpenCV from source in a step by step fashion as given in this reference: Installing OpenCV from the Source
is to,
step 1: install dependencies,
sudo apt install build-essential cmake git pkg-config libgtk-3-dev \
    libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
    libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \
    gfortran openexr libatlas-base-dev python3-dev python3-numpy \
    libtbb2 libtbb-dev libdc1394-22-dev
Step 2: create a directory opencv_build and Clone the necessary repositories as shown below,
mkdir ~/opencv_build && cd ~/opencv_build
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
step 3: cd into opencv directory, inside create another directory called build and cd into it,
cd ~/opencv_build/opencv
mkdir build && cd build
step 4: evoke Cmake to build OpenCV,
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON ..
If step 4 completes successfully, you should see lines like the following near the end of the terminal output, together with a message that the build files have been written to the directory created in step 3:
-- Configuring done
-- Generating done
step 5: start the compilation. The -j flag sets the number of parallel jobs; for example, -j6 means 6 processors are used. To verify the number of processors, type nproc in the terminal, then use that number after -j. To start this process, we use the following command:
make -j6
step 6: install OpenCV, We use,
sudo make install
then check the version Of OpenCV to verify the installation:
pkg-config --modversion opencv4
A:
I had a similar problem with another package: neither operating from a clean directory (to build out of source) nor copying the CMakeLists.txt file from the source into the clean directory worked.
I simply solved it by installing via conda.
|
cmake error 'the source does not appear to contain CMakeLists.txt'
|
I'm installing opencv in ubuntu 16.04. After installing the necessary prerequisites I used the following command:-
kvs@Hunter:~/opencv_contrib$ mkdir build
kvs@Hunter:~/opencv_contrib$ cd build
kvs@Hunter:~/opencv_contrib/build$
kvs@Hunter:~/opencv_contrib/build$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX+/usr/local -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules -D BUILD_EXAMPLES=ON ..
but it produced an error:-
CMake Error: The source directory "/home/kvs/opencv_contrib" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
I used the command provided in the folder 'module' documentation. How do I solve it?
I tried the answers here at stack-overflow and a few other question but still can't figure it out.
Project Git repository here.
|
[
"You should do mkdir build and cd build while inside opencv folder, not the opencv-contrib folder. The CMakeLists.txt is there. \n",
"Since you add .. after cmake, it will jump up and up (just like cd ..) in the directory. But if you want to run cmake under the same folder with CMakeLists.txt, please use . instead of ...\n",
"This reply may be late but it may help users having similar problem.\nThe opencv-contrib (available at https://github.com/opencv/opencv_contrib/releases) contains extra modules but the build procedure has to be done from core opencv (available at from https://github.com/opencv/opencv/releases) modules.\nFollow below steps (assuming you are building it using CMake GUI)\n\nDownload openCV (from https://github.com/opencv/opencv/releases) and unzip it somewhere on your computer. Create build folder inside it\nDownload exra modules from OpenCV. (from https://github.com/opencv/opencv_contrib/releases). Ensure you download the same version.\nUnzip the folder.\nOpen CMake\nClick Browse Source and navigate to your openCV folder.\nClick Browse Build and navigate to your build Folder.\nClick the configure button. You will be asked how you would like to generate the files. Choose Unix-Makefile from the drop down menu and Click OK. CMake will perform some tests and return a set of red boxes appear in the CMake Window.\nSearch for \"OPENCV_EXTRA_MODULES_PATH\" and provide the path to modules folder (e.g. /Users/purushottam_d/Programs/OpenCV3_4_5_contrib/modules)\nClick Configure again, then Click Generate.\nGo to build folder\n\n# cd build\n# make\n# sudo make install\n\n\nThis will install the opencv libraries on your computer.\n\n",
"An easier way to build OpenCV from source in a step by step fashion as given in this reference: Installing OpenCV from the Source\nis to,\nstep 1: install dependencies,\n sudo apt install build-essential cmake git pkg-config libgtk- \n 3-dev \\libavcodec-dev libavformat-dev libswscale-dev \n libv4l-dev \\libxvidcore-dev libx264-dev libjpeg-dev \n libpng-dev libtiff-dev \\gfortran openexr libatlas-base- \n dev python3-dev python3-numpy \\libtbb2 libtbb-dev \n libdc1394-22-dev\n\nStep 2: create a directory opencv_build and Clone the necessary repositories as shown below,\nmkdir ~/opencv_build && cd ~/opencv_build\ngit clone https://github.com/opencv/opencv.git\ngit clone https://github.com/opencv/opencv_contrib.git\n\nstep 3: cd into opencv directory, inside create another directory called build and cd into it,\ncd ~/opencv_build/opencv\nmkdir build && cd build\n\nstep 4: evoke Cmake to build OpenCV,\ncmake -D CMAKE_BUILD_TYPE=RELEASE \\\n-D CMAKE_INSTALL_PREFIX=/usr/local \\\n-D INSTALL_C_EXAMPLES=ON \\\n-D INSTALL_PYTHON_EXAMPLES=ON \\\n-D OPENCV_GENERATE_PKGCONFIG=ON \\ \n-D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules \\\n-D BUILD_EXAMPLES=ON ..\n\nIf step 4 completes successfully you should see the following line at the end of the terminal, the build has been written to the directory created in step 3, along with the following lines above this line,\nconfiguration done\ngenerating done\nstep 5: To start the compilation process where -j\nis a flag for the number of the processor inside your machine, for example -j6 means we have 6 processors available. to verify the number of processors type nproc on the terminal then use this number after -j. To start this process, we use the following command:\nmake -j6 \n\nstep 6: install OpenCV, We use,\nsudo make install\n\nthen check the version Of OpenCV to verify the installation:\npkg-config --modversion opencv4\n\n",
"I had a similar problem with another package and neither operating from a clean directory, to build from out, nor copy/paste the CMakeLists.txt file from the source to the clean directory worked.\nI simply solved installing by conda\n"
] |
[
43,
23,
8,
1,
1
] |
[] |
[] |
[
"opencv",
"python"
] |
stackoverflow_0046448682_opencv_python.txt
|
Q:
Creating a submodel using textVectorization and Embedding layers in Keras throws: 'str' object has no attribute 'base_dtype' in Keras
I'm making a multi-input Tensorflow NLP model using text and numerical data. To create this, I plan on making two submodels, one for text and the other for numerical data, and then concatenating their outputs into my main model. For the text submodel, I've been following the Keras guides for text vectorization and embeddings (https://www.tensorflow.org/tutorials/text/word_embeddings#configure_the_dataset_for_performance and https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) and used TF-IDF weighting, indexing all bigrams. This is the code for the TextVectorization layer:
# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = preprocessing.TextVectorization(output_mode="tf-idf", ngrams=2)
# Index the bigrams and learn the TF-IDF weights via `adapt()`
text_vectorizer.adapt(df['tweet_punct'].dropna().to_numpy())
print('Size of vocabulary:', len(text_vectorizer.get_vocabulary()))
vocab_size = len(text_vectorizer.get_vocabulary())
The error comes when I try to connect my vectorization layer to an embedding layer. This is the following script I've been using:
embedding_layer = Embedding(vocab_size, 100)(text_vectorizer)
LSTM_layer_1 = LSTM(128)(embedding_layer)
According to the only other question I could find related to this problem: 'str' object has no attribute 'base_dtype' error TensorFlow model the way the layers are added to each other should be right, yet running this gives me AttributeError: 'str' object has no attribute 'base_dtype' on the first line. Is there an issue with how I'm connecting the two layers? I am sort of new to Tensorflow and have never attempted to make a model this way so I am a little lost on what is going on here.
A:
Your text_vectorizer should be the first layer after your input layer. It is called like a normal layer, on a tensor; you cannot pass the layer object itself into Embedding.
import tensorflow as tf
inputs = tf.keras.Input(shape=(1,), dtype=tf.string)
x = text_vectorizer(inputs)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs, name="mymodel")
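To get the question's Embedding + LSTM pipeline working end to end, here is a minimal sketch (an assumption on my part: the vectorizer is rebuilt with output_mode="int", because the "tf-idf" mode emits dense float vectors that an Embedding layer cannot consume; the adapt corpus is hypothetical):

import tensorflow as tf

# Hypothetical corpus; adapt on your own text column instead.
text_vectorizer = tf.keras.layers.TextVectorization(output_mode="int", ngrams=2)
text_vectorizer.adapt(["a tiny example tweet", "another example tweet"])
vocab_size = len(text_vectorizer.get_vocabulary())

inputs = tf.keras.Input(shape=(1,), dtype=tf.string)
x = text_vectorizer(inputs)                         # strings -> int token ids
x = tf.keras.layers.Embedding(vocab_size, 100)(x)   # ids -> 100-d vectors
x = tf.keras.layers.LSTM(128)(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)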
|
Creating a submodel using textVectorization and Embedding layers in Keras throws: 'str' object has no attribute 'base_dtype' in Keras
|
I'm making a multi-input Tensorflow NLP model using text and numerical data. To create this, I plan on making two submodels, one for text and the other for numerical data, and then concatenating their outputs into my main model. For the text submodel, I've been following the Keras guides for text vectorization and embeddings (https://www.tensorflow.org/tutorials/text/word_embeddings#configure_the_dataset_for_performance and https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) and used TF-IDF weighting, indexing all bigrams. This is the code for the TextVectorization layer:
# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = preprocessing.TextVectorization(output_mode="tf-idf", ngrams=2)
# Index the bigrams and learn the TF-IDF weights via `adapt()`
text_vectorizer.adapt(df['tweet_punct'].dropna().to_numpy())
print('Size of vocabulary:', len(text_vectorizer.get_vocabulary()))
vocab_size = len(text_vectorizer.get_vocabulary())
The error comes when I try to connect my vectorization layer to an embedding layer. This is the following script I've been using:
embedding_layer = Embedding(vocab_size, 100)(text_vectorizer)
LSTM_layer_1 = LSTM(128)(embedding_layer)
According to the only other question I could find related to this problem: 'str' object has no attribute 'base_dtype' error TensorFlow model the way the layers are added to each other should be right, yet running this gives me AttributeError: 'str' object has no attribute 'base_dtype' on the first line. Is there an issue with how I'm connecting the two layers? I am sort of new to Tensorflow and have never attempted to make a model this way so I am a little lost on what is going on here.
|
[
"Your text_vectorizer would be your first layer after your input layer. It is used like a normal layer and not like a dictionary.\nimport tensorflow as tf\ninputs = tf.keras.Input(shape=(1,), dtype=tf.string)\nx = text_vectorizer(inputs)\nx = tf.keras.layers.Flatten()(x)\noutputs = tf.keras.layers.Dense(1)(x)\nmodel = tf.keras.Model(inputs, outputs, name=\"mymodel\")\n\n"
] |
[
0
] |
[] |
[] |
[
"deep_learning",
"jupyter_notebook",
"keras",
"python",
"tensorflow"
] |
stackoverflow_0067292093_deep_learning_jupyter_notebook_keras_python_tensorflow.txt
|
Q:
googleapiclient.errors.UnknownApiNameOrVersion: name: sheets version v4
I'm working with Google Sheets API and Pyinstaller.
My code runs just fine in the IDE, but whenever I try to run it as a .exe created by PyInstaller, it produces the following error.
I thought it could be a missing file or dependency, but I tested it in other environments and the error persists. Any thoughts?
It was supposed to update a Google Sheets file, and it does exactly that, except when I run it with PyInstaller.
A:
If you used --onedir (one-directory mode), this should work fine.
If you want the one-file option, then this issue didn't have any solution as far as I could find (I searched thoroughly).
A:
edit the file
\Lib\site-packages\googleapiclient\discovery_cache\__init__.py
add the following lines after line 26:
Old:
DISCOVERY_DOC_DIR = os.path.join(
    os.path.dirname(os.path.realpath(__file__)), "documents"
)
New:
DISCOVERY_DOC_DIR = os.path.join(
    os.path.dirname(os.path.realpath(__file__)), "documents"
)

# Detect a PyInstaller bundle (this needs "import sys" at the top of the module)
IS_INSTALLER = getattr(sys, "frozen", False) and hasattr(sys, "_MEIPASS")
if IS_INSTALLER:
    DISCOVERY_DOC_DIR = os.path.join(
        sys._MEIPASS, "documents"
    )
A:
pyinstaller --onefile main.py --collect-data googleapiclient
|
googleapiclient.errors.UnknownApiNameOrVersion: name: sheets version v4
|
I'm working with Google Sheets API and Pyinstaller.
My code runs just fine on the IDE, but whenever i try to run it on a .exe created by Pyinstaller, it provides the following error:.
I thought it could be a missing file or dependency but i tested it on other environments and the error persists. Any thoughts?
It was supposed to update a Google Sheets file and it does exactly that, except when i run it with pyinstaller.
|
[
"If you used --onedir (One Directory) this should work fine.\nIf you want a (One file) option then this issue didn't have any solution until now (I made a good search)\n",
"edit the file\n\n\\Lib\\site-packages\\googleapiclient\\discovery_cache_init_.py\n\nadd this line after line 26:\nOld:\nDISCOVERY_DOC_DIR = os.path.join(\n os.path.dirname(os.path.realpath(__file__)), \"documents\"\n )\n\nNew:\nDISCOVERY_DOC_DIR = os.path.join(\n os.path.dirname(os.path.realpath(__file__)), \"documents\"\n )\n\nIS_INSTALLER = getattr(sys, \"frozen\", False) and hasattr(sys, \"_MEIPASS\")\nif IS_INSTALLER:\n DISCOVERY_DOC_DIR = os.path.join(\n sys._MEIPASS, \"documents\"\n )\n\n",
"pyinstaller --onefile main.py --collect-data googleapiclient\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"google_sheets",
"pyinstaller",
"python",
"python_3.x"
] |
stackoverflow_0074239135_google_sheets_pyinstaller_python_python_3.x.txt
|
Q:
I'm trying to make it so my tkinter input field is first checked amongst a file and then if it's not there, is added
I've tried to make a functioning sign up page, and whilst my input can be added to the file, I first want to make sure that the input of username does not already exist in the file. The function which checks this is as follows:
forename = forename_entry.get()
surname = surname_entry.get()
username = username_entry.get()
password = password_entry.get()

with open("data.txt", "r") as file:
    end_of_file = False
    while not end_of_file:
        existent_username = file.readline().strip()
        if existent_username == username:
            additional_info_text.config(text="Username already exists, try choosing a different one",
                                        font=("Ariel", 10))
            submit_data.config(state="disabled")
            end_of_file = True
        else:
            with open("data.txt", "a") as edit_file:
                edit_file.write(forename + "\n")
                edit_file.write(surname + "\n")
                edit_file.write(username + "\n")
                edit_file.write(password + "\n")
                edit_file.write("" + "\n")
            end_of_file = True
Keep in mind that submit_data.config(state="disabled") is there to check whether my code was correctly detecting an existing username, but it did not work. I don't understand where I am going wrong, but it is most likely in my first check. Any help is appreciated.
A:
try if username in existent_username:
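A fuller sketch (my suggestion, building on the question's code rather than the one-liner above): read the whole file once, test membership, and only then append. The widget names are the ones from the question and are assumed to exist:

def sign_up():
    username = username_entry.get()
    with open("data.txt", "r") as file:
        existing = [line.strip() for line in file]
    if username in existing:
        additional_info_text.config(text="Username already exists, try choosing a different one",
                                    font=("Ariel", 10))
        submit_data.config(state="disabled")
    else:
        with open("data.txt", "a") as edit_file:
            # note: a forename or surname equal to the username would also match above;
            # storing one record per line (e.g. CSV) would avoid that
            for value in (forename_entry.get(), surname_entry.get(),
                          username, password_entry.get(), ""):
                edit_file.write(value + "\n")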
|
I'm trying to make it so my tkinter input field is first checked amongst a file and then if it's not there, is added
|
I've tried to make a functioning sign up page, and whilst my input can be added to the file, I first want to make sure that the input of username does not already exist in the file. The function which checks this is as follows:
forename = forename_entry.get()
surname = surname_entry.get()
username = username_entry.get()
password = password_entry.get()
with open("data.txt", "r") as file:
end_of_file = False
while not end_of_file:
existent_username = file.readline().strip()
if existent_username == username:
additional_info_text.config(text="Username already exists, try choosing a different one",
font=("Ariel", 10))
submit_data.config(state="disabled")
end_of_file = True
else:
with open("data.txt", "a") as edit_file:
edit_file.write(forename + "\n")
edit_file.write(surname + "\n")
edit_file.write(username + "\n")
edit_file.write(password + "\n")
edit_file.write("" + "\n")
end_of_file = True
Keep in mind that submit_data.config(state="disabled") is there to check if my code was functioning in checking if it was there or not, but it did not. I don't understand where i am going wrong, but it is most likely in my first check. Any help is appreciated.
|
[
"try if username in existent_username:\n"
] |
[
0
] |
[] |
[] |
[
"file_handling",
"python",
"tkinter"
] |
stackoverflow_0074506924_file_handling_python_tkinter.txt
|
Q:
How to setup a failure_hook to with send messages from telegram bot
I'm new to Dagster. Could you help me, please? I want to understand how to set up error notifications for an ETL process through a Telegram bot.
My code:
from dagster import (
    load_assets_from_package_module,
    asset,
    repository,
    define_asset_job,
    ScheduleDefinition
)
import pyodbc
import pandas as pd
#import package_module.etl as assets

@asset
def get_categories():
    conn = pyodbc.connect()
    df = pd.read_sql_query(
        """SQL QUERY"""
        , conn)
    return df.to_csv('path/to/file', index=False)

daily_job = define_asset_job(name="daily_refresh", selection="*")

daily_schedule = ScheduleDefinition(
    job=daily_job,
    cron_schedule="@daily",
)

@repository
def etl():
    return [
        daily_job,
        daily_schedule,
        #load_assets_from_package_module(assets),
        get_categories
    ]
A:
Hooks are not yet implemented for the "software-defined assets" you are using. You can upvote the feature request here: https://github.com/dagster-io/dagster/issues/8577
For the moment you have to come up with a workaround.
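One possibility (a sketch of mine, not an official Dagster hook API) is to catch failures inside the asset body and call the Telegram Bot API directly; BOT_TOKEN and CHAT_ID below are placeholders you must fill in:

import requests
from dagster import asset

BOT_TOKEN = "<your-bot-token>"   # placeholder
CHAT_ID = "<your-chat-id>"       # placeholder

def notify_telegram(text):
    # sendMessage is a standard Telegram Bot API method
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
    )

@asset
def get_categories():
    try:
        ...  # the original pyodbc/pandas extraction goes here
    except Exception as exc:
        notify_telegram(f"get_categories failed: {exc}")
        raise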
|
How to setup a failure_hook to with send messages from telegram bot
|
I'm new one in Dagster. Could you help me, please? I want to understand how to set up an etl process error notification through a telegram bot
My code:
from dagster import (
load_assets_from_package_module,
asset,
repository,
define_asset_job,
ScheduleDefinition
)
import pyodbc
import pandas as pd
#import package_module.etl as assets
@asset
def get_categories():
conn = pyodbc.connect()
df = pd.read_sql_query(
"""SQL QUERY"""
, conn)
return df.to_csv('path/to/file',index=False)
daily_job = define_asset_job(name="daily_refresh", selection="*")
daily_schedule = ScheduleDefinition(
job=daily_job,
cron_schedule="@daily",
)
@repository
def etl():
return [
daily_job,
daily_schedule,
#load_assets_from_package_module(assets),
get_categories
]
|
[
"Hooks are not yet implemented with the \"software defined assets\" you are using. You can upvote for the feature here : https://github.com/dagster-io/dagster/issues/8577\nYou have to imagine a workaround for the moment.\n"
] |
[
0
] |
[] |
[] |
[
"dagster",
"python",
"telegram_bot"
] |
stackoverflow_0074442272_dagster_python_telegram_bot.txt
|
Q:
What is the data type used by set in python internally?
An interviewer asked me what data type is used internally by set in Python, and what the time complexity of inserting a value into a set is.
I tried searching on Google, but I am not getting any specific answer.
I also tried to find the set class to check the data type used by set in Python, but was not able to find it.
A:
set, as well as dict, uses a hash table as its internal data structure. As described in the Python documentation:
"A set object is an unordered collection of distinct hashable objects"
Thanks to the hash table, inserting a value into a set takes O(1) time on average.
A:
Given that "a set is a collection which is unordered, unchangeable, and unindexed" and it can hold data of any type, you can guess that a set is a hash table. It is a simplified dictionary.
|
What is the data type used by set in python internally?
|
Interviewer asked me that what is the data type used by set internally in python and what is the time complexity of inserting value in set.
I tried to search on google but I am not getting any specific answer in google search.
Also, I tried to find the set class to check data type used by set in python but not able to find.
|
[
"set as well as dict use hash table as internal data type. As described in the Python documentation:\n\"A set object is an unordered collection of distinct hashable objects\"\n",
"Given that \"a set is a collection which is unordered, unchangeable, and unindexed\" and it can hold data of any type, you can guess that a set is a hash table. It is a simplified dictionary.\n"
] |
[
2,
2
] |
[] |
[] |
[
"python",
"python_3.x",
"set"
] |
stackoverflow_0074507123_python_python_3.x_set.txt
|
Q:
call a time-sensitive fail-safe function
I have a time-sensitive request, let's call it query_response.
How do I write the program so that, if query_response takes less than 2 seconds, take_action runs, and otherwise abort_action runs?
def query_response():
    print("Query Response")

def take_action():
    print("Take Action")

def abort_action():
    print("Abort Action")
A:
So basically what you can do is save the time that your function started at and subtract the start_time from the time that your function ended at. For example: if your query_response started 12:34 and ended 12:37, you would get an execution time of 3 minutes. Code:
import time
def query_response():
    start_time = time.time()  # Save the time your function started
    print("Query Response")
    time_passed = time.time() - start_time  # Save the total execution time
                                            # of your function
    if time_passed < 2: take_action()   # under 2 seconds -> take_action, per the question
    else: abort_action()

def take_action():
    print("Take Action")

def abort_action():
    print("Abort Action")
|
call a time-sensitive fail-safe function
|
I have a time-sensitive request, let's call it query_response.
How to write the program so that, if query_response take less than 2 seconds then run take_action else run abort_action.
def query_response():
print("Query Response")
def take_action():
print("Take Action")
def abort_action():
print("Abort Action")
|
[
"So basically what you can do is save the time that your function started at and subtract the start_time from the time that your function ended at. For example: if your query_response started 12:34 and ended 12:37, you would get an execution time of 3 minutes. Code:\nimport time\n\ndef query_response():\n start_time = time.time() # Save the time your function started\n print(\"Query Response\")\n time_passed = time.time() - start_time # Save the total execution time\n # of your function\n if time_passed > 2: take_action()\n else: abort_action()\n\ndef take_action():\n print(\"Take Action\") \n\ndef abort_action():\n print(\"Abort Action\")\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074506988_python_python_3.x.txt
|
Q:
Plotting Bar Chart with X, Y and Z axis in matplotlib
I have the data below.
I am trying to plot a bar chart in matplotlib using the code below:
pyplot.bar(gender, ward, width, color='orange')
pyplot.bar(count, gender, width, color='tomato')
Below is the result.
My expectation is as below, which was created in Excel.
Any suggestion on how to get the same in matplotlib would be helpful.
A:
You could achieve that very easily with seaborn.barplot.
import seaborn as sns
sns.barplot(data=df, x='Ward', y='Count', hue='Gender', palette=['orange', 'tomato']) #df is the dataframe you showed as example
Without seaborn, you could pivot your data before plotting it. Like this:
df.pivot(index='Ward', columns='Gender', values='Count').plot(kind='bar', color=['orange', 'tomato'])
plt.ylabel('Count')
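If you prefer to stay with plain matplotlib, the same grouped-bar layout can be sketched as below; the ward labels and counts are made up, since the original data is only shown as an image:

import numpy as np
from matplotlib import pyplot

wards = ['Ward 1', 'Ward 2', 'Ward 3']   # hypothetical labels
male = [10, 15, 7]                        # hypothetical counts
female = [12, 9, 11]

x = np.arange(len(wards))
width = 0.4
pyplot.bar(x - width/2, male, width, color='orange', label='Male')
pyplot.bar(x + width/2, female, width, color='tomato', label='Female')
pyplot.xticks(x, wards)
pyplot.ylabel('Count')
pyplot.legend(title='Gender')
pyplot.show()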
|
Plotting Bar Chart with X, Y and Z axis in matplotlib
|
I have below data,
Am trying to plot bar chart in matplotlib using below code,
pyplot.bar(gender, ward, width, color='orange')
pyplot.bar(count, gender, width, color='tomato')
Below is the result,
My expectation is as below which is created in excel,
Any suggestion will be helpful to get the same in matplotlib
|
[
"You could achieve that very easy with seaborn.barplot.\nimport seaborn as sns\nsns.barplot(data=df, x='Ward', y='Count', hue='Gender', palette=['orange', 'tomato']) #df is the dataframe you showed as example\n\n\nWithout seaborn, you could pivot your data before plotting it. Like this:\ndf.pivot(index='Ward', columns='Gender', values='Count').plot(kind='bar', color=['orange', 'tomato'])\nplt.ylabel('Count')\n\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074507097_matplotlib_python.txt
|
Q:
Uploading an image to a website with Playwright
I'm trying to click the button to upload an image to this website: https://prnt.sc/
But it seems like there is not even a [button], so can I even click anything? Is this even possible? Super confused.
There's lots of documentation on how to do this with selenium, but not much for Playwright unfortunately.
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
    browser = p.chromium.launch(headless=False, slow_mo=50)
    page = browser.new_page()
    page.goto("https://prnt.sc/")
    page.locator("class=uploader__browse_button").click()
I am not using page.click because there is no button.
(From what I can see)
I still get errors using this code.
I've gone through the website's code and found
<form action="https://prntscr.com/upload.php" method="post" id="fileupload">
Hopefully that helps.
A:
Just use set_input_files. Here is an example:
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
    browser = p.webkit.launch()
    page = browser.new_page()
    page.goto('https://prnt.sc/')
    # click on AGREE privacy
    page.click('button[mode="primary"]')
    # set file to form field
    page.set_input_files('input[type="file"]', 'FULL_PATH_TO_FILE_HERE')
    # wait for a link after upload
    link = page.wait_for_selector('#link-textbox', state='visible').inner_text()
    print(f'file link: {link}')
    page.screenshot(path='example.png')
    browser.close()
|
Uploading an image to a website with Playwright
|
I'm trying to click the button upload an image to this website: https://prnt.sc/
But it seems like there is not even a [button], so can I even click anything? Is this even possible? Super confused.
There's lots of documentation on how to do this with selenium, but not much for Playwright unfortunately.
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
browser = p.chromium.launch(headless=False, slow_mo=50)
page = browser.new_page()
page.goto("https://prnt.sc/")
page.locator("class=uploader__browse_button").click()
I am not using page.click because there is no button.
(From what I can see)
I still get errors using this code.
I've gone through the websites code and found
<form action="https://prntscr.com/upload.php" method="post" id="fileupload">
Hopefully that helps.
|
[
"Just use set_input_files. Here is an example:\nfrom playwright.sync_api import sync_playwright\n\nwith sync_playwright() as p:\n browser = p.webkit.launch()\n page = browser.new_page()\n page.goto('https://prnt.sc/')\n # click on AGREE privacy\n page.click('button[mode=\"primary\"]')\n # set file to form field\n page.set_input_files('input[type=\"file\"]', 'FULL_PATH_TO_FILE_HERE')\n # wait for a link after upload\n link = page.wait_for_selector('#link-textbox', state='visible').inner_text()\n print(f'file link: {link}')\n page.screenshot(path='example.png')\n browser.close()\n\n"
] |
[
1
] |
[] |
[] |
[
"playwright",
"python"
] |
stackoverflow_0074506905_playwright_python.txt
|
Q:
convert 2D list to dict where duplicate values to keys and rest of values to list
This must be done without importing any library.
x=[['A',1],['B',2],['C',3]]
y=[['A',100],['B',200],['C',300]]
z=[['A',1000],['B',2000],['C',3000]]
output must:
{'A':[1,100,1000],'B':[2,200,2000],'C':[3,300,3000]}
I tried :
dic=dict(filter(lambda i:i[0]==i[0],[x,y,z]))
So, from the data, I need the repeated first value to become the key, and the values shared under that key collected into a list.
A:
Try:
x = [["A", 1], ["B", 2], ["C", 3]]
y = [["A", 100], ["B", 200], ["C", 300]]
z = [["A", 1000], ["B", 2000], ["C", 3000]]
out = {}
for l in (x, y, z):
    for a, b in l:
        out.setdefault(a, []).append(b)

print(out)
Prints:
{"A": [1, 100, 1000], "B": [2, 200, 2000], "C": [3, 300, 3000]}
EDIT: Without dict.setdefault:
x = [["A", 1], ["B", 2], ["C", 3]]
y = [["A", 100], ["B", 200], ["C", 300]]
z = [["A", 1000], ["B", 2000], ["C", 3000]]
out = {}
for l in (x, y, z):
    for a, b in l:
        if a in out:
            out[a].append(b)
        else:
            out[a] = [b]

print(out)
A:
You can use zip to merge the lists to a list of tuples and insert them to a dict with setdefault
d = dict()
for k, v in zip(*zip(*x, *y, *z)):
    d.setdefault(k, []).append(v)

print(d)  # {'A': [1, 100, 1000], 'B': [2, 200, 2000], 'C': [3, 300, 3000]}
A:
Let's use a dictionary comprehension:
dic_={x[0]: [x[1], y[1],z[1]] for (x,y,z) in zip(x, y,z)}
Output
>>> dic_
{'A': [1, 100, 1000], 'B': [2, 200, 2000], 'C': [3, 300, 3000]}
|
convert 2D list to dict where duplicate values to keys and rest of values to list
|
As No import any library To Do This
x=[['A',1],['B',2],['C',3]]
y=[['A',100],['B',200],['C',300]]
z=[['A',1000],['B',2000],['C',3000]]
output must:
{'A':[1,100,1000],'B':[2,200,2000],'C':[3,300,3000]}
I tried :
dic=dict(filter(lambda i:i[0]==i[0],[x,y,z]))
So As Data I need first duplicated value to key , and common values to this key as list
|
[
"Try:\nx = [[\"A\", 1], [\"B\", 2], [\"C\", 3]]\ny = [[\"A\", 100], [\"B\", 200], [\"C\", 300]]\nz = [[\"A\", 1000], [\"B\", 2000], [\"C\", 3000]]\n\nout = {}\nfor l in (x, y, z):\n for a, b in l:\n out.setdefault(a, []).append(b)\n\nprint(out)\n\nPrints:\n{\"A\": [1, 100, 1000], \"B\": [2, 200, 2000], \"C\": [3, 300, 3000]}\n\n\nEDIT: Without dict.setdefault:\nx = [[\"A\", 1], [\"B\", 2], [\"C\", 3]]\ny = [[\"A\", 100], [\"B\", 200], [\"C\", 300]]\nz = [[\"A\", 1000], [\"B\", 2000], [\"C\", 3000]]\n\nout = {}\nfor l in (x, y, z):\n for a, b in l:\n if a in out:\n out[a].append(b)\n else:\n out[a] = [b]\n\nprint(out)\n\n",
"You can use zip to merge the lists to a list of tuples and insert them to a dict with setdefault\nd = dict()\nfor k, v in zip(*zip(*x, *y, *z)):\n d.setdefault(k, []).append(v)\n\nprint(d) # {'A': [1, 100, 1000], 'B': [2, 200, 2000], 'C': [3, 300, 3000]}\n\n",
"Let's use a dictionary comprehension:\ndic_={x[0]: [x[1], y[1],z[1]] for (x,y,z) in zip(x, y,z)}\n\nOutput\n>>> dic\n>>> {'A': [1, 100, 1000], 'B': [2, 200, 2000], 'C': [3, 300, 3000]}\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"dictionary",
"list",
"python"
] |
stackoverflow_0074507106_dictionary_list_python.txt
|
Q:
Two-dimensional array in Python (stupid problem)
It is so stupid. This code works
N = int(input("Input the N: "))
MATRIX = [0] * N
for i in range(N):
    MATRIX[i] = [0] * N
print(MATRIX)
print(" ")
for i in range(N):
    for j in range(N):
        z = int(input(" "))
        MATRIX[i][j] = z
print(MATRIX)
But if I change line 11: instead of z = int(input(" ")), if I write z = int(input()), it will not work.
I tried nothing; it is just stupid.
Traceback (most recent call last):
  File "C:\Users\Сырым\PycharmProjects\pythonProject\main.py", line 11, in <module>
    z = int(input())
ValueError: invalid literal for int() with base 10: ''
A:
This error occurs when you have pressed the Enter key (without typing any integer value) when prompted for input, which means you have passed an empty string to the int() function.
>>> int('')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: ''
This is the same error when we are trying to convert any string value to integer
Example - Converting character 'A' to integer
>>> int('A')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'A'
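A small guard helps if the blank line comes from an accidental extra Enter press (an assumption; a sketch, not part of the question's code): re-prompt until something non-empty is typed, then convert.

def read_int(prompt=""):
    while True:
        raw = input(prompt).strip()
        if raw:               # ignore empty lines from stray Enter presses
            return int(raw)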
|
Two-dimensional array in Python (stupid problem)
|
It is so stupid. This code works
N = int(input("Input the N: "))
MATRIX = [0] * N
for i in range(N):
MATRIX[i] = [0] * N
print(MATRIX)
print(" ")
for i in range(N):
for j in range(N):
z = int(input(" "))
MATRIX[i][j] = z
print(MATRIX)
But if I change 11 line. Instead of z = int(input(" ")), if I write z = int(input()) it will not work.
enter image description here
enter image description here
i tried nothing, it is just stupid
Traceback (most recent call last):
File "C:\Users\Сырым\PycharmProjects\pythonProject\main.py", line 11, in <module>
z = int(input())
ValueError: invalid literal for int() with base 10: ''
|
[
"This error occurs when you have pressed the Enter key (without typing any integer value) when prompted for input, which means you have passed an empty string to the int() function.\n>>> int('')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nValueError: invalid literal for int() with base 10: ''\n\nThis is the same error when we are trying to convert any string value to integer\nExample - Converting character 'A' to integer\n>>> int('A')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nValueError: invalid literal for int() with base 10: 'A'\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074506837_python.txt
|
Q:
How to find elements that match specific conditions selenium
I want to crawl data from a web page, but I don't know how to get data from these tags. Please help me.
from selenium import webdriver
import pandas as pd
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
browser = webdriver.Chrome(executable_path="./chromedriver.exe")
idx = 0
data = []
title = []
#print("Process 300 days from {}-{}-{}".format(current_date.day, current_date.month, current_date.year))
url = 'https://24hmoney.vn/stock/HAG/financial-report'
web = browser.get(url)
# Click the "theo quy" (quarterly) button
btn1 = browser.find_element(By.XPATH, "/html/body/div[1]/div/div/div[2]/div[1]/div[4]/div[2]/div[1]")
btn1.click()
# Click "hien thi tang giam so voi cung ki" (show increase/decrease vs. the same period)
#btn2 = browser.find_element(By.XPATH,"/html/body/div[1]/div/div/div[2]/div[1]/div[4]/div[3]/div[1]/span")
#btn2.click()
lai = browser.find_elements(By.CSS_SELECTOR, 'p')
for raw in lai:
    data.append(raw.text)
    #print(raw.text)

tieude = browser.find_elements(By.CLASS_NAME, 'sticky-col.first-col')
for raw2 in tieude:
    title.append(raw2.text)
    print(raw2.text)
#df = pd.DataFrame(data,columns=["HAG"])
df = pd.DataFrame(title,columns=["Tieude"])
df.to_csv("HAG.csv",index=False)
#a = input()
A:
Maybe the following code will solve your issue?
import requests
import pandas as pd
from bs4 import BeautifulSoup as bs
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
}
url = 'https://24hmoney.vn/stock/HAG/financial-report'
r = requests.get(url, headers=headers)
soup = bs(r.text, 'html.parser')
table = soup.select_one('div[class="financial-report-box-content"] table')
df = pd.read_html(str(table))[0]
print(df)
Result in terminal:
Tiêu đề Q3/22 % Q3/21 Q2/22 % Q2/21 Q1/22 % Q1/21 Q4/21 % Q4/20 Q3/21 % Q3/20 Q2/21 % Q2/20 Q1/21 % Q1/20 Q4/20 % Q4/19
0 Doanh thu 1441.4 160.1% 1233.6 125.2% 802.6 182.3% 743.7 -19.2% 554.1 -20.9% 547.7 -15.4% 284.4 -66% 920.4 51%
1 Các khoản giảm trừ NaN NaN 6.2 -81.3% NaN NaN NaN NaN NaN NaN 3.4 68.3% 18.5 -678.5% 6.7 5.7%
2 Doanh thu thuần 1441.4 160.1% 1227.4 125.5% 802.6 201.9% 743.7 -18.6% 554.1 -20.9% 544.3 -14.6% 265.8 -68.1% 913.7 51.6%
3 Giá vốn hàng bán 1160.6 -207.3% 1051.7 -115.3% 512.8 -140.3% 511.5 52.7% 377.6 50.1% 488.4 3% 213.4 61.3% 1082.2 -74.1%
4 Lợi nhuận gộp 280.8 59.1% 175.7 214.4% 289.8 452.8% 232.3 237.9% 176.5 414.3% 55.9 -58.1% 52.4 -81.5% -168.5 -783.9%
5 Thu nhập tài chính 117.5 -10.9% 95.4 -24.8% 192.4 -44.9% 127.6 -83.7% 131.9 -5.3% 126.8 -34.3% 349.4 122.3% 783.6 203.9%
6 Chi phí tài chính 166.0 76% 875.9 -410.8% 185.9 13.4% 254.7 150.6% 691.5 -161.9% 171.5 -39.8% 214.8 33.6% 503.4 -44.1%
7 Chi phí tiền lãi 166.9 -0.2% 223.6 -35.6% 162.7 18.6% 167.4 66.3% 166.5 21.9% 165.0 25.9% 199.8 25.3% 496.9 -72.7%
8 Lãi/lỗ từ công ty liên doanh NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN -7.6 -1,149% 1.8 -18.8% 4.9 -86.8%
9 Chi phí bán hàng 58.6 -51.5% 90.5 -191.7% 52.1 -202.7% 42.3 34.5% 38.7 47.6% 31.0 76.5% 17.2 79.6% 64.6 14.5%
10 Chi phí quản lý doanh nghiệp 181.1 -60.4% 950.6 322.9% 5.2 101.4% 404.5 56% 457.0 395.4% 224.8 424.4% 367.3 -272.5% 919.8 -890.1%
11 Lãi/lỗ từ hoạt động kinh doanh 354.8 909% 255.2 29.3% 249.3 227.4% 167.7 119.3% 35.2 108.6% 197.3 10,174% -195.7 -202.8% -867.8 -258.5%
12 Thu nhập khác 2.8 35% 24.8 455.4% 5.7 -81.7% 44.0 66.5% 2.1 7.2% 4.5 -85.2% 31.1 68.9% 26.5 140.4%
13 Chi phí khác -7.4 56.3% -56.0 66.7% -15.3 81.7% -143.8 78.8% -17.0 89.7% -168.3 -98.2% -83.5 -153.3% -679.1 -141.2%
14 Thu nhập khác, ròng -4.6 69.2% -31.2 81% -9.6 81.7% -99.8 84.7% -14.9 90.8% -163.8 -198.8% -52.4 -260.2% -652.7 -141.2%
15 Lãi/lỗ từ công ty liên doanh NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
16 LỢI NHUẬN TRƯỚC THUẾ 350.2 1,629% 224.0 569.6% 239.8 196.6% 67.9 104.5% 20.2 103.5% 33.5 163.2% -248.1 -213.3% -1520.5 -196.6%
17 Thuế thu nhập doanh nghiệp – hiện thời 1.2 NaN 1.4 -333.9% 0.2 NaN 0.0 96.6% NaN NaN 0.3 -76% NaN NaN 1.1 -27.7%
18 Thuế thu nhập doanh nghiệp – hoãn lại 20.5 1,267% 42.2 -3.9% 18.4 -89.7% 28.6 801.7% 1.5 -19.5% 43.9 1,847% 179.4 15,941% 4.1 -102.4%
19 Chi phí thuế thu nhập doanh nghiệp 19.4 1,191% 40.8 -6.3% 18.2 -89.8% 28.6 656.6% 1.5 -13.8% 43.6 1,718% 179.4 18,238% 5.1 -103%
20 LỢI NHUẬN SAU THUẾ TNDN 369.5 1,599% 264.9 243.7% 258.0 475.2% 96.5 106.3% 21.7 103.8% 77.1 238.6% -68.8 12.1% -1525.6 -344.5%
21 Lợi ích của cổ đông thiểu số 8.8 548.1% -14.9 -3,505% 8.0 177% -45.7 87% -2.0 99.5% 0.4 100.2% -10.3 -14.7% -352.1 11.8%
22 Lợi nhuận của Cổ đông của Công ty mẹ 360.7 1,421% 279.7 265% 250.0 528% 142.2 112.1% 23.7 112.7% 76.6 -56.6% -58.4 15.6% -1173.5 -2,193%
23 EPS 4QGN (đ) 389.0 1,396% 301.0 262.6% 270.0 528.6% 153.0 112.1% 26.0 112.9% 83.0 -56.5% -63.0 16% -1265.0 -540.8%
|
How to find elements that match specific conditions selenium
|
i want to crawl data in web, but i don't know how to get data from these tags
i don't know how to get data from these tags. Please help me
from selenium import webdriver
import pandas as pd
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
browser = webdriver.Chrome(executable_path="./chromedriver.exe")
idx = 0
data = []
title = []
#print("Process 300 days from {}-{}-{}".format(current_date.day, current_date.month, current_date.year))
url = 'https://24hmoney.vn/stock/HAG/financial-report'
web = browser.get(url)
#Click nut theo quy
btn1 = browser.find_element(By.XPATH, "/html/body/div[1]/div/div/div[2]/div[1]/div[4]/div[2]/div[1]")
btn1.click()
#click hien thi tang giam so voi cung ki
#btn2 = browser.find_element(By.XPATH,"/html/body/div[1]/div/div/div[2]/div[1]/div[4]/div[3]/div[1]/span")
#btn2.click()
lai = browser.find_elements(By.CSS_SELECTOR,'p')
for raw in lai:
data.append(raw.text)
#print(raw.text)
tieude = browser.find_elements(By.CLASS_NAME,'sticky-col.first-col')
for raw2 in tieude:
title.append(raw2.text)
print(raw2.text)
#df = pd.DataFrame(data,columns=["HAG"])
df = pd.DataFrame(title,columns=["Tieude"])
df.to_csv("HAG.csv",index=False)
#a = input()
|
[
"Maybe the following code will solve your issue?\nimport requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup as bs\n\npd.set_option('display.max_columns', None)\npd.set_option('display.max_colwidth', None)\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'\n}\n\nurl = 'https://24hmoney.vn/stock/HAG/financial-report'\n\nr = requests.get(url, headers=headers)\nsoup = bs(r.text, 'html.parser')\ntable = soup.select_one('div[class=\"financial-report-box-content\"] table')\ndf = pd.read_html(str(table))[0]\nprint(df)\n\nResult in terminal:\nTiêu đề Q3/22 % Q3/21 Q2/22 % Q2/21 Q1/22 % Q1/21 Q4/21 % Q4/20 Q3/21 % Q3/20 Q2/21 % Q2/20 Q1/21 % Q1/20 Q4/20 % Q4/19\n0 Doanh thu 1441.4 160.1% 1233.6 125.2% 802.6 182.3% 743.7 -19.2% 554.1 -20.9% 547.7 -15.4% 284.4 -66% 920.4 51%\n1 Các khoản giảm trừ NaN NaN 6.2 -81.3% NaN NaN NaN NaN NaN NaN 3.4 68.3% 18.5 -678.5% 6.7 5.7%\n2 Doanh thu thuần 1441.4 160.1% 1227.4 125.5% 802.6 201.9% 743.7 -18.6% 554.1 -20.9% 544.3 -14.6% 265.8 -68.1% 913.7 51.6%\n3 Giá vốn hàng bán 1160.6 -207.3% 1051.7 -115.3% 512.8 -140.3% 511.5 52.7% 377.6 50.1% 488.4 3% 213.4 61.3% 1082.2 -74.1%\n4 Lợi nhuận gộp 280.8 59.1% 175.7 214.4% 289.8 452.8% 232.3 237.9% 176.5 414.3% 55.9 -58.1% 52.4 -81.5% -168.5 -783.9%\n5 Thu nhập tài chính 117.5 -10.9% 95.4 -24.8% 192.4 -44.9% 127.6 -83.7% 131.9 -5.3% 126.8 -34.3% 349.4 122.3% 783.6 203.9%\n6 Chi phí tài chính 166.0 76% 875.9 -410.8% 185.9 13.4% 254.7 150.6% 691.5 -161.9% 171.5 -39.8% 214.8 33.6% 503.4 -44.1%\n7 Chi phí tiền lãi 166.9 -0.2% 223.6 -35.6% 162.7 18.6% 167.4 66.3% 166.5 21.9% 165.0 25.9% 199.8 25.3% 496.9 -72.7%\n8 Lãi/lỗ từ công ty liên doanh NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN -7.6 -1,149% 1.8 -18.8% 4.9 -86.8%\n9 Chi phí bán hàng 58.6 -51.5% 90.5 -191.7% 52.1 -202.7% 42.3 34.5% 38.7 47.6% 31.0 76.5% 17.2 79.6% 64.6 14.5%\n10 Chi phí quản lý doanh nghiệp 181.1 -60.4% 950.6 322.9% 5.2 101.4% 404.5 56% 457.0 395.4% 224.8 424.4% 367.3 -272.5% 919.8 -890.1%\n11 Lãi/lỗ từ hoạt động kinh doanh 354.8 909% 255.2 29.3% 249.3 227.4% 167.7 119.3% 35.2 108.6% 197.3 10,174% -195.7 -202.8% -867.8 -258.5%\n12 Thu nhập khác 2.8 35% 24.8 455.4% 5.7 -81.7% 44.0 66.5% 2.1 7.2% 4.5 -85.2% 31.1 68.9% 26.5 140.4%\n13 Chi phí khác -7.4 56.3% -56.0 66.7% -15.3 81.7% -143.8 78.8% -17.0 89.7% -168.3 -98.2% -83.5 -153.3% -679.1 -141.2%\n14 Thu nhập khác, ròng -4.6 69.2% -31.2 81% -9.6 81.7% -99.8 84.7% -14.9 90.8% -163.8 -198.8% -52.4 -260.2% -652.7 -141.2%\n15 Lãi/lỗ từ công ty liên doanh NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n16 LỢI NHUẬN TRƯỚC THUẾ 350.2 1,629% 224.0 569.6% 239.8 196.6% 67.9 104.5% 20.2 103.5% 33.5 163.2% -248.1 -213.3% -1520.5 -196.6%\n17 Thuế thu nhập doanh nghiệp – hiện thời 1.2 NaN 1.4 -333.9% 0.2 NaN 0.0 96.6% NaN NaN 0.3 -76% NaN NaN 1.1 -27.7%\n18 Thuế thu nhập doanh nghiệp – hoãn lại 20.5 1,267% 42.2 -3.9% 18.4 -89.7% 28.6 801.7% 1.5 -19.5% 43.9 1,847% 179.4 15,941% 4.1 -102.4%\n19 Chi phí thuế thu nhập doanh nghiệp 19.4 1,191% 40.8 -6.3% 18.2 -89.8% 28.6 656.6% 1.5 -13.8% 43.6 1,718% 179.4 18,238% 5.1 -103%\n20 LỢI NHUẬN SAU THUẾ TNDN 369.5 1,599% 264.9 243.7% 258.0 475.2% 96.5 106.3% 21.7 103.8% 77.1 238.6% -68.8 12.1% -1525.6 -344.5%\n21 Lợi ích của cổ đông thiểu số 8.8 548.1% -14.9 -3,505% 8.0 177% -45.7 87% -2.0 99.5% 0.4 100.2% -10.3 -14.7% -352.1 11.8%\n22 Lợi nhuận của Cổ đông của Công ty mẹ 360.7 1,421% 279.7 265% 250.0 528% 142.2 112.1% 23.7 112.7% 76.6 -56.6% -58.4 15.6% -1173.5 
-2,193%\n23 EPS 4QGN (đ) 389.0 1,396% 301.0 262.6% 270.0 528.6% 153.0 112.1% 26.0 112.9% 83.0 -56.5% -63.0 16% -1265.0 -540.8%\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium"
] |
stackoverflow_0074507129_python_selenium.txt
|
Q:
How to get absolute file path of folder from user input in python? The input gets added at the end of path
import os
print("enter folder name")
FolderName = input()
flag = os.path.isabs(FolderName)
if flag == False:
    path = os.path.abspath(FolderName)
    print("The absolute path is: ", path)
What am I doing wrong here? Let's say the Folder name input is Neon.
The code output gives C:\Users\Desktop\Codes\Neon\Neon
Instead what I want is: C:\Users\Desktop\Codes\Neon\
A:
The os.path.abspath function takes the user's current working directory and the input argument and merges them together.
So if your input is 'Neon' and your current working directory is C:\Users\Desktop\Codes\Neon, then the output is C:\Users\Desktop\Codes\Neon\Neon.
Likewise, if your input is fkdjfkjdsk, then the output would be C:\Users\Desktop\Codes\Neon\fkdjfkjdsk.
If you are looking for a way to get the absolute path of the current directory you can use:
os.getcwd()
For the official definition:
os.path.abspath(path)
Return a normalized absolutized version of the pathname path. On most platforms, this is equivalent to calling the function normpath() as follows: normpath(join(os.getcwd(), path)).
A:
You are probably running your code from the C:\Users\Desktop\Codes\Neon\ directory.
Therefore, when you run os.path.abspath("Neon"), the function is assuming you are trying to refer to a file in the current directory, and returns C:\Users\Desktop\Codes\Neon\Neon.
If you want to have the absolute path of the current directory, use:
os.path.abspath(".")
A:
Most of the functions inside the path module of the os library don't perform file/directory presence checks before performing operations; i.e., even if you enter the path of a filesystem object that doesn't exist, the function can still return a result for it.
The current working directory of your Python file is not the one you expect.
Previous answers have covered this behavior of the abspath function. The following code would produce the desired output (only for your case).
import os
os.chdir(r"C:\Users\Desktop\Codes")
print("enter folder name")
FolderName = input()
flag = os.path.isabs(FolderName)
if flag == False:
    path = os.path.abspath(FolderName)
    print("The absolute path is: ", path)
But if you want to be sure, first display the current working directory to confirm that the parent directory is the correct one. Also, include a directory-presence check (such as os.path.isdir) in the code to ensure that the directory name provided as input actually exists.
|
How to get absolute file path of folder from user input in python? The input gets added at the end of path
|
import os
print("enter folder name")
FolderName = input()
flag = os.path.isabs(FolderName)
if flag == False:
path = os.path.abspath(FolderName)
print("The absolute path is: " ,path)
What am I doing wrong here? Let's say the Folder name input is Neon.
The code output gives C:\Users\Desktop\Codes\Neon\Neon
Instead what I want is: C:\Users\Desktop\Codes\Neon\
|
[
"The os.path.abspath function normalizes the users current working directory and the input argument and then merges them together.\nSo if your input is 'Neon' and your current working directory is C:\\Users\\Desktop\\Codes\\Neon, then the output is C:\\Users\\Desktop\\Neon\\Neon.\nLikewise if your input is fkdjfkjdsk then the output would be C:\\Users\\Desktop\\Neon\\fkdjfkjdsk.\nIf you are looking for a way to get the absolute path of the current directory you can use:\nos.getcwd()\n\nFor the official definition:\nos.path.abspath(path)\n\nReturn a normalized absolutized version of the pathname path. On most platforms, this is equivalent to calling the function normpath() as follows: normpath(join(os.getcwd(), path)).\n\n",
"You are probably running your code when you are at the C:\\Users\\Desktop\\Codes\\Neon\\ directory\nTherefore, when you run os.path.abspath(\"Neon\"), the function is assuming you are trying to refer to a file in the current directory, and returns C:\\Users\\Desktop\\Codes\\Neon\\Neon.\nIf you want to have the absolute path of the current directory, use:\nos.path.abspath(\".\")\n\n",
"Most of the function inside the path module of the os library doesn't perform file/directory presence checks before performing operations. i.e., It is probable that if you enter the path to a filesystem object that doesn't exist, the function would still return a result for it.\nYour current working directory of the Python file is not the one you expect.\nPrevious answers have covered the liability of the abspath function. The following code would produce the desired output (only for your case).\nimport os\n\nos.chdir(r\"C:\\Users\\Desktop\\Codes\")\n\nprint(\"enter folder name\")\nFolderName = input()\n\nflag = os.path.isabs(FolderName)\n\nif flag == False:\n path = os.path.abspath(FolderName)\n print(\"The absolute path is: \" ,path)\n\nBut if you want to be sure, first display the current working directory to assure that the parent directory is the correct one. Also, include some directory presence functions within the code (such as isdir) in the code to assure that the directory name provided as input is real.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074507067_python_python_3.x.txt
|
Q:
How to subset a dataframe with given pairs or row indices and column labels?
I am given a dataframe (df_path), shown below, whose index corresponds to the index of the dataframe (df_from) I want to copy values from, and whose values name the column of df_from I want to copy each value from.
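For reference, one way to do such a (row, column-label) lookup, sketched here with toy labels rather than the timestamp data that follows:

import numpy as np
import pandas as pd

# Toy frames sharing an index; df_path cells name columns of df_from.
df_from = pd.DataFrame({0: [10, 20], 1: [30, 40]}, index=['a', 'b'])
df_path = pd.DataFrame({0: [0, 1], 1: [1, 0]}, index=['a', 'b'])

result = df_path.copy()
for col in df_path.columns:
    pos = df_from.columns.get_indexer(df_path[col])  # column label -> position
    result[col] = df_from.to_numpy()[np.arange(len(df_from)), pos]
print(result)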
**df_path**
{0: {Timestamp('2017-04-05 10:18:02.095000'): 0,
Timestamp('2017-04-05 10:35:03.740000'): 0,
Timestamp('2017-04-05 10:57:18.364000'): 0,
Timestamp('2017-04-05 11:10:09.142000'): 0,
Timestamp('2017-04-07 09:41:11.167000'): 0,
Timestamp('2017-04-07 09:47:22.457000'): 0,
Timestamp('2017-04-07 09:51:22.037000'): 0,
Timestamp('2017-04-07 09:54:59.803000'): 0,
Timestamp('2017-04-07 09:58:49.512000'): 0,
Timestamp('2017-04-07 10:05:45.506000'): 0,
Timestamp('2017-04-07 10:06:38.567000'): 0,
Timestamp('2017-04-24 09:32:06.261000'): 0,
Timestamp('2017-05-10 09:36:56.943000'): 0,
Timestamp('2017-05-29 09:31:32.211000'): 0,
Timestamp('2017-06-19 09:33:56.391000'): 0,
Timestamp('2017-06-19 09:36:11.743000'): 0,
Timestamp('2017-06-28 10:06:58.320000'): 1,
Timestamp('2017-06-28 10:12:04.859000'): 1,
Timestamp('2017-07-10 09:58:36.082000'): 1,
Timestamp('2017-07-11 09:43:03.421000'): 1,
Timestamp('2017-07-12 09:27:27.504000'): 2,
Timestamp('2017-07-12 09:31:16.304000'): 2,
Timestamp('2017-07-12 09:32:47.592000'): 2,
Timestamp('2017-07-12 09:33:43.216000'): 2,
Timestamp('2017-07-26 09:19:23.656000'): 2,
Timestamp('2017-07-26 09:32:07.647000'): 2,
Timestamp('2017-07-26 09:34:16.047000'): 2,
Timestamp('2017-07-26 09:36:29.241000'): 2,
Timestamp('2017-07-26 09:41:14.152000'): 2,
Timestamp('2017-07-26 09:45:01.198000'): 2,
Timestamp('2017-07-26 09:49:06.674000'): 2,
Timestamp('2017-08-07 09:17:59.231000'): 2,
Timestamp('2017-08-07 09:25:57.865000'): 2,
Timestamp('2017-08-07 09:31:29.751000'): 2,
Timestamp('2017-08-07 09:35:27.062000'): 2,
Timestamp('2017-08-15 09:23:40.111000'): 3,
Timestamp('2017-08-16 09:17:48.032000'): 3,
Timestamp('2017-08-16 09:20:37.396000'): 3,
Timestamp('2017-08-16 09:26:34.631000'): 3,
Timestamp('2017-08-16 10:01:35.525000'): 3,
Timestamp('2017-08-16 10:06:06.222000'): 3,
Timestamp('2017-08-16 10:38:44.717000'): 3,
Timestamp('2017-08-17 09:17:18.951000'): 3,
Timestamp('2017-08-17 09:21:33.846000'): 3,
Timestamp('2017-08-17 09:28:14.337000'): 3,
Timestamp('2017-08-17 09:30:42.855000'): 3,
Timestamp('2017-08-17 09:31:35.894000'): 3,
Timestamp('2017-08-17 09:33:15.819000'): 3,
Timestamp('2017-08-17 09:35:23.751000'): 3,
Timestamp('2017-08-17 09:38:45.211000'): 3,
Timestamp('2017-08-17 09:41:25.251000'): 3,
Timestamp('2017-08-17 09:45:21.319000'): 3,
Timestamp('2017-08-17 09:47:55.097000'): 3,
Timestamp('2017-08-17 09:50:17.234000'): 3,
Timestamp('2017-08-17 09:51:49.333000'): 3,
Timestamp('2017-08-18 10:13:44.958000'): 4,
Timestamp('2017-08-18 10:19:25.371000'): 4,
Timestamp('2017-08-18 10:25:33.984000'): 4,
Timestamp('2017-08-18 10:31:29.450000'): 4,
Timestamp('2017-08-18 10:42:42.320000'): 4,
Timestamp('2017-08-18 10:53:34.495000'): 4,
Timestamp('2017-08-29 09:38:10.660000'): 4,
Timestamp('2017-08-29 09:42:50.701000'): 4,
Timestamp('2017-08-29 09:45:21.301000'): 4,
Timestamp('2017-09-04 09:34:11.032000'): 4,
Timestamp('2017-09-07 09:34:48.306000'): 4,
Timestamp('2017-09-07 09:45:30.120000'): 4,
Timestamp('2017-09-11 09:16:19.693000'): 4,
Timestamp('2017-09-11 09:19:50.156000'): 4,
Timestamp('2017-09-11 09:30:40.390000'): 4},
1: {Timestamp('2017-04-05 10:18:02.095000'): 1,
Timestamp('2017-04-05 10:35:03.740000'): 1,
Timestamp('2017-04-05 10:57:18.364000'): 1,
Timestamp('2017-04-05 11:10:09.142000'): 1,
Timestamp('2017-04-07 09:41:11.167000'): 1,
Timestamp('2017-04-07 09:47:22.457000'): 1,
Timestamp('2017-04-07 09:51:22.037000'): 1,
Timestamp('2017-04-07 09:54:59.803000'): 1,
Timestamp('2017-04-07 09:58:49.512000'): 1,
Timestamp('2017-04-07 10:05:45.506000'): 1,
Timestamp('2017-04-07 10:06:38.567000'): 1,
Timestamp('2017-04-24 09:32:06.261000'): 1,
Timestamp('2017-05-10 09:36:56.943000'): 5,
Timestamp('2017-05-29 09:31:32.211000'): 5,
Timestamp('2017-06-19 09:33:56.391000'): 5,
Timestamp('2017-06-19 09:36:11.743000'): 5,
Timestamp('2017-06-28 10:06:58.320000'): 5,
Timestamp('2017-06-28 10:12:04.859000'): 5,
Timestamp('2017-07-10 09:58:36.082000'): 5,
Timestamp('2017-07-11 09:43:03.421000'): 5,
Timestamp('2017-07-12 09:27:27.504000'): 6,
Timestamp('2017-07-12 09:31:16.304000'): 6,
Timestamp('2017-07-12 09:32:47.592000'): 6,
Timestamp('2017-07-12 09:33:43.216000'): 6,
Timestamp('2017-07-26 09:19:23.656000'): 6,
Timestamp('2017-07-26 09:32:07.647000'): 6,
Timestamp('2017-07-26 09:34:16.047000'): 6,
Timestamp('2017-07-26 09:36:29.241000'): 6,
Timestamp('2017-07-26 09:41:14.152000'): 6,
Timestamp('2017-07-26 09:45:01.198000'): 6,
Timestamp('2017-07-26 09:49:06.674000'): 6,
Timestamp('2017-08-07 09:17:59.231000'): 6,
Timestamp('2017-08-07 09:25:57.865000'): 6,
Timestamp('2017-08-07 09:31:29.751000'): 6,
Timestamp('2017-08-07 09:35:27.062000'): 6,
Timestamp('2017-08-15 09:23:40.111000'): 7,
Timestamp('2017-08-16 09:17:48.032000'): 7,
Timestamp('2017-08-16 09:20:37.396000'): 7,
Timestamp('2017-08-16 09:26:34.631000'): 7,
Timestamp('2017-08-16 10:01:35.525000'): 7,
Timestamp('2017-08-16 10:06:06.222000'): 7,
Timestamp('2017-08-16 10:38:44.717000'): 7,
Timestamp('2017-08-17 09:17:18.951000'): 7,
Timestamp('2017-08-17 09:21:33.846000'): 7,
Timestamp('2017-08-17 09:28:14.337000'): 7,
Timestamp('2017-08-17 09:30:42.855000'): 7,
Timestamp('2017-08-17 09:31:35.894000'): 7,
Timestamp('2017-08-17 09:33:15.819000'): 7,
Timestamp('2017-08-17 09:35:23.751000'): 7,
Timestamp('2017-08-17 09:38:45.211000'): 7,
Timestamp('2017-08-17 09:41:25.251000'): 7,
Timestamp('2017-08-17 09:45:21.319000'): 7,
Timestamp('2017-08-17 09:47:55.097000'): 7,
Timestamp('2017-08-17 09:50:17.234000'): 7,
Timestamp('2017-08-17 09:51:49.333000'): 7,
Timestamp('2017-08-18 10:13:44.958000'): 8,
Timestamp('2017-08-18 10:19:25.371000'): 8,
Timestamp('2017-08-18 10:25:33.984000'): 8,
Timestamp('2017-08-18 10:31:29.450000'): 8,
Timestamp('2017-08-18 10:42:42.320000'): 8,
Timestamp('2017-08-18 10:53:34.495000'): 8,
Timestamp('2017-08-29 09:38:10.660000'): 8,
Timestamp('2017-08-29 09:42:50.701000'): 8,
Timestamp('2017-08-29 09:45:21.301000'): 8,
Timestamp('2017-09-04 09:34:11.032000'): 8,
Timestamp('2017-09-07 09:34:48.306000'): 8,
Timestamp('2017-09-07 09:45:30.120000'): 8,
Timestamp('2017-09-11 09:16:19.693000'): 8,
Timestamp('2017-09-11 09:19:50.156000'): 8,
Timestamp('2017-09-11 09:30:40.390000'): 8},
2: {Timestamp('2017-04-05 10:18:02.095000'): 2,
Timestamp('2017-04-05 10:35:03.740000'): 2,
Timestamp('2017-04-05 10:57:18.364000'): 2,
Timestamp('2017-04-05 11:10:09.142000'): 2,
Timestamp('2017-04-07 09:41:11.167000'): 2,
Timestamp('2017-04-07 09:47:22.457000'): 2,
Timestamp('2017-04-07 09:51:22.037000'): 2,
Timestamp('2017-04-07 09:54:59.803000'): 2,
Timestamp('2017-04-07 09:58:49.512000'): 2,
Timestamp('2017-04-07 10:05:45.506000'): 2,
Timestamp('2017-04-07 10:06:38.567000'): 2,
Timestamp('2017-04-24 09:32:06.261000'): 2,
Timestamp('2017-05-10 09:36:56.943000'): 6,
Timestamp('2017-05-29 09:31:32.211000'): 6,
Timestamp('2017-06-19 09:33:56.391000'): 6,
Timestamp('2017-06-19 09:36:11.743000'): 6,
Timestamp('2017-06-28 10:06:58.320000'): 9,
Timestamp('2017-06-28 10:12:04.859000'): 9,
Timestamp('2017-07-10 09:58:36.082000'): 9,
Timestamp('2017-07-11 09:43:03.421000'): 9,
Timestamp('2017-07-12 09:27:27.504000'): 9,
Timestamp('2017-07-12 09:31:16.304000'): 9,
Timestamp('2017-07-12 09:32:47.592000'): 9,
Timestamp('2017-07-12 09:33:43.216000'): 9,
Timestamp('2017-07-26 09:19:23.656000'): 9,
Timestamp('2017-07-26 09:32:07.647000'): 9,
Timestamp('2017-07-26 09:34:16.047000'): 9,
Timestamp('2017-07-26 09:36:29.241000'): 9,
Timestamp('2017-07-26 09:41:14.152000'): 9,
Timestamp('2017-07-26 09:45:01.198000'): 9,
Timestamp('2017-07-26 09:49:06.674000'): 9,
Timestamp('2017-08-07 09:17:59.231000'): 9,
Timestamp('2017-08-07 09:25:57.865000'): 9,
Timestamp('2017-08-07 09:31:29.751000'): 9,
Timestamp('2017-08-07 09:35:27.062000'): 9,
Timestamp('2017-08-15 09:23:40.111000'): 10,
Timestamp('2017-08-16 09:17:48.032000'): 10,
Timestamp('2017-08-16 09:20:37.396000'): 10,
Timestamp('2017-08-16 09:26:34.631000'): 10,
Timestamp('2017-08-16 10:01:35.525000'): 10,
Timestamp('2017-08-16 10:06:06.222000'): 10,
Timestamp('2017-08-16 10:38:44.717000'): 10,
Timestamp('2017-08-17 09:17:18.951000'): 10,
Timestamp('2017-08-17 09:21:33.846000'): 10,
Timestamp('2017-08-17 09:28:14.337000'): 10,
Timestamp('2017-08-17 09:30:42.855000'): 10,
Timestamp('2017-08-17 09:31:35.894000'): 10,
Timestamp('2017-08-17 09:33:15.819000'): 10,
Timestamp('2017-08-17 09:35:23.751000'): 10,
Timestamp('2017-08-17 09:38:45.211000'): 10,
Timestamp('2017-08-17 09:41:25.251000'): 10,
Timestamp('2017-08-17 09:45:21.319000'): 10,
Timestamp('2017-08-17 09:47:55.097000'): 10,
Timestamp('2017-08-17 09:50:17.234000'): 10,
Timestamp('2017-08-17 09:51:49.333000'): 10,
Timestamp('2017-08-18 10:13:44.958000'): 11,
Timestamp('2017-08-18 10:19:25.371000'): 11,
Timestamp('2017-08-18 10:25:33.984000'): 11,
Timestamp('2017-08-18 10:31:29.450000'): 11,
Timestamp('2017-08-18 10:42:42.320000'): 11,
Timestamp('2017-08-18 10:53:34.495000'): 11,
Timestamp('2017-08-29 09:38:10.660000'): 11,
Timestamp('2017-08-29 09:42:50.701000'): 11,
Timestamp('2017-08-29 09:45:21.301000'): 11,
Timestamp('2017-09-04 09:34:11.032000'): 11,
Timestamp('2017-09-07 09:34:48.306000'): 11,
Timestamp('2017-09-07 09:45:30.120000'): 11,
Timestamp('2017-09-11 09:16:19.693000'): 11,
Timestamp('2017-09-11 09:19:50.156000'): 11,
Timestamp('2017-09-11 09:30:40.390000'): 11},
3: {Timestamp('2017-04-05 10:18:02.095000'): 3,
Timestamp('2017-04-05 10:35:03.740000'): 3,
Timestamp('2017-04-05 10:57:18.364000'): 3,
Timestamp('2017-04-05 11:10:09.142000'): 3,
Timestamp('2017-04-07 09:41:11.167000'): 3,
Timestamp('2017-04-07 09:47:22.457000'): 3,
Timestamp('2017-04-07 09:51:22.037000'): 3,
Timestamp('2017-04-07 09:54:59.803000'): 3,
Timestamp('2017-04-07 09:58:49.512000'): 3,
Timestamp('2017-04-07 10:05:45.506000'): 3,
Timestamp('2017-04-07 10:06:38.567000'): 3,
Timestamp('2017-04-24 09:32:06.261000'): 3,
Timestamp('2017-05-10 09:36:56.943000'): 7,
Timestamp('2017-05-29 09:31:32.211000'): 7,
Timestamp('2017-06-19 09:33:56.391000'): 7,
Timestamp('2017-06-19 09:36:11.743000'): 7,
Timestamp('2017-06-28 10:06:58.320000'): 10,
Timestamp('2017-06-28 10:12:04.859000'): 10,
Timestamp('2017-07-10 09:58:36.082000'): 10,
Timestamp('2017-07-11 09:43:03.421000'): 10,
Timestamp('2017-07-12 09:27:27.504000'): 12,
Timestamp('2017-07-12 09:31:16.304000'): 12,
Timestamp('2017-07-12 09:32:47.592000'): 12,
Timestamp('2017-07-12 09:33:43.216000'): 12,
Timestamp('2017-07-26 09:19:23.656000'): 12,
Timestamp('2017-07-26 09:32:07.647000'): 12,
Timestamp('2017-07-26 09:34:16.047000'): 12,
Timestamp('2017-07-26 09:36:29.241000'): 12,
Timestamp('2017-07-26 09:41:14.152000'): 12,
Timestamp('2017-07-26 09:45:01.198000'): 12,
Timestamp('2017-07-26 09:49:06.674000'): 12,
Timestamp('2017-08-07 09:17:59.231000'): 12,
Timestamp('2017-08-07 09:25:57.865000'): 12,
Timestamp('2017-08-07 09:31:29.751000'): 12,
Timestamp('2017-08-07 09:35:27.062000'): 12,
Timestamp('2017-08-15 09:23:40.111000'): 12,
Timestamp('2017-08-16 09:17:48.032000'): 12,
Timestamp('2017-08-16 09:20:37.396000'): 12,
Timestamp('2017-08-16 09:26:34.631000'): 12,
Timestamp('2017-08-16 10:01:35.525000'): 12,
Timestamp('2017-08-16 10:06:06.222000'): 12,
Timestamp('2017-08-16 10:38:44.717000'): 12,
Timestamp('2017-08-17 09:17:18.951000'): 12,
Timestamp('2017-08-17 09:21:33.846000'): 12,
Timestamp('2017-08-17 09:28:14.337000'): 12,
Timestamp('2017-08-17 09:30:42.855000'): 12,
Timestamp('2017-08-17 09:31:35.894000'): 12,
Timestamp('2017-08-17 09:33:15.819000'): 12,
Timestamp('2017-08-17 09:35:23.751000'): 12,
Timestamp('2017-08-17 09:38:45.211000'): 12,
Timestamp('2017-08-17 09:41:25.251000'): 12,
Timestamp('2017-08-17 09:45:21.319000'): 12,
Timestamp('2017-08-17 09:47:55.097000'): 12,
Timestamp('2017-08-17 09:50:17.234000'): 12,
Timestamp('2017-08-17 09:51:49.333000'): 12,
Timestamp('2017-08-18 10:13:44.958000'): 13,
Timestamp('2017-08-18 10:19:25.371000'): 13,
Timestamp('2017-08-18 10:25:33.984000'): 13,
Timestamp('2017-08-18 10:31:29.450000'): 13,
Timestamp('2017-08-18 10:42:42.320000'): 13,
Timestamp('2017-08-18 10:53:34.495000'): 13,
Timestamp('2017-08-29 09:38:10.660000'): 13,
Timestamp('2017-08-29 09:42:50.701000'): 13,
Timestamp('2017-08-29 09:45:21.301000'): 13,
Timestamp('2017-09-04 09:34:11.032000'): 13,
Timestamp('2017-09-07 09:34:48.306000'): 13,
Timestamp('2017-09-07 09:45:30.120000'): 13,
Timestamp('2017-09-11 09:16:19.693000'): 13,
Timestamp('2017-09-11 09:19:50.156000'): 13,
Timestamp('2017-09-11 09:30:40.390000'): 13},
4: {Timestamp('2017-04-05 10:18:02.095000'): 4,
Timestamp('2017-04-05 10:35:03.740000'): 4,
Timestamp('2017-04-05 10:57:18.364000'): 4,
Timestamp('2017-04-05 11:10:09.142000'): 4,
Timestamp('2017-04-07 09:41:11.167000'): 4,
Timestamp('2017-04-07 09:47:22.457000'): 4,
Timestamp('2017-04-07 09:51:22.037000'): 4,
Timestamp('2017-04-07 09:54:59.803000'): 4,
Timestamp('2017-04-07 09:58:49.512000'): 4,
Timestamp('2017-04-07 10:05:45.506000'): 4,
Timestamp('2017-04-07 10:06:38.567000'): 4,
Timestamp('2017-04-24 09:32:06.261000'): 4,
Timestamp('2017-05-10 09:36:56.943000'): 8,
Timestamp('2017-05-29 09:31:32.211000'): 8,
Timestamp('2017-06-19 09:33:56.391000'): 8,
Timestamp('2017-06-19 09:36:11.743000'): 8,
Timestamp('2017-06-28 10:06:58.320000'): 11,
Timestamp('2017-06-28 10:12:04.859000'): 11,
Timestamp('2017-07-10 09:58:36.082000'): 11,
Timestamp('2017-07-11 09:43:03.421000'): 11,
Timestamp('2017-07-12 09:27:27.504000'): 13,
Timestamp('2017-07-12 09:31:16.304000'): 13,
Timestamp('2017-07-12 09:32:47.592000'): 13,
Timestamp('2017-07-12 09:33:43.216000'): 13,
Timestamp('2017-07-26 09:19:23.656000'): 13,
Timestamp('2017-07-26 09:32:07.647000'): 13,
Timestamp('2017-07-26 09:34:16.047000'): 13,
Timestamp('2017-07-26 09:36:29.241000'): 13,
Timestamp('2017-07-26 09:41:14.152000'): 13,
Timestamp('2017-07-26 09:45:01.198000'): 13,
Timestamp('2017-07-26 09:49:06.674000'): 13,
Timestamp('2017-08-07 09:17:59.231000'): 13,
Timestamp('2017-08-07 09:25:57.865000'): 13,
Timestamp('2017-08-07 09:31:29.751000'): 13,
Timestamp('2017-08-07 09:35:27.062000'): 13,
Timestamp('2017-08-15 09:23:40.111000'): 14,
Timestamp('2017-08-16 09:17:48.032000'): 14,
Timestamp('2017-08-16 09:20:37.396000'): 14,
Timestamp('2017-08-16 09:26:34.631000'): 14,
Timestamp('2017-08-16 10:01:35.525000'): 14,
Timestamp('2017-08-16 10:06:06.222000'): 14,
Timestamp('2017-08-16 10:38:44.717000'): 14,
Timestamp('2017-08-17 09:17:18.951000'): 14,
Timestamp('2017-08-17 09:21:33.846000'): 14,
Timestamp('2017-08-17 09:28:14.337000'): 14,
Timestamp('2017-08-17 09:30:42.855000'): 14,
Timestamp('2017-08-17 09:31:35.894000'): 14,
Timestamp('2017-08-17 09:33:15.819000'): 14,
Timestamp('2017-08-17 09:35:23.751000'): 14,
Timestamp('2017-08-17 09:38:45.211000'): 14,
Timestamp('2017-08-17 09:41:25.251000'): 14,
Timestamp('2017-08-17 09:45:21.319000'): 14,
Timestamp('2017-08-17 09:47:55.097000'): 14,
Timestamp('2017-08-17 09:50:17.234000'): 14,
Timestamp('2017-08-17 09:51:49.333000'): 14,
Timestamp('2017-08-18 10:13:44.958000'): 14,
Timestamp('2017-08-18 10:19:25.371000'): 14,
Timestamp('2017-08-18 10:25:33.984000'): 14,
Timestamp('2017-08-18 10:31:29.450000'): 14,
Timestamp('2017-08-18 10:42:42.320000'): 14,
Timestamp('2017-08-18 10:53:34.495000'): 14,
Timestamp('2017-08-29 09:38:10.660000'): 14,
Timestamp('2017-08-29 09:42:50.701000'): 14,
Timestamp('2017-08-29 09:45:21.301000'): 14,
Timestamp('2017-09-04 09:34:11.032000'): 14,
Timestamp('2017-09-07 09:34:48.306000'): 14,
Timestamp('2017-09-07 09:45:30.120000'): 14,
Timestamp('2017-09-11 09:16:19.693000'): 14,
Timestamp('2017-09-11 09:19:50.156000'): 14,
Timestamp('2017-09-11 09:30:40.390000'): 14}}
At the moment I am using a for loop to subset and create the new dataframe (result). But is there a more efficient way to do this without using a for loop?
df_from = pd.DataFrame(np.random.normal(0,1,size = (70,15)),index = df_ticks_paths.index)
result = pd.DataFrame(np.nan, index = df_from.index, columns = df_from.columns)
for idx, row in df_path.iterrows():
for col, col_copy in row.iteritems():
result.loc[idx,col] = df_from.loc[idx, col_copy]
A:
Here is another way to do it which is, on my machine, twice as fast on average.
# In your post, you do not provide `df_ticks_paths.index`,
# so I make up one for demonstration purposes
df_ticks_paths_index = pd.DatetimeIndex(
[
"2017-04-04 10:18:02.095000",
"2017-04-05 10:35:03.740000",
"2017-04-05 10:59:18.364000",
"2017-04-07 09:41:11.167000",
"2017-09-05 11:10:09.142000",
"2017-10-07 19:41:11.167000",
],
dtype="datetime64[ns]",
freq=None,
)
df_from = pd.DataFrame(np.random.normal(0, 1, size=(6, 15)), index=df_ticks_paths_index)
print(df_from)
# Output
0 1 2 3 4 \
2017-04-04 10:18:02.095 -0.535905 0.569140 1.401489 0.551076 0.158319
2017-04-05 10:35:03.740 -1.474314 0.318631 0.178816 1.333643 1.097234
2017-04-05 10:59:18.364 1.878162 0.805685 -1.132736 0.184201 0.008899
2017-04-07 09:41:11.167 -0.480329 0.575849 1.855999 0.815211 -0.126898
2017-09-05 11:10:09.142 0.858272 -0.759353 0.275809 -0.228938 0.782700
2017-10-07 19:41:11.167 -0.914579 -0.347113 0.225863 0.126135 -1.987540
2017-09-11 09:30:40.390 -0.807214 0.511869 -0.980860 0.944337 0.618520
... 10 11 12 13 14
2017-04-04 10:18:02.095 ... -0.259234 0.861597 -1.297222 1.467518 0.209933
2017-04-05 10:35:03.740 ... 1.894300 0.592537 -0.146224 -1.392316 2.056190
2017-04-05 10:59:18.364 ... -0.695715 1.236572 0.036024 0.701121 1.013895
2017-04-07 09:41:11.167 ... -0.035515 -0.359212 0.952660 -1.192822 0.103593
2017-09-05 11:10:09.142 ... -0.702647 1.212190 0.901758 0.343678 1.910087
2017-10-07 19:41:11.167 ... 0.018768 0.544317 0.218981 0.625162 0.043180
2017-09-11 09:30:40.390 ... 0.505412 1.254301 -1.048322 0.222409 0.962458
[7 rows x 15 columns]
print(df_path)
# Output
0 1 2 3 4
2017-04-05 10:18:02.095 0 1 2 3 4
2017-04-05 10:35:03.740 0 1 2 3 4
2017-04-05 10:57:18.364 0 1 2 3 4
2017-04-05 11:10:09.142 0 1 2 3 4
2017-04-07 09:41:11.167 0 1 2 3 4
... .. .. .. .. ..
2017-09-07 09:34:48.306 4 8 11 13 14
2017-09-07 09:45:30.120 4 8 11 13 14
2017-09-11 09:16:19.693 4 8 11 13 14
2017-09-11 09:19:50.156 4 8 11 13 14
2017-09-11 09:30:40.390 4 8 11 13 14
[70 rows x 5 columns]
Then:
df_path.columns = [f"col{x}" for x in df_path.columns]
df = pd.merge(
left=df_path, right=df_from, how="inner", left_index=True, right_index=True
)
df = pd.concat(
[df.apply(lambda x: x[x[col]], axis=1) for col in df_path.columns], axis=1
)
print(df)
# Output
0 1 2 3 4
2017-04-05 10:35:03.740 -1.474314 0.318631 0.178816 1.333643 1.097234
2017-04-07 09:41:11.167 -0.480329 0.575849 1.855999 0.815211 -0.126898
2017-09-11 09:30:40.390 0.618520 -1.307552 1.254301 0.222409 0.962458
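A fully vectorised alternative is also possible if df_path's values are positional column indices into df_from; that is an assumption about your real data, although it holds in this example since df_from's columns are the default 0..14 range. A sketch using the original, un-renamed df_path:
import numpy as np
import pandas as pd
# Align df_from to df_path's index, then pick, per row, the columns
# whose positions are stored in df_path.
aligned = df_from.loc[df_path.index].to_numpy()                   # shape (70, 15)
picked = np.take_along_axis(aligned, df_path.to_numpy(), axis=1)  # shape (70, 5)
result = pd.DataFrame(picked, index=df_path.index, columns=df_path.columns)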
|
How to subset a dataframe with given pairs or row indices and column labels?
|
I am given a dataframe (df_path) below, where the index corresponds to the index of the dataframe (df_from) I want to copy values from, and the values represent the column of the dataframe I want to copy values from.
**df_path**
{0: {Timestamp('2017-04-05 10:18:02.095000'): 0,
Timestamp('2017-04-05 10:35:03.740000'): 0,
Timestamp('2017-04-05 10:57:18.364000'): 0,
Timestamp('2017-04-05 11:10:09.142000'): 0,
Timestamp('2017-04-07 09:41:11.167000'): 0,
Timestamp('2017-04-07 09:47:22.457000'): 0,
Timestamp('2017-04-07 09:51:22.037000'): 0,
Timestamp('2017-04-07 09:54:59.803000'): 0,
Timestamp('2017-04-07 09:58:49.512000'): 0,
Timestamp('2017-04-07 10:05:45.506000'): 0,
Timestamp('2017-04-07 10:06:38.567000'): 0,
Timestamp('2017-04-24 09:32:06.261000'): 0,
Timestamp('2017-05-10 09:36:56.943000'): 0,
Timestamp('2017-05-29 09:31:32.211000'): 0,
Timestamp('2017-06-19 09:33:56.391000'): 0,
Timestamp('2017-06-19 09:36:11.743000'): 0,
Timestamp('2017-06-28 10:06:58.320000'): 1,
Timestamp('2017-06-28 10:12:04.859000'): 1,
Timestamp('2017-07-10 09:58:36.082000'): 1,
Timestamp('2017-07-11 09:43:03.421000'): 1,
Timestamp('2017-07-12 09:27:27.504000'): 2,
Timestamp('2017-07-12 09:31:16.304000'): 2,
Timestamp('2017-07-12 09:32:47.592000'): 2,
Timestamp('2017-07-12 09:33:43.216000'): 2,
Timestamp('2017-07-26 09:19:23.656000'): 2,
Timestamp('2017-07-26 09:32:07.647000'): 2,
Timestamp('2017-07-26 09:34:16.047000'): 2,
Timestamp('2017-07-26 09:36:29.241000'): 2,
Timestamp('2017-07-26 09:41:14.152000'): 2,
Timestamp('2017-07-26 09:45:01.198000'): 2,
Timestamp('2017-07-26 09:49:06.674000'): 2,
Timestamp('2017-08-07 09:17:59.231000'): 2,
Timestamp('2017-08-07 09:25:57.865000'): 2,
Timestamp('2017-08-07 09:31:29.751000'): 2,
Timestamp('2017-08-07 09:35:27.062000'): 2,
Timestamp('2017-08-15 09:23:40.111000'): 3,
Timestamp('2017-08-16 09:17:48.032000'): 3,
Timestamp('2017-08-16 09:20:37.396000'): 3,
Timestamp('2017-08-16 09:26:34.631000'): 3,
Timestamp('2017-08-16 10:01:35.525000'): 3,
Timestamp('2017-08-16 10:06:06.222000'): 3,
Timestamp('2017-08-16 10:38:44.717000'): 3,
Timestamp('2017-08-17 09:17:18.951000'): 3,
Timestamp('2017-08-17 09:21:33.846000'): 3,
Timestamp('2017-08-17 09:28:14.337000'): 3,
Timestamp('2017-08-17 09:30:42.855000'): 3,
Timestamp('2017-08-17 09:31:35.894000'): 3,
Timestamp('2017-08-17 09:33:15.819000'): 3,
Timestamp('2017-08-17 09:35:23.751000'): 3,
Timestamp('2017-08-17 09:38:45.211000'): 3,
Timestamp('2017-08-17 09:41:25.251000'): 3,
Timestamp('2017-08-17 09:45:21.319000'): 3,
Timestamp('2017-08-17 09:47:55.097000'): 3,
Timestamp('2017-08-17 09:50:17.234000'): 3,
Timestamp('2017-08-17 09:51:49.333000'): 3,
Timestamp('2017-08-18 10:13:44.958000'): 4,
Timestamp('2017-08-18 10:19:25.371000'): 4,
Timestamp('2017-08-18 10:25:33.984000'): 4,
Timestamp('2017-08-18 10:31:29.450000'): 4,
Timestamp('2017-08-18 10:42:42.320000'): 4,
Timestamp('2017-08-18 10:53:34.495000'): 4,
Timestamp('2017-08-29 09:38:10.660000'): 4,
Timestamp('2017-08-29 09:42:50.701000'): 4,
Timestamp('2017-08-29 09:45:21.301000'): 4,
Timestamp('2017-09-04 09:34:11.032000'): 4,
Timestamp('2017-09-07 09:34:48.306000'): 4,
Timestamp('2017-09-07 09:45:30.120000'): 4,
Timestamp('2017-09-11 09:16:19.693000'): 4,
Timestamp('2017-09-11 09:19:50.156000'): 4,
Timestamp('2017-09-11 09:30:40.390000'): 4},
1: {Timestamp('2017-04-05 10:18:02.095000'): 1,
Timestamp('2017-04-05 10:35:03.740000'): 1,
Timestamp('2017-04-05 10:57:18.364000'): 1,
Timestamp('2017-04-05 11:10:09.142000'): 1,
Timestamp('2017-04-07 09:41:11.167000'): 1,
Timestamp('2017-04-07 09:47:22.457000'): 1,
Timestamp('2017-04-07 09:51:22.037000'): 1,
Timestamp('2017-04-07 09:54:59.803000'): 1,
Timestamp('2017-04-07 09:58:49.512000'): 1,
Timestamp('2017-04-07 10:05:45.506000'): 1,
Timestamp('2017-04-07 10:06:38.567000'): 1,
Timestamp('2017-04-24 09:32:06.261000'): 1,
Timestamp('2017-05-10 09:36:56.943000'): 5,
Timestamp('2017-05-29 09:31:32.211000'): 5,
Timestamp('2017-06-19 09:33:56.391000'): 5,
Timestamp('2017-06-19 09:36:11.743000'): 5,
Timestamp('2017-06-28 10:06:58.320000'): 5,
Timestamp('2017-06-28 10:12:04.859000'): 5,
Timestamp('2017-07-10 09:58:36.082000'): 5,
Timestamp('2017-07-11 09:43:03.421000'): 5,
Timestamp('2017-07-12 09:27:27.504000'): 6,
Timestamp('2017-07-12 09:31:16.304000'): 6,
Timestamp('2017-07-12 09:32:47.592000'): 6,
Timestamp('2017-07-12 09:33:43.216000'): 6,
Timestamp('2017-07-26 09:19:23.656000'): 6,
Timestamp('2017-07-26 09:32:07.647000'): 6,
Timestamp('2017-07-26 09:34:16.047000'): 6,
Timestamp('2017-07-26 09:36:29.241000'): 6,
Timestamp('2017-07-26 09:41:14.152000'): 6,
Timestamp('2017-07-26 09:45:01.198000'): 6,
Timestamp('2017-07-26 09:49:06.674000'): 6,
Timestamp('2017-08-07 09:17:59.231000'): 6,
Timestamp('2017-08-07 09:25:57.865000'): 6,
Timestamp('2017-08-07 09:31:29.751000'): 6,
Timestamp('2017-08-07 09:35:27.062000'): 6,
Timestamp('2017-08-15 09:23:40.111000'): 7,
Timestamp('2017-08-16 09:17:48.032000'): 7,
Timestamp('2017-08-16 09:20:37.396000'): 7,
Timestamp('2017-08-16 09:26:34.631000'): 7,
Timestamp('2017-08-16 10:01:35.525000'): 7,
Timestamp('2017-08-16 10:06:06.222000'): 7,
Timestamp('2017-08-16 10:38:44.717000'): 7,
Timestamp('2017-08-17 09:17:18.951000'): 7,
Timestamp('2017-08-17 09:21:33.846000'): 7,
Timestamp('2017-08-17 09:28:14.337000'): 7,
Timestamp('2017-08-17 09:30:42.855000'): 7,
Timestamp('2017-08-17 09:31:35.894000'): 7,
Timestamp('2017-08-17 09:33:15.819000'): 7,
Timestamp('2017-08-17 09:35:23.751000'): 7,
Timestamp('2017-08-17 09:38:45.211000'): 7,
Timestamp('2017-08-17 09:41:25.251000'): 7,
Timestamp('2017-08-17 09:45:21.319000'): 7,
Timestamp('2017-08-17 09:47:55.097000'): 7,
Timestamp('2017-08-17 09:50:17.234000'): 7,
Timestamp('2017-08-17 09:51:49.333000'): 7,
Timestamp('2017-08-18 10:13:44.958000'): 8,
Timestamp('2017-08-18 10:19:25.371000'): 8,
Timestamp('2017-08-18 10:25:33.984000'): 8,
Timestamp('2017-08-18 10:31:29.450000'): 8,
Timestamp('2017-08-18 10:42:42.320000'): 8,
Timestamp('2017-08-18 10:53:34.495000'): 8,
Timestamp('2017-08-29 09:38:10.660000'): 8,
Timestamp('2017-08-29 09:42:50.701000'): 8,
Timestamp('2017-08-29 09:45:21.301000'): 8,
Timestamp('2017-09-04 09:34:11.032000'): 8,
Timestamp('2017-09-07 09:34:48.306000'): 8,
Timestamp('2017-09-07 09:45:30.120000'): 8,
Timestamp('2017-09-11 09:16:19.693000'): 8,
Timestamp('2017-09-11 09:19:50.156000'): 8,
Timestamp('2017-09-11 09:30:40.390000'): 8},
2: {Timestamp('2017-04-05 10:18:02.095000'): 2,
Timestamp('2017-04-05 10:35:03.740000'): 2,
Timestamp('2017-04-05 10:57:18.364000'): 2,
Timestamp('2017-04-05 11:10:09.142000'): 2,
Timestamp('2017-04-07 09:41:11.167000'): 2,
Timestamp('2017-04-07 09:47:22.457000'): 2,
Timestamp('2017-04-07 09:51:22.037000'): 2,
Timestamp('2017-04-07 09:54:59.803000'): 2,
Timestamp('2017-04-07 09:58:49.512000'): 2,
Timestamp('2017-04-07 10:05:45.506000'): 2,
Timestamp('2017-04-07 10:06:38.567000'): 2,
Timestamp('2017-04-24 09:32:06.261000'): 2,
Timestamp('2017-05-10 09:36:56.943000'): 6,
Timestamp('2017-05-29 09:31:32.211000'): 6,
Timestamp('2017-06-19 09:33:56.391000'): 6,
Timestamp('2017-06-19 09:36:11.743000'): 6,
Timestamp('2017-06-28 10:06:58.320000'): 9,
Timestamp('2017-06-28 10:12:04.859000'): 9,
Timestamp('2017-07-10 09:58:36.082000'): 9,
Timestamp('2017-07-11 09:43:03.421000'): 9,
Timestamp('2017-07-12 09:27:27.504000'): 9,
Timestamp('2017-07-12 09:31:16.304000'): 9,
Timestamp('2017-07-12 09:32:47.592000'): 9,
Timestamp('2017-07-12 09:33:43.216000'): 9,
Timestamp('2017-07-26 09:19:23.656000'): 9,
Timestamp('2017-07-26 09:32:07.647000'): 9,
Timestamp('2017-07-26 09:34:16.047000'): 9,
Timestamp('2017-07-26 09:36:29.241000'): 9,
Timestamp('2017-07-26 09:41:14.152000'): 9,
Timestamp('2017-07-26 09:45:01.198000'): 9,
Timestamp('2017-07-26 09:49:06.674000'): 9,
Timestamp('2017-08-07 09:17:59.231000'): 9,
Timestamp('2017-08-07 09:25:57.865000'): 9,
Timestamp('2017-08-07 09:31:29.751000'): 9,
Timestamp('2017-08-07 09:35:27.062000'): 9,
Timestamp('2017-08-15 09:23:40.111000'): 10,
Timestamp('2017-08-16 09:17:48.032000'): 10,
Timestamp('2017-08-16 09:20:37.396000'): 10,
Timestamp('2017-08-16 09:26:34.631000'): 10,
Timestamp('2017-08-16 10:01:35.525000'): 10,
Timestamp('2017-08-16 10:06:06.222000'): 10,
Timestamp('2017-08-16 10:38:44.717000'): 10,
Timestamp('2017-08-17 09:17:18.951000'): 10,
Timestamp('2017-08-17 09:21:33.846000'): 10,
Timestamp('2017-08-17 09:28:14.337000'): 10,
Timestamp('2017-08-17 09:30:42.855000'): 10,
Timestamp('2017-08-17 09:31:35.894000'): 10,
Timestamp('2017-08-17 09:33:15.819000'): 10,
Timestamp('2017-08-17 09:35:23.751000'): 10,
Timestamp('2017-08-17 09:38:45.211000'): 10,
Timestamp('2017-08-17 09:41:25.251000'): 10,
Timestamp('2017-08-17 09:45:21.319000'): 10,
Timestamp('2017-08-17 09:47:55.097000'): 10,
Timestamp('2017-08-17 09:50:17.234000'): 10,
Timestamp('2017-08-17 09:51:49.333000'): 10,
Timestamp('2017-08-18 10:13:44.958000'): 11,
Timestamp('2017-08-18 10:19:25.371000'): 11,
Timestamp('2017-08-18 10:25:33.984000'): 11,
Timestamp('2017-08-18 10:31:29.450000'): 11,
Timestamp('2017-08-18 10:42:42.320000'): 11,
Timestamp('2017-08-18 10:53:34.495000'): 11,
Timestamp('2017-08-29 09:38:10.660000'): 11,
Timestamp('2017-08-29 09:42:50.701000'): 11,
Timestamp('2017-08-29 09:45:21.301000'): 11,
Timestamp('2017-09-04 09:34:11.032000'): 11,
Timestamp('2017-09-07 09:34:48.306000'): 11,
Timestamp('2017-09-07 09:45:30.120000'): 11,
Timestamp('2017-09-11 09:16:19.693000'): 11,
Timestamp('2017-09-11 09:19:50.156000'): 11,
Timestamp('2017-09-11 09:30:40.390000'): 11},
3: {Timestamp('2017-04-05 10:18:02.095000'): 3,
Timestamp('2017-04-05 10:35:03.740000'): 3,
Timestamp('2017-04-05 10:57:18.364000'): 3,
Timestamp('2017-04-05 11:10:09.142000'): 3,
Timestamp('2017-04-07 09:41:11.167000'): 3,
Timestamp('2017-04-07 09:47:22.457000'): 3,
Timestamp('2017-04-07 09:51:22.037000'): 3,
Timestamp('2017-04-07 09:54:59.803000'): 3,
Timestamp('2017-04-07 09:58:49.512000'): 3,
Timestamp('2017-04-07 10:05:45.506000'): 3,
Timestamp('2017-04-07 10:06:38.567000'): 3,
Timestamp('2017-04-24 09:32:06.261000'): 3,
Timestamp('2017-05-10 09:36:56.943000'): 7,
Timestamp('2017-05-29 09:31:32.211000'): 7,
Timestamp('2017-06-19 09:33:56.391000'): 7,
Timestamp('2017-06-19 09:36:11.743000'): 7,
Timestamp('2017-06-28 10:06:58.320000'): 10,
Timestamp('2017-06-28 10:12:04.859000'): 10,
Timestamp('2017-07-10 09:58:36.082000'): 10,
Timestamp('2017-07-11 09:43:03.421000'): 10,
Timestamp('2017-07-12 09:27:27.504000'): 12,
Timestamp('2017-07-12 09:31:16.304000'): 12,
Timestamp('2017-07-12 09:32:47.592000'): 12,
Timestamp('2017-07-12 09:33:43.216000'): 12,
Timestamp('2017-07-26 09:19:23.656000'): 12,
Timestamp('2017-07-26 09:32:07.647000'): 12,
Timestamp('2017-07-26 09:34:16.047000'): 12,
Timestamp('2017-07-26 09:36:29.241000'): 12,
Timestamp('2017-07-26 09:41:14.152000'): 12,
Timestamp('2017-07-26 09:45:01.198000'): 12,
Timestamp('2017-07-26 09:49:06.674000'): 12,
Timestamp('2017-08-07 09:17:59.231000'): 12,
Timestamp('2017-08-07 09:25:57.865000'): 12,
Timestamp('2017-08-07 09:31:29.751000'): 12,
Timestamp('2017-08-07 09:35:27.062000'): 12,
Timestamp('2017-08-15 09:23:40.111000'): 12,
Timestamp('2017-08-16 09:17:48.032000'): 12,
Timestamp('2017-08-16 09:20:37.396000'): 12,
Timestamp('2017-08-16 09:26:34.631000'): 12,
Timestamp('2017-08-16 10:01:35.525000'): 12,
Timestamp('2017-08-16 10:06:06.222000'): 12,
Timestamp('2017-08-16 10:38:44.717000'): 12,
Timestamp('2017-08-17 09:17:18.951000'): 12,
Timestamp('2017-08-17 09:21:33.846000'): 12,
Timestamp('2017-08-17 09:28:14.337000'): 12,
Timestamp('2017-08-17 09:30:42.855000'): 12,
Timestamp('2017-08-17 09:31:35.894000'): 12,
Timestamp('2017-08-17 09:33:15.819000'): 12,
Timestamp('2017-08-17 09:35:23.751000'): 12,
Timestamp('2017-08-17 09:38:45.211000'): 12,
Timestamp('2017-08-17 09:41:25.251000'): 12,
Timestamp('2017-08-17 09:45:21.319000'): 12,
Timestamp('2017-08-17 09:47:55.097000'): 12,
Timestamp('2017-08-17 09:50:17.234000'): 12,
Timestamp('2017-08-17 09:51:49.333000'): 12,
Timestamp('2017-08-18 10:13:44.958000'): 13,
Timestamp('2017-08-18 10:19:25.371000'): 13,
Timestamp('2017-08-18 10:25:33.984000'): 13,
Timestamp('2017-08-18 10:31:29.450000'): 13,
Timestamp('2017-08-18 10:42:42.320000'): 13,
Timestamp('2017-08-18 10:53:34.495000'): 13,
Timestamp('2017-08-29 09:38:10.660000'): 13,
Timestamp('2017-08-29 09:42:50.701000'): 13,
Timestamp('2017-08-29 09:45:21.301000'): 13,
Timestamp('2017-09-04 09:34:11.032000'): 13,
Timestamp('2017-09-07 09:34:48.306000'): 13,
Timestamp('2017-09-07 09:45:30.120000'): 13,
Timestamp('2017-09-11 09:16:19.693000'): 13,
Timestamp('2017-09-11 09:19:50.156000'): 13,
Timestamp('2017-09-11 09:30:40.390000'): 13},
4: {Timestamp('2017-04-05 10:18:02.095000'): 4,
Timestamp('2017-04-05 10:35:03.740000'): 4,
Timestamp('2017-04-05 10:57:18.364000'): 4,
Timestamp('2017-04-05 11:10:09.142000'): 4,
Timestamp('2017-04-07 09:41:11.167000'): 4,
Timestamp('2017-04-07 09:47:22.457000'): 4,
Timestamp('2017-04-07 09:51:22.037000'): 4,
Timestamp('2017-04-07 09:54:59.803000'): 4,
Timestamp('2017-04-07 09:58:49.512000'): 4,
Timestamp('2017-04-07 10:05:45.506000'): 4,
Timestamp('2017-04-07 10:06:38.567000'): 4,
Timestamp('2017-04-24 09:32:06.261000'): 4,
Timestamp('2017-05-10 09:36:56.943000'): 8,
Timestamp('2017-05-29 09:31:32.211000'): 8,
Timestamp('2017-06-19 09:33:56.391000'): 8,
Timestamp('2017-06-19 09:36:11.743000'): 8,
Timestamp('2017-06-28 10:06:58.320000'): 11,
Timestamp('2017-06-28 10:12:04.859000'): 11,
Timestamp('2017-07-10 09:58:36.082000'): 11,
Timestamp('2017-07-11 09:43:03.421000'): 11,
Timestamp('2017-07-12 09:27:27.504000'): 13,
Timestamp('2017-07-12 09:31:16.304000'): 13,
Timestamp('2017-07-12 09:32:47.592000'): 13,
Timestamp('2017-07-12 09:33:43.216000'): 13,
Timestamp('2017-07-26 09:19:23.656000'): 13,
Timestamp('2017-07-26 09:32:07.647000'): 13,
Timestamp('2017-07-26 09:34:16.047000'): 13,
Timestamp('2017-07-26 09:36:29.241000'): 13,
Timestamp('2017-07-26 09:41:14.152000'): 13,
Timestamp('2017-07-26 09:45:01.198000'): 13,
Timestamp('2017-07-26 09:49:06.674000'): 13,
Timestamp('2017-08-07 09:17:59.231000'): 13,
Timestamp('2017-08-07 09:25:57.865000'): 13,
Timestamp('2017-08-07 09:31:29.751000'): 13,
Timestamp('2017-08-07 09:35:27.062000'): 13,
Timestamp('2017-08-15 09:23:40.111000'): 14,
Timestamp('2017-08-16 09:17:48.032000'): 14,
Timestamp('2017-08-16 09:20:37.396000'): 14,
Timestamp('2017-08-16 09:26:34.631000'): 14,
Timestamp('2017-08-16 10:01:35.525000'): 14,
Timestamp('2017-08-16 10:06:06.222000'): 14,
Timestamp('2017-08-16 10:38:44.717000'): 14,
Timestamp('2017-08-17 09:17:18.951000'): 14,
Timestamp('2017-08-17 09:21:33.846000'): 14,
Timestamp('2017-08-17 09:28:14.337000'): 14,
Timestamp('2017-08-17 09:30:42.855000'): 14,
Timestamp('2017-08-17 09:31:35.894000'): 14,
Timestamp('2017-08-17 09:33:15.819000'): 14,
Timestamp('2017-08-17 09:35:23.751000'): 14,
Timestamp('2017-08-17 09:38:45.211000'): 14,
Timestamp('2017-08-17 09:41:25.251000'): 14,
Timestamp('2017-08-17 09:45:21.319000'): 14,
Timestamp('2017-08-17 09:47:55.097000'): 14,
Timestamp('2017-08-17 09:50:17.234000'): 14,
Timestamp('2017-08-17 09:51:49.333000'): 14,
Timestamp('2017-08-18 10:13:44.958000'): 14,
Timestamp('2017-08-18 10:19:25.371000'): 14,
Timestamp('2017-08-18 10:25:33.984000'): 14,
Timestamp('2017-08-18 10:31:29.450000'): 14,
Timestamp('2017-08-18 10:42:42.320000'): 14,
Timestamp('2017-08-18 10:53:34.495000'): 14,
Timestamp('2017-08-29 09:38:10.660000'): 14,
Timestamp('2017-08-29 09:42:50.701000'): 14,
Timestamp('2017-08-29 09:45:21.301000'): 14,
Timestamp('2017-09-04 09:34:11.032000'): 14,
Timestamp('2017-09-07 09:34:48.306000'): 14,
Timestamp('2017-09-07 09:45:30.120000'): 14,
Timestamp('2017-09-11 09:16:19.693000'): 14,
Timestamp('2017-09-11 09:19:50.156000'): 14,
Timestamp('2017-09-11 09:30:40.390000'): 14}}
At the moment I am using a for loop to subset and create the new dataframe (result). But is there a more efficient way to do this without using a for loop?
df_from = pd.DataFrame(np.random.normal(0,1,size = (70,15)),index = df_ticks_paths.index)
result = pd.DataFrame(np.nan, index = df_from.index, columns = df_from.columns)
for idx, row in df_path.iterrows():
for col, col_copy in row.iteritems():
result.loc[idx,col] = df_from.loc[idx, col_copy]
|
[
"Here is another way to it which is, on my machine, twice as fast in average.\n# In your post, you do not provide `df_ticks_paths.index`,\n# so I make up one for demonstration purpose\n\ndf_ticks_paths_index = pd.DatetimeIndex(\n [\n \"2017-04-04 10:18:02.095000\",\n \"2017-04-05 10:35:03.740000\",\n \"2017-04-05 10:59:18.364000\",\n \"2017-04-07 09:41:11.167000\",\n \"2017-09-05 11:10:09.142000\",\n \"2017-10-07 19:41:11.167000\",\n ],\n dtype=\"datetime64[ns]\",\n freq=None,\n)\n\ndf_from = pd.DataFrame(np.random.normal(0, 1, size=(6, 15)), index=df_ticks_paths_index)\n\nprint(df_from)\n# Output\n 0 1 2 3 4 \\\n2017-04-04 10:18:02.095 -0.535905 0.569140 1.401489 0.551076 0.158319 \n2017-04-05 10:35:03.740 -1.474314 0.318631 0.178816 1.333643 1.097234 \n2017-04-05 10:59:18.364 1.878162 0.805685 -1.132736 0.184201 0.008899 \n2017-04-07 09:41:11.167 -0.480329 0.575849 1.855999 0.815211 -0.126898 \n2017-09-05 11:10:09.142 0.858272 -0.759353 0.275809 -0.228938 0.782700 \n2017-10-07 19:41:11.167 -0.914579 -0.347113 0.225863 0.126135 -1.987540 \n2017-09-11 09:30:40.390 -0.807214 0.511869 -0.980860 0.944337 0.618520 \n\n ... 10 11 12 13 14 \n2017-04-04 10:18:02.095 ... -0.259234 0.861597 -1.297222 1.467518 0.209933 \n2017-04-05 10:35:03.740 ... 1.894300 0.592537 -0.146224 -1.392316 2.056190 \n2017-04-05 10:59:18.364 ... -0.695715 1.236572 0.036024 0.701121 1.013895 \n2017-04-07 09:41:11.167 ... -0.035515 -0.359212 0.952660 -1.192822 0.103593 \n2017-09-05 11:10:09.142 ... -0.702647 1.212190 0.901758 0.343678 1.910087 \n2017-10-07 19:41:11.167 ... 0.018768 0.544317 0.218981 0.625162 0.043180 \n2017-09-11 09:30:40.390 ... 0.505412 1.254301 -1.048322 0.222409 0.962458 \n\n[7 rows x 15 columns]\n\nprint(df_path)\n# Output\n 0 1 2 3 4\n2017-04-05 10:18:02.095 0 1 2 3 4\n2017-04-05 10:35:03.740 0 1 2 3 4\n2017-04-05 10:57:18.364 0 1 2 3 4\n2017-04-05 11:10:09.142 0 1 2 3 4\n2017-04-07 09:41:11.167 0 1 2 3 4\n... .. .. .. .. ..\n2017-09-07 09:34:48.306 4 8 11 13 14\n2017-09-07 09:45:30.120 4 8 11 13 14\n2017-09-11 09:16:19.693 4 8 11 13 14\n2017-09-11 09:19:50.156 4 8 11 13 14\n2017-09-11 09:30:40.390 4 8 11 13 14\n\n[70 rows x 5 columns]\n\nThen:\ndf_path.columns = [f\"col{x}\" for x in df_path.columns]\n\ndf = pd.merge(\n left=df_path, right=df_from, how=\"inner\", left_index=True, right_index=True\n)\n\ndf = pd.concat(\n [df.apply(lambda x: x[x[col]], axis=1) for col in df_path.columns], axis=1\n)\n\nprint(df)\n# Output\n 0 1 2 3 4\n2017-04-05 10:35:03.740 -1.474314 0.318631 0.178816 1.333643 1.097234\n2017-04-07 09:41:11.167 -0.480329 0.575849 1.855999 0.815211 -0.126898\n2017-09-11 09:30:40.390 0.618520 -1.307552 1.254301 0.222409 0.962458\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074463725_pandas_python.txt
|
Q:
Why is the value of i not increasing in the python code below? What should I do to increase it?
Why is the value of i not increasing in the python code below? What should I do to increase it?
i = 0
for z in analink2.iloc[i,[1]]:
req = requests.get(z, headers = header)
print(z)
i += 1
analink2.head()
Out[268]:
section weblink
0 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
1 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
2 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
3 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
4 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
I expect the value of i to increase, and the index of z to increase with i, for as long as the loop is running.
A:
The slice analink2.iloc[i, [1]] is built once, before the loop starts, so incrementing i inside the loop has no effect on what is iterated; the loop only ever sees the single row that i pointed to at the start. If you want to iterate over the values in the weblink column you can use the following example:
for url in analink2["weblink"]:
print(url)
req = requests.get(url, headers=header)
If you also want the index value you can use, for example, .iterrows():
for idx, row in analink2.iterrows():
print(idx, row['weblink'])
req = requests.get(row['weblink'], headers=header)
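If you specifically want a counter i that increases with each link, enumerate provides one without manual bookkeeping (reusing requests and header from the question):
for i, url in enumerate(analink2["weblink"]):
    print(i, url)
    req = requests.get(url, headers=header)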
|
Why is the value of i not increasing in the python code below? What should I do to increase it?
|
Why is the value of i not increasing in the python code below? What should I do to increase it?
i = 0
for z in analink2.iloc[i,[1]]:
req = requests.get(z, headers = header)
print(z)
i += 1
analink2.head()
Out[268]:
section weblink
0 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
1 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
2 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
3 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
4 bilgisayar-tablet https://www.trendyol.com/bilgisayar-tablet-x-c...
I expect the value of i to increase, and the index of z to increase with i, for as long as the loop is running.
|
[
"If you want to iterate over values in weblink column you can use next example:\nfor url in analink2[\"weblink\"]:\n print(url)\n req = requests.get(url, headers=header)\n\n\nIf you want index value you can use for example .iterrows():\nfor idx, row in analink2.iterrows():\n print(idx, row['weblink'])\n req = requests.get(row['weblink'], headers=header)\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"for_loop",
"loops",
"python",
"python_requests"
] |
stackoverflow_0074507236_dataframe_for_loop_loops_python_python_requests.txt
|
Q:
Python insert image into Tkinter error
I want to insert an image into my Tkinter window, but I received an error:
TclError: image "pyimage7" doesn't exist.
I am using WinPython-64-3.3.5.9. I tried "rozmery.gif" but it didn't help.
from tkinter import *
from PIL import ImageTk, Image
app_root = Tk()
#Setting it up
img = ImageTk.PhotoImage(Image.open("rozmery.png"))
#Displaying it
imglabel = Label(app_root, image=img).grid(row=1, column=1)
app_root.mainloop()
A:
You should keep a reference to the image (for example on the widget itself) before placing it; otherwise the PhotoImage can be garbage-collected and the label shows nothing:
imglabel = Label(app_root, image=img)
imglabel.image = img
imglabel.grid(row=1, column=1)
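Put together, a minimal sketch of the example from the question with that fix applied (note that Label(...).grid(...) returns None, so creating the label and gridding it must be separate statements):
from tkinter import *
from PIL import ImageTk, Image
app_root = Tk()
# Setting it up
img = ImageTk.PhotoImage(Image.open("rozmery.png"))
# Displaying it; the extra reference on the widget keeps the image alive
imglabel = Label(app_root, image=img)
imglabel.image = img
imglabel.grid(row=1, column=1)
app_root.mainloop()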
A:
I had the same error message. I restarted the kernel (Spyder) and it worked perfectly fine. I have this kind of issue sometimes where the code itself is fine but it refuses to run. Restarting the kernel is often the only way.
|
Python insert image into Tkinter error
|
I want to insert an image into my Tkinter window, but I received an error:
TclError: image "pyimage7" doesn't exist.
I am using WinPython-64-3.3.5.9. I tried "rozmery.gif" but it didn't help.
from tkinter import *
from PIL import ImageTk, Image
app_root = Tk()
#Setting it up
img = ImageTk.PhotoImage(Image.open("rozmery.png"))
#Displaying it
imglabel = Label(app_root, image=img).grid(row=1, column=1)
app_root.mainloop()
|
[
"You should keep a reference to the image before placing it:\nimglabel = Label(app_root, image=img)\nimglabel.image = img\nimglabel.grid(row=1, column=1) \n\n",
"I had the same error message. I restarted the kernel (Spyder) and it worked perfectly fine. I have this kind of issue sometimes where the code works just fine but it refuses to work. Restarting the kernel is often the only way.\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0050174404_python_tkinter.txt
|
Q:
Flask-SQLAlchemy db.create_all() raises RuntimeError working outside of application context
I recently updated Flask-SQLAlchemy, and now db.create_all is raising RuntimeError: working outside of application context. How do I call create_all?
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///project.db'
db = SQLAlchemy(app)
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
db.create_all()
This raises the following error:
Traceback (most recent call last):
File "/home/david/Projects/flask-sqlalchemy/example.py", line 11, in <module>
db.create_all()
File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 751, in create_all
self._call_for_binds(bind_key, "create_all")
File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 722, in _call_for_binds
engine = self.engines[key]
File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 583, in engines
app = current_app._get_current_object() # type: ignore[attr-defined]
File "/home/david/Projects/flask-sqlalchemy/.venv/lib/python3.10/site-packages/werkzeug/local.py", line 513, in _get_current_object
raise RuntimeError(unbound_message) from None
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
the current application. To solve this, set up an application context
with app.app_context(). See the documentation for more information.
A:
As of Flask-SQLAlchemy 3.0, all access to db.engine (and db.session) requires an active Flask application context. db.create_all uses db.engine, so it requires an app context.
with app.app_context():
db.create_all()
When Flask handles requests or runs CLI commands, a context is automatically pushed. You only need to push one manually outside of those situations, such as while setting up the app.
Instead of calling create_all in your code, you can also call it manually in the shell. Use flask shell to start a Python shell that already has an app context and the db object imported.
$ flask shell
>>> db.create_all()
Or push a context manually if using a plain python shell.
$ python
>>> from project import app, db
>>> app.app_context().push()
>>> db.create_all()
A:
If you're using a python shell instead of flask shell, you can push a context manually. flask shell will handle that for you.
>>> from project import app, db
>>> app.app_context().push()
>>> db.create_all()
Learn more about the application context in the Flask docs or this video.
|
Flask-SQLAlchemy db.create_all() raises RuntimeError working outside of application context
|
I recently updated Flask-SQLAlchemy, and now db.create_all is raising RuntimeError: working outside of application context. How do I call create_all?
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///project.db'
db = SQLAlchemy(app)
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
db.create_all()
This raises the following error:
Traceback (most recent call last):
File "/home/david/Projects/flask-sqlalchemy/example.py", line 11, in <module>
db.create_all()
File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 751, in create_all
self._call_for_binds(bind_key, "create_all")
File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 722, in _call_for_binds
engine = self.engines[key]
File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 583, in engines
app = current_app._get_current_object() # type: ignore[attr-defined]
File "/home/david/Projects/flask-sqlalchemy/.venv/lib/python3.10/site-packages/werkzeug/local.py", line 513, in _get_current_object
raise RuntimeError(unbound_message) from None
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
the current application. To solve this, set up an application context
with app.app_context(). See the documentation for more information.
|
[
"As of Flask-SQLAlchemy 3.0, all access to db.engine (and db.session) requires an active Flask application context. db.create_all uses db.engine, so it requires an app context.\nwith app.app_context():\n db.create_all()\n\nWhen Flask handles requests or runs CLI commands, a context is automatically pushed. You only need to push one manually outside of those situations, such as while setting up the app.\n\nInstead of calling create_all in your code, you can also call it manually in the shell. Use flask shell to start a Python shell that already has an app context and the db object imported.\n$ flask shell\n>>> db.create_all()\n\nOr push a context manually if using a plain python shell.\n$ python\n>>> from project import app, db\n>>> app.app_context().push()\n>>> db.create_all()\n\n",
"If you're using a python shell instead of flask shell, you can push a context manually. flask shell will handle that for you.\n>>> from project import app, db\n>>> app.app_context().push()\n>>> db.create_all()\n\nLearn more about the application context in the Flask docs or this video.\n"
] |
[
28,
1
] |
[
"I spent several hours trying to figure out this myself.\nHaving succeeded, I felt I should do a detail post by example.\ncreate a model.py with example code below:\nfrom flask_sqlalchemy import SQLAlchemy\nimport datetime\nfrom flask_marshmallow import Marshmallow\nfrom secret import path\n\ndatabase_path = path \n\ndb = SQLAlchemy()\n\n\n'''\nsetup_db(app)\n binds a flask application and a SQLAlchemy service\n'''\ndef setup_db(app, database_path=database_path):\n app.config[\"SQLALCHEMY_DATABASE_URI\"] = database_path\n app.config[\"SQLALCHEMY_TRACK_MODIFICATIONS\"] = False\n db.app = app\n db.init_app(app)\n \n\n'''\ndb_drop_and_create_all()\n drops the database tables and starts fresh\n can be used to initialize a clean database\n !!NOTE you can change the database_filename variable to have multiple verisons of a database\n'''\n\ndef db_drop_and_create_all(app):\n with app.app_context():\n db.drop_all()\n db.create_all() \n \n\nclass Articles(db.Model):\n __tablename__ = \"articles\"\n id = db.Column(db.Integer, primary_key=True)\n title = db.Column(db.String(100))\n body = db.Column(db.Text())\n date = db.Column(db.DateTime, default = datetime.datetime.now)\n \n \n def __init__(self, title, body):\n self.name = title\n self.body = body\n \n \n def __repr__(self):\n return f\"articles(title = {title}, body = {body}, date = {date})\"\n\nin the model.py pay attention to the helper function db_drop_and_create_all(app):\ndef db_drop_and_create_all(app):\n with app.app_context():\n db.drop_all()\n db.create_all()\n\nNow, in your server.py or app.py whatever your choiced name, import the neccesary scripts as follows:\nfrom model import setup_db, Articles, db_drop_and_create_all\nfrom flask_cors import CORS\nfrom flask import Flask, jsonify\n\napp = Flask(__name__)\n\nsetup_db(app) \nCORS(app)\ndb_drop_and_create_all(app)\n \n\n \n@app.route('/', methods=['GET'])\ndef get_articles():\n return jsonify({\"Hello\": \"World\"})\n\n\n\nif __name__ == '__main__':\n app.run(debug=True)\n\nPro Tip:\nCors is optional for this example.\nEnsure to install all dependencies used for this exercise with pip install dependencyName\n"
] |
[
-1
] |
[
"flask",
"flask_sqlalchemy",
"python"
] |
stackoverflow_0073961938_flask_flask_sqlalchemy_python.txt
|
Q:
BYPASS captcha during exploration of website, using selenium
I'm trying to use the search engine of a website, but a CAPTCHA blocks me continuously. Is there any way to perform the search?
driver = webdriver.Firefox()
driver.implicitly_wait(10) # seconds
driver.get("https://www.autodoc.pl/")
query ='H317W01'
driver.find_element(By.ID, "search").send_keys(query)
driver.find_element(By.ID, "search").send_keys(Keys.ENTER)
A:
Captchas are implemented to block bots. Selenium is a bot.
Bypassing captchas is practically impossible, unless you control the website containing the captcha.
A:
You would need to load the picture data as per Download image with selenium python and analyze it with some OCR software (e.g. for Python, there's pytesseract).
But CAPTCHAs, by design, are composed in a way that defeats current OCR software, so brute-force recognition will most definitely produce garbage. You'll probably have to fine-tune or adapt that software and/or do some custom preprocessing of the image to ignore or filter out the noise added by the specific CAPTCHA for the software to be able to read the symbols.
|
BYPASS captcha during exploration of website, using selenium
|
I'm trying to use the search engine of a website, but a CAPTCHA blocks me continuously. Is there any way to perform the search?
driver = webdriver.Firefox()
driver.implicitly_wait(10) # seconds
driver.get("https://www.autodoc.pl/")
query ='H317W01'
driver.find_element(By.ID, "search").send_keys(query)
driver.find_element(By.ID, "search").send_keys(Keys.ENTER)
|
[
"Captchas are implemented to block bots. Selenium is a bot.\nBypassing captchas is practically impossible, unless you control the website containing the captcha.\n",
"You would need to load the picture data as per Download image with selenium python and analyze it with some OCR software (e.g. for Python, there's pytesseract).\nBut CAPTCHAs, by design, are composed in a way that defeats current OCR software, so brute-force recognition will most definitely produce garbage. You'll probably have to fine-tune or adapt that software and/or do some custom preprocessing of the image to ignore or filter out the noise added by the specific CAPTCHA for the software to be able to read the symbols.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"selenium"
] |
stackoverflow_0074498062_python_selenium.txt
|
Q:
Drf class based view how to manage method calls
I have been working with FBVs in Django and am now trying out CBVs. I have created a basic CRUD application.
Views.py
class UserViews(APIView):
permission_classes = [IsViewOnly | IsManager | IsAdmin | IsStaff]
def get_objects(self, user_id):
#query
def post(self, request):
#create code
def get(self, request):
#details code
def put(self, request):
#update code
def delete(self):
#code
urls.py
urlpatterns = [
path('add-user/', views.UserViews.as_view(), name="create-user"),
path('update-user/', views.UserViews.as_view(), name="update-user"),
path('delete-user/', views.UserViews.as_view(), name="delete-user"),
path('list-users', views.UserSearchList.as_view(), name="list-user"),
path('view-user', views.UserViews.as_view(), name="view-user"),]
This code is working, but how do we prevent a situation where, say, a manager wants the view-user details API but calls it with the DELETE method, and the user is then deleted?
A:
The @permission_classes decorator only works together with @api_view on function-based views; on an APIView class it has no effect. To set a specific permission on a specific method, override get_permissions() and branch on the request method:
def get_permissions(self):
    if self.request.method == 'DELETE':
        return [IsManager()]
    return super().get_permissions()
Note: when you return new permission classes this way you're telling the view to ignore the default list set in the settings.py file for that method.
reference: Django REST framework, setting the permission policy
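Applied to the view from the question, a minimal sketch (assuming only managers may delete; every other method keeps the class-level list):
class UserViews(APIView):
    permission_classes = [IsViewOnly | IsManager | IsAdmin | IsStaff]
    def get_permissions(self):
        # DELETE is restricted to managers; other methods fall through
        # to the permission_classes defined above.
        if self.request.method == 'DELETE':
            return [IsManager()]
        return super().get_permissions()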
|
Drf class based view how to manage method calls
|
I have been working with FBVs in Django and am now trying out CBVs. I have created a basic CRUD application.
Views.py
class UserViews(APIView):
permission_classes = [IsViewOnly | IsManager | IsAdmin | IsStaff]
def get_objects(self, user_id):
#query
def post(self, request):
#create code
def get(self, request):
#details code
def put(self, request):
#update code
def delete(self):
#code
urls.py
urlpatterns = [
path('add-user/', views.UserViews.as_view(), name="create-user"),
path('update-user/', views.UserViews.as_view(), name="update-user"),
path('delete-user/', views.UserViews.as_view(), name="delete-user"),
path('list-users', views.UserSearchList.as_view(), name="list-user"),
path('view-user', views.UserViews.as_view(), name="view-user"),]
This code is working, but how do we prevent a situation where, say, a manager wants the view-user details API but calls it with the DELETE method, and the user is then deleted?
|
[
"You can use the @api_view decorator with function-based views, In order to set a specific permission on a specific method in the APIView class.\n@permission_classes([IsManager, ])\ndef delete(self, request):\n # your code.\n\nNote: when you set new permission classes via the class attribute or decorators you're telling the view to ignore the default list set in the settings.py file.\nreference: Django setting the permission policy\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_class_based_views",
"django_rest_framework",
"django_views",
"python"
] |
stackoverflow_0074507179_django_django_class_based_views_django_rest_framework_django_views_python.txt
|
Q:
Generate logs as the function's caller
Is there a way to log messages as if they originated from the current function's caller?
I have a very simple global logging facility that gets initialized by means of __main__ as
logformat = '%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d - %(message)s'
logging.basicConfig(format=logformat, level=loglevel)
All functions then use the same global logger directly by means of
# Goes directly to the global logger
logging.warning('this might not work')
Some utility functions emit logs, but it is rather unhelpful that %(module)s:%(funcName)s:%(lineno)d resolves to helpers:my_helper_fn:42, while the caller of my_helper_fn that triggered the log is the interesting part.
Is there a way to log messages inside my_helper_fn so that funcName and friends automatically resolve to the caller of my_helper_fn? Ideally, I'd decorate my_helper_fn with something to the tune of @logAsCaller...
There is logger.findCaller which seems to be purposefully designed for this, I can't deduce from the docs how it's supposed to be used, though.
A:
There is logger.findCaller which seems to be purposefully designed for this, I can't deduce from the docs how it's supposed to be used, though.
More or less. This is indeed the way a logger finds the caller it should use in its message, but it is an internal detail rather than something user code is meant to call.
But the doc for logger.debug describes the stacklevel keyword-only parameter (emphasis mine):
...The third optional keyword argument is stacklevel, which defaults to 1. If greater than 1, the corresponding number of stack frames are skipped when computing the line number and function name set in the LogRecord created for the logging event. This can be used in logging helpers so that the function name, filename and line number recorded are not the information for the helper function/method, but rather its caller.
And the doc for all other logging methods says that they support the same parameters as debug.
So you could just use:
# Goes directly to the global logger
logging.warning('this might not work', stacklevel=2)
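A minimal sketch (function names taken from the question; stacklevel is available from Python 3.8) showing the effect inside a helper:
import logging
logformat = '%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d - %(message)s'
logging.basicConfig(format=logformat, level=logging.DEBUG)
def my_helper_fn():
    # stacklevel=2 skips this frame, so funcName and lineno resolve
    # to whoever called my_helper_fn
    logging.warning('this might not work', stacklevel=2)
def outer():
    my_helper_fn()  # the record reports module:outer:<this line>
outer()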
|
Generate logs as the function's caller
|
Is there a way to log messages as if they originated from the current function's caller?
I have a very simple global logging facility that gets initialized by means of __main__ as
logformat = '%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d - %(message)s'
logging.basicConfig(format=logformat, level=loglevel)
All functions then use the same global logger directly by means of
# Goes directly to the global logger
logging.warning('this might not work')
Some utility functions emit logs, but it is rather unhelpful that %(module)s:%(funcName)s:%(lineno)d resolves to helpers:my_helper_fn:42, while the caller of my_helper_fn that triggered the log is the interesting part.
Is there a way to log messages inside my_helper_fn so that funcName and friends automatically resolve to the caller of my_helper_fn? Ideally, I'd decorate my_helper_fn with something to the tune of @logAsCaller...
There is logger.findCaller which seems to be purposefully designed for this, I can't deduce from the docs how it's supposed to be used, though.
|
[
"There is logger.findCaller which seems to be purposefully designed for this, I can't deduce from the docs how it's supposed to be used, though.\nMore or less. This is indeed the way a logger finds the caller it should use in its message, but user code has no access to this method.\nBut the doc for logger.debug describes the stacklevel keyword only parameter (emphasize mine):\n\n...The third optional keyword argument is stacklevel, which defaults to 1. If greater than 1, the corresponding number of stack frames are skipped when computing the line number and function name set in the LogRecord created for the logging event. This can be used in logging helpers so that the function name, filename and line number recorded are not the information for the helper function/method, but rather its caller.\n\nAnd the doc for all other logging methods says that they support the same parameters as debug.\nSo you could just use:\n# Goes directly to the global logger\nlogging.warning('this might not work', stacklevel=2)\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074507284_python.txt
|
Q:
how can I update one field in Django rest framework in abstract user model
How can I update a single field of my abstract user model in Django REST framework? Can someone help me? I want to update only device_id without updating any other field, and I don't know whether I have to create another view or add an update method to the serializer.
here is my code
models.py
class User(AbstractUser):
    is_student=models.BooleanField(default=False)
    is_teacher=models.BooleanField(default=False)
    mobile_no=models.CharField(max_length=200,blank=True)
    device_id=models.CharField(max_length=200,blank=True)
    USERNAME_FIELD = 'username'

    def __str__(self):
        return self.username

    class Meta:
        verbose_name_plural="1.User"

@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False,**kwargs):
    if created:
        Token.objects.create(user=instance)
serializers.py
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model=User
        fields=['id','username','mobile_no','is_student','device_id']
views.py
class StudentSignupView(generics.GenericAPIView):
    serializer_class=StudentSignupSerializer

    def post(self,request,*args,**kwargs):
        serializer=self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        user=serializer.save()
        return Response({
            "user": UserSerializer(user, context=self.get_serializer_context()).data,
            "token": Token.objects.get(user=user).key,
            # "message":"account created successfully"
        })

class TeacherSignupView(generics.GenericAPIView):
    serializer_class=TeacherSignupSerializer

    def post(self,request,*args,**kwargs):
        serializer=self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        user=serializer.save()
        return Response({
            "user": UserSerializer(user, context=self.get_serializer_context()).data,
            "token": Token.objects.get(user=user).key,
            # "message":"account created successfully"
        })

class CustomAuthToken(ObtainAuthToken):
    def post(self,request,*args,**kwargs):
        serializer=self.serializer_class(data=request.data, context={'request':request})
        serializer.is_valid(raise_exception=True)
        user=serializer.validated_data['user']
        token,created=Token.objects.get_or_create(user=user)
        return Response({
            'token':token.key,
            'user':UserSerializer(user, context=self.get_serializer_context()).data,
            'is_student':user.is_student
        })
A:
you could create a view like this (ref):
from django.views.generic.edit import UpdateView
from django.contrib.auth import get_user_model

User = get_user_model()

class UserUpdateView(UpdateView):
    model = User
    fields = ['device_id']
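If you want a DRF-native route instead of Django's generic UpdateView, here is a minimal sketch (the serializer and view names are hypothetical): a PATCH/PUT endpoint whose serializer exposes only device_id, so no other field can be changed:
# serializers.py
from django.contrib.auth import get_user_model
from rest_framework import serializers

User = get_user_model()

class DeviceIdSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ['device_id']

# views.py
from rest_framework import generics

class DeviceIdUpdateView(generics.UpdateAPIView):
    # PATCH /users/<pk>/ with {"device_id": "..."} updates only that field
    queryset = User.objects.all()
    serializer_class = DeviceIdSerializer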
|
how can I update one field in Django rest framework in abstract user model
|
how can I update one field in the Django rest framework in the abstract user model can someone help me I want to Update device_id in my abstract user model I want to only update device_id dield without updating other field and I do not know I have to create another view or not or I should add update to serializers
here is my code
models.py
class User(AbstractUser):
is_student=models.BooleanField(default=False)
is_teacher=models.BooleanField(default=False)
mobile_no=models.CharField(max_length=200,blank=True)
device_id=models.CharField(max_length=200,blank=True)
USERNAME_FIELD = 'username'
def __str__(self):
return self.username
class Meta:
verbose_name_plural="1.User"
@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False,**kwargs):
if created:
Token.objects.create(user=instance)
serializers.py
class UserSerializer(serializers.ModelSerializer):
class Meta:
model=User
fields=['id','username','mobile_no','is_student','device_id']
views.py
class StudentSignupView(generics.GenericAPIView):
serializer_class=StudentSignupSerializer
def post(self,request,*args,**kwargs):
serializer=self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
user=serializer.save()
return Response({
"user": UserSerializer(user, context=self.get_serializer_context()).data,
"token": Token.objects.get(user=user).key,
# "message":"account created successfully"
})
class TeacherSignupView(generics.GenericAPIView):
serializer_class=TeacherSignupSerializer
def post(self,request,*args,**kwargs):
serializer=self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
user=serializer.save()
return Response({
"user": UserSerializer(user, context=self.get_serializer_context()).data,
"token": Token.objects.get(user=user).key,
# "message":"account created successfully"
})
class CustomAuthToken(ObtainAuthToken):
def post(self,request,*args,**kwargs):
serializer=self.serializer_class(data=request.data, context={'request':request})
serializer.is_valid(raise_exception=True)
user=serializer.validated_data['user']
token,created=Token.objects.get_or_create(user=user)
return Response({
'token':token.key,
'user':UserSerializer(user, context=self.get_serializer_context()).data,
'is_student':user.is_student
})
|
[
"you could create a view a like this(ref)\nfrom django.views.generic.edit import UpdateView\nfrom myapp.models import Author\nfrom django.contrib.auth import get_user_model\n\nUser = get_user_model()\n\nclass UserUpdateView(UpdateView):\n model = User\n fields = ['device_id']\n \n\n"
] |
[
0
] |
[] |
[] |
[
"abstract_class",
"django",
"django_rest_framework",
"python"
] |
stackoverflow_0074507168_abstract_class_django_django_rest_framework_python.txt
|
Q:
Why does XOR not produce the expected result
3**2==9 ^ 3-2==4
False
True ^ False
True
Why is the result of the first line False when it should be True?
A:
Because the operator ^ has higher precedence than ==, so 9 ^ (3-2) is evaluated first.
A:
It returns False because of operator precedence.
The precedence of ^ is higher than that of ==.
You can see the precedence of operations in this Link.
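A short demonstration of how the original expression parses (plain Python, nothing assumed beyond the question):
# ^ binds tighter than ==, and == chains, so the original line means:
#   (3**2) == (9 ^ (3 - 2)) == 4
#   -> 9 == 8 == 4 -> (9 == 8) and (8 == 4) -> False
print(3**2 == 9 ^ 3-2 == 4)        # False
print((3**2) == (9 ^ (3-2)) == 4)  # False: same parse, made explicit

# Parentheses restore the intended XOR of two booleans:
print((3**2 == 9) ^ (3-2 == 4))    # True ^ False -> True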
|
Why does XOR not produce the expected result
|
3**2==9 ^ 3-2==4
False
True ^ False
TRUE
Why is the result of first line False while it should be True?
|
[
"Because the operator ^ has more priority than == and the operation 9 ^ 3 has priority\n",
"It returns False because of operation priority.\nThe priority of ^ is more than priority of ==\nYou can see the priority of operations in this Link\n"
] |
[
0,
0
] |
[] |
[] |
[
"difference",
"google_colaboratory",
"python",
"try_catch",
"xor"
] |
stackoverflow_0074454972_difference_google_colaboratory_python_try_catch_xor.txt
|
Q:
Convert BufferedImage.TYPE_INT_ARGB to CV_8UC3 with JavaCV
I've been working on something similar to OpenTrack in Java. I have a working example with a demo video opened with FFMpegGrabber, but now I'm trying to implement it with the PS3 Eye webcam. I'm using JavaCV. I tried to get the CL-Eye SDK, but it is now impossible to register at their site, which is required to obtain the DLL library for my PS3 Eye webcam. Instead I found https://github.com/diwi/PS3Eye, which is based on libusb.
My code looks like this:
public Frame grab() {
    int frame_w = ps3eye.getResolution().w;
    int frame_h = ps3eye.getResolution().h;

    BufferedImage frame = new BufferedImage(frame_w, frame_h, BufferedImage.TYPE_INT_ARGB);
    int[] pixels = ((DataBufferInt) frame.getRaster().getDataBuffer()).getData();
    ps3eye.getFrame(pixels);
    framerate.update();

    Mat image = new Mat(frame.getWidth(), frame.getHeight(), CV_32SC4);
    image.put(0, 0, pixels);
    List<Mat> channels = new ArrayList<>(4);
    Core.split(image, channels);
    Collections.swap(channels, 0, channels.size() - 1);
    Core.merge(channels, image);
    Mat finalImage = new Mat(frame.getWidth(), frame.getHeight(), CV_8UC3);
    cvtColor(image, finalImage, COLOR_RGBA2BGR);

    return CONVERTER.convert(finalImage);
}
My further operations rely on SimpleBlobDetector, findContours and cvtColor, which work fine with a video file and the CV_8UC3 type - but 32S is not supported, in particular by cvtColor, and I need 3 image channels for the FrameConverter. Do you have any idea how to do this properly? Thanks!
A:
Ok, the solution was seriously easier than I expected (even though I thought I had already tried this method).
public Frame grab() {
    int frame_w = ps3eye.getResolution().w;
    int frame_h = ps3eye.getResolution().h;

    BufferedImage frame = new BufferedImage(frame_w, frame_h, BufferedImage.TYPE_3BYTE_BGR);
    byte[] pixels = ((DataBufferByte) frame.getRaster().getDataBuffer()).getData();
    ps3eye.getFrame(pixels);
    framerate.update();

    Mat finalImage = new Mat(frame_h, frame_w, CV_8UC3);
    finalImage.put(0, 0, pixels);

    return CONVERTER.convert(finalImage);
}
I hadn't realized that changing the BufferedImage type would automatically produce the proper data layout and byte order.
IR-Head-Tracker test
|
Convert BufferedImage.TYPE_INT_ARGB to CV_8UC3 with JavaCV
|
I've been working on something similiar to OpenTrack in Java. I have working example with Demo Video opened with FFMpegGrabber but now I'm trying to implement it with PS3 Eye Webcam. I'm using JavaCV and I've tried to get CL-Eye SDK but now it is impossible to register at their site what is needed to get DLL Library for my PS3 Eye webcam. I've found https://github.com/diwi/PS3Eye based on libusb.
My code looks like this:
public Frame grab() {
int frame_w = ps3eye.getResolution().w;
int frame_h = ps3eye.getResolution().h;
BufferedImage frame = new BufferedImage(frame_w, frame_h, BufferedImage.TYPE_INT_ARGB);
int[] pixels = ((DataBufferInt) frame.getRaster().getDataBuffer()).getData();
ps3eye.getFrame(pixels);
framerate.update();
Mat image = new Mat(frame.getWidth(), frame.getHeight(), CV_32SC4);
image.put(0, 0, pixels);
List<Mat> channels = new ArrayList<>(4);
Core.split(image, channels);
Collections.swap(channels,0,channels.size()-1);
Core.merge(channels, image);
Mat finalImage = new Mat(frame.getWidth(), frame.getHeight(), CV_8UC3);
cvtColor(image, finalImage, COLOR_RGBA2BGR);
return CONVERTER.convert(finalImage);
}
My further operations base on SimpleBlobDetectour, findContours and cvtColors which are working fine with video file and CV_8UC3 type - but 32S is not supported especially by cvtColors and I need 3 image channels for FrameConverter. Do you have any idea how to do it properly? Thanks!
|
[
"Ok, the solutions was seriously easier than I've expected (even though I thought I've tried this method).\npublic Frame grab() {\n int frame_w = ps3eye.getResolution().w;\n int frame_h = ps3eye.getResolution().h;\n BufferedImage frame = new BufferedImage(frame_w, frame_h, BufferedImage.TYPE_3BYTE_BGR);\n byte[] pixels = ((DataBufferByte) frame.getRaster().getDataBuffer()).getData();\n ps3eye.getFrame(pixels);\n framerate.update();\n Mat finalImage = new Mat(frame_h, frame_w, CV_8UC3);\n finalImage.put(0, 0, pixels);\n\n return CONVERTER.convert(finalImage);\n}\n\nI haven't though that changing BufferedImage type will automatically get proper data and the order of bytes.\nIR-Head-Tracker test\n"
] |
[
1
] |
[] |
[] |
[
"c++",
"java",
"javacv",
"opencv",
"python"
] |
stackoverflow_0074503116_c++_java_javacv_opencv_python.txt
|
Q:
Matching Whole Dictionary Keys in String using List Comprehension
I have a dictionary called cc_dict and am trying to use list comprehension to iterate over each key to find a match in a string called new_string. The line below works, but it also matches keys that occur inside longer words. I want to match whole words only.
So, for example, the key "test" is matched in the string "text for testing".
How can I do this?
Example one:
cc_dict = {"test one": "SSK1", "result": "TGG", "pass": "PSS1", "fail": "FL1"}
new_string = "If test one is complete."
output = [te for key, te in cc_dict.items() if key in new_string]
Desired result: SSK1
Example two:
cc_dict = {"test one": "SSK1", "result": "TGG", "pass": "PSS1", "fail": "FL1"}
new_string = "There has been a failure in annex 1."
output = [te for key, te in cc_dict.items() if key in new_string]
Desired result: no result but as "fail" is contained in the word "failure" the list comprehension currently returns FL1.
A:
If I understand correctly, you can split your match string into words using split:
[te for key, te in cc_dict.items() if key in new_string.split()]

Note that this only matches single-word keys; a multi-word key such as "test one" will never equal a single token, so that case needs the regex approach in the next answer.
A:
You can try the following.
This method uses a regex to determine whether the whole string you are looking for is present in the string you are searching in:
import re

def string_found(string1, string2):
    if re.search(r"\b" + re.escape(string1) + r"\b", string2):
        return True
    return False

This can now be used in your list comprehension:
output = [te for key, te in cc_dict.items() if string_found(key, new_string)]
Results:
# scenario 1
cc_dict = {"test one": "SSK1", "result": "TGG", "pass": "PSS1", "fail": "FL1"}
new_string = "If test one is complete."
output = [te for key, te in cc_dict.items() if string_found(key, new_string)]
output
>>['SSK1']
# scenario 2
cc_dict = {"test one": "SSK1", "result": "TGG", "pass": "PSS1", "fail": "FL1"}
new_string = "There has been a failure in annex 1."
output = [te for key, te in cc_dict.items() if string_found(key, new_string)]
output
>>[]
|
Matching Whole Dictionary Keys in String using List Comprehension
|
I have a dictionary called cc_dict and am trying to use list comprehension to iterate over each key to find a match in a string called new_string. The line below works but it also matches keys that are part of whole words. I want to match whole words only.
So, for example, the key "test" is matched in the string "text for testing".
How can I do this?
Example one:
cc_dict = {“test one”: “SSK1”, “result”: “TGG”, “pass”: “PSS1”, “fail”: “FL1”}
new_string = "If test one is complete."
output = [te for key, te in cc_dict.items() if key in new_string]
Desired result: SSK1
Example two:
cc_dict = {“test one”: “SSK1”, “result”: “TGG”, “pass”: “PSS1”, “fail”: “FL1”}
new_string = "There has been a failure in annex 1."
output = [te for key, te in cc_dict.items() if key in new_string]
Desired result: no result but as "fail" is contained in the word "failure" the list comprehension currently returns FL1.
|
[
"If I understand correctly, you can split your match string into words using split:\n[te for key, te in cc_dict.items() if key in new_string.split()]\n\n",
"You can try the following,\nThis method will use regex to determine if the whole string that you are looking for is available in the string that you are searching in,\ndef string_found(string1, string2):\n if re.search(r\"\\b\" + re.escape(string1) + r\"\\b\", string2):\n return True\n return False\n\nThis can now be used in your dict comprehension,\noutput = [te for key, te in cc_dict.items() if string_found(key, new_string)]\n\nResults,\n# scenario 1\ncc_dict = {“test one”: “SSK1”, “result”: “TGG”, “pass”: “PSS1”, “fail”: “FL1”}\n\nnew_string = \"If test one is complete.\"\n\noutput = [te for key, te in cc_dict.items() if string_found(key, new_string)]\n\noutput\n>>['SSK1']\n\n# scenario 2\ncc_dict = {“test one”: “SSK1”, “result”: “TGG”, “pass”: “PSS1”, “fail”: “FL1”}\n\nnew_string = \"There has been a failure in annex 1.\"\n\noutput = [te for key, te in cc_dict.items() if string_found(key, new_string)]\n\noutput\n>>[]\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074476930_python.txt
|
Q:
replace all floats in df with corresponding index name
I want to replace all values in my df that are float (excluding nans), with the name of the index of the corresponding row.
I have this:
index1 10.0 190.6
index2 17.9 NaN
index3 NaN 8.0
index4 9.0 70.0
I want to have this:
index1 index1 index1
index2 index2 NaN
index3 NaN index3
index4 index4 index4
Any ideas?
A:
Technically, np.nan is also float. If you want to replace non-null values with the index values, you can use df.where:
output = df.where(df.isna(), df.index.tolist())
Output:
1 2
0
index1 index1 index1
index2 index2 NaN
index3 NaN index3
index4 index4 index4
A:
If you want to do it for all columns of your df, a corrected version of this idea (the original condition df[column].notna is a method reference, which is always truthy) keeps NaNs in place with np.where:
import numpy as np

for column in df.columns:
    # Where the value is non-null, substitute the row's index label;
    # where it is NaN, keep the NaN.
    df[column] = np.where(df[column].notna(), df.index, df[column])

If you want to do it for some columns only, replace df.columns with a list of the columns you want to handle.
Regards,
|
replace all floats in df with corresponding index name
|
I want to replace all values in my df that are float (excluding nans), with the name of the index of the corresponding row.
I have this:
index1 10.0 190.6
index2 17.9 NaN
index3 NaN 8.0
index4 9.0 70.0
I want to have this:
index1 index1 index1
index2 index2 NaN
index3 NaN index3
index4 index4 index4
Any ideas?
|
[
"Technically, np.nan is also float. If you want to replace non-null values with the index values, you can use df.where:\noutput = df.where(df.isna(), df.index.tolist())\n\nOutput:\n 1 2\n0 \nindex1 index1 index1\nindex2 index2 NaN\nindex3 NaN index3\nindex4 index4 index4\n\n",
"if you want to do it for all columns of your df:\nfor column in df.columns:\n\n df[column] = [e for e in df.index if df[column].notna]\n\nIf you want to do it for some columns, replace df.columns for a list with the columns you want to handle\nRegards,\n"
] |
[
2,
0
] |
[] |
[] |
[
"numpy",
"pandas",
"python"
] |
stackoverflow_0074506980_numpy_pandas_python.txt
|
Q:
Set xticks visible when plotting using pandas
Consider the following snippet
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
data = np.random.rand(10,5)
cols = ["a","b","c","d","e"]
df = pd.DataFrame(data=data, columns = cols)
df.index.name="Time (s)"
fig,axes = plt.subplots(3,2,sharex=True, squeeze=False)
axes = axes.T.flat
axes[5].remove()
df.plot(subplots=True,grid=True,legend=True,ax = axes[0:5])
that produces the following plot
I wish to show the xticks in the subplots where they are missing as I wrote in red with reference to the above picture.
I wish to show only the xticks where I marked in red, not the labels. The labels are fine where they currently are and shall be kept there.
After some search, I tried with
for ax in axes:
ax.tick_params(axis="x")
and
for ax in axes:
ax.spines.set(visible=True)
but with no success.
Any hints?
EDIT: As someone kindly suggested, if I set sharex=False, then when I horizontally zoom on one axes I will not have the same zoom effect on the other axes and this is not what I want.
What I want is to: a) show the xticks in all axes, b) when I horizontally zoom on one axes all the other axes are horizontally zoomed of the same amount.
A:
You need to turn off sharing x properties by setting sharex=False (which is the default value by the way in matplotlib.pyplot.subplots):
Replace this:
fig,axes = plt.subplots(3,2,sharex=True, squeeze=False)
By this:
fig,axes = plt.subplots(3,2, squeeze=False)
# Output:
|
Set xticks visible in when plotting using pandas
|
Consider the following snippet
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
data = np.random.rand(10,5)
cols = ["a","b","c","d","e"]
df = pd.DataFrame(data=data, columns = cols)
df.index.name="Time (s)"
fig,axes = plt.subplots(3,2,sharex=True, squeeze=False)
axes = axes.T.flat
axes[5].remove()
df.plot(subplots=True,grid=True,legend=True,ax = axes[0:5])
that produces the following plot
I wish to show the xticks in the subplots where they are missing as I wrote in red with reference to the above picture.
I wish to show only the xticks where I marked in red, not the labels. The labels are fine where they currently are and shall be kept there.
After some search, I tried with
for ax in axes:
ax.tick_params(axis="x")
and
for ax in axes:
ax.spines.set(visible=True)
but with no success.
Any hints?
EDIT: As someone kindly suggested, if I set sharex=False, then when I horizontally zoom on one axes I will not have the same zoom effect on the other axes and this is not what I want.
What I want is to: a) show the xticks in all axes, b) when I horizontally zoom on one axes all the other axes are horizontally zoomed of the same amount.
|
[
"You need to turn off sharing x properties by setting sharex=False (which is the default value by the way in matplotlib.pyplot.subplots):\nReplace this:\nfig,axes = plt.subplots(3,2,sharex=True, squeeze=False)\n\nBy this:\nfig,axes = plt.subplots(3,2, squeeze=False)\n\n# Output:\n\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"pandas",
"python"
] |
stackoverflow_0074507451_matplotlib_pandas_python.txt
|
Q:
How to write data of type DataFrame using asksaveasfile dialog
I am trying to read data from an input CSV file and write it to another CSV file. I know how to write the dataFile using the to_csv method with a pre-determined output file (output.csv). How do I do it via the asksaveasfile dialog instead? Any help is appreciated.
import csv
import pandas as pd
import os
import tkinter as tk
from tkinter import filedialog
SAVING_PATH = 'C:/Users/Desktop/'
root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename()
dataFile=pd.read_csv(file_path,usecols=['Name','Email','Gender'])
dataFile.to_csv(os.path.join(SAVING_PATH,r'output.csv'))
dataFile = filedialog.asksaveasfile(mode='w', defaultextension=".csv")
A:
Never mind, I fixed the problem already.
import csv
import pandas as pd
import os
import tkinter as tk
from tkinter import filedialog
root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename()
dataFile=pd.read_csv(file_path,usecols=['Name','Email','Gender'])
SAVING_PATH = filedialog.asksaveasfile(mode='w', defaultextension=".csv")
dataFile.to_csv(SAVING_PATH)
A:
@Aurora_Titanium, it works for me as well, but it creates an extra empty row after each row.
So I slightly modified yours:
data = pd.DataFrame(data, columns = ["Voltage [V]", "Current [A]"])
file_path = filedialog.asksaveasfilename(defaultextension=".csv",
filetypes=[("csv file", ".csv")],
)
data.to_csv(file_path, sep = ";", index = False, decimal = ",")
Output:
Voltage [V] Current [A]
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
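A likely cause of those blank rows (an assumption, not stated in either answer): asksaveasfile opens the file in text mode without newline='', so on Windows the line endings pandas writes get translated to \r\r\n. Opening the file yourself with newline='' avoids this:
# Hypothetical fix: ask for a path instead of an open handle, then
# open with newline='' so to_csv's line endings are not re-translated.
path = filedialog.asksaveasfilename(defaultextension=".csv")
if path:
    with open(path, "w", newline="") as f:
        dataFile.to_csv(f, index=False)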
|
How to write data of type DataFrame using asksaveasfile dialog
|
I am trying to save my data from an input csv file and write it to another csv file. I know how to write the dataFile using the to_csv method and using a pre-determined file to write into(output.csv). How do I do it via asksaveasfile dialog method. Any help is appreciated.
import csv
import pandas as pd
import os
import tkinter as tk
from tkinter import filedialog
SAVING_PATH = 'C:/Users/Desktop/'
root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename()
dataFile=pd.read_csv(file_path,usecols=['Name','Email','Gender'])
dataFile.to_csv(os.path.join(SAVING_PATH,r'output.csv'))
dataFile = filedialog.asksaveasfile(mode='w', defaultextension=".csv")
|
[
"Nevermind I fixed the problem already. \nimport csv\n import pandas as pd\n import os\n import tkinter as tk\n from tkinter import filedialog\n\n root = tk.Tk()\n root.withdraw()\n file_path = filedialog.askopenfilename() \n dataFile=pd.read_csv(file_path,usecols=['Name','Email','Gender'])\n SAVING_PATH = filedialog.asksaveasfile(mode='w', defaultextension=\".csv\")\n dataFile.to_csv(SAVING_PATH)\n\n",
"@Aurora_Titanium, it wokrs for me as well. But it creates an extra empty row after each row.\nSo I slightly modified yours:\ndata = pd.DataFrame(data, columns = [\"Voltage [V]\", \"Current [A]\"])\n \nfile_path = filedialog.asksaveasfilename(defaultextension=\".csv\",\n filetypes=[(\"csv file\", \".csv\")],\n ) \ndata.to_csv(file_path, sep = \";\", index = False, decimal = \",\")\n\nOutput:\n Voltage [V] Current [A]\n 1 1\n 2 2\n 3 3\n 4 4\n 5 5\n 6 6\n 7 7\n 8 8\n 9 9\n 10 10\n\n"
] |
[
6,
0
] |
[] |
[] |
[
"csv",
"pandas",
"python"
] |
stackoverflow_0047453050_csv_pandas_python.txt
|
Q:
Getting {"errorMessage": "'httpMethod'", "errorType": "KeyError"
I am using a Lambda function to handle GET and POST requests. While testing, it gives this error:
{"errorMessage": "'httpMethod'", "errorType": "KeyError", "requestId": "435e6811-acc5-4bc7-b009-377bc6178bb8", "stackTrace": [" File "/var/task/lambda_function.py", line 11, in lambda_handler\n if event['httpMethod'] == 'GET':\n"]} :
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('ApigatewayDynamo')

def lambda_handler(event, context):
    print("event", event)
    if event['httpMethod'] == 'GET':
        name = event['queryStringParameters']['name']
        response = table.get_item(Key={'name': name})
        print(response)
        print(response['Item'])
        return {
            'statusCode': 200,
            'body': json.dumps(response['Item'])
        }
    if event['httpMethod'] == 'POST':
        body = json.loads(event['body'])
        print('body', body)
        name = body.get('name')
        print('Name is ', name)
        if name is None:
            return {
                'statusCode': 400,
                'body': json.dumps("Check the payload/ method")
            }
        table.put_item(Item=body)
        return {
            'statusCode': 200,
            'body': json.dumps("Name added successfully")
        }
    return {
        'statusCode': 400,
        'body': json.dumps("Check the payload/ method/ Lambda function")
    }
The DynamoDB table has name as the primary key, and the JSON test data is:
{
"name": "Kaira",
"Phone Number": 98777
}
What is to be done to resolve this?
I am trying to insert the data with the POST method and fetch the data with the GET method.
A:
This error happens before you even read from DynamoDB.
You are getting a key error while trying to parse the event object. Have a look at your event object and ensure the paths of the values you are trying to resolve from it are correct.
If that fails, share the value of event here and we can guide you better.
I also strongly suggest wrapping API requests in try catch/except blocks
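One common cause worth checking (an assumption, not stated in the answers): with an API Gateway HTTP API using payload format 2.0, or a REST API integration without Lambda proxy enabled, the event has no top-level httpMethod key. A defensive sketch:
def get_http_method(event):
    # REST API with Lambda proxy integration (payload format 1.0)
    if 'httpMethod' in event:
        return event['httpMethod']
    # HTTP API with payload format 2.0
    return event.get('requestContext', {}).get('http', {}).get('method')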
A:
DynamoDB attribute names are case sensitive - the attribute name "Name" which you write is not the same as the attribute name "name" which you try to read.
A:
The GET request is also fetching the data now; the detailed error was that Decimal was not getting serialised.
Got it working by importing simplejson as json.
|
Getting {"errorMessage": "'httpMethod'", "errorType": "KeyError"
|
Using Lambda function to Get and post request. While testing it gives error
{"errorMessage": "'httpMethod'", "errorType": "KeyError", "requestId": "435e6811-acc5-4bc7-b009-377bc6178bb8", "stackTrace": [" File "/var/task/lambda_function.py", line 11, in lambda_handler\n if event['httpMethod'] == 'GET':\n"]} :
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('ApigatewayDynamo')
def lambda_handler(event, context):
print("event", event)
if event['httpMethod'] == 'GET':
name = event['queryStringParameters']['name']
response = table.get_item(Key={'name': name})
print(response)
print(response['Item'])
return {
'statusCode': 200,
'body': json.dumps(response['Item'])
}
if event['httpMethod'] == 'POST':
body = json.loads(event['body'])
print('body', body)
name = body.get('name')
print('Name is ', name)
if name is None:
return {
'statusCode': 400,
'body': json.dumps("Check the payload/ method")
}
table.put_item(Item=body)
return {
'statusCode': 200,
'body': json.dumps("Name added successfully")
}
return {
'statusCode': 400,
'body': json.dumps("Check the payload/ method/ Lambda function")
}
The Dynamo db table has name as primary key and the json testing data is
{
"name": "Kaira",
"Phone Number": 98777
}
What is to be done to resolve this?
I am trying to insert the data from post method and get the data from Get method.
|
[
"This error happens before you even read from DynamoDB.\nYou are getting a key error while trying to parse the event object. Have a look at your event object and ensure the path of the values your are trying to resolve from it are correct.\nIf that fails, share the value of event here and we can guide you better.\nI also strongly suggest wrapping API requests in try catch/except blocks\n",
"DynamoDB attribute names are case sensitive - the attribute name \"Name\" which you write is not the as the attribute name \"name\" which you try to read.\n",
"The get request is also fetching the data now, the detail error was decimal was not getting serialised.\nGot it worked out by importing simplejson as json\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"amazon_dynamodb",
"amazon_web_services",
"aws_lambda",
"boto3",
"python"
] |
stackoverflow_0074506245_amazon_dynamodb_amazon_web_services_aws_lambda_boto3_python.txt
|
Q:
Replace numpy columns when indexing backwards
I'm using advanced indexing on numpy arrays but find that my columns do not swap; instead, they're replaced.
For example:
test_array = np.array([ [1, 2, 3, 4, 5] , [5, 4, 3, 2, 1] , [5, 2, 2, 2, 1] , [1, 2, 2, 1, 1] ])
zeros = np.repeat( np.zeros((1, 5)) , 2, axis=0)
swap_array = np.vstack([test_array, zeros])
swap_array[ : , [ 0 , 2 ] ] = swap_array[ : , [ -2 , -1 ] ]
print(swap_array)
#
[[0. 0. 5. 1. 0. 0.]
[0. 0. 2. 2. 0. 0.]
[0. 0. 2. 2. 0. 0.]
[0. 0. 2. 1. 0. 0.]
[0. 0. 1. 1. 0. 0.]]
What is the most effective way to swap the first two columns with the last two, given that I do not know the shape of the array?
A:
They are indeed replaced, because there is no assignment stating that the last two columns should contain the first two. To do this, you can use the comma operator (or tuple unpacking), which evaluates all expressions on the right-hand side, before any assignment is made. There are pitfalls with this approach, so it is not safe, in general (See comments below & thanks @Jérôme Richard).
test_array = np.array([ [1, 2, 3, 4, 5] , [5, 4, 3, 2, 1] , [5, 2, 2, 2, 1] , [1, 2, 2, 1, 1] ])
zeros = np.repeat( np.zeros((1, 5)) , 2, axis=0)
swap_array = np.vstack([test_array, zeros])
print(swap_array)
swap_array[ : , [ 0 , 2 ] ], swap_array[ : , [ -2, -1 ] ] = swap_array[ : , [ -2 , -1 ] ], swap_array[ : , [ 0 , 2 ] ]
print(swap_array)
You can do this more concisely and safely. The approach below should be preferred.
swap_array[ : , [ 0 , 2, -2, -1 ] ] = swap_array[ : , [ -2 , -1, 0, 2 ] ]
Result for both codes:
[[1. 2. 3. 4. 5.]
[5. 4. 3. 2. 1.]
[5. 2. 2. 2. 1.]
[1. 2. 2. 1. 1.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]]
[[4. 2. 5. 1. 3.]
[2. 4. 1. 5. 3.]
[2. 2. 1. 5. 2.]
[1. 2. 1. 1. 2.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]]
|
Replace numpy columns when indexing backwards
|
I'm using advanced indexing on the numpy arrays but find that my columns do not swap but instead they're replaced.
For example:
test_array = np.array([ [1, 2, 3, 4, 5] , [5, 4, 3, 2, 1] , [5, 2, 2, 2, 1] , [1, 2, 2, 1, 1] ])
zeros = np.repeat( np.zeros((1, 5)) , 2, axis=0)
swap_array = np.vstack([test_array, zeros])
swap_array[ : , [ 0 , 2 ] ] = swap_array[ : , [ -2 , -1 ] ]
print(swap_array)
#
[[0. 0. 5. 1. 0. 0.]
[0. 0. 2. 2. 0. 0.]
[0. 0. 2. 2. 0. 0.]
[0. 0. 2. 1. 0. 0.]
[0. 0. 1. 1. 0. 0.]]
What is the most effective way to swap the first-two columns with the last two, given I do not know the same of the array.
|
[
"They are indeed replaced, because there is no assignment stating that the last two columns should contain the first two. To do this, you can use the comma operator (or tuple unpacking), which evaluates all expressions on the right-hand side, before any assignment is made. There are pitfalls with this approach, so it is not safe, in general (See comments below & thanks @Jérôme Richard).\ntest_array = np.array([ [1, 2, 3, 4, 5] , [5, 4, 3, 2, 1] , [5, 2, 2, 2, 1] , [1, 2, 2, 1, 1] ])\nzeros = np.repeat( np.zeros((1, 5)) , 2, axis=0)\nswap_array = np.vstack([test_array, zeros])\nprint(swap_array)\n\nswap_array[ : , [ 0 , 2 ] ], swap_array[ : , [ -2, -1 ] ] = swap_array[ : , [ -2 , -1 ] ], swap_array[ : , [ 0 , 2 ] ]\nprint(swap_array)\n\nYou can do this more concisely and safely. The approach below should be preferred.\nswap_array[ : , [ 0 , 2, -2, -1 ] ] = swap_array[ : , [ -2 , -1, 0, 2 ] ]\n\nResult for both codes:\n[[1. 2. 3. 4. 5.]\n [5. 4. 3. 2. 1.]\n [5. 2. 2. 2. 1.]\n [1. 2. 2. 1. 1.]\n [0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0.]]\n\n[[4. 2. 5. 1. 3.]\n [2. 4. 1. 5. 3.]\n [2. 2. 1. 5. 2.]\n [1. 2. 1. 1. 2.]\n [0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0.]]\n\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"numpy",
"python"
] |
stackoverflow_0074507482_arrays_numpy_python.txt
|
Q:
how to mock `await asyncio.Future()`?
I want to write an asyncio main function in pytest:
async def main(host, port):
    log.debug('starting websockets server...')
    async with websockets.serve(myserver, host, port):
        await asyncio.Future()  # run forever

async def test_main():
    with patch('websockets.legacy.server.Serve') as mock_serve:
        mock_serve.return_value = ''
        with patch('_asyncio.Future') as mock_future:
            mock_future.return_value = ''
            await main('', '')  # <- hang in this
but it always stops at await asyncio.Future(). Any idea? Thanks!
A:
The problem is that you don't patch the Future correctly. Try debugging with a breakpoint on await asyncio.Future() and you will see that type(asyncio.Future) is not a mock. But if you patch using with patch('asyncio.Future') as mock_future: and try the same thing, you will get type(asyncio.Future) is unittest.mock.MagicMock.
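A hedged sketch of the corrected test (assuming Python 3.8+, and keeping the original websockets patch target, whose correctness depends on how main() resolves websockets.serve): calling an AsyncMock returns a coroutine, so await asyncio.Future() completes immediately instead of hanging:
from unittest.mock import AsyncMock, patch

async def test_main():
    with patch('websockets.legacy.server.Serve'):
        # Patch the name main() actually looks up: asyncio.Future.
        with patch('asyncio.Future', new_callable=AsyncMock):
            await main('', '')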
|
how to mock `await asyncio.Future()`?
|
I want to write a asyncio main function in pytest:
async def main(host, port):
log.debug('starting websockets server...')
async with websockets.serve(myserver, host, port):
await asyncio.Future() # run forever
async def test_main():
with patch('websockets.legacy.server.Serve') as mock_serve:
mock_serve.return_value=''
with patch('_asyncio.Future') as mock_future:
mock_future.return_value = ''
await main('','') # <- hang in this
but it always stop in await asyncio.Future() , any idea? thanks!
|
[
"The problem is that it that you dont patch the Future. Try debugging it with a breakpoint on await asyncio.Future() and you will see that type(asyncio.Future) is not a mock. But if you patch using with patch('asyncio.Future') as mock_future: and try the same thing, you will get type(asyncio.Future) is unittest.mock.MagicMock.\n"
] |
[
1
] |
[] |
[] |
[
"pytest",
"python",
"python_asyncio"
] |
stackoverflow_0074470798_pytest_python_python_asyncio.txt
|
Q:
Why did the Seq2SeqTrainer not stop when the EarlyStoppingCallback criteria is met?
When trying to use EarlyStopping for Seq2SeqTrainer, e.g. with patience set to 2 and threshold 1.0:
training_args = Seq2SeqTrainingArguments(
    output_dir='./',
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    logging_steps=1,
    save_steps=5,
    eval_steps=1,
    max_steps=10,
    evaluation_strategy="steps",
    predict_with_generate=True,
    report_to=None,
    metric_for_best_model="chr_f_score",
    load_best_model_at_end=True
)

early_stop = EarlyStoppingCallback(2, 1.0)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=valid_data.with_format("torch"),
    eval_dataset=test_data.with_format("torch"),
    compute_metrics=compute_metrics,
    callbacks=[early_stop]
)

trainer.train()
The model continues training until max_steps instead of stopping after the stopping criteria is met.
I'm not sure if this is a bug or maybe some argument is missing when I use Seq2SeqTrainer; a working example to replicate the issue can be found at https://www.kaggle.com/code/alvations/huggingface-earlystopping-callbacks?scriptVersionId=110637297
Q: Why did the Seq2SeqTrainer not stop when the EarlyStoppingCallback criteria is met?
After the max_steps, if we do some probing, somehow the early_stopping_patience_counter has been reached but the training didn't stop
>>> early_stop.early_stopping_patience_counter
2
A:
As discussed in the comments on the question, the unexpected behavior of evaluation continuing beyond the early-stopping point is because save_steps is set to 5 while eval_steps is 1.
training_args = Seq2SeqTrainingArguments(
    ...,
    logging_steps=1,
    save_steps=5,
    eval_steps=1,
    max_steps=10,
    evaluation_strategy="steps",
    metric_for_best_model="chr_f_score",
    load_best_model_at_end=True
)
To debug the EarlyStoppingCallback, try setting the save_steps to be the same as eval_steps, e.g.
training_args = Seq2SeqTrainingArguments(
    ...,
    logging_steps=1,
    save_steps=1,
    eval_steps=1,
    max_steps=10,
    evaluation_strategy="steps",
    metric_for_best_model="chr_f_score",
    load_best_model_at_end=True
)
Afterwards, you should see that the early stopping behavior is now expected to stop at the patience / threshold you've set.
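For reference, a sketch assuming the transformers EarlyStoppingCallback (its keyword names are early_stopping_patience and early_stopping_threshold); passing them by keyword makes the intent explicit:
from transformers import EarlyStoppingCallback

# Stop if chr_f_score fails to improve by at least 1.0
# over 2 consecutive evaluations.
early_stop = EarlyStoppingCallback(
    early_stopping_patience=2,
    early_stopping_threshold=1.0,
)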
A:
Try to add the parameters of EarlyStoppingCallback as in the definition of the function in https://docs.fast.ai/callback.tracker.html
that is:
early_stop = EarlyStoppingCallback(min_delta=1.0, patience=2)
|
Why did the Seq2SeqTrainer not stop when the EarlyStoppingCallback criteria is met?
|
When trying to use EarlyStopping for Seq2SeqTrainer, e.g. patience was set to 1 and threshold 1.0:
training_args = Seq2SeqTrainingArguments(
output_dir='./',
num_train_epochs=3,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
logging_steps=1,
save_steps=5,
eval_steps=1,
max_steps=10,
evaluation_strategy="steps",
predict_with_generate=True,
report_to=None,
metric_for_best_model="chr_f_score",
load_best_model_at_end=True
)
early_stop = EarlyStoppingCallback(2, 1.0)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=valid_data.with_format("torch"),
eval_dataset=test_data.with_format("torch"),
compute_metrics=compute_metrics,
callbacks=[early_stop]
)
trainer.train()
The model continues training until max_steps instead of stopping after the stopping criteria is met.
I'm not sure if this is a bug or maybe some argument is missing in when I use Seq2SeqTrainer, a working code to replicate the issue can be found on https://www.kaggle.com/code/alvations/huggingface-earlystopping-callbacks?scriptVersionId=110637297
Q: Why did the Seq2SeqTrainer not stop when the EarlyStoppingCallback criteria is met?
After the max_steps, if we do some probing, somehow the early_stopping_patience_counter has been reached but the training didn't stop
>>> early_stop.early_stopping_patience_counter
2
|
[
"As discussed in the comments in the question, the unexpected behavior of eval_steps going beyond the early stopping is because of the save_state being set at 5.\ntraining_args = Seq2SeqTrainingArguments(\n ...,\n logging_steps=1,\n save_steps=5,\n eval_steps=1,\n max_steps=10,\n evaluation_strategy=\"steps\",\n metric_for_best_model=\"chr_f_score\",\n load_best_model_at_end=True\n)\n\nTo debug the EarlyStoppingCallback, try setting the save_steps to be the same as eval_steps, e.g.\ntraining_args = Seq2SeqTrainingArguments(\n ...,\n logging_steps=1,\n save_steps=1,\n eval_steps=1,\n max_steps=10,\n evaluation_strategy=\"steps\",\n metric_for_best_model=\"chr_f_score\",\n load_best_model_at_end=True\n)\n\nAfterwards, you should see that the early stopping behavior is now expected to stop at the patience / threshold you've set.\n",
"Try to add the parameters of EarlyStoppingCallback as in the definition of the function in https://docs.fast.ai/callback.tracker.html\nthat is:\nearly_stop = EarlyStoppingCallback(min_delta=1.0, patience=2)\n"
] |
[
0,
0
] |
[] |
[] |
[
"encoder_decoder",
"huggingface_transformers",
"python",
"pytorch",
"seq2seq"
] |
stackoverflow_0074394999_encoder_decoder_huggingface_transformers_python_pytorch_seq2seq.txt
|
Q:
Python - Having trouble assigning value into new col based on data from another cell
I have data that looks like;
ID File
1 this_file_whatever.ext1
2 this_whatever.ext2
3 this_is_ok_pooh.ext3
I am trying to get the extension and put the key from a dict in a new col based on the extension in File.
def create_filegroups(row):
    filegroup_dict = {
        'GroupA': 'ext1',
        'GroupB': 'ext2',
        'GroupC': 'ext3'
    }
    if '.' in row['Name']:
        test = row['Name'].split(".", 1)[1]
        return test

DF = build_df()
DF['COL3'] = DF.apply(create_filegroups(row), axis=1)
print(DF)
I can't figure out what I am doing wrong. The dict compare I can do when I get there, but I can't seem to apply a function to the cells.
A:
I believe you need pandas.Series.map after extracting the file extension from the column File.
Try this:
df['COL3'] = (
    df['File']
    .str.extract(r'\w+\.(\w+)', expand=False)
    .map({k: v for v, k in filegroup_dict.items()})
)
# Output :
print(df)
ID File COL3
0 1 this_file_whatever.ext1 GroupA
1 2 this_whatever.ext2 GroupB
2 3 this_is_ok_pooh.ext3 GroupC
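As a side note, not part of the answer above: the immediate error in the original attempt is that DataFrame.apply needs the function object rather than a call, and the function reads row['Name'] although the column is named File:
# Pass the function itself; apply feeds it one row at a time (axis=1).
DF['COL3'] = DF.apply(create_filegroups, axis=1)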
|
Python - Having trouble assigning value into new col based on data from another cell
|
I have data that looks like;
ID File
1 this_file_whatever.ext1
2 this_whatever.ext2
3 this_is_ok_pooh.ext3
I am trying to get the extension and put the key from a dict in a new col based on the extension in File.
def create_filegroups(row):
filegroup_dict = {
'GroupA': 'ext1',
'GroupB': 'ext2',
'GroupC': 'ext3'
}
if '.' in row['Name']:
test = row['Name'].split(".",1)[1]
return test
DF = build_df()
DF['COL3'] = DF.apply(create_filegroups(row), axis=1)
print(DF)
I can't figure out what I am doing wrong. The dict compare I can do when I get there, but I can't seem to apply a function to the cells.
|
[
"I believe you need pandas.Series.map after extracting the file extension from the column File.\nTry this:\ndf['COL3']= (\n df['File']\n .str.extract(r'\\w+\\.(\\w+)', expand=False)\n .map({k:v for v,k in filegroup_dict.items()})\n )\n\n# Output :\nprint(df)\n\n ID File COL3\n0 1 this_file_whatever.ext1 GroupA\n1 2 this_whatever.ext2 GroupB\n2 3 this_is_ok_pooh.ext3 GroupC\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"pandas",
"python"
] |
stackoverflow_0074507547_dictionary_pandas_python.txt
|
Q:
How to use numpy to add a value by iterating the other values from another array?
I have 2 arrays. I want to access just the first element of array1 and add it to the values of array2. How can I use numpy for this? I apologize, as I cannot phrase my question correctly.
Here is an updated example: 10 + 1 + 2 + 3 + 4 + 5, which results in 25.
array1 = [10,11,12]
array2 = [1,2,3,4,5]
I would like to store each result in a third array.
array3 = [11,13,16,20,25]
A:
You don't need numpy; with a for loop, the code could look like this:
array1 = [10, 11, 12]
array2 = [1, 2, 3, 4, 5]
s = 0
sumArray = []
for i in range(0, len(array1)):
    s += array1[i]
    for j in range(0, len(array2)):
        s += array2[j]
    sumArray.append(s)
    s = 0
print(sumArray)
# in one line of code
print([array1[i] + sum(array2) for i in range(0, len(array1))])
# output [25, 26, 27]
# I hope this is what you want!
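Since the desired output [11, 13, 16, 20, 25] is a running total, numpy's cumsum gives it directly (this is not part of the answer above):
import numpy as np

array1 = np.array([10, 11, 12])
array2 = np.array([1, 2, 3, 4, 5])

# First value of array1 plus the cumulative sum of array2.
array3 = array1[0] + np.cumsum(array2)
print(array3)  # [11 13 16 20 25]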
|
How to use numpy to add a value by iterating the other values from another array?
|
I have 2 arrays. I want to access just the first index of array1 and add it to the values of array2. How can I use Numpy for this? I apologize as I cannot phrase my question correctly.
Here is an updated example: 10 + 1 + 2 + 3 + 4 + 5 which results to 25.
array1 = [10,11,12]
array2 = [1,2,3,4,5]
I would like to store each result in a third array.
array3 = [11,13,16,20,25]
|
[
"You don't need numpy, just with for loop, code could look like this :\narray1 = [10, 11, 12]\narray2 = [1, 2, 3, 4, 5]\ns = 0\nsumArray = []\nfor i in range(0, len(array1)):\n s += array1[i]\n for j in range(0, len(array2)):\n s += array2[j]\n sumArray.append(s)\n s = 0\nprint(sumArray)\n# in one line of code\nprint([array1[i] + sum(array2) for i in range(0, len(array1))])\n# output [25, 26, 27]\n# I hope this is what you want !\n\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0074507529_numpy_python.txt
|
Q:
String center-align in python
How can I make center-align in the following poem using python
"She Walks in Beauty
BY LORD BYRON (GEORGE GORDON)
She walks in beauty, like the night
Of cloudless climes and starry skies;
And all that’s best of dark and bright
Meet in her aspect and her eyes;
Thus mellowed to that tender light
Which heaven to gaudy day denies."
x = "She Walks in Beauty\nBY LORD BYRON (GEORGE GORDON)\nShe walks in beauty, like the night\nOf cloudless climes and starry skies;\nAnd all that’s best of dark and bright\nMeet in her aspect and her eyes;\nThus mellowed to that tender light\nWhich heaven to gaudy day denies."
a = x.center(20, " ")
print(a)
I tried, but it's not working. I am trying to center-align the text based on the device (terminal) size.
A:
Your x.center(20) call pads the entire multi-line string to a width of 20 characters, which is shorter than the text itself, so nothing visibly changes. Instead, center each line to the terminal width:
Split the string into individual lines using splitlines().
Find the size of your terminal with os.get_terminal_size().
Iterate through the lines and print each one using .center(), passing the column size.
import os

column, row = os.get_terminal_size()
x = "She Walks in Beauty\nBY LORD BYRON (GEORGE GORDON)\nShe walks in beauty, like the night\nOf cloudless climes and starry skies;\nAnd all that’s best of dark and bright\nMeet in her aspect and her eyes;\nThus mellowed to that tender light\nWhich heaven to gaudy day denies."
lines = x.splitlines()
for line in lines:
    print(line.center(column))
|
String center-align in python
|
How can I make center-align in the following poem using python
"She Walks in Beauty
BY LORD BYRON (GEORGE GORDON)
She walks in beauty, like the night
Of cloudless climes and starry skies;
And all that’s best of dark and bright
Meet in her aspect and her eyes;
Thus mellowed to that tender light
Which heaven to gaudy day denies."
x = "She Walks in Beauty\nBY LORD BYRON (GEORGE GORDON)\nShe walks in beauty, like the night\nOf cloudless climes and starry skies;\nAnd all that’s best of dark and bright\nMeet in her aspect and her eyes;\nThus mellowed to that tender light\nWhich heaven to gaudy day denies."
a = x.center(20, " ")
print(a)
I tried, but its not working. I try to make it center-align which will be depend on device size.
|
[
"\nSplit the line into individual lines using splitlines().\nFind the size of your terminal with os.get_terminal_size().\nIterate though lines and print the lines using .center() and pass column size.\n\nimport os\n\ncolumn, row = os.get_terminal_size()\nx = \"She Walks in Beauty\\nBY LORD BYRON (GEORGE GORDON)\\nShe walks in beauty, like the night\\nOf cloudless climes and starry skies;\\nAnd all that’s best of dark and bright\\nMeet in her aspect and her eyes;\\nThus mellowed to that tender light\\nWhich heaven to gaudy day denies.\"\nlines = x.splitlines()\nfor line in lines:\n print(line.center(column))\n\n"
] |
[
0
] |
[] |
[] |
[
"center_align",
"python",
"python_3.x",
"string",
"text_alignment"
] |
stackoverflow_0074507599_center_align_python_python_3.x_string_text_alignment.txt
|
Q:
What is the expected value of a coin-toss that doubles in value if heads and why is it different in practice?
Here's the thought experiment: say I have a coin that is worth 1$. Everytime I toss it, if it lands on head, it will double in value. If it lands on tail, it will be forever stuck with the latest value. What is the expected final value of the coin?
Here is how I am thinking about it:
ExpectedValue = 1 * 0.5 + (1 * 2) * (0.5 * 0.5) + (1 * 2 * 2) * (0.5 * 0.5 * 0.5) ...
=0.5 + 0.5 + 0.5 ...
= Infinity
Assuming my Math is correct, the expected value should be infinity. However, when I do try to simulate it out on code, the expected value comes out very different. Here's the code below:
import random

def test(iterations):
    total = 0
    max = 0
    for i in range(iterations):
        coin = False
        val = 1
        while coin == False:
            coin = random.choice([True, False])
            val *= 2
        total += val
        if val > max:
            max = val
    ave = total/iterations
    print(ave)

test(10000000) # returns 38.736616
I assume that the sample size of 10000000 should be statistically significant enough. However, the final expected value returned is 38.736616, which is nowhere near Infinity. Either my Math is wrong or my code is wrong. Which is it?
A:
The average value of the process over infinitely many trials is infinite. However, you did not perform infinitely many trials; you only performed 10,000,000, which falls short of infinity by approximately infinity.
Suppose we have a fair coin. In four flips, the average number of heads that come up is two. So, I do 100 trials: 100 times, I flip the coin four times, and I count the heads. I got 2.11, not two. Why?
My 100 trials are only 100 samples from the population. They are not distributed the same way as the population. Your 10,000,000 trials are only 10,000,000 samples from an infinite population. None of your samples happened to include the streak of a hundred heads in a row, which would have made the value for that sample 2^99, and would have made the average for your 10,000,000 trials more than 2^99/10,000,000 = 6.338×10^22, which is a huge number (but still not infinity).
If you increase the number of trials, you will tend to see increasing averages, because increasing the number of trials tends to move your samples toward the full population distribution. For the process you describe, you need to move infinitely far to get to the full distribution. So you need infinitely many trials.
(Also, there is a bug in your code. If the trial starts with False, representing tails, it still doubles the value. This means the values for 0, 1, 2, 3… heads are taken as 2, 4, 8, 16,… The process you describe in the question would have them as 1, 2, 4, 8,…)
Another way of looking at this is to conduct just one trial. The average value for just one trial is infinite. However, half the time you start one trial, you stop after one coin flip, since you got a tail immediately. One quarter of the time you stop after one or two flips. One-eighth of the time, you stop after three flips. Most of the time, you will get a small number as the answer. In every trial, you will get a finite number as the answer; every trial will always end with getting a tail, and the value at that point will be finite. It is impossible to do a finite number of trials and ever end up with an infinite value. Yet there is no finite number that is greater than the expected value: If you do more and more trials, the average will tend to grow and grow, and, eventually, it will exceed any finite target.
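For completeness, a minimal corrected simulation of the process described in the question (doubling only on heads, per the bug noted above):
import random

def test(iterations):
    total = 0
    for _ in range(iterations):
        val = 1
        # Heads doubles the value and we flip again; tails stops.
        while random.choice([True, False]):
            val *= 2
        total += val
    return total / iterations

print(test(10_000_000))  # tends to grow, erratically, as iterations increase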
|
What is the expected value of a coin-toss that doubles in value if heads and why is it different in practice?
|
Here's the thought experiment: say I have a coin that is worth 1$. Everytime I toss it, if it lands on head, it will double in value. If it lands on tail, it will be forever stuck with the latest value. What is the expected final value of the coin?
Here is how I am thinking about it:
ExpectedValue = 1 * 0.5 + (1 * 2) * (0.5 * 0.5) + (1 * 2 * 2) * (0.5 * 0.5 * 0.5) ...
=0.5 + 0.5 + 0.5 ...
= Infinity
Assuming my Math is correct, the expected value should be infinity. However, when I do try to simulate it out on code, the expected value comes out very different. Here's the code below:
import random
def test(iterations):
total = 0
max = 0
for i in range(iterations):
coin = False
val = 1
while coin == False:
coin = random.choice([True, False])
val *= 2
total += val
if val > max:
max = val
ave = total/iterations
print(ave)
test(10000000) # returns 38.736616
I assume that the sample size of 10000000 should be statistically significant enough. However, the final expected value returned is 38.736616, which is nowhere near Infinity. Either my Math is wrong or my code is wrong. Which is it?
|
[
"The average value of the process over infinitely many trials is infinite. However, you did not perform infinitely many trials; you only performed 10,000,000, which falls short of infinity by approximately infinity.\nSuppose we have a fair coin. In four flips, the average number of heads that come up is two. So, I do 100 trials: 100 times, I flip the coin four times, and I count the heads. I got 2.11, not two. Why?\nMy 100 trials are only 100 samples from the population. They are not distributed the same way as the population. Your 10,000,000 trials are only 10,000,000 samples from an infinite population. None of your samples happened to include the streak of a hundred heads in a row, which would have made the value for that sample 299, and would have made the average for your 10,000,000 trials more than 299/10,000,000 = 6.338•1022, which is a huge number (but still not infinity).\nIf you increase the number of trials, you will tend to see increasing averages, because increasing the number of trials tends to move your samples toward the full population distribution. For the process you describe, you need to move infinitely far to get to the full distribution. So you need infinitely many trials.\n(Also, there is a bug in your code. If the trial starts with False, representing tails, it still doubles the value. This means the values for 0, 1, 2, 3… heads are taken as 2, 4, 8, 16,… The process you describe in the question would have them as 1, 2, 4, 8,…)\nAnother way of looking at this is to conduct just one trial. The average value for just one trial is infinite. However, half the time you start one trial, you stop after one coin flip, since you got a tail immediately. One quarter of the time you stop after one or two flips. One-eighth of the time, you stop after three flips. Most of the time, you will get a small number as the answer. In every trial, you will get a finite number as the answer; every trial will always end with getting a tail, and the value at that point will be finite. It is impossible to do a finite number of trials and ever end up with an infinite value. Yet there is no finite number that is greater than the expected value: If you do more and more trials, the average will tend to grow and grow, and, eventually, it will exceed any finite target.\n"
] |
[
1
] |
[] |
[] |
[
"math",
"probability",
"python",
"statistics"
] |
stackoverflow_0074505811_math_probability_python_statistics.txt
|
Q:
Can't make Angle object in Manim, is the class deprecated or am I missing something?
Pretty new to Manim and trying to make an Angle object like it says on the documentation:
https://docs.manim.community/en/stable/reference/manim.mobject.geometry.line.Angle.html?highlight=angle#manim.mobject.geometry.line.Angle.from_three_points
But getting the following error:
NameError: name 'Angle' is not defined
Here's the line of code I used:
self.play(ShowCreation(Angle(nueva_linea2, nuevo_radio)))
('nueva_linea2' and 'nuevo_radio' being two Line objects)
As you may already suspect, the following won't work either:
angulo = Angle(nueva_linea2, nuevo_radio)
self.play(ShowCreation(angulo))
Here's the Manim version according to the terminal:
ManimGL v1.6.1
A:
The community maintained version ("Manim") is different from Grant's version ("ManimGL" / "manimlib"). In ManimGL, there is no Angle mobject.
See here for more information: https://docs.manim.community/en/stable/faq/installation.html#why-are-there-different-versions-of-manim
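For reference, a minimal sketch under the community edition, where the linked Angle docs apply (note that ShowCreation is named Create there; the line names are just illustrative):
from manim import Scene, Create, Line, Angle, LEFT, RIGHT, UP

class AngleDemo(Scene):
    def construct(self):
        linea = Line(LEFT, RIGHT)  # two lines sharing the point LEFT
        radio = Line(LEFT, UP)
        self.play(Create(Angle(linea, radio)))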
|
Can't make Angle object in Manim, is the class deprecated or am I missing something?
|
Pretty new to Manim and trying to make an Angle object like it says on the documentation:
https://docs.manim.community/en/stable/reference/manim.mobject.geometry.line.Angle.html?highlight=angle#manim.mobject.geometry.line.Angle.from_three_points
But getting the following error:
NameError: name 'Angle' is not defined
Here's the line of code I used:
self.play(ShowCreation(Angle(nueva_linea2, nuevo_radio)))
(Being 'nueva_linea2' and 'nuevo_radio' two Line objects)
As you may already suspect, the following won't work either:
angulo = Angle(nueva_linea2, nuevo_radio)
self.play(ShowCreation(angulo))
Here's the Manim version according to the terminal:
ManimGL v1.6.1
|
[
"The community maintained version (\"Manim\") is different from Grant's version (\"ManimGL\" / \"manimlib\"). In ManimGL, there is no Angle mobject.\nSee here for more information: https://docs.manim.community/en/stable/faq/installation.html#why-are-there-different-versions-of-manim\n"
] |
[
0
] |
[] |
[] |
[
"manim",
"python"
] |
stackoverflow_0074505177_manim_python.txt
|
Q:
Upgrading Python 3.7 to 3.9 on macOS Big Sur
I'm trying to upgrade Python 3.7 to 3.9 on macOS Big Sur. I'm also trying to avoid losing packages that were installed on Python 3.7 and reinstalling them again on Python 3.9
I tried using
brew install python3
brew update && brew upgrade python
which yielded
Already up-to-date.
Warning: python3 3.9.1_7 already installed
However when I run python3 --version it yields Python 3.7.0
Is this an issue with the alias? Is there a way to uninstall Python 3.7 and keep Python 3.9?
Running brew link python3 yields
Linking /usr/local/Cellar/python@3.9/3.9.1_7...
Error: Could not symlink bin/2to3
Target /usr/local/bin/2to3
already exists. You may want to remove it:
rm '/usr/local/bin/2to3'
To force the link and overwrite all conflicting files:
brew link --overwrite python@3.9
To list all files that would be deleted:
brew link --overwrite --dry-run python@3.9
A:
I fixed this frustrating error by first removing Python 3.7 manually (deleting it from the Applications folder) and then uninstalling Python 3.9 using brew uninstall python3
Next, I downloaded and installed the latest Python from the official Python website and it worked!
To save all the installed packages by generating a requirements file, run
python3 -m pip freeze > requirements.txt
and to install them in another environment, run
python3 -m pip install -r requirements.txt
A:
I suggest using the official binaries:
Download the version you need from python.org
Run the .pkg installer
Invoke the Update Shell Profile.command script under /Applications/Python\ 3.XX/
Afterwards, restart your terminal
Check your Python version. The old version and dependencies remain intact.
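Either way, it helps to confirm which interpreter python3 actually resolves to after linking; a quick sketch you can run with python3:
import sys
# path of the interpreter this shell command launched, plus its version,
# so you can verify the upgrade (and the symlink) took effect
print(sys.executable)
print(sys.version)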
|
Upgrading Python 3.7 to 3.9 on macOS Big Sur
|
I'm trying to upgrade Python 3.7 to 3.9 on macOS Big Sur. I'm also trying to avoid losing packages that were installed on Python 3.7 and reinstalling them again on Python 3.9
I tried using
brew install python3
brew update && brew upgrade python
which yielded
Already up-to-date.
Warning: python3 3.9.1_7 already installed
However when I run python3 --version it yields Python 3.7.0
Is this an issue with the alias? Is there a way to uninstall Python 3.7 and keep Python 3.9?
Running brew link python3 yields
Linking /usr/local/Cellar/python@3.9/3.9.1_7...
Error: Could not symlink bin/2to3
Target /usr/local/bin/2to3
already exists. You may want to remove it:
rm '/usr/local/bin/2to3'
To force the link and overwrite all conflicting files:
brew link --overwrite python@3.9
To list all files that would be deleted:
brew link --overwrite --dry-run python@3.9
|
[
"I fixed this frustrating error by first removing the Python 3.7 manually, by deleting it from the Applications folder and then uninstalling Python 3.9 using brew uninstall python3\nNext, I downloaded and installed the latest Python from here and it worked!\nTo save all the installed packages by generating a requirements file, Run\npython3 -m pip freeze > requirements.txt\n\nand to install them in another environment, Run\npython3 -m pip install -r requirements.txt\n\n",
"I suggest use official binaries:\n\nDownload version you need from the python.org\nCall the .pkg\nInvoke Update Shell Profile.command script under /Applications/Python\\ 3.XX/\nAfter all reboot your terminal\nCheck your Python version. The old version and dependencies remain intact.\n\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0066004178_python.txt
|
Q:
How to pass my access and secret keys properly for GlueContext?
I have a glue notebook from which I am trying to read a specific file from a different AWS account. When I run a Spark session and read it directly, the code works perfectly and I get the Spark df, but when I try to use glueContext.create_dynamic_frame() I get an Access Denied error.
This is what my code looks like so far. Is it because I am not passing the AWS session credentials correctly?
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql import SparkSession
access_key=''
secret_key=''
spark = SparkSession.builder \
.config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.7.3,com.amazonaws:aws-java-sdk:1.7.4") \
.config("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem") \
.config("fs.s3a.access.key", access_key) \
.config("fs.s3a.secret.key", secret_key) \
.getOrCreate()
sc = spark
glueContext = GlueContext(sc)
spark = glueContext.spark_session
dynamicFrame = glueContext.create_dynamic_frame.from_options(
connection_type="s3",
connection_options={"paths": ["s3://test/enterprise_survey.csv"]},
format="csv",
format_options={
"withHeader": True
},
)
When I try to run the code I get the following error:
Py4JJavaError: An error occurred while calling o458.getDynamicFrame.
: java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: N9EQPCTNJZSSENXP; S3 Extended
A:
You should avoid typing your actual credentials in the code itself. Rather, use a role that can access your services.
For S3, update the bucket policy in the other account so it allows the job to read the data, and also add an IAM policy in the job's account that permits reading from S3 in the other account.
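As a rough sketch of the cross-account setup (the account ID, role name and bucket below are placeholders, not values from the question), the bucket policy in the other account could be attached with boto3 like this:
import json
import boto3

# hypothetical principal: the IAM role your Glue job runs under
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/my-glue-job-role"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
    }]
}
boto3.client("s3").put_bucket_policy(Bucket="test", Policy=json.dumps(policy))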
|
How to pass my access and secret keys properly for GlueContext?
|
I have a glue notebook from which I am trying to read a specific file from a different AWS account. When I try to run a spark session and read it. The code works perfectly and I get the spark df but when I try to use glueContext.create_dynamic_frame() I get an Access Denied error.
This is what my code looks like so far. Is it cause I am not passing the AWS session credentials correctly?
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql import SparkSession
access_key=''
secret_key=''
spark = SparkSession.builder \
.config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.7.3,com.amazonaws:aws-java-sdk:1.7.4") \
.config("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem") \
.config("fs.s3a.access.key", access_key) \
.config("fs.s3a.secret.key", secret_key) \
.getOrCreate()
sc = spark
glueContext = GlueContext(sc)
spark = glueContext.spark_session
dynamicFrame = glueContext.create_dynamic_frame.from_options(
connection_type="s3",
connection_options={"paths": ["s3://test/enterprise_survey.csv"]},
format="csv",
format_options={
"withHeader": True
},
)
When I try to run the code I get the following error:
Py4JJavaError: An error occurred while calling o458.getDynamicFrame.
: java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: N9EQPCTNJZSSENXP; S3 Extended
|
[
"You should avoid typing your actual creds in the code itself. Rather use a role that can access your services.\nFor S3 update the s3 bucket policy in the other account which would allow the job to read the data and also add IAM policy to job's account to read data from S3 in other account.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_s3",
"amazon_web_services",
"aws_glue",
"python"
] |
stackoverflow_0074493513_amazon_s3_amazon_web_services_aws_glue_python.txt
|
Q:
How to clear the Entry widget after a button is pressed in Tkinter?
I'm trying to clear the Entry widget after the user presses a button using Tkinter.
I tried using ent.delete(0, END), but I got an error saying that strings don't have the attribute delete.
Here is my code, where I'm getting error on real.delete(0, END):
secret = randrange(1,100)
print(secret)
def res(real, secret):
if secret==eval(real):
showinfo(message='that is right!')
real.delete(0, END)
def guess():
ge = Tk()
ge.title('guessing game')
Label(ge, text="what is your guess:").pack(side=TOP)
ent = Entry(ge)
ent.pack(side=TOP)
btn=Button(ge, text="Enter", command=lambda: res(ent.get(),secret))
btn.pack(side=LEFT)
ge.mainloop()
A:
After poking around a bit through the Introduction to Tkinter, I came up with the code below, which doesn't do anything except display a text field and clear it when the "Clear text" button is pushed:
import tkinter as tk
class App(tk.Frame):
def __init__(self, master):
tk.Frame.__init__(self, master, height=42, width=42)
self.entry = tk.Entry(self)
self.entry.focus()
self.entry.pack()
self.clear_button = tk.Button(self, text="Clear text", command=self.clear_text)
self.clear_button.pack()
def clear_text(self):
self.entry.delete(0, 'end')
def main():
root = tk.Tk()
App(root).pack(expand=True, fill='both')
root.mainloop()
if __name__ == "__main__":
main()
A:
I'm unclear about your question. From http://effbot.org/tkinterbook/entry.htm#patterns, it
seems you just need to do an assignment after you call delete.
To add entry text to the widget, use the insert method. To replace the current text, you can call delete before you insert the new text.
e = Entry(master)
e.pack()
e.delete(0, END)
e.insert(0, "")
Could you post a bit more code?
A:
real gets the value ent.get() which is just a string. It has no idea where it came from, and no way to affect the widget.
Instead of real.delete(), call .delete() on the entry widget itself:
def res(ent, real, secret):
if secret == eval(real):
showinfo(message='that is right!')
ent.delete(0, END)
def guess():
...
btn = Button(ge, text="Enter", command=lambda: res(ent, ent.get(), secret))
A:
If you are using Python 3.x, you have to use
txt_entry = Entry(root)
txt_entry.pack()
txt_entry.delete(0, tkinter.END)
A:
You can use ent.delete(0, "end") instead of the END constant; the string 'end' inside quotation marks works the same way. Note that delete must be called on the Entry widget itself, so pass the widget into the callback as well:
secret = randrange(1,100)
print(secret)
def res(ent, real, secret):
    if secret==eval(real):
        showinfo(message='that is right!')
    ent.delete(0, "end")
def guess():
    ge = Tk()
    ge.title('guessing game')
    Label(ge, text="what is your guess:").pack(side=TOP)
    ent = Entry(ge)
    ent.pack(side=TOP)
    btn=Button(ge, text="Enter", command=lambda: res(ent, ent.get(), secret))
    btn.pack(side=LEFT)
    ge.mainloop()
This should solve your problem.
A:
First of all, make sure the Text widget is enabled, then delete your tags, and then the content.
myText.config(state=NORMAL)
myText.tag_delete("myTags")
myText.delete(1.0, END)
When the Text widget is DISABLED, delete does not work because the widget is in read-only mode.
A:
def clear():
    global input
    abc = ""
    input.set(abc)
root = Tk()
input = StringVar()
ent = Entry(root,textvariable = input,font=('ariel',23,'bold'),bg='powder blue',bd=30,justify='right').grid(columnspan=4,ipady=20)
Clear = Button(root,text="Clear",command=clear).pack()
input is set as the textvariable of the Entry; it is a StringVar, and setting its value to "" clears the text in the Entry.
A:
Simply define a function and set the value of your Combobox to empty/null or whatever you want. Try the following.
def Reset():
cmb.set("")
here, cmb is the variable to which you assigned the Combobox. Now call that function from a button, such as,
btn2 = ttk.Button(root, text="Reset",command=Reset)
A:
if you add a print call to check the type of real, you will see that real is a string, not an Entry, so there is no delete attribute.
def res(real, secret):
print(type(real))
if secret==eval(real):
showinfo(message='that is right!')
real.delete(0, END)
>> output: <class 'str'>
Solution:
secret = randrange(1,100)
print(secret)
def res(real, secret):
if secret==eval(real):
showinfo(message='that is right!')
        ent.delete(0, END) # we call the entry and delete its content
def guess():
ge = Tk()
ge.title('guessing game')
Label(ge, text="what is your guess:").pack(side=TOP)
    global ent # Globalize ent to use it in the other function
ent = Entry(ge)
ent.pack(side=TOP)
btn=Button(ge, text="Enter", command=lambda: res(ent.get(),secret))
btn.pack(side=LEFT)
ge.mainloop()
It should work.
A:
From my experience, Entry.delete(0, END) sometimes didn't work when the state of the Entry widget is DISABLED. Check the state of the Entry when delete(0, END) doesn't work, and if the value of the entry widget remains, call entry.update() to reflect the result of delete(0, END).
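A minimal sketch of that workaround (the widget name is illustrative): temporarily re-enable the widget, delete, then restore the read-only state:
entry.config(state='normal')    # editing only works while the widget is enabled
entry.delete(0, 'end')
entry.config(state='disabled')  # restore read-only mode afterwards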
A:
You can use Entry.delete(0, END);
it deletes everything in the entry field. (The 1.0 index form applies to the Text widget; Entry indices start at 0.) If you use only Entry.delete(0), it deletes the first character you entered: 'Example text' becomes 'xample text'.
|
How to clear the Entry widget after a button is pressed in Tkinter?
|
I'm trying to clear the Entry widget after the user presses a button using Tkinter.
I tried using ent.delete(0, END), but I got an error saying that strings don't have the attribute delete.
Here is my code, where I'm getting error on real.delete(0, END):
secret = randrange(1,100)
print(secret)
def res(real, secret):
if secret==eval(real):
showinfo(message='that is right!')
real.delete(0, END)
def guess():
ge = Tk()
ge.title('guessing game')
Label(ge, text="what is your guess:").pack(side=TOP)
ent = Entry(ge)
ent.pack(side=TOP)
btn=Button(ge, text="Enter", command=lambda: res(ent.get(),secret))
btn.pack(side=LEFT)
ge.mainloop()
|
[
"After poking around a bit through the Introduction to Tkinter, I came up with the code below, which doesn't do anything except display a text field and clear it when the \"Clear text\" button is pushed:\nimport tkinter as tk\n\nclass App(tk.Frame):\n def __init__(self, master):\n tk.Frame.__init__(self, master, height=42, width=42)\n self.entry = tk.Entry(self)\n self.entry.focus()\n self.entry.pack()\n self.clear_button = tk.Button(self, text=\"Clear text\", command=self.clear_text)\n self.clear_button.pack()\n\n def clear_text(self):\n self.entry.delete(0, 'end')\n\ndef main():\n root = tk.Tk()\n App(root).pack(expand=True, fill='both')\n root.mainloop()\n\nif __name__ == \"__main__\":\n main()\n\n",
"I'm unclear about your question. From http://effbot.org/tkinterbook/entry.htm#patterns, it\nseems you just need to do an assignment after you called the delete.\nTo add entry text to the widget, use the insert method. To replace the current text, you can call delete before you insert the new text.\ne = Entry(master)\ne.pack()\n\ne.delete(0, END)\ne.insert(0, \"\")\n\nCould you post a bit more code?\n",
"real gets the value ent.get() which is just a string. It has no idea where it came from, and no way to affect the widget.\nInstead of real.delete(), call .delete() on the entry widget itself:\ndef res(ent, real, secret):\n if secret == eval(real):\n showinfo(message='that is right!')\n ent.delete(0, END)\n\ndef guess():\n ...\n btn = Button(ge, text=\"Enter\", command=lambda: res(ent, ent.get(), secret))\n\n",
"If in case you are using Python 3.x, you have to use\ntxt_entry = Entry(root)\ntxt_entry.pack()\ntxt_entry.delete(0, tkinter.END)\n",
"You shall proceed with ent.delete(0,\"end\") instead of using 'END', use 'end' inside quotation. \n secret = randrange(1,100)\nprint(secret)\ndef res(real, secret):\n if secret==eval(real):\n showinfo(message='that is right!')\n real.delete(0, END)\n\ndef guess():\n ge = Tk()\n ge.title('guessing game')\n\n Label(ge, text=\"what is your guess:\").pack(side=TOP)\n\n ent = Entry(ge)\n ent.pack(side=TOP)\n\n btn=Button(ge, text=\"Enter\", command=lambda: res(ent.get(),secret))\n btn.pack(side=LEFT)\n\n ge.mainloop()\n\nThis shall solve your problem\n",
"First of all, make sure the Text is enabled, then delete your tags, and then the content.\nmyText.config(state=NORMAL)\nmyText.tag_delete (\"myTags\")\nmyText.delete(1.0, END)\n\nWhen the Text is \"DISABLE\", the delete does not work because the Text field is in read-only mode.\n",
"def clear(): \n global input \n abc = \n input.set(abc) \n\nroot = Tk() \ninput = StringVar() \nent = Entry(root,textvariable = input,font=('ariel',23,'bold'),bg='powder blue',bd=30,justify='right').grid(columnspan=4,ipady=20) \nClear = Button(root,text=\"Clear\",command=clear).pack() \n\nInput is set the textvariable in the entry, which is the string variable and when I set the text of the string variable as \"\" this clears the text in the entry \n",
"Simply define a function and set the value of your Combobox to empty/null or whatever you want. Try the following.\ndef Reset():\n cmb.set(\"\")\n\nhere, cmb is a variable in which you have assigned the Combobox. Now call that function in a button such as, \nbtn2 = ttk.Button(root, text=\"Reset\",command=Reset)\n\n",
"if you add the print code to check the type of real, you will see that real is a string, not an Entry so there is no delete attribute.\ndef res(real, secret):\n print(type(real))\n if secret==eval(real):\n showinfo(message='that is right!')\n real.delete(0, END)\n\n>> output: <class 'str'>\n\nSolution:\nsecret = randrange(1,100)\nprint(secret)\n\ndef res(real, secret):\n if secret==eval(real):\n showinfo(message='that is right!')\n ent.delete(0, END) # we call the entry an delete its content\n\ndef guess():\n\n ge = Tk()\n ge.title('guessing game')\n\n Label(ge, text=\"what is your guess:\").pack(side=TOP)\n\n global ent # Globalize ent to use it in other function\n ent = Entry(ge)\n ent.pack(side=TOP)\n\n btn=Button(ge, text=\"Enter\", command=lambda: res(ent.get(),secret))\n btn.pack(side=LEFT)\n\n ge.mainloop()\n\nIt should work.\n",
"From my experience, Entry.delete(0, END) sometimes didn't work when the state of entry widget is DISABLED. Check the state of Entry when Entry.delete(0, END), doesn't work and if the value of entry widget remains, call entry.update() to reflect the result of delete(0, END).\n",
"You can Use Entry.delete(1.0, END)\nit deletes every thing in the entry field, if u use only Entry.delete(1.0)it deletes the last word you inputed ->Example text to ->xample text\n"
] |
[
93,
18,
5,
4,
1,
0,
0,
0,
0,
0,
0
] |
[
"if none of the above is working you can use this->\nidAssignedToEntryWidget.delete(first = 0, last = UpperLimitAssignedToEntryWidget)\nfor e.g. -> \nid assigned is = en then\nen.delete(first =0, last =100) \n",
"Try with this:\nimport os\nos.system('clear')\n\n"
] |
[
-2,
-8
] |
[
"python",
"tkinter",
"user_interface",
"widget"
] |
stackoverflow_0002260235_python_tkinter_user_interface_widget.txt
|
Q:
Viewing the full output of an xarray DataArray in plain text
I am trying to view the full output of
print(MyDataArray)
instead of the shortened version which displays
array([[[[[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
...,
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00]],
(this is a code snippet and not the full output I get)
Basically I'm trying to get rid of the ....
I would like to see this output just as plain text, or written to a text file. I would like to maintain the current formatting.
I have already tried a number of things
increasing display_max_rows to a very large number (this is an option of xarray)
writing to an npz file, this resulted in a file I could only open in python, not allowing me to see it in plain text
exporting it to a normal python file (instead of jupyter notebook) and trying to print it from there
A:
I just solved my own question; I'm posting this information in case somebody else encounters this problem. My solution is a workaround:
new_numpy_array = existing_xarray_DataArray.to_numpy()
np.set_printoptions(threshold=np.Inf)
print(new_numpy_array)
this allows you to then view your array in full.
Thank you to whoever recommended
np.set_printoptions(threshold=np.Inf)
this gave me the idea to convert my array to numpy
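If the goal is a plain-text file rather than console output, a variant of the same idea (the file name is arbitrary) is to render the array with np.array2string and write that out:
import numpy as np

arr = existing_xarray_DataArray.to_numpy()
with open("full_array.txt", "w") as f:
    # threshold=np.inf disables the "..." summarization in the rendered text
    f.write(np.array2string(arr, threshold=np.inf))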
|
Viewing the full output of an xarray DataArray in plain text
|
I am trying to view the full output of
print(MyDataArray)
instead of the shortened version which displays
array([[[[[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
...,
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00]],
(this is a code snippet and not the full output i get)
basically i'm trying to get rid of the ....
I would like to see this output just as plain text, or written to a text file. I would like to maintain the current formatting.
I have already tried a number of things
increasing display_max_rows to a very large number (this is an option of xarray)
writing to an npz file, this resulted in a file I could only open in python, not allowing me to see it in plain text
exporting it to a normal python file (instead of jupyter notebook) and trying to print it from there
|
[
"I just solved my own question, I'm posting this information if somebody also encounters this problem. My solution is a workaround\nnew_numpy_ndarray = existing_xarray_DataArray.to_numpy()\nnp.set_printoptions(threshold=np.Inf)\nprint(new_numpy_array)\n\nthis allows you to then view your array in full.\nThank you to whoever recommended\nnp.set_printoptions(threshold=np.Inf)\n\nthis gave me the idea to convert my array to numpy\n"
] |
[
1
] |
[] |
[] |
[
"numpy",
"pandas",
"python",
"python_xarray"
] |
stackoverflow_0074507619_numpy_pandas_python_python_xarray.txt
|
Q:
INSERT to MYSQL throwing error due to apostrophe
sql ="""INSERT INTO birthday(team, birthday)
VALUES ('Norway', {"2020-01-01": "Ram's BDay"}));"""
Above sql statement throws an error while inserting.
ProgrammingError: (1064, 'You have an error in your SQL syntax; check
the manual that corresponds to your MySQL server ver
Based on manual attempts I know it is related to the apostrophe. Is it possible to insert the above statement? I don't have control over apostrophes coming in the data stream.
A:
you must put the whole JSON string in double quotes and escape the inner double quotes in the string
sql ="""INSERT INTO birthday(team, birthday)
VALUES ('Norway', "{\"2020-01-01\": \"Ram's BDay\"}" );"""
or you can use single quotes, in which case you must escape the apostrophe in the string
sql ="""INSERT INTO birthday(team, birthday)
VALUES ('Norway', '{"2020-01-01": "Ram\'s BDay"}' );"""
SAMPLE
mysql> CREATE TABLE birthday(team VARCHAR(100), birthday VARCHAR(100)) ;
Query OK, 0 rows affected (0.14 sec)
mysql>
mysql> INSERT INTO birthday(team, birthday)
-> VALUES ('Norway', "{\"2020-01-01\": \"Ram's BDay\"}" );
Query OK, 1 row affected (0.06 sec)
mysql> SELECT * FROM birthday;
+--------+------------------------------+
| team | birthday |
+--------+------------------------------+
| Norway | {"2020-01-01": "Ram's BDay"} |
+--------+------------------------------+
1 row in set (0.03 sec)
mysql>
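More generally, since the apostrophes come from a data stream you don't control, a parameterized query avoids manual escaping altogether. A sketch assuming a DB-API driver such as pymysql (connection details are placeholders):
import pymysql

conn = pymysql.connect(host="localhost", user="user", password="pw", database="db")
with conn.cursor() as cur:
    # the driver escapes the apostrophe itself; no manual quoting needed
    cur.execute(
        "INSERT INTO birthday (team, birthday) VALUES (%s, %s)",
        ("Norway", '{"2020-01-01": "Ram\'s BDay"}'),
    )
conn.commit()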
|
INSERT to MYSQL throwing error due to apostrophe
|
sql ="""INSERT INTO birthday(team, birthday)
VALUES ('Norway', {"2020-01-01": "Ram's BDay"}));"""
Above sql statement throws an error while inserting.
ProgrammingError: (1064, 'You have an error in your SQL syntax; check
the manual that corresponds to your MySQL server ver
Based on manual attempts I know it is related to apostrophe. Is it possible to insert the above statement, I don't have the control over apostrophe coming in the data stream.
|
[
"you must put the hole JSON String in quotes an escape them in the string\nsql =\"\"\"INSERT INTO birthday(team, birthday)\n VALUES ('Norway', \"{\\\"2020-01-01\\\": \\\"Ram's BDay\\\"}\" );\"\"\"\n\nor you use single quotes then you must escape them in the string\nsql =\"\"\"INSERT INTO birthday(team, birthday)\n VALUES ('Norway', '{\"2020-01-01\": \"Ram\\'s BDay\"}' );\"\"\"\n\nSAMPLE\nmysql> CREATE TABLE birthday(team VARCHAR(100), birthday VARCHAR(100)) ;\nQuery OK, 0 rows affected (0.14 sec)\n\nmysql> \nmysql> INSERT INTO birthday(team, birthday)\n -> VALUES ('Norway', \"{\\\"2020-01-01\\\": \\\"Ram's BDay\\\"}\" );\nQuery OK, 1 row affected (0.06 sec)\n\nmysql> SELECT * FROM birthday;\n+--------+------------------------------+\n| team | birthday |\n+--------+------------------------------+\n| Norway | {\"2020-01-01\": \"Ram's BDay\"} |\n+--------+------------------------------+\n1 row in set (0.03 sec)\n\nmysql> \n\n"
] |
[
0
] |
[
"You have a syntax mistake\nsql ='\"\"\"INSERT INTO birthday(team, birthday)\n VALUES (\\'Norway\\', {\"2020-01-01\": \"Ram\\'s BDay\"}));\"\"\"'\n\nIf you are using more apostrophe be sure to add \\ so it will not read\n"
] |
[
-1
] |
[
"mysql",
"python"
] |
stackoverflow_0074507605_mysql_python.txt
|
Q:
python - bag of words
I want to create a very simple bag of words based on multiple Excel-files (300).
DummyDoc1 = "This is a testdoc
DummyDoc2 = "This is also a testdoc, the second one"
...
I can import all the files and I also can do a simple wordcount (dict) for each file.
What I don't get is how to combine those two in a matrix that looks something like this.
Code importing files:
def get_files(dir):
files = [f.path for f in os.scandir(dir)]
return files
files = get_files_ext(DIR_IN, "xlsx")
for file in files:
file = fm.get_filename(file)
df_all = pd.read_excel(os.path.join(DIR_IN, file))
Code wordcount:
text = open(r"..\PycharmProjects\DrillPinsBagOfWords\files_in\test.csv", "r", errors="ignore")
d = dict()
for line in text:
line = line.strip()
line = line.lower()
words = line.split(" ")
for word in words:
if word in d:
d[word] = d[word] + 1
else:
d[word] = 1
gesorteerd = sorted(d.items(), key=lambda x: x[1], reverse=True)
for x in gesorteerd:
print(x)
Can someone give me some direction please?
================================================================
Here is the code I have so far. I'm still struggling with the total dict.
import filemanager as fm
import pandas as pd
directory = r"C:\Users\files_in_test"
total_dict = dict()
files = fm.get_files_ext(directory, "csv")
count = 0
list_dict = []
for filename in files:
d = dict()
with open(filename, "r", errors="ignore") as text:
count += 1
for line in text:
line = line.strip()
line = line.lower()
words = line.split(" ")
for word in words:
if word in d:
d[word] = d[word] + 1
else:
d[word] = 1
print("Print dict", count, d)
    # make a list of dicts
list_dict.append(d.copy())
# print the list of dicts
print("Print list_dict: ", list_dict)
df = pd.DataFrame(list_dict)
print(df)
result = df.transpose()
print(result)
A:
Get all those excel files into one directory
Iterate over all files in that directory
Use the code from your wordcount to count words in every file
Use this source to export into excel format
import os
total = dict()
directory = "YOUR DIRECTORY HERE"
for filename in os.listdir(directory):
d = dict()
    with open(os.path.join(directory, filename), "r") as text:
for line in text:
line = line.strip()
line = line.lower()
words = line.split(" ")
for word in words:
if word in d:
d[word] = d[word] + 1
else:
d[word] = 1
total[filename] = d
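# note: d at this point holds only the counts for the last file processed;
# use total[name] to sort the counts of a specific file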
gesorteerd = sorted(d.items(), key=lambda x: x[1], reverse=True)
for x in gesorteerd:
print(x)
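To then turn total into the document-by-word matrix you described, a small sketch building on the total dict above:
import pandas as pd

# columns become file names, rows become words; missing words count as 0
matrix = pd.DataFrame(total).fillna(0).astype(int)
print(matrix)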
|
python - bag of words
|
I want to create a very simple bag of words based on multiple Excel-files (300).
DummyDoc1 = "This is a testdoc
DummyDoc2 = "This is also a testdoc, the second one"
...
I can import all the files and I also can do a simple wordcount (dict) for each file.
What I don't get is how to combine those two in a matrix that looks something like this.
Code importing files:
def get_files(dir):
files = [f.path for f in os.scandir(dir)]
return files
files = get_files_ext(DIR_IN, "xlsx")
for file in files:
file = fm.get_filename(file)
df_all = pd.read_excel(os.path.join(DIR_IN, file))
Code wordcount:
text = open(r"..\PycharmProjects\DrillPinsBagOfWords\files_in\test.csv", "r", errors="ignore")
d = dict()
for line in text:
line = line.strip()
line = line.lower()
words = line.split(" ")
for word in words:
if word in d:
d[word] = d[word] + 1
else:
d[word] = 1
gesorteerd = sorted(d.items(), key=lambda x: x[1], reverse=True)
for x in gesorteerd:
print(x)
Can someone give me some direction please?
================================================================
Here is the code I have so far. I'm still struggeling with the total dict.
import filemanager as fm
import pandas as pd
directory = r"C:\Users\files_in_test"
total_dict = dict()
files = fm.get_files_ext(directory, "csv")
count = 0
list_dict = []
for filename in files:
d = dict()
with open(filename, "r", errors="ignore") as text:
count += 1
for line in text:
line = line.strip()
line = line.lower()
words = line.split(" ")
for word in words:
if word in d:
d[word] = d[word] + 1
else:
d[word] = 1
print("Print dict", count, d)
# maak lijst van dict's
list_dict.append(d.copy())
# print lijst van dict's
print("Print list_dict: ", list_dict)
df = pd.DataFrame(list_dict)
print(df)
result = df.transpose()
print(result)
|
[
"\nGet all those excel files into one directory\nIterate over all files in that directory\nUse the code from your wordcount to count words in every file\n\nUse this source to export into excel format\nimport os\n\ntotal = dict()\ndirectory = \"YOUR DIRECTORY HERE\"\nfor filename in os.listdir(directory):\n d = dict()\n with open(filename, \"r\") as text:\n for line in text:\n \n line = line.strip()\n line = line.lower()\n words = line.split(\" \")\n for word in words:\n if word in d:\n d[word] = d[word] + 1\n else:\n d[word] = 1\n total[filename] = d\n\n\ngesorteerd = sorted(d.items(), key=lambda x: x[1], reverse=True)\n\nfor x in gesorteerd:\n print(x)\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"python",
"word_count"
] |
stackoverflow_0074507711_dataframe_python_word_count.txt
|
Q:
Can a cython subclass access private attributes of its cython superclass? Other cython classes?
I'm building cython extended types, and I've always been bothered that I had to make class attributes public for other extended types to be able to see them. But now that I'm also making subclasses, I'm even more surprised.
The following code
@cython.cclass
class Base:
base_attrib = cython.declare(cython.double, 0.0)
@cython.cclass
class Derived(Base):
derived_attrib = cython.declare(cython.double, 0.0)
@cython.cfunc
def f_on_derived(a: Derived, b: Derived) -> Derived:
res = Derived.__new__(Derived)
ad = a.derived_attrib
bd = b.derived_attrib
ab = a.base_attrib
bb = b.base_attrib
res.derived_attrib = ad + bd
res.base_attrib = ab + bb
return res
Produces a .c file, but the compiler then complains
src/crujisim/cythontests.c(40975): error C2065: 'base_attrib': undeclared identifier
src/crujisim/cythontests.c(40975): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
src/crujisim/cythontests.c(41007): error C2065: 'derived_attrib': undeclared identifier
src/crujisim/cythontests.c(41007): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
As it is a C function I would have expected the typing annotation to be enough, but it isn't.
I can make it compile by declaring public visibility, like
@cython.cclass
class Base:
base_attrib = cython.declare(cython.double, visibility='public')
@cython.cclass
class Derived(Base):
derived_attrib = cython.declare(cython.double, visibility='public')
But then the C code for res.base_attrib = ab + bb does have to go through python, like
__pyx_t_1 = PyFloat_FromDouble((__pyx_v_ab + __pyx_v_bb))
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 26, __pyx_L1_error)__Pyx_GOTREF(__pyx_t_1)
if (__Pyx_PyObject_SetAttrStr(__pyx_v_res, __pyx_n_s_base_attrib, __pyx_t_1) < 0) __PYX_ERR(0, 26, __pyx_L1_error)__Pyx_DECREF(__pyx_t_1)
__pyx_t_1 = 0;
So two questions:
Can I have superclass attributes that are C only but accessible to subclasses?
Can I have class attributes that are C only and yet visible to C code in other instances?
Update
I've just noticed that if I don't use fast instantiation, that is, res = Derived() instead of res = Derived.__new__(Derived), attributes do work as expected. But of course I've also now lost the fast instantiation.
Can I have my cake and eat it too?
A:
So several issues were at play here:
The compiler errors were due to the attribute declaration including a value. Leaving them as just base_attrib = cython.declare(cython.double) removed the warning, and the values were initialized to 0 automatically all the same.
The other issue was that the object produced through fast instantiation had to go through Python to access its attributes, whereas the Python instantiation didn't. This was because the __new__ method produces a Python object, not the C version. So in the code the only problem was accessing the attributes of the new instance, not the ones passed as arguments.
This was solved by declaring the type of the variable holding the object returned by fast instantiation.
So the fastest working version of the original problem so far is
@cython.cclass
class Base:
base_attrib = cython.declare(cython.double)
@cython.cclass
class Derived(Base):
derived_attrib = cython.declare(cython.double)
@cython.cfunc
def f_on_derived(a: Derived, b: Derived) -> Derived:
res: Derived = Derived.__new__(Derived) # Notice res: Derived
ad = a.derived_attrib
bd = b.derived_attrib
ab = a.base_attrib
bb = b.base_attrib
res.derived_attrib = ad + bd
res.base_attrib = ab + bb
return res
In order to test speed I have these two functions
@cython.ccall
def python_instantiate() -> Derived:
o = Derived()
return o
@cython.ccall
def cython_fast_instantiate() -> Derived:
o: Derived = Derived.__new__(Derived)
return o
and timing them we get
In [2]: %timeit python_instantiate()
87.5 ns ± 0.895 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
In [3]: %timeit cython_fast_instantiate()
62.1 ns ± 0.574 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
Which proves that fast instantiation is faster, even though there is some python object reference incrementing and decrementing that I'm not sure is completely necessary.
|
Can a cython subclass access private attributes of its cython superclass? Other cython classes?
|
I'm building cython extended types, and I've always been bothered that I had to make class attributes public for other extended types to be able to see them. But now than I'm also making subclasses I've even more surprised.
The following code
@cython.cclass
class Base:
base_attrib = cython.declare(cython.double, 0.0)
@cython.cclass
class Derived(Base):
derived_attrib = cython.declare(cython.double, 0.0)
@cython.cfunc
def f_on_derived(a: Derived, b: Derived) -> Derived:
res = Derived.__new__(Derived)
ad = a.derived_attrib
bd = b.derived_attrib
ab = a.base_attrib
bb = b.base_attrib
res.derived_attrib = ad + bd
res.base_attrib = ab + bb
return res
Produces a .c file, but the compiler then complains
src/crujisim/cythontests.c(40975): error C2065: 'base_attrib': undeclared identifier
src/crujisim/cythontests.c(40975): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
src/crujisim/cythontests.c(41007): error C2065: 'derived_attrib': undeclared identifier
src/crujisim/cythontests.c(41007): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
As it is a C function I would have expected the typing annotation to be enough, but it isn't.
I can make it compile by declaring public visibility, like
@cython.cclass
class Base:
base_attrib = cython.declare(cython.double, visibility='public')
@cython.cclass
class Derived(Base):
derived_attrib = cython.declare(cython.double, visibility='public')
But then the C code for res.base_attrib = ab + bb does have to go through python, like
__pyx_t_1 = PyFloat_FromDouble((__pyx_v_ab + __pyx_v_bb))
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 26, __pyx_L1_error)__Pyx_GOTREF(__pyx_t_1)
if (__Pyx_PyObject_SetAttrStr(__pyx_v_res, __pyx_n_s_base_attrib, __pyx_t_1) < 0) __PYX_ERR(0, 26, __pyx_L1_error)__Pyx_DECREF(__pyx_t_1)
__pyx_t_1 = 0;
So two questions:
Can I have superclass attributes that are C only but accessible to subclasses?
Can I have class attributes that are C only and yet visible to C code in other instances?
Update
I've just noticed that if don't use fast instantiation that is res = Derived() instead of res = Derived.__new__(Derived) attributes do work as expected. But of course I've also now lost the fast instantiation.
Can I have my cake and eat it too?
|
[
"So several issues where at play here:\nThe compiler errors were due to the attribute declaration including a value. Leaving them just like base_attrib = cython.declare(cython.double) removed the warning and the values where initialized to 0 automatically all the same.\nThe other issue was that the object produced through fast instantiation had to through python to access its attributes, whereas the python instantation didn't. This was because the __new__ method produces a python object, not the C version. So in the code the only problem was accessing the attributes of the new instance, not the ones passed as arguments.\nThis was solved with declaring the variable holding the object returned by fast instantiation.\nSo the working fastest version of the original problem yet is\n@cython.cclass\nclass Base:\n base_attrib = cython.declare(cython.double)\n\n@cython.cclass\nclass Derived(Base):\n derived_attrib = cython.declare(cython.double)\n\n@cython.cfunc\ndef f_on_derived(a: Derived, b: Derived) -> Derived:\n res: Derived = Derived.__new__(Derived) # Notice res: Derived\n ad = a.derived_attrib\n bd = b.derived_attrib\n ab = a.base_attrib\n bb = b.base_attrib\n res.derived_attrib = ad + bd\n res.base_attrib = ab + bb\n return res\n\nIn order to test speed I have these two functions\n@cython.ccall\ndef python_instantiate() -> Derived:\n o = Derived()\n return o\n\n@cython.ccall\ndef cython_fast_instantiate() -> Derived:\n o: Derived = Derived.__new__(Derived)\n return o\n\nand timing them we get\nIn [2]: %timeit python_instantiate()\n87.5 ns ± 0.895 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)\n\nIn [3]: %timeit cython_fast_instantiate()\n62.1 ns ± 0.574 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)\n\nWhich proves that fast instantiation is faster, even though there is some python object reference incrementing and decrementing that I'm not sure is completely necessary.\n"
] |
[
0
] |
[] |
[] |
[
"attributes",
"cython",
"python",
"subclass"
] |
stackoverflow_0074500553_attributes_cython_python_subclass.txt
|
Q:
How to write conditionals across multiple columns in dataframe?
I have the following pandas dataframe:
I am trying to write some conditional python statements, where if we have issue_status of 10 or 40 AND market_phase of 0 AND tade_state of (which is what we have in all of the cases in the above screenshot), then I want to call a function called resolve_collision_mp(...).
Can I write the conditional in Python as follows?
# Collision for issue_status == 10
if market_info_df['issue_status'].eq('10').all() and market_info_df['market_phase'].eq('0').all() \
and market_info_df['trading_state'] == ' ': # need to change this, can't have equality for dataframe, need loc[...]
return resolve_collision_mp_10(market_info_df)
# Collision for issue_status == 40
if market_info_df['issue_status'].eq('40').all() and market_info_df['market_phase'].eq('0').all() \
and not market_info_df['trading_state']:
return resolve_collision_mp_40(market_info_df)
I don't think the above is correct, any help would be much appreciated!
A:
You can use .apply() with the relevant conditions,
df['new_col'] = df.apply(lambda row: resolve_collision_mp_10(row) if (row['issue_status'] == 10 and row['market_phase'] == 0 and row['tade_state'] == ' ') else None, axis=1)
df['new_col'] = df.apply(lambda row: resolve_collision_mp_40(row) if (row['issue_status'] == 40 and row['market_phase'] == 0 and row['tade_state'] == ' ') else None, axis=1)
Note: I am assuming that you are trying to create a new column with the return values of the resolve_collision_mp_10 and resolve_collision_mp_40 functions.
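If instead the functions are meant to act on the whole frame at once (as the original if-statements suggest), a boolean-mask sketch using the question's column names is an alternative to the row-wise apply:
mask_10 = (
    (market_info_df['issue_status'] == 10)
    & (market_info_df['market_phase'] == 0)
    & (market_info_df['trading_state'] == ' ')
)
if mask_10.all():
    return resolve_collision_mp_10(market_info_df)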
|
How to write conditionals across multiple columns in dataframe?
|
I have the following pandas dataframe:
I am trying to write some conditional python statements, where if we have issue_status of 10 or 40 AND market_phase of 0 AND tade_state of (which is what we have in all of the cases in the above screenshot). Then I want to call a function called resolve_collision_mp(...).
Can I write the conditional in Python as follows?
# Collision for issue_status == 10
if market_info_df['issue_status'].eq('10').all() and market_info_df['market_phase'].eq('0').all() \
and market_info_df['trading_state'] == ' ': # need to change this, can't have equality for dataframe, need loc[...]
return resolve_collision_mp_10(market_info_df)
# Collision for issue_status == 40
if market_info_df['issue_status'].eq('40').all() and market_info_df['market_phase'].eq('0').all() \
and not market_info_df['trading_state']:
return resolve_collision_mp_40(market_info_df)
I don't think the above is correct, any help would be much appreciated!
|
[
"You can use .apply() with the relevant conditions,\ndf['new_col'] = df.apply(lambda row: resolve_collision_mp_10(row) if (row['issue_status'] == 10 and row['market_phase'] == 0 and row['tade_state'] = '') else None, axis=1)\n\ndf['new_col'] = df.apply(lambda row: resolve_collision_mp_40(row) if (row['issue_status'] == 40 and row['market_phase'] == 0 and row['tade_state'] = '') else None, axis=1)\n\nNote: I am assuming that you are trying to create a new column with the return values of the resolve_collision_mp_10 and resolve_collision_mp_40 functions.\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074507462_dataframe_pandas_python.txt
|
Q:
How can I check the value of a DNS TXT record for a host?
I'm looking to verify domain ownership via a script, specifically a Python script, and would like to know how to look up the value of a DNS TXT entry. I know there are services and websites out there for this, but I would like to do it with a script.
A:
This is easy using dnspython. Here is an example:
import dns.resolver
print(dns.resolver.resolve("aaa.asdflkjsadf.notatallsuspicio.us","TXT").response.answer[0][-1].strings[0])
This gives the following output:
PnCcKpPiGlLfApDbDoEcBbPjIfBnLpFaAaObAaAaMhNgNbIfPbHkMiEfPpGgJfOcPnLdDjBeHkOjFjIbPbIoKhIjHfJlAhAhFgGbGgNlMgKmFkLgNfBjMbCoBeNbGeOnAeHgLmKoFlLhLmDcKlEdEbDpFeHkFaBlGnHiOnChIoMlIhBgOnFfKoEhDnFkKfDaMgHbJhMgPgMjGiAoJpKjKkPaIcAdGiMbIbBbAfEiKjNbCeFoElKgOePmGjJaImL
Another option is to use dig in subprocess:
import subprocess
print(subprocess.Popen(["dig","-t","txt","aaa.asdflkjsadf.notatallsuspicio.us","+short"], stdout=subprocess.PIPE).communicate()[0])
A:
This may be overly simplified, but if all you want is a quick read of the TXT record and don't mind dealing with parsing the result separately:
nslookup -q=txt somedomain.com
I found this did what I needed, short & sweet.
A:
Found another way to get a list of all TXT records for a domain using dnspython.
import dns.resolver
[dns_record.to_text() for dns_record in dns.resolver.resolve("your-domain-here", "TXT").rrset]
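For the domain-ownership use case specifically, a small sketch (dnspython 2.x; the token value is hypothetical) that checks whether an expected verification token appears among the TXT records:
import dns.resolver

def has_txt_token(domain, token):
    # True if any TXT record for the domain contains the token
    answers = dns.resolver.resolve(domain, "TXT")
    return any(token in rdata.to_text().strip('"') for rdata in answers)

print(has_txt_token("example.com", "my-verification-token"))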
A:
update 2022/11/20
# -*- coding:utf-8 -*-
# Copyright (c) DadouLab.SIG MIT
import dns
import dns.query
import dns.resolver
import logging
logger = logging.getLogger(__name__)
class Digger(object):
def __init__(self, resolvers=["1.1.1.1"]):
self.mResolver = dns.resolver.Resolver()
self.mResolver.timeout = 1
self.mResolver.lifetime = 0.5
self.mResolver.nameservers = resolvers
self.spec_query_type = ['CNAME', 'TXT', 'MX', 'NS', 'SRV', 'CAA']
def query(self, domain, query_type="A"):
"""
answer = dns.resolver.resolve("_dnsauth.test.com", "TXT").rrset
for dns_record in answer:
print(dns_record.to_text())
"""
try:
query_type = query_type.upper()
answer = self.mResolver.resolve(domain, query_type, raise_on_no_answer=False)
answer_raw = answer.chaining_result.answer.to_text()
logger.info("resolved response data => {}".format(answer_raw))
if query_type in self.spec_query_type:
records = [data.to_text() for data in answer]
else:
records = [data.address for data in answer]
return records
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
dns.resolver.NoNameservers, dns.exception.Timeout) as error:
logger.warning("resolved error => {}".format(error))
return
def is_valid(self, domain, query_type="A"):
try:
self.mResolver.resolve(domain, query_type, raise_on_no_answer=False)
return True
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
dns.resolver.NoNameservers, dns.exception.Timeout) as error:
logger.warning("resolved error => {}".format(error))
return
if __name__ == '__main__':
dig = Digger()
print(dig.query("www.example.com", query_type="A"))
|
How can I check the value of a DNS TXT record for a host?
|
I'm looking to verify domain ownership via a script, specifically a Python script, and would like know how to lookup the value of a DNS TXT entry. I know there are services and websites out there for this, but I would like to do it with a script.
|
[
"This is easy using dnspython. Here is an example:\nimport dns.resolver\nprint dns.resolver.resolve(\"aaa.asdflkjsadf.notatallsuspicio.us\",\"TXT\").response.answer[0][-1].strings[0]\n\nThis gives the following output:\nPnCcKpPiGlLfApDbDoEcBbPjIfBnLpFaAaObAaAaMhNgNbIfPbHkMiEfPpGgJfOcPnLdDjBeHkOjFjIbPbIoKhIjHfJlAhAhFgGbGgNlMgKmFkLgNfBjMbCoBeNbGeOnAeHgLmKoFlLhLmDcKlEdEbDpFeHkFaBlGnHiOnChIoMlIhBgOnFfKoEhDnFkKfDaMgHbJhMgPgMjGiAoJpKjKkPaIcAdGiMbIbBbAfEiKjNbCeFoElKgOePmGjJaImL\n\nAnother option is to use dig in subprocess:\nimport subprocess\n\nprint subprocess.Popen([\"dig\",\"-t\",\"txt\",\"aaa.asdflkjsadf.notatallsuspicio.us\",\"+short\"], stdout=subprocess.PIPE).communicate()[0] \n\n",
"This may be overly simplified, but if all you want is a quick read of the TXT record and don't mind dealing with parsing the result separately:\nnslookup -q=txt somedomain.com\n\nI found this did what I needed, short & sweet.\n",
"Found another way to get list of all TXT records for a domain using dnspython.\nimport dns.resolver\n[dns_record.to_text() for dns_record in dns.resolver.resolve(\"your-domain-here\", \"TXT\").rrset]\n\n",
"update 2022/11/20\n# -*- coding:utf-8 -*-\n# Copyright (c) DadouLab.SIG MIT\n\nimport dns\nimport dns.query\nimport dns.resolver\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass Digger(object):\n def __init__(self, resolvers=[\"1.1.1.1\"]):\n self.mResolver = dns.resolver.Resolver()\n self.mResolver.timeout = 1\n self.mResolver.lifetime = 0.5\n self.mResolver.nameservers = resolvers\n self.spec_query_type = ['CNAME', 'TXT', 'MX', 'NS', 'SRV', 'CAA']\n\n def query(self, domain, query_type=\"A\"):\n \"\"\"\n answer = dns.resolver.resolve(\"_dnsauth.test.com\", \"TXT\").rrset\n for dns_record in answer:\n print(dns_record.to_text())\n \"\"\"\n try:\n query_type = query_type.upper()\n answer = self.mResolver.resolve(domain, query_type, raise_on_no_answer=False)\n answer_raw = answer.chaining_result.answer.to_text()\n logger.info(\"resolved response data => {}\".format(answer_raw))\n if query_type in self.spec_query_type:\n records = [data.to_text() for data in answer]\n else:\n records = [data.address for data in answer]\n return records\n except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,\n dns.resolver.NoNameservers, dns.exception.Timeout) as error:\n logger.warning(\"resolved error => {}\".format(error))\n return\n\n def is_valid(self, domain, query_type=\"A\"):\n try:\n self.mResolver.resolve(domain, query_type, raise_on_no_answer=False)\n return True\n except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,\n dns.resolver.NoNameservers, dns.exception.Timeout) as error:\n logger.warning(\"resolved error => {}\".format(error))\n return\n\n\nif __name__ == '__main__':\n dig = Digger()\n print(dig.query(\"www.example.com\", query_type=\"A\"))\n\n\n"
] |
[
23,
6,
0,
0
] |
[
"Something like this should work to at least get the value for the URL, I used google.com for the example.\nimport pycurl\nimport StringIO\nurl = \"whatsmyip.us/dns_txt.php?host=google.com\"\nc = pycurl.Curl()\nc.setopt(pycurl.URL, url)\nc.setopt(pycurl.HTTPHEADER, [\"Accept:\"])\ntxtcurl = StringIO.StringIO()\nc.setopt(pycurl.WRITEFUNCTION, txtcurl.write)\nc.perform\n\ndata = txtcurl.getvalue()\ndata = data.replace(\"Done!\", \"\")\nprint data\n\nI did not test any of this but pulled it from a previous project.\nBest of luck!\n"
] |
[
-7
] |
[
"dns",
"python"
] |
stackoverflow_0011705946_dns_python.txt
|
Q:
Can't use int(input())'s
x = int(input())
y = int(input())
z = int(input())
print(x, y, z)
When I input y an error shows up:
ValueError: invalid literal for int() with base 10: ''
I didn't know what to try so I just messed around and when I did the following it somehow worked
x = int(input())
print(x)
y = int(input())
print(y)
z = int(input())
print(z)
print(x, y, z)
so my question is why it doesn't work without prints
So apparently PyCharm is the problem. When I input the same numbers in VS Code or any online Python compiler I get what I input. I guess I won't be using PyCharm anymore.
A:
There's nothing inherently wrong with what you're doing, you can pass the result of input() into int():
>>> x = int(input())
123
>>> type(x)
<class 'int'>
>>> x
123
The error that you're getting indicates that you're passing something which isn't a number into int().
>>> int("hello")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'hello'
So the issue is with what you were entering into the input() prompt. Whitespace around the digits is fine (int() strips it), but an empty line or any letters or other characters will make it fail; your traceback shows int() received an empty string ''.
Hopefully that explains the error you were seeing. If you believe you were entering only digits and you got that error, it would be helpful to see a paste of the console session so I can take a look and explain what's happening.
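If blank lines are sneaking in (which is what the '' in your error suggests, for example when pasting several values at once), a small defensive sketch is to skip empty input before converting:
def read_int(prompt=""):
    while True:
        s = input(prompt).strip()
        if s:  # ignore blank lines such as a pasted trailing newline
            return int(s)

x = read_int()
y = read_int()
z = read_int()
print(x, y, z)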
A:
In both code samples you will receive the error if you give a value that cannot be converted to an integer, for example:
"asd"
"!"
" "
"12lorem"
Make sure you give number string as an input
|
Can't use int(input())'s
|
x = int(input())
y = int(input())
z = int(input())
print(x, y, z)
When I input y an error shows up:
ValueError: invalid literal for int() with base 10: ''
I didn't know what to try so I just messed around and when I did the following it somehow worked
x = int(input())
print(x)
y = int(input())
print(y)
z = int(input())
print(z)
print(x, y, z)
so my question is why it doesn't work without prints
So apparently PYCharm is the problem. When I input the same numbers in VSC or any online python compiler I get what I input. I guess I won't be using PYCharm anymore.
|
[
"There's nothing inherently wrong with what you're doing, you can pass the result of input() into int():\n>>> x = int(input())\n123\n>>> type(x)\n<class 'int'>\n>>> x\n123\n\nThe error that you're getting indicates that you're passing something which isn't a number into int().\n>>> int(\"hello\")\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nValueError: invalid literal for int() with base 10: 'hello'\n\nSo the issue is with what you were entering into the input() prompt. If you added whitespace (i.e. spaces, newlines, that sort of thing) then int() is fine with that, but if you entered any letters or other characters then you'd be out of luck.\nHopefully that explains the error you were seeing. If you believe you were entering only digits and you got that error, it would be helpful to see a paste of the console session so I can take a look and explain what's happening.\n",
"In both code samples you will receive the error if you give a value that cannot be converted to integer, for example:\n\"asd\"\n\"!\"\n\" \"\n\"12lorem\"\n\nMake sure you give number string as an input\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074507804_python.txt
|
Q:
Unable to install tables (Python, OS X) - could not find a local HDF5 installation
I keep having an error
ERROR:: Could not find a local HDF5 installation
when I'm installing tables in Python:
pip install tables
I've downloaded and installed http://continuum.io/downloads but it didn't help. What else can I try to solve it?
A:
If you are using the anaconda distribution you can just do:
$ conda install pytables
If you need to install from pip and already have the HDF5 libraries installed you can do:
$ HDF5_DIR=/path/to/hdf5 pip install tables
E.g. you could install HDF5 with conda and still install pytables using the method above but it's a lot easier just using conda.
A:
On mac os arm chip:
brew install hdf5
HDF5_DIR=/opt/homebrew/Cellar/hdf5/1.12.2_2 pip install tables
And you are done!
A:
On osx without installing anaconda:
brew install homebrew/science/hdf5
pip install tables
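Whichever route you take, a quick sketch to confirm the install actually found the HDF5 libraries (print_versions is PyTables' own diagnostics helper):
import tables
print(tables.__version__)
tables.print_versions()  # reports the HDF5 version PyTables was built against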
|
Unable to install tables (Python, OS X) - could not find a local HDF5 installation
|
I keep having an error
ERROR:: Could not find a local HDF5 installation
when I'm installing tables in Python:
pip install tables
I've downloaded and installed http://continuum.io/downloads but it didn't help. What else can I try to solve it?
|
[
"If you are using the anaconda distribution you can just do:\n$ conda install pytables\n\nIf you need to install from pip and already have the HDF5 libraries installed you can do:\n$ HDF5_DIR=/path/to/hdf5 pip install tables\n\nE.g. you could install HDF5 with conda and still install pytables using the method above but it's a lot easier just using conda.\n",
"On mac os arm chip:\nbrew install hdf5\nHDF5_DIR=/opt/homebrew/Cellar/hdf5/1.12.2_2 pip install tables\n\nAnd you are done!\n",
"On osx without installing anaconda: \nbrew install homebrew/science/hdf5\npip install tables\n\n"
] |
[
7,
1,
0
] |
[] |
[] |
[
"hdf5",
"macos",
"python"
] |
stackoverflow_0028733625_hdf5_macos_python.txt
|
Q:
Buffer.from(, 'hex') equivalent in Python
I have a typescript library that I need to translate into Python. I am using the library bs58 in Typescript and its equivalent base58 library in python.
My problem is coming when I try to replicate this:
const decodedTxHash = Buffer.from('34cc2932f90774851410a536e3db2c2e61266a1587fbc15e7e9c79b41631ac74', 'hex')
const nearBurnTxHash = bs58.encode(decodedTxHash)
This results in: 4Z6m9qjt9BNxTF1SdDw3bzYGXYzMp2gTmwRy5AJxpNps
What would be the way to get the same result in Python? I can tell you that I tried everything I could think of: making it into a bytearray, feeding it as a string or as bytes, but nothing gave me the same result.
Any ideas?
A:
According to your title you're only asking how to convert hex into bytes, which can simply be achieved by bytes.fromhex("<some hex in here>").
A full working example for your code will be:
import base58
raw_bytes = bytes.fromhex("34cc2932f90774851410a536e3db2c2e61266a1587fbc15e7e9c79b41631ac74")
b58_encoded = base58.b58encode(raw_bytes)
print(b58_encoded)
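And a quick round-trip check that this matches the TypeScript result:
decoded = base58.b58decode(b58_encoded)
assert decoded == raw_bytes
print(b58_encoded.decode())  # expected: 4Z6m9qjt9BNxTF1SdDw3bzYGXYzMp2gTmwRy5AJxpNps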
|
Buffer.from(, 'hex') equivalent in Python
|
I have a typescript library that I need to translate into Python. I am using the library bs58 in Typescript and its equivalent base58 library in python.
My problem is coming when I try to replicate this:
const decodedTxHash = Buffer.from('34cc2932f90774851410a536e3db2c2e61266a1587fbc15e7e9c79b41631ac74', 'hex')
const nearBurnTxHash = bs58.encode(decodedTxHash)
This results in: 4Z6m9qjt9BNxTF1SdDw3bzYGXYzMp2gTmwRy5AJxpNps
What would be the way to get the same result in Python? I can tell you that I tried all I could think of about making it into a bytearray, feeding it as a string as bytes, nothing gave me the same result.
Any ideas?
|
[
"According to your title you're only asking on how to convert hex into bytes which can simply be archived by bytes.fromhex(\"<some hex in here>\").\nA full working example for your code will be:\nimport base58\nraw_bytes = bytes.fromhex(\"34cc2932f90774851410a536e3db2c2e61266a1587fbc15e7e9c79b41631ac74\")\nb58_encoded = base58.b58encode(raw_bytes)\n\nprint(b58_encoded)\n\n"
] |
[
0
] |
[] |
[] |
[
"base58",
"hex",
"javascript",
"python",
"typescript"
] |
stackoverflow_0074507836_base58_hex_javascript_python_typescript.txt
|
Q:
How to get the real number after a string in a file
I have files that contain both strings and floats. I am interested in finding the floats after a specific string. Any help in writing a function that reads the file, looks for that specific string, and returns the float after it will be much appreciated.
Thanks
An example of a file is
lines = """aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc
qq vvv rrr ssssa 22.6
zzzzx bbbb 12.0
xxxxxxxxxx -1.099
zzzz bbb nnn 33.5"""
import re
lines = """aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc
qq vvv rrr ssssa 22.6
zzzzx bbbb 12.0
xxxxxxxxxx -1.099
zzzz bbb nnn 33.5"""
str_to_search = 'xxxxxxxxxx'
num = re.findall(r'^' + str_to_search + r' (\d+\.\d+)', lines, flags=re.M)
print(num)
This works if there are no negative signs. In other words, if the number after the string 'xxxxxxxxxx' is 1.099 rather than '-1.099', it works fine. The question I have is how to generalize it so it accounts for negative numbers as well, given that the value can be a positive number (no sign) or a negative number (with a negative sign).
A:
You can use regex
(-?\d+\.?\d*)
import re
lines = """aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc
qq vvv rrr ssssa 22.6
zzzzx bbbb 12.0
xxxxxxxxxx -1.099
zzzz bbb nnn 33.5
xxxxxxxxxx 1.099"""
str_to_search = "xxxxxxxxxx"
num = re.findall(fr"(?m)^{str_to_search}\s+(-?\d+\.?\d*)", lines)
print(num)
Prints:
['-1.099', '1.099']
A:
You can change the regex to the following:
num = re.findall(r'^' + str_to_search + r' (-?\d+\.?\d*)', lines, flags=re.M)
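For the sample text above this yields (a quick check, reusing the lines variable from the question):
print(re.findall(r'^' + str_to_search + r' (-?\d+\.?\d*)', lines, flags=re.M))
# ['-1.099']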
A:
I would just split the entire file content at every space. This will give us a list of all strings and floats. Then use list.index() to find the index of the string you are searching for; put that into try/except to make sure your code won't stop if the string is not in the contents. Then just read the next element and try to convert it to a float.
Code:
lines = """aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc
qq vvv rrr ssssa 22.6
zzzzx bbbb 12.0
xxxxxxxxxx -1.099
zzzz bbb nnn 33.5"""
lines = lines.replace("\n", " ").split(" ") # replace the newlines with spaces to split them as well
try:
    float_index = lines.index("xxxxxxxxxx") + 1  # Get the element after the string you are trying to find
    num = float(lines[float_index])
except Exception as e:
    print(e)
else:
    print(num)  # num only exists if the lookup succeeded
If you are looking for a solution in regex, use Andrej Kesely's answer.
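As a side note, not part of the original answer: str.split() with no argument splits on any run of whitespace, including newlines, so the replace step can be dropped entirely (assuming the original lines string):
tokens = lines.split()
num = float(tokens[tokens.index("xxxxxxxxxx") + 1])
print(num)  # -1.099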
|
How to get the real number after a string in a file
|
I have files that contain both strings and floats. I am interested in finding the floats after a specific string. Any help in writing such a function that reads the file, looks for that specific string, and returns the float after it will be much appreciated.
Thanks
An example of a file is
lines = """aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc
qq vvv rrr ssssa 22.6
zzzzx bbbb 12.0
xxxxxxxxxx -1.099
zzzz bbb nnn 33.5"""
import re
lines = """aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc
qq vvv rrr ssssa 22.6
zzzzx bbbb 12.0
xxxxxxxxxx -1.099
zzzz bbb nnn 33.5"""
str_to_search = 'xxxxxxxxxx'
num = re.findall(r'^' + str_to_search + r' (\d+\.\d+)', lines, flags=re.M)
print(num)
This works if there are no negative signs. In other words, if the number after the string 'xxxxxxxxxx' is 1.099 rather than '-1.099', it works fine. The question I have is how to generalize this so it accounts for negative numbers as well, given that the number can be positive (no sign) or negative (with a minus sign).
|
[
"You can use regex\n(-?\\d+\\.?\\d*)\n\n\nimport re\n\nlines = \"\"\"aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc\nqq vvv rrr ssssa 22.6\nzzzzx bbbb 12.0\nxxxxxxxxxx -1.099\nzzzz bbb nnn 33.5\nxxxxxxxxxx 1.099\"\"\"\n\nstr_to_search = \"xxxxxxxxxx\"\nnum = re.findall(fr\"(?m)^{str_to_search}\\s+(-?\\d+\\.?\\d*)\", lines)\nprint(num)\n\nPrints:\n['-1.099', '1.099']\n\n",
"You can change the regex to following:\nnum = re.findall(r'^' + str_to_search + r' (-?\\d+\\.?\\d*)', lines, flags=re.M)\n\n",
"I would just split the entire filecontent at every space. This will give us a list of all strings and floats. Then use list.index(\" \") to find the index of the string you are searching for, put that into try/except to make sure your code wont stop if the string is not in the contents. Then just read the next element and try to convert it to a float.\nCode:\nlines = \"\"\"aaaaaaaaaaaaaaa bbbbbbbbbbbbbbb cccccccccc\nqq vvv rrr ssssa 22.6\nzzzzx bbbb 12.0\nxxxxxxxxxx -1.099\nzzzz bbb nnn 33.5\"\"\"\n\nlines = lines.replace(\"\\n\", \" \").split(\" \") # replace the newlines with spaces to split them as well\n\ntry:\n float_index = lines.index(\"xxxxxxxxxx\") + 1 # Get the element after the string you are trying to find\n\n num = float(lines[float_index])\nexcept Exception as e:\n print(e)\n\nprint(num)\n\nIf you are looking for a solution in regex, use Andrej Kesely's awnser.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"file",
"python",
"string",
"txt"
] |
stackoverflow_0074507864_file_python_string_txt.txt
|
Q:
Adding a field to a list of dictionaries based on another field value
I have a list of dictionaries in the form of
[{'a': '2.1', 'z': 'apple', 'aa': 'banana'}, {'a': '4.7', 'z': 'apple', 'aa': 'banana'}, {'a': '1.6', 'z': 'apple', 'aa': 'orange'}]
I am looking to add another field to each dictionary whose value depends on the value of another field, and the final list should look like
[{'a': '2.1', 'z': 'apple', 'aa': 'banana', 'm':'anana'}, {'a': '4.7', 'z': 'apple', 'aa': 'banana', 'm':'anana'}, {'a': '1.6', 'z': 'apple', 'aa': 'orange', 'm':'range'}]
The value of the key 'aa', with its first letter removed, is added as another field.
I did it by
for x in val:
x['m'] = x['aa'][1:]
Can this be done as a single-line operation?
A:
I feel there is a misunderstanding about what those "one-liners" are.
I, myself, proudly exhibit one of those from time to time, in answer to some questions.
But, in reality, it is not the fact that they fit on 1 line that makes them "pythonesque". It is the fact that they are expressions, not instructions. That is, some sort of functional programming, which is part of the multi-paradigm nature of python.
So if your question were
I need to build a new list of dicts whose fields are the same as in another one, plus a new field, computed from another field.
For now my solution is
newdics=[]
for d in val:
    nd=d.copy()
    nd['m'] = nd['aa'][1:]
    newdics.append(nd)
Then, we could have answered with a spectacular one liner such as
newdics=[dict(d, m=d['aa'][1:]) for d in val]
(Which is roughly the answer you got while I was typing this one, without the **{'keyword':value} usage, and a simple keyword=value instead, as argument for dict. Note that **{'keyword':value} might be a good idea if in reality you have a bunch of rules saying what those keywords are, themselves computed. But in your question, that keyword is simply m, so it is even more compact to just add m=... to the dict arguments, rather than **{'m': ...})
Then, that one-liner would have been useful, because it replaces whole code, with intermediate variables whose only purpose is to compute an expression, by a single expression.
But that is not your question. Your question is to replace a 2-line imperative instruction by a 1-line imperative instruction. A one-liner in such a situation doesn't really help. It is still an instruction. It does not avoid imperative style to perform a functional operation.
But, well, if you really insist on having a single line, rather than the functional
[dict(d, m=d['aa'][1:]) for d in val]
which does not do exactly the same as what you wanted, I would simply remind you that in python, when a "subblock" is only one line, you can put it on the same line.
So, simply format your code that way
for x in val: x['m'] = x['aa'][1:]
It is even shorter than my functional one-liner (and therefore even shorter than Nuri's), and it has the advantage of doing exactly what you wanted: not computing a new value, but changing the one you have; an instruction, not an expression.
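For the record, running it on the sample data gives exactly the expected list:
val = [{'a': '2.1', 'z': 'apple', 'aa': 'banana'}, {'a': '4.7', 'z': 'apple', 'aa': 'banana'}, {'a': '1.6', 'z': 'apple', 'aa': 'orange'}]
for x in val: x['m'] = x['aa'][1:]
print(val)
# [{'a': '2.1', 'z': 'apple', 'aa': 'banana', 'm': 'anana'}, {'a': '4.7', 'z': 'apple', 'aa': 'banana', 'm': 'anana'}, {'a': '1.6', 'z': 'apple', 'aa': 'orange', 'm': 'range'}]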
|
Adding a field to a list of dictionaries based on another field value
|
I have a list of dictionaries in the form of
[{'a': '2.1', 'z': 'apple', 'aa': 'banana'}, {'a': '4.7', 'z': 'apple', 'aa': 'banana'}, {'a': '1.6', 'z': 'apple', 'aa': 'orange'}]
I am looking to add another field to each dictionary whose value depends on the value of another field, and the final list should look like
[{'a': '2.1', 'z': 'apple', 'aa': 'banana', 'm':'anana'}, {'a': '4.7', 'z': 'apple', 'aa': 'banana', 'm':'anana'}, {'a': '1.6', 'z': 'apple', 'aa': 'orange', 'm':'range'}]
The value of the key 'aa', with its first letter removed, is added as another field.
I did it by
for x in val:
x['m'] = x['aa'][1:]
Can this be done as a single-line operation?
|
[
"I feel there is a misunderstanding about what are those \"one-liners\".\nI, myself, proudly exhibit one of those from times to times, in answer to some questions.\nBut, in reality, it is not the fact that they fit on 1 line that makes them \"pythonesque\". But the fact that they are expression, not instruction. That is, some sort of functional programming, which is part of the multi-paradigms of python.\nSo if your question were\n\nI need to build a new list of whose fields are the same of another one, plus a new field, computed from another field.\nFor now my solution is\nnewdics=[]\nfor d in val:\n nd=d.copy()\n nd['m'] = nd['aa][1:]\n newdics.append(nd)\n\n\nThen, we could have answered with a spectacular one liner such as\nnewdics=[dict(d, m=d['aa'][1:]) for d in val]\n\n(Which is roughly the answer you got while I was typing this one, without the **{'keyword':value} usage, and a simple keyword=value instead, as argument for dict — note that **{'keyword':value} might be a good idea if in reality you have a bunch of rules saying what are those keywords, themselves computed. But in your question, that keyword is simply m, so it is even more compact to just add m=... to dict argument, rather than **{'m': ...})\nThen, that one-liner would have been useful, because, it replace a whole code, using intermediates variables, whose only purpose would be to compute an expression, by a single expression.\nBut that is not your question. Your question is to replace a 2-lines imperative instruction by a 1-line imperative instruction. One-liner in such situation doesn't really help. It is still an instruction. It does not avoid imperative style to perform a functional operation.\nBut, well, if you really insist on having a single line, rather than the functional\n[dict(d, m=d['aa'][1:]) for d in val]\n\nthat does not exactly the same as what you wanted, I would simply remind that in python, when a \"subblock\" is only one line, you can put it in the same line.\nSo, simply format your code that way\nfor x in val: x['m'] = x['aa'][1:]\n\nIt is even shorter than my functional one-liner (and therefore even shorter than Nuri's), and it has the advantage to do exactly what you wanted: not compute a value, but change the one you have; an instruction, not an expression.\n"
] |
[
1
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074507769_dictionary_python.txt
|
Q:
How to get last Thursday date of next month and next of next month using python
I want to get the date of the last Thursday of next month and of the month after next.
Currently I am able to get the date of the last Thursday of the current month.
Code:
import datetime
dt = datetime.datetime.today()
def lastThurs_currentmonth(dt):
currDate, currMth, currYr = dt, dt.month, dt.year
for i in range(31):
if currDate.month == currMth and currDate.year == currYr and currDate.weekday() == 3:
#print('dt:'+ str(currDate))
lastThuDate = currDate
currDate += datetime.timedelta(1)
return lastThuDate
lastThurs_currentmonth(dt)
Output:
datetime.datetime(2022, 11, 24, 11, 2, 17, 620842)
Now I need to get the date of the last Thursday of next month and of the month after next.
Expected Output:
Last Thursday of next month:
datetime.datetime(2022, 12, 29)
Last Thursday of the month after next:
datetime.datetime(2023, 1, 26)
Ref link:
Get the last thursday of the current month using python
A:
One way is to add months less one day from the start of the current month and then subtract back to the last Thursday. Thursdays are isoweekday 4, so it's a case of subtracting off the right number of days. Unfortunately timedelta doesn't allow months, so the dateutil library is also needed for my solution.
import datetime
from dateutil.relativedelta import relativedelta
def last_thrs(start_date, months):
date_to_check = datetime.date(start_date.year, start_date.month, 1) + relativedelta(months=months+1) - datetime.timedelta(days=1)
return date_to_check - datetime.timedelta(days = ((date_to_check.isoweekday() + 3) % 7))
dt_today = datetime.date.today()
print(last_thrs(dt_today, 1))
# 2022-12-29
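And for the month after next, pass months=2, which matches the expected output from the question:
print(last_thrs(dt_today, 2))
# 2023-01-26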
|
How to get last Thursday date of next month and next of next month using python
|
I want to get the date of the last Thursday of next month and of the month after next.
Currently I am able to get the date of the last Thursday of the current month.
Code:
import datetime
dt = datetime.datetime.today()
def lastThurs_currentmonth(dt):
currDate, currMth, currYr = dt, dt.month, dt.year
for i in range(31):
if currDate.month == currMth and currDate.year == currYr and currDate.weekday() == 3:
#print('dt:'+ str(currDate))
lastThuDate = currDate
currDate += datetime.timedelta(1)
return lastThuDate
lastThurs_currentmonth(dt)
Output:
datetime.datetime(2022, 11, 24, 11, 2, 17, 620842)
Now I need to get the date of the last Thursday of next month and of the month after next.
Expected Output:
Last Thursday of next month:
datetime.datetime(2022, 12, 29)
Last Thursday of the month after next:
datetime.datetime(2023, 1, 26)
Ref link:
Get the last thursday of the current month using python
|
[
"One way is to add months less one day from the start of the current month and then subtract back to the last Thursday. Thursdays are isoweekday 4, so it's a case of subtracting off the right number of days. Unfortunately timedelta doesn't allow months, so the dateutil library is also needed for my solution.\nimport datetime\nfrom dateutil.relativedelta import relativedelta\n\ndef last_thrs(start_date, months):\n date_to_check = datetime.date(start_date.year, start_date.month, 1) + relativedelta(months=months+1) - datetime.timedelta(days=1)\n return date_to_check - datetime.timedelta(days = ((date_to_check.isoweekday() + 3) % 7))\n\n\ndt_today = datetime.date.today()\nprint(last_thrs(dt_today, 1))\n# 2022-12-29\n\n"
] |
[
1
] |
[] |
[] |
[
"datetime",
"python"
] |
stackoverflow_0074507551_datetime_python.txt
|
Q:
Attempt to connect to postgresql silently stops the program
I am trying to connect to postgresql using pyside2. But, when I try to call the open method, the program stops running without showing any error.
This is my code:
from PySide2.QtSql import QSqlDatabase, QSqlQuery
from PySide2.QtWidgets import QApplication
app = QApplication([])
db = QSqlDatabase.addDatabase("QPSQL")
db.setHostName("localhost")
db.setDatabaseName("mydatabase")
ok = db.open("user", "password")
print("hola")
The line print("hola") shows nothing.
I'm pretty sure there's an internal error happening, but I can't see it. If you need to see the error message to help, you will have to explain step by step how to get it.
I was searching on Google and it seems that I am the only one in the world this happens to. Thanks for any help you can give me.
A:
I can not believe it. The problem was that it was not setting the port correctly. Now that I set the port right, it works perfectly xD
I don't know why it took me so long to notice.
db.setPort(myport)
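For anyone else hitting a silent failure here: Qt does record the reason, and you can read it after open() fails (this check is not in the original code, just a suggestion):
if not db.open("user", "password"):
    print(db.lastError().text())  # e.g. wrong port, connection refused, missing driver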
|
Attempt to connect to postgresql silently stops the program
|
I am trying to connect to postgresql using pyside2. But, when I try to call the open method, the program stops running without showing any error.
This is my code:
from PySide2.QtSql import QSqlDatabase, QSqlQuery
from PySide2.QtWidgets import QApplication
app = QApplication([])
db = QSqlDatabase.addDatabase("QPSQL")
db.setHostName("localhost")
db.setDatabaseName("mydatabase")
ok = db.open("user", "password")
print("hola")
The line print("hola") shows nothing.
I'm pretty sure there's an internal error happening, but I can't see it. If you need to see the error message to help, you will have to explain step by step how to get it.
I was searching on Google and it seems that I am the only one in the world this happens to. Thanks for any help you can give me.
|
[
"I can not believe it. The problem was that it was not setting the port correctly. Now that I put the port right, it works perfect xD\nI don't know why it took me so long to notice.\n db.setPort(myport)\n\n"
] |
[
0
] |
[] |
[] |
[
"postgresql",
"pyside2",
"python",
"qt"
] |
stackoverflow_0074507874_postgresql_pyside2_python_qt.txt
|
Q:
Using Regex to combine lines start with quotation marks
I would like to combine two lines that are separated by only one line feed \n, and sometimes the next line starts with a quotation mark. I am trying to use this code to combine them, with \" to match quotation marks:
comb_nextline = re.sub(r'(?<=[^\.][A-Za-z,-])\n[ ]*(?=[a-zA-Z0-9\(\"])', ' ', txt)
but it doesn't work when the next line starts with a quotation mark. Is there any way to combine lines that start with quotation marks? Thanks!
My txt looks like this:
import re
txt= '''
The first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output
(I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called
"chip joining", RTC offers both a near infrared or forced convection oven.
'''
comb_nextline = re.sub(r'(?<=[^\.][A-Za-z,-])\n[ ]*(?=[a-zA-Z0-9\(\"])', ' ', txt)
print(comb_nextline)
And I hope to get this
txt =
'''
The first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output (I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called "chip joining", RTC offers both a near infrared or forced convection oven.
'''
A:
You can also match optional spaces before matching the newline (in your sample text the line before the quoted one ends with a trailing space, so the lookbehind never sees a letter right before the \n):
(?<=[^.][A-Za-z,-]) *\n *(?=[a-zA-Z0-9(\"])
Regex demo | Python demo
Or matching all spaces without newlines using a negated character class [^\S\n]
(?<=[^.][A-Za-z,-])[^\S\n]*\n[^\S\n]*(?=[a-zA-Z0-9(\"])
Regex demo
import re
txt = '''
The first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output
(I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called
"chip joining", RTC offers both a near infrared or forced convection oven.
'''
comb_nextline = re.sub(r'(?<=[^.][A-Za-z,-]) *\n *(?=[a-zA-Z0-9(\"])', ' ', txt)
print(comb_nextline)
Output
The first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output (I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called "chip joining", RTC offers both a near infrared or forced convection oven.
|
Using Regex to combine lines start with quotation marks
|
I would like to combine two lines that are separated by only one line feed \n, and sometimes the next line starts with a quotation mark. I am trying to use this code to combine them, with \" to match quotation marks:
comb_nextline = re.sub(r'(?<=[^\.][A-Za-z,-])\n[ ]*(?=[a-zA-Z0-9\(\"])', ' ', txt)
but it doesn't work when the next line starts with a quotation mark. Is there any way to combine lines that start with quotation marks? Thanks!
My txt looks like this:
import re
txt= '''
The first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output
(I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called
"chip joining", RTC offers both a near infrared or forced convection oven.
'''
comb_nextline = re.sub(r'(?<=[^\.][A-Za-z,-])\n[ ]*(?=[a-zA-Z0-9\(\"])', ' ', txt)
print(comb_nextline)
And I hope to get this
txt =
'''
The first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output (I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called "chip joining", RTC offers both a near infrared or forced convection oven.
'''
|
[
"You can also match optional spaces before matching the newline\n(?<=[^.][A-Za-z,-]) *\\n *(?=[a-zA-Z0-9(\\\"])\n\nRegex demo | Python demo\nOr matching all spaces without newlines using a negated character class [^\\S\\n]\n(?<=[^.][A-Za-z,-])[^\\S\\n]*\\n[^\\S\\n]*(?=[a-zA-Z0-9(\\\"])\n\nRegex demo\nimport re\n\ntxt = '''\nThe first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output\n(I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called \n\"chip joining\", RTC offers both a near infrared or forced convection oven.\n'''\n\ncomb_nextline = re.sub(r'(?<=[^.][A-Za-z,-]) *\\n *(?=[a-zA-Z0-9(\\\"])', ' ', txt)\nprint(comb_nextline)\n\nOutput\nThe first process, called wafer bumping, involves a reflow solder process to form the solder balls on all of the input/output (I/O) pads on the wafer. Because of the extremely small geometries involved, in some instances this process is best accomplished in a hydrogen atmosphere. RTC offers a high temperature furnace for this application, equipped with the hydrogen package, providing a re-flow process in a 100 hydrogen atmosphere. For a second process, called \"chip joining\", RTC offers both a near infrared or forced convection oven.\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074507967_python_regex.txt
|
Q:
How to create table dynamically from user input?
I am creating a wishlist app using Tkinter and sqlite3. I want the user to be able to create tables in the database by inputting names. For that I connected a button to this function:
def create_table(table_name):
connection = sql.connect(f'{directory}\main.sqlite')
cursor = connection.cursor()
cursor.execute("CREATE TABLE ? (name TEXT, price REAL, url TEXT)",(table_name,))
connection.close()
This doesn't work and I get:
cursor.execute("create table ? (name text, price real, url text)",(table_name,))
sqlite3.OperationalError: near "?": syntax error
Is it possible to do string formatting in CREATE TABLE? I'd rather create separate tables than one with an additional column for the id of items. I don't want to use an f-string, as it can be an issue if the user inputs commands instead of a name.
A:
Nope, this cannot be done. A table name cannot act as a dynamic parameter from SQLite's point of view. You will need to do something like this:
f'CREATE TABLE {table_name} (name TEXT, price REAL, url TEXT)'
But first you will need to validate the user input for table_name. Which shouldn't be a problem if you want to limit the allowed characters to (for example) only 1+ English letters and 0+ underscores. You might also want to validate the table name length and uniqueness somehow.
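A minimal validation sketch along those lines; the allowed pattern here is just an example, and directory is assumed to be defined as in the question:
import re
import sqlite3 as sql

def create_table(table_name):
    # example rule: a letter followed by letters/underscores only
    if not re.fullmatch(r'[A-Za-z][A-Za-z_]*', table_name):
        raise ValueError(f'invalid table name: {table_name!r}')
    connection = sql.connect(f'{directory}\\main.sqlite')
    cursor = connection.cursor()
    cursor.execute(f'CREATE TABLE {table_name} (name TEXT, price REAL, url TEXT)')
    connection.close()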
|
How to create table dynamically from user input?
|
I am creating a wishlist app using Tkinter and sqlite3. I want the user to be able to create tables in the database by inputting names. For that I connected a button to this function:
def create_table(table_name):
connection = sql.connect(f'{directory}\main.sqlite')
cursor = connection.cursor()
cursor.execute("CREATE TABLE ? (name TEXT, price REAL, url TEXT)",(table_name,))
connection.close()
This doesn't work and I get:
cursor.execute("create table ? (name text, price real, url text)",(table_name,))
sqlite3.OperationalError: near "?": syntax error
Is it possible to do string formatting in CREATE TABLE? I'd rather create separate tables than one with an additional column for the id of items. I don't want to use an f-string, as it can be an issue if the user inputs commands instead of a name.
|
[
"Nope, this cannot be done. A table name cannot act as a dynamic parameter from SQLite's point of view. You will need to do something like this:\nf'CREATE TABLE {table_name} (name TEXT, price REAL, url TEXT)'\n\nBut first you will need to validate the user input for table_name. Which shouldn't be a problem if you want to limit the allowed characters to (for example) only 1+ English letters and 0+ underscores. You might also want to validate the table name length and uniqueness somehow.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"sqlite",
"string",
"string_formatting"
] |
stackoverflow_0074507946_python_sqlite_string_string_formatting.txt
|
Q:
I have different age groups data and different months, How can I convert it to Annually in Python?
df = pd.read_csv('1410001701eng.csv')
df.head()
df['date'] = pd.to_datetime(df['Age group'])
df['year'] = pd.DatetimeIndex(df['date']).year
monthly_year_avg = df.groupby('year')['VALUE'].mean()
print(monthly_year_avg)
This is my code. Could you please tell me what's wrong, give me a hint, or point me to a site with similar questions? I have monthly data from Jan-1978 to November-2022. How can I convert all this monthly data for the different age groups to annual data by taking the average?
Or do you think I should calculate it one by one in Excel? It's only 44 years.
Thank you very much! Much appreciated
I tried searching for similar questions on Reddit and Stack Overflow; they all used resample to get the result.
I have monthly data from Jan-1978 to November-2022. How can I convert all this monthly data for the different age groups to annual data by taking the average?
A:
This should give you a new pandas dataframe with the yearly mean. Note that the if statement subtracts 1 from the time step to account for there being no December column for 2022.
new_df = pd.DataFrame() #create empty pandas dataframe
time_step = 12 #months in a year
for i in np.arange(0, len(df.columns), time_step):
new_header = df.columns[i][-2:]
if new_header == str(22): #If the year is 2022
sliced_for_mean = df.iloc[:, i:i+time_step-1] #take one off from the last step (no December column)
new_df[new_header] = sliced_for_mean.mean(axis=1) #means for each row appended to new_df
else: #else do this
sliced_for_mean = df.iloc[:, i:i+time_step] #sliced df to calculate mean for year
new_df[new_header] = sliced_for_mean.mean(axis=1) #means for each row appended to new_df
print(new_df)
A:
melt month columns to a single column "month", and extract year value from month. Then aggregate by year:
df = pd.DataFrame(data=[
["M", "21-30", None, None, 15000, 21000, 22500, 21800, None, None, None],
["M", "31-40", 18000, 19200, 19000, None, None, 21800, 21500, 22300, 22000],
["M", "41-50", 22200, None, 15000, 21000, 22500, 21800, None, None, 22000],
], columns=["gender", "age_group", "Nov-20", "Dec-20", "Mar-21", "Apr-21", "May-21", "Jun-21", "Jan-22", "Feb-22", "Mar-22"])
df = df.fillna(0)
df = df.melt(id_vars=["gender", "age_group"], value_vars=df.drop(["gender", "age_group"], axis=1).columns, var_name="month", value_name="value")
df["year"] = df["month"].str.split("-").str[1]
df = df.groupby(["gender", "age_group", "year"]).agg(avg=("value", np.mean)).reset_index()
[Out]
gender age_group year avg
0 M 21-30 20 0.000000
1 M 21-30 21 20075.000000
2 M 21-30 22 0.000000
3 M 31-40 20 18600.000000
4 M 31-40 21 10200.000000
5 M 31-40 22 21933.333333
6 M 41-50 20 11100.000000
7 M 41-50 21 20075.000000
8 M 41-50 22 7333.333333
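Since the question mentions resample: if the raw CSV has one row per month with a parseable date column, a yearly mean per age group is a short groupby. A minimal sketch, assuming a date column named REF_DATE (adjust to your file) and reusing the df from the question:
df['date'] = pd.to_datetime(df['REF_DATE'])  # assumed date column name
yearly = df.groupby([df['date'].dt.year, 'Age group'])['VALUE'].mean()
print(yearly)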
|
I have different age groups data and different months, How can I convert it to Annually in Python?
|
df = pd.read_csv('1410001701eng.csv')
df.head()
df['date'] = pd.to_datetime(df['Age group'])
df['year'] = pd.DatetimeIndex(df['date']).year
monthly_year_avg = df.groupby('year')['VALUE'].mean()
print(monthly_year_avg)
This is my code. Could you please tell me what's wrong, give me a hint, or point me to a site with similar questions? I have monthly data from Jan-1978 to November-2022. How can I convert all this monthly data for the different age groups to annual data by taking the average?
Or do you think I should calculate it one by one in Excel? It's only 44 years.
Thank you very much! Much appreciated
I tried searching for similar questions on Reddit and Stack Overflow; they all used resample to get the result.
I have monthly data from Jan-1978 to November-2022. How can I convert all this monthly data for the different age groups to annual data by taking the average?
|
[
"This should give you a new pandas dataframe with the yearly mean. Note that the if statement has a subtract by 1 on the timestep to account for no December column for 2022.\nnew_df = pd.DataFrame() #create empty pandas dataframe\ntime_step = 12 #years\nfor i in np.arange(0, len(df.columns), time_step):\n new_header = df.columns[i][-2:]\n\n if new_header == str(22): #If the year is 2022\n sliced_for_mean = df.iloc[:, i:i+time_step-1] #take one off from the last step (no December column)\n new_df[new_header] = sliced_for_mean.mean(axis=1) #means for each row appended to new_df\n\n else: #else do this\n sliced_for_mean = df.iloc[:, i:i+time_step] #sliced df to calculate mean for year\n new_df[new_header] = sliced_for_mean.mean(axis=1) #means for each row appended to new_df\n\nprint(new_df)\n\n",
"melt month columns to a single column \"month\", and extract year value from month. Then aggregate by year:\ndf = pd.DataFrame(data=[\n [\"M\", \"21-30\", None, None, 15000, 21000, 22500, 21800, None, None, None],\n [\"M\", \"31-40\", 18000, 19200, 19000, None, None, 21800, 21500, 22300, 22000],\n [\"M\", \"41-50\", 22200, None, 15000, 21000, 22500, 21800, None, None, 22000],\n], columns=[\"gender\", \"age_group\", \"Nov-20\", \"Dec-20\", \"Mar-21\", \"Apr-21\", \"May-21\", \"Jun-21\", \"Jan-22\", \"Feb-22\", \"Mar-22\"])\n\ndf = df.fillna(0)\n\ndf = df.melt(id_vars=[\"gender\", \"age_group\"], value_vars=df.drop([\"gender\", \"age_group\"], axis=1).columns, var_name=\"month\", value_name=\"value\")\n\ndf[\"year\"] = df[\"month\"].str.split(\"-\").str[1]\n\ndf = df.groupby([\"gender\", \"age_group\", \"year\"]).agg(avg=(\"value\", np.mean)).reset_index()\n\n[Out]\n gender age_group year avg\n0 M 21-30 20 0.000000\n1 M 21-30 21 20075.000000\n2 M 21-30 22 0.000000\n3 M 31-40 20 18600.000000\n4 M 31-40 21 10200.000000\n5 M 31-40 22 21933.333333\n6 M 41-50 20 11100.000000\n7 M 41-50 21 20075.000000\n8 M 41-50 22 7333.333333\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074505669_dataframe_pandas_python.txt
|
Q:
import requests giving "Traceback (most recent call last):", Issue with python version or environment?
During my slow coding progress, I just found out that I probably have some issue with environment variables, and I am a little bit confused by it. I am using Python 3.6 in the PyCharm tool, and it seems that I have both versions of Python installed:
Controller$ python2 -V
Python 2.7.17
Controller$ python3 -V
Python 3.10.5
When I run my script via the console I get the expected results, but when I put it in /var/www/html (on localhost), I get only the error message saying "Traceback (most recent call last):". After that PHP spits out a bunch of error messages regarding imports:
Traceback (most recent call last):
  File "/home/doozie/Pycharm Projects/Python Learning/JA GAME/log2.py", line 5, in <module>
    import requests
  File "/usr/lib/python3/dist-packages/requests/__init__.py", line 43, in <module>
    import urllib3
  File "/usr/lib/python3/dist-packages/urllib3/__init__.py", line 8, in <module>
    from .connectionpool import (
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 29, in <module>
    from .connection import (
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 40, in <module>
    from .util.ssl_ import (
  File "/usr/lib/python3/dist-packages/urllib3/util/__init__.py", line 3, in <module>
    from .connection import is_connection_dropped
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 3, in <module>
    from .wait import wait_for_read
  File "/usr/lib/python3/dist-packages/urllib3/util/wait.py", line 1, in <module>
    from .selectors import (
  File "/usr/lib/python3/dist-packages/urllib3/util/selectors.py", line 14, in <module>
    from collections.abc import namedtuple, Mapping
ImportError: cannot import name 'namedtuple'
What I found out is that when I commented out the import requests line, the message is printed with no issue. So I dug a little bit and found that it is most probably due to the Python version, which might be installed in the wrong environment?
I tried to run some commands, pip upgrade, pip install requests, but even after that I got "Requirement already satisfied" or other error messages I don't understand:
Controller$ sudo pip3 install requests
[sudo] password for doozie:
Traceback (most recent call last):
File "/usr/bin/pip3", line 9, in <module>
from pip import main
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 22, in <module>
from pip._vendor.requests.packages.urllib3.exceptions import DependencyWarning
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 73, in <module>
vendored("pkg_resources")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 33, in vendored
__import__(modulename, globals(), locals(), level=0)
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 77, in <module>
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/packaging/requirements.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 672, in _load_unlocked
File "<frozen importlib._bootstrap>", line 632, in _load_backward_compatible
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/extern/__init__.py", line 43, in load_module
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/pyparsing.py", line 943, in <module>
AttributeError: module 'collections' has no attribute 'MutableMapping'
So I decided to write and ask here before I run some commands which might completely destroy the whole Python install, my progress, or even better, my whole system :D.
Can you please advise me how can I correct this ?
Many thanks
A:
Ok, I finally found the solution via this reddit link:
https://www.reddit.com/r/learnpython/comments/tzja94/comment/i4022j7/
As I had similar symptoms to the case described there, I went to the path from the error and made these changes in the "selectors.py" file.
# from collections.abc import namedtuple, Mapping
from collections import namedtuple
from collections.abc import Mapping
I simply commented out the line "from collections.abc import namedtuple, Mapping"
and added two new lines: "from collections import namedtuple" and "from collections.abc import Mapping".
After that, import requests started working without error.
Many thanks all.
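A note on this fix: editing files under /usr/lib/python3/dist-packages patches the OS-managed packages in place, and the next system update can silently undo it. The same error usually disappears by installing up-to-date packages into a virtual environment instead, for example:
python3 -m venv venv
./venv/bin/pip install --upgrade requests urllib3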
|
import requests giving "Traceback (most recent call last):", Issue with python version or environment?
|
During my slow coding progress, I just found out that I probably have some issue with environment variables, and I am a little bit confused by it. I am using Python 3.6 in the PyCharm tool, and it seems that I have both versions of Python installed:
Controller$ python2 -V
Python 2.7.17
Controller$ python3 -V
Python 3.10.5
When I run my script via the console I get the expected results, but when I put it in /var/www/html (on localhost), I get only the error message saying "Traceback (most recent call last):". After that PHP spits out a bunch of error messages regarding imports:
Traceback (most recent call last):
  File "/home/doozie/Pycharm Projects/Python Learning/JA GAME/log2.py", line 5, in <module>
    import requests
  File "/usr/lib/python3/dist-packages/requests/__init__.py", line 43, in <module>
    import urllib3
  File "/usr/lib/python3/dist-packages/urllib3/__init__.py", line 8, in <module>
    from .connectionpool import (
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 29, in <module>
    from .connection import (
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 40, in <module>
    from .util.ssl_ import (
  File "/usr/lib/python3/dist-packages/urllib3/util/__init__.py", line 3, in <module>
    from .connection import is_connection_dropped
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 3, in <module>
    from .wait import wait_for_read
  File "/usr/lib/python3/dist-packages/urllib3/util/wait.py", line 1, in <module>
    from .selectors import (
  File "/usr/lib/python3/dist-packages/urllib3/util/selectors.py", line 14, in <module>
    from collections.abc import namedtuple, Mapping
ImportError: cannot import name 'namedtuple'
What I found out is that when I commented out the import requests line, the message is printed with no issue. So I dug a little bit and found that it is most probably due to the Python version, which might be installed in the wrong environment?
I tried to run some commands, pip upgrade, pip install requests, but even after that I got "Requirement already satisfied" or other error messages I don't understand:
Controller$ sudo pip3 install requests
[sudo] password for doozie:
Traceback (most recent call last):
File "/usr/bin/pip3", line 9, in <module>
from pip import main
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 22, in <module>
from pip._vendor.requests.packages.urllib3.exceptions import DependencyWarning
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 73, in <module>
vendored("pkg_resources")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 33, in vendored
__import__(modulename, globals(), locals(), level=0)
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 77, in <module>
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/packaging/requirements.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 672, in _load_unlocked
File "<frozen importlib._bootstrap>", line 632, in _load_backward_compatible
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/extern/__init__.py", line 43, in load_module
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/pyparsing.py", line 943, in <module>
AttributeError: module 'collections' has no attribute 'MutableMapping'
So I decided to write and ask here before I run some commands which might completely destroy the whole Python install, my progress, or even better, my whole system :D.
Can you please advise me how can I correct this ?
Many thanks
|
[
"Ok, I finally found the solution via this reddit link:\nhttps://www.reddit.com/r/learnpython/comments/tzja94/comment/i4022j7/\nAs I had a similar symptoms as in the case guy had, I went through the path from the error and did these changes in the \"selectors.py\" file.\n# from collections.abc import namedtuple, Mapping\nfrom collections import namedtuple\nfrom collections.abc import Mapping\n\nI simply commented the line: \"from collections.abc import namedtuple, Mapping\"\nand added two new lines. \"from collections import namedtuple\" and \"from collections.abc import Mapping\".\nAfter that import requests start working without error.\nMany thanks all.\n"
] |
[
0
] |
[] |
[] |
[
"environment_variables",
"python",
"python_import",
"request"
] |
stackoverflow_0074500969_environment_variables_python_python_import_request.txt
|
Q:
How to filter keys and values from a python dictionary based on conditions?
I have a python dictionary with the below items:
> ...
> {'HostName': 'DEMOBDDBX00100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDDBX00200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDMBX00100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDMBX00200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDMBX10102.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMODACRT10100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMODACRT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMODACTS10101.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMODACTS10102.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKLIRT10100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKLIRT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKNORT10100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKNORT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKOSRT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOLABTS10300.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOLABTS10400.demo', 'BackupStatus': 'SUCCESS'}
> ...
I need to filter out the values in Hostname only if the BackupStatus == "FAILURE"
I need the output as:
{'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'}`
Can someone please help me with this?
A:
This does not look like a dictionary but a list of dictionaries.
If that's the case, it seems you want to obtain a sub-list with only those elements (dictionaries) that have BackupStatus equal to FAILURE. So you could do something like this:
recs = [
{'HostName': 'DEMOBDDBX00100.demo', 'BackupStatus': 'SUCCESS'},
{'HostName': 'DEMOBDDBX00200.demo', 'BackupStatus': 'SUCCESS'},
{'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'},
...
]
failed_recs = [v for v in recs if v['BackupStatus'] == 'FAILURE']
A:
Consider utilizing a list comprehension:
>>> data = [
... {'HostName': 'DEMOBDDBX00100.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOBDDBX00200.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'},
... {'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'},
... {'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'},
... {'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'},
... {'HostName': 'DEMOBDMBX00100.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOBDMBX00200.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'},
... {'HostName': 'DEMOBDMBX10102.demo', 'BackupStatus': 'FAILURE'},
... {'HostName': 'DEMODACRT10100.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMODACRT10200.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMODACTS10101.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMODACTS10102.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOKLIRT10100.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOKLIRT10200.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOKNORT10100.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOKNORT10200.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOKOSRT10200.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOLABTS10300.demo', 'BackupStatus': 'SUCCESS'},
... {'HostName': 'DEMOLABTS10400.demo', 'BackupStatus': 'SUCCESS'},
... ]
>>> [d for d in data if d['BackupStatus'] == 'FAILURE']
[{'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'},
{'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'},
{'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'},
{'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'},
{'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'},
{'HostName': 'DEMOBDMBX10102.demo', 'BackupStatus': 'FAILURE'}]
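And if you only need the host names themselves, extract the field in the same comprehension:
>>> [d['HostName'] for d in data if d['BackupStatus'] == 'FAILURE']
['DEMOBDDBX10101.demo', 'DEMOBDDBX10102.demo', 'DEMOBDDBX10201.demo', 'DEMOBDDBX10202.demo', 'DEMOBDMBX10101.demo', 'DEMOBDMBX10102.demo']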
|
How to filter keys and values from a python dictionary based on conditions?
|
I have a python dictionary with the below items:
> ...
> {'HostName': 'DEMOBDDBX00100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDDBX00200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDMBX00100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDMBX00200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMOBDMBX10102.demo', 'BackupStatus': 'FAILURE'}
> {'HostName': 'DEMODACRT10100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMODACRT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMODACTS10101.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMODACTS10102.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKLIRT10100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKLIRT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKNORT10100.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKNORT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOKOSRT10200.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOLABTS10300.demo', 'BackupStatus': 'SUCCESS'}
> {'HostName': 'DEMOLABTS10400.demo', 'BackupStatus': 'SUCCESS'}
> ...
I need to filter out the values in Hostname only if the BackupStatus == "FAILURE"
I need the output as:
{'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'}
{'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'}`
Can someone please help me with this?
|
[
"This does not look like a dictionary but a list of dictionaries.\nIf that's the case, it seems you want to obtain a sub-list with only those elements (dictionaries) that have BackupStatus equal to FAILURE. So you could do something like this:\nrecs = [\n {'HostName': 'DEMOBDDBX00100.demo', 'BackupStatus': 'SUCCESS'},\n {'HostName': 'DEMOBDDBX00200.demo', 'BackupStatus': 'SUCCESS'},\n {'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'},\n ...\n]\nfailed_recs = [v for v in recs if v['BackupStatus'] == 'FAILURE']\n\n",
"Consider utilizing a list comprehension:\n>>> data = [\n... {'HostName': 'DEMOBDDBX00100.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOBDDBX00200.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'},\n... {'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'},\n... {'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'},\n... {'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'},\n... {'HostName': 'DEMOBDMBX00100.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOBDMBX00200.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'},\n... {'HostName': 'DEMOBDMBX10102.demo', 'BackupStatus': 'FAILURE'},\n... {'HostName': 'DEMODACRT10100.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMODACRT10200.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMODACTS10101.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMODACTS10102.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOKLIRT10100.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOKLIRT10200.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOKNORT10100.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOKNORT10200.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOKOSRT10200.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOLABTS10300.demo', 'BackupStatus': 'SUCCESS'},\n... {'HostName': 'DEMOLABTS10400.demo', 'BackupStatus': 'SUCCESS'},\n... ]\n>>> [d for d in data if d['BackupStatus'] == 'FAILURE']\n[{'HostName': 'DEMOBDDBX10101.demo', 'BackupStatus': 'FAILURE'}, \n {'HostName': 'DEMOBDDBX10102.demo', 'BackupStatus': 'FAILURE'}, \n {'HostName': 'DEMOBDDBX10201.demo', 'BackupStatus': 'FAILURE'}, \n {'HostName': 'DEMOBDDBX10202.demo', 'BackupStatus': 'FAILURE'}, \n {'HostName': 'DEMOBDMBX10101.demo', 'BackupStatus': 'FAILURE'}, \n {'HostName': 'DEMOBDMBX10102.demo', 'BackupStatus': 'FAILURE'}]\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074507998_dictionary_python.txt
|
Q:
Install Google Maps 4.7 on Conda Environment
I would like to install the latest Google Maps package (version 4.7 as of today) in a conda environment.
The question is how to do it in the configuration file (the YAML file) that defines the environment, without using the command line.
I tried looking for the latest version of Google Maps on conda-forge, yet it is from the 2.x branch.
A:
By looking at the answer for Using Pip to install packages to Anaconda Environment you may do something like:
name: YourName
channels:
- conda-forge
dependencies:
- python=3.11
- numpy
- scipy
- pip
- pip:
- googlemaps==4.7
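To actually build the environment from this file (assuming it is saved as environment.yml), run:
conda env create -f environment.yml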
|
Install Google Maps 4.7 on Conda Environment
|
I would like to install the latest Google Maps package (version 4.7 as of today) in a conda environment.
The question is how to do it in the configuration file (the YAML file) that defines the environment, without using the command line.
I tried looking for the latest version of Google Maps on conda-forge, yet it is from the 2.x branch.
|
[
"By looking at the answer for Using Pip to install packages to Anaconda Environment you may do something like:\nname: YourName\nchannels:\n - conda-forge\ndependencies:\n - python=3.11\n - numpy\n - scipy\n - pip\n - pip:\n - googlemaps==4.7\n\n"
] |
[
1
] |
[] |
[] |
[
"conda",
"environment",
"google_maps",
"python"
] |
stackoverflow_0074507990_conda_environment_google_maps_python.txt
|
Q:
use min & max functions on objects list
So I have a list of objects that I made, and every object has the variable y.
I want to use the min function to find the lowest y among them without creating a new list of this variable or using loops.
from random import randint
class Dot():
def __init__(self, x=randint(-100, 100), y=randint(-100, 100)):
self.x, self.y = x, y
d1, d2, d3, d4, d5 = Dot(), Dot(), Dot(), Dot(), Dot()
dots = [d1, d2, d3, d4, d5]
So now I'm trying to find the lowest y of the dots with the min function without using a separate list of y values for that; I want to use only the list I already created there.
I already tried that:
min(dots.y)
but this attempt was naive and I knew it had a very low chance of working, so of course it didn't.
I'm using Python 3.10 by the way, so tell me if I need a newer version.
A:
Try to use key= parameter of min():
from random import randint
class Dot:
def __init__(self, x=None, y=None):
if x is None:
x = randint(-100, 100)
if y is None:
y = randint(-100, 100)
self.x, self.y = x, y
d1, d2, d3, d4, d5 = Dot(), Dot(), Dot(), Dot(), Dot()
dots = [d1, d2, d3, d4, d5]
print(min(dots, key=lambda d: d.y).y)
Prints (for example):
6
EDIT: The default arguments are only evaluated once, so I've put the random generation into the function body. Credits to @Thierry
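The same pattern works for max, and operator.attrgetter can replace the lambda:
from operator import attrgetter

lowest = min(dots, key=attrgetter('y'))
highest = max(dots, key=attrgetter('y'))
print(lowest.y, highest.y)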
|
use min & max functions on objects list
|
So I have a list of objects that I made, and every object has the variable y.
I want to use the min function to find the lowest y among them without creating a new list of this variable or using loops.
from random import randint
class Dot():
def __init__(self, x=randint(-100, 100), y=randint(-100, 100)):
self.x, self.y = x, y
d1, d2, d3, d4, d5 = Dot(), Dot(), Dot(), Dot(), Dot()
dots = [d1, d2, d3, d4, d5]
So now I'm trying to find the lowest y of the dots with the min function without using a separate list of y values for that; I want to use only the list I already created there.
I already tried that:
min(dots.y)
but this attempt was naive and I knew it had a very low chance of working, so of course it didn't.
I'm using Python 3.10 by the way, so tell me if I need a newer version.
|
[
"Try to use key= parameter of min():\nfrom random import randint\n\n\nclass Dot:\n def __init__(self, x=None, y=None):\n if x is None:\n x = randint(-100, 100)\n\n if y is None:\n y = randint(-100, 100)\n\n self.x, self.y = x, y\n\n\n\nd1, d2, d3, d4, d5 = Dot(), Dot(), Dot(), Dot(), Dot()\n\ndots = [d1, d2, d3, d4, d5]\n\nprint(min(dots, key=lambda d: d.y).y)\n\nPrints (for example):\n6\n\n\nEDIT: The default argument is only get evaluated once, so I've put the random generation into function body. Credits to @Thierry\n"
] |
[
1
] |
[] |
[] |
[
"list",
"python",
"python_3.x"
] |
stackoverflow_0074508103_list_python_python_3.x.txt
|
Q:
How to add a new line to the string that I input to Python input()?
I ask user for some input using
s = input('enter something: ')
Then I save it to a text file.
I want my user to be able to input new lines using '\n'.
For example, if the user inputs "hello\nbye" and I use file.write(s) to save the text, I want my text file to be:
hello
bye
But just typing in '\n' does not seem to work.
Specifying a replacement character then using str.replace is not an option for me.
I am using Python 3.11 but I can switch to any Python 3 version.
EDIT: I am interacting with the user through socket, and I cannot use sys.stdin.read() due to limitations with the console. I also cannot use the iter based solution as I only want the user to input once. Therefore, How to read multiple lines of raw input? does not resolve my issue.
A:
Since python's input can't turn \n into newline characters, you may need to do the conversion yourself:
s = input('enter something: ') #e.g "hello\nworld"
s = s.replace('\\n', '\n') # turns literal '\n' text into newline characters
...
file.write(s)
Now, s will have newlines instead of \n and will write and print as expected.
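A quick check without the prompt; the raw string below stands in for what input() returns when the user types hello\nbye:
s = r"hello\nbye"  # what input() actually returns for that typed text
print(s.replace('\\n', '\n'))
# hello
# bye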
|
How to add a new line to the string that I input to Python input()?
|
I ask user for some input using
s = input('enter something: ')
Then I save it to a text file.
I want my user to be able to input new lines using '\n'.
For example, if the user inputs "hello\nbye" and I use file.write(s) to save the text, I want my text file to be:
hello
bye
But just typing in '\n' does not seem to work.
Specifying a replacement character then using str.replace is not an option for me.
I am using Python 3.11 but I can switch to any Python 3 version.
EDIT: I am interacting with the user through socket, and I cannot use sys.stdin.read() due to limitations with the console. I also cannot use the iter based solution as I only want the user to input once. Therefore, How to read multiple lines of raw input? does not resolve my issue.
|
[
"Since python's input can't turn \\n into newline characters, you may need to do the conversion yourself:\ns = input('enter something: ') #e.g \"hello\\nworld\"\ns = s.replace('\\\\n', '\\n') # turns literal '\\n' text into newline characters\n...\nfile.write(s)\n\nNow, s will have newlines instead of \\n and will write and print as expected.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074508092_python.txt
|
Q:
create a tag at top of xml file using python
I have an xml file named 'test.xml' in the below format:
<final_output>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_output>
I want to add a final_file tag at the top of the test.xml file using Python.
The output should look like:
<final_file>
<final_output>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_output>
</final_file>
For this I have used an additional xml file named 'random.xml' containing <final_file> </final_file>.
random.xml looks like:
<final_file>
</final_file>
I am not getting the <final_output> </final_output> tag in the resultant output.xml.
I have tried the following code:
import xml.etree.ElementTree as ET
xmlfile1 = "random.xml"
xmlfile2 = "test.xml"
tree1 = ET.parse(xmlfile1)
tree2 = ET.parse(xmlfile2)
root1 = tree1.getroot()
root2 = tree2.getroot()
root1.extend(root2)
tree1.write('output.xml')
But I am getting:
<final_file>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_file>
I have tried with
random.xml:
<final_file>
<final_output>
</final_output>
</final_file>
But I am getting like this:
<final_file>
<final_output> </final_output>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_file>
Is there a way to do this without the additional random.xml file?
What I am expecting is that, for the existing .xml file, a <final_file> tag is inserted at the top and a </final_file> tag at the bottom of the file.
A:
If you want to use the xml package, then a simple solution will be
from xml.etree import ElementTree
xml_input = "input.xml"
xml_output = "output.xml"
tree_input = ElementTree.parse(xml_input)
root = tree_input.getroot()
# add new tag
new_root = ElementTree.Element("final_file")
new_root.insert(0, root)
# format XML
ElementTree.indent(new_root)
ElementTree.dump(new_root)  # dump() prints the tree itself and returns None
with open(xml_output, "wb") as xml_file:
xml_file.write(ElementTree.tostring(new_root))
in this case, the output will be
<final_file>
<final_output>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String">
<Value>HCL TECHNOLOGY LTD</Value>
</Employer>
<duration>
<Value>JAN 2018 to till date</Value>
</duration>
</Template>
</output>
</career_profile>
</final_output>
</final_file>
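A minimal alternative sketch that skips the intermediate dump and writes the wrapped tree straight to disk with an XML declaration (ElementTree.indent() requires Python 3.9+):
from xml.etree import ElementTree

root = ElementTree.parse("test.xml").getroot()
new_root = ElementTree.Element("final_file")
new_root.append(root)  # wrap the old root in the new one
ElementTree.indent(new_root)  # Python 3.9+
ElementTree.ElementTree(new_root).write(
    "output.xml", encoding="utf-8", xml_declaration=True
)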
|
create a tag at top of xml file using python
|
I have xml file named 'test.xml' like below format
<final_output>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_output>
I want to add a final_file tag at the top of the test.xml file using python.
The output should look like:
<final_file>
<final_output>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_output>
</final_file>
For this I have used an additional xml file named 'random.xml' containing <final_file> </final_file>.
random.xml looks like:
<final_file>
</final_file>
I am not getting the <final_output> </final_output> tag in the resultant output.xml.
I have tried the following code:
import xml.etree.ElementTree as ET
xmlfile1 = "random.xml"
xmlfile2 = "test.xml"
tree1 = ET.parse(xmlfile1)
tree2 = ET.parse(xmlfile2)
root1 = tree1.getroot()
root2 = tree2.getroot()
root1.extend(root2)
tree1.write('output.xml')
But I am getting like:
<final_file>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_file>
I have tried with
random.xml:
<final_file>
<final_output>
</final_output>
</final_file>
But I am getting like this:
<final_file>
<final_output> </final_output>
<career_profile>
<output>
<Template OriginalSentence="1" SentenceID="1" RecordID="0">
<Employer Type="String"><Value>HCL TECHNOLOGY LTD</Value></Employer>
<duration><Value>JAN 2018 to till date</Value></duration>
</Template>
</output>
</career_profile>
</final_file>
Is there a way to do this without the additional random.xml file?
What I am expecting is that, for the existing .xml file, a <final_file> tag is inserted at the top and a </final_file> tag at the bottom of the file.
|
[
"If you want to use xml package then the simple solution will be\nfrom xml.etree import ElementTree\n\nxml_input = \"input.xml\"\nxml_output = \"output.xml\"\n\ntree_input = ElementTree.parse(xml_input)\nroot = tree_input.getroot()\n\n# add new tag\nnew_root = ElementTree.Element(\"final_file\")\nnew_root.insert(0, root)\n\n# format XML\nElementTree.indent(new_root)\n\nprint(ElementTree.dump(new_root))\n\nwith open(xml_output, \"wb\") as xml_file:\n xml_file.write(ElementTree.tostring(new_root))\n\nin this case, the output will be\n<final_file>\n <final_output>\n <career_profile>\n <output>\n <Template OriginalSentence=\"1\" SentenceID=\"1\" RecordID=\"0\">\n <Employer Type=\"String\">\n <Value>HCL TECHNOLOGY LTD</Value>\n </Employer>\n <duration>\n <Value>JAN 2018 to till date</Value>\n </duration>\n </Template>\n </output>\n </career_profile>\n </final_output>\n</final_file>\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x",
"xml"
] |
stackoverflow_0074507942_python_python_3.x_xml.txt
|
Q:
How to properly import libraries that I downloaded via pip or conda?
I am trying to use "matplotlib" for a project and when importing it I get: "matplotlib.pyplot not resolved from source", then I tried to import pandas and I got something similar, how can I fix this?
I am using WSL, and I am in a virtual environment that I created in conda.
I want to use some libraries, but they are not imported; even though I installed them with pip and they appear to be there, they are not detected.
A:
Maybe you are not connected to the correct conda env in Visual Studio Code.
You can check that by pressing "CTRL SHIFT P", then clicking Select Interpreter and selecting your created environment.
To check if matplotlib was installed in the correct environment you could try the following:
open anaconda shell
type "conda env list" ( to see all the environment created)
type "conda activate <name_of_your_environment>"
type "conda list" (to see all the packages installed in this environment"
check if matpotlib appears in the outputed list
More:
To open the anaconda shell, type "anaconda prompt" in the search bar of your OS.
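To confirm which interpreter is actually running your code, a quick sanity-check sketch:
# Prints the full path of the interpreter executing this script.
import sys
print(sys.executable)
If the printed path does not point into your conda environment, select the right interpreter as described above; installing with python -m pip install matplotlib then guarantees the package lands in that exact interpreter.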
|
How to properly import libraries that I downloaded via pip or conda?
|
I am trying to use "matplotlib" for a project and when importing it I get: "matplotlib.pyplot not resolved from source", then I tried to import pandas and I got something similar, how can I fix this?
I am using WSL, and I am in a virtual environment that I created in conda.
I want to use some libraries, but they are not imported; even though I installed them with pip and they appear to be there, they are not detected.
|
[
"Maybe you are not connected to the correct conda env in visual studio code.\nYou could check that by pressing \" CTRL SHIFT P\" Then press on Select Interpreter and select your created environment.\nTo check if matplotlib was installed in the correct environment you could try the following:\n\nopen anaconda shell\ntype \"conda env list\" ( to see all the environment created)\ntype \"conda activate <name_of_your_environment>\"\ntype \"conda list\" (to see all the packages installed in this environment\"\ncheck if matpotlib appears in the outputed list\n\nMore:\nto open anaconda shell type \"anaconda prompt\" in the search of your os.\n"
] |
[
1
] |
[] |
[] |
[
"conda",
"matplotlib",
"pandas",
"python",
"windows_subsystem_for_linux"
] |
stackoverflow_0074501112_conda_matplotlib_pandas_python_windows_subsystem_for_linux.txt
|
Q:
Append a nested dictionary to a dictionary in python
I'm trying to append a nested dictionary to a dictionary. I have searched the internet and couldn't find an answer.
I tried
Colors = {}
a = {"1:1":{255,1,2}}
b = {"2:1":{1,255,2}}
Colors.update(a)
Colors.update(b)
print(Colors)
It prints
{'1:1': {1, 2, 255}, '2:1': {1, 2, 255}}
Instead of
{'1:1': {255,1,2}, '2:1': {1,255,2}}
A:
The reason the values don't keep their order is because you're using sets and not lists. Unlike lists, sets are unordered (you can read more here).
To fix your issue, you can use lists instead (note the {} turned into []):
Colors = {}
a = {"1:1":[255,1,2]}
b = {"2:1":[1,255,2]}
Colors.update(a)
Colors.update(b)
print(Colors)
Which prints:
{'1:1': [255, 1, 2], '2:1': [1, 255, 2]}
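If the colour triples should also stay immutable, tuples preserve order as well; a minimal variant of the same fix:
# Same idea with tuples instead of lists: insertion order is preserved,
# and the values cannot be mutated afterwards.
Colors = {}
Colors.update({"1:1": (255, 1, 2)})
Colors.update({"2:1": (1, 255, 2)})
print(Colors)  # {'1:1': (255, 1, 2), '2:1': (1, 255, 2)}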
|
Append a nested dictionary to a dictionary in python
|
I'm trying to append a nested dictionary to a dictionary. I have searched the internet and couldn't find an answer.
I tried
Colors = {}
a = {"1:1":{255,1,2}}
b = {"2:1":{1,255,2}}
Colors.update(a)
Colors.update(b)
print(Colors)
It prints
{'1:1': {1, 2, 255}, '2:1': {1, 2, 255}}
Instead of
{'1:1': {255,1,2}, '2:1': {1,255,2}}
|
[
"The reason the values don't keep their order is because you're using sets and not lists. Unlike lists, sets are unordered (you can read more here).\nTo fix your issue, you can use lists instead (note the {} turned into []:\nColors = {}\n\na = {\"1:1\":[255,1,2]}\nb = {\"2:1\":[1,255,2]}\nColors.update(a)\nColors.update(b)\n\nprint(Colors)\n\nWhich prints:\n{'1:1': [255, 1, 2], '2:1': [1, 255, 2]}\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074508125_dictionary_python.txt
|
Q:
Enabling CAFFE2 while building pytorch from source on Windows command prompt
So, I was training a model using Yolov7 on the Windows platform and ran:
C:\Users\LENOVO>python train.py --weights yolov7.pt --data "data/custom.yaml" --workers 4 --batch-size 4 --img 416 --cfg cfg/training/yolov7.yaml --name yolov7 --hyp data/hyp.scratch.p5.yaml
After running the above command, the stack trace below showed up in my command prompt on Windows. My question is:
How do I follow the suggestion in the error below? How do I enable BUILD_CAFFE2=1 while building pytorch on Windows? Not using Conda, of course; on my Windows command prompt only.
I installed pytorch using the following source:
https://github.com/pytorch/pytorch#from-source
I installed caffe2 using commands from this source:
https://caffe2.ai/docs/getting-started.html?platform=windows&configuration=compile
But the following error still shows while training my model.
I just need to know the command for enabling BUILD_CAFFE2=1 on the Windows command prompt.
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\__init__.py:5: UserWarning: Caffe2 support is not fully enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn("Caffe2 support is not fully enabled in this PyTorch build. "
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py:17: UserWarning: Caffe2 support is not enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn('Caffe2 support is not enabled in this PyTorch build. '
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\python\__init__.py:9: UserWarning: Caffe2 support is not enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn('Caffe2 support is not enabled in this PyTorch build. '
Traceback (most recent call last):
File "C:\Users\LENOVO\train.py", line 8, in <module>
from caffe2.python import core, scope
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\python\__init__.py", line 7, in <module>
from caffe2.proto import caffe2_pb2
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py", line 15, in <module>
from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
ImportError: cannot import name 'metanet_pb2' from partially initialized module 'caffe2.proto' (most likely due to a circular import) (C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py)
A:
I have solved the issue by setting BUILD_CAFFE2=1 on the command prompt before installing pytorch, with the following command.
set BUILD_CAFFE2=1
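For completeness, the full sequence on the Windows command prompt would look roughly like this (the repo location is an assumption; set only affects the current prompt session):
:: inside the cloned pytorch source directory (hypothetical location)
set BUILD_CAFFE2=1
python setup.py install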
|
Enabling CAFFE2 while building pytorch from source on Windows command prompt
|
So, I was training a model using Yolov7 on the Windows platform and ran:
C:\Users\LENOVO>python train.py --weights yolov7.pt --data "data/custom.yaml" --workers 4 --batch-size 4 --img 416 --cfg cfg/training/yolov7.yaml --name yolov7 --hyp data/hyp.scratch.p5.yaml
After running the above command, the stack trace below showed up in my command prompt on Windows. My question is:
How do I follow the suggestion in the error below? How do I enable BUILD_CAFFE2=1 while building pytorch on Windows? Not using Conda, of course; on my Windows command prompt only.
I installed pytorch using the following source:
https://github.com/pytorch/pytorch#from-source
I installed caffe2 using commands from this source:
https://caffe2.ai/docs/getting-started.html?platform=windows&configuration=compile
But the following error still shows while training my model.
I just need to know the command for enabling BUILD_CAFFE2=1 on the Windows command prompt.
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\__init__.py:5: UserWarning: Caffe2 support is not fully enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn("Caffe2 support is not fully enabled in this PyTorch build. "
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py:17: UserWarning: Caffe2 support is not enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn('Caffe2 support is not enabled in this PyTorch build. '
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\python\__init__.py:9: UserWarning: Caffe2 support is not enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn('Caffe2 support is not enabled in this PyTorch build. '
Traceback (most recent call last):
File "C:\Users\LENOVO\train.py", line 8, in <module>
from caffe2.python import core, scope
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\python\__init__.py", line 7, in <module>
from caffe2.proto import caffe2_pb2
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py", line 15, in <module>
from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
ImportError: cannot import name 'metanet_pb2' from partially initialized module 'caffe2.proto' (most likely due to a circular import) (C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py)
|
[
"I have solved the issue but setting BUILD_CAFFE2=1 on the command prompt before installing pytorch, with the following code.\nset BUILD_CAFFE2=1\n\n"
] |
[
0
] |
[] |
[] |
[
"caffe2",
"python",
"pytorch"
] |
stackoverflow_0074492524_caffe2_python_pytorch.txt
|
Q:
Comparing contents in the given tuples
I have two Python tuples and I want to compare the contents of both of them and get only what is only in the 'deactivated' tuple, e.g.:
deactivated = ((34, 'abcd'), (250, 'def'), (350, 'xyz'))
schedules = ((34, 'abcd'), (250, 'def'))
to_deactivate = ()
Here, I want to push (350, 'xyz'), which is not in schedules, into another variable to_deactivate.
I've looked into some of the solutions online but most of them are just comparing whether the tuples are same or not. Please help me out on this.
A:
Are you sure those variables should be tuples?
Because tuples are immutable. In their essence, you are not supposed to add or remove things from them. It looks like those variables are intended to be collections to which you add or remove things. Which is not possible with tuples.
For example, sets, since you don't seem to care about order, but do seem to be interested in what is or is not in them. That is what sets are for. Very quick to check whether something is in it or not (with the same easy in operator, but in is O(n) for lists or tuples, while it is O(1) on average with sets).
So, that is just a remark, but I feel that those variables should have been sets.
In which case, answer to your question would simply have been
to_deactivate = deactivated - schedules
Even if you insist on keeping them as tuples, one compact answer is still to use sets
to_deactivate=tuple(set(deactivated)-set(schedules))
Note that this is faster than the compound tuple solution. (1.38 μs vs 1.59 μs on my computer. And the timing difference grows when the size of the input tuples grows)
Timings
Just to make my point about timing clear, here is how much time it takes to compute it with the "compound tuple" strategy (Andrej's answer).
Of course, curve fitting is not a complexity evaluation method, but, well, since instinct says that we can expect either O(n), O(n log(n)) or O(n²) complexity, we can conclude that we are clearly on the O(n²) side here.
Whereas with conversion to sets:
Which looks as straight a line as it can get.
So, clearly, on the O(n) side.
And, of course, put together, we have a match clearly in favor of sets.
And to reply to SUTerliakov's comment: not just for big values. Even at the very start, sets win. Just a zoom on previous graph.
And that is comparing sets in their most unfavorable usage: when we keep everything in tuples and convert to a set only for the operation. It would be even faster if the variables were sets from the beginning. And in the question, there is really no indication that there is a reason for them not to be sets.
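To reproduce the comparison yourself, here is a rough benchmark sketch (the input shapes and repetition count are arbitrary assumptions; absolute numbers will vary by machine):
import timeit

# Two overlapping tuples of (int, str) pairs, similar in shape to the question.
deactivated = tuple((i, str(i)) for i in range(1000))
schedules = deactivated[:500]

t_tuple = timeit.timeit(
    "tuple(t for t in deactivated if t not in schedules)",
    globals=globals(), number=100)
t_set = timeit.timeit(
    "tuple(set(deactivated) - set(schedules))",
    globals=globals(), number=100)
print(f"tuple scan: {t_tuple:.3f}s, set difference: {t_set:.3f}s")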
A:
Try:
deactivated = ((34, "abcd"), (250, "def"), (350, "xyz"))
schedules = ((34, "abcd"), (250, "def"))
to_deactivate = tuple(tpl for tpl in deactivated if tpl not in schedules)
print(to_deactivate)
Prints:
((350, 'xyz'),)
A:
You could use iteration to compare every element in one tuple to every element of the other tuple and collect the elements that exist only in the first tuple:
deactivated = ((34, 'abcd'), (250, 'def'), (350, 'xyz'))
schedules = ((34, 'abcd'), (250, 'def'))
to_deactivate = ()
for i in range(len(deactivated)):
if deactivated[i] not in schedules:
        to_deactivate += (deactivated[i],)  # wrap in a 1-tuple to avoid flattening the pair
print(to_deactivate)
This should work fine.
Edit: Iterating only once.
|
Comparing contents in the given tuples
|
I have two Python tuples and I want to compare the contents of both of them and get only what is only in the 'deactivated' tuple, e.g.:
deactivated = ((34, 'abcd'), (250, 'def'), (350, 'xyz'))
schedules = ((34, 'abcd'), (250, 'def'))
to_deactivate = ()
Here, I want to push (350, 'xyz'), which is not in schedules, into another variable to_deactivate.
I've looked into some of the solutions online but most of them are just comparing whether the tuples are same or not. Please help me out on this.
|
[
"Are you sure you those variables should be tuples?\nBecause tuples are immutable. In their essence, you are not supposed to add or remove things from them. It looks like those variables are intended to be collections to which you add or remove things. Which is not possible with tuples.\nFor example, sets, since you don't seem to bother about order, but do seem to be interested about what is or is not in them. That is what sets are for. Very quick to check whether something is in it or not (with the same easy in operator. But in is O(n) for lists or tuples, and is O(log(n)), and in practice even O(1) with sets).\nSo, that is just a remark, but I feel that those variables should have been sets.\nIn which case, answer to your question would simply have been\nto_deactivate = deactivated - schedules\n\nEven if you insist on keeping them as tuples, one compact answer is still to use sets\nto_deactivate=tuple(set(deactivated)-set(schedules))\n\nNote that this is faster than the compound tuple solution. (1.38 μs vs 1.59 μs on my computer. And the timing difference grows when the size of the input tuples grows)\nTimings\nJust to make my point about timing clear, here is how much times it takes to compute is with the \"compound tuple\" strategy (Andrej's answer)\n\nOf course, curve fitting is not a complexity method evaluation, but, well, since instinct says that we can expect either O(n), O(nlog(n)) or O(n²) complexity, we can conclude that we are clearly on the O(n²) side here.\nWhere as with conversion to sets :\n\nWhich looks as a straight line as it can get.\nSo, clearly, on the O(n) side.\nAnd, of course, put together, we have a match clearly in favor of sets.\n\nAnd to reply to SUTerliakov's comment: not just for big values. Even at the very start, sets win. Just a zoom on previous graph.\n\nAnd that is comparing set in their most defavorable usage: when we keep every thing in tuples, and then convert to set only for the operation. It would be even faster if variables were sets from the beginning. And in the question, there is really no indication that there is a reason from them not to be sets.\n",
"Try:\ndeactivated = ((34, \"abcd\"), (250, \"def\"), (350, \"xyz\"))\nschedules = ((34, \"abcd\"), (250, \"def\"))\n\nto_deactivate = tuple(tpl for tpl in deactivated if tpl not in schedules)\nprint(to_deactivate)\n\nPrints:\n((350, 'xyz'),)\n\n",
"You could use iteration to compare every element in one tuple to every element of the other tuple and get the the value of index for which there exist value only in the first tuple\ndeactivated = ((34, 'abcd'), (250, 'def'), (350, 'xyz'))\nschedules = ((34, 'abcd'), (250, 'def'))\nto_deactivate = ()\nfor i in range(len(deactivated)):\n if deactivated[i] not in schedules:\n to_deactivate += deactivated[i]\nprint(to_deactivate)\n\nThis should work fine.\nEdit: Iterating only once.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python",
"tuples"
] |
stackoverflow_0074507765_python_tuples.txt
|
Q:
Factory_boy not creating different User objects Django
I am new to Factory_boy with Django. After spending some time I understood how to create a factory for the User model.
I am using the default user model and following is my factory. I am using Faker for randomness
import factory
from . import models
from django.contrib.auth.models import User
from faker import Faker
from django.contrib.auth.hashers import make_password
fake = Faker()
class UserFactory(factory.DjangoModelFactory):
class Meta:
model = User
django_get_or_create = ('email',)
first_name = fake.first_name()
last_name = fake.last_name()
email = first_name+"."+last_name+"@gmail.com"
password = make_password("ojasojas")
username = first_name+"_"+last_name
Now in the django shell
I use UserFactory.create() to create a user. This works fine. Is it possible to loop through the create statement and create 5 different users? When I do that, I am getting only one user (created once and 'get' 4 times) as follows. What am I missing?
A:
You are defining class attributes for your factory, which get evaluated only when the class is defined. email = first_name+"."+last_name+"@gmail.com" will be evaluated once, not each time you call UserFactory.create(), hence the unique constraint errors. The usual solution to this is to instead define instance attributes via __init__(), but FactoryBoy has their own solution to this: lazy attributes.
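A minimal sketch of what that looks like, assuming a configured Django project as in the question (factory.Faker and LazyAttribute defer evaluation until each instance is built):
import factory
from django.contrib.auth.models import User

class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = User

    # Evaluated once per built instance, not once at class-definition time.
    first_name = factory.Faker("first_name")
    last_name = factory.Faker("last_name")
    email = factory.LazyAttribute(lambda o: f"{o.first_name}.{o.last_name}@gmail.com")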
A:
This may not be the best solution, but it's a workaround I've been using lately. I set the following global:
NAMES = [factory.Faker('name')._get_faker().name() for i in range(5)]
In this case, the above will generate a list of 5 fake names.
NOTE: this does not guarantee a unique name (as FuzzyChoice will just select one from the list).
and within your factory:
name = factory.LazyAttribute(lambda obj: factory.fuzzy.FuzzyChoice(NAMES).fuzz())
A:
FactoryBoy offers LazyFunction and LazyAttribute classes to achieve this. The difference between them is that you can access other model-factory fields in LazyAttribute.
Here is an example:
class UserFactory(factory.django.DjangoModelFactory):
class Meta:
model = models.User
django_get_or_create = ('email',)
@staticmethod
def gen_email(x):
return x.first_name + "." + x.last_name + "@gmail.com"
first_name = factory.LazyFunction(fake.first_name)
last_name = factory.LazyFunction(fake.last_name)
email = factory.LazyAttribute(gen_email)
password = make_password("ojasojas")
username = factory.LazyAttribute(lambda x: x.first_name + "_" + x.last_name)
You can use lambda expressions as I've used here for demonstration in making the username field 'lazy-evaluated', or create functions (static method) and pass them to LazyFunction/LazyAttribute.
You can read more about these classes in official documentation of Factory Boy:
LazyFunction
LazyAttribute
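As a usage note, factory_boy can also build several distinct users in one call via create_batch, which replaces the manual loop:
# Five users, each with freshly evaluated lazy attributes.
users = UserFactory.create_batch(5)
for user in users:
    print(user.username, user.email)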
|
Factory_boy not creating different User objects Django
|
I am new to Factory_boy with Django. After spending some time I understood how to create a factory for the User model.
I am using the default user model and following is my factory. I am using Faker for randomness
import factory
from . import models
from django.contrib.auth.models import User
from faker import Faker
from django.contrib.auth.hashers import make_password
fake = Faker()
class UserFactory(factory.DjangoModelFactory):
class Meta:
model = User
django_get_or_create = ('email',)
first_name = fake.first_name()
last_name = fake.last_name()
email = first_name+"."+last_name+"@gmail.com"
password = make_password("ojasojas")
username = first_name+"_"+last_name
Now in the django shell
I use UserFactory.create() to create a user. This works fine. Is it possible to loop through the create statement and create 5 different users? When I do that, I am getting only one user (created once and 'get' 4 times) as follows. What am I missing?
|
[
"You are defining class attributes for your factory, which get evaluated only when the class is defined. email = first_name+\".\"+last_name+\"@gmail.com\" will be evaluated once, not each time you call UserFactory.create(), hence the unique constraint errors. The usual solution to this is to instead define instance attributes via __init__(), but FactoryBoy has their own solution to this: lazy attributes.\n",
"This may not be the best solution, but its a work around I've been using lately. I set the following Global:\nNAMES = [factory.Faker('name')._get_faker().name() for i in range(5)]\n\nIn this case, the above will generate a list of 5 fake names.\nNOTE: this does not guarantee a unique name (as FuzzyChoice will just select one from the list).\nand within your factory:\nname = factory.LazyAttribute(lambda obj: factory.fuzzy.FuzzyChoice(NAMES).fuzz()) \n\n",
"FactoryBoy offers LazyFunction and LazyAttribute classes to achieve this. Difference between them is that you can access other model-factory fields in LazyAttribute.\nHere is an example:\nclass UserFactory(factory.django.DjangoModelFactory):\n class Meta:\n model = models.User\n django_get_or_create = ('email',)\n \n @staticmethod\n def gen_email(x):\n return x.first_name + \".\" + x.last_name + \"@gmail.com\"\n \n first_name = factory.LazyFunction(fake.first_name)\n last_name = factory.LazyFunction(fake.last_name)\n email = factory.LazyAttribute(gen_email)\n password = make_password(\"ojasojas\")\n username = factory.LazyAttribute(lambda x: x.first_name + \"_\" + x.last_name)\n\nYou can use lambda expressions as I've used here for demonstration in making the username field 'lazy-evaluated', or create functions (static method) and pass them to LazyFunction/LazyAttribute.\n\nYou can read more about these classes in official documentation of Factory Boy:\n\nLazyFunction\nLazyAttribute\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"django",
"factory_boy",
"python"
] |
stackoverflow_0042709008_django_factory_boy_python.txt
|
Q:
How can I scroll a web page using selenium webdriver in python?
I am currently using selenium webdriver to parse through a Facebook user's friends page and extract all IDs from the AJAX script. But I need to scroll down to get all the friends. How can I scroll down in Selenium? I am using Python.
A:
You can use
driver.execute_script("window.scrollTo(0, Y)")
where Y is the height (on a fullhd monitor it's 1080). (Thanks to @lukeis)
You can also use
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
to scroll to the bottom of the page.
If you want to scroll a page with infinite loading, like social network ones, Facebook etc. (thanks to @Cuong Tran)
SCROLL_PAUSE_TIME = 0.5
# Get scroll height
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
# Scroll down to bottom
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# Wait to load page
time.sleep(SCROLL_PAUSE_TIME)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
break
last_height = new_height
Another method (thanks to Juanse) is to select an object and
label.sendKeys(Keys.PAGE_DOWN);
A:
If you want to scroll down to bottom of infinite page (like linkedin.com), you can use this code:
SCROLL_PAUSE_TIME = 0.5
# Get scroll height
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
# Scroll down to bottom
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# Wait to load page
time.sleep(SCROLL_PAUSE_TIME)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
break
last_height = new_height
Reference: https://stackoverflow.com/a/28928684/1316860
A:
You can use send_keys to simulate an END (or PAGE_DOWN) key press (which normally scroll the page):
from selenium.webdriver.common.keys import Keys
html = driver.find_element_by_tag_name('html')
html.send_keys(Keys.END)
A:
same method as shown here:
in python you can just use
driver.execute_script("window.scrollTo(0, Y)")
(Y is the vertical position you want to scroll to)
A:
element=find_element_by_xpath("xpath of the li you are trying to access")
element.location_once_scrolled_into_view
this helped when I was trying to access a 'li' that was not visible.
A:
For my purpose, I wanted to scroll down more, keeping the window's position in mind. My solution was similar and used window.scrollY
driver.execute_script("window.scrollTo(0, window.scrollY + 200)")
which will go to the current y scroll position + 200
A:
This is how you scroll down the webpage:
driver.execute_script("window.scrollTo(0, 1000);")
A:
None of these answers worked for me, at least not for scrolling down a Facebook search result page, but after a lot of testing I found this solution:
while driver.find_element_by_tag_name('div'):
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
Divs=driver.find_element_by_tag_name('div').text
if 'End of Results' in Divs:
        print('end')
break
else:
continue
A:
The easiest way I found to solve that problem was to select a label and then send:
label.sendKeys(Keys.PAGE_DOWN);
Hope it works!
A:
When working with YouTube, the floating elements give the value "0" as the scroll height,
so rather than using "return document.body.scrollHeight", try using "return document.documentElement.scrollHeight".
Adjust the scroll pause time to match your internet speed,
otherwise it will run only once and then break.
SCROLL_PAUSE_TIME = 1
# Get scroll height
"""last_height = driver.execute_script("return document.body.scrollHeight")
this dowsnt work due to floating web elements on youtube
"""
last_height = driver.execute_script("return document.documentElement.scrollHeight")
while True:
# Scroll down to bottom
driver.execute_script("window.scrollTo(0,document.documentElement.scrollHeight);")
# Wait to load page
time.sleep(SCROLL_PAUSE_TIME)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.documentElement.scrollHeight")
if new_height == last_height:
print("break")
break
last_height = new_height
A:
Scroll-loading pages. Examples: Medium, Quora, etc.
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
driver.execute_script("window.scrollTo(0, document.body.scrollHeight-1000);")
# Wait to load the page.
driver.implicitly_wait(30) # seconds
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
break
last_height = new_height
# sleep for 30s
driver.implicitly_wait(30) # seconds
driver.quit()
A:
Here's an example selenium code snippet that you could use for this type of purpose. It goes to the url for youtube search results on 'Enumerate python tutorial' and scrolls down until it finds the video with the title: 'Enumerate python tutorial(2020).'
driver.get('https://www.youtube.com/results?search_query=enumerate+python')
target = driver.find_element_by_link_text('Enumerate python tutorial(2020).')
target.location_once_scrolled_into_view
A:
This code scrolls to the bottom but doesn't require that you wait each time. It'll continually scroll, and then stop at the bottom (or timeout)
from selenium import webdriver
import time
driver = webdriver.Chrome(executable_path='chromedriver.exe')
driver.get('https://example.com')
pre_scroll_height = driver.execute_script('return document.body.scrollHeight;')
run_time, max_run_time = 0, 1
while True:
iteration_start = time.time()
# Scroll webpage, the 100 allows for a more 'aggressive' scroll
driver.execute_script('window.scrollTo(0, 100*document.body.scrollHeight);')
post_scroll_height = driver.execute_script('return document.body.scrollHeight;')
scrolled = post_scroll_height != pre_scroll_height
timed_out = run_time >= max_run_time
if scrolled:
run_time = 0
pre_scroll_height = post_scroll_height
elif not scrolled and not timed_out:
run_time += time.time() - iteration_start
elif not scrolled and timed_out:
break
# closing the driver is optional
driver.close()
This is much faster than waiting 0.5-3 seconds each time for a response, when that response could take 0.1 seconds
A:
I was looking for a way of scrolling through a dynamic webpage, and automatically stopping once the end of the page is reached, and found this thread.
The post by @Cuong Tran, with one main modification, was the answer that I was looking for. I thought that others might find the modification helpful (it has a pronounced effect on how the code works), hence this post.
The modification is to move the statement that captures the last page height inside the loop (so that each check is comparing to the previous page height).
So, the code below:
Continuously scrolls down a dynamic webpage (.scrollTo()), only stopping when, for one iteration, the page height stays the same.
(There is another modification, where the break statement is inside another condition (in case the page 'sticks') which can be removed).
SCROLL_PAUSE_TIME = 0.5
while True:
# Get scroll height
### This is the difference. Moving this *inside* the loop
### means that it checks if scrollTo is still scrolling
last_height = driver.execute_script("return document.body.scrollHeight")
# Scroll down to bottom
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# Wait to load page
time.sleep(SCROLL_PAUSE_TIME)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
# try again (can be removed)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# Wait to load page
time.sleep(SCROLL_PAUSE_TIME)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.body.scrollHeight")
# check if the page height has remained the same
if new_height == last_height:
# if so, you are done
break
# if not, move on to the next loop
else:
last_height = new_height
continue
A:
You can use send_keys to simulate a PAGE_DOWN key press (which normally scroll the page):
from selenium.webdriver.common.keys import Keys
html = driver.find_element_by_tag_name('html')
html.send_keys(Keys.PAGE_DOWN)
A:
If you want to scroll within a particular view/frame (WebElement), all you need to do is replace "body" with the particular element that you intend to scroll within. I get that element via "getElementById" in the example below:
self.driver.execute_script('window.scrollTo(0, document.getElementById("page-manager").scrollHeight);')
this is the case on YouTube, for example...
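The same idea works for any scrollable inner element by passing the element into execute_script; a minimal sketch (the CSS selector is a placeholder):
from selenium.webdriver.common.by import By

# "#scroll-panel" is a hypothetical selector for the scrollable container.
panel = driver.find_element(By.CSS_SELECTOR, "#scroll-panel")
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight;", panel)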
A:
The ScrollTo() function doesn't work anymore. This is what I used and it worked fine.
driver.execute_script("document.getElementById('mydiv').scrollIntoView();")
A:
According to the docs,
the class ActionChains does the job:
from selenium import webdriver
from selenium.webdriver import ActionChains
driver = webdriver.Firefox()
action_chains = ActionChains(driver)
action_chains.scroll(0, 0, 0, 500).perform()  # x, y, delta_x, delta_y: scroll down 500 px
# newer Selenium releases prefer action_chains.scroll_by_amount(0, 500).perform()
A:
insert this line driver.execute_script("window.scrollBy(0,925)", "")
A:
The loop using the "send keys" method of scrolling the page:
pre_scroll_height = driver.execute_script('return document.body.scrollHeight;')
while True:
driver.find_element_by_tag_name('body').send_keys(Keys.END)
time.sleep(5)
post_scroll_height = driver.execute_script('return document.body.scrollHeight;')
print(pre_scroll_height, post_scroll_height)
if pre_scroll_height == post_scroll_height:
break
pre_scroll_height=post_scroll_height
A:
Here is a method I wrote to slowly scroll down to a target element.
You can pass either the Y-th position of the element or the CSS selector to it.
It scrolls exactly like we do via mouse wheel.
Once this method is called, you can call it again with the same driver object but with a new target element; it will then scroll up/down to wherever that element exists.
import random
import time

def slow_scroll_to_element(self, driver, element_selector=None, target_yth_location=None):
current_scroll_position = int(driver.execute_script("return window.scrollY"))
if element_selector:
target_yth_location = int(driver.execute_script("return document.querySelector('{}').getBoundingClientRect()['top'] + window.scrollY".format(element_selector)))
scrollSpeed = 100 if target_yth_location-current_scroll_position > 0 else -100
def chunks(a, n):
k, m = divmod(len(a), n)
return (a[i*k+min(i, m):(i+1)*k+min(i+1, m)] for i in range(n))
for l in list(chunks(list(range(current_scroll_position, target_yth_location, scrollSpeed)) + list([target_yth_location+(-scrollSpeed if scrollSpeed > 0 else scrollSpeed)]), 3)):
for pos in l:
driver.execute_script("window.scrollTo(0, "+str(pos)+");")
time.sleep(0.1)
time.sleep(random.randint(1,3))
A:
driver.execute_script("document.getElementById('your ID Element').scrollIntoView();")
it's working for my case.
A:
Just a small variation of the solutions provided so far: sometimes in scraping you have to meet the following requirements:
Keep scrolling step by step. Otherwise if you always jump to the bottom some elements are loaded only as containers/divs but their content is not loaded because they were never visible (because you jumped straight to the bottom);
Allow enough time for content to be loaded;
It's not an infinite scroll page, there is an end and you have to identify when the end is reached;
Here is a simple implementation:
from time import sleep
def keep_scrolling_to_the_bottom():
while True:
previous_scrollY = my_web_driver.execute_script( 'return window.scrollY' )
my_web_driver.execute_script( 'window.scrollBy( 0, 230 )' )
sleep( 0.4 )
if previous_scrollY == my_web_driver.execute_script( 'return window.scrollY' ):
print( 'job done, reached the bottom!' )
break
Tested and working on Windows 7 x64, Python 3.8.0, selenium 4.1.3, Google Chrome 107.0.5304.107, website for property rent.
|
How can I scroll a web page using selenium webdriver in python?
|
I am currently using selenium webdriver to parse through a Facebook user's friends page and extract all IDs from the AJAX script. But I need to scroll down to get all the friends. How can I scroll down in Selenium? I am using Python.
|
[
"You can use\ndriver.execute_script(\"window.scrollTo(0, Y)\") \n\nwhere Y is the height (on a fullhd monitor it's 1080). (Thanks to @lukeis)\nYou can also use\ndriver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n\nto scroll to the bottom of the page.\nIf you want to scroll to a page with infinite loading, like social network ones, facebook etc. (thanks to @Cuong Tran)\nSCROLL_PAUSE_TIME = 0.5\n\n# Get scroll height\nlast_height = driver.execute_script(\"return document.body.scrollHeight\")\n\nwhile True:\n # Scroll down to bottom\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n\n # Wait to load page\n time.sleep(SCROLL_PAUSE_TIME)\n\n # Calculate new scroll height and compare with last scroll height\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n if new_height == last_height:\n break\n last_height = new_height\n\nanother method (thanks to Juanse) is, select an object and \nlabel.sendKeys(Keys.PAGE_DOWN);\n\n",
"If you want to scroll down to bottom of infinite page (like linkedin.com), you can use this code:\nSCROLL_PAUSE_TIME = 0.5\n\n# Get scroll height\nlast_height = driver.execute_script(\"return document.body.scrollHeight\")\n\nwhile True:\n # Scroll down to bottom\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n\n # Wait to load page\n time.sleep(SCROLL_PAUSE_TIME)\n\n # Calculate new scroll height and compare with last scroll height\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n if new_height == last_height:\n break\n last_height = new_height\n\nReference: https://stackoverflow.com/a/28928684/1316860\n",
"You can use send_keys to simulate an END (or PAGE_DOWN) key press (which normally scroll the page):\nfrom selenium.webdriver.common.keys import Keys\nhtml = driver.find_element_by_tag_name('html')\nhtml.send_keys(Keys.END)\n\n",
"same method as shown here: \nin python you can just use \ndriver.execute_script(\"window.scrollTo(0, Y)\")\n\n(Y is the vertical position you want to scroll to)\n",
"element=find_element_by_xpath(\"xpath of the li you are trying to access\")\n\nelement.location_once_scrolled_into_view\n\nthis helped when I was trying to access a 'li' that was not visible.\n",
"For my purpose, I wanted to scroll down more, keeping the windows position in mind. My solution was similar and used window.scrollY\ndriver.execute_script(\"window.scrollTo(0, window.scrollY + 200)\")\n\nwhich will go to the current y scroll position + 200 \n",
"This is how you scroll down the webpage:\ndriver.execute_script(\"window.scrollTo(0, 1000);\")\n\n",
"None of these answers worked for me, at least not for scrolling down a facebook search result page, but I found after a lot of testing this solution:\nwhile driver.find_element_by_tag_name('div'):\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n Divs=driver.find_element_by_tag_name('div').text\n if 'End of Results' in Divs:\n print 'end'\n break\n else:\n continue\n\n",
"The easiest way i found to solve that problem was to select a label and then send:\nlabel.sendKeys(Keys.PAGE_DOWN);\n\nHope it works!\n",
"When working with youtube the floating elements give the value \"0\" as the scroll height\nso rather than using \"return document.body.scrollHeight\" try using this one \"return document.documentElement.scrollHeight\"\nadjust the scroll pause time as per your internet speed \nelse it will run for only one time and then breaks after that.\nSCROLL_PAUSE_TIME = 1\n\n# Get scroll height\n\"\"\"last_height = driver.execute_script(\"return document.body.scrollHeight\")\n\nthis dowsnt work due to floating web elements on youtube\n\"\"\"\n\nlast_height = driver.execute_script(\"return document.documentElement.scrollHeight\")\nwhile True:\n # Scroll down to bottom\n driver.execute_script(\"window.scrollTo(0,document.documentElement.scrollHeight);\")\n\n # Wait to load page\n time.sleep(SCROLL_PAUSE_TIME)\n\n # Calculate new scroll height and compare with last scroll height\n new_height = driver.execute_script(\"return document.documentElement.scrollHeight\")\n if new_height == last_height:\n print(\"break\")\n break\n last_height = new_height\n\n",
"scroll loading pages. Example: medium, quora,etc\nlast_height = driver.execute_script(\"return document.body.scrollHeight\")\n while True:\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight-1000);\")\n # Wait to load the page.\n driver.implicitly_wait(30) # seconds\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n \n if new_height == last_height:\n break\n last_height = new_height\n # sleep for 30s\n driver.implicitly_wait(30) # seconds\n driver.quit()\n\n",
"Here's an example selenium code snippet that you could use for this type of purpose. It goes to the url for youtube search results on 'Enumerate python tutorial' and scrolls down until it finds the video with the title: 'Enumerate python tutorial(2020).'\ndriver.get('https://www.youtube.com/results?search_query=enumerate+python')\ntarget = driver.find_element_by_link_text('Enumerate python tutorial(2020).')\ntarget.location_once_scrolled_into_view\n\n",
"This code scrolls to the bottom but doesn't require that you wait each time. It'll continually scroll, and then stop at the bottom (or timeout)\nfrom selenium import webdriver\nimport time\n\ndriver = webdriver.Chrome(executable_path='chromedriver.exe')\ndriver.get('https://example.com')\n\npre_scroll_height = driver.execute_script('return document.body.scrollHeight;')\nrun_time, max_run_time = 0, 1\nwhile True:\n iteration_start = time.time()\n # Scroll webpage, the 100 allows for a more 'aggressive' scroll\n driver.execute_script('window.scrollTo(0, 100*document.body.scrollHeight);')\n\n post_scroll_height = driver.execute_script('return document.body.scrollHeight;')\n\n scrolled = post_scroll_height != pre_scroll_height\n timed_out = run_time >= max_run_time\n\n if scrolled:\n run_time = 0\n pre_scroll_height = post_scroll_height\n elif not scrolled and not timed_out:\n run_time += time.time() - iteration_start\n elif not scrolled and timed_out:\n break\n\n# closing the driver is optional \ndriver.close()\n\nThis is much faster than waiting 0.5-3 seconds each time for a response, when that response could take 0.1 seconds \n",
"I was looking for a way of scrolling through a dynamic webpage, and automatically stopping once the end of the page is reached, and found this thread.\nThe post by @Cuong Tran, with one main modification, was the answer that I was looking for. I thought that others might find the modification helpful (it has a pronounced effect on how the code works), hence this post. \nThe modification is to move the statement that captures the last page height inside the loop (so that each check is comparing to the previous page height).\nSo, the code below:\n\nContinuously scrolls down a dynamic webpage (.scrollTo()), only stopping when, for one iteration, the page height stays the same.\n\n(There is another modification, where the break statement is inside another condition (in case the page 'sticks') which can be removed).\n SCROLL_PAUSE_TIME = 0.5\n\n\n while True:\n\n # Get scroll height\n ### This is the difference. Moving this *inside* the loop\n ### means that it checks if scrollTo is still scrolling \n last_height = driver.execute_script(\"return document.body.scrollHeight\")\n\n # Scroll down to bottom\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n\n # Wait to load page\n time.sleep(SCROLL_PAUSE_TIME)\n\n # Calculate new scroll height and compare with last scroll height\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n if new_height == last_height:\n\n # try again (can be removed)\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n\n # Wait to load page\n time.sleep(SCROLL_PAUSE_TIME)\n\n # Calculate new scroll height and compare with last scroll height\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n\n # check if the page height has remained the same\n if new_height == last_height:\n # if so, you are done\n break\n # if not, move on to the next loop\n else:\n last_height = new_height\n continue\n\n",
"You can use send_keys to simulate a PAGE_DOWN key press (which normally scroll the page):\nfrom selenium.webdriver.common.keys import Keys\nhtml = driver.find_element_by_tag_name('html')\nhtml.send_keys(Keys.PAGE_DOWN)\n\n",
"if you want to scroll within a particular view/frame (WebElement), what you only need to do is to replace \"body\" with a particular element that you intend to scroll within. i get that element via \"getElementById\" in the example below:\nself.driver.execute_script('window.scrollTo(0, document.getElementById(\"page-manager\").scrollHeight);')\n\nthis is the case on YouTube, for example...\n",
"The ScrollTo() function doesn't work anymore. This is what I used and it worked fine.\ndriver.execute_script(\"document.getElementById('mydiv').scrollIntoView();\")\n\n",
"According to the docs,\nthe class ActionChains does the job:\nfrom selenium import webdriver\nfrom selenium.webdriver import ActionChains\n\ndriver = webdriver.Firefox()\naction_chains = ActionChains(driver)\naction_chains.scroll(x: int, y: int, delta_x: int, delta_y: int, duration: int = 0, origin: str = 'viewport').perform()\n\n",
"insert this line driver.execute_script(\"window.scrollBy(0,925)\", \"\")\n",
"The loop using the \"send keys\" method of scrolling the page:\npre_scroll_height = driver.execute_script('return document.body.scrollHeight;')\nwhile True:\n driver.find_element_by_tag_name('body').send_keys(Keys.END)\n time.sleep(5)\n post_scroll_height = driver.execute_script('return document.body.scrollHeight;')\n\n print(pre_scroll_height, post_scroll_height)\n if pre_scroll_height == post_scroll_height:\n break\n pre_scroll_height=post_scroll_height\n\n",
"Here is a method I wrote to slowly scroll down to a targets element\nYou can pass either Y-th position of element of the CSS Selector to it\nIt scrolls exactly like we do via mouse-wheel\nOnce this method called, you call it again with same driver object but with new target element, it will then scroll up/down wherever that element exists\ndef slow_scroll_to_element(self, driver, element_selector=None, target_yth_location=None):\n current_scroll_position = int(driver.execute_script(\"return window.scrollY\"))\n \n if element_selector:\n target_yth_location = int(driver.execute_script(\"return document.querySelector('{}').getBoundingClientRect()['top'] + window.scrollY\".format(element_selector)))\n \n scrollSpeed = 100 if target_yth_location-current_scroll_position > 0 else -100\n\n def chunks(a, n):\n k, m = divmod(len(a), n)\n return (a[i*k+min(i, m):(i+1)*k+min(i+1, m)] for i in range(n))\n \n for l in list(chunks(list(range(current_scroll_position, target_yth_location, scrollSpeed)) + list([target_yth_location+(-scrollSpeed if scrollSpeed > 0 else scrollSpeed)]), 3)):\n for pos in l:\n driver.execute_script(\"window.scrollTo(0, \"+str(pos)+\");\")\n time.sleep(0.1)\n time.sleep(random.randint(1,3))\n\n",
"driver.execute_script(\"document.getElementById('your ID Element').scrollIntoView();\")\n\nit's working for my case.\n",
"Just a small variation of the solutions provided so far: sometimes in scraping you have to meet the following requirements:\n\nKeep scrolling step by step. Otherwise if you always jump to the bottom some elements are loaded only as containers/divs but their content is not loaded because they were never visible (because you jumped straight to the bottom);\nAllow enough time for content to be loaded;\nIt's not an infinite scroll page, there is an end and you have to identify when the end is reached;\n\nHere is a simple implementation:\nfrom time import sleep\ndef keep_scrolling_to_the_bottom():\n while True:\n previous_scrollY = my_web_driver.execute_script( 'return window.scrollY' )\n my_web_driver.execute_script( 'window.scrollBy( 0, 230 )' )\n sleep( 0.4 )\n if previous_scrollY == my_web_driver.execute_script( 'return window.scrollY' ):\n print( 'job done, reached the bottom!' )\n break\n\nTested and working on Windows 7 x64, Python 3.8.0, selenium 4.1.3, Google Chrome 107.0.5304.107, website for property rent.\n"
] |
[
417,
101,
60,
28,
23,
16,
12,
8,
8,
8,
8,
8,
7,
6,
6,
3,
3,
3,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"automated_tests",
"python",
"selenium",
"selenium_webdriver"
] |
stackoverflow_0020986631_automated_tests_python_selenium_selenium_webdriver.txt
|
Q:
Redirecting a python code output to a file and rotating it every day
I want to write a shell script that executes a python script and redirects its output with a timestamp to a different log file every day. The python script should run forever without stopping; it prints a small text to the terminal every 10 seconds.
Here is how far I have come:
filename="log-$(date +%Y-%m-%d).log"
python3 -u HelloWord_every10secs.py 2>&1 |
while IFS= read -r line;
do echo "$(date +%x__%H:%M:%S:%3N) $line";
filename="log-$(date +%Y-%m-%d).log"
done >> "log/$filename" #u unbuffered output, for logging, add time stuff
This small script redirects the output of the python code to a log file in the log/ directory and adds timestamps. The only missing piece is that it cannot rotate the log file each day with a different date ending.
I expect to have in the log/ directory:
log-2022-11-20.log
log-2022-11-21.log
log-2022-11-22.log
... etc.
But I get only one file, with the date from when I ran the script.
I know that there is a Ubuntu built-in solution called logrotate for these things, but I would like to solve this problem without it.
A:
Rather than controlling log files externally, I would recommend using TimedRotatingFileHandler instead.
Example:
import logging
from logging.handlers import TimedRotatingFileHandler
log = logging.getLogger(__name__)
def init_file_handler(log_path, logger=None):
handler = TimedRotatingFileHandler(log_path, when='midnight')
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
if logger is None:
logger = logging.getLogger() # This will be the root logger, which will capture all logs
logger.setLevel(logging.INFO) # Write INFO and above; use DEBUG or NOTSET for more
logger.addHandler(handler)
Using the above, if we run this:
>>> init_file_handler('test.log')
>>> log.info('foo')
The result will be:
$ cat test.log
2022-11-20 07:43:32,785 foo
The first log line written after midnight will trigger the roll-over, where test.log from this example would be renamed to test.log.2022-11-20, and a new test.log would be created.
It is possible to change the time format in the logged events (and the overall format of each line), and it is also possible to adjust the date format for old log files.
Additional resources:
https://docs.python.org/3/howto/logging.html#logging-basic-tutorial
Python TimedRotatingFileHandler logs to a file and stderr
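Putting this together with the question's every-10-seconds loop, a minimal sketch (the file name and suffix format are assumptions about the desired naming; backupCount controls how many rotated files are kept):
import time
import logging
from logging.handlers import TimedRotatingFileHandler

# The log/ directory must already exist; the handler will not create it.
handler = TimedRotatingFileHandler("log/app.log", when="midnight", backupCount=30)
handler.suffix = "%Y-%m-%d"  # rotated files become e.g. app.log.2022-11-20
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logging.basicConfig(level=logging.INFO, handlers=[handler])

while True:
    logging.info("hello world")  # stands in for the real script's 10-second output
    time.sleep(10)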
|
Redirecting a python code output to a file and rotating it every day
|
I want to write a shell script that executes a python script and redirects its output with a timestamp to a different log file every day. The python script should run forever without stopping; it prints a small text to the terminal every 10 seconds.
Here is how far I have come:
filename="log-$(date +%Y-%m-%d).log"
python3 -u HelloWord_every10secs.py 2>&1 |
while IFS= read -r line;
do echo "$(date +%x__%H:%M:%S:%3N) $line";
filename="log-$(date +%Y-%m-%d).log"
done >> "log/$filename" #u unbuffered output, for logging, add time stuff
This small script redirects the output of the python code to a log file in the log/ directory and adds timestamps. The only missing piece is that it cannot rotate the log file each day with a different date ending.
I expect to have in the log/ directory:
log-2022-11-20.log
log-2022-11-21.log
log-2022-11-22.log
... etc.
But I get only one file, with the date from when I ran the script.
I know that there is a Ubuntu built-in solution called logrotate for these things, but I would like to solve this problem without it.
|
[
"Rather than controlling log files externally, I would recommend using TimedRotatingFileHandler instead.\nExample:\nimport logging\nfrom logging.handlers import TimedRotatingFileHandler\n\nlog = logging.getLogger(__name__)\n\ndef init_file_handler(log_path, logger=None):\n handler = TimedRotatingFileHandler(log_path, when='midnight')\n handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))\n if logger is None:\n logger = logging.getLogger() # This will be the root logger, which will capture all logs\n logger.setLevel(logging.INFO) # Write INFO and above; use DEBUG or NOTSET for more\n logger.addHandler(handler)\n\nUsing the above, if we run this:\n>>> init_file_handler('test.log')\n\n>>> log.info('foo')\n\nThe result will be:\n$ cat test.log\n2022-11-20 07:43:32,785 foo\n\nThe first log line written after midnight will trigger the roll-over, where test.log from this example would be renamed to test.log.2022-11-20, and a new test.log would be created.\nIt is possible to change the time format in the logged events (and the overall format of each line), and it is also possible to adjust the date format for old log files.\nAdditional resources:\n\nhttps://docs.python.org/3/howto/logging.html#logging-basic-tutorial\nPython TimedRotatingFileHandler logs to a file and stderr\n\n"
] |
[
0
] |
[] |
[] |
[
"logging",
"python",
"rotation",
"shell",
"ubuntu"
] |
stackoverflow_0074507894_logging_python_rotation_shell_ubuntu.txt
|
Q:
How can I improve my stock lookup code to run in under 5 minutes?
I have a list of stock symbols, for which I need to extract financial data. I wrote a function to get all data I need (see below).
I tested it on 35 stocks and it took 9 minutes to run. The real dataset has more than 600 stock symbols, which would take hours.
Can you review my code and advise on how to make it run in less than 5 minutes, please?
Here are financial indicators I need for each stock:
# Free Cash Flow
# EV/EBIDTA
# P/E Ratio
# YoY Growth for Profit Margins
# EV/ Revenue
Here is sample dataset:
   Symbol
0    AAOI
1    AAPL
2    ACCD
3    ACEV
4   ACEVU
Here is the code:
(Short Explanation: trying to get 5 financial indicators listed above for each stock, and if there is some missing data, then simply assign np.NaN value)
array=df['Symbol']
fin_df=pd.DataFrame()

for item in array:
    #part 1
    symbol=yf.Ticker(item)
    info_df=pd.DataFrame(pd.Series(symbol.info)).T
    values=['symbol','freeCashflow','enterpriseToEbitda','enterpriseToRevenue']
    for value in values:
        if value not in list(info_df.columns):
            info_df[value]=np.NaN
        else:
            pass
    info_df=info_df[['symbol','freeCashflow','enterpriseToEbitda','enterpriseToRevenue']]
    #part 2
    info_2=symbol.financials.T
    try:
        info_df['YoY Profit Margins Growth']=round(info_2['Gross Profit'][0]/info_2['Gross Profit'][1],2)
    except:
        info_df['YoY Profit Margins Growth']=np.NaN
    #part 3
    #info_3=pd.DataFrame(pd.Series(si.get_quote_table(item))).T
    info_3=pd.DataFrame()
    try:
        info_3['PE Ratio (TTM)']=pd.Series(si.get_quote_table(item)).T['PE Ratio (TTM)']
    except:
        info_3['PE Ratio (TTM)']=np.NaN
    info_df['PERatio']=info_3['PE Ratio (TTM)']
    fin_df=pd.concat([fin_df,info_df])

fin_df.reset_index(drop=True, inplace=True)
fin_df
Is there a way to make the function more efficient in terms of time?
A:
I know this post is pretty old, but I just came across it now. Check out the 'yfinance' library. There's all kinds of stuff available over there!!
import pandas_datareader as web
import pandas as pd
df = web.DataReader('AAPL', data_source='yahoo', start='2011-01-01', end='2021-01-12')
df.head()
import yfinance as yf
aapl = yf.Ticker("AAPL")
aapl
# get stock info
aapl.info
# get historical market data
hist = aapl.history(period="max")
# show actions (dividends, splits)
aapl.actions
# show dividends
aapl.dividends
# show splits
aapl.splits
# show financials
aapl.financials
aapl.quarterly_financials
# show major holders
aapl.major_holders
# show institutional holders
aapl.institutional_holders
# show balance sheet
aapl.balance_sheet
aapl.quarterly_balance_sheet
# show cashflow
aapl.cashflow
aapl.quarterly_cashflow
# show earnings
aapl.earnings
aapl.quarterly_earnings
# show sustainability
aapl.sustainability
# show analysts recommendations
aapl.recommendations
# show next event (earnings, etc)
aapl.calendar
# show ISIN code - *experimental*
# ISIN = International Securities Identification Number
aapl.isin
# show options expirations
aapl.options
# get option chain for specific expiration
opt = aapl.option_chain('YYYY-MM-DD')
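As a sketch of how the five indicators from the question map onto a single .info call (the key names are taken from the question's own code; not every ticker exposes every key, hence the .get defaults):
import yfinance as yf

info = yf.Ticker("AAPL").info  # one dict per ticker
indicators = {
    'freeCashflow': info.get('freeCashflow'),
    'enterpriseToEbitda': info.get('enterpriseToEbitda'),
    'enterpriseToRevenue': info.get('enterpriseToRevenue'),
    'trailingPE': info.get('trailingPE'),
}
print(indicators)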
Quarterly Financials Example:
2022-09-24 2022-06-25 \
Research Development 6761000000.0 6797000000.0
Effect Of Accounting Charges None None
Income Before Tax 24657000000.0 23066000000.0
Minority Interest None None
Net Income 20721000000.0 19442000000.0
Selling General Administrative 6440000000.0 6012000000.0
Gross Profit 38095000000.0 35885000000.0
Ebit 24894000000.0 23076000000.0
Operating Income 24894000000.0 23076000000.0
Other Operating Expenses None None
Interest Expense -827000000.0 -719000000.0
Extraordinary Items None None
Non Recurring None None
Other Items None None
Income Tax Expense 3936000000.0 3624000000.0
Total Revenue 90146000000.0 82959000000.0
Total Operating Expenses 65252000000.0 59883000000.0
Cost Of Revenue 52051000000.0 47074000000.0
Total Other Income Expense Net -237000000.0 -10000000.0
Discontinued Operations None None
Net Income From Continuing Ops 20721000000.0 19442000000.0
Net Income Applicable To Common Shares 20721000000.0 19442000000.0
2022-03-26 2021-12-25
Research Development 6387000000.0 6306000000.0
Effect Of Accounting Charges None None
Income Before Tax 30139000000.0 41241000000.0
Minority Interest None None
Net Income 25010000000.0 34630000000.0
Selling General Administrative 6193000000.0 6449000000.0
Gross Profit 42559000000.0 54243000000.0
Ebit 29979000000.0 41488000000.0
Operating Income 29979000000.0 41488000000.0
Other Operating Expenses None None
Interest Expense -691000000.0 -694000000.0
Extraordinary Items None None
Non Recurring None None
Other Items None None
Income Tax Expense 5129000000.0 6611000000.0
Total Revenue 97278000000.0 123945000000.0
Total Operating Expenses 67299000000.0 82457000000.0
Cost Of Revenue 54719000000.0 69702000000.0
Total Other Income Expense Net 160000000.0 -247000000.0
Discontinued Operations None None
Net Income From Continuing Ops 25010000000.0 34630000000.0
Net Income Applicable To Common Shares 25010000000.0 34630000000.0
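Since the original question is about speed, here is a minimal sketch (assuming yfinance; the per-ticker network calls dominate the runtime, so a thread pool helps) of fetching tickers concurrently instead of one by one:
from concurrent.futures import ThreadPoolExecutor

import yfinance as yf

def fetch(ticker):
    # One network-bound lookup per ticker; .info is a plain dict
    info = yf.Ticker(ticker).info
    return {'symbol': ticker, 'trailingPE': info.get('trailingPE')}

tickers = ['AAOI', 'AAPL', 'ACCD', 'ACEV']
with ThreadPoolExecutor(max_workers=10) as pool:
    rows = list(pool.map(fetch, tickers))
print(rows)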
|
How can I improve my stock lookup code to run in under 5 minutes?
|
I have a list of stock symbols, for which I need to extract financial data. I wrote a function to get all data I need (see below).
I tested it on 35 stocks and it took 9 minutes to run. The real dataset has more than 600 stock symbols, which would take hours.
Can you review my code and advise on how to make it run in less than 5 minutes, please?
Here are financial indicators I need for each stock:
# Free Cash Flow
# EV/EBIDTA
# P/E Ratio
# YoY Growth for Profit Margins
# EV/ Revenue
Here is sample dataset:
Symbol
0
AAOI
1
AAPL
2
ACCD
3
ACEV
4
ACEVU
Here is the code:
(Short Explanation: trying to get 5 financial indicators listed above for each stock, and if there is some missing data, then simply assign np.NaN value)
array=df['Symbol']
fin_df=pd.DataFrame()
for item in array:
#part 1
symbol=yf.Ticker(item)
info_df=pd.DataFrame(pd.Series(symbol.info)).T
values=['symbol','freeCashflow','enterpriseToEbitda','enterpriseToRevenue']
for value in values:
if value not in list(info_df.columns):
info_df[value]=np.NaN
else:
pass
info_df=info_df[['symbol','freeCashflow','enterpriseToEbitda','enterpriseToRevenue']]
#part 2
info_2=symbol.financials.T
try:
info_df['YoY Profit Margins Growth']=round(info_2['Gross Profit'][0]/info_2['Gross Profit'][1],2)
except:
info_df['YoY Profit Margins Growth']=np.NaN
#part 3
#info_3=pd.DataFrame(pd.Series(si.get_quote_table(item))).T
info_3=pd.DataFrame()
try:
info_3['PE Ratio (TTM)']=pd.Series(si.get_quote_table(item)).T['PE Ratio (TTM)']
except:
info_3['PE Ratio (TTM)']=np.NaN
info_df['PERatio']=info_3['PE Ratio (TTM)']
fin_df=pd.concat([fin_df,info_df])
fin_df.reset_index(drop=True, inplace=True)
fin_df
Is there a way to make the function more efficient in terms of time?
|
[
"I know this post is pretty old, but I just came across it now. Check out the 'yfinance' library. There's all kinds of stuff available over there!!\nimport pandas_datareader as web\nimport pandas as pd\n \ndf = web.DataReader('AAPL', data_source='yahoo', start='2011-01-01', end='2021-01-12')\ndf.head()\n\nimport yfinance as yf\naapl = yf.Ticker(\"AAPL\")\naapl\n \n \n# get stock info\naapl.info\n \n# get historical market data\nhist = aapl.history(period=\"max\")\n \n# show actions (dividends, splits)\naapl.actions\n \n# show dividends\naapl.dividends\n \n# show splits\naapl.splits\n \n# show financials\naapl.financials\naapl.quarterly_financials\n \n# show major holders\naapl.major_holders\n \n# show institutional holders\naapl.institutional_holders\n \n# show balance sheet\naapl.balance_sheet\naapl.quarterly_balance_sheet\n \n# show cashflow\naapl.cashflow\naapl.quarterly_cashflow\n \n# show earnings\naapl.earnings\naapl.quarterly_earnings\n \n# show sustainability\naapl.sustainability\n \n# show analysts recommendations\naapl.recommendations\n \n# show next event (earnings, etc)\naapl.calendar\n \n# show ISIN code - *experimental*\n# ISIN = International Securities Identification Number\naapl.isin\n \n# show options expirations\naapl.options\n \n# get option chain for specific expiration\nopt = aapl.option_chain('YYYY-MM-DD')\n\nQuarterly Financials Example:\n 2022-09-24 2022-06-25 \\\nResearch Development 6761000000.0 6797000000.0 \nEffect Of Accounting Charges None None \nIncome Before Tax 24657000000.0 23066000000.0 \nMinority Interest None None \nNet Income 20721000000.0 19442000000.0 \nSelling General Administrative 6440000000.0 6012000000.0 \nGross Profit 38095000000.0 35885000000.0 \nEbit 24894000000.0 23076000000.0 \nOperating Income 24894000000.0 23076000000.0 \nOther Operating Expenses None None \nInterest Expense -827000000.0 -719000000.0 \nExtraordinary Items None None \nNon Recurring None None \nOther Items None None \nIncome Tax Expense 3936000000.0 3624000000.0 \nTotal Revenue 90146000000.0 82959000000.0 \nTotal Operating Expenses 65252000000.0 59883000000.0 \nCost Of Revenue 52051000000.0 47074000000.0 \nTotal Other Income Expense Net -237000000.0 -10000000.0 \nDiscontinued Operations None None \nNet Income From Continuing Ops 20721000000.0 19442000000.0 \nNet Income Applicable To Common Shares 20721000000.0 19442000000.0 \n\n 2022-03-26 2021-12-25 \nResearch Development 6387000000.0 6306000000.0 \nEffect Of Accounting Charges None None \nIncome Before Tax 30139000000.0 41241000000.0 \nMinority Interest None None \nNet Income 25010000000.0 34630000000.0 \nSelling General Administrative 6193000000.0 6449000000.0 \nGross Profit 42559000000.0 54243000000.0 \nEbit 29979000000.0 41488000000.0 \nOperating Income 29979000000.0 41488000000.0 \nOther Operating Expenses None None \nInterest Expense -691000000.0 -694000000.0 \nExtraordinary Items None None \nNon Recurring None None \nOther Items None None \nIncome Tax Expense 5129000000.0 6611000000.0 \nTotal Revenue 97278000000.0 123945000000.0 \nTotal Operating Expenses 67299000000.0 82457000000.0 \nCost Of Revenue 54719000000.0 69702000000.0 \nTotal Other Income Expense Net 160000000.0 -247000000.0 \nDiscontinued Operations None None \nNet Income From Continuing Ops 25010000000.0 34630000000.0 \nNet Income Applicable To Common Shares 25010000000.0 34630000000.0 \n\n"
] |
[
1
] |
[] |
[] |
[
"performance",
"python",
"yahoo_finance",
"yfinance"
] |
stackoverflow_0073940353_performance_python_yahoo_finance_yfinance.txt
|
Q:
Cannot pass Twitter authorization
I am absolutely brand new to coding (about 11 days) and I was told to post my question here since I cannot solve my problem despite countless attempts.
I am using Python and Jupyter notebook (it is required that I use this)
I have a config file named config.txt where all my keys and tokens are located and I am able to call all my keys except the bearer_token which I do not know when to use this as I have seen hundreds of different answers.
I have a Twitter Academic Research account.
My goal is to pull various hashtags, in order to do so I must be able to authenticate my account via my code (this is what I am understanding).
I have tried various setups per different suggestions here on Stack Overflow and I cannot authenticate. I also get the following error message: Unauthorized: 401 Unauthorized
32 - Could not authenticate you. When I go to Twitter to learn and understand, it says this:
401
Unauthorized
V1.1 V2 There was a problem authenticating your request. This could be due to missing or incorrect authentication credentials. This may also be returned in other undefined circumstances.
Check that you are using the correct authentication method and that your credentials are correct. The link provided no longer works, and I don't know how to check that my authentication method and credentials are correct (I only have the one set of credentials, so I am thinking it is my authentication method... maybe??)
My code is further down.
This is my error message at the end of my last code block named authentication
I have tried multiple solutions by different people including all the recommendations mentioned in the [https://stackoverflow.com/questions/66156958/how-to-acess-tweets-with-bearer-token-using-tweepy-in-python/66597109#66597109]
import tweepy
import configparser
import os
#Define file path and make sure path is correct
file_name = "config.txt"
#Config file stored in the same directory as the script.
#Get correct working directory with os.getcwd()
file_path = os.path.join(os.getcwd(), file_name)
print(file_path) # Confirm file path is correct.
The path is correct and all is well for the above portion of code.
#Read info from the config file named config.txt
config = configparser.ConfigParser()
config.read(file_path)
#Will raise KeyError if the file path is not correct
api_key = config["twitter"]["API_key"]
api_secret = config["twitter"]["API_secret"]
access_token = config["twitter"]["Access_token"]
access_token_secret = config["twitter"]["Access_token_secret"]
#bearer_token = config["twitter"]["Bearer_token"]
#client_id = config["twitter"]["Client_ID"]
#client_secret = config["twitter"]["Client_Secret"]
#print(api_key, api_secret, access_token, access_token_secret)
#print (bearer_token)
Everything returns properly with the exception of my bearer_token; it is set up in the exact same way as all my other keys and tokens in my config file
#authentication - this is where my issue is. My authentication is not working.
from tweepy.auth import OAuthHandler
auth = OAuthHandler("API_key", "API_secret")
auth.set_access_token = (access_token, access_token_secret)
api = tweepy.API(auth)
public_tweets = api.home_timeline()
print(public_tweets)
The following is just other example I have tried using of many to troubleshoot my issue
#auth = tweepy.OAuth2AppHandler("bearer_token", "API_key_secret")
#api = tweepy.API(auth)
#for tweet in tweepy.Cursor(api.search, q='cool').items(3):
#    print(tweet.text)
No matter what, I get the error provided in the screenshot image above
What I really want to do is pull historical hashtags but this is not possible without authentication, I believe. It was recommended I try pulling my own timeline to start before I began writing queries for different pieces of data or information. I am sure there is something simplistic but being 11 days old in the world of coding I am quite lost. Thank you to everyone helping me to understand and troubleshoot. I really appreciate it.
A:
You are not passing your api_key and api_secret.
This:
auth = OAuthHandler("API_key", "API_secret")
should be this:
auth = OAuthHandler(api_key, api_secret)
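Beyond that, the question's code also assigns a tuple to set_access_token instead of calling the method. A corrected sketch, reusing the config variables the question already defines:
import tweepy
from tweepy.auth import OAuthHandler

auth = OAuthHandler(api_key, api_secret)  # pass the variables, not string literals
auth.set_access_token(access_token, access_token_secret)  # call the method, don't assign to it
api = tweepy.API(auth)
print(api.home_timeline())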
|
Cannot pass Twitter authorization
|
I am absolutely brand new to coding (about 11 days) and I was told to post my question here since I cannot solve my problem despite countless attempts.
I am using Python and Jupyter notebook (it is required that I use this)
I have a config file named config.txt where all my keys and tokens are located and I am able to call all my keys except the bearer_token which I do not know when to use this as I have seen hundreds of different answers.
I have a Twitter Academic Research account.
My goal is to pull various hashtags, in order to do so I must be able to authenticate my account via my code (this is what I am understanding).
I have tried various setups per different suggestions here on Stack Overflow and I cannot authenticate. I also get the following error message: Unauthorized: 401 Unauthorized
32 - Could not authenticate you. When I go to Twitter to learn and understand, it says this:
401
Unauthorized
V1.1 V2 There was a problem authenticating your request. This could be due to missing or incorrect authentication credentials. This may also be returned in other undefined circumstances.
Check that you are using the correct authentication method and that your credentials are correct. The link provided no longer works and I don't know what I am doing to check my authentication methods and credentials are correct (I only have the one set of credentials so I am thinking that it is my authentication method...maybe??)
My code is further down.
This is my error message at the end of my last code block named authentication
I have tried multiple solutions by different people including all the recommendations mentioned in the [https://stackoverflow.com/questions/66156958/how-to-acess-tweets-with-bearer-token-using-tweepy-in-python/66597109#66597109]
import tweepy
import configparser
import os
#Define file path and make sure path is correct
file_name = "config.txt"
#Config file stored in the same directory as the script.
#Get correct working directory with os.getcwd()
file_path = os.path.join(os.getcwd(), file_name)
print(file_path) # Confirm file path is correct.
The path is correct and all is well for the above portion of code.
#Read info from the config file named config.txt
config = configparser.ConfigParser()
config.read(file_path)
#Will raise KeyError if the file path is not correct
api_key = config["twitter"]["API_key"]
api_secret = config["twitter"]["API_secret"]
access_token = config["twitter"]["Access_token"]
access_token_secret = config["twitter"]["Access_token_secret"]
#bearer_token = config["twitter"]["Bearer_token"]
#client_id = config["twitter"]["Client_ID"]
#client_secret = config["twitter"]["Client_Secret"]
#print(api_key, api_secret, access_token, access_token_secret)
#print (bearer_token)
Everything returns properly with the exception of my bearer_key - it is set up the same exact way as all my other keys and tokens in my config file
#authentication - this is where my issue is. My authentication is not working.
from tweepy.auth import OAuthHandler
auth = OAuthHandler("API_key", "API_secret")
auth.set_access_token = (access_token, access_token_secret)
api = tweepy.API(auth)
public_tweets = api.home_timeline()
print(public_tweets)
The following is just other example I have tried using of many to troubleshoot my issue
#auth = tweepy.OAuth2AppHandler("bearer_token", "API_key_secret")
#api = tweepy.API(auth)
#for tweet in tweepy.Cursor(api.search, q='cool').items(3):
#print(tweet.text)
No matter what, I get the error provided in the screenshot image above
What I really want to do is pull historical hashtags but this is not possible without authentication, I believe. It was recommended I try pulling my own timeline to start before I began writing queries for different pieces of data or information. I am sure there is something simplistic but being 11 days old in the world of coding I am quite lost. Thank you to everyone helping me to understand and troubleshoot. I really appreciate it.
|
[
"You are not passing your api_key and api_secret.\nThis:\nauth = OAuthHandler(\"API_key\", \"API_secret\")\nshould be this:\nauth = OAuthHandler(api_key, api_secret)\n"
] |
[
0
] |
[] |
[] |
[
"authentication",
"jupyter_notebook",
"python",
"twitter",
"twitter_oauth"
] |
stackoverflow_0074507641_authentication_jupyter_notebook_python_twitter_twitter_oauth.txt
|
Q:
How to click on a button on a webpage and iterate through contents after clicking on button using python selenium
I am using Python Selenium to web scrape from https://finance.yahoo.com/quote/AAPL/balance-sheet?p=AAPL but I want to scrape the Quarterly data instead of the Annual after clicking on the "Quarterly" button on the top right. This is my code so far:
def readQuarterlyBSData(ticker):
    url = 'https://finance.yahoo.com/quote/AAPL/balance-sheet?p=AAPL'
    options = Options()
    options.add_argument('--headless')
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
    driver.get(url)
    WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="Col1-1-Financials-Proxy"]/section/div[1]/div[2]/button'))).click()
    soup = BeautifulSoup(driver.page_source, 'lxml')
    ls= []
    # Trying to iterate through each div after clicking on the Quarterly button but content is still Annual Data
    for element in soup.find_all('div'):
        ls.append(element.string) # add each element one by one to the list
I am able to get the button to click but when I iterate through the divs, I am still getting content that is from Annual data and not Quarterly data. Can someone show me how I can iterate through Quarterly data?
A:
soup = BeautifulSoup(driver.page_source, 'lxml')
You don't need to pass your driver.page_source to BS4, use Selenium itself to extract the data using driver.find_element function.
Here is the doc on that: https://selenium-python.readthedocs.io/locating-elements.html
Also, you are not waiting for the page source to be updated, so add a time delay after the click. You are only waiting for the button to become clickable; after the click you immediately pass a page source that has not been updated yet. So wait:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="Col1-1-Financials-Proxy"]/section/div[1]/div[2]/button'))).click()
time.sleep(10) # wait and see
soup = BeautifulSoup(driver.page_source, 'lxml')
Hope it helps :)
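A more robust alternative to a fixed sleep is to wait for the page to actually re-render and then read the rows with Selenium directly. A sketch, assuming the driver from the question's code; the table XPath and the data-test attribute are assumptions about Yahoo's current markup and may change:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Keep a handle on an element the re-render will replace
old_table = driver.find_element(By.XPATH, '//*[@id="Col1-1-Financials-Proxy"]//div[@data-test="fin-row"]')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable(
    (By.XPATH, '//*[@id="Col1-1-Financials-Proxy"]/section/div[1]/div[2]/button'))).click()
WebDriverWait(driver, 20).until(EC.staleness_of(old_table))  # old DOM node is gone once quarterly data loads

rows = driver.find_elements(By.XPATH, '//*[@id="Col1-1-Financials-Proxy"]//div[@data-test="fin-row"]')
for row in rows:
    print(row.text)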
A:
No, no, no. Do it this way!
import yfinance as yf

aapl = yf.Ticker("AAPL")

# show balance sheet
aapl.balance_sheet
aapl.quarterly_balance_sheet
This is what you get.
2022-09-24 2022-06-25 \
Total Liab 3.020830e+11 2.782020e+11
Total Stockholder Equity 5.067200e+10 5.810700e+10
Other Current Liab 6.709400e+10 5.653900e+10
Total Assets 3.527550e+11 3.363090e+11
Common Stock 6.484900e+10 6.211500e+10
Other Current Assets 2.122300e+10 1.638600e+10
Retained Earnings -3.068000e+09 5.289000e+09
Other Liab 3.839400e+10 5.362900e+10
Gains Losses Not Affecting Retained Earnings -1.110900e+10 -9.297000e+09
Other Assets 4.401100e+10 5.260500e+10
Cash 2.364600e+10 2.750200e+10
Total Current Liabilities 1.539820e+11 1.298730e+11
Short Long Term Debt 1.112800e+10 1.400900e+10
Other Stockholder Equity -1.110900e+10 -9.297000e+09
Property Plant Equipment 5.253400e+10 4.033500e+10
Total Current Assets 1.354050e+11 1.122920e+11
Long Term Investments 1.208050e+11 1.310770e+11
Net Tangible Assets 5.067200e+10 5.810700e+10
Short Term Investments 2.465800e+10 2.072900e+10
Net Receivables 6.093200e+10 4.224200e+10
Long Term Debt 9.895900e+10 9.470000e+10
Inventory 4.946000e+09 5.433000e+09
Accounts Payable 6.411500e+10 4.834300e+10
2022-03-26 2021-12-25
Total Liab 2.832630e+11 3.092590e+11
Total Stockholder Equity 6.739900e+10 7.193200e+10
Other Current Liab 5.816800e+10 5.704300e+10
Total Assets 3.506620e+11 3.811910e+11
Common Stock 6.118100e+10 5.842400e+10
Other Current Assets 1.580900e+10 1.811200e+10
Retained Earnings 1.271200e+10 1.443500e+10
Other Liab 5.243200e+10 5.505600e+10
Gains Losses Not Affecting Retained Earnings -6.494000e+09 -9.270000e+08
Other Assets 5.195900e+10 5.010900e+10
Cash 2.809800e+10 3.711900e+10
Total Current Liabilities 1.275080e+11 1.475740e+11
Short Long Term Debt 9.659000e+09 1.116900e+10
Other Stockholder Equity -6.494000e+09 -9.270000e+08
Property Plant Equipment 3.930400e+10 3.924500e+10
Total Current Assets 1.181800e+11 1.531540e+11
Long Term Investments 1.412190e+11 1.386830e+11
Net Tangible Assets 6.739900e+10 7.193200e+10
Short Term Investments 2.341300e+10 2.679400e+10
Net Receivables 4.540000e+10 6.525300e+10
Long Term Debt 1.033230e+11 1.066290e+11
Inventory 5.460000e+09 5.876000e+09
Accounts Payable 5.268200e+10 7.436200e+10
There's all kinds of financial info/data available from the 'pandas_datareader' library.
https://pandas-datareader.readthedocs.io/en/latest/readers/yahoo.html
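Since those attributes return pandas DataFrames, a single line item can be pulled across quarters. A sketch; the 'Cash' row label is taken from the output above and the exact labels vary with the yfinance version:
import yfinance as yf

aapl = yf.Ticker("AAPL")
cash_by_quarter = aapl.quarterly_balance_sheet.loc['Cash']  # one row across all quarters shown
print(cash_by_quarter)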
|
How to click on a button on a webpage and iterate through contents after clicking on button using python selenium
|
I am using Python Selenium to web scrape from https://finance.yahoo.com/quote/AAPL/balance-sheet?p=AAPL but I want to scrape the Quarterly data instead of the Annual after clicking on the "Quarterly" button on the top right. This is my code so far:
def readQuarterlyBSData(ticker):
url = 'https://finance.yahoo.com/quote/AAPL/balance-sheet?p=AAPL'
options = Options()
options.add_argument('--headless')
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
driver.get(url)
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="Col1-1-Financials-Proxy"]/section/div[1]/div[2]/button'))).click()
soup = BeautifulSoup(driver.page_source, 'lxml')
ls= []
# Trying to iterate through each div after clicking on the Quarterly button but content is still Annual Data
for element in soup.find_all('div'):
ls.append(element.string) # add each element one by one to the list
I am able to get the button to click but when I iterate through the divs, I am still getting content that is from Annual data and not Quarterly data. Can someone show me how I can iterate through Quarterly data?
|
[
"\nsoup = BeautifulSoup(driver.page_source, 'lxml')\n\nYou don't need to pass your driver.page_source to BS4, use Selenium itself to extract the data using driver.find_element function.\nHere is the doc on that: https://selenium-python.readthedocs.io/locating-elements.html\nAlso, you are not waiting for the page source to be updated, so add a time delay after the click. You are just waiting for the button to appear, what happens after that? You immediately pass the page source that hasn't been updated after the click. So wait,\nWebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id=\"Col1-1-Financials-Proxy\"]/section/div[1]/div[2]/button'))).click()\ntime.sleep(10) # wait and see\nsoup = BeautifulSoup(driver.page_source, 'lxml')\n\nHope it helps :)\n",
"No, no, no. Do it this way!\nimport pandas_datareader as web\nimport pandas as pd\n\n# show balance sheet\naapl.balance_sheet\naapl.quarterly_balance_sheet\n\nThis is what you get.\n 2022-09-24 2022-06-25 \\\nTotal Liab 3.020830e+11 2.782020e+11 \nTotal Stockholder Equity 5.067200e+10 5.810700e+10 \nOther Current Liab 6.709400e+10 5.653900e+10 \nTotal Assets 3.527550e+11 3.363090e+11 \nCommon Stock 6.484900e+10 6.211500e+10 \nOther Current Assets 2.122300e+10 1.638600e+10 \nRetained Earnings -3.068000e+09 5.289000e+09 \nOther Liab 3.839400e+10 5.362900e+10 \nGains Losses Not Affecting Retained Earnings -1.110900e+10 -9.297000e+09 \nOther Assets 4.401100e+10 5.260500e+10 \nCash 2.364600e+10 2.750200e+10 \nTotal Current Liabilities 1.539820e+11 1.298730e+11 \nShort Long Term Debt 1.112800e+10 1.400900e+10 \nOther Stockholder Equity -1.110900e+10 -9.297000e+09 \nProperty Plant Equipment 5.253400e+10 4.033500e+10 \nTotal Current Assets 1.354050e+11 1.122920e+11 \nLong Term Investments 1.208050e+11 1.310770e+11 \nNet Tangible Assets 5.067200e+10 5.810700e+10 \nShort Term Investments 2.465800e+10 2.072900e+10 \nNet Receivables 6.093200e+10 4.224200e+10 \nLong Term Debt 9.895900e+10 9.470000e+10 \nInventory 4.946000e+09 5.433000e+09 \nAccounts Payable 6.411500e+10 4.834300e+10 \n\n 2022-03-26 2021-12-25 \nTotal Liab 2.832630e+11 3.092590e+11 \nTotal Stockholder Equity 6.739900e+10 7.193200e+10 \nOther Current Liab 5.816800e+10 5.704300e+10 \nTotal Assets 3.506620e+11 3.811910e+11 \nCommon Stock 6.118100e+10 5.842400e+10 \nOther Current Assets 1.580900e+10 1.811200e+10 \nRetained Earnings 1.271200e+10 1.443500e+10 \nOther Liab 5.243200e+10 5.505600e+10 \nGains Losses Not Affecting Retained Earnings -6.494000e+09 -9.270000e+08 \nOther Assets 5.195900e+10 5.010900e+10 \nCash 2.809800e+10 3.711900e+10 \nTotal Current Liabilities 1.275080e+11 1.475740e+11 \nShort Long Term Debt 9.659000e+09 1.116900e+10 \nOther Stockholder Equity -6.494000e+09 -9.270000e+08 \nProperty Plant Equipment 3.930400e+10 3.924500e+10 \nTotal Current Assets 1.181800e+11 1.531540e+11 \nLong Term Investments 1.412190e+11 1.386830e+11 \nNet Tangible Assets 6.739900e+10 7.193200e+10 \nShort Term Investments 2.341300e+10 2.679400e+10 \nNet Receivables 4.540000e+10 6.525300e+10 \nLong Term Debt 1.033230e+11 1.066290e+11 \nInventory 5.460000e+09 5.876000e+09 \nAccounts Payable 5.268200e+10 7.436200e+10 \n\nThere's all kinds of financial info/data available form the 'pandas_datareader' library.\nhttps://pandas-datareader.readthedocs.io/en/latest/readers/yahoo.html\n"
] |
[
1,
0
] |
[] |
[] |
[
"onclick",
"python",
"selenium",
"selenium_webdriver",
"web_scraping"
] |
stackoverflow_0073725603_onclick_python_selenium_selenium_webdriver_web_scraping.txt
|
Q:
Python yahoo finance data optimization
I've found some code here that works pretty well for retrieving the data I need (Python yahoo finance error market_cap=int(data.get_quote_yahoo(str)['marketCap']) TypeError: 'int' object is not callable):
tickers=["AAPL","GOOG","RY","HPQ"]
# Get market cap (not really necessary for you)
market_cap_data = web.get_quote_yahoo(tickers)['marketCap']
# Get the P/E ratio directly
pe_data = web.get_quote_yahoo(tickers)['trailingPE']
# print stock and p/e ratio
for stock, pe in zip(tickers, pe_data):
    print(stock, pe)
# More keys that can be used
['language', 'region', 'quoteType', 'triggerable', 'quoteSourceName',
'currency', 'preMarketChange', 'preMarketChangePercent',
'preMarketTime', 'preMarketPrice', 'regularMarketChange',
'regularMarketChangePercent', 'regularMarketTime', 'regularMarketPrice',
'regularMarketDayHigh', 'regularMarketDayRange', 'regularMarketDayLow',
'regularMarketVolume', 'regularMarketPreviousClose', 'bid', 'ask',
'bidSize', 'askSize', 'fullExchangeName', 'financialCurrency',
'regularMarketOpen', 'averageDailyVolume3Month',
'averageDailyVolume10Day', 'fiftyTwoWeekLowChange',
'fiftyTwoWeekLowChangePercent', 'fiftyTwoWeekRange',
'fiftyTwoWeekHighChange', 'fiftyTwoWeekHighChangePercent',
'fiftyTwoWeekLow', 'fiftyTwoWeekHigh', 'dividendDate',
'earningsTimestamp', 'earningsTimestampStart', 'earningsTimestampEnd',
'trailingAnnualDividendRate', 'trailingPE',
'trailingAnnualDividendYield', 'marketState', 'epsTrailingTwelveMonths',
'epsForward', 'sharesOutstanding', 'bookValue', 'fiftyDayAverage',
'fiftyDayAverageChange', 'fiftyDayAverageChangePercent',
'twoHundredDayAverage', 'twoHundredDayAverageChange',
'twoHundredDayAverageChangePercent', 'marketCap', 'forwardPE',
'priceToBook', 'sourceInterval', 'exchangeDataDelayedBy', 'tradeable',
'firstTradeDateMilliseconds', 'priceHint', 'exchange', 'shortName',
'longName', 'messageBoardId', 'exchangeTimezoneName',
'exchangeTimezoneShortName', 'gmtOffSetMilliseconds', 'market',
'esgPopulated', 'price']
I would like to retrieve most of the commented fields at the end of the previous code, but I've done this so far:
import pandas_datareader as web
tickers = ["AAPL", "GOOG", "RY", "SAB.MC"]
market_cap_data = web.get_quote_yahoo(tickers)['marketCap']
pe_data = web.get_quote_yahoo(tickers)['trailingPE']
fiftytwo_low_data = web.get_quote_yahoo(tickers)['fiftyTwoWeekLowChangePercent']
for stock, mcap, pe, fiftytwo_low in zip(tickers, market_cap_data, pe_data, fiftytwo_low_data):
    print(stock, mcap, pe, fiftytwo_low)
Obviously I could continue with brute force, but do you know a more elegant way to retrieve the whole set of fields with their column names?
['language', 'region', 'quoteType', 'triggerable', 'quoteSourceName',
'currency', 'preMarketChange', 'preMarketChangePercent',
'preMarketTime', 'preMarketPrice', 'regularMarketChange',
'regularMarketChangePercent', 'regularMarketTime', 'regularMarketPrice',
'regularMarketDayHigh', 'regularMarketDayRange', 'regularMarketDayLow',
'regularMarketVolume', 'regularMarketPreviousClose', 'bid', 'ask',
'bidSize', 'askSize', 'fullExchangeName', 'financialCurrency',
'regularMarketOpen', 'averageDailyVolume3Month',
'averageDailyVolume10Day', 'fiftyTwoWeekLowChange',
'fiftyTwoWeekLowChangePercent', 'fiftyTwoWeekRange',
'fiftyTwoWeekHighChange', 'fiftyTwoWeekHighChangePercent',
'fiftyTwoWeekLow', 'fiftyTwoWeekHigh', 'dividendDate',
'earningsTimestamp', 'earningsTimestampStart', 'earningsTimestampEnd',
'trailingAnnualDividendRate', 'trailingPE',
'trailingAnnualDividendYield', 'marketState', 'epsTrailingTwelveMonths',
'epsForward', 'sharesOutstanding', 'bookValue', 'fiftyDayAverage',
'fiftyDayAverageChange', 'fiftyDayAverageChangePercent',
'twoHundredDayAverage', 'twoHundredDayAverageChange',
'twoHundredDayAverageChangePercent', 'marketCap', 'forwardPE',
'priceToBook', 'sourceInterval', 'exchangeDataDelayedBy', 'tradeable',
'firstTradeDateMilliseconds', 'priceHint', 'exchange', 'shortName',
'longName', 'messageBoardId', 'exchangeTimezoneName',
'exchangeTimezoneShortName', 'gmtOffSetMilliseconds', 'market',
'esgPopulated', 'price']
thanks
A:
Using a set, you can collect all of the column names that get_quote_yahoo returns for a given ticker, and by taking the union across tickers you end up with every field name that has a value for the symbols you want to retrieve.
import pandas_datareader as web
import pandas as pd

tickers = ["AAPL", "GOOG", "RY", "SAB.MC"]

names = set()
for t in tickers:
    market_cap_data = web.get_quote_yahoo(t)
    names |= set(market_cap_data.columns.to_list())
names
{'ask',
'askSize',
'averageAnalystRating',
'averageDailyVolume10Day',
'averageDailyVolume3Month',
'bid',
'bidSize',
'bookValue',
'cryptoTradeable',
'currency',
'customPriceAlertConfidence',
'displayName',
...
'trailingAnnualDividendYield',
'trailingPE',
'triggerable',
'twoHundredDayAverage',
'twoHundredDayAverageChange',
'twoHundredDayAverageChangePercent',
'typeDisp'}
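Building on that, a sketch of the more direct route: get_quote_yahoo accepts the whole ticker list at once and returns a DataFrame, so the wanted fields can be selected by column name in a single call (reindex guards against fields that are missing for some tickers):
import pandas_datareader as web

tickers = ["AAPL", "GOOG", "RY", "SAB.MC"]
fields = ["marketCap", "trailingPE", "fiftyTwoWeekLowChangePercent"]

quotes = web.get_quote_yahoo(tickers)   # one request, one DataFrame
print(quotes.reindex(columns=fields))   # NaN where a field is absent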
A:
I know this post is pretty old, but I just came across it now. Check out the 'yfinance' library. There's all kinds of stuff available over there!!
import pandas_datareader as web
import pandas as pd
df = web.DataReader('AAPL', data_source='yahoo', start='2011-01-01', end='2021-01-12')
df.head()
import yfinance as yf
aapl = yf.Ticker("AAPL")
aapl
# get stock info
aapl.info
# get historical market data
hist = aapl.history(period="max")
# show actions (dividends, splits)
aapl.actions
# show dividends
aapl.dividends
# show splits
aapl.splits
# show financials
aapl.financials
aapl.quarterly_financials
# show major holders
aapl.major_holders
# show institutional holders
aapl.institutional_holders
# show balance sheet
aapl.balance_sheet
aapl.quarterly_balance_sheet
# show cashflow
aapl.cashflow
aapl.quarterly_cashflow
# show earnings
aapl.earnings
aapl.quarterly_earnings
# show sustainability
aapl.sustainability
# show analysts recommendations
aapl.recommendations
# show next event (earnings, etc)
aapl.calendar
# show ISIN code - *experimental*
# ISIN = International Securities Identification Number
aapl.isin
# show options expirations
aapl.options
# get option chain for specific expiration
opt = aapl.option_chain('YYYY-MM-DD')
Result:
{'zip': '95014',
'sector': 'Technology',
'fullTimeEmployees': 164000,
'longBusinessSummary': 'Apple Inc. designs, manufactures, and markets smartphones, personal computers, tablets, wearables, and accessories worldwide. It also sells various related services. In addition, the company offers iPhone, a line of smartphones; Mac, a line of personal computers; iPad, a line of multi-purpose tablets; and wearables, home, and accessories comprising AirPods, Apple TV, Apple Watch, Beats products, and HomePod. Further, it provides AppleCare support and cloud services store services; and operates various platforms, including the App Store that allow customers to discover and download applications and digital content, such as books, music, video, games, and podcasts. Additionally, the company offers various services, such as Apple Arcade, a game subscription service; Apple Fitness+, a personalized fitness service; Apple Music, which offers users a curated listening experience with on-demand radio stations; Apple News+, a subscription news and magazine service; Apple TV+, which offers exclusive original content; Apple Card, a co-branded credit card; and Apple Pay, a cashless payment service, as well as licenses its intellectual property. The company serves consumers, and small and mid-sized businesses; and the education, enterprise, and government markets. It distributes third-party applications for its products through the App Store. The company also sells its products through its retail and online stores, and direct sales force; and third-party cellular network carriers, wholesalers, retailers, and resellers. Apple Inc. was incorporated in 1977 and is headquartered in Cupertino, California.',
'city': 'Cupertino',
'phone': '408 996 1010',
'state': 'CA',
'country': 'United States',
'companyOfficers': [],
'website': 'https://www.apple.com',
'maxAge': 1,
'address1': 'One Apple Park Way',
'industry': 'Consumer Electronics',
'ebitdaMargins': 0.33105,
'profitMargins': 0.2531,
'grossMargins': 0.43310001,
'operatingCashflow': 122151002112,
'revenueGrowth': 0.081,
'operatingMargins': 0.30289,
'ebitda': 130541002752,
'targetLowPrice': 122,
'recommendationKey': 'buy',
'grossProfits': 170782000000,
'freeCashflow': 90215251968,
'targetMedianPrice': 180,
'currentPrice': 151.29,
'earningsGrowth': 0.048,
'currentRatio': 0.879,
'returnOnAssets': 0.21214001,
'numberOfAnalystOpinions': 41,
'targetMeanPrice': 178.15,
'debtToEquity': 261.446,
'returnOnEquity': 1.75459,
'targetHighPrice': 214,
'totalCash': 48304001024,
'totalDebt': 132480000000,
'totalRevenue': 394328014848,
'totalCashPerShare': 3.036,
'financialCurrency': 'USD',
'revenuePerShare': 24.317,
'quickRatio': 0.709,
'recommendationMean': 1.9,
'exchange': 'NMS',
'shortName': 'Apple Inc.',
'longName': 'Apple Inc.',
'exchangeTimezoneName': 'America/New_York',
'exchangeTimezoneShortName': 'EST',
'isEsgPopulated': False,
'gmtOffSetMilliseconds': '-18000000',
'quoteType': 'EQUITY',
'symbol': 'AAPL',
'messageBoardId': 'finmb_24937',
'market': 'us_market',
'annualHoldingsTurnover': None,
'enterpriseToRevenue': 6.317,
'beta3Year': None,
'enterpriseToEbitda': 19.081,
'52WeekChange': -0.06042725,
'morningStarRiskRating': None,
'forwardEps': 6.82,
'revenueQuarterlyGrowth': None,
'sharesOutstanding': 15908100096,
'fundInceptionDate': None,
'annualReportExpenseRatio': None,
'totalAssets': None,
'bookValue': 3.178,
'sharesShort': 103178670,
'sharesPercentSharesOut': 0.0064999997,
'fundFamily': None,
'lastFiscalYearEnd': 1663977600,
'heldPercentInstitutions': 0.60030997,
'netIncomeToCommon': 99802996736,
'trailingEps': 6.11,
'lastDividendValue': 0.23,
'SandP52WeekChange': -0.15323704,
'priceToBook': 47.60541,
'heldPercentInsiders': 0.00071999995,
'nextFiscalYearEnd': 1727136000,
'yield': None,
'mostRecentQuarter': 1663977600,
'shortRatio': 1.14,
'sharesShortPreviousMonthDate': 1664496000,
'floatShares': 15891414476,
'beta': 1.246644,
'enterpriseValue': 2490915094528,
'priceHint': 2,
'threeYearAverageReturn': None,
'lastSplitDate': 1598832000,
'lastSplitFactor': '4:1',
'legalType': None,
'lastDividendDate': 1667520000,
'morningStarOverallRating': None,
'earningsQuarterlyGrowth': 0.008,
'priceToSalesTrailing12Months': 6.103387,
'dateShortInterest': 1667174400,
'pegRatio': 2.71,
'ytdReturn': None,
'forwardPE': 22.183283,
'lastCapGain': None,
'shortPercentOfFloat': 0.0064999997,
'sharesShortPriorMonth': 103251184,
'impliedSharesOutstanding': 0,
'category': None,
'fiveYearAverageReturn': None,
'previousClose': 150.72,
'regularMarketOpen': 152.305,
'twoHundredDayAverage': 155.0841,
'trailingAnnualDividendYield': 0.005971337,
'payoutRatio': 0.14729999,
'volume24Hr': None,
'regularMarketDayHigh': 152.57,
'navPrice': None,
'averageDailyVolume10Day': 84360340,
'regularMarketPreviousClose': 150.72,
'fiftyDayAverage': 147.0834,
'trailingAnnualDividendRate': 0.9,
'open': 152.305,
'toCurrency': None,
'averageVolume10days': 84360340,
'expireDate': None,
'algorithm': None,
'dividendRate': 0.92,
'exDividendDate': 1667520000,
'circulatingSupply': None,
'startDate': None,
'regularMarketDayLow': 149.97,
'currency': 'USD',
'trailingPE': 24.761045,
'regularMarketVolume': 74496725,
'lastMarket': None,
'maxSupply': None,
'openInterest': None,
'marketCap': 2406736461824,
'volumeAllCurrencies': None,
'strikePrice': None,
'averageVolume': 89929545,
'dayLow': 149.97,
'ask': 150.95,
'askSize': 1000,
'volume': 74496725,
'fiftyTwoWeekHigh': 182.94,
'fromCurrency': None,
'fiveYearAvgDividendYield': 1,
'fiftyTwoWeekLow': 129.04,
'bid': 150.82,
'tradeable': False,
'dividendYield': 0.0061000003,
'bidSize': 1100,
'dayHigh': 152.57,
'coinMarketCapLink': None,
'regularMarketPrice': 151.29,
'preMarketPrice': None,
'logo_url': 'https://logo.clearb
Just pick/choose what you want.
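To get something closer to the question's per-ticker table, a sketch that collects a few .info keys for several tickers into one DataFrame (key availability varies by ticker and yfinance version):
import pandas as pd
import yfinance as yf

fields = ["marketCap", "trailingPE", "fiftyTwoWeekLow"]
tickers = ["AAPL", "GOOG", "RY"]

# One dict of selected fields per ticker, then stack into a DataFrame
rows = {t: {f: yf.Ticker(t).info.get(f) for f in fields} for t in tickers}
df = pd.DataFrame.from_dict(rows, orient="index")
print(df)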
|
Python yahoo finance data optimization
|
I've found some code here that works pretty well for retrieving the data I need (Python yahoo finance error market_cap=int(data.get_quote_yahoo(str)['marketCap']) TypeError: 'int' object is not callable):
tickers=["AAPL","GOOG","RY","HPQ"]
# Get market cap (not really necessary for you)
market_cap_data = web.get_quote_yahoo(tickers)['marketCap']
# Get the P/E ratio directly
pe_data = web.get_quote_yahoo(tickers)['trailingPE']
# print stock and p/e ratio
for stock, pe in zip(tickers, pe_data):
print(stock, pe)
# More keys that can be used
['language', 'region', 'quoteType', 'triggerable', 'quoteSourceName',
'currency', 'preMarketChange', 'preMarketChangePercent',
'preMarketTime', 'preMarketPrice', 'regularMarketChange',
'regularMarketChangePercent', 'regularMarketTime', 'regularMarketPrice',
'regularMarketDayHigh', 'regularMarketDayRange', 'regularMarketDayLow',
'regularMarketVolume', 'regularMarketPreviousClose', 'bid', 'ask',
'bidSize', 'askSize', 'fullExchangeName', 'financialCurrency',
'regularMarketOpen', 'averageDailyVolume3Month',
'averageDailyVolume10Day', 'fiftyTwoWeekLowChange',
'fiftyTwoWeekLowChangePercent', 'fiftyTwoWeekRange',
'fiftyTwoWeekHighChange', 'fiftyTwoWeekHighChangePercent',
'fiftyTwoWeekLow', 'fiftyTwoWeekHigh', 'dividendDate',
'earningsTimestamp', 'earningsTimestampStart', 'earningsTimestampEnd',
'trailingAnnualDividendRate', 'trailingPE',
'trailingAnnualDividendYield', 'marketState', 'epsTrailingTwelveMonths',
'epsForward', 'sharesOutstanding', 'bookValue', 'fiftyDayAverage',
'fiftyDayAverageChange', 'fiftyDayAverageChangePercent',
'twoHundredDayAverage', 'twoHundredDayAverageChange',
'twoHundredDayAverageChangePercent', 'marketCap', 'forwardPE',
'priceToBook', 'sourceInterval', 'exchangeDataDelayedBy', 'tradeable',
'firstTradeDateMilliseconds', 'priceHint', 'exchange', 'shortName',
'longName', 'messageBoardId', 'exchangeTimezoneName',
'exchangeTimezoneShortName', 'gmtOffSetMilliseconds', 'market',
'esgPopulated', 'price']
I would like to retrieve most of the commented fields at the end of the previous code, but I've done this so far:
import pandas_datareader as web
tickers = ["AAPL", "GOOG", "RY", "SAB.MC"]
market_cap_data = web.get_quote_yahoo(tickers)['marketCap']
pe_data = web.get_quote_yahoo(tickers)['trailingPE']
fiftytwo_low_data = web.get_quote_yahoo(tickers)['fiftyTwoWeekLowChangePercent']
for stock, mcap, pe, fiftytwo_low in zip(tickers, market_cap_data, pe_data, fiftytwo_low_data):
print(stock, mcap, pe, fiftytwo_low)
Obviously I could continue with my brute force, but do you know any way to make the code more elegant to retrieve the whole string of fields with column names?
['language', 'region', 'quoteType', 'triggerable', 'quoteSourceName',
'currency', 'preMarketChange', 'preMarketChangePercent',
'preMarketTime', 'preMarketPrice', 'regularMarketChange',
'regularMarketChangePercent', 'regularMarketTime', 'regularMarketPrice',
'regularMarketDayHigh', 'regularMarketDayRange', 'regularMarketDayLow',
'regularMarketVolume', 'regularMarketPreviousClose', 'bid', 'ask',
'bidSize', 'askSize', 'fullExchangeName', 'financialCurrency',
'regularMarketOpen', 'averageDailyVolume3Month',
'averageDailyVolume10Day', 'fiftyTwoWeekLowChange',
'fiftyTwoWeekLowChangePercent', 'fiftyTwoWeekRange',
'fiftyTwoWeekHighChange', 'fiftyTwoWeekHighChangePercent',
'fiftyTwoWeekLow', 'fiftyTwoWeekHigh', 'dividendDate',
'earningsTimestamp', 'earningsTimestampStart', 'earningsTimestampEnd',
'trailingAnnualDividendRate', 'trailingPE',
'trailingAnnualDividendYield', 'marketState', 'epsTrailingTwelveMonths',
'epsForward', 'sharesOutstanding', 'bookValue', 'fiftyDayAverage',
'fiftyDayAverageChange', 'fiftyDayAverageChangePercent',
'twoHundredDayAverage', 'twoHundredDayAverageChange',
'twoHundredDayAverageChangePercent', 'marketCap', 'forwardPE',
'priceToBook', 'sourceInterval', 'exchangeDataDelayedBy', 'tradeable',
'firstTradeDateMilliseconds', 'priceHint', 'exchange', 'shortName',
'longName', 'messageBoardId', 'exchangeTimezoneName',
'exchangeTimezoneShortName', 'gmtOffSetMilliseconds', 'market',
'esgPopulated', 'price']
thanks
|
[
"Using the set, you can get all the items that can be retrieved by the ticker for the initial set, and using the union set, you can also add in a list, so you can get all the item names that have a value in the issue you want to retrieve.\nimport pandas_datareader as web\nimport pandas as pd\n\ntickers = [\"AAPL\", \"GOOG\", \"RY\", \"SAB.MC\"]\n\nnames = set()\nfor t in tickers:\n market_cap_data = web.get_quote_yahoo(t)\n names |= set(market_cap_data.columns.to_list())\nnames\n\n{'ask',\n 'askSize',\n 'averageAnalystRating',\n 'averageDailyVolume10Day',\n 'averageDailyVolume3Month',\n 'bid',\n 'bidSize',\n 'bookValue',\n 'cryptoTradeable',\n 'currency',\n 'customPriceAlertConfidence',\n 'displayName',\n ...\n 'trailingAnnualDividendYield',\n 'trailingPE',\n 'triggerable',\n 'twoHundredDayAverage',\n 'twoHundredDayAverageChange',\n 'twoHundredDayAverageChangePercent',\n 'typeDisp'}\n\n",
"I know this post is pretty old, but I just came across it now. Check out the 'yfinance' library. There's all kinds of stuff available over there!!\nimport pandas_datareader as web\nimport pandas as pd\n \ndf = web.DataReader('AAPL', data_source='yahoo', start='2011-01-01', end='2021-01-12')\ndf.head()\n\nimport yfinance as yf\naapl = yf.Ticker(\"AAPL\")\naapl\n \n \n# get stock info\naapl.info\n \n# get historical market data\nhist = aapl.history(period=\"max\")\n \n# show actions (dividends, splits)\naapl.actions\n \n# show dividends\naapl.dividends\n \n# show splits\naapl.splits\n \n# show financials\naapl.financials\naapl.quarterly_financials\n \n# show major holders\naapl.major_holders\n \n# show institutional holders\naapl.institutional_holders\n \n# show balance sheet\naapl.balance_sheet\naapl.quarterly_balance_sheet\n \n# show cashflow\naapl.cashflow\naapl.quarterly_cashflow\n \n# show earnings\naapl.earnings\naapl.quarterly_earnings\n \n# show sustainability\naapl.sustainability\n \n# show analysts recommendations\naapl.recommendations\n \n# show next event (earnings, etc)\naapl.calendar\n \n# show ISIN code - *experimental*\n# ISIN = International Securities Identification Number\naapl.isin\n \n# show options expirations\naapl.options\n \n# get option chain for specific expiration\nopt = aapl.option_chain('YYYY-MM-DD')\n\nResult:\n{'zip': '95014',\n 'sector': 'Technology',\n 'fullTimeEmployees': 164000,\n 'longBusinessSummary': 'Apple Inc. designs, manufactures, and markets smartphones, personal computers, tablets, wearables, and accessories worldwide. It also sells various related services. In addition, the company offers iPhone, a line of smartphones; Mac, a line of personal computers; iPad, a line of multi-purpose tablets; and wearables, home, and accessories comprising AirPods, Apple TV, Apple Watch, Beats products, and HomePod. Further, it provides AppleCare support and cloud services store services; and operates various platforms, including the App Store that allow customers to discover and download applications and digital content, such as books, music, video, games, and podcasts. Additionally, the company offers various services, such as Apple Arcade, a game subscription service; Apple Fitness+, a personalized fitness service; Apple Music, which offers users a curated listening experience with on-demand radio stations; Apple News+, a subscription news and magazine service; Apple TV+, which offers exclusive original content; Apple Card, a co-branded credit card; and Apple Pay, a cashless payment service, as well as licenses its intellectual property. The company serves consumers, and small and mid-sized businesses; and the education, enterprise, and government markets. It distributes third-party applications for its products through the App Store. The company also sells its products through its retail and online stores, and direct sales force; and third-party cellular network carriers, wholesalers, retailers, and resellers. Apple Inc. 
was incorporated in 1977 and is headquartered in Cupertino, California.',\n 'city': 'Cupertino',\n 'phone': '408 996 1010',\n 'state': 'CA',\n 'country': 'United States',\n 'companyOfficers': [],\n 'website': 'https://www.apple.com',\n 'maxAge': 1,\n 'address1': 'One Apple Park Way',\n 'industry': 'Consumer Electronics',\n 'ebitdaMargins': 0.33105,\n 'profitMargins': 0.2531,\n 'grossMargins': 0.43310001,\n 'operatingCashflow': 122151002112,\n 'revenueGrowth': 0.081,\n 'operatingMargins': 0.30289,\n 'ebitda': 130541002752,\n 'targetLowPrice': 122,\n 'recommendationKey': 'buy',\n 'grossProfits': 170782000000,\n 'freeCashflow': 90215251968,\n 'targetMedianPrice': 180,\n 'currentPrice': 151.29,\n 'earningsGrowth': 0.048,\n 'currentRatio': 0.879,\n 'returnOnAssets': 0.21214001,\n 'numberOfAnalystOpinions': 41,\n 'targetMeanPrice': 178.15,\n 'debtToEquity': 261.446,\n 'returnOnEquity': 1.75459,\n 'targetHighPrice': 214,\n 'totalCash': 48304001024,\n 'totalDebt': 132480000000,\n 'totalRevenue': 394328014848,\n 'totalCashPerShare': 3.036,\n 'financialCurrency': 'USD',\n 'revenuePerShare': 24.317,\n 'quickRatio': 0.709,\n 'recommendationMean': 1.9,\n 'exchange': 'NMS',\n 'shortName': 'Apple Inc.',\n 'longName': 'Apple Inc.',\n 'exchangeTimezoneName': 'America/New_York',\n 'exchangeTimezoneShortName': 'EST',\n 'isEsgPopulated': False,\n 'gmtOffSetMilliseconds': '-18000000',\n 'quoteType': 'EQUITY',\n 'symbol': 'AAPL',\n 'messageBoardId': 'finmb_24937',\n 'market': 'us_market',\n 'annualHoldingsTurnover': None,\n 'enterpriseToRevenue': 6.317,\n 'beta3Year': None,\n 'enterpriseToEbitda': 19.081,\n '52WeekChange': -0.06042725,\n 'morningStarRiskRating': None,\n 'forwardEps': 6.82,\n 'revenueQuarterlyGrowth': None,\n 'sharesOutstanding': 15908100096,\n 'fundInceptionDate': None,\n 'annualReportExpenseRatio': None,\n 'totalAssets': None,\n 'bookValue': 3.178,\n 'sharesShort': 103178670,\n 'sharesPercentSharesOut': 0.0064999997,\n 'fundFamily': None,\n 'lastFiscalYearEnd': 1663977600,\n 'heldPercentInstitutions': 0.60030997,\n 'netIncomeToCommon': 99802996736,\n 'trailingEps': 6.11,\n 'lastDividendValue': 0.23,\n 'SandP52WeekChange': -0.15323704,\n 'priceToBook': 47.60541,\n 'heldPercentInsiders': 0.00071999995,\n 'nextFiscalYearEnd': 1727136000,\n 'yield': None,\n 'mostRecentQuarter': 1663977600,\n 'shortRatio': 1.14,\n 'sharesShortPreviousMonthDate': 1664496000,\n 'floatShares': 15891414476,\n 'beta': 1.246644,\n 'enterpriseValue': 2490915094528,\n 'priceHint': 2,\n 'threeYearAverageReturn': None,\n 'lastSplitDate': 1598832000,\n 'lastSplitFactor': '4:1',\n 'legalType': None,\n 'lastDividendDate': 1667520000,\n 'morningStarOverallRating': None,\n 'earningsQuarterlyGrowth': 0.008,\n 'priceToSalesTrailing12Months': 6.103387,\n 'dateShortInterest': 1667174400,\n 'pegRatio': 2.71,\n 'ytdReturn': None,\n 'forwardPE': 22.183283,\n 'lastCapGain': None,\n 'shortPercentOfFloat': 0.0064999997,\n 'sharesShortPriorMonth': 103251184,\n 'impliedSharesOutstanding': 0,\n 'category': None,\n 'fiveYearAverageReturn': None,\n 'previousClose': 150.72,\n 'regularMarketOpen': 152.305,\n 'twoHundredDayAverage': 155.0841,\n 'trailingAnnualDividendYield': 0.005971337,\n 'payoutRatio': 0.14729999,\n 'volume24Hr': None,\n 'regularMarketDayHigh': 152.57,\n 'navPrice': None,\n 'averageDailyVolume10Day': 84360340,\n 'regularMarketPreviousClose': 150.72,\n 'fiftyDayAverage': 147.0834,\n 'trailingAnnualDividendRate': 0.9,\n 'open': 152.305,\n 'toCurrency': None,\n 'averageVolume10days': 84360340,\n 'expireDate': None,\n 'algorithm': 
None,\n 'dividendRate': 0.92,\n 'exDividendDate': 1667520000,\n 'circulatingSupply': None,\n 'startDate': None,\n 'regularMarketDayLow': 149.97,\n 'currency': 'USD',\n 'trailingPE': 24.761045,\n 'regularMarketVolume': 74496725,\n 'lastMarket': None,\n 'maxSupply': None,\n 'openInterest': None,\n 'marketCap': 2406736461824,\n 'volumeAllCurrencies': None,\n 'strikePrice': None,\n 'averageVolume': 89929545,\n 'dayLow': 149.97,\n 'ask': 150.95,\n 'askSize': 1000,\n 'volume': 74496725,\n 'fiftyTwoWeekHigh': 182.94,\n 'fromCurrency': None,\n 'fiveYearAvgDividendYield': 1,\n 'fiftyTwoWeekLow': 129.04,\n 'bid': 150.82,\n 'tradeable': False,\n 'dividendYield': 0.0061000003,\n 'bidSize': 1100,\n 'dayHigh': 152.57,\n 'coinMarketCapLink': None,\n 'regularMarketPrice': 151.29,\n 'preMarketPrice': None,\n 'logo_url': 'https://logo.clearb\n\nJust pick/choose what you want.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"yahoo_finance"
] |
stackoverflow_0074263159_python_yahoo_finance.txt
|
Q:
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host stackoverflow.com:443 ssl:default [Connect call failed ('151.101.193.69', 443)]
here is my code:
import asyncio
from aiohttp import ClientSession

async def main():
    url = "https://stackoverflow.com/"
    async with ClientSession() as session:
        async with session.get(url) as resp:
            print(resp.status)

asyncio.run(main())
if I run it on my computer, everything works, but if I run it on pythonanywhere, I get this error:
Traceback (most recent call last):
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 936, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore # noqa
File "/usr/lib/python3.8/asyncio/base_events.py", line 1017, in create_connection
raise exceptions[0]
File "/usr/lib/python3.8/asyncio/base_events.py", line 1002, in create_connection
sock = await self._connect_sock(
File "/usr/lib/python3.8/asyncio/base_events.py", line 916, in _connect_sock
await self.sock_connect(sock, address)
File "/usr/lib/python3.8/asyncio/selector_events.py", line 485, in sock_connect
return await fut
File "/usr/lib/python3.8/asyncio/selector_events.py", line 517, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 111] Connect call failed ('151.101.193.69', 443)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "test_c.py", line 39, in <module>
asyncio.run(main())
File "/usr/lib/python3.8/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.8/asyncio/base_events.py", line 608, in run_until_complete
return future.result()
File "test_c.py", line 28, in main
async with session.get(url, timeout=30) as resp: # , headers=headers
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/client.py", line 1012, in __aenter__
self._resp = await self._coro
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/client.py", line 480, in _request
conn = await self._connector.connect(
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 523, in connect
proto = await self._create_connection(req, traces, timeout)
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 858, in _create_connection
_, proto = await self._create_direct_connection(
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 1004, in _create_direct_connection
raise last_exc
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 980, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 943, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host stackoverflow.com:443 ssl:default [Connect call failed ('151.101.193.69', 443)]
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f25a71d1a90>
aiohttp on hosting:
Name: aiohttp
Version: 3.6.2
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author: Nikolay Kim
Author-email: fafhrd91@gmail.com
License: Apache 2
Location: /home/0dminnimda/.local/lib/python3.8/site-packages
Requires: chardet, async-timeout, multidict, yarl, attrs
Required-by:
aiohttp on my PC:
Name: aiohttp
Version: 3.6.2
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author: Nikolay Kim
Author-email: fafhrd91@gmail.com
License: Apache 2
Location: c:\users\asus\appdata\roaming\python\python38\site-packages
Requires: async-timeout, attrs, chardet, yarl, multidict
Required-by:
I am at a loss as to why this happens. I am running both files using Python 3.8.
I also tried other URLs; they have the same problem.
Do I need to add any more details?
A:
first solution
Following help from the forum, I added trust_env=True when creating the client, and now everything works.
Explanation:
Free accounts on PythonAnywhere must use a proxy to connect to the public internet, but by default aiohttp does not use the proxy configured in environment variables (passing trust_env=True makes it read settings such as HTTP_PROXY and HTTPS_PROXY).
Link to aiohttp documentation (look for a parameter called "trust_env")
Here is the new code:
import asyncio
from aiohttp import ClientSession
async def main():
url = "https://stackoverflow.com/"
async with ClientSession(trust_env=True) as session:
async with session.get(url) as resp:
print(resp.status)
asyncio.run(main())
solution if the first didn't help you
The domain you are trying to access must be on PythonAnywhere's whitelist; otherwise you may also get this error.
In this case you need to post a new topic on the PythonAnywhere forum asking for the domain to be added to the whitelist.
If it is an API, you will need to provide a link to its documentation.
A:
If you are using Windows OS with python (3.8 or newer) and aiohttp (3.7.4 or older)
Sometimes the solution for an exception like ... Cannot connect to host <REQUESTED URL>:443 ssl:default [The parameter is incorrect] is:
import asyncio
...
policy = asyncio.WindowsSelectorEventLoopPolicy()
asyncio.set_event_loop_policy(policy)
asyncio.run(main())
And you can check your Python version and OS:
import asyncio
import sys
...
if (sys.platform.startswith('win')
and sys.version_info[0] == 3
and sys.version_info[1] >= 8):
policy = asyncio.WindowsSelectorEventLoopPolicy()
asyncio.set_event_loop_policy(policy)
asyncio.run(main())
Everything is described in more detail in issue 4536.
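For completeness, a minimal self-contained sketch combining the version check with the aiohttp example from the question (the URL is just a placeholder):
import asyncio
import sys
from aiohttp import ClientSession

async def main():
    async with ClientSession() as session:
        async with session.get("https://stackoverflow.com/") as resp:
            print(resp.status)

if sys.platform.startswith('win') and sys.version_info[:2] >= (3, 8):
    # aiohttp <= 3.7.4 does not work with the ProactorEventLoop that
    # Python 3.8 made the default on Windows, so fall back to the
    # selector-based loop
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

asyncio.run(main())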
A:
Set ssl to False when making the request (note that this disables certificate validation, so only use it when you accept that risk):
import aiohttp
conn = aiohttp.TCPConnector()
async with aiohttp.ClientSession(connector=conn) as session:
await session.get('https://example.com', ssl=False)
A:
From docs: https://docs.aiohttp.org/en/stable/client_reference.html, params of coroutine async-with request:
ssl – SSL validation mode. None for default SSL check (ssl.create_default_context() is used), False for skip SSL certificate validation, aiohttp.Fingerprint for fingerprint validation, ssl.SSLContext for custom SSL certificate validation. Supersedes verify_ssl, ssl_context and fingerprint parameters.
New in version 3.0.
import asyncio
from aiohttp import ClientSession
async def main():
url = "https://stackoverflow.com/"
async with ClientSession() as session:
async with session.get(url, ssl=False) as resp:
print(resp.status)
asyncio.run(main())
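If you would rather keep validation instead of disabling it, a hedged alternative is to supply your own SSL context built from the certifi certificate bundle (certifi is assumed to be installed):
import ssl
import certifi

# Validate against certifi's CA bundle instead of skipping validation
ssl_context = ssl.create_default_context(cafile=certifi.where())
# then pass it to the request: session.get(url, ssl=ssl_context)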
A:
In my case the problem was a very large number of simultaneously open connections. To solve it, I passed a limit parameter to the connector:
import aiohttp

# Limit the number of simultaneous connections to a single host
conn = aiohttp.TCPConnector(limit_per_host=5)

async with aiohttp.ClientSession(connector=conn) as session:
    await session.get('https://example.com')
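As an aside, aiohttp's TCPConnector also accepts a total limit, so either knob may help depending on where connections pile up; a minimal sketch using both documented parameters (the specific values here are arbitrary):
import aiohttp

# limit caps the total number of simultaneous connections (default 100);
# limit_per_host caps connections to a single host (default 0 = unlimited)
conn = aiohttp.TCPConnector(limit=50, limit_per_host=5)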
|
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host stackoverflow.com:443 ssl:default [Connect call failed ('151.101.193.69', 443)]
|
here is my code:
import asyncio
from aiohttp import ClientSession
async def main():
url = "https://stackoverflow.com/"
async with ClientSession() as session:
async with session.get(url) as resp:
print(resp.status)
asyncio.run(main())
if I run it on my computer, everything works, but if I run it on pythonanywhere, I get this error:
Traceback (most recent call last):
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 936, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore # noqa
File "/usr/lib/python3.8/asyncio/base_events.py", line 1017, in create_connection
raise exceptions[0]
File "/usr/lib/python3.8/asyncio/base_events.py", line 1002, in create_connection
sock = await self._connect_sock(
File "/usr/lib/python3.8/asyncio/base_events.py", line 916, in _connect_sock
await self.sock_connect(sock, address)
File "/usr/lib/python3.8/asyncio/selector_events.py", line 485, in sock_connect
return await fut
File "/usr/lib/python3.8/asyncio/selector_events.py", line 517, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 111] Connect call failed ('151.101.193.69', 443)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "test_c.py", line 39, in <module>
asyncio.run(main())
File "/usr/lib/python3.8/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.8/asyncio/base_events.py", line 608, in run_until_complete
return future.result()
File "test_c.py", line 28, in main
async with session.get(url, timeout=30) as resp: # , headers=headers
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/client.py", line 1012, in __aenter__
self._resp = await self._coro
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/client.py", line 480, in _request
conn = await self._connector.connect(
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 523, in connect
proto = await self._create_connection(req, traces, timeout)
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 858, in _create_connection
_, proto = await self._create_direct_connection(
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 1004, in _create_direct_connection
raise last_exc
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 980, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 943, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host stackoverflow.com:443 ssl:default [Connect call failed ('151.101.193.69', 443)]
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f25a71d1a90>
aiohttp on hosting:
Name: aiohttp
Version: 3.6.2
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author: Nikolay Kim
Author-email: fafhrd91@gmail.com
License: Apache 2
Location: /home/0dminnimda/.local/lib/python3.8/site-packages
Requires: chardet, async-timeout, multidict, yarl, attrs
Required-by:
aiohttp on my PC:
Name: aiohttp
Version: 3.6.2
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author: Nikolay Kim
Author-email: fafhrd91@gmail.com
License: Apache 2
Location: c:\users\asus\appdata\roaming\python\python38\site-packages
Requires: async-timeout, attrs, chardet, yarl, multidict
Required-by:
I am at a loss as to why they behave differently; I am running both files with Python 3.8.
I also tried other URLs; they have the same problem.
Do I need to add any more details?
|
[
"first solution\nReferring to the help from the forum, I added trust_env = True when creating the client and now everything works.\nExplanation:\nFree accounts on PythonAnywhere must use a proxy to connect to the public internet, but aiohttp, by default, does not connect to a proxy accessible from an environment variable.\nLink to aiohttp documentation (look for a parameter called \"trust_env\")\nHere is the new code:\nimport asyncio\nfrom aiohttp import ClientSession\n\n\nasync def main():\n url = \"https://stackoverflow.com/\"\n\n async with ClientSession(trust_env=True) as session:\n async with session.get(url) as resp:\n print(resp.status)\n\nasyncio.run(main())\n\nsolution if the first didn't help you\nThe domain you are trying to access must be in whitelist, otherwise you may also get this error.\nIn this case you need to post a new topic on the pythonanywhere forum asking to add the domain to the whitelist.\nIf this is an api, then you will need to provide a link to the documentation for this api.\n",
"If you are using Windows OS with python (3.8 or newer) and aiohttp (3.7.4 or older)\nSometimes the solution for an exception like ... Cannot connect to host <REQUESTED URL>:443 ssl:default [The parameter is incorrect] is:\nimport sys\n\n...\n\npolicy = asyncio.WindowsSelectorEventLoopPolicy()\nasyncio.set_event_loop_policy(policy)\n\nasyncio.run(main())\n\nAnd you can check your Python version and OS:\nimport sys\n\n...\n\nif (sys.platform.startswith('win')\n and sys.version_info[0] == 3\n and sys.version_info[1] >= 8):\n policy = asyncio.WindowsSelectorEventLoopPolicy()\n asyncio.set_event_loop_policy(policy)\n\nasyncio.run(main())\n\nhere, in issue 4536, everything is described in more detail.\n",
"set ssl to False when making the request\nimport aiohttp\nconn = aiohttp.TCPConnector()\n\nasync with aiohttp.ClientSession(connector=conn) as session:\n await session.get('https://example.com', ssl=False)\n\n",
"From docs: https://docs.aiohttp.org/en/stable/client_reference.html, params of coroutine async-with request:\nssl – SSL validation mode. None for default SSL check (ssl.create_default_context() is used), False for skip SSL certificate validation, aiohttp.Fingerprint for fingerprint validation, ssl.SSLContext for custom SSL certificate validation. Supersedes verify_ssl, ssl_context and fingerprint parameters.\nNew in version 3.0.\nimport asyncio\nfrom aiohttp import ClientSession\n\n\nasync def main():\n url = \"https://stackoverflow.com/\"\n\n async with ClientSession() as session:\n async with session.get(url, ssl=False) as resp:\n print(resp.status)\n\nasyncio.run(main())\n\n",
"In my case the problem was a very big amount of simultaneously opened connections. To solve the problem I have passed a limit parameter to connector:\nimport aiohttp\nconn = aiohttp.TCPConnector(limit_per_host=5)\n\nasync with aiohttp.ClientSession(connector=conn) as session:\n\n"
] |
[
22,
10,
3,
1,
1
] |
[
"I just solved what could have been a 3 hour problem in 5 mins by changing all https to http. If it's possible, don't use https. I had an issue where another library (playwright) could not use the selector event loop, I would have had to separate process to use aiohttp. Better yet, switch libraries, httpx is an okay alternative.\n"
] |
[
-5
] |
[
"aiohttp",
"python",
"python_3.x",
"python_asyncio",
"pythonanywhere"
] |
stackoverflow_0063347818_aiohttp_python_python_3.x_python_asyncio_pythonanywhere.txt
|
Q:
Keep rows according to condition in Pandas
I am looking for code that finds rows matching a condition and keeps those rows.
In the image example, I wish to keep all the apples with amt1 >= 5 and amt2 < 5. I also want to keep the bananas with amt1 >= 1 and amt2 < 5 (highlighted red in the image). There are many other fruits in the list that I have to filter for (maybe about 10 fruits).
image example
Currently, I am filtering each fruit individually (i.e. creating a dataframe that filters out the red and small apples and another dataframe that filters out the green and big bananas, then using concat to join the dataframes together afterwards). However, this process takes a long time to run because the dataset is huge. I am looking for a faster way (like filtering within the dataframe itself without having to create new dataframes). I also have to use column indexes instead of column names, as the column names change according to the date.
Hopefully what I said makes sense. Would appreciate any help!
A:
I am not quite sure I understand your requirements, because it is unclear how the conditions for the rows to keep are formulated.
One thing you can use to combine multiple criteria for selecting data is the query method of the dataframe:
import pandas as pd
df = pd.DataFrame([
['Apple', 5, 1],
['Apple', 4, 2],
['Orange', 3, 3],
['Banana', 2, 4],
['Banana', 1, 5]],
columns=['Fruits', 'Amt1', 'Amt2'])
df.query('(Fruits == "Apple" & (Amt1 >= 5 & Amt2 < 5)) | (Fruits == "Banana" & (Amt1 >= 1 & Amt2 < 5))')
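Since the question mentions that column names change with the date, here is a minimal sketch of the same filter using column positions instead of names (assuming positions 0, 1 and 2 hold the fruit name and the two amounts):
import pandas as pd

df = pd.DataFrame([
    ['Apple', 5, 1],
    ['Apple', 4, 2],
    ['Orange', 3, 3],
    ['Banana', 2, 4],
    ['Banana', 1, 5]])

# Select columns by position so the filter is independent of column names
fruit, amt1, amt2 = df.iloc[:, 0], df.iloc[:, 1], df.iloc[:, 2]
mask = ((fruit == 'Apple') & (amt1 >= 5) & (amt2 < 5)) | \
       ((fruit == 'Banana') & (amt1 >= 1) & (amt2 < 5))
print(df[mask])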
A:
You might use filter combined with itertuples in the following way:
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4, 5], "y": [10, 20, 30, 40, 50]})

def keep(row):
    # row[0] is the original Index, row[1] is column x, row[2] is column y
    return row[0] >= 2 and row[1] <= 40

df_filtered = pd.DataFrame(filter(keep, df.itertuples())).set_index("Index")
print(df_filtered)
This gives the output:
       x   y
Index
2      3  30
3      4  40
4      5  50
Explanation: keep is a function which should return True for rows to keep and False for rows to jettison. .itertuples() provides an iterable of named tuples, which is fed to filter, which selects the records where keep evaluates to True; those selected rows are used to create a new DataFrame. After that I set the index so that Index corresponds to the original DataFrame. Depending on your use case you might elect not to set the index.
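Since the question stresses performance on a huge dataset, it is worth noting that a vectorized boolean mask is usually much faster than calling a Python function per row; a minimal sketch, assuming the intent is to filter on the x and y columns:
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4, 5], "y": [10, 20, 30, 40, 50]})

# One vectorized comparison per column instead of a per-row Python call
df_filtered = df[(df["x"] >= 2) & (df["y"] <= 40)]
print(df_filtered)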
|
Keep rows according to condition in Pandas
|
I am looking for code that finds rows matching a condition and keeps those rows.
In the image example, I wish to keep all the apples with amt1 >= 5 and amt2 < 5. I also want to keep the bananas with amt1 >= 1 and amt2 < 5 (highlighted red in the image). There are many other fruits in the list that I have to filter for (maybe about 10 fruits).
image example
Currently, I am filtering each fruit individually (i.e. creating a dataframe that filters out the red and small apples and another dataframe that filters out the green and big bananas, then using concat to join the dataframes together afterwards). However, this process takes a long time to run because the dataset is huge. I am looking for a faster way (like filtering within the dataframe itself without having to create new dataframes). I also have to use column indexes instead of column names, as the column names change according to the date.
Hopefully what I said makes sense. Would appreciate any help!
|
[
"I am not quite sure I understand your requirements because I don't understand how the conditions for the rows to keep are formulated.\nOne thing you can use to combine multiple criteria for selecting data is the query method of the dataframe:\nimport pandas as pd\n\ndf = pd.DataFrame([\n ['Apple', 5, 1],\n ['Apple', 4, 2], \n ['Orange', 3, 3], \n ['Banana', 2, 4], \n ['Banana', 1, 5]], \n columns=['Fruits', 'Amt1', 'Amt2'])\n\ndf.query('(Fruits == \"Apple\" & (Amt1 >= 5 & Amt2 < 5)) | (Fruits == \"Banana\" & (Amt1 >= 1 & Amt2 < 5))')\n\n",
"You might use filter combined with itertuples following way\nimport pandas as pd\ndf = pd.DataFrame({\"x\":[1,2,3,4,5],\"y\":[10,20,30,40,50]})\ndef keep(row):\n return row[0] >= 2 and row[1] <= 40\ndf_filtered = pd.DataFrame(filter(keep,df.itertuples())).set_index(\"Index\")\nprint(df_filtered)\n\ngives output\n x y\nIndex \n2 3 30\n3 4 40\n4 5 50\n\nExplanation: keep is function which should return True for rows to keep False for rows to jettison. .itertuples() provides iterable of tuples, which are feed to filter which select records where keep evaluates to True, these selected rows are used to create new DataFrame. After that is done I set index so Index is corresponding to original DataFrame. Depending on your use case you might elect to not set index.\n"
] |
[
0,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074508189_dataframe_pandas_python.txt
|