Dataset schema (column name, type, observed length range):
content             string, length 85 to 101k
title               string, length 0 to 150
question            string, length 15 to 48k
answers             list
answers_scores      list
non_answers         list
non_answers_scores  list
tags                list
name                string, length 35 to 137
Q: Unable to get the text body from articles while web scraping

I'm scraping news articles from the website https://www.scmp.com/. Though I can get the title and author names from each article, I can't get the text body or main content of the articles. I tried two methods, but neither worked.

First method:

options = webdriver.ChromeOptions()
lists = ['disable-popup-blocking']
caps = DesiredCapabilities().CHROME
caps["pageLoadStrategy"] = "normal"
# (driver creation was omitted in the original post)
driver.get('https://www.scmp.com/news/asia/east-asia/article/3199400/japan-asean-hold-summit-tokyo-around-december-2023-japanese-official')
driver.implicitly_wait(5)
bsObj = BeautifulSoup(driver.page_source, 'html.parser')
text_res = bsObj.select('div[class="details__body body"]')
text = ""
for item in text_res:
    if item.get_text() == "":
        continue
    text = text + item.get_text().strip() + "\n"

Second method:

options = webdriver.ChromeOptions()
driver = webdriver.Chrome(executable_path=r"E:\chromedriver\chromedriver.exe", options=options)  # add your chrome path
driver.get('https://www.scmp.com/news/asia/east-asia/article/3199400/japan-asean-hold-summit-tokyo-around-december-2023-japanese-official')
driver.implicitly_wait(5)
a = driver.find_element_by_class_name("details__body body").text
print(a)

Please help me with this. Thank you.

A: There are several reasons why you cannot obtain the text from the article on the South China Morning Post. First, when you open Chrome using Selenium, the article URL displays a GDPR notice. The GDPR notice has to be accepted via a button click. Second, the page also displays a popup to set your news preferences. The news preference popup has to be closed via its X button. Third, trying to extract the text with Selenium alone will require some data cleaning. I would recommend using BeautifulSoup to extract the clean article text from a script tag on the page. Here is some rough code that clicks the GDPR button, closes the news preference popup, and extracts the article text. This code can be refined to fit your needs.
import json
from time import sleep
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Note: find_element_by_* is the Selenium 3 API; Selenium 4 uses
# driver.find_element(By.XPATH, ...) instead.

capabilities = DesiredCapabilities().CHROME

chrome_options = Options()
chrome_options.add_argument("--incognito")
chrome_options.add_argument("--disable-infobars")
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument("--disable-popup-blocking")
chrome_options.add_argument("--ignore-certificate-errors")

# disable the banner "Chrome is being controlled by automated test software"
chrome_options.add_experimental_option("useAutomationExtension", False)
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])

driver = webdriver.Chrome('/usr/local/bin/chromedriver', options=chrome_options)

url_main = 'https://www.scmp.com/news/asia/east-asia/article/3199400/japan-asean-hold-summit-tokyo-around-december-2023-japanese-official'

driver.get(url_main)

driver.implicitly_wait(20)
element_has_bottom_message = WebDriverWait(driver, 120).until(EC.presence_of_element_located((By.CLASS_NAME, "has-bottom-messaging")))
if element_has_bottom_message:
    element_gdpr = WebDriverWait(driver, 120).until(
        EC.presence_of_element_located((By.CLASS_NAME, "gdpr-banner__accept")))
    if element_gdpr:
        gdpr_button = driver.find_element_by_xpath("//*[@class='gdpr-banner__accept']")
        driver.implicitly_wait(20)
        ActionChains(driver).move_to_element(gdpr_button).click(gdpr_button).perform()
        element_my_news_popup = WebDriverWait(driver, 120).until(
            EC.presence_of_element_located((By.CLASS_NAME, "my-news-landing-popup__icon-close")))
        if element_my_news_popup:
            my_news_popup = driver.find_element_by_xpath("//*[@class='my-news-landing-popup__icon-close']")
            ActionChains(driver).move_to_element(my_news_popup).click(my_news_popup).perform()
            driver.implicitly_wait(20)
            raw_soup = BeautifulSoup(driver.page_source, 'lxml')
            json_dictionaries = raw_soup.find_all(name='script', attrs={'type': 'application/ld+json'})
            if len(json_dictionaries) != 0:
                for json_dictionary in json_dictionaries:
                    dictionary = json.loads("".join(json_dictionary.contents), strict=False)
                    article_bool = bool([value for (key, value) in dictionary.items() if key == 'articleBody'])
                    if article_bool:
                        for key, value in dictionary.items():
                            if key == 'articleBody':
                                print(value)

sleep(30)
driver.close()
driver.quit()

OUTPUT

The leaders of Japan and 10-member Asean on Saturday agreed to hold a summit in Tokyo in or around December next year to commemorate the 50th anniversary of their relationship, a Japanese official said. Japanese Prime Minister Fumio Kishida and his counterparts from the Association of Southeast Asian Nations also pledged to deepen their cooperative ties when they met in Phnom Penh, according to the official. Japan has been trying to boost relations with Asean at a time when some of its members are increasingly vigilant against China's assertive territorial claims in the East and South China seas. Why is Japan losing ground in Asean despite being a bigger investor than China?
“Although concerns are growing over opaque and unfair development support, Japan will continue to back sustainable growth” of Southeast Asia, Kishida said at the outset of the meeting, which was open to the media, in a veiled reference to Beijing’s trade and economic practices. Leaders of several nations mentioned the importance of freedom of navigation and overflight in the South China Sea, and of the necessity of adhering to international law, the official said after the meeting. The agreement on the special summit in Tokyo came as the US and China have been intensifying their competition for influence in Southeast Asia. In November last year, China and Asean agreed to upgrade their ties to a “comprehensive strategic partnership” when the two sides held a special online summit commemorating the 30th anniversary of their dialogue, with Chinese President Xi Jinping making a rare appearance. China has stepped up efforts to expand its clout in the region as security tensions with the US escalate in nearby waters. After China’s move, the US in May declared with Asean that they had decided to elevate their relationship to a “comprehensive strategic partnership” as well. At the Asean-Japan gathering, Kishida also reiterated his support for the “Asean Outlook on the Indo-Pacific”, an initiative aimed at maintaining peace, freedom and prosperity in the region, the official said.
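If the JSON-LD metadata happens to be present in the article's initial HTML response, the browser automation can sometimes be skipped entirely. That is not guaranteed here, since the GDPR and preference popups above exist precisely because the page is interactive, so treat the following as a browserless sketch to try first, not a confirmed shortcut:

import json
import requests
from bs4 import BeautifulSoup

url = ('https://www.scmp.com/news/asia/east-asia/article/3199400/'
       'japan-asean-hold-summit-tokyo-around-december-2023-japanese-official')

# A plain GET; the consent flow may still prevent the article from being served
resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}, timeout=30)
soup = BeautifulSoup(resp.text, 'html.parser')

# Walk every schema.org metadata block and print articleBody if present
for script in soup.find_all('script', attrs={'type': 'application/ld+json'}):
    try:
        data = json.loads(script.string or '', strict=False)
    except ValueError:
        continue
    if isinstance(data, dict) and 'articleBody' in data:
        print(data['articleBody'])
        break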
Unable to get the text body from articles while web scraping
I'm scraping news articles from the website https://www.scmp.com/ Though I can get the title or author names from each articles but I can't able to get the text body or main content of the articles. I followed two methods but both didn't work. First method options = webdriver.ChromeOptions() lists = ['disable-popup-blocking'] caps = DesiredCapabilities().CHROME caps["pageLoadStrategy"] = "normal" driver.get('https://www.scmp.com/news/asia/east-asia/article/3199400/japan-asean-hold-summit-tokyo-around-december-2023-japanese-official') driver.implicitly_wait(5) bsObj = BeautifulSoup(driver.page_source, 'html.parser') text_res = bsObj.select('div[class="details__body body"]') text = "" for item in text_res: if item.get_text() == "": continue text = text + item.get_text().strip() + "\n" Second Method options = webdriver.ChromeOptions() driver = webdriver.Chrome(executable_path= r"E:\chromedriver\chromedriver.exe", options=options) #add your chrome path driver.get('https://www.scmp.com/news/asia/east-asia/article/3199400/japan-asean-hold-summit-tokyo-around-december-2023-japanese-official') driver.implicitly_wait(5) a = driver.find_element_by_class_name("details__body body").text print(a) Please help me with this. Thank you.
[ "There are several reasons that you cannot obtain the text from the article on the South China Morning Post.\nFirst when you open Chrome using selenium the URL for the article displays a GDRP notice.\nThe GDRP has to be accepted via a button click.\nSecond the page also displays a popup to set your news preferences.\nThe news preference popup has to be X out.\nThird trying to extract the text using selenium will require some data cleaning. I would recommend using BeautifulSoup to extract the clean article text from a script tag on the page.\nHere is some rough code that clicks the GDRP button, X out the news preference popup and extract the article text.\nThis code can be refined to fit your needs.\nimport json\nfrom time import sleep\nfrom bs4 import BeautifulSoup\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.action_chains import ActionChains\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.desired_capabilities import DesiredCapabilities\n\ncapabilities = DesiredCapabilities().CHROME\n\nchrome_options = Options()\nchrome_options.add_argument(\"--incognito\")\nchrome_options.add_argument(\"--disable-infobars\")\nchrome_options.add_argument(\"--disable-extensions\")\nchrome_options.add_argument(\"--disable-popup-blocking\")\nchrome_options.add_argument(\"--ignore-certificate-errors\")\n\n# disable the banner \"Chrome is being controlled by automated test software\"\nchrome_options.add_experimental_option(\"useAutomationExtension\", False)\nchrome_options.add_experimental_option(\"excludeSwitches\", [\"enable-automation\"])\n\ndriver = webdriver.Chrome('/usr/local/bin/chromedriver', options=chrome_options)\n\nurl_main = 'https://www.scmp.com/news/asia/east-asia/article/3199400/japan-asean-hold-summit-tokyo-around-december-2023-japanese-official'\n\ndriver.get(url_main)\n\ndriver.implicitly_wait(20)\nelement_has_bottom_message = WebDriverWait(driver, 120).until(EC.presence_of_element_located((By.CLASS_NAME, \"has-bottom-messaging\")))\nif element_has_bottom_message:\n element_gdpr = WebDriverWait(driver, 120).until(\n EC.presence_of_element_located((By.CLASS_NAME, \"gdpr-banner__accept\")))\n if element_gdpr:\n gdrp_button = driver.find_element_by_xpath(\"//*[@class='gdpr-banner__accept']\")\n driver.implicitly_wait(20)\n ActionChains(driver).move_to_element(gdrp_button).click(gdrp_button).perform()\n element_my_news_popup = WebDriverWait(driver, 120).until(\n EC.presence_of_element_located((By.CLASS_NAME, \"my-news-landing-popup__icon-close\")))\n if element_my_news_popup:\n my_news_popup = driver.find_element_by_xpath(\"//*[@class='my-news-landing-popup__icon-close']\")\n ActionChains(driver).move_to_element(my_news_popup).click(my_news_popup).perform()\n driver.implicitly_wait(20)\n raw_soup = BeautifulSoup(driver.page_source, 'lxml')\n json_dictionaries = raw_soup.find_all(name='script', attrs={'type': 'application/ld+json'})\n if len(json_dictionaries) != 0:\n for json_dictionary in json_dictionaries:\n dictionary = json.loads(\"\".join(json_dictionary.contents), strict=False)\n article_bool = bool([value for (key, value) in dictionary.items() if key == 'articleBody'])\n if article_bool:\n for key, value in dictionary.items():\n if key == 'articleBody':\n print(value)\n\n\nsleep(30)\ndriver.close()\ndriver.quit()\n\nOUTPUT\nThe leaders of Japan and 10-member Asean 
on Saturday agreed to hold a summit in Tokyo \nin or around December next year to commemorate the 50th anniversary of their relationship, \na Japanese official said. Japanese Prime Minister Fumio Kishida and his counterparts from \nthe Association of Southeast Asian Nations also pledged to deepen their cooperative ties \nwhen they met in Phnom Penh, according to the official. Japan has been trying to boost \nrelations with Asean at a time when some of its members are increasingly vigilant against \nChina ’s assertive territorial claims in the East and South China seas . Why is Japan \nlosing ground in Asean despite being a bigger investor than China? “Although concerns are \ngrowing over opaque and unfair development support, Japan will continue to back sustainable \ngrowth” of Southeast Asia , Kishida said at the outset of the meeting, which was open to \nthe media, in a veiled reference to Beijing’s trade and economic practices. Leaders of \nseveral nations mentioned the importance of freedom of navigation and overflight in the \nSouth China Sea, and of the necessity of adhering to international law, the official said \nafter the meeting. The agreement on the special summit in Tokyo came as the US and China \nhave been intensifying their competition for influence in Southeast Asia. In November last \nyear, China and Asean agreed to upgrade their ties to a “comprehensive strategic \npartnership” when the two sides held a special online summit commemorating the 30th \nanniversary of their dialogue, with Chinese President Xi Jinping making a rare appearance. \nChina has stepped up efforts to expand its clout in the region as security tensions \nwith the US escalate in nearby waters. After China’s move, the US in May declared with \nAsean that they had decided to elevate their relationship to a “comprehensive strategic \npartnership” as well. At the Asean-Japan gathering, Kishida also reiterated his support \nfor the “Asean Outlook on the Indo-Pacific”, an initiative aimed at maintaining peace, \nfreedom and prosperity in the region, the official said.\n\n" ]
[ 2 ]
[]
[]
[ "beautifulsoup", "html", "python", "selenium", "web_scraping" ]
stackoverflow_0074457838_beautifulsoup_html_python_selenium_web_scraping.txt
Q: How can I scale a random matrix to [0,1]?

I have to scale a matrix to [0,1]. So, for each element of the matrix I have to apply this formula:

(Element - min_cols) / (max_cols - min_cols)

min_cols -> array with the minimum of each column of the matrix.
max_cols -> same but with the maximum.

My problem is, I want to calculate the result with this:

result = (Element - min_cols) / (max_cols - min_cols)

In other words, for each element of the matrix I take the difference between that element and the minimum of the element's column, and divide it by the difference between the maximum and the minimum of that column. But when, for example, the value from min_cols is negative and the value from max_cols is also negative, the subtraction effectively turns into a sum. I want to specify that the matrix is:

_mat = np.random.randn(1000, 1000) * 50

A: Use numpy.

Example:

import numpy as np

x = 50*np.random.rand(6,4)

array([[26.7041017 , 46.88118463, 41.24541748, 31.17881807],
       [47.57036124, 16.49040094,  6.62454156, 37.15976348],
       [46.7157895 ,  8.53357717, 39.01399714,  5.14287858],
       [24.36012016,  5.67603151, 40.7697121 , 13.09877845],
       [21.69045322, 12.61989002,  8.74692768, 46.23368735],
       [ 3.9058066 , 35.50845507,  4.66785679,  2.34177134]])

Apply your formula:

np.divide(np.subtract(x, x.min(axis=0)), x.max(axis=0)-x.min(axis=0))

array([[0.52212361, 1.        , 1.        , 0.65700132],
       [1.        , 0.26245187, 0.05349413, 0.79326663],
       [0.98042871, 0.06934923, 0.93899483, 0.06381829],
       [0.46844205, 0.        , 0.98699461, 0.24507946],
       [0.40730168, 0.16851918, 0.1115184 , 1.        ],
       [0.        , 0.7239974 , 0.        , 0.        ]])

The max value of each column is mapped to 1, the min value of each column is mapped to 0, and the intermediate values are linearly mapped between 0 and 1.
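One edge case the one-liner above does not handle: a column whose maximum equals its minimum produces a zero denominator and NaN/inf values. A small sketch that guards against that (mapping constant columns to 0.0 is an assumption, not part of the original question):

import numpy as np

def minmax_scale_columns(x, fill=0.0):
    """Scale each column of x to [0, 1]; constant columns get `fill`."""
    col_min = x.min(axis=0)
    col_rng = x.max(axis=0) - col_min
    safe_rng = np.where(col_rng == 0, 1, col_rng)  # avoid division by zero
    scaled = (x - col_min) / safe_rng
    scaled[:, col_rng == 0] = fill                 # overwrite constant columns
    return scaled

_mat = np.random.randn(1000, 1000) * 50
result = minmax_scale_columns(_mat)
assert result.min() >= 0.0 and result.max() <= 1.0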
How can I scale a random matrix to [0,1]
I have to scale between [0,1] a matrix. So, for each element from matrix i have to do this formula: (Element - min_cols) / (max_cols - min_cols) min_cols -> array with every minimum of each column from the matrix. max_cols -> same but with max My problem is, i want to calculate result with this: result = (Element- min_cols) / (max_cols - min_cols) Or, from each element from the matrix i have to do difference between that element and the minimum from element's column, and do the difference between (maximum element's column and the minimum).* but when i have for example the value from min_cols negative and the value from max_cols also negative, it results the sum between both. I want to specify that the matrix is: _mat = np.random.randn(1000, 1000) * 50
[ "Use numpy\nExample\nimport numpy as np\n\nx = 50*np.random.rand(6,4)\n\narray([[26.7041017 , 46.88118463, 41.24541748, 31.17881807],\n [47.57036124, 16.49040094, 6.62454156, 37.15976348],\n [46.7157895 , 8.53357717, 39.01399714, 5.14287858],\n [24.36012016, 5.67603151, 40.7697121 , 13.09877845],\n [21.69045322, 12.61989002, 8.74692768, 46.23368735],\n [ 3.9058066 , 35.50845507, 4.66785679, 2.34177134]])\n\nApply your formula\nnp.divide(np.subtract(x, x.min(axis=0)), x.max(axis=0)-x.min(axis=0))\n\narray([[0.52212361, 1. , 1. , 0.65700132],\n [1. , 0.26245187, 0.05349413, 0.79326663],\n [0.98042871, 0.06934923, 0.93899483, 0.06381829],\n [0.46844205, 0. , 0.98699461, 0.24507946],\n [0.40730168, 0.16851918, 0.1115184 , 1. ],\n [0. , 0.7239974 , 0. , 0. ]])\n\nThe max value of each column is mapped to 1, the min value of each column is mapped to 0 an the intermediate values have are linearly mapped between 0 and 1\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074464157_python.txt
Q: Python fpdf add dashed line for Index page

I want to create an index page. How can I add a dashed line between cells (name and page) when I don't know the width? There is the dashed_line method, but to use it I need to specify the width, and it sits between two cells where I don't know the size of the first one...

A: This was my partial solution. self.__index is a Dict<string, int>.

for item in self.__index.items():
    if item[0] and not item[0].isspace() and item[0] != "Index":
        # get width of title
        widthl = self.get_string_width(str(item[0])) + 3
        # get width of page number
        widthr = self.get_string_width(str(item[1])) + 3

        # not sure why I need to add + 3 at get_string_width,
        # but that makes the width correct in all cases I tested

        # adds cell with title
        self.cell(widthl, 5, txt=str(item[0]), border=0)

        # adds dashed line
        self.dashed_line(self.x - 1,
                         self.y + 4,
                         self.w - self.r_margin - widthr,
                         self.y + 4)

        # adds page number (width 100% text aligned right)
        self.cell(0, 5, txt=str(item[1]), border=0, align="R")

        # Go to next line
        self.ln()

Result:

Please read this answer to learn how I added the index at the correct location.
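For context, the loop above runs inside an FPDF subclass that carries the index as an attribute. A minimal self-contained sketch of the same idea, written against fpdf2's API, with the class name and index contents invented for illustration (older PyFPDF may need pdf.output(name, 'F')):

from fpdf import FPDF

class ReportPDF(FPDF):
    def add_index(self, index):
        """index: dict mapping entry title -> page number."""
        self.add_page()
        self.set_font('helvetica', size=12)
        for title, page in index.items():
            width_left = self.get_string_width(title) + 3
            width_right = self.get_string_width(str(page)) + 3
            self.cell(width_left, 5, txt=title, border=0)
            # dashed filler from the end of the title to just before the number
            self.dashed_line(self.x - 1, self.y + 4,
                             self.w - self.r_margin - width_right, self.y + 4)
            self.cell(0, 5, txt=str(page), border=0, align='R')
            self.ln()

pdf = ReportPDF()
pdf.add_index({'Introduction': 2, 'Results': 5, 'Appendix': 9})
pdf.output('index_demo.pdf')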
Python fpdf add dashed line for Index page
I want to create an Index page. How can I add a dashed line between cells (name and page) that I don't know the width of? There is the dashed line method, but to use it I need to specify the width and place it between two cells where the first one I don't know the size...
[ "This was my partial solution.\nself.__index is a Dict<string, int>\nfor item in self.__index.items():\n if item[0] and not item[0].isspace() and item[0] != \"Index\":\n # get with of title\n widthl = self.get_string_width(str(item[0])) + 3\n # get width of page number\n widthr = self.get_string_width(str(item[1])) + 3\n\n # not sure why I need to add + 3 at get_string_width,\n # but that makes the width correct in all cases I tested \n\n # adds cell with title\n self.cell(widthl,\n 5,\n txt=str(item[0]),\n border=0)\n\n # adds dashed line\n self.dashed_line(self.x - 1,\n self.y + 4,\n self.w - self.r_margin - widthr,\n self.y + 4)\n\n # adds page number (width 100% text aligned right)\n self.cell(0,\n 5,\n txt=str(item[1]),\n border=0,\n align=\"R\")\n\n # Go to next line\n self.ln()\n\nResult:\n\nPlease read this answer to learn how I added the index at the correct location\n" ]
[ 0 ]
[]
[]
[ "fpdf", "python" ]
stackoverflow_0074435849_fpdf_python.txt
Q: Are user provided translations for user provided templates possible with the django template engine?

In my webapp, I'd like to allow users who want to deploy an instance to write their own templates. Specifically, I would like to include a template for a data protection declaration using the include tag and have this point to a location which users can define in their settings. However, this would not be translatable, as all translated strings have to be in django.po and that file is in version control. Is there a way to extend django.po, e.g. use an include statement to point it to a second, user-generated translations file, similar to how I can include templates within other templates?

A: Not entirely sure if this is possible, but your best bet is probably to use some other mechanism for translation. For example, you could create a template tag user_translation and make it fetch the translation from the database or settings.
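A rough sketch of what such a template tag could look like. The USER_TRANSLATIONS setting, its shape, and the tag name are all invented for illustration; the same lookup could just as well hit a database table:

# yourapp/templatetags/user_i18n.py
from django import template
from django.conf import settings
from django.utils.translation import get_language

register = template.Library()

@register.simple_tag
def user_translation(key):
    """Look up a deployment-provided translation for the active language."""
    # Hypothetical setting, e.g. {'de': {'privacy_title': 'Datenschutz'}}
    translations = getattr(settings, 'USER_TRANSLATIONS', {})
    lang = get_language() or settings.LANGUAGE_CODE
    return translations.get(lang, {}).get(key, key)

In a user-provided template this would then be used as {% load user_i18n %} followed by {% user_translation "privacy_title" %}.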
Are user provided translations for user provided templates possible with the django template engine?
In my webapp, I'd like to allow users who want to deploy an instance to write their own templates. Specifically, I would like to include a template for a data protection declaration using the include tag and have this point to a location which users can define in their settings. However, this would not be translatable, as all translated strings have to be in django.po and that file is in version control. Is there a way to extend django.po, e.g. use an include statement to point it to a second, user generated translations file, similar to how I can include templates within other templates?
[ "Not entirely sure if this is possible, but you best bet is probably to use some other mechanism for translation. For example, you could create a template-tag user_translation and make it fetch the translation from the database or settings.\n" ]
[ 0 ]
[]
[]
[ "django", "django_templates", "python", "translation" ]
stackoverflow_0073831490_django_django_templates_python_translation.txt
Q: LLDP module in Scapy produces malformed packets

I am using the scapy.contrib.lldp library to craft an LLDP packet with the following fields:

Chassis ID (l1)
Port ID (l2)
Time to live (l3)
System name (l4)
Generic Organisation Specific - custom data 1 (l5)
Generic Organisation Specific - custom data 2 (l6)
End of LLDP (l7)

The options for each field come from a csv file imported as a DataFrame, and I use the library class for each field. The problem I have is that after I craft the packet (p=ethernet/l1/l2/l3/l4/l5/l6/l7) the l6 field has twice the number of bytes it is supposed to have, based on the read data. I also tried setting a fixed value, but the problem persists. Below is a sample of the packet in Wireshark (OK packet and malformed packet), the DataFrame, and the relevant code.

Layer                              Field        Value
Ethernet                           dst          01:23:00:00:00:01
Ethernet                           src          32:cb:cd:7b:5a:47
Ethernet                           type         35020
LLDPDUChassisID                    _type        1
LLDPDUChassisID                    _length      7
LLDPDUChassisID                    subtype      4
LLDPDUChassisID                    family       None
LLDPDUChassisID                    id           00:00:00:00:00:01
LLDPDUPortID                       _type        2
LLDPDUPortID                       _length      2
LLDPDUPortID                       subtype      7
LLDPDUPortID                       family       None
LLDPDUPortID                       id           1
LLDPDUTimeToLive                   _type        3
LLDPDUTimeToLive                   _length      2
LLDPDUTimeToLive                   ttl          4919
LLDPDUSystemName                   _type        5
LLDPDUSystemName                   _length      10
LLDPDUSystemName                   system_name  openflow:1
LLDPDUGenericOrganisationSpecific  _type        127
LLDPDUGenericOrganisationSpecific  _length      16
LLDPDUGenericOrganisationSpecific  org_code     9953
LLDPDUGenericOrganisationSpecific  subtype      0
LLDPDUGenericOrganisationSpecific  data         openflow:1:1
LLDPDUGenericOrganisationSpecific  _type        127
LLDPDUGenericOrganisationSpecific  _length      20
LLDPDUGenericOrganisationSpecific  org_code     9953
LLDPDUGenericOrganisationSpecific  subtype      1
LLDPDUGenericOrganisationSpecific  data         b'`\xafE\x16\t\xa0#5\x02\x7f\xd5\p\xf7\x11A'
LLDPDUEndOfLLDPDU                  _type        0
LLDPDUEndOfLLDPDU                  _length      0

def getlldppack(host_2, ifa):
    lim = 1
    log = "/root/log.log"
    file = "/root/dic_" + str(host_2) + "_" + str(lim) + ".csv"
    while lim < 5:
        try:
            lldp1 = pd.read_csv(file)
        except:
            with open(log, 'a') as lf:
                lf.write("error when reading the packet " + file + " for count " + str(lim) + "\n")
            time.sleep(8)
        else:
            lldp1 = lldp1.iloc[:, 1:]
            e1 = lldp1.loc[(lldp1['Layer']=='Ethernet')&(lldp1['Field']=='dst')].iloc[0,2]
            e2 = lldp1.loc[(lldp1['Layer']=='Ethernet')&(lldp1['Field']=='src')].iloc[0,2]
            e3 = int(lldp1.loc[(lldp1['Layer']=='Ethernet')&(lldp1['Field']=='type')].iloc[0,2])
            e = Ether(dst=e1, src=e2, type=e3)
            a1 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='_type')].iloc[0,2])
            a2 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='_length')].iloc[0,2])
            a3 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='subtype')].iloc[0,2])
            a4 = lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='family')].iloc[0,2]
            a5 = lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='id')].iloc[0,2]
            b1 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='_type')].iloc[0,2])
            b2 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='_length')].iloc[0,2])
            b3 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='subtype')].iloc[0,2])
            b4 = lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='family')].iloc[0,2]
            b5 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='id')].iloc[0,2])
            c1 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUTimeToLive')&(lldp1['Field']=='_type')].iloc[0,2])
            c2 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUTimeToLive')&(lldp1['Field']=='_length')].iloc[0,2])
            c3 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUTimeToLive')&(lldp1['Field']=='ttl')].iloc[0,2])
            d1 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUSystemName')&(lldp1['Field']=='_type')].iloc[0,2])
            d2 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUSystemName')&(lldp1['Field']=='_length')].iloc[0,2])
            d3 = lldp1.loc[(lldp1['Layer']=='LLDPDUSystemName')&(lldp1['Field']=='system_name')].iloc[0,2]
            e1 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_type')].iloc[0,2])
            e2 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_length')].iloc[0,2])
            e3 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='org_code')].iloc[0,2])
            e4 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='subtype')].iloc[0,2])
            e5 = lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='data')].iloc[0,2]
            f1 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_type')].iloc[1,2])
            f2 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_length')].iloc[1,2])
            f3 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='org_code')].iloc[1,2])
            f4 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='subtype')].iloc[1,2])
            f5 = lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='data')].iloc[1,2]
            g1 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUEndOfLLDPDU')&(lldp1['Field']=='_type')].iloc[0,2])
            g2 = int(lldp1.loc[(lldp1['Layer']=='LLDPDUEndOfLLDPDU')&(lldp1['Field']=='_length')].iloc[0,2])
            l1 = LLDPDUChassisID(_type=a1, _length=a2, subtype=a3, family=a4, id=a5)
            l2 = LLDPDUPortID(_type=b1, _length=b2, subtype=b3, family=b4, id=str(b5))
            l3 = LLDPDUTimeToLive(_type=c1, _length=c2, ttl=c3)
            auxo = d3
            l4 = LLDPDUSystemName(_type=d1, _length=d2, system_name=auxo)
            auxo = e5
            l5 = LLDPDUGenericOrganisationSpecific(_type=e1, _length=e2, org_code=e3, subtype=e4, data=auxo)
            auxa = f5[2:-1]
            l6 = LLDPDUGenericOrganisationSpecific(_type=f1, _length=f2, org_code=f3, subtype=f4, data=auxa)
            l7 = LLDPDUEndOfLLDPDU(_type=0, _length=0)
            lldpu_layer = LLDPDU()
            lldpu_layer = l1/l2/l3/l4/l5/l6/l7
            pack = e/lldpu_layer
            flag = False
            sendp(pack, count=1, iface=ifa)
            flag = True
            lim = lim + 1
            with open(log, 'a') as lf:
                lf.write('read packet ' + file + "\n")

I tried changing the data types and also fixed the data in the "data" option of LLDPDUGenericOrganisationSpecific, but it did not work. I hope to get a packet with the right length, so it reproduces the non-crafted packet exactly.

A: The problem was the encoding: it had been wrong ever since I wrote the data into the data frame. The solution was to encode with base64 BEFORE saving the information in the data frame; I used this explanation: Convert byte[] to base64 and ASCII in Python. That changed my data from b'`\xafE\x16\t\xa0#5\x02\x7f\xd5\p\xf7\x11A' to b'YK9FFgmgIzUCf9VccPcRQQ=='. Then, when I had to put it in the field, I removed the b'' characters and then did the decoding, as said in the link.
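A minimal sketch of the round trip the answer describes, with an illustrative payload and column names (the exact DataFrame layout comes from the question's csv):

import base64
import pandas as pd

raw = b'\x60\xafE\x16\t\xa0#5\x02\x7f\xd5\\p\xf7\x11A'  # illustrative bytes

# Encode BEFORE writing to the DataFrame, so the csv round trip is lossless
df = pd.DataFrame({'Field': ['data'],
                   'Value': [base64.b64encode(raw).decode('ascii')]})
df.to_csv('lldp_field.csv', index=False)

# Decode when building the LLDP field, recovering the original bytes exactly
restored = base64.b64decode(pd.read_csv('lldp_field.csv')['Value'].iloc[0])
assert restored == raw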
LLDP module in Scapy produces malformed packets
I am using scapy.contrib.lldp library to craft an LLDP packet, with the following fields: Chassis ID (l1) Port ID (l2) Time to live (l3) System name (l4) Generic Organisation Specific - custom data 1 (l5) Generic Organisation Specific - custom data 2 (l6) End of LLDP (l7) The options for each field comes from a csv file imported as DataFrame and I use the library class for each field. The problem I have is that after I craft the packet (p=ethernet/l1/l2/l3/l4/l5/l6/l7) the l6 field has the double amount of bytes it is supposed to have, from the read data. I also tried to set a fixed value but the problem persists. Below is a sample of the packet in wireshark (ok packet and malformed packet), the DataFrame and relevant code. Layer  Field  Value Ethernet  dst  01:23:00:00:00:01 Ethernet  src  32:cb:cd:7b:5a:47 Ethernet type  35020 LLDPDUChassisID  _type  1 LLDPDUChassisID  _length  7 LLDPDUChassisID  subtype   4 LLDPDUChassisID family  None LLDPDUChassisID id  00:00:00:00:00:01 LLDPDUPortID  _type   2 LLDPDUPortID  _length   2 LLDPDUPortID  subtype  7 LLDPDUPortID family  None LLDPDUPortID id  1 LLDPDUTimeToLive   _type  3 LLDPDUTimeToLive  _length  2 LLDPDUTimeToLive  ttl  4919 LLDPDUSystemName  _type  5 LLDPDUSystemName  _length  10 LLDPDUSystemName  system_name  openflow:1 LLDPDUGenericOrganisationSpecific  _type  127 LLDPDUGenericOrganisationSpecific  _length  16 LLDPDUGenericOrganisationSpecific org_code  9953 LLDPDUGenericOrganisationSpecific  subtype  0 LLDPDUGenericOrganisationSpecific data  openflow:1:1 LLDPDUGenericOrganisationSpecific  _type  127 LLDPDUGenericOrganisationSpecific  _length  20 LLDPDUGenericOrganisationSpecific org_code  9953 LLDPDUGenericOrganisationSpecific  subtype  1 LLDPDUGenericOrganisationSpecific data  b'`\xafE\x16\t\xa0#5\x02\x7f\xd5\p\xf7\x11A' LLDPDUEndOfLLDPDU  _type  0 LLDPDUEndOfLLDPDU  _length  0 def getlldppack(host_2,ifa): lim = 1 log = "/root/log.log" file = "/root/dic_"+str(host_2)+"_"+str(lim)+".csv" while lim<5: try: lldp1 = pd.read_csv(file) except: with open(log,'a') as lf: lf.write("error when reading the packet "+file+" for count "+str(lim)+"\n") time.sleep(8) else: lldp1 = lldp1.iloc[: , 1:] e1=lldp1.loc[(lldp1['Layer']=='Ethernet')&(lldp1['Field']=='dst')].iloc[0,2] e2=lldp1.loc[(lldp1['Layer']=='Ethernet')&(lldp1['Field']=='src')].iloc[0,2] e3=int(lldp1.loc[(lldp1['Layer']=='Ethernet')&(lldp1['Field']=='type')].iloc[0,2]) e = Ether(dst=e1, src=e2, type=e3) a1=int(lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='_type')].iloc[0,2]) a2=int(lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='_length')].iloc[0,2]) a3=int(lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='subtype')].iloc[0,2]) a4=lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='family')].iloc[0,2] a5=lldp1.loc[(lldp1['Layer']=='LLDPDUChassisID')&(lldp1['Field']=='id')].iloc[0,2] b1=int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='_type')].iloc[0,2]) b2=int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='_length')].iloc[0,2]) b3=int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='subtype')].iloc[0,2]) b4=lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='family')].iloc[0,2] b5=int(lldp1.loc[(lldp1['Layer']=='LLDPDUPortID')&(lldp1['Field']=='id')].iloc[0,2]) c1=int(lldp1.loc[(lldp1['Layer']=='LLDPDUTimeToLive')&(lldp1['Field']=='_type')].iloc[0,2]) c2=int(lldp1.loc[(lldp1['Layer']=='LLDPDUTimeToLive')&(lldp1['Field']=='_length')].iloc[0,2]) 
c3=int(lldp1.loc[(lldp1['Layer']=='LLDPDUTimeToLive')&(lldp1['Field']=='ttl')].iloc[0,2]) d1=int(lldp1.loc[(lldp1['Layer']=='LLDPDUSystemName')&(lldp1['Field']=='_type')].iloc[0,2]) d2=int(lldp1.loc[(lldp1['Layer']=='LLDPDUSystemName')&(lldp1['Field']=='_length')].iloc[0,2]) d3=lldp1.loc[(lldp1['Layer']=='LLDPDUSystemName')&(lldp1['Field']=='system_name')].iloc[0,2] e1=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_type')].iloc[0,2]) e2=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_length')].iloc[0,2]) e3=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='org_code')].iloc[0,2]) e4=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='subtype')].iloc[0,2]) e5=lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='data')].iloc[0,2] f1=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_type')].iloc[1,2]) f2=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='_length')].iloc[1,2]) f3=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='org_code')].iloc[1,2]) f4=int(lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='subtype')].iloc[1,2]) f5=lldp1.loc[(lldp1['Layer']=='LLDPDUGenericOrganisationSpecific')&(lldp1['Field']=='data')].iloc[1,2] g1=int(lldp1.loc[(lldp1['Layer']=='LLDPDUEndOfLLDPDU')&(lldp1['Field']=='_type')].iloc[0,2]) g2=int(lldp1.loc[(lldp1['Layer']=='LLDPDUEndOfLLDPDU')&(lldp1['Field']=='_length')].iloc[0,2]) l1 = LLDPDUChassisID(_type=a1,_length=a2,subtype=a3,family=a4,id=a5) l2 = LLDPDUPortID(_type=b1,_length=b2,subtype=b3,family=b4,id=str(b5)) l3 = LLDPDUTimeToLive(_type=c1,_length=c2,ttl=c3) auxo=d3 l4 = LLDPDUSystemName(_type=d1,_length=d2,system_name=auxo) auxo=e5 l5 = LLDPDUGenericOrganisationSpecific(_type=e1,_length=e2,org_code=e3,subtype=e4,data=auxo) auxa=f5[2:-1] l6 = LLDPDUGenericOrganisationSpecific(_type=f1,_length=f2,org_code=f3,subtype=f4,data=auxa) l7 = LLDPDUEndOfLLDPDU(_type=0,_length=0) lldpu_layer = LLDPDU() lldpu_layer = l1/l2/l3/l4/l5/l6/l7 pack = e/lldpu_layer flag = False sendp(pack,count=1, iface=ifa) flag = True lim = lim +1 with open(log,'a') as lf: lf.write('read packet '+file+"\n") I tried changing the data types, also fixed the data in the option "data" of LLDPDUGenericOrganisationSpecific, but it did not work. I hope I can have a packet with the right length so it reproduces exactly the non-crafted packet.
[ "The problem was the encoding, it was wrong since I wrote the data in the data frame.\nThe solution was to encode with base64 BEFORE saving the information in the data frame, I used this explanation: Convert byte[] to base64 and ASCII in Python\nThat changed my data from b'`\\xafE\\x16\\t\\xa0#5\\x02\\x7f\\xd5\\p\\xf7\\x11A' to b'YK9FFgmgIzUCf9VccPcRQQ=='.\nThen, when I had to put it in the field, I removed the b'' characters and then did the decoding, as said in the link.\n" ]
[ 0 ]
[]
[]
[ "ethernet", "python", "scapy" ]
stackoverflow_0074294723_ethernet_python_scapy.txt
Q: OR-Tools - Nurse scheduling - prevent shift gaps with binary constraints

I am using OR-Tools to solve a problem similar to the Nurse Scheduling problem. The difference in my case is that when I schedule a "Nurse" for a shift, they must then work consecutive days (i.e., there can be no gaps between days worked). Most of the similar questions point to this code. I have attempted to implement the answer adapted from there. However, I am getting output solutions which do not respect the constraints. The logic I was trying to follow is that I want to forbid patterns that have gaps. For example:

[1,0,1]
[1,0,0,1]
[1,0,0,0,1]

Below is an example of my code:

# Modified from the code linked above:
def negated_bounded_span(works, start, length):
    sequence = []
    # Left border
    sequence.append(works[start].Not())
    # Middle
    for i in range(1, length+1):
        sequence.append(works[start + i])
    # Right border
    sequence.append(works[start + length + 1].Not())
    return sequence

for n in range(num_nurses):
    # nurse_days[(n,d)] is 1 if nurse n works on day d
    nrses = [nurse_days[(n, d)] for d in range(5)]
    for length in range(1, 4):
        for start in range(5 - length - 1):
            model.AddBoolOr(negated_bounded_span(nrses, start, length))

A modified excerpt of what the output of the above would look like is the following:

['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), 'Not(nurse_days_n0d2)']
['Not(nurse_days_n0d1)', nurse_days_n0d2(0..1), 'Not(nurse_days_n0d3)']
['Not(nurse_days_n0d2)', nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)']
['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), nurse_days_n0d2(0..1), 'Not(nurse_days_n0d3)']
['Not(nurse_days_n0d1)', nurse_days_n0d2(0..1), nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)']
['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), nurse_days_n0d2(0..1), nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)']

Thanks for your help in advance. Similar questions reviewed: [1], [2], [3].

A: I don't use/know the syntax of or-tools, but you can probably construct a little boolean logic with constraints to do this.

Let's say we introduce a binary variable to annotate the day d on which nurse n starts working:

s[n, d] ∈ {0, 1}

And to enforce only one sequence of days worked, we need to constrain each nurse to at most one start:

∑ s[n, d] over all d <= 1 for all n ∈ N

Then we know that for any particular day d, in order for nurse n to be working, they either have to start on that day or be working the day prior, right? That's it... So,

working[n, d] <= s[n, d] + working[n, d-1] for all n ∈ N, d ∈ {d: d ≠ d_0}

The constraint for d_0 is left for the interested coder. ;)

A: This can be implemented as follows. Introduce variables for each employee and day:

worksOnDay[e, d] = true if the employee e works any shift on day d.
workStarted[e, d] = true if the employee e has started working on day d or earlier.
okToWork[e, d] = true if it is OK for employee e to work on day d.

Constrain worksOnDay[e, d] to be true if any shift is taken on that day, i.e. the boolean OR of the shifts for that day, represented using AddMaxEquality() in OR-Tools. AddMaxEquality() effectively constrains the target variable to be the OR of the operands.

Constrain workStarted[e, d] to be true if it is true on the previous day d-1 or if the day is worked. (On the first day, only if the day is worked.)

Constrain okToWork[e, d] to be true if the work had not already been started by the previous day, or if the previous day is worked. (On the first two days, work is always OK as there can't have been any gap.) In other words, if the work had already been started, then it is only OK to work if the previous day is also worked. The expression in pseudo-code would be (not workStarted[e, d - 1]) or (worksOnDay[e, d - 1]), but since OR-Tools doesn't directly allow such Boolean operators, in the code we have to introduce a helper variable constrained to be workStarted[e, d - 1].Not() and use it in the disjunction.

Finally, prevent work on days when it is not allowed by adding the implication worksOnDay[e, d] implies okToWork[e, d]. This constraint will work in both directions and ensure that if okToWork[e, d] is false, then worksOnDay[e, d] will also be false, since otherwise the constraint is violated.

I'm sorry, I work in C# and don't have a working Python installation, but it should be easy enough to code the equivalent constraints in Python. Here's the code:

var model = new CpModel();

IntVar[,,] work = new IntVar[numEmployees, numShifts, numDays];
IntVar[,] worksOnDay = new IntVar[numEmployees, numDays];
IntVar[,] workStarted = new IntVar[numEmployees, numDays];
IntVar[,] okToWork = new IntVar[numEmployees, numDays];

foreach (int e in Range(numEmployees))
{
    foreach (int s in Range(numShifts))
    {
        foreach (int d in Range(numDays))
        {
            work[e, s, d] = model.NewBoolVar($"work{e}_{s}_{d}");
        }
    }
}

for (int e = 0; e < numEmployees; e++)
{
    for (int d = 0; d < numDays; d++)
    {
        worksOnDay[e, d] = model.NewBoolVar($"WorksOnDay{e}_{d}");
        workStarted[e, d] = model.NewBoolVar($"WorkStarted{e}_{d}");
        okToWork[e, d] = model.NewBoolVar($"OkToWork{e}_{d}");
    }
}

// WorksOnDay is true if any shift is taken on that day
for (int e = 0; e < numEmployees; e++)
{
    for (int d = 0; d < numDays; d++)
    {
        IEnumerable<IntVar> shiftsOnDay = (from int s in Range(numShifts) select work[e, s, d]);
        model.AddMaxEquality(worksOnDay[e, d], shiftsOnDay);
    }
}

// On the first day, WorkStarted is true if that day is worked
for (int e = 0; e < numEmployees; e++)
{
    model.Add(workStarted[e, 0] == worksOnDay[e, 0]);
}

// On subsequent days, WorkStarted is true if the day is worked, or if the work had been started on the day before
for (int e = 0; e < numEmployees; e++)
{
    for (int d = 1; d < numDays; d++)
    {
        model.AddMaxEquality(workStarted[e, d], new List<IntVar>() { workStarted[e, d - 1], worksOnDay[e, d] });
    }
}

// On the first and second days, there cannot have been a gap, work is always OK
for (int e = 0; e < numEmployees; e++)
{
    model.Add(okToWork[e, 0] == 1);
    model.Add(okToWork[e, 1] == 1);
}

// For the third day and beyond, work is OK if the work had not been started by the previous day, or if the previous day is worked.
for (int e = 0; e < numEmployees; e++)
{
    for (int d = 2; d < numDays; d++)
    {
        IntVar workNotStartedYesterday = model.NewBoolVar("WorkNotStarted");
        model.Add(workNotStartedYesterday == (LinearExpr)workStarted[e, d - 1].Not());
        model.AddMaxEquality(okToWork[e, d], new List<IntVar>() { workNotStartedYesterday, worksOnDay[e, d - 1] });
    }
}

// Prevent work on days that it is not allowed
for (int e = 0; e < numEmployees; e++)
{
    for (int d = 0; d < numDays; d++)
    {
        // Working on a day implies that it is OK to work on that day.
        // Stated otherwise, either it is ok to work on the day, or it is not worked on the day.
        model.AddImplication(worksOnDay[e, d], okToWork[e, d]);
    }
}
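For readers working in Python, here is a rough, untested translation of the same constraints with OR-Tools' CP-SAT; the problem sizes are illustrative and demand/assignment constraints are omitted:

from ortools.sat.python import cp_model

num_employees, num_shifts, num_days = 4, 3, 7  # illustrative sizes
model = cp_model.CpModel()

work = {(e, s, d): model.NewBoolVar(f'work_{e}_{s}_{d}')
        for e in range(num_employees)
        for s in range(num_shifts)
        for d in range(num_days)}

works_on_day, work_started, ok_to_work = {}, {}, {}
for e in range(num_employees):
    for d in range(num_days):
        works_on_day[e, d] = model.NewBoolVar(f'works_{e}_{d}')
        work_started[e, d] = model.NewBoolVar(f'started_{e}_{d}')
        ok_to_work[e, d] = model.NewBoolVar(f'ok_{e}_{d}')
        # works_on_day is the OR over all shifts of that day
        model.AddMaxEquality(works_on_day[e, d],
                             [work[e, s, d] for s in range(num_shifts)])

for e in range(num_employees):
    model.Add(work_started[e, 0] == works_on_day[e, 0])
    for d in range(1, num_days):
        model.AddMaxEquality(work_started[e, d],
                             [work_started[e, d - 1], works_on_day[e, d]])
    # No gap is possible on the first two days
    model.Add(ok_to_work[e, 0] == 1)
    model.Add(ok_to_work[e, 1] == 1)
    for d in range(2, num_days):
        not_started = model.NewBoolVar(f'not_started_{e}_{d}')
        model.Add(not_started == 1 - work_started[e, d - 1])
        model.AddMaxEquality(ok_to_work[e, d],
                             [not_started, works_on_day[e, d - 1]])
    for d in range(num_days):
        # working a day implies the day is allowed, which forbids gaps
        model.AddImplication(works_on_day[e, d], ok_to_work[e, d])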
OR-Tools - Nurse scheduling - prevent shift gaps with binary constraints
I am using OR-Tools to solve a problem similar to the Nurse Scheduling problem. The difference in my case is that when I schedule a "Nurse" for a shift, they must then work consecutive days (i.e., there can be no gaps between days worked). Most of the similar questions point to this code. I have attempted to implement the answer adapted from there. However, I am getting output solutions which do not respecting the constraints. The logic I was trying to follow is that I want to forbid patterns that have gaps. For example: [1,0,1] [1,0,0,1] [1,0,0,0,1] Below is an example of my code where for # Modified from the code linked above: def negated_bounded_span(works, start, length): sequence = [] # Left border sequence.append(works[start].Not()) # Middle for i in range(1,length+1): sequence.append(works[start + i]) # Right border sequence.append(works[start + length + 1].Not()) return sequence for n in range(num_nurses): # nurse_days[(n,d)] is 1 if nurse n works on day d nrses = [nurse_days[(n, d)] for d in range(5)] for length in range(1, 4): for start in range(5 - length - 1): model.AddBoolOr(negated_bounded_span(nrses, start, length)) A modified excerpt of what the output of the above would look like is the following: ['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), 'Not(nurse_days_n0d2)'] ['Not(nurse_days_n0d1)', nurse_days_n0d2(0..1), 'Not(nurse_days_n0d3)'] ['Not(nurse_days_n0d2)', nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)'] ['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), nurse_days_n0d2(0..1), 'Not(nurse_days_n0d3)'] ['Not(nurse_days_n0d1)', nurse_days_n0d2(0..1), nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)'] ['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), nurse_days_n0d2(0..1), nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)'] Thanks for your help in advance. Similar questions reviewed: [1], [2], [3].
[ "I don't use/know the syntax of or-tools, but you can probably construct a little boolean logic with constraints to do this.\nLet's say we introduce a binary variable to annotate which d day that nurse n starts working:\ns[n, d] ∈ {0, 1}\n\nAnd to enforce only one sequence of days worked, we need to constrain to one start, for all nurses\n∑ s[n, d] over all d <= 1 for all n ∈ N \n\nThen we know for any particular day, d that in order for nurse n to be working, they either have to start on that day or be working the day prior, right? That's it...\nSo,\nworking[n, d] <= s[n, d] + working[n, d-1] for all n ∈ N, d ∈ {d: d ≠ d_0}\n\nThe constraint for d_0 is left for the interested coder. ;)\n", "This can be implemented as follows:\nIntroduce variables for each employee and day:\nworksOnDay[e, d] = true if the employee e works any shift on day d.\nworkStarted[e, d] = true if the employee e has started working on day d or earlier.\nokToWork[e, d] = true if it is OK for employee e to work on day d.\nConstrain worksOnDay[e, d] to be true if any shift is taken on that day, i.e. boolean OR of the shifts for that day, represented using AddMaxEquality() in OR-Tools. AddMaxEquality() effectively constrains the target variable to be the OR of the operands.\nConstrain workStarted[e, d] to be true if it is true on the previous day d-1 or if the day is worked. (On the first day only if the day is worked).\nConstrain okToWork[e, d] to be true if the work had not already been started on the previous day, or if the previous day is worked. (On the first two days, work is always OK as there can't have been any gap). In other words, if the work had already been started, then it is only OK to work if the previous day is also worked. The expression in pseudo-code would be (not workStarted[e, d - 1]) or (worksOnDay[e, d - 1]), but since OR-Tools doesn't directly allow such Boolean operators, in the code we have to introduce a helping variable constrained to be workStarted[e, d - 1].Not() and use it in the disjunction.\nFinally, prevent work on days that it is not allowed by adding the implication that\nworksOnDay[e, d] implies okToWork[e, d]. This constraint will work in both directions and ensure that if okToWork[e, d] is false, then worksOnDay[e, d] will also be false since otherwise the constraint is violated.\nI'm sorry, I work in c# and don't have a working Python installation, but it should be easy enough to code the equivalent constraints in Python. 
Here's the code:\n var model = new CpModel();\n\n IntVar[,,] work = new IntVar[numEmployees, numShifts, numDays];\n IntVar[,] worksOnDay = new IntVar[numEmployees, numDays];\n IntVar[,] workStarted = new IntVar[numEmployees, numDays];\n IntVar[,] okToWork = new IntVar[numEmployees, numDays];\n\n foreach (int e in Range(numEmployees))\n {\n foreach (int s in Range(numShifts))\n {\n foreach (int d in Range(numDays))\n {\n work[e, s, d] = model.NewBoolVar($\"work{e}_{s}_{d}\");\n\n }\n }\n }\n\n for (int e = 0; e < numEmployees; e++)\n {\n for (int d = 0; d < numDays; d++)\n {\n worksOnDay[e, d] = model.NewBoolVar($\"WorksOnDay{e}_{d}\");\n workStarted[e, d] = model.NewBoolVar($\"WorkStarted{e}_{d}\");\n okToWork[e, d] = model.NewBoolVar($\"OkToWork{e}_{d}\");\n }\n }\n\n // WorksOnDay is true if any shift is taken on that day\n for (int e = 0; e < numEmployees; e++)\n {\n for (int d = 0; d < numDays; d++)\n {\n IEnumerable<IntVar> shiftsOnDay = (from int s in Range(numShifts) select work[e, s, d]);\n model.AddMaxEquality(worksOnDay[e, d], shiftsOnDay);\n }\n }\n\n // On the first day, WorkStarted is true if that day is worked\n for (int e = 0; e < numEmployees; e++)\n {\n model.Add(workStarted[e, 0] == worksOnDay[e, 0]);\n }\n\n // On subsequent days, WorkStarted is true if the day is worked, or if the work had been started on the day before\n for (int e = 0; e < numEmployees; e++)\n {\n for (int d = 1; d < numDays; d++)\n {\n model.AddMaxEquality(workStarted[e, d], new List<IntVar>() { workStarted[e, d - 1], worksOnDay[e, d] });\n }\n }\n\n // On the first and second days, there cannot have been a gap, work is always OK\n for (int e = 0; e < numEmployees; e++)\n {\n model.Add(okToWork[e, 0] == 1);\n model.Add(okToWork[e, 1] == 1);\n }\n\n // For the third day and beyond, work is OK if the work had not been started by the previous day, or if the previous day is worked.\n for (int e = 0; e < numEmployees; e++)\n {\n for (int d = 2; d < numDays; d++)\n {\n IntVar workNotStartedYesterday = model.NewBoolVar(\"WorkNotStarted\");\n model.Add(workNotStartedYesterday == (LinearExpr)workStarted[e, d - 1].Not());\n model.AddMaxEquality(okToWork[e, d], new List<IntVar>() { workNotStartedYesterday, worksOnDay[e, d - 1] });\n }\n }\n\n // Prevent work on days that it is not allowed\n for (int e = 0; e < numEmployees; e++)\n {\n for (int d = 0; d < numDays; d++)\n {\n // Working on a day implies that it is OK to work on that day.\n // Stated otherwise, either it is ok to work on the day, or it is not worked on the day.\n model.AddImplication(worksOnDay[e, d], okToWork[e, d]);\n }\n }\n\n" ]
[ 0, 0 ]
[]
[]
[ "constraint_programming", "or_tools", "python", "scheduling" ]
stackoverflow_0074436487_constraint_programming_or_tools_python_scheduling.txt
Q: How to find common edges from a binary dataframe?

This is my dataset:

Dept     Cell culture  Bioinfo  Immunology  Trigonometry  Algebra  Biotech  Optics
Biotech  1             1        1           0             0        1        0
Math     0             0        0           1             1        0        0
Physics  0             0        0           0             0        0        1

How I want my result:

Dept     0
Biotech  Cell culture
Biotech  Bioinfo
Biotech  Immunology
Math     Trigonometry
Math     Algebra
Physics  Optics

I need to form pairs that have the value one, but I also need to get rid of those values which are the same in both the column and row index, such as Biotech here. Is there an easy way to do this?

A: Try this:

#df = df.set_index('Dept') if needed move dept into the index
df.dot(df.columns+',').str.strip(',').str.split(',').explode().reset_index()

Output:

      Dept             0
0  Biotech  Cell culture
1  Biotech       Bioinfo
2  Biotech    Immunology
3  Biotech       Biotech
4     Math  Trigonometry
5     Math       Algebra
6  Physics        Optics
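Note that the output above still contains the self-pair (Biotech, Biotech), which the question asked to drop. A sketch of an equivalent melt-based pipeline that also filters those out (column names taken from the question):

import pandas as pd

df = pd.DataFrame({
    'Dept': ['Biotech', 'Math', 'Physics'],
    'Cell culture': [1, 0, 0], 'Bioinfo': [1, 0, 0], 'Immunology': [1, 0, 0],
    'Trigonometry': [0, 1, 0], 'Algebra': [0, 1, 0],
    'Biotech': [1, 0, 0], 'Optics': [0, 0, 1],
})

edges = (df.melt(id_vars='Dept', var_name='Subject', value_name='flag')
           .query('flag == 1 and Dept != Subject')  # keep 1s, drop self-pairs
           .loc[:, ['Dept', 'Subject']]
           .sort_values('Dept')
           .reset_index(drop=True))
print(edges)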
How to find common edges from a binary dataframe?
This is my dataset: Dept Cell culture Bioinfo Immunology Trigonometry Algebra Biotech Optics Biotech 1 1 1 0 0 1 0 Math 0 0 0 1 1 0 0 Physics 0 0 0 0 0 0 1 How I want my result: Dept 0 Biotech Cell culture Biotech Bioinfo Biotech Immunology Math Trigonometry Math Algebra Physics Optics I need to form pairs that have the value one, but I also need to rid of those values which are the same in both column and row index - such as biotech here. Is there an easy way to do this?
[ "Try this:\n#df = df.set_index('Dept') if needed move dept into the index\ndf.dot(df.columns+',').str.strip(',').str.split(',').explode().reset_index()\n\nOutput:\n Dept 0\n0 Biotech Cell culture\n1 Biotech Bioinfo\n2 Biotech Immunology\n3 Biotech Biotech\n4 Math Trigonometry\n5 Math Algebra\n6 Physics Optics\n\n" ]
[ 1 ]
[]
[]
[ "group_by", "multiple_columns", "pandas", "python", "sorting" ]
stackoverflow_0074464296_group_by_multiple_columns_pandas_python_sorting.txt
Q: Moto doesn't mock DynamoDB

I'm trying to write my unit tests for a Lambda function that communicates with DynamoDB. I'm using moto but it isn't mocking anything. Whenever I call something in boto3, it communicates using my AWS CLI profile with the actual API and not a mock one. Why is this happening? Here's the code:

### Unit test for the visitorCounterLambda function
from visitorCounterLambda import handler
import boto3
from moto import mock_dynamodb2

def setUp(self):
    #pass
    self.region = 'us-east-2'

@mock_dynamodb2
def test_handler():
    dynamodb = boto3.client('dynamodb')
    ddbTableName = "myDDBtable"
    # table = dynamodb.create_table(
    #     TableName = ddbTableName,
    #     BillingMode='PAY_PER_REQUEST',
    #     AttributeDefinitions=[
    #         {
    #             'AttributeName': 'id',
    #             'AttributeType': 'S'
    #         },
    #     ],
    #     KeySchema=[
    #         {
    #             'AttributeName': 'id',
    #             'KeyType': 'HASH'
    #         },
    #     ]
    # )
    tablesListed = dynamodb.list_tables()
    print(tablesListed)

if __name__ == '__main__':
    test_handler()

print(tablesListed) returns my actual tables from my actual account. If I uncomment the create_table command, it creates the table in my AWS account as well. What am I missing here? Thanks

A: I found out that the issue was with the from visitorCounterLambda import handler part, because that script already established a boto3 client when imported, and therefore moto could not intercept it. The proper way of doing it is outlined in the Moto documentation under "Very Important -- Recommended Usage". You should first establish the @mock_dynamodb2 decorator and only after that import your external resources inside the function. Example:

import boto3
from moto import mock_dynamodb2

@mock_dynamodb2
def test_handler():
    from visitorCounterLambda import handler
    dynamodb = boto3.client('dynamodb')

    ## do your magic here

    tablesListed = dynamodb.list_tables()
    print(tablesListed)

A: In my humble opinion: stay away from moto. Each and every version comes with new issues. We have been using it for years and had to solve tricky bugs on every update of boto3, sometimes leaving the whole test suite broken for weeks. It is not possible to live in fear of an upgrade. When using features that are a bit more advanced, you often end up with cryptic error messages that finally lead you to conclude that the feature in question is not supported. Rewriting the whole test suite to get rid of such a dependency is a pain, and of course always comes at the wrong time, Murphy's law. Mock AWS dependencies yourself for unit testing, and rely on integration tests to confirm the whole thing is working.
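To make the second answer concrete, here is a bare-bones way to stub boto3 yourself with the standard library; the table name is illustrative, and a real test would patch wherever the code under test creates its client:

from unittest import mock

def test_handler_with_plain_mock():
    fake_ddb = mock.MagicMock()
    fake_ddb.list_tables.return_value = {'TableNames': ['myDDBtable']}

    # Patch the client factory so no real AWS call can happen
    with mock.patch('boto3.client', return_value=fake_ddb):
        import boto3
        tables = boto3.client('dynamodb').list_tables()

    assert tables['TableNames'] == ['myDDBtable']
    fake_ddb.list_tables.assert_called_once()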
Moto doesn't mock DynamoDB
I'm trying to write my unit tests for a Lambda function that communicates with DynamoDB. I'm using moto but it isn't mocking anything. Whenever I call something in boto3, it communicates using my AWS CLI profile to the actual API and not a mock one. Why is this happening? Here's the code: ### Unit test for the visitorCounterLambda function from visitorCounterLambda import handler import boto3 from moto import mock_dynamodb2 def setUp(self): #pass self.region = 'us-east-2' @mock_dynamodb2 def test_handler(): dynamodb = boto3.client('dynamodb') ddbTableName = "myDDBtable" # table = dynamodb.create_table( # TableName = ddbTableName, # BillingMode='PAY_PER_REQUEST', # AttributeDefinitions=[ # { # 'AttributeName': 'id', # 'AttributeType': 'S' # }, # ], # KeySchema=[ # { # 'AttributeName': 'id', # 'KeyType': 'HASH' # }, # ] # ) tablesListed = dynamodb.list_tables() print(tablesListed) if __name__ == '__main__': test_handler() print(tablesListed) returns my actual tables from my actual account. If I uncomment the create_table command, it creates the table in my AWS account as well. What am I missing here? Thanks
[ "I found out that the issue was with the from visitorCounterLambda import handler part because that script already established a boto3 client when imported and therefore mock could not break that. The proper way of doing it is outlined in the Moto documentation under \"Very Important -- Recommended Usage\". You should first establish the @mock_dynamodb2 then after that import your external resources into the function.\nExample:\nimport boto3\nfrom moto import mock_dynamodb2\n\n@mock_dynamodb2\ndef test_handler():\n from visitorCounterLambda import handler\n dynamodb = boto3.client('dynamodb')\n\n ## do your magic here\n\n tablesListed = dynamodb.list_tables()\n print(tablesListed)\n\n", "In my humble opinion: stay away from moto. Each and every single version comes with other issues.\n\nWe have been using it for years and had to solve tricky bugs on every update of boto3, sometimes leaving the whole test suite broken for weeks. It is not possible to live in the fear of an upgrade.\nWhen using features that are a bit more advanced, you often end up with a crpytic error messages, that finally lead you to conclude that the feature in question is not supported.\n\nRewritting the whole test suite to go out of such a dependency is a pain, and of course always come at the wrong time, Murphy's law.\nMock AWS dependencies yourself for unit testing, and rely on integration tests to confirm the whole thinkg is working.\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_dynamodb", "aws_lambda", "boto3", "moto", "python" ]
stackoverflow_0062232709_amazon_dynamodb_aws_lambda_boto3_moto_python.txt
Q: Multi-line function calls with strings in python

I have a function call in python 2.7:

execute_cmd('/sbin/ip addr flush dev '
             + args.interface
             + ' && '
             + '/sbin/ifdown '
             + args.interface
             + ' ; '
             + '/sbin/ifup '
             + args.interface
             + ' && '
             + '/sbin/ifconfig | grep '
             + args.interface)

This is running fine, but pylint is complaining with the following warning messages:

C:220, 0: Wrong continued indentation (remove 1 space).
                 + args.interface
                |^ (bad-continuation)
C:221, 0: Wrong continued indentation (remove 1 space).
                 + ' && '
                |^ (bad-continuation)
C:222, 0: Wrong continued indentation (remove 1 space).
                 + '/sbin/ifconfig | grep '
                |^ (bad-continuation)
...

What is the correct way to call a function in python with string argument(s) which span multiple lines?

A: Pylint tells you exactly what to do, remove one space:

execute_cmd('/sbin/ip addr flush dev '
            + args.interface
            + ' && '
            + '/sbin/ifdown '
            + args.interface
            + ' ; '
            + '/sbin/ifup '
            + args.interface
            + ' && '
            + '/sbin/ifconfig | grep '
            + args.interface)

Also, you could use string formatting, for example:

command_line = '/sbin/ip addr flush dev {0} && /sbin/ifdown {0} ; /sbin/ifup {0} && /sbin/ifconfig | grep {0}'\
    .format(args.interface)

A: PEP 8 states that you can also start a long argument list (or anything within brackets, really) on the next line with one extra indentation level:

execute_cmd(
    '/sbin/ip addr flush dev ' +
    args.interface +
    ' && ' +
    '/sbin/ifdown ' +
    args.interface +
    ' ; ' +
    '/sbin/ifup ' +
    args.interface +
    ' && ' +
    '/sbin/ifconfig | grep ' +
    args.interface
)

As I said in my comment, binary operators should be put at the end of a line break, not at the start of a new one.

What you can also do is use an f-string (python >3.6) and just drop the +s:

execute_cmd(
    f'/sbin/ip addr flush dev {args.interface} && /sbin/ifdown'
    f' {args.interface} ; /sbin/ifup {args.interface} && '
    f'/sbin/ifconfig | grep {args.interface}'
)

The same with the .format function (from python 2.6 onwards). Note that .format must apply to the whole concatenated string, not just the last literal, so the literals are joined by implicit concatenation inside parentheses:

execute_cmd(
    ('/sbin/ip addr flush dev {0} && /sbin/ifdown'
     ' {0} ; /sbin/ifup {0} && '
     '/sbin/ifconfig | grep {0}').format(args.interface)
)

A: I'd like to highlight the position of +, as pointed out in this post.

Best practice:

income = (gross_wages
          + taxable_interest)

Anti-pattern (in PEP 8 terms, this is W504, line break after binary operator):

income = (gross_wages +
          taxable_interest)
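One more option worth spelling out: Python implicitly concatenates adjacent string literals, so the command can be split across lines without any + at all. A sketch reusing the question's execute_cmd and args.interface:

command = (
    '/sbin/ip addr flush dev {iface} && '
    '/sbin/ifdown {iface} ; '
    '/sbin/ifup {iface} && '
    '/sbin/ifconfig | grep {iface}'
).format(iface=args.interface)
execute_cmd(command)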
Multi-line function calls with strings in python
I have a function call in python 2.7: execute_cmd('/sbin/ip addr flush dev ' + args.interface + ' && ' + '/sbin/ifdown ' + args.interface + ' ; ' + '/sbin/ifup ' + args.interface + ' && ' + '/sbin/ifconfig | grep ' + args.interface) This is running fine, but pylint is complaining with the following warning messages: C:220, 0: Wrong continued indentation (remove 1 space). + args.interface |^ (bad-continuation) C:221, 0: Wrong continued indentation (remove 1 space). + ' && ' |^ (bad-continuation) C:222, 0: Wrong continued indentation (remove 1 space). + '/sbin/ifconfig | grep ' |^ (bad-continuation) . . . What is the correct way to call a function in Python with string argument(s) that span multiple lines?
[ "Pylint tells you exactly what to do, remove one space:\nexecute_cmd('/sbin/ip addr flush dev '\n + args.interface\n + ' && '\n + '/sbin/ifdown '\n + args.interface\n + ' ; '\n + '/sbin/ifup '\n + args.interface\n + ' && '\n + '/sbin/ifconfig | grep '\n + args.interface)\n\nAlso, you could use string formatting, for example:\ncommand_line = '/sbin/ip addr flush dev {0} && /sbin/ifdown {0} ; /sbin/ifup {0} && {0} /sbin/ifconfig | grep {0}'\\\n .format(args.interface)\n\n", "PEP 8 states that you can also start a long argument list (or anything within brackets, really) at the next line with one extra indentation level:\nexecute_cmd(\n '/sbin/ip addr flush dev ' +\n args.interface +\n ' && ' +\n '/sbin/ifdown ' +\n args.interface +\n ' ; ' +\n '/sbin/ifup ' +\n args.interface +\n ' && ' + \n '/sbin/ifconfig | grep ' +\n args.interface\n)\n\nAs I said in my comment, binary operators should be put at the end of a line break, not at the start of a new one.\n\nWhat you can also do is use an fstring (python >3.6) and just drop the +s:\nexecute_cmd(\n f'/sbin/ip addr flush dev {args.interface} && /sbin/ifdown'\n f' {args.interface} ; /sbin/ifup {args.interface} && '\n f'/sbin/ifconfig | grep {args.interface}'\n)\n\nThe same with the .format function (from python .. 2.6 onwards I think?):\nexecute_cmd(\n '/sbin/ip addr flush dev {0} && /sbin/ifdown' +\n ' {0} ; /sbin/ifup {0} && ' +\n '/sbin/ifconfig | grep {0}'.format(args.interface)\n)\n\n", "I'd like to highlight the position of +, as pointed out in this post\nBest practice:\nincome = (gross_wages\n + taxable_interest)\n\nAnti-pattern: (In PEP8, this is W504 line break after binary operator)\nincome = (gross_wages +\n taxable_interest)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0047829325_python.txt
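One further option the answers do not mention (a sketch, not from the original thread): adjacent string literals inside parentheses are concatenated at compile time, so the command can be split over lines without any + at all. The interface variable below is a stand-in for args.interface, and execute_cmd is the function from the question.

interface = 'eth0'  # stand-in for args.interface

# Adjacent string literals are joined at compile time, so no '+'
# (and none of pylint's continuation complaints) are needed.
cmd = (
    '/sbin/ip addr flush dev %(if)s && '
    '/sbin/ifdown %(if)s ; '
    '/sbin/ifup %(if)s && '
    '/sbin/ifconfig | grep %(if)s'
) % {'if': interface}

execute_cmd(cmd)  # execute_cmd as defined in the question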
Q: How to store Dataframe data to Firebase Storage? Given a pandas Dataframe which contains some data, what is the best way to store this data to Firebase? Should I convert the Dataframe to a local file (e.g. .csv, .txt) and then upload it on Firebase Storage, or is it also possible to directly store the pandas Dataframe without conversion? Or are there better best practices? Update 01/03 - So far I've come up with this solution, which requires writing a csv file locally, then reading it in and uploading it and then deleting the local file. I doubt however that this is the most efficient method, thus I would like to know if it can be done better and quicker? import os import firebase_admin from firebase_admin import db, storage cred = firebase_admin.credentials.Certificate(cert_json) app = firebase_admin.initialize_app(cred, config) bucket = storage.bucket(app=app) def upload_df(df, data_id): """ Upload a Dataframe as a csv to Firebase Storage :return: storage_ref """ # Storage location + extension storage_ref = data_id + ".csv" # Store locally df.to_csv(data_id) # Upload to Firebase Storage blob = bucket.blob(storage_ref) with open(data_id,'rb') as local_file: blob.upload_from_file(local_file) # Delete locally os.remove(data_id) return storage_ref A: With python-firebase and to_dict: postdata = my_df.to_dict() # Assumes any auth/headers you need are already taken care of. result = firebase.post('/my_endpoint', postdata, {'print': 'pretty'}) print(result) # Snapshot info You can get the data back using the snapshot info and endpoint, and reestablish the df with from_dict(). You could adapt this solution to SQL and JSON solutions, which pandas also has support for. Alternatively, and depending on where your script executes from, you might consider treating firebase as a db and using the dbapi from firebase_admin (check this out.) As for whether it's according to best practice, it's difficult to say without knowing anything about your use case. A: if you just want to reduce code length and the steps of creating and deleting files, you can use upload_from_string: import firebase_admin from firebase_admin import db, storage cred = firebase_admin.credentials.Certificate(cert_json) app = firebase_admin.initialize_app(cred, config) bucket = storage.bucket(app=app) def upload_df(df, data_id): """ Upload a Dataframe as a csv to Firebase Storage :return: storage_ref """ storage_ref = data_id + '.csv' blob = bucket.blob(storage_ref) blob.upload_from_string(df.to_csv()) return storage_ref https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_string A: After figuring out for hours, the following solution works for me. You need to convert your csv file to bytes & then upload it. import pyrebase import pandas as pd firebaseConfig = { "apiKey": "xxxxx", "authDomain": "xxxxx", "projectId": "xxxxx", "storageBucket": "xxxxx", "messagingSenderId": "xxxxx", "appId": "xxxxx", "databaseURL":"xxxxx" }; firebase = pyrebase.initialize_app(firebaseConfig) storage = firebase.storage() df = pd.read_csv("/content/Future Prices.csv") # here is the magic. Convert your csv file to bytes and then upload it df_string = df.to_csv(index=False) db_bytes = bytes(df_string, 'utf8') fileName = "Future Prices.csv" storage.child("predictions/" + fileName).put(db_bytes) That's all. Happy coding! A: I found that starting from a very modest dataframe size (below 100 KB!), and certainly for bigger ones, it pays off to compress the data before storing. 
It does not have to be a dataframe; it can be any object (e.g. a dictionary). I used pickle below to compress. Your object can be seen on the usual firebase storage this way, and you get gains in memory and speed, both when writing and when reading, compared to uncompressed storage. For big objects it's also worth adding a timeout to avoid a ConnectionError after the default timeout of 60 seconds. import firebase_admin from firebase_admin import credentials, initialize_app, storage import pickle cred = credentials.Certificate(json_cert_file) firebase_admin.initialize_app(cred, {'storageBucket': 'YOUR_storageBucket (without gs://)'}) bucket = storage.bucket() file_name = data_id + ".pkl" blob = bucket.blob(file_name) # write df to storage blob.upload_from_string(pickle.dumps(df), timeout=300) # read df from storage df = pickle.loads(blob.download_as_string(timeout=300))
How to store Dataframe data to Firebase Storage?
Given a pandas Dataframe which contains some data, what is the best way to store this data to Firebase? Should I convert the Dataframe to a local file (e.g. .csv, .txt) and then upload it on Firebase Storage, or is it also possible to directly store the pandas Dataframe without conversion? Or are there better best practices? Update 01/03 - So far I've come up with this solution, which requires writing a csv file locally, then reading it in and uploading it and then deleting the local file. I doubt however that this is the most efficient method, thus I would like to know if it can be done better and quicker? import os import firebase_admin from firebase_admin import db, storage cred = firebase_admin.credentials.Certificate(cert_json) app = firebase_admin.initialize_app(cred, config) bucket = storage.bucket(app=app) def upload_df(df, data_id): """ Upload a Dataframe as a csv to Firebase Storage :return: storage_ref """ # Storage location + extension storage_ref = data_id + ".csv" # Store locally df.to_csv(data_id) # Upload to Firebase Storage blob = bucket.blob(storage_ref) with open(data_id,'rb') as local_file: blob.upload_from_file(local_file) # Delete locally os.remove(data_id) return storage_ref
[ "With python-firebase and to_dict:\npostdata = my_df.to_dict()\n\n# Assumes any auth/headers you need are already taken care of.\nresult = firebase.post('/my_endpoint', postdata, {'print': 'pretty'})\nprint(result)\n# Snapshot info\n\nYou can get the data back using the snapshot info and endpoint, and reestablish the df with from_dict(). You could adapt this solution to SQL and JSON solutions, which pandas also has support for.\nAlternatively and depending on where you script executes from, you might consider treating firebase as a db and using the dbapi from firebase_admin (check this out.)\nAs for whether it's according to best practice, it's difficult to say without knowing anything about your use case.\n", "if you just want to reduce code length and the steps of creating and deleting files, you can use upload_from_string:\nimport firebase_admin\nfrom firebase_admin import db, storage\n\ncred = firebase_admin.credentials.Certificate(cert_json)\napp = firebase_admin.initialize_app(cred, config)\nbucket = storage.bucket(app=app)\n\ndef upload_df(df, data_id):\n \"\"\"\n Upload a Dataframe as a csv to Firebase Storage\n :return: storage_ref\n \"\"\"\n storage_ref = data_id + '.csv'\n blob = bucket.blob(storage_ref)\n blob.upload_from_string(df.to_csv())\n\n return storage_ref\n\nhttps://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_string\n", "After figuring out for hours, the following solution works for me. You need to convert your csv file to bytes & then upload it.\nimport pyrebase\nimport pandas\n\nfirebaseConfig = {\n \"apiKey\": \"xxxxx\",\n \"authDomain\": \"xxxxx\",\n \"projectId\": \"xxxxx\",\n \"storageBucket\": \"xxxxx\",\n \"messagingSenderId\": \"xxxxx\",\n \"appId\": \"xxxxx\",\n \"databaseURL\":\"xxxxx\"\n};\n\nfirebase = pyrebase.initialize_app(firebaseConfig)\n\nstorage = firebase.storage()\n\ndf = pd.read_csv(\"/content/Future Prices.csv\")\n\n# here is the magic. Convert your csv file to bytes and then upload it\ndf_string = df.to_csv(index=False)\ndb_bytes = bytes(df_string, 'utf8')\n\nfileName = \"Future Prices.csv\"\n\nstorage.child(\"predictions/\" + fileName).put(db_bytes)\n\nThat's all Happy Coding!\n", "I found that starting from very modest size of dataframe (below 100KB!), and certainly for bigger ones, it's paying off to compress the data before storing. It does not have to be a dataframe, but it can be any onject (e.g. a dictionary). I used pickle below to compress. Your object can be seen on the usual firebase storage this way, and you get gains in memory and speed, both when writing and when reading, compared to uncompressed storage. For big objects it's also worth adding timeout for to avoid ConnectionError after the default timeout of 60 seconds.\nimport firebase_admin\nfrom firebase_admin import credentials, initialize_app, storage\nimport pickle\n\ncred = credentials.Certificate(json_cert_file)\nfirebase_admin.initialize_app(cred, {'storageBucket': 'YOUR_storageBucket (without gs://)'})\nbucket = storage.bucket()\n\nfile_name = data_id + \".pkl\"\nblob = bucket.blob(file_name)\n\n# write df to storage\nblob.upload_from_string(pickle.dumps(df, timeout=300))\n\n# read df from storage\ndf = pickle.loads(blob.download_as_string(timeout=300))\n\n" ]
[ 7, 1, 0, 0 ]
[]
[]
[ "dataframe", "firebase", "pandas", "python" ]
stackoverflow_0053886485_dataframe_firebase_pandas_python.txt
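To round out the upload_from_string answer above, a sketch of the reverse direction that also avoids a local file; it assumes bucket was initialised as in the question and that the CSV was written with the default to_csv() index (hence index_col=0).

import io
import pandas as pd

def download_df(storage_ref):
    blob = bucket.blob(storage_ref)        # bucket as set up in the question
    csv_bytes = blob.download_as_string()  # returns the file contents as bytes
    return pd.read_csv(io.BytesIO(csv_bytes), index_col=0)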
Q: Add x-axis to matplotlib with multiple y-axis line chart How do I add the x-axis (Month) to a simple Matplotlib plot? My Dataset: Month Views CMA30 0 11 24662 24662.000000 1 11 2420 13541.000000 2 11 11318 12800.000000 3 11 8529 11732.250000 4 10 78861 25158.000000 5 10 1281 21178.500000 6 10 22701 21396.000000 7 10 17088 20857.500000 This is my code: df[['Views', 'CMA30']].plot(label='Views', figsize=(5, 5)) This is giving me Views and CMA30 on the y-axis. How do I add Month (1-12) on the x-axis? A: If you average the values per month, then try groupby/mean: df.groupby('Month')[['Views','CMA30']].mean().plot(label='Views', figsize=(5, 5))
Add x-axis to matplotlib with multiple y-axis line chart
How do I add the x-axis (Month) to a simple Matplotlib plot? My Dataset: Month Views CMA30 0 11 24662 24662.000000 1 11 2420 13541.000000 2 11 11318 12800.000000 3 11 8529 11732.250000 4 10 78861 25158.000000 5 10 1281 21178.500000 6 10 22701 21396.000000 7 10 17088 20857.500000 This is my code: df[['Views', 'CMA30']].plot(label='Views', figsize=(5, 5)) This is giving me Views and CMA30 on the y-axis. How do I add Month (1-12) on the x-axis?
[ "If you average the values per month, then try groupby/mean:\ndf.groupby('Month')[['Views','CMA30']].mean().plot(label='Views', figsize=(5, 5))\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "matplotlib", "pandas", "python" ]
stackoverflow_0074464111_dataframe_matplotlib_pandas_python.txt
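If the intent is to keep every row as its own point rather than averaging per month, a sketch of the alternative, assuming the same df as in the question:

# Plot both series against the Month column directly.
df.plot(x='Month', y=['Views', 'CMA30'], figsize=(5, 5))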
Q: Moving average for value present in two dataframe columns in python I've been stuck on the following problem since last night and I haven't found a solution anywhere. Given the dataframe df: team1 team2 score1 score2 0 A B 1 0 1 C A 3 2 2 B A 2 3 3 A C 2 1 I would like to pass a function that calculates a moving average for team1 BUT takes into account both team1 and team2 columns. The output for a moving average of 2 would be: team1 team2 score1 score2 mov_avg_a 0 A B 1 2 1 # for A 1 C A 3 2 1.5 # for C 2 B A 2 3 2.5 # for B 3 A C 2 1 2.5 # for A My idea is to call .apply() with a custom function that would: Step 1. Blend team1 and team2 columns into a temporary column tempA, where score1 or score2 values are returned if A (for example) is present, like so: team1 team2 score1 score2 tempA 0 A B 1 0 1 1 C A 3 2 2 2 B A 2 3 3 3 A C 2 1 2 Step 2. Apply rolling(2) to the tempA to get the desired output as seen above. I have tried creating this process and failed spectacularly. I am aware that using apply() in the case of a large dataframe will be computationally expensive but I cannot think of a 'one line' solution here. Thank you in advance for your insights. Dataframe for tests: df = pd.DataFrame( { 'team1': ['A', 'C', 'B', 'A'], 'team2': ['B', 'A', 'A', 'C'], 'score1': [1, 3, 2, 2], 'score2': [0, 2, 3, 1] } ) EDIT: Upon some further thought I think the best solution is to create two separate datasets for team1 and team2 each, perform calculations on them and merge them back if needed. A: Given the clarification in comments I'll suggest this... In [2]: df Out[2]: team1 team2 score1 score2 0 A B 1 0 1 C A 3 2 2 B A 2 3 3 A C 2 1 In [3]: # restructure data frame ...: df_team_scores = pd.wide_to_long(df.assign(game_index=df.index), ...: ['team', 'score'], ...: i='game_index', ...: j='column_suffix') ...: df_team_scores Out[3]: team score game_index column_suffix 0 1 A 1 1 1 C 3 2 1 B 2 3 1 A 2 0 2 B 0 1 2 A 2 2 2 A 3 3 2 C 1 In [4]: # restore proper order of scores (in order of game_index) ...: # order by team first to make table easier to understand ...: df_team_scores = df_team_scores.reset_index().sort_values(['team', 'game_index']) ...: df_team_scores Out[4]: game_index column_suffix team score 0 0 1 A 1 5 1 2 A 2 6 2 2 A 3 3 3 1 A 2 4 0 2 B 0 2 2 1 B 2 1 1 1 C 3 7 3 2 C 1 In [5]: # Compute the rolling score ...: s_rolling_score = df_team_scores.groupby(by='team')['score'].rolling(2, min_periods=1).mean() ...: s_rolling_score Out[5]: team A 0 1.0 5 1.5 6 2.5 3 2.5 B 4 0.0 2 1.0 C 1 3.0 7 2.0 Name: score, dtype: float64 In [6]: # Force indices to be compatible and merge back to team_scores data frame ...: df_team_scores['rolling_score'] = s_rolling_score.reset_index(level=0, drop=True) ...: df_team_scores Out[6]: game_index column_suffix team score rolling_score 0 0 1 A 1 1.0 5 1 2 A 2 1.5 6 2 2 A 3 2.5 3 3 1 A 2 2.5 4 0 2 B 0 0.0 2 2 1 B 2 1.0 1 1 1 C 3 3.0 7 3 2 C 1 2.0 Not quite a one-liner but does not rely on custom functions. If you need to merge this back into the original data frame, I'll leave it to someone else to figure it out.
Moving average for value present in two dataframe columns in python
I've been stuck on the following problem since last night and I haven't found a solution anywhere. Given the dataframe df: team1 team2 score1 score2 0 A B 1 0 1 C A 3 2 2 B A 2 3 3 A C 2 1 I would like to pass a function that calculates a moving average for team1 BUT takes into account both team1 and team2 columns. The output for a moving average of 2 would be: team1 team2 score1 score2 mov_avg_a 0 A B 1 2 1 # for A 1 C A 3 2 1.5 # for C 2 B A 2 3 2.5 # for B 3 A C 2 1 2.5 # for A My idea is to call .apply() with a custom function that would: Step 1. Blend team1 and team2 columns into a temporary column tempA, where score1 or score2 values are returned if A (for example) is present, like so: team1 team2 score1 score2 tempA 0 A B 1 0 1 1 C A 3 2 2 2 B A 2 3 3 3 A C 2 1 2 Step 2. Apply rolling(2) to the tempA to get the desired output as seen above. I have tried creating this process and failed spectacularly. I am aware that using apply() in the case of a large dataframe will be computationally expensive but I cannot think of a 'one line' solution here. Thank you in advance for your insights. Dataframe for tests: df = pd.DataFrame( { 'team1': ['A', 'C', 'B', 'A'], 'team2': ['B', 'A', 'A', 'C'], 'score1': [1, 3, 2, 2], 'score2': [0, 2, 3, 1] } ) EDIT: Upon some further thought I think the best solution is to create two separate datasets for team1 and team2 each, perform calculations on them and merge them back if needed.
[ "Given the clarification in comments I'll suggest this...\nIn [2]: df\nOut[2]:\n team1 team2 score1 score2\n0 A B 1 0\n1 C A 3 2\n2 B A 2 3\n3 A C 2 1\n\nIn [3]: # restructure data frame\n ...: df_team_scores = pd.wide_to_long(df.assign(game_index=df.index),\n ...: ['team', 'score'],\n ...: i='game_index',\n ...: j='column_suffix')\n ...: df_team_scores\nOut[3]:\n team score\ngame_index column_suffix\n0 1 A 1\n1 1 C 3\n2 1 B 2\n3 1 A 2\n0 2 B 0\n1 2 A 2\n2 2 A 3\n3 2 C 1\n\nIn [4]: # restore proper of scores (in order of game_index)\n ...: # order by team first to make table easier to understand\n ...: df_team_scores = df_team_scores.reset_index().sort_values(['team', 'game_index'])\n ...: df_team_scores\nOut[4]:\n game_index column_suffix team score\n0 0 1 A 1\n5 1 2 A 2\n6 2 2 A 3\n3 3 1 A 2\n4 0 2 B 0\n2 2 1 B 2\n1 1 1 C 3\n7 3 2 C 1\n\nIn [5]: # Compute the rolling score\n ...: s_rolling_score = df_team_scores.groupby(by='team')['score'].rolling(2, min_periods=1).mean()\n ...: s_rolling_score\nOut[5]:\nteam\nA 0 1.0\n 5 1.5\n 6 2.5\n 3 2.5\nB 4 0.0\n 2 1.0\nC 1 3.0\n 7 2.0\nName: score, dtype: float64\n\nIn [6]: # Force indices to be compatible and merge back to team_scores data frame\n ...: df_team_scores['rolling_score'] = s_rolling_score.reset_index(level=0, drop=True)\n ...: df_team_scores\nOut[6]:\n game_index column_suffix team score rolling_score\n0 0 1 A 1 1.0\n5 1 2 A 2 1.5\n6 2 2 A 3 2.5\n3 3 1 A 2 2.5\n4 0 2 B 0 0.0\n2 2 1 B 2 1.0\n1 1 1 C 3 3.0\n7 3 2 C 1 2.0\n\nNot quite a one-liner but does not rely on custom functions. If you need to merge this back into the original data frame, I'll leave it to someone else to figure it out.\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074463830_numpy_pandas_python.txt
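The question's EDIT proposes building two per-team datasets and merging them; a sketch of that route with pd.concat, assuming the same df as in the question and the same rolling(2, min_periods=1) window used in the answer:

import pandas as pd

# Stack the (team1, score1) and (team2, score2) pairs into one long frame,
# keeping the per-game order, then take a grouped rolling mean.
long_df = pd.concat([
    df[['team1', 'score1']].rename(columns={'team1': 'team', 'score1': 'score'}),
    df[['team2', 'score2']].rename(columns={'team2': 'team', 'score2': 'score'}),
]).sort_index(kind='stable').reset_index(drop=True)

long_df['mov_avg'] = (long_df.groupby('team')['score']
                             .rolling(2, min_periods=1).mean()
                             .reset_index(level=0, drop=True))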
Q: Group by a category I have done KMeans clustering and now I need to analyse each individual cluster. For example, look at cluster 1, see which clients are in it, and draw conclusions. dfRFM['idcluster'] = num_cluster dfRFM.head() idcliente Recencia Frecuencia Monetario idcluster 1 3 251 44 -90.11 0 2 8 1011 44 87786.44 2 6 88 537 36 8589.57 0 7 98 505 2 -179.00 0 9 156 11 15 35259.50 0 How do I group so I only see results from, let's say, idcluster 0 and sort by, let's say, "Monetario"? Thanks! A: To filter a dataframe, the most common way is to use df[df[colname] == val] Then you can use df.sort_values() In your case, that would look like this: dfRFM_id0 = dfRFM[dfRFM['idcluster']==0].sort_values('Monetario') The way this filtering works is that dfRFM['idcluster']==0 returns a series of True/False based on if it is, well, true or false. So then we have a sort of dfRFM[(True,False,True,True...)], and so the dataframe returns only the rows where we have a True. That is, filtering/selecting the data where the condition is true. A: I think you actually just need to filter your DF! df_new = dfRFM[dfRFM.idcluster == 0] and then sort by Monetario df_new = df_new.sort_values(by = 'Monetario') Group by is really best for when you're wanting to look at the cluster as a whole - for example, if you wanted to see the average values for Recencia, Frecuencia, and Monetario for all of Group 0.
Group by a category
I have done KMeans clustering and now I need to analyse each individual cluster. For example, look at cluster 1, see which clients are in it, and draw conclusions. dfRFM['idcluster'] = num_cluster dfRFM.head() idcliente Recencia Frecuencia Monetario idcluster 1 3 251 44 -90.11 0 2 8 1011 44 87786.44 2 6 88 537 36 8589.57 0 7 98 505 2 -179.00 0 9 156 11 15 35259.50 0 How do I group so I only see results from, let's say, idcluster 0 and sort by, let's say, "Monetario"? Thanks!
[ "To filter a dataframe, the most common way is to use df[df[colname] == val] Then you can use df.sort_values()\nIn your case, that would look like this:\ndfRFM_id0 = dfRFM[dfRFM['idcluster']==0].sort_values('Monetario')\n\nThe way this filtering works is that dfRFM['idcluster']==0 returns a series of True/False based on if it is, well, true or false. So then we have a sort of dfRFM[(True,False,True,True...)], and so the dataframe returns only the rows where we have a True. That is, filtering/selecting the data where the condition is true.\nedit: add 'the way this works...'\n", "I think you actually just need to filter your DF!\ndf_new = dfRFM[dfRFM.idcluster == 0]\n\nand then sort by Montario\ndf_new = df_new.sort_values(by = 'Monetario')\n\nGroup by is really best for when you're wanting to look at the cluster as a whole - for example, if you wanted to see the average values for Recencia, Frecuencia, and Monetario for all of Group 0.\n" ]
[ 0, 0 ]
[]
[]
[ "k_means", "pandas", "python" ]
stackoverflow_0074464475_k_means_pandas_python.txt
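Once the rows of a single cluster have been inspected with the filter shown above, groupby becomes useful for a per-cluster overview; a sketch with the question's column names:

# Mean Recencia/Frecuencia/Monetario per cluster.
dfRFM.groupby('idcluster')[['Recencia', 'Frecuencia', 'Monetario']].mean()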
Q: Tkinter delete not working on referenced entry when referencing length of entry I have a tkinter window class that I've made and my delete function is not working properly. my_window = tk.Tk() class QuoteForm(): def __init__(self,master): self.file_data = '' self.master = master self.master.rowconfigure(0, weight=1) self.master.rowconfigure(1, weight= 1) self.master.rowconfigure(2, weight = 1) master.geometry('600x400') master.resizable(False,False) #create the frames self.directory_frm = tk.Frame(master=master) self.directory_frm.grid(row=0) #this is the frame for the directory self.add_on_frm = tk.Frame(master=master) self.add_on_frm.grid(row=1) #this is the frame for add-ons input self.button_frm = tk.Frame(master=master) self.button_frm.grid(row=2) #this is the frame for #creates buttons, entries, labels self.load_directory_frame() #creates and grids the directory button self.load_add_on_frame() #creates and grids the entry buttons and labels self.load_button_frame() #creates and grids the buttons my_window.mainloop() def load_add_on_frame(self): vcmd = (self.master.register(self.validate_ent), '%S') #create inputs and labels for add-ons self.trip_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd, name='trip_ent') self.trip_ent.grid(column= 1, row = 0) self.raw_cutouts_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.raw_cutouts_ent.grid(column= 3, row = 0) def clear_entries(self): entries = (self.trip_ent, self.raw_cutouts_ent) #list of entries to loop (there are a total of 12 in the actual code) for entry in entries: entry.delete(0,len(entry.get())) #this is where the trouble seems to happen new_quote = QuoteForm(my_window) My problem is on the second-to-last line of code (starting with 'entry.delete'). Typically you would do 'entry.delete(0,END)', but because entry is a variable the code won't run with END. 'END' is an invalid index, and 'end' just does the same as pulling the length, so I tried to make it dynamic by making the 'end' the length of whatever is in the entry. When I do that, however, it deletes nothing [I also tried forcing it with int(len(entry.get()))]. If I manually enter an integer it will delete everything up to that integer, including if it's the same as the length of that entry, and I put breaks to confirm that I'm getting an int return and I am. I realize I could just write a line of code to delete each entry individually, but there's a total of 12 and I would like to clean it up. 
I'm adding the full code to be able to run below import os import re import tkinter as tk from tkinter import filedialog as fd from tkinter import messagebox import pandas as pd my_window = tk.Tk() class QuoteForm(): def __init__(self,master): self.file_data = '' self.master = master self.master.rowconfigure(0, weight=1) self.master.rowconfigure(1, weight= 1) self.master.rowconfigure(2, weight = 1) master.geometry('600x400') master.resizable(False,False) self.directory_frm = tk.Frame(master=master) self.directory_frm.grid(row=0) #this is the frame for the directory self.add_on_frm = tk.Frame(master=master) self.add_on_frm.grid(row=1) #this is the frame for add-ons input self.button_frm = tk.Frame(master=master) self.button_frm.grid(row=2) #this is the frame for self.load_directory_frame() self.load_add_on_frame() self.load_button_frame() my_window.mainloop() @staticmethod def get_quote_data(filepath): #read csv to get job infomation for pricing try: if filepath: job_info = pd.read_csv(filepath, index_col=0, #set index column skiprows=range(4), #skip first 4 rows usecols=['Item','Quan']) job_info = job_info.drop(labels='Grand Total:', axis= 0) customer_info = pd.read_csv(filepath, header=None, skiprows= lambda x: x not in range(2), #skip any row beyond first two rows usecols=[0,1]) #use first two columns customer_info = {customer_info.at[0,0].replace(':',''): customer_info.at[0,1], ##formatting the data for legibility customer_info.at[1,0].replace(':','') : customer_info.at[1,1]} return [customer_info, job_info] except: messagebox.showerror("Data Invalid", "Please make sure you select a valid estimate CSV file.") def sink_check(self): ####this is to be used at the submit buttons to confirm that there are not more sinks than cutouts cutouts = self.um_sink_inst_ent.get() sink_quan_list = (self.std_sink_ent.get(),self.upgrd_sink_ent.get(),self.van_sink_ent.get(),self.cust_sink_temp_ent.get()) sinks = sum(sink_quan_list) if sinks > cutouts: return False ###check that the sinks included does not exceed the number of sinks charged for install return True def validate_ent(self,input): if not input: return True elif re.fullmatch(r'[0-9]',input): return True return False def open_file(self): file = fd.askopenfile(mode='r', filetypes=[('CSV Files', '*.csv')]) if file: filepath = os.path.abspath(file.name) file_data = self.get_quote_data(filepath) cust_name = file_data[0]['Name'] job_addr = file_data[0]['Addr'] self.file_select_text['text'] = f"{job_addr} for {cust_name} is currently selected" def load_directory_frame(self): file_select_btn = tk.Button(master=self.directory_frm,text= "Select a file",command=self.open_file) file_select_btn.grid(column=0, row=0) self.file_select_text = tk.Label(master=self.directory_frm, text = "No File Selected") self.file_select_text.grid(column=1, row=0) def load_add_on_frame(self): vcmd = (self.master.register(self.validate_ent), '%S') #create inputs and labels for add-ons self.trip_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd, name='trip_ent') self.trip_ent.grid(column= 1, row = 0) self.raw_cutouts_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.raw_cutouts_ent.grid(column= 3, row = 0) self.radii_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.radii_ent.grid(column= 1, row = 1) self.arcs_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.arcs_ent.grid(column= 3, row = 1) self.splay_ent = 
tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.splay_ent.grid(column= 1, row = 2) self.wtrfall_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.wtrfall_ent.grid(column= 3, row = 2) self.um_sink_inst_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.um_sink_inst_ent.grid(column= 1, row = 3) self.farm_sink_co_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.farm_sink_co_ent.grid(column= 3, row = 3) self.std_sink_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.std_sink_ent.grid(column= 1, row = 4) self.upgrd_sink_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.upgrd_sink_ent.grid(column= 3, row = 4) self.van_sink_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.van_sink_ent.grid(column= 1, row = 5) self.cust_sink_temp_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.cust_sink_temp_ent.grid(column= 3, row = 5) trip_lbl = tk.Label(master=self.add_on_frm,text = "Extra Trip(s)") trip_lbl.grid(column= 0, row = 0) raw_cutouts_lbl = tk.Label(master=self.add_on_frm,text = "Unpolished Cutout(s)") raw_cutouts_lbl.grid(column= 2, row = 0) radii_lbl = tk.Label(master=self.add_on_frm,text = "Radii") radii_lbl.grid(column= 0, row = 1) arcs_lbl = tk.Label(master=self.add_on_frm,text = "Arc(s)") arcs_lbl.grid(column= 2, row = 1) splay_lbl = tk.Label(master=self.add_on_frm,text = "Splay(s)") splay_lbl.grid(column= 0, row = 2) wtrfall_lbl = tk.Label(master=self.add_on_frm,text = "Waterfal Leg(s)") wtrfall_lbl.grid(column= 2, row = 2) um_sink_inst_lbl = tk.Label(master=self.add_on_frm,text = "Install of UM Sink(s)") um_sink_inst_lbl.grid(column= 0, row = 3) farm_sink_co_lbl = tk.Label(master=self.add_on_frm,text = "Farm Sink C/O") farm_sink_co_lbl.grid(column= 2, row = 3) std_sink_lbl = tk.Label(master=self.add_on_frm,text = "Standard 18ga Sink(s)") std_sink_lbl.grid(column= 0, row = 4) upgrd_sink_lbl = tk.Label(master=self.add_on_frm,text = "Upgrade 18ga Sink(s)") upgrd_sink_lbl.grid(column= 2, row = 4) van_sink_lbl = tk.Label(master=self.add_on_frm,text = "Vanity Sink(s)") van_sink_lbl.grid(column= 0, row = 5) cust_sink_temp_lbl = tk.Label(master=self.add_on_frm,text = "Customer Sink Template(s)") cust_sink_temp_lbl.grid(column= 2, row = 5) def load_button_frame(self): submit_btn = tk.Button(master=self.button_frm, text='Submit') submit_btn.grid(column=0,row=0) clear_btn = tk.Button(master=self.button_frm,text='Clear',command=self.clear_entries) clear_btn.grid(column=1, row=0) advanced_btn = tk.Button(master=self.button_frm,text='Advanced') advanced_btn.grid(column=2, row=0) def clear_entries(self): entries = (self.trip_ent, self.raw_cutouts_ent, self.radii_ent, self.arcs_ent, self.splay_ent, #list of entry boxes on the form self.wtrfall_ent, self.um_sink_inst_ent, self.um_sink_inst_ent, self.farm_sink_co_ent, self.std_sink_ent, self.upgrd_sink_ent, self.van_sink_ent, self.cust_sink_temp_ent) for entry in entries: entry.delete(0,tk.END) new_quote = QuoteForm(my_window) A: It's all about yourvalidate_ent function. Only when it returns true then your entry text can change. While typing tkinter just sent single chars like '1','2','a'. Even when you remove with backspace, this function gets the character you are trying to remove. However when you try to clear it function gets as an input whole string like '123543123'. 
This does not match the r'[0-9]' regular expression, so you return False and tkinter denies removing it. There are two simple solutions to fix this. The first one adds another condition for longer input, like: def validate_ent(self,input): if not input: return True elif re.fullmatch(r'[0-9]',input): return True if(len(input)>2): return True return False However, I do not recommend this one because if someone copy-pastes longer input, it can write letters inside the entry boxes. def validate_ent(self,input): if not input: return True elif re.fullmatch(r'[0-9]*',input): return True return False Here we added an asterisk to the regular expression. Now it accepts digit strings of any length, so people can also paste numbers that fit this rule. Also, removing works as expected!
Tkinter delete not working on referenced entry when referencing length of entry
I have a tkinter window class that I've made and my delete function is not working properly. my_window = tk.Tk() class QuoteForm(): def __init__(self,master): self.file_data = '' self.master = master self.master.rowconfigure(0, weight=1) self.master.rowconfigure(1, weight= 1) self.master.rowconfigure(2, weight = 1) master.geometry('600x400') master.resizable(False,False) #create the frames self.directory_frm = tk.Frame(master=master) self.directory_frm.grid(row=0) #this is the frame for the directory self.add_on_frm = tk.Frame(master=master) self.add_on_frm.grid(row=1) #this is the frame for add-ons input self.button_frm = tk.Frame(master=master) self.button_frm.grid(row=2) #this is the frame for #creates buttons, entries, labels self.load_directory_frame() #creates and grids the directory button self.load_add_on_frame() #creates and grids the entry buttons and labels self.load_button_frame() #creates and grids the buttons my_window.mainloop() def load_add_on_frame(self): vcmd = (self.master.register(self.validate_ent), '%S') #create inputs and labels for add-ons self.trip_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd, name='trip_ent') self.trip_ent.grid(column= 1, row = 0) self.raw_cutouts_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.raw_cutouts_ent.grid(column= 3, row = 0) def clear_entries(self): entries = (self.trip_ent, self.raw_cutouts_ent) #list of entries to loop (there are a total of 12 in the actual code) for entry in entries: entry.delete(0,len(entry.get())) #this is where the trouble seems to happen new_quote = QuoteForm(my_window) My problem is on the second-to-last line of code (starting with 'entry.delete'). Typically you would do 'entry.delete(0,END)', but because entry is a variable the code won't run with END. 'END' is an invalid index, and 'end' just does the same as pulling the length, so I tried to make it dynamic by making the 'end' the length of whatever is in the entry. When I do that, however, it deletes nothing [I also tried forcing it with int(len(entry.get()))]. If I manually enter an integer it will delete everything up to that integer, including if it's the same as the length of that entry, and I put breaks to confirm that I'm getting an int return and I am. I realize I could just write a line of code to delete each entry individually, but there's a total of 12 and I would like to clean it up. 
I'm adding the full code to be able to run below import os import re import tkinter as tk from tkinter import filedialog as fd from tkinter import messagebox import pandas as pd my_window = tk.Tk() class QuoteForm(): def __init__(self,master): self.file_data = '' self.master = master self.master.rowconfigure(0, weight=1) self.master.rowconfigure(1, weight= 1) self.master.rowconfigure(2, weight = 1) master.geometry('600x400') master.resizable(False,False) self.directory_frm = tk.Frame(master=master) self.directory_frm.grid(row=0) #this is the frame for the directory self.add_on_frm = tk.Frame(master=master) self.add_on_frm.grid(row=1) #this is the frame for add-ons input self.button_frm = tk.Frame(master=master) self.button_frm.grid(row=2) #this is the frame for self.load_directory_frame() self.load_add_on_frame() self.load_button_frame() my_window.mainloop() @staticmethod def get_quote_data(filepath): #read csv to get job infomation for pricing try: if filepath: job_info = pd.read_csv(filepath, index_col=0, #set index column skiprows=range(4), #skip first 4 rows usecols=['Item','Quan']) job_info = job_info.drop(labels='Grand Total:', axis= 0) customer_info = pd.read_csv(filepath, header=None, skiprows= lambda x: x not in range(2), #skip any row beyond first two rows usecols=[0,1]) #use first two columns customer_info = {customer_info.at[0,0].replace(':',''): customer_info.at[0,1], ##formatting the data for legibility customer_info.at[1,0].replace(':','') : customer_info.at[1,1]} return [customer_info, job_info] except: messagebox.showerror("Data Invalid", "Please make sure you select a valid estimate CSV file.") def sink_check(self): ####this is to be used at the submit buttons to confirm that there are not more sinks than cutouts cutouts = self.um_sink_inst_ent.get() sink_quan_list = (self.std_sink_ent.get(),self.upgrd_sink_ent.get(),self.van_sink_ent.get(),self.cust_sink_temp_ent.get()) sinks = sum(sink_quan_list) if sinks > cutouts: return False ###check that the sinks included does not exceed the number of sinks charged for install return True def validate_ent(self,input): if not input: return True elif re.fullmatch(r'[0-9]',input): return True return False def open_file(self): file = fd.askopenfile(mode='r', filetypes=[('CSV Files', '*.csv')]) if file: filepath = os.path.abspath(file.name) file_data = self.get_quote_data(filepath) cust_name = file_data[0]['Name'] job_addr = file_data[0]['Addr'] self.file_select_text['text'] = f"{job_addr} for {cust_name} is currently selected" def load_directory_frame(self): file_select_btn = tk.Button(master=self.directory_frm,text= "Select a file",command=self.open_file) file_select_btn.grid(column=0, row=0) self.file_select_text = tk.Label(master=self.directory_frm, text = "No File Selected") self.file_select_text.grid(column=1, row=0) def load_add_on_frame(self): vcmd = (self.master.register(self.validate_ent), '%S') #create inputs and labels for add-ons self.trip_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd, name='trip_ent') self.trip_ent.grid(column= 1, row = 0) self.raw_cutouts_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.raw_cutouts_ent.grid(column= 3, row = 0) self.radii_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.radii_ent.grid(column= 1, row = 1) self.arcs_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.arcs_ent.grid(column= 3, row = 1) self.splay_ent = 
tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.splay_ent.grid(column= 1, row = 2) self.wtrfall_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.wtrfall_ent.grid(column= 3, row = 2) self.um_sink_inst_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.um_sink_inst_ent.grid(column= 1, row = 3) self.farm_sink_co_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.farm_sink_co_ent.grid(column= 3, row = 3) self.std_sink_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.std_sink_ent.grid(column= 1, row = 4) self.upgrd_sink_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.upgrd_sink_ent.grid(column= 3, row = 4) self.van_sink_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.van_sink_ent.grid(column= 1, row = 5) self.cust_sink_temp_ent = tk.Entry(master=self.add_on_frm,validate = 'key', validatecommand = vcmd) self.cust_sink_temp_ent.grid(column= 3, row = 5) trip_lbl = tk.Label(master=self.add_on_frm,text = "Extra Trip(s)") trip_lbl.grid(column= 0, row = 0) raw_cutouts_lbl = tk.Label(master=self.add_on_frm,text = "Unpolished Cutout(s)") raw_cutouts_lbl.grid(column= 2, row = 0) radii_lbl = tk.Label(master=self.add_on_frm,text = "Radii") radii_lbl.grid(column= 0, row = 1) arcs_lbl = tk.Label(master=self.add_on_frm,text = "Arc(s)") arcs_lbl.grid(column= 2, row = 1) splay_lbl = tk.Label(master=self.add_on_frm,text = "Splay(s)") splay_lbl.grid(column= 0, row = 2) wtrfall_lbl = tk.Label(master=self.add_on_frm,text = "Waterfal Leg(s)") wtrfall_lbl.grid(column= 2, row = 2) um_sink_inst_lbl = tk.Label(master=self.add_on_frm,text = "Install of UM Sink(s)") um_sink_inst_lbl.grid(column= 0, row = 3) farm_sink_co_lbl = tk.Label(master=self.add_on_frm,text = "Farm Sink C/O") farm_sink_co_lbl.grid(column= 2, row = 3) std_sink_lbl = tk.Label(master=self.add_on_frm,text = "Standard 18ga Sink(s)") std_sink_lbl.grid(column= 0, row = 4) upgrd_sink_lbl = tk.Label(master=self.add_on_frm,text = "Upgrade 18ga Sink(s)") upgrd_sink_lbl.grid(column= 2, row = 4) van_sink_lbl = tk.Label(master=self.add_on_frm,text = "Vanity Sink(s)") van_sink_lbl.grid(column= 0, row = 5) cust_sink_temp_lbl = tk.Label(master=self.add_on_frm,text = "Customer Sink Template(s)") cust_sink_temp_lbl.grid(column= 2, row = 5) def load_button_frame(self): submit_btn = tk.Button(master=self.button_frm, text='Submit') submit_btn.grid(column=0,row=0) clear_btn = tk.Button(master=self.button_frm,text='Clear',command=self.clear_entries) clear_btn.grid(column=1, row=0) advanced_btn = tk.Button(master=self.button_frm,text='Advanced') advanced_btn.grid(column=2, row=0) def clear_entries(self): entries = (self.trip_ent, self.raw_cutouts_ent, self.radii_ent, self.arcs_ent, self.splay_ent, #list of entry boxes on the form self.wtrfall_ent, self.um_sink_inst_ent, self.um_sink_inst_ent, self.farm_sink_co_ent, self.std_sink_ent, self.upgrd_sink_ent, self.van_sink_ent, self.cust_sink_temp_ent) for entry in entries: entry.delete(0,tk.END) new_quote = QuoteForm(my_window)
[ "It's all about yourvalidate_ent function. Only when it returns true then your entry text can change. While typing tkinter just sent single chars like '1','2','a'. Even when you remove with backspace, this function gets the character you are trying to remove. However when you try to clear it function gets as an input whole string like '123543123'. This is not takes place inside r'[0-9]' reguler expression and you return false so tkinter denies removing it.\nThere is two simple solution to fix this.\nFirst one add another condition for longer input like:\ndef validate_ent(self,input):\n if not input:\n return True\n elif re.fullmatch(r'[0-9]',input):\n return True\n if(len(input)>2):\n return True\n return False\n\nHowever I do not recommend this one because if someone copy paste longer inputs then it can write letters inside entry boxes.\ndef validate_ent(self,input):\n if not input:\n return True\n elif re.fullmatch(r'[0-9]*',input):\n return True\n return False\n\nIn here we added a asteriks to reguler expression. Now it's accepting numbers bigger then 9. Now people can also paste numbers that fits into this rule. Also removing works as expected!\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter", "tkinter_entry" ]
stackoverflow_0074464010_python_tkinter_tkinter_entry.txt
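An equivalent of the answer's second validator without a regular expression, sketched with str.isdigit; the empty-string check keeps backspace and clearing working, just as in the answer:

def validate_ent(self, input):
    # Empty string must stay valid so deletions can empty the box;
    # str.isdigit() accepts whole multi-digit strings, unlike r'[0-9]'.
    return input == '' or input.isdigit()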
Q: Two forms on same template in django. How to coordinate the template with the views.py? I have a template with two forms like this and two textareas where the uploaded content will be returned: <form class="form-inline" role="form" action="/controlepunt140" method="POST" enctype="multipart/form-data" id="form_pdf" > <div class="form-group"> {% csrf_token %} {{ form_pdf }} <button type="submit" name="form_pdf" class="btn btn-warning">Upload!</button> </div> </form> <div class="form-outline"> <div class="form-group"> <textarea class="inline-txtarea form-control" cols="70" rows="25"> {{content}}</textarea > <form class="form-inline" role="form" action="/controlepunt140" method="POST" enctype="multipart/form-data" id="form_excel" > <div class="form-group"> {% csrf_token %} {{ form }} <button type="submit" name="form_excel" class="btn btn-warning">Upload!</button> </div> </form> <textarea class="inline-txtarea form-control" cols="65" rows="25"> {{content_excel}}</textarea > and the views.py: class ReadingFile(View): def get(self, request): form = ProfileForm() return render(request, "main/controle_punt140.html", { "form": form }) def post(self, request): types_of_encoding = ["utf8", "cp1252"] submitted_form = ProfileForm(request.POST, request.FILES) content = '' if submitted_form.is_valid(): uploadfile = UploadFile(image=request.FILES["upload_file"]) name_of_file = str(request.FILES['upload_file']) uploadfile.save() for encoding_type in types_of_encoding: with open(os.path.join(settings.MEDIA_ROOT, f"{uploadfile.image}"), 'r', encoding=encoding_type) as f: if uploadfile.image.path.endswith('.pdf'): pass else: content = f.read() return render(request, "main/controle_punt140.html", { 'form': ProfileForm(), "content": content }) return render(request, "main/controle_punt140.html", { "form": submitted_form, }) and forms.py: class ProfileForm(forms.Form): upload_file = forms.FileField() and urls.py: urlpatterns = [ path('', views.starting_page, name='starting_page'), path('controlepunt140', views.ReadingFile.as_view(), name='controlepunt140') ] So this works for the first upload function (pdf). The output is returned to the textarea. But how do I make it also work with the second upload function, content_excel? I.e., how do I distinguish the two upload functions? So this part: return render(request, "main/controle_punt140.html", { 'form': ProfileForm(), "content": content }) return render(request, "main/controle_punt140.html", { "form": submitted_form, }) Would it be duplicated, one for pdf and one for excel? A: According to the name of the submit buttons: #FORM PDF <button type="submit" name="form_pdf" class="btn btn-warning">Upload!</button> #FORM EXCEL <button type="submit" name="form_excel" class="btn btn-warning">Upload!</button> So, in your views.py you can distinguish them this way: if request.POST.get('form_pdf'): .... elif request.POST.get('form_excel'): ....
Two forms on same template in django. How to coordinate the template with the views.py?
I have a template with two forms like this and two textareas where the uploaded content will be returned: <form class="form-inline" role="form" action="/controlepunt140" method="POST" enctype="multipart/form-data" id="form_pdf" > <div class="form-group"> {% csrf_token %} {{ form_pdf }} <button type="submit" name="form_pdf" class="btn btn-warning">Upload!</button> </div> </form> <div class="form-outline"> <div class="form-group"> <textarea class="inline-txtarea form-control" cols="70" rows="25"> {{content}}</textarea > <form class="form-inline" role="form" action="/controlepunt140" method="POST" enctype="multipart/form-data" id="form_excel" > <div class="form-group"> {% csrf_token %} {{ form }} <button type="submit" name="form_excel" class="btn btn-warning">Upload!</button> </div> </form> <textarea class="inline-txtarea form-control" cols="65" rows="25"> {{content_excel}}</textarea > and the views.py: class ReadingFile(View): def get(self, request): form = ProfileForm() return render(request, "main/controle_punt140.html", { "form": form }) def post(self, request): types_of_encoding = ["utf8", "cp1252"] submitted_form = ProfileForm(request.POST, request.FILES) content = '' if submitted_form.is_valid(): uploadfile = UploadFile(image=request.FILES["upload_file"]) name_of_file = str(request.FILES['upload_file']) uploadfile.save() for encoding_type in types_of_encoding: with open(os.path.join(settings.MEDIA_ROOT, f"{uploadfile.image}"), 'r', encoding=encoding_type) as f: if uploadfile.image.path.endswith('.pdf'): pass else: content = f.read() return render(request, "main/controle_punt140.html", { 'form': ProfileForm(), "content": content }) return render(request, "main/controle_punt140.html", { "form": submitted_form, }) and forms.py: class ProfileForm(forms.Form): upload_file = forms.FileField() and urls.py: urlpatterns = [ path('', views.starting_page, name='starting_page'), path('controlepunt140', views.ReadingFile.as_view(), name='controlepunt140') ] So this works for the first upload function (pdf). The output is returned to the textarea. But how do I make it also work with the second upload function, content_excel? I.e., how do I distinguish the two upload functions? So this part: return render(request, "main/controle_punt140.html", { 'form': ProfileForm(), "content": content }) return render(request, "main/controle_punt140.html", { "form": submitted_form, }) Would it be duplicated, one for pdf and one for excel?
[ "According to the name of the submit buttons:\n#FORM PDF\n<button type=\"submit\" name=\"form_pdf\" class=\"btn btn-warning\">Upload!</button>\n\n#FORM EXCEL\n<button type=\"submit\" name=\"form_excel\" class=\"btn btn-warning\">Upload!</button>\n\nSo, in your views.py you can distinguish them on this way:\nif request.POST.get('form_pdf'):\n ....\nelif request.POST.get('form_excel'):\n ....\n\n" ]
[ 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074464152_django_python.txt
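Folding the answer's dispatch into the class-based view from the question might look like the sketch below; handle_pdf and handle_excel are hypothetical helpers standing in for the existing PDF logic and the new Excel logic.

def post(self, request):
    if request.POST.get('form_pdf'):
        return self.handle_pdf(request)    # hypothetical helper for the PDF form
    elif request.POST.get('form_excel'):
        return self.handle_excel(request)  # hypothetical helper for the Excel form
    return render(request, "main/controle_punt140.html", {
        "form": ProfileForm(),
    })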
Q: Loading pandas data frame from pickle file in S3 bucket to AWS Lambda - problem with type I created a machine-learning model with a KNN classifier. Then, I made a pickle file of the test dataset and uploaded it to the AWS S3 bucket using the AWS SDK. For testing purposes, I have downloaded it and tested the type with the following: with open("C:\\...path...\\test_features.pkl", 'rb') as f: test_data= pickle.load(f) print(type(test_data)) The result is <class 'pandas.core.frame.DataFrame'>, which is OK. However, when reading through AWS Lambda, the following part s3 = boto3.client('s3') test_features = s3.get_object(Bucket=bucket, Key= key) print(type(test_features)) gives <class 'dict'> How do I get the DataFrame type in AWS Lambda too? A: You will need to read the content first, then use pickle to load the content and create the data frame: test_features = s3.get_object(Bucket=bucket, Key= key) body = test_features['Body'].read() test_data = pickle.loads(body) print(type(test_data))
Loading pandas data frame from pickle file in S3 bucket to AWS Lambda - problem with type
I created a machine-learning model with a KNN classifier. Then, I made a pickle file of the test dataset and uploaded it to the AWS S3 bucket using the AWS SDK. For testing purposes, I have downloaded it and tested the type with the following: with open("C:\\...path...\\test_features.pkl", 'rb') as f: test_data= pickle.load(f) print(type(test_data)) The result is <class 'pandas.core.frame.DataFrame'>, which is OK. However, when reading through AWS Lambda, the following part s3 = boto3.client('s3') test_features = s3.get_object(Bucket=bucket, Key= key) print(type(test_features)) gives <class 'dict'> How do I get the DataFrame type in AWS Lambda too?
[ "You will need to read content first then use pickle to load the content and create data frame\ntest_features = s3.get_object(Bucket=bucket, Key= key)\nbody = test_features['Body'].read()\ntest_data = pickle.loads(body)\nprint(type(test_data))\n\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "mlops", "pandas", "python" ]
stackoverflow_0074464062_amazon_web_services_aws_lambda_mlops_pandas_python.txt
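A slightly shorter variant of the answer (a sketch): pandas can unpickle straight from a file-like buffer, so the body bytes can be wrapped instead of calling pickle directly.

import io
import boto3
import pandas as pd

s3 = boto3.client('s3')
obj = s3.get_object(Bucket=bucket, Key=key)  # bucket/key as in the question
test_data = pd.read_pickle(io.BytesIO(obj['Body'].read()))
print(type(test_data))  # <class 'pandas.core.frame.DataFrame'>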
Q: Saving a cross-validation trained model in Scikit I have trained a model in scikit-learn using cross-validation and a Naive Bayes classifier. How can I persist this model to later run against new instances? Here is simply what I have; I can get the CV scores but I don't know how to access the trained model: gnb = GaussianNB() scores = cross_validation.cross_val_score(gnb, data_numpy[0],data_numpy[1], cv=10) A: cross_val_score doesn't change your estimator, and it will not return a fitted estimator. It just returns the cross-validation scores of the estimator. To fit your estimator - you should call fit on it explicitly with the provided dataset. To save (serialize) it - you can use pickle: # To fit your estimator gnb.fit(data_numpy[0], data_numpy[1]) # To serialize import pickle with open('our_estimator.pkl', 'wb') as fid: pickle.dump(gnb, fid) # To deserialize estimator later with open('our_estimator.pkl', 'rb') as fid: gnb = pickle.load(fid) A: I could be mistaken about multioutput.RegressorChain()'s internals, but I believe you could supply RegressorChain w/ the same cv and run RegressorChain w/ just one dv. That would allow you to use .predict() as you'd like.
Saving a cross-validation trained model in Scikit
I have trained a model in scikit-learn using cross-validation and a Naive Bayes classifier. How can I persist this model to later run against new instances? Here is simply what I have; I can get the CV scores but I don't know how to access the trained model: gnb = GaussianNB() scores = cross_validation.cross_val_score(gnb, data_numpy[0],data_numpy[1], cv=10)
[ "cross_val_score doesn't changes your estimator, and it will not return fitted estimator. It just returns score of estimator of cross validation.\nTo fit your estimator - you should call fit on it explicitly with provided dataset.\nTo save (serialize) it - you can use pickle:\n# To fit your estimator\ngnb.fit(data_numpy[0], data_numpy[1])\n# To serialize\nimport pickle\nwith open('our_estimator.pkl', 'wb') as fid:\n pickle.dump(gnb, fid)\n# To deserialize estimator later\nwith open('our_estimator.pkl', 'rb') as fid:\n gnb = pickle.load(fid)\n\n", "I could be mistaken about multioutput.RegressorChain()'s internals, but I believe you could supply RegressorChain w/ the same cv and run RegressorChain w/ just one dv.\nThat would allow you to use .predict() as you'd like.\n" ]
[ 17, 0 ]
[]
[]
[ "cross_validation", "pickle", "python", "scikit_learn" ]
stackoverflow_0032700797_cross_validation_pickle_python_scikit_learn.txt
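As an alternative to pickle, the scikit-learn documentation points to joblib for persisting estimators; a sketch with the question's variables:

import joblib

gnb.fit(data_numpy[0], data_numpy[1])  # fit first; cross_val_score does not
joblib.dump(gnb, 'our_estimator.joblib')

# Later, in another process:
gnb_loaded = joblib.load('our_estimator.joblib')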
Q: Use pandas df column as legend label? When plotting a pandas DataFrame column, is it possible to use the DataFrame column name as the legend label instead of explicitly specifying the label? Example: import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame(data={'col1': [0, 2, 1, 3], 'col2': [9,7,8,9]}, index=[0, 1, 2, 3]) f = plt.figure() ax = f.subplots() ax.plot(df['col1'], label='col1') # How to not explicitly specify label? # ax.plot(df['col1']) # This does not produce a legend label ax.legend() A: Use the pandas plotting API: fig, ax = plt.subplots() df['col1'].plot(ax=ax) ax.legend()
Use pandas df column as legend label?
When plotting a pandas DataFrame column, is it possible to use the DataFrame column name as the legend label instead of explicitly specifying the label? Example: import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame(data={'col1': [0, 2, 1, 3], 'col2': [9,7,8,9]}, index=[0, 1, 2, 3]) f = plt.figure() ax = f.subplots() ax.plot(df['col1'], label='col1') # How to not explicitly specify label? # ax.plot(df['col1']) # This does not produce a legend label ax.legend()
[ "Use the pandas plotting API:\nfig, ax = plt.subplots()\ndf['col1'].plot(ax=ax)\nax.legend()\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "matplotlib", "pandas", "python" ]
stackoverflow_0074464530_dataframe_matplotlib_pandas_python.txt
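If staying with the plain Matplotlib interface from the question, a short sketch that reuses each column name as its own label:

f = plt.figure()
ax = f.subplots()
for col in df.columns:           # the column name doubles as the label
    ax.plot(df[col], label=col)
ax.legend()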
Q: My Arduino communicating with my Raspberry Pi isn't outputting correctly I'm trying to test sending a variable from my Raspberry Pi to my Arduino to turn my Stepper Motor, but it's not turning as it would if I put the variable into the Arduino code itself. Here is my code for the Arduino: #include <AccelStepper.h> AccelStepper stepper(1,7,6); // Defaults to AccelStepper::FULL4WIRE (4 pins) on 2, 3, 4, 5 void setup(){ Serial.begin(9600); stepper.setMaxSpeed(150); stepper.setAcceleration(100); stepper.setCurrentPosition(0); } void loop() { if(Serial.available() > 0){ int theta = Serial.read(); double theta_to_pulse = theta/1.8; stepper.runToNewPosition(theta_to_pulse); //stepper.runToNewPosition(0); //stepper.run(); } } Then here is my Python code via Raspberry Pi: import serial ser = serial.Serial('/dev/ttyACM0',9600) while True: theta = 90 ser.write(theta) Can I ask for some guidance? The Stepper Motor turns a bit when I run the python code, but never to the full point. A: Are you sure the Python code you wrote is correct? The code has syntax errors. There is nothing inside the loop. Do you get any exception? It should be like this: import serial ser = serial.Serial('/dev/ttyACM0',9600) while True: theta = 90 ser.write(theta)
My Arduino communicating with my Raspberry Pi isn't outputting correctly
I'm trying to test sending a variable from my Raspberry Pi to my Arduino to turn my Stepper Motor, but it's not turning as it would if I put the variable into the Arduino code itself. Here is my code for the Arduino: #include <AccelStepper.h> AccelStepper stepper(1,7,6); // Defaults to AccelStepper::FULL4WIRE (4 pins) on 2, 3, 4, 5 void setup(){ Serial.begin(9600); stepper.setMaxSpeed(150); stepper.setAcceleration(100); stepper.setCurrentPosition(0); } void loop() { if(Serial.available() > 0){ int theta = Serial.read(); double theta_to_pulse = theta/1.8; stepper.runToNewPosition(theta_to_pulse); //stepper.runToNewPosition(0); //stepper.run(); } } Then here is my Python code via Raspberry Pi: import serial ser = serial.Serial('/dev/ttyACM0',9600) while True: theta = 90 ser.write(theta) Can I ask for some guidance? The Stepper Motor turns a bit when I run the python code, but never to the full point.
[ "Are you sure the Python code you wrote is correct?\nThe code have syntax errors. There is nothing inside the loop. Do you get any exception?\nIt should be like this:\nimport serial\nser = serial.Serial('/dev/ttyACM0',9600) \n\nwhile True: \n theta = 90 \n ser.write(theta)\n\n" ]
[ 0 ]
[]
[]
[ "arduino", "pyserial", "python", "raspberry_pi", "stepper" ]
stackoverflow_0074464582_arduino_pyserial_python_raspberry_pi_stepper.txt
Q: Expanding Nested lists in Python without fully flattening Suppose I have a list with nested lists of strings, such as: items = ['Hello', ['Ben', 'Chris', 'Linda'], '! The things you can buy today are', ['Apples', 'Oranges']] I want a list of strings that combine and flatten the nested lists into all possibilities such that the result is: new_list = ['Hello Ben ! The things you can buy today are Apples', 'Hello Ben ! The things you can buy today are Oranges', 'Hello Chris ! The things you can buy today are Apples', 'Hello Chris ! The things you can buy today are Oranges', 'Hello Linda ! The things you can buy today are Apples', 'Hello Linda ! The things you can buy today are Oranges',] I've been looking through itertools documentation and nothing quite works as expected. I don't want to hard code iterations because this items list can range in number of items as well as number of nested lists. For example: list(itertools.chain(*items)) will flatten the list but it splits up individual characters in the string items. Part of the challenge is that some items in the list are strings, and others are additional lists. Would appreciate any help. Thanks A: You need itertools.product(). Here it is in action: >>> items = [['Hello'], ['Ben', 'Chris', 'Linda']] >>> list(itertools.product(*items)) [('Hello', 'Ben'), ('Hello', 'Chris'), ('Hello', 'Linda')] Since itertools.product() takes a list of lists as input, some transformation is needed in your code to convert 'Hello' to ['Hello'] - import itertools items = ['Hello', ['Ben', 'Chris', 'Linda'], '! The things you can buy today are', ['Apples', 'Oranges']] new_items = itertools.product(*[item if isinstance(item, list) else [item] for item in items]) new_list = [' '.join(x) for x in new_items] new_list: ['Hello Ben ! The things you can buy today are Apples', 'Hello Ben ! The things you can buy today are Oranges', 'Hello Chris ! The things you can buy today are Apples', 'Hello Chris ! The things you can buy today are Oranges', 'Hello Linda ! The things you can buy today are Apples', 'Hello Linda ! The things you can buy today are Oranges'] A: You could solve this with backtracking: def test(itm): n = len(itm) results = [] def dfs(idx, res): if idx == n: results.append(res.copy()) return if isinstance(itm[idx], list): for d in itm[idx]: res.append(d) dfs(idx+1, res) res.pop() else: res.append(itm[idx]) dfs(idx+1, res) res.pop() dfs(0, []) return results output: test(items) [['Hello', 'Ben', '! The things you can buy today are', 'Apples'], ['Hello', 'Ben', '! The things you can buy today are', 'Oranges'], ['Hello', 'Chris', '! The things you can buy today are', 'Apples'], ['Hello', 'Chris', '! The things you can buy today are', 'Oranges'], ['Hello', 'Linda', '! The things you can buy today are', 'Apples'], ['Hello', 'Linda', '! The things you can buy today are', 'Oranges']]
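A small reusable sketch of the itertools.product idea from the first answer (the helper name expand_choices is mine, not from the answers):
import itertools

def expand_choices(items):
    # wrap bare strings so product treats each as a single-option pool
    pools = [item if isinstance(item, list) else [item] for item in items]
    return [' '.join(combo) for combo in itertools.product(*pools)]

new_list = expand_choices(items)
Because itertools.product is lazy, you could also iterate over it directly instead of building the full list when the number of combinations gets large.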
Expanding Nested lists in Python without fully flattening
Suppose I have a list with nested lists of strings, such as: items = ['Hello', ['Ben', 'Chris', 'Linda'], '! The things you can buy today are', ['Apples', 'Oranges']] I want a list of strings that combine and flatten the nested lists into all possibilities such that the result is: new_list = ['Hello Ben ! The things you can buy today are Apples', 'Hello Ben ! The things you can buy today are Oranges', 'Hello Chris ! The things you can buy today are Apples', 'Hello Chris ! The things you can buy today are Oranges', 'Hello Linda ! The things you can buy today are Apples', 'Hello Linda ! The things you can buy today are Oranges',] I've been looking through itertools documentation and nothing quite works as expected. I don't want to hard code iterations because this items list can range in number of items as well as number of nested lists. For example: list(itertools.chain(*items)) will flatten the list but it splits up individual characters in the string items. Part of the challenge is that some items in the list are strings, and others are additional lists. Would appreciate any help. Thanks
[ "You need itertools.product().\nHere is it in action:\n>>> items = [['Hello'], ['Ben', 'Chris', 'Linda']]\n>>> list(itertools.product(*items))\n[('Hello', 'Ben'), ('Hello', 'Chris'), ('Hello', 'Linda')]\n\nSince itertools.product() takes list of lists as an input, so some transformation is needed in your code to convert 'Hello' to ['Hello'] -\nimport itertools\nitems = ['Hello', ['Ben', 'Chris', 'Linda'], '! The things you can buy today are', ['Apples', 'Oranges']]\nnew_items = itertools.product(*[item if isinstance(item, list) else [item] for item in items])\nnew_list = [' '.join(x) for x in new_items]\n\nnew_list:\n['Hello Ben ! The things you can buy today are Apples',\n 'Hello Ben ! The things you can buy today are Oranges',\n 'Hello Chris ! The things you can buy today are Apples',\n 'Hello Chris ! The things you can buy today are Oranges',\n 'Hello Linda ! The things you can buy today are Apples',\n 'Hello Linda ! The things you can buy today are Oranges']\n\n", "you could solve with backtracking:\ndef test(itm):\n n = len(itm)\n results = []\n def dfs(idx, res):\n if idx == n:\n results.append(res.copy())\n return\n if isinstance(itm[idx], list):\n for d in itm[idx]:\n res.append(d)\n dfs(idx+1, res)\n res.pop()\n else:\n res.append(itm[idx])\n dfs(idx+1, res)\n res.pop()\n\n dfs(0, [])\n return results\n\noutput:\ntest(items)\n\n[['Hello', 'Ben', '! The things you can buy today are', 'Apples'],\n ['Hello', 'Ben', '! The things you can buy today are', 'Oranges'],\n ['Hello', 'Chris', '! The things you can buy today are', 'Apples'],\n ['Hello', 'Chris', '! The things you can buy today are', 'Oranges'],\n ['Hello', 'Linda', '! The things you can buy today are', 'Apples'],\n ['Hello', 'Linda', '! The things you can buy today are', 'Oranges']]\n\n" ]
[ 3, 0 ]
[]
[]
[ "flatten", "list", "python", "python_itertools" ]
stackoverflow_0074464216_flatten_list_python_python_itertools.txt
Q: making first letter of input uppercase with split in python I got most of the code down but I am having issues getting the string to space back out after making the first letter of each word uppercase. Here's what I have so far: message = input('Write a short message.') new_message = message.split() glue = "" for item in new_message: glue += item[0].upper() + item[1:] print(glue) A: Try with: message.capitalize() A: If you want to capitalize each word you can try capitalize() and the code will look like this: message = input('Write a short message.') new_message = message.split() cap_message = [x.capitalize() for x in new_message] print(cap_message) message.split() - split the string into a list using the default separator, which is any whitespace. The result is a list of words. capitalize each word in the list using List Comprehension. The list of capitalized words is saved in the cap_message variable for code clarity. print the list of capitalized words
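To address the spacing issue in the question directly, a minimal sketch: join the capitalized words back together with spaces, or use str.title(), which capitalizes the first letter of each word in one call:
message = input('Write a short message.')
glue = ' '.join(word.capitalize() for word in message.split())
print(glue)
# or, more simply:
print(message.title())
# note: title() also capitalizes after apostrophes, e.g. "don't" -> "Don'T"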
making first letter of input uppercase with split in python
I got most of the code down but I am having issues getting the string to space back out after making the first letter of each word uppercase. Here's what I have so far: message = input('Write a short message.') new_message = message.split() glue = "" for item in new_message: glue += item[0].upper() + item[1:] print(glue)
[ "try with:\nmessage.capitalize()\n\n", "If you want to capitalize each word you can try capitalize() and the code will look like this:\n message = input('Write a short message.')\n \n new_message = message.split()\n cap_message = [x.capitalize() for x in new_message]\n print(cap_message)\n\n\nmessage.split() - split the string into a list using default separator which is any whitespace. The result is a list of words.\ncapitalize each word in the list using List Comprehention. The list of capitalized words is saved in cap_message variable for code clarity.\nprint the list of capitalized words\n\n" ]
[ 1, 0 ]
[]
[]
[ "for_loop", "input", "python", "split", "uppercase" ]
stackoverflow_0074464471_for_loop_input_python_split_uppercase.txt
Q: Appending to array or looping iterable from fixed array I have been using this method for Ranges. I can't work out an equivalent method for fixed lists/arrays. # What I've been using & OUTPUT I'm looking for. degrees = np.arange(10,50,10) ITER = np.array(degrees) for i in range( 4): x1 = np.sin(np.radians(ITER)) y1 = np.cos(np.radians(ITER )) XY = np.column_stack((np.asarray(x1),np.asarray(y1))) print(XY) Bad code: # appending to array has seen many failures. # appended array always prints empty. must have a false assumption xy1 = np.array([]) degrees = np.array([10, 20, 30, 40]) for degree in np.nditer(degrees): x1 = np.sin(np.radians(degree)) y1 = np.cos(np.radians(degree)) #np.append(xy1,[x1,y1]).reshape(2,1) #ugh = np.asarray([x1,y1]) #a = np.append(xy1,[[x1,y1]],axis =0).reshape(2,-1) #a = np.append(xy1,[[x1],[y1]],axis =0)#.reshape(2,-1) #np.append(xy1,ugh, axis =0).reshape(2,1) #np.append(xy1,ugh, axis =0) #a = np.append(xy1,[ugh]) XY = np.column_stack((np.asarray(x1),np.asarray(y1))) print(xy1) # OUTPUT should be same as working example above With the benefit of hindsight I would have used lists... But now I wish to use this as a learning opportunity. Update: Answers as provided by @hpaulj # Iterate through Range degrees = np.arange(10,50,10) x1 = np.sin(np.radians(degrees)) y1 = np.cos(np.radians(degrees )) XY = np.column_stack((x1, y1)) # Iterate through fixed list degrees = np.array([10, 20, 30, 40]) XY = np.zeros((0,2)) for rad in np.radians(degrees): XY = np.append(XY, [[np.sin(rad), np.cos(rad)]], axis=0) My main mistake was initializing the array with the wrong shape. XY = np.zeros((0,2)) degrees = np.array([10, 20, 30, 40]) for rads in np.radians(degrees): x1 = np.sin(rads) y1 = np.cos(rads) XY = np.append(XY, [[x1,y1]]).reshape(-1,2) A: Your first code runs; why are you trying to write something else? But I think you need to understand the first one better. Let's run it: In [449]: degrees = np.arange(10,50,10) ...: ITER = np.array(degrees) ...: for i in range( 4): ...: x1 = np.sin(np.radians(ITER)) ...: y1 = np.cos(np.radians(ITER )) ...: XY = np.column_stack((np.asarray(x1),np.asarray(y1))) In [450]: degrees Out[450]: array([10, 20, 30, 40]) In [451]: ITER Out[451]: array([10, 20, 30, 40]) arange produces an array (READ THE DOCS); so why the extra np.array(degrees) call? It doesn't change anything; it just makes another copy. In [452]: XY Out[452]: array([[0.17364818, 0.98480775], [0.34202014, 0.93969262], [0.5 , 0.8660254 ], [0.64278761, 0.76604444]]) degrees is (4,) shape; x1 is as well, and XY is (4,2), concatenating two 1d arrays as columns. Why the iteration for range(4)? Just to make the code run slower by repeating the sin calculations? You do the same thing 4 times, and don't accumulate anything. It just uses the last run to make XY. And x1 is already an array; why the extra np.array(x1) wrapping? In [453]: x1,y1 Out[453]: (array([0.17364818, 0.34202014, 0.5 , 0.64278761]), array([0.98480775, 0.93969262, 0.8660254 , 0.76604444])) I don't know whether you are just being careless, or don't understand the basics of Python iteration. This is all you need: degrees = np.arange(10,50,10) x1 = np.sin(np.radians(degrees)) y1 = np.cos(np.radians(degrees)) XY = np.column_stack((x1, y1)) 2nd try I just noticed you use np.nditer. Why? If you are going to iterate, use the straightforward for degree in degrees: ....
nditer is not a faster way of iterating; the docs may be misleading in this regard. It is really only useful as a stepping stone toward writing fancy iterations in cython. The Python version is slow - and overly complicated for most users. As the first code shows, you don't need to iterate to calculate sin/cos for all degrees. But if you must iterate, here's a simple clear version: In [457]: degrees = np.arange(10,50,10) ...: x1, y1 = [], [] ...: for degree in degrees: ...: x1.append(np.sin(np.radians(degree))) ...: y1.append(np.cos(np.radians(degree))) ...: XY = np.column_stack((np.array(x1), np.array(y1))) In [458]: x1 Out[458]: [0.17364817766693033, 0.3420201433256687, 0.49999999999999994, 0.6427876096865393] x1,y1 are lists; list append is relatively fast, and simple. np.append is slow and hard to use correctly. Don't use it (like nditer it needs a stronger disclaimer, and maybe even removal). Here's a version of iteration with np.append that works; I don't recommend it, but it illustrates how np.append might work: In [461]: degrees = np.arange(10,50,10) ...: XY = np.zeros((0,2)) ...: for rad in np.radians(degrees): ...: XY = np.append(XY, [[np.sin(rad), np.cos(rad)]], axis=0) I do just one np.radians conversion. No need to repeat or do it in the iteration. I initialize XY as a (0,2) array - and I add a (1,2) array to it at each iteration. np.append with axis is just XY = np.concatenate((XY, [[np.sin...]]), axis=0) Your failed tries have various problems. np.array([]) has shape (0,). You can't join a (2,) to that with axis. np.append returns a new array; it does not work in-place. None of your tries changes xy1. Looking more at that second block of code, I get the impression that you are just being careless. You mix xy1, x1, y1, a, XY without paying attention to how they might, or might not, be related.
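A middle-ground pattern worth adding here, as a sketch: when the output size is known up front, preallocate the array once and fill rows by index, which avoids both list appends and the repeated copying that np.append does:
import numpy as np

degrees = np.array([10, 20, 30, 40])
XY = np.empty((len(degrees), 2))      # allocate the final (4, 2) shape once
for i, rad in enumerate(np.radians(degrees)):
    XY[i] = np.sin(rad), np.cos(rad)  # assign each row in place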
Appending to array or looping iterable from fixed array
I have been using this method for Ranges. I can't work out an equivalent method for fixed lists/arrays. # What I've been using & OUTPUT I'm looking for. degrees = np.arange(10,50,10) ITER = np.array(degrees) for i in range( 4): x1 = np.sin(np.radians(ITER)) y1 = np.cos(np.radians(ITER )) XY = np.column_stack((np.asarray(x1),np.asarray(y1))) print(XY) Bad code: # appending to array has seen many failures. # appended array always prints empty. must have a false assumption xy1 = np.array([]) degrees = np.array([10, 20, 30, 40]) for degree in np.nditer(degrees): x1 = np.sin(np.radians(degree)) y1 = np.cos(np.radians(degree)) #np.append(xy1,[x1,y1]).reshape(2,1) #ugh = np.asarray([x1,y1]) #a = np.append(xy1,[[x1,y1]],axis =0).reshape(2,-1) #a = np.append(xy1,[[x1],[y1]],axis =0)#.reshape(2,-1) #np.append(xy1,ugh, axis =0).reshape(2,1) #np.append(xy1,ugh, axis =0) #a = np.append(xy1,[ugh]) XY = np.column_stack((np.asarray(x1),np.asarray(y1))) print(xy1) # OUTPUT should be same as working example above With the benefit of hindsight I would have used lists... But now I wish to use this as learning opportunity. Update: Answers as provided by @hpaulj # Iterate through Range degrees = np.arange(10,50,10) x1 = np.sin(np.radians(degrees)) y1 = np.cos(np.radians(degrees )) XY = np.column_stack((x1, y1)) # Iterate through fixed list degrees = np.array([10, 20, 30, 40]) XY = np.zeros((0,2)) for rad in np.radians(degrees): XY = np.append(XY, [[np.sin(rad), np.cos(rad)]], axis=0) My main mistake was the initialization of the array as wrong shape. XY = np.zeros((0,2)) degrees = np.array([10, 20, 30, 40]) for rads in np.radians(degrees): x1 = np.sin(rads) y1 = np.cos(rads) XY = np.append(XY, [[x1,y1]]).reshape(-1,2)
[ "Your first code runs; why are you trying to write something else?\nBut I think you need to understand the first one better. Let's run it:\nIn [449]: degrees = np.arange(10,50,10)\n ...: ITER = np.array(degrees)\n ...: for i in range( 4): \n ...: x1 = np.sin(np.radians(ITER))\n ...: y1 = np.cos(np.radians(ITER ))\n ...: XY = np.column_stack((np.asarray(x1),np.asarray(y1)))\n\nIn [450]: degrees\nOut[450]: array([10, 20, 30, 40])\n\nIn [451]: ITER\nOut[451]: array([10, 20, 30, 40])\n\narange produces an array (READ THE DOCS); so why the extra np.array(degrees) call? It doesn't change any; it just makes a another copy.\nIn [452]: XY\nOut[452]: \narray([[0.17364818, 0.98480775],\n [0.34202014, 0.93969262],\n [0.5 , 0.8660254 ],\n [0.64278761, 0.76604444]])\n\ndegrees is (4,) shape; x1 is as well, and XY is (4,2), concatenating two 1d arrays as columns.\nWhy the iteration for range(4)? Just to make the code run slower by repeating the sin calculations? You do the same thing 4 times, and don't accumulate anything. It just uses the last run to make XY. And x1 is already an array; why the extra np.array(x1) wrapping?\nIn [453]: x1,y1\nOut[453]: \n(array([0.17364818, 0.34202014, 0.5 , 0.64278761]),\n array([0.98480775, 0.93969262, 0.8660254 , 0.76604444]))\n\nIn [454]: np.sin(np.radians(ITER))\nOut[454]: array([0.17364818, 0.34202014, 0.5 , 0.64278761])\n\nI don't know whether you are just being careless, or don't understand the basics of Python iteration.\nThis is all you need:\ndegrees = np.arange(10,50,10)\nx1 = np.sin(np.radians(ITER))\ny1 = np.cos(np.radians(ITER ))\nXY = np.column_stack((x1, y1))\n\n2nd try\nI just noticed you use np.nditer. Why? If you are going to iterate, use the straight forward\n for degree in degress:\n ....\n\nnditer is not a faster way of iterating; the docs may be misleading in this regard. It is really only useful as a stepping stone toward writing fancy iterations in cython. The python version is slow - and overly complicated for most users.\nAs the first code shows, you don't need to iterate to calculate sin/cos for all degrees. But if you must iterate, here's a simple clear version:\nIn [457]: degrees = np.arange(10,50,10)\n ...: x1, y1 = [], []\n ...: for degree in degrees:\n ...: x1.append(np.sin(np.radians(degree)))\n ...: y1.append(np.cos(np.radians(degree)))\n ...: XY = np.column_stack((np.array(x1), np.array(y1)))\n\nIn [458]: x1\nOut[458]: \n[0.17364817766693033,\n 0.3420201433256687,\n 0.49999999999999994,\n 0.6427876096865393]\n\nx1,y1 are lists; list append is relatively fast, and simple. np.append is slow and hard to use correctly. Don't use it (like nditer it needs a stronger disclaimer, and maybe even removal).\nHere's a version of iteration with np.append that works; I don't recommend it, but it illustrates how np.append might work:\n In [461]: degrees = np.arange(10,50,10)\n ...: XY = np.zeros((0,2))\n ...: for rad in np.radians(degrees):\n ...: XY = np.append(XY, [[np.sin(rad), np.cos(rad)]], axis=0)\n\nI do just one np.radians conversion. No need to repeat or do it in the iteration.\nI initial XY as a (0,2) array - and I add a (1,2) array to it at each iteration. np.append with axis is just\n XY = np.concatenate((XY, [[np.sin...]]), axis=0)\n\nYour failed tries have various problems. np.array([]) has shape (0,). You can't join a (2,) to that with axis). np.append returns a new array; it does not work in-place. None of your tries changes xy1.\nLooking more at that second block of code, I get the impression that you are just being careless. 
You mix xy1, x1, y1, a, XY without paying attention to how they might, or might not, be related.\n" ]
[ 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074456449_numpy_python.txt
Q: Calling and splitting a text file I have a text file called test.txt with the words "cat dog frog" in it. I need to split it so each word appears on a new line. Can someone help me please? def get_tokens_from_file(test): with open("test.txt") as f: A: Try this: with open('test.txt','r') as f: for line in f: for word in line.split(): print(word) Or if you want to flatten it: with open('test.txt') as f: flat_list=[word for line in f for word in line.split()] A: The following function does what you need. The function reads each line of the source file (e.g. test.txt) and writes the words of that line as separate lines in the output.txt in the same directory. The function works for any number of lines in your source data file. def get_tokens_from_file(filename): with open(filename) as f: for line in f: tokens = line.split() with open("output.txt", 'a') as f: for word in tokens: f.write(word + '\n') Now you can run the function with the following command. get_tokens_from_file("test.txt") For example: Assume the test.txt has the following lines: cat dog frog bull zebra bee cow horse snake After you run the function, the result will be saved in the output.txt as follows: cat dog frog bull zebra bee cow horse snake
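If the whole file fits in memory, a sketch that reads it once and writes one word per line (assuming the same test.txt):
with open('test.txt') as f:
    words = f.read().split()   # split on any whitespace, including newlines

with open('output.txt', 'w') as out:
    out.write('\n'.join(words) + '\n')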
Calling and splitting a text file
I have a text file called test.txt with the words "cat dog frog" in it. I need to split it so each word appears on a new line. Can someone help me please? def get_tokens_from_file(test): with open("test.txt") as f:
[ "Try this:\nwith open('test.txt','r') as f:\n for line in f:\n for word in line.split():\n print(word) \n\nOr if you want to flatten it:\nwith open('test.txt') as f:\n flat_list=[word for line in f for word in line.split()]\n\n", "The following function does what you need.\nThe function reads each line of the source file (e.g. test.txt) and writes the words of that line as separate lines in the ouput.txt in the same directory.\nThe function works for any number of lines in your source data file.\ndef get_tokens_from_file(filename):\n with open(filename) as f:\n for line in f:\n tokens = line.split()\n with open(\"output.txt\", 'a') as f:\n for word in tokens:\n f.write(word + '\\n')\n\nNow you can run the function by the following command.\nget_tokens_from_file(\"test.txt\")\n\nFor example:\nAssume the test.txt has the following lines:\ncat dog frog\nbull zebra bee\ncow horse snake\n\nAfter you run the function, the result will be saved in the output.txt as follow:\ncat\ndog\nfrog\nbull\nzebra\nbee\ncow\nhorse\nsnake\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074464290_python.txt
Q: Google OR Tools CP-SAT Solver - scheduling problem with objective to even out shift distribution without hard constraints (max/min per period) I am using Google ORTools using the Python wrapper to solve a nurse scheduling problem but I am having trouble finding a way to implement a constraint that attempts to evenly distribute worked shifts without using hard constraints. As an example, I am working with a number of weeks, distinct shifts, and employees. For illustration I add a very simple constraint which is that no employee can work on average more than the others for the full time period. In the example below, one of many solutions with 5 weeks, 5 shifts, and 5 employees is to just have employee 0 work all the shifts in week 0, employee 1 all the shifts in week 1, etc. What I want to add, however, is a constraint that maximizes the number of distinct weeks each employee works WITHOUT using a weekly constraint such as each employee can only work up to 1 shift per week. A few things that I have attempted but have failed to get working: Create a binary grid indexed by employee and week with a 1 if the employee has at least 1 shift that week and 0 otherwise and maximize the total sum of the grid. Use a hard constraint such that each employee can only work so many shifts in a given week. This is what I want to avoid, I would rather the solver consider this as an objective than a hard constraint. My sample code is below: import os import math import pandas as pd from ortools.sat.python import cp_model num_weeks = 5 num_shifts = 5 num_employees = 5 all_weeks = range(num_weeks) all_shifts = range(num_shifts) all_employees = range(num_employees) model = cp_model.CpModel() assignments = {} #Calculate a maximum number of shifts to balance everyone out for the full period max_total_shifts = math.ceil((num_weeks*num_shifts)/num_employees) #Create a space of new boolean variables where the value is 1 if the employee is working that shift in that week, else 0 for w in all_weeks: for s in all_shifts: for e in all_employees: assignments[(w,s,e)] = model.NewBoolVar('w%i-s%i-e%i' % (w,s,e) ) model.AddAtMostOne(assignments[(w,s,e)] for e in all_employees) #Add the max constraint for e in all_employees: model.Add(sum(assignments[w,s,e] for w in all_weeks for s in all_shifts) <= max_total_shifts) #Assign as many shifts as possible model.Maximize( sum(assignments[(w,s,e)] for w in all_weeks for s in all_shifts for e in all_employees) ) #Solve the model solver = cp_model.CpSolver() status = solver.Solve(model) print(status) #Using pandas, view the solution solution = pd.DataFrame() data = [] for i,field in enumerate(model._CpModel__model.variables): model._CpModel__model.solution_hint.vars.extend([i]) model._CpModel__model.solution_hint.values.extend([solver._CpSolver__solution.solution[i]]) if solver._CpSolver__solution.solution[i]==1: data.append( [field.name,solver._CpSolver__solution.solution[i] ]) #print("{} has value {}".format(field.name,solver._CpSolver__solution.solution[i])) # solution = pd.DataFrame(data) A: Fairness is the most complex question in OR. You need to try to capture what you want with equations as simple as possible. Standard deviation and variance are not simple. Minimizing max(worked per person) - min(worked per person) is simple. Good luck; coming up with a business-acceptable definition of a good schedule is tough.
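A sketch of the max-minus-min idea from the answer in CP-SAT terms, reusing the assignments variables from the question (AddMaxEquality and AddMinEquality are standard CP-SAT constraints; the variable names are mine):
totals = []
for e in all_employees:
    t = model.NewIntVar(0, num_weeks * num_shifts, 'total_e%i' % e)
    model.Add(t == sum(assignments[w, s, e] for w in all_weeks for s in all_shifts))
    totals.append(t)

max_load = model.NewIntVar(0, num_weeks * num_shifts, 'max_load')
min_load = model.NewIntVar(0, num_weeks * num_shifts, 'min_load')
model.AddMaxEquality(max_load, totals)
model.AddMinEquality(min_load, totals)
model.Minimize(max_load - min_load)
Since a model has a single objective, this replaces the Maximize call in the question; to keep both goals you would combine them, for example as a weighted sum.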
Google OR Tools CP-SAT Solver - scheduling problem with objective to even out shift distribution without hard constraints (max/min per period)
I am using Google ORTools using the Python wrapper to solve a nurse scheduling problem but I am having trouble finding a way to implement a constraint that attempts to evenly distribute worked shifts without using hard constraints. As an example, I working with a number of weeks, distinct shifts, and employees. For illustration I add a very simple constraint which is that no employee can work on average more than the others for the full time period. In the example below, one of many solutions with 5 weeks, 5 shifts, and 5 employees is to just have employee 0 work all the shifts in week 0, employee 1 all the shifts in week 1, etc. What I want to add, however, is a constraint that maximizes the number of distinct weeks each employee works WITHOUT using a weekly constraint such as each employee can only work up to 1 shift per week. A few things that I have attempted but have failed to get working: Create a binary grid indexed by employee and week with a 1 if the employee has at least 1 shift that week and 0 otherwise and maximize the total sum of the grid. Use a hard constraint such that each employee can only work so many shifts in a given week. This is what I want to avoid, I would rather the solver consider this as an objective than a hard constraint. My sample code is below: import os import math import pandas as pd from ortools.sat.python import cp_model num_weeks = 5 num_shifts = 5 num_employees = 5 all_weeks = range(num_weeks) all_shifts = range(num_shifts) all_employees = range(num_employees) model = cp_model.CpModel() assignments = {} #Calculate a maximum number of shifts to balance everyone out for the full period max_total_shifts = math.ceil((num_weeks*num_shifts)/num_employees) #Create a space of new boolean variables where the value is 1 if the employee is working that shift in that week, else 0 for w in all_weeks: for s in all_shifts: for e in all_employees: assignments[(w,s,e)] = model.NewBoolVar('w%i-s%i-e%i' % (w,s,e) ) model.AddAtMostOne(assignments[(w,s,e)] for e in all_employees) #Add the max constraint for e in all_employees: model.Add(sum(assignments[w,s,e] for w in all_weeks for s in all_shifts) <= max_total_shifts) #Assign as many shifts as possible model.Maximize( sum(assignments[(w,s,e)] for w in all_weeks for s in all_shifts for e in all_employees) ) #Solve the model solver = cp_model.CpSolver() status = solver.Solve(model) print(status) #Using pandas, view the solution solution = pd.DataFrame() data = [] for i,field in enumerate(model._CpModel__model.variables): model._CpModel__model.solution_hint.vars.extend([i]) model._CpModel__model.solution_hint.values.extend([solver._CpSolver__solution.solution[i]]) if solver._CpSolver__solution.solution[i]==1: data.append( [field.name,solver._CpSolver__solution.solution[i] ]) #print("{} has value {}".format(field.name,solver._CpSolver__solution.solution[i])) # solution = pd.DataFrame(data)
[ "Fairness is the most complex question in OR.\nYou need to try to capture what you want with equations as sample as possible.\nStd deviation, variance are not simple.\nMinimizing max(worked per person) - min(worked per person);is simple.\nGood luck, this is a tough question to come up with a business acceptable definition of a good schedule.\n" ]
[ 1 ]
[]
[]
[ "constraint_programming", "or_tools", "python" ]
stackoverflow_0074464353_constraint_programming_or_tools_python.txt
Q: COUNT PRIMES: Write a function that returns the number of prime numbers that exist up to and including a given number Can someone help me with my code and let me know what's wrong in it? def count_primes(nums): count = 0 for num in range(2,nums+1): if num%2!=0 or num%3!=0 or num%5!=0: count+=1 return count A: You need to test it against each number up to the number you are checking for being prime, not just 2, 3 and 5, since there are more primes than just 2, 3 and 5. A: Does this work? Implemented a basic checker. def check(number): if number < 2: return False else: for divisor in range(2, number): if number % divisor == 0: return False return True def count_primes(nums): count = 0 for num in range(2,nums+1): if check(num): count+=1 return count print(count_primes(1)) print(count_primes(2)) print(count_primes(3)) print(count_primes(4)) print(count_primes(5)) print(count_primes(6)) print(count_primes(7)) print(count_primes(8)) print(count_primes(9)) print(count_primes(10)) Faster method: import math def check(number): if number < 2: return False if number == 2: return True if number % 2 == 0: return False for divisor in range(3, 1 + int(math.sqrt(number)), 2): if number % divisor == 0: return False return True def count_primes(nums): count = 0 for num in range(2,nums+1): if check(num): count += 1 return count print(count_primes(1)) print(count_primes(2)) print(count_primes(3)) print(count_primes(4)) print(count_primes(5)) print(count_primes(6)) print(count_primes(7)) print(count_primes(8)) print(count_primes(9)) print(count_primes(10))
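For larger inputs, a sketch of a Sieve of Eratosthenes counter, which avoids per-number trial division entirely:
def count_primes(n):
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # mark every multiple of p, starting at p*p, as composite
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return sum(sieve)

print(count_primes(10))  # 4, for the primes 2, 3, 5, 7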
COUNT PRIMES: Write a function that returns the number of prime numbers that exist up to and including a given number
Can someone help me with my code and let me know what's wrong in it? def count_primes(nums): count = 0 for num in range(2,nums+1): if num%2!=0 or num%3!=0 or num%5!=0: count+=1 return count
[ "You need to test it against each number up to the number you are checking for being prime, not just 2, 3 and 5. As there are more primes other than 2, 3 and 5\n", "Does this work? Implemented a basic checker.\ndef check(number):\n if number < 2:\n return False\n else:\n for divisor in range(2, number):\n if number % divisor == 0:\n return False\n return True\ndef count_primes(nums):\n count = 0\n for num in range(2,nums+1):\n if check(num):\n count+=1\n return count\nprint(count_primes(1))\nprint(count_primes(2))\nprint(count_primes(3))\nprint(count_primes(4))\nprint(count_primes(5))\nprint(count_primes(6))\nprint(count_primes(7))\nprint(count_primes(8))\nprint(count_primes(9))\nprint(count_primes(10))\n\nFaster method:\nimport math\ndef check(number):\n if number < 2:\n return False\n if number == 2:\n return True\n if number % 2 == 0:\n return False\n for divisor in range(3, 1 + int(math.sqrt(number)), 2):\n if number % divisor == 0:\n return False\n return True\ndef count_primes(nums):\n count = 0\n for num in range(2,nums+1):\n if check(num):\n count += 1\n return count\nprint(count_primes(1))\nprint(count_primes(2))\nprint(count_primes(3))\nprint(count_primes(4))\nprint(count_primes(5))\nprint(count_primes(6))\nprint(count_primes(7))\nprint(count_primes(8))\nprint(count_primes(9))\nprint(count_primes(10))\n\n" ]
[ 0, 0 ]
[]
[]
[ "count", "primes", "python" ]
stackoverflow_0074464617_count_primes_python.txt
Q: Remove duplicate rows based on previous rows' values in a specific column I have a dataframe similar to the following example: import pandas as pd data = pd.DataFrame(data={'col1': [1,2,3,4,5,6,7,8,9], 'col2': [1.55,1.55,1.55,1.8,1.9,1.9,1.9,2.1,2.1]}) In the second column, col2, several duplicate values can be seen, 3 times 1.55, 3 times 1.9 and 2 times 2.1. What I need to do is remove all rows that are a duplicate of their previous row. So, the first rows are the ones I'd like to keep. In this example, this would be the rows with col1 values 1, 4, 5, 8, giving the following dataframe as my desired output: clean_data = pd.DataFrame(data={'col1': [1,4,5,8], 'col2': [1.55,1.8,1.9,2.1]}) What is the best way to go about this for a dataframe which is much larger (in terms of rows) than this small example? A: You can use shift: data.loc[data['col2'] != data['col2'].shift(1)]
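A short usage note on the shift idiom: the same filter can be spelled with ne(), and plain drop_duplicates would be wrong here because it also removes non-consecutive repeats, not just repeats of the previous row:
clean_data = data[data['col2'].ne(data['col2'].shift())]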
Remove duplicate rows based on previous rows' values in a specific column
I have a dataframe similar to the following example: import pandas as pd data = pd.DataFrame(data={'col1': [1,2,3,4,5,6,7,8,9], 'col2': [1.55,1.55,1.55,1.8,1.9,1.9,1.9,2.1,2.1]}) In the second column, col2, several duplicate values can be seen, 3 times 1.55, 3 times 1.9 and 2 times 2.1. What I need to do is remove all rows that are a duplicate of their previous row. So, the first rows are the ones I'd like to keep. In this example, this would be the rows with col1 values 1, 4, 5, 8, giving the following dataframe as my desired output: clean_data = pd.DataFrame(data={'col1': [1,4,5,8], 'col2': [1.55,1.8,1.9,2.1]}) What is the best way to go about this for a dataframe which is much larger (in terms of rows) than this small example?
[ "You can use shift:\ndata.loc[data['col2'] != data['col2'].shift(1)]\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "duplicates", "pandas", "python" ]
stackoverflow_0074464714_dataframe_duplicates_pandas_python.txt
Q: How to insert character ('-') every time my string changes from text to number and vice versa? This is an example of a bigger dataframe. Imagine I have a dataframe like this: import pandas as pd df = pd.DataFrame({"ID":["4SSS50FX","2TT1897FA"], "VALUE":[13, 56]}) df Out[2]: ID VALUE 0 4SSS50FX 13 1 2TT1897FA 56 I would like to insert "-" in the strings from df["ID"] every time it changes from number to text and from text to number. So the output should be like: ID VALUE 0 4-SSS-50-FX 13 1 2-TT-1897-FA 56 I could create specific conditions for each case, but I would like to automate it for all the samples. Could anyone help me? A: You can use a regular expression with lookarounds. df['ID'] = df['ID'].str.replace(r'(?<=\d)(?=[A-Z])|(?<=[A-Z])(?=\d)', '-') The regexp matches an empty string that's either preceded by a digit and followed by a letter, or vice versa. This empty string is then replaced with -. A: Use a regex. >>> df['ID'].str.replace('(\d+(?=\D)|\D+(?=\d))', r'\1-', regex=True) 0 4-SSS-50-FX 1 2-TT-1897-FA Name: ID, dtype: object \d+(?=\D) means digits followed by non-digit. \D+(?=\d) means non-digits followed by digit. Either of those are replaced with themselves plus a - character.
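A non-regex sketch of the same idea using itertools.groupby, grouping consecutive characters by whether they are digits (the helper name is mine):
from itertools import groupby

def dash_on_change(s):
    # join runs of digits / non-digits with '-'
    return '-'.join(''.join(run) for _, run in groupby(s, key=str.isdigit))

df['ID'] = df['ID'].map(dash_on_change)
# dash_on_change('4SSS50FX') -> '4-SSS-50-FX'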
How to insert character ('-') every time my string changes from text to number and vice versa?
This is an example of a bigger dataframe. Imagine I have a dataframe like this: import pandas as pd df = pd.DataFrame({"ID":["4SSS50FX","2TT1897FA"], "VALUE":[13, 56]}) df Out[2]: ID VALUE 0 4SSS50FX 13 1 2TT1897FA 56 I would like to insert "-" in the strings from df["ID"] every time it changes from number to text and from text to number. So the output should be like: ID VALUE 0 4-SSS-50-FX 13 1 2-TT-1897-FA 56 I could create specific conditions for each case, but I would like to automate it for all the samples. Could anyone help me?
[ "You can use a regular expression with lookarounds.\ndf['ID'] = df['ID'].str.replace(r'(?<=\\d)(?=[A-Z])|(?<=[A-Z])(?=\\d)', '-')\n\nThe regexp matches an empty string that's either preceded by a digit and followed by a letter, or vice versa. This empty string is then replaced with -.\n", "Use a regex.\n>>> df['ID'].str.replace('(\\d+(?=\\D)|\\D+(?=\\d))', r'\\1-', regex=True)\n0 4-SSS-50-FX\n1 2-TT-1897-FA\nName: ID, dtype: object\n\n\\d+(?=\\D) means digits followed by non-digit.\n\\D+(?=\\d)) means non-digits followed by digit.\nEither of those are replaced with themselves plus a - character.\n" ]
[ 1, 1 ]
[]
[]
[ "pandas", "python", "series" ]
stackoverflow_0074464690_pandas_python_series.txt
Q: exchangelib ews throttling policies python I am trying to build a database with all the emails. But I get the Error: ErrorServerBusy: The server cannot service this request right now. Try again later. Is there any way to work with the throttling policy of EWS? One month of emails does work, but when I exceed some unknown barrier it gets interrupted. Are there any other ways to prevent the throttling policies? I thought about implementing time.sleep(), but how could I find out how long I need to wait, and after how many emails, to make it work? shared_postboxes= [some accounts here] credentials = Credentials(username=my username, password=my password) config = Configuration(retry_policy=FaultTolerance(max_wait=600), credentials=credentials) for shared_postbox in tqdm(shared_postboxes): account = Account(shared_postbox, credentials=credentials, autodiscover=True) top_folder = account.root email_folders = [f for f in top_folder.walk() if isinstance(f, Messages)] for folder in tqdm(email_folders): for m in folder.all().only('text_body', 'datetime_received',"sender").filter(datetime_received__range=(start_of_month,end_of_month), sender__exists=True).order_by('-datetime_received'): try: senderdomain = ExtractingDomain(m.sender.email_address) except: print("could not extract domain") else: if senderdomain in domains_of_interest: postboxname = account.identity.primary_smtp_address body = m.text_body emails.append(body) senders.append(senderdomain) postbox.append(postboxname) received.append(m.datetime_received) account.protocol.close() A: You created a Configuration object that defines a retry policy, which is what you want to solve your issue. But you never passed the configuration to your Account object. To do that, create your account as: account = Account(shared_postbox, config=config, autodiscover=True)
exchangelib ews throttling policies python
I am trying to build a database with all the emails. But I get the Error: ErrorServerBusy: The server cannot service this request right now. Try again later. Is there any way to work with the throttling policy of ews? One month of emails do work but when I exceed some not known barrier it gets interrupted. Are there any other ways to prevent the throttling policies? I thought about implementing time.sleep(), but how could I find out how for how long I need to wait after how many emails to make it work? shared_postboxes= [some accounts here] credentials = Credentials(username=my username, password=my password) config = Configuration(retry_policy=FaultTolerance(max_wait=600), credentials=credentials) for shared_postbox in tqdm(shared_postboxes): account = Account(shared_postbox, credentials=credentials, autodiscover=True) top_folder = account.root email_folders = [f for f in top_folder.walk() if isinstance(f, Messages)] for folder in tqdm(email_folders): for m in folder.all().only('text_body', 'datetime_received',"sender").filter(datetime_received__range=(start_of_month,end_of_month), sender__exists=True).order_by('-datetime_received'): try: senderdomain = ExtractingDomain(m.sender.email_address) except: print("could not extract domain") else: if senderdomain in domains_of_interest: postboxname = account.identity.primary_smtp_address body = m.text_body emails.append(body) senders.append(senderdomain) postbox.append(postboxname) received.append(m.datetime_received) account.protocol.close()
[ "You created a Configuration object that defines a retry policy, which is what you want to solve your issue. But you never passed the configuration to your Account object. To do that, create your account as:\naccount = Account(shared_postbox, config=config, autodiscover=True)\n\n" ]
[ 1 ]
[]
[]
[ "exchangelib", "exchangewebservices", "outlook", "python", "python_requests" ]
stackoverflow_0074454121_exchangelib_exchangewebservices_outlook_python_python_requests.txt
Q: Implement Simple linear regression for predicting a response using a single feature using python Using this data as reference: import numpy as np import matplotlib.pyplot as plt def estimate_coef(x, y): # number of observations/points n = np.size(x) # mean of x and y vector m_x = np.mean(x) m_y = np.mean(y) # calculating cross-deviation and deviation about x SS_xy = np.sum(y*x) - n*m_y*m_x SS_xx = np.sum(x*x) - n*m_x*m_x # calculating regression coefficients b_1 = SS_xy / SS_xx b_0 = m_y - b_1*m_x return (b_0, b_1) def plot_regression_line(x, y, b): # plotting the actual points as scatter plot plt.scatter(x, y, color = "m", marker = "o", s = 30) # predicted response vector y_pred = b[0] + b[1]*x # plotting the regression line plt.plot(x, y_pred, color = "g") # putting labels plt.xlabel('x') plt.ylabel('y') # function to show plot plt.show() def main(): # observations / data x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12]) # estimating coefficients b = estimate_coef(x, y) print("Estimated coefficients:\nb_0 = {} \ \nb_1 = {}".format(b[0], b[1])) # plotting regression line plot_regression_line(x, y, b) if __name__ == "__main__": main() Is the above code correct, and if not, what should I do? It is showing an error on Google Colab. For output, the result should be a plotted graph as shown in the following picture; the output should be similar to this image A: I replicated your code on my machine and it works perfectly; there is no bug in your code. Libraries and their versions installed in my Colab: Python : 3.7.15 Numpy : 1.21.6 Matplotlib : 3.2.2 You can also restart the runtime; it may solve your problem. Click on Runtime, select Restart runtime, and then click on Run all
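If you want to sanity-check the hand-rolled coefficients, np.polyfit with degree 1 fits the same least-squares line; a minimal sketch using the x and y from main():
b_1, b_0 = np.polyfit(x, y, 1)  # polyfit returns the highest power first
print(b_0, b_1)                 # should match estimate_coef(x, y)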
Implement Simple linear regression for predicting a response using a single feature using python
Using this data as reference: import numpy as np import matplotlib.pyplot as plt def estimate_coef(x, y): # number of observations/points n = np.size(x) # mean of x and y vector m_x = np.mean(x) m_y = np.mean(y) # calculating cross-deviation and deviation about x SS_xy = np.sum(y*x) - n*m_y*m_x SS_xx = np.sum(x*x) - n*m_x*m_x # calculating regression coefficients b_1 = SS_xy / SS_xx b_0 = m_y - b_1*m_x return (b_0, b_1) def plot_regression_line(x, y, b): # plotting the actual points as scatter plot plt.scatter(x, y, color = "m", marker = "o", s = 30) # predicted response vector y_pred = b[0] + b[1]*x # plotting the regression line plt.plot(x, y_pred, color = "g") # putting labels plt.xlabel('x') plt.ylabel('y') # function to show plot plt.show() def main(): # observations / data x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12]) # estimating coefficients b = estimate_coef(x, y) print("Estimated coefficients:\nb_0 = {} \ \nb_1 = {}".format(b[0], b[1])) # plotting regression line plot_regression_line(x, y, b) if __name__ == "__main__": main() Is the above code correct, and if not, what should I do? It is showing an error on Google Colab. For output, the result should be a plotted graph as shown in the following picture; the output should be similar to this image
[ "I replicate your code on my machine and it works perfectly, there is no bug in your code.\nLibrary and its version that installed in my colab\nPython : 3.7.15\nNumpy : 1.21.6\nMatplotlib : 3.2.2\n\nYou can also restart runtime it may solve your problem\nclick on Runtime\nSelect Restart runtime and then click on Run all\n" ]
[ 0 ]
[]
[]
[ "linear_regression", "prediction", "python", "regression" ]
stackoverflow_0074441304_linear_regression_prediction_python_regression.txt
Q: Python change the starting values on the plot I have a data set which looks like this: Hour_day Profits 7 645 3 354 5 346 11 153 23 478 7 464 12 356 0 346 I created a line plot to visualize the hour on the x-axis and the profit values on the y-axis. My code works fine, but the problem is that the x-axis starts at 0, and I want it to start from 5 pm, for example. hours = df.Hour_day.value_counts().keys() hours = hours.sort_values() # Get plot information from actual data y_values = list() for hr in hours: temp = df[df.Hour_day == hr] y_values.append(temp.Profits.mean()) # Plot comparison plt.plot(hours, y_values, color='y') A: From what I know you have two options: Create a sub DF that excludes the rows that have an Hour_day value under 5 and proceed with the rest of your code as normal: df_new = df.where(df['Hour_day'] >= 5) or, you might be able to set the x_ticks: default_x_ticks = range(5, 23) plt.plot(hours, y_values, color='y') plt.xticks(default_x_ticks, hours) plt.show() I haven't tested the x_ticks code so you might have to play around with it just a touch, but there are lots of easy to find resources on x_ticks.
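Another option, if the goal is only to change where the x-axis starts rather than to drop data, is to set the axis limit directly; a minimal sketch:
plt.plot(hours, y_values, color='y')
plt.xlim(left=5)   # the x-axis now starts at hour 5; points below 5 are clipped from view
plt.show()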
Python change the starting values on the plot
I have a data set which looks like this: Hour_day Profits 7 645 3 354 5 346 11 153 23 478 7 464 12 356 0 346 I created a line plot to visualize the hour on the x-axis and the profit values on the y-axis. My code works fine, but the problem is that the x-axis starts at 0, and I want it to start from 5 pm, for example. hours = df.Hour_day.value_counts().keys() hours = hours.sort_values() # Get plot information from actual data y_values = list() for hr in hours: temp = df[df.Hour_day == hr] y_values.append(temp.Profits.mean()) # Plot comparison plt.plot(hours, y_values, color='y')
[ "From what I know you have two options:\nCreate a sub DF that excludes the rows that have an Hour_day value under 5 and proceed with the rest of your code as normal:\ndf_new = df.where(df['Hour_day'] >= 5)\n\nor, you might be able to set the x_ticks:\ndefault_x_ticks = range(5:23)\nplt.plot(hours, y_values, color='y')\nplt.xticks(default_x_ticks, hours)\nplt.show()\n\nI haven't tested the x_ticks code so you might have to play around with it just a touch, but there are lots of easy to find resources on x_ticks.\n" ]
[ 0 ]
[]
[]
[ "pandas", "plot", "python", "scikit_learn", "visualization" ]
stackoverflow_0074463935_pandas_plot_python_scikit_learn_visualization.txt
Q: Discrepancy in the number of trainable parameters between model.summary and len(conv_model.trainable_weights) Consider this TensorFlow Python code that loads a pretrained model: import tensorflow as tf from tensorflow import keras conv_model = keras.applications.vgg16.VGG16( weights='imagenet', include_top=False) conv_model.trainable=False print("Number of trainable weights after freezing: ", len(conv_model.trainable_weights)) conv_model.trainable=True print("Number of trainable weights after defreezing: ", len(conv_model.trainable_weights)) and it printed Number of trainable weights after freezing: 0 Number of trainable weights after defreezing: 26 However, if I do conv_model.trainable=True conv_model.summary() I get: Total params: 14,714,688 Trainable params: 14,714,688 Non-trainable params: 0 and if I freeze I get 0 trainable parameters. Why is there this discrepancy between model.summary() and the other method? A: Length of the weights doesn't give the total parameters. You should use: from keras.utils.layer_utils import count_params np.sum([count_params(p) for p in conv_model.trainable_weights]) #14714688 instead of len(conv_model.trainable_weights) Length gives the number of kernels and biases and each of them can be inspected by: for p in conv_model.trainable_weights: print (p.name, p.shape, np.cumprod(p.shape)[-1], count_params(p)) #outputs 26 conv layers shape params params block1_conv1/kernel:0 (3, 3, 3, 64) 1728 1728 block1_conv1/bias:0 (64,) 64 64 block1_conv2/kernel:0 (3, 3, 64, 64) 36864 36864 ... block5_conv3/kernel:0 (3, 3, 512, 512) 2359296 2359296 block5_conv3/bias:0 (512,) 512 512
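A shorter variant of the counting in the answer: to my knowledge count_params also accepts a whole list of weights, so the per-weight loop can be collapsed:
from keras.utils.layer_utils import count_params

conv_model.trainable = True
print(count_params(conv_model.trainable_weights))   # 14714688
conv_model.trainable = False
print(count_params(conv_model.trainable_weights))   # 0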
Discrepancy in the number of trainable parameters between model.summary and len(conv_model.trainable_weights)
Consider this TensorFlow Python code that loads a pretrained model: import tensorflow as tf from tensorflow import keras conv_model = keras.applications.vgg16.VGG16( weights='imagenet', include_top=False) conv_model.trainable=False print("Number of trainable weights after freezing: ", len(conv_model.trainable_weights)) conv_model.trainable=True print("Number of trainable weights after defreezing: ", len(conv_model.trainable_weights)) and it printed Number of trainable weights after freezing: 0 Number of trainable weights after defreezing: 26 However, if I do conv_model.trainable=True conv_model.summary() I get: Total params: 14,714,688 Trainable params: 14,714,688 Non-trainable params: 0 and if I freeze I get 0 trainable parameters. Why is there this discrepancy between model.summary() and the other method?
[ "Length of the weights doesnt give the total parameters. You should use:\nfrom keras.utils.layer_utils import count_params\nnp.sum([count_params(p) for p in conv_model.trainable_weights])\n#14714688\n\ninstead of,\nlen(conv_model.trainable_weights)\n\nLength gives the number of kernels and biases and each of them can be inspected by:\nfor p in conv_model.trainable_weights:\n print (p.name, p.shape, np.cumprod(p.shape)[-1], count_params(p))\n\n#outputs 26 conv layers shape params params\n\nblock1_conv1/kernel:0 (3, 3, 3, 64) 1728 1728\nblock1_conv1/bias:0 (64,) 64 64\nblock1_conv2/kernel:0 (3, 3, 64, 64) 36864 36864\n...\nblock5_conv3/kernel:0 (3, 3, 512, 512) 2359296 2359296\nblock5_conv3/bias:0 (512,) 512 512\n\n" ]
[ 1 ]
[]
[]
[ "pre_trained_model", "python", "tensorflow" ]
stackoverflow_0074464164_pre_trained_model_python_tensorflow.txt
Q: Based on a condition, how to fill columns with column names whose rows are not null Hello my problem is almost the same as this post : How to fill in a column with column names whose rows are not NULL in Pandas? But in my case, instead of doing a concatenation, I need to fill the column based on whether the column names are a Country or a Segment. Edit : the table Originally I have this : Segment Country Segment 1 Country 1 Segment 2 Nan Nan 123456 123456 Nan Nan Nan Nan Nan Nan Nan Nan Nan 123456 123456 Nan Nan Nan 123456 123456 Actually I have this (the first columns are filled by the two lines before the last in my code): Segment Country Segment 1 Country 1 Segment 2 Seg1 ; Country1 ; Seg1 ; Country1 ; 123456 123456 Nan Nan Nan Nan Nan Nan country1 ; seg2 ; country1 ; seg2 ; Nan 123456 123456 country1 ; seg2 ; country1 ; seg2 ; Nan 123456 123456 And I need this : Segment Country Segment 1 Country 1 Segment 2 Segment 1 Country1 123456 123456 Nan Nan Nan Nan Nan Nan Segment 2 country1 Nan 123456 123456 Segment 2 country1 Nan 123456 123456 Edit : My code actually looks like this after trying to integrate the answer. Error is : AttributeError: Can only use .str accessor with string values!. Did you mean: 'std'? #For each column in df, check if there is a value and if yes : first copy the value into the 'Amount' Column, then copy the column name into the 'Segment' or 'Country' columns for column in df.columns[3:]: valueList = df[column][3:].values valueList = valueList[~pd.isna(valueList)] def detect(d): cols = d.columns.values dd = pd.DataFrame(columns=cols, index=d.index.unique()) for col in cols: s = d[col].loc[d[col].str.contains(col[0:3], case=False)].str.replace(r'(\w+)(\d+)', col + r'\2') dd[col] = s return dd #Fill amount Column with other columns values if NaN if column in isSP: df['Amount'].fillna(df[column], inplace = True) df['Segment'] = df.iloc[:, 3:].notna().dot(df.columns[3:] + ';' ).str.strip(';') df['Country'] = df.iloc[:, 3:].notna().dot(df.columns[3:] + ' ; ' ).str.strip(';') df[['Segment', 'Country']] = detect(df[['Segment', 'Country']].apply(lambda x: x.astype(str).str.split(r'\s+[+]\s+').explode())) Thank you very much.
Based on a condition, how to fill columns with column names whose rows are not null
Hello my problem is almost the same as this post : How to fill in a column with column names whose rows are not NULL in Pandas? But in my case, instead of doing a concatenation, I need to fill the column based on wether the columns name are a Country or a Segment. Edit : the table Originally I have this : Segment Country Segment 1 Country 1 Segment 2 Nan Nan 123456 123456 Nan Nan Nan Nan Nan Nan Nan Nan Nan 123456 123456 Nan Nan Nan 123456 123456 Actually I have this (The first columns are filled by the two lines before the last in my code : Segment Country Segment 1 Country 1 Segment 2 Seg1 ; Country1 ; Seg1 ; Country1 ; 123456 123456 Nan Nan Nan Nan Nan Nan country1 ; seg2 ; country1 ; seg2 ; Nan 123456 123456 country1 ; seg2 ; country1 ; seg2 ; Nan 123456 123456 And I need this : Segment Country Segment 1 Country 1 Segment 2 Segment 1 Country1 123456 123456 Nan Nan Nan Nan Nan Nan Segment 2 country1 Nan 123456 123456 Segment 2 country1 Nan 123456 123456 Edit : My code Actually look like that after trying to integrate the anwser : Error is : AttributeError: Can only use .str accessor with string values!. Did you mean: 'std'? #For each column in df, check if there is a value and if yes : first copy the value into the 'Amount' Column, then copy the column name into the 'Segment' or 'Country' columns for column in df.columns[3:]: valueList = df[column][3:].values valueList = valueList[~pd.isna(valueList)] def detect(d): cols = d.columns.values dd = pd.DataFrame(columns=cols, index=d.index.unique()) for col in cols: s = d[col].loc[d[col].str.contains(col[0:3], case=False)].str.replace(r'(\w+)(\d+)', col + r'\2') dd[col] = s return dd #Fill amount Column with other columns values if NaN if column in isSP: df['Amount'].fillna(df[column], inplace = True) df['Segment'] = df.iloc[:, 3:].notna().dot(df.columns[3:] + ';' ).str.strip(';') df['Country'] = df.iloc[:, 3:].notna().dot(df.columns[3:] + ' ; ' ).str.strip(';') df[['Segment', 'Country']] = detect(df[['Segment', 'Country']].apply(lambda x: x.astype(str).str.split(r'\s+[+]\s+').explode())) Thank you very much.
[ "Given:\n Segment Country Segment 1 Country 1 Segment 2\n0 Seg1;Country1 Seg1;Country1 123456 123456 Nan\n1 Nan Nan Nan Nan Nan\n2 country1;seg2 country1;seg2 Nan 123456 123456\n3 country1;seg2 country1;seg2 Nan 123456 123456\n\nDoing\ncols = ['Segment', 'Country']\ndf[cols] = df.Segment.str.split(';', expand=True)\n\nis_segment = 'eg' # ~You'll used '_sp' here~\n\n# Let's sort values with a custom key, namely,\n# does the string (not) contain what we're looking for?\nkey = lambda x: ~x.str.contains(is_segment, na=False)\nfunc = lambda x: x.sort_values(key=key, ignore_index=True)\ndf[cols] = df[cols].apply(func, axis=1)\n\nprint(df)\n\nOutput:\n Segment Country Segment 1 Country 1 Segment 2\n0 Seg1 Country1 123456 123456 Nan\n1 Nan None Nan Nan Nan\n2 seg2 country1 Nan 123456 123456\n3 seg2 country1 Nan 123456 123456\n\n\nRegex-heavy version:\npattern = '(?P<Segment>.+eg\\d);(?P<Country>.+)|(?P<Country_>.+);(?P<Segment_>.+eg\\d)'\nextract = df.Segment.str.extract(pattern)\nfor col in cols:\n df[col] = extract.filter(like=col).bfill(axis=1)[col]\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074461934_dataframe_pandas_python.txt
Q: I have a dictionary {string and int}. How do I compare integer values and not the keys within the same dictionary (python) I have a dictionary of people, I want to sort people alphabetically by name IF they are the same age, and to sort people descending by age if they are different ages. So essentially, I need to return two outputs. Here is what I have so far, but it doesn't take into account the condition. How do I just compare the age value for every item and add it into a different list then sort it? people = {'Steve' : 20 , 'David': 21 , 'Andrew' : 19 , 'Bruce': 22 ,'James' : 20 , 'Dave': 26 ,'Smith' : 19} print('Sorted People in Alphabetical Order: ', dict(sorted(people.items()))) print('Sorted People in Numerical Order: ',dict(sorted(people.items(), key=lambda item: item[1]))) My wanted output is Sorted Same Age People in Alphabetical Order: {(Andrew,19), (James,20), (Smith, 19), (Steve, 20)} Sorted Different Ages by Age:{(David,21), (Bruce,22), (Dave,26)} A: You can pass a tuple to the key of sorted. First sort based on age, then sort based on alphabet. people = {'Steve' : 20 , 'David': 21 , 'Andrew' : 19 , 'Bruce': 22 ,'James' : 20 , 'Dave': 26 ,'Smith' : 19} res = dict(sorted(people.items(), key=lambda x: (x[1], x[0]))) # -------------------------------------------x[1]^^^ is value -> age # ---------------------------------------------------x[0]^^^ is key -> alphabet print(res) Output: {'Andrew': 19, 'Smith': 19, 'James': 20, 'Steve': 20, 'David': 21, 'Bruce': 22, 'Dave': 26}
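If the ages should come out in descending order, as the question asks, while ties stay alphabetical, negate the age in the sort key; a minimal sketch:
res = dict(sorted(people.items(), key=lambda kv: (-kv[1], kv[0])))
# {'Dave': 26, 'Bruce': 22, 'David': 21, 'James': 20, 'Steve': 20, 'Andrew': 19, 'Smith': 19}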
I have a dictionary {string and int}. How do I compare integers values and not the keys within the same dictionary(python)
I have a dictionary of people. I want to sort people alphabetically by name IF they are the same age, and to sort people descending by age if they are a different age. So essentially, I need to return two outputs. Here is what I have so far, but it doesn't take the condition into account. How do I just compare the age value for every item, add it into a different list, and then sort it? people = {'Steve' : 20 , 'David': 21 , 'Andrew' : 19 , 'Bruce': 22 ,'James' : 20 , 'Dave': 26 ,'Smith' : 19} print('Sorted People in Alphabetical Order: ', dict(sorted(people.items()))) print('Sorted People in Numerical Order: ',dict(sorted(people.items(), key=lambda item: item[1]))) My wanted output is Sorted Same Age People in Alphabetical Order: {(Andrew,19), (James,20), (Smith, 19), (Steve, 20)} Sorted Different Ages by Age:{(David,21), (Bruce,22), (Dave,26)}
[ "You can pass tuple to key of sorted. First sort based on age then sort base alphabet.\npeople = {'Steve' : 20 , 'David': 21 , 'Andrew' : 19 , 'Bruce': 22 ,'James' : 20 , 'Dave': 26 ,'Smith' : 19}\n\nres = dict(sorted(people.items(), key=lambda x: (x[1], x[0])))\n# -------------------------------------------x[1]^^^ is value -> age\n# ---------------------------------------------------x[0]^^^ is key -> alphabet\nprint(res)\n\nOutput:\n{'Andrew': 19,\n 'Smith': 19,\n 'James': 20,\n 'Steve': 20,\n 'David': 21,\n 'Bruce': 22,\n 'Dave': 26}\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "python", "sorting" ]
stackoverflow_0074464828_dictionary_python_sorting.txt
Q: How can I create a function that uses loc over multiple columns in a dataframe?
I have a number of columns in a pandas dataframe where I want to change any values that are less than or equal to zero to NaN. I'm relatively new to python. I know copying and pasting code over multiple lines is a no-no, but I've struggled with writing functions so far. I would imagine there's an easier way to do this, but I haven't figured it out yet. What can I do?

df.loc[df['col1'] <= 0, 'col1'] = np.nan
df.loc[df['col2'] <= 0, 'col2'] = np.nan
df.loc[df['col3'] <= 0, 'col3'] = np.nan
df.loc[df['col4'] <= 0, 'col4'] = np.nan

A: You can use df.where, which replaces the values that don't satisfy a condition with NaN:

cols = ['col1', 'col2', 'col3', 'col4']
df = df.where(df[cols] > 0)
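One caveat worth knowing, as an observation about pandas semantics rather than part of the original answer: if df has columns other than col1-col4, df.where(df[cols] > 0) will also blank out those other columns, because the condition is missing (hence treated as False) for them. Restricting both sides to the column subset avoids that:

cols = ['col1', 'col2', 'col3', 'col4']

# Only the listed columns are tested and rewritten; other columns are untouched
df[cols] = df[cols].where(df[cols] > 0)

# Equivalent form with mask(), which replaces where the condition IS true
df[cols] = df[cols].mask(df[cols] <= 0)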
How can I create a function that uses loc over multiple columns in a dataframe?
I have a number of columns in a pandas dataframe where any values that are less than or equal to zero I want to change to NaN. I'm relatively new to python. I know copying and pasting code over multiple lines is a no-no, but I've struggled with writing functions so far. I would imagine there's an easier way to do this, but I haven't figured it out yet. What can I do? df.loc[df['col1'] <= 0, 'col1'] = np.nan df.loc[df['col2'] <= 0, 'col2'] = np.nan df.loc[df['col3'] <= 0, 'col3'] = np.nan df.loc[df['col4'] <= 0, 'col4'] = np.nan
[ "You can use df.where, which would replace the values that doesn't satisfy a condition with NaN:\ncols = ['col1', 'col2', 'col3', 'col4']\ndf = df.where(df[cols] > 0)\n\n" ]
[ 0 ]
[]
[]
[ "function", "numpy", "pandas", "python" ]
stackoverflow_0074464803_function_numpy_pandas_python.txt
Q: Using ffmpeg on PythonAnywhere
My (first) web app uses pydub, which depends on ffmpeg. On my local Windows environment, I installed ffmpeg and added the path to the ffmpeg executables to the Windows "path" environment variables. It all works locally, but now that I have deployed my app to PythonAnywhere, the following line in my code is causing an error:

sound.export(export_path, format="mp3", bitrate="128k")

I believe the error is because this code relies on ffmpeg. I have read on their forums that ffmpeg is installed for all users on PythonAnywhere. Is there something I need to do to get it to work? Do I need to add the path of the ffmpeg files to the environment variables? I have a .env file with other env variables -- would I need to add something to this?

A: Got home and tried PythonAnywhere myself, and I don't find any issue with it and its FFmpeg.
Without installing anything (no FFmpeg, no Python packages), I run successfully in the REPL:

>>> os.system('ffmpeg')
[snipped ffmpeg banner]
>>> import pydub
>>> pydub.utils.get_encoder_name()
'ffmpeg'
>>> pydub.utils.get_prober_name()
'ffprobe'
>>> pydub.utils.get_supported_codecs()
[snipped a large list of FFmpeg codecs]

So, your issue is not ffmpeg/pydub. Post the exact error. Can it be that it doesn't like you to save a file in the directory you specified?

A: The problem was not ffmpeg - which I have confirmed is installed for all users on PythonAnywhere. Instead, my issue was the export path I was using with pydub. I fixed my issue by changing:

export_path = "media/my_path"

to

export_path = "home/my_app/my_app/media/my_path"

which is then used in

sound.export(export_path, format="mp3", bitrate="128k")
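The accepted fix swaps the relative path for one anchored under the home directory, but note that as written, "home/..." still has no leading slash, so it only resolves if the working directory happens to be the filesystem root. A sketch of a more explicit way to build the path, assuming sound is the pydub AudioSegment from the question; the directory names are placeholders taken from the question, not a verified PythonAnywhere layout:

from pathlib import Path

# Placeholder project root -- adjust to the actual account/app layout
BASE_DIR = Path("/home/my_app/my_app")
export_path = BASE_DIR / "media" / "my_path"

# pydub accepts a string (or file-like object) as the export target
sound.export(str(export_path), format="mp3", bitrate="128k")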
Using ffmpeg on PythonAnywhere
My (first) web app uses pydub, which depends on ffmpeg. On my local Windows environment, I installed ffmpeg and added the path to the ffmpeg executables to the Windows "path" environment variables. It all works locally, but now that I have deployed my app to PythonAnywhere, the following line in my code is causing an error: sound.export(export_path, format="mp3", bitrate="128k") I believe the error is because this code relies on ffmpeg. I have read on their forums that ffmpeg is installed for all users on PythonAnywhere. Is there something I need to do to get it to work? Do I need to add the path of the ffmpeg files to the environment variables? I have a .env file with other env variables -- would I need to add something to this?
[ "Got home and tried PythonAnywhere myself, and I don't find any issue with it and its FFmpeg.\nWithout installing anything (no FFmpeg, no Python packages), I run successfully in REPL:\n>>> os.system('ffmpeg')\n[snipped ffmpeg banner]\n>>> import pydub\n>>> pydub.utils.get_encoder_name()\n'ffmpeg'\n>>> pydub.utils.get_prober_name()\n'ffprobe'\n>>> pydub.utils.get_supported_codecs()\n[snipped a large list of FFmpeg codecs]\n\nSo, your issue is not ffmpeg/pydub. Post the exact error. Can it be that it doesn't like you to save a file in the directory you specified?\n", "The problem was not ffmpeg - which I have confirmed is installed for all users on PythonAnywhere. Instead, my issue was the export path I was using with pydub. I fixed my issue by changing:\nexport_path = \"media/my_path\"\n\nto\nexport_path = \"home/my_app/my_app/media/my_path\"\n\nwhich is then used in\nsound.export(export_path, format=\"mp3\", bitrate=\"128k\")\n\n" ]
[ 1, 1 ]
[]
[]
[ "environment_variables", "ffmpeg", "pydub", "python", "pythonanywhere" ]
stackoverflow_0074448842_environment_variables_ffmpeg_pydub_python_pythonanywhere.txt
Q: Parsing long form dates from string
I am aware that there are other solutions to similar problems on stack overflow but they don't work in my particular situation. I have some strings -- here are some examples of them.

string_with_dates = "random non-date text, 22 May 1945 and 11 June 2004"
string2 = "random non-date text, 01/01/1999 & 11 June 2004"
string3 = "random non-date text, 01/01/1990, June 23 2010"
string4 = "01/2/2010 and 25th of July 2020"
string5 = "random non-date text, 01/02/1990"
string6 = "random non-date text, 01/02/2010 June 10 2010"

I need a parser that can determine how many date-like objects are in the string and then parse them into actual dates in a list. I can't find any solutions out there. Here is the desired output:

['05/22/1945','06/11/2004']

Or as actual datetime objects. Any ideas? I have tried the solutions listed here but they don't work.
How to parse multiple dates from a block of text in Python (or another language)
Here is what happens when I try the solutions suggested in that link:

import itertools
from dateutil import parser

jumpwords = set(parser.parserinfo.JUMP)
keywords = set(kw.lower() for kw in itertools.chain(
    parser.parserinfo.UTCZONE,
    parser.parserinfo.PERTAIN,
    (x for s in parser.parserinfo.WEEKDAYS for x in s),
    (x for s in parser.parserinfo.MONTHS for x in s),
    (x for s in parser.parserinfo.HMS for x in s),
    (x for s in parser.parserinfo.AMPM for x in s),
))

def parse_multiple(s):
    def is_valid_kw(s):
        try:  # is it a number?
            float(s)
            return True
        except ValueError:
            return s.lower() in keywords

    def _split(s):
        kw_found = False
        tokens = parser._timelex.split(s)
        for i in xrange(len(tokens)):
            if tokens[i] in jumpwords:
                continue
            if not kw_found and is_valid_kw(tokens[i]):
                kw_found = True
                start = i
            elif kw_found and not is_valid_kw(tokens[i]):
                kw_found = False
                yield "".join(tokens[start:i])
        # handle date at end of input str
        if kw_found:
            yield "".join(tokens[start:])

    return [parser.parse(x) for x in _split(s)]

parse_multiple(string_with_dates)

Output:
ParserError: Unknown string format: 22 May 1945 and 11 June 2004

Another method:

from dateutil.parser import _timelex, parser

a = "I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928"

p = parser()
info = p.info

def timetoken(token):
    try:
        float(token)
        return True
    except ValueError:
        pass
    return any(f(token) for f in (info.jump, info.weekday, info.month, info.hms, info.ampm, info.pertain, info.utczone, info.tzoffset))

def timesplit(input_string):
    batch = []
    for token in _timelex(input_string):
        if timetoken(token):
            if info.jump(token):
                continue
            batch.append(token)
        else:
            if batch:
                yield " ".join(batch)
                batch = []
    if batch:
        yield " ".join(batch)

for item in timesplit(string_with_dates):
    print "Found:", (item)
    print "Parsed:", p.parse(item)

Output:
ParserError: Unknown string format: 22 May 1945 11 June 2004

Any ideas?

A: Okay sorry to anyone who spent time on this -- but I was able to answer my own question. Leaving this up in case anyone else has the same issue.
This package was able to work perfectly: https://pypi.org/project/datefinder/

import datefinder

def DatesToList(x):
    dates = datefinder.find_dates(x)
    lists = []
    for date in dates:
        lists.append(date)
    return (lists)

dates = DatesToList(string_with_dates)

Output:
[datetime.datetime(1945, 5, 22, 0, 0), datetime.datetime(2004, 6, 11, 0, 0)]
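Since the question's desired output was the string form ['05/22/1945','06/11/2004'] rather than datetime objects, the datefinder result can be formatted with strftime. A small sketch building directly on the accepted answer:

import datefinder

string_with_dates = "random non-date text, 22 May 1945 and 11 June 2004"

# find_dates() yields datetime objects; strftime turns them into MM/DD/YYYY strings
formatted = [d.strftime('%m/%d/%Y') for d in datefinder.find_dates(string_with_dates)]
print(formatted)  # ['05/22/1945', '06/11/2004']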
Parsing long form dates from string
I am aware that there are other solutions to similar problems on stack overflow but they don't work in my particular situation. I have some strings -- here are some examples of them. string_with_dates = "random non-date text, 22 May 1945 and 11 June 2004" string2 = "random non-date text, 01/01/1999 & 11 June 2004" string3 = "random non-date text, 01/01/1990, June 23 2010" string4 = "01/2/2010 and 25th of July 2020" string5 = "random non-date text, 01/02/1990" string6 = "random non-date text, 01/02/2010 June 10 2010" I need a parser that can determine how many date-like objects are in the string and then parse them into actual dates into a list. I can't find any solutions out there. Here is desired output: ['05/22/1945','06/11/2004'] Or as actual datetiem objects. Any ideas? I have tried the solutions listed here but they don't work. How to parse multiple dates from a block of text in Python (or another language) Here is what happens when I try the solutions suggested in that link: import itertools from dateutil import parser jumpwords = set(parser.parserinfo.JUMP) keywords = set(kw.lower() for kw in itertools.chain( parser.parserinfo.UTCZONE, parser.parserinfo.PERTAIN, (x for s in parser.parserinfo.WEEKDAYS for x in s), (x for s in parser.parserinfo.MONTHS for x in s), (x for s in parser.parserinfo.HMS for x in s), (x for s in parser.parserinfo.AMPM for x in s), )) def parse_multiple(s): def is_valid_kw(s): try: # is it a number? float(s) return True except ValueError: return s.lower() in keywords def _split(s): kw_found = False tokens = parser._timelex.split(s) for i in xrange(len(tokens)): if tokens[i] in jumpwords: continue if not kw_found and is_valid_kw(tokens[i]): kw_found = True start = i elif kw_found and not is_valid_kw(tokens[i]): kw_found = False yield "".join(tokens[start:i]) # handle date at end of input str if kw_found: yield "".join(tokens[start:]) return [parser.parse(x) for x in _split(s)] parse_multiple(string_with_dates) Output: ParserError: Unknown string format: 22 May 1945 and 11 June 2004 Another method: from dateutil.parser import _timelex, parser a = "I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928" p = parser() info = p.info def timetoken(token): try: float(token) return True except ValueError: pass return any(f(token) for f in (info.jump,info.weekday,info.month,info.hms,info.ampm,info.pertain,info.utczone,info.tzoffset)) def timesplit(input_string): batch = [] for token in _timelex(input_string): if timetoken(token): if info.jump(token): continue batch.append(token) else: if batch: yield " ".join(batch) batch = [] if batch: yield " ".join(batch) for item in timesplit(string_with_dates): print "Found:", (item) print "Parsed:", p.parse(item) Output: ParserError: Unknown string format: 22 May 1945 11 June 2004 Any ideas?
[ "Okay sorry to anyone who spent time on this -- but I was able to answer my own question. Leaving this up in case anyone else has the same issue.\nThis package was able to work perfectly: https://pypi.org/project/datefinder/\n\nimport datefinder\n\ndef DatesToList(x):\n \n dates = datefinder.find_dates(x)\n \n lists = []\n \n for date in dates:\n \n lists.append(date)\n \n return (lists)\n\ndates = DateToList(string_with_dates)\n\n\nOutput:\n\n[datetime.datetime(1945, 5, 22, 0, 0), datetime.datetime(2004, 6, 11, 0, 0)]\n\n" ]
[ 2 ]
[]
[]
[ "date", "parsing", "python", "python_3.x", "string" ]
stackoverflow_0074462363_date_parsing_python_python_3.x_string.txt
Q: Error while installing lxml through pip: Microsoft Visual C++ 14.0 is required
I am on a windows 10 machine and recently moved from python 2.7 to 3.5. When trying to install lxml through pip, it stops and throws this error message:

building 'lxml.etree' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools

I have a working copy of VS 2015 installed. When I try to install the visual cpp tools through that link, it says that Microsoft Visual Studio 2015 is already installed on the machine. I also tried installing the Visual Studio C++ 2015 redistributables, both 64 and 32 bit versions, but both of them say that there's another version of the product already installed.
Typing set in the command prompt includes this:

VS140COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\

which means that the path is set. This is probably the only resource I could find on SO, but the answer suggests rolling back to Python 3.4.3 from 3.5. Has anybody resolved problems of this kind?
Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat)
EDIT: I managed to install it using the precompiled binary (Thanks Paul), but I would still like to know what's causing this.

A: Have you checked that when you installed Visual Studio, you installed the C++ compiler? It seems like a silly question, but this is the mistake I made. Check by going into the setup for Visual Studio (Programs and features: Modify "Visual Studio 2015"), then under Programming Languages -> VC++, make sure it's ticked.

A:
Run pip install wheel
Download lxml from http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml. If your python version is 3.5, download lxml-3.6.4-cp35-cp35m-win32.whl.
Run python -m pip install lxml-3.6.4-cp35-cp35m-win32.whl

A: As an update to the answer from @davidsheldon above, if you want to use Visual Studio Build Tools 2017 instead of 2015, it will work.
I found that the default install of the stand-alone build tools was not enough; however, once I added the VC++ 2015.3 toolset for desktop (x86, x64), python was happy.

A: I've found another solution to get through this:
Because I use anaconda python, I use this command:
conda install -c conda-forge scrapy

A: I had the same question as you! I found a way that doesn't need installing VS2015; maybe you just haven't installed twisted. Download the twisted wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted (Twisted‑17.5.0‑cp36‑cp36m‑win_amd64.whl, or the 32-bit wheel if the 64-bit one didn't work), and run pip with the path and filename:

pip install C:\Users\CR\Downloads\Twisted-17.5.0-cp36-cp36m-win_amd64.whl
pip install Scrapy

It installed successfully! Good luck!
My steps to install scrapy:
1. pip install wheel
2. pip install lxml
3. pip install pyOpenSSL
4. pip install Twisted (if it fails, do as above)
5. install pywin32 from: https://sourceforge.net/projects/pywin32/files/pywin32/Build%20220/
6. pip install Scrapy (successful)

A: Had the same problem and noticed that I had installed the 32bit version on a 64bit machine. All I did was uninstall the wrong one and install the right version and it worked fine.

A: Easiest way to achieve this; it can be automated as it doesn't require user input:

python -m pip install https://download.lfd.uci.edu/pythonlibs/archived/lxml-4.9.0-cp311-cp311-win_amd64.whl

This will install the 64-bit version on your machine.
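A present-day note that may sidestep the compiler issue entirely (this is general pip/lxml knowledge, not from the original thread): lxml has shipped prebuilt wheels on PyPI for current Python versions for some time, so forcing pip to use a binary wheel avoids needing Visual C++ at all:

python -m pip install --upgrade pip
python -m pip install --only-binary=:all: lxml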
Error while installing lxml through pip: Microsoft Visual C++ 14.0 is required
I am on a windows 10 machine and recently moved from python 2.7 to 3.5. When trying to install lxml through pip, it stops and throws this error message- building 'lxml.etree' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools I have a working copy of VS 2015 installed. When I try to install the visual cpp tools through that link, it says that Microsoft Visual Studio 2015 is already installed on the machine. I also tried installing visual studio c++ 2015 redistributables, both 64 and 32 bit versions, but both of them say that there's another version of the product already installed. typing set in the command prompt includes this - VS140COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\ Which means that the path is set. This is probably the only resource I could find on SO, but the answer suggests rolling back to Python 3.4.3 from 3.5. Has anybody resolved problems of this kind? Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat) EDIT: I managed to install it using the precompiled binary (Thanks Paul), but I would still like to know what's causing this.
[ "Have you checked that when you installed Visual Studio, you installed the C++ compiler? It seems like a silly question, but this is the mistake I made. Check by going into the setup for visual studio (Programs and features: Modify \"Visual Studio 2015\"), then under Programming Languages->VC++, make sure it's ticked.\n\n", "\nRun pip install wheel\nDownload lxml from http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml, if your python version is 3.5 , download lxml-3.6.4-cp35-cp35m-win32.whl.\nRun python -m pip install lxml-3.6.4-cp35-cp35m-win32.whl\n\n", "As an update to the answer from @davidsheldon above, if you want to use Visual Studio Build Tools 2017 instead of 2015, it will work.\nI found that the default install of the build tools stand alone was not enough, however, I added `VC++ 2015.3 ... toolset for desktop (x86,x64) and then python was happy:\n\n", "I've found another solution to get through this:\nBecause I use anaconda python, so I use this code:\nconda install -c conda-forge scrapy\n\n", "I have same question with you! I found a way no need install vs2015,maybe,you just haven't install twisted.http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted .download twisted --version(Twisted‑17.5.0‑cp36‑cp36m‑win_amd64.whl)(maybe win_amd32.whl if 64didn't work),and run : pip PATH + filename\npip install C:\\Users\\CR\\Downloads\\Twisted-17.5.0-cp36-cp36m-win_amd64.whl\n\npip install Scrapy\n\nI just install successful! good luck for you!\nmy step to insatll scrapy:\n1.pip install wheel\n2.pip install lxml\n3.pip install pyOpenSSL\n4.pip install Twisted (fault->do like above)\n5.install pywin32 form : https://sourceforge.net/projects/pywin32/files/pywin32/Build%20220/ \n6.pip Scrapy (succesful)\n", "Had the same problem and noticed that I had installed the 32bit version in a 64bit machine. All I did was uninstall the wrong one and install the right version and it worked fine.\n", "Easiest way to achieve this, can be automated as it doesn't require user input:\npython -m pip install https://download.lfd.uci.edu/pythonlibs/archived/lxml-4.9.0-cp311-cp311-win_amd64.whl\n\nThis will install the 64-bit version on your machine.\n" ]
[ 28, 8, 6, 2, 1, 0, 0 ]
[ "First:\npip install wheel\n\nSecond: go to http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml\nand download proper wheel.\npip install the file you downloaded (.whl).\n" ]
[ -2 ]
[ "lxml", "pip", "python", "visual_c++" ]
stackoverflow_0038949519_lxml_pip_python_visual_c++.txt
Q: Numpy check that all the element of each row of a 2D numpy array is the same
I am sure this is an already answered question, but I couldn't find it anywhere. I want to check that all the elements of each row of a 2D numpy array are the same, and 0 is a possibility. For example:

>>> a = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]])
>>> a
array([[0, 0, 0],
       [1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])
>>> function_to_find(a)
True

Looking around there are suggestions to use all() and any(), but I don't think it's my case. If I use them in this way:

>>> a = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]])
>>> a.all()
False
>>> a.all(axis=1)
array([False,  True,  True,  True])
>>> a.all(axis=1).any()
True

but this also gives me True where I want False:

>>> a = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 5]])
>>> a.all()
False
>>> a.all(axis=1)
array([False,  True,  True,  True])
>>> a.all(axis=1).any()
True

A solution could be:

results_bool = np.array([])
for i in a:
    results_bool = np.append(results_bool, np.all(i == i[0]))
result = np.all(results_bool)

but I would prefer to avoid loops and use numpy. Any idea?

A: You can simply do the following:

result = (a[:, 1:] == a[:, :-1]).all()

Or, with broadcasting:

result = (a[:, 1:] == a[:, [0]]).all()

result = (a == a[:, [0]]).all() is similar, but the above avoids the redundant comparison of the column a[:,0] to itself.
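A quick check of the accepted one-liner against both arrays from the question, wrapped in a small helper for readability:

import numpy as np

def rows_constant(a):
    # Compare every column to the first one; all() collapses to a single bool
    return bool((a == a[:, [0]]).all())

print(rows_constant(np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]])))  # True
print(rows_constant(np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 5]])))  # False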
Numpy check that all the element of each row of a 2D numpy array is the same
I am sure this is an already answered question, but I couldn't find anywhere. I want to check that all the element of each row of a 2D numpy array is the same and 0 is a possibility. For example: >>> a = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]]) >>> a array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]]) >>> function_to_find(a) True Looking around there are suggestions to use all() and any(), but I don't think it's my case. If I use them in this way: >>> a = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]]) >>> a.all() False >>> a.all(axis=1) array([False, True, True, True]) >>> a.all(axis=1).any() True but also this give me True and I want False: >>> a = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 5]]) >>> a.all() False >>> a.all(axis=1) array([False, True, True, True]) >>> a.all(axis=1).any() True A solution could be: results_bool = np.array([]) for i in a: results_bool = np.append(results_bool, np.all(i == i[0])) result = np.all(results_bool) but I would prefer to avoid loops and use numpy. Any idea?
[ "You can simply do the following:\nresult = (a[:, 1:] == a[:, :-1]).all()\n\nOr, with broadcasting:\nresult = (a[:, 1:] == a[:, [0]]).all()\n\nresult = (a == a[:, [0]]).all() is similar, but the above avoids the redundant comparison of the column a[:,0] to itself.\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074464801_arrays_numpy_python.txt
Q: Impute null values based on a group statistic
I have a copy of the dataset made with df.dropna(), and I have compiled the means of that data using df.groupby on different groups; the result of that script is the dataframe below:

# Suppose this is a result from a df.groupby script
impute_data = pd.DataFrame({'PClass': [1, 1, 2, 2, 3, 3],
                            'Sex': ['male', 'female', 'male', 'female', 'male', 'female',],
                            'Mean': [34, 29, 24, 40, 18, 25]})

Suppose I have this real dataset and I want to impute the missing values based on the means from the copy dataset; how can that be achieved?

d = {'PClass': [1, 3, 2, 3, 2, 1, 2, 1, 3, 2, 3, 1],
     'Sex': ['male', 'male', 'female', 'male', 'female', 'female', 'male', 'male', 'female', 'male', 'female', 'female'],
     'Age': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]}
df = pd.DataFrame(data=d)

My initial solution for this is an if-else statement where, for example, if PClass=1 and Sex='male', impute 34, and so on, but I am not certain how I can implement it.

A: You can use update after renaming Mean to Age:

impute_data.rename({'Mean':'Age'}, axis=1, inplace=True)
df.update(impute_data)

Note that update occurs in place; you shouldn't assign it to another dataframe.
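One thing to be aware of, as an observation about pandas semantics rather than something stated in the answer: df.update aligns on the index, not on PClass/Sex, so it only fills rows whose positional index happens to match impute_data. A merge keyed on the two group columns expresses "fill each missing Age with its group mean" directly. A sketch:

import numpy as np
import pandas as pd

impute_data = pd.DataFrame({'PClass': [1, 1, 2, 2, 3, 3],
                            'Sex': ['male', 'female', 'male', 'female', 'male', 'female'],
                            'Mean': [34, 29, 24, 40, 18, 25]})

d = {'PClass': [1, 3, 2, 3, 2, 1, 2, 1, 3, 2, 3, 1],
     'Sex': ['male', 'male', 'female', 'male', 'female', 'female', 'male', 'male', 'female', 'male', 'female', 'female'],
     'Age': [np.nan] * 12}
df = pd.DataFrame(data=d)

# A left merge keeps df's row order, so 'Mean' lines up row for row with df
group_means = df.merge(impute_data, on=['PClass', 'Sex'], how='left')['Mean']
df['Age'] = df['Age'].fillna(group_means)
print(df)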
Impute null values based on a group statistic
I have a copy dataset using df.dropna() and I have compiled the mean of those data using df.groupby based on different groups with the converted code below assigned in: # Suppose this is a result from df.groupby script impute_data = pd.DataFrame({'PClass': [1, 1, 2, 2, 3, 3], 'Sex': ['male', 'female', 'male', 'female', 'male', 'female',], 'Mean': [34, 29, 24, 40, 18, 25]}) Suppose I have this real dataset and I want to impute the missing values based on the means from copy dataset, how can it be achieved? d = {'PClass': [1, 3, 2, 3, 2, 1, 2, 1, 3, 2, 3, 1], 'Sex': ['male', 'male', 'female', 'male', 'female', 'female', 'male', 'male', 'female', 'male', 'female', 'female'], 'Age': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]} df = pd.DataFrame(data=d) My initial solution for this is an if-else statement where for example if PClass=1 and Sex='male' impute 34 and so on, but I am not certain on how I can implement it.
[ "You can use update after renaming Mean to Age:\nimpute_data.rename({'Mean':'Age'}, axis=1, inplace=True)\ndf.update(impute_data)\n\nNote that update occurs in place, you shouldn't assign it to another dataframe.\n" ]
[ 0 ]
[]
[]
[ "data_analysis", "data_science", "pandas", "python" ]
stackoverflow_0074464863_data_analysis_data_science_pandas_python.txt
Q: how to convert a method type to a type that can be multiplied: TypeError: unsupported operand type(s) for *: 'method' and 'Piecewise'
I have a problem where I need to multiply the type {method} with a Piecewise function, all in symbolic sympy. I try to multiply a Piecewise expression with the derivative of another expression, and the result of the derivative expression is of type 'method'. Here is my code:

import sympy as sp
import numpy as np
import matplotlib as plt
# These are all the libraries I need
import mpmath

n = 10
x = sp.symbols('x', positive=True)
c = list(sp.symbols('c0:%d'%(n + 1)))
f = 1+(((sp.exp(x) * (1 - np.exp(-1))) + (sp.exp(-x)) * (np.exp(1) - 1)) / (np.exp(-1) - np.exp(1)))
xx = np.linspace(0, 1, n + 1)
i = 0
N = []
a = sp.Piecewise(
    (((xx[i + 1] - x) / (xx[i + 1] - xx[i])), (x >= float((xx[i]))) and x <= float((xx[i + 1]))),
    (0, x > float(xx[i + 1])),
)
N.append(a)
for i in range(1, n):
    a = sp.Piecewise(
        (0, x < float(xx[i - 1])),
        ((xx[i - 1] - x) / (xx[i - 1] - xx[i]), ((x >= float((xx[i - 1]))) & (x <= float(xx[i])))),
        ((xx[i + 1] - x) / (xx[i + 1] - xx[i]), ((x >= float(xx[i])) & (x <= float(xx[i + 1])))),
        (0, x > float(xx[i + 1])),
        (0, True),
    )
    N.append(a)
i = i + 1
a = sp.Piecewise(
    (0, x < float(xx[i - 1])),
    ((xx[i - 1] - x) / (xx[i - 1] - xx[i]), ((x >= float((xx[i - 1]))) & (x <= float(xx[i])))),
    (0, True),
)
N.append(a)
k = []
#u = []
for i in range(0, n + 1):
    if i == 0:
        u = c[i] * N[i]
    else:
        u = c[i] * N[i] + u
Ntag = []
for i in range(0, n + 1):
    tag = N[i].diff(x)
    Ntag.append(tag)
utag = u.diff
try:
    res = utag*Ntag[i]  # for any integer
except:
    traceback.print_exc()

and the output is:

TypeError: unsupported operand type(s) for *: 'method' and 'Piecewise'

The line making the error is the last line, utag*Ntag[i], and the traceback is:

Traceback (most recent call last):
  File ".py", line 56, in <module>
    res= utag*Ntag[i]# for any integer
TypeError: unsupported operand type(s) for *: 'method' and 'Piecewise'

A: Okay, well, the mistake is straightforward: u.diff is a function (a bound method), not a number, but it's not being called like a function. The function itself is being assigned to utag. So, you're trying to multiply the function u.diff with the value Ntag[i]. But u.diff is not a numerical value, it is just a function. You'd have to call the function if you wanted it to return an actual value. But you're not calling it, you're just referencing it.
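Concretely, the fix is to call diff with the symbol, exactly as the loop above it already does for N[i]. A sketch of just the affected lines:

utag = u.diff(x)      # call the method; bare u.diff is the method object itself
res = utag * Ntag[i]  # both operands are now sympy expressions, so * works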
how to convert a method type to a type that can be multiplied: TypeError: unsupported operand type(s) for *: 'method' and 'Piecewise'
i have a problem that i need to multiply the type {method} and a piecewise function, all in symbolic sympy. i try to multiply a Piecewise expression with Derivative of another expression that. the result of the Derivative expression is from a type 'method' here is my code: import sympy as sp import numpy as np import matplotlib as plt # This is all the library's i need import mpmath n = 10 x = sp.symbols('x', positive=True) c = list(sp.symbols('c0:%d'%(n + 1))) f = 1+(((sp.exp(x) * (1 - np.exp(-1))) + (sp.exp(-x)) * (np.exp(1) - 1)) / (np.exp(-1) - np.exp(1))) xx = np.linspace(0, 1, n + 1) i = 0 N = [] a = sp.Piecewise( (((xx[i + 1] - x) / (xx[i + 1] - xx[i])), (x >= float((xx[i]))) and x <= float((xx[i + 1]))), (0, x > float(xx[i + 1])), ) N.append(a) for i in range(1, n): a = sp.Piecewise( (0, x < float(xx[i - 1])), ((xx[i - 1] - x) / (xx[i - 1] - xx[i]), ((x >= float((xx[i - 1]))) & (x <= float(xx[i])))), ((xx[i + 1] - x) / (xx[i + 1] - xx[i]), ((x >= float(xx[i])) & (x <= float(xx[i + 1])))), (0, x > float(xx[i + 1])), (0, True), ) N.append(a) i = i + 1 a = sp.Piecewise( (0, x < float(xx[i - 1])), ((xx[i - 1] - x) / (xx[i - 1] - xx[i]), ((x >= float((xx[i - 1]))) & (x <= float(xx[i])))), (0, True), ) N.append(a) k = [] #u = [] for i in range(0, n + 1): if i == 0: u = c[i] * N[i] else: u = c[i] * N[i] +u Ntag = [] for i in range(0, n + 1): tag = N[i].diff(x) Ntag.append(tag) utag = u.diff try: res= utag*Ntag[i]# for any integer except: traceback.print_exc() and the output is: TypeError: unsupported operand type(s) for *: 'method' and 'Piecewise' and the line that making the error is the last line: utag*Ntag[i] and the traceback is : Traceback (most recent call last): File ".py", line 56, in <module> res= utag*Ntag[i]# for any integer TypeError: unsupported operand type(s) for *: 'method' and 'Piecewise'
[ "Okay well the mistake is obvious. u.diff is clearly a function, not a number; but it's not being called like a function. The function itself is being assigned to utag. So, you're trying to multiply the function u.diff with the value Ntag[i]. But u.diff is not a numerical value, it is just a function. You'd have to call the function if you wanted it to return an actual value. But you're not calling it, you're just referencing it.\n" ]
[ 1 ]
[]
[]
[ "methods", "python", "sympy", "typeerror", "types" ]
stackoverflow_0074464674_methods_python_sympy_typeerror_types.txt
Q: Is there a way to define sets, variables and constraints intelligently in PYOMO without cross product?
I have three different sets:

Number of stores - 100
Number of products - 10
Number of sizes in each product - 10

I want to create a Parameter in pyomo which is a combination of the above three sets. Basically, I want to skip the cross product shown in the code snippet below. The reason to skip that approach is that each product has its own 10 sizes, so there is no need to create combinations of product A with sizes coming from product B, which don't make sense.
Code snippet with cross product:

model = pyo.AbstractModel()
model.stores = pyo.Set()
model.sizes = pyo.Set()
model.packs = pyo.Set()
model.products = pyo.Set()
model.demand = pyo.Param(model.clusters, model.products, model.sizes, default = 0)

A: So, if I understand your dilemma, the sizes are different for different products, and a universal cross-set of products and sizes doesn't work because of that.
I think you have 2 options. Either works.
The easiest thing to do would be just to make tuples of product-size pairs and use that as a set... basically merging the products with their sizes.

products = {(shoes, 12), (shoes, 13), (shoes, 5), (pants, XL), (pants, L), ...}

It is perfectly legitimate to use a flat set like that and use it to initialize your pyomo.Set. It might get a little tricky if you need to sum over individual products because that info is merged with the sizes. Not sure if that is needed.
Option 2 is to use an indexed set, so you would have sets of sizes that are indexed by product. Here is an example using EVs and times. You would set it up similarly for products & sizes.
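A minimal sketch of both options in Pyomo syntax; the set contents are made-up placeholders, and the indexed-set example stands in for the external EV/times link the answer originally pointed to:

import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.stores = pyo.Set(initialize=['s1', 's2'])

# Option 1: a flat 2-dimensional set holding only the valid (product, size) pairs
model.product_sizes = pyo.Set(dimen=2,
                              initialize=[('shoes', '12'), ('shoes', '13'),
                                          ('pants', 'XL'), ('pants', 'L')])
# demand is then indexed by (store, product, size) with no invalid combinations
model.demand = pyo.Param(model.stores, model.product_sizes, default=0)

# Option 2: sizes as an indexed set, one set of sizes per product
model.products = pyo.Set(initialize=['shoes', 'pants'])
model.sizes_of = pyo.Set(model.products,
                         initialize={'shoes': ['12', '13'], 'pants': ['XL', 'L']})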
Is there a way to define sets, variables and constraints intelligently in PYOMO without cross product?
I have three different sets Number of Store - 100 Number of Products - 10 Number of Size in each product - 10 I want to create Parameter in pyomo which is combination of above three sets. Basically i want to skip cross product which have code snippet below. Reason to skip below approach is each product can have 10 different sizes and no need to create combination of product of A and sizes coming from product B, which doesn't make sense. Code snippet with cross product: model = pyo.AbstractModel() model.stores = pyo.Set() model.sizes = pyo.Set() model.packs = pyo.Set() model.products = pyo.Set() model.demand = pyo.Param(model.clusters, model.products, model.sizes, default = 0)
[ "So, if I understand your dilemma, the sizes are different for different products and a universal cross-set of products and sizes doesn't work because of that.\nI think you have 2 options. Either works.\nThe easies thing to do would be just to make tuples of product-size pairs and use that as a set...basically merging the products with their sizes.\nproducts = {(shoes, 12), (shoes, 13), (shoes, 5), (pants, XL), (pants, L),...}\nIt is perfectly legitimate to use a flat set like that and use that to initialize your pyomo.Set. It might get a little tricky if you need to sum over individual products because that info is merged with the sizes. Not sure if that is needed.\nOption 2 is to use an indexed set, so you would have sets of sizes that are indexed by product. Here is an example using EV's and times. You would set it up similarly for products & sizes.\n" ]
[ 0 ]
[]
[]
[ "pyomo", "python" ]
stackoverflow_0074461347_pyomo_python.txt
Q: parse html using Python's "xml" module ParseError on meta tag
I'm trying to parse some html using the xml python library. The html I'm trying to parse is from download.docker.com, which breaks out to:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Index of linux/ubuntu/dists/jammy/pool/stable/amd64/</title>
</head>
<body>
<h1>Index of linux/ubuntu/dists/jammy/pool/stable/amd64/</h1>
<hr>
<pre><a href="../">../</a>
<a href="containerd.io_1.5.10-1_amd64.deb">containerd.io_1.5.10-1_amd64.deb</a>
...
</pre><hr></body></html>

Parsing the html with the following code,

import urllib
import xml.etree.ElementTree as ET

html_doc = urllib.request.urlopen(<MY_URL>).read()
root = ET.fromstring(html_doc)

>>> ParseError: mismatched tag: line 6, column 2

unless I'm mistaken, this is because of the <meta charset="UTF-8">. Using something like lxml, I can make this work with:

import urllib
from lxml import html

html_doc = urllib.request.urlopen(<MY_URL>).read()
root = html.fromstring(html_doc)

Is there any way to parse this html using the xml python library instead of lxml?

A: Is there any way to parse this html using the xml python library instead of lxml?
The answer is no.
An XML library (for example xml.etree.ElementTree) cannot be used to parse arbitrary HTML. It can be used to parse HTML that also happens to be well-formed XML. But your HTML document is not well-formed.
lxml on the other hand can be used for both XML and HTML.
By the way, note that "the xml python library" is ambiguous. There are several submodules in the xml package in the standard library (https://docs.python.org/3/library/xml.html). All of them will reject the HTML document in the question.
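If the underlying goal is simply to stay inside the standard library (rather than to use the xml package specifically), html.parser can handle this page. A sketch that collects the hrefs from the directory listing, assuming html_doc holds the bytes fetched in the question:

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

collector = LinkCollector()
collector.feed(html_doc.decode("utf-8"))
print(collector.links)  # ['../', 'containerd.io_1.5.10-1_amd64.deb', ...]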
parse html using Python's "xml" module ParseError on meta tag
I'm trying to parse some html using the xml python library. The html I'm trying to parse is from download.docker.com which breaks out to, <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Index of linux/ubuntu/dists/jammy/pool/stable/amd64/</title> </head> <body> <h1>Index of linux/ubuntu/dists/jammy/pool/stable/amd64/</h1> <hr> <pre><a href="../">../</a> <a href="containerd.io_1.5.10-1_amd64.deb">containerd.io_1.5.10-1_amd64.deb</a> ... </pre><hr></body></html> Parsing the html with the following code, import urllib import xml.etree.ElementTree as ET html_doc = urllib.request.urlopen(<MY_URL>).read() root = ET.fromstring(html_doc) >>> ParseError: mismatched tag: line 6, column 2 unless I'm mistaken, this is because of the <meta charset="UTF-8">. Using something like lxml, I can make this work with, import urllib from lxml import html html_doc = urllib.request.urlopen(<MY_URL>).read() root = = html.fromstring(html_doc) Is there any way to parse this html using the xml python library instead of lxml?
[ "\nIs there any way to parse this html using the xml python library instead of lxml?\n\nThe answer is no.\nAn XML library (for example xml.etree.ElementTree) cannot be used to parse arbitrary HTML. It can be used to parse HTML that also happens to be well-formed XML. But your HTML document is not well-formed.\nlxml on the other hand can be used for both XML and HTML.\nBy the way, note that \"the xml python library\" is ambiguous. There are several submodules in the xml package in the standard library (https://docs.python.org/3/library/xml.html). All of them will reject the HTML document in the question.\n" ]
[ 1 ]
[]
[]
[ "html", "python", "xml" ]
stackoverflow_0074353760_html_python_xml.txt
Q: why tensorflow uses 100% of all CPU cores?
I've made a fresh install of the Jupyter Notebook kernel and python packages, including tensorflow 2.4.1 (using a miniconda env). When I train and test a model, my CPU usage saturates. In my old install that didn't happen (low CPU usage), and the time to accomplish the tasks was roughly the same. Is there a config of jupyter and/or tensorflow? I've tested on Jupyter Notebook and VSCode; the same problem occurs.

Ubuntu 20.04
16GB RAM
Intel® Core™ i5-8300H CPU @ 2.30GHz × 8

CPU usage when training a simple network model - htop view

Edit: Condition solved. I did some deep research on the intel website and found this link about threading config for Tensorflow and openMP. I ran some quick tests varying the tensorflow 2.x section parameters below, giving back no improvement.

import tensorflow as tf

tf.config.threading.set_inter_op_parallelism_threads()
tf.config.threading.set_intra_op_parallelism_threads()
tf.config.set_soft_device_placement(enabled)

Then I tested the openMP settings, changing OMP_NUM_THREADS from 0 to 8, as reported on the graph below:

training time vs OMP_NUM_THREADS

import os
os.environ["OMP_NUM_THREADS"] = "16"

CPU usage reduced, with lower training time.

CPU usage for OMP_NUM_THREADS equal to 0

OBS.: I am not an expert in ML benchmarks. I just fixed the network training parameters and topology for a keras.Sequential() model. I don't know why my CPU was threading at the maximum, OMP_NUM_THREADS=16, by default.

A: In our multiuser environment we need to keep some of the CPUs free for higher prioritized jobs (we don't have the rights to 'nice' processes). So reducing tensorflow's (version 2.8.2) cpu-greed is quite essential. The abovementioned solution works in our purely cpu environment (linux, 40 cores) with a restriction of the inter/intra-threads to 1.

import os
os.environ["OMP_NUM_THREADS"] = "8"

import tensorflow as tf
tf.config.threading.set_inter_op_parallelism_threads(1)
tf.config.threading.set_intra_op_parallelism_threads(1)

This restricts the cpu-usage to 800%.
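One detail that may explain the "no improvement" result when the threading calls were first tried (this is general TensorFlow/OpenMP behaviour, not something stated in the thread): both knobs only take effect if applied early. OMP_NUM_THREADS is read when the runtime initializes, and the set_*_parallelism_threads calls must run before any op or model is created. A sketch of the required ordering:

import os
# Must be set before tensorflow is imported, or it is ignored
os.environ["OMP_NUM_THREADS"] = "4"

import tensorflow as tf

# Must be called before the first op/model is built,
# otherwise TensorFlow raises a RuntimeError
tf.config.threading.set_inter_op_parallelism_threads(1)
tf.config.threading.set_intra_op_parallelism_threads(2)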
why tensorflow uses 100% of all CPU cores?
I've made a fresh install of the Jupyter Notebook kernel and python packages, including tensorflow 2.4.1 (using a miniconda env). When I train and test a model, my CPU usage saturates. In my old install that didn't happen (low CPU usage), and the time to accomplish the tasks was roughly the same. Is there a config of jupyter and/or tensorflow? I've tested on Jupyter Notebook and VSCode; the same problem occurs. Ubuntu 20.04 16GB RAM Intel® Core™ i5-8300H CPU @ 2.30GHz × 8 CPU usage when training a simple network model - htop view Edit: Condition solved. I did some deep research on the intel website and found this link about threading config for Tensorflow and openMP. I ran some quick tests varying the tensorflow 2.x section parameters below, giving back no improvement. import tensorflow as tf tf.config.threading.set_inter_op_parallelism_threads() tf.config.threading.set_intra_op_parallelism_threads() tf.config.set_soft_device_placement(enabled) Then I tested the openMP settings, changing OMP_NUM_THREADS from 0 to 8, as reported on the graph below: training time vs OMP_NUM_THREADS import os os.environ["OMP_NUM_THREADS"] = "16" CPU usage reduced, with lower training time. CPU usage for OMP_NUM_THREADS equal to 0 OBS.: I am not an expert in ML benchmarks. I just fixed the network training parameters and topology for a keras.Sequential() model. I don't know why my CPU was threading at the maximum, OMP_NUM_THREADS=16, by default.
[ "In our multiuser environment we need to keep some of the cpu's free for higher prioritized jobs (we don't have the rights to 'nice' processes). So reducing tensorflows (Version 2.8.2) cpu-greed is quite essential. The abovementioned solution works in our purely cpu environment (linux, 40 cores) with a restriction of the inter/intra-threads to 1.\nimport os\nos.environ[\"OMP_NUM_THREADS\"] = “8”\n\nimport tensorflow as tf\ntf.config.threading.set_inter_op_parallelism_threads(1) \ntf.config.threading.set_intra_op_parallelism_threads(1)\n\nThis restricts the cpu-usage to 800%.\n" ]
[ 0 ]
[]
[]
[ "cpu", "jupyter_notebook", "python", "tensorflow" ]
stackoverflow_0068954373_cpu_jupyter_notebook_python_tensorflow.txt
Q: Get positional information from one object and apply it to another I am trying to query the location of an object (x,y,z coordinates) with xform, and then set the values harvested from xform to use in a setAttr command to influence the translation of a different object. pos = cmds.xform('pSphere1', r=True, ws=True, q=True, t=True ) print(pos) cmds.setAttr('pSphere2', tx=pos[0], ty=pos[1], tz=pos[2]) The print command is providing me with the correct coordinates however the setAttr command isn't picking them up and using them. I'm getting the error: Error: TypeError: file line 1: Invalid flag 'tx' Is this something to do with the 'data type' of the xform being "linear" and the setAttr being something else? If so, how do I work around or convert? A: You are supposed to use it like that : cmds.setAttr('pSphere2.tx', pos[0]) and your query should be pos = cmds.xform('pSphere1', ws=True, q=True, t=True ) To apply you can also do cmds.xform('pSphere2', ws=True, t=pos ) A: Something which is also possible with those commands : pos = cmds.xform('pSphere1', ws=True, q=True, t=True) cmds.setAttr('pSphere2.t', *pos)
Get positional information from one object and apply it to another
I am trying to query the location of an object (x,y,z coordinates) with xform, and then set the values harvested from xform to use in a setAttr command to influence the translation of a different object. pos = cmds.xform('pSphere1', r=True, ws=True, q=True, t=True ) print(pos) cmds.setAttr('pSphere2', tx=pos[0], ty=pos[1], tz=pos[2]) The print command is providing me with the correct coordinates however the setAttr command isn't picking them up and using them. I'm getting the error: Error: TypeError: file line 1: Invalid flag 'tx' Is this something to do with the 'data type' of the xform being "linear" and the setAttr being something else? If so, how do I work around or convert?
[ "You are supposed to use it like that :\n cmds.setAttr('pSphere2.tx', pos[0])\n\nand your query should be\n pos = cmds.xform('pSphere1', ws=True, q=True, t=True )\n\nTo apply you can also do\n cmds.xform('pSphere2', ws=True, t=pos )\n\n", "Something which is also possible with those commands :\npos = cmds.xform('pSphere1', ws=True, q=True, t=True)\ncmds.setAttr('pSphere2.t', *pos)\n\n" ]
[ 0, 0 ]
[]
[]
[ "maya", "python" ]
stackoverflow_0074437245_maya_python.txt
Q: Object not callable in python class in flask application This is my first flask application and i tried my best to get it running. Nevertheless i stuck into one error. I tried to create a python flask app but stuck into an error. here is my code flask.py from test_displayclass import Bartender class MyFlask: bartender = Bartender() def __init__(self): #self.bartender = Bartender() self.bartender.test() from flask import Flask app = Flask(__name__) my_flask = MyFlask() @app.route("/Test") def Test(): return my_flask.test.APIfunction if __name__ == "__main__": app.run(debug=True,port=9999) test_displayclass.py import adafruit_ssd1306 import busio from board import SCL, SDA from PIL import Image, ImageDraw, ImageFont class Display(): def __init__(self): i2c = busio.I2C(SCL, SDA) self.oled = adafruit_ssd1306.SSD1306_I2C(128, 64, i2c, addr=0x3C) self.oled.fill(0) self.oled.show() def drawImage(self, image): self.oled(image) self.oled.show() class Bartender(): def __init__(self): self.oled = Display() def test(self): image = Image.new("1", (20, 20)) draw = ImageDraw.Draw(image) font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 25) self.len = len("e") draw.text( (0, 40 - 2 // 2), "e", font=font, fill=255, ) Error is: Traceback (most recent call last): File "/home/pi/Smart-Bartender/bartender_flask_new_test.py", line 13, in <module> my_flask = MyFlask() File "/home/pi/Smart-Bartender/bartender_flask_new_test.py", line 8, in __init__ self.bartender.test() File "/home/pi/Smart-Bartender/test_displayclass.py", line 61, in test self.oled.drawImage(image) File "/home/pi/Smart-Bartender/test_displayclass.py", line 33, in drawImage self.oled(image) TypeError: 'SSD1306_I2C' object is not callable Can you advice me how to do it the correct waY? A: Need of taken the function self.oled.image(image)
Object not callable in python class in flask application
This is my first flask application and i tried my best to get it running. Nevertheless i stuck into one error. I tried to create a python flask app but stuck into an error. here is my code flask.py from test_displayclass import Bartender class MyFlask: bartender = Bartender() def __init__(self): #self.bartender = Bartender() self.bartender.test() from flask import Flask app = Flask(__name__) my_flask = MyFlask() @app.route("/Test") def Test(): return my_flask.test.APIfunction if __name__ == "__main__": app.run(debug=True,port=9999) test_displayclass.py import adafruit_ssd1306 import busio from board import SCL, SDA from PIL import Image, ImageDraw, ImageFont class Display(): def __init__(self): i2c = busio.I2C(SCL, SDA) self.oled = adafruit_ssd1306.SSD1306_I2C(128, 64, i2c, addr=0x3C) self.oled.fill(0) self.oled.show() def drawImage(self, image): self.oled(image) self.oled.show() class Bartender(): def __init__(self): self.oled = Display() def test(self): image = Image.new("1", (20, 20)) draw = ImageDraw.Draw(image) font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 25) self.len = len("e") draw.text( (0, 40 - 2 // 2), "e", font=font, fill=255, ) Error is: Traceback (most recent call last): File "/home/pi/Smart-Bartender/bartender_flask_new_test.py", line 13, in <module> my_flask = MyFlask() File "/home/pi/Smart-Bartender/bartender_flask_new_test.py", line 8, in __init__ self.bartender.test() File "/home/pi/Smart-Bartender/test_displayclass.py", line 61, in test self.oled.drawImage(image) File "/home/pi/Smart-Bartender/test_displayclass.py", line 33, in drawImage self.oled(image) TypeError: 'SSD1306_I2C' object is not callable Can you advice me how to do it the correct waY?
[ "Need of taken the function self.oled.image(image)\n" ]
[ 0 ]
[]
[]
[ "adafruit_circuitpython", "python" ]
stackoverflow_0074459896_adafruit_circuitpython_python.txt
Q: How to calculate numbers in a list?
I have to add every number to the one behind it in the list (wrapping around), using loops or functions.
Example:

list [1, 2, 3] => (1+3) + (2+1) + (3+2)
output = 12

Example code:

myList = [1,2,3]
x = myList [0] + myList [2]
x = x + (myList [1]+myList [0])
x = x + (myList [2]+myList [1])
print(x) # 12

I don't want to calculate them using sum() or just like 1+2+3.

A: In python, list[-1] returns the last element of the list, so doing something like this should do the job:

myList = [1,2,3]
total = 0
for i, num in enumerate(myList):
    print(num, myList[i-1])
    total += num + myList[i-1]
print(total)

Output:
1 3
2 1
3 2
12

A: Loop through the list, adding the element and the element before it to the total. Since list indexing wraps around when the index is negative, this will treat the last element as before the first element.

total = 0
for i in range(len(myList)):
    total += myList[i] + myList[i-1]
print(total)

A: Try accessing the list index and value using the enumerate function

>>> [x+mylist[i-1] for i, x in enumerate(mylist)]
[4, 3, 5]

To get the sum of the result

>>> sum([x+mylist[i-1] for i, x in enumerate(mylist)])
12
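Worth noting, as a property of the pairing rather than something from the answers: every element appears in exactly two of the circular pairs, so the total is always exactly twice the plain sum of the list. A zip-based variant of the loop answers, pairing each element with its left neighbour explicitly:

myList = [1, 2, 3]

# Rotate the list right by one, pair it with the original, and add the pairs
pairs = zip(myList, myList[-1:] + myList[:-1])
total = 0
for a, b in pairs:
    total += a + b
print(total)            # 12
print(2 * sum(myList))  # 12, the closed-form shortcut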
How to calculate numbers in a list?
I have to add every number with one behind it in the list using loops or functions example text; list[1,2,3] => (1+3)+(2+1)+(3+2) output = 12 example code; myList = [1,2,3] x = myList [0] + myList [2] x = x + (myList [1]+myList [0]) x = x + (myList [2]+myList [1]) print(x) # 12 I dont want to calculate them using sum() or just like 1+2+3
[ "In python, list[-1] returns the last element of the list so doing something like this should do the job -\nmyList = [1,2,3]\ntotal = 0\nfor i, num in enumerate(myList):\n print(num, myList[i-1])\n total += num + myList[i-1]\nprint(total)\n\nOutput:\n1 3\n2 1\n3 2\n12\n\n", "Loop through the list, adding the element and the element before it to the total. Since list indexing wraps around when the index is negative, this will treat the last element as before the first element.\ntotal = 0\nfor i in range(len(myList)):\n total += myList[i] + myList[i-1]\nprint(total)\n\n", "Try accessing the list index and value using enumerate function\n>>> [x+mylist[i-1] for i, x in enumerate(mylist)]\n[4, 3, 5]\n\nTo get the sum of the result\n>>> sum([x+mylist[i-1] for i, x in enumerate(mylist)])\n12\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074464989_list_python.txt
Q: Skfuzzy - get membership value from output
I made a fuzzy logic model using skfuzzy. On the basis of the calculated output, I would like to assign a category (low, medium, high) in the dataframe table to which the calculated value belongs. I can't get the category name from the output. How can I do it in skfuzzy?
My model:

def fuzzy_logic(s, v, d):
    h_max = pipe.max_h(d)
    i_min = pipe.min_slope(h_max, d)
    i_max = pipe.max_slope(d)

    slope = ctrl.Antecedent(np.arange(i_min, i_max + 1, 1), 'slope')

    v_min = 0
    v_max = 5
    velocity = ctrl.Antecedent(np.arange(v_min, v_max + 0.1, 0.1), 'velocity')

    diameter = ctrl.Consequent(np.arange(1, 101, 1), 'diameter')

    # Populate slope with membership functions.
    slope['low'] = fuzz.trimf(slope.universe, [i_min, i_min, i_max / 2])
    slope['medium'] = fuzz.trimf(slope.universe, [i_min, i_max / 2, i_max + 1])
    slope['high'] = fuzz.trimf(slope.universe, [i_max / 2, i_max + 1, i_max + 1])

    # Populate velocity with membership functions.
    velocity['low'] = fuzz.trimf(velocity.universe, [v_min, v_min, 0.5 * v_max])
    velocity['medium'] = fuzz.trimf(velocity.universe, [v_min, 0.5 * v_max, v_max])
    velocity['high'] = fuzz.trimf(velocity.universe, [0.5 * v_max, v_max, v_max])

    # Populate diameter
    diameter['reduction'] = fuzz.trimf(diameter.universe, [1, 1, 50])
    diameter['optimal'] = fuzz.trimf(diameter.universe, [1, 50, 100])
    diameter['increase'] = fuzz.trimf(diameter.universe, [50, 100, 100])

    # Define rules
    r1 = ctrl.Rule(slope['low'] & velocity['low'] , diameter['reduction'])
    r2 = ctrl.Rule(slope['low'] & velocity['medium'], diameter['reduction'])
    r4 = ctrl.Rule(slope['medium'] & velocity['low'], diameter['reduction'])
    r8 = ctrl.Rule(slope['high'] & velocity['low'], diameter['reduction'])
    r9 = ctrl.Rule(slope['high'] & velocity['medium'], diameter['reduction'])
    r3 = ctrl.Rule(slope['medium'] & velocity['medium'], diameter['optimal'])
    r5 = ctrl.Rule(slope['low'] & velocity['high'], diameter['increase'])
    r6 = ctrl.Rule(slope['medium'] & velocity['high'], diameter['increase'])
    r7 = ctrl.Rule(slope['high'] & velocity['high'], diameter['increase'])

    diameter_ctrl = ctrl.ControlSystem([r1, r2, r3, r4, r5, r6, r7, r8, r9])

    # compute
    diameters = ctrl.ControlSystemSimulation(diameter_ctrl)

    # calculate
    diameters.input['slope'] = s
    diameters.input['velocity'] = v
    diameters.compute()

    return diameters.output['diameter']

print(fuzzy_logic(s=20, v=1, d=0.2))

output:
32.23415721908444

This plot shows the effect. The value is in the low category. How do you get it out of the model?

A: Here's how you do it. You can use skfuzzy.interp_membership, apply it to each of the Consequent membership functions, and take the max() value:

print(interp_membership(diameter.universe, diameter['poor'].mf, diameters.output['diameter']))
print(interp_membership(diameter.universe, diameter['average'].mf, diameters.output['diameter']))
print(interp_membership(diameter.universe, diameter['good'].mf, diameters.output['diameter']))
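Adapted to the term names actually defined in the question's model ('reduction', 'optimal', 'increase'; the answer's 'poor'/'average'/'good' appear to come from a different example), a sketch that returns the winning category name, assuming diameter and diameters are the Consequent and simulation objects from the question:

from skfuzzy import interp_membership

crisp = diameters.output['diameter']

# Degree of membership of the crisp output in each consequent term
memberships = {
    term: interp_membership(diameter.universe, diameter[term].mf, crisp)
    for term in ('reduction', 'optimal', 'increase')
}
category = max(memberships, key=memberships.get)
print(category, memberships)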
Skfuzzy - get membership value from output
I made a fuzzy logic model using skfuzzy. On the basis of the calculated output, I would like to assign myself a category (low, medium, high) in the dataframe table to which the calculated value belongs. I can't get the category name from the output. How can I do it in skfuzzy? My model: def fuzzy_logic(s, v, d): h_max = pipe.max_h(d) i_min = pipe.min_slope(h_max, d) i_max = pipe.max_slope(d) slope = ctrl.Antecedent(np.arange(i_min, i_max + 1, 1), 'slope') v_min = 0 v_max = 5 velocity = ctrl.Antecedent(np.arange(v_min, v_max + 0.1, 0.1), 'velocity') diameter = ctrl.Consequent(np.arange(1, 101, 1), 'diameter') # Populate slope with membership functions. slope['low'] = fuzz.trimf(slope.universe, [i_min, i_min, i_max / 2]) slope['medium'] = fuzz.trimf(slope.universe, [i_min, i_max / 2, i_max + 1]) slope['high'] = fuzz.trimf(slope.universe, [i_max / 2, i_max + 1, i_max + 1]) # Populate velocity with membership functions. velocity['low'] = fuzz.trimf(velocity.universe, [v_min, v_min, 0.5 * v_max]) velocity['medium'] = fuzz.trimf(velocity.universe, [v_min, 0.5 * v_max, v_max]) velocity['high'] = fuzz.trimf(velocity.universe, [0.5 * v_max, v_max, v_max]) # Populate diamter diameter['reduction'] = fuzz.trimf(diameter.universe, [1, 1, 50]) diameter['optimal'] = fuzz.trimf(diameter.universe, [1, 50, 100]) diameter['increase'] = fuzz.trimf(diameter.universe, [50, 100, 100]) # Define rules r1 = ctrl.Rule(slope['low'] & velocity['low'] , diameter['reduction']) r2 = ctrl.Rule(slope['low'] & velocity['medium'], diameter['reduction']) r4 = ctrl.Rule(slope['medium'] & velocity['low'], diameter['reduction']) r8 = ctrl.Rule(slope['high'] & velocity['low'], diameter['reduction']) r9 = ctrl.Rule(slope['high'] & velocity['medium'], diameter['reduction']) r3 = ctrl.Rule(slope['medium'] & velocity['medium'], diameter['optimal']) r5 = ctrl.Rule(slope['low'] & velocity['high'], diameter['increase']) r6 = ctrl.Rule(slope['medium'] & velocity['high'], diameter['increase']) r7 = ctrl.Rule(slope['high'] & velocity['high'], diameter['increase']) diameter_ctrl = ctrl.ControlSystem([r1, r2, r3, r4, r5, r6, r7, r8, r9]) # compute diameters = ctrl.ControlSystemSimulation(diameter_ctrl) # calculate diameters.input['slope'] = s diameters.input['velocity'] = v diameters.compute() return diameters.output['diameter'] print(fuzzy_logic(s=20, v=1, d=0.2)) output: 32.23415721908444 This plot shows efect The value is in the low category. How do you get it out of the model?
[ "Here's how you do it. You can use skfuzzy.interp_membership and apply it to all the Consequent membership function and get the max() value:\nprint(interp_membership(diameter.universe, diameter['poor'].mf, diameters.output['diameter']))\nprint(interp_membership(diameter.universe, diameter['average'].mf, diameters.output['diameter']))\nprint(interp_membership(diameter.universe, diameter['good'].mf, diameters.output['diameter']))\n\n" ]
[ 0 ]
[]
[]
[ "fuzzy", "numpy", "pandas", "python", "skfuzzy" ]
stackoverflow_0073373581_fuzzy_numpy_pandas_python_skfuzzy.txt
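A minimal sketch following up on the record above: to recover the category name itself rather than three raw degrees, evaluate interp_membership for every labelled term of the Consequent and keep the label with the highest degree. This assumes the question's diameter Consequent and diameters simulation are in scope, and assumes a skfuzzy version whose fuzzy variables expose their labelled terms through a terms mapping:

import skfuzzy as fuzz

def output_category(consequent, crisp_value):
    # Membership degree of the defuzzified output in each labelled term
    degrees = {
        label: fuzz.interp_membership(consequent.universe, term.mf, crisp_value)
        for label, term in consequent.terms.items()
    }
    # The winning category is the label with the highest membership degree
    return max(degrees, key=degrees.get)

print(output_category(diameter, diameters.output['diameter']))  # e.g. 'reduction'

For the output 32.234 from the question, 'reduction' carries the highest membership, matching the plot.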
Q: How to obtain multiple solutions of a binary LP problem using Google's OR-Tools in Python? I am new to integer optimization. I am trying to solve the following large (although not that large) binary linear optimization problem: max_{x} x_1+x_2+...+x_n subject to: A*x <= b ; x_i is binary for all i=1,...,n As you can see, . the control variable is a vector x of length, say, n=150; x_i is binary for all i=1,...,n . I want to maximize the sum of the x_i's . in the constraint, A is an nxn matrix and b is an nx1 vector. So I have n=150 linear inequality constraints. I want to obtain a certain number of solutions, NS. Say, NS=100. (I know there is more than one solution, and there are potentially millions of them.) I am using Google's OR-Tools for Python. I was able to write the problem and to obtain one solution. I have tried many different ways to obtain more solutions after that, but I just couldn't. For example: I tried using the SCIP solver, and then I used the value of the objective function at the optimum, call it V, to add another constraint, x_1+x_2+...+x_n >= V, on top of the original "Ax<=b," and then used the CP-SAT solver to find NS feasible vectors (I followed the instructions in this guide). There is no optimization in this second step, just a quest for feasibility. This didn't work: the solver produced N replicas of the same vector. Still, when asked for the number of solutions found, it misleadingly replies that solution_printer.solution_count() is equal to NS. Here's a snippet of the code that I used: # Define the constraints (A and b are lists) for j in range(n): constraint_expr = [int(A[j][l])*x[l] for l in range(n)] model.Add(sum(constraint_expr) <= int(b[j][0])) V = 112 constraint_obj_val = [-x[l] for l in range(n)] model.Add(sum(constraint_obj_val) <= -V) # Call the solver: solver = cp_model.CpSolver() solution_printer = VarArraySolutionPrinterWithLimit(x, NS) solver.parameters.enumerate_all_solutions = True status = solver.Solve(model, solution_printer) I tried using the SCIP solver and then using solver.NextSolution(), but every time I was using this command, the algorithm would produce a vector that was less and less optimal every time: the first one corresponded to a value of, say, V=112 (the optimal one!); the second vector corresponded to a value of 111; the third one, to 108; fourth to sixth, to 103; etc. My question is, unfortunately, a bit vague, but here it goes: what's the best way to obtain more than one solution to my optimization problem? Please let me know if I'm not being clear enough or if you need more/other chunks of the code, etc. This is my first time posting a question here :) Thanks in advance. A: Is your matrix A integral? If not, you are not solving the same problem with scip and CP-SAT. Furthermore, why use scip? You should solve both parts with the same solver. Furthermore, I believe the default solution pool implementation in scip will return all solutions found, in reverse order, thus in decreasing quality order. A: In Gurobi, you can do something like this to get more than one optimal solution : solver->SetSolverSpecificParametersAsString("PoolSearchMode=2"); // or-tools [Gurobi] From Gurobi Reference [Section 20.1]: By default, the Gurobi MIP solver will try to find one proven optimal solution to your model. You can use the PoolSearchMode parameter to control the approach used to find solutions. In its default setting (0), the MIP search simply aims to find one optimal solution. Setting the parameter to 1 causes the MIP search to expend additional effort to find more solutions, but in a non-systematic way. You will get more solutions, but not necessarily the best solutions. Setting the parameter to 2 causes the MIP to do a systematic search for the n best solutions. For both non-default settings, the PoolSolutions parameter sets the target for the number of solutions to find. Another way to find multiple optimal solutions could be to first solve the original problem to optimality and then add the objective function as a constraint with lower and upper bound as the optimal objective value.
How to obtain multiple solutions of a binary LP problem using Google's OR-Tools in Python?
I am new to integer optimization. I am trying to solve the following large (although not that large) binary linear optimization problem: max_{x} x_1+x_2+...+x_n subject to: A*x <= b ; x_i is binary for all i=1,...,n As you can see, . the control variable is a vector x of length, say, n=150; x_i is binary for all i=1,...,n . I want to maximize the sum of the x_i's . in the constraint, A is an nxn matrix and b is an nx1 vector. So I have n=150 linear inequality constraints. I want to obtain a certain number of solutions, NS. Say, NS=100. (I know there is more than one solution, and there are potentially millions of them.) I am using Google's OR-Tools for Python. I was able to write the problem and to obtain one solution. I have tried many different ways to obtain more solutions after that, but I just couldn't. For example: I tried using the SCIP solver, and then I used the value of the objective function at the optimum, call it V, to add another constraint, x_1+x_2+...+x_n >= V, on top of the original "Ax<=b," and then used the CP-SAT solver to find NS feasible vectors (I followed the instructions in this guide). There is no optimization in this second step, just a quest for feasibility. This didn't work: the solver produced N replicas of the same vector. Still, when asked for the number of solutions found, it misleadingly replies that solution_printer.solution_count() is equal to NS. Here's a snippet of the code that I used: # Define the constraints (A and b are lists) for j in range(n): constraint_expr = [int(A[j][l])*x[l] for l in range(n)] model.Add(sum(constraint_expr) <= int(b[j][0])) V = 112 constraint_obj_val = [-x[l] for l in range(n)] model.Add(sum(constraint_obj_val) <= -V) # Call the solver: solver = cp_model.CpSolver() solution_printer = VarArraySolutionPrinterWithLimit(x, NS) solver.parameters.enumerate_all_solutions = True status = solver.Solve(model, solution_printer) I tried using the SCIP solver and then using solver.NextSolution(), but every time I was using this command, the algorithm would produce a vector that was less and less optimal every time: the first one corresponded to a value of, say, V=112 (the optimal one!); the second vector corresponded to a value of 111; the third one, to 108; fourth to sixth, to 103; etc. My question is, unfortunately, a bit vague, but here it goes: what's the best way to obtain more than one solution to my optimization problem? Please let me know if I'm not being clear enough or if you need more/other chunks of the code, etc. This is my first time posting a question here :) Thanks in advance.
[ "Is your matrix A integral? If not, you are not solving the same problem with scip and CP-SAT.\nFurthermore, why use scip? You should solve both parts with the same solver.\nFurthermore, I believe the default solution pool implementation in scip will return all solutions found, in reverse order, thus in decreasing quality order.\n", "In Gurobi, you can do something like this to get more than one optimal solution :\nsolver->SetSolverSpecificParametersAsString(\"PoolSearchMode=2\"); // or-tools [Gurobi]\n\nFrom Gurobi Reference [Section 20.1]:\n\nBy default, the Gurobi MIP solver will try to find one proven optimal solution to your model.\n\n\nYou can use the PoolSearchMode parameter to control the approach used to find solutions.\nIn its default setting (0), the MIP search simply aims to find one\noptimal solution. Setting the parameter to 1 causes the MIP search to\nexpend additional effort to find more solutions, but in a\nnon-systematic way. You will get more solutions, but not necessarily\nthe best solutions. Setting the parameter to 2 causes the MIP to do a\nsystematic search for the n best solutions. For both non-default\nsettings, the PoolSolutions parameter sets the target for the number\nof solutions to find.\n\nAnother way to find multiple optimal solutions could be to first solve the original problem to optimality and then add the objective function as a constraint with lower and upper bound as the optimal objective value.\n" ]
[ 1, 0 ]
[]
[]
[ "integer_programming", "linear_programming", "or_tools", "python" ]
stackoverflow_0074464351_integer_programming_linear_programming_or_tools_python.txt
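A sketch of the two-phase approach discussed in the record above, done entirely in CP-SAT so the optimum and the enumeration agree. The tiny A and b here are placeholders for the 150-variable instance; clearing the objective through the underlying proto is one way to turn the model back into a pure feasibility problem, which enumerate_all_solutions requires:

from ortools.sat.python import cp_model

A = [[1, 1, 0], [0, 1, 1]]  # placeholder for the real n x n matrix
b = [1, 2]
n = 3

model = cp_model.CpModel()
x = [model.NewBoolVar(f'x{i}') for i in range(n)]
for row, rhs in zip(A, b):
    model.Add(sum(a * xi for a, xi in zip(row, x)) <= rhs)

# Phase 1: maximise and record the optimal value V.
model.Maximize(sum(x))
solver = cp_model.CpSolver()
solver.Solve(model)
V = int(solver.ObjectiveValue())

# Phase 2: drop the objective, pin the optimum, enumerate distinct vectors.
model.Proto().ClearField('objective')
model.Add(sum(x) == V)

class Collector(cp_model.CpSolverSolutionCallback):
    def __init__(self, variables, limit):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self._vars, self._limit, self.solutions = variables, limit, []
    def on_solution_callback(self):
        self.solutions.append(tuple(self.Value(v) for v in self._vars))
        if len(self.solutions) >= self._limit:
            self.StopSearch()

collector = Collector(x, limit=100)
solver = cp_model.CpSolver()
solver.parameters.enumerate_all_solutions = True
solver.Solve(model, collector)
print(collector.solutions)  # each tuple is a distinct optimal 0/1 vector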
Q: How to convert a column value to list in Python I need to convert the value in the column 'value' to a list format. The dataframe df: emp_no value 0 390 10.0 1 395 20.0 2 397 30.0 3 522 40.0 4 525 40.0 Output should be: emp_no value 0 390 [5,10.0] 1 395 [5,20.0] 2 397 [5,30.0] 3 522 [5,40.0] 4 525 [5,40.0] A: import pandas as pd df = pd.DataFrame({"emp_no": [1, 2, 3], "value": [10.0, 20.0, 30.0]}) df['value'] = df['value'].astype('object') for index, row in df.iterrows(): df.at[index, 'value'] = [5, df.at[index, 'value']] print(df) # emp_no value # 0 1 [5, 10.0] # 1 2 [5, 20.0] # 2 3 [5, 30.0] A: You could try with map, as well : import pandas as pd df = pd.DataFrame({'em_pno':[390,396,397], 'value':[10.0,6.0,7.0]}) df['value'] = df['value'].map(lambda x:[5, x]) # IN em_pno value 0 390 10.0 1 396 6.0 2 397 7.0 # OUT em_pno value 0 390 [5, 10.0] 1 396 [5, 6.0] 2 397 [5, 7.0]
How to convert a column value to list in Python
I need to convert the value in the column 'value' to a list format. The dataframe df: emp_no value 0 390 10.0 1 395 20.0 2 397 30.0 3 522 40.0 4 525 40.0 Output should be: emp_no value 0 390 [5,10.0] 1 395 [5,20.0] 2 397 [5,30.0] 3 522 [5,40.0] 4 525 [5,40.0]
[ "import pandas as pd\n\ndf = pd.DataFrame({\"emp_no\": [1, 2, 3], \"value\": [10.0, 20.0, 30.0]})\n\ndf['value'] = df['value'].astype('object')\nfor index, row in df.iterrows():\n df.at[index, 'value'] = [5, df.at[index, 'value']]\n\nprint(df)\n\n# emp_no value\n# 0 1 [5, 10.0]\n# 1 2 [5, 20.0]\n# 2 3 [5, 30.0]\n\n", "You could try with map, as well :\nimport pandas as pd\n\ndf = pd.DataFrame({'em_pno':[390,396,397], 'value':[10.0,6.0,7.0]})\ndf['value'] = df['value'].map(lambda x:[5, x])\n\n# IN\n em_pno value\n0 390 10.0\n1 396 6.0\n2 397 7.0\n# OUT\n em_pno value\n0 390 [5, 10.0]\n1 396 [5, 6.0]\n2 397 [5, 7.0]\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "list", "multiple_columns", "python", "row" ]
stackoverflow_0074465018_dataframe_list_multiple_columns_python_row.txt
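A vectorised alternative to the loop-based answers in the record above; a plain list comprehension assigned back to the column avoids iterrows and the explicit dtype change:

import pandas as pd

df = pd.DataFrame({'emp_no': [390, 395, 397, 522, 525],
                   'value': [10.0, 20.0, 30.0, 40.0, 40.0]})
df['value'] = [[5, v] for v in df['value']]
print(df)
#    emp_no      value
# 0     390  [5, 10.0]
# 1     395  [5, 20.0]
# 2     397  [5, 30.0]
# 3     522  [5, 40.0]
# 4     525  [5, 40.0]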
Q: Get rid of annoying output I'm working with the following function: from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip if ffmpeg_extract_subclip("c:\\users\\samuel\\desktop\\VideosYT\\randomvideo.mp4", 0, 10, targetname="test.mp4"): print("") which is giving me the following output: Moviepy - Running: >>> "+ " ".join(cmd) Moviepy - Command successful which I don't want to show in my terminal. How can I solve this output problem? Thank you! A: The function has a logger parameter, which I found after searching the source code. Here's a link! You can try setting it to None: if ffmpeg_extract_subclip("c:\\users\\samuel\\desktop\\VideosYT\\randomvideo.mp4", 0, 10, targetname="test.mp4", logger=None): print("")
Get rid of annoying output
I'm working with the following function: from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip if ffmpeg_extract_subclip("c:\\users\\samuel\\desktop\\VideosYT\\randomvideo.mp4", 0, 10, targetname="test.mp4"): print("") which is giving me the following output: Moviepy - Running: >>> "+ " ".join(cmd) Moviepy - Command successful which I don't want to show in my terminal. How can I solve this output problem? Thank you!
[ "There is a logger parameter of the function after I searched for the source code. Here's a link!\nYou can try to set it to None:\nif ffmpeg_extract_subclip(\"c:\\\\users\\\\samuel\\\\desktop\\\\VideosYT\\\\randomvideo.mp4\", 0, 10, targetname=\"test.mp4\", logger=None):\n print(\"\")\n\n" ]
[ "The function has a logger parameter, which I found after searching the source code. Here's a link!\nYou can try setting it to None:\nif ffmpeg_extract_subclip(\"c:\\\\users\\\\samuel\\\\desktop\\\\VideosYT\\\\randomvideo.mp4\", 0, 10, targetname=\"test.mp4\", logger=None):\n print(\"\")\n\n" ]
[]
[]
[ "moviepy", "output", "python", "subprocess" ]
stackoverflow_0074465063_moviepy_output_python_subprocess.txt
Q: How to show long text in tkinter tksheet cells? I have been learning tkinter and found a module tksheet, which helps me to show the tables. I am tinkering with the documentation, and try to create a simple table. I encountered one problem, sometimes the text are long and I want to show the whole text in the table. (I can set geometry of root app, but I am talking about showing the full text in sheet). How to show the full text in tksheet table? MWE %%writefile a.py from tksheet import Sheet import tkinter as tk import pandas as pd df = pd.DataFrame({'col0': [10,20,30], 'col1': [100,200,300], 'col2': ['NY','TX','OH'], 'col3': ['This is very long sentence.']*3 }) lst_data = df.values.tolist() headers = df.columns.tolist() win = tk.Tk() win.geometry("800x200") win.grid_columnconfigure(0, weight = 1) win.grid_rowconfigure(0, weight = 1) frame = tk.Frame() frame.grid_columnconfigure(0, weight = 1) frame.grid_rowconfigure(0, weight = 1) sheet = Sheet(frame,data = lst_data, headers=headers) sheet.enable_bindings() sheet.highlight_rows(rows = [0], bg = 'yellow', fg = None) frame.grid(row = 0, column = 0, sticky = "nswe") sheet.grid(row = 0, column = 0, sticky = "nswe") win.mainloop() Question: How to show the full text in cells? A: With reference to issues in github: https://github.com/ragardner/tksheet/issues/9 I found the solution: sheet.set_all_cell_sizes_to_text()
How to show long text in tkinter tksheet cells?
I have been learning tkinter and found a module tksheet, which helps me to show the tables. I am tinkering with the documentation, and try to create a simple table. I encountered one problem, sometimes the text are long and I want to show the whole text in the table. (I can set geometry of root app, but I am talking about showing the full text in sheet). How to show the full text in tksheet table? MWE %%writefile a.py from tksheet import Sheet import tkinter as tk import pandas as pd df = pd.DataFrame({'col0': [10,20,30], 'col1': [100,200,300], 'col2': ['NY','TX','OH'], 'col3': ['This is very long sentence.']*3 }) lst_data = df.values.tolist() headers = df.columns.tolist() win = tk.Tk() win.geometry("800x200") win.grid_columnconfigure(0, weight = 1) win.grid_rowconfigure(0, weight = 1) frame = tk.Frame() frame.grid_columnconfigure(0, weight = 1) frame.grid_rowconfigure(0, weight = 1) sheet = Sheet(frame,data = lst_data, headers=headers) sheet.enable_bindings() sheet.highlight_rows(rows = [0], bg = 'yellow', fg = None) frame.grid(row = 0, column = 0, sticky = "nswe") sheet.grid(row = 0, column = 0, sticky = "nswe") win.mainloop() Question: How to show the full text in cells?
[ "With reference to issues in github: https://github.com/ragardner/tksheet/issues/9\nI found the solution:\nsheet.set_all_cell_sizes_to_text()\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074464957_python_tkinter.txt
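A placement note for the accepted fix in the record above: the call belongs after the Sheet has been created with its data, e.g. in the question's MWE:

sheet = Sheet(frame, data = lst_data, headers = headers)
sheet.enable_bindings()
sheet.set_all_cell_sizes_to_text()  # resize rows and columns to fit the cell text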
Q: PYQT5 - Update a label when a QPushButton is checked/unchecked How do I change the text of a label when a QPushButton with checkable set to True is checked or unchecked? I am using the buttons as a seat selection chart, and on clicking a certain seat I want to update a label (or, if possible, something else) that will show the price based on the seat selection, and deduct the value when it's unchecked. The code for the seats: vipseats = QButtonGroup() for i in range(1, 41): vseat = QPushButton(self.centralwidget, checkable = True) vipseats.addButton(vseat) if i <= 20: labelm = QLabel(f'V-A-{i}', self.centralwidget) labelm.setGeometry(QRect(280+ 60* i, 550, 50, 50)) vseat.setGeometry(QRect(280+ 60* i, 550, 50, 50)) elif 20 < i <= 41: labelm = QLabel(f'V-B-{i-20}', self.centralwidget) labelm.setGeometry(QRect(280+ 60* (i-20), 625, 50, 50)) vseat.setGeometry(QRect(280+ 60* (i-20), 625, 50, 50)) the seating chart
PYQT5 - Update a label when a QPushButton is checked/unchecked
How do I change the text of a label when a QPushButton with checkable set to True is checked or unchecked? I am using the buttons as a seat selection chart, and on clicking a certain seat I want to update a label (or, if possible, something else) that will show the price based on the seat selection, and deduct the value when it's unchecked. The code for the seats: vipseats = QButtonGroup() for i in range(1, 41): vseat = QPushButton(self.centralwidget, checkable = True) vipseats.addButton(vseat) if i <= 20: labelm = QLabel(f'V-A-{i}', self.centralwidget) labelm.setGeometry(QRect(280+ 60* i, 550, 50, 50)) vseat.setGeometry(QRect(280+ 60* i, 550, 50, 50)) elif 20 < i <= 41: labelm = QLabel(f'V-B-{i-20}', self.centralwidget) labelm.setGeometry(QRect(280+ 60* (i-20), 625, 50, 50)) vseat.setGeometry(QRect(280+ 60* (i-20), 625, 50, 50)) the seating chart
[]
[]
[ "As musicamante mentioned in the comment, connect a slot or that will do what ever you want:\nvseat.buttonToggled.connect(your_slot_function)\n\nAnd impelent the changes inside slot func.\n" ]
[ -1 ]
[ "pyqt5", "python", "qt" ]
stackoverflow_0074463401_pyqt5_python_qt.txt
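A minimal runnable sketch for the record above: connect the group's buttonToggled signal to a slot that recomputes the total from the currently checked buttons, so unchecking a seat deducts its price automatically. The flat SEAT_PRICE is a hypothetical stand-in for whatever per-seat pricing the real app uses:

import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QLabel,
                             QPushButton, QButtonGroup, QHBoxLayout)

SEAT_PRICE = 50  # hypothetical flat price per VIP seat

app = QApplication(sys.argv)
win = QWidget()
layout = QHBoxLayout(win)

label = QLabel('Total: 0')
layout.addWidget(label)

group = QButtonGroup(win)
group.setExclusive(False)  # allow several seats to be checked at once
for i in range(1, 6):
    seat = QPushButton(f'V-A-{i}', checkable=True)
    group.addButton(seat)
    layout.addWidget(seat)

def update_total(*_):
    # Recompute from scratch so unchecking a seat deducts its price.
    total = sum(SEAT_PRICE for b in group.buttons() if b.isChecked())
    label.setText(f'Total: {total}')

group.buttonToggled.connect(update_total)

win.show()
sys.exit(app.exec_())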
Q: Validate URL paths with a specified amount of parts only Just trying to work out the regex for this. Say I have the following list of URL paths /v1/users/ /v1/users/123abc/ /v1/users/123abc/456def/ /v1/users/123abc/456def/789ghi/ /v1/users/123abc/me/ /v1/users/123abc/me/456def/ where some parts are set, like v1 and users, and some parts are path parameters so they can be any values/characters, like 123abc and 456def. What regex pattern can I put in place for the path parameters so it matches against the right ones? For example, I tried to get /v1/users/123abc/456def/ using ^/v1/users/.*?/.*?/$. However, this regex matched with the following: /v1/users/123abc/456def/ /v1/users/123abc/456def/789ghi/ /v1/users/123abc/me/ /v1/users/123abc/me/456def/ I understand it may be impossible to not match with /v1/users/123abc/me/; however, I have a way around this if someone can find a solution which can get both /v1/users/123abc/456def/ and /v1/users/123abc/me/. Thanks in advance.
Validate URL paths with a specified amount of parts only
Just trying to work out the regex for this. Say I have the following list of URL paths /v1/users/ /v1/users/123abc/ /v1/users/123abc/456def/ /v1/users/123abc/456def/789ghi/ /v1/users/123abc/me/ /v1/users/123abc/me/456def/ where some parts are set, like v1 and users, and some parts are path parameters so they can be any values/characters, like 123abc and 456def. What regex pattern can I put in place for the path parameters so it matches against the right ones? For example, I tried to get /v1/users/123abc/456def/ using ^/v1/users/.*?/.*?/$. However, this regex matched with the following: /v1/users/123abc/456def/ /v1/users/123abc/456def/789ghi/ /v1/users/123abc/me/ /v1/users/123abc/me/456def/ I understand it may be impossible to not match with /v1/users/123abc/me/; however, I have a way around this if someone can find a solution which can get both /v1/users/123abc/456def/ and /v1/users/123abc/me/. Thanks in advance.
[ "You can replace both .*? with [^/]* or even [^/]+ (as subparts must contain at least one char) and use\n^/v1/users/[^/]+/[^/]+/$\n\nSee the regex demo.\nDetails:\n\n^ - start of string\n/v1/users/ - a literal string\n[^/]+ - one or more chars other than /\n/ - a / char\n[^/]+ - one or more chars other than /\n/ - a / char\n$ - end of string.\n\n" ]
[ 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074464874_python_regex.txt
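A quick demonstration of the accepted pattern against the full list from the question; since the pattern is anchored with ^ and $, re.match is enough:

import re

paths = [
    '/v1/users/',
    '/v1/users/123abc/',
    '/v1/users/123abc/456def/',
    '/v1/users/123abc/456def/789ghi/',
    '/v1/users/123abc/me/',
    '/v1/users/123abc/me/456def/',
]

pattern = re.compile(r'^/v1/users/[^/]+/[^/]+/$')
print([p for p in paths if pattern.match(p)])
# ['/v1/users/123abc/456def/', '/v1/users/123abc/me/']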
Q: Calculating the cumulative sum in a list starting at the next element and going through the list in Python Let's say I got a list like this: L = [600, 200, 100, 80, 20] What is the most efficient way to calculate the cumulative sum starting from the next element for every element in the list? The output should thus be: x_1 = 400 (200 + 100 + 80 + 20) x_2 = 200 (100 + 80 + 20) x_3 = 100 (80 + 20) x_4 = 20 (20) x_5 = 0 A: try this: l = [600, 200, 100, 80, 20] res = [sum(l[i:]) for i in range(1, len(l) + 1)] print(res) for your example the output should be [400, 200, 100, 20, 0] A: try using cumsum on the reversed column to get the sums of the remaining elements L = [600, 200, 100, 80, 20] df = pd.DataFrame(L, columns=['Value']) df['Remaining'] = df['Value'][::-1].cumsum()[::-1].shift(-1).fillna(0) print(df)
Calculating the cumulative sum in a list starting at the next element and going through the list in Python
Let's say I got a list like this: L = [600, 200, 100, 80, 20] What is the most efficient way to calculate the cumulative sum starting from the next element for every element in the list? The output should thus be: x_1 = 400 (200 + 100 + 80 + 20) x_2 = 200 (100 + 80 + 20) x_3 = 100 (80 + 20) x_4 = 20 (20) x_5 = 0
[ "try this:\n l = [600, 200, 100, 80, 20]\n res = [sum(l[i:]) for i in range(1, len(l) + 1)]\n print(res)\n\nfor your example the output should be [400, 200, 100, 20, 0]\n", "try using cumsum on the reversed column to get the sums of the remaining elements\nL = [600, 200, 100, 80, 20]\ndf = pd.DataFrame(L, columns=['Value'])\ndf['Remaining'] = df['Value'][::-1].cumsum()[::-1].shift(-1).fillna(0)\nprint(df)\n\n" ]
[ 1, 0 ]
[ "You can use the sum functio\nsum(L)-L[0]\n" ]
[ -2 ]
[ "list", "python", "sum" ]
stackoverflow_0074464013_list_python_sum.txt
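The list-comprehension answer in the record above is O(n^2) because each sum() rescans the remaining tail. A single pass with a running remainder gives the same result in O(n):

L = [600, 200, 100, 80, 20]

total = sum(L)
suffix = []
for v in L:
    total -= v
    suffix.append(total)

print(suffix)  # [400, 200, 100, 20, 0]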
Q: Can I iterate my context at view to put into my template? I've created this function at my views to iterate through my pages. for chapter in chapters: context["chapter_page"] = math.ceil((chapters.index(chapter) + 1) / 2) context["chapter"] = chapters return context I am still using a for loop in my template, so I cannot remove it. I added this context, but the only returned page is the last page, which means that my context["chapter_page"] is not iterating. {% for chapter in chapters %} <li> <a href="?page={{ chapter_page }}&#{{ chapter.number }}"> {{ chapter.number }} </a> </li> {% endfor %} Of course, I could not add this logic directly to my template; it is not accepted by Django. {% for chapter in chapters %} <li> <a href="?page={{ math.ceil((chapters.index(chapter) + 1) / 2) }}&#{{ chapter.number }}"> {{ chapter.number }} </a> </li> {% endfor %} I am expecting the loop to output each computed page number in my href page parameter. A: I'm guessing here, because you have omitted a bunch of details, but it looks like you are expecting one chapter_page value per chapter. That's not what you have implemented. You have exactly one global chapter_page value. If you want one per chapter, then that number needs to be part of the chapter object, not global: for idx,chapter in enumerate(chapters): chapter["chapter_page"] = math.ceil((idx + 1) / 2) context["chapter"] = chapters return context and {% for chapter in chapters %} <li> <a href="?page={{ chapter.chapter_page }}&#{{ chapter.number }}"> {{ chapter.number }} </a> </li> {% endfor %}
Can I iterate my context at view to put into my template?
I've created this function at my views to iterate through my pages. for chapter in chapters: context["chapter_page"] = math.ceil((chapters.index(chapter) + 1) / 2) context["chapter"] = chapters return context I am still using a for loop in my template, so I cannot remove it. I added this context, but the only returned page is the last page, which means that my context["chapter_page"] is not iterating. {% for chapter in chapters %} <li> <a href="?page={{ chapter_page }}&#{{ chapter.number }}"> {{ chapter.number }} </a> </li> {% endfor %} Of course, I could not add this logic directly to my template; it is not accepted by Django. {% for chapter in chapters %} <li> <a href="?page={{ math.ceil((chapters.index(chapter) + 1) / 2) }}&#{{ chapter.number }}"> {{ chapter.number }} </a> </li> {% endfor %} I am expecting the loop to output each computed page number in my href page parameter.
[ "I'm guessing here, because you have omitted a bunch of details, but it looks like you are expecting one chapter_page value per chapter. That's not what you have implemented. You have exactly one global chapter_page value.\nIf you want one per chapter, then that number needs to be part of the chapter object, not global:\n for idx,chapter in enumerate(chapters):\n chapter[\"chapter_page\"] = math.ceil((idx + 1) / 2)\n\n context[\"chapter\"] = chapters\n return context\n\nand\n{% for chapter in chapters %}\n <li>\n <a \n href=\"?page={{ chapter.chapter_page }}&#{{ chapter.number }}\">\n {{ chapter.number }}\n </a>\n </li>\n{% endfor %}\n\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074465103_django_python.txt
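A sketch of the same fix when chapters holds model instances rather than dicts: set an attribute instead of a dict key; Django templates resolve {{ chapter.chapter_page }} either way. The get_context_data hook and the Chapter queryset are assumptions standing in for the view code omitted from the question:

import math

def get_context_data(self, **kwargs):  # hypothetical class-based-view hook
    context = super().get_context_data(**kwargs)
    chapters = list(Chapter.objects.order_by('number'))  # assumed model/query
    for idx, chapter in enumerate(chapters):
        # Attribute assignment works on model instances; dict-style
        # assignment would raise TypeError here.
        chapter.chapter_page = math.ceil((idx + 1) / 2)
    context['chapters'] = chapters
    return context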
Q: Solve second order PDEs with `scipy.integrate`, with both initial and final position known I'm confused reading scipy.integrate.solve_ivp documentation. I'm interested in a ballistic problem with drag and Magnus effect, but I'm focussing first on the simpler problem, considering only gravitational force. The corresponding equation is d²r/dt² = (0, 0, g). Transforming to a first-order system, we can write dX/dt = (vx, vy, vz, 0, 0, g) with X = (x, y, z, vx, vy, vz). I have the initial and final 3D positions and times of the ball, (p0, T0) and (p1, T1), but I don't understand how to provide this information to the solver. It expects y0 which, in my notation, is X(T0), but I don't know the velocity at T0. (note: I know I can infer it from the second degree solution, but I don't want to since the solution will get very much more complex once I integrate the other forces). How should the problem be transformed to add the other condition, on the position at T1? note: I also looked at the documentation of solve_bvp, but from my understanding, it doesn't fit the problem I try to solve… A: The function scipy.integrate.solve_bvp is indeed appropriate, with p0 and p1: (x(t), y(t), z(t)) at t=T0, T1. For posterity, here are the two functions required by solve_bvp: def bc(X0, X1, args=None): x0, y0, z0, vx0, vy0, vz0 = X0 x1, y1, z1, vx1, vy1, vz1 = X1 return np.array([ x0 - p0.x, y0 - p0.y, z0 - p0.z, x1 - p1.x, y1 - p1.y, z1 - p1.z, ]) def pde(t, X, args=None): x, y, z, vx, vy, vz = X dXdt = np.vstack([ vx, vy, vz, np.ones_like(t)*0, np.ones_like(t)*0, np.ones_like(t)*g, ]) return dXdt
Solve second order PDEs with `scipy.integrate`, with both initial and final position known
I'm confused reading scipy.integrate.solve_ivp documentation. I'm interested in a ballistic problem with drag and Magnus effect, but I'm focussing first on the simpler problem, considering only gravitational force. The corresponding PDE is Transforming to a first order PDE, we can write I have the initial and final 3D positions and time of the ball for , but I don't understand how to provide this information to the solver. It expects y0 which, in my notations, is , but I don't know the velocity at . (note: I know I can infer it from the second degree solution, but I don't want to since the solution will get very much more complex once I integrate the other forces). How should the problem be transformed to add the other initial condition on the position ? note: I also looked at the documentation of solve_bvp, but from my understanding, it doesn't fit the problem I try to solve…
[ "The function scipy.integrate.solve_bvp is indeed appropriate, with p0 and p1: (x(t), y(t), y(t)) at t=T0, T1. For posterity, here are the two functions required by solve_bvp:\ndef bc(X0, X1, args=None):\n x0, y0, z0, vx0, vy0, vz0 = X0\n x1, y1, z1, vx1, vy1, vz1 = X1\n\n return np.array([\n x0 - p0.x,\n y0 - p0.y,\n z0 - p0.z,\n x1 - p1.x,\n y1 - p1.y,\n z1 - p1.z,\n ])\n\ndef pde(t, X, args=None):\n x, y, z, vx, vy, vz = X\n dXdt = np.vstack([\n vx,\n vy,\n vz,\n np.ones_like(t)*0,\n np.ones_like(t)*0,\n np.ones_like(t)*g,\n ])\n return dXdt\n\n" ]
[ "The function scipy.integrate.solve_bvp is indeed appropriate, with p0 and p1: (x(t), y(t), z(t)) at t=T0, T1. For posterity, here are the two functions required by solve_bvp:\ndef bc(X0, X1, args=None):\n x0, y0, z0, vx0, vy0, vz0 = X0\n x1, y1, z1, vx1, vy1, vz1 = X1\n\n return np.array([\n x0 - p0.x,\n y0 - p0.y,\n z0 - p0.z,\n x1 - p1.x,\n y1 - p1.y,\n z1 - p1.z,\n ])\n\ndef pde(t, X, args=None):\n x, y, z, vx, vy, vz = X\n dXdt = np.vstack([\n vx,\n vy,\n vz,\n np.ones_like(t)*0,\n np.ones_like(t)*0,\n np.ones_like(t)*g,\n ])\n return dXdt\n\n" ]
[]
[]
[ "differential_equations", "odeint", "pde", "python", "scipy" ]
stackoverflow_0074444504_differential_equations_odeint_pde_python_scipy.txt
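A self-contained sketch of the answer above with hypothetical boundary data; solve_bvp recovers the unknown initial velocity as part of the solution:

import numpy as np
from scipy.integrate import solve_bvp

g = -9.81
T0, T1 = 0.0, 2.0
p0 = np.array([0.0, 0.0, 0.0])   # known position at T0
p1 = np.array([10.0, 5.0, 0.0])  # known position at T1

def pde(t, X):
    x, y, z, vx, vy, vz = X
    return np.vstack([vx, vy, vz,
                      np.zeros_like(t), np.zeros_like(t),
                      np.full_like(t, g)])

def bc(X0, X1):
    return np.array([X0[0] - p0[0], X0[1] - p0[1], X0[2] - p0[2],
                     X1[0] - p1[0], X1[1] - p1[1], X1[2] - p1[2]])

t = np.linspace(T0, T1, 20)
X_guess = np.zeros((6, t.size))
sol = solve_bvp(pde, bc, t, X_guess)
print(sol.status)    # 0 means the solver converged
print(sol.y[3:, 0])  # recovered velocity at T0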
Q: How to get the mean of each image in a batch? I have a batch of images thus the shape [None, 256, 256, 3] (the batch is set to none for practical purposes on use). I am trying to implement a layer that calculates the average of each of the images or frames in the batch, resulting in the shape [None, 1] or [None, 1, 1, 1]. I have looked at tf.keras.layers.Average, but apparently it calculates across the batch, returning a tensor of the same shape. With that in mind, I tried implementing the following custom layer: class ElementMean(tf.keras.layers.Layer): def __init__(self, **kwargs): super(ElementMean, self).__init__(**kwargs) def call(self, inputs): tensors = [] for ii in range(inputs.shape[0] if inputs.shape[0] is not None else 1): tensors.append(inputs[ii, ...]) return tf.keras.layers.Average()(tensors) but when it is used: import tensorflow as tf x = tf.keras.Input([256, 256, 3], None) y = ElementMean()(x) model = tf.keras.Model(inputs=x, outputs=y) model.compile() model.summary() tf.keras.utils.plot_model( model, show_shapes=True, show_dtype=True, show_layer_activations=True, show_layer_names=True ) I get the result: Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 256, 256, 3)] 0 element_mean (ElementMean) (256, 256, 3) 0 ================================================================= Total params: 0 Trainable params: 0 Non-trainable params: 0 _________________________________________________________________ Which makes it entirely wrong. I also tried this change on the call: def call(self, inputs): tensors = [] for ii in range(inputs.shape[0] if inputs.shape[0] is not None else 1): tensors.append(tf.reduce_mean(inputs[ii, ...])) return tf.convert_to_tensor(tensors) Which in turn results in: Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 256, 256, 3)] 0 element_mean (ElementMean) (1,) 0 ================================================================= Total params: 0 Trainable params: 0 Non-trainable params: 0 _________________________________________________________________ Which is also wrong. A: You can play around with the axes like this: import tensorflow as tf class ElementMean(tf.keras.layers.Layer): def __init__(self, **kwargs): super(ElementMean, self).__init__(**kwargs) def call(self, inputs): return tf.reduce_mean(inputs, axis=(1, 2, 3), keepdims=True) x = tf.keras.layers.Input([256, 256, 3], None) em = ElementMean() y = em(x) model = tf.keras.Model(x, y) model.summary() Model: "model_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 256, 256, 3)] 0 element_mean_1 (ElementMean (None, 1, 1, 1) 0 ) ================================================================= Total params: 0 Trainable params: 0 Non-trainable params: 0 _________________________________________________________________
How to get the mean of each image in a batch?
I have a batch of images thus the shape [None, 256, 256, 3] (the batch is set to none for practical purposes on use). I am trying to implement a layer that calculates the average of each of the images or frames in the batch, resulting in the shape [None, 1] or [None, 1, 1, 1]. I have looked at tf.keras.layers.Average, but apparently it calculates across the batch, returning a tensor of the same shape. With that in mind, I tried implementing the following custom layer: class ElementMean(tf.keras.layers.Layer): def __init__(self, **kwargs): super(ElementMean, self).__init__(**kwargs) def call(self, inputs): tensors = [] for ii in range(inputs.shape[0] if inputs.shape[0] is not None else 1): tensors.append(inputs[ii, ...]) return tf.keras.layers.Average()(tensors) but when it is used: import tensorflow as tf x = tf.keras.Input([256, 256, 3], None) y = ElementMean()(x) model = tf.keras.Model(inputs=x, outputs=y) model.compile() model.summary() tf.keras.utils.plot_model( model, show_shapes=True, show_dtype=True, show_layer_activations=True, show_layer_names=True ) I get the result: Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 256, 256, 3)] 0 element_mean (ElementMean) (256, 256, 3) 0 ================================================================= Total params: 0 Trainable params: 0 Non-trainable params: 0 _________________________________________________________________ Which makes it entirely wrong. I also tried this change on the call: def call(self, inputs): tensors = [] for ii in range(inputs.shape[0] if inputs.shape[0] is not None else 1): tensors.append(tf.reduce_mean(inputs[ii, ...])) return tf.convert_to_tensor(tensors) Which in turn results in: Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 256, 256, 3)] 0 element_mean (ElementMean) (1,) 0 ================================================================= Total params: 0 Trainable params: 0 Non-trainable params: 0 _________________________________________________________________ Which is also wrong.
[ "You can play around with the axes like this:\nimport tensorflow as tf\n\nclass ElementMean(tf.keras.layers.Layer):\n def __init__(self, **kwargs):\n super(ElementMean, self).__init__(**kwargs)\n \n def call(self, inputs):\n return tf.reduce_mean(inputs, axis=(1, 2, 3), keepdims=True)\n\nx = tf.keras.layers.Input([256, 256, 3], None)\nem = ElementMean()\ny = em(x)\nmodel = tf.keras.Model(x, y)\nmodel.summary()\n\nModel: \"model_1\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_1 (InputLayer) [(None, 256, 256, 3)] 0 \n \n element_mean_1 (ElementMean (None, 1, 1, 1) 0 \n ) \n \n=================================================================\nTotal params: 0\nTrainable params: 0\nNon-trainable params: 0\n_________________________________________________________________\n\n\n" ]
[ 1 ]
[ "there is another way with segment means that allowed you to segment by heights, widths, and channels by remain its properties.\n\nSample: Width x Height x Channels, mean of each channel represent its data as mean value and you may summarize them later.\n\nimport os\nfrom os.path import exists\n\nimport tensorflow as tf\nimport tensorflow_io as tfio\n\nimport matplotlib.pyplot as plt\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Variables\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nPATH = os.path.join('F:\\\\datasets\\\\downloads\\\\Actors\\\\train\\\\Pikaploy', '*.tif')\nfiles = tf.data.Dataset.list_files(PATH)\nlist_file = []\n\nfor file in files.take(1):\n image = tf.io.read_file( file )\n image = tfio.experimental.image.decode_tiff(image, index=0)\n image = tf.image.resize(image, [28,32], method='nearest')\n list_file.append( image )\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Class / Definitions\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" \nclass MyDenseLayer(tf.keras.layers.Layer):\n def __init__(self, num_outputs):\n super(MyDenseLayer, self).__init__()\n self.num_outputs = num_outputs\n \n def build(self, input_shape):\n self.kernel = self.add_weight(\"kernel\",\n shape=[int(input_shape[-1]),\n self.num_outputs])\n\n def call(self, inputs):\n \n temp = tf.transpose( tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(28, 32, 4), dtype=tf.int64) )\n temp = tf.transpose( temp ) \n mean = tf.constant( tf.math.segment_mean( temp, tf.ones([28], dtype=tf.int64)).numpy() )\n \n temp = tf.image.rot90(temp)\n mean = tf.constant( tf.math.segment_mean( tf.constant(mean[1::], shape=(32, 4)), tf.ones([32], dtype=tf.int64)).numpy() )\n\n return mean[1::]\n\nlayer = MyDenseLayer(10)\nsample = tf.transpose( tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(28, 32, 4), dtype=tf.int64) )\ndata = layer(sample)\n\nprint( data )\n\n\nOutput: Rx Gx Bx Yx\n\ntf.Tensor([[161 166 171 255]], shape=(1, 4), dtype=int64)\n\n" ]
[ -2 ]
[ "deep_learning", "keras", "machine_learning", "python", "tensorflow" ]
stackoverflow_0074460032_deep_learning_keras_machine_learning_python_tensorflow.txt
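A quick numeric check of the accepted answer above against plain NumPy, confirming that reducing over axes (1, 2, 3) yields one mean per image:

import numpy as np
import tensorflow as tf

batch = np.random.rand(4, 256, 256, 3).astype('float32')
out = tf.reduce_mean(batch, axis=(1, 2, 3), keepdims=True)

print(out.shape)  # (4, 1, 1, 1)
print(np.allclose(out.numpy().ravel(),
                  batch.reshape(4, -1).mean(axis=1)))  # True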
Q: Selenium Edge Python errors auto close Edge browser after test execution I am trying to test selenium for a solution to auto log into a website but I can't even get Selenium to stay open. It does what it is supposed to do right now and then quits immediately without a driver.quit(). I get the following errors and I wish to understand what they mean: DevTools listening on ws://127.0.0.1:51111/devtools/browser/111111fe-423z-111zz-1116-r0z2300086f7 [3420:22152:1110/151643.950:ERROR:edge_auth_errors.cc(387)] EDGE_IDENTITY: Get Default OS Account failed: Error: Primary Error: kImplicitSignInFailure, Secondary Error: kAccountProviderFetchError, Platform error: 0, Error string: [3420:22152:1110/151644.757:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782. [3420:22152:1110/151647.899:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782. Yahoo | Mail, Weather, Search, Politics, News, Finance, Sports & Videos https://www.yahoo.com/ This is my code: from selenium import webdriver from selenium.webdriver.edge.service import Service ser = Service("C:\\Users\\Desktop\\Projects\\auto_login\\msedgedriver.exe") driver = webdriver.Edge(service = ser) driver.get("http://yahoo.com") print(driver.title) print(driver.current_url) A: The errors you are seeing: [3420:22152:1110/151643.950:ERROR:edge_auth_errors.cc(387)] EDGE_IDENTITY: Get Default OS Account failed: Error: Primary Error: kImplicitSignInFailure, Secondary Error: kAccountProviderFetchError, Platform error: 0, Error string: [3420:22152:1110/151644.757:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782. [3420:22152:1110/151647.899:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782. are the result of a generic bug in how Chrome's spawned child processes interact with the Task Manager, which you can ignore as of now. For details check Issue 739782: [Task Manager] [Meta bug ☂️] Processes not shown in Task Manager. Additionally, some specific Python frameworks tend to close the browser automatically when all the lines of the program are executed successfully, e.g. Python-Unittest, and this has no relation with the errors explained above. A: I found the following code would disable the "DevTools listening on ..." errors (warnings?). from selenium import webdriver from selenium.webdriver.edge.options import Options as EdgeOptions options = EdgeOptions() options.add_experimental_option('excludeSwitches', ['enable-logging']) driver = webdriver.Edge(options=options) I am using Selenium 4.5.0 and Edge driver Edge version 105. It is basically the same code used for disabling these types of messages on Chrome drivers. A: I am using Selenium 4.6 with User-agent random from selenium import webdriver from selenium.webdriver.edge import service from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from fake_useragent import UserAgent import os os.system("cls") def User(): ua = UserAgent() return ua.random edgeOption = webdriver.EdgeOptions() edgeOption.add_argument(f'user-agent={User()}') edgeOption.add_argument("start-maximized") s=service.Service(r'msedgedriver.exe') driver = webdriver.Edge(service=s, options=edgeOption) driver.get("http://whatsmyuseragent.org/") WebDriverWait(driver, 10) user = driver.find_element(By.XPATH, "//div[@class='intro-body']/div[@class='container']/div[@class='row']/div[@class='col-md-8 col-md-offset-2'][1]/div[@class='user-agent']/p[@class='intro-text']").text ip = driver.find_element(By.XPATH, "//div[@class='intro-body']/div[@class='container']/div[@class='row']/div[@class='col-md-8 col-md-offset-2'][2]/div[@class='ip-address']/p[@class='intro-text']").text print(user) print(ip)
Selenium Edge Python errors auto close Edge browser after test execution
I am trying to test selenium for a solution to auto log into a website but I can't even get Selenium to stay open. It does what it is supposed to do right now and then quits immediately without a driver.quit(). I get the following errors and I wish to understand what they mean: DevTools listening on ws://127.0.0.1:51111/devtools/browser/111111fe-423z-111zz-1116-r0z2300086f7 [3420:22152:1110/151643.950:ERROR:edge_auth_errors.cc(387)] EDGE_IDENTITY: Get Default OS Account failed: Error: Primary Error: kImplicitSignInFailure, Secondary Error: kAccountProviderFetchError, Platform error: 0, Error string: [3420:22152:1110/151644.757:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782. [3420:22152:1110/151647.899:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782. Yahoo | Mail, Weather, Search, Politics, News, Finance, Sports & Videos https://www.yahoo.com/ This is my code: from selenium import webdriver from selenium.webdriver.edge.service import Service ser = Service("C:\\Users\\Desktop\\Projects\\auto_login\\msedgedriver.exe") driver = webdriver.Edge(service = ser) driver.get("http://yahoo.com") print(driver.title) print(driver.current_url)
[ "The errors you are seeing:\n[3420:22152:1110/151643.950:ERROR:edge_auth_errors.cc(387)] EDGE_IDENTITY: Get Default OS Account failed: Error: Primary Error: kImplicitSignInFailure, Secondary Error: kAccountProviderFetchError, Platform error: 0, Error string: \n\n[3420:22152:1110/151644.757:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782.\n[3420:22152:1110/151647.899:ERROR:fallback_task_provider.cc(119)] Every renderer should have at least one task provided by a primary task provider. If a fallback task is shown, it is a bug. Please file a new bug and tag it as a dependency of crbug.com/739782.\n\nare the result of a generic bug in how Chrome's spawned child processes interact with the Task Manager, which you can ignore as of now. For details check Issue 739782: [Task Manager] [Meta bug ☂️] Processes not shown in Task Manager.\nAdditionally, some specific Python frameworks tend to close the browser automatically when all the lines of the program are executed successfully, e.g. Python-Unittest, and this has no relation with the errors explained above.\n", "I found the following code would disable the \"DevTools listening on ...\" errors (warnings?).\nfrom selenium import webdriver\nfrom selenium.webdriver.edge.options import Options as EdgeOptions\n\noptions = EdgeOptions()\noptions.add_experimental_option('excludeSwitches', ['enable-logging'])\ndriver = webdriver.Edge(options=options)\n\nI am using Selenium 4.5.0 and Edge driver Edge version 105.\nIt is basically the same code used for disabling these types of messages on Chrome drivers.\n", "I am using Selenium 4.6 with User-agent random\nfrom selenium import webdriver\nfrom selenium.webdriver.edge import service\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom fake_useragent import UserAgent\nimport os\nos.system(\"cls\")\n\n\n\ndef User():\n \n ua = UserAgent()\n return ua.random\n\n\nedgeOption = webdriver.EdgeOptions()\nedgeOption.add_argument(f'user-agent={User()}')\nedgeOption.add_argument(\"start-maximized\")\ns=service.Service(r'msedgedriver.exe')\ndriver = webdriver.Edge(service=s, options=edgeOption)\ndriver.get(\"http://whatsmyuseragent.org/\")\nWebDriverWait(driver, 10)\n\nuser = driver.find_element(By.XPATH, \"//div[@class='intro-body']/div[@class='container']/div[@class='row']/div[@class='col-md-8 col-md-offset-2'][1]/div[@class='user-agent']/p[@class='intro-text']\").text\nip = driver.find_element(By.XPATH, \"//div[@class='intro-body']/div[@class='container']/div[@class='row']/div[@class='col-md-8 col-md-offset-2'][2]/div[@class='ip-address']/p[@class='intro-text']\").text\n\nprint(user)\nprint(ip)\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "microsoft_edge", "python", "selenium", "selenium_edgedriver", "selenium_webdriver" ]
stackoverflow_0069919930_microsoft_edge_python_selenium_selenium_edgedriver_selenium_webdriver.txt
Q: "Failed - Download error" while download a file using Selenium Python I was trying to download a file using selenium but getting "Failed - Download error". I tried to disable the safe browsing but it didn't work. I have attached the screenshot and code as well. logs: DevTools listening on ws://127.0.0.1:53738/devtools/browser/d75dfd5b-1e3e-45c5-8edd-adf77dd9adb1 [2572:2724:0717/104626.877:ERROR:device_event_log_impl.cc(208)] [10:46:26.877] Bluetooth: bluetooth_adapter_winrt.cc:1074 Getting Default Adapter failed. from selenium import webdriver from selenium.webdriver.support.ui import Select from selenium import webdriver from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.keys import Keys import time import csv from selenium.webdriver.chrome.options import Options link_list = [ "stewartwatson.co.uk", "peterkins.com", "gavin-bain.co.uk", "martinco.com", "tmmsolicitors.co.uk", "corecitilets.co.uk", "coxandco.co", "dunechtestates.co.uk", "bidwells.co.uk", "kwad.co.uk", ] options = webdriver.ChromeOptions() options.add_experimental_option("prefs", { "download.default_directory": r"C:\\Users\\Awais\\projects\\selenium\\web_email_extractor\\csv", "download.prompt_for_download": False, "download.directory_upgrade": True, "safebrowsing.enabled": False, "safebrowsing.ebabled": "false" }) driver = webdriver.Chrome(chrome_options=options) driver.get("https://www.webemailextractor.com") try: driver.find_element_by_xpath('//button[contains(text(),"Close")]').click() except: pass for i in link_list[0:5]: text_area = driver.find_element_by_xpath('//textarea[@placeholder="Enter domain/websites list"]') text_area.send_keys(i) text_area.send_keys(Keys.ENTER) submit = driver.find_element_by_xpath('//input[@value="Extract Email"]').click() try: btn = WebDriverWait(driver, 50).until(EC.element_to_be_clickable((By.XPATH, "//span[contains(text(), ' Process Completed')]"))) time.sleep(3) csv_download = driver.find_element_by_xpath('//button[@class="dt-button buttons-csv buttons-html5"]').click() except Exception as e: print(e) A: It worked when I removed the "r" from the path "download.default_directory": r"C:\\Users\\Awais\\projects\\selenium\\web_email_extractor\\csv", to: "download.default_directory": "C:\\Users\\Awais\\projects\\selenium\\web_email_extractor\\csv",
"Failed - Download error" while download a file using Selenium Python
I was trying to download a file using selenium but getting "Failed - Download error". I tried to disable the safe browsing but it didn't work. I have attached the screenshot and code as well. logs: DevTools listening on ws://127.0.0.1:53738/devtools/browser/d75dfd5b-1e3e-45c5-8edd-adf77dd9adb1 [2572:2724:0717/104626.877:ERROR:device_event_log_impl.cc(208)] [10:46:26.877] Bluetooth: bluetooth_adapter_winrt.cc:1074 Getting Default Adapter failed. from selenium import webdriver from selenium.webdriver.support.ui import Select from selenium import webdriver from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.keys import Keys import time import csv from selenium.webdriver.chrome.options import Options link_list = [ "stewartwatson.co.uk", "peterkins.com", "gavin-bain.co.uk", "martinco.com", "tmmsolicitors.co.uk", "corecitilets.co.uk", "coxandco.co", "dunechtestates.co.uk", "bidwells.co.uk", "kwad.co.uk", ] options = webdriver.ChromeOptions() options.add_experimental_option("prefs", { "download.default_directory": r"C:\\Users\\Awais\\projects\\selenium\\web_email_extractor\\csv", "download.prompt_for_download": False, "download.directory_upgrade": True, "safebrowsing.enabled": False, "safebrowsing.ebabled": "false" }) driver = webdriver.Chrome(chrome_options=options) driver.get("https://www.webemailextractor.com") try: driver.find_element_by_xpath('//button[contains(text(),"Close")]').click() except: pass for i in link_list[0:5]: text_area = driver.find_element_by_xpath('//textarea[@placeholder="Enter domain/websites list"]') text_area.send_keys(i) text_area.send_keys(Keys.ENTER) submit = driver.find_element_by_xpath('//input[@value="Extract Email"]').click() try: btn = WebDriverWait(driver, 50).until(EC.element_to_be_clickable((By.XPATH, "//span[contains(text(), ' Process Completed')]"))) time.sleep(3) csv_download = driver.find_element_by_xpath('//button[@class="dt-button buttons-csv buttons-html5"]').click() except Exception as e: print(e)
[ "It worked when I removed the \"r\" from the path\n\"download.default_directory\": r\"C:\\\\Users\\\\Awais\\\\projects\\\\selenium\\\\web_email_extractor\\\\csv\",\n\nto:\n\"download.default_directory\": \"C:\\\\Users\\\\Awais\\\\projects\\\\selenium\\\\web_email_extractor\\\\csv\",\n\n" ]
[ 0 ]
[ "When we configure Selenium's options, whatever we write as the path is what gets used. In the screenshots referenced in the original post there is a well-configured path and a bad one; note the backslashes, that is the difference.\n" ]
[ -1 ]
[ "python", "selenium", "selenium_chromedriver", "selenium_webdriver" ]
stackoverflow_0062947965_python_selenium_selenium_chromedriver_selenium_webdriver.txt
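Why removing the r prefix worked in the record above: r"C:\\Users\\..." keeps both backslashes literally, so Chrome receives a path with doubled separators and rejects the download directory; without the prefix each "\\" collapses to a single backslash. A small sketch building the string with pathlib, which sidesteps the escaping entirely:

from pathlib import Path

download_dir = Path(r'C:\Users\Awais\projects\selenium\web_email_extractor\csv')

prefs = {
    'download.default_directory': str(download_dir),  # plain single-backslash path
    'download.prompt_for_download': False,
    'download.directory_upgrade': True,
    'safebrowsing.enabled': False,
}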
Q: How to insert an item into a Queryset which is sorted by a numeric field and increment the value of the field of all subsequent items Let's say there is a Django model called TaskModel which has a field priority and we want to insert a new element and increment the existing element which has already the priority and increment also the priority of the following elements. priority is just a numeric field without any special flags like unique or primary/foreign key queryset = models.TaskModel.objects.filter().order_by('priority') Can this be done in a smart way with some methods on the Queryset itself? A: I believe you can do this by using Django's F expressions and overriding the model's save method. I guess you could instead override the model's __init__ method as in this answer, but I think using the save method is best. class TaskModel(models.Model): task = models.CharField(max_length=20) priority = models.IntegerField() # Override the save method so whenever a new TaskModel object is # added, this will be run. def save(self, *args, **kwargs): # First get all TaskModels with priority greater than, or # equal to the priority of the new task you are adding queryset = TaskModel.objects.filter(priority__gte=self.priority) # Use update with the F expression to increase the priorities # of all the tasks above the one you're adding queryset.update(priority=F('priority') + 1) # Finally, call the super method to call the model's # actual save() method super(TaskModel, self).save(*args, **kwargs) def __str__(self): return self.task Keep in mind that this can create gaps in the priorities. For example, what if you create a task with priority 5, then delete it, then add another task with priority 5? I think the only way to handle that would be to loop through the queryset, perhaps with a function like the one below, in your view, and call it whenever a new task is created, or its priority modified: # tasks would be the queryset of all tasks, i.e., TaskModel.objects.all() def reorder_tasks(tasks): for i, task in enumerate(tasks): task.priority = i + 1 task.save() This method is not nearly as efficient, but it will not create the gaps. For this method, you would not change the TaskModel at all. Or perhaps you can also override the delete method of the TaskModel as well, as shown in this answer, but I haven't had a chance to test this yet. EDIT Short Version I don't know how to delete objects using a similar method to saving while still preventing priorities from having gaps. I would just use a loop as I have shown above. Long version I knew there was something different about deleting objects like this: def delete(self, *args, **kwargs): queryset = TaskModel.objects.filter(priority__gt=self.priority) queryset.update(priority=F('priority') - 1) super(TaskModel, self).delete(*args, **kwargs) This will work, in some situations. According to the docs on delete(): Keep in mind that this [calling delete()] will, whenever possible, be executed purely in SQL, and so the delete() methods of individual object instances will not necessarily be called during the process. If you’ve provided a custom delete() method on a model class and want to ensure that it is called, you will need to “manually” delete instances of that model (e.g., by iterating over a QuerySet and calling delete() on each object individually) rather than using the bulk delete() method of a QuerySet. So if you delete() a TaskModel object using the admin panel, the custom delete written above will never even get called, and while it should work if deleting an instance, for example in your view, since it will try acting directly on the database, it will not show up in Python until you refresh the query: tasks = TaskModel.objects.order_by('priority') for t in tasks: print(t.task, t.priority) tr = TaskModel.objects.get(task='three') tr.delete() # Here I need to call this AGAIN tasks = TaskModel.objects.order_by('priority') # BEFORE calling this for t in tasks: print(t.task, t.priority) # to see the effect If you still want to do it, I again refer to this answer to see how to handle it.
How to insert an item into a Queryset which is sorted by a numeric field and increment the value of the field of all subsequent items
Let's say there is a Django model called TaskModel which has a field priority and we want to insert a new element and increment the existing element which has already the priority and increment also the priority of the following elements. priority is just a numeric field without any special flags like unique or primary/foreign key queryset = models.TaskModel.objects.filter().order_by('priority') Can this be done in a smart way with some methods on the Queryset itself?
[ "I believe you can do this by using Django's F expressions and overriding the model's save method. I guess you could instead override the model's __init__ method as in this answer, but I think using the save method is best.\nclass TaskModel(models.Model):\n task = models.CharField(max_length=20)\n priority = models.IntegerField()\n \n # Override the save method so whenever a new TaskModel object is\n # added, this will be run.\n def save(self, *args, **kwargs):\n \n # First get all TaskModels with priority greater than, or\n # equal to the priority of the new task you are adding\n queryset = TaskModel.objects.filter(priority__gte=self.priority)\n\n # Use update with the F expression to increase the priorities\n # of all the tasks above the one you're adding\n queryset.update(priority=F('priority') + 1)\n\n # Finally, call the super method to call the model's\n # actual save() method\n super(TaskModel, self).save(*args, **kwargs)\n\n def __str__(self):\n return self.task\n\nKeep in mind that this can create gaps in the priorities. For example, what if you create a task with priority 5, then delete it, then add another task with priority 5? I think the only way to handle that would be to loop through the queryset, perhaps with a function like the one below, in your view, and call it whenever a new task is created, or its priority modified:\n# tasks would be the queryset of all tasks, i.e., TaskModel.objects.all()\ndef reorder_tasks(tasks):\n for i, task in enumerate(tasks):\n task.priority = i + 1\n task.save()\n\nThis method is not nearly as efficient, but it will not create the gaps.\nOr perhaps you can also override the delete method of the TaskModel as well, as shown in this answer, but I haven't had a chance to test this yet.\n\nEDIT\nShort Version\nI don't know how to delete objects using a similar method to saving while still preventing priorities from having gaps. I would just use a loop as I have shown above.\nLong version\nI knew there was something different about deleting objects like this:\ndef delete(self, *args, **kwargs):\n queryset = TaskModel.objects.filter(priority__gt=self.priority)\n queryset.update(priority=F('priority') - 1)\n super(TaskModel, self).delete(*args, **kwargs)\n\nThis will work, in some situations.\nAccording to the docs on delete():\n\nKeep in mind that this [calling delete()] will, whenever possible, be executed purely in\nSQL, and so the delete() methods of individual object instances will\nnot necessarily be called during the process. \nIf you’ve provided a\ncustom delete() method on a model class and want to ensure that it is\ncalled, you will need to “manually” delete instances of that model\n(e.g., by iterating over a QuerySet and calling delete() on each\nobject individually) rather than using the bulk delete() method of a\nQuerySet.\n\nSo if you delete() a TaskModel object using the admin panel, the custom delete written above will never even get called, and while it should work if deleting an instance, for example in your view, since it will try acting directly on the database, it will not show up in Python until you refresh the query:\ntasks = TaskModel.objects.order_by('priority')\n \nfor t in tasks:\n print(t.task, t.priority)\n\ntr = TaskModel.objects.get(task='three')\ntr.delete()\n\n# Here I need to call this AGAIN\ntasks = TaskModel.objects.order_by('priority')\n\n# BEFORE calling this\nfor t in tasks:\n print(t.task, t.priority)\n \n# to see the effect\n\nIf you still want to do it, I again refer to this answer to see how to handle it.\n" ]
[ 2 ]
[]
[]
[ "django", "python", "sql" ]
stackoverflow_0074463989_django_python_sql.txt
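A minimal sketch of the same shift-and-insert done as an explicit helper instead of overriding save(); TaskModel is assumed from the question above, and insert_task is a hypothetical name. Wrapping both steps in a transaction keeps a failure from leaving the priorities half-shifted:

from django.db import transaction
from django.db.models import F

def insert_task(task_name, priority):
    # TaskModel is the model defined in the question above.
    # Shift every task at or above the target priority up by one,
    # then create the new task in the gap that opens up.
    with transaction.atomic():
        TaskModel.objects.filter(priority__gte=priority).update(priority=F('priority') + 1)
        return TaskModel.objects.create(task=task_name, priority=priority)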
Q: Python type conversion using variable value mytype = "int" myvalue = "35" my_int_val = mytype(myvalue) This throws up - TypeError: 'str' object is not callable I can't seem to remember the way to do so. Any ideas? Please note that I have to use "str", "int" instead of str or int (without quotes), because I am getting this value from somewhere else where it's being passed on as a string. A: If you want to use any builtin function dynamically to convert the data you can fetch it from __builtins__. mytype = "str" myvalue = "34" func = getattr(__builtins__, mytype) print(func(myvalue)) print(type(func(myvalue))) This will give you 34 <class 'str'> If you use mytype = "float" you'll get 34.0 <class 'float'> A: my_int_val = int(myvalue) my_str_val = str(myvalue) is the cleanest way to do this. But for some reason if you want the type to be stored in a string and call it, you can use eval: t = "int" my_int_val = eval(f"{t}({myvalue})") A: mytype is a variable not a datatype int_val = int(myvalue) str_val = str(myvalue) A: my try: mytype = "int" myvalue = "35" #my_int_val = myvalue.type(mytype) my_int_val = eval(mytype)(myvalue) print(my_int_val, type(my_int_val)) mytype = "str" myvalue = 35 print(myvalue, type(myvalue)) my_int_val = eval(mytype)(myvalue) print(my_int_val, type(my_int_val)) output: 35 <class 'int'> 35 <class 'int'> 35 <class 'str'> but need to confess I copied from here : Convert string to Python class object? oops didnt notice answer above, in any case : Warning: eval() can be used to execute arbitrary Python code. You should never use eval() with untrusted strings. (Security of Python's eval() on untrusted strings?) A: Instead of mytype = "str" just do mytype = str and it will work (as it sets mytype to the builtin function str). Example with a function: def cast(value, totype): return totype(value) mystr = cast(35, str) print(mystr, type(mystr)) myfloat = cast("35", float) print(myfloat, type(myfloat)) Output: 35 <class 'str'> 35.0 <class 'float'>
Python type conversion using variable value
mytype = "int" myvalue = "35" my_int_val = mytype(myvalue) This throws up - TypeError: 'str' object is not callable I can't seem to remember the way to do so. Any ideas? Please note that I have to use "str", "int" instead of str or int (without quotes), because I am getting this value from somewhere else where it's being passed on as a string.
[ "If you want to use any builtin function dynamically to convert the data you can fetch it from __builtins__.\nmytype = \"str\"\nmyvalue = \"34\"\n\nfunc = getattr(__builtins__, mytype)\n\nprint(func(myvalue))\nprint(type(func(myvalue)))\n\nThis will give you\n34\n<class 'str'>\n\nIf you use mytype = \"float\" you'll get\n34.0\n<class 'float'>\n\n", "my_int_val = int(myvalue)\nmy_str_val = str(myvalue)\n\nis the cleanest way to do this. But for some reason if you want the type to be stored in a string and call it, you can use eval:\nt = \"int\"\nmy_int_val = eval(f\"{t}({myvalue})\")\n\n", "mytype is a variable not a datatype\nint_val = int(myvalue)\nstr_val = str(myvalue)\n\n", "my try:\nmytype = \"int\"\nmyvalue = \"35\"\n#my_int_val = myvalue.type(mytype)\n\n\nmy_int_val = eval(mytype)(myvalue) \n\n\nprint(my_int_val, type(my_int_val))\n\nmytype = \"str\"\nmyvalue = 35\n\nprint(myvalue, type(myvalue))\n\nmy_int_val = eval(mytype)(myvalue)\n\nprint(my_int_val, type(my_int_val))\n\n\noutput:\n35 <class 'int'>\n35 <class 'int'>\n35 <class 'str'>\n\nbut need to confess I copied from here : Convert string to Python class object?\noops didnt notice answer above, in any case :\n\nWarning: eval() can be used to execute arbitrary Python code. You should never use eval() with untrusted strings. (Security of Python's eval() on untrusted strings?)\n\n", "Instead of mytype = \"str\" just do mytype = str and it will work (as it sets mytype to the builtin function str).\nExample with a function:\ndef cast(value, totype):\n return totype(value)\n\nmystr = cast(35, str)\nprint(mystr, type(mystr))\n\nmyfloat = cast(\"35\", float)\nprint(myfloat, type(myfloat))\n\nOutput:\n35 <class 'str'>\n35.0 <class 'float'>\n\n" ]
[ 3, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074463894_python.txt
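A safer alternative to eval() or getattr(__builtins__, ...) is an explicit lookup table, sketched below under the assumption that only a known set of type names has to be supported; CASTS and convert are illustrative names:

CASTS = {"int": int, "float": float, "str": str}

def convert(type_name, value):
    # Look the constructor up by name; reject anything unexpected.
    try:
        return CASTS[type_name](value)
    except KeyError:
        raise ValueError(f"unsupported type name: {type_name!r}")

print(convert("int", "35"))    # 35
print(convert("float", "35"))  # 35.0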
Q: Calculating average of the rating field by applying filter on the dataframe I have added an image (omitted here) which shows the different columns in a dataframe. Basically, I want to calculate the average of the rating field based on the value Drama present in the genre field of the dataframe; in other words, I will calculate the average rating of the rows containing Drama in the genre field. How do I do it? A: If the genre column contains lists, you can use astype to be able to use str.contains properly, and then evaluate the mean: df[df['genre'].astype(str).str.contains('Drama')]['rating'].mean()
Calculating average of the rating field by applying filter on the dataframe
I have added an image (omitted here) which shows the different columns in a dataframe. Basically, I want to calculate the average of the rating field based on the value Drama present in the genre field of the dataframe; in other words, I will calculate the average rating of the rows containing Drama in the genre field. How do I do it?
[ "If the rating column contains lists, you can use astype to be able to use str.contains properly, and then evaluate the mean:\ndf[df['genre'].astype(str).str.contains('Drama')]['rating'].mean()\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "list", "pandas", "python", "series" ]
stackoverflow_0074465279_dataframe_list_pandas_python_series.txt
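For illustration, a self-contained run of the accepted approach; the sample data here is made up:

import pandas as pd

df = pd.DataFrame({
    "genre": [["Drama", "Crime"], ["Comedy"], ["Drama"]],
    "rating": [8.5, 7.0, 9.0],
})
# astype(str) turns each genre list into text so str.contains works.
print(df[df["genre"].astype(str).str.contains("Drama")]["rating"].mean())  # 8.75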
Q: Making multiple columns into one for .to_datetime Currently I have a code set up to read through a CSV file, but the CSV file has columns DAY, YEAR, and MONTH all as integers. I want to make them all one column of datetime64[ns] objects. To make them datetime64[ns] objects, I did the following: df.insert(0, "DATE", 0, True) df["YEAR"] = df["YEAR"].astype(str) df["MONTH"] = df["MONTH"].astype(str) df["DAY"] = df["DAY"].astype(str) cols = ["MONTH", "DAY", "YEAR"] df["DATE"] = df["MONTH"] + "-" + df["DAY"] + "-" + df["YEAR"] df["DATE"] = pd.to_datetime(df["DATE"]) My question is, is there a more efficient way to do this? I'm new to pandas and coding in general, so thank you in advance! A: Looking at the pd.to_datetime docs and the columns you listed you should be able to do something like. df["DATE"] = pd.to_datetime(df[["YEAR", "MONTH", "DAY"]]) Can't say for sure without the data to try it on, but pd.to_datetime should be able to handle these columns without having to alter them.
Making multiple columns into one for .to_datetime
Currently I have a code set up to read through a CSV file, but the CSV file has columns DAY, YEAR, and MONTH all as integers. I want to make them all one column of datetime64[ns] objects. To make them datetime64[ns] objects, I did the following: df.insert(0, "DATE", 0, True) df["YEAR"] = df["YEAR"].astype(str) df["MONTH"] = df["MONTH"].astype(str) df["DAY"] = df["DAY"].astype(str) cols = ["MONTH", "DAY", "YEAR"] df["DATE"] = df["MONTH"] + "-" + df["DAY"] + "-" + df["YEAR"] df["DATE"] = pd.to_datetime(df["DATE"]) My question is, is there a more efficient way to do this? I'm new to pandas and coding in general, so thank you in advance!
[ "Looking at the pd.to_datetime docs and the columns you listed you should be able to do something like.\ndf[\"DATE\"] = pd.to_datetime(df[[\"YEAR\", \"MONTH\", \"DAY\"]])\n\nCan't say for sure without the data to try it on, but pd.to_datetime should be able to handle these columns without having to alter them.\n" ]
[ 0 ]
[]
[]
[ "analytics", "pandas", "python" ]
stackoverflow_0074465162_analytics_pandas_python.txt
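A quick self-contained check of the suggested call, with made-up values:

import pandas as pd

df = pd.DataFrame({"YEAR": [2021, 2022], "MONTH": [1, 12], "DAY": [15, 31]})
# pandas assembles a datetime from year/month/day columns directly;
# the column-name matching is case-insensitive.
df["DATE"] = pd.to_datetime(df[["YEAR", "MONTH", "DAY"]])
print(df["DATE"].dtype)  # datetime64[ns]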
Q: How can I smooth a graph with hundreds of points? I'm working with data from my Spotify account and I've created a dataframe that contains all the minutes in the day and the total playtime during that minute for the last 5 years. The dataframe is this (by the way, I wonder if there is any way to work with time without having to select a specific date): time playtime 0 1970-01-01 00:00:00 47.138733 1 1970-01-01 00:01:00 52.419767 2 1970-01-01 00:02:00 47.943567 3 1970-01-01 00:03:00 43.322283 4 1970-01-01 00:04:00 58.029217 ... ... ... 1435 1970-01-01 23:55:00 46.276150 1436 1970-01-01 23:56:00 53.202717 1437 1970-01-01 23:57:00 49.844367 1438 1970-01-01 23:58:00 62.703600 1439 1970-01-01 23:59:00 55.437700 I've plotted the dataframe in order to obtain a visualization of how much music I listen to during the day. This is the graph (image omitted). There are 1440 points, so outliers will appear. But, as you can probably see, there is a smooth curve that emerges from the graph. I want to get the actual smooth graph, but every method that I see uses interpolation and I don't think interpolating 1440 points is efficient. Is there any way to get a moving average or something similar so that I can plot a smooth curve? I've tried interpolating, but there are too many points and it takes ages to run. A: Moving average, for averaging the last 30 records: df['playtime_rolling_30'] = df['playtime'].rolling(30).mean() A: For technical analysis and finance: It is more convenient to use a reliable third-party package like TA-Lib. A simple moving average operation with period of 3 then would be as simple as talib.SMA(arr, 3). For the purpose of education: Here is a simple code for a simple moving average: def sma(arr, n=3): """ total number of items to average over for each new element=2*n+1 """ res = np.array([]) for sn, elem in enumerate(arr): ind_start=max(0, min(sn, sn-n)) ind_end=min(len(arr), max(sn, sn+n)) res = np.append(res, np.sum(arr[ind_start:ind_end]).astype('float32')/(ind_end-ind_start)) return res num_points=100 mu=0. sig=3. a=np.arange(num_points) + np.random.normal(mu, sig, num_points) sma_level = 3 plt.plot(a, label="raw") plt.plot(sma(a, n=sma_level), '--', label=f"sma-{2*sma_level+1}") plt.legend() Output: (plot omitted)
How can I smooth a graph with hundreds of points?
I'm working with data from my Spotify account and I've created a dataframe that contains all the minutes in the day and the total playtime during that minute for the last 5 years. The dataframe is this (by the way, I wonder if there is any way to work with time without having to select a specific date): time playtime 0 1970-01-01 00:00:00 47.138733 1 1970-01-01 00:01:00 52.419767 2 1970-01-01 00:02:00 47.943567 3 1970-01-01 00:03:00 43.322283 4 1970-01-01 00:04:00 58.029217 ... ... ... 1435 1970-01-01 23:55:00 46.276150 1436 1970-01-01 23:56:00 53.202717 1437 1970-01-01 23:57:00 49.844367 1438 1970-01-01 23:58:00 62.703600 1439 1970-01-01 23:59:00 55.437700 I've plotted the dataframe in order to obtain a visualization of how much music I listen to during the day. This is the graph (image omitted). There are 1440 points, so outliers will appear. But, as you can probably see, there is a smooth curve that emerges from the graph. I want to get the actual smooth graph, but every method that I see uses interpolation and I don't think interpolating 1440 points is efficient. Is there any way to get a moving average or something similar so that I can plot a smooth curve? I've tried interpolating, but there are too many points and it takes ages to run.
[ "moving average\nfor averaging the last 30 records\ndf['playtime_rolling_30'] = df['playtime'].rolling(30).mean()\n\n", "For technical analysis and finance:\nIt is more convenient to use a reliable third-party package like TA-Lib. A simple moving average operation with period of 3 then would be as simple as talib.SMA(arr, 3).\nFor the purpose of education:\nHere is a simple code for simple moving average:\ndef sma(arr, n=3):\n \"\"\"\n total number of items to aveage over for each new element=2*n+1\n \"\"\"\n res = np.array([])\n for sn, elem in enumerate(arr):\n ind_start=max(0, min(sn, sn-n))\n ind_end=min(len(arr), max(sn, sn+n))\n res = np.append(res, np.sum(arr[ind_start:ind_end]).astype('float32')/(ind_end-ind_start))\n \n return res\n\nnum_points=100\nmu=0.\nsig=3.\na=np.arange(num_points) + np.random.normal(mu, sig, num_points)\n\nsma_level = 3\nplt.plot(a, label=\"raw\")\nplt.plot(sma(a, n=sma_level), '--', label=f\"sma-{2*sma_level+1}\")\nplt.legend()\n\nOutput:\n\n" ]
[ 0, 0 ]
[]
[]
[ "matplotlib", "pandas", "python" ]
stackoverflow_0074465004_matplotlib_pandas_python.txt
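Building on the rolling-mean answer, a small sketch that keeps the smoothed curve aligned with the data by centering the window, and uses min_periods so the ends of the series are not dropped; df with its time and playtime columns is assumed from the question:

import matplotlib.pyplot as plt

# df is the dataframe from the question (time/playtime columns).
df["smooth"] = df["playtime"].rolling(window=31, center=True, min_periods=1).mean()
plt.plot(df["time"], df["playtime"], alpha=0.3, label="raw")
plt.plot(df["time"], df["smooth"], label="centered 31-minute mean")
plt.legend()
plt.show()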
Q: Can't bind value to button on release in Kivy python I wanted to save output of print to value so I can use it later. I have no idea where to go with it #btn.bind(on_release=lambda btn: print(btn.text)) full code: from kivy.uix.dropdown import DropDown from kivy.uix.button import Button from kivy.base import runTouchApp # create a dropdown with 10 buttons dropdown = DropDown() for index in range(1, 13): # When adding widgets, we need to specify the height manually # (disabling the size_hint_y) so the dropdown can calculate # the area it needs. btn = Button(text='%d moth' % index, size_hint_y=None, height=44) # for each button, attach a callback that will call the select() method # on the dropdown. We'll pass the text of the button as the data of the # selection. btn.bind(on_release=lambda btn: dropdown.select(btn.text)) btn.bind(on_release=lambda btn: print(btn.text)) # then add the button inside the dropdown dropdown.add_widget(btn) # create a big main button mainbutton = Button(text='How many months ago', size_hint=(None, None)) # show the dropdown menu when the main button is released # note: all the bind() calls pass the instance of the caller (here, the # mainbutton instance) as the first argument of the callback (here, # dropdown.open.). mainbutton.bind(on_release=dropdown.open) # one last thing, listen for the selection in the dropdown list and # assign the data to the button text. dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x)) print(value) runTouchApp(mainbutton) I want to save number that user choose A: This may get you closer to what you want. I don't know if this is your whole application or if you pared down to minimal function just to post here. from kivy.uix.dropdown import DropDown from kivy.uix.button import Button from kivy.base import runTouchApp from functools import partial # create a dropdown with 10 buttons dropdown = DropDown() global_selection = "" for index in range(1, 13): # When adding widgets, we need to specify the height manually # (disabling the size_hint_y) so the dropdown can calculate # the area it needs. btn = Button(text='%d moth' % index, size_hint_y=None, height=44) # for each button, attach a callback that will call the select() method # on the dropdown. We'll pass the text of the button as the data of the # selection. def new_func(self, my_button): global_selection = my_button.text print(f"The button selected was {global_selection}") btn.bind(on_release=lambda btn: dropdown.select(btn.text)) btn.bind(on_release=partial(new_func, btn)) # then add the button inside the dropdown dropdown.add_widget(btn) # create a big main button mainbutton = Button(text='How many months ago', size_hint=(None, None)) # show the dropdown menu when the main button is released # note: all the bind() calls pass the instance of the caller (here, the # mainbutton instance) as the first argument of the callback (here, # dropdown.open.). mainbutton.bind(on_release=dropdown.open) # one last thing, listen for the selection in the dropdown list and # assign the data to the button text. dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x)) runTouchApp(mainbutton)
Can't bind value to button on release in Kivy python
I wanted to save the output of print to a variable so I can use it later, but I have no idea how to go about it: #btn.bind(on_release=lambda btn: print(btn.text)) Full code: from kivy.uix.dropdown import DropDown from kivy.uix.button import Button from kivy.base import runTouchApp # create a dropdown with 10 buttons dropdown = DropDown() for index in range(1, 13): # When adding widgets, we need to specify the height manually # (disabling the size_hint_y) so the dropdown can calculate # the area it needs. btn = Button(text='%d moth' % index, size_hint_y=None, height=44) # for each button, attach a callback that will call the select() method # on the dropdown. We'll pass the text of the button as the data of the # selection. btn.bind(on_release=lambda btn: dropdown.select(btn.text)) btn.bind(on_release=lambda btn: print(btn.text)) # then add the button inside the dropdown dropdown.add_widget(btn) # create a big main button mainbutton = Button(text='How many months ago', size_hint=(None, None)) # show the dropdown menu when the main button is released # note: all the bind() calls pass the instance of the caller (here, the # mainbutton instance) as the first argument of the callback (here, # dropdown.open.). mainbutton.bind(on_release=dropdown.open) # one last thing, listen for the selection in the dropdown list and # assign the data to the button text. dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x)) print(value) runTouchApp(mainbutton) I want to save the number that the user chooses.
[ "This may get you closer to what you want. I don't know if this is your whole application or if you pared down to minimal function just to post here.\nfrom kivy.uix.dropdown import DropDown\nfrom kivy.uix.button import Button\nfrom kivy.base import runTouchApp\nfrom functools import partial\n\n# create a dropdown with 10 buttons\ndropdown = DropDown()\nglobal_selection = \"\"\nfor index in range(1, 13):\n # When adding widgets, we need to specify the height manually\n # (disabling the size_hint_y) so the dropdown can calculate\n # the area it needs.\n\n btn = Button(text='%d moth' % index, size_hint_y=None, height=44)\n # for each button, attach a callback that will call the select() method\n # on the dropdown. We'll pass the text of the button as the data of the\n # selection.\n def new_func(self, my_button):\n global_selection = my_button.text\n print(f\"The button selected was {global_selection}\")\n\n btn.bind(on_release=lambda btn: dropdown.select(btn.text))\n btn.bind(on_release=partial(new_func, btn))\n\n # then add the button inside the dropdown\n dropdown.add_widget(btn)\n# create a big main button\nmainbutton = Button(text='How many months ago', size_hint=(None, None))\n\n# show the dropdown menu when the main button is released\n# note: all the bind() calls pass the instance of the caller (here, the\n# mainbutton instance) as the first argument of the callback (here,\n# dropdown.open.).\nmainbutton.bind(on_release=dropdown.open)\n# one last thing, listen for the selection in the dropdown list and\n# assign the data to the button text.\ndropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x))\nrunTouchApp(mainbutton)\n\n" ]
[ 0 ]
[]
[]
[ "kivy", "python", "user_interface" ]
stackoverflow_0074457997_kivy_python_user_interface.txt
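An alternative sketch for keeping the chosen value around: since the dropdown already fires on_select with the selected text, one extra binding is enough, instead of binding every button. The selected dict is an illustrative place to stash the value, and this plugs into the question's code after the dropdown is built:

# dropdown is the DropDown built in the question's code.
selected = {"months": None}

def remember(instance, value):
    # Kivy passes the dropdown instance and the data given to select().
    selected["months"] = value
    print("user chose:", value)

dropdown.bind(on_select=remember)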
Q: Python stdin errors, copied code from pycharm wont work in CMD Learning python as a beginner. I wanted to copy my code into CMD but it won't work. Here is code calculation_to_units = 24 name_of_unit = "hours" def days_to_units(number_of_days): if number_of_days > 0: return f"{number_of_days} days are {number_of_days * calculation_to_units} {name_of_unit}" else: return "Liczba dni musi być dodatnia :)" user_input = input("Hello user, enter amount of days you want to calculate to hours\n") user_input_number = int(user_input) calculated_value = days_to_units(user_input_number) print(calculated_value) despite the fact that it works in Pycharm. I already checked paths. I am not able to solve this problem. When I type in python3 test.py it also says C:\Users\Borys\AppData\Local\Programs\Python\Python310\python.exe: can't open file 'C:\Users\Borys\test.py': [Errno 2] No such file or directory Also recieved this message "unable to initialize device prn in python" My internet connection is so bad that it took me 10 minutes to sign up on stack overflow. Additionaly my english knowledge is too small for complex programming explainations. A: It can be difficult to paste code that calls input() into a python shell. Both the shell and the input() function read stdin. As soon as python reads the line with input(), it calls input() and that consumes the next line on stdin. In your case, that was a python code line intended to set a variable. That line was consumbed by input and was not read or executed by python. So you got a "variable not defined" error. But you would also have gotten another error because that line was also not the stuff you wanted to input. Suppose you had the script val = input("Input value: ") i_val = int(val) print(i_val) And pasted it into the python shell >>> val = input("Input value: ") Input value: i_val = int(val) >>> print(i_val) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'i_val' is not defined. Did you mean: 'eval'? The line i_val = int(val) was assigned to val - it was not interpreted by the shell. There would be an ">>> " if it did.
Python stdin errors, copied code from PyCharm won't work in CMD
I'm learning Python as a beginner. I wanted to copy my code into CMD but it won't work. Here is the code: calculation_to_units = 24 name_of_unit = "hours" def days_to_units(number_of_days): if number_of_days > 0: return f"{number_of_days} days are {number_of_days * calculation_to_units} {name_of_unit}" else: return "Liczba dni musi być dodatnia :)" user_input = input("Hello user, enter amount of days you want to calculate to hours\n") user_input_number = int(user_input) calculated_value = days_to_units(user_input_number) print(calculated_value) It works in PyCharm, though. I already checked paths. I am not able to solve this problem. When I type in python3 test.py it also says C:\Users\Borys\AppData\Local\Programs\Python\Python310\python.exe: can't open file 'C:\Users\Borys\test.py': [Errno 2] No such file or directory I also received this message: "unable to initialize device prn in python" My internet connection is so bad that it took me 10 minutes to sign up on Stack Overflow. Additionally, my English knowledge is too limited for complex programming explanations.
[ "It can be difficult to paste code that calls input() into a python shell. Both the shell and the input() function read stdin. As soon as python reads the line with input(), it calls input() and that consumes the next line on stdin. In your case, that was a python code line intended to set a variable. That line was consumbed by input and was not read or executed by python. So you got a \"variable not defined\" error. But you would also have gotten another error because that line was also not the stuff you wanted to input.\nSuppose you had the script\nval = input(\"Input value: \")\ni_val = int(val)\nprint(i_val)\n\nAnd pasted it into the python shell\n>>> val = input(\"Input value: \")\nInput value: i_val = int(val)\n>>> print(i_val)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nNameError: name 'i_val' is not defined. Did you mean: 'eval'?\n\nThe line i_val = int(val) was assigned to val - it was not interpreted by the shell. There would be an \">>> \" if it did.\n" ]
[ 0 ]
[]
[]
[ "python", "stdin" ]
stackoverflow_0074464754_python_stdin.txt
Q: Yahoo stocks API cause errors for some stocks in Python where except function doesn´t work I use the Yahoo API a lot to get stock data from multiple exchanges in the world. The code normally works, but it look likes that there has been an update in the API. In the past when a stock was delisted, the except function in Python got the error and continued. The strange thing is that this error is caused by only some stocks, which are delisted? For instance stock ´ANTM´, this stock is delisted. I receive this error while trying to scrape the stock data: AttributeError: 'float' object has no attribute 'upper'. This code runs till infinity. !pip install yfinance import yfinance as yf import datetime as dt start = '2021-01-01' end = dt.datetime.today().strftime('%Y-%m-%d') tickers=['AAPL', 'ANTM'] for ticker in tickers: try: df=yf.download(ticker, start, end, progress=False) df.index = df.index.strftime('%Y/%m/%d') df.to_csv(f'{ticker}.csv') except Exception: if ticker not in tickers: continue I have created a workaround, but this is slow for 1K of stocks. #code with workaround tickers=['AAPL','ANTM'] for ticker in tickers: try: if yf.Ticker(ticker).info['regularMarketPrice']!=None: #workaround df=yf.download(ticker, start, end, progress=False) df.index = df.index.strftime('%Y/%m/%d') df.to_csv(f'{ticker}.csv') else: #workaround continue #workaround except Exception: if ticker not in tickers: continue What can I do to make sure that the code is catching the error, and continues with the code, which doesn't have a impact on the speed of the code? A: It looks like that the API is updated, the except exception is working again.
Yahoo stocks API causes errors for some stocks in Python where the except function doesn't work
I use the Yahoo API a lot to get stock data from multiple exchanges in the world. The code normally works, but it looks like there has been an update to the API. In the past, when a stock was delisted, the except clause in Python caught the error and continued. The strange thing is that this error is only caused by some of the delisted stocks. For instance the stock 'ANTM', which is delisted: I receive this error while trying to scrape the stock data: AttributeError: 'float' object has no attribute 'upper'. This code runs forever. !pip install yfinance import yfinance as yf import datetime as dt start = '2021-01-01' end = dt.datetime.today().strftime('%Y-%m-%d') tickers=['AAPL', 'ANTM'] for ticker in tickers: try: df=yf.download(ticker, start, end, progress=False) df.index = df.index.strftime('%Y/%m/%d') df.to_csv(f'{ticker}.csv') except Exception: if ticker not in tickers: continue I have created a workaround, but this is slow for 1K stocks. #code with workaround tickers=['AAPL','ANTM'] for ticker in tickers: try: if yf.Ticker(ticker).info['regularMarketPrice']!=None: #workaround df=yf.download(ticker, start, end, progress=False) df.index = df.index.strftime('%Y/%m/%d') df.to_csv(f'{ticker}.csv') else: #workaround continue #workaround except Exception: if ticker not in tickers: continue What can I do to make sure that the code catches the error and continues, without impacting its speed?
[ "It looks like that the API is updated, the except exception is working again.\n" ]
[ 0 ]
[]
[]
[ "finance", "python", "yahoo" ]
stackoverflow_0074251997_finance_python_yahoo.txt
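For reference, a defensive variant of the download loop, sketched under the assumption that a delisted ticker either raises or returns an empty frame; it skips those instead of probing .info first, which avoids the extra request per ticker:

import datetime as dt
import yfinance as yf

start, end = "2021-01-01", dt.datetime.today().strftime("%Y-%m-%d")
for ticker in ["AAPL", "ANTM"]:
    try:
        df = yf.download(ticker, start, end, progress=False)
    except Exception as exc:
        print(f"skipping {ticker}: {exc}")
        continue
    if df.empty:  # delisted tickers often come back with no rows
        print(f"skipping {ticker}: no data")
        continue
    df.index = df.index.strftime("%Y/%m/%d")
    df.to_csv(f"{ticker}.csv")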
Q: ValueError: while trying to use the crispy forms I am trying to make use of the crispy forms to display the form for inserting the data. I have a model as: class Athlete(models.Model): athlete_name=models.CharField(max_length=50) GENDER_CHOICES=( ('M','Male'), ('F','Female'), ('O','Others') ) gender=models.CharField(choices=GENDER_CHOICES,max_length=100) age=models.IntegerField() athlete_category=models.ForeignKey(Category,on_delete=models.CASCADE) image=models.FileField(upload_to='static/athlete_img', null=True) COUNTRY_CHOICES=( ('np','nepal'), ('in','india'), ('uk','united kingdom'), ('sp','spain'), ('ch','china') ) medals=models.IntegerField country=models.CharField(choices=COUNTRY_CHOICES,max_length=100) def __str__(self): return self.athlete_name In the forms.py...I have modelform as: class AthleteForm(ModelForm): class Meta: model:Athlete fields='__all__' In my views.py I have the following function: def add_athlete(request): if request.method == 'POST': form = AthleteForm(request.POST, request.FILES) if form.is_valid(): form.save() messages.add_message(request, messages.SUCCESS, 'Athlete added sucessfully') return redirect('/admin/athletes') else: messages.add_message(request, messages.ERROR, 'Enter the appropriate values') return render(request, 'forgame/addathletes.html', { 'form': form }) context = { 'form': AthleteForm } return render(request, 'forgame/addathletes.html', context) Inside my templates/forgame I have created addathletes.html {% extends 'layouts.html' %} {% load crispy_forms_tags %} {% block title %} <title>Game Category</title> {%endblock%} {% block main_content %} <div class="container-fluid mt-4"> <div class="d-flex justify-content-center"> <div class="col-md-6"> <h2>Add Categories Here!</h2> {% for msg in messages %} {% if msg.level == DEFAULT_MESSAGE_LEVELS.SUCCESS %} <div class="alert alert-success"> {{msg}} </div> {%endif%} {% if msg.level == DEFAULT_MESSAGE_LEVELS.ERROR %} <div class="alert alert-danger"> {{msg}} </div> {%endif%} {%endfor%} <form action="" method="post" class="shadow-lg p-3"> {%csrf_token%} {{form | crispy}} <div class="mt-3"> <input type="submit" value="Add Category" class="btn btn-primary"> </div> </form> </div> </div> </div> {% endblock %} My urls looks fine but I have been getting this error: Along with this: A: It should be = not : so: class AthleteForm(ModelForm): class Meta: model = Athlete fields='__all__' I'd also recommend you to maintain gaps between template tags like it should be {% endblock %} not {%endblock%} same goes for every tag.
ValueError: while trying to use the crispy forms
I am trying to make use of crispy forms to display the form for inserting the data. I have a model as: class Athlete(models.Model): athlete_name=models.CharField(max_length=50) GENDER_CHOICES=( ('M','Male'), ('F','Female'), ('O','Others') ) gender=models.CharField(choices=GENDER_CHOICES,max_length=100) age=models.IntegerField() athlete_category=models.ForeignKey(Category,on_delete=models.CASCADE) image=models.FileField(upload_to='static/athlete_img', null=True) COUNTRY_CHOICES=( ('np','nepal'), ('in','india'), ('uk','united kingdom'), ('sp','spain'), ('ch','china') ) medals=models.IntegerField country=models.CharField(choices=COUNTRY_CHOICES,max_length=100) def __str__(self): return self.athlete_name In forms.py I have a ModelForm: class AthleteForm(ModelForm): class Meta: model:Athlete fields='__all__' In my views.py I have the following function: def add_athlete(request): if request.method == 'POST': form = AthleteForm(request.POST, request.FILES) if form.is_valid(): form.save() messages.add_message(request, messages.SUCCESS, 'Athlete added successfully') return redirect('/admin/athletes') else: messages.add_message(request, messages.ERROR, 'Enter the appropriate values') return render(request, 'forgame/addathletes.html', { 'form': form }) context = { 'form': AthleteForm } return render(request, 'forgame/addathletes.html', context) Inside my templates/forgame I have created addathletes.html {% extends 'layouts.html' %} {% load crispy_forms_tags %} {% block title %} <title>Game Category</title> {%endblock%} {% block main_content %} <div class="container-fluid mt-4"> <div class="d-flex justify-content-center"> <div class="col-md-6"> <h2>Add Categories Here!</h2> {% for msg in messages %} {% if msg.level == DEFAULT_MESSAGE_LEVELS.SUCCESS %} <div class="alert alert-success"> {{msg}} </div> {%endif%} {% if msg.level == DEFAULT_MESSAGE_LEVELS.ERROR %} <div class="alert alert-danger"> {{msg}} </div> {%endif%} {%endfor%} <form action="" method="post" class="shadow-lg p-3"> {%csrf_token%} {{form | crispy}} <div class="mt-3"> <input type="submit" value="Add Category" class="btn btn-primary"> </div> </form> </div> </div> </div> {% endblock %} My urls look fine, but I have been getting the error shown in the screenshots (omitted here).
[ "It should be = not : so:\nclass AthleteForm(ModelForm):\n class Meta:\n model = Athlete\n fields='__all__'\n\nI'd also recommend you to maintain gaps between template tags like it should be {% endblock %} not {%endblock%} same goes for every tag.\n" ]
[ 3 ]
[]
[]
[ "django", "django_forms", "django_models", "django_templates", "python" ]
stackoverflow_0074465456_django_django_forms_django_models_django_templates_python.txt
Q: Loop Python Script with Alternating Variable Value I was wondering if anyone had experience with looping a Python script in Jupyter Notebook that alternates the variable value through each iteration of the loop? The example below will summarize what I'm looking for: variable_file = [file_1, file_2, file_3, etc....] Around 20 cells which reference variable_file Final output cell which spits back the desired result for the first file. After this final output cell, I would like to rerun the entire script but with file_2, etc...until all files in the variable_file list have been run. If anyone was any input or tips, it would be greatly appreciated! I tried some if statements, but this problem is way out of the realm of my expertise and I'm far more comfortable with R than Python. A: You need to encapsulate the "around 20 cells" inside the loop. The loop has to be contained in a single cell. If you really want the content of each cell to be in a separate cell, you can create a function in each cell by encapsulating its content. Example: In cell 1: def cell_1(var): <do something using var> In cell 2: def cell_2(var): <do something else> In the final cell: variable_file = [file_1, file_2, file_3] for var in variable_file: cell_1(var) cell_2(var) ...
Loop Python Script with Alternating Variable Value
I was wondering if anyone had experience with looping a Python script in Jupyter Notebook that alternates the variable value through each iteration of the loop? The example below will summarize what I'm looking for: variable_file = [file_1, file_2, file_3, etc....] Around 20 cells which reference variable_file Final output cell which spits back the desired result for the first file. After this final output cell, I would like to rerun the entire script but with file_2, etc...until all files in the variable_file list have been run. If anyone has any input or tips, it would be greatly appreciated! I tried some if statements, but this problem is way out of the realm of my expertise and I'm far more comfortable with R than Python.
[ "You need to encapsulate the \"around 20 cells\" inside the loop. The loop has to be contained in a single cell.\nIf you really want the content of each cell to be in a separate cell, you can create a function in each cell by encapsulating its content.\nExample:\nIn cell 1:\ndef cell_1(var):\n <do something using var>\n\nIn cell 2:\ndef cell_2(var):\n <do something else>\n\nIn the final cell:\nvariable_file = [file_1, file_2, file_3]\nfor var in variable_file:\n cell_1(var)\n cell_2(var)\n ...\n\n" ]
[ 0 ]
[]
[]
[ "loops", "python" ]
stackoverflow_0074465415_loops_python.txt
Q: how to define a "middleman" class's init to successfully passthrough args from the grandchild to the parent in python 3 I have a semi-abstract Parent and Middle classes and some Grandchild fully implemented ones that inherit from each other, but while the Parent and Grandchild need some __init__ args, the middle one is just for shared implemented code: class Parent: def __init__(self, some_arg): ... class Middle(Parent, ABC): def do_something_any_middle_can_do(): ... class Grandchild(Middle): def __init__(self, some_arg): super().__init__(some_arg) ... As you can see, the super().__init__(some_arg) in the Grandchild would call the default __init__ in the Middle, and not send some_arg to the Parent. So far, I have thought to use **kwargs, but that requires the authors of any new Grandchild to explicitly name the args in super().__init__(some_arg=some_arg) and if they don't unexpected things might happen without a good error message: class Parent: def __init__(self, some_arg): ... class Middle(Parent, ABC): def __init__(self, **kwargs): super().__init__(kwargs) ... def do_something_any_middle_can_do(): ... class Grandchild(Middle): def __init__(self, some_arg): super().__init__(some_arg=some_arg) ... A: As you can see, the super().__init__(some_arg) in the Grandchild would call the default __init__ in the Middle, and not send some_arg to the Parent. That's not correct. There is no "default" __init__ method in Middle. super().__init__ refers to the __init__ attribute of the next class in self's method resolution order that has a defined __init__ method. Since Middle.__init__ is not defined, that means Parent.__init__ is called immediately. The advice given in Python's super considered super! is to use keyword arguments to avoid conflict between which classes define which positional parameters. (It's the responsibility of the class designer to know which keyword arguments each ancestor expects, and to resolve any conflicts between ancestors that use the same name for different purposes.) class Parent: def __init__(self, *, some_arg, **kwargs): super().__init__(**kwargs) ... class Middle(Parent, ABC): def do_something_any_middle_can_do(self): ... class Grandchild(Middle): def __init__(self, **kwargs): super().__init__(**kwargs) ... g = Grandchild(some_arg=3) Grandchild.__init__ doesn't need to "advertise" some_arg; it accepts it as an arbitrary keyword argument and passes it up the chain (via super().__init__). Eventually, some class (Parent, in this case) has a defined parameter to accept it; otherwise, the argument will be passed to object.__init__ where an exception will be raised.
how to define a "middleman" class's init to successfully passthrough args from the grandchild to the parent in python 3
I have a semi-abstract Parent and Middle classes and some Grandchild fully implemented ones that inherit from each other, but while the Parent and Grandchild need some __init__ args, the middle one is just for shared implemented code: class Parent: def __init__(self, some_arg): ... class Middle(Parent, ABC): def do_something_any_middle_can_do(): ... class Grandchild(Middle): def __init__(self, some_arg): super().__init__(some_arg) ... As you can see, the super().__init__(some_arg) in the Grandchild would call the default __init__ in the Middle, and not send some_arg to the Parent. So far, I have thought to use **kwargs, but that requires the authors of any new Grandchild to explicitly name the args in super().__init__(some_arg=some_arg) and if they don't unexpected things might happen without a good error message: class Parent: def __init__(self, some_arg): ... class Middle(Parent, ABC): def __init__(self, **kwargs): super().__init__(kwargs) ... def do_something_any_middle_can_do(): ... class Grandchild(Middle): def __init__(self, some_arg): super().__init__(some_arg=some_arg) ...
[ "\nAs you can see, the super().__init__(some_arg) in the Grandchild would call the default __init__ in the Middle, and not send some_arg to the Parent.\n\nThat's not correct. There is no \"default\" __init__ method in Middle.\nsuper().__init__ refers to the __init__ attribute of the next class in self's method resolution order that has a defined __init__ method. Since Middle.__init__ is not defined, that means Parent.__init__ is called immediately.\n\nThe advice given in Python's super considered super! is to use keyword arguments to avoid conflict between which classes define which positional parameters. (It's the responsibility of the class designer to know which keyword arguments each ancestor expects, and to resolve any conflicts between ancestors that use the same name for different purposes.)\nclass Parent:\n def __init__(self, *, some_arg, **kwargs):\n super().__init__(**kwargs)\n ...\n\n\nclass Middle(Parent, ABC):\n def do_something_any_middle_can_do(self):\n ...\n\nclass Grandchild(Middle):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n ...\n\ng = Grandchild(some_arg=3)\n\nGrandchild.__init__ doesn't need to \"advertise\" some_arg; it accepts it as an arbitrary keyword argument and passes it up the chain (via super().__init__). Eventually, some class (Parent, in this case) has a defined parameter to accept it; otherwise, the argument will be passed to object.__init__ where an exception will be raised.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074465268_python_python_3.x.txt
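A runnable toy version of the cooperative pattern from the answer, with the ABC import made explicit; the final print just confirms the argument reached Parent:

from abc import ABC

class Parent:
    def __init__(self, *, some_arg, **kwargs):
        super().__init__(**kwargs)  # pass anything unused up the MRO
        self.some_arg = some_arg

class Middle(Parent, ABC):
    def do_something_any_middle_can_do(self):
        return self.some_arg * 2

class Grandchild(Middle):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

g = Grandchild(some_arg=3)
print(g.do_something_any_middle_can_do())  # 6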
Q: create mask with filled color using opencv I have an input image where I have drawn the green boundaries which I need to mask. I am able to identify the boundary, but my mask is all black with baground is black. how can I fill the boundary region with different color. May be keep the background white and mask region as black Input image im = cv2.imread(imagePath) plt.imshow(im) #color boundaries [B, G, R] lower = np.array([0,120,0]) upper = np.array([200,255,100]) # threshold on green color thresh = cv2.inRange(im, lower, upper) plt.imshow(thresh) # get largest contour contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] big_contour = max(contours, key=cv2.contourArea) x,y,w,h = cv2.boundingRect(big_contour) # draw filled contour on black background mask = np.zeros_like(im) cv2.drawContours(mask, [big_contour], 0, (255,255,255), cv2.FILLED) plt.imshow(mask) # apply mask to input image new_image = cv2.bitwise_and(im, mask) Generated Output I am expecting the green countor will be filled with some different color. May be white background with black countour. or transparent background A: To fill the contours drawn on the mask you should use the opencv's fillPoly function : im = cv2.imread(imagePath) plt.imshow(im) #color boundaries [B, G, R] lower = np.array([0,120,0]) upper = np.array([200,255,100]) # threshold on green color thresh = cv2.inRange(im, lower, upper) plt.imshow(thresh) # get largest contour contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] big_contour = max(contours, key=cv2.contourArea) x,y,w,h = cv2.boundingRect(big_contour) # draw filled contour on black background mask = np.zeros_like(im) # cv2.drawContours(mask, [big_contour], 0, (255,255,255), cv2.FILLED) mask = cv2.fillPoly(mask, pts =[big_contours], color=(255,255,255)) # fill the polygon plt.imshow(mask) # apply mask to input image new_image = cv2.bitwise_and(im, mask) A: This code generated canny image and then generates contours, then it generates mask and after this all it shows the output as the mixture of original and the mask image: import cv2 import numpy as np image = cv2.imread('image.png') cv2.waitKey(0) # Grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Find Canny edges edged = cv2.Canny(gray, 30, 200) cv2.waitKey(0) # Finding Contours # Use a copy of the image e.g. edged.copy() # since findContours alters the image contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) cv2.imshow('Canny Edges After Contouring', edged) print("Number of Contours found = " + str(len(contours))) # Draw all contours # -1 signifies drawing all contours cv2.drawContours(image, contours, -1, (0, 0, 255), 2) mask = np.zeros_like(image) # cv2.drawContours(mask, [big_contour], 0, (255,255,255), cv2.FILLED) cv2.fillPoly(mask, pts =contours, color=(0,255,0)) # fill the polygon new_image = cv2.bitwise_and(image, mask) while True: cv2.imshow('Contours', image) cv2.imshow('mask', mask) cv2.imshow('new_image', new_image) cv2.waitKey(1) # cv2.destroyAllWindows() Original image: Edged image: contours found: mask: Final image: Also you can change color of the mask fill.
create mask with filled color using opencv
I have an input image where I have drawn the green boundaries which I need to mask. I am able to identify the boundary, but my mask is all black on a black background. How can I fill the boundary region with a different color? Maybe keep the background white and the mask region black. Input image (omitted) im = cv2.imread(imagePath) plt.imshow(im) #color boundaries [B, G, R] lower = np.array([0,120,0]) upper = np.array([200,255,100]) # threshold on green color thresh = cv2.inRange(im, lower, upper) plt.imshow(thresh) # get largest contour contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] big_contour = max(contours, key=cv2.contourArea) x,y,w,h = cv2.boundingRect(big_contour) # draw filled contour on black background mask = np.zeros_like(im) cv2.drawContours(mask, [big_contour], 0, (255,255,255), cv2.FILLED) plt.imshow(mask) # apply mask to input image new_image = cv2.bitwise_and(im, mask) Generated output (image omitted) I am expecting the green contour to be filled with some different color: maybe a white background with a black contour, or a transparent background.
[ "To fill the contours drawn on the mask you should use the opencv's fillPoly function :\nim = cv2.imread(imagePath)\nplt.imshow(im)\n#color boundaries [B, G, R]\nlower = np.array([0,120,0])\nupper = np.array([200,255,100])\n\n# threshold on green color\nthresh = cv2.inRange(im, lower, upper)\nplt.imshow(thresh)\n# get largest contour\ncontours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\ncontours = contours[0] if len(contours) == 2 else contours[1]\nbig_contour = max(contours, key=cv2.contourArea)\nx,y,w,h = cv2.boundingRect(big_contour)\n\n# draw filled contour on black background\nmask = np.zeros_like(im)\n# cv2.drawContours(mask, [big_contour], 0, (255,255,255), cv2.FILLED)\nmask = cv2.fillPoly(mask, pts =[big_contours], color=(255,255,255)) # fill the polygon\nplt.imshow(mask)\n# apply mask to input image\nnew_image = cv2.bitwise_and(im, mask)\n\n", "This code generated canny image and then generates contours, then it generates mask and after this all it shows the output as the mixture of original and the mask image:\nimport cv2\nimport numpy as np\nimage = cv2.imread('image.png')\ncv2.waitKey(0)\n\n# Grayscale\ngray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n# Find Canny edges\nedged = cv2.Canny(gray, 30, 200)\ncv2.waitKey(0)\n\n# Finding Contours\n# Use a copy of the image e.g. edged.copy()\n# since findContours alters the image\ncontours, hierarchy = cv2.findContours(edged,\n cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)\n\ncv2.imshow('Canny Edges After Contouring', edged)\n\n\nprint(\"Number of Contours found = \" + str(len(contours)))\n\n# Draw all contours\n# -1 signifies drawing all contours\ncv2.drawContours(image, contours, -1, (0, 0, 255), 2)\n\nmask = np.zeros_like(image)\n# cv2.drawContours(mask, [big_contour], 0, (255,255,255), cv2.FILLED)\ncv2.fillPoly(mask, pts =contours, color=(0,255,0)) # fill the polygon\n\nnew_image = cv2.bitwise_and(image, mask)\n\nwhile True:\n cv2.imshow('Contours', image)\n cv2.imshow('mask', mask)\n cv2.imshow('new_image', new_image)\n cv2.waitKey(1)\n# cv2.destroyAllWindows()\n\nOriginal image:\n\nEdged image:\n\ncontours found:\n\nmask:\n\nFinal image:\n\nAlso you can change color of the mask fill.\n" ]
[ 0, 0 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0074463028_opencv_python.txt
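To get the specific look the question asks for (white background, black filled contour), a short sketch continuing from the variables already computed above; im and big_contour are assumed from the question's code:

import cv2
import numpy as np

# im and big_contour come from the question's code above.
mask = np.full_like(im, 255)  # start from a white canvas
cv2.drawContours(mask, [big_contour], 0, (0, 0, 0), cv2.FILLED)  # black fill
cv2.imwrite("mask_white_bg.png", mask)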
Q: Unclosed tag on line 6: 'with'. Looking for one of: endwith I'm learning how to use Django Templates but im getting this error Unclosed tag on line 6: 'with'. Looking for one of: endwith. this is my html code <!DOCTYPE html> <html> <body> <h1>Favorite rapper:</h1> {% with person="2pac" %} <h1>{{person}}</h1> </body> </html> this is the tutorial that I'm doing A: The error makes complete sense you should close the with tag so: <!DOCTYPE html> <html> <body> <h1>Favorite rapper:</h1> {% with person="2pac" %} <h1>{{person}}</h1> {% endwith %} </body> </html>
Unclosed tag on line 6: 'with'. Looking for one of: endwith
I'm learning how to use Django templates, but I'm getting the error Unclosed tag on line 6: 'with'. Looking for one of: endwith. This is my HTML code: <!DOCTYPE html> <html> <body> <h1>Favorite rapper:</h1> {% with person="2pac" %} <h1>{{person}}</h1> </body> </html> This is from the tutorial I'm following.
[ "The error makes complete sense you should close the with tag so:\n<!DOCTYPE html>\n<html>\n <body>\n <h1>Favorite rapper:</h1>\n\n {% with person=\"2pac\" %}\n\n <h1>{{person}}</h1>\n {% endwith %}\n </body>\n</html>\n\n" ]
[ 2 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0074465308_django_django_templates_python.txt
Q: Aggregate the grouped values on many specific columns from list I'd like to perfom groupBy() operation with specific agg(). df = df.groupBy("x", "y").agg(F.max("a").alias("a"), F.max("b").alias("b")) But is there any way to aggregation using list of columns? I don't want to hardcode it. A: You can use list comprehension. list_of_cols = ["a", "b"] df = df.groupBy("x", "y").agg(*[F.max(x).alias(x) for x in list_of_cols])
Aggregate the grouped values on many specific columns from list
I'd like to perform a groupBy() operation with a specific agg(). df = df.groupBy("x", "y").agg(F.max("a").alias("a"), F.max("b").alias("b")) But is there any way to aggregate using a list of columns? I don't want to hardcode it.
[ "You can use list comprehension.\nlist_of_cols = [\"a\", \"b\"]\ndf = df.groupBy(\"x\", \"y\").agg(*[F.max(x).alias(x) for x in list_of_cols])\n\n" ]
[ 0 ]
[]
[]
[ "aggregate", "group_by", "pyspark", "python" ]
stackoverflow_0074465417_aggregate_group_by_pyspark_python.txt
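Equivalently, agg also accepts a dict, although it cannot alias in the same step, so the generated max(col) names need a rename afterwards; a sketch with the column names from the question:

# df is the grouped-source dataframe from the question.
list_of_cols = ["a", "b"]
df = df.groupBy("x", "y").agg({c: "max" for c in list_of_cols})
for c in list_of_cols:
    df = df.withColumnRenamed(f"max({c})", c)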
Q: In selenium the send_keys function ignores spaces when used with VNC Why is the send_keys function ignoring spaces in my python script? I used vnc on ubuntu/debian 10. Everything works correctly when I run the script on my computer, but all spaces disappear on vps with vnc. Error is in Google chrome. ` element.send_keys("1 2 3") result: "123" ` Replacing the spaces with "Keys.SPACE" did not help me. I tried adding two slashes element.send_keys("John\\ Doe") A: Try importing the libs and instantiate actions: # Needed libs from selenium import webdriver from selenium.webdriver.common.action_chains import ActionChains import time # We create the driver driver = webdriver.Chrome() action = ActionChains(driver) Then make click into your element with something like: element.click() Then send the keys like: action.send_keys(departure).perform() A: I haven't been able to get this to work in Chrome. But spaces work fine in Firefox, so I'll have to use that. If someone finds the cause or solution to my problem, please write
In selenium the send_keys function ignores spaces when used with VNC
Why is the send_keys function ignoring spaces in my Python script? I used VNC on Ubuntu/Debian 10. Everything works correctly when I run the script on my computer, but all spaces disappear on the VPS with VNC. The error occurs in Google Chrome. element.send_keys("1 2 3") result: "123" Replacing the spaces with Keys.SPACE did not help me. I also tried adding a double backslash: element.send_keys("John\\ Doe")
[ "Try importing the libs and instantiate actions:\n# Needed libs\nfrom selenium import webdriver\nfrom selenium.webdriver.common.action_chains import ActionChains\nimport time\n\n# We create the driver\ndriver = webdriver.Chrome()\naction = ActionChains(driver)\n\nThen make click into your element with something like:\nelement.click()\n\nThen send the keys like:\naction.send_keys(departure).perform()\n\n", "I haven't been able to get this to work in Chrome. But spaces work fine in Firefox, so I'll have to use that. If someone finds the cause or solution to my problem, please write\n" ]
[ 0, 0 ]
[]
[]
[ "python", "selenium", "vnc" ]
stackoverflow_0074415389_python_selenium_vnc.txt
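As a workaround sketch when send_keys misbehaves under a remote display, the value can be set through JavaScript instead; this bypasses synthesized keyboard events entirely, so it only suits fields that do not depend on real key events:

# element and driver are assumed from the question's Selenium setup;
# this writes the value directly and fires an 'input' event.
driver.execute_script(
    "arguments[0].value = arguments[1];"
    "arguments[0].dispatchEvent(new Event('input', {bubbles: true}));",
    element,
    "1 2 3",
)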
Q: Why won't my program find/open my .csv file I'm trying to get my program to read my .csv file and when I run it, it says there is no such file. I converted an excel file of 10000 random numbers that range from 1,100 and I'm trying to run those numbers through my code. Am I getting this error from my .csv file or is it an error from my code? import csv import math import statistics filename = "data5.csv" # create array array = [] def calcstdDev(data): n = len(data) mean = sum(data) / n var = sum((x - mean)**2 for x in data) / n std_dev = var ** 0.5 return std_dev def ProcessData(data): print("\nThe Mean is: %.4f \n" % (statistics.mean(data))) print("\nThe Min is: %d \n" % (min(data))) print("\nThe Max is: %d \n" % (max(data))) print("\nThe Mode is: %d \n" % (statistics.mode(data))) print("\nThe StandDev: %.4f \n" % (statistics.stdev(data))) print("\nMy StandDev: %.4f \n" % (calcstdDev(data))) def main(): # reading csv file with open(filename, 'r') as csvfile: # creating a csv reader object csvreader = csv.reader(csvfile) # extracting each data row one by one for row in csvreader: value = int(row[0]) # get first element from line in file, convert to int array.append(value) # add value to array # print contents of array print("\n array\n") print(array) ProcessData(array) if __name__ == "__main__": # execute only if run as a script main() A: I'll not fix your problem, but I'll tell you how you can fix all problems of this sort by yourself in the future. Download Process Monitor and add a filter for python.exe, like so: Then start recording and look for data5.csv and see in which directory it looks for that file. If the file is not found, it will be displayed with the result "Name not found": Understand what a working directory is. The CSV file will be searched in the working directory if you didn't provide a full path. You can also output the working directory from your Python code: import os print(os.getcwd()) It should be the same directory as shown in Process Monitor. If you're running the program from the commend line, you can do like this: X:\> cd /d X:\wherever\the\csv\is\ X:\wherever\the\csv\is\> "X:\full\path\to\python3.exe" "X:\projects\python\mypython.py" That way the working directory is X:\wherever\the\csv\is\ and it will find the CSV file. Don't cd where python3.exe is. Don't cd where mypython.py is.
Why won't my program find/open my .csv file
I'm trying to get my program to read my .csv file and when I run it, it says there is no such file. I converted an Excel file of 10000 random numbers that range from 1 to 100 and I'm trying to run those numbers through my code. Am I getting this error from my .csv file or is it an error from my code? import csv import math import statistics filename = "data5.csv" # create array array = [] def calcstdDev(data): n = len(data) mean = sum(data) / n var = sum((x - mean)**2 for x in data) / n std_dev = var ** 0.5 return std_dev def ProcessData(data): print("\nThe Mean is: %.4f \n" % (statistics.mean(data))) print("\nThe Min is: %d \n" % (min(data))) print("\nThe Max is: %d \n" % (max(data))) print("\nThe Mode is: %d \n" % (statistics.mode(data))) print("\nThe StandDev: %.4f \n" % (statistics.stdev(data))) print("\nMy StandDev: %.4f \n" % (calcstdDev(data))) def main(): # reading csv file with open(filename, 'r') as csvfile: # creating a csv reader object csvreader = csv.reader(csvfile) # extracting each data row one by one for row in csvreader: value = int(row[0]) # get first element from line in file, convert to int array.append(value) # add value to array # print contents of array print("\n array\n") print(array) ProcessData(array) if __name__ == "__main__": # execute only if run as a script main()
[ "I'll not fix your problem, but I'll tell you how you can fix all problems of this sort by yourself in the future.\nDownload Process Monitor and add a filter for python.exe, like so:\n\nThen start recording and look for data5.csv and see in which directory it looks for that file.\n\nIf the file is not found, it will be displayed with the result \"Name not found\":\n\nUnderstand what a working directory is. The CSV file will be searched in the working directory if you didn't provide a full path.\nYou can also output the working directory from your Python code:\nimport os\nprint(os.getcwd())\n\nIt should be the same directory as shown in Process Monitor.\nIf you're running the program from the commend line, you can do like this:\nX:\\> cd /d X:\\wherever\\the\\csv\\is\\\nX:\\wherever\\the\\csv\\is\\> \"X:\\full\\path\\to\\python3.exe\" \"X:\\projects\\python\\mypython.py\"\n\nThat way the working directory is X:\\wherever\\the\\csv\\is\\ and it will find the CSV file.\nDon't cd where python3.exe is. Don't cd where mypython.py is.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074465568_python.txt
Q: python: structuring complete data I have the following dataframe: df = id date_diag date_medication medication_code 1 01-01-2000 03-01-2000 A 2 01-01-2000 02-01-2000 A 3 01-01-2000 04-01-2000 B 4 01-01-2000 05-01-2000 B I would like to create a table with the count of times a given medication was given after the date of the diagnoses: df = medication day1 day2 day3 day4 day5 day6 day7 A 0 1 1 0 0 0 0 B 0 0 0 1 1 0 0 A: here is one way to do it # create a temp fields, Seq to count the day of medication # and days difference b/w medication and diag # pivot # add prefix to column # and do cleanup out=(df.assign(seq=1, days=(pd.to_datetime(df['date_medication'], dayfirst=True).sub(pd.to_datetime(df['date_diag'], dayfirst=True))).dt.days + 1) .pivot(index='medication_code', columns='days', values='seq') .fillna(0) .add_prefix('day') .reset_index() .rename_axis(columns=None) ) out medication_code day2 day3 day4 day5 0 A 1.0 1.0 0.0 0.0 1 B 0.0 0.0 1.0 1.0 alternately, df['days']=pd.to_datetime(df['date_medication'], dayfirst=True).sub( pd.to_datetime(df['date_diag'], dayfirst=True)).dt.days + 1 out=pd.crosstab(df['medication_code'], df['days']).add_prefix('day').reset_index().rename_axis(columns=None) out medication_code day2 day3 day4 day5 0 A 1 1 0 0 1 B 0 0 1 1
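One detail the answers leave implicit: the desired table runs from day1 through day7 even when nothing happened on a given day. A sketch extending the crosstab idea with a reindex; the 7-day span and column names are assumptions read off the sample output, not taken from the accepted code.

import pandas as pd

# Assumes df has the date_diag / date_medication / medication_code columns
# from the question, with day-first dates.
df['days'] = (pd.to_datetime(df['date_medication'], dayfirst=True)
              - pd.to_datetime(df['date_diag'], dayfirst=True)).dt.days + 1

out = (pd.crosstab(df['medication_code'], df['days'])
         .reindex(columns=range(1, 8), fill_value=0)   # force day1..day7 to exist
         .add_prefix('day')
         .reset_index()
         .rename_axis(columns=None))
print(out)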
python: structuring complete data
I have the following dataframe: df = id date_diag date_medication medication_code 1 01-01-2000 03-01-2000 A 2 01-01-2000 02-01-2000 A 3 01-01-2000 04-01-2000 B 4 01-01-2000 05-01-2000 B I would like to create a table with the count of times a given medication was given after the date of the diagnoses: df = medication day1 day2 day3 day4 day5 day6 day7 A 0 1 1 0 0 0 0 B 0 0 0 1 1 0 0
[ "here is one way to do it\n# create a temp fields, Seq to count the day of medication\n# and days difference b/w medication and diag\n# pivot\n# add prefix to column\n# and do cleanup\n\n\nout=(df.assign(seq=1, \n days=(pd.to_datetime(df['date_medication'], dayfirst=True).sub(pd.to_datetime(df['date_diag'], dayfirst=True))).dt.days + 1)\n .pivot(index='medication_code', columns='days', values='seq')\n .fillna(0)\n .add_prefix('day')\n .reset_index()\n .rename_axis(columns=None)\n)\nout\n\n medication_code day2 day3 day4 day5\n0 A 1.0 1.0 0.0 0.0\n1 B 0.0 0.0 1.0 1.0\n\nalternately,\ndf['days']=pd.to_datetime(df['date_medication'], dayfirst=True).sub(\n pd.to_datetime(df['date_diag'], dayfirst=True)).dt.days + 1\nout=pd.crosstab(df['medication_code'], df['days']).add_prefix('day').reset_index().rename_axis(columns=None)\n\n\nout\n\n\nmedication_code day2 day3 day4 day5\n0 A 1 1 0 0\n1 B 0 0 1 1\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074464898_pandas_python.txt
Q: How to parse multiple dates from a block of text in Python (or another language) I have a string that has several date values in it, and I want to parse them all out. The string is natural language, so the best thing I've found so far is dateutil. Unfortunately, if a string has multiple date values in it, dateutil throws an error: >>> s = "I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928" >>> parse(s, fuzzy=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 697, in parse return DEFAULTPARSER.parse(timestr, **kwargs) File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 303, in parse raise ValueError, "unknown string format" ValueError: unknown string format Any thoughts on how to parse all dates from a long string? Ideally, a list would be created, but I can handle that myself if I need to. I'm using Python, but at this point, other languages are probably OK, if they get the job done. PS - I guess I could recursively split the input file in the middle and try, try again until it works, but it's a hell of a hack. A: Looking at it, the least hacky way would be to modify dateutil parser to have a fuzzy-multiple option. parser._parse takes your string, tokenizes it with _timelex and then compares the tokens with data defined in parserinfo. Here, if a token doesn't match anything in parserinfo, the parse will fail unless fuzzy is True. What I suggest you allow non-matches while you don't have any processed time tokens, then when you hit a non-match, process the parsed data at that point and start looking for time tokens again. Shouldn't take too much effort. Update While you're waiting for your patch to get rolled in... This is a little hacky, uses non-public functions in the library, but doesn't require modifying the library and is not trial-and-error. You might have false positives if you have any lone tokens that can be turned into floats. You might need to filter the results some more. from dateutil.parser import _timelex, parser a = "I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928" p = parser() info = p.info def timetoken(token): try: float(token) return True except ValueError: pass return any(f(token) for f in (info.jump,info.weekday,info.month,info.hms,info.ampm,info.pertain,info.utczone,info.tzoffset)) def timesplit(input_string): batch = [] for token in _timelex(input_string): if timetoken(token): if info.jump(token): continue batch.append(token) else: if batch: yield " ".join(batch) batch = [] if batch: yield " ".join(batch) for item in timesplit(a): print "Found:", item print "Parsed:", p.parse(item) Yields: Found: 2011 04 23 Parsed: 2011-04-23 00:00:00 Found: 29 July 1928 Parsed: 1928-07-29 00:00:00 Update for Dieter Dateutil 2.1 appears to be written for compatibility with python3 and uses a "compatability" library called six. Something isn't right with it and it's not treating str objects as text. This solution works with dateutil 2.1 if you pass strings as unicode or as file-like objects: from cStringIO import StringIO for item in timesplit(StringIO(a)): print "Found:", item print "Parsed:", p.parse(StringIO(item)) If you want to set option on the parserinfo, instantiate a parserinfo and pass it to the parser object. 
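For quick reuse, the datefinder route from the last answer compresses to a few lines. This assumes the third-party datefinder package is installed (pip install datefinder) and that its default parsing rules are acceptable.

import datefinder

s = ("I like peas on 2011-04-23, and I also like them on easter and "
     "my birthday, the 29th of July, 1928")
found = list(datefinder.find_dates(s))
print(found)
# Per the answer above, this yields datetime(2011, 4, 23) and datetime(1928, 7, 29).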
E.g: from dateutil.parser import _timelex, parser, parserinfo info = parserinfo(dayfirst=True) p = parser(info) A: While I was offline, I was bothered by the answer I posted here yesterday. Yes it did the job, but it was unnecessarily complicated and extremely inefficient. Here's the back-of-the-envelope edition that should do a much better job! import itertools from dateutil import parser jumpwords = set(parser.parserinfo.JUMP) keywords = set(kw.lower() for kw in itertools.chain( parser.parserinfo.UTCZONE, parser.parserinfo.PERTAIN, (x for s in parser.parserinfo.WEEKDAYS for x in s), (x for s in parser.parserinfo.MONTHS for x in s), (x for s in parser.parserinfo.HMS for x in s), (x for s in parser.parserinfo.AMPM for x in s), )) def parse_multiple(s): def is_valid_kw(s): try: # is it a number? float(s) return True except ValueError: return s.lower() in keywords def _split(s): kw_found = False tokens = parser._timelex.split(s) for i in xrange(len(tokens)): if tokens[i] in jumpwords: continue if not kw_found and is_valid_kw(tokens[i]): kw_found = True start = i elif kw_found and not is_valid_kw(tokens[i]): kw_found = False yield "".join(tokens[start:i]) # handle date at end of input str if kw_found: yield "".join(tokens[start:]) return [parser.parse(x) for x in _split(s)] Example usage: >>> parse_multiple("I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928") [datetime.datetime(2011, 4, 23, 0, 0), datetime.datetime(1928, 7, 29, 0, 0)] It's probably worth noting that its behaviour deviates slightly from dateutil.parser.parse when dealing with empty/unknown strings. Dateutil will return the current day, while parse_multiple returns an empty list which, IMHO, is what one would expect. >>> from dateutil import parser >>> parser.parse("") datetime.datetime(2011, 8, 12, 0, 0) >>> parse_multiple("") [] P.S. Just spotted MattH's updated answer which does something very similar. A: I think if you put the "words" in an array, it should do the trick. With that you can verify if it is a date or no, and put in a variable. Once you have the date you should use datetime library library. A: Why not writing a regex pattern covering all the possible forms in which a date can appear, and then launching the regex to explore the text ? I presume that there are not dozen of dozens of manners to express a date in a string. The only problem is to gather the maximum of date's expressions A: I see that there are some good answers already but adding this one as it worked better in a use case of mine while the above answers didn't. Using this library: https://datefinder.readthedocs.io/en/latest/index.html#module-datefinder import datefinder def DatesToList(x): dates = datefinder.find_dates(x) lists = [] for date in dates: lists.append(date) return (lists) dates = DateToList(s) Output: [datetime.datetime(2011, 4, 23, 0, 0), datetime.datetime(1928, 7, 29, 0, 0)]
How to parse multiple dates from a block of text in Python (or another language)
I have a string that has several date values in it, and I want to parse them all out. The string is natural language, so the best thing I've found so far is dateutil. Unfortunately, if a string has multiple date values in it, dateutil throws an error: >>> s = "I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928" >>> parse(s, fuzzy=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 697, in parse return DEFAULTPARSER.parse(timestr, **kwargs) File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 303, in parse raise ValueError, "unknown string format" ValueError: unknown string format Any thoughts on how to parse all dates from a long string? Ideally, a list would be created, but I can handle that myself if I need to. I'm using Python, but at this point, other languages are probably OK, if they get the job done. PS - I guess I could recursively split the input file in the middle and try, try again until it works, but it's a hell of a hack.
[ "Looking at it, the least hacky way would be to modify dateutil parser to have a fuzzy-multiple option.\nparser._parse takes your string, tokenizes it with _timelex and then compares the tokens with data defined in parserinfo.\nHere, if a token doesn't match anything in parserinfo, the parse will fail unless fuzzy is True.\nWhat I suggest you allow non-matches while you don't have any processed time tokens, then when you hit a non-match, process the parsed data at that point and start looking for time tokens again.\nShouldn't take too much effort.\n\nUpdate\nWhile you're waiting for your patch to get rolled in...\nThis is a little hacky, uses non-public functions in the library, but doesn't require modifying the library and is not trial-and-error. You might have false positives if you have any lone tokens that can be turned into floats. You might need to filter the results some more.\nfrom dateutil.parser import _timelex, parser\n\na = \"I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928\"\n\np = parser()\ninfo = p.info\n\ndef timetoken(token):\n try:\n float(token)\n return True\n except ValueError:\n pass\n return any(f(token) for f in (info.jump,info.weekday,info.month,info.hms,info.ampm,info.pertain,info.utczone,info.tzoffset))\n\ndef timesplit(input_string):\n batch = []\n for token in _timelex(input_string):\n if timetoken(token):\n if info.jump(token):\n continue\n batch.append(token)\n else:\n if batch:\n yield \" \".join(batch)\n batch = []\n if batch:\n yield \" \".join(batch)\n\nfor item in timesplit(a):\n print \"Found:\", item\n print \"Parsed:\", p.parse(item)\n\nYields:\nFound: 2011 04 23\nParsed: 2011-04-23 00:00:00\nFound: 29 July 1928\nParsed: 1928-07-29 00:00:00\n\nUpdate for Dieter\nDateutil 2.1 appears to be written for compatibility with python3 and uses a \"compatability\" library called six. Something isn't right with it and it's not treating str objects as text.\nThis solution works with dateutil 2.1 if you pass strings as unicode or as file-like objects:\nfrom cStringIO import StringIO\nfor item in timesplit(StringIO(a)):\n print \"Found:\", item\n print \"Parsed:\", p.parse(StringIO(item))\n\nIf you want to set option on the parserinfo, instantiate a parserinfo and pass it to the parser object. E.g:\nfrom dateutil.parser import _timelex, parser, parserinfo\ninfo = parserinfo(dayfirst=True)\np = parser(info)\n\n", "While I was offline, I was bothered by the answer I posted here yesterday. Yes it did the job, but it was unnecessarily complicated and extremely inefficient. \nHere's the back-of-the-envelope edition that should do a much better job! 
\nimport itertools\nfrom dateutil import parser\n\njumpwords = set(parser.parserinfo.JUMP)\nkeywords = set(kw.lower() for kw in itertools.chain(\n parser.parserinfo.UTCZONE,\n parser.parserinfo.PERTAIN,\n (x for s in parser.parserinfo.WEEKDAYS for x in s),\n (x for s in parser.parserinfo.MONTHS for x in s),\n (x for s in parser.parserinfo.HMS for x in s),\n (x for s in parser.parserinfo.AMPM for x in s),\n))\n\ndef parse_multiple(s):\n def is_valid_kw(s):\n try: # is it a number?\n float(s)\n return True\n except ValueError:\n return s.lower() in keywords\n\n def _split(s):\n kw_found = False\n tokens = parser._timelex.split(s)\n for i in xrange(len(tokens)):\n if tokens[i] in jumpwords:\n continue \n if not kw_found and is_valid_kw(tokens[i]):\n kw_found = True\n start = i\n elif kw_found and not is_valid_kw(tokens[i]):\n kw_found = False\n yield \"\".join(tokens[start:i])\n # handle date at end of input str\n if kw_found:\n yield \"\".join(tokens[start:])\n\n return [parser.parse(x) for x in _split(s)]\n\nExample usage:\n>>> parse_multiple(\"I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928\")\n[datetime.datetime(2011, 4, 23, 0, 0), datetime.datetime(1928, 7, 29, 0, 0)]\n\nIt's probably worth noting that its behaviour deviates slightly from dateutil.parser.parse when dealing with empty/unknown strings. Dateutil will return the current day, while parse_multiple returns an empty list which, IMHO, is what one would expect.\n>>> from dateutil import parser\n>>> parser.parse(\"\")\ndatetime.datetime(2011, 8, 12, 0, 0)\n>>> parse_multiple(\"\")\n[]\n\nP.S. Just spotted MattH's updated answer which does something very similar.\n", "I think if you put the \"words\" in an array, it should do the trick. With that you can verify if it is a date or no, and put in a variable.\nOnce you have the date you should use datetime library library.\n", "Why not writing a regex pattern covering all the possible forms in which a date can appear, and then launching the regex to explore the text ? I presume that there are not dozen of dozens of manners to express a date in a string.\nThe only problem is to gather the maximum of date's expressions\n", "I see that there are some good answers already but adding this one as it worked better in a use case of mine while the above answers didn't.\nUsing this library: https://datefinder.readthedocs.io/en/latest/index.html#module-datefinder\n\nimport datefinder\n\ndef DatesToList(x):\n \n dates = datefinder.find_dates(x)\n \n lists = []\n \n for date in dates:\n \n lists.append(date)\n \n return (lists)\n\n\ndates = DateToList(s)\n\n\n\nOutput:\n[datetime.datetime(2011, 4, 23, 0, 0), datetime.datetime(1928, 7, 29, 0, 0)]\n\n\n" ]
[ 18, 6, 0, 0, 0 ]
[]
[]
[ "parsing", "python", "python_dateutil" ]
stackoverflow_0007028689_parsing_python_python_dateutil.txt
Q: How can I get slopes from multiple columns in a df? I am using this code below to generate multiple scatter charts from a single dataframe. The first column is "Time" (x-axis for all charts) and the other are A,B,C... (y-axis for each chart). import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_excel("output.xlsx") columns = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T'] for i in enumerate(columns): plt.subplot(20,4, i[0]+1) x = 'Time' y = i[1] plt.scatter(x,y, data=df) plt.show() I was able to generate all charts but I would also like to have the slope for each one. I was thinking of something like this: from scipy import stats slope, intercept, r_value, p_value, std_err = stats.linregress(df['Time'], df['A']) But how can I scale this up to have the slope for each column? (A, B, C..) A: Consider: from scipy import stats import numpy as np import pandas as pd np.random.seed(42) columns = list('ABCDEFGHIJKLMNOPQRST') df = pd.DataFrame(np.random.rand(10, 21), columns = ['Time']+columns) lm = [] for c in columns: res =[c]+ list(stats.linregress(df['Time'], df[c])) lm.append(res) df_lm = pd.DataFrame(lm, columns = ['Category', 'slope', 'intercept', 'r_value', 'p_value', 'std_err'] ) df_lm This gives the results of all the columns in a dataframe: Category slope intercept r_value p_value std_err 0 A 0.094608 0.447455 0.066041 0.856163 0.505383 1 B -0.566963 0.698525 -0.464790 0.175910 0.381859 2 C 0.320227 0.407785 0.246870 0.491698 0.444417 3 D -0.171888 0.534895 -0.139052 0.701638 0.432797 4 E -0.238098 0.337545 -0.387208 0.268959 0.200444 5 F -0.467824 0.542608 -0.369604 0.293181 0.415820
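The same per-column regression can also be written without an explicit loop. A sketch, assuming df and the columns list from the question are already defined and that scipy is recent enough for linregress to expose a .slope attribute.

from scipy import stats

# One slope per y-column, as a Series indexed by column name.
slopes = df[columns].apply(lambda col: stats.linregress(df['Time'], col).slope)
print(slopes)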
How can I get slopes from multiple columns in a df?
I am using this code below to generate multiple scatter charts from a single dataframe. The first column is "Time" (x-axis for all charts) and the other are A,B,C... (y-axis for each chart). import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_excel("output.xlsx") columns = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T'] for i in enumerate(columns): plt.subplot(20,4, i[0]+1) x = 'Time' y = i[1] plt.scatter(x,y, data=df) plt.show() I was able to generate all charts but I would also like to have the slope for each one. I was thinking of something like this: from scipy import stats slope, intercept, r_value, p_value, std_err = stats.linregress(df['Time'], df['A']) But how can I scale this up to have the slope for each column? (A, B, C..)
[ "Consider:\nfrom scipy import stats\nimport numpy as np\nimport pandas as pd\n\nnp.random.seed(42)\ncolumns = list('ABCDEFGHIJKLMNOPQRST')\ndf = pd.DataFrame(np.random.rand(10, 21), columns = ['Time']+columns)\nlm = []\nfor c in columns:\n res =[c]+ list(stats.linregress(df['Time'], df[c]))\n lm.append(res)\ndf_lm = pd.DataFrame(lm, columns = ['Category', 'slope', 'intercept', 'r_value', 'p_value', 'std_err'] )\ndf_lm\n\nThis gives the results of all the columns in a dataframe:\n Category slope intercept r_value p_value std_err\n0 A 0.094608 0.447455 0.066041 0.856163 0.505383\n1 B -0.566963 0.698525 -0.464790 0.175910 0.381859\n2 C 0.320227 0.407785 0.246870 0.491698 0.444417\n3 D -0.171888 0.534895 -0.139052 0.701638 0.432797\n4 E -0.238098 0.337545 -0.387208 0.268959 0.200444\n5 F -0.467824 0.542608 -0.369604 0.293181 0.415820\n\n" ]
[ 0 ]
[]
[]
[ "linear_regression", "python", "scipy" ]
stackoverflow_0074436779_linear_regression_python_scipy.txt
Q: AttributeError in async kivy program could you please help me fix this program so that it won't show me an Attribute error. I am a newbie to Async programming and I have barely any idea of what is going on. Here is the code: """Example shows the recommended way of how to run Kivy with the Python built in asyncio event loop as just another async coroutine. """ import asyncio from kivy.app import App from kivy.lang.builder import Builder from kivy.uix.widget import Widget Builder.load_string('''BoxLayout: orientation: 'vertical' BoxLayout: ToggleButton: id: btn1 group: 'a' text: 'Sleeping' allow_no_selection: False on_state: if self.state == 'down': label.status = self.text ToggleButton: id: btn2 group: 'a' text: 'Swimming' allow_no_selection: False on_state: if self.state == 'down': label.status = self.text ToggleButton: id: btn3 group: 'a' text: 'Reading' allow_no_selection: False state: 'down' on_state: if self.state == 'down': label.status = self.text Label: id: label status: 'Reading' text: 'Beach status is "{}"'.format(self.status)''') class MainLayout(Widget): other_task = None def app_func(self): """This will run both methods asynchronously and then block until they are finished """ self.other_task = asyncio.ensure_future(self.waste_time_freely()) async def run_wrapper(): # we don't actually need to set asyncio as the lib because it is # the default, but it doesn't hurt to be explicit await self.async_run(async_lib='asyncio') print('App done') self.other_task.cancel() return asyncio.gather(run_wrapper(), self.other_task) async def waste_time_freely(self): """ This method is also run by the asyncio loop and periodically prints something. """ try: i = 0 while True: if self.root is not None: status = self.root.ids.label.status print('{} on the beach'.format(status)) # get some sleep if self.root.ids.btn1.state != 'down' and i >= 2: i = 0 print('Yawn, getting tired. Going to sleep') self.root.ids.btn1.trigger_action() i += 1 await asyncio.sleep(2) except asyncio.CancelledError as e: print('Wasting time was canceled', e) finally: # when canceled, print that it finished print('Done wasting time') class AsyncApp(App): def build(self): return MainLayout() if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(MainLayout().app_func()) loop.close() Here is the error shown Can you please fix the Attribute error for me and also how do I get rid of all the deprecation warnings? Thank you. A: just in case you want a means to an end, and not specifically a correction to asyncio, I took the liberty of writing something the way I know: Threading. as I ran the code it did not display the buttons nicely so I changed the top level to a BoxLayout and in the build string named the top level according to the top level Class Name 'MainLayout' Kivy also provides a way to schedule tasks with kivy.clock and I occasionally use this in my kivy applications but more commonly use threads. 
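The traceback is only shown as an image, but one plausible reading is that async_run and root are being looked up on the Widget subclass rather than on the App. Below is a minimal, untested sketch of the asyncio route with those pieces moved onto the App class, mirroring the official Kivy asyncio example; it assumes the MainLayout widget and the kv string from the question are still defined.

import asyncio
from kivy.app import App

class AsyncApp(App):
    other_task = None

    def build(self):
        return MainLayout()   # the widget/kv layout from the question

    def app_func(self):
        self.other_task = asyncio.ensure_future(self.waste_time_freely())

        async def run_wrapper():
            # App (not Widget) provides async_run and self.root
            await self.async_run(async_lib='asyncio')
            self.other_task.cancel()

        return asyncio.gather(run_wrapper(), self.other_task)

    async def waste_time_freely(self):
        try:
            while True:
                if self.root is not None:
                    print('Beach status:', self.root.ids.label.status)
                await asyncio.sleep(2)
        except asyncio.CancelledError:
            print('Done wasting time')

if __name__ == '__main__':
    # get_event_loop() still works but triggers DeprecationWarnings on newer
    # Pythons; those warnings come from asyncio, not from Kivy itself.
    loop = asyncio.get_event_loop()
    loop.run_until_complete(AsyncApp().app_func())
    loop.close()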
"""Example shows the recommended way of how to run Kivy with the Python built in Threading """ import time import threading from kivy.app import App from kivy.lang.builder import Builder from kivy.uix.boxlayout import BoxLayout Builder.load_string('''<MainLayout>: orientation: 'vertical' BoxLayout: orientation: 'vertical' BoxLayout: orientation: 'vertical' ToggleButton: id: btn1 group: 'a' text: 'Sleeping' allow_no_selection: False on_state: if self.state == 'down': label.status = self.text ToggleButton: id: btn2 group: 'a' text: 'Swimming' allow_no_selection: False on_press: root.kv_swim(self, my_argument = 'anything') on_state: if self.state == 'down': label.status = self.text ToggleButton: id: btn3 group: 'a' text: 'Reading' allow_no_selection: False state: 'down' on_press: root.kv_read(self, my_argument = 'anything') on_state: if self.state == 'down': label.status = self.text Label: id: label status: 'Reading' text: 'Beach status is "{}"'.format(self.status)''') class MainLayout(BoxLayout): other_task = None started_reading = False started_swimming = False def waste_time(self, task: str): while True: print(f"the task {task} is {time.time():.1f}") time.sleep(1.2) def kv_read(self, my_button, my_argument: str = "default_value"): print(f"you can send information from the button {my_argument}") if not self.started_reading: threading.Thread(target=self.waste_time, args=("read", ), daemon=True).start() self.started_reading = True else: print("don't start again") def kv_swim(self, my_button, my_argument: str = "default_value"): print(f"you can send information from the button {my_argument}") if not self.started_swimming: threading.Thread(target=self.waste_time, args=("swim", ), daemon=True).start() self.started_swimming = True else: print("don't start again") class ThreadedApp(App): def build(self): return MainLayout() if __name__ == '__main__': mine = ThreadedApp() mine.run()
AttributeError in async kivy program
could you please help me fix this program so that it won't show me an Attribute error. I am a newbie to Async programming and I have barely any idea of what is going on. Here is the code: """Example shows the recommended way of how to run Kivy with the Python built in asyncio event loop as just another async coroutine. """ import asyncio from kivy.app import App from kivy.lang.builder import Builder from kivy.uix.widget import Widget Builder.load_string('''BoxLayout: orientation: 'vertical' BoxLayout: ToggleButton: id: btn1 group: 'a' text: 'Sleeping' allow_no_selection: False on_state: if self.state == 'down': label.status = self.text ToggleButton: id: btn2 group: 'a' text: 'Swimming' allow_no_selection: False on_state: if self.state == 'down': label.status = self.text ToggleButton: id: btn3 group: 'a' text: 'Reading' allow_no_selection: False state: 'down' on_state: if self.state == 'down': label.status = self.text Label: id: label status: 'Reading' text: 'Beach status is "{}"'.format(self.status)''') class MainLayout(Widget): other_task = None def app_func(self): """This will run both methods asynchronously and then block until they are finished """ self.other_task = asyncio.ensure_future(self.waste_time_freely()) async def run_wrapper(): # we don't actually need to set asyncio as the lib because it is # the default, but it doesn't hurt to be explicit await self.async_run(async_lib='asyncio') print('App done') self.other_task.cancel() return asyncio.gather(run_wrapper(), self.other_task) async def waste_time_freely(self): """ This method is also run by the asyncio loop and periodically prints something. """ try: i = 0 while True: if self.root is not None: status = self.root.ids.label.status print('{} on the beach'.format(status)) # get some sleep if self.root.ids.btn1.state != 'down' and i >= 2: i = 0 print('Yawn, getting tired. Going to sleep') self.root.ids.btn1.trigger_action() i += 1 await asyncio.sleep(2) except asyncio.CancelledError as e: print('Wasting time was canceled', e) finally: # when canceled, print that it finished print('Done wasting time') class AsyncApp(App): def build(self): return MainLayout() if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(MainLayout().app_func()) loop.close() Here is the error shown Can you please fix the Attribute error for me and also how do I get rid of all the deprecation warnings? Thank you.
[ "just in case you want a means to an end, and not specifically a correction to asyncio, I took the liberty of writing something the way I know: Threading.\nas I ran the code it did not display the buttons nicely so I changed the top level to a BoxLayout and in the build string named the top level according to the top level Class Name 'MainLayout'\nKivy also provides a way to schedule tasks with kivy.clock and I occasionally use this in my kivy applications but more commonly use threads.\n\"\"\"Example shows the recommended way of how to run Kivy with the Python built\nin Threading\n\"\"\"\nimport time\nimport threading\nfrom kivy.app import App\nfrom kivy.lang.builder import Builder\nfrom kivy.uix.boxlayout import BoxLayout\n\n\nBuilder.load_string('''<MainLayout>:\n orientation: 'vertical'\n BoxLayout:\n orientation: 'vertical'\n BoxLayout:\n orientation: 'vertical'\n ToggleButton:\n id: btn1\n group: 'a'\n text: 'Sleeping'\n allow_no_selection: False\n on_state: if self.state == 'down': label.status = self.text\n ToggleButton:\n id: btn2\n group: 'a'\n text: 'Swimming'\n allow_no_selection: False\n on_press: root.kv_swim(self, my_argument = 'anything')\n on_state: if self.state == 'down': label.status = self.text\n ToggleButton:\n id: btn3\n group: 'a'\n text: 'Reading'\n allow_no_selection: False\n state: 'down'\n on_press: root.kv_read(self, my_argument = 'anything')\n on_state: if self.state == 'down': label.status = self.text\n Label:\n id: label\n status: 'Reading'\n text: 'Beach status is \"{}\"'.format(self.status)''')\n\n\nclass MainLayout(BoxLayout):\n\n other_task = None\n started_reading = False\n started_swimming = False\n\n def waste_time(self, task: str):\n while True:\n print(f\"the task {task} is {time.time():.1f}\")\n time.sleep(1.2)\n\n def kv_read(self, my_button, my_argument: str = \"default_value\"):\n print(f\"you can send information from the button {my_argument}\")\n if not self.started_reading:\n threading.Thread(target=self.waste_time, args=(\"read\", ), daemon=True).start()\n self.started_reading = True\n else:\n print(\"don't start again\")\n\n def kv_swim(self, my_button, my_argument: str = \"default_value\"):\n print(f\"you can send information from the button {my_argument}\")\n if not self.started_swimming:\n threading.Thread(target=self.waste_time, args=(\"swim\", ), daemon=True).start()\n self.started_swimming = True\n else:\n print(\"don't start again\")\n\n\nclass ThreadedApp(App):\n\n def build(self):\n return MainLayout()\n\n\nif __name__ == '__main__':\n mine = ThreadedApp()\n mine.run()\n\n" ]
[ 1 ]
[]
[]
[ "asynchronous", "attributeerror", "kivy", "python", "python_asyncio" ]
stackoverflow_0074445708_asynchronous_attributeerror_kivy_python_python_asyncio.txt
Q: lxml insert new node in tree, with parent's contents inside i have this tree : <TEI> <teiHeader/> <text> <body> <div type="chapter"> <p rend="b"><pb n="1"/>lorem ipsum...</p> <p rend="b">lorem pb n="2"/> ipsum2...</p> <p>lorem ipsum3...</p> </div> <div type="chapter"> <p>lorem ipsum4...</p> <p rend="b">lorem ipsum5...</p> <p rend="b">pb n="3"/> lorem ipsum6...</p> </div> </body> </text> </TEI> and i would like to change all <p rend="b">lorem ipsum...</p> into <p><hi rend="b">lorem ipsum...</hi></p> problem is : all <pb n="X"/> tags are removed. i tried this (root = xml tree above) : parser = etree.XMLParser(ns_clean=True, remove_blank_text=True) root = etree.fromstring(root, parser) for item in root.findall(".//p[@rend='b']"): hi = etree.SubElement(item, "hi", rend=font_variant[variant]) hi.text = ''.join(item.itertext()) print(etree.tostring(root, pretty_print=True, xml_declaration=True)) and i get, for instance for the first <p/> : <p><pb n="1"/>lorem ipsum...<hi rend="b"> lorem ipsum...</hi></p> the <pb n="1"/> is missing. Could you help me out? A: If I understand you correctly,you are probably looking for something like this: for p in root.xpath('//p[@rend="b"]'): #clone the old <p> old = etree.fromstring(etree.tostring(p)) #change its name old.tag = "hi" #create a new element new = etree.fromstring('<p/>') #append the clone to the new element new.append(old) new.tail ="\n" #delete the old <p> and replace it with the new element p.getparent().replace(p, new)
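A variation on the accepted idea that avoids cloning: move each matching <p>'s text and children into a new <hi> element in place, so inline tags such as <pb/> survive. An untested outline, assuming root is the parsed tree from the question.

from lxml import etree

for p in root.findall(".//p[@rend='b']"):
    hi = etree.Element("hi", rend="b")
    hi.text = p.text                  # keep the leading text
    for child in list(p):             # appending MOVES each child (e.g. <pb/>) into <hi>
        hi.append(child)
    p.text = None
    del p.attrib["rend"]              # the rend attribute now lives on <hi>
    p.append(hi)

print(etree.tostring(root, pretty_print=True).decode())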
lxml insert new node in tree, with parent's contents inside
I have this tree: <TEI> <teiHeader/> <text> <body> <div type="chapter"> <p rend="b"><pb n="1"/>lorem ipsum...</p> <p rend="b">lorem <pb n="2"/> ipsum2...</p> <p>lorem ipsum3...</p> </div> <div type="chapter"> <p>lorem ipsum4...</p> <p rend="b">lorem ipsum5...</p> <p rend="b"><pb n="3"/> lorem ipsum6...</p> </div> </body> </text> </TEI> and I would like to change all <p rend="b">lorem ipsum...</p> into <p><hi rend="b">lorem ipsum...</hi></p>. The problem is: all <pb n="X"/> tags are removed. I tried this (root = the XML tree above): parser = etree.XMLParser(ns_clean=True, remove_blank_text=True) root = etree.fromstring(root, parser) for item in root.findall(".//p[@rend='b']"): hi = etree.SubElement(item, "hi", rend=font_variant[variant]) hi.text = ''.join(item.itertext()) print(etree.tostring(root, pretty_print=True, xml_declaration=True)) and I get, for instance for the first <p/>: <p><pb n="1"/>lorem ipsum...<hi rend="b"> lorem ipsum...</hi></p> and the <pb n="1"/> is missing. Could you help me out?
[ "If I understand you correctly,you are probably looking for something like this:\nfor p in root.xpath('//p[@rend=\"b\"]'):\n #clone the old <p>\n old = etree.fromstring(etree.tostring(p))\n #change its name\n old.tag = \"hi\"\n #create a new element\n new = etree.fromstring('<p/>') \n #append the clone to the new element\n new.append(old)\n new.tail =\"\\n\"\n #delete the old <p> and replace it with the new element\n p.getparent().replace(p, new)\n\n" ]
[ 1 ]
[]
[]
[ "lxml", "python", "xml" ]
stackoverflow_0074464493_lxml_python_xml.txt
Q: Using df.apply() to a time column that indicates times at every 2 seconds in pandas I am new to this data science world and trying to understand some basic pandas examples. I have a pandas data frame that I would like to create a new column and add some conditional values as below: It will include yes at every 2 seconds. Otherwise include no. Here is an example: This is my original data frame. id name time 0 1 name1 260.123 1 2 name2 261.323 2 3 name3 261.342 3 4 name4 261.567 4 5 name5 262.123 ... The new data frame will be like this: id name time time_delta 0 1 name1 260.123 yes 1 2 name2 261.323 no 2 3 name3 261.342 no 3 4 name4 261.567 no 4 5 name5 262.123 yes 5 6 name6 262.345 yes 6 7 name7 264.876 yes 7 8 name8 265.234 no 8 9 name9 266.234 yes 9 10 name10 267.234 no ... The code that I was using is: df['time_delta'] = df['time'].apply(apply_test) And the actual code of the function: def apply_test(num): prev = num if round(num) != prev + 2: prev = prev return "no" else: prev = num return "yes" Please note that the time column has decimals and no patterns. The result came as all no since the prev is assigned to the next number at each iteration. This was the way I thought it would be. Not sure if there are any other better ways. I would appreciate any help. UPDATE: Please note that the time column has decimals and the decimal values have no value in this case. For instance, time=234.xxx will be considered as 234 seconds. Therefore, the next 2 second point is 236. The data frame has multiple second value if we round it down. In this case, all of them have to be marked as yes. Please refer to the updates result data frame as an example. A: You can use: import numpy as np N = 2 # time step # define bins every N seconds bins = np.arange(np.floor(df['time'].min()), df['time'].max()+N, 2) # get the index of the first row per group idx = df.groupby(pd.cut(df['time'], bins))['time'].idxmin() # assign "yes" to the first else "no" df['timedelta'] = np.where(df.index.isin(idx), 'yes', 'no') Output: id name time time_delta 0 1 name1 260.123 yes 1 2 name2 260.323 no 2 3 name3 261.342 no 3 4 name4 261.567 no 4 5 name5 262.123 yes 5 6 name6 263.345 no 6 7 name7 264.876 yes A: You can check when the remaining of the cumulative sum of the diff changes value after divided by 2, that is when it enters a new segment of length 2: remaining = (df['time'].diff().cumsum() // 2).fillna(0) df['time_delta'] = np.where((~remaining.duplicated()), 'yes', 'no')
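Another compact reading of the requirement, assuming "every 2 seconds" is measured in whole seconds from the first timestamp and only the first row in each 2-second window is flagged; this is a sketch, not the only valid interpretation of the sample output.

import numpy as np

# Whole-second window index relative to the first reading; the first row in
# each window gets "yes", the rest get "no".
window = (np.floor(df['time']) - np.floor(df['time'].iloc[0])) // 2
df['time_delta'] = np.where(~window.duplicated(), 'yes', 'no')
print(df)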
Using df.apply() on a time column to flag rows at every 2 seconds in pandas
I am new to this data science world and trying to understand some basic pandas examples. I have a pandas data frame that I would like to create a new column and add some conditional values as below: It will include yes at every 2 seconds. Otherwise include no. Here is an example: This is my original data frame. id name time 0 1 name1 260.123 1 2 name2 261.323 2 3 name3 261.342 3 4 name4 261.567 4 5 name5 262.123 ... The new data frame will be like this: id name time time_delta 0 1 name1 260.123 yes 1 2 name2 261.323 no 2 3 name3 261.342 no 3 4 name4 261.567 no 4 5 name5 262.123 yes 5 6 name6 262.345 yes 6 7 name7 264.876 yes 7 8 name8 265.234 no 8 9 name9 266.234 yes 9 10 name10 267.234 no ... The code that I was using is: df['time_delta'] = df['time'].apply(apply_test) And the actual code of the function: def apply_test(num): prev = num if round(num) != prev + 2: prev = prev return "no" else: prev = num return "yes" Please note that the time column has decimals and no patterns. The result came as all no since the prev is assigned to the next number at each iteration. This was the way I thought it would be. Not sure if there are any other better ways. I would appreciate any help. UPDATE: Please note that the time column has decimals and the decimal values have no value in this case. For instance, time=234.xxx will be considered as 234 seconds. Therefore, the next 2 second point is 236. The data frame has multiple second value if we round it down. In this case, all of them have to be marked as yes. Please refer to the updates result data frame as an example.
[ "You can use:\nimport numpy as np\n\nN = 2 # time step\n\n# define bins every N seconds\nbins = np.arange(np.floor(df['time'].min()), df['time'].max()+N, 2)\n# get the index of the first row per group\nidx = df.groupby(pd.cut(df['time'], bins))['time'].idxmin()\n\n# assign \"yes\" to the first else \"no\"\ndf['timedelta'] = np.where(df.index.isin(idx), 'yes', 'no')\n\nOutput:\n id name time time_delta\n0 1 name1 260.123 yes\n1 2 name2 260.323 no\n2 3 name3 261.342 no\n3 4 name4 261.567 no\n4 5 name5 262.123 yes\n5 6 name6 263.345 no\n6 7 name7 264.876 yes\n\n", "You can check when the remaining of the cumulative sum of the diff changes value after divided by 2, that is when it enters a new segment of length 2:\nremaining = (df['time'].diff().cumsum() // 2).fillna(0)\ndf['time_delta'] = np.where((~remaining.duplicated()), 'yes', 'no')\n\n" ]
[ 2, 2 ]
[]
[]
[ "data_science", "pandas", "python" ]
stackoverflow_0074465477_data_science_pandas_python.txt
Q: AWS ParamValidationError: Parameter validation failed: I have set up a trigger/lambda to upload into DynamoDB however i get the following error when uploading..not sure what is going wrong. So far i have just created a blank dDB table with the primary key of "PlayerWeekID" as string but nothing else. Is this an issue because DDB isnt reading in the data types? Do I need to specify these in the Lamdda or set up in DDB before running the code? Update: This is the python code: #change dataframe to json sdl_fpl_data = dffinal.to_json(orient='records', lines=True) s3 = boto3.resource('s3') obj = s3.Object('bucket-name','sdl_fpl_data.json') obj.put(Body=json.dumps(sdl_fpl_data)) Lambda: import boto3 import json s3_client = boto3.client('s3') dynamodb = boto3.resource('dynamodb') def lambda_handler(event, context): bucket = event['Records'][0]['s3']['bucket']['name'] json_file_name = event['Records'][0]['s3']['object']['key'] json_object = s3_client.get_object(Bucket=bucket,Key=json_file_name) jsonFileReader = json_object['Body'].read() jsonDict = json.loads(jsonFileReader) table = dynamodb.Table('my-table') table.put_item(Item=jsonDict) [ERROR] ParamValidationError: Parameter validation failed: Invalid type for parameter Item, value: { "GW": "GW1", "OR": "2,149,169", "GWP": 66, "PB": 3, "TM": 0, "TC": 0, "£": 100, "Manager": "XXXXX", "Team Name": "XXXXXX", "Player_Number": "372", "TP": 66, "PlayerWeekID": "372GW1" } , type: <class 'str'>, valid types: <class 'dict'> Traceback (most recent call last): File "/var/task/lambda_function.py", line 16, in lambda_handler table.put_item(Item=jsonDict) Output of jsonDict A: Can you share the output of your variable jsonDict. DynamoDB needs a JSON object as payload: {} From what I understand it looks like you're trying to save a list []. Ensure you are saving an object which contains the keys of your table and you should have no issue. Working example: import boto3 dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('test') item = { "GW": "GW1", "OR": "2,149,169", "GWP": 66, "PB": 3, "TM": 0, "TC": 0, "£": 100, "Manager": "XXXXX", "Team Name": "XXXXXX", "Player_Number": "372", "TP": 66, "PlayerWeekID": "372GW1" } try: res = table.put_item(Item=item) print(res) except Exception as e: print(e)
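Context for the error: DataFrame.to_json() already returns a JSON string, so wrapping it in json.dumps() double-encodes it, and json.loads() in the Lambda then yields a str instead of a dict. A hedged sketch of both halves, reusing the bucket, table, and variable names from the question (untested).

import json
import boto3

# Writer side: store the JSON array as-is, without a second json.dumps().
sdl_fpl_data = dffinal.to_json(orient='records')   # already a JSON *array* string
s3 = boto3.resource('s3')
s3.Object('bucket-name', 'sdl_fpl_data.json').put(Body=sdl_fpl_data)

# Lambda side (inside the question's lambda_handler): the decoded payload is
# now a list of dicts, so each row can be written individually.
records = json.loads(json_object['Body'].read())
for item in records:
    table.put_item(Item=item)   # item is a dict, which put_item expects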
AWS ParamValidationError: Parameter validation failed:
I have set up a trigger/lambda to upload into DynamoDB however i get the following error when uploading..not sure what is going wrong. So far i have just created a blank dDB table with the primary key of "PlayerWeekID" as string but nothing else. Is this an issue because DDB isnt reading in the data types? Do I need to specify these in the Lamdda or set up in DDB before running the code? Update: This is the python code: #change dataframe to json sdl_fpl_data = dffinal.to_json(orient='records', lines=True) s3 = boto3.resource('s3') obj = s3.Object('bucket-name','sdl_fpl_data.json') obj.put(Body=json.dumps(sdl_fpl_data)) Lambda: import boto3 import json s3_client = boto3.client('s3') dynamodb = boto3.resource('dynamodb') def lambda_handler(event, context): bucket = event['Records'][0]['s3']['bucket']['name'] json_file_name = event['Records'][0]['s3']['object']['key'] json_object = s3_client.get_object(Bucket=bucket,Key=json_file_name) jsonFileReader = json_object['Body'].read() jsonDict = json.loads(jsonFileReader) table = dynamodb.Table('my-table') table.put_item(Item=jsonDict) [ERROR] ParamValidationError: Parameter validation failed: Invalid type for parameter Item, value: { "GW": "GW1", "OR": "2,149,169", "GWP": 66, "PB": 3, "TM": 0, "TC": 0, "£": 100, "Manager": "XXXXX", "Team Name": "XXXXXX", "Player_Number": "372", "TP": 66, "PlayerWeekID": "372GW1" } , type: <class 'str'>, valid types: <class 'dict'> Traceback (most recent call last): File "/var/task/lambda_function.py", line 16, in lambda_handler table.put_item(Item=jsonDict) Output of jsonDict
[ "Can you share the output of your variable jsonDict.\nDynamoDB needs a JSON object as payload: {}\nFrom what I understand it looks like you're trying to save a list [].\nEnsure you are saving an object which contains the keys of your table and you should have no issue.\nWorking example:\nimport boto3\ndynamodb = boto3.resource('dynamodb')\ntable = dynamodb.Table('test')\n \n\nitem = {\n \"GW\": \"GW1\",\n \"OR\": \"2,149,169\",\n \"GWP\": 66,\n \"PB\": 3,\n \"TM\": 0,\n \"TC\": 0,\n \"£\": 100,\n \"Manager\": \"XXXXX\",\n \"Team Name\": \"XXXXXX\",\n \"Player_Number\": \"372\",\n \"TP\": 66,\n \"PlayerWeekID\": \"372GW1\"\n}\ntry: \n res = table.put_item(Item=item)\n print(res)\nexcept Exception as e:\n print(e)\n\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "amazon_web_services", "aws_lambda", "python" ]
stackoverflow_0074463628_amazon_dynamodb_amazon_web_services_aws_lambda_python.txt
Q: ImportError: No module named PytQt5 following are my python, qt and sip versions root@thura:~# python -V Python 2.7.3 root@thura:~# qmake --version QMake version 3.0 Using Qt version 5.0.2 in /usr/lib/i386-linux-gnu root@thura:~# sip -V 4.15.3 I tried to import the PyQt5 by following by this from PyQt5.QtWidgets import QtGui, QtCore I got the following error ImportError: No module named PyQt5.QtWidgets How can I solve this error. Updated ===================== When I tried to PyQt4, I got following error. from PyQt4.QtCore import pyqtSlot as Slot RuntimeError: the sip module implements API v10.0 to v10.1 but the PyQt4.QtCore module requires API v8.1 Updated 2013-12-20 ====================================== 1) download sip-4.15.3.tar.gz from here 2) extract sip-4.15.3.tar.gz 3) copy sip-4.15.3 to /home/thura 4) type "cd /home/thura/sip-4.15.3" 5) type "python configure.py", press enter, follow the instructions (type yes and press enter) 6) type "make", press enter and type "make install", press enter 7) download PyQt-gpl-5.1.1.tar.gz from here 8) extract PyQt-gpl-5.1.1.tar.gz 9) copy PyQt-gpl-5.1.1 folder to /home/thura folder. 10) type "cd /home/thura/PyQt-gpl-5.1.1" 11) type "python configure.py", press enter, following the instructions (type yes and press enter) 12)type "make", press enter and type "make install", press enter update 2013-12-20 ===================== After redo it again. I got the following error make[2]: Entering directory `/home/thura/PyQt/qpy/QtDBus' make[2]: Nothing to be done for `install'. make[2]: Leaving directory `/home/thura/PyQt/qpy/QtDBus' make[1]: Leaving directory `/home/thura/PyQt/qpy' cd QtCore/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /home/thura/PyQt/QtCore/QtCore.pro -o Makefile ) && make -f Makefile install make[1]: Entering directory `/home/thura/PyQt/QtCore' g++ -c -pipe -O2 -Wall -W -D_REENTRANT -fPIC -DSIP_PROTECTED_IS_PUBLIC -Dprotected=public -DQT_NO_DEBUG -DQT_PLUGIN -DQT_CORE_LIB -I/usr/share/qt5/mkspecs/linux-g++ -I. -I/usr/local/include/python2.7 -I../qpy/QtCore -I/usr/include/qt5 -I/usr/include/qt5/QtCore -I. -o sipQtCoreQtWindowStates.o sipQtCoreQtWindowStates.cpp In file included from sipQtCoreQtWindowStates.cpp:24:0: sipAPIQtCore.h:28:17: fatal error: sip.h: No such file or directory compilation terminated. make[1]: *** [sipQtCoreQtWindowStates.o] Error 1 make[1]: Leaving directory `/home/thura/PyQt/QtCore' make: *** [sub-QtCore-install_subtargets-ordered] Error 2 A: If you are on ubuntu, just install pyqt5 with apt-get command: sudo apt-get install python3-pyqt5 # for python3 or sudo apt-get install python-pyqt5 # for python2 However, on Ubuntu 14.04 the python-pyqt5 package is left out [source] and need to be installed manually [source] A: pip install pyqt5 for python3 for ubuntu A: this can be solved under MacOS X by installing pyqt with brew brew install pyqt A: After getting the help from @Blender, @ekhumoro and @Dan, I understand the Linux and Python more than before. Thank you. I got the an idea by @ekhumoro, it is I didn't install PyQt5 correctly. So I delete PyQt5 folder and download again. And redo everything from very start. After redoing, I got the error as my last update at my question. So, when I search at stack, I got the following solution from here sudo ln -s /usr/include/python2.7 /usr/local/include/python2.7 And then, I did "sudo make" and "sudo make install" step by step. After "sudo make install", I got the following error. But I ignored it and I created a simple design with qt designer. 
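Separately from the install problem, the import line in the question would still fail because QtGui and QtCore are top-level PyQt5 modules, not members of QtWidgets. A small sanity check, assuming PyQt5 ends up installed:

from PyQt5 import QtCore, QtGui, QtWidgets   # correct module layout

app = QtWidgets.QApplication([])
label = QtWidgets.QLabel("PyQt5 import works: Qt " + QtCore.QT_VERSION_STR)
label.show()
app.exec_()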
And I converted it into python file by pyuic5. Everything are going well. install -m 755 -p /home/thura/PyQt/pyuic5 /usr/bin/ strip /usr/bin/pyuic5 strip:/usr/bin/pyuic5: File format not recognized make: [install_pyuic5] Error 1 (ignored) A: This probably means that python doesn't know where PyQt5 is located. To check, go into the interactive terminal and type: import sys print sys.path What you probably need to do is add the directory that contains the PyQt5 module to your PYTHONPATH environment variable. If you use bash, here's how: Type the following into your shell, and add it to the end of the file ~/.bashrc export PYTHONPATH=/path/to/PyQt5/directory:$PYTHONPATH where /path/to/PyQt5/directory is the path to the folder where the PyQt5 library is located. A: On windows, "pip install pyqt5", solved it for me. A: You can try to Open the anaconda-prompt with Admininistator user option; conda install pyqt=5 A: It may caused by different python version, check which version of python you are using, for me the global version was 2.7 and the python version installed in the virtual environment was 3.8, so there was difference, so I run the main.py inside the environment and it works. A: How to install PyQt5 in Python3 Just installing it did not work for me. I had to uninstall it first, then reinstall it: # upgrade pip python3 -m pip install --upgrade pip # uninstall python3 -m pip uninstall PyQt5 python3 -m pip uninstall PyQt5-sip python3 -m pip uninstall PyQtWebEngine # reinstall python3 -m pip install PyQt5 python3 -m pip install PyQt5-sip python3 -m pip install PyQtWebEngine See here where I learned this: Python 3.7.0 No module named 'PyQt5.QtWebEngineWidgets' If using a specific version of Python3, and the above doesn't work, you may need to specify the exact version of Python3 like this. Here I am specifying Python3.8, for instance: python3.8 -m pip install --upgrade pip python3.8 -m pip uninstall PyQt5 python3.8 -m pip uninstall PyQt5-sip python3.8 -m pip uninstall PyQtWebEngine python3.8 -m pip install PyQt5 python3.8 -m pip install PyQt5-sip python3.8 -m pip install PyQtWebEngine
ImportError: No module named PyQt5
following are my python, qt and sip versions root@thura:~# python -V Python 2.7.3 root@thura:~# qmake --version QMake version 3.0 Using Qt version 5.0.2 in /usr/lib/i386-linux-gnu root@thura:~# sip -V 4.15.3 I tried to import the PyQt5 by following by this from PyQt5.QtWidgets import QtGui, QtCore I got the following error ImportError: No module named PyQt5.QtWidgets How can I solve this error. Updated ===================== When I tried to PyQt4, I got following error. from PyQt4.QtCore import pyqtSlot as Slot RuntimeError: the sip module implements API v10.0 to v10.1 but the PyQt4.QtCore module requires API v8.1 Updated 2013-12-20 ====================================== 1) download sip-4.15.3.tar.gz from here 2) extract sip-4.15.3.tar.gz 3) copy sip-4.15.3 to /home/thura 4) type "cd /home/thura/sip-4.15.3" 5) type "python configure.py", press enter, follow the instructions (type yes and press enter) 6) type "make", press enter and type "make install", press enter 7) download PyQt-gpl-5.1.1.tar.gz from here 8) extract PyQt-gpl-5.1.1.tar.gz 9) copy PyQt-gpl-5.1.1 folder to /home/thura folder. 10) type "cd /home/thura/PyQt-gpl-5.1.1" 11) type "python configure.py", press enter, following the instructions (type yes and press enter) 12)type "make", press enter and type "make install", press enter update 2013-12-20 ===================== After redo it again. I got the following error make[2]: Entering directory `/home/thura/PyQt/qpy/QtDBus' make[2]: Nothing to be done for `install'. make[2]: Leaving directory `/home/thura/PyQt/qpy/QtDBus' make[1]: Leaving directory `/home/thura/PyQt/qpy' cd QtCore/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /home/thura/PyQt/QtCore/QtCore.pro -o Makefile ) && make -f Makefile install make[1]: Entering directory `/home/thura/PyQt/QtCore' g++ -c -pipe -O2 -Wall -W -D_REENTRANT -fPIC -DSIP_PROTECTED_IS_PUBLIC -Dprotected=public -DQT_NO_DEBUG -DQT_PLUGIN -DQT_CORE_LIB -I/usr/share/qt5/mkspecs/linux-g++ -I. -I/usr/local/include/python2.7 -I../qpy/QtCore -I/usr/include/qt5 -I/usr/include/qt5/QtCore -I. -o sipQtCoreQtWindowStates.o sipQtCoreQtWindowStates.cpp In file included from sipQtCoreQtWindowStates.cpp:24:0: sipAPIQtCore.h:28:17: fatal error: sip.h: No such file or directory compilation terminated. make[1]: *** [sipQtCoreQtWindowStates.o] Error 1 make[1]: Leaving directory `/home/thura/PyQt/QtCore' make: *** [sub-QtCore-install_subtargets-ordered] Error 2
[ "If you are on ubuntu, just install pyqt5 with apt-get command:\n sudo apt-get install python3-pyqt5 # for python3\n\nor\n sudo apt-get install python-pyqt5 # for python2\n\nHowever, on Ubuntu 14.04 the python-pyqt5 package is left out [source] and need to be installed manually [source]\n", "pip install pyqt5 for python3 for ubuntu\n", "this can be solved under MacOS X by installing pyqt with brew\nbrew install pyqt\n\n", "After getting the help from @Blender, @ekhumoro and @Dan, I understand the Linux and Python more than before. Thank you. I got the an idea by @ekhumoro, it is I didn't install PyQt5 correctly. So I delete PyQt5 folder and download again. And redo everything from very start. \nAfter redoing, I got the error as my last update at my question. So, when I search at stack, I got the following solution from here\nsudo ln -s /usr/include/python2.7 /usr/local/include/python2.7\n\nAnd then, I did \"sudo make\" and \"sudo make install\" step by step. After \"sudo make install\", I got the following error. But I ignored it and I created a simple design with qt designer. And I converted it into python file by pyuic5. Everything are going well.\ninstall -m 755 -p /home/thura/PyQt/pyuic5 /usr/bin/\nstrip /usr/bin/pyuic5\nstrip:/usr/bin/pyuic5: File format not recognized\nmake: [install_pyuic5] Error 1 (ignored)\n\n", "This probably means that python doesn't know where PyQt5 is located. To check, go into the interactive terminal and type:\nimport sys\nprint sys.path\n\nWhat you probably need to do is add the directory that contains the PyQt5 module to your PYTHONPATH environment variable. If you use bash, here's how:\nType the following into your shell, and add it to the end of the file ~/.bashrc\nexport PYTHONPATH=/path/to/PyQt5/directory:$PYTHONPATH\n\nwhere /path/to/PyQt5/directory is the path to the folder where the PyQt5 library is located.\n", "On windows, \"pip install pyqt5\", solved it for me.\n", "You can try to Open the anaconda-prompt with Admininistator user option;\nconda install pyqt=5\n", "It may caused by different python version, check which version of python you are using, for me the global version was 2.7 and the python version installed in the virtual environment was 3.8, so there was difference, so I run the main.py inside the environment and it works.\n\n", "How to install PyQt5 in Python3\nJust installing it did not work for me. I had to uninstall it first, then reinstall it:\n# upgrade pip\npython3 -m pip install --upgrade pip\n\n# uninstall\npython3 -m pip uninstall PyQt5\npython3 -m pip uninstall PyQt5-sip\npython3 -m pip uninstall PyQtWebEngine\n\n# reinstall\npython3 -m pip install PyQt5\npython3 -m pip install PyQt5-sip\npython3 -m pip install PyQtWebEngine\n\nSee here where I learned this: Python 3.7.0 No module named 'PyQt5.QtWebEngineWidgets'\nIf using a specific version of Python3, and the above doesn't work, you may need to specify the exact version of Python3 like this. Here I am specifying Python3.8, for instance:\npython3.8 -m pip install --upgrade pip\n\npython3.8 -m pip uninstall PyQt5\npython3.8 -m pip uninstall PyQt5-sip\npython3.8 -m pip uninstall PyQtWebEngine\n\npython3.8 -m pip install PyQt5\npython3.8 -m pip install PyQt5-sip\npython3.8 -m pip install PyQtWebEngine\n\n" ]
[ 38, 30, 14, 8, 4, 1, 1, 0, 0 ]
[]
[]
[ "pyqt5", "python" ]
stackoverflow_0020672918_pyqt5_python.txt
Q: How to calculate cumulative sum with django ORM? I'm trying to group_by() data based on dates and with every day I want to calculate Count on that day also the total count so far. Sample output I'm getting: [ { "dates": "2022-11-07", "count": 1 }, { "dates": "2022-11-08", "count": 3 }, { "dates": "2022-11-09", "count": 33 } ] Sample output I'm trying to achieve: [ { "dates": "2022-11-07", "count": 1, "cumulative_count": 1 }, { "dates": "2022-11-08", "count": 3, "cumulative_count": 4 }, { "dates": "2022-11-09", "count": 33, "cumulative_count": 37 } ] Here's my query: self.serializer_class.Meta.model.objects.all().annotate(dates=TruncDate("date__date")).values("dates").order_by("dates").annotate(count=Count("channel", distinct=True)).values("count", "dates") How can I extend this query to get a cumulative sum as well? A: I tried to solve your problem like this models.py class Demo(models.Model): count =models.IntegerField() dates = models.DateField() serializers.py class DemoSerializer(serializers.ModelSerializer): class Meta: model = Demo fields = "__all__" Views.py class DemoAPI(APIView): def get(self, request, pk=None, format=None): data = Demo.objects.all() cumulative_count= 0 # Normal Django ORM Queruset print('--------- Default Queryset Response ---------') for i in data: del i.__dict__['_state'] print(i.__dict__) # Adding cumulative_count key in ORM Queryset for i in data: cumulative_count += i.__dict__['count'] i.__dict__['cumulative_count'] = cumulative_count # Updated Django ORM Queruset with cumulative_count print('--------- Updated Queryset Response ---------') for i in data: # del i.__dict__['_state'] print(i.__dict__) Output before delete _state key from Queryset #--------- Default Queryset Response --------- {'_state': <django.db.models.base.ModelState object at 0x000001A07002A680>, 'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7)} {'_state': <django.db.models.base.ModelState object at 0x000001A07002A5C0>, 'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8)} {'_state': <django.db.models.base.ModelState object at 0x000001A07002A7A0>, 'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9)} #--------- Updated Queryset Response --------- {'_state': <django.db.models.base.ModelState object at 0x000002DAB66E0AC0>, 'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7), 'cumulative_count': 1} {'_state': <django.db.models.base.ModelState object at 0x000002DAB66E0C10>, 'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8), 'cumulative_count': 4} {'_state': <django.db.models.base.ModelState object at 0x000002DAB66E0D60>, 'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9), 'cumulative_count': 37} Output after delete _state key from Queryset Added cumulative_count key in Queryset #--------- Default Queryset Response --------- {'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7)} {'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8)} {'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9)} #--------- Updated Queryset Response --------- {'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7), 'cumulative_count': 1} {'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8), 'cumulative_count': 4} {'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9), 'cumulative_count': 37}
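If a plain Python pass over the grouped queryset is acceptable, itertools.accumulate keeps the running total tidy. A sketch where Model stands in for self.serializer_class.Meta.model and the field names are reused from the question's query.

from itertools import accumulate
from django.db.models import Count
from django.db.models.functions import TruncDate

rows = list(
    Model.objects
    .annotate(dates=TruncDate("date__date"))
    .values("dates")
    .annotate(count=Count("channel", distinct=True))
    .order_by("dates")
    .values("dates", "count")
)
for row, total in zip(rows, accumulate(r["count"] for r in rows)):
    row["cumulative_count"] = total
# rows now matches the desired output shape: dates, count, cumulative_count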
How to calculate cumulative sum with django ORM?
I'm trying to group_by() data based on dates, and for each day I want to calculate the Count on that day as well as the total count so far. Sample output I'm getting: [ { "dates": "2022-11-07", "count": 1 }, { "dates": "2022-11-08", "count": 3 }, { "dates": "2022-11-09", "count": 33 } ] Sample output I'm trying to achieve: [ { "dates": "2022-11-07", "count": 1, "cumulative_count": 1 }, { "dates": "2022-11-08", "count": 3, "cumulative_count": 4 }, { "dates": "2022-11-09", "count": 33, "cumulative_count": 37 } ] Here's my query: self.serializer_class.Meta.model.objects.all().annotate(dates=TruncDate("date__date")).values("dates").order_by("dates").annotate(count=Count("channel", distinct=True)).values("count", "dates") How can I extend this query to get a cumulative sum as well?
[ "I tried to solve your problem like this\nmodels.py\nclass Demo(models.Model):\n count =models.IntegerField()\n dates = models.DateField()\n \n\nserializers.py\nclass DemoSerializer(serializers.ModelSerializer):\n \n class Meta:\n model = Demo\n fields = \"__all__\"\n\nViews.py\nclass DemoAPI(APIView):\n def get(self, request, pk=None, format=None):\n data = Demo.objects.all()\n cumulative_count= 0\n\n # Normal Django ORM Queruset\n print('--------- Default Queryset Response ---------')\n for i in data:\n del i.__dict__['_state']\n print(i.__dict__)\n\n # Adding cumulative_count key in ORM Queryset \n for i in data:\n cumulative_count += i.__dict__['count']\n i.__dict__['cumulative_count'] = cumulative_count\n\n # Updated Django ORM Queruset with cumulative_count \n print('--------- Updated Queryset Response ---------')\n for i in data:\n # del i.__dict__['_state']\n print(i.__dict__)\n\nOutput before delete _state key from Queryset\n#--------- Default Queryset Response --------- \n{'_state': <django.db.models.base.ModelState object at 0x000001A07002A680>, 'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7)}\n{'_state': <django.db.models.base.ModelState object at 0x000001A07002A5C0>, 'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8)}\n{'_state': <django.db.models.base.ModelState object at 0x000001A07002A7A0>, 'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9)}\n\n#--------- Updated Queryset Response --------- \n{'_state': <django.db.models.base.ModelState object at 0x000002DAB66E0AC0>, 'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7), 'cumulative_count': 1}\n{'_state': <django.db.models.base.ModelState object at 0x000002DAB66E0C10>, 'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8), 'cumulative_count': 4}\n{'_state': <django.db.models.base.ModelState object at 0x000002DAB66E0D60>, 'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9), 'cumulative_count': 37}\n\nOutput after delete _state key from Queryset Added cumulative_count key in Queryset\n#--------- Default Queryset Response ---------\n{'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7)}\n{'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8)}\n{'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9)}\n\n#--------- Updated Queryset Response ---------\n{'id': 1, 'count': 1, 'dates': datetime.date(2022, 11, 7), 'cumulative_count': 1}\n{'id': 2, 'count': 3, 'dates': datetime.date(2022, 11, 8), 'cumulative_count': 4}\n{'id': 3, 'count': 33, 'dates': datetime.date(2022, 11, 9), 'cumulative_count': 37}\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_orm", "django_rest_framework", "python" ]
stackoverflow_0074430261_django_django_models_django_orm_django_rest_framework_python.txt
Q: 123 to 321 problem giving errors when iterating I made this program trying to solve a problem which is flipping a number. For example, when the number 123 is input, the number 321 should be the output. #function to swap number positions on the array def swapPositions(list, pos1, pos2): i = list[pos1] list[pos1] = list[pos2] list[pos2] = i myList = [] theNum = int(input("enter the value")) theNumInString = str(theNum) #loop to separate numbers on the integer into each position of the array for char in theNumInString: myList.append(char) #this variable is to know how many times we should swap the positions numofSwaps = len(myList) % 2 posi1 = 0 posi2 = len(myList) - 1 while numofSwaps != 0: swapPositions(myList, posi1, posi2) #I add one and subtract one from the positions so they move further to the middle to swap other positions posi1 += 1 posi2 -= 1 numofSwaps -= 1 number = "".join(myList) print(number) What happens when I run the code and try, for example, 123 is that it returns 321 as expected, BUT here comes the problem... when I input 12345 the output is 52341, which only swaps the outer two numbers. A: this can be done without converting the number to a string, for example # note: this example works for positive numbers only def reverseNum(x): y = 0 while x > 0: y = y*10 + x%10 x //= 10 return y >>> reverseNum(3124) 4213
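For completeness, the actual bug in the question is the swap count: len(myList) % 2 gives the remainder (0 or 1), while the loop needs half the length, len(myList) // 2. A small sketch of the usual string-slicing alternative (the negative-number handling is my addition, not part of the question or the answer):
def reverse_number(n: int) -> int:
    # reverse the digits via slicing; keep the sign separate
    sign = -1 if n < 0 else 1
    return sign * int(str(abs(n))[::-1])

print(reverse_number(12345))  # 54321
print(reverse_number(-123))   # -321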
123 to 321 problem giving errors when iterating
I made this program trying to solve a problem which is flipping a number. For example, when the number 123 is input, the number 321 should be the output. #function to swap number positions on the array def swapPositions(list, pos1, pos2): i = list[pos1] list[pos1] = list[pos2] list[pos2] = i myList = [] theNum = int(input("enter the value")) theNumInString = str(theNum) #loop to separate numbers on the integer into each position of the array for char in theNumInString: myList.append(char) #this variable is to know how many times we should swap the positions numofSwaps = len(myList) % 2 posi1 = 0 posi2 = len(myList) - 1 while numofSwaps != 0: swapPositions(myList, posi1, posi2) #I add one and subtract one from the positions so they move further to the middle to swap other positions posi1 += 1 posi2 -= 1 numofSwaps -= 1 number = "".join(myList) print(number) What happens when I run the code and try, for example, 123 is that it returns 321 as expected, BUT here comes the problem... when I input 12345 the output is 52341, which only swaps the outer two numbers.
[ "this can be done without converting the number to a string, for example\n# note: this example works for positive numbers only\ndef reverseNum(x):\n y = 0\n while x > 0:\n y = y*10 + x%10\n x //= 10\n return y\n\n>>> reverseNum(3124)\n4213\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074465498_python.txt
Q: pip installing local package for testing in sibling folder I have source code I want to test in a test folder that is a sibling to the src code folder. See the structure below ├── src │   ├── modules │   │   ├── module1 │   │   │   ├── __init__.py │   │   │   └── somecode.py │   │   └── module2 │   └── setup.py └── tests └── test1.py my setup.py file looks like this setuptools.setup( name="src_code", version=0.26, description="solves relative imports", author="yer boi", url="", package_data={"": ["LICENSE"]}, package_dir={"": "modules"}, packages=setuptools.find_packages(where="modules"), python_requires=">=3.8", py_modules=[ ], ) Right now I have a .venv active and I am running pip install -e src from the base folder to have a local editable package I can test and import into the tests folder/files. The problem is when I pip freeze I am getting some weird string in what has been installed and I cannot import any of my locally built packages. pip freeze looks like the following black==22.10.0 -e git+https://github.com/xxx/xxx.git@5da71eb9ecb1cb08c930edd1d052fa209375f38d#egg=src_code&subdirectory=lambdas Anyone know why I am getting this strange pip freeze result, and how to fix this so I can locally build and import my src package? my top_level.txt file in src_code.egg-info looks like this module1 module2 A: So the problem was that I was doing pip install -e /path/to/package. This was creating some problems with importing for some reason; the links to the egg-info files seemed off. If you want to use a local package in a sibling folder, forget about the -e flag; just do pip install /path/to/package and then everything should work
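A small sanity check after installing (the module names here are hypothetical, taken from the question's folder tree, not a published package):
# run from the repo root after `pip install ./src` (note: no -e flag)
import importlib

for name in ("module1", "module1.somecode"):
    mod = importlib.import_module(name)
    print(name, "->", mod.__file__)  # should point into site-packages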
pip installing local package for testing in sibling folder
I have source code I want to test in a test folder that is a sibling to the src code folder. See the structure below ├── src │   ├── modules │   │   ├── module1 │   │   │   ├── __init__.py │   │   │   └── somecode.py │   │   └── module2 │   └── setup.py └── tests └── test1.py my setup.py file looks like this setuptools.setup( name="src_code", version=0.26, description="solves relative imports", author="yer boi", url="", package_data={"": ["LICENSE"]}, package_dir={"": "modules"}, packages=setuptools.find_packages(where="modules"), python_requires=">=3.8", py_modules=[ ], ) Right now I have a .venv active and I am running pip install -e src from the base folder to have a local editable package I can test and import into the tests folder/files. The problem is when I pip freeze I am getting some weird string in what has been installed and I cannot import any of my locally built packages. pip freeze looks like the following black==22.10.0 -e git+https://github.com/xxx/xxx.git@5da71eb9ecb1cb08c930edd1d052fa209375f38d#egg=src_code&subdirectory=lambdas Anyone know why I am getting this strange pip freeze result, and how to fix this so I can locally build and import my src package? my top_level.txt file in src_code.egg-info looks like this module1 module2
[ "So the problem was taht i was doing pip install -e /path/to/package. This was creating some problems with importing for some reason, the links to the egg-info files seemed off. It you want to use a local package in a sibling folder for get about the -e flag, just do pip install /path/to/package and then everything should work\n" ]
[ 0 ]
[]
[]
[ "pip", "python", "python_unittest", "setup.py", "setuptools" ]
stackoverflow_0074465323_pip_python_python_unittest_setup.py_setuptools.txt
Q: How to read a csv file in Spark with multiline text in a cell I have the following file: The 'complaint' column has cases where newlines were created. When I try to create a Spark df with multiline text in the 'complaint' field, it gets shifted and I end up with: Is there a way to fix this? A: Please use quotes and the multi-line option on the data file. I will walk you through your test case that does not work. Data file without quotes. Data file with quotes on text. Text in both examples is multi-line for the complaint column (fields). Both files exist in a directory in mounted ADLS Generation 2 storage. The shell command lists the files in the directory. The unquoted file returns unwanted rows. This is our bad test case. The quoted file returns the results that you want. This is our good test case. If you hover over the text, you can see a tool tip pop up. It contains the multi-line data in a single cell. Enclosed is the working code for your use. df2 = spark.read.format("csv") \ .option("inferSchema", "true") \ .option("header", "true") \ .option("sep", ",") \ .option("multiLine", "true") \ .option("quote","\"") \ .load("/mnt/datalake/stack//mline-file-w-quotes.txt") display(df2)
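As a companion to the answer, here is a hedged sketch of writing multiline text back out with quoting enabled, so the file round-trips with the read options shown above (the path, session setup and sample data are my assumptions, not from the post):
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multiline-csv").getOrCreate()
df = spark.createDataFrame([(1, "line one\nline two")], ["id", "complaint"])

(df.write.format("csv")
   .option("header", "true")
   .option("quote", "\"")   # quote fields containing newlines
   .option("escape", "\"")  # escape embedded quotes the CSV way
   .mode("overwrite")
   .save("/tmp/complaints_csv"))  # placeholder path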
How to read a csv file in Spark with multiline text in a cell
I have the following file: The 'complaint' column has cases where newlines were created. When I try to create a Spark df with multiline text in the 'complaint' field, it gets shifted and I end up with: Is there a way to fix this?
[ "Please use quotes and the multi-line option on the data file.\nI will work you thru your test case that does not work.\n\nData file without quotes.\n\nData file with quotes on text. Text in both examples is multi-line for the complaint column (fields).\n\nBoth files exist in directory in mounted ADLS Generation 2 storage. The shell command lists the files in the directory.\nThe unquoted file returns unwanted rows. This is our bad test case.\n\nThe quoted file returns the results that you want. This is our good test case.\n\nIf you hover over the text, you can see a tool tip pop up. It contains the multi line data in a single cell.\nEnclosed is the working code for your use.\ndf2 = spark.read.format(\"csv\") \\\n .option(\"inferSchema\", \"true\") \\\n .option(\"header\", \"true\") \\\n .option(\"sep\", \",\") \\\n .option(\"multiLine\", \"true\") \\\n .option(\"quote\",\"\\\"\") \\\n .load(\"/mnt/datalake/stack//mline-file-w-quotes.txt\")\n\ndisplay(df2)\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "python" ]
stackoverflow_0074465262_apache_spark_python.txt
Q: Comparing times in Python I've no idea what I'm doing wrong in the following line of code and it's driving me nuts! As you'll probably know from the code, I only want the if block to run if it's before 5.15pm. if time(datetime.now().hour,datetime.now().minute) < time(17, 15): I've imported date and datetime so that's not the issue (it's a TypeError - see the error message below) Exception has occurred: TypeError (note: full exception trace is shown but execution is paused at: _run_module_as_main) 'module' object is not callable Can someone advise on what I'm doing wrong? A: You probably want to from datetime import time: from datetime import datetime, time now = datetime.now() if time(now.hour, now.minute) < time(20, 15): print("Hello") Prints (now): Hello A: a bit shorter: from datetime import datetime, time datetime.now().time() < time(20,10) # False
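One caveat worth a quick demo (not raised in the answers): datetime.now() is naive local time. If an explicit time zone matters, compare the aware datetime's .time() instead. A sketch assuming Python 3.9+ for zoneinfo:
from datetime import datetime, time
from zoneinfo import ZoneInfo  # Python 3.9+

now_london = datetime.now(ZoneInfo("Europe/London"))
print(now_london.time() < time(17, 15))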
Comparing times in Python
I've no idea what I'm doing wrong in the following line of code and it's driving me nuts! As you'll probably know from the code, I only want the if block to run if it's before 5.15pm. if time(datetime.now().hour,datetime.now().minute) < time(17, 15): I've imported date and datetime so that's not the issue (it's a TypeError - see the error message below) Exception has occurred: TypeError (note: full exception trace is shown but execution is paused at: _run_module_as_main) 'module' object is not callable Can someone advise on what I'm doing wrong?
[ "You probably want to from datetime import time:\nfrom datetime import datetime, time\n\nnow = datetime.now()\n\nif time(now.hour, now.minute) < time(20, 15):\n print(\"Hello\")\n\nPrints (now):\nHello\n\n", "a bit shorter:\nfrom datetime import datetime, time\n\ndatetime.now().time() < time(20,10) # False\n\n" ]
[ 2, 1 ]
[]
[]
[ "datetime", "if_statement", "python", "python_3.x", "python_datetime" ]
stackoverflow_0074465296_datetime_if_statement_python_python_3.x_python_datetime.txt
Q: How to save an excel using pywin32? I am trying to save an Excel file generated by another application that is open, i.e. the Excel application is in the foreground. This file has some data and it needs to be saved, i.e. written to disk. In other words, I need to do an operation like File->SaveAs. Steps to reproduce: Open an Excel Application. This will be shown as Book1 - Excel in the title by default. Write this code and run import win32com.client as win32 app = win32.gencache.EnsureDispatch('Excel.Application') app.Workbooks(1).SaveAs(r"C:\Users\test\Desktop\test.xlsx") app.Application.Quit() Error - Traceback (most recent call last): File "c:/Users/test/Downloads/automate_excel.py", line 6, in <module> ti = disp._oleobj_.GetTypeInfo() pywintypes.com_error: (-2147418111, 'Call was rejected by callee.', None, None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:/Users/test/Downloads/automate_excel.py", line 6, in <module> app = win32.gencache.EnsureDispatch('Excel.Application') File "C:\Users\test\AppData\Local\Programs\Python\Python38\lib\site-packages\win32com\client\gencache.py", line 633, in EnsureDispatch raise TypeError( TypeError: This COM object can not automate the makepy process - please run makepy manually for this object A: There could be many sources for your problem, so I would appreciate it if you shared further code. The second error can occur, for example, when you are running multiple instances of the line excel = win32.gencache.EnsureDispatch('Excel.Application'), for instance in a for loop. Also make sure to have a version of Excel that is fully activated and licensed. A: This is working for me (on python==3.9.8 and pywin32==305). You'll see that the first line is different from yours, but I think that's really it. In the course of this we kept getting Attribute Errors for the Workbook or for setting DisplayAlerts. We found (from this question: Excel.Application.Workbooks attribute error when converting excel to pdf) that if Excel is in a loop (for example, editing a cell or showing a pop-up) then you will get an error. So, be sure to click enter out of a cell so that you aren't editing it. import win32com.client as win32 savepath = 'c:\\my\\file\\path\\test\\' xl = win32.Dispatch('Excel.Application') wb = xl.Workbooks['Book1'] wb.DisplayAlerts = False # helpful if saving multiple times to save file, it means you won't get a pop-up for overwrite and will default to save it. filename = 'new_xl.xlsx' wb.SaveAs(savepath+filename) wb.Close() xl.Quit() edit: add pywin32 version, include some more tips A: This is the version that worked for me based on @scotscotmcc's answer. The issue was with the cell which was in edit mode while I was running the program. Make sure you hit enter in the current cell and come out of the edit mode in Excel. import win32com.client as win32 import random xl = win32.Dispatch('Excel.Application') wb = xl.Workbooks['Book1'] wb.SaveAs(r"C:\Users\...\Desktop\Form"+str(random.randint(0,1000))+".xlsx") wb.Close() xl.Quit()
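A hedged sketch of retrying the COM call while Excel is busy (the "Call was rejected by callee" error from the question); the retry loop is my addition here, not part of the accepted answer:
import time
import pywintypes
import win32com.client as win32

def save_workbook(book_name, dest_path, retries=5, delay=1.0):
    xl = win32.Dispatch("Excel.Application")
    for _ in range(retries):
        try:
            wb = xl.Workbooks[book_name]
            wb.SaveAs(dest_path)
            wb.Close()
            return True
        except pywintypes.com_error:
            # Excel rejects COM calls while a cell is in edit mode; wait and retry
            time.sleep(delay)
    return False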
How to save an excel using pywin32?
I am trying to save an Excel file generated by another application that is open, i.e. the Excel application is in the foreground. This file has some data and it needs to be saved, i.e. written to disk. In other words, I need to do an operation like File->SaveAs. Steps to reproduce: Open an Excel Application. This will be shown as Book1 - Excel in the title by default. Write this code and run import win32com.client as win32 app = win32.gencache.EnsureDispatch('Excel.Application') app.Workbooks(1).SaveAs(r"C:\Users\test\Desktop\test.xlsx") app.Application.Quit() Error - Traceback (most recent call last): File "c:/Users/test/Downloads/automate_excel.py", line 6, in <module> ti = disp._oleobj_.GetTypeInfo() pywintypes.com_error: (-2147418111, 'Call was rejected by callee.', None, None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:/Users/test/Downloads/automate_excel.py", line 6, in <module> app = win32.gencache.EnsureDispatch('Excel.Application') File "C:\Users\test\AppData\Local\Programs\Python\Python38\lib\site-packages\win32com\client\gencache.py", line 633, in EnsureDispatch raise TypeError( TypeError: This COM object can not automate the makepy process - please run makepy manually for this object
[ "There could be many sources for your problem so I would apreciate if you shared further code. The second error can for example occur when you are running multiple instances of the line excel = win32.gencache.EnsureDispatch('Excel.Application') for example in a for loop .\nAlso make sure to have a version of excel that is fully activated and licensed .\n", "This is working for me (on python==3.9.8 and pywin32==305). You'll see that the first line is a different than yours, but I think that's really it.\nIn the course of this we kept getting Attribute Errors for the Workbook or for setting DisplayAlerts. We found (from this question: Excel.Application.Workbooks attribute error when converting excel to pdf) that if Excel is in a loop (for example, editing a cell or has a pop-up open) then you will get an error. So, be sure to click enter out of a cell so that you aren't editing it.\nimport win32com.client as win32\nsavepath = 'c:\\\\my\\\\file\\\\path\\\\test\\\\'\n\nxl = win32.Dispatch('Excel.Application') \n\nwb = xl.Workbooks['Book1']\nwb.DisplayAlerts = False # helpful if saving multiple times to save file, it means you won't get a pop-up for overwrite and will default to save it.\nfilename = 'new_xl.xlsx'\nwb.SaveAs(savepath+filename)\nwb.Close()\nxl.Quit()\n\nedit: add pywin32 version, include some more tips\n", "This is the version that worked for me based on @scotscotmcc's answer. The issue was with the cell which was in edit mode while I was running the program. Make sure you hit enter in the current cell and come out of the edit mode in excel.\nimport win32com.client as win32\nimport random\nxl = win32.Dispatch('Excel.Application')\nwb = xl.Workbooks['Book1']\nwb.SaveAs(r\"C:\\Users\\...\\Desktop\\Form\"+str(random.randint(0,1000))+\".xlsx\")\nwb.Close()\nxl.Quit()\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "excel", "python", "pywin32", "win32com", "winapi" ]
stackoverflow_0074350835_excel_python_pywin32_win32com_winapi.txt
Q: How to click relative to a window/application in python and how to handle multiple scripts that click? Is there any way to click relative to an open window? For example, clicking a set amount of pixels to the right/up/left/down of an open tab of google chrome? I know how to click using absolute coordinates, or click on something that matches an image file, but I haven't been able to find anything regarding relative clicking. Another part to this - when automating some process on a computer that uses the mouse or keyboard to input commands, if you run two or more of the same script, is there a possibility that the commands interrupt each other? Like if you move the mouse and then click, but another script moves it again before the first one is allowed to click? Is there an easy solution for this? What my mind jumps to first is using a queuing process similar to handling multiple processes in an OS. A: You can use the library pyautogui. I put an example here: import pyautogui as pya start = pya.locateCenterOnScreen('start.png')#If the file is not a png file it will not work print(start) pya.moveTo(start)#Moves the mouse to the coordinates of the image #you can even click with pya.click(button='left',clicks=2,x=start.x,y=start.y) # you can do two clicks on the image A: For anyone in the future coming across this post, using Python to move their mouse relative to a specific window's coordinates: EASIEST WAY (Requires pyautogui & pywin32 (win32gui successor)) Step 1: Get the HWND or the 'ID' of the 'Active Window' you want to move your mouse relative to: hwnd = win32gui.FindWindow(None, 'Untitled - Notepad') Step 2: Now let's get the global X and Y coordinates of the top left and bottom right corner of this app to determine where it is on your screen at the moment: x0, y0, x1, y1 = win32gui.GetWindowRect(hwnd) You can also get the dimensions of the window here as well: w = x1 - x0 # width h = y1 - y0 # height Step 3: We only need the X0 and Y0 (top left corner global coordinates on our screen), so return those if you put this in its own method Step 4: All you have to do now is create a custom method to moveMouse(x, y) which takes your relative coordinates in as an argument, and translates them using the newly acquired X0, Y0 coordinates. You're basically just adding the offset required to be 'relative' to the active window and that offset is acquired by determining the top left corner's x, y position. def MoveMouse(x, y): x0, y0 = GetMyWindowSize() # Method I created to return the GetWindowRect() values pyautogui.moveTo(x + x0, y + y0) return Good luck! If you're not restricted to Python, AHK (Auto Hotkey) is also worth checking out for automation of mouse and keyboard inputs. There are native methods to recreate this functionality.
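Pulling the second answer's pieces into one self-contained sketch (the window title is a placeholder assumption, as in the answer):
import pyautogui
import win32gui

def click_relative(x, y, title="Untitled - Notepad"):  # title is a placeholder
    hwnd = win32gui.FindWindow(None, title)
    x0, y0, x1, y1 = win32gui.GetWindowRect(hwnd)  # top-left corner at (x0, y0)
    pyautogui.click(x + x0, y + y0)  # offset the relative coords by the corner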
How to click relative to a window/application in python and how to handle multiple scripts that click?
Is there any way to click relative to an open window? For example, clicking a set amount of pixels to the right/up/left/down of an open tab of google chrome? I know how to click using absolute coordinates, or click on something that matches an image file, but I haven't been able to find anything regarding relative clicking. Another part to this - when automating some process on a computer that uses the mouse or keyboard to input commands, if you run two or more of the same script, is there a possibility that the commands interrupt each other? Like if you move the mouse and then click, but another script moves it again before the first one is allowed to click? Is there an easy solution for this? What my mind jumps to first is using a queuing process similar to handling multiple processes in an OS.
[ "You can use the library pyautogui.\nI put an example here:\nimport pyautogui as pya\nstart = pya.locateCenterOnScreen('start.png')#If the file is not a png file it will not work\nprint(start)\npya.moveTo(start)#Moves the mouse to the coordinates of the image\n#even you can make click with\npya.click(button='left',clicks=2,x=start.x,y=start.y) # you can do two click on the image\n\n", "For anyone in the future coming across this post, using Python to move their mouse relative to a specific windows coordinates:\nEASIEST WAY\n(Requires pyautogui & pywin32 (win32gui successor))\nStep 1: Get the HWND or the 'ID' of the 'Active Window' you want to move your mouse relative to:\nhwnd = win32gui.FindWindow(None, 'Untitled - Notepad')\n\nStep 2: Now let's get the global X and Y coordinates of the top left and bottom right corner of this app to determine where it is on your screen at the moment:\nx0, y0, x1, y1 = win32gui.GetWindowRect(hwnd)\n\nYou can also get the dimensions of the window here as well:\nw = x1 - x0 # width\nh = y1 - y0 # height\n\nStep 3: We only need the X0 and Y0 (top left corner global coordinates on our screen), so return those if you put this in its own method\nStep 4: All you have to do now is create a custom method to moveMouse(x, y) which takes your relative coordinates in as an argument, and translates them using the newly acquired X0, Y0 coordinates. You're basically just adding the offset required to be 'relative' to the active window and that offset is acquired by determining the top left corners x, y position.\ndef MoveMouse(x, y):\n x0, y0 = GetMyWindowSize() # Method I created to return the GetWindowRect() values\n\npyautogui.moveTo(x + bsX, y + bsY)\nreturn\n\nGood luck! If you're not restricted to Python, AHK (Auto Hotkey) is also worth checking out for automation of mouse and keyboard inputs. There are native methods to recreate this functionality.\n" ]
[ 0, 0 ]
[]
[]
[ "automation", "python" ]
stackoverflow_0061418522_automation_python.txt
Q: filtering out strings in a list Python I'm totally new to Python and I'm sure I'm missing something simple; I want to remove all strings. def filter_list(l): for f in l: if isinstance(f, str): l.remove(f) return l print(filter_list([1,2,'a','b'])) The output I get is: [1,2,'b'] A: Often when we need to filter a sublist from a list given a condition, you'll see this sort of syntax (i.e. list comprehension) quite commonly, which serves to do the exact same thing. It's up to you which style you prefer: a = [1,2,'a','b'] b = [x for x in a if not isinstance(x, str)] print(b) # [1, 2] A: Your error came from removing items from the list while iterating, which causes the element after each removed item to be skipped (for more details read this: How to remove items from a list while iterating?). To avoid this, build the result with a list comprehension. def filter_list(l): return [f for f in l if not isinstance(f, str)] print(filter_list([1,2,'a','b'])) # [1, 2] A: so you can do something like def filter_list(l): for f in l[:]: # iterate over a copy so removals don't skip items if type(f) == str: l.remove(f) return l
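Another equivalent option, for reference (my addition, not from the answers above): the built-in filter() with a predicate, then back to a list.
a = [1, 2, 'a', 'b']
b = list(filter(lambda x: not isinstance(x, str), a))
print(b)  # [1, 2]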
filtering out strings in a list Python
I'm totally new to Python and I'm sure I'm missing something simple; I want to remove all strings. def filter_list(l): for f in l: if isinstance(f, str): l.remove(f) return l print(filter_list([1,2,'a','b'])) The output I get is: [1,2,'b']
[ "Often when we need to filter a sublist from a list given a condition, you'll see this sort of syntax (i.e. list comprehension) quite commonly, which serves to do the exact same thing. It's up to you which style you prefer:\na = [1,2,'a','b']\nb = [x for x in a if not isinstance(x, str)]\nprint(b) # [1, 2]\n\n", "Your error came from removing items from list in iteration and at last, you don't check the last item (for more details read this : How to remove items from a list while iterating?) For this approach remove items with list comprehension.\ndef filter_list(l):\n return [f for f in l if not isinstance(f, str)]\n\nprint(filter_list([1,2,'a','b'])) \n# [1, 2]\n\n", "so you can do something like\ndef filter_list(l)\nfor f in l:\n if type(f) == str:\n l.remove(f)\nreturn l\n\n" ]
[ 2, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074465794_python.txt
Q: Pandas Merge issue I'm trying to merge one column's values from df2 to df1. df1.merge(df2, how='outer') seems to be what I needed but the result is not what I wanted because of duplicates. Using 'on' introduces _x and _y which I don't want either. In the example below: sub=site1 in both df1 and df2 is the same, then 'fred' from df2 replaces 'own' of df1. # Pandas Merge test: import pandas as pd df1 = pd.DataFrame({'sub': ['site1', 'site2', 'site3'], 'iss': ['enc1', 'enc2', 'enc3'], 'rem': [1, 3, 5], 'own': ['andy', 'brian', 'cody']}) df2 = pd.DataFrame({'sub': ['data1', 'data2', 'site1'], 'rem': [2, 4, 6], 'own': ['david', 'edger', 'fred']}) >>> df1 sub iss rem own 0 site1 enc1 1 andy 1 site2 enc2 3 brian 2 site3 enc3 5 cody >>> df2 sub rem own 0 data1 2 david 1 data2 4 edger 2 site1 6 fred >>> df1.merge(df2, how='outer') sub iss rem own 0 site1 enc1 1 andy 1 site2 enc2 3 brian 2 site3 enc3 5 cody 3 data1 NaN 2 david 4 data2 NaN 4 edger 5 site1 NaN 6 fred >>> df1.merge(df2, on='sub', how='outer') sub iss rem_x own_x rem_y own_y 0 site1 enc1 1.0 andy 6.0 fred 1 site2 enc2 3.0 brian NaN NaN 2 site3 enc3 5.0 cody NaN NaN 3 data1 NaN NaN NaN 2.0 david 4 data2 NaN NaN NaN 4.0 edger Expected Output: sub iss rem own 0 site1 enc1 1 fred 1 site2 enc2 3 brian 2 site3 enc3 5 cody 3 data1 NaN 2 david 4 data2 NaN 4 edger A: A potential somewhat simple solution using pd.concat and loc to filter df1 to just contain records not present in df2 and then concat them together. # used to make use loc on index as it is a bit simpler. df1 = df1.set_index('sub') df2 = df2.set_index('sub') Then pd.concat them together. df3 = pd.concat([df1[~df1.index.isin(df2.index)],df2]) Output: print(df3) iss rem own sub site2 enc2 3 brian site3 enc3 5 cody data1 NaN 2 david data2 NaN 4 edger site1 NaN 6 fred This does not change the value of rem and iss for site1 to equal the value of df1 though. If that is also needed, you could just add an additional loc statement as a possible solution. 
Like this: df3.loc[(df3.index.isin(df1.index.to_list())) & ~(df3['rem'].isin(df1['rem'].to_list())), ['iss','rem']] = df1[['iss','rem']] Final Output iss rem own sub site2 enc2 3 brian site3 enc3 5 cody data1 NaN 2 david data2 NaN 4 edger site1 enc1 1 fred A: Edit: changed to using update instead of fillna as per @bkeesey's comment you need to merge on sub then update the new columns and drop the old ones try import pandas as pd df1 = pd.DataFrame({'sub': ['site1', 'site2', 'site3'], 'iss': ['enc1', 'enc2', 'enc3'], 'rem': [1, 3, 5], 'own': ['andy', 'brian', 'cody']}) df2 = pd.DataFrame({'sub': ['data1', 'data2', 'site1'], 'rem': [2, 4, 6], 'own': ['david', 'edger', 'fred']}) dfm = df1.merge(df2, on='sub', how='outer', suffixes=["_x",""]) dfm.own.update(dfm.own_x) dfm.rem.update(dfm.rem_x) del dfm["own_x"] del dfm["rem_x"] result sub iss rem own 0 site1 enc1 6.0 fred 1 site2 enc2 3.0 brian 2 site3 enc3 5.0 cody 3 data1 NaN 2.0 david 4 data2 NaN 4.0 edger A: here is one way to do it # update the df1.own with the values for it in the df2 # using map df1['own'] = df1['sub'].map(df2.set_index('sub')['own']).fillna(df1['own']) out=(pd.concat([df1, df2]) # concat the two DF .drop_duplicates(subset=['sub']) # drop duplicates .reset_index() # reset index .drop(columns='index')) # remove the unwanted column out sub iss rem own 0 site1 enc1 1 fred 1 site2 enc2 3 brian 2 site3 enc3 5 cody 3 data1 NaN 2 david 4 data2 NaN 4 edger alternately, # merge the two DF, and drop the duplicates out=(pd.concat([df1, df2]) .drop_duplicates(subset=['sub']) .reset_index() .drop(columns='index')) # map the own in the resulting DF from concat out['own'] = out['sub'].map(df2.set_index('sub')['own']).fillna(out['own']) out sub iss rem own 0 site1 enc1 1 fred 1 site2 enc2 3 brian 2 site3 enc3 5 cody 3 data1 NaN 2 david 4 data2 NaN 4 edger
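A hedged alternative to the three answers, using DataFrame.update on indexed copies (column names taken from the question; whether this fits depends on only 'own' needing to be overwritten, as in the expected output):
import pandas as pd

df1 = pd.DataFrame({'sub': ['site1', 'site2', 'site3'],
                    'iss': ['enc1', 'enc2', 'enc3'],
                    'rem': [1, 3, 5],
                    'own': ['andy', 'brian', 'cody']})
df2 = pd.DataFrame({'sub': ['data1', 'data2', 'site1'],
                    'rem': [2, 4, 6],
                    'own': ['david', 'edger', 'fred']})

left = df1.set_index('sub')
right = df2.set_index('sub')
left.update(right[['own']])  # overwrite 'own' where subs overlap; rem/iss stay
out = pd.concat([left, right.loc[~right.index.isin(left.index)]]).reset_index()
print(out)  # matches the expected output from the question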
Pandas Merge issue
I'm trying to merge one column values from df2 to df1. df1.merge(df2, how='outer') seems to be what I needed but result is not what I wanted because of duplicate. Using 'on' introduces _x and _y which I don't want either. In below Example: sub=site1 in both df1 and df2 is same, then 'fred' from df2 replaces 'own' of df1. # Pandas Merge test: import pandas as pd df1 = pd.DataFrame({'sub': ['site1', 'site2', 'site3'], 'iss': ['enc1', 'enc2', 'enc3'], 'rem': [1, 3, 5], 'own': ['andy', 'brian', 'cody']}) df2 = pd.DataFrame({'sub': ['data1', 'data2', 'site1'], 'rem': [2, 4, 6], 'own': ['david', 'edger', 'fred']}) >>> df1 sub iss rem own 0 site1 enc1 1 andy 1 site2 enc2 3 brian 2 site3 enc3 5 cody >>> df2 sub rem own 0 data1 2 david 1 data2 4 edger 2 site1 6 fred >>> df1.merge(df2, how='outer') sub iss rem own 0 site1 enc1 1 andy 1 site2 enc2 3 brian 2 site3 enc3 5 cody 3 data1 NaN 2 david 4 data2 NaN 4 edger 5 site1 NaN 6 fred >>> df1.merge(df2, on='sub', how='outer') sub iss rem_x own_x rem_y own_y 0 site1 enc1 1.0 andy 6.0 fred 1 site2 enc2 3.0 brian NaN NaN 2 site3 enc3 5.0 cody NaN NaN 3 data1 NaN NaN NaN 2.0 david 4 data2 NaN NaN NaN 4.0 edger Expected Output: sub iss rem own 0 site1 enc1 1 fred 1 site2 enc2 3 brian 2 site3 enc3 5 cody 3 data1 NaN 2 david 4 data2 NaN 4 edger
[ "A potential somewhat simple solution using pd.concat and loc to filter df1 to just contain records not present in df2 and then concat them together.\n# used to make use loc on index as it is a bit simpler.\ndf1 = df1.set_index('sub')\ndf2 = df2.set_index('sub')\n\nThen pd.concat them together.\ndf3 = pd.concat([df1[~df1.index.isin(df2.index)],df2])\n\nOutput:\nprint(df3)\n iss rem own\nsub \nsite2 enc2 3 brian\nsite3 enc3 5 cody\ndata1 NaN 2 david\ndata2 NaN 4 edger\nsite1 NaN 6 fred\n\nThis does not change the value of rem and iss for site1 to equal the value of df1 though.\nIf that is also needed you would you could just add an additional loc statement as a possible solution. Like this:\ndf3.loc[(df3.index.isin(df1.index.to_list())) & ~(df3['rem'].isin(df1['rem'].to_list())), ['iss','rem']] = df1[['iss','rem']]\n\nFinal Output\n iss rem own\nsub \nsite2 enc2 3 brian\nsite3 enc3 5 cody\ndata1 NaN 2 david\ndata2 NaN 4 edger\nsite1 enc1 1 fred\n\n", "Edit: changed to using update instead of fillna as per @bkeesey's comment\nyou need to merge on sub then update the new columns and drop the old ones\ntry\nimport pandas as pd\n\ndf1 = pd.DataFrame({'sub': ['site1', 'site2', 'site3'], 'iss': ['enc1', 'enc2', 'enc3'], 'rem': [1, 3, 5], 'own': ['andy', 'brian', 'cody']})\ndf2 = pd.DataFrame({'sub': ['data1', 'data2', 'site1'], 'rem': [2, 4, 6], 'own': ['david', 'edger', 'fred']})\n\ndfm = df1.merge(df2, on='sub', how='outer', suffixes=[\"_x\",\"\"])\n\ndfm.own.update(dfm.own_x)\ndfm.rem.update(dfm.rem_x)\n\ndel dfm[\"own_x\"]\ndel dfm[\"rem_x\"]\n\nresult\n sub iss rem own\n0 site1 enc1 6.0 fred\n1 site2 enc2 3.0 brian\n2 site3 enc3 5.0 cody\n3 data1 NaN 2.0 david\n4 data2 NaN 4.0 edger\n\n", "here is one way to do it\n\n# update the df1.own with the values for it in the df2\n# using map\ndf1['own'] = df1['sub'].map(df2.set_index('sub')['own']).fillna(df1['own'])\n\n\nout=(pd.concat([df1, df2]) # concat the two DF\n.drop_duplicates(subset=['sub']) # drop duplicates\n.reset_index() # reset index\n.drop(columns='index')) # remove the unwanted column\n\nout\n\n sub iss rem own\n0 site1 enc1 1 fred\n1 site2 enc2 3 brian\n2 site3 enc3 5 cody\n3 data1 NaN 2 david\n4 data2 NaN 4 edger\n\nalternately,\n# merge the two DF, and drop the duplicates\nout=(pd.concat([df1, df2])\n.drop_duplicates(subset=['sub'])\n.reset_index()\n.drop(columns='index'))\n\n# map the own in the resulting DF from concat\nout['own'] = out['sub'].map(df2.set_index('sub')['own']).fillna(out['own'])\nout\n\nsub iss rem own\n0 site1 enc1 1 fred\n1 site2 enc2 3 brian\n2 site3 enc3 5 cody\n3 data1 NaN 2 david\n4 data2 NaN 4 edger\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "merge", "pandas", "python" ]
stackoverflow_0074464231_dataframe_merge_pandas_python.txt
Q: How to copy files? How do I copy a file in Python? A: shutil has many methods you can use. One of which is: import shutil shutil.copyfile(src, dst) # 2nd option shutil.copy(src, dst) # dst can be a folder; use shutil.copy2() to preserve timestamp Copy the contents of the file named src to a file named dst. Both src and dst need to be the entire filename of the files, including path. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. With copy, src and dst are path names given as strs. Another shutil method to look at is shutil.copy2(). It's similar but preserves more metadata (e.g. time stamps). If you use os.path operations, use copy rather than copyfile. copyfile will only accept strings. A: Function Copiesmetadata Copiespermissions Uses file object Destinationmay be directory shutil.copy No Yes No Yes shutil.copyfile No No No No shutil.copy2 Yes Yes No Yes shutil.copyfileobj No No Yes No A: copy2(src,dst) is often more useful than copyfile(src,dst) because: it allows dst to be a directory (instead of the complete target filename), in which case the basename of src is used for creating the new file; it preserves the original modification and access info (mtime and atime) in the file metadata (however, this comes with a slight overhead). Here is a short example: import shutil shutil.copy2('/src/dir/file.ext', '/dst/dir/newname.ext') # complete target filename given shutil.copy2('/src/file.ext', '/dst/dir') # target filename is /dst/dir/file.ext A: In Python, you can copy the files using shutil module os module subprocess module import os import shutil import subprocess 1) Copying files using shutil module shutil.copyfile signature shutil.copyfile(src_file, dest_file, *, follow_symlinks=True) # example shutil.copyfile('source.txt', 'destination.txt') shutil.copy signature shutil.copy(src_file, dest_file, *, follow_symlinks=True) # example shutil.copy('source.txt', 'destination.txt') shutil.copy2 signature shutil.copy2(src_file, dest_file, *, follow_symlinks=True) # example shutil.copy2('source.txt', 'destination.txt') shutil.copyfileobj signature shutil.copyfileobj(src_file_object, dest_file_object[, length]) # example file_src = 'source.txt' f_src = open(file_src, 'rb') file_dest = 'destination.txt' f_dest = open(file_dest, 'wb') shutil.copyfileobj(f_src, f_dest) 2) Copying files using os module os.popen signature os.popen(cmd[, mode[, bufsize]]) # example # In Unix/Linux os.popen('cp source.txt destination.txt') # In Windows os.popen('copy source.txt destination.txt') os.system signature os.system(command) # In Linux/Unix os.system('cp source.txt destination.txt') # In Windows os.system('copy source.txt destination.txt') 3) Copying files using subprocess module subprocess.call signature subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False) # example (WARNING: setting `shell=True` might be a security-risk) # In Linux/Unix status = subprocess.call('cp source.txt destination.txt', shell=True) # In Windows status = subprocess.call('copy source.txt destination.txt', shell=True) subprocess.check_output signature subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False) # example (WARNING: setting `shell=True` might be a security-risk) # In Linux/Unix status = subprocess.check_output('cp source.txt destination.txt', shell=True) # In Windows status = 
subprocess.check_output('copy source.txt destination.txt', shell=True) A: You can use one of the copy functions from the shutil package: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Function preserves supports accepts copies other permissions directory dest. file obj metadata ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― shutil.copy ✔ ✔ ☐ ☐ shutil.copy2 ✔ ✔ ☐ ✔ shutil.copyfile ☐ ☐ ☐ ☐ shutil.copyfileobj ☐ ☐ ✔ ☐ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Example: import shutil shutil.copy('/etc/hostname', '/var/tmp/testhostname') A: Copying a file is a relatively straightforward operation as shown by the examples below, but you should instead use the shutil stdlib module for that. def copyfileobj_example(source, dest, buffer_size=1024*1024): """ Copy a file from source to dest. source and dest must be file-like objects, i.e. any object with a read or write method, like for example StringIO. """ while True: copy_buffer = source.read(buffer_size) if not copy_buffer: break dest.write(copy_buffer) If you want to copy by filename you could do something like this: def copyfile_example(source, dest): # Beware, this example does not handle any edge cases! with open(source, 'rb') as src, open(dest, 'wb') as dst: copyfileobj_example(src, dst) A: Use the shutil module. copyfile(src, dst) Copy the contents of the file named src to a file named dst. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. src and dst are path names given as strings. Take a look at filesys for all the file and directory handling functions available in standard Python modules. A: Directory and File copy example - From Tim Golden's Python Stuff: http://timgolden.me.uk/python/win32_how_do_i/copy-a-file.html import os import shutil import tempfile filename1 = tempfile.mktemp (".txt") open (filename1, "w").close () filename2 = filename1 + ".copy" print filename1, "=>", filename2 shutil.copy (filename1, filename2) if os.path.isfile (filename2): print "Success" dirname1 = tempfile.mktemp (".dir") os.mkdir (dirname1) dirname2 = dirname1 + ".copy" print dirname1, "=>", dirname2 shutil.copytree (dirname1, dirname2) if os.path.isdir (dirname2): print "Success" A: For small files and using only python built-ins, you can use the following one-liner: with open(source, 'rb') as src, open(dest, 'wb') as dst: dst.write(src.read()) This is not optimal way for applications where the file is too large or when memory is critical, thus Swati's answer should be preferred. A: Firstly, I made an exhaustive cheatsheet of shutil methods for your reference. 
shutil_methods = {'copy':['shutil.copyfileobj', 'shutil.copyfile', 'shutil.copymode', 'shutil.copystat', 'shutil.copy', 'shutil.copy2', 'shutil.copytree',], 'move':['shutil.rmtree', 'shutil.move',], 'exception': ['exception shutil.SameFileError', 'exception shutil.Error'], 'others':['shutil.disk_usage', 'shutil.chown', 'shutil.which', 'shutil.ignore_patterns',] } Secondly, explain methods of copy in examples: shutil.copyfileobj(fsrc, fdst[, length]) manipulate opened objects In [3]: src = '~/Documents/Head+First+SQL.pdf' In [4]: dst = '~/desktop' In [5]: shutil.copyfileobj(src, dst) AttributeError: 'str' object has no attribute 'read' #copy the file object In [7]: with open(src, 'rb') as f1,open(os.path.join(dst,'test.pdf'), 'wb') as f2: ...: shutil.copyfileobj(f1, f2) In [8]: os.stat(os.path.join(dst,'test.pdf')) Out[8]: os.stat_result(st_mode=33188, st_ino=8598319475, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067347, st_mtime=1516067335, st_ctime=1516067345) shutil.copyfile(src, dst, *, follow_symlinks=True) Copy and rename In [9]: shutil.copyfile(src, dst) IsADirectoryError: [Errno 21] Is a directory: ~/desktop' #so dst should be a filename instead of a directory name shutil.copy() Copy without preserving the metadata In [10]: shutil.copy(src, dst) Out[10]: ~/desktop/Head+First+SQL.pdf' #check their metadata In [25]: os.stat(src) Out[25]: os.stat_result(st_mode=33188, st_ino=597749, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516066425, st_mtime=1493698739, st_ctime=1514871215) In [26]: os.stat(os.path.join(dst, 'Head+First+SQL.pdf')) Out[26]: os.stat_result(st_mode=33188, st_ino=8598313736, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516066427, st_mtime=1516066425, st_ctime=1516066425) # st_atime,st_mtime,st_ctime changed shutil.copy2() Copy with preserving the metadata In [30]: shutil.copy2(src, dst) Out[30]: ~/desktop/Head+First+SQL.pdf' In [31]: os.stat(src) Out[31]: os.stat_result(st_mode=33188, st_ino=597749, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067055, st_mtime=1493698739, st_ctime=1514871215) In [32]: os.stat(os.path.join(dst, 'Head+First+SQL.pdf')) Out[32]: os.stat_result(st_mode=33188, st_ino=8598313736, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067063, st_mtime=1493698739, st_ctime=1516067055) # Preserved st_mtime shutil.copytree() Recursively copy an entire directory tree rooted at src, returning the destination directory A: shutil module offers some high-level operations on files. It supports file copying and removal. Refer to the table below for your use case. Function UtilizeFile Object Preserve FileMetadata Preserve Permissions Supports Directory Dest. shutil.copyfileobj ✔ ⅹ ⅹ ⅹ shutil.copyfile ⅹ ⅹ ⅹ ⅹ shutil.copy2 ⅹ ✔ ✔ ✔ shutil.copy ⅹ ⅹ ✔ ✔ A: You could use os.system('cp nameoffilegeneratedbyprogram /otherdirectory/') or as I did it, os.system('cp '+ rawfile + ' rawdata.dat') where rawfile is the name that I had generated inside the program. 
This is a Linux only solution A: As of Python 3.5 you can do the following for small files (ie: text files, small jpegs): from pathlib import Path source = Path('../path/to/my/file.txt') destination = Path('../path/where/i/want/to/store/it.txt') destination.write_bytes(source.read_bytes()) write_bytes will overwrite whatever was at the destination's location A: For large files, what I did was read the file line by line and read each line into an array. Then, once the array reached a certain size, append it to a new file. for line in open("file.txt", "r"): list.append(line) if len(list) == 1000000: output.writelines(list) del list[:] A: Use subprocess.call to copy the file from subprocess import call call("cp -p <file> <file>", shell=True) A: open(destination, 'wb').write(open(source, 'rb').read()) Open the source file in read mode, and write to destination file in write mode. A: In case you've come this far down. The answer is that you need the entire path and file name import os shutil.copy(os.path.join(old_dir, file), os.path.join(new_dir, file)) A: Here is a simple way to do it, without any module. It's similar to this answer, but has the benefit to also work if it's a big file that doesn't fit in RAM: with open('sourcefile', 'rb') as f, open('destfile', 'wb') as g: while True: block = f.read(16*1024*1024) # work by blocks of 16 MB if not block: # end of file break g.write(block) Since we're writing a new file, it does not preserve the modification time, etc. We can then use os.utime for this if needed. A: Similar to the accepted answer, the following code block might come in handy if you also want to make sure to create any (non-existent) folders in the path to the destination. from os import path, makedirs from shutil import copyfile makedirs(path.dirname(path.abspath(destination_path)), exist_ok=True) copyfile(source_path, destination_path) As the accepted answers notes, these lines will overwrite any file which exists at the destination path, so sometimes it might be useful to also add: if not path.exists(destination_path): before this code block. A: You can use system. For *nix systems import os copy_file = lambda src_file, dest: os.system(f"cp {src_file} {dest}") copy_file("./file", "../new_dir/file")
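A small runnable demonstration (file names are temporary placeholders of mine) of the metadata difference between shutil.copy and shutil.copy2 discussed in the answers:
import os
import shutil
import tempfile

src = os.path.join(tempfile.mkdtemp(), "src.txt")
open(src, "w").close()
os.utime(src, (0, 0))  # pretend the file was modified at the epoch

plain = shutil.copy(src, src + ".copy")    # mtime becomes "now"
meta = shutil.copy2(src, src + ".copy2")   # mtime preserved from src

print(os.path.getmtime(plain), os.path.getmtime(meta))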
How to copy files?
How do I copy a file in Python?
[ "shutil has many methods you can use. One of which is:\nimport shutil\n\nshutil.copyfile(src, dst)\n\n# 2nd option\nshutil.copy(src, dst) # dst can be a folder; use shutil.copy2() to preserve timestamp\n\n\nCopy the contents of the file named src to a file named dst. Both src and dst need to be the entire filename of the files, including path.\nThe destination location must be writable; otherwise, an IOError exception will be raised.\nIf dst already exists, it will be replaced.\nSpecial files such as character or block devices and pipes cannot be copied with this function.\nWith copy, src and dst are path names given as strs.\n\nAnother shutil method to look at is shutil.copy2(). It's similar but preserves more metadata (e.g. time stamps).\nIf you use os.path operations, use copy rather than copyfile. copyfile will only accept strings.\n", "\n\n\n\nFunction\nCopiesmetadata\nCopiespermissions\nUses file object\nDestinationmay be directory\n\n\n\n\nshutil.copy\nNo\nYes\nNo\nYes\n\n\nshutil.copyfile\nNo\nNo\nNo\nNo\n\n\nshutil.copy2\nYes\nYes\nNo\nYes\n\n\nshutil.copyfileobj\nNo\nNo\nYes\nNo\n\n\n\n", "copy2(src,dst) is often more useful than copyfile(src,dst) because:\n\nit allows dst to be a directory (instead of the complete target filename), in which case the basename of src is used for creating the new file;\nit preserves the original modification and access info (mtime and atime) in the file metadata (however, this comes with a slight overhead).\n\nHere is a short example:\nimport shutil\nshutil.copy2('/src/dir/file.ext', '/dst/dir/newname.ext') # complete target filename given\nshutil.copy2('/src/file.ext', '/dst/dir') # target filename is /dst/dir/file.ext\n\n", "In Python, you can copy the files using\n\nshutil module\nos module\nsubprocess module\n\n\nimport os\nimport shutil\nimport subprocess\n\n\n1) Copying files using shutil module\nshutil.copyfile signature\nshutil.copyfile(src_file, dest_file, *, follow_symlinks=True)\n\n# example \nshutil.copyfile('source.txt', 'destination.txt')\n\n\nshutil.copy signature\nshutil.copy(src_file, dest_file, *, follow_symlinks=True)\n\n# example\nshutil.copy('source.txt', 'destination.txt')\n\n\nshutil.copy2 signature\nshutil.copy2(src_file, dest_file, *, follow_symlinks=True)\n\n# example\nshutil.copy2('source.txt', 'destination.txt') \n\n\nshutil.copyfileobj signature\nshutil.copyfileobj(src_file_object, dest_file_object[, length])\n\n# example\nfile_src = 'source.txt' \nf_src = open(file_src, 'rb')\n\nfile_dest = 'destination.txt' \nf_dest = open(file_dest, 'wb')\n\nshutil.copyfileobj(f_src, f_dest) \n\n\n2) Copying files using os module\nos.popen signature\nos.popen(cmd[, mode[, bufsize]])\n\n# example\n# In Unix/Linux\nos.popen('cp source.txt destination.txt') \n\n# In Windows\nos.popen('copy source.txt destination.txt')\n\n\nos.system signature\nos.system(command)\n\n\n# In Linux/Unix\nos.system('cp source.txt destination.txt') \n\n# In Windows\nos.system('copy source.txt destination.txt')\n\n\n3) Copying files using subprocess module\nsubprocess.call signature\nsubprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False)\n\n# example (WARNING: setting `shell=True` might be a security-risk)\n# In Linux/Unix\nstatus = subprocess.call('cp source.txt destination.txt', shell=True) \n\n# In Windows\nstatus = subprocess.call('copy source.txt destination.txt', shell=True)\n\n\nsubprocess.check_output signature\nsubprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False)\n\n# example (WARNING: 
setting `shell=True` might be a security-risk)\n# In Linux/Unix\nstatus = subprocess.check_output('cp source.txt destination.txt', shell=True)\n\n# In Windows\nstatus = subprocess.check_output('copy source.txt destination.txt', shell=True)\n\n\n", "You can use one of the copy functions from the shutil package:\n\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\nFunction preserves supports accepts copies other\n permissions directory dest. file obj metadata \n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\nshutil.copy ✔ ✔ ☐ ☐\nshutil.copy2 ✔ ✔ ☐ ✔\nshutil.copyfile ☐ ☐ ☐ ☐\nshutil.copyfileobj ☐ ☐ ✔ ☐\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\nExample:\nimport shutil\nshutil.copy('/etc/hostname', '/var/tmp/testhostname')\n\n", "Copying a file is a relatively straightforward operation as shown by the examples below, but you should instead use the shutil stdlib module for that.\ndef copyfileobj_example(source, dest, buffer_size=1024*1024):\n \"\"\" \n Copy a file from source to dest. source and dest\n must be file-like objects, i.e. any object with a read or\n write method, like for example StringIO.\n \"\"\"\n while True:\n copy_buffer = source.read(buffer_size)\n if not copy_buffer:\n break\n dest.write(copy_buffer)\n\nIf you want to copy by filename you could do something like this:\ndef copyfile_example(source, dest):\n # Beware, this example does not handle any edge cases!\n with open(source, 'rb') as src, open(dest, 'wb') as dst:\n copyfileobj_example(src, dst)\n\n", "Use the shutil module.\ncopyfile(src, dst)\n\nCopy the contents of the file named src to a file named dst. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. 
src and dst are path names given as strings.\nTake a look at filesys for all the file and directory handling functions available in standard Python modules.\n", "Directory and File copy example - From Tim Golden's Python Stuff:\nhttp://timgolden.me.uk/python/win32_how_do_i/copy-a-file.html\nimport os\nimport shutil\nimport tempfile\n\nfilename1 = tempfile.mktemp (\".txt\")\nopen (filename1, \"w\").close ()\nfilename2 = filename1 + \".copy\"\nprint filename1, \"=>\", filename2\n\nshutil.copy (filename1, filename2)\n\nif os.path.isfile (filename2): print \"Success\"\n\ndirname1 = tempfile.mktemp (\".dir\")\nos.mkdir (dirname1)\ndirname2 = dirname1 + \".copy\"\nprint dirname1, \"=>\", dirname2\n\nshutil.copytree (dirname1, dirname2)\n\nif os.path.isdir (dirname2): print \"Success\"\n\n", "For small files and using only python built-ins, you can use the following one-liner:\nwith open(source, 'rb') as src, open(dest, 'wb') as dst: dst.write(src.read())\n\nThis is not optimal way for applications where the file is too large or when memory is critical, thus Swati's answer should be preferred.\n", "Firstly, I made an exhaustive cheatsheet of shutil methods for your reference.\nshutil_methods =\n{'copy':['shutil.copyfileobj',\n 'shutil.copyfile',\n 'shutil.copymode',\n 'shutil.copystat',\n 'shutil.copy',\n 'shutil.copy2',\n 'shutil.copytree',],\n 'move':['shutil.rmtree',\n 'shutil.move',],\n 'exception': ['exception shutil.SameFileError',\n 'exception shutil.Error'],\n 'others':['shutil.disk_usage',\n 'shutil.chown',\n 'shutil.which',\n 'shutil.ignore_patterns',]\n}\n\nSecondly, explain methods of copy in examples:\n\n\nshutil.copyfileobj(fsrc, fdst[, length]) manipulate opened objects\n\n\nIn [3]: src = '~/Documents/Head+First+SQL.pdf'\nIn [4]: dst = '~/desktop'\nIn [5]: shutil.copyfileobj(src, dst)\nAttributeError: 'str' object has no attribute 'read'\n#copy the file object\nIn [7]: with open(src, 'rb') as f1,open(os.path.join(dst,'test.pdf'), 'wb') as f2:\n ...: shutil.copyfileobj(f1, f2)\nIn [8]: os.stat(os.path.join(dst,'test.pdf'))\nOut[8]: os.stat_result(st_mode=33188, st_ino=8598319475, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067347, st_mtime=1516067335, st_ctime=1516067345)\n\n\n\nshutil.copyfile(src, dst, *, follow_symlinks=True) Copy and rename\n\n\nIn [9]: shutil.copyfile(src, dst)\nIsADirectoryError: [Errno 21] Is a directory: ~/desktop'\n#so dst should be a filename instead of a directory name\n\n\n\nshutil.copy() Copy without preserving the metadata\n\n\nIn [10]: shutil.copy(src, dst)\nOut[10]: ~/desktop/Head+First+SQL.pdf'\n#check their metadata\nIn [25]: os.stat(src)\nOut[25]: os.stat_result(st_mode=33188, st_ino=597749, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516066425, st_mtime=1493698739, st_ctime=1514871215)\nIn [26]: os.stat(os.path.join(dst, 'Head+First+SQL.pdf'))\nOut[26]: os.stat_result(st_mode=33188, st_ino=8598313736, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516066427, st_mtime=1516066425, st_ctime=1516066425)\n# st_atime,st_mtime,st_ctime changed\n\n\n\nshutil.copy2() Copy with preserving the metadata\n\n\nIn [30]: shutil.copy2(src, dst)\nOut[30]: ~/desktop/Head+First+SQL.pdf'\nIn [31]: os.stat(src)\nOut[31]: os.stat_result(st_mode=33188, st_ino=597749, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067055, st_mtime=1493698739, st_ctime=1514871215)\nIn [32]: os.stat(os.path.join(dst, 'Head+First+SQL.pdf'))\nOut[32]: 
os.stat_result(st_mode=33188, st_ino=8598313736, st_dev=16777220, st_nlink=1, st_uid=501, st_gid=20, st_size=13507926, st_atime=1516067063, st_mtime=1493698739, st_ctime=1516067055)\n# Preserved st_mtime\n\n\n\nshutil.copytree()\n\n\nRecursively copy an entire directory tree rooted at src, returning the destination directory\n", "The shutil module offers some high-level operations on files. It supports file copying and removal.\nRefer to the table below for your use case.\n\n\n\n\nFunction\nUtilizes File Object\nPreserves File Metadata\nPreserves Permissions\nSupports Directory Dest.\n\n\n\n\nshutil.copyfileobj\n✔\nⅹ\nⅹ\nⅹ\n\n\nshutil.copyfile\nⅹ\nⅹ\nⅹ\nⅹ\n\n\nshutil.copy2\nⅹ\n✔\n✔\n✔\n\n\nshutil.copy\nⅹ\nⅹ\n✔\n✔\n\n\n\n", "You could use os.system('cp nameoffilegeneratedbyprogram /otherdirectory/')\nor as I did it,\nos.system('cp '+ rawfile + ' rawdata.dat')\n\nwhere rawfile is the name that I had generated inside the program.\nThis is a Linux-only solution.\n", "As of Python 3.5 you can do the following for small files (i.e. text files, small jpegs):\nfrom pathlib import Path\n\nsource = Path('../path/to/my/file.txt')\ndestination = Path('../path/where/i/want/to/store/it.txt')\ndestination.write_bytes(source.read_bytes())\n\nwrite_bytes will overwrite whatever was at the destination's location\n", "For large files, what I did was read the file line by line and collect the lines in a list. Then, once the list reached a certain size, I wrote it out to a new file (output below is an already opened, writable file object), flushing whatever is left over at the end.\nlines = []\nfor line in open(\"file.txt\", \"r\"):\n lines.append(line)\n if len(lines) == 1000000:\n output.writelines(lines)\n del lines[:]\nif lines: # write any remaining lines\n output.writelines(lines)\n\n", "Use subprocess.call to copy the file\nfrom subprocess import call\ncall(\"cp -p <file> <file>\", shell=True)\n\n", "open(destination, 'wb').write(open(source, 'rb').read())\n\nOpen the source file in read mode, and write to the destination file in write mode.\n", "In case you've come this far down: the answer is that you need the entire path and file name\nimport os\nimport shutil\n\nshutil.copy(os.path.join(old_dir, file), os.path.join(new_dir, file))\n\n", "Here is a simple way to do it, without any module. It's similar to this answer, but has the benefit of also working if it's a big file that doesn't fit in RAM:\nwith open('sourcefile', 'rb') as f, open('destfile', 'wb') as g:\n while True:\n block = f.read(16*1024*1024) # work by blocks of 16 MB\n if not block: # end of file\n break\n g.write(block)\n\nSince we're writing a new file, it does not preserve the modification time, etc.\nWe can then use os.utime for this if needed.\n", "Similar to the accepted answer, the following code block might come in handy if you also want to make sure to create any (non-existent) folders in the path to the destination.\nfrom os import path, makedirs\nfrom shutil import copyfile\nmakedirs(path.dirname(path.abspath(destination_path)), exist_ok=True)\ncopyfile(source_path, destination_path)\n\nAs the accepted answer notes, these lines will overwrite any file which exists at the destination path, so sometimes it might be useful to also add: if not path.exists(destination_path): before this code block.\n", "You can use system.\nFor *nix systems\nimport os\n\ncopy_file = lambda src_file, dest: os.system(f\"cp {src_file} {dest}\")\n\ncopy_file(\"./file\", \"../new_dir/file\")\n\n" ]
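To round off the copytree discussion above, here is a minimal sketch, assuming Python 3.8+ (where the dirs_exist_ok flag was added) and hypothetical src_dir/dst_dir paths:
import shutil

# Copy an entire directory tree; with dirs_exist_ok=True the call no longer
# fails when the destination directory already exists (Python 3.8+).
shutil.copytree("src_dir", "dst_dir", dirs_exist_ok=True)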
[ 4312, 1954, 914, 235, 178, 108, 81, 49, 37, 33, 23, 20, 20, 13, 13, 13, 9, 8, 5, 1 ]
[ "Here is answer utilizing \" shutil.copyfileobj\" and is highly efficient. I used it in a tool I created some time ago. I didn't wrote this originally but tweaked it a little bit.\ndef copyFile(src, dst, buffer_size=10485760, perserveFileDate=True):\n '''\n @param src: Source File\n @param dst: Destination File (not file path)\n @param buffer_size: Buffer size to use during copy\n @param perserveFileDate: Preserve the original file date\n '''\n # Check to make sure destination directory exists. If it doesn't create the directory\n dstParent, dstFileName = os.path.split(dst)\n if(not(os.path.exists(dstParent))):\n os.makedirs(dstParent)\n \n # Optimize the buffer for small files\n buffer_size = min(buffer_size,os.path.getsize(src))\n if(buffer_size == 0):\n buffer_size = 1024\n \n if shutil._samefile(src, dst):\n raise shutil.Error(\"`%s` and `%s` are the same file\" % (src, dst))\n for fn in [src, dst]:\n try:\n st = os.stat(fn)\n except OSError:\n # File most likely does not exist\n pass\n else:\n # XXX What about other special files? (sockets, devices...)\n if shutil.stat.S_ISFIFO(st.st_mode):\n raise shutil.SpecialFileError(\"`%s` is a named pipe\" % fn)\n with open(src, 'rb') as fsrc:\n with open(dst, 'wb') as fdst:\n shutil.copyfileobj(fsrc, fdst, buffer_size)\n \n if(perserveFileDate):\n shutil.copystat(src, dst)\n\n", "There are so many answers already, that I decided to add a different one.\nYou can use os.link to create a hard link to a file:\nos.link(source, dest)\n\nThis is not an independent clone, but if you plan to only read (not modify) the new file and its content must remain the same as the original, this will work well. It also has a benefit that if you want to check whether the copy already exists, you can compare the hard links (with os.stat) instead of their content.\n", "Python provides in-built functions for easily copying files using the Operating System Shell utilities.\nFollowing command is used to Copy File\nshutil.copy(src,dst)\n\nFollowing command is used to Copy File with MetaData Information\nshutil.copystat(src,dst)\n\n", "shutil.copy(src, dst, *, follow_symlinks=True)\n" ]
[ -2, -2, -3, -6 ]
[ "copy", "file", "file_copying", "filesystems", "python" ]
stackoverflow_0000123198_copy_file_file_copying_filesystems_python.txt
Q: Issues with facial recognition with sklearn svm I am working on a school project to make a facial recognition program using Python. I am using the face_recognition and scikit-learn libraries. However, I am facing some issues. Here is my code: """ Structure: <Data>/ <person_1>/ <person_1_face-1>.jpg <person_1_face-2>.jpg . . <person_1_face-n>.jpg <person_2>/ <person_2_face-1>.jpg <person_2_face-2>.jpg . . <person_2_face-n>.jpg . . <person_n>/ <person_n_face-1>.jpg <person_n_face-2>.jpg . . <person_n_face-n>.jpg """ import os import cv2 import face_recognition import numpy as np from sklearn import svm IMG_DATA_DIR = "Data" class_names = [] encodings = [] image_dirs = os.listdir(IMG_DATA_DIR) # Loop through each person in the training directory for img_dir in image_dirs: img_files = os.listdir(f"{IMG_DATA_DIR}/{img_dir}") # Loop through each training image for the current person for img_file in img_files: # Get the face encodings for the face in each image file img = face_recognition.load_image_file(f"{IMG_DATA_DIR}/{img_dir}/{img_file}") class_names.append(os.path.splitext(img_dir)[0]) img_encoding = face_recognition.face_encodings(img)[0] encodings.append(img_encoding) clf = svm.SVC(gamma="scale") clf.fit(encodings, class_names) # Initializing webcam camera = cv2.VideoCapture(0) process_this_frame = True while True: success, img = camera.read() if process_this_frame: img_small = cv2.resize(img, (0, 0), None, 0.50, 0.50) img_small = cv2.cvtColor(img_small, cv2.COLOR_BGR2RGB) camera_faces_loc = face_recognition.face_locations(img_small) camera_encodings = face_recognition.face_encodings(img_small, camera_faces_loc) face_names = [] for encoding in camera_encodings: # loop through each face encodings visible in the camera frame # predict the names of the faces currently visible in the frame using clf.predict name = clf.predict([encoding]) print(name) face_names.extend(name) process_this_frame = not process_this_frame for (top, right, bottom, left), name in zip(camera_faces_loc, face_names): top *= 2 right *= 2 bottom *= 2 left *= 2 cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2) cv2.rectangle( img, (left, bottom - 35), (right, bottom), (0, 255, 0), cv2.FILLED ) font = cv2.FONT_HERSHEY_DUPLEX cv2.putText(img, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1) cv2.imshow("WebCam", img) if cv2.waitKey(1) & 0xFF == ord("q"): break camera.release() cv2.destroyAllWindows() As the code above suggests, my aim here is to supply multiple images of the same person to the model so that it gets better over time. So far, I am facing two major issues. Issue 1: If I have only one picture of the same person in their corresponding directories, the classifier is able to predict the name of the person(s) visible in the camera frame. However, if I add a second image to one of the directories (while keeping the other directories with only one image), the classifier predicts every face in the camera frame to be the person who had two images in his/her directory. For example, if person A has two images under his name in his directory while person B only has one, the classifier will predict person B to be person A (not only person B, the classifier will predict anyone to be person A). What is causing this issue? Having multiple images for the same person is a big reason I am using the SVM classifier. Issue 2: If I show someone's face whose picture was not in the original training data directories, the classifier still randomly predicts this unknown person to be one of the known persons.
For example, if I have person A to C in my training directories, and I show a completely unknown person D, the classifier, for some reason, randomly predicts the unknown person to be either person A, B, or C. How should I deal with this? How should I get the classifier to notify me in some way that the person currently in the camera frame is not known, so that I can appropriately handle this? Thanks! A: My opinion is that both your issues stem from here: face_names = [] for encoding in camera_encodings: name = clf.predict([encoding]) print(name) face_names.extend(name) There is no tolerance threshold being set, so it either finds the proper image as the most similar first... or any other one just as readily. So, if you are obliged to use sklearn, I've found that clf = svm.SVC(gamma="scale", tol=0.001) has a parameter tol. Try setting it higher or lower than the default value (0.001).
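One hedged sketch for the unknown-person problem (Issue 2), assuming the same encodings and class_names lists built above: compare the live encoding against the known training encodings with face_recognition.face_distance and treat anything farther than a distance threshold (0.6 is the commonly cited default for this library) as unknown. The threshold value and the helper name are assumptions for illustration, not part of the original code.
import numpy as np
import face_recognition

def predict_or_unknown(encoding, known_encodings, clf, threshold=0.6):
    # Distance from the live face to every known training encoding
    distances = face_recognition.face_distance(np.array(known_encodings), encoding)
    if distances.min() > threshold:
        return "Unknown"  # no training face is close enough
    return clf.predict([encoding])[0]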
Issues with facial recognition with sklearn svm
I am working on a school project to make a facial recognition program using Python. I am using the face_recognition and scikit-learn libraries. However, I am facing some issues. Here is my code: """ Structure: <Data>/ <person_1>/ <person_1_face-1>.jpg <person_1_face-2>.jpg . . <person_1_face-n>.jpg <person_2>/ <person_2_face-1>.jpg <person_2_face-2>.jpg . . <person_2_face-n>.jpg . . <person_n>/ <person_n_face-1>.jpg <person_n_face-2>.jpg . . <person_n_face-n>.jpg """ import os import cv2 import face_recognition import numpy as np from sklearn import svm IMG_DATA_DIR = "Data" class_names = [] encodings = [] image_dirs = os.listdir(IMG_DATA_DIR) # Loop through each person in the training directory for img_dir in image_dirs: img_files = os.listdir(f"{IMG_DATA_DIR}/{img_dir}") # Loop through each training image for the current person for img_file in img_files: # Get the face encodings for the face in each image file img = face_recognition.load_image_file(f"{IMG_DATA_DIR}/{img_dir}/{img_file}") class_names.append(os.path.splitext(img_dir)[0]) img_encoding = face_recognition.face_encodings(img)[0] encodings.append(img_encoding) clf = svm.SVC(gamma="scale") clf.fit(encodings, class_names) # Initializing webcam camera = cv2.VideoCapture(0) process_this_frame = True while True: success, img = camera.read() if process_this_frame: img_small = cv2.resize(img, (0, 0), None, 0.50, 0.50) img_small = cv2.cvtColor(img_small, cv2.COLOR_BGR2RGB) camera_faces_loc = face_recognition.face_locations(img_small) camera_encodings = face_recognition.face_encodings(img_small, camera_faces_loc) face_names = [] for encoding in camera_encodings: # loop through each face encodings visible in the camera frame # predict the names of the faces currently visible in the frame using clf.predict name = clf.predict([encoding]) print(name) face_names.extend(name) process_this_frame = not process_this_frame for (top, right, bottom, left), name in zip(camera_faces_loc, face_names): top *= 2 right *= 2 bottom *= 2 left *= 2 cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2) cv2.rectangle( img, (left, bottom - 35), (right, bottom), (0, 255, 0), cv2.FILLED ) font = cv2.FONT_HERSHEY_DUPLEX cv2.putText(img, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1) cv2.imshow("WebCam", img) if cv2.waitKey(1) & 0xFF == ord("q"): break camera.release() cv2.destroyAllWindows() As the code above suggests, my aim here is to supply multiple images of the same person to the model so that it gets better over time. So far, I am facing two major issues. Issue 1: If I have only one picture of the same person in their corresponding directories, the classifier is able to predict the name of the person(s) visible in the camera frame. However, if I add a second image to one of the directories (while keeping the other directories with only one image), the classifier predicts every face in the camera frame to be the person who had two images in his/her directory. For example, if person A has two images under his name in his directory while person B only has one, the classifier will predict person B to be person A (not only person B, the classifier will predict anyone to be person A). What is causing this issue? Having multiple images for the same person is a big reason I am using the SVM classifier. Issue 2: If I show someone's face whose picture was not in the original training data directories, the classifier still randomly predicts this unknown person to be one of the known persons.
For example, if I have person A to C in my training directories, and I show a completely unknown person D, the classifier, for some reason, randomly predicts the unknown person to be either person A, B, or C. How should I deal with this? How should I get the classifier to notify me in some way that the person currently in the camera frame is not known, so that I can appropriately handle this? Thanks!
[ "My oppinion is that both your issues are going from here:\nface_names = []\nfor encoding in camera_encodings:\n name = clf.predict([encoding])\n print(name)\n face_names.extend(name)\n\nThere is no tolerance treshold being set, so either it founds the proper image as the most similar first .. or then any other too.\nSo, if you are obliged to use that sklearn.. I've found that\nclf = svm.SVC(gamma=\"scale\", tol=0.001)\n\nhas parametr tol. Try to set it high or lower then default value (0.001).\n" ]
[ 0 ]
[]
[]
[ "face_recognition", "python", "scikit_learn", "svm" ]
stackoverflow_0074461674_face_recognition_python_scikit_learn_svm.txt
Q: How can I deal with this "Error: list index out of range"? I'm a beginner and I have a script that searches for videos on YouTube by search query with the youtube-search-python package. How can I catch this error: Error: list index out of range? Code: async def search(self, search: str): results = await asyncio.gather(*[self.to_search(search, page=i) for i in range(10)]) count = sum([len(result['hits']) for result in results]) print(f"Count of tracks from {search}: {count} ") return results from youtubesearchpython import VideosSearch class YoutubeSample: def get_link(self, *args): try: producer_name, search = args[0] videos_search = VideosSearch(producer_name + search, limit=1).result() print("Youtube link for: ", producer_name, search, "Name of video: ", videos_search['result'][0]['title']) return videos_search except Exception as e: print("Error: ", e) return None Output: Youtube link for: PewDiePie Error: list index out of range Youtube link for: Mr Beat Error: list index out of range A: If the list is empty, you can check for that by checking the length of the results list with something like this: if len(videos_search['result']) == 0: print("No results found") else: # run the code
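As a sketch of an alternative to the length check above, the specific IndexError can be caught instead of the broad Exception, so that "no results" is distinguished from other failures (the first_title helper name is assumed for illustration):
from youtubesearchpython import VideosSearch

def first_title(query):
    videos_search = VideosSearch(query, limit=1).result()
    try:
        # An empty 'result' list raises IndexError on [0]
        return videos_search['result'][0]['title']
    except IndexError:
        return None  # the search returned no results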
How can I deal with this "Error: list index out of range"?
I'm a beginner and I have a script that searches for videos on YouTube by search query with the youtube-search-python package. How can I catch this error: Error: list index out of range? Code: async def search(self, search: str): results = await asyncio.gather(*[self.to_search(search, page=i) for i in range(10)]) count = sum([len(result['hits']) for result in results]) print(f"Count of tracks from {search}: {count} ") return results from youtubesearchpython import VideosSearch class YoutubeSample: def get_link(self, *args): try: producer_name, search = args[0] videos_search = VideosSearch(producer_name + search, limit=1).result() print("Youtube link for: ", producer_name, search, "Name of video: ", videos_search['result'][0]['title']) return videos_search except Exception as e: print("Error: ", e) return None Output: Youtube link for: PewDiePie Error: list index out of range Youtube link for: Mr Beat Error: list index out of range
[ "If the list is empty, you can check that by checking the length of the list of results through something like this:\nif videos_search['result'].length == 0:\n print(\"No results found\")\nelse:\n # run the code\n\n" ]
[ 1 ]
[]
[]
[ "python", "youtube" ]
stackoverflow_0074463305_python_youtube.txt
Q: Need help performing multiple api requests at once in python I am using a roblox api to determine whether a list of users owns a limited item or does not own it. Here is my code: import requests import json user_ids = ["115687329", "1427501340", "508866135"] with open("myfile.txt", "w") as file: for user_id in user_ids: response = requests.get( f"https://inventory.roblox.com/v1/users/{user_id}/items/Asset/1744060292/is-owned" ) data = response.text info = f"{data}:{user_id}\n" file.write(info) I am trying to make it so that it checks if the user ids listed in the code own the limited item or not, and then print the response in a txt file in a {data}:{user_id} format. I ran the code and for some reason it is only printing the result of the last user id in the list. Any idea of how I can fix this? A: The problem is that the last 3 lines of code need to be indented so that they're inside the for loop. Also worth mentioning that this kind of processing is well-suited to multi-threading for improved performance. import requests from concurrent.futures import ThreadPoolExecutor user_ids = ["115687329", "1427501340", "508866135"] def process(uid): (r := requests.get(f"https://inventory.roblox.com/v1/users/{uid}/items/Asset/1744060292/is-owned")).raise_for_status() return r.text, uid with ThreadPoolExecutor() as executor: with open('myfile.txt', 'w') as myfile: for data, uid in executor.map(process, user_ids): print(f'{data}:{uid}', file=myfile) Alternative version: In this case the list of user IDs is in a file (e.g., uidlist.txt). import requests from concurrent.futures import ThreadPoolExecutor def process(uid): (r := requests.get(f"https://inventory.roblox.com/v1/users/{uid}/items/Asset/1744060292/is-owned")).raise_for_status() return r.text, uid with ThreadPoolExecutor() as executor: with open('myfile.txt', 'w') as myfile, open('uidlist.txt') as uidfile: for data, uid in executor.map(process, map(str.strip, uidfile)): print(f'{data}:{uid}', file=myfile)
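For reference, a minimal sketch of the single-threaded fix the answer describes, with the formatting and the write indented inside the for loop so every user ID is written:
import requests

user_ids = ["115687329", "1427501340", "508866135"]

with open("myfile.txt", "w") as file:
    for user_id in user_ids:
        response = requests.get(
            f"https://inventory.roblox.com/v1/users/{user_id}/items/Asset/1744060292/is-owned"
        )
        # These lines must stay inside the loop body
        data = response.text
        file.write(f"{data}:{user_id}\n")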
Need help performing multiple api requests at once in python
I am using a roblox api to determine whether a list of users owns a limited item or does not own it. Here is my code: import requests import json user_ids = ["115687329", "1427501340", "508866135"] with open("myfile.txt", "w") as file: for user_id in user_ids: response = requests.get( f"https://inventory.roblox.com/v1/users/{user_id}/items/Asset/1744060292/is-owned" ) data = response.text info = f"{data}:{user_id}\n" file.write(info) I am trying to make it so that it checks if the user ids listed in the code own the limited item or not, and then print the response in a txt file in a {data}:{user_id} format. I ran the code and for some reason it is only printing the result of the last user id in the list. Any idea of how I can fix this?
[ "The problem is that the last 3 lines of code need to be indented so that they're inside the for loop.\nAlso worth mentioning that this kind of processing is well-suited to multi-threading for improved performance.\nimport requests\nfrom concurrent.futures import ThreadPoolExecutor\n\nuser_ids = [\"115687329\", \"1427501340\", \"508866135\"]\n\ndef process(uid):\n (r := requests.get(f\"https://inventory.roblox.com/v1/users/{uid}/items/Asset/1744060292/is-owned\")).raise_for_status()\n return r.text, uid\n\nwith ThreadPoolExecutor() as executor:\n with open('myfile.txt', 'w') as myfile:\n for data, uid in executor.map(process, user_ids):\n print(f'{data}:{uid}', file=myfile)\n\nAlternative version:\nIn this case the list of user IDs are in a file (e.g., uidlist.txt).\nimport requests\nfrom concurrent.futures import ThreadPoolExecutor\n\ndef process(uid):\n (r := requests.get(f\"https://inventory.roblox.com/v1/users/{uid}/items/Asset/1744060292/is-owned\")).raise_for_status()\n return r.text, uid\n\nwith ThreadPoolExecutor() as executor:\n with open('myfile.txt', 'w') as myfile, open('uidlist.txt') as uidfile:\n for data, uid in executor.map(process, map(str.strip, uidfile)):\n print(f'{data}:{uid}', file=myfile)\n\n" ]
[ 0 ]
[]
[]
[ "api", "python" ]
stackoverflow_0074465689_api_python.txt
Q: Conditions and while loop not working properly py(tkinter) I'm making a Guess the Number game but I have a problem: the user must guess a certain number, and if the number of wrong guesses exceeds a limit, the game ends and the user's status is determined. So I created a variable named sam and did this sam = 0 And then I made a loop with while and said: while sam < 10: Every wrong guess is supposed to add one to sam, but the problem is that a single wrong guess produces this: >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>you lose That is, it repeats the same message until sam reaches 10. I don't know what to do. My code: from tkinter import * sam = 0 def get_1(): global correct correct = int(player1.get()) player1.pack_forget() sumbit.pack_forget() def Guess(): global sam a = int(player_2.get()) while sam < 10: if a == correct: break elif a > correct: print(a,"is higher") sam += 1 elif a < correct: print(a,"is lower") sam += 1 if a == correct: print("you win") else: print("you lose") app = Tk() player1 = Entry(app,font=20) app.minsize(300,300) player1.pack() player_2 = Entry(app,font=20) player_2.pack() sumbit = Button(app,font=10,text="player 1 sumbit",command=get_1) sumbit.pack() guss = Button(app,text="player 2 guess number",font=20,command=Guess) guss.pack() app.mainloop() A: Writing GUI programs requires a different mindset than writing non-GUI programs. A loop like the one you created is not appropriate for a GUI. The way GUIs work is that you define functions to be called in response to events, and then the event manager (mainloop) is a loop that waits for events and then dispatches them to the handlers. So, in your case Guess should only handle a single guess and not try to loop over a range of guesses. It can keep a global counter for the number of guesses, and update it each time Guess is called. It might look something like this: def Guess(): global guesses global sam a = int(player_2.get()) player_2.delete(0, "end") guesses += 1 if a == correct: print("you win") elif guesses == 10: print("you lose") elif a > correct: print(a,"is higher") elif a < correct: print(a,"is lower") You need to initialize guesses to zero when player 1 submits the value, since that designates the start of the game. def get_1(): global correct global guesses correct = int(player1.get()) guesses = 0 player1.pack_forget() sumbit.pack_forget()
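A hedged variant of the answer's approach, written as a small class so the correct value and the guess counter live on an instance instead of in globals; the widget wiring (the Entry fields and Button commands) is assumed to stay as in the question:
class GuessGame:
    def __init__(self, limit=10):
        self.correct = None
        self.guesses = 0
        self.limit = limit

    def submit(self, value):  # player 1 sets the number
        self.correct = int(value)
        self.guesses = 0

    def guess(self, value):  # player 2 guesses, one call per button click
        a = int(value)
        self.guesses += 1
        if a == self.correct:
            print("you win")
        elif self.guesses >= self.limit:
            print("you lose")
        elif a > self.correct:
            print(a, "is higher")
        else:
            print(a, "is lower")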
Conditions and while loop not working properly py(tkinter)
I'm making a Guess the Number game but I have a problem: the user must guess a certain number, and if the number of wrong guesses exceeds a limit, the game ends and the user's status is determined. So I created a variable named sam and did this sam = 0 And then I made a loop with while and said: while sam < 10: Every wrong guess is supposed to add one to sam, but the problem is that a single wrong guess produces this: >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>12 is lower >>you lose That is, it repeats the same message until sam reaches 10. I don't know what to do. My code: from tkinter import * sam = 0 def get_1(): global correct correct = int(player1.get()) player1.pack_forget() sumbit.pack_forget() def Guess(): global sam a = int(player_2.get()) while sam < 10: if a == correct: break elif a > correct: print(a,"is higher") sam += 1 elif a < correct: print(a,"is lower") sam += 1 if a == correct: print("you win") else: print("you lose") app = Tk() player1 = Entry(app,font=20) app.minsize(300,300) player1.pack() player_2 = Entry(app,font=20) player_2.pack() sumbit = Button(app,font=10,text="player 1 sumbit",command=get_1) sumbit.pack() guss = Button(app,text="player 2 guess number",font=20,command=Guess) guss.pack() app.mainloop()
[ "Writing GUI programs requires a different mindset than writing non-GUI programs. A loop like the one you created is not appropriate for a GUI. The way GUIs work is that you define functions to be called in response to events, and then the event manager (mainloop) is a loop tht waits for events and then dispatches them to the handlers.\nSo, in your case Guess should only handle a single guess and not try to loop over a range of guesses. It can keep a global counter for the number of guesses, and update it each time Guess is called.\nIt might look something like this:\ndef Guess():\n global guesses\n global sam\n a = int(player_2.get())\n player_2.delete(0, \"end\")\n\n guesses += 1\n\n if a == correct:\n print(\"you win\")\n\n elif guesses == 10:\n print(\"you lose\")\n\n elif a > correct:\n print(a,\"is higher\")\n\n elif a < correct:\n print(a,\"is lower\")\n\nYou need to initialize guesses to zero when player 1 submits the value, since that designates the start of the game.\ndef get_1():\n global correct\n global guesses\n correct = int(player1.get())\n guesses = 0\n player1.pack_forget()\n sumbit.pack_forget()\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074465586_python_tkinter.txt
Q: How to get users with 3 or more consecutive weeks in order using pandas? I have a user table like this, USERID Week_Number Year 0 fb 5.0 2021 1 twitter 1.0 2021 2 twitter 2.0 2021 3 twitter 3.0 2021 4 twitter 1.0 2022 5 twitter 2.0 2022 6 twitter 3.0 2022 7 twitter 15.0 2022 8 twitter NaN NaN 9 human 21.0 2022 I want to find the users who log in >= 3 consecutive weeks in the same year. The week numbers will be unique for each year. For example, in the above table we can see that user twitter logged in during weeks 1, 2, 3 in the same year 2022, thereby satisfying the condition that I am looking for. The output I am looking for, USERID Year twitter 2021 twitter 2022 You can create the sample table using, import pandas as pd import numpy as np data = pd.DataFrame({"USERID": ["fb", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "human"], "Week_Number": [5, 1, 2, 3, 1, 2, 3, 15, np.nan, 21], "Year": ["2021", "2021","2021","2021", "2022", "2022", "2022", "2022", np.nan, "2022"]}) Can someone help me achieve this required output? I have tried a few things but have not been able to arrive at the proper output. for ix, group in data.groupby([data.USERID, data.Year]): group = group.sort_values("Week_Number") group["Diff"] = (group.Week_Number - group.Week_Number.shift(1)).fillna(1) break Thank you for any help in advance. A: Since you are not interested in the details of when the run of at least three weeks occurred (start or end), but only the tuples (user, year) where the user had at least three consecutive weeks of usage, then it is quite simple: def min_consecutive(w, minimum_run=3): dy = w.diff() != 1 runlen = dy.groupby(dy.cumsum()).size() return (runlen >= minimum_run).any() s = ( data .sort_values('Week_Number') .groupby(['USERID', 'Year'])['Week_Number'] .apply(min_consecutive) ) >>> s[s] USERID Year twitter 2021 True 2022 True Name: Week_Number, dtype: bool Explanation We consider each group (user, year). In that group, we observe an (ordered, without repeats) series of week numerals. This could be e.g. [1,2,3,12,13,18,19,20,21] (a run of 3, a run of 2, and a run of 4). The Series dy shows where there were gaps in the run (e.g. for the hypothetical value above: [T,F,F,T,F,T,F,F,F]). We use the .cumsum() of that to make groups that are each a consecutive run, e.g. [1,1,1,2,2,3,3,3,3]. We take the size of each group (e.g. [3,2,4]), and return True iff any of those is at least minimum_run long. Addendum: locate the weeks that meet the criteria Here are some ideas, depending on how you'd like your output. df = data.dropna().sort_values(['USERID', 'Year', 'Week_Number']) df = df.assign(rungrp=(df.groupby(['USERID', 'Year'])['Week_Number'].diff() != 1).cumsum()) df = df.loc[df.groupby('rungrp')['rungrp'].transform('count') >= 3] >>> df USERID Week_Number Year rungrp 1 twitter 1.0 2021 3 2 twitter 2.0 2021 3 3 twitter 3.0 2021 3 4 twitter 1.0 2022 4 5 twitter 2.0 2022 4 6 twitter 3.0 2022 4 All the weeks that are part of a run of at least 3.
Grouping to find week min and max of each run: >>> df.groupby(['USERID', 'Year', 'rungrp'])['Week_Number'].agg([min, max]) min max USERID Year rungrp twitter 2021 3 1.0 3.0 2022 4 1.0 3.0 A: Instead of looping you can create a column which flags each week-over-week increase for a user within a year, and then check if that column sums to at least 3 per user in a year: data.sort_values(by=['USERID','Year','Week_Number'],ascending=True,inplace=True) data.assign( grouped_increase = data.groupby([data.USERID, data.Year])["Week_Number"] .diff() .gt(0) .astype(int) ).groupby([data.USERID, data.Year])["grouped_increase"].sum().reset_index().query( "grouped_increase >= 3" ).drop( "grouped_increase", axis=1 ) USERID Year 3 twitter 2022 Based on your comment, using this DF: USERID Week_Number Year 8 fb 2.0 2021.0 9 fb 3.0 2021.0 10 fb 4.0 2021.0 0 fb 5.0 2021.0 11 fb 2.0 2022.0 12 fb 3.0 2022.0 13 fb 4.0 2022.0 14 fb 5.0 2022.0 7 human 21.0 2022.0 1 twitter 1.0 2021.0 2 twitter 1.0 2022.0 3 twitter 2.0 2022.0 4 twitter 3.0 2022.0 5 twitter 15.0 2022.0 6 twitter NaN NaN Running the above code gives: USERID Year 0 fb 2021.0 1 fb 2022.0 4 twitter 2022.0
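A quick sketch of the run-length idea from the first answer on toy data, assuming only that pandas is installed:
import pandas as pd

weeks = pd.Series([1, 2, 3, 15, 16])
breaks = weeks.diff() != 1            # True where a new run starts
run_ids = breaks.cumsum()             # [1, 1, 1, 2, 2]
print(run_ids.groupby(run_ids).size().max() >= 3)  # True: a run of 3 exists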
How to get users with 3 or more consecutive weeks in order using pandas?
I have a user table like this, USERID Week_Number Year 0 fb 5.0 2021 1 twitter 1.0 2021 2 twitter 2.0 2021 3 twitter 3.0 2021 4 twitter 1.0 2022 5 twitter 2.0 2022 6 twitter 3.0 2022 7 twitter 15.0 2022 8 twitter NaN NaN 9 human 21.0 2022 I want to find the users who log in >= 3 consecutive weeks in the same year. The week numbers will be unique for each year. For example, in the above table we can see that user twitter logged in during weeks 1, 2, 3 in the same year 2022, thereby satisfying the condition that I am looking for. The output I am looking for, USERID Year twitter 2021 twitter 2022 You can create the sample table using, import pandas as pd import numpy as np data = pd.DataFrame({"USERID": ["fb", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "human"], "Week_Number": [5, 1, 2, 3, 1, 2, 3, 15, np.nan, 21], "Year": ["2021", "2021","2021","2021", "2022", "2022", "2022", "2022", np.nan, "2022"]}) Can someone help me achieve this required output? I have tried a few things but have not been able to arrive at the proper output. for ix, group in data.groupby([data.USERID, data.Year]): group = group.sort_values("Week_Number") group["Diff"] = (group.Week_Number - group.Week_Number.shift(1)).fillna(1) break Thank you for any help in advance.
[ "Since you are not interested in the details of when the run of at least three weeks occurred (start or end), but only the tuples (user, year) where the user had at least three consecutive weeks of usage, then it is quite simple:\ndef min_consecutive(w, minimum_run=3):\n dy = w.diff() != 1\n runlen = dy.groupby(dy.cumsum()).size()\n return (runlen >= minimum_run).any()\n\ns = (\n data\n .sort_values('Week_Number')\n .groupby(['USERID', 'Year'])['Week_Number']\n .apply(min_consecutive)\n)\n>>> s[s]\nUSERID Year\ntwitter 2021 True\n 2022 True\nName: Week_Number, dtype: bool\n\nExplanation\nWe consider each group (user, year). In that group, we observe an (ordered, without repeats) series of week numerals. This could be e.g. [1,2,3,12,13,18,19,20,21] (a run of 3, a run of 2, and a run of 4). The Series dy shows where there were gaps in the run (e.g. for the hypothetical value above: [T,F,F,T,F,T,F,F,F]). We use the .cumsum() of that to make groups that are each a consecutive run, e.g. [1,1,1,2,2,3,3,3,3]. We take the size of each group (e.g. [3,2,4]), and return True iff any of those is at least minimum_run long.\nAddendum: locate the weeks that meet the criteria\nHere are some ideas, depending on how you'd like your output.\ndf = data.dropna().sort_values(['USERID', 'Year', 'Week_Number'])\ndf = df.assign(rungrp=(df.groupby(['USERID', 'Year'])['Week_Number'].diff() != 1).cumsum())\ndf = df.loc[df.groupby('rungrp')['rungrp'].transform('count') >= 3]\n>>> df\n USERID Week_Number Year rungrp\n1 twitter 1.0 2021 3\n2 twitter 2.0 2021 3\n3 twitter 3.0 2021 3\n4 twitter 1.0 2022 4\n5 twitter 2.0 2022 4\n6 twitter 3.0 2022 4\n\nAll the weeks that are part of a run of at least 3.\nGrouping to find week min and max of each run:\n>>> df.groupby(['USERID', 'Year', 'rungrp'])['Week_Number'].agg([min, max])\n min max\nUSERID Year rungrp \ntwitter 2021 3 1.0 3.0\n 2022 4 1.0 3.0\n\n", "Instead of looping you can create a column which will show whether a user in a year has had a consecutive increase, and then check if that column sums to more than 3 per user in a year:\ndata.sort_values(by=['USERID','Year','Week_Number'],ascending=True,inplace=True)\n\ndata.assign(\n grouped_increase = data.groupby([data.USERID, data.Year])[\"Week_Number\"]\n .diff()\n .gt(0)\n .astype(int)\n).groupby([data.USERID, data.Year])[\"grouped_increase\"].sum().reset_index().query(\n \"grouped_increase >= 3\"\n).drop(\n \"grouped_increase\", axis=1\n)\n\n\n USERID Year\n3 twitter 2022\n\nBased on your comment, using this DF:\n USERID Week_Number Year\n8 fb 2.0 2021.0\n9 fb 3.0 2021.0\n10 fb 4.0 2021.0\n0 fb 5.0 2021.0\n11 fb 2.0 2022.0\n12 fb 3.0 2022.0\n13 fb 4.0 2022.0\n14 fb 5.0 2022.0\n7 human 21.0 2022.0\n1 twitter 1.0 2021.0\n2 twitter 1.0 2022.0\n3 twitter 2.0 2022.0\n4 twitter 3.0 2022.0\n5 twitter 15.0 2022.0\n6 twitter NaN NaN\n\nRunning the above code gives:\n USERID Year\n0 fb 2021.0\n1 fb 2022.0\n4 twitter 2022.0\n\n" ]
[ 2, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074464008_pandas_python.txt
Q: Error in plotting quiver ufunc 'isfinite' I have the following lists, and I want to plot them using plt.quiver, but I got the following error. I do not know how to modify the lists so that the plot renders. import matplotlib.pyplot as plt x=[0.5, 0.09113826200606436, 0.09090926587458355, 0.09090909053329622, 0.09090909090689524, 0.09090909090901886] y=[-0.5, -0.09113826200606436, -0.09090926587458355, -0.09090909053329618, -0.09090909090689503, -0.09090909090901889] u=[0.9, 0.0005041764133415783, 3.84924083801641e-07, -8.267483364576833e-10, -4.830719158022134e-12, -1.584843367652411e-13] v=[-0.9, -0.0005041764133415783, -3.84924083801641e-07, 8.267483364576833e-10, 4.830691402446519e-12, 1.584843367652411e-13] colour=['red', 'red', 'red', 'red', 'red', 'red'] plt.quiver(x,y,u,v, colour) TypeError Traceback (most recent call last) <ipython-input-12-2e445afc7e85> in <module> 138 print(colour) 139 plt.rcParams["figure.figsize"] = (30,30) --> 140 plt.quiver(x,y,u,v, colour) 141 plt.scatter(stable_point_xaxis, stable_point_yaxis, color='blue') 142 #plt.scatter(uNew, vNew, color='red') 5 frames /usr/local/lib/python3.7/dist-packages/numpy/ma/core.py in masked_invalid(a, copy) 2367 cls = type(a) 2368 else: -> 2369 condition = ~(np.isfinite(a)) 2370 cls = MaskedArray 2371 result = a.view(cls) TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' A: If you plot each arrow individually, Matplotlib does her job, but all the arrows look the same, even if they are not of the same size (in your data, they are apart by 13 orders of magnitude). My guess is that Matplotlib encounters an error when she tries to rasterize, in the same scale, arrow objects that are so different in terms of their dimensions. from matplotlib.pyplot import quiver, show x = [0.5, 0.09113826200606436, 0.09090926587458355, 0.09090909053329622, 0.09090909090689524, 0.09090909090901886] y = [-0.5, -0.09113826200606436, -0.09090926587458355, -0.09090909053329618, -0.09090909090689503, -0.09090909090901889] u = [0.9, 0.0005041764133415783, 3.84924083801641e-07, -8.267483364576833e-10, -4.830719158022134e-12, -1.584843367652411e-13] v = [-0.9, -0.0005041764133415783, -3.84924083801641e-07, 8.267483364576833e-10, 4.830691402446519e-12, 1.584843367652411e-13] color = 'r r r r r'.split() for x_, y_, u_, v_ in zip(x,y,u,v): quiver(x_, y_, u_, v_, color=color) show()
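One hedged observation worth adding: in plt.quiver(X, Y, U, V, C) the fifth positional argument is C, an array of numbers used for colormapping, and that is where np.isfinite is applied, so passing the list of colour-name strings in that slot plausibly explains the traceback. Passing the colours by keyword should sidestep it:
import matplotlib.pyplot as plt

# color expects colour specs; the positional fifth argument expects numbers
plt.quiver(x, y, u, v, color=colour)
plt.show()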
Error in plotting quiver ufunc 'isfinite'
I have the following lists, and I want to plot them using plt.quiver, but I got the following error. I do not know how to modify the lists so that the plot renders. import matplotlib.pyplot as plt x=[0.5, 0.09113826200606436, 0.09090926587458355, 0.09090909053329622, 0.09090909090689524, 0.09090909090901886] y=[-0.5, -0.09113826200606436, -0.09090926587458355, -0.09090909053329618, -0.09090909090689503, -0.09090909090901889] u=[0.9, 0.0005041764133415783, 3.84924083801641e-07, -8.267483364576833e-10, -4.830719158022134e-12, -1.584843367652411e-13] v=[-0.9, -0.0005041764133415783, -3.84924083801641e-07, 8.267483364576833e-10, 4.830691402446519e-12, 1.584843367652411e-13] colour=['red', 'red', 'red', 'red', 'red', 'red'] plt.quiver(x,y,u,v, colour) TypeError Traceback (most recent call last) <ipython-input-12-2e445afc7e85> in <module> 138 print(colour) 139 plt.rcParams["figure.figsize"] = (30,30) --> 140 plt.quiver(x,y,u,v, colour) 141 plt.scatter(stable_point_xaxis, stable_point_yaxis, color='blue') 142 #plt.scatter(uNew, vNew, color='red') 5 frames /usr/local/lib/python3.7/dist-packages/numpy/ma/core.py in masked_invalid(a, copy) 2367 cls = type(a) 2368 else: -> 2369 condition = ~(np.isfinite(a)) 2370 cls = MaskedArray 2371 result = a.view(cls) TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
[ "If you plot each arrow individually, Matplotlib does her job, but all the arrows look the same, even if they are not of the same size (in your data, they are apart by 13 orders of magnitude).\nMy guess is that Matplotib encounters an error when she tries to rasterize, in the same scale, arrow objects that are so different in terms of their dimensions.\nfrom matplotlib.pyplot import quiver, show\n\nx = [0.5, 0.09113826200606436, 0.09090926587458355, 0.09090909053329622, 0.09090909090689524, 0.09090909090901886]\ny = [-0.5, -0.09113826200606436, -0.09090926587458355, -0.09090909053329618, -0.09090909090689503, -0.09090909090901889]\nu = [0.9, 0.0005041764133415783, 3.84924083801641e-07, -8.267483364576833e-10, -4.830719158022134e-12, -1.584843367652411e-13]\nv = [-0.9, -0.0005041764133415783, -3.84924083801641e-07, 8.267483364576833e-10, 4.830691402446519e-12, 1.584843367652411e-13]\ncolor = 'r r r r r'.split()\n\nfor x_, y_, u_, v_ in zip(x,y,u,v):\n quiver(x_, y_, u_, v_, color=color)\nshow()\n\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "plot", "python" ]
stackoverflow_0074465581_matplotlib_plot_python.txt
Q: Pandas "usecols" doesn't seems work perfectly I am reading below excel with below python code but not getting any idea why the first column header has ".1" even though set to ignore the first column. Any idea please? many thanks in advance. python script import pandas as pd import os os.system('cls') df = pd.read_excel('test_1\Book1.xlsx','sheet1', header=0, skiprows=1, usecols='B:D',index_col= 0, nrows=5) print(df) I am very confused about ".1" in the first column name header "PIA IM Equity.1" in the below result A: In the file you have two columns named "PIA IM Equity" when reading pandas will rename the second identical column by adding .1 in the name. If you would have a third column with the same name it would have added .2 in it.
Pandas "usecols" doesn't seems work perfectly
I am reading the Excel file below with the Python code below, but I have no idea why the first column header has ".1" even though I set it to ignore the first column. Any ideas please? Many thanks in advance. python script import pandas as pd import os os.system('cls') df = pd.read_excel('test_1\Book1.xlsx','sheet1', header=0, skiprows=1, usecols='B:D',index_col= 0, nrows=5) print(df) I am very confused about the ".1" in the first column name header "PIA IM Equity.1" in the result below
[ "In the file you have two columns named \"PIA IM Equity\" when reading pandas will rename the second identical column by adding .1 in the name. If you would have a third column with the same name it would have added .2 in it.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074465996_pandas_python.txt