content: string, length 85 to 101k
title: string, length 0 to 150
question: string, length 15 to 48k
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string, length 35 to 137
Q: "Timed out waiting for debuggee to spawn" error for certain conda envs on vs code? Update: I just discovered that with a python 3.7 environment created with Anaconda (version number 4.11.0, which creates a python 3.7.11), this problem happens while python 3.8(.12) created by conda doesn't have this kind of problem. And I found a solution: I use integratedTerminal in the launch.json, and without specifying terminal type, the default integrated terminal would be PowerShell which has this problem. After I change the terminal.integrated.shell.Windows setting to cmd.exe, this porlbem goes away and the problem goes through. Original post: I am developing a python 3.7 application on vs-code on Windows. I am using Anaconda to manage my working environments (and I have created several evns on the same dev machine). The env I am using for this app is called basic_env which is of version 3.7.10. Previously it was working fine. But today, when I returned from my 10-day holiday, and after I upgraded the vscode (to 1.64.0) and probably the python extension (version v2022.0.1786462952). The launching/debugging stopped working: a window popped up with an error mesage Timed out waiting for debuggee to spawn . I made a simple test python script using the same conda env. My script is only one line print("test....") And my launch.json looks like this: { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "env": { "PYTHONPATH": "${workspaceFolder}" }, }, The same error showed up. I tried to google this issue, at first I thought this post was the same as my problem: using venv caused this problem, and using the system-global python didn't have this problem. So maybe the bug which was fixed 2 years ago was back along with the updates? But then I tried some other conda envs, some worked and some didn't (but none of them was of version 3.7.10). I tried to upgrade this env to 3.7.11 by: conda activate basic_env conda install python=3.7.11 (other existing 3.7.11 envs work fine) but the problem was still there. Also, I don't know how to get the ptvsd.adapter log file in the mentioned post. Could anyone help me track this problem? Thank you very much! Cheers A: Upon reading this post I tried changing my conda env. I might have changed it a few times and tried to debug. Eventually 1 just worked. I thought maybe it was a python version, so I switched to a conda env which just a few seconds earlier hadn't worked with debugging. But this time it just worked. I don't have an explanation as to why this worked, but if you're reading this maybe you can just try changing conda env a few times. Update: I ran into the issue again. This time toggling conda env didn't seem to work. I did try to Run Without Debugging (which worked), and then I tried Start Debugging which to my surprise worked. Again, no answer to why, but trying to provide info which could help someone.
"Timed out waiting for debuggee to spawn" error for certain conda envs on vs code?
Update: I just discovered that with a Python 3.7 environment created with Anaconda (version 4.11.0, which creates Python 3.7.11), this problem happens, while a Python 3.8(.12) env created by conda doesn't have this kind of problem. And I found a solution: I use integratedTerminal in the launch.json, and without specifying a terminal type, the default integrated terminal is PowerShell, which has this problem. After I change the terminal.integrated.shell.windows setting to cmd.exe, this problem goes away and debugging runs through. Original post: I am developing a Python 3.7 application in VS Code on Windows. I am using Anaconda to manage my working environments (and I have created several envs on the same dev machine). The env I am using for this app is called basic_env and is version 3.7.10. Previously it was working fine. But today, when I returned from my 10-day holiday, and after I upgraded VS Code (to 1.64.0) and probably the Python extension (version v2022.0.1786462952), launching/debugging stopped working: a window popped up with the error message Timed out waiting for debuggee to spawn . I made a simple test Python script using the same conda env. My script is only one line: print("test....") And my launch.json looks like this: { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "env": { "PYTHONPATH": "${workspaceFolder}" }, }, The same error showed up. I tried to google this issue; at first I thought this post described the same problem as mine: using venv caused it, while the system-global Python didn't have the problem. So maybe the bug that was fixed two years ago came back along with the updates? But then I tried some other conda envs; some worked and some didn't (but none of them was version 3.7.10). I tried to upgrade this env to 3.7.11 by: conda activate basic_env conda install python=3.7.11 (other existing 3.7.11 envs work fine) but the problem was still there. Also, I don't know how to get the ptvsd.adapter log file mentioned in that post. Could anyone help me track this problem down? Thank you very much! Cheers
[ "Upon reading this post I tried changing my conda env. I might have changed it a few times and tried to debug. Eventually 1 just worked. I thought maybe it was a python version, so I switched to a conda env which just a few seconds earlier hadn't worked with debugging. But this time it just worked.\nI don't have an explanation as to why this worked, but if you're reading this maybe you can just try changing conda env a few times.\nUpdate: I ran into the issue again. This time toggling conda env didn't seem to work. I did try to Run Without Debugging (which worked), and then I tried Start Debugging which to my surprise worked. Again, no answer to why, but trying to provide info which could help someone.\n" ]
[ 0 ]
[]
[]
[ "python", "visual_studio_code", "vscode_debugger" ]
stackoverflow_0071015203_python_visual_studio_code_vscode_debugger.txt
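The asker's fix of switching the debug terminal from PowerShell to cmd.exe can also be expressed as a settings entry. A minimal sketch, assuming a recent VS Code build where the deprecated terminal.integrated.shell.windows key has been replaced by terminal profiles ("Command Prompt" is the stock profile name on a standard install):

{
    // make cmd.exe the default integrated terminal instead of PowerShell
    "terminal.integrated.defaultProfile.windows": "Command Prompt"
}

On older builds, the equivalent is "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", as described in the question above.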
How to integrate RabbitMQ RPC into FastApi properly
I am improving my FastAPI project. One of the methods needs to run a heavy computational task on another machine. Due to the high load, this should be done via a queue. I am following the RabbitMQ RPC guide to perform a remote procedure call via a message queue. This guide suggests creating an exclusive queue for each client-server session, which means that if I create a new instance for each method call, then for N calls there will be N queues created, which is obviously inefficient. So, my questions are: Is there a way in FastAPI to create a fixed pool of workers and give each worker only one unique instance of RPCClient? Or, maybe, create a fixed pool of those clients and give each new worker one client from this pool? How badly will performance suffer if I use the straightforward solution mentioned above? Is there an efficient way to return results for several workers via a single response queue?
[ "I've managed to solve all those problems by simply using Celery with RabbitMQ as backend\n" ]
[ 0 ]
[]
[]
[ "fastapi", "python", "rabbitmq", "rpc" ]
stackoverflow_0074415094_fastapi_python_rabbitmq_rpc.txt
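A minimal sketch of the Celery-plus-RabbitMQ setup the answer above alludes to, with a hypothetical heavy_task standing in for the real computation; the broker URL, task body, and endpoint names are illustrative assumptions, not the answerer's actual code. Celery manages the worker pool and a single reply channel, which covers all three questions at once:

# tasks.py
from celery import Celery

app = Celery("tasks",
             broker="amqp://guest:guest@localhost//",  # RabbitMQ as the broker
             backend="rpc://")                         # results travel back over RabbitMQ

@app.task
def heavy_task(numbers):
    # placeholder for the heavy computation that runs on the other machine
    return sum(numbers)

# main.py -- the FastAPI side only enqueues work and polls for results
from fastapi import FastAPI
from tasks import heavy_task

api = FastAPI()

@api.post("/compute")
def compute(numbers: list[int]):
    result = heavy_task.delay(numbers)  # enqueue; a remote worker picks it up
    return {"task_id": result.id}

@api.get("/result/{task_id}")
def get_result(task_id: str):
    res = heavy_task.AsyncResult(task_id)
    return {"ready": res.ready(), "value": res.result if res.ready() else None}

A worker started with celery -A tasks worker on the compute machine consumes the queue; the FastAPI process itself never blocks on the computation.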
Upload image on Facebook Marketplace with selenium (python)
I am trying to automate the creation of ads on Facebook Marketplace. I succeed in logging in and navigating to the correct page, but I don't know how to upload an image with Selenium. Indeed, the element which handles the uploading of images is not an input type=file but a div with the role of a button, which opens the Windows file-selection dialog so you can choose a file. This is the HTML of the element: <div class="x1i10hfl x1qjc9v5 xjbqb8w xjqpnuy xa49m3k xqeqjp1 x2hbi6w x13fuv20 xu3j5b3 x1q0q8m5 x26u7qi x972fbf xcfux6l x1qhh985 xm0m39n x9f619 x1ypdohk xdl72j9 x2lah0s xe8uvvx xdj266r x11i5rnm xat24cr x1mh8g0r x2lwn1j xeuugli xexx8yu x4uap5 x18d9i69 xkhd6sd x1n2onr6 x16tdsg8 x1hl2dhg xggy1nq x1ja2u2z x1t137rt x1o1ewxj x3x9cwd x1e5q0jg x13rtm0m x1q0g3np x87ps6o x1lku1pv x1a2a7pz x78zum5 x1iyjqo2" role="button" tabindex="0"> I already tried this code: driver.find_element(By.XPATH, element_xpath).send_keys(absolute_path) But it doesn't work. Has anyone already tried this and succeeded?
[ "Uploading file with Selenium is done by sending the uploaded file to a special element. This is not an element you are clicking as a user via GUI to upload elements. The element actually receiving uploaded files normally matching this XPath:\n//input[@type='file']\nThis is the fully working code - I tried this on my PC with my FB account uploading some document. I've erased the screenshot details for privacy reasons, but it clearly worked\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument(\"--disable-infobars\")\noptions.add_argument(\"start-maximized\")\noptions.add_argument(\"--disable-extensions\")\n\n# Pass the argument 1 to allow and 2 to block\noptions.add_experimental_option(\n \"prefs\", {\"profile.default_content_setting_values.notifications\": 2}\n)\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\nurl = \"https://www.facebook.com/\"\ndriver.get(url)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[name='email']\"))).send_keys(my_username)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[name='pass']\"))).send_keys(my_password)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[name='login']\"))).click()\ndriver.get(\"https://www.facebook.com/marketplace/create/item\")\nwait.until(EC.presence_of_element_located((By.XPATH, \"//input[@type='file']\"))).send_keys(\"C:/Users/my_user/Downloads/doch.jpeg\")\n\nThis is the screenshot of what this code does:\n\n" ]
[ 0 ]
[]
[]
[ "automation", "python", "selenium", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074479046_automation_python_selenium_selenium_webdriver_web_scraping.txt
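A Marketplace listing usually needs several photos. Assuming the hidden input carries the multiple attribute (worth verifying on the live page), Selenium accepts newline-joined absolute paths in a single send_keys call; a sketch reusing the wait object from the answer above, with hypothetical file names:

import os

file_input = wait.until(EC.presence_of_element_located((By.XPATH, "//input[@type='file']")))
# newline-joined paths upload several files in one call
paths = [os.path.abspath(p) for p in ("photo1.jpg", "photo2.jpg")]
file_input.send_keys("\n".join(paths))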
Python 1 printing an input letter-by-letter with sys
I am testing the sys module's ability to print a string letter-by-letter on the same line. When I try to print an input prompt this way, it works until it prints "None". I don't yet know enough about sys to find and correct the problem. I tried finding a similar question on this site, but only found answers for coding languages other than Python 1. This is the code I wrote: ###imports the sys and time modules### import sys import time ###defines a function to print the argument letter-by-letter on one line### desclist=[] def liner(prompt): for i in prompt: sys.stdout.write(i) sys.stdout.flush() time.sleep(0.05) ###prints input prompt through liner function### x=input(liner("Enter the input here.")) The expected result was the following: Enter the input here. Instead, it printed the following: Enter the input here.None
[ "I have found one way that works. After liner(prompt) is run, it stays on the same line unless \\n is used:\n liner(\"Enter the input here.\")\n x=input()\n\nHowever, I was still wondering if there is a one-line solution so I don't need to enter multiple lines of code every time I print a string.\n" ]
[ 0 ]
[]
[]
[ "input", "python", "sys" ]
stackoverflow_0074479126_input_python_sys.txt
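For the one-line call the answer above is still looking for, one option is to make liner return an empty string: input() printed "None" only because liner implicitly returned None, which input() then used as its prompt. A sketch building on the question's code:

import sys
import time

def liner(prompt):
    for ch in prompt:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(0.05)
    return ""  # input("") prints nothing extra, so "None" disappears

x = input(liner("Enter the input here."))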
Python Concatenate strings stored in one variable into a single list
So I have this variable which has stored multiple strings: 123 456 789 876 543 Each string inside the variable is also classified as a string: <class 'str'> <class 'str'> <class 'str'> <class 'str'> <class 'str'> However, when I try to get them all into a single list with attempts like: for x in varwithstr: full_lst = [] full_lst.append(x) or l = x.split(" ") I do not get the desired result: ['123','456','789','876','543'] Instead, I either get: ['123'] ['456'] ['789'] ['876'] ['543'] or: ['1'] ['2'] ['3'] ['5'] ['6'] ['7'] ['8'] ['9'] ['8'] ['7'] ['6'] ['5'] ['4'] ['3'] Does anyone know what I'm missing here? Full Code: import xml.etree.ElementTree as ET import os path = 'data/path' for filenames in os.listdir(path): if filenames.endswith('.xml'): fullnames = os.path.join(path, filenames) tree = ET.parse(fullnames) root = tree.getroot() IDs = root[2].attrib.get("ProjectID") IDs is the variable I'm referring to. print(type(IDs)) gives back the following: <class 'str'>
[ "The lines are separated by '\\n' (newline) and not ' ' (space). So maybe this can work.\nl = x.split(\"\\n\")\n\n" ]
[ 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074479095_list_python.txt
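A short sketch of that fix with a hypothetical stand-in for the IDs string; str.splitlines() is a slightly safer variant since it also handles \r\n line endings:

ids = "123\n456\n789\n876\n543"  # stand-in for the ProjectID text from the XML
full_lst = ids.splitlines()       # or ids.split("\n")
print(full_lst)                   # ['123', '456', '789', '876', '543']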
A more pythonic way for string placeholders?
Is there a more pythonic way to do the following? F-strings seem to require a defined variable (no empty expressions), but if I want to define @names and @locations later on, what is the best way to go about it? funct_a = call_function() str_a = f"a very long string of text that contains {funct_a} and also @names or @locations" ... large chunk of code that modifies str_a and defines var_a, var_b, var_c, var_d ... if <conditional>: str_b = str_a.replace("@names", var_a).replace("@locations", var_b) elif <conditional>: str_b = str_a.replace("@names", var_c).replace("@locations", var_d)
[ "Escape {}'s from the f-string and use them later on in format:\nnow = 'hey'\n\ns = f'{now}, then {{names}} or {{locations}}'\n\n# later on\n\nprint(s.format(names='foo', locations='bar'))\n\nNB: requires some care if the immediate expansion also contains {}.\n" ]
[ 5 ]
[]
[]
[ "python" ]
stackoverflow_0074479092_python.txt
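Where the answer's caveat bites (the immediately expanded value itself contains { or }), string.Template is a common alternative, since its $-placeholders don't collide with f-string braces. A sketch:

from string import Template

now = 'hey'
tpl = Template(f'{now}, then $names or $locations')  # $ survives the f-string untouched
print(tpl.safe_substitute(names='foo', locations='bar'))  # hey, then foo or bar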
what could be the cause of this error in python vs code?
i = 0 sums = [] while i <= 1000: if i%3==0 or i%5==0: sums.append(i) i=i+1 for i in sums: total = sums[i] + sums[i+1] print(total) The problem was: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. After I run the above code, it raises this error: Traceback (most recent call last): File "c:\Users\user\Desktop\Python projects\Multiples of 3 or 5.py", line 8, in <module> total = sums[i] + sums[i+1] IndexError: list index out of range
[ "for i in sums:\n total = sums[i] + sums[i+1]\n\nimagine that sums array have 5 elements.\nand values like [3,5,7,10,15]\nand when you looped like above, it assigns values in order 3,5,7,10,15.\nSo as we do not have seventh elementh in the list it gives up an error.\nHowever there is a easier way to do this\nprint(sum(sums))\n\n", "The values inside your list can only be used as indices if they are in fact within the range of your list.\nBut as you append i to sums like this:\nwhile i <= 1000:\n if i%3==0 or i%5==0:\n sums.append(i)\n i=i+1\n\nThus making your last element in sums[-1] = 1000 but sums is of length 468. So the highest index you could use is 467, as lists count beginning with 0 up to len(nums)-1.\nThat is why you get the IndexError: list index out of range message.\nTo get the total of all values, you can either use a loop iterating over all elements in sums or use the built-in sum function:\ntotal = 0\nfor num in sums:\n total += num\nprint(total)\n\nOr calling sum with nums:\nprint(sum(nums))\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074479183_python.txt
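One detail both answers leave implicit: the problem statement asks for multiples below 1000, while the question's loop runs while i <= 1000, so it also counts 1000 itself. A one-liner matching the statement:

total = sum(i for i in range(1000) if i % 3 == 0 or i % 5 == 0)
print(total)  # 233168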
Amazon website shows "Deliver to Country". How can I change it programmatically in Python Selenium to take screenshots
The problem: I want to search keywords on Amazon and take screenshots. I am using the selenium package. However, when I search on amazon.co.uk, it shows the delivery address as United States. How can I change the "Deliver to Country"? Below are sample Python code and a sample screenshot. import time as t from datetime import datetime from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.common.action_chains import ActionChains chrome_options = Options() chrome_options.add_argument("--no-sandbox") chrome_options.add_argument('disable-notifications') chrome_options.add_argument("user-agent=UA") chrome_options.add_argument("--start-maximized") chrome_options.add_argument("--headless") chrome_options.add_argument('window-size=2160x3840') urls = ['https://www.amazon.co.uk/s?k=advil', 'https://www.amazon.co.uk/s?k=Whitening toothpaste'] def get_secondly_screenshots(navi_dictionary): driver = webdriver.Chrome(ChromeDriverManager().install(), options=chrome_options) driver.get('https://www.amazon.co.uk') driver.execute_script("document.body.style.zoom='50%'") driver.get(url) try: test = driver.find_element('xpath', '//*[@id="sp-cc-rejectall-link"]') test.click() print('gotcha!') except: pass now = datetime.now() date_time = now.strftime("%Y_%m_%d_%H_%M_%S") sh_url = url.split('?k=')[1] print(sh_url, date_time) driver.save_screenshot(f'{sh_url}_{date_time}.png') print('screenshotted ', url) t.sleep(2) driver.quit() for url in urls: get_secondly_screenshots(url)
[ "In order to set UK delivery address on UK Amazon when your IP address is out from the UK you can do the following steps:\n\nOpen the \"Delivery to\" dialog\nInsert some valid UK postal code and click submit button\nApprove this on the appeared after that pop-up.\nAs you asked, I also added the code to close cookies banner and click \"continue\".\nThe following Selenium code performs that\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 5)\n\nurl = \"https://www.amazon.co.uk/s?k=advil\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.ID, 'nav-global-location-popover-link'))).click()\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[data-action='GLUXPostalInputAction']\"))).send_keys(\"PO16 7GZ\")\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[aria-labelledby='GLUXZipUpdate-announce']\"))).click()\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \".a-popover-footer #GLUXConfirmClose\"))).click()\ntime.sleep(1)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \".glow-toaster-footer input[data-action-type='DISMISS']\"))).click()\nwait.until(EC.element_to_be_clickable((By.ID, \"sp-cc-accept\"))).click()\n\nThe result of this code is\n\nIn case you have any questions about my code - don't hesitate to ask.\n", "Isn't it enough to interact with that switch initially and set the desired country?\nLet me make a few changes to your code, which I explain below, to make it perform better for this task.\n\nIt is necessary to resolve this depreciation notice: 'DeprecationWarning: executable_path has been deprecated, please pass in a Service object'.\nYou can instantiate the driver, reject cookies, and change the country of delivery only once and not at each iteration.\nYou have to integrate some driver wait (and not sleeps) to prevent the page from still not loading properly and the script from failing. The \"DELAY\" parameter you can configure based on the speed of your connection, the heaviness of the page, and the performance of your pc.\nYou asked to change the country of delivery, so just select the corresponding value in the drop-down menu with Select. 
You can change the country simply by changing that string.\nHowever, if you want to enter UK from amazon.co.uk (and it is not already selected) you will have to enter a zip code.\n\nimport time as t\nfrom datetime import datetime\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom selenium.webdriver.chrome.options import Options\n\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.common.exceptions import TimeoutException\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.support.ui import Select\n\nchrome_options = Options()\nchrome_options.add_argument(\"--no-sandbox\")\nchrome_options.add_argument('disable-notifications')\nchrome_options.add_argument(\"user-agent=UA\")\nchrome_options.add_argument(\"--start-maximized\")\nchrome_options.add_argument(\"--headless\")\nchrome_options.add_argument('window-size=2160x3840')\n\nDELAY = 20 # Number of seconds before timing out\n\n\ndef get_secondly_screenshots(driver, url):\n\n # got to current url\n driver.get(url)\n\n now = datetime.now()\n date_time = now.strftime(\"%Y_%m_%d_%H_%M_%S\")\n sh_url = url.split('?k=')[1]\n print(sh_url, date_time)\n driver.save_screenshot(f'{sh_url}_{date_time}.png')\n\n print('screenshotted ', url)\n\n\nif __name__ == '__main__':\n urls = ['https://www.amazon.co.uk/s?k=advil', 'https://www.amazon.co.uk/s?k=Whitening toothpaste']\n\n driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)\n\n # first go to amazon site\n driver.get('https://www.amazon.co.uk')\n\n # You can reject cookies once instead of each iteration\n try:\n cookie_btn = WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.XPATH, '//*[@id=\"sp-cc-rejectall-link\"]')))\n cookie_btn.click()\n except TimeoutException:\n raise TimeoutException(\"Page not yet loaded correctly\")\n\n # You can change delivery country once (even here instead of each iteration)\n WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.ID, 'nav-global-location-popover-link'))).click()\n WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.ID, 'GLUXCountryList')))\n\n select = Select(driver.find_element(By.ID, 'GLUXCountryList'))\n\n # select country by value (e.g. 
'UK')\n select.select_by_value('IT')\n\n for url in urls:\n get_secondly_screenshots(driver, url)\n\n driver.quit()\n\nTo stay in UK, you will have to comment out the Select part and implement the following code in the main:\nif __name__ == '__main__':\n urls = ['https://www.amazon.co.uk/s?k=advil', 'https://www.amazon.co.uk/s?k=Whitening toothpaste']\n\n driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)\n\n # first go to amazon site\n driver.get('https://www.amazon.co.uk')\n\n # You can reject cookies once instead of each iteration\n try:\n cookie_btn = WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.XPATH, '//*[@id=\"sp-cc-rejectall-link\"]')))\n cookie_btn.click()\n except TimeoutException:\n raise TimeoutException(\"Page not yet loaded correctly\")\n\n # You can change delivery country once (even here instead of each iteration)\n WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.ID, 'nav-global-location-popover-link'))).click()\n WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.ID, 'GLUXCountryList')))\n\n select = Select(driver.find_element(By.ID, 'GLUXCountryList'))\n\n # set an UK zipcode to foce UK delivery\n WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.ID, \"GLUXZipUpdateInput\"))).send_keys(\"E1W 2RG\")\n WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.ID, \"GLUXZipUpdate\"))).click()\n WebDriverWait(driver, DELAY).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[aria-labelledby='GLUXZipUpdate-announce']\"))).click()\n\n for url in urls:\n get_secondly_screenshots(driver, url)\n\n driver.quit()\n\n" ]
[ 1, 1 ]
[]
[]
[ "css_selectors", "python", "selenium", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074478453_css_selectors_python_selenium_selenium_webdriver_web_scraping.txt
matplotlib has no attribute 'pyplot'
I can import matplotlib but when I try to run the following: matplotlib.pyplot(x) I get: Traceback (most recent call last): File "<pyshell#31>", line 1, in <module> matplotlib.pyplot(x) AttributeError: 'module' object has no attribute 'pyplot'
[ "pyplot is a sub-module of matplotlib which doesn't get imported with a simple import matplotlib.\n>>> import matplotlib\n>>> print matplotlib.pyplot\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'module' object has no attribute 'pyplot'\n>>> import matplotlib.pyplot\n>>> \n\nIt seems customary to do: import matplotlib.pyplot as plt at which time you can use the various functions and classes it contains:\np = plt.plot(...)\n\n", "Did you import it? Importing matplotlib is not enough.\n>>> import matplotlib\n>>> matplotlib.pyplot\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'module' object has no attribute 'pyplot'\n\nbut \n>>> import matplotlib.pyplot\n>>> matplotlib.pyplot\n\nworks.\npyplot is a submodule of matplotlib and not immediately imported when you import matplotlib.\nThe most common form of importing pyplot is \nimport matplotlib.pyplot as plt\n\nThus, your statements won't be too long, e.g.\nplt.plot([1,2,3,4,5])\n\ninstead of \nmatplotlib.pyplot.plot([1,2,3,4,5])\n\nAnd: pyplot is not a function, it's a module! So don't call it, use the functions defined inside this module instead. See my example above\n" ]
[ 59, 42 ]
[ "You have to import matplotlib.pyplot\nimport matplotlib.pyplot as plt\n" ]
[ -1 ]
[ "matplotlib", "python" ]
stackoverflow_0014812342_matplotlib_python.txt
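Putting the convention from the answers above together, a minimal complete example (Python 3 syntax):

import matplotlib.pyplot as plt  # pyplot must be imported explicitly

plt.plot([1, 2, 3, 4, 5])
plt.ylabel('value')
plt.show()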
Extract date from string in a pandas dataframe column
I am trying to extract dates from a DF column containing strings and store them in another column. from dateutil.parser import parse extract = parse("January 24, 1976", fuzzy_with_tokens=True) print(str(extract[0])) The above code extracts: 1976-01-24 00:00:00 I would like this to be done to all strings in a column of a DF. Below is what I am trying, but it is not working: df['Dates'] = df.apply(lambda x: parse(x['Column to extract'], fuzzy_with_tokens=True), axis=1) Things to note: If there are multiple dates, I need to join them with some delimiter. There can be strings without a date; in that case the parser returns the error "ParserError: String does not contain a date", which needs to be handled.
[ "See pd.to_datetime\nIt operates in a vectorized manner so can convert all dates quickly.\ndf[\"Dates\"] = pd.to_datetime(df[\"Dates\"])\n\nIf there are strings that won't convert to a datetime and you want them nullified, you can use errors=\"coerce\"\ndf[\"Dates\"] = pd.to_datetime(df[\"Dates\"], errors=\"coerce\")\n\nNER with spacy\nimport spacy # 3.4.2\nfrom spacy import displacy\n\n\nnlp = spacy.load(\"en_core_web_sm\")\n\neg_txt = \"today is january 26, 2016. Tomorrow is january 27, 2016\"\n\ndoc = nlp(eg_txt)\n\ndisplacy.render(doc, style=\"ent\")\n\n\nWe can apply the spacy logic to a dataframe\nimport pandas as pd # 1.5.1\n\n\n# some fake data\ndf = pd.DataFrame({\n \"text\": [\"today is january 26, 2016. Tomorrow is january 27, 2016\",\n \"today is january 26, 2016.\",\n \"Tomorrow is january 27, 2016\"]\n})\n\n# convert text to spacy docs\ndocs = nlp.pipe(df.text.to_numpy())\n\n# unpack the generator into a series\ndoc_series = pd.Series(docs, index=df.index, name=\"docs\")\n\ndf = df.join(doc_series)\n\n# extract entities\ndf[\"entities\"] = df.docs.apply(lambda x: x.ents)\n\n# explode to one entity per row\ndf = df.explode(column=\"entities\")\n\n# build dictionary of ent type and ent text\ndf[\"entities\"] = df.entities.apply(lambda ent: {ent.label_: ent.text})\n\n# join back with df\ndf = df.join(df[\"entities\"].apply(pd.Series))\n\n# convert all DATE entities to datetime\ndf[\"dates\"] = pd.to_datetime(df.DATE, errors=\"coerce\")\n\n# back to one row per original text and a container of datetimes\ndf = df.groupby(\"text\").dates.unique().to_frame().reset_index()\n\nprint(df)\n\n text dates\n0 Tomorrow is january 27, 2016 [NaT, 2016-01-27T00:00:00.000000000]\n1 today is january 26, 2016. [2022-11-17T11:42:49.607705000, 2016-01-26T00:...\n2 today is january 26, 2016. Tomorrow is january... [2022-11-17T11:42:49.605705000, 2016-01-26T00:...\n\n", "If you want to use parse, you may need a customized function to handle exceptions:\ndef parse_date(row):\n try:\n date = parse(row, fuzzy_with_tokens=True)\n return date[0]\n except:\n return np.nan\n\n\ndf['dates'] = df['Column to extract'].apply(lambda x: parse_date(x))\n\n" ]
[ 1, 0 ]
[]
[]
[ "extract", "pandas", "python", "python_dateutil" ]
stackoverflow_0074479115_extract_pandas_python_python_dateutil.txt
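The question above also asked to join multiple dates with a delimiter. Building on the spacy answer's final df, where the dates column holds an array of datetimes per row, a small sketch:

# join each row's extracted dates into one delimited string, skipping NaT
df["dates_joined"] = df["dates"].apply(
    lambda ds: "; ".join(str(d) for d in ds if pd.notna(d))
)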
Large dataset and finding permutations matching various criteria
I have a list of football players with length 15000 which consists of dicts (all of the same shape). An element in the list looks like this: { 'id': '123456', 'name': 'Foo Bar', 'position': 'GK', 'club': 'Python FC', 'league': 'Champions League', 'country': 'Neverland' } A team consists of 11 players, where each position is specified and must be filled. A formation may look like this: formation = [('ST', 2), ('LM', 1), ('RM', 1), ('CM', 2), ('LB', 1), ('RB', 1), ('CB', 2), ('GK', 1)] I would like to find all possible combinations/permutations of players matching the following criteria: formation players from at least 5 different countries maximum 4 from the same club What I have tried Assuming my_list is the list of all players; Attempt 1 from itertools import * squads = permutations(my_list,11) match_countries = [squad for squad in squads if Counter([player['country'] for player in squad]).most_common().__len__() >= 5 and Counter([player['club'] for player in squad]).most_common().__len__() <= 4 ] But this will take WAY too long, because I have mal-formed squads. Attempt 2 Still using the same list of players initially, I split it up into per-position lists. So I have a list of players per position, like this: list_goalkeepers = [player for player in my_list if player['position'] == 'GK'] Then using these lists I make squads. For example, a squad with 2 strikers and a goalkeeper would look like this: squads = product(list_goalkeepers, list_strikers, list_strikers, .....) This still results in a huge number of squads, but at least they are all valid; when I try to check the number of countries, it still iterates through all squads to find a match. I perform the country search like this: match_countries = [squad for squad in collection if Counter([player['country'] for player in squad]).most_common().__len__() >= 5 ] Is there any way to do this fast(er)? This is tediously slow.
[ "Here's a couple of things which will help, but they won't reduce this to a tractable problem (see below).\nFirst,\nsquads = product(list_goalkeepers, list_strikers, list_strikers, .....)\n\nis not actually correct. product([striker1, striker2], [striker1, striker2]) (to just look a small bit of that product) generates four possibilities:\n[striker1, striker1]\n[striker1, striker2]\n[striker2, striker1]\n[striker2, striker2]\n\nOf those, two are incorrect because the same player is included twice in a squad, and the other two are duplicates because the order of the players in each squad is irrelevant. So there is actually only one legal combination, {striker1, striker2}. To get that, you need itertools.combinations(strikers, 2).\nIf the list A has n elements, product(A, A) will produce n² lists, whereas combinations(A, 2) will produce (n²-n)/2 lists, about half the number. Since you have three positions with two players, your product invocation generates a bit more than 8 times too many squads. So getting it right will speed things up quite a bit. But it's not quite a simple as adding some calls to combination. What you need to do is something like this:\nfrom collections import defaultdict\nfrom itertools import product, combinations, chain\n \nposition_players = defaultdict(list)\nfor player in all_players:\n position_players[player['position']].append(player)\ndef flatten(list_of_lists):\n return [*chain.from_iterable(list_of_lists)]\n# See below for more general solution\ncandidates = [*map(flatten,\n product(\n product(*(position_players[pos]\n for pos in ('LM', 'RM', 'LB', 'RB', 'GK'))),\n *(combinations(position_players[pos], 2)\n for pos in ('ST', 'CM', 'CB'))))]\n\nA more general solution would use formations to construct the final product, but I think the above is already hard enough to read :-). Still, for what it's worth:\ncandidates = [*map(flatten,\n product(\n *(combinations(position_players[posn], count)\n for posn, count in formation)))]\n\nSecondly, you seem to be implementing both of the other criteria, the maximum number of players per club and the minimum number of countries, using the same formula involving counter.most_common().__len__(). Leaving aside the question of why you directly call the __len__ dunder instead of just using the more natural len(counter.most_common()), this formulation is either incorrect or inefficient:\n\nCounter([player['club'] for player in squad]).most_common().__len__() <= 4\nchecks whether there are at most four clubs represented in the squad. But the criterion you have is that no club be represented by more than four players, which would be\nCounter([player['club'] for player in squad]).most_common(1) <= 4.\n\nCounter([player['country'] for player in squad]).most_common().__len__() >= 5\ndoes check that there are at least five countries represented. But so does the much simpler (and somewhat faster):\nlen(set(player['country'] for player in squad)) >= 5\n\n\nFixing those will make the list correct, and speed the solution up considerably. But it won't really help.\nAs with many combinatorial problems, it's easy to underestimate the number of possible candidates and thus formulate completely impractical solutions which involve generating every possibility.\nAs a quick illustration, let's suppose 5001 players are distributed roughly proportionately between positions: 1364 candidates for each of the 5 unitary positions ('LM', 'RM', 'LB', 'RB', 'GK') and 2727 candidates for each of the remaining three dual positions ('ST', 'CM', 'CB'). 
There are then 1364⁵ * (2727 * 2726 / 2)³ possible squads, leaving aside the country/club criterion which probably don't eliminate the majority of possibilities. That works out to 901,147,384,847,503,556,419,043,700,514,410,951,988,224 possible squads. I think it's safe to say that iterating over all of those is not just \"agonizingly slow\". It's impossible to do within your lifetime (or, indeed, the predicted lifetime of the planet).\nYou are probably better advised to find a way of creating and using a random sample of a tractable size, selected uniformly from the universe of possibilities.\n" ]
[ 0 ]
[]
[]
[ "generator", "list", "permutation", "python" ]
stackoverflow_0074476651_generator_list_permutation_python.txt
Q: TypeError: unsupported operand type(s) for +: 'DatetimeArray' and 'relativedelta' I am trying to convert a column called Month_Next from a dataframe called df_actual from the last day of one month to the first day of the next. The column looks like this: And I'm using df_actual.Month_Next = pd.to_datetime(df_actual.Month_Next) + relativedelta(months=1, day=1) and getting this error. TypeError: unsupported operand type(s) for +: 'DatetimeArray' and 'relativedelta' Which makes no sense to me since this exact code works in a different notebook where Month_Next comes in as I believe a Timestamp object like so Any ideas as to what's going on here? A: You are trying to add date types from different packages - one from pandas and the other dateutil. Try converting them to pandas types (use pandas.Timedelta). Example: import pandas as pd datetime_arr = pd.arrays.DatetimeArray(pd.Series([0, 1, 2, 3, 4])) print(datetime_arr) print(datetime_arr + pd.Timedelta(10, 'd')) Output: <DatetimeArray> [ '1970-01-01 00:00:00', '1970-01-01 00:00:00.000000001', '1970-01-01 00:00:00.000000002', '1970-01-01 00:00:00.000000003', '1970-01-01 00:00:00.000000004'] Length: 5, dtype: datetime64[ns] <DatetimeArray> [ '1970-01-11 00:00:00', '1970-01-11 00:00:00.000000001', '1970-01-11 00:00:00.000000002', '1970-01-11 00:00:00.000000003', '1970-01-11 00:00:00.000000004'] Length: 5, dtype: datetime64[ns] A: This works just fine: Month_Next = df_actual.AsOfDate + pd.DateOffset(months =1) - pd.offsets.MonthBegin(1)
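As a minimal sketch of the offset-based fix from the second answer (with made-up month-end dates, since the original screenshots are not reproduced here), pandas' anchored MonthBegin offset performs the last-day-to-first-day shift directly on a whole datetime Series, which is exactly where relativedelta fails:

import pandas as pd

month_end = pd.to_datetime(pd.Series(['2022-01-31', '2022-02-28', '2022-03-31']))

# MonthBegin(1) rolls each date forward to the next 1st of a month,
# so every month-end value lands on the first day of the following month.
print(month_end + pd.offsets.MonthBegin(1))
# 0   2022-02-01
# 1   2022-03-01
# 2   2022-04-01
# dtype: datetime64[ns]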
TypeError: unsupported operand type(s) for +: 'DatetimeArray' and 'relativedelta'
I am trying to convert a column called Month_Next from a dataframe called df_actual from the last day of one month to the first day of the next. The column looks like this: And I'm using df_actual.Month_Next = pd.to_datetime(df_actual.Month_Next) + relativedelta(months=1, day=1) and getting this error. TypeError: unsupported operand type(s) for +: 'DatetimeArray' and 'relativedelta' Which makes no sense to me since this exact code works in a different notebook where Month_Next comes in as I believe a Timestamp object like so Any ideas as to what's going on here?
[ "You are trying to add date types from different packages - one from pandas and the other dateutil. Try converting them to pandas types (use pandas.Timedelta).\nExample:\nimport pandas as pd\n\ndatetime_arr = pd.arrays.DatetimeArray(pd.Series([0, 1, 2, 3, 4]))\n\nprint(datetime_arr)\nprint(datetime_arr + pd.Timedelta(10, 'd'))\n\nOutput:\n<DatetimeArray>\n[ '1970-01-01 00:00:00', '1970-01-01 00:00:00.000000001',\n '1970-01-01 00:00:00.000000002', '1970-01-01 00:00:00.000000003',\n '1970-01-01 00:00:00.000000004']\nLength: 5, dtype: datetime64[ns]\n<DatetimeArray>\n[ '1970-01-11 00:00:00', '1970-01-11 00:00:00.000000001',\n '1970-01-11 00:00:00.000000002', '1970-01-11 00:00:00.000000003',\n '1970-01-11 00:00:00.000000004']\nLength: 5, dtype: datetime64[ns]\n\n", "This works just fine:\nMonth_Next = df_actual.AsOfDate + pd.DateOffset(months =1) - pd.offsets.MonthBegin(1)\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python", "relativedelta" ]
stackoverflow_0074467623_dataframe_datetime_pandas_python_relativedelta.txt
Q: How to convert Python dictionary to Scala equivalent (Map?)? I have a large (~700K) Python dictionary with many sub-dictionaries, which I need to convert to whatever the appropriate equivalent is in Scala (Map?). It can be immutable. What's the easiest/quickest way to do this? The dictionary is a hardcoded static dictionary in the source code of a larger Python script, which I'm converting to Scala, so I need to convert the dictionary to Scala as part of that. I'm not going to be changing the dictionary; it can be read-only (but doesn't have to be). It's a one-time conversion, not something I need to repeat. The Scala script will be run once a day on a Hadoop-based big data platform. I'm after the solution that is quickest to implement; ideally it will also be reasonably efficient at run time, but that's not so important. Here's the start of the dictionary in Python: MyData = {"590":{"69035":{"name":"Orange Caraïbe","id":"GLP01","realms":["epc.mnc001.mcc340.3gppnetwork.org"],"iso":"GP"},"59066":{"name":"Dauphin Telecom","id":"GLPDT","realms":["epc.mnc008.mcc340.3gppnetwork.org"],"iso":"GP"},"59077":{"name":"Dauphin Telecom","id":"GLPDT","realms":["epc.mnc008.mcc340.3gppnetwork.org"],"iso":"GP"},"69000":{"name":"Outremer Télécom","id":"GUF01","realms":["epc.mnc002.mcc340.3gppnetwork.org"],"iso":"GP"},"6004":{"name":"Setel N.V.","id":"ANTUT","realms":["epc.mnc091.mcc362.3gppnetwork.org"],"iso":"AN"}, .... I'm an experienced developer but new to both Python and Scala, so I'm looking for explicit solutions or code :)
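A low-effort route, sketching the JSON hand-off suggested in the response below (the file name mydata.json is an assumption): serialize the dictionary once from Python, then parse that file at startup on the Scala side with any JSON library into a nested immutable Map.

import json

# MyData is the hard-coded dict from the question; write it out once.
with open('mydata.json', 'w', encoding='utf-8') as f:
    json.dump(MyData, f, ensure_ascii=False)

Since the conversion is one-time and the data is read-only, a plain nested Map[String, ...] (or a small case class for the inner records) on the Scala side is fine; parse speed is irrelevant at once-a-day granularity.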
How to convert Python dictionary to Scala equivalent (Map?)?
I have a large (~700K) Python dictionary with many sub-dictionaries, which I need to convert to whatever the appropriate equivalent is in Scala (Map?). It can be immutable. What's the easiest/quickest way to do this? The dictionary is a hardcoded static dictionary in the source code of a larger Python script, which I'm converting to Scala, so I need to convert the dictionary to Scala as part of that. I'm not going to be changing the dictionary; it can be read-only (but doesn't have to be). It's a one-time conversion, not something I need to repeat. The Scala script will be run once a day on a Hadoop-based big data platform. I'm after the solution that is quickest to implement; ideally it will also be reasonably efficient at run time, but that's not so important. Here's the start of the dictionary in Python: MyData = {"590":{"69035":{"name":"Orange Caraïbe","id":"GLP01","realms":["epc.mnc001.mcc340.3gppnetwork.org"],"iso":"GP"},"59066":{"name":"Dauphin Telecom","id":"GLPDT","realms":["epc.mnc008.mcc340.3gppnetwork.org"],"iso":"GP"},"59077":{"name":"Dauphin Telecom","id":"GLPDT","realms":["epc.mnc008.mcc340.3gppnetwork.org"],"iso":"GP"},"69000":{"name":"Outremer Télécom","id":"GUF01","realms":["epc.mnc002.mcc340.3gppnetwork.org"],"iso":"GP"},"6004":{"name":"Setel N.V.","id":"ANTUT","realms":["epc.mnc091.mcc362.3gppnetwork.org"],"iso":"AN"}, .... I'm an experienced developer but new to both Python and Scala, so I'm looking for explicit solutions or code :)
[]
[]
[ "When you say you \"have it in python\", where is it coming from? Is python code generating it, or reading it from a file, or...?\nI ask because my first move would be to try to just re-implement whatever is loading/generating it into the python runtime in scala instead. Otherwise you're adding unnecessary performance overhead (or worse - potentially reading in python, writing in python and then reading in scala, when you could just read in scala), and saddling yourself with the maintenance of both python and scala components going forward.\nIf that's not an option, there are libraries that will let you call python and scala functions from each other, but my bias would be to serialize it into a json from python and then deserialize that in scala.\n" ]
[ -1 ]
[ "python", "scala" ]
stackoverflow_0074479328_python_scala.txt
Q: I get an error while installing python-docx how can i solve this? C:\Users\Mateo>pip install python-docx Collecting python-docx Using cached python_docx-0.8.11-py3-none-any.whl Collecting lxml>=2.3.2 Using cached lxml-4.9.1.tar.gz (3.4 MB) Preparing metadata (setup.py) ... done Building wheels for collected packages: lxml Building wheel for lxml (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [94 lines of output] Building lxml version 4.9.1. Building without Cython. Building against pre-built libxml2 andl libxslt libraries running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\lxml copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml creating build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html creating build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\relaxng.pxd -> 
build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\src creating build\temp.win-amd64-cpython-311\Release\src\lxml "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\include -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual 
Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w cl : Command line warning D9025 : overriding '/W3' with '/w' etree.c C:\Users\Mateo\AppData\Local\Temp\pip-install-e21y1uta\lxml_6a1c02a358e44a829f28dd12b951e3ab\src\lxml\includes/etree_defs.h(14): fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': No such file or directory Compile failed: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 creating Users creating Users\Mateo creating Users\Mateo\AppData creating Users\Mateo\AppData\Local creating Users\Mateo\AppData\Local\Temp "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /TcC:\Users\Mateo\AppData\Local\Temp\xmlXPathInit3lxm04ns.c /FoUsers\Mateo\AppData\Local\Temp\xmlXPathInit3lxm04ns.obj xmlXPathInit3lxm04ns.c C:\Users\Mateo\AppData\Local\Temp\xmlXPathInit3lxm04ns.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for lxml Running setup.py clean for lxml Failed to build lxml Installing collected packages: lxml, python-docx Running setup.py install for lxml ... error error: subprocess-exited-with-error × Running setup.py install for lxml did not run successfully. │ exit code: 1 ╰─> [91 lines of output] Building lxml version 4.9.1. Building without Cython. Building against pre-built libxml2 andl libxslt libraries running install C:\Users\Mateo\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\lxml copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml creating build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html creating build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlschema.pxd -> 
build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\src creating build\temp.win-amd64-cpython-311\Release\src\lxml "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\include -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w cl : Command line warning D9025 : overriding '/W3' with '/w' etree.c 
C:\Users\Mateo\AppData\Local\Temp\pip-install-e21y1uta\lxml_6a1c02a358e44a829f28dd12b951e3ab\src\lxml\includes/etree_defs.h(14): fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': No such file or directory Compile failed: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /TcC:\Users\Mateo\AppData\Local\Temp\xmlXPathInit_6as6yae.c /FoUsers\Mateo\AppData\Local\Temp\xmlXPathInit_6as6yae.obj xmlXPathInit_6as6yae.c C:\Users\Mateo\AppData\Local\Temp\xmlXPathInit_6as6yae.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> lxml note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. I want to download python-docx and I am not so familiar with Python libraries, pip, and installing this way. If I try this I get some errors. First it said that it couldn't build a wheel for lxml and that I needed Microsoft Visual C++ 2014, so I downloaded Microsoft Visual Studio and installed the C++ build tools. Then that error disappeared, but now it says it can't build the wheel and "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" I tried downloading the zip file libxml2 and putting it where Python is stored, but nothing works. A: Okay, after an hour of searching :) I found something that works. So for the beginners like me I'll explain it very simply. First download the right lxml file here: http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml Then type this in cmd: pip install C:\path\to\downloaded\file\lxml-4.5.2-cp39-cp39-win32.whl but of course change the path to where you stored the file you just downloaded. Now you have lxml installed and the error should be gone.
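On newer setups there is often a simpler route that avoids compiling lxml entirely; a sketch, assuming a prebuilt wheel exists for your interpreter (early Python 3.11 users may need a newer lxml release than 4.9.1 for that):

python -m pip install --upgrade pip
python -m pip install --only-binary :all: lxml
python -m pip install python-docx

The --only-binary :all: flag forces pip to use a wheel and fail fast, rather than attempting the source build whose compiler errors are shown above.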
I get an error while installing python-docx how can i solve this?
C:\Users\Mateo>pip install python-docx Collecting python-docx Using cached python_docx-0.8.11-py3-none-any.whl Collecting lxml>=2.3.2 Using cached lxml-4.9.1.tar.gz (3.4 MB) Preparing metadata (setup.py) ... done Building wheels for collected packages: lxml Building wheel for lxml (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [94 lines of output] Building lxml version 4.9.1. Building without Cython. Building against pre-built libxml2 andl libxslt libraries running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\lxml copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml creating build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html creating build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying 
src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\src creating build\temp.win-amd64-cpython-311\Release\src\lxml "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\include -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows 
Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w cl : Command line warning D9025 : overriding '/W3' with '/w' etree.c C:\Users\Mateo\AppData\Local\Temp\pip-install-e21y1uta\lxml_6a1c02a358e44a829f28dd12b951e3ab\src\lxml\includes/etree_defs.h(14): fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': No such file or directory Compile failed: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 creating Users creating Users\Mateo creating Users\Mateo\AppData creating Users\Mateo\AppData\Local creating Users\Mateo\AppData\Local\Temp "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /TcC:\Users\Mateo\AppData\Local\Temp\xmlXPathInit3lxm04ns.c /FoUsers\Mateo\AppData\Local\Temp\xmlXPathInit3lxm04ns.obj xmlXPathInit3lxm04ns.c C:\Users\Mateo\AppData\Local\Temp\xmlXPathInit3lxm04ns.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for lxml Running setup.py clean for lxml Failed to build lxml Installing collected packages: lxml, python-docx Running setup.py install for lxml ... error error: subprocess-exited-with-error × Running setup.py install for lxml did not run successfully. │ exit code: 1 ╰─> [91 lines of output] Building lxml version 4.9.1. Building without Cython. Building against pre-built libxml2 andl libxslt libraries running install C:\Users\Mateo\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\lxml copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml creating build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html creating build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlschema.pxd -> 
build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\src creating build\temp.win-amd64-cpython-311\Release\src\lxml "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\include -IC:\Users\Mateo\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w cl : Command line warning D9025 : overriding '/W3' with '/w' etree.c 
C:\Users\Mateo\AppData\Local\Temp\pip-install-e21y1uta\lxml_6a1c02a358e44a829f28dd12b951e3ab\src\lxml\includes/etree_defs.h(14): fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': No such file or directory Compile failed: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /TcC:\Users\Mateo\AppData\Local\Temp\xmlXPathInit_6as6yae.c /FoUsers\Mateo\AppData\Local\Temp\xmlXPathInit_6as6yae.obj xmlXPathInit_6as6yae.c C:\Users\Mateo\AppData\Local\Temp\xmlXPathInit_6as6yae.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> lxml note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. I want to download python-docx and i am not so familiar with python libraries, pip and installing this way. If I try this I get some errors. First it said that it couldn't build a wheel for lxml and that I needed microsoft visual C++ 2014 so I downloaded microsoft visual studio and installed the C++ build tools. Than that error dissapeared but now it sais it can't build the wheel and "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" I tried downloading the zip file libxml2 and putting it where python is stored butnothing works.
[ "Okay after an hour of searching :) I found something that works. So for the beginners like me I'll explain it very simple.\n\nFirst download the right lxml file here:\nhttp://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml\nThen type this in cmd:\npip install C:\\path\\to\\downloaded\\file\\lxml‑4.5.2‑cp39‑cp39‑win32.whl\nbut ofc change the path to where you stored the file you just downloaded.\nNow you have lxml installed and the error should be gone.\n\n" ]
[ 1 ]
[]
[]
[ "libxml2", "pip", "python", "python_docx", "xml" ]
stackoverflow_0074479256_libxml2_pip_python_python_docx_xml.txt
Q: High accuracy during training and validation, low accuracy during prediction with the same dataset So I'm trying to train a Keras model. There is high accuracy (I'm using f1score, but accuracy is also high) while training and validating. But when I'm trying to predict some dataset I'm getting lower accuracy. Even if I predict the training set. So I guess it's not an overfitting problem. What then is the problem? import matplotlib.pyplot as plt skf = StratifiedKFold(n_splits=5) for train_index, test_index in skf.split(X, y): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] X_train,x_val,y_train,y_val = train_test_split(X_train, y_train, test_size=0.5,stratify = y_train) y_train = encode(y_train) y_val = encode(y_val) model = Sequential() model.add(Dense(50,input_dim=X_train.shape[1],activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(25,activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(10,activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(2, activation='softmax')) opt = Adam(learning_rate=0.001) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['acc', ta.utils.metrics.f1score]) history = model.fit(X_train, y_train, validation_data=(x_val, y_val), epochs=5000, verbose=0) plt.plot(history.history['f1score']) plt.plot(history.history['val_f1score']) plt.title('model accuracy') plt.ylabel('f1score') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() break The result is here. As you can see, results are high on the training and validation sets. And the code for prediction: from sklearn.metrics import f1_score y_pred = model.predict(x_train) y_pred = decode(y_pred) y_train_t = decode(y_train) print(f1_score(y_train_t, y_pred)) The result is 0.64, which is less than the expected 0.9. My decode and encode: def encode(y): Y=np.zeros((y.shape[0],2)) for i in range(len(y)): if y[i]==1: Y[i][1]=1 else : Y[i][0]=1 return Y def decode(y): Y=np.zeros((y.shape[0])) for i in range(len(y)): if np.argmax(y[i])==1: Y[i]=1 else : Y[i]=0 return Y A: Since you use a last layer of model.add(Dense(2, activation='softmax')) you should not use loss='binary_crossentropy' in model.compile(), but loss='categorical_crossentropy' instead. Due to this mistake, the results shown during model fitting are probably wrong - the results returned by sklearn's f1_score are the real ones. Irrelevant to your question (as I guess the follow-up one will be how to improve it?), we practically never use activation='tanh' for the hidden layers (try relu instead). Also, dropout should not be used by default (especially with such a high value of 0.5); comment-out all dropout layers and only add them back if your model overfits (using dropout when it is not needed is known to hurt performance). A: I think that you should change the binary_crossentropy to categorical_crossentropy since you use one-hot encoding. A: Somehow, the combination of image generator and the predict_generator() function or the predict() function of Keras' model does not work as expected. Rather than using image generator to do prediction, I'd rather loop through all test images one-by-one and get the prediction for each image in each iteration. I am using Plaid-ML Keras as my backend and to get prediction I am using the following code. 
import os from PIL import Image import keras import numpy ### # I am not including code to load models or train model ### print("Prediction result:") dir = "/path/to/test/images" files = os.listdir(dir) correct = 0 total = 0 # dictionary mapping each class index to its label classes = { 0:'This is Cat', 1:'This is Dog', } for file_name in files: total += 1 image = Image.open(dir + "/" + file_name).convert('RGB') image = image.resize((100,100)) image = numpy.expand_dims(image, axis=0) image = numpy.array(image) image = image/255 pred = model.predict_classes([image])[0] sign = classes[pred] if ("cat" in file_name) and ("cat" in sign.lower()): print(correct,". ", file_name, sign) correct+=1 elif ("dog" in file_name) and ("dog" in sign.lower()): print(correct,". ", file_name, sign) correct+=1 print("accuracy: ", (correct/total))
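To make the loss-mismatch fix from the first answer concrete, here is a minimal sketch of the two self-consistent head/loss pairings (pick one; opt is the Adam optimizer from the question):

# Option 1: keep the 2-unit softmax head and the one-hot labels
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['acc'])

# Option 2: a single sigmoid unit with plain 0/1 labels,
# which makes the encode()/decode() helpers unnecessary
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['acc'])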
High accuracy during training and validation, low accuracy during prediction with the same dataset
So I'm trying to train Keras model. There is high accuracy (I'm using f1score, but accuracy is also high) while training and validating. But when I'm trying to predict some dataset I'm getting lower accuracy. Even if I predict training set. So I guess it's not about overfitting problem. What then is the problem? import matplotlib.pyplot as plt skf = StratifiedKFold(n_splits=5) for train_index, test_index in skf.split(X, y): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] X_train,x_val,y_train,y_val = train_test_split(X_train, y_train, test_size=0.5,stratify = y_train) y_train = encode(y_train) y_val = encode(y_val) model = Sequential() model.add(Dense(50,input_dim=X_train.shape[1],activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(25,activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(10,activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(2, activation='softmax')) opt = Adam(learning_rate=0.001) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['acc', ta.utils.metrics.f1score]) history = model.fit(X_train, y_train, validation_data=(x_val, y_val), epochs=5000, verbose=0) plt.plot(history.history['f1score']) plt.plot(history.history['val_f1score']) plt.title('model accuracy') plt.ylabel('f1score') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() break The result is here. As you can see results high at training and validation set. And code for predict: from sklearn.metrics import f1_score y_pred = model.predict(x_train) y_pred = decode(y_pred) y_train_t = decode(y_train) print(f1_score(y_train_t, y_pred)) The result is 0.64, that is less than expected 0.9. My decode and encode: def encode(y): Y=np.zeros((y.shape[0],2)) for i in range(len(y)): if y[i]==1: Y[i][1]=1 else : Y[i][0]=1 return Y def decode(y): Y=np.zeros((y.shape[0])) for i in range(len(y)): if np.argmax(y[i])==1: Y[i]=1 else : Y[i]=0 return Y
[ "Since you use a last layer of\nmodel.add(Dense(2, activation='softmax')\n\nyou should not use loss='binary_crossentropy' in model.compile(), but loss='categorical_crossentropy' instead.\nDue to this mistake, the results shown during model fitting are probably wrong - the results returned by sklearn's f1_score are the real ones.\nIrrelevant to your question (as I guess the follow-up one will be how to improve it?), we practically never use activation='tanh' for the hidden layers (try relu instead). Also, dropout should not be used by default (especially with such a high value of 0.5); comment-out all dropout layers and only add them back if your model overfits (using dropout when it is not needed is known to hurt performance).\n", "I think that you should change the binary_crossentropy to categorical_crossentropy since you use one-hot encoding.\n", "Somehow, the combination of image generator and the predict_generator() function or the predict() function of Keras' model does not work as expected.\nRather than using image generator to do prediction, I'd rather loop through all test images one-by-one and get the prediction for each image in each iteration. I am using Plaid-ML Keras as my backend and to get prediction I am using the following code.\nimport os\nfrom PIL import Image\nimport keras\nimport numpy\n\n###\n# I am not including code to load models or train model\n###\n\nprint(\"Prediction result:\")\ndir = \"/path/to/test/images\"\nfiles = os.listdir(dir)\ncorrect = 0\ntotal = 0\n#dictionary to label all traffic signs class.\nclasses = {\n 0:'This is Cat',\n 1:'This is Dog',\n}\nfor file_name in files:\n total += 1\n image = Image.open(dir + \"/\" + file_name).convert('RGB')\n image = image.resize((100,100))\n image = numpy.expand_dims(image, axis=0)\n image = numpy.array(image)\n image = image/255\n pred = model.predict_classes([image])[0]\n sign = classes[pred]\n if (\"cat\" in file_name) and (\"cat\" in sign):\n print(correct,\". \", file_name, sign)\n correct+=1\n elif (\"dog\" in file_name) and (\"dog\" in sign):\n print(correct,\". \", file_name, sign)\n correct+=1\nprint(\"accuracy: \", (correct/total))\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "deep_learning", "keras", "machine_learning", "python", "tensorflow" ]
stackoverflow_0066452884_deep_learning_keras_machine_learning_python_tensorflow.txt
Q: how to use pandas groupby to aggregate data across multiple columns I have a pandas dataframe: Reference timestamp sub_reference datatype_indicator figure REF1 2022-09-01 10 A 23.6 REF1 2022-09-01 48 B 25.8 REF1 2022-09-02 10 A 17.4 REF1 2022-10-01 10 A 23.6 REF1 2022-10-01 48 B 25.8 REF1 2022-10-02 10 A 17.4 REF2 2022-09-01 10 A 23.6 REF2 2022-09-01 48 B 25.8 REF2 2022-09-02 10 A 17.4 REF2 2022-10-01 11 A 23.6 REF2 2022-10-01 47 B 25.8 REF2 2022-10-02 10 A 17.4 REF3 2022-09-01 10 A 23.6 REF3 2022-09-01 48 B 25.8 REF3 2022-09-02 10 A 17.4 REF3 2022-10-01 11 A 23.6 REF3 2022-10-01 47 B 25.8 REF3 2022-10-02 10 A 17.4 I need to group the data by 'Reference' and the month in 'timestamp' to produce an aggregated value of 'figure' for each reference/month. I am trying the below code, but receive TypeError: unhashable type: 'Series' dg = df1.groupby([ pd.Grouper('reference'), pd.Grouper(df1['timestamp'].dt.month) ]).sum() dg.index = dg.index.strftime('%B') print(dg) A: I've never used pd.Grouper before, but I think your issue is with how it is treating the extraction of the month. I tried it like this: >>> # add a new column for month >>> df1["month"] = df1["timestamp"].dt.month >>> dg = df1.groupby(by=["Reference", "month"], as_index=False).agg({"figure":sum}) >>> dg Reference month figure 0 REF1 9 66.8 1 REF1 10 66.8 2 REF2 9 66.8 3 REF2 10 66.8 4 REF3 9 66.8 5 REF3 10 66.8 A: # create a year-month from the date # groupby and sum figure df['month'] = pd.to_datetime(df['timestamp']).dt.strftime('%Y-%b') out= df.groupby(['Reference','month' ], as_index=False)['figure'].sum() out OR # use assign to create month column # group and sum figure out= (df.assign(month=pd.to_datetime(df['timestamp']).dt.strftime('%Y-%b')) .groupby(['Reference','month' ], as_index=False)['figure'].sum()) out Reference month figure 0 REF1 2022-Oct 66.8 1 REF1 2022-Sep 66.8 2 REF2 2022-Oct 66.8 3 REF2 2022-Sep 66.8 4 REF3 2022-Oct 66.8 5 REF3 2022-Sep 66.8 A: grouper = pd.PeriodIndex(df['timestamp'], freq='M') df.groupby(['Reference', grouper])['figure'].sum().reset_index() result: Reference timestamp figure 0 REF1 2022-09 66.8 1 REF1 2022-10 66.8 2 REF2 2022-09 66.8 3 REF2 2022-10 66.8 4 REF3 2022-09 66.8 5 REF3 2022-10 66.8 if you want to change it to %B grouper = pd.to_datetime(df['timestamp']).dt.strftime('%B') df.groupby(['Reference', grouper])['figure'].sum().reset_index() result: Reference timestamp figure 0 REF1 October 66.8 1 REF1 September 66.8 2 REF2 October 66.8 3 REF2 September 66.8 4 REF3 October 66.8 5 REF3 September 66.8
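A side note on the original attempt, as a hedged sketch with a tiny made-up frame: pd.Grouper takes the column name via key= (passing a Series positionally is what raised the TypeError), and with a datetime column freq='M' groups by calendar month directly, with no manual month column.

import pandas as pd

df = pd.DataFrame({
    'Reference': ['REF1', 'REF1', 'REF1', 'REF2'],
    'timestamp': pd.to_datetime(['2022-09-01', '2022-09-02',
                                 '2022-10-01', '2022-09-01']),
    'figure': [23.6, 17.4, 25.8, 23.6],
})

# key= names the column to group on; freq='M' buckets by calendar month.
out = (df.groupby(['Reference', pd.Grouper(key='timestamp', freq='M')])
         ['figure'].sum().reset_index())
print(out)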
how to use pandas groupby to aggregate data across multiple columns
I have a pandas dataframe: Reference timestamp sub_reference datatype_indicator figure REF1 2022-09-01 10 A 23.6 REF1 2022-09-01 48 B 25.8 REF1 2022-09-02 10 A 17.4 REF1 2022-10-01 10 A 23.6 REF1 2022-10-01 48 B 25.8 REF1 2022-10-02 10 A 17.4 REF2 2022-09-01 10 A 23.6 REF2 2022-09-01 48 B 25.8 REF2 2022-09-02 10 A 17.4 REF2 2022-10-01 11 A 23.6 REF2 2022-10-01 47 B 25.8 REF2 2022-10-02 10 A 17.4 REF3 2022-09-01 10 A 23.6 REF3 2022-09-01 48 B 25.8 REF3 2022-09-02 10 A 17.4 REF3 2022-10-01 11 A 23.6 REF3 2022-10-01 47 B 25.8 REF3 2022-10-02 10 A 17.4 I need to group the data by 'Reference' and the month in 'timestamp' to produce an aggregated value of 'figure' for the reference/month.. I am trying the below code, but receive TypeError: unhashable type: 'Series' dg = df1.groupby([ pd.Grouper('reference'), pd.Grouper(df1['timestamp'].dt.month) ]).sum() dg.index = dg.index.strftime('%B') print(dg)
[ "I've never used the pd.Grouper before, but I think your issue is with how it is treating the extraction of the month.\nI tried it like this:\n>>> # add a new column for month\n>>> df1[\"month\"] = df1[\"timestamp\"].dt.month\n\n>>> dg = df1.groupby(by=[\"Reference\", \"month\"], as_index=False).agg({\"figure\":sum})\n>>> dg\n Reference month figure\n0 REF1 9 66.8\n1 REF1 10 66.8\n2 REF2 9 66.8\n3 REF2 10 66.8\n4 REF3 9 66.8\n5 REF3 10 66.8\n\n", "# create a year-month from teh date\n# groupby and sum figure\ndf['month'] = pd.to_datetime(df['timestamp']).dt.strftime('%Y-%b')\nout= df.groupby(['Reference','month' ], as_index=False)['figure'].sum()\n\nout\n\nOR\n# use assign to create month column\n# group and sum figure\n\nout= (df.assign(month=pd.to_datetime(df['timestamp']).dt.strftime('%Y-%b'))\n .groupby(['Reference','month' ], as_index=False)['figure'].sum())\n\nout\n\n Reference month figure\n0 REF1 2022-Oct 66.8\n1 REF1 2022-Sep 66.8\n2 REF2 2022-Oct 66.8\n3 REF2 2022-Sep 66.8\n4 REF3 2022-Oct 66.8\n5 REF3 2022-Sep 66.8\n\n", "grouper = pd.PeriodIndex(df['timestamp'], freq='M')\ndf.groupby(['Reference', grouper])['figure'].sum().reset_index()\n\nresult:\n Reference timestamp figure\n0 REF1 2022-09 66.8\n1 REF1 2022-10 66.8\n2 REF2 2022-09 66.8\n3 REF2 2022-10 66.8\n4 REF3 2022-09 66.8\n5 REF3 2022-10 66.8\n\nif you want change to %B\ngrouper = pd.to_datetime(df['timestamp']).dt.strftime('%B')\ndf.groupby(['Reference', grouper])['figure'].sum().reset_index()\n\nresult:\n Reference timestamp figure\n0 REF1 October 66.8\n1 REF1 September 66.8\n2 REF2 October 66.8\n3 REF2 September 66.8\n4 REF3 October 66.8\n5 REF3 September 66.8\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dataframe", "group_by", "pandas", "python", "sum" ]
stackoverflow_0074479192_dataframe_group_by_pandas_python_sum.txt
Q: Pandas ValueError when creating series indexes from a list of pd.Index objects When trying to create a pandas Series in the following way, I am receiving a ValueError: indexes = [pd.Index([1]), pd.Index([2])] pd.Series( ["a", "b"], index=indexes ) ValueError: Length of values (2) does not match length of index (1) Is this expected/documented behaviour? Tested on: python3.11/pandas1.5.1 python3.9.13/pandas1.4.4 A: Why are you not using this instead? indexes = [1,2] pd.Series( ["a", "b"], index=indexes )
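One possible reading of the error (hedged, inferred from the message rather than pandas internals): each pd.Index is itself an array-like, so pandas does not treat the list as two scalar labels. If the Index objects have to stay, a workaround is to flatten them into a single Index first, since Index.append accepts a list of further indexes:

import pandas as pd

indexes = [pd.Index([1]), pd.Index([2])]

# Flatten the per-element Index objects into one Index of length 2.
flat = indexes[0].append(indexes[1:])
s = pd.Series(["a", "b"], index=flat)
print(s)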
Pandas ValueError when creating series indexes from a list of pd.Index objects
When trying to create a pandas Series in the following way, I am receiving a ValueError: indexes = [pd.Index([1]), pd.Index([2])] pd.Series( ["a", "b"], index=indexes ) ValueError: Length of values (2) does not match length of index (1) Is this expected/documented behaviour? Tested on: python3.11/pandas1.5.1 python3.9.13/pandas1.4.4
[ "Why are you not using ?\nindexes = [1,2]\npd.Series(\n [\"a\", \"b\"], \n index=indexes\n)\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074479349_pandas_python.txt
Q: Is there a faster way to create a df from a txt file? I have a .txt file with lines such as "G1 X174.774 Y46.362 E1.48236", "M73 Q1 S245", where each token is one letter followed by a number, and tokens are separated by spaces. I'm trying to create a dataframe such that each row is a line from my file and each column is a letter. If my file were just the two lines above, my resulting dataframe would be G X Y E M Q S 1 174.774 46.362 1.48236 0 0 0 0 0 0 0 73 1 245 So far I have a dataframe with the columns of all possible letters in the .txt file, and the .txt file is now represented as a list of strings representing each line of the file. As of now I can only figure out how to add each line individually to the df with the following for loop: for j in tqdm(range(len(lines))): line = lines[j] points = line.split() k = [x[0] for x in points] v = [x[1:] for x in points] line_dict = dict(zip(k, v)) df.loc[j] = pd.Series(line_dict) This gives me my desired result (the unspecified values are NaN, but I can change these to zero later), but as my files have 200k+ lines, it's taking about an hour per file. Is there a faster way I could do this? I've been trying to think of a way to use list comprehension, but using the dict is confusing me a bit, and I'm not sure how much faster that would make things anyway. I haven't been able to find much on stackoverflow about this subject, but if I missed something please feel free to share the link with me! Thanks! A: Yes, I suspect there is. Do not incrementally increase the number of rows in a dataframe in a loop: df.loc[j] = pd.Series(line_dict) This will result in quadratic time complexity. Instead, accumulate those dicts into a list, then create a pandas dataframe from that list at the very end. So: data = [] for line in tqdm(lines): points = line.split() k = [x[0] for x in points] v = [x[1:] for x in points] line_dict = dict(zip(k, v)) data.append(line_dict) df = pd.DataFrame(data) The above should be linear time. A: Specifying the sep parameter in pandas.read_csv could be a good idea. If the separator is a space, then constructing the dataframe could be done as follows: import pandas as pd df = pd.read_csv('file.txt', sep=' ')
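For reference, the whole loop body from the accepted answer collapses into one comprehension; a sketch that assumes every token is a single letter followed by a parseable number:

import pandas as pd

lines = ["G1 X174.774 Y46.362 E1.48236", "M73 Q1 S245"]

# One dict per line, a single DataFrame construction at the end,
# and fillna(0) to turn the missing letters into zeros.
data = [{tok[0]: float(tok[1:]) for tok in line.split()} for line in lines]
df = pd.DataFrame(data).fillna(0)
print(df)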
Is there a faster way to create a df from a txt file?
I have a .txt file with lines such as "G1 X174.774 Y46.362 E1.48236", "M73 Q1 S245", all with one letter then a number and then a space. I'm trying to create a dataframe such that each row is a line from my file and each column is a letter. If my file were just the two lines above, my resulting dataframe would be G X Y E M Q S 1 174.774 46.362 1.48236 0 0 0 0 0 0 0 73 1 245 So far I have a dataframe with the columns of all possible letters in the .txt file, and the .txt file is now represented as a list of strings representing each line of the file. As of now I can only figure out how to add each line individually to the df with the following for loop: for j in tqdm(range(len(lines))): line = lines[j] points = line.split() k = [x[0] for x in points] v = [x[1:] for x in points] line_dict = dict(zip(k, v)) df.loc[j] = pd.Series(line_dict) This gives me my desired result (the unspecified values are NaN, but I can change these to zero later), but as my files have 200k+ lines, it's taking about an hour per file. Is there a faster way I could do this? I've been trying to think of a way to use list comprehension, but using the dict is confusing me a bit, and I'm not sure how much faster that would make things anyway. I haven't been able to find much on stackoverflow about this subject, but if I missed something please feel free to share the link with me! Thanks!
[ "Yes, I suspect there is. Do not incrementally increase the number of rows in a dataframe in a loop:\ndf.loc[j] = pd.Series(line_dict)\n\nThis will result in quadratic time complexity.\nInstead, accumulate those dicts into a list, then create a pandas dataframe from that list at the very end. So:\ndata = []\nfor line in tqdm(range(lines)):\n points = line.split()\n k = [x[0] for x in points]\n v = [x[1:] for x in points]\n line_dict = dict(zip(k, v))\n data.append(line_dict)\n\ndf = pd.DataFrame(data)\n\nThe above should be linear time.\n", "Specifying the sep parameter in pandas.read_csv could be a good idea. If the separator is space, then the dataframe constructing could be implemented as follows:\nimport pandas as pd\ndf = pd.read_csv('file.txt', sep=' ')\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "loops", "optimization", "pandas", "python" ]
stackoverflow_0074479254_dictionary_loops_optimization_pandas_python.txt
Q: parsing telegram MessageMediaPoll and print it as readable text Good day everyone, I'm trying to parse telegram poll data; I have the following: {'_': 'MessageMediaPoll', 'poll': {'_': 'Poll', 'id': 578954245254551900254, 'question': 'Have you seen it ?! ', 'answers': [{'_': 'PollAnswer', 'text': 'Lost', 'option': [48]}, {'_': 'PollAnswer', 'text': 'Am lose', 'option': [49]}, {'_': 'PollAnswer', 'text': 'Have lost', 'option': [50]}, {'_': 'PollAnswer', 'text': 'Am losing', 'option': [51]}], 'closed': False, 'public_voters': False, 'multiple_choice': False, 'quiz': True, 'close_period': None, 'close_date': None}, 'results': {'_': 'PollResults', 'min': False, 'results': [{'_': 'PollAnswerVoters', 'option': [48], 'voters': 2066, 'chosen': False, 'correct': True}, {'_': 'PollAnswerVoters', 'option': [49], 'voters': 471, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [50], 'voters': 704, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [51], 'voters': 279, 'chosen': True, 'correct': False}], 'total_voters': 3520, 'recent_voters': [], 'solution': None, 'solution_entities': []}} and I want to print it like this: Q: Have you seen it ?! A: Lost|Correct A: Am lose|Incorrect A: Have lost|Incorrect A: Am losing|Incorrect How can I achieve that in Python? And what is the type of the data, json? A: data = {'_': 'MessageMediaPoll', 'poll': {'_': 'Poll', 'id': 57894245245450254, 'question': 'Have you seen it ?! ', 'answers': [{'_': 'PollAnswer', 'text': 'Lost', 'option': [48]}, {'_': 'PollAnswer', 'text': 'Am lose', 'option': [49]}, {'_': 'PollAnswer', 'text': 'Have lost', 'option': [50]}, {'_': 'PollAnswer', 'text': 'Am losing', 'option': [51]}], 'closed': False, 'public_voters': False, 'multiple_choice': False, 'quiz': True, 'close_period': None, 'close_date': None}, 'results': {'_': 'PollResults', 'min': False, 'results': [{'_': 'PollAnswerVoters', 'option': [48], 'voters': 2066, 'chosen': False, 'correct': True}, {'_': 'PollAnswerVoters', 'option': [49], 'voters': 471, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [50], 'voters': 704, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [51], 'voters': 279, 'chosen': True, 'correct': False}], 'total_voters': 3520, 'recent_voters': [], 'solution': None, 'solution_entities': []}} Questionz = data['poll']['question'] Answerz = "" Truez = "" fullData = "" #print(data['poll']['question']) for answer in data['poll']['answers']: #print(answer['text'], answer['option']) Answerz = Answerz + answer['text'] + "\n" #print(answer['text']) for results in data['results']['results']: #print(results['correct']) Truez = Truez + str(results['correct']) + "\n" TruezLines = Truez.split("\n") n = 0 for line in Answerz.splitlines(): print(line, "|||||||||" ,TruezLines[n]) fullData = line, "|||||||||" ,TruezLines[n] n += 1
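On the type question: the printed structure looks like a Telethon TLObject dumped as a Python dict (note the single quotes, True/False and None), so it is not JSON as-is. A hedged sketch that matches answers to results by their 'option' bytes instead of relying on list order, producing the exact Q:/A: format asked for (data is assumed to be the dict from the question):

# 'data' is the poll dict shown in the question.
print("Q:", data['poll']['question'].strip())

# Pair each answer with its result via the 'option' value,
# which is safer than assuming the two lists are in the same order.
correct_by_option = {tuple(r['option']): r['correct']
                     for r in data['results']['results']}

for answer in data['poll']['answers']:
    verdict = 'Correct' if correct_by_option.get(tuple(answer['option'])) else 'Incorrect'
    print(f"A: {answer['text']}|{verdict}")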
parsing telegram MessageMediaPoll and print it as readable text
Good day every one, I'm trying to parse telegram poll data, I have the following: {'_': 'MessageMediaPoll', 'poll': {'_': 'Poll', 'id': 578954245254551900254, 'question': 'Have you seen it ?! ', 'answers': [{'_': 'PollAnswer', 'text': 'Lost', 'option': [48]}, {'_': 'PollAnswer', 'text': 'Am lose', 'option': [49]}, {'_': 'PollAnswer', 'text': 'Have lost', 'option': [50]}, {'_': 'PollAnswer', 'text': 'Am losing', 'option': [51]}], 'closed': False, 'public_voters': False, 'multiple_choice': False, 'quiz': True, 'close_period': None, 'close_date': None}, 'results': {'_': 'PollResults', 'min': False, 'results': [{'_': 'PollAnswerVoters', 'option': [48], 'voters': 2066, 'chosen': False, 'correct': True}, {'_': 'PollAnswerVoters', 'option': [49], 'voters': 471, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [50], 'voters': 704, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [51], 'voters': 279, 'chosen': True, 'correct': False}], 'total_voters': 3520, 'recent_voters': [], 'solution': None, 'solution_entities': []}} and I want to print it like this: Q: Have you seen it ?! A: Lost|Correct A: Am lose|Incorrect A: Have lost|Incorrect A: Am losing|Incorrect How I can achieve that in Python? and what is the type of the data? json?
[ "data = {'_': 'MessageMediaPoll', 'poll': {'_': 'Poll', 'id': 57894245245450254, 'question': 'Have you seen it ?! ', 'answers': [{'_': 'PollAnswer', 'text': 'Lost', 'option': [48]}, {'_': 'PollAnswer', 'text': 'Am lose', 'option': [49]}, {'_': 'PollAnswer', 'text': 'Have lost', 'option': [50]}, {'_': 'PollAnswer', 'text': 'Am losing', 'option': [51]}], 'closed': False, 'public_voters': False, 'multiple_choice': False, 'quiz': True, 'close_period': None, 'close_date': None}, 'results': {'_': 'PollResults', 'min': False, 'results': [{'_': 'PollAnswerVoters', 'option': [48], 'voters': 2066, 'chosen': False, 'correct': True}, {'_': 'PollAnswerVoters', 'option': [49], 'voters': 471, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [50], 'voters': 704, 'chosen': False, 'correct': False}, {'_': 'PollAnswerVoters', 'option': [51], 'voters': 279, 'chosen': True, 'correct': False}], 'total_voters': 3520, 'recent_voters': [], 'solution': None, 'solution_entities': []}}\n\nQuestionz = data['poll']['question']\nAnswerz = \"\"\nTruez = \"\"\nfullData = \"\"\n\n#print(data['poll']['question'])\nfor answer in data['poll']['answers']:\n #print(answer['text'], answer['option'])\n Answerz = Answerz + answer['text'] + \"\\n\"\n #print(answer['text'])\n\n\nfor results in data['results']['results']:\n #print(results['correct'])\n Truez = Truez + str(results['correct']) + \"\\n\"\n \nTruezLines = Truez.split(\"\\n\")\nn = 0\nfor line in Answerz.splitlines():\n print(line, \"|||||||||\" ,TruezLines[n])\n fullData = line, \"|||||||||\" ,TruezLines[n]\n n += 1\n\n" ]
[ 0 ]
[]
[]
[ "python", "telegram" ]
stackoverflow_0074433968_python_telegram.txt
Q: Minecraft Clone Bug - Ursina Engine I don't know why my minecraft clone destroys blocks of the ground where I don't want it to. Here's my code: from ursina import * from ursina.prefabs.first_person_controller import FirstPersonController from random import * from perlin_noise import * app = Ursina() player = FirstPersonController() Sky(color=color.azure,texture=None) amp = 6 freq = 24 shells = [] shellWidth = 12 noise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000)) for i in range(shellWidth*shellWidth): ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box') shells.append(ent) def respawn(): player.y=5 player.gravity=0 Entity1=None def destruction(position: Vec3): try: collider_entity = Entity( model="cube", collider="box", visible=False, scale=Vec3(0.5, 0.5, 0.5), position=position ) collider_entity.intersects(ignore=[collider_entity]).entity.color=color.clear collider_entity.intersects(ignore=[collider_entity]).entity.collider = None return collider_entity.position except:pass destructionPos=None TextureList=["ursina-tutorials-main/assets/grass","ursina-tutorials-main/assets/sandMinecraft.jfif"] textureNumber=0 x1=0 z1=0 def input(key): global TextureList,textureNumber global Entity1, destructionPos, x1,z1 x1=0 z1=0 amp = 6 freq = 24 position_x = player.x position_z = player.z if key == "w" or key == "w hold" or key == "s" or key == "s hold" or key == "a" or key == "a hold" or key == "d" or key == "d hold": x1 = abs(position_x - abs(player.x)) if player.x > position_x else -abs(position_x - abs(player.x)) z1 = abs(position_z - abs(player.z)) if player.z > position_z else -abs(position_x - abs(player.x)) for i in range(len(shells)): x = shells[i].x = floor((i / shellWidth) + player.x - 0.5 * shellWidth) z = shells[i].z = floor((i % shellWidth) + player.z - 0.5 * shellWidth) y = shells[i].y = floor(noise([x / freq, z / freq]) * amp) if key=="left mouse down" and shells[i].hovered and mouse.world_point: Entity1=(round(mouse.world_point.x), ceil(mouse.world_point.y)-1, round(mouse.world_point.z)) if key=="right mouse down" and mouse.world_point: PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1), position=(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock)) if key=="g": textureNumber+=1 if Entity1!=None: if destructionPos!=None: if distance(Entity(position=(destructionPos)),Entity(position=(Entity1)))>=1: "" myDestructionList=[] def update(): global Entity1,x1,z1,destructionPos if player.y<-100: respawn() try: x000,y000,z000=Entity1 myDestructionList.append(destructionPos) if player.x != x1 or player.z != z1: if (myDestructionList.__len__()+1)>3: destructionPos = destruction(position=(x1,y000,z1)) destructionPos=destruction(destructionPos) else: "" print(destructionPos) except:pass app.run() When I clicked on the blocks, they were destroyed behind me instead. And when I stood on the spot where the block should have been destroyed, I fell through. A: I had to do this, but the hole still isn't displayed yet.
from ursina import * from ursina.prefabs.first_person_controller import FirstPersonController from random import * from perlin_noise import * #import pyautogui app = Ursina() player = FirstPersonController() Sky(color=color.azure,texture=None) amp = 6 freq = 24 shells = [] shellWidth = 12 noise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000)) for i in range(shellWidth*shellWidth): '''def TextureEntity(): ent.texture="ursina-tutorials-main/assets/TextureLess.png"''' ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box')#, on_click=TextureEntity) shells.append(ent) def respawn(): player.y=5 player.gravity=0 Entity1=None def destruction(position: Vec3): try: collider_entity = Entity( model="cube", collider="box", visible=False, scale=Vec3(0.5, 0.5, 0.5), position=position ) #collider_entity.intersects(ignore = [collider_entity]).entity.color=color.clear collider_entity.intersects(ignore=[collider_entity]).entity.color=color.clear collider_entity.intersects(ignore=[collider_entity]).entity.collider = None return collider_entity.position except:pass destructionPos=None TextureList=["ursina-tutorials-main/assets/grass","ursina-tutorials-main/assets/sandMinecraft.jfif"] textureNumber=0 x1=0 z1=0 def input(key): global TextureList,textureNumber global Entity1, destructionPos, x1,z1 x1=0 z1=0 amp = 6 freq = 24 position_x = player.x position_z = player.z if key == "w" or key == "w hold" or key == "s" or key == "s hold" or key == "a" or key == "a hold" or key == "d" or key == "d hold": x1 = abs(position_x - abs(player.x)) if player.x > position_x else -abs(position_x - abs(player.x)) z1 = abs(position_z - abs(player.z)) if player.z > position_z else -abs(position_x - abs(player.x)) for i in range(len(shells)): x = shells[i].x = floor((i / shellWidth) + player.x - 0.5 * shellWidth) z = shells[i].z = floor((i % shellWidth) + player.z - 0.5 * shellWidth) y = shells[i].y = floor(noise([x / freq, z / freq]) * amp) if key=="left mouse down" and shells[i].hovered and mouse.world_point: Entity1=(round(mouse.world_point.x), ceil(mouse.world_point.y)-1, round(mouse.world_point.z)) if key=="right mouse down" and mouse.world_point: PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1), position=(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock)) if key=="g": textureNumber+=1 if Entity1!=None: if destructionPos!=None: if distance(Entity(position=(destructionPos)),Entity(position=(Entity1)))>=1: "" #xee,yee,zee=destructionPos #xeee,yeee,zeee=Entity1 #entity=Entity(model='cube',collider='box',texture=TextureList[textureNumber%2],color=color.white,position=((abs(xee - abs(xeee)) if xeee > xee else -abs(xee - abs(xeee))),yeee,(abs(zee - abs(zeee)) if zeee > zee else -abs(zee - abs(zeee))))) #print(entity.position) myDestructionList=[] def update(): global Entity1,x1,z1,destructionPos if player.y<-100: respawn() try: x000,y000,z000=Entity1 myDestructionList.append(destructionPos) if player.x != x1 or player.z != z1: if (myDestructionList.__len__()+1)>1: destructionPos = destruction(position=(x1,y000,z1)) destructionPos=destruction(destructionPos)-Vec3(sqrt(player.x)-destructionPos.x,sqrt(player.y)-destructionPos.y,sqrt(player.z)-destructionPos.z) else: "" print(destructionPos) except:pass app.run() A: I tried this code: from ursina import * from ursina.prefabs.first_person_controller import FirstPersonController 
from random import * from perlin_noise import * #import pyautogui app = Ursina() player = FirstPersonController() Sky(color=color.azure,texture=None) shells = [] shellWidth = 12 noise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000)) for i in range(shellWidth*shellWidth): '''def TextureEntity(): ent.texture="ursina-tutorials-main/assets/TextureLess.png"''' ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box')#, on_click=TextureEntity) shells.append(ent) def respawn(): player.y=5 player.gravity=0 TextureList=["ursina-tutorials-main/assets/grass","ursina-tutorials-main/assets/sandMinecraft.jfif"] textureNumber=0 def input(key): global TextureList,textureNumber if key=="right mouse down" and mouse.world_point: PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1), position=(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock)) if key=="g": textureNumber+=1 destroylist=[] amp = 6 freq = 24 def update(): global amp,freq,destroylist for i in range(len(shells)): longString=f"{floor((i / shellWidth) + player.x - 0.5 * shellWidth)},{floor((i % shellWidth) + player.z - 0.5 * shellWidth)}, {floor(noise([floor((i / shellWidth) + player.x - 0.5 * shellWidth) / freq,floor((i % shellWidth) + player.z - 0.5 * shellWidth) / freq]) * amp)}" position = shells[i].position = Vec3(floor((i / shellWidth) + player.x - 0.5 * shellWidth),floor((i % shellWidth) + player.z - 0.5 * shellWidth),floor(noise([floor((i / shellWidth) + player.x - 0.5 * shellWidth) / freq, floor((i % shellWidth) + player.z - 0.5 * shellWidth) / freq]) * amp))\ if not longString in destroylist else None if held_keys["left mouse"] and shells[i].hovered: destroylist.append(longString) shells[i].position=None print(destroylist) app.run() But it only destroys when I'm walking, and it doesn't generate terrain anymore? A: That works! 
from ursina import * from ursina.prefabs.first_person_controller import FirstPersonController from random import * from perlin_noise import * #import pyautogui app = Ursina() player = FirstPersonController() Sky(color=color.azure,texture=None) shells = [] shellWidth = 12 noise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000)) for i in range(shellWidth*shellWidth): '''def TextureEntity(): ent.texture="ursina-tutorials-main/assets/TextureLess.png"''' ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box')#, on_click=TextureEntity) shells.append(ent) def respawn(): player.y=5 player.gravity=0 TextureList=["ursina-tutorials-main/assets/grass","ursina-tutorials-main/assets/sandMinecraft.jfif"] textureNumber=0 def input(key): global TextureList,textureNumber if key=="right mouse down" and mouse.world_point: PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1), position=(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock)) if key=="g": textureNumber+=1 destroylist=[] amp = 6 freq = 24 def update(): global amp,freq,destroylist for i in range(len(shells)): x = shells[i].x = floor((i / shellWidth) + player.x - 0.5 * shellWidth) z = shells[i].z = floor((i % shellWidth) + player.z - 0.5 * shellWidth) y = shells[i].y = floor(noise([x / freq, z / freq]) * amp) shellsPosition=shells[i].position if shells[i].position in destroylist: shells[i].x=3000 if held_keys["left mouse"] and shells[i].hovered: destroylist.append(shellsPosition) shells[i].position=None print(destroylist) app.run()
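A compact restatement of the bookkeeping pattern from the final answer, as a hedged sketch rather than a drop-in: height_at is an assumed helper wrapping the PerlinNoise object, destroyed positions live in a set of tuples (O(1) lookup, unlike scanning a list of Vec3s), and visible/collider toggling replaces the x=3000 trick.

from math import floor

destroyed = set()   # (x, y, z) tuples of blocks the player removed
shellWidth = 12

def block_position(i, px, pz, height_at, amp=6, freq=24):
    # Same layout math as the question's update() loop.
    x = floor((i / shellWidth) + px - 0.5 * shellWidth)
    z = floor((i % shellWidth) + pz - 0.5 * shellWidth)
    y = floor(height_at(x / freq, z / freq) * amp)
    return (x, y, z)

def refresh_shells(shells, px, pz, height_at):
    for i, shell in enumerate(shells):
        pos = block_position(i, px, pz, height_at)
        if pos in destroyed:
            shell.visible = False    # leave a real hole...
            shell.collider = None    # ...that the player can fall into
        else:
            shell.visible = True
            shell.collider = 'box'
            shell.position = pos

def destroy_block(shell):
    destroyed.add(tuple(shell.position))
    shell.visible = False
    shell.collider = None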
Minecraft Clone Bug - Ursina Engine
I don't know why my minecraft clone destroys blocks of the ground, where I don't want. Here's my code: from ursina import * from ursina.prefabs.first_person_controller import FirstPersonController from random import * from perlin_noise import * app = Ursina() player = FirstPersonController() Sky(color=color.azure,texture=None) amp = 6 freq = 24 shells = [] shellWidth = 12 noise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000)) for i in range(shellWidth*shellWidth): ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box') shells.append(ent) def respawn(): player.y=5 player.gravity=0 Entity1=None def destruction(position: Vec3): try: collider_entity = Entity( model="cube", collider="box", visible=False, scale=Vec3(0.5, 0.5, 0.5), position=position ) collider_entity.intersects(ignore=[collider_entity]).entity.color=color.clear collider_entity.intersects(ignore=[collider_entity]).entity.collider = None return collider_entity.position except:pass destructionPos=None TextureList=["ursina-tutorials-main/assets/grass","ursina-tutorials-main/assets/sandMinecraft.jfif"] textureNumber=0 x1=0 z1=0 def input(key): global TextureList,textureNumber global Entity1, destructionPos, x1,z1 x1=0 z1=0 amp = 6 freq = 24 position_x = player.x position_z = player.z if key == "w" or key == "w hold" or key == "s" or key == "s hold" or key == "a" or key == "a hold" or key == "d" or key == "d hold": x1 = abs(position_x - abs(player.x)) if player.x > position_x else -abs(position_x - abs(player.x)) z1 = abs(position_z - abs(player.z)) if player.z > position_z else -abs(position_x - abs(player.x)) for i in range(len(shells)): x = shells[i].x = floor((i / shellWidth) + player.x - 0.5 * shellWidth) z = shells[i].z = floor((i % shellWidth) + player.z - 0.5 * shellWidth) y = shells[i].y = floor(noise([x / freq, z / freq]) * amp) if key=="left mouse down" and shells[i].hovered and mouse.world_point: Entity1=(round(mouse.world_point.x), ceil(mouse.world_point.y)-1, round(mouse.world_point.z)) if key=="right mouse down" and mouse.world_point: PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1), position=(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock)) if key=="g": textureNumber+=1 if Entity1!=None: if destructionPos!=None: if distance(Entity(position=(destructionPos)),Entity(position=(Entity1)))>=1: "" myDestructionList=[] def update(): global Entity1,x1,z1,destructionPos if player.y<-100: respawn() try: x000,y000,z000=Entity1 myDestructionList.append(destructionPos) if player.x != x1 or player.z != z1: if (myDestructionList.__len__()+1)>3: destructionPos = destruction(position=(x1,y000,z1)) destructionPos=destruction(destructionPos) else: "" print(destructionPos) except:pass app.run() When I clicked on the blocks, they destroyed behind me. And when I was on the place, where the block should get destroyed, I fell down.
[ "I have to do this, but the hole isn't still displayed yet.\nfrom ursina import *\nfrom ursina.prefabs.first_person_controller import FirstPersonController\nfrom random import *\nfrom perlin_noise import *\n#import pyautogui\napp = Ursina()\nplayer = FirstPersonController()\nSky(color=color.azure,texture=None)\namp = 6\nfreq = 24\nshells = []\nshellWidth = 12\nnoise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000))\nfor i in range(shellWidth*shellWidth):\n '''def TextureEntity():\n ent.texture=\"ursina-tutorials-main/assets/TextureLess.png\"'''\n ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box')#, on_click=TextureEntity)\n shells.append(ent)\ndef respawn():\n player.y=5\n player.gravity=0\nEntity1=None\n\n\ndef destruction(position: Vec3):\n try:\n collider_entity = Entity(\n model=\"cube\",\n collider=\"box\",\n visible=False,\n scale=Vec3(0.5, 0.5, 0.5),\n position=position\n )\n #collider_entity.intersects(ignore = [collider_entity]).entity.color=color.clear\n collider_entity.intersects(ignore=[collider_entity]).entity.color=color.clear\n collider_entity.intersects(ignore=[collider_entity]).entity.collider = None\n return collider_entity.position\n except:pass\n\ndestructionPos=None\n\nTextureList=[\"ursina-tutorials-main/assets/grass\",\"ursina-tutorials-main/assets/sandMinecraft.jfif\"]\ntextureNumber=0\nx1=0\nz1=0\ndef input(key):\n global TextureList,textureNumber\n global Entity1, destructionPos, x1,z1\n x1=0\n z1=0\n amp = 6\n freq = 24\n position_x = player.x\n position_z = player.z\n if key == \"w\" or key == \"w hold\" or key == \"s\" or key == \"s hold\" or key == \"a\" or key == \"a hold\" or key == \"d\" or key == \"d hold\":\n x1 = abs(position_x - abs(player.x)) if player.x > position_x else -abs(position_x - abs(player.x))\n z1 = abs(position_z - abs(player.z)) if player.z > position_z else -abs(position_x - abs(player.x))\n\n for i in range(len(shells)):\n x = shells[i].x = floor((i / shellWidth) + player.x - 0.5 * shellWidth)\n z = shells[i].z = floor((i % shellWidth) + player.z - 0.5 * shellWidth)\n y = shells[i].y = floor(noise([x / freq, z / freq]) * amp)\n if key==\"left mouse down\" and shells[i].hovered and mouse.world_point:\n Entity1=(round(mouse.world_point.x), ceil(mouse.world_point.y)-1, round(mouse.world_point.z))\n if key==\"right mouse down\" and mouse.world_point:\n PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1),\n position=(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock))\n if key==\"g\":\n textureNumber+=1\n if Entity1!=None:\n if destructionPos!=None:\n if distance(Entity(position=(destructionPos)),Entity(position=(Entity1)))>=1:\n \"\"\n #xee,yee,zee=destructionPos\n #xeee,yeee,zeee=Entity1\n #entity=Entity(model='cube',collider='box',texture=TextureList[textureNumber%2],color=color.white,position=((abs(xee - abs(xeee)) if xeee > xee else -abs(xee - abs(xeee))),yeee,(abs(zee - abs(zeee)) if zeee > zee else -abs(zee - abs(zeee)))))\n #print(entity.position)\nmyDestructionList=[]\ndef update():\n global Entity1,x1,z1,destructionPos\n if player.y<-100:\n respawn()\n try:\n x000,y000,z000=Entity1\n myDestructionList.append(destructionPos)\n if player.x != x1 or player.z != z1:\n if (myDestructionList.__len__()+1)>1:\n destructionPos = destruction(position=(x1,y000,z1))\n 
destructionPos=destruction(destructionPos)-Vec3(sqrt(player.x)-destructionPos.x,sqrt(player.y)-destructionPos.y,sqrt(player.z)-destructionPos.z)\n else:\n \"\"\n print(destructionPos)\n except:pass\napp.run()\n\n", "I tried this code:\nfrom ursina import *\nfrom ursina.prefabs.first_person_controller import FirstPersonController\nfrom random import *\nfrom perlin_noise import *\n#import pyautogui\napp = Ursina()\nplayer = FirstPersonController()\nSky(color=color.azure,texture=None)\nshells = []\nshellWidth = 12\nnoise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000))\nfor i in range(shellWidth*shellWidth):\n '''def TextureEntity():\n ent.texture=\"ursina-tutorials-main/assets/TextureLess.png\"'''\n ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box')#, on_click=TextureEntity)\n shells.append(ent)\ndef respawn():\n player.y=5\n player.gravity=0\n\nTextureList=[\"ursina-tutorials-main/assets/grass\",\"ursina-tutorials-main/assets/sandMinecraft.jfif\"]\ntextureNumber=0\ndef input(key):\n global TextureList,textureNumber\n if key==\"right mouse down\" and mouse.world_point:\n PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1),\n position=(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock))\n if key==\"g\":\n textureNumber+=1\ndestroylist=[]\namp = 6\nfreq = 24\ndef update():\n global amp,freq,destroylist\n for i in range(len(shells)):\n longString=f\"{floor((i / shellWidth) + player.x - 0.5 * shellWidth)},{floor((i % shellWidth) + player.z - 0.5 * shellWidth)}, {floor(noise([floor((i / shellWidth) + player.x - 0.5 * shellWidth) / freq,floor((i % shellWidth) + player.z - 0.5 * shellWidth) / freq]) * amp)}\"\n position = shells[i].position = Vec3(floor((i / shellWidth) + player.x - 0.5 * shellWidth),floor((i % shellWidth) + player.z - 0.5 * shellWidth),floor(noise([floor((i / shellWidth) + player.x - 0.5 * shellWidth) / freq, floor((i % shellWidth) + player.z - 0.5 * shellWidth) / freq]) * amp))\\\n if not longString in destroylist else None\n if held_keys[\"left mouse\"] and shells[i].hovered:\n destroylist.append(longString)\n shells[i].position=None\n print(destroylist)\napp.run()\n\nBut it only destroys when I'm walking, and it doesn't generate terrain anymore?\n", "That works!\nfrom ursina import *\nfrom ursina.prefabs.first_person_controller import FirstPersonController\nfrom random import *\nfrom perlin_noise import *\n#import pyautogui\napp = Ursina()\nplayer = FirstPersonController()\nSky(color=color.azure,texture=None)\nshells = []\nshellWidth = 12\nnoise = PerlinNoise(octaves=2,seed=randrange(1, 100000000000000000000000000000000000))\nfor i in range(shellWidth*shellWidth):\n '''def TextureEntity():\n ent.texture=\"ursina-tutorials-main/assets/TextureLess.png\"'''\n ent = Entity(model='cube', texture='ursina-tutorials-main/assets/grass', collider='box')#, on_click=TextureEntity)\n shells.append(ent)\ndef respawn():\n player.y=5\n player.gravity=0\n\nTextureList=[\"ursina-tutorials-main/assets/grass\",\"ursina-tutorials-main/assets/sandMinecraft.jfif\"]\ntextureNumber=0\ndef input(key):\n global TextureList,textureNumber\n if key==\"right mouse down\" and mouse.world_point:\n PlacedBlock = Entity(model='cube', texture=TextureList[textureNumber%2], color=color.white, collider='box', scale=(1, 1, 1),\n position=(round(mouse.world_point.x), ceil(mouse.world_point.y), 
round(mouse.world_point.z)),on_click=lambda:destroy(PlacedBlock))\n if key==\"g\":\n textureNumber+=1\ndestroylist=[]\namp = 6\nfreq = 24\ndef update():\n global amp,freq,destroylist\n for i in range(len(shells)):\n x = shells[i].x = floor((i / shellWidth) + player.x - 0.5 * shellWidth)\n z = shells[i].z = floor((i % shellWidth) + player.z - 0.5 * shellWidth)\n y = shells[i].y = floor(noise([x / freq, z / freq]) * amp)\n shellsPosition=shells[i].position\n if shells[i].position in destroylist:\n shells[i].x=3000\n if held_keys[\"left mouse\"] and shells[i].hovered:\n destroylist.append(shellsPosition)\n shells[i].position=None\n print(destroylist)\napp.run()\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "ursina" ]
stackoverflow_0074307820_python_ursina.txt
Q: How to manipulate a python list based on the following restrictions? import numpy as np m1 = np.arange(1,10).reshape(3,3) diagonal = np.diag(m1) antdiagonal =[] for j in range(0,3): x = m1[j][3-1-j] antdiagonal.append(x) def common_data(list1, list2): result = False for x in list1: for y in list2: if x == y: result = True return result if(common_data(list(diagonal), list(antdiagonal))): print("hitter") else: print("Non-hitter") In the above code snippet, the matrix (m1) should be considered a “hitter” if any integer repeats in both the principal diagonal and the anti-diagonal of m1; otherwise it should print “non hitter”. The principal diagonal of the above matrix (m1) is {1,5,9} and the principal antidiagonal will be {3,5,7}, and for the given matrix (m1) the output will be “non hitter”. Please modify the above code to get the result. I have tried the above code snippet but am missing the logic for displaying "hitter" or "non-hitter" A: pdiagonal = [] pantidiagonal = [] # for Principal Diagonal def getPrincipalDiagonal(mat, n): for i in range(n): for j in range(n): if (i == j): pdiagonal.append(mat[i][j]) # for Anti-Diagonal def getSecondaryDiagonal(mat, n): for i in range(n): for j in range(n): if ((i + j) == (n - 1)): pantidiagonal.append(mat[i][j]) #for "non-hitter" matrix n = 3 a = [[1,2,3],[4,5,6],[7,8,9]] #for "hitter" matrix # n = 4 # a = [[2, 2, 3, 4 ],[5, 6, 7, 8 ],[1, 2, 3, 4 ],[6, 6, 7, 8 ]] getPrincipalDiagonal(a, n) getSecondaryDiagonal(a, n) print("Principal Diagonal : ", pdiagonal,"Principal Anti-Diagonal", pantidiagonal) if len(set(pdiagonal).intersection(pantidiagonal)) > 1: print("Result: hitter") else: print("Result: non-hitter") NOTE: Based on your description, here I have termed a matrix a "hitter" if more than one element is common, and "non-hitter" otherwise.
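Assuming the answer's threshold (a matrix is a "hitter" only when more than one value is shared between the diagonals), the whole check also collapses to a few numpy/set operations:

import numpy as np

m1 = np.arange(1, 10).reshape(3, 3)
diag = np.diag(m1)                # principal diagonal: [1 5 9]
anti = np.diag(np.fliplr(m1))     # anti-diagonal:      [3 5 7]

common = set(diag) & set(anti)    # shared values, here {5}
print("hitter" if len(common) > 1 else "non-hitter")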
How to manipulate a python list based on the following restrictions?
import numpy as np m1 = np.arange(1,10).reshape(3,3) diagonal = np.diag(m1) antdiagonal =[] for j in range(0,3): x = m1[j][3-1-j] antdiagonal.append(x) def common_data(list1, list2): result = False for x in list1: for y in list2: if x == y: result = True return result if(common_data(list(diagonal), list(antdiagonal))): print("hitter") else: print("Non-hitter") In the above code snippet , the Matrix (m1) will be considered as “hitter” if any integer is repeating in both the principal diagonal and the anti-diagonal of m1. Otherwise should print “non hitter”. The principal diagonal of the above matrix(m1) is {1,5,9} and the principle antidiagonal will be {3,5,7}. and For the given matrix(m1) the output will be “non hitter”. Please modify the above code to get the result. i have tried with above code snippet but missing the logic for displaying "hitter" or "non-hitter"
[ "pdiagonal = []\npantidiagonal = []\n\n# for Principal Diagonal\ndef getPrincipalDiagonal(mat, n):\n for i in range(n):\n for j in range(n):\n if (i == j):\n pdiagonal.append(mat[i][j])\n\n# for Anti-Diagonal\ndef getSecondaryDiagonal(mat, n):\n for i in range(n):\n for j in range(n):\n if ((i + j) == (n - 1)):\n pantidiagonal.append(mat[i][j])\n\n#for \"non-hitter\" matrix\nn = 3\na = [[1,2,3],[4,5,6],[7,8,9]]\n\n#for \"hitter\" matrix\n# n = 4\n# a = [[2, 2, 3, 4 ],[5, 6, 7, 8 ],[1, 2, 3, 4 ],[6, 6, 7, 8 ]]\n\ngetPrincipalDiagonal(a, n)\ngetSecondaryDiagonal(a, n)\nprint(\"Principal Digonal : \", pdiagonal,\"Principal Anti-Diagonal\", pantidiagonal)\n\n\nif len(set(pdiagonal).intersection(pantidiagonal)) > 1:\n print(\"Result: hitter\")\nelse: \n print(\"Result: non-hitter\")\n\nNOTE: Based on your description, here i have termed a matrix as \"hitter\" if more than one element will be common, and \"non-hitter\" otherwise.\n" ]
[ 0 ]
[]
[]
[ "data_science", "python" ]
stackoverflow_0074464301_data_science_python.txt
Q: Python: get value from dictionary when key is a list I have a dictionary where the key is a list cfn = {('A', 'B'): 1, ('A','C'): 2 , ('A', 'D'): 3} genes = ['A', 'C', 'D', 'E'] I am trying to get a value from the dictionary when both genes of a key pair appear together in the list. My attempt is as follows; however, I get TypeError: unhashable type: 'list' def create_networks(genes, cfn): network = list() for i in range(0, len(genes)): for j in range(1, len(genes)): edge = cfn.get([genes[i], genes[j]],0) if edge > 0: network.append([genes[i], genes[j], edge]) desired output: network = [['A','C', 2], ['A', 'D', 3]] solution based on comments and answer below: edge = cfn.get((genes[i], genes[j]),0) A: Your keys in cfn are of type tuple, as a key needs to be a hashable type. Hashable types are immutable data types such as: int string tuple frozenset because they can't be changed or mutated; a mutable key could change, and then the value stored under it could no longer be found. So in your case you just need to change these [] into this (): def create_networks(genes, cfn): network = list() for i in range(0, len(genes)): for j in range(1, len(genes)): # Use () to create a tuple edge = cfn.get((genes[i], genes[j]),0) if edge > 0: network.append([genes[i], genes[j], edge]) return network That way you don't get the error and get your expected result of >>> create_networks(genes, cfn) [['A', 'C', 2], ['A', 'D', 3]] A: You can do something like so: https://onecompiler.com/python2/3yp8xepzp cfn = {'AB': 1, 'AC': 2 , 'AD': 3} genes = ['A', 'C', 'D', 'E'] def create_networks(genes, cfn): network = [] for i in range(0, len(genes)): for j in range(1, len(genes)): keyy = genes[i]+''+genes[j] if keyy in cfn.keys(): edge2 = cfn[genes[i]+''+genes[j]] if edge2 > 0: network.append([genes[i], genes[j], edge2]) return network print(create_networks(genes,cfn))
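Once the keys are tuples, itertools.combinations removes the index bookkeeping entirely. A sketch; it assumes each pair is stored in the dictionary in the same order it appears in genes (for order-independent lookup you could key the dictionary by frozenset instead):

from itertools import combinations

cfn = {('A', 'B'): 1, ('A', 'C'): 2, ('A', 'D'): 3}
genes = ['A', 'C', 'D', 'E']

# combinations() yields each unordered pair of genes exactly once,
# so the nested range() loops and accidental (g, g) pairs disappear.
network = [[a, b, cfn[(a, b)]]
           for a, b in combinations(genes, 2) if (a, b) in cfn]
print(network)   # [['A', 'C', 2], ['A', 'D', 3]]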
Python: get value from dictionary when key is a list
I have a dictionary where the key is a list cfn = {('A', 'B'): 1, ('A','C'): 2 , ('A', 'D'): 3} genes = ['A', 'C', 'D', 'E'] I am trying to get a value from the dictionary if the gene pairs in the key exist in a list together. My attempt is as follows, however I get TypeError: unhashable type: 'list' def create_networks(genes, cfn): network = list() for i in range(0, len(genes)): for j in range(1, len(genes)): edge = cfn.get([genes[i], genes[j]],0) if edge > 0: network.append([genes[i], genes[j], edge]) desired output: network = [['A','C', 2], ['A', 'D', 3]] solution based on comments and answer below: edge = cfn.get((genes[i], genes[j]),0)
[ "Your keys in cfn are of type tuple as a key needs to be a hashable type.\nHashable types are immutable data types such as:\n\nint\nstring\ntuple\nfrozenset\n\nas they can't be changed or mutated. Otherwise you can't access the value stored at that key.\nSo in your case you just need to change these [] into this ():\ndef create_networks(genes, cfn):\n network = list()\n for i in range(0, len(genes)):\n for j in range(1, len(genes)):\n # Use () to create a tuple\n edge = cfn.get((genes[i], genes[j]),0)\n if edge > 0:\n network.append([genes[i], genes[j], edge])\n return network\n\nThat way you don't get the error and get your expected result of\n>>> create_networks(genes, cfn)\n[['A', 'C', 2], ['A', 'D', 3]]\n\n", "You can do something like so:\nhttps://onecompiler.com/python2/3yp8xepzp\ncfn = {'AB': 1, 'AC': 2 , 'AD': 3}\ngenes = ['A', 'C', 'D', 'E']\n\ndef create_networks(genes, cfn):\n network = []\n for i in range(0, len(genes)):\n for j in range(1, len(genes)):\n keyy = genes[i]+''+genes[j]\n if keyy in cfn.keys():\n edge2 = cfn[genes[i]+''+genes[j]]\n if edge2 > 0:\n network.append([genes[i], genes[j], edge2])\n return network\n\nprint(create_networks(genes,cfn))\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074479316_dictionary_list_python.txt
Q: Error in simple Caesar Cypher program - Python v3 In the below code, an unexpected output is produced. The desired result is as follows: Enter a plaintext message and then the rotation key. The plaintext is then converted to cypher text and saved to a file. For example, a user enters 'Hello!' and a key of 13. The output should give 'Uryyb!' and write it to a file. Somewhere within this small program there's an error but I'm struggling to find it. Can anyone identify it? # Caesar cypher function def rot(text, key): # Iterate through each character in the message. for char in text: # Set cypher_text to an empty string to add to later cypher_text = '' # Check if the character is a letter (A-Z/a-z). if char.isalpha(): # Get the Unicode number of the character. num = ord(char) # If the final number is greater than 'z'/122.. if (num + key) > 122: # If we go too far, work out how many spaces past # 'a'/97 it should be using the proper key. x = (num + key) - 122 # Add the chr version of x more characters past # 'a'/97, take 1 to account for the 'a' position. # This adds a string character on to the cypher_text # variable. cypher_text += chr(x + ord('a') - 1) # If the rotated value doesn't go past 'z'/122 elif num + key <= 122: # Use the key to add to the decimal version of the # character, add the chr version of the value to the # cypher text. cypher_text += chr(num + key) # Else, if the character is not a letter, simply add it as is. # This way we don't change symbols or spaces. else: cypher_text += char # Return the final result of the processed characters for use. return cypher_text # Ask the user for their message plain_input = input('Input the text you want to encode: ') # Ask the user for the rotation key rot_key = int(input('Input the key you want to use from 0 to 25: ')) # Secret message is the result of the rot function secret_message = rot(plain_input, rot_key) # Print out message for feedback print('Writing the following cypher text to file:', secret_message) # Write the message to file with open('TestFile.txt', 'a+') as file: file.write(secret_message) I've attempted changing the order of functions within the code, but to no avail. A: You are overwriting cypher_text in each iteration of the loop. # Set cypher_text to an empty string to add to later cypher_text = '' You should move this line before the loop. # Set cypher_text to an empty string to add to later cypher_text = '' # Iterate through each character in the message. for char in text: ...
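Beyond the accumulator fix, the wrap-around arithmetic can be replaced with modulo 26, which also handles uppercase correctly (the original's num + key <= 122 branch silently turns capitals past 'Z' into lowercase letters or punctuation). A sketch:

def rot(text, key):
    # Caesar-shift ASCII letters, preserving case; leave everything else alone.
    result = ''
    for char in text:
        if char.isascii() and char.isalpha():
            base = ord('A') if char.isupper() else ord('a')
            result += chr(base + (ord(char) - base + key) % 26)
        else:
            result += char
    return result

print(rot('Hello!', 13))   # Uryyb!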
Error in simple Caesar Cypher program - Python v3
In the below code, an unexpected output is produced. The desired result is as follows: Enter a plaintext message and then the rotation key. The plaintext is then converted to cypher text and saved to a file. For example, a user enters 'Hello!' and a key of 13. The output should give 'Uryyb!' and write it to a file. Somewhere within this small program there's an error but I'm struggling to find it. Can anyone identify it? # Caesar cypher function def rot(text, key): # Iterate through each character in the message. for char in text: # Set cypher_text to an empty string to add to later cypher_text = '' # Check if the character is a letter (A-Z/a-z). if char.isalpha(): # Get the Unicode number from of the character. num = ord(char) # If the final number is greater than 'z'/122.. if (num + key) > 122: # If we go too far, work out how many spaces passed # 'a'/97 it should be using the proper key. x = (num + key) - 122 # Add the chr version of x more characters passed # 'a'/97, take 1 to account for the 'a' position. # This adds a string character on to the cypher_text # variable. cypher_text += chr(x + ord('a') - 1) # If the rotated value doesn't go passed 'z'/122 elif num + key <= 122: # Use the key to add to the decimal version of the # character, add the chr version of the value to the # cypher text. cypher_text += chr(num + key) # Else, if the character is not a letter, simply add it as is. # This way we don't change symbols or spaces. else: cypher_text += char # Return the final result of the processed characters for use. return cypher_text # Ask the user for their message plain_input = input('Input the text you want to encode: ') # Aks the user for the rotation key rot_key = int(input('Input the key you want to use from 0 to 25: ')) # Secret message is the result of the rot function secret_message = rot(plain_input, rot_key) # Print out message for feedback print('Writing the following cypher text to file:', secret_message) # Write the message to file with open('TestFile.txt', 'a+') as file: file.write(secret_message) I've attempted changing the order of functions within the code, but to no avail.
[ "You are overwriting cypher_text in each iteration of the loop.\n# Set cypher_text to an empty string to add to later\ncypher_text = ''\n\nYou should move this line before the loop.\n# Set cypher_text to an empty string to add to later\ncypher_text = ''\n# Iterate through each character in the message.\nfor char in text:\n ...\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074477270_python.txt
Q: Django - Python: 'int' object has no attribute 'get' I am setting up a Django project to allow tickets to be sold for various theatre dates with a price for adults and a price for children. I have created a models.py and ticket_details.html. I am unfortunately receiving the following error: 'int' object has no attribute 'get' and I am at a loss as to how to get the adult and child prices for the total calculations to display in my bag.html. The problem is with my contexts.py and views.py files. I have tried the 'get' option but it is not working. Can someone advise? models.py: class Show(models.Model): '''Programmatic Name''' name = models.CharField(max_length=254) friendly_name = models.CharField(max_length=254, null=True, blank=True) poster = models.ImageField(null=True, blank=True) def __str__(self): return self.name def get_friendly_name(self): return self.friendly_name class Ticket(models.Model): show = models.ForeignKey('show', null=True, blank=True, on_delete=models.SET_NULL) name = models.CharField(max_length=254) event_date = models.CharField(max_length=254) description = models.TextField(null=True, blank=True) event_details = models.TextField(null=True, blank=True) place = models.CharField(max_length=254) location = models.CharField(max_length=254) position = models.CharField(max_length=254) image = models.ImageField(null=True, blank=True) price_details = models.TextField(null=True, blank=True) date = models.DateTimeField(null=True, blank=True) adult_price = models.DecimalField(max_digits=8, decimal_places=2, null=True, blank=True) child_price = models.DecimalField(max_digits=8, decimal_places=2, null=True, blank=True) def __str__(self): return self.name ticket_detail.html: <form class="form" action="{% url 'add_to_bag' ticket.id %}" method="POST"> {% csrf_token %} <div class="form-row"> <div class="form-group w-50"> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="decrement-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-minus"></i> </span> </button> </div> <input class="form-control qty_input" type="number" name="adult_quantity" value="1" min="1" max="99" data-item_id="{{ ticket.id }}" id="adult_ticket"> <div class="input-group-append"> <button class="increment-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="increment-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-plus"></i> </span> </button> </div> </div> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="decrement-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-minus"></i> </span> </button> </div> <input class="form-control qty_input" type="number" name="child_quantity" value="1" min="1" max="99" data-item_id="{{ ticket.id }}" id="child_ticket"> <div class="input-group-append"> <button class="increment-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="increment-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-plus"></i> </span> </button> </div> </div> </div> <div class="col-12 mb-3"> <input type="submit" class="btn btn-default btn-lg text-uppercase pl-5 pr-5" value="Add to Bag" /> </div> <input type="hidden" name="redirect_url" value="{{ request.path }}" /> </div> </form> I have created an app for 'bag' as well as a views.py and contexts.py accordingly.
bag.html <div class="col-12"> {% if bag_items %} <div class="table-responsive rounded"> <table class="table content-p"> <tr> <th scope="col">Ticket Info</th> <th scope="col">Date</th> <th scope="col">Price</th> <th scope="col">Quantity</th> <th scope="col">Sub Total</th> </tr> {% for item in bag_items %} {% if item.adult_ticket %} <tr> <td>{{ item.ticket.name }}</td> <td>{{ item.ticket.date}}</td> <td>{{ item.ticket.adult_price }}</td> <td> <form class="form update-form" method="POST" action="{% url 'adjust_bag' item.item_id %}"> {% csrf_token %} <div class="form-group"> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="decrement-qty_{{ item.item_id }}"> <span> <i class="fas fa-minus fa-sm"></i> </span> </button> </div> <input class="form-control form-control-sm qty_input" type="number" id="adult-ticket" name="adultquantity" value="{{ item.quantity }}" min="1" max="99" data-item_id="{{ item.item_id }}" id="id_qty_{{ item.item_id }}"> <div class="input-group-append"> <button class="increment-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="increment-qty_{{ item.item_id }}"> <span> <i class="fas fa-plus fa-sm"></i> </span> </button> </div> </div> </div> </form> <a class="update-link text-info"><small>Update</small></a> <a class="remove-item text-danger float-right" id="remove_{{ item.item_id }}" data-product_size="{{ item.size }}"><small>Remove</small></a> </td> <td>${{ item.ticket.price|calc_subtotal:item.adult_quantity }}</td> </tr> {% elif item.child_ticket %} <tr> <td>{{ item.ticket.name }}</td> <td>{{ item.ticket.date}}</td> <td>{{ item.ticket.child_price }}</td> <td> <form class="form update-form" method="POST" action="{% url 'adjust_bag' item.item_id %}"> {% csrf_token %} <div class="form-group"> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="decrement-qty_{{ item.item_id }}"> <span> <i class="fas fa-minus fa-sm"></i> </span> </button> </div> <input class="form-control form-control-sm qty_input" type="number" id="child-ticket" name="child_quantity" value="{{ item.quantity }}" min="1" max="99" data-item_id="{{ item.item_id }}" id="id_qty_{{ item.item_id }}"> <div class="input-group-append"> <button class="increment-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="increment-qty_{{ item.item_id }}"> <span> <i class="fas fa-plus fa-sm"></i> </span> </button> </div> </div> </div> </form> <a class="update-link text-info"><small>Update</small></a> <a class="remove-item text-danger float-right" id="remove_{{ item.item_id }}" data-product_size="{{ item.size }}"><small>Remove</small></a> </td> <td>${{ item.ticket.price|calc_subtotal:item.child_quantity }}</td> </tr> {% endif %} {% endfor %} <tr> <td>Bag Total: €{{ grand_total|floatformat:2 }}</td> </tr> </table> </div> {% else %} <p>Your bag is empty</p> {% endif %} </div> views.py def add_to_bag(request, item_id): '''Submit form to this view including ticket id and quanity''' ''' Add a quantity of the specified tickets to the shopping bag''' child_quantity = int(request.POST.get('child_quantity')) adult_quantity = int(request.POST.get('adult_quantity')) redirect_url = request.POST.get('redirect_url') bag = request.session.get('bag', {}) '''Once in view get bag variable if exisits in session or create if doesnt''' '''Add to bag''' def add_quantity(quantity, item_id, bag): if 
quantity: if item_id in list(bag.keys()): bag[item_id] += quantity else: bag[item_id] = quantity if adult_quantity or child_quantity: if adult_quantity: add_quantity( adult_quantity, 'adult_quantity', item_id, bag,) if child_quantity: add_quantity( child_quantity, 'child_quantity', item_id, bag,) request.session['bag'] = bag return redirect(redirect_url) contexts.py from django.conf import settings from django.shortcuts import get_object_or_404 from tickets.models import Ticket def bag_contents(request): bag_items = [] '''Empty list for bag items to live in''' total = 0 ticket_count = 0 bag = request.session.get('bag', {}) for item_id, adult_quantity in bag.items(): if adult_quantity.get('adult_quantity'): ticket = get_object_or_404(Ticket, pk=item_id) total += adult_quantity.get('adult_quantity') * ticket.adult_price ticket_count += adult_quantity.get('adult_quantity') bag_items.append({ 'item_id': item_id, 'quantity': adult_quantity.get('adult_quantity'), 'ticket': ticket, 'adult_ticket': True, }) for item_id, child_quantity in bag.items(): if child_quantity.get('child_quantity'): ticket = get_object_or_404(Ticket, pk=item_id) total += child_quantity.get('child_quantity') * ticket.child_price ticket_count += child_quantity.get('child_quantity') bag_items.append({ 'item_id': item_id, 'quantity': child_quantity.get('child_quantity'), 'ticket': ticket, 'child_ticket': True, }) grand_total = total context = { 'bag_items': bag_items, 'total': total, 'ticket_count': ticket_count, 'grand_total': grand_total, } '''Make dictionary available to all templates across the enitire application''' return context A: The issue is how you are iterating in for item_id, adult_quantity in bag.items():. I see that bag is a dictionary, and I think that it's a dictionary like: { 'item_id': 1, 'quantity': 10, 'ticket': ticket, 'adult_ticket': True, } If this is correct, then why do you need to iterate through a bag in contexts.py? Would this not work: def bag_contents(request): bag_items = [] '''Empty list for bag items to live in''' total = 0 ticket_count = 0 bag = request.session.get('bag', {}) if 'item_id' in bag: item_id = bag.item_id else: item_id = None ticket = get_object_or_404(Ticket, pk=item_id) if bag.adult_ticket: total += bag.quantity * ticket.adult_price bag_items.append({ 'item_id': item_id, 'quantity': bag.quantity, 'ticket': ticket, 'adult_ticket': True, }) if bag.child_ticket: total += bag.quantity * ticket.child_price bag_items.append({ 'item_id': item_id, 'quantity': bag.quantity, 'ticket': ticket, 'child_ticket': True, }) ticket_count += bag.quantity grand_total = total context = { 'bag_items': bag_items, 'total': total, 'ticket_count': ticket_count, 'grand_total': grand_total, } '''Make dictionary available to all templates across the enitire application''' return context
Django - Python: 'int' object has no attribute 'get'
I am setting up a Django project to allow tickets to be sold for various theatre dates with a price for adults and a price for children. I have created a models.py and ticket_details.html. I am unfortunately receiving the following error: 'int' object has no attribute 'get' and I am at a loss to how I am to get the adult and child price for the total calculations to display in my bag.html. The problem is with my contexts.py and views.py files. I have tried the 'get' option but it is not working. Can someone advise? models.py: class Show(models.Model): '''Programmatic Name''' name = models.CharField(max_length=254) friendly_name = models.CharField(max_length=254, null=True, blank=True) poster = models.ImageField(null=True, blank=True) def __str__(self): return self.name def get_friendly_name(self): return self.friendly_name class Ticket(models.Model): show = models.ForeignKey('show', null=True, blank=True, on_delete=models.SET_NULL) name = models.CharField(max_length=254) event_date = models.CharField(max_length=254) description = models.TextField(null=True, blank=True) event_details = models.TextField(null=True, blank=True) place = models.CharField(max_length=254) location = models.CharField(max_length=254) position = models.CharField(max_length=254) image = models.ImageField(null=True, blank=True) price_details = models.TextField(null=True, blank=True) date = models.DateTimeField(null=True, blank=True) adult_price = models.DecimalField(max_digits=8, decimal_places=2, null=True, blank=True) child_price = models.DecimalField(max_digits=8, decimal_places=2, null=True, blank=True) def __str__(self): return self.name ticket_detail.html: <form class="form" action="{% url 'add_to_bag' ticket.id %}" method="POST"> {% csrf_token %} <div class="form-row"> <div class="form-group w-50"> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="decrement-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-minus"></i> </span> </button> </div> <input class="form-control qty_input" type="number" name="adult_quantity" value="1" min="1" max="99" data-item_id="{{ ticket.id }}" id="adult_ticket"> <div class="input-group-append"> <button class="increment-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="increment-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-plus"></i> </span> </button> </div> </div> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="decrement-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-minus"></i> </span> </button> </div> <input class="form-control qty_input" type="number" name="child_quantity" value="1" min="1" max="99" data-item_id="{{ ticket.id }}" id="child_ticket"> <div class="input-group-append"> <button class="increment-qty btn btn-black rounded-0" data-item_id="{{ ticket.id }}" id="increment-qty_{{ ticket.id }}"> <span class="icon"> <i class="fas fa-plus"></i> </span> </button> </div> </div> </div> <div class="col-12 mb-3"> <input type="submit" class="btn btn-default btn-lg text-uppercase pl-5 pr-5" value="Add to Bag" /> </div> <input type="hidden" name="redirect_url" value="{{ request.path }}" /> </div> </form> I have created an app for 'bag' as well as a views.py and contexts.py accordingly. 
bag.html <div class="col-12"> {% if bag_items %} <div class="table-responsive rounded"> <table class="table content-p"> <tr> <th scope="col">Ticket Info</th> <th scope="col">Date</th> <th scope="col">Price</th> <th scope="col">Quantity</th> <th scope="col">Sub Total</th> </tr> {% for item in bag_items %} {% if item.adult_ticket %} <tr> <td>{{ item.ticket.name }}</td> <td>{{ item.ticket.date}}</td> <td>{{ item.ticket.adult_price }}</td> <td> <form class="form update-form" method="POST" action="{% url 'adjust_bag' item.item_id %}"> {% csrf_token %} <div class="form-group"> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="decrement-qty_{{ item.item_id }}"> <span> <i class="fas fa-minus fa-sm"></i> </span> </button> </div> <input class="form-control form-control-sm qty_input" type="number" id="adult-ticket" name="adultquantity" value="{{ item.quantity }}" min="1" max="99" data-item_id="{{ item.item_id }}" id="id_qty_{{ item.item_id }}"> <div class="input-group-append"> <button class="increment-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="increment-qty_{{ item.item_id }}"> <span> <i class="fas fa-plus fa-sm"></i> </span> </button> </div> </div> </div> </form> <a class="update-link text-info"><small>Update</small></a> <a class="remove-item text-danger float-right" id="remove_{{ item.item_id }}" data-product_size="{{ item.size }}"><small>Remove</small></a> </td> <td>${{ item.ticket.price|calc_subtotal:item.adult_quantity }}</td> </tr> {% elif item.child_ticket %} <tr> <td>{{ item.ticket.name }}</td> <td>{{ item.ticket.date}}</td> <td>{{ item.ticket.child_price }}</td> <td> <form class="form update-form" method="POST" action="{% url 'adjust_bag' item.item_id %}"> {% csrf_token %} <div class="form-group"> <div class="input-group"> <div class="input-group-prepend"> <button class="decrement-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="decrement-qty_{{ item.item_id }}"> <span> <i class="fas fa-minus fa-sm"></i> </span> </button> </div> <input class="form-control form-control-sm qty_input" type="number" id="child-ticket" name="child_quantity" value="{{ item.quantity }}" min="1" max="99" data-item_id="{{ item.item_id }}" id="id_qty_{{ item.item_id }}"> <div class="input-group-append"> <button class="increment-qty btn btn-sm btn-black rounded-0" data-item_id="{{ item.item_id }}" id="increment-qty_{{ item.item_id }}"> <span> <i class="fas fa-plus fa-sm"></i> </span> </button> </div> </div> </div> </form> <a class="update-link text-info"><small>Update</small></a> <a class="remove-item text-danger float-right" id="remove_{{ item.item_id }}" data-product_size="{{ item.size }}"><small>Remove</small></a> </td> <td>${{ item.ticket.price|calc_subtotal:item.child_quantity }}</td> </tr> {% endif %} {% endfor %} <tr> <td>Bag Total: €{{ grand_total|floatformat:2 }}</td> </tr> </table> </div> {% else %} <p>Your bag is empty</p> {% endif %} </div> views.py def add_to_bag(request, item_id): '''Submit form to this view including ticket id and quanity''' ''' Add a quantity of the specified tickets to the shopping bag''' child_quantity = int(request.POST.get('child_quantity')) adult_quantity = int(request.POST.get('adult_quantity')) redirect_url = request.POST.get('redirect_url') bag = request.session.get('bag', {}) '''Once in view get bag variable if exisits in session or create if doesnt''' '''Add to bag''' def add_quantity(quantity, item_id, bag): if 
quantity: if item_id in list(bag.keys()): bag[item_id] += quantity else: bag[item_id] = quantity if adult_quantity or child_quantity: if adult_quantity: add_quantity( adult_quantity, 'adult_quantity', item_id, bag,) if child_quantity: add_quantity( child_quantity, 'child_quantity', item_id, bag,) request.session['bag'] = bag return redirect(redirect_url) contexts.py from django.conf import settings from django.shortcuts import get_object_or_404 from tickets.models import Ticket def bag_contents(request): bag_items = [] '''Empty list for bag items to live in''' total = 0 ticket_count = 0 bag = request.session.get('bag', {}) for item_id, adult_quantity in bag.items(): if adult_quantity.get('adult_quantity'): ticket = get_object_or_404(Ticket, pk=item_id) total += adult_quantity.get('adult_quantity') * ticket.adult_price ticket_count += adult_quantity.get('adult_quantity') bag_items.append({ 'item_id': item_id, 'quantity': adult_quantity.get('adult_quantity'), 'ticket': ticket, 'adult_ticket': True, }) for item_id, child_quantity in bag.items(): if child_quantity.get('child_quantity'): ticket = get_object_or_404(Ticket, pk=item_id) total += child_quantity.get('child_quantity') * ticket.child_price ticket_count += child_quantity.get('child_quantity') bag_items.append({ 'item_id': item_id, 'quantity': child_quantity.get('child_quantity'), 'ticket': ticket, 'child_ticket': True, }) grand_total = total context = { 'bag_items': bag_items, 'total': total, 'ticket_count': ticket_count, 'grand_total': grand_total, } '''Make dictionary available to all templates across the enitire application''' return context
[ "The issue is how you are iterating in for item_id, adult_quantity in bag.items():. I see that bag is a dictionary, and I think that it's a dictionary like:\n{\n 'item_id': 1,\n 'quantity': 10,\n 'ticket': ticket,\n 'adult_ticket': True,\n}\n\nIf this is correct, then why do you need to iterate through a bag in contexts.py? Would this not work:\ndef bag_contents(request):\n\n bag_items = []\n '''Empty list for bag items to live in'''\n total = 0\n ticket_count = 0\n bag = request.session.get('bag', {})\n\n if 'item_id' in bag:\n item_id = bag.item_id\n else:\n item_id = None\n \n ticket = get_object_or_404(Ticket, pk=item_id)\n\n if bag.adult_ticket:\n total += bag.quantity * ticket.adult_price\n bag_items.append({\n 'item_id': item_id,\n 'quantity': bag.quantity,\n 'ticket': ticket,\n 'adult_ticket': True,\n })\n if bag.child_ticket:\n total += bag.quantity * ticket.child_price\n bag_items.append({\n 'item_id': item_id,\n 'quantity': bag.quantity,\n 'ticket': ticket,\n 'child_ticket': True,\n })\n \n ticket_count += bag.quantity\n\n\n grand_total = total\n\n context = {\n 'bag_items': bag_items,\n 'total': total,\n 'ticket_count': ticket_count,\n 'grand_total': grand_total,\n }\n\n '''Make dictionary available to all templates across the enitire application'''\n return context\n\n" ]
[ 0 ]
[]
[]
[ "django", "e_commerce", "model", "price", "python" ]
stackoverflow_0074477773_django_e_commerce_model_price_python.txt
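Editor's note on the entry above: the traceback arises because add_to_bag stores plain integers in request.session['bag'], while bag_contents then calls .get() on those integers. A minimal sketch of one consistent shape, assuming the bag maps each item id to a small dict (the structure and names here are illustrative, not taken from the original post):

from django.shortcuts import redirect

def add_to_bag(request, item_id):
    adult_quantity = int(request.POST.get('adult_quantity', 0))
    child_quantity = int(request.POST.get('child_quantity', 0))
    redirect_url = request.POST.get('redirect_url')
    bag = request.session.get('bag', {})
    # Django's default session serializer is JSON, so use string keys
    item = bag.setdefault(str(item_id), {})
    if adult_quantity:
        item['adult_quantity'] = item.get('adult_quantity', 0) + adult_quantity
    if child_quantity:
        item['child_quantity'] = item.get('child_quantity', 0) + child_quantity
    request.session['bag'] = bag
    return redirect(redirect_url)

With this shape, the loops in contexts.py can keep calling adult_quantity.get('adult_quantity') and child_quantity.get('child_quantity'), because every bag value is now a dict rather than an int.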
Q: Draw text around image in semicircular path in Python I need to write/draw some text of the objects in an image around a semicircular path. I have used ImageMagick/Wand with the image.distort method, but it only works for longer text; if the text is short it looks bad. Is there a way in PIL or ImageMagick/Wand to achieve that? I am looking for something like this image. I have already tried suggestions in this post but it does not work for all text lengths. Also, when I paste the text image onto the original image, it does not align to the center. A: You can pad the text with spaces in ImageMagick. convert -font Arial -pointsize 20 label:' Your Curved Text Your Curved Text ' -virtual-pixel Background -background white -distort Arc 360 -rotate -90 arc_circle_text.jpg convert -font Arial -pointsize 20 label:' Text ' -virtual-pixel Background -background white -distort Arc 360 -rotate -90 arc_circle_text2.jpg
Draw text around image in semicircular path in Python
I need to write/draw some text of the objects in an image around a semicircular path. I have used ImageMagick/Wand with the image.distort method, but it only works for longer text; if the text is short it looks bad. Is there a way in PIL or ImageMagick/Wand to achieve that? I am looking for something like this image. I have already tried suggestions in this post but it does not work for all text lengths. Also, when I paste the text image onto the original image, it does not align to the center.
[ "You can pad the text with spaces in Imagemagick.\nconvert -font Arial -pointsize 20 label:' Your Curved Text Your Curved Text ' -virtual-pixel Background -background white -distort Arc 360 -rotate -90 arc_circle_text.jpg\n\n\nconvert -font Arial -pointsize 20 label:' Text ' -virtual-pixel Background -background white -distort Arc 360 -rotate -90 arc_circle_text2.jpg\n\n\n" ]
[ 1 ]
[]
[]
[ "image_processing", "imagemagick", "python", "python_imaging_library", "wand" ]
stackoverflow_0074468853_image_processing_imagemagick_python_python_imaging_library_wand.txt
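Editor's note: the accepted answer uses the convert CLI; for readers on the question's Wand route, here is a rough Wand equivalent of the same space-padding trick. Treat it as a sketch — the canvas size, font size, and output name are placeholder assumptions:

from wand.color import Color
from wand.drawing import Drawing
from wand.image import Image

with Image(width=320, height=36, background=Color('white')) as img:
    with Drawing() as draw:
        draw.font_size = 20
        # Pad short text with spaces so it still spans part of the arc
        draw.text(5, 26, '      Text      ')
        draw(img)
    img.virtual_pixel = 'background'  # mirrors -virtual-pixel Background
    img.distort('arc', (360,))        # mirrors -distort Arc 360
    img.rotate(-90)                   # mirrors -rotate -90
    img.save(filename='arc_circle_text.png')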
Q: ASGI_APPLICATION not working with Django Channels I followed the tutorial in the channels documentation but when I start the server python3 manage.py runserver it gives me this : Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). October 17, 2022 - 00:13:21 Django version 4.1.2, using settings 'config.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. when I expected for it to give me this : Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). October 17, 2022 - 00:13:21 Django version 4.1.2, using settings 'config.settings' Starting ASGI/Channels version 3.0.5 development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. settings.py INSTALLED_APPS = [ 'channels', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ... ] ASGI_APPLICATION = 'config.asgi.application' asgi.py import os from django.core.asgi import get_asgi_application from channels.routing import ProtocolTypeRouter os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings') application = ProtocolTypeRouter({ 'http': get_asgi_application(), }) It doesn't give any errors even when I change the ASGI_APPLICATION = 'config.asgi.application to ASGI_APPLICATION = ''. A: This could be due to the fact that the Django and channels versions you have used are not compatible Try : channels==3.0.4 and django==4.0.0 A: Use version of python that support by channels, you will found it at pypi channels page A: I had the same problem, and found that there was a new release of Channels. Since the project's Pipfile did not specify a version, it was automatically upgraded. Maybe you had the same issue, your question was asked 2 days after Channels v4.0 release. Downgrading to v3.0.5 again solved the problem until I can properly upgrade.
ASGI_APPLICATION not working with Django Channels
I followed the tutorial in the channels documentation but when I start the server python3 manage.py runserver it gives me this : Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). October 17, 2022 - 00:13:21 Django version 4.1.2, using settings 'config.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. when I expected for it to give me this : Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). October 17, 2022 - 00:13:21 Django version 4.1.2, using settings 'config.settings' Starting ASGI/Channels version 3.0.5 development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. settings.py INSTALLED_APPS = [ 'channels', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ... ] ASGI_APPLICATION = 'config.asgi.application' asgi.py import os from django.core.asgi import get_asgi_application from channels.routing import ProtocolTypeRouter os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings') application = ProtocolTypeRouter({ 'http': get_asgi_application(), }) It doesn't give any errors even when I change the ASGI_APPLICATION = 'config.asgi.application to ASGI_APPLICATION = ''.
[ "This could be due to the fact that the Django and channels versions you have used are not compatible\nTry : channels==3.0.4 and django==4.0.0\n", "Use version of python that support by channels, you will found it at pypi channels page\n", "I had the same problem, and found that there was a new release of Channels. Since the project's Pipfile did not specify a version, it was automatically upgraded.\nMaybe you had the same issue, your question was asked 2 days after Channels v4.0 release.\nDowngrading to v3.0.5 again solved the problem until I can properly upgrade.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "django", "django_channels", "python" ]
stackoverflow_0074091600_django_django_channels_python.txt
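Editor's note: the timing in the last answer matches the Channels 4.0 release, which moved the runserver integration out of channels into the separate daphne package. If you upgrade rather than pin, the pattern documented by Channels 4 is roughly the following (sketch — confirm against the Channels release notes for your version):

pip install -U 'channels[daphne]'

# settings.py
INSTALLED_APPS = [
    'daphne',  # must be listed before django.contrib.staticfiles
    'channels',
    'django.contrib.admin',
    # ...
]
ASGI_APPLICATION = 'config.asgi.application'

With daphne installed and registered first, runserver should print an ASGI banner on startup again.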
Q: Django list_display does not work at reverse model Django list_display does not work at reverse model, i want to list_display the title and the production company of two seperated but related tables. here ist my model.py class ProjectBaseModel(models.Model): title = models.CharField("Titel", max_length=100, blank=False, unique=True) former_title = models.CharField("ehemaliger Titel", max_length=100, blank=True) title_international = models.CharField( "Titel, international", max_length=100, blank=True, null=True, unique=True ) program_length = models.PositiveSmallIntegerField( verbose_name="Länge in Min.", blank=True, validators=[MaxValueValidator(300)] ) def __str__(self): return self.title class Meta: abstract = True ordering = ["title"] class FeatureFilm(ProjectBaseModel): class Meta: verbose_name = "Kinofilm" verbose_name_plural = "Kinofilme" class ProjectCompanySet(models.Model): featurefilm = models.ForeignKey( FeatureFilm, on_delete=models.CASCADE, null=True, blank=True ) tv_movie = models.ForeignKey( TvMovie, on_delete=models.CASCADE, null=True, blank=True ) production_company = models.ForeignKey( CompanyOrBranch, related_name="production_company", verbose_name="Produktionsfirma", on_delete=models.SET_NULL, blank=True, null=True, ) My FeatureFilm table inherits from the ProjectBaseModel table. I want to display the FeatureFilm list in the django backend admin. In the list I want to show the appropriate name, but I also want to show the field production_company which is one row of the ProductionCompanySet table, which is a child table related to the FeatureFilm table. the production_company field ist a foreign key to the CompanyOrBranch Table. Here you can see this table: class CompanyOrBranch(CompanyBaseModel): name = models.CharField( "Firma oder Niederlassung", max_length=60, blank=False, ) and here is my admin.py from django.contrib import admin from .models import ( FeatureFilm, TvMovie, ProjectCompanySet, Vendor, StaffList, VendorVFX, QuoteAndEffort, ) class ProjectCompanySetInLine(admin.StackedInline): model = ProjectCompanySet fields = ( "production_company", "co_production", "distributor", "broadcast", "world_sales", ) classes = ["collapse"] extra = 0 class FeatureFilmAdmin(admin.ModelAdmin): inlines = [ QuoteAndEffortSetInLine, ProjectCompanySetInLine, VendorVFXSetInLine, VendorSetInLine, StaffListSetInLine, ] list_display = ["title", "projectcompanyset__production_company", ] A: My question was not precise enough. Yes, the FeatureFilm table points with a Foreignkey to a ProjectCompanySet table. and this has a field: production_company. what I actually want is that the first ProjectCompanySet entry and from it the first company is shown to me. What I have managed in the meantime is that the str fuction of the ProjectCompany is displayed to me. But if I have several ProjectCompanySet tables then also then several times. But I want only the first ! and then also the string of the production_company. that I have not yet managed. here my so far extended code: class FeatureFilmAdmin(admin.ModelAdmin): inlines = [ QuoteAndEffortSetInLine, ProjectCompanySetInLine, VendorVFXSetInLine, VendorSetInLine, StaffListSetInLine, ] list_display = ["title", "program_length", "get_company_set"] def get_company_set(self, Featurefilm): return FeatureFilm.objects.filter(pk=Featurefilm.id).values( "projectcompanyset__production_company__name" )
Django list_display does not work at reverse model
Django list_display does not work at reverse model, i want to list_display the title and the production company of two seperated but related tables. here ist my model.py class ProjectBaseModel(models.Model): title = models.CharField("Titel", max_length=100, blank=False, unique=True) former_title = models.CharField("ehemaliger Titel", max_length=100, blank=True) title_international = models.CharField( "Titel, international", max_length=100, blank=True, null=True, unique=True ) program_length = models.PositiveSmallIntegerField( verbose_name="Länge in Min.", blank=True, validators=[MaxValueValidator(300)] ) def __str__(self): return self.title class Meta: abstract = True ordering = ["title"] class FeatureFilm(ProjectBaseModel): class Meta: verbose_name = "Kinofilm" verbose_name_plural = "Kinofilme" class ProjectCompanySet(models.Model): featurefilm = models.ForeignKey( FeatureFilm, on_delete=models.CASCADE, null=True, blank=True ) tv_movie = models.ForeignKey( TvMovie, on_delete=models.CASCADE, null=True, blank=True ) production_company = models.ForeignKey( CompanyOrBranch, related_name="production_company", verbose_name="Produktionsfirma", on_delete=models.SET_NULL, blank=True, null=True, ) My FeatureFilm table inherits from the ProjectBaseModel table. I want to display the FeatureFilm list in the django backend admin. In the list I want to show the appropriate name, but I also want to show the field production_company which is one row of the ProductionCompanySet table, which is a child table related to the FeatureFilm table. the production_company field ist a foreign key to the CompanyOrBranch Table. Here you can see this table: class CompanyOrBranch(CompanyBaseModel): name = models.CharField( "Firma oder Niederlassung", max_length=60, blank=False, ) and here is my admin.py from django.contrib import admin from .models import ( FeatureFilm, TvMovie, ProjectCompanySet, Vendor, StaffList, VendorVFX, QuoteAndEffort, ) class ProjectCompanySetInLine(admin.StackedInline): model = ProjectCompanySet fields = ( "production_company", "co_production", "distributor", "broadcast", "world_sales", ) classes = ["collapse"] extra = 0 class FeatureFilmAdmin(admin.ModelAdmin): inlines = [ QuoteAndEffortSetInLine, ProjectCompanySetInLine, VendorVFXSetInLine, VendorSetInLine, StaffListSetInLine, ] list_display = ["title", "projectcompanyset__production_company", ]
[ "My question was not precise enough. Yes, the FeatureFilm table points with a Foreignkey to a ProjectCompanySet table. and this has a field: production_company. what I actually want is that the first ProjectCompanySet entry and from it the first company is shown to me. What I have managed in the meantime is that the str fuction of the ProjectCompany is displayed to me. But if I have several ProjectCompanySet tables then also then several times. But I want only the first ! and then also the string of the production_company. that I have not yet managed.\nhere my so far extended code:\nclass FeatureFilmAdmin(admin.ModelAdmin):\ninlines = [\n QuoteAndEffortSetInLine,\n ProjectCompanySetInLine,\n VendorVFXSetInLine,\n VendorSetInLine,\n StaffListSetInLine,\n]\nlist_display = [\"title\", \"program_length\", \"get_company_set\"]\n\ndef get_company_set(self, Featurefilm):\n return FeatureFilm.objects.filter(pk=Featurefilm.id).values(\n \"projectcompanyset__production_company__name\"\n )\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_admin", "foreign_keys", "orm", "python" ]
stackoverflow_0074453280_django_django_admin_foreign_keys_orm_python.txt
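Editor's note: the self-answer above returns a whole .values() queryset, so the change list renders a QuerySet repr rather than a name. A sketch that returns only the first related company's name — projectcompanyset_set is Django's default reverse accessor for the ForeignKey shown; adjust if a related_name is ever added:

class FeatureFilmAdmin(admin.ModelAdmin):
    list_display = ["title", "program_length", "get_production_company"]

    def get_production_company(self, obj):
        # First related ProjectCompanySet row, if any
        company_set = obj.projectcompanyset_set.first()
        if company_set and company_set.production_company:
            return company_set.production_company.name
        return "-"

    get_production_company.short_description = "Produktionsfirma"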
Q: Iterate over all the sub-groups of a list Let's say I have a list [1,2,3,4,5,6], and I want to iterate over all the subgroups of len 2 [1,2] [3,4] [5,6]. The naive way of doing it L = [1,2,3,4,5,6] N = len(L)//2 for k in range(N): slice = L[k*2:(k+1)*2] for val in slice: #Do things with the slice However I was wondering if there is a more pythonic method to iterate over a "partitioned" list already. I also accept solutions with numpy arrays. Something like: L = [1,2,3,4,5,6] slices = f(L,2) # A nice "f" here? for slice in slices: for val in slice: #Do things with the slice Thanks a lot! A: Use the grouper recipe from the itertools library: import itertools def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return itertools.zip_longest(*args, fillvalue=fillvalue) L = [1,2,3,4,5,6] for slice in grouper(L, 2): print(slice) A: To have a nice f as you are asking (not commenting on whether it is really a good idea, depending on what you are really trying to do) I would go with itertools itertools.islice(itertools.pairwise(L), 0, None, 2) is your f. Note that L is a list here. But it could be any iterator. Which is the point with itertools. You could have billions of iteration in L, and therefore billions of iterations with my generator, without using any memory. As long as L is not in memory, and that what you are doing with the slice is not stacking them in memory (if you do, then the method is just the same as any other). Usage example import itertools L=[1,2,3,4,5,6] for p in itertools.islice(itertools.pairwise(L), 0, None, 2): print(p) (1, 2) (3, 4) (5, 6) Explanation itertools.pairwise iterates by pairs. So almost what you are looking for. Except that those are 'overlapping'. In your case, it iterates (1,2), (2,3), (3,4), (4,5), (5,6) itertools.islice(it, 0, None, 2) iterates every two elements. So both together, your get the 1st, 3rd, 5th, .. pairs of previous iterator, that is what you want Timings Doing nothing, with 1000 elements method Timing Yours 94 ms Variant 52 ms numpy 187 ms itertools 48 ms Woodford 42 ms Note: what I call "variant" is almost the same as your method (not the same timings tho!), avoiding the k*2 for k in range(0,len(L),2): slice = L[k:k+2] for val in slice: .... The fact that it is so fast (almost as fast as mine) says a lot about how negligible all this is. All I did is avoid 2 multiplication, and it almost halves the timing. Note 2: numpy is inefficient in this example, precisely because we do nothing in this question but iterating. So building of the array is what costs. But depending on what you want to do, numpy can be way faster than any other method, if you can avoid any iteration. For example (just using a random one), if what you want to do is computing the sum for every pairs (a,b) of L of a+2b, numpy's a[:,0].sum()+a[:,1].sum()*2 would beats any iteration based method, even with itertools. But, well, from what we know of your problem (that is that you want to iterate), my itertools method is so far the fastest. And since it is a one-liner, I guess it is also the most pythonesque. Edit I stand corrected: Woodford's (also itertools, but different) method, posted while I was writing this answer, is faster. Not a one-liner as is. But that is because they wanted to deal with case there is not a even number of elements in L, which other method did not. 
Else it could also be written like this zip(*[iter(L)]*2) For example for p in zip(*[iter(L)]*2): print(p) Gives the same result as before. (Explanation: we have 2 competing iterators over the same iterable. So each time we "consume" an element from one iterator, it is no longer available to the other. So zipping them iterates through pairs of successive elements from the initial iterator L, never using the same element twice.) I have updated my timing table.
Iterate over all the sub-groups of a list
Let's say I have a list [1,2,3,4,5,6], and I want to iterate over all the subgroups of len 2 [1,2] [3,4] [5,6]. The naive way of doing it L = [1,2,3,4,5,6] N = len(L)//2 for k in range(N): slice = L[k*2:(k+1)*2] for val in slice: #Do things with the slice However I was wondering if there is a more pythonic method to iterate over a "partitioned" list already. I also accept solutions with numpy arrays. Something like: L = [1,2,3,4,5,6] slices = f(L,2) # A nice "f" here? for slice in slices: for val in slice: #Do things with the slice Thanks a lot!
[ "Use the grouper recipe from the itertools library:\nimport itertools\n\ndef grouper(iterable, n, fillvalue=None):\n \"Collect data into fixed-length chunks or blocks\"\n # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return itertools.zip_longest(*args, fillvalue=fillvalue)\n\nL = [1,2,3,4,5,6]\nfor slice in grouper(L, 2):\n print(slice)\n\n", "To have a nice f as you are asking (not commenting on whether it is really a good idea, depending on what you are really trying to do) I would go with itertools\nitertools.islice(itertools.pairwise(L), 0, None, 2)\n\nis your f. Note that L is a list here. But it could be any iterator. Which is the point with itertools. You could have billions of iteration in L, and therefore billions of iterations with my generator, without using any memory. As long as L is not in memory, and that what you are doing with the slice is not stacking them in memory (if you do, then the method is just the same as any other).\nUsage example\nimport itertools\nL=[1,2,3,4,5,6]\nfor p in itertools.islice(itertools.pairwise(L), 0, None, 2):\n print(p)\n\n(1, 2)\n(3, 4)\n(5, 6)\n\nExplanation\nitertools.pairwise iterates by pairs. So almost what you are looking for.\nExcept that those are 'overlapping'.\nIn your case, it iterates (1,2), (2,3), (3,4), (4,5), (5,6)\nitertools.islice(it, 0, None, 2) iterates every two elements.\nSo both together, your get the 1st, 3rd, 5th, .. pairs of previous iterator, that is what you want\nTimings\nDoing nothing, with 1000 elements\n\n\n\n\nmethod\nTiming\n\n\n\n\nYours\n94 ms\n\n\nVariant\n52 ms\n\n\nnumpy\n187 ms\n\n\nitertools\n48 ms\n\n\nWoodford\n42 ms\n\n\n\n\nNote: what I call \"variant\" is almost the same as your method (not the same timings tho!), avoiding the k*2\nfor k in range(0,len(L),2):\n slice = L[k:k+2]\n for val in slice:\n ....\n\nThe fact that it is so fast (almost as fast as mine) says a lot about how negligible all this is. All I did is avoid 2 multiplication, and it almost halves the timing.\nNote 2: numpy is inefficient in this example, precisely because we do nothing in this question but iterating. So building of the array is what costs.\nBut depending on what you want to do, numpy can be way faster than any other method, if you can avoid any iteration.\nFor example (just using a random one), if what you want to do is computing the sum for every pairs (a,b) of L of a+2b, numpy's a[:,0].sum()+a[:,1].sum()*2 would beats any iteration based method, even with itertools.\nBut, well, from what we know of your problem (that is that you want to iterate), my itertools method is so far the fastest. And since it is a one-liner, I guess it is also the most pythonesque.\nEdit\nI stand corrected: Woodford's (also itertools, but different) method, posted while I was writing this answer, is faster.\nNot a one-liner as is. But that is because they wanted to deal with case there is not a even number of elements in L, which other method did not.\nElse it could also be written like this\nzip(*[iter(L)]*2)\n\nFor example\nfor p in zip(*[iter(L)]*2):\n print(p)\n\nGives the same result as before.\n(Explatation: we have 2 competing iterators for the same iterable. So each time we \"consume\" an element from an iterator, it is no longer available for the other. So zipping them iterates through pairs of successive elements from the initial iterator L, never using the same element twice).\nI update my timing table.\n" ]
[ 2, 1 ]
[]
[]
[ "list", "numpy", "python" ]
stackoverflow_0074479111_list_numpy_python.txt
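Editor's note: since these answers were written, Python 3.12 added itertools.batched, which is arguably the canonical f the question asks for (requires 3.12+; earlier versions can keep the grouper recipe above):

import itertools

L = [1, 2, 3, 4, 5, 6]
for chunk in itertools.batched(L, 2):  # yields (1, 2), (3, 4), (5, 6)
    for val in chunk:
        pass  # do things with each value

Unlike zip_longest-based grouping, a trailing incomplete chunk is returned as a shorter tuple instead of being padded with a fillvalue.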
Q: Remove element in list with bool flag with List Comprehension Wondering if there would be a neat way to use List Comprehension to accomplish removing an element from a list based on a bool. example test_list = [ "apple", "orange", "grape", "lemon" ] apple = True if apple: test_list.remove("apple") print(test_list) expected output ['orange', 'grape', 'lemon'] I know I could so something like: test_list = [x for x in test_list if "apple" not in x] But wondering if I could use a bool flag to do this instead of a string as I only want to to run if the bool is True. A: test_list = [x for x in test_list if not (apple and x == "apple")] Results: >>> apple = True >>> [x for x in test_list if not (apple and x == "apple")] ['orange', 'grape', 'lemon'] >>> apple = False >>> [x for x in test_list if not (apple and x == "apple")] ['apple', 'orange', 'grape', 'lemon'] Note: Going by the initial example, removing one element from a list depending on a flag, I would stick to that example, which is very clear what it does: if apple: test_list.remove("apple") My list comprehension condition takes more effort to understand. Clarity beats conciseness and (premature) optimisation. There is no good reason with your example to use a list comprehension. Also: my list comprehension is not precisely equivalent as the if - .remove(...) part, as pointed out by Edward Peters. The list comprehension will remove all elements that are "apple" (if apple is True), while the if - .remove() variant will only remove the first occurrence of "apple", and leave any remaining "apple" elements in the list. Should you desire the first behaviour, I'd be inclined to use: if apple: test_list = [item for item in test_list if item != "apple"] which is still much clearer than the list comprehension with the double condition, while still using the practicality of a list comprehension to filter a list. A: We can create a boolean list and use a list comprehension to get the test_list without the apple item inside: test_list = [ "apple", "orange", "grape", "lemon" ] test_list_bool=[True if x=='apple' else False for x in test_list] test_list=[test_list[i] for i in range(len(test_list)) if not test_list_bool[i]] Output >>> test_list >>> ['orange', 'grape', 'lemon']
Remove element in list with bool flag with List Comprehension
Wondering if there would be a neat way to use List Comprehension to accomplish removing an element from a list based on a bool. example test_list = [ "apple", "orange", "grape", "lemon" ] apple = True if apple: test_list.remove("apple") print(test_list) expected output ['orange', 'grape', 'lemon'] I know I could do something like: test_list = [x for x in test_list if "apple" not in x] But wondering if I could use a bool flag to do this instead of a string, as I only want it to run if the bool is True.
[ "test_list = [x for x in test_list if not (apple and x == \"apple\")]\n\nResults:\n>>> apple = True\n>>> [x for x in test_list if not (apple and x == \"apple\")]\n['orange', 'grape', 'lemon']\n\n>>> apple = False\n>>> [x for x in test_list if not (apple and x == \"apple\")]\n['apple', 'orange', 'grape', 'lemon']\n\n\nNote: Going by the initial example, removing one element from a list depending on a flag, I would stick to that example, which is very clear what it does:\nif apple:\n test_list.remove(\"apple\")\n\nMy list comprehension condition takes more effort to understand. Clarity beats conciseness and (premature) optimisation. There is no good reason with your example to use a list comprehension.\nAlso: my list comprehension is not precisely equivalent as the if - .remove(...) part, as pointed out by Edward Peters. The list comprehension will remove all elements that are \"apple\" (if apple is True), while the if - .remove() variant will only remove the first occurrence of \"apple\", and leave any remaining \"apple\" elements in the list.\nShould you desire the first behaviour, I'd be inclined to use:\nif apple:\n test_list = [item for item in test_list if item != \"apple\"]\n\nwhich is still much clearer than the list comprehension with the double condition, while still using the practicality of a list comprehension to filter a list.\n", "We can create a boolean list and use a list comprehension to get the test_list without the apple item inside:\ntest_list = [\n \"apple\",\n \"orange\",\n \"grape\",\n \"lemon\"\n]\n\ntest_list_bool=[True if x=='apple' else False for x in test_list]\n\ntest_list=[test_list[i] for i in range(len(test_list)) if not test_list_bool[i]]\n\n\nOutput\n>>> test_list\n>>> ['orange', 'grape', 'lemon']\n\n" ]
[ 1, 0 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0074479220_list_comprehension_python.txt
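Editor's note: if the same flag-guarded removal is needed for several values, the accepted condition generalizes into a small helper (names here are illustrative):

def drop_if(items, value, flag):
    # Remove every occurrence of `value`, but only when `flag` is True
    return [x for x in items if not (flag and x == value)]

drop_if(["apple", "orange", "grape", "lemon"], "apple", True)
# -> ['orange', 'grape', 'lemon']
drop_if(["apple", "orange", "grape", "lemon"], "apple", False)
# -> ['apple', 'orange', 'grape', 'lemon']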
Q: Why does ZoneInfo("UTC") do different time conversions from timezone.utc? I was trying to convert a datetime from one timezone to another. I'm in the process of updating our Python codebase to stop relying on utilities we don't need anymore. In particular, I'm deprecating our use of arrow and pytz. In doing so, I noticed some strange behavior from ZoneInfo("UTC"). from datetime import datetime, timezone jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo("UTC")) # This gives datetime.datetime(2022, 1, 1, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='UTC')) # Let's say I try to convert it to America/Toronto timezone jan1_in_utc.astimezone(ZoneInfo("America/Toronto")) # This gives me the SAME date time ?!?!? # datetime.datetime(2022, 1, 1, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto')) # However, if I use timezone.utc instead jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=timezone.utc) # This works as expected jan1_in_utc.astimezone(ZoneInfo("America/Toronto")) # this correctly calculates a -5 offset # datetime.datetime(2022, 1, 1, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto')) I'm not sure what I'm doing wrong. "UTC" is in the list of zoneinfo.available_timezones(). Using "utc" raises an error. I also noticed this oddity. Calculating the utcoffset from the ZoneInfo("UTC") isn't 0. jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo("UTC")) ZoneInfo("UTC").utcoffset(jan1_in_utc) Where as if I use timezone.utc, there's no time difference. jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=timezone.utc) timezone.utc.utcoffset(jan1_in_utc) # This gives datetime.timedelta(0) Now I'm unsure if I should use ZoneInfo at all, or if I should still rely on pytz and arrow. Any thoughts? Clearly, I'm missing something! A: Cannot reproduce. Did you make sure tzdata is installed and up-to-date? On Python 3.9.15 (main, Oct 30 2022, 10:17:28) [GCC 11.3.0] on linux I get Toronto time at UTC-5 as expected for both options: from datetime import datetime, timezone from zoneinfo import ZoneInfo jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo("UTC")) jan1_in_utc = jan1_in_utc.astimezone(ZoneInfo("America/Toronto")) print(repr(jan1_in_utc)) # datetime.datetime(2022, 1, 1, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto')) jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=timezone.utc) jan1_in_utc = jan1_in_utc.astimezone(ZoneInfo("America/Toronto")) print(repr(jan1_in_utc)) # datetime.datetime(2022, 1, 1, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto')) A: I'm not entirely sure why, but I've narrowed the scope of the issue. When we're building our Docker containers, we added a line to set /etc/timezone to be America/Toronto. That causes ZoneInfo.utcoffset to behave differently. Because of this, the utcoffset method appears to pick up that timezone and calculate offsets differently. >>> jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo("UTC")) >>> ZoneInfo("UTC").utcoffset(jan1_in_utc) datetime.timedelta(days=-1, seconds=68400) It returns 1 day minus 19 hours which is -5 hours. That's the offset for America/Toronto. I believe this explains why no adjustment is made to the time. ZoneInfo seems to think I'm converting from America/Toronto to America/Toronto, so no change is made to the datetime. Python's datetime.timezone.utc doesn't exhibit this behavior. 
Even with /etc/timezone set to America/Toronto it calculates the offset correctly. >>> jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo("UTC")) >>> ZoneInfo("UTC").utcoffset(jan1_in_utc) datetime.timedelta(0)
Why does ZoneInfo("UTC") do different time conversions from timezone.utc?
I was trying to convert a datetime from one timezone to another. I'm in the process of updating our Python codebase to stop relying on utilities we don't need anymore. In particular, I'm deprecating our use of arrow and pytz. In doing so, I noticed some strange behavior from ZoneInfo("UTC"). from datetime import datetime, timezone jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo("UTC")) # This gives datetime.datetime(2022, 1, 1, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='UTC')) # Let's say I try to convert it to America/Toronto timezone jan1_in_utc.astimezone(ZoneInfo("America/Toronto")) # This gives me the SAME date time ?!?!? # datetime.datetime(2022, 1, 1, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto')) # However, if I use timezone.utc instead jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=timezone.utc) # This works as expected jan1_in_utc.astimezone(ZoneInfo("America/Toronto")) # this correctly calculates a -5 offset # datetime.datetime(2022, 1, 1, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto')) I'm not sure what I'm doing wrong. "UTC" is in the list of zoneinfo.available_timezones(). Using "utc" raises an error. I also noticed this oddity. Calculating the utcoffset from the ZoneInfo("UTC") isn't 0. jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo("UTC")) ZoneInfo("UTC").utcoffset(jan1_in_utc) Where as if I use timezone.utc, there's no time difference. jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=timezone.utc) timezone.utc.utcoffset(jan1_in_utc) # This gives datetime.timedelta(0) Now I'm unsure if I should use ZoneInfo at all, or if I should still rely on pytz and arrow. Any thoughts? Clearly, I'm missing something!
[ "Cannot reproduce. Did you make sure tzdata is installed and up-to-date?\nOn\nPython 3.9.15 (main, Oct 30 2022, 10:17:28) \n[GCC 11.3.0] on linux\n\nI get Toronto time at UTC-5 as expected for both options:\nfrom datetime import datetime, timezone\nfrom zoneinfo import ZoneInfo\n\njan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo(\"UTC\"))\njan1_in_utc = jan1_in_utc.astimezone(ZoneInfo(\"America/Toronto\"))\nprint(repr(jan1_in_utc))\n# datetime.datetime(2022, 1, 1, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto'))\n\njan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=timezone.utc)\njan1_in_utc = jan1_in_utc.astimezone(ZoneInfo(\"America/Toronto\"))\nprint(repr(jan1_in_utc))\n# datetime.datetime(2022, 1, 1, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='America/Toronto'))\n\n", "I'm not entirely sure why, but I've narrowed the scope of the issue. When we're building our Docker containers, we added a line to set /etc/timezone to be America/Toronto. That causes ZoneInfo.utcoffset to behave differently.\nBecause of this, the utcoffset method appears to pick up that timezone and calculate offsets differently.\n>>> jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo(\"UTC\"))\n>>> ZoneInfo(\"UTC\").utcoffset(jan1_in_utc)\ndatetime.timedelta(days=-1, seconds=68400) \n\nIt returns 1 day minus 19 hours which is -5 hours. That's the offset for America/Toronto. I believe this explains why no adjustment is made to the time. ZoneInfo seems to think I'm converting from America/Toronto to America/Toronto, so no change is made to the datetime.\nPython's datetime.timezone.utc doesn't exhibit this behavior. Even with /etc/timezone set to America/Toronto it calculates the offset correctly.\n>>> jan1_in_utc = datetime.fromisoformat('2022-01-01T08:00').replace(tzinfo=ZoneInfo(\"UTC\"))\n>>> ZoneInfo(\"UTC\").utcoffset(jan1_in_utc)\ndatetime.timedelta(0)\n\n" ]
[ 0, 0 ]
[]
[]
[ "datetime", "python", "zoneinfo" ]
stackoverflow_0074467999_datetime_python_zoneinfo.txt
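Editor's note: for anyone chasing the same symptom, a quick diagnostic is to compare the offsets below inside the affected container; if the ZoneInfo("UTC") line is nonzero while timezone.utc is zero, the container's zone configuration is bleeding into the "UTC" key as the self-answer describes (sketch):

import os
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

dt = datetime(2022, 1, 1, 8, 0)
print(os.environ.get("TZ"))                        # container-level TZ, if any
print(timezone.utc.utcoffset(dt))                  # always timedelta(0)
print(ZoneInfo("UTC").utcoffset(dt))               # should also be timedelta(0)
print(ZoneInfo("America/Toronto").utcoffset(dt))   # expected: -1 day, 19:00:00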
Q: PySpark - Collect vs CrossJoin, which to choose to create a max column? Spark Masters! Does anyone has some tips on which is better or faster on pyspark to create a column with the max number of another column. Option A: max_num = df.agg({"number": "max"}).collect()[0][0] df = df.withColumn("max", f.lit(max_num)) Option B: max_num = df2.select(f.max(f.col("number")).alias("max")) df2 = df2.crossJoin(max_num) Please feel free, to add any other comments, even not directly related, is more for learning purpose. Please, feel free to add an option C, D … On thread is a testable code I made (also any comments on the code are welcome) Testing code: import time from pyspark.sql import SparkSession import pyspark.sql.functions as f # -------------------------------------------------------------------------------------- # 01 - Data creation spark = SparkSession.builder.getOrCreate() data = [] for i in range(10000): data.append( { "1": "adsadasd", "number": 1323, "3": "andfja" } ) data.append( { "1": "afasdf", "number": 8908, "3": "fdssfv" } ) df = spark.createDataFrame(data) df2 = spark.createDataFrame(data) df.count() df2.count() print(df.rdd.getNumPartitions()) print(df2.rdd.getNumPartitions()) # -------------------------------------------------------------------------------------- # 02 - Tests # B) Crossjoin start_time = time.time() max_num = df2.select(f.max(f.col("number")).alias("max")) df2 = df2.crossJoin(max_num) print(df2.count()) print("Collect time: ", time.time() - start_time) # A) Collect start_time = time.time() max_num = df.agg({"number": "max"}).collect()[0][0] df = df.withColumn("max", f.lit(max_num)) print(df.count()) print("Collect time: ", time.time() - start_time) df2.show() df.show() Measure the performance of collect and crossjoin on pyspark. 
A: I added another method similar to your B method, which consists in creating a Window over all dataframe and then taking the maximum value on it: df3.withColumn("max", F.max("number").over(Window.partitionBy())) Here is how the three methods performed over a dataframe of 100 million rows (I couldn't fit much more into memory): import time import numpy as np import pandas as pd from pyspark.sql import SparkSession import pyspark.sql.functions as F from pyspark.sql.window import Window # -------------------------------------------------------------------------------------- # 01 - Data creation spark = SparkSession.builder.getOrCreate() data = pd.DataFrame({ 'aaa': '1', 'number': np.random.randint(0, 100, size=100000000) }) df = spark.createDataFrame(data) df2 = spark.createDataFrame(data) df3 = spark.createDataFrame(data) # -------------------------------------------------------------------------------------- # 02 - Tests # A) Collect method = 'A' start_time = time.time() max_num = df.agg({"number": "max"}).collect()[0][0] df = df.withColumn("max", F.lit(max_num)) print(f"Collect time method {method}: ", time.time() - start_time) # B) Crossjoin method = 'B' start_time = time.time() max_num = df2.select(F.max(F.col("number")).alias("max")) df2 = df2.crossJoin(max_num) print(f"Collect time method {method}: ", time.time() - start_time) # C) Window method = 'C' start_time = time.time() df3 = df3.withColumn("max", F.max("number").over(Window.partitionBy())) print(f"Collect time method {method}: ", time.time() - start_time) Results: Collect time method A: 1.890228033065796 Collect time method B: 0.01714015007019043 Collect time method C: 0.03456592559814453 I tried the same code also with 100k rows; method A halves its collect time (~0.9 sec) but it's still high, whereas method B and C stay more or less the same. No other sensible methods came to mind. Therefore, it seems that method B may be the most efficient one. A: Made some changes to @ric-s great suggestion. import time import numpy as np import pandas as pd from pyspark.sql import SparkSession import pyspark.sql.functions as F from pyspark.sql.window import Window # -------------------------------------------------------------------------------------- # 01 - Data creation spark = SparkSession.builder.getOrCreate() data = pd.DataFrame({ 'aaa': '1', 'number': np.random.randint(0, 100, size=1000000) }) df = spark.createDataFrame(data) df2 = spark.createDataFrame(data) df3 = spark.createDataFrame(data) df.count() df2.count() df3.count() # -------------------------------------------------------------------------------------- # 02 - Tests # A) Collect method = 'A' start_time = time.time() max_num = df.agg({"number": "max"}).collect()[0][0] df = df.withColumn("max", F.lit(max_num)) df.count() print(f"Collect time method {method}: ", time.time() - start_time) # B) Crossjoin method = 'B' start_time = time.time() max_num = df2.select(F.max(F.col("number")).alias("max")) df2 = df2.crossJoin(max_num) df2.count() print(f"Collect time method {method}: ", time.time() - start_time) # C) Window method = 'C' start_time = time.time() df3 = df3.withColumn("max", F.max("number").over(Window.partitionBy())) df3.count() print(f"Collect time method {method}: ", time.time() - start_time) And got this results: Collect time method A: 1.8250329494476318 Collect time method B: 1.373009204864502 Collect time method C: 0.4454350471496582
PySpark - Collect vs CrossJoin, which to choose to create a max column?
Spark Masters! Does anyone have some tips on which is better or faster in PySpark to create a column with the max number of another column? Option A: max_num = df.agg({"number": "max"}).collect()[0][0] df = df.withColumn("max", f.lit(max_num)) Option B: max_num = df2.select(f.max(f.col("number")).alias("max")) df2 = df2.crossJoin(max_num) Please feel free to add any other comments, even if not directly related; this is more for learning purposes. Please feel free to add an option C, D … Below is testable code I made (any comments on the code are also welcome). Testing code: import time from pyspark.sql import SparkSession import pyspark.sql.functions as f # -------------------------------------------------------------------------------------- # 01 - Data creation spark = SparkSession.builder.getOrCreate() data = [] for i in range(10000): data.append( { "1": "adsadasd", "number": 1323, "3": "andfja" } ) data.append( { "1": "afasdf", "number": 8908, "3": "fdssfv" } ) df = spark.createDataFrame(data) df2 = spark.createDataFrame(data) df.count() df2.count() print(df.rdd.getNumPartitions()) print(df2.rdd.getNumPartitions()) # -------------------------------------------------------------------------------------- # 02 - Tests # B) Crossjoin start_time = time.time() max_num = df2.select(f.max(f.col("number")).alias("max")) df2 = df2.crossJoin(max_num) print(df2.count()) print("Collect time: ", time.time() - start_time) # A) Collect start_time = time.time() max_num = df.agg({"number": "max"}).collect()[0][0] df = df.withColumn("max", f.lit(max_num)) print(df.count()) print("Collect time: ", time.time() - start_time) df2.show() df.show() Measure the performance of collect and crossjoin on PySpark.
[ "I added another method similar to your B method, which consists in creating a Window over all dataframe and then taking the maximum value on it:\ndf3.withColumn(\"max\", F.max(\"number\").over(Window.partitionBy()))\n\n\nHere is how the three methods performed over a dataframe of 100 million rows (I couldn't fit much more into memory):\nimport time\nimport numpy as np\nimport pandas as pd\nfrom pyspark.sql import SparkSession\nimport pyspark.sql.functions as F\nfrom pyspark.sql.window import Window\n\n# --------------------------------------------------------------------------------------\n# 01 - Data creation\nspark = SparkSession.builder.getOrCreate()\n\ndata = pd.DataFrame({\n 'aaa': '1',\n 'number': np.random.randint(0, 100, size=100000000)\n})\ndf = spark.createDataFrame(data)\ndf2 = spark.createDataFrame(data)\ndf3 = spark.createDataFrame(data)\n\n# --------------------------------------------------------------------------------------\n# 02 - Tests\n\n# A) Collect\nmethod = 'A'\nstart_time = time.time()\nmax_num = df.agg({\"number\": \"max\"}).collect()[0][0]\ndf = df.withColumn(\"max\", F.lit(max_num))\nprint(f\"Collect time method {method}: \", time.time() - start_time)\n\n# B) Crossjoin\nmethod = 'B'\nstart_time = time.time()\nmax_num = df2.select(F.max(F.col(\"number\")).alias(\"max\"))\ndf2 = df2.crossJoin(max_num)\nprint(f\"Collect time method {method}: \", time.time() - start_time)\n\n# C) Window\nmethod = 'C'\nstart_time = time.time()\ndf3 = df3.withColumn(\"max\", F.max(\"number\").over(Window.partitionBy()))\nprint(f\"Collect time method {method}: \", time.time() - start_time)\n\nResults:\nCollect time method A: 1.890228033065796\nCollect time method B: 0.01714015007019043\nCollect time method C: 0.03456592559814453\n\nI tried the same code also with 100k rows; method A halves its collect time (~0.9 sec) but it's still high, whereas method B and C stay more or less the same.\nNo other sensible methods came to mind.\nTherefore, it seems that method B may be the most efficient one.\n", "Made some changes to @ric-s great suggestion.\nimport time\nimport numpy as np\nimport pandas as pd\nfrom pyspark.sql import SparkSession\nimport pyspark.sql.functions as F\nfrom pyspark.sql.window import Window\n\n# --------------------------------------------------------------------------------------\n# 01 - Data creation\nspark = SparkSession.builder.getOrCreate()\n\ndata = pd.DataFrame({\n 'aaa': '1',\n 'number': np.random.randint(0, 100, size=1000000)\n})\ndf = spark.createDataFrame(data)\ndf2 = spark.createDataFrame(data)\ndf3 = spark.createDataFrame(data)\n\ndf.count()\ndf2.count()\ndf3.count()\n# --------------------------------------------------------------------------------------\n# 02 - Tests\n\n# A) Collect\nmethod = 'A'\nstart_time = time.time()\nmax_num = df.agg({\"number\": \"max\"}).collect()[0][0]\ndf = df.withColumn(\"max\", F.lit(max_num))\ndf.count()\nprint(f\"Collect time method {method}: \", time.time() - start_time)\n\n# B) Crossjoin\nmethod = 'B'\nstart_time = time.time()\nmax_num = df2.select(F.max(F.col(\"number\")).alias(\"max\"))\ndf2 = df2.crossJoin(max_num)\ndf2.count()\nprint(f\"Collect time method {method}: \", time.time() - start_time)\n\n# C) Window\nmethod = 'C'\nstart_time = time.time()\ndf3 = df3.withColumn(\"max\", F.max(\"number\").over(Window.partitionBy()))\ndf3.count()\nprint(f\"Collect time method {method}: \", time.time() - start_time)\n\nAnd got this results:\nCollect time method A: 1.8250329494476318\nCollect time method B: 
1.373009204864502\nCollect time method C: 0.4454350471496582\n\n" ]
[ 1, 0 ]
[]
[]
[ "apache_spark", "collect", "dataframe", "pyspark", "python" ]
stackoverflow_0074469309_apache_spark_collect_dataframe_pyspark_python.txt
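A caveat worth attaching to the timing code in the question and answers above: Spark evaluates lazily, so wrapping time.time() around a transformation alone mostly measures plan construction, and the crossJoin against a one-row aggregate is typically rewritten by the optimizer into a cheap broadcast step. Below is a minimal sketch of a fairer comparison; the timed() helper is hypothetical, and absolute numbers depend on the cluster and data.

import time
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.range(1_000_000).withColumnRenamed("id", "number")

# Inspect the physical plan first; it often explains timings better than a stopwatch.
df.crossJoin(df.select(F.max("number").alias("max"))).explain()

def timed(label, build):
    start = time.time()
    build().count()  # count() is the action that actually forces evaluation
    print(label, time.time() - start)

# A) Collect the max to the driver, then attach it as a literal.
timed("A collect+lit", lambda: df.withColumn("max", F.lit(df.agg({"number": "max"}).collect()[0][0])))
# B) Cross-join the one-row aggregate back onto the DataFrame.
timed("B crossJoin", lambda: df.crossJoin(df.select(F.max("number").alias("max"))))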
Q: Solve almostIncreasingSequence (Codefights) Given a sequence of integers as an array, determine whether it is possible to obtain a strictly increasing sequence by removing no more than one element from the array. Example For sequence [1, 3, 2, 1], the output should be: almostIncreasingSequence(sequence) = false; There is no one element in this array that can be removed in order to get a strictly increasing sequence. For sequence [1, 3, 2], the output should be: almostIncreasingSequence(sequence) = true. You can remove 3 from the array to get the strictly increasing sequence [1, 2]. Alternately, you can remove 2 to get the strictly increasing sequence [1, 3]. My code: def almostIncreasingSequence(sequence): c= 0 for i in range(len(sequence)-1): if sequence[i]>=sequence[i+1]: c +=1 return c<1 But it can't pass all tests. input: [1, 3, 2] Output:false Expected Output:true Input: [10, 1, 2, 3, 4, 5] Output: false Expected Output: true Input: [0, -2, 5, 6] Output: false Expected Output: true input: [1, 1] Output: false Expected Output: true Input: [1, 2, 3, 4, 3, 6] Output: false Expected Output: true Input: [1, 2, 3, 4, 99, 5, 6] Output: false Expected Output: true A: Your algorithm is much too simplistic. You have a right idea, checking consecutive pairs of elements that the earlier element is less than the later element, but more is required. Make a routine first_bad_pair(sequence) that checks the list that all pairs of elements are in order. If so, return the value -1. Otherwise, return the index of the earlier element: this will be a value from 0 to n-2. Then one algorithm that would work is to check the original list. If it works, fine, but if not try deleting the earlier or later offending elements. If either of those work, fine, otherwise not fine. I can think of other algorithms but this one seems the most straightforward. If you do not like the up-to-two temporary lists that are made by combining two slices of the original list, the equivalent could be done with comparisons in the original list using more if statements. Here is Python code that passes all the tests you show. def first_bad_pair(sequence): """Return the first index of a pair of elements where the earlier element is not less than the later elements. If no such pair exists, return -1.""" for i in range(len(sequence)-1): if sequence[i] >= sequence[i+1]: return i return -1 def almostIncreasingSequence(sequence): """Return whether it is possible to obtain a strictly increasing sequence by removing no more than one element from the array.""" j = first_bad_pair(sequence) if j == -1: return True # List is increasing if first_bad_pair(sequence[j-1:j] + sequence[j+1:]) == -1: return True # Deleting earlier element makes increasing if first_bad_pair(sequence[j:j+1] + sequence[j+2:]) == -1: return True # Deleting later element makes increasing return False # Deleting either does not make increasing If you do want to avoid those temporary lists, here is other code that has a more complicated pair-checking routine. def first_bad_pair(sequence, k): """Return the first index of a pair of elements in sequence[] for indices k-1, k+1, k+2, k+3, ... where the earlier element is not less than the later element. 
If no such pair exists, return -1.""" if 0 < k < len(sequence) - 1: if sequence[k-1] >= sequence[k+1]: return k-1 for i in range(k+1, len(sequence)-1): if sequence[i] >= sequence[i+1]: return i return -1 def almostIncreasingSequence(sequence): """Return whether it is possible to obtain a strictly increasing sequence by removing no more than one element from the array.""" j = first_bad_pair(sequence, -1) if j == -1: return True # List is increasing if first_bad_pair(sequence, j) == -1: return True # Deleting earlier element makes increasing if first_bad_pair(sequence, j+1) == -1: return True # Deleting later element makes increasing return False # Deleting either does not make increasing And here are the tests I used. print('\nThese should be True.') print(almostIncreasingSequence([])) print(almostIncreasingSequence([1])) print(almostIncreasingSequence([1, 2])) print(almostIncreasingSequence([1, 2, 3])) print(almostIncreasingSequence([1, 3, 2])) print(almostIncreasingSequence([10, 1, 2, 3, 4, 5])) print(almostIncreasingSequence([0, -2, 5, 6])) print(almostIncreasingSequence([1, 1])) print(almostIncreasingSequence([1, 2, 3, 4, 3, 6])) print(almostIncreasingSequence([1, 2, 3, 4, 99, 5, 6])) print(almostIncreasingSequence([1, 2, 2, 3])) print('\nThese should be False.') print(almostIncreasingSequence([1, 3, 2, 1])) print(almostIncreasingSequence([3, 2, 1])) print(almostIncreasingSequence([1, 1, 1])) A: The solution is close to the intuitive one, where you check if the current item in the sequence is greater than the current maximum value (which by definition is the previous item in a strictly increasing sequence). The wrinkle is that in some scenarios you should remove the current item that violates the above, whilst in other scenarios you should remove previous larger item. For example consider the following: [1, 2, 5, 4, 6] You check the sequence at item with value 4 and find it breaks the increasing sequence rule. In this example, it is obvious you should remove the previous item 5, and it is important to consider why. The reason why is that the value 4 is greater than the "previous" maximum (the maximum value before 5, which in this example is 2), hence the 5 is the outlier and should be removed. Next consider the following: [1, 4, 5, 2, 6] You check the sequence at item with value 2 and find it breaks the increasing sequence rule. In this example, 2 is not greater than the "previous" maximum of 4 hence 2 is the outlier and should be removed. Now you might argue that the net effect of each scenario described above is the same - one item is removed from the sequence, which we can track with a counter. The important distinction however is how you update the maximum and previous_maximum values: For [1, 2, 5, 4, 6], because 5 is the outlier, 4 should become the new maximum. For [1, 4, 5, 2, 6], because 2 is the outlier, 5 should remain as the maximum. This distinction is critical in evaluating further items in the sequence, ensuring we correctly ignore the previous outlier. 
Here is the solution based upon the above description (O(n) complexity and O(1) space): def almostIncreasingSequence(sequence): removed = 0 previous_maximum = maximum = float('-infinity') for s in sequence: if s > maximum: # All good previous_maximum = maximum maximum = s elif s > previous_maximum: # Violation - remove current maximum outlier removed += 1 maximum = s else: # Violation - remove current item outlier removed += 1 if removed > 1: return False return True We initially set the maximum and previous_maximum to -infinity and define a counter removed with value 0. The first test case is the "passing" case and simply updates the maximum and previous_maximum values. The second test case is triggered when s <= maximum and checks if s > previous_maximum - if this is true, then the previous maximum value is the outlier and is removed, with s being updated to the new maximum and the removed counter incremented. The third test case is triggered when s <= maximum and s <= previous_maximum - in this case, s is the outlier, so s is removed (no changes to maximum and previous_maximum) and the removed counter incremented. One edge case to consider is the following: [10, 1, 2, 3, 4] For this case, the first item is the outlier, but we only know this once we examine the second item (1). At this point, maximum is 10 whilst previous_maximum is -infinity, so 10 (or any sequence where the first item is larger than the second item) will be correctly identified as the outlier. A: This is mine. Hope you find this helpful: def almostIncreasingSequence(sequence): #Take out the edge cases if len(sequence) <= 2: return True #Set up a new function to see if it's increasing sequence def IncreasingSequence(test_sequence): if len(test_sequence) == 2: if test_sequence[0] < test_sequence[1]: return True else: for i in range(0, len(test_sequence)-1): if test_sequence[i] >= test_sequence[i+1]: return False else: pass return True for i in range (0, len(sequence) - 1): if sequence[i] >= sequence [i+1]: #Either remove the current one or the next one test_seq1 = sequence[:i] + sequence[i+1:] test_seq2 = sequence[:i+1] + sequence[i+2:] if IncreasingSequence(test_seq1) == True: return True elif IncreasingSequence(test_seq2) == True: return True else: return False A: Here's my simple solution def almostIncreasingSequence(sequence): removed_one = False prev_maxval = None maxval = None for s in sequence: if not maxval or s > maxval: prev_maxval = maxval maxval = s elif not prev_maxval or s > prev_maxval: if removed_one: return False removed_one = True maxval = s else: if removed_one: return False removed_one = True return True A: The reason why your modest algorithm fails here (apart from the missing '=' in return) is, it's just counting the elements which are greater than the next one and returning a result if that count is more than 1. What's important in this is to look at the list after removing one element at a time from it, and confirm that it is still a sorted list. My attempt at this is really short and works for all scenario. It fails the time constraint on the last hidden test set alone in the exercise. As the problem name suggests, I directly wanted to compare the list to its sorted version, and handle the 'almost' case later - thus having the almostIncreasingSequence. i.e.: if sequence==sorted(sequence): . . But as the problem says: determine whether it is possible to obtain a strictly increasing sequence by removing no more than one element from the array (at a time). 
I started visualizing the list by removing an element at a time during iteration, and checking if the rest of the list is a sorted version of itself. Thus bringing me to this: for i in range(len(sequence)): temp=sequence.copy() del temp[i] if temp==sorted(temp): . . It was here that I could see that if this condition is true for the full list, then we have what is required - an almostIncreasingSequence! So I completed my code this way: def almostIncreasingSequence(sequence): t=0 for i in range(len(sequence)): temp=sequence.copy() del temp[i] if temp==sorted(temp): t+=1 return(True if t>0 else False) This solution still fails on lists such as [1, 1, 1, 2, 3]. As @rory-daulton noted in his comments, we need to differentiate between a 'sorted' list and an 'increasingSequence' in this problem. While the test [1, 1, 1, 2, 3] is sorted, it is not an increasing sequence as demanded by the problem. To handle this, the following is the final code with a one-line condition added to check for consecutive equal numbers: def almostIncreasingSequence(sequence): t=0 for i in range(len(sequence)): temp=sequence.copy() del temp[i] if temp==sorted(temp) and not(any(i==j for i,j in zip(sorted(temp), sorted(temp)[1:]))): t+=1 return t>0 As this still fails the execution time limit on the last of the tests (the list must be really big), I am still looking for a way to optimize this solution of mine. A: def almostIncreasingSequence(sequence): if len(sequence) == 1: return True decreasing = 0 for i in range(1,len(sequence)): if sequence[i] <= sequence[i-1]: decreasing +=1 if decreasing > 1: return False if sequence[i] <= sequence[i-2] and i-2 >=0: if i != len(sequence)-1 and sequence[i+1] <= sequence[i-1]: return False return True A: I'm still working on mine. I wrote it like this but I can't pass the last 3 hidden tests. def almostIncreasingSequence(sequence): boolMe = 0 checkRep = 0 for x in range(0, len(sequence)-1): if sequence[x]>sequence[x+1]: boolMe = boolMe + 1 if (x!=0) & (x!=(len(sequence)-2)): if sequence[x-1]>sequence[x+2]: boolMe = boolMe + 1 if sequence.count(sequence[x])>1: checkRep = checkRep + 1 if (boolMe > 1) | (checkRep > 2): return False return True A: There are two possibilities whenever you hit the condition sequence[i-1]>=sequence[i]: Delete index i-1 Delete index i So my idea was to create copies, delete the indexes, and check if they are sorted; then at the end you can OR the two results and return whether the answer is attainable. Complexity will be O(N^2) [because of del] and space O(N) def almostIncreasingSequence(sequence): c0,c1=1,1 n=len(sequence) l1=[] l2=[] for i in sequence: l1.append(i) l2.append(i) for i in range(1,n): if sequence[i-1]>=sequence[i]: del l1[i] del l2[i-1] break for i in range(1,n-1): if l1[i-1]>=l1[i]: c0=0 break for i in range(1,n-1): if l2[i-1]>=l2[i]: c1=0 break return bool(c0 or c1) This is an accepted solution. A: Here is a solution in Java. boolean almostIncreasingSequence(int[] sequence) { int count = 0; for(int i=1; i< sequence.length; i++){ if(sequence[i] <= sequence[i-1]){ count++; if( i > 1 && i < sequence.length -1 && sequence[i] <= sequence[i-2] && sequence[i+1] <= sequence[i-1] ) { count++; } } } return count <= 1; } A: This was a pretty cool exercise. 
I did it like this: def almostIncreasingSequence(list): removedIdx = [] #Indexes that need to be removed for idx, item in enumerate(list): tmp = [] #Indexes between current index and 0 that break the increasing order for i in range(idx-1, -1, -1): if list[idx]<=list[i]: #Add index to tmp if number breaks order tmp.append(i) if len(tmp)>1: #If more than one of the former numbers breaks order removedIdx.append(idx) #Add current index to removedIdx else: if len(tmp)>0: #If only one of the former numbers breaks order removedIdx.append(tmp[0]) #Add it to removedIdx return len(set(removedIdx))<=1 print('\nThese should be True.') print(almostIncreasingSequence([])) print(almostIncreasingSequence([1])) print(almostIncreasingSequence([1, 2])) print(almostIncreasingSequence([1, 2, 3])) print(almostIncreasingSequence([1, 3, 2])) print(almostIncreasingSequence([10, 1, 2, 3, 4, 5])) print(almostIncreasingSequence([0, -2, 5, 6])) print(almostIncreasingSequence([1, 1])) print(almostIncreasingSequence([1, 2, 3, 4, 3, 6])) print(almostIncreasingSequence([1, 2, 3, 4, 99, 5, 6])) print(almostIncreasingSequence([1, 2, 2, 3])) print('\nThese should be False.') print(almostIncreasingSequence([1, 3, 2, 1])) print(almostIncreasingSequence([3, 2, 1])) print(almostIncreasingSequence([1, 1, 1])) print(almostIncreasingSequence([1, 1, 1, 2, 3])) A: With Python3, I started with something like this... def almostIncreasingSequence(sequence): for i, x in enumerate(sequence): ret = False s = sequence[:i]+sequence[i+1:] for j, y in enumerate(s[1:]): if s[j+1] <= s[j]: ret = True break if ret: break if not ret: return True return False But kept timing out on Check #29. I kicked myself when I realized that this works, too, but still times out on #29. I have no idea how to speed it up. def almostIncreasingSequence(sequence): for i, x in enumerate(sequence): s = sequence[:i] s.extend(sequence[i+1:]) if s == sorted(set(s)): return True return False A: Well, here's also my solution, I think it's a little bit cleaner than other solutions proposed here so I'll just bring it below. What it does is it basically checks for an index in which i-th value is larger than (i+1)-th value, if it finds such an index, checks whether removing any of those two makes the list into an increasing sequence. def almostIncreasingSequence(sequence): def is_increasing(lst): for idx in range(len(lst)-1): if lst[idx] >= lst[idx + 1]: return False return True for idx in range(len(sequence) - 1): if sequence[idx] >= sequence[idx + 1]: fixable = is_increasing([*sequence[:idx], *sequence[idx+1:]]) or is_increasing([*sequence[:idx+1], *sequence[idx+2:]]) if not fixable: return False return True A: C++ Answer with looping array only once bool almostIncreasingSequence(std::vector<int> a) { int n=a.size(), p=-1, c=0; for (int i=1;i<n;i++) if (a[i-1]>=a[i]) p=i, c++; if (c>1) return 0; if (c==0) return 1; if (p==n-1 || p==1) return 1; if (a[p-1] < a[p+1]) return 1; if (a[p-2] < a[p]) return 1; return 0; } A: I worked on this problem using JavaScript, the idea is to find the breakpoints where the sequence is not strictly increasing, from [0, ..., x - 1] and [x, ..., n.length - 1]. If there are more than 1 breakpoint, then return false. Once I am able to locate the break point, just check the combination both [0, ..., x - 1, x + 1, ..., n.length - 1] and [0, ..., x - 2, x, ..., n.length - 1], to check whether breakpoint existed in those 2 arrays. It can be simplified to check 2 pairs of points also. 
Here is my solution: function almostIncreasingSequence(sequence) { const breakpoints = findBreak(sequence); if (breakpoints.length === 0) return true; if (breakpoints.length > 1) return false; const removedSeq = sequence .slice(0, breakpoints[0]) .concat(sequence.slice(breakpoints[0] + 1, sequence.length)); const removedSeq2 = sequence .slice(0, breakpoints[0] - 1) .concat(sequence.slice(breakpoints[0], sequence.length)); return ( findBreak(removedSeq).length === 0 || findBreak(removedSeq2).length === 0 ); } const findBreak = (sequence) => { const breakpoints = []; for (let i = 1; i < sequence.length; i++) { if (sequence[i] <= sequence[i - 1]) { breakpoints.push(i); } } return breakpoints; }; A: Here is the solution for javascript, take those principles and convert them to the desired lang. It makes two array copies (sequence1, sequence2) and checks several things for both: sequence1 - deletes one or more next array items where the next item is smaller than the former item. Then it counts number of deletions (iterator1). sequence2 - deletes one or more former array items where the former item is bigger than the next item. Then it counts number of deletions (iterator2). checks separately for both arrays sequence1 and sequence2 if they have duplicated item which will set separate result flags to false (bigger1, bigger2) check to see whether any of the two arrays (sequence1, sequence2) pass the criteria to be valid. If yes return true for that array and end program. Criteria for checking is: number of deletions of items from the array in question cannot be larger than 1, and there should be no duplicated items (bigger1, bigger2). If there are duplicates it means that two number are the same thus one is not bigger than another one. However, for some reason testing page almostincreasingSequence at Codesignal (test 12) returns the array [1, 1] to be true which is not correct because the criteria they have been describing says that the next number has to be strictly bigger than the former one, not equal to it. I have made the code according to their written criteria (next number has to be bigger, not equal to). Code: function solution(sequence) { let iterator1 = 0; let iterator2 = 0; let bigger1 = true; let bigger2 = true; let sequence1 = sequence.slice(); // copy of the orig array let sequence2 = sequence.slice(); for (let i = 0; i < sequence1.length; i++) { if (typeof sequence1[i+1] !== 'undefined' && sequence1[i+1] < sequence1[i]) { sequence1.splice(i+1, 1); // delete number iterator1++; // count number of deletions i = -1; continue; // restart for loop from i = 0 } } for (let i = 0; i < sequence2.length; i++) { if (typeof sequence2[i+1] !== 'undefined' && sequence2[i] > sequence2[i+1]) { sequence2.splice(i, 1); iterator2++; i = -1; continue; } } for (let i = 0; i < sequence1.length; i++) { for (let k = i + 1; k < sequence1.length; k++) { if (sequence1[i] == sequence1[k]) { // if two numbers are equal.. 
bigger1 = false; // one is not bigger than another - false } } } for (let i = 0; i < sequence2.length; i++) { for (let k = i + 1; k < sequence2.length; k++) { if (sequence2[i] == sequence2[k]) { bigger2 = false; } } } if (iterator1 < 2 && bigger1 == true) { return true; } else if (iterator2 < 2 && bigger2 == true) { return true; } else { return false; } } /* Sample tests */ let arr = [1, 2, 1, 2]; // false //let arr = [3, 6, 5, 8, 10, 20, 15]; // false //let arr = [1, 3, 2, 1]; // false //let arr = [1, 3, 2]; // true //let arr = [10, 1, 2, 3, 4, 5]; // true //let arr = [1, 2, 1, 3, 2]; // false //let arr = [1, 1, 2, 3, 4, 4]; // false //let arr = [1, 4, 10, 4, 2]; // false //let arr = [1, 1, 1, 2, 3]; // false //let arr = [0, -2, 5, 6]; // true //let arr = [1, 2, 3, 4, 5, 3, 5, 6]; // false //let arr = [40, 50, 60, 10, 20, 30]; // false //let arr = [1, 1]; // false //let arr = [1, 2, 5, 3, 5]; // true //let arr = [1, 2, 5, 5, 5]; // false //let arr = [10, 1, 2, 3, 4, 5, 6, 1]; // false //let arr = [1, 2, 3, 4, 3, 6]; // true (Test 16) //let arr = [1, 2, 3, 4, 99, 5, 6]; // true //let arr = [123, -17, -5, 1, 2, 3, 12, 43, 45]; // true //let arr = [3, 5, 67, 98, 3]; // true solution(arr); But if you want code to pass criteria despite of their instructions, thus to allow numbers to be equal like [1, 1] = true and not strictly next number bigger than former one, then solution would be this: function solution(sequence) { let iterator1 = 0; let iterator2 = 0; let sequence1 = sequence.slice(); let sequence2 = sequence.slice(); for (let i = 0; i < sequence1.length; i++) { if (typeof sequence1[i+1] !== 'undefined' && sequence1[i+1] <= sequence1[i]) { sequence1.splice(i+1, 1); iterator1++; i = -1; continue; } } for (let i = 0; i < sequence2.length; i++) { if (typeof sequence2[i+1] !== 'undefined' && sequence2[i] >= sequence2[i+1]) { sequence2.splice(i, 1); iterator2++; i = -1; continue; } } if (iterator1 < 2) { return true; } else if (iterator2 < 2) { return true; } else { return false; } } A: boolean almostIncreasingSequence(int[] sequence) { int length = sequence.length; if(length ==1) return true; if(length ==2 && sequence[1] > sequence[0]) return true; int count = 0; int index = 0; boolean iter = true; while(iter){ index = checkSequence(sequence,index); if(index != -1){ count++; index++; if(index >= length-1){ iter=false; }else if(index-1 !=0){ if(sequence[index-1] <= sequence[index]){ iter=false; count++; }else if(((sequence[index] <= sequence[index-2])) && ((sequence[index+1] <= sequence[index-1]))){ iter=false; count++; } } }else{ iter = false; } } if(count > 1) return false; return true; } int checkSequence(int[] sequence, int index){ for(; index < sequence.length-1; index++){ if(sequence[index+1] <= sequence[index]){ return index; } } return -1; } A: Below is the Python3 code that I used and it worked fine: def almostIncreasingSequence(sequence): flag = False if(len(sequence) < 3): return True if(sequence == sorted(sequence)): if(len(sequence)==len(set(sequence))): return True bigFlag = True for i in range(len(sequence)): if(bigFlag and i < len(sequence)-1 and sequence[i] < sequence[i+1]): bigFlag = True continue tempSeq = sequence[:i] + sequence[i+1:] if(tempSeq == sorted(tempSeq)): if(len(tempSeq)==len(set(tempSeq))): flag = True break bigFlag = False return flag A: This works on most cases except has problems with performance. 
def almostIncreasingSequence(sequence): if len(sequence)==2: return sequence==sorted(list(sequence)) else: for i in range(0,len(sequence)): newsequence=sequence[:i]+sequence[i+1:] if (newsequence==sorted(list(newsequence))) and len(newsequence)==len(set(newsequence)): return True break else: result=False return result A: This is my Solution, def almostIncreasingSequence(sequence): def hasIncreasingOrder(slicedSquence, lengthOfArray): count =0 output = True while(count < (lengthOfArray-1)) : if slicedSquence[count] >= slicedSquence[count+1] : output = False break count = count +1 return output count = 0 seqOutput = False lengthOfArray = len(sequence) while count < lengthOfArray: newArray = sequence[:count] + sequence[count+1:] if hasIncreasingOrder(newArray, lengthOfArray-1): seqOutput = True break count = count+1 return seqOutput A: This one works well. bool almostIncreasingSequence(std::vector<int> sequence) { /* if(is_sorted(sequence.begin(), sequence.end())){ return true; } */ int max = INT_MIN; int secondMax = INT_MIN; int count = 0; int i = 0; while(i < sequence.size()){ if(sequence[i] > max){ secondMax = max; max = sequence[i]; }else if(sequence[i] > secondMax){ max = sequence[i]; count++; cout<<"count after increase = "<<count<<endl; }else {count++; cout<<"ELSE count++ = "<<count<<endl;} i++; } return count <= 1; } A: def almostIncreasingSequence(sequence): if len(sequence) == 1: return False if len(sequence) == 2: return True c = 0 c1 = 0 for i in range(1,len(sequence)): if sequence[i-1] >= sequence[i]: c += 1 if i != 0 and i+1 < len(sequence): if sequence[i-1] >= sequence[i+1]: c1 += 1 if c > 1 or c1 > 1: return False return c1 == 1 or c == 1 A: this is mine and it runs fine. I just remove suggested elements and see if new list is strictly increasing To check if a list is strictly increasing. I check if there are any duplicates first. I then check if the sorted list is the same as the original list import numpy as np def IncreasingSequence(sequence): temp=sequence.copy() temp.sort() if (len(sequence) != len(set(sequence))): return False if (sequence==temp): return True return False def almostIncreasingSequence(sequence): for i in range(len(sequence)-1): if sequence[i] >= sequence[i+1]: sequence_temp=sequence.copy() sequence_temp.pop(i) # print(sequence_temp) # print(IncreasingSequence(sequence_temp)) if (IncreasingSequence(sequence_temp)): return True # Might be the neighbor that is worth removing sequence_temp=sequence.copy() sequence_temp.pop(i+1) if (IncreasingSequence(sequence_temp)): return True return False A: I spent a whole day trying to make it as short as possible, but no luck. But here is my accepted answer in CodeSignal. def almostIncreasingSequence(sequence): if len(sequence)<=2: return True def isstepdown(subsequence): return [a>=b for a,b in zip(subsequence, subsequence[1:])] stepdowns = isstepdown(sequence) n_stepdown = sum(stepdowns) if n_stepdown>1: return False else: sequence2 = sequence.copy() sequence.pop(stepdowns.index(True)) stepdowns_temp = isstepdown(sequence) n_stepdown = sum(stepdowns_temp) sequence2.pop(stepdowns.index(True)+1) stepdowns_temp = isstepdown(sequence2) n_stepdown += sum(stepdowns_temp) if n_stepdown<=1: return True else: return False A: Here is another solution. Passes all the tests. Just one method. One pass through the list. 
def almostIncreasingSequence(sequence): count = 0 if len(sequence) < 3: return True for i in range(1, len(sequence) - 2): #to test only inner elements if (sequence[i] >= sequence[i+1]): count += 1 if count == 2: # the second time this occurs return False #check if skipping one of these items solves the problem if sequence[i-1] >= sequence[i+1] and sequence[i] >= sequence[i+2]: return False i += 1 #handle the first element if sequence[0] >= sequence[1]: count += 1 if count == 2: return False #handle the last element if sequence[-2] >= sequence[-1] and count == 1: return False return True A: Works on CodeSignal test cases def almostIncreasingSequence(sequence): s = sequence # for ease prevMax = s[0] # stores previous max value to which current element has to be compared found = False maxI = 0 #index of prevMax for i in range(1, len(s)): if s[i] <= prevMax: if found: return False else: found = True if maxI > 0 : #checks if current item is smaller thant the prevMax and the value before that if s[i] <= s[maxI] and s[i] > s[maxI - 1]: prevMax = s[i] maxI = i else: # checks if the current and next element are smaller than the prevMax value if (i+1) < len(s) and s[i+1] <= s[maxI]: prevMax = s[i] maxI = i else: prevMax = s[i] maxI = i return True A: This is my solution. def almostIncreasingSequence(sequence): duplicated = 0 for i in range(1, len(sequence) - 1): if sequence[i-1] == sequence[i] == sequence[i+1]: return False elif sequence[i-1] == sequence[i]: duplicated += 1 elif sequence[i] == sequence[i+1]: duplicated += 1 elif sequence[i-1] <= sequence[i] <= sequence[i+1]: continue else: return False return 0 <= duplicated <= 1
Solve almostIncreasingSequence (Codefights)
Given a sequence of integers as an array, determine whether it is possible to obtain a strictly increasing sequence by removing no more than one element from the array. Example For sequence [1, 3, 2, 1], the output should be: almostIncreasingSequence(sequence) = false; There is no one element in this array that can be removed in order to get a strictly increasing sequence. For sequence [1, 3, 2], the output should be: almostIncreasingSequence(sequence) = true. You can remove 3 from the array to get the strictly increasing sequence [1, 2]. Alternately, you can remove 2 to get the strictly increasing sequence [1, 3]. My code: def almostIncreasingSequence(sequence): c= 0 for i in range(len(sequence)-1): if sequence[i]>=sequence[i+1]: c +=1 return c<1 But it can't pass all tests. input: [1, 3, 2] Output:false Expected Output:true Input: [10, 1, 2, 3, 4, 5] Output: false Expected Output: true Input: [0, -2, 5, 6] Output: false Expected Output: true input: [1, 1] Output: false Expected Output: true Input: [1, 2, 3, 4, 3, 6] Output: false Expected Output: true Input: [1, 2, 3, 4, 99, 5, 6] Output: false Expected Output: true
[ "Your algorithm is much too simplistic. You have a right idea, checking consecutive pairs of elements that the earlier element is less than the later element, but more is required.\nMake a routine first_bad_pair(sequence) that checks the list that all pairs of elements are in order. If so, return the value -1. Otherwise, return the index of the earlier element: this will be a value from 0 to n-2. Then one algorithm that would work is to check the original list. If it works, fine, but if not try deleting the earlier or later offending elements. If either of those work, fine, otherwise not fine.\nI can think of other algorithms but this one seems the most straightforward. If you do not like the up-to-two temporary lists that are made by combining two slices of the original list, the equivalent could be done with comparisons in the original list using more if statements.\nHere is Python code that passes all the tests you show.\ndef first_bad_pair(sequence):\n \"\"\"Return the first index of a pair of elements where the earlier\n element is not less than the later elements. If no such pair\n exists, return -1.\"\"\"\n for i in range(len(sequence)-1):\n if sequence[i] >= sequence[i+1]:\n return i\n return -1\n\ndef almostIncreasingSequence(sequence):\n \"\"\"Return whether it is possible to obtain a strictly increasing\n sequence by removing no more than one element from the array.\"\"\"\n j = first_bad_pair(sequence)\n if j == -1:\n return True # List is increasing\n if first_bad_pair(sequence[j-1:j] + sequence[j+1:]) == -1:\n return True # Deleting earlier element makes increasing\n if first_bad_pair(sequence[j:j+1] + sequence[j+2:]) == -1:\n return True # Deleting later element makes increasing\n return False # Deleting either does not make increasing\n\nIf you do want to avoid those temporary lists, here is other code that has a more complicated pair-checking routine.\ndef first_bad_pair(sequence, k):\n \"\"\"Return the first index of a pair of elements in sequence[]\n for indices k-1, k+1, k+2, k+3, ... where the earlier element is\n not less than the later element. 
If no such pair exists, return -1.\"\"\"\n if 0 < k < len(sequence) - 1:\n if sequence[k-1] >= sequence[k+1]:\n return k-1\n for i in range(k+1, len(sequence)-1):\n if sequence[i] >= sequence[i+1]:\n return i\n return -1\n\ndef almostIncreasingSequence(sequence):\n \"\"\"Return whether it is possible to obtain a strictly increasing\n sequence by removing no more than one element from the array.\"\"\"\n j = first_bad_pair(sequence, -1)\n if j == -1:\n return True # List is increasing\n if first_bad_pair(sequence, j) == -1:\n return True # Deleting earlier element makes increasing\n if first_bad_pair(sequence, j+1) == -1:\n return True # Deleting later element makes increasing\n return False # Deleting either does not make increasing\n\nAnd here are the tests I used.\nprint('\\nThese should be True.')\nprint(almostIncreasingSequence([]))\nprint(almostIncreasingSequence([1]))\nprint(almostIncreasingSequence([1, 2]))\nprint(almostIncreasingSequence([1, 2, 3]))\nprint(almostIncreasingSequence([1, 3, 2]))\nprint(almostIncreasingSequence([10, 1, 2, 3, 4, 5]))\nprint(almostIncreasingSequence([0, -2, 5, 6]))\nprint(almostIncreasingSequence([1, 1]))\nprint(almostIncreasingSequence([1, 2, 3, 4, 3, 6]))\nprint(almostIncreasingSequence([1, 2, 3, 4, 99, 5, 6]))\nprint(almostIncreasingSequence([1, 2, 2, 3]))\n\nprint('\\nThese should be False.')\nprint(almostIncreasingSequence([1, 3, 2, 1]))\nprint(almostIncreasingSequence([3, 2, 1]))\nprint(almostIncreasingSequence([1, 1, 1]))\n\n", "The solution is close to the intuitive one, where you check if the current item in the sequence is greater than the current maximum value (which by definition is the previous item in a strictly increasing sequence).\nThe wrinkle is that in some scenarios you should remove the current item that violates the above, whilst in other scenarios you should remove previous larger item.\nFor example consider the following:\n[1, 2, 5, 4, 6]\nYou check the sequence at item with value 4 and find it breaks the increasing sequence rule. In this example, it is obvious you should remove the previous item 5, and it is important to consider why. The reason why is that the value 4 is greater than the \"previous\" maximum (the maximum value before 5, which in this example is 2), hence the 5 is the outlier and should be removed.\nNext consider the following:\n[1, 4, 5, 2, 6]\nYou check the sequence at item with value 2 and find it breaks the increasing sequence rule. 
In this example, 2 is not greater than the \"previous\" maximum of 4 hence 2 is the outlier and should be removed.\nNow you might argue that the net effect of each scenario described above is the same - one item is removed from the sequence, which we can track with a counter.\nThe important distinction however is how you update the maximum and previous_maximum values:\n\nFor [1, 2, 5, 4, 6], because 5 is the outlier, 4 should become the new maximum.\n\nFor [1, 4, 5, 2, 6], because 2 is the outlier, 5 should remain as the maximum.\n\n\nThis distinction is critical in evaluating further items in the sequence, ensuring we correctly ignore the previous outlier.\nHere is the solution based upon the above description (O(n) complexity and O(1) space):\ndef almostIncreasingSequence(sequence):\n removed = 0\n previous_maximum = maximum = float('-infinity')\n for s in sequence:\n if s > maximum:\n # All good\n previous_maximum = maximum\n maximum = s\n elif s > previous_maximum:\n # Violation - remove current maximum outlier\n removed += 1\n maximum = s\n else:\n # Violation - remove current item outlier\n removed += 1\n if removed > 1:\n return False\n return True\n\nWe initially set the maximum and previous_maximum to -infinity and define a counter removed with value 0.\nThe first test case is the \"passing\" case and simply updates the maximum and previous_maximum values.\nThe second test case is triggered when s <= maximum and checks if s > previous_maximum - if this is true, then the previous maximum value is the outlier and is removed, with s being updated to the new maximum and the removed counter incremented.\nThe third test case is triggered when s <= maximum and s <= previous_maximum - in this case, s is the outlier, so s is removed (no changes to maximum and previous_maximum) and the removed counter incremented.\nOne edge case to consider is the following:\n[10, 1, 2, 3, 4]\nFor this case, the first item is the outlier, but we only know this once we examine the second item (1). At this point, maximum is 10 whilst previous_maximum is -infinity, so 10 (or any sequence where the first item is larger than the second item) will be correctly identified as the outlier.\n", "This is mine. 
Hope you find this helpful:\ndef almostIncreasingSequence(sequence):\n\n #Take out the edge cases\n if len(sequence) <= 2:\n return True\n\n #Set up a new function to see if it's increasing sequence\n def IncreasingSequence(test_sequence):\n if len(test_sequence) == 2:\n if test_sequence[0] < test_sequence[1]:\n return True\n else:\n for i in range(0, len(test_sequence)-1):\n if test_sequence[i] >= test_sequence[i+1]:\n return False\n else:\n pass\n return True\n\n for i in range (0, len(sequence) - 1):\n if sequence[i] >= sequence [i+1]:\n #Either remove the current one or the next one\n test_seq1 = sequence[:i] + sequence[i+1:]\n test_seq2 = sequence[:i+1] + sequence[i+2:]\n if IncreasingSequence(test_seq1) == True:\n return True\n elif IncreasingSequence(test_seq2) == True:\n return True\n else:\n return False\n\n", "Here's my simple solution\ndef almostIncreasingSequence(sequence):\n removed_one = False\n prev_maxval = None\n maxval = None\n for s in sequence:\n if not maxval or s > maxval:\n prev_maxval = maxval\n maxval = s\n elif not prev_maxval or s > prev_maxval:\n if removed_one:\n return False\n removed_one = True\n maxval = s\n else:\n if removed_one:\n return False\n removed_one = True\n return True\n\n", "The reason why your modest algorithm fails here (apart from the missing '=' in return) is, it's just counting the elements which are greater than the next one and returning a result if that count is more than 1.\nWhat's important in this is to look at the list after removing one element at a time from it, and confirm that it is still a sorted list.\nMy attempt at this is really short and works for all scenario. It fails the time constraint on the last hidden test set alone in the exercise.\n\nAs the problem name suggests, I directly wanted to compare the list to its sorted version, and handle the 'almost' case later - thus having the almostIncreasingSequence. i.e.:\nif sequence==sorted(sequence):\n .\n .\n\nBut as the problem says:\n\ndetermine whether it is possible to obtain a strictly increasing sequence by removing no more than one element from the array (at a time).\n\nI started visualizing the list by removing an element at a time during iteration, and check if the rest of the list is a sorted version of itself. Thus bringing me to this:\nfor i in range(len(sequence)):\n temp=sequence.copy()\n del temp[i]\n if temp==sorted(temp):\n .\n .\n\nIt was here when I could see that if this condition is true for the full list, then we have what is required - an almostIncreasingSequence! So I completed my code this way:\ndef almostIncreasingSequence(sequence):\n t=0\n for i in range(len(sequence)):\n temp=sequence.copy()\n del temp[i]\n if temp==sorted(temp):\n t+=1\n return(True if t>0 else False)\n\nThis solution still fails on lists such as [1, 1, 1, 2, 3]. \nAs @rory-daulton noted in his comments, we need to differentiate between a 'sorted' and an 'increasingSequence' in this problem. While the test [1, 1, 1, 2, 3] is sorted, its on an increasingSequence as demanded in the problem. 
To handle this, following is the final code with a one line condition added to check for consecutive same numbers:\ndef almostIncreasingSequence(sequence):\n t=0\n for i in range(len(sequence)):\n temp=sequence.copy()\n del temp[i]\n if temp==sorted(temp) and not(any(i==j for i,j in zip(sorted(temp), sorted(temp)[1:]))):\n t+=1\n return t>0\n\n\nAs this still fails the execution time limit on the last of the test (the list must be really big), I am still looking if there is a way to optimize this solution of mine.\n", "def almostIncreasingSequence(sequence):\n if len(sequence) == 1:\n return True\n \n decreasing = 0\n for i in range(1,len(sequence)):\n if sequence[i] <= sequence[i-1]:\n decreasing +=1\n if decreasing > 1:\n return False\n \n if sequence[i] <= sequence[i-2] and i-2 >=0:\n if i != len(sequence)-1 and sequence[i+1] <= sequence[i-1]:\n return False\n return True\n\n", "I'm still working on mine. Wrote it like this but I can't pass the last 3 hidden tests.\ndef almostIncreasingSequence(sequence):\n\nboolMe = 0\ncheckRep = 0\n\nfor x in range(0, len(sequence)-1):\n\n if sequence[x]>sequence[x+1]:\n boolMe = boolMe + 1\n if (x!=0) & (x!=(len(sequence)-2)):\n if sequence[x-1]>sequence[x+2]:\n boolMe = boolMe + 1\n if sequence.count(sequence[x])>1:\n checkRep = checkRep + 1\n\n if (boolMe > 1) | (checkRep > 2): \n return False\nreturn True\n\n", "There are two possibilities whenever you hit the condition of the \nsequence[i-1]>=sequence[i] \n\nDelete index i-1\nDelete index i\n\nSo my idea was to create copy and delete the indexes and check if they are sorted and then at the end you can do the or and return if the ans is attainable.\nComplexity will be O(N2)[because of del] and space O(N)\ndef almostIncreasingSequence(sequence):\n c0,c1=1,1\n n=len(sequence)\n l1=[]\n l2=[]\n for i in sequence:\n l1.append(i)\n l2.append(i)\n for i in range(1,n):\n if sequence[i-1]>=sequence[i]:\n del l1[i]\n del l2[i-1]\n break\n for i in range(1,n-1):\n if l1[i-1]>=l1[i]:\n c0=0\n break\n for i in range(1,n-1):\n if l2[i-1]>=l2[i]:\n c1=0\n break\n return bool(c0 or c1)\n\nThis is accepted solution.\n", "Here is a solution in Java.\nboolean almostIncreasingSequence(int[] sequence) {\n\n int count = 0;\n\n for(int i=1; i< sequence.length; i++){\n if(sequence[i] <= sequence[i-1]){\n count++;\n \n if( i > 1 && i < sequence.length -1 \n && sequence[i] <= sequence[i-2] \n && sequence[i+1] <= sequence[i-1] )\n {\n count++;\n }\n }\n }\n\n return count <= 1;\n}\n\n", "This was a pretty cool exercise.\nI did it like this:\ndef almostIncreasingSequence(list):\n removedIdx = [] #Indexes that need to be removed\n\n for idx, item in enumerate(list):\n tmp = [] #Indexes between current index and 0 that break the increasing order\n for i in range(idx-1, -1, -1):\n if list[idx]<=list[i]: #Add index to tmp if number breaks order\n tmp.append(i)\n if len(tmp)>1: #If more than one of the former numbers breaks order \n removedIdx.append(idx) #Add current index to removedIdx\n else:\n if len(tmp)>0: #If only one of the former numbers breaks order\n removedIdx.append(tmp[0]) #Add it to removedIdx\n return len(set(removedIdx))<=1\n\nprint('\\nThese should be True.')\nprint(almostIncreasingSequence([]))\nprint(almostIncreasingSequence([1]))\nprint(almostIncreasingSequence([1, 2]))\nprint(almostIncreasingSequence([1, 2, 3]))\nprint(almostIncreasingSequence([1, 3, 2]))\nprint(almostIncreasingSequence([10, 1, 2, 3, 4, 5]))\nprint(almostIncreasingSequence([0, -2, 5, 6]))\nprint(almostIncreasingSequence([1, 
1]))\nprint(almostIncreasingSequence([1, 2, 3, 4, 3, 6]))\nprint(almostIncreasingSequence([1, 2, 3, 4, 99, 5, 6]))\nprint(almostIncreasingSequence([1, 2, 2, 3]))\n\nprint('\\nThese should be False.')\nprint(almostIncreasingSequence([1, 3, 2, 1]))\nprint(almostIncreasingSequence([3, 2, 1]))\nprint(almostIncreasingSequence([1, 1, 1]))\nprint(almostIncreasingSequence([1, 1, 1, 2, 3]))\n\n", "With Python3, I started with something like this...\ndef almostIncreasingSequence(sequence):\n for i, x in enumerate(sequence):\n ret = False\n s = sequence[:i]+sequence[i+1:]\n for j, y in enumerate(s[1:]):\n if s[j+1] <= s[j]:\n ret = True\n break\n if ret:\n break\n if not ret:\n return True\n return False\n\nBut kept timing out on Check #29.\nI kicked myself when I realized that this works, too, but still times out on #29. I have no idea how to speed it up.\ndef almostIncreasingSequence(sequence):\n for i, x in enumerate(sequence): \n s = sequence[:i]\n s.extend(sequence[i+1:])\n if s == sorted(set(s)):\n return True\n return False\n\n", "Well, here's also my solution,\nI think it's a little bit cleaner than other solutions proposed here so I'll just bring it below.\nWhat it does is it basically checks for an index in which i-th value is larger than (i+1)-th value, if it finds such an index, checks whether removing any of those two makes the list into an increasing sequence.\ndef almostIncreasingSequence(sequence):\n\n def is_increasing(lst):\n for idx in range(len(lst)-1):\n if lst[idx] >= lst[idx + 1]:\n return False\n return True\n\n for idx in range(len(sequence) - 1):\n if sequence[idx] >= sequence[idx + 1]:\n fixable = is_increasing([*sequence[:idx], *sequence[idx+1:]]) or is_increasing([*sequence[:idx+1], *sequence[idx+2:]])\n if not fixable:\n return False\n\n return True\n\n", "C++ Answer with looping array only once\nbool almostIncreasingSequence(std::vector<int> a) \n{\n int n=a.size(), p=-1, c=0;\n \n for (int i=1;i<n;i++)\n if (a[i-1]>=a[i]) \n p=i, c++;\n \n if (c>1) return 0;\n if (c==0) return 1;\n if (p==n-1 || p==1) return 1;\n if (a[p-1] < a[p+1]) return 1;\n if (a[p-2] < a[p]) return 1;\n return 0;\n}\n\n", "I worked on this problem using JavaScript, the idea is to find the breakpoints where the sequence is not strictly increasing, from [0, ..., x - 1] and [x, ..., n.length - 1]. If there are more than 1 breakpoint, then return false.\nOnce I am able to locate the break point, just check the combination both [0, ..., x - 1, x + 1, ..., n.length - 1] and [0, ..., x - 2, x, ..., n.length - 1], to check whether breakpoint existed in those 2 arrays. It can be simplified to check 2 pairs of points also.\nHere is my solution:\nfunction almostIncreasingSequence(sequence) {\n const breakpoints = findBreak(sequence);\n\n if (breakpoints.length === 0) return true;\n if (breakpoints.length > 1) return false;\n\n const removedSeq = sequence\n .slice(0, breakpoints[0])\n .concat(sequence.slice(breakpoints[0] + 1, sequence.length));\n\n const removedSeq2 = sequence\n .slice(0, breakpoints[0] - 1)\n .concat(sequence.slice(breakpoints[0], sequence.length));\n\n return (\n findBreak(removedSeq).length === 0 || findBreak(removedSeq2).length === 0\n );\n}\n\nconst findBreak = (sequence) => {\n const breakpoints = [];\n for (let i = 1; i < sequence.length; i++) {\n if (sequence[i] <= sequence[i - 1]) {\n breakpoints.push(i);\n }\n }\n return breakpoints;\n};\n\n", "Here is the solution for javascript, take those principles and convert them to the desired lang. 
It makes two array copies (sequence1, sequence2) and checks several things for both:\n\nsequence1 - deletes one or more next array items where the next item is smaller than the former item. Then it counts number of deletions (iterator1).\n\nsequence2 - deletes one or more former array items where the former item is bigger than the next item. Then it counts number of deletions (iterator2).\n\nchecks separately for both arrays sequence1 and sequence2 if they have duplicated item which will set separate result flags to false (bigger1, bigger2)\n\ncheck to see whether any of the two arrays (sequence1, sequence2) pass the criteria to be valid. If yes return true for that array and end program. Criteria for checking is: number of deletions of items from the array in question cannot be larger than 1, and there should be no duplicated items (bigger1, bigger2). If there are duplicates it means that two number are the same thus one is not bigger than another one.\n\n\nHowever, for some reason testing page almostincreasingSequence at Codesignal (test 12) returns the array [1, 1] to be true which is not correct because the criteria they have been describing says that the next number has to be strictly bigger than the former one, not equal to it. I have made the code according to their written criteria (next number has to be bigger, not equal to).\nCode:\nfunction solution(sequence) {\n let iterator1 = 0;\n let iterator2 = 0;\n let bigger1 = true;\n let bigger2 = true;\n let sequence1 = sequence.slice(); // copy of the orig array\n let sequence2 = sequence.slice();\n\n for (let i = 0; i < sequence1.length; i++) {\n if (typeof sequence1[i+1] !== 'undefined' && sequence1[i+1] < sequence1[i]) {\n sequence1.splice(i+1, 1); // delete number\n iterator1++; // count number of deletions\n i = -1; continue; // restart for loop from i = 0\n }\n }\n \n for (let i = 0; i < sequence2.length; i++) {\n if (typeof sequence2[i+1] !== 'undefined' && sequence2[i] > sequence2[i+1]) {\n sequence2.splice(i, 1);\n iterator2++;\n i = -1; continue;\n }\n }\n \n for (let i = 0; i < sequence1.length; i++) {\n for (let k = i + 1; k < sequence1.length; k++) {\n if (sequence1[i] == sequence1[k]) { // if two numbers are equal..\n bigger1 = false; // one is not bigger than another - false\n }\n }\n }\n \n for (let i = 0; i < sequence2.length; i++) {\n for (let k = i + 1; k < sequence2.length; k++) {\n if (sequence2[i] == sequence2[k]) {\n bigger2 = false;\n }\n }\n }\n \n if (iterator1 < 2 && bigger1 == true) {\n return true;\n } else if (iterator2 < 2 && bigger2 == true) {\n return true;\n } else {\n return false;\n }\n\n}\n\n/* Sample tests */ \nlet arr = [1, 2, 1, 2]; // false\n//let arr = [3, 6, 5, 8, 10, 20, 15]; // false\n//let arr = [1, 3, 2, 1]; // false\n//let arr = [1, 3, 2]; // true\n//let arr = [10, 1, 2, 3, 4, 5]; // true\n//let arr = [1, 2, 1, 3, 2]; // false\n//let arr = [1, 1, 2, 3, 4, 4]; // false\n//let arr = [1, 4, 10, 4, 2]; // false\n//let arr = [1, 1, 1, 2, 3]; // false\n//let arr = [0, -2, 5, 6]; // true\n//let arr = [1, 2, 3, 4, 5, 3, 5, 6]; // false\n//let arr = [40, 50, 60, 10, 20, 30]; // false\n//let arr = [1, 1]; // false \n//let arr = [1, 2, 5, 3, 5]; // true\n//let arr = [1, 2, 5, 5, 5]; // false\n//let arr = [10, 1, 2, 3, 4, 5, 6, 1]; // false\n//let arr = [1, 2, 3, 4, 3, 6]; // true (Test 16)\n//let arr = [1, 2, 3, 4, 99, 5, 6]; // true\n//let arr = [123, -17, -5, 1, 2, 3, 12, 43, 45]; // true\n//let arr = [3, 5, 67, 98, 3]; // true\n\nsolution(arr);\n\nBut if you want code to pass criteria despite of 
their instructions, thus to allow numbers to be equal like [1, 1] = true and not strictly next number bigger than former one, then solution would be this:\nfunction solution(sequence) {\n let iterator1 = 0;\n let iterator2 = 0;\n let sequence1 = sequence.slice();\n let sequence2 = sequence.slice();\n\n for (let i = 0; i < sequence1.length; i++) {\n if (typeof sequence1[i+1] !== 'undefined' && sequence1[i+1] <= sequence1[i]) {\n sequence1.splice(i+1, 1);\n iterator1++;\n i = -1; continue;\n }\n }\n \n for (let i = 0; i < sequence2.length; i++) {\n if (typeof sequence2[i+1] !== 'undefined' && sequence2[i] >= sequence2[i+1]) {\n sequence2.splice(i, 1);\n iterator2++;\n i = -1; continue;\n }\n }\n \n if (iterator1 < 2) {\n return true;\n } else if (iterator2 < 2) {\n return true;\n } else {\n return false;\n }\n\n}\n\n", "boolean almostIncreasingSequence(int[] sequence) {\n int length = sequence.length;\n if(length ==1) return true;\n if(length ==2 && sequence[1] > sequence[0]) return true;\n int count = 0;\n int index = 0;\n boolean iter = true;\n\n while(iter){\n index = checkSequence(sequence,index);\n if(index != -1){\n count++;\n index++;\n if(index >= length-1){\n iter=false;\n }else if(index-1 !=0){\n if(sequence[index-1] <= sequence[index]){\n iter=false;\n count++;\n }else if(((sequence[index] <= sequence[index-2])) && ((sequence[index+1] <= sequence[index-1]))){\n iter=false;\n count++; \n }\n }\n }else{\n iter = false;\n }\n }\n if(count > 1) return false;\n return true;\n}\n\n int checkSequence(int[] sequence, int index){\n for(; index < sequence.length-1; index++){\n if(sequence[index+1] <= sequence[index]){\n return index; \n }\n }\n return -1;\n}\n\n", "Below is the Python3 code that I used and it worked fine:\ndef almostIncreasingSequence(sequence):\nflag = False\n\nif(len(sequence) < 3):\n return True\n\nif(sequence == sorted(sequence)):\n if(len(sequence)==len(set(sequence))):\n return True\n\nbigFlag = True\nfor i in range(len(sequence)):\n if(bigFlag and i < len(sequence)-1 and sequence[i] < sequence[i+1]):\n bigFlag = True\n continue\n tempSeq = sequence[:i] + sequence[i+1:]\n if(tempSeq == sorted(tempSeq)):\n if(len(tempSeq)==len(set(tempSeq))):\n flag = True\n break\n bigFlag = False\nreturn flag\n\n", "This works on most cases except has problems with performance.\ndef almostIncreasingSequence(sequence):\n if len(sequence)==2:\n return sequence==sorted(list(sequence))\n else:\n for i in range(0,len(sequence)):\n newsequence=sequence[:i]+sequence[i+1:]\n if (newsequence==sorted(list(newsequence))) and len(newsequence)==len(set(newsequence)):\n return True\n break\n else:\n result=False\n return result\n\n", "This is my Solution, \ndef almostIncreasingSequence(sequence):\ndef hasIncreasingOrder(slicedSquence, lengthOfArray):\n count =0\n output = True\n while(count < (lengthOfArray-1)) :\n if slicedSquence[count] >= slicedSquence[count+1] :\n output = False\n break\n count = count +1\n return output\n\ncount = 0\nseqOutput = False\nlengthOfArray = len(sequence)\nwhile count < lengthOfArray:\n newArray = sequence[:count] + sequence[count+1:] \n if hasIncreasingOrder(newArray, lengthOfArray-1):\n seqOutput = True\n break\n count = count+1\nreturn seqOutput\n\n", "This one works well.\nbool almostIncreasingSequence(std::vector<int> sequence) {\n/*\nif(is_sorted(sequence.begin(), sequence.end())){\n return true;\n }\n*/\nint max = INT_MIN;\nint secondMax = INT_MIN;\nint count = 0;\nint i = 0;\n\nwhile(i < sequence.size()){\n if(sequence[i] > max){\n secondMax = max;\n max = 
sequence[i];\n\n}else if(sequence[i] > secondMax){\n max = sequence[i];\n count++;\n cout<<\"count after increase = \"<<count<<endl;\n }else {count++; cout<<\"ELSE count++ = \"<<count<<endl;}\n\n\n i++;\n}\n\nreturn count <= 1;\n\n\n}\n\n", "def almostIncreasingSequence(sequence):\n if len(sequence) == 1:\n return False\n if len(sequence) == 2:\n return True\n c = 0\n c1 = 0\n for i in range(1,len(sequence)):\n if sequence[i-1] >= sequence[i]:\n c += 1\n if i != 0 and i+1 < len(sequence):\n if sequence[i-1] >= sequence[i+1]:\n c1 += 1\n if c > 1 or c1 > 1:\n return False\n return c1 == 1 or c == 1\n\n", "this is mine and it runs fine. I just remove suggested elements and see if new list is strictly increasing\nTo check if a list is strictly increasing. I check if there are any duplicates first. I then check if the sorted list is the same as the original list\nimport numpy as np\n\ndef IncreasingSequence(sequence):\n temp=sequence.copy()\n temp.sort()\n if (len(sequence) != len(set(sequence))):\n return False\n if (sequence==temp):\n return True\n \n return False\ndef almostIncreasingSequence(sequence):\n\n for i in range(len(sequence)-1):\n if sequence[i] >= sequence[i+1]:\n sequence_temp=sequence.copy()\n sequence_temp.pop(i)\n # print(sequence_temp)\n # print(IncreasingSequence(sequence_temp))\n if (IncreasingSequence(sequence_temp)):\n return True\n # Might be the neighbor that is worth removing\n sequence_temp=sequence.copy()\n sequence_temp.pop(i+1)\n if (IncreasingSequence(sequence_temp)):\n return True\n \n return False\n\n", "I spent a whole day trying to make it as short as possible, but no luck. But here is my accepted answer in CodeSignal.\ndef almostIncreasingSequence(sequence):\n if len(sequence)<=2:\n return True\n \n def isstepdown(subsequence):\n return [a>=b for a,b in zip(subsequence, subsequence[1:])]\n \n stepdowns = isstepdown(sequence)\n n_stepdown = sum(stepdowns)\n if n_stepdown>1:\n return False\n else:\n sequence2 = sequence.copy()\n \n sequence.pop(stepdowns.index(True))\n stepdowns_temp = isstepdown(sequence)\n n_stepdown = sum(stepdowns_temp)\n\n sequence2.pop(stepdowns.index(True)+1)\n stepdowns_temp = isstepdown(sequence2)\n n_stepdown += sum(stepdowns_temp) \n if n_stepdown<=1:\n return True\n else:\n return False\n\n", "Here is another solution. Passes all the tests. Just one method. 
One pass through\nthe list.\ndef almostIncreasingSequence(sequence):\ncount = 0\n\nif len(sequence) < 3:\n return True\nfor i in range(1, len(sequence) - 2): #to test only inner elements\n if (sequence[i] >= sequence[i+1]): \n count += 1\n if count == 2: # the second time this occurs\n return False\n #check if skipping one of these items solves the problem\n if sequence[i-1] >= sequence[i+1] and sequence[i] >= sequence[i+2]:\n return False\n i += 1\n#handle the first element\nif sequence[0] >= sequence[1]:\n count += 1\n if count == 2:\n return False\n#handle the last element\nif sequence[-2] >= sequence[-1] and count == 1:\n return False \nreturn True \n\n", "\nWorks on CodeSignal test cases\n\ndef almostIncreasingSequence(sequence):\ns = sequence # for ease\nprevMax = s[0] # stores previous max value to which current element has to be compared\nfound = False\nmaxI = 0 #index of prevMax\nfor i in range(1, len(s)):\n if s[i] <= prevMax:\n if found:\n return False\n else:\n found = True\n if maxI > 0 : #checks if current item is smaller thant the prevMax and the value before that \n if s[i] <= s[maxI] and s[i] > s[maxI - 1]:\n prevMax = s[i]\n maxI = i\n else: # checks if the current and next element are smaller than the prevMax value\n if (i+1) < len(s) and s[i+1] <= s[maxI]:\n prevMax = s[i]\n maxI = i\n else:\n prevMax = s[i]\n maxI = i\n\nreturn True\n\n", "This is my solution.\n def almostIncreasingSequence(sequence):\n duplicated = 0\n for i in range(1, len(sequence) - 1):\n if sequence[i-1] == sequence[i] == sequence[i+1]:\n return False\n elif sequence[i-1] == sequence[i]:\n duplicated += 1\n elif sequence[i] == sequence[i+1]:\n duplicated += 1\n elif sequence[i-1] <= sequence[i] <= sequence[i+1]:\n continue\n else:\n return False\n return 0 <= duplicated <= 1\n\n" ]
[ 53, 16, 6, 5, 5, 3, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Swift\n//Brute Force\n//Running Time: O(n * n)\nfunc isIncreasing(sequence: [Int]) -> Bool {\n if sequence.count == 1 { return true }\n \n var isStrictlyIncreasing = false\n \nfor (indexOfPotentialNumberToRemove) in 0...sequence.count - 1 {\n \n print(\"indexOfPotentialNumberToRemove: \\(indexOfPotentialNumberToRemove)\")\n \n if isStrictlyIncreasing { return true }\n var remainingArray = sequence\n remainingArray.remove(at: indexOfPotentialNumberToRemove)\n \n isStrictlyIncreasing = true\n for i in 0...remainingArray.count - 1 {\n if i + 1 < remainingArray.count {\n let currentNumber = remainingArray[i]\n let nextNumber = remainingArray[i + 1]\n if nextNumber <= currentNumber {\n isStrictlyIncreasing = false\n break\n }\n \n }\n }\n \n }\n \n return isStrictlyIncreasing\n \n\n}\n" ]
[ -1 ]
[ "arrays", "python" ]
stackoverflow_0043017251_arrays_python.txt
Q: SublimeText3: How to set spaces to 4 on .py file only? I am using SublimeText3 and writing HTML/CSS, so I set spaces to 2 when editing HTML/CSS files, but I want to automatically use 4 spaces when editing .py files. How do I do that? A: While you have any particular file open, choosing Preferences > Settings - Syntax Specific will open/create a set of preferences that apply only to files of that particular type. Settings in a syntax-specific preferences file are applied on top of the global default preferences, allowing you to specify that for particular files you would like a few things to be configured differently, while all other settings remain at the global preference. So generally speaking, to have different settings for Python, do this while you're editing a Python file and add the specific settings changes you would like to see in Python files to the right-hand pane.
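For example (a sketch assuming the stock Sublime Text option names tab_size and translate_tabs_to_spaces), the right-hand syntax-specific pane, which Sublime saves as Python.sublime-settings, could contain:
{
    "tab_size": 4,
    "translate_tabs_to_spaces": true
}
Repeating the menu step from an HTML or CSS file creates HTML.sublime-settings / CSS.sublime-settings, where "tab_size": 2 keeps the 2-space behaviour for those file types.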
SublimeText3: How to set spaces to 4 on .py file only?
I am using SublimeText3 and writing HTML/CSS, so I set spaces to 2 when editing HTML/CSS files, but I want to automatically use 4 spaces when editing .py files. How do I do that?
[ "While you have any particular file open, choosing Preferences > Settings - Syntax Specific will open/create a set of preferences that apply only to files of that particular type.\nSettings in a syntax specific preferences file are applied on top of the global default preferences, allowing you to specify for particular files that you would like a few things to be configured differently, while all other settings remain at the global preference.\nSo generally speaking, to have different settings for Python, do this while you're editing a Python file and apply settings to the right hand pane that apply the specific settings changes you would like to see in Python files.\n" ]
[ 1 ]
[]
[]
[ "python", "sublimetext3" ]
stackoverflow_0074469375_python_sublimetext3.txt
Q: I am creating a face recognition system using Python and OpenCV on these versions AttributeError: module 'cv2' has no attribute 'face' face_recognizer = cv2.face.createLBPHFaceRecognizer() A: As stated in this answer, you have to install opencv-contrib-python pip install opencv-contrib-python
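A quick sanity check after installing, sketched under the assumption of an OpenCV 3.3+ contrib build, where the factory is spelled LBPHFaceRecognizer_create (older 3.x contrib builds used the createLBPHFaceRecognizer name from the question):

import cv2

# cv2.face only exists when the opencv-contrib-python wheel is installed;
# the plain opencv-python wheel does not ship it
face_recognizer = cv2.face.LBPHFaceRecognizer_create()
print(cv2.__version__, type(face_recognizer))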
I am creating a face recognition system using Python and OpenCV on these versions
AttributeError: module 'cv2' has no attribute 'face' face_recognizer = cv2.face.createLBPHFaceRecognizer()
[ "As stated in this answer, you have to install opencv-contrib-python\npip install opencv-contrib-python\n\n" ]
[ 1 ]
[]
[]
[ "face_recognition", "opencv", "python" ]
stackoverflow_0074479428_face_recognition_opencv_python.txt
Q: Numpy doesn't respond accurately on M1 MacBook I have a MacBook Pro M1 Pro and I have tested some simple numpy commands on it, and it doesn't respond correctly, but if I check the same command in an online compiler it responds OK. Can you help me please? import numpy as np y=np.array([[1,2,3],[4,5,6],[7,8,9]]) print(np.linalg.det(y)) The result on my MacBook is: -9.51619735392994e-16 while the correct answer, and also the online compiler's answer, is: 0.0 A: Comparing two floats (or doubles etc.) can be problematic. Generally, instead of comparing for exact equality they should be checked against an error bound. If they are within the error bound, they are considered equal. Just round the results and you will always get 0. It has nothing to do with the Mac; it could happen between two instances on the same machine as well.
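A small sketch of that idea applied to the matrix from the question: compare the determinant against an error bound instead of testing it for exact equality with 0.0.

import numpy as np

y = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
det = np.linalg.det(y)  # roughly -9.5e-16, as reported in the question

# both checks treat the tiny floating-point residual as zero
print(np.isclose(det, 0.0))  # True
print(abs(det) < 1e-12)      # True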
Numpy doesn't respond accurately on M1 MacBook
I have a MacBook Pro M1 Pro and I have tested some simple numpy commands on it, and it doesn't respond correctly, but if I check the same command in an online compiler it responds OK. Can you help me please? import numpy as np y=np.array([[1,2,3],[4,5,6],[7,8,9]]) print(np.linalg.det(y)) The result on my MacBook is: -9.51619735392994e-16 while the correct answer, and also the online compiler's answer, is: 0.0
[ "Comparing two floats (or doubles etc) can be problematic. Generally, instead of comparing for exact equality they should be checked against an error bound. If they are within the error bound, they are considered equal.\nJust round the results and you will always get 0. it is nothing to do with mac. it could be two instance on the same machine as well.\n" ]
[ 0 ]
[]
[]
[ "apple_m1", "numpy", "python" ]
stackoverflow_0074479446_apple_m1_numpy_python.txt
Q: How do I switch to a Python log formatter I have defined in my logging.ini file? This is my logging.ini file: [loggers] keys=root [handlers] keys=consoleHandler [formatters] keys=simpleFormatter,json [logger_root] level=INFO handlers=consoleHandler [handler_consoleHandler] class=StreamHandler formatter=json args=(sys.stdout,) [formatter_json] class=pythonjsonlogger.jsonlogger.JsonFormatter format=%(asctime)s %(name)s %(levelname)s %(message)s [formatter_simpleFormatter] format=%(asctime)s %(name)s - %(levelname)s:%(message)s I want to switch the formatter via an environment variable, but this is not working (AttributeError: 'RootLogger' object has no attribute 'setFormatter'): import os import logging.config from os import path # Load logging config file logging_config_file_path = path.join( path.dirname(path.abspath(__file__)), "logging.ini" ) logging.config.fileConfig(logging_config_file_path) # Override log settings via env vars LOGLEVEL = os.environ.get("LOGLEVEL", "INFO").upper() LOG_FORMATTER = os.environ.get("LOG_FORMATTER", "simpleFormatter").upper() LOGLEVEL_NUMBER = logging.getLevelName(LOGLEVEL) LOGLEVEL_DEBUG_NUMBER = 10 logger = logging.getLogger() logger.setLevel(LOGLEVEL) # setFormatter seems to want an object logger.setFormatter(LOG_FORMATTER) I have all my settings defined in the ini file. How do I switch a formatter via an environment variable like I currently do for the log level? Edit Perhaps I must be missing something obvious. I tried passing arguments to the config, but it's not working and I can find almost no examples of how to use defaults: logging.config.fileConfig( logging_config_file_path, defaults={"formatter": "simpleFormatter"} ) In logging.ini [handler_consoleHandler] class=StreamHandler # formatter=simpleFormatter formatter='%(formatter)s' Throws: configparser.InterpolationSyntaxError: bad interpolation variable reference '%(formatter)' A: Adding this as an answer, but it's not a very good one. I wound up just creating a separate log config file and switching like this: LOG_CONFIG_PROFILE = os.environ.get("LOG_CONFIG_PROFILE", "logging_conf_local") logging_config_file_path = path.join( path.dirname(path.abspath(__file__)), f"{LOG_CONFIG_PROFILE}.ini" ) logging.config.fileConfig(logging_config_file_path) logger = logging.getLogger() Love to know if there is a better way or if this is the best way.
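One possible alternative, sketched here as an assumption rather than taken from the post: keep the single logging.ini, and after fileConfig runs, reattach a formatter chosen by the environment variable to the handlers. setFormatter lives on handlers, not loggers, which is what the AttributeError was pointing at.

import logging
import logging.config
import os

logging.config.fileConfig("logging.ini")

# hypothetical mapping from the env var value to Formatter objects; the
# JSON variant would additionally need pythonjsonlogger installed
FORMATTERS = {
    "SIMPLEFORMATTER": logging.Formatter(
        "%(asctime)s %(name)s - %(levelname)s:%(message)s"
    ),
}

chosen = FORMATTERS.get(os.environ.get("LOG_FORMATTER", "simpleFormatter").upper())
if chosen is not None:
    for handler in logging.getLogger().handlers:
        handler.setFormatter(chosen)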
How do I switch to a Python log formatter I have defined in my logging.ini file?
This is my logging.ini file: [loggers] keys=root [handlers] keys=consoleHandler [formatters] keys=simpleFormatter,json [logger_root] level=INFO handlers=consoleHandler [handler_consoleHandler] class=StreamHandler formatter=json args=(sys.stdout,) [formatter_json] class=pythonjsonlogger.jsonlogger.JsonFormatter format=%(asctime)s %(name)s %(levelname)s %(message)s [formatter_simpleFormatter] format=%(asctime)s %(name)s - %(levelname)s:%(message)s I want to switch the formatter via an environment variable, but this is not working (AttributeError: 'RootLogger' object has no attribute 'setFormatter'): import logging.config # Load logging config file logging_config_file_path = path.join( path.dirname(path.abspath(__file__)), "logging.ini" ) logging.config.fileConfig(logging_config_file_path) # Override log settings via env vars LOGLEVEL = os.environ.get("LOGLEVEL", "INFO").upper() LOG_FORMATTER = os.environ.get("LOG_FORMATTER", "simpleFormatter").upper() LOGLEVEL_NUMBER = logging.getLevelName(LOGLEVEL) LOGLEVEL_DEBUG_NUMBER = 10 logger = logging.getLogger() logger.setLevel(LOGLEVEL) # setFormatter seems to want an object logger.setFormatter(LOG_FORMATTER I have all my settings defined in the ini file. How do I switch a formatter via an environment variable like I currently do for the log level? Edit Perhaps I must be missing something obvious. I tried passing arguments to the config, but it's not working and I can find almost no examples of how to use defaults: logging.config.fileConfig( logging_config_file_path, defaults={"formatter": "simpleFormatter"} ) In logging.ini [handler_consoleHandler] class=StreamHandler # formatter=simpleFormatter formatter='%(formatter)s' Throws: configparser.InterpolationSyntaxError: bad interpolation variable reference '%(formatter)'
[ "Adding this an answer, but it's not a very good one.\nI wound up just creating a separate log config file and switching like this:\nLOG_CONFIG_PROFILE = os.environ.get(\"LOG_CONFIG_PROFILE\", \"logging_conf_local\")\nlogging_config_file_path = path.join(\n path.dirname(path.abspath(__file__)), f\"{LOG_CONFIG_PROFILE}.ini\"\n)\nlogging.config.fileConfig(logging_config_file_path)\n\nlogger = logging.getLogger()\n\nLove to know if there is a better way or if this is the best way.\n" ]
[ 0 ]
[]
[]
[ "python", "python_logging" ]
stackoverflow_0074478345_python_python_logging.txt
Q: My python list doesn't understand letters :( My code doesn't understand letters in the list; I would like someone to help me fix this usernames = (BTP, btp, Btp, BTp) def username(usernames2): if usernames == input('whats your username? : ') It's a simple username system I plan to use for an interface I'm making. A: usernames is defined as a tuple of 4 items, with the names BTP, btp, Btp, and BTp. You said "list" in your title but your code has no actual lists. Lists use brackets, tuples use parentheses. Anyway, I'm assuming you actually want to check if the user's input was equal to the letters "btp" and you want the check to be case-insensitive, hence why you included all combos of uppercase and lowercase. The main issue is that you didn't put quotes around the strings, so you have just 4 bare names sitting in your code which the interpreter expects to have been defined previously. But you actually don't have to define all the possible combinations of uppercase and lowercase in the first place - there's a much easier method to do a case-insensitive string compare, here. So, your code just needs to look like: username = "btp" def check_username(): if input('whats your username? : ').lower() == username: Or, if you want to check against multiple usernames, you can use the in operator: usernames = ["btp", "abc", "foo", "bar"] def check_username(): if input('whats your username? : ').lower() in usernames: A: If you haven't declared BTP, btp, Btp, and BTp you will get a NameError. If you wanted to use strings you need single or double quotation marks: usernames = ("BTP", "btp", "Btp", "BTp") With that you create a tuple containing four string elements. The next issue is with your if condition, as you check whether a tuple is equal to a string. Try storing the input given by the user in a variable: def username(usernames): user_input = input('whats your username?: ') if user_input in usernames: # Do something when username is found
My python list doesn't understand letters :(
My code doesn't understand letters in the list; I would like someone to help me fix this usernames = (BTP, btp, Btp, BTp) def username(usernames2): if usernames == input('whats your username? : ') It's a simple username system I plan to use for an interface I'm making.
[ "usernames is defined as a tuple of 4 items, with the names BTP, btp, Btp, and BTp. You said \"list\" in your title but your code has no actual lists. Lists use brackets, tuples use parentheses.\nAnyway, I'm assuming you actually want to check if the user's input actually was equal to the letters \"btp\" and you want the check to be case-insensitive, hence why you included all combos of uppercase and lowercase.\nThe main issue is that you didn't put quotes around the strings, so you have just 4 bare names sitting in your code which the interpreter expects to have been defined previously. But, you actually don't have to define all the possible combinations of uppercase and lowercase in the first place - there's a much easier method to do a case-insensitive string compare, here.\nSo, your code just needs to look like:\nusename = \"btp\"\ndef username(usernames2):\n if input('whats your username? : ').lower() == username\n\nOr, if you want to check against multiple usernames, you can use the in operator:\nusenames = [\"btp\", \"abc\", \"foo\", \"bar\"]\ndef username(usernames2):\n if input('whats your username? : ').lower() in usernames\n\n", "If you haven't declared BTP, btp, Btp, and BTp you will get a NameError\nIf you wanted to use strings you need single or double quotation marks:\nusernames = (\"BTP\", \"btp\", \"Btp\", \"BTp\")\n\nWith that you create a tuple containing four string elements.\nThe next issue is with your if condition as you compare if a tuple is equal a string.\nTry storing the input given from the user in a variable:\ndef username(usernames):\n user_input = input('whats your username?: ')\n if user_input in usernames:\n # Do something when username is found\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074479654_python_python_3.x.txt
Q: pd.read_excel throws PermissionError if file is open in Excel Whenever I have the file open in Excel and run the code, I get the following error, which is surprising because I thought read_excel was a read-only operation and would not require the file to be unlocked. Traceback (most recent call last): File "C:\Users\Public\a.py", line 53, in <module> main() File "C:\Users\Public\workspace\a.py", line 47, in main blend = plStream(rootDir); File "C:\Users\Public\workspace\a.py", line 20, in plStream df = pd.read_excel(fPath, sheetname="linear strategy", index_col="date", parse_dates=True) File "C:\Users\Public\Continuum\Anaconda35\lib\site-packages\pandas\io\excel.py", line 163, in read_excel io = ExcelFile(io, engine=engine) File "C:\Users\Public\Continuum\Anaconda35\lib\site-packages\pandas\io\excel.py", line 206, in __init__ self.book = xlrd.open_workbook(io) File "C:\Users\Public\Continuum\Anaconda35\lib\site-packages\xlrd\__init__.py", line 394, in open_workbook f = open(filename, "rb") PermissionError: [Errno 13] Permission denied: '<Path to File>' A: Generally Excel has a lot of restrictions when opening files (can't open the same file twice, can't open 2 different files with the same name, etc.). I don't have Excel on my machine to test, but checking the docs for read_excel I've noticed that it allows you to set the engine. From the stack trace you posted it seems like the error is thrown by xlrd, which is the default engine used by pandas. Try using any of the other ones Supported engines: “xlrd”, “openpyxl”, “odf”, “pyxlsb”, default “xlrd”. so try with the rest, like df = pd.read_excel(fPath, sheetname="linear strategy", index_col="date", parse_dates=True, engine="openpyxl") I know this is not a real answer, but you might want to submit a bug report to the pandas or xlrd teams. A: I would suggest using the xlwings module instead, which allows for greater functionality. Firstly, you will need to load your workbook using the following line: If the spreadsheet is in the same folder as your python script: import xlwings as xw workbook = xw.Book('myfile.xls') Alternatively: workbook = xw.Book(r'C:\Users\...\myfile.xls') Then, you can create your Pandas DataFrame by specifying the sheet within your spreadsheet and the cell where your dataset begins: df = workbook.sheets[0].range('A1').options(pd.DataFrame, header=1, index=False, expand='table').value When specifying a sheet you can either specify a sheet by its name or by its location (i.e. first, second etc.) in the following way: workbook.sheets[0] or workbook.sheets['sheet_name'] Lastly, you can simply install the xlwings module by using pip install xlwings A: As a workaround I suggest making python create a copy of the original file, then read from the copy. After that the code should delete the copied file. It's a bit of extra work but should work. Example import shutil shutil.copy("C://Test//Test.xlsx", "C://Test//koko.xlsx") A: Most likely there are no issues in your code. [If you publish the code it will be easier.] You need to change the permissions of the directory you are using so that all users have read and write permissions. A: I got this to work by first setting the working directory, then opening the file. Maybe something to do with shared drive permissions and the read_excel function. import os import pandas as pd os.chdir("c:\\Users\\...\\") filepath = "...\\filename.xlsx" sheetname = 'sheet1' df_xls = pd.read_excel(filepath, sheet_name=sheetname, engine='openpyxl')
pd.read_excel throws PermissionError if file is open in Excel
Whenever I have the file open in Excel and run the code, I get the following error which is surprising because I thought read_excel should be a read only operation and would not require the file to be unlocked? Traceback (most recent call last): File "C:\Users\Public\a.py", line 53, in <module> main() File "C:\Users\Public\workspace\a.py", line 47, in main blend = plStream(rootDir); File "C:\Users\Public\workspace\a.py", line 20, in plStream df = pd.read_excel(fPath, sheetname="linear strategy", index_col="date", parse_dates=True) File "C:\Users\Public\Continuum\Anaconda35\lib\site-packages\pandas\io\excel.py", line 163, in read_excel io = ExcelFile(io, engine=engine) File "C:\Users\Public\Continuum\Anaconda35\lib\site-packages\pandas\io\excel.py", line 206, in __init__ self.book = xlrd.open_workbook(io) File "C:\Users\Public\Continuum\Anaconda35\lib\site-packages\xlrd\__init__.py", line 394, in open_workbook f = open(filename, "rb") PermissionError: [Errno 13] Permission denied: '<Path to File>'
[ "Generally Excel have a lot of restrictions when opening files (can't open the same file twice, can't open 2 different files with the same name ..etc).\nI don't have excel on machine to test, but checking the docs for read_excel I've noticed that it allows you to set the engine.\nfrom the stack trace you posted it seems like the error is thrown by xlrd which is the default engine used by pandas.\ntry using any of the other ones\n\nSupported engines: “xlrd”, “openpyxl”, “odf”, “pyxlsb”, default “xlrd”.\n\nso try with the rest, like\ndf = pd.read_excel(fPath, sheetname=\"linear strategy\", index_col=\"date\", parse_dates=True, engine=\"openpyxl\")\n\nI know this is not a real answer, but you might want to submit a bug report to pandas or xlrd teams.\n", "I would suggest using the xlwings module instead which allows for greater functionality.\nFirstly, you will need to load your workbook using the following line:\nIf the spreadsheet is in the same folder as your python script:\nimport xlwings as xw\nworkbook = xw.Book('myfile.xls')\n\nAlternatively:\nworkbook = xw.Book('\"C:\\Users\\...\\myfile.xls')\n\nThen, you can create your Pandas DataFrame, by specifying the sheet within your spreadsheet and the cell where your dataset begins:\ndf = workbook.sheets[0].range('A1').options(pd.DataFrame, \n header=1,\n index=False, \n expand='table').value\n\nWhen specifying a sheet you can either specify a sheet by its name or by its location (i.e. first, second etc.) in the following way:\nworkbook.sheets[0] or workbook.sheets['sheet_name']\nLastly, you can simply install the xlwings module by using Pip install xlwings \n", "As a workaround I suggest making python create a copy of the original file then read from the copy. After that the code should delete the copied file. It's a bit of extra work but should work.\nExample\nimport shutil\nshutil.copy(\"C://Test//Test.xlsx\", \"C://Test//koko.xlsx\")\n\n", "Mostly there is no issues in your code. [ If you publish the code it will be easier.]\nYou need to change the permissions of the directory you are using so that all users have read and write permissions.\n", "I got this to work by first setting the working directory, then opening the file. Maybe something to do with shared drive permissions and read_excel function.\nimport os\nimport pandas as pd\n\nos.chdir(\"c:\\\\Users\\\\...\\\\\")\n\nfilepath = \"...\\\\filename.xlsx\"\nsheetname = 'sheet1'\n\ndf_xls = pd.read_excel(filepath, sheet_name=sheetname, engine='openpyxl')\n\n" ]
[ 3, 2, 2, 0, 0 ]
[ "I fix this error simply closing the .xlsx file that was open.\n", "You can set engine = 'xlrd', then you can run the code while Excel has the file open.\ndf = pd.read_excel(filename, sheetname, engine = 'xlrd')\n\nYou may need to pip install xlrd if you don't have it\n", "You may also want to check if the file has a password? Alternatively you can open the file with the password required using the code below:\nimport sys\nimport win32com.client\nxlApp = win32com.client.Dispatch(\"Excel.Application\")\nprint \"Excel library version:\", xlApp.Version\nfilename, password = <-- enter your own filename and password\nxlwb = xlApp.Workbooks.Open(filename, Password=password) \n# xlwb = xlApp.Workbooks.Open(filename)\nxlws = xlwb.Sheets([insert number here]) # counts from 1, not from 0\nprint xlws.Name\nprint xlws.Cells(1, 1) # that's A1\n\n", "You can set engine='python' then you can run it even if the file is open\ndf = pd.read_excel(filename, engine = 'python')\n" ]
[ -1, -2, -2, -3 ]
[ "excel", "pandas", "python" ]
stackoverflow_0035743905_excel_pandas_python.txt
Q: I have a variable value in lower case and the same value is in one of the dictionary keys; how do I fulfill the condition? I have a document_title variable whose value is in lowercase letters, and the same value is in the dict keys with uppercase letters TITLE_MAP = { 'AUS Marketing Consent': "DOCUMENT_TYPE_MARKETING_CONSENT", 'Consent & History': "DOCUMENT_TYPE_CONSENT", } document_title = 'aus marketing consent' If I do this, it won't work: if document_title in TITLE_MAP.keys(): return True I want to fulfill the condition even with the case difference. A: You can use the casefold method to do string comparison. Since you want to apply it to all the keys, you can use a list comprehension. if document_title.casefold() in [x.casefold() for x in TITLE_MAP.keys()]: print(True) Hope this helps. A: if document_title.upper() in (k.upper() for k in TITLE_MAP.keys()): return True A: Maybe it's overkill but you can try this solution: if document_title.lower() in {k.lower() for k in TITLE_MAP.keys()}: print(True) It lowercases every key from your dictionary A: The two strings must be in the same case. You have to convert all keys to lowercase. Try the code below TITLE_MAP = { 'AUS Marketing Consent': "DOCUMENT_TYPE_MARKETING_CONSENT", 'Consent & History': "DOCUMENT_TYPE_CONSENT", } TITLE_MAP = {k.lower(): v for k, v in TITLE_MAP.items()} document_title = 'aus marketing consent' if document_title.lower() in TITLE_MAP: print(True)
I have a variable value in lower case and the same value is in one of the dictionary keys; how do I fulfill the condition?
I have a document_title variable whose value is in lowercase letters, and the same value is in the dict keys with uppercase letters TITLE_MAP = { 'AUS Marketing Consent': "DOCUMENT_TYPE_MARKETING_CONSENT", 'Consent & History': "DOCUMENT_TYPE_CONSENT", } document_title = 'aus marketing consent' If I do this, it won't work: if document_title in TITLE_MAP.keys(): return True I want to fulfill the condition even with the case difference.
[ "You can use the casefold method to do string comparison. Since you want to apply it to all the keys, you can use a list comprehension.\nif document_title.casefold() in [x.casefold() for x in TITLE_MAP.keys()]:\n print(True)\n\nHope this helps.\n", "if document_title.upper() in TITLE_MAP.key():\n return True\n\n", "Maybe it's overkill but you can try this solution :\nif document_title.lower() in {k.lower() for k in TITLE_MAP.keys()}:\n print(True)\n\nIt lowers every keys from your dictionnary\n", "The two strings must be in the same case. You have to convert all keys to lowercase. Try the code below\nTITLE_MAP = {\n 'AUS Marketing Consent': \"DOCUMENT_TYPE_MARKETING_CONSENT\",\n 'Consent & History': \"DOCUMENT_TYPE_CONSENT\",\n}\n\nTITLE_MAP = {k.lower(): v for k, v in TITLE_MAP.items()}\n\ndocument_title = 'aus marketing consent'\n\nif document_title.lower() in TITLE_MAP:\n print(True)\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "character", "dictionary", "python", "string" ]
stackoverflow_0074479418_character_dictionary_python_string.txt
Q: Counting each day in a dataframe Say I have a dataframe 'df': I would like to add an additional column named 'Day No' which adds a count to each day. Desired output below: This won't reset at the end of each month; the count will just continue. For example, at the end of the year it will read 365 for all the 1 hour entries in the last day of the year. The dtype of column 'Datetime' is datetime64[ns]. Any help greatly appreciated, Thanks. A: Here is one way to do it # convert to datetime and extract dayofyear df['Day No'] = pd.to_datetime(df['DateTime'], dayfirst=True).dt.dayofyear PS: if you had shared the df constructor or the data as text, I would have been able to share the result A: You can map the result of enumerated unique values: reversed_dict = dict(enumerate(df['DateTime'].unique(), 1)) df['Day No'] = df['DateTime'].map({v:k for k,v in reversed_dict.items()})
Counting each day in a dataframe
Say I have a dataframe 'df': I would like to add an additional column named 'Day No' which adds a count to each day. Desired output below: This won't reset at the end of each month; the count will just continue. For example, at the end of the year it will read 365 for all the 1 hour entries in the last day of the year. The dtype of column 'Datetime' is datetime64[ns]. Any help greatly appreciated, Thanks.
[ "here is one way to do it\n# convert to datetime and extract dayofyear\n\ndf['Day No']= pd.to_datetime(df['DateTime'], dayfirst=True).dt.dayofyear\n\nPS: if you had shared df constructor or as text, i would have been able to share the result\n", "You can map the result of enumerated unique values:\nreversed_dict = dict(enumerate(df['DateTime'].unique(), 1))\n\ndf['Day No'] = df['DateTime'].map({v:k for k,v in reversed_dict.items()})\n\n" ]
[ 2, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074479445_dataframe_pandas_python.txt
Q: Compare and count the sparse arrays in a list in python Holla! I have a list of 60 large-size 2d arrays (30000,30000). The goal is to compare each array with every other array and count the total number of exactly the same arrays in the entire list. I am working on this logic; however, it is counting the number of identical arrays individually, which is not what I want: import numpy as np import pandas as pd import scipy.sparse as sp ## I am using this dummy setup, to begin with (rather than the large data) # creating 4 dummy arrays a = np.zeros((6,6)) a[1,2] = 1 a[2,5] = 1 a[3,2] = 1 a[4,1] = 1 print(a) b = np.zeros((6,6)) b[1,2] = 1 b[2,5] = 1 b[3,2] = 1 b[4,1] = 1 c = np.zeros((6,6)) c[1,3] = 1 c[2,5] = 1 d = np.zeros((6,6)) d[1,3] = 1 d[2,4] = 1 # storing the arrays in a list list2d = [a,b,c,d] #loop through the list to count the number of arrays with exactly the same values n = len(list2d) for i in range(n): count = 0 for j in range(n): if (list2d[i] == list2d[j]).all() and i != j: count += 1 print('list2d[',i,'] is the same as list2d[',j,']') else: print('list2d[',i,'] is not the same as list2d[',j,']') print('total number of same arrays || count = ',count) Another option is working with sparse matrices and storing them in a list. However, I'm not sure whether we can compare or check for equality on the entire list with 60 sparse arrays. # again finalizing a logic on a dummy setup a_sparse = sp.csr_matrix(a) b_sparse = sp.csr_matrix(b) c_sparse = sp.csr_matrix(c) d_sparse = sp.csr_matrix(d) print(a_sparse) # #list of sparse matrices list_sparse = [a_sparse,b_sparse,c_sparse,d_sparse] ## compare the list of sparse arrays and count the total number of exactly the same arrays ## also, print/store all the equal arrays Any suggestions and/or feedback for getting the correct logic is appreciated. Cheers! A: EDIT#3: Based on your comments, I think this is what you are trying to do.
import numpy as np from copy import deepcopy def convert_to_tuple(mat): x = tuple(np.flatnonzero(mat)) + mat.shape return (x) def get_replicates(id, mat, mat_list): replicates = 0 #Remove the relevant matrix from mat_list to avoid checking the reference against itself del mat_list[id] # Create a tuple of the reference matrix ref = convert_to_tuple(mat) print(id, ":", ref) # Check how many replicates of the reference matrix there are for m in mat_list: s = set() s.add(convert_to_tuple(m)) s.add(ref) replicates += (-len(s)+2) # Replace the matrix into mat_list mat_list.insert(id, mat) return replicates # Generate a number of sparse matrices # a=b # c=d=e=f=g # ---------------------------------------------------------------------------- a = np.zeros((6,6)) a[1,2] = 1 a[2,5] = 1 a[3,2] = 1 a[4,1] = 1 b = deepcopy(a) c = np.zeros((6,6)) c[1,3] = 1 c[2,5] = 1 d = deepcopy(c) e = deepcopy(c) f = deepcopy(c) g = deepcopy(c) # storing the arrays in a list list2d = [a,b,c,d,e,f,g] # Identify the number of replicates #------------------------------------------------------------------------------ number_of_replicates = [get_replicates(i, arr, list2d) for i, arr in enumerate(list2d)] # Print the number of replicates #------------------------------------------------------------------------------ for i, reps in enumerate(number_of_replicates): print(f"Sparse Array {i} has {reps} replicates") OUTPUT: 0 : (8, 17, 20, 25, 6, 6) 1 : (8, 17, 20, 25, 6, 6) 2 : (9, 17, 6, 6) 3 : (9, 17, 6, 6) 4 : (9, 17, 6, 6) 5 : (9, 17, 6, 6) 6 : (9, 17, 6, 6) Sparse Array 0 has 1 replicates Sparse Array 1 has 1 replicates Sparse Array 2 has 4 replicates Sparse Array 3 has 4 replicates Sparse Array 4 has 4 replicates Sparse Array 5 has 4 replicates Sparse Array 6 has 4 replicates The top part of the output shows the what each matrix looks like after being converted to a tuple. The tuple contains the index of each 1 within the matrix, and the shape of each matrix 6,6 is appended to the end. From the output you can see that: array a and b - have 1 replicate each arrays c,d,e,f,g - have 4 replicates each A: I would probably not choose to fiddle with array stuff in weird ways. I definitely would not cast sparse matrices to dense for this, as doing that directly for 60 of these 30k x 30k things would require 450GB of memory or so. Just check everything as it is. Set up the problem (so that there are 40 unique matrices and 20 matrices which are not unique), and use Counters instead of reinventing that wheel: from collections import Counter from scipy import sparse import numpy as np list_of_arrays = [sparse.rand(200,200,density=np.random.uniform(0.025, 0.075),format='csr') for _ in range(50)] for i in range(10): list_of_arrays.append(list_of_arrays[i]) Exclude any matrices which have unique shapes or unique nnz (as they're trivial to check): # Check NNZ nnz_counter = Counter([x.nnz for x in list_of_arrays]) non_unique_arrays = [x for x in list_of_arrays if nnz_counter[x.nnz] > 1] # Check Shape shape_counter = Counter([x.shape for x in non_unique_arrays]) non_unique_arrays = [x for x in non_unique_arrays if shape_counter[x.shape] > 1] Use numpy array views + hashing to compare arrays to find identical arrays (this returns a list of True if the array has a duplicate and False otherwise). 
# Check a list of arrays for duplicates by hashing def array_hash(arrays): return [hash(x.view) for x in arrays] def array_hash_duplicates(arrays): hashes = array_hash(arrays) hash_counter = Counter(hashes) return [True if hash_counter[x] > 1 else False for x in hashes] Now apply that check to the matrix indptr, indices, and data arrays in order, removing any matrices which are unique after each check. # Check indptr, indices, and data in order non_unique_arrays = [ x for x, y in zip( non_unique_arrays, array_hash_duplicates([x.indptr for x in non_unique_arrays]) ) if y ] non_unique_arrays = [ x for x, y in zip( non_unique_arrays, array_hash_duplicates([x.indices for x in non_unique_arrays]) ) if y ] duplicates = Counter(array_hash([x.data for x in non_unique_arrays])) n_duplicates = sum(x - 1 for x in duplicates.values()) >>> n_duplicates 10 This results in a list of matrices which are non-unique (so at least one other matrix is identical in the list). It's possible to have multiple non-unique matrices which are not the same, of course. Note that this is inefficient if you expect the list to be duplicates of the same python object, not just different matrices with the same values. That would be easy to solve another way.
Compare and count the sparse arrays in a list in python
Holla! I have a list of 60 large-size 2d arrays (30000,30000). The goal is to compare each array with every other array and count the total number of exactly the same arrays in the entire list. I am working on this logic, however, it is counting the number of same arrays individually and not what I want: import numpy as np import pandas as pd import scipy.sparse as sp ## I am using this dummy setup, to begin with (rather than the large data) # creating 4 dummy arrays a = np.zeros((6,6)) a[1,2] = 1 a[2,5] = 1 a[3,2] = 1 a[4,1] = 1 print(a) b = np.zeros((6,6)) b[1,2] = 1 b[2,5] = 1 b[3,2] = 1 b[4,1] = 1 c = np.zeros((6,6)) c[1,3] = 1 c[2,5] = 1 d = np.zeros((6,6)) d[1,3] = 1 d[2,4] = 1 # storing the arrays in a list list2d = [a,b,c,d] #loop through the list to count the number of arrays with exactly same values n = len(list2d) for i in range(n): count = 0 for j in range(n): if (list2d[i] == list2d[j]).all() and i != j: count += 1 print('list2d[',i,'] is the same as list2d[',j,']') else: print('list2d[',i,'] is not the same as list2d[',j,']') print('total number of same arrays || count = ',count) Another option is working with sparse matrices and storing them in a list. However, I'm not sure whether we can compare or check for equity on the entire list with 60 sparse arrays. # again finalizing a logic on a dummy setup a_sparse = sp.csr_matrix(a) b_sparse = sp.csr_matrix(b) c_sparse = sp.csr_matrix(c) d_sparse = sp.csr_matrix(d) print(a_sparse) # #list of sparse matrices list_sparse = [a_sparse,b_sparse,c_sparse,d_sparse] ## compare the list of sparse arrays and count the total number of exactly same arrays ## also, print/ store all the equal arrays Any suggestions and/or feedback for getting the correct logic is appreciated. Cheers!
[ "EDIT#3:\nBased on your comments, I think this is what you are trying to do.\nimport numpy as np\nfrom copy import deepcopy\n\ndef convert_to_tuple(mat):\n x = tuple(np.flatnonzero(mat)) + mat.shape\n return (x)\n\ndef get_replicates(id, mat, mat_list):\n replicates = 0\n \n #Remove the relevant matrix from mat_list to avoid checking the reference against itself\n del mat_list[id]\n \n # Create a tuple of the reference matrix\n ref = convert_to_tuple(mat)\n print(id, \":\", ref)\n \n # Check how many replicates of the reference matrix there are\n for m in mat_list:\n s = set()\n s.add(convert_to_tuple(m))\n s.add(ref)\n replicates += (-len(s)+2)\n \n # Replace the matrix into mat_list\n mat_list.insert(id, mat)\n \n return replicates \n \n\n# Generate a number of sparse matrices\n# a=b\n# c=d=e=f=g\n# ----------------------------------------------------------------------------\na = np.zeros((6,6))\na[1,2] = 1\na[2,5] = 1\na[3,2] = 1\na[4,1] = 1\n\nb = deepcopy(a)\n\nc = np.zeros((6,6))\nc[1,3] = 1\nc[2,5] = 1\n\nd = deepcopy(c)\ne = deepcopy(c)\nf = deepcopy(c)\ng = deepcopy(c)\n\n\n# storing the arrays in a list\nlist2d = [a,b,c,d,e,f,g]\n\n\n# Identify the number of replicates\n#------------------------------------------------------------------------------\nnumber_of_replicates = [get_replicates(i, arr, list2d) for i, arr in enumerate(list2d)]\n \n\n# Print the number of replicates \n#------------------------------------------------------------------------------\nfor i, reps in enumerate(number_of_replicates):\n print(f\"Sparse Array {i} has {reps} replicates\")\n\nOUTPUT:\n0 : (8, 17, 20, 25, 6, 6)\n1 : (8, 17, 20, 25, 6, 6)\n2 : (9, 17, 6, 6)\n3 : (9, 17, 6, 6)\n4 : (9, 17, 6, 6)\n5 : (9, 17, 6, 6)\n6 : (9, 17, 6, 6)\n\nSparse Array 0 has 1 replicates\nSparse Array 1 has 1 replicates\nSparse Array 2 has 4 replicates\nSparse Array 3 has 4 replicates\nSparse Array 4 has 4 replicates\nSparse Array 5 has 4 replicates\nSparse Array 6 has 4 replicates\n\nThe top part of the output shows the what each matrix looks like after being converted to a tuple. The tuple contains the index of each 1 within the matrix, and the shape of each matrix 6,6 is appended to the end.\nFrom the output you can see that:\narray a and b - have 1 replicate each\narrays c,d,e,f,g - have 4 replicates each\n\n", "I would probably not choose to fiddle with array stuff in weird ways. I definitely would not cast sparse matrices to dense for this, as doing that directly for 60 of these 30k x 30k things would require 450GB of memory or so. 
Just check everything as it is.\nSet up the problem (so that there are 40 unique matrices and 20 matrices which are not unique), and use Counters instead of reinventing that wheel:\nfrom collections import Counter\nfrom scipy import sparse\nimport numpy as np\n\nlist_of_arrays = [sparse.rand(200,200,density=np.random.uniform(0.025, 0.075),format='csr') for _ in range(50)]\n\nfor i in range(10):\n list_of_arrays.append(list_of_arrays[i])\n\nExclude any matrices which have unique shapes or unique nnz (as they're trivial to check):\n# Check NNZ\nnnz_counter = Counter([x.nnz for x in list_of_arrays])\nnon_unique_arrays = [x for x in list_of_arrays if nnz_counter[x.nnz] > 1]\n\n# Check Shape\nshape_counter = Counter([x.shape for x in non_unique_arrays])\nnon_unique_arrays = [x for x in non_unique_arrays if shape_counter[x.shape] > 1]\n\nUse numpy array views + hashing to compare arrays to find identical arrays (this returns a list of True if the array has a duplicate and False otherwise).\n# Check a list of arrays for duplicates by hashing\ndef array_hash(arrays):\n return [hash(x.view) for x in arrays]\n\ndef array_hash_duplicates(arrays):\n hashes = array_hash(arrays)\n hash_counter = Counter(hashes)\n return [True if hash_counter[x] > 1 else False for x in hashes]\n\nNow apply that check to the matrix indptr, indices, and data arrays in order, removing any matrices which are unique after each check.\n# Check indptr, indices, and data in order\nnon_unique_arrays = [\n x\n for x, y in zip(\n non_unique_arrays,\n array_hash_duplicates([x.indptr for x in non_unique_arrays])\n )\n if y\n]\n\nnon_unique_arrays = [\n x\n for x, y in zip(\n non_unique_arrays,\n array_hash_duplicates([x.indices for x in non_unique_arrays])\n )\n if y\n]\n\nduplicates = Counter(array_hash([x.data for x in non_unique_arrays]))\nn_duplicates = sum(x - 1 for x in duplicates.values())\n\n>>> n_duplicates\n10\n\nThis results in a list of matrices which are non-unique (so at least one other matrix is identical in the list). It's possible to have multiple non-unique matrices which are not the same, of course.\nNote that this is inefficient if you expect the list to be duplicates of the same python object, not just different matrices with the same values. That would be easy to solve another way.\n" ]
[ 1, 1 ]
[]
[]
[ "arrays", "list", "numpy", "python", "sparse_matrix" ]
stackoverflow_0074397307_arrays_list_numpy_python_sparse_matrix.txt
Q: lxml find all elements between two tags I extracted a Word document and searched it for all bookmarks. But the bookmark tag has no end tag, so lxml finds only the bookmarkStart, not the elements between bookmarkStart and bookmarkEnd. How can I get all elements between bookmarkStart and bookmarkEnd? Thanks! <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <w:document xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:cx="http://schemas.microsoft.com/office/drawing/2014/chartex" xmlns:cx1="http://schemas.microsoft.com/office/drawing/2015/9/8/chartex" xmlns:cx2="http://schemas.microsoft.com/office/drawing/2015/10/21/chartex" xmlns:cx3="http://schemas.microsoft.com/office/drawing/2016/5/9/chartex" xmlns:cx4="http://schemas.microsoft.com/office/drawing/2016/5/10/chartex" xmlns:cx5="http://schemas.microsoft.com/office/drawing/2016/5/11/chartex" xmlns:cx6="http://schemas.microsoft.com/office/drawing/2016/5/12/chartex" xmlns:cx7="http://schemas.microsoft.com/office/drawing/2016/5/13/chartex" xmlns:cx8="http://schemas.microsoft.com/office/drawing/2016/5/14/chartex" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:aink="http://schemas.microsoft.com/office/drawing/2016/ink" xmlns:am3d="http://schemas.microsoft.com/office/drawing/2017/model3d" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:oel="http://schemas.microsoft.com/office/2019/extlst" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:w16cex="http://schemas.microsoft.com/office/word/2018/wordml/cex" xmlns:w16cid="http://schemas.microsoft.com/office/word/2016/wordml/cid" xmlns:w16="http://schemas.microsoft.com/office/word/2018/wordml" xmlns:w16sdtdh="http://schemas.microsoft.com/office/word/2020/wordml/sdtdatahash" xmlns:w16se="http://schemas.microsoft.com/office/word/2015/wordml/symex" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" mc:Ignorable="w14 w15 w16se w16cid w16 w16cex w16sdtdh wp14"> <w:body> <w:p w14:paraId="2DDA6990" w14:textId="44789F6F" w:rsidR="0067078D" w:rsidRDefault="003F5B0A"> <w:bookmarkStart w:id="0" w:name="testmark"/> <w:proofErr w:type="spellStart"/> <w:r> <w:t>sometext</w:t> </w:r> <w:bookmarkEnd w:id="0"/> <w:proofErr w:type="spellEnd"/> </w:p> <w:sectPr w:rsidR="0067078D"> <w:pgSz w:w="11906" w:h="16838"/> <w:pgMar w:top="1417" w:right="1417" w:bottom="1134" w:left="1417" w:header="708" w:footer="708" w:gutter="0"/> <w:cols w:space="708"/> <w:docGrid w:linePitch="360"/> </w:sectPr> </w:body> </w:document> from lxml import etree as ET ns = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'} ns2 = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}' with open('document.xml', 'r', encoding='utf-8') as xml_file: tree_word = ET.parse(xml_file) findall_param = 'w:bookmarkStart' find_param = 'w:t' root_word = tree_word.getroot() field_content = tree_word.findall('.//'+findall_param, ns) for bookmark in field_content: textmarker = bookmark.attrib[f"{ns2}name"] print(ET.tostring(bookmark)) t = bookmark.find('.//w:t', ns) A: If I understand you correctly, and based on the sample xml in the question, the following should get you at least close to what you are trying to do: from lxml import etree word = """[your sample xml]""" doc = etree.XML(word.encode()) ns = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'} start_param = 'w:bookmarkStart' t_param = 'w:t' end_param = "bookmarkEnd" for el in doc.xpath(f'//w:p[.//{start_param}]//{start_param}/following-sibling::*', namespaces=ns): if etree.QName(el).localname == f"{end_param}": break else: if len(el.xpath(f'.//{t_param}', namespaces=ns)) > 0: el.xpath(f'.//{t_param}', namespaces=ns)[0].text = "some new text" print(etree.tostring(doc).decode()) Try it on your actual document and see if it works.
lxml find all elements between two tags
extracted a word document and search in this all bookmarks. But the bookmark tag have no end tag, so lxml find only the bookmarkStart but not the elements between bookmarkStart and bookmarkEnd. How can i get all Elements within bookmarkStart and bookmarkEnd? Thanks! <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <w:document xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:cx="http://schemas.microsoft.com/office/drawing/2014/chartex" xmlns:cx1="http://schemas.microsoft.com/office/drawing/2015/9/8/chartex" xmlns:cx2="http://schemas.microsoft.com/office/drawing/2015/10/21/chartex" xmlns:cx3="http://schemas.microsoft.com/office/drawing/2016/5/9/chartex" xmlns:cx4="http://schemas.microsoft.com/office/drawing/2016/5/10/chartex" xmlns:cx5="http://schemas.microsoft.com/office/drawing/2016/5/11/chartex" xmlns:cx6="http://schemas.microsoft.com/office/drawing/2016/5/12/chartex" xmlns:cx7="http://schemas.microsoft.com/office/drawing/2016/5/13/chartex" xmlns:cx8="http://schemas.microsoft.com/office/drawing/2016/5/14/chartex" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:aink="http://schemas.microsoft.com/office/drawing/2016/ink" xmlns:am3d="http://schemas.microsoft.com/office/drawing/2017/model3d" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:oel="http://schemas.microsoft.com/office/2019/extlst" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:w16cex="http://schemas.microsoft.com/office/word/2018/wordml/cex" xmlns:w16cid="http://schemas.microsoft.com/office/word/2016/wordml/cid" xmlns:w16="http://schemas.microsoft.com/office/word/2018/wordml" xmlns:w16sdtdh="http://schemas.microsoft.com/office/word/2020/wordml/sdtdatahash" xmlns:w16se="http://schemas.microsoft.com/office/word/2015/wordml/symex" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" mc:Ignorable="w14 w15 w16se w16cid w16 w16cex w16sdtdh wp14"> <w:body> <w:p w14:paraId="2DDA6990" w14:textId="44789F6F" w:rsidR="0067078D" w:rsidRDefault="003F5B0A"> <w:bookmarkStart w:id="0" w:name="testmark"/> <w:proofErr w:type="spellStart"/> <w:r> <w:t>sometext</w:t> </w:r> <w:bookmarkEnd w:id="0"/> <w:proofErr w:type="spellEnd"/> </w:p> <w:sectPr w:rsidR="0067078D"> <w:pgSz w:w="11906" w:h="16838"/> <w:pgMar w:top="1417" w:right="1417" w:bottom="1134" w:left="1417" w:header="708" w:footer="708" w:gutter="0"/> <w:cols w:space="708"/> <w:docGrid w:linePitch="360"/> </w:sectPr> </w:body> </w:document> from lxml import etree as ET ns = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'} ns2 = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}' with open('document.xml', 'r', encoding='utf-8') as xml_file: tree_word = ET.parse(xml_file) findall_param = 
'w:bookmarkStart' find_param = 'w:t' root_word = tree_word.getroot() field_content = tree_word.findall('.//'+findall_param, ns) for bookmark in field_content: textmarker = bookmark.attrib[f"{ns2}name"] print(ET.tostring(bookmark)) t = bookmark.find('.//w:t', ns)
[ "If I understand you correctly, and based on the sample xml in the question, the following should get you at least close to what you are trying to do:\nword = \"\"\"[your sample xml]\"\"\"\ndoc = etree.XML(word.encode())\nns = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'}\nstart_param = 'w:bookmarkStart'\nt_param = 'w:t'\nend_param = \"bookmarkEnd\"\n\ndoc.xpath(f'/{start_param}',namespaces=ns)\nfor el in doc.xpath(f'//w:p[.//{book_param}]//{book_param}/following-sibling::*',namespaces=ns): \n if etree.QName(el).localname==f\"{end_param}\":\n break\n else:\n if len(el.xpath(f'.//{t_param}',namespaces=ns) )>0:\n el.xpath(f'.//{t_param}',namespaces=ns)[0].text=\"some new text\"\nprint(etree.tostring(doc).decode())\n\nTry it on your actual document and see if it works.\n" ]
[ 0 ]
[]
[]
[ "lxml", "python" ]
stackoverflow_0074474718_lxml_python.txt
Q: How to install a win32 version of python using win64 anaconda I am trying to set up the covarep software on my win64 machine for a project and need to install 'a Windows 32-bit version of Python 2.7, 3.3, and/or 3.4'. I used conda (platform win-64) to run conda create -n "covarep-env" python=3.4.0 -c free This created an environment that has python version 3.4.0, but this obviously defaults to installing the win-64 version. After following the covarep README instructions and running import covarep_py in python, I get the error RuntimeError: To call deployed MATLAB code on a win32 machine, you must run a win32 version of Python. Details: C:\Program Files (x86)\MATLAB\MATLAB Runtime\v90\runtime\win32 Q: Is there a way to specify the win32 platform version of python when running conda create? A: You must set the CONDA_FORCE_32BIT environment variable (got it from [YouTube]: DotPi - Create 32-bit Python Environments from a 64-bit Conda Installation) before creating the environment (not related to (previous) "environment variable"). Unfortunately the only official reference I could find is [Anaconda.Docs]: Troubleshooting - Using 32- and 64-bit libraries and CONDA_FORCE_32BIT. Example (Anaconda Prompt - I used Python 3.6 as an example, as I need it for another task): (base) [cfati@CFATI-5510-0:C:\Users\cfati]> :: Create a "regular" (pc064) environment (base) [cfati@CFATI-5510-0:C:\Users\cfati]> conda create -n py_pc064_03_06_02 python=3.6.2 Collecting package metadata (current_repodata.json): done Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.14.0 latest version: 22.9.0 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_06_02 added / updated specs: - python=3.6.2 The following NEW packages will be INSTALLED: certifi pkgs/main/win-64::certifi-2021.5.30-py36haa95532_0 pip pkgs/main/win-64::pip-21.2.2-py36haa95532_0 python pkgs/main/win-64::python-3.6.2-h09676a0_15 setuptools pkgs/main/win-64::setuptools-58.0.4-py36haa95532_0 vc pkgs/main/win-64::vc-14.2-h21ff451_1 vs2015_runtime pkgs/main/win-64::vs2015_runtime-14.27.29016-h5e58377_2 wheel pkgs/main/noarch::wheel-0.37.1-pyhd3eb1b0_0 wincertstore pkgs/main/win-64::wincertstore-0.2-py36h7fe50ca_0 Proceed ([y]/n)? y Preparing transaction: done Verifying transaction: done Executing transaction: done # # To activate this environment, use # # $ conda activate py_pc064_03_06_02 # # To deactivate an active environment, use # # $ conda deactivate Retrieving notices: ...working... done (base) [cfati@CFATI-5510-0:C:\Users\cfati]> (base) [cfati@CFATI-5510-0:C:\Users\cfati]> :: SET ENVIRONMENT VARIABLE (base) [cfati@CFATI-5510-0:C:\Users\cfati]> set CONDA_FORCE_32BIT=1 (base) [cfati@CFATI-5510-0:C:\Users\cfati]> :: Create a funky (pc032) environment (base) [cfati@CFATI-5510-0:C:\Users\cfati]> conda create -n py_pc032_03_06_02 python=3.6.2 Collecting package metadata (current_repodata.json): done Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. 
Collecting package metadata (repodata.json): done Solving environment: done ## Package Plan ## environment location: f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc032_03_06_02 added / updated specs: - python=3.6.2 The following packages will be downloaded: package | build ---------------------------|----------------- certifi-2021.5.30 | py36h9f7ea03_0 140 KB pip-21.2.2 | py36h9f7ea03_0 1.8 MB python-3.6.2 | hb0ff576_15 12.8 MB setuptools-58.0.4 | py36h9f7ea03_0 777 KB vc-14.2 | h21ff451_1 8 KB vs2015_runtime-14.27.29016 | h5e58377_2 1000 KB wheel-0.37.1 | pyhd3eb1b0_0 33 KB wincertstore-0.2 | py36hcdd9a18_0 14 KB ------------------------------------------------------------ Total: 16.5 MB The following NEW packages will be INSTALLED: certifi pkgs/main/win-32::certifi-2021.5.30-py36h9f7ea03_0 pip pkgs/main/win-32::pip-21.2.2-py36h9f7ea03_0 python pkgs/main/win-32::python-3.6.2-hb0ff576_15 setuptools pkgs/main/win-32::setuptools-58.0.4-py36h9f7ea03_0 vc pkgs/main/win-32::vc-14.2-h21ff451_1 vs2015_runtime pkgs/main/win-32::vs2015_runtime-14.27.29016-h5e58377_2 wheel pkgs/main/noarch::wheel-0.37.1-pyhd3eb1b0_0 wincertstore pkgs/main/win-32::wincertstore-0.2-py36hcdd9a18_0 Proceed ([y]/n)? y Downloading and Extracting Packages vc-14.2 | 8 KB | ############################################################################ | 100% vs2015_runtime-14.27 | 1000 KB | ############################################################################ | 100% certifi-2021.5.30 | 140 KB | ############################################################################ | 100% setuptools-58.0.4 | 777 KB | ############################################################################ | 100% wincertstore-0.2 | 14 KB | ############################################################################ | 100% pip-21.2.2 | 1.8 MB | ############################################################################ | 100% python-3.6.2 | 12.8 MB | ############################################################################ | 100% wheel-0.37.1 | 33 KB | ############################################################################ | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done # # To activate this environment, use # # $ conda activate py_pc032_03_06_02 # # To deactivate an active environment, use # # $ conda deactivate Retrieving notices: ...working... done (base) [cfati@CFATI-5510-0:C:\Users\cfati]> :: RESET ENVIRONMENT VARIABLE (to avoid any future problems in this terminal) (base) [cfati@CFATI-5510-0:C:\Users\cfati]> set CONDA_FORCE_32BIT= (base) [cfati@CFATI-5510-0:C:\Users\cfati]> conda env list # conda environments: # F:\Install\pc032\Intel\OneAPI\Version\intelpython\python3.7 F:\Install\pc032\Intel\OneAPI\Version\intelpython\python3.7\envs\2021.1.1 base * f:\Install\pc064\Anaconda\Anaconda\Version py_pc032_03_06_02 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc032_03_06_02 py_pc064_03_06_02 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_06_02 py_pc064_03_08_08 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_08_08 py_pc064_03_10_00 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_10_00 Verify the created environments (check [SO]: How do I determine if my python shell is executing in 32bit or 64bit mode on OS X? 
(@CristiFati's answer) for more details): (base) [cfati@CFATI-5510-0:C:\Users\cfati]> :: Activate pc064 env (base) [cfati@CFATI-5510-0:C:\Users\cfati]> conda activate py_pc064_03_06_02 (py_pc064_03_06_02) [cfati@CFATI-5510-0:C:\Users\cfati]> python Python 3.6.2 |Anaconda, Inc.| (default, Sep 30 2017, 11:52:29) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import ctypes as ct >>> print(ct.sizeof(ct.c_void_p) * 8) 64 >>> ^Z (py_pc064_03_06_02) [cfati@CFATI-5510-0:C:\Users\cfati]> conda deactivate (base) [cfati@CFATI-5510-0:C:\Users\cfati]> :: Activate pc032 env (base) [cfati@CFATI-5510-0:C:\Users\cfati]> conda activate py_pc032_03_06_02 (py_pc032_03_06_02) [cfati@CFATI-5510-0:C:\Users\cfati]> python Python 3.6.2 |Anaconda, Inc.| (default, Sep 30 2017, 11:44:55) [MSC v.1900 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import ctypes as ct >>> print(ct.sizeof(ct.c_void_p) * 8) 32 >>> ^Z (py_pc032_03_06_02) [cfati@CFATI-5510-0:C:\Users\cfati]> conda deactivate (base) [cfati@CFATI-5510-0:C:\Users\cfati]>
How to install a win32 version of python using win64 anaconda
I am trying to set up the covarep software on my win64 machine for a project and need to install 'a Windows 32-bit version of Python 2.7, 3.3, and/or 3.4'. I used conda (platform win-64) to run conda create -n "covarep-env" python=3.4.0 -c free This created an environment that has python version 3.4.0, but this obviously defaults to installing the win-64 version. After following the covarep README instructions and running import covarep_py in python, I get the error RuntimeError: To call deployed MATLAB code on a win32 machine, you must run a win32 version of Python. Details: C:\Program Files (x86)\MATLAB\MATLAB Runtime\v90\runtime\win32 Q: Is there a way to specify the win32 platform version of python when running conda create?
[ "You must set the CONDA_FORCE_32BIT environment variable (got it from [YouTube]: DotPi - Create 32-bit Python Environments from a 64-bit Conda Installation) before creating the environment (not related to (previous) \"environment variable\").\nUnfortunately the only official reference I could find is [Anaconda.Docs]: Troubleshooting - Using 32- and 64-bit libraries and CONDA_FORCE_32BIT.\nExample (Anaconda Prompt - I used Python 3.6 as an example, as I need it for another task):\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> :: Create a \"regular\" (pc064) environment\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> conda create -n py_pc064_03_06_02 python=3.6.2\nCollecting package metadata (current_repodata.json): done\nSolving environment: failed with repodata from current_repodata.json, will retry with next repodata source.\nCollecting package metadata (repodata.json): done\nSolving environment: done\n\n\n==> WARNING: A newer version of conda exists. <==\n current version: 4.14.0\n latest version: 22.9.0\n\nPlease update conda by running\n\n $ conda update -n base -c defaults conda\n\n\n\n## Package Plan ##\n\n environment location: f:\\Install\\pc064\\Anaconda\\Anaconda\\Version\\envs\\py_pc064_03_06_02\n\n added / updated specs:\n - python=3.6.2\n\n\nThe following NEW packages will be INSTALLED:\n\n certifi pkgs/main/win-64::certifi-2021.5.30-py36haa95532_0\n pip pkgs/main/win-64::pip-21.2.2-py36haa95532_0\n python pkgs/main/win-64::python-3.6.2-h09676a0_15\n setuptools pkgs/main/win-64::setuptools-58.0.4-py36haa95532_0\n vc pkgs/main/win-64::vc-14.2-h21ff451_1\n vs2015_runtime pkgs/main/win-64::vs2015_runtime-14.27.29016-h5e58377_2\n wheel pkgs/main/noarch::wheel-0.37.1-pyhd3eb1b0_0\n wincertstore pkgs/main/win-64::wincertstore-0.2-py36h7fe50ca_0\n\n\nProceed ([y]/n)? y\n\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n#\n# To activate this environment, use\n#\n# $ conda activate py_pc064_03_06_02\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n\nRetrieving notices: ...working... 
done\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]>\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> :: SET ENVIRONMENT VARIABLE\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> set CONDA_FORCE_32BIT=1\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> :: Create a funky (pc032) environment\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> conda create -n py_pc032_03_06_02 python=3.6.2\nCollecting package metadata (current_repodata.json): done\nSolving environment: failed with repodata from current_repodata.json, will retry with next repodata source.\nCollecting package metadata (repodata.json): done\nSolving environment: done\n\n## Package Plan ##\n\n environment location: f:\\Install\\pc064\\Anaconda\\Anaconda\\Version\\envs\\py_pc032_03_06_02\n\n added / updated specs:\n - python=3.6.2\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n certifi-2021.5.30 | py36h9f7ea03_0 140 KB\n pip-21.2.2 | py36h9f7ea03_0 1.8 MB\n python-3.6.2 | hb0ff576_15 12.8 MB\n setuptools-58.0.4 | py36h9f7ea03_0 777 KB\n vc-14.2 | h21ff451_1 8 KB\n vs2015_runtime-14.27.29016 | h5e58377_2 1000 KB\n wheel-0.37.1 | pyhd3eb1b0_0 33 KB\n wincertstore-0.2 | py36hcdd9a18_0 14 KB\n ------------------------------------------------------------\n Total: 16.5 MB\n\nThe following NEW packages will be INSTALLED:\n\n certifi pkgs/main/win-32::certifi-2021.5.30-py36h9f7ea03_0\n pip pkgs/main/win-32::pip-21.2.2-py36h9f7ea03_0\n python pkgs/main/win-32::python-3.6.2-hb0ff576_15\n setuptools pkgs/main/win-32::setuptools-58.0.4-py36h9f7ea03_0\n vc pkgs/main/win-32::vc-14.2-h21ff451_1\n vs2015_runtime pkgs/main/win-32::vs2015_runtime-14.27.29016-h5e58377_2\n wheel pkgs/main/noarch::wheel-0.37.1-pyhd3eb1b0_0\n wincertstore pkgs/main/win-32::wincertstore-0.2-py36hcdd9a18_0\n\n\nProceed ([y]/n)? y\n\n\nDownloading and Extracting Packages\nvc-14.2 | 8 KB | ############################################################################ | 100%\nvs2015_runtime-14.27 | 1000 KB | ############################################################################ | 100%\ncertifi-2021.5.30 | 140 KB | ############################################################################ | 100%\nsetuptools-58.0.4 | 777 KB | ############################################################################ | 100%\nwincertstore-0.2 | 14 KB | ############################################################################ | 100%\npip-21.2.2 | 1.8 MB | ############################################################################ | 100%\npython-3.6.2 | 12.8 MB | ############################################################################ | 100%\nwheel-0.37.1 | 33 KB | ############################################################################ | 100%\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n#\n# To activate this environment, use\n#\n# $ conda activate py_pc032_03_06_02\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n\nRetrieving notices: ...working... 
done\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> :: RESET ENVIRONMENT VARIABLE (to avoid any future problems in this terminal)\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> set CONDA_FORCE_32BIT=\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> conda env list\n# conda environments:\n#\n F:\\Install\\pc032\\Intel\\OneAPI\\Version\\intelpython\\python3.7\n F:\\Install\\pc032\\Intel\\OneAPI\\Version\\intelpython\\python3.7\\envs\\2021.1.1\nbase * f:\\Install\\pc064\\Anaconda\\Anaconda\\Version\npy_pc032_03_06_02 f:\\Install\\pc064\\Anaconda\\Anaconda\\Version\\envs\\py_pc032_03_06_02\npy_pc064_03_06_02 f:\\Install\\pc064\\Anaconda\\Anaconda\\Version\\envs\\py_pc064_03_06_02\npy_pc064_03_08_08 f:\\Install\\pc064\\Anaconda\\Anaconda\\Version\\envs\\py_pc064_03_08_08\npy_pc064_03_10_00 f:\\Install\\pc064\\Anaconda\\Anaconda\\Version\\envs\\py_pc064_03_10_00\n\n\nVerify the created environments (check [SO]: How do I determine if my python shell is executing in 32bit or 64bit mode on OS X? (@CristiFati's answer) for more details):\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> :: Activate pc064 env\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> conda activate py_pc064_03_06_02\n\n(py_pc064_03_06_02) [cfati@CFATI-5510-0:C:\\Users\\cfati]> python\nPython 3.6.2 |Anaconda, Inc.| (default, Sep 30 2017, 11:52:29) [MSC v.1900 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import ctypes as ct\n>>> print(ct.sizeof(ct.c_void_p) * 8)\n64\n>>> ^Z\n\n\n(py_pc064_03_06_02) [cfati@CFATI-5510-0:C:\\Users\\cfati]> conda deactivate\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> :: Activate pc032 env\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]> conda activate py_pc032_03_06_02\n\n(py_pc032_03_06_02) [cfati@CFATI-5510-0:C:\\Users\\cfati]> python\nPython 3.6.2 |Anaconda, Inc.| (default, Sep 30 2017, 11:44:55) [MSC v.1900 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import ctypes as ct\n>>> print(ct.sizeof(ct.c_void_p) * 8)\n32\n>>> ^Z\n\n\n(py_pc032_03_06_02) [cfati@CFATI-5510-0:C:\\Users\\cfati]> conda deactivate\n\n(base) [cfati@CFATI-5510-0:C:\\Users\\cfati]>\n\n\n" ]
[ 1 ]
[]
[]
[ "anaconda", "anaconda3", "python", "windows" ]
stackoverflow_0074479238_anaconda_anaconda3_python_windows.txt
Q: Problem with parent-child class and turtle, kernel says it is an error in the turtle bib Below is the code I have and the error which is displayed is: turtle.Vec2D() argument after * must be an iterable, not int. The task is to create a square, triangle, polygon and rectangle. The properties should be put together in a parent class. Each other class should be the child class from the class GeometricObject (the parent class). import math import turtle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class GeometricObject: def __init__(self, starting_angle = 45, side_length = 100, position = (0,0)): self.side_length = side_length self.starting_angle = starting_angle self.position = position class Square(GeometricObject): def __init__(self, side_length, position, starting_angle, turn = 90): super().__init__(side_length, position, starting_angle) self.turn = turn def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(4): turtle.forward(self.side_length) turtle.left(self.turn) self.starting_angle = 0 turtle.setheading(0) def calculate_area(self): return math.sqrt(self.side_length) def move_to_position(self, new_position = (100, 0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def set_starting_angle(self, starting_angle = 45): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class Rectangle(GeometricObject): def __init__(self, side_length, position, starting_angle, width = 100): super().__init__(side_length, position, starting_angle) self.width = width def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(2): turtle.forward(self.side_length) turtle.left(90) turtle.forward(self.width) turtle.left(90) self.starting_angle = 0 turtle.setheading(0) def move_to_position(self, new_position = (0, 0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def calculate_area(self): print(self.side_length * self.width) def set_starting_angle(self, starting_angle = 45): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class Triangle(GeometricObject): def __init__(self, side_length, position, starting_angle): super().__init__(side_length, position, starting_angle) pass def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(3): turtle.forward(self.side_length) turtle.left(120) self.starting_angle = 0 turtle.setheading(0) def move_to_position(self, new_position = (100, 0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def calculate_area(self): print(self.side_length * (self.side_length / 2)) def set_starting_angle(self, starting_angle): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class RegularPolygon(GeometricObject): def __init__(self, side_length, position, starting_angle, n = 6): super().__init__(side_length, position, starting_angle) self.n = n def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(self.n): turtle.forward(self.side_length) turtle.left(360 / self.n) 
self.starting_angle = 0 turtle.setheading(0) def move_to_position(self, new_position = (100, 0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def calculate_area(self): #print((3 * math.sqrt(3) * (self.side_length **2)) / 2) print((self.n / 4) * math.cot(180 / self.n) * math.sqrt(self.side_length)) def set_starting_angle(self, starting_angle = 45): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ def main(): # ----- IGNORE THIS PART --------------------------------- wn = turtle.Screen() rootwindow = wn.getcanvas().winfo_toplevel() rootwindow.call('wm', 'attributes', '.', '-topmost', '1') rootwindow.call('wm', 'attributes', '.', '-topmost', '0') # ----- IGNORE THIS PART --------------------------------- rect1 = Rectangle(60, (0,0), 45) rect1.set_starting_angle(90) rect1.draw() square1 = Square(60, (100, 200), 45) square1.set_starting_angle(45) square1.draw() regpol= RegularPolygon() regpol.set_starting_angle(180) regpol.draw() tri = Triangle() tri.set_starting_angle(239) tri.draw() wn.mainloop() turtle.done() main() I wanted turtle to draw all of the objects I created. A: The problem appears to be that you're playing fast and loose with argument order: class GeometricObject: def __init__(self, starting_angle = 45, side_length = 100, position = (0,0)): class Square(GeometricObject): def __init__(self, side_length, position, starting_angle, turn = 90): super().__init__(side_length, position, starting_angle) class Rectangle(GeometricObject): def __init__(self, side_length, position, starting_angle, width = 100): super().__init__(side_length, position, starting_angle) Since these classes are calling the super.__init__() of GeometricObject, their arguments should match order-wise. A simple fix might be: class GeometricObject: def __init__(self, side_length=100, position=(0,0), starting_angle=45): Which would get you further along until you break on this call: regpol = RegularPolygon() which is lacking required arguments.
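To make the fix in the answer concrete, here is a minimal, hedged sketch: the parent's parameter order now matches how the children call super().__init__(), and the polygon is instantiated with explicit arguments instead of none. Only one child class is shown; the same pattern applies to Square, Rectangle and Triangle.

import turtle

class GeometricObject:
    # Parameter order now matches the children's super().__init__() calls
    def __init__(self, side_length=100, position=(0, 0), starting_angle=45):
        self.side_length = side_length
        self.position = position
        self.starting_angle = starting_angle

    def move_to_position(self, new_position):
        turtle.penup()
        turtle.goto(new_position)  # a tuple, so turtle's Vec2D is satisfied
        turtle.pendown()

class RegularPolygon(GeometricObject):
    def __init__(self, side_length=100, position=(0, 0), starting_angle=45, n=6):
        super().__init__(side_length, position, starting_angle)
        self.n = n

    def draw(self):
        turtle.setheading(self.starting_angle)
        self.move_to_position(self.position)
        for _ in range(self.n):
            turtle.forward(self.side_length)
            turtle.left(360 / self.n)

# Explicit arguments avoid both the missing-argument TypeError and the
# Vec2D error caused by an int landing where a position tuple was expected.
regpol = RegularPolygon(60, (-100, -100), 0)
regpol.draw()
turtle.done()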
Problem with parent-child class and turtle, kernel says it is an error in the turtle lib
Below is the code I have and the error which is displayed is: turtle.Vec2D() argument after * must be an iterable, not int. The task is to create a square, triangle, polygon and rectangle. The properties should be put together in a parent class. Each other class should be the child class from the class GeometricObject (the parent class). import math import turtle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class GeometricObject: def __init__(self, starting_angle = 45, side_length = 100, position = (0,0)): self.side_length = side_length self.starting_angle = starting_angle self.position = position class Square(GeometricObject): def __init__(self, side_length, position, starting_angle, turn = 90): super().__init__(side_length, position, starting_angle) self.turn = turn def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(4): turtle.forward(self.side_length) turtle.left(self.turn) self.starting_angle = 0 turtle.setheading(0) def calculate_area(self): return math.sqrt(self.side_length) def move_to_position(self, new_position = (100, 0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def set_starting_angle(self, starting_angle = 45): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class Rectangle(GeometricObject): def __init__(self, side_length, position, starting_angle, width = 100): super().__init__(side_length, position, starting_angle) self.width = width def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(2): turtle.forward(self.side_length) turtle.left(90) turtle.forward(self.width) turtle.left(90) self.starting_angle = 0 turtle.setheading(0) def move_to_position(self, new_position = (0, 0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def calculate_area(self): print(self.side_length * self.width) def set_starting_angle(self, starting_angle = 45): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class Triangle(GeometricObject): def __init__(self, side_length, position, starting_angle): super().__init__(side_length, position, starting_angle) pass def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(3): turtle.forward(self.side_length) turtle.left(120) self.starting_angle = 0 turtle.setheading(0) def move_to_position(self, new_position = (100, 0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def calculate_area(self): print(self.side_length * (self.side_length / 2)) def set_starting_angle(self, starting_angle): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ class RegularPolygon(GeometricObject): def __init__(self, side_length, position, starting_angle, n = 6): super().__init__(side_length, position, starting_angle) self.n = n def draw(self): turtle.setheading(self.starting_angle) self.move_to_position(self.position) for i in range(self.n): turtle.forward(self.side_length) turtle.left(360 / self.n) self.starting_angle = 0 turtle.setheading(0) def move_to_position(self, new_position = (100, 
0)): turtle.penup() turtle.goto(new_position) turtle.pendown() def calculate_area(self): #print((3 * math.sqrt(3) * (self.side_length **2)) / 2) print((self.n / 4) * math.cot(180 / self.n) * math.sqrt(self.side_length)) def set_starting_angle(self, starting_angle = 45): self.starting_angle = starting_angle #------------------------------------------------------------------------ #------------------------------------------------------------------------ def main(): # ----- IGNORE THIS PART --------------------------------- wn = turtle.Screen() rootwindow = wn.getcanvas().winfo_toplevel() rootwindow.call('wm', 'attributes', '.', '-topmost', '1') rootwindow.call('wm', 'attributes', '.', '-topmost', '0') # ----- IGNORE THIS PART --------------------------------- rect1 = Rectangle(60, (0,0), 45) rect1.set_starting_angle(90) rect1.draw() square1 = Square(60, (100, 200), 45) square1.set_starting_angle(45) square1.draw() regpol= RegularPolygon() regpol.set_starting_angle(180) regpol.draw() tri = Triangle() tri.set_starting_angle(239) tri.draw() wn.mainloop() turtle.done() main() I wanted turtle to draw all of the objects I created.
[ "The problem appears to be that you're playing fast and loose with argument order:\nclass GeometricObject: \n def __init__(self, starting_angle = 45, side_length = 100, position = (0,0)): \n \nclass Square(GeometricObject):\n def __init__(self, side_length, position, starting_angle, turn = 90):\n super().__init__(side_length, position, starting_angle)\n\nclass Rectangle(GeometricObject):\n def __init__(self, side_length, position, starting_angle, width = 100):\n super().__init__(side_length, position, starting_angle)\n\nSince these classes are calling the super.__init__() of GeometricObject, their arguments should match order-wise. A simple fix might be:\nclass GeometricObject: \n def __init__(self, side_length=100, position=(0,0), starting_angle=45): \n\nWhich would get you further along until you break on this call:\nregpol = RegularPolygon()\n\nwhich is lacking required arguments.\n" ]
[ 0 ]
[]
[]
[ "parent_child", "python", "turtle_graphics" ]
stackoverflow_0074477328_parent_child_python_turtle_graphics.txt
Q: Cannot pickle Tensorflow object in Python - TypeError: can't pickle _thread._local objects I want to pickle the history object after running a keras fit on tensorflow. But I am getting an error. import gzip import numpy as np import os import pickle import tensorflow as tf from tensorflow import keras with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, test_set = pickle.load(f, encoding='latin1') X_train = np.asarray(train_set[0]) y_train = np.asarray(train_set[1]) X_test = np.asarray(test_set[0]) y_test = np.asarray(test_set[1]) X_valid, X_train = X_train[:5000]/255.0, X_train[5000:]/255.0 y_valid, y_train = y_train[:5000], y_train[5000:] class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot'] model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[28,28])) model.add(keras.layers.Dense(300, activation = 'relu')) model.add(keras.layers.Dense(100, activation = 'relu')) model.add(keras.layers.Dense(10, activation = 'softmax')) model.summary() model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) history = model.fit(X_train, y_train, epochs=1, validation_data =(X_valid, y_valid)) if not os.path.isdir('models'): os.mkdir('models') model.save('models/basic.h5') with open('models/basic_history.pickle', 'wb') as f: pickle.dump(history, f) It gives me the following error: Traceback (most recent call last): File "main.py", line 69, in <module> pickle.dump(history, f) TypeError: can't pickle _thread._local objects PS: To get the code to run, download the fashion_mnist data: https://s3.amazonaws.com/img-datasets/mnist.pkl.g A: As Karl suggested, the history object cannot be pickled. But it's dictionary can: with open('models/basic_history.pickle', 'wb') as f: pickle.dump(history.history, f) A: joblib also worked for me: import joblib model_filename = "lstm.pkl" joblib.dump(history.history, model_filename)
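For completeness, a small round-trip sketch of the accepted approach, pickling history.history (a plain dict of per-epoch lists) rather than the History object; the paths are just examples:

import pickle

# after training: history = model.fit(...)
with open('models/basic_history.pickle', 'wb') as f:
    pickle.dump(history.history, f)

# later, possibly in a fresh process:
with open('models/basic_history.pickle', 'rb') as f:
    restored = pickle.load(f)
print(restored.keys())  # e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])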
Cannot pickle Tensorflow object in Python - TypeError: can't pickle _thread._local objects
I want to pickle the history object after running a keras fit on tensorflow. But I am getting an error. import gzip import numpy as np import os import pickle import tensorflow as tf from tensorflow import keras with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, test_set = pickle.load(f, encoding='latin1') X_train = np.asarray(train_set[0]) y_train = np.asarray(train_set[1]) X_test = np.asarray(test_set[0]) y_test = np.asarray(test_set[1]) X_valid, X_train = X_train[:5000]/255.0, X_train[5000:]/255.0 y_valid, y_train = y_train[:5000], y_train[5000:] class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot'] model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[28,28])) model.add(keras.layers.Dense(300, activation = 'relu')) model.add(keras.layers.Dense(100, activation = 'relu')) model.add(keras.layers.Dense(10, activation = 'softmax')) model.summary() model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) history = model.fit(X_train, y_train, epochs=1, validation_data =(X_valid, y_valid)) if not os.path.isdir('models'): os.mkdir('models') model.save('models/basic.h5') with open('models/basic_history.pickle', 'wb') as f: pickle.dump(history, f) It gives me the following error: Traceback (most recent call last): File "main.py", line 69, in <module> pickle.dump(history, f) TypeError: can't pickle _thread._local objects PS: To get the code to run, download the fashion_mnist data: https://s3.amazonaws.com/img-datasets/mnist.pkl.g
[ "As Karl suggested, the history object cannot be pickled. But it's dictionary can:\nwith open('models/basic_history.pickle', 'wb') as f:\n pickle.dump(history.history, f)\n\n", "joblib also worked for me:\nimport joblib\nmodel_filename = \"lstm.pkl\"\njoblib.dump(history.history, model_filename)\n\n" ]
[ 8, 1 ]
[]
[]
[ "pickle", "python", "tensorflow" ]
stackoverflow_0059326551_pickle_python_tensorflow.txt
Q: Identify uploaded file type from buffer I'm using django to accept files from the user (mostly csv, text and excel). I need to detect the file type for further processing Using python-magic I'm getting different results for reading a file and a buffer import magic magic.from_file('/testfiles/xls.xls',mime=True) 'application/vnd.ms-excel' f = open('/testfiles/xls.xls','r') magic.from_buffer(f,mime=True) *** TypeError: object of type 'file' has no len() magic.from_buffer(f.read(2048),mime=True) 'application/octet-stream' f = open('/testfiles/csv.csv','r') magic.from_buffer(f.read(1024),mime=True) 'text/plain' magic.from_file('/testfiles/csv.csv',mime=True) 'text/plain' I got the idea for f.read(1024) from this question I realize octet-stream indicate a specific application file type but I would like to verify it's excel. Note: Django provides an attribute called content_type for this type of thing but the documentation states that it relies on the file extension and should be verified. my question is, What is the best way to identify the type of an uploaded file ? A: You can use filetype Python Package(pip install filetype). The below code worked for me : import filetype fileinfo = filetype.guess(mock.jpg) #the argument can be buffer or file detectedExt = fileinfo.extension detectedmime = fileinfo.mime filetype package documentation
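As a hedged sketch of how the check could look inside a Django view: read a short binary prefix of the upload, sniff it, then rewind the stream for later processing. The helper name is illustrative; note that filetype only matches binary signatures, so plain text/CSV uploads come back as None and need a separate check.

import filetype

def sniff_upload(uploaded_file):
    """uploaded_file is e.g. request.FILES['file'] in a Django view."""
    kind = filetype.guess(uploaded_file.read(2048))  # sniff a binary prefix
    uploaded_file.seek(0)                            # rewind for later readers
    if kind is None:
        return None, None  # plain text/CSV has no binary signature
    return kind.mime, kind.extension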
Identify uploaded file type from buffer
I'm using django to accept files from the user (mostly csv, text and excel). I need to detect the file type for further processing Using python-magic I'm getting different results for reading a file and a buffer import magic magic.from_file('/testfiles/xls.xls',mime=True) 'application/vnd.ms-excel' f = open('/testfiles/xls.xls','r') magic.from_buffer(f,mime=True) *** TypeError: object of type 'file' has no len() magic.from_buffer(f.read(2048),mime=True) 'application/octet-stream' f = open('/testfiles/csv.csv','r') magic.from_buffer(f.read(1024),mime=True) 'text/plain' magic.from_file('/testfiles/csv.csv',mime=True) 'text/plain' I got the idea for f.read(1024) from this question I realize octet-stream indicates a specific application file type but I would like to verify it's Excel. Note: Django provides an attribute called content_type for this type of thing but the documentation states that it relies on the file extension and should be verified. My question is: what is the best way to identify the type of an uploaded file?
[ "You can use filetype Python Package(pip install filetype). The below code worked for me :\nimport filetype\n\nfileinfo = filetype.guess(mock.jpg) #the argument can be buffer or file\ndetectedExt = fileinfo.extension\ndetectedmime = fileinfo.mime\n\nfiletype package documentation\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0020160548_django_python.txt
Q: set x axis as column names on barplot I have a dataframe such as this: data = {'name': ['Bob', 'Chuck', 'Daren', 'Elisa'], '100m': [19, 14, 12, 11], '200m': [36, 25, 24, 24], '400m': [67, 64, 58, 57], '800m': [117, 120, 123, 121]} df = pd.DataFrame(data) name 100m 200m 400m 800m 1 Bob 19 36 67 117 2 Chuck 14 25 64 120 3 Daren 12 24 58 123 4 Elisa 11 24 57 121 My task is simple: Plot the times (along the y-axis), with the name of the event (100m, 200m, etc. along the x-axis). The hue of each bar should be determined by the 'name' column, and look something like this. Furthermore, I would like to overlay the results (not stack). However, there is no functionality in seaborn nor matplotlib to do this. A: Instead of using seaborn, which is an API for matplotlib, plot df directly with pandas.DataFrame.plot. matplotlib is the default plotting backend for pandas. Tested in python 3.11, pandas 1.5.1, matplotlib 3.6.2, seaborn 0.12.1 ax = df.set_index('name').T.plot.bar(alpha=.7, rot=0, stacked=True) seaborn.barplot does not have an option for stacked bars, however, this can be implemented with seaborn.histplot, as shown in Stacked Bar Chart with Centered Labels. df must be converted from a wide format to a long format with df.melt # melt the dataframe dfm = df.melt(id_vars='name') # plot ax = sns.histplot(data=dfm, x='variable', weights='value', hue='name', discrete=True, multiple='stack')
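On the overlay (as opposed to stack) requirement in the question: plain matplotlib can draw each runner's bars at the same x positions with some transparency. A minimal sketch, assuming the df defined above:

import matplotlib.pyplot as plt

events = ['100m', '200m', '400m', '800m']
for _, row in df.iterrows():
    # identical x positions for every runner; alpha keeps overlaps visible
    plt.bar(events, row[events], alpha=0.4, label=row['name'])
plt.xlabel('event')
plt.ylabel('time (s)')
plt.legend()
plt.show()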
set x axis as column names on barplot
I have a dataframe such as this: data = {'name': ['Bob', 'Chuck', 'Daren', 'Elisa'], '100m': [19, 14, 12, 11], '200m': [36, 25, 24, 24], '400m': [67, 64, 58, 57], '800m': [117, 120, 123, 121]} df = pd.DataFrame(data) name 100m 200m 400m 800m 1 Bob 19 36 67 117 2 Chuck 14 25 64 120 3 Daren 12 24 58 123 4 Elisa 11 24 57 121 My task is simple: Plot the times (along the y-axis), with the name of the event (100m, 200m, etc. along the x-axis). The hue of each bar should be determined by the 'name' column, and look something like this. Furthermore, I would like to overlay the results (not stack). However, there is no functionality in seaborn nor matplotlib to do this.
[ "Instead of using seaborn, which is an API for matplotlib, plot df directly with pandas.DataFrame.plot. matplotlib is the default plotting backend for pandas.\nTested in python 3.11, pandas 1.5.1, matplotlib 3.6.2, seaborn 0.12.1\nax = df.set_index('name').T.plot.bar(alpha=.7, rot=0, stacked=True)\n\n\nseaborn.barplot does not have an option for stacked bars, however, this can be implemented with seaborn.histplot, as shown in Stacked Bar Chart with Centered Labels.\ndf must be converted from a wide format to a long format with df.melt\n# melt the dataframe\ndfm = df.melt(id_vars='name')\n\n# plot\nax = sns.histplot(data=dfm, x='variable', weights='value', hue='name', discrete=True, multiple='stack')\n\n\n" ]
[ 2 ]
[]
[]
[ "matplotlib", "pandas", "python", "seaborn", "stacked_bar_chart" ]
stackoverflow_0074479784_matplotlib_pandas_python_seaborn_stacked_bar_chart.txt
Q: How to give image as an user input in api request using flask in python I am creating an API where I want to give image as an user input. I know request.args.get take user input in dictionary format. I want to know if in any way user can give image as input to api in below api script. My image path is E:\env\abc.png a.py import pandas as pd from datetime import datetime from pandas import json_normalize from flask import request, Flask, Response from flask_cors import CORS app = Flask(__name__) CORS(app) @app.route("/api_endpoint", methods=["GET"]) def function_for_api(): user_input_image = request.args.get('user_input_image') print("USER IMAGE",user_input_image) status = 200 resJson = "python_file_name output will be here in json format" return Response(response=resJson, status=status, mimetype="application/json") if __name__ == "__main__": app.run() A: First, this method should be a POST and not a GET. You are putting information on the server. Second, you want to read the file from the files parameter and not one of the query parameters. @app.route("/api_endpoint", methods=["POST"]) def function_for_api(): img = request.files['file'] print(img.filename) return Response(status=200) Here is an example of you could call this function uploading an image. import requests pic_file = "picture_filename" # post a request with file and receive response with open(pic_file, 'rb') as f: resp = requests.post(f"{your_server_address}/api_endpoint", files={'file': f})
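A slightly fuller sketch of the server side that also saves the upload under a sanitized name before responding; secure_filename comes from Werkzeug, which Flask already depends on, while the upload directory and JSON payload are illustrative only:

import os
from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_DIR = "uploads"  # illustrative path
os.makedirs(UPLOAD_DIR, exist_ok=True)

@app.route("/api_endpoint", methods=["POST"])
def function_for_api():
    img = request.files.get("file")
    if img is None or img.filename == "":
        return jsonify(error="no file provided"), 400
    path = os.path.join(UPLOAD_DIR, secure_filename(img.filename))
    img.save(path)
    return jsonify(saved_as=path), 200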
How to give an image as user input in an API request using Flask in Python
I am creating an API where I want to give image as an user input. I know request.args.get take user input in dictionary format. I want to know if in any way user can give image as input to api in below api script. My image path is E:\env\abc.png a.py import pandas as pd from datetime import datetime from pandas import json_normalize from flask import request, Flask, Response from flask_cors import CORS app = Flask(__name__) CORS(app) @app.route("/api_endpoint", methods=["GET"]) def function_for_api(): user_input_image = request.args.get('user_input_image') print("USER IMAGE",user_input_image) status = 200 resJson = "python_file_name output will be here in json format" return Response(response=resJson, status=status, mimetype="application/json") if __name__ == "__main__": app.run()
[ "First, this method should be a POST and not a GET. You are putting information on the server. Second, you want to read the file from the files parameter and not one of the query parameters.\n@app.route(\"/api_endpoint\", methods=[\"POST\"])\ndef function_for_api():\n img = request.files['file']\n print(img.filename)\n return Response(status=200)\n\nHere is an example of you could call this function uploading an image.\nimport requests\npic_file = \"picture_filename\"\n# post a request with file and receive response\nwith open(pic_file, 'rb') as f:\n resp = requests.post(f\"{your_server_address}/api_endpoint\", files={'file': f})\n\n" ]
[ 1 ]
[]
[]
[ "api", "flask", "python", "rest" ]
stackoverflow_0074479707_api_flask_python_rest.txt
Q: In dataframe, how to speed up recognizing rows that have more than 5 consecutive previous values with same sign? I have a dataframe like this. val consecutive 0 0.0001 0.0 1 0.0008 0.0 2 -0.0001 0.0 3 0.0005 0.0 4 0.0008 0.0 5 0.0002 0.0 6 0.0012 0.0 7 0.0012 1.0 8 0.0007 1.0 9 0.0004 1.0 10 0.0002 1.0 11 0.0000 0.0 12 0.0015 0.0 13 -0.0005 0.0 14 -0.0003 0.0 15 0.0001 0.0 16 0.0001 0.0 17 0.0003 0.0 18 -0.0003 0.0 19 -0.0001 0.0 20 0.0000 0.0 21 0.0000 0.0 22 -0.0008 0.0 23 -0.0008 0.0 24 -0.0001 0.0 25 -0.0006 0.0 26 -0.0010 1.0 27 0.0002 0.0 28 -0.0003 0.0 29 -0.0008 0.0 30 -0.0010 0.0 31 -0.0003 0.0 32 -0.0005 1.0 33 -0.0012 1.0 34 -0.0002 1.0 35 0.0000 0.0 36 -0.0018 0.0 37 -0.0009 0.0 38 -0.0007 0.0 39 0.0000 0.0 40 -0.0011 0.0 41 -0.0006 0.0 42 -0.0010 0.0 43 -0.0015 0.0 44 -0.0012 1.0 45 -0.0011 1.0 46 -0.0010 1.0 47 -0.0014 1.0 48 -0.0011 1.0 49 -0.0017 1.0 50 -0.0015 1.0 51 -0.0010 1.0 52 -0.0014 1.0 53 -0.0012 1.0 54 -0.0004 1.0 55 -0.0007 1.0 56 -0.0011 1.0 57 -0.0008 1.0 58 -0.0006 1.0 59 0.0002 0.0 The column 'consecutive' is what I want to compute. It is '1' when current row has more than 5 consecutive previous values with same sign (either positive or negative, including it self). What I've tried is: df['consecutive'] = df['val'].rolling(5).apply( lambda arr: np.all(arr > 0) or np.all(arr < 0), raw=True ).replace(np.nan, 0) But it's too slow for large dataset. Do you have any idea how to speed up? A: One option is to avoid the use of apply() altogether. The main idea is to create 2 'helper' columns: sign: boolean Series indicating if value is positive (True) or negative (False) id: group identical consecutive occurences together Finally, we can groupby the id and use cumulative count to isolate the rows which have 4 or more previous rows with the same sign (i.e. get all rows with 5 consecutive sign values). # Setup test dataset import pandas as pd import numpy as np vals = np.random.randn(20000) df = pd.DataFrame({'val': vals}) # Create the helper columns sign = df['val'] >= 0 df['id'] = sign.ne(sign.shift()).cumsum() # Count the ids and set flag to True if the cumcount is above our desired value df['consecutive'] = df.groupby('id').cumcount() >= 4 Benchmarking On my system I get the following benchmarks: sign = df['val'] >= 0 # 92 µs ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) df['id'] = sign.ne(sign.shift()).cumsum() # 1.06 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) df['consecutive'] = df.groupby('id').cumcount() >= 4 # 3.36 ms ± 293 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) Thus in total we get an average runtime of: 4.51 ms For reference, your solution and @Emma 's solution ran respectively on my system in: # 287 ms ± 108 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # 121 ms ± 13.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) A: Not sure this is fast enough for your data size but using min, max seems faster. With 20k rows, df['consecutive'] = df['val'].rolling(5).apply( lambda arr: np.all(arr > 0) or np.all(arr < 0), raw=True ) # 144 ms ± 2.32 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) df['consecutive'] = df['val'].rolling(5).apply( lambda arr: (arr.min() > 0 or arr.max() < 0), raw=True ) # 57.1 ms ± 85.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
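One more hedged variant of the vectorized idea: NumPy's sliding_window_view (NumPy 1.20+) checks every length-5 window at once. Here a zero breaks a run, which differs slightly from the >= 0 convention used in the first answer, so adjust the sign handling to whichever rule the data needs. Assuming the df from the question:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def consecutive_flag(vals, window=5):
    sign = np.sign(vals)                     # -1, 0 or +1 per value
    win = sliding_window_view(sign, window)  # shape (n - window + 1, window)
    same = (win == win[:, :1]).all(axis=1) & (win[:, 0] != 0)
    # the first window-1 rows can never complete a run
    return np.concatenate([np.zeros(window - 1, dtype=bool), same])

df['consecutive'] = consecutive_flag(df['val'].to_numpy()).astype(float)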
In a dataframe, how to speed up recognizing rows that have more than 5 consecutive previous values with the same sign?
I have a dataframe like this. val consecutive 0 0.0001 0.0 1 0.0008 0.0 2 -0.0001 0.0 3 0.0005 0.0 4 0.0008 0.0 5 0.0002 0.0 6 0.0012 0.0 7 0.0012 1.0 8 0.0007 1.0 9 0.0004 1.0 10 0.0002 1.0 11 0.0000 0.0 12 0.0015 0.0 13 -0.0005 0.0 14 -0.0003 0.0 15 0.0001 0.0 16 0.0001 0.0 17 0.0003 0.0 18 -0.0003 0.0 19 -0.0001 0.0 20 0.0000 0.0 21 0.0000 0.0 22 -0.0008 0.0 23 -0.0008 0.0 24 -0.0001 0.0 25 -0.0006 0.0 26 -0.0010 1.0 27 0.0002 0.0 28 -0.0003 0.0 29 -0.0008 0.0 30 -0.0010 0.0 31 -0.0003 0.0 32 -0.0005 1.0 33 -0.0012 1.0 34 -0.0002 1.0 35 0.0000 0.0 36 -0.0018 0.0 37 -0.0009 0.0 38 -0.0007 0.0 39 0.0000 0.0 40 -0.0011 0.0 41 -0.0006 0.0 42 -0.0010 0.0 43 -0.0015 0.0 44 -0.0012 1.0 45 -0.0011 1.0 46 -0.0010 1.0 47 -0.0014 1.0 48 -0.0011 1.0 49 -0.0017 1.0 50 -0.0015 1.0 51 -0.0010 1.0 52 -0.0014 1.0 53 -0.0012 1.0 54 -0.0004 1.0 55 -0.0007 1.0 56 -0.0011 1.0 57 -0.0008 1.0 58 -0.0006 1.0 59 0.0002 0.0 The column 'consecutive' is what I want to compute. It is '1' when current row has more than 5 consecutive previous values with same sign (either positive or negative, including it self). What I've tried is: df['consecutive'] = df['val'].rolling(5).apply( lambda arr: np.all(arr > 0) or np.all(arr < 0), raw=True ).replace(np.nan, 0) But it's too slow for large dataset. Do you have any idea how to speed up?
[ "One option is to avoid the use of apply() altogether.\nThe main idea is to create 2 'helper' columns:\n\nsign: boolean Series indicating if value is positive (True) or negative (False)\nid: group identical consecutive occurences together\n\nFinally, we can groupby the id and use cumulative count to isolate the rows which have 4 or more previous rows with the same sign (i.e. get all rows with 5 consecutive sign values).\n# Setup test dataset\nimport pandas as pd\nimport numpy as np\n\nvals = np.random.randn(20000)\ndf = pd.DataFrame({'val': vals})\n\n# Create the helper columns\nsign = df['val'] >= 0\ndf['id'] = sign.ne(sign.shift()).cumsum()\n\n# Count the ids and set flag to True if the cumcount is above our desired value\ndf['consecutive'] = df.groupby('id').cumcount() >= 4\n\nBenchmarking\nOn my system I get the following benchmarks:\nsign = df['val'] >= 0\n\n# 92 µs ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n\ndf['id'] = sign.ne(sign.shift()).cumsum()\n\n# 1.06 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\ndf['consecutive'] = df.groupby('id').cumcount() >= 4\n\n# 3.36 ms ± 293 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nThus in total we get an average runtime of: 4.51 ms\nFor reference, your solution and @Emma 's solution ran respectively on my system in:\n# 287 ms ± 108 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n# 121 ms ± 13.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n", "Not sure this is fast enough for your data size but using min, max seems faster.\nWith 20k rows,\ndf['consecutive'] = df['val'].rolling(5).apply(\n lambda arr: np.all(arr > 0) or np.all(arr < 0), raw=True\n)\n\n# 144 ms ± 2.32 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\ndf['consecutive'] = df['val'].rolling(5).apply(\n lambda arr: (arr.min() > 0 or arr.max() < 0), raw=True\n)\n\n# 57.1 ms ± 85.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n" ]
[ 1, 0 ]
[]
[]
[ "data_cleaning", "dataframe", "pandas", "python" ]
stackoverflow_0074478178_data_cleaning_dataframe_pandas_python.txt
Q: Parallelize nonlinear regression using multiprocessing or MPI I have a simple nonlinear regression. It runs sequentially fine except for taking long time to complete. The process can speed up using MPI or multiprocess. How should I approach applying them to run my code? Here is my code for nonlinear regression: data = pd.read_csv('....csv') X = data.iloc[:, 0] Y = data.iloc[:, 1] #Model build a = 0 b = 0 c = 0 L = 0.0001 epochs = 10000 n = float(len(X)) #Perform Gradient Descent for i in range(epochs): Y_pred = a*X*X + b*X + c # The current predicted value of Y D_a = (-2/n) * sum(X*X * (Y - Y_pred)) # Derivative wrt a D_b = (-2/n) * sum(X * (Y - Y_pred)) # Derivative wrt b D_c = (-2/n) * sum(Y - Y_pred) # Derivative wrt c a = a - L * D_a # Update a b = b - L * D_b # Update b c = c - L * D_c # Update c print (a, b, c) #Predictions Y_pred = a*X*X + b*X + c A: Gradient descent is sequential by nature, you need the parameters from previous step in order to make update at the current step. Few optimizations you can add to improve your code include using numpy arrays instead of pandas series, and also moving the X*X outside the for loop as suggested by @Victor Eijkhout. Here's a minimal working example : import pandas as pd import numpy as np np.random.seed(123) # generate a fake dataset data = pd.DataFrame({ "x": np.random.randn(10_000), "y": np.random.randn(10_000) }) def train_v1(data): X = data.iloc[:, 0] Y = data.iloc[:, 1] #Model build a = 0 b = 0 c = 0 L = 0.0001 epochs = 10_000 n = float(len(X)) #Perform Gradient Descent for i in range(epochs): Y_pred = a*X*X + b*X + c # The current predicted value of Y D_a = (-2/n) * sum(X*X * (Y - Y_pred)) # Derivative wrt a D_b = (-2/n) * sum(X * (Y - Y_pred)) # Derivative wrt b D_c = (-2/n) * sum(Y - Y_pred) # Derivative wrt c a = a - L * D_a # Update a b = b - L * D_b # Update b c = c - L * D_c # Update c print (a, b, c) def train_v2(data): X = data.iloc[:, 0].values Y = data.iloc[:, 1].values X_square = X*X #Model build a = 0 b = 0 c = 0 L = 0.0001 epochs = 10000 n = float(len(X)) #Perform Gradient Descent for i in range(epochs): Y_pred = a*X_square + b*X + c # The current predicted value of Y D_a = (-2/n) * np.sum(X_square * (Y - Y_pred)) # Derivative wrt a D_b = (-2/n) * np.sum(X * (Y - Y_pred)) # Derivative wrt b D_c = (-2/n) * np.sum(Y - Y_pred) # Derivative wrt c a = a - L * D_a # Update a b = b - L * D_b # Update b c = c - L * D_c # Update c print (a, b, c) I created two functions : train_v1 which is exactly the code you provided train_v2 with some optimisations %%timeit train_v1(data) Output : 0.004405914780579786 0.004665404814519434 0.005183005935438013 0.004405914780579786 0.004665404814519434 0.005183005935438013 0.004405914780579786 0.004665404814519434 0.005183005935438013 0.004405914780579786 0.004665404814519434 0.005183005935438013 0.004405914780579786 0.004665404814519434 0.005183005935438013 0.004405914780579786 0.004665404814519434 0.005183005935438013 0.004405914780579786 0.004665404814519434 0.005183005935438013 0.004405914780579786 0.004665404814519434 0.005183005935438013 16.6 s ± 342 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) %%timeit train_v2(data) Output : 0.0044059147805797835 0.004665404814519433 0.005183005935438011 0.0044059147805797835 0.004665404814519433 0.005183005935438011 0.0044059147805797835 0.004665404814519433 0.005183005935438011 0.0044059147805797835 0.004665404814519433 0.005183005935438011 0.0044059147805797835 0.004665404814519433 0.005183005935438011 0.0044059147805797835 0.004665404814519433 0.005183005935438011 0.0044059147805797835 0.004665404814519433 0.005183005935438011 0.0044059147805797835 0.004665404814519433 0.005183005935438011 408 ms ± 2.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) As you can see the gain is huge: 408 milliseconds for the optimized code vs. 16.6 seconds for your original code.
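Since the model is linear in its parameters a, b and c, there is also a direct least-squares baseline worth running before reaching for MPI; it is both exact and fast. A hedged sketch on the same columns, assuming data from the question:

import numpy as np

x = data.iloc[:, 0].to_numpy()
y = data.iloc[:, 1].to_numpy()

# degree-2 least squares: coefficients come back highest power first
a, b, c = np.polyfit(x, y, deg=2)
print(a, b, c)

y_pred = a * x * x + b * x + c
print("MSE:", np.mean((y - y_pred) ** 2))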
Parallelize nonlinear regression using multiprocessing or MPI
I have a simple nonlinear regression. It runs sequentially fine except for taking long time to complete. The process can speed up using MPI or multiprocess. How should I approach applying them to run my code? Here is my code for nonlinear regression: data = pd.read_csv('....csv') X = data.iloc[:, 0] Y = data.iloc[:, 1] #Model build a = 0 b = 0 c = 0 L = 0.0001 epochs = 10000 n = float(len(X)) #Perform Gradient Descent for i in range(epochs): Y_pred = a*X*X + b*X + c # The current predicted value of Y D_a = (-2/n) * sum(X*X * (Y - Y_pred)) # Derivative wrt a D_b = (-2/n) * sum(X * (Y - Y_pred)) # Derivative wrt b D_c = (-2/n) * sum(Y - Y_pred) # Derivative wrt c a = a - L * D_a # Update a b = b - L * D_b # Update b c = c - L * D_c # Update c print (a, b, c) #Predictions Y_pred = a*X*X + b*X + c
[ "Gradient descent is sequential by nature, you need the parameters from previous step in order to make update at the current step.\nFew optimizations you can add to improve your code include using numpy arrays instead of pandas series, and also moving the X*X outside the for loop as suggested by @Victor Eijkhout. Here's a minimal working example :\nimport pandas as pd\nimport numpy as np\nnp.random.seed(123)\n\n# generate a fake dataset\ndata = pd.DataFrame({\n \"x\": np.random.randn(10_000),\n \"y\": np.random.randn(10_000)\n})\n\n\ndef train_v1(data):\n X = data.iloc[:, 0]\n Y = data.iloc[:, 1]\n\n #Model build\n a = 0\n b = 0\n c = 0\n\n L = 0.0001\n epochs = 10_000\n\n n = float(len(X)) \n\n #Perform Gradient Descent \n for i in range(epochs): \n Y_pred = a*X*X + b*X + c # The current predicted value of Y\n D_a = (-2/n) * sum(X*X * (Y - Y_pred)) # Derivative wrt a\n D_b = (-2/n) * sum(X * (Y - Y_pred)) # Derivative wrt b\n D_c = (-2/n) * sum(Y - Y_pred) # Derivative wrt c\n a = a - L * D_a # Update a\n b = b - L * D_b # Update b\n c = c - L * D_c # Update c\n print (a, b, c)\n\n \ndef train_v2(data):\n X = data.iloc[:, 0].values\n Y = data.iloc[:, 1].values\n X_square = X*X\n\n #Model build\n a = 0\n b = 0\n c = 0\n\n L = 0.0001\n epochs = 10000\n\n n = float(len(X)) \n\n #Perform Gradient Descent \n for i in range(epochs): \n Y_pred = a*X_square + b*X + c # The current predicted value of Y\n D_a = (-2/n) * np.sum(X_square * (Y - Y_pred)) # Derivative wrt a\n D_b = (-2/n) * np.sum(X * (Y - Y_pred)) # Derivative wrt b\n D_c = (-2/n) * np.sum(Y - Y_pred) # Derivative wrt c\n a = a - L * D_a # Update a\n b = b - L * D_b # Update b\n c = c - L * D_c # Update c\n print (a, b, c)\n\nI created two functions :\n\ntrain_v1 which is exactly the code you provided\ntrain_v2 with some optimisations\n\n%%timeit\ntrain_v1(data)\n\nOutput :\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n0.004405914780579786 0.004665404814519434 0.005183005935438013\n16.6 s ± 342 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n%%timeit\ntrain_v2(data)\n\nOutput :\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n0.0044059147805797835 0.004665404814519433 0.005183005935438011\n408 ms ± 2.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nAs you can see the gain is huge 408 milliseconds for the optimized code VS 16.6 seconds four your original code.\n" ]
[ 0 ]
[]
[]
[ "mpi", "multiprocessing", "python", "python_multiprocessing" ]
stackoverflow_0074479217_mpi_multiprocessing_python_python_multiprocessing.txt
Q: Pytorch gradient descent keeps sending me NaNs mean squared errors I am trying to apply, within the framework of a course, a gradient descent to estimate a linear model. My code is the following : model = torch.nn.Linear(1,1) myModel = model(X) ds = torch.utils.data.TensorDataset(X, Y) dl = torch.utils.data.DataLoader(ds) optimiser = torch.optim.SGD(model.parameters(), lr=0.01) loss = torch.nn.functional.mse_loss for epoch in range(100): for (Xb, yb) in dl: yb_pred = model(Xb) c_loss = loss(yb_pred, yb) print(c_loss) optimiser.zero_grad() c_loss.backward() optimiser.step() Yet it keeps printing NaNs, which I do not understand. Have I done a mistake in the implementation ? I have the following output (x numerous times) : tensor(nan, grad_fn=<MseLossBackward0>) A: There is nothing wrong with your code but Nan values can be explained by gradients exploding depending on the data X and Y. You can try with a lower learning rate (1e-3 or 1e-4). For instance if you test with this toy linear example: X = torch.randn(100, 1) Y = X * 2 + 3 The loss will converge to 0 quickly.
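To make the suggested fix concrete, here is a self-contained sketch on the toy data from the answer with a smaller learning rate; if NaNs persist on real data, normalizing X (and optionally clipping gradients) is the next thing to try:

import torch

torch.manual_seed(0)
X = torch.randn(100, 1)
Y = X * 2 + 3

model = torch.nn.Linear(1, 1)
ds = torch.utils.data.TensorDataset(X, Y)
dl = torch.utils.data.DataLoader(ds, batch_size=10)
optimiser = torch.optim.SGD(model.parameters(), lr=1e-3)  # smaller than 0.01
loss = torch.nn.functional.mse_loss

for epoch in range(100):
    for Xb, yb in dl:
        optimiser.zero_grad()
        c_loss = loss(model(Xb), yb)
        c_loss.backward()
        optimiser.step()

print(c_loss.item())  # close to 0 on this toy problem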
PyTorch gradient descent keeps returning NaN mean squared errors
I am trying to apply, within the framework of a course, a gradient descent to estimate a linear model. My code is the following : model = torch.nn.Linear(1,1) myModel = model(X) ds = torch.utils.data.TensorDataset(X, Y) dl = torch.utils.data.DataLoader(ds) optimiser = torch.optim.SGD(model.parameters(), lr=0.01) loss = torch.nn.functional.mse_loss for epoch in range(100): for (Xb, yb) in dl: yb_pred = model(Xb) c_loss = loss(yb_pred, yb) print(c_loss) optimiser.zero_grad() c_loss.backward() optimiser.step() Yet it keeps printing NaNs, which I do not understand. Have I done a mistake in the implementation ? I have the following output (x numerous times) : tensor(nan, grad_fn=<MseLossBackward0>)
[ "There is nothing wrong with your code but Nan values can be explained by gradients exploding depending on the data X and Y. You can try with a lower learning rate (1e-3 or 1e-4).\nFor instance if you test with this toy linear example:\nX = torch.randn(100, 1)\nY = X * 2 + 3\n\nThe loss will converge to 0 quickly.\n" ]
[ 1 ]
[]
[]
[ "python", "pytorch" ]
stackoverflow_0074476864_python_pytorch.txt
Q: How to set the title of a new tab when returning a fileresponse I have an button that when pressed opens up a new tab and displays a PDF. When the new tab is opened the title looks like some sort of metadata about the PDF. ex: "Microsoft Powerpoint:The original.ppt" instead of the name of the PDF "Generated.pdf". How do I set the title of the tab to be the name of the actual PDF being displayed? <input type="button" onclick="window.open('{% url 'get_file' %}','_blank');" value="Show File"/></td> views.py: def GetFile(request) filepath = os.path.join('my_path/' + variable + '/' + filename) response = FileResponse(open(filepath, 'rb'), content_type='application/pdf') response['Content-Disposition'] = 'filename="{}"'.format(filename) return response A: Think this is missing the disposition! Try response['Content-Disposition'] = 'attachment; filename="{}"'.format(filename) or response['Content-Disposition'] = 'inline; filename="{}"'.format(filename) attachment; should result in a browser window asking what you want to do with the file. "Save as" will be one option. inline; should invoke the relevant application on the client machine with no further prompt. Details are browser-specific, though.
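If the project is on Django 2.1 or newer, FileResponse can build the Content-Disposition header itself via its filename and as_attachment arguments; below is a hedged sketch of the view rewritten that way (variable and filename are assumed to be defined as in the question, and note that some PDF viewers title the tab from the PDF's internal metadata regardless of the header):

import os
from django.http import FileResponse

def get_file(request):
    filepath = os.path.join('my_path', variable, filename)  # as in the question
    # as_attachment=False serves the PDF inline; filename feeds Content-Disposition
    return FileResponse(open(filepath, 'rb'),
                        content_type='application/pdf',
                        as_attachment=False,
                        filename=filename)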
How to set the title of a new tab when returning a FileResponse
I have an button that when pressed opens up a new tab and displays a PDF. When the new tab is opened the title looks like some sort of metadata about the PDF. ex: "Microsoft Powerpoint:The original.ppt" instead of the name of the PDF "Generated.pdf". How do I set the title of the tab to be the name of the actual PDF being displayed? <input type="button" onclick="window.open('{% url 'get_file' %}','_blank');" value="Show File"/></td> views.py: def GetFile(request) filepath = os.path.join('my_path/' + variable + '/' + filename) response = FileResponse(open(filepath, 'rb'), content_type='application/pdf') response['Content-Disposition'] = 'filename="{}"'.format(filename) return response
[ "Think this is missing the disposition!\nTry\nresponse['Content-Disposition'] = 'attachment; filename=\"{}\"'.format(filename)\n\nor\nresponse['Content-Disposition'] = 'inline; filename=\"{}\"'.format(filename)\n\nattachment; should result in a browser window asking what you want to do with the file. \"Save as\" will be one option. inline; should invoke the relevant application on the client machine with no further prompt. Details are browser-specific, though.\n" ]
[ 0 ]
[]
[]
[ "browser", "django", "http", "python" ]
stackoverflow_0074478099_browser_django_http_python.txt
Q: Please how do i form a dictionary from a file content that has header sections and body sections? Given a File with the contents below : ****************** * Header title 1 * + trig apple * + targ beans * + trig grapes * + targ berries * Header title 2 * + trig beans * + targ joke * + trig help * + targ me The above pattern repeats with every header title having a uniq string. As i read the file i would like to create an ordered dict with keys as the Header titles and values as a list of the lines in the body section. So something like this : d = { Header title 1: ['+ trig apple', '+ targ beans', '+ trig grapes', '+ targ berries' ], Header title 2: ['+ trig beans', '+ targ joke', '+ trig grapes', '+ targ berries' ], . . . <key>: <value> } Please i am stuck! My current solution tries to iterate the file line by line to store the values in the list for each header, but i am seeing that it is storing all the body sections for all the headers into the list value for each header. Essentially my solution is not giving what i need. I indicated above what i tried A: The below code will create the file based on your sample input, then read it into an OrderedDict. This assumes headers start with * and records start with * +. It also presupposes that no records occur before the first header is set. You also likely want to clean up your text by removing new lines \n. from collections import OrderedDict file_content = """* Header title 1 * + trig apple * + targ beans * + trig grapes * + targ berries * Header title 2 * + trig beans * + targ joke * + trig help * + targ me""" # Write file with open("file.txt", "w+") as new_file: new_file.write(file_content) # Read file to ordered dict d = OrderedDict() with open("file.txt") as f: for line in f: if line.startswith("* +"): # Note this could be unbound, we assume Headers always start with '*' # and preceed any records with '* +' d[current_key].append(line.replace("* ", "")) elif line.startswith("*"): current_key = line.replace("* ", "") d[current_key] = [] print(d["Header title 1\n"]) print(d["Header title 2\n"]) # ['+ trig apple\n', '+ targ beans\n', '+ trig grapes\n', '+ targ berries\n'] # ['+ trig beans\n', '+ targ joke\n', '+ trig help\n', '+ targ me']
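A small, hedged variant of the parser from the answer that strips newlines and leading markers as it reads, so keys and values come out clean; like the answer, it assumes a header line always precedes its records:

from collections import OrderedDict

d = OrderedDict()
current_key = None
with open("file.txt") as f:
    for line in f:
        line = line.rstrip("\n")
        if line.startswith("* +"):
            d[current_key].append(line[2:].strip())   # e.g. '+ trig apple'
        elif line.startswith("*") and not line.startswith("**"):
            current_key = line[1:].strip()            # e.g. 'Header title 1'
            d[current_key] = []

print(d["Header title 1"])
# ['+ trig apple', '+ targ beans', '+ trig grapes', '+ targ berries']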
How do I form a dictionary from file content that has header sections and body sections?
Given a File with the contents below : ****************** * Header title 1 * + trig apple * + targ beans * + trig grapes * + targ berries * Header title 2 * + trig beans * + targ joke * + trig help * + targ me The above pattern repeats with every header title having a uniq string. As i read the file i would like to create an ordered dict with keys as the Header titles and values as a list of the lines in the body section. So something like this : d = { Header title 1: ['+ trig apple', '+ targ beans', '+ trig grapes', '+ targ berries' ], Header title 2: ['+ trig beans', '+ targ joke', '+ trig grapes', '+ targ berries' ], . . . <key>: <value> } Please i am stuck! My current solution tries to iterate the file line by line to store the values in the list for each header, but i am seeing that it is storing all the body sections for all the headers into the list value for each header. Essentially my solution is not giving what i need. I indicated above what i tried
[ "The below code will create the file based on your sample input, then read it into an OrderedDict. This assumes headers start with * and records start with * +. It also presupposes that no records occur before the first header is set. You also likely want to clean up your text by removing new lines \\n.\nfrom collections import OrderedDict\n\nfile_content = \"\"\"* Header title 1\n* + trig apple\n* + targ beans\n* + trig grapes\n* + targ berries\n\n* Header title 2\n* + trig beans\n* + targ joke\n* + trig help\n* + targ me\"\"\"\n\n# Write file\nwith open(\"file.txt\", \"w+\") as new_file:\n new_file.write(file_content)\n\n# Read file to ordered dict\nd = OrderedDict()\nwith open(\"file.txt\") as f:\n for line in f:\n if line.startswith(\"* +\"):\n # Note this could be unbound, we assume Headers always start with '*'\n # and preceed any records with '* +'\n d[current_key].append(line.replace(\"* \", \"\"))\n elif line.startswith(\"*\"):\n current_key = line.replace(\"* \", \"\")\n d[current_key] = []\nprint(d[\"Header title 1\\n\"])\nprint(d[\"Header title 2\\n\"])\n\n# ['+ trig apple\\n', '+ targ beans\\n', '+ trig grapes\\n', '+ targ berries\\n']\n# ['+ trig beans\\n', '+ targ joke\\n', '+ trig help\\n', '+ targ me']\n\n" ]
[ 1 ]
[]
[]
[ "ordereddict", "python" ]
stackoverflow_0074479767_ordereddict_python.txt
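A note on the parser in the answer above: the keys and values keep their trailing newlines (hence d["Header title 1\n"]). A minimal variant of the read loop that strips them, under the same assumptions about the file layout, and also skipping the decorative all-asterisk banner (a bare "*" test would otherwise turn it into a key):

from collections import OrderedDict

d = OrderedDict()
with open("file.txt") as f:
    for line in f:
        line = line.rstrip("\n")
        if line.startswith("* +"):      # body line: append to the current header
            d[current_key].append(line.replace("* ", "", 1))
        elif line.startswith("* "):     # header line: start a new list
            current_key = line.replace("* ", "", 1)
            d[current_key] = []

d["Header title 1"] then comes back as ['+ trig apple', '+ targ beans', '+ trig grapes', '+ targ berries'].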
Q: Trying to compare different sized one-hot-encoded lists I have run an autoencoder model, and returned a dictionary with each output and it's label, using FashionMNIST. My goal is to print 10 images only for the dress and coat class (class labels 3 and 4). I have one-hot-encoded the labels such that the dress class appears as [0.,0,.0,1.,0.,0.,0.,0.,0.]. My dictionary output is: print(pa). #dictionary is called pa {'output': array([[1.5346111e-04, 2.3307074e-04, 2.8705355e-04, ..., 1.9890528e-04, 1.8257453e-04, 2.0764180e-04], [1.9767908e-03, 1.5839143e-03, 1.7811939e-03, ..., 1.7838757e-03, 1.4038634e-03, 2.3405524e-03], [5.8998094e-06, 6.9388111e-06, 5.8752844e-06, ..., 5.1715115e-06, 4.4670110e-06, 1.2018012e-05], ..., [2.1034568e-05, 3.0344427e-05, 7.0048365e-05, ..., 9.4724113e-05, 8.9003828e-05, 4.1828611e-05], [2.7930623e-06, 3.0393956e-06, 4.5835086e-06, ..., 3.8765144e-04, 3.6324131e-05, 5.6411723e-06], [1.2453397e-04, 1.1948447e-04, 2.0121646e-04, ..., 1.0773790e-03, 2.9582143e-04, 1.7229551e-04]], dtype=float32), 'label': array([[1., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 1., 0.], [0., 0., 0., ..., 1., 0., 0.], ..., [1., 0., 0., ..., 0., 0., 0.], [0., 0., 1., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)} I am trying to run a for loop, where if the pa['label'] is equal to a certain one-hot-encoded array, I plot the corresponding pa['output']. for i in range(len(pa['label'])): if pa['label'][i] == np.array([0.,0.,0.,1.,0.,0.,0.,0.,0.]): print(pa['lable'][i]) # plt.imshow(pa['output'][i].reshape(28,28)) # plt.show() However, I get a warning(?): /opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: DeprecationWarning: elementwise comparison failed; this will raise an error in the future. I have also tried making a list of arrays of the one-hot-encoded arrays i want to plot and trying to compare my dictionary label to this array (different sized arrays): clothing_array = np.array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]]) for i in range(len(pa['label'])): if (pa['label'][i] == clothing_array[i]).any(): plt.imshow(pa['output'][i].reshape(28,28)) plt.show() However, it plots a picture of a tshirt, a bag, and then i get the error IndexError: index 2 is out of bounds for axis 0 with size 2 Which i understand since clothing_array only has two indices. But obviously this code is wrong since I want to print ONLY dress and coat. I don't know why it's printing these images and i don't know how to fix it. Any help or clarifying questions are more than welcome. Here are the first ten arrays of my dictionary labels: array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32) A: I will post an example here. Here we have two arrays for you x is the label array and y the clothing . You can get in z the ones that are identical (the indexes). 
Finally, by using the matching_indexes you can collect the ones you want from output and plot them. x = np.array([[1., 0., 0., 0., 0., 0., 0.], [0., 1., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0.], [1., 0., 0., 0., 0., 0., 0.], [0., 0., 1., 0., 0., 0., 0.], [0., 0., 0., 1., 0., 0., 0.]]) y = np.array([[1.,0.,0.,0.,0.,0.,0.]]) z = np.multiply(x,y) matching_indexes = np.where(z.any(axis=1))[0]
Trying to compare different sized one-hot-encoded lists
I have run an autoencoder model, and returned a dictionary with each output and it's label, using FashionMNIST. My goal is to print 10 images only for the dress and coat class (class labels 3 and 4). I have one-hot-encoded the labels such that the dress class appears as [0.,0,.0,1.,0.,0.,0.,0.,0.]. My dictionary output is: print(pa). #dictionary is called pa {'output': array([[1.5346111e-04, 2.3307074e-04, 2.8705355e-04, ..., 1.9890528e-04, 1.8257453e-04, 2.0764180e-04], [1.9767908e-03, 1.5839143e-03, 1.7811939e-03, ..., 1.7838757e-03, 1.4038634e-03, 2.3405524e-03], [5.8998094e-06, 6.9388111e-06, 5.8752844e-06, ..., 5.1715115e-06, 4.4670110e-06, 1.2018012e-05], ..., [2.1034568e-05, 3.0344427e-05, 7.0048365e-05, ..., 9.4724113e-05, 8.9003828e-05, 4.1828611e-05], [2.7930623e-06, 3.0393956e-06, 4.5835086e-06, ..., 3.8765144e-04, 3.6324131e-05, 5.6411723e-06], [1.2453397e-04, 1.1948447e-04, 2.0121646e-04, ..., 1.0773790e-03, 2.9582143e-04, 1.7229551e-04]], dtype=float32), 'label': array([[1., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 1., 0.], [0., 0., 0., ..., 1., 0., 0.], ..., [1., 0., 0., ..., 0., 0., 0.], [0., 0., 1., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)} I am trying to run a for loop, where if the pa['label'] is equal to a certain one-hot-encoded array, I plot the corresponding pa['output']. for i in range(len(pa['label'])): if pa['label'][i] == np.array([0.,0.,0.,1.,0.,0.,0.,0.,0.]): print(pa['lable'][i]) # plt.imshow(pa['output'][i].reshape(28,28)) # plt.show() However, I get a warning(?): /opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: DeprecationWarning: elementwise comparison failed; this will raise an error in the future. I have also tried making a list of arrays of the one-hot-encoded arrays i want to plot and trying to compare my dictionary label to this array (different sized arrays): clothing_array = np.array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]]) for i in range(len(pa['label'])): if (pa['label'][i] == clothing_array[i]).any(): plt.imshow(pa['output'][i].reshape(28,28)) plt.show() However, it plots a picture of a tshirt, a bag, and then i get the error IndexError: index 2 is out of bounds for axis 0 with size 2 Which i understand since clothing_array only has two indices. But obviously this code is wrong since I want to print ONLY dress and coat. I don't know why it's printing these images and i don't know how to fix it. Any help or clarifying questions are more than welcome. Here are the first ten arrays of my dictionary labels: array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
[ "I will post an example here.\nHere we have two arrays for you x is the label array and y the clothing . You can get in z the ones that are identical (the indexes). Finally by using the matching_indexes you can collect the onces you want from output and plot them\nx = np.array([[1., 0., 0., 0., 0., 0., 0.],\n [0., 1., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 1., 0., 0.],\n [1., 0., 0., 0., 0., 0., 0.],\n [0., 0., 1., 0., 0., 0., 0.],\n [0., 0., 0., 1., 0., 0., 0.]])\n\ny = np.array([[1.,0.,0.,0.,0.,0.,0.]])\n\nz= np.multiply(x,y)\nmatching_indexes = np.where(z.any(axis=1))[0]\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "mnist", "python" ]
stackoverflow_0074478908_arrays_mnist_python.txt
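An equivalent and slightly more direct route for the asker's 10-class labels: recover the class index from each one-hot row with argmax, keep the dress/coat rows (classes 3 and 4), and plot the first ten matches. A sketch, not tested against the real data:

import numpy as np
import matplotlib.pyplot as plt

classes = pa['label'].argmax(axis=1)                       # one-hot row -> class index
matching_indexes = np.where(np.isin(classes, [3, 4]))[0]   # 3 = dress, 4 = coat

for i in matching_indexes[:10]:
    plt.imshow(pa['output'][i].reshape(28, 28))
    plt.show()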
Q: How to hide console window in python? I am writing an IRC bot in Python. I wish to make stand-alone binaries for Linux and Windows of it. And mainly I wish that when the bot initiates, the console window should hide and the user should not be able to see the window. What can I do for that? A: Simply save it with a .pyw extension. This will prevent the console window from opening. On Windows systems, there is no notion of an “executable mode”. The Python installer automatically associates .py files with python.exe so that a double-click on a Python file will run it as a script. The extension can also be .pyw, in that case, the console window that normally appears is suppressed. Explanation at the bottom of section 2.2.2 A: In linux, just run it, no problem. In Windows, you want to use the pythonw executable. Update Okay, if I understand the question in the comments, you're asking how to make the command window in which you've started the bot from the command line go away afterwards? UNIX (Linux) $ nohup mypythonprog & Windows C:/> start pythonw mypythonprog I think that's right. In any case, now you can close the terminal. A: On Unix Systems (including GNU/Linux, macOS, and BSD) Use nohup mypythonprog &, and you can close the terminal window without disrupting the process. You can also run exit if you are running in the cloud and don't want to leave a hanging shell process. On Windows Systems Save the program with a .pyw extension and now it will open with pythonw.exe. No shell window. For example, if you have foo.py, you need to rename it to foo.pyw. A: This will hide your console. Implement these lines in your code first to start hiding your console at first. import win32gui, win32con the_program_to_hide = win32gui.GetForegroundWindow() win32gui.ShowWindow(the_program_to_hide , win32con.SW_HIDE) Update May 2020 : If you've got trouble on pip install win32con on Command Prompt, you can simply pip install pywin32.Then on your python script, execute import win32.lib.win32con as win32con instead of import win32con. To show back your program again win32con.SW_SHOW works fine: win32gui.ShowWindow(the_program_to_hide , win32con.SW_SHOW) A: If all you want to do is run your Python Script on a windows computer that has the Python Interpreter installed, converting the extension of your saved script from '.py' to '.pyw' should do the trick. But if you're using py2exe to convert your script into a standalone application that would run on any windows machine, you will need to make the following changes to your 'setup.py' file. The following example is of a simple python-GUI made using Tkinter: from distutils.core import setup import py2exe setup (console = ['tkinter_example.pyw'], options = { 'py2exe' : {'packages':['Tkinter']}}) Change "console" in the code above to "windows".. from distutils.core import setup import py2exe setup (windows = ['tkinter_example.pyw'], options = { 'py2exe' : {'packages':['Tkinter']}}) This will only open the Tkinter generated GUI and no console window. A: Some additional info. for situations that'll need the win32gui solution posted by Mohsen Haddadi earlier in this thread: As of python 361, win32gui & win32con are not part of the python std library. To use them, pywin32 package will need to be installed; now possible via pip. More background info on pywin32 package is at: How to use the win32gui module with Python?. 
Also, to apply discretion while closing a window so as to not inadvertently close any window in the foreground, the resolution could be extended along the lines of the following: try : import win32gui, win32con; frgrnd_wndw = win32gui.GetForegroundWindow(); wndw_title = win32gui.GetWindowText(frgrnd_wndw); if wndw_title.endswith("python.exe"): win32gui.ShowWindow(frgrnd_wndw, win32con.SW_HIDE); #endif except : pass A: After writing the code you want to convert the file from .py to .exe, so possibly you will use pyinstaller and it is good to make exe file. So you can hide the console in this way: pyinstaller --onefile main.py --windowed I used to this way and it works. A: just change the file extension from .py to .pyw A: As another answer for all upcoming readers: If you are using Visual Studio as IDE, you can set "Window Application" in the Project settings with a single checkmark. Which is working with py-extension as well.
How to hide console window in python?
I am writing an IRC bot in Python. I wish to make stand-alone binaries for Linux and Windows of it. And mainly I wish that when the bot initiates, the console window should hide and the user should not be able to see the window. What can I do for that?
[ "Simply save it with a .pyw extension. This will prevent the console window from opening.\n\nOn Windows systems, there is no notion of an “executable mode”. The Python installer automatically associates .py files with python.exe so that a double-click on a Python file will run it as a script. The extension can also be .pyw, in that case, the console window that normally appears is suppressed.\n\nExplanation at the bottom of section 2.2.2\n", "In linux, just run it, no problem. In Windows, you want to use the pythonw executable.\nUpdate\nOkay, if I understand the question in the comments, you're asking how to make the command window in which you've started the bot from the command line go away afterwards?\n\nUNIX (Linux)\n\n\n$ nohup mypythonprog &\n\n\nWindows\n\n\nC:/> start pythonw mypythonprog\n\nI think that's right. In any case, now you can close the terminal.\n", "On Unix Systems (including GNU/Linux, macOS, and BSD)\nUse nohup mypythonprog &, and you can close the terminal window without disrupting the process. You can also run exit if you are running in the cloud and don't want to leave a hanging shell process.\nOn Windows Systems\nSave the program with a .pyw extension and now it will open with pythonw.exe. No shell window.\nFor example, if you have foo.py, you need to rename it to foo.pyw.\n", "This will hide your console. Implement these lines in your code first to start hiding your console at first.\nimport win32gui, win32con\n\nthe_program_to_hide = win32gui.GetForegroundWindow()\nwin32gui.ShowWindow(the_program_to_hide , win32con.SW_HIDE)\n\nUpdate May 2020 :\nIf you've got trouble on pip install win32con on Command Prompt, you can simply pip install pywin32.Then on your python script, execute import win32.lib.win32con as win32con instead of import win32con.\nTo show back your program again win32con.SW_SHOW works fine:\nwin32gui.ShowWindow(the_program_to_hide , win32con.SW_SHOW)\n\n", "If all you want to do is run your Python Script on a windows computer that has the Python Interpreter installed, converting the extension of your saved script from '.py' to '.pyw' should do the trick. \nBut if you're using py2exe to convert your script into a standalone application that would run on any windows machine, you will need to make the following changes to your 'setup.py' file. \nThe following example is of a simple python-GUI made using Tkinter:\nfrom distutils.core import setup\nimport py2exe\nsetup (console = ['tkinter_example.pyw'],\n options = { 'py2exe' : {'packages':['Tkinter']}})\n\nChange \"console\" in the code above to \"windows\"..\nfrom distutils.core import setup\nimport py2exe\nsetup (windows = ['tkinter_example.pyw'],\n options = { 'py2exe' : {'packages':['Tkinter']}})\n\nThis will only open the Tkinter generated GUI and no console window.\n", "Some additional info. 
for situations that'll need the win32gui solution posted by Mohsen Haddadi earlier in this thread:\nAs of python 361, win32gui & win32con are not part of the python std library.\nTo use them, pywin32 package will need to be installed; now possible via pip.\nMore background info on pywin32 package is at: How to use the win32gui module with Python?.\nAlso, to apply discretion while closing a window so as to not inadvertently close any window in the foreground, the resolution could be extended along the lines of the following:\ntry :\n\n import win32gui, win32con;\n\n frgrnd_wndw = win32gui.GetForegroundWindow();\n wndw_title = win32gui.GetWindowText(frgrnd_wndw);\n if wndw_title.endswith(\"python.exe\"):\n win32gui.ShowWindow(frgrnd_wndw, win32con.SW_HIDE);\n #endif\nexcept :\n pass\n\n", "After writing the code you want to convert the file from .py to .exe, so possibly you will use pyinstaller and it is good to make exe file. So you can hide the console in this way:\npyinstaller --onefile main.py --windowed\n\nI used to this way and it works.\n", "just change the file extension from .py to .pyw\n", "As another answer for all upcoming readers:\nIf you are using Visual Studio as IDE, you can set \"Window Application\" in the Project settings with a single checkmark. Which is working with py-extension as well.\n" ]
[ 159, 45, 28, 24, 11, 6, 1, 1, 0 ]
[ "a decorator factory for this (windows version, unix version should be easier via os.fork)\ndef deco_factory_daemon_subprocess(*, flag_env_var_name='__this_daemon_subprocess__', **kwargs_for_subprocess):\n def deco(target):\n @functools.wraps(target)\n def tgt(*args, **kwargs):\n if os.environ.get(flag_env_var_name) == __file__:\n target(*args, **kwargs)\n else:\n os.environ[flag_env_var_name] = __file__\n real_argv = psutil.Process(os.getpid()).cmdline()\n exec_dir, exec_basename = path_split(real_argv[0])\n if exec_basename.lower() == 'python.exe':\n real_argv[0] = shutil.which('pythonw.exe')\n kwargs = dict(env=os.environ, stdout=subprocess.PIPE, stderr=subprocess.PIPE, )\n kwargs.update(kwargs_for_subprocess)\n subprocess.Popen(real_argv, **kwargs)\n\n return tgt\n\n return deco\n\nuse it like this:\n@deco_factory_daemon_subprocess()\ndef run():\n ...\n\n\ndef main():\n run()\n\n" ]
[ -1 ]
[ "console", "hide", "python" ]
stackoverflow_0000764631_console_hide_python.txt
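One further technique the answers above do not show, for hiding the console of an already-running script on Windows through the Win32 API directly (standard library only, no pywin32 needed); unlike the GetForegroundWindow approach, it targets the process's own console window:

import ctypes

hwnd = ctypes.windll.kernel32.GetConsoleWindow()
if hwnd:
    ctypes.windll.user32.ShowWindow(hwnd, 0)   # 0 is SW_HIDE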
Q: Using win32com to control Excel and I need to update the color of Data Points but they seem to be read only wb = excel.Workbooks.Open(f"C:\\Users\\user\\Downloads\\EXCEL\\Credits_Query.xlsx") ws=wb.Sheets("OEM Pivot") chart = ws.ChartObjects(1).Chart chart.SeriesCollection(1).XValues Returns: ('NTK553FAE5', '8DG62496AA', 'TOM-100G-Q-LR4', 'ORM-CXH1', ...) chart.SeriesCollection(1).Points(1).Fill.ForeColor.RGB Returns: 39423 But it appears to be readonly. >>> chart.SeriesCollection(1).Points(1).Fill.ForeColor.RGB = 50 Traceback (most recent call last): File "C:\Users\user\AppData\Roaming\Python\Python39\site-packages\win32com\client\__init__.py", line 590, in __setattr__ args, defArgs = self._prop_map_put_[attr] KeyError: 'RGB' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\jepal\AppData\Roaming\Python\Python39\site-packages\win32com\client\__init__.py", line 592, in __setattr__ raise AttributeError( AttributeError: '<win32com.gen_py.Microsoft Excel 16.0 Object Library.ChartColorFormat instance at 0x2231402656864>' object has no attribute 'RGB' I also tried several variations of: chart.SeriesCollection(1).Points(1).Fill.ForeColor.RGB.setattr But no luck, is it possible to change the color of the Data Points? A: As usual, hours of researching with no luck, and 2 min after I post I find the answer. chart.SeriesCollection(1).Points(3).Fill.ForeColor.SchemeColor = 47 This allows you to change the color of the individual points.
Using win32com to control Excel and I need to update the color of Data Points but they seem to be read only
wb = excel.Workbooks.Open(f"C:\\Users\\user\\Downloads\\EXCEL\\Credits_Query.xlsx") ws=wb.Sheets("OEM Pivot") chart = ws.ChartObjects(1).Chart chart.SeriesCollection(1).XValues Returns: ('NTK553FAE5', '8DG62496AA', 'TOM-100G-Q-LR4', 'ORM-CXH1', ...) chart.SeriesCollection(1).Points(1).Fill.ForeColor.RGB Returns: 39423 But it appears to be readonly. >>> chart.SeriesCollection(1).Points(1).Fill.ForeColor.RGB = 50 Traceback (most recent call last): File "C:\Users\user\AppData\Roaming\Python\Python39\site-packages\win32com\client\__init__.py", line 590, in __setattr__ args, defArgs = self._prop_map_put_[attr] KeyError: 'RGB' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\jepal\AppData\Roaming\Python\Python39\site-packages\win32com\client\__init__.py", line 592, in __setattr__ raise AttributeError( AttributeError: '<win32com.gen_py.Microsoft Excel 16.0 Object Library.ChartColorFormat instance at 0x2231402656864>' object has no attribute 'RGB' I also tried several variations of: chart.SeriesCollection(1).Points(1).Fill.ForeColor.RGB.setattr But no luck, is it possible to change the color of the Data Points?
[ "As usual, hours of researching with no luck, and 2 min after I post I find the answer.\nchart.SeriesCollection(1).Points(3).Fill.ForeColor.SchemeColor = 47\n\nThis allows you to change the color of the individual points.\n" ]
[ 0 ]
[]
[]
[ "excel", "python" ]
stackoverflow_0074479354_excel_python.txt
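Some background on that workaround: RGB is read-only on the ChartColorFormat object that Point.Fill.ForeColor returns, but the ColorFormat reached through the point's Format property is normally writable, so a sketch along these lines should also work with the same chart object (untested here):

# Excel stores colors as BGR-ordered integers, so build them explicitly
def rgb(r, g, b):
    return r + (g << 8) + (b << 16)

point = chart.SeriesCollection(1).Points(1)
point.Format.Fill.Solid()
point.Format.Fill.ForeColor.RGB = rgb(255, 0, 0)   # solid red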
Q: How can I add minimize/maximize buttons in a GUI made with Qt Designer? I've created a GUI in "Qt Designer". Now I'd like to open a simple window with minimize/maximize buttons in the top right corner. from PyQt5 import uic window = uic.loadUi("Video_Player.ui") # Video_Player.ui is the name of my GUI main file. window.show() It should be something like this: window.setWindowFlag(Qt.WindowMinimizeButtonHint, True) But I don't know how to set/define my Qt to make it work...? A: I think you should first hide the Windows title bar in this way: self.setWindowFlag(Qt.FramelessWindowHint) And then add your own Minimize, Maximize and Close buttons in Qt Designer. Finally, for example, you can make them work as follows in your code: self.maxBtn = self.findChild(QPushButton,'Maximize_btn') self.maxBtn.clicked.connect(lambda: self.showMaximized()) self.minBtn = self.findChild(QPushButton,'Minimize_btn') self.minBtn.clicked.connect(lambda: self.showMinimized()) self.closeBtn = self.findChild(QPushButton,'Close_btn') self.closeBtn.clicked.connect(lambda: self.close()) A: self.setWindowFlags(_qt.FramelessWindowHint) This hides the standard window title bar. (Before/after screenshots not shown.)
How can I add minimize/maximize buttons in a GUI made with Qt Designer?
I've created a GUI in "Qt Designer". Now I'd like to open a simple window with minimize/maximize buttons in the top right corner. from PyQt5 import uic window = uic.loadUi("Video_Player.ui") # Video_Player.ui is the name of my GUI main file. window.show() It should be something like this: window.setWindowFlag(Qt.WindowMinimizeButtonHint, True) But I don't know how to set/define my Qt to make it work...?
[ "I think you should firstly hide the Windows bar in this way:\nself.setWindowFlag(Qt.FramelessWindowHint)\n\nAnd then add your own Minimize, Maximize and Close botton on QtDesigner. Finally for example you can make them work as follows in your code:\nself.maxBtn = self.findChild(QPushButton,'Maximize_btn')\nself.maxBtn.clicked.connect(lambda: self.showMaximized())\n\nself.minBtn = self.findChild(QPushButton,'Minimize_btn')\nself.minBtn.clicked.connect(lambda: self.showMinimized())\n\nself.closeBtn = self.findChild(QPushButton,'Close_btn')\nself.closeBtn.clicked.connect(lambda: self.close())\n\n", "self.setWindowFlags(_qt.FramelessWindowHint)\nThis hides standard window's titlebar\nBefore\n\nAfter\n\n" ]
[ 1, 0 ]
[]
[]
[ "pyqt", "pyqt5", "python", "qt_designer" ]
stackoverflow_0065165757_pyqt_pyqt5_python_qt_designer.txt
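For the question as literally asked (getting the standard minimize/maximize buttons on a window loaded with uic.loadUi), a frameless window is not required at all; OR-ing the hint flags into the existing ones is usually enough. Note that setWindowFlags hides the widget, so show() has to be called again. A sketch:

from PyQt5.QtCore import Qt
from PyQt5 import uic

window = uic.loadUi("Video_Player.ui")
window.setWindowFlags(window.windowFlags()
                      | Qt.WindowMinimizeButtonHint
                      | Qt.WindowMaximizeButtonHint)
window.show()   # re-show: changing the flags hides the window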
Q: Python script to export all subfolders in a folder into separate .ZIP files, but ignoring individual files? I have a directory of subfolders that gets populated by another script. Each of those subfolders in the directory needs to be compressed into a .ZIP file. However, that directory also contains a number of files (PDFs, .TXTs, etc.) that are not in subfolders. I'm trying to create a script that will create zip files out of the individual subfolders, but totally ignore the individual files. import os import zipfile path = r"E:\Test\XYZ L48" path = os.path.abspath(os.path.normpath(os.path.expanduser(path))) for folder in os.listdir(path): zipf = zipfile.ZipFile('{0}.zip'.format(os.path.join(path, folder)), 'w', zipfile.ZIP_DEFLATED) for root, dirs, files in os.walk(os.path.join(path, folder)): for filename in files: zipf.write(os.path.abspath(os.path.join(root, filename)), arcname=filename) zipf.close() I tried this, which worked to create ZIPs out of the subfolders, but it also archives all the loose files. Is there a way to modify this to ignore files in the directory and only zip the subfolders? Thanks! A: Use scandir instead of listdir. Then you can check to see if each entry is a file, a directory, or a symbolic link.
Python script to export all subfolders in a folder into separate .ZIP files, but ignoring individual files?
I have a directory of subfolders that gets populated by another script. Each of those subfolders in the directory needs to be compressed into a .ZIP file. However, that directory also contains a number of files (PDFs, .TXTs, etc.) that are not in subfolders. I'm trying to create a script that will create zip files out of the individual subfolders, but totally ignore the individual files. import os import zipfile path = r"E:\Test\XYZ L48" path = os.path.abspath(os.path.normpath(os.path.expanduser(path))) for folder in os.listdir(path): zipf = zipfile.ZipFile('{0}.zip'.format(os.path.join(path, folder)), 'w', zipfile.ZIP_DEFLATED) for root, dirs, files in os.walk(os.path.join(path, folder)): for filename in files: zipf.write(os.path.abspath(os.path.join(root, filename)), arcname=filename) zipf.close() I tried this, which worked to create ZIPs out of the subfolders, but it also archives all the loose files. Is there a way to modify this to ignore files in the directory and only zip the subfolders? Thanks!
[ "Use scandir instead of listdir. Then you can check to see if each is a file, a directory, or a symbolic link.\n" ]
[ 0 ]
[]
[]
[ "archive", "compression", "python", "subdirectory", "zip" ]
stackoverflow_0074476732_archive_compression_python_subdirectory_zip.txt
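Spelled out, the scandir suggestion could look like this against the asker's original loop (keeping the assumption that each archive should sit next to its source folder):

import os
import zipfile

path = r"E:\Test\XYZ L48"

for entry in os.scandir(path):
    if not entry.is_dir():
        continue   # skip the loose PDFs, .TXT files, etc.
    with zipfile.ZipFile(entry.path + ".zip", "w", zipfile.ZIP_DEFLATED) as zipf:
        for root, dirs, files in os.walk(entry.path):
            for filename in files:
                zipf.write(os.path.join(root, filename), arcname=filename)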
Q: How can I get a file extension from a filetype? I have a list of filenames as follows: files = [ '/dl/files/4j55eeer_wq3wxxpiqm.jpg', '/home/Desktop/hjsd03wnsbdr9rk3k', 'kd0dje7cmidj0xks03nd8nd8a3', ... ] The problem is most of the files do not have an extension in the filenames; what would be the best way to get the file extension of these files? I don't know if this is even possible, because Python would treat all files as buffer or string objects that do not have any file type associated with them. Can this be done at all? A: Once you use magic to get the MIME type, you can use mimetypes.guess_extension() to get the extension for it. A: It can be done if you have an oracle that determines file types from their content. Happily, at least one such oracle is already implemented in Python: https://github.com/ahupp/python-magic A: The below code worked for me: import filetype fileinfo = filetype.guess("mock.jpg") # the argument can be a buffer/file path detectedExt = fileinfo.extension detectedmime = fileinfo.mime filetype package documentation
How can I get a file extension from a filetype?
I have a list of filenames as follows: files = [ '/dl/files/4j55eeer_wq3wxxpiqm.jpg', '/home/Desktop/hjsd03wnsbdr9rk3k', 'kd0dje7cmidj0xks03nd8nd8a3', ... ] The problem is most of the files do not have an extension in the filenames; what would be the best way to get the file extension of these files? I don't know if this is even possible, because Python would treat all files as buffer or string objects that do not have any file type associated with them. Can this be done at all?
[ "Once you use magic to get the MIME type, you can use mimetypes.guess_extension() to get the extension for it.\n", "It can be done if you have an oracle that determines file types from their content. Happily at least one such oracle is already implemented in Python: https://github.com/ahupp/python-magic\n", "The below code worked for me :\nimport filetype\n\nfileinfo = filetype.guess(mock.jpg) #the argument can be buffer/file\ndetectedExt = fileinfo.extension\ndetectedmime = fileinfo.mime\n\nfiletype package documentation\n" ]
[ 16, 3, 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0016872139_file_python.txt
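Putting the first two answers together: python-magic sniffs the MIME type from the file contents, and the standard library maps that to an extension. A sketch, assuming python-magic is installed (pip install python-magic, plus python-magic-bin on Windows):

import mimetypes
import magic

for path in files:
    mime = magic.from_file(path, mime=True)    # e.g. "image/jpeg"
    ext = mimetypes.guess_extension(mime)      # e.g. ".jpg", or None if unknown
    print(path, mime, ext)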
Q: Convert EViews date format to Python date I have my vector of dates in this format: 2022M8, 2022M09, etc. (EViews format). How do I read this type of string date in Python? I wish to convert these dates into the 20220801 format. Thanks in advance!! I have tried this: date_time_str = '1973M10' date_time_obj = datetime.strptime(date_time_str, '%Y M /%m') print ("The type of the date is now", type(date_time_obj)) print ("The date is", date_time_obj) A: Small typo? This works just fine: from datetime import datetime date_time_str = '1973M10' date_time_obj = datetime.strptime(date_time_str, '%YM%m') print ("The type of the date is now", type(date_time_obj)) print ("The date is", date_time_obj) gives: The type of the date is now <class 'datetime.datetime'> The date is 1973-10-01 00:00:00 From there, read the datetime docs to output in your desired format.
Convert EViews date format to Python date
I have my vector of dates in this format: 2022M8, 2022M09, etc. (EViews format). How do I read this type of string date in Python? I wish to convert these dates into the 20220801 format. Thanks in advance!! I have tried this: date_time_str = '1973M10' date_time_obj = datetime.strptime(date_time_str, '%Y M /%m') print ("The type of the date is now", type(date_time_obj)) print ("The date is", date_time_obj)
[ "small typo ?\nthis works just fine:\nfrom datetime import datetime\ndate_time_str = '1973M10'\ndate_time_obj = datetime.strptime(date_time_str, '%YM%m')\nprint (\"The type of the date is now\", type(date_time_obj))\nprint (\"The date is\", date_time_obj)\n\ngives:\nThe type of the date is now <class 'datetime.datetime'>\nThe date is 1973-10-01 00:00:00\n\nFrom there, read the datetime docs to output in your desired format.\n" ]
[ 1 ]
[]
[]
[ "date", "python" ]
stackoverflow_0074469407_date_python.txt
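And the formatting step the answer leaves to the docs: the 20220801-style string the asker wants is one strftime call away:

print(date_time_obj.strftime("%Y%m%d"))   # '19731001' for '1973M10'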
Q: Python script to get username and password from text? I have a script for creating accounts that outputs the following: creating user in XYZ: username: testing firstName: Bob lastName:Test email:auto999@nowhere.com password:gWY6*Pja&4 So, I need to create a python script that will store the username and password in a csv file. I tried splitting this string by spaces and colons then indexing it, but this isn't working quite properly and could fail if the message is different. Does anyone have any idea how to do this? A: Regex is almost always the answer to this type of issue: import re text = 'creating user in XYZ: username: testing firstName: Bob lastName:Test email:auto999@nowhere.com password:gWY6*Pja&4' pattern = '.*username:\s*(\S+)\s*firstName:\s*(\S+)\s*lastName:\s*(\S+)\s*email:\s*(\S+)\s*password:\s*(\S+)' values = re.findall(pattern, text) print(values) Output: [('testing', 'Bob', 'Test', 'auto999@nowhere.com', 'gWY6*Pja&4')] Regexr Pattern Explanation A: I don't see the need for Regex here, a simple but robust parsing is enough: def get_data(account: str, attribute: str) -> str: data = ' '.join(account.split()).strip() for k, v in {' :': ':', ' : ': ':', ': ': ':'}.items(): data = data.replace(k, v) index1 = data.find(attribute) index2 = data.find(' ', index1) return data[index1 + len(attribute + ':'): len(account) if index2 == -1 else index2] example of use: acc = "username: testing firstName: Bob lastName:Test email:auto999@nowhere.com password:gWY6*Pja&4" print(get_data(acc, 'username')) print(get_data(acc, 'password')) output: testing gWY6*Pja&4 As the generator is yours, you can control how the accounts are created and I personally think that Regex is not easy to maintain. This approach works even adding extra spaces or changing the order of the attributes, e.g.: acc = " username: testing firstName: Bob lastName :Test email:auto999@nowhere.com password : gWY6*Pja&4 " acc = "firstName: Bob username: testing email:auto999@nowhere.com password:gWY6*Pja&4 lastName:Test "
Python script to get username and password from text?
I have a script for creating accounts that outputs the following: creating user in XYZ: username: testing firstName: Bob lastName:Test email:auto999@nowhere.com password:gWY6*Pja&4 So, I need to create a python script that will store the username and password in a csv file. I tried splitting this string by spaces and colons then indexing it, but this isn't working quite properly and could fail if the message is different. Does anyone have any idea how to do this?
[ "Regex is almost always the answer to this type of issue:\nimport re\n\ntext = 'creating user in XYZ: username: testing firstName: Bob lastName:Test email:auto999@nowhere.com password:gWY6*Pja&4'\n\npattern = '.*username:\\s*(\\S+)\\s*firstName:\\s*(\\S+)\\s*lastName:\\s*(\\S+)\\s*email:\\s*(\\S+)\\s*password:\\s*(\\S+)'\n\nvalues = re.findall(pattern, text)\n\nprint(values)\n\nOutput:\n[('testing', 'Bob', 'Test', 'auto999@nowhere.com', 'gWY6*Pja&4')]\n\nRegexr Pattern Explanation\n", "I don't see the need for Regex here, a simple but robust parsing is enough:\ndef get_data(account: str, attribute: str) -> str:\n data = ' '.join(account.split()).strip()\n for k, v in {' :': ':', ' : ': ':', ': ': ':'}.items():\n data = data.replace(k, v)\n index1 = data.find(attribute)\n index2 = data.find(' ', index1)\n return data[index1 + len(attribute + ':'): len(account) if index2 == -1 else index2]\n\nexample of use:\nacc = \"username: testing firstName: Bob lastName:Test email:auto999@nowhere.com password:gWY6*Pja&4\"\nprint(get_data(acc, 'username'))\nprint(get_data(acc, 'password'))\n\noutput:\ntesting\ngWY6*Pja&4\n\nAs the generator is yours, you can control how the accounts are created and I personally think that Regex is not easy to maintain.\nThis approach works even adding extra spaces or changing the order of the attributes, e.g.:\nacc = \" username: testing firstName: Bob lastName :Test email:auto999@nowhere.com password : gWY6*Pja&4 \"\nacc = \"firstName: Bob username: testing email:auto999@nowhere.com password:gWY6*Pja&4 lastName:Test \"\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074479688_python.txt
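Neither answer shows the final step the question asks for, storing the username and password in a CSV file. Continuing from the regex answer's values list of 5-tuples, one plausible way:

import csv

with open("accounts.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for username, first, last, email, password in values:
        writer.writerow([username, password])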
Q: Converting .ui to .py with pyuic5? When I convert a .ui file in QtDesigner to a .py file, the format changes and it runs differently. When I run it in QtDesigner it looks like a normal page but once I convert it to a .py file and run it, the edges are cut off and I cannot see half the buttons/labels. Even once I expand the screen that has opened the labels are cut off and only half visible. Is there a way I can stop this from happening? A: You firstly need to correctly set the layout and widgets inside them, in a way that the size of each object is guaranteed when moving to the code. Try to watch this tutorial, I found it very useful! Qt Designer - create application GUI (DESIGN APPLICATION LAYOUT) - part 02 And then you need to just import the .ui file as follows: from PyQt5 import uic class MainWindow(QMainWindow): def __init__(self): super(MainWindow,self).__init__() uic.loadUi("NameofYourFile.ui",self) self.show()
Converting .ui to .py with pyuic5?
When I convert a .ui file in QtDesigner to a .py file, the format changes and it runs differently. When I run it in QtDesigner it looks like a normal page but once I convert it to a .py file and run it, the edges are cut off and I cannot see half the buttons/labels. Even once I expand the screen that has opened the labels are cut off and only half visible. Is there a way I can stop this from happening?
[ "You firstly need to correctly set the layout and widgets inside them, in a way that the size of each object is guaranteed when moving to the code.\nTry to watch this tutorial, I found it very useful!\nQt Designer - create application GUI (DESIGN APPLICATION LAYOUT) - part 02\nAnd then you need to just import the .ui file as follows:\nfrom PyQt5 import uic\n\nclass MainWindow(QMainWindow):\ndef __init__(self):\n super(MainWindow,self).__init__()\n uic.loadUi(\"NameofYourFile.ui\",self)\n self.show()\n\n" ]
[ 0 ]
[]
[]
[ "pyqt5", "python", "qt_designer", "user_interface" ]
stackoverflow_0073974721_pyqt5_python_qt_designer_user_interface.txt
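For completeness, the loadUi answer needs a QApplication around it before anything will show; a minimal self-contained version (the .ui filename is a placeholder):

import sys
from PyQt5 import uic
from PyQt5.QtWidgets import QApplication, QMainWindow

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("NameofYourFile.ui", self)   # the .ui file straight from Qt Designer
        self.show()

app = QApplication(sys.argv)
window = MainWindow()
sys.exit(app.exec_())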
Q: Data collation step causing "ValueError: Unable to create tensor..." due to unnecessary padding attempts to extra inputs I am trying to fine-tune a Bart model from the huggingface transformers framework on a dialogue summarisation task. The Bart model by default takes in the conversations as a monolithic piece of text as the input and takes the summaries as the decoder input while training. I want to explicitly train the model on dialogue speaker and utterance information rather than waiting for the model to implicitly learn them. For this reason, I am extracting the position IDs of the speaker name tokens and their utterance tokens when I send them to the model along with the original input tokens and summary tokens and send them separately. However, the model's data collator/padding automation expects this information to also be the same size as the inputs (I need to disable this behaviour/change the way I am encoding the speaker to utterance mapping). Please find the code and description for the above issue below: I am using the SAMSum dataset for the dialogue summarisation task. The dataset looks like this Conversation: Amanda: I baked cookies. Do you want some? Jerry: Sure! Amanda: I'll bring you tomorrow :-) Summary: Amanda baked cookies and will bring Jerry some tomorrow. The conversation gets tokenized as: tokens = [0, 10127, 5219, 35, 38, 17241, 1437, 15269, 4, 1832, 47, 236, 103, 116, 50121, 50118, 39237, 35, 9136, 328, 50121, 50118, 10127, 5219, 35, 38, 581, 836, 47, 3859, 48433, 2] The explicit speaker-utterance information is encoded as: [0, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0] Where 1s indicate that tokens[1:3] map to a name "Amanda" and the 2s indicate that tokens[3:16] map to an utterance ": I baked cookies. Do you want some?" I am trying to send this speaker utterance association information to the forward function in the hopes of adding a loss on the basis of this information. I intend to override the compute_loss method of the Trainer class from huggingface framework to edit the loss after I can successfully relay this explicit information. I am currently trying the following: tokenized_dataset_train = train_datasets.map(preprocess_function, batched=True) where the preprocess_function tokenizes and adds the speaker-utterance information in the form of a key-value pair. tokenized_dataset_train is of the form {'input_ids':[...], 'attention_mask':[...], 'spk_utt_pos':[...], ...} The preprocess function makes sure that the lengths for each of 'input_ids', 'attention_masks', and 'spk_utt_pos' is the same. The data_collator from the DataCollatorForSeq2Seq pads 'input_ids' and 'attention_masks', but also tries to pad 'spk_utt_pos' which gives an error: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`spk_utt_pos` in this case) have excessive nesting (inputs type `list` where type `int` is expected). Upon printing the sizes of 'input_ids', 'attention_masks', and 'spk_utt_pos' inside the train loop during the data collation step I found that the sizes of were not the same. 
Example: (A 32 instance batch) 'input_ids' sizes 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 'attention_mask' sizes 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 'spk_utt_pos' sizes 285 276 276 321 58 93 77 69 198 266 55 107 85 235 47 280 209 357 86 186 27 52 80 77 85 231 266 237 322 125 251 126 My question is: Is there something wrong with my approach to adding this explicit information to my model? What can be another method to send the speaker-utterance information to my model? A: I solved this by extending the DataCollatorForSeq2Seq class and overriding the __call__ method in it to also pad my 'spk_utt_pos' list appropriately.
Data collation step causing "ValueError: Unable to create tensor..." due to unnecessary padding attempts to extra inputs
I am trying to fine-tune a Bart model from the huggingface transformers framework on a dialogue summarisation task. The Bart model by default takes in the conversations as a monolithic piece of text as the input and takes the summaries as the decoder input while training. I want to explicitly train the model on dialogue speaker and utterance information rather than waiting for the model to implicitly learn them. For this reason, I am extracting the position IDs of the speaker name tokens and their utterance tokens when I send them to the model along with the original input tokens and summary tokens and send them separately. However, the model's data collator/padding automation expects this information to also be the same size as the inputs (I need to disable this behaviour/change the way I am encoding the speaker to utterance mapping). Please find the code and description for the above issue below: I am using the SAMSum dataset for the dialogue summarisation task. The dataset looks like this Conversation: Amanda: I baked cookies. Do you want some? Jerry: Sure! Amanda: I'll bring you tomorrow :-) Summary: Amanda baked cookies and will bring Jerry some tomorrow. The conversation gets tokenized as: tokens = [0, 10127, 5219, 35, 38, 17241, 1437, 15269, 4, 1832, 47, 236, 103, 116, 50121, 50118, 39237, 35, 9136, 328, 50121, 50118, 10127, 5219, 35, 38, 581, 836, 47, 3859, 48433, 2] The explicit speaker-utterance information is encoded as: [0, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0] Where 1s indicate that tokens[1:3] map to a name "Amanda" and the 2s indicate that tokens[3:16] map to an utterance ": I baked cookies. Do you want some?" I am trying to send this speaker utterance association information to the forward function in the hopes of adding a loss on the basis of this information. I intend to override the compute_loss method of the Trainer class from huggingface framework to edit the loss after I can successfully relay this explicit information. I am currently trying the following: tokenized_dataset_train = train_datasets.map(preprocess_function, batched=True) where the preprocess_function tokenizes and adds the speaker-utterance information in the form of a key-value pair. tokenized_dataset_train is of the form {'input_ids':[...], 'attention_mask':[...], 'spk_utt_pos':[...], ...} The preprocess function makes sure that the lengths for each of 'input_ids', 'attention_masks', and 'spk_utt_pos' is the same. The data_collator from the DataCollatorForSeq2Seq pads 'input_ids' and 'attention_masks', but also tries to pad 'spk_utt_pos' which gives an error: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`spk_utt_pos` in this case) have excessive nesting (inputs type `list` where type `int` is expected). Upon printing the sizes of 'input_ids', 'attention_masks', and 'spk_utt_pos' inside the train loop during the data collation step I found that the sizes of were not the same. 
Example: (A 32 instance batch) 'input_ids' sizes 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 'attention_mask' sizes 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 'spk_utt_pos' sizes 285 276 276 321 58 93 77 69 198 266 55 107 85 235 47 280 209 357 86 186 27 52 80 77 85 231 266 237 322 125 251 126 My question is: Is there something wrong with my approach to adding this explicit information to my model? What can be another method to send the speaker-utterance information to my model?
[ "I solved this by extending the DataCollatorForSeq2Seq class and overriding the __call__ method in it to also pad my 'spk_utt_pos' list appropriately.\n" ]
[ 0 ]
[]
[]
[ "huggingface", "huggingface_transformers", "python", "pytorch", "pytorch_dataloader" ]
stackoverflow_0074437271_huggingface_huggingface_transformers_python_pytorch_pytorch_dataloader.txt
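A sketch of what that fix might look like (untested; it assumes the features arrive as plain dicts and pads spk_utt_pos on the right with 0, which in the asker's encoding already means "neither speaker name nor utterance"):

import torch
from transformers import DataCollatorForSeq2Seq

class Seq2SeqCollatorWithSpkUtt(DataCollatorForSeq2Seq):
    def __call__(self, features, return_tensors=None):
        # pull the custom field out so the parent collator never sees it
        spk_utt = [f.pop("spk_utt_pos") for f in features]
        batch = super().__call__(features, return_tensors=return_tensors)
        max_len = batch["input_ids"].shape[1]
        batch["spk_utt_pos"] = torch.tensor(
            [s + [0] * (max_len - len(s)) for s in spk_utt]
        )
        return batch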
Q: Sorting a list of lists by every list and return the final index I want to sort a list with an arbitrary number of lists inside to sort by each of said lists. Furthermore I do not want to use any libraries (neither python-native nor 3rd party). data = [['a', 'b', 'a', 'b', 'a'], [9, 8, 7, 6, 5]] I know I can achieve this by doing list(zip(*sorted(zip(*data)))) # [('a', 'a', 'a', 'b', 'b'), (5, 7, 9, 6, 8)] but I would like to have the sorting-index of that very process. In this case: index = [4, 2, 0, 3, 1] I found several answers for a fixed number of inside lists, or such that only want to sort by a specific list. Neither case is what I am looking for. A: Add a temporary index list to the end before sorting. The result will show you the pre-sorted indices in the appended list: data = [['a', 'b', 'a', 'b', 'a'], [9, 8, 7, 6, 5]] assert all(len(sublist) == len(data[0]) for sublist in data) data.append(range(len(data[0]))) *sorted_data, indices = list(zip(*sorted(zip(*data)))) print(sorted_data) # [('a', 'a', 'a', 'b', 'b'), (5, 7, 9, 6, 8)] print(indices) # (4, 2, 0, 3, 1) A: Try this data = [["a", "b", "a", "b", "a"], [9, 8, 7, 6, 5]] def sortList(inputList): masterList = [[value, index] for index, value in enumerate(inputList)] masterList.sort() values = [] indices = [] for item in masterList: values.append(item[0]) # get the item indices.append(item[1]) # get the index return values, indices sortedData = [] sortedIndices = [] for subList in data: sortedList, indices = sortList(subList) sortedData.append(sortedList) sortedIndices.append(indices) print(sortedData) print(sortedIndices)
Sorting a list of lists by every list and return the final index
I want to sort a list with an arbitrary number of lists inside to sort by each of said lists. Furthermore I do not want to use any libraries (neither python-native nor 3rd party). data = [['a', 'b', 'a', 'b', 'a'], [9, 8, 7, 6, 5]] I know I can achieve this by doing list(zip(*sorted(zip(*data)))) # [('a', 'a', 'a', 'b', 'b'), (5, 7, 9, 6, 8)] but I would like to have the sorting-index of that very process. In this case: index = [4, 2, 0, 3, 1] I found several answers for a fixed number of inside lists, or such that only want to sort by a specific list. Neither case is what I am looking for.
[ "Add a temporary index list to the end before sorting. The result will show you the pre-sorted indices in the appended list:\ndata = [['a', 'b', 'a', 'b', 'a'], [9, 8, 7, 6, 5]]\nassert all(len(sublist) == len(data[0]) for sublist in data)\ndata.append(range(len(data[0])))\n*sorted_data, indices = list(zip(*sorted(zip(*data))))\n\nprint(sorted_data)\n# [('a', 'a', 'a', 'b', 'b'), (5, 7, 9, 6, 8)]\n\nprint(indices)\n# (4, 2, 0, 3, 1)\n\n", "Try this\ndata = [[\"a\", \"b\", \"a\", \"b\", \"a\"], [9, 8, 7, 6, 5]]\n\n\ndef sortList(inputList):\n masterList = [[value, index] for index, value in enumerate(inputList)]\n masterList.sort()\n\n values = []\n indices = []\n for item in masterList:\n values.append(item[0]) # get the item\n indices.append(item[1]) # get the index\n return values, indices\n\n\nsortedData = []\nsortedIndices = []\nfor subList in data:\n sortedList, indices = sortList(subList)\n sortedData.append(sortedList)\n sortedIndices.append(indices)\n\n\nprint(sortedData)\nprint(sortedIndices)\n\n" ]
[ 3, 1 ]
[]
[]
[ "nested_lists", "python", "sorting" ]
stackoverflow_0074479939_nested_lists_python_sorting.txt
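An equivalent library-free route that produces the permutation first and applies it afterwards, in case that reads more directly; it reproduces the index from the question:

data = [['a', 'b', 'a', 'b', 'a'], [9, 8, 7, 6, 5]]

index = sorted(range(len(data[0])), key=lambda i: [row[i] for row in data])
sorted_data = [[row[i] for i in index] for row in data]

print(index)        # [4, 2, 0, 3, 1]
print(sorted_data)  # [['a', 'a', 'a', 'b', 'b'], [5, 7, 9, 6, 8]]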
Q: Np.where change value in column if another column value is in another dataframe column Let me explain the structure of the problem that I'm trying to solve. Let's suppose that we have two dataframes DF1: ID Value AA 2 AB 1 AC 2 AD 1 AE 2 DF2: ID New Value AA 1 AC 1 If the ID column row in DF1 is in DF2, then I would like to change the value in the same row in DF1 to the one that it has in DF2, so the end result would be something like this: DF1: ID Value AA 1 AB 1 AC 1 AD 1 AE 2 So far, I have tried attempts with .loc and np.where but none of them where successful, my closest attempt is the following line of code: DF1['Value'][row] = [DF2['New Value'][row] if ((DF1['ID'][row]).isin(DF2['ID'])) else DF1['Value'][row] for row in DF['ID']] A: here is one way to to do it using map # set index on ID in DF2 and map to DF # replace failed mapping with the value in DF df['Value']=df['ID'].map(df2.set_index(['ID'])['New Value']).fillna(df['Value']) df ID Value 0 AA 1.0 1 AB 1.0 2 AC 1.0 3 AD 1.0 4 AE 2.0 A: You can go straight with merge then ffill Data: df1 = pd.DataFrame({'name':['a','b','c'], 'val':[1,2,3]}) df2 = pd.DataFrame({'name':['a','c'], 'newval':[10,20]}) Merge df1 and df2 df = pd.merge(df1, df2, on='name', how='left') Now you ffill (forward fill). This means you take two columns val and newval. Any missing value in newval is filled by value in val. The axis=1 means you fill by rows not by column df[['val', 'newval']] = df[['val', 'newval']].ffill(axis=1) A: Given: # df1 ID Value 0 AA 2 1 AB 1 2 AC 2 3 AD 1 4 AE 2 # df2 ID New Value 0 AA 1 1 AC 1 Doing: # Set Indices df1, df2 = [df.set_index('ID') for df in (df1, df2)] # Use loc: df1.loc[df2.index, 'Value'] = df2['New Value'] print(df1.reset_index()) Output: ID Value 0 AA 1 1 AB 1 2 AC 1 3 AD 1 4 AE 2
Np.where change value in column if another column value is in another dataframe column
Let me explain the structure of the problem that I'm trying to solve. Let's suppose that we have two dataframes DF1: ID Value AA 2 AB 1 AC 2 AD 1 AE 2 DF2: ID New Value AA 1 AC 1 If the ID column row in DF1 is in DF2, then I would like to change the value in the same row in DF1 to the one that it has in DF2, so the end result would be something like this: DF1: ID Value AA 1 AB 1 AC 1 AD 1 AE 2 So far, I have tried attempts with .loc and np.where but none of them were successful; my closest attempt is the following line of code: DF1['Value'][row] = [DF2['New Value'][row] if ((DF1['ID'][row]).isin(DF2['ID'])) else DF1['Value'][row] for row in DF['ID']]
[ "here is one way to to do it using map\n# set index on ID in DF2 and map to DF\n# replace failed mapping with the value in DF\ndf['Value']=df['ID'].map(df2.set_index(['ID'])['New Value']).fillna(df['Value'])\ndf\n\n ID Value\n0 AA 1.0\n1 AB 1.0\n2 AC 1.0\n3 AD 1.0\n4 AE 2.0\n\n", "You can go straight with merge then ffill\nData:\ndf1 = pd.DataFrame({'name':['a','b','c'],\n 'val':[1,2,3]})\ndf2 = pd.DataFrame({'name':['a','c'],\n 'newval':[10,20]})\n\nMerge df1 and df2\ndf = pd.merge(df1, df2, on='name', how='left')\n\nNow you ffill (forward fill). This means you take two columns val and newval. Any missing value in newval is filled by value in val. The axis=1 means you fill by rows not by column\ndf[['val', 'newval']] = df[['val', 'newval']].ffill(axis=1)\n\n", "Given:\n# df1\n\n ID Value\n0 AA 2\n1 AB 1\n2 AC 2\n3 AD 1\n4 AE 2\n\n# df2\n\n ID New Value\n0 AA 1\n1 AC 1\n\nDoing:\n# Set Indices\ndf1, df2 = [df.set_index('ID') for df in (df1, df2)]\n\n# Use loc:\ndf1.loc[df2.index, 'Value'] = df2['New Value']\n\nprint(df1.reset_index())\n\nOutput:\n ID Value\n0 AA 1\n1 AB 1\n2 AC 1\n3 AD 1\n4 AE 2\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074479898_dataframe_numpy_pandas_python.txt
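Since the question specifically reached for np.where, the map-based idea can also be phrased that way:

import numpy as np

m = df1['ID'].map(df2.set_index('ID')['New Value'])
df1['Value'] = np.where(m.notna(), m, df1['Value'])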
Q: Azure Blob Storage with Python, create containers but not list them? Azure Blob Storage v12.13.1 Python 3.9.15 I have no problem creating containers... ## Create the container blob_service_client = BlobServiceClient(account_url=sas_url) container_client = blob_service_client.create_container(container_name) but when I go to list them all_containers = blob_service_client.list_containers() for i,r in enumerate(all_containers): print(r) I get this error... HttpResponseError: This request is not authorized to perform this operation using this resource type. RequestId:a04349e6-b01e-0010-58ac-fa6495000000 Appreciate any suggestions! A: More than likely you are encountering this error is because your SAS token does not have list (l) permission. Please try creating a blob service client with a SAS URL that has list permission in it.
Azure Blob Storage with Python, create containers but not list them?
Azure Blob Storage v12.13.1 Python 3.9.15 I have no problem creating containers... ## Create the container blob_service_client = BlobServiceClient(account_url=sas_url) container_client = blob_service_client.create_container(container_name) but when I go to list them all_containers = blob_service_client.list_containers() for i,r in enumerate(all_containers): print(r) I get this error... HttpResponseError: This request is not authorized to perform this operation using this resource type. RequestId:a04349e6-b01e-0010-58ac-fa6495000000 Appreciate any suggestions!
[ "More than likely you are encountering this error is because your SAS token does not have list (l) permission.\nPlease try creating a blob service client with a SAS URL that has list permission in it.\n" ]
[ 1 ]
[]
[]
[ "azure", "azure_blob_storage", "python", "python_3.x" ]
stackoverflow_0074479952_azure_azure_blob_storage_python_python_3.x.txt
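For reference, an account SAS that can list containers needs both the list permission and service-level resource access. A sketch with azure-storage-blob v12 (the account name and key are placeholders):

from datetime import datetime, timedelta
from azure.storage.blob import (
    BlobServiceClient,
    generate_account_sas,
    ResourceTypes,
    AccountSasPermissions,
)

sas_token = generate_account_sas(
    account_name="<account-name>",
    account_key="<account-key>",
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True, create=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

blob_service_client = BlobServiceClient(
    account_url="https://<account-name>.blob.core.windows.net",
    credential=sas_token,
)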
Q: 'int' object is not iterable in arrays with use height I'm having a problem with this code, I need to calculate the height of a certain number of people and after that: show the smallest and largest height of the group the average height of the women the percentage difference between the amount of men and women When running the code, an error appears: print('A menor e maior altura do grupo são: {} e {}'. format({min(altura_grupo)}, {max(altura_grupo)})) TypeError: 'int' object is not iterable sizegroup = int(input('Digite o tamanho do grupo:')) altura_grupo = [] altura_h = [] altura_m = [] grupo_homens = [] grupo_mulheres = [] for num in range(sizegroup): sexo = input('Sexo (M | F):') altura_grupo = int(input('Digite a sua altura (em cm):')) if sexo in 'Mm': grupo_homens.append(sexo) altura_h.append(altura_grupo) else: grupo_mulheres.append(sexo) altura_m.append(altura_grupo) print('A menor e maior altura do grupo são: {} e {}'. format({min(altura_grupo)}, {max(altura_grupo)})) print('A média das alturas femininas é:', {sum(altura_m)/lens(grupo_mulheres)}) print('A quantidade de homens é {} e a diferença percentual com a quantidade de mulheres é de:{}'. format(lens(grupo_homens), (lens(grupo_homens)-lens(grupo_mulheres))*100)) When running, the code must: Receive the number of people in a group Enter gender and height and with this, present the smallest and largest height of people The average height among women The percentage difference between men and women A: altura_grupo = int(input('Digite a sua altura (em cm):')) is replacing the list with the input, not adding to the list. Use append() to add to a list. altura_grupo.append(int(input('Digite a sua altura (em cm):'))) Then you will be able to get the minimum and maximum of the list.
'int' object is not iterable in arrays with use height
I'm having a problem with this code, I need to calculate the height of a certain number of people and after that: show the smallest and largest height of the group the average height of the women the percentage difference between the amount of men and women When running the code, an error appears: print('A menor e maior altura do grupo são: {} e {}'. format({min(altura_grupo)}, {max(altura_grupo)})) TypeError: 'int' object is not iterable sizegroup = int(input('Digite o tamanho do grupo:')) altura_grupo = [] altura_h = [] altura_m = [] grupo_homens = [] grupo_mulheres = [] for num in range(sizegroup): sexo = input('Sexo (M | F):') altura_grupo = int(input('Digite a sua altura (em cm):')) if sexo in 'Mm': grupo_homens.append(sexo) altura_h.append(altura_grupo) else: grupo_mulheres.append(sexo) altura_m.append(altura_grupo) print('A menor e maior altura do grupo são: {} e {}'. format({min(altura_grupo)}, {max(altura_grupo)})) print('A média das alturas femininas é:', {sum(altura_m)/lens(grupo_mulheres)}) print('A quantidade de homens é {} e a diferença percentual com a quantidade de mulheres é de:{}'. format(lens(grupo_homens), (lens(grupo_homens)-lens(grupo_mulheres))*100)) When running, the code must: Receive the number of people in a group Enter gender and height and with this, present the smallest and largest height of people The average height among women The percentage difference between men and women
[ "altura_grupo = int(input('Digite a sua altura (em cm):'))\n\nis replacing the list with the input, not adding to the list. Use append() to add to a list.\naltura_grupo.append(int(input('Digite a sua altura (em cm):')))\n\nThen you will be able to get the minimum and maximum of the list.\n" ]
[ 1 ]
[]
[]
[ "arrays", "conditional_statements", "javascript", "python" ]
stackoverflow_0074480174_arrays_conditional_statements_javascript_python.txt
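For completeness, the reporting block at the end of that script then also needs len (the original misspells it as lens) and a guard for groups without women; under one reading of the requested percentage (difference relative to the group size), roughly:

print('A menor e maior altura do grupo são: {} e {}'.format(min(altura_grupo), max(altura_grupo)))
if grupo_mulheres:
    print('A média das alturas femininas é:', sum(altura_m) / len(grupo_mulheres))
print('A quantidade de homens é {} e a diferença percentual com a quantidade de mulheres é de: {:.1f}%'.format(
    len(grupo_homens), (len(grupo_homens) - len(grupo_mulheres)) / sizegroup * 100))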
Q: Machine Learning: Combining Binary Encoder and RobustScaler I have a dataset with numerical and categorical data. The data includes outliers, which are essential for interpretation later. I’ve binary encoded the categorical data and used the RobustScaler on the numerical data. The categorical binary encoded data does not get scaled. Is this combination possible, or is there a logical error? A: There's no reason why you couldn't do that, but there's also no point. The reason why you scale input features to be on roughly the same scale is that lots of inference methods get tripped up by features which are on vastly different scales. See Why does feature scaling improve the convergence speed for gradient descent? for more. A binary feature which ranges from 0 to 1 and a continuous feature where the 25-75% percentile range from -1 to 1 are already on approximately the same scale. Since a binary feature is easier to interpret than a scaled binary feature, I would just leave it and not apply another scaling method.
Machine Learning: Combining Binary Encoder and RobustScaler
I have a dataset with numerical and categorical data. The data includes outliers, which are essential for interpretation later. I’ve binary encoded the categorical data and used the RobustScaler on the numerical data. The categorical binary encoded data does not get scaled. Is this combination possible or is there a logical error?
[ "There's no reason why you couldn't do that, but there's also no point.\nThe reason why you scale input features to be on roughly the same scale is that lots of inference methods get tripped up by features which are on vastly different scales. See Why does feature scaling improve the convergence speed for gradient descent? for more.\nA binary feature which ranges from 0 to 1 and a continuous feature where the 25-75% percentile range from -1 to 1 are already on approximately the same scale.\nSince a binary feature is easier to interpret than a scaled binary feature, I would just leave it and not apply another scaling method.\n" ]
[ 0 ]
[]
[]
[ "data_preprocessing", "machine_learning", "python", "scaling" ]
stackoverflow_0074480076_data_preprocessing_machine_learning_python_scaling.txt
Q: Unable to install pwn package for python I am trying to install the pwn library on my MacBook Air (M2, 2022) but it's failing while building the wheel for unicorn. I'm using python version 3.10.6. This is the command I'm using: python3 -m pip install --upgrade pwn without the --upgrade part I still get the same error message. If I replace pwn with pwntools I still get the same error message as well. wtdcode stated in the GitHub issue: "Due to the fact that GitHub doesn't provide an M1 CI (actions/runner-images#2187), there is no available PyPI release yet. You may build it by yourself." So my question is, How do I build it myself? error msg: Building wheel for unicorn (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [4 lines of output] running bdist_wheel running build Building C extensions error: [Errno 2] No such file or directory: '/private/var/folders/6d/85dtjcrj57173csw50tk8r300000gn/T/pip-install-o33_11sd/unicorn_530dd415f77a40418edfdec7c2d599f2/../../include/unicorn' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for unicorn Running setup.py clean for unicorn Successfully built psutil Failed to build unicorn Installing collected packages: unicorn, pyserial, pyelftools, rpyc, ropgadget, requests, python-dateutil, pysocks, psutil, pathlib2, packaging, mako, intervaltree, colored-traceback, paramiko, pwntools, pwn Running setup.py install for unicorn ... error error: subprocess-exited-with-error × Running setup.py install for unicorn did not run successfully. │ exit code: 1 ╰─> [4 lines of output] running install running build Building C extensions error: [Errno 2] No such file or directory: '/private/var/folders/6d/85dtjcrj57173csw50tk8r300000gn/T/pip-install-o33_11sd/unicorn_530dd415f77a40418edfdec7c2d599f2/../../include/unicorn' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> unicorn note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. Thank you for your help. A: I have an M1 mac and had the same issue—nothing worked for me either, so I eventually just tried installing an older version of unicorn (if you do pip install unicorn== without specifying the version, you can list all of them), and tried different ones until one worked. (For me, this was just downgrading to 2.0.0)
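A quick sketch of the version-pinning workaround from the answer (2.0.0 is just the release that happened to work for the answerer; pick whichever one installs cleanly on your machine):

# Asking pip for an empty version makes it list every available release:
python3 -m pip install unicorn==
# Pin a release that installs on Apple Silicon, then put pwntools on top:
python3 -m pip install unicorn==2.0.0
python3 -m pip install pwntools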
Unable to install pwn package for python
I am trying to install the pwn library on my MacBook Air (M2, 2022) but it's failing while building the wheel for unicorn. I'm using python version 3.10.6. This is the command I'm using: python3 -m pip install --upgrade pwn without the --upgrade part I still get the same error message. If I replace pwn with pwntools I still get the same error message as well. wtdcode stated in the GitHub issue: "Due to the fact that GitHub doesn't provide an M1 CI (actions/runner-images#2187), there is no available PyPI release yet. You may build it by yourself." So my question is, How do I build it myself? error msg: Building wheel for unicorn (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [4 lines of output] running bdist_wheel running build Building C extensions error: [Errno 2] No such file or directory: '/private/var/folders/6d/85dtjcrj57173csw50tk8r300000gn/T/pip-install-o33_11sd/unicorn_530dd415f77a40418edfdec7c2d599f2/../../include/unicorn' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for unicorn Running setup.py clean for unicorn Successfully built psutil Failed to build unicorn Installing collected packages: unicorn, pyserial, pyelftools, rpyc, ropgadget, requests, python-dateutil, pysocks, psutil, pathlib2, packaging, mako, intervaltree, colored-traceback, paramiko, pwntools, pwn Running setup.py install for unicorn ... error error: subprocess-exited-with-error × Running setup.py install for unicorn did not run successfully. │ exit code: 1 ╰─> [4 lines of output] running install running build Building C extensions error: [Errno 2] No such file or directory: '/private/var/folders/6d/85dtjcrj57173csw50tk8r300000gn/T/pip-install-o33_11sd/unicorn_530dd415f77a40418edfdec7c2d599f2/../../include/unicorn' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> unicorn note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. Thank you for your help.
[ "I have an M1 mac and had the same issue—nothing worked for me either, so I eventually just tried installing an older version of unicorn (if you do pip install unicorn== without specifying the version, you can list all of them), and tried different ones until one worked.\n(For me, this was just downgrading to 2.0.0)\n" ]
[ 0 ]
[]
[]
[ "pip", "pwntools", "python", "unicorn" ]
stackoverflow_0073819091_pip_pwntools_python_unicorn.txt
Q: how do I input custom arrays into rows & columns in 2d character array Rows = int(input("give the number of rows:")) Columns = int(input("Give the number of columns:")) matrix = [] for i in range(Rows): matrix.append(['a', 'b', 'c','d', 'e']) for vector in matrix: print(matrix) here's the output: give the number of rows:3 Give the number of columns:3 [['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [It needs to look like this when the user inputs 3 rows and 3 columns] a b c d e f g h i A: There are many ways to initialize an array with a specific size. Below is one of the more concise ways. Rows = int(input("Give the number of rows:")) Columns = int(input("Give the number of columns:")) matrix = [["a"]*Rows]*Columns print(matrix) This will give the output Give the number of rows:3 Give the number of columns:3 [['a', 'a', 'a'], ['a', 'a', 'a'], ['a', 'a', 'a']] This gives the array sizing that you are looking for.
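As an aside, [["a"]*Rows]*Columns stores Columns references to the same row list, so mutating one cell would appear to change every row, and it also swaps the two dimensions. A sketch that avoids both issues and fills the grid with consecutive letters, matching the 3x3 output the question asks for:

import string

rows = int(input('Give the number of rows: '))
cols = int(input('Give the number of columns: '))

letters = string.ascii_lowercase
# A nested comprehension builds an independent list for each row;
# the modulo wraps around if the grid has more than 26 cells.
matrix = [[letters[(r * cols + c) % len(letters)] for c in range(cols)]
          for r in range(rows)]

for row in matrix:
    print(*row)   # prints "a b c" / "d e f" / "g h i" for a 3x3 grid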
how do I input custom arrays into rows & columns in 2d character array
Rows = int(input("give the number of rows:")) Columns = int(input("Give the number of columns:")) matrix = [] for i in range(Rows): matrix.append(['a', 'b', 'c','d', 'e']) for vector in matrix: print(matrix) here's the output: give the number of rows:3 Give the number of columns:3 [['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']] [it needed to be like this when the user input the rows and columns 3x3] a b c d e f g h i
[ "There are many ways to initialize an array with a specific size. Below is one of the more concise ways.\nRows = int(input(\"Give the number of rows:\"))\nColumns = int(input(\"Give the number of columns:\"))\nmatrix = [[\"a\"]*Rows]*Columns\n\nprint(matrix)\n\nThis will give the output\nGive the number of rows:3\nGive the number of columns:3\n[['a', 'a', 'a'], ['a', 'a', 'a'], ['a', 'a', 'a']]\n\nThis gives the array sizing that you are looking for.\n" ]
[ 0 ]
[]
[]
[ "arrays", "matrix", "python" ]
stackoverflow_0074480133_arrays_matrix_python.txt
Q: The view didn't return an HttpResponse object. It returned None instead I have the following simple view. Why is it resulting in this error? The view auth_lifecycle.views.user_profile didn't return an HttpResponse object. It returned None instead. """Renders web pages for the user-authentication-lifecycle project.""" from django.shortcuts import render from django.template import RequestContext from django.contrib.auth import authenticate, login def user_profile(request): """Displays information unique to the logged-in user.""" user = authenticate(username='superuserusername', password='sueruserpassword') login(request, user) render(request, 'auth_lifecycle/user_profile.html', context_instance=RequestContext(request)) A: Because the view must return render, not just call it. (Note that render is a simple wrapper around an HttpResponse). Change the last line to return render(request, 'auth_lifecycle/user_profile.html', context_instance=RequestContext(request)) (Also note the render(...) function returns a HttpResponse object behind the scenes.) A: if qs.count()==1: print('cart id exists') if .... else: return render(request,"carts/home.html",{}) Code like this will also return the same error; the cause is the indentation, as the return statement belongs to the else branch, not the if statement. The above code can be changed to if qs.count()==1: print('cart id exists') if .... else: return render(request,"carts/home.html",{}) This may solve such issues A: I had the same error using an UpdateView. I had this: if form.is_valid() and form2.is_valid(): form.save() form2.save() return HttpResponseRedirect(self.get_success_url()) and I solved it just by doing: if form.is_valid() and form2.is_valid(): form.save() form2.save() return HttpResponseRedirect(reverse_lazy('adopcion:solicitud_listar')) A: I know this is very late to post something here, but it may help someone figure out the silly mistake. There is a chance that you are missing a return before render(); please make sure of that. A: Python is very sensitive to indentation; with the code below I got the same error: except IntegrityError as e: if 'unique constraint' in e.args: return render(request, "calender.html") The correct indentation is: except IntegrityError as e: if 'unique constraint' in e.args: return render(request, "calender.html") A: I had the same issue but resolved it by returning the render after saving the data. error_message = None if not first_name: error_message = "first name is required!!!!" elif len(first_name) < 4: error_message = "first name must be more than 4 characters!!!" elif not error_message: signup_obj = Signuup(firstname=first_name, lastname=last_name, email=email, password=password) print("here is the complete object!!!!") signup_obj.register() else: return render(request, 'signup.html', {'error': error_message}) Issue: after saving the data there was no error_message to show, but I was not returning anything after saving. Solution: the error was solved after adding signup_obj.register() return render(request, 'signup.html') to the code. A: In Django, every view must return a HttpResponse (or its subclass). But we usually use the render(...) function while rendering templates in Django. If we are using the render(...) function, we won't encounter any errors, since render(...) returns an HttpResponse internally. Coming to this specific case, you're missing a return statement, and thus the view did return None, which caused the exception. 

So, adding a return statement will solve the issue, as below def user_profile(request): # your code return render(...) ^^^^^^ Troubleshooting Many people face this issue; their code/logic may differ, but the reason will be the same. Here are a few scenarios that may help you to troubleshoot the situation. Have you missed adding a return statement? def user_profile(request): HttpResponse("Success") # missing a `return` here Are you sure the returned object is a HttpResponse or one of its subclasses? Some people may return the model object or form object directly from the view def user_profile(request): my_model_object = MyModel.objects.get(pk=request.GET.get('id')) # at last, return a model instance return my_model_object Do all your if...else clauses properly return a HttpResponse? (In the following example, it is not clear what should be returned: a. in case form.is_valid() is False b. in case form.is_valid() is True (after form.save()) def user_profile(request): if request.method == "POST": form = UserProfileForm(request.POST, instance=request.user) if form.is_valid(): form.save() else: return render( request, "user_profile.html", {"form": UserProfileForm(instance=request.user)}, )
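Applied back to the question's view, the minimal fix is the return statement. A hedged sketch (context_instance/RequestContext was removed in Django 1.10, so a plain context dict is passed instead; the credentials are placeholders):

from django.contrib.auth import authenticate, login
from django.shortcuts import render

def user_profile(request):
    """Displays information unique to the logged-in user."""
    user = authenticate(username='superuserusername',
                        password='superuserpassword')
    if user is not None:   # authenticate() returns None on bad credentials
        login(request, user)
    # Returning render(...) hands Django the HttpResponse it expects.
    return render(request, 'auth_lifecycle/user_profile.html', {})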
The view didn't return an HttpResponse object. It returned None instead
I have the following simple view. Why is it resulting in this error? The view auth_lifecycle.views.user_profile didn't return an HttpResponse object. It returned None instead. """Renders web pages for the user-authentication-lifecycle project.""" from django.shortcuts import render from django.template import RequestContext from django.contrib.auth import authenticate, login def user_profile(request): """Displays information unique to the logged-in user.""" user = authenticate(username='superuserusername', password='sueruserpassword') login(request, user) render(request, 'auth_lifecycle/user_profile.html', context_instance=RequestContext(request))
[ "Because the view must return render, not just call it. (Note that render is a simple wrapper around an HttpResponse). Change the last line to\nreturn render(request, 'auth_lifecycle/user_profile.html',\n context_instance=RequestContext(request))\n\n(Also note the render(...) function returns a HttpResponse object behind the scenes.)\n", "if qs.count()==1:\n print('cart id exists')\n if ....\n\nelse: \n return render(request,\"carts/home.html\",{})\n\nSuch type of code will also return you the same error this is because \nof the intents as the return statement should be for else not for if statement.\nabove code can be changed to \nif qs.count()==1:\n print('cart id exists')\n if ....\n\nelse: \n\nreturn render(request,\"carts/home.html\",{})\n\nThis may solve such issues\n", "I had the same error using an UpdateView\nI had this:\nif form.is_valid() and form2.is_valid():\n form.save()\n form2.save()\n return HttpResponseRedirect(self.get_success_url())\n\nand I solved just doing:\nif form.is_valid() and form2.is_valid():\n form.save()\n form2.save()\n return HttpResponseRedirect(reverse_lazy('adopcion:solicitud_listar'))\n\n", "I know this is very late to post something here but this may help out someone to figure out the silly mistake.\nThat there are chances that you a re missing return before render(). please make sure that.\n", "Python is very sensitive to indentation, with the code below I got the same error:\n except IntegrityError as e:\n if 'unique constraint' in e.args:\n return render(request, \"calender.html\")\n\nThe correct indentation is:\n except IntegrityError as e:\n if 'unique constraint' in e.args:\n return render(request, \"calender.html\")\n\n", "I have the same issue but resolved it by returning the render after saving the data.\n error_message = None\n if not first_name:\n error_message = \"first name is required!!!!\"\n elif len(first_name) < 4:\n error_message = \"first name must be more than 4 characters!!!\"\n\n elif not error_message:\n signup_obj = Signuup(firstname=first_name, lastname=last_name, email=email, password=password)\n print(\"here is the complete object!!!!\")\n\n signup_obj.register()\n \n\n else:\n return render(request, 'signup.html', {'error': error_message})\n\nIssue: After saving data if I do not have any error_message to show but I am not returning anything after saving.\nSolution Error solved after adding\n signup_obj.register()\n return render(request, 'signup.html')\n\nIn the code....\n", "In Django, every view must return a HttpResponse (or its subclass). But, we usually use the render(...) function while rendering the templates in Django. If we are using render(...) function, we won't encounter any errors since the render(...) is returning an HttpResponse internally.\nComing to this specific case, you're missing a return statement, and thus the view did return None, which caused the exception.\nSo, adding a return statement will solve the issue, as below\ndef user_profile(request):\n # your code\n return render(...)\n ^^^^^^\n\n\nTroubleshooting\nMany people face this issue; their code/logic may differ, but the reason will be the same. Here are a few scenarios that may help you to troubleshoot the situation,\n\nHave you missed adding a return statement?\n\ndef user_profile(request):\n HttpResponse(\"Success\") # missing a `return` here\n\n\nAre you sure the returned object is a HttpResponse or its a subclass? 
Some people may return the model object or form object directly from the view\n\ndef user_profile(request):\n my_model_object = MyModel.objects.get(pk=request.GET.get('id'))\n # at last, return a model instance\n return my_model_object\n\n\nDoes all your if...else clauses properly return a HttpResponse? (In the following example, It is not clear what should return,\na. in case the form.is_valid() is False\nb. in case the form.is_valid() is True (after form.save())\n\n\ndef user_profile(request):\n if request.method == \"POST\":\n form = UserProfileForm(request.POST, instance=request.user)\n if form.is_valid():\n form.save()\n else:\n return render(\n request,\n \"user_profile.html\",\n {\"form\": UserProfileForm(instance=request.user)},\n )\n\n" ]
[ 105, 13, 5, 2, 0, 0, 0 ]
[]
[]
[ "django", "django_views", "python" ]
stackoverflow_0026258905_django_django_views_python.txt
Q: Computing average loop in Python based on certain conditions met in another column First timer posting here and new to Python, so apologies in advance if I am missing any key information below. Essentially, I have a large CSV file that I was able to clean up a bit with scripts; it contains various numerical values over ~150 miles of data, with each data line being one foot. After I clean the file up a bit, tables typically look something like the one below: ABC Mile Ft Param1 A 1 1000 0.1234 A 1 1001 0.1111 A 1 1002 0.1221 A 1 1003 0.1511 B 1 1004 0.1999 B 1 1005 0.2011 B 1 1006 0.1878 B 1 1007 0.1999 C 1 1008 0.5321 C 1 1009 0.5333 C 1 1010 0.5445 C 1 1011 0.5655 C 1 1012 0.5852 A 1 1013 0.2788 A 1 1014 0.2899 A 1 1015 0.2901 A 1 1016 0.2921 A 1 1017 0.2877 A 1 1018 0.2896 For this file, the 'ABC' column will always only equal A, B, or C. What I am trying to do is average the Param1 numbers for each set of A, B, and C. Thus in the example above, I would be looking to get the average of Param1 when it equals A from Ft 1000 to 1003, when it equals B from Ft 1004 to 1007, when it equals C from Ft 1008 to 1012, when it equals A from 1013 to 1018 and so on for the rest of the file. Edit I should also mention that in these files, ABC will typically equal the same value for several hundred rows until it equals another value that will again repeat for several hundred rows, and so on. So the 'ABC' column values could look something like this: AAA...AAA BBB...BBB CCC...CCC BBB...BBB AAA...AAA I have been looking at use of a for loop as below, but the problem is that I get all the averages of Param1 when ABC equals A over a full mile, not each grouping. This is what I have thus far: for i in range(1,df['Mile'].max()): avg_p1 = df.loc[(df['Mile'] == i) & (df['ABC'] =='A'), 'Param1'].mean() print(avg_p1) But in this case, I get the average of Param1 when ABC = A over the full mile. In the table example above, I want the average of Param1 when ABC = A from Ft 1000 to 1003 and 1013 to 1018, as separate averages repeated through the whole document. Would there need to be a second for loop or some kind of if/else condition added to the existing loop above? Any help for this novice programmer would be much appreciated :) A: Thank you for this interesting question. The idea is to create a group for each continuous run of 'A', 'B', or 'C' until it changes. I also assume that your data is already sorted by mile df['change'] = np.where(df['ABC']!=df['ABC'].shift(1),1.0,0.0) Now you simply cumsum to create a new group indicator df['gr'] = df['change'].cumsum() Everything should be fine now and you can use groupby to get what you want df.groupby('gr')['Param1'].mean().reset_index() A: df.groupby('ABC')['Ft'].mean() output: ABC A 1009.9 B 1005.5 C 1010.0 Name: Ft, dtype: float64 A: First, get a list of the bins for each category, then you can do the average by category and bin... 

something like this: results = {} for cat in df['ABC'].unique(): results[cat] = [] category_index = df[df['ABC'] == cat].index.to_series() # Get list of continuous indexes bins = category_index.groupby( category_index.diff().ne(1).cumsum() ).agg(['first','last']).apply(tuple,1).tolist() # Average by category and bin for bin in bins: bin_low, bin_high = bin df_cut = df.iloc[bin_low:bin_high] low_ft, high_ft = df.iloc[bin_low]['Ft'], df.iloc[bin_high]['Ft'] average_value = df_cut.groupby('ABC').mean()['Param1'][cat] results[cat].append(((low_ft, high_ft), average_value)) results Output: { 'A': [ ((1000, 1003), 0.11886666666666668), ((1013, 1018), 0.28772000000000003) ], 'B': [ ((1004, 1007), 0.19626666666666667) ], 'C': [ ((1008, 1012), 0.54385) ] }
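The accepted run-grouping idea can also be written as one chain that reports the Ft range of each run alongside its average (assuming df holds the question's columns, sorted as shown):

import pandas as pd

# A new run starts wherever 'ABC' differs from the previous row.
runs = (df['ABC'] != df['ABC'].shift()).cumsum()
summary = (df.groupby([runs.rename('run'), 'ABC'])
             .agg(ft_start=('Ft', 'min'),
                  ft_end=('Ft', 'max'),
                  avg_param1=('Param1', 'mean'))
             .reset_index())
print(summary)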
Computing average loop in Python based on certain conditions met in another column
First timer posting here and new to Python, so apologies in advance if I am missing any key information below. Essentially, I have a large CSV file that I was able to clean up a bit on scripts that contains various numerical values over ~150 miles of data with each data line being one foot. After I clean the file up a bit, tables would typically look like something below: ABC Mile Ft Param1 A 1 1000 0.1234 A 1 1001 0.1111 A 1 1002 0.1221 A 1 1003 0.1511 B 1 1004 0.1999 B 1 1005 0.2011 B 1 1006 0.1878 B 1 1007 0.1999 C 1 1008 0.5321 C 1 1009 0.5333 C 1 1010 0.5445 C 1 1011 0.5655 C 1 1012 0.5852 A 1 1013 0.2788 A 1 1014 0.2899 A 1 1015 0.2901 A 1 1016 0.2921 A 1 1017 0.2877 A 1 1018 0.2896 For this file, the 'ABC' column will always only equal A, B, or C. What I am trying to do is average the Param1 numbers for each set of A, B, and C. Thus in the example above, I would be looking to get the average of Param1 when it equals A from Ft 1000 to 1003, when it equals B from Ft 1004 to 1007, when it equals C from Ft 1008 to 1012, when it equals A from 1013 to 1018 and so on for the rest of the file. Edit I should also mention that in these files, ABC will equal the same value typically for several hundred rows until it equals another value that will again repeat for several hundred rows, and so on. So the 'ABC' column could values could be something like this: AAA...AAA BBB...BBB CCC...CCC BBB...BBB AAA...AAA I have been looking at use of a for loop as below, but the problem is that I get all the averages of Param1 when equals A over a full mile, not each grouping. This is what I have thus far: for i in range(1,df['Mile'].max()): avg_p1 = df.loc[(df['Mile'] == i) & (df['ABC'] =='A'), 'Param1'].mean() print(avg_p1) But in this case, I get the average of Param1 when ABC = A over the full mile. In the table example above, I want the average of Param1 when ABC = A from Ft 1000 to 1003 and 1013 to 1018, as separate averages repeated through the whole document. Would there need to be a second for loop or some kind of if/else condition added to the existing loop above? Any help for this novice programmer would be much appreciated :)
[ "Thank you for this interesting question.\nThe idea is to create a group for each continuous value 'A', 'B', or 'C' until it changes. I also assume that your data is already sorted by mile\ndf['change'] = np.where(df['ABC']!=df['ABC'].shift(1),1.0,0.0)\n\nNow you simply cumsum to create new group indicator\ndf['gr'] = df['change'].cumsum()\n\nEverything should be fine now and you can use groupby to get what you want\ndf.groupby('gr')['Param1'].mean().reset_index()\n\n", "df.groupby('ABC')['Ft'].mean()\n\noutput:\nABC\nA 1009.9\nB 1005.5\nC 1010.0\nName: Ft, dtype: float64\n\n", "First, get a list of the bins for each category, then you can do the average by category and bin... something like this:\nresults = {}\nfor cat in df['ABC'].unique():\n results[cat] = []\n category_index = df[df['ABC'] == cat].index.to_series()\n\n # Get list of continuous indexes\n bins = category_index.groupby(\n category_index.diff().ne(1).cumsum()\n ).agg(['first','last']).apply(tuple,1).tolist()\n\n # Average by category and bin\n for bin in bins:\n bin_low, bin_high = bin\n df_cut = df.iloc[bin_low:bin_high]\n low_ft, high_ft = df.iloc[bin_low]['Ft'], df.iloc[bin_high]['Ft']\n average_value = df_cut.groupby('ABC').mean()['Param1'][cat]\n results[cat].append(((low_ft, high_ft), average_value))\n\nresults\n\nOutput:\n{\n 'A': [\n ((1000, 1003), 0.11886666666666668),\n ((1013, 1018), 0.28772000000000003)\n ],\n 'B': [\n ((1004, 1007), 0.19626666666666667)\n ],\n 'C': [\n ((1008, 1012), 0.54385)\n ]\n}\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "for_loop", "mean", "pandas", "python" ]
stackoverflow_0074479749_for_loop_mean_pandas_python.txt
Q: how to get specific objects in an API? Hi, I'm trying to consume an API in Python. I made the connection and it works pretty well. That API gives me 100 results, and I just want to get 10 of them; do you know how to do that? import requests import pprint url='https://jsonplaceholder.typicode.com/post' response=requests.get(url) pprint.pprint(response.json()) I tried list comprehensions but I don't get how to use them on the dictionary for that API A: When you make your request, if it was valid, the response object has a json method which returns the JSON data of your response. In your case response.json() gives you a list of JSON objects. You can manipulate that like any Python list. result = response.json() first_ten = result[:10] What the [:10] notation means is "Give me a piece of the list starting from zero up to 10". The zero is implied because no first number is specified - it's the same as result[0:10], and you can use this notation to get any subsequence of the list you want. P.S. Your url value is missing an s - 'https://jsonplaceholder.typicode.com/post'; the right url is 'https://jsonplaceholder.typicode.com/posts'
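Putting the answer together as a runnable sketch (posts from jsonplaceholder carry id and title fields):

import requests

url = 'https://jsonplaceholder.typicode.com/posts'   # note the trailing 's'
response = requests.get(url)
response.raise_for_status()

first_ten = response.json()[:10]   # plain list slicing on the parsed JSON
for post in first_ten:
    print(post['id'], post['title'])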
how to get specific objects in an API?
Hi, I'm trying to consume an API in Python. I made the connection and it works pretty well. That API gives me 100 results, and I just want to get 10 of them; do you know how to do that? import requests import pprint url='https://jsonplaceholder.typicode.com/post' response=requests.get(url) pprint.pprint(response.json()) I tried list comprehensions but I don't get how to use them on the dictionary for that API
[ "When you make your request, if it was valid, the response object has a json method which returns the json data of your response.\nIn your case response.json() gives you a list of json objects. You can manipulate that like any python list.\nresult = response.json()\nfirst_ten = result[:10]\n\nWhat the [:10] notation means is \"Give me a piece of the list starting from zero up to 10\". The zero is implied because no first number is specified - it's the same as result[0:10] and you can use this notation to get any subsequence of the list you want.\nP.S. Your url value is missing an s - 'https://jsonplaceholder.typicode.com/post', the right url is - 'https://jsonplaceholder.typicode.com/posts'\n" ]
[ 1 ]
[]
[]
[ "api", "dictionary", "python" ]
stackoverflow_0074480201_api_dictionary_python.txt
Q: How to align text left on a plotly bar chart (example image contained) [Plotly-Dash] I need help in adding text to my graph. I have tried text = 'y' and text-position = 'inside' but the text goes vertical or gets squashed for small bar charts so it can fit inside the bar. I just want it to write across. Here is a working example of the code that needs fixing: app = dash.Dash(__name__) app.css.append_css({'external_url': 'https://codepen.io/amyoshino/pen/jzXypZ.css'}) labels1 = ['0-7', '8-12', '13-15', '16-20', '21-25', '26+'] values1 = [10, 30, 10, 5, 6, 8] labels2 = ['India', 'Scotland', 'Germany', 'NW England', 'N Ireland', 'Norway', 'NE England', 'Paris', 'North Africa', 'scandinavia'] values2 = [1, 0, 4, 9, 11, 18, 50, 7, 0, 2] values3 = [10, 111, 75, 20] labels4 = ['Safety Manager', 'Office Administrator', 'Internal Officer', 'Assistant Producer'] bar_color = ['#f6fbfc', '#eef7fa', '#e6f3f7', '#deeff5', '#d6ebf2', '#cde7f0', '#c5e3ed', '#bddfeb', '#b5dbe8', '#add8e6'] bar_color2 = ['#e6f3f7', '#deeff5', '#d6ebf2', '#cde7f0', '#c5e3ed', '#bddfeb', '#b5dbe8', '#add8e6'] app.layout = html.Div([ html.Div([ html.Div([ dcc.Graph(id = 'age', figure = { 'data': [go.Bar(x = values1, y = labels1, orientation = 'h', marker=dict(color = bar_color2), text = labels1, textposition = 'inside' ) ], 'layout': go.Layout(title = 'Number of respondees per tenure', yaxis=dict( zeroline=False, showline=False, showgrid = False, autorange="reversed", ), xaxis=dict( zeroline=False, showline=False, showgrid = False ) ) } ) ], className = 'four columns'), html.Div([ dcc.Graph(id = 'location', figure = { 'data': [go.Bar(x = values2, y = labels2, orientation = 'h', marker=dict(color = bar_color), text = labels2, textposition = 'inside' ) ], 'layout': go.Layout(title = 'Number of respondees per region', yaxis=dict( zeroline=False, showline=False, showgrid = False, autorange="reversed", ), xaxis=dict( zeroline=False, showline=False, showgrid = False ) ) } ) ], className = 'four columns'), html.Div([ dcc.Graph(id = 'job', figure = { 'data': [go.Bar(x = values3, y = labels4, orientation = 'h', marker=dict(color = bar_color2), text = labels4, textposition = 'inside' ) ], 'layout': go.Layout(title = 'Number of respondees per role', yaxis=dict( # automargin=True, zeroline=False, showline=False, showgrid = False, autorange="reversed", ), xaxis=dict( zeroline=False, showline=False, showgrid = False ) ) } ) ], className = 'four columns') ], className = 'row') ]) if __name__ == '__main__': app.run_server() Here's the output: Here's an example of how I want my text to look: I need help with two things: Make the text align to the left not the right of the bar. If the bar length is short I want the text to still be visible (even if the bar length is zero) and not squashed or vertically aligned. If you can also give an explanation of how to fix y-axis being cut off in the third chart that would be amazing. For now, I have to change the labels to force it to fit which is time-consuming. Is there a way of adding padding to the container or something? Thanks. A: You can pass text into go.Bar(), where you can set textposition="inside" and insidetextanchor="start", which should solve this issue. 
fig = go.Figure(go.Bar( x=[20, 14, 23], y=['giraffes', 'orangutans', 'monkeys'], orientation='h', # define the annotations text=['giraffes', 'orangutans', 'monkeys'], # position, "auto", "inside" or "outside" textposition="auto", # anchor could be "start" or "end" insidetextanchor="start", insidetextfont=dict(family='Times', size=13, color='white'), outsidetextfont=dict(family='Times', size=13, color='white'))) fig.update_layout( yaxis=dict( showticklabels=False, )) fig.show() A: This is an inelegant workaround, but after scouring the plotly python docs, I couldn't find anything that would do exactly what you were asking with the plotly attributes provided. If you need a one-time, quick fix now, try using yaxis=dict(showticklabels=False) and add your labels manually as annotations like: layout = go.Layout( # Hide the y tick labels yaxis=dict( showticklabels=False), annotations=[ dict( # I had to try different x values to get alignment x=0.8, y='giraffes', xref='x', yref='y', text='Giraffes', font=dict( family='Arial', size=24, color='rgba(255, 255, 255)' ), align='left', # Don't show any arrow showarrow=False, ), The output I got looked like: You can check the Plotly Annotations and Chart Attributes documentation to see if there is anything that better suits your needs. Edit: I started posting this response before the code was added to the question. Here is an example of how the annotations could be made for the first two y labels of the first graph in the code in question: app.layout = html.Div([ html.Div([ html.Div([ dcc.Graph(id = 'age', figure = { 'data': [go.Bar(x = values1, y = labels1, orientation = 'h', marker=dict(color = bar_color2), text = labels1, textposition = 'inside' ) ], 'layout': go.Layout(title = 'Number of respondees per tenure', yaxis=dict( zeroline=False, showline=False, showgrid = False, showticklabels=False autorange="reversed", ), xaxis=dict( zeroline=False, showline=False, showgrid = False ) ), annotations=[dict( x=0.8, y=labels1[0], xref='x', yref='y', text=labels1[0], font=dict( family='Arial', size=24, color='rgba(255, 255, 255)' ), align='left', showarrow=False, ), dict( x=1.2, y=labels1[1], xref='x', yref='y', text=labels1[1], font=dict( family='Arial', size=24, color='rgba(255, 255, 255)' ), align='left', showarrow=False, ), Edit 2: @ user8322222, to answer the question in your comment, you could use a list comprehension to make your annotations dictionary like so: annotations1 = [dict(x=(len(labels1[i])*0.15), y=labels1[i], xref='x', yref='y', text=labels1[i], font=dict(family='Arial', size=24, color='rgba(255, 255, 255)'), align='left', showarrow=False) for i in range(len(labels1))] However I don't think there will be a constant you could multiply by the length of the text in characters (like I used for x in the example) to get perfect alignment. You could use the pixel length or other measures for the string as in this post to devise a more accurate way of determining x to get it properly aligned. Hope that helps. A: You can prevent the y-axis from being cutoff in your third chart by changing the margins of the figure. Add the following code to the inside of the call to go.Layout(): margin=go.layout.Margin( l=150, # left margin, in px r=80, # right margin, in px t=80, # top margin, in px b=80, # bottom margin, in px pad=0 ) You can adjust the left margin for different y-axis labels, or you could set it to automatically scale with the length of the longest label. 
A: If you are using plotly.express plots, you can achieve that with: fig.update_traces(insidetextanchor="start")
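A compact plotly.express version combining the left text anchor with the wider left margin for long y labels (the tips dataset here is just a stand-in):

import plotly.express as px

df = px.data.tips().groupby('day', as_index=False)['total_bill'].sum()
fig = px.bar(df, x='total_bill', y='day', orientation='h', text='day')
# Anchor labels at the start (left) of each bar; 'auto' still moves the
# label outside when a bar is too short to hold it.
fig.update_traces(textposition='auto', insidetextanchor='start')
fig.update_layout(margin=dict(l=150))  # keep long y labels from being cut off
fig.show()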
How to align text left on a plotly bar chart (example image contained) [Plotly-Dash]
I need help in adding text to my graph. I have tried text = 'y' and text-position = 'inside' but the text goes vertical or gets squashed for small bar charts so it can fit inside the bar. I just want it to write across. Here is a working example of the code that needs fixing: app = dash.Dash(__name__) app.css.append_css({'external_url': 'https://codepen.io/amyoshino/pen/jzXypZ.css'}) labels1 = ['0-7', '8-12', '13-15', '16-20', '21-25', '26+'] values1 = [10, 30, 10, 5, 6, 8] labels2 = ['India', 'Scotland', 'Germany', 'NW England', 'N Ireland', 'Norway', 'NE England', 'Paris', 'North Africa', 'scandinavia'] values2 = [1, 0, 4, 9, 11, 18, 50, 7, 0, 2] values3 = [10, 111, 75, 20] labels4 = ['Safety Manager', 'Office Administrator', 'Internal Officer', 'Assistant Producer'] bar_color = ['#f6fbfc', '#eef7fa', '#e6f3f7', '#deeff5', '#d6ebf2', '#cde7f0', '#c5e3ed', '#bddfeb', '#b5dbe8', '#add8e6'] bar_color2 = ['#e6f3f7', '#deeff5', '#d6ebf2', '#cde7f0', '#c5e3ed', '#bddfeb', '#b5dbe8', '#add8e6'] app.layout = html.Div([ html.Div([ html.Div([ dcc.Graph(id = 'age', figure = { 'data': [go.Bar(x = values1, y = labels1, orientation = 'h', marker=dict(color = bar_color2), text = labels1, textposition = 'inside' ) ], 'layout': go.Layout(title = 'Number of respondees per tenure', yaxis=dict( zeroline=False, showline=False, showgrid = False, autorange="reversed", ), xaxis=dict( zeroline=False, showline=False, showgrid = False ) ) } ) ], className = 'four columns'), html.Div([ dcc.Graph(id = 'location', figure = { 'data': [go.Bar(x = values2, y = labels2, orientation = 'h', marker=dict(color = bar_color), text = labels2, textposition = 'inside' ) ], 'layout': go.Layout(title = 'Number of respondees per region', yaxis=dict( zeroline=False, showline=False, showgrid = False, autorange="reversed", ), xaxis=dict( zeroline=False, showline=False, showgrid = False ) ) } ) ], className = 'four columns'), html.Div([ dcc.Graph(id = 'job', figure = { 'data': [go.Bar(x = values3, y = labels4, orientation = 'h', marker=dict(color = bar_color2), text = labels4, textposition = 'inside' ) ], 'layout': go.Layout(title = 'Number of respondees per role', yaxis=dict( # automargin=True, zeroline=False, showline=False, showgrid = False, autorange="reversed", ), xaxis=dict( zeroline=False, showline=False, showgrid = False ) ) } ) ], className = 'four columns') ], className = 'row') ]) if __name__ == '__main__': app.run_server() Here's the output: Here's an example of how I want my text to look: I need help with two things: Make the text align to the left not the right of the bar. If the bar length is short I want the text to still be visible (even if the bar length is zero) and not squashed or vertically aligned. If you can also give an explanation of how to fix y-axis being cut off in the third chart that would be amazing. For now, I have to change the labels to force it to fit which is time-consuming. Is there a way of adding padding to the container or something? Thanks.
[ "You can pass text into go.Bar(), where you can set textposition=\"inside\" and insidetextanchor=\"start\", which should solve this issue.\n\nfig = go.Figure(go.Bar(\n x=[20, 14, 23],\n y=['giraffes', 'orangutans', 'monkeys'],\n orientation='h',\n # define the annotations\n text=['giraffes', 'orangutans', 'monkeys'],\n # position, \"auto\", \"inside\" or \"outside\"\n textposition=\"auto\",\n # anchor could be \"start\" or \"end\"\n insidetextanchor=\"start\",\n insidetextfont=dict(family='Times', size=13, color='white'),\n outsidetextfont=dict(family='Times', size=13, color='white')))\nfig.update_layout(\n yaxis=dict(\n showticklabels=False,\n ))\nfig.show()\n\n", "This is an inelegant workaround, but after scouring the plotly python docs, I couldn't find anything that would do exactly what you were asking with the plotly attributes provided. If you need a one-time, quick fix now, try using yaxis=dict(showticklabels=False) and add your labels manually as annotations like:\nlayout = go.Layout(\n # Hide the y tick labels\n yaxis=dict(\n showticklabels=False),\n annotations=[\n dict(\n # I had to try different x values to get alignment\n x=0.8,\n y='giraffes',\n xref='x',\n yref='y',\n text='Giraffes',\n font=dict(\n family='Arial',\n size=24,\n color='rgba(255, 255, 255)'\n ),\n align='left',\n # Don't show any arrow\n showarrow=False,\n ), \n\nThe output I got looked like:\n\nYou can check the Plotly Annotations and Chart Attributes documentation to see if there is anything that better suits your needs. \nEdit: I started posting this response before the code was added to the question. Here is an example of how the annotations could be made for the first two y labels of the first graph in the code in question:\napp.layout = html.Div([\n html.Div([ \n html.Div([\n dcc.Graph(id = 'age',\n figure = {\n 'data': [go.Bar(x = values1,\n y = labels1,\n orientation = 'h',\n marker=dict(color = bar_color2),\n text = labels1,\n textposition = 'inside'\n )\n ],\n 'layout': go.Layout(title = 'Number of respondees per tenure',\n yaxis=dict(\n zeroline=False,\n showline=False,\n showgrid = False,\n showticklabels=False\n autorange=\"reversed\",\n ),\n xaxis=dict(\n zeroline=False,\n showline=False,\n showgrid = False\n )\n ),\n annotations=[dict(\n x=0.8,\n y=labels1[0],\n xref='x',\n yref='y',\n text=labels1[0],\n font=dict(\n family='Arial',\n size=24,\n color='rgba(255, 255, 255)'\n ),\n align='left',\n showarrow=False,\n ), \n dict(\n x=1.2,\n y=labels1[1],\n xref='x',\n yref='y',\n text=labels1[1],\n font=dict(\n family='Arial',\n size=24,\n color='rgba(255, 255, 255)'\n ),\n align='left',\n showarrow=False,\n ),\n\nEdit 2: @ user8322222, to answer the question in your comment, you could use a list comprehension to make your annotations dictionary like so:\n annotations1 = [dict(x=(len(labels1[i])*0.15), y=labels1[i], xref='x', yref='y', \ntext=labels1[i], font=dict(family='Arial', size=24, color='rgba(255, 255, 255)'),\n align='left', showarrow=False) for i in range(len(labels1))]\n\nHowever I don't think there will be a constant you could multiply by the length of the text in characters (like I used for x in the example) to get perfect alignment. You could use the pixel length or other measures for the string as in this post to devise a more accurate way of determining x to get it properly aligned. Hope that helps.\n", "You can prevent the y-axis from being cutoff in your third chart by changing the margins of the figure. 
Add the following code to the inside of the call to go.Layout():\nmargin=go.layout.Margin(\n l=150, # left margin, in px\n r=80, # right margin, in px\n t=80, # top margin, in px\n b=80, # bottom margin, in px\n pad=0\n )\n\nYou can adjust the left margin for different y-axis labels, or you could set it to automatically scale with the length of the longest label.\n\n", "If you are using plotly.express plots, you can achieve that with:\nfig.update_traces(insidetextanchor=\"start\")\n\n" ]
[ 6, 2, 1, 0 ]
[]
[]
[ "plotly", "plotly_dash", "python" ]
stackoverflow_0055396090_plotly_plotly_dash_python.txt
Q: Get starlette request body in the middleware context I have such middleware class RequestContext(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next: RequestResponseEndpoint): request_id = request_ctx.set(str(uuid4())) # generate uuid to request body = await request.body() if body: logger.info(...) # log request with body else: logger.info(...) # log request without body response = await call_next(request) response.headers['X-Request-ID'] = request_ctx.get() logger.info("%s" % (response.status_code)) request_ctx.reset(request_id) return response So the line body = await request.body() freezes all requests that have body and I have 504 from all of them. How can I safely read the request body in this context? I just want to log request parameters. A: I would not create a Middleware that inherits from BaseHTTPMiddleware since it has some issues, FastAPI gives you a opportunity to create your own routers, in my experience this approach is way better. from fastapi import APIRouter, FastAPI, Request, Response, Body from fastapi.routing import APIRoute from typing import Callable, List from uuid import uuid4 class ContextIncludedRoute(APIRoute): def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: request_id = str(uuid4()) response: Response = await original_route_handler(request) if await request.body(): print(await request.body()) response.headers["Request-ID"] = request_id return response return custom_route_handler app = FastAPI() router = APIRouter(route_class=ContextIncludedRoute) @router.post("/context") async def non_default_router(bod: List[str] = Body(...)): return bod app.include_router(router) Works as expected. b'["string"]' INFO: 127.0.0.1:49784 - "POST /context HTTP/1.1" 200 OK A: In case you still wanted to use BaseHTTP, I recently ran into this problem and came up with a solution: Middleware Code from starlette.middleware.base import BaseHTTPMiddleware from starlette.requests import Request import json from .async_iterator_wrapper import async_iterator_wrapper as aiwrap class some_middleware(BaseHTTPMiddleware): async def dispatch(self, request:Request, call_next:RequestResponseEndpoint): # -------------------------- # DO WHATEVER YOU TO DO HERE #--------------------------- response = await call_next(request) # Consuming FastAPI response and grabbing body here resp_body = [section async for section in response.__dict__['body_iterator']] # Repairing FastAPI response response.__setattr__('body_iterator', aiwrap(resp_body) # Formatting response body for logging try: resp_body = json.loads(resp_body[0].decode()) except: resp_body = str(resp_body) async_iterator_wrapper Code from TypeError from Python 3 async for loop class async_iterator_wrapper: def __init__(self, obj): self._it = iter(obj) def __aiter__(self): return self async def __anext__(self): try: value = next(self._it) except StopIteration: raise StopAsyncIteration return value I really hope this can help someone! I found this very helpful for logging. Big thanks to @Eddified for the aiwrap class A: Turns out await request.json() can only be called once per the request cycle. So if you need to access the request body in multiple middlewares for filtering or authentication etc then there's a work around which is to create a custom middleware that copies the contents of request body in request.state. The middleware should be loaded as early as necessary. 
Each middleware next in chain or controller can then access the request body from request.state instead of calling await request.json() again. Here's a example: class CopyRequestMiddleware(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next): request_body = await request.json() request.state.body = request_body response = await call_next(request) return response class LogRequestMiddleware(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next): # Since it'll be loaded after CopyRequestMiddleware it can access request.state.body. request_body = request.state.body print(request_body) response = await call_next(request) return response The controller will access request body from request.state as well request_body = request.state.body A: You can do this safely with a generic ASGI middleware: from typing import Iterable, List, Protocol, Generator import pytest from starlette.responses import Response from starlette.testclient import TestClient from starlette.types import ASGIApp, Scope, Send, Receive, Message class Logger(Protocol): def info(self, message: str) -> None: ... class BodyLoggingMiddleware: def __init__( self, app: ASGIApp, logger: Logger, ) -> None: self.app = app self.logger = logger async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: if scope["type"] != "http": await self.app(scope, receive, send) return done = False chunks: "List[bytes]" = [] async def wrapped_receive() -> Message: nonlocal done message = await receive() if message["type"] == "http.disconnect": done = True return message body = message.get("body", b"") more_body = message.get("more_body", False) if not more_body: done = True chunks.append(body) return message try: await self.app(scope, wrapped_receive, send) finally: while not done: await wrapped_receive() self.logger.info(b"".join(chunks).decode()) # or somethin async def consume_body_app(scope: Scope, receive: Receive, send: Send) -> None: done = False while not done: msg = await receive() done = "more_body" not in msg await Response()(scope, receive, send) async def consume_partial_body_app(scope: Scope, receive: Receive, send: Send) -> None: await receive() await Response()(scope, receive, send) class TestException(Exception): pass async def consume_body_and_error_app(scope: Scope, receive: Receive, send: Send) -> None: done = False while not done: msg = await receive() done = "more_body" not in msg raise TestException async def consume_partial_body_and_error_app(scope: Scope, receive: Receive, send: Send) -> None: await receive() raise TestException class TestLogger: def __init__(self, recorder: List[str]) -> None: self.recorder = recorder def info(self, message: str) -> None: self.recorder.append(message) @pytest.mark.parametrize( "chunks, expected_logs", [ ([b"foo", b" ", b"bar", b" ", "baz"], ["foo bar baz"]), ] ) @pytest.mark.parametrize( "app", [consume_body_app, consume_partial_body_app] ) def test_body_logging_middleware_no_errors(chunks: Iterable[bytes], expected_logs: Iterable[str], app: ASGIApp) -> None: logs: List[str] = [] client = TestClient(BodyLoggingMiddleware(app, TestLogger(logs))) def chunk_gen() -> Generator[bytes, None, None]: yield from iter(chunks) resp = client.get("/", data=chunk_gen()) assert resp.status_code == 200 assert logs == expected_logs @pytest.mark.parametrize( "chunks, expected_logs", [ ([b"foo", b" ", b"bar", b" ", "baz"], ["foo bar baz"]), ] ) @pytest.mark.parametrize( "app", [consume_body_and_error_app, consume_partial_body_and_error_app] ) def 
test_body_logging_middleware_with_errors(chunks: Iterable[bytes], expected_logs: Iterable[str], app: ASGIApp) -> None: logs: List[str] = [] client = TestClient(BodyLoggingMiddleware(app, TestLogger(logs))) def chunk_gen() -> Generator[bytes, None, None]: yield from iter(chunks) with pytest.raises(TestException): client.get("/", data=chunk_gen()) assert logs == expected_logs if __name__ == "__main__": import os pytest.main(args=[os.path.abspath(__file__)]) A: Just because such solution not stated yet, but it's worked for me: from typing import Callable, Awaitable from starlette.middleware.base import BaseHTTPMiddleware from starlette.requests import Request from starlette.responses import StreamingResponse from starlette.concurrency import iterate_in_threadpool class LogStatsMiddleware(BaseHTTPMiddleware): async def dispatch( # type: ignore self, request: Request, call_next: Callable[[Request], Awaitable[StreamingResponse]], ) -> Response: response = await call_next(request) response_body = [section async for section in response.body_iterator] response.body_iterator = iterate_in_threadpool(iter(response_body)) logging.info(f"response_body={response_body[0].decode()}") return response def init_app(app): app.add_middleware(LogStatsMiddleware) iterate_in_threadpool actually making from iterator object async Iterator If you look on implementation of starlette.responses.StreamingResponse you'll see, that this function used exactly for this A: If you only want to read request parameters, best solution i found was to implement a "route_class" and add it as arg when creating the fastapi.APIRouter, this is because parsing the request within the middleware is considered problematic The intention behind the route handler from what i understand is to attach exceptions handling logic to specific routers, but since it's being invoked before every route call, you can use it to access the Request arg Fastapi documentation You could do something as follows: class MyRequestLoggingRoute(APIRoute): def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: body = await request.body() if body: logger.info(...) # log request with body else: logger.info(...) # log request without body try: return await original_route_handler(request) except RequestValidationError as exc: detail = {"errors": exc.errors(), "body": body.decode()} raise HTTPException(status_code=422, detail=detail) return custom_route_handler A: The issue is in Uvicorn. The FastAPI/Starlette::Request class does cache the body, but the Uvicorn function RequestResponseCycle::request() does not, so if you instantiate two or more Request classes and ask for the body(), only the instance that asks for the body first will have a valid body. I solved creating a mock function that returns a cached copy of the request(): class LogRequestsMiddleware: def __init__(self, app:ASGIApp) -> None: self.app = app async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: receive_cached_ = await receive() async def receive_cached(): return receive_cached_ request = Request(scope, receive = receive_cached) # do what you need here await self.app(scope, receive_cached, send) app.add_middleware(LogRequestsMiddleware)
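As a side note, the BaseHTTPMiddleware snippet earlier in this thread is missing a closing parenthesis on the __setattr__ line and never returns the response. A corrected, self-contained sketch of the same body-capturing idea:

import json
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

class async_iterator_wrapper:
    def __init__(self, obj):
        self._it = iter(obj)
    def __aiter__(self):
        return self
    async def __anext__(self):
        try:
            return next(self._it)
        except StopIteration:
            raise StopAsyncIteration

class LogResponseBodyMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        # Drain the streaming body, then restore it so the client still gets it.
        resp_body = [section async for section in response.body_iterator]
        response.body_iterator = async_iterator_wrapper(resp_body)
        try:
            body = json.loads(b"".join(resp_body).decode())
        except Exception:
            body = b"".join(resp_body).decode(errors="replace")
        print(body)  # or hand off to a logger
        return response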
Get starlette request body in the middleware context
I have such middleware class RequestContext(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next: RequestResponseEndpoint): request_id = request_ctx.set(str(uuid4())) # generate uuid to request body = await request.body() if body: logger.info(...) # log request with body else: logger.info(...) # log request without body response = await call_next(request) response.headers['X-Request-ID'] = request_ctx.get() logger.info("%s" % (response.status_code)) request_ctx.reset(request_id) return response So the line body = await request.body() freezes all requests that have body and I have 504 from all of them. How can I safely read the request body in this context? I just want to log request parameters.
[ "I would not create a Middleware that inherits from BaseHTTPMiddleware since it has some issues, FastAPI gives you a opportunity to create your own routers, in my experience this approach is way better.\nfrom fastapi import APIRouter, FastAPI, Request, Response, Body\nfrom fastapi.routing import APIRoute\n\nfrom typing import Callable, List\nfrom uuid import uuid4\n\n\nclass ContextIncludedRoute(APIRoute):\n def get_route_handler(self) -> Callable:\n original_route_handler = super().get_route_handler()\n\n async def custom_route_handler(request: Request) -> Response:\n request_id = str(uuid4())\n response: Response = await original_route_handler(request)\n\n if await request.body():\n print(await request.body())\n\n response.headers[\"Request-ID\"] = request_id\n return response\n\n return custom_route_handler\n\n\napp = FastAPI()\nrouter = APIRouter(route_class=ContextIncludedRoute)\n\n\n@router.post(\"/context\")\nasync def non_default_router(bod: List[str] = Body(...)):\n return bod\n\n\napp.include_router(router)\n\nWorks as expected.\nb'[\"string\"]'\nINFO: 127.0.0.1:49784 - \"POST /context HTTP/1.1\" 200 OK\n\n", "In case you still wanted to use BaseHTTP, I recently ran into this problem and came up with a solution:\nMiddleware Code\nfrom starlette.middleware.base import BaseHTTPMiddleware\nfrom starlette.requests import Request\nimport json\nfrom .async_iterator_wrapper import async_iterator_wrapper as aiwrap\n\nclass some_middleware(BaseHTTPMiddleware):\n async def dispatch(self, request:Request, call_next:RequestResponseEndpoint):\n # --------------------------\n # DO WHATEVER YOU TO DO HERE\n #---------------------------\n \n response = await call_next(request)\n\n # Consuming FastAPI response and grabbing body here\n resp_body = [section async for section in response.__dict__['body_iterator']]\n # Repairing FastAPI response\n response.__setattr__('body_iterator', aiwrap(resp_body)\n\n # Formatting response body for logging\n try:\n resp_body = json.loads(resp_body[0].decode())\n except:\n resp_body = str(resp_body)\n\n\nasync_iterator_wrapper Code from\nTypeError from Python 3 async for loop\nclass async_iterator_wrapper:\n def __init__(self, obj):\n self._it = iter(obj)\n def __aiter__(self):\n return self\n async def __anext__(self):\n try:\n value = next(self._it)\n except StopIteration:\n raise StopAsyncIteration\n return value\n\nI really hope this can help someone! I found this very helpful for logging.\nBig thanks to @Eddified for the aiwrap class\n", "Turns out await request.json() can only be called once per the request cycle. So if you need to access the request body in multiple middlewares for filtering or authentication etc then there's a work around which is to create a custom middleware that copies the contents of request body in request.state. The middleware should be loaded as early as necessary. Each middleware next in chain or controller can then access the request body from request.state instead of calling await request.json() again. 
Here's a example:\nclass CopyRequestMiddleware(BaseHTTPMiddleware):\n async def dispatch(self, request: Request, call_next):\n request_body = await request.json()\n request.state.body = request_body\n\n response = await call_next(request)\n return response\n\nclass LogRequestMiddleware(BaseHTTPMiddleware):\n async def dispatch(self, request: Request, call_next):\n # Since it'll be loaded after CopyRequestMiddleware it can access request.state.body.\n request_body = request.state.body\n print(request_body)\n \n response = await call_next(request)\n return response\n\nThe controller will access request body from request.state as well\nrequest_body = request.state.body\n\n", "You can do this safely with a generic ASGI middleware:\nfrom typing import Iterable, List, Protocol, Generator\n\nimport pytest\n\nfrom starlette.responses import Response\nfrom starlette.testclient import TestClient\nfrom starlette.types import ASGIApp, Scope, Send, Receive, Message\n\n\nclass Logger(Protocol):\n def info(self, message: str) -> None:\n ...\n\n\nclass BodyLoggingMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n logger: Logger,\n ) -> None:\n self.app = app\n self.logger = logger\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n \n done = False\n chunks: \"List[bytes]\" = []\n\n async def wrapped_receive() -> Message:\n nonlocal done\n message = await receive()\n if message[\"type\"] == \"http.disconnect\":\n done = True\n return message\n body = message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n if not more_body:\n done = True\n chunks.append(body)\n return message\n try:\n await self.app(scope, wrapped_receive, send)\n finally:\n while not done:\n await wrapped_receive()\n self.logger.info(b\"\".join(chunks).decode()) # or somethin\n\n\nasync def consume_body_app(scope: Scope, receive: Receive, send: Send) -> None:\n done = False\n while not done:\n msg = await receive()\n done = \"more_body\" not in msg\n await Response()(scope, receive, send)\n\n\nasync def consume_partial_body_app(scope: Scope, receive: Receive, send: Send) -> None:\n await receive()\n await Response()(scope, receive, send)\n\n\nclass TestException(Exception):\n pass\n\n\nasync def consume_body_and_error_app(scope: Scope, receive: Receive, send: Send) -> None:\n done = False\n while not done:\n msg = await receive()\n done = \"more_body\" not in msg\n raise TestException\n\n\nasync def consume_partial_body_and_error_app(scope: Scope, receive: Receive, send: Send) -> None:\n await receive()\n raise TestException\n\n\nclass TestLogger:\n def __init__(self, recorder: List[str]) -> None:\n self.recorder = recorder\n \n def info(self, message: str) -> None:\n self.recorder.append(message)\n\n\n@pytest.mark.parametrize(\n \"chunks, expected_logs\", [\n ([b\"foo\", b\" \", b\"bar\", b\" \", \"baz\"], [\"foo bar baz\"]),\n ]\n)\n@pytest.mark.parametrize(\n \"app\",\n [consume_body_app, consume_partial_body_app]\n)\ndef test_body_logging_middleware_no_errors(chunks: Iterable[bytes], expected_logs: Iterable[str], app: ASGIApp) -> None:\n logs: List[str] = []\n client = TestClient(BodyLoggingMiddleware(app, TestLogger(logs)))\n\n def chunk_gen() -> Generator[bytes, None, None]:\n yield from iter(chunks)\n\n resp = client.get(\"/\", data=chunk_gen())\n assert resp.status_code == 200\n assert logs == expected_logs\n\n\n@pytest.mark.parametrize(\n \"chunks, expected_logs\", [\n ([b\"foo\", b\" \", 
b\"bar\", b\" \", \"baz\"], [\"foo bar baz\"]),\n ]\n)\n@pytest.mark.parametrize(\n \"app\",\n [consume_body_and_error_app, consume_partial_body_and_error_app]\n)\ndef test_body_logging_middleware_with_errors(chunks: Iterable[bytes], expected_logs: Iterable[str], app: ASGIApp) -> None:\n logs: List[str] = []\n client = TestClient(BodyLoggingMiddleware(app, TestLogger(logs)))\n\n def chunk_gen() -> Generator[bytes, None, None]:\n yield from iter(chunks)\n\n with pytest.raises(TestException):\n client.get(\"/\", data=chunk_gen())\n assert logs == expected_logs\n\n\nif __name__ == \"__main__\":\n import os\n pytest.main(args=[os.path.abspath(__file__)])\n\n", "Just because such solution not stated yet, but it's worked for me:\nfrom typing import Callable, Awaitable\n\nfrom starlette.middleware.base import BaseHTTPMiddleware\nfrom starlette.requests import Request\nfrom starlette.responses import StreamingResponse\nfrom starlette.concurrency import iterate_in_threadpool\n\nclass LogStatsMiddleware(BaseHTTPMiddleware):\n async def dispatch( # type: ignore\n self, request: Request, call_next: Callable[[Request], Awaitable[StreamingResponse]],\n ) -> Response:\n response = await call_next(request)\n response_body = [section async for section in response.body_iterator]\n response.body_iterator = iterate_in_threadpool(iter(response_body))\n logging.info(f\"response_body={response_body[0].decode()}\")\n return response\n\ndef init_app(app):\n app.add_middleware(LogStatsMiddleware)\n\niterate_in_threadpool actually making from iterator object async Iterator\nIf you look on implementation of starlette.responses.StreamingResponse you'll see, that this function used exactly for this\n", "If you only want to read request parameters, best solution i found was to implement a \"route_class\" and add it as arg when creating the fastapi.APIRouter, this is because parsing the request within the middleware is considered problematic\nThe intention behind the route handler from what i understand is to attach exceptions handling logic to specific routers, but since it's being invoked before every route call, you can use it to access the Request arg\nFastapi documentation\nYou could do something as follows:\nclass MyRequestLoggingRoute(APIRoute):\n def get_route_handler(self) -> Callable:\n original_route_handler = super().get_route_handler()\n\n async def custom_route_handler(request: Request) -> Response:\n body = await request.body()\n if body:\n logger.info(...) # log request with body\n else:\n logger.info(...) # log request without body\n try:\n\n return await original_route_handler(request)\n except RequestValidationError as exc:\n detail = {\"errors\": exc.errors(), \"body\": body.decode()}\n raise HTTPException(status_code=422, detail=detail)\n\n return custom_route_handler\n\n", "The issue is in Uvicorn. 
The FastAPI/Starlette::Request class does cache the body, but the Uvicorn function RequestResponseCycle::request() does not, so if you instantiate two or more Request classes and ask for the body(), only the instance that asks for the body first will have a valid body.\nI solved it by creating a mock function that returns a cached copy of the request():\nfrom starlette.requests import Request\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nclass LogRequestsMiddleware:\n    def __init__(self, app: ASGIApp) -> None:\n        self.app = app\n\n    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n        receive_cached_ = await receive()\n        async def receive_cached():\n            return receive_cached_\n        request = Request(scope, receive=receive_cached)\n\n        # do what you need here\n\n        await self.app(scope, receive_cached, send)\n\napp.add_middleware(LogRequestsMiddleware)\n\n" ]
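A further note on reading the request body inside BaseHTTPMiddleware itself: a commonly cited workaround is to consume the body once and then re-inject it, so downstream handlers can read it again. This is only a sketch and it relies on Starlette's private _receive attribute, so treat it as an assumption that may break across Starlette versions:

import logging

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

logger = logging.getLogger(__name__)

class BodyLoggingContext(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        body = await request.body()  # consume the body once, up front
        # Re-inject the cached body so the route handler can read it again.
        # NOTE: request._receive is private Starlette API (assumption: it may
        # change between versions).
        async def receive():
            return {"type": "http.request", "body": body, "more_body": False}
        request._receive = receive
        logger.info("request body: %r", body or b"<empty>")
        return await call_next(request)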
[ 5, 3, 2, 1, 0, 0, 0 ]
[]
[]
[ "fastapi", "http", "middleware", "python", "starlette" ]
stackoverflow_0064115628_fastapi_http_middleware_python_starlette.txt
Q: I have a problem with the surface of my pygame script, I debug but can't find the answer So, to start: I was developing a game in pygame, and it worked very well until now, but when I added the animations for my characters, the script raised an error, a surface problem: "TypeError: Source objects must be a surface". I searched for several hours to see if someone already had my problem, but without result... I attach my code below. My main.py : import pygame from game import Game if __name__ == '__main__': pygame.init() game = Game() game.run() My game.py : import pygame import pytmx import pyscroll from playr import Player class Game: def __init__(self): self.screen = pygame.display.set_mode((800, 800)) pygame.display.set_caption("Labyrinthe DVT") tmx_data = pytmx.util_pygame.load_pygame('carte.tmx') map_data = pyscroll.data.TiledMapData(tmx_data) map_layer = pyscroll.orthographic.BufferedRenderer(map_data, self.screen.get_size()) map_layer.zoom = 2 player_position = tmx_data.get_object_by_name("player") self.player = Player(player_position.x, player_position.y) self.group = pyscroll.PyscrollGroup(map_layer = map_layer, default_layer= 5) self.group.add(self.player) def handle_input(self): pressed = pygame.key.get_pressed() if pressed[pygame.K_UP]: self.player.move_up() self.player.change_animation('up') elif pressed[pygame.K_DOWN]: self.player.move_down() self.player.change_animation('down') elif pressed[pygame.K_RIGHT]: self.player.move_right() self.player.change_animation('right') elif pressed[pygame.K_LEFT]: self.player.move_left() self.player.change_animation('left') def run(self): clock = pygame.time.Clock() running = True while running: self.handle_input() self.group.update() #center the camera self.group.center(self.player.rect) #draw the layers self.group.draw(self.screen) #error pygame.display.flip() for event in pygame.event.get(): if event.type == pygame.QUIT: running = False clock.tick(60) pygame.quit() My player.py : import pygame class Player(pygame.sprite.Sprite): def __init__(self, x, y): super().__init__() self.sprite_sheet = pygame.image.load('player sheet.png') self.image = self.get_image(0 ,0) self.image.set_colorkey([0, 0, 0]) self.rect = self.image.get_rect() self.position = [x, y] self.vitesse = 2 #store the images for the animation self.image = { 'down' : self.get_image(0, 0), 'left': self.get_image(0, 32), 'right': self.get_image(0, 64), 'up': self.get_image(0, 96) } def change_animation(self, name): self.image = self.images[name] self.image.set_colorkey((0, 0, 0)) def move_right(self): self.position[0] += self.vitesse def move_left(self): self.position[0] -= self.vitesse def move_down(self): self.position[1] += self.vitesse def move_up(self): self.position[1] -= self.vitesse def update(self): self.rect.topleft = self.position def get_image(self, x, y): image = pygame.Surface([32, 32]) image.blit(self.sprite_sheet, (0, 0), (x, y, 32, 32)) return image Here is the entirety of my code; if you need more information to help me, do not hesitate to ask. PS: I recall the error: "TypeError: Source objects must be a surface" A: The problem is that self.image is used twice. First it is a pygame.Surface: self.image = self.get_image(0 ,0) Then it is a dictionary: self.image = { 'down' : self.get_image(0, 0), 'left': self.get_image(0, 32), 'right': self.get_image(0, 64), 'up': self.get_image(0, 96) } However, since Player is a pygame.sprite.Sprite object, the image attribute must be a pygame.Surface object, not a dictionary. 
This is because of pygame.sprite.Group.draw(): Draws the contained Sprites to the Surface argument. This uses the Sprite.image attribute for the source surface, and Sprite.rect. [...] Rename the dictionary to solve the issue. e.g.: self.direction_images = { 'down' : self.get_image(0, 0), 'left': self.get_image(0, 32), 'right': self.get_image(0, 64), 'up': self.get_image(0, 96) }
I have a problem with the surface of my pygame script, I debug but can't find the answer
So, to start: I was developing a game in pygame, and it worked very well until now, but when I added the animations for my characters, the script raised an error, a surface problem: "TypeError: Source objects must be a surface". I searched for several hours to see if someone already had my problem, but without result... I attach my code below. My main.py : import pygame from game import Game if __name__ == '__main__': pygame.init() game = Game() game.run() My game.py : import pygame import pytmx import pyscroll from playr import Player class Game: def __init__(self): self.screen = pygame.display.set_mode((800, 800)) pygame.display.set_caption("Labyrinthe DVT") tmx_data = pytmx.util_pygame.load_pygame('carte.tmx') map_data = pyscroll.data.TiledMapData(tmx_data) map_layer = pyscroll.orthographic.BufferedRenderer(map_data, self.screen.get_size()) map_layer.zoom = 2 player_position = tmx_data.get_object_by_name("player") self.player = Player(player_position.x, player_position.y) self.group = pyscroll.PyscrollGroup(map_layer = map_layer, default_layer= 5) self.group.add(self.player) def handle_input(self): pressed = pygame.key.get_pressed() if pressed[pygame.K_UP]: self.player.move_up() self.player.change_animation('up') elif pressed[pygame.K_DOWN]: self.player.move_down() self.player.change_animation('down') elif pressed[pygame.K_RIGHT]: self.player.move_right() self.player.change_animation('right') elif pressed[pygame.K_LEFT]: self.player.move_left() self.player.change_animation('left') def run(self): clock = pygame.time.Clock() running = True while running: self.handle_input() self.group.update() #center the camera self.group.center(self.player.rect) #draw the layers self.group.draw(self.screen) #error pygame.display.flip() for event in pygame.event.get(): if event.type == pygame.QUIT: running = False clock.tick(60) pygame.quit() My player.py : import pygame class Player(pygame.sprite.Sprite): def __init__(self, x, y): super().__init__() self.sprite_sheet = pygame.image.load('player sheet.png') self.image = self.get_image(0 ,0) self.image.set_colorkey([0, 0, 0]) self.rect = self.image.get_rect() self.position = [x, y] self.vitesse = 2 #store the images for the animation self.image = { 'down' : self.get_image(0, 0), 'left': self.get_image(0, 32), 'right': self.get_image(0, 64), 'up': self.get_image(0, 96) } def change_animation(self, name): self.image = self.images[name] self.image.set_colorkey((0, 0, 0)) def move_right(self): self.position[0] += self.vitesse def move_left(self): self.position[0] -= self.vitesse def move_down(self): self.position[1] += self.vitesse def move_up(self): self.position[1] -= self.vitesse def update(self): self.rect.topleft = self.position def get_image(self, x, y): image = pygame.Surface([32, 32]) image.blit(self.sprite_sheet, (0, 0), (x, y, 32, 32)) return image Here is the entirety of my code; if you need more information to help me, do not hesitate to ask. PS: I recall the error: "TypeError: Source objects must be a surface"
[ "The problem is that self.image is used twice. First it is a pygame.Surface:\n\nself.image = self.get_image(0 ,0)\n\n\nThen it is a dictionary:\n\nself.image = {\n 'down' : self.get_image(0, 0),\n 'left': self.get_image(0, 32),\n 'right': self.get_image(0, 64),\n 'up': self.get_image(0, 96)\n}\n\n\nHowever, since Player is a pygame.sprite.Sprite object, the iamge attribute must be a pygame.Surface object, but not an dictionary. This is because of pygame.sprite.Group.draw():\n\nDraws the contained Sprites to the Surface argument. This uses the Sprite.image attribute for the source surface, and Sprite.rect. [...]\n\nRename the dictionary to solve the issue. e.g.:\nself.direction_images = {\n 'down' : self.get_image(0, 0),\n 'left': self.get_image(0, 32),\n 'right': self.get_image(0, 64),\n 'up': self.get_image(0, 96)\n}\n\n" ]
[ 1 ]
[]
[]
[ "pygame", "pygame_surface", "python" ]
stackoverflow_0074480212_pygame_pygame_surface_python.txt
Q: Mocking a HTTP server in Python I'm writing a REST client and I need to mock an HTTP server in my tests. What would be the most appropriate library to do that? It would be great if I could create expected HTTP requests and compare them to actual. A: Try HTTPretty, an HTTP client mock library for Python that helps you focus on the client side. A: You can also create a small mock server on your own. I am using a small web server called Flask. import flask app = flask.Flask(__name__) def callback(): return flask.jsonify(list()) app.add_url_rule("/users", view_func=callback) app.run() This will spawn a server under http://localhost:5000/users executing the callback function. I created a gist to provide a working example with shutdown mechanism etc. https://gist.github.com/eruvanos/f6f62edb368a20aaa880e12976620db8 A: Mockintosh seems like another option. A: You can do this without using any external library by just running a temporary HTTP server. For example, mocking https://api.ipify.org?format=json: """Unit tests for ipify""" import http.server import threading import unittest import urllib.request class MockIpifyHTTPRequestHandler(http.server.BaseHTTPRequestHandler): """HTTPServer mock request handler""" def do_GET(self): # pylint: disable=invalid-name """Handle GET requests""" self.send_response(200) self.send_header("Content-Type", "application/json") self.end_headers() self.wfile.write(b'{"ip":"1.2.3.45"}') def log_request(self, code=None, size=None): """Don't log anything""" class UnitTests(unittest.TestCase): """Unit tests for urlopen""" def test_urlopen(self): """Test urlopen ipify""" server = http.server.ThreadingHTTPServer( ("127.0.0.127", 9999), MockIpifyHTTPRequestHandler ) with server: server_thread = threading.Thread(target=server.serve_forever) server_thread.daemon = True server_thread.start() request = urllib.request.Request("http://127.0.0.127:9999/") with urllib.request.urlopen(request) as response: result = response.read() server.shutdown() self.assertEqual(result, b'{"ip":"1.2.3.45"}') Alternative solution I found is in https://stackoverflow.com/a/34929900/15862
Mocking a HTTP server in Python
I'm writing a REST client and I need to mock an HTTP server in my tests. What would be the most appropriate library to do that? It would be great if I could create expected HTTP requests and compare them to actual.
[ "Try HTTPretty, a HTTP client mock library for Python helps you focus on the client side.\n", "You can also create a small mock server on your own.\nI am using a small web server called Flask.\nimport flask\napp = flask.Flask(__name__)\n\ndef callback():\n return flask.jsonify(list())\n\napp.add_url_rule(\"users\", view_func=callback)\napp.run()\n\nThis will spawn a server under http://localhost:5000/users executing the callback function.\nI created a gist to provide a working example with shutdown mechanism etc.\nhttps://gist.github.com/eruvanos/f6f62edb368a20aaa880e12976620db8\n", "Mockintosh seems like another option.\n", "You can do this without using any external library by just running a temporary HTTP server.\nFor example mocking a https://api.ipify.org?format=json\n\"\"\"Unit tests for ipify\"\"\"\n\nimport http.server\nimport threading\nimport unittest\nimport urllib.request\n\n\nclass MockIpifyHTTPRequestHandler(http.server.BaseHTTPRequestHandler):\n \"\"\"HTTPServer mock request handler\"\"\"\n\n def do_GET(self): # pylint: disable=invalid-name\n \"\"\"Handle GET requests\"\"\"\n self.send_response(200)\n self.send_header(\"Content-Type\", \"application/json\")\n self.end_headers()\n self.wfile.write(b'{\"ip\":\"1.2.3.45\"}')\n\n def log_request(self, code=None, size=None):\n \"\"\"Don't log anything\"\"\"\n\n\nclass UnitTests(unittest.TestCase):\n \"\"\"Unit tests for urlopen\"\"\"\n\n def test_urlopen(self):\n \"\"\"Test urlopen ipify\"\"\"\n server = http.server.ThreadingHTTPServer(\n (\"127.0.0.127\", 9999), MockIpifyHTTPRequestHandler\n )\n with server:\n server_thread = threading.Thread(target=server.serve_forever)\n server_thread.daemon = True\n server_thread.start()\n\n request = request = urllib.request.Request(\"http://127.0.0.127:9999/\")\n with urllib.request.urlopen(request) as response:\n result = response.read()\n server.shutdown()\n\n self.assertEqual(result, b'{\"ip\":\"1.2.3.45\"}')\n\nAlternative solution I found is in https://stackoverflow.com/a/34929900/15862\n" ]
[ 10, 8, 0, 0 ]
[]
[]
[ "http", "mocking", "python", "rest", "unit_testing" ]
stackoverflow_0021877387_http_mocking_python_rest_unit_testing.txt
Q: sqlalchemy: rename a column on *query* level I need to rename a column in a query, but I can't do it on column level, eg session.query(MyModel.col_name.label('new_name')) Is there any way to rename a column on the resulting query object? Eg, something like session.query(...).blah().blah().rename_column('old_name', 'new_name') A: It doesn't look like there's any built in solution for that – but here's a workaround I've implemented which may help you: To rename before the query has been executed: # Start off with your regular query – but as a subquery query = session.query(MyModel.col_name.label('old_name')).subquery() # Now, perform a second query with new labels query_2 = session.query(query.c.old_name.label('new_name')) # Or, if there's only one column: query_3 = session.query(query.label('new_name')) A: I've figured out a way to do this. In my case, I was having issues with except_, which prefixes columns with the table name. Here's how I did that: def _except(included_query, excluded_query, Model, prefix): """An SQLALchemy except_ that removes the prefixes on the columns, so they can be referenced in a subquery by their un-prefixed names.""" query = included_query.except_(excluded_query) subquery = query.subquery() # Get a list of columns from the subquery, relabeled with the simple column name. columns = [] for column_name in _attribute_names(Model): column = getattr(subquery.c, prefix + column_name) columns.append(column.label(column_name)) # Wrap the query to select the simple column names. This is necessary because # except_ prefixes column names with a string derived from the table name. return Model.query.from_statement(Model.query.with_entities(*columns).statement)
sqlalchemy: rename a column on *query* level
I need to rename a column in a query, but I can't do it on column level, eg session.query(MyModel.col_name.label('new_name')) Is there any way to rename a column on the resulting query object? Eg, something like session.query(...).blah().blah().rename_column('old_name', 'new_name')
[ "It doesn't look like there's any built in solution for that – but here's a workaround I've implemented which may help you:\nTo rename before the query has been executed:\n# Start off with your regular query – but as a subquery\nquery = session.query(MyModel.col_name.label('old_name')).subquery()\n\n# Now, perform a second query with new labels\nquery_2 = session.query(query.c.old_name.label('new_name'))\n\n# Or, if there's only one column:\nquery_3 = session.query(query.label('new_name'))\n\n", "I've figured out a way to do this. In my case, I was having issues with except_, which prefixes columns with the table name. Here's how I did that:\ndef _except(included_query, excluded_query, Model, prefix):\n \"\"\"An SQLALchemy except_ that removes the prefixes on the columns, so they can be\n referenced in a subquery by their un-prefixed names.\"\"\"\n query = included_query.except_(excluded_query)\n subquery = query.subquery()\n # Get a list of columns from the subquery, relabeled with the simple column name.\n columns = []\n for column_name in _attribute_names(Model):\n column = getattr(subquery.c, prefix + column_name)\n columns.append(column.label(column_name))\n # Wrap the query to select the simple column names. This is necessary because\n # except_ prefixes column names with a string derived from the table name.\n return Model.query.from_statement(Model.query.with_entities(*columns).statement)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0052718054_python_sqlalchemy.txt
Q: Tensorflow tensor loses dimension for some reason I have a custom loss function that is reporting an error before any real processing happens. I have a y_train of dimension (2717, 5, 5, 6) and a batch size of 25 with constants S1=S2=5. All I do is tf.reshape to make sure I get the desired dimension of (25,5,5,6), then I want to extract one axis, but it somehow doesn't work properly. @tf.function def yolo_loss(y_true,y_pred): #mse = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.SUM) lambda_noobj = 0.5 lambda_coord = 5 y_pred = tf.reshape(y_pred,[batch_size,S1,S2,C+B*5]) y_true = tf.reshape(y_true,[batch_size,S1,S2,6]) exists_box = tf.reshape(y_true[...,0],[batch_size,S1,S2,1]) ........ While the first reshape of y_true works perfectly fine, I get an error for the exists_box line, to be precise: exists_box = tf.reshape(y_true[...,0],[batch_size,S1,S2,1]) Node: 'Reshape_2' Input to reshape is a tensor with 425 values, but the requested shape has 625 [[{{node Reshape_2}}]] [Op:__inference_train_function_44379] The ellipsis in [...,0] should return me an object of size 25 * 5 * 5 = 625, so I am confused why it says the object is of dimension 425. I also made sure that all arrays in y_train are of the same shape. A: It seems that the error is caused by the last batch of your y_train dataset, which has shape (17, 5, 5, 6) (17 * 5 * 5 * 1 = 425). This occurs because when tensorflow batches your data, the last batch contains all the remaining elements, whose count does not have to equal your specified batch_size (in your case 25) - note that 2717 % 25 = 17. There are two things you can do: drop the remaining elements from the dataset; use this option if you are okay with losing a few examples from your training data; if you are using a tf.data.Dataset object, this can be done by providing drop_remainder=True in the batch method: dataset = dataset.batch(25, drop_remainder=True) change your loss function so that it can process input with a first dimension different from 25; from your description it's not clear what your loss function does, so you'll have to figure this out by yourself.
Tensorflow tensor loses dimension for some reason
I have a custom loss function that is reporting an error before any real processing happens. I have a y_train of dimension (2717, 5, 5, 6) and a batch size of 25 with constants S1=S2=5. All I do is tf.reshape to make sure I get the desired dimension of (25,5,5,6), then I want to extract one axis, but it somehow doesn't work properly. @tf.function def yolo_loss(y_true,y_pred): #mse = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.SUM) lambda_noobj = 0.5 lambda_coord = 5 y_pred = tf.reshape(y_pred,[batch_size,S1,S2,C+B*5]) y_true = tf.reshape(y_true,[batch_size,S1,S2,6]) exists_box = tf.reshape(y_true[...,0],[batch_size,S1,S2,1]) ........ While the first reshape of y_true works perfectly fine, I get an error for the exists_box line, to be precise: exists_box = tf.reshape(y_true[...,0],[batch_size,S1,S2,1]) Node: 'Reshape_2' Input to reshape is a tensor with 425 values, but the requested shape has 625 [[{{node Reshape_2}}]] [Op:__inference_train_function_44379] The ellipsis in [...,0] should return me an object of size 25 * 5 * 5 = 625, so I am confused why it says the object is of dimension 425. I also made sure that all arrays in y_train are of the same shape.
[ "It seems that the error is caused by the last batch of your y_train dataset, which has shape (17, 5, 5, 6) (17 * 5 * 5 * 1 = 425). This occurs because when tensorflow batches your data, the last batch contains all the remaining elements, number of whose does not have to be your specified batch_size (in your case 25) - note that 2717 % 25 = 17.\nThere are two things you can do:\n\ndrop the remainding elements from the dataset; use this option if you are okay with losing a few examples from your traning data; if you are using tf.data.Dataset object, this can be done by providing drop_remainder=True in the batch method:\n\ndataset = dataset.batch(25, drop_remainder=True)\n\n\nchange your loss function so that it can process input with different first dimension than 25; from your description it's not clear what your loss function does, so you'll have to figure this out by yourself.\n\n" ]
[ 1 ]
[]
[]
[ "keras", "loss_function", "python", "tensorflow" ]
stackoverflow_0074479259_keras_loss_function_python_tensorflow.txt
Q: How do you prevent SQLAlchemy from prefixing column names when using except_ I have a query like the following: query = included_query.except_(excluded_query) Both included_query and excluded_query query over a particular model called TestModel. However, when I create a subquery with that query (ie subquery = query.subquery()), instead of having the direct columns of TestModel (eg subquery.c.id) it instead prefixes all the columns (eg subquery.c.test_models_id). I have tried using with_entities to return the columns under the right names; however, if I do that, it no longer returns a list of TestModel objects and instead returns a tuple of column values. How can I return TestModel objects while retaining the correct column names (without prefixing)? I see there's a related question about this here with no answer: SQLAlchemy : column name prefixed on the subquery of union_all of 3 tables A: I've figured out a way to do this: def _except(included_query, excluded_query, Model, prefix): """An SQLAlchemy except_ that removes the prefixes on the columns, so they can be referenced in a subquery by their un-prefixed names.""" query = included_query.except_(excluded_query) subquery = query.subquery() # Get a list of columns from the subquery, relabeled with the simple column name. columns = [] for column_name in _attribute_names(Model): column = getattr(subquery.c, prefix + column_name) columns.append(column.label(column_name)) # Wrap the query to select the simple column names. This is necessary because # except_ prefixes column names with a string derived from the table name. return Model.query.from_statement(Model.query.with_entities(*columns).statement)
How do you prevent SQLAlchemy from prefixing column names when using except_
I have a query like the following: query = included_query.except_(excluded_query) Both included_query and excluded_query query over a particular model called TestModel. However, when I create a subquery with that query (ie subquery = query.subquery()), instead of having the direct columns of TestModel (eg subquery.c.id) it instead prefixes all the columns (eg subquery.c.test_models_id). I have tried using with_entities to return the columns under the right names; however, if I do that, it no longer returns a list of TestModel objects and instead returns a tuple of column values. How can I return TestModel objects while retaining the correct column names (without prefixing)? I see there's a related question about this here with no answer: SQLAlchemy : column name prefixed on the subquery of union_all of 3 tables
[ "I've figured out a way to do this:\ndef _except(included_query, excluded_query, Model, prefix):\n \"\"\"An SQLALchemy except_ that removes the prefixes on the columns, so they can be\n referenced in a subquery by their un-prefixed names.\"\"\"\n query = included_query.except_(excluded_query)\n subquery = query.subquery()\n # Get a list of columns from the subquery, relabeled with the simple column name.\n columns = []\n for column_name in _attribute_names(Model):\n column = getattr(subquery.c, prefix + column_name)\n columns.append(column.label(column_name))\n # Wrap the query to select the simple column names. This is necessary because\n # except_ prefixes column names with a string derived from the table name.\n return Model.query.from_statement(Model.query.with_entities(*columns).statement)\n\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0073588253_python_sqlalchemy.txt
Q: Tox InterpreterNotFound Gitlab-CI Pipeline I need some help with testing my python package using tox in a gitlab-ci pipeline: I want to test my package on multiple versions. For this, I can write the following in my tox.ini: [tox] envlist = py{310, 311} [testenv] deps = -rrequirements.txt commands = python -m pytest tests -s Running the command tox works locally, as I have multiple python versions installed via conda (I believe this is the reason). Until now, I have always also tested my package in my gitlab pipeline: (.gitlab-ci.yml) image: python:3.11 unit-test: stage: test script: - pip install tox - tox -r This causes the pipeline to fail with the following message: ERROR: py310: InterpreterNotFound: python3.10 py311: commands succeeded Is there a gitlab ci container image already out there, that includes multiple python versions? A: This is what I came up with for some of my projects: '.review': before_script: - 'python -m pip install tox' script: - 'export TOXENV="${CI_JOB_NAME##review }"' - 'tox' 'review py38': extends: '.review' image: 'python:3.8' 'review py39': extends: '.review' image: 'python:3.9' I have not really looked into this in a while, so there might be better solutions nowadays. Anyway, the advantage of this solution is that it avoids repeating the script part for each Python version. The "trick" is to set the TOXENV environment variable to something like py38 or py39, which is done by extracting this value from the name of the job. A: A temporary, simple, and easily maintainable solution is using the following .gitlab-ci.yml configuration: image: python:latest unit-test-3.11: image: python:3.11 stage: test script: - pip install tox - tox -r -e py311 unit-test-3.10: image: python:3.10 stage: test script: - pip install tox - tox -r -e py310 Note: I believe an even better solution exists that doesn't require multiple images/jobs (as https://github.com/AntoineD/docstring-inheritance/blob/main/.github/workflows/tests.yml doesn't, for example). However, until someone posts a better solution here, I will mark this one as correct.
Tox InterpreterNotFound Gitlab-CI Pipeline
I need some help with testing my python package using tox in a gitlab-ci pipeline: I want to test my package on multiple versions. For this, I can write the following in my tox.ini: [tox] envlist = py{310, 311} [testenv] deps = -rrequirements.txt commands = python -m pytest tests -s Running the command tox works locally, as I have multiple python versions installed via conda (I believe this is the reason). Until now, I have always also tested my package in my gitlab pipeline: (.gitlab-ci.yml) image: python:3.11 unit-test: stage: test script: - pip install tox - tox -r This causes the pipeline to fail with the following message: ERROR: py310: InterpreterNotFound: python3.10 py311: commands succeeded Is there a gitlab ci container image already out there, that includes multiple python versions?
[ "This is what I came up with for some of my projects:\n'.review':\n before_script:\n - 'python -m pip install tox'\n script:\n - 'export TOXENV=\"${CI_JOB_NAME##review}\"'\n - 'tox'\n\n'review py38':\n extends: '.review'\n image: 'python:3.8'\n\n'review py39':\n extends: '.review'\n image: 'python:3.9'\n\nI have not really looked into this in a while, so there might be better solutions nowadays. Anyway, the advantage of this solution is that it avoids repeating the script part for each Python version. The \"trick\" is to set the TOXENV environment variable to something like py38 or py39, which is done be extracting this value from the name of the job.\n", "A temporary, simple, and easily maintainable solution is using the following .gitlab-ci.yml configuration:\nimage: python:latest\n\nunit-test-3.11:\n image: python:3.11\n stage: test\n script:\n - pip install tox\n - tox -r -e py311\n\nunit-test-3.10:\n image: python:3.10\n stage: test\n script:\n - pip install tox\n - tox -r -e py310\n\n\nNote:\nI believe an even better solution exists, that doesn't require multiple images/jobs (as https://github.com/AntoineD/docstring-inheritance/blob/main/.github/workflows/tests.yml doesn't for example).\nHowever, until someone posts a better solution here, I will mark this one as correct.\n" ]
[ 1, 0 ]
[]
[]
[ "gitlab_ci", "python", "tox" ]
stackoverflow_0074474552_gitlab_ci_python_tox.txt
Q: How do I capture the properties I want from a string? I hope you are well. I have the following string: "{\"code\":0,\"description\":\"Done\",\"response\":{\"id\":\"8-717-2346\",\"idType\":\"CIP\",\"suscriptionId\":\"92118213\"},....\"childProducts\":[]}}"... From this I'm trying to capture the attributes id, idType and subscriptionId and map them as a dataframe, but the entire body of the .csv ends up in a single row, so it is almost impossible for me to work with it without an index. Desired output: id, idType, suscriptionID 0. '7-84-1811', 'CIP', 21312421412 1. '1-232-42', 'IO' , 21421e324 My code: import pandas as pd import json path = '/example.csv' df = pd.read_csv(path) normalize_df = json.load(df) print(df) A: Considering your string is in JSON format, you can do this: drop columns, transpose, and get the headers right. toEscape = "{\"code\":0,\"description\":\"Done\",\"response\":{\"id\":\"8-717-2346\",\"idType\":\"CIP\",\"suscriptionId\":\"92118213\"}}" json_string = toEscape.encode('utf-8').decode('unicode_escape') df = pd.read_json(json_string) df = df.drop(["code","description"], axis=1) df = df.transpose().reset_index().drop("index", axis=1) df.to_csv("user_details.csv") the output looks like this: id idType suscriptionId 0 8-717-2346 CIP 92118213 Thank you for the question.
How do I capture the properties I want from a string?
I hope you are well. I have the following string: "{\"code\":0,\"description\":\"Done\",\"response\":{\"id\":\"8-717-2346\",\"idType\":\"CIP\",\"suscriptionId\":\"92118213\"},....\"childProducts\":[]}}"... From this I'm trying to capture the attributes id, idType and subscriptionId and map them as a dataframe, but the entire body of the .csv ends up in a single row, so it is almost impossible for me to work with it without an index. Desired output: id, idType, suscriptionID 0. '7-84-1811', 'CIP', 21312421412 1. '1-232-42', 'IO' , 21421e324 My code: import pandas as pd import json path = '/example.csv' df = pd.read_csv(path) normalize_df = json.load(df) print(df)
[ "Considering your string is in JSON format, you can do this.\ndrop columns, transpose, and get headers right.\ntoEscape = \"{\\\"code\\\":0,\\\"description\\\":\\\"Done\\\",\\\"response\\\":{\\\"id\\\":\\\"8-717-2346\\\",\\\"idType\\\":\\\"CIP\\\",\\\"suscriptionId\\\":\\\"92118213\\\"}}\"\n\njson_string = toEscape.encode('utf-8').decode('unicode_escape')\n\ndf = pd.read_json(json_string)\n\ndf = df.drop([\"code\",\"description\"], axis=1)\n\ndf = df.transpose().reset_index().drop(\"index\", axis=1)\n\ndf.to_csv(\"user_details.csv\")\n\nthe output looks like this:\nid idType suscriptionId\n0 8-717-2346 CIP 92118213\n\nThank you for the question.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074479810_dataframe_pandas_python.txt
Q: Error with cmap from a matplotlib defined list of colors in Folium - choropleth map I'm triying to pass a customize list of colors to my choropleth map with geopandas.explore, but i get the error: UnboundLocalError: local variable 'binning' referenced before assignment if I specified a given list of colors ex: cmap= 'Blues', it displays with no problem I copy the code below import geopandas as gpd import matplotlib.colors import folium color_list= matplotlib.colors.LinearSegmentedColormap.from_list('custom', [ "#e1f2f2", "#A8DCDC", "#115F5F"]) ventas_map= data.explore(column='ventas', scheme='Quantiles', cmap= color_list, tiles= 'OpenStreetMap', k=5, legend_kwds={'caption': 'ventas[$]','colorbar': True ,'scale': False}, name= 'ventas') Anyone know how to solve it? A: As noted this is a bug that I have patched and created a PR 2590 This will need to go through merge and release process. In interim you can download and use version of function from my patch which has been committed to GitHub import geopandas as gpd import matplotlib.colors import folium import numpy as np import requests from pathlib import Path # download and import patched module file_url = "https://github.com/rraymondgh/geopandas/raw/explore_cmap_scheme/geopandas/explore.py" with open(Path.cwd().joinpath(file_url.split("/")[-1]), "w") as f: f.write(requests.get(file_url).text) import explore gpd.explore._explore = explore._explore r = np.random.RandomState(42) data = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) data["ventas"] = r.uniform(1, 30, len(data)) color_list = matplotlib.colors.LinearSegmentedColormap.from_list( "custom", ["#e1f2f2", "#A8DCDC", "#115F5F"], 7 ) # use patched module, note change from data.explore() to explore._explore() with first # argument being geodataframe ventas_map = explore._explore( data, column="ventas", scheme="Quantiles", cmap=color_list, tiles="OpenStreetMap", k=5, legend_kwds={"caption": "ventas[$]", "colorbar": True, "scale": False}, name="ventas", ) ventas_map A: I found the right way to do this, I defined the color list with the following function, and then use the name list I gave between 'color_list' in the cmap atribute in geopandas.explore function def colors2cmap(*args, name=None): """Create a colormap from a list of given colors. Parameters: *args: Arbitrary number of colors (Named color, HEX or RGB). name (str): Name with which the colormap is registered. Returns: LinearSegmentedColormap. Examples: >>> colors2cmap('darkorange', 'white', 'darkgreen', name='test') """ if len(args) < 2: raise Exception("Give at least two colors.") cmap_data = [matplotlib.colors.to_hex(c) for c in args] cmap = matplotlib.colors.LinearSegmentedColormap.from_list(name, cmap_data) plt.register_cmap(name, cmap) return cmap } And called the function: colors2cmap("#e1f2f2", "#A8DCDC", "#115F5F", name='color_list')
Error with cmap from a matplotlib defined list of colors in Folium - choropleth map
I'm trying to pass a customized list of colors to my choropleth map with geopandas.explore, but I get the error: UnboundLocalError: local variable 'binning' referenced before assignment. If I specify a built-in colormap, e.g. cmap='Blues', it displays with no problem. I copy the code below: import geopandas as gpd import matplotlib.colors import folium color_list = matplotlib.colors.LinearSegmentedColormap.from_list('custom', [ "#e1f2f2", "#A8DCDC", "#115F5F"]) ventas_map = data.explore(column='ventas', scheme='Quantiles', cmap=color_list, tiles='OpenStreetMap', k=5, legend_kwds={'caption': 'ventas[$]', 'colorbar': True, 'scale': False}, name='ventas') Anyone know how to solve it?
[ "As noted this is a bug that I have patched and created a PR 2590\nThis will need to go through merge and release process. In interim you can download and use version of function from my patch which has been committed to GitHub\nimport geopandas as gpd\nimport matplotlib.colors\nimport folium\nimport numpy as np\nimport requests\nfrom pathlib import Path\n\n# download and import patched module\nfile_url = \"https://github.com/rraymondgh/geopandas/raw/explore_cmap_scheme/geopandas/explore.py\"\nwith open(Path.cwd().joinpath(file_url.split(\"/\")[-1]), \"w\") as f:\n f.write(requests.get(file_url).text)\nimport explore\n\ngpd.explore._explore = explore._explore\n\nr = np.random.RandomState(42)\ndata = gpd.read_file(gpd.datasets.get_path(\"naturalearth_lowres\"))\ndata[\"ventas\"] = r.uniform(1, 30, len(data))\n\ncolor_list = matplotlib.colors.LinearSegmentedColormap.from_list(\n \"custom\", [\"#e1f2f2\", \"#A8DCDC\", \"#115F5F\"], 7\n)\n\n# use patched module, note change from data.explore() to explore._explore() with first\n# argument being geodataframe\nventas_map = explore._explore(\n data,\n column=\"ventas\",\n scheme=\"Quantiles\",\n cmap=color_list,\n tiles=\"OpenStreetMap\",\n k=5,\n legend_kwds={\"caption\": \"ventas[$]\", \"colorbar\": True, \"scale\": False},\n name=\"ventas\",\n)\n\nventas_map\n\n", "I found the right way to do this, I defined the color list with the following function, and then use the name list I gave between 'color_list' in the cmap atribute in geopandas.explore function\ndef colors2cmap(*args, name=None):\n\"\"\"Create a colormap from a list of given colors.\n\nParameters:\n *args: Arbitrary number of colors (Named color, HEX or RGB).\n name (str): Name with which the colormap is registered.\n\nReturns:\n LinearSegmentedColormap.\n\nExamples:\n >>> colors2cmap('darkorange', 'white', 'darkgreen', name='test')\n\"\"\"\nif len(args) < 2:\n raise Exception(\"Give at least two colors.\")\n\ncmap_data = [matplotlib.colors.to_hex(c) for c in args]\n\ncmap = matplotlib.colors.LinearSegmentedColormap.from_list(name, cmap_data)\nplt.register_cmap(name, cmap)\n\nreturn cmap }\n\nAnd called the function:\n colors2cmap(\"#e1f2f2\", \"#A8DCDC\", \"#115F5F\", name='color_list')\n\n" ]
[ 0, 0 ]
[]
[]
[ "choropleth", "folium", "geopandas", "matplotlib", "python" ]
stackoverflow_0073979846_choropleth_folium_geopandas_matplotlib_python.txt
Q: Detect last zero-crossing I'm generating an exponential sweep with the following function: @jit(nopython=True) def generate_exponential_sweep(time_in_seconds, sr): time_in_samples = time_in_seconds * sr exponential_sweep = np.zeros(time_in_samples, dtype=np.double) for n in range(time_in_samples): t = n / sr exponential_sweep[n] = np.sin( (2.0 * np.pi * starting_frequency * sweep_duration) / np.log(ending_frequency / starting_frequency) * (np.exp((t / sweep_duration) * np.log(ending_frequency / starting_frequency)) - 1.0)) number_of_samples = 50 exponential_sweep[-number_of_samples:] = fade(exponential_sweep[-number_of_samples:], 1, 0) return exponential_sweep Right now the sine wave does not finish at a zero-crossing, so for avoiding the problem I managed to make a fade function that simply fades the volume to zero: @jit(nopython=True) def fade(data, gain_start, gain_end): gain = gain_start delta = (gain_end - gain_start) / (len(data) - 1) for i in range(len(data)): data[i] = data[i] * gain gain = gain + delta return data The question is: Would it be better/faster to detect the last zero-crossing in the array and make the sine wave finish there? If better, how can it be done? A: Since time_in_seconds, sr, starting_frequency and ending_frequency are all unknown, we can't guarantee that it will hit any zeroes or even cross it, without any giving them any constraints. The only way to properly do this is to use a window (or fade in/out), with a known frequency behaviour. This rules out 1. We can continue with 2. I would suggest a tapered cosine window for this task - scipy.signal.windows.tukey - which offers the fade-in/-out from 0 to 1 and vv, and is a very common choice for audio tasks. An example of this can be implemented as - import numpy as np import scipy def fade(data: np.ndarray, fade_time: float, sr: float) -> np.ndarray: alpha = sr * 2 * fade_time / len(data) window = scipy.signal.windows.tukey(len(data), alpha) return data * window The resulting window - with a fade time of 0.1 s - would look like this To add this to your already existing code and simplying it - import numpy as np def generate_exponential_sweep( time_in_seconds: float, sr: float, starting_frequency: float, ending_frequency: float, fade_time: float) -> np.ndarray: t = np.arange(0, time_in_seconds, 1/sr) exponential_sweep = np.sin(2 * np.pi * ( starting_frequency * time_in_seconds * ( (ending_frequency / starting_frequency) ** (t / time_in_seconds) - 1 ) / np.log(starting_frequency / ending_frequency) ) ) exponential_sweep = fade(exponential_sweep, fade_time, sr) return exponential_sweep We can replace that whole block creating the sweep by scipy.signal.chirp which does exactly the same - import numpy as np import scipy def generate_exponential_sweep( time_in_seconds: float, sr: float, starting_frequency: float, ending_frequency: float, fade_time: float) -> np.ndarray: t = np.arange(0, time_in_seconds, 1/sr) exponential_sweep = scipy.signal.chirp( t, f0=starting_frequency, f1=ending_frequency, t1=time_in_seconds, method='logarithmic') exponential_sweep = fade(exponential_sweep, fade_time, sr) return exponential_sweep And just a general comment - don't mix putting variables as arguments and not. Please include all in def generate_exponential_sweep(time_in_seconds, sr, starting_frequency, ending_frequency): ...
Detect last zero-crossing
I'm generating an exponential sweep with the following function: @jit(nopython=True) def generate_exponential_sweep(time_in_seconds, sr): time_in_samples = time_in_seconds * sr exponential_sweep = np.zeros(time_in_samples, dtype=np.double) for n in range(time_in_samples): t = n / sr exponential_sweep[n] = np.sin( (2.0 * np.pi * starting_frequency * sweep_duration) / np.log(ending_frequency / starting_frequency) * (np.exp((t / sweep_duration) * np.log(ending_frequency / starting_frequency)) - 1.0)) number_of_samples = 50 exponential_sweep[-number_of_samples:] = fade(exponential_sweep[-number_of_samples:], 1, 0) return exponential_sweep Right now the sine wave does not finish at a zero-crossing, so to avoid the problem I wrote a fade function that simply fades the volume to zero: @jit(nopython=True) def fade(data, gain_start, gain_end): gain = gain_start delta = (gain_end - gain_start) / (len(data) - 1) for i in range(len(data)): data[i] = data[i] * gain gain = gain + delta return data The question is: Would it be better/faster to detect the last zero-crossing in the array and make the sine wave finish there? If better, how can it be done?
[ "Since time_in_seconds, sr, starting_frequency and ending_frequency are all unknown, we can't guarantee that it will hit any zeroes or even cross it, without any giving them any constraints. The only way to properly do this is to use a window (or fade in/out), with a known frequency behaviour.\nThis rules out 1. We can continue with 2.\n\nI would suggest a tapered cosine window for this task - scipy.signal.windows.tukey - which offers the fade-in/-out from 0 to 1 and vv, and is a very common choice for audio tasks.\nAn example of this can be implemented as -\nimport numpy as np\nimport scipy\n\ndef fade(data: np.ndarray, fade_time: float, sr: float) -> np.ndarray:\n alpha = sr * 2 * fade_time / len(data)\n window = scipy.signal.windows.tukey(len(data), alpha)\n\n return data * window\n\nThe resulting window - with a fade time of 0.1 s - would look like this\nTo add this to your already existing code and simplying it -\nimport numpy as np\n\ndef generate_exponential_sweep(\n time_in_seconds: float, sr: float, starting_frequency: float, \n ending_frequency: float, fade_time: float) -> np.ndarray:\n t = np.arange(0, time_in_seconds, 1/sr)\n \n exponential_sweep = np.sin(2 * np.pi * (\n starting_frequency * time_in_seconds * (\n (ending_frequency / starting_frequency) ** (t / time_in_seconds) - 1\n ) / np.log(starting_frequency / ending_frequency)\n )\n )\n\n exponential_sweep = fade(exponential_sweep, fade_time, sr)\n\n return exponential_sweep\n\nWe can replace that whole block creating the sweep by scipy.signal.chirp which does exactly the same -\nimport numpy as np\nimport scipy\n\ndef generate_exponential_sweep(\n time_in_seconds: float, sr: float, starting_frequency: float, \n ending_frequency: float, fade_time: float) -> np.ndarray:\n t = np.arange(0, time_in_seconds, 1/sr)\n \n exponential_sweep = scipy.signal.chirp(\n t, f0=starting_frequency, f1=ending_frequency, \n t1=time_in_seconds, method='logarithmic')\n\n exponential_sweep = fade(exponential_sweep, fade_time, sr)\n\n return exponential_sweep\n\n\nAnd just a general comment - don't mix putting variables as arguments and not. Please include all in\ndef generate_exponential_sweep(time_in_seconds, sr, starting_frequency, ending_frequency):\n...\n\n" ]
[ 1 ]
[]
[]
[ "acoustics", "audio", "python" ]
stackoverflow_0072650916_acoustics_audio_python.txt
Q: Network X remove edges and put it back Hi, is there a way to put the edges back after removing them from a networkx graph? The reason I remove them in the first place is that I need to group the connected edges based on an attribute. import networkx as nx # To create an empty undirected graph G = nx.Graph() # To add a node G.add_node(1) G.add_node(2) G.add_node(3) G.add_node(4) G.add_node(7) G.add_node(9) G.add_node(10) G.add_node(11) G.add_node(12) G.add_node(13) G.add_node(14) G.add_node(15) # To add an edge G.add_edge(1,2, color="r", asset="A1") G.add_edge(3,1,color="r", asset="A2") G.add_edge(2,4,color="b", asset="A3") G.add_edge(4,1,color="b", asset="A4") G.add_edge(9,1,color="e", asset="A5") G.add_edge(1,7,color="d", asset="A6") G.add_edge(2,9,color="d", asset="A7") G.add_edge(10,11,color="d", asset="A8") G.add_edge(11,12,color="e", asset="A9") G.add_edge(12,13,color="c", asset="A10") G.add_edge(14,15,color="c", asset="A11") # Then I would like to remove edges that have color 'e'. So when I do: for c in nx.connected_components(G): attribute = nx.get_edge_attributes(G.subgraph(c),"asset") print("attribute", attribute) attribute {(1, 2): 'A1', (1, 3): 'A2', (1, 4): 'A4', (1, 7): 'A6', (2, 4): 'A3', (2, 9): 'A7'} attribute {(10, 11): 'A8'} attribute {(12, 13): 'A10'} attribute {(14, 15): 'A11'} # this way I can put them into a separate group, because if there is color 'e' in the edges, then I would assign the next connected components to a new group. Then I need the edges with color='e' in the graph again because I need to assign them to the group before it (I can take care of this). But does anyone know if there is a way to put edges back into the graph, or is there a way to get the expected output without having to remove the edges? Since I create all the nodes using momepy, if there is any built-in function that I can use from momepy, then that is even better; otherwise I am happy with another solution. A: The easiest approach is probably to make a copy of G, and remove the edges from the copy so that you can always access the original graph G. For instance, consider the following: H = G.copy() for v,w,d in G.edges(data=True): if d['color']=='e': H.remove_edge(v,w) for c in nx.connected_components(H): attribute = nx.get_edge_attributes(H.subgraph(c),"asset") print("attribute", attribute) leads to the desired result attribute {(1, 2): 'A1', (1, 3): 'A2', (1, 4): 'A4', (1, 7): 'A6', (2, 4): 'A3', (2, 9): 'A7'} attribute {(10, 11): 'A8'} attribute {(12, 13): 'A10'} attribute {(14, 15): 'A11'}
Network X remove edges and put it back
Hi, is there a way to put the edges back after removing them from a networkx graph? The reason I remove them in the first place is that I need to group the connected edges based on an attribute. import networkx as nx # To create an empty undirected graph G = nx.Graph() # To add a node G.add_node(1) G.add_node(2) G.add_node(3) G.add_node(4) G.add_node(7) G.add_node(9) G.add_node(10) G.add_node(11) G.add_node(12) G.add_node(13) G.add_node(14) G.add_node(15) # To add an edge G.add_edge(1,2, color="r", asset="A1") G.add_edge(3,1,color="r", asset="A2") G.add_edge(2,4,color="b", asset="A3") G.add_edge(4,1,color="b", asset="A4") G.add_edge(9,1,color="e", asset="A5") G.add_edge(1,7,color="d", asset="A6") G.add_edge(2,9,color="d", asset="A7") G.add_edge(10,11,color="d", asset="A8") G.add_edge(11,12,color="e", asset="A9") G.add_edge(12,13,color="c", asset="A10") G.add_edge(14,15,color="c", asset="A11") # Then I would like to remove edges that have color 'e'. So when I do: for c in nx.connected_components(G): attribute = nx.get_edge_attributes(G.subgraph(c),"asset") print("attribute", attribute) attribute {(1, 2): 'A1', (1, 3): 'A2', (1, 4): 'A4', (1, 7): 'A6', (2, 4): 'A3', (2, 9): 'A7'} attribute {(10, 11): 'A8'} attribute {(12, 13): 'A10'} attribute {(14, 15): 'A11'} # this way I can put them into a separate group, because if there is color 'e' in the edges, then I would assign the next connected components to a new group. Then I need the edges with color='e' in the graph again because I need to assign them to the group before it (I can take care of this). But does anyone know if there is a way to put edges back into the graph, or is there a way to get the expected output without having to remove the edges? Since I create all the nodes using momepy, if there is any built-in function that I can use from momepy, then that is even better; otherwise I am happy with another solution.
[ "The easiest approach is probably to make a copy of G, and remove the edges from the copy so that you can always access the original graph G. For instance, consider the following:\nH = G.copy()\n\nfor v,w,d in G.edges(data=True):\n if d['color']=='e':\n H.remove_edge(v,w)\n\nfor c in nx.connected_components(H):\n attribute = nx.get_edge_attributes(H.subgraph(c),\"asset\")\n print(\"attribute\", attribute)\n\nleads to the desired result\nattribute {(1, 2): 'A1', (1, 3): 'A2', (1, 4): 'A4', (1, 7): 'A6', (2, 4): 'A3', (2, 9): 'A7'}\nattribute {(10, 11): 'A8'}\nattribute {(12, 13): 'A10'}\nattribute {(14, 15): 'A11'}\n\n" ]
[ 0 ]
[]
[]
[ "networkx", "python" ]
stackoverflow_0074469075_networkx_python.txt
Q: How upload file with selenium I'm trying to upload a video file with selenium, but it doesn't work. My code: a = wait.until(EC.element_to_be_clickable((By.TAG_NAME, 'input'))) browser.execute_script("arguments[0].style.visibility = 'visible'", a) a.send_keys("C:/Users/NIKITA/Desktop/vk_clips/testvid.mp4") This script runs but doesn't load the file and doesn't throw an error. I tried searching for the element using XPath, but it causes a timeout exception. A: The web element actually accepting the uploaded file is the one matching this XPath: "//input[@type='file']". This element is not visible. You can see it yourself in the picture you shared: visibility: hidden. Again, this is not the element you click when uploading a file manually as a user via the GUI. So, to upload a file to it you cannot wait for it to become visible or clickable. Just wait for the element's presence. Your code can be something like the following: wait.until(EC.presence_of_element_located((By.XPATH, "//input[@type='file']"))).send_keys("C:/Users/NIKITA/Desktop/vk_clips/testvid.mp4")
How to upload a file with Selenium
I am trying to upload a video file with Selenium, but it doesn't work. My code:
a = wait.until(EC.element_to_be_clickable((By.TAG_NAME, 'input')))
browser.execute_script("arguments[0].style.visibility = 'visible'", a)
a.send_keys("C:/Users/NIKITA/Desktop/vk_clips/testvid.mp4")

This script runs but doesn't load the file and doesn't throw an error. I tried searching for the element using XPath; it causes a timeout exception.
[ "The web element that actually accepts the uploaded file matches this XPath: \"//input[@type='file']\". This element is not visible: the picture you shared shows visibility: hidden on it.\nAgain, this is not the element you click when uploading a file manually as a user via the GUI.\nSo, to upload a file to it you cannot wait for it to become visible or clickable.\nJust wait for this element's presence.\nYour code can be something like the following:\nwait.until(EC.presence_of_element_located((By.XPATH, \"//input[@type='file']\"))).send_keys(\"C:/Users/NIKITA/Desktop/vk_clips/testvid.mp4\")\n\n" ]
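For completeness, a minimal self-contained sketch of the same approach. The URL is a placeholder and a Chrome driver on PATH is assumed; only the file path comes from the question.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get("https://example.com/upload")  # placeholder page containing a file input
wait = WebDriverWait(browser, 10)

# send_keys on the (possibly hidden) <input type="file"> performs the upload
file_input = wait.until(EC.presence_of_element_located((By.XPATH, "//input[@type='file']")))
file_input.send_keys(r"C:\Users\NIKITA\Desktop\vk_clips\testvid.mp4")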
[ 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "webdriverwait", "xpath" ]
stackoverflow_0074480471_python_selenium_selenium_webdriver_webdriverwait_xpath.txt
Q: How to write the division of two columns of a dataframe using asyncio? def divis(data): data['prom'] = data['total']/data['num2'] return data async def divis(data): data['prom'] = await (data['total']/data['num2']) return data await divis(df2) TypeError: unhashable type: 'Series' A: Based on the question fastest way to apply an async function to pandas dataframe and its accepted answer, this will look like: import asyncio import numpy as np import pandas as pd async def fun2(x, y): return x / y async def divis(data): data['prom'] = await asyncio.gather(*(fun2(x, y) for x, y in zip(data['total'], data['num2']))) return data await divis(df2)
How to write the division of two columns of a dataframe using asyncio?
def divis(data): data['prom'] = data['total']/data['num2'] return data async def divis(data): data['prom'] = await (data['total']/data['num2']) return data await divis(df2) TypeError: unhashable type: 'Series'
[ "Based on the question fastest way to apply an async function to pandas dataframe and its accepted answer, this will look like:\nimport asyncio\n\nimport numpy as np\nimport pandas as pd\n\nasync def fun2(x, y):\n return x / y\n\nasync def divis(data):\n data['prom'] = await asyncio.gather(*(fun2(x, y) for x, y in zip(data['total'], data['num2'])))\n return data\n \nawait divis(df2)\n\n" ]
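A side note worth making explicit: pandas column division is synchronous and vectorized, so there is nothing inherently awaitable in it, and awaiting each element pair adds overhead. If the goal is just to call the division from async code without blocking the event loop, a sketch using asyncio.to_thread (Python 3.9+) could look like this; df2 is assumed to carry the total and num2 columns from the question.

import asyncio

import pandas as pd

async def divis(data: pd.DataFrame) -> pd.DataFrame:
    # run the whole vectorized division in a worker thread
    data['prom'] = await asyncio.to_thread(lambda: data['total'] / data['num2'])
    return data

df2 = pd.DataFrame({'total': [10, 20], 'num2': [2, 4]})
print(asyncio.run(divis(df2)))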
[ 0 ]
[]
[]
[ "concurrency", "python", "python_asyncio" ]
stackoverflow_0074480252_concurrency_python_python_asyncio.txt
Q: How do I return a child class instance after running a super class method? I have 2 Python classes, one subclassing the other
class A:
    def __init__(self, some_params):
        do_something()

    def method(self, params):
        return A_new_A_instance

class B(A):
    def __init__(self, some_params):
        super().__init__(some_params)

    def new_method(self, params):
        a_instance = super().method(params)
        return B(a_instance)

The above works fine for some of the methods I'm using heavily. The issue is that class A has a lot of methods: some I'm using as-is, others I'm modifying, etc. And a few I don't care about.
Most of the methods in A return another instance of A (like selecting, adding, re-ordering data). But I want to make sure that whichever A.method() I call, I get back an instance of B when I do B.method(). Is there a magic way to do this for all methods of A, or do I need to go over them one by one?
A: As long as the constructors of both A and B are the same (they take the same parameters) you can use a factory function to create new instances of A and override it for B:
class A:
    def __init__(self, *params):
        pass

    def _create_new_instance(self, *params):
        return A(*params)

    def method(self, *params):
        # this will either call A._create_new_instance or
        # B._create_new_instance depending on type(self)
        return self._create_new_instance(*params)

class B(A):
    def __init__(self, *params):
        super().__init__(*params)

    def _create_new_instance(self, *params):
        return B(*params)

    def new_method(self, *params):
        new_b = self.method(*params)
        do_something_new(new_b)
        return new_b

assert isinstance(A().method(), A)
assert isinstance(B().method(), B)
How do I return a child class instance after running a super class method?
I have 2 Python classes, one subclassing the other
class A:
    def __init__(self, some_params):
        do_something()

    def method(self, params):
        return A_new_A_instance

class B(A):
    def __init__(self, some_params):
        super().__init__(some_params)

    def new_method(self, params):
        a_instance = super().method(params)
        return B(a_instance)

The above works fine for some of the methods I'm using heavily. The issue is that class A has a lot of methods: some I'm using as-is, others I'm modifying, etc. And a few I don't care about.
Most of the methods in A return another instance of A (like selecting, adding, re-ordering data). But I want to make sure that whichever A.method() I call, I get back an instance of B when I do B.method(). Is there a magic way to do this for all methods of A, or do I need to go over them one by one?
[ "As long as the constructors of both A and B are the same (they take the same parameters) you can use a factory function to create new instances of A and override it for B:\nclass A:\n    def __init__(self, *params):\n        pass\n\n    def _create_new_instance(self, *params):\n        return A(*params)\n\n    def method(self, *params):\n        # this will either call A._create_new_instance or\n        # B._create_new_instance depending on type(self)\n        return self._create_new_instance(*params)\n\nclass B(A):\n    def __init__(self, *params):\n        super().__init__(*params)\n\n    def _create_new_instance(self, *params):\n        return B(*params)\n\n    def new_method(self, *params):\n        new_b = self.method(*params)\n        do_something_new(new_b)\n        return new_b\n\nassert isinstance(A().method(), A)\nassert isinstance(B().method(), B)\n\n" ]
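A common shortcut for this pattern, sketched under the same assumption that all constructors share a signature: build new instances through type(self) instead of a hard-coded class, so inherited methods return the subclass automatically and no per-class factory override is needed.

class A:
    def __init__(self, *params):
        self.params = params

    def method(self, *params):
        # type(self) resolves to B when called on a B instance
        return type(self)(*params)

class B(A):
    pass

assert isinstance(A().method(), A)
assert isinstance(B().method(), B)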
[ 1 ]
[]
[]
[ "inheritance", "oop", "python", "super" ]
stackoverflow_0074480394_inheritance_oop_python_super.txt
Q: Calculating a sum of characters converted to hex in python Working in Python, I need to calculate a checksum in a very specific way. The checksum is the lower byte of the sum of the hexadecimal representation of ASCII characters. Sounds confusing; here is the documentation with an example.
Here is my code in Python.
chars = ['L', '3', '2', '0', '0']
checksum = hex(sum(int(hex(ord(c)), 16) for c in chars))[-2:]
print(checksum)
'11'

Is there a simpler way?
A: int(hex(x), 16) converts a number to its hex representation, then back to an integer. You could just use x.
One byte is two hex digits. To get the lower byte of something, you just need to bitwise-and it with 0xff.
So, your code would simply be written as:
checksum = sum(ord(c) for c in chars) & 0xff
# or
checksum = sum(map(ord, chars)) & 0xff

And to express it as a hex string, just use the f-string syntax:
checksum_hex = f"{checksum:x}" # "11"
Calculating a sum of characters converted to hex in python
Working in Python I need to calculate a checksum in a very specific way. The checksum is the lower byte of the sum of the hexadecimal representation of ASCII characters. Sounds confusing, here is the documentation with an example. Here is my code in python. chars = ['L', '3', '2', '0', '0'] checksum = hex(sum(int(hex(ord(c)), 16) for c in chars))[-2:] print(checksum) '11' Is there a simpler way?
[ "\nint(hex(x), 16) converts a number to its hex representation, then back to an integer. You could just use x.\nOne byte is two hex digits. To get the lower byte of something, you just need to bitwise-and it with 0xff.\n\nSo, your code would simply be written as:\nchecksum = sum(ord(c) for c in chars) & 0xff\n# or\nchecksum = sum(map(ord, chars)) & 0xff\n\nAnd to express it as a hex string, just use the f-string syntax:\nchecksum_hex = f\"{checksum:x}\" # \"11\"\n\n" ]
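One caveat, assuming the device protocol expects the checksum as exactly two hex digits (one byte): zero-pad the format, since f"{checksum:x}" would render a value like 0x07 as "7". On the question's sample:

chars = ['L', '3', '2', '0', '0']
checksum = sum(map(ord, chars)) & 0xff
print(f"{checksum:02x}")  # '11' here; a low value such as 0x07 would print as '07'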
[ 1 ]
[]
[]
[ "hex", "python" ]
stackoverflow_0074480483_hex_python.txt
Q: Drawing Directed Graph with Edge meta-data (with NetworkX in Python) I have a directed multigraph that I want to represent as a (complete) directed graph with edge meta-data, such that if there are e edges from node A to node B (in the original multigraph) then I save e as the meta-data for the edge (A,B) in the new (not-multi) directed graph.
I can construct the graph as follows:
DG = nx.complete_graph(node_list, create_using=nx.DiGraph())

where node_list = ['node_A', 'node_B', ....]
I can add the edges using:
DG.edges[('node_A', 'node_B')]['edge_count'] = 1

But how do I print this value (nicely) using the draw command? I tried the following
nx.draw(DG, with_labels = True)
plt.show()

But the edge values stay hidden; what's more, I would need a nice way to show the meta-data associated with edge (A,B) while easily distinguishing it from edge (B,A).
A: You should be able to do the following:
edge_labels = nx.get_edge_attributes(G,'edge_count')

pos = nx.spring_layout(G)
nx.draw(G, pos = pos, with_labels=True)
nx.draw_networkx_edge_labels(G, pos=pos, edge_labels = edge_labels)
plt.show()

Here's an approach that uses curved arrows to avoid overlapping labels
import networkx as nx
import matplotlib.pyplot as plt

G = nx.DiGraph()

G.add_nodes_from(range(4))

G.add_edge(0,1,edge_count = 1)
G.add_edge(1,0,edge_count = 2)
G.add_edge(1,2,edge_count = 2)
G.add_edge(2,3,edge_count = 3)
G.add_edge(3,0,edge_count = 2)

def offset(d, pos, dist = .1):
    for (u,v),obj in d.items():
        par = dist*(pos[v] - pos[u])
        dx,dy = par[1],-par[0]
        x,y = obj.get_position()
        obj.set_position((x+dx,y+dy))

edge_labels = nx.get_edge_attributes(G,'edge_count')
pos = nx.spring_layout(G)
nx.draw(G, pos = pos, with_labels=True, connectionstyle = 'arc3,rad=0.2', node_color = 'orange')
d = nx.draw_networkx_edge_labels(G, pos=pos, edge_labels = edge_labels)
offset(d,pos)
plt.gca().set_aspect('equal')
plt.show()

Result from the above:
Drawing Directed Graph with Edge meta-data (with NetworkX in Python)
I have a directed multigraph that I want to represent as a (complete) directed graph with edge meta-data, such that if there are e edges from node A to node B (in the original multigraph) then I save e as the meta-data for the edge (A,B) in the new (not-multi) directed graph.
I can construct the graph as follows:
DG = nx.complete_graph(node_list, create_using=nx.DiGraph())

where node_list = ['node_A', 'node_B', ....]
I can add the edges using:
DG.edges[('node_A', 'node_B')]['edge_count'] = 1

But how do I print this value (nicely) using the draw command? I tried the following
nx.draw(DG, with_labels = True)
plt.show()

But the edge values stay hidden; what's more, I would need a nice way to show the meta-data associated with edge (A,B) while easily distinguishing it from edge (B,A).
[ "You should be able to do the following:\nedge_labels = nx.get_edge_attributes(G,'edge_count')\n\npos = nx.spring_layout(G)\nnx.draw(G, pos = pos, with_labels=True)\nnx.draw_networkx_edge_labels(G, pos=pos, edge_labels = edge_labels)\nplt.show()\n\n\nHere's an approach that uses curved arrows to avoid overlapping labels\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\nG = nx.DiGraph()\n\nG.add_nodes_from(range(4))\n\nG.add_edge(0,1,edge_count = 1)\nG.add_edge(1,0,edge_count = 2)\nG.add_edge(1,2,edge_count = 2)\nG.add_edge(2,3,edge_count = 3)\nG.add_edge(3,0,edge_count = 2)\n\ndef offset(d, pos, dist = .1):\n for (u,v),obj in d.items():\n par = dist*(pos[v] - pos[u])\n dx,dy = par[1],-par[0]\n x,y = obj.get_position()\n obj.set_position((x+dx,y+dy))\n\nedge_labels = nx.get_edge_attributes(G,'edge_count')\npos = nx.spring_layout(G)\nnx.draw(G, pos = pos, with_labels=True, connectionstyle = 'arc3,rad=0.2', node_color = 'orange')\nd = nx.draw_networkx_edge_labels(G, pos=pos, edge_labels = edge_labels)\noffset(d,pos)\nplt.gca().set_aspect('equal')\nplt.show()\n\nResult from the above:\n\n" ]
[ 1 ]
[]
[]
[ "networkx", "python" ]
stackoverflow_0074480392_networkx_python.txt
Q: How to read bangla dataframe json file with pandas Here is what my code looks like
import codecs
import pandas as pd

data = pd.read_json(codecs.open('/content/drive/MyDrive/content_colab_access/quotes_test.json', 'r', 'utf-8'))
print(data.shape)
data.head()

I have different quotes in quotes_test.json. Here are some parts of the dataframe:
[
    {
        "Quote": "যখন মানুষের খুব প্রিয় কেউ তাকে অপছন্দ করে না",
        "Author": "Humayun Ahmed",
        "Tags": [
            "bangladesh"," bengali"," humayun-ahmed "
        ],
        "Popularity": 0.381,
        "Category": "life"
    }
]

The error I found:
ValueError: Unexpected character found when decoding array value

So my question to all of you: what is the right way? I want to make the output like
Thank you.
A: The encoding is not the required type; the file most likely starts with a UTF-8 BOM, which the utf-8-sig codec strips.
data = pd.read_json(codecs.open('/content/drive/MyDrive/content_colab_access/quotes_test.json', 'r', 'utf-8-sig'))

I recommend the module chardet to detect the encoding.
How to read bangla dataframe json file with pandas
Here is what my code looks like
import codecs
import pandas as pd

data = pd.read_json(codecs.open('/content/drive/MyDrive/content_colab_access/quotes_test.json', 'r', 'utf-8'))
print(data.shape)
data.head()

I have different quotes in quotes_test.json. Here are some parts of the dataframe:
[
    {
        "Quote": "যখন মানুষের খুব প্রিয় কেউ তাকে অপছন্দ করে না",
        "Author": "Humayun Ahmed",
        "Tags": [
            "bangladesh"," bengali"," humayun-ahmed "
        ],
        "Popularity": 0.381,
        "Category": "life"
    }
]

The error I found:
ValueError: Unexpected character found when decoding array value

So my question to all of you: what is the right way? I want to make the output like
Thank you.
[ "The encoding is not the required type; the file most likely starts with a UTF-8 BOM, which the utf-8-sig codec strips.\ndata = pd.read_json(codecs.open('/content/drive/MyDrive/content_colab_access/quotes_test.json', 'r', 'utf-8-sig'))\n\nI recommend the module chardet to detect the encoding.\n" ]
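A sketch of the chardet suggestion, reading the file as raw bytes first (the path is the one from the question):

import chardet

with open('/content/drive/MyDrive/content_colab_access/quotes_test.json', 'rb') as f:
    raw = f.read()
print(chardet.detect(raw))  # e.g. {'encoding': 'UTF-8-SIG', 'confidence': 1.0, 'language': ''}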
[ 0 ]
[]
[]
[ "dataframe", "json", "pandas", "project", "python" ]
stackoverflow_0074480440_dataframe_json_pandas_project_python.txt
Q: Unexpected calculation of number of trainable parameters (Pytorch) Consider the following code
from torch import nn
from torchsummary import summary
from torchvision import models

model = models.efficientnet_b7(pretrained=True)

model.classifier[-1].out_features = 4 # because I have a 4-class problem; initially the output is 1000 classes

model.classifier = nn.Sequential(*model.classifier, nn.Softmax(dim=1)) # add softmax

# freeze features
for child in model.features:
    for param in child.parameters():
        param.requires_grad = False

When I run model.classifier I get the below (expected) output, which as per my calculations implies that the total trainable parameters should be (2560 + 1) * 4 output nodes = 10244 trainable params.
However, when I attempt to calculate the total number of trainable params by summary(model, (3,128,128)) I get
and by sum(p.numel() for p in model.parameters() if p.requires_grad) I get
The 2,561,000, in both cases, comes from (2560 + 1) * 1000 classes. But why does it still consider 1000 classes?
A: Resetting an attribute of an initialized layer does not necessarily re-initialize it with the newly-set attribute. What you need is model.classifier[-1] = nn.Linear(2560, 4).
Unexpected calculation of number of trainable parameters (Pytorch)
Consider the following code
from torch import nn
from torchsummary import summary
from torchvision import models

model = models.efficientnet_b7(pretrained=True)

model.classifier[-1].out_features = 4 # because I have a 4-class problem; initially the output is 1000 classes

model.classifier = nn.Sequential(*model.classifier, nn.Softmax(dim=1)) # add softmax

# freeze features
for child in model.features:
    for param in child.parameters():
        param.requires_grad = False

When I run model.classifier I get the below (expected) output, which as per my calculations implies that the total trainable parameters should be (2560 + 1) * 4 output nodes = 10244 trainable params.
However, when I attempt to calculate the total number of trainable params by summary(model, (3,128,128)) I get
and by sum(p.numel() for p in model.parameters() if p.requires_grad) I get
The 2,561,000, in both cases, comes from (2560 + 1) * 1000 classes. But why does it still consider 1000 classes?
[ "Resetting an attribute of an initialized layer does not necessarily re-initialize it with the newly-set attribute. What you need is model.classifier[-1] = nn.Linear(2560, 4).\n" ]
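Putting the accepted fix back into the question's setup, as a sketch (the 2560 input width is the one from the question's efficientnet_b7 classifier):

from torch import nn
from torchvision import models

model = models.efficientnet_b7(pretrained=True)
# replace the head with a freshly initialized 4-class layer
model.classifier[-1] = nn.Linear(2560, 4)
model.classifier = nn.Sequential(*model.classifier, nn.Softmax(dim=1))

# freeze features
for child in model.features:
    for param in child.parameters():
        param.requires_grad = False

print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # 10244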
[ 1 ]
[]
[]
[ "conv_neural_network", "pre_trained_model", "python", "pytorch", "transfer_learning" ]
stackoverflow_0074479801_conv_neural_network_pre_trained_model_python_pytorch_transfer_learning.txt
Q: More efficient ways to self join a dataframe? I'm trying to find the count of cars for the past 5 sales entries in a dataset.
My current approach using the code below is to:

Calculate the row number for each entry
Self join the dataframe to get the history for each dealership
Keep the 5 previous entries
Sum the sales for these 5

# Calculate row number for each sale
df = df.sort_values(["dealership_id", "time"], ascending=[True, True])
df["row_num"] = df.groupby(["dealership_id"]).cumcount()
df.drop_duplicates()
df_2 = df[["dealership_id", "sales_entry", "car_count", "row_num"]]

# Join to get history
df_3 = pd.merge(
    df_2[['dealership_id','sales_entry','row_num']],
    df_2[['dealership_id','car_count','row_num']],
    how="inner",
    on=["dealership_id"],
)

# Keep past 5 sales
df_4 = df_3.loc[(df_3['row_num_x'] - df_3['row_num_y'] > 0) & (df_3['row_num_y'] - df_3['row_num_x'] <= 4)]

# Sum 5 previous sales
df_4 = (
    df_4.groupby(["dealership_id", "sales_entry"])
    .agg({"car_count": "sum"})
    .reset_index()
)

However, due to the size of the datasets the second step is far too inefficient. Can anyone recommend a better way to go about solving this?
I also tried to create a column instead of using a join:
df_3['dealership_id'][df_3['dealership_id']].values

But I get the error that "None of [Int64Index .... are in the index". I have checked that my column names are clean and that the data types are all ints.

  dealership_id sales_entry car_count  time row_num
0           123  entry_asfs         3 11:00       1
1           123  entry_kmsl         0 13:05       2
2           456  entry_sdfm         2 14:10       3
3           456  entry_sknw         1 10:10       1
4           456  entry_kmsl         1 14:35       2

A: From what it looks like, you are trying to find the sum of the car count over a rolling window of the past 5 entries, for each dealer. So this would be a group by then rolling sum operation:
df = df.sort_values(["dealership_id", "time"], ascending=[True, True])
df['rolling_sum'] = df.groupby('dealership_id')['car_count'].rolling(5).sum().values
More efficient ways to self join a dataframe?
I'm trying to find the count of cars for the past 5 sales entries in a dataset.
My current approach using the code below is to:

Calculate the row number for each entry
Self join the dataframe to get the history for each dealership
Keep the 5 previous entries
Sum the sales for these 5

# Calculate row number for each sale
df = df.sort_values(["dealership_id", "time"], ascending=[True, True])
df["row_num"] = df.groupby(["dealership_id"]).cumcount()
df.drop_duplicates()
df_2 = df[["dealership_id", "sales_entry", "car_count", "row_num"]]

# Join to get history
df_3 = pd.merge(
    df_2[['dealership_id','sales_entry','row_num']],
    df_2[['dealership_id','car_count','row_num']],
    how="inner",
    on=["dealership_id"],
)

# Keep past 5 sales
df_4 = df_3.loc[(df_3['row_num_x'] - df_3['row_num_y'] > 0) & (df_3['row_num_y'] - df_3['row_num_x'] <= 4)]

# Sum 5 previous sales
df_4 = (
    df_4.groupby(["dealership_id", "sales_entry"])
    .agg({"car_count": "sum"})
    .reset_index()
)

However, due to the size of the datasets the second step is far too inefficient. Can anyone recommend a better way to go about solving this?
I also tried to create a column instead of using a join:
df_3['dealership_id'][df_3['dealership_id']].values

But I get the error that "None of [Int64Index .... are in the index". I have checked that my column names are clean and that the data types are all ints.

  dealership_id sales_entry car_count  time row_num
0           123  entry_asfs         3 11:00       1
1           123  entry_kmsl         0 13:05       2
2           456  entry_sdfm         2 14:10       3
3           456  entry_sknw         1 10:10       1
4           456  entry_kmsl         1 14:35       2
[ "From what it looks like, you are trying to find the sum of the car count over a rolling window of the past 5 entries, for each dealer. So this would be a group by then rolling sum operation:\ndf = df.sort_values([\"dealership_id\", \"time\"], ascending=[True, True])\ndf['rolling_sum'] = df.groupby('dealership_id')['car_count'].rolling(5).sum().values\n\n" ]
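One detail worth flagging as a sketch: rolling(5) includes the current row in its window. If "the past 5 sales" should exclude the row itself, shift each group by one first; min_periods=1 keeps partial sums at the start of each group instead of NaN.

df = df.sort_values(["dealership_id", "time"])
df['prev5_sum'] = (
    df.groupby('dealership_id')['car_count']
      .transform(lambda s: s.shift(1).rolling(5, min_periods=1).sum())
)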
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074479475_dataframe_pandas_python.txt
Q: How do I ignore keyword arguments when they are not used in the method? I have a class with various functions that have default values for some keywords, but values can also be specified. However, the functions use different keywords. This is a minimum reproducible example. The actual case has functions that are more complicated and inter-related. Example Class: class Things(object): def __init__(self, **kwargs): self.other = 999 self.result = self.some_fcn(**kwargs) self.other2 = self.some_fcn2(**kwargs) def some_fcn(self, x=None, y=None): if x is None: x = 7 if y is None: y = 7 return x + y def some_fcn2(self, z=None): if z is None: z = -1 return self.other * z Tests, these work: ans = Things().some_fcn() print("default, should be 14:", ans) ans = Things(x=1) print("should be 8:", ans.result) And here on ans = Things(x=1, z=100) it fails --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-50-5e7081adb4fb> in <module> 25 ans = Things().some_fcn(9, -2) 26 print("should be 7:", ans) ---> 27 ans = Things(x=1) 28 print("should be 8:", ans.result) 29 print("***") <ipython-input-50-5e7081adb4fb> in __init__(self, **kwargs) 3 self.other = 999 4 self.result = self.some_fcn(**kwargs) ----> 5 self.other2 = self.some_fcn2(**kwargs) 6 7 self.__dict__.update(kwargs) TypeError: some_fcn2() got an unexpected keyword argument 'x' How do I ignore keyword arguments when they are not used in the method? The similar question at How to ignore extra keyword arguments in Python? says to add self.__dict__.update(kwargs) to the constructor, but that still produces the same error. A: Use **kwargs for your inner functions as well, and then inside those functions check if the arguments exist. class Things(object): def __init__(self, **kwargs): self.other = 999 self.result = self.some_fcn(**kwargs) self.other2 = self.some_fcn2(**kwargs) def some_fcn(self, **kwargs): x = 7 if 'x' not in kwargs else kwargs['x'] y = 7 if 'y' not in kwargs else kwargs['y'] return x + y def some_fcn2(self, **kwargs): z = -1 if 'z' not in kwargs else kwargs['z'] return self.other * z
How do I ignore keyword arguments when they are not used in the method?
I have a class with various functions that have default values for some keywords, but values can also be specified. However, the functions use different keywords. This is a minimum reproducible example. The actual case has functions that are more complicated and inter-related. Example Class: class Things(object): def __init__(self, **kwargs): self.other = 999 self.result = self.some_fcn(**kwargs) self.other2 = self.some_fcn2(**kwargs) def some_fcn(self, x=None, y=None): if x is None: x = 7 if y is None: y = 7 return x + y def some_fcn2(self, z=None): if z is None: z = -1 return self.other * z Tests, these work: ans = Things().some_fcn() print("default, should be 14:", ans) ans = Things(x=1) print("should be 8:", ans.result) And here on ans = Things(x=1, z=100) it fails --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-50-5e7081adb4fb> in <module> 25 ans = Things().some_fcn(9, -2) 26 print("should be 7:", ans) ---> 27 ans = Things(x=1) 28 print("should be 8:", ans.result) 29 print("***") <ipython-input-50-5e7081adb4fb> in __init__(self, **kwargs) 3 self.other = 999 4 self.result = self.some_fcn(**kwargs) ----> 5 self.other2 = self.some_fcn2(**kwargs) 6 7 self.__dict__.update(kwargs) TypeError: some_fcn2() got an unexpected keyword argument 'x' How do I ignore keyword arguments when they are not used in the method? The similar question at How to ignore extra keyword arguments in Python? says to add self.__dict__.update(kwargs) to the constructor, but that still produces the same error.
[ "Use **kwargs for your inner functions as well, and then inside those functions check if the arguments exist.\nclass Things(object):\n def __init__(self, **kwargs):\n self.other = 999\n self.result = self.some_fcn(**kwargs)\n self.other2 = self.some_fcn2(**kwargs)\n\n def some_fcn(self, **kwargs):\n x = 7 if 'x' not in kwargs else kwargs['x']\n y = 7 if 'y' not in kwargs else kwargs['y']\n return x + y\n\n def some_fcn2(self, **kwargs):\n z = -1 if 'z' not in kwargs else kwargs['z']\n return self.other * z\n\n" ]
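A small refinement worth sketching on top of the accepted pattern: kwargs.get collapses each lookup-with-default into one step, while keeping the same behavior of silently ignoring keywords a method does not use.

class Things:
    def __init__(self, **kwargs):
        self.other = 999
        self.result = self.some_fcn(**kwargs)
        self.other2 = self.some_fcn2(**kwargs)

    def some_fcn(self, **kwargs):
        # unknown keys (e.g. z) are simply ignored here
        return kwargs.get('x', 7) + kwargs.get('y', 7)

    def some_fcn2(self, **kwargs):
        return self.other * kwargs.get('z', -1)

print(Things(x=1, z=100).result)  # 8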
[ 0 ]
[]
[]
[ "class", "keyword_argument", "methods", "python" ]
stackoverflow_0074480565_class_keyword_argument_methods_python.txt
Q: I need to filter/copy/fetch only past 3 days data from 30-60 days data set I have a date variable "interview_start" which holds each record's unique start date. I just need to filter the past 3 days of data, including today's, into another (or the same) dataframe. I've used the code below, which works fine except that it gives me the past days without today's data. I want to include today's data too.
from datetime import datetime, date
today = date.today()
lastdayfrom = pd.to_datetime(today)
df = df.set_index('interview_start')
df = df.sort_index()
df = df.loc[lastdayfrom - pd.Timedelta(days=4):lastdayfrom].reset_index()

How do I include today's data too? I've tried (days < 4) but it gives me an error. The data set is huge, with many variables and rows.
Format of interview_Start
I need to filter/copy/fetch only past 3 days data from 30-60 days data set
I have a date variable "interview_start" which holds each record's unique start date. I just need to filter the past 3 days of data, including today's, into another (or the same) dataframe. I've used the code below, which works fine except that it gives me the past days without today's data. I want to include today's data too.
from datetime import datetime, date
today = date.today()
lastdayfrom = pd.to_datetime(today)
df = df.set_index('interview_start')
df = df.sort_index()
df = df.loc[lastdayfrom - pd.Timedelta(days=4):lastdayfrom].reset_index()

How do I include today's data too? I've tried (days < 4) but it gives me an error. The data set is huge, with many variables and rows.
Format of interview_Start
[ "Not the most practical approach, but you could loop through the rows in the dataframe, adding each row to a new dataframe, until the date is out of your date range.\nI'm going under the assumption you want today's data + the three days before, e.g. 17/11, 16/11, 15/11, 14/11\nfrom datetime import datetime, timedelta\n\n\nvalid_dates = []\ntoday = datetime.now()\nvalid_dates.append(today.strftime('%Y-%m-%d'))\n\nfor i in range(1, 4):\n    valid_dates.append((today - timedelta(days=i)).strftime('%Y-%m-%d'))\n\n\nYou can then loop through your dataframe, adding rows that\nrow['Date'] in valid_dates\n\nto your new dataframe.\n" ]
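A vectorized alternative, as a sketch: assuming interview_start is already a datetime column, normalize() drops the time-of-day from "now", so the comparison keeps all of today plus the three previous days without any row-by-row loop.

import pandas as pd

cutoff = pd.Timestamp.today().normalize() - pd.Timedelta(days=3)
recent = df[df['interview_start'] >= cutoff]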
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074480529_pandas_python.txt
Q: What does the group_keys argument to pandas.groupby actually do? In pandas.DataFrame.groupby, there is an argument group_keys, which I gather is supposed to do something relating to how group keys are included in the dataframe subsets. According to the documentation:

group_keys : boolean, default True
When calling apply, add group keys to index to identify pieces

However, I can't really find any examples where group_keys makes an actual difference:
import pandas as pd

df = pd.DataFrame([[0, 1, 3], [3, 1, 1], [3, 0, 0], [2, 3, 3], [2, 1, 0]], columns=list('xyz'))

gby = df.groupby('x')
gby_k = df.groupby('x', group_keys=False)

It doesn't make a difference in the output of apply:
ap = gby.apply(pd.DataFrame.sum)

#    x  y  z
# x
# 0  0  1  3
# 2  4  4  3
# 3  6  1  1

ap_k = gby_k.apply(pd.DataFrame.sum)

#    x  y  z
# x
# 0  0  1  3
# 2  4  4  3
# 3  6  1  1

And even if you print out the grouped subsets as you go, the results are still identical:
def printer_func(x):
    print(x)
    return x

print('gby')
print('--------------')
gby.apply(printer_func)
print('--------------')

print('gby_k')
print('--------------')
gby_k.apply(printer_func)
print('--------------')

# gby
# --------------
#    x  y  z
# 0  0  1  3
#    x  y  z
# 0  0  1  3
#    x  y  z
# 3  2  3  3
# 4  2  1  0
#    x  y  z
# 1  3  1  1
# 2  3  0  0
# --------------
# gby_k
# --------------
#    x  y  z
# 0  0  1  3
#    x  y  z
# 0  0  1  3
#    x  y  z
# 3  2  3  3
# 4  2  1  0
#    x  y  z
# 1  3  1  1
# 2  3  0  0
# --------------

I considered the possibility that the default argument is actually True, but switching group_keys to explicitly False doesn't make a difference either.
What exactly is this argument for? (Run on pandas version 0.18.1)
Edit: I did find a way where group_keys changes behavior, based on this answer:
import pandas as pd
import numpy as np

row_idx = pd.MultiIndex.from_product(((0, 1), (2, 3, 4)))
d = pd.DataFrame([[4, 3], [1, 3], [1, 1], [2, 4], [0, 1], [4, 2]], index=row_idx)

df_n = d.groupby(level=0).apply(lambda x: x.nlargest(2, [0]))

#        0  1
# 0 0 2  4  3
#     3  1  3
# 1 1 4  4  2
#     2  2  4

df_k = d.groupby(level=0, group_keys=False).apply(lambda x: x.nlargest(2, [0]))

#      0  1
# 0 2  4  3
#   3  1  3
# 1 4  4  2
#   2  2  4

However, I'm still not clear on the intelligible principle behind what group_keys is supposed to do. This behavior does not seem intuitive based on @piRSquared's answer.
A: The group_keys parameter of groupby comes in handy during apply operations: with group_keys=True an additional index level corresponding to the grouped columns is created, and with group_keys=False it is left out, which matters especially when performing operations on individual columns.
One such instance:
In [21]: gby = df.groupby('x',group_keys=True).apply(lambda row: row['x'])

In [22]: gby
Out[22]:
x
0  0    0
2  3    2
   4    2
3  1    3
   2    3
Name: x, dtype: int64

In [23]: gby_k = df.groupby('x', group_keys=False).apply(lambda row: row['x'])

In [24]: gby_k
Out[24]:
0    0
3    2
4    2
1    3
2    3
Name: x, dtype: int64

One of its intended applications could be to group by one of the levels of the hierarchy after converting the result to a multi-index dataframe object.
In [27]: gby.groupby(level='x').sum()
Out[27]:
x
0    0
2    4
3    6
Name: x, dtype: int64

A: If you are passing a function that preserves an index, pandas tries to keep that information. But if you pass a function that removes all semblance of index information, group_keys=True allows you to keep that information.
Use this instead:
f = lambda df: df.reset_index(drop=True)

Then compare the two groupby objects:
gby.apply(lambda df: df.reset_index(drop=True))

gby_k.apply(lambda df: df.reset_index(drop=True))

A: The documentation here is convoluted, but the answer is simple (and applicable only to groupby followed by apply).
Condition 1: the result set has the same length as the original df.
1.a) If the result set is ordered by the group, group_keys=True will add the group key.
Ex: df.groupby(...).apply(lambda df: df[0] + df[1]) # results are ordered by their specific group
1.b) If the result set is ordered by the original index, there is no need for the library to add the group key, as the original order is still retained.
Ex: df.groupby(..).apply(lambda df: df + 1) # results are in the original order
Condition 2: the result set length is not the same as the original length; then the group key is always included.
Ex: df.groupby(...).apply(lambda x: x.mean()) # result length is changed/reduced; group_keys has no effect
What does the group_keys argument to pandas.groupby actually do?
In pandas.DataFrame.groupby, there is an argument group_keys, which I gather is supposed to do something relating to how group keys are included in the dataframe subsets. According to the documentation: group_keys : boolean, default True When calling apply, add group keys to index to identify pieces However, I can't really find any examples where group_keys makes an actual difference: import pandas as pd df = pd.DataFrame([[0, 1, 3], [3, 1, 1], [3, 0, 0], [2, 3, 3], [2, 1, 0]], columns=list('xyz')) gby = df.groupby('x') gby_k = df.groupby('x', group_keys=False) It doesn't make a difference in the output of apply: ap = gby.apply(pd.DataFrame.sum) # x y z # x # 0 0 1 3 # 2 4 4 3 # 3 6 1 1 ap_k = gby_k.apply(pd.DataFrame.sum) # x y z # x # 0 0 1 3 # 2 4 4 3 # 3 6 1 1 And even if you print out the grouped subsets as you go, the results are still identical: def printer_func(x): print(x) return x print('gby') print('--------------') gby.apply(printer_func) print('--------------') print('gby_k') print('--------------') gby_k.apply(printer_func) print('--------------') # gby # -------------- # x y z # 0 0 1 3 # x y z # 0 0 1 3 # x y z # 3 2 3 3 # 4 2 1 0 # x y z # 1 3 1 1 # 2 3 0 0 # -------------- # gby_k # -------------- # x y z # 0 0 1 3 # x y z # 0 0 1 3 # x y z # 3 2 3 3 # 4 2 1 0 # x y z # 1 3 1 1 # 2 3 0 0 # -------------- I considered the possibility that the default argument is actually True, but switching group_keys to explicitly False doesn't make a difference either. What exactly is this argument for? (Run on pandas version 0.18.1) Edit: I did find a way where group_keys changes behavior, based on this answer: import pandas as pd import numpy as np row_idx = pd.MultiIndex.from_product(((0, 1), (2, 3, 4))) d = pd.DataFrame([[4, 3], [1, 3], [1, 1], [2, 4], [0, 1], [4, 2]], index=row_idx) df_n = d.groupby(level=0).apply(lambda x: x.nlargest(2, [0])) # 0 1 # 0 0 2 4 3 # 3 1 3 # 1 1 4 4 2 # 2 2 4 df_k = d.groupby(level=0, group_keys=False).apply(lambda x: x.nlargest(2, [0])) # 0 1 # 0 2 4 3 # 3 1 3 # 1 4 4 2 # 2 2 4 However, I'm still not clear on the intelligible principle behind what group_keys is supposed to do. This behavior does not seem intuitive based on @piRSquared's answer.
[ "The group_keys parameter of groupby comes in handy during apply operations: with group_keys=True an additional index level corresponding to the grouped columns is created, and with group_keys=False it is left out, which matters especially when performing operations on individual columns.\nOne such instance:\nIn [21]: gby = df.groupby('x',group_keys=True).apply(lambda row: row['x'])\n\nIn [22]: gby\nOut[22]: \nx \n0 0 0\n2 3 2\n 4 2\n3 1 3\n 2 3\nName: x, dtype: int64\n\nIn [23]: gby_k = df.groupby('x', group_keys=False).apply(lambda row: row['x'])\n\nIn [24]: gby_k\nOut[24]: \n0 0\n3 2\n4 2\n1 3\n2 3\nName: x, dtype: int64\n\nOne of its intended applications could be to group by one of the levels of the hierarchy after converting the result to a multi-index dataframe object.\nIn [27]: gby.groupby(level='x').sum()\nOut[27]: \nx\n0 0\n2 4\n3 6\nName: x, dtype: int64\n\n", "If you are passing a function that preserves an index, pandas tries to keep that information. But if you pass a function that removes all semblance of index information, group_keys=True allows you to keep that information.\nUse this instead:\nf = lambda df: df.reset_index(drop=True)\n\nThen compare the two groupby objects:\ngby.apply(lambda df: df.reset_index(drop=True))\n\n\ngby_k.apply(lambda df: df.reset_index(drop=True))\n\n\n", "The documentation here is convoluted, but the answer is simple (and applicable only to groupby followed by apply).\nCondition 1: the result set has the same length as the original df.\n1.a) If the result set is ordered by the group, group_keys=True will add the group key.\nEx: df.groupby(...).apply(lambda df: df[0] + df[1]) # results are ordered by their specific group\n1.b) If the result set is ordered by the original index, there is no need for the library to add the group key, as the original order is still retained.\nEx: df.groupby(..).apply(lambda df: df + 1)\n# results are in the original order\nCondition 2: the result set length is not the same as the original length; then the group key is always included.\nEx: df.groupby(...).apply(lambda x: x.mean())\n# result length is changed/reduced; group_keys has no effect\n\n" ]
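A minimal sketch of the case where the flag visibly matters, mirroring the nlargest example from the question's edit (the applied function returns a different set of rows per group, so pandas must decide whether to stack the group key onto the index):

import pandas as pd

df = pd.DataFrame({'x': [0, 3, 3, 2, 2], 'y': [1, 1, 0, 3, 1]})

print(df.groupby('x', group_keys=True).apply(lambda g: g.nlargest(1, 'y')))
# index is a MultiIndex of (group key, original row label)
print(df.groupby('x', group_keys=False).apply(lambda g: g.nlargest(1, 'y')))
# index keeps only the original row labels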
[ 12, 6, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0038856583_pandas_python.txt
Q: Data Frame, pandas Trying to extract data from a single city in a dataset that contains data from different cities in the same column.
| City     | Temp |
| -------- | ---- |
| New York | Warm |
| Boston   | Cold |
| New York | Warm |
| Texas    | Cold |

When I run my code it doesn't include any data, just the header. Tried this code:
manhattan_df = complaints_df[complaints_df.Borough == "MANHATTAN"].loc[:, ['Complaint Type', 'Borough']]

manhattan_df

But as said, it only displays the header.
A: df = pd.DataFrame(dict(a=[1,2,3,4], b=[5,6,7,8], c=[1,2,3,4]))
sub_df = df.query('a == 1').loc[:,['b','c']]
sub_df
Data Frame, pandas
Trying to extract data from a single city in a dataset that contains data from different cities in the same column.
| City     | Temp |
| -------- | ---- |
| New York | Warm |
| Boston   | Cold |
| New York | Warm |
| Texas    | Cold |

When I run my code it doesn't include any data, just the header. Tried this code:
manhattan_df = complaints_df[complaints_df.Borough == "MANHATTAN"].loc[:, ['Complaint Type', 'Borough']]

manhattan_df

But as said, it only displays the header.
[ "df = pd.DataFrame(dict(a=[1,2,3,4], b=[5,6,7,8], c=[1,2,3,4]))\nsub_df = df.query('a == 1').loc[:,['b','c']]\nsub_df\n\n" ]
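Mapping that pattern back to the question's dataframe, as a sketch: an equality filter that returns only the header usually means the stored values don't match the literal exactly (case or stray whitespace). Normalizing before comparing is a cheap check; the column names are the ones from the question.

mask = complaints_df['Borough'].astype(str).str.strip().str.upper() == 'MANHATTAN'
manhattan_df = complaints_df.loc[mask, ['Complaint Type', 'Borough']]
print(manhattan_df.head())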
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074478923_dataframe_pandas_python.txt
Q: ModuleNotFoundError: No module named 'Crypto' Python Firebase I know there are similar questions asked, but I have read them and couldn't solve my problem.
I'm trying to import pyrebase; however, it gives me these error messages:
Traceback (most recent call last):
  File "c:\Users\yaman\OneDrive\Masaüstü\BİYOLOJİK ŞİFRELEME\JustSth.py", line 1, in <module>
    import pyrebase
  File "C:\Users\yaman\AppData\Local\Programs\Python\Python39\lib\site-packages\pyrebase\__init__.py", line 1, in <module>
    from .pyrebase import initialize_app
  File "C:\Users\yaman\AppData\Local\Programs\Python\Python39\lib\site-packages\pyrebase\pyrebase.py", line 23, in <module>
    from Crypto.PublicKey import RSA
ModuleNotFoundError: No module named 'Crypto'

I downloaded pycryptodome and it says Requirement Satisfied when I try to install it again. How can I solve this problem?
Thanks in advance!
A: Try:
from Crypto.PublicKey import *

or:
from Crypto import *

A: You installed the module in a different venv while you are in another. One way to resolve this is to copy and append the path of the installed module into your working env before you import it.
Note: You can locate the path when you try to install the module again; where it says Requirement Satisfied you will see the path as well.
import sys

sys.path.append("\paste\path\here") # path ends at "site-packages"

from Crypto.PublicKey import RSA
ModuleNotFoundError: No module named 'Crypto' Python Firebase
I know there are similar questions asked, but I have read them and couldn't solve my problem.
I'm trying to import pyrebase; however, it gives me these error messages:
Traceback (most recent call last):
  File "c:\Users\yaman\OneDrive\Masaüstü\BİYOLOJİK ŞİFRELEME\JustSth.py", line 1, in <module>
    import pyrebase
  File "C:\Users\yaman\AppData\Local\Programs\Python\Python39\lib\site-packages\pyrebase\__init__.py", line 1, in <module>
    from .pyrebase import initialize_app
  File "C:\Users\yaman\AppData\Local\Programs\Python\Python39\lib\site-packages\pyrebase\pyrebase.py", line 23, in <module>
    from Crypto.PublicKey import RSA
ModuleNotFoundError: No module named 'Crypto'

I downloaded pycryptodome and it says Requirement Satisfied when I try to install it again. How can I solve this problem?
Thanks in advance!
[ "Try:\nfrom Crypto.PublicKey import *\n\nor:\nfrom Crypto import *\n\n", "You installed the module in a different venv while you are in another. One way to resolve this is to copy and append the path of the installed module into your working env before you import it.\nNote: You can locate the path when you try to install the module again; where it says Requirement Satisfied you will see the path as well.\nimport sys\n\nsys.path.append(\"\\paste\\path\\here\") # path ends at \"site-packages\"\n\nfrom Crypto.PublicKey import RSA\n\n" ]
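A quick diagnostic that complements the second answer, as a sketch: ask the running interpreter where (or whether) it resolves the Crypto package, which immediately shows if the install landed in a different environment.

import importlib.util

spec = importlib.util.find_spec("Crypto")
print(spec.origin if spec else "Crypto is not importable from this interpreter")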
[ 0, 0 ]
[]
[]
[ "pycryptodome", "python" ]
stackoverflow_0074479814_pycryptodome_python.txt
Q: How can I use a JSON file such as a database to store new and old objects? I get JSON objects from one of the sites and I want to append these objects to a JSON file.
The task: I want to use a JSON file as a database to save all the information from the site I get objects from. The data I will show is broken into 2 sets, titled by date, like so:

first: the new data, which will appear in news.html
second: the old data, which will appear in old.html

The problem I'm facing is which file mode to use (r, a, w). Of course it will not be r, because I want data to be written when new data comes from the request; so I should use a or w, but if I use w it will overwrite all the data that already exists in the JSON file. And if I use a, I face 3 challenges on every request:

the main curly braces are repeated with each batch of new data appended to the file, but I need the new data to go inside the existing main braces.
the appended objects are not separated by a comma.
every request triggered by reloading the page appends the same objects again, so one object ends up repeated more than once.

So, my questions are: how can I append data and avoid the 3 problems defined above? Which file mode should I use?
import requests
import json
import datetime
import re

def response(request):
    now = datetime.datetime.now()
    date_now = "{}-{}-{}".format(now.year, now.month, now.day)
    url = "http://newsapi.org/v2/everything"
    params = {
        'q': 'bitcoin',
        'from': date_now,
        'sortBy': 'publishedAt',
        'apiKey': '1186d3b0ccf24e6a91ab9816de603b90'
    }
    response = requests.request("GET", url, params=params)
    return response

def index(request):
    now = datetime.datetime.now()
    date_now = "{}-{}-{}".format(now.year, now.month, now.day)
    res = response(request)
    # all news
    arr_data = []
    for news in res.json()['articles']:
        publishedAt = re.match("\d+-\d+-\d+", news['publishedAt'])
        rm_words = news['content'].split()[:-2] or None
        content = " ".join(rm_words)
        data = {
            publishedAt.group(): {
                "source": news['source'],
                "title": news['title'],
                "describe": news['description'],
                "url": news['url'],
                "urlImage": news['urlToImage'],
                "content": content
            }
        }
        arr_data.append(data)
    spec_data = []
    with open('data.json', "w+") as fp:
        json.dump(arr_data, fp, indent=4)
    for data in json.load(open("data.json", "r")):
        spec_data.append(data)
    context = {
        'data': spec_data,
        'date_now': date_now
    }
    return render(request, 'news/news.html', context)

def old_news(request):
    now = datetime.datetime.now()
    date_now = "{}-{}-{}".format(now.year, now.month, now.day)
    res = response(request)
    # all news
    arr_data = []
    for news in res.json()['articles']:
        publishedAt = re.match("\d+-\d+-\d+", news['publishedAt'])
        rm_words = news['content'].split()[:-2] or None
        content = " ".join(rm_words)
        data = {
            publishedAt.group(): {
                "source": news['source'],
                "title": news['title'],
                "describe": news['description'],
                "url": news['url'],
                "urlImage": news['urlToImage'],
                "content": content
            }
        }
        arr_data.append(data)
    spec_data = []
    with open('data.json', "w+") as fp:
        json.dump(arr_data, fp, indent=4)
    for data in json.load(open("data.json", "r")):
        spec_data.append(data)
    context = {
        'data': spec_data,
        'date_now': date_now
    }
    return render(request, 'news/old_news.html', context)

A: I suggest using both read and write modes to fulfill this task.
First you have to read the current content of the file using read mode and then store it in a variable.
with open('data.json') as json_file:
    json_data = json.load(json_file)

Next, update the variable with the values you need.
json_data['new_key'] = []
json_data['new_key'].append("1")

Finally, write the merged content back to the file.
with open('data.json', 'w') as outfile:
    json.dump(json_data, outfile)

Optional step if the JSON file may be empty:
import json
try:
    with open('data.json') as json_file:
        json_data = json.load(json_file)
except json.decoder.JSONDecodeError:
    with open('data.json', 'w') as outfile:
        json_data = {}
        json.dump(json_data, outfile)
How can I use a JSON file such as a database to store new and old objects?
I get JSON objects from one of the sites and I want to append these objects to a JSON file.
The task: I want to use a JSON file as a database to save all the information from the site I get objects from. The data I will show is broken into 2 sets, titled by date, like so:

first: the new data, which will appear in news.html
second: the old data, which will appear in old.html

The problem I'm facing is which file mode to use (r, a, w). Of course it will not be r, because I want data to be written when new data comes from the request; so I should use a or w, but if I use w it will overwrite all the data that already exists in the JSON file. And if I use a, I face 3 challenges on every request:

the main curly braces are repeated with each batch of new data appended to the file, but I need the new data to go inside the existing main braces.
the appended objects are not separated by a comma.
every request triggered by reloading the page appends the same objects again, so one object ends up repeated more than once.

So, my questions are: how can I append data and avoid the 3 problems defined above? Which file mode should I use?
import requests
import json
import datetime
import re

def response(request):
    now = datetime.datetime.now()
    date_now = "{}-{}-{}".format(now.year, now.month, now.day)
    url = "http://newsapi.org/v2/everything"
    params = {
        'q': 'bitcoin',
        'from': date_now,
        'sortBy': 'publishedAt',
        'apiKey': '1186d3b0ccf24e6a91ab9816de603b90'
    }
    response = requests.request("GET", url, params=params)
    return response

def index(request):
    now = datetime.datetime.now()
    date_now = "{}-{}-{}".format(now.year, now.month, now.day)
    res = response(request)
    # all news
    arr_data = []
    for news in res.json()['articles']:
        publishedAt = re.match("\d+-\d+-\d+", news['publishedAt'])
        rm_words = news['content'].split()[:-2] or None
        content = " ".join(rm_words)
        data = {
            publishedAt.group(): {
                "source": news['source'],
                "title": news['title'],
                "describe": news['description'],
                "url": news['url'],
                "urlImage": news['urlToImage'],
                "content": content
            }
        }
        arr_data.append(data)
    spec_data = []
    with open('data.json', "w+") as fp:
        json.dump(arr_data, fp, indent=4)
    for data in json.load(open("data.json", "r")):
        spec_data.append(data)
    context = {
        'data': spec_data,
        'date_now': date_now
    }
    return render(request, 'news/news.html', context)

def old_news(request):
    now = datetime.datetime.now()
    date_now = "{}-{}-{}".format(now.year, now.month, now.day)
    res = response(request)
    # all news
    arr_data = []
    for news in res.json()['articles']:
        publishedAt = re.match("\d+-\d+-\d+", news['publishedAt'])
        rm_words = news['content'].split()[:-2] or None
        content = " ".join(rm_words)
        data = {
            publishedAt.group(): {
                "source": news['source'],
                "title": news['title'],
                "describe": news['description'],
                "url": news['url'],
                "urlImage": news['urlToImage'],
                "content": content
            }
        }
        arr_data.append(data)
    spec_data = []
    with open('data.json', "w+") as fp:
        json.dump(arr_data, fp, indent=4)
    for data in json.load(open("data.json", "r")):
        spec_data.append(data)
    context = {
        'data': spec_data,
        'date_now': date_now
    }
    return render(request, 'news/old_news.html', context)
[ "I suggest using both read and write modes to fulfill this task.\nFirst you have to read the current content of the file using read mode and then store it in a variable.\nwith open('data.json') as json_file:\n    json_data = json.load(json_file)\n\nNext, update the variable with the values you need.\njson_data['new_key'] = []\njson_data['new_key'].append(\"1\")\n\nFinally, write the merged content back to the file.\nwith open('data.json', 'w') as outfile:\n    json.dump(json_data, outfile)\n\nOptional step if the JSON file may be empty:\nimport json\ntry:\n    with open('data.json') as json_file:\n        json_data = json.load(json_file)\nexcept json.decoder.JSONDecodeError:\n    with open('data.json', 'w') as outfile:\n        json_data = {}\n        json.dump(json_data, outfile)\n\n" ]
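Tying that read-modify-write pattern to the three problems in the question, a sketch of a small helper (the names path, items and the function itself are illustrative, not from any library): it loads the existing list, appends only objects not already stored, and rewrites the whole file, so the braces stay well-formed and page reloads don't duplicate entries.

import json
import os

def append_news(path, items):
    data = []
    if os.path.exists(path):
        with open(path) as fp:
            try:
                data = json.load(fp)
            except json.JSONDecodeError:
                data = []
    for item in items:
        if item not in data:  # skip objects already stored
            data.append(item)
    with open(path, 'w') as fp:
        json.dump(data, fp, indent=4)
    return data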
[ 1 ]
[ "About file handling:\nThere is pysonDB. That's a database based on JSON. Maybe it will help someone.\nP.S: Version 2 is available - pysonDB-v2\n" ]
[ -1 ]
[ "django", "json", "python", "python_requests" ]
stackoverflow_0065288343_django_json_python_python_requests.txt
Q: python : aggregate dataframe values by bin I have a dataset that looks like this:
| col A | col B |
|     1 |    20 |
|     3 |   123 |
|     7 |     2 |
...

I would like to compute the mean value of col B over each bin of col A. This would result in a new dataframe containing only one row per bin, with:
| mid value of the col A bin | avg value of col B over that bin |
python : aggregate dataframe values by bin
I have a dataset that looks like this:
| col A | col B |
|     1 |    20 |
|     3 |   123 |
|     7 |     2 |
...

I would like to compute the mean value of col B over each bin of col A. This would result in a new dataframe containing only one row per bin, with:
| mid value of the col A bin | avg value of col B over that bin |
[ "As you haven't specified the number of bins and their properties, let me illustrate what you may do with pandas.cut to the example data you provided:\nimport pandas as pd\n\n# reproduce your example data\ndf = pd.DataFrame({'col A': [1, 3, 7],\n 'col B': [20, 123, 2]})\n\n# suggest only 2 bins would be proper for 3 rows of data\ndf['col A bins'] = pd.cut(df['col A'], \n bins=2)\n\nOutput:\n# bins may be labeled as you like, not as automatic interval\n col A col B col A bins\n0 1 20 (0.994, 4.0]\n1 3 123 (0.994, 4.0]\n2 7 2 (4.0, 7.0]\n\nThen we may group the initial columns by the new bins, with col A aggregated to median (as from your new column names) and col B to mean, making it look as your expected result by renaming and dropping columns:\ndf.groupby('col A bins').agg({'col A': 'median',\n 'col B': 'mean'}\n ).rename(columns={'col A':'mid value of the col A bin',\n 'col B':'avg value of col B over that bin'}\n ).reset_index(drop=True)\n\nOutput:\n mid value of the col A bin avg value of col B over that bin\n0 2.0 71.5\n1 7.0 2.0\n\n" ]
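The "mid value of the bin" can also come straight from the bin objects: pd.cut yields pandas Interval values, and each Interval exposes a .mid property. A compact sketch on the question's sample data (bins=2 is an arbitrary choice, as in the answer):

import pandas as pd

df = pd.DataFrame({'col A': [1, 3, 7], 'col B': [20, 123, 2]})

out = df.groupby(pd.cut(df['col A'], bins=2))['col B'].mean().reset_index()
out['col A'] = out['col A'].apply(lambda iv: iv.mid)  # interval midpoint
out.columns = ['mid value of the col A bin', 'avg value of col B over that bin']
print(out)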
[ 1 ]
[]
[]
[ "binning", "dataframe", "pandas", "python" ]
stackoverflow_0074479504_binning_dataframe_pandas_python.txt