content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (lengths 35 to 137)
Q: How do you use a for loop to send out emails in Python using win32com I have a df which has contact details of several people, below is a test example of what it looks like: first_name Last_name email Steve Smith email1@outlook.com John Walker email2@outlook.com etc... In short, I want to use Python to send a customised email to each of the people in the df. Here is the code I've used so far: import win32com.client as win32 import pandas as pd df = pd.read_excel("test_emails.xlsx") for i in df.email: outlook = win32.Dispatch('outlook.application') mail = outlook.CreateItem(0) mail.Subject = 'Test email' mail.To = i mail.HTMLBody = r""" Dear recipient,<br><br> This is a test email. """ mail.Send() This works fine, in the sense that it sends the test email to everyone in the email column in the df. However, how could I make the emails more customisable for each recipient? For example, rather than "Dear recipient", to include the first name of each person instead as part of the for loop. A: You can iterate through the entire DataFrame row by row and use f-strings to achieve your goal. import win32com.client as win32 import pandas as pd df = pd.read_excel("test_emails.xlsx") for index, row in df.iterrows(): outlook = win32.Dispatch('outlook.application') mail = outlook.CreateItem(0) mail.Subject = 'Test email' mail.To = row['email'] mail.HTMLBody = f""" Dear {row['first_name']} {row['last_name']}, This is a test email. """ mail.Send()
How do you use a for loop to send out emails in Python using win32com
I have a df which has contact details of several people, below is a test example of what it looks like: first_name Last_name email Steve Smith email1@outlook.com John Walker email2@outlook.com etc... In short, I want to use Python to send a customised email to each of the people in the df. Here is the code I've used so far: import win32com.client as win32 import pandas as pd df = pd.read_excel("test_emails.xlsx") for i in df.email: outlook = win32.Dispatch('outlook.application') mail = outlook.CreateItem(0) mail.Subject = 'Test email' mail.To = i mail.HTMLBody = r""" Dear recipient,<br><br> This is a test email. """ mail.Send() This works fine, in the sense that it sends the test email to everyone in the email column in the df. However, how could I make the emails more customisable for each recipient? For example, rather than "Dear recipient", to include the first name of each person instead as part of the for loop.
[ "You can iterate through the entire DataFrame row by row and use f-strings to achieve your goal.\nimport win32com.client as win32\nimport pandas as pd\n\ndf = pd.read_excel(\"test_emails.xlsx\")\n\nfor index, row in df.iterrows():\n outlook = win32.Dispatch('outlook.application')\n mail = outlook.CreateItem(0)\n mail.Subject = 'Test email'\n mail.To = row['email']\n mail.HTMLBody = f\"\"\"\n Dear {row['first_name']} {row['last_name']},\n This is a test email.\n \"\"\"\n mail.Send() \n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "for_loop", "python", "win32com" ]
stackoverflow_0074521476_dataframe_for_loop_python_win32com.txt
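A small refinement to the accepted answer above: the Outlook COM object only needs to be dispatched once, outside the loop, and itertuples avoids carrying an index you never use. A minimal sketch, assuming the same workbook and the column names first_name and email from the question:

import win32com.client as win32
import pandas as pd

df = pd.read_excel("test_emails.xlsx")
outlook = win32.Dispatch('outlook.application')  # create the COM object once, not per row

for row in df.itertuples(index=False):
    mail = outlook.CreateItem(0)  # 0 = olMailItem
    mail.Subject = 'Test email'
    mail.To = row.email
    mail.HTMLBody = f"Dear {row.first_name},<br><br>This is a test email."
    mail.Send()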
Q: How to remove duplicates with different orders from a list? I made a special triangle (or whatever they're called). It works fine but a flaw is it prints out the same triangle in a different order. This is the code: SpecialTriangles = [] for i in range(15): for j in range(15): for k in range(15): if i**2 + j**2 == k**2: if i**2 + 0 != k**2: if 0 + j**2 != k**2: if 0 + 0 != k**2: SpecialTriangles.append([i, j, k]) print(SpecialTriangles) And this is what the output is: [[3, 4, 5], [4, 3, 5], [5, 12, 13], [6, 8, 10], [8, 6, 10], [12, 5, 13]] So I want this to print just one of a kind in ascending order so: [[3, 4, 5], [5, 12, 13], [6, 8, 10]] A: You're looking for itertools.combinations: from itertools import combinations for i, j, k in combinations(range(15), 3): # do your logic with i, j, k Just as you requested, combinations() will give each possible triplet just once. A: Here is one possible way with itertools.combinations. itertools.combinations returns a generator with each combination exactly once, then you check if your condition fits. import itertools out = list( filter( lambda x: x[0]**2 + x[1]**2 == x[2]**2, itertools.combinations([*range(15)], 3) ) ) print(out) [(3, 4, 5), (5, 12, 13), (6, 8, 10)] If you don't want to use filter, instead a list comprehension (due to readability) here is another way: out = [x for x in itertools.combinations([*range(15)],3) if x[0]**2 + x[1]**2 == x[2]**2] print(out) A: While the answer by joanis is correct and efficient, I think it would still be a good idea to understand how to avoid the duplicate values. With each nested for loop, you are starting back at 1 again and looping through to 15. However you would have already checked the values up to whatever value the for loop above is on. For example, say you are in the loop where i = 7, you would not need to start at k = 1 as the values for 1 have already been checked when i = 1 at the first iteration. e.g. This code: for i in range(3): for k in range(3): print(i, k) Outputs: 0 0 0 1 0 2 1 0 1 1 1 2 2 0 2 1 2 2 And as you can see, when i = 1, you only need to check the values of k > 1 as the rest have already been checked (0 1 is the same as 1 0) The improved code would therefore be: for i in range(3): for k in range(i, 3): print(i, k) Which outputs: 0 0 0 1 0 2 1 1 1 2 2 2 And these values will be unique So in your case, the improved nested loop should be: for i in range(15): for j in range(i, 15): for k in range(j, 15): I know this was a lot of info for a fairly simple question, but it can hopefully get you thinking about how you can make your code more efficient in the future!
How to remove duplicates with different orders from a list?
I made a special triangle (or whatever they're called). It works fine but a flaw is it prints out the same triangle in a different order. This is the code: SpecialTriangles = [] for i in range(15): for j in range(15): for k in range(15): if i**2 + j**2 == k**2: if i**2 + 0 != k**2: if 0 + j**2 != k**2: if 0 + 0 != k**2: SpecialTriangles.append([i, j, k]) print(SpecialTriangles) And this is what the output is: [[3, 4, 5], [4, 3, 5], [5, 12, 13], [6, 8, 10], [8, 6, 10], [12, 5, 13]] So I want this to print just one of a kind in ascending order so: [[3, 4, 5], [5, 12, 13], [6, 8, 10]]
[ "You're looking for itertools.combinations:\nfrom itertools import combinations\n\nfor i, j, k in combinations(range(15), 3):\n # do your logic with i, j, k\n\nJust as you requested, combinations() will give each possible triplet just once.\n", "Here is one possible way with itertools.combinations. itertools.combinations returns a generator with each combination exactly once, then you check if your condition fits.\nimport itertools\n\nout = list(\n filter(\n lambda x: x[0]**2 + x[1]**2 == x[2]**2, itertools.combinations([*range(15)], 3)\n )\n)\nprint(out)\n\n[(3, 4, 5), (5, 12, 13), (6, 8, 10)]\n\nIf you don't want to use filter, instead a list comprehension (due to readability) here is another way:\nout = [x for x in itertools.combinations([*range(15)],3) if x[0]**2 + x[1]**2 == x[2]**2]\nprint(out)\n\n", "While the answer by joanis is correct and efficient, I think it would still be a good idea to understand how to avoid the duplicate values.\nWith each nested for loop, you are starting back at 1 again and looping through to 15. However you would have already checked the values up to whatever value the for loop above is on.\nFor example, say you are in the loop where i = 7, you would not need to start at k = 1 as the values for 1 have already been checked when i = 1 at the first iteration.\ne.g. This code:\nfor i in range(3):\n for k in range(3):\n print(i, k)\n\nOutputs:\n0 0\n0 1\n0 2\n1 0\n1 1\n1 2\n2 0\n2 1\n2 2\n\nAnd as you can see, when i = 1, you only need to check the values of k > 1 as the rest have already been checked (0 1 is the same as 1 0)\nThe improved code would therefore be:\nfor i in range(3):\n for k in range(i, 3):\n print(i, k)\n\nWhich outputs:\n0 0\n0 1\n0 2\n1 1\n1 2\n2 2\n\nAnd these values will be unique\nSo in your case, the improved nested loop should be:\nfor i in range(15):\n for j in range(i, 15):\n for k in range(j, 15):\n\nI know this was a lot of info for a fairly simple question, but it can hopefully get you thinking about how you can make your code more efficient in the future!\n" ]
[ 6, 1, 1 ]
[ "As the other answers mention, you are looking for combinations of the three elements, i.e. collections of the three indexes irrespectively of their order.\nAs an alternative to leave your code more \"explicit\", you might order the triplet of indexes and append it only if they are not already in SpecialTriangles:\nSpecialTriangles = []\n\nfor i in range(15):\n for j in range(15):\n for k in range(15):\n if (i**2 + j**2) == k**2:\n if (i**2 + 0) != k**2:\n if (0 + j**2) != k**2: \n if (0 + 0) != k**2:\n ordered_triplet = sorted([i, j, k])\n if ordered_triplet not in SpecialTriangles:\n SpecialTriangles.append(ordered_triplet)\n\nprint(SpecialTriangles)\n\nHere's the result:\n[[3, 4, 5], [5, 12, 13], [6, 8, 10]]\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074521723_python.txt
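The non-zero checks and the deduplication can also be folded into a single list comprehension by starting each range one step up, combining the two answers above; a minimal sketch:

triangles = [[i, j, k]
             for i in range(1, 15)
             for j in range(i, 15)
             for k in range(j, 15)
             if i**2 + j**2 == k**2]
print(triangles)  # [[3, 4, 5], [5, 12, 13], [6, 8, 10]]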
Q: How can I keep checking every 30 seconds if multiple processes are alive in Python I have a program that is running 2 processes in parallel. After the processes are launched, I am trying to check every 30 seconds if the processes are still alive. Below is my pseudo code. Both processes take between 5-10 minutes. I checked that both processes ran successfully, but while the processes are alive it is not getting into the while loop. By the time it is in the while loop, the processes are not alive anymore processes=list() proc_1 = Process(target=target1) proc_1.start() processes.append(proc_1) proc_2 = Process(target=target2) proc_2.start() processes.append(proc_2) for process in processes: process.join() elapsed_time = 0 process_timeout = 120 # seconds while elapsed_time < process_timeout: is_any_process_alive = proc_1.is_alive() or proc_2.is_alive() if is_any_process_alive: time.sleep(30) elapsed_time += 30 else: break A: When you call: for process in processes: process.join() you are waiting for your two processes to finish before continuing on to your loop. Only after both are finished do you attempt to enter the while loop, but then immediately break as both have already finished. join should be used when you need to make sure a process has finished before moving on, such as requiring that a file has been written or a computation has completed. A common tripping point on what qualifies as "computation completed" is that if you are expecting data over a Queue, the underlying pipe may block if the buffer is full, so you should .get all expected data from that queue before you attempt to join as the process may still be blocking on .put. (TLDR) Don't wait for the process to be done before trying to get the output, as it sometimes can't finish until the output is received. It is also notable that unless you specify daemon=True for a process, it will attempt to .join at the end of the main script as part of cleanup. If you did specify daemon=True, instead .terminate will be called to kill the process right away when the main process is finished. There are some instances where the interpreter shuts down unexpectedly where orphan processes can continue to live past the parent, but that's generally when extension libraries have exceptions outside of python (like a segfault)
How can I keep checking every 30 seconds if multiple processes are alive in Python
I have a program that is running 2 processes in parallel. After the processes are launched, I am trying to check every 30 seconds if the processes are still alive. Below is my pseudo code. Both processes take between 5-10 minutes. I checked that both processes ran successfully, but while the processes are alive it is not getting into the while loop. By the time it is in the while loop, the processes are not alive anymore processes=list() proc_1 = Process(target=target1) proc_1.start() processes.append(proc_1) proc_2 = Process(target=target2) proc_2.start() processes.append(proc_2) for process in processes: process.join() elapsed_time = 0 process_timeout = 120 # seconds while elapsed_time < process_timeout: is_any_process_alive = proc_1.is_alive() or proc_2.is_alive() if is_any_process_alive: time.sleep(30) elapsed_time += 30 else: break
[ "When you call:\nfor process in processes:\n process.join()\n\nyou are waiting for your two processes to finish before continuing on to your loop. Only after both are finished do you attempt to enter the while loop, but then immediately break as both have already finished.\njoin should be used when you need to make sure a process has finished before moving on, such as requiring that a file has been written or a computation has completed. A common tripping point on what qualifies as \"computation completed\" is that if you are expecting data over a Queue, the underlying pipe may block if the buffer is full, so you should .get all expected data from that queue before you attempt to join as the process may still be blocking on .put. (TLDR) Don't wait for the process to be done before trying to get the output, as it sometimes can't finish until the output is received.\nIt is also notable that unless you specify daemon=True for a process, it will attempt to .join at the end of the main script as part of cleanup. If you did specify daemon=True, instead .terminate will be called to kill the process right away when the main process is finished. There are some instances where the interpreter shuts down unexpectedly where orphan processes can continue to live past the parent, but that's generally when extension libraries have exceptions outside of python (like a segfault)\n" ]
[ 1 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0074513150_multiprocessing_python.txt
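Putting the answer above into runnable form: poll is_alive() before joining, not after. A minimal sketch with placeholder targets standing in for the real 5-10 minute jobs:

import time
from multiprocessing import Process

def target1():
    time.sleep(4)  # stand-in for the real work

def target2():
    time.sleep(8)

if __name__ == '__main__':
    processes = [Process(target=target1), Process(target=target2)]
    for p in processes:
        p.start()

    elapsed, timeout = 0, 120
    while elapsed < timeout and any(p.is_alive() for p in processes):
        time.sleep(30)  # check every 30 seconds while work is still running
        elapsed += 30

    for p in processes:
        p.join()  # join only once monitoring is done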
Q: Trigger a Python function exactly on the minute I have a function that I want to trigger at every turn of the minute — at 00 seconds. It fires off a packet over the air to a dumb display that will be mounted on the wall. I know I can brute force it with a while loop but that seems a bit harsh. I have tried using sched but that ends up adding a second every minute. What are my options? A: You might try APScheduler, a cron-style scheduler module for Python. From their examples: from apscheduler.scheduler import Scheduler # Start the scheduler sched = Scheduler() sched.start() def job_function(): print "Hello World" sched.add_cron_job(job_function, second=0) will run job_function every minute. A: What if you measured how long it took your code to execute, and subtracted that from a sleep time of 60? import time while True: timeBegin = time.time() CODE(.....) timeEnd = time.time() timeElapsed = timeEnd - timeBegin time.sleep(60-timeElapsed) A: The simplest solution would be to register a timeout with the operating system to expire when you want it to. Now there are quite a few ways to do so with a blocking instruction and the best option depends on your implementation. Simplest way would be to use time.sleep(): import time current_time = time.time() time_to_sleep = 60 - (current_time % 60) time.sleep(time_to_sleep) This way you take the current time and calculate the amount of time you need to sleep (in seconds). Not millisecond accurate but close enough. A: APScheduler is the correct approach. The syntax has changed since the original answer, however. As of APScheduler 3.3.1: def fn(): print("Hello, world") from apscheduler.schedulers.background import BackgroundScheduler scheduler = BackgroundScheduler() scheduler.start() scheduler.add_job(fn, trigger='cron', second=0) A: You can try Threading.Timer See this Example from threading import Timer def job_function(): Timer(60, job_function).start () print("Running job_funtion") It will print "Running job_function" every Minute Edit: If we are critical about the time at which it should run from threading import Timer from time import time def job_function(): Timer(int(time()/60)*60+60 - time(), job_function).start () print("Running job_funtion") It will run exactly at 0th second of every minute. A: The syntax has been changed, so in APScheduler of version 3.6.3 (Released: Nov 5, 2019) use the following snippet: from apscheduler.schedulers.blocking import BlockingScheduler from apscheduler.triggers.cron import CronTrigger def fn(): print('Hello, world!') sched = BlockingScheduler() # Execute fn() at the start of each minute. sched.add_job(fn, trigger=CronTrigger(second=00)) sched.start() A: The Python time module is usually what I use for events like this. It has a method called sleep(t), where t equals the time in seconds you want to delay. Combined with a while loop, you can get what you're looking for: import time while condition: time.sleep(60) f(x) A: May use this example: def do(): print("do do bi do") while True: alert_minutes= [15,30,45,0] now=time.localtime(time.time()) if now.tm_min in alert_minutes: do() time.sleep(60)
Trigger a Python function exactly on the minute
I have a function that I want to trigger at every turn of the minute — at 00 seconds. It fires off a packet over the air to a dumb display that will be mounted on the wall. I know I can brute force it with a while loop but that seems a bit harsh. I have tried using sched but that ends up adding a second every minute. What are my options?
[ "You might try APScheduler, a cron-style scheduler module for Python.\nFrom their examples:\nfrom apscheduler.scheduler import Scheduler\n\n# Start the scheduler\nsched = Scheduler()\nsched.start()\n\ndef job_function():\n print \"Hello World\"\n\nsched.add_cron_job(job_function, second=0)\n\nwill run job_function every minute.\n", "What if you measured how long it took your code to execute, and subtracted that from a sleep time of 60?\nimport time\n\nwhile True:\n timeBegin = time.time()\n\n CODE(.....)\n\n timeEnd = time.time()\n timeElapsed = timeEnd - timeBegin\n time.sleep(60-timeElapsed)\n\n", "The simplest solution would be to register a timeout with the operating system to expire when you want it to.\nNow there are quite a few ways to do so with a blocking instruction and the best option depends on your implementation. Simplest way would be to use time.sleep():\nimport time\n\ncurrent_time = time.time()\ntime_to_sleep = 60 - (current_time % 60)\ntime.sleep(time_to_sleep)\n\nThis way you take the current time and calculate the amount of time you need to sleep (in seconds). Not millisecond accurate but close enough.\n", "APScheduler is the correct approach. The syntax has changed since the original answer, however.\nAs of APScheduler 3.3.1:\ndef fn():\n print(\"Hello, world\")\n\nfrom apscheduler.schedulers.background import BackgroundScheduler\n\nscheduler = BackgroundScheduler()\nscheduler.start()\nscheduler.add_job(fn, trigger='cron', second=0)\n\n", "You can try Threading.Timer\nSee this Example\nfrom threading import Timer\n\ndef job_function():\n Timer(60, job_function).start ()\n print(\"Running job_funtion\")\n\nIt will print \"Running job_function\" every Minute\nEdit:\nIf we are critical about the time at which it should run\nfrom threading import Timer\nfrom time import time\n\n\ndef job_function():\n Timer(int(time()/60)*60+60 - time(), job_function).start ()\n print(\"Running job_funtion\")\n\nIt will run exactly at 0th second of every minute.\n", "The syntax has been changed, so in APScheduler of version 3.6.3 (Released: Nov 5, 2019) use the following snippet:\nfrom apscheduler.schedulers.blocking import BlockingScheduler\nfrom apscheduler.triggers.cron import CronTrigger\n\ndef fn():\n print('Hello, world!')\n\n\nsched = BlockingScheduler()\n\n# Execute fn() at the start of each minute.\nsched.add_job(fn, trigger=CronTrigger(second=00))\nsched.start()\n\n", "The Python time module is usually what I use for events like this. It has a method called sleep(t), where t equals the time in seconds you want to delay. \nCombined with a while loop, you can get what you're looking for:\nimport time\n\nwhile condition:\n time.sleep(60)\n f(x)\n\n", "May use this example:\n def do():\n print(\"do do bi do\")\n\n while True:\n alert_minutes= [15,30,45,0]\n now=time.localtime(time.time())\n if now.tm_min in alert_minutes:\n do()\n time.sleep(60)\n\n" ]
[ 7, 5, 4, 4, 2, 2, 0, 0 ]
[ "you could use a while loop and sleep to not eat up the processor too much\n" ]
[ -1 ]
[ "python", "schedule", "time" ]
stackoverflow_0019645720_python_schedule_time.txt
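The modulo approach above extends naturally to a drift-free loop: sleeping the remainder of the current minute on every pass keeps each wake-up aligned to :00 no matter how long the job itself takes, assuming the job finishes within the minute. A minimal sketch:

import time

def run_every_minute(job):
    while True:
        time.sleep(60 - time.time() % 60)  # sleep until the next :00
        job()

run_every_minute(lambda: print("tick", time.strftime("%H:%M:%S")))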
Q: Elastic search testing on Production Database (Read only ES use case) How do you test ES data without mocking, in a way that is smart enough to figure out what the result at the top should be? I googled and found that most of the libraries mock data, but as we have evolving ES indices and logic that changes day by day, what is the best practice to follow? A: You could configure your local elasticsearch to connect to the production DB when it creates the index.
Elastic search testing on Production Database (Read only ES use case)
How do you test ES data without mocking, in a way that is smart enough to figure out what the result at the top should be? I googled and found that most of the libraries mock data, but as we have evolving ES indices and logic that changes day by day, what is the best practice to follow?
[ "You could configure your local elasticsearch to connect to the production DB when it creates the index.\n" ]
[ 0 ]
[]
[]
[ "elasticsearch", "pytest", "python", "testing" ]
stackoverflow_0074522257_elasticsearch_pytest_python_testing.txt
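One way to make the suggestion above concrete is a read-only pytest fixture pointed at the live cluster. A sketch assuming the elasticsearch-py 8.x client; the URL, index name, and query are placeholder assumptions to adapt:

import pytest
from elasticsearch import Elasticsearch

@pytest.fixture(scope="session")
def es():
    # read-only client against the live cluster (placeholder URL)
    return Elasticsearch("http://prod-es.example.com:9200")

def test_top_hit_is_relevant(es):
    resp = es.search(index="products", query={"match": {"title": "coffee"}}, size=3)
    hits = resp["hits"]["hits"]
    assert hits, "expected at least one hit"
    assert "coffee" in hits[0]["_source"]["title"].lower()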
Q: Rust serializes differently than Python? || Change endianness in bincode I'm new to Rust but not to programming. So I try to send data to a server via TCPStream (the server, running on a robot, is able to respond at 500Hz) I got a working example python-program from the company which builds the robots. The problem: After a couple of days and reading the documentation over and over again I measured a specific situation with Wireshark. So I know that the working python-program sends 00 05 56 00 02 to the server which is correct 05 = packet size, 56 = the requested action to do, 02 = the data to be requested my rust program sends 05 00 56 02 00 and I can't figure out why mod rtde_setup; mod package_types; use std::io::{Read, Write}; use std::mem; use std::net::{TcpStream}; // use serde_derive::{Serialize, Deserialize}; use crate::package_types::*; use crate::rtde_setup::*; fn main() { #[allow(dead_code)] static ADDR: &str = "10.0.0.164"; static PORT: &str = "30004"; let mut set_protocol_version = RequestProtocolVersion { header: Header { package_size: 5, package_type: RTDE_REQUEST_PROTOCOL_VERSION, }, protocol_version: 2, }; //Python sends 00 05 56 00 02 for 'V' //Rust sends 05 00 56 02 00 for 'V' //header is 3 bytes //set_protocol_version.protocol_version is 2 let mut payload = (set_protocol_version); match TcpStream::connect(format!("{}:{}", ADDR, PORT)) { Ok(mut stream) => { stream.set_nodelay(true).expect("set_nodelay call failed"); println!("Successfully connected to server {} on port {}", ADDR, PORT); let payload_byte: Vec<u8> = bincode::serialize(&payload).unwrap(); stream.write(&payload_byte).unwrap(); println!("Sent package (byte): {:?}", &payload_byte); } Err(e) => { println!("Failed to connect: {}", e); } } println!("Terminated."); } def negotiate_protocol_version(self): cmd = Command.RTDE_REQUEST_PROTOCOL_VERSION payload = struct.pack('>H', RTDE_PROTOCOL_VERSION_2) success = self.__sendAndReceive(cmd, payload) if success: self.__protocolVersion = RTDE_PROTOCOL_VERSION_2 return success def __sendAndReceive(self, cmd, payload=b''): if self.__sendall(cmd, payload): return self.__recv(cmd) else: return None [edit] okay I tried to change the endianness with let options = bincode::DefaultOptions::new().with_big_endian(); but the output is still the same... so the options are: I'm not able to understand the documentation; it doesn't work; or you're right that the default endianness is little... so I hoped this would help. maybe I'm just doing it wrong? [edit] worked! thank you I inserted let serialize_options = bincode::DefaultOptions::new() .with_fixint_encoding() .with_big_endian(); and edited let payload_byte: Vec<u8> = serialize_options.serialize(&payload).unwrap(); now I sent 00 05 56 00 02 and got an answer from the server next move will be deserializing with the new knowledge A: Inserting let serialize_options = bincode::DefaultOptions::new() .with_fixint_encoding() .with_big_endian(); and editing let payload_byte: Vec<u8> = serialize_options.serialize(&payload).unwrap(); worked for me Big thanks to you
Rust serializes differently than Python? || Change endianness in bincode
I'm new to Rust but not to programming. So I try to send data to a server via TCPStream (the server, running on a robot, is able to respond at 500Hz) I got a working example python-program from the company which builds the robots. The problem: After a couple of days and reading the documentation over and over again I measured a specific situation with Wireshark. So I know that the working python-program sends 00 05 56 00 02 to the server which is correct 05 = packet size, 56 = the requested action to do, 02 = the data to be requested my rust program sends 05 00 56 02 00 and I can't figure out why mod rtde_setup; mod package_types; use std::io::{Read, Write}; use std::mem; use std::net::{TcpStream}; // use serde_derive::{Serialize, Deserialize}; use crate::package_types::*; use crate::rtde_setup::*; fn main() { #[allow(dead_code)] static ADDR: &str = "10.0.0.164"; static PORT: &str = "30004"; let mut set_protocol_version = RequestProtocolVersion { header: Header { package_size: 5, package_type: RTDE_REQUEST_PROTOCOL_VERSION, }, protocol_version: 2, }; //Python sends 00 05 56 00 02 for 'V' //Rust sends 05 00 56 02 00 for 'V' //header is 3 bytes //set_protocol_version.protocol_version is 2 let mut payload = (set_protocol_version); match TcpStream::connect(format!("{}:{}", ADDR, PORT)) { Ok(mut stream) => { stream.set_nodelay(true).expect("set_nodelay call failed"); println!("Successfully connected to server {} on port {}", ADDR, PORT); let payload_byte: Vec<u8> = bincode::serialize(&payload).unwrap(); stream.write(&payload_byte).unwrap(); println!("Sent package (byte): {:?}", &payload_byte); } Err(e) => { println!("Failed to connect: {}", e); } } println!("Terminated."); } def negotiate_protocol_version(self): cmd = Command.RTDE_REQUEST_PROTOCOL_VERSION payload = struct.pack('>H', RTDE_PROTOCOL_VERSION_2) success = self.__sendAndReceive(cmd, payload) if success: self.__protocolVersion = RTDE_PROTOCOL_VERSION_2 return success def __sendAndReceive(self, cmd, payload=b''): if self.__sendall(cmd, payload): return self.__recv(cmd) else: return None [edit] okay I tried to change the endianness with let options = bincode::DefaultOptions::new().with_big_endian(); but the output is still the same... so the options are: I'm not able to understand the documentation; it doesn't work; or you're right that the default endianness is little... so I hoped this would help. maybe I'm just doing it wrong? [edit] worked! thank you I inserted let serialize_options = bincode::DefaultOptions::new() .with_fixint_encoding() .with_big_endian(); and edited let payload_byte: Vec<u8> = serialize_options.serialize(&payload).unwrap(); now I sent 00 05 56 00 02 and got an answer from the server next move will be deserializing with the new knowledge
[ "Inserting\n let serialize_options = bincode::DefaultOptions::new()\n .with_fixint_encoding()\n .with_big_endian();\n\nand editing\n let payload_byte: Vec<u8> = \n serialize_options.serialize(&payload).unwrap();\n\nworked for me\nBig thanks to you\n" ]
[ 0 ]
[]
[]
[ "python", "rust", "serialization", "tcp" ]
stackoverflow_0074508447_python_rust_serialization_tcp.txt
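The byte layout the server expects can be reproduced in Python with struct, which makes explicit the big-endian, fixed-width encoding that bincode needed; a minimal sketch:

import struct

# RTDE request: big-endian u16 packet size and u8 command ('V' = 0x56),
# followed by a big-endian u16 protocol version as the payload
packet = struct.pack('>HB', 5, 0x56) + struct.pack('>H', 2)
print(packet.hex(' '))  # 00 05 56 00 02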
Q: python list comprehension in dictionary def square100(): d = {f"{x}" : f"{x**2}" for x in range(101)} print(d) if __name__ == "__main__": square100() this function returns the values in ascending order. def square100(): d = {f"{x} : {x**2}" for x in range(101)} print(d) if __name__ == "__main__": square100() but this function, which should do the same thing, shows them in a random order. Does anyone know why? nothing to say here A: The latter is a set comprehension, not a dict comprehension (neither one is a list comprehension); the difference is that there is no : (at top level, outside string quotes and the like) separating a key from a value in a set literal or comprehension, while there is one in a dict literal or comprehension. sets have arbitrary order (effectively random for strings; it will change between different runs of Python, and can change even within a single run of Python based on the order in which items are added and removed), while dicts (in 3.6 as an implementation detail, and in 3.7+ as a language guarantee) are insertion-ordered. So your first bit of code (a dict comprehension) retains order, while the latter, based on sets, does not.
python list comprehension in dictionary
def square100(): d = {f"{x}" : f"{x**2}" for x in range(101)} print(d) if __name__ == "__main__": square100() this function returns the values in ascending order. def square100(): d = {f"{x} : {x**2}" for x in range(101)} print(d) if __name__ == "__main__": square100() but this function, which should do the same thing, shows them in a random order. Does anyone know why? nothing to say here
[ "The latter is a set comprehension, not a dict comprehension (neither one is a list comprehension); the difference is that there is no : (at top level, outside string quotes and the like) separating a key from a value in a set literal or comprehension, while there is one in a dict literal or comprehension.\nsets have arbitrary order (effectively random for strings; it will change between different runs of Python, and can change even within a single run of Python based on the order in which items are added and removed), while dicts (in 3.6 as an implementation detail, and in 3.7+ as a language guarantee) are insertion-ordered. So your first bit of code (a dict comprehension) retains order, while the latter, based on sets, does not.\n" ]
[ 2 ]
[]
[]
[ "dictionary", "dictionary_comprehension", "python" ]
stackoverflow_0074522323_dictionary_dictionary_comprehension_python.txt
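Seeing the two comprehensions side by side makes the difference concrete:

d = {f"{x}": f"{x**2}" for x in range(5)}  # dict: the ':' separates key and value
s = {f"{x} : {x**2}" for x in range(5)}    # set: the ':' is just text inside one string

print(d)  # {'0': '0', '1': '1', '2': '4', '3': '9', '4': '16'} -- insertion order
print(s)  # e.g. {'2 : 4', '0 : 0', '4 : 16', ...} -- arbitrary order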
Q: Scraping news title from a page with bs4 in python I was trying to scrape the "entry-title" of the last news on the site "https://www.abafg.it/category/avvisi/" and prints [ ] instead, what am i doing the wrong way? The result of what the code returns instead of the "entry-title" of the page i want to scrape the info I tried to scrape the class "entry-title" to let me save the title, the link of where that news leads and the date of publish A: The entry-title class is not of the link a tag, but of the h2 wrapped around it. You can use names = [h.a for h in soup.find_all('h2', class_='entry-title')] But I think using CSS selectors would be better here names = soup.select('h2.entry-title > a[href]') will select any a tag with a href attribute and with a h2 parent of class entry-title. Then, for a in names: print(a.get_text().strip(), a.get('href')) will print AVVISO LEZIONI DI SCULTURA : PROF.BORRELLI https://www.abafg.it/avviso-lezioni-di-scultura-prof-borrelli/ ORARIO DELLE LEZIONI A.A.2022/2023 IN VIGORE DAL 21 NOVEMBRE 2022 https://www.abafg.it/orario-delle-lezioni-a-a-2022-2023-in-vigore-dal-21-novembre-2022/ PROROGA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 https://www.abafg.it/proroga-bando-affidamenti-interni-d-d-n-3-del-4-11-2022/ D.D. n. 7 del 15.11.2022 DECRETO GRADUATORIA PROVVISORIA ABPR19 https://www.abafg.it/d-d-n-7-del-15-11-2022-decreto-graduatoria-provvisoria-abpr19/ D.D. n. 5 DEL 10.11.2022 DECRETO DI NOMINA COMMISSIONE ABPR19 https://www.abafg.it/d-d-n-5-del-10-11-2022-decreto-di-nomina-commissione-abpr19/ RIAPERTURA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 https://www.abafg.it/riapertura-bando-affidamenti-interni-d-d-n-4-del-4-11-2022/ D.D.81 del 26.10.2022 GRADUATORIA DEFINITIVA ABST48 STORIA DELLE ARTI APPLICATE https://www.abafg.it/d-d-81-del-26-10-2022-graduatoria-definitiva-abst48-storia-delle-arti-applicate/ AVVISO PRESENTAZIONE DOMANDE CULTORE DELLA MATERIA A.A.22.23-SCADENZA 11.11.2022 https://www.abafg.it/avviso-presentazione-domande-cultore-della-materia-a-a-22-23-scadenza-11-11-2022/ D.D. N.78 DEL 19/10/2022 BANDO GRADUATORIE D’ISTITUTO-SCADENZA 9/11/2022. https://www.abafg.it/d-d-n-78-bando-graduatorie-distituto-scadenza-9-11-2022/ ORARIO PROVVISIORIO DELLE LEZIONI A.A. 2022/2023: TRIENNIO E BIENNIO https://www.abafg.it/orario-provvisiorio-delle-lezioni-a-a-2022-2023-triennio-e-biennio/ Added EDIT: to save the printed text into a file, you could first save it as one string with .join first asText = '\n'.join([f'{a.get_text().strip()} {a.get("href")}' for a in names]) and then you could save it with with open('./resources/titles.txt', 'w', encoding='utf-8') as f: f.write(asText) If you want something more visuals-friendly, I suggest using pandas asDF = pandas.DataFrame([{ 'title': a.get_text().strip(), 'link': a.get('href') } for a in names]) asText = asDF.to_markdown(index=False) and now asText looks like | title | link | |:------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------| | ORARIO DELLE LEZIONI A.A.2022/2023 IN VIGORE DAL 21 NOVEMBRE 2022 | https://www.abafg.it/orario-delle-lezioni-a-a-2022-2023-in-vigore-dal-21-novembre-2022/ | | PROROGA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 | https://www.abafg.it/proroga-bando-affidamenti-interni-d-d-n-3-del-4-11-2022/ | | D.D. n. 7 del 15.11.2022 DECRETO GRADUATORIA PROVVISORIA ABPR19 | https://www.abafg.it/d-d-n-7-del-15-11-2022-decreto-graduatoria-provvisoria-abpr19/ | | D.D. n. 5 DEL 10.11.2022 DECRETO DI NOMINA COMMISSIONE ABPR19 | https://www.abafg.it/d-d-n-5-del-10-11-2022-decreto-di-nomina-commissione-abpr19/ | | RIAPERTURA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 | https://www.abafg.it/riapertura-bando-affidamenti-interni-d-d-n-4-del-4-11-2022/ | | D.D.81 del 26.10.2022 GRADUATORIA DEFINITIVA ABST48 STORIA DELLE ARTI APPLICATE | https://www.abafg.it/d-d-81-del-26-10-2022-graduatoria-definitiva-abst48-storia-delle-arti-applicate/ | | AVVISO PRESENTAZIONE DOMANDE CULTORE DELLA MATERIA A.A.22.23-SCADENZA 11.11.2022 | https://www.abafg.it/avviso-presentazione-domande-cultore-della-materia-a-a-22-23-scadenza-11-11-2022/ | | D.D. N.78 DEL 19/10/2022 BANDO GRADUATORIE D’ISTITUTO-SCADENZA 9/11/2022. | https://www.abafg.it/d-d-n-78-bando-graduatorie-distituto-scadenza-9-11-2022/ | | ORARIO PROVVISIORIO DELLE LEZIONI A.A. 2022/2023: TRIENNIO E BIENNIO | https://www.abafg.it/orario-provvisiorio-delle-lezioni-a-a-2022-2023-triennio-e-biennio/ | | GRADUATORIA DEFINITIVA ABST47 STILE,STORIA DELL’ARTE E DEL COSTUME | https://www.abafg.it/graduatoria-definitiva-abst47-stilestoria-dellarte-e-del-costume/ | And then, instead of TXT, you could also save it as CSV with asDF.to_csv('./resources/titles.csv', index=False) so that you can view it as a spreadsheet
Scraping news title from a page with bs4 in python
I was trying to scrape the "entry-title" of the last news on the site "https://www.abafg.it/category/avvisi/" and prints [ ] instead, what am i doing the wrong way? The result of what the code returns instead of the "entry-title" of the page i want to scrape the info I tried to scrape the class "entry-title" to let me save the title, the link of where that news leads and the date of publish
[ "The entry-title class is not of the link a tag, but of the h2 wrapped around it.\nYou can use\nnames = [h.a for h in soup.find_all('h2', class_='entry-title')]\n\nBut I think using CSS selectors would be better here\nnames = soup.select('h2.entry-title > a[href]')\n\nwill select any a tag with a href attribute and with a h2 parent of class entry-title.\n\nThen,\nfor a in names: print(a.get_text().strip(), a.get('href'))\n\nwill print\nAVVISO LEZIONI DI SCULTURA : PROF.BORRELLI https://www.abafg.it/avviso-lezioni-di-scultura-prof-borrelli/\nORARIO DELLE LEZIONI A.A.2022/2023 IN VIGORE DAL 21 NOVEMBRE 2022 https://www.abafg.it/orario-delle-lezioni-a-a-2022-2023-in-vigore-dal-21-novembre-2022/\nPROROGA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 https://www.abafg.it/proroga-bando-affidamenti-interni-d-d-n-3-del-4-11-2022/\nD.D. n. 7 del 15.11.2022 DECRETO GRADUATORIA PROVVISORIA ABPR19 https://www.abafg.it/d-d-n-7-del-15-11-2022-decreto-graduatoria-provvisoria-abpr19/\nD.D. n. 5 DEL 10.11.2022 DECRETO DI NOMINA COMMISSIONE ABPR19 https://www.abafg.it/d-d-n-5-del-10-11-2022-decreto-di-nomina-commissione-abpr19/\nRIAPERTURA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 https://www.abafg.it/riapertura-bando-affidamenti-interni-d-d-n-4-del-4-11-2022/\nD.D.81 del 26.10.2022 GRADUATORIA DEFINITIVA ABST48 STORIA DELLE ARTI APPLICATE https://www.abafg.it/d-d-81-del-26-10-2022-graduatoria-definitiva-abst48-storia-delle-arti-applicate/\nAVVISO PRESENTAZIONE DOMANDE CULTORE DELLA MATERIA A.A.22.23-SCADENZA 11.11.2022 https://www.abafg.it/avviso-presentazione-domande-cultore-della-materia-a-a-22-23-scadenza-11-11-2022/\nD.D. N.78 DEL 19/10/2022 BANDO GRADUATORIE D’ISTITUTO-SCADENZA 9/11/2022. https://www.abafg.it/d-d-n-78-bando-graduatorie-distituto-scadenza-9-11-2022/\nORARIO PROVVISIORIO DELLE LEZIONI A.A. 2022/2023: TRIENNIO E BIENNIO https://www.abafg.it/orario-provvisiorio-delle-lezioni-a-a-2022-2023-triennio-e-biennio/\n\n\n\nAdded EDIT: to save the printed text into a file, you could first save it as one string with .join first\nasText = '\\n'.join([f'{a.get_text().strip()} {a.get(\"href\")}' for a in names])\n\nand then you could save it with\nwith open('./resources/titles.txt', 'w', encoding='utf-8') as f: \n f.write(asText)\n\n\nIf you want something more visuals-friendly, I suggest using pandas\nasDF = pandas.DataFrame([{\n 'title': a.get_text().strip(), 'link': a.get('href')\n} for a in names])\nasText = asDF.to_markdown(index=False)\n\nand now asText looks like\n| title | link |\n|:------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------|\n| ORARIO DELLE LEZIONI A.A.2022/2023 IN VIGORE DAL 21 NOVEMBRE 2022 | https://www.abafg.it/orario-delle-lezioni-a-a-2022-2023-in-vigore-dal-21-novembre-2022/ |\n| PROROGA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 | https://www.abafg.it/proroga-bando-affidamenti-interni-d-d-n-3-del-4-11-2022/ |\n| D.D. n. 7 del 15.11.2022 DECRETO GRADUATORIA PROVVISORIA ABPR19 | https://www.abafg.it/d-d-n-7-del-15-11-2022-decreto-graduatoria-provvisoria-abpr19/ |\n| D.D. n. 5 DEL 10.11.2022 DECRETO DI NOMINA COMMISSIONE ABPR19 | https://www.abafg.it/d-d-n-5-del-10-11-2022-decreto-di-nomina-commissione-abpr19/ |\n| RIAPERTURA BANDO AFFIDAMENTI INTERNI D.D. N. 3 DEL 4.11.2022 | https://www.abafg.it/riapertura-bando-affidamenti-interni-d-d-n-4-del-4-11-2022/ |\n| D.D.81 del 26.10.2022 GRADUATORIA DEFINITIVA ABST48 STORIA DELLE ARTI APPLICATE | https://www.abafg.it/d-d-81-del-26-10-2022-graduatoria-definitiva-abst48-storia-delle-arti-applicate/ |\n| AVVISO PRESENTAZIONE DOMANDE CULTORE DELLA MATERIA A.A.22.23-SCADENZA 11.11.2022 | https://www.abafg.it/avviso-presentazione-domande-cultore-della-materia-a-a-22-23-scadenza-11-11-2022/ |\n| D.D. N.78 DEL 19/10/2022 BANDO GRADUATORIE D’ISTITUTO-SCADENZA 9/11/2022. | https://www.abafg.it/d-d-n-78-bando-graduatorie-distituto-scadenza-9-11-2022/ |\n| ORARIO PROVVISIORIO DELLE LEZIONI A.A. 2022/2023: TRIENNIO E BIENNIO | https://www.abafg.it/orario-provvisiorio-delle-lezioni-a-a-2022-2023-triennio-e-biennio/ |\n| GRADUATORIA DEFINITIVA ABST47 STILE,STORIA DELL’ARTE E DEL COSTUME | https://www.abafg.it/graduatoria-definitiva-abst47-stilestoria-dellarte-e-del-costume/ |\n\nAnd then, instead of TXT, you could also save it as CSV with\nasDF.to_csv('./resources/titles.csv', index=False)\n\nso that you can view it as a spreadsheet\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "screen_scraping", "wordpress" ]
stackoverflow_0074521981_beautifulsoup_python_screen_scraping_wordpress.txt
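The asker also wanted the publish date, which the answer above does not cover. Iterating per article keeps each title paired with its own date; the time-tag selector is an assumption about this WordPress theme's markup, so it may need adjusting:

import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get('https://www.abafg.it/category/avvisi/').text, 'html.parser')
for article in soup.select('article'):
    a = article.select_one('h2.entry-title > a[href]')
    date = article.select_one('time')  # assumption: the theme renders dates in a <time> tag
    if a:
        print(a.get_text(strip=True),
              a.get('href'),
              date.get_text(strip=True) if date else 'no date found')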
Q: How to create a python script to parse a xlsx spreadsheet file and generate sql statements? I have a xlsx file with two columns (id, meal) and 100 rows of data, and I want to parse the data to generate a notepad file that has sql update statements. Id Meal 12345 Child 23456 Adult 34567 Senior 34599 Senior I'm unsure on how to implement if/else/else if statements and add data from the xlsx file to generate sql statements to add into a notepad file. If the meal is 'Child', an example of the sql script generated would be update system.user set meal_name = 'Child', meal_price = 'Child' where customer_id = '12345'; If the meal is 'Adult', an example of the sql script generated would be update system.user set meal_name = 'Adult', meal_price = 'Adult' where customer_id = '23456'; If the meal is 'Senior', an example of the sql script generated would be update system.user set meal_name = 'Senior', where customer_id = '34567'; Anything to help would be very much appreciated, it's my first experience with Python so I'm unsure on how to get started. This is the current code I have which doesn't have much, I'm just not sure how to get started import openpyxl from pathlib import Path xlsx_file = Path('CustomerData', 'customer_data.xlsx') wb_obj = openpyxl.load_workbook(xlsx_file) sheet = wb_obj.active col_names = [] for column in sheet.iter_cols(1, sheet.max_column): col_names.append(column[0].value) print(col_names) A: Using the pandas library, I think we can achieve what you want like this: import pandas as pd df = pd.read_excel(path) # read our dataframe from excel all_statements = [] # initialize an empty list to append to for row in df.itertuples(index=False): # loop over each row statement = f"update system.user set meal_name = '{row.Meal}', meal_price = '{row.Meal}' where customer_id = '{row.Id}';" # create a statement for that row using f-strings all_statements.append(statement) # append the statement for this row to our list of all_statements with open(my_text_file, 'a') as file: # open in append mode, which will add new lines to the end of an existing file file.write('\n'.join(all_statements)) This code will read in your excel table and then loop over the rows to create a statement for each row. I don't think we need to do any if-else for creating those statements, since every statement appears to be the same format - it has the same pieces but we replace a few words with the values from the table. Then we write these lines to your text file. In the last piece here, the file is opened in 'a' which is for append, and so will append the lines to an existing file (or create a new file if it doesn't already exist). If you want to overwrite what is already in the file, you will just change the 'a' to 'w' (for write mode).
How to create a python script to parse a xlsx spreadsheet file and generate sql statements?
I have a xlsx file with two columns (id, meal) and 100 rows of data, and I want to parse the data to generate a notepad file that has sql update statements. Id Meal 12345 Child 23456 Adult 34567 Senior 34599 Senior I'm unsure on how to implement if/else/else if statements and add data from the xlsx file to generate sql statements to add into a notepad file. If the meal is 'Child', an example of the sql script generated would be update system.user set meal_name = 'Child', meal_price = 'Child' where customer_id = '12345'; If the meal is 'Adult', an example of the sql script generated would be update system.user set meal_name = 'Adult', meal_price = 'Adult' where customer_id = '23456'; If the meal is 'Senior', an example of the sql script generated would be update system.user set meal_name = 'Senior', where customer_id = '34567'; Anything to help would be very much appreciated, it's my first experience with Python so I'm unsure on how to get started. This is the current code I have which doesn't have much, I'm just not sure how to get started import openpyxl from pathlib import Path xlsx_file = Path('CustomerData', 'customer_data.xlsx') wb_obj = openpyxl.load_workbook(xlsx_file) sheet = wb_obj.active col_names = [] for column in sheet.iter_cols(1, sheet.max_column): col_names.append(column[0].value) print(col_names)
[ "Using the pandas library, I think we can achieve what you want like this:\nimport pandas as pd\n\ndf = pd.read_excel(path) # read our dataframe from excel\n\nall_statements = [] # initialize an empty list to append to\nfor row in df.itertuples(index=False): # loop over each row\n statement = f\"update system.user set meal_name = '{row.Meal}', meal_price = '{row.Meal}' where customer_id = '{row.Id}';\" # create a statement for that row using f-strings\n all_statements.append(statement) # append the statement for this row to our list of all_statements\n \nwith open(my_text_file, 'a') as file: # open in append mode, which will add new lines to the end of an existing file\n file.write('\\n'.join(all_statements))\n\nThis code will read in your excel table and then loop over the rows to create a statement for each row. I don't think we need to do any if-else for creating those statements, since every statement appears to be the same format - it has the same pieces but we replace a few words with the values from the table.\nThen we write these lines to your text file. In the last piece here, the file is opened in 'a' which is for append, and so will append the lines to an existing file (or create a new file if it doesn't already exist). If you want to overwrite what is already in the file, you will just change the 'a' to 'w' (for write mode).\n" ]
[ 1 ]
[]
[]
[ "python", "xlsx" ]
stackoverflow_0074522144_python_xlsx.txt
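Since the Senior template in the question omits meal_price (and carries a stray comma before where), the statement generation actually needs a branch; a minimal sketch assuming the columns are named Id and Meal as in the sample table:

import pandas as pd

df = pd.read_excel('customer_data.xlsx')

statements = []
for row in df.itertuples(index=False):
    if row.Meal == 'Senior':
        statements.append(f"update system.user set meal_name = 'Senior' where customer_id = '{row.Id}';")
    else:
        statements.append(f"update system.user set meal_name = '{row.Meal}', meal_price = '{row.Meal}' where customer_id = '{row.Id}';")

with open('update_statements.sql', 'w') as f:
    f.write('\n'.join(statements))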
Q: Python context manager that measures time I am struggling to make a piece of code that allows me to measure time spent within a "with" statement and assigns the time measured (a float) to the variable provided in the "with" statement. import time class catchtime: def __enter__(self): self.t = time.clock() return 1 def __exit__(self, type, value, traceback): return time.clock() - self.t with catchtime() as t: pass This code leaves t=1 and not the difference between clock() calls. How to approach this problem? I need a way to assign a new value from within the exit method. PEP 343 describes in more detail how context managers work but I do not understand most of it. A: Here is an example of using contextmanager from time import perf_counter from contextlib import contextmanager @contextmanager def catchtime() -> float: start = perf_counter() yield lambda: perf_counter() - start with catchtime() as t: import time time.sleep(1) print(f"Execution time: {t():.4f} secs") Output: Execution time: 1.0014 secs A: The top-rated answer can give the incorrect time As noted by @Mercury, the top answer by @Vlad Bezden, while slick, is technically incorrect since the value yielded by t() is also potentially affected by code executed outside of the with statement. For example, if you run time.sleep(5) after the with statement but before the print statement, then calling t() in the print statement will give you ~6 sec, not ~1 sec. In some cases, this can be avoided by inserting the print command inside the context manager as below: from time import perf_counter from contextlib import contextmanager @contextmanager def catchtime() -> float: start = perf_counter() yield lambda: perf_counter() - start print(f'Time: {perf_counter() - start:.3f} seconds') However, even with this modification, notice how running sleep(5) later on causes the incorrect time to be printed: from time import sleep with catchtime() as t: sleep(1) # >>> "Time: 1.000 seconds" sleep(5) print(f'Time: {t():.3f} seconds') # >>> "Time: 6.000 seconds" Solution #1: A fix for the above approach This solution cumulatively nets the difference between two timer objects, t1 and t2. The catchtime function can be distilled down into 3 lines of code. Note that perf_counter has been renamed to press_button to help with visualization. Initialize t1 and t2 simultaneously. Later t2 will be reassigned to preserve the tick count after the yield statement. For this step, imagine simultaneously pressing the on/off button for timer1 and timer2. Measure the difference between timer2 and timer1. Initially, this will be 0 on pass #1, but in subsequent runs, it will be the absolute difference between the tick count in t1 and t2. This final step is equivalent to pressing the on/off button for timer2 again. This puts the two timers out of sync, making the difference measurement in the step before meaningful from pass #2 onwards. from time import perf_counter as press_button from time import sleep @contextmanager def catchtime() -> float: t1 = t2 = press_button() yield lambda: t2 - t1 t2 = press_button() with catchtime() as t: sleep(1) sleep(5) print(f'Time: {t():.3f} seconds') # >>> Time: 1.000 seconds Solution #2: An alternative, more flexible approach This code is similar to the excellent answer given by @BrenBarn, except that it: Automatically prints the executed time as a formatted string (remove this by deleting the final print(self.readout)) Saves the formatted string for later use (self.readout) Saves the float result for later use (self.time) from time import perf_counter class catchtime: def __enter__(self): self.time = perf_counter() return self def __exit__(self, type, value, traceback): self.time = perf_counter() - self.time self.readout = f'Time: {self.time:.3f} seconds' print(self.readout) Notice how the intermediate sleep(5) commands no longer affect the printed time. from time import sleep with catchtime() as t: sleep(1) # >>> "Time: 1.000 seconds" sleep(5) print(t.time) # >>> 1.000283900000009 sleep(5) print(t.readout) # >>> "Time: 1.000 seconds" A: You can't get that to assign your timing to t. As described in the PEP, the variable you specify in the as clause (if any) gets assigned the result of calling __enter__, not __exit__. In other words, t is only assigned at the start of the with block, not at the end. What you could do is change your __exit__ so that instead of returning the value, it does self.t = time.clock() - self.t. Then, after the with block finishes, the t attribute of the context manager will hold the elapsed time. To make that work, you also want to return self instead of 1 from __enter__. Not sure what you were trying to achieve by using 1. So it looks like this: class catchtime(object): def __enter__(self): self.t = time.clock() return self def __exit__(self, type, value, traceback): self.t = time.clock() - self.t with catchtime() as t: time.sleep(1) print(t.t) And a value pretty close to 1 is printed. A: Solved (almost). Resulting variable is coercible and convertible to a float (but not a float itself). class catchtime: def __enter__(self): self.t = time.clock() return self def __exit__(self, type, value, traceback): self.e = time.clock() def __float__(self): return float(self.e - self.t) def __coerce__(self, other): return (float(self), other) def __str__(self): return str(float(self)) def __repr__(self): return str(float(self)) with catchtime() as t: pass print t print repr(t) print float(t) print 0+t print 1*t 1.10000000001e-05 1.10000000001e-05 1.10000000001e-05 1.10000000001e-05 1.10000000001e-05 A: The issue in top rated answer could be also fixed as below: @contextmanager def catchtime() -> float: start = perf_counter() end = start yield lambda: end - start end = perf_counter() A: I like this approach, which is simple to use and allows a contextual message: from time import perf_counter from contextlib import ContextDecorator class cmtimer(ContextDecorator): def __init__(self, msg): self.msg = msg def __enter__(self): self.time = perf_counter() return self def __exit__(self, type, value, traceback): elapsed = perf_counter() - self.time print(f'{self.msg} took {elapsed:.3f} seconds') Use it this way: with cmtimer('Loading JSON'): with open('data.json') as f: results = json.load(f) Output: Loading JSON took 1.577 seconds A: You could do it in this way below: import time class Exectime: def __enter__(self): self.time = time.time() return self def __exit__(self, exc_type, exc_val, exc_tb): self.time = time.time() - self.time with Exectime() as ext: <your code here in with statement> print('execution time is:' +str(ext.time)) It will calculate time spent to process codes within 'with' statement. A: With this implementation you can get time during the process and any time after from contextlib import contextmanager from time import perf_counter @contextmanager def catchtime(task_name='It', verbose=True): class timer: def __init__(self): self._t1 = None self._t2 = None def start(self): self._t1 = perf_counter() self._t2 = None def stop(self): self._t2 = perf_counter() @property def time(self): return (self._t2 or perf_counter()) - self._t1 t = timer() t.start() try: yield t finally: t.stop() if verbose: print(f'{task_name} took {t.time :.3f} seconds') Usage examples: from time import sleep ############################ # 1. will print result with catchtime('First task'): sleep(1) ############################ # 2. will print result (without task name) and save result to t object with catchtime() as t: sleep(1) t.time # operation time is saved here ############################ # 3. will not print anything but will save result to t object with catchtime() as t: sleep(1) t.time # operation time is saved here
Python context manager that measures time
I am struggling to make a piece of code that allows me to measure time spent within a "with" statement and assigns the time measured (a float) to the variable provided in the "with" statement. import time class catchtime: def __enter__(self): self.t = time.clock() return 1 def __exit__(self, type, value, traceback): return time.clock() - self.t with catchtime() as t: pass This code leaves t=1 and not the difference between clock() calls. How to approach this problem? I need a way to assign a new value from within the exit method. PEP 343 describes in more detail how context managers work but I do not understand most of it.
[ "Here is an example of using contextmanager\nfrom time import perf_counter\nfrom contextlib import contextmanager\n\n@contextmanager\ndef catchtime() -> float:\n start = perf_counter()\n yield lambda: perf_counter() - start\n\n\nwith catchtime() as t:\n import time\n time.sleep(1)\n\nprint(f\"Execution time: {t():.4f} secs\")\n\nOutput:\nExecution time: 1.0014 secs\n", "The top-rated answer can give the incorrect time\nAs noted by @Mercury, the top answer by @Vlad Bezden, while slick, is technically incorrect since the value yielded by t() is also potentially affected by code executed outside of the with statement. For example, if you run time.sleep(5) after the with statement but before the print statement, then calling t() in the print statement will give you ~6 sec, not ~1 sec.\nIn some cases, this can be avoided by inserting the print command inside the context manager as below:\nfrom time import perf_counter\nfrom contextlib import contextmanager\n\n\n@contextmanager\ndef catchtime() -> float:\n start = perf_counter()\n yield lambda: perf_counter() - start\n print(f'Time: {perf_counter() - start:.3f} seconds')\n\n\nHowever, even with this modification, notice how running sleep(5) later on causes the incorrect time to be printed:\nfrom time import sleep\n\nwith catchtime() as t:\n sleep(1)\n\n# >>> \"Time: 1.000 seconds\"\n\nsleep(5)\nprint(f'Time: {t():.3f} seconds')\n\n# >>> \"Time: 6.000 seconds\"\n\nSolution #1: A fix for the above approach\nThis solution cumulatively nets the difference between two timer objects, t1 and t2. The catchtime function can be distilled down into 3 lines of code. Note that perf_counter has been renamed to press_button to help with visualization.\n\nInitialize t1 and t2 simultaneously. Later t2 will be reassigned to preserve the tick count after the yield statement. For this step, imagine simultaneously pressing the on/off button for timer1 and timer2.\nMeasure the difference between timer2 and timer1. Initially, this will be 0 on pass #1, but in subsequent runs, it will be the absolute difference between the tick count in t1 and t2.\nThis final step is equivalent to pressing the on/off button for timer2 again. 
This puts the two timers out of sync, making the difference measurement in the step before meaningful from pass #2 onwards.\n\nfrom time import perf_counter as press_button\nfrom time import sleep\n\n@contextmanager\ndef catchtime() -> float:\n t1 = t2 = press_button() \n yield lambda: t2 - t1\n t2 = press_button() \n\nwith catchtime() as t:\n sleep(1)\n\nsleep(5)\nprint(f'Time: {t():.3f} seconds')\n\n# >>> Time: 1.000 seconds\n\nSolution #2: An alternative, more flexible approach\nThis code is similar to the excellent answer given by @BrenBarn, except that it:\n\nAutomatically prints the executed time as a formatted string (remove this by deleting the final print(self.readout))\nSaves the formatted string for later use (self.readout)\nSaves the float result for later use (self.time)\n\nfrom time import perf_counter\n\n\nclass catchtime:\n def __enter__(self):\n self.time = perf_counter()\n return self\n\n def __exit__(self, type, value, traceback):\n self.time = perf_counter() - self.time\n self.readout = f'Time: {self.time:.3f} seconds'\n print(self.readout)\n\n\nNotice how the intermediate sleep(5) commands no longer affect the printed time.\nfrom time import sleep\n\nwith catchtime() as t:\n sleep(1)\n\n# >>> \"Time: 1.000 seconds\"\n\nsleep(5)\nprint(t.time)\n\n# >>> 1.000283900000009\n\nsleep(5)\nprint(t.readout)\n\n# >>> \"Time: 1.000 seconds\"\n\n", "You can't get that to assign your timing to t. As described in the PEP, the variable you specify in the as clause (if any) gets assigned the result of calling __enter__, not __exit__. In other words, t is only assigned at the start of the with block, not at the end.\nWhat you could do is change your __exit__ so that instead of returning the value, it does self.t = time.clock() - self.t. Then, after the with block finishes, the t attribute of the context manager will hold the elapsed time.\nTo make that work, you also want to return self instead of 1 from __enter__. Not sure what you were trying to achieve by using 1.\nSo it looks like this:\nclass catchtime(object):\n def __enter__(self):\n self.t = time.clock()\n return self\n\n def __exit__(self, type, value, traceback):\n self.t = time.clock() - self.t\n\nwith catchtime() as t:\n time.sleep(1)\n\nprint(t.t)\n\nAnd a value pretty close to 1 is printed.\n", "Solved (almost). 
Resulting variable is coercible and convertible to a float (but not a float itself).\nclass catchtime:\n    def __enter__(self):\n        self.t = time.clock()\n        return self\n\n    def __exit__(self, type, value, traceback):\n        self.e = time.clock()\n\n    def __float__(self):\n        return float(self.e - self.t)\n\n    def __coerce__(self, other):\n        return (float(self), other)\n\n    def __str__(self):\n        return str(float(self))\n\n    def __repr__(self):\n        return str(float(self))\n\nwith catchtime() as t:\n    pass\n\nprint t\nprint repr(t)\nprint float(t)\nprint 0+t\nprint 1*t\n\n1.10000000001e-05\n1.10000000001e-05\n1.10000000001e-05\n1.10000000001e-05\n1.10000000001e-05\n\n", "The issue in the top-rated answer could also be fixed as below:\n@contextmanager\ndef catchtime() -> float:\n    start = perf_counter()\n    end = start\n    yield lambda: end - start\n    end = perf_counter()\n\n", "I like this approach, which is simple to use and allows a contextual message:\nfrom time import perf_counter\nfrom contextlib import ContextDecorator\n\nclass cmtimer(ContextDecorator):\n    def __init__(self, msg):\n        self.msg = msg\n\n    def __enter__(self):\n        self.time = perf_counter()\n        return self\n\n    def __exit__(self, type, value, traceback):\n        elapsed = perf_counter() - self.time\n        print(f'{self.msg} took {elapsed:.3f} seconds')\n\nUse it this way:\nwith cmtimer('Loading JSON'):\n    with open('data.json') as f:\n        results = json.load(f)\n\nOutput:\nLoading JSON took 1.577 seconds\n\n", "You could do it this way:\nimport time\n\nclass Exectime:\n\n    def __enter__(self):\n        self.time = time.time()\n        return self\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        self.time = time.time() - self.time\n\n\n\nwith Exectime() as ext:\n    <your code here in with statement>\n\nprint('execution time is:' +str(ext.time))\n\nIt will calculate the time spent executing the code within the 'with' statement.\n", "With this implementation you can get the time during the process and at any time after\nfrom contextlib import contextmanager\nfrom time import perf_counter\n\n\n@contextmanager\ndef catchtime(task_name='It', verbose=True):\n    class timer:\n        def __init__(self):\n            self._t1 = None\n            self._t2 = None\n\n        def start(self):\n            self._t1 = perf_counter()\n            self._t2 = None\n\n        def stop(self):\n            self._t2 = perf_counter()\n\n        @property\n        def time(self):\n            return (self._t2 or perf_counter()) - self._t1\n\n    t = timer()\n    t.start()\n    try:\n        yield t\n    finally:\n        t.stop()\n        if verbose:\n            print(f'{task_name} took {t.time :.3f} seconds')\n\nUsage examples:\nfrom time import sleep\n\n############################\n\n# 1. will print result\nwith catchtime('First task'):\n    sleep(1)\n\n############################\n\n# 2. will print result (without task name) and save result to t object\nwith catchtime() as t:\n    sleep(1)\n\nt.time # operation time is saved here\n\n############################\n\n# 3. will not print anything but will save result to t object\nwith catchtime() as t:\n    sleep(1)\n\nt.time # operation time is saved here\n\n" ]
[ 26, 13, 12, 8, 3, 3, 1, 0 ]
[]
[]
[ "python", "with_statement" ]
stackoverflow_0033987060_python_with_statement.txt
Q: Yamnet audio classification for feature extraction I am currently working on an audio classification task using Yamnet, a pretrained model from tfhub. I am using it to extract embeddings from audio clips, and then I use another simple classification model composed of two dense layers; the second model takes the embeddings given by Yamnet as input and does the classification. The problem is that the classifier's predictions are always such that the third class has the highest value, so it is always the predicted class. If anyone has worked on such an issue, please help; thanks in advance. I followed this tutorial: https://blog.tensorflow.org/2021/03/transfer-learning-for-audio-data-with-yamnet.html A: Sounds like your data are not split equally between the classes. Your model is overfitting to the "third class" in your dataset. I would consider splitting the data into training, validation and test sets using a stratified method so that every class is represented in each split. Here is a resource on stratified K-fold: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html
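As a rough sketch of the stratified split the answer recommends (the array names, shapes and fold count below are illustrative assumptions, not taken from the question):

import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical data: one 1024-dim YAMNet embedding per clip, one integer label per clip.
embeddings = np.random.rand(100, 1024)
labels = np.random.randint(0, 3, size=100)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(embeddings, labels)):
    x_train, y_train = embeddings[train_idx], labels[train_idx]
    x_val, y_val = embeddings[val_idx], labels[val_idx]
    # Each fold preserves the overall class proportions, so no class dominates training.
    print(fold, np.bincount(y_train), np.bincount(y_val))

If the class counts themselves are badly skewed, class weights or resampling would be the next thing to try.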
Yamnet audio classification for feature extraction
I am currently working on an audio classification task using Yamnet, a pretrained model from tfhub. I am using it to extract embeddings from audio clips, and then I use another simple classification model composed of two dense layers; the second model takes the embeddings given by Yamnet as input and does the classification. The problem is that the classifier's predictions are always such that the third class has the highest value, so it is always the predicted class. If anyone has worked on such an issue, please help; thanks in advance. I followed this tutorial: https://blog.tensorflow.org/2021/03/transfer-learning-for-audio-data-with-yamnet.html
[ "Sounds like your data are not separated equally between each class. Your model overfits with the \"third class\" from your dataset. I would consider investigating the possibility of splitting the data for train, validation and testing using the stratified method so that every class is included during training/validation/testing.\nHere is a resource of Stratified K fold:\nhttps://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html\n" ]
[ 0 ]
[]
[]
[ "audio", "deep_learning", "multiclass_classification", "python", "tensorflow_hub" ]
stackoverflow_0071649169_audio_deep_learning_multiclass_classification_python_tensorflow_hub.txt
Q: SSLCertVerificationError is caught by ValueError and not OSError We had a bug and I'm trying to understand why it happened. In the documentation it is mentioned that SSLError is a subtype of OSError. (https://docs.python.org/3/library/ssl.html#ssl.SSLError) However this code doesn't work as expected - it seems that ValueError wins: def bar(): raise ssl.SSLCertVerificationError try: bar() except ValueError: print('Got ValueError') except OSError: print('Got OSError') The above results in Got ValueError Couldn't find any documented bug around this. Python version - Python 3.8.9 A: It looks like you're getting this behaviour because SSLCertVerificationError extends both OSError and ValueError. i.e. running inspect's getmro on this class...: import ssl import inspect inspect.getmro(ssl.SSLCertVerificationError) ...gives this <class 'ssl.SSLCertVerificationError'> <class 'ssl.SSLError'> <class 'OSError'> <class 'ValueError'> <class 'Exception'> <class 'BaseException'> <class 'object'> Python supports multiple inheritance; i.e. whilst in many languages a class only has one immediate parent (which may itself have one parent), in Python, a class may have several immediate parents (super classes). Looking at SSLError, that inherits from OSError but not from ValueError; so it seems that SSLCertVerificationError's immediate parents are ssl.SSLError and ValueError. i.e. inspect.getmro(ssl.SSLError) returns: <class 'ssl.SSLError'> <class 'OSError'> <class 'Exception'> <class 'BaseException'> <class 'object'> Given SSLCertVerificationError inherits from both OSError (via SSLError) and ValueError, it seems Python will use whichever catch block comes first; as demonstrated with the 2 examples below. def bar(): raise ssl.SSLCertVerificationError try: bar() except ValueError: print('I win (Value Error)') except OSError: print('already handled above (OS Error)') def bar(): raise ssl.SSLCertVerificationError try: bar() except OSError: print('I win (OS Error)') except ValueError: print('already handled above (Value Error)') Were you to replace raise ssl.SSLCertVerificationError with raise ssl.SSLError you'd only see the except OSError logic in play, as the ValueError wouldn't be part of the object's inheritance hierarchy. Note: I don't really know Python, so I can't say for certain whether the above is the behaviour expected according to the language's design; but the above behaviour does make sense given what we can show about its inheritance.
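A quick, self-contained way to confirm the dual parentage and the first-match rule (a sketch, not part of the original post):

import ssl

err = ssl.SSLCertVerificationError()
print(isinstance(err, ValueError))  # True
print(isinstance(err, OSError))     # True

# Since the exception is both a ValueError and an OSError,
# whichever except clause is listed first catches it:
try:
    raise err
except OSError:
    print('caught as OSError')
except ValueError:
    print('caught as ValueError')  # unreachable with this ordering

Swapping the order of the except clauses swaps the output, which is exactly the behaviour described in the question.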
SSLCertVerificationError is caught by ValueError and not OSError
We had a bug and I'm trying to understand why it happened. In the documentation it is mentioned that SSLError is a subtype of OSError. (https://docs.python.org/3/library/ssl.html#ssl.SSLError) However this code doesn't work as expected - it seems that ValueError wins: def bar(): raise ssl.SSLCertVerificationError try: bar() except ValueError: print('Got ValueError') except OSError: print('Got OSError') The above results in Got ValueError Couldn't find any documented bug around this. Python version - Python 3.8.9
[ "It looks like you're getting this behaviour because SSLCertVerificationError extends both OSError and ValueError.\ni.e. running inspect's getmro on this class...:\nimport ssl\nimport inspect\ninspect.getmro(ssl.SSLCertVerificationError)\n\n...gives this\n\n<class 'ssl.SSLCertVerificationError'>\n<class 'ssl.SSLError'>\n<class 'OSError'>\n<class 'ValueError'>\n<class 'Exception'>\n<class 'BaseException'>\n<class 'object'>\n\nPython supports multipe inheritence; i.e. whilst in many languages a class only has one immediate parent (which may itself have one parent), in Python, a class may have several immediate parents (super classes).\nLooking at SSLError, that inherits from OSError but doesn't from ValueError; so it seems that SSLCertVerificationError's immediate parents are ssl.Error and ValueError.\ni.e. inspect.getmro(ssl.SSLError) returns:\n\n<class 'ssl.SSLError'>\n<class 'OSError'>\n<class 'Exception'>\n<class 'BaseException'>\n<class 'object'>\n\nGiven SSLCertVerificationError inherits from both OSError (via SSLError) and ValueError it seems Python will use whichever catch block comes first; as demonstrated with the 2 examples below.\ndef bar():\n raise ssl.SSLCertVerificationError\n\ntry:\n bar()\nexcept ValueError:\n print('I win (Value Error)')\nexcept OSError:\n print('already handled above (OS Error)')\n\ndef bar():\n raise ssl.SSLCertVerificationError\n\ntry:\n bar()\nexcept OSError:\n print('I win (OS Error)')\nexcept ValueError:\n print('already handled above (Value Error)')\n\nWere you to replace raise ssl.SSLCertVerificationError with raise ssl.SSLError you'd only see the except OSError logic in play as the ValueError wouldn't be part of the object's inheritence hierarchy.\nNote: I don't really know Python; so can't say for certain whether the above's the behaviour expected according to the language's design; but the above behaviour does make sense given what we can show about its inheritence.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074519740_python.txt
Q: how to add a newline after each append in list I have a list of tuples in rows which I need to append to another list and add a newline after each entry. I tried everything I can think of, but I can't seem to do it properly. Here is the code: niz = [""" (5, 6, 4) (90, 100, 13), (5, 8, 13), (9, 11, 13) (9, 11, 5), (19, 20, 5), (30, 34, 5) (9, 11, 4) (22, 25, 13), (17, 19, 13) """] list = [] for n in niz: list.append(n) list = '\n'.join(list) print(list) This is the closest I get: (5, 6, 4) (90, 100, 13), (5, 8, 13), (9, 11, 13) (9, 11, 5), (19, 20, 5), (30, 34, 5) (9, 11, 4) (22, 25, 13), (17, 19, 13) But I need it to be: [(5, 6, 4), (90, 100, 13), (5, 8, 13), (9, 11, 13), (9, 11, 5), (19, 20, 5), (30, 34, 5), (9, 11, 4), (22, 25, 13), (17, 19, 13)] A: When you write niz = [""" (5, 6, 4) (90, 100, 13), (5, 8, 13), (9, 11, 13) (9, 11, 5), (19, 20, 5), (30, 34, 5) (9, 11, 4) (22, 25, 13), (17, 19, 13) """] you are creating a list with a single string, and not a list of tuples. I would do something like this: niz = ["[",(5, 6, 4), "\n", (90, 100, 13), (5, 8, 13), (9, 11, 13), "\n", (9, 11, 5), (19, 20, 5), (30, 34, 5), "\n", (9, 11, 4), "\n", (22, 25, 13), (17, 19, 13),"]"] for item in niz: print(item, end=" ") A: Here is what I suggest: nizz = niz[0].replace(')\n(', '), (') nizz = nizz.replace('), (', ')|(').replace('\n', '') result = [eval(i) for i in nizz.split('|')] It gives you the list of all tuples A: Join using ",\n" instead of just "\n" and append the first "[" and last "]" characters list = [] for n in niz: list.append(n) list = '[' + ',\n'.join(list) + ']' print(list)
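Since the second answer's eval is risky on untrusted text, here is a sketch of the same idea using ast.literal_eval instead; the variable names mirror the question, and the wrapping in "(...,)" is just a trick to parse each source line as a tuple of tuples:

import ast

niz = """
(5, 6, 4)
(90, 100, 13), (5, 8, 13), (9, 11, 13)
(9, 11, 5), (19, 20, 5), (30, 34, 5)
(9, 11, 4)
(22, 25, 13), (17, 19, 13)
"""

# Parse each non-empty line into real tuples, keeping one row per source line.
rows = [ast.literal_eval(f"({line},)") for line in niz.splitlines() if line.strip()]
flat = [t for row in rows for t in row]

# Print one source line per row, comma + newline between rows, as asked for.
print("[" + ",\n".join(", ".join(map(str, row)) for row in rows) + "]")
print(flat)  # the same data as a single flat list of tuples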
how to add a newline after each append in list
I have a list of tuples in rows which I need to append to another list and add a newline after each entry. I tried everything I can think of, but I can't seem to do it properly. Here is the code: niz = [""" (5, 6, 4) (90, 100, 13), (5, 8, 13), (9, 11, 13) (9, 11, 5), (19, 20, 5), (30, 34, 5) (9, 11, 4) (22, 25, 13), (17, 19, 13) """] list = [] for n in niz: list.append(n) list = '\n'.join(list) print(list) This is the closest I get: (5, 6, 4) (90, 100, 13), (5, 8, 13), (9, 11, 13) (9, 11, 5), (19, 20, 5), (30, 34, 5) (9, 11, 4) (22, 25, 13), (17, 19, 13) But I need it to be: [(5, 6, 4), (90, 100, 13), (5, 8, 13), (9, 11, 13), (9, 11, 5), (19, 20, 5), (30, 34, 5), (9, 11, 4), (22, 25, 13), (17, 19, 13)]
[ "When you write\nniz = [\"\"\"\n(5, 6, 4)\n(90, 100, 13), (5, 8, 13), (9, 11, 13)\n(9, 11, 5), (19, 20, 5), (30, 34, 5)\n(9, 11, 4)\n(22, 25, 13), (17, 19, 13)\n\"\"\"]\n\nYou are creating a list with a single string, and not a list of tuples. I would do something like this:\nniz = [\"[\",(5, 6, 4), \"\\n\", (90, 100, 13), (5, 8, 13), (9, 11, 13), \"\\n\", (9, 11, 5), (19, 20, 5), (30, 34, 5), \"\\n\", (9, 11, 4), \"\\n\", (22, 25, 13), (17, 19, 13),\"]\"]\n\nfor item in niz:\n print(item, end=\" \")\n\n", "Here is what I suggest:\nnizz = niz[0].replace(')\\n(', '), (')\nnizz = nizz.replace('), (', ')|(').replace('\\n', '')\nresult = [eval(i) for i in nizz.split('|')]\n\nIt gives you the list of all tuples\n", "Join using \",\\n\" instead of just \"\\n\"and append the first \"[\" and last \"]\" characters\nlist = []\nfor n in niz:\n list.append(n) \nlist = '[' + ',\\n'.join(list) + ']'\n\nprint(list)\n\n" ]
[ 0, 0, -2 ]
[]
[]
[ "for_loop", "list", "newline", "python", "tuples" ]
stackoverflow_0074522188_for_loop_list_newline_python_tuples.txt
Q: Python default version not opening by default (Windows) It seems my default python is v3.11 (indicated by the asterisk when doing command line "py -0p") and yet at the same time it says it's v3.10 (command line "python --version"), and v3.10 is also the version it opens by default via the command line. I've updated the environment variables (PY_PYTHON and Path) to point to v3.11, as well as the registry setting Computer\HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command, but still no joy. Does anyone have any idea what else might be affecting this? See the conflicting command line results here A: The easiest way to switch to the newest version is to uninstall the older ones, but if you want to keep them, try restarting your cmd.
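One diagnostic that sidesteps PATH and launcher confusion is to ask the interpreter itself which binary is running; a small sketch, not a fix by itself:

import sys

# Prints the exact executable that ran this script and its version,
# which tells apart PATH ordering problems from py-launcher settings.
print(sys.executable)
print(sys.version)

Running this via "python script.py" versus "py script.py" would show whether the two commands resolve to different installations.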
Python default version not opening by default (Windows)
It seems my default python is v3.11 (indicated by the asterisk when doing command line "py -0p") and yet at the same time it says it's v3.10 (command line "python --version"), and v3.10 is also the version it opens by default via the command line. I've updated the environment variables (PY_PYTHON and Path) to point to v3.11, as well as the registry setting Computer\HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command, but still no joy. Does anyone have any idea what else might be affecting this? See the conflicting command line results here
[ "The easiest way to switch to the newest version is to uninstall the older ones but if you want to keep them, try restarting your cmd.\n" ]
[ 0 ]
[]
[]
[ "default", "python", "version" ]
stackoverflow_0074522309_default_python_version.txt
Q: select.poll doesn't detect available read unless I sleep for some time Context I would like to use select.poll to know when data is available to read, buffer this data, and use said buffer as a subprocess' stdin. The data is being dumped at equally spaced intervals. (see execution example) It's important that reading data in the main script is non-blocking, so a subprocess can be executed from there. Problem #!/usr/bin/env python3 # # file: wrap.py # import select import sys import time max_retries = 2 timeout = 300 fd_stdin = sys.stdin.fileno() poll = select.poll() poll.register(fd_stdin, select.POLLIN) tries = 0 while True: events = poll.poll(timeout) # means we timeout if len(events) == 0: print('timeout') tries += 1 if tries >= max_retries: print('sleeping') time.sleep(1) continue tries = 0 for fd, event in events: if fd != fd_stdin or event & select.POLLIN != 1: print(f'Unknown event {event}') continue print(sys.stdin.readline(), flush=True) To test the program I run this, to simulate the equally spaced interval dump. while true; do for i in {1..10}; do echo $i; done; sleep 10; done | ./wrap.py But it doesn't work as expected (or I don't understand how it is supposed to work). What confuses me the most is that if I add a small sleep directive in the bash while loop, it does what I want. while true; do for i in {1..10}; do echo $i; sleep 0.01; done; sleep 10; done | ./wrap.py I even tried using a Python script as the dump block, but it doesn't change a thing (I still need to sleep to get the expected result). Edited to add the python echo script: #!/usr/bin/env python import time while True: for i in range(1000): print(i) # this line allows the code to behave as expected. # time.sleep(0.01) time.sleep(10) A: First, your shebang (#!) must be on line 1, not line 4. Nobody will be able to replicate the problem without fixing that. Second, I think there is a flaw in your original bash script. The way it's written, the CPU will execute at full speed to generate a bunch of 10 consecutive lines and then sleep 10 seconds. I suspect that's not what you were expecting. Now, why do we have only a 1 before all those timeouts? It's complex. Please stay with me. At time 0: your bash script generates 10 lines. A few microseconds later, poll() returns an event, so your program calls readline() to extract the first line. But the other 9 lines stay in readline()'s buffer, waiting to be read, and the stdin buffer is now empty. For the next 10 seconds, poll only returns timeouts since there is nothing new in the stdin buffer. At time 10, your bash script generates a new bunch of 10 lines and that generates an event. Your program calls readline() and gets a 2. That line is from bunch #1 (still residing in the readline buffer). The readline buffer now contains 18 lines and the stdin buffer is empty. The next event will be at time 20. Et cetera, et cetera. Adding a small sleep in the bash script changes the dynamics, because poll will now have time to generate an event for each line, so your program will call readline() for each line. But your program will still be 'fragile' because there's still a risk that the stdin buffer may contain 2 lines. These 2 lines will be transferred together to the readline buffer but readline() will only consume 1 line. So lines will accumulate in the readline buffer. Conclusion: There are 2 bugs: one in the bash script - it's the cause of the confusion. Second: every time you get an event, you must completely consume the readline buffer. That's not easy, since readline may block while waiting for an EOL. I would suggest doing a read() instead of readline() and using find() or splitlines() to extract lines from your buffer. And you will be able to remove tries and max_retries.
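A sketch of the read()-and-split approach the answer suggests, using os.read on the raw file descriptor so buffering never hides pending data (the 4096-byte chunk size and 300 ms timeout are arbitrary example values, not from the post):

import os
import select
import sys

fd = sys.stdin.fileno()
poller = select.poll()
poller.register(fd, select.POLLIN)

buf = b""
while True:
    for _fd, event in poller.poll(300):
        if event & select.POLLIN:
            chunk = os.read(fd, 4096)       # drain whatever is available right now
            if not chunk:                   # empty read means EOF: writer closed the pipe
                sys.exit(0)
            buf += chunk
            *lines, buf = buf.split(b"\n")  # keep any trailing partial line for later
            for line in lines:
                print(line.decode(), flush=True)

Because os.read returns everything currently available, no data lingers in a hidden readline buffer, so each poll() event corresponds to data this loop actually processes.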
I would suggest to do a read() instead of readline() and use find() or splitlines() to extract lines from your buffer. And you will be able to remove tries and max_retries.
select.poll doesn't detect available read unless I sleep for some time
Context I would like to use select.poll to know when data is available to read, buffer this data, and use said buffer as a subprocess' stdin. The data is being dumped at equally spaced intervals. (see execution example) It's important that reading data in the main script is non-blocking, so a subprocess can be executed from there. Problem #!/usr/bin/env python3 # # file: wrap.py # import select import sys import time max_retries = 2 timeout = 300 fd_stdin = sys.stdin.fileno() poll = select.poll() poll.register(fd_stdin, select.POLLIN) tries = 0 while True: events = poll.poll(timeout) # means we timeout if len(events) == 0: print('timeout') tries += 1 if tries >= max_retries: print('sleeping') time.sleep(1) continue tries = 0 for fd, event in events: if fd != fd_stdin or event & select.POLLIN != 1: print(f'Unknown event {event}') continue print(sys.stdin.readline(), flush=True) To test the program I run this, to simulate the equally spaced interval dump. while true; do for i in {1..10}; do echo $i; done; sleep 10; done | ./wrap.py But it doesn't work as expected (or I don't understand how it is supposed to work). What confuses me the most is that if I add a small sleep directive in the bash while loop, it does what I want. while true; do for i in {1..10}; do echo $i; sleep 0.01; done; sleep 10; done | ./wrap.py I even tried using a Python script as the dump block, but it doesn't change a thing (I still need to sleep to get the expected result). Edited to add the python echo script: #!/usr/bin/env python import time while True: for i in range(1000): print(i) # this line allows the code to behave as expected. # time.sleep(0.01) time.sleep(10)
[ "First, your shebang (#!) must be in line 1 , not line 4.\nNobody will be able to replicate the problem without fixing that.\nSecond, I think there is a flaw in your original bash script. The way it's written, the CPU will execute at full speed to generate a bunch of 10 consecutive lines and then sleep 10 seconds.\nI suspect that's not what you were expecting.\nNow, why do we have only a 1 before all those timeouts?\nIt's complex. Please stay with me.\nAt time 0: your bash script generate 10 lines. A few microseconds later, poll() returns an event, so your program call readline() to extract the first line. But the other 9 lines stay in readline() buffer, waiting to be read, and the stdin buffer is now empty.\nFor the next 10 seconds, poll only return timeouts since there is nothing new in stdin buffer.\nAt time 10, your bash script generate a new bunch of 10 lines and that generate an event. Your program call readline() and get a 2. That line is from bunch #1 (still residing in readline buffer). The readline buffer now contain\n18 lines and stdin buffer is empty.\nNext event will be at time 30.\nEt cetera, et cetera.\nAddition of a small sleep in the bash script change the dynamic because poll will now have time to generate an event for each line so your program will generate a readline() for each line. But your program will still be 'fragile' because there's still a risk that the stdin buffer may contain 2 lines. These 2 lines will be transferred together to readline buffer but readline() will only consume 1 line. So lines will accumulate in readline buffer.\nConclusion:\nThere are 2 bugs:\n\none in the bash script. It's the cause of confusion.\nsecond: every time you get an event you must completely consume readline buffer. It not easy since readline may block, waiting for an EOL.\nI would suggest to do a read() instead of readline() and use find() or splitlines() to extract lines from your buffer.\n\nAnd you will be able to remove tries and max_retries.\n" ]
[ 2 ]
[]
[]
[ "polling", "python", "select" ]
stackoverflow_0074520647_polling_python_select.txt
Q: Comparing Two Dataframes Columns against a third column in Second DataFrame I am trying to compare two different dataframes for column "Source2/Source3" against "Spider". If they are a match, then create a column with True/False. Secondly, if there is a match (True), I want to make sure the column "Product_ID" in scheduler_df is 12345. If this is the case, then mark 'True', else 'False'. So far in my code, I am already comparing both dataframes to make sure Source and Spider match. I am struggling to compare it now to the Product_ID column. consolidate_df Source2 Source3 Jen_Arrest Jen_Jail Ben_Arrest Ben_Jail scheduler_df Spider Product_ID Jen_Arrest 88888 Ben_Arrest 12345 Current Code: consolidate_df = pd.read_excel(os.path.join(path, consolidated_fn), sheet_name='Poseidon-2') scheduler_df = pd.read_excel(os.path.join(path, scheduler_fn)) consolidate_df['isScheduler'] = consolidate_df['Source_2'].isin(scheduler_df['Spider']) | consolidate_df['Source_3'].isin(scheduler_df['Spider']) final df: Source2 isScheduler isProd Jen_Arrest True False (because 88888 does not match 12345) Ben_Arrest True True (because 12345 matches 12345) A: We can create a test column for the Product_ID in scheduler_df: scheduler_df['prod_test'] = scheduler_df.Product_ID.apply(lambda x: x == 12345) Then we create our column isProd (this relies on the two dataframes sharing the same row order/index): consolidate_df['isProd'] = consolidate_df['isScheduler'] & scheduler_df['prod_test']
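An alternative that does not depend on the two dataframes lining up row-for-row is a lookup Series keyed on Spider; the column names are copied from the question, the rest is a sketch:

import pandas as pd

consolidate_df = pd.DataFrame({"Source_2": ["Jen_Arrest", "Ben_Arrest"],
                               "Source_3": ["Jen_Jail", "Ben_Jail"]})
scheduler_df = pd.DataFrame({"Spider": ["Jen_Arrest", "Ben_Arrest"],
                             "Product_ID": [88888, 12345]})

# Map each Spider value to its Product_ID, then look both source columns up in it.
prod = scheduler_df.set_index("Spider")["Product_ID"]
matched = consolidate_df["Source_2"].map(prod).fillna(consolidate_df["Source_3"].map(prod))

consolidate_df["isScheduler"] = matched.notna()  # True where either source matched a Spider
consolidate_df["isProd"] = matched.eq(12345)     # True only where the matched Product_ID is 12345
print(consolidate_df)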
Comparing Two Dataframes Columns against a third column in Second DataFrame
I am trying to compare two different dataframes for column "Source2/Source3" against "Spider". If they are a match, then create a column with True/False. Secondly, if there is a match (True), I want to make sure the column "Product_ID" in scheduler_df is 12345. If this is the case, then mark 'True', else 'False'. So far in my code, I am already comparing both dataframes to make sure Source and Spider match. I am struggling to compare it now to the Product_ID column. consolidate_df Source2 Source3 Jen_Arrest Jen_Jail Ben_Arrest Ben_Jail scheduler_df Spider Product_ID Jen_Arrest 88888 Ben_Arrest 12345 Current Code: consolidate_df = pd.read_excel(os.path.join(path, consolidated_fn), sheet_name='Poseidon-2') scheduler_df = pd.read_excel(os.path.join(path, scheduler_fn)) consolidate_df['isScheduler'] = consolidate_df['Source_2'].isin(scheduler_df['Spider']) | consolidate_df['Source_3'].isin(scheduler_df['Spider']) final df: Source2 isScheduler isProd Jen_Arrest True False (because 88888 does not match 12345) Ben_Arrest True True (because 12345 matches 12345)
[ "we can create the testing for the product_id in scheduler_df:\n scheduler_df['prod_test']=scheduler_df.Product_ID.apply(lambda x:True if x==12345 else False)\n\nthen we create our column isProd:\nconsolidate_df['isProd']=consolidate_df['isScheduler'] & scheduler_df['prod_test'] | consolidate_df['isScheduler'] & scheduler_df['prod_test']\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074521177_pandas_python.txt
Q: Declare a global variable with certain type In Python, is it possible to declare a global variable with a type? I know it is fine to declare a local variable like this. student: Student Or global student But I'm looking for something like this global student: Student A: I did it like this: from typing import Union from my_class import MyClass foo_my_class: Union[MyClass, None] = None def setup_function(): """ test setup """ global foo_my_class foo_my_class = MyClass() ... I.e., this is a test module, and I want foo_my_class to be available at global scope for every test in the module. The setup_function() runs before every test, so I re-initialize foo_my_class each time (so that it is fresh and clean for each test). foo_my_class still has to be declared at global scope, and it makes most sense to me to init to None, hence the Union typing. You don't need to do it like this, but if you don't initialize here, flake8 will complain. There are a few other ways to do this, but this one works and satisfies my linters (flake8, pylint, mypy).
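A minimal sketch of the key point, using the question's Student name (the class body is a stand-in): annotate the global at module level; the global statement inside a function takes no annotation.

from typing import Optional

class Student:
    pass

# Module-level (global) names can be annotated like any other variable;
# Optional[Student] is shorthand for Union[Student, None].
student: Optional[Student] = None

def enroll() -> None:
    global student  # `global student: Student` would be a syntax error
    student = Student()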
Declare a global variable with certain type
In Python, is it possible to declare a global variable with a type? I know it is fine to declare a local variable like this. student: Student Or global student But I'm looking for something like this global student: Student
[ "I did it like this:\nfrom typing import Union\nfrom my_class import MyClass\n\nfoo_my_class: Union[MyClass, None] = None\n\ndef setup_function():\n \"\"\" test setup \"\"\"\n global foo_my_class\n foo_my_class = MyClass()\n...\n\nI.e., this is a test module, and I want foo_my_class to be available at global scope for every test in the module. The setup_function() runs before every test, so I re-initialize foo_my_class each time (so that it is fresh and clean for each test).\nfoo_my_class still has to be declared at global scope, and it makes most sense to me to init to None, hence the Union typing. You don't need to do it like this, but if you don't initialize here, flake8 will complain. There are a few other ways to do this, but this one works and satisfies my linters (flake8, pylint, mypy).\n" ]
[ 0 ]
[ "There are no set types for python variables, so you just need to declare global variable - which would automatically be any variable declared outside of a function scope.\n" ]
[ -1 ]
[ "global_variables", "python", "types", "variables" ]
stackoverflow_0057928762_global_variables_python_types_variables.txt
Q: Sum all columns by month? I have a dataframe: date C P 0 15.4.21 0.06 0.94 1 16.4.21 0.15 1.32 2 2.5.21 0.06 1.17 3 8.5.21 0.20 0.82 4 9.6.21 0.04 -5.09 5 1.2.22 0.05 7.09 I need to create 2 columns where I sum both C and P for each month. So the new df will have 2 columns; for example, for month 4 (April), (0.06+0.94+0.15+1.32) = 2.47, so the new df: 4/21 5/21 6/21 2/22 0 2.47 2.25 .. .. Column names and order don't matter; actually a string month name is even better (April 22). I was playing with something like this, which is not what I need: df[['C','P']].groupby(df['date'].dt.to_period('M')).sum() A: You almost had it; you need to convert to datetime first (with dayfirst=True, since the dates are day-first): out = (df[['C','P']] .groupby(pd.to_datetime(df['date'], dayfirst=True) .dt.to_period('M')) .sum() ) Output: C P date 2021-04 0.21 2.26 2021-05 0.26 1.99 2021-06 0.04 -5.09 2022-02 0.05 7.09 If you want the grand total, sum again: out = (df[['C','P']] .groupby(pd.to_datetime(df['date'], dayfirst=True).dt.to_period('M')) .sum().sum(axis=1) ) Output: date 2021-04 2.47 2021-05 2.25 2021-06 -5.05 2022-02 7.14 Freq: M, dtype: float64 as "Month year" If you want a string, it's better to convert at the end to keep the order: out.index = out.index.strftime('%B %y') Output: date April 21 2.47 May 21 2.25 June 21 -5.05 February 22 7.14 dtype: float64
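If you specifically want the one-row, one-column-per-month layout shown in the question, a sketch building on the same totals (the day-first date format is an assumption consistent with the sample data):

import pandas as pd

df = pd.DataFrame({"date": ["15.4.21", "16.4.21", "2.5.21", "8.5.21", "9.6.21", "1.2.22"],
                   "C": [0.06, 0.15, 0.06, 0.20, 0.04, 0.05],
                   "P": [0.94, 1.32, 1.17, 0.82, -5.09, 7.09]})

month = pd.to_datetime(df["date"], format="%d.%m.%y").dt.to_period("M")
totals = df.groupby(month)[["C", "P"]].sum().sum(axis=1)

# One row, one "Month year" column per month, e.g. April 21 -> 2.47, May 21 -> 2.25.
wide = totals.to_frame().T
wide.columns = [m.strftime("%B %y") for m in wide.columns]
print(wide)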
Sum all columns by month?
I have a dataframe: date C P 0 15.4.21 0.06 0.94 1 16.4.21 0.15 1.32 2 2.5.21 0.06 1.17 3 8.5.21 0.20 0.82 4 9.6.21 0.04 -5.09 5 1.2.22 0.05 7.09 I need to create 2 columns where I sum both C and P for each month. So the new df will have 2 columns; for example, for month 4 (April), (0.06+0.94+0.15+1.32) = 2.47, so the new df: 4/21 5/21 6/21 2/22 0 2.47 2.25 .. .. Column names and order don't matter; actually a string month name is even better (April 22). I was playing with something like this, which is not what I need: df[['C','P']].groupby(df['date'].dt.to_period('M')).sum()
[ "You almost had it, you need to convert first to_datetime:\nout = (df[['C','P']]\n .groupby(pd.to_datetime(df['date'], day_first=True)\n .dt.to_period('M'))\n .sum()\n )\n\nOutput:\n C P\ndate \n2021-02 0.06 1.17\n2021-04 0.21 2.26\n2021-08 0.20 0.82\n2021-09 0.04 -5.09\n2022-01 0.05 7.09\n\nIf you want the grand total, sum again:\nout = (df[['C','P']]\n .groupby(pd.to_datetime(df['date']).dt.to_period('M'))\n .sum().sum(axis=1)\n )\n\nOutput:\ndate\n2021-02 1.23\n2021-04 2.47\n2021-08 1.02\n2021-09 -5.05\n2022-01 7.14\nFreq: M, dtype: float64\n\nas \"Month year\"\nIf you want a string, better convert it in the end to keep the order:\nout.index = out.index.strftime('%B %y')\n\nOutput:\ndate\nFebruary 21 1.23\nApril 21 2.47\nAugust 21 1.02\nSeptember 21 -5.05\nJanuary 22 7.14\ndtype: float64\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074522628_pandas_python.txt
Q: How to get rid of comma at the end of printing from loop So basically I have a list of many points and I want to extract only the unique values. I have written a function but I have 1 problem: how to avoid printing a comma at the end of the list? def unique(list1): unique_values = [] for u in list1: if u not in unique_values: unique_values.append(u) for u in unique_values: print(u, end=", ") wells = ["U1", "U1", "U3", "U3", "U3", "U5", "U5", "U5", "U7", "U7", "U7", "U7", "U7", "U8", "U8"] print("The unique values from list are...:", end=" ") unique(wells) My output for now is: "The unique values from list are...: U1, U3, U5, U7, U8," A: Replace: for u in unique_values: print(u, end=", ") with the pythonic: print(', '.join(unique_values)) It's also generally better style to return unique_values and use print(', '.join(unique(wells))) A: It might be overkill, but you can use NumPy's "unique" method, which is probably more efficient, especially for large arrays or long lists. The following code will do: import numpy as np x = np.array(['a', 'a', 'b', 'c', 'd', 'd']) y = np.unique(x) print(', '.join(y)) and the result is: a, b, c, d Added in edit: the following solution also works for non-string lists. ''' Print unique values in a list or numpy array ''' x = [1, 2, 2, 5, 7, 1, 3] print(x) # set(x) will return a set of the unique values of x u = set(x) print(u) # remove the curly brackets str_u = str(u).strip("}{") print(str_u) and the result is 1, 2, 3, 5, 7 A: Hope the changes below help you :) def unique(list1): unique_values = [] for u in list1: if u not in unique_values: unique_values.append(u) return unique_values wells = ["U1", "U1", "U3", "U3", "U3", "U5", "U5", "U5", "U7", "U7", "U7", "U7", "U7", "U8", "U8"] req = unique(wells) # prints in list format print(f"The unique values from list are...: {req}") # prints in string format print(f"The unique values from list are...: {' '.join(req)}") Alternatively, the unique values can also be found using set(wells) A: There are a couple of issues here: the trailing comma noted in the question, and testing membership on an unsorted list. Use str.join() to handle the trailing comma, and don't even try to keep track of unique values and check if a new value has already been seen; just cast to set. def unique(list1): unique_values = set(list1) print(', '.join(sorted(unique_values))) wells = ["U1", "U1", "U3", "U3", "U3", "U5", "U5", "U5", "U7", "U7", "U7", "U7", "U7", "U8", "U8"] print("The unique values from list are...:", end=" ") unique(wells)
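One more idiom worth knowing (not in the original answers): dict keys are unique and, since Python 3.7, keep insertion order, so dict.fromkeys deduplicates without losing the original ordering, and str.join avoids the trailing comma:

wells = ["U1", "U1", "U3", "U3", "U3", "U5", "U5", "U5",
         "U7", "U7", "U7", "U7", "U7", "U8", "U8"]

# Deduplicate while preserving first-seen order, then join once.
unique_values = list(dict.fromkeys(wells))
print("The unique values from list are...:", ", ".join(unique_values))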
How to get rid of comma at the end of printing from loop
So basically I have a list of many points and I want to extract only the unique values. I have written a function but I have 1 problem: how to avoid printing a comma at the end of the list? def unique(list1): unique_values = [] for u in list1: if u not in unique_values: unique_values.append(u) for u in unique_values: print(u, end=", ") wells = ["U1", "U1", "U3", "U3", "U3", "U5", "U5", "U5", "U7", "U7", "U7", "U7", "U7", "U8", "U8"] print("The unique values from list are...:", end=" ") unique(wells) My output for now is: "The unique values from list are...: U1, U3, U5, U7, U8,"
[ "Replace:\n\n for u in unique_values:\n print(u, end=\", \")\n\nwith the pythonic:\n\n print(', '.join(unique_values))\n\nAlso generally better style to return unique_values and use print(', '.join(unique(wells)))\n", "It might be an overkill but you can use NumPy \"unique\" method, which is probably more efficient, a specially for large arrays or long lists.\nThe following code will do:\nimport numpy as np\nx = np.array(['a', 'a', 'b', 'c', 'd', 'd'])\ny = np.unique(x)\nprint(', '.join(y))\n\nand the result is:\na, b, c, d\n\nAdded in Edit: the following solution works also for non-string lists.\n''' Print unique values in a list or numpy array '''\n\nx = [1, 2, 2, 5, 7, 1, 3]\nprint(x)\n\n# set(x) will return a set of the unique values of x\nu = set(x)\nprint(u)\n\n# remove the curly brackets\nstr_u = str(u).strip(\"}{\")\nprint(str_u)\n\nand the result is\n1, 2, 3, 5, 7\n\n", "hope the below changes helps you :)\ndef unique(list1):\n unique_values = []\n for u in list1:\n if u not in unique_values:\n unique_values.append(u)\n return unique_values\n\n\nwells = [\"U1\", \"U1\", \"U3\", \"U3\", \"U3\", \"U5\", \"U5\", \"U5\", \"U7\", \"U7\", \"U7\", \"U7\", \"U7\", \"U8\", \"U8\"]\nreq = unique(wells)\n# prints in list format\nprint(f\"The unique values from list are...: {req}\")\n# prints in string format\nprint(f\"The unique values from list are...: {' '.join(req)}\")\n\nor the unique values can also be found out using set(wells)\n", "There are a couple issues here: the trailing , as noted in the question, testing membership on an unsorted list. Use str.join() to handle the trailing comma, and don't even try to keep track of unique values and check if a new value has already been seen, just cast to set.\ndef unique(list1):\n unique_values = set(list1)\n print(', '.join(sorted(unique_values)))\n\n\nwells = [\"U1\", \"U1\", \"U3\", \"U3\", \"U3\", \"U5\", \"U5\", \"U5\", \"U7\", \"U7\", \"U7\", \"U7\", \"U7\", \"U8\", \"U8\"]\nprint(\"The unique values from list are...:\", end=\" \")\nunique(wells)\n\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074522488_python.txt
Q: How can I pass a requirement into a sort function, and create a template sort method to sort one of the attributes of the class "Reading" in Python #i want to pass the list, and algorithm (bubblesort) into the sort method with a requirement (temp or wind_speed) class Reading: def __init__(self, _temperature, _windspeed): self.temp = _temperature self.windspeed = _windspeed def bubblesort(num): for i in range (len(num)-1, 0, -1): for j in range (i): if num[j] > num [j+1] : temp = num[j] num[j] = num[j+1] num[j+1] = temp return num r_list = [Reading(randint(10, 60), randint(10, 60)) for i in range(20)] def sort(lst, alg): #how do i pass the requirement, and alg? bubblesort(lst) sort(r_list, alg) #how do i create a templated bubblesort to either sort temp or windspeed? #The Output is supposed to return a sorted list (r_list) according to the requirement A: Here's an example of how to pass a function as an argument to another function: def add(x, y): return x + y def mul(x,y): return x * y def calculate(x, y, func): return func(x, y) z1 = calculate(1, 1, add) z2 = calculate(1, 1, mul) print(f"add = {z1}, mul = {z2}")
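Combining the answer's idea with the question's classes, a sketch that passes both the algorithm and a key function selecting the attribute to sort by (mirroring sorted()'s key argument; this is one possible shape, not the assignment's required one):

from random import randint

class Reading:
    def __init__(self, temperature, windspeed):
        self.temp = temperature
        self.windspeed = windspeed

def bubblesort(items, key):
    # Compare key(x) instead of x, so the same algorithm sorts by any attribute.
    for i in range(len(items) - 1, 0, -1):
        for j in range(i):
            if key(items[j]) > key(items[j + 1]):
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def sort(lst, alg, key):
    return alg(lst, key)  # the algorithm itself is a passed-in function

r_list = [Reading(randint(10, 60), randint(10, 60)) for _ in range(20)]
sort(r_list, bubblesort, key=lambda r: r.temp)       # sort by temperature
sort(r_list, bubblesort, key=lambda r: r.windspeed)  # or by wind speed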
How can I pass a requirement into a sort function, and create a template sort method to sort one of the attributes of the class "Reading" in Python
#i want to pass the list, and algorithm (bubblesort) into the sort method with a requirement (temp or wind_speed) class Reading: def __init__(self, _temperature, _windspeed): self.temp = _temperature self.windspeed = _windspeed def bubblesort(num): for i in range (len(num)-1, 0, -1): for j in range (i): if num[j] > num [j+1] : temp = num[j] num[j] = num[j+1] num[j+1] = temp return num r_list = [Reading(randint(10, 60), randint(10, 60)) for i in range(20)] def sort(lst, alg): #how do i pass the requirement, and alg? bubblesort(lst) sort(r_list, alg) #how do i create a templated bubblesort to either sort temp or windspeed? #The Output is supposed to return a sorted list (r_list) according to the requirement
[ "Here's an example how to pass a function as an argument to another function:\ndef add(x, y):\n return x + y\n\ndef mul(x,y):\n return x * y\n\ndef calculate(x, y, func):\n return func(x, y)\n\nz1 = calculate(1, 1, add)\nz2 = calculate(1, 1, mul)\n\nprint(f\"add = {z1}, mul = {z2}\")\n\n" ]
[ 2 ]
[]
[]
[ "bubble_sort", "objectlistview_python", "python", "sorting" ]
stackoverflow_0074522565_bubble_sort_objectlistview_python_python_sorting.txt
Q: How to fix Django / python free(): invalid pointer? When I run the django manage.py app, I get a free(): invalid pointer error. Example: >python manage.py check System check identified no issues (0 silenced). free(): invalid pointer Abortado (imagem do núcleo gravada) [Portuguese for "Aborted (core dumped)"] The django app is running fine but I'm trying to get rid of this message. How can I fix this error or get more info to debug it? Python 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] on linux (Ubuntu 20.04) Django==2.2.28 with virtualenv A: I have tested with the same environment and the same Django version and ran the check command, which did not yield this problem. I assume it is an issue with PyTorch, as mentioned here: GitHub issue #21018. To resolve it, you can take the following steps (copy-pasted from this SO answer: https://stackoverflow.com/a/56363390/2696165) There is a known issue with importing both open3d and PyTorch. A few possible workarounds exist: (1) Some people have found that changing the order in which you import the two packages can resolve the issue, though in my personal testing both ways crash. (2) Other people have found compiling both packages from source to help. (3) Still others have found that moving open3d and PyTorch to be called from separate scripts resolves the issue. A: That's a C error related to memory freeing, probably due to a bug from some unpatched dependency package, since you're using an old Django version. Some options: Check your project at debug level: python3 manage.py check --fail-level DEBUG List your dependencies and look online for which one originates that error: python3 -m pip freeze Log files. A: This usually happens with external dependencies. You can run the check command on a specific app: python manage.py check app1 app2 app3 For example: python manage.py check auth user myapp Running on any specific database is also possible. python manage.py check --database default --database other A few similar articles found: https://github.com/duckietown/apriltags3-py/issues/1 https://github.com/googleapis/python-pubsub/issues/414 https://bugs.python.org/issue41335
How to fix Django / python free(): invalid pointer?
When I run the django manage.py app, I get a free(): invalid pointer error. Example: >python manage.py check System check identified no issues (0 silenced). free(): invalid pointer Abortado (imagem do núcleo gravada) [Portuguese for "Aborted (core dumped)"] The django app is running fine but I'm trying to get rid of this message. How can I fix this error or get more info to debug it? Python 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] on linux (Ubuntu 20.04) Django==2.2.28 with virtualenv
[ "I have tested with same environment with same Django version and I ran the check command, which did not yield this problem. I assume it is an issue with Pytorch, as mentioned in here: GitHub issue #21018.\nTo resolve it, you can take the following steps (copy pasted from this SO answer: https://stackoverflow.com/a/56363390/2696165)\n\n\nThere is a known issue with importing both open3d and PyTorch. A few possible workarounds exist:\n(1) Some people have found that changing the order in which you import the two packages can resolve the issue, though in my personal testing both ways crash.\n(2) Other people have found compiling both packages from source to help.\n(3) Still others have found that moving open3d and PyTorch to be called from separate scripts resolves the issue.\n\n", "That's a C error related to memory freeing, probably due to a bug from some unpatched dependency package, since you're using an old Django version.\nSome options:\n\nCheck your project at debug level:\npython3 manage.py check --fail-level DEBUG\n\nList your dependencies and look online for which one originates that error:\npython3 -m pip freeze\n\nLog files.\n\n\n", "This usually happens with external dependencies.\nYou can run the check command on a specific app:\n python manage.py check app1 app2 app3 \n\nFor Example:\n python manage.py check auth user myapp\n\n\nRunning on any specific database is also possible.\n python manage.py check --database default --database other\n\nA few similar articles found:\nhttps://github.com/duckietown/apriltags3-py/issues/1\nhttps://github.com/googleapis/python-pubsub/issues/414\nhttps://bugs.python.org/issue41335\n" ]
[ 1, 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0073313134_django_python.txt
Q: django-smart-selects not working properly I want to have a chained foreign key in my django-admin, and thus I am using django-smart-selects. I have followed the documentation properly: Install django-smart-selects Add it in installed_apps in settings.py Add this line in my base urls.py url(r'^chaining/', include('smart_selects.urls')), changed my model accordingly: class AddressModel(BaseModel): country = models.ForeignKey(CountryModel, null=True, blank=True, on_delete=models.PROTECT) state = ChainedForeignKey(StateModel, chained_field=country, chained_model_field=country, null=True, blank=True, on_delete=models.PROTECT) city = models.CharField(max_length=200, null=True, blank=True) and added this to my settings.py JQUERY_URL = True But every time I try to create an address from the admin, I get this error: if path.startswith(('http://', 'https://', '/')): AttributeError: 'bool' object has no attribute 'startswith' How do I solve this? A: Use USE_DJANGO_JQUERY = True instead of JQUERY_URL = True in your settings.py A: Use USE_DJANGO_JQUERY = True and JQUERY_URL = False Add this to your HTML: <script src="/static/smart-selects/admin/js/chainedfk.js"></script> <script src="/static/smart-selects/admin/js/bindfields.js"></script> A: I ran 'show staticfiles' in the terminal and got two static folders; a smart-selects folder was made automatically. Then I copied the smart-selects folder and pasted it into the static folder I had created, and that resolved all my problems. Thank you. show static files
django-smart-selects not working properly
I want to have a chained foreign key in my django-admin, and thus I am using django-smart-selects. I have followed the documentation properly: Install django-smart-selects Add it in installed_apps in settings.py Add this line in my base urls.py url(r'^chaining/', include('smart_selects.urls')), changed my model accordingly: class AddressModel(BaseModel): country = models.ForeignKey(CountryModel, null=True, blank=True, on_delete=models.PROTECT) state = ChainedForeignKey(StateModel, chained_field=country, chained_model_field=country, null=True, blank=True, on_delete=models.PROTECT) city = models.CharField(max_length=200, null=True, blank=True) and added this to my settings.py JQUERY_URL = True But every time I try to create an address from the admin, I get this error: if path.startswith(('http://', 'https://', '/')): AttributeError: 'bool' object has no attribute 'startswith' How do I solve this?
[ "Use USE_DJANGO_JQUERY = True instead of JQUERY_URL = True in your settings.py\n", "Use USE_DJANGO_JQUERY = True\nand JQUERY_URL = False\n\nAdd this in your html\n\n <script src=\"/static/smart-selects/admin/js/chainedfk.js\"></script>\n <script src=\"/static/smart-selects/admin/js/bindfields.js\"></script>\n\n", "i put a code on terminal 'show staticfiles' then I got two static folders. automatically made a smart selectors folder , then I copied the smart selectors folder and pasted it into my only created static file , then resolved my all problems . Thank you.\nshow static files\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "django", "django_admin", "django_models", "django_smart_selects", "python" ]
stackoverflow_0061727169_django_django_admin_django_models_django_smart_selects_python.txt
Q: pyproject.toml/setuptools duplicates files into root site-packages directory I have a problem with how pip/setuptools is installing my package. When installing from the project directory (i.e. pip install .) my project's sub-packages are duplicated and placed in the root site-packages directory. The configuration is set entirely within pyproject.toml (with a minimal setup.py for compiling a single extension). If my package is named myproject, which contains 3 sub-packages and depends on 3 dependencies, this is the expected directory structure under site-packages in the venv: site-packages - dependency1 - dependency2 - dependency3 - myproject - subpackage1 - subpackage2 - subpackage3 Yet below is what I end up with; it looks like any folder containing any .py files is copied to the root site-packages (i.e. including the venv itself and docs, since they contain .py files): site-packages - dependency1 - dependency2 - dependency3 - myproject - subpackage1 - subpackage2 - subpackage3 - subpackage1 - subpackage2 - subpackage3 - docs - venv What can I do to avoid duplicating sub-packages into the top-level site-packages directory and install correctly? Here is my project structure: myproject/ - pyproject.toml - setup.py - docs/ - myproject/ - __init__.py - subpackage1/ - subpackage2/ - subpackage3/ - venv/ The reduced contents of pyproject.toml [project] name = "myproject" requires-python = ">= 3.7" dependencies = [ "dependency1", "dependency2", "dependency3", ] [tool.setuptools] packages = [ "myproject", "myproject.subpackage1", "myproject.subpackage2", "myproject.subpackage3", ] [build-system] requires = ["setuptools >= 61.0.0", "cython"] build-backend = "setuptools.build_meta" The contents of setup.py: from setuptools import Extension, setup from Cython.Build import cythonize ext_modules = [ Extension( "subpackage1.func", ["..."], extra_compile_args=['-fopenmp'], extra_link_args=['-fopenmp'], ) ] setup(ext_modules=cythonize(ext_modules))
pyproject.toml/setuptools duplicates files into root site-packages directory
I have a problem with how pip/setuptools is installing my package. When installing from the project directory (i.e. pip install .) my project's sub-packages are duplicated and placed in the root site-packages directory. The configuration is set entirely within pyproject.toml (with a minimal setup.py for compiling a single extension). If my package is named myproject, which contains 3 sub-packages and depends on 3 dependencies, this is the expected directory structure under site-packages in the venv: site-packages - dependency1 - dependency2 - dependency3 - myproject - subpackage1 - subpackage2 - subpackage3 Yet below is what I end up with; it looks like any folder containing any .py files is copied to the root site-packages (i.e. including the venv itself and docs, since they contain .py files): site-packages - dependency1 - dependency2 - dependency3 - myproject - subpackage1 - subpackage2 - subpackage3 - subpackage1 - subpackage2 - subpackage3 - docs - venv What can I do to avoid duplicating sub-packages into the top-level site-packages directory and install correctly? Here is my project structure: myproject/ - pyproject.toml - setup.py - docs/ - myproject/ - __init__.py - subpackage1/ - subpackage2/ - subpackage3/ - venv/ The reduced contents of pyproject.toml [project] name = "myproject" requires-python = ">= 3.7" dependencies = [ "dependency1", "dependency2", "dependency3", ] [tool.setuptools] packages = [ "myproject", "myproject.subpackage1", "myproject.subpackage2", "myproject.subpackage3", ] [build-system] requires = ["setuptools >= 61.0.0", "cython"] build-backend = "setuptools.build_meta" The contents of setup.py: from setuptools import Extension, setup from Cython.Build import cythonize ext_modules = [ Extension( "subpackage1.func", ["..."], extra_compile_args=['-fopenmp'], extra_link_args=['-fopenmp'], ) ] setup(ext_modules=cythonize(ext_modules))
[ "I've just ran into the same issue.\nIn my case, the build directory used by pip was polluted with the \"subfolders\", probably because of a previous run where my package discovery settings were erroneous.\nBecause of this, although my configuration was (now) correct, these orphaned directories were copied to my site-packages as well.\nIn my case the build directory was in the folder where I called pip install . from.\nIf you want to find the build directory, or just check whether this is the problem, log pip's output to a file with pip install . --log foo.txt, and search for copying inside.\nYou should see a line like:\nArguments: ('copying', '<build directory>\\\\lib\\\\subpackage1\\\\bar.py', ...\nHope this helps!\n" ]
[ 0 ]
[]
[]
[ "pyproject.toml", "python", "setuptools" ]
stackoverflow_0073491139_pyproject.toml_python_setuptools.txt
Q: How to save a json file using json.dump without the square bracket I need to save the json file without the beginning and ending [ and ] respectively. Sample data: import pandas as pd import json df = pd.DataFrame({'name' : ['abc', 'pqr', 'xzy'], 'score' : [85, 90, 80], 'address' : ['ab street', 'pq street', 'xy ave']}) df name score address 0 abc 85 ab street 1 pqr 90 pq street 2 xzy 80 xy ave I then try to save the above dataframe using: jl = json.loads(df.to_json(orient='records')) f = open('expfile.json', 'w') json.dump(jl, f, indent = 4) f.close() Output: [ { "name": "abc", "score": 85, "address": "ab street" }, { "name": "pqr", "score": 90, "address": "pq street" }, { "name": "xzy", "score": 80, "address": "xy ave" } ] Which is fine enough, but I need the output without the starting and ending square brackets as below: { "name": "abc", "score": 85, "address": "ab street" }, { "name": "pqr", "score": 90, "address": "pq street" }, { "name": "xzy", "score": 80, "address": "xy ave" } Could someone please let me know how to accomplish this. PS I have complex nested dictionary/json structures inside my columns in many of my dataframes; I parsed them using ast.literal_eval. I tried using to_json(orient='records', lines=True), but I got this error: JSONDecodeError: Extra data: line 2 column 1 (char 425). A: The jsoning-in-a-loop variant would be something like this: jl = [ { "name": "abc", "score": 85, "address": "ab street" }, { "name": "pqr", "score": 90, "address": "pq street" }, { "name": "xzy", "score": 80, "address": "xy ave" } ] import json print(",\n".join(json.dumps(x, indent=4) for x in jl)) Produces { "name": "abc", "score": 85, "address": "ab street" }, { "name": "pqr", "score": 90, "address": "pq street" }, { "name": "xzy", "score": 80, "address": "xy ave" } A: If you want to drop the []'s in the output, you can iterate over the rows in the dataframe and write out one record at a time. import pandas as pd import json df = pd.DataFrame({'name' : ['abc', 'pqr', 'xzy'], 'score' : [85, 90, 80], 'address' : ['ab street', 'pq street', 'xy ave']}) with open("out.dat", "w") as fout: for idx, row in df.iterrows(): if idx != 0: fout.write(',\n') fout.write(json.dumps(row.to_dict(), indent=4)) Output: { "name": "abc", "score": 85, "address": "ab street" }, { "name": "pqr", "score": 90, "address": "pq street" }, { "name": "xzy", "score": 80, "address": "xy ave" }
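On the PS: to_json(orient='records', lines=True) does write one bracket-free object per line, but the result is newline-delimited JSON (JSON Lines), not one valid JSON document, which is presumably why json.loads raised "Extra data". A sketch under that assumption:

import pandas as pd

df = pd.DataFrame({"name": ["abc", "pqr", "xzy"],
                   "score": [85, 90, 80],
                   "address": ["ab street", "pq street", "xy ave"]})

# One JSON object per line, no surrounding [ ].
df.to_json("expfile.json", orient="records", lines=True)

# Read it back with lines=True instead of json.loads:
df2 = pd.read_json("expfile.json", lines=True)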
How to save a json file using json.dump without the square bracket
I need to save the json file without the beginning and ending [ and ] respectively. Sample data: import pandas as pd import json df = pd.DataFrame({'name' : ['abc', 'pqr', 'xzy'], 'score' : [85, 90, 80], 'address' : ['ab street', 'pq street', 'xy ave']}) df name score address 0 abc 85 ab street 1 pqr 90 pq street 2 xzy 80 xy ave I then try to save the above dataframe using: jl = json.loads(df.to_json(orient='records')) f = open('expfile.json', 'w') json.dump(jl, f, indent = 4) f.close() Output: [ { "name": "abc", "score": 85, "address": "ab street" }, { "name": "pqr", "score": 90, "address": "pq street" }, { "name": "xzy", "score": 80, "address": "xy ave" } ] Which is fine enough, but I need the output without the starting and ending square brackets as below: { "name": "abc", "score": 85, "address": "ab street" }, { "name": "pqr", "score": 90, "address": "pq street" }, { "name": "xzy", "score": 80, "address": "xy ave" } Could someone please let me know how to accomplish this. PS I have complex nested dictionary/json structures inside my columns in many of my dataframes; I parsed them using ast.literal_eval. I tried using to_json(orient='records', lines=True), but I got this error: JSONDecodeError: Extra data: line 2 column 1 (char 425).
[ "The jsoning-in-a-loop variant would be something like this:\njl = [\n {\n \"name\": \"abc\",\n \"score\": 85,\n \"address\": \"ab street\"\n },\n {\n \"name\": \"pqr\",\n \"score\": 90,\n \"address\": \"pq street\"\n },\n {\n \"name\": \"xzy\",\n \"score\": 80,\n \"address\": \"xy ave\"\n }\n]\n\nimport json\nprint(\",\\n\".join(json.dumps(x, indent=4) for x in jl))\n\nProduces\n\n{ \n \"name\": \"abc\", \n \"score\": 85, \n \"address\": \"ab street\" \n}, \n{ \n \"name\": \"pqr\", \n \"score\": 90, \n \"address\": \"pq street\" \n}, \n{ \n \"name\": \"xzy\", \n \"score\": 80, \n \"address\": \"xy ave\" \n}\n\n\n", "If want to drop the []'s in output then can iterate over the rows in the dataframe and write out a record at a time.\nimport pandas as pd\nimport json\n\ndf = pd.DataFrame({'name' : ['abc', 'pqr', 'xzy'],\n 'score' : [85, 90, 80],\n 'address' : ['ab street', 'pq street', 'xy ave']})\n\nwith open(\"out.dat\", \"w\") as fout:\n for idx, row in df.iterrows():\n if idx != 0:\n fout.write(',\\n')\n fout.write(json.dumps(row.to_dict(), indent=4))\n\nOutput:\n{\n \"name\": \"abc\",\n \"score\": 85,\n \"address\": \"ab street\"\n},\n{\n \"name\": \"pqr\",\n \"score\": 90,\n \"address\": \"pq street\"\n},\n{\n \"name\": \"xzy\",\n \"score\": 80,\n \"address\": \"xy ave\"\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074522486_json_pandas_python.txt
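A related sketch, offered as a hedged aside: pandas can emit newline-delimited JSON (one object per line, no enclosing brackets) directly, which avoids the manual loop above. Note that this NDJSON form drops the indentation and trailing commas of the asked-for output, and the question's JSONDecodeError most likely came from calling json.loads on the multi-document lines=True output; reading it back with read_json(..., lines=True) avoids that. The file name below is a placeholder.

import pandas as pd

df = pd.DataFrame({'name': ['abc', 'pqr', 'xzy'],
                   'score': [85, 90, 80],
                   'address': ['ab street', 'pq street', 'xy ave']})

# one JSON object per line, no [ ] around the records
df.to_json('expfile.jsonl', orient='records', lines=True)

# read it back with the matching flag
df2 = pd.read_json('expfile.jsonl', orient='records', lines=True)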
Q: How do I number the inputs that I take from the user? How do I list inputs? I'm writing a program whereby, when the user inputs 20 heights of the students, it automatically determines the tallest and shortest height. I want to ask for the input in this way:
Height of Student No.1 =
Height of Student No.2 =

for x in range(20):
    height = float(input("Height of Student No. = " ))

I tried to do this:
for x in range(20):
    height = float(input("Height of Student No." , x))

But it gave me this error
TypeError: input expected at most 1 argument, got 2

A: As the error suggests, input() only takes one argument and you gave it two: you gave it your string "Height of Student No." and you gave it x.
I think what you want here is to include the value of x in your string. This can be accomplished using f-strings like so:
for x in range(1,21):
    height = float(input(f"Height of Student No. {x}"))

Putting the f at the beginning of the string allows you to put variables inside of {} and it will evaluate your string.
That said, your result will just save into height and then the next one will save over that, and onward, so you probably want to create a list and append them to it, like so:
my_heights = [] # an empty list
for x in range(1,21):
    height = float(input(f"Height of Student No. {x}"))
    my_heights.append(height)
minimum = min(my_heights)
maximum = max(my_heights)

edit: start ranges at 1 instead of at 0, end at 21
How do I number the inputs that I take from the user?
How do I list inputs? I'm writing a program whereby, when the user inputs 20 heights of the students, it automatically determines the tallest and shortest height. I want to ask for the input in this way:
Height of Student No.1 =
Height of Student No.2 =

for x in range(20):
    height = float(input("Height of Student No. = " ))

I tried to do this:
for x in range(20):
    height = float(input("Height of Student No." , x))

But it gave me this error
TypeError: input expected at most 1 argument, got 2
[ "As the error suggests, input() only takes one argument and you gave it two: you gave it your string \"Height of Student No.\" and you gave it x.\nI think what you want here is to include the value of x in your string. This can be accomplished using f-strings like so:\nfor x in range(1,21):\n height = float(input(f\"Height of Student No. {x}\"))\n\nPutting the f at the beginning of the string allows you to put variables inside of {} and it will evaluate your string.\nThat said, your result will just save into height and then the next one will save over that, and onward, so you probably want to create a list and append them to it, like so:\nmy_heights = [] # an empty list\nfor x in range(1,21):\n height = float(input(f\"Height of Student No. {x}\"))\n my_heights.append(height)\nminimum = min(my_heights)\nmaximum = max(my_heights)\n\nedit: start ranges at 1 instead of at 0, end at 21\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074522766_python.txt
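A small follow-up sketch that also covers the tallest/shortest requirement stated in the question; it simply stores each height and applies min()/max() at the end. This is an illustration, not the accepted answer's code.

heights = []
for i in range(1, 21):
    heights.append(float(input(f"Height of Student No.{i} = ")))

print("Tallest height:", max(heights))
print("Shortest height:", min(heights))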
Q: One connection to DB for app, or a connection on every execution? I'm using the psycopg2 library to connect to my PostgreSQL database. Every time I want to execute any query, I make a new connection like this:
import psycopg2

def run_query(query):
    with psycopg2.connect("dbname=test user=postgres") as connection:
        cursor = connection.cursor()
        cursor.execute(query)
        cursor.close()

But I think it's faster to make one connection for the whole app execution, like this:
import psycopg2

connection = psycopg2.connect("dbname=test user=postgres")

def run_query(query):
    cursor = connection.cursor()
    cursor.execute(query)
    cursor.close()

So which is the better way to connect to my database for the whole execution time of my app? I've tried both ways and both worked, but I want to know which is better and why.
A: Both ways are bad. The first one is particularly bad, because opening a database connection is quite expensive. The second is bad, because you will end up with a single connection (which is too few) or one connection per process or thread (which is usually too many).
Use a connection pool.
A: You should strongly consider using a connection pool, as other answers have suggested; this will be less costly than creating a connection every time you query, as well as handle workloads that one connection alone couldn't.
Create a file called something like mydb.py, and include the following:
import psycopg2
import psycopg2.pool
from contextlib import contextmanager

dbpool = psycopg2.pool.ThreadedConnectionPool(minconn=1,
                                              maxconn=10,
                                              host=<<YourHost>>,
                                              port=<<YourPort>>,
                                              dbname=<<YourDB>>,
                                              user=<<YourUser>>,
                                              password=<<YourPassword>>,
                                              )

@contextmanager
def db_cursor():
    conn = dbpool.getconn()
    try:
        with conn.cursor() as cur:
            yield cur
        conn.commit()
        """
        You can have multiple exception types here.
        For example, if you wanted to specifically check for the
        23503 "FOREIGN KEY VIOLATION" error type, you could do:
        except psycopg2.Error as e:
            conn.rollback()
            if e.pgcode == '23503':
                raise KeyError(e.diag.message_primary)
            else:
                raise Exception(e.pgcode)
        """
    except:
        conn.rollback()
        raise
    finally:
        dbpool.putconn(conn)

This will allow you to run queries like so:
import mydb

def myfunction():
    with mydb.db_cursor() as cur:
        cur.execute("""Select * from blahblahblah...""")
One connection to DB for app, or a connection on every execution?
I'm using the psycopg2 library to connect to my PostgreSQL database. Every time I want to execute any query, I make a new connection like this:
import psycopg2

def run_query(query):
    with psycopg2.connect("dbname=test user=postgres") as connection:
        cursor = connection.cursor()
        cursor.execute(query)
        cursor.close()

But I think it's faster to make one connection for the whole app execution, like this:
import psycopg2

connection = psycopg2.connect("dbname=test user=postgres")

def run_query(query):
    cursor = connection.cursor()
    cursor.execute(query)
    cursor.close()

So which is the better way to connect to my database for the whole execution time of my app? I've tried both ways and both worked, but I want to know which is better and why.
[ "Both ways are bad. The fist one is particularly bad, because opening a database connection is quite expensive. The second is bad, because you will end up with a single connection (which is too few) one connection per process or thread (which is usually too many).\nUse a connection pool.\n", "You should strongly consider using a connection pool, as other answers have suggested, this will be less costly than creating a connection every time you query, as well as deal with workloads that one connection alone couldn't deal with.\nCreate a file called something like mydb.py, and include the following:\nimport psycopg2\nimport psycopg2.pool\nfrom contextlib import contextmanager\n\ndbpool = psycopg2.pool.ThreadedConnectionPool(host=<<YourHost>>,\n port=<<YourPort>>,\n dbname=<<YourDB>>,\n user=<<YourUser>>,\n password=<<YourPassword>>,\n )\n\n@contextmanager\ndef db_cursor():\n conn = dbpool.getconn()\n try:\n with conn.cursor() as cur:\n yield cur\n conn.commit()\n \"\"\"\n You can have multiple exception types here.\n For example, if you wanted to specifically check for the\n 23503 \"FOREIGN KEY VIOLATION\" error type, you could do:\n except psycopg2.Error as e:\n conn.rollback()\n if e.pgcode = '23503':\n raise KeyError(e.diag.message_primary)\n else\n raise Exception(e.pgcode)\n \"\"\"\n except:\n conn.rollback()\n raise\n finally:\n dbpool.putconn(conn)\n\nThis will allow you run queries as so:\nimport mydb\n\ndef myfunction():\n with mydb.db_cursor() as cur:\n cur.execute(\"\"\"Select * from blahblahblah...\"\"\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "postgresql", "psycopg2", "python" ]
stackoverflow_0074511042_postgresql_psycopg2_python.txt
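For completeness, a minimal sketch of the simpler psycopg2 pool class (no context manager), assuming the same test DSN as the question; the pool bounds 1 and 10 are illustrative.

import psycopg2.pool

pool = psycopg2.pool.SimpleConnectionPool(1, 10, "dbname=test user=postgres")

def run_query(query):
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute(query)
        conn.commit()
    finally:
        pool.putconn(conn)  # return the connection to the pool instead of closing it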
Q: ctypes.util.find_library() did not manage to locate a library called 'pango-1.0-0' UBUNTU SERVER (EC2) I am setting up my EC2 instance on AWS with an UBUNTU 18.04 and running into the following error when trying to run this gunicorn command gunicorn --bind 0.0.0.0:8000 zipherJobCards.wsgi:application error: OSError: cannot load library 'pango-1.0-0': pango-1.0-0: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called 'pango-1.0-0' I ran into this error after installing weasyprint in both my base directory and my web apps directory. Does anyone know the cause of this and also how to fix it? A: I got a similar error and after installing this solved the problem sudo apt-get install -y libpangocairo-1.0-0 A: For mac, I just installed pango using homebrew and it seemed to work. Ran brew install pango Relevant link: https://formulae.brew.sh/formula/pango
ctypes.util.find_library() did not manage to locate a library called 'pango-1.0-0' UBUNTU SERVER (EC2)
I am setting up my EC2 instance on AWS with an UBUNTU 18.04 and running into the following error when trying to run this gunicorn command gunicorn --bind 0.0.0.0:8000 zipherJobCards.wsgi:application error: OSError: cannot load library 'pango-1.0-0': pango-1.0-0: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called 'pango-1.0-0' I ran into this error after installing weasyprint in both my base directory and my web apps directory. Does anyone know the cause of this and also how to fix it?
[ "I got a similar error and after installing this solved the problem\nsudo apt-get install -y libpangocairo-1.0-0\n", "For mac, I just installed pango using homebrew and it seemed to work.\nRan brew install pango\nRelevant link: https://formulae.brew.sh/formula/pango\n" ]
[ 2, 0 ]
[]
[]
[ "amazon_ec2", "django", "python", "ubuntu_18.04", "weasyprint" ]
stackoverflow_0070031075_amazon_ec2_django_python_ubuntu_18.04_weasyprint.txt
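To confirm whether the library can be located after installing the system package, a quick diagnostic sketch; the candidate names are illustrative of what a loader might probe on Linux, and None means the lookup failed.

import ctypes.util

for name in ("pango-1.0-0", "pango-1.0", "libpango-1.0.so.0"):
    print(name, "->", ctypes.util.find_library(name))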
Q: TypeError: counter_label() missing 1 required positional argument: 'label' I can count up, but I can't count down on a tkinter label.
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 839, in callit
func(*args)
TypeError: counter_label() missing 1 required positional argument: 'label'
import tkinter as tk

counter = 0
status = 1

def counter_label(label):
    global status

    def countup():
        global counter
        counter += 1
        label.after(1000, countup)

    def countdown():
        global counter
        counter -=1
        # for num in range(start,0,-1):
        #     print(num)
        label.after(1000,countdown)

    if status == 1:
        countup()
        if counter == 11:
            status = 0
    if status == 0:
        countdown()
        if counter == 0:
            status = 1
    label.config(text=str(counter))
    label.after(1000,counter_label)
    print(counter)

root = tk.Tk()
root.title("Counting Seconds")
label = tk.Label(root, fg="green")
label.pack()
counter_label(label)
root.mainloop()

A: Your recursive call
label.after(1000,counter_label)

can be modified to include the label argument with an anonymous function
label.after(1000,lambda x=label: counter_label(x))
TypeError: counter_label() missing 1 required positional argument: 'label'
I can count up, but I can't count down on a tkinter label.
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 839, in callit
func(*args)
TypeError: counter_label() missing 1 required positional argument: 'label'
import tkinter as tk

counter = 0
status = 1

def counter_label(label):
    global status

    def countup():
        global counter
        counter += 1
        label.after(1000, countup)

    def countdown():
        global counter
        counter -=1
        # for num in range(start,0,-1):
        #     print(num)
        label.after(1000,countdown)

    if status == 1:
        countup()
        if counter == 11:
            status = 0
    if status == 0:
        countdown()
        if counter == 0:
            status = 1
    label.config(text=str(counter))
    label.after(1000,counter_label)
    print(counter)

root = tk.Tk()
root.title("Counting Seconds")
label = tk.Label(root, fg="green")
label.pack()
counter_label(label)
root.mainloop()
[ "your recursive call\nlabel.after(1000,counter_label)\n\ncan be modified to include label argument with an anonymous function\nlabel.after(1000,lambda x=label: counter_label(x))\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074522773_python_tkinter.txt
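An alternative to the lambda, sketched here with functools.partial, which freezes the argument so after() receives a zero-argument callable. This is a simplified counter, not the question's full count-up/count-down logic.

from functools import partial
import tkinter as tk

def tick(label, n=0):
    label.config(text=str(n))
    # partial binds label and the next value now; tkinter calls the result with no args
    label.after(1000, partial(tick, label, n + 1))

root = tk.Tk()
lbl = tk.Label(root, fg="green")
lbl.pack()
tick(lbl)
root.mainloop()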
Q: Upload object to Oracle Storage using put_object in Python I'm trying to upload an object to Oracle Storage with oci-cli library in Python. When I try using command-line: oci os object put -ns grddddaaaZZ -bn dev.bucket --name processed/2020-11 --file /path/to/my/file/image.tif I actually get a response like: Upload ID: 4f...78f0fdc5 Split file into 2 parts for upload. Uploading object [------------------------------------] 0% ... but when I try using the framework: try: namespace = 'grddddaaaZZ' bucket = 'dev.bucket' object_path = 'processed/2020-11/image.tif' with open('/path/to/my/file/image.tif', "rb") as image: publish_payload = image.read() response = object_storage.put_object(namespace, bucket, object_path, publish_payload) except (InvalidConfig, BaseConnectTimeout, ConfigFileNotFound, ServiceError) as error: logging.error(">>>>>>>> Something went wrong when try to list bucket {} objects. Error {}". format(bucket, error)) the upload does not complete: ... response = object_storage.put_object(namespace, bucket, object_path, publish_payload) File ".../.venv/lib/python3.8/site-packages/oci/object_storage/object_storage_client.py", line 4113, in put_object return self.base_client.call_api( File ".../.venv/lib/python3.8/site-packages/oci/base_client.py", line 272, in call_api response = self.request(request) File ".../.venv/lib/python3.8/site-packages/oci/base_client.py", line 378, in request raise exceptions.RequestException(e) oci.exceptions.RequestException: ('Connection aborted.', timeout('The write operation timed out')) I thought that it could be the size of file (which is around 208Mb), but in put_object documentation says 5Gb limit. So, I do not think it could be the issue. My last chance would be to use os.system(), but it would not be what I truly want. Some clue in what could be missing in this second option? A: You could try uploading some other data first, to see if it's the payload: namespace = 'grddddaaaZZ' bucket_name = 'dev.bucket' object_name = 'processed/2020-11/test.txt' test_data = b"Hello, World!" obj = object_storage.put_object( namespace, bucket_name, object_name, my_data) or you try it without reading the file contents and just passing the file object: namespace = 'grddddaaaZZ' bucket = 'dev.bucket' object_path = 'processed/2020-11/image.tif' with open('/path/to/my/file/image.tif', 'rb') as f: obj = object_storage.put_object(namespace, bucket, object_path, f) A: with open('tomcat_access_log_20221118-231901.log.zip', 'rb') as filePtr: ... upload_resp = object_storage_client.put_object(nameSpace,bucket_name='my-Test-Bucket',object_name=file_to_upload,put_object_body=filePtr) Note : file_to_upload = 'empty_folder_for_testing/tomcat-admin-server/tomcat_access_log_20221118-231901.log.zip' The above code getting stuck for very log till end getting timeout. But actually i can see file uploaded properly. But this command getting stuck for long enough till timeout ... Any idea ?
Upload object to Oracle Storage using put_object in Python
I'm trying to upload an object to Oracle Storage with oci-cli library in Python. When I try using command-line: oci os object put -ns grddddaaaZZ -bn dev.bucket --name processed/2020-11 --file /path/to/my/file/image.tif I actually get a response like: Upload ID: 4f...78f0fdc5 Split file into 2 parts for upload. Uploading object [------------------------------------] 0% ... but when I try using the framework: try: namespace = 'grddddaaaZZ' bucket = 'dev.bucket' object_path = 'processed/2020-11/image.tif' with open('/path/to/my/file/image.tif', "rb") as image: publish_payload = image.read() response = object_storage.put_object(namespace, bucket, object_path, publish_payload) except (InvalidConfig, BaseConnectTimeout, ConfigFileNotFound, ServiceError) as error: logging.error(">>>>>>>> Something went wrong when try to list bucket {} objects. Error {}". format(bucket, error)) the upload does not complete: ... response = object_storage.put_object(namespace, bucket, object_path, publish_payload) File ".../.venv/lib/python3.8/site-packages/oci/object_storage/object_storage_client.py", line 4113, in put_object return self.base_client.call_api( File ".../.venv/lib/python3.8/site-packages/oci/base_client.py", line 272, in call_api response = self.request(request) File ".../.venv/lib/python3.8/site-packages/oci/base_client.py", line 378, in request raise exceptions.RequestException(e) oci.exceptions.RequestException: ('Connection aborted.', timeout('The write operation timed out')) I thought that it could be the size of file (which is around 208Mb), but in put_object documentation says 5Gb limit. So, I do not think it could be the issue. My last chance would be to use os.system(), but it would not be what I truly want. Some clue in what could be missing in this second option?
[ "You could try uploading some other data first, to see if it's the payload:\nnamespace = 'grddddaaaZZ'\nbucket_name = 'dev.bucket'\nobject_name = 'processed/2020-11/test.txt'\ntest_data = b\"Hello, World!\"\n\nobj = object_storage.put_object(\n namespace,\n bucket_name,\n object_name,\n my_data)\n\nor you try it without reading the file contents and just passing the file object:\nnamespace = 'grddddaaaZZ'\nbucket = 'dev.bucket'\nobject_path = 'processed/2020-11/image.tif'\n\nwith open('/path/to/my/file/image.tif', 'rb') as f:\n obj = object_storage.put_object(namespace, bucket, object_path, f)\n\n", " with open('tomcat_access_log_20221118-231901.log.zip', 'rb') as filePtr:\n... upload_resp = object_storage_client.put_object(nameSpace,bucket_name='my-Test-Bucket',object_name=file_to_upload,put_object_body=filePtr)\n\nNote : file_to_upload = 'empty_folder_for_testing/tomcat-admin-server/tomcat_access_log_20221118-231901.log.zip'\nThe above code getting stuck for very log till end getting timeout. But actually i can see file uploaded properly. But this command getting stuck for long enough till timeout ... Any idea ?\n" ]
[ 2, 0 ]
[]
[]
[ "cloud", "oracle", "oracle_cloud_infrastructure", "python" ]
stackoverflow_0065814234_cloud_oracle_oracle_cloud_infrastructure_python.txt
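One more avenue worth noting, hedged: the oci SDK ships an UploadManager that splits large files into parts and retries them, which can sidestep single-request write timeouts like the one in the question. This is a sketch; the namespace, bucket, and paths reuse the question's placeholders, and the exact keyword arguments should be checked against the SDK version in use.

import oci
from oci.object_storage import UploadManager

config = oci.config.from_file()  # assumes the default ~/.oci/config
client = oci.object_storage.ObjectStorageClient(config)

manager = UploadManager(client, allow_parallel_uploads=True)
manager.upload_file('grddddaaaZZ', 'dev.bucket',
                    'processed/2020-11/image.tif',
                    '/path/to/my/file/image.tif')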
Q: How can I convert multiple columns in a pandas dataframe into a column containing dictionaries of those columns? I have a very large dataframe containing the following columns: RegAddress.CareOf,RegAddress.POBox,RegAddress.AddressLine1,RegAddress.AddressLine2,RegAddress.PostTown,RegAddress.County,RegAddress.Country,RegAddress.PostCode I am inserting this dataframe (loaded from a CSV) into a relational database, and so would like to convert these columns into a single column, RegAddress, containing a dictionary, which contains the keys CareOf, POBox, AddressLine1... and so on. I cannot figure out how to do this in a vectorised fashion, i.e. go from: RegAddress.CareOf,RegAddress.POBox Me,2 You,3 to: RegAddress {"CareOf": "Me", "POBox": 2} {"CareOf": "You", "POBox": 3} efficiently. A: You can use the .apply() method to achieve this: selected_cols = ['RegAddress.CareOf', 'RegAddress.POBox'] df2 = pd.DataFrame() df2['RegAddress'] = df.apply( lambda row: { col.split('.')[1]: row[col] for col in row.index if col in selected_cols }, axis=1 ) Result: RegAddress 0 {'CareOf': 'Me', 'POBox': 2} 1 {'CareOf': 'You', 'POBox': 3}
How can I convert multiple columns in a pandas dataframe into a column containing dictionaries of those columns?
I have a very large dataframe containing the following columns: RegAddress.CareOf,RegAddress.POBox,RegAddress.AddressLine1,RegAddress.AddressLine2,RegAddress.PostTown,RegAddress.County,RegAddress.Country,RegAddress.PostCode I am inserting this dataframe (loaded from a CSV) into a relational database, and so would like to convert these columns into a single column, RegAddress, containing a dictionary, which contains the keys CareOf, POBox, AddressLine1... and so on. I cannot figure out how to do this in a vectorised fashion, i.e. go from: RegAddress.CareOf,RegAddress.POBox Me,2 You,3 to: RegAddress {"CareOf": "Me", "POBox": 2} {"CareOf": "You", "POBox": 3} efficiently.
[ "You can use the .apply() method to achieve this:\nselected_cols = ['RegAddress.CareOf', 'RegAddress.POBox']\n\ndf2 = pd.DataFrame()\ndf2['RegAddress'] = df.apply(\n lambda row: {\n col.split('.')[1]: row[col] for col in row.index\n if col in selected_cols\n },\n axis=1\n)\n\nResult:\n RegAddress\n0 {'CareOf': 'Me', 'POBox': 2}\n1 {'CareOf': 'You', 'POBox': 3}\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074522825_dataframe_pandas_python.txt
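A slightly different sketch for the same task: rename the columns to strip the shared prefix, then let to_dict('records') build the dictionaries in one shot. This assumes every selected column belongs to RegAddress; only two of the question's columns are shown here.

import pandas as pd

df = pd.DataFrame({'RegAddress.CareOf': ['Me', 'You'],
                   'RegAddress.POBox': [2, 3]})

# drop the 'RegAddress.' prefix, then build one dict per row
stripped = df.rename(columns=lambda c: c.split('.', 1)[1])
df2 = pd.DataFrame({'RegAddress': stripped.to_dict('records')})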
Q: How to create multiple data frames & write them to an Excel file I have two data frames and joined them with a left join on the column "country". I need to create a separate table in Excel for each of the 4 countries from the joined dataframes, as per the attached format. Please advise how I can achieve this.
import pandas as pd
import numpy as np

file = pd.ExcelFile(r"p:\test\sample.xlsx")
df1 = pd.read_excel(file, 'sample1')
df2 = pd.read_excel(file, 'sample2')

df3 = (pd.merge(df1, df2, left_on='country', right_on='Country', how='left').drop('amount', axis=1))
n = len(pd.unique(df3['country']))
A: You can loop on df3['Country'].unique():
for country in df3['Country'].unique():
    df3[df3['Country'] == country].to_excel(f'path_to_output_{country}.xlsx', index=False)

A: You can use df.groupby:
for country, g in df.groupby("country"):
    g.to_excel(f"file_{country}.xls", index=False)
How to create multiple data frames & write them to an Excel file
I have two data frames and joined them with a left join on the column "country". I need to create a separate table in Excel for each of the 4 countries from the joined dataframes, as per the attached format. Please advise how I can achieve this.
import pandas as pd
import numpy as np

file = pd.ExcelFile(r"p:\test\sample.xlsx")
df1 = pd.read_excel(file, 'sample1')
df2 = pd.read_excel(file, 'sample2')

df3 = (pd.merge(df1, df2, left_on='country', right_on='Country', how='left').drop('amount', axis=1))
n = len(pd.unique(df3['country']))
[ "You can loop on df3['Country'].unique():\nfor country in df3['Country'].unique():\n df_ = df3[df3['Country'] == country].to_excel(f'path_to_output_{country}.xlsx',index=False)\n \n\n", "You can use df.groupby:\nfor country, g in df.groupby(\"country\"):\n g.to_excel(f\"file_{country}.xls\", index=False)\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "export_to_excel", "loops", "pandas", "python" ]
stackoverflow_0074522812_dataframe_export_to_excel_loops_pandas_python.txt
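If "a separate table in Excel" means one workbook with a sheet per country rather than one file per country, a hedged variant using pd.ExcelWriter; df3 is the merged frame from the question and the output path is a placeholder.

import pandas as pd

with pd.ExcelWriter(r"p:\test\by_country.xlsx") as writer:
    for country, g in df3.groupby('country'):
        # one sheet per country in a single workbook
        g.to_excel(writer, sheet_name=str(country), index=False)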
Q: Using if statement to find specific string values in a list I have a column within a dataframe that is composed of lists. I am trying to use an if statement to identify values in these lists that contain any special character or number. The numbers I am trying to identify are string values, not numeric. I have tried using regex to identify these values, but I don't know exactly how to use this in an if statement. The code below gives me what I want, but I know there has to be a more succinct way to do it:
if '-' in row['col_name'].iloc[0] or '/' in row['col_name'].iloc[0] or '0' in row['col_name'].iloc[0] or '1' in row['col_name'].iloc[0]:
    return action

I only included a few special characters and numbers in this example. I would like to find ANY special character or numeric value. Thank you in advance!
A: In reference to this post, the following might be what you need:
special_chars = ['-', '/', '0', '1']

# returns df with only the rows in which the column contains any of these characters
result_df = df.loc[df['col_name'].str.contains('|'.join(special_chars))]

The '|' acts as the regex alternation (OR) operator, so the pattern matches rows containing any one of the listed characters.
Using if statement to find specific string values in a list
I have a column within a dataframe that is composed of lists. I am trying to use an if statement to identify values in these lists that contain any special character or number. The numbers I am trying to identify are string values, not numeric. I have tried using regex to identify these values, but I don't know exactly how to use this in an if statement. The code below gives me what I want, but I know there has to be a more succinct way to do it: if '-' in row['col_name'].iloc[0] or '/' in row['col_name'].iloc[0] or '0' in row['col_name'].iloc[0] or '1' in row['col_name'].iloc[0]: return action I only included a few special characters and numbers in this example. I would like to find ANY special character or numeric value. Thank you in advance!
[ "in reference to this post, the following might be what you need:\nspecial_chars = ['-', '/', '0', '1']\n\n# returns df with only the rows in which the column contains any of these characters\nresult_df = df.loc[df['col_name'].str.contains('|'.join(special_chars))]\n\nthe '|' will function as a regex character.\n" ]
[ 0 ]
[]
[]
[ "if_statement", "list", "python" ]
stackoverflow_0074522890_if_statement_list_python.txt
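Since the question asks for any special character or digit rather than a fixed list, a regex-class sketch may fit better; what counts as "special" is an assumption here (anything that is not a letter or whitespace).

import pandas as pd

df = pd.DataFrame({'col_name': ['abc', 'a-b', 'x/1', 'plain']})

# [^A-Za-z\s] matches digits and punctuation, i.e. non-letter, non-space characters
mask = df['col_name'].str.contains(r'[^A-Za-z\s]', regex=True)
result_df = df[mask]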
Q: odient: "takes 1 positional argument but 2 were given" I have been using odient in python for a project and it's been working completely fine. I did the same thing I always do for this problem and for some reason it keeps saying my defined function takes 1 positional argument but 2 were given, even though it's been fine doing problems like this before. Here is my code: def sy(J): Ntot=J[0] xb=J[1] dNtotdt=nn2-nv dxbdt=(-nv*xb-xb*dNtotdt)/Ntot return[dNtotdt,dxbdt] #odeint requires that we set up a vector of times (question asks for 0-10) t_val=np.linspace(0,10,46) #46 for more accuracy #we also need to make an initial condition vector Yo=np.array([Ntoto,xbo]) #use odient function to find the concentrations ans=odeint(sy,Yo,t_val) print(ans) please help A: your derivative function passed to odeint needs to expect 2 inputs (y and t), the most straight forward solution is to just make your function take multiple arguments as you seem to have forgotten. def sy(J,t):
odient: "takes 1 positional argument but 2 were given"
I have been using odeint in Python for a project and it's been working completely fine. I did the same thing I always do for this problem, and for some reason it keeps saying my defined function takes 1 positional argument but 2 were given, even though it's been fine doing problems like this before. Here is my code:
def sy(J):
    Ntot=J[0]
    xb=J[1]
    dNtotdt=nn2-nv
    dxbdt=(-nv*xb-xb*dNtotdt)/Ntot
    return[dNtotdt,dxbdt]

#odeint requires that we set up a vector of times (question asks for 0-10)
t_val=np.linspace(0,10,46) #46 for more accuracy
#we also need to make an initial condition vector
Yo=np.array([Ntoto,xbo])
#use odeint function to find the concentrations
ans=odeint(sy,Yo,t_val)
print(ans)

Please help.
[ "your derivative function passed to odeint needs to expect 2 inputs (y and t), the most straight forward solution is to just make your function take multiple arguments as you seem to have forgotten.\ndef sy(J,t):\n\n" ]
[ 1 ]
[ "In the error it clearly mentioned that the function \"Odient\" takes 1 positional argument but you are trying to put more than 1 argument example.\n#This function take one Parameter \"var\"\ndef foo(var):\n return var\n\n#Calling the function with print statement\n\nprint(foo(var, var2)) #Trying to give more than 1 argument. But it gives error \n\nLike this you are doing with your code.\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074522901_python.txt
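Pulling the fix together as a runnable sketch: the constants nn2/nv and the initial conditions are placeholders since the question does not define them, and they are passed through odeint's args tuple rather than relied on as globals.

import numpy as np
from scipy.integrate import odeint

def sy(J, t, nn2, nv):
    Ntot, xb = J
    dNtotdt = nn2 - nv
    dxbdt = (-nv * xb - xb * dNtotdt) / Ntot
    return [dNtotdt, dxbdt]

t_val = np.linspace(0, 10, 46)
Yo = np.array([1.0, 0.1])                      # placeholder initial conditions
ans = odeint(sy, Yo, t_val, args=(1.0, 0.5))   # placeholder nn2, nv
print(ans)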
Q: How to drop columns which have same values in all rows via pandas or spark dataframe? Suppose I've data similar to following: index id name value value2 value3 data1 val5 0 345 name1 1 99 23 3 66 1 12 name2 1 99 23 2 66 5 2 name6 1 99 23 7 66 How can we drop all those columns like (value, value2, value3) where all rows have the same values, in one command or couple of commands using python? Consider we have many columns similar to value, value2, value3...value200. Output: index id name data1 0 345 name1 3 1 12 name2 2 5 2 name6 7 A: What we can do is use nunique to calculate the number of unique values in each column of the dataframe, and drop the columns which only have a single unique value: In [285]: nunique = df.nunique() cols_to_drop = nunique[nunique == 1].index df.drop(cols_to_drop, axis=1) Out[285]: index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 Another way is to just diff the numeric columns, take abs values and sums them: In [298]: cols = df.select_dtypes([np.number]).columns diff = df[cols].diff().abs().sum() df.drop(diff[diff== 0].index, axis=1) ​ Out[298]: index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 Another approach is to use the property that the standard deviation will be zero for a column with the same value: In [300]: cols = df.select_dtypes([np.number]).columns std = df[cols].std() cols_to_drop = std[std==0].index df.drop(cols_to_drop, axis=1) Out[300]: index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 Actually the above can be done in a one-liner: In [306]: df.drop(df.std()[(df.std() == 0)].index, axis=1) Out[306]: index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 A: A simple one liner(python): df=df[[i for i in df if len(set(df[i]))>1]] A: Another solution is set_index from column which are not compared and then compare first row selected by iloc by eq with all DataFrame and last use boolean indexing: df1 = df.set_index(['index','id','name',]) print (~df1.eq(df1.iloc[0]).all()) value False value2 False value3 False data1 True val5 False dtype: bool print (df1.ix[:, (~df1.eq(df1.iloc[0]).all())].reset_index()) index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 A: pythonic solution Original DataFrame index id name value value2 value3 data1 val5 0 345 name1 1 99 23 3 66 1 12 name2 1 99 23 2 66 5 2 name6 1 99 23 7 66 Solution for col in df.columns: # Loop through columns if len(df[col].unique()) == 1: # Find unique values in column along with their length and if len is == 1 then it contains same values df.drop([col], axis=1, inplace=True) # Drop the column Dataframe after executing above code index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 A: You can use nunique, which returns the number of unique values in each column: In [3]: df.loc[:, df.nunique() > 1] Out[3]: index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 A: This should work also: cols_to_drop = [] for col in df: if df[col].std()==0: cols_to_drop.append(col) df= df.drop(columns = cols_to_drop)
How to drop columns which have same values in all rows via pandas or spark dataframe?
Suppose I have data similar to the following:
index   id   name   value  value2  value3  data1  val5
0       345  name1  1      99      23      3      66
1       12   name2  1      99      23      2      66
5       2    name6  1      99      23      7      66

How can we drop all those columns like (value, value2, value3) where all rows have the same values, in one command or a couple of commands using Python?
Consider we have many columns similar to value, value2, value3...value200.
Output:
index   id   name   data1
0       345  name1  3
1       12   name2  2
5       2    name6  7
[ "What we can do is use nunique to calculate the number of unique values in each column of the dataframe, and drop the columns which only have a single unique value:\nIn [285]:\nnunique = df.nunique()\ncols_to_drop = nunique[nunique == 1].index\ndf.drop(cols_to_drop, axis=1)\n\nOut[285]:\n index id name data1\n0 0 345 name1 3\n1 1 12 name2 2\n2 5 2 name6 7\n\nAnother way is to just diff the numeric columns, take abs values and sums them:\nIn [298]:\ncols = df.select_dtypes([np.number]).columns\ndiff = df[cols].diff().abs().sum()\ndf.drop(diff[diff== 0].index, axis=1)\n​\nOut[298]:\n index id name data1\n0 0 345 name1 3\n1 1 12 name2 2\n2 5 2 name6 7\n\nAnother approach is to use the property that the standard deviation will be zero for a column with the same value:\nIn [300]:\ncols = df.select_dtypes([np.number]).columns\nstd = df[cols].std()\ncols_to_drop = std[std==0].index\ndf.drop(cols_to_drop, axis=1)\n\nOut[300]:\n index id name data1\n0 0 345 name1 3\n1 1 12 name2 2\n2 5 2 name6 7\n\nActually the above can be done in a one-liner:\nIn [306]:\ndf.drop(df.std()[(df.std() == 0)].index, axis=1)\n\nOut[306]:\n index id name data1\n0 0 345 name1 3\n1 1 12 name2 2\n2 5 2 name6 7\n\n", "A simple one liner(python):\ndf=df[[i for i in df if len(set(df[i]))>1]]\n\n", "Another solution is set_index from column which are not compared and then compare first row selected by iloc by eq with all DataFrame and last use boolean indexing:\ndf1 = df.set_index(['index','id','name',])\nprint (~df1.eq(df1.iloc[0]).all())\nvalue False\nvalue2 False\nvalue3 False\ndata1 True\nval5 False\ndtype: bool\n\nprint (df1.ix[:, (~df1.eq(df1.iloc[0]).all())].reset_index())\n index id name data1\n0 0 345 name1 3\n1 1 12 name2 2\n2 5 2 name6 7\n\n", "pythonic solution \nOriginal DataFrame \nindex id name value value2 value3 data1 val5\n 0 345 name1 1 99 23 3 66\n 1 12 name2 1 99 23 2 66\n 5 2 name6 1 99 23 7 66\n\nSolution \nfor col in df.columns: # Loop through columns\n if len(df[col].unique()) == 1: # Find unique values in column along with their length and if len is == 1 then it contains same values\n df.drop([col], axis=1, inplace=True) # Drop the column\n\nDataframe after executing above code\n index id name data1\n0 0 345 name1 3\n1 1 12 name2 2\n2 5 2 name6 7\n\n", "You can use nunique, which returns the number of unique values in each column:\nIn [3]: df.loc[:, df.nunique() > 1]\nOut[3]: \n index id name data1\n0 0 345 name1 3\n1 1 12 name2 2\n2 5 2 name6 7\n\n", "This should work also:\ncols_to_drop = []\nfor col in df:\n if df[col].std()==0:\n cols_to_drop.append(col)\ndf= df.drop(columns = cols_to_drop)\n\n" ]
[ 62, 7, 4, 1, 1, 0 ]
[]
[]
[ "apache_spark_sql", "duplicates", "multiple_columns", "pandas", "python" ]
stackoverflow_0039658574_apache_spark_sql_duplicates_multiple_columns_pandas_python.txt
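The question also mentions Spark, which none of the answers cover; here is a hedged PySpark sketch that counts distinct values per column in a single aggregation pass and drops the constant ones. It assumes df is a Spark DataFrame containing only the columns to test.

from pyspark.sql import functions as F

counts = df.agg(*[F.countDistinct(F.col(c)).alias(c) for c in df.columns]).first()
constant_cols = [c for c in df.columns if counts[c] == 1]
df = df.drop(*constant_cols)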
Q: Best way to deploy multiple client websites by Wagtail I want to create wagtail websites for my clients. The websites will be identical and have the same features, but the templates should be different. Every time I update a feature to a new version, all websites will get the latest version automatically. By this approach, I don't need to deploy new feature versions (or base website version) to my clients separately. I just need to deploy once and all clients will get the latest website version. I will use the 'Multi-instance' feature of Wagtail which seems to fit my needs. In the documentation on 'Multi-instance' it states: multiple sites share the same, single set of project files. Deployment would update the single set of project files and reload each instance. https://www.accordbox.com/blog/add-bootstrap-theme-wagtail/ Let's say I want to have two different blog templates (different Bootstrap themes) in this tutorial. The blog template file is 'post_page.html', and is a project file, so it will be deployed once, and all websites will get the same template in the 'Multi-instance' feature of Wagtail. So my question is: How can I deploy one blog template (post_page.html) to one client website, & another blog template to another client website? A: I think you will want to include all the template variations in your code base and then choose which one to use at request time. To choose a template file dynamically, you create a get_template method. So the question becomes how do you configure which site uses which template(s). I would suggest looking into wagtail.contrib.settings for a place to map a site to its templates. Where I work we support 3 variations - but they come as a set. You can't pick the blog template from A and mix with the calendar template from B. We do this for our own sanity - especially since many of our blocks support display options of their own; for example, the 3 display options here are just styles the user can choose for the same block.
Best way to deploy multiple client websites by Wagtail
I want to create wagtail websites for my clients. The websites will be identical and have the same features, but the templates should be different. Every time I update a feature to a new version, all websites will get the latest version automatically. By this approach, I don't need to deploy new feature versions (or base website version) to my clients separately. I just need to deploy once and all clients will get the latest website version. I will use the 'Multi-instance' feature of Wagtail which seems to fit my needs. In the documentation on 'Multi-instance' it states: multiple sites share the same, single set of project files. Deployment would update the single set of project files and reload each instance. https://www.accordbox.com/blog/add-bootstrap-theme-wagtail/ Let's say I want to have two different blog templates (different Bootstrap themes) in this tutorial. The blog template file is 'post_page.html', and is a project file, so it will be deployed once, and all websites will get the same template in the 'Multi-instance' feature of Wagtail. So my question is: How can I deploy one blog template (post_page.html) to one client website, & another blog template to another client website?
[ "I think you will want to include all the template variations in your code base and then choose which one to use at request time. To choose a template file dynamically, you create a get_template method.\nSo the question becomes how do you configure which site uses which template(s). I would suggest looking into wagtail.contrib.settings for a place to map a site to its templates. Where I work we support 3 variations - but they come as a set. You can't pick the blog template from A and mix with the calendar template from B. We do this for our own sanity - especially since many of our blocks support display options of their own; for example, the 3 display options here are just styles the user can choose for the same block.\n" ]
[ 1 ]
[]
[]
[ "django", "python", "wagtail" ]
stackoverflow_0074499599_django_python_wagtail.txt
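A rough illustration of the get_template idea mentioned in the answer; the theme attribute is a made-up stand-in for whatever per-site settings model you define, so treat this strictly as a sketch.

from wagtail.models import Page, Site  # wagtail.core.models on older Wagtail versions

class PostPage(Page):
    def get_template(self, request, *args, **kwargs):
        site = Site.find_for_request(request)
        theme = getattr(site, 'theme', 'default')  # hypothetical per-site setting
        return f'blog/{theme}/post_page.html'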
Q: Why can't I use my newly downloaded Python library? So I've tried to install customtkinter and the installation was successful:
Using cached customtkinter-4.6.3-py3-none-any.whl (246 kB)
Requirement already satisfied: darkdetect in c:\users\omen1\appdata\local\programs\python\python311\lib\site-packages (from customtkinter) (0.7.1)
Installing collected packages: customtkinter
Successfully installed customtkinter-4.6.3

But when I then go to VS Code, write import customtkinter, and run it, it says
Traceback (most recent call last):
File "c:\Users\OMEN1\OneDrive\Skrivbord\python projects\database.py", line 290, in <module>
import customtkinter
ModuleNotFoundError: No module named 'customtkinter'

I have tried to uninstall and re-install. My pip is also fully updated, as well as my Python 3.11. I've tried multiple things.
A: Ensure that the interpreter you're using in VS Code is aligned with where you installed the library.
For example, if you installed it with Python 3, your VS Code may be pointed to Python 2 instead.
Additionally, according to the PyPI link for that library - "To use CustomTkinter, just place the /customtkinter folder from this repository next to your program, and then you can do import customtkinter."
Why can't I use my newly downloaded Python library?
So I've tried to install customtkinter and the installation was successful:
Using cached customtkinter-4.6.3-py3-none-any.whl (246 kB)
Requirement already satisfied: darkdetect in c:\users\omen1\appdata\local\programs\python\python311\lib\site-packages (from customtkinter) (0.7.1)
Installing collected packages: customtkinter
Successfully installed customtkinter-4.6.3

But when I then go to VS Code, write import customtkinter, and run it, it says
Traceback (most recent call last):
File "c:\Users\OMEN1\OneDrive\Skrivbord\python projects\database.py", line 290, in <module>
import customtkinter
ModuleNotFoundError: No module named 'customtkinter'

I have tried to uninstall and re-install. My pip is also fully updated, as well as my Python 3.11. I've tried multiple things.
[ "Ensure that the interpreter you're using in VSCode is aligned to where you installed the library.\nFor example if you installed it with Python3, your VSCode may be pointed to Python2 instead.\nAdditionally, according to the PyPi link for that library - \"To use CustomTkinter, just place the /customtkinter folder from this repository next to your program, and then you can do import customtkinter.\"\n" ]
[ 1 ]
[]
[]
[ "import", "importerror", "python" ]
stackoverflow_0074522343_import_importerror_python.txt
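A quick way to check which interpreter VS Code is actually running before installing into it explicitly; the printed path should match the location pip reported during installation.

import sys
print(sys.executable)  # the interpreter executing this script
print(sys.path)        # where it searches for modules

# then, in a terminal, install into exactly that interpreter:
#   <path-from-sys.executable> -m pip install customtkinter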
Q: How to merge two querysets on a specific column Hello, I am using a Postgres database in my Django app. I have this model:
class MyFile(models.Model):
    uuid = models.UUIDField(
        default=python_uuid.uuid4, editable=False, unique=True)
    file = models.FileField(upload_to=upload_to, null=True, blank=True)
    path = models.CharField(max_length=200)
    status = models.ManyToManyField(Device, through='FileStatus')
    user = models.ForeignKey('users.User', on_delete=models.SET_NULL, null=True, blank=True)
    when = models.DateTimeField(auto_now_add=True)
    canceled = models.BooleanField(default=False)
    group = models.UUIDField(
        default=python_uuid.uuid4, editable=False)

What I want is to group my MyFile objects by group and get all the data plus a list of the files associated with each group. I managed to get a group associated with a list of files with:
MyFile.objects.all().values('group').annotate(file=ArrayAgg('file', ordering='-when'))

which is giving me a result like:
[{'group': 'toto', 'file': ['file1', 'file2']}, ...]

I can also get all my MyFile data with:
MyFile.objects.all().distinct('group')

What I want is to get a result like:
[{'group': 'toto', 'file': ['file1', 'file2'], 'when': 'ok', 'path': 'ok', 'user': 'ok', 'status': [], 'canceled': False}, ...]

So I thought I could merge my two querysets on the group column, but this does not work. Any ideas?
A: Try this and see if it works:
MyFile.objects.values('group').annotate(file=ArrayAgg('file', ordering='-when'))
How to merge two querysets on a specific column
Hello, I am using a Postgres database in my Django app. I have this model:
class MyFile(models.Model):
    uuid = models.UUIDField(
        default=python_uuid.uuid4, editable=False, unique=True)
    file = models.FileField(upload_to=upload_to, null=True, blank=True)
    path = models.CharField(max_length=200)
    status = models.ManyToManyField(Device, through='FileStatus')
    user = models.ForeignKey('users.User', on_delete=models.SET_NULL, null=True, blank=True)
    when = models.DateTimeField(auto_now_add=True)
    canceled = models.BooleanField(default=False)
    group = models.UUIDField(
        default=python_uuid.uuid4, editable=False)

What I want is to group my MyFile objects by group and get all the data plus a list of the files associated with each group. I managed to get a group associated with a list of files with:
MyFile.objects.all().values('group').annotate(file=ArrayAgg('file', ordering='-when'))

which is giving me a result like:
[{'group': 'toto', 'file': ['file1', 'file2']}, ...]

I can also get all my MyFile data with:
MyFile.objects.all().distinct('group')

What I want is to get a result like:
[{'group': 'toto', 'file': ['file1', 'file2'], 'when': 'ok', 'path': 'ok', 'user': 'ok', 'status': [], 'canceled': False}, ...]

So I thought I could merge my two querysets on the group column, but this does not work. Any ideas?
[ "Try this if it works\nMyFile.objects.values('group').annotate(file=ArrayAgg('file', ordering='-when'))\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_queryset", "postgresql", "python" ]
stackoverflow_0074522897_django_django_queryset_postgresql_python.txt
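Building on the answer, a hedged sketch that aggregates several fields per group in one query; the extra fields chosen here (paths, latest when) are illustrative, not required by the question.

from django.contrib.postgres.aggregates import ArrayAgg
from django.db.models import Max

rows = (MyFile.objects
        .values('group')
        .annotate(files=ArrayAgg('file', ordering='-when'),
                  paths=ArrayAgg('path', ordering='-when'),
                  latest=Max('when')))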
Q: Plotting timedelta values gives out of scope axis I have a dataframe that looks like this: commits commitdates Age (in days) Year-Month server_version 0 97 2021-04-07 75 days 2021-04 v1 1 20 2021-05-31 43 days 2021-05 v3 2 54 2021-06-21 54 days 2021-06 v0.1 3 100 2021-06-18 75 days 2021-06 v2.1.0 4 12 2020-12-06 22 days 2020-12 Nan I want to plot the Age(in days) which is of type timedelta64[ns] , by the number of commits of type int64 by the server_versions on the x axis. I tried to do this using plotly, but the age values are a bit strange and don't seem correct, I just want them to be displayed starting from 0 to all the rest, like normal int values, I am not sure how can I fix this. My code is this: import plotly.express as px fig = px.scatter(final_api, x="Version", y="Age (in days)", color="commits", width =1000, height = 800) fig.show() I am a bit new to plotly, any help will be appreciated. A: All you have to do is to convert your timedelta64 to days and add days as suffix for the yaxis, I answer your question based on this random data: import pandas as pd import numpy as np s = pd.Series(pd.timedelta_range(start='1 days', end='75 days')) df = pd.DataFrame() df['commits'] = np.random.randint(100, size=len(s)) df['Version'] = 'v'+df['commits'].astype(str) df['Age (in days)'] = s fig = px.scatter(df, x="Version", y=df['Age (in days)'].dt.days, color="commits", width =1000, height = 800) fig.update_yaxes(ticksuffix = " days") fig.show() Output
Plotting timedelta values gives out of scope axis
I have a dataframe that looks like this:
   commits commitdates  Age (in days) Year-Month server_version
0       97  2021-04-07        75 days    2021-04             v1
1       20  2021-05-31        43 days    2021-05             v3
2       54  2021-06-21        54 days    2021-06           v0.1
3      100  2021-06-18        75 days    2021-06         v2.1.0
4       12  2020-12-06        22 days    2020-12            Nan

I want to plot the Age (in days) column, which is of type timedelta64[ns], against the number of commits (of type int64), with the server versions on the x axis. I tried to do this using plotly, but the age values look strange and don't seem correct; I just want them displayed as normal int values starting from 0, and I am not sure how to fix this. My code is this:
import plotly.express as px

fig = px.scatter(final_api, x="Version", y="Age (in days)", color="commits", width=1000, height=800)
fig.show()

I am a bit new to plotly, any help will be appreciated.
[ "All you have to do is to convert your timedelta64 to days and add days as suffix for the yaxis, I answer your question based on this random data:\nimport pandas as pd\nimport numpy as np\n\ns = pd.Series(pd.timedelta_range(start='1 days', end='75 days'))\ndf = pd.DataFrame()\ndf['commits'] = np.random.randint(100, size=len(s))\ndf['Version'] = 'v'+df['commits'].astype(str)\ndf['Age (in days)'] = s\n\nfig = px.scatter(df, x=\"Version\", y=df['Age (in days)'].dt.days, color=\"commits\", width =1000, height = 800)\nfig.update_yaxes(ticksuffix = \" days\")\nfig.show()\n\nOutput\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "plotly", "python" ]
stackoverflow_0074522268_pandas_plotly_python.txt
Q: django.template.exceptions.TemplateSyntaxError: 'bootstrap_field' received some positional argument(s) after some keyword argument(s) I was trying to modify my Django sign_in template with a bootstrap field along with some arguments, but I was not able to.
Exception:
C:\Users\hp\Desktop\fastparcel\core\templates\sign_in.html, error at line 25
'bootstrap_field' received some positional argument(s) after some keyword argument(s)
{% bootstrap_field form.username show_lable=False placeholder ="Email" %}
HTML
{% extends 'base.html' %}
{% load bootstrap4 %}
{% block content%}
<div class="container-fluid mt-5">
    <div class="justify-content-center">
        <div class="col-lg-4">
            <div class="card">
                <div class="card-body">
                    <h4 class="text-center text-uppercase mb-3">
                        <b>
                        {% if request.GET.next != '/courier/'%}
                        Customer
                        {% else %}
                        Courier
                        {% endif %}
                        </b>
                    </h4>
                    <form action="POST">
                        {% csrf_token %}
                        {% bootstrap_form_errors form %}
                        {% bootstrap_label "Email" %}
                        {% bootstrap_field form.username show_lable=False placeholder ="Email" %}
                        {% bootstrap_field field form.password %}
                        <button class="btn btn-warning btn-block "> Sign in</button>
                    </form>
                </div>
            </div>
        </div>
    </div>
</div>
{% endblock %}
A: Thanks @raphael. The answer was removing the space after placeholder (and correcting show_lable to show_label) in
{% bootstrap_field form.username show_lable=False placeholder ="Email" %}

so it becomes
{% bootstrap_field form.username show_label=False placeholder="Email" %}
django.template.exceptions.TemplateSyntaxError: 'bootstrap_field' received some positional argument(s) after some keyword argument(s)
I was trying to modify my Django sign_in template with a bootstrap field along with some arguments, but I was not able to.
Exception:
C:\Users\hp\Desktop\fastparcel\core\templates\sign_in.html, error at line 25
'bootstrap_field' received some positional argument(s) after some keyword argument(s)
{% bootstrap_field form.username show_lable=False placeholder ="Email" %}
HTML
{% extends 'base.html' %}
{% load bootstrap4 %}
{% block content%}
<div class="container-fluid mt-5">
    <div class="justify-content-center">
        <div class="col-lg-4">
            <div class="card">
                <div class="card-body">
                    <h4 class="text-center text-uppercase mb-3">
                        <b>
                        {% if request.GET.next != '/courier/'%}
                        Customer
                        {% else %}
                        Courier
                        {% endif %}
                        </b>
                    </h4>
                    <form action="POST">
                        {% csrf_token %}
                        {% bootstrap_form_errors form %}
                        {% bootstrap_label "Email" %}
                        {% bootstrap_field form.username show_lable=False placeholder ="Email" %}
                        {% bootstrap_field field form.password %}
                        <button class="btn btn-warning btn-block "> Sign in</button>
                    </form>
                </div>
            </div>
        </div>
    </div>
</div>
{% endblock %}
[ "thanks @raphael\nand the answer was\nremoving the space after placeholder in\n{% bootstrap_field form.username show_lable=False placeholder =\"Email\" %}\n\nto be like\n{% bootstrap_field form.username show_label=False placeholder=\"Email\" %}\n\n" ]
[ 0 ]
[]
[]
[ "backend", "bootstrap_4", "django", "html", "python" ]
stackoverflow_0074522385_backend_bootstrap_4_django_html_python.txt
Q: Python: Grab objects with a matching string from a Python list I have a list that looks something like this:
['CALSIM', '1693', '1938', '1429', '1646', '1199', '1204', '1477', '1268', '1158', '1051', '998', '1135', '2381', '2513', 'Sky19', '1627', '2124', '1859', '2504', '1690', '1784', 'Sky21', 'Sky38', '2833', 'Sky20']

I want to create a new list from this list that only includes objects with the substring Sky in them. Is there a convenient way to do this?
A: You can use a conditional list comprehension to create a new list:
original_list = ['CALSIM', '1693', '1938', '1429', '1646', '1199', '1204', '1477', '1268', '1158', '1051', '998', '1135', '2381', '2513', 'Sky19', '1627', '2124', '1859', '2504', '1690', '1784', 'Sky21', 'Sky38', '2833', 'Sky20']

output_list = [x for x in original_list if 'sky' in x.lower()]

>>> output_list
['Sky19', 'Sky21', 'Sky38', 'Sky20']

The list comp checks each value of the list, converts it to lowercase (do you care about Sky vs sky vs SKY?), and checks if the string sky is in that value. If yes, it's added to output_list.
Because we're running various string operations on the values of original_list, this code will error if any of your values aren't strings. You could add extra checks to catch those if that's a possibility.
Python: Grab objects with a matching string from a Python list
I have a list that looks something like this: ['CALSIM', '1693', '1938', '1429', '1646', '1199', '1204', '1477', '1268', '1158', '1051', '998', '1135', '2381', '2513', 'Sky19', '1627', '2124', '1859', '2504', '1690', '1784', 'Sky21', 'Sky38', '2833', 'Sky20'] I want to create a new list from this list that only includes objects with the substring Sky in them. Is there a convenient way to do this?
[ "You can use a conditional list composition to create a new list:\noriginal_list = ['CALSIM', '1693', '1938', '1429', '1646', '1199', '1204', '1477', '1268', '1158', '1051', '998', '1135', '2381', '2513', 'Sky19', '1627', '2124', '1859', '2504', '1690', '1784', 'Sky21', 'Sky38', '2833', 'Sky20']\n\noutput_list = [x for x in original_list if 'sky' in x.lower()]\n\n>>> output_list\n['Sky19', 'Sky21', 'Sky38', 'Sky20']\n\nThe list comp checks each value of the list, converts it to lowercase (do you care about Sky vs sky vs SKY?), and checks if the string sky is in that value. If yes, it's added to output_list.\nBecause we're running various string operations on the values of original_list, this code will error if any of your values aren't strings. You could add extra checks to catch those if that's a possibility.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074523168_python.txt
Q: Is it possible to put python's turtle into a function? First of all, I'm sorry if I made a stupid mistake because I'm a beginner. Please forgive me.
I started making a "game" in Python using the turtle class for homework. Here is the code:
import turtle

window = turtle.Screen()
window.setup(width=800, height=800)
window.bgcolor("black")
window.tracer(0)

player = turtle.Turtle()
player.speed(0)
player.shape("square")
player.color("red")
player.penup()
player.goto(0, 0)

def objectup(t):
    y = t.ycor()
    y += 30
    t.sety(y)

objectup(player)

window.onkeypress(objectup(player), "w")
window.listen()

while True:
    window.update()

I don't get an error message, but the player still won't go up, and I don't know why. What's wrong with this code? Thanks in advance. (If I made a mistake, sorry for my English.)
I got it to work by adding y = player.ycor() to the function. But this way I can't move other objects with the same function. I have no idea, so I'm asking here to see if anyone can help a beginner.
A: window.onkeypress(objectup(player), "w") means you call objectup(player) and you pass the value it returns to window.onkeypress(..., 'w'). But it doesn't return anything (which means it returns None). You need to pass a function to window.onkeypress that will be called by turtle, something like:
window.onkeypress(objectup, 'w')

But if you want it to work for different objects you need to make a function that returns a function:
def objectup(t):
    def wrapped():
        y = t.ycor()
        y += 30
        t.sety(y)
    return wrapped

window.onkeypress(objectup(player), 'w')
Is it possible to put python's turtle into a function?
First of all, I'm sorry if I made a stupid mistake because I'm a beginner. Please forgive me.
I started making a "game" in Python using the turtle class for homework. Here is the code:
import turtle

window = turtle.Screen()
window.setup(width=800, height=800)
window.bgcolor("black")
window.tracer(0)

player = turtle.Turtle()
player.speed(0)
player.shape("square")
player.color("red")
player.penup()
player.goto(0, 0)

def objectup(t):
    y = t.ycor()
    y += 30
    t.sety(y)

objectup(player)

window.onkeypress(objectup(player), "w")
window.listen()

while True:
    window.update()

I don't get an error message, but the player still won't go up, and I don't know why. What's wrong with this code? Thanks in advance. (If I made a mistake, sorry for my English.)
I got it to work by adding y = player.ycor() to the function. But this way I can't move other objects with the same function. I have no idea, so I'm asking here to see if anyone can help a beginner.
[ "window.onkeypress(objectup(player), \"w\") means you call objectup(player) and you pass the value it returns to window.onkeypress(..., 'w'). But it doesn't return anything (which means it returns None). You need to pass a function to window.onkeypress that will be called by turtle, something like:\nwindow.onkeypress(objectup, 'w')\n\nBut if you want it to work for different objects you need to make function that returns a function:\ndef objectup(t):\n def wrapped():\n y = t.ycor()\n y += 30\n t.sety(y)\n return wrapped\n\nwindow.onkeypress(objectup(player), 'w')\n\n" ]
[ 0 ]
[]
[]
[ "function", "python", "turtle_graphics" ]
stackoverflow_0074523090_function_python_turtle_graphics.txt
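The closure in the answer can also be spelled with functools.partial, which binds the turtle at registration time; the second binding uses a hypothetical enemy turtle just to show reuse.

from functools import partial

def objectup(t):
    t.sety(t.ycor() + 30)

window.onkeypress(partial(objectup, player), "w")
# window.onkeypress(partial(objectup, enemy), "s")  # 'enemy' is hypothetical
window.listen()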
Q: Using itertools groupby, create groups of elements, if ANY key is same in each element Given a list of strings, how to group them if any value is similar? inputList = ['w', 'd', 'c', 'm', 'w d', 'm c', 'd w', 'c m', 'o', 'p'] desiredOutput = [['d w', 'd', 'w', 'w d',], ['c', 'c m', 'm', 'm c'], ['o'], ['p']] How to sort a list properly by first, next, and last items? My sorting attempt: groupedList = sorted(inputList, key=lambda ch: [c for c in ch.split()]) Output: ['c', 'c m', 'd', 'd w', 'm', 'm c', 'o', 'p', 'w', 'w d'] Desired output: ['c', 'c m', 'm c', 'm', 'd', 'd w', 'w', 'w d', 'o', 'p'] My grouping attempt: b = sorted(g, key=lambda elem: [i1[0] for i1 in elem[0].split()]) # sort by all first characters b = groupby(b, key=lambda elem: [i1[0] in elem[0].split()[:-1] for i1 in elem[0].split()[:-1]]) b = [[item for item in data] for (key, data) in b] Output: [[('c winnicott', 3), ('d winnicott', 2)], [('d w winnicott', 2), ('w d winnicott', 1)], [('w winnicott', 1)]] Desired output: [[('c winnicott', 3)], [('d winnicott', 2), ('d w winnicott', 2), ('w d winnicott', 1), ('w winnicott', 1)]] A: I did it with the bubble sort algorithm. def bubbleSort(arr): n = len(arr) swapped = False for i in range(n-1): for j in range(0, n-i-1): g1 = arr[j][0].split() g2 = arr[j + 1][0].split() if any([k > l for k in g1] for l in g2): swapped = True arr[j], arr[j + 1] = arr[j + 1], arr[j] if any(s in g2 for s in g1): arr[j].extend(arr[j + 1]) arr[j + 1] = ['-'] if not swapped: return arr arr = [a for a in arr if a[0]!='-'] return arr inputList = ['w', 'd', 'c', 'm', 'w d', 'm c', 'd w', 'c m', 'o', 'p'] #inputList = ["m", "d", "w d", "m c", "c d"] inputList = [[n] for n in inputList] print(bubbleSort(inputList)) Output: [['p'], ['o'], ['c m', 'm c', 'c', 'm'], ['d w', 'w d', 'w', 'd']]
Using itertools groupby, create groups of elements, if ANY key is same in each element
Given a list of strings, how to group them if any value is similar? inputList = ['w', 'd', 'c', 'm', 'w d', 'm c', 'd w', 'c m', 'o', 'p'] desiredOutput = [['d w', 'd', 'w', 'w d',], ['c', 'c m', 'm', 'm c'], ['o'], ['p']] How to sort a list properly by first, next, and last items? My sorting attempt: groupedList = sorted(inputList, key=lambda ch: [c for c in ch.split()]) Output: ['c', 'c m', 'd', 'd w', 'm', 'm c', 'o', 'p', 'w', 'w d'] Desired output: ['c', 'c m', 'm c', 'm', 'd', 'd w', 'w', 'w d', 'o', 'p'] My grouping attempt: b = sorted(g, key=lambda elem: [i1[0] for i1 in elem[0].split()]) # sort by all first characters b = groupby(b, key=lambda elem: [i1[0] in elem[0].split()[:-1] for i1 in elem[0].split()[:-1]]) b = [[item for item in data] for (key, data) in b] Output: [[('c winnicott', 3), ('d winnicott', 2)], [('d w winnicott', 2), ('w d winnicott', 1)], [('w winnicott', 1)]] Desired output: [[('c winnicott', 3)], [('d winnicott', 2), ('d w winnicott', 2), ('w d winnicott', 1), ('w winnicott', 1)]]
[ "I did it with the bubble sort algorithm.\ndef bubbleSort(arr):\nn = len(arr)\nswapped = False\n\nfor i in range(n-1):\n for j in range(0, n-i-1):\n \n g1 = arr[j][0].split()\n g2 = arr[j + 1][0].split()\n \n if any([k > l for k in g1] for l in g2):\n\n swapped = True\n arr[j], arr[j + 1] = arr[j + 1], arr[j]\n \n if any(s in g2 for s in g1):\n arr[j].extend(arr[j + 1])\n arr[j + 1] = ['-']\n \n if not swapped:\n return arr\n \narr = [a for a in arr if a[0]!='-']\nreturn arr\n\ninputList = ['w', 'd', 'c', 'm', 'w d', 'm c', 'd w', 'c m', 'o', 'p']\n#inputList = [\"m\", \"d\", \"w d\", \"m c\", \"c d\"]\n\ninputList = [[n] for n in inputList]\n\nprint(bubbleSort(inputList))\n\nOutput:\n[['p'], ['o'], ['c m', 'm c', 'c', 'm'], ['d w', 'w d', 'w', 'd']]\n\n" ]
[ 1 ]
[]
[]
[ "group_by", "python", "python_itertools", "string" ]
stackoverflow_0074492029_group_by_python_python_itertools_string.txt
Q: Is it possible to make abstract classes? How can I make a class or method abstract in Python? I tried redefining __new__() like so: class F: def __new__(cls): raise Exception("Unable to create an instance of abstract class %s" %cls) but now if I create a class G that inherits from F like so: class G(F): pass then I can't instantiate G either, since it calls its super class's __new__ method. Is there a better way to define an abstract class? A: Use the abc module to create abstract classes. Use the abstractmethod decorator to declare a method abstract, and declare a class abstract using one of three ways, depending upon your Python version. In Python 3.4 and above, you can inherit from ABC. In earlier versions of Python, you need to specify your class's metaclass as ABCMeta. Specifying the metaclass has different syntax in Python 3 and Python 2. The three possibilities are shown below: # Python 3.4+ from abc import ABC, abstractmethod class Abstract(ABC): @abstractmethod def foo(self): pass # Python 3.0+ from abc import ABCMeta, abstractmethod class Abstract(metaclass=ABCMeta): @abstractmethod def foo(self): pass # Python 2 from abc import ABCMeta, abstractmethod class Abstract: __metaclass__ = ABCMeta @abstractmethod def foo(self): pass Whichever way you use, you won't be able to instantiate an abstract class that has abstract methods, but will be able to instantiate a subclass that provides concrete definitions of those methods: >>> Abstract() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't instantiate abstract class Abstract with abstract methods foo >>> class StillAbstract(Abstract): ... pass ... >>> StillAbstract() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't instantiate abstract class StillAbstract with abstract methods foo >>> class Concrete(Abstract): ... def foo(self): ... print('Hello, World') ... >>> Concrete() <__main__.Concrete object at 0x7fc935d28898> A: The old-school (pre-PEP 3119) way to do this is just to raise NotImplementedError in the abstract class when an abstract method is called. class Abstract(object): def foo(self): raise NotImplementedError('subclasses must override foo()!') class Derived(Abstract): def foo(self): print 'Hooray!' >>> d = Derived() >>> d.foo() Hooray! >>> a = Abstract() >>> a.foo() Traceback (most recent call last): [...] This doesn't have the same nice properties as using the abc module does. You can still instantiate the abstract base class itself, and you won't find your mistake until you call the abstract method at runtime. But if you're dealing with a small set of simple classes, maybe with just a few abstract methods, this approach is a little easier than trying to wade through the abc documentation. A: Here's a very easy way without having to deal with the ABC module. In the __init__ method of the class that you want to be an abstract class, you can check the "type" of self. If the type of self is the base class, then the caller is trying to instantiate the base class, so raise an exception. 
Here's a simple example: class Base(): def __init__(self): if type(self) is Base: raise Exception('Base is an abstract class and cannot be instantiated directly') # Any initialization code print('In the __init__ method of the Base class') class Sub(Base): def __init__(self): print('In the __init__ method of the Sub class before calling __init__ of the Base class') super().__init__() print('In the __init__ method of the Sub class after calling __init__ of the Base class') subObj = Sub() baseObj = Base() When run, it produces: In the __init__ method of the Sub class before calling __init__ of the Base class In the __init__ method of the Base class In the __init__ method of the Sub class after calling __init__ of the Base class Traceback (most recent call last): File "/Users/irvkalb/Desktop/Demo files/Abstract.py", line 16, in <module> baseObj = Base() File "/Users/irvkalb/Desktop/Demo files/Abstract.py", line 4, in __init__ raise Exception('Base is an abstract class and cannot be instantiated directly') Exception: Base is an abstract class and cannot be instantiated directly This shows that you can instantiate a subclass that inherits from a base class, but you cannot instantiate the base class directly. A: Most Previous answers were correct but here is the answer and example for Python 3.7. Yes, you can create an abstract class and method. Just as a reminder sometimes a class should define a method which logically belongs to a class, but that class cannot specify how to implement the method. For example, in the below Parents and Babies classes they both eat but the implementation will be different for each because babies and parents eat a different kind of food and the number of times they eat is different. So, eat method subclasses overrides AbstractClass.eat. from abc import ABC, abstractmethod class AbstractClass(ABC): def __init__(self, value): self.value = value super().__init__() @abstractmethod def eat(self): pass class Parents(AbstractClass): def eat(self): return "eat solid food "+ str(self.value) + " times each day" class Babies(AbstractClass): def eat(self): return "Milk only "+ str(self.value) + " times or more each day" food = 3 mom = Parents(food) print("moms ----------") print(mom.eat()) infant = Babies(food) print("infants ----------") print(infant.eat()) OUTPUT: moms ---------- eat solid food 3 times each day infants ---------- Milk only 3 times or more each day A: As explained in the other answers, yes you can use abstract classes in Python using the abc module. Below I give an actual example using abstract @classmethod, @property and @abstractmethod (using Python 3.6+). For me it is usually easier to start off with examples I can easily copy&paste; I hope this answer is also useful for others. Let's first create a base class called Base: from abc import ABC, abstractmethod class Base(ABC): @classmethod @abstractmethod def from_dict(cls, d): pass @property @abstractmethod def prop1(self): pass @property @abstractmethod def prop2(self): pass @prop2.setter @abstractmethod def prop2(self, val): pass @abstractmethod def do_stuff(self): pass Our Base class will always have a from_dict classmethod, a property prop1 (which is read-only) and a property prop2 (which can also be set) as well as a function called do_stuff. Whatever class is now built based on Base will have to implement all of these four methods/properties. Please note that for a method to be abstract, two decorators are required - classmethod and abstract property. 
Now we could create a class A like this: class A(Base): def __init__(self, name, val1, val2): self.name = name self.__val1 = val1 self._val2 = val2 @classmethod def from_dict(cls, d): name = d['name'] val1 = d['val1'] val2 = d['val2'] return cls(name, val1, val2) @property def prop1(self): return self.__val1 @property def prop2(self): return self._val2 @prop2.setter def prop2(self, value): self._val2 = value def do_stuff(self): print('juhu!') def i_am_not_abstract(self): print('I can be customized') All required methods/properties are implemented and we can - of course - also add additional functions that are not part of Base (here: i_am_not_abstract). Now we can do: a1 = A('dummy', 10, 'stuff') a2 = A.from_dict({'name': 'from_d', 'val1': 20, 'val2': 'stuff'}) a1.prop1 # prints 10 a1.prop2 # prints 'stuff' As desired, we cannot set prop1: a.prop1 = 100 will return AttributeError: can't set attribute Also our from_dict method works fine: a2.prop1 # prints 20 If we now defined a second class B like this: class B(Base): def __init__(self, name): self.name = name @property def prop1(self): return self.name and tried to instantiate an object like this: b = B('iwillfail') we will get an error TypeError: Can't instantiate abstract class B with abstract methods do_stuff, from_dict, prop2 listing all the things defined in Base which we did not implement in B. A: This one will be working in python 3 from abc import ABCMeta, abstractmethod class Abstract(metaclass=ABCMeta): @abstractmethod def foo(self): pass Abstract() >>> TypeError: Can not instantiate abstract class Abstract with abstract methods foo A: also this works and is simple: class A_abstract(object): def __init__(self): # quite simple, old-school way. if self.__class__.__name__ == "A_abstract": raise NotImplementedError("You can't instantiate this abstract class. Derive it, please.") class B(A_abstract): pass b = B() # here an exception is raised: a = A_abstract() A: You can also harness the __new__ method to your advantage. You just forgot something. The __new__ method always returns the new object so you must return its superclass' new method. Do as follows. class F: def __new__(cls): if cls is F: raise TypeError("Cannot create an instance of abstract class '{}'".format(cls.__name__)) return super().__new__(cls) When using the new method, you have to return the object, not the None keyword. That's all you missed. A: I find the accepted answer, and all the others strange, since they pass self to an abstract class. An abstract class is not instantiated so can't have a self. So try this, it works. from abc import ABCMeta, abstractmethod class Abstract(metaclass=ABCMeta): @staticmethod @abstractmethod def foo(): """An abstract method. No need to write pass""" class Derived(Abstract): def foo(self): print('Hooray!') FOO = Derived() FOO.foo() A: from abc import ABCMeta, abstractmethod #Abstract class and abstract method declaration class Jungle(metaclass=ABCMeta): #constructor with default values def __init__(self, name="Unknown"): self.visitorName = name def welcomeMessage(self): print("Hello %s , Welcome to the Jungle" % self.visitorName) # abstract method is compulsory to defined in child-class @abstractmethod def scarySound(self): pass A: Late to answer here, but to answer the other question "How to make abstract methods" which points here, I offer the following. 
# decorators.py def abstract(f): def _decorator(*_): raise NotImplementedError(f"Method '{f.__name__}' is abstract") return _decorator # yourclass.py class Vehicle: def add_energy(): print("Energy added!") @abstract def get_make(): ... @abstract def get_model(): ... The class base Vehicle class can still be instantiated for unit testing (unlike with ABC), and the Pythonic raising of an exception is present. Oh yes, you also get the method name that is abstract in the exception with this method for convenience. A: You can create an abstract class by extending ABC which stands for "Abstract Base Classes" and can create the abstract method with @abstractmethod in the abstract class as shown below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): pass And, to use an abstract class, it should be extended by a child class and the child class should override the abstract method of the abstract class as shown below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): pass class Cat(Animal): # Extends "Animal" abstract class def sound(self): # Overrides "sound()" abstract method print("Meow!!") obj = Cat() obj.sound() Output: Meow!! And, an abstract method can have code rather than pass and can be called by a child class as shown below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): print("Wow!!") # Here class Cat(Animal): def sound(self): super().sound() # Here obj = Cat() obj.sound() Output: Wow!! And, an abstract class can have the variables and non-abstract methods which can be called by a child class and non-abstract methods don't need to be overridden by a child class as shown below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): pass def __init__(self): # Here self.name = "John" # Here x = "Hello" # Here def test1(self): # Here print("Test1") @classmethod # Here def test2(cls): print("Test2") @staticmethod # Here def test3(): print("Test3") class Cat(Animal): def sound(self): print(self.name) # Here print(super().x) # Here super().test1() # Here super().test2() # Here super().test3() # Here obj = Cat() obj.sound() Output: John Hello Test1 Test2 Test3 And, you can define an abstract class and static methods and an abstract getter, setter and deleter in an abstract class as shown below. 
*@abstractmethod must be the innermost decorator otherwise error occurs and you can see my answer which explains more about an abstract getter, setter and deleter: from abc import ABC, abstractmethod class Person(ABC): @classmethod @abstractmethod # The innermost decorator def test1(cls): pass @staticmethod @abstractmethod # The innermost decorator def test2(): pass @property @abstractmethod # The innermost decorator def name(self): pass @name.setter @abstractmethod # The innermost decorator def name(self, name): pass @name.deleter @abstractmethod # The innermost decorator def name(self): pass Then, you need to override them in a child class as shown below: class Student(Person): def __init__(self, name): self._name = name @classmethod def test1(cls): # Overrides abstract class method print("Test1") @staticmethod def test2(): # Overrides abstract static method print("Test2") @property def name(self): # Overrides abstract getter return self._name @name.setter def name(self, name): # Overrides abstract setter self._name = name @name.deleter def name(self): # Overrides abstract deleter del self._name Then, you can instantiate the child class and call them as shown below: obj = Student("John") # Instantiates "Student" class obj.test1() # Class method obj.test2() # Static method print(obj.name) # Getter obj.name = "Tom" # Setter print(obj.name) # Getter del obj.name # Deleter print(hasattr(obj, "name")) Output: Test1 Test2 John Tom False And, if you try to instantiate an abstract class as shown below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): pass obj = Animal() The error below occurs: TypeError: Can't instantiate abstract class Animal with abstract methods sound And, if you don't override the abstract method of an abstract class in a child class and you instantiate the child class as shown below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): pass class Cat(Animal): pass # Doesn't override "sound()" abstract method obj = Cat() # Here The error below occurs: TypeError: Can't instantiate abstract class Cat with abstract methods sound And, if you define an abstract method in the non-abstract class which doesn't extend ABC, the abstract method is a normal instance method so there are no errors even if the non-abstract class is instantiated and even if a child class doesn't override the abstract method of the non-abstract class as shown below: from abc import ABC, abstractmethod class Animal: # Doesn't extend "ABC" @abstractmethod # Here def sound(self): print("Wow!!") class Cat(Animal): pass # Doesn't override "sound()" abstract method obj1 = Animal() # Here obj1.sound() obj2 = Cat() # Here obj2.sound() Output: Wow!! Wow!! In addition, you can replace Cat class extending Animal class below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): pass # ↓↓↓ Here ↓↓↓ class Cat(Animal): def sound(self): print("Meow!!") # ↑↑↑ Here ↑↑↑ print(issubclass(Cat, Animal)) With this code having register() below: from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def sound(self): pass # ↓↓↓ Here ↓↓↓ class Cat: def sound(self): print("Meow!!") Animal.register(Cat) # ↑↑↑ Here ↑↑↑ print(issubclass(Cat, Animal)) Then, both of the code above outputs the same result below showing Cat class is the subclass of Animal class: True
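One option the answers above do not cover: when the goal is an interface rather than shared implementation, typing.Protocol (Python 3.8+) gives a structural alternative to ABCs. A minimal sketch:
from typing import Protocol, runtime_checkable

@runtime_checkable
class Sounder(Protocol):
    def sound(self) -> None: ...

class Cat:                    # no inheritance or register() call needed
    def sound(self) -> None:
        print("Meow!!")

print(isinstance(Cat(), Sounder))   # True: matched by structure, not by base class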
Is it possible to make abstract classes?
How can I make a class or method abstract in Python? I tried redefining __new__() like so: class F: def __new__(cls): raise Exception("Unable to create an instance of abstract class %s" %cls) but now if I create a class G that inherits from F like so: class G(F): pass then I can't instantiate G either, since it calls its super class's __new__ method. Is there a better way to define an abstract class?
[ "Use the abc module to create abstract classes. Use the abstractmethod decorator to declare a method abstract, and declare a class abstract using one of three ways, depending upon your Python version.\nIn Python 3.4 and above, you can inherit from ABC. In earlier versions of Python, you need to specify your class's metaclass as ABCMeta. Specifying the metaclass has different syntax in Python 3 and Python 2. The three possibilities are shown below:\n# Python 3.4+\nfrom abc import ABC, abstractmethod\nclass Abstract(ABC):\n @abstractmethod\n def foo(self):\n pass\n\n# Python 3.0+\nfrom abc import ABCMeta, abstractmethod\nclass Abstract(metaclass=ABCMeta):\n @abstractmethod\n def foo(self):\n pass\n\n# Python 2\nfrom abc import ABCMeta, abstractmethod\nclass Abstract:\n __metaclass__ = ABCMeta\n\n @abstractmethod\n def foo(self):\n pass\n\nWhichever way you use, you won't be able to instantiate an abstract class that has abstract methods, but will be able to instantiate a subclass that provides concrete definitions of those methods:\n>>> Abstract()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: Can't instantiate abstract class Abstract with abstract methods foo\n>>> class StillAbstract(Abstract):\n... pass\n... \n>>> StillAbstract()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: Can't instantiate abstract class StillAbstract with abstract methods foo\n>>> class Concrete(Abstract):\n... def foo(self):\n... print('Hello, World')\n... \n>>> Concrete()\n<__main__.Concrete object at 0x7fc935d28898>\n\n", "The old-school (pre-PEP 3119) way to do this is just to raise NotImplementedError in the abstract class when an abstract method is called.\nclass Abstract(object):\n def foo(self):\n raise NotImplementedError('subclasses must override foo()!')\n\nclass Derived(Abstract):\n def foo(self):\n print 'Hooray!'\n\n>>> d = Derived()\n>>> d.foo()\nHooray!\n>>> a = Abstract()\n>>> a.foo()\nTraceback (most recent call last): [...]\n\nThis doesn't have the same nice properties as using the abc module does. You can still instantiate the abstract base class itself, and you won't find your mistake until you call the abstract method at runtime. \nBut if you're dealing with a small set of simple classes, maybe with just a few abstract methods, this approach is a little easier than trying to wade through the abc documentation.\n", "Here's a very easy way without having to deal with the ABC module.\nIn the __init__ method of the class that you want to be an abstract class, you can check the \"type\" of self. If the type of self is the base class, then the caller is trying to instantiate the base class, so raise an exception. 
Here's a simple example:\nclass Base():\n def __init__(self):\n if type(self) is Base:\n raise Exception('Base is an abstract class and cannot be instantiated directly')\n # Any initialization code\n print('In the __init__ method of the Base class')\n\nclass Sub(Base):\n def __init__(self):\n print('In the __init__ method of the Sub class before calling __init__ of the Base class')\n super().__init__()\n print('In the __init__ method of the Sub class after calling __init__ of the Base class')\n\nsubObj = Sub()\nbaseObj = Base()\n\nWhen run, it produces:\nIn the __init__ method of the Sub class before calling __init__ of the Base class\nIn the __init__ method of the Base class\nIn the __init__ method of the Sub class after calling __init__ of the Base class\nTraceback (most recent call last):\n File \"/Users/irvkalb/Desktop/Demo files/Abstract.py\", line 16, in <module>\n baseObj = Base()\n File \"/Users/irvkalb/Desktop/Demo files/Abstract.py\", line 4, in __init__\n raise Exception('Base is an abstract class and cannot be instantiated directly')\nException: Base is an abstract class and cannot be instantiated directly\n\nThis shows that you can instantiate a subclass that inherits from a base class, but you cannot instantiate the base class directly.\n", "Most Previous answers were correct but here is the answer and example for Python 3.7. Yes, you can create an abstract class and method. Just as a reminder sometimes a class should define a method which logically belongs to a class, but that class cannot specify how to implement the method. For example, in the below Parents and Babies classes they both eat but the implementation will be different for each because babies and parents eat a different kind of food and the number of times they eat is different. So, eat method subclasses overrides AbstractClass.eat. \nfrom abc import ABC, abstractmethod\n\nclass AbstractClass(ABC):\n\n def __init__(self, value):\n self.value = value\n super().__init__()\n\n @abstractmethod\n def eat(self):\n pass\n\nclass Parents(AbstractClass):\n def eat(self):\n return \"eat solid food \"+ str(self.value) + \" times each day\"\n\nclass Babies(AbstractClass):\n def eat(self):\n return \"Milk only \"+ str(self.value) + \" times or more each day\"\n\nfood = 3 \nmom = Parents(food)\nprint(\"moms ----------\")\nprint(mom.eat())\n\ninfant = Babies(food)\nprint(\"infants ----------\")\nprint(infant.eat())\n\nOUTPUT:\nmoms ----------\neat solid food 3 times each day\ninfants ----------\nMilk only 3 times or more each day\n\n", "As explained in the other answers, yes you can use abstract classes in Python using the abc module. Below I give an actual example using abstract @classmethod, @property and @abstractmethod (using Python 3.6+). For me it is usually easier to start off with examples I can easily copy&paste; I hope this answer is also useful for others.\nLet's first create a base class called Base:\nfrom abc import ABC, abstractmethod\n\nclass Base(ABC):\n\n @classmethod\n @abstractmethod\n def from_dict(cls, d):\n pass\n \n @property\n @abstractmethod\n def prop1(self):\n pass\n\n @property\n @abstractmethod\n def prop2(self):\n pass\n\n @prop2.setter\n @abstractmethod\n def prop2(self, val):\n pass\n\n @abstractmethod\n def do_stuff(self):\n pass\n\nOur Base class will always have a from_dict classmethod, a property prop1 (which is read-only) and a property prop2 (which can also be set) as well as a function called do_stuff. 
Whatever class is now built based on Base will have to implement all of these four methods/properties. Please note that for a method to be abstract, two decorators are required - classmethod and abstract property.\nNow we could create a class A like this:\nclass A(Base):\n def __init__(self, name, val1, val2):\n self.name = name\n self.__val1 = val1\n self._val2 = val2\n\n @classmethod\n def from_dict(cls, d):\n name = d['name']\n val1 = d['val1']\n val2 = d['val2']\n\n return cls(name, val1, val2)\n\n @property\n def prop1(self):\n return self.__val1\n\n @property\n def prop2(self):\n return self._val2\n\n @prop2.setter\n def prop2(self, value):\n self._val2 = value\n\n def do_stuff(self):\n print('juhu!')\n\n def i_am_not_abstract(self):\n print('I can be customized')\n\nAll required methods/properties are implemented and we can - of course - also add additional functions that are not part of Base (here: i_am_not_abstract).\nNow we can do:\na1 = A('dummy', 10, 'stuff')\na2 = A.from_dict({'name': 'from_d', 'val1': 20, 'val2': 'stuff'})\n\na1.prop1\n# prints 10\n\na1.prop2\n# prints 'stuff'\n\nAs desired, we cannot set prop1:\na.prop1 = 100\n\nwill return\n\nAttributeError: can't set attribute\n\nAlso our from_dict method works fine:\na2.prop1\n# prints 20\n\nIf we now defined a second class B like this:\nclass B(Base):\n def __init__(self, name):\n self.name = name\n\n @property\n def prop1(self):\n return self.name\n\nand tried to instantiate an object like this:\nb = B('iwillfail')\n\nwe will get an error\n\nTypeError: Can't instantiate abstract class B with abstract methods\ndo_stuff, from_dict, prop2\n\nlisting all the things defined in Base which we did not implement in B.\n", "This one will be working in python 3 \nfrom abc import ABCMeta, abstractmethod\n\nclass Abstract(metaclass=ABCMeta):\n\n @abstractmethod\n def foo(self):\n pass\n\nAbstract()\n>>> TypeError: Can not instantiate abstract class Abstract with abstract methods foo\n\n", "also this works and is simple:\nclass A_abstract(object):\n\n def __init__(self):\n # quite simple, old-school way.\n if self.__class__.__name__ == \"A_abstract\": \n raise NotImplementedError(\"You can't instantiate this abstract class. Derive it, please.\")\n\nclass B(A_abstract):\n\n pass\n\nb = B()\n\n# here an exception is raised:\na = A_abstract()\n\n", "You can also harness the __new__ method to your advantage. You just forgot something.\nThe __new__ method always returns the new object so you must return its superclass' new method. Do as follows.\nclass F:\n def __new__(cls):\n if cls is F:\n raise TypeError(\"Cannot create an instance of abstract class '{}'\".format(cls.__name__))\n return super().__new__(cls)\n\nWhen using the new method, you have to return the object, not the None keyword. That's all you missed.\n", "I find the accepted answer, and all the others strange, since they pass self to an abstract class. An abstract class is not instantiated so can't have a self. \nSo try this, it works.\nfrom abc import ABCMeta, abstractmethod\n\n\nclass Abstract(metaclass=ABCMeta):\n @staticmethod\n @abstractmethod\n def foo():\n \"\"\"An abstract method. 
No need to write pass\"\"\"\n\n\nclass Derived(Abstract):\n def foo(self):\n print('Hooray!')\n\n\nFOO = Derived()\nFOO.foo()\n\n", " from abc import ABCMeta, abstractmethod\n\n #Abstract class and abstract method declaration\n class Jungle(metaclass=ABCMeta):\n #constructor with default values\n def __init__(self, name=\"Unknown\"):\n self.visitorName = name\n\n def welcomeMessage(self):\n print(\"Hello %s , Welcome to the Jungle\" % self.visitorName)\n\n # abstract method is compulsory to defined in child-class\n @abstractmethod\n def scarySound(self):\n pass\n\n", "Late to answer here, but to answer the other question \"How to make abstract methods\" which points here, I offer the following.\n# decorators.py\ndef abstract(f):\n def _decorator(*_):\n raise NotImplementedError(f\"Method '{f.__name__}' is abstract\")\n return _decorator\n\n\n# yourclass.py\nclass Vehicle:\n def add_energy():\n print(\"Energy added!\")\n\n @abstract\n def get_make(): ...\n\n @abstract\n def get_model(): ...\n\nThe class base Vehicle class can still be instantiated for unit testing (unlike with ABC), and the Pythonic raising of an exception is present. Oh yes, you also get the method name that is abstract in the exception with this method for convenience.\n", "You can create an abstract class by extending ABC which stands for \"Abstract Base Classes\" and can create the abstract method with @abstractmethod in the abstract class as shown below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n pass\n\nAnd, to use an abstract class, it should be extended by a child class and the child class should override the abstract method of the abstract class as shown below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n pass\n\nclass Cat(Animal): # Extends \"Animal\" abstract class\n def sound(self): # Overrides \"sound()\" abstract method\n print(\"Meow!!\")\n\nobj = Cat()\nobj.sound()\n\nOutput:\nMeow!!\n\nAnd, an abstract method can have code rather than pass and can be called by a child class as shown below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n print(\"Wow!!\") # Here\n\nclass Cat(Animal):\n def sound(self):\n super().sound() # Here\n \nobj = Cat()\nobj.sound()\n\nOutput:\nWow!!\n\nAnd, an abstract class can have the variables and non-abstract methods which can be called by a child class and non-abstract methods don't need to be overridden by a child class as shown below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n pass\n \n def __init__(self): # Here\n self.name = \"John\" # Here\n \n x = \"Hello\" # Here\n \n def test1(self): # Here\n print(\"Test1\")\n \n @classmethod # Here\n def test2(cls):\n print(\"Test2\")\n \n @staticmethod # Here\n def test3():\n print(\"Test3\")\n\nclass Cat(Animal):\n def sound(self):\n print(self.name) # Here\n print(super().x) # Here\n super().test1() # Here\n super().test2() # Here\n super().test3() # Here\n\nobj = Cat()\nobj.sound()\n\nOutput:\nJohn\nHello\nTest1\nTest2\nTest3\n\nAnd, you can define an abstract class and static methods and an abstract getter, setter and deleter in an abstract class as shown below. 
*@abstractmethod must be the innermost decorator otherwise error occurs and you can see my answer which explains more about an abstract getter, setter and deleter:\nfrom abc import ABC, abstractmethod\n\nclass Person(ABC):\n\n @classmethod\n @abstractmethod # The innermost decorator\n def test1(cls):\n pass\n \n @staticmethod\n @abstractmethod # The innermost decorator\n def test2():\n pass\n\n @property\n @abstractmethod # The innermost decorator\n def name(self):\n pass\n\n @name.setter\n @abstractmethod # The innermost decorator\n def name(self, name):\n pass\n\n @name.deleter\n @abstractmethod # The innermost decorator\n def name(self):\n pass\n\nThen, you need to override them in a child class as shown below:\nclass Student(Person):\n \n def __init__(self, name):\n self._name = name\n \n @classmethod\n def test1(cls): # Overrides abstract class method\n print(\"Test1\")\n \n @staticmethod\n def test2(): # Overrides abstract static method\n print(\"Test2\")\n \n @property\n def name(self): # Overrides abstract getter\n return self._name\n \n @name.setter\n def name(self, name): # Overrides abstract setter\n self._name = name\n \n @name.deleter\n def name(self): # Overrides abstract deleter\n del self._name\n\nThen, you can instantiate the child class and call them as shown below:\nobj = Student(\"John\") # Instantiates \"Student\" class\nobj.test1() # Class method\nobj.test2() # Static method\nprint(obj.name) # Getter\nobj.name = \"Tom\" # Setter\nprint(obj.name) # Getter\ndel obj.name # Deleter\nprint(hasattr(obj, \"name\"))\n\nOutput:\nTest1\nTest2\nJohn \nTom \nFalse\n\nAnd, if you try to instantiate an abstract class as shown below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n pass\n\nobj = Animal()\n\nThe error below occurs:\n\nTypeError: Can't instantiate abstract class Animal with abstract methods sound\n\nAnd, if you don't override the abstract method of an abstract class in a child class and you instantiate the child class as shown below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n pass\n\nclass Cat(Animal):\n pass # Doesn't override \"sound()\" abstract method\n\nobj = Cat() # Here\n\nThe error below occurs:\n\nTypeError: Can't instantiate abstract class Cat with abstract methods sound\n\nAnd, if you define an abstract method in the non-abstract class which doesn't extend ABC, the abstract method is a normal instance method so there are no errors even if the non-abstract class is instantiated and even if a child class doesn't override the abstract method of the non-abstract class as shown below:\nfrom abc import ABC, abstractmethod\n\nclass Animal: # Doesn't extend \"ABC\"\n @abstractmethod # Here\n def sound(self):\n print(\"Wow!!\")\n\nclass Cat(Animal):\n pass # Doesn't override \"sound()\" abstract method\n\nobj1 = Animal() # Here\nobj1.sound()\n\nobj2 = Cat() # Here\nobj2.sound()\n\nOutput:\nWow!!\nWow!!\n\nIn addition, you can replace Cat class extending Animal class below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n pass\n\n# ↓↓↓ Here ↓↓↓\n\nclass Cat(Animal):\n def sound(self):\n print(\"Meow!!\")\n\n# ↑↑↑ Here ↑↑↑\n\nprint(issubclass(Cat, Animal))\n\nWith this code having register() below:\nfrom abc import ABC, abstractmethod\n\nclass Animal(ABC):\n @abstractmethod\n def sound(self):\n pass\n\n# ↓↓↓ Here ↓↓↓\n\nclass Cat:\n def sound(self):\n print(\"Meow!!\")\n \nAnimal.register(Cat)\n\n# ↑↑↑ Here 
↑↑↑\n\nprint(issubclass(Cat, Animal))\n\nThen, both of the code above outputs the same result below showing Cat class is the subclass of Animal class:\nTrue\n\n" ]
[ 734, 146, 29, 23, 16, 9, 3, 3, 3, 3, 2, 0 ]
[ "In your code snippet, you could also resolve this by providing an implementation for the __new__ method in the subclass, likewise:\ndef G(F):\n def __new__(cls):\n # do something here\n\nBut this is a hack and I advise you against it, unless you know what you are doing. For nearly all cases I advise you to use the abc module, that others before me have suggested.\nAlso when you create a new (base) class, make it subclass object, like this: class MyBaseClass(object):. I don't know if it is that much significant anymore, but it helps retain style consistency on your code\n", "Just a quick addition to @TimGilbert's old-school answer...you can make your abstract base class's init() method throw an exception and that would prevent it from being instantiated, no?\n>>> class Abstract(object):\n... def __init__(self):\n... raise NotImplementedError(\"You can't instantiate this class!\")\n...\n>>> a = Abstract()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<stdin>\", line 3, in __init__\nNotImplementedError: You can't instantiate this class! \n\n" ]
[ -3, -3 ]
[ "abstract", "abstract_class", "class", "inheritance", "python" ]
stackoverflow_0013646245_abstract_abstract_class_class_inheritance_python.txt
Q: Only allow certain things to be imported How can I allow just what I specify to be imported? For example:
main.py
import random

def getnumber():
    return random.randint(1, 5)

other.py
import main

print(dir(main))

In this example, I want to import the getnumber function, but not the random module. I know that "from main import getnumber" will work, but how do I do this when using "import main"?
A: You can import the module, and then delete particular names from its namespace:
import main
del main.random

Note that this does not modify the original module's source, just your ability to access those names from your own module. (I can't think of any good reason to do this, but there it is.)
If you're trying to create a sanitized version of main.py that only includes specific names, create another file that imports those specific names from main.py. For example:
# main/getnumber.py

import random

def getnumber():
    return random.randint(1, 5)

# main/__init__.py

from .getnumber import getnumber

When you do import main, you're importing main/__init__.py, which in turn imports getnumber (and only getnumber) from main/getnumber.py.
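Related, though it solves a narrower problem: defining __all__ in main.py controls what a wildcard import exposes. It does not hide main.random from import main or from dir(main); it only affects the from main import * form. A minimal sketch:
# main.py
import random

__all__ = ["getnumber"]   # only this name is exported by "from main import *"

def getnumber():
    return random.randint(1, 5)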
Only allow certain things to be imported
How can I allow just what I specify to be imported? For example: main.py import random def getnumber(): return random.randint(1, 5) other.py import main print(dir(main)) In this example, I want to import the getnumber function, but not the random module. I know that "from main import getnumber" will work, but how do I do this when using "import main"
[ "You can import the module, and then delete particular names from its namespace:\nimport main\ndel main.random\n\nNote that this does not modify the original module's source, just your ability to access those names from your own module. (I can't think of any good reason to do this, but there it is.)\nIf you're trying to create a sanitized version of main.py that only includes specific names, create another file that imports those specific names from main.py. For example:\n# main/getnumber.py\n\nimport random\n\ndef getnumber():\n return random.randint(1, 5)\n\n# main/__init__.py\n\nfrom .getnumber import getnumber\n\nWhen you do import main, you're importing main/__init__.py, which in turn imports getnumber (and only getnumber) from main/getnumber.py.\n" ]
[ 0 ]
[]
[]
[ "module", "python" ]
stackoverflow_0074523179_module_python.txt
Q: How do I get my code to recall upon the function again to start over? (Python) Here is my code that I am working on. It is supposed to take a number from the user and if it's a perfect number it says so, but if it's not it asks to enter a new number. When I get to the "enter a new number" part, it doesn't register my input. Can someone help me out?
def isPerfect(num):
    if num <= 0:
        return False
    total = 0
    for i in range(1, num):
        if num % i == 0:
            total = total + i
    if total == num:
        return True
    else:
        return False

def main():
    num = int(input("Enter a perfect integer: "))
    if isPerfect(num) == False:
        op = int(input(f"{num} is not a perfect number. Re-enter:"))
        isPerfect(op)
    elif isPerfect(num) == True:
        print("Congratulations!", num, 'is a perfect number.')

if __name__ == '__main__':
    main()

A: You could just put a while True loop into your main function like so:
def main():
    first_run = True
    perfect_num_received = False
    while not perfect_num_received:
        if first_run:
            num = int(input("Enter a perfect integer: "))
            first_run = False
        if isPerfect(num) == False:
            num = int(input(f"{num} is not a perfect number. Re-enter:"))
        elif isPerfect(num) == True:
            perfect_num_received = True
            print("Congratulations!", num, 'is a perfect number.')

Also, there is already a built-in function to check the data type of a variable, so you could do something like this:
if type(num) == int:
    ...

A: From the problem description, I assume that the program should terminate after the perfect number is found. There are two things which can be added: a loop to go over the input prompt each time the user feeds a number, and a break condition after finding the perfect number. Here's one implementation which works:
def main():
    while True:
        num = int(input("Enter a perfect integer: "))
        if isPerfect(num) == False:
            num = int(input(f"{num} is not a perfect number. Re-enter: "))
        elif isPerfect(num) == True:
            print("Congratulations! ", num, ' is a perfect number.')
            break
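On Python 3.8+ the retry loop can also be written with an assignment expression, so there is only one prompt to maintain. A minimal sketch reusing isPerfect from the question:
def main():
    while not isPerfect(num := int(input("Enter a perfect integer: "))):
        print(f"{num} is not a perfect number.")
    print("Congratulations!", num, "is a perfect number.")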
How do I get my code to recall upon the function again to start over? (Python)
Here is my code that I am working on. It is supposed to take a number from the user and if its a perfect number it says so, but if its not it asks to enter a new number. When I get to the enter a new number part, it doesn't register my input. Can someone help me out? def isPerfect(num): if num <= 0: return False total = 0 for i in range(1,num): if num%i== 0: total = total + i if total == num: return True else: return False def main(): num = int(input("Enter a perfect integer: ")) if isPerfect(num) == False: op = int(input(f"{num} is not a perfect number. Re-enter:")) isPerfect(op) elif isPerfect(num) == True: print("Congratulations!",num, 'is a perfect number.') if __name__ == '__main__': main()
[ "you could just put a while True loop into your main function like so:\ndef main():\n first_run = True\n perfect_num_received = False\n while not perfect_num_received:\n if first_run:\n num = int(input(\"Enter a perfect integer: \"))\n first_run = False\n if isPerfect(num) == False:\n num = int(input(f\"{num} is not a perfect number. Re-enter:\"))\n elif isPerfect(num) == True:\n perfect_num_received = True\n print(\"Congratulations!\",num, 'is a perfect number.')\n\nbut also there is already a built in function to check for the data type of a variable so you could just do something like this maybe:\nif type(num) == int:\n ...\n\n", "From the problem description, I assume that the program should terminate after the perfect number is found. There are two things which can be added, a loop to go over the input prompt each time the user feeds a number and a break condition after finding the perfect number. Here's one implementation which works.\ndef main():\nwhile True:\n num = int(input(\"Enter a perfect integer: \"))\n if isPerfect(num) == False:\n num = int(input(f\"{num} is not a perfect number. Re-enter: \"))\n isPerfect(num)\n elif isPerfect(num) == True:\n print(\"Congratulations! \",num, ' is a perfect number.')\n break\n\n" ]
[ 1, 0 ]
[]
[]
[ "function", "integer", "python", "python_3.x", "user_input" ]
stackoverflow_0074523018_function_integer_python_python_3.x_user_input.txt
Q: python: is there any practice to allow using class functions outside of the object I just want to say I am a newbie to OOP, so I am not sure what I am supposed to do here. So let's say I have a class that has a whole bunch of functions on data in it:
class stuff:
    def __init__ ...
    def func1(self, arg1, arg2):
        self.var1 = arg1*self.var3
        self.var2 = arg2*self.var4
    ...

func1 uses a lot of variables from the class (using self), and I have a lot of functions and a lot of variables, which is very convenient in a class. However, every once in a while I also need to use the function on data outside of an object. In that case I would have to pass var3 and var4 to the function, and I don't know how to do that. Actually there are about 10 variables that I would have to pass. So is there a good way to do that?
Should I make a copy of every function for use outside of objects? But there are a lot of functions and they are quite long, and I would have to remove self. before every variable; it would be hard to maintain.
Should I create an object for all data I want to process? That would also be annoying: __init__ does a lot of stuff that requires full data, and moving it to separate functions will create a lot of them.
Should I make functions inside the class that just call functions outside the class? Two issues: I would have to name them differently and memorize both names, and what if the data I am passing is very big?
Or is there a different way to do that? I was wondering if there is something to fix this in python, because I don't know what to google. I've noticed a lot of libraries use "." in their function calls, so I assume those are functions defined in classes, but I seem to use them on my data without creating objects.
A: func1 uses a lot of variables from the class (using self), and I have a lot of functions and a lot of variables, which is very convenient in a class — this is a clear violation of the single responsibility principle, so you should consider splitting the class down into multiple classes, each one responsible for one kind of input data, handling the operations on it.
A: I believe you're asking whether you can use a class's member function where either self refers to another object that it reads its member data from (this is possible if the other object has members with the same names as your original class), OR, more likely, you have a couple of free-floating (not class) variables, or your other class does not have members with exactly the same names.
In the second case, the best course of action would be to create a stand-alone function that takes a couple of parameters (or, even cleaner, a single class that holds all the arguments) and then have your classes reference this procedure, rather than having it be inside a class. The reason for this is that if this behavior is required in multiple places, the class you are writing is probably not actually supposed to be responsible for it in the first place anyway.
A: Here's a quick example of the syntax you need:
class MyFirstClass:
    def __init__(self, name):
        self.name = name

    @staticmethod
    def static_function():
        print("Static function called!")

    def instance_function(self):
        print("Instance function called on ", self.name)


class MySecondClass:
    def main():
        MyFirstClass.static_function()
        anInstance = MyFirstClass("Adam")
        anotherInstance = MyFirstClass("Bob")
        anInstance.instance_function()
        anotherInstance.instance_function()

MySecondClass.main()

A: I won't speak to whether it's a good design principle (it isn't) but I have committed the sin of convenience myself. This also presupposes that the class will still be instantiated and you will be using some of the self attributes, just not all of them. If this is not the case, see other answers regarding staticmethods.
class Stuff:
    def __init__() ...
    def func1(self, arg1, arg2, arg3=None, arg4=None):
        if arg3 is None:
            self.var1 = arg1*self.var3
        else:
            self.var1 = arg1*arg3
        self.var2 = arg2*self.var4

Actually there are about 10 variables that I would have to pass.

You could make use of kwargs in that case and unpack them within the function.
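The "single class that holds all the arguments" idea from the earlier answer can be made concrete with a dataclass, which keeps a call with ten-odd parameters manageable. A minimal sketch; the field names here are hypothetical stand-ins for the real variables:
from dataclasses import dataclass

@dataclass
class Params:          # hypothetical names standing in for the ~10 real variables
    var3: float
    var4: float

def func1(arg1, arg2, p: Params):
    return arg1 * p.var3, arg2 * p.var4

# usable on free-floating data:
print(func1(2, 3, Params(var3=1.5, var4=2.5)))

# and from inside the class, by building Params from self:
# func1(arg1, arg2, Params(var3=self.var3, var4=self.var4))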
python: is there any practice to allow using class functions outside of the object
I just want to say I am a newbie to OOP so I am not sure what I am supposed to do there. so lets say I have a class that has a whole bunch of functions on data in it: class stuff: def __init__ ... def func1(self, arg1, arg2) self.var1=arg1*self.var3 self.var2=arg2*self.var4 ... the func1 uses a lot of variables from the class (using self), and I have a lot of functions and a lot of variables which is very convenient in a class. However every once in a while I also need to use the function on data outside of an object. In that case I would have to pass var3 and var4 to the function and I don't know how to do that. Actually there are about 10 variables that I would have to pass. So is there a good way to do that? Should I make a copy of every function for using outside of objects? But there are a lot of functions and they are quite long, and I will have to remove self. before every variable, it will be hard to maintain. Should I create an object for all data I want to process? That would also be annoying, init does a lot of stuff that requires full data and moving it to separate functions will create a lot of them. Should I make functions inside class that just call functions outside class? Two issues, I would have to name them differently and memorize both names, and what if the data I am passing is very big? Or is there a different way to do that? So I was wondering if there is something to fix that in python because I don't know what to google. I've noticed a lot of libraries use "." in their function names so I assume those are in functions in classes, but I seem to use them on my data, without creating objects
[ "\nthe func1 uses a lot of variables from the class (using self), and I\nhave a lot of functions and a lot of variables which is very\nconvenient in a class\n\nthis is a clear violation of the single responsibility principle, so you should consider splitting the class down to multiple classes, each one is responsible for one kind of input data and handles operations on it.\n", "I believe you're asking about if you can use a class' member function where either self refers to another object which it reads its member data from (this is possible if the other object has members with the same name as your original class) OR more likely, you have a couple free-floating (not class variables) variables... or your other class does not have members with the exact same name.\nIn the second case, the best course of action would to be to create some stand-alone function that takes a couple parameters (or, even cleaner, a single class that holds all the arguments) and then have your classes reference this procedure, rather than having it be inside a class. The reason for this is if this behavior is required in multiple places, the class you are writing is probably not actually supposed to be responsible for it in the first place anyways.\n", "Here's a quick example of the syntax you need:\nclass MyFirstClass:\n def __init__(self, name):\n self.name = name\n\n @staticmethod\n def static_function():\n print(\"Static function called!\")\n def instance_function(self):\n print(\"Instance function called on \", self.name)\n \n \nclass MySecondClass:\n def main():\n MyFirstClass.static_function()\n anInstance = MyFirstClass(\"Adam\")\n anotherInstance = MyFirstClass(\"Bob\")\n anInstance.instance_function()\n anotherInstance.instance_function()\n \nMySecondClass.main()\n\n", "I won't speak to whether it's a good design principle (it isn't) but I have committed the sin of convenience myself. This also presupposes that the class will still be instantiated and you will be using some of the self attributes, just not all of them. If this is not the case, see other answers regarding staticmethods\nclass Stuff:\n def __init__() ...\n def func1(self, arg1, arg2, arg3=None, arg4=None)\n if arg3=None:\n self.var1=arg1*self.var3\n else:\n self.var1=arg1*arg3\n self.var2=arg2*self.var4\n\n\nActually there are about 10 variables that I would have to pass.\n\nYou could make use of kwargs in that case and unpack them within the function\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "class", "function", "object", "oop", "python" ]
stackoverflow_0074523025_class_function_object_oop_python.txt
Q: Move first date type to specific column - pandas I have a pandas dataframe, loaded from a csv, structured as follows: Whoever created the csv made some mistakes, and I need to move the first date which appears in each row to the column "Opening Date". The final result should be: How can I do it without specifying from which column to extract the date? (The only information I have is that it is the first one after the "Opening Date" column.)
A: I thought of a very explanatory approach.
First, we need a function that recognizes the date type. I didn't understand if there is a specific format in your csv, so when in doubt we will use a function that recognizes any pattern.
Check out 'Check if string has date, any format':
from dateutil.parser import parse

def is_date(string, fuzzy=False):
    try:
        parse(string, fuzzy=fuzzy)
        return True
    except ValueError:
        return False

At this point, we can iterate over each row in your dataframe and, where there is no value in the right column, search all the following ones.
sub_df = df.iloc[:, df.columns.str.find("Opening Data").argmax()+1:]  # retrieve only remaining columns

for index, row in df.iterrows():
    if not row['Opening Data']:
        for col in sub_df.columns:
            if is_date(row[col]):
                df.iloc[index]['Opening Data'] = row[col]
                df.iloc[index][col] = ''

Starting from a dataset of this form:

    Opening Data          col_0                 col_1
0   01-01-2000 00:00:00
1                         02-01-2000 00:00:00
2                                               03-01-2000 00:00:00

the output will be:

    Opening Data          col_0                 col_1
0   01-01-2000 00:00:00
1   02-01-2000 00:00:00
2   03-01-2000 00:00:00
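A vectorized alternative to the row loop, sketched under two assumptions: every column after "Opening Data" may hold the stray date, and empty cells are empty strings rather than NaN:
start = df.columns.get_loc("Opening Data") + 1
candidates = df.iloc[:, start:].apply(pd.to_datetime, errors="coerce")
first_date = candidates.bfill(axis=1).iloc[:, 0]          # first parseable date per row
df["Opening Data"] = df["Opening Data"].mask(df["Opening Data"].eq(""), first_date)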
Move first date type to specific column - pandas
I have a pandas dataframe, loaded from a csv, structured as follows: Whoever created the csv made some mistakes, and I need to move the first date which appears in each row to the column "Opening Date". The final result should be: How can I do it without specifying from which column to extract the date? (The only information I have is that it is the first one after the "Opening Date" column.)
[ "I thought a very explanatory approach.\nFirst, we need a function that recognizes the date type. I didn't understand if there is a specific format in your csv, so when in doubt we will use a function that recognizes any pattern.\nCheck out 'Check if string has date, any format':\nfrom dateutil.parser import parse\n\ndef is_date(string, fuzzy=False):\n try: \n parse(string, fuzzy=fuzzy)\n return True\n\n except ValueError:\n return False\n\nAt this point, we can iterate for each row in your dataframe and where there is no value in the right column, we search on all the next ones.\nsub_df = df.iloc[:, df.columns.str.find(\"Opening Data\").argmax()+1:] # retrieve only remaining columns\n\nfor index, row in df.iterrows():\n if not row['Opening Data']:\n for col in sub_df.columns:\n if is_date(row[col]):\n df.iloc[index]['Opening Data'] = row[col]\n df.iloc[index][col] = ''\n\nStarting from a dataset of this form:\n\n\n\n\n\nOpening Data\ncol_0\ncol_1\n\n\n\n\n0\n01-01-2000 00:00:00\n\n\n\n\n1\n\n02-01-2000 00:00:00\n\n\n\n2\n\n\n03-01-2000 00:00:00\n\n\n\n\nthe output will be:\n\n\n\n\n\nOpening Data\ncol_0\ncol_1\n\n\n\n\n0\n01-01-2000 00:00:00\n\n\n\n\n1\n02-01-2000 00:00:00\n\n\n\n\n2\n03-01-2000 00:00:00\n\n\n\n\n\n" ]
[ 1 ]
[]
[]
[ "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074522767_csv_dataframe_pandas_python.txt
Q: VS Code using Jupyter: Connecting to kernel: Python 3.6.9: Waiting for Jupyter Session to be idle I am having trouble running my import statements in VS Code Jupyter. I split them into individual cells. I find that when I run
import numpy as np

the cell hangs and I get the message
Connecting to kernel: Python 3.6.9: Waiting for Jupyter Session to be idle

How do I fix this?
A: To solve it, I uninstalled the Jupyter notebook extension (which requires a reload) and then reinstalled it.
A: This may be related to the extension version. I hope this article is helpful to you.
A: Alright so this one surprised me..
I was using Jupyter-like code cells "#%%" (see docs) to run a Jupyter notebook in VSCode, and I ran into the same issue as OP.
The error disappeared when I renamed my file from "inspect.py" to "tmp.py".
A: I found my solution was to select the correct version of Python. I had 2 choices for Python 3.6.9. I chose the one called "base (Python 3.6.9)", which had a different location to "Python 3.6.9", and the base version worked. Something odd is going on here; maybe I should remove the other version?
A: For me, upgrading ipython to version 7.34.0 (from 7.32.0) fixed it. I'm using jedi version 0.18.1. Related
Update: this broke again for me when I upgraded my virtual environment to Python 3.8. I just upgraded my ipykernel package and now the notebook runs.
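Collecting the package fixes from the last answer into a single command (run it inside the environment that the failing kernel uses; versions are whatever pip resolves, not pinned):
python -m pip install --upgrade ipython ipykernel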
VS Code using Jupyter: Connecting to kernel: Python 3.6.9: Waiting for Jupyter Session to be idle
I am having trouble running my import statement in VS code Jupyter. I split them into individual cells. I find when I run import numpy as np the cell hangs and I get a message Connecting to kernel: Python 3.6.9: Waiting for Jupyter Session to be idle How do I fix this?
[ "To solve it, I uninstalled the extension Jupyter notebook (which requires a reload), and then installed it.\n", "This may be related to the extended version. I hope this article is helpful to you.\n", "Alright so this one surprised me..\nI was using Jupyter-like code cells \"#%%\" (see docs) to run jupyter notebook in VSCode. And I ran into the same issue as OP.\nThe error disappeared when I renamed my file, from \"inspect.py\" to \"tmp.py\".\n", "I found my solution was to select the correct version of Python. I had 2 choices for Python 3.6.9. I chose the one called \"base(Python 3.6.9)\" which had a different location to \"Python 3.6.9\" and the base version worked. Something odd if going on here, maybe I should remove the other version?\n", "For me, upgrading ipython to version 7.34.0 (from 7.32.0) fixed it. I'm using jedi version 0.18.1. Related\nUpdate: this broke again for me when I upgraded my virtual environment to Python 3.8. I just upgraded my ipykernel package and now the notebook runs.\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "jupyter", "python", "visual_studio_code" ]
stackoverflow_0071509993_jupyter_python_visual_studio_code.txt
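A sketch tying the last two answers above together, for anyone hitting the same hang. The shadowing check is plain standard-library behaviour and the pip command is the usual Jupyter tooling, but which exact versions fix your setup is an assumption:
import inspect
# 1) A file named like a standard-library module (e.g. "inspect.py") can shadow
#    the real module and break the kernel; this shows which file Python imports:
print(inspect.__file__)  # should point into the Python install, not your project folder
# 2) From a terminal, upgrade the kernel packages inside the same environment
#    that VS Code has selected:
#    python -m pip install --upgrade ipykernel ipython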
Q: IndexError when using Enumerated Indexes in NumPy
I am trying to create a fifth-order FIR filter in Python described by the following difference equation (apologies dark mode users but LaTeX is not yet supported on SO):
def filter(x):

    h = np.array([-0.0147, 0.173, 0.342, 0.342, 0.173, -0.0147])
    y = np.zeros_like(x)

    buf_array = np.zeros_like(h)
    buf = 0.0

    for n in enumerate(x):
        for k in enumerate(h):
            buf = h[k]*x[n-k]
            buf_array[k] = buf

        y[n] = np.sum(buf_array)

    return y

When using the filter, the Traceback leads me to the following line:
     10 for n in enumerate(x):
     11     for k in enumerate(h):
---> 12         buf = h[k]*x[n-k]
     13         buf_array[k] = buf
     15 y[n] = np.sum(buf_array)

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
I have tried playing around with indexes and all, but have not managed to understand why this error is being caused. TIA
A: As someone suggested in the comments, this use case requires looping over indexes and elements on their own, as using for index in enumerate(ndarray) will result in index being a tuple rather than being an integer. Furthermore, using for index, item in enumerate(ndarray) is suggested, as shown below:
# Filter function
def filter(x):

    h = np.array([-0.0147, 0.173, 0.342, 0.342, 0.173, -0.0147])
    y = np.zeros_like(x)

    buf_array = np.zeros_like(h)
    buf = 0.0

    for n, n_i in enumerate(x):
        for k, k_i in enumerate(h):
            i = n-k
            buf = h[k]*x[i]
            buf_array[k] = buf

        y[n] = np.sum(buf_array)

    return y
IndexError when using Enumerated Indexes in NumPy
I am trying to create a fifth-order FIR filter in Python described by the following difference equation (apologies dark mode users but LaTeX is not yet supported on SO): def filter(x): h = np.array([-0.0147, 0.173, 0.342, 0.342, 0.173, -0.0147]) y = np.zeros_like(x) buf_array = np.zeros_like(h) buf = 0.0 for n in enumerate(x): for k in enumerate(h): buf = h[k]*x[n-k] buf_array[k] = buf y[n] = np.sum(buf_array) return y When using the filter, the Traceback leads me to the following line: 10 for n in enumerate(x): 11 for k in enumerate(h): ---> 12 buf = h[k]*x[n-k] 13 buf_array[k] = buf 15 y[n] = np.sum(buf_array) IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices I have tried playing around with indexes and all, but have not managed to understand why this error is being caused. TIA
[ "As someone suggested in the comments, this case use requires looping over indexes and elements on their own, as using for index in enumerate(ndarray) will result in index being a tuple rather than being an integer. Furthermore, using for index, item in enumerate(ndarray) is suggested, as shown below:\n# Filter function\ndef filter(x):\n\n h = np.array([-0.0147, 0.173, 0.342, 0.342, 0.173, -0.0147])\n y = np.zeros_like(x)\n\n buf_array = np.zeros_like(h)\n buf = 0.0\n\n for n, n_i in enumerate(x):\n for k, k_i in enumerate(h):\n i = n-k\n buf = h[k]*x[i]\n buf_array[k] = buf\n\n y[n] = np.sum(buf_array)\n\n return y\n\n" ]
[ 0 ]
[]
[]
[ "index_error", "numpy", "numpy_ndarray", "python", "signal_processing" ]
stackoverflow_0074499605_index_error_numpy_numpy_ndarray_python_signal_processing.txt
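A two-line demonstration of why the original loop fails: enumerate yields (index, element) tuples, so for n in enumerate(x) makes n a tuple, and n-k is then not a valid array index. This is plain NumPy/stdlib behaviour, shown as a minimal sketch:
import numpy as np

x = np.array([10.0, 20.0, 30.0])
for n in enumerate(x):
    print(n)      # (0, 10.0), (1, 20.0), (2, 30.0): n is a tuple, not an int

for n, value in enumerate(x):
    print(x[n])   # fine: unpacking makes n a plain integer index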
Q: Define a random variable I'm new with Python. I have to create a new rv U(t). Assuming that Z has a standard normal distribution c = 1.57, I have that: U(t) = 0 if Z(t) <= c U(t) = Φ(Z(t)) Z(t) > c Where Φ(·) is the cdf of the standard normal distribution N(0, 1). I start sampling random numbers from the normal distribution and I create an array of zeros: z = np.random.normal(0, 1, 100) u = np.zeros([1, 100]) Then, I write the following loop: for i in list(range(1, 101, 1)): if z[:i] < c: u[:i] = 0 else z[:i] > c: u[:i] = norm.cdf(z[:i], loc=0, scale=1) However, there is something wrong. I got this error: File "<ipython-input-837-1cbdd0641a75>", line 4 else z[:i] > c: ^ SyntaxError: invalid syntax Can someone please help me find out where the error is? Or suggest another way to deal with the problem? Thank you! A: Use the power of numpy! z = np.random.normal(0, 1, 100) u = np.zeros(z.shape) Since you initialized u to zeros, you don't need to do anything for the z <= c cases. For the others, you can use numpy's logical indexing to only set the elements that fulfill the condition # Get only the elements of z where z > c z_filt = z[z > c] # Calculate the norm.cdf at these values of z, norm_vals = norm.cdf(z_filt, loc=0, scale=1) # Assign those to the elements of u where z > c u[z > c] = norm_vals Of course, you can condense these three lines down to a single line: u[z > c] = norm.cdf(z[z > c], loc=0, scale=1) This approach will be significantly faster than iterating over the arrays and setting individual elements. If you're curious why your code didn't work and how to fix it, You don't need to list out a range to iterate on it. Just for i in range(100) is good enough z[:i] < c will give an array containing i boolean values. Putting an if condition on that will give you an error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(). I suspect you meant to do z[i] < c u[:i] = 0 will set all elements of u from the start to the ith index to zero. You probably only wanted u[i] = 0. This is really not even necessary since you already initialized u to be zeros else <condition> doesn't work. You want elif <condition>. Although you don't really need that here, since you could just do if z[i] > c and that's the only condition you need. for i in range(100): if z[i] > c: u[i] = norm.cdf(z[i], loc=0, scale=1) Comparing the runtimes of these two approaches: import timeit import numpy as np from scipy.stats import norm from matplotlib import pyplot as plt def f_numpy(size): z = np.random.normal(0, 1, size) u = np.zeros(z.shape) u[z > c] = norm.cdf(z[z > c], loc=0, scale=1) return u def f_loopy(size): z = np.random.normal(0, 1, size) u = np.zeros(z.shape) for i in range(size): if z[i] > c: u[i] = norm.cdf(z[i], loc=0, scale=1) return u c = 0 sizes = [10, 100, 1000, 10_000, 100_000] times = np.zeros((len(sizes), 2)) for i, s in enumerate(sizes): times[i, 0] = timeit.timeit('f_numpy(s)', globals=globals(), number=100) / 100 print(">") times[i, 1] = timeit.timeit('f_loopy(s)', globals=globals(), number=10) / 10 print(".") fig, ax = plt.subplots() ax.plot(sizes, times[:, 0], label="numpy") ax.plot(sizes, times[:, 1], label="loopy") ax.set_xscale('log') ax.set_yscale('log') ax.set_xlabel('Array size') ax.set_ylabel('Time per function call (s)') ax.legend() fig.tight_layout() we get the following plot, which shows that the numpy approach is orders of magnitude faster than the loopy approach.
Define a random variable
I'm new to Python. I have to create a new rv U(t). Assuming that Z has a standard normal distribution and c = 1.57, I have that:
U(t) = 0        if Z(t) <= c
U(t) = Φ(Z(t))  if Z(t) > c
Where Φ(·) is the cdf of the standard normal distribution N(0, 1).
I start sampling random numbers from the normal distribution and I create an array of zeros:
z = np.random.normal(0, 1, 100)
u = np.zeros([1, 100])

Then, I write the following loop:
for i in list(range(1, 101, 1)):
    if z[:i] < c:
        u[:i] = 0
    else z[:i] > c:
        u[:i] = norm.cdf(z[:i], loc=0, scale=1)

However, there is something wrong. I got this error:
File "<ipython-input-837-1cbdd0641a75>", line 4
    else z[:i] > c:
       ^
SyntaxError: invalid syntax

Can someone please help me find out where the error is? Or suggest another way to deal with the problem? Thank you!
[ "Use the power of numpy!\nz = np.random.normal(0, 1, 100)\nu = np.zeros(z.shape)\n\nSince you initialized u to zeros, you don't need to do anything for the z <= c cases. For the others, you can use numpy's logical indexing to only set the elements that fulfill the condition\n# Get only the elements of z where z > c\nz_filt = z[z > c]\n\n# Calculate the norm.cdf at these values of z, \nnorm_vals = norm.cdf(z_filt, loc=0, scale=1)\n\n# Assign those to the elements of u where z > c\nu[z > c] = norm_vals\n\nOf course, you can condense these three lines down to a single line:\nu[z > c] = norm.cdf(z[z > c], loc=0, scale=1)\n\nThis approach will be significantly faster than iterating over the arrays and setting individual elements.\n\nIf you're curious why your code didn't work and how to fix it,\n\nYou don't need to list out a range to iterate on it. Just for i in range(100) is good enough\nz[:i] < c will give an array containing i boolean values. Putting an if condition on that will give you an error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(). I suspect you meant to do z[i] < c\nu[:i] = 0 will set all elements of u from the start to the ith index to zero. You probably only wanted u[i] = 0. This is really not even necessary since you already initialized u to be zeros\nelse <condition> doesn't work. You want elif <condition>. Although you don't really need that here, since you could just do if z[i] > c and that's the only condition you need.\n\nfor i in range(100):\n if z[i] > c:\n u[i] = norm.cdf(z[i], loc=0, scale=1) \n\n\nComparing the runtimes of these two approaches:\nimport timeit\nimport numpy as np\nfrom scipy.stats import norm\nfrom matplotlib import pyplot as plt\n\ndef f_numpy(size):\n z = np.random.normal(0, 1, size)\n u = np.zeros(z.shape)\n u[z > c] = norm.cdf(z[z > c], loc=0, scale=1)\n return u\n\ndef f_loopy(size):\n z = np.random.normal(0, 1, size)\n u = np.zeros(z.shape)\n for i in range(size):\n if z[i] > c:\n u[i] = norm.cdf(z[i], loc=0, scale=1) \n return u\n\nc = 0\n\nsizes = [10, 100, 1000, 10_000, 100_000]\ntimes = np.zeros((len(sizes), 2))\nfor i, s in enumerate(sizes):\n times[i, 0] = timeit.timeit('f_numpy(s)', globals=globals(), number=100) / 100\n print(\">\")\n times[i, 1] = timeit.timeit('f_loopy(s)', globals=globals(), number=10) / 10\n print(\".\")\n \nfig, ax = plt.subplots()\nax.plot(sizes, times[:, 0], label=\"numpy\")\nax.plot(sizes, times[:, 1], label=\"loopy\")\nax.set_xscale('log')\nax.set_yscale('log')\nax.set_xlabel('Array size')\nax.set_ylabel('Time per function call (s)')\nax.legend()\nfig.tight_layout()\n\nwe get the following plot, which shows that the numpy approach is orders of magnitude faster than the loopy approach.\n\n" ]
[ 1 ]
[]
[]
[ "python", "random", "variables" ]
stackoverflow_0074523253_python_random_variables.txt
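If you only need the piecewise definition and not the timing comparison, the same vectorised logic from the answer fits in a single np.where call. A minimal sketch under the question's assumptions (c = 1.57, 100 samples):
import numpy as np
from scipy.stats import norm

c = 1.57
z = np.random.normal(0, 1, 100)
u = np.where(z > c, norm.cdf(z, loc=0, scale=1), 0.0)  # U = Phi(Z) where Z > c, else 0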
Q: I want my turtle code to start when the user presses the Enter key in Python
I want my Pong game, which is made with the turtle module, to start when the user presses the Enter key (Python). What I did is just bind 's' to start, but I cannot get the Enter key to work; the user should type 'enter' as a string word, not press the key.
A: As @droebi mentioned, I would advise you to slightly improve the question, as there are some mistakes that add slight difficulty to read your question.
But from what I inferred, you want the user to not press the enter key, but actually type the word enter in the console to start the program.
This problem can be solved in two ways:
input("Type Enter to start:\n")
# your code
You can do this if you don't really care what the user is typing.
import sys
start = input("Type Enter to start:\n").lower()
if start == "enter":
    pass
else:
    sys.exit("You did not type Enter. Please type 'Enter' to start.")
# your code
You can do this if you want to be particular that the user SHOULD type enter.
In case you did want to start the program when the enter key is pressed, however, then you can take a look at the answers for this question.
I want my turtle code to start when the user presses the Enter key in Python
I want my Pong game, which is made with the turtle module, to start when the user presses the Enter key (Python). What I did is just bind 's' to start, but I cannot get the Enter key to work; the user should type 'enter' as a string word, not press the key.
[ "As @droebi mentioned, I would advise you to slightly improve the question, as there are some mistakes that add slight difficulty to read your question.\nBut from what I inferred, you want the user to not press the enter key, but actually type the word enter in the console to start the program.\nThis problem can be solved in two ways:\ninput(\"Type Enter to start:\\n\")\n# your code\n\nYou can do this if you don't really care what the user is typing.\nimport sys\nstart = input(\"Type Enter to start:\\n\").lower()\nif start == \"enter\":\n pass\nelse:\n sys.exit(\"You did not type Enter. Please type 'Enter' to start.\")\n# your code\n\nYou can do this if you want to be particular that the user SHOULD type enter.\nIn case you did want to start the program when the enter key is pressed, however, then you can take a look at the answers for this question.\n" ]
[ 0 ]
[]
[]
[ "keyboard", "pong", "python", "python_3.x", "turtle_graphics" ]
stackoverflow_0074506303_keyboard_pong_python_python_3.x_turtle_graphics.txt
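If the goal really is to react to the physical Enter key rather than typed text, turtle's screen events cover it. A minimal sketch; start_game is a placeholder for the actual Pong start logic:
import turtle

def start_game():
    print("game started")  # replace with your Pong start logic

screen = turtle.Screen()
screen.listen()                          # give the window keyboard focus
screen.onkeypress(start_game, "Return")  # "Return" is the Tk key name for Enter
turtle.mainloop()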
Q: For all values ​in a row, if a certain word is duplicated more than once, we want to remove it from the list I have the following dataframe en ko Tuberculosis of heart 심장의 결핵 Tuberculosis of myocardium 심근의 결핵 Tuberculosis of endocardium 심내막의 결핵 Tuberculosis of oesophagus 식도의 결핵 Zoster keratoconjunctivitis 대상포진 각막결막염 Zoster blepharitis 대상포진 안검염 Zoster iritis 대상포진 홍채염 I want a result like this. en ko heart 심장의 myocardium 심근의 endocardium 심내막의 oesophagus 식도의 keratoconjunctivitis 각막결막염 blepharitis 안검염 iritis 홍채염 This is just an example, I have about 50,000 word pairs. Been doing this for 1 week now. A: You can use: import re # identify duplicates s = df.stack().str.split().explode() dups = s[s.duplicated()].groupby(level=1).unique().to_dict() # {'en': array(['Tuberculosis', 'of', 'Zoster'], dtype=object), # 'ko': array(['결핵', '대상포진'], dtype=object)} # remove them df.apply(lambda s: s.str.replace('|'.join(dups[s.name]), '', regex=True)) Output: en ko 0 heart 심장의 1 myocardium 심근의 2 endocardium 심내막의 3 oesophagus 식도의 4 keratoconjunctivitis 각막결막염 5 blepharitis 안검염 6 iritis 홍채염 A: I don't know how extensible this will be to a larger dataset, given tat I don't know the structure of korean re:whitespace between entities, but it works on the given data. We split the data into two columns, as the preposition "of" doesn't appear to exist in the 'ko' column and that impacts following steps. Then for each column, we split on whitespace to make list columns, we explode those to individual rows, then we get the value counts to determine which elements appear more than once ko=df['ko'].str.split().explode().value_counts() en=df['en'].str.split().explode().value_counts() ko 결핵 4 대상포진 3 심장의 1 심근의 1 심내막의 1 식도의 1 각막결막염 1 안검염 1 홍채염 1 Name: ko, dtype: int64 After that, we use boolean indexing to select only those elements that appear only once for each series ko_col=ko[ko==1] en_col=en[en==1] en_col heart 1 myocardium 1 endocardium 1 oesophagus 1 keratoconjunctivitis 1 blepharitis 1 iritis 1 Name: en, dtype: int64 We rely on the fact that order should be preserved in the above steps, but worth spot checking in your larger dataset, and we recombine to create your output dataframe new_df=pd.DataFrame({'en':en_col.index,'ko':ko_col.index}) new_df en ko 0 heart 심장의 1 myocardium 심근의 2 endocardium 심내막의 3 oesophagus 식도의 4 keratoconjunctivitis 각막결막염 5 blepharitis 안검염 6 iritis 홍채염
For all values ​in a row, if a certain word is duplicated more than once, we want to remove it from the list
I have the following dataframe en ko Tuberculosis of heart 심장의 결핵 Tuberculosis of myocardium 심근의 결핵 Tuberculosis of endocardium 심내막의 결핵 Tuberculosis of oesophagus 식도의 결핵 Zoster keratoconjunctivitis 대상포진 각막결막염 Zoster blepharitis 대상포진 안검염 Zoster iritis 대상포진 홍채염 I want a result like this. en ko heart 심장의 myocardium 심근의 endocardium 심내막의 oesophagus 식도의 keratoconjunctivitis 각막결막염 blepharitis 안검염 iritis 홍채염 This is just an example, I have about 50,000 word pairs. Been doing this for 1 week now.
[ "You can use:\nimport re\n\n# identify duplicates\ns = df.stack().str.split().explode()\ndups = s[s.duplicated()].groupby(level=1).unique().to_dict()\n# {'en': array(['Tuberculosis', 'of', 'Zoster'], dtype=object),\n# 'ko': array(['결핵', '대상포진'], dtype=object)}\n\n# remove them\ndf.apply(lambda s: s.str.replace('|'.join(dups[s.name]), '', regex=True))\n\nOutput:\n en ko\n0 heart 심장의\n1 myocardium 심근의\n2 endocardium 심내막의\n3 oesophagus 식도의\n4 keratoconjunctivitis 각막결막염\n5 blepharitis 안검염\n6 iritis 홍채염\n\n", "I don't know how extensible this will be to a larger dataset, given tat I don't know the structure of korean re:whitespace between entities, but it works on the given data.\nWe split the data into two columns, as the preposition \"of\" doesn't appear to exist in the 'ko' column and that impacts following steps. Then for each column, we split on whitespace to make list columns, we explode those to individual rows, then we get the value counts to determine which elements appear more than once\nko=df['ko'].str.split().explode().value_counts()\nen=df['en'].str.split().explode().value_counts()\n\nko\n결핵 4\n대상포진 3\n심장의 1\n심근의 1\n심내막의 1\n식도의 1\n각막결막염 1\n안검염 1\n홍채염 1\nName: ko, dtype: int64\n\nAfter that, we use boolean indexing to select only those elements that appear only once for each series\nko_col=ko[ko==1]\nen_col=en[en==1]\n\nen_col\nheart 1\nmyocardium 1\nendocardium 1\noesophagus 1\nkeratoconjunctivitis 1\nblepharitis 1\niritis 1\nName: en, dtype: int64\n\nWe rely on the fact that order should be preserved in the above steps, but worth spot checking in your larger dataset, and we recombine to create your output dataframe\nnew_df=pd.DataFrame({'en':en_col.index,'ko':ko_col.index})\nnew_df\n en ko\n0 heart 심장의\n1 myocardium 심근의\n2 endocardium 심내막의\n3 oesophagus 식도의\n4 keratoconjunctivitis 각막결막염\n5 blepharitis 안검염\n6 iritis 홍채염\n\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074522783_dataframe_pandas_python.txt
Q: How to do *custom* action when receiving a warning in Python?
I have a script that iterates through thousands of csv's and reads them into pandas, then does a bunch of other stuff with it down the line. Every once in a while, I get this message:
sys:1: DtypeWarning: Columns (10,11,23) have mixed types.Specify dtype option on import or set low_memory=False.
I tried the try/except statement, however it isn't caught since it's a warning and not an exception. Is there a way to do something like:
try:
    df = pd.read_csv(file_path)
except pd.errors.DtypeWarning:
    df = pd.read_csv(file_path, low_memory=False)

I only want to use low_memory=False if I get the warning on that specific file, and not the other thousands of files
I also can't just set all the column dtypes because many of the csv files have different columns/data/etc.
I don't want to set warnings.simplefilter('error', pd.errors.DtypeWarning) because it seems like overkill. I don't want some other DtypeWarning somewhere to keep it from running if I didn't catch it.
A: warnings.simplefilter exists exactly for this purpose.
Inside the warning handler you can set conditions to make sure you don't catch unwanted warnings.
If the warning happens only "once in a while" then you probably won't be wasting too much runtime on the warning handler code.
How to do *custom* action when receiving a warning in Python?
I have a script that iterates through thousands of csv's and reads them into pandas, then does a bunch of other stuff with it down the line. Every once in a while, I get this message: sys:1: DtypeWarning: Columns (10,11,23) have mixed types.Specify dtype option on import or set low_memory=False. I tried the try/except statement, however it isn't caught since it's a warning and not an exception. Is there a way to do something like: try: df = pd.read_csv(file_path) except pd.errors.DtypeWarning: df = pd.read_csv(file_path, low_memory=False) I only want to use low_memory=False if I get the warning on that specific file, and not the other thousands of files I also can't just set all the column dtypes because many of the csv files have different columns/data/etc. I don't want to set warnings.simplefilter('error', pd.errors.DtypeWarning) because it seems like overkill. I don't want some other DtypeWarning somewhere to keep it from running if I didn't catch it.
[ "warnings.simplefilter exists exactly for this purpose.\nInside the warning handler you can set conditions to make sure you don't catch unwanted warnings.\nIf the warning happens only \"once in a while\" than you probably want be wasting too much runtime on the warning handler code.\n" ]
[ 0 ]
[]
[]
[ "dtype", "pandas", "python", "warnings" ]
stackoverflow_0074523098_dtype_pandas_python_warnings.txt
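A minimal sketch of the retry pattern the answer above describes, scoped so only pandas' DtypeWarning is escalated and only inside this block (warnings.catch_warnings restores the global filters afterwards):
import warnings
import pandas as pd

def read_csv_with_fallback(file_path):
    with warnings.catch_warnings():
        # escalate just this warning class to an exception, just here
        warnings.simplefilter("error", pd.errors.DtypeWarning)
        try:
            return pd.read_csv(file_path)
        except pd.errors.DtypeWarning:
            return pd.read_csv(file_path, low_memory=False)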
Q: Python-Playwright: Is there a way to introspect and/or run commands interactively?
I'm trying to move from Selenium to Playwright for some webscraping tasks. Perhaps I got stuck in this bad habit of having Selenium running the browser on the side while testing the commands and selectors on the run. Is there any way to achieve something similar using Playwright? What I achieved so far was running playwright on the console, something similar to this:
from playwright.sync_api import sync_playwright

with sync_playwright() as pw:
    browser = pw.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto('https://google.com')
    page.pause()

I get a Browser window together with a Playwright Inspector - from there, none of my commands or variables will execute.
A: I'd use the technique from can i run playwright outside of 'with'? and How to start playwright outside 'with' without context managers on the interactive repl:
PS C:\Users\foo\Desktop> py
Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from playwright.sync_api import sync_playwright
>>> p = sync_playwright().start()
>>> browser = p.chromium.launch(headless=False)
>>> page = browser.new_page()
>>> page.goto("https://www.example.com")
<Response url='https://www.example.com/' request=<Request url='https://www.example.com/' method='GET'>>
>>> page.title()
'Example Domain'
>>> page.close()
>>> browser.close()
>>> p.stop()

If you use page.pause(), try running playwright.resume() in the browser dev tools console to resume the Python repl.
If you really need to do this from a script rather than the Python repl, you could use the code interpreter or roll your own, but I'd try to avoid this if possible.
Python-Playwright: Is there a way to introspect and/or run commands interactively?
I'm trying to move from Selenium to Playwright for some webscraping tasks. Perhaps I got stuck in this bad habit of having Selenium running the browser on the side while testing the commands and selectors on the run. Is there any way to achieve something similar using Playwright? What I achieved so far was running playwright on the console, something similar to this: from playwright.sync_api import sync_playwright with sync_playwright() as pw: browser = pw.chromium.launch(headless=False) page = browser.new_page() page.goto('https://google.com') page.pause() I get a Browser window together with a Playwright Inspector - from there, none of my commands or variables will execute.
[ "I'd use the technique from can i run playwright outside of 'with'? and How to start playwright outside 'with' without context managers on the interactive repl:\nPS C:\\Users\\foo\\Desktop> py\nPython 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from playwright.sync_api import sync_playwright\n>>> p = sync_playwright().start()\n>>> browser = p.chromium.launch(headless=False)\n>>> page = browser.new_page()\n>>> page.goto(\"https://www.example.com\")\n<Response url='https://www.example.com/' request=<Request url='https://www.example.com/' method='GET'>>\n>>> page.title()\n'Example Domain'\n>>> page.close()\n>>> browser.close()\n>>> p.stop()\n\nIf you use page.pause(), try running playwright.resume() in the browser dev tools console to resume the Python repl.\nIf you really need to do this from a script rather than the Python repl, you could use the code interpreter or roll your own, but I'd try to avoid this if possible.\n" ]
[ 2 ]
[]
[]
[ "playwright_python", "python" ]
stackoverflow_0074517390_playwright_python_python.txt
Q: Problem with If- Else Conditiones, How can I resolve it? The problem is that I want that the code shows the graph if the Value of "Recordinaciones" is > 1, and shows "No hay Recorinaciones Dobles" if <1 but I have some strange issue. Hope someone can help me! The problem is: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Here it's the code: import pandas as pd doc = input('Ingresa el nombre del archivo: ') print(f'Ingresaste {doc}') df=pd.read_excel(doc+'.xlsx') df['Recordinaciones'] = df.apply(lambda _: '', axis=1) df['Cantidad'] = df.apply(lambda _: '', axis=1) rcs=df[['Cliente','# Externo','Recordinaciones']].groupby(['Cliente','# Externo']).count().reset_index().sort_values(['Recordinaciones'],ascending=False) Recoordinaciones = rcs['Recordinaciones'] if Recoordinaciones > 1: # Pregunto si x es mayor a 1 print(rcs[(rcs['Recordinaciones'] > 1)]) else: print( "No hay Recorinaciones Dobles") # cumple, ejecuto esto Error Message Ingresa el nombre del archivo: Test Feb Ingresaste Test Feb ValueError Traceback (most recent call last) in 11 12 Recoordinaciones = rcs['Recordinaciones'] ---> 13 if Recoordinaciones > 1: # Pregunto si x es mayor a 1 14 print(rcs[(rcs['Recordinaciones'] > 1)]) 15 else: /usr/local/lib/python3.7/dist-packages/pandas/core/generic.py in nonzero(self) 1536 def nonzero(self): 1537 raise ValueError( -> 1538 f"The truth value of a {type(self).name} is ambiguous. " 1539 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." 1540 ) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). A: try this: Recoordinaciones = rcs.loc[rcs['Recordinaciones'] > 1]['Recordinaciones'].tolist() if len(Recoordinaciones) == 0: print('no values >1') else: for r in Recoordinaciones: print(r) basically the loc function receives a condition with a boolean outcome and locates the rows where this condition is met.
Problem with If- Else Conditiones, How can I resolve it?
The problem is that I want that the code shows the graph if the Value of "Recordinaciones" is > 1, and shows "No hay Recorinaciones Dobles" if <1 but I have some strange issue. Hope someone can help me! The problem is: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Here it's the code: import pandas as pd doc = input('Ingresa el nombre del archivo: ') print(f'Ingresaste {doc}') df=pd.read_excel(doc+'.xlsx') df['Recordinaciones'] = df.apply(lambda _: '', axis=1) df['Cantidad'] = df.apply(lambda _: '', axis=1) rcs=df[['Cliente','# Externo','Recordinaciones']].groupby(['Cliente','# Externo']).count().reset_index().sort_values(['Recordinaciones'],ascending=False) Recoordinaciones = rcs['Recordinaciones'] if Recoordinaciones > 1: # Pregunto si x es mayor a 1 print(rcs[(rcs['Recordinaciones'] > 1)]) else: print( "No hay Recorinaciones Dobles") # cumple, ejecuto esto Error Message Ingresa el nombre del archivo: Test Feb Ingresaste Test Feb ValueError Traceback (most recent call last) in 11 12 Recoordinaciones = rcs['Recordinaciones'] ---> 13 if Recoordinaciones > 1: # Pregunto si x es mayor a 1 14 print(rcs[(rcs['Recordinaciones'] > 1)]) 15 else: /usr/local/lib/python3.7/dist-packages/pandas/core/generic.py in nonzero(self) 1536 def nonzero(self): 1537 raise ValueError( -> 1538 f"The truth value of a {type(self).name} is ambiguous. " 1539 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." 1540 ) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
[ "try this:\nRecoordinaciones = rcs.loc[rcs['Recordinaciones'] > 1]['Recordinaciones'].tolist()\n\nif len(Recoordinaciones) == 0:\n print('no values >1')\nelse:\n for r in Recoordinaciones: \n print(r)\n\nbasically the loc function receives a condition with a boolean outcome and locates the rows where this condition is met.\n" ]
[ 0 ]
[]
[]
[ "if_statement", "pandas", "python" ]
stackoverflow_0074523120_if_statement_pandas_python.txt
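For the if/else intent in the question above, the idiomatic pandas fix is to collapse the boolean Series to a single truth value with .any() before testing it. A short sketch assuming rcs is built as in the question:
if (rcs['Recordinaciones'] > 1).any():   # True if any row exceeds 1
    print(rcs[rcs['Recordinaciones'] > 1])
else:
    print("No hay Recorinaciones Dobles")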
Q: Convert a list of strings to a list of [0.0 or 1.0] I have couple of lists and one of them looks like this : ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27', ...] with 100 shapes in the list. If the shape number is even, then convert it to 0.0, which is a float number. If the shape number is odd, then convert it to 1.0, which is also a float number. The result list should be like [1.0, 0.0, 1.0, 0.0, 1.0, 1.0, ...]. How could I convert the list easily? A: input_list = ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27'] def converter(s: str) -> float: shape_length = len('SHAPE') substr = s[shape_length:] try: shape_integer = int(substr) except ValueError: raise ValueError(f'failed to extract integer value from string {s}') if shape_integer % 2 == 0: # it's even return 0.0 else: return 1.0 output_list = [converter(x) for x in input_list] print(output_list) [1.0, 0.0, 1.0, 0.0, 1.0, 1.0] The function converter trims the number out of the SHAPE12 string, and attempts to convert it into an integer. Then it runs a modulus operation to determine if it's odd or even, returning the appropriate float. The list comp creates a new list by running each value of the input_list through this function. A: If you have a list, you can use a list comprehension and the modulo (%) operator: l = ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27'] out = [int(s.removeprefix('SHAPE'))%2 for s in l] NB. removeprefix requires python 3.9+, for earlier versions: out = [int(s[5:])%2 for s in l] Output: [1, 0, 1, 0, 1, 1] Variant with pandas: import pandas as pd out = pd.to_numeric(pd.Series(l).str.extract(r'(\d+)', expand=False) ).mod(2).tolist() A: This code should do it: array = ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27'] float_array = [] for item in array: if int(item[6:8]) % 2 == 0: float_array.append(0.0) else: float_array.append(1.0) print(float_array)
Convert a list of strings to a list of [0.0 or 1.0]
I have couple of lists and one of them looks like this : ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27', ...] with 100 shapes in the list. If the shape number is even, then convert it to 0.0, which is a float number. If the shape number is odd, then convert it to 1.0, which is also a float number. The result list should be like [1.0, 0.0, 1.0, 0.0, 1.0, 1.0, ...]. How could I convert the list easily?
[ "input_list = ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27']\n\n\ndef converter(s: str) -> float:\n shape_length = len('SHAPE')\n substr = s[shape_length:]\n try:\n shape_integer = int(substr)\n except ValueError:\n raise ValueError(f'failed to extract integer value from string {s}')\n if shape_integer % 2 == 0:\n # it's even\n return 0.0\n else:\n return 1.0\n\n\noutput_list = [converter(x) for x in input_list]\nprint(output_list)\n[1.0, 0.0, 1.0, 0.0, 1.0, 1.0]\n\nThe function converter trims the number out of the SHAPE12 string, and attempts to convert it into an integer. Then it runs a modulus operation to determine if it's odd or even, returning the appropriate float.\nThe list comp creates a new list by running each value of the input_list through this function.\n", "If you have a list, you can use a list comprehension and the modulo (%) operator:\nl = ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27']\nout = [int(s.removeprefix('SHAPE'))%2 for s in l]\n\nNB. removeprefix requires python 3.9+, for earlier versions:\nout = [int(s[5:])%2 for s in l]\n\nOutput:\n[1, 0, 1, 0, 1, 1]\n\nVariant with pandas:\nimport pandas as pd\n\nout = pd.to_numeric(pd.Series(l).str.extract(r'(\\d+)', expand=False)\n ).mod(2).tolist()\n\n", "This code should do it:\narray = ['SHAPE69', 'SHAPE48', 'SHAPE15', 'SHAPE28', 'SHAPE33', 'SHAPE27']\nfloat_array = []\nfor item in array:\n if int(item[6:8]) % 2 == 0:\n float_array.append(0.0)\n else:\n float_array.append(1.0)\nprint(float_array)\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074523217_list_python.txt
Q: Collect multiple values out of JSON file via API in python, where some values can be none / [] I want to extract the values of scientific publications from the openalex API. However, since this API does not have complete values for all publications, the resulting JSON file is not always complete. If the file is complete, my code will run without issues. If the API does not have all information available, it can happen that the following result is found but cannot get interpreted: "institutions":[] instead of "institutions":[{"id":"https://openalex.org/I2057...}{...}]. As a result, I always get an "IndexError: list index out of range". After an extensive search, I have already tried to solve the problem with the help of try / except or if-queries (if required, I can also provide them). Unfortunately, I did not succeed. My goal is that in the charlist, in places where no information is available ([]), None or Null is entered. The goal is to program the code as performant as possible since I will have a high six-digit number of requests. This is, of course, already cleared with the API operator. My code listed below already works for complete JSON files (upper magid_list) but not for incomplete entries (2301544176) as in the lower, not commented-out magid_list. import requests import json baseurl = 'https://api.openalex.org/works?filter=ids.mag:' #**upper magid_listworks without problems** #magid_list = [2301543590, 2301543835] #**error occur** #**see page "https://api.openalex.org/works?filter=ids.mag:2301544176" no information for institution given** magid_list = [2301543590, 2301543835, 2301544176] def main_request(baseurl, endpoint): r = requests.get(baseurl + endpoint) return r.json() def parse_json(response): charlist = [] pupdate = data['results'][0]['publication_date'] display_name = data['results'][0]['display_name'] for item in response['results'][0]['authorships']: char = { 'magid': str(x), 'display_name': display_name, 'pupdate': pupdate, 'author': item['author']['display_name'], 'institution_id': item['institutions'][0]['id'] } charlist.append(char) return charlist finallist = [] for x in magid_list: print(x) data = main_request(baseurl, str(x)) finallist.extend(parse_json(main_request(baseurl, str(x)))) df = pd.DataFrame(finallist) print(df.head(), df.tail()) If I can provide further information or clarification, let me know. 
Attached you can find the full IndexError Traceback: --------------------------------------------------------------------------- IndexError Traceback (most recent call last) f:\AlexPE\__programming\Masterarbeit.ipynb Cell 153 in <cell line: 37>() 37 for x in list: 38 print(x) ---> 39 finallist.extend(parse_json(main_request(baseurl, str(x)))) 41 df = pd.DataFrame(finallist) 43 #data = main_request(baseurl, endpoint) 44 #print(get_pages(data)) 45 #print(parse_json(data)) f:\AlexPE\__programming\Masterarbeit.ipynb Cell 153 in parse_json(response) 20 display_name = data['results'][0]['display_name'] 23 for item in response['results'][0]['authorships']: 24 char = { 25 'magid': str(x), 26 'display_name': display_name, 27 'pupdate': pupdate, 28 'author': item['author']['display_name'], ---> 29 'institution_id': item['institutions'][0]['id'] 30 } 32 charlist.append(char) 33 return charlist IndexError: list index out of range A: Check for the existence of values before attempting to access them: def parse_json(response): charlist = [] pupdate = display_name = None if data['results']: pupdate = data['results'][0].get('publication_date') display_name = data['results'][0].get('display_name') for item in response['results'][0]['authorships']: institution_id = None if item['institutions']: institution_id = item['institutions'][0].get('id') char = { 'magid': str(x), 'display_name': display_name, 'pupdate': pupdate, 'author': item['author']['display_name'], 'institution_id': institution_id } charlist.append(char) return charlist
Collect multiple values out of JSON file via API in python, where some values can be none / []
I want to extract the values of scientific publications from the openalex API. However, since this API does not have complete values for all publications, the resulting JSON file is not always complete. If the file is complete, my code will run without issues. If the API does not have all information available, it can happen that the following result is found but cannot get interpreted: "institutions":[] instead of "institutions":[{"id":"https://openalex.org/I2057...}{...}]. As a result, I always get an "IndexError: list index out of range". After an extensive search, I have already tried to solve the problem with the help of try / except or if-queries (if required, I can also provide them). Unfortunately, I did not succeed. My goal is that in the charlist, in places where no information is available ([]), None or Null is entered. The goal is to program the code as performant as possible since I will have a high six-digit number of requests. This is, of course, already cleared with the API operator. My code listed below already works for complete JSON files (upper magid_list) but not for incomplete entries (2301544176) as in the lower, not commented-out magid_list. import requests import json baseurl = 'https://api.openalex.org/works?filter=ids.mag:' #**upper magid_listworks without problems** #magid_list = [2301543590, 2301543835] #**error occur** #**see page "https://api.openalex.org/works?filter=ids.mag:2301544176" no information for institution given** magid_list = [2301543590, 2301543835, 2301544176] def main_request(baseurl, endpoint): r = requests.get(baseurl + endpoint) return r.json() def parse_json(response): charlist = [] pupdate = data['results'][0]['publication_date'] display_name = data['results'][0]['display_name'] for item in response['results'][0]['authorships']: char = { 'magid': str(x), 'display_name': display_name, 'pupdate': pupdate, 'author': item['author']['display_name'], 'institution_id': item['institutions'][0]['id'] } charlist.append(char) return charlist finallist = [] for x in magid_list: print(x) data = main_request(baseurl, str(x)) finallist.extend(parse_json(main_request(baseurl, str(x)))) df = pd.DataFrame(finallist) print(df.head(), df.tail()) If I can provide further information or clarification, let me know. Attached you can find the full IndexError Traceback: --------------------------------------------------------------------------- IndexError Traceback (most recent call last) f:\AlexPE\__programming\Masterarbeit.ipynb Cell 153 in <cell line: 37>() 37 for x in list: 38 print(x) ---> 39 finallist.extend(parse_json(main_request(baseurl, str(x)))) 41 df = pd.DataFrame(finallist) 43 #data = main_request(baseurl, endpoint) 44 #print(get_pages(data)) 45 #print(parse_json(data)) f:\AlexPE\__programming\Masterarbeit.ipynb Cell 153 in parse_json(response) 20 display_name = data['results'][0]['display_name'] 23 for item in response['results'][0]['authorships']: 24 char = { 25 'magid': str(x), 26 'display_name': display_name, 27 'pupdate': pupdate, 28 'author': item['author']['display_name'], ---> 29 'institution_id': item['institutions'][0]['id'] 30 } 32 charlist.append(char) 33 return charlist IndexError: list index out of range
[ "Check for the existence of values before attempting to access them:\ndef parse_json(response):\n charlist = []\n pupdate = display_name = None\n if data['results']:\n pupdate = data['results'][0].get('publication_date')\n display_name = data['results'][0].get('display_name')\n for item in response['results'][0]['authorships']:\n institution_id = None\n if item['institutions']:\n institution_id = item['institutions'][0].get('id')\n char = {\n 'magid': str(x),\n 'display_name': display_name,\n 'pupdate': pupdate,\n 'author': item['author']['display_name'],\n 'institution_id': institution_id\n }\n \n charlist.append(char)\n return charlist\n\n" ]
[ 0 ]
[]
[]
[ "api", "json", "python", "python_jsons", "python_requests" ]
stackoverflow_0074522684_api_json_python_python_jsons_python_requests.txt
Q: Color problem with Log transform to brighten dark area. Why and how to fix?
So I try to enhance this image by applying log transform on it
original image
The area where there is bright white color turns into blue on the enhanced image.
enhanced image
path = '...JPG'
image = cv2.imread(path)
c = 255 / np.log(1 + np.max(image)) 
log_image = c * (np.log(image + 1))

# Specify the data type so that
# float value will be converted to int
log_image = np.array(log_image, dtype = np.uint8)
cv2.imwrite('img.JPG', log_image)

There's also a warning: RuntimeWarning: divide by zero encountered in log
I tried using other types of log (e.g. log2, log10...) but it still shows the same result. I tried changing dtype = np.uint32 but it causes an error.
A: Same cause for the two problems.
Namely this line:
log_image = c * (np.log(image + 1))

image+1 is an array of np.uint8, as image is. But if there are 255 components in image, then image+1 overflows: 256 is turned into 0. Which leads to np.log(image+1) being log(0) at these points. Hence the error.
And hence the fact that the brightest parts have strange colors, since they are the ones containing 255.
So, since log will have to work with floats anyway, just convert to float yourself before calling log:
path = '...JPG'
image = cv2.imread(path)
c = 255 / np.log(1 + np.max(image)) 
log_image = c * (np.log(image.astype(float) + 1))

# Specify the data type so that
# float value will be converted to int
log_image = np.array(log_image, dtype = np.uint8)
cv2.imwrite('img.JPG', log_image)
Color problem with Log transform to brighten dark area. Why and how to fix?
So I try to enhance this image by applying log transform on it original image The area where there are bright white color turns into color blue on the enhanced image. enhanced image path = '...JPG' image = cv2.imread(path) c = 255 / np.log(1 + np.max(image)) log_image = c * (np.log(image + 1)) # Specify the data type so that # float value will be converted to int log_image = np.array(log_image, dtype = np.uint8) cv2.imwrite('img.JPG', log_image) There's also a warning: RuntimeWarning: divide by zero encountered in log I tried using other type of log (e.g log2, log10...) but it still show the same result. I tried changing dtype = np.uint32 but it causes error.
[ "Same cause for the two problems\nNamely this line\nlog_image = c * (np.log(image + 1))\n\nimage+1 is an array of np.uint8, as image is. But if there are 255 components in image, then image+1 overflows. 256 are turned into 0. Which lead to np.log(imag+1) to be log(0) at this points. Hence the error.\nAnd hence the fact that brightest parts have strange colors, since they are the ones containing 255\nSo, since log will have to work with floats anyway, just convert to float yourself before calling log\npath = '...JPG'\nimage = cv2.imread(path)\nc = 255 / np.log(1 + np.max(image)) \nlog_image = c * (np.log(image.astype(float) + 1))\n\n# Specify the data type so that\n# float value will be converted to int\nlog_image = np.array(log_image, dtype = np.uint8)\ncv2.imwrite('img.JPG', log_image)\n\n\n" ]
[ 3 ]
[]
[]
[ "image_enhancement", "image_processing", "numpy", "opencv", "python" ]
stackoverflow_0074523327_image_enhancement_image_processing_numpy_opencv_python.txt
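An equivalent variant of the fix above, using np.log1p, which computes log(1+x) directly so the uint8 "+ 1" overflow never happens. The path handling is assumed to match the question:
import cv2
import numpy as np

image = cv2.imread(path)            # path as in the question
img_f = image.astype(np.float64)    # work in floats before taking the log
c = 255 / np.log1p(img_f.max())
log_image = (c * np.log1p(img_f)).astype(np.uint8)
cv2.imwrite('img.JPG', log_image)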
Q: How can we make many different kv files and import all the required widgets in one main kv file, so as to modularize the code
As you can see in the above images, I am making two different screens in .kv files. Suppose I have more screens and I don't want to make a mess in the same .kv file: is there a way we can write these aboutScreen and HomeScreen in different .kv files and then import them into one .kv file for code modularization? I tried to directly import the files but it seems it doesn't work like that.
A: Here is an example:
kivy.uix.screenmanager.ScreenManagerException: ScreenManager accepts only Screen widget error
You can also split python code into separate py files as well, like screen1.py and screen2.py. I'll give you an example if you are interested.
A: For my implementation of this I created a subfolder for my additional kv files in the working directory, so I account for the subfolder in the include.
In your main.kv use:
#: include kv/screen1.kv
#: include kv/screen2.kv

WindowManager:
    Screen1:
    Screen2:
How can we make many different kv files and import all the required widgets in one main kv file, so as to modularize the code
As you can see in the above images, I am making two different screens in .kv files. Suppose I have more screens and I don't want to make a mess in the same .kv file: is there a way we can write these aboutScreen and HomeScreen in different .kv files and then import them into one .kv file for code modularization? I tried to directly import the files but it seems it doesn't work like that.
[ "Here is an example:\nkivy.uix.screenmanager.ScreenManagerException: ScreenManager accepts only Screen widget error\nYou can also split python code into separate py files as well, like screen1.kv and screen2.py. I'll give you and example if you are interested in.\n", "For my implementation of this I created a subfolder for my additional kv files in the working directory, so I account for the subfolder in the include.\nIn your main.kv use:\n#: include kv/screen1.kv\n#: include kv/screen2.kv\n\nWindowManager:\n Screen1:\n Screen2:\n\n" ]
[ 0, 0 ]
[]
[]
[ "kivy", "kivy_language", "python", "python_development_mode", "user_interface" ]
stackoverflow_0074312560_kivy_kivy_language_python_python_development_mode_user_interface.txt
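If you prefer wiring the files together in Python instead of kv include directives, Kivy's Builder offers the same modularization. A minimal sketch; the file paths mirror the subfolder layout from the second answer above and are an assumption:
from kivy.lang import Builder

# load each screen's kv before the root rule is applied
Builder.load_file("kv/screen1.kv")
Builder.load_file("kv/screen2.kv")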
Q: how to segregate the column wrt certain conditions in pyspark dataframe i have a dataframe df as shown below: VehNum Control_circuit control_circuit_status partnumbers errors Flag 4234456 DOC ok A567UR Software Issue 0 4234456 DOC not_okay A568UR Software Issue 1 4234456 DOC not_okay A569UR Hardware issue 2 4234457 ACR ok A234TY Hardware issue 0 4234457 ACR ok A235TY Hardware issue 0 4234457 ACR ok A234TY Hardware issue 0 4234487 QWR ok A276TY Hardware issue 0 4234487 QWR not_okay A872UR Hardware issue 1 3423448 QWR not_okay A872UR Hardware issue 1 i want to add a new column called "Control_Flag" and perform below operations: for each VehNum ,Control_circuit if it has flag value only 0 then Control_Flag column will hold value 0 else if it has 0 ,1 or 2 then Control_Flag column will hold value 1. result should be as below: VehNum Control_circuit control_circuit_status partnumbers errors Flag Control_Flag 4234456 DOC ok A567UR Software Issue 0 1 4234456 DOC not_okay A568UR Software Issue 1 1 4234456 DOC not_okay A569UR Hardware issue 2 1 4234457 ACR ok A234TY Hardware issue 0 0 4234457 ACR ok A235TY Hardware issue 0 0 4234457 ACR ok A234TY Hardware issue 0 0 4234487 QWR ok A276TY Hardware issue 0 1 4234487 QWR not_okay A872UR Hardware issue 1 1 3423448 QWR not_okay A872UR Hardware issue 1 1 how to achieve this using pyspark? A: using a aggregate window with SUM() will help achieve this from pyspark.sql import functions as F from pyspark.sql.types import * from pyspark.sql import Window df = spark.createDataFrame( [ ("4234456", "DOC", "ok", "A567UR", "Software Issue", 0), ("4234456", "DOC", "not_okay", "A568UR", "Software Issue", 1), ("4234456", "DOC", "not_okay", "A569UR", "Hardware Issue", 2), ("4234457", "ACR", "ok", "A234TY", "Hardware Issue", 0), ("4234457", "ACR", "ok", "A234TY", "Hardware Issue", 0), ("4234457", "ACR", "ok", "A234TY", "Hardware Issue", 0), ("4234487", "QWR", "ok", "A276TY", "Hardware Issue", 0), ("4234487", "QWR", "not_okay", "A872UR", "Hardware Issue", 1), ("3423448", "QWR", "not_okay", "A872UR", "Hardware Issue", 1), ], ["VehNum", "Control_circuit", "control_circuit_status", "partnumbers", "errors", "Flag"], ) df_agg_window = Window.partitionBy( "VehNum", "Control_circuit", ) df = ( df .withColumn( "flag_sum", F.sum("Flag").over(df_agg_window), ) .withColumn( "Control_Flag", F.when( F.lower(F.col("flag_sum")) > 0, F.lit(1), ) .otherwise(F.lit(0)), ) #.drop(F.col("flag_sum")) ) df.show() output: +-------+---------------+----------------------+-----------+--------------+----+--------+------------+ | VehNum|Control_circuit|control_circuit_status|partnumbers| errors|Flag|flag_sum|Control_Flag| +-------+---------------+----------------------+-----------+--------------+----+--------+------------+ |4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| 0| |4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| 0| |4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| 0| |4234487| QWR| not_okay| A872UR|Hardware Issue| 1| 1| 1| |4234487| QWR| ok| A276TY|Hardware Issue| 0| 1| 1| |4234456| DOC| ok| A567UR|Software Issue| 0| 3| 1| |4234456| DOC| not_okay| A569UR|Hardware Issue| 2| 3| 1| |4234456| DOC| not_okay| A568UR|Software Issue| 1| 3| 1| |3423448| QWR| not_okay| A872UR|Hardware Issue| 1| 1| 1| +-------+---------------+----------------------+-----------+--------------+----+--------+------------+
how to segregate the column wrt certain conditions in pyspark dataframe
i have a dataframe df as shown below: VehNum Control_circuit control_circuit_status partnumbers errors Flag 4234456 DOC ok A567UR Software Issue 0 4234456 DOC not_okay A568UR Software Issue 1 4234456 DOC not_okay A569UR Hardware issue 2 4234457 ACR ok A234TY Hardware issue 0 4234457 ACR ok A235TY Hardware issue 0 4234457 ACR ok A234TY Hardware issue 0 4234487 QWR ok A276TY Hardware issue 0 4234487 QWR not_okay A872UR Hardware issue 1 3423448 QWR not_okay A872UR Hardware issue 1 i want to add a new column called "Control_Flag" and perform below operations: for each VehNum ,Control_circuit if it has flag value only 0 then Control_Flag column will hold value 0 else if it has 0 ,1 or 2 then Control_Flag column will hold value 1. result should be as below: VehNum Control_circuit control_circuit_status partnumbers errors Flag Control_Flag 4234456 DOC ok A567UR Software Issue 0 1 4234456 DOC not_okay A568UR Software Issue 1 1 4234456 DOC not_okay A569UR Hardware issue 2 1 4234457 ACR ok A234TY Hardware issue 0 0 4234457 ACR ok A235TY Hardware issue 0 0 4234457 ACR ok A234TY Hardware issue 0 0 4234487 QWR ok A276TY Hardware issue 0 1 4234487 QWR not_okay A872UR Hardware issue 1 1 3423448 QWR not_okay A872UR Hardware issue 1 1 how to achieve this using pyspark?
[ "using a aggregate window with SUM() will help achieve this\nfrom pyspark.sql import functions as F\nfrom pyspark.sql.types import *\nfrom pyspark.sql import Window\n\ndf = spark.createDataFrame(\n [\n (\"4234456\", \"DOC\", \"ok\", \"A567UR\", \"Software Issue\", 0),\n (\"4234456\", \"DOC\", \"not_okay\", \"A568UR\", \"Software Issue\", 1),\n (\"4234456\", \"DOC\", \"not_okay\", \"A569UR\", \"Hardware Issue\", 2), \n (\"4234457\", \"ACR\", \"ok\", \"A234TY\", \"Hardware Issue\", 0),\n (\"4234457\", \"ACR\", \"ok\", \"A234TY\", \"Hardware Issue\", 0),\n (\"4234457\", \"ACR\", \"ok\", \"A234TY\", \"Hardware Issue\", 0), \n (\"4234487\", \"QWR\", \"ok\", \"A276TY\", \"Hardware Issue\", 0),\n (\"4234487\", \"QWR\", \"not_okay\", \"A872UR\", \"Hardware Issue\", 1),\n (\"3423448\", \"QWR\", \"not_okay\", \"A872UR\", \"Hardware Issue\", 1),\n ],\n [\"VehNum\", \"Control_circuit\", \"control_circuit_status\", \"partnumbers\", \"errors\", \"Flag\"],\n)\n\ndf_agg_window = Window.partitionBy(\n \"VehNum\",\n \"Control_circuit\",\n)\n\ndf = (\n df\n .withColumn(\n \"flag_sum\",\n F.sum(\"Flag\").over(df_agg_window),\n )\n .withColumn(\n \"Control_Flag\",\n F.when(\n F.lower(F.col(\"flag_sum\")) > 0,\n F.lit(1),\n )\n .otherwise(F.lit(0)),\n )\n #.drop(F.col(\"flag_sum\"))\n)\n\n\ndf.show()\n\noutput:\n+-------+---------------+----------------------+-----------+--------------+----+--------+------------+\n| VehNum|Control_circuit|control_circuit_status|partnumbers| errors|Flag|flag_sum|Control_Flag|\n+-------+---------------+----------------------+-----------+--------------+----+--------+------------+\n|4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| 0|\n|4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| 0|\n|4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| 0|\n|4234487| QWR| not_okay| A872UR|Hardware Issue| 1| 1| 1|\n|4234487| QWR| ok| A276TY|Hardware Issue| 0| 1| 1|\n|4234456| DOC| ok| A567UR|Software Issue| 0| 3| 1|\n|4234456| DOC| not_okay| A569UR|Hardware Issue| 2| 3| 1|\n|4234456| DOC| not_okay| A568UR|Software Issue| 1| 3| 1|\n|3423448| QWR| not_okay| A872UR|Hardware Issue| 1| 1| 1|\n+-------+---------------+----------------------+-----------+--------------+----+--------+------------+\n\n" ]
[ 1 ]
[]
[]
[ "pyspark", "python", "python_3.x" ]
stackoverflow_0074522793_pyspark_python_python_3.x.txt
Q: How to get trend component and cyclical component in one series by Python hpfilter? This is my data: Year Z-value 0 1976-01-01 9.170293 1 1977-01-01 9.130933 2 1978-01-01 9.092142 3 1979-01-01 9.179282 4 1980-01-01 9.031123 5 1981-01-01 8.899608 6 1982-01-01 8.533545 7 1983-01-01 8.648138 8 1984-01-01 8.895921 9 1985-01-01 9.035276 10 1986-01-01 8.898070 11 1987-01-01 9.096961 12 1988-01-01 9.267598 13 1989-01-01 9.270736 14 1990-01-01 9.051413 15 1991-01-01 8.798996 16 1992-01-01 8.821594 17 1993-01-01 8.959126 18 1994-01-01 9.226342 19 1995-01-01 9.453473 20 1996-01-01 9.608805 21 1997-01-01 9.939561 22 1998-01-01 10.030579 23 1999-01-01 10.481201 24 2000-01-01 11.027884 25 2001-01-01 11.023259 26 2002-01-01 11.031710 27 2003-01-01 11.101627 28 2004-01-01 11.321485 29 2005-01-01 11.548922 30 2006-01-01 11.394613 31 2007-01-01 11.238485 32 2008-01-01 11.094884 33 2009-01-01 10.289895 34 2010-01-01 10.493154 35 2011-01-01 10.618517 36 2012-01-01 10.455861 37 2013-01-01 10.617282 38 2014-01-01 10.600950 39 2015-01-01 10.194091 40 2016-01-01 10.212243 41 2017-01-01 10.662858 42 2018-01-01 10.750010 and this is my code import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm from scipy.optimize import minimize, show_options import requests import seaborn as sns sns.set() cycle, trend = sm.tsa.filters.hpfilter(z,43) plt.plot(trend) How I can get a time trend component and cyclical component from above Z-value? Thank you very much! A: What about: import matplotlib.pyplot as plt import statsmodels.api as sm cycle, trend = sm.tsa.filters.hpfilter(df['Z-value'], 43) df['Year'] = pd.to_datetime(df['Year']) ax = plt.subplot() ax.plot(df['Year'], df['Z-value'], label='Z-Value') ax2 = ax.twinx() ax2.plot(df['Year'], cycle, ls='--', label='cycle', c='red') ax2.plot(df['Year'], trend, label='trend', c='green') plt.legend()
How to get trend component and cyclical component in one series by Python hpfilter?
This is my data: Year Z-value 0 1976-01-01 9.170293 1 1977-01-01 9.130933 2 1978-01-01 9.092142 3 1979-01-01 9.179282 4 1980-01-01 9.031123 5 1981-01-01 8.899608 6 1982-01-01 8.533545 7 1983-01-01 8.648138 8 1984-01-01 8.895921 9 1985-01-01 9.035276 10 1986-01-01 8.898070 11 1987-01-01 9.096961 12 1988-01-01 9.267598 13 1989-01-01 9.270736 14 1990-01-01 9.051413 15 1991-01-01 8.798996 16 1992-01-01 8.821594 17 1993-01-01 8.959126 18 1994-01-01 9.226342 19 1995-01-01 9.453473 20 1996-01-01 9.608805 21 1997-01-01 9.939561 22 1998-01-01 10.030579 23 1999-01-01 10.481201 24 2000-01-01 11.027884 25 2001-01-01 11.023259 26 2002-01-01 11.031710 27 2003-01-01 11.101627 28 2004-01-01 11.321485 29 2005-01-01 11.548922 30 2006-01-01 11.394613 31 2007-01-01 11.238485 32 2008-01-01 11.094884 33 2009-01-01 10.289895 34 2010-01-01 10.493154 35 2011-01-01 10.618517 36 2012-01-01 10.455861 37 2013-01-01 10.617282 38 2014-01-01 10.600950 39 2015-01-01 10.194091 40 2016-01-01 10.212243 41 2017-01-01 10.662858 42 2018-01-01 10.750010 and this is my code import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm from scipy.optimize import minimize, show_options import requests import seaborn as sns sns.set() cycle, trend = sm.tsa.filters.hpfilter(z,43) plt.plot(trend) How I can get a time trend component and cyclical component from above Z-value? Thank you very much!
[ "What about:\nimport matplotlib.pyplot as plt\nimport statsmodels.api as sm\ncycle, trend = sm.tsa.filters.hpfilter(df['Z-value'], 43)\n\ndf['Year'] = pd.to_datetime(df['Year'])\n\nax = plt.subplot()\n\nax.plot(df['Year'], df['Z-value'], label='Z-Value')\nax2 = ax.twinx()\n\nax2.plot(df['Year'], cycle, ls='--', label='cycle', c='red')\nax2.plot(df['Year'], trend, label='trend', c='green')\n\nplt.legend()\n\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "seaborn", "statsmodels" ]
stackoverflow_0074523438_pandas_python_seaborn_statsmodels.txt
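A quick aside on the hpfilter answer above: the smoothing parameter is hpfilter's second argument (lamb), and the 43 used in the question is unusual; for annual data the conventional choices are lambda=100 (the classic value) or 6.25 (Ravn-Uhlig). A minimal sketch, assuming df is the Year/Z-value frame from the question:
import statsmodels.api as sm

cycle, trend = sm.tsa.filters.hpfilter(df['Z-value'], lamb=100)  # classic annual lambda
df['trend'] = trend
df['cycle'] = cycle
# sanity check: the two components always sum back to the original series
assert ((df['trend'] + df['cycle'] - df['Z-value']).abs() < 1e-9).all()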
Q: Why is ArgumentParser object giving None value even though arguments are being passed through shell script I am calling a python script in a shell script and passing arguments to this python job. Arguments are being loaded from a config file. The variables being called are correctly echoed when testing in the shell script. The HIVE_ labelled arguments are all being marked as None in the argument parser. Shell Script set -e if [ ! -z "$1" ] then config_file="$1" else config_file="./env.sh" fi ${venv_path} ${mpw_path}/src/main_sample.py \ --MPW_BASE "${mpw_base}" --MPW_PATH "${mpw_path}" \ --FUNC "${trueup}" --TRUEUP_FILE "${trueup_file}" \ --TRUEUP_FILE_MANUAL "${trueup_file_manual}" --LATEST_TRUEUP_STAMP "${latest_trueup_stamp}" \ --SQL_driver "${SQL_driver}" --SQL_server "$SQL_server" \ --SQL_port "${SQL_port}" --SQL_db "${SQL_db}" \ --LANDING_ZONE "${landing_zone}" --TIME_STAMP "${time_stamp}" \ --HIVE_driver "${HIVE_driver}" --HIVE_host "${HIVE_host}" \ --HIVE_ZKNamespace "${HIVE_zknamespace}" --HIVE_ServiceDiscoveryMode "${HIVE_servicediscoverymode}" \ --HIVE_AuthMech "${HIVE_authMech}" --HIVE_KrbServiceName "${HIVE_krbservicename}" \ --HIVE_KrbHostFQDN "${HIVE_krbhostfqdn}" --HIVE_SSP_tezqueue "${HIVE_ssp_tezqueue}" Config File HIVE_driver="" HIVE_host="" HIVE_zknamespace="" HIVE_servicediscoverymode="" HIVE_authMech="" HIVE_krbserviceame="" HIVE_krbhostfqdn="" HIVE_ssp_tezqueue="" Config variables values are missing for obvious reasoning Python Script parser = argparse.ArgumentParser(description='MPW Arg Parser') parser.add_argument('--MPW_BASE', help='base directory of project') parser.add_argument('--MPW_PATH', help='directory of the repo') parser.add_argument('--FUNC', help='function to done') parser.add_argument('--TRUEUP_FILE', help='mapping to be passed in writecustomertrueup') parser.add_argument('--TRUEUP_FILE_MANUAL', help='trueup exercise of previous week') parser.add_argument('--LATEST_TRUEUP_STAMP', help='trueup stamp') parser.add_argument('--SQL_driver', help='SQL Driver') parser.add_argument('--SQL_server', help='SQL Server') parser.add_argument('--SQL_port', help='SQL Port') parser.add_argument('--SQL_db', help='DATABASE to connect') parser.add_argument('--LANDING_ZONE', help='Landing zone of MPW') parser.add_argument('--TIME_STAMP', help="time stamp") parser.add_argument('--HIVE_driver', help='EDL Driver') parser.add_argument('--HIVE_host', help='EDL Host') parser.add_argument('--HIVE_ZKNamespace', help='EDL ZKNamespace') parser.add_argument('--HIVE_ServiceDiscoveryMode', help='EDL Service Discovery Mode') parser.add_argument('--HIVE_AuthMech', help='EDL Auth Mech') parser.add_argument('--HIVE_KrbServiceName', help = 'KrbServiceName') parser.add_argument('--HIVE_KrbHostFQDN', help = 'For EDL connection') parser.add_argument('--HIVE_SSP_tezqueue', help = 'For EDL connection') args, unknown = parser.parse_known_args() if args.MPW_BASE is None or args.MPW_PATH is None or args.FUNC is None \ or args.TRUEUP_FILE is None or args.TRUEUP_FILE_MANUAL is None \ or args.LATEST_TRUEUP_STAMP is None or args.SQL_driver is None \ or args.SQL_server is None or args.SQL_port is None or args.SQL_db \ is None or args.LANDING_ZONE is None or args.TIME_STAMP is None \ or args.HIVE_driver is None or args.HIVE_host is None or args.HIVE_ZKNamespace \ is None or args.HIVE_ServiceDiscoveryMode is None or args.HIVE_AuthMech \ is None or args.HIVE_KrbServiceName is None or args.HIVE_KrbHostFQDN is None \ or args.HIVE_SSP_tezqueue is None: logging.error(str(args)) logging.error(str(unknown)) Error for args: 22/11/21 13:16:02 ERROR <module> Namespace(FUNC='TRUEUP', HIVE_AuthMech=None, HIVE_KrbHostFQDN=None, HIVE_KrbServiceName=None, HIVE_SSP_tezqueue=None, HIVE_ServiceDiscoveryMode=None, HIVE_ZKNamespace=None, HIVE_driver=None, HIVE_host=None Error for unknown: 22/11/21 13:16:02 ERROR <module> [' '] I tried to change the variable names in the config and the shell script files and am expecting it to read the arguments correctly but still get None value for the HIVE_ named variables. A: This code doesn't do anything with the config file, just stores its name/path in a variable. The config needs to be read in order to use the values. set -e if [ ! -z "$1" ] then config_file="$1" else config_file="./env.sh" fi . "${config_file}"
Why is ArgumentParser object giving None value even though arguments are being passed through shell script
I am calling a python script in a shell script and passing arguments to this python job. Arguments are being loaded from a config file. The variables being called are correctly echoed when testing in the shell script. The HIVE_ labelled arguments are all being marked as None in the argument parser. Shell Script set -e if [ ! -z "$1" ] then config_file="$1" else config_file="./env.sh" fi ${venv_path} ${mpw_path}/src/main_sample.py \ --MPW_BASE "${mpw_base}" --MPW_PATH "${mpw_path}" \ --FUNC "${trueup}" --TRUEUP_FILE "${trueup_file}" \ --TRUEUP_FILE_MANUAL "${trueup_file_manual}" --LATEST_TRUEUP_STAMP "${latest_trueup_stamp}" \ --SQL_driver "${SQL_driver}" --SQL_server "$SQL_server" \ --SQL_port "${SQL_port}" --SQL_db "${SQL_db}" \ --LANDING_ZONE "${landing_zone}" --TIME_STAMP "${time_stamp}" \ --HIVE_driver "${HIVE_driver}" --HIVE_host "${HIVE_host}" \ --HIVE_ZKNamespace "${HIVE_zknamespace}" --HIVE_ServiceDiscoveryMode "${HIVE_servicediscoverymode}" \ --HIVE_AuthMech "${HIVE_authMech}" --HIVE_KrbServiceName "${HIVE_krbservicename}" \ --HIVE_KrbHostFQDN "${HIVE_krbhostfqdn}" --HIVE_SSP_tezqueue "${HIVE_ssp_tezqueue}" Config File HIVE_driver="" HIVE_host="" HIVE_zknamespace="" HIVE_servicediscoverymode="" HIVE_authMech="" HIVE_krbserviceame="" HIVE_krbhostfqdn="" HIVE_ssp_tezqueue="" Config variables values are missing for obvious reasoning Python Script parser = argparse.ArgumentParser(description='MPW Arg Parser') parser.add_argument('--MPW_BASE', help='base directory of project') parser.add_argument('--MPW_PATH', help='directory of the repo') parser.add_argument('--FUNC', help='function to done') parser.add_argument('--TRUEUP_FILE', help='mapping to be passed in writecustomertrueup') parser.add_argument('--TRUEUP_FILE_MANUAL', help='trueup exercise of previous week') parser.add_argument('--LATEST_TRUEUP_STAMP', help='trueup stamp') parser.add_argument('--SQL_driver', help='SQL Driver') parser.add_argument('--SQL_server', help='SQL Server') parser.add_argument('--SQL_port', help='SQL Port') parser.add_argument('--SQL_db', help='DATABASE to connect') parser.add_argument('--LANDING_ZONE', help='Landing zone of MPW') parser.add_argument('--TIME_STAMP', help="time stamp") parser.add_argument('--HIVE_driver', help='EDL Driver') parser.add_argument('--HIVE_host', help='EDL Host') parser.add_argument('--HIVE_ZKNamespace', help='EDL ZKNamespace') parser.add_argument('--HIVE_ServiceDiscoveryMode', help='EDL Service Discovery Mode') parser.add_argument('--HIVE_AuthMech', help='EDL Auth Mech') parser.add_argument('--HIVE_KrbServiceName', help = 'KrbServiceName') parser.add_argument('--HIVE_KrbHostFQDN', help = 'For EDL connection') parser.add_argument('--HIVE_SSP_tezqueue', help = 'For EDL connection') args, unknown = parser.parse_known_args() if args.MPW_BASE is None or args.MPW_PATH is None or args.FUNC is None \ or args.TRUEUP_FILE is None or args.TRUEUP_FILE_MANUAL is None \ or args.LATEST_TRUEUP_STAMP is None or args.SQL_driver is None \ or args.SQL_server is None or args.SQL_port is None or args.SQL_db \ is None or args.LANDING_ZONE is None or args.TIME_STAMP is None \ or args.HIVE_driver is None or args.HIVE_host is None or args.HIVE_ZKNamespace \ is None or args.HIVE_ServiceDiscoveryMode is None or args.HIVE_AuthMech \ is None or args.HIVE_KrbServiceName is None or args.HIVE_KrbHostFQDN is None \ or args.HIVE_SSP_tezqueue is None: logging.error(str(args)) logging.error(str(unknown)) Error for args: 22/11/21 13:16:02 ERROR <module> Namespace(FUNC='TRUEUP', HIVE_AuthMech=None, HIVE_KrbHostFQDN=None, HIVE_KrbServiceName=None, HIVE_SSP_tezqueue=None, HIVE_ServiceDiscoveryMode=None, HIVE_ZKNamespace=None, HIVE_driver=None, HIVE_host=None Error for unknown: 22/11/21 13:16:02 ERROR <module> [' '] I tried to change the variable names in the config and the shell script files and am expecting it to read the arguments correctly but still get None value for the HIVE_ named variables.
[ "This code doesn't do anything with the config file, just stores its name/path in a variable. The config needs to be read in order to use the values.\nset -e\n\nif [ ! -z \"$1\" ]\nthen\nconfig_file=\"$1\"\nelse\nconfig_file=\"./env.sh\"\nfi\n\n. \"${config_file}\"\n\n" ]
[ 0 ]
[]
[]
[ "argparse", "python", "sh" ]
stackoverflow_0074523411_argparse_python_sh.txt
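One follow-up on the answer above: even after sourcing the config, the HIVE_* variables in the example config are empty strings, so they will reach argparse as '' rather than None and every is None check will still pass. A small sketch (the two option names are just illustrative) that validates falsy values instead:
import argparse

parser = argparse.ArgumentParser(description='MPW Arg Parser')
parser.add_argument('--HIVE_driver', help='EDL Driver')
parser.add_argument('--HIVE_host', help='EDL Host')
# simulate what the shell passes when a config variable is empty
args = parser.parse_args(['--HIVE_driver', '', '--HIVE_host', 'edl01'])
missing = [name for name, value in vars(args).items() if not value]
print(missing)  # ['HIVE_driver'] -- empty string, not None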
Q: Is there a more efficient and less ugly way to load variable-length data into a nested data structure in Python? I have a number of HDF5 files which are saved in a hierarchical manner (i.e. multiple folders containing multiple files where the files in each folder are related - I am not referring to the hierarchical structure of individual HDF5 files). I want to read the data (a vector) from each file and store it in a data structure which reflects the hierarchical relationship between the files. However, the length of the data varies between files. In the example below the data is stored in a field of the HDF5 file called "data". If the data from each file was the same length, I would simply use a NumPy array. However, because of the variable lengths I have been using nested lists as follows: import glob import h5py # list of directories dir_list = ["dir1", "dir2"] # load data and store in nested list data = [] for dir_idx, dir in enumerate(dir_list): data.append([]) file_list = glob.glob(dir + "/*.hdf5") for file_idx, file in enumerate(file_list): with h5py.File(file, "r") as fid: data[dir_idx].append(fid["data"][:]) This seems inefficient and looks ugly, but I don't know a better solution. Ideally, I would like to use NumPy because of the more efficient memory management. Could anybody suggest a more elegant solution? A: A slightly cleaner way of building a nested list of arrays is: data = [] for dir in dir_list: data1 = [] file_list = glob.glob(dir + "/*.hdf5") for file in file_list: with h5py.File(file, "r") as fid: data1.append(fid["data"][:]) data.append(data1) It should do the same thing, just without the enumerate and idx. No difference in performance. It's loading the same "data" dataset from each file in each dir. I was going to say you make a 2d or higher array to store the datasets, but that's only possible if len(file_list) is the same for all dir, and you know that ahead of time. And if all "data" datasets are the same shape, you could start with something like arr = np.zeros((len(dir_list), len(file_list), N, M), float) Then the enumerate would be useful to assign values to arr[dir_index, file_index, :, :] = fid['data'][:] One way or other you have to iterate on the dirs and files. As long as you don't use things like np.append in the loops, it doesn't matter a whole lot how you collect the arrays.
Is there a more efficient and less ugly way to load variable-length data into a nested data structure in Python?
I have a number of HDF5 files which are saved in a hierarchical manner (i.e. multiple folders containing multiple files where the files in each folder are related - I am not referring to the hierarchical structure of individual HDF5 files). I want to read the data (a vector) from each file and store it in a data structure which reflects the hierarchical relationship between the files. However, the length of the data varies between files. In the example below the data is stored in a field of the HDF5 file called "data". If the data from each file was the same length, I would simply use a NumPy array. However, because of the variable lengths I have been using nested lists as follows: import glob import h5py # list of directories dir_list = ["dir1", "dir2"] # load data and store in nested list data = [] for dir_idx, dir in enumerate(dir_list): data.append([]) file_list = glob.glob(dir + "/*.hdf5") for file_idx, file in enumerate(file_list): with h5py.File(file, "r") as fid: data[dir_idx].append(fid["data"][:]) This seems inefficient and looks ugly, but I don't know a better solution. Ideally, I would like to use NumPy because of the more efficient memory management. Could anybody suggest a more elegant solution?
[ "A slightly cleaner way of building a nested list of arrays is:\ndata = []\nfor dir in dir_list:\n data1 = []\n file_list = glob.glob(dir + \"/*.hdf5\")\n for file in file_list:\n with h5py.File(file, \"r\") as fid:\n data1.append(fid[\"data\"][:])\n data.append(data1)\n\nIt should do the same thing, just without the enumerate and idx. No difference in performance. It's loading the same \"data\" dataset from each file in each dir.\nI was going to say you make a 2d or higher array to store the datasets, but that's only possible if len(file_list) is the same for all dir, and you know that ahead of time. And if all \"data\" datasets are the same shape, you could start with something like a\n arr = np.zeros((len(dir_list), len(file_list), N, M), float)\n\nThen the enumerate would useful in assign values to\n arr[dir_index, file_index,:, : ] = fit['data'][:]\n\nOne way or other you have to iterate on the dirs and files. As long as you don't use things like np.append in the loops, it doesn't matter a whole lot how you collect the arrays.\n" ]
[ 0 ]
[]
[]
[ "list", "nested", "numpy", "python" ]
stackoverflow_0074445746_list_nested_numpy_python.txt
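As a variant of the pattern in the answer above, a dict keyed by directory keeps the hierarchy explicit, and a small helper keeps each file handle properly closed; a sketch assuming the same dir1/dir2 layout and "data" dataset as the question:
import glob
import h5py

dir_list = ["dir1", "dir2"]

def load_vector(path):
    # the context manager guarantees the file is closed after reading
    with h5py.File(path, "r") as fid:
        return fid["data"][:]

data = {d: [load_vector(f) for f in sorted(glob.glob(d + "/*.hdf5"))]
        for d in dir_list}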
Q: Using Pandas, i'm trying to keep on my DataFrame only 100 rows of each value of my column "neighborhood" I have a super large dataset that i'm trying to shrink. My idea is to keep 100 rows by neighborhood. Here's an overview of my data : index name neighborhood 0 name 1 neighborhood A 1 name 2 neighborhood A 2 name 3 neighborhood B 3 name 4 neighborhood B 4 name 5 neighborhood C 5 name 6 neighborhood C 6 name 7 neighborhood D 7 name 8 neighborhood D 8 name 9 neighborhood E 9 name 10 neighborhood E What is the more efficient way to do so ? Thanks in advance I'm expecting to create something that looks like : index name neighborhood 0 name 1 neighborhood A 1 name 3 neighborhood B 2 name 5 neighborhood C 3 name 7 neighborhood D 4 name 9 neighborhood E A: I think you can use groupby and nth: dfx=df.groupby('neighborhood').nth[:100] A: It depends how you want to select the rows. first n with groupby.head: n = 100 out = df.groupby('neighborhood').head(n) random n rows with groupby.sample: n = 100 out = df.groupby('neighborhood').sample(n=n)
Using Pandas, i'm trying to keep on my DataFrame only 100 rows of each value of my column "neighborhood"
I have a super large dataset that i'm trying to shrink. My idea is to keep 100 rows by neighborhood. Here's an overview of my data : index name neighborhood 0 name 1 neighborhood A 1 name 2 neighborhood A 2 name 3 neighborhood B 3 name 4 neighborhood B 4 name 5 neighborhood C 5 name 6 neighborhood C 6 name 7 neighborhood D 7 name 8 neighborhood D 8 name 9 neighborhood E 9 name 10 neighborhood E What is the more efficient way to do so ? Thanks in advance I'm expecting to create something that looks like : index name neighborhood 0 name 1 neighborhood A 1 name 3 neighborhood B 2 name 5 neighborhood C 3 name 7 neighborhood D 4 name 9 neighborhood E
[ "i think, you can use groupby and *nth:\ndfx=df.groupby('neighborhood').nth[:100]\n\n", "It depends how you want to select the rows.\nfirst n with groupby.head:\nn = 100\nout = df.groupby('neighborhood').head(n)\n\nrandom n rows with groupby.sample:\nn = 100\nout = df.groupby('neighborhood').sample(n=n)\n\n" ]
[ 2, 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074523564_dataframe_pandas_python.txt
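A runnable illustration of the head/sample answer on a toy frame (n=2 instead of 100 so the output stays small). One caveat: sample(n=n) raises a ValueError when a group holds fewer than n rows unless replace=True is passed, while head(n) simply returns the shorter group.
import pandas as pd

df = pd.DataFrame({
    'name': [f'name {i}' for i in range(1, 11)],
    'neighborhood': list('AABBCCDDEE'),
})
print(df.groupby('neighborhood').head(2))      # first 2 rows per group
print(df.groupby('neighborhood').sample(n=2))  # 2 random rows per group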
Q: How to handle (complete or totally remove) incomplete trajectories drawn in a phase portrait? For the following nonlinear system xdot = x + exp(-y) ydot = -y whose phase portrait is: import numpy as np import matplotlib.pyplot as plt xvalues, yvalues = np.meshgrid(np.arange(-5, 5, 0.1), np.arange(-5, 5, 0.1)) xdot = xvalues - np.exp(-yvalues) ydot = - yvalues plt.streamplot(xvalues, yvalues, xdot, ydot, color='r', linewidth=0.5, density=1.2) plt.show() However, some of the trajectories are (visually-unappealing) incomplete arcs like the blue ones highlighted below: I need to (i) eliminate those arcs off the plot or (ii) make them complete just like, e.g., the black one. How can I achieve this?
How to handle (complete or totally remove) incomplete trajectories drawn in a phase portrait?
For the following nonlinear system xdot = x + exp(-y) ydot = -y whose phase portrait is: import numpy as np import matplotlib.pyplot as plt xvalues, yvalues = np.meshgrid(np.arange(-5, 5, 0.1), np.arange(-5, 5, 0.1)) xdot = xvalues - np.exp(-yvalues) ydot = - yvalues plt.streamplot(xvalues, yvalues, xdot, ydot, color='r', linewidth=0.5, density=1.2) plt.show() However, some of the trajectories are (visually-unappealing) incomplete arcs like the blue ones highlighted below: I need to (i) eliminate those arcs off the plot or (ii) make them complete just like, e.g., the black one. How can I achieve this?
[]
[]
[ "As of Matplotlib version 3.6.0, an optional parameter broken_streamlines has been added for disabling streamline breaks.\nAdding it to your snippet (and halving the density to compensate for the visual clutter) produces the following result:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxvalues, yvalues = np.meshgrid(np.arange(-5, 5, 0.1), np.arange(-5, 5, 0.1))\nxdot = xvalues - np.exp(-yvalues)\nydot = - yvalues\nplt.streamplot(xvalues, yvalues, xdot, ydot, color='r', linewidth=0.5, density=0.6, broken_streamlines=False)\nplt.show()\n\na stream plot with continuous streamlines\nNote\nThis parameter just extends the streamlines which were originally drawn (as in the question). This means that the streamlines in the modified plot above look uneven, due to the way the streamline start points are chosen (see the documentation for the density parameter of matplotlib.pyplot.streamplot for more details on how streamline start points are chosen).\nFor accurate streamline density, consider using matplotlib.pyplot.contour, but be aware that contour does not show arrows.\n" ]
[ -1 ]
[ "matplotlib", "plot", "python" ]
stackoverflow_0074312776_matplotlib_plot_python.txt
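Building on the broken_streamlines suggestion above, explicit seed points give finer control over which trajectories get drawn; start_points expects an N x 2 array of (x, y) coordinates inside the grid, and broken_streamlines still requires Matplotlib version 3.6 or later:
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.arange(-5, 5, 0.1), np.arange(-5, 5, 0.1))
xdot = x - np.exp(-y)
ydot = -y
seeds = np.array([[-4.0, -4.0], [-4.0, 4.0], [0.0, 0.0], [4.0, -4.0], [4.0, 4.0]])
plt.streamplot(x, y, xdot, ydot, color='r', linewidth=0.5,
               start_points=seeds, broken_streamlines=False)
plt.show()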
Q: Get max value across subset of rows and compare to constant to return max in new column I am trying to create a new column in a dataframe that is the maximum value across two columns or a constant value. Whichever is the largest value will be returned to the new column. import numpy as np import pandas as pd df = pd.DataFrame({ 'loan_num': ['111', '333', '555', '777'], 'bllnterm': [0, 240, 360, 240], 'amortterm': [0, 360, 360, 360] }) I have tried using pd.clip, np.maximum, and np.amax but none seem to run without throwing an error. df = df.assign(amtz = df[['bllnterm', 'amortterm']].clip(lower=1, axis=1)) This returns a ValueError: Wrong number of items passed 2, placement implies 1 df = df.assign(amtz = np.maximum(df[['bllnterm', 'amortterm']], 1)) This returns a ValueError: Wrong number of items passed 2, placement implies 1 df = df.assign(amtz = np.amax(df[['bllnterm', 'amortterm']], axis=1, initial=1)) This returns a TypeError: max() got an unexpected keyword argument 'initial'. However, initial is a keyword in the docs so I'm not sure what is going on there. My desired output looks like this: loan_num bllnterm amortterm amtz ---------------------------------------------- 111 0 0 1 333 240 360 360 555 360 360 360 777 240 360 360 A: You were on the right track, you need to combine max and clip: df['amtz'] = df[['bllnterm', 'amortterm']].max(axis=1).clip(lower=1) As assign: df.assign(amtz=df[['bllnterm', 'amortterm']].max(axis=1).clip(lower=1)) output: loan_num bllnterm amortterm amtz 0 111 0 0 1 1 333 240 360 360 2 555 360 360 360 3 777 240 360 360
Get max value across subset of rows and compare to constant to return max in new column
I am trying to create a new column in a dataframe that is the maximum value across two columns or a constant value. Whichever is the largest value will be returned to the new column. import numpy as np import pandas as pd df = pd.DataFrame({ 'loan_num': ['111', '333', '555', '777'], 'bllnterm': [0, 240, 360, 240], 'amortterm': [0, 360, 360, 360] }) I have tried using pd.clip, np.maximum, and np.amax but none seem to run without throwing an error. df = df.assign(amtz = df[['bllnterm', 'amortterm']].clip(lower=1, axis=1)) This returns a ValueError: Wrong number of items passed 2, placement implies 1 df = df.assign(amtz = np.maximum(df[['bllnterm', 'amortterm']], 1)) This returns a ValueError: Wrong number of items passed 2, placement implies 1 df = df.assign(amtz = np.amax(df[['bllnterm', 'amortterm']], axis=1, initial=1)) This returns a TypeError: max() got an unexpected keyword argument 'initial'. However, initial is a keyword in the docs so I'm not sure what is going on there. My desired output looks like this: loan_num bllnterm amortterm amtz ---------------------------------------------- 111 0 0 1 333 240 360 360 555 360 360 360 777 240 360 360
[ "You were on the right track, you need to combine max and clip:\ndf['amtz'] = df[['bllnterm', 'amortterm']].max(axis=1).clip(lower=1)\n\nAs assign:\ndf.assign(amtz=df[['bllnterm', 'amortterm']].max(axis=1).clip(lower=1))\n\noutput:\n loan_num bllnterm amortterm amtz\n0 111 0 0 1\n1 333 240 360 360\n2 555 360 360 360\n3 777 240 360 360\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074523593_numpy_pandas_python.txt
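For what it's worth, the original attempts failed because clip over a two-column frame returns a frame while assign expects a single Series; np.maximum produces that Series directly, so this sketch is equivalent to the accepted answer:
import numpy as np
import pandas as pd

df = pd.DataFrame({'bllnterm': [0, 240, 360, 240],
                   'amortterm': [0, 360, 360, 360]})
# element-wise max of the two columns, then floor the result at 1
df['amtz'] = np.maximum(df['bllnterm'], df['amortterm']).clip(lower=1)
print(df)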
Q: How to fix this strange error: "RuntimeError: CUDA error: out of memory" I successfully trained the network but got this error during validation: RuntimeError: CUDA error: out of memory A: The error occurs because you ran out of memory on your GPU. One way to solve it is to reduce the batch size until your code runs without this error. A: 1.. When you only perform validation not training, you don't need to calculate gradients for forward and backward phase. In that situation, your code can be located under with torch.no_grad(): ... net=Net() pred_for_validation=net(input) ... Above code doesn't use GPU memory 2.. If you use += operator in your code, it can accumulate gradient continuously in your gradient graph. In that case, you need to use float() like following site https://pytorch.org/docs/stable/notes/faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory Even if docs guides with float(), in case of me, item() also worked like entire_loss=0.0 for i in range(100): one_loss=loss_function(prediction,label) entire_loss+=one_loss.item() 3.. If you use for loop in training code, data can be sustained until entire for loop ends. So, in that case, you can explicitly delete variables after performing optimizer.step() for one_epoch in range(100): ... optimizer.step() del intermediate_variable1,intermediate_variable2,... A: The best way is to find the process engaging gpu memory and kill it: find the PID of python process from: nvidia-smi copy the PID and kill it by: sudo kill -9 pid A: I had the same issue and this code worked for me : import gc gc.collect() torch.cuda.empty_cache() A: It might be for a number of reasons that I try to report in the following list: Modules parameters: check the number of dimensions for your modules. Linear layers that transform a big input tensor (e.g., size 1000) in another big output tensor (e.g., size 1000) will require a matrix whose size is (1000, 1000). RNN decoder maximum steps: if you're using an RNN decoder in your architecture, avoid looping for a big number of steps. Usually, you fix a given number of decoding steps that is reasonable for your dataset. Tensors usage: minimise the number of tensors that you create. The garbage collector won't release them until they go out of scope. Batch size: incrementally increase your batch size until you go out of memory. It's a common trick that even famous library implement (see the biggest_batch_first description for the BucketIterator in AllenNLP. In addition, I would recommend you to have a look to the official PyTorch documentation: https://pytorch.org/docs/stable/notes/faq.html A: I am a Pytorch user. In my case, the cause for this error message was actually not due to GPU memory, but due to the version mismatch between Pytorch and CUDA. Check whether the cause is really due to your GPU memory, by a code below. import torch foo = torch.tensor([1,2,3]) foo = foo.to('cuda') If an error still occurs for the above code, it will be better to re-install your Pytorch according to your CUDA version. (In my case, this solved the problem.) Pytorch install link A similar case will happen also for Tensorflow/Keras. A: If you are getting this error in Google Colab use this code: import torch torch.cuda.empty_cache() A: Problem solved by the following code: import os os.environ['CUDA_VISIBLE_DEVICES']='2, 3' A: In my experience, this is not a typical CUDA OOM Error caused by PyTorch trying to allocate more memory on the GPU than you currently have. 
The giveaway is the distinct lack of the following text in the error message. Tried to allocate xxx GiB (GPU Y; XXX GiB total capacity; yyy MiB already allocated; zzz GiB free; aaa MiB reserved in total by PyTorch) In my experience, this is an Nvidia driver issue. A reboot has always solved the issue for me, but there are times when a reboot is not possible. One alternative to rebooting is to kill all Nvidia processes and reload the drivers manually. I always refer to the unaccepted answer of this question written by Comzyh when performing the driver cycle. Hope this helps anyone trapped in this situation. A: If someone arrives here because of fast.ai, the batch size of a loader such as ImageDataLoaders can be controlled via bs=N where N is the size of the batch. My dedicated GPU is limited to 2GB of memory, using bs=8 in the following example worked in my situation: from fastai.vision.all import * path = untar_data(URLs.PETS)/'images' def is_cat(x): return x[0].isupper() dls = ImageDataLoaders.from_name_func( path, get_image_files(path), valid_pct=0.2, seed=42, label_func=is_cat, item_tfms=Resize(244), num_workers=0, bs=) learn = cnn_learner(dls, resnet34, metrics=error_rate) learn.fine_tune(1) A: Not sure if this'll help you or not, but this is what solved the issue for me: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 Nothing else in this thread helped.
How to fix this strange error: "RuntimeError: CUDA error: out of memory"
I successfully trained the network but got this error during validation: RuntimeError: CUDA error: out of memory
[ "The error occurs because you ran out of memory on your GPU.\nOne way to solve it is to reduce the batch size until your code runs without this error.\n", "1.. When you only perform validation not training,\nyou don't need to calculate gradients for forward and backward phase.\nIn that situation, your code can be located under\nwith torch.no_grad():\n ...\n net=Net()\n pred_for_validation=net(input)\n ...\n\nAbove code doesn't use GPU memory\n2.. If you use += operator in your code,\nit can accumulate gradient continuously in your gradient graph.\nIn that case, you need to use float() like following site\nhttps://pytorch.org/docs/stable/notes/faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory \nEven if docs guides with float(), in case of me, item() also worked like \nentire_loss=0.0\nfor i in range(100):\n one_loss=loss_function(prediction,label)\n entire_loss+=one_loss.item()\n\n3.. If you use for loop in training code,\ndata can be sustained until entire for loop ends.\nSo, in that case, you can explicitly delete variables after performing optimizer.step() \nfor one_epoch in range(100):\n ...\n optimizer.step()\n del intermediate_variable1,intermediate_variable2,...\n\n", "The best way is to find the process engaging gpu memory and kill it:\nfind the PID of python process from: \nnvidia-smi\n\ncopy the PID and kill it by:\nsudo kill -9 pid\n\n", "I had the same issue and this code worked for me :\nimport gc\n\ngc.collect()\n\ntorch.cuda.empty_cache()\n\n", "It might be for a number of reasons that I try to report in the following list:\n\nModules parameters: check the number of dimensions for your modules. Linear layers that transform a big input tensor (e.g., size 1000) in another big output tensor (e.g., size 1000) will require a matrix whose size is (1000, 1000). \nRNN decoder maximum steps: if you're using an RNN decoder in your architecture, avoid looping for a big number of steps. Usually, you fix a given number of decoding steps that is reasonable for your dataset.\nTensors usage: minimise the number of tensors that you create. The garbage collector won't release them until they go out of scope.\nBatch size: incrementally increase your batch size until you go out of memory. It's a common trick that even famous library implement (see the biggest_batch_first description for the BucketIterator in AllenNLP.\n\nIn addition, I would recommend you to have a look to the official PyTorch documentation: https://pytorch.org/docs/stable/notes/faq.html\n", "I am a Pytorch user. In my case, the cause for this error message was actually not due to GPU memory, but due to the version mismatch between Pytorch and CUDA.\nCheck whether the cause is really due to your GPU memory, by a code below.\nimport torch\nfoo = torch.tensor([1,2,3])\nfoo = foo.to('cuda')\n\nIf an error still occurs for the above code, it will be better to re-install your Pytorch according to your CUDA version. 
(In my case, this solved the problem.)\nPytorch install link\nA similar case will happen also for Tensorflow/Keras.\n", "If you are getting this error in Google Colab use this code:\nimport torch\ntorch.cuda.empty_cache()\n\n", "Problem solved by the following code:\nimport os\nos.environ['CUDA_VISIBLE_DEVICES']='2, 3'\n\n", "In my experience, this is not a typical CUDA OOM Error caused by PyTorch trying to allocate more memory on the GPU than you currently have.\nThe giveaway is the distinct lack of the following text in the error message.\n\nTried to allocate xxx GiB (GPU Y; XXX GiB total capacity; yyy MiB already allocated; zzz GiB free; aaa MiB reserved in total by PyTorch)\n\nIn my experience, this is an Nvidia driver issue. A reboot has always solved the issue for me, but there are times when a reboot is not possible.\nOne alternative to rebooting is to kill all Nvidia processes and reload the drivers manually. I always refer to the unaccepted answer of this question written by Comzyh when performing the driver cycle. Hope this helps anyone trapped in this situation.\n", "If someone arrives here because of fast.ai, the batch size of a loader such as ImageDataLoaders can be controlled via bs=N where N is the size of the batch.\nMy dedicated GPU is limited to 2GB of memory, using bs=8 in the following example worked in my situation:\nfrom fastai.vision.all import *\npath = untar_data(URLs.PETS)/'images'\n\ndef is_cat(x): return x[0].isupper()\ndls = ImageDataLoaders.from_name_func(\n path, get_image_files(path), valid_pct=0.2, seed=42,\n label_func=is_cat, item_tfms=Resize(244), num_workers=0, bs=)\n\nlearn = cnn_learner(dls, resnet34, metrics=error_rate)\nlearn.fine_tune(1)\n\n", "Not sure if this'll help you or not, but this is what solved the issue for me:\nexport PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128\nNothing else in this thread helped.\n" ]
[ 36, 35, 32, 25, 9, 9, 3, 1, 1, 0, 0 ]
[ "I faced the same issue with my computer. All you have to do is customize your cfg file that suits your computer.Turns out my computer takes image size below 600 X 600 and when I adjusted the same in config file, the program ran smoothly.Picture Describing my cfg file\n", "For me, I deleted some files in c drive to get more free space, and it solved the issue.\n", "Type sudo reboot in Terminal in the root folder and wait like 5 min then run it again.\n" ]
[ -2, -2, -7 ]
[ "python", "pytorch" ]
stackoverflow_0054374935_python_pytorch.txt
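Pulling the two most-voted fixes above into one place, a sketch (assuming a CUDA-capable machine) that runs validation under no_grad and checks whether allocation pressure is really the culprit:
import torch

if torch.cuda.is_available():
    print(f"{torch.cuda.memory_allocated() / 1e9:.2f} GB allocated")
    print(f"{torch.cuda.memory_reserved() / 1e9:.2f} GB reserved")
    with torch.no_grad():              # no autograd graph during validation
        x = torch.randn(8, 3, 224, 224, device="cuda")
        # ... run the model on x here ...
    torch.cuda.empty_cache()           # hand cached blocks back to the driver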
Q: SAP, Python and PySide6 - GUI freezes when i execute another class with a long long process this is the ui_main from my python script: import ui_nova from PySide6.QtCore import (QCoreApplication, Signal, QThread, QObject, QRunnable, Slot, QThreadPool) from PySide6 import QtCore from PySide6.QtGui import * from PySide6 import QtWidgets from PySide6.QtWidgets import (QApplication, QMainWindow, QWidget) from threading import Thread import conexoes import threading import sys import traceback class Sinais(QObject): finished = Signal() progress = Signal(int) class Executora(QThread): funcao = None def __init__(self): super(Executora, self).__init__() self.sinais = Sinais() self.funcaoaExecutar = None self.finished = self.sinais.finished self.progress = self.sinais.progress def nomeDaFuncao(self, funcao): self.funcaoaExecutar = funcao @Slot() def runner(self, funcao): processo = conexoes.Conexoes() aExecutar = funcao execute = eval(f'processo.{aExecutar}') try: execute() except: traceback.print_exc() exctype, value = sys.exc_info()[:2] self.signals.error.emit((exctype, value, traceback.format_exc())) else: print("rodou") finally: self.finished.emit() class UIMainConexao(QMainWindow, ui_nova.Ui_MainWindow): def __init__(self): super(UIMainConexao, self).__init__() self.setupUi(self) self.MainWW.setWindowFlags(ui_nova.QtCore.Qt.FramelessWindowHint) self.MainWW.setAttribute(ui_nova.Qt.WA_TranslucentBackground) self.buttonFechar.clicked.connect(self.close) self.buttonMinimizar.clicked.connect(self.showMinimized) self.threadpool = QThreadPool() self.offset = None print("Multithreading with maximum %d threads" % self.threadpool.maxThreadCount()) # install the event filter on the infoBar widget self.frameToolBar.installEventFilter(self) self.buttonHome.clicked.connect(lambda: self.stackedWidget.setCurrentWidget(self.pageInicial)) self.buttonUserConfig.clicked.connect(lambda: self.stackedWidget.setCurrentWidget(self.pageUser)) self.buttonEmpresas.clicked.connect(lambda: self.execute("boletoBancoX")) def eventFilter(self, source, event): if source == self.frameToolBar: if event.type() == ui_nova.QtCore.QEvent.MouseButtonPress: self.offset = event.pos() elif event.type() == ui_nova.QtCore.QEvent.MouseMove and self.offset is not None: # no need for complex computations: just use the offset to compute # "delta" position, and add that to the current one self.move(self.pos() - self.offset + event.pos()) # return True to tell Qt that the event has been accepted and # should not be processed any further return True elif event.type() == ui_nova.QtCore.QEvent.MouseButtonRelease: self.offset = None # let Qt process any other event return super().eventFilter(source, event) @Slot() def execute(self, funcao): aExecutar = funcao self.thread = QThread() self.worker = Executora() self.worker.moveToThread(self.thread) self.thread.started.connect(lambda: print("Iniciou")) self.thread.started.connect(lambda: self.worker.runner(aExecutar)) self.worker.finished.connect(lambda: print("É PRA FINALIZAR")) self.worker.finished.connect(self.thread.quit) self.worker.finished.connect(self.worker.deleteLater) self.thread.finished.connect(self.thread.deleteLater) self.thread.start() This Python project has a long structure with a lot of py files. The exe will contain about 70-100 pages with different process that will be executed ONE AT A TIME. 
In "Conexoes" has the conections to the all files and processes that will be executed, so i created a method to link every button(i will add all the buttons connections) to their respective method in conexoes giving just the name using the def execute. When i start the process, will work and the GUI freezes during the process, but if i use Daemon Threads, the script will run the first steps of the proper function (gui dont freezes), but will crash because him couldnt get the SAPEngineScript. I already tried read many sites how to use Threads in python and put in code, but all didnt work properly. I really dont know what i do. A: So, after much more search, i found the solution, which i think can be very usefully for everyone who use QT with SAP. Basicly, when you start a sap function using threading, you will receive an error about the Object SAPGUI, so the solution for this is just import pythoncom for your code and insert "pythoncom.CoInitialize()" before the line getObject("SAPGUI"). Now im using Daemon Thread to execute the function without freezes de GUI. Ex: import win32com.client import pythoncom def processoSAP(self): try: pythoncom.CoInitialize() self.SapGuiAuto = win32com.client.GetObject("SAPGUI") if not type(self.SapGuiAuto) == win32com.client.CDispatch: return self.application = self.SapGuiAuto.GetScriptingEngine except NameError: print(NameError) Where i found: https://django.fun/en/qa/44126/
SAP, Python and PySide6 - GUI freezes when i execute another class with a long long process
this is the ui_main from my python script: import ui_nova from PySide6.QtCore import (QCoreApplication, Signal, QThread, QObject, QRunnable, Slot, QThreadPool) from PySide6 import QtCore from PySide6.QtGui import * from PySide6 import QtWidgets from PySide6.QtWidgets import (QApplication, QMainWindow, QWidget) from threading import Thread import conexoes import threading import sys import traceback class Sinais(QObject): finished = Signal() progress = Signal(int) class Executora(QThread): funcao = None def __init__(self): super(Executora, self).__init__() self.sinais = Sinais() self.funcaoaExecutar = None self.finished = self.sinais.finished self.progress = self.sinais.progress def nomeDaFuncao(self, funcao): self.funcaoaExecutar = funcao @Slot() def runner(self, funcao): processo = conexoes.Conexoes() aExecutar = funcao execute = eval(f'processo.{aExecutar}') try: execute() except: traceback.print_exc() exctype, value = sys.exc_info()[:2] self.signals.error.emit((exctype, value, traceback.format_exc())) else: print("rodou") finally: self.finished.emit() class UIMainConexao(QMainWindow, ui_nova.Ui_MainWindow): def __init__(self): super(UIMainConexao, self).__init__() self.setupUi(self) self.MainWW.setWindowFlags(ui_nova.QtCore.Qt.FramelessWindowHint) self.MainWW.setAttribute(ui_nova.Qt.WA_TranslucentBackground) self.buttonFechar.clicked.connect(self.close) self.buttonMinimizar.clicked.connect(self.showMinimized) self.threadpool = QThreadPool() self.offset = None print("Multithreading with maximum %d threads" % self.threadpool.maxThreadCount()) # install the event filter on the infoBar widget self.frameToolBar.installEventFilter(self) self.buttonHome.clicked.connect(lambda: self.stackedWidget.setCurrentWidget(self.pageInicial)) self.buttonUserConfig.clicked.connect(lambda: self.stackedWidget.setCurrentWidget(self.pageUser)) self.buttonEmpresas.clicked.connect(lambda: self.execute("boletoBancoX")) def eventFilter(self, source, event): if source == self.frameToolBar: if event.type() == ui_nova.QtCore.QEvent.MouseButtonPress: self.offset = event.pos() elif event.type() == ui_nova.QtCore.QEvent.MouseMove and self.offset is not None: # no need for complex computations: just use the offset to compute # "delta" position, and add that to the current one self.move(self.pos() - self.offset + event.pos()) # return True to tell Qt that the event has been accepted and # should not be processed any further return True elif event.type() == ui_nova.QtCore.QEvent.MouseButtonRelease: self.offset = None # let Qt process any other event return super().eventFilter(source, event) @Slot() def execute(self, funcao): aExecutar = funcao self.thread = QThread() self.worker = Executora() self.worker.moveToThread(self.thread) self.thread.started.connect(lambda: print("Iniciou")) self.thread.started.connect(lambda: self.worker.runner(aExecutar)) self.worker.finished.connect(lambda: print("É PRA FINALIZAR")) self.worker.finished.connect(self.thread.quit) self.worker.finished.connect(self.worker.deleteLater) self.thread.finished.connect(self.thread.deleteLater) self.thread.start() This Python project has a long structure with a lot of py files. The exe will contain about 70-100 pages with different process that will be executed ONE AT A TIME. In "Conexoes" has the conections to the all files and processes that will be executed, so i created a method to link every button(i will add all the buttons connections) to their respective method in conexoes giving just the name using the def execute. 
When I start the process it works, but the GUI freezes while the process runs; if I use daemon threads instead, the script runs the first steps of the function (the GUI doesn't freeze), but it crashes because it can't get the SAP scripting engine. I have already read many sites about how to use threads in Python and tried them in my code, but none worked properly. I really don't know what to do.
[ "So, after much more search, i found the solution, which i think can be very usefully for everyone who use QT with SAP.\nBasicly, when you start a sap function using threading, you will receive an error about the Object SAPGUI, so the solution for this is just import pythoncom for your code and insert \"pythoncom.CoInitialize()\" before the line getObject(\"SAPGUI\").\nNow im using Daemon Thread to execute the function without freezes de GUI.\nEx:\nimport win32com.client\nimport pythoncom\n\n\ndef processoSAP(self):\n try:\n pythoncom.CoInitialize()\n self.SapGuiAuto = win32com.client.GetObject(\"SAPGUI\")\n if not type(self.SapGuiAuto) == win32com.client.CDispatch:\n return\n self.application = self.SapGuiAuto.GetScriptingEngine\n except NameError:\n print(NameError)\n\nWhere i found: https://django.fun/en/qa/44126/\n" ]
[ 0 ]
[]
[]
[ "multithreading", "pyside", "python", "python_multithreading", "sap_gui" ]
stackoverflow_0074506815_multithreading_pyside_python_python_multithreading_sap_gui.txt
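A slightly fuller sketch of the accepted fix, adding the matching CoUninitialize in a finally block so each worker thread cleans up its own COM state (Windows-only; assumes pywin32 and a running SAP GUI session):
import threading
import pythoncom
import win32com.client

def sap_worker():
    pythoncom.CoInitialize()           # COM must be initialised per thread
    try:
        sap_gui = win32com.client.GetObject("SAPGUI")
        engine = sap_gui.GetScriptingEngine
        # ... drive SAP scripting here ...
    finally:
        pythoncom.CoUninitialize()     # undo the per-thread initialisation

threading.Thread(target=sap_worker, daemon=True).start()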
Q: Traversing through list in list python I have to see if "M" is in the list and, if not, append it to the list. The list is: list1 = [["A", "B", "C", "D"], ["E", "F", "G", "H"], ["I", "J", "K", "L"]] I have tried: def check_if_in_list(t): for items in list1: if t in list1: Print("True") else: Print("False") list1.append(t) check_if_in_list("M") It is not indexing properly through the list A: list1 = [["A", "B", "C", "D"], ["E", "F", "G", "H"], ["I", "J", "K", "L"]] def check_if_in_list(t): for items in list1: if t in items: print("True") else: print("False") items.append(t) check_if_in_list("M") False False False list1 [['A', 'B', 'C', 'D', 'M'], ['E', 'F', 'G', 'H', 'M'], ['I', 'J', 'K', 'L', 'M']]
Traversing through list in list python
I have to see if "M" is in the list and, if not, append it to the list. The list is: list1 = [["A", "B", "C", "D"], ["E", "F", "G", "H"], ["I", "J", "K", "L"]] I have tried: def check_if_in_list(t): for items in list1: if t in list1: Print("True") else: Print("False") list1.append(t) check_if_in_list("M") It is not indexing properly through the list
[ "list1 = [[\"A\", \"B\", \"C\", \"D\"], [\"E\", \"F\", \"G\", \"H\"], [\"I\", \"J\", \"K\", \"L\"]]\ndef check_if_in_list(t):\n for items in list1:\n if t in items:\n print(\"True\")\n else:\n print(\"False\")\n items.append(t)\n\ncheck_if_in_list(\"M\")\nFalse\nFalse\nFalse\nlist1\n[['A', 'B', 'C', 'D', 'M'],\n ['E', 'F', 'G', 'H', 'M'],\n ['I', 'J', 'K', 'L', 'M']]\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074523677_python.txt
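Note that the accepted answer appends "M" to every sublist; if the intent was one membership test over all sublists followed by a single append, any() with a generator is a compact alternative (appending to the last sublist is an arbitrary choice here):
list1 = [["A", "B", "C", "D"], ["E", "F", "G", "H"], ["I", "J", "K", "L"]]

def check_if_in_list(t):
    found = any(t in sub for sub in list1)  # flat membership test
    print(found)
    if not found:
        list1[-1].append(t)                 # append once, not per sublist

check_if_in_list("M")  # prints False, then "M" lands in the last sublist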
Q: How to get foreign key attribute (and many to many attribute) of a model instance in Django in asynchronous queries? In asynchronous queries, I want to get foreign key and many to many attributes of a model instance. In a simple example, I want to print university and courses for all instances of the model Student. models.py: from django.db import models class University(models.Model): name = models.CharField(max_length=64) class Course(models.Model): name = models.CharField(max_length=64) class Student(models.Model): name = models.CharField(max_length=64) university = models.ForeignKey(to=University, on_delete=models.CASCADE) courses = models.ManyToManyField(to=Course) when I use this code (in django 4.1): import asyncio async def main(): async for student in Student.objects.all(): print(student.name) print(student.university.name) for course in student.courses.all(): print(course.name) asyncio.run(main()) I get the following error: django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. How can I fix this error? A: This is the method I used to get foreign key and many to many attributes (for django 4.1 or higher). async def main(): async for student in Student.objects.all(): print(student.name) university = await University.objects.aget(id=student.university_id) print(university.name) async for course in student.courses.all(): print(course.name) asyncio.run(main())
How to get foreign key attribute (and many to many attribute) of a model instance in Django in asynchronous queries?
In asynchronous queries, I want to get foreign key and many to many attributes of a model instance. In a simple example, I want to print university and courses for all instances of the model Student. models.py: from django.db import models class University(models.Model): name = models.CharField(max_length=64) class Course(models.Model): name = models.CharField(max_length=64) class Student(models.Model): name = models.CharField(max_length=64) university = models.ForeignKey(to=University, on_delete=models.CASCADE) courses = models.ManyToManyField(to=Course) when I use this code (in django 4.1): import asyncio async def main(): async for student in Student.objects.all(): print(student.name) print(student.university.name) for course in student.courses.all(): print(course.name) asyncio.run(main()) I get the following error: django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. How can I fix this error?
[ "This is the method I used to get foreign key and many to many attributes (for django 4.1 or higher).\nasync def main():\n async for student in Student.objects.all():\n\n print(student.name)\n\n university = await University.objects.aget(id=student.university_id)\n print(university.name)\n\n async for course in student.courses.all():\n print(course.name)\n\n\nasyncio.run(main())\n\n" ]
[ 0 ]
[]
[]
[ "async_await", "asynchronous", "django", "python" ]
stackoverflow_0074467521_async_await_asynchronous_django_python.txt
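An alternative to the per-student aget in the answer above: select_related and prefetch_related fetch the relations up front, so the attribute access inside the async loop never hits the database. A sketch assuming Django 4.1 or later and the models from the question:
import asyncio

async def main():
    qs = Student.objects.select_related('university').prefetch_related('courses')
    async for student in qs:
        print(student.name)
        print(student.university.name)        # joined in the initial query
        for course in student.courses.all():  # served from the prefetch cache
            print(course.name)

asyncio.run(main())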
Q: How to generate points within rectangle, at random locations and without overlap? I have an image with width: 1980 and height: 1080. Ultimately, I want to place various shapes within the image, but at random locations and in such a way that they do not overlap. The 0,0 coordinates of the image are in the center. Before rendering the shapes into the image (I don't need help with this), I need to write an algorithm to generate the XY points/locations. I want to be able to specify the minimum distance any given point is allowed to get to any other points. How can do this? All I have been able to do, is to generate points at equally spaced locations and then add a bit of randomness to each point. But this is not ideal, because it means points just vary within some 'cell' within a grid, and if the randomness value is too high, they will appear outside of the rectangle. Here is my code: import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Rectangle from random import randrange def is_square(integer): root = np.sqrt(integer) return integer == int(root + 0.5) ** 2 def perfect_sqr(n): nextN = np.floor(np.sqrt(n)) + 1 return int(nextN * nextN) def generate_cells(width = 1920, height = 1080, n = 9, show_plot=False): # If the number is not a perfect square, we need to find the next number which is # so that we can get the root N, which will be used to determine the number of rows/columns if not is_square(n): n = perfect_sqr(n) N = np.sqrt(n) # generate x and y lists, where each represents an array of points evenly spaced between 0 and the width/height x = np.array(list(range(0, width, int(width/N)))) y = np.array(list(range(0, height, int(height/N)))) # center the points within each 'cell' x_centered = x+int(width/N)/2 y_centered = y+int(height/N)/2 x_centered = [a+randrange(50) for a in x_centered] y_centered = [a+randrange(50) for a in y_centered] # generate a grid with the points xv, yv = np.meshgrid(x_centered, y_centered) if(show_plot): plt.scatter(xv,yv) plt.gca().add_patch(Rectangle((0,0),width, height,edgecolor='red', facecolor='none', lw=1)) plt.show() # convert the arrays to 1D xx = xv.flatten() yy = yv.flatten() # Merge them side-by-side zips = zip(xx, yy) # convert to set of points/tuples and return return set(zips) coords = generate_cells(width=1920, height=1080, n=15, show_plot=True) print(coords) A: Assuming you simply want to randomly define non-overlapping coordinates within the confines of a maximum image size subject to not having images overlap, this might be a good solution. 
import numpy as np def locateImages(field_height: int, field_width: int, min_sep: int, points: int)-> np.array: h_range = np.array(range(min_sep//2, field_height - (min_sep//2), min_sep)) w_range = np.array(range(min_sep//2, field_width-(min_sep//2), min_sep)) mx_len = max(len(h_range), len(w_range)) if len(h_range) < mx_len: xtra = np.random.choice(h_range, mx_len - len(h_range)) h_range = np.append(h_range, xtra) if len(w_range) < mx_len: xtra = np.random.choice(w_range, mx_len - len(w_range)) w_range = np.append(w_range, xtra) h_points = np.random.choice(h_range, points, replace=False) w_points = np.random.choice(w_range, points, replace=False) return np.concatenate((np.vstack(h_points), np.vstack(w_points)), axis= 1) Then given: field_height = the vertical coordinate of the Image space field_width = the maximum horizontal coordinate of the Image space min_sep = the minimum spacing between images points = number of coordinates to be selected Then: locateImages(15, 8, 2, 5) will yield: array([[13, 1], [ 7, 3], [ 1, 5], [ 5, 5], [11, 5]]) Render the output: import matplotlib.pyplot as plt from matplotlib.patches import Rectangle points = locateImages(1080, 1920, 100, 15) y, x = zip(*points) plt.scatter(x, y) plt.gca().add_patch(Rectangle((0,0),1920, 1080,edgecolor='red', facecolor='none', lw=1)) plt.show()
How to generate points within rectangle, at random locations and without overlap?
I have an image with width: 1980 and height: 1080. Ultimately, I want to place various shapes within the image, but at random locations and in such a way that they do not overlap. The 0,0 coordinates of the image are in the center. Before rendering the shapes into the image (I don't need help with this), I need to write an algorithm to generate the XY points/locations. I want to be able to specify the minimum distance any given point is allowed to get to any other points. How can do this? All I have been able to do, is to generate points at equally spaced locations and then add a bit of randomness to each point. But this is not ideal, because it means points just vary within some 'cell' within a grid, and if the randomness value is too high, they will appear outside of the rectangle. Here is my code: import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Rectangle from random import randrange def is_square(integer): root = np.sqrt(integer) return integer == int(root + 0.5) ** 2 def perfect_sqr(n): nextN = np.floor(np.sqrt(n)) + 1 return int(nextN * nextN) def generate_cells(width = 1920, height = 1080, n = 9, show_plot=False): # If the number is not a perfect square, we need to find the next number which is # so that we can get the root N, which will be used to determine the number of rows/columns if not is_square(n): n = perfect_sqr(n) N = np.sqrt(n) # generate x and y lists, where each represents an array of points evenly spaced between 0 and the width/height x = np.array(list(range(0, width, int(width/N)))) y = np.array(list(range(0, height, int(height/N)))) # center the points within each 'cell' x_centered = x+int(width/N)/2 y_centered = y+int(height/N)/2 x_centered = [a+randrange(50) for a in x_centered] y_centered = [a+randrange(50) for a in y_centered] # generate a grid with the points xv, yv = np.meshgrid(x_centered, y_centered) if(show_plot): plt.scatter(xv,yv) plt.gca().add_patch(Rectangle((0,0),width, height,edgecolor='red', facecolor='none', lw=1)) plt.show() # convert the arrays to 1D xx = xv.flatten() yy = yv.flatten() # Merge them side-by-side zips = zip(xx, yy) # convert to set of points/tuples and return return set(zips) coords = generate_cells(width=1920, height=1080, n=15, show_plot=True) print(coords)
[ "Assuming you simply want to randomly define non-overlapping coordinates within the confines of a maximum image size subject to not having images overlap, this might be a good solution.\nimport numpy as np \ndef locateImages(field_height: int, field_width: int, min_sep: int, points: int)-> np.array:\n h_range = np.array(range(min_sep//2, field_height - (min_sep//2), min_sep))\n w_range = np.array(range(min_sep//2, field_width-(min_sep//2), min_sep))\n mx_len = max(len(h_range), len(w_range))\n if len(h_range) < mx_len:\n xtra = np.random.choice(h_range, mx_len - len(h_range))\n h_range = np.append(h_range, xtra)\n if len(w_range) < mx_len:\n xtra = np.random.choice(w_range, mx_len - len(w_range))\n w_range = np.append(w_range, xtra)\n h_points = np.random.choice(h_range, points, replace=False)\n w_points = np.random.choice(w_range, points, replace=False)\n return np.concatenate((np.vstack(h_points), np.vstack(w_points)), axis= 1) \n\nThen given:\nfield_height = the vertical coordinate of the Image space\nfield_width = the maximum horizontal coordinate of the Image space\nmin_sep = the minimum spacing between images\npoints = number of coordinates to be selected\nThen:\nlocateImages(15, 8, 2, 5) will yield:\narray([[13, 1],\n [ 7, 3],\n [ 1, 5],\n [ 5, 5],\n [11, 5]])\n\nRender the output:\npoints = locateImages(1080, 1920, 100, 15)\nx,y= zip(*points)\nplt.scatter(x,x)\nplt.gca().add_patch(Rectangle((0,0),1920, 1080,edgecolor='red', facecolor='none', lw=1))\nplt.show()\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074514089_python.txt
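For the minimum-distance requirement the question actually asks about (which the grid-based answer above only approximates), plain rejection sampling is the simplest sketch; it checks each candidate against every accepted point, so for large n a Poisson-disc sampler such as Bridson's algorithm scales better:
import math
import random

def sample_points(width, height, n, min_dist, max_tries=100000):
    points = []
    for _ in range(max_tries):
        if len(points) == n:
            break
        x = random.uniform(-width / 2, width / 2)    # (0, 0) at the image centre
        y = random.uniform(-height / 2, height / 2)
        if all(math.hypot(x - px, y - py) >= min_dist for px, py in points):
            points.append((x, y))
    return points

print(sample_points(1920, 1080, 15, 100))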
Q: how to make program stop with a hotkey (outside the console) I made an autoclicker and i can stop it by pressing b but only at the right timing. I didn't find anything that would allow me to stop the program by pressing a button at any time without accessing the console Here's the program: from time import sleep import keyboard import mouse state=True while state: if keyboard.is_pressed("b"): state=False else: mouse.click() sleep(1) A: I already answered at Using a key listener to stop a loop You can simply use the add_hotkey method. Example: import keyboard state = True def stop(): global state # without this, the assignment would only create a local variable state = False # The function you want to execute to stop the loop keyboard.add_hotkey("b", stop) # add the hotkey
how to make program stop with a hotkey (outside the console)
I made an autoclicker and i can stop it by pressing b but only at the right timing. I didn't find anything that would allow me to stop the program by pressing a button at any time without accessing the console Here's the program: from time import sleep import keyboard import mouse state=True while state: if keyboard.is_pressed("b"): state=False else: mouse.click() sleep(1)
[ "I already answered at Using a key listener to stop a loop\nYou can simply use the add_hotkey method.\nExample:\nimport keyboard\n\nstate = True\n\ndef stop():\n state = False # The function you want to execute to stop the loop\n\nkeyboard.add_hotkey(\"b\", stop) # add the hotkey\n\n" ]
[ 0 ]
[]
[]
[ "keyboard", "mouse", "python", "python_3.x" ]
stackoverflow_0074523208_keyboard_mouse_python_python_3.x.txt
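Putting the corrected callback together with the original click loop: a threading.Event avoids the global entirely and is the usual idiom for a flag set from another thread (add_hotkey fires its callback from keyboard's listener thread):
from time import sleep
from threading import Event

import keyboard
import mouse

stop_flag = Event()
keyboard.add_hotkey("b", stop_flag.set)  # set the flag from the hotkey

while not stop_flag.is_set():
    mouse.click()
    sleep(1)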
Q: Python regex: looking for a regex match close to a starting point I am wondering if it is possible to look for a regex match close to a starting point. The distance between the starting point and the match is an initial parameter. Imagine this scenario. I have an input text, a starting point and a regex like these: str_text = f" bla bla bla bla 12 bla blablabla@bla.com bla bla bla " str_starting_point = "12" str_regex = "[a-z0-9]\\S{0,64}[a-z0-9]@[a-z0-9\\-\\.]{0,252}[a-z0-9]\\.[a-z]{2,10}|[a-z0-9]@[a-z0-9\\-\\.]{0,252}[a-z0-9]\\.[a-z]{2,10}" re.findall(str_regex, str_text) ['blablabla@bla.com'] Now I'm trying to search for a match of the regex close to the starting point. I'm trying using the regex above, but it doesn't work: inf_lim = 0 sup_lim = 2 str_regex_composed = f" {str_starting_point} " + r"(\w+\s){" + f"{inf_lim},{sup_lim}" + "}" + f"{str_regex} " re.findall(str_regex_composed, str_text) Desired output: "blablabla@bla.com" or "" Do you have solutions or advice? Thanks A: One way to do this would be to use the finditer method and manually calculate which matches are closest, using the API for the match objects. For your problem specifically, it seems like start would be what you want.
Python regex: looking for a regex match close to a starting point
I am wondering if it is possible to look for a regex match close to a starting point. The distance between the starting point and the match is an initial parameter. Imagine this scenario. I have an input text, a starting point and a regex like these: str_text = f" bla bla bla bla 12 bla blablabla@bla.com bla bla bla " str_starting_point = "12" str_regex = "[a-z0-9]\\S{0,64}[a-z0-9]@[a-z0-9\\-\\.]{0,252}[a-z0-9]\\.[a-z]{2,10}|[a-z0-9]@[a-z0-9\\-\\.]{0,252}[a-z0-9]\\.[a-z]{2,10}" re.findall(str_regex, str_text) ['blablabla@bla.com'] Now I'm trying to search for a match of the regex close to the starting point. I'm trying to use the regex above, but it doesn't work: inf_lim = 0 sup_lim = 2 str_regex_composed = f" {str_starting_point} " + r"(\w+\s){" + f"{inf_lim},{sup_lim}" + "}" + f"{str_regex} " re.findall(str_regex_composed, str_text) Desired output: "blablabla@bla.com" or "" Do you have solutions or advice? Thanks
[ "One way to do this would be to use the finditer method and manually calculate which matches are closest using the api for the match objects, specifically for your problem, it seems like start would be what you want.\n" ]
[ 0 ]
[]
[]
[ "python", "regex", "string" ]
stackoverflow_0074521925_python_regex_string.txt
Q: To check whether a number is a multiple of a second number I want to check whether a number is a multiple of a second one. What's wrong with the following code? def is_multiple(x,y): if x!=0 & (y%x)==0 : print("true") else: print("false") end print("A program in python") x=input("enter a number :") y=input("enter its multiple :") is_multiple(x,y) error: TypeError: not all arguments converted during string formatting A: You are using the binary AND operator &; you want the boolean AND operator here, and: x and (y % x) == 0 Next, you want to get your inputs converted to integers: x = int(input("enter a number :")) y = int(input("enter its multiple :")) You'll get a NameError for that end expression on a line; drop that altogether, Python doesn't need those. You can test for just x; in a boolean context such as an if statement, a number is considered to be false if 0: if x and y % x == 0: Your function is_multiple() should probably just return a boolean; leave printing to the part of the program doing all the other input/output: def is_multiple(x, y): return x and (y % x) == 0 print("A program in python") x = int(input("enter a number :")) y = int(input("enter its multiple :")) if is_multiple(x, y): print("true") else: print("false") That last part could be simplified by using a conditional expression: print("A program in python") x = int(input("enter a number :")) y = int(input("enter its multiple :")) print("true" if is_multiple(x, y) else "false") A: Some things to mention: Conditions with and, not & (binary operator) Convert input to numbers (for example using int()) - you might also want to catch if something other than a number is entered This should work: def is_multiple(x,y): if x != 0 and y%x == 0: print("true") else: print("false") print("A program in python") x = int(input("enter a number :")) y = int(input("enter its multiple :")) is_multiple(x, y) A: Use the and operator instead of the bitwise & operator. You need to convert values to integers using int() def is_multiple(x,y): if x!=0 and (y%x)==0 : print("true") else: print("false") print("A program in python") x = int(input("enter a number :")) y = int(input("enter its multiple :")) is_multiple(x,y) A: I tried this and it worked also for when x and/or y are equal to 0. Idk if there's a shorter way of writing it. Tested with (4,12), (12, 4), (2,0), (0,2), (0, 0) (result should be: False True False True True). def exo1(x,y): #x = int(input("input number x: ")) #y = int(input("input number y: ")) if x==0 and y==0: return True if x>0 and y==0: return False if y>0 and x==0: return True if x!=0 and y!=0 and (x%y)==0: return True else: return False print(exo1(4,12)) print(exo1(12,4)) print(exo1(2,0)) print(exo1(0,2)) print(exo1(0,0))
To check whether a number is a multiple of a second number
I want to check whether a number is a multiple of a second one. What's wrong with the following code? def is_multiple(x,y): if x!=0 & (y%x)==0 : print("true") else: print("false") end print("A program in python") x=input("enter a number :") y=input("enter its multiple :") is_multiple(x,y) error: TypeError: not all arguments converted during string formatting
[ "You are using the binary AND operator &; you want the boolean AND operator here, and:\nx and (y % x) == 0\n\nNext, you want to get your inputs converted to integers:\nx = int(input(\"enter a number :\"))\ny = int(input(\"enter its multiple :\"))\n\nYou'll get a NameError for that end expression on a line, drop that altogether, Python doesn't need those.\nYou can test for just x; in a boolean context such as an if statement, a number is considered to be false if 0:\nif x and y % x == 0:\n\nYour function is_multiple() should probably just return a boolean; leave printing to the part of the program doing all the other input/output:\ndef is_multiple(x, y):\n return x and (y % x) == 0\n\nprint(\"A program in python\")\nx = int(input(\"enter a number :\"))\ny = int(input(\"enter its multiple :\"))\nif is_multiple(x, y):\n print(\"true\")\nelse:\n print(\"false\")\n\nThat last part could simplified if using a conditional expression:\nprint(\"A program in python\")\nx = int(input(\"enter a number :\"))\ny = int(input(\"enter its multiple :\"))\nprint(\"true\" if is_multiple(x, y) else \"false\")\n\n", "Some things to mention:\n\nConditions with and, not & (binary operator)\nConvert input to numbers (for example using int()) - you might also want to catch if something other than a number is entered\n\nThis should work:\ndef is_multiple(x,y):\n if x != 0 and y%x == 0:\n print(\"true\")\n else:\n print(\"false\")\n\nprint(\"A program in python\")\nx = int(input(\"enter a number :\"))\ny = int(input(\"enter its multiple :\"))\nis_multiple(x, y)\n\n", "Use and operator instead of bitwise & operator.\nYou need to conver values to integers using int()\ndef is_multiple(x,y):\n if x!=0 and (y%x)==0 :\n print(\"true\")\n else:\n print(\"false\")\n\nprint(\"A program in python\")\nx = int(input(\"enter a number :\"))\ny = int(input(\"enter its multiple :\"))\nis_multiple(x,y)\n\n", "I tried this and worked also for when x and/or y are equal to 0. Idk if there's a shorter way of writing it.\nTested with (4,12), (12, 4), (2,0), (0,2), (0, 0)\n(result should be : False True False True True).\ndef exo1(x,y):\n #x = int(input(\"input number x: \"))\n #y = int(input(\"input number y: \"))\n if x==0 and y==0:\n return True \n if x>0 and y==0:\n return False \n if y>0 and x==0:\n return True \n if x!=0 and y!=0 and (x%y)==0: \n return True\n else:\n return False\nprint(exo1())\nprint(exo1(4,12))\nprint(exo1(12,4))\nprint(exo1(2,0))\nprint(exo1(0,2))\nprint(exo1(0,0))\n\n" ]
[ 11, 4, 0, 0 ]
[]
[]
[ "numbers", "python" ]
stackoverflow_0031449216_numbers_python.txt
Q: Use of secondary indexes in a redis database in comparison with SQL statements I'm working with a redis database. I have already implemented Python code to access the redis server. The problem is that the code implemented is very complex and it is not easily maintainable. Secondary indexes in Redis database To simplify the question, suppose that my database contains a set of 4 keys inserted by the following commands: hset key:1 id 1 field1 1001 hset key:2 id 2 field1 999 hset key:3 id 3 field1 1002 hset key:4 id 4 field1 1000 The previous set of keys is ordered by the field id. I have used the Secondary indexing guide of the Redis documentation to implement a secondary index to get the keys list ordered by field1. To do this, according to the guide, I have created a sorted set called zfield1 inside the database by the following commands: zadd zfield1 1001 1 zadd zfield1 999 2 zadd zfield1 1002 3 zadd zfield1 1000 4 The sorted set zfield1 is ordered by the field field1. With the command zrange I get the list of id fields ordered by field1: zrange zfield1 0 -1 1) "2" 2) "4" 3) "1" 4) "3" The first element of the list obtained by zrange is "2", and this element provides the information to get all the values of the key with the lowest field1 value. So by the following command I can get all key values relative to key:2: hgetall key:2 1) "id" 2) "2" 3) "field1" 4) "999" With a suitable loop that executes the command hgetall, I can get all the key values ordered by field1. Compare with a SQL database I think that the previous presentation is the implementation of the following SQL query (where TABLE1 is a table in a generic SQL database): SELECT * from TABLE1 order by field1 This is the first time that I have used Redis, and if I compare it to the SQL query I think its usage is more complex than in a SQL database. So I suspect that there are simpler ways to implement a SQL query such as SELECT * from TABLE1 order by field1 with Redis. Question Could someone tell me if there are other Redis commands (for example a particular use of the Redis command KEYS) that help to get keys ordered by a secondary index? Note: Useful links on this topic are also welcome. A: This is the first time that I have used Redis, and if I compare it to the SQL query I think its usage is more complex than in a SQL database Indeed: Redis' main goal is performance and its data structures and commands are designed with that in mind. There are no native secondary indexes in Redis, as keeping one would have a non-negligible cost: in fact, the guide you referenced shows a pattern you can use to mimic one, as the data type used there is just a sorted set - which is a first-class citizen in the Redis ecosystem. A relational database is a completely different beast. If you wish to use multiple indexes in Redis then I would suggest creating and maintaining multiple keys (using the aforementioned sorted set data type would be okay) while you create/modify/delete your main keys.
Use of secondary indexes in a redis database in comparison with SQL statements
I'm working with a redis database. I have already implemented Python code to access the redis server. The problem is that the code implemented is very complex and it is not easily maintainable. Secondary indexes in Redis database To simplify the question, suppose that my database contains a set of 4 keys inserted by the following commands: hset key:1 id 1 field1 1001 hset key:2 id 2 field1 999 hset key:3 id 3 field1 1002 hset key:4 id 4 field1 1000 The previous set of keys is ordered by the field id. I have used the Secondary indexing guide of the Redis documentation to implement a secondary index to get the keys list ordered by field1. To do this, according to the guide, I have created a sorted set called zfield1 inside the database by the following commands: zadd zfield1 1001 1 zadd zfield1 999 2 zadd zfield1 1002 3 zadd zfield1 1000 4 The sorted set zfield1 is ordered by the field field1. With the command zrange I get the list of id fields ordered by field1: zrange zfield1 0 -1 1) "2" 2) "4" 3) "1" 4) "3" The first element of the list obtained by zrange is "2", and this element provides the information to get all the values of the key with the lowest field1 value. So by the following command I can get all key values relative to key:2: hgetall key:2 1) "id" 2) "2" 3) "field1" 4) "999" With a suitable loop that executes the command hgetall, I can get all the key values ordered by field1. Compare with a SQL database I think that the previous presentation is the implementation of the following SQL query (where TABLE1 is a table in a generic SQL database): SELECT * from TABLE1 order by field1 This is the first time that I have used Redis, and if I compare it to the SQL query I think its usage is more complex than in a SQL database. So I suspect that there are simpler ways to implement a SQL query such as SELECT * from TABLE1 order by field1 with Redis. Question Could someone tell me if there are other Redis commands (for example a particular use of the Redis command KEYS) that help to get keys ordered by a secondary index? Note: Useful links on this topic are also welcome.
[ "\nThis is the first time that I use Redis and if I compare it to the SQL query I think that its usage it is more complex respect of a SQL database\n\nIndeed: Redis' main goal is performance and its data structures and commands are designed with that in mind. There are no native secondary indexes in Redis, as keeping one would have a non-negligible cost: in fact, the guide you referenced shows a pattern you can use to mimic one, as the data type used there is just a sorted set - which is a first citizen in the Redis ecosystem. A relational database is a completely different beast.\nIf you wish to use multiple indexes in Redis then I would suggest creating and maintaining multiple keys (using the aforementioned sorted set data type would be okay) while you create/modify/delete your main keys.\n" ]
[ 1 ]
[]
[]
[ "database", "python", "redis", "sql" ]
stackoverflow_0074520451_database_python_redis_sql.txt
Q: python opencv videoWriter fps rounding I am trying to measure some event in an input video file: "test.mp4". This is done by processing the video in several steps, where each step performs some operations on the video data and writes the intermediate results to a new video file. The fps of the input video is: 29.42346629489295 fps Below I have written a script to test the problem. When I write a new file using this script the fps gets rounded in the outputfile to 29.0 fps, and this is the problem. import cv2 import sys inputfilepath = "test.mp4" outputfilepath = "testFps.mp4" video = cv2.VideoCapture(inputfilepath) fps = video.get(cv2.CAP_PROP_FPS) framecount = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) print("original fps: ", fps) # fps = 29.4 print("writing fps: ", fps) writer = cv2.VideoWriter(outputfilepath, 0x00000020, fps, (width, height)) for i in range(framecount): success, image = video.read() if not success: break writer.write(image) writer.release() video.release() video = cv2.VideoCapture(outputfilepath) fps = video.get(cv2.CAP_PROP_FPS) # the next line will not print the same fps as I passed to cv2.VideoWriter(...) above. print("output fps: ", fps) video.release() I have tried to hardcode different values for fps. It seems like everything below 29.5 fps is rounded to zero decimals (29 fps), and everything above gets rounded to one decimal (29.x fps) So my questions are: Is it possible to get any fps with the mp4 format? Which fps is actually used in the output file? How can I get the correct fps in the output file? Additional info I tried many different values ranging from 28 fps to 31 fps, and plotted the actual output file framerate vs the expected. This displays some kind of fractal behavior; maybe this hint will inspire some math wizard in here :) A: OpenCV uses some toolkit to do the writing. In my case, on iOS, OpenCV uses the native AVFoundation library. It seems AVFoundation (or the OpenCV api) can't handle well an fps value with many significant digits, like 29.7787878779, and something was being rounded incorrectly either in OpenCV's api or AVFoundation. To fix the issue, I rounded off some of the significant digits before calling VideoWriter::open normalizedFPS = round(1000.0 * normalizedFPS) / 1000.0; Hope it works for you also! I've seen 30,000 used as a timescale recommendation, so perhaps test out 1000.0 vs 30,000.0 A: Unfortunately there is a bug inside OpenCV. I read and write a 46-minute movie and 2 seconds are missing (same number of frames but a different FPS written in the header of the file). For me this is a big problem, as I tried to join the audio information in another editor and you can see the 2 missing seconds. The original movie is fps = (60/1.001) = 59.94005994... and OpenCv is rounding this fps to 60, no matter that I wrote 59.94005994 in 2 places videoWriter = new VideoWriter( full_file_name, fourCC, fps, scaled_size, true); videoWriter.set(CAP_PROP_FPS, fps); BAD NEWS - I found this code inside the opencv source code outfps = cvRound(fps); bool AVIWriteContainer::initContainer(const String& filename, double fps, Size size, bool iscolor){ outfps = cvRound(fps); width = size.width; height = size.height; channels = iscolor ? 3 : 1; moviPointer = 0; bool result = strm->open(filename); return result; } We have to escalate this bug toward the OpenCv team (I use opencv-460.jar). I will try to manually change the header using other programs - this could save the day. (An attached image showed the math computation that explains the missing seconds.)
python opencv videoWriter fps rounding
I am trying to measure some event in an input video file: "test.mp4". This is done by processing the video in several steps, where each step performs some operations on the video data and writes the intermediate results to a new video file. The fps of the input video is: 29.42346629489295 fps Below I have written a script to test the problem. When I write a new file using this script the fps gets rounded in the outputfile to 29.0 fps, and this is the problem. import cv2 import sys inputfilepath = "test.mp4" outputfilepath = "testFps.mp4" video = cv2.VideoCapture(inputfilepath) fps = video.get(cv2.CAP_PROP_FPS) framecount = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) print("original fps: ", fps) # fps = 29.4 print("writing fps: ", fps) writer = cv2.VideoWriter(outputfilepath, 0x00000020, fps, (width, height)) for i in range(framecount): success, image = video.read() if not success: break writer.write(image) writer.release() video.release() video = cv2.VideoCapture(outputfilepath) fps = video.get(cv2.CAP_PROP_FPS) # the next line will not print the same fps as I passed to cv2.VideoWriter(...) above. print("output fps: ", fps) video.release() I have tried to hardcode different values for fps. It seems like everything below 29.5 fps is rounded to zero decimals (29 fps), and everything above gets round to one decimal (29.x fps) So my questions are: Is it possible to get any fps with the mp4 format? Which fps is actually used in the output file? How can I get the correct fps in the output file? Additional info I tried many different values ranging from 28 fps to 31 fps, and plotted the actual output file framerate vs the expected. This displays some kind of fractral behavior, maybe this hint will inspire some math wizard in here :)
[ "OpenCV uses some toolkit to do the writing. In my case, on iOS, OpenCV uses the native AVFoundation library. It seems AVFoundation (or the OpenCV api) can't handle well an fps value with many significant digits, like 29.7787878779, and something was being rounded incorrectly either in OpenCV's api or AVFoundation.\nTo fix the issue, I rounded off some of the significant digits before calling VideoWriter::open\nnormalizedFPS = round(1000.0 * normalizedFPS) / 1000.0;\n\nHope it works for you also!\nI've seen 30,000 used as a timescale recommendation, so perhaps test out 1000.0 vs 30,000.0\n", "Unfortunately there is a bug inside OpenCV.\nI read and write an 46 minutes movie and 2 seconds are missing (same number of frames but a diferent FPS written in the header of the file)\nFor me is a big problem as I tried to join the audio information in another editor and you can see the 2 missing seconds\nThe original movie is fps = (60/1.001) = 59.94005994... and OpenCv is rounding this fps to 60 no matter if I wrote 59.94005994 in 2 places\n\nvideoWriter = new VideoWriter( full_file_name, fourCC, fps, scaled_size, true);\nvideoWriter.set(CAP_PROP_FPS, fps);\n\nBAD NEWS - I found this code inside opencv source code\noutfps = cvRound(fps);\n\nbool AVIWriteContainer::initContainer(const String& filename, double fps, Size size, bool iscolor){\n outfps = cvRound(fps);\n width = size.width;\n height = size.height;\n channels = iscolor ? 3 : 1;\n moviPointer = 0;\n bool result = strm->open(filename);\n return result;\n}\n\nWe have to escalate this bug toward OpenCv team (I use opencv-460.jar)\nI will try to manualy change the header using other programs - this could save the day\nthe math computation that explain the missing seconds\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "frame_rate", "mp4", "opencv", "python", "video_encoding" ]
stackoverflow_0049654051_frame_rate_mp4_opencv_python_video_encoding.txt
Q: Plotly: Select data with multiple dropdown menus from dataframe I want to create an interactive figure with plotly graph objects, where I can select the data from two dropdown menus. The menus should select the specific data from a dataframe. My dataframe looks like this: mode y x1 x2 0 A 3 0 6 1 A 4 1 7 2 A 2 2 8 3 B 1 3 9 4 B 0 4 10 5 B 5 5 11 I want the first dropdownmenu to choose between mode "A" and "B", and the second to choose between "x1" and "x2". The first menu works perfectly: fig = go.Figure() fig.add_trace(go.Scatter(x=df["x1"], y=df[df["mode"]=="A"]["y"])) buttons = [] for modes in list(df["mode"].unique()): buttons.append( dict( args=[{"y":[df[df["mode"]==modes]["y"]], "x":[df[df["mode"]==modes]["x1"]] }], label = modes, method = "restyle" ) ) fig.update_layout( updatemenus=[go.layout.Updatemenu(buttons=buttons)], # second menu ) For the second menu, I tried adding: buttons=list([ dict( args= [{"x":[df["x1"]] }], label= "x1", method="restyle"), dict( args= [{"x":[df["x2"]] }], label= "x2", method="restyle") ]) which adds a drowpdown menu, but not with the right functionality. It doesn't choose the "correct" x2 values, if you select "B" in the first menu. And here lies the problem: I think I need to add something in the direction of: dict( args= [{"x":[df[df["mode"] == **current mode from menu 1** ]["x1"]] }], label= "x1", method="restyle" ) in order to choose the correct x2 values, but I don't know how. That's as far as I got, but I'm not finding any way to do it. I've already tried searching for but didn't come up with a solution. My desired output: A and x1: plots y(A,x1) -> (0,3)(1,4)(2,2) A and x2: plots y(A,x2) -> (6,3)(7,4)(8,2) B and x1: plots y(B,x1) -> (3,1)(4,0)(5,5) B and x2: plots y(B,x2) -> (9,1)(10,0)(11,5) Code: import pandas as pd import plotly.graph_objects as go data = {'mode': ["A", "A", "A", "B", "B", "B"],'y': [3, 4, 2, 1, 0, 5], 'x1': [0, 1, 2, 3, 4, 5], 'x2': [6, 7, 8, 9, 10, 11]} df = pd.DataFrame.from_dict(data) fig = go.Figure() fig.add_trace(go.Scatter(x=df["x1"], y=df[df["mode"]=="A"]["y"])) buttons = [] for modes in list(df["mode"].unique()): buttons.append( dict( args=[{"y":[df[df["mode"]==modes]["y"]], "x":[df[df["mode"]==modes]["x1"]] }], label = modes, method = "restyle") ) fig.update_layout( updatemenus=[ go.layout.Updatemenu(buttons=buttons), dict( buttons=list([ dict( args= [{"x":[df["x1"]] }], label= "x1", method="restyle"), dict( args= [{"x":[df["x2"]] }], label= "x2", method="restyle") ]), y=0.2 ) ] ) fig A: You need to add df['y'] to the arg as follows: import pandas as pd import plotly.graph_objects as go data = {'mode': ["A", "A", "A", "B", "B", "B"],'y': [3, 4, 2, 1, 0, 5], 'x1': [0, 1, 2, 3, 4, 5], 'x2': [6, 7, 8, 9, 10, 11]} df = pd.DataFrame.from_dict(data) fig = go.Figure() fig.add_trace(go.Scatter(x=df["x1"], y=df[df["mode"]=="A"]["y"])) buttons = [] for modes in list(df["mode"].unique()): buttons.append( dict( args=[{"y":[df[df["mode"]==modes]["y"].tolist()], "x":[df[df["mode"]==modes]["x1"].tolist()] }], label = modes, method = "restyle") ) fig.update_layout( updatemenus=[ go.layout.Updatemenu(buttons=buttons), go.layout.Updatemenu( buttons=list([ dict( args= [{"x":[df["x1"]], "y":[df["y"]]}], #<---------- label= "x1", method="update"), dict( args= [{"x":[df["x2"]], "y":[df["y"]]}], #<---------- label= "x2", method="update") ]), y=0.2, ) ] ) fig Output
Plotly: Select data with multiple dropdown menus from dataframe
I want to create an interactive figure with plotly graph objects, where I can select the data from two dropdown menus. The menus should select the specific data from a dataframe. My dataframe looks like this: mode y x1 x2 0 A 3 0 6 1 A 4 1 7 2 A 2 2 8 3 B 1 3 9 4 B 0 4 10 5 B 5 5 11 I want the first dropdownmenu to choose between mode "A" and "B", and the second to choose between "x1" and "x2". The first menu works perfectly: fig = go.Figure() fig.add_trace(go.Scatter(x=df["x1"], y=df[df["mode"]=="A"]["y"])) buttons = [] for modes in list(df["mode"].unique()): buttons.append( dict( args=[{"y":[df[df["mode"]==modes]["y"]], "x":[df[df["mode"]==modes]["x1"]] }], label = modes, method = "restyle" ) ) fig.update_layout( updatemenus=[go.layout.Updatemenu(buttons=buttons)], # second menu ) For the second menu, I tried adding: buttons=list([ dict( args= [{"x":[df["x1"]] }], label= "x1", method="restyle"), dict( args= [{"x":[df["x2"]] }], label= "x2", method="restyle") ]) which adds a drowpdown menu, but not with the right functionality. It doesn't choose the "correct" x2 values, if you select "B" in the first menu. And here lies the problem: I think I need to add something in the direction of: dict( args= [{"x":[df[df["mode"] == **current mode from menu 1** ]["x1"]] }], label= "x1", method="restyle" ) in order to choose the correct x2 values, but I don't know how. That's as far as I got, but I'm not finding any way to do it. I've already tried searching for but didn't come up with a solution. My desired output: A and x1: plots y(A,x1) -> (0,3)(1,4)(2,2) A and x2: plots y(A,x2) -> (6,3)(7,4)(8,2) B and x1: plots y(B,x1) -> (3,1)(4,0)(5,5) B and x2: plots y(B,x2) -> (9,1)(10,0)(11,5) Code: import pandas as pd import plotly.graph_objects as go data = {'mode': ["A", "A", "A", "B", "B", "B"],'y': [3, 4, 2, 1, 0, 5], 'x1': [0, 1, 2, 3, 4, 5], 'x2': [6, 7, 8, 9, 10, 11]} df = pd.DataFrame.from_dict(data) fig = go.Figure() fig.add_trace(go.Scatter(x=df["x1"], y=df[df["mode"]=="A"]["y"])) buttons = [] for modes in list(df["mode"].unique()): buttons.append( dict( args=[{"y":[df[df["mode"]==modes]["y"]], "x":[df[df["mode"]==modes]["x1"]] }], label = modes, method = "restyle") ) fig.update_layout( updatemenus=[ go.layout.Updatemenu(buttons=buttons), dict( buttons=list([ dict( args= [{"x":[df["x1"]] }], label= "x1", method="restyle"), dict( args= [{"x":[df["x2"]] }], label= "x2", method="restyle") ]), y=0.2 ) ] ) fig
[ "You need to add df['y'] to the arg as follows:\nimport pandas as pd\nimport plotly.graph_objects as go\n\ndata = {'mode': [\"A\", \"A\", \"A\", \"B\", \"B\", \"B\"],'y': [3, 4, 2, 1, 0, 5], 'x1': [0, 1, 2, 3, 4, 5], 'x2': [6, 7, 8, 9, 10, 11]}\ndf = pd.DataFrame.from_dict(data)\n\nfig = go.Figure()\nfig.add_trace(go.Scatter(x=df[\"x1\"], y=df[df[\"mode\"]==\"A\"][\"y\"]))\n \nbuttons = []\nfor modes in list(df[\"mode\"].unique()):\n buttons.append(\n dict(\n args=[{\"y\":[df[df[\"mode\"]==modes][\"y\"].tolist()],\n \"x\":[df[df[\"mode\"]==modes][\"x1\"].tolist()]\n }],\n label = modes,\n method = \"restyle\")\n )\n \n \n\n\nfig.update_layout(\n updatemenus=[\n \n go.layout.Updatemenu(buttons=buttons),\n go.layout.Updatemenu(\n buttons=list([\n dict(\n args= [{\"x\":[df[\"x1\"]], \"y\":[df[\"y\"]]}], #<----------\n label= \"x1\",\n method=\"update\"),\n dict(\n args= [{\"x\":[df[\"x2\"]], \"y\":[df[\"y\"]]}], #<----------\n label= \"x2\",\n method=\"update\")\n ]),\n y=0.2,\n )\n ]\n)\n\nfig\n\nOutput\n\n" ]
[ 0 ]
[]
[]
[ "drop_down_menu", "pandas", "plotly", "python" ]
stackoverflow_0074519638_drop_down_menu_pandas_plotly_python.txt
Q: Language detection using deepl's python library Is there a way to use the deepl Python client library (or raw API) to detect the source language (without translating it)? The marketing blurb on the API website says, detection is available but I can't find it anywhere in the library or API. A: Currently, our API does not support "just" detecting the language. Our recommendation would be to try translating a small part of the sentence and use the detected language from the response. Even if we had a separate /detectLanguage endpoint, using it would be similar to this approach as you had to send some text anyway. A: I worked quite some time with deepl due to its simplicity, to be honest, I don't think deepl has that feature as a standalone function. But if you send a request to translate you will get the detected language in the response. If you have the starter package you can translate unlimited texts, so this might be a viable workaround. If you are open to another tool, LibreTranslate has a detect endpoint but the pricing is a bit more expensive. However, you can host it yourself if you have the infrastructure for it. https://libretranslate.com/docs/
Language detection using deepl's python library
Is there a way to use the deepl Python client library (or raw API) to detect the source language (without translating it)? The marketing blurb on the API website says, detection is available but I can't find it anywhere in the library or API.
[ "Currently, our API does not support \"just\" detecting the language. Our recommendation would be to try translating a small part of the sentence and use the detected language from the response.\nEven if we had a separate /detectLanguage endpoint, using it would be similar to this approach as you had to send some text anyway.\n", "I worked quite some time with deepl due to its simplicity, to be honest, I don't think deepl has that feature as a standalone function. But if you send a request to translate you will get the detected language in the response. If you have the starter package you can translate unlimited texts, so this might be a viable workaround.\nIf you are open to another tool, LibreTranslate has a detect endpoint but the pricing is a bit more expensive. However, you can host it yourself if you have the infrastructure for it.\nhttps://libretranslate.com/docs/\n" ]
[ 2, 0 ]
[]
[]
[ "deepl", "python" ]
stackoverflow_0074420850_deepl_python.txt
Q: Optimal way to edit and replace value in a row for a datetime format I have a datetime format which I am trying to use for one of my requirements. Here is my code, and this is how the input dataframe looks: data=pd.DataFrame({'A': ['abc','bcd'], 'B': [pd.to_datetime('1/1/18 0:00'), 'apples'], 'C':[pd.to_datetime('1/2/18 0:00'),'mangoes'], 'D':[pd.to_datetime('1/3/18 0:00'),'orange'],'E':[pd.to_datetime('1/4/18 0:00'),'plantain'], 'F':[pd.to_datetime('1/5/18 0:00'),'plantain'],'G':[pd.to_datetime('1/6/18 0:00'),'red'],'H':[pd.to_datetime('1/2/18 0:00'),'green']}) (I have used pd.to_datetime to showcase the data format which I have in my input file) I/p dataframe- My objective is to get the date to a str format which gives an o/p like 1/18 (m/yy) instead of 1/1/18 (m-d-yy). I was trying this approach- data.loc[0,['B']]='1-2018' data.loc[0,['C']]='2-2018' data.loc[0,['D']]='3-2018' . . . . This would do my job, but I need something which is more optimal and doesn't require me to write more lines of code. For e.g., say the dates spanned an interval of 3-4 years; then using my approach it would be too tedious to write out every month and year. Is there a better approach, say using a for loop, to write the above lines of code? A: A way to make your dataframe a little bit more accessible might be to transpose it so you have an actual date column: df_new = df.drop(columns=['A']).T.copy() df_new.rename(columns={0: 'Date', 1: 'Fruit'}, inplace=True) df_new['Date_str'] = pd.to_datetime(df_new['Date']).dt.strftime('%m/%Y') This will give you a date column with strings like '01/2018'; the table will also be vertical, basically. If you need some other specifications please let me know :)
Optimal way to edit and replace value in a row for a datetime format
I have a datetime format which I am trying to use for one of my requirements. Here is my code, and this is how the input dataframe looks: data=pd.DataFrame({'A': ['abc','bcd'], 'B': [pd.to_datetime('1/1/18 0:00'), 'apples'], 'C':[pd.to_datetime('1/2/18 0:00'),'mangoes'], 'D':[pd.to_datetime('1/3/18 0:00'),'orange'],'E':[pd.to_datetime('1/4/18 0:00'),'plantain'], 'F':[pd.to_datetime('1/5/18 0:00'),'plantain'],'G':[pd.to_datetime('1/6/18 0:00'),'red'],'H':[pd.to_datetime('1/2/18 0:00'),'green']}) (I have used pd.to_datetime to showcase the data format which I have in my input file) I/p dataframe- My objective is to get the date to a str format which gives an o/p like 1/18 (m/yy) instead of 1/1/18 (m-d-yy). I was trying this approach- data.loc[0,['B']]='1-2018' data.loc[0,['C']]='2-2018' data.loc[0,['D']]='3-2018' . . . . This would do my job, but I need something which is more optimal and doesn't require me to write more lines of code. For e.g., say the dates spanned an interval of 3-4 years; then using my approach it would be too tedious to write out every month and year. Is there a better approach, say using a for loop, to write the above lines of code?
[ "a way to make your dataframe a little bit more accessible might be to transpose it so you have an actual date column:\ndf_new = df.drop(columns=['A']).T.copy()\ndf_new.rename(columns={0: 'Date', 1: 'Fruit'}, inplace=True)\ndf_new['Date_str'] = df_new['Date'].dt.strftime('%m/%Y')\n\nthis will give you a date column with strings like '01/2018' also the table will be vertical basically.\nif you need some other specifications please let me know :)\n" ]
[ 1 ]
[]
[]
[ "dataframe", "datetime", "for_loop", "lines_of_code", "python" ]
stackoverflow_0074523774_dataframe_datetime_for_loop_lines_of_code_python.txt
Q: Creating scipy.stats random variable subclass does not result in expected object type I am trying to extend scipy.stats.rv_discrete to provide some simple distributions for the user. For example, in the simplest case they might want a distribution with a constant output. Here's my code for that: from scipy.stats._distn_infrastructure import rv_sample class const(rv_sample): # a distribution with probability 1 for a single val def __init__(self, val, *args, **kwds): super(const, self).__init__(values=(val, 1), *args, **kwds) However, this is not resulting in an object of the same type as the built-in random variable distributions, and this is messing up some operations I want to perform on distributions generically. Compare this to the poisson distribution: from scipy.stats import poisson import inspect print('\nThese should both contain rv_discrete:') print('1: ', inspect.getmro(poisson.__class__)) print('2: ', inspect.getmro(const.__class__)) print('\nThese should both be rv_frozen:') print('1: ', inspect.getmro(poisson(5).__class__)) print('2: ', inspect.getmro(const(5).__class__)) Output: These should both contain rv_discrete: 1: (<class 'scipy.stats._discrete_distns.poisson_gen'>, <class 'scipy.stats._distn_infrastructure.rv_discrete'>, <class 'scipy.stats._distn_infrastructure.rv_generic'>, <class 'object'>) 2: (<class 'type'>, <class 'object'>) These should both be rv_frozen: 1: (<class 'scipy.stats._distn_infrastructure.rv_frozen'>, <class 'object'>) 2: (<class '__main__.const'>, <class 'scipy.stats._distn_infrastructure.rv_sample'>, <class 'scipy.stats._distn_infrastructure.rv_discrete'>, <class 'scipy.stats._distn_infrastructure.rv_generic'>, <class 'object'>) Any tips on what I'm doing wrong here? I'm relatively inexperienced when it comes to subclassing so it may be something simple. Thank you! A: This issue is not able to be worked around at the moment, but is part of an overhaul of scipy's distributions as being tracked here: https://github.com/scipy/scipy/issues/15928
Creating scipy.stats random variable subclass does not result in expected object type
I am trying to extend scipy.stats.rv_discrete to provide some simple distributions for the user. For example, in the simplest case they might want a distribution with a constant output. Here's my code for that: from scipy.stats._distn_infrastructure import rv_sample class const(rv_sample): # a distribution with probability 1 for a single val def __init__(self, val, *args, **kwds): super(const, self).__init__(values=(val, 1), *args, **kwds) However, this is not resulting in an object of the same type as the built-in random variable distributions, and this is messing up some operations I want to perform on distributions generically. Compare this to the poisson distribution: from scipy.stats import poisson import inspect print('\nThese should both contain rv_discrete:') print('1: ', inspect.getmro(poisson.__class__)) print('2: ', inspect.getmro(const.__class__)) print('\nThese should both be rv_frozen:') print('1: ', inspect.getmro(poisson(5).__class__)) print('2: ', inspect.getmro(const(5).__class__)) Output: These should both contain rv_discrete: 1: (<class 'scipy.stats._discrete_distns.poisson_gen'>, <class 'scipy.stats._distn_infrastructure.rv_discrete'>, <class 'scipy.stats._distn_infrastructure.rv_generic'>, <class 'object'>) 2: (<class 'type'>, <class 'object'>) These should both be rv_frozen: 1: (<class 'scipy.stats._distn_infrastructure.rv_frozen'>, <class 'object'>) 2: (<class '__main__.const'>, <class 'scipy.stats._distn_infrastructure.rv_sample'>, <class 'scipy.stats._distn_infrastructure.rv_discrete'>, <class 'scipy.stats._distn_infrastructure.rv_generic'>, <class 'object'>) Any tips on what I'm doing wrong here? I'm relatively inexperienced when it comes to subclassing so it may be something simple. Thank you!
[ "This issue is not able to be worked around at the moment, but is part of an overhaul of scipy's distributions as being tracked here: https://github.com/scipy/scipy/issues/15928\n" ]
[ 0 ]
[]
[]
[ "class", "python", "scipy", "statistics" ]
stackoverflow_0060981879_class_python_scipy_statistics.txt
Q: from PIL import Image - DLL load failed while importing _imaging I'm on windows 10 using Python 3.9.12 and pillow-9.3.0 and having some issues while trying to use from PIL import Image. The error I'm getting is: ImportError: DLL load failed while importing _imaging: The specified module could not be found. Does anyone have an idea how to resolve this? I reinstalled Python 3.9.12 and tried installing/uninstalling multiple versions of Pillow: 8.3.2, 8.4, 9.0 & 9.3 A: Update your Python version to 3.11, because Pillow 9.3.0 was built with the Python 3.11 beta. If you're using Chocolatey as a package manager, use $ choco upgrade python -y If you're not using a package manager, download and install Python from the official site
from PIL import Image - DLL load failed while importing _imaging
I'm on windows 10 using Python 3.9.12 and pillow-9.3.0 and having some issues while trying to use from PIL import Image. The error I'm getting is: ImportError: DLL load failed while importing _imaging: The specified module could not be found. Does anyone have an idea how to resolve this? I reinstalled Python 3.9.12 and tried installing/uninstalling multiple versions of Pillow: 8.3.2, 8.4, 9.0 & 9.3
[ "Update your python version to 3.11\nBecause Pillow 3.9.0 was builted with python 3.11 beta\nIf you're using chocolatey as package manager use\n$ choco upgrade python -y\n\nIf you're not using a package manager, download and install python from official site\n" ]
[ 0 ]
[]
[]
[ "python", "python_imaging_library" ]
stackoverflow_0074523787_python_python_imaging_library.txt
Q: find specific numbers in a sequence Hi, I'd like to understand how, in the following Python program, to also get "the latest added number" and the "count of numbers that were added". The output should be like [121 21 11]; the code gives 121, but how do I get the other two? sum = 0 k = 1 while sum <= 100: sum = sum + k k = k + 2 print(sum) I don't know what commands to use to find out the answer. sum is 121; how do I get 21, which is the last number added, and 11, which is the count of numbers (1,3,5,7,9,11,13,15,17,19,21)? A: First off, "sum" is a built-in function, so you should not use it as a variable name. Next, you can easily build a list of your nums, making it easy to get sum, count, last, etc. nums = [1] while sum(nums) <= 100: nums.append(nums[-1]+2) print(sum(nums), nums[-1], len(nums)) 121 21 11 A: You should just store your ks in a list so you can access them later: sum = 0 k = 1 k_list = [] while sum <= 100: sum += k k_list.append(k) k += 2 print(sum, k_list[-1], len(k_list))
find specific numbers in a sequence
Hi, I'd like to understand how, in the following Python program, to also get "the latest added number" and the "count of numbers that were added". The output should be like [121 21 11]; the code gives 121, but how do I get the other two? sum = 0 k = 1 while sum <= 100: sum = sum + k k = k + 2 print(sum) I don't know what commands to use to find out the answer. sum is 121; how do I get 21, which is the last number added, and 11, which is the count of numbers (1,3,5,7,9,11,13,15,17,19,21)?
[ "First off, \"sum\" is a built-in function, so you should not use it as a variable name.\nNext, you can easily build a list of your nums making it easy to get sum, count, last, etc.\nnums = [1]\nwhile sum(nums) <= 100:\n nums.append(nums[-1]+2)\n\nprint(sum(nums), nums[-1], len(nums))\n121 21 11\n\n", "you should just store your ks in a list so you can access them later:\nsum = 0\nk = 1\nk_list = [1]\n\nwhile sum <= 100:\n sum += k\n k_list.append(k)\n k += 2\n\nprint(sum, k_list[-1], len(k_list))\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074523916_python.txt
Q: How to generate all combinations of a binary array without repeating I am trying to generate an array of all combinations of an array, but how can I generate them without repeating? My first solution was just to remove the repeating elements using some for loops, but I am dealing with big arrays, of length 50 or more, and the execution never ends. ex: (0,0,1,0) [1,0,0,0] [0,1,0,0] [0,0,1,0] [0,0,0,1] A: If your array is really just 0s and 1s, another possibility is to use itertools.combinations to determine where the 1s are in every combination. Example: from itertools import combinations array = [0,0,1,1,0,1,0,1,0,0,1,0,1,0,1] n = len(array) k = sum(array) for comb in combinations(range(n), k): # Any combination to choose k numbers from range 0..n next_arr = [1 if i in comb else 0 for i in range(n)] print(next_arr) A: For your example with 4 positions you can represent numbers from 0 (0000) up to 2**4-1, or 15 (1111), so you can make all the binary combos with arrays = [list(f"{i:04b}") for i in range(2**4)] A: Use itertools.permutations, store results in a set (as itertools.permutations treats elements as unique based on their position, not on their value): >>> from itertools import permutations >>> set(permutations([0,0,1,0])) {(0, 0, 1, 0), (1, 0, 0, 0), (0, 0, 0, 1), (0, 1, 0, 0)}
How to generate all combinations of a binary array without repeating
I am trying to generate an array of all combinations of an array, but how can I generate them without repeating? My first solution was just to remove the repeating elements using some for loops, but I am dealing with big arrays, of length 50 or more, and the execution never ends. ex: (0,0,1,0) [1,0,0,0] [0,1,0,0] [0,0,1,0] [0,0,0,1]
[ "If your array is really just 0s and 1s, another possibility is to use itertools.combinations to determine, where the 1s are in every combination. Example:\nfrom itertools import combinations\n\narray = [0,0,1,1,0,1,0,1,0,0,1,0,1,0,1]\nn = len(array)\nk = sum(array)\n\nfor comb in combinations(range(n), k): # Any combination to chose k numbers from range 0..n\n next_arr = [1 if i in comb else 0 for i in range(n)]\n print(next_arr)\n\n", "for your example with 4 spaces\nyou can represent from 0(0000) up to 2**4-1 or 15 (1111)\nso you can make all the binary combos with\narrays = [list(f\"{i:04b}\") for i in range(2**4)]\n\n", "Use itertools.permutations, store results in a set (as itertools.permutations treats elements as unique based on their position, not on their value).:\n>>> from itertools import permutations\n>>> set(permutations([0,0,1,0]))\n{(0, 0, 1, 0), (1, 0, 0, 0), (0, 0, 0, 1), (0, 1, 0, 0)}\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "combinations", "python" ]
stackoverflow_0074523662_combinations_python.txt
Q: Is it possible to do this in class...? I want to know if it's possible to do this in beautifulsoup, look at the class. city = soup.find_all("div", class_="pizdz") s =0 For I in city: C= I.find("a", class="pizdz_{s}") s += 1 I tried to do that, but it didn't work. Can you do the same but in a different way? A: Use an f-string to substitute the variable. And you need to use class_. for s, I in enumerate(city): C = I.find("a", class_=f"pizdz_{s}") You can use enumerate() instead of incrementing s in your own code.
Is it possible to do this in class...?
I want to know if it's possible to do this in beautifulsoup, look at the class. city = soup.find_all("div", class_="pizdz") s =0 For I in city: C= I.find("a", class="pizdz_{s}") s += 1 I tried to do that, but it didn't work. Can you do the same but in a different way?
[ "Use an f-string to substitute the variable. And you need to use class_.\nfor s, I in enumerate(city):\n C = I.find(\"A\", class_=f\"pizdz_{s}\")\n\nYou can use enumerate() instead of incrementing s in your own code.\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "python_3.x" ]
stackoverflow_0074524040_beautifulsoup_python_python_3.x.txt
Q: Code completion not giving recommendations Say I'm working with the 'requests' Python library. req = requests.get("http://google.com") Now after this, if I type req., I'm supposed to get a list of all methods I can access. But for some reason I don't, even if I manually press Ctrl+Space. If I try this in iPython, I get autocomplete recommendations. Even if I try it via the built in Python console in PyCharm, I get recommendations. Why's this happening? A: As Python is a dynamically typed language, you need to ensure it can work out what type things are, and inspect on the libraries on your system correctly. Try to make sure it's obvious what type the object is in your code. One good way as of PyCharm 2.7 (back when versions were numbers) is to enable runtime type detection - PyCharm hooks into your program while it runs (while debugging), and checks the types of variables as they are used. You can enable this by going to settings, going to the "Build, Execution, Deployment" section and then the "Python Debugger" subsection and enabling "Collect run-time types information for code insight". Obviously it is worth noting that this isn't perfect - if you make changes, this won't be updated til the code is executed, and it can only tell you about values it has seen - other code paths you haven't tried could set other types. You can also 'tell' PyCharm by using Epydoc or Sphinx style docstrings that contain information about parameter and return value types. PyCharm will use these to improve it's inspections. Python also gained support for function annotations as of Python 3. These can be used for type hints as per PEP 484. See the typing module for more. This is more formal, so it can also be used for tools like mypy which a type checker that can programmatically check these types for consistency, giving Python a TypeScript-style optional static typing. A: Python is a dynamically typed language, which means that the "get" function does not declare its return type. When you're entering code in IPython or in the PyCharm console, the code is actually being executed, and it's possible to inspect the object instance in the running interpreter and to get the list of its methods. When you're entering code in PyCharm or in any other Python IDE, it is not executed, and it's only possible to use static analysis to infer the return type of the method. This is not possible in all cases. A: PyCharm has no idea what the dict contains if you fill it dynamically. So you have to hint PyCharm about the keys of dict beforehand. Prodict does exactly this to hint PyCharm, so you get code completion. First, if you want to be able to access the response object, then you have to get a json response and convert it to dict. That's achieved with .json() method of requests like this: response = requests.get("https://some.restservice.com/user/1").json() OK, we loaded it to a dict object, now you can access keys with bracket syntax: print(response['name']) Since you ask for auto code completion, you certainly need to hint PyCharm about the keys of dict. If you already know the respone schema, you can use Prodict to hint PyCharm: class Response(Prodict): name: str price: float response_dict = requests.get("https://some.restservice.com/user/1").json() response = Response.from_dict(response_dict) print(response.name) print(response.price) In the above code, both name and price attributes are auto-complated. 
If you don't know the schema of the response, then you can still use dot-notation to access dict attributes like this: response_dict = requests.get("https://some.restservice.com/user/1").json() response = Prodict.from_dict(response_dict) print(response.name) But code-completion will not be available since PyCharm can't know what the schema is. What's more is, Prodict class is derived directly from dict, so you can use it as dict too. This is the screenshot from Prodict repo that illustrates code completion: Disclaimer: I am the author of Prodict. A: if will just detect methods or variables and... with write some part of it: File->Setting -> Editor -> General -> Code Completion in top of opened window , unCheck [ Mach Case ] A: It's an old question but probably all the provided answers missed the mark by a margin as wide as Sun's distance to Betelgeuse (none of the answers is accepted and @user1265125 is an active guy with 8 yrs here and more cred than me). As it happens, I've just had exactly the same problem as OP and the solution was: A NON-ASCII CHAR SOMEWHERE IN THE PROJECT'S FOLDER PATH Seriously, PyCharm devs...[doubleFacepalm] A: In my case the solution is to reset the settings, everething else wasn`t working for me. "From the main menu, select File | Manage IDE Settings | Restore Default Settings.Alternatively, press Shift twice and type Restore default settings." A: I had a similar problem. Only functions I had already used were suggested and only as plain text and not recognised as methods. What fixed that for me was deleting the /.idea folder in the project directory. (Afterwards you will have to set your run configurations again) A: With the latest version update to 2022.2, even auto-complete stopped working for me. After quite a bit of reading articles, I just found the https://youtrack.jetbrains.com/issue/PY-50489 issue which was the root problem. The old plugins were pending update, after that, the code completion issue was fixed. So, try and check if you are facing the same problem, if the plugins are up to date in Settings —> Plugins.
Code completion not giving recommendations
Say I'm working with the 'requests' Python library. req = requests.get("http://google.com") Now after this, if I type req., I'm supposed to get a list of all methods I can access. But for some reason I don't, even if I manually press Ctrl+Space. If I try this in iPython, I get autocomplete recommendations. Even if I try it via the built in Python console in PyCharm, I get recommendations. Why's this happening?
[ "As Python is a dynamically typed language, you need to ensure it can work out what type things are, and inspect on the libraries on your system correctly. Try to make sure it's obvious what type the object is in your code.\nOne good way as of PyCharm 2.7 (back when versions were numbers) is to enable runtime type detection - PyCharm hooks into your program while it runs (while debugging), and checks the types of variables as they are used.\nYou can enable this by going to settings, going to the \"Build, Execution, Deployment\" section and then the \"Python Debugger\" subsection and enabling \"Collect run-time types information for code insight\".\n\nObviously it is worth noting that this isn't perfect - if you make changes, this won't be updated til the code is executed, and it can only tell you about values it has seen - other code paths you haven't tried could set other types.\nYou can also 'tell' PyCharm by using Epydoc or Sphinx style docstrings that contain information about parameter and return value types. PyCharm will use these to improve it's inspections.\nPython also gained support for function annotations as of Python 3. These can be used for type hints as per PEP 484. See the typing module for more. This is more formal, so it can also be used for tools like mypy which a type checker that can programmatically check these types for consistency, giving Python a TypeScript-style optional static typing.\n", "Python is a dynamically typed language, which means that the \"get\" function does not declare its return type. When you're entering code in IPython or in the PyCharm console, the code is actually being executed, and it's possible to inspect the object instance in the running interpreter and to get the list of its methods. When you're entering code in PyCharm or in any other Python IDE, it is not executed, and it's only possible to use static analysis to infer the return type of the method. This is not possible in all cases.\n", "PyCharm has no idea what the dict contains if you fill it dynamically. So you have to hint PyCharm about the keys of dict beforehand. Prodict does exactly this to hint PyCharm, so you get code completion.\nFirst, if you want to be able to access the response object, then you have to get a json response and convert it to dict. That's achieved with .json() method of requests like this:\nresponse = requests.get(\"https://some.restservice.com/user/1\").json()\n\nOK, we loaded it to a dict object, now you can access keys with bracket syntax:\nprint(response['name'])\n\nSince you ask for auto code completion, you certainly need to hint PyCharm about the keys of dict. 
If you already know the respone schema, you can use Prodict to hint PyCharm:\nclass Response(Prodict):\n name: str\n price: float\n\nresponse_dict = requests.get(\"https://some.restservice.com/user/1\").json()\n\nresponse = Response.from_dict(response_dict)\nprint(response.name)\nprint(response.price)\n\nIn the above code, both name and price attributes are auto-complated.\nIf you don't know the schema of the response, then you can still use dot-notation to access dict attributes like this:\nresponse_dict = requests.get(\"https://some.restservice.com/user/1\").json()\nresponse = Prodict.from_dict(response_dict)\nprint(response.name)\n\nBut code-completion will not be available since PyCharm can't know what the schema is.\nWhat's more is, Prodict class is derived directly from dict, so you can use it as dict too.\nThis is the screenshot from Prodict repo that illustrates code completion:\n\nDisclaimer: I am the author of Prodict.\n", "if will just detect methods or variables and... with write some part of it:\n File->Setting -> Editor -> General -> Code Completion\nin top of opened window , unCheck [ Mach Case ] \n", "It's an old question but probably all the provided answers missed the mark by a margin as wide as Sun's distance to Betelgeuse (none of the answers is accepted and @user1265125 is an active guy with 8 yrs here and more cred than me).\nAs it happens, I've just had exactly the same problem as OP and the solution was:\nA NON-ASCII CHAR SOMEWHERE IN THE PROJECT'S FOLDER PATH\nSeriously, PyCharm devs...[doubleFacepalm]\n", "In my case the solution is to reset the settings, everething else wasn`t working for me.\n\"From the main menu, select File | Manage IDE Settings | Restore Default Settings.Alternatively, press Shift twice and type Restore default settings.\"\n", "I had a similar problem. Only functions I had already used were suggested and only as plain text and not recognised as methods.\nWhat fixed that for me was deleting the /.idea folder in the project directory. (Afterwards you will have to set your run configurations again)\n", "With the latest version update to 2022.2, even auto-complete stopped working for me. After quite a bit of reading articles, I just found the https://youtrack.jetbrains.com/issue/PY-50489 issue which was the root problem. The old plugins were pending update, after that, the code completion issue was fixed.\nSo, try and check if you are facing the same problem, if the plugins are up to date in Settings —> Plugins.\n" ]
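To make the annotation advice concrete, a small sketch for the asker's exact case (requests.get really does return a requests.Response, so the hint below matches the library):

import requests

def fetch(url: str) -> requests.Response:
    # the return annotation is what PyCharm's static analysis reads
    return requests.get(url)

req = fetch("http://google.com")
print(req.status_code)  # typing "req." now completes: headers, json(), ...

A one-line alternative is annotating the variable itself: req: requests.Response = requests.get("http://google.com").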
[ 23, 8, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "autocomplete", "pycharm", "python" ]
stackoverflow_0015022804_autocomplete_pycharm_python.txt
Q: Errors installing pygraphviz on mac os 11.6 I'm trying to install pygraphviz in order to get layouts for my network. However, I have trouble installing pygraphviz using pip install pygraphviz. I get the following lengthy error: ERROR: Command errored out with exit status 1: command: /opt/anaconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"'; __file__='"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-wheel-ktbtqll_ cwd: /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/ Complete output (71 lines): running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.9-x86_64-3.8 creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/agraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/testing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_unicode.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_readwrite.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_string.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_html.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_node_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_drawing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_subgraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_close.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_edge_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_clear.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_layout.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_attribute_defaults.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_graph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests running egg_info writing pygraphviz.egg-info/PKG-INFO writing dependency_links to pygraphviz.egg-info/dependency_links.txt writing top-level names to pygraphviz.egg-info/top_level.txt reading manifest file 'pygraphviz.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.png' under directory 'doc' warning: no files found matching '*.txt' under directory 'doc' warning: no files found matching '*.css' under directory 'doc' warning: no previously-included files matching '*~' found anywhere in distribution 
warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '.svn' found anywhere in distribution no previously-included directories found matching 'doc/build' writing manifest file 'pygraphviz.egg-info/SOURCES.txt' copying pygraphviz/graphviz.i -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz_wrap.c -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz running build_ext building 'pygraphviz._graphviz' extension creating build/temp.macosx-10.9-x86_64-3.8 creating build/temp.macosx-10.9-x86_64-3.8/pygraphviz gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include/python3.8 -c pygraphviz/graphviz_wrap.c -o build/temp.macosx-10.9-x86_64-3.8/pygraphviz/graphviz_wrap.o pygraphviz/graphviz_wrap.c:1756:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:1923:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:2711:10: fatal error: 'graphviz/cgraph.h' file not found #include "graphviz/cgraph.h" ^~~~~~~~~~~~~~~~~~~ 2 warnings and 1 error generated. error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for pygraphviz Running setup.py clean for pygraphviz Failed to build pygraphviz Installing collected packages: pygraphviz Running setup.py install for pygraphviz ... 
error ERROR: Command errored out with exit status 1: command: /opt/anaconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"'; __file__='"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-record-p4ku8efv/install-record.txt --single-version-externally-managed --compile --install-headers /opt/anaconda3/include/python3.8/pygraphviz cwd: /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/ Complete output (71 lines): running install running build running build_py creating build creating build/lib.macosx-10.9-x86_64-3.8 creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/agraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/testing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_unicode.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_readwrite.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_string.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_html.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_node_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_drawing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_subgraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_close.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_edge_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_clear.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_layout.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_attribute_defaults.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_graph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests running egg_info writing pygraphviz.egg-info/PKG-INFO writing dependency_links to pygraphviz.egg-info/dependency_links.txt writing top-level names to pygraphviz.egg-info/top_level.txt reading manifest file 'pygraphviz.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.png' under directory 'doc' warning: no files found matching '*.txt' under directory 'doc' warning: no files found matching '*.css' under directory 'doc' warning: no previously-included files matching '*~' found anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no 
previously-included files matching '.svn' found anywhere in distribution no previously-included directories found matching 'doc/build' writing manifest file 'pygraphviz.egg-info/SOURCES.txt' copying pygraphviz/graphviz.i -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz_wrap.c -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz running build_ext building 'pygraphviz._graphviz' extension creating build/temp.macosx-10.9-x86_64-3.8 creating build/temp.macosx-10.9-x86_64-3.8/pygraphviz gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include/python3.8 -c pygraphviz/graphviz_wrap.c -o build/temp.macosx-10.9-x86_64-3.8/pygraphviz/graphviz_wrap.o pygraphviz/graphviz_wrap.c:1756:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:1923:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:2711:10: fatal error: 'graphviz/cgraph.h' file not found #include "graphviz/cgraph.h" ^~~~~~~~~~~~~~~~~~~ 2 warnings and 1 error generated. error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /opt/anaconda3/bin/python -u -c Does anyone know how to fix this issue. I've been going through a lot of stack overlfow threads, but nothing worked. Thanks for your suggestions! A: PyGraphviz requires the graphviz library to be installed. The easiest way to do this is probably to use homebrew, as described in the macOS section of the PyGraphviz installation docs. A: I had the same problem, however direct recommendations are not working properly. So, the sequence is the following: brew install graphviz pip install --install-option="--include-path=/opt/local/include" --install-option="--library-path=/opt/local/lib" pygraphviz The second — is how to force pip to find a correct path to header files after brew install.
Errors installing pygraphviz on mac os 11.6
I'm trying to install pygraphviz in order to get layouts for my network. However, I have trouble installing pygraphviz using pip install pygraphviz. I get the following lengthy error: ERROR: Command errored out with exit status 1: command: /opt/anaconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"'; __file__='"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-wheel-ktbtqll_ cwd: /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/ Complete output (71 lines): running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.9-x86_64-3.8 creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/agraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/testing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_unicode.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_readwrite.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_string.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_html.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_node_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_drawing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_subgraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_close.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_edge_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_clear.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_layout.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_attribute_defaults.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_graph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests running egg_info writing pygraphviz.egg-info/PKG-INFO writing dependency_links to pygraphviz.egg-info/dependency_links.txt writing top-level names to pygraphviz.egg-info/top_level.txt reading manifest file 'pygraphviz.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.png' under directory 'doc' warning: no files found matching '*.txt' under directory 'doc' warning: no files found matching '*.css' under directory 'doc' warning: no previously-included files matching '*~' found anywhere in distribution warning: no previously-included files matching 
'*.pyc' found anywhere in distribution warning: no previously-included files matching '.svn' found anywhere in distribution no previously-included directories found matching 'doc/build' writing manifest file 'pygraphviz.egg-info/SOURCES.txt' copying pygraphviz/graphviz.i -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz_wrap.c -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz running build_ext building 'pygraphviz._graphviz' extension creating build/temp.macosx-10.9-x86_64-3.8 creating build/temp.macosx-10.9-x86_64-3.8/pygraphviz gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include/python3.8 -c pygraphviz/graphviz_wrap.c -o build/temp.macosx-10.9-x86_64-3.8/pygraphviz/graphviz_wrap.o pygraphviz/graphviz_wrap.c:1756:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:1923:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:2711:10: fatal error: 'graphviz/cgraph.h' file not found #include "graphviz/cgraph.h" ^~~~~~~~~~~~~~~~~~~ 2 warnings and 1 error generated. error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for pygraphviz Running setup.py clean for pygraphviz Failed to build pygraphviz Installing collected packages: pygraphviz Running setup.py install for pygraphviz ... 
error ERROR: Command errored out with exit status 1: command: /opt/anaconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"'; __file__='"'"'/private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-record-p4ku8efv/install-record.txt --single-version-externally-managed --compile --install-headers /opt/anaconda3/include/python3.8/pygraphviz cwd: /private/var/folders/w0/pv1mwphj1t552sml25d44bq40000gn/T/pip-install-dabvxss7/pygraphviz/ Complete output (71 lines): running install running build running build_py creating build creating build/lib.macosx-10.9-x86_64-3.8 creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/agraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/testing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz creating build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_unicode.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_scraper.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_readwrite.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_string.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/__init__.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_html.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_node_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_drawing.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_subgraph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_close.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_edge_attributes.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_clear.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_layout.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_attribute_defaults.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests copying pygraphviz/tests/test_graph.py -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz/tests running egg_info writing pygraphviz.egg-info/PKG-INFO writing dependency_links to pygraphviz.egg-info/dependency_links.txt writing top-level names to pygraphviz.egg-info/top_level.txt reading manifest file 'pygraphviz.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.png' under directory 'doc' warning: no files found matching '*.txt' under directory 'doc' warning: no files found matching '*.css' under directory 'doc' warning: no previously-included files matching '*~' found anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no 
previously-included files matching '.svn' found anywhere in distribution no previously-included directories found matching 'doc/build' writing manifest file 'pygraphviz.egg-info/SOURCES.txt' copying pygraphviz/graphviz.i -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz copying pygraphviz/graphviz_wrap.c -> build/lib.macosx-10.9-x86_64-3.8/pygraphviz running build_ext building 'pygraphviz._graphviz' extension creating build/temp.macosx-10.9-x86_64-3.8 creating build/temp.macosx-10.9-x86_64-3.8/pygraphviz gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include -arch x86_64 -I/opt/anaconda3/include/python3.8 -c pygraphviz/graphviz_wrap.c -o build/temp.macosx-10.9-x86_64-3.8/pygraphviz/graphviz_wrap.o pygraphviz/graphviz_wrap.c:1756:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:1923:7: warning: 'tp_print' is deprecated [-Wdeprecated-declarations] 0, /* tp_print */ ^ /opt/anaconda3/include/python3.8/cpython/object.h:260:5: note: 'tp_print' has been explicitly marked deprecated here Py_DEPRECATED(3.8) int (*tp_print)(PyObject *, FILE *, int); ^ /opt/anaconda3/include/python3.8/pyport.h:515:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ pygraphviz/graphviz_wrap.c:2711:10: fatal error: 'graphviz/cgraph.h' file not found #include "graphviz/cgraph.h" ^~~~~~~~~~~~~~~~~~~ 2 warnings and 1 error generated. error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /opt/anaconda3/bin/python -u -c Does anyone know how to fix this issue. I've been going through a lot of stack overlfow threads, but nothing worked. Thanks for your suggestions!
[ "PyGraphviz requires the graphviz library to be installed. The easiest way to do this is probably to use homebrew, as described in the macOS section of the PyGraphviz installation docs.\n", "I had the same problem, however direct recommendations are not working properly.\nSo, the sequence is the following:\n\nbrew install graphviz\npip install --install-option=\"--include-path=/opt/local/include\" --install-option=\"--library-path=/opt/local/lib\" pygraphviz\n\nThe second — is how to force pip to find a correct path to header files after brew install.\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0070151897_python.txt
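Editor's note on the pygraphviz record above: newer pip releases have removed the --install-option flag used in the second answer; on Homebrew setups the usual replacement is to export CFLAGS="-I$(brew --prefix graphviz)/include" and LDFLAGS="-L$(brew --prefix graphviz)/lib" before running pip install pygraphviz. Once the build succeeds, a minimal Python sanity check (a sketch; it assumes the Graphviz binaries are on PATH):

import pygraphviz as pgv

# Build a two-node graph and run a Graphviz layout engine;
# this exercises both the compiled C extension and the dot binary.
G = pgv.AGraph(directed=True)
G.add_edge("a", "b")
G.layout(prog="dot")
print(G.string())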
Q: Read matrix from txt file in Python (no numpy) using function I am a beginner trying to read a matrix from a text file and return it using a function read_matrix(pathname), but everything I find and build does not work. Can you help me understand where I went wrong? Please no numpy def read_matrix(pathname): matrices=[] m=[] for line in file("matrix.txt",'r'): if line=="1 1\n": if len(m)>0: matrices.append(m) m=[] else: m.append(line.strip().split(' ')) if len(m)>0: matrices.append(m) return(m) m = read_matrix('matrix.txt') A: if your file looks like this: data.txt: 1 2 3 4 5 6 7 8 9 you can read it into a matrix (a list of lists) as follows: with open("data.txt") as fid: txt=fid.read() matrix = [[int(val) for val in line.split()] for line in txt.split('\n') if line] your code could work as follows; however, there are some lines which could be written better: def read_matrix(pathname): matrices = [] m = [] for line in open(pathname,'r'): if line=="1 1\n": if len(m) > 0: matrices.append(m) m=[] else: m.append(line.strip().split(' ')) if len(m)>0: matrices.append(m) return(m) m = read_matrix('data.txt')
Read matrix from txt file in Python (no numpy) using function
I am a beginner trying to read a matrix from a text file and return it using a function read_matrix(pathname), but everything I find and build does not work. Can you help me understand where I went wrong? Please no numpy def read_matrix(pathname): matrices=[] m=[] for line in file("matrix.txt",'r'): if line=="1 1\n": if len(m)>0: matrices.append(m) m=[] else: m.append(line.strip().split(' ')) if len(m)>0: matrices.append(m) return(m) m = read_matrix('matrix.txt')
[ "if your file looks like this:\ndata.txt:\n\n1 2 3\n4 5 6\n7 8 9\n\nyou can read it to a matrix (list of lists) as follow:\nwith open(\"data.txt\") as fid:\n txt=fid.read()\n\n\nmatrix = [[int(val) for val in line.split()] for line in txt.split('\\n') if line]\n\nyour code could work as follow, however there are some lines which could be written better:\ndef read_matrix(pathname):\n matrices = []\n m = []\n for line in open(pathname,'r'):\n if line==\"1 1\\n\": \n if len(m) > 0: \n matrices.append(m)\n m=[]\n else:\n m.append(line.strip().split(' '))\n if len(m)>0: matrices.append(m)\n return(m)\n\nm = read_matrix('data.txt')\n\n" ]
[ 1 ]
[]
[]
[ "matrix", "python" ]
stackoverflow_0074524004_matrix_python.txt
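Editor's note on the matrix record above: a slightly more defensive variant of the accepted answer, as a sketch. It assumes whitespace-separated integer values and skips blank lines; the context manager also guarantees the file handle is closed.

def read_matrix(pathname):
    # One inner list per non-blank line; int() raises if a token is not numeric.
    with open(pathname) as fh:
        return [[int(token) for token in line.split()] for line in fh if line.strip()]

m = read_matrix("data.txt")
print(m)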
Q: What is the most Pythonic way to dynamically create a DataFrame containing person age in month? I have a list of people with their firstname, lastname and their date of birth in a DataFrame. data = [ ["John", "Wayne", "13.12.2018"], ["Max", "Muster", "02.06.2016"], ["Steve", "Black", "11.04.2017"], ["Amy", "Smith", "10.10.2017"], ["July", "House", "08.05.2018"], ["Anna", "Whine", "20.08.2016"], ["Charly", "Johnson", "16.07.2016"], ] people = pd.DataFrame( data, columns=["first", "last", "birthdate"], ) people["birthdate"] = pd.to_datetime(people["birthdate"], format="%d.%m.%Y") first last birthdate 0 John Wayne 2018-12-13 1 Max Muster 2016-06-02 2 Steve Black 2017-04-11 3 Amy Smith 2017-10-10 4 July House 2018-05-08 5 Anna Whine 2016-08-20 6 Charly Johnson 2016-07-16 I would like to create another dataframe having the same rows but the months of a year as columns. The data should be the people's age at the end of the month. Here is what I'm currently doing # generate series for all months months = pd.date_range("2022-01-01", "2022-12-01", freq="MS") # calculate age for every person age = pd.DataFrame(data={"first": people["first"], "last": people["last"]}) for value in months: last_day_of_month = value + pd.offsets.MonthEnd() age[value.strftime("%b")] = (last_day_of_month - people["birthdate"]).astype( "timedelta64[Y]" ) first last Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 0 John Wayne 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 4.0 1 Max Muster 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6.0 6.0 2 Steve Black 4.0 4.0 4.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 3 Amy Smith 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 5.0 5.0 5.0 4 July House 3.0 3.0 3.0 3.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 5 Anna Whine 5.0 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6 Charly Johnson 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6.0 That works fine but I was wondering if there is a more pythonic way to solve my problem. The for loop is certainly something I would use in other programming languages but I thought "Maybe there is a smarter way to solve this ...". Also another general question: Would you rather use the columns for the months or the rows? I'm new to Python and Pandas and was wondering if there are some best practices around time series data modelling. Thank you very much! A: You can try to vectorize all you operations using numpy broadcasting: months = pd.date_range("2022-01-01", "2022-12-01", freq="ME") idx = pd.MultiIndex.from_frame(people[['first', 'last']]) out = (pd.DataFrame( months.to_numpy() - people[['birthdate']].to_numpy(), index=idx, columns=months.strftime('%b') ) .astype("timedelta64[Y]") .reset_index() ) print(out) Output: first last Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 0 John Wayne 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 1 Max Muster 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6.0 2 Steve Black 4.0 4.0 4.0 4.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 3 Amy Smith 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 5.0 5.0 4 July House 3.0 3.0 3.0 3.0 3.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 5 Anna Whine 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6 Charly Johnson 5.0 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0
What is the most Pythonic way to dynamically create a DataFrame containing person age in month?
I have a list of people with their firstname, lastname and their date of birth in a DataFrame. data = [ ["John", "Wayne", "13.12.2018"], ["Max", "Muster", "02.06.2016"], ["Steve", "Black", "11.04.2017"], ["Amy", "Smith", "10.10.2017"], ["July", "House", "08.05.2018"], ["Anna", "Whine", "20.08.2016"], ["Charly", "Johnson", "16.07.2016"], ] people = pd.DataFrame( data, columns=["first", "last", "birthdate"], ) people["birthdate"] = pd.to_datetime(people["birthdate"], format="%d.%m.%Y") first last birthdate 0 John Wayne 2018-12-13 1 Max Muster 2016-06-02 2 Steve Black 2017-04-11 3 Amy Smith 2017-10-10 4 July House 2018-05-08 5 Anna Whine 2016-08-20 6 Charly Johnson 2016-07-16 I would like to create another dataframe having the same rows but the months of a year as columns. The data should be the people's age at the end of the month. Here is what I'm currently doing # generate series for all months months = pd.date_range("2022-01-01", "2022-12-01", freq="MS") # calculate age for every person age = pd.DataFrame(data={"first": people["first"], "last": people["last"]}) for value in months: last_day_of_month = value + pd.offsets.MonthEnd() age[value.strftime("%b")] = (last_day_of_month - people["birthdate"]).astype( "timedelta64[Y]" ) first last Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 0 John Wayne 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 4.0 1 Max Muster 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6.0 6.0 2 Steve Black 4.0 4.0 4.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 3 Amy Smith 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 5.0 5.0 5.0 4 July House 3.0 3.0 3.0 3.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 5 Anna Whine 5.0 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6 Charly Johnson 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6.0 That works fine but I was wondering if there is a more pythonic way to solve my problem. The for loop is certainly something I would use in other programming languages but I thought "Maybe there is a smarter way to solve this ...". Also another general question: Would you rather use the columns for the months or the rows? I'm new to Python and Pandas and was wondering if there are some best practices around time series data modelling. Thank you very much!
[ "You can try to vectorize all you operations using numpy broadcasting:\nmonths = pd.date_range(\"2022-01-01\", \"2022-12-01\", freq=\"ME\")\n\nidx = pd.MultiIndex.from_frame(people[['first', 'last']])\n\nout = (pd.DataFrame(\n months.to_numpy() -\n people[['birthdate']].to_numpy(),\n index=idx,\n columns=months.strftime('%b')\n )\n .astype(\"timedelta64[Y]\")\n .reset_index()\n )\n\nprint(out)\n\nOutput:\n first last Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec\n0 John Wayne 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0\n1 Max Muster 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0 6.0\n2 Steve Black 4.0 4.0 4.0 4.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0\n3 Amy Smith 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0 5.0 5.0\n4 July House 3.0 3.0 3.0 3.0 3.0 4.0 4.0 4.0 4.0 4.0 4.0 4.0\n5 Anna Whine 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0\n6 Charly Johnson 5.0 5.0 5.0 5.0 5.0 5.0 5.0 6.0 6.0 6.0 6.0 6.0\n\n" ]
[ 3 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074523929_dataframe_pandas_python.txt
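Editor's note on the pandas record above: casting to "timedelta64[Y]" is rejected by pandas 2.x, so a version-proof sketch computes whole-year ages from the date parts directly. It assumes the people frame from the question; the 2022 month range mirrors the asker's setup.

months = pd.date_range("2022-01-01", "2022-12-01", freq="MS")
age = people[["first", "last"]].copy()
for m in months:
    eom = m + pd.offsets.MonthEnd()  # last day of the month
    # Subtract one year if the birthday has not yet occurred by the end of month m.
    not_yet = (people["birthdate"].dt.month > eom.month) | (
        (people["birthdate"].dt.month == eom.month) & (people["birthdate"].dt.day > eom.day)
    )
    age[m.strftime("%b")] = eom.year - people["birthdate"].dt.year - not_yet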
Q: Recommended way to install and update packages from different channels in conda Conda does a good job explaining what channels are and how to use them. However, I never know what to do when I want to install packages from different channels. For example, most packages recommend installation via conda-forge (e.g. xarray). However, I occasionally encounter a package that uses a different channel (like HoloViews). Whenever I encounter those packages from different channels, chances are relatively high that conda will find some conflicts that it can't solve. If that's the case, what's the preferred way of resolving them? Also, how do I update these packages? Should I update them all individually specifying their respective channels? A: There are a few ways to install packages from different channels. One way is to use the --channel option with the conda install command. For example, to install the xarray package from the conda-forge channel, you would use the following command: conda install --channel conda-forge xarray Another way to install packages from different channels is to use the conda config --add channels command. This will add the specified channel to your conda configuration, and conda will then search that channel by default when installing packages. For example, to add the conda-forge channel to your conda configuration, you would use the following command: conda config --add channels conda-forge Once you have added the channel to your conda configuration, you can install the xarray package with the following command: conda install xarray If you want to update a package that is installed from a different channel, you can use the --channel option with the conda update command. For example, to update the xarray package to the latest version from the conda-forge channel, you would use the following command: conda update --channel conda-forge xarray
Recommended way to install and update packages from different channels in conda
Conda does a good job explaining what channels are and how to use them. However, I never know what to do when I want to install packages from different channels. For example, most packages recommend installation via conda-forge (e.g. xarray). However, I occasionally encounter a package that uses a different channel (like HoloViews). Whenever I encounter those packages from different channels, chances are relatively high that conda will find some conflicts that it can't solve. If that's the case, what's the preferred way of resolving them? Also, how do I update these packages? Should I update them all individually specifying their respective channels?
[ "There are a few ways to install packages from different channels. One way is to use the --channel option with the conda install command. For example, to install the xarray package from the conda-forge channel, you would use the following command:\nconda install --channel conda-forge xarray\n\nAnother way to install packages from different channels is to use the conda config --add channels command. This will add the specified channel to your conda configuration, and any packages installed from that channel will be installed from that channel by default. For example, to add the conda-forge channel to your conda configuration, you would use the following command:\nconda config --add channels conda-forge\n\nOnce you have added the channel to your conda configuration, you can install the xarray package with the following command:\nconda install xarray\n\nIf you want to update a package that is installed from a different channel, you can use the --channel option with the conda update command. For example, to update the xarray package to the latest version from the conda-forge channel, you would use the following command:\nconda update --channel conda-forge xarray\n\n" ]
[ 1 ]
[]
[]
[ "conda", "python" ]
stackoverflow_0074265336_conda_python.txt
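Editor's note on the conda record above: for the conflict part of the question, one common mitigation (a suggestion, not the only way) is strict channel priority, so the solver consistently prefers one channel instead of mixing builds across channels:

conda config --add channels conda-forge
conda config --set channel_priority strict
conda update --all

With strict priority set, conda update --all resolves every package against the highest-priority channel first, which avoids most mixed-channel conflicts at the cost of occasionally refusing packages that exist only on lower-priority channels.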
Q: How to extract specific data in Python from a REST API request I'm using a REST API from RapidApi, and I succeded in printing the whole response, but I need only some specific parameters. Like, to print only the Deprature and Arrival times. When using params:{} it doesn't help, because that prints every parameter with the specified argument. I need the inverse, to print a specific parameter with more arguments. import requests url = "https://timetable-lookup.p.rapidapi.com/TimeTable/LHR/BCN/20221119/" headers = { "X-RapidAPI-Key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "X-RapidAPI-Host": "timetable-lookup.p.rapidapi.com" } response = requests.request("GET",url,headers=headers, params=querystring) print(response.text) The API response is the following: <?xml version="1.0" encoding="UTF-8"?> <OTA_AirDetailsRS PrimaryLangID="eng" Version="1.0" TransactionIdentifier="" FLSNote="This XML adds attributes not in the OTA XML spec. All such attributes start with FLS" FLSDevice="ota-xml-expanded" xmlns="http://www.opentravel.org/OTA/2003/05"> <Success/> <FLSResponseFields FLSOriginCode="LHR" FLSOriginName="Heathrow Airport" FLSDestinationCode="BCN" FLSDestinationName="Barcelona Airport" FLSStartDate="2022-11-19" FLSEndDate="2022-11-19" FLSResultCount="5" FLSRoutesFound="124" FLSBranchCount="1457" FLSTargetCount="1112" FLSRecordCount="785252"/> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T06:05:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T09:10:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T06:05:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T09:10:00" FLSArrivalTimeOffset="+0100" FlightNumber="472" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA472"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="32N"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T07:25:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T10:30:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." 
FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T07:25:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T10:30:00" FLSArrivalTimeOffset="+0100" FlightNumber="478" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA478"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="320"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T10:25:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T13:30:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T10:25:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T13:30:00" FLSArrivalTimeOffset="+0100" FlightNumber="474" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA474"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="32N"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T13:15:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T16:20:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T13:15:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T16:20:00" FLSArrivalTimeOffset="+0100" FlightNumber="480" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA480"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="320"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T19:20:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T22:25:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." 
FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T19:20:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T22:25:00" FLSArrivalTimeOffset="+0100" FlightNumber="482" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA482"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="32N"/> </FlightLegDetails> </FlightDetails> </OTA_AirDetailsRS> How can I write the code to display only the DepartureDateTime , ArrivalDateTime, and LocationCode for the arrival and destination country? Thank you! A: Try parsing the output using the xml.etree.ElementTree package. From there, you should be able to search through your xml tree to find the relevant data and display it however you wish. Here's a snippet to get you started: # create element tree object tree = ET.parse(xmlfile) # get root element root = tree.getroot() From there you can search from the root using a tree structure. The documentation is here https://docs.python.org/3/library/xml.etree.elementtree.html A: If anyone interested, I found a solution, I used BeautifulSoup Python library to parse the contents from my XML response. Use the code below as an example: soup = BeautifulSoup(response.content, 'html.parser') for i in range(5): print ("Arrivals: ",soup.findAll("flightlegdetails")[i]["arrivaldatetime"]) I printed 5 answers from type FlightLegDetail with ArrivalDateTime foobar. This link could give you some more information: https://www.projectpro.io/recipes/parse-xml-in-python
How to extract specific data in Python from a REST API request
I'm using a REST API from RapidApi, and I succeded in printing the whole response, but I need only some specific parameters. Like, to print only the Deprature and Arrival times. When using params:{} it doesn't help, because that prints every parameter with the specified argument. I need the inverse, to print a specific parameter with more arguments. import requests url = "https://timetable-lookup.p.rapidapi.com/TimeTable/LHR/BCN/20221119/" headers = { "X-RapidAPI-Key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "X-RapidAPI-Host": "timetable-lookup.p.rapidapi.com" } response = requests.request("GET",url,headers=headers, params=querystring) print(response.text) The API response is the following: <?xml version="1.0" encoding="UTF-8"?> <OTA_AirDetailsRS PrimaryLangID="eng" Version="1.0" TransactionIdentifier="" FLSNote="This XML adds attributes not in the OTA XML spec. All such attributes start with FLS" FLSDevice="ota-xml-expanded" xmlns="http://www.opentravel.org/OTA/2003/05"> <Success/> <FLSResponseFields FLSOriginCode="LHR" FLSOriginName="Heathrow Airport" FLSDestinationCode="BCN" FLSDestinationName="Barcelona Airport" FLSStartDate="2022-11-19" FLSEndDate="2022-11-19" FLSResultCount="5" FLSRoutesFound="124" FLSBranchCount="1457" FLSTargetCount="1112" FLSRecordCount="785252"/> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T06:05:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T09:10:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T06:05:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T09:10:00" FLSArrivalTimeOffset="+0100" FlightNumber="472" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA472"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="32N"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T07:25:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T10:30:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." 
FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T07:25:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T10:30:00" FLSArrivalTimeOffset="+0100" FlightNumber="478" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA478"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="320"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T10:25:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T13:30:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T10:25:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T13:30:00" FLSArrivalTimeOffset="+0100" FlightNumber="474" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA474"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="32N"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T13:15:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T16:20:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T13:15:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T16:20:00" FLSArrivalTimeOffset="+0100" FlightNumber="480" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA480"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="320"/> </FlightLegDetails> </FlightDetails> <FlightDetails TotalFlightTime="PT2H05M" TotalMiles="714" TotalTripTime="PT2H05M" FLSDepartureDateTime="2022-11-19T19:20:00" FLSDepartureTimeOffset="+0000" FLSDepartureCode="LHR" FLSDepartureName="Heathrow Airport" FLSArrivalDateTime="2022-11-19T22:25:00" FLSArrivalTimeOffset="+0100" FLSArrivalCode="BCN" FLSArrivalName="Barcelona Airport" FLSFlightType="NonStop" FLSFlightLegs="1" FLSFlightDays=".....6." 
FLSDayIndicator=""> <FlightLegDetails DepartureDateTime="2022-11-19T19:20:00" FLSDepartureTimeOffset="+0000" ArrivalDateTime="2022-11-19T22:25:00" FLSArrivalTimeOffset="+0100" FlightNumber="482" JourneyDuration="PT2H05M" SequenceNumber="1" LegDistance="714" FLSMeals="G" FLSInflightServices=" " FLSUUID="LHRBCN20221119BA482"> <DepartureAirport CodeContext="IATA" LocationCode="LHR" FLSLocationName="Heathrow Airport" Terminal="5" FLSDayIndicator=""/> <ArrivalAirport CodeContext="IATA" LocationCode="BCN" FLSLocationName="Barcelona Airport" Terminal="1" FLSDayIndicator=""/> <MarketingAirline Code="BA" CodeContext="IATA" CompanyShortName="British Airways"/> <Equipment AirEquipType="32N"/> </FlightLegDetails> </FlightDetails> </OTA_AirDetailsRS> How can I write the code to display only the DepartureDateTime , ArrivalDateTime, and LocationCode for the arrival and destination country? Thank you!
[ "Try parsing the output using the xml.etree.ElementTree package. From there, you should be able to search through your xml tree to find the relevant data and display it however you wish.\nHere's a snippet to get you started:\n# create element tree object\ntree = ET.parse(xmlfile)\n \n# get root element\nroot = tree.getroot()\n\nFrom there you can search from the root using a tree structure. The documentation is here https://docs.python.org/3/library/xml.etree.elementtree.html\n", "If anyone interested, I found a solution, I used BeautifulSoup Python library to parse the contents from my XML response. Use the code below as an example:\nsoup = BeautifulSoup(response.content, 'html.parser')\nfor i in range(5):\n print (\"Arrivals: \",soup.findAll(\"flightlegdetails\")[i][\"arrivaldatetime\"])\n\nI printed 5 answers from type FlightLegDetail with ArrivalDateTime foobar. This link could give you some more information:\nhttps://www.projectpro.io/recipes/parse-xml-in-python\n" ]
[ 1, 0 ]
[]
[]
[ "api", "python", "request", "rest" ]
stackoverflow_0074501243_api_python_request_rest.txt
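Editor's note on the XML record above: a namespace-aware ElementTree sketch that pulls exactly the three attributes the asker wanted. Tag and attribute names are taken from the response sample in the question; the "ota" prefix is an arbitrary local alias for the OTA namespace, and `response` is the requests object from the asker's code.

import xml.etree.ElementTree as ET

NS = {"ota": "http://www.opentravel.org/OTA/2003/05"}
root = ET.fromstring(response.text)
for leg in root.findall(".//ota:FlightLegDetails", NS):
    # LocationCode lives on the child airport elements; the times are leg attributes.
    dep = leg.find("ota:DepartureAirport", NS).get("LocationCode")
    arr = leg.find("ota:ArrivalAirport", NS).get("LocationCode")
    print(leg.get("DepartureDateTime"), dep, "->", leg.get("ArrivalDateTime"), arr)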
Q: Is it impossible to develop with FastAPI, uvloop, Windows? I'm learning FastAPI from a YouTube class and I succeeded, except for the uvloop module. I realized that uvloop doesn't install on Windows, and my development environment is Windows + PyCharm. How are others using this module? Are they only using a Mac? What should I do? Should I view other videos or remove uvloop? Or replace uvloop? Help me. A: FastAPI itself does not depend on uvloop. The transitive extra dependency uvicorn, installed with the so-called standard extras, however does. However, uvicorn[standard] is just an extra dependency and not a required one. So if you just install fastapi without any extras and uvicorn without extras, you should be good to go. A: You can develop FastAPI applications on PyCharm + Windows by creating a new FastAPI project directly from the PyCharm menu. If you generated a FastAPI application using OpenAPI, then use Docker to develop your FastAPI application with PyCharm, as in the picture below.
Is it impossible to develop with FastAPI, uvloop, Windows?
I'm learning FastAPI from a YouTube class and I succeeded, except for the uvloop module. I realized that uvloop doesn't install on Windows, and my development environment is Windows + PyCharm. How are others using this module? Are they only using a Mac? What should I do? Should I view other videos or remove uvloop? Or replace uvloop? Help me.
[ "Fastapi itself does not depend on uvloop. The transient extra dependency UVIcorn installed with ao called standard extras however does. However, UVicorn[standard] is just an extra dependency and not a required one. So if you just install fastapi without any extras and uvicorn without extras you should be good to go.\n", "You can develop fastAPI applications on pycharm + windows by creating new fastAPI project directly from pycharm menu.\nIf you generated a fastAPI application using openapi, then use docker to develop your fastAPI application with pycharm as in the picture below.\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "uvloop" ]
stackoverflow_0070731019_python_uvloop.txt
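Editor's note on the FastAPI record above: to make the first answer concrete, a minimal sketch that runs FastAPI on Windows with the stdlib asyncio event loop, so uvloop is never needed. It assumes plain `pip install fastapi uvicorn` without the [standard] extras.

import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"ok": True}

if __name__ == "__main__":
    # loop="asyncio" forces the stdlib event loop; "auto" would pick uvloop if installed.
    uvicorn.run(app, host="127.0.0.1", port=8000, loop="asyncio")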
Q: How to get XPATH elements that have different endings? I am trying to add each product to cart by going with the click over the product and then click the button add product to cart from this site https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.keys import Keys import time options = Options() options = webdriver.ChromeOptions() options.add_experimental_option("detach", True) options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 30) driver.get("https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html") cookies_bttn = driver.find_element(By.ID, "onetrust-accept-btn-handler") cookies_bttn.click() driver.implicitly_wait(10) country_save = driver.find_element(By.CSS_SELECTOR, "#geoblocking > div > div > div.select-country-container > button.button.is-sm.confirm") country_save.click() hoover = ActionChains(driver) time.sleep(10) pbody = wait.until(EC.presence_of_element_located((By.TAG_NAME, 'body'))) for x in range(5): pbody.send_keys(Keys.PAGE_DOWN) print('scrolled') time.sleep(1) sosete = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@class="category-product-card"]'))) print(len(sosete)) for x in str(len(sosete)): ActionChains(driver).move_to_element(sosete).perform() wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".quick-purchase__detail__button"))).click() Output: AttributeError: move_to requires a WebElement I've tried many ways but errors pop up everytime and I can't find any solution I thought about making a for loop using XPATH but I dont know how to get each product because they have different li like so: first product = /html/body/div[2]/div/div/div[2]/main/div/div/div/div[2]/section[1]/div/ul/li[1]/div second product = /html/body/div[2]/div/div/div[2]/main/div/div/div/div[2]/section[1]/div/ul/li[2]/div And so on A: Assuming the code from your previous answer, let actions be defined as: actions = ActionChains(driver) Depending on your geographical IP address, you might need: try: wait.until(EC.element_to_be_clickable((By.XPATH, '//span[@class="bskico-cancel-16"]'))).click() print('removed location popup') except Exception as e: print('no location popup') Assuming the items list as: items = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@class="category-product-card"]'))) You add to your code: for i in items: actions.move_to_element(i).perform() t.sleep(5) i.find_element(By.XPATH, './/span[@class="quick-purchase__detail__button__text"]').click() print('added to basket', i.text) t.sleep(5) The hardcoded waiting times are there to account for network slowness, as well as server slowness. Shopping basket accepts a maximum of 26 products, so you will not be able to add them all to it.
How to get XPATH elements that have different endings?
I am trying to add each product to cart by going with the click over the product and then click the button add product to cart from this site https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.keys import Keys import time options = Options() options = webdriver.ChromeOptions() options.add_experimental_option("detach", True) options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 30) driver.get("https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html") cookies_bttn = driver.find_element(By.ID, "onetrust-accept-btn-handler") cookies_bttn.click() driver.implicitly_wait(10) country_save = driver.find_element(By.CSS_SELECTOR, "#geoblocking > div > div > div.select-country-container > button.button.is-sm.confirm") country_save.click() hoover = ActionChains(driver) time.sleep(10) pbody = wait.until(EC.presence_of_element_located((By.TAG_NAME, 'body'))) for x in range(5): pbody.send_keys(Keys.PAGE_DOWN) print('scrolled') time.sleep(1) sosete = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@class="category-product-card"]'))) print(len(sosete)) for x in str(len(sosete)): ActionChains(driver).move_to_element(sosete).perform() wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".quick-purchase__detail__button"))).click() Output: AttributeError: move_to requires a WebElement I've tried many ways but errors pop up everytime and I can't find any solution I thought about making a for loop using XPATH but I dont know how to get each product because they have different li like so: first product = /html/body/div[2]/div/div/div[2]/main/div/div/div/div[2]/section[1]/div/ul/li[1]/div second product = /html/body/div[2]/div/div/div[2]/main/div/div/div/div[2]/section[1]/div/ul/li[2]/div And so on
[ "Assuming the code from your previous answer, let actions be defined as:\nactions = ActionChains(driver)\n\nDepending on your geographical IP address, you might need:\ntry:\n wait.until(EC.element_to_be_clickable((By.XPATH, '//span[@class=\"bskico-cancel-16\"]'))).click()\n print('removed location popup')\nexcept Exception as e:\n print('no location popup')\n\nAssuming the items list as:\nitems = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@class=\"category-product-card\"]')))\n\nYou add to your code:\nfor i in items:\n actions.move_to_element(i).perform()\n t.sleep(5)\n i.find_element(By.XPATH, './/span[@class=\"quick-purchase__detail__button__text\"]').click()\n print('added to basket', i.text)\n t.sleep(5)\n\nThe hardcoded waiting times are there to account for network slowness, as well as server slowness.\nShopping basket accepts a maximum of 26 products, so you will not be able to add them all to it.\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "selenium_chromedriver", "web_scraping" ]
stackoverflow_0074523803_python_selenium_selenium_chromedriver_web_scraping.txt
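The AttributeError in the record above comes from passing the whole sosete list to move_to_element, which accepts a single WebElement, and from looping over str(len(sosete)) rather than the elements themselves. A minimal sketch of the corrected loop, reusing the question's driver, wait and locators and assuming the quick-purchase button sits inside each product card (as the accepted answer's relative lookup suggests); the sleep lengths are guesses to tune:

for item in sosete:
    # hover one card at a time; move_to_element needs a single WebElement
    ActionChains(driver).move_to_element(item).perform()
    time.sleep(2)
    # scope the lookup to the hovered card so the matching button is clicked
    item.find_element(By.CSS_SELECTOR, ".quick-purchase__detail__button").click()
    time.sleep(2)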
Q: How to nest a json object into an empty json object to create a geojson file? I'm trying to create a geojson file. I have a list of objects and their coordinates in an excel file. I brought that information into a pandas dataframe and am trying to loop through the records to create a geojson file. I mostly have everything working, but I'm trying to match the schema of geojson.io so I can open the file on that platform, make edits to the points, and then save again as a geojson file. To create the coordinates object, there are two brackets before the list of coordinates, and I'm having a hard time trying to replicate that in python. How do I code a nested object into another? A: I figured it out! I just needed to discover the geojson package.
How to nest a json object into an empty json object to create a geojson file?
I'm trying to create a geojson file. I have a list of objects and their coordinates in an excel file. I brought that information into a pandas dataframe and am trying to loop through the records to create a geojson file. I mostly have everything working, but I'm trying to match the schema of geojson.io so I can open the file on that platform, make edits to the points, and then save again as a geojson file. To create the coordinates object, there are two brackets before the list of coordinates, and I'm having a hard time trying to replicate that in python. How do I code a nested object into another?
[ "I figured it out! I just needed to discover the geojson package.\n\n" ]
[ 0 ]
[]
[]
[ "geojson", "json", "python" ]
stackoverflow_0074523714_geojson_json_python.txt
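The accepted fix above names the geojson package without showing it. A minimal sketch of the idea, assuming a dataframe with hypothetical lon, lat and label columns; the nested brackets the asker describes come from the geometry object's structure, which the library builds for you:

import geojson
import pandas as pd

df = pd.read_excel("points.xlsx")  # hypothetical file with lon/lat/label columns
features = []
for _, row in df.iterrows():
    features.append(geojson.Feature(
        geometry=geojson.Point((row["lon"], row["lat"])),  # coordinates are (x, y) = (lon, lat)
        properties={"label": row["label"]},
    ))
with open("points.geojson", "w") as f:
    geojson.dump(geojson.FeatureCollection(features), f)

The resulting file follows the standard GeoJSON schema, so a tool like geojson.io should open and edit it directly.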
Q: Mass converting .doc files to .docx I have around 1.2 million .doc files (all around 50 KB each) in need of conversion to .docx. So far I have tried using Word via the win32com interface for Python, but it is really, really slow (1-2 files per second). Is there any faster way to accomplish this? Edit: The code I'm using so far: def convert_doc_to_docx(): dir = "sampledir" word = win32com.client.Dispatch("Word.Application") word.visible = 0 globData = glob.iglob(dir + "*.doc") totalFiles = len([name for name in os.listdir(dir) if os.path.isfile(os.path.join(dir, name))]) for i, doc in enumerate(globData): in_file = os.path.abspath(doc) wb = word.Documents.Open(in_file) out_file = os.path.abspath(doc + "x") wb.SaveAs2(out_file, FileFormat=16) # file format for docx wb.Close() os.remove(in_file) print(f"{i+1} of {totalFiles} files processed!") word.Quit() A: As other commenters have suggested wordconv seems to be a good solution and much faster than using win32com. For ~1700 files transfer time was ~389 seconds or about ~.21 seconds per object. This time largely can depend on your system hardware since it is involving a lot of read and write operations as well as some processing power for the conversion. I basically maxed out 16GB of ram and an old 6th gen i7. Using a HDD probably will slow it down a lot. Even at .21 seconds per object it's going to take like 70 hours (if it's similar to the speed on my machine). But it's a vast improvement of 1-2 second per object which is 10x as long. I use subprocess.Popen() to run the command C:\\Program Files\\Microsoft Office\\root\\Office16\\Wordconv.exe -oice -nme srcfile dstfile in the for loop. Although the recommended way to invoke a subprocess is subprocess.run() I used subprocess.Popen() because it won't wait for the process to finish before continuing. There might be a way to do this with subprocess.run as well but I'm not familiar enough with it to say. (maybe someone can provide feedback on that) import os import subprocess from timeit import default_timer as timer def convert_doc_to_docx(): src_dir = r"c:\Users\myuser\test" out_dir = "c:\\Users\\myuser\\test\\dst\\" all_files = [name for name in os.listdir(src_dir) if os.path.isfile(os.path.join(src_dir, name))] file_count = len(all_files) # change according to where "WordConv.exe" is located on your system path_to_wordconv = "C:\\Program Files\\Microsoft Office\\root\\Office16\\Wordconv.exe" print(f"Source dir file count: {file_count}") start = timer() for file in all_files: in_file_path = os.path.join(src_dir, file) out_file_path = out_dir + file + "x" # this will get process intensive subprocess.Popen([f"{path_to_wordconv}","-oice","-nme",f"{in_file_path}",f"{out_file_path}"]) end = timer() count_output_dir = len([name for name in os.listdir(out_dir) if os.path.isfile(os.path.join(out_dir, name))]) elapsed_time = end-start time_object = elapsed_time / count_output_dir print(f"Elapsed time: {elapsed_time} second") print(f"Time per object: {time_object} second") return convert_doc_to_docx() Output Source dir file count: 1728 Elapsed time: 369.7448267 second Time per object: 0.21397270063657406 second
Mass converting .doc files to .docx
I have around 1.2 million .doc files (all around 50 KB each) in need of conversion to .docx. So far I have tried using Word via the win32com interface for Python, but it is really, really slow (1-2 files per second). Is there any faster way to accomplish this? Edit: The code I'm using so far: def convert_doc_to_docx(): dir = "sampledir" word = win32com.client.Dispatch("Word.Application") word.visible = 0 globData = glob.iglob(dir + "*.doc") totalFiles = len([name for name in os.listdir(dir) if os.path.isfile(os.path.join(dir, name))]) for i, doc in enumerate(globData): in_file = os.path.abspath(doc) wb = word.Documents.Open(in_file) out_file = os.path.abspath(doc + "x") wb.SaveAs2(out_file, FileFormat=16) # file format for docx wb.Close() os.remove(in_file) print(f"{i+1} of {totalFiles} files processed!") word.Quit()
[ "As other commenters have suggested wordconv seems to be a good solution and much faster than using win32com. For ~1700 files transfer time was ~389 seconds or about ~.21 seconds per object. This time largely can depend on your system hardware since it is involving a lot of read and write operations as well as some processing power for the conversion. I basically maxed out 16GB of ram and an old 6th gen i7. Using a HDD probably will slow it down a lot. Even at .21 seconds per object it's going to take like 70 hours (if it's similar to the speed on my machine). But it's a vast improvement of 1-2 second per object which is 10x as long.\nI use subprocess.Popen() to run the command C:\\\\Program Files\\\\Microsoft Office\\\\root\\\\Office16\\\\Wordconv.exe -oice -nme srcfile dstfile in the for loop.\nAlthough the recommended way to invoke a subprocess is subprocess.run() I used subprocess.Popen() because it won't wait for the process to finish before continuing. There might be a way to do this with subprocess.run as well but I'm not familiar enough with it to say. (maybe someone can provide feedback on that)\nimport os\nimport subprocess\nfrom timeit import default_timer as timer\n\n\n\ndef convert_doc_to_docx():\n \n src_dir = r\"c:\\Users\\myuser\\test\"\n out_dir = \"c:\\\\Users\\\\myuser\\\\test\\\\dst\\\\\"\n all_files = [name for name in os.listdir(src_dir) if os.path.isfile(os.path.join(src_dir, name))]\n file_count = len(all_files)\n\n # change according to where \"WordConv.exe\" is located on your system\n path_to_wordconv = \"C:\\\\Program Files\\\\Microsoft Office\\\\root\\\\Office16\\\\Wordconv.exe\"\n\n print(f\"Source dir file count: {file_count}\")\n start = timer()\n for file in all_files:\n in_file_path = os.path.join(src_dir, file)\n out_file_path = out_dir + file + \"x\"\n\n # this will get process intensive \n subprocess.Popen([f\"{path_to_wordconv}\",\"-oice\",\"-nme\",f\"{in_file_path}\",f\"{out_file_path}\"])\n \n end = timer()\n\n count_output_dir = len([name for name in os.listdir(out_dir) if os.path.isfile(os.path.join(out_dir, name))]) \n elapsed_time = end-start\n time_object = elapsed_time / count_output_dir\n\n \n print(f\"Elapsed time: {elapsed_time} second\")\n print(f\"Time per object: {time_object} second\")\n \n\n\n return\n \n\nconvert_doc_to_docx()\n\nOutput\nSource dir file count: 1728\nElapsed time: 369.7448267 second\nTime per object: 0.21397270063657406 second\n\n" ]
[ 1 ]
[]
[]
[ "ms_word", "python" ]
stackoverflow_0074521779_ms_word_python.txt
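One caveat about the answer above: the bare Popen loop never waits, so at 1.2 million files it can spawn an unbounded number of Wordconv.exe processes. A hedged sketch that caps concurrency instead, assuming the same Wordconv.exe path and hypothetical source/output folders:

import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

WORDCONV = r"C:\Program Files\Microsoft Office\root\Office16\Wordconv.exe"
src_dir = r"c:\docs"      # hypothetical source folder
out_dir = r"c:\docs\dst"  # hypothetical output folder

def convert(name):
    # subprocess.run waits for each conversion; the pool supplies the parallelism
    subprocess.run([WORDCONV, "-oice", "-nme",
                    os.path.join(src_dir, name),
                    os.path.join(out_dir, name + "x")],
                   check=True)  # raises CalledProcessError if a conversion fails

docs = [f for f in os.listdir(src_dir) if f.lower().endswith(".doc")]
with ThreadPoolExecutor(max_workers=8) as pool:  # tune for your CPU and disk
    list(pool.map(convert, docs))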
Q: Calculate the difference between two list, and store the result in a third list. Python How would I calculate the difference between two separate list and store them in a third list. for example... list_1 [('M', 4000.0), ('R', 5320.0)] list_2 [('M', 4222.0), ('R', 5442.0)] I tried the following list_3 = [] list_3.append([list_1] - [list_2]) print(list_3) but I'm met with, a TypeError TypeError: unsupported operand type(s) for -: 'list' and 'list' A: This seems like something better suited to a dictionary dict_1 = {'M': 4000.0, 'R': 5320.0} dict_2 = {'M': 4222.0, 'R': 5442.0} dict_3 = {} dict_3['M'] = dict_1['M'] - dict_2['M'] dict_3['R'] = dict_1['R'] - dict_2['R'] If you're set on using a list of tuples you could do something with tuple(map(operator.add, tup1, tup2) which allows you to add and subtract tuples. You would have to do something about the string values though A: OK, here's one possibility, which might or might not be what you want. It assumes that the tuples are in the same order in both lists and does not verify that the first elements of corresponding tuples are the same (which is possibly a bug in waiting). It produces the result of subtracting the numeric values in the second list from the numeric values in the first list, rather than the absolute value of the differences; in your example, both numbers are negative. (If you wanted the absolute value, use abs(v1 - v2)) It uses zip to combine the two lists, and a list comprehension to put the computed results back together into a list. >>> def diff(list_1, list_2): ... return [(first, v1 - v2) for ((first, v1), (_, v2)) in zip(list_1, list_2)] ... >>> >>> list_1 = [('M', 4000.0), ('R', 5320.0)] >>> list_2 = [('M', 4222.0), ('R', 5442.0)] >>> diff(list_1, list_2) [('M', -222.0), ('R', -122.0)] Comprehensions are good when you need to do an element-by-element computation from one or more input lists. The comprehension just states how to create each output element from the inputs; the possibly-unfamiliar syntax in the for clause states how to decompose the inputs into individual pieces for use by the computation. zip takes two or more lists and constructs an iterator of tuples; the tuples take one value from each list and the iterator continues until one of the lists is exhausted. So if list1 is a list of Xs and list2 is a list of Ys, zip(list1, list2) produces an iterator over tuples of (X, Y). In this case, X and Y are both tuples, with the same structure: a letter and a number. So zip creates tuples of the form ((letter, number), (letter, number)) and the for clause takes that apart using the pattern ((first, v1), (_, v2)), which precisely matches the structure of the zipped tuples. (It must match or Python would complain.) I use the variable name _ to indicate that the value is ignored (that's a convention, not a rule, but later you'll find that it is a rule in some Python constructs). The letter from the second list is ignored because there's nothing in the problem description which indicates what to do with it.
Calculate the difference between two lists, and store the result in a third list. Python
How would I calculate the difference between two separate lists and store them in a third list. For example: list_1 [('M', 4000.0), ('R', 5320.0)] list_2 [('M', 4222.0), ('R', 5442.0)] I tried the following: list_3 = [] list_3.append([list_1] - [list_2]) print(list_3) but I'm met with a TypeError: TypeError: unsupported operand type(s) for -: 'list' and 'list'
[ "This seems like something better suited to a dictionary\ndict_1 = {'M': 4000.0, 'R': 5320.0}\ndict_2 = {'M': 4222.0, 'R': 5442.0}\n\ndict_3 = {}\ndict_3['M'] = dict_1['M'] - dict_2['M']\ndict_3['R'] = dict_1['R'] - dict_2['R']\n\nIf you're set on using a list of tuples you could do something with\ntuple(map(operator.add, tup1, tup2)\n\nwhich allows you to add and subtract tuples. You would have to do something about the string values though\n", "OK, here's one possibility, which might or might not be what you want. It assumes that the tuples are in the same order in both lists and does not verify that the first elements of corresponding tuples are the same (which is possibly a bug in waiting). It produces the result of subtracting the numeric values in the second list from the numeric values in the first list, rather than the absolute value of the differences; in your example, both numbers are negative. (If you wanted the absolute value, use abs(v1 - v2))\nIt uses zip to combine the two lists, and a list comprehension to put the computed results back together into a list.\n>>> def diff(list_1, list_2):\n... return [(first, v1 - v2) for ((first, v1), (_, v2)) in zip(list_1, list_2)]\n... \n>>> \n>>> list_1 = [('M', 4000.0), ('R', 5320.0)]\n>>> list_2 = [('M', 4222.0), ('R', 5442.0)]\n>>> diff(list_1, list_2)\n[('M', -222.0), ('R', -122.0)]\n\nComprehensions are good when you need to do an element-by-element computation from one or more input lists. The comprehension just states how to create each output element from the inputs; the possibly-unfamiliar syntax in the for clause states how to decompose the inputs into individual pieces for use by the computation.\nzip takes two or more lists and constructs an iterator of tuples; the tuples take one value from each list and the iterator continues until one of the lists is exhausted. So if list1 is a list of Xs and list2 is a list of Ys, zip(list1, list2) produces an iterator over tuples of (X, Y). In this case, X and Y are both tuples, with the same structure: a letter and a number. So zip creates tuples of the form ((letter, number), (letter, number)) and the for clause takes that apart using the pattern ((first, v1), (_, v2)), which precisely matches the structure of the zipped tuples. (It must match or Python would complain.)\nI use the variable name _ to indicate that the value is ignored (that's a convention, not a rule, but later you'll find that it is a rule in some Python constructs). The letter from the second list is ignored because there's nothing in the problem description which indicates what to do with it.\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074523987_list_python.txt
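Both answers above assume the tuples line up by position. If the keys could appear in a different order, matching on the key with a dictionary lookup is safer; a small sketch using the sample data from the question:

list_1 = [('M', 4000.0), ('R', 5320.0)]
list_2 = [('M', 4222.0), ('R', 5442.0)]

lookup = dict(list_2)  # key -> value for the second list
list_3 = [(key, value - lookup[key]) for key, value in list_1]
print(list_3)  # [('M', -222.0), ('R', -122.0)]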
Q: Pandas - Conditionally finding max of row according to column value while maintaining index order I'm trying to find, hopefully, a one lines to accomplish the following: I have the following dataframe: import pandas as pd import numpy as np SIZE = 10 df = pd.DataFrame({'col1': np.random.randint(100, size=SIZE), 'col2': np.random.randint(100, size=SIZE), 'col3': np.random.randint(100, size=SIZE), 'col4': np.random.randint(2, size=SIZE)}) print(df) outputting col1 col2 col3 col4 0 55 96 40 0 1 82 59 34 1 2 85 66 25 1 3 90 69 27 0 4 36 32 79 1 5 33 69 80 1 6 11 53 88 0 7 31 51 96 0 8 89 76 88 1 9 4 76 47 0 I'm currently ignoring col4 and calculating the max value of each row as follows: df[['col1', 'col2', 'col3']].max(axis=1) resulting in 0 96 1 82 2 85 3 90 4 79 5 80 6 88 7 96 8 89 9 76 dtype: int64 I want to use col4 to conditionally calculate the max value. If col4 value is 0, calculate max value of col1, else calculate max value of ['col2', 'col3']. I also want to keep the same index/order of the dataframe. The end result would be 0 55 # col1 1 59 # max(col2, col3) 2 66 # max(col2, col3) 3 90 # col1 4 79 # max(col2, col3) 5 80 # max(col2, col3) 6 11 # col1 7 31 # col1 8 88 # max(col2, col3) 9 4 # col1 dtype: int64 One possibility would be to create two new dataframes, calculate the max, and join them again, but this would possibly mess the index (I guess I could save that too). Any better ideas? Apologies if this question was already asked, but I couldn't find with the search terms A: There might be a better option... but this does the job by simply applying your rule as a lambda row-wise: df.apply(lambda x: x[["col2", "col3"]].max() if x["col4"] else x["col1"], axis=1) A: A vectorial way would be: out = df['col1'].where(df['col4'].eq(0), df[['col2', 'col3']].max(axis=1)) Or: out = df[['col2', 'col3']].max(axis=1) out.loc[df['col4'].eq(0)] = df['col1'] output: 0 55 1 59 2 66 3 90 4 79 5 80 6 11 7 31 8 88 9 4 Name: col1, dtype: int64
Pandas - Conditionally finding max of row according to column value while maintaining index order
I'm trying to find, hopefully, a one lines to accomplish the following: I have the following dataframe: import pandas as pd import numpy as np SIZE = 10 df = pd.DataFrame({'col1': np.random.randint(100, size=SIZE), 'col2': np.random.randint(100, size=SIZE), 'col3': np.random.randint(100, size=SIZE), 'col4': np.random.randint(2, size=SIZE)}) print(df) outputting col1 col2 col3 col4 0 55 96 40 0 1 82 59 34 1 2 85 66 25 1 3 90 69 27 0 4 36 32 79 1 5 33 69 80 1 6 11 53 88 0 7 31 51 96 0 8 89 76 88 1 9 4 76 47 0 I'm currently ignoring col4 and calculating the max value of each row as follows: df[['col1', 'col2', 'col3']].max(axis=1) resulting in 0 96 1 82 2 85 3 90 4 79 5 80 6 88 7 96 8 89 9 76 dtype: int64 I want to use col4 to conditionally calculate the max value. If col4 value is 0, calculate max value of col1, else calculate max value of ['col2', 'col3']. I also want to keep the same index/order of the dataframe. The end result would be 0 55 # col1 1 59 # max(col2, col3) 2 66 # max(col2, col3) 3 90 # col1 4 79 # max(col2, col3) 5 80 # max(col2, col3) 6 11 # col1 7 31 # col1 8 88 # max(col2, col3) 9 4 # col1 dtype: int64 One possibility would be to create two new dataframes, calculate the max, and join them again, but this would possibly mess the index (I guess I could save that too). Any better ideas? Apologies if this question was already asked, but I couldn't find with the search terms
[ "There might be a better option... but this does the job by simply applying your rule as a lambda row-wise:\ndf.apply(lambda x: x[[\"col2\", \"col3\"]].max() if x[\"col4\"] else x[\"col1\"], axis=1)\n\n", "A vectorial way would be:\nout = df['col1'].where(df['col4'].eq(0), df[['col2', 'col3']].max(axis=1))\n\nOr:\nout = df[['col2', 'col3']].max(axis=1)\nout.loc[df['col4'].eq(0)] = df['col1']\n\noutput:\n0 55\n1 59\n2 66\n3 90\n4 79\n5 80\n6 11\n7 31\n8 88\n9 4\nName: col1, dtype: int64\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074523898_dataframe_pandas_python.txt
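A third equivalent option for the record above is numpy.where; wrapping the result in a Series keeps the frame's original index, which the question asked to preserve. A short sketch reusing df from the question:

import numpy as np

out = pd.Series(
    np.where(df['col4'].eq(0), df['col1'], df[['col2', 'col3']].max(axis=1)),
    index=df.index,  # np.where returns a bare array, so restore the index here
)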
Q: remove  from python pandas dataframe I'm trying to remove  and » from a column in a pandas dataframe. It would look something like this: | special_character | | mobileapps (new ad unit) » en-ca » alerts » severe-outlookdesktop | | mobileapps (new ad unit) » fr-ca » alerts » (video) » videotablet | | mobileapps (new ad unit) » en-ca » smartphone | I've tried df = df9['special_character'].replace('Â', ' ').replace('»', ' ') but no luck. Are there other ways to remove it? I'm kinda stuck here. A: You can add regex df9['special_character_remove']= df9['special_character'].replace({'Â': ' ','»': ' '},regex=True)
remove  from python pandas dataframe
I'm trying to remove  and » from a column in a pandas dataframe. It would look something like this: | special_character | | mobileapps (new ad unit) » en-ca » alerts » severe-outlookdesktop | | mobileapps (new ad unit) » fr-ca » alerts » (video) » videotablet | | mobileapps (new ad unit) » en-ca » smartphone | I've tried df = df9['special_character'].replace('Â', ' ').replace('»', ' ') but no luck. Are there other ways to remove it? I'm kinda stuck here.
[ "You can add regex\ndf9['special_character_remove']= df9['special_character'].replace({'Â': ' ','»': ' '},regex=True)\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074524325_pandas_python.txt
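A side note on the record above: Â» is usually the two UTF-8 bytes of » (0xC2 0xBB) decoded as Latin-1, i.e. mojibake. If every row in the column suffers the same mis-decoding, reversing it recovers the real characters instead of blanking them. A sketch; this is an assumption to verify, since rows that are already clean would raise a UnicodeDecodeError:

df9['special_character_fixed'] = (
    df9['special_character']
    .str.encode('latin-1')  # back to the original UTF-8 bytes
    .str.decode('utf-8')    # decode them correctly: 'Â»' becomes '»'
)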
Q: Can pandas read and archive within an archive? I have an archive file (archive.tar.gz) which contains multiple archive files (file.txt.gz). If I first extract the .txt.gz files to a folder, I can then open them with pandas directly using: import pandas as pd df = pd.read_csv('file.txt.gz', sep='\t', encoding='utf-8') But if I explore the archive using the tarfile library, then it doesn't work: import pandas as pd import tarfile tar = tarfile.open("archive.tar.gz", "r:*") csv_path = tar.getnames()[1] df = pd.read_csv(tar.extractfile(csv_path), sep='\t', encoding='utf-8') UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte Is that possible to do? A: When you open the file by filename, then Pandas will be able to infer that it is compressed with gzip due to the *.gz extension on the filename. When you pass it a file object, you need to tell it explicitly about the compression so that it can decompress it as it reads the file. This should work: df = pd.read_csv( tar.extractfile(csv_path), compression='gzip', sep='\t', encoding='utf-8') For more details, see the entry about the "compression" argument in the documentation for read_csv(). A: read_csv is probably trying to interpret the input as a filename. If you wrap the extracted file in io.BytesIO, I suspect you should be able to get it to treat it as it would an open file handle from io import BytesIO df = pd.read_csv(BytesIO(tar.extractfile(csv_path)), ...) A: A bit late but I had the same requirement and the following solution works. Two small changes - you have to read the extracted file tar.extractfile(xx).read() and pass it to BytesIO(): from io import BytesIO tar = tarfile.open("archive.tar.gz", "r:gz") csv_path = tar.getnames()[1] csv_bytes = BytesIO(tar.extractfile(csv_path).read()) df = pd.read_csv(csv_bytes, sep='\t', encoding='utf-8')
Can pandas read an archive within an archive?
I have an archive file (archive.tar.gz) which contains multiple archive files (file.txt.gz). If I first extract the .txt.gz files to a folder, I can then open them with pandas directly using: import pandas as pd df = pd.read_csv('file.txt.gz', sep='\t', encoding='utf-8') But if I explore the archive using the tarfile library, then it doesn't work: import pandas as pd import tarfile tar = tarfile.open("archive.tar.gz", "r:*") csv_path = tar.getnames()[1] df = pd.read_csv(tar.extractfile(csv_path), sep='\t', encoding='utf-8') UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte Is that possible to do?
[ "When you open the file by filename, then Pandas will be able to infer that it is compressed with gzip due to the *.gz extension on the filename.\nWhen you pass it a file object, you need to tell it explicitly about the compression so that it can decompress it as it reads the file.\nThis should work:\ndf = pd.read_csv(\n tar.extractfile(csv_path),\n compression='gzip',\n sep='\\t',\n encoding='utf-8')\n\nFor more details, see the entry about the \"compression\" argument in the documentation for read_csv().\n", "read_csv is probably trying to interpret the input as a filename. If you wrap the extracted file in io.BytesIO, I suspect you should be able to get it to treat it as it would an open file handle\nfrom io import BytesIO\ndf = pd.read_csv(BytesIO(tar.extractfile(csv_path)), ...)\n\n", "A bit late but I had the same requirement and the following solution works. Two small changes - you have to read the extracted file tar.extractfile(xx).read() and pass it to BytesIO():\nfrom io import BytesIO\n\ntar = tarfile.open(\"archive.tar.gz\", \"r:gz\")\ncsv_path = tar.getnames()[1]\ncsv_bytes = BytesIO(tar.extractfile(csv_path).read())\ndf = pd.read_csv(csv_bytes, sep='\\t', encoding='utf-8')\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "pandas", "python", "tarfile" ]
stackoverflow_0060346002_pandas_python_tarfile.txt
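Extending the accepted answer above, a short sketch that loads every .txt.gz member of the outer archive into a dict of dataframes; compression='gzip' is passed explicitly because the extracted file object carries no filename for pandas to sniff:

import tarfile
import pandas as pd

frames = {}
with tarfile.open("archive.tar.gz", "r:*") as tar:
    for member in tar.getmembers():
        if member.isfile() and member.name.endswith(".txt.gz"):
            frames[member.name] = pd.read_csv(
                tar.extractfile(member),
                compression="gzip", sep="\t", encoding="utf-8",
            )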
Q: How to do a function in python that loops through two or more data frames of different sizes and indexes in pandas? I am looking to create a function that loops through two existing dataframes I have based on some conditions and generates a value relating to those variables. Forgive the wording, but for those familiar with excel the problem would be solved with index match and then normal equations within parentheses. The excel solution is simple but is static and does not cover each date or unit in my data, as explained below. I have created a smaller sample of my dataset, which is fairly large and covers many date periods (one year), whereas in the sample I only show one date. There are also periods in the sample as column headers ranging 1-10, but in my dataset it runs from 1-48 for each day, which I have shortened for the example here to make everything less clunky. DF1: Efficiency Quintile Date MODELLED Rank BM_Unit 1 2 3 4 5 6 7 8 9 10 Gas_1 08/01/2022 1 FAWN-1 130.43 130.93 130.78 130.58 130.57 130.54 130.71 130.87 130.89 130.98 Gas_1 08/01/2022 2 GRAI-6 339.45 342.33 322.53 312.40 303.78 307.60 316.35 277.18 293.48 325.75 Gas_1 08/01/2022 3 EECL-1 363.31 386.71 364.46 363.31 363.31 363.38 361.87 305.06 286.99 282.74 Gas_1 08/01/2022 4 PEMB-21 334.40 419.50 436.70 441.90 440.50 415.80 327.90 323.70 322.70 331.10 Gas_1 08/01/2022 5 PEMB-51 370.65 370.45 359.90 326.25 326.20 322.65 324.60 274.25 319.55 288.80 Gas_1 08/01/2022 6 PEMB-41 337.00 423.40 423.10 427.50 427.00 419.00 361.00 318.80 263.20 226.70 Gas_1 08/01/2022 7 WBURB-1 240.41 293.17 252.27 256.51 261.65 253.44 247.14 217.08 223.11 199.27 Gas_1 08/01/2022 8 PEMB-31 297.73 360.27 355.40 357.07 358.67 353.07 300.93 284.73 268.73 255.20 Gas_1 08/01/2022 9 GRMO-1 106.62 106.11 105.96 106.00 106.00 105.98 105.99 105.90 105.47 105.31 Gas_2 08/01/2022 10 PEMB-11 432.80 430.40 430.70 431.90 432.10 429.30 430.00 408.30 320.90 346.50 Gas_2 08/01/2022 11 STAY-1 216.07 223.27 232.67 243.47 234.67 221.73 227.00 128.57 237.00 218.33 Gas_2 08/01/2022 12 GRAI-7 425.20 425.40 377.90 339.40 342.00 329.80 408.00 0.00 329.00 257.30 Gas_2 08/01/2022 13 DIDCB6 465.80 459.50 411.60 411.70 413.70 200.55 167.83 0.00 264.15 248.29 Gas_2 08/01/2022 14 SCCL-3 311.50 337.40 378.80 311.50 381.30 0.00 0.00 0.00 0.00 0.00 DF2: Fuel Efficiency Quantile Quintile Efficiency (%) Emissions Factor (tCO2e/MWh) Efficiency_adjusted_EF Gas 1 Gas_1 1 0.467 0.467 Gas 2 Gas_2 1 0.467 0.467 Gas 3 Gas_3 1 0.467 0.467 Gas 4 Gas_4 1 0.467 0.467 Gas 5 Gas_5 1 0.467 0.467 Coal 1 Coal_1 1 1.046 1.046 DF3 (Desired): MODELLED Rank BM_Unit 1 2 3 4 5 6 7 8 9 10 1 FAWN-1 30.46 30.57 30.54 30.49 30.49 30.48 30.52 30.56 30.56 30.58 2 GRAI-6 79.26 79.93 75.31 72.95 70.93 71.82 73.87 64.72 68.53 76.06 3 EECL-1 84.83 90.30 85.10 84.83 84.83 84.85 84.50 71.23 67.01 66.02 4 PEMB-21 78.08 97.95 101.97 103.18 102.86 97.09 76.56 75.58 75.35 77.31 5 PEMB-51 86.55 86.50 84.04 76.18 76.17 75.34 75.79 64.04 74.61 67.43 6 PEMB-41 78.69 98.86 98.79 99.82 99.70 97.84 84.29 74.44 61.46 52.93 7 WBURB-1 56.13 68.45 58.90 59.89 61.10 59.18 57.71 50.69 52.10 46.53 8 PEMB-31 69.52 84.12 82.99 83.38 83.75 82.44 70.27 66.49 62.75 59.59 9 GRMO-1 24.90 24.78 24.74 24.75 24.75 24.75 24.75 24.73 24.63 24.59 10 PEMB-11 101.06 100.50 100.57 100.85 100.90 100.24 100.41 95.34 74.93 80.91 11 STAY-1 50.45 52.13 54.33 56.85 54.79 51.77 53.00 30.02 55.34 50.98 12 GRAI-7 99.28 99.33 88.24 79.25 79.86 77.01 95.27 0.00 76.82 60.08 13 DIDCB6 108.76 107.29 96.11 96.13 96.60 46.83 39.19 0.00 61.68 57.98 14 SCCL-3 72.74 78.78 88.45 72.74 89.03 0.00 0.00 0.00 0.00 0.00 I need the function to essentially help return an output similar to DF3 in the example that loops through DF1 and DF2 based on the following equation: For each date, BM_unit and period (1, 2, 3...) in DF1, multiply the values (e.g. 130.43) also in DF1 by the matching parameters in DF2: the number in the last column that matches the same column in DF1 (gas_1 = 0.467, etc.). The logic makes sense in excel through a simple excel function with index and match, but based on the amount of conditions I know this needs to be a function that loops through the whole dataset; I am just a little confused on how to do this. Any help is appreciated! A: Your data is structured in 'wide' format which is a bit of an anti-pattern. (It's worth reading up on 'third normal form' - might seem in the weeds but it's one of those foundational concepts in relational/tabular data). So step 1 should be getting it into a standard form (where each row is a unique 'value' with a unique 'index' - in this case the combination of the number identifying the 'column' - the sample? - and the BM_unit, I think). The melt method does this. Then it's a simple merge to join the two (similar to a database join). Finally, if you REALLY have to, you can pivot it again... but really ask yourself if you have to. variable_columns = ["Efficiency Quintile", "Date", "MODELLED Rank", "BM_Unit"] df_melted = pd.melt(df1, id_vars=variable_columns) df_merged = df_melted.merge(df2, left_on="Efficiency Quintile", right_on="Quintile") df_merged["calculated_column"] = df_merged["value"] * df_merged["Efficiency_adjusted_EF"] # I think this is the column you want? # Are you REALLY sure you need to do this? df_final = df_merged.pivot(columns=variable_columns, values=["calculated_column"]) I haven't tested that but hopefully that's close enough.
How to do a function in python that loops through two or more data frames of different sizes and indexes in pandas?
I am looking to create a function that loops through two existing dataframes I have based on some conditions and generates a value relating to those variables. Forgive the wording, but for those familiar with excel the problem would be solved with index match and then normal equations within parentheses. The excel solution is simple but is static and does not cover each date or unit in my data, as explained below. I have created a smaller sample of my dataset, which is fairly large and covers many date periods (one year), whereas in the sample I only show one date. There are also periods in the sample as column headers ranging 1-10, but in my dataset it runs from 1-48 for each day, which I have shortened for the example here to make everything less clunky. DF1: Efficiency Quintile Date MODELLED Rank BM_Unit 1 2 3 4 5 6 7 8 9 10 Gas_1 08/01/2022 1 FAWN-1 130.43 130.93 130.78 130.58 130.57 130.54 130.71 130.87 130.89 130.98 Gas_1 08/01/2022 2 GRAI-6 339.45 342.33 322.53 312.40 303.78 307.60 316.35 277.18 293.48 325.75 Gas_1 08/01/2022 3 EECL-1 363.31 386.71 364.46 363.31 363.31 363.38 361.87 305.06 286.99 282.74 Gas_1 08/01/2022 4 PEMB-21 334.40 419.50 436.70 441.90 440.50 415.80 327.90 323.70 322.70 331.10 Gas_1 08/01/2022 5 PEMB-51 370.65 370.45 359.90 326.25 326.20 322.65 324.60 274.25 319.55 288.80 Gas_1 08/01/2022 6 PEMB-41 337.00 423.40 423.10 427.50 427.00 419.00 361.00 318.80 263.20 226.70 Gas_1 08/01/2022 7 WBURB-1 240.41 293.17 252.27 256.51 261.65 253.44 247.14 217.08 223.11 199.27 Gas_1 08/01/2022 8 PEMB-31 297.73 360.27 355.40 357.07 358.67 353.07 300.93 284.73 268.73 255.20 Gas_1 08/01/2022 9 GRMO-1 106.62 106.11 105.96 106.00 106.00 105.98 105.99 105.90 105.47 105.31 Gas_2 08/01/2022 10 PEMB-11 432.80 430.40 430.70 431.90 432.10 429.30 430.00 408.30 320.90 346.50 Gas_2 08/01/2022 11 STAY-1 216.07 223.27 232.67 243.47 234.67 221.73 227.00 128.57 237.00 218.33 Gas_2 08/01/2022 12 GRAI-7 425.20 425.40 377.90 339.40 342.00 329.80 408.00 0.00 329.00 257.30 Gas_2 08/01/2022 13 DIDCB6 465.80 459.50 411.60 411.70 413.70 200.55 167.83 0.00 264.15 248.29 Gas_2 08/01/2022 14 SCCL-3 311.50 337.40 378.80 311.50 381.30 0.00 0.00 0.00 0.00 0.00 DF2: Fuel Efficiency Quantile Quintile Efficiency (%) Emissions Factor (tCO2e/MWh) Efficiency_adjusted_EF Gas 1 Gas_1 1 0.467 0.467 Gas 2 Gas_2 1 0.467 0.467 Gas 3 Gas_3 1 0.467 0.467 Gas 4 Gas_4 1 0.467 0.467 Gas 5 Gas_5 1 0.467 0.467 Coal 1 Coal_1 1 1.046 1.046 DF3 (Desired): MODELLED Rank BM_Unit 1 2 3 4 5 6 7 8 9 10 1 FAWN-1 30.46 30.57 30.54 30.49 30.49 30.48 30.52 30.56 30.56 30.58 2 GRAI-6 79.26 79.93 75.31 72.95 70.93 71.82 73.87 64.72 68.53 76.06 3 EECL-1 84.83 90.30 85.10 84.83 84.83 84.85 84.50 71.23 67.01 66.02 4 PEMB-21 78.08 97.95 101.97 103.18 102.86 97.09 76.56 75.58 75.35 77.31 5 PEMB-51 86.55 86.50 84.04 76.18 76.17 75.34 75.79 64.04 74.61 67.43 6 PEMB-41 78.69 98.86 98.79 99.82 99.70 97.84 84.29 74.44 61.46 52.93 7 WBURB-1 56.13 68.45 58.90 59.89 61.10 59.18 57.71 50.69 52.10 46.53 8 PEMB-31 69.52 84.12 82.99 83.38 83.75 82.44 70.27 66.49 62.75 59.59 9 GRMO-1 24.90 24.78 24.74 24.75 24.75 24.75 24.75 24.73 24.63 24.59 10 PEMB-11 101.06 100.50 100.57 100.85 100.90 100.24 100.41 95.34 74.93 80.91 11 STAY-1 50.45 52.13 54.33 56.85 54.79 51.77 53.00 30.02 55.34 50.98 12 GRAI-7 99.28 99.33 88.24 79.25 79.86 77.01 95.27 0.00 76.82 60.08 13 DIDCB6 108.76 107.29 96.11 96.13 96.60 46.83 39.19 0.00 61.68 57.98 14 SCCL-3 72.74 78.78 88.45 72.74 89.03 0.00 0.00 0.00 0.00 0.00 I need the function to essentially help return an output similar to DF3 in the example that loops through DF1 and DF2 based on the following equation: For each date, BM_unit and period (1, 2, 3...) in DF1, multiply the values (e.g. 130.43) also in DF1 by the matching parameters in DF2: the number in the last column that matches the same column in DF1 (gas_1 = 0.467, etc.). The logic makes sense in excel through a simple excel function with index and match, but based on the amount of conditions I know this needs to be a function that loops through the whole dataset; I am just a little confused on how to do this. Any help is appreciated!
[ "Your data is structured in 'wide' format which is a bit of an anti-pattern. (It's worth reading up on 'third normal form' - might seem in the weeds but it's one of those foundational concepts in relational/tabular data).\nSo step 1 should be getting it into a standard form (where each row is a unique 'value' with a unique 'index' - in this case the combination of the number identifying the 'column' - the sample? - and the BM_unit, I think. The melt method does this.\nThen it's a simple merge to join the two (similar to a database join). Finally, if you REALLY have to, you can pivot it again... but really ask yourself if you have to.\nvariable_columns = [\"Efficiency Quintile\", \"Date\", \"MODELLED Rank\", \"BM_Unit\"]\ndf_melted = pd.melt(df1, id_vars=variable_columns)\ndf_merged = df_melted.merge(df2, left_on=\"Efficiency Quintile\", right_on=\"Quintile\")\ndf_merged[\"calculated_column\"] = df_merged[\"value\"] * df_merged[\"Efficiency_adjusted_EF\"] # I think this is the column you want?\n\n# Are you REALLY sure you need to do this?\ndf_final = df_merged.pivot(columns=variable_columns], values=[\"calculated_column\"])\n\nI haven't tested that but hopefully that's close enough.\n" ]
[ 1 ]
[]
[]
[ "excel", "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074522391_excel_jupyter_notebook_pandas_python.txt
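A hedged, runnable rendering of the melt/merge answer above, assuming the column names shown in DF1/DF2 and pandas 1.1+ (pivot with a list index); whether the plain product value * Efficiency_adjusted_EF reproduces the asker's DF3 exactly (the sample output hints at some extra per-period scaling) is for the asker to confirm:

id_cols = ["Efficiency Quintile", "Date", "MODELLED Rank", "BM_Unit"]
melted = df1.melt(id_vars=id_cols, var_name="period", value_name="value")
merged = melted.merge(
    df2[["Quintile", "Efficiency_adjusted_EF"]],
    left_on="Efficiency Quintile", right_on="Quintile", how="left",
)
merged["adjusted"] = merged["value"] * merged["Efficiency_adjusted_EF"]
# reshape back to one column per period, keyed by date/rank/unit
df3 = merged.pivot(
    index=["Date", "MODELLED Rank", "BM_Unit"],
    columns="period", values="adjusted",
).reset_index()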
Q: Segmentation fault (core dumped) when launching python in anaconda When I tried to create a virtual environment using the miniconda command 'conda create -n py37 python=3.7', I encountered a problem when I tried to launch python in the virtual environment using the command 'python'. It seems python cannot be launched appropriately in the terminal. The error info is listed as follows: (py37) bash-4.2$ python Python 3.7.13 (default, Oct 18 2022, 18:57:03) [GCC 11.2.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. Segmentation fault (core dumped) I tried several methods, including creating another environment, using the command 'conda clean', and even reinstalling miniconda3, but nothing worked. Everything seems to be normal with the python outside the conda env. Does anyone know how to solve this problem? A: Ran into a similar issue, probably in a completely different context, but try making sure that the python you install is from conda-forge rather than the main conda channel, e.g. conda create -n py10_test -c conda-forge -y python==3.10 I found this worked for me.
Segmentation fault (core dumped) when launching python in anaconda
When I tried to create a virtual environment using the miniconda command 'conda create -n py37 python=3.7', I encountered a problem when I tried to launch python in the virtual environment using the command 'python'. It seems python cannot be launched appropriately in the terminal. The error info is listed as follows: (py37) bash-4.2$ python Python 3.7.13 (default, Oct 18 2022, 18:57:03) [GCC 11.2.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. Segmentation fault (core dumped) I tried several methods, including creating another environment, using the command 'conda clean', and even reinstalling miniconda3, but nothing worked. Everything seems to be normal with the python outside the conda env. Does anyone know how to solve this problem?
[ "Ran into a similar issue probably in a completely different context, but try making sure that the python you install is from conda-forge or not the main conda channel. e.g.\nconda create -n py10_test -c conda-forge -y python==3.10\nI found this worked for me.\n" ]
[ 0 ]
[]
[]
[ "anaconda", "miniconda", "python" ]
stackoverflow_0074367207_anaconda_miniconda_python.txt