Dataset columns:
- content: string (85 to 101k characters)
- title: string (0 to 150 characters)
- question: string (15 to 48k characters)
- answers: list
- answers_scores: list
- non_answers: list
- non_answers_scores: list
- tags: list
- name: string (35 to 137 characters)
Q: I want to print the values between positions 2 and 5 from the first column I want to print the values between positions 2 and 5 from the first column. df5 = pd.DataFrame({ 'colA': ['C4GSP3JOIHJ2', 'CAGPS3JOIHJ2','CALCG3EST2','CLCCV3JOIHJ2','CLCNF3JOIHJ2','CLCQU3JOIHJ2','CLSMS3JOIHJ2','CMICO3JOIHJ2'], }) The output should look like this: A: df5['output']=df5['colA'].str[1:6]
I want to print the values between positions 2 and 5 from the first column
I want to print the values between positions 2 and 5 from the first column. df5 = pd.DataFrame({ 'colA': ['C4GSP3JOIHJ2', 'CAGPS3JOIHJ2','CALCG3EST2','CLCCV3JOIHJ2','CLCNF3JOIHJ2','CLCQU3JOIHJ2','CLSMS3JOIHJ2','CMICO3JOIHJ2'], }) The output should look like this:
[ "df5['output']=df5['colA'].str[1:6]\n" ]
[ -1 ]
[ "df['output'] = df.colA.apply(lambda x: x[1:6]) \n\nshould work. Docs for apply function: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074462442_python.txt
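Below is a minimal runnable sketch of the accepted one-liner from this record, rebuilt around the df5 frame given in the question; only a few of the original colA values are kept, and the print call is added for illustration.

import pandas as pd

# Rebuild the example frame from the question.
df5 = pd.DataFrame({
    'colA': ['C4GSP3JOIHJ2', 'CAGPS3JOIHJ2', 'CALCG3EST2', 'CLCCV3JOIHJ2'],
})

# .str[1:6] slices every string from index 1 up to, but not including, index 6,
# i.e. the 2nd through 6th characters of each value in colA.
df5['output'] = df5['colA'].str[1:6]
print(df5)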
Q: issues when using re.finditer with + sign character in string I am using the following code to find the location the start index of some strings as well as a temperature all of which are read from a text file. The array searchString, contains what I'm looking for. It does locate the index of the first character of each string. The issue is that unless I put the backslash in front of the string: +25°C, finditer gives an error. (Alternately, if I remove the + sign, it works - but I need to look for the specific +25). My question is am I correctly escaping the + sign, since the line: print('Looking for: ' + headerName + ' in the file: ' + filename ) displays : Looking for: +25°C in the file: 123.txt (with the slash showing in front of of the +) Am I just 'getting away with this', or is this escaping as it should? thanks import re path = 'C:\mypath\\' searchString =["Power","Cal", "test", "Frequency", "Max", "\+25°C"] filename = '123.txt' # file name to check for text def search_str(file_path): with open(file_path, 'r') as file: content = file.read() for headerName in searchString: print('Looking for: ' + headerName + ' in the file: ' + filename ) match =re.finditer(headerName, content) sub_indices=[] for temp in match: index = temp.start() sub_indices.append(index) print(sub_indices ,'\n') A: You should use the re.escape() function to escape your string pattern. It will escape all the special characters in given string, for example: >>> print(re.escape('+25°C')) \+25°C >>> print(re.escape('my_pattern with specials+&$@(')) my_pattern\ with\ specials\+\&\$@\( So replace your searchString with literal strings and try it with: def search_str(file_path): with open(file_path, 'r') as file: content = file.read() for headerName in searchString: print('Looking for: ' + headerName + ' in the file: ' + filename ) match =re.finditer(re.escape(headerName), content) sub_indices=[] for temp in match: index = temp.start() sub_indices.append(index) print(sub_indices ,'\n')
issues when using re.finditer with + sign character in string
I am using the following code to find the location the start index of some strings as well as a temperature all of which are read from a text file. The array searchString, contains what I'm looking for. It does locate the index of the first character of each string. The issue is that unless I put the backslash in front of the string: +25°C, finditer gives an error. (Alternately, if I remove the + sign, it works - but I need to look for the specific +25). My question is am I correctly escaping the + sign, since the line: print('Looking for: ' + headerName + ' in the file: ' + filename ) displays : Looking for: +25°C in the file: 123.txt (with the slash showing in front of of the +) Am I just 'getting away with this', or is this escaping as it should? thanks import re path = 'C:\mypath\\' searchString =["Power","Cal", "test", "Frequency", "Max", "\+25°C"] filename = '123.txt' # file name to check for text def search_str(file_path): with open(file_path, 'r') as file: content = file.read() for headerName in searchString: print('Looking for: ' + headerName + ' in the file: ' + filename ) match =re.finditer(headerName, content) sub_indices=[] for temp in match: index = temp.start() sub_indices.append(index) print(sub_indices ,'\n')
[ "You should use the re.escape() function to escape your string pattern. It will escape all the special characters in given string, for example:\n>>> print(re.escape('+25°C'))\n\\+25°C\n>>> print(re.escape('my_pattern with specials+&$@('))\nmy_pattern\\ with\\ specials\\+\\&\\$@\\(\n\nSo replace your searchString with literal strings and try it with:\ndef search_str(file_path):\n with open(file_path, 'r') as file:\n content = file.read()\n\n for headerName in searchString:\n print('Looking for: ' + headerName + ' in the file: ' + filename )\n match =re.finditer(re.escape(headerName), content)\n sub_indices=[]\n for temp in match:\n index = temp.start()\n sub_indices.append(index) \n print(sub_indices ,'\\n')\n\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074462437_python.txt
Q: How can I implement (I think via inheritance) a new class, with a parameter of the parent class, and a paremeter which comes from another module? My task is: Write the code for a class called RandomWalker. This class has only one parameter (an instance variable) called position which is initialized at the creation of a new instance of the class. Write a class called Simulation with, at least, the following instance variables: an instance of RandomWalker called random walker, a networkx graph G, a rate parameter lambda_, and an ending time t_end. All those parameters should be assigned during the initialization (using the _init_ method). What I did so far: class RandomWalker: def __init__(self, position=0): self.position = position saved in my computer as RandomWalker.py. Then: import networkx as nx class Simulation(RandomWalker): def __init__(self, random_walker, G, lambda_, t): *self.random_walker = RandomWalker.__init__(self, position) *self.nx.Graph() = G self.lambda_ = lambda_ self.t = t The lines starting with "*" are the ones that I think (pretty sure) are wrong, but with the documentation I have and I found browsing, I don't get the right information useful to accomplish this task (which apparently should be easy). It's the first time I work with classes in python. I hope you get the idea of what I was trying to do, any help is appreciated A: No, you don't need to use inheritance in your case - Position is not RandomWalker, but it contains a Random Walker. This is a use case for composition, in Python most easily achieved by assigning an object attribute, just like RandomWalker gets position attribute. class Simulation: def __init__(self, random_walker, G, lambda_, t): self.random_walker = random_walker self.G = G self.lambda_ = lambda_ self.t = t When creating your Simulation object, you need to already have RandomWalker and graph objects created: G = nx.Graph() random_walker = RandomWalker(position) simulation = Simulation(random_walker, G, lambda_, t)
How can I implement (I think via inheritance) a new class, with a parameter of the parent class, and a parameter which comes from another module?
My task is: Write the code for a class called RandomWalker. This class has only one parameter (an instance variable) called position which is initialized at the creation of a new instance of the class. Write a class called Simulation with, at least, the following instance variables: an instance of RandomWalker called random walker, a networkx graph G, a rate parameter lambda_, and an ending time t_end. All those parameters should be assigned during the initialization (using the _init_ method). What I did so far: class RandomWalker: def __init__(self, position=0): self.position = position saved in my computer as RandomWalker.py. Then: import networkx as nx class Simulation(RandomWalker): def __init__(self, random_walker, G, lambda_, t): *self.random_walker = RandomWalker.__init__(self, position) *self.nx.Graph() = G self.lambda_ = lambda_ self.t = t The lines starting with "*" are the ones that I think (pretty sure) are wrong, but with the documentation I have and I found browsing, I don't get the right information useful to accomplish this task (which apparently should be easy). It's the first time I work with classes in python. I hope you get the idea of what I was trying to do, any help is appreciated
[ "No, you don't need to use inheritance in your case - Position is not RandomWalker, but it contains a Random Walker. This is a use case for composition, in Python most easily achieved by assigning an object attribute, just like RandomWalker gets position attribute.\nclass Simulation:\n def __init__(self, random_walker, G, lambda_, t):\n self.random_walker = random_walker\n self.G = G\n self.lambda_ = lambda_\n self.t = t\n\nWhen creating your Simulation object, you need to already have RandomWalker and graph objects created:\nG = nx.Graph()\nrandom_walker = RandomWalker(position)\nsimulation = Simulation(random_walker, G, lambda_, t)\n\n" ]
[ 1 ]
[]
[]
[ "class", "initialization", "python", "python_3.x" ]
stackoverflow_0074462492_class_initialization_python_python_3.x.txt
Q: Nothing solves SSLCertVerificationError I am getting the infamous error ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131) I tried almost all solutions available online so far but no luck. I am using pyoidc (with keycloak and superset) which uses urllib that fails to contact the auth server. Here is the part of my Dockerfile that tries to solve this error: RUN pip install --upgrade certifi COPY ssl/ /usr/local/share/ca-certificates/ RUN update-ca-certificates RUN cat /usr/local/share/ca-certificates/mycert.crt >> /usr/local/lib/python3.8/site-packages/certifi/cacert.pem RUN cat /usr/local/share/ca-certificates/mycert.crt >> /usr/local/lib/python3.8/site-packages/httplib2/cacerts.txt ENV REQUESTS_CA_BUNDLE=/usr/local/lib/python3.8/site-packages/certifi/cacert.pem I am still getting the error with these steps. What else should I do? Any ideas/help are greatly appreciated. A: Know this is late but just for other people... My issue was that I was using virtuelenv. So I did: cat /usr/local/share/ca-certificates/mycert.crt >> vendor/lib/python3.8/site-packages/certifi/cacert.pem With vendor being my virtuelenv folder. Thanks Cemre for the idea. Took a while to figure this one out.
Nothing solves SSLCertVerificationError
I am getting the infamous error ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131) I tried almost all solutions available online so far but no luck. I am using pyoidc (with keycloak and superset) which uses urllib that fails to contact the auth server. Here is the part of my Dockerfile that tries to solve this error: RUN pip install --upgrade certifi COPY ssl/ /usr/local/share/ca-certificates/ RUN update-ca-certificates RUN cat /usr/local/share/ca-certificates/mycert.crt >> /usr/local/lib/python3.8/site-packages/certifi/cacert.pem RUN cat /usr/local/share/ca-certificates/mycert.crt >> /usr/local/lib/python3.8/site-packages/httplib2/cacerts.txt ENV REQUESTS_CA_BUNDLE=/usr/local/lib/python3.8/site-packages/certifi/cacert.pem I am still getting the error with these steps. What else should I do? Any ideas/help are greatly appreciated.
[ "Know this is late but just for other people...\nMy issue was that I was using virtuelenv. So I did:\ncat /usr/local/share/ca-certificates/mycert.crt >> vendor/lib/python3.8/site-packages/certifi/cacert.pem\n\nWith vendor being my virtuelenv folder.\nThanks Cemre for the idea. Took a while to figure this one out.\n" ]
[ 0 ]
[]
[]
[ "python", "ssl" ]
stackoverflow_0069927923_python_ssl.txt
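As a hedged follow-up to this record, here is one way to check from Python which CA bundle is in effect and to trust a custom certificate directly through an SSL context; the certificate path is the one assumed in the Dockerfile above, not a known-good location.

import ssl
import certifi

# certifi.where() is the bundle the Dockerfile above appends the custom cert to.
print(certifi.where())

# Alternatively, build a context that trusts the custom CA without editing
# certifi's bundle (path taken from the question's Dockerfile).
ctx = ssl.create_default_context(cafile="/usr/local/share/ca-certificates/mycert.crt")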
Q: Authentication for using Google Cloud Platform API through Google Colab I am trying to use the Healthcare API, specifically the Healthcare Natural Language API for which there is this tutorial as well as this other one The tutorial outlines how to run the API on a string; I've been tasked with figuring out how to make use of the API with a dataset of medical text data. I am most comfortable in Python out of all GCP options, so I attempted to run the code through Colab. I used a service account json key for authentication, but this isn't best practice. So, I had to delete the key since we are dealing with patient data and everyone on my team is new to GCP. In order for me to continue exploring running the Healthcare NLP API on a dataset rather than one string, I need to figure out authentication through a different method. In this regard, I have the following questions: Pros/cons of me trying to run this through Colab? Should I look to shifting to running my code within the GCP interface? Are my choices Colab (and being forced to use a json key) vs working within GCP shell/terminal (with a plethora of authentication options)? This is what I gather from my research, but I am quite new to using APIs, working with cloud computing, etc. I've tried to look into related tutorials such as this but their lack of direct relationship to what I am doing (ie: can't find one where the API being used is Healthcare), and my lack of familiarity with APIs and GCPs, I don't particularly understand what is going on + I keep seeing service accounts generally mentioned at one point or another, and I am not allowed to use service account keys for the time being. A: Instead of using a service account you can use your own credentials, and supply them to your code using the "application default credentials". To set this up, make sure the GOOGLE_APPLICATION_CREDENTIALS environment variable is unset, then run gcloud auth application-default login (docs). After going through the login flow you can either generate an auth token manually, using gcloud auth application-default print-access-token, or you can use the Google client libraries, i.e. from googleapiclient import discovery api_version = "v1" service_name = "healthcare" # Returns an authorized API client by discovering the Healthcare API # and using GOOGLE_APPLICATION_CREDENTIALS environment variable. client = discovery.build(service_name, api_version) Under the hood this is using the application default credentials, you can also do with the google-auth Python package if you want. You can find a summary of all the standard authentication methods here
Authentication for using Google Cloud Platform API through Google Colab
I am trying to use the Healthcare API, specifically the Healthcare Natural Language API for which there is this tutorial as well as this other one The tutorial outlines how to run the API on a string; I've been tasked with figuring out how to make use of the API with a dataset of medical text data. I am most comfortable in Python out of all GCP options, so I attempted to run the code through Colab. I used a service account json key for authentication, but this isn't best practice. So, I had to delete the key since we are dealing with patient data and everyone on my team is new to GCP. In order for me to continue exploring running the Healthcare NLP API on a dataset rather than one string, I need to figure out authentication through a different method. In this regard, I have the following questions: Pros/cons of me trying to run this through Colab? Should I look to shifting to running my code within the GCP interface? Are my choices Colab (and being forced to use a json key) vs working within GCP shell/terminal (with a plethora of authentication options)? This is what I gather from my research, but I am quite new to using APIs, working with cloud computing, etc. I've tried to look into related tutorials such as this but their lack of direct relationship to what I am doing (ie: can't find one where the API being used is Healthcare), and my lack of familiarity with APIs and GCPs, I don't particularly understand what is going on + I keep seeing service accounts generally mentioned at one point or another, and I am not allowed to use service account keys for the time being.
[ "Instead of using a service account you can use your own credentials, and supply them to your code using the \"application default credentials\". To set this up, make sure the GOOGLE_APPLICATION_CREDENTIALS environment variable is unset, then run gcloud auth application-default login (docs). After going through the login flow you can either generate an auth token manually, using gcloud auth application-default print-access-token, or you can use the Google client libraries, i.e.\nfrom googleapiclient import discovery\n\napi_version = \"v1\"\nservice_name = \"healthcare\"\n# Returns an authorized API client by discovering the Healthcare API\n# and using GOOGLE_APPLICATION_CREDENTIALS environment variable.\nclient = discovery.build(service_name, api_version)\n\nUnder the hood this is using the application default credentials, you can also do with the google-auth Python package if you want.\nYou can find a summary of all the standard authentication methods here\n" ]
[ 0 ]
[]
[]
[ "google_cloud_healthcare", "google_cloud_platform", "google_colaboratory", "google_healthcare_api", "python" ]
stackoverflow_0074451204_google_cloud_healthcare_google_cloud_platform_google_colaboratory_google_healthcare_api_python.txt
Q: psycopg2: insert multiple rows with one query I need to insert multiple rows with one query (number of rows is not constant), so I need to execute query like this one: INSERT INTO t (a, b) VALUES (1, 2), (3, 4), (5, 6); The only way I know is args = [(1,2), (3,4), (5,6)] args_str = ','.join(cursor.mogrify("%s", (x, )) for x in args) cursor.execute("INSERT INTO t (a, b) VALUES "+args_str) but I want some simpler way. A: I built a program that inserts multiple lines to a server that was located in another city. I found out that using this method was about 10 times faster than executemany. In my case tup is a tuple containing about 2000 rows. It took about 10 seconds when using this method: args_str = ','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x) for x in tup) cur.execute("INSERT INTO table VALUES " + args_str) and 2 minutes when using this method: cur.executemany("INSERT INTO table VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s)", tup) A: New execute_values method in Psycopg 2.7: data = [(1,'x'), (2,'y')] insert_query = 'insert into t (a, b) values %s' psycopg2.extras.execute_values ( cursor, insert_query, data, template=None, page_size=100 ) The pythonic way of doing it in Psycopg 2.6: data = [(1,'x'), (2,'y')] records_list_template = ','.join(['%s'] * len(data)) insert_query = 'insert into t (a, b) values {}'.format(records_list_template) cursor.execute(insert_query, data) Explanation: If the data to be inserted is given as a list of tuples like in data = [(1,'x'), (2,'y')] then it is already in the exact required format as the values syntax of the insert clause expects a list of records as in insert into t (a, b) values (1, 'x'),(2, 'y') Psycopg adapts a Python tuple to a Postgresql record. The only necessary work is to provide a records list template to be filled by psycopg # We use the data list to be sure of the template length records_list_template = ','.join(['%s'] * len(data)) and place it in the insert query insert_query = 'insert into t (a, b) values {}'.format(records_list_template) Printing the insert_query outputs insert into t (a, b) values %s,%s Now to the usual Psycopg arguments substitution cursor.execute(insert_query, data) Or just testing what will be sent to the server print (cursor.mogrify(insert_query, data).decode('utf8')) Output: insert into t (a, b) values (1, 'x'),(2, 'y') A: Update with psycopg2 2.7: The classic executemany() is about 60 times slower than @ant32 's implementation (called "folded") as explained in this thread: https://www.postgresql.org/message-id/20170130215151.GA7081%40deb76.aryehleib.com This implementation was added to psycopg2 in version 2.7 and is called execute_values(): from psycopg2.extras import execute_values execute_values(cur, "INSERT INTO test (id, v1, v2) VALUES %s", [(1, 2, 3), (4, 5, 6), (7, 8, 9)]) Previous Answer: To insert multiple rows, using the multirow VALUES syntax with execute() is about 10x faster than using psycopg2 executemany(). Indeed, executemany() just runs many individual INSERT statements. @ant32 's code works perfectly in Python 2. But in Python 3, cursor.mogrify() returns bytes, cursor.execute() takes either bytes or strings, and ','.join() expects str instance. 
So in Python 3 you may need to modify @ant32 's code, by adding .decode('utf-8'): args_str = ','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x).decode('utf-8') for x in tup) cur.execute("INSERT INTO table VALUES " + args_str) Or by using bytes (with b'' or b"") only: args_bytes = b','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x) for x in tup) cur.execute(b"INSERT INTO table VALUES " + args_bytes) A: cursor.copy_from is the fastest solution I've found for bulk inserts by far. Here's a gist I made containing a class named IteratorFile which allows an iterator yielding strings to be read like a file. We can convert each input record to a string using a generator expression. So the solution would be args = [(1,2), (3,4), (5,6)] f = IteratorFile(("{}\t{}".format(x[0], x[1]) for x in args)) cursor.copy_from(f, 'table_name', columns=('a', 'b')) For this trivial size of args it won't make much of a speed difference, but I see big speedups when dealing with thousands+ of rows. It will also be more memory efficient than building a giant query string. An iterator would only ever hold one input record in memory at a time, where at some point you'll run out of memory in your Python process or in Postgres by building the query string. A: A snippet from Psycopg2's tutorial page at Postgresql.org (see bottom): A last item I would like to show you is how to insert multiple rows using a dictionary. If you had the following: namedict = ({"first_name":"Joshua", "last_name":"Drake"}, {"first_name":"Steven", "last_name":"Foo"}, {"first_name":"David", "last_name":"Bar"}) You could easily insert all three rows within the dictionary by using: cur = conn.cursor() cur.executemany("""INSERT INTO bar(first_name,last_name) VALUES (%(first_name)s, %(last_name)s)""", namedict) It doesn't save much code, but it definitively looks better. A: All of these techniques are called 'Extended Inserts" in Postgres terminology, and as of the 24th of November 2016, it's still a ton faster than psychopg2's executemany() and all the other methods listed in this thread (which i tried before coming to this answer). Here's some code which doesnt use cur.mogrify and is nice and simply to get your head around: valueSQL = [ '%s', '%s', '%s', ... ] # as many as you have columns. sqlrows = [] rowsPerInsert = 3 # more means faster, but with diminishing returns.. for row in getSomeData: # row == [1, 'a', 'yolo', ... ] sqlrows += row if ( len(sqlrows)/len(valueSQL) ) % rowsPerInsert == 0: # sqlrows == [ 1, 'a', 'yolo', 2, 'b', 'swag', 3, 'c', 'selfie' ] insertSQL = 'INSERT INTO "twitter" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*rowsPerInsert) cur.execute(insertSQL, sqlrows) con.commit() sqlrows = [] insertSQL = 'INSERT INTO "twitter" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*len(sqlrows)) cur.execute(insertSQL, sqlrows) con.commit() But it should be noted that if you can use copy_from(), you should use copy_from ;) A: I've been using ant32's answer above for several years. However I've found that is thorws an error in python 3 because mogrify returns a byte string. Converting explicitly to bytse strings is a simple solution for making code python 3 compatible. 
args_str = b','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x) for x in tup) cur.execute(b"INSERT INTO table VALUES " + args_str) A: executemany accept array of tuples https://www.postgresqltutorial.com/postgresql-python/insert/ """ array of tuples """ vendor_list = [(value1,)] """ insert multiple vendors into the vendors table """ sql = "INSERT INTO vendors(vendor_name) VALUES(%s)" conn = None try: # read database configuration params = config() # connect to the PostgreSQL database conn = psycopg2.connect(**params) # create a new cursor cur = conn.cursor() # execute the INSERT statement cur.executemany(sql,vendor_list) # commit the changes to the database conn.commit() # close communication with the database cur.close() except (Exception, psycopg2.DatabaseError) as error: print(error) finally: if conn is not None: conn.close() A: The cursor.copyfrom solution as provided by @jopseph.sheedy (https://stackoverflow.com/users/958118/joseph-sheedy) above (https://stackoverflow.com/a/30721460/11100064) is indeed lightning fast. However, the example he gives are not generically usable for a record with any number of fields and it took me while to figure out how to use it correctly. The IteratorFile needs to be instantiated with tab-separated fields like this (r is a list of dicts where each dict is a record): f = IteratorFile("{0}\t{1}\t{2}\t{3}\t{4}".format(r["id"], r["type"], r["item"], r["month"], r["revenue"]) for r in records) To generalise for an arbitrary number of fields we will first create a line string with the correct amount of tabs and field placeholders : "{}\t{}\t{}....\t{}" and then use .format() to fill in the field values : *list(r.values())) for r in records: line = "\t".join(["{}"] * len(records[0])) f = IteratorFile(line.format(*list(r.values())) for r in records) complete function in gist here. A: execute_batch has been added to psycopg2 since this question was posted. It is faster than execute_values. A: Security vulnerabilities As of 2022-11-16, the answers by @Clodoaldo Neto (for Psycopg 2.6), @Joseph Sheedy, @J.J, @Bart Jonk, @kevo Njoki, @TKoutny and @Nihal Sharma contain SQL injection vulnerabilities and should not be used. The fastest proposal so far (copy_from) should not be used either because it is difficult to escape the data correctly. This is easily apparent when trying to insert characters like ', ", \n, \, \t or \n. The author of psycopg2 also recommends against copy_from: copy_from() and copy_to() are really just ancient and incomplete methods The fastest method The fastest method is cursor.copy_expert, which can insert data straight from CSV files. with open("mydata.csv") as f: cursor.copy_expert("COPY mytable (my_id, a, b) FROM STDIN WITH csv", f) copy_expert is also the fastest method when generating the CSV file on-the-fly. For reference, see the following CSVFile class, which takes care to limit memory usage. import io, csv class CSVFile(io.TextIOBase): # Create a CSV file from rows. Can only be read once. 
def __init__(self, rows, size=8192): self.row_iter = iter(rows) self.buf = io.StringIO() self.available = 0 self.size = size def read(self, n): # Buffer new CSV rows until enough data is available buf = self.buf writer = csv.writer(buf) while self.available < n: try: row_length = writer.writerow(next(self.row_iter)) self.available += row_length self.size = max(self.size, row_length) except StopIteration: break # Read requested amount of data from buffer write_pos = buf.tell() read_pos = write_pos - self.available buf.seek(read_pos) data = buf.read(n) self.available -= len(data) # Shrink buffer if it grew very large if read_pos > 2 * self.size: remaining = buf.read() buf.seek(0) buf.write(remaining) buf.truncate() else: buf.seek(write_pos) return data This class can then be used like: rows = [(1, "a", "b"), (2, "c", "d")] cursor.copy_expert("COPY mytable (my_id, a, b) FROM STDIN WITH csv", CSVFile(rows)) If all your data fits into memory, you can also generate the entire CSV data directly without the CSVFile class, but if you do not know how much data you are going to insert in the future, you probably should not do that. f = io.StringIO() writer = csv.writer(f) for row in rows: writer.writerow(row) f.seek(0) cursor.copy_expert("COPY mytable (my_id, a, b) FROM STDIN WITH csv", f) Benchmark results 914 milliseconds - many calls to cursor.execute 846 milliseconds - cursor.executemany 362 milliseconds - psycopg2.extras.execute_batch 346 milliseconds - execute_batch with page_size=1000 265 milliseconds - execute_batch with prepared statement 161 milliseconds - psycopg2.extras.execute_values 127 milliseconds - cursor.execute with string-concatenated values 39 milliseconds - copy_expert generating the entire CSV file at once 32 milliseconds - copy_expert with CSVFile A: Another nice and efficient approach - is to pass rows for insertion as 1 argument, which is array of json objects. E.g. you passing argument: [ {id: 18, score: 1}, { id: 19, score: 5} ] It is array, which may contain any amount of objects inside. Then your SQL looks like: INSERT INTO links (parent_id, child_id, score) SELECT 123, (r->>'id')::int, (r->>'score')::int FROM unnest($1::json[]) as r Notice: Your postgress must be new enough, to support json A: If you're using SQLAlchemy, you don't need to mess with hand-crafting the string because SQLAlchemy supports generating a multi-row VALUES clause for a single INSERT statement: rows = [] for i, name in enumerate(rawdata): row = { 'id': i, 'name': name, 'valid': True, } rows.append(row) if len(rows) > 0: # INSERT fails if no rows insert_query = SQLAlchemyModelName.__table__.insert().values(rows) session.execute(insert_query) A: From @ant32 def myInsertManyTuples(connection, table, tuple_of_tuples): cursor = connection.cursor() try: insert_len = len(tuple_of_tuples[0]) insert_template = "(" for i in range(insert_len): insert_template += "%s," insert_template = insert_template[:-1] + ")" args_str = ",".join( cursor.mogrify(insert_template, x).decode("utf-8") for x in tuple_of_tuples ) cursor.execute("INSERT INTO " + table + " VALUES " + args_str) connection.commit() except psycopg2.Error as e: print(f"psycopg2.Error in myInsertMany = {e}") connection.rollback()
psycopg2: insert multiple rows with one query
I need to insert multiple rows with one query (number of rows is not constant), so I need to execute query like this one: INSERT INTO t (a, b) VALUES (1, 2), (3, 4), (5, 6); The only way I know is args = [(1,2), (3,4), (5,6)] args_str = ','.join(cursor.mogrify("%s", (x, )) for x in args) cursor.execute("INSERT INTO t (a, b) VALUES "+args_str) but I want some simpler way.
[ "I built a program that inserts multiple lines to a server that was located in another city. \nI found out that using this method was about 10 times faster than executemany. In my case tup is a tuple containing about 2000 rows. It took about 10 seconds when using this method:\nargs_str = ','.join(cur.mogrify(\"(%s,%s,%s,%s,%s,%s,%s,%s,%s)\", x) for x in tup)\ncur.execute(\"INSERT INTO table VALUES \" + args_str) \n\nand 2 minutes when using this method:\ncur.executemany(\"INSERT INTO table VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s)\", tup)\n\n", "New execute_values method in Psycopg 2.7:\ndata = [(1,'x'), (2,'y')]\ninsert_query = 'insert into t (a, b) values %s'\npsycopg2.extras.execute_values (\n cursor, insert_query, data, template=None, page_size=100\n)\n\nThe pythonic way of doing it in Psycopg 2.6:\ndata = [(1,'x'), (2,'y')]\nrecords_list_template = ','.join(['%s'] * len(data))\ninsert_query = 'insert into t (a, b) values {}'.format(records_list_template)\ncursor.execute(insert_query, data)\n\nExplanation: If the data to be inserted is given as a list of tuples like in\ndata = [(1,'x'), (2,'y')]\n\nthen it is already in the exact required format as \n\nthe values syntax of the insert clause expects a list of records as in\ninsert into t (a, b) values (1, 'x'),(2, 'y')\nPsycopg adapts a Python tuple to a Postgresql record. \n\nThe only necessary work is to provide a records list template to be filled by psycopg\n# We use the data list to be sure of the template length\nrecords_list_template = ','.join(['%s'] * len(data))\n\nand place it in the insert query\ninsert_query = 'insert into t (a, b) values {}'.format(records_list_template)\n\nPrinting the insert_query outputs\ninsert into t (a, b) values %s,%s\n\nNow to the usual Psycopg arguments substitution\ncursor.execute(insert_query, data)\n\nOr just testing what will be sent to the server\nprint (cursor.mogrify(insert_query, data).decode('utf8'))\n\nOutput:\ninsert into t (a, b) values (1, 'x'),(2, 'y')\n\n", "Update with psycopg2 2.7:\nThe classic executemany() is about 60 times slower than @ant32 's implementation (called \"folded\") as explained in this thread: https://www.postgresql.org/message-id/20170130215151.GA7081%40deb76.aryehleib.com\nThis implementation was added to psycopg2 in version 2.7 and is called execute_values():\nfrom psycopg2.extras import execute_values\nexecute_values(cur,\n \"INSERT INTO test (id, v1, v2) VALUES %s\",\n [(1, 2, 3), (4, 5, 6), (7, 8, 9)])\n\n\nPrevious Answer:\nTo insert multiple rows, using the multirow VALUES syntax with execute() is about 10x faster than using psycopg2 executemany(). Indeed, executemany() just runs many individual INSERT statements.\n@ant32 's code works perfectly in Python 2. But in Python 3, cursor.mogrify() returns bytes, cursor.execute() takes either bytes or strings, and ','.join() expects str instance.\nSo in Python 3 you may need to modify @ant32 's code, by adding .decode('utf-8'):\nargs_str = ','.join(cur.mogrify(\"(%s,%s,%s,%s,%s,%s,%s,%s,%s)\", x).decode('utf-8') for x in tup)\ncur.execute(\"INSERT INTO table VALUES \" + args_str)\n\nOr by using bytes (with b'' or b\"\") only: \nargs_bytes = b','.join(cur.mogrify(\"(%s,%s,%s,%s,%s,%s,%s,%s,%s)\", x) for x in tup)\ncur.execute(b\"INSERT INTO table VALUES \" + args_bytes) \n\n", "cursor.copy_from is the fastest solution I've found for bulk inserts by far. Here's a gist I made containing a class named IteratorFile which allows an iterator yielding strings to be read like a file. 
We can convert each input record to a string using a generator expression. So the solution would be\nargs = [(1,2), (3,4), (5,6)]\nf = IteratorFile((\"{}\\t{}\".format(x[0], x[1]) for x in args))\ncursor.copy_from(f, 'table_name', columns=('a', 'b'))\n\nFor this trivial size of args it won't make much of a speed difference, but I see big speedups when dealing with thousands+ of rows. It will also be more memory efficient than building a giant query string. An iterator would only ever hold one input record in memory at a time, where at some point you'll run out of memory in your Python process or in Postgres by building the query string.\n", "A snippet from Psycopg2's tutorial page at Postgresql.org (see bottom):\n\nA last item I would like to show you is how to insert multiple rows using a dictionary. If you had the following: \n\nnamedict = ({\"first_name\":\"Joshua\", \"last_name\":\"Drake\"},\n {\"first_name\":\"Steven\", \"last_name\":\"Foo\"},\n {\"first_name\":\"David\", \"last_name\":\"Bar\"})\n\n\nYou could easily insert all three rows within the dictionary by using: \n\ncur = conn.cursor()\ncur.executemany(\"\"\"INSERT INTO bar(first_name,last_name) VALUES (%(first_name)s, %(last_name)s)\"\"\", namedict)\n\nIt doesn't save much code, but it definitively looks better.\n", "All of these techniques are called 'Extended Inserts\" in Postgres terminology, and as of the 24th of November 2016, it's still a ton faster than psychopg2's executemany() and all the other methods listed in this thread (which i tried before coming to this answer).\nHere's some code which doesnt use cur.mogrify and is nice and simply to get your head around:\nvalueSQL = [ '%s', '%s', '%s', ... ] # as many as you have columns.\nsqlrows = []\nrowsPerInsert = 3 # more means faster, but with diminishing returns..\nfor row in getSomeData:\n # row == [1, 'a', 'yolo', ... ]\n sqlrows += row\n if ( len(sqlrows)/len(valueSQL) ) % rowsPerInsert == 0:\n # sqlrows == [ 1, 'a', 'yolo', 2, 'b', 'swag', 3, 'c', 'selfie' ]\n insertSQL = 'INSERT INTO \"twitter\" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*rowsPerInsert)\n cur.execute(insertSQL, sqlrows)\n con.commit()\n sqlrows = []\ninsertSQL = 'INSERT INTO \"twitter\" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*len(sqlrows))\ncur.execute(insertSQL, sqlrows)\ncon.commit()\n\nBut it should be noted that if you can use copy_from(), you should use copy_from ;)\n", "I've been using ant32's answer above for several years. 
However I've found that is thorws an error in python 3 because mogrify returns a byte string.\nConverting explicitly to bytse strings is a simple solution for making code python 3 compatible.\nargs_str = b','.join(cur.mogrify(\"(%s,%s,%s,%s,%s,%s,%s,%s,%s)\", x) for x in tup) \ncur.execute(b\"INSERT INTO table VALUES \" + args_str)\n\n", "executemany accept array of tuples\nhttps://www.postgresqltutorial.com/postgresql-python/insert/\n \"\"\" array of tuples \"\"\"\n vendor_list = [(value1,)]\n\n \"\"\" insert multiple vendors into the vendors table \"\"\"\n sql = \"INSERT INTO vendors(vendor_name) VALUES(%s)\"\n conn = None\n try:\n # read database configuration\n params = config()\n # connect to the PostgreSQL database\n conn = psycopg2.connect(**params)\n # create a new cursor\n cur = conn.cursor()\n # execute the INSERT statement\n cur.executemany(sql,vendor_list)\n # commit the changes to the database\n conn.commit()\n # close communication with the database\n cur.close()\n except (Exception, psycopg2.DatabaseError) as error:\n print(error)\n finally:\n if conn is not None:\n conn.close()\n\n", "The cursor.copyfrom solution as provided by @jopseph.sheedy (https://stackoverflow.com/users/958118/joseph-sheedy) above (https://stackoverflow.com/a/30721460/11100064) is indeed lightning fast. \nHowever, the example he gives are not generically usable for a record with any number of fields and it took me while to figure out how to use it correctly. \nThe IteratorFile needs to be instantiated with tab-separated fields like this (r is a list of dicts where each dict is a record): \n f = IteratorFile(\"{0}\\t{1}\\t{2}\\t{3}\\t{4}\".format(r[\"id\"],\n r[\"type\"],\n r[\"item\"],\n r[\"month\"],\n r[\"revenue\"]) for r in records)\n\nTo generalise for an arbitrary number of fields we will first create a line string with the correct amount of tabs and field placeholders : \"{}\\t{}\\t{}....\\t{}\" and then use .format() to fill in the field values : *list(r.values())) for r in records:\n line = \"\\t\".join([\"{}\"] * len(records[0]))\n\n f = IteratorFile(line.format(*list(r.values())) for r in records)\n\ncomplete function in gist here.\n", "execute_batch has been added to psycopg2 since this question was posted.\nIt is faster than execute_values.\n", "Security vulnerabilities\nAs of 2022-11-16, the answers by @Clodoaldo Neto (for Psycopg 2.6), @Joseph Sheedy, @J.J, @Bart Jonk, @kevo Njoki, @TKoutny and @Nihal Sharma contain SQL injection vulnerabilities and should not be used.\nThe fastest proposal so far (copy_from) should not be used either because it is difficult to escape the data correctly. This is easily apparent when trying to insert characters like ', \", \\n, \\, \\t or \\n.\nThe author of psycopg2 also recommends against copy_from:\n\ncopy_from() and copy_to() are really just ancient and incomplete methods\n\nThe fastest method\nThe fastest method is cursor.copy_expert, which can insert data straight from CSV files.\nwith open(\"mydata.csv\") as f:\n cursor.copy_expert(\"COPY mytable (my_id, a, b) FROM STDIN WITH csv\", f)\n\ncopy_expert is also the fastest method when generating the CSV file on-the-fly. For reference, see the following CSVFile class, which takes care to limit memory usage.\nimport io, csv\n\nclass CSVFile(io.TextIOBase):\n # Create a CSV file from rows. 
Can only be read once.\n def __init__(self, rows, size=8192):\n self.row_iter = iter(rows)\n self.buf = io.StringIO()\n self.available = 0\n self.size = size\n\n def read(self, n):\n # Buffer new CSV rows until enough data is available\n buf = self.buf\n writer = csv.writer(buf)\n while self.available < n:\n try:\n row_length = writer.writerow(next(self.row_iter))\n self.available += row_length\n self.size = max(self.size, row_length)\n except StopIteration:\n break\n\n # Read requested amount of data from buffer\n write_pos = buf.tell()\n read_pos = write_pos - self.available\n buf.seek(read_pos)\n data = buf.read(n)\n self.available -= len(data)\n\n # Shrink buffer if it grew very large\n if read_pos > 2 * self.size:\n remaining = buf.read()\n buf.seek(0)\n buf.write(remaining)\n buf.truncate()\n else:\n buf.seek(write_pos)\n\n return data\n\nThis class can then be used like:\nrows = [(1, \"a\", \"b\"), (2, \"c\", \"d\")]\ncursor.copy_expert(\"COPY mytable (my_id, a, b) FROM STDIN WITH csv\", CSVFile(rows))\n\nIf all your data fits into memory, you can also generate the entire CSV data directly without the CSVFile class, but if you do not know how much data you are going to insert in the future, you probably should not do that.\nf = io.StringIO()\nwriter = csv.writer(f)\nfor row in rows:\n writer.writerow(row)\nf.seek(0)\ncursor.copy_expert(\"COPY mytable (my_id, a, b) FROM STDIN WITH csv\", f)\n\nBenchmark results\n\n914 milliseconds - many calls to cursor.execute\n846 milliseconds - cursor.executemany\n362 milliseconds - psycopg2.extras.execute_batch\n346 milliseconds - execute_batch with page_size=1000\n265 milliseconds - execute_batch with prepared statement\n161 milliseconds - psycopg2.extras.execute_values\n127 milliseconds - cursor.execute with string-concatenated values\n39 milliseconds - copy_expert generating the entire CSV file at once\n32 milliseconds - copy_expert with CSVFile\n\n", "Another nice and efficient approach - is to pass rows for insertion as 1 argument, \nwhich is array of json objects.\nE.g. you passing argument:\n[ {id: 18, score: 1}, { id: 19, score: 5} ]\n\nIt is array, which may contain any amount of objects inside.\nThen your SQL looks like:\nINSERT INTO links (parent_id, child_id, score) \nSELECT 123, (r->>'id')::int, (r->>'score')::int \nFROM unnest($1::json[]) as r \n\nNotice: Your postgress must be new enough, to support json\n", "If you're using SQLAlchemy, you don't need to mess with hand-crafting the string because SQLAlchemy supports generating a multi-row VALUES clause for a single INSERT statement:\nrows = []\nfor i, name in enumerate(rawdata):\n row = {\n 'id': i,\n 'name': name,\n 'valid': True,\n }\n rows.append(row)\nif len(rows) > 0: # INSERT fails if no rows\n insert_query = SQLAlchemyModelName.__table__.insert().values(rows)\n session.execute(insert_query)\n\n", "From @ant32\ndef myInsertManyTuples(connection, table, tuple_of_tuples):\n cursor = connection.cursor()\n try:\n insert_len = len(tuple_of_tuples[0])\n insert_template = \"(\"\n for i in range(insert_len):\n insert_template += \"%s,\"\n insert_template = insert_template[:-1] + \")\"\n\n args_str = \",\".join(\n cursor.mogrify(insert_template, x).decode(\"utf-8\")\n for x in tuple_of_tuples\n )\n cursor.execute(\"INSERT INTO \" + table + \" VALUES \" + args_str)\n connection.commit()\n\n except psycopg2.Error as e:\n print(f\"psycopg2.Error in myInsertMany = {e}\")\n connection.rollback()\n\n" ]
[ 286, 226, 104, 36, 32, 7, 3, 3, 2, 2, 2, 1, 1, 0 ]
[ "If you want to insert multiple rows within one insert statemens (assuming you are not using ORM) the easiest way so far for me would be to use list of dictionaries. Here is an example:\n t = [{'id':1, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 6},\n {'id':2, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 7},\n {'id':3, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 8}]\n\nconn.execute(\"insert into campaign_dates\n (id, start_date, end_date, campaignid) \n values (%(id)s, %(start_date)s, %(end_date)s, %(campaignid)s);\",\n t)\n\nAs you can see only one query will be executed:\nINFO sqlalchemy.engine.base.Engine insert into campaign_dates (id, start_date, end_date, campaignid) values (%(id)s, %(start_date)s, %(end_date)s, %(campaignid)s);\nINFO sqlalchemy.engine.base.Engine [{'campaignid': 6, 'id': 1, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}, {'campaignid': 7, 'id': 2, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}, {'campaignid': 8, 'id': 3, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}]\nINFO sqlalchemy.engine.base.Engine COMMIT\n\n", "psycopg2 2.9.3\ndata = \"(1, 2), (3, 4), (5, 6)\"\nquery = \"INSERT INTO t (a, b) VALUES {0}\".format(data)\ncursor.execute(query)\n\nor\ndata = [(1, 2), (3, 4), (5, 6)]\ndata = \",\".join(map(str, data))\nquery = \"INSERT INTO t (a, b) VALUES {0}\".format(data)\ncursor.execute(query)\n\n", "The Solution am using can insert like 8000 records in 1 millisecond\ncurtime = datetime.datetime.now()\npostData = dict()\npostData[\"title\"] = \"This is Title Text\"\npostData[\"body\"] = \"This a Body Text it Can be Long Text\"\npostData['created_at'] = curtime.isoformat()\npostData['updated_at'] = curtime.isoformat()\ndata = []\nfor x in range(8000):\n data.append(((postData)))\nvals = []\nfor d in postData:\n vals.append(tuple(d.values())) #Here we extract the Values from the Dict\nflds = \",\".join(map(str, postData[0]))\ntableFlds = \",\".join(map(str, vals))\nsqlStr = f\"INSERT INTO posts ({flds}) VALUES {tableFlds}\"\ndb.execute(sqlStr)\nconnection.commit()\nrowsAffected = db.rowcount\nprint(f'{rowsAffected} Rows Affected')\n\n", "Finally in SQLalchemy1.2 version, this new implementation is added to use psycopg2.extras.execute_batch() instead of executemany when you initialize your engine with use_batch_mode=True like:\nengine = create_engine(\n \"postgresql+psycopg2://scott:tiger@host/dbname\",\n use_batch_mode=True)\n\nhttp://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#change-4109\nThen someone would have to use SQLalchmey won't bother to try different combinations of sqla and psycopg2 and direct SQL together..\n", "Using aiopg - The snippet below works perfectly fine\n # items = [10, 11, 12, 13]\n # group = 1\n tup = [(gid, pid) for pid in items]\n args_str = \",\".join([str(s) for s in tup])\n # insert into group values (1, 10), (1, 11), (1, 12), (1, 13)\n yield from cur.execute(\"INSERT INTO group VALUES \" + args_str)\n\n" ]
[ -1, -1, -1, -4, -5 ]
[ "postgresql", "psycopg2", "python" ]
stackoverflow_0008134602_postgresql_psycopg2_python.txt
Q: ffmpeg issue with Jupyter notebook I am trying to extract frames from a video file using ffmpeg in Python. I installed ffmpeg using Homebrew and ffmpeg-python on the Anaconda-Navigator. Yet when I call ffmpeg on Jupyter notebook as follows !ffmpeg -i "$file" "$rootdir"/"$folder_name"/frame%04d.png I get an error saying zsh:1: command not found: ffmpeg I clearly see ffmpeg in my usr/local/bin. Can someone please assist me in sorting this? I am able to use ffmpeg in Google Colab, though. A: In my experience, using !/usr/bin/ffmpeg was the solution, you can also verify this by trying !whereis ffmpeg and use whatever directory it's in after the ! hope this helps!
ffmpeg issue with Jupyter notebook
I am trying to extract frames from a video file using ffmpeg in Python. I installed ffmpeg using Homebrew and ffmpeg-python on the Anaconda-Navigator. Yet when I call ffmpeg on Jupyter notebook as follows !ffmpeg -i "$file" "$rootdir"/"$folder_name"/frame%04d.png I get an error saying zsh:1: command not found: ffmpeg I clearly see ffmpeg in my usr/local/bin. Can someone please assist me in sorting this? I am able to use ffmpeg in Google Colab, though.
[ "In my experience, using !/usr/bin/ffmpeg was the solution, you can also verify this by trying !whereis ffmpeg and use whatever directory it's in after the ! hope this helps!\n" ]
[ 0 ]
[]
[]
[ "ffmpeg", "python" ]
stackoverflow_0074130520_ffmpeg_python.txt
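A small sketch related to this record: calling ffmpeg through subprocess with an explicit path avoids the zsh PATH lookup that the notebook shell escape relies on; the binary path and file names below are assumptions based on the question.

import subprocess

FFMPEG = "/usr/local/bin/ffmpeg"  # assumed location; check with `which ffmpeg`

subprocess.run(
    [FFMPEG, "-i", "input.mp4", "frames/frame%04d.png"],
    check=True,  # raise CalledProcessError if ffmpeg exits non-zero
)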
Q: how to compare two columns in different data frames & replace the values Two dataframes are there, df1 has 2 columns ,name & profession df_1 Name profession srinu senior engineer Azahar engineer vijaya data analyst rahul team lead swapna manager krishna engineer rama senior engineer df_2 has only one column with (name-employeeid) df_2 Name-empid vijaya-2124148 rahul-2124152 krishna-2124189 rama-2124169 I am trying to compare df1 and df2 using name and if names in df2 matches with df1, profession should be replaced as"data scientist". i have tried many things,but couldn't figure it out,can someone help me,please? i have tried using map,replace..but getting errors A: You could use: # extract the names from df_2 m = (df_1['Name'] .str.lower() .isin(df_2['Name-empid'].str.extract('(\w+)-', expand=False)) ) # match with df_1 ensuring common case df_1.loc[m, 'profession'] = 'Data Scientist' Output: Name profession 0 srinu senior engineer 1 Azahar engineer 2 vijaya Data Scientist 3 rahul Data Scientist 4 swapna manager 5 krishna Data Scientist 6 rama Data Scientist
how to compare two columns in different data frames & replace the values
There are two dataframes; df_1 has two columns, name and profession: df_1 Name profession srinu senior engineer Azahar engineer vijaya data analyst rahul team lead swapna manager krishna engineer rama senior engineer df_2 has only one column with (name-employeeid): df_2 Name-empid vijaya-2124148 rahul-2124152 krishna-2124189 rama-2124169 I am trying to compare df_1 and df_2 using the name, and if a name in df_2 matches one in df_1, the profession should be replaced with "data scientist". I have tried many things but couldn't figure it out; can someone help me, please? I have tried using map and replace, but I keep getting errors.
[ "You could use:\n# extract the names from df_2\nm = (df_1['Name']\n .str.lower()\n .isin(df_2['Name-empid'].str.extract('(\\w+)-', expand=False))\n )\n\n# match with df_1 ensuring common case\ndf_1.loc[m, 'profession'] = 'Data Scientist'\n\nOutput:\n Name profession\n0 srinu senior engineer\n1 Azahar engineer\n2 vijaya Data Scientist\n3 rahul Data Scientist\n4 swapna manager\n5 krishna Data Scientist\n6 rama Data Scientist\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "replace" ]
stackoverflow_0074462625_dataframe_pandas_python_replace.txt
Q: algorithm to know if inserted, substitued or deleted a character (similar to Levenshtein) I want to make a function that keeps track of the transformations made to make one string identical to another one Example: A = batyu B = beauty diff(A,B) has to return: [[1,"Insert", "e"], [5, "Delete"], [3, "Insert", "u"]]\ I used Levenshtein.editops but i want to code the function that does this A: The wikipedia article for levenshtein distance gives you the function it uses. Now it's your turn to implement it in python. If you have code that does not do what you expect it to, feel free to post another question detailing what you tried, what you expected and why it didn't work. If you can read C you can also check out the implementation of editops. A: You can use the output from the example in the documentation https://docs.python.org/3/library/difflib.html#difflib.SequenceMatcher.get_opcodes : a= 'adela' b= 'adella' dif = difflib.SequenceMatcher(None, a, b) opcodes = dif.get_opcodes() for tag, i1, i2, j1, j2 in opcodes: print('{:7} a[{}:{}] --> b[{}:{}] {!r:>8} --> {!r}'.format( tag, i1, i2, j1, j2, a[i1:i2], b[j1:j2])) so get your sequencematcher object and then iterate over the opcodes and store however you want. I came across this searching for a quick link to the editops documentation. For my purpose I used this as a measure of how close the strings were: print(len([x for x in opcodes if x[0] != 'equal']))
algorithm to know if a character was inserted, substituted or deleted (similar to Levenshtein)
I want to make a function that keeps track of the transformations needed to make one string identical to another one. Example: A = batyu B = beauty diff(A,B) has to return: [[1,"Insert", "e"], [5, "Delete"], [3, "Insert", "u"]] I used Levenshtein.editops, but I want to write the function that does this myself.
[ "The wikipedia article for levenshtein distance gives you the function it uses. Now it's your turn to implement it in python.\nIf you have code that does not do what you expect it to, feel free to post another question detailing what you tried, what you expected and why it didn't work.\nIf you can read C you can also check out the implementation of editops.\n", "You can use the output from the example in the documentation https://docs.python.org/3/library/difflib.html#difflib.SequenceMatcher.get_opcodes :\na= 'adela'\nb= 'adella'\ndif = difflib.SequenceMatcher(None, a, b)\nopcodes = dif.get_opcodes()\nfor tag, i1, i2, j1, j2 in opcodes:\n print('{:7} a[{}:{}] --> b[{}:{}] {!r:>8} --> {!r}'.format(\n tag, i1, i2, j1, j2, a[i1:i2], b[j1:j2]))\n\nso get your sequencematcher object and then iterate over the opcodes and store however you want. I came across this searching for a quick link to the editops documentation. For my purpose I used this as a measure of how close the strings were:\nprint(len([x for x in opcodes if x[0] != 'equal']))\n\n" ]
[ 0, 0 ]
[]
[]
[ "algorithm", "levenshtein_distance", "python" ]
stackoverflow_0072020784_algorithm_levenshtein_distance_python.txt
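A runnable sketch of the difflib approach from the second answer, reshaped to emit insert/delete/replace records similar to what the question asks for; the exact positions and output format are illustrative and not guaranteed to match Levenshtein.editops.

import difflib

def diff_ops(a, b):
    # One entry per non-matching opcode reported by SequenceMatcher.
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
        if tag == 'insert':
            ops.append([i1, "Insert", b[j1:j2]])
        elif tag == 'delete':
            ops.append([i1, "Delete", a[i1:i2]])
        elif tag == 'replace':
            ops.append([i1, "Replace", b[j1:j2]])
    return ops

print(diff_ops("batyu", "beauty"))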
Q: Dask lazy initialization very slow for list comprehension I'm trying to see if Dask would be a suitable addition to my project and wrote some very simple test cases to look into it's performance. However, Dask is taking a relatively long time to simply perform the lazy initialization. @delayed def normd(st): return st.lower().replace(',', '') @delayed def add_vald(v): return v+5 def norm(st): return st.lower().replace(',', '') def add_val(v): return v+5 test_list = [i for i in range(1000)] test_list1 = ["AeBe,oF,221e"]*1000 %timeit rlist = [add_val(y) for y in test_list] #124 µs ± 7.25 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit rlist = [norm(y) for y in test_list1] #392 µs ± 18.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit rlist = [add_vald(y) for y in test_list] #19.1 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) rlist = [add_vald(y) for y in test_list] %timeit rlist1 = compute(*rlist, get=dask.multiprocessing.get) #892 ms ± 36.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit rlist = [normd(y) for y in test_list1] #18.7 ms ± 408 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) rlist = [normd(y) for y in test_list1] %timeit rlist1 = compute(*rlist, get=dask.multiprocessing.get) #912 ms ± 54.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) I have looked into Dask For Loop In Parallel and parallel dask for loop slower than regular loop? and I tried increasing size to 1 million items but while the regular loop takes about a second the dask loop never ends. After waiting for half an hour to simply finish lazy initialization of add_vald I killed it. I'm not sure what's going wrong here and would greatly appreciate any insight you might be able to offer. Thanks! A: When creating a delayed object, dask is doing a couple of things: calculating a unique key for the object, based on the function and inputs creating a graph object to store the desired operations. You could probably do these things a little faster with your own dict comprehension - delayed is intended for convenience. Upon execution, each task requires some overhead, whether from switching threads, or communicating with between processes, depending on the scheduler chosen. This is well documented. Furthermore, threads within a process won't actually run in parallel for this workload because of python's GIL. In general, it is recommended to split your work into batches, such that the overhead per task becomes small compared to the time it takes to execute the task; so that using dask becomes worthwhile. Don't forget the first rule of dask: be sure that you need dask.
Dask lazy initialization very slow for list comprehension
I'm trying to see if Dask would be a suitable addition to my project and wrote some very simple test cases to look into it's performance. However, Dask is taking a relatively long time to simply perform the lazy initialization. @delayed def normd(st): return st.lower().replace(',', '') @delayed def add_vald(v): return v+5 def norm(st): return st.lower().replace(',', '') def add_val(v): return v+5 test_list = [i for i in range(1000)] test_list1 = ["AeBe,oF,221e"]*1000 %timeit rlist = [add_val(y) for y in test_list] #124 µs ± 7.25 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit rlist = [norm(y) for y in test_list1] #392 µs ± 18.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit rlist = [add_vald(y) for y in test_list] #19.1 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) rlist = [add_vald(y) for y in test_list] %timeit rlist1 = compute(*rlist, get=dask.multiprocessing.get) #892 ms ± 36.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit rlist = [normd(y) for y in test_list1] #18.7 ms ± 408 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) rlist = [normd(y) for y in test_list1] %timeit rlist1 = compute(*rlist, get=dask.multiprocessing.get) #912 ms ± 54.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) I have looked into Dask For Loop In Parallel and parallel dask for loop slower than regular loop? and I tried increasing size to 1 million items but while the regular loop takes about a second the dask loop never ends. After waiting for half an hour to simply finish lazy initialization of add_vald I killed it. I'm not sure what's going wrong here and would greatly appreciate any insight you might be able to offer. Thanks!
[ "When creating a delayed object, dask is doing a couple of things:\n\ncalculating a unique key for the object, based on the function and inputs\ncreating a graph object to store the desired operations.\n\nYou could probably do these things a little faster with your own dict comprehension - delayed is intended for convenience.\nUpon execution, each task requires some overhead, whether from switching threads, or communicating with between processes, depending on the scheduler chosen. This is well documented. Furthermore, threads within a process won't actually run in parallel for this workload because of python's GIL.\nIn general, it is recommended to split your work into batches, such that the overhead per task becomes small compared to the time it takes to execute the task; so that using dask becomes worthwhile. Don't forget the first rule of dask: be sure that you need dask.\n" ]
[ 0 ]
[]
[]
[ "dask", "dask_delayed", "loops", "parallel_processing", "python" ]
stackoverflow_0053622333_dask_dask_delayed_loops_parallel_processing_python.txt
Q: This queryset contains a reference to an outer query and may only be used in a subquery model ProductFilter has products ManyToManyField. I need to get attribute to_export from product.filters of the highest priority (ProductFilter.priority field) I figured out this filters = ProductFilter.objects.filter( products__in=[OuterRef('pk')] ).order_by('priority') Product.objects.annotate( filter_to_export=Subquery(filters.values('to_export')[:1]) ) But it raises ValueError: This queryset contains a reference to an outer query and may only be used in a subquery. Do you know why? A: This is old, but anyway: Looks like the related lookup cannot handle OuterRef here: products__in=[OuterRef('pk')] Note: In Django 3.2 the OP's example yields a different error, viz. TypeError: Field 'id' expected a number but got ResolvedOuterRef(pk). As there's only one pk value here, I don't think you need to use the __in lookup. The following seems to work, although I'm not sure if it's exactly what the OP wants: filters = ProductFilter.objects.filter( products=OuterRef('pk') # replaced products__in lookup ).order_by('priority') products = Product.objects.annotate( filter_to_export=Subquery(filters.values('to_export')[:1]) ) The resulting SQL (slightly simplified table names): SELECT "product"."id", ( SELECT U0."to_export" FROM "productfilter" U0 INNER JOIN "productfilter_products" U1 ON (U0."id" = U1."productfilter_id") WHERE U1."product_id" = "product"."id" ORDER BY U0."priority" DESC LIMIT 1 ) AS "filter_to_export" FROM "product"
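For context, a minimal pair of models consistent with the question might look like the sketch below; the field types, the extra name field and the related_name are guesses for illustration only, since the original post does not show the model definitions.

from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)  # hypothetical field

class ProductFilter(models.Model):
    products = models.ManyToManyField(Product, related_name="filters")
    priority = models.IntegerField()
    to_export = models.BooleanField(default=False)  # type assumed

With definitions like these, the annotate(filter_to_export=Subquery(...)) call from the answer attaches, to each product, the to_export value of the first ProductFilter (ordered by priority) that contains that product.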
This queryset contains a reference to an outer query and may only be used in a subquery
model ProductFilter has products ManyToManyField. I need to get attribute to_export from product.filters of the highest priority (ProductFilter.priority field) I figured out this filters = ProductFilter.objects.filter( products__in=[OuterRef('pk')] ).order_by('priority') Product.objects.annotate( filter_to_export=Subquery(filters.values('to_export')[:1]) ) But it raises ValueError: This queryset contains a reference to an outer query and may only be used in a subquery. Do you know why?
[ "This is old, but anyway:\nLooks like the related lookup cannot handle OuterRef here: products__in=[OuterRef('pk')]\nNote: In Django 3.2 the OP's example yields a different error, viz. TypeError: Field 'id' expected a number but got ResolvedOuterRef(pk).\nAs there's only one pk value here, I don't think you need to use the __in lookup.\nThe following seems to work, although I'm not sure if it's exactly what the OP wants:\nfilters = ProductFilter.objects.filter(\n products=OuterRef('pk') # replaced products__in lookup\n).order_by('priority')\n\nproducts = Product.objects.annotate(\n filter_to_export=Subquery(filters.values('to_export')[:1])\n)\n\nThe resulting SQL (slightly simplified table names):\nSELECT \"product\".\"id\", (\n SELECT U0.\"to_export\" \n FROM \"productfilter\" U0 \n INNER JOIN \"productfilter_products\" U1 \n ON (U0.\"id\" = U1.\"productfilter_id\") \n WHERE U1.\"product_id\" = \"product\".\"id\" \n ORDER BY U0.\"priority\" DESC LIMIT 1\n) AS \"filter_to_export\" \nFROM \"product\"\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_queryset", "python", "sql" ]
stackoverflow_0060518636_django_django_models_django_queryset_python_sql.txt
Q: Unable to config pytesseract in heroku I try to deploy pytesseract app in heroku after doing much researchs online. I added TESSDATA_PREFIX=./.apt/usr/share/tesseract-ocr/4.00/tessdata in Heroku Config vars I have https://github.com/heroku/heroku-buildpack-apt in my heroku buildpack. I have Aptfile containing: tesseract-ocr tesseract-ocr-eng I have pytesseract.tesseract_cmd = '/app/.apt/usr/bin/tesseract' in my code. I am deploying flask API to heroku, so my Procfile is: web: gunicorn app:app The error from heroku logs: 2022-11-16T04:22:39.262113+00:00 app[web.1]: text = pytesseract.image_to_string(img, config="--psm 6") 2022-11-16T04:22:39.262115+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 423, in image_to_string 2022-11-16T04:22:39.262116+00:00 app[web.1]: return { 2022-11-16T04:22:39.262117+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 426, in <lambda> 2022-11-16T04:22:39.262117+00:00 app[web.1]: Output.STRING: lambda: run_and_get_output(*args), 2022-11-16T04:22:39.262117+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 288, in run_and_get_output 2022-11-16T04:22:39.262118+00:00 app[web.1]: run_tesseract(**kwargs) 2022-11-16T04:22:39.262118+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 264, in run_tesseract 2022-11-16T04:22:39.262119+00:00 app[web.1]: raise TesseractError(proc.returncode, get_errors(error_string)) 2022-11-16T04:22:39.262121+00:00 app[web.1]: pytesseract.pytesseract.TesseractError: (127, '/app/.apt/usr/bin/tesseract: error while loading shared libraries: libarchive.so.13: cannot open shared object file: No such file or directory') Anything I missed or how should I solve this? A: The error message indicates a missing library: error while loading shared libraries: libarchive.so.13: cannot open shared object file: No such file or directory The apt buildpack doesn't do dependency resolution, so you may have to explicitly include transitive dependencies. You can search https://packages.ubuntu.com to see which packages contain missing files. Make sure to match the Ubuntu LTS major version to the Heroku stack you are using, e.g. for Heroku 22 you'll want to look at packages for Ubuntu 22.04 LTS (Jammy). In this case, the libarchive13 package contains libarchive.so.13. Add that to your Aptfile, commit, and redeploy. If you find other missing dependencies, repeat the process.
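Following the answer, the Aptfile from the question would then contain the extra package, for example:

tesseract-ocr
tesseract-ocr-eng
libarchive13

If further "error while loading shared libraries" messages appear after redeploying, the same lookup on packages.ubuntu.com can be repeated for each missing .so file.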
Unable to config pytesseract in heroku
I try to deploy pytesseract app in heroku after doing much researchs online. I added TESSDATA_PREFIX=./.apt/usr/share/tesseract-ocr/4.00/tessdata in Heroku Config vars I have https://github.com/heroku/heroku-buildpack-apt in my heroku buildpack. I have Aptfile containing: tesseract-ocr tesseract-ocr-eng I have pytesseract.tesseract_cmd = '/app/.apt/usr/bin/tesseract' in my code. I am deploying flask API to heroku, so my Procfile is: web: gunicorn app:app The error from heroku logs: 2022-11-16T04:22:39.262113+00:00 app[web.1]: text = pytesseract.image_to_string(img, config="--psm 6") 2022-11-16T04:22:39.262115+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 423, in image_to_string 2022-11-16T04:22:39.262116+00:00 app[web.1]: return { 2022-11-16T04:22:39.262117+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 426, in <lambda> 2022-11-16T04:22:39.262117+00:00 app[web.1]: Output.STRING: lambda: run_and_get_output(*args), 2022-11-16T04:22:39.262117+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 288, in run_and_get_output 2022-11-16T04:22:39.262118+00:00 app[web.1]: run_tesseract(**kwargs) 2022-11-16T04:22:39.262118+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/pytesseract/pytesseract.py", line 264, in run_tesseract 2022-11-16T04:22:39.262119+00:00 app[web.1]: raise TesseractError(proc.returncode, get_errors(error_string)) 2022-11-16T04:22:39.262121+00:00 app[web.1]: pytesseract.pytesseract.TesseractError: (127, '/app/.apt/usr/bin/tesseract: error while loading shared libraries: libarchive.so.13: cannot open shared object file: No such file or directory') Anything I missed or how should I solve this?
[ "The error message indicates a missing library:\nerror while loading shared libraries: libarchive.so.13: cannot open shared object file: No such file or directory\n\nThe apt buildpack doesn't do dependency resolution, so you may have to explicitly include transitive dependencies.\nYou can search https://packages.ubuntu.com to see which packages contain missing files. Make sure to match the Ubuntu LTS major version to the Heroku stack you are using, e.g. for Heroku 22 you'll want to look at packages for Ubuntu 22.04 LTS (Jammy).\nIn this case, the libarchive13 package contains libarchive.so.13. Add that to your Aptfile, commit, and redeploy. If you find other missing dependencies, repeat the process.\n" ]
[ 0 ]
[]
[]
[ "heroku", "python", "python_tesseract" ]
stackoverflow_0074455213_heroku_python_python_tesseract.txt
Q: Can I use regex within a pytest expression Is it possible to locate tests with pytest using pattern matching? For example, I want to find all tests that begin with the letters from a-m.
I have been trying things like
pytest -m ^[aA-mM]
pytest --collectonly -k test_^[aA-mM] --quiet
I have not got it to work so far; is this possible?
A: Doesn't seem possible according to the pytest doc.
Have you considered marking the tests instead?
This helps with filtering them when you run pytest.
More info about marking can be found in the pytest doc about markers...
or another tutorial about it.
But in short, for example:

just add @pytest.mark.foo onto some tests, and @pytest.mark.bar to others
run pytest -m foo to run only the tests marked as foo.
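To make the marker suggestion concrete: pytest's -k option matches substrings and boolean expressions (and/or/not), not regular expressions, so a registered marker is the usual workaround. The marker and test names below are made up for illustration.

# pytest.ini
[pytest]
markers =
    early_alphabet: tests whose names start with a letter from a to m

# test_example.py
import pytest

@pytest.mark.early_alphabet
def test_apple():
    assert True

def test_zebra():
    assert True

Running pytest -m early_alphabet then selects only the marked tests.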
Can I use regex within a pytest expression
Is it possible to locate tests with pytest using pattern matching? For example, I want to find all tests that begin with the letters from a-m.
I have been trying things like
pytest -m ^[aA-mM]
pytest --collectonly -k test_^[aA-mM] --quiet

I have not got it to work so far; is this possible?
[ "Doesn't seem possible according to pytest doc.\nHave you considered marking the tests instead?\nThis helps with filtering them out when you run pytest.\nMore info about marking could be found in the pytest doc about markers...\nor another tutorial about it\nBut in short, for example:\n\njust add @pytest.mark.foo onto some tests, and @pytest.mark.bar to others\nrun pytest -m foo to run the tests marked as foo only.\n\n" ]
[ 1 ]
[]
[]
[ "pytest", "python" ]
stackoverflow_0074378132_pytest_python.txt
Q: How to find the exponential of a number? What is the easiest/most optimal way of finding the exponential of a number, say x, in Python? i.e. how can I implement e^x? A: The easiest and most optimal way to do e^x in Python is: from math import exp print(exp(4)) Output >>> 54.598150033144236 A: You can use the math.exp() function from the math module (read the docs). >>> import math >>> x = 4 >>> print(math.exp(x)) 54.598150033144236
How to find the exponential of a number?
What is the easiest/most optimal way of finding the exponential of a number, say x, in Python? i.e. how can I implement e^x?
[ "The easiest and most optimal way to do e^x in Python is:\nfrom math import exp\n\nprint(exp(4))\n\nOutput\n>>> 54.598150033144236\n\n", "You can use the math.exp() function from the math module (read the docs).\n>>> import math\n>>> x = 4\n>>> print(math.exp(x))\n54.598150033144236\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074462623_python.txt
Q: Pandas - starting iteration index and slicing with .loc I'm still quite new to Python and programming in general. With luck, I have the right idea, but I can't quite get this to work. With my example df, I want iteration to start when entry == 1. import pandas as pd import numpy as np nan = np.nan a = [0,0,4,4,4,4,6,6] b = [4,4,4,4,4,4,4,4] entry = [nan,nan,nan,nan,1,nan,nan,nan] df = pd.DataFrame(columns=['a', 'b', 'entry']) df = pd.DataFrame.assign(df, a=a, b=b, entry=entry) I wrote a function, with little success. It returns an error, unhashable type: 'slice'. FWIW, I'm applying this function to groups of various lengths. def exit_row(df): start = df.index[df.entry == 1] df.loc[start:,(df.a > df.b), 'exit'] = 1 return df Ideally, the result would be as below: a b entry exit 0 0 4 NaN NaN 1 0 4 NaN NaN 2 4 4 NaN NaN 3 4 4 NaN NaN 4 4 4 1.0 NaN 5 4 4 NaN NaN 6 6 4 NaN 1 7 6 4 NaN 1 Any advice much appreciated. I had wondered if I should attempt a For loop instead, though I often find them difficult to read. A: You can use boolean indexing: # what are the rows after entry? m1 = df['entry'].notna().cummax() # in which rows is a>b? m2 = df['a'].gt(df['b']) # set 1 where both conditions are True df.loc[m1&m2, 'exit'] = 1 output: a b entry exit 0 0 4 NaN NaN 1 0 4 NaN NaN 2 4 4 NaN NaN 3 4 4 NaN NaN 4 4 4 1.0 NaN 5 4 4 NaN NaN 6 6 4 NaN 1.0 7 6 4 NaN 1.0 Intermediates: a b entry notna m1 m2 m1&m2 exit 0 0 4 NaN False False False False NaN 1 0 4 NaN False False False False NaN 2 4 4 NaN False False False False NaN 3 4 4 NaN False False False False NaN 4 4 4 1.0 True True False False NaN 5 4 4 NaN False True False False NaN 6 6 4 NaN False True True True 1.0 7 6 4 NaN False True True True 1.0
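Since the question mentions applying this to groups of various lengths, the same two masks from the answer can be built per group; the 'group' column below is an assumed grouping key, purely for illustration.

import pandas as pd

def flag_exit(g):
    m1 = g['entry'].notna().cummax()  # rows at or after the entry row
    m2 = g['a'].gt(g['b'])            # rows where a > b
    g.loc[m1 & m2, 'exit'] = 1
    return g

# 'group' is hypothetical; replace with the real grouping column
df = df.groupby('group', group_keys=False).apply(flag_exit)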
Pandas - starting iteration index and slicing with .loc
I'm still quite new to Python and programming in general. With luck, I have the right idea, but I can't quite get this to work. With my example df, I want iteration to start when entry == 1. import pandas as pd import numpy as np nan = np.nan a = [0,0,4,4,4,4,6,6] b = [4,4,4,4,4,4,4,4] entry = [nan,nan,nan,nan,1,nan,nan,nan] df = pd.DataFrame(columns=['a', 'b', 'entry']) df = pd.DataFrame.assign(df, a=a, b=b, entry=entry) I wrote a function, with little success. It returns an error, unhashable type: 'slice'. FWIW, I'm applying this function to groups of various lengths. def exit_row(df): start = df.index[df.entry == 1] df.loc[start:,(df.a > df.b), 'exit'] = 1 return df Ideally, the result would be as below: a b entry exit 0 0 4 NaN NaN 1 0 4 NaN NaN 2 4 4 NaN NaN 3 4 4 NaN NaN 4 4 4 1.0 NaN 5 4 4 NaN NaN 6 6 4 NaN 1 7 6 4 NaN 1 Any advice much appreciated. I had wondered if I should attempt a For loop instead, though I often find them difficult to read.
[ "You can use boolean indexing:\n# what are the rows after entry?\nm1 = df['entry'].notna().cummax()\n# in which rows is a>b?\nm2 = df['a'].gt(df['b'])\n\n# set 1 where both conditions are True\ndf.loc[m1&m2, 'exit'] = 1\n\noutput:\n a b entry exit\n0 0 4 NaN NaN\n1 0 4 NaN NaN\n2 4 4 NaN NaN\n3 4 4 NaN NaN\n4 4 4 1.0 NaN\n5 4 4 NaN NaN\n6 6 4 NaN 1.0\n7 6 4 NaN 1.0\n\nIntermediates:\n a b entry notna m1 m2 m1&m2 exit\n0 0 4 NaN False False False False NaN\n1 0 4 NaN False False False False NaN\n2 4 4 NaN False False False False NaN\n3 4 4 NaN False False False False NaN\n4 4 4 1.0 True True False False NaN\n5 4 4 NaN False True False False NaN\n6 6 4 NaN False True True True 1.0\n7 6 4 NaN False True True True 1.0\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "pandas_loc", "python" ]
stackoverflow_0074462705_pandas_pandas_loc_python.txt
Q: pytest parameterisation and asyncio coroutines I have the following test: @pytest.mark.parametrize( "raw_id", [28815543, "PMC5890441", "doi:10.1007/978-981-10-5203-3_9" "28815543"] ) def test_can_fetch_publication(raw_id): idr = IdReference.build(raw_id) res = asyncio.run(fetch.summary(idr)) assert res == { "id": "28815543", "source": "MED", "pmid": "28815543", "doi": "10.1007/978-981-10-5203-3_9", "title": "Understanding the Role of lncRNAs in Nervous System Development.", <... snip ...> which runs the function fetch.summary: @lru_cache() @retry(requests.HTTPError, tries=5, delay=1) @throttle(rate_limit=5, period=1.0) async def summary(id_reference): LOGGER.info("Fetching remote summary for %s", id_reference) url = id_reference.external_url() response = requests.get(url) response.raise_for_status() data = response.json() assert data, "Somehow got no data" if data.get("hitCount", 0) == 0: raise UnknownReference(id_reference) if data["hitCount"] == 1: if not data["resultList"]["result"][0]: raise UnknownReference(id_reference) return data["resultList"]["result"][0] <... snip ...> IdReference is a class defined like this: @attr.s(frozen=True, hash=True) class IdReference(object): namespace = attr.ib(validator=is_a(KnownServices)) external_id = attr.ib(validator=is_a(str)) < ... snip ...> My problem comes when I try to run the test. Given the first and last elements in the parameterisation result in the same IdRefrenece object (the int gets converted to a string - external_id in the class), the coroutine produced by is the same, and I end up with a RuntimeError: cannot reuse already awaited coroutine Is there a way to get around this? Or am I going to have to split the parameterisation out? I tried running the test using pytest-asyncio and get the same error, because I think I just fundamentally have to deal with the coroutine being the 'same' somehow Python version is 3.11, pytest 7.2.0 A: this is not async code. Fixing the test is trivial, but you will have to change your code, please read the whole text. The part responsible to getting you the same (and therefore an "already used") co-routine in this code is the lru_cache. Just reset the cache in your test body, preventing you from getting used-off co-routines and you should be fine: ... def test_can_fetch_publication(raw_id): idr = IdReference.build(raw_id) fetch.summary.cache_clear() res = asyncio.run(fetch.summary(idr)) ... Rather: this would work, but it is NOT a fault test by accident: the failing test in fact showed you what would happen at run time if this cache is ever hit. The functools.lru_cache() decorator DOES NOT WORK, as you can see, for co-routine functions. Just drop it from your code (instead of the above work-around to clear the cache). Since it caches what is returned for the function call, what is cached is the actual co-routine, not the co-routine return value. If this is an important cache to have, write an in-function mechanism for it, instead of a decorator - or rather - your function is not async at all, as it uses the blocking requests.get: just drop the async part in that funciton (as it is, it will be causing more harm than good, by FAR, anyway). Or rewrite it usign httpx, aiohttp or other asyncio counterpart to requests. 
Equally, there is a chance the throttle decorator would work for an async function - but the retry decorator certainly won't: the actual code is executed as a co-routine just after the co-routine function wrapped by "retry" is executed (co-routine functions return co-routine objects, which are, in their turn, executed when await is called, or as a task): if any exception occurs at this point, any except clause in the retry decorator is already past.
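If caching is still wanted after the decorators are removed, one pattern is to cache the awaited result rather than the coroutine object. This sketch assumes summary has been turned into a plain, undecorated coroutine function and relies on IdReference being hashable (it is declared with attr.s(frozen=True, hash=True)); the names are otherwise illustrative.

import asyncio

_summary_cache = {}
_cache_lock = asyncio.Lock()

async def cached_summary(id_reference):
    async with _cache_lock:
        if id_reference in _summary_cache:
            return _summary_cache[id_reference]
    result = await summary(id_reference)  # the undecorated coroutine from the question
    async with _cache_lock:
        _summary_cache[id_reference] = result
    return result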
pytest parameterisation and asyncio coroutines
I have the following test: @pytest.mark.parametrize( "raw_id", [28815543, "PMC5890441", "doi:10.1007/978-981-10-5203-3_9" "28815543"] ) def test_can_fetch_publication(raw_id): idr = IdReference.build(raw_id) res = asyncio.run(fetch.summary(idr)) assert res == { "id": "28815543", "source": "MED", "pmid": "28815543", "doi": "10.1007/978-981-10-5203-3_9", "title": "Understanding the Role of lncRNAs in Nervous System Development.", <... snip ...> which runs the function fetch.summary: @lru_cache() @retry(requests.HTTPError, tries=5, delay=1) @throttle(rate_limit=5, period=1.0) async def summary(id_reference): LOGGER.info("Fetching remote summary for %s", id_reference) url = id_reference.external_url() response = requests.get(url) response.raise_for_status() data = response.json() assert data, "Somehow got no data" if data.get("hitCount", 0) == 0: raise UnknownReference(id_reference) if data["hitCount"] == 1: if not data["resultList"]["result"][0]: raise UnknownReference(id_reference) return data["resultList"]["result"][0] <... snip ...> IdReference is a class defined like this: @attr.s(frozen=True, hash=True) class IdReference(object): namespace = attr.ib(validator=is_a(KnownServices)) external_id = attr.ib(validator=is_a(str)) < ... snip ...> My problem comes when I try to run the test. Given the first and last elements in the parameterisation result in the same IdRefrenece object (the int gets converted to a string - external_id in the class), the coroutine produced by is the same, and I end up with a RuntimeError: cannot reuse already awaited coroutine Is there a way to get around this? Or am I going to have to split the parameterisation out? I tried running the test using pytest-asyncio and get the same error, because I think I just fundamentally have to deal with the coroutine being the 'same' somehow Python version is 3.11, pytest 7.2.0
[ "this is not async code. Fixing the test is trivial, but you will have to change your code, please read the whole text.\nThe part responsible to getting you the same (and therefore an \"already used\") co-routine in this code is the lru_cache.\nJust reset the cache in your test body, preventing you from getting used-off co-routines and you should be fine:\n...\ndef test_can_fetch_publication(raw_id):\n idr = IdReference.build(raw_id)\n fetch.summary.cache_clear()\n res = asyncio.run(fetch.summary(idr))\n ...\n\nRather: this would work, but it is NOT a fault test by accident: the failing test in fact showed you what would happen at run time if this cache is ever hit. The functools.lru_cache() decorator DOES NOT WORK, as you can see, for co-routine functions. Just drop it from your code (instead of the above work-around to clear the cache).\nSince it caches what is returned for the function call, what is cached is the actual co-routine, not the co-routine return value.\nIf this is an important cache to have, write an in-function mechanism for it, instead of a decorator - or rather - your function is not async at all, as it uses the blocking requests.get: just drop the async part in that funciton (as it is, it will be causing more harm than good, by FAR, anyway). Or rewrite it usign httpx, aiohttp or other asyncio counterpart to requests. Equally, there is a chance the throttle decorator would work for an async function - but the retry decorator certainly wont: the actual code is executed as a co-routine just after the co-routine function wrapped by \"retry\" is executed (co-routine functions return co-routine objects, which are, in their turn, executed when await is called, or as a task): if any exception occurs at this point, any except clause in the retry operator is already past.\n" ]
[ 1 ]
[]
[]
[ "pytest", "python", "python_asyncio" ]
stackoverflow_0074459672_pytest_python_python_asyncio.txt
Q: Looping through a list and coloring the text as appropriate based on values I have a large list of names and scores from a survey. I am hoping there is a simple way to loop through the list and color the score red or green based on its value. I have successfully been able to color one of the values but my solution requires a lot of lines and I am hoping there is a quicker way to accomplish this other than copy and pasting the if elif else statements after each "_data" line, here is a snippet of the code: ` one_frame['text'] = score.index[0] one_data['text'] = score.iloc[0, 0] if c_metric == 'Sleep' and score.iloc[0, 0] < 5 or c_metric != 'Sleep' and score.iloc[0, 0] <2.5: one_data.configure(fg='red') elif c_metric == 'Sleep' and score.iloc[0, 0] > 5 or c_metric != 'Sleep' and score.iloc[0, 0] > 4: one_data.configure(fg='green') else: one_data.configure(fg='white') two_frame['text'] = score.index[1] two_data['text'] = score.iloc[1, 0] three_frame['text'] = score.index[2] three_data['text'] = score.iloc[2, 0] four_frame['text'] = score.index[3] four_data['text'] = score.iloc[3, 0] ` The data frame is 2 columns, a name and a score. I have tried a few combination of a for loop but have not found the right solution to color the text appropriately A: I realize this may be a unique issue but if anyone comes across a similar problem I have found a solution after a lot of trial and error: I have added the score.iloc to a list, then under a new function looped through the list and colored as needed: global avgs avgs = [] try: one_frame['text'] = score.index[0] one_data['text'] = score.iloc[0, 0] avgs.append(score.iloc[0, 0]) two_frame['text'] = score.index[1] two_data['text'] = score.iloc[1, 0] avgs.append(score.iloc[1, 0]) ... for i, k, g in zip(avgs, labs, fras): if c_metric == 'Sleep' and i < 6 or c_metric == 'RPE' and i > 15 or c_metric != 'Sleep' and c_metric != 'RPE' and i < 2.5: k.configure(fg='red') g.configure(fg='red') elif c_metric == 'Sleep' and i > 7 or c_metric == 'RPE' and i < 11 or c_metric != 'Sleep' and c_metric != 'RPE' and i > 4: k.configure(fg='green') g.configure(fg='green') else: k.configure(fg='white') I also realize this is not the full code so if you do have any more questions I can explain a bit more and share the full code. Good Luck!
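One way to fold the repeated if/elif blocks into a helper, reusing the names from the snippet above (one_frame, one_data, c_metric, score); the thresholds mirror the original conditions, but treat this as a sketch rather than a drop-in replacement.

def score_color(metric, value):
    # thresholds follow the original if/elif logic
    low, high = (5, 5) if metric == 'Sleep' else (2.5, 4)
    if value < low:
        return 'red'
    if value > high:
        return 'green'
    return 'white'

pairs = [(one_frame, one_data), (two_frame, two_data),
         (three_frame, three_data), (four_frame, four_data)]
for i, (frame, data) in enumerate(pairs):
    frame['text'] = score.index[i]
    data['text'] = score.iloc[i, 0]
    data.configure(fg=score_color(c_metric, score.iloc[i, 0]))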
Looping through a list and coloring the text as appropriate based on values
I have a large list of names and scores from a survey. I am hoping there is a simple way to loop through the list and color the score red or green based on its value. I have successfully been able to color one of the values but my solution requires a lot of lines and I am hoping there is a quicker way to accomplish this other than copy and pasting the if elif else statements after each "_data" line, here is a snippet of the code: ` one_frame['text'] = score.index[0] one_data['text'] = score.iloc[0, 0] if c_metric == 'Sleep' and score.iloc[0, 0] < 5 or c_metric != 'Sleep' and score.iloc[0, 0] <2.5: one_data.configure(fg='red') elif c_metric == 'Sleep' and score.iloc[0, 0] > 5 or c_metric != 'Sleep' and score.iloc[0, 0] > 4: one_data.configure(fg='green') else: one_data.configure(fg='white') two_frame['text'] = score.index[1] two_data['text'] = score.iloc[1, 0] three_frame['text'] = score.index[2] three_data['text'] = score.iloc[2, 0] four_frame['text'] = score.index[3] four_data['text'] = score.iloc[3, 0] ` The data frame is 2 columns, a name and a score. I have tried a few combination of a for loop but have not found the right solution to color the text appropriately
[ "I realize this may be a unique issue but if anyone comes across a similar problem I have found a solution after a lot of trial and error:\nI have added the score.iloc to a list, then under a new function looped through the list and colored as needed:\nglobal avgs\navgs = []\n\ntry:\n one_frame['text'] = score.index[0]\n one_data['text'] = score.iloc[0, 0]\n avgs.append(score.iloc[0, 0])\n two_frame['text'] = score.index[1]\n two_data['text'] = score.iloc[1, 0]\n avgs.append(score.iloc[1, 0])\n\n...\nfor i, k, g in zip(avgs, labs, fras):\n if c_metric == 'Sleep' and i < 6 or c_metric == 'RPE' and i > 15 or c_metric != 'Sleep' and c_metric != 'RPE' and i < 2.5:\n k.configure(fg='red')\n g.configure(fg='red')\n elif c_metric == 'Sleep' and i > 7 or c_metric == 'RPE' and i < 11 or c_metric != 'Sleep' and c_metric != 'RPE' and i > 4:\n k.configure(fg='green')\n g.configure(fg='green')\n else:\n k.configure(fg='white')\n\nI also realize this is not the full code so if you do have any more questions I can explain a bit more and share the full code.\nGood Luck!\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074433629_python_tkinter.txt
Q: Conditionally join a list of strings in Jinja I have a list like users = ['tom', 'dick', 'harry'] In a Jinja template I'd like to print a list of all users except tom joined. I cannot modify the variable before it's passed to the template. I tried a list comprehension, and using Jinja's reject filter but I haven't been able to get these to work, e.g. {{ [name for name in users if name != 'tom'] | join(', ') }} gives a syntax error. How can I join list items conditionally? A: Use reject filter with sameas test: >>> import jinja2 >>> template = jinja2.Template("{{ users|reject('sameas', 'tom')|join(',') }}") >>> template.render(users=['tom', 'dick', 'harry']) u'dick,harry' UPDATE If you're using Jinja 2.8+, use equalto instead of sameas as @Dougal commented; sameas tests with Python is, while equalto tests with ==. A: This should work, as list comprehension is not directly supported. Ofcourse, appropiate filters will be better. {% for name in users if name != 'tom'%} {{name}} {% if not loop.last %} {{','}} {% endif %} {% endfor %} A: Another solution with "equalto" test instead of "sameas" Use reject filter with equalto test: >>> import jinja2 >>> template = jinja2.Template("{{ items|reject('equalto', 'aa')|join(',') }}") >>> template.render(users=['aa', 'bb', 'cc']) u'bb,cc' A: If someone is trying to reject more than one value just try to write 'in' with list afterwards instead of 'equalto'/'sameas': >>> import jinja2 >>> template = jinja2.Template("{{ items|reject('in', ['a','b'])|join(',') }}") >>> template.render(users=['a', 'b', 'c']) u'c'
Conditionally join a list of strings in Jinja
I have a list like users = ['tom', 'dick', 'harry'] In a Jinja template I'd like to print a list of all users except tom joined. I cannot modify the variable before it's passed to the template. I tried a list comprehension, and using Jinja's reject filter but I haven't been able to get these to work, e.g. {{ [name for name in users if name != 'tom'] | join(', ') }} gives a syntax error. How can I join list items conditionally?
[ "Use reject filter with sameas test:\n>>> import jinja2\n>>> template = jinja2.Template(\"{{ users|reject('sameas', 'tom')|join(',') }}\")\n>>> template.render(users=['tom', 'dick', 'harry'])\nu'dick,harry'\n\nUPDATE\nIf you're using Jinja 2.8+, use equalto instead of sameas as @Dougal commented; sameas tests with Python is, while equalto tests with ==.\n", "This should work, as list comprehension is not directly supported. Ofcourse, appropiate filters will be better.\n{% for name in users if name != 'tom'%}\n {{name}}\n {% if not loop.last %}\n {{','}}\n {% endif %}\n{% endfor %}\n\n", "Another solution with \"equalto\" test instead of \"sameas\"\nUse reject filter with equalto test:\n>>> import jinja2\n>>> template = jinja2.Template(\"{{ items|reject('equalto', 'aa')|join(',') }}\")\n>>> template.render(users=['aa', 'bb', 'cc'])\nu'bb,cc'\n\n", "If someone is trying to reject more than one value just try to write 'in' with list afterwards instead of 'equalto'/'sameas':\n>>> import jinja2\n>>> template = jinja2.Template(\"{{ items|reject('in', ['a','b'])|join(',') }}\")\n>>> template.render(users=['a', 'b', 'c'])\nu'c'\n\n" ]
[ 24, 13, 1, 0 ]
[]
[]
[ "jinja2", "python" ]
stackoverflow_0024041885_jinja2_python.txt
Q: How to group redundant values in pytest parametrize test? I am trying to remove redundant rows in my parametrized tests. Redundant - I mean I repeat this kind of code all the time. Here is example of my test: 1 @pytest.mark.parametrize("field, violations", [ 2 (None, [NULL_VIOLATION]), 3 (True, []), 4 (False, []) 5 ]) 6 def test_validate_field(field: str, violations: [str]): 7 ... As you can see, lines: 2,3,4 are simple test of annotation @NotNull in my Controller Class. Line 2 is bad path test and line 3,4 are happy path. I repeat those 3 lines in every test when I need to check @NotNull Is it possible to inline this somehow? What I want to achieve is something similar to that pseudo code: 1 @pytest.mark.parametrize("field, violations", [ 2 check_not_null_constraint() 3 ]) 4 def test_validate_field(field: str, violations: [str]): 5 ... I don't want to get rid of parametrized because instead of checking that not_null I am testing many other things like size etc. I am testing everything per parameter. So 1 test for 1 parameter in class. A: I'd go with the following: put the list of the paths into a variable, which contains a list... path_params = [(None, [NULL_VIOLATION]), (True, []), (False, [])] And then, pass that variable into parametrize: @pytest.mark.parametrize("field, violations", path_params) UPD: naturally, this would only work if you want to reuse these exact params in other tests; for similar stuff, the other answer would probably work better... A: This can be accomplished using pytest_generate_tests See: https://docs.pytest.org/en/6.2.x/parametrize.html This test setup: import pytest @pytest.mark.parametrize( "a, b",[(1,10),(2,20)] ) def test_param_func1(a, b): assert 10*a == b @pytest.mark.parametrize( "a, b",[(1,10),(2,20)] ) def test_param_func2(a, b): assert a in (1,2) assert b in (10,20) $ pytest -v -k "test_param" test_param.py::test_param_func1[1-10] PASSED [ 25%] test_param.py::test_param_func1[2-20] PASSED [ 50%] test_param.py::test_param_func2[1-10] PASSED [ 75%] test_param.py::test_param_func2[2-20] PASSED [100%] Can also be achieved like this: import pytest def pytest_generate_tests(metafunc): # This hook function runs once per test function metafunc.parametrize("a, b",[(1,10),(2,20)]) def test_param_func3(a, b): assert 10*a == b def test_param_func4(a, b): assert a in (1,2) assert b in (10,20) $ pytest -v -k "test_param" test_param.py::test_param_func3[1-10] PASSED [ 25%] test_param.py::test_param_func3[2-20] PASSED [ 50%] test_param.py::test_param_func4[1-10] PASSED [ 75%] test_param.py::test_param_func5[2-20] PASSED [100%] You can take it a step farther and parametrize your tests conditionally based on command-line arguments using metafunc.config.getoption("--yourflag") (Pytest Generate Tests Based on Arguments)
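A sketch of the conditional parametrization mentioned at the end of the second answer, using a made-up --quick command line flag; the option name and parameter sets are assumptions.

# conftest.py -- illustrative only
def pytest_addoption(parser):
    parser.addoption("--quick", action="store_true", help="run a reduced parameter set")

def pytest_generate_tests(metafunc):
    if {"a", "b"} <= set(metafunc.fixturenames):
        params = [(1, 10)] if metafunc.config.getoption("--quick") else [(1, 10), (2, 20)]
        metafunc.parametrize("a, b", params)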
How to group redundant values in pytest parametrize test?
I am trying to remove redundant rows in my parametrized tests. Redundant - I mean I repeat this kind of code all the time. Here is example of my test: 1 @pytest.mark.parametrize("field, violations", [ 2 (None, [NULL_VIOLATION]), 3 (True, []), 4 (False, []) 5 ]) 6 def test_validate_field(field: str, violations: [str]): 7 ... As you can see, lines: 2,3,4 are simple test of annotation @NotNull in my Controller Class. Line 2 is bad path test and line 3,4 are happy path. I repeat those 3 lines in every test when I need to check @NotNull Is it possible to inline this somehow? What I want to achieve is something similar to that pseudo code: 1 @pytest.mark.parametrize("field, violations", [ 2 check_not_null_constraint() 3 ]) 4 def test_validate_field(field: str, violations: [str]): 5 ... I don't want to get rid of parametrized because instead of checking that not_null I am testing many other things like size etc. I am testing everything per parameter. So 1 test for 1 parameter in class.
[ "I'd go with the following: put the list of the paths into a variable, which contains a list...\npath_params = [(None, [NULL_VIOLATION]), (True, []), (False, [])]\n\nAnd then, pass that variable into parametrize:\n@pytest.mark.parametrize(\"field, violations\", path_params)\n\nUPD: naturally, this would only work if you want to reuse these exact params in other tests; for similar stuff, the other answer would probably work better...\n", "This can be accomplished using pytest_generate_tests See: https://docs.pytest.org/en/6.2.x/parametrize.html\nThis test setup:\nimport pytest\n\n@pytest.mark.parametrize(\n \"a, b\",[(1,10),(2,20)]\n)\ndef test_param_func1(a, b):\n assert 10*a == b\n\n@pytest.mark.parametrize(\n \"a, b\",[(1,10),(2,20)]\n)\ndef test_param_func2(a, b):\n assert a in (1,2) \n assert b in (10,20)\n\n$ pytest -v -k \"test_param\"\ntest_param.py::test_param_func1[1-10] PASSED [ 25%] \ntest_param.py::test_param_func1[2-20] PASSED [ 50%] \ntest_param.py::test_param_func2[1-10] PASSED [ 75%] \ntest_param.py::test_param_func2[2-20] PASSED [100%]\n\nCan also be achieved like this:\nimport pytest\n\ndef pytest_generate_tests(metafunc):\n # This hook function runs once per test function\n metafunc.parametrize(\"a, b\",[(1,10),(2,20)])\n\ndef test_param_func3(a, b):\n assert 10*a == b\n\ndef test_param_func4(a, b):\n assert a in (1,2) \n assert b in (10,20)\n\n$ pytest -v -k \"test_param\"\ntest_param.py::test_param_func3[1-10] PASSED [ 25%] \ntest_param.py::test_param_func3[2-20] PASSED [ 50%] \ntest_param.py::test_param_func4[1-10] PASSED [ 75%] \ntest_param.py::test_param_func5[2-20] PASSED [100%]\n\nYou can take it a step farther and parametrize your tests conditionally based on command-line arguments using metafunc.config.getoption(\"--yourflag\") (Pytest Generate Tests Based on Arguments)\n" ]
[ 1, 0 ]
[]
[]
[ "pytest", "python", "python_3.x" ]
stackoverflow_0074375816_pytest_python_python_3.x.txt
Q: Moving desired row to the top of pandas Data Frame In pandas, how can I copy or move a row to the top of the Data Frame without creating a copy of the Data Frame? For example, I managed to do almost what I want with the code below, but I have the impression that there might be a better way to accomplish this: import pandas as pd df = pd.DataFrame({'Probe':['Test1','Test2','Test3'], 'Sequence':['AATGCGT','TGCGTAA','ATGCATG']}) df Probe Sequence 0 Test1 AATGCGT 1 Test2 TGCGTAA 2 Test3 ATGCATG df_shifted = df.shift(1) df_shifted Probe Sequence 0 NaN NaN 1 Test1 AATGCGT 2 Test2 TGCGTAA df_shifted.ix[0] = df.ix[2] df_shifted Probe Sequence 0 Test3 ATGCATG 1 Test1 AATGCGT 2 Test2 TGCGTAA A: pandas.concat: df = pd.concat([df.iloc[[n],:], df.drop(n, axis=0)], axis=0) A: Try this. You don't need to make a copy of the dataframe. df["new"] = range(1,len(df)+1) Probe Sequence new 0 Test1 AATGCGT 1 1 Test2 TGCGTAA 2 2 Test3 ATGCATG 3 df.ix[2,'new'] = 0 df.sort_values("new").drop('new', axis=1) Probe Sequence 2 Test3 ATGCATG 0 Test1 AATGCGT 1 Test2 TGCGTAA Basically, since you can't insert the row into the index at 0, create a column so you can. If you want the index ordered, use this: df.sort_values("new").reset_index(drop='True').drop('new', axis=1) Probe Sequence 0 Test3 ATGCATG 1 Test1 AATGCGT 2 Test2 TGCGTAA Edit: df.ix is deprecated. Here's the same method with .loc. df["new"] = range(1,len(df)+1) df.loc[df.index==2, 'new'] = 0 df.sort_values("new").drop('new', axis=1) A: Okay, I think I came up with a solution. By all means, please feel free to add your own answer if you think yours is better: import numpy as np df.ix[3] = np.nan df Probe Sequence 0 Test1 AATGCGT 1 Test2 TGCGTAA 2 Test3 ATGCATG 3 NaN NaN df = df.shift(1) Probe Sequence 0 NaN NaN 1 Test1 AATGCGT 2 Test2 TGCGTAA 3 Test3 ATGCATG df.ix[0] = df.ix[2] df Probe Sequence 0 Test3 ATGCATG 1 Test1 AATGCGT 2 Test2 TGCGTAA 3 Test3 ATGCATG A: I'm a little surprised no one has submitted this: df = df.loc[[2, 0, 1]]
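For an arbitrary index label n, rather than the hard-coded [2, 0, 1] in the last answer, a small reindex does the same thing; this sketch assumes the index labels are unique.

def move_row_to_top(df, n):
    new_order = [n] + [i for i in df.index if i != n]
    return df.loc[new_order]

df_shifted = move_row_to_top(df, 2)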
Moving desired row to the top of pandas Data Frame
In pandas, how can I copy or move a row to the top of the Data Frame without creating a copy of the Data Frame? For example, I managed to do almost what I want with the code below, but I have the impression that there might be a better way to accomplish this: import pandas as pd df = pd.DataFrame({'Probe':['Test1','Test2','Test3'], 'Sequence':['AATGCGT','TGCGTAA','ATGCATG']}) df Probe Sequence 0 Test1 AATGCGT 1 Test2 TGCGTAA 2 Test3 ATGCATG df_shifted = df.shift(1) df_shifted Probe Sequence 0 NaN NaN 1 Test1 AATGCGT 2 Test2 TGCGTAA df_shifted.ix[0] = df.ix[2] df_shifted Probe Sequence 0 Test3 ATGCATG 1 Test1 AATGCGT 2 Test2 TGCGTAA
[ "pandas.concat:\ndf = pd.concat([df.iloc[[n],:], df.drop(n, axis=0)], axis=0)\n\n", "Try this. You don't need to make a copy of the dataframe.\ndf[\"new\"] = range(1,len(df)+1)\n\n Probe Sequence new\n0 Test1 AATGCGT 1\n1 Test2 TGCGTAA 2\n2 Test3 ATGCATG 3\n\n\ndf.ix[2,'new'] = 0\ndf.sort_values(\"new\").drop('new', axis=1)\n\n Probe Sequence\n2 Test3 ATGCATG\n0 Test1 AATGCGT\n1 Test2 TGCGTAA\n\nBasically, since you can't insert the row into the index at 0, create a column so you can.\nIf you want the index ordered, use this:\ndf.sort_values(\"new\").reset_index(drop='True').drop('new', axis=1)\n\n Probe Sequence\n0 Test3 ATGCATG\n1 Test1 AATGCGT\n2 Test2 TGCGTAA\n\nEdit: df.ix is deprecated. Here's the same method with .loc.\ndf[\"new\"] = range(1,len(df)+1)\ndf.loc[df.index==2, 'new'] = 0\ndf.sort_values(\"new\").drop('new', axis=1)\n\n", "Okay, I think I came up with a solution. By all means, please feel free to add your own answer if you think yours is better:\nimport numpy as np\n\ndf.ix[3] = np.nan\n\ndf\n\n Probe Sequence\n0 Test1 AATGCGT\n1 Test2 TGCGTAA\n2 Test3 ATGCATG\n3 NaN NaN\n\ndf = df.shift(1)\n\n Probe Sequence\n0 NaN NaN\n1 Test1 AATGCGT\n2 Test2 TGCGTAA\n3 Test3 ATGCATG\n\ndf.ix[0] = df.ix[2]\n\ndf\n\n Probe Sequence\n0 Test3 ATGCATG\n1 Test1 AATGCGT\n2 Test2 TGCGTAA\n3 Test3 ATGCATG\n\n", "I'm a little surprised no one has submitted this:\ndf = df.loc[[2, 0, 1]]\n\n" ]
[ 7, 7, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0038980507_pandas_python.txt
Q: Make an array (1, x, x^2) based on x in dataframe Let's suppose that I have dataframe df similar like: c0 c1 10 2 8 2 4 1 How can I make an np.array for each element of this datafrmae so that I have for each element array (1, x, x^2)? For example for col1, first element (0) should get the array [1, 2, 4], for c0, for the fisrt elemend (10), get [1,10,100] etc. The new np.array should then have the dimensions (2 x 3 x 3), 2 for two columns, 3 for 3 rows and 3 for 3 values [1,x,x^2]? I tried: df.loc[:,i].apply(lambda x: np.power(x,np.arange(3))) But I could not get the wanted form of np.array (dimensions: 2 x 3 x 3). A: The numpy approach would be to broadcast to a 3D array: out = df.to_numpy()[...,None]**[0,1,2] Output: array([[[ 1, 10, 100], [ 1, 2, 4]], [[ 1, 8, 64], [ 1, 2, 4]], [[ 1, 4, 16], [ 1, 1, 1]]]) If you want a (2, 3, 3) shape, swap the axes: out = (df.to_numpy()[...,None]**[0,1,2]).swapaxes(0, 1) output: array([[[ 1, 10, 100], [ 1, 8, 64], [ 1, 4, 16]], [[ 1, 2, 4], [ 1, 2, 4], [ 1, 1, 1]]])
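np.vander builds exactly these (1, x, x^2) rows for a one-dimensional array, so a per-column alternative to the broadcasting answer could look like the sketch below, using the same df.

import numpy as np

out = np.stack([np.vander(df[c].to_numpy(), 3, increasing=True) for c in df.columns])
# out.shape == (2, 3, 3): one (rows x powers) block per column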
Make an array (1, x, x^2) based on x in dataframe
Let's suppose that I have a dataframe df similar to:
c0 c1
10 2
8  2
4  1

How can I make an np.array for each element of this dataframe so that I have for each element the array (1, x, x^2)? For example for c1, the first element (row 0) should get the array [1, 2, 4], and for c0, the first element (10) should get [1,10,100], etc.
The new np.array should then have the dimensions (2 x 3 x 3): 2 for two columns, 3 for 3 rows and 3 for 3 values [1,x,x^2].
I tried:
df.loc[:,i].apply(lambda x: np.power(x,np.arange(3)))

But I could not get the wanted form of np.array (dimensions: 2 x 3 x 3).
[ "The numpy approach would be to broadcast to a 3D array:\nout = df.to_numpy()[...,None]**[0,1,2]\n\nOutput:\narray([[[ 1, 10, 100],\n [ 1, 2, 4]],\n\n [[ 1, 8, 64],\n [ 1, 2, 4]],\n\n [[ 1, 4, 16],\n [ 1, 1, 1]]])\n\nIf you want a (2, 3, 3) shape, swap the axes:\nout = (df.to_numpy()[...,None]**[0,1,2]).swapaxes(0, 1)\n\noutput:\narray([[[ 1, 10, 100],\n [ 1, 8, 64],\n [ 1, 4, 16]],\n\n [[ 1, 2, 4],\n [ 1, 2, 4],\n [ 1, 1, 1]]])\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "polynomials", "python" ]
stackoverflow_0074462806_pandas_polynomials_python.txt
Q: How to create a n to n matrix in python with input value diagonal I am working on a probabilistic calculation and I could run my code for a small matrix like; P_4 = np.array([ [0 ,1 ,0 , 0, 0], [0 ,1/4,3/4, 0, 0], [0 ,0 ,2/4,2/4, 0], [0 ,0 ,0 ,3/4,1/4], [0 ,0 ,0 , 0,1 ], ]) However, I would like to create a N*N matrix and to fill the values diagonally 0/n and next value 1 - 0/n. n = 5 a = np.zeros((n,n),dtype = int) np.fill_diagonal(a,np.array([range(1/n)])) a writing the above code gives me the error TypeError: 'float' object cannot be interpreted as an integer I would a appreciate any suggestions. A: Here's one option using linspace and diag. n = 5 diag = np.linspace(0, 1, n) diag1 = (1 - diag[:-1]) a = np.diag(diag) + np.diag(diag1, 1) a Output: array([[0. , 1. , 0. , 0. , 0. ], [0. , 0.25, 0.75, 0. , 0. ], [0. , 0. , 0.5 , 0.5 , 0. ], [0. , 0. , 0. , 0.75, 0.25], [0. , 0. , 0. , 0. , 1. ]])
How to create a n to n matrix in python with input value diagonal
I am working on a probabilistic calculation and I could run my code for a small matrix like:
P_4 = np.array([
[0 ,1 ,0 , 0, 0],
[0 ,1/4,3/4, 0, 0],
[0 ,0 ,2/4,2/4, 0],
[0 ,0 ,0 ,3/4,1/4],
[0 ,0 ,0 , 0,1 ],
])

However, I would like to create an N*N matrix and to fill the values diagonally 0/n and next value 1 - 0/n.
n = 5
a = np.zeros((n,n),dtype = int)
np.fill_diagonal(a,np.array([range(1/n)]))
a

Writing the above code gives me the error TypeError: 'float' object cannot be interpreted as an integer.
I would appreciate any suggestions.
[ "Here's one option using linspace and diag.\nn = 5\ndiag = np.linspace(0, 1, n)\ndiag1 = (1 - diag[:-1])\na = np.diag(diag) + np.diag(diag1, 1)\na\n\nOutput:\narray([[0. , 1. , 0. , 0. , 0. ],\n [0. , 0.25, 0.75, 0. , 0. ],\n [0. , 0. , 0.5 , 0.5 , 0. ],\n [0. , 0. , 0. , 0.75, 0.25],\n [0. , 0. , 0. , 0. , 1. ]])\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074462782_numpy_python.txt
Q: Multiprocess does not apply list change to all processes How i can share list changes between multiprocessing parallel process?, im having trouble with that. import multiprocessing listx = [] def one(): global listx time.sleep(5) if 'ok' not in listx: print('not in') else: print('in') def two(): global listx listx.append('ok') if __name__ == '__main__': p1 = multiprocessing.Process(target=one) p2 = multiprocessing.Process(target=two) p1.start() p2.start() # Output : not in A: If you don't specifically need multiprocessing, you can use threading, as they share resources. Else, you need to use a pipes and queues to synchronize your resources: doc A: You can used a managed list: import multiprocessing import time def one(listx): time.sleep(5) if 'ok' not in listx: print('not in') else: print('in') def two(listx): listx.append('ok') if __name__ == '__main__': with multiprocessing.Manager() as manager: listx = manager.list() p1 = multiprocessing.Process(target=one, args=(listx,)) p2 = multiprocessing.Process(target=two, args=(listx,)) p1.start() p2.start() # Wait for processes to complete # and results printed before destroying manager process: p1.join() p2.join() print(listx) Prints: in ['ok']
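To make the first answer's pointer to queues concrete, here is a rough sketch of the same two functions communicating through a multiprocessing.Queue instead of a plain list; the structure is illustrative, and q.empty() is only dependable here because of the deliberate five second delay.

import multiprocessing
import time

def one(q):
    time.sleep(5)
    received = []
    while not q.empty():
        received.append(q.get())
    print('in' if 'ok' in received else 'not in')

def two(q):
    q.put('ok')

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p1 = multiprocessing.Process(target=one, args=(q,))
    p2 = multiprocessing.Process(target=two, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()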
Multiprocess does not apply list change to all processes
How can I share list changes between parallel multiprocessing processes? I'm having trouble with that.
import multiprocessing

listx = []

def one():
    global listx
    time.sleep(5)
    if 'ok' not in listx:
        print('not in')
    else:
        print('in')

def two():
    global listx
    listx.append('ok')

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=one)
    p2 = multiprocessing.Process(target=two)
    p1.start()
    p2.start()

# Output : not in
[ "If you don't specifically need multiprocessing, you can use threading, as they share resources. Else, you need to use a pipes and queues to synchronize your resources: doc\n", "You can used a managed list:\nimport multiprocessing\nimport time\n\ndef one(listx):\n time.sleep(5)\n if 'ok' not in listx:\n print('not in')\n else:\n print('in')\n\ndef two(listx):\n listx.append('ok')\n\nif __name__ == '__main__':\n with multiprocessing.Manager() as manager:\n listx = manager.list()\n p1 = multiprocessing.Process(target=one, args=(listx,))\n p2 = multiprocessing.Process(target=two, args=(listx,))\n p1.start()\n p2.start()\n # Wait for processes to complete\n # and results printed before destroying manager process:\n p1.join()\n p2.join()\n print(listx)\n\nPrints:\nin\n['ok']\n\n" ]
[ 0, 0 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0074442936_multiprocessing_python.txt
Q: pandas agg using "intermediate" column without recomputing [group size] times the same value Let's say I have a DataFrame df=pd.DataFrame({'a':[1,2,np.nan,3,4,5,3], 'b':[11,22,22,11,22,22,22]}) a b 0 1.0 11 1 2.0 22 2 NaN 22 3 3.0 11 4 4.0 22 5 5.0 22 6 3.0 22 I want compute a reduced dataframe where I group by b, and where my column depends on the groupwise mean. Specifically, I want the column to contain the number of elements in a that are smaller than the group wise mean. For this I found a solution which seems like it could be improved because I am guessing it recomputed the mean 2 times for the '11' group and 5 times for the '22' group: Slow solution using groupby, agg and NamedAgg: df.groupby('b').agg(c=pd.NamedAgg(column='a', aggfunc=lambda x: sum(i<x.mean() for i in x))) dff=df.groupby('b').agg(c=pd.NamedAgg(column='a', aggfunc=lambda x: sum(i<x.mean() for i in x))) print(dff) c b 11 1 22 2 Do you know a better way where the mean is only computed once per group? I have searched parameters in pandas merge, concat, join, agg, apply etc. But I think there must be a savant combination of these that would achieve what I am trying to do. A: Don't use python's sum, use the vectorial counterpart, it will enable you to compute the mean only once per group: df.groupby('b')['a'].agg(c=lambda s: s.lt(s.mean()).sum()) output: c b 11 1 22 2 Speed comparison ## provided example # vectorial approach 1.07 ms ± 33.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # loop 2.86 ms ± 129 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) ## 70k rows # vectorial approach 3.19 ms ± 391 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # loop 7.67 s ± 104 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) A: From my experience, if having a large dataset (millions or more of obs), then using merge turn out to be more efficient and faster than using lambda or transform. tem = df.groupby('b')['a'].mean().reset_index(name='mean') df = pd.merge(df, tem, on='b', how='left') df.loc[df['a']<df['mean']].groupby('b')['a'].count().reset_index(name='c')
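The transform the second answer alludes to can also replace the merge entirely; a sketch on the same df:

group_mean = df.groupby('b')['a'].transform('mean')  # mean computed once per group, broadcast back
dff = df['a'].lt(group_mean).groupby(df['b']).sum().rename('c').to_frame()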
pandas agg using "intermediate" column without recomputing [group size] times the same value
Let's say I have a DataFrame
df=pd.DataFrame({'a':[1,2,np.nan,3,4,5,3], 'b':[11,22,22,11,22,22,22]})

     a   b
0  1.0  11
1  2.0  22
2  NaN  22
3  3.0  11
4  4.0  22
5  5.0  22
6  3.0  22

I want to compute a reduced dataframe where I group by b, and where my column depends on the groupwise mean. Specifically, I want the column to contain the number of elements in a that are smaller than the groupwise mean.
For this I found a solution which seems like it could be improved, because I am guessing it recomputes the mean 2 times for the '11' group and 5 times for the '22' group:
Slow solution using groupby, agg and NamedAgg:
df.groupby('b').agg(c=pd.NamedAgg(column='a', aggfunc=lambda x: sum(i<x.mean() for i in x)))

dff=df.groupby('b').agg(c=pd.NamedAgg(column='a', aggfunc=lambda x: sum(i<x.mean() for i in x)))
print(dff)

    c
b
11  1
22  2

Do you know a better way where the mean is only computed once per group?
I have searched parameters in pandas merge, concat, join, agg, apply etc. But I think there must be a savant combination of these that would achieve what I am trying to do.
[ "Don't use python's sum, use the vectorial counterpart, it will enable you to compute the mean only once per group:\ndf.groupby('b')['a'].agg(c=lambda s: s.lt(s.mean()).sum())\n\noutput:\n c\nb \n11 1\n22 2\n\nSpeed comparison\n## provided example\n\n# vectorial approach\n1.07 ms ± 33.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n# loop\n2.86 ms ± 129 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n\n## 70k rows\n\n# vectorial approach\n3.19 ms ± 391 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n# loop\n7.67 s ± 104 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n", "From my experience, if having a large dataset (millions or more of obs), then using merge turn out to be more efficient and faster than using lambda or transform.\ntem = df.groupby('b')['a'].mean().reset_index(name='mean')\ndf = pd.merge(df, tem, on='b', how='left')\ndf.loc[df['a']<df['mean']].groupby('b')['a'].count().reset_index(name='c')\n\n" ]
[ 0, 0 ]
[]
[]
[ "aggregate", "dataframe", "group_by", "pandas", "python" ]
stackoverflow_0074461707_aggregate_dataframe_group_by_pandas_python.txt
Q: How to make countdown in python without using time.sleep method? I'm new to python. I wonder if there any way to make countdown program in python without using any external library and time.sleep method? Please give me an example with code. thanks in advance. A: You could use the Unix Timestamp, which gives you a precise value of the seconds wich have passed since the first day of 1970. By substracting a startvalue of this from the actual time each checking time, you can calculate the time that has passed since your desired time. You can get the Unixseconds in python with time.time() Of course, this would require importing the time module, but this is a builtin library hence why I wouldnt classify it as additional library. A: Simple answer: No, it is not directly possible to wait without any imports. The Python modules do this very efficiently, you should use them. But there's a dirty option for UNIX/Linux: Without any imports you could read the time (and the progressing of time) from /proc/uptime and build a wait function on your own. Note that this is not in any way the best practice and very inefficient. But, if you really can't use modules in your program it could look something like this: def wait(seconds): uptime_start = getUptime() uptime = uptime_start while(uptime != uptime_start + seconds): uptime = getUptime() def getUptime(): with open("/proc/uptime", "r") as f: uptime = f.read().split(" ")[0].strip() return int(float(uptime)) print("Hello, let's wait 5 seconds") wait(5) print("Time passed quickly.") This does not work for Windows. There are no alternatives, as none of Python 3.9.6 built-in functions (see https://docs.python.org/3/library/functions.html) allow for the measuring of time. If you want your Python code to stop at some point without freezing other parts of your program, you won't get around using modules. It would work like this: import threading def waitForThis(): print("now the time has passed") # after 5 seconds, 'waitForThis' gets executed timer = threading.Timer(5.0, waitForThis) timer.start() print("I can write this while it's running") A: number = 5 print("Countdown!") while True: print(number) number = number - 1 if number <= 0: break print("Now!") ez pz
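To turn the first answer's timestamp idea into code: a busy-wait countdown that uses time.time() but never time.sleep(). This is only a sketch; it keeps one CPU core busy while waiting, which is exactly why the sleep functions normally exist.

import time

def countdown(seconds):
    start = time.time()
    for remaining in range(seconds, 0, -1):
        print(remaining)
        while time.time() - start < seconds - remaining + 1:
            pass  # busy-wait until a full second has elapsed
    print("Now!")

countdown(5)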
How to make countdown in python without using time.sleep method?
I'm new to python. I wonder if there any way to make countdown program in python without using any external library and time.sleep method? Please give me an example with code. thanks in advance.
[ "You could use the Unix Timestamp, which gives you a precise value of the seconds wich have passed since the first day of 1970. By substracting a startvalue of this from the actual time each checking time, you can calculate the time that has passed since your desired time. You can get the Unixseconds in python with\ntime.time()\n\nOf course, this would require importing the time module, but this is a builtin library hence why I wouldnt classify it as additional library.\n", "Simple answer:\nNo, it is not directly possible to wait without any imports. The Python modules do this very efficiently, you should use them.\nBut there's a dirty option for UNIX/Linux: \nWithout any imports you could read the time (and the progressing of time) from /proc/uptime and build a wait function on your own.\nNote that this is not in any way the best practice and very inefficient.\nBut, if you really can't use modules in your program it could look something like this:\ndef wait(seconds):\n uptime_start = getUptime()\n uptime = uptime_start\n while(uptime != uptime_start + seconds):\n uptime = getUptime()\n \ndef getUptime():\n with open(\"/proc/uptime\", \"r\") as f:\n uptime = f.read().split(\" \")[0].strip()\n return int(float(uptime))\n\nprint(\"Hello, let's wait 5 seconds\")\nwait(5)\nprint(\"Time passed quickly.\")\n\nThis does not work for Windows. There are no alternatives, as none of Python 3.9.6 built-in functions (see https://docs.python.org/3/library/functions.html) allow for the measuring of time.\nIf you want your Python code to stop at some point without freezing other parts of your program, you won't get around using modules.\nIt would work like this:\nimport threading\n\ndef waitForThis():\n print(\"now the time has passed\")\n\n# after 5 seconds, 'waitForThis' gets executed\ntimer = threading.Timer(5.0, waitForThis)\ntimer.start()\n\nprint(\"I can write this while it's running\")\n\n", "number = 5\nprint(\"Countdown!\")\nwhile True:\n print(number)\n number = number - 1\n if number <= 0:\n break\n\nprint(\"Now!\")\n\nez pz\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0068783625_python.txt
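A minimal sketch of the polling idea from the first answer above, using only the built-in time module and no time.sleep; the 5-second duration is just an example value. Note that this busy-waits, so it keeps one CPU core spinning, which is exactly what the threading.Timer approach in the second answer avoids.

import time

def countdown(seconds):
    """Print a countdown once per second without calling time.sleep."""
    start = time.monotonic()
    remaining = seconds
    print(remaining)
    while remaining > 0:
        # Busy-wait: keep polling the clock until one more whole second has elapsed.
        if time.monotonic() - start >= (seconds - remaining) + 1:
            remaining -= 1
            print(remaining if remaining > 0 else "Now!")

countdown(5)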
Q: Removing a particular node on a specific line for all .xml files in a folder I would really appreciate if i can get help with this problem, I have got hundreds of .xml files in a folder lets say /annots, and each .xml has a node called <occluded>0</occluded> and I am basically trying to iterate through all the .xml files in the folder /annots and delete that node <occluded>0</occluded> from all of the files in the folder, or better still delete text from a particular line since the node <occluded>0</occluded>, appears on the same line for each file. Heres what my .xml file looks like: ` <annotation> <folder></folder> <filename>Glass217_jpg.rf.92a572df568fd3d2ce118bfb5ffa6cb5.jpg</filename> <path>Glass217_jpg.rf.92a572df568fd3d2ce118bfb5ffa6cb5.jpg</path> <source> <database>roboflow.ai</database> </source> <size> <width>640</width> <height>640</height> <depth>3</depth> </size> <segmented>0</segmented> <object> <name>Glass</name> <pose>Unspecified</pose> <truncated>0</truncated> <difficult>0</difficult> <occluded>0</occluded> <bndbox> <xmin>55</xmin> <xmax>372</xmax> <ymin>248</ymin> <ymax>406</ymax> </bndbox> </object> </annotation> ` I havent been able to try anything else, as I am not sure there is any tool to help me with this. I am also expecting the file names to remain the same when the node has been deleted for each A: If you want to do it by line number, you can simply use sed: sed -i.bkp -e'19d' annots/*.xml Will delete line 19 on all *.xml files in folder annots. The original files will be retained with .bkp suffix. If you cannot rely on the line number, but want to parse the XML and delete the tag, xmlstarlet is your friend. Unfortunately, xmlstarlet does not have any in-place-editing option, so you need to run a loop: for FILE in annots/*.xml; do mv $FILE $FILE.bkp xmlstarlet ed -d '//occluded' $FILE.bkp >$FILE done
Removing a particular node on a specific line for all .xml files in a folder
I would really appreciate if i can get help with this problem, I have got hundreds of .xml files in a folder lets say /annots, and each .xml has a node called <occluded>0</occluded> and I am basically trying to iterate through all the .xml files in the folder /annots and delete that node <occluded>0</occluded> from all of the files in the folder, or better still delete text from a particular line since the node <occluded>0</occluded>, appears on the same line for each file. Heres what my .xml file looks like: ` <annotation> <folder></folder> <filename>Glass217_jpg.rf.92a572df568fd3d2ce118bfb5ffa6cb5.jpg</filename> <path>Glass217_jpg.rf.92a572df568fd3d2ce118bfb5ffa6cb5.jpg</path> <source> <database>roboflow.ai</database> </source> <size> <width>640</width> <height>640</height> <depth>3</depth> </size> <segmented>0</segmented> <object> <name>Glass</name> <pose>Unspecified</pose> <truncated>0</truncated> <difficult>0</difficult> <occluded>0</occluded> <bndbox> <xmin>55</xmin> <xmax>372</xmax> <ymin>248</ymin> <ymax>406</ymax> </bndbox> </object> </annotation> ` I havent been able to try anything else, as I am not sure there is any tool to help me with this. I am also expecting the file names to remain the same when the node has been deleted for each
[ "If you want to do it by line number, you can simply use sed:\nsed -i.bkp -e'19d' annots/*.xml\n\nWill delete line 19 on all *.xml files in folder annots. The original files will be retained with .bkp suffix.\nIf you cannot rely on the line number, but want to parse the XML and delete the tag, xmlstarlet is your friend. Unfortunately, xmlstarlet does not have any in-place-editing option, so you need to run a loop:\nfor FILE in annots/*.xml; do \n mv $FILE $FILE.bkp\n xmlstarlet ed -d '//occluded' $FILE.bkp >$FILE\ndone\n\n" ]
[ 1 ]
[]
[]
[ "linux", "python", "xml", "xml_parsing" ]
stackoverflow_0074462396_linux_python_xml_xml_parsing.txt
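Because the question is also tagged python, a pure-Python alternative to the sed/xmlstarlet answer may be useful: a sketch that parses every file in annots/ with the standard-library xml.etree.ElementTree and drops each <occluded> element. The folder name comes from the question; everything else is an assumption to illustrate the idea, and ElementTree may normalise quoting and whitespace slightly when it rewrites the files.

import glob
import xml.etree.ElementTree as ET

for path in glob.glob("annots/*.xml"):
    tree = ET.parse(path)
    root = tree.getroot()
    # ElementTree elements have no parent pointers, so walk every element
    # and remove <occluded> children from their parent.
    for parent in root.iter():
        for child in list(parent):
            if child.tag == "occluded":
                parent.remove(child)
    tree.write(path)  # overwrites in place; keep a backup copy if needed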
Q: Tkinter window background color does not reflect in the window I have used time.sleep(5) to see if the changes are reflected in the window. The window opens in blue color. After I click on the 'Go' button it changes to yellow. But why does it not change to green when it enters the function 'func2'? import time import tkinter global win def func1(): global win win = tkinter.Tk() win.geometry("300x200") win.configure(bg='blue') time.sleep(5) button_win = tkinter.Button(win,text='Go',command=func2) button_win.pack() print('mainloop') win.mainloop() def func2(): print("func2") global win win.configure(bg = 'green') time.sleep(5) print("in func1") time.sleep(5) print("func3 call") func3() def func3(): global win time.sleep(5) win.configure(bg = 'yellow') func1() OUTPUT in console mainloop (I click on 'Go' button) func2 in func1 func3 call A: Basically what happens is that you don't return to the mainloop where the commands and all the other fancy stuff happens / get executed. Without processing the e.g win.configure(bg='green') your window won't get green. So before you change the value of the background color you should make sure to either update_idletasks() or return to the mainloop. In addition using global in the global namespace makes no sense. Use global only when you want to place a variable in the global namespace, you won't need it to access the global namespace, cause in the module, everything is routed to the global namespace. Also take the note that time.sleep stops the mainloop for the amount of sleeping seconds, you should avoid that, out of the same reason your window won't turn green. The mainloop processes all events and stuff, so your window will become unresponsive. A working code example is: import time import tkinter def func1(): global win win = tkinter.Tk() win.geometry("300x200") win.configure(bg='blue') button_win = tkinter.Button(win,text='Go',command=func2) button_win.pack() win.mainloop() def func2(): win.configure(bg = 'green') win.update_idletasks() time.sleep(5) func3() def func3(): win.configure(bg = 'yellow') win.update_idletasks() func1() A: EDIT:adding full version of code First of all, as Trooper Z mentioned, if you are not using threads, u should avoid time.sleep when using tkinter. Also u should avoid using globals if you don't have to. First of all I removed all time.sleeps as we don't need to. Program already enters inside a loop in mainloop. Also we connected func2 to button. We used lambda to be able to send window as parameter. def func1(): win = tkinter.Tk() win.geometry("300x200") win.configure(bg='blue') button_win = tkinter.Button(win,text='Go',command=lambda:func2(win)) button_win.pack() win.mainloop() in func2 we changed color to green then used after method. What after do is, wait as long as first parameter in milliseconds(5000 = 5 second in this case) than call function def func2(win): win.configure(bg = 'green') win.after(5000,lambda:func3(win)) and this is the func3 def func3(win): win.configure(bg = 'yellow') Full version of code import tkinter def func1(): win = tkinter.Tk() win.geometry("300x200") win.configure(bg='blue') button_win = tkinter.Button(win,text='Go',command=lambda:func2(win)) button_win.pack() win.mainloop() def func2(win): win.configure(bg = 'green') win.after(5000,lambda:func3(win)) def func3(win): win.configure(bg = 'yellow') func1()
Tkinter window background color does not reflect in the window
I have used time.sleep(5) to see if the changes are reflected in the window. The window opens in blue color. After I click on the 'Go' button it changes to yellow. But why does it not change to green when it enters the function 'func2'? import time import tkinter global win def func1(): global win win = tkinter.Tk() win.geometry("300x200") win.configure(bg='blue') time.sleep(5) button_win = tkinter.Button(win,text='Go',command=func2) button_win.pack() print('mainloop') win.mainloop() def func2(): print("func2") global win win.configure(bg = 'green') time.sleep(5) print("in func1") time.sleep(5) print("func3 call") func3() def func3(): global win time.sleep(5) win.configure(bg = 'yellow') func1() OUTPUT in console mainloop (I click on 'Go' button) func2 in func1 func3 call
[ "Basically what happens is that you don't return to the mainloop where the commands and all the other fancy stuff happens / get executed. Without processing the e.g win.configure(bg='green') your window won't get green. So before you change the value of the background color you should make sure to either update_idletasks() or return to the mainloop.\nIn addition using global in the global namespace makes no sense. Use global only when you want to place a variable in the global namespace, you won't need it to access the global namespace, cause in the module, everything is routed to the global namespace.\nAlso take the note that time.sleep stops the mainloop for the amount of sleeping seconds, you should avoid that, out of the same reason your window won't turn green. The mainloop processes all events and stuff, so your window will become unresponsive.\nA working code example is:\nimport time\nimport tkinter\n\ndef func1():\n global win\n win = tkinter.Tk()\n win.geometry(\"300x200\")\n win.configure(bg='blue')\n button_win = tkinter.Button(win,text='Go',command=func2)\n button_win.pack()\n win.mainloop()\n\ndef func2():\n win.configure(bg = 'green')\n win.update_idletasks()\n time.sleep(5)\n func3()\n\ndef func3():\n win.configure(bg = 'yellow')\n win.update_idletasks()\n\nfunc1() \n\n", "EDIT:adding full version of code\nFirst of all, as Trooper Z mentioned, if you are not using threads, u should avoid time.sleep when using tkinter.\nAlso u should avoid using globals if you don't have to.\nFirst of all I removed all time.sleeps as we don't need to. Program already enters inside a loop in mainloop. Also we connected func2 to button. We used lambda to be able to send window as parameter.\ndef func1():\n win = tkinter.Tk()\n win.geometry(\"300x200\")\n win.configure(bg='blue')\n button_win = tkinter.Button(win,text='Go',command=lambda:func2(win))\n button_win.pack()\n win.mainloop()\n\nin func2 we changed color to green then used after method.\nWhat after do is, wait as long as first parameter in milliseconds(5000 = 5 second in this case) than call function\ndef func2(win):\n win.configure(bg = 'green')\n win.after(5000,lambda:func3(win))\n\nand this is the func3\ndef func3(win):\n win.configure(bg = 'yellow')\n\nFull version of code\nimport tkinter\n\ndef func1():\n win = tkinter.Tk()\n win.geometry(\"300x200\")\n win.configure(bg='blue')\n button_win = tkinter.Button(win,text='Go',command=lambda:func2(win))\n button_win.pack()\n win.mainloop()\n\ndef func2(win):\n win.configure(bg = 'green')\n win.after(5000,lambda:func3(win))\n\ndef func3(win):\n win.configure(bg = 'yellow')\n\nfunc1()\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "tkinter", "user_interface" ]
stackoverflow_0074462622_python_tkinter_user_interface.txt
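As a small complement to the after()-based answer above, here is a sketch of how the same mechanism handles repeated work without time.sleep: the callback simply re-registers itself with after(), so the mainloop stays responsive the whole time. The colours and the 1000 ms delay are arbitrary example values.

import tkinter

COLORS = ["blue", "green", "yellow"]

def cycle(win, i=0):
    win.configure(bg=COLORS[i % len(COLORS)])
    # Re-schedule this function instead of blocking with time.sleep.
    win.after(1000, cycle, win, i + 1)

root = tkinter.Tk()
root.geometry("300x200")
cycle(root)
root.mainloop()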
Q: RadioButton not selected tkinter I've been learning tkinter and I ran into this thing I don't understand, what does it mean when a radio button has a '-'? it's like is neither marked nor unmarked, is it not returning anything? I grabbed this code from the internet so anyone can see what I mean: from tkinter import * root = Tk() btn1 = StringVar() def do_something(): val0 = entry1.get() val1 = btn1.get() print("The variable values are " + val1 + " and " + val0) print("The method values are " + btn1.get() + " and " + entry1.get()) rb1 = Radiobutton(root, text="Euro", value="euro", variable=btn1).grid(row=2, column=1, sticky=W) rb2 = Radiobutton(root, text="Dollar", value="dollar", variable=btn1).grid(row=3, column=1, sticky=W) rb3 = Radiobutton(root, text="Yen", value="yen", variable=btn1).grid(row=4, column=1, sticky=W) label1 = Label(root, text="Input Here") label1.grid(row=1, sticky=E) entry1 = Entry(root) entry1.grid(row=1, column=1, sticky=W) go = Button(root, text="Print Selection", fg="white", bg="black", command=do_something) go.grid(row=10, columnspan=3) root.mainloop() This happened to me in a class as well, a group of radio buttons have this hyphen in them until I bound self to the StringVar(), what's happening under the hood? A: Use IntVar() instead of StringVar(). In Python 3.8+ Used f-string format. Here is code: from tkinter import * root = Tk() btn1 = IntVar() def do_something(): val0 = float(entry1.get()) val1 = val0 print(f"The variable values are {val1} and {val0}") print(f"The method values are {val1} and {val0}") rb1 = Radiobutton(root, text="Euro", value="euro", variable=btn1).grid(row=2, column=1, sticky=W) rb2 = Radiobutton(root, text="Dollar", value="dollar", variable=btn1).grid(row=3, column=1, sticky=W) rb3 = Radiobutton(root, text="Yen", value="yen", variable=btn1).grid(row=4, column=1, sticky=W) label1 = Label(root, text="Input Here") label1.grid(row=1, sticky=E) entry1 = Entry(root) entry1.grid(row=1, column=1, sticky=W) go = Button(root, text="Print Selection", fg="white", bg="black", command=do_something) go.grid(row=10, columnspan=3) root.mainloop() Result before: Result after:
RadioButton not selected tkinter
I've been learning tkinter and I ran into this thing I don't understand, what does it mean when a radio button has a '-'? it's like is neither marked nor unmarked, is it not returning anything? I grabbed this code from the internet so anyone can see what I mean: from tkinter import * root = Tk() btn1 = StringVar() def do_something(): val0 = entry1.get() val1 = btn1.get() print("The variable values are " + val1 + " and " + val0) print("The method values are " + btn1.get() + " and " + entry1.get()) rb1 = Radiobutton(root, text="Euro", value="euro", variable=btn1).grid(row=2, column=1, sticky=W) rb2 = Radiobutton(root, text="Dollar", value="dollar", variable=btn1).grid(row=3, column=1, sticky=W) rb3 = Radiobutton(root, text="Yen", value="yen", variable=btn1).grid(row=4, column=1, sticky=W) label1 = Label(root, text="Input Here") label1.grid(row=1, sticky=E) entry1 = Entry(root) entry1.grid(row=1, column=1, sticky=W) go = Button(root, text="Print Selection", fg="white", bg="black", command=do_something) go.grid(row=10, columnspan=3) root.mainloop() This happened to me in a class as well, a group of radio buttons have this hyphen in them until I bound self to the StringVar(), what's happening under the hood?
[ "Use IntVar() instead of StringVar(). In Python 3.8+ Used f-string format.\nHere is code:\nfrom tkinter import *\n\nroot = Tk()\nbtn1 = IntVar()\n\n\ndef do_something():\n val0 = float(entry1.get())\n val1 = val0\n print(f\"The variable values are {val1} and {val0}\")\n print(f\"The method values are {val1} and {val0}\")\n\n\nrb1 = Radiobutton(root, text=\"Euro\", value=\"euro\",\n variable=btn1).grid(row=2, column=1, sticky=W)\n\nrb2 = Radiobutton(root, text=\"Dollar\", value=\"dollar\",\n variable=btn1).grid(row=3, column=1, sticky=W)\n\nrb3 = Radiobutton(root, text=\"Yen\", value=\"yen\",\n variable=btn1).grid(row=4, column=1, sticky=W)\n\nlabel1 = Label(root, text=\"Input Here\")\nlabel1.grid(row=1, sticky=E)\n\nentry1 = Entry(root)\nentry1.grid(row=1, column=1, sticky=W)\n\ngo = Button(root, text=\"Print Selection\", fg=\"white\",\n bg=\"black\", command=do_something)\ngo.grid(row=10, columnspan=3)\n\nroot.mainloop()\n\nResult before:\n\nResult after:\n\n" ]
[ 0 ]
[]
[]
[ "python", "radio_button", "tkinter" ]
stackoverflow_0074454609_python_radio_button_tkinter.txt
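The dash most likely appears because the StringVar starts out holding the empty string, which matches none of the buttons' value options and equals Tk's default tristatevalue, so every radiobutton is drawn in its tri-state look. Switching to IntVar is one workaround; a sketch that keeps the original string values simply gives the variable a default matching one of the buttons:

from tkinter import Tk, StringVar, Radiobutton, W

root = Tk()
btn1 = StringVar(value="euro")  # default selection, so no tri-state dash is shown

for row, (label, value) in enumerate(
        [("Euro", "euro"), ("Dollar", "dollar"), ("Yen", "yen")], start=2):
    Radiobutton(root, text=label, value=value, variable=btn1).grid(
        row=row, column=1, sticky=W)

root.mainloop()

Calling btn1.set("euro") after creating the variable has the same effect.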
Q: tkinter window without the surrounding I was wondering if there is a way to remove the borders (title bar with buttons and the edges on each side) of a tkinter window. Does anyone know how to do that? I couldn't find any solution for this on the web.
tkinter window without the surrounding
I was wondering if there is a way to remove the borders (title bar with buttons and the edges on each side) of a tkinter window. Does anyone know how to do that? I couldn't find any solution for this on the web.
[]
[]
[ "Try to set borderwidth and highlightthickness to 0\n" ]
[ -1 ]
[ "customization", "python", "titlebar", "tkinter", "window" ]
stackoverflow_0074462908_customization_python_titlebar_tkinter_window.txt
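The borderwidth/highlightthickness suggestion above only affects widget borders; the window manager frame (title bar plus edges) is normally removed with overrideredirect. A minimal sketch, with an arbitrary geometry and an extra button because an undecorated window has no close box:

import tkinter as tk

root = tk.Tk()
root.overrideredirect(True)        # ask the window manager not to decorate the window
root.geometry("300x200+200+200")   # size and position, since there is no frame to drag

tk.Button(root, text="Close", command=root.destroy).pack(pady=20)

root.mainloop()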
Q: Heroku Django: Looking for wrong GDAL version on new Heroku-22 Stack UPDATE Upgrading Django to version 3.2 has not fixed the error. I am receiving the same error message, just with different versions django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library (tried "gdal", "GDAL", "gdal3.1.0", "gdal3.0.0", "gdal2.4.0", "gdal2.3.0", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0"). Is GDAL installed? If it is, try setting GDAL_LIBRARY_PATH in your settings. I found this one thread where a discussion was being had I think with a similar issue, however it ended before it was resolved. I added this to my settings.py which was mentioned in that thread: GEOS_LIBRARY_PATH = '/app/.heroku/vendor/lib/libgeos_c.so' if os.environ.get('ENV') == 'HEROKU' else os.getenv('GEOS_LIBRARY_PATH') GDAL_LIBRARY_PATH = '/app/.heroku/vendor/lib/libgdal.so' if os.environ.get('ENV') == 'HEROKU' else os.getenv('GDAL_LIBRARY_PATH') I am receiving an error still though, a different one (maybe getting closer?) OSError: /app/.heroku/python/lib/python3.7/site-packages/django/contrib/gis/gdal: cannot open shared object file: No such file or directory It's interesting because it seems to be looking for python3.7 but right before it's using the correct python3.10 . I checked, there is no trace of a python3.7 specificied anywhere in my project code from django.contrib.gis.gdal.libgdal import GDAL_VERSION, lgdal File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/libgdal.py", line 50, in <module> lgdal = CDLL(lib_path) File "/app/.heroku/python/lib/python3.10/ctypes/__init__.py", line 374, in __init__ self._handle = _dlopen(self._name, mode) ORIGINAL POST I am trying to upgrade from the Heroku-18 to Heroku-22 stack, for my Django web app. In order to use the new stack, I had to upgrade from Python 3.7.2 to Python 3.10.8. From what I've seen on Stack Overflow so far and other sources on the internet, I have already been installing GDAL/GIS the correct way for a Django app on Heroku. Which is to use a buildpack and have it ordered first in the list of buildpacks: remote: Building source: remote: remote: -----> Building on the Heroku-22 stack remote: -----> Using buildpacks: remote: 1. https://github.com/heroku/heroku-geo-buildpack.git remote: 2. heroku/python I expected no issues on the upgrade, but now for some reason it's having trouble finding GDAL. These are the versions that are being installed: remote: -----> Geo Packages (GDAL/GEOS/PROJ) app detected remote: -----> Installing GDAL-2.4.0 remote: -----> Installing GEOS-3.7.2 remote: -----> Installing PROJ-5.2.0 And here is the error I'm getting. Notice that it is not looking for gdal2.4.0, which is what's installed. However, it is looking for gdal which I'd hope would be covered by this install. django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library (tried "gdal", "GDAL", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0", "gdal1.11.0", "gdal1.10.0", "gdal1.9.0"). Is GDAL installed? If it is, try setting GDAL_LIBRARY_PATH in your settings. While the answer to this question states not to set the GDAL_LIBRARY_PATH variable, I think one potential solution would be to set that anyway. However, I'm not sure where GDAL is being installed, as it isn't able to be installed at all without the build failing and an error occuring. If anyone knows a solution, that would be super helpful. Thank you so much!!! 
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ Sources looked at before: Django with GDAL throwing error when deploying on Heroku Missing GDAL on Heroku https://help.heroku.com/D5INLB1A/python-s-build_with_geo_libraries-legacy-feature-is-no-longer-supported Full error trace: remote: -----> Building on the Heroku-22 stack remote: -----> Using buildpacks: remote: 1. https://github.com/heroku/heroku-geo-buildpack.git remote: 2. heroku/python remote: -----> Geo Packages (GDAL/GEOS/PROJ) app detected remote: -----> Installing GDAL-2.4.0 remote: -----> Installing GEOS-3.7.2 remote: -----> Installing PROJ-5.2.0 remote: -----> Python app detected remote: -----> Using Python version specified in runtime.txt remote: -----> Stack has changed from heroku-18 to heroku-22, clearing cache remote: -----> Installing python-3.10.8 remote: -----> Installing pip 22.3.1, setuptools 63.4.3 and wheel 0.37.1 remote: -----> Installing dependencies with Pipenv 2020.11.15 remote: Installing dependencies from Pipfile.lock (9f21b7)... remote: -----> Installing SQLite3 remote: -----> $ python manage.py collectstatic --noinput remote: Traceback (most recent call last): remote: File "/tmp/build_5b1145a7/manage.py", line 15, in <module> remote: execute_from_command_line(sys.argv) remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line remote: utility.execute() remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/core/management/__init__.py", line 357, in execute remote: django.setup() remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/__init__.py", line 24, in setup remote: apps.populate(settings.INSTALLED_APPS) remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/apps/registry.py", line 112, in populate remote: app_config.import_models() remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/apps/config.py", line 198, in import_models remote: self.models_module = import_module(models_module_name) remote: File "/app/.heroku/python/lib/python3.10/importlib/__init__.py", line 126, in import_module remote: return _bootstrap._gcd_import(name[level:], package, level) remote: File "<frozen importlib._bootstrap>", line 1050, in _gcd_impor remote: File "<frozen importlib._bootstrap>", line 1027, in _find_and_load remote: File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked remote: File "<frozen importlib._bootstrap>", line 688, in _load_unlocked remote: File "<frozen importlib._bootstrap_external>", line 883, in exec_module remote: File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed remote: File "/tmp/build_5b1145a7/*****/models.py", line 6, in <module remote: from *****.models import *****, ***** remote: File "/tmp/build_5b1145a7/*****/models.py", line 4, in <module> remote: from .models_functions import (is_url, is_state, attempt_str2bool, is_choice, are_choices, choice_name, display_range) remote: File "/tmp/build_5b1145a7/*****/models_functions.py", line 7, in <module> remote: from django.contrib.gis.geos import Point remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/geos/__init__.py", line 5, in <module> remote: from .collections import ( # NOQA remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/geos/collections.py", line 9, in <module> remote: from django.contrib.gis.geos.geometry import GEOSGeometry, LinearGeometryMixin remote: File 
"/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/geos/geometry.py", line 8, in <module> remote: from django.contrib.gis import gdal remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/__init__.py", line 28, in <module> remote: from django.contrib.gis.gdal.datasource import DataSource remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/datasource.py", line 39, in <module> remote: from django.contrib.gis.gdal.driver import Driver remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/driver.py", line 5, in <module> remote: from django.contrib.gis.gdal.prototypes import ds as vcapi, raster as rcapi remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/prototypes/ds.py", line 9, in <module> remote: from django.contrib.gis.gdal.libgdal import GDAL_VERSION, lgdal remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/libgdal.py", line 40, in <module> remote: raise ImproperlyConfigured( remote: django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library (tried "gdal", "GDAL", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0", "gdal1.11.0", "gdal1.10.0", "gdal1.9.0"). Is GDAL installed? If it is, try setting GDAL_LIBRARY_PATH in your settings. remote: remote: ! Error while running '$ python manage.py collectstatic --noinput' remote: See traceback above for details. remote: remote: You may need to update application code to resolve this error. remote: Or, you can disable collectstatic for this application: remote: remote: $ heroku config:set DISABLE_COLLECTSTATIC=1 remote: remote: https://devcenter.heroku.com/articles/django-assets remote: ! Push rejected, failed to compile Python app. remote: remote: ! Push failed A: What version of Django are you using? I suspect it's quite old. The list of supported versions of GDAL for Django 2.1 matches the list in your error message: 2.2 2.1 2.0 1.11 1.10 1.9 You'll have to upgrade to at least Django 3.0 to get support for GDAL 2.4, but even that version is well beyond its extended support period, which ended April 6, 2021. I suggest you upgrade to at least Django 3.2, the oldest actively supported version. It's an LTS release and should continue to receive support until April, 2024.
Heroku Django: Looking for wrong GDAL version on new Heroku-22 Stack
UPDATE Upgrading Django to version 3.2 has not fixed the error. I am receiving the same error message, just with different versions django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library (tried "gdal", "GDAL", "gdal3.1.0", "gdal3.0.0", "gdal2.4.0", "gdal2.3.0", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0"). Is GDAL installed? If it is, try setting GDAL_LIBRARY_PATH in your settings. I found this one thread where a discussion was being had I think with a similar issue, however it ended before it was resolved. I added this to my settings.py which was mentioned in that thread: GEOS_LIBRARY_PATH = '/app/.heroku/vendor/lib/libgeos_c.so' if os.environ.get('ENV') == 'HEROKU' else os.getenv('GEOS_LIBRARY_PATH') GDAL_LIBRARY_PATH = '/app/.heroku/vendor/lib/libgdal.so' if os.environ.get('ENV') == 'HEROKU' else os.getenv('GDAL_LIBRARY_PATH') I am receiving an error still though, a different one (maybe getting closer?) OSError: /app/.heroku/python/lib/python3.7/site-packages/django/contrib/gis/gdal: cannot open shared object file: No such file or directory It's interesting because it seems to be looking for python3.7 but right before it's using the correct python3.10 . I checked, there is no trace of a python3.7 specificied anywhere in my project code from django.contrib.gis.gdal.libgdal import GDAL_VERSION, lgdal File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/libgdal.py", line 50, in <module> lgdal = CDLL(lib_path) File "/app/.heroku/python/lib/python3.10/ctypes/__init__.py", line 374, in __init__ self._handle = _dlopen(self._name, mode) ORIGINAL POST I am trying to upgrade from the Heroku-18 to Heroku-22 stack, for my Django web app. In order to use the new stack, I had to upgrade from Python 3.7.2 to Python 3.10.8. From what I've seen on Stack Overflow so far and other sources on the internet, I have already been installing GDAL/GIS the correct way for a Django app on Heroku. Which is to use a buildpack and have it ordered first in the list of buildpacks: remote: Building source: remote: remote: -----> Building on the Heroku-22 stack remote: -----> Using buildpacks: remote: 1. https://github.com/heroku/heroku-geo-buildpack.git remote: 2. heroku/python I expected no issues on the upgrade, but now for some reason it's having trouble finding GDAL. These are the versions that are being installed: remote: -----> Geo Packages (GDAL/GEOS/PROJ) app detected remote: -----> Installing GDAL-2.4.0 remote: -----> Installing GEOS-3.7.2 remote: -----> Installing PROJ-5.2.0 And here is the error I'm getting. Notice that it is not looking for gdal2.4.0, which is what's installed. However, it is looking for gdal which I'd hope would be covered by this install. django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library (tried "gdal", "GDAL", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0", "gdal1.11.0", "gdal1.10.0", "gdal1.9.0"). Is GDAL installed? If it is, try setting GDAL_LIBRARY_PATH in your settings. While the answer to this question states not to set the GDAL_LIBRARY_PATH variable, I think one potential solution would be to set that anyway. However, I'm not sure where GDAL is being installed, as it isn't able to be installed at all without the build failing and an error occuring. If anyone knows a solution, that would be super helpful. Thank you so much!!! 
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ Sources looked at before: Django with GDAL throwing error when deploying on Heroku Missing GDAL on Heroku https://help.heroku.com/D5INLB1A/python-s-build_with_geo_libraries-legacy-feature-is-no-longer-supported Full error trace: remote: -----> Building on the Heroku-22 stack remote: -----> Using buildpacks: remote: 1. https://github.com/heroku/heroku-geo-buildpack.git remote: 2. heroku/python remote: -----> Geo Packages (GDAL/GEOS/PROJ) app detected remote: -----> Installing GDAL-2.4.0 remote: -----> Installing GEOS-3.7.2 remote: -----> Installing PROJ-5.2.0 remote: -----> Python app detected remote: -----> Using Python version specified in runtime.txt remote: -----> Stack has changed from heroku-18 to heroku-22, clearing cache remote: -----> Installing python-3.10.8 remote: -----> Installing pip 22.3.1, setuptools 63.4.3 and wheel 0.37.1 remote: -----> Installing dependencies with Pipenv 2020.11.15 remote: Installing dependencies from Pipfile.lock (9f21b7)... remote: -----> Installing SQLite3 remote: -----> $ python manage.py collectstatic --noinput remote: Traceback (most recent call last): remote: File "/tmp/build_5b1145a7/manage.py", line 15, in <module> remote: execute_from_command_line(sys.argv) remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line remote: utility.execute() remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/core/management/__init__.py", line 357, in execute remote: django.setup() remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/__init__.py", line 24, in setup remote: apps.populate(settings.INSTALLED_APPS) remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/apps/registry.py", line 112, in populate remote: app_config.import_models() remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/apps/config.py", line 198, in import_models remote: self.models_module = import_module(models_module_name) remote: File "/app/.heroku/python/lib/python3.10/importlib/__init__.py", line 126, in import_module remote: return _bootstrap._gcd_import(name[level:], package, level) remote: File "<frozen importlib._bootstrap>", line 1050, in _gcd_impor remote: File "<frozen importlib._bootstrap>", line 1027, in _find_and_load remote: File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked remote: File "<frozen importlib._bootstrap>", line 688, in _load_unlocked remote: File "<frozen importlib._bootstrap_external>", line 883, in exec_module remote: File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed remote: File "/tmp/build_5b1145a7/*****/models.py", line 6, in <module remote: from *****.models import *****, ***** remote: File "/tmp/build_5b1145a7/*****/models.py", line 4, in <module> remote: from .models_functions import (is_url, is_state, attempt_str2bool, is_choice, are_choices, choice_name, display_range) remote: File "/tmp/build_5b1145a7/*****/models_functions.py", line 7, in <module> remote: from django.contrib.gis.geos import Point remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/geos/__init__.py", line 5, in <module> remote: from .collections import ( # NOQA remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/geos/collections.py", line 9, in <module> remote: from django.contrib.gis.geos.geometry import GEOSGeometry, LinearGeometryMixin remote: File 
"/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/geos/geometry.py", line 8, in <module> remote: from django.contrib.gis import gdal remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/__init__.py", line 28, in <module> remote: from django.contrib.gis.gdal.datasource import DataSource remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/datasource.py", line 39, in <module> remote: from django.contrib.gis.gdal.driver import Driver remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/driver.py", line 5, in <module> remote: from django.contrib.gis.gdal.prototypes import ds as vcapi, raster as rcapi remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/prototypes/ds.py", line 9, in <module> remote: from django.contrib.gis.gdal.libgdal import GDAL_VERSION, lgdal remote: File "/app/.heroku/python/lib/python3.10/site-packages/django/contrib/gis/gdal/libgdal.py", line 40, in <module> remote: raise ImproperlyConfigured( remote: django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library (tried "gdal", "GDAL", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0", "gdal1.11.0", "gdal1.10.0", "gdal1.9.0"). Is GDAL installed? If it is, try setting GDAL_LIBRARY_PATH in your settings. remote: remote: ! Error while running '$ python manage.py collectstatic --noinput' remote: See traceback above for details. remote: remote: You may need to update application code to resolve this error. remote: Or, you can disable collectstatic for this application: remote: remote: $ heroku config:set DISABLE_COLLECTSTATIC=1 remote: remote: https://devcenter.heroku.com/articles/django-assets remote: ! Push rejected, failed to compile Python app. remote: remote: ! Push failed
[ "What version of Django are you using? I suspect it's quite old.\nThe list of supported versions of GDAL for Django 2.1 matches the list in your error message:\n\n2.2\n2.1\n2.0\n1.11\n1.10\n1.9\n\nYou'll have to upgrade to at least Django 3.0 to get support for GDAL 2.4, but even that version is well beyond its extended support period, which ended April 6, 2021.\nI suggest you upgrade to at least Django 3.2, the oldest actively supported version. It's an LTS release and should continue to receive support until April, 2024.\n" ]
[ 1 ]
[]
[]
[ "django", "gdal", "gis", "heroku", "python" ]
stackoverflow_0074453238_django_gdal_gis_heroku_python.txt
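One way to take the guesswork out of GDAL_LIBRARY_PATH is to probe a few candidate locations at startup and only set the path when a file actually exists there, letting Django fall back to its normal search otherwise. The candidate paths below are assumptions based on the buildpack layouts mentioned in the linked threads, not guaranteed locations; verify the real paths on a dyno with heroku run bash and find / -name 'libgdal*' 2>/dev/null before relying on them.

# settings.py (sketch; the hard-coded paths are guesses, check them on your own dyno)
import os

def _first_existing(*candidates):
    for path in candidates:
        if path and os.path.exists(path):
            return path
    return None  # None lets Django fall back to its default library search

GDAL_LIBRARY_PATH = _first_existing(
    os.getenv("GDAL_LIBRARY_PATH"),
    "/app/.heroku-geo-buildpack/vendor/lib/libgdal.so",
    "/app/.heroku/vendor/lib/libgdal.so",
)
GEOS_LIBRARY_PATH = _first_existing(
    os.getenv("GEOS_LIBRARY_PATH"),
    "/app/.heroku-geo-buildpack/vendor/lib/libgeos_c.so",
    "/app/.heroku/vendor/lib/libgeos_c.so",
)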
Q: Python Selenium driver.find_element().text returns empty string, but text is visible in the driver.page_source I'm trying to scrape some titles of the videos and to do so I'm using Selenium, but I've encountered a problem. driver.find_element().text returns empty string, but title is for sure located in given XPATH. Here is the fragment of the page source returned by driver.page_source: <div class="title"><a href="/f/4n3x7e31hpwxm8"target="_blank">Big.Sky.S03E01.ITA.WEBDL.1080p</a></div> To find the title I am trying to use: hoverable = driver.find_element(By.XPATH, '//*[@id="videojs"]/div[1]') ActionChains(driver).move_to_element(hoverable).perform() wait = WebDriverWait(driver, 20) title_from_url = wait.until(EC.visibility_of_element_located((By.XPATH, '/html/body/div[1]/a'))).text title_from_url = driver.find_element( By.XPATH, '/html/body/div[1]/a' ).text.casefold() From what I've read it could be caused by the fact that the page might not be fully loaded (I wasn't using any wait condition here). After that I've tried to add a wait condition and even time.sleep(), but it didn't change anything. <mini question: how would proper wait staitment look like here?> Edit: I think the problem is caused, because title is showing up only when mouse is in the player area. I think some mouse movement will be needed here, but I have tried to move mouse into the player area and for some time it is working but after a while there is a moment when title will disappear too fast. Is there a way to use find_element() while also moving mouse? Any help will be appreciated. Best regards, Ed. Example site: https://mixdrop.to/e/4n3x7e31hpwxm8. A: You have to wait for element to be completely loaded before extracting it text content. WebDriverWait expected_conditions explicit waits should be used for that. This should wait in case the element is visible on the page and the locator is correct: from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC wait = WebDriverWait(driver, 20) title_from_url = wait.until(EC.visibility_of_element_located((By.XPATH, '//div[contains(@class, "title")]/a'))).text UPD In case the element content is dynamically changes and we need to hover over that element to make the desired text to appear there, we can simulate the mouse hover action with the help of ActionChains module. The tool-tip etc. will not disappear until you perform some click etc. on that page. So, just performing a background action of driver.find_element() will not affect that text. UPD2 The title element is not visible. It becomes visible only by hovering over the player. 
Here I'm performing such hovering and then getting the title text: from selenium import webdriver from selenium.webdriver import ActionChains from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 5) actions = ActionChains(driver) url = "https://mixdrop.to/e/4n3x7e31hpwxm8" driver.get(url) player = wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'player'))) actions.move_to_element(player).perform() title = wait.until(EC.presence_of_element_located((By.XPATH, '//div[contains(@class, "title")]/a'))) print(title.text) The output is: Big.Sky.S03E01.ITA.WEBDL.1080p A: If you suspect its because of sync issue. You can use selenium waits.Let it be implicit of explicit. Implicit: objdriver.implicitely_wait(float) Explicit: from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC objwait=WebDriverWait(driver,float,poll_frequency=float,ignored_exception=float) objelement=objwait.until(EC.visibility_of_element_located((By.XPATH,"Your XPATH")))
Python Selenium driver.find_element().text returns empty string, but text is visible in the driver.page_source
I'm trying to scrape some titles of the videos and to do so I'm using Selenium, but I've encountered a problem. driver.find_element().text returns empty string, but title is for sure located in given XPATH. Here is the fragment of the page source returned by driver.page_source: <div class="title"><a href="/f/4n3x7e31hpwxm8"target="_blank">Big.Sky.S03E01.ITA.WEBDL.1080p</a></div> To find the title I am trying to use: hoverable = driver.find_element(By.XPATH, '//*[@id="videojs"]/div[1]') ActionChains(driver).move_to_element(hoverable).perform() wait = WebDriverWait(driver, 20) title_from_url = wait.until(EC.visibility_of_element_located((By.XPATH, '/html/body/div[1]/a'))).text title_from_url = driver.find_element( By.XPATH, '/html/body/div[1]/a' ).text.casefold() From what I've read it could be caused by the fact that the page might not be fully loaded (I wasn't using any wait condition here). After that I've tried to add a wait condition and even time.sleep(), but it didn't change anything. <mini question: how would proper wait staitment look like here?> Edit: I think the problem is caused, because title is showing up only when mouse is in the player area. I think some mouse movement will be needed here, but I have tried to move mouse into the player area and for some time it is working but after a while there is a moment when title will disappear too fast. Is there a way to use find_element() while also moving mouse? Any help will be appreciated. Best regards, Ed. Example site: https://mixdrop.to/e/4n3x7e31hpwxm8.
[ "You have to wait for element to be completely loaded before extracting it text content. WebDriverWait expected_conditions explicit waits should be used for that.\nThis should wait in case the element is visible on the page and the locator is correct:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\nwait = WebDriverWait(driver, 20)\n\ntitle_from_url = wait.until(EC.visibility_of_element_located((By.XPATH, '//div[contains(@class, \"title\")]/a'))).text\n\nUPD\nIn case the element content is dynamically changes and we need to hover over that element to make the desired text to appear there, we can simulate the mouse hover action with the help of ActionChains module.\nThe tool-tip etc. will not disappear until you perform some click etc. on that page. So, just performing a background action of driver.find_element() will not affect that text.\nUPD2\nThe title element is not visible. It becomes visible only by hovering over the player.\nHere I'm performing such hovering and then getting the title text:\nfrom selenium import webdriver\nfrom selenium.webdriver import ActionChains\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 5)\nactions = ActionChains(driver)\n\nurl = \"https://mixdrop.to/e/4n3x7e31hpwxm8\"\ndriver.get(url)\n\nplayer = wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'player')))\nactions.move_to_element(player).perform()\ntitle = wait.until(EC.presence_of_element_located((By.XPATH, '//div[contains(@class, \"title\")]/a')))\nprint(title.text)\n\nThe output is:\nBig.Sky.S03E01.ITA.WEBDL.1080p\n\n", "If you suspect its because of sync issue. You can use selenium waits.Let it be implicit of explicit.\nImplicit:\nobjdriver.implicitely_wait(float)\nExplicit:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\nobjwait=WebDriverWait(driver,float,poll_frequency=float,ignored_exception=float)\nobjelement=objwait.until(EC.visibility_of_element_located((By.XPATH,\"Your XPATH\")))\n" ]
[ 1, 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver" ]
stackoverflow_0074462555_python_selenium_selenium_webdriver.txt
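One more detail worth knowing here: Selenium's .text only returns text for elements it considers displayed, which is why the hover is needed above. An alternative that skips the hover is to read the raw DOM text through the textContent attribute; it generally works for hidden elements, provided the text really is present in the DOM. The snippet assumes driver has already been created and pointed at the page as in the answer above.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 20)
title_el = wait.until(
    EC.presence_of_element_located((By.XPATH, '//div[contains(@class, "title")]/a')))
# .text is empty for invisible elements; textContent returns the raw DOM text.
title = title_el.get_attribute("textContent").strip().casefold()
print(title)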
Q: Return dataframe variable on multiprocessing I want to import (pd.read_pickle) 4 files at the same time but later in the code I can't use the variables. How can I get the returned dataframes of the functions on multiprocessing. def a(): df1 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\ventas.pkl") return df1 def b(): df2 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\altas.pkl") return df2 def c(): df3 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\vali.pkl") return df3 def d(): df4 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\cuotas.pkl") return df4 p1 = multiprocessing.Process(target = a) p2 = multiprocessing.Process(target = b) p3 = multiprocessing.Process(target = c) p4 = multiprocessing.Process(target = d) if __name__=='__main__': p1.start() p2.start() p3.start() p4.start() p1.join() p2.join() p3.join() p4.join() print(df1) A: Judging by your file names, it appears you are running under Windows. If so, any code that creates child processes must be invoked from a if __name__ == '__main__': block. Whether you can save any time by using multiprocessing is questionable. The worker functions a, b, c and d are possibly too trivial and doing concurrent disk I/O will be counter-productive unless you have a solid-state drive with a high bandwidth. Using a multiprocessing.Pool per Michael Butscher's comment would like the following. import pandas as pd def a(): df1 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\ventas.pkl") return df1 def b(): df2 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\altas.pkl") return df2 def c(): df3 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\vali.pkl") return df3 def d(): df4 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\cuotas.pkl") return df4 if __name__=='__main__': from multiprocessing import Pool with Pool(4) as pool: async_results = [pool.apply_async(worker) for worker in (a, b, c, d)] results = [async_result.get() for async_result in async_results] # Unpack: df1, df2, df3, df4 = results print(df1)
Return dataframe variable on multiprocessing
I want to import (pd.read_pickle) 4 files at the same time but later in the code I can't use the variables. How can I get the returned dataframes of the functions on multiprocessing. def a(): df1 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\ventas.pkl") return df1 def b(): df2 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\altas.pkl") return df2 def c(): df3 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\vali.pkl") return df3 def d(): df4 = pd.read_pickle(r"C:\xampp\htdocs\bi\cache\cuotas.pkl") return df4 p1 = multiprocessing.Process(target = a) p2 = multiprocessing.Process(target = b) p3 = multiprocessing.Process(target = c) p4 = multiprocessing.Process(target = d) if __name__=='__main__': p1.start() p2.start() p3.start() p4.start() p1.join() p2.join() p3.join() p4.join() print(df1)
[ "Judging by your file names, it appears you are running under Windows. If so, any code that creates child processes must be invoked from a if __name__ == '__main__': block.\nWhether you can save any time by using multiprocessing is questionable. The worker functions a, b, c and d are possibly too trivial and doing concurrent disk I/O will be counter-productive unless you have a solid-state drive with a high bandwidth. Using a multiprocessing.Pool per Michael Butscher's comment would like the following.\nimport pandas as pd\n\ndef a():\n df1 = pd.read_pickle(r\"C:\\xampp\\htdocs\\bi\\cache\\ventas.pkl\")\n return df1\n\ndef b():\n df2 = pd.read_pickle(r\"C:\\xampp\\htdocs\\bi\\cache\\altas.pkl\")\n return df2\n\ndef c():\n df3 = pd.read_pickle(r\"C:\\xampp\\htdocs\\bi\\cache\\vali.pkl\")\n return df3\n\ndef d():\n df4 = pd.read_pickle(r\"C:\\xampp\\htdocs\\bi\\cache\\cuotas.pkl\")\n return df4\n\nif __name__=='__main__':\n from multiprocessing import Pool\n \n with Pool(4) as pool:\n async_results = [pool.apply_async(worker) for worker in (a, b, c, d)]\n results = [async_result.get() for async_result in async_results]\n # Unpack:\n df1, df2, df3, df4 = results\n print(df1)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "multiprocessing", "multithreading", "pandas", "python" ]
stackoverflow_0074421948_dataframe_multiprocessing_multithreading_pandas_python.txt
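An equivalent sketch with concurrent.futures, which avoids writing four nearly identical worker functions: the same pd.read_pickle is simply mapped over the four paths from the question. As noted above, whether separate processes actually save time depends on disk bandwidth and unpickling cost.

import pandas as pd
from concurrent.futures import ProcessPoolExecutor

PATHS = [
    r"C:\xampp\htdocs\bi\cache\ventas.pkl",
    r"C:\xampp\htdocs\bi\cache\altas.pkl",
    r"C:\xampp\htdocs\bi\cache\vali.pkl",
    r"C:\xampp\htdocs\bi\cache\cuotas.pkl",
]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        df1, df2, df3, df4 = pool.map(pd.read_pickle, PATHS)
    print(df1)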
Q: convert "tensorflow.python.framework.ops.EagerTensor" to tensorflow.Tensor or torch.Tensor? This my function that SHOULD convert an img or jpeg file to a tensor, so that I can then feed it to my AI but it returns a "tensorflow.python.framework.ops.EagerTensor" and I can't figure out how to convert it to a native f or torch tensor. def imgprocessing(path): test_img = image.load_img(path, target_size=(28, 28)) test_img_array = image.img_to_array(test_img) test_img_array = test_img_array / 255.0 # normalize test_img_array = tf.image.rgb_to_grayscale(test_img_array) # will return shape (28, 28, 1) test_img_array = tf.squeeze(test_img_array, axis = -1) # shape is (28, 28) t = tf.expand_dims(test_img_array, axis = 0) # shape: (1, 28, 28) t = tf.convert_to_tensor(t, dtype=tf.float32) return t Does anybody know how to convert this or just how to turn a Image to a Tensor with dimensions of 1,28,28? Help would really be appreciated A: Q: I can't figure out how to convert it to a native f or torch tensor. Error: AttributeError: 'Tensor' object has no attribute 'numpy' You can do it by this step but you may not convert from array to tf.constant within the definition ( tensorflow.python.framework.ops.EagerTensor ). You cannot convert to NumPy when using TF1 alternateuse the "skimage.transform" and "Numpy" for TF1, it is also Dtype compatibility when using float64. The problem is at " image = tf.image.resize(image, [32,32], method='nearest') " it is image cannot convert to tf.constant(). image = plt.imread( file ) image = tf.keras.utils.img_to_array( image ) image = tf.image.resize(image, [32,32], method='nearest') image = tf.image.rgb_to_grayscale( image ) Sample ( Between process ): You cannot access extending in the function "tf.image.resize" and " tf.image.rgb_to_grayscale ", which are supposed to use for the work process. { image.numpy() | image } import tensorflow as tf from skimage.transform import resize """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Functions """"""""""""""""""""""""""""""""""""""""""""""""""""""""" @tf.function def f( ): image = plt.imread( "F:\\datasets\\downloads\\dark\\train\\01.jpg" ) image = tf.keras.utils.img_to_array( image ) image = tf.image.resize(image, [32,32], method='nearest') image = tf.image.rgb_to_grayscale( image ) return image """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Tasks """"""""""""""""""""""""""""""""""""""""""""""""""""""""" print( f(c, d) ) Output: tf.constant() ... [[ 23.122398] [ 19.688301] [ 21.9161 ] ... [ 15.7597 ] [ 44.8233 ] [ 42.111702]]], shape=(32, 32, 1), dtype=float32) Sample ( Load Image ): This way you had image as Numpy, I always using when using TF1 but TF2 you can use tf.constant() """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Functions """"""""""""""""""""""""""""""""""""""""""""""""""""""""" @tf.function def f( ): image = plt.imread( "F:\\datasets\\downloads\\dark\\train\\01.jpg" ) image = resize(image, (32, 32)) image = np.reshape( image, (1, 32, 32, 3) ) return image Output: Image to Numpy in function call. ... [[0.27418377 0.30133097 0.30310639] [0.10582442 0.12432269 0.12456823] [0.07306318 0.08882116 0.09093407] ... [0.14883224 0.09423414 0.07170916] [0.19801652 0.11498221 0.07868552] [0.25829258 0.16194494 0.11493717]]]], shape=(1, 32, 32, 3), dtype=float64)
convert "tensorflow.python.framework.ops.EagerTensor" to tensorflow.Tensor or torch.Tensor?
This my function that SHOULD convert an img or jpeg file to a tensor, so that I can then feed it to my AI but it returns a "tensorflow.python.framework.ops.EagerTensor" and I can't figure out how to convert it to a native f or torch tensor. def imgprocessing(path): test_img = image.load_img(path, target_size=(28, 28)) test_img_array = image.img_to_array(test_img) test_img_array = test_img_array / 255.0 # normalize test_img_array = tf.image.rgb_to_grayscale(test_img_array) # will return shape (28, 28, 1) test_img_array = tf.squeeze(test_img_array, axis = -1) # shape is (28, 28) t = tf.expand_dims(test_img_array, axis = 0) # shape: (1, 28, 28) t = tf.convert_to_tensor(t, dtype=tf.float32) return t Does anybody know how to convert this or just how to turn a Image to a Tensor with dimensions of 1,28,28? Help would really be appreciated
[ "Q: I can't figure out how to convert it to a native f or torch tensor.\nError: AttributeError: 'Tensor' object has no attribute 'numpy'\nYou can do it by this step but you may not convert from array to tf.constant within the definition ( tensorflow.python.framework.ops.EagerTensor ). You cannot convert to NumPy when using TF1 alternateuse the \"skimage.transform\" and \"Numpy\" for TF1, it is also Dtype compatibility when using float64. The problem is at \" image = tf.image.resize(image, [32,32], method='nearest') \" it is image cannot convert to tf.constant().\nimage = plt.imread( file )\nimage = tf.keras.utils.img_to_array( image )\nimage = tf.image.resize(image, [32,32], method='nearest')\nimage = tf.image.rgb_to_grayscale( image )\n\n\nSample ( Between process ): You cannot access extending in the function \"tf.image.resize\" and \" tf.image.rgb_to_grayscale \", which are supposed to use for the work process. { image.numpy() | image }\n\nimport tensorflow as tf\nfrom skimage.transform import resize\n \n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Functions\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n@tf.function\ndef f( ):\n image = plt.imread( \"F:\\\\datasets\\\\downloads\\\\dark\\\\train\\\\01.jpg\" )\n image = tf.keras.utils.img_to_array( image )\n image = tf.image.resize(image, [32,32], method='nearest')\n image = tf.image.rgb_to_grayscale( image )\n return image\n \n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Tasks\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nprint( f(c, d) )\n\n\nOutput: tf.constant()\n\n ...\n [[ 23.122398]\n [ 19.688301]\n [ 21.9161 ]\n ...\n [ 15.7597 ]\n [ 44.8233 ]\n [ 42.111702]]], shape=(32, 32, 1), dtype=float32)\n\n\nSample ( Load Image ): This way you had image as Numpy, I always using when using TF1 but TF2 you can use tf.constant()\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Functions\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n@tf.function\ndef f( ):\n image = plt.imread( \"F:\\\\datasets\\\\downloads\\\\dark\\\\train\\\\01.jpg\" )\n image = resize(image, (32, 32))\n image = np.reshape( image, (1, 32, 32, 3) )\n return image\n\n\nOutput: Image to Numpy in function call.\n\n ...\n [[0.27418377 0.30133097 0.30310639]\n [0.10582442 0.12432269 0.12456823]\n [0.07306318 0.08882116 0.09093407]\n ...\n [0.14883224 0.09423414 0.07170916]\n [0.19801652 0.11498221 0.07868552]\n [0.25829258 0.16194494 0.11493717]]]], shape=(1, 32, 32, 3), dtype=float64)\n\n" ]
[ 0 ]
[]
[]
[ "python", "pytorch", "tensorflow" ]
stackoverflow_0074462224_python_pytorch_tensorflow.txt
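If the end goal really is a torch.Tensor, the usual route in TF2 eager mode is to go through NumPy: an EagerTensor exposes .numpy(), and torch.from_numpy wraps the resulting array without copying. A small self-contained sketch, using a random tensor as a stand-in for the value returned by the question's imgprocessing function:

import tensorflow as tf
import torch

eager = tf.random.uniform((1, 28, 28), dtype=tf.float32)  # stand-in for imgprocessing(path)

array = eager.numpy()              # EagerTensor -> NumPy; only works in eager mode
torch_t = torch.from_numpy(array)  # NumPy -> torch.Tensor, shares the same memory

print(type(torch_t), torch_t.shape)  # torch.Size([1, 28, 28])

Note that .numpy() is not available on symbolic tensors inside a @tf.function, which is where the AttributeError quoted in the answer comes from.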
Q: Openpyxl to delete Table row in Excel I'm having a bad time figuring out how to delete an entire empty row when the row is part of an Excel Table. So I tried the following code, but it keeps the format of the table and it doesn't work, e.g. for functions like COUNTIF, because it still counts those blank rows.
from openpyxl import load_workbook as lw

wb = lw(file)
ws = wb['Sheet']

endrow = 10 #target row from which I will delete

#delete entire rows from endrow to the end of the sheet
for i in range(endrow, ws.max_row + 1):
    ws.delete_rows(i)

I want those rows to be absolutely blank, like in a default file: not part of a table and without formatting. Regards.
Openpyxl to delete Table row in Excel
I'm having a bad time figuring out how to delete an entire empty row when the row is part of an Excel Table. So I tried the following code, but it keeps the format of the table and it doesn't work, e.g. for functions like COUNTIF, because it still counts those blank rows.
from openpyxl import load_workbook as lw

wb = lw(file)
ws = wb['Sheet']

endrow = 10 #target row from which I will delete

#delete entire rows from endrow to the end of the sheet
for i in range(endrow, ws.max_row + 1):
    ws.delete_rows(i)

I want those rows to be absolutely blank, like in a default file: not part of a table and without formatting. Regards.
[ "I used this solution:\nimport xlwings as xw\nfrom xlwings.constants import DeleteShiftDirection\n\napp = xw.App(visible=False)\nwb = app.books.open('PathtoFile')\nsht = wb.sheets['SheetName']\n\nendrow = XX #number of target row from you want to delete below\n\n# Delete after endrow till row 10,000\nsht.range(str(endrow)+':10000').api.Delete(DeleteShiftDirection.xlShiftUp) \n\nwb.save()\napp.kill() \n\n" ]
[ 0 ]
[]
[]
[ "excel", "openpyxl", "python" ]
stackoverflow_0074214146_excel_openpyxl_python.txt
Q: How to loop through indexes from lists nested in a dictionary? I created the following dictionary below (mean_task_dict). This dictionary includes three keys associated with three lists. Each lists includes 48 numeric values. mean_task_dict = { "Interoception": task_mean_intero, "Exteroception": task_mean_extero, "Cognitive": task_mean_cognit, } I would like to plot the values contained in each list inside a scatterplot where the x-axis comprises three categories (ROI_positions = np.array([1, 2, 3])). Each of the respective lists in the dictionary has to be linked to one of the categories from ROI_positions above. Here is my current attempt or code for this task: import numpy as np import matplotlib.pyplot as plt task_mean_intero = [-0.28282956438352846, -0.33826908282117457, -0.23669673649758388] task_mean_extero = [-0.3306686353702893, -0.4675910056474869, -0.2708033871055369] task_mean_cognit = [-0.3053766849270014, -0.41698707094527254, -0.35655464189810543] mean_task_dict = { "Interoception": task_mean_intero, "Exteroception": task_mean_extero, "Cognitive": task_mean_cognit, } for value in mean_task_dict.values(): ROI_positions = np.array([1, 2, 3]) data_ROIs = np.array([ mean_task_dict["Interoception"][1], mean_task_dict["Exteroception"][1], mean_task_dict["Cognitive"][1] ]) plt.scatter(ROI_positions, data_ROIs) My problem is that I am only able to compute and plot the data for one value by paradigmatically selecting the second index value of each list [1]. How can I loop through all values inside the three lists nested in the dictionary, so that I can plot them all together in one plot? A: Do you want something like this? ROI_positions = np.array([1, 2, 3]) for i in range(len(mean_task_dict)): data_ROIs = np.array([ mean_task_dict["Interoception"][i], mean_task_dict["Exteroception"][i], mean_task_dict["Cognitive"][i] ]) plt.scatter(ROI_positions, data_ROIs) plt.show() To be independent of dict size, you could do this, ROI_positions = np.arange(len(mean_task_dict)) data_ROIs = np.array(list(zip(*mean_task_dict.values()))) for i in range(len(mean_task_dict)): plt.scatter(ROI_positions, data_ROIs[i]) plt.show() A: Use a nested loop that iterates over each element of the dictionary and then iterates over each list item of that dictionary element. for value in mean_task_dict.values(): for item in value: #do stuff here
How to loop through indexes from lists nested in a dictionary?
I created the following dictionary below (mean_task_dict). This dictionary includes three keys associated with three lists. Each lists includes 48 numeric values. mean_task_dict = { "Interoception": task_mean_intero, "Exteroception": task_mean_extero, "Cognitive": task_mean_cognit, } I would like to plot the values contained in each list inside a scatterplot where the x-axis comprises three categories (ROI_positions = np.array([1, 2, 3])). Each of the respective lists in the dictionary has to be linked to one of the categories from ROI_positions above. Here is my current attempt or code for this task: import numpy as np import matplotlib.pyplot as plt task_mean_intero = [-0.28282956438352846, -0.33826908282117457, -0.23669673649758388] task_mean_extero = [-0.3306686353702893, -0.4675910056474869, -0.2708033871055369] task_mean_cognit = [-0.3053766849270014, -0.41698707094527254, -0.35655464189810543] mean_task_dict = { "Interoception": task_mean_intero, "Exteroception": task_mean_extero, "Cognitive": task_mean_cognit, } for value in mean_task_dict.values(): ROI_positions = np.array([1, 2, 3]) data_ROIs = np.array([ mean_task_dict["Interoception"][1], mean_task_dict["Exteroception"][1], mean_task_dict["Cognitive"][1] ]) plt.scatter(ROI_positions, data_ROIs) My problem is that I am only able to compute and plot the data for one value by paradigmatically selecting the second index value of each list [1]. How can I loop through all values inside the three lists nested in the dictionary, so that I can plot them all together in one plot?
[ "Do you want something like this?\nROI_positions = np.array([1, 2, 3])\nfor i in range(len(mean_task_dict)):\n data_ROIs = np.array([\n mean_task_dict[\"Interoception\"][i],\n mean_task_dict[\"Exteroception\"][i],\n mean_task_dict[\"Cognitive\"][i]\n ])\n plt.scatter(ROI_positions, data_ROIs)\nplt.show()\n\n\nTo be independent of dict size, you could do this,\nROI_positions = np.arange(len(mean_task_dict))\ndata_ROIs = np.array(list(zip(*mean_task_dict.values())))\nfor i in range(len(mean_task_dict)):\n plt.scatter(ROI_positions, data_ROIs[i])\nplt.show()\n\n", "Use a nested loop that iterates over each element of the dictionary and then iterates over each list item of that dictionary element.\nfor value in mean_task_dict.values():\n for item in value:\n #do stuff here\n\n" ]
[ 2, 0 ]
[]
[]
[ "dictionary", "for_loop", "python" ]
stackoverflow_0074462724_dictionary_for_loop_python.txt
Q: Is there a way to separate a dict into separate data frames with unique names? I'm new to python, so please forgive me if this is a stupid question. I'm trying to separate a bigger dataset into smaller data frames based on a unique row value (station ID). I've done the following, which made a dict and did separate them into smaller data frames, but within this dict? dfs = dict(list(df.groupby('Station'))) when I open it in Jupyter it only shows the station ID next to a number series (0-20). is there a way to name these smaller data frames to the station ID? I'm used to R/tidyverse so there has to be a way to do this easily? Thank you! S tried the following too: dct = {} for idx, v in enumerate(df['Station'].unique()): dct[f'df{idx}'] = df.loc[df['Station'] == v] print(dct) but just names them df1, df2, df3, etc. A: If you need a dict specifically, you can use dfs = {name: group for name, group in df.groupby('Station')} but that creates copies of data; try iterating over the groups (and names) directly with for name, group in df.groupby('Station'): # logic
Is there a way to separate a dict into separate data frames with unique names?
I'm new to Python, so please forgive me if this is a stupid question. I'm trying to separate a bigger dataset into smaller data frames based on a unique row value (station ID). I've done the following, which made a dict and did separate them into smaller data frames, but only inside this dict: dfs = dict(list(df.groupby('Station'))) When I open it in Jupyter it only shows the station ID next to a number series (0-20). Is there a way to name these smaller data frames after the station ID? I'm used to R/tidyverse, so there has to be a way to do this easily? Thank you! I also tried the following: dct = {} for idx, v in enumerate(df['Station'].unique()): dct[f'df{idx}'] = df.loc[df['Station'] == v] print(dct) but it just names them df1, df2, df3, etc.
[ "If you need a dict specifically, you can use\ndfs = {name: group for name, group in df.groupby('Station')}\n\nbut that creates copies of data; try iterating over the groups (and names) directly with\nfor name, group in df.groupby('Station'):\n # logic\n\n" ]
[ 0 ]
[]
[]
[ "database", "organization", "pandas", "python" ]
stackoverflow_0074463056_database_organization_pandas_python.txt
Q: Python/matplotlib : getting rid of matplotlib.mpl warning I am using matplotlib using python 3.4. When I start my program, I have the following warning message: C:\Python34-32bits\lib\site-packages\matplotlib\cbook.py:123: MatplotlibDeprecationWarning: The matplotlib.mpl module was deprecated in version 1.3. Use import matplotlib as mpl instead. warnings.warn(message, mplDeprecation, stacklevel=1) As far as I know I do not use mpl, and all my imports concerning matplotlib are: import matplotlib.pyplot as plt import matplotlib.animation as animation Anything I should do ? A: You can suppress that particular warning, which is probably the preferred way: import warnings import matplotlib.cbook warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation) A: you can temporarily suppress a warning, when importing import warnings def fxn(): warnings.warn("deprecated", DeprecationWarning) with warnings.catch_warnings(): warnings.simplefilter("ignore") fxn() A: I was able to suppress that warning with the code below: import warnings warnings.filterwarnings("ignore",category=UserWarning) A: It would be useful to see the code, however remember to set the parameters of the plot first, only then initialize the plot. Exemple, what you may have done: plt.pcolormesh(X, Y, Z) plt.axes().set_aspect('equal') What you have to do: plt.axes().set_aspect('equal') plt.pcolormesh(X, Y, Z) A: Due to MatplotlibDeprecationWarning: mplDeprecation was deprecated in Matplotlib 3.6 and will be removed two minor releases later ... use this instead: import warnings import matplotlib warnings.filterwarnings("ignore", category=matplotlib.MatplotlibDeprecationWarning)
Python/matplotlib : getting rid of matplotlib.mpl warning
I am using matplotlib with Python 3.4. When I start my program, I get the following warning message: C:\Python34-32bits\lib\site-packages\matplotlib\cbook.py:123: MatplotlibDeprecationWarning: The matplotlib.mpl module was deprecated in version 1.3. Use import matplotlib as mpl instead. warnings.warn(message, mplDeprecation, stacklevel=1) As far as I know I do not use mpl, and all my imports concerning matplotlib are: import matplotlib.pyplot as plt import matplotlib.animation as animation Is there anything I should do?
[ "You can suppress that particular warning, which is probably the preferred way:\nimport warnings\nimport matplotlib.cbook\nwarnings.filterwarnings(\"ignore\",category=matplotlib.cbook.mplDeprecation)\n\n", "you can temporarily suppress a warning, when importing\nimport warnings\n\ndef fxn():\n warnings.warn(\"deprecated\", DeprecationWarning)\n\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n fxn()\n\n", "I was able to suppress that warning with the code below:\nimport warnings\n\nwarnings.filterwarnings(\"ignore\",category=UserWarning)\n\n", "It would be useful to see the code, however remember to set the parameters of the plot first, only then initialize the plot.\nExemple, what you may have done:\nplt.pcolormesh(X, Y, Z)\nplt.axes().set_aspect('equal')\n\nWhat you have to do:\nplt.axes().set_aspect('equal')\nplt.pcolormesh(X, Y, Z)\n\n", "Due to\nMatplotlibDeprecationWarning: mplDeprecation was deprecated in Matplotlib 3.6 and will be removed two minor releases later ...\nuse this instead:\nimport warnings\nimport matplotlib\nwarnings.filterwarnings(\"ignore\", category=matplotlib.MatplotlibDeprecationWarning)\n\n" ]
[ 41, 3, 1, 0, 0 ]
[]
[]
[ "deprecation_warning", "matplotlib", "python" ]
stackoverflow_0024502500_deprecation_warning_matplotlib_python.txt
Q: Django Form : cleaned_data.get(field name) is returning None in clean() method I am following the Django documentation of Django form but unable to understand what is the issue in my code. I am writing the below code in the clean method to check if both name and email starts with lowercase s or not but Django is returning None in cleaned_data.get(field name) method and I am getting "Attribute error" : 'NoneType' object has no attribute 'startswith'. Please help me on this: Reference: https://docs.djangoproject.com/en/4.1/ref/forms/validation/#cleaning-and-validating-fields-that-depend-on-each-other from django import forms from django.core.exceptions import ValidationError class GirlsFeedback(forms.Form): name = forms.CharField(label = 'Enter Your Name', label_suffix = " ", required=True, disabled=False, min_length = 5, max_length = 100, strip=True) password = forms.CharField(label='Enter Your Password', label_suffix = " ", required=True, disabled=False, min_length=8, max_length=10, help_text="Minimum 8 and Maximum 10 characters are allowed.", widget=forms.PasswordInput) email = forms.EmailField(error_messages={'required': 'Email is mandatory'}) def clean(self): cleaned_data = super().clean() name = cleaned_data.get('name') email = cleaned_data.get('email') if name.startswith('s') and email.startswith('s') !=True: raise ValidationError('Name and email both should start with a lowercase s') Error: AttributeError at /feedback3/ 'NoneType' object has no attribute 'startswith' Request Method: POST Request URL: http://localhost:8000/feedback3/ Django Version: 4.1.2 Exception Type: AttributeError Exception Value: 'NoneType' object has no attribute 'startswith' Exception Location: C:\Users\singh\Desktop\Journey\Django Journey\Geeky Shows\Eleven\Feedback3\forms.py, line 72, in clean Raised during: Feedback3.views.feedback Python Executable: C:\Users\singh\AppData\Local\Programs\Python\Python310\python.exe Python Version: 3.10.7 views.py: from django.http import HttpResponseRedirect from django.shortcuts import render from .forms import GirlsFeedback # Create your views here. def success(request): return render(request, 'Feedback3/success.html', {'name':name,'email':email}) def feedback(request): if request.method == 'POST': var = GirlsFeedback(request.POST) if var.is_valid(): global name, password, email name = var.cleaned_data['name'] password = var.cleaned_data['password'] email = var.cleaned_data['email'] return HttpResponseRedirect('/feedback3/success') else: var = GirlsFeedback() return render(request, 'Feedback3/feedback.html', {'data': var}) feedback.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Feedback 3</title> </head> <body> <h1>This is the third feedback page</h1> <form action="" method="POST" novalidate> {% csrf_token %} {{data.as_p}} <input type="submit" value="Submit Data"> </form> </body> </html> Success.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Feedback3</title> </head> <body> <p>Dear {{name}}, Thanks for your feedback. 
Your feedback has been submitted with the below details </p> <p>Name : {{name}}</p> <p>Email : {{email}}</p> </body> </html> A: It seems that form is not receiving any data, that's why it returned NoneType, although you can use default value to prevent error as: def clean(self): cleaned_data = super().clean() name = cleaned_data.get('name', "Sujit Singh") email = cleaned_data.get('email', "sujit123@gmail.com") if name.startswith('s') and email.startswith('s') !=True: raise ValidationError('Name and email both should start with a lowercase s') Currently it will return default values instead of NoneType so It will not give any errors. The view should be: def feedback(request): if request.method == 'POST': var = GirlsFeedback(request.POST) if var.is_valid(): name = var.cleaned_data['name'] password = var.cleaned_data['password'] email = var.cleaned_data['email'] return HttpResponseRedirect('/feedback3/success') else: var = GirlsFeedback() return render(request, 'Feedback3/feedback.html', {'data': var})
Django Form : cleaned_data.get(field name) is returning None in clean() method
I am following the Django documentation of Django form but unable to understand what is the issue in my code. I am writing the below code in the clean method to check if both name and email starts with lowercase s or not but Django is returning None in cleaned_data.get(field name) method and I am getting "Attribute error" : 'NoneType' object has no attribute 'startswith'. Please help me on this: Reference: https://docs.djangoproject.com/en/4.1/ref/forms/validation/#cleaning-and-validating-fields-that-depend-on-each-other from django import forms from django.core.exceptions import ValidationError class GirlsFeedback(forms.Form): name = forms.CharField(label = 'Enter Your Name', label_suffix = " ", required=True, disabled=False, min_length = 5, max_length = 100, strip=True) password = forms.CharField(label='Enter Your Password', label_suffix = " ", required=True, disabled=False, min_length=8, max_length=10, help_text="Minimum 8 and Maximum 10 characters are allowed.", widget=forms.PasswordInput) email = forms.EmailField(error_messages={'required': 'Email is mandatory'}) def clean(self): cleaned_data = super().clean() name = cleaned_data.get('name') email = cleaned_data.get('email') if name.startswith('s') and email.startswith('s') !=True: raise ValidationError('Name and email both should start with a lowercase s') Error: AttributeError at /feedback3/ 'NoneType' object has no attribute 'startswith' Request Method: POST Request URL: http://localhost:8000/feedback3/ Django Version: 4.1.2 Exception Type: AttributeError Exception Value: 'NoneType' object has no attribute 'startswith' Exception Location: C:\Users\singh\Desktop\Journey\Django Journey\Geeky Shows\Eleven\Feedback3\forms.py, line 72, in clean Raised during: Feedback3.views.feedback Python Executable: C:\Users\singh\AppData\Local\Programs\Python\Python310\python.exe Python Version: 3.10.7 views.py: from django.http import HttpResponseRedirect from django.shortcuts import render from .forms import GirlsFeedback # Create your views here. def success(request): return render(request, 'Feedback3/success.html', {'name':name,'email':email}) def feedback(request): if request.method == 'POST': var = GirlsFeedback(request.POST) if var.is_valid(): global name, password, email name = var.cleaned_data['name'] password = var.cleaned_data['password'] email = var.cleaned_data['email'] return HttpResponseRedirect('/feedback3/success') else: var = GirlsFeedback() return render(request, 'Feedback3/feedback.html', {'data': var}) feedback.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Feedback 3</title> </head> <body> <h1>This is the third feedback page</h1> <form action="" method="POST" novalidate> {% csrf_token %} {{data.as_p}} <input type="submit" value="Submit Data"> </form> </body> </html> Success.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Feedback3</title> </head> <body> <p>Dear {{name}}, Thanks for your feedback. Your feedback has been submitted with the below details </p> <p>Name : {{name}}</p> <p>Email : {{email}}</p> </body> </html>
[ "It seems that form is not receiving any data, that's why it returned NoneType, although you can use default value to prevent error as:\ndef clean(self):\n cleaned_data = super().clean()\n name = cleaned_data.get('name', \"Sujit Singh\")\n email = cleaned_data.get('email', \"sujit123@gmail.com\")\n if name.startswith('s') and email.startswith('s') !=True:\n raise ValidationError('Name and email both should start with a lowercase s')\n\nCurrently it will return default values instead of NoneType so It will not give any errors.\nThe view should be:\ndef feedback(request):\n if request.method == 'POST':\n var = GirlsFeedback(request.POST)\n if var.is_valid(): \n name = var.cleaned_data['name']\n password = var.cleaned_data['password']\n email = var.cleaned_data['email']\n return HttpResponseRedirect('/feedback3/success')\n else:\n var = GirlsFeedback()\n return render(request, 'Feedback3/feedback.html', {'data': var})\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_forms", "django_templates", "django_views", "python" ]
stackoverflow_0074461298_django_django_forms_django_templates_django_views_python.txt
Q: Collapse pandas data frame based on column I have a table below , I will like to group and concatenate into a new field based on the siteid using pandas/python <!DOCTYPE html> <html> <style> table, th, td { border:1px solid black; } </style> <body> <table style="width:100%"> <tr> <th>SiteID</th> <th>Name</th> <th>Count</th> </tr> <tr> <td>A</td> <td>Conserve</td> <td>3</td> </tr> <tr> <td>A</td> <td>Listed</td> <td>5</td> </tr> <tr> <td>B</td> <td>Listed</td> <td>5</td> </tr> </table> </body> </html> I will like the new table to look like this <!DOCTYPE html> <html> <style> table, th, td { border:1px solid black; } </style> <body> <table style="width:100%"> <tr> <th>SiteID</th> <th>Output</th> </tr> <tr> <td>A</td> <td>There are Conserve : 3, Listed : 5 </td> </tr> <tr> <td>B</td> <td>There are Listed : 5</td> </tr> </table> </body> </html> not sure what code to use, I have used group by. I tried this df = df.groupby("SiteID")["Name"].agg(";".join).reset_index() but I would like to put the result in a new field with a concatenate string as above A: You can use a custom groupby.agg: out = ( (df['Name']+': '+df['Count'].astype(str)) .groupby(df['SiteID']).agg(', '.join) .reset_index(name='Output') ) output: SiteID Output 0 A Conserve: 3, Listed: 5 1 B Listed: 5 If you need the leading "There are": df['Output'] = 'There are ' + df['Output'] A: Here is how you can achieve this: res = ( df .assign(Output=df[['Name', 'Count']] .astype(str) .apply(': '.join, axis=1) ) .groupby('SiteID',as_index=False)['Output'] .apply(lambda x: f"There are {', '.join(x)}") ) print(res) SiteID Output 0 A There are Conserve: 3, Listed: 5 1 B There are Listed: 5
Collapse pandas data frame based on column
I have a table below , I will like to group and concatenate into a new field based on the siteid using pandas/python <!DOCTYPE html> <html> <style> table, th, td { border:1px solid black; } </style> <body> <table style="width:100%"> <tr> <th>SiteID</th> <th>Name</th> <th>Count</th> </tr> <tr> <td>A</td> <td>Conserve</td> <td>3</td> </tr> <tr> <td>A</td> <td>Listed</td> <td>5</td> </tr> <tr> <td>B</td> <td>Listed</td> <td>5</td> </tr> </table> </body> </html> I will like the new table to look like this <!DOCTYPE html> <html> <style> table, th, td { border:1px solid black; } </style> <body> <table style="width:100%"> <tr> <th>SiteID</th> <th>Output</th> </tr> <tr> <td>A</td> <td>There are Conserve : 3, Listed : 5 </td> </tr> <tr> <td>B</td> <td>There are Listed : 5</td> </tr> </table> </body> </html> not sure what code to use, I have used group by. I tried this df = df.groupby("SiteID")["Name"].agg(";".join).reset_index() but I would like to put the result in a new field with a concatenate string as above
[ "You can use a custom groupby.agg:\nout = (\n (df['Name']+': '+df['Count'].astype(str))\n .groupby(df['SiteID']).agg(', '.join)\n .reset_index(name='Output')\n)\n\noutput:\n SiteID Output\n0 A Conserve: 3, Listed: 5\n1 B Listed: 5\n\nIf you need the leading \"There are\":\ndf['Output'] = 'There are ' + df['Output']\n\n", "Here is how you can achieve this:\nres = (\n df\n .assign(Output=df[['Name', 'Count']]\n .astype(str)\n .apply(': '.join, axis=1)\n )\n .groupby('SiteID',as_index=False)['Output']\n .apply(lambda x: f\"There are {', '.join(x)}\")\n)\nprint(res)\n\n SiteID Output\n0 A There are Conserve: 3, Listed: 5\n1 B There are Listed: 5\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074462966_pandas_python.txt
Q: Parsing JSON response for individual value I'm having trouble parsing the below JSON Response Dict object to just return/print the 'data' value (testing.test.com). See the dict below: [{'_id': '~1742209152', 'id': '~1742209152', 'createdBy': 'test@test.com', 'createdAt': 1666089754558, '_type': 'case_artifact', 'dataType': 'domain', 'data': 'testing.test.com', 'startDate': 1666089754558, 'tlp': 2, 'pap': 2, 'tags': ['Domain'], 'ioc': True, 'sighted': True, 'message': '', 'reports': {}, 'stats': {}, 'ignoreSimilarity': False}] Whenever I go to run the following code to attempt to parse the data, I'm shown an error 'print(observables['data'])TypeError: list indices must be integers or slices, not str': observables = json.dumps(response) #getting JSON response dict which works fine print(observables) #printing is successful print(observables['data']) #issue is here I realise the error is suggesting I use int rather than string, but when I try to reverse this, it doesn't work and sends me on an endless number of errors. Is there a specific way to do this? I'm not overly confident in my scripting abilities so appreciate any pointers! Ps - as a side note, this interaction is occurring been an API and my python file, but since I'm only having issues with the JSON response return parsing, I doubt that has any impact. A: your response is a list of dict objects. note that the first opening brackets are [ and not {. you need to address the first (and only) object in your example and then access it as a dict using the 'data' key. try print(observables[0]['data']) EDIT: after seeing more of the code in chat room, and figuring out you used dumps() on the response object. I figured out that you're handling a string, and not a list/dict. you should remove the call to dumps() since you're getting a requests.Response object as a response, you can just do: observables = response.json() and then continue with the original answer.
Parsing JSON response for individual value
I'm having trouble parsing the below JSON response dict object to just return/print the 'data' value (testing.test.com). See the dict below: [{'_id': '~1742209152', 'id': '~1742209152', 'createdBy': 'test@test.com', 'createdAt': 1666089754558, '_type': 'case_artifact', 'dataType': 'domain', 'data': 'testing.test.com', 'startDate': 1666089754558, 'tlp': 2, 'pap': 2, 'tags': ['Domain'], 'ioc': True, 'sighted': True, 'message': '', 'reports': {}, 'stats': {}, 'ignoreSimilarity': False}] Whenever I run the following code to attempt to parse the data, I'm shown an error 'print(observables['data']) TypeError: list indices must be integers or slices, not str': observables = json.dumps(response) #getting JSON response dict which works fine print(observables) #printing is successful print(observables['data']) #issue is here I realise the error is suggesting I use int rather than string, but when I try to reverse this, it doesn't work and sends me on an endless number of errors. Is there a specific way to do this? I'm not overly confident in my scripting abilities, so I appreciate any pointers! P.S. - as a side note, this interaction is occurring between an API and my Python file, but since I'm only having issues with parsing the JSON response, I doubt that has any impact.
[ "your response is a list of dict objects. note that the first opening brackets are [ and not {.\nyou need to address the first (and only) object in your example and then access it as a dict using the 'data' key.\ntry print(observables[0]['data'])\nEDIT:\nafter seeing more of the code in chat room, and figuring out you used dumps() on the response object. I figured out that you're handling a string, and not a list/dict.\nyou should remove the call to dumps()\nsince you're getting a requests.Response object as a response, you can just do: observables = response.json() and then continue with the original answer.\n" ]
[ 2 ]
[]
[]
[ "json", "jsonresponse", "parsing", "python", "scripting" ]
stackoverflow_0074463094_json_jsonresponse_parsing_python_scripting.txt
Q: confusing about nested for loop in order to increase multiple index The question is asking to create a nested loop to append and increase multiple index in a 2D list,for somehow I can't print the element in the list and i tried to print the length of the list it just return 0. the expect value in the list is: If duration of the music sequence is 1s, starting pitch is 60 and ending pitch is 64, then the content of the music list for one sequence will be: [ [0.0, 60, 0.2], [0.2, 61, 0.2], [0.4, 62, 0.2], [0.6, 63, 0.2], [0.8, 64, 0.2] ] There are 5 music notes because the pitch number starts from 60 and goes up to 64, i.e. number of notes = 64 - 60 + 1 The duration of each music note is 0.2s, which is just the duration of the music sequence divided by 5 so the list is music_data=[time,pitch,duration] here are more examples if the music sequence is repeated twice, an example music data with five notes (from 60 to 64 and a music sequence duration of 1 second) will look like this: [ [0.0, 60, 0.2], [0.2, 61, 0.2], [0.4, 62, 0.2], [0.6, 63, 0.2], [0.8, 64, 0.2], [1.0, 60, 0.2], [1.2, 61, 0.2], [1.4, 62, 0.2], [1.6, 63, 0.2], [1.8, 64, 0.2] ] You need to be careful that the range of pitch numbers works quite differently for increasing pitch numbers (step = 1) and decreasing pitch numbers (step = -1) You also need to make sure that the range of pitch numbers is inclusive of the starting pitch and the ending pitch values For example, if the starting pitch and ending pitch are 60 and 72 respectively, you will need write range(60, 73) to generate the correct range of pitch numbers The function template provided by task: # This function makes a piece of crazy music in the music list def makeCrazyMusic(): global music_data ##### # # TODO: # - Ask for the crazy music parameters # - Clear the music list # - Use a nested loop to generate the crazy music in the music list # - Update the music summary # ##### After refer to the instruction, i ve tried : def makeCrazyMusic(): global music_data ##### # # TODO: # - Ask for the crazy music parameters # - Clear the music list # - Use a nested loop to generate the crazy music in the music list # - Update the music summary # ##### #time = start time of note #pitch the pitch of note #durantion the length of the note #duration = duration / note --constant # = duration / startpitch -endpitch+1) #note = start pitch - end pitch +1 #time = time + duration #pitch = from start to end # try: times_input = int(turtle.numinput("Times to play",\ "Please enter number of times to play the sequence:")) dura_input = float(turtle.numinput("Duration",\ "Please enter duration to play the sequence:")) start_pitch = int(turtle.numinput("Start pitch",\ "Please enter Start pitch to play the sequence:")) end_pitch = int(turtle.numinput("End Pitch",\ "Please enter end pitch of the sequence:")) except TypeError: return music_data=[] #[time(+duration),pitch(nonc),duration(const)] index=0 for index in range(times_input): for pitch in (start_pitch,end_pitch+1): music_data.append([index,start_pitch,dura_input/times_input]) index= index+(dura_input/times_input) start_pitch= start_pitch+1 for x in range(len(music_data)): print(music_data[x]) The expected OUTPUT is: if the music sequence is repeated twice, an example music data with five notes (from 60 to 64 and a music sequence duration of 1 second) will look like this: #times_input =2 #dura_input = 1 #start_pitch =60 #end_pitch =64 [ [0.0, 60, 0.2], [0.2, 61, 0.2], [0.4, 62, 0.2], [0.6, 63, 0.2], [0.8, 64, 0.2], [1.0, 60, 0.2], [1.2, 61, 
0.2], [1.4, 62, 0.2], [1.6, 63, 0.2], [1.8, 64, 0.2] ] The ACTUAL OUTPUT: [0, 60, 0.5] [0.5, 61, 0.5] [1, 62, 0.5] [1.5, 63, 0.5] A: Calculate the duration of each note by dividing the total duration by the number of notes. Then use this in a list comprehension. def pitches_list(start, end, total_duration): duration = total_duration / (end - start + 1) return [[i * duration, start + i, duration] for i in range(end - start + 1)] A: Finally solved issue, here is the code: def makeCrazyMusic(): global music_data ##### # # TODO: # - Ask for the crazy music parameters # - Clear the music list # - Use a nested loop to generate the crazy music in the music list # - Update the music summary # ##### try: times_input = int(turtle.numinput("Times to play",\ "Please enter number of times to play the sequence:")) dura_input = float(turtle.numinput("Duration",\ "Please enter duration to play the sequence:")) start_pitch = int(turtle.numinput("Start pitch",\ "Please enter Start pitch to play the sequence:")) end_pitch = int(turtle.numinput("End Pitch",\ "Please enter end pitch of the sequence:")) except TypeError: return music_data=[] #[time(+duration),pitch(nonc),duration(const)] index=0 duration = dura_input/(start_pitch - end_pitch+1) duration2 =dura_input/(end_pitch - start_pitch+1) for index in range(times_input): if start_pitch > end_pitch: for pitch in range(start_pitch,end_pitch-1,-1): music_data.append([index,pitch,duration]) index= index+duration #start_pitch= start_pitch+1 if start_pitch<end_pitch: for pitch in range(start_pitch,end_pitch+1,1): music_data.append([index,pitch,duration2]) index= index+duration2 # for x in range(len(music_data)): # print(music_data[x]) updateMusicSummary()
confusing about nested for loop in order to increase multiple index
The question is asking to create a nested loop to append and increase multiple index in a 2D list,for somehow I can't print the element in the list and i tried to print the length of the list it just return 0. the expect value in the list is: If duration of the music sequence is 1s, starting pitch is 60 and ending pitch is 64, then the content of the music list for one sequence will be: [ [0.0, 60, 0.2], [0.2, 61, 0.2], [0.4, 62, 0.2], [0.6, 63, 0.2], [0.8, 64, 0.2] ] There are 5 music notes because the pitch number starts from 60 and goes up to 64, i.e. number of notes = 64 - 60 + 1 The duration of each music note is 0.2s, which is just the duration of the music sequence divided by 5 so the list is music_data=[time,pitch,duration] here are more examples if the music sequence is repeated twice, an example music data with five notes (from 60 to 64 and a music sequence duration of 1 second) will look like this: [ [0.0, 60, 0.2], [0.2, 61, 0.2], [0.4, 62, 0.2], [0.6, 63, 0.2], [0.8, 64, 0.2], [1.0, 60, 0.2], [1.2, 61, 0.2], [1.4, 62, 0.2], [1.6, 63, 0.2], [1.8, 64, 0.2] ] You need to be careful that the range of pitch numbers works quite differently for increasing pitch numbers (step = 1) and decreasing pitch numbers (step = -1) You also need to make sure that the range of pitch numbers is inclusive of the starting pitch and the ending pitch values For example, if the starting pitch and ending pitch are 60 and 72 respectively, you will need write range(60, 73) to generate the correct range of pitch numbers The function template provided by task: # This function makes a piece of crazy music in the music list def makeCrazyMusic(): global music_data ##### # # TODO: # - Ask for the crazy music parameters # - Clear the music list # - Use a nested loop to generate the crazy music in the music list # - Update the music summary # ##### After refer to the instruction, i ve tried : def makeCrazyMusic(): global music_data ##### # # TODO: # - Ask for the crazy music parameters # - Clear the music list # - Use a nested loop to generate the crazy music in the music list # - Update the music summary # ##### #time = start time of note #pitch the pitch of note #durantion the length of the note #duration = duration / note --constant # = duration / startpitch -endpitch+1) #note = start pitch - end pitch +1 #time = time + duration #pitch = from start to end # try: times_input = int(turtle.numinput("Times to play",\ "Please enter number of times to play the sequence:")) dura_input = float(turtle.numinput("Duration",\ "Please enter duration to play the sequence:")) start_pitch = int(turtle.numinput("Start pitch",\ "Please enter Start pitch to play the sequence:")) end_pitch = int(turtle.numinput("End Pitch",\ "Please enter end pitch of the sequence:")) except TypeError: return music_data=[] #[time(+duration),pitch(nonc),duration(const)] index=0 for index in range(times_input): for pitch in (start_pitch,end_pitch+1): music_data.append([index,start_pitch,dura_input/times_input]) index= index+(dura_input/times_input) start_pitch= start_pitch+1 for x in range(len(music_data)): print(music_data[x]) The expected OUTPUT is: if the music sequence is repeated twice, an example music data with five notes (from 60 to 64 and a music sequence duration of 1 second) will look like this: #times_input =2 #dura_input = 1 #start_pitch =60 #end_pitch =64 [ [0.0, 60, 0.2], [0.2, 61, 0.2], [0.4, 62, 0.2], [0.6, 63, 0.2], [0.8, 64, 0.2], [1.0, 60, 0.2], [1.2, 61, 0.2], [1.4, 62, 0.2], [1.6, 63, 0.2], [1.8, 64, 0.2] ] The ACTUAL OUTPUT: 
[0, 60, 0.5] [0.5, 61, 0.5] [1, 62, 0.5] [1.5, 63, 0.5]
[ "Calculate the duration of each note by dividing the total duration by the number of notes. Then use this in a list comprehension.\ndef pitches_list(start, end, total_duration):\n duration = total_duration / (end - start + 1)\n return [[i * duration, start + i, duration] for i in range(end - start + 1)]\n\n", "Finally solved issue, here is the code:\ndef makeCrazyMusic():\n global music_data\n\n #####\n #\n # TODO:\n # - Ask for the crazy music parameters\n # - Clear the music list\n # - Use a nested loop to generate the crazy music in the music list\n # - Update the music summary\n #\n #####\n\n \n try:\n times_input = int(turtle.numinput(\"Times to play\",\\\n \"Please enter number of times to play the sequence:\"))\n dura_input = float(turtle.numinput(\"Duration\",\\\n \"Please enter duration to play the sequence:\"))\n start_pitch = int(turtle.numinput(\"Start pitch\",\\\n \"Please enter Start pitch to play the sequence:\"))\n end_pitch = int(turtle.numinput(\"End Pitch\",\\\n \"Please enter end pitch of the sequence:\"))\n except TypeError:\n return\n \n music_data=[] #[time(+duration),pitch(nonc),duration(const)]\n index=0\n duration = dura_input/(start_pitch - end_pitch+1)\n duration2 =dura_input/(end_pitch - start_pitch+1)\n for index in range(times_input):\n if start_pitch > end_pitch:\n for pitch in range(start_pitch,end_pitch-1,-1):\n music_data.append([index,pitch,duration])\n index= index+duration\n #start_pitch= start_pitch+1\n if start_pitch<end_pitch:\n for pitch in range(start_pitch,end_pitch+1,1):\n music_data.append([index,pitch,duration2])\n index= index+duration2\n # for x in range(len(music_data)):\n # print(music_data[x])\n updateMusicSummary()\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074449237_python_python_3.x.txt
Q: amending the x-axis of a histogram created with the df.hist function I am using the df.hist function and am looping through variables to create histogram plots. I would like to create plots where the x-axis values are directly below the bars. As one example, I have attached the following plot. Here, I don't want '1.5','2.5' or '3.5' to be displayed on the x-axis and for the numbers '1','2' and '3' to be aligned to the centre of the bars. I would be so grateful for a helping hand! listedvariables = ['distance','duration','age','gender-quantised','hours_of_sleep','frequency_of_alarm_usage','sleepiness_bed','sleepiness_waking','sleep_quality','nap_duration_mins','frequency_of_naps','normal_time_of_wakeup','number_of_times_wakeup_during_night','time_spent_awake_during_night_mins','time_of_going_to_sleep','time_to_fall_asleep_mins','sleep_onset_time','sleep_period_length_mins','total_sleep_duration_mins','time_in_bed_mins','sleep_efficiency','sleep_bout_length_mins','mid_point_of_sleep','takes_naps_yes/no','sleepiness_resolution_index','highest_education_level_acheived','hours_exercise_per_week_in_last_6_months','drink_alcohol_yes/no','drink_caffeine_yes/no','hours_exercise_per_week','hours_of_phone_use_per_week','video_game_phone/tablet_hours_per_week','video_game_all_devices_hours_per_week'] for i in range(0,len(listedvariables)): fig = newerdf[[listedvariables[i]]].hist(figsize=(30,20)) [x.title.set_size(40) for x in fig.ravel()] [x.tick_params(axis='x',labelsize=40) for x in fig.ravel()] [x.tick_params(axis='y',labelsize=40) for x in fig.ravel()] plt.tight_layout() A: Since the x-values are discrete, a histogram is not the right tool here. Instead, it seems like you want to create a bar plot. So try replacing df.hist() with df.plot.bar().
amending the x-axis of a histogram created with the df.hist function
I am using the df.hist function and am looping through variables to create histogram plots. I would like to create plots where the x-axis values are directly below the bars. As one example, I have attached the following plot. Here, I don't want '1.5','2.5' or '3.5' to be displayed on the x-axis and for the numbers '1','2' and '3' to be aligned to the centre of the bars. I would be so grateful for a helping hand! listedvariables = ['distance','duration','age','gender-quantised','hours_of_sleep','frequency_of_alarm_usage','sleepiness_bed','sleepiness_waking','sleep_quality','nap_duration_mins','frequency_of_naps','normal_time_of_wakeup','number_of_times_wakeup_during_night','time_spent_awake_during_night_mins','time_of_going_to_sleep','time_to_fall_asleep_mins','sleep_onset_time','sleep_period_length_mins','total_sleep_duration_mins','time_in_bed_mins','sleep_efficiency','sleep_bout_length_mins','mid_point_of_sleep','takes_naps_yes/no','sleepiness_resolution_index','highest_education_level_acheived','hours_exercise_per_week_in_last_6_months','drink_alcohol_yes/no','drink_caffeine_yes/no','hours_exercise_per_week','hours_of_phone_use_per_week','video_game_phone/tablet_hours_per_week','video_game_all_devices_hours_per_week'] for i in range(0,len(listedvariables)): fig = newerdf[[listedvariables[i]]].hist(figsize=(30,20)) [x.title.set_size(40) for x in fig.ravel()] [x.tick_params(axis='x',labelsize=40) for x in fig.ravel()] [x.tick_params(axis='y',labelsize=40) for x in fig.ravel()] plt.tight_layout()
[ "Since the x-values are discrete, a histogram is not the right tool here. Instead, it seems like you want to create a bar plot. So try replacing df.hist() with df.plot.bar().\n" ]
[ 0 ]
[]
[]
[ "dataframe", "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074463110_dataframe_jupyter_notebook_pandas_python.txt
Q: convert cosine similarity distance to confidence percent I am working to features of images based on deep learning techniques, and for labeling images, I specify the desired label with a threshold using cosine distance. The algorithm is as follows: import math from itertools import izip def dot_product(v1, v2): return sum(map(lambda x: x[0] * x[1], izip(v1, v2))) def cosine_measure(v1, v2): prod = dot_product(v1, v2) len1 = math.sqrt(dot_product(v1, v1)) len2 = math.sqrt(dot_product(v2, v2)) return prod / (len1 * len2) Suppose I get the number 0.34. How do I convert this number to a percentage? A: Cosine is a sinosoidal function which is a non-linear function. Thus calculating linear distances from cosine values would be a mistake. One good approximation would be treating cosine angle as linearly spaced and finding distance from cosine inverse function i.e. angle instead of cosine value itself. import math cos_sim = cosine_measure(v1, v2) perc_dist = (math.pi - math.acos(cos_sim)) * 100 / math.pi
convert cosine similarity distance to confidence percent
I am working with features of images based on deep learning techniques, and for labeling images, I specify the desired label with a threshold using cosine distance. The algorithm is as follows: import math from itertools import izip def dot_product(v1, v2): return sum(map(lambda x: x[0] * x[1], izip(v1, v2))) def cosine_measure(v1, v2): prod = dot_product(v1, v2) len1 = math.sqrt(dot_product(v1, v1)) len2 = math.sqrt(dot_product(v2, v2)) return prod / (len1 * len2) Suppose I get the number 0.34. How do I convert this number to a percentage?
[ "Cosine is a sinosoidal function which is a non-linear function. Thus calculating linear distances from cosine values would be a mistake. One good approximation would be treating cosine angle as linearly spaced and finding distance from cosine inverse function i.e. angle instead of cosine value itself.\n import math\n cos_sim = cosine_measure(v1, v2)\n perc_dist = (math.pi - math.acos(cos_sim)) * 100 / math.pi\n\n" ]
[ 0 ]
[]
[]
[ "cosine_similarity", "numpy", "python" ]
stackoverflow_0070859038_cosine_similarity_numpy_python.txt
Q: How can I use python to find specific Thai word in multiple csv file and return list of file name that's contain the word I have like 100+ file in directory and I need to find out which files contain the word in Thai that I looking for Thank you I try this but it doesn't work ` import pandas as pd import re import os FOLDER_PATH = r'C:\Users\project' list = os.listdir(FOLDER_PATH) def is_name_in_csv(word,csv_file): with open(csv_file,"r") as f: data = f.read() return bool(re.search(word, data)) word = "บัญชีรายรับ" for csv_file in list: if is_name_in_csv(word,csv_file): print(f"find the {word} in {csv_file}") ` A: You don't need regex. You can simply check if word in fileContents. Also, I changed list to paths because list is a built-in python keyword. import os paths = os.listdir(r'C:\Users\project') def files_with_word(word:str, paths:list) -> str: for path in paths: with open(path, "r") as f: if word in f.read(): yield path #if you need to do something after each found file for filepath in files_with_word("บัญชีรายรับ", paths): print(filepath) #or if you just need a list filepaths = [fp for fp in files_with_word("บัญชีรายรับ", paths)]
How can I use Python to find a specific Thai word in multiple CSV files and return a list of the file names that contain the word
I have 100+ files in a directory and I need to find out which files contain the Thai word I am looking for. Thank you. I tried this but it doesn't work: ` import pandas as pd import re import os FOLDER_PATH = r'C:\Users\project' list = os.listdir(FOLDER_PATH) def is_name_in_csv(word,csv_file): with open(csv_file,"r") as f: data = f.read() return bool(re.search(word, data)) word = "บัญชีรายรับ" for csv_file in list: if is_name_in_csv(word,csv_file): print(f"find the {word} in {csv_file}") `
[ "You don't need regex. You can simply check if word in fileContents. Also, I changed list to paths because list is a built-in python keyword.\nimport os\n\npaths = os.listdir(r'C:\\Users\\project')\n\ndef files_with_word(word:str, paths:list) -> str:\n for path in paths:\n with open(path, \"r\") as f:\n if word in f.read():\n yield path\n\n#if you need to do something after each found file\nfor filepath in files_with_word(\"บัญชีรายรับ\", paths):\n print(filepath)\n\n#or if you just need a list\nfilepaths = [fp for fp in files_with_word(\"บัญชีรายรับ\", paths)]\n\n\n" ]
[ 0 ]
[]
[]
[ "csv", "python", "thai" ]
stackoverflow_0074463103_csv_python_thai.txt
Q: error Name 'false' is not defined when adding "justMyCode": false to launch.json in Visual Studio Code when using arguments to run my debugger Name 'false' is not defined when adding "justMyCode": false to launch.json in Visual Studio Code 3 Getting this error when trying to debug my program in python, I'm trying to run with these arguments as shown in the configuration "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "args": [ "main.py", "--run_mode", "testing", "--uni_name", "greenwich", "--env", "local_development" ], }, not sure what the issue is, here is an answer to a very similar question below, "This configuration triggers the run or debug of the currently focused file (see the "name": "Python: Current File", and "program": "${file}" settings) And the error message popups, because you tried to debug launch.json file - which makes no sense for a python debugger. In other words - first, you have to switch to/focus your python file and trigger debug afterward. but to be honest that answer made no sense to me. I don't get what's being said here {first, you have to switch to/focus your python file and trigger debug afterward} if anyone could help, id be very grateful. tried running the debugger with the configuration { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "args": [ "--run_mode", "testing", "--uni_name", "greenwich", "--env", "local_development" ], } get this error Exception has occurred: NameError (note: full exception trace is shown but execution is paused at: _run_module_as_main) name 'false' is not defined A: So this has worked, { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "main.py", "console": "integratedTerminal", "justMyCode": false, "args": [ "--run_mode", "testing", "--uni_name", "greenwich", "--env", "local_development" ] } ] } the value in the program argument needed to be the file in which to launch I assume, Thank you !
error Name 'false' is not defined when adding "justMyCode": false to launch.json in Visual Studio Code when using arguments to run my debugger
Name 'false' is not defined when adding "justMyCode": false to launch.json in Visual Studio Code 3 Getting this error when trying to debug my program in python, I'm trying to run with these arguments as shown in the configuration "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "args": [ "main.py", "--run_mode", "testing", "--uni_name", "greenwich", "--env", "local_development" ], }, not sure what the issue is, here is an answer to a very similar question below, "This configuration triggers the run or debug of the currently focused file (see the "name": "Python: Current File", and "program": "${file}" settings) And the error message popups, because you tried to debug launch.json file - which makes no sense for a python debugger. In other words - first, you have to switch to/focus your python file and trigger debug afterward. but to be honest that answer made no sense to me. I don't get what's being said here {first, you have to switch to/focus your python file and trigger debug afterward} if anyone could help, id be very grateful. tried running the debugger with the configuration { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "args": [ "--run_mode", "testing", "--uni_name", "greenwich", "--env", "local_development" ], } get this error Exception has occurred: NameError (note: full exception trace is shown but execution is paused at: _run_module_as_main) name 'false' is not defined
[ "So this has worked,\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"name\": \"Python: Current File\",\n \"type\": \"python\",\n \"request\": \"launch\",\n \"program\": \"main.py\",\n \"console\": \"integratedTerminal\",\n \"justMyCode\": false,\n \"args\": [\n \"--run_mode\",\n \"testing\",\n \"--uni_name\",\n \"greenwich\",\n \"--env\",\n \"local_development\"\n ]\n }\n ]\n}\n\nthe value in the program argument needed to be the file in which to launch I assume,\n\nThank you !\n\n" ]
[ 0 ]
[]
[]
[ "debugging", "python", "vscode_debugger" ]
stackoverflow_0074462903_debugging_python_vscode_debugger.txt
Q: Convert bytestring to float in python I am working on a project where I read data which is written into memory by a Delphi/Pascal program using memory mapping on a Windows PC. I am now mapping the memory again using pythons mmap and the handle given by the other program and as expected get back a bytestring. I know that this should represent 13 8-byte floating-point numbers but I do not know how I could correctly convert this bytestring back into those. I also know the aproximate value of the floating-point numbers to check my results. The code I am using to get the bytestring looks like this: import mmap import time size_map = 13*8 mmf_name = "MMF" mm = mmap.mmap(-1, size_map, mmf_name, access=mmap.ACCESS_READ) while True: mm.seek(0) mmf = mm.read() print(mmf) time.sleep(0.04) mm.close() For now I am just running the code again every 40 ms because the data is written every 40 ms into memory. The output looks something like this: b'\xcd\xcc\xcc\xe0\xe6v\xb9\xbf\x9a\x99\x99!F\xcd&@\xf5\xa2\xc5,.\xaf\xbd\xbf\x95\xb0\xea\xb5\xae\n\xd9?333/\x9b\x165@\x00\x00\x00h\x89D1\xc08\xd1\xc3\xc3\x92\x82\xf7?tA\x8fB\xd6G\x04@]\xc1\xed\x98rA\x07@\x9a\x99\x99\x99\x99\x191@\x00\x00\x00\xc0\xcc\xcc=@\x00\x00\x00\xc0\x1eE7@\x00\x00\x00\x00\xb8\x1e\x1a@' I tried struct.unpack(), .decode() and float.fromhex() to somehow get back the right value but it didn't work. For example the first 8 bytes should roughly represent a value between -0.071 and -0.090. The problem seems to be very basic but I still wasn't able to figure it out by now. I would be very grateful for any suggestions how to deal with this and get the right floating-point values from a bytestring. If I am missing any needed Information I am of course willing do give that too. Thank you! A: Try using numpys frompuffer function. You will get an array you can than read: https://numpy.org/doc/stable/reference/generated/numpy.frombuffer.html import numpy as np buffer = b'\xcd\xcc\xcc\xe0\xe6v\xb9\xbf\x9a\x99\x99!F\xcd&@\xf5\xa2\xc5,.\xaf\xbd\xbf\x95\xb0\xea\xb5\xae\n\xd9?333/\x9b\x165@\x00\x00\x00h\x89D1\xc08\xd1\xc3\xc3\x92\x82\xf7?tA\x8fB\xd6G\x04@]\xc1\xed\x98rA\x07@\x9a\x99\x99\x99\x99\x191@\x00\x00\x00\xc0\xcc\xcc=@\x00\x00\x00\xc0\x1eE7@\x00\x00\x00\x00\xb8\x1e\x1a@' nparray = np.frombuffer(buffer, dtype=np.float64) nparray array([ -0.09947055, 11.40092568, -0.11595429, 0.39127701, 21.08830543, -17.26772165, 1.46937825, 2.53507664, 2.90695686, 17.1 , 29.79999924, 23.27000046, 6.52999878]) nparray[0] -0.09947055
Convert bytestring to float in python
I am working on a project where I read data which is written into memory by a Delphi/Pascal program using memory mapping on a Windows PC. I am now mapping the memory again using pythons mmap and the handle given by the other program and as expected get back a bytestring. I know that this should represent 13 8-byte floating-point numbers but I do not know how I could correctly convert this bytestring back into those. I also know the aproximate value of the floating-point numbers to check my results. The code I am using to get the bytestring looks like this: import mmap import time size_map = 13*8 mmf_name = "MMF" mm = mmap.mmap(-1, size_map, mmf_name, access=mmap.ACCESS_READ) while True: mm.seek(0) mmf = mm.read() print(mmf) time.sleep(0.04) mm.close() For now I am just running the code again every 40 ms because the data is written every 40 ms into memory. The output looks something like this: b'\xcd\xcc\xcc\xe0\xe6v\xb9\xbf\x9a\x99\x99!F\xcd&@\xf5\xa2\xc5,.\xaf\xbd\xbf\x95\xb0\xea\xb5\xae\n\xd9?333/\x9b\x165@\x00\x00\x00h\x89D1\xc08\xd1\xc3\xc3\x92\x82\xf7?tA\x8fB\xd6G\x04@]\xc1\xed\x98rA\x07@\x9a\x99\x99\x99\x99\x191@\x00\x00\x00\xc0\xcc\xcc=@\x00\x00\x00\xc0\x1eE7@\x00\x00\x00\x00\xb8\x1e\x1a@' I tried struct.unpack(), .decode() and float.fromhex() to somehow get back the right value but it didn't work. For example the first 8 bytes should roughly represent a value between -0.071 and -0.090. The problem seems to be very basic but I still wasn't able to figure it out by now. I would be very grateful for any suggestions how to deal with this and get the right floating-point values from a bytestring. If I am missing any needed Information I am of course willing do give that too. Thank you!
[ "Try using numpys frompuffer function. You will get an array you can than read:\nhttps://numpy.org/doc/stable/reference/generated/numpy.frombuffer.html\nimport numpy as np\n\nbuffer = b'\\xcd\\xcc\\xcc\\xe0\\xe6v\\xb9\\xbf\\x9a\\x99\\x99!F\\xcd&@\\xf5\\xa2\\xc5,.\\xaf\\xbd\\xbf\\x95\\xb0\\xea\\xb5\\xae\\n\\xd9?333/\\x9b\\x165@\\x00\\x00\\x00h\\x89D1\\xc08\\xd1\\xc3\\xc3\\x92\\x82\\xf7?tA\\x8fB\\xd6G\\x04@]\\xc1\\xed\\x98rA\\x07@\\x9a\\x99\\x99\\x99\\x99\\x191@\\x00\\x00\\x00\\xc0\\xcc\\xcc=@\\x00\\x00\\x00\\xc0\\x1eE7@\\x00\\x00\\x00\\x00\\xb8\\x1e\\x1a@'\nnparray = np.frombuffer(buffer, dtype=np.float64)\nnparray\n\n\narray([ -0.09947055, 11.40092568, -0.11595429, 0.39127701,\n21.08830543, -17.26772165, 1.46937825, 2.53507664,\n2.90695686, 17.1 , 29.79999924, 23.27000046,\n6.52999878])\n\nnparray[0]\n\n\n-0.09947055\n\n" ]
[ 1 ]
[]
[]
[ "floating_point", "mmap", "python" ]
stackoverflow_0074463167_floating_point_mmap_python.txt
Q: How to compare different dataframes by column? I have two csv files with 200 columns each. The two files have the exact same numbers in rows and columns. I want to compare each columns separately. The idea would be to compare column 1 value of file "a" to column 1 value of file "b" and check the difference and so on for all the numbers in the column (there are 100 rows) and write out a number that in how many cases were the difference more than 3. I would like to repeat the same for all the columns. I know it should be a double for loop but idk exactly how. Probably 2 for loops but have no idea how to do that... Thanks in advance! import pandas as pd dk = pd.read_csv('C:/Users/D/1_top_a.csv', sep=',', header=None) dk = dk.dropna(how='all') dk = dk.dropna(how='all', axis=1) print(dk) dl = pd.read_csv('C:/Users/D/1_top_b.csv', sep=',', header=None) dl = dl.dropna(how='all') dl = dl.dropna(how='all', axis=1) print(dl) rows=dk.shape[0] print(rows) for i print(dk._get_value(0,0)) A: df1 = pd.DataFrame(dict(cola=[1,2,3,4], colb=[4,5,6,7])) df2 = pd.DataFrame(dict(cola=[1,2,4,5], colb=[9,7,8,9])) for col in df1.columns: diff = df1[col].compare(df2[col]) if diff.shape[0] >= 3: print(f'Found {diff.shape[0]} diffs in {col}') print(diff)
How to compare different dataframes by column?
I have two csv files with 200 columns each. The two files have the exact same numbers in rows and columns. I want to compare each columns separately. The idea would be to compare column 1 value of file "a" to column 1 value of file "b" and check the difference and so on for all the numbers in the column (there are 100 rows) and write out a number that in how many cases were the difference more than 3. I would like to repeat the same for all the columns. I know it should be a double for loop but idk exactly how. Probably 2 for loops but have no idea how to do that... Thanks in advance! import pandas as pd dk = pd.read_csv('C:/Users/D/1_top_a.csv', sep=',', header=None) dk = dk.dropna(how='all') dk = dk.dropna(how='all', axis=1) print(dk) dl = pd.read_csv('C:/Users/D/1_top_b.csv', sep=',', header=None) dl = dl.dropna(how='all') dl = dl.dropna(how='all', axis=1) print(dl) rows=dk.shape[0] print(rows) for i print(dk._get_value(0,0))
[ "df1 = pd.DataFrame(dict(cola=[1,2,3,4], colb=[4,5,6,7]))\ndf2 = pd.DataFrame(dict(cola=[1,2,4,5], colb=[9,7,8,9]))\n\nfor col in df1.columns:\n diff = df1[col].compare(df2[col])\n if diff.shape[0] >= 3:\n print(f'Found {diff.shape[0]} diffs in {col}')\n print(diff)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "for_loop", "pandas", "python" ]
stackoverflow_0074463001_dataframe_for_loop_pandas_python.txt
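Note that the answer above counts rows where the two frames differ at all; the question actually asks, for every column, how many rows differ by more than 3. A minimal vectorized sketch of that specific calculation, assuming dk and dl are the two numeric frames loaded in the question and have matching shapes and indexes:
counts = (dk - dl).abs().gt(3).sum()  # Series with one count of |difference| > 3 per column
print(counts)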
Q: I can't click on a button with Selenium It's my first time using Selenium and I am developing a project to download a meeting attendance report from Microsoft Teams.
The code works well and I can get to the screen where I need to make the download.
The screen, now I just need to click on "Baixar", but it doesn't work.
My code:
BAIXAR = (By.XPATH, '//*[@id="app"]/div/div/div/div/div[6]/div/div/div[1]/div[2]/button')

WebDriverWait(navegador, 60).until(EC.element_to_be_clickable(BAIXAR)).click()

A screenshot from the inspector
I need to click on the button "Baixar"
A: The wait statement with EC.element_to_be_clickable() will return you boolean (True/False). Hence you can't apply .click() with Boolean.
Instead use the same wait statement with EC.presence_of_element_located
objelement = WebDriverWait(navegador, 60).until(EC.presence_of_element_located(BAIXAR))
objelement.click()
I can't click on a button with Selenium
It's my first time using Selenium and I am developing a project to download a meeting attendance report from Microsoft Teams.
The code works well and I can get to the screen where I need to make the download.
The screen, now I just need to click on "Baixar", but it doesn't work.
My code:
BAIXAR = (By.XPATH, '//*[@id="app"]/div/div/div/div/div[6]/div/div/div[1]/div[2]/button')

WebDriverWait(navegador, 60).until(EC.element_to_be_clickable(BAIXAR)).click()

A screenshot from the inspector
I need to click on the button "Baixar"
[ "The wait statement with EC.element_to_be_clikable() will return you boolean (True/False). Hence you can't apply .click() with Boolean.\nInstead use the same wait statement with EC.presence_of_element_located\nobjelement=WebDriverWait(navegador60).until(EC.presence_of_element_located((BAIXAR))\nobjelement.click()\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074461302_python_selenium.txt
Q: Airflow DAG: How to insert data into a table using Python operator, not BigQuery operator? I am trying to insert some data into a table using a simple Python operator, not the BigQuery operator, but I am unsure how to implement this. I am trying to implement this in the form of an Airflow DAG. I have written a simple DAG, and I have managed to use the following to insert the data from a GCS Bucket to BigQuery, but I am wanting to do this using a Python operator instead, not BigQuery: load_csv = gcs_to_bq.GoogleCloudStorageToBigQueryOperator( task_id='gcs_to_bq_example', bucket='cloud-samples-data', source_objects=['bigquery/us-states/us-states.csv'], destination_project_dataset_table='airflow_test.gcs_to_bq_table', schema_fields=[ {'name': 'name', 'type': 'STRING', 'mode': 'NULLABLE'}, {'name': 'post_abbr', 'type': 'STRING', 'mode': 'NULLABLE'}, ], write_disposition='WRITE_TRUNCATE', dag=dag) I am wanting to achieve the above using a simple Python operator instead of BigQuery. BQ to GCS: BigQuery to GCS: # from google.cloud import bigquery # client = bigquery.Client() # bucket_name = 'my-bucket' project = "bigquery-public-data" dataset_id = "samples" table_id = "shakespeare" destination_uri = "gs://{}/{}".format(bucket_name, "shakespeare.csv") dataset_ref = bigquery.DatasetReference(project, dataset_id) table_ref = dataset_ref.table(table_id) extract_job = client.extract_table( table_ref, destination_uri, # Location must match that of the source table. location="US", ) # API request extract_job.result() # Waits for job to complete. print( "Exported {}:{}.{} to {}".format(project, dataset_id, table_id, destination_uri) ) A: You can use BigQuery Python client in a PythonOperator to insert GCS files to BigQuery, example : PythonOperator( task_id="gcs_to_bq", op_kwargs={ 'dataset': 'dataset', 'table': 'table' }, python_callable=load_gcs_files_to_bq ) def load_gcs_files_to_bq(dataset, table): from google.cloud import bigquery # Construct a BigQuery client object. client = bigquery.Client() # TODO(developer): Set table_id to the ID of the table to create. table_id = f"your-project.{dataset}.{table}" job_config = bigquery.LoadJobConfig( schema=[ bigquery.SchemaField("name", "STRING"), bigquery.SchemaField("post_abbr", "STRING"), ], skip_leading_rows=1, # The source format defaults to CSV, so the line below is optional. source_format=bigquery.SourceFormat.CSV, ) uri = "gs://cloud-samples-data/bigquery/us-states/us-states.csv" load_job = client.load_table_from_uri( uri, table_id, job_config=job_config ) # Make an API request. load_job.result() # Waits for the job to complete. destination_table = client.get_table(table_id) # Make an API request. print("Loaded {} rows.".format(destination_table.num_rows))
Airflow DAG: How to insert data into a table using Python operator, not BigQuery operator?
I am trying to insert some data into a table using a simple Python operator, not the BigQuery operator, but I am unsure how to implement this. I am trying to implement this in the form of an Airflow DAG. I have written a simple DAG, and I have managed to use the following to insert the data from a GCS Bucket to BigQuery, but I am wanting to do this using a Python operator instead, not BigQuery: load_csv = gcs_to_bq.GoogleCloudStorageToBigQueryOperator( task_id='gcs_to_bq_example', bucket='cloud-samples-data', source_objects=['bigquery/us-states/us-states.csv'], destination_project_dataset_table='airflow_test.gcs_to_bq_table', schema_fields=[ {'name': 'name', 'type': 'STRING', 'mode': 'NULLABLE'}, {'name': 'post_abbr', 'type': 'STRING', 'mode': 'NULLABLE'}, ], write_disposition='WRITE_TRUNCATE', dag=dag) I am wanting to achieve the above using a simple Python operator instead of BigQuery. BQ to GCS: BigQuery to GCS: # from google.cloud import bigquery # client = bigquery.Client() # bucket_name = 'my-bucket' project = "bigquery-public-data" dataset_id = "samples" table_id = "shakespeare" destination_uri = "gs://{}/{}".format(bucket_name, "shakespeare.csv") dataset_ref = bigquery.DatasetReference(project, dataset_id) table_ref = dataset_ref.table(table_id) extract_job = client.extract_table( table_ref, destination_uri, # Location must match that of the source table. location="US", ) # API request extract_job.result() # Waits for job to complete. print( "Exported {}:{}.{} to {}".format(project, dataset_id, table_id, destination_uri) )
[ "You can use BigQuery Python client in a PythonOperator to insert GCS files to BigQuery, example :\nPythonOperator(\n task_id=\"gcs_to_bq\",\n op_kwargs={\n 'dataset': 'dataset',\n 'table': 'table'\n },\n python_callable=load_gcs_files_to_bq\n)\n\ndef load_gcs_files_to_bq(dataset, table):\n from google.cloud import bigquery\n\n # Construct a BigQuery client object.\n client = bigquery.Client()\n\n # TODO(developer): Set table_id to the ID of the table to create.\n table_id = f\"your-project.{dataset}.{table}\"\n\n job_config = bigquery.LoadJobConfig(\n schema=[\n bigquery.SchemaField(\"name\", \"STRING\"),\n bigquery.SchemaField(\"post_abbr\", \"STRING\"),\n ],\n skip_leading_rows=1,\n # The source format defaults to CSV, so the line below is optional.\n source_format=bigquery.SourceFormat.CSV,\n )\n \n uri = \"gs://cloud-samples-data/bigquery/us-states/us-states.csv\"\n\n load_job = client.load_table_from_uri(\n uri, table_id, job_config=job_config\n ) # Make an API request.\n\n load_job.result() # Waits for the job to complete.\n\n destination_table = client.get_table(table_id) # Make an API request.\n print(\"Loaded {} rows.\".format(destination_table.num_rows))\n\n" ]
[ 0 ]
[]
[]
[ "airflow", "directed_acyclic_graphs", "google_cloud_storage", "python", "sql" ]
stackoverflow_0074462042_airflow_directed_acyclic_graphs_google_cloud_storage_python_sql.txt
Q: Create Columns in Dataframe Inside Loop With Filters Pyspark I want to create columns for each element in list "weeks" and have them be all in one dataframe. Dataframe "df" is filtered based on "weeknum" then the columns are created. At the time it runs but the end dataframe only contains information about the last "weeknum". How can I create the columns for all "weeknum" joined left? I tried this: weeks = [24, 25] for weeknum in weeks: df_new = df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot("share").agg(first('forecast_units')) \ .withColumnRenamed('0.01', 'units_1_share_wk'+str(weeknum))\ .withColumnRenamed('0.1', 'units_10_share_wk'+str(weeknum))\ .withColumnRenamed('0.15', 'units_15_share_wk'+str(weeknum))\ .withColumnRenamed('0.2', 'units_20_share_wk'+str(weeknum)) df_new.show() But this only returns the dataframe with the last "weeknum" in "weeks". The original dataframe "df" looks like this: |country|gender|order_date| pro|share| prediction|week|dayofweek|forecast_units| +-------+------+----------+------------+-------------+------------------+----+---------+-------------------+ | ES| Male|2022-09-15|Jeans - Flat| 0.01|13.322306632995605| 37| 5| 93.0| | ES| Male|2022-09-15|Jeans - Flat| 0.1| 19.09369468688965| 37| 5| 134.0| | ES| Male|2022-09-15|Jeans - Flat| 0.15|22.504554748535156| 37| 5| 158.0| I want the end dataframe to have the following structure: |gender|pro|units_1_tpr_wk24|units_10_tpr_wk24|units_15_tpr_wk24|units_20_tpr_wk24|units_1_tpr_wk25|units_10_tpr_wk25|units_15_tpr_wk25|units_20_tpr_wk25| Expected Output: |gender|pro|units_1_tpr_wk24|units_10_tpr_wk24|units_15_tpr_wk24|units_20_tpr_wk24|units_1_tpr_wk25|units_10_tpr_wk25|units_15_tpr_wk25|units_20_tpr_wk25| |---+---+---+---+---+---+---+---+---+---+| |Female|Belts|28.0|0.0|0.0|0.0|28.0|0.0|0.0|0.0| |Female|Dress|0.0|44.0|0.0|0.0|0.0|0.0|0.0|0.0| |Male|Belts|0.0|0.0|33.0|0.0|28.0|0.0|0.0|0.0| |Male|Suits|0.0|0.0|0.0|34.0|0.0|0.0|0.0|0.0| A: I would suggest generating all of the required columns first, and then passing it into a select function like this: from pyspark.sql.functions import col weeks = [24, 25] cols_to_select = [] for weeknum in weeks: cols_to_select.extend([ col('0.01').alias(f'units_1_share_wk{weeknum}'), col('0.1').alias(f'units_10_share_wk{weeknum}'), col('0.15').alias(f'units_15_share_wk{weeknum}'), col('0.2').alias(f'units_20_share_wk{weeknum}') ]) df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot("share").agg(first('forecast_units')).select([col("gender"), col("pro")] + cols_to_select)
Create Columns in Dataframe Inside Loop With Filters Pyspark
I want to create columns for each element in list "weeks" and have them be all in one dataframe. Dataframe "df" is filtered based on "weeknum" then the columns are created. At the time it runs but the end dataframe only contains information about the last "weeknum". How can I create the columns for all "weeknum" joined left? I tried this: weeks = [24, 25] for weeknum in weeks: df_new = df.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot("share").agg(first('forecast_units')) \ .withColumnRenamed('0.01', 'units_1_share_wk'+str(weeknum))\ .withColumnRenamed('0.1', 'units_10_share_wk'+str(weeknum))\ .withColumnRenamed('0.15', 'units_15_share_wk'+str(weeknum))\ .withColumnRenamed('0.2', 'units_20_share_wk'+str(weeknum)) df_new.show() But this only returns the dataframe with the last "weeknum" in "weeks". The original dataframe "df" looks like this: |country|gender|order_date| pro|share| prediction|week|dayofweek|forecast_units| +-------+------+----------+------------+-------------+------------------+----+---------+-------------------+ | ES| Male|2022-09-15|Jeans - Flat| 0.01|13.322306632995605| 37| 5| 93.0| | ES| Male|2022-09-15|Jeans - Flat| 0.1| 19.09369468688965| 37| 5| 134.0| | ES| Male|2022-09-15|Jeans - Flat| 0.15|22.504554748535156| 37| 5| 158.0| I want the end dataframe to have the following structure: |gender|pro|units_1_tpr_wk24|units_10_tpr_wk24|units_15_tpr_wk24|units_20_tpr_wk24|units_1_tpr_wk25|units_10_tpr_wk25|units_15_tpr_wk25|units_20_tpr_wk25| Expected Output: |gender|pro|units_1_tpr_wk24|units_10_tpr_wk24|units_15_tpr_wk24|units_20_tpr_wk24|units_1_tpr_wk25|units_10_tpr_wk25|units_15_tpr_wk25|units_20_tpr_wk25| |---+---+---+---+---+---+---+---+---+---+| |Female|Belts|28.0|0.0|0.0|0.0|28.0|0.0|0.0|0.0| |Female|Dress|0.0|44.0|0.0|0.0|0.0|0.0|0.0|0.0| |Male|Belts|0.0|0.0|33.0|0.0|28.0|0.0|0.0|0.0| |Male|Suits|0.0|0.0|0.0|34.0|0.0|0.0|0.0|0.0|
[ "I would suggest generating all of the required columns first, and then passing it into a select function like this:\nfrom pyspark.sql.functions import col\n\nweeks = [24, 25]\ncols_to_select = []\nfor weeknum in weeks:\n cols_to_select.extend([\n col('0.01').alias(f'units_1_share_wk{weeknum}'),\n col('0.1').alias(f'units_10_share_wk{weeknum}'),\n col('0.15').alias(f'units_15_share_wk{weeknum}'),\n col('0.2').alias(f'units_20_share_wk{weeknum}')\n ])\n\ndf.filter(df.week == weeknum).groupBy(['gender', 'pro']).pivot(\"share\").agg(first('forecast_units')).select([col(\"gender\"), col(\"pro\")] + cols_to_select)\n\n" ]
[ 0 ]
[]
[]
[ "databricks", "dataframe", "loops", "pyspark", "python" ]
stackoverflow_0074459610_databricks_dataframe_loops_pyspark_python.txt
Q: How to kick a user using slash commands Discord.py I'm trying to make my Discord bot kick a member, and send that "user banned because reason" to a specific channel and not the channel the command was used. The code I'm using: @bot.slash_command(description = "Kick someone", guild_ids=[1041057700823449682]) @commands.has_permissions(kick_members=True) @option("member",description = "Select member") @option("reason",description = "Reason for kick (you can leave this empty)") async def kick( ctx, member: discord.Member, channel: bot.get_channel(1042042492020863037), *, reason=None): if reason==None: reason="(no reason)" await ctx.guild.kick(member) await ctx.respond("Done :)") await ctx.channel.send(f'User {member.mention} was kicked because {reason}') When I try using this code I get a few errors: Traceback (most recent call last): File "c:\Users\fonti\Documents\Projetos Python\Bot do Discord\Iniciar Bot.py", line 152, in <module> async def kick( File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py", line 905, in decorator self.add_application_command(result) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py", line 127, in add_application_command command._set_cog(None) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 603, in _set_cog self.cog = cog File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 827, in cog self._validate_parameters() File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 705, in _validate_parameters self.options: list[Option] = self._parse_options(params) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 745, in _parse_options option = Option(option) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\options.py", line 210, in __init__ self.input_type = SlashCommandOptionType.from_datatype(input_type) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\enums.py", line 707, in from_datatype if datatype.__name__ in ["Member", "User"]: AttributeError: 'NoneType' object has no attribute '__name__'. Did you mean: '__ne__'? I was trying to send the message... (f'User {member.mention} was kicked because {reason}') to a specific channel. If I remove the channel condition, the bot works, but sends this message to the channel the command was used. A: I believe the cause of your error is your channel definition inside your kick command definition. Try removing the channel definition from your kick command definition and put it inside the function instead. The way I have it setup on my bot, other than the channel definition, is the same as yours and mine works perfectly A: To send it in the channel, instead of using ctx.channel.send, you can use ctx.send. I think that's where you're running into your error. 
Also here's how I tend to set up my kick command using slash commands so that my answer makes more sense: @nextcord.slash_command() # I use nextcord, a dpy fork so your setup is gonna be different @commands.has_permissions(whatever permissions you want) async def kickuser(self, ctx, member : nextcord.Member, *, reason='None'): # default reason is none so that it is optional in the slash command # side note for nextcord.Member, having it there makes it so that there's a drop down menu that functions the same way as if you were to @ someone in a message. This makes it easier to kick the right person embed = discord.Embed(description=f'You have been kicked from {ctx.guild} for reason: {reason}') embed = nextcord.Embed(description=f'{member} has been kicked for reason: {reason}') # setting up embed to send await ctx.send(embed=embed) # sends the embed in chat letting people know the person has been kicked await member.kick(reason=reason) # actually kicking the person, this comes after all the other steps so that we are able to mention and direct message them that they have been kicked Hope this helps A: This snip of code uses PyCord ( Confirmed to work by myself ) @discord.default_permissions(kick_members = True) async def kick(ctx, member : discord.Member, *, reason=None): await member.kick(reason=reason) await ctx.respond(f'{member.mention} has been kicked!') A: Just get a channel object with bot.get_channel() Then use the channels send() function to send your message. Also, : after arguments in function definitions are type hints, they are made for IDEs, if you want to assign a default value, you have to use = instead. (Look at your channel argument) EDIT You are using type hints in your code. Type hints are made for IDEs in first place, so they can show you mistakes in your code more easier. But you are „setting“ a value in it with the function, but this is for discord.py None, thats causing your error. Use : for showing which class an argument has to be. But use = for setting a default value, if this argument is not being passed. def function(type_hint: str, default_value = 0, mixed : int = 10): print(type_hint, default_value, mixed) Answer again if you need even further help ;-)
How to kick a user using slash commands Discord.py
I'm trying to make my Discord bot kick a member, and send that "user banned because reason" to a specific channel and not the channel the command was used. The code I'm using: @bot.slash_command(description = "Kick someone", guild_ids=[1041057700823449682]) @commands.has_permissions(kick_members=True) @option("member",description = "Select member") @option("reason",description = "Reason for kick (you can leave this empty)") async def kick( ctx, member: discord.Member, channel: bot.get_channel(1042042492020863037), *, reason=None): if reason==None: reason="(no reason)" await ctx.guild.kick(member) await ctx.respond("Done :)") await ctx.channel.send(f'User {member.mention} was kicked because {reason}') When I try using this code I get a few errors: Traceback (most recent call last): File "c:\Users\fonti\Documents\Projetos Python\Bot do Discord\Iniciar Bot.py", line 152, in <module> async def kick( File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py", line 905, in decorator self.add_application_command(result) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py", line 127, in add_application_command command._set_cog(None) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 603, in _set_cog self.cog = cog File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 827, in cog self._validate_parameters() File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 705, in _validate_parameters self.options: list[Option] = self._parse_options(params) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 745, in _parse_options option = Option(option) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\options.py", line 210, in __init__ self.input_type = SlashCommandOptionType.from_datatype(input_type) File "C:\Users\fonti\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\enums.py", line 707, in from_datatype if datatype.__name__ in ["Member", "User"]: AttributeError: 'NoneType' object has no attribute '__name__'. Did you mean: '__ne__'? I was trying to send the message... (f'User {member.mention} was kicked because {reason}') to a specific channel. If I remove the channel condition, the bot works, but sends this message to the channel the command was used.
[ "I believe the cause of your error is your channel definition inside your kick command definition. Try removing the channel definition from your kick command definition and put it inside the function instead. The way I have it setup on my bot, other than the channel definition, is the same as yours and mine works perfectly\n", "To send it in the channel, instead of using ctx.channel.send, you can use ctx.send. I think that's where you're running into your error. Also here's how I tend to set up my kick command using slash commands so that my answer makes more sense:\n@nextcord.slash_command() # I use nextcord, a dpy fork so your setup is gonna be different\n@commands.has_permissions(whatever permissions you want)\nasync def kickuser(self, ctx, member : nextcord.Member, *, reason='None'): # default reason is none so that it is optional in the slash command\n # side note for nextcord.Member, having it there makes it so that there's a drop down menu that functions the same way as if you were to @ someone in a message. This makes it easier to kick the right person \n embed = discord.Embed(description=f'You have been kicked from {ctx.guild} for reason: {reason}')\n \n embed = nextcord.Embed(description=f'{member} has been kicked for reason: {reason}') # setting up embed to send\n await ctx.send(embed=embed) # sends the embed in chat letting people know the person has been kicked\n await member.kick(reason=reason) # actually kicking the person, this comes after all the other steps so that we are able to mention and direct message them that they have been kicked\n\nHope this helps\n", "This snip of code uses PyCord ( Confirmed to work by myself )\n@discord.default_permissions(kick_members = True)\nasync def kick(ctx, member : discord.Member, *, reason=None):\n await member.kick(reason=reason)\n await ctx.respond(f'{member.mention} has been kicked!')\n\n", "Just get a channel object with bot.get_channel()\nThen use the channels send() function to send your message.\nAlso, : after arguments in function definitions are type hints, they are made for IDEs, if you want to assign a default value, you have to use = instead. (Look at your channel argument)\nEDIT\nYou are using type hints in your code. Type hints are made for IDEs in first place, so they can show you mistakes in your code more easier. But you are „setting“ a value in it with the function, but this is for discord.py None, thats causing your error. Use : for showing which class an argument has to be. But use = for setting a default value, if this argument is not being passed.\ndef function(type_hint: str, default_value = 0, mixed : int = 10):\n print(type_hint, default_value, mixed)\n\nAnswer again if you need even further help ;-)\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074447926_discord.py_python.txt
Q: How to propagate SIGTERM to children created via subprocess Given the following Python scripts: a.py: #!/usr/bin/env python3 # a.py import signal import subprocess import os def main(): print('Starting process {}'.format(os.getpid())) subprocess.check_call('./b.py') if __name__ == '__main__': main() b.py: #!/usr/bin/env python3 # b.py import signal import time import os def cleanup(signum, frame): print('Cleaning up...') raise RuntimeError("Error") def main(): print('Starting process {}'.format(os.getpid())) signal.signal(signal.SIGINT, cleanup) signal.signal(signal.SIGTERM, cleanup) while True: print('Hello') time.sleep(1) if __name__ == '__main__': main() If I execute a.py, and then later I kill it via kill -15 <pid_of_a_py>, it kills a.py, but b.py keeps running: $ ./a.py Starting process 119429 Starting process 119430 Hello Hello Hello Hello Hello Terminated // On a separate terminal, I ran "kill -15 119429" $ Hello Hello Hello Hello Why is that? How can I make sure that SIGTERM is propagated from a.py to b.py? Consider also a deeper chain a.py -> b.py -> c.py -> d.py ... Where I only want to explicitly setup error handling and cleanup for the innermost script. A: One solution is to explicitly throw SystemExit from a.py #!/usr/bin/env python3 # a.py import signal import subprocess import os def cleanup(signum, frame): raise SystemExit(signum) def main(): signal.signal(signal.SIGINT, cleanup) signal.signal(signal.SIGTERM, cleanup) print('Starting process {}'.format(os.getpid())) subprocess.check_call('./b.py') if __name__ == '__main__': main() Alternatively you can start the process with Popen and call Popen.send_signal to the child process when parent exits. EDIT: I've done some reading on the topic and it's an expected behaviour. kill -15 <pid> sends the signal to the specified process and only this one, the signal is not supposed to be propagated. However, you can send a signal to the process group which will kill all children as well. The syntax is kill -15 -<pgid> (note extra dash). The process group id is typically the same as the leader process id. A: There is a way to achieve the same using psutil import os import signal import psutil def kill_proc_tree(pid, sig=signal.SIGTERM, include_parent=True, timeout=None, on_terminate=None): """Kill a process tree (including grandchildren) with signal "sig" and return a (gone, still_alive) tuple. "on_terminate", if specified, is a callback function which is called as soon as a child terminates. """ assert pid != os.getpid(), "won't kill myself" parent = psutil.Process(pid) children = parent.children(recursive=True) if include_parent: children.append(parent) for p in children: try: p.send_signal(sig) except psutil.NoSuchProcess: pass gone, alive = psutil.wait_procs(children, timeout=timeout, callback=on_terminate) return (gone, alive) You might also want to implement such a logic: send SIGTERM to a list of processes give them some time to terminate send SIGKILL to those ones which are still alive import psutil def on_terminate(proc): print("process {} terminated with exit code {}".format(proc, proc.returncode)) procs = psutil.Process().children() for p in procs: p.terminate() gone, alive = psutil.wait_procs(procs, timeout=3, callback=on_terminate) for p in alive: p.kill()
How to propagate SIGTERM to children created via subprocess
Given the following Python scripts: a.py: #!/usr/bin/env python3 # a.py import signal import subprocess import os def main(): print('Starting process {}'.format(os.getpid())) subprocess.check_call('./b.py') if __name__ == '__main__': main() b.py: #!/usr/bin/env python3 # b.py import signal import time import os def cleanup(signum, frame): print('Cleaning up...') raise RuntimeError("Error") def main(): print('Starting process {}'.format(os.getpid())) signal.signal(signal.SIGINT, cleanup) signal.signal(signal.SIGTERM, cleanup) while True: print('Hello') time.sleep(1) if __name__ == '__main__': main() If I execute a.py, and then later I kill it via kill -15 <pid_of_a_py>, it kills a.py, but b.py keeps running: $ ./a.py Starting process 119429 Starting process 119430 Hello Hello Hello Hello Hello Terminated // On a separate terminal, I ran "kill -15 119429" $ Hello Hello Hello Hello Why is that? How can I make sure that SIGTERM is propagated from a.py to b.py? Consider also a deeper chain a.py -> b.py -> c.py -> d.py ... Where I only want to explicitly setup error handling and cleanup for the innermost script.
[ "One solution is to explicitly throw SystemExit from a.py\n#!/usr/bin/env python3\n# a.py\nimport signal\nimport subprocess\nimport os\n\n\ndef cleanup(signum, frame):\n raise SystemExit(signum)\n\ndef main():\n signal.signal(signal.SIGINT, cleanup)\n signal.signal(signal.SIGTERM, cleanup)\n print('Starting process {}'.format(os.getpid()))\n subprocess.check_call('./b.py')\n\nif __name__ == '__main__':\n main()\n\nAlternatively you can start the process with Popen and call Popen.send_signal to the child process when parent exits.\nEDIT:\nI've done some reading on the topic and it's an expected behaviour. kill -15 <pid> sends the signal to the specified process and only this one, the signal is not supposed to be propagated. However, you can send a signal to the process group which will kill all children as well. The syntax is kill -15 -<pgid> (note extra dash). The process group id is typically the same as the leader process id.\n", "There is a way to achieve the same using psutil\nimport os\nimport signal\nimport psutil\n\ndef kill_proc_tree(pid, sig=signal.SIGTERM, include_parent=True,\n timeout=None, on_terminate=None):\n \"\"\"Kill a process tree (including grandchildren) with signal\n \"sig\" and return a (gone, still_alive) tuple.\n \"on_terminate\", if specified, is a callback function which is\n called as soon as a child terminates.\n \"\"\"\n assert pid != os.getpid(), \"won't kill myself\"\n parent = psutil.Process(pid)\n children = parent.children(recursive=True)\n if include_parent:\n children.append(parent)\n for p in children:\n try:\n p.send_signal(sig)\n except psutil.NoSuchProcess:\n pass\n gone, alive = psutil.wait_procs(children, timeout=timeout,\n callback=on_terminate)\n return (gone, alive)\n\nYou might also want to implement such a logic:\n\nsend SIGTERM to a list of processes\ngive them some time to terminate\nsend SIGKILL to those ones which are still alive\n\nimport psutil\n\ndef on_terminate(proc):\n print(\"process {} terminated with exit code {}\".format(proc, proc.returncode))\n\nprocs = psutil.Process().children()\nfor p in procs:\n p.terminate()\ngone, alive = psutil.wait_procs(procs, timeout=3, callback=on_terminate)\nfor p in alive:\n p.kill()\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "sigterm", "subprocess" ]
stackoverflow_0067823770_python_sigterm_subprocess.txt
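A sketch of the process-group idea mentioned in the first answer's edit, applied inside a.py itself (POSIX only; the handler and variable names here are illustrative, not taken from the original scripts): start the child in its own session so it leads a new process group, then forward any SIGTERM/SIGINT received by the parent to that whole group.
import os
import signal
import subprocess

child = subprocess.Popen(['./b.py'], start_new_session=True)  # child becomes its own process-group leader

def forward(signum, frame):
    os.killpg(os.getpgid(child.pid), signum)  # propagate the signal to b.py (and any grandchildren)
    raise SystemExit(signum)

signal.signal(signal.SIGTERM, forward)
signal.signal(signal.SIGINT, forward)
child.wait()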
Q: Django password reset page loading issue I have implemented Forgot Password functionality in my web Blog. I have used Django and my gmail account as mailbox. I tried both the settings in my gmail account by enabling less secure apps/2 step authentications. Blog Flow: User Signup (by providing email id) Login (If Forgot Password get password reset link on email) Home Page Now the issue is, After giving email id for password reset I am not getting success page. The same web page keeps loading, neither receiving the email nor error. Following are the codes: settings.py MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' STATIC_URL = '/static/' LOGIN_REDIRECT_URL = '/home' LOGIN_URL = '/login' EMAIL_USE_TLS = True EMAIL_HOST = 'smtp.gmail.com' EMAIL_PORT = 587 EMAIL_HOST_USER = 'myemail@gmail.com' EMAIL_HOST_PASSWORD = 'mypassword' EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' urls.py from django.contrib import admin from django.urls import path, include from django.conf import settings from django.conf.urls.static import static from django.contrib.auth import views as auth_views urlpatterns = [ path('admin/', admin.site.urls), path('',include("users.urls")), path('login/',auth_views.LoginView.as_view(template_name = 'users/login.html'), name = 'login'), path('logout/',auth_views.LogoutView.as_view(template_name = 'users/logout.html'), name = 'logout'), path('password-reset/',auth_views.PasswordResetView.as_view(template_name = 'users/password_reset.html'), name = 'password_reset'), path('password-reset/done/',auth_views.PasswordResetDoneView.as_view(template_name = 'users/password_reset_done.html'), name = 'password_reset_done'), path('password-reset-confirm/<uidb64>/<token>/',auth_views.PasswordResetConfirmView.as_view(template_name = 'users/password_reset_confirm.html'), name = 'password_reset_confirm'), ] urlpatterns += [ ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) password_reset.html {% extends "users/base.html" %} {% block content %} <div class="content-section"> <form method="POST"> {% csrf_token %} <fieldset class="form-group"> <legend class="border-bottom mb-4">Reset Password</legend> {{ form }} </fieldset> <div class="form-group"> <button class="btn btn-outline-info" type="submit">Request Password Reset</button> </div> </form> </div> {% endblock content %} password_reset_done.html {% extends "users/base.html" %} {% block content %} <div class="alert alert-info"> An email has been sent with instructions to reset your password </div> {% endblock content %} password_reset_complete.html {% extends "users/base.html" %} {% block content %} <div class="alert alert-info"> Your password has been set. </div> <a href="{% url 'login' %}">Sign In Here</a> {% endblock content %} password_reset_confirm.html {% extends "users/base.html" %} {% load crispy_forms_tags %} {% block content %} <div class="content-section"> <form method="POST"> {% csrf_token %} <fieldset class="form-group"> <legend class="border-bottom mb-4">Reset Password</legend> {{ form|crispy }} </fieldset> <div class="form-group"> <button class="btn btn-outline-info" type="submit">Reset Password</button> </div> </form> </div> {% endblock content %} A: I had the same issue and solved it by changing the value of EMAIL_PORT in settings.py
Django password reset page loading issue
I have implemented Forgot Password functionality in my web Blog. I have used Django and my gmail account as mailbox. I tried both the settings in my gmail account by enabling less secure apps/2 step authentications. Blog Flow: User Signup (by providing email id) Login (If Forgot Password get password reset link on email) Home Page Now the issue is, After giving email id for password reset I am not getting success page. The same web page keeps loading, neither receiving the email nor error. Following are the codes: settings.py MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' STATIC_URL = '/static/' LOGIN_REDIRECT_URL = '/home' LOGIN_URL = '/login' EMAIL_USE_TLS = True EMAIL_HOST = 'smtp.gmail.com' EMAIL_PORT = 587 EMAIL_HOST_USER = 'myemail@gmail.com' EMAIL_HOST_PASSWORD = 'mypassword' EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' urls.py from django.contrib import admin from django.urls import path, include from django.conf import settings from django.conf.urls.static import static from django.contrib.auth import views as auth_views urlpatterns = [ path('admin/', admin.site.urls), path('',include("users.urls")), path('login/',auth_views.LoginView.as_view(template_name = 'users/login.html'), name = 'login'), path('logout/',auth_views.LogoutView.as_view(template_name = 'users/logout.html'), name = 'logout'), path('password-reset/',auth_views.PasswordResetView.as_view(template_name = 'users/password_reset.html'), name = 'password_reset'), path('password-reset/done/',auth_views.PasswordResetDoneView.as_view(template_name = 'users/password_reset_done.html'), name = 'password_reset_done'), path('password-reset-confirm/<uidb64>/<token>/',auth_views.PasswordResetConfirmView.as_view(template_name = 'users/password_reset_confirm.html'), name = 'password_reset_confirm'), ] urlpatterns += [ ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) password_reset.html {% extends "users/base.html" %} {% block content %} <div class="content-section"> <form method="POST"> {% csrf_token %} <fieldset class="form-group"> <legend class="border-bottom mb-4">Reset Password</legend> {{ form }} </fieldset> <div class="form-group"> <button class="btn btn-outline-info" type="submit">Request Password Reset</button> </div> </form> </div> {% endblock content %} password_reset_done.html {% extends "users/base.html" %} {% block content %} <div class="alert alert-info"> An email has been sent with instructions to reset your password </div> {% endblock content %} password_reset_complete.html {% extends "users/base.html" %} {% block content %} <div class="alert alert-info"> Your password has been set. </div> <a href="{% url 'login' %}">Sign In Here</a> {% endblock content %} password_reset_confirm.html {% extends "users/base.html" %} {% load crispy_forms_tags %} {% block content %} <div class="content-section"> <form method="POST"> {% csrf_token %} <fieldset class="form-group"> <legend class="border-bottom mb-4">Reset Password</legend> {{ form|crispy }} </fieldset> <div class="form-group"> <button class="btn btn-outline-info" type="submit">Reset Password</button> </div> </form> </div> {% endblock content %}
[ "I had the same issue and solved it by changing the value of EMAIL_PORT in settings.py\n" ]
[ 0 ]
[]
[]
[ "django", "passwords", "python", "reset" ]
stackoverflow_0060097493_django_passwords_python_reset.txt
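To make the short answer above concrete: Gmail's SMTP server is normally reached either on port 587 with STARTTLS or on port 465 with implicit SSL, and pairing the port with the wrong flag can leave the connection (and therefore the password-reset view) hanging. A minimal sketch of the two settings.py variants, with placeholder credentials assumed elsewhere in the file:
# Variant 1: STARTTLS
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_USE_SSL = False

# Variant 2: implicit SSL
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 465
EMAIL_USE_SSL = True
EMAIL_USE_TLS = False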
Q: Can not rename the table's column name using pandas dataframe I am new in jupyter notebook and python. Recently I'm working in this code but I can't find out the problem. I want to rename "Tesla Quarterly Revenue(Millions of US $)" and "Tesla Quarterly Revenue(Millions of US $).1" into "Data" and "Revenue" but it not changed. Here is my code: !pip install pandas !pip install requests !pip install bs4 !pip install -U yfinance pandas !pip install plotly !pip install html5lib !pip install lxml import yfinance as yf import pandas as pd import requests from bs4 import BeautifulSoup import plotly.graph_objects as go from plotly.subplots import make_subplots url = "https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0220ENSkillsNetwork23455606-2022-01-01" html_data = requests.get(url).text soup = BeautifulSoup(html_data, 'html5lib') tesla_revenue = pd.read_html(url, match = "Tesla Quarterly Revenue")[0] tesla_revenue = tesla_revenue.rename(columns={"Tesla Quarterly Revenue(Millions of US $)":"Date","Tesla Quarterly Revenue(Millions of US $).1":"Revenue"}) tesla_revenue.head() Here is the Output: A: Could not reproduce the issue, it works as expected. May print your originally .columns and compare the values to your dict - Not sure if the source is interpreted differnt by module versions: print(tesla_revenue.columns) Just in case an alternative: tesla_revenue.columns = ['Date','Revenue'] Example import pandas as pd url = "https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0220ENSkillsNetwork23455606-2022-01-01" tesla_revenue = pd.read_html(url, match = "Tesla Quarterly Revenue")[0] #tesla_revenue = tesla_revenue.rename(columns={"Tesla Quarterly Revenue(Millions of US $)":"Date","Tesla Quarterly Revenue(Millions of US $).1":"Revenue"}) tesla_revenue.columns = ['Date','Revenue'] tesla_revenue.head() Output Date Revenue 0 2022-09-30 $21,454 1 2022-06-30 $16,934 2 2022-03-31 $18,756 3 2021-12-31 $17,719 4 2021-09-30 $13,757 A: I could reproduce the error. You have a mistake in your column names. Instead of "Tesla Quarterly Revenue(Millions of US $)" it is "Tesla Quarterly Revenue (Millions of US $)" with a space between Revenue and the value in brackets. The same applies to the second column header. To avoid this you could also save the column names into a variable like this: soup = BeautifulSoup(html_data, 'html5lib') tesla_revenue = pd.read_html(url, match="Tesla Quarterly Revenue")[0] col1_name = tesla_revenue.columns[0] col2_name = tesla_revenue.columns[1] tesla_revenue = tesla_revenue.rename(columns={col1_name:"Date",col2_name:"Revenue"}) tesla_revenue.head() This makes the code also a bit more readable :)
Can not rename the table's column name using pandas dataframe
I am new in jupyter notebook and python. Recently I'm working in this code but I can't find out the problem. I want to rename "Tesla Quarterly Revenue(Millions of US $)" and "Tesla Quarterly Revenue(Millions of US $).1" into "Data" and "Revenue" but it not changed. Here is my code: !pip install pandas !pip install requests !pip install bs4 !pip install -U yfinance pandas !pip install plotly !pip install html5lib !pip install lxml import yfinance as yf import pandas as pd import requests from bs4 import BeautifulSoup import plotly.graph_objects as go from plotly.subplots import make_subplots url = "https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0220ENSkillsNetwork23455606-2022-01-01" html_data = requests.get(url).text soup = BeautifulSoup(html_data, 'html5lib') tesla_revenue = pd.read_html(url, match = "Tesla Quarterly Revenue")[0] tesla_revenue = tesla_revenue.rename(columns={"Tesla Quarterly Revenue(Millions of US $)":"Date","Tesla Quarterly Revenue(Millions of US $).1":"Revenue"}) tesla_revenue.head() Here is the Output:
[ "Could not reproduce the issue, it works as expected. May print your originally .columns and compare the values to your dict - Not sure if the source is interpreted differnt by module versions:\nprint(tesla_revenue.columns)\n\nJust in case an alternative:\ntesla_revenue.columns = ['Date','Revenue']\n\nExample\nimport pandas as pd\nurl = \"https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0220ENSkillsNetwork23455606-2022-01-01\"\ntesla_revenue = pd.read_html(url, match = \"Tesla Quarterly Revenue\")[0]\n#tesla_revenue = tesla_revenue.rename(columns={\"Tesla Quarterly Revenue(Millions of US $)\":\"Date\",\"Tesla Quarterly Revenue(Millions of US $).1\":\"Revenue\"})\ntesla_revenue.columns = ['Date','Revenue']\ntesla_revenue.head()\n\nOutput\n\n\n\n\n\nDate\nRevenue\n\n\n\n\n0\n2022-09-30\n$21,454\n\n\n1\n2022-06-30\n$16,934\n\n\n2\n2022-03-31\n$18,756\n\n\n3\n2021-12-31\n$17,719\n\n\n4\n2021-09-30\n$13,757\n\n\n\n", "I could reproduce the error. You have a mistake in your column names. Instead of \"Tesla Quarterly Revenue(Millions of US $)\" it is \"Tesla Quarterly Revenue (Millions of US $)\" with a space between Revenue and the value in brackets. The same applies to the second column header.\nTo avoid this you could also save the column names into a variable like this:\nsoup = BeautifulSoup(html_data, 'html5lib')\ntesla_revenue = pd.read_html(url, match=\"Tesla Quarterly Revenue\")[0]\ncol1_name = tesla_revenue.columns[0]\ncol2_name = tesla_revenue.columns[1]\ntesla_revenue = tesla_revenue.rename(columns={col1_name:\"Date\",col2_name:\"Revenue\"})\ntesla_revenue.head()\n\nThis makes the code also a bit more readable :)\n" ]
[ 0, 0 ]
[]
[]
[ "data_science", "jupyter", "jupyter_notebook", "python" ]
stackoverflow_0074462791_data_science_jupyter_jupyter_notebook_python.txt
Q: Python Fizzbuzz problems with loop I've searched for the answer for about an hour, and it seems most people have coded fizzbuzz a different way than myself. However, having tried everything to figure out why this simple code will not work I'm getting extremely frustrated. Can anyone point out the simple problem I'm sure I'm having? The code runs but it just returns the value 1. def fizzbuzz(intList): for n in intList: if n % 3 == 0 and n % 5 == 0: return n.replace(str(n),"Fizzbuzz") elif n % 3 == 0: return n.replace(str(n),"Fizz") elif n % 5 == 0: return n.replace(str(n),"Buzz") else: return n A: The first value it looks at is 1. Since 1%x is only 0 for an x of 1, it goes to the else and returns 1. And then it's done, because that's what return does. That leads to the bigger problem, which is that you are starting a loop and then guaranteeing that you will leave that loop after only one iteration, because there's a return in every branch. You'll need to replace those return statements with either append()s to a list (don't forget to return the resulting list) or print() calls. Also, if you started with something like 3, your code would try to use replace on an integer, which is not something you can do with integers. You would get a traceback. A: The code is returning 1 because consider this list [1,2,3,4,5,6,7,8,9,10]. All three conditions will get false and the last else will return 1. If you want the answer, append them into list. Something like this: def fizzbuzz(intList): temp = [] for n in intList: if n % 3 == 0 and n % 5 == 0: temp.append("Fizzbuzz") elif n % 3 == 0: temp.append("Fizz") elif n % 5 == 0: temp.append("Buzz") else: temp.append(n) return temp print fizzbuzz(range(1,20)) A: Perhaps if you take a look at this code you will better understand yours. Although this is a completely different implementation of fizzbuzz in Python3 #!/usr/bin/python3 for i in range(1,100): msg = "Fizz" * bool(i%3==0) msg += "Buzz" * bool(i%5==0) if not msg: msg = i print(msg) A: My skills in python are fairly average but i love using dicitonaries. Here is the Fizz Buzz program using dictionaries.Without an if. for data in range(1, 101): msg = [str((data % 3 == 0)), str((data % 5 == 0))] // msg returns a list with ['True' ,'False'] depending on the condition conveter = {"True False": "Fizz", "False True": "Buzz", "True True": "Fizz Buzz", "False False": data } val = conveter[" ".join(msg)] print(val) A: I just implemented FizzBuzz as for n in range(1, 100): if n%15==0: print "FizzBuzz" elif n%5==0: print "Buzz" elif n%3==0: print "Fizz" else: print n The best thing with it that It works It fits into a tweet, with a margin A: Years later, based on this... FizzBuzz: For integers up to and including 100, prints FizzBuzz if the integer is divisible by 3 and 5 (15); Fizz if it's divisible by 3 (and not 5); Buzz if it's divisible by 5 (and not 3); and the integer otherwise. def FizzBuzz(): for i in range(1,101): print { 3 : "Fizz", 5 : "Buzz", 15 : "FizzBuzz"}.get(15*(not i%15) or 5*(not i%5 ) or 3*(not i%3 ), '{}'.format(i)) The .get() method works wonders here. Operates as follows For all integers from 1 to 100 (101 is NOT included), print the value of the dictionary key that we call via get according to these rules. "Get the first non-False item in the get call, or return the integer as a string." When checking for a True value, thus a value we can lookup, Python evaluates 0 to False. If i mod 15 = 0, that's False, we would go to the next one. 
Therefore we NOT each of the 'mods' (aka remainder), so that if the mod == 0, which == False, we get a True statement. We multiply True by the dictionary key which returns the dictionary key (i.e. 3*True == 3) When the integer it not divisible by 3, 5 or 15, then we fall to the default clause of printing the int '{}'.format(i) just inserts i into that string - as a string. Some of the output Fizz 79 Buzz Fizz 82 83 Fizz Buzz 86 Fizz 88 89 FizzBuzz 91 92 Fizz 94 Buzz Fizz 97 98 Fizz Buzz A: how a python function should look like if we want to see the next result in the interactive mode of the python interpreter: >>> fizz(15) [ 1, 2, 'fizz', 4, 'buzz', 'fizz', 7, 8, 'fizz', 'buzz', 11, 'fizz', 13, 14, 'fizzbuzz' ] A: n = int(input()) out = [] for i in range(1, n): if i % 3 == 0 and i % 5 == 0: out.append("fizzbuzz") continue elif i % 3 == 0: out.append("fizz") continue elif i % 5 == 0: out.append("buzz") continue out.append(i) print(out) Answer as [1,2,'fizz'] A: x = int(input('Enter the number: ')) if x % 3 ==0 and x % 5 ==0: print('FizzBuzz') elif x % 3 ==0: print('Fizz') elif x % 5 ==0: print('Buzz') else: print(f'{x} ¯\_(ツ)_/¯')
Python Fizzbuzz problems with loop
I've searched for the answer for about an hour, and it seems most people have coded fizzbuzz a different way than myself. However, having tried everything to figure out why this simple code will not work I'm getting extremely frustrated. Can anyone point out the simple problem I'm sure I'm having? The code runs but it just returns the value 1. def fizzbuzz(intList): for n in intList: if n % 3 == 0 and n % 5 == 0: return n.replace(str(n),"Fizzbuzz") elif n % 3 == 0: return n.replace(str(n),"Fizz") elif n % 5 == 0: return n.replace(str(n),"Buzz") else: return n
[ "The first value it looks at is 1. Since 1%x is only 0 for an x of 1, it goes to the else and returns 1. And then it's done, because that's what return does.\nThat leads to the bigger problem, which is that you are starting a loop and then guaranteeing that you will leave that loop after only one iteration, because there's a return in every branch. You'll need to replace those return statements with either append()s to a list (don't forget to return the resulting list) or print() calls.\nAlso, if you started with something like 3, your code would try to use replace on an integer, which is not something you can do with integers. You would get a traceback.\n", "The code is returning 1 because consider this list [1,2,3,4,5,6,7,8,9,10]. All three conditions will get false and the last else will return 1. If you want the answer, append them into list.\nSomething like this:\ndef fizzbuzz(intList):\n temp = []\n for n in intList:\n if n % 3 == 0 and n % 5 == 0:\n temp.append(\"Fizzbuzz\")\n elif n % 3 == 0:\n temp.append(\"Fizz\")\n elif n % 5 == 0:\n temp.append(\"Buzz\")\n else:\n temp.append(n)\n return temp\n\n\nprint fizzbuzz(range(1,20))\n\n", "Perhaps if you take a look at this code you will\nbetter understand yours. Although this is a completely\ndifferent implementation of fizzbuzz in Python3\n#!/usr/bin/python3\n\nfor i in range(1,100):\n msg = \"Fizz\" * bool(i%3==0)\n msg += \"Buzz\" * bool(i%5==0)\n if not msg:\n msg = i\n print(msg)\n\n", "My skills in python are fairly average but i love using dicitonaries. Here is the Fizz Buzz program using dictionaries.Without an if.\nfor data in range(1, 101):\n msg = [str((data % 3 == 0)), str((data % 5 == 0))]\n // msg returns a list with ['True' ,'False'] depending on the condition\n conveter = {\"True False\": \"Fizz\",\n \"False True\": \"Buzz\",\n \"True True\": \"Fizz Buzz\",\n \"False False\": data\n }\n val = conveter[\" \".join(msg)]\n print(val)\n\n", "I just implemented FizzBuzz as\nfor n in range(1, 100):\n if n%15==0: print \"FizzBuzz\"\n elif n%5==0: print \"Buzz\"\n elif n%3==0: print \"Fizz\"\n else: print n\n\nThe best thing with it that\n\nIt works\nIt fits into a tweet, with a margin\n\n", "Years later, based on this...\n\nFizzBuzz: For integers up to and including 100, prints FizzBuzz if the integer is divisible by 3 and 5 (15); Fizz if it's divisible by 3 (and not 5); Buzz if it's divisible by 5 (and not 3); and the integer otherwise.\n\ndef FizzBuzz():\n for i in range(1,101):\n print {\n 3 : \"Fizz\",\n 5 : \"Buzz\",\n 15 : \"FizzBuzz\"}.get(15*(not i%15) or\n 5*(not i%5 ) or\n 3*(not i%3 ), '{}'.format(i))\n\nThe .get() method works wonders here.\nOperates as follows\nFor all integers from 1 to 100 (101 is NOT included),\nprint the value of the dictionary key that we call via get according to these rules. \n\"Get the first non-False item in the get call, or return the integer as a string.\" \nWhen checking for a True value, thus a value we can lookup, Python evaluates 0 to False. If i mod 15 = 0, that's False, we would go to the next one. \nTherefore we NOT each of the 'mods' (aka remainder), so that if the mod == 0, which == False, we get a True statement. We multiply True by the dictionary key which returns the dictionary key (i.e. 
3*True == 3)\nWhen the integer it not divisible by 3, 5 or 15, then we fall to the default clause of printing the int '{}'.format(i) just inserts i into that string - as a string.\nSome of the output\nFizz\n79\nBuzz\nFizz\n82\n83\nFizz\nBuzz\n86\nFizz\n88\n89\nFizzBuzz\n91\n92\nFizz\n94\nBuzz\nFizz\n97\n98\nFizz\nBuzz \n", "how a python function should look like if we want to see the next result in the interactive mode of the python interpreter:\n>>> fizz(15)\n[ 1, 2, 'fizz', 4, 'buzz', 'fizz', 7, 8, 'fizz', 'buzz', 11, 'fizz', 13, 14, 'fizzbuzz' ]\n\n", "n = int(input()) \nout = []\nfor i in range(1, n):\n if i % 3 == 0 and i % 5 == 0: \n out.append(\"fizzbuzz\") \n continue \n elif i % 3 == 0: \n out.append(\"fizz\") \n continue \n elif i % 5 == 0: \n out.append(\"buzz\")\n continue \n out.append(i) \n print(out) \n\nAnswer as [1,2,'fizz']\n", "x = int(input('Enter the number: '))\nif x % 3 ==0 and x % 5 ==0:\n print('FizzBuzz')\nelif x % 3 ==0:\n print('Fizz')\nelif x % 5 ==0:\n print('Buzz')\nelse:\n print(f'{x} ¯\\_(ツ)_/¯')\n\n" ]
[ 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "fizzbuzz", "loops", "python" ]
stackoverflow_0034101222_fizzbuzz_loops_python.txt
Q: Ordering matrices by column I Have this matrix: matrix: [['I' 'N' 'T' 'E' 'R' 'E' 'S' 'T' 'I' 'N' 'G'] ['D' 'G' 'F' 'F' 'G' 'D' 'A' 'A' 'D' 'V' 'A'] ['A' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ']] Want to order like this, without changing the position of the following rows in alphabethic order, only the first row: matrix [['E' 'E' 'G' 'I' 'I' 'N' 'N' 'R' 'S' 'T' 'T'] ['F' 'D' 'A' 'D' 'D' 'G' 'V' 'G' 'A' 'F' 'A'] [' ' ' ' ' ' 'A' ' ' ' ' ' ' ' ' ' ' ' ' ' ']] Is there any way i can do this with numpy? A: Assuming this input: array([['I', 'N', 'T', 'E', 'R', 'E', 'S', 'T', 'I', 'N', 'G'], ['D', 'G', 'F', 'F', 'G', 'D', 'A', 'A', 'D', 'V', 'A'], ['A', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']], dtype='<U1') Use indexing and np.argsort: out = matriz_senha[:, np.argsort(matriz_senha[0])] Output: array([['E', 'E', 'G', 'I', 'I', 'N', 'N', 'R', 'S', 'T', 'T'], ['F', 'D', 'A', 'D', 'D', 'G', 'V', 'G', 'A', 'F', 'A'], [' ', ' ', ' ', 'A', ' ', ' ', ' ', ' ', ' ', ' ', ' ']], dtype='<U1')
Ordering matrices by column
I Have this matrix: matrix: [['I' 'N' 'T' 'E' 'R' 'E' 'S' 'T' 'I' 'N' 'G'] ['D' 'G' 'F' 'F' 'G' 'D' 'A' 'A' 'D' 'V' 'A'] ['A' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ']] Want to order like this, without changing the position of the following rows in alphabethic order, only the first row: matrix [['E' 'E' 'G' 'I' 'I' 'N' 'N' 'R' 'S' 'T' 'T'] ['F' 'D' 'A' 'D' 'D' 'G' 'V' 'G' 'A' 'F' 'A'] [' ' ' ' ' ' 'A' ' ' ' ' ' ' ' ' ' ' ' ' ' ']] Is there any way i can do this with numpy?
[ "Assuming this input:\narray([['I', 'N', 'T', 'E', 'R', 'E', 'S', 'T', 'I', 'N', 'G'],\n ['D', 'G', 'F', 'F', 'G', 'D', 'A', 'A', 'D', 'V', 'A'],\n ['A', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']],\n dtype='<U1')\n\nUse indexing and np.argsort:\nout = matriz_senha[:, np.argsort(matriz_senha[0])]\n\nOutput:\narray([['E', 'E', 'G', 'I', 'I', 'N', 'N', 'R', 'S', 'T', 'T'],\n ['F', 'D', 'A', 'D', 'D', 'G', 'V', 'G', 'A', 'F', 'A'],\n [' ', ' ', ' ', 'A', ' ', ' ', ' ', ' ', ' ', ' ', ' ']],\n dtype='<U1')\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "matrix", "numpy", "python", "slice" ]
stackoverflow_0074462758_arrays_matrix_numpy_python_slice.txt
Q: Discord bot doesn't respond to interaction When I try kicking someone from an account that has no kick permissions, the bot says "the application did not respond". The code I'm using: @bot.slash_command(description = "Kickar alguém", guild_ids=[1041057700823449682]) @has_permissions(kick_members=True) @option("member",description = "Seleciona o membro") @option("reason",description = "O motivo do kick (podes deixar vazio)") async def kick( ctx, member: discord.Member, reason=None): if reason==None: reason="és idiota" await ctx.respond(f"Kickaste {member} com sucesso :)") await member.send(f'Foste kickado de {ctx.guild} porque {reason}.') await ctx.guild.kick(member) @kick.error async def kick_error(error, ctx): if isinstance(error, MissingPermissions): await ctx.respond("Desculpa {ctx.message.author}, não tens permissões para isso!") Kicking someone from an account with these permissions works well, but when I try from an account without perms, the bot doesn't respond with "Sorry user, you dont have perms". A: This is due to the fact that the ctx.respond method actually takes an argument if it's a slash command. What you have is very close, but the correct code would be as follows await ctx.respond(content="Hello world!") You simply need to specify the content (does not apply if it's an embed) A: I've fixed this by going into Discord > Server Configs > Integrations > The Bot, and then modifying who can use the commands. Also I removed @kick.error and the code with it.
Discord bot doesn't respond to interaction
When I try kicking someone from an account that has no kick permissions, the bot says "the application did not respond". The code I'm using: @bot.slash_command(description = "Kickar alguém", guild_ids=[1041057700823449682]) @has_permissions(kick_members=True) @option("member",description = "Seleciona o membro") @option("reason",description = "O motivo do kick (podes deixar vazio)") async def kick( ctx, member: discord.Member, reason=None): if reason==None: reason="és idiota" await ctx.respond(f"Kickaste {member} com sucesso :)") await member.send(f'Foste kickado de {ctx.guild} porque {reason}.') await ctx.guild.kick(member) @kick.error async def kick_error(error, ctx): if isinstance(error, MissingPermissions): await ctx.respond("Desculpa {ctx.message.author}, não tens permissões para isso!") Kicking someone from an account with these permissions works well, but when I try from an account without perms, the bot doesn't respond with "Sorry user, you dont have perms".
[ "This is due to the fact that the ctx.respond method actually takes an argument if it's a slash command. What you have is very close, but the correct code would be as follows\nawait ctx.respond(content=\"Hello world!\")\n\nYou simply need to specify the content (does not apply if it's an embed)\n", "I've fixed this by going into Discord > Server Configs > Integrations > The Bot, and then modifying who can use the commands. Also I removed @kick.error and the code with it.\n" ]
[ 0, 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074451820_discord_discord.py_python.txt
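A hedged sketch of the error handler from the question above, assuming a py-cord-style slash command: the callback conventionally receives (ctx, error) in that order, and the reply string needs an f prefix to interpolate the author. Both details are assumptions to verify against the installed library version; kick and MissingPermissions refer to the question's own code.

@kick.error
async def kick_error(ctx, error):   # (ctx, error), not (error, ctx)
    if isinstance(error, MissingPermissions):
        # f prefix so the author is actually interpolated into the reply
        await ctx.respond(f"Desculpa {ctx.author.mention}, não tens permissões para isso!")
    else:
        raise error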
Q: Why does PyMupdf Document show the error, no attribute 'new_page', when it is a PDF? I'm working on annotating a PDF and I want to change its color. I was guided to this helpful link: https://pymupdf.readthedocs.io/en/latest/faq.html#how-to-add-and-modify-annotations I used the code in the link: # -*- coding: utf-8 -*- """ ------------------------------------------------------------------------------- Demo script showing how annotations can be added to a PDF using PyMuPDF. It contains the following annotation types: Caret, Text, FreeText, text markers (underline, strike-out, highlight, squiggle), Circle, Square, Line, PolyLine, Polygon, FileAttachment, Stamp and Redaction. There is some effort to vary appearances by adding colors, line ends, opacity, rotation, dashed lines, etc. Dependencies ------------ PyMuPDF v1.17.0 ------------------------------------------------------------------------------- """ from __future__ import print_function import gc import sys import fitz print(fitz.__doc__) if fitz.VersionBind.split(".") < ["1", "17", "0"]: sys.exit("PyMuPDF v1.17.0+ is needed.") gc.set_debug(gc.DEBUG_UNCOLLECTABLE) highlight = "this text is highlighted" underline = "this text is underlined" strikeout = "this text is striked out" squiggled = "this text is zigzag-underlined" red = (1, 0, 0) blue = (0, 0, 1) gold = (1, 1, 0) green = (0, 1, 0) displ = fitz.Rect(0, 50, 0, 50) r = fitz.Rect(72, 72, 220, 100) t1 = u"têxt üsès Lätiñ charß,\nEUR: €, mu: µ, super scripts: ²³!" def print_descr(annot): """Print a short description to the right of each annot rect.""" annot.parent.insert_text( annot.rect.br + (10, -5), "%s annotation" % annot.type[1], color=red ) doc = fitz.open() page = doc.new_page() page.set_rotation(0) annot = page.add_caret_annot(r.tl) print_descr(annot) r = r + displ annot = page.add_freetext_annot( r, t1, fontsize=10, rotate=90, text_color=blue, fill_color=gold, align=fitz.TEXT_ALIGN_CENTER, ) annot.set_border(width=0.3, dashes=[2]) annot.update(text_color=blue, fill_color=gold) print_descr(annot) r = annot.rect + displ annot = page.add_text_annot(r.tl, t1) print_descr(annot) # Adding text marker annotations: # first insert a unique text, then search for it, then mark it pos = annot.rect.tl + displ.tl page.insert_text( pos, # insertion point highlight, # inserted text morph=(pos, fitz.Matrix(-5)), # rotate around insertion point ) rl = page.search_for(highlight, quads=True) # need a quad b/o tilted text annot = page.add_highlight_annot(rl[0]) print_descr(annot) pos = annot.rect.bl # next insertion point page.insert_text(pos, underline, morph=(pos, fitz.Matrix(-10))) rl = page.search_for(underline, quads=True) annot = page.add_underline_annot(rl[0]) print_descr(annot) pos = annot.rect.bl page.insert_text(pos, strikeout, morph=(pos, fitz.Matrix(-15))) rl = page.search_for(strikeout, quads=True) annot = page.add_strikeout_annot(rl[0]) print_descr(annot) pos = annot.rect.bl page.insert_text(pos, squiggled, morph=(pos, fitz.Matrix(-20))) rl = page.search_for(squiggled, quads=True) annot = page.add_squiggly_annot(rl[0]) print_descr(annot) pos = annot.rect.bl r = fitz.Rect(pos, pos.x + 75, pos.y + 35) + (0, 20, 0, 20) annot = page.add_polyline_annot([r.bl, r.tr, r.br, r.tl]) # 'Polyline' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=green) annot.set_line_ends(fitz.PDF_ANNOT_LE_CLOSED_ARROW, fitz.PDF_ANNOT_LE_R_CLOSED_ARROW) annot.update(fill_color=(1, 1, 0)) print_descr(annot) r += displ annot = page.add_polygon_annot([r.bl, r.tr, r.br, r.tl]) # 
'Polygon' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=gold) annot.set_line_ends(fitz.PDF_ANNOT_LE_DIAMOND, fitz.PDF_ANNOT_LE_CIRCLE) annot.update() print_descr(annot) r += displ annot = page.add_line_annot(r.tr, r.bl) # 'Line' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=gold) annot.set_line_ends(fitz.PDF_ANNOT_LE_DIAMOND, fitz.PDF_ANNOT_LE_CIRCLE) annot.update() print_descr(annot) r += displ annot = page.add_rect_annot(r) # 'Square' annot.set_border(width=1, dashes=[1, 2]) annot.set_colors(stroke=blue, fill=gold) annot.update(opacity=0.5) print_descr(annot) r += displ annot = page.add_circle_annot(r) # 'Circle' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=gold) annot.update() print_descr(annot) r += displ annot = page.add_file_annot( r.tl, b"just anything for testing", "testdata.txt" # 'FileAttachment' ) print_descr(annot) # annot.rect r += displ annot = page.add_stamp_annot(r, stamp=10) # 'Stamp' annot.set_colors(stroke=green) annot.update() print_descr(annot) r += displ + (0, 0, 50, 10) rc = page.insert_textbox( r, "This content will be removed upon applying the redaction.", color=blue, align=fitz.TEXT_ALIGN_CENTER, ) annot = page.add_redact_annot(r) print_descr(annot) doc.save(__file__.replace(".py", "-%i.pdf" % page.rotation), deflate=True) And I keep running into this error: AttributeError: 'Document' object has no attribute 'new_page' I've tried it on a few other PDFs and it does not seem to work, however, PYMUDF documentation https://pymupdf.readthedocs.io/en/latest/document.html#Document.new_page describes that it should have this attribute. How do I enable a new page to be inserted to remove this error? A: I had some issues with similar attributes and updated the latest version of the pymupdf library using: python -m pip install --upgrade pymupdf A: They seem to have named this as _newPage(). The documentation also notes a method called insert_page() which is also not present. Seems like the documentation is out of sync with the latest version.
Why does PyMuPDF Document show the error, no attribute 'new_page', when it is a PDF?
I'm working on annotating a PDF and I want to change its color. I was guided to this helpful link: https://pymupdf.readthedocs.io/en/latest/faq.html#how-to-add-and-modify-annotations I used the code in the link: # -*- coding: utf-8 -*- """ ------------------------------------------------------------------------------- Demo script showing how annotations can be added to a PDF using PyMuPDF. It contains the following annotation types: Caret, Text, FreeText, text markers (underline, strike-out, highlight, squiggle), Circle, Square, Line, PolyLine, Polygon, FileAttachment, Stamp and Redaction. There is some effort to vary appearances by adding colors, line ends, opacity, rotation, dashed lines, etc. Dependencies ------------ PyMuPDF v1.17.0 ------------------------------------------------------------------------------- """ from __future__ import print_function import gc import sys import fitz print(fitz.__doc__) if fitz.VersionBind.split(".") < ["1", "17", "0"]: sys.exit("PyMuPDF v1.17.0+ is needed.") gc.set_debug(gc.DEBUG_UNCOLLECTABLE) highlight = "this text is highlighted" underline = "this text is underlined" strikeout = "this text is striked out" squiggled = "this text is zigzag-underlined" red = (1, 0, 0) blue = (0, 0, 1) gold = (1, 1, 0) green = (0, 1, 0) displ = fitz.Rect(0, 50, 0, 50) r = fitz.Rect(72, 72, 220, 100) t1 = u"têxt üsès Lätiñ charß,\nEUR: €, mu: µ, super scripts: ²³!" def print_descr(annot): """Print a short description to the right of each annot rect.""" annot.parent.insert_text( annot.rect.br + (10, -5), "%s annotation" % annot.type[1], color=red ) doc = fitz.open() page = doc.new_page() page.set_rotation(0) annot = page.add_caret_annot(r.tl) print_descr(annot) r = r + displ annot = page.add_freetext_annot( r, t1, fontsize=10, rotate=90, text_color=blue, fill_color=gold, align=fitz.TEXT_ALIGN_CENTER, ) annot.set_border(width=0.3, dashes=[2]) annot.update(text_color=blue, fill_color=gold) print_descr(annot) r = annot.rect + displ annot = page.add_text_annot(r.tl, t1) print_descr(annot) # Adding text marker annotations: # first insert a unique text, then search for it, then mark it pos = annot.rect.tl + displ.tl page.insert_text( pos, # insertion point highlight, # inserted text morph=(pos, fitz.Matrix(-5)), # rotate around insertion point ) rl = page.search_for(highlight, quads=True) # need a quad b/o tilted text annot = page.add_highlight_annot(rl[0]) print_descr(annot) pos = annot.rect.bl # next insertion point page.insert_text(pos, underline, morph=(pos, fitz.Matrix(-10))) rl = page.search_for(underline, quads=True) annot = page.add_underline_annot(rl[0]) print_descr(annot) pos = annot.rect.bl page.insert_text(pos, strikeout, morph=(pos, fitz.Matrix(-15))) rl = page.search_for(strikeout, quads=True) annot = page.add_strikeout_annot(rl[0]) print_descr(annot) pos = annot.rect.bl page.insert_text(pos, squiggled, morph=(pos, fitz.Matrix(-20))) rl = page.search_for(squiggled, quads=True) annot = page.add_squiggly_annot(rl[0]) print_descr(annot) pos = annot.rect.bl r = fitz.Rect(pos, pos.x + 75, pos.y + 35) + (0, 20, 0, 20) annot = page.add_polyline_annot([r.bl, r.tr, r.br, r.tl]) # 'Polyline' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=green) annot.set_line_ends(fitz.PDF_ANNOT_LE_CLOSED_ARROW, fitz.PDF_ANNOT_LE_R_CLOSED_ARROW) annot.update(fill_color=(1, 1, 0)) print_descr(annot) r += displ annot = page.add_polygon_annot([r.bl, r.tr, r.br, r.tl]) # 'Polygon' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=gold) 
annot.set_line_ends(fitz.PDF_ANNOT_LE_DIAMOND, fitz.PDF_ANNOT_LE_CIRCLE) annot.update() print_descr(annot) r += displ annot = page.add_line_annot(r.tr, r.bl) # 'Line' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=gold) annot.set_line_ends(fitz.PDF_ANNOT_LE_DIAMOND, fitz.PDF_ANNOT_LE_CIRCLE) annot.update() print_descr(annot) r += displ annot = page.add_rect_annot(r) # 'Square' annot.set_border(width=1, dashes=[1, 2]) annot.set_colors(stroke=blue, fill=gold) annot.update(opacity=0.5) print_descr(annot) r += displ annot = page.add_circle_annot(r) # 'Circle' annot.set_border(width=0.3, dashes=[2]) annot.set_colors(stroke=blue, fill=gold) annot.update() print_descr(annot) r += displ annot = page.add_file_annot( r.tl, b"just anything for testing", "testdata.txt" # 'FileAttachment' ) print_descr(annot) # annot.rect r += displ annot = page.add_stamp_annot(r, stamp=10) # 'Stamp' annot.set_colors(stroke=green) annot.update() print_descr(annot) r += displ + (0, 0, 50, 10) rc = page.insert_textbox( r, "This content will be removed upon applying the redaction.", color=blue, align=fitz.TEXT_ALIGN_CENTER, ) annot = page.add_redact_annot(r) print_descr(annot) doc.save(__file__.replace(".py", "-%i.pdf" % page.rotation), deflate=True) And I keep running into this error: AttributeError: 'Document' object has no attribute 'new_page' I've tried it on a few other PDFs and it does not seem to work, however, PYMUDF documentation https://pymupdf.readthedocs.io/en/latest/document.html#Document.new_page describes that it should have this attribute. How do I enable a new page to be inserted to remove this error?
[ "I had some issues with similar attributes and updated the latest version of the pymupdf library using: python -m pip install --upgrade pymupdf\n", "They seem to have named this as _newPage(). The documentation also notes a method called insert_page() which is also not present. Seems like the documentation is out of sync with the latest version.\n" ]
[ 1, 0 ]
[]
[]
[ "annotations", "pymupdf", "python" ]
stackoverflow_0068197427_annotations_pymupdf_python.txt
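A minimal sketch that tolerates both PyMuPDF APIs discussed above, assuming only that older releases exposed camelCase names (newPage) while newer ones use snake_case (new_page); upgrading, as the first answer suggests, is still the cleaner fix.

import fitz  # PyMuPDF

print(fitz.__doc__)            # reports the installed PyMuPDF version

doc = fitz.open()              # new, empty PDF
if hasattr(doc, "new_page"):   # snake_case API (newer releases)
    page = doc.new_page()
else:                          # camelCase API (older releases)
    page = doc.newPage()

print(doc.page_count if hasattr(doc, "page_count") else doc.pageCount)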
Q: Explain __dict__ attribute I am really confused about the __dict__ attribute. I have searched a lot but still I am not sure about the output. Could someone explain the use of this attribute from zero, in cases when it is used in a object, a class, or a function? A: Basically it contains all the attributes which describe the object in question. It can be used to alter or read the attributes. Quoting from the documentation for __dict__ A dictionary or other mapping object used to store an object's (writable) attributes. Remember, everything is an object in Python. When I say everything, I mean everything like functions, classes, objects etc (Ya you read it right, classes. Classes are also objects). For example: def func(): pass func.temp = 1 print(func.__dict__) class TempClass: a = 1 def temp_function(self): pass print(TempClass.__dict__) will output {'temp': 1} {'__module__': '__main__', 'a': 1, 'temp_function': <function TempClass.temp_function at 0x10a3a2950>, '__dict__': <attribute '__dict__' of 'TempClass' objects>, '__weakref__': <attribute '__weakref__' of 'TempClass' objects>, '__doc__': None} A: __dict__ can get the instance variables (data attributes) in an object as a dictionary. So, if there is Person class below: class Person: x1 = "Hello" x2 = "World" def __init__(self, name, age): self.name = name self.age = age def test1(self): pass @classmethod def test2(cls): pass @staticmethod def test3(): pass obj = Person("John", 27) print(obj.__dict__) # Here __dict__ gets name and age with their values in a dictionary as shown below: {'name': 'John', 'age': 27} And, if the new instance variable gender is added after instanciation as shown below: # ... obj = Person("John", 27) obj.gender = "Male" # Here print(obj.__dict__) __dict__ gets name, age and gender with their values in a dictionary as shown below: {'name': 'John', 'age': 27, 'gender': 'Male'} In addition, if using dir() as shown below: # ... obj = Person("John", 27) obj.gender = "Male" print(dir(obj)) # Here We can get all in an object as a list as shown below: ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'age', 'gender', 'name', 'test1', 'test2', 'test3', 'x1', 'x2'] And, as far as I researched and as I asked this question, there are no functions to get only the static or special variables or normal, class, static or special methods in an object in Python. A: Python documentation defines __dict__ as: A dictionary or other mapping object used to store an object’s (writable) attributes. This definition is however a bit fuzzy, which leads to a lot of wrong usage of __dict__. Indeed, when you read this definition, can you tell what is a "writable" attribute and what it isn't? Examples Let's run a few examples showing how confusing and inaccurate it can be. class A: foo = 1 def __init__(self): self.bar = 2 @property def baz(self): return self._baz @bar.setter def baz(self, value): self._baz = value >>> a = A() >>> a.foo 1 >>> a.bar 2 Given the above class A and knowing __dict__'s definition, can you guess what would be the value of a.__dict__? Is foo a writable attribute of a? Is bar a writable attribute of a? Is baz a writable attribute of a? Is _baz a writable attribute of a? 
Here is the answer: >>> a.__dict__ {'bar': 2} Surprisingly, foo doesn't show up. Indeed, although accessible with a.foo, it is an attribute of the class A, not of the instance a. Now what happens if we define it explicitly as an attribute of a? >>> a.foo = 1 >>> a.__dict__ {'bar': 2, 'foo': 1} From our point of view, nothing really changed, a.foo is still equal to 1, but now it shows up in __dict__. Note that we can keep playing with it by deleting a.foo for instance: >>> del a.foo >>> a.__dict__ {'bar': 2} >>> a.foo 1 What happened here is that we deleted the instance attribute, and calling a.foo falls back again to A.foo. Let's have a look at baz now. We can assume that we can't find it in a.__dict__ because we didn't give it a value yet. >>> a.baz = 3 Alright, now we defined a.baz. So, is baz a writable attribute? What about _baz? >>> a.__dict__ {'bar': 2, '_baz': 3} From __dict__'s perspective, _baz is a writable attribute, but baz isn't. The explanation, again, is that baz is an attribute of the class A, not the instance a. >>> A.__dict__['baz'] <property at 0x7fb1e30c9590> a.baz is only an abstraction layer calling A.baz.fget(a) behind the scenes. Let's be even more sneaky with our dear friend and challenge its definition of "writable". class B: def __init__(self): self.foobar = 'baz' def __setattr__(self, name, value): if name == 'foobar' and 'foobar' in self.__dict__: # Allow the first definition of foobar # but prevent any subsequent redefinition raise TypeError("'foobar' is a read-only attribute") super().__setattr__(name, value) >>> b = B() >>> b.foobar 'baz' >>> b.foobar = 'bazbar' TypeError: 'foobar' is a read-only attribute >>> # From our developer's perspective, 'foobar' is not a writable attribute >>> # But __dict__ doesn't share this point of view >>> b.__dict__ {'foobar': 'baz'} Then what is __dict__ exactly? Thanks to the behavior noticed in the above examples, we can now have a better understanding of what __dict__ actually does. But we need to switch from the developer's to the computer's perspective: __dict__ contains the data stored in the program's memory for this specific object. That's it, __dict__ exposes what's actually stored in memory at our object's address. Python documentation on data model also defines it as the object's namespace: A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance’s class has an attribute by that name, the search continues with the class attributes. However, I believe thinking of __dict__ as the object's memory table gives a much better visualization of what's included in this namespace, and what's not. But! There is a catch... You thought we were done with __dict__? __dict__ is not the way to deal with the object's memory footprint, but a way. There is indeed another way: __slots__. I won't detail here how it works, there is already a very complete answer about __slots__ if you want to learn more about it. But the important thing for us is that, if slots are defined: class C: __slots__ = ('foo', 'bar') def __init__(self): self.foo = 1 self.bar = 2 >>> c = C() >>> c.foo 1 >>> c.bar 2 >>> c.__dict__ AttributeError: 'C' object has no attribute '__dict__' We can say "goodbye" to __dict__. So when should I use __dict__? As we saw, __dict__ must be seen from the computer's perspective, not from the seat of the developer. 
More often than not, what we consider "attributes" of our object is not directly connected to what's actually stored in memory. Especially with the use of properties or __getattr__ for instance, that add a level of abstraction for our comfort. Although the use of __dict__ to inspect the attributes of an object will work in most trivial cases, we can't rely 100% rely on it. Which is a shame for something used to write generic logic. The use case of __dict__ should probably be limited to inspecting an object's memory contents. Which is not so common. And keep in mind that __dict__ might not be defined at all (or lack some attributes actually stored in memory) when slots are defined. It can also be very useful in Python's console to quickly check a class' attributes and methods. Or an object's attributes (I know I just said we can't rely on it, but in the console who cares if it fails sometimes or if it's not accurate). Thanks but... how do I browse my object's attribute then? We saw that __dict__ is often misused and that we can't really rely on it to inspect an object's attributes. But then, what is the correct way to do it? Is there any way to browse an objects attributes from the developer's abstracted point of view? Yes. There are several ways to do introspection, and the correct way will depend on what you actually want to get. Instance attributes, class attributes, properties, private attributes, even methods, ... Technically, all these are attributes, and according to your situation, you might want to include some but exclude others. The context is also important. Maybe you are using a library that already exposes the attributes you want through their API. But in general, you can rely on the inspect module. class D: foo = 1 __slots = ('bar', '_baz') @property def baz(self): return self._baz @baz.setter def baz(self, value): self._baz = value def __init__(self): self.bar = 2 self.baz = 3 def pointless_method(self): pass >>> import inspect >>> dict((k, v) for k, v in inspect.getmembers(d) if k[0] != '_') {'bar': 2, 'baz': 3, 'foo': 1} >>> dict((k, getattr(d, k)) for k, v in inspect.getmembers(D) if inspect.isdatadescriptor(v) or inspect.isfunction(v)) { '__init__': <bound method D.__init__ of <__main__.D object at 0x7fb1e26a5b40>>, '_baz': 3, 'bar': 2, 'baz': 3, 'pointless_method': <bound method D.pointless_method of <__main__.D object at 0x7fb1e26a5b40>> }
Explain __dict__ attribute
I am really confused about the __dict__ attribute. I have searched a lot but still I am not sure about the output. Could someone explain the use of this attribute from scratch, in the cases where it is used on an object, a class, or a function?
[ "Basically it contains all the attributes which describe the object in question. It can be used to alter or read the attributes.\nQuoting from the documentation for __dict__\n\nA dictionary or other mapping object used to store an object's (writable) attributes.\n\nRemember, everything is an object in Python. When I say everything, I mean everything like functions, classes, objects etc (Ya you read it right, classes. Classes are also objects). For example:\ndef func():\n pass\n\nfunc.temp = 1\n\nprint(func.__dict__)\n\nclass TempClass:\n a = 1\n def temp_function(self):\n pass\n\nprint(TempClass.__dict__)\n\nwill output\n{'temp': 1}\n{'__module__': '__main__', \n 'a': 1, \n 'temp_function': <function TempClass.temp_function at 0x10a3a2950>, \n '__dict__': <attribute '__dict__' of 'TempClass' objects>, \n '__weakref__': <attribute '__weakref__' of 'TempClass' objects>, \n '__doc__': None}\n\n", "__dict__ can get the instance variables (data attributes) in an object as a dictionary.\nSo, if there is Person class below:\nclass Person:\n x1 = \"Hello\"\n x2 = \"World\"\n \n def __init__(self, name, age):\n self.name = name\n self.age = age\n \n def test1(self):\n pass\n \n @classmethod\n def test2(cls):\n pass\n \n @staticmethod\n def test3():\n pass\n\nobj = Person(\"John\", 27) \nprint(obj.__dict__) # Here\n\n__dict__ gets name and age with their values in a dictionary as shown below:\n{'name': 'John', 'age': 27}\n\nAnd, if the new instance variable gender is added after instanciation as shown below:\n# ...\n\nobj = Person(\"John\", 27)\nobj.gender = \"Male\" # Here\nprint(obj.__dict__)\n\n__dict__ gets name, age and gender with their values in a dictionary as shown below:\n{'name': 'John', 'age': 27, 'gender': 'Male'}\n\nIn addition, if using dir() as shown below:\n# ...\n\nobj = Person(\"John\", 27)\nobj.gender = \"Male\" \nprint(dir(obj)) # Here\n\nWe can get all in an object as a list as shown below:\n['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', \n'__format__', '__ge__', '__getattribute__', '__gt__', '__hash__',\n'__init__', '__init_subclass__', '__le__', '__lt__', '__module__', \n'__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', \n'__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', \n'age', 'gender', 'name', 'test1', 'test2', 'test3', 'x1', 'x2']\n\nAnd, as far as I researched and as I asked this question, there are no functions to get only the static or special variables or normal, class, static or special methods in an object in Python.\n", "Python documentation defines __dict__ as:\n\nA dictionary or other mapping object used to store an object’s (writable) attributes.\n\nThis definition is however a bit fuzzy, which leads to a lot of wrong usage of __dict__.\nIndeed, when you read this definition, can you tell what is a \"writable\" attribute and what it isn't?\n\nExamples\nLet's run a few examples showing how confusing and inaccurate it can be.\nclass A:\n\n foo = 1\n\n def __init__(self):\n self.bar = 2\n\n @property\n def baz(self):\n return self._baz\n\n @bar.setter\n def baz(self, value):\n self._baz = value\n\n>>> a = A()\n>>> a.foo\n1\n>>> a.bar\n2\n\nGiven the above class A and knowing __dict__'s definition, can you guess what would be the value of a.__dict__?\n\nIs foo a writable attribute of a?\nIs bar a writable attribute of a?\nIs baz a writable attribute of a?\nIs _baz a writable attribute of a?\n\nHere is the answer:\n>>> a.__dict__\n{'bar': 2}\n\nSurprisingly, foo doesn't show up. 
Indeed, although accessible with a.foo, it is an attribute of the class A, not of the instance a.\nNow what happens if we define it explicitly as an attribute of a?\n>>> a.foo = 1\n>>> a.__dict__\n{'bar': 2, 'foo': 1}\n\nFrom our point of view, nothing really changed, a.foo is still equal to 1, but now it shows up in __dict__. Note that we can keep playing with it by deleting a.foo for instance:\n>>> del a.foo\n>>> a.__dict__\n{'bar': 2}\n>>> a.foo\n1\n\nWhat happened here is that we deleted the instance attribute, and calling a.foo falls back again to A.foo.\nLet's have a look at baz now. We can assume that we can't find it in a.__dict__ because we didn't give it a value yet.\n>>> a.baz = 3\n\nAlright, now we defined a.baz. So, is baz a writable attribute? What about _baz?\n>>> a.__dict__\n{'bar': 2, '_baz': 3}\n\nFrom __dict__'s perspective, _baz is a writable attribute, but baz isn't. The explanation, again, is that baz is an attribute of the class A, not the instance a.\n>>> A.__dict__['baz']\n<property at 0x7fb1e30c9590>\n\na.baz is only an abstraction layer calling A.baz.fget(a) behind the scenes.\nLet's be even more sneaky with our dear friend and challenge its definition of \"writable\".\nclass B:\n\n def __init__(self):\n self.foobar = 'baz'\n\n def __setattr__(self, name, value):\n if name == 'foobar' and 'foobar' in self.__dict__:\n # Allow the first definition of foobar\n # but prevent any subsequent redefinition\n raise TypeError(\"'foobar' is a read-only attribute\")\n super().__setattr__(name, value)\n\n>>> b = B()\n>>> b.foobar\n'baz'\n>>> b.foobar = 'bazbar'\nTypeError: 'foobar' is a read-only attribute\n>>> # From our developer's perspective, 'foobar' is not a writable attribute\n>>> # But __dict__ doesn't share this point of view\n>>> b.__dict__\n{'foobar': 'baz'}\n\n\nThen what is __dict__ exactly?\nThanks to the behavior noticed in the above examples, we can now have a better understanding of what __dict__ actually does. But we need to switch from the developer's to the computer's perspective:\n__dict__ contains the data stored in the program's memory for this specific object.\nThat's it, __dict__ exposes what's actually stored in memory at our object's address.\nPython documentation on data model also defines it as the object's namespace:\n\nA class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance’s class has an attribute by that name, the search continues with the class attributes.\n\nHowever, I believe thinking of __dict__ as the object's memory table gives a much better visualization of what's included in this namespace, and what's not.\nBut! There is a catch...\n\nYou thought we were done with __dict__?\n__dict__ is not the way to deal with the object's memory footprint, but a way.\nThere is indeed another way: __slots__. I won't detail here how it works, there is already a very complete answer about __slots__ if you want to learn more about it. But the important thing for us is that, if slots are defined:\nclass C:\n __slots__ = ('foo', 'bar')\n\n def __init__(self):\n self.foo = 1\n self.bar = 2\n\n>>> c = C()\n>>> c.foo\n1\n>>> c.bar\n2\n>>> c.__dict__\nAttributeError: 'C' object has no attribute '__dict__'\n\nWe can say \"goodbye\" to __dict__.\n\nSo when should I use __dict__?\nAs we saw, __dict__ must be seen from the computer's perspective, not from the seat of the developer. 
More often than not, what we consider \"attributes\" of our object is not directly connected to what's actually stored in memory. Especially with the use of properties or __getattr__ for instance, that add a level of abstraction for our comfort.\nAlthough the use of __dict__ to inspect the attributes of an object will work in most trivial cases, we can't rely 100% rely on it. Which is a shame for something used to write generic logic.\nThe use case of __dict__ should probably be limited to inspecting an object's memory contents. Which is not so common. And keep in mind that __dict__ might not be defined at all (or lack some attributes actually stored in memory) when slots are defined.\nIt can also be very useful in Python's console to quickly check a class' attributes and methods. Or an object's attributes (I know I just said we can't rely on it, but in the console who cares if it fails sometimes or if it's not accurate).\n\nThanks but... how do I browse my object's attribute then?\nWe saw that __dict__ is often misused and that we can't really rely on it to inspect an object's attributes. But then, what is the correct way to do it? Is there any way to browse an objects attributes from the developer's abstracted point of view?\nYes. There are several ways to do introspection, and the correct way will depend on what you actually want to get. Instance attributes, class attributes, properties, private attributes, even methods, ... Technically, all these are attributes, and according to your situation, you might want to include some but exclude others. The context is also important. Maybe you are using a library that already exposes the attributes you want through their API.\nBut in general, you can rely on the inspect module.\nclass D:\n\n foo = 1\n __slots = ('bar', '_baz')\n\n @property\n def baz(self):\n return self._baz\n\n @baz.setter\n def baz(self, value):\n self._baz = value\n\n def __init__(self):\n self.bar = 2\n self.baz = 3\n\n def pointless_method(self):\n pass\n\n\n>>> import inspect\n>>> dict((k, v) for k, v in inspect.getmembers(d) if k[0] != '_')\n{'bar': 2, 'baz': 3, 'foo': 1}\n>>> dict((k, getattr(d, k)) for k, v in inspect.getmembers(D) if inspect.isdatadescriptor(v) or inspect.isfunction(v))\n{\n '__init__': <bound method D.__init__ of <__main__.D object at 0x7fb1e26a5b40>>,\n '_baz': 3,\n 'bar': 2,\n 'baz': 3,\n 'pointless_method': <bound method D.pointless_method of <__main__.D object at 0x7fb1e26a5b40>>\n}\n\n" ]
[ 126, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0019907442_python.txt
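A small runnable illustration of two points closely related to the answers above: vars(obj) is a built-in shortcut for obj.__dict__, and writing into that dictionary creates ordinary instance attributes.

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print(p.__dict__)                  # {'x': 1, 'y': 2}
print(vars(p) is p.__dict__)       # True: vars() returns the same dictionary

p.__dict__['z'] = 3                # writing to __dict__ adds an instance attribute
print(p.z)                         # 3
print('__init__' in Point.__dict__)  # True: the class has its own, separate __dict__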
Q: get files data of folder gdrive without api So I have a problem I just want to get files link from a drive folder but I find out that can only done by API of google drive but I don't want to use API for that. I was thinking I can do that with simple web scrapping but I found out it can not happen because drive use server to get link. In simple words I want to know how to get files data from folder of Google-drive without API. A: I think you should consult the Terms of service google does not allow web scraping. You should use the Google drive api to do this. If you want to get file data then this is the best way forward. Google even has serval samples to help you get started manage-downloads
get files data of folder gdrive without api
So I have a problem: I just want to get the file links from a Drive folder, but I found out that this can only be done through the Google Drive API, and I don't want to use the API for that. I was thinking I could do it with simple web scraping, but I found out that it does not work because Drive generates the links server-side. In simple words, I want to know how to get the file data from a Google Drive folder without the API.
[ "I think you should consult the Terms of service google does not allow web scraping.\nYou should use the Google drive api to do this. If you want to get file data then this is the best way forward.\nGoogle even has serval samples to help you get started manage-downloads\n" ]
[ 0 ]
[]
[]
[ "api", "google_drive_api", "python" ]
stackoverflow_0074463266_api_google_drive_api_python.txt
Q: How to change attribute based on boolean condition I am trying to alter point size based on whether its name exists in a list or not, I've tried many different ways but I keep generating this error. Code: graph = alt.Chart(df).mark_point( filled = False).encode( x=alt.X(axe_x), y=alt.Y(axe_y), size=alt.condition( (alt.datum.name) in (some_list), alt.value(150), alt.value(50)) ) Error: NotImplementedError: condition predicate of type <class 'bool'> How can I get around this? A: You can use a transform_lookup for this import altair as alt import pandas as pd from vega_datasets import data source = data.cars() # lookup table matching the string to corresonding size df2 = pd.DataFrame({ 'key': ['Europe', 'Japan', 'USA'], 's': [50, 50, 200] }) alt.Chart(source).mark_circle(opacity=0.5).transform_lookup( lookup='Origin', from_=alt.LookupData(df2, key='key', fields=['s']) ).encode( x='Horsepower', y='Miles_per_Gallon', color='Origin', size =alt.Size('s:N', title='Size') )
How to change attribute based on boolean condition
I am trying to alter point size based on whether its name exists in a list or not, I've tried many different ways but I keep generating this error. Code: graph = alt.Chart(df).mark_point( filled = False).encode( x=alt.X(axe_x), y=alt.Y(axe_y), size=alt.condition( (alt.datum.name) in (some_list), alt.value(150), alt.value(50)) ) Error: NotImplementedError: condition predicate of type <class 'bool'> How can I get around this?
[ "You can use a transform_lookup for this\nimport altair as alt\nimport pandas as pd\nfrom vega_datasets import data\nsource = data.cars()\n# lookup table matching the string to corresonding size\ndf2 = pd.DataFrame({\n 'key': ['Europe', 'Japan', 'USA'],\n 's': [50, 50, 200]\n})\n\nalt.Chart(source).mark_circle(opacity=0.5).transform_lookup(\n lookup='Origin',\n from_=alt.LookupData(df2, key='key', fields=['s']) \n).encode(\n x='Horsepower',\n y='Miles_per_Gallon',\n color='Origin',\n size =alt.Size('s:N', title='Size')\n)\n\n\n" ]
[ 1 ]
[]
[]
[ "altair", "python" ]
stackoverflow_0074456892_altair_python.txt
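The lookup answer above works; the membership test can also be written as a one-of predicate, which avoids the Python in operator that raises the NotImplementedError. The field name and list below are illustrative, not taken from the original question.

import altair as alt
from vega_datasets import data

source = data.cars()
some_list = ['Europe', 'Japan']   # illustrative stand-in for the question's list

chart = alt.Chart(source).mark_point(filled=False).encode(
    x='Horsepower',
    y='Miles_per_Gallon',
    size=alt.condition(
        alt.FieldOneOfPredicate(field='Origin', oneOf=some_list),
        alt.value(150),
        alt.value(50),
    ),
)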
Q: How could I compare the results of two csv files that only contains numbers? I have two csv files with 200 columns each. The two files have the exact same numbers in rows and columns. I want to compare each columns separately. The idea would be to compare column 1 value of file "a" to column 1 value of file "b" and check the difference and so on for all the numbers in the column (there are 100 rows) and write out a number that in how many cases were the difference more than 3. I would like to repeat the same for all the columns. import pandas as pd df=pd.read_csv('a.csv') de=pd.read_csv('b.csv') A: I came up with something and I hope it helps you: # file1.csv: # # 1;1;1 # 3;3;3 # 5;5;5 # 7;7;7 # # files2.csv: # # 2;2;2 # 4;3;4 # 6;5;6 # 8;8;8 import csv # change this to 200 for your file columns_num = 3 # a dictionary that will hold our columns and the number of differences diff = {} # {column_index: number_of_differences} with open("file1.csv", 'r') as file1, open("file2.csv", 'r') as file2: reader1, reader2 = csv.reader(file1), csv.reader(file2) # here we go line by line for line1, line2 in zip(reader1, reader2): # your delimiter may not be the same line1 = line1[0].split(";") # output : [1, 1, 1] line2 = line2[0].split(";") # here we go column by column for i in range(0, columns_num): if line1[i] != line2[i]: try: # if the column exist, we increment its value diff[i] += 1 except KeyError: # if the column doesn't exist, we add it diff[i] = 1 print(diff) # Output: # {0: 4, 1: 2, 2: 4} diff_new = {i: diff[i] for i in diff if diff[i] > 3} print(diff_new) # Output: # {0: 4, 2: 4} A: import pandas as pd import numpy as np file1 = pd.read_csv("file1.csv", header=None) file2 = pd.read_csv("file2.csv", header=None) diff_mask = file1 != file2 # count per column where difference is more than three diff_more_3 = np.sum(diff_mask, axis=0) > 3 # Get the number of columns where that is the case. print(sum(diff_more_3)) diff_more_3 contains a boolean flag for each column if there are three or more differences. If you just want the number you just sum them up.
How could I compare the results of two csv files that only contain numbers?
I have two csv files with 200 columns each. The two files have the exact same number of rows and columns. I want to compare each column separately. The idea would be to compare the column 1 values of file "a" to the column 1 values of file "b", check the difference for all the numbers in the column (there are 100 rows), and write out the number of cases in which the difference is more than 3. I would like to repeat the same for all the columns. import pandas as pd df=pd.read_csv('a.csv') de=pd.read_csv('b.csv')
[ "I came up with something and I hope it helps you:\n# file1.csv:\n#\n# 1;1;1\n# 3;3;3\n# 5;5;5\n# 7;7;7\n#\n# files2.csv:\n#\n# 2;2;2\n# 4;3;4\n# 6;5;6\n# 8;8;8\n\nimport csv\n\n# change this to 200 for your file\ncolumns_num = 3\n\n# a dictionary that will hold our columns and the number of differences\ndiff = {} # {column_index: number_of_differences}\n\nwith open(\"file1.csv\", 'r') as file1, open(\"file2.csv\", 'r') as file2:\n reader1, reader2 = csv.reader(file1), csv.reader(file2)\n\n # here we go line by line\n for line1, line2 in zip(reader1, reader2):\n # your delimiter may not be the same\n line1 = line1[0].split(\";\") # output : [1, 1, 1]\n line2 = line2[0].split(\";\")\n \n # here we go column by column\n for i in range(0, columns_num):\n if line1[i] != line2[i]:\n try:\n # if the column exist, we increment its value\n diff[i] += 1\n except KeyError:\n # if the column doesn't exist, we add it\n diff[i] = 1\n\nprint(diff)\n\n# Output:\n# {0: 4, 1: 2, 2: 4}\n\ndiff_new = {i: diff[i] for i in diff if diff[i] > 3}\nprint(diff_new)\n\n# Output:\n# {0: 4, 2: 4}\n\n", "import pandas as pd\nimport numpy as np\n\nfile1 = pd.read_csv(\"file1.csv\", header=None)\nfile2 = pd.read_csv(\"file2.csv\", header=None)\n\ndiff_mask = file1 != file2\n\n# count per column where difference is more than three\ndiff_more_3 = np.sum(diff_mask, axis=0) > 3\n\n# Get the number of columns where that is the case.\nprint(sum(diff_more_3))\n\ndiff_more_3 contains a boolean flag for each column if there are three or more differences. If you just want the number you just sum them up.\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074459330_python.txt
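Both answers above count plain inequalities, while the question asks for differences greater than 3; one reading of that requirement fits in a few pandas lines (file names and the threshold are the ones described in the post).

import pandas as pd

df_a = pd.read_csv('a.csv')
df_b = pd.read_csv('b.csv')

# For every column: in how many of the 100 rows does the absolute difference exceed 3?
counts = ((df_a - df_b).abs() > 3).sum(axis=0)
print(counts)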
Q: How does automatic differentiation with respect to the input work? I've been trying to understand how automatic differentiation (autodiff) works. There are several implementations of this that can be found in Tensorflow, PyTorch and other programs. There are three aspects of automatic differentiation that currently seem vague to me. The exact process used to calculate the gradients How autodiff works with respect to inputs How autodiff works with respect to a singular value as input So far, it seems to roughly follow the following steps: Break up original function into elementary operations (individual arithmetic operations, composition and function calls). The elementary operations are combined to form a computational graph in such a way that the original function can be calculated using the computational graph. The computational graph is executed for a certain input, and each operation is recorded Walking through the recorded operations in reverse using the chain rule gives us the gradient. First of all, is this a correct overview of the steps that are taken in automatic differentiation? Secondly, how would the above process work for a derivative with respect to the inputs. For instance, a function would need a difference in the x value. Does that mean that the derivative can only be calculated after at least two different x values have been provided as the input? Or does it require multiple inputs at once (i.e. vector input) over which it can calculate a difference? And how does this compare when we calculate the gradient with respect to the model weights (i.e. as done in backpropagation). Thirdly, how can we take the derivative of a singular value. Take, for instance, the following Python code where the derivative of is calculated: x = tf.constant(3.0) with tf.GradientTape() as tape:   tape.watch(x)   y = x**2 # dy = 2x * dx dy_dx = tape.gradient(y, x) print(dy_dx.numpy()) # prints: '6.0' Since dx is the difference between several x inputs, would that not mean that dx = 0? I found that this paper had a pretty good overview of the various modes of autodiff. As well as the differences as compared to numerical and symbolic differentiation. However, it did not bring a full understanding and I would still like to understand the autodiff process in context of these traditional differentiation techniques. Rather than applying it practically, I would love to get a more theoretical understanding. A: I think what you need to understand first is what is a derivative, many math textbooks could help you with that. The notation dx means an infinitesimal variation, so you not actually compute any difference, but do a symbolic operation on your function f that transforms it to a function f' also noted df/dx, which you then apply at any point where it is defined. Regarding the algorithm used for automatic differentiation, you understood it right, the part that you seem to be missing is how the derivatives of elementary operations are computed and what do they mean, but it would be hard to do a crash course about that in a SO answer.
How does automatic differentiation with respect to the input work?
I've been trying to understand how automatic differentiation (autodiff) works. There are several implementations of this that can be found in Tensorflow, PyTorch and other programs. There are three aspects of automatic differentiation that currently seem vague to me. The exact process used to calculate the gradients How autodiff works with respect to inputs How autodiff works with respect to a singular value as input So far, it seems to roughly follow the following steps: Break up original function into elementary operations (individual arithmetic operations, composition and function calls). The elementary operations are combined to form a computational graph in such a way that the original function can be calculated using the computational graph. The computational graph is executed for a certain input, and each operation is recorded Walking through the recorded operations in reverse using the chain rule gives us the gradient. First of all, is this a correct overview of the steps that are taken in automatic differentiation? Secondly, how would the above process work for a derivative with respect to the inputs. For instance, a function would need a difference in the x value. Does that mean that the derivative can only be calculated after at least two different x values have been provided as the input? Or does it require multiple inputs at once (i.e. vector input) over which it can calculate a difference? And how does this compare when we calculate the gradient with respect to the model weights (i.e. as done in backpropagation). Thirdly, how can we take the derivative of a singular value. Take, for instance, the following Python code where the derivative of is calculated: x = tf.constant(3.0) with tf.GradientTape() as tape:   tape.watch(x)   y = x**2 # dy = 2x * dx dy_dx = tape.gradient(y, x) print(dy_dx.numpy()) # prints: '6.0' Since dx is the difference between several x inputs, would that not mean that dx = 0? I found that this paper had a pretty good overview of the various modes of autodiff. As well as the differences as compared to numerical and symbolic differentiation. However, it did not bring a full understanding and I would still like to understand the autodiff process in context of these traditional differentiation techniques. Rather than applying it practically, I would love to get a more theoretical understanding.
[ "I think what you need to understand first is what is a derivative, many math textbooks could help you with that. The notation dx means an infinitesimal variation, so you not actually compute any difference, but do a symbolic operation on your function f that transforms it to a function f' also noted df/dx, which you then apply at any point where it is defined.\nRegarding the algorithm used for automatic differentiation, you understood it right, the part that you seem to be missing is how the derivatives of elementary operations are computed and what do they mean, but it would be hard to do a crash course about that in a SO answer.\n" ]
[ 0 ]
[]
[]
[ "autograd", "automatic_differentiation", "differentiation", "math", "python" ]
stackoverflow_0074460500_autograd_automatic_differentiation_differentiation_math_python.txt
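The point in the answer above, that dx is a symbolic seed rather than a numerical difference, can be made concrete with forward-mode dual numbers. This is only an illustrative sketch of the idea; TensorFlow's GradientTape actually uses reverse mode over a recorded graph.

class Dual:
    """Number carrying a value and its derivative (a 'dual number')."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u * v)' = u' * v + u * v'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

x = Dual(3.0, deriv=1.0)   # seed dx = 1 at the single point x = 3; no second input needed
y = x * x                  # y = x**2
print(y.value, y.deriv)    # 9.0 6.0, matching the GradientTape example in the question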
Q: pySpark Replacing Null Value on subsets of rows I have a pySpark dataframe, where I have null values that I want to replace - however the value to replace with is different for different groups. My data looks like this (appologies, I dont have a way to past it as text): For group A I want to replace the null values with -999; while for group B, I want to replace the null value with 0. Currently, I split the data into sections, then do a df = df.fillna(-999) . Is there a more efficient way of doing it? in psudo-code I was thinking something along the line of df = df.where(col('group') == A).fillna(lit(-999)).where(col('group') == B).fillna(lit(0)) but ofcourse, this doesn't work. A: You can use when: from pyspark.sql import functions as F # Loop over all the columns you want to fill for col in ('Col1', 'Col2', 'Col3'): # compute here conditions to fill using a value or another fill_a = F.col(col).isNull() & (F.col('Group') == 'A') fill_b = F.col(col).isNull() & (F.col('Group') == 'B') # Fill the column based on the different conditions # using nested `when` - `otherwise`. # # Do not forget to add the last `otherwise` with the original # values if none of the previous conditions have been met filled_col = ( F.when(fill_a, -999) .otherwise( F.when(fill_b, 0) .otherwise(F.col(col)) ) ) # 'overwrite' the original column with the filled column df = df.withColumn(col, filled_col) A: Another possible option is to use coalesce for each column with a "filler" column holding the replacement values: import pyspark.sql.functions as F for c in ['Col1', 'Col2', 'Col3']: df = df.withColumn(c, F.coalesce(c, F.when(F.col('group') == 'A', -999) .when(F.col('group') == 'B', 0)))
pySpark Replacing Null Value on subsets of rows
I have a pySpark dataframe with null values that I want to replace - however, the value to replace with is different for different groups. My data looks like this (apologies, I don't have a way to paste it as text): For group A I want to replace the null values with -999, while for group B I want to replace the null values with 0. Currently, I split the data into sections and then do a df = df.fillna(-999). Is there a more efficient way of doing it? In pseudo-code I was thinking of something along the lines of df = df.where(col('group') == A).fillna(lit(-999)).where(col('group') == B).fillna(lit(0)) but of course, this doesn't work.
[ "You can use when:\nfrom pyspark.sql import functions as F\n\n# Loop over all the columns you want to fill\nfor col in ('Col1', 'Col2', 'Col3'):\n # compute here conditions to fill using a value or another\n fill_a = F.col(col).isNull() & (F.col('Group') == 'A')\n fill_b = F.col(col).isNull() & (F.col('Group') == 'B')\n\n # Fill the column based on the different conditions \n # using nested `when` - `otherwise`.\n #\n # Do not forget to add the last `otherwise` with the original \n # values if none of the previous conditions have been met\n filled_col = (\n F.when(fill_a, -999)\n .otherwise(\n F.when(fill_b, 0)\n .otherwise(F.col(col))\n )\n )\n\n # 'overwrite' the original column with the filled column\n df = df.withColumn(col, filled_col)\n\n", "Another possible option is to use coalesce for each column with a \"filler\" column holding the replacement values:\nimport pyspark.sql.functions as F\n\nfor c in ['Col1', 'Col2', 'Col3']:\n df = df.withColumn(c, F.coalesce(c, F.when(F.col('group') == 'A', -999)\n .when(F.col('group') == 'B', 0)))\n\n" ]
[ 1, 0 ]
[]
[]
[ "null", "pyspark", "python" ]
stackoverflow_0074456021_null_pyspark_python.txt
Q: How to split multi-dimensional arrays based on the unique indices of another array? I have two torch tensors a and b: import torch torch.manual_seed(0) # for reproducibility a = torch.rand(size = (5, 10, 1)) b = torch.tensor([3, 3, 1, 5, 3, 1, 0, 2, 1, 2]) I want to split the 2nd dimension of a (which is dim = 1 in the Python language) based on the unique values in b. What I have tried so far: # find the unique values and unique indices of b unique_values, unique_indices = torch.unique(b, return_inverse = True) # split a in where dim = 1, based on unique indices l = torch.tensor_split(a, unique_indices, dim = 1) I was expecting l to be a list of n number of tensors where n is the number of unique values in b. I was also expecting the tensors to have the shape (5, number of elements corresponding to unique_values, 1). However, I get the following: print(l) (tensor([[[0.8198], [0.9971], [0.6984]], [[0.7262], [0.7011], [0.2038]], [[0.1147], [0.3168], [0.6965]], [[0.0340], [0.9442], [0.8802]], [[0.6833], [0.7529], [0.8579]]]), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([[[0.9971], [0.6984], [0.5675]], [[0.7011], [0.2038], [0.6511]], [[0.3168], [0.6965], [0.9143]], [[0.9442], [0.8802], [0.0012]], [[0.7529], [0.8579], [0.6870]]]), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([[[0.8198], [0.9971]], [[0.7262], [0.7011]], [[0.1147], [0.3168]], [[0.0340], [0.9442]], [[0.6833], [0.7529]]]), tensor([], size=(5, 0, 1)), tensor([[[0.9971]], [[0.7011]], [[0.3168]], [[0.9442]], [[0.7529]]]), tensor([[[0.6984], [0.5675], [0.8352], [0.2056], [0.5932], [0.1123], [0.1535], [0.2417]], [[0.2038], [0.6511], [0.7745], [0.4369], [0.5191], [0.6159], [0.8102], [0.9801]], [[0.6965], [0.9143], [0.9351], [0.9412], [0.5995], [0.0652], [0.5460], [0.1872]], [[0.8802], [0.0012], [0.5936], [0.4158], [0.4177], [0.2711], [0.6923], [0.2038]], [[0.8579], [0.6870], [0.0051], [0.1757], [0.7497], [0.6047], [0.1100], [0.2121]]])) Why do I get empty tensors like tensor([], size=(5, 0, 1)) and how would I achieve what I want to achieve? A: From your description of the desired result: I was also expecting the tensors to have the shape (5, number of elements corresponding to unique_values, 1). I believe you are looking for the count (or frequency) of unique values. If you want to keep using torch.unique, then you can provide the return_counts argument combined with a call to torch.cumsum. Something like this should work: >>> indices = torch.cumsum(counts, dim=0) >>> splits = torch.tensor_split(a, indices[:-1], dim = 1) Let's have a look: >>> for x in splits: ... print(x.shape) torch.Size([5, 1, 1]) torch.Size([5, 3, 1]) torch.Size([5, 2, 1]) torch.Size([5, 3, 1]) torch.Size([5, 1, 1]) A: Are you looking for the index_select method? You have correclty obtained your unique values in unique_values. Now what you need to do is: l = a.index_select(1, unique_values)
How to split multi-dimensional arrays based on the unique indices of another array?
I have two torch tensors a and b: import torch torch.manual_seed(0) # for reproducibility a = torch.rand(size = (5, 10, 1)) b = torch.tensor([3, 3, 1, 5, 3, 1, 0, 2, 1, 2]) I want to split the 2nd dimension of a (which is dim = 1 in the Python language) based on the unique values in b. What I have tried so far: # find the unique values and unique indices of b unique_values, unique_indices = torch.unique(b, return_inverse = True) # split a in where dim = 1, based on unique indices l = torch.tensor_split(a, unique_indices, dim = 1) I was expecting l to be a list of n number of tensors where n is the number of unique values in b. I was also expecting the tensors to have the shape (5, number of elements corresponding to unique_values, 1). However, I get the following: print(l) (tensor([[[0.8198], [0.9971], [0.6984]], [[0.7262], [0.7011], [0.2038]], [[0.1147], [0.3168], [0.6965]], [[0.0340], [0.9442], [0.8802]], [[0.6833], [0.7529], [0.8579]]]), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([[[0.9971], [0.6984], [0.5675]], [[0.7011], [0.2038], [0.6511]], [[0.3168], [0.6965], [0.9143]], [[0.9442], [0.8802], [0.0012]], [[0.7529], [0.8579], [0.6870]]]), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([[[0.8198], [0.9971]], [[0.7262], [0.7011]], [[0.1147], [0.3168]], [[0.0340], [0.9442]], [[0.6833], [0.7529]]]), tensor([], size=(5, 0, 1)), tensor([[[0.9971]], [[0.7011]], [[0.3168]], [[0.9442]], [[0.7529]]]), tensor([[[0.6984], [0.5675], [0.8352], [0.2056], [0.5932], [0.1123], [0.1535], [0.2417]], [[0.2038], [0.6511], [0.7745], [0.4369], [0.5191], [0.6159], [0.8102], [0.9801]], [[0.6965], [0.9143], [0.9351], [0.9412], [0.5995], [0.0652], [0.5460], [0.1872]], [[0.8802], [0.0012], [0.5936], [0.4158], [0.4177], [0.2711], [0.6923], [0.2038]], [[0.8579], [0.6870], [0.0051], [0.1757], [0.7497], [0.6047], [0.1100], [0.2121]]])) Why do I get empty tensors like tensor([], size=(5, 0, 1)) and how would I achieve what I want to achieve?
[ "From your description of the desired result:\n\nI was also expecting the tensors to have the shape (5, number of elements corresponding to unique_values, 1).\n\nI believe you are looking for the count (or frequency) of unique values. If you want to keep using torch.unique, then you can provide the return_counts argument combined with a call to torch.cumsum.\nSomething like this should work:\n>>> indices = torch.cumsum(counts, dim=0)\n>>> splits = torch.tensor_split(a, indices[:-1], dim = 1)\n\nLet's have a look:\n>>> for x in splits:\n... print(x.shape)\ntorch.Size([5, 1, 1])\ntorch.Size([5, 3, 1])\ntorch.Size([5, 2, 1])\ntorch.Size([5, 3, 1])\ntorch.Size([5, 1, 1])\n\n", "Are you looking for the index_select method?\nYou have correclty obtained your unique values in unique_values.\nNow what you need to do is:\nl = a.index_select(1, unique_values)\n\n" ]
[ 2, 1 ]
[]
[]
[ "python", "pytorch", "tensor" ]
stackoverflow_0074462683_python_pytorch_tensor.txt
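The first answer above uses counts without showing where it comes from; presumably it is the return_counts output of torch.unique. A sketch of the full pipeline follows, with an extra argsort (an addition beyond the quoted answer) so that the chunks line up with the groups of b, since the posted b is not grouped.

import torch

torch.manual_seed(0)
a = torch.rand(5, 10, 1)
b = torch.tensor([3, 3, 1, 5, 3, 1, 0, 2, 1, 2])

unique_values, counts = torch.unique(b, return_counts=True)

order = torch.argsort(b)                        # group equal b-values together
boundaries = torch.cumsum(counts, dim=0)[:-1]   # split points between groups
splits = torch.tensor_split(a[:, order, :], boundaries, dim=1)

for value, chunk in zip(unique_values.tolist(), splits):
    print(value, chunk.shape)   # one chunk per unique value, e.g. 0 -> (5, 1, 1)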
Q: Numpy SVD gives infinite singular values for array with finite elements I've run into this problem (infinite singular values despite finite entries in an array) several times for relatively small arrays with dimensions around 100 by 100. The arrays are large enough that I've struggled to see a pattern. I give a working example below that I found by rounding the values in one of my matrices, though I wish I could engineer a simpler example. import numpy as np kmat = np.zeros((81, 81), dtype='complex') kmat[([30, 32, 36, 36, 38, 38, 57, 57, 59, 59, 63, 65], [68, 14, 62, 74, 8, 20, 61, 73, 7, 19, 67, 13])] = (0.04+0.03j) kmat[([31, 31, 37, 58, 64, 64],[35, 47, 41, 40, 34, 46])] = (0.16+0.11j) kmat[([33, 33, 35, 35, 39, 41, 45, 45, 47, 47, 60, 62, 66, 66, 68, 68, 72, 74], [62, 74, 8, 20, 68, 14, 62, 74, 8, 20, 67, 13, 61, 73, 7, 19, 67, 13])] = (0.03+0.02j) kmat[([34, 40, 40, 46, 61, 61, 67, 73, 73], [41, 35, 47, 41, 34, 46, 40, 34, 46])] = (0.13+0.09j) kmat[([30, 30, 32, 32, 36, 38, 57, 59, 63, 63, 65, 65], [62, 74, 8, 20, 68, 14, 67, 13, 61, 73, 7, 19])] = -(0.04+0.03j) kmat[([31, 37, 37, 58, 58, 64], [41, 35, 47, 34, 46, 40])] = -(0.16+0.11j) kmat[([33, 35, 39, 39, 41, 41, 45, 47, 60, 60, 62, 62, 66, 68, 72, 72, 74, 74], [68, 14, 62, 74, 8, 20, 68, 14, 61, 73, 7, 19, 67, 13, 61, 73, 7, 19])] = -(0.03+0.02j) kmat[([34, 34, 40, 46, 46, 61, 67, 67, 73], [35, 47, 41, 35, 47, 40, 34, 46, 40])] = -(0.13+0.09j) print(np.linalg.svd(kmat, full_matrices = 0, compute_uv = 0)) The output is [ inf 6.71714225e-001 6.71714225e-001 1.63401346e-001 1.63401346e-001 1.63401346e-001 5.06904064e-017 4.89771960e-017 2.03140157e-017 1.72656309e-017 1.40275705e-017 3.53543469e-018 1.83729709e-018 1.12027584e-018 8.52297427e-020 1.81345172e-033 1.27726594e-034 8.75935866e-035 2.02878907e-036 9.30164632e-049 8.54881928e-050 6.95546444e-051 2.49250115e-052 4.92974326e-053 1.18027016e-064 2.83787877e-066 3.61447306e-067 2.40364993e-069 2.01469630e-069 6.85315161e-081 1.15983261e-085 9.21712550e-086 3.87403183e-097 6.63966512e-102 5.67626333e-102 4.16050009e-118 3.27338859e-134 2.33809507e-150 1.55632960e-166 1.82909508e-182 1.14892283e-198 1.51906443e-214 nan nan nan nan nan nan nan nan nan nan nan nan nan 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 nan nan nan 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000] The largest singular value is returned as infinity, inf. There are also 18 nan returned, as well as well as some nonzero and zero singular values. However, since every element of my array is not infinite, I don't see where this trouble is originating from. Why is numpy's svd giving an infinite singular value for an array with finite values and what can I do to avoid this? In searching for the answer, I've tried a variety of 3 by 3 matrices, such as those with a column or row of zeros, but the singular values appear to be fine. A: I had the same error on Intel processors. You can fix this by installing the intel-numpy package. pip install intel-numpy More information: https://anaconda.org/intel/numpy
Numpy SVD gives infinite singular values for array with finite elements
I've run into this problem (infinite singular values despite finite entries in an array) several times for relatively small arrays with dimensions around 100 by 100. The arrays are large enough that I've struggled to see a pattern. I give a working example below that I found by rounding the values in one of my matrices, though I wish I could engineer a simpler example. import numpy as np kmat = np.zeros((81, 81), dtype='complex') kmat[([30, 32, 36, 36, 38, 38, 57, 57, 59, 59, 63, 65], [68, 14, 62, 74, 8, 20, 61, 73, 7, 19, 67, 13])] = (0.04+0.03j) kmat[([31, 31, 37, 58, 64, 64],[35, 47, 41, 40, 34, 46])] = (0.16+0.11j) kmat[([33, 33, 35, 35, 39, 41, 45, 45, 47, 47, 60, 62, 66, 66, 68, 68, 72, 74], [62, 74, 8, 20, 68, 14, 62, 74, 8, 20, 67, 13, 61, 73, 7, 19, 67, 13])] = (0.03+0.02j) kmat[([34, 40, 40, 46, 61, 61, 67, 73, 73], [41, 35, 47, 41, 34, 46, 40, 34, 46])] = (0.13+0.09j) kmat[([30, 30, 32, 32, 36, 38, 57, 59, 63, 63, 65, 65], [62, 74, 8, 20, 68, 14, 67, 13, 61, 73, 7, 19])] = -(0.04+0.03j) kmat[([31, 37, 37, 58, 58, 64], [41, 35, 47, 34, 46, 40])] = -(0.16+0.11j) kmat[([33, 35, 39, 39, 41, 41, 45, 47, 60, 60, 62, 62, 66, 68, 72, 72, 74, 74], [68, 14, 62, 74, 8, 20, 68, 14, 61, 73, 7, 19, 67, 13, 61, 73, 7, 19])] = -(0.03+0.02j) kmat[([34, 34, 40, 46, 46, 61, 67, 67, 73], [35, 47, 41, 35, 47, 40, 34, 46, 40])] = -(0.13+0.09j) print(np.linalg.svd(kmat, full_matrices = 0, compute_uv = 0)) The output is [ inf 6.71714225e-001 6.71714225e-001 1.63401346e-001 1.63401346e-001 1.63401346e-001 5.06904064e-017 4.89771960e-017 2.03140157e-017 1.72656309e-017 1.40275705e-017 3.53543469e-018 1.83729709e-018 1.12027584e-018 8.52297427e-020 1.81345172e-033 1.27726594e-034 8.75935866e-035 2.02878907e-036 9.30164632e-049 8.54881928e-050 6.95546444e-051 2.49250115e-052 4.92974326e-053 1.18027016e-064 2.83787877e-066 3.61447306e-067 2.40364993e-069 2.01469630e-069 6.85315161e-081 1.15983261e-085 9.21712550e-086 3.87403183e-097 6.63966512e-102 5.67626333e-102 4.16050009e-118 3.27338859e-134 2.33809507e-150 1.55632960e-166 1.82909508e-182 1.14892283e-198 1.51906443e-214 nan nan nan nan nan nan nan nan nan nan nan nan nan 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 nan nan nan 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000] The largest singular value is returned as infinity, inf. There are also 18 nan returned, as well as well as some nonzero and zero singular values. However, since every element of my array is not infinite, I don't see where this trouble is originating from. Why is numpy's svd giving an infinite singular value for an array with finite values and what can I do to avoid this? In searching for the answer, I've tried a variety of 3 by 3 matrices, such as those with a column or row of zeros, but the singular values appear to be fine.
[ "I had the same error on Intel processors. You can fix this by installing the intel-numpy package.\npip install intel-numpy\n\nMore information: https://anaconda.org/intel/numpy\n" ]
[ 1 ]
[]
[]
[ "infinity", "numpy", "python", "svd" ]
stackoverflow_0073243207_infinity_numpy_python_svd.txt
Q: gspread requires an older google-auth Today pip install --user --upgrade told me gspread 5.7.0 requires google-auth==1.12.0, but you have google-auth 2.14.1 which is incompatible. Please note the huge discrepancy in google-auth version numbers: 1.12 vs 2.14. I think I update my packages often enough, so this huge jump in google-auth version numbers is a surprise. What has happened? I assume that the current gspread won't work with google-auth v2, so another (more important) question is when will it be updated, if ever? What do other gspread users do? A: This has been reported and was claimed to be fixed by adding dependabot to maintain dependencies. It was actually fixed in v5.7.1. See Remove fixed version for google dependency system.
gspread requires an older google-auth
Today pip install --user --upgrade told me gspread 5.7.0 requires google-auth==1.12.0, but you have google-auth 2.14.1 which is incompatible. Please note the huge discrepancy in google-auth version numbers: 1.12 vs 2.14. I think I update my packages often enough, so this huge jump in google-auth version numbers is a surprise. What has happened? I assume that the current gspread won't work with google-auth v2, so another (more important) question is when will it be updated, if ever? What do other gspread users do?
[ "This has been reported and was claimed to be fixed by adding dependabot to maintain dependencies.\nIt was actually fixed in v5.7.1. See Remove fixed version for google dependency system.\n" ]
[ 0 ]
[]
[]
[ "google_auth_library", "gspread", "pip", "python" ]
stackoverflow_0074434493_google_auth_library_gspread_pip_python.txt
Q: How to cast columns in psycopg2's excluded values insert I want to update a table using 2 columns of a pandas dataframe. Here is my code: query = """ update table_1 m set column_1 = e.column_1 from (VALUES %s) AS e (column_2, column_1) where m.column_2= e.column_2::text""" args = (('random_value_2','2022-11-15T13:04:18.844Z'), ('random_value_1','2022-11-15T13:04:18.844Z')) psycopg2.extras.execute_values( cur, query, args, template=None, page_size=100 ) When I run this code, I get the error: psycopg2.errors.DatatypeMismatch: column "column_1" is of type timestamp with time zone but expression is of type record LINE 3: set column_1= e.column_1 How to cast str / python datetime to timestamp with timezone in psycopg2? A: Table creation and initial data entry. create table table_1 (column_1 timestamp, column_2 varchar); insert into table_1 values (current_timestamp,'random_value_2'), (current_timestamp, 'random_value_1'); select * from table_1; column_1 | column_2 ----------------------------+---------------- 11/16/2022 08:00:58.309285 | random_value_2 11/16/2022 08:00:58.309285 | random_value_1 Python code as you show: import psycopg2 from psycopg2.extras import execute_values con = psycopg2.connect(dbname="test", host='localhost', user='postgres', port=5432) cur = con.cursor() args = (('random_value_2','2022-11-15T13:04:18.844Z'), ('random_value_1','2022-11-15T13:04:18.844Z')) query = """ update table_1 m set column_1 = e.column_1 from (VALUES %s) AS e (column_2, column_1) where m.column_2= e.column_2::text""" execute_values( cur, query, args, template=None, page_size=100 ) DatatypeMismatch: column "column_1" is of type timestamp without time zone but expression is of type text LINE 3: set column_1 = e.column_1 con.rollback() The error is different in that expression that fails is text not a record. This implies you are actually doing something different. Corrected code, putting in explicit cast to timestamp: query = """ update table_1 m set column_1 = e.column_1::timestamp from (VALUES %s) AS e (column_2, column_1) where m.column_2= e.column_2::text""" execute_values( cur, query, args, template=None, page_size=100 ) con.commit() Select from table: select * from table_1; column_1 | column_2 ----------------------------+---------------- 11/16/2022 08:00:58.309285 | random_value_2 11/16/2022 08:00:58.309285 | random_value_1
How to cast columns in psycopg2's excluded values insert
I want to update a table using 2 columns of a pandas dataframe. Here is my code: query = """ update table_1 m set column_1 = e.column_1 from (VALUES %s) AS e (column_2, column_1) where m.column_2= e.column_2::text""" args = (('random_value_2','2022-11-15T13:04:18.844Z'), ('random_value_1','2022-11-15T13:04:18.844Z')) psycopg2.extras.execute_values( cur, query, args, template=None, page_size=100 ) When I run this code, I get the error: psycopg2.errors.DatatypeMismatch: column "column_1" is of type timestamp with time zone but expression is of type record LINE 3: set column_1= e.column_1 How to cast str / python datetime to timestamp with timezone in psycopg2?
[ "Table creation and initial data entry.\ncreate table table_1 (column_1 timestamp, column_2 varchar);\n\ninsert into table_1 values (current_timestamp,'random_value_2'), (current_timestamp, 'random_value_1');\n\nselect * from table_1;\n column_1 | column_2 \n----------------------------+----------------\n 11/16/2022 08:00:58.309285 | random_value_2\n 11/16/2022 08:00:58.309285 | random_value_1\n\n\nPython code as you show:\nimport psycopg2\nfrom psycopg2.extras import execute_values \n\ncon = psycopg2.connect(dbname=\"test\", host='localhost', user='postgres', port=5432)\ncur = con.cursor()\n\nargs = (('random_value_2','2022-11-15T13:04:18.844Z'), ('random_value_1','2022-11-15T13:04:18.844Z'))\n\nquery = \"\"\"\n update table_1 m\n set column_1 = e.column_1\n from (VALUES %s) AS e (column_2, column_1) \n where m.column_2= e.column_2::text\"\"\"\n\nexecute_values(\n cur, query, args, template=None, page_size=100\n )\n\nDatatypeMismatch: column \"column_1\" is of type timestamp without time zone but expression is of type text\nLINE 3: set column_1 = e.column_1\n\ncon.rollback()\n\nThe error is different in that expression that fails is text not a record. This implies you are actually doing something different.\nCorrected code, putting in explicit cast to timestamp:\nquery = \"\"\"\n update table_1 m\n set column_1 = e.column_1::timestamp\n from (VALUES %s) AS e (column_2, column_1) \n where m.column_2= e.column_2::text\"\"\"\n\nexecute_values(\n cur, query, args, template=None, page_size=100\n )\n\ncon.commit()\n\nSelect from table:\nselect * from table_1;\n column_1 | column_2 \n----------------------------+----------------\n 11/16/2022 08:00:58.309285 | random_value_2\n 11/16/2022 08:00:58.309285 | random_value_1\n\n\n\n\n" ]
[ 0 ]
[]
[]
[ "postgresql", "psycopg2", "python" ]
stackoverflow_0074458844_postgresql_psycopg2_python.txt
Q: merging a list in a list in Python? I have a list that looks similar to: list = [[[a,b,c], e, f, g], h, i, j] and my desired output is: merged_list = [a,b,c,e,f,g,h,i,j] Does anyone know an efficient way to do this? I tried to do some sort of merging of the lists with the sum function, but it didn't work. A: First, I made all your variables into strings, because otherwise they were undefined. Here is the code: # you have 3 lists in total list = [[['a','b','c'], 'e', 'f', 'g'], 'h', 'i', 'j'] output_list = [] # create an empty list to collect every item for list2 in list: # two levels of nesting left for list1 in list2: # one level of nesting left for i in list1: # now only plain items remain output_list.append(i) # add each item to output_list one by one print(output_list) # print the flattened list
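The loop in the answer above only works because every non-list element happens to be a single-character string (iterating over 'h' yields 'h' again). A small recursive helper, added here as a general-purpose sketch, handles arbitrary nesting and element types:

def flatten(items):
    """Recursively flatten nested lists into a single flat list."""
    flat = []
    for item in items:
        if isinstance(item, list):
            flat.extend(flatten(item))
        else:
            flat.append(item)
    return flat

nested = [[['a', 'b', 'c'], 'e', 'f', 'g'], 'h', 'i', 'j']
print(flatten(nested))  # ['a', 'b', 'c', 'e', 'f', 'g', 'h', 'i', 'j']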
merging a list in a list in Python?
I have a list that looks similar to: list = [[[a,b,c], e, f, g], h, i, j] and my desired output is: merged_list = [a,b,c,e,f,g,h,i,j] Does anyone know an efficient way to do this? I tried to do some sort of merging of the lists with the sum function, but it didn't work.
[ "First, I made all your variables into strings, because otherwise they were undefined. Here is the code:\n# you have 3 lists in total\nlist = [[['a','b','c'], 'e', 'f', 'g'], 'h', 'i', 'j']\n\noutput_list = []  # create an empty list to collect every item\n\nfor list2 in list:  # two levels of nesting left\n    for list1 in list2:  # one level of nesting left\n        for i in list1:  # now only plain items remain\n            output_list.append(i)  # add each item to output_list one by one\nprint(output_list)  # print the flattened list\n\n\n" ]
[ 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074463513_list_python.txt
Q: Pandas updating certain values in large database but not matching dataframe size I have a large database that is containing a certain number per customer per type of item. Each day there will be a lot of updates to a certain type of item of a certain customer. The database will look as following: import pandas as pd df = pd.DataFrame({'customer' : ['customer1', 'customer2'], 'item1': [12, 13], 'item2' : [3, 28],'item3': [2, 1]}) df2 = pd.DataFrame({'customer' : ['customer1', 'customer2'], 'item?': ['item1', 'item1'], 'quantity' : [2, 5]}) customer item1 item2 item3 0 customer1 12 3 2 1 customer2 13 28 1 customer item? quantity 0 customer1 item1 2 1 customer2 item1 5 The dataframe needs to be updated by df2, where the customer is a string and the item is also a string. I am expecting the following dataframe: customer item1 item2 item3 0 customer1 14 3 2 1 customer2 18 28 1 So essentially it is df1+df1, but this could sometimes be minus as well. I have tried the following: customerlist = df1['customer'].tolist() for i in customerlist: df1.loc[df1.customer == customerlist[i]] But I am already running into problems. Does someone have a function or whatever that works? A: We can use 'customer' as the index in both dataframes to make sure we align them correctly. Then all we do is add a reshaped version of df2 onto df based on alignment on index (both rows and columns), and when the item or the customer is a mismatch we use the values from df: df.set_index('customer').add( pd.pivot_table( df2,index='customer',columns='item?',values='quantity') ).fillna(df.set_index('customer')).astype(int) prints: item1 item2 item3 customer customer1 14 3 2 customer2 18 28 1 I am not sure what you mean by "sometimes be minus as well". If you mean that you want to subtract the two dataframes then use sub instead of add. If you mean there might be a minus in the values, that shouldn't affect the code because + and - will be a -
Pandas updating certain values in large database but not matching dataframe size
I have a large database that is containing a certain number per customer per type of item. Each day there will be a lot of updates to a certain type of item of a certain customer. The database will look as following: import pandas as pd df = pd.DataFrame({'customer' : ['customer1', 'customer2'], 'item1': [12, 13], 'item2' : [3, 28],'item3': [2, 1]}) df2 = pd.DataFrame({'customer' : ['customer1', 'customer2'], 'item?': ['item1', 'item1'], 'quantity' : [2, 5]}) customer item1 item2 item3 0 customer1 12 3 2 1 customer2 13 28 1 customer item? quantity 0 customer1 item1 2 1 customer2 item1 5 The dataframe needs to be updated by df2, where the customer is a string and the item is also a string. I am expecting the following dataframe: customer item1 item2 item3 0 customer1 14 3 2 1 customer2 18 28 1 So essentially it is df1+df1, but this could sometimes be minus as well. I have tried the following: customerlist = df1['customer'].tolist() for i in customerlist: df1.loc[df1.customer == customerlist[i]] But I am already running into problems. Does someone have a function or whatever that works?
[ "We can use 'customer' as the index in both dataframes to make sure we align them correctly. Then all we do is add a reshaped version of df2 onto df based on alignment on index (both rows and columns), and when the item or the customer is a mismatch we use the values from df:\ndf.set_index('customer').add(\n pd.pivot_table(\n df2,index='customer',columns='item?',values='quantity')\n ).fillna(df.set_index('customer')).astype(int)\n\nprints:\n item1 item2 item3\ncustomer \ncustomer1 14 3 2\ncustomer2 18 28 1\n\nI am not sure what you mean by \"sometimes be minus as well\". If you mean that you want to subtract the two dataframes then use sub instead of add. If you mean there might be a minus in the values, that shouldn't affect the code because + and - will be a -\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074463558_pandas_python.txt
Q: How to use LibTorrent for python to get information about the distribution? I would like to get the distribution data BEFORE uploading. All there is is a magnet link or .torrent file. What do I need to do? A: The question was incorrectly asked by me. I needed to find the size of all the files in the torrent. This is done using: handle = lt.add_magnet_uri(ses, url, params) handle.status().total_wanted
How to use LibTorrent for python to get information about the distribution?
I would like to get the distribution data BEFORE uploading. All there is is a magnet link or .torrent file. What do I need to do?
[ "The question was incorrectly asked by me. I needed to find the size of all the files in the torrent. This is done using:\nhandle = lt.add_magnet_uri(ses, url, params)\nhandle.status().total_wanted\n" ]
[ 1 ]
[]
[]
[ "libtorrent", "python" ]
stackoverflow_0074448565_libtorrent_python.txt
Q: How to change the entry of a MultiIndex columns pandas DataFrame? I have a dataframe df such that df.columns returns MultiIndex([( 'a', 's1', 'm/s'), ( 'a', 's2', '%'), ( 'a', 's3', '°C'), ('b', 'z3', '°C'), ('b', 'z4', 'kPa')], names=['kind', 'names', 'units']) How to change ONLY the column name ('b', 'z3', '°C') into ('b', 'z3', 'degC')? At the moment I am trying the following old_cols_name = ('b', 'z3', '°C') new_cols_name = list(old_cols_name) new_cols_name[2] = "degC" df = df_temp.rename( columns={old_cols_name: new_cols_name}, ) But it does not work in the sense that the columns names of df are left unchanged. EDIT: Question slightly changed to add more generality. A: You can't modify a MultiIndex, so you will have to recreate it. An handy way might be to transform back and forth to DataFrame. Assuming idx the MultiIndex: new_idx = pd.MultiIndex.from_frame(idx.to_frame().replace('°C', 'degC')) Or use the DataFrame constructor: new_idx = pd.DataFrame(index=idx).rename({'°C': 'degC'}).index Note that you can limit to a given level with: new_idx = pd.Series(index=idx).rename({'°C': 'degC'}, level='units').index Output: MultiIndex([('a', 's1', 'm/s'), ('a', 's2', '%'), ('a', 's3', 'degC'), ('b', 'z3', 'degC'), ('b', 'z4', 'kPa')], names=['kind', 'names', 'units']) A: I believe changing the desired column alone would work: df=pd.DataFrame(columns= pd.MultiIndex.from_tuples([( 'a', 's1', 'm/s'), ( 'a', 's2', '%'), ( 'a', 's3', 'degC'), ('b', 'z3', '°C'), ('b', 'z4', 'kPa')], names=['kind', 'names', 'units'])) MultiIndex([('a', 's1', 'm/s'), ('a', 's2', '%'), ('a', 's3', 'degC'), ('b', 'z3', '°C'), ('b', 'z4', 'kPa')], names=['kind', 'names', 'units']) using: df = df.rename(columns={'°C': 'degC'}, level=2) MultiIndex([('a', 's1', 'm/s'), ('a', 's2', '%'), ('a', 's3', 'degC'), ('b', 'z3', 'degC'), ('b', 'z4', 'kPa')], names=['kind', 'names', 'units']) In the edited question you mentioned you only want to change one of the '°C' to 'degC'. This is not possible as in MultiIndex the two '°C' are considered as the same label. Essentially the structure of the MultiIndex would need to be changed, instead of just a name change. To do that you have to reconstruct the MultiIndex: new_idx = df.columns.to_numpy() new_idx[3] = ('b', 'z3', 'degC') df.columns = pd.MultiIndex.from_tuples(new_idx, names = df.columns.names)
How to change the entry of a MultiIndex columns pandas DataFrame?
I have a dataframe df such that df.columns returns MultiIndex([( 'a', 's1', 'm/s'), ( 'a', 's2', '%'), ( 'a', 's3', '°C'), ('b', 'z3', '°C'), ('b', 'z4', 'kPa')], names=['kind', 'names', 'units']) How to change ONLY the column name ('b', 'z3', '°C') into ('b', 'z3', 'degC')? At the moment I am trying the following old_cols_name = ('b', 'z3', '°C') new_cols_name = list(old_cols_name) new_cols_name[2] = "degC" df = df_temp.rename( columns={old_cols_name: new_cols_name}, ) But it does not work in the sense that the columns names of df are left unchanged. EDIT: Question slightly changed to add more generality.
[ "You can't modify a MultiIndex, so you will have to recreate it.\nAn handy way might be to transform back and forth to DataFrame.\nAssuming idx the MultiIndex:\nnew_idx = pd.MultiIndex.from_frame(idx.to_frame().replace('°C', 'degC'))\n\nOr use the DataFrame constructor:\nnew_idx = pd.DataFrame(index=idx).rename({'°C': 'degC'}).index\n\nNote that you can limit to a given level with:\nnew_idx = pd.Series(index=idx).rename({'°C': 'degC'}, level='units').index\n\nOutput:\nMultiIndex([('a', 's1', 'm/s'),\n ('a', 's2', '%'),\n ('a', 's3', 'degC'),\n ('b', 'z3', 'degC'),\n ('b', 'z4', 'kPa')],\n names=['kind', 'names', 'units'])\n\n", "I believe changing the desired column alone would work:\ndf=pd.DataFrame(columns= pd.MultiIndex.from_tuples([( 'a', 's1', 'm/s'),\n ( 'a', 's2', '%'),\n ( 'a', 's3', 'degC'),\n ('b', 'z3', '°C'),\n ('b', 'z4', 'kPa')],\n names=['kind', 'names', 'units']))\n\nMultiIndex([('a', 's1', 'm/s'),\n ('a', 's2', '%'),\n ('a', 's3', 'degC'),\n ('b', 'z3', '°C'),\n ('b', 'z4', 'kPa')],\n names=['kind', 'names', 'units'])\n\nusing:\ndf = df.rename(columns={'°C': 'degC'}, level=2)\n\nMultiIndex([('a', 's1', 'm/s'),\n ('a', 's2', '%'),\n ('a', 's3', 'degC'),\n ('b', 'z3', 'degC'),\n ('b', 'z4', 'kPa')],\n names=['kind', 'names', 'units'])\n\nIn the edited question you mentioned you only want to change one of the '°C' to 'degC'. This is not possible as in MultiIndex the two '°C' are considered as the same label. Essentially the structure of the MultiIndex would need to be changed, instead of just a name change. To do that you have to reconstruct the MultiIndex:\nnew_idx = df.columns.to_numpy()\nnew_idx[3] = ('b', 'z3', 'degC')\ndf.columns = pd.MultiIndex.from_tuples(new_idx, names = df.columns.names)\n\n" ]
[ 1, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074463487_pandas_python.txt
Q: What is the difference between the Matlab "smooth" function, and the Python "scipy.signal.savgol_filter"? I am currently translating some code written in Matlab, and re-writing it in Python. I have a function below in Matlab: yy = smooth(y, span, 'sgolay', degree) This function is meant to smooth the signal y, using the Savitzky-Golay calculation. I found a Python function that applies this calculation to an input signal. from scipy.signal import savgol_filter yy = savgol_filter(y, span, degree) Would both of these functions produce the same output yy for the same input y? If not, is there any Python equivalent of the Matlab smooth function? Thank you in advance for the answers. A: I would compare the impulse response function of both to answer your question. From the below test, I would say it is not a bad idea to think they does the same thing. As mentioned in the comments, boundary cases like samples without neighbors, odd/even samples, etc could be implemented differently. span=5; degree=2; y=zeros(100,1); y(length(y)/2)=1; figure,stem(y),hold on, stem(smooth(y, span, 'sgolay', degree)) legend({'input','PSF'}) #%% import numpy as np from scipy.signal import savgol_filter import matplotlib.pyplot as plt span=5 degree=2 y=np.zeros(100); y[y.shape[0]//2]=1 yy = savgol_filter(y, span, degree) plt.stem(y,linefmt='red',label='input') plt.stem(yy,linefmt='blue',label='PSF') plt.show()
What is the difference between the Matlab "smooth" function, and the Python "scipy.signal.savgol_filter"?
I am currently translating some code written in Matlab, and re-writing it in Python. I have a function below in Matlab: yy = smooth(y, span, 'sgolay', degree) This function is meant to smooth the signal y, using the Savitzky-Golay calculation. I found a Python function that applies this calculation to an input signal. from scipy.signal import savgol_filter yy = savgol_filter(y, span, degree) Would both of these functions produce the same output yy for the same input y? If not, is there any Python equivalent of the Matlab smooth function? Thank you in advance for the answers.
[ "I would compare the impulse response function of both to answer your question. From the below test, I would say it is not a bad idea to think they does the same thing. As mentioned in the comments, boundary cases like samples without neighbors, odd/even samples, etc could be implemented differently.\nspan=5;\ndegree=2;\ny=zeros(100,1);\ny(length(y)/2)=1;\nfigure,stem(y),hold on, stem(smooth(y, span, 'sgolay', degree))\nlegend({'input','PSF'})\n\n\n\n#%%\nimport numpy as np\nfrom scipy.signal import savgol_filter\nimport matplotlib.pyplot as plt\n\nspan=5\ndegree=2\ny=np.zeros(100);\ny[y.shape[0]//2]=1\nyy = savgol_filter(y, span, degree)\n\n\nplt.stem(y,linefmt='red',label='input')\nplt.stem(yy,linefmt='blue',label='PSF')\nplt.show()\n\n\n\n" ]
[ 1 ]
[]
[]
[ "matlab", "python", "scipy", "signals" ]
stackoverflow_0074435553_matlab_python_scipy_signals.txt
Q: openpyxl - TypeError: __init__() got an unexpected keyword argument 'synchVertical' while using read_excel, python I get this error every time im trying to read my excel file. The strange thing is, its working on the windows pc from my cousin, but on my Macbook. Can anyone help me? Thanks in advance! emp = pd.read_excel('./employment_08_09.xlsx') Traceback (most recent call last): File "/opt/anaconda3/lib/python3.9/site-packages/spyder_kernels/py3compat.py", line 356, in compat_exec exec(code, globals, locals) File "/Users/litsa/Downloads/Excercises_ToBeHanded/Exercise_2.py", line 17, in <module> emp = pd.read_excel('./employment_08_09.xlsx') File "/opt/anaconda3/lib/python3.9/site-packages/pandas/util/_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 465, in read_excel data = io.parse( File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 1458, in parse return self._reader.parse( File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 638, in parse data = self.get_sheet_data(sheet, convert_float) File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_openpyxl.py", line 581, in get_sheet_data for row_number, row in enumerate(sheet.rows): File "/opt/anaconda3/lib/python3.9/site-packages/openpyxl/worksheet/_read_only.py", line 79, in _cells_by_row for idx, row in parser.parse(): File "openpyxl/worksheet/_reader.py", line 151, in parse File "/opt/anaconda3/lib/python3.9/site-packages/openpyxl/descriptors/serialisable.py", line 103, in from_tree return cls(**attrib) TypeError: __init__() got an unexpected keyword argument 'synchVertical' I already tried to install openpyxl because I saw that in another post, but this didn't helped
openpyxl - TypeError: __init__() got an unexpected keyword argument 'synchVertical' while using read_excel, python
I get this error every time im trying to read my excel file. The strange thing is, its working on the windows pc from my cousin, but on my Macbook. Can anyone help me? Thanks in advance! emp = pd.read_excel('./employment_08_09.xlsx') Traceback (most recent call last): File "/opt/anaconda3/lib/python3.9/site-packages/spyder_kernels/py3compat.py", line 356, in compat_exec exec(code, globals, locals) File "/Users/litsa/Downloads/Excercises_ToBeHanded/Exercise_2.py", line 17, in <module> emp = pd.read_excel('./employment_08_09.xlsx') File "/opt/anaconda3/lib/python3.9/site-packages/pandas/util/_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 465, in read_excel data = io.parse( File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 1458, in parse return self._reader.parse( File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 638, in parse data = self.get_sheet_data(sheet, convert_float) File "/opt/anaconda3/lib/python3.9/site-packages/pandas/io/excel/_openpyxl.py", line 581, in get_sheet_data for row_number, row in enumerate(sheet.rows): File "/opt/anaconda3/lib/python3.9/site-packages/openpyxl/worksheet/_read_only.py", line 79, in _cells_by_row for idx, row in parser.parse(): File "openpyxl/worksheet/_reader.py", line 151, in parse File "/opt/anaconda3/lib/python3.9/site-packages/openpyxl/descriptors/serialisable.py", line 103, in from_tree return cls(**attrib) TypeError: __init__() got an unexpected keyword argument 'synchVertical' I already tried to install openpyxl because I saw that in another post, but this didn't helped
[]
[]
[ "You are getting this error because Python cannot find the path and therefore you need to change your working directory. You can do this with the Operating system interface in Python.\nChange your working directory using os.chdir(path)\nimport os\nimport pandas as pd\n\nos.chdir(\"/Users/Documents\")\n\n#confirm the path directory\nprint(os.getcwd())\n\nExcel_read = pd.read_excel(\"Example.xlsx\")\n\nprint(Excel_read )\n\n" ]
[ -1 ]
[ "openpyxl", "pandas", "python" ]
stackoverflow_0074463670_openpyxl_pandas_python.txt
Q: How to containerize a python script from a pulled image from docker hub First I am very new to docker, so apologies if this doesn't make sense. This is my situation: I have a data science/machine learning project in a python script (written in a single .py file). I want to containerize this application. I would need to create a Dockerfile to do that. But since this is a machine learning project, there are a lot of packages that I need to pip install. So I found this Docker image from https://hub.docker.com/r/continuumio/miniconda3, which has miniconda installed, which has the packages that I need. I pulled this image. And now I don't know what to do with it. How do I continue from here. So far, my Dockerfile is empty. How can I use this image as my starting point and perhaps install more modules if needed and then finally, how to containerize my python script based on this modified image? Many thanks. A: You can specify a base image using the FROM command. Example: FROM continuumio/miniconda3:latest You'll want to use the RUN command to install dependencies (assuming you need any more,) the COPY command to get your main.py file into the container, and CMD can be used to set a default command to run when the container starts up. Documentation for each of those can be found here.
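Putting the FROM, RUN, COPY and CMD instructions from the answer together, a minimal Dockerfile might look like the sketch below; the script name main.py and the extra pip packages are placeholders, not details taken from the question:

FROM continuumio/miniconda3:latest

WORKDIR /app

# extra dependencies the script needs (placeholder package names)
RUN pip install --no-cache-dir pandas scikit-learn

# copy the single-file project into the image
COPY main.py .

# default command when the container starts
CMD ["python", "main.py"]

With that file in the project directory, docker build -t my-ml-app . builds the image and docker run my-ml-app runs the script (the image tag is arbitrary).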
How to containerize a python script from a pulled image from docker hub
First I am very new to docker, so apologies if this doesn't make sense. This is my situation: I have a data science/machine learning project in a python script (written in a single .py file). I want to containerize this application. I would need to create a Dockerfile to do that. But since this is a machine learning project, there are a lot of packages that I need to pip install. So I found this Docker image from https://hub.docker.com/r/continuumio/miniconda3, which has miniconda installed, which has the packages that I need. I pulled this image. And now I don't know what to do with it. How do I continue from here. So far, my Dockerfile is empty. How can I use this image as my starting point and perhaps install more modules if needed and then finally, how to containerize my python script based on this modified image? Many thanks.
[ "You can specify a base image using the FROM command.\nExample:\nFROM continuumio/miniconda3:latest\n\nYou'll want to use the RUN command to install dependencies (assuming you need any more,) the COPY command to get your main.py file into the container, and CMD can be used to set a default command to run when the container starts up. Documentation for each of those can be found here.\n" ]
[ 0 ]
[]
[]
[ "docker", "python" ]
stackoverflow_0074463767_docker_python.txt
Q: How to manually shutdown a socket server? I have a simple socket server, how do I shut it down when I enter "shutdown" in the terminal on the server side? import socket SERVER = "xxxx" PORT = 1234 ADDR = (SERVER, PORT) FORMAT = "utf-8" server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind(ADDR) def handle_connection(conn, addr): ... server.listen() while True: conn, addr = server.accept() handle_connection(conn, addr) A: Close active connections and exit. It can be done with: server.close() exit(0) A: To shut down your socket server manually by calling server.close(), your whole code should be: import socket SERVER = "xxxx" PORT = 1234 ADDR = (SERVER, PORT) FORMAT = "utf-8" server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind(ADDR) def handle_connection(conn, addr): ... server.listen() while True: conn, addr = server.accept() handle_connection(conn, addr) # call server.close() to shut down your server.
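Neither answer shows how to trigger the close while accept() is blocking. One common pattern, sketched here under the assumption that a separate console-reading thread is acceptable, is to watch stdin in a daemon thread and close the listening socket from there; the pending accept() then typically raises OSError, which the main loop can catch to exit cleanly:

import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 1234))
server.listen()

def console_listener():
    # blocks on stdin; typing "shutdown" closes the listening socket
    while True:
        if input().strip().lower() == "shutdown":
            server.close()
            break

threading.Thread(target=console_listener, daemon=True).start()

try:
    while True:
        conn, addr = server.accept()
        # handle_connection(conn, addr) would go here
except OSError:
    # accept() fails once the socket has been closed from the console thread
    print("server shut down")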
How to manually shutdown a socket server?
I have a simple socket server, how do I shut it down when I enter "shutdown" in the terminal on the server side? import socket SERVER = "xxxx" PORT = 1234 ADDR = (SERVER, PORT) FORMAT = "utf-8" server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind(ADDR) def handle_connection(conn, addr): ... server.listen() while True: conn, addr = server.accept() handle_connection(conn, addr)
[ "Close active connections and exit. It can be done with:\nserver.close()\nexit(0)\n\n", "To shut down your socket server manually by calling server.close(), your whole code should be:\nimport socket\n\nSERVER = \"xxxx\"\nPORT = 1234\nADDR = (SERVER, PORT)\nFORMAT = \"utf-8\"\n\nserver = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nserver.bind(ADDR)\n\ndef handle_connection(conn, addr):\n    ...\n\nserver.listen()\nwhile True:\n    conn, addr = server.accept()\n    handle_connection(conn, addr)\n\n# call server.close() to shut down your server.\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0074463032_python_sockets.txt
Q: Sort boxplot and colour by pairs I have some data for conditions that go together by pairs, structured like this: mydata = { "WT_before": [11,12,13], "WT_after": [16,17,18], "MRE11_before": [21,22,23,24,25], "MRE11_after": [26,27,28,29,30], "NBS1_before": [31,32,33,34], "NBS1_after": [36,37,38,39] } (my real data has more conditions and more values per condition, this is just an example) I looked into colouring the boxplots by pairs to help reading the figure, but it seemed quite convoluted to do in matplotlib. For the moment I'm doing it this way: bxplt_labels, bxplt_data = mydata.keys(), mydata.values() bxplt_colors = ["pink", "pink", "lightgreen", "lightgreen", "lightblue", "lightblue"] fig2, ax = plt.subplots(figsize=(20, 10), dpi=500) bplot = plt.boxplot(bxplt_data, vert=False, showfliers=False, notch=False, patch_artist=True,) for patch, color in zip(bplot['boxes'], bxplt_colors): patch.set_facecolor(color) plt.yticks(range(1, len(bxplt_labels) + 1), bxplt_labels) fig2.show() which produces the figure: I would like: to sort the condition names, so that I can order them to my choosing, and to get a more elegant way of choosing the colours used, in particular because I will need to reuse this data for more figures afterwards (like scatterplot before/after for each condition) If it is needed, I can rearrange the data structure, but each condition doesn't have the same number of values, so a dictionary seemed like the best option for me. Alternatevely, I can use seaborn, which I saw has quite a few possibilities, but I'm not familiar with it, so I would need more time to understand it. Could you help me to figure out? A: Seaborn works easiest with a dataframe in "long form". In this case, there would be rows with the condition repeated for every value with that condition. Seaborn's boxplot accepts an order= keyword, where you can change the order of the x-values. E.g. order=sorted(mydata.keys()) to sort the values alphabetically. Or list(mydata.keys())[::-1] to use the original order, but reversed. The default order would be how the values appear in the dataframe. For a horizontal boxplot, you can use x='value', y='condition'. The order will apply to either x or y, depending on which column contains strings. For coloring, you can use the palette= keyword. This can either be a string indicating one of matplotlib's or seaborn's colormaps. Or it can be a list of colors. Many more options are possible. 
import matplotlib.pyplot as plt import seaborn as sns import pandas as pd mydata = { "WT_before": [11, 12, 13], "WT_after": [16, 17, 18], "MRE11_before": [21, 22, 23, 24, 25], "MRE11_after": [26, 27, 28, 29, 30], "NBS1_before": [31, 32, 33, 34], "NBS1_after": [36, 37, 38, 39] } df = pd.DataFrame([[k, val] for k, vals in mydata.items() for val in vals], columns=['condition', 'value']) fig, ax = plt.subplots(figsize=(12, 5)) sns.boxplot(data=df, x='condition', y='value', order=['WT_before', 'WT_after', 'MRE11_before', 'MRE11_after', 'NBS1_before', 'NBS1_after'], palette='turbo', ax=ax) plt.tight_layout() plt.show() Here is an example with horizontal boxes: sns.boxplot(data=df, x='value', y='condition', palette='Paired') sns.despine() plt.xlabel('') plt.ylabel('') plt.tight_layout() plt.show() The dataframe would look like: condition value 0 WT_before 11 1 WT_before 12 2 WT_before 13 3 WT_after 16 4 WT_after 17 5 WT_after 18 6 MRE11_before 21 7 MRE11_before 22 8 MRE11_before 23 9 MRE11_before 24 10 MRE11_before 25 11 MRE11_after 26 12 MRE11_after 27 13 MRE11_after 28 14 MRE11_after 29 15 MRE11_after 30 16 NBS1_before 31 17 NBS1_before 32 18 NBS1_before 33 19 NBS1_before 34 20 NBS1_after 36 21 NBS1_after 37 22 NBS1_after 38 23 NBS1_after 39
Sort boxplot and colour by pairs
I have some data for conditions that go together by pairs, structured like this: mydata = { "WT_before": [11,12,13], "WT_after": [16,17,18], "MRE11_before": [21,22,23,24,25], "MRE11_after": [26,27,28,29,30], "NBS1_before": [31,32,33,34], "NBS1_after": [36,37,38,39] } (my real data has more conditions and more values per condition, this is just an example) I looked into colouring the boxplots by pairs to help reading the figure, but it seemed quite convoluted to do in matplotlib. For the moment I'm doing it this way: bxplt_labels, bxplt_data = mydata.keys(), mydata.values() bxplt_colors = ["pink", "pink", "lightgreen", "lightgreen", "lightblue", "lightblue"] fig2, ax = plt.subplots(figsize=(20, 10), dpi=500) bplot = plt.boxplot(bxplt_data, vert=False, showfliers=False, notch=False, patch_artist=True,) for patch, color in zip(bplot['boxes'], bxplt_colors): patch.set_facecolor(color) plt.yticks(range(1, len(bxplt_labels) + 1), bxplt_labels) fig2.show() which produces the figure: I would like: to sort the condition names, so that I can order them to my choosing, and to get a more elegant way of choosing the colours used, in particular because I will need to reuse this data for more figures afterwards (like scatterplot before/after for each condition) If it is needed, I can rearrange the data structure, but each condition doesn't have the same number of values, so a dictionary seemed like the best option for me. Alternatevely, I can use seaborn, which I saw has quite a few possibilities, but I'm not familiar with it, so I would need more time to understand it. Could you help me to figure out?
[ "Seaborn works easiest with a dataframe in \"long form\". In this case, there would be rows with the condition repeated for every value with that condition.\nSeaborn's boxplot accepts an order= keyword, where you can change the order of the x-values. E.g. order=sorted(mydata.keys()) to sort the values alphabetically. Or list(mydata.keys())[::-1] to use the original order, but reversed. The default order would be how the values appear in the dataframe.\nFor a horizontal boxplot, you can use x='value', y='condition'. The order will apply to either x or y, depending on which column contains strings.\nFor coloring, you can use the palette= keyword. This can either be a string indicating one of matplotlib's or seaborn's colormaps. Or it can be a list of colors. Many more options are possible.\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\n\nmydata = {\n \"WT_before\": [11, 12, 13],\n \"WT_after\": [16, 17, 18],\n \"MRE11_before\": [21, 22, 23, 24, 25],\n \"MRE11_after\": [26, 27, 28, 29, 30],\n \"NBS1_before\": [31, 32, 33, 34],\n \"NBS1_after\": [36, 37, 38, 39]\n}\ndf = pd.DataFrame([[k, val] for k, vals in mydata.items() for val in vals],\n columns=['condition', 'value'])\n\nfig, ax = plt.subplots(figsize=(12, 5))\nsns.boxplot(data=df, x='condition', y='value',\n order=['WT_before', 'WT_after', 'MRE11_before', 'MRE11_after', 'NBS1_before', 'NBS1_after'],\n palette='turbo', ax=ax)\n\nplt.tight_layout()\nplt.show()\n\n\nHere is an example with horizontal boxes:\nsns.boxplot(data=df, x='value', y='condition', palette='Paired')\nsns.despine()\nplt.xlabel('')\nplt.ylabel('')\nplt.tight_layout()\nplt.show()\n\n\nThe dataframe would look like:\n\n\n\n\n\ncondition\nvalue\n\n\n\n\n0\nWT_before\n11\n\n\n1\nWT_before\n12\n\n\n2\nWT_before\n13\n\n\n3\nWT_after\n16\n\n\n4\nWT_after\n17\n\n\n5\nWT_after\n18\n\n\n6\nMRE11_before\n21\n\n\n7\nMRE11_before\n22\n\n\n8\nMRE11_before\n23\n\n\n9\nMRE11_before\n24\n\n\n10\nMRE11_before\n25\n\n\n11\nMRE11_after\n26\n\n\n12\nMRE11_after\n27\n\n\n13\nMRE11_after\n28\n\n\n14\nMRE11_after\n29\n\n\n15\nMRE11_after\n30\n\n\n16\nNBS1_before\n31\n\n\n17\nNBS1_before\n32\n\n\n18\nNBS1_before\n33\n\n\n19\nNBS1_before\n34\n\n\n20\nNBS1_after\n36\n\n\n21\nNBS1_after\n37\n\n\n22\nNBS1_after\n38\n\n\n23\nNBS1_after\n39\n\n\n\n" ]
[ 2 ]
[]
[]
[ "matplotlib", "plot", "python", "seaborn" ]
stackoverflow_0074462307_matplotlib_plot_python_seaborn.txt
Q: Remove the missing values from the rows having greater than 5 missing values and then print the percentage of missing values in each column import pandas as pd df = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') d= df.loc[df.isnull().sum(axis=1)>5] d.dropna(axis=0,inplace=True) print(round(100*(1-df.count()/len(df)),2)) i m getting output as Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.24 Discount 0.65 Order_Quantity 0.65 Profit 0.65 Shipping_Cost 0.65 Product_Base_Margin 1.30 dtype: float64 but the output is Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.00 Discount 0.42 Order_Quantity 0.42 Profit 0.42 Shipping_Cost 0.42 Product_Base_Margin 1.06 dtype: float64 A: Try this way: df.drop(df[df.isnull().sum(axis=1)>5].index,axis=0,inplace=True) print(round(100*(1-df.count()/len(df)),2)) A: I think you are trying to find the index of rows with null values sum greater 5. Use np.where instead of df.loc to find the index and then drop them. Try: import pandas as pd import numpy as np df = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') d = np.where(df.isnull().sum(axis=1)>5) df= df.drop(df.index[d]) print(round(100*(1-df.count()/len(df)),2)) output: Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.00 Discount 0.42 Order_Quantity 0.42 Profit 0.42 Shipping_Cost 0.42 Product_Base_Margin 1.06 dtype: float64 A: Try this, it should work df = df[df.isnull().sum(axis=1) <= 5] print(round(100*(1-df.count()/len(df)),2)) A: Try this solution import pandas as pd df = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') df = df[df.isnull().sum(axis=1)<=5] print(round(100*(df.isnull().sum()/len(df.index)),2)) A: This Should work. df = df.drop(df[df.isnull().sum(axis=1) > 5].index) print(round(100 * (df.isnull().sum() / len(df.index)), 2)) A: {marks = marks[marks.isnull().sum(axis=1) < 5] print(marks.isna().sum())} Please try these this will help A: This works: import pandas as pd df = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') df = df[df.isnull().sum(axis=1)<5] print(df.isnull().sum())
Remove the missing values from the rows having greater than 5 missing values and then print the percentage of missing values in each column
import pandas as pd df = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') d= df.loc[df.isnull().sum(axis=1)>5] d.dropna(axis=0,inplace=True) print(round(100*(1-df.count()/len(df)),2)) i m getting output as Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.24 Discount 0.65 Order_Quantity 0.65 Profit 0.65 Shipping_Cost 0.65 Product_Base_Margin 1.30 dtype: float64 but the output is Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.00 Discount 0.42 Order_Quantity 0.42 Profit 0.42 Shipping_Cost 0.42 Product_Base_Margin 1.06 dtype: float64
[ "Try this way:\ndf.drop(df[df.isnull().sum(axis=1)>5].index,axis=0,inplace=True)\n\nprint(round(100*(1-df.count()/len(df)),2))\n\n", "I think you are trying to find the index of rows with null values sum greater 5. Use np.where instead of df.loc to find the index and then drop them.\nTry:\nimport pandas as pd\nimport numpy as np\ndf = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0')\nd = np.where(df.isnull().sum(axis=1)>5)\ndf= df.drop(df.index[d])\nprint(round(100*(1-df.count()/len(df)),2))\n\noutput:\nOrd_id 0.00\nProd_id 0.00\nShip_id 0.00\nCust_id 0.00\nSales 0.00\nDiscount 0.42\nOrder_Quantity 0.42\nProfit 0.42\nShipping_Cost 0.42\nProduct_Base_Margin 1.06\ndtype: float64\n\n", "Try this, it should work\ndf = df[df.isnull().sum(axis=1) <= 5]\nprint(round(100*(1-df.count()/len(df)),2))\n\n", "Try this solution\n\nimport pandas as pd\ndf = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0')\ndf = df[df.isnull().sum(axis=1)<=5]\nprint(round(100*(df.isnull().sum()/len(df.index)),2))\n\n", "This Should work.\ndf = df.drop(df[df.isnull().sum(axis=1) > 5].index)\n\nprint(round(100 * (df.isnull().sum() / len(df.index)), 2))\n\n", "{marks = marks[marks.isnull().sum(axis=1) < 5]\nprint(marks.isna().sum())}\n\nPlease try these this will help\n", "This works:\nimport pandas as pd\ndf = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0')\ndf = df[df.isnull().sum(axis=1)<5]\nprint(df.isnull().sum())\n\n" ]
[ 3, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0055207940_pandas_python.txt
Q: My mysqlx python query is inserting a "AS" statement instead of a "SELECT" statement. Why? Here is the python code I am running. def queryOrg(self, OrgID): session = mysqlx.get_session( {'host': db.HOST, 'port': db.PORT, 'user': db.USER, 'password': db.PASSWORD}) org_schema = session.get_schema('Organizations') org_table = org_schema.get_table('Organizations') result = org_table.select(["*"]).where('Organization_ID = ' + str(OrgID)).execute() print(result) And here is the error output mysqlx.errors.OperationalError: 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AS `*` FROM `Organizations`.`Organizations` WHERE (`Organization_ID` = 2)' at line 1 When I run... def queryOrg(self, OrgID): session = mysqlx.get_session( {'host': db.HOST, 'port': db.PORT, 'user': db.USER, 'password': db.PASSWORD}) org_schema = session.get_schema('Organizations') org_table = org_schema.get_table('Organizations') result = org_table.select(["*"]).where('Organization_ID = ' + str(OrgID)).get_sql() print(result) I get this output. SELECT * FROM Organizations.Organizations WHERE Organization_ID = 2 So, it seems to be generating the correct mysql query when I use the .get_sql() method, but when I call the .execute() method somehting changes the query to 'AS * FROM Organizations.Organizations WHERE (Organization_ID = 2)' I have no idea why this would be happening. I am running a MySQL server on a ubuntu rasberry pi, version 8.031-0ubuntu0.22.04.1 and I am running python 3.8.10 with pip3 mysql-connector-python 8.0.30. This function is getting called through a flask app. I am running flask on version 2.2.2 Any help at all would be very welcome. A: Oh! I think I found a solution. the .select() method sets the '*' as a default when nothing is passed into it. Trying to pass ["*"] into .select() caused the mysqlx to generate something like 'SELECT * AS * ...' which was causing the problem. This new code works perfectly. def queryOrg(self, OrgID): session = mysqlx.get_session( {'host': db.HOST, 'port': db.PORT, 'user': db.USER, 'password': db.PASSWORD}) org_schema = session.get_schema('Organizations') org_table = org_schema.get_table('Organizations') result = org_table.select().where(("Organization_ID = '%s'") % str(OrgID)).execute() print(result.fetch_one())
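If the goal is to avoid building the WHERE clause with string formatting, the X DevAPI also accepts named placeholders bound with bind(); the sketch below assumes that API is available in the installed connector, and the connection settings and the :oid placeholder name are illustrative only:

import mysqlx

session = mysqlx.get_session({'host': 'localhost', 'port': 33060,
                              'user': 'user', 'password': 'password'})
table = session.get_schema('Organizations').get_table('Organizations')

# named placeholder bound with bind() instead of string interpolation
result = (table.select()
               .where('Organization_ID = :oid')
               .bind('oid', 2)
               .execute())
print(result.fetch_one())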
My mysqlx python query is inserting a "AS" statement instead of a "SELECT" statement. Why?
Here is the python code I am running. def queryOrg(self, OrgID): session = mysqlx.get_session( {'host': db.HOST, 'port': db.PORT, 'user': db.USER, 'password': db.PASSWORD}) org_schema = session.get_schema('Organizations') org_table = org_schema.get_table('Organizations') result = org_table.select(["*"]).where('Organization_ID = ' + str(OrgID)).execute() print(result) And here is the error output mysqlx.errors.OperationalError: 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AS `*` FROM `Organizations`.`Organizations` WHERE (`Organization_ID` = 2)' at line 1 When I run... def queryOrg(self, OrgID): session = mysqlx.get_session( {'host': db.HOST, 'port': db.PORT, 'user': db.USER, 'password': db.PASSWORD}) org_schema = session.get_schema('Organizations') org_table = org_schema.get_table('Organizations') result = org_table.select(["*"]).where('Organization_ID = ' + str(OrgID)).get_sql() print(result) I get this output. SELECT * FROM Organizations.Organizations WHERE Organization_ID = 2 So, it seems to be generating the correct mysql query when I use the .get_sql() method, but when I call the .execute() method somehting changes the query to 'AS * FROM Organizations.Organizations WHERE (Organization_ID = 2)' I have no idea why this would be happening. I am running a MySQL server on a ubuntu rasberry pi, version 8.031-0ubuntu0.22.04.1 and I am running python 3.8.10 with pip3 mysql-connector-python 8.0.30. This function is getting called through a flask app. I am running flask on version 2.2.2 Any help at all would be very welcome.
[ "Oh! I think I found a solution. the .select() method sets the '*' as a default when nothing is passed into it. Trying to pass [\"*\"] into .select() caused the mysqlx to generate something like 'SELECT * AS * ...' which was causing the problem. This new code works perfectly.\ndef queryOrg(self, OrgID):\n session = mysqlx.get_session(\n {'host': db.HOST, 'port': db.PORT, 'user': db.USER, 'password': db.PASSWORD})\n org_schema = session.get_schema('Organizations')\n org_table = org_schema.get_table('Organizations')\n result = org_table.select().where((\"Organization_ID = '%s'\") % str(OrgID)).execute()\n print(result.fetch_one())\n\n" ]
[ 0 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0074453457_mysql_python.txt
Q: Django CSRF Protection GraphQL API I do have a graphqlAPI which I use for CRUD Operations to my database. The authentication is tokenbased. So if an user wants to make cruds (mutations) to my database, it needs a valid token in order to do that. What I dont know is if my graphql API is also protected against CSRF attacks as I exempt this protection with csrf_exempt without csrf_exempt it needs a csrf token. Is there a way to ask for a valid csrf token without sending it over the frontend ? The graphql api is only used for the backend for inserting data into the database in which I cant get the csrf token over the frontend. Backend/Frontend: Django Database: Mongo Thanks A: If the authentication token is transported in a header field (rather than in a cookie), there is no need for CSRF protection. This is because if a user is tricked into making an unwanted request to the endpoint, the browser will not automatically insert the token into the request, so it will be unauthenticated. You could also say that the authentication token already serves as anti-CSRF token. Browsers will automatically insert (non same-site, non third-party) cookies into requests, by contrast.
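As a concrete illustration of the point in the answer (the URL and header scheme below are placeholders, and the exact header name depends on the authentication backend): a client that sends the token in a request header is not exposed by a cross-site form post, because the browser auto-attaches cookies but never custom headers:

import requests

response = requests.post(
    "https://example.com/graphql",             # placeholder endpoint
    json={"query": "{ viewer { id } }"},       # placeholder GraphQL query
    headers={"Authorization": "JWT <token>"},  # token travels in a header, not a cookie
)
print(response.status_code)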
Django CSRF Protection GraphQL API
I do have a graphqlAPI which I use for CRUD Operations to my database. The authentication is tokenbased. So if an user wants to make cruds (mutations) to my database, it needs a valid token in order to do that. What I dont know is if my graphql API is also protected against CSRF attacks as I exempt this protection with csrf_exempt without csrf_exempt it needs a csrf token. Is there a way to ask for a valid csrf token without sending it over the frontend ? The graphql api is only used for the backend for inserting data into the database in which I cant get the csrf token over the frontend. Backend/Frontend: Django Database: Mongo Thanks
[ "If the authentication token is transported in a header field (rather than in a cookie), there is no need for CSRF protection. This is because if a user is tricked into making an unwanted request to the endpoint, the browser will not automatically insert the token into the request, so it will be unauthenticated. You could also say that the authentication token already serves as anti-CSRF token.\nBrowsers will automatically insert (non same-site, non third-party) cookies into requests, by contrast.\n" ]
[ 1 ]
[]
[]
[ "csrf", "django", "graphql", "python" ]
stackoverflow_0074457185_csrf_django_graphql_python.txt
Q: "AttributeError: 'NoneType' object has no attribute 'get_text'" Whenever I tried to run this code: page = requests.get(URL, headers = headers) soup = BeautifulSoup(page.content, 'html.parser') title = soup.find(id="productTitle").get_text() price = soup.find(id="priceblock_ourprice").get_text() converted_price = price[0:7] if (converted_price < '₹ 1,200'): send_mail() print(converted_price) print(title.strip()) if(converted_price > '₹ 1,400'): send_mail() It gives me an error AttributeError: 'NoneType' object has no attribute 'get_text' earlier this code was working fine. A: import requests from bs4 import BeautifulSoup url = 'https://www.amazon.com/Camera-24-2MP-18-135mm-Essential-Including/dp/B081PMPPM1/ref=sr_1_1_sspa?dchild=1&keywords=Canon+EOS+80D&qid=1593325243&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyU1M0M1JVTkY3WTBVJmVuY3J5cHRlZElkPUEwNDQzMjI5Uk9DM08zQkM1RU9RJmVuY3J5cHRlZEFkSWQ9QTAyNjI0NjkzT0ZLUExSRkdJMDYmd2lkZ2V0TmFtZT1zcF9hdGYmYWN0aW9uPWNsaWNrUmVkaXJlY3QmZG9Ob3RMb2dDbGljaz10cnVl' headers = { "user-Agent": 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'} page = requests.get(url,headers= headers) soup = BeautifulSoup(page.content,"lxml") title = soup.find(id = "productTitle").get_text() print(title) i tried this and it worked A: Either the productTitle id or the priceblock_ourprice id do not exist in the page you are querying. I would suggest you following two steps: - Check the URL on your browser and look for that ids - Check what you get in page.content because it is maybe not the same as what you see in the browser Hope it helps A: I assume you trying analyze Amazon products. Elements productTitle and priceblock_ourprice exist (I have checked). You should check page.content. Maybe your headers are unacceptable for website. Try: import requests from bs4 import BeautifulSoup URL = "https://www.amazon.de/COMIFORT-PC-Tisch-Studie-Schreibtisch-Mehrfarbig/dp/B075R95B1S" page = requests.get(URL) soup = BeautifulSoup(page.content, "lxml") title = soup.find(id="productTitle").get_text() price = soup.find(id="priceblock_ourprice").get_text() print(title) print(price) Result: COMIFORT, Computerschreibtisch, Schreibtisch für das Arbeitszimmer, Schreibtisch, Maße: 90 x 50 x 77 cm 50,53 € A: Please check it might be the reason product it out of stock means price in not there in the site thats why its Nonetype. Try to select another product with visible price. A: I know this is 2.2 years late, but I'm going through this DevEd tutorial now - and 'ourprice' is now 'id="priceblock_dealprice". But only runs once every 15 attempts. A: It works once and then stops working. Amazon is blocking the request I think. The ids are correct and it does not change if you use lxml, html.parser, or html5lib. Provided you print(soup) and look in the body, you will see a captcha prompt from amazon basically saying you have to prove you are not a robot. I don't know a way around that. A: If you try to run it consecutive days you will run into this error. Amazon if blocking the request. One trick to get it working again is to simply print the html after you get it before trying to parse. response = requests.get(url=AMAZON_URI, headers={ "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36", "Accept-Language": "en-US,en;q=0.9" }) response.raise_for_status() data = response.text # Adding this print will fix the issue for consecutive days. 
print(data) soup = BeautifulSoup(data, "html.parser") price_dollar = soup.find(name="span", class_="a-price-whole").getText() price_cents = soup.find(name="span", class_="a-price-fraction").getText() total_price = (float(f"{price_dollar}{price_cents}")) print(total_price)
"AttributeError: 'NoneType' object has no attribute 'get_text'"
Whenever I tried to run this code: page = requests.get(URL, headers = headers) soup = BeautifulSoup(page.content, 'html.parser') title = soup.find(id="productTitle").get_text() price = soup.find(id="priceblock_ourprice").get_text() converted_price = price[0:7] if (converted_price < '₹ 1,200'): send_mail() print(converted_price) print(title.strip()) if(converted_price > '₹ 1,400'): send_mail() It gives me an error AttributeError: 'NoneType' object has no attribute 'get_text' earlier this code was working fine.
[ "import requests\n\n\n\nfrom bs4 import BeautifulSoup \n\nurl = 'https://www.amazon.com/Camera-24-2MP-18-135mm-Essential-Including/dp/B081PMPPM1/ref=sr_1_1_sspa?dchild=1&keywords=Canon+EOS+80D&qid=1593325243&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyU1M0M1JVTkY3WTBVJmVuY3J5cHRlZElkPUEwNDQzMjI5Uk9DM08zQkM1RU9RJmVuY3J5cHRlZEFkSWQ9QTAyNjI0NjkzT0ZLUExSRkdJMDYmd2lkZ2V0TmFtZT1zcF9hdGYmYWN0aW9uPWNsaWNrUmVkaXJlY3QmZG9Ob3RMb2dDbGljaz10cnVl'\n\nheaders = { \"user-Agent\": 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'}\n\npage = requests.get(url,headers= headers)\n\nsoup = BeautifulSoup(page.content,\"lxml\")\n\ntitle = soup.find(id = \"productTitle\").get_text()\n\nprint(title)\n\ni tried this and it worked\n", "Either the productTitle id or the priceblock_ourprice id do not exist in the page you are querying. I would suggest you following two steps:\n- Check the URL on your browser and look for that ids\n- Check what you get in page.content because it is maybe not the same as what you see in the browser\nHope it helps\n", "I assume you trying analyze Amazon products.\nElements productTitle and priceblock_ourprice exist (I have checked).\nYou should check page.content. \nMaybe your headers are unacceptable for website.\nTry:\nimport requests\nfrom bs4 import BeautifulSoup\n\nURL = \"https://www.amazon.de/COMIFORT-PC-Tisch-Studie-Schreibtisch-Mehrfarbig/dp/B075R95B1S\"\npage = requests.get(URL)\nsoup = BeautifulSoup(page.content, \"lxml\")\n\ntitle = soup.find(id=\"productTitle\").get_text()\nprice = soup.find(id=\"priceblock_ourprice\").get_text()\n\nprint(title)\nprint(price)\n\nResult:\nCOMIFORT, Computerschreibtisch, Schreibtisch für das Arbeitszimmer, Schreibtisch, Maße: 90 x 50 x 77 cm \n50,53 €\n\n", "Please check it might be the reason product it out of stock means price in not there in the site thats why its Nonetype. Try to select another product with visible price.\n", "I know this is 2.2 years late, but I'm going through this DevEd tutorial now -\nand 'ourprice' is now 'id=\"priceblock_dealprice\". But only runs once every 15 attempts.\n", "It works once and then stops working. Amazon is blocking the request I think.\nThe ids are correct and it does not change if you use lxml, html.parser, or html5lib. Provided you print(soup) and look in the body, you will see a captcha prompt from amazon basically saying you have to prove you are not a robot. I don't know a way around that.\n", "If you try to run it consecutive days you will run into this error. Amazon if blocking the request. One trick to get it working again is to simply print the html after you get it before trying to parse.\nresponse = requests.get(url=AMAZON_URI, headers={\n \"User-Agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36\",\n \"Accept-Language\": \"en-US,en;q=0.9\"\n})\nresponse.raise_for_status()\n\ndata = response.text\n# Adding this print will fix the issue for consecutive days.\nprint(data)\nsoup = BeautifulSoup(data, \"html.parser\")\nprice_dollar = soup.find(name=\"span\", class_=\"a-price-whole\").getText()\nprice_cents = soup.find(name=\"span\", class_=\"a-price-fraction\").getText()\ntotal_price = (float(f\"{price_dollar}{price_cents}\"))\n\n\nprint(total_price)\n\n" ]
[ 3, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0057462202_python.txt
Q: Unwanted characters in the HTML beautified text I have my original web scraped HTML text as this > {"overview":"\\u003cp\\u003e\\u003cspan style=\\"font-size: > 10.5pt;\\"\\u003e\\u003cspan class=\\"TextRun SCXW87260372 BCX0\\" style=\\"margin: 0px; padding: 0px; -webkit-user-drag: none; > -webkit-tap-highlight-color: transparent; color: #000000; font-family: \'Meiryo UI\', \'Meiryo UI_MSFontService\', sans-serif; font-kerning: > none; line-height: 15.1083px; font-variant-ligatures: none > !important;\\"\\u003e\\u003cspan class=\\"NormalTextRun SCXW87260372 > BCX0\\" style=\\"margin: 0px; padding: 0px; -webkit-user-drag: none; > -webkit-tap-highlight-color: transparent; background-color: inherit;\\"\\u003eFioriアプリの動作確認で、2通りのトラブルシューティングをする\\u003c/span\\u003e\\u003c/span\\u003e\\u003cspan > class=\\"EOP SCXW87260372 BCX0\\" style=\\"margin: 0px;..... I used the BeautifulSoup to eliminate all the HTML tags using the below code def beautify_full_text(content): try: soup = BeautifulSoup(content.encode('utf-8').decode('unicode-escape'), "html.parser") for tag in soup(): for attribute in ["class", "id", "name", "style"]: del tag[attribute] return os.linesep.join([s for s in soup.text.splitlines() if s]) except Exception as e: print(e) return I now see that the returned text has no HTML Tags but has the below text {"overview":"Fioriã\x82¢ã\x83\x97ã\x83ªã\x81®å\x8b\x95ä½\x9c確èª\x8dã\x81§ã\x80\x81ï¼\x92é\x80\x9aã\x82\x8aã\x81®ã\x83\x88ã\x83©ã\x83\x96ã\x83«ã\x82·ã\x83¥ã\x83¼ã\x83\x86ã\x82£ã\x83³ã\x82°ã\x82\x92ã\x81\x99ã\x82\x8bÂ\xa0\nGatewayã\x81®ã\x82¨ã\x83©ã\x83¼ã\x83\xadã\x82°ã\x82\x92確èª\x8dÂ\xa0\nã\x83\x96ã\x83©ã\x82¦ã\x82¶ã\x81®ã\x82³ã\x83³ã\x82½ã\x83¼ã\x83«ã\x81§ICFã\x82µã\x83¼ã\x83\x93ã\x82¹ç\xad\x89ã\x81§403/403ã\x81\x8cå\x87ºã\x81¦ã\x81\x84ã\x81ªã\x81\x84ã\x81\x8b\nÂ\xa0â\x80¯Â\xa0[Gateway Foundation] Which Tools Can Be Used for Troubleshooting?Â\xa0\n極å\x8a\x9bã\x83\xadã\x82°ã\x82ªã\x83³è¨\x80èª\x9eï¼\x9dè\x8b±èª\x9eã\x81«ã\x81\x97ã\x81¦ã\x80\x81ã\x80\x8cggrksã\x80\x8dã\x82\x92ã\x82ªã\x83\x96ã\x83©ã\x83¼ã\x83\x88ã\x81«å\x8c\nã\x82\x93ã\x81§è¨\x80ã\x81\x86Â\xa0\n"} Is there a way I can eliminate these unwanted characters as well? A: The problem with the unicode-escape codec is that it decodes the escape codes, but also decodes to latin1. 
Since you have non-latin1 characters in the stream, re-encode as latin1 to undo the incorrect decoding and decode as utf8 again: s='''\ {"overview":"\\u003cp\\u003e\\u003cspan style=\\"font-size: 10.5pt;\\"\\u003e\\u003cspan class=\\"TextRun SCXW87260372 BCX0\\" style=\\"margin: 0px; padding: 0px; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; color: #000000; font-family: \'Meiryo UI\', \'Meiryo UI_MSFontService\', sans-serif; font-kerning: none; line-height: 15.1083px; font-variant-ligatures: none !important;\\"\\u003e\\u003cspan class=\\"NormalTextRun SCXW87260372 BCX0\\" style=\\"margin: 0px; padding: 0px; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; background-color: inherit;\\"\\u003eFioriアプリの動作確認で、2通りのトラブルシューティングをする\\u003c/span\\u003e\\u003c/span\\u003e\\u003cspan class=\\"EOP SCXW87260372 BCX0\\" style=\\"margin: 0px;''' print(s.encode('utf8').decode('unicode-escape').encode('latin1').decode('utf8')) Output: {"overview":"<p><span style="font-size: 10.5pt;"><span class="TextRun SCXW87260372 BCX0" style="margin: 0px; padding: 0px; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; color: #000000; font-family: 'Meiryo UI', 'Meiryo UI_MSFontService', sans-serif; font-kerning: none; line-height: 15.1083px; font-variant-ligatures: none !important;"><span class="NormalTextRun SCXW87260372 BCX0" style="margin: 0px; padding: 0px; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; background-color: inherit;">Fioriアプリの動作確認で、2通りのトラブルシューティングをする</span></span><span class="EOP SCXW87260372 BCX0" style="margin: 0px; Now that it is decoded, it looks more like it was a JSON response. If you used the requests module to retrieve the data look at response.json() to see if it decodes correctly, or use json.loads() on your scraped string. A: It turns out that a small tweak solved the problem. Currently, the code looks as below def beautify_full_text(content): try: soup = BeautifulSoup(content.encode('utf-8').decode('unicode-escape'), "html.parser") for tag in soup(): for attribute in ["class", "id", "name", "style"]: del tag[attribute] beau_text = os.linesep.join([s for s in soup.text.splitlines() if s]) beau_text = beau_text.encode("ascii", "ignore").decode() return beau_text except Exception as e: print(e) return
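Since the scraped payload looks like JSON, an alternative to the encode/decode round-trip is to let the json module resolve the \u escapes directly. The raw string below is a shortened, hypothetical stand-in for the real payload:

import json
from bs4 import BeautifulSoup

raw = '{"overview":"\\u003cp\\u003eFiori\\u30a2\\u30d7\\u30ea\\u003c/p\\u003e"}'   # hypothetical sample

data = json.loads(raw)                    # json.loads decodes the \uXXXX escapes without a latin1 detour
html = data["overview"]                   # "<p>Fioriアプリ</p>"
text = BeautifulSoup(html, "html.parser").get_text(separator="\n")
print(text)                               # Fioriアプリ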
Unwanted characters in the HTML beautified text
I have my original web scraped HTML text as this > {"overview":"\\u003cp\\u003e\\u003cspan style=\\"font-size: > 10.5pt;\\"\\u003e\\u003cspan class=\\"TextRun SCXW87260372 BCX0\\" style=\\"margin: 0px; padding: 0px; -webkit-user-drag: none; > -webkit-tap-highlight-color: transparent; color: #000000; font-family: \'Meiryo UI\', \'Meiryo UI_MSFontService\', sans-serif; font-kerning: > none; line-height: 15.1083px; font-variant-ligatures: none > !important;\\"\\u003e\\u003cspan class=\\"NormalTextRun SCXW87260372 > BCX0\\" style=\\"margin: 0px; padding: 0px; -webkit-user-drag: none; > -webkit-tap-highlight-color: transparent; background-color: inherit;\\"\\u003eFioriアプリの動作確認で、2通りのトラブルシューティングをする\\u003c/span\\u003e\\u003c/span\\u003e\\u003cspan > class=\\"EOP SCXW87260372 BCX0\\" style=\\"margin: 0px;..... I used the BeautifulSoup to eliminate all the HTML tags using the below code def beautify_full_text(content): try: soup = BeautifulSoup(content.encode('utf-8').decode('unicode-escape'), "html.parser") for tag in soup(): for attribute in ["class", "id", "name", "style"]: del tag[attribute] return os.linesep.join([s for s in soup.text.splitlines() if s]) except Exception as e: print(e) return I now see that the returned text has no HTML Tags but has the below text {"overview":"Fioriã\x82¢ã\x83\x97ã\x83ªã\x81®å\x8b\x95ä½\x9c確èª\x8dã\x81§ã\x80\x81ï¼\x92é\x80\x9aã\x82\x8aã\x81®ã\x83\x88ã\x83©ã\x83\x96ã\x83«ã\x82·ã\x83¥ã\x83¼ã\x83\x86ã\x82£ã\x83³ã\x82°ã\x82\x92ã\x81\x99ã\x82\x8bÂ\xa0\nGatewayã\x81®ã\x82¨ã\x83©ã\x83¼ã\x83\xadã\x82°ã\x82\x92確èª\x8dÂ\xa0\nã\x83\x96ã\x83©ã\x82¦ã\x82¶ã\x81®ã\x82³ã\x83³ã\x82½ã\x83¼ã\x83«ã\x81§ICFã\x82µã\x83¼ã\x83\x93ã\x82¹ç\xad\x89ã\x81§403/403ã\x81\x8cå\x87ºã\x81¦ã\x81\x84ã\x81ªã\x81\x84ã\x81\x8b\nÂ\xa0â\x80¯Â\xa0[Gateway Foundation] Which Tools Can Be Used for Troubleshooting?Â\xa0\n極å\x8a\x9bã\x83\xadã\x82°ã\x82ªã\x83³è¨\x80èª\x9eï¼\x9dè\x8b±èª\x9eã\x81«ã\x81\x97ã\x81¦ã\x80\x81ã\x80\x8cggrksã\x80\x8dã\x82\x92ã\x82ªã\x83\x96ã\x83©ã\x83¼ã\x83\x88ã\x81«å\x8c\nã\x82\x93ã\x81§è¨\x80ã\x81\x86Â\xa0\n"} Is there a way I can eliminate these unwanted characters as well?
[ "The problem with the unicode-escape codec is that it decodes the escape codes, but also decodes to latin1. Since you have non-latin1 characters in the stream, re-encode as latin1 to undo the incorrect decoding and decode as utf8 again:\ns='''\\\n{\"overview\":\"\\\\u003cp\\\\u003e\\\\u003cspan style=\\\\\"font-size:\n10.5pt;\\\\\"\\\\u003e\\\\u003cspan class=\\\\\"TextRun SCXW87260372 BCX0\\\\\" style=\\\\\"margin: 0px; padding: 0px; -webkit-user-drag: none;\n-webkit-tap-highlight-color: transparent; color: #000000; font-family: \\'Meiryo UI\\', \\'Meiryo UI_MSFontService\\', sans-serif; font-kerning:\nnone; line-height: 15.1083px; font-variant-ligatures: none\n!important;\\\\\"\\\\u003e\\\\u003cspan class=\\\\\"NormalTextRun SCXW87260372\nBCX0\\\\\" style=\\\\\"margin: 0px; padding: 0px; -webkit-user-drag: none;\n-webkit-tap-highlight-color: transparent; background-color: inherit;\\\\\"\\\\u003eFioriアプリの動作確認で、2通りのトラブルシューティングをする\\\\u003c/span\\\\u003e\\\\u003c/span\\\\u003e\\\\u003cspan\nclass=\\\\\"EOP SCXW87260372 BCX0\\\\\" style=\\\\\"margin: 0px;'''\n\nprint(s.encode('utf8').decode('unicode-escape').encode('latin1').decode('utf8'))\n\nOutput:\n{\"overview\":\"<p><span style=\"font-size:\n10.5pt;\"><span class=\"TextRun SCXW87260372 BCX0\" style=\"margin: 0px; padding: 0px; -webkit-user-drag: none;\n-webkit-tap-highlight-color: transparent; color: #000000; font-family: 'Meiryo UI', 'Meiryo UI_MSFontService', sans-serif; font-kerning:\nnone; line-height: 15.1083px; font-variant-ligatures: none\n!important;\"><span class=\"NormalTextRun SCXW87260372\nBCX0\" style=\"margin: 0px; padding: 0px; -webkit-user-drag: none;\n-webkit-tap-highlight-color: transparent; background-color: inherit;\">Fioriアプリの動作確認で、2通りのトラブルシューティングをする</span></span><span\nclass=\"EOP SCXW87260372 BCX0\" style=\"margin: 0px;\n\nNow that it is decoded, it looks more like it was a JSON response. If you used the requests module to retrieve the data look at response.json() to see if it decodes correctly, or use json.loads() on your scraped string.\n", "It turns out that a small tweak solved the problem. Currently, the code looks as below\ndef beautify_full_text(content):\n try:\n soup = BeautifulSoup(content.encode('utf-8').decode('unicode-escape'), \"html.parser\")\n for tag in soup():\n for attribute in [\"class\", \"id\", \"name\", \"style\"]:\n del tag[attribute]\n \n beau_text = os.linesep.join([s for s in soup.text.splitlines() if s])\n beau_text = beau_text.encode(\"ascii\", \"ignore\").decode()\n return beau_text\n except Exception as e:\n print(e)\n return\n\n" ]
[ 1, 0 ]
[]
[]
[ "beautifulsoup", "html", "python", "unicode", "web_scraping" ]
stackoverflow_0074443942_beautifulsoup_html_python_unicode_web_scraping.txt
Q: Why Binance klines last close value with interval is different from others? I am getting the candle stick values from binance api and print them like following. for i in range(0, 10): service = BinanceSpotService() klines = service.get_klines(symbol='BTCUSDT', interval=Client.KLINE_INTERVAL_15MINUTE) print(klines[["date", "close"]].tail(2)) each loop prints the last two datas like this: date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16769.12 date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16769.10 date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16772.34 date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16770.48 the last item date does not change but close values are different. Why is this so? A: Short Anwer: The most recent kline is still constantly changing. In your example, you do not pass any endTime. This gets then passed over to the Binance API. When there is no defined endTime, the API will return the most recent klines. If startTime and endTime are not sent, the most recent klines are returned. source Applied to your example: You are getting the klines data multiple times in a loop. The most recent kline (klines[-1]) is the one, which is constantly changing, because the time window is still open and therefore changes with every trade made.
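A small helper illustrating the takeaway: the newest kline is the still-open candle, so drop it if you need stable close values. It assumes get_klines returns the pandas DataFrame shown in the question:

import pandas as pd

def closed_klines(klines: pd.DataFrame) -> pd.DataFrame:
    # The most recent row is the candle that is still forming, so its close keeps changing.
    return klines.iloc[:-1]

# Usage, assuming the same wrapper objects as in the question:
# df = closed_klines(service.get_klines(symbol="BTCUSDT",
#                                       interval=Client.KLINE_INTERVAL_15MINUTE))
# print(df[["date", "close"]].tail(2))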
Why is the last close value of Binance klines with an interval different from the others?
I am getting the candle stick values from binance api and print them like following. for i in range(0, 10): service = BinanceSpotService() klines = service.get_klines(symbol='BTCUSDT', interval=Client.KLINE_INTERVAL_15MINUTE) print(klines[["date", "close"]].tail(2)) each loop prints the last two datas like this: date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16769.12 date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16769.10 date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16772.34 date close 498 2022-11-13 07:45:00 16774.99 499 2022-11-13 08:00:00 16770.48 the last item date does not change but close values are different. Why is this so?
[ "Short Anwer: The most recent kline is still constantly changing.\nIn your example, you do not pass any endTime.\nThis gets then passed over to the Binance API. When there is no defined endTime, the API will return the most recent klines.\nIf startTime and endTime are not sent, the most recent klines are returned.\n\nsource\nApplied to your example:\nYou are getting the klines data multiple times in a loop. The most recent kline (klines[-1]) is the one, which is constantly changing, because the time window is still open and therefore changes with every trade made.\n" ]
[ 0 ]
[]
[]
[ "binance", "python" ]
stackoverflow_0074419388_binance_python.txt
Q: How to normalize data which contain positive and negative numbers into 0 and 1? I have a dataset that contains negative and positive values. then here I use MinMaxScaler() to normalize the data to 0 and 1. but because the normalized data has negative and positive values in it, the normalization is not optimal, so the resulting prediction results are not optimal. then I try to change the negative data to positive with abs() then the result from abs() is normalized using MinMaxScaler() the result will be better. is there a way for me to keep the negative and positive values but have good predictions? my last activation function is Sigmoid Here my model structure: model = Sequential() model.add(LSTM(64, activation='relu', return_sequences= False, input_shape= (50,89))) model.add(Dense(32,activation='relu')) model.add(Dense(16,activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss = 'mse', optimizer=Adam(learning_rate=0.002), metrics=['mse']) model.summary() model structure Here my code of normalization with abs(): df = pd.read_csv('1113_Rwalk40s1.csv', low_memory=False) columns = ['Fx', 'Fy', 'Fz', 'Mx', 'My', 'Mz']] selected_df = df[columns] FCDatas = selected_df[:2050] FCDatas = abs(FCDatas) SmartInsole = np.array(SIData[:2050]) FCData = np.array(FCDatas) Dataset = np.concatenate((SmartInsole, FCData), axis=1) scaler_in = MinMaxScaler(feature_range=(0, 1)) scaler_out = MinMaxScaler(feature_range=(0, 1)) data_scaled_in = scaler_in.fit_transform(Dataset[:,0:89]) data_scaled_out = scaler_out.fit_transform(Dataset[:,89:90]) The result using abs() Here my code of normalization with without abs(): df = pd.read_csv('1113_Rwalk40s1.csv', low_memory=False) columns = ['Fx', 'Fy', 'Fz', 'Mx', 'My', 'Mz']] selected_df = df[columns] FCDatas = selected_df[:2050] SmartInsole = np.array(SIData[:2050]) FCData = np.array(FCDatas) Dataset = np.concatenate((SmartInsole, FCData), axis=1) scaler_in = MinMaxScaler(feature_range=(0, 1)) scaler_out = MinMaxScaler(feature_range=(0, 1)) data_scaled_in = scaler_in.fit_transform(Dataset[:,0:89]) data_scaled_out = scaler_out.fit_transform(Dataset[:,89:90]) The result without abs() A: You can change the range of MinMaxScaler to be between [-1,1], if you'd like to keep the smallest number (negative in your case) still negative, but the largest number still positive. Does this help?
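A toy sketch of the range suggestion, assuming scikit-learn: scaling to (-1, 1) preserves the spread of mixed-sign data without taking absolute values, and pairing it with a tanh or linear output layer (instead of sigmoid) lets the network actually cover that range. The array below is made-up data:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[-3.2], [0.0], [5.7]])          # made-up column with mixed signs
scaler = MinMaxScaler(feature_range=(-1, 1))     # minimum maps to -1, maximum maps to +1
scaled = scaler.fit_transform(data)
print(scaled.ravel())                            # approximately [-1.0, -0.28, 1.0]
restored = scaler.inverse_transform(scaled)      # predictions can be mapped back the same way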
How to normalize data which contain positive and negative numbers into 0 and 1?
I have a dataset that contains negative and positive values. then here I use MinMaxScaler() to normalize the data to 0 and 1. but because the normalized data has negative and positive values in it, the normalization is not optimal, so the resulting prediction results are not optimal. then I try to change the negative data to positive with abs() then the result from abs() is normalized using MinMaxScaler() the result will be better. is there a way for me to keep the negative and positive values but have good predictions? my last activation function is Sigmoid Here my model structure: model = Sequential() model.add(LSTM(64, activation='relu', return_sequences= False, input_shape= (50,89))) model.add(Dense(32,activation='relu')) model.add(Dense(16,activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss = 'mse', optimizer=Adam(learning_rate=0.002), metrics=['mse']) model.summary() model structure Here my code of normalization with abs(): df = pd.read_csv('1113_Rwalk40s1.csv', low_memory=False) columns = ['Fx', 'Fy', 'Fz', 'Mx', 'My', 'Mz']] selected_df = df[columns] FCDatas = selected_df[:2050] FCDatas = abs(FCDatas) SmartInsole = np.array(SIData[:2050]) FCData = np.array(FCDatas) Dataset = np.concatenate((SmartInsole, FCData), axis=1) scaler_in = MinMaxScaler(feature_range=(0, 1)) scaler_out = MinMaxScaler(feature_range=(0, 1)) data_scaled_in = scaler_in.fit_transform(Dataset[:,0:89]) data_scaled_out = scaler_out.fit_transform(Dataset[:,89:90]) The result using abs() Here my code of normalization with without abs(): df = pd.read_csv('1113_Rwalk40s1.csv', low_memory=False) columns = ['Fx', 'Fy', 'Fz', 'Mx', 'My', 'Mz']] selected_df = df[columns] FCDatas = selected_df[:2050] SmartInsole = np.array(SIData[:2050]) FCData = np.array(FCDatas) Dataset = np.concatenate((SmartInsole, FCData), axis=1) scaler_in = MinMaxScaler(feature_range=(0, 1)) scaler_out = MinMaxScaler(feature_range=(0, 1)) data_scaled_in = scaler_in.fit_transform(Dataset[:,0:89]) data_scaled_out = scaler_out.fit_transform(Dataset[:,89:90]) The result without abs()
[ "You can change the range of MinMaxScaler to be between [-1,1], if you'd like to keep the smallest number (negative in your case) still negative, but the largest number still positive. Does this help?\n" ]
[ 1 ]
[]
[]
[ "data_analysis", "normalization", "python" ]
stackoverflow_0074463956_data_analysis_normalization_python.txt
Q: Changing the underlying variable's value in a dictionary of variables How can I change the value of a variable using dictionary? Right now I have to check every key of the dictionary and then change the corresponding variable's value. ` list1 = [1, 2, 3] list2 = [4, 5, 6] list3 = [7, 8, 9] dictionary = { "dog": list1, "cat": list2, "mouse": list3 } animal = input("Type dog, cat or mouse: ") numbers_list = dictionary[animal] # Adds 1 to all elements of the list numbers_list = [x+1 for x in numbers_list] # Is there an easier way to do this? # Is there a way to change the value of the original list without # using large amount of if-statements, since we know that # dictionary[animal] is the list that we want to change? # Using dictionary[animal] = numbers_list.copy(), obviously wont help because it # only changes the list in the dictionary if animal == "dog": list1 = numbers_list.copy() if animal == "cat": list2 = numbers_list.copy() if animal == "mouse": list3 = numbers_list.copy() print(list1, list2, list3) ` I've tried using dictionary[animal] = numbers_list.copy() but that just changes the value in the dictionary, but not the actual list. Those if-statements work, but if there is a large dictionary, it is quite a lot of work. A: You can replace the dict value with a new list on the fly - dictionary[animal] = [i+1 for i in dictionary[animal]] I would suggest to stop using the listx variables and use dict itself to maintain those lists and mappings. dictionary = { "dog": [1, 2, 3], "cat": [4, 5, 6], "mouse": [7, 8, 9] } animal = "cat" dictionary[animal] = [i+1 for i in dictionary[animal]] print(dictionary[animal]) #instead of printing list1, list2, list3, print the key, values in the dict for animal, listx in dictionary.items(): print(animal, listx) Output: [5, 6, 7] dog [1, 2, 3] cat [5, 6, 7] mouse [7, 8, 9] A: There's no need for separate lists. You can assign the lists directly as part of the dictionary definition. animals = { "dog": [1, 2, 3], "cat": [4, 5, 6], "mouse": [7, 8, 9] } animal = input("Type dog, cat or mouse: ") Once you know the animal name, you can iterate over the list directly and increment each number: for i in range(len(animals[animal])): animals[animal][i] += 1
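The key detail behind the answers is aliasing: the dictionary value and the standalone variable can point at the same list object, so mutating it in place (for example with slice assignment) updates both without any if-statements. A minimal illustration:

list1 = [1, 2, 3]
dictionary = {"dog": list1}

# Slice assignment replaces the contents of the existing list object rather than rebinding the name
dictionary["dog"][:] = [x + 1 for x in dictionary["dog"]]

print(list1)   # [2, 3, 4]: the original variable sees the change too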
Changing the underlying variable's value in a dictionary of variables
How can I change the value of a variable using dictionary? Right now I have to check every key of the dictionary and then change the corresponding variable's value. ` list1 = [1, 2, 3] list2 = [4, 5, 6] list3 = [7, 8, 9] dictionary = { "dog": list1, "cat": list2, "mouse": list3 } animal = input("Type dog, cat or mouse: ") numbers_list = dictionary[animal] # Adds 1 to all elements of the list numbers_list = [x+1 for x in numbers_list] # Is there an easier way to do this? # Is there a way to change the value of the original list without # using large amount of if-statements, since we know that # dictionary[animal] is the list that we want to change? # Using dictionary[animal] = numbers_list.copy(), obviously wont help because it # only changes the list in the dictionary if animal == "dog": list1 = numbers_list.copy() if animal == "cat": list2 = numbers_list.copy() if animal == "mouse": list3 = numbers_list.copy() print(list1, list2, list3) ` I've tried using dictionary[animal] = numbers_list.copy() but that just changes the value in the dictionary, but not the actual list. Those if-statements work, but if there is a large dictionary, it is quite a lot of work.
[ "You can replace the dict value with a new list on the fly -\ndictionary[animal] = [i+1 for i in dictionary[animal]]\nI would suggest to stop using the listx variables and use dict itself to maintain those lists and mappings.\ndictionary = {\n \"dog\": [1, 2, 3],\n \"cat\": [4, 5, 6],\n \"mouse\": [7, 8, 9]\n}\n\nanimal = \"cat\"\ndictionary[animal] = [i+1 for i in dictionary[animal]]\nprint(dictionary[animal])\n\n#instead of printing list1, list2, list3, print the key, values in the dict\nfor animal, listx in dictionary.items():\n print(animal, listx)\n\nOutput:\n[5, 6, 7]\ndog [1, 2, 3]\ncat [5, 6, 7]\nmouse [7, 8, 9]\n\n", "There's no need for separate lists. You can assign the lists directly as part of the dictionary definition.\nanimals = {\n \"dog\": [1, 2, 3],\n \"cat\": [4, 5, 6],\n \"mouse\": [7, 8, 9]\n}\n\nanimal = input(\"Type dog, cat or mouse: \")\n\nOnce you know the animal name, you can iterate over the list directly and increment each number:\nfor i in range(len(animals[animal])):\n animals[animal][i] += 1\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074463899_python_python_3.x.txt
Q: python colorama print all colors I am new to learning Python, and I came across colorama. As a test project, I wanted to print out all the available colors in colorama. from colorama import Fore from colorama import init as colorama_init colorama_init(autoreset=True) colors = [x for x in dir(Fore) if x[0] != "_"] for color in colors: print(color + f"{color}") of course this outputs all black output like this: BLACKBLACK BLUEBLUE CYANCYAN ... because the Dir(Fore) just gives me a string representation of Fore.BLUE, Fore.GREEN, ... Is there a way to access all the Fore Color property so they actually work, as in: print(Fore.BLUE + "Blue") Or in other words, this may express my problem better. I wanted to write this: print(Fore.BLACK + 'BLACK') print(Fore.BLUE + 'BLUE') print(Fore.CYAN + 'CYAN') print(Fore.GREEN + 'GREEN') print(Fore.LIGHTBLACK_EX + 'LIGHTBLACK_EX') print(Fore.LIGHTBLUE_EX + 'LIGHTBLUE_EX') print(Fore.LIGHTCYAN_EX + 'LIGHTCYAN_EX') print(Fore.LIGHTGREEN_EX + 'LIGHTGREEN_EX') print(Fore.LIGHTMAGENTA_EX + 'LIGHTMAGENTA_EX') print(Fore.LIGHTRED_EX + 'LIGHTRED_EX') print(Fore.LIGHTWHITE_EX + 'LIGHTWHITE_EX') print(Fore.LIGHTYELLOW_EX + 'LIGHTYELLOW_EX') print(Fore.MAGENTA + 'MAGENTA') print(Fore.RED + 'RED') print(Fore.RESET + 'RESET') print(Fore.WHITE + 'WHITE') print(Fore.YELLOW + 'YELLOW') in a shorter way: for color in all_the_colors_that_are_available_in_Fore: print('the word color in the representing color') #or something like this? print(Fore.color + color) A: The reason why it's printing the color name twice is well described in Patrick's comment on the question. Is their a way to access all the Fore Color property so they actualy work as in According to: https://pypi.org/project/colorama/ You can print a colored string using other ways than e.g.print(Fore.RED + 'some red text') You can use colored function from termcolor module which takes a string and a color to colorize that string. But not all Fore colors are supported so you can do the following: from colorama import Fore from colorama import init as colorama_init from termcolor import colored colorama_init(autoreset=True) colors = [x for x in dir(Fore) if x[0] != "_"] colors = [i for i in colors if i not in ["BLACK", "RESET"] and "LIGHT" not in i] for color in colors: print(colored(color, color.lower())) Hope this answered your question. EDIT: I read more about Fore items and I found that you can retrieve a dictionary containing each color as keys and it's code as values, so you can do the following to include all the colors in Fore: from colorama import Fore from colorama import init as colorama_init colorama_init(autoreset=True) colors = dict(Fore.__dict__.items()) for color in colors.keys(): print(colors[color] + f"{color}") A: You could also use eval(). for i in listOfColors: color = "Fore." + i print(eval(color), i) print(Style.RESET_ALL, end='') #end='' prevents extra newlines
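Besides the Fore.__dict__ and eval() routes shown in the answers, getattr() resolves each attribute name from dir(Fore) without evaluating strings; a short sketch:

from colorama import Fore, Style, init

init(autoreset=True)

for name in (n for n in dir(Fore) if not n.startswith("_")):
    print(getattr(Fore, name) + name)   # getattr turns the name back into the ANSI escape code

print(Style.RESET_ALL, end="")          # make sure the terminal colour is reset afterwards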
python colorama print all colors
I am new to learning Python, and I came across colorama. As a test project, I wanted to print out all the available colors in colorama. from colorama import Fore from colorama import init as colorama_init colorama_init(autoreset=True) colors = [x for x in dir(Fore) if x[0] != "_"] for color in colors: print(color + f"{color}") of course this outputs all black output like this: BLACKBLACK BLUEBLUE CYANCYAN ... because the Dir(Fore) just gives me a string representation of Fore.BLUE, Fore.GREEN, ... Is there a way to access all the Fore Color property so they actually work, as in: print(Fore.BLUE + "Blue") Or in other words, this may express my problem better. I wanted to write this: print(Fore.BLACK + 'BLACK') print(Fore.BLUE + 'BLUE') print(Fore.CYAN + 'CYAN') print(Fore.GREEN + 'GREEN') print(Fore.LIGHTBLACK_EX + 'LIGHTBLACK_EX') print(Fore.LIGHTBLUE_EX + 'LIGHTBLUE_EX') print(Fore.LIGHTCYAN_EX + 'LIGHTCYAN_EX') print(Fore.LIGHTGREEN_EX + 'LIGHTGREEN_EX') print(Fore.LIGHTMAGENTA_EX + 'LIGHTMAGENTA_EX') print(Fore.LIGHTRED_EX + 'LIGHTRED_EX') print(Fore.LIGHTWHITE_EX + 'LIGHTWHITE_EX') print(Fore.LIGHTYELLOW_EX + 'LIGHTYELLOW_EX') print(Fore.MAGENTA + 'MAGENTA') print(Fore.RED + 'RED') print(Fore.RESET + 'RESET') print(Fore.WHITE + 'WHITE') print(Fore.YELLOW + 'YELLOW') in a shorter way: for color in all_the_colors_that_are_available_in_Fore: print('the word color in the representing color') #or something like this? print(Fore.color + color)
[ "The reason why it's printing the color name twice is well described in Patrick's comment on the question.\nIs their a way to access all the Fore Color property so they actualy work as in\nAccording to: https://pypi.org/project/colorama/\nYou can print a colored string using other ways than e.g.print(Fore.RED + 'some red text')\nYou can use colored function from termcolor module which takes a string and a color to colorize that string. But not all Fore colors are supported so you can do the following:\nfrom colorama import Fore\nfrom colorama import init as colorama_init\nfrom termcolor import colored\n\ncolorama_init(autoreset=True)\n\ncolors = [x for x in dir(Fore) if x[0] != \"_\"]\ncolors = [i for i in colors if i not in [\"BLACK\", \"RESET\"] and \"LIGHT\" not in i] \n\nfor color in colors:\n print(colored(color, color.lower()))\n\nHope this answered your question.\nEDIT:\nI read more about Fore items and I found that you can retrieve a dictionary containing each color as keys and it's code as values, so you can do the following to include all the colors in Fore:\nfrom colorama import Fore\nfrom colorama import init as colorama_init\n\ncolorama_init(autoreset=True)\n\ncolors = dict(Fore.__dict__.items())\n\nfor color in colors.keys():\n print(colors[color] + f\"{color}\")\n\n\n", "You could also use eval().\nfor i in listOfColors:\n color = \"Fore.\" + i\n print(eval(color), i)\n print(Style.RESET_ALL, end='') #end='' prevents extra newlines\n\n" ]
[ 7, 0 ]
[]
[]
[ "colorama", "properties", "python" ]
stackoverflow_0061686780_colorama_properties_python.txt
Q: Changing style of pandas.DataFrame: Permanently? When I change the style of a pandas.DataFrame, for instance like so # color these columns color_columns = ['roi', 'percent_of_ath'] (portfolio_df .style # color negative numbers red .apply(lambda v: 'color: red' if v < 0 else 'color: black', subset=color_columns) # color selected cols light blue .apply(lambda s: 'background-color: lightblue', subset=color_columns)) the styles applied to the dataframe are not permanent. To make them stick I can assign the output of the (portfolio_df ... part to the same dataframe like so: portfolio_df = (portfolio_df ... Displaying this overwritten portfolio_df in a Jupyter Notebook, I can see the beautifully styled DataFrame. But trying to change the style from within a function that is imported from a module, I fail. I construct the DataFrame in the function, change the style, return the (now) styled DataFrame from the function, display it in the Jupyter Notebook, I see a non-styled DataFrame. Edit Inspecting the type of the return value of the styling operation s = (portfolio_df.style.apply(... I see this: >>> type(s) pandas.io.formats.style.Styler So the operation does not return a DataFrame, but a ...Styler object. I was erroneously thinking that I can re-assign this return value to my original DataFrame, thus overwrite it and make the style change permanent. Question Is the operation of applying a style to a DataFrame a destructive or non-desctructive operation? The answer seems to be that the style is not changed permanently. Now, how can I make it change permanently? Edit 2 Viewing the source code of Pandas, I looked at the docstring for class Styler (see [1]): If using in the Jupyter notebook, Styler has defined a ``_repr_html_`` to automatically render itself. Otherwise call Styler.render to get the generated HTML. So in a Jupyter notebook, Styler has a method that auto renders the dataframe, respecting the applied style. Otherwise (in iPython) it creates HTML. Assigning the return value of the applied style to a variable s = (portfolio_df.style.apply(... I can use it in an Jupyter notebook to render the new style. What I understand is this: I cannot output my dataframe into a Jupyter notebook and expect it to render the new style. But I can output s to show the new style. [1] class Styler in pandas/pandas/io/formats/style.py Docstring, line 39. A: I can give you two recommendations: 1. Write a simple function to display your dataframes This is by far the simplest and least hacky solution. You could write: def my_style(df:pd.DataFrame, color_columns:list[str]=['roi', 'percent_of_ath']): return (df .style .applymap(lambda v: 'color: red' if v < 0 else None, subset=color_columns) ) This lets you write code like: df.pipe(my_style) # This will output a formatted dataframe Or from IPython.display import display # This will print a nicely formatted dataframe def my_display(df:pd.DataFrame, style=my_style): display(df.pipe(style)) 2. Overwrite the Pandas _repr_html_ method I don't advice this, but it is what you are asking for ;) from pandas._config import get_option from pandas.io.formats import format as fmt def _my_repr_html_(self) -> str | None: """ Return a html representation for a particular DataFrame. Mainly for IPython notebook. """ if self._info_repr(): buf = StringIO() self.info(buf=buf) # need to escape the <class>, should be the first line. 
val = buf.getvalue().replace("<", r"&lt;", 1) val = val.replace(">", r"&gt;", 1) return "<pre>" + val + "</pre>" if get_option("display.notebook_repr_html"): max_rows = get_option("display.max_rows") min_rows = get_option("display.min_rows") max_cols = get_option("display.max_columns") show_dimensions = get_option("display.show_dimensions") formatter = fmt.DataFrameFormatter( self, columns=None, col_space=None, na_rep="NaN", formatters=None, float_format=None, sparsify=None, justify=None, index_names=True, header=True, index=True, bold_rows=True, escape=True, max_rows=max_rows, min_rows=min_rows, max_cols=max_cols, show_dimensions=show_dimensions, decimal=".", ) # return fmt.DataFrameRenderer(formatter).to_html(notebook=True) return self.pipe(my_style).to_html(notebook=True) # <<<< !!! HERE !!! else: return None df.pipe(_my_repr_html_) pd.DataFrame._repr_html_ = _my_repr_html_ Be careful! This sample code does not handle very long or wide DataFrames. Edit: The code above for overwriting repr_html has a minimal edit of the pandas code. This is a minimal working example: def my_style(df:pd.DataFrame, color_columns:list[str]=['roi', 'percent_of_ath']): return (df.style.applymap( lambda v: 'color: red' if v < 0 else None, subset=color_columns) ) def _my_repr_html_(self) -> str | None: return self.pipe(my_style)._repr_html_() # <<<< !!! HERE !!! pd.DataFrame._repr_html_ = _my_repr_html_
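A sketch of the function-based pattern this discussion points to: build and return the Styler (not a DataFrame) from the imported module, then display that returned object in the notebook. The column names follow the question, and applymap stands in here for the element-wise styling:

import pandas as pd

def styled(df: pd.DataFrame, color_columns=("roi", "percent_of_ath")):
    cols = [c for c in color_columns if c in df.columns]
    return (df.style
              .applymap(lambda v: "color: red" if v < 0 else "color: black", subset=cols)
              .applymap(lambda v: "background-color: lightblue", subset=cols))

# In a notebook cell, make the Styler the last expression, or display() it explicitly:
# from IPython.display import display
# display(styled(portfolio_df))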
Changing style of pandas.DataFrame: Permanently?
When I change the style of a pandas.DataFrame, for instance like so # color these columns color_columns = ['roi', 'percent_of_ath'] (portfolio_df .style # color negative numbers red .apply(lambda v: 'color: red' if v < 0 else 'color: black', subset=color_columns) # color selected cols light blue .apply(lambda s: 'background-color: lightblue', subset=color_columns)) the styles applied to the dataframe are not permanent. To make them stick I can assign the output of the (portfolio_df ... part to the same dataframe like so: portfolio_df = (portfolio_df ... Displaying this overwritten portfolio_df in a Jupyter Notebook, I can see the beautifully styled DataFrame. But trying to change the style from within a function that is imported from a module, I fail. I construct the DataFrame in the function, change the style, return the (now) styled DataFrame from the function, display it in the Jupyter Notebook, I see a non-styled DataFrame. Edit Inspecting the type of the return value of the styling operation s = (portfolio_df.style.apply(... I see this: >>> type(s) pandas.io.formats.style.Styler So the operation does not return a DataFrame, but a ...Styler object. I was erroneously thinking that I can re-assign this return value to my original DataFrame, thus overwrite it and make the style change permanent. Question Is the operation of applying a style to a DataFrame a destructive or non-desctructive operation? The answer seems to be that the style is not changed permanently. Now, how can I make it change permanently? Edit 2 Viewing the source code of Pandas, I looked at the docstring for class Styler (see [1]): If using in the Jupyter notebook, Styler has defined a ``_repr_html_`` to automatically render itself. Otherwise call Styler.render to get the generated HTML. So in a Jupyter notebook, Styler has a method that auto renders the dataframe, respecting the applied style. Otherwise (in iPython) it creates HTML. Assigning the return value of the applied style to a variable s = (portfolio_df.style.apply(... I can use it in an Jupyter notebook to render the new style. What I understand is this: I cannot output my dataframe into a Jupyter notebook and expect it to render the new style. But I can output s to show the new style. [1] class Styler in pandas/pandas/io/formats/style.py Docstring, line 39.
[ "I can give you two recommendations:\n1. Write a simple function to display your dataframes\nThis is by far the simplest and least hacky solution. You could write:\ndef my_style(df:pd.DataFrame, color_columns:list[str]=['roi', 'percent_of_ath']):\n return (df\n .style\n .applymap(lambda v: 'color: red' if v < 0 \n else None, subset=color_columns)\n ) \n\nThis lets you write code like:\ndf.pipe(my_style) # This will output a formatted dataframe\nOr\nfrom IPython.display import display \n\n# This will print a nicely formatted dataframe\ndef my_display(df:pd.DataFrame, style=my_style):\n display(df.pipe(style))\n\n\n2. Overwrite the Pandas _repr_html_ method\nI don't advice this, but it is what you are asking for ;)\nfrom pandas._config import get_option\nfrom pandas.io.formats import format as fmt\n\ndef _my_repr_html_(self) -> str | None:\n \"\"\"\n Return a html representation for a particular DataFrame.\n\n Mainly for IPython notebook.\n \"\"\"\n if self._info_repr():\n buf = StringIO()\n self.info(buf=buf)\n # need to escape the <class>, should be the first line.\n val = buf.getvalue().replace(\"<\", r\"&lt;\", 1)\n val = val.replace(\">\", r\"&gt;\", 1)\n return \"<pre>\" + val + \"</pre>\"\n\n if get_option(\"display.notebook_repr_html\"):\n max_rows = get_option(\"display.max_rows\")\n min_rows = get_option(\"display.min_rows\")\n max_cols = get_option(\"display.max_columns\")\n show_dimensions = get_option(\"display.show_dimensions\")\n\n formatter = fmt.DataFrameFormatter(\n self,\n columns=None,\n col_space=None,\n na_rep=\"NaN\",\n formatters=None,\n float_format=None,\n sparsify=None,\n justify=None,\n index_names=True,\n header=True,\n index=True,\n bold_rows=True,\n escape=True,\n max_rows=max_rows,\n min_rows=min_rows,\n max_cols=max_cols,\n show_dimensions=show_dimensions,\n decimal=\".\",\n )\n # return fmt.DataFrameRenderer(formatter).to_html(notebook=True)\n return self.pipe(my_style).to_html(notebook=True) # <<<< !!! HERE !!! \n else:\n return None\n \ndf.pipe(_my_repr_html_)\n\npd.DataFrame._repr_html_ = _my_repr_html_\n\nBe careful! This sample code does not handle very long or wide DataFrames.\nEdit:\nThe code above for overwriting repr_html has a minimal edit of the pandas code. This is a minimal working example:\ndef my_style(df:pd.DataFrame, color_columns:list[str]=['roi', 'percent_of_ath']):\n return (df.style.applymap(\n lambda v: 'color: red' if v < 0 else None, subset=color_columns)\n ) \n\ndef _my_repr_html_(self) -> str | None:\n return self.pipe(my_style)._repr_html_() # <<<< !!! HERE !!! \n \npd.DataFrame._repr_html_ = _my_repr_html_\n\n" ]
[ 0 ]
[ "try using this function\ndf.style.applymap()\n\n" ]
[ -4 ]
[ "jupyter_notebook", "pandas", "pandas_styles", "python" ]
stackoverflow_0056176720_jupyter_notebook_pandas_pandas_styles_python.txt
Q: Django ManyToMany all values by default I have the following model: class Product(models.Model): provinces = models.ManyToManyField('Province', related_name='formats') By default, products can be sold in every province. How can I define the model "Product" so that every product created has all provinces by default? Thanks! A: Use the default key. You can't directly set default model values to an iterable like a list, so wrap them in a callable, as the Django documentation advises: https://docs.djangoproject.com/en/1.8/ref/models/fields/ def allProvinces(): return provincesList provinces = models.ManyToManyField('Province', related_name='formats', default=allProvinces) A: You need to use post_save signal. You can not use default field option for many-to-may fields as mentioned here A: You cannot directly add a list to M2M directly, you should first get the objects : def allProvinces(): provinceList = Province.objects.all() return provinceList And then add the default=allProvinces :) A: Expanding from https://stackoverflow.com/a/32068983/1581629 I did this: class Product(models.Model): ... def save(self, *args, **kwargs): created_flag = False if not self.pk: created_flag = True super().save(*args, **kwargs) if created_flag: self.provinces = Province.objects.all()
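A hedged sketch of the post_save route mentioned in the answers: many-to-many rows can only be attached once the instance has a primary key, hence the signal, and the receiver must live somewhere Django imports (for example a signals module loaded from AppConfig.ready). The import path and model names are assumptions based on the question:

from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import Product, Province   # hypothetical app path for the question's models

@receiver(post_save, sender=Product)
def add_default_provinces(sender, instance, created, **kwargs):
    if created:
        instance.provinces.set(Province.objects.all())   # .set() replaces direct assignment in modern Django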
Django ManyToMany all values by default
I have the following model: class Product(models.Model): provinces = models.ManyToManyField('Province', related_name='formats') By default, products can be sold in every province. How can I define the model "Product" so that every product created has all provinces by default? Thanks!
[ "Use the default key. You can't directly set default model values to an iterable like a list, so wrap them in a callable, as the Django documentation advises: https://docs.djangoproject.com/en/1.8/ref/models/fields/\ndef allProvinces():\n return provincesList\n\nprovinces = models.ManyToManyField('Province', related_name='formats', default=allProvinces)\n\n", "You need to use post_save signal.\nYou can not use default field option for many-to-may fields as mentioned here\n", "You cannot directly add a list to M2M directly, you should first get the objects :\ndef allProvinces():\n provinceList = Province.objects.all()\n return provinceList\n\nAnd then add the default=allProvinces :)\n", "Expanding from https://stackoverflow.com/a/32068983/1581629 I did this:\nclass Product(models.Model):\n ...\n def save(self, *args, **kwargs):\n created_flag = False\n if not self.pk:\n created_flag = True\n super().save(*args, **kwargs)\n if created_flag:\n self.provinces = Province.objects.all()\n\n" ]
[ 7, 4, 1, 0 ]
[]
[]
[ "django", "django_orm", "python" ]
stackoverflow_0031617838_django_django_orm_python.txt
Q: How to make one single connection to mongodb with multiple databases and collections in pyspark I've got a connection to mongodb and several databases and collections inside, and I just want to have one connection and make queries to several collections in pyspark. I think that one connection per query delays the performance. That's what I have: database_1 = "data_1" database_2 = "data_2" collection_1 = "client_1" collection_2 = "client_2" myquery_1 = [query_1] myquery_2 = [query_2] dataframe_1 = spark.read.format("com.mongodb.spark.sql.DefaultSource") .option("uri", "string_connection") .option("database", database_1) .option("collection", collection_1) .option("pipeline",myquery_1).load() dataframe_2 = spark.read.format("com.mongodb.spark.sql.DefaultSource") .option("uri", "string_connection") .option("database", database_2) .option("collection", collection_2) .option("pipeline",myquery_2).load() and I want one single connection and the option to use different databases and collections and not load a connection for every single query. dataframe = spark.read.format("com.mongodb.spark.sql.DefaultSource") .option("uri", "string_connection").option("database", database).option("collection", collection).option("pipeline",myquery).load() A: I don't think that's a problem. Spark uses lazy evaluation, which means RDDs are not evaluated until the very end, when an action needs to be done, and Spark's optimization takes care of the queries and their connections. In other words, when you do spark.read, that line is just defining the dataframe and Spark doesn't really read the data yet, until it sees an action on those dataframes, like showing the data or writing them somewhere.
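A small helper that keeps one URI and just swaps the database and collection per query; as the answer notes, each spark.read only defines a lazy DataFrame, so nothing is pulled from MongoDB until an action runs. The spark session and connection string are assumed to exist as in the question:

def read_mongo(spark, uri, database, collection, pipeline=None):
    reader = (spark.read.format("com.mongodb.spark.sql.DefaultSource")
                   .option("uri", uri)
                   .option("database", database)
                   .option("collection", collection))
    if pipeline is not None:
        reader = reader.option("pipeline", pipeline)
    return reader.load()   # still lazy: no data is read until an action such as show() or a write

# dataframe_1 = read_mongo(spark, "string_connection", "data_1", "client_1", myquery_1)
# dataframe_2 = read_mongo(spark, "string_connection", "data_2", "client_2", myquery_2)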
How to make one single connection to mongodb with multiple databases and collections in pyspark
I've got a connection to mongodb and several databases and collections inside, and I just want to have one connection and make queries to several collections in pyspark. I think that one connection per query delays the performance. That's what I have: database_1 = "data_1" database_2 = "data_2" collection_1 = "client_1" collection_2 = "client_2" myquery_1 = [query_1] myquery_2 = [query_2] dataframe_1 = spark.read.format("com.mongodb.spark.sql.DefaultSource") .option("uri", "string_connection") .option("database", database_1) .option("collection", collection_1) .option("pipeline",myquery_1).load() dataframe_2 = spark.read.format("com.mongodb.spark.sql.DefaultSource") .option("uri", "string_connection") .option("database", database_2) .option("collection", collection_2) .option("pipeline",myquery_2).load() and I want one single connection and the option to use different databases and collections and not load a connection for every single query. dataframe = spark.read.format("com.mongodb.spark.sql.DefaultSource") .option("uri", "string_connection").option("database", database).option("collection", collection).option("pipeline",myquery).load()
[ "I don't think that's a problem.\nSpark using lazy evaluation which means RDD's are evaluated until at the very end that an action is needed to be done and spark optimization take care of queries and their connection.\nin other words when you do spark.read . that line is just defining the dataframe and spark doesn't really read data just yet, untill it sees an action on those dataframes like show the data , or write them somehwere .\n" ]
[ 0 ]
[]
[]
[ "pyspark", "python", "python_3.8" ]
stackoverflow_0074464028_pyspark_python_python_3.8.txt
Q: Is there a shorter way to create loops trough the rows when using append? I have a data frame with employees and all the roles that they are able to do. ` Employees ID Brand_Manager Payroll_Manager Accountant Auditor 0 Jessi 1A 1 0 1 0 1 Lara 1B 1 0 0 1 2 Mike 1C 1 0 0 0 3 Artur 1D 1 0 0 0 4 James 2A 1 0 0 0 5 Claudia 3B 1 0 0 0 6 Zuzska 4C 1 1 0 1 7 Bartz 2B 1 1 0 0 8 Alexa 3B 1 1 0 0 ` To make work the program that I want to apply, I need to split the data and create new rows for the same person for each role (value=1). The rest of the values for the rest of the roles will become 0. The codes work well when using append(), however this example contain only 8 employees and 4 roles. I need to do the same with a lot more employees and almost 100 extra roles, which will create a very long script. I have done this: First, select all the employees that have more than one role col_list=df.columns df['many'] = df[col_list].sum(axis=1) df_single = df[ df['many'] == 1 ] df_many = df[ df['many'] >= 2 ] Then create lists and append: lststaff = list ( df_many.Employees) lstEmployees = [] lstID = [] lstBrand_Manager = [] lstPayroll_Manager = [] lstAccountant = [] lstAuditor = [] Loop through the names for i in lststaff: ID = str ( df_many.loc [ df_many['Employees'] == i, 'ID' ].tolist()[0] ) Brand_Manager = ( df_many.loc [ df_many['Employees'] == i, 'Brand_Manager'].astype(int) ) Brand_Manager = np.array(Brand_Manager) if ( Brand_Manager == 1 ).any(): lstEmployees.append ( i + '_Brand_Manager' ) lstAccountant.append (0) lstBrand_Manager.append(1) lstAuditor.append(0) lstID.append (ID) lstPayroll_Manager.append(0) Accountant = ( df_many.loc [ df_many['Employees'] == i, 'Accountant'].astype(int) ) Accountant = np.array(Accountant) if ( Accountant == 1 ).any(): lstEmployees.append ( i + '_Accountant' ) lstAccountant.append (1) lstBrand_Manager.append(0) lstAuditor.append(0) lstID.append (ID) lstPayroll_Manager.append(0) Auditor = ( df_many.loc [ df_many['Employees'] == i, 'Auditor'].astype(int) ) Auditor = np.array(Auditor) if ( Auditor == 1 ).any(): lstEmployees.append ( i + '_Auditor' ) lstAccountant.append (0) lstBrand_Manager.append(0) lstAuditor.append(1) lstID.append (ID) lstPayroll_Manager.append(0) Payroll_Manager = ( df_many.loc [ df_many['Employees'] == i, 'Payroll_Manager'].astype(int) ) Payroll_Manager = np.array(Payroll_Manager) if ( Payroll_Manager == 1 ).any(): lstEmployees.append ( i + '_Payroll_Manager' ) lstAccountant.append (0) lstBrand_Manager.append(0) lstAuditor.append(0) lstID.append (ID) lstPayroll_Manager.append(1) final_df = pd.DataFrame ( { "Employees" : lstEmployees ,"ID" : lstID ,"Brand_Manager" :lstBrand_Manager ,"Accountant" :lstAccountant ,"Auditor" : lstAuditor ,"Payroll_Manager" : lstPayroll_Manager } ) final_df The codes works well, however if I have to add 100 more roles. I will add 100 more lists and the amount of lists to be appended would be crazy.... Is there any other way of doing it with a function or for loops? The output is as follow: Employees ID Brand_Manager Accountant Auditor Payroll_Manager 0 Jessi_Brand_Manager 1A 1 0 0 0 1 Jessi_Accountant 1A 0 1 0 0 2 Lara_Brand_Manager 1B 1 0 0 0 3 Lara_Auditor 1B 0 0 1 0 4 Zuzska_Brand_Manager 4C 1 0 0 0 5 Zuzska_Auditor 4C 0 0 1 0 6 Zuzska_Payroll_Manager 4C 0 0 0 1 7 Bartz_Brand_Manager 2B 1 0 0 0 8 Bartz_Payroll_Manager 2B 0 0 0 1 9 Alexa_Brand_Manager 3B 1 0 0 0 10 Alexa_Payroll_Manager 3B 0 0 0 1 A: Here's one approach. 
pandas gives a "SettingWithCopy" warning, but I believe this script will always result in the expected behavior. import numpy as np import pandas as pd #input dataframe df = pd.DataFrame({'Employees': {0: 'Jessi', 1: 'Lara', 2: 'Mike', 3: 'Artur', 4: 'James', 5: 'Claudia', 6: 'Zuzska', 7: 'Bartz', 8: 'Alexa'}, 'ID': {0: '1A', 1: '1B', 2: '1C', 3: '1D', 4: '2A', 5: '3B', 6: '4C', 7: '2B', 8: '3B'}, 'Brand_Manager': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1}, 'Payroll_Manager': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 1, 7: 1, 8: 1}, 'Accountant': {0: 1, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0}, 'Auditor': {0: 0, 1: 1, 2: 0, 3: 0, 4: 0, 5: 0, 6: 1, 7: 0, 8: 0}}) # isolate job columns job_cols = df.columns.drop(['Employees','ID']) num_jobs = len(job_cols) # drop employees with only one job df_in = df.loc[df[job_cols].sum(axis = 1)>=2,:] # information processing def get_nums(row): return np.eye(num_jobs, dtype = int)[row.astype(bool)] def get_labels(row): return job_cols[row.astype(bool)] df_proc = df_in[['Employees','ID']] df_proc.loc[:,'all'] = df_in[job_cols].apply(get_nums,axis = 1) df_proc = df_proc.explode('all').reset_index(drop=True) df_proc[job_cols] = np.vstack(df_proc['all']) df_proc['jobs'] = df_in[job_cols].apply(get_labels,axis = 1).explode().reset_index(drop=True) # generate output dataframe df_out = df_proc[['Employees','ID',*job_cols]] df_out.loc[:,'Employees'] = df_proc['Employees'] + '_' + df_proc['jobs'] Resulting dataframe df_out: Employees ID Brand_Manager Payroll_Manager Accountant \ 0 Jessi_Brand_Manager 1A 1 0 0 1 Jessi_Accountant 1A 0 0 1 2 Lara_Brand_Manager 1B 1 0 0 3 Lara_Auditor 1B 0 0 0 4 Zuzska_Brand_Manager 4C 1 0 0 5 Zuzska_Payroll_Manager 4C 0 1 0 6 Zuzska_Auditor 4C 0 0 0 7 Bartz_Brand_Manager 2B 1 0 0 8 Bartz_Payroll_Manager 2B 0 1 0 9 Alexa_Brand_Manager 3B 1 0 0 10 Alexa_Payroll_Manager 3B 0 1 0 Auditor 0 0 1 0 2 0 3 1 4 0 5 0 6 1 7 0 8 0 9 0 10 0 Intermediate dataframe df_proc before generating df_out, for reference: Employees ID all Brand_Manager Payroll_Manager Accountant \ 0 Jessi 1A [1, 0, 0, 0] 1 0 0 1 Jessi 1A [0, 0, 1, 0] 0 0 1 2 Lara 1B [1, 0, 0, 0] 1 0 0 3 Lara 1B [0, 0, 0, 1] 0 0 0 4 Zuzska 4C [1, 0, 0, 0] 1 0 0 5 Zuzska 4C [0, 1, 0, 0] 0 1 0 6 Zuzska 4C [0, 0, 0, 1] 0 0 0 7 Bartz 2B [1, 0, 0, 0] 1 0 0 8 Bartz 2B [0, 1, 0, 0] 0 1 0 9 Alexa 3B [1, 0, 0, 0] 1 0 0 10 Alexa 3B [0, 1, 0, 0] 0 1 0 Auditor jobs 0 0 Brand_Manager 1 0 Accountant 2 0 Brand_Manager 3 1 Auditor 4 0 Brand_Manager 5 0 Payroll_Manager 6 1 Auditor 7 0 Brand_Manager 8 0 Payroll_Manager 9 0 Brand_Manager 10 0 Payroll_Manager
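For comparison, the same reshaping can be expressed without one list per role by melting to long form and one-hot encoding the role back out; the frame below is a trimmed stand-in for the question's data:

import pandas as pd

df = pd.DataFrame({
    "Employees": ["Jessi", "Lara"],
    "ID": ["1A", "1B"],
    "Brand_Manager": [1, 1],
    "Accountant": [1, 0],
    "Auditor": [0, 1],
})

long = df.melt(id_vars=["Employees", "ID"], var_name="role", value_name="has_role")
long = long[long["has_role"] == 1].copy()                 # keep only the roles each person holds
long["Employees"] = long["Employees"] + "_" + long["role"]

roles = pd.get_dummies(long["role"]).astype(int)          # one-hot columns, one row per (employee, role)
out = pd.concat([long[["Employees", "ID"]], roles], axis=1)
print(out)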
Is there a shorter way to create loops through the rows when using append?
I have a data frame with employees and all the roles that they are able to do. ` Employees ID Brand_Manager Payroll_Manager Accountant Auditor 0 Jessi 1A 1 0 1 0 1 Lara 1B 1 0 0 1 2 Mike 1C 1 0 0 0 3 Artur 1D 1 0 0 0 4 James 2A 1 0 0 0 5 Claudia 3B 1 0 0 0 6 Zuzska 4C 1 1 0 1 7 Bartz 2B 1 1 0 0 8 Alexa 3B 1 1 0 0 ` To make work the program that I want to apply, I need to split the data and create new rows for the same person for each role (value=1). The rest of the values for the rest of the roles will become 0. The codes work well when using append(), however this example contain only 8 employees and 4 roles. I need to do the same with a lot more employees and almost 100 extra roles, which will create a very long script. I have done this: First, select all the employees that have more than one role col_list=df.columns df['many'] = df[col_list].sum(axis=1) df_single = df[ df['many'] == 1 ] df_many = df[ df['many'] >= 2 ] Then create lists and append: lststaff = list ( df_many.Employees) lstEmployees = [] lstID = [] lstBrand_Manager = [] lstPayroll_Manager = [] lstAccountant = [] lstAuditor = [] Loop through the names for i in lststaff: ID = str ( df_many.loc [ df_many['Employees'] == i, 'ID' ].tolist()[0] ) Brand_Manager = ( df_many.loc [ df_many['Employees'] == i, 'Brand_Manager'].astype(int) ) Brand_Manager = np.array(Brand_Manager) if ( Brand_Manager == 1 ).any(): lstEmployees.append ( i + '_Brand_Manager' ) lstAccountant.append (0) lstBrand_Manager.append(1) lstAuditor.append(0) lstID.append (ID) lstPayroll_Manager.append(0) Accountant = ( df_many.loc [ df_many['Employees'] == i, 'Accountant'].astype(int) ) Accountant = np.array(Accountant) if ( Accountant == 1 ).any(): lstEmployees.append ( i + '_Accountant' ) lstAccountant.append (1) lstBrand_Manager.append(0) lstAuditor.append(0) lstID.append (ID) lstPayroll_Manager.append(0) Auditor = ( df_many.loc [ df_many['Employees'] == i, 'Auditor'].astype(int) ) Auditor = np.array(Auditor) if ( Auditor == 1 ).any(): lstEmployees.append ( i + '_Auditor' ) lstAccountant.append (0) lstBrand_Manager.append(0) lstAuditor.append(1) lstID.append (ID) lstPayroll_Manager.append(0) Payroll_Manager = ( df_many.loc [ df_many['Employees'] == i, 'Payroll_Manager'].astype(int) ) Payroll_Manager = np.array(Payroll_Manager) if ( Payroll_Manager == 1 ).any(): lstEmployees.append ( i + '_Payroll_Manager' ) lstAccountant.append (0) lstBrand_Manager.append(0) lstAuditor.append(0) lstID.append (ID) lstPayroll_Manager.append(1) final_df = pd.DataFrame ( { "Employees" : lstEmployees ,"ID" : lstID ,"Brand_Manager" :lstBrand_Manager ,"Accountant" :lstAccountant ,"Auditor" : lstAuditor ,"Payroll_Manager" : lstPayroll_Manager } ) final_df The codes works well, however if I have to add 100 more roles. I will add 100 more lists and the amount of lists to be appended would be crazy.... Is there any other way of doing it with a function or for loops? The output is as follow: Employees ID Brand_Manager Accountant Auditor Payroll_Manager 0 Jessi_Brand_Manager 1A 1 0 0 0 1 Jessi_Accountant 1A 0 1 0 0 2 Lara_Brand_Manager 1B 1 0 0 0 3 Lara_Auditor 1B 0 0 1 0 4 Zuzska_Brand_Manager 4C 1 0 0 0 5 Zuzska_Auditor 4C 0 0 1 0 6 Zuzska_Payroll_Manager 4C 0 0 0 1 7 Bartz_Brand_Manager 2B 1 0 0 0 8 Bartz_Payroll_Manager 2B 0 0 0 1 9 Alexa_Brand_Manager 3B 1 0 0 0 10 Alexa_Payroll_Manager 3B 0 0 0 1
[ "Here's one approach. pandas gives a \"SettingWithCopy\" warning, but I believe this script will always result in the expected behavior.\nimport numpy as np\nimport pandas as pd\n\n#input dataframe\ndf = pd.DataFrame({'Employees': {0: 'Jessi', 1: 'Lara', 2: 'Mike', 3: 'Artur', 4: 'James', 5: 'Claudia', 6: 'Zuzska', 7: 'Bartz', 8: 'Alexa'}, 'ID': {0: '1A', 1: '1B', 2: '1C', 3: '1D', 4: '2A', 5: '3B', 6: '4C', 7: '2B', 8: '3B'}, 'Brand_Manager': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1}, 'Payroll_Manager': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 1, 7: 1, 8: 1}, 'Accountant': {0: 1, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0}, 'Auditor': {0: 0, 1: 1, 2: 0, 3: 0, 4: 0, 5: 0, 6: 1, 7: 0, 8: 0}})\n\n# isolate job columns\njob_cols = df.columns.drop(['Employees','ID'])\nnum_jobs = len(job_cols)\n\n# drop employees with only one job\ndf_in = df.loc[df[job_cols].sum(axis = 1)>=2,:]\n\n# information processing\ndef get_nums(row):\n return np.eye(num_jobs, dtype = int)[row.astype(bool)]\ndef get_labels(row):\n return job_cols[row.astype(bool)]\n\ndf_proc = df_in[['Employees','ID']]\ndf_proc.loc[:,'all'] = df_in[job_cols].apply(get_nums,axis = 1)\ndf_proc = df_proc.explode('all').reset_index(drop=True)\ndf_proc[job_cols] = np.vstack(df_proc['all'])\ndf_proc['jobs'] = df_in[job_cols].apply(get_labels,axis = 1).explode().reset_index(drop=True)\n\n# generate output dataframe\ndf_out = df_proc[['Employees','ID',*job_cols]]\ndf_out.loc[:,'Employees'] = df_proc['Employees'] + '_' + df_proc['jobs']\n\nResulting dataframe df_out:\n Employees ID Brand_Manager Payroll_Manager Accountant \\\n0 Jessi_Brand_Manager 1A 1 0 0 \n1 Jessi_Accountant 1A 0 0 1 \n2 Lara_Brand_Manager 1B 1 0 0 \n3 Lara_Auditor 1B 0 0 0 \n4 Zuzska_Brand_Manager 4C 1 0 0 \n5 Zuzska_Payroll_Manager 4C 0 1 0 \n6 Zuzska_Auditor 4C 0 0 0 \n7 Bartz_Brand_Manager 2B 1 0 0 \n8 Bartz_Payroll_Manager 2B 0 1 0 \n9 Alexa_Brand_Manager 3B 1 0 0 \n10 Alexa_Payroll_Manager 3B 0 1 0 \n\n Auditor \n0 0 \n1 0 \n2 0 \n3 1 \n4 0 \n5 0 \n6 1 \n7 0 \n8 0 \n9 0 \n10 0 \n\nIntermediate dataframe df_proc before generating df_out, for reference:\n Employees ID all Brand_Manager Payroll_Manager Accountant \\\n0 Jessi 1A [1, 0, 0, 0] 1 0 0 \n1 Jessi 1A [0, 0, 1, 0] 0 0 1 \n2 Lara 1B [1, 0, 0, 0] 1 0 0 \n3 Lara 1B [0, 0, 0, 1] 0 0 0 \n4 Zuzska 4C [1, 0, 0, 0] 1 0 0 \n5 Zuzska 4C [0, 1, 0, 0] 0 1 0 \n6 Zuzska 4C [0, 0, 0, 1] 0 0 0 \n7 Bartz 2B [1, 0, 0, 0] 1 0 0 \n8 Bartz 2B [0, 1, 0, 0] 0 1 0 \n9 Alexa 3B [1, 0, 0, 0] 1 0 0 \n10 Alexa 3B [0, 1, 0, 0] 0 1 0 \n\n Auditor jobs \n0 0 Brand_Manager \n1 0 Accountant \n2 0 Brand_Manager \n3 1 Auditor \n4 0 Brand_Manager \n5 0 Payroll_Manager \n6 1 Auditor \n7 0 Brand_Manager \n8 0 Payroll_Manager \n9 0 Brand_Manager \n10 0 Payroll_Manager \n\n" ]
[ 0 ]
[]
[]
[ "append", "multiple_columns", "python" ]
stackoverflow_0074453795_append_multiple_columns_python.txt
Q: limit the number of colors of an image to a specified number based on predominant colors in python
I want to process images in a way to limit the number of colors to a predetermined, specific number.
I tried using this method:
from PIL import Image
image = Image.open("input.png")
result = image.convert('P', palette=Image.ADAPTIVE, colors=2)
result.save("saved.png")

For some reason it used to work but now doesn't, and I'm pretty sure I didn't change anything.
Is there a fix or another method? Thanks.
A: FIXED: the problem is the color mode.
To be able to use this function you first need to convert the image's color mode to RGB, like this:
image = image.convert('RGB')
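Put together, a minimal sketch of the whole conversion (file names are just placeholders):
from PIL import Image

image = Image.open("input.png").convert("RGB")   # palette/RGBA sources must be RGB first
result = image.convert("P", palette=Image.ADAPTIVE, colors=2)  # keep only the 2 dominant colors
result.save("saved.png")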
limit the number of colors of an image to a specified number based on predominant colors in python
I want to process images in a way to limit the number of colors to a predetermined, specific number.
I tried using this method:
from PIL import Image
image = Image.open("input.png")
result = image.convert('P', palette=Image.ADAPTIVE, colors=2)
result.save("saved.png")

For some reason it used to work but now doesn't, and I'm pretty sure I didn't change anything.
Is there a fix or another method? Thanks.
[ "FIXED :\nthe problem is the color mode\nto be able to use this function you need first to convert the color mode of the image to RGB like this :\nimage = image.convert('RGB')\n\n" ]
[ 1 ]
[]
[]
[ "image_processing", "python", "python_imaging_library" ]
stackoverflow_0074464095_image_processing_python_python_imaging_library.txt
Q: OpenCV cv2 image to PyGame image?
def cvimage_to_pygame(image):
    """Convert cvimage into a pygame image"""
    return pygame.image.frombuffer(image.tostring(), image.shape[:2],
                                   "RGB")

The function takes a numpy array taken from the cv2 camera. When I display the returned PyGame image in a PyGame window, it appears as three broken images. I don't know why this is! Any help would be greatly appreciated.
Here's what happens: (Pygame on the left)

A: In the shape field the width and height parameters are swapped. Replace the argument:
image.shape[:2] # gives you (height, width) tuple

With 
image.shape[1::-1] # gives you (width, height) tuple

A: Another issue that I found: the colors are not right... This is because OpenCV images are in BGR (Blue Green Red), not RGB! So the right command is:
pygame.image.frombuffer(image.tostring(), image.shape[1::-1], "BGR")
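Putting both fixes together, a sketch of the corrected helper (tobytes() replaces the deprecated tostring(); cv2.cvtColor is one way to handle the BGR channel order):
import cv2
import pygame

def cvimage_to_pygame(image):
    """Convert an OpenCV BGR image (numpy array) into a pygame Surface."""
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # fix channel order
    size = rgb.shape[1::-1]                        # (width, height)
    return pygame.image.frombuffer(rgb.tobytes(), size, "RGB")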
OpenCV cv2 image to PyGame image?
def cvimage_to_pygame(image):
    """Convert cvimage into a pygame image"""
    return pygame.image.frombuffer(image.tostring(), image.shape[:2],
                                   "RGB")

The function takes a numpy array taken from the cv2 camera. When I display the returned PyGame image in a PyGame window, it appears as three broken images. I don't know why this is! Any help would be greatly appreciated.
Here's what happens: (Pygame on the left)
[ "In the shape field width and height parameters are swapped. Replace argument:\nimage.shape[:2] # gives you (height, width) tuple\n\nWith \nimage.shape[1::-1] # gives you (width, height) tuple\n\n", "An other issue that i found : Colors are not right... This is because open cv images are in BGR (Blue Green Red) not in RGB ! so the right command is :\npygame.image.frombuffer(image.tostring(), image.shape[1::-1], \"BGR\")\n\n" ]
[ 8, 0 ]
[]
[]
[ "numpy", "opencv", "pygame", "python" ]
stackoverflow_0019306211_numpy_opencv_pygame_python.txt
Q: Numpy - How to get an array of the pattern gamma^t for some 0-t?
I am creating a basic gridworld RL problem and I need to calculate the return for some given episode. I currently have the array of rewards, and I would like to element-wise multiply this with a list of the form:
[gamma**0, gamma**1, gamma**2, ....]
In order to get:
[r_0*gamma**0, r_1*gamma**1, r_2*gamma**2, ....]
and then use np.sum() to get the entire return.
How can I complete that first step? I tried using np.logspace, but it isn't quite what I want (or I'm doing it wrong).
A: If the reward array and gamma look like this, for example:
n = 20 
reward = np.random.randint(0, 10, n)
gamma = 2

then the return is simply:
np.sum(reward * (gamma ** np.arange(n)))
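For a discounted return, gamma is normally in (0, 1); a small sketch that also recovers the return from every timestep (variable names are illustrative):
import numpy as np

rewards = np.array([1.0, 0.0, 2.0, 3.0])
gamma = 0.9

discounts = gamma ** np.arange(len(rewards))    # [1, gamma, gamma**2, ...]
episode_return = np.sum(rewards * discounts)    # G_0 for the whole episode

# return from every timestep t (G_t), via a reversed cumulative sum
returns = np.flip(np.cumsum(np.flip(rewards * discounts))) / discounts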
Numpy - How to get an array of the pattern gamma^t for some 0-t?
I am creating a basic gridworld RL problem and I need to calculate the return for some given episode. I currently have the array of rewards, and I would like to element-wise multiply this with a list of the form: [gamma**0, gamma**1, gamma**2, ....] In order to get: [r_0*gamma**0, r_1*gamma**1, r_2*gamma**2, ....] and then use np.sum() to get the entire return. How can I complete that first step? I tried using Logspace, but it isn't quite what I want (or I'm doing it wrong).
[ "if the example if like this for reward array and gamma is some value:\nn = 20 \nreward = np.random.randint(0, 10, n)\ngamma = 2\n\nnp.sum(reward * (gamma ** np.arange(n)))\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "python", "reinforcement_learning" ]
stackoverflow_0074464029_arrays_numpy_python_reinforcement_learning.txt
Q: Django 3.1 - async views - working with querysets
Since 3.1 (currently in beta) Django has support for async views:
async def myview(request):
    users = User.objects.all()

This example will not work, since the ORM is not yet async-ready. So what's the current workaround?
You cannot just use sync_to_async on the queryset, as the queryset is not evaluated there:
from asgiref.sync import sync_to_async

async def myview(request):
    users = await sync_to_async(User.objects.all)()

So the only way is to evaluate the queryset inside sync_to_async:
async def myview(request):
    users = await sync_to_async(lambda: list(User.objects.all()))()

which looks very ugly. Any thoughts on how to make it nicer?
A: There is a common GOTCHA: Django querysets are lazily evaluated (the database query happens only when you start iterating),
so instead evaluate them explicitly (with list):
from asgiref.sync import sync_to_async

async def myview(request):
    users = await sync_to_async(list)(User.objects.all())

A: From Django 4.1, async for is supported on all QuerySets:
async def myview(request):
    async for user in User.objects.all():
        ...

more info: link
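A sketch of how an async view can read data on Django 4.1+ without sync_to_async (the auth User model and the JSON response are just stand-ins for the question's view):
from django.contrib.auth.models import User
from django.http import JsonResponse

async def myview(request):
    # async comprehension works on any QuerySet in Django 4.1+
    names = [user.username async for user in User.objects.all()]
    # single-object and aggregate lookups get async counterparts such as aget() and acount()
    total = await User.objects.acount()
    return JsonResponse({"total": total, "users": names})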
Django 3.1 - async views - working with querysets
Since 3.1 (currently in beta) Django has support for async views:
async def myview(request):
    users = User.objects.all()

This example will not work, since the ORM is not yet async-ready. So what's the current workaround?
You cannot just use sync_to_async on the queryset, as the queryset is not evaluated there:
from asgiref.sync import sync_to_async

async def myview(request):
    users = await sync_to_async(User.objects.all)()

So the only way is to evaluate the queryset inside sync_to_async:
async def myview(request):
    users = await sync_to_async(lambda: list(User.objects.all()))()

which looks very ugly. Any thoughts on how to make it nicer?
[ "There is a common GOTCHA: Django querysets are lazy evaluated (database query happens only when you start iterating):\nso instead - use evaluation (with list):\nfrom asgiref.sync import sync_to_async\n\nasync def myview(request):\n users = await sync_to_async(list)(User.objects.all())\n\n", "From Django 4.1 async for is supported on all QuerySets:\nasync def myview(request):\n async for user in User.objects.all():\n ...\n\nmore info: link\n" ]
[ 14, 1 ]
[]
[]
[ "asynchronous", "django", "django_3.1", "python" ]
stackoverflow_0062530017_asynchronous_django_django_3.1_python.txt
Q: Reportlab - How to add margin between Tables?
So I am trying to create three tables per page; the following code places all three tables right against each other with no margin between them. I would like some white space between the tables. Is there a configuration for that?
doc = SimpleDocTemplate("my.pdf", pagesize=A4)
elements = []
i = 0
for person in persons:
    data = get_data()
    t = Table(data, colWidths=col_widths, rowHeights=row_heights)
    elements.append(t)
    i = i + 1
    if i % 3 == 0:
        elements.append(PageBreak())

doc.build(elements)

A: You could try using the Spacer flowable to add space between the tables. An example of its use from the documentation is:
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

def go():
    doc = SimpleDocTemplate("hello.pdf")
    Story = [Spacer(1,2*inch)]

    for i in range(100):
        bogustext = ("This is Paragraph number %s. " % i) *20
        p = Paragraph(bogustext, style)
        Story.append(p)
        Story.append(Spacer(1,0.2*inch))
    doc.build(Story, onFirstPage=myFirstPage, onLaterPages=myLaterPages)
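Applied to the question's own loop, a sketch that inserts a Spacer after each table (persons, get_data, col_widths and row_heights come from the question; the 0.5*inch gap is an arbitrary choice):
from reportlab.lib.pagesizes import A4
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Table, Spacer, PageBreak

doc = SimpleDocTemplate("my.pdf", pagesize=A4)
elements = []
for i, person in enumerate(persons, start=1):
    data = get_data()
    elements.append(Table(data, colWidths=col_widths, rowHeights=row_heights))
    if i % 3 == 0:
        elements.append(PageBreak())            # new page after every third table
    else:
        elements.append(Spacer(1, 0.5 * inch))  # white space between tables
doc.build(elements)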
Reportlab - How to add margin between Tables?
So I am trying to create three tables per page; the following code places all three tables right against each other with no margin between them. I would like some white space between the tables. Is there a configuration for that?
doc = SimpleDocTemplate("my.pdf", pagesize=A4)
elements = []
i = 0
for person in persons:
    data = get_data()
    t = Table(data, colWidths=col_widths, rowHeights=row_heights)
    elements.append(t)
    i = i + 1
    if i % 3 == 0:
        elements.append(PageBreak())

doc.build(elements)
[ "You could try using the Spacer function to add space between the tables. An example of its use from the documentation is:\nfrom reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer\n\ndef go():\n doc = SimpleDocTemplate(\"hello.pdf\")\n Story = [Spacer(1,2*inch)]\n\n for i in range(100):\n bogustext = (\"This is Paragraph number %s. \" % i) *20\n p = Paragraph(bogustext, style)\n Story.append(p)\n Story.append(Spacer(1,0.2*inch))\n doc.build(Story, onFirstPage=myFirstPage, onLaterPages=myLaterPages)\n\n" ]
[ 1 ]
[]
[]
[ "python", "reportlab" ]
stackoverflow_0074463969_python_reportlab.txt
Q: How to create an application which embeds and runs Python code without local Python installation? Hello fellow software developers. I want to distribute a C program which is scriptable by embedding the Python interpreter. The C program uses Py_Initialize, PyImport_Import and so on to accomplish Python embedding. I'm looking for a solution where I distribute only the following components: my program executable and its libraries the Python library (dll/so) a ZIP-file containing all necessary Python modules and libraries. How can I accomplish this? Is there a step-by-step recipe for that? The solution should be suitable for both Windows and Linux. Thanks in advance. A: Have you looked at Python's official documentation : Embedding Python into another application? There's also this really nice PDF by IBM : Embed Python scripting in C application. You should be able to do what you want using those two resources. A: I simply tested my executable on a computer which hasn't Python installed and it worked. When you link Python to your executable (no matter if dynamically or statically) your executable already gains basic Python language functionality (operators, methods, basic structures like string, list, tuple, dict, etc.) WITHOUT any other dependancy. Then I let Python's setup.py compile a Python source distribution via python setup.py sdist --format=zip which gave me a ZIP file I named pylib-2.6.4.zip. My further steps were: char pycmd[1000]; // temporary buffer for forged Python script lines ... Py_NoSiteFlag=1; Py_SetProgramName(argv[0]); Py_SetPythonHome(directoryWhereMyOwnPythonScriptsReside); Py_InitializeEx(0); // forge Python command to set the lookup path // add the zipped Python distribution library to the search path as well snprintf( pycmd, sizeof(pycmd), "import sys; sys.path = ['%s/pylib-2.6.4.zip','%s']", applicationDirectory, directoryWhereMyOwnPythonScriptsReside ); // ... and execute PyRun_SimpleString(pycmd); // now all succeeding Python import calls should be able to // find the other modules, especially those in the zipped library ... A: Did you take a look at Portable Python ? No need to install anything. Just copy the included files to use the interpreter. Edit : This is a Windows only solution. A: Have you looked at Embedding Python in Another Application in the Python documentation? Once you have that, you can use an import hook (see PEP 302) to have your embedded Python code load modules from whatever place you choose. If you have everything in one zipfile, though, you probably just need to make it the only entry on sys.path. A: I think here is the answer you want Unable to get python embedded to work with zip'd library Basically, you need: Py_NoSiteFlag=1; Py_SetProgramName(argv[0]); Py_SetPythonHome("."); Py_InitializeEx(0); PyRun_SimpleString("import sys"); PyRun_SimpleString("sys.path = ['.','python27.zip','python27.zip/DLLs','python27.zip/Lib','python27.zip/site-packages']"); in your c/c++ code for loading the python standard library. And in your python27.zip, all .py source code are located at python27.zip/Lib as described in the sys.path variable. Hope this helps. A: There's a program called py2exe. I don't know if it's only available for Windows. Also, the latest version that I used does not wrap everything up into one .exe file. It creates a bunch of stuff that has to be distributed - a zip file, etc.. A: You can compile to one .exe file using pyinstaller pip install pyinstaller pyinstaller --onefile urPythonScriptName.py
How to create an application which embeds and runs Python code without local Python installation?
Hello fellow software developers. I want to distribute a C program which is scriptable by embedding the Python interpreter. The C program uses Py_Initialize, PyImport_Import and so on to accomplish Python embedding. I'm looking for a solution where I distribute only the following components: my program executable and its libraries the Python library (dll/so) a ZIP-file containing all necessary Python modules and libraries. How can I accomplish this? Is there a step-by-step recipe for that? The solution should be suitable for both Windows and Linux. Thanks in advance.
[ "Have you looked at Python's official documentation : Embedding Python into another application?\nThere's also this really nice PDF by IBM : Embed Python scripting in C application.\nYou should be able to do what you want using those two resources.\n", "I simply tested my executable on a computer which hasn't Python installed and it worked.\nWhen you link Python to your executable (no matter if dynamically or statically) your executable already gains basic Python language functionality (operators, methods, basic structures like string, list, tuple, dict, etc.) WITHOUT any other dependancy.\nThen I let Python's setup.py compile a Python source distribution via python setup.py sdist --format=zip which gave me a ZIP file I named pylib-2.6.4.zip.\nMy further steps were:\nchar pycmd[1000]; // temporary buffer for forged Python script lines\n...\nPy_NoSiteFlag=1;\nPy_SetProgramName(argv[0]);\nPy_SetPythonHome(directoryWhereMyOwnPythonScriptsReside);\nPy_InitializeEx(0);\n\n// forge Python command to set the lookup path\n// add the zipped Python distribution library to the search path as well\nsnprintf(\n pycmd,\n sizeof(pycmd),\n \"import sys; sys.path = ['%s/pylib-2.6.4.zip','%s']\",\n applicationDirectory,\n directoryWhereMyOwnPythonScriptsReside\n);\n\n// ... and execute\nPyRun_SimpleString(pycmd);\n\n// now all succeeding Python import calls should be able to\n// find the other modules, especially those in the zipped library\n\n...\n\n", "Did you take a look at Portable Python ? No need to install anything. Just copy the included files to use the interpreter. \nEdit : This is a Windows only solution.\n", "Have you looked at Embedding Python in Another Application in the Python documentation?\nOnce you have that, you can use an import hook (see PEP 302) to have your embedded Python code load modules from whatever place you choose. If you have everything in one zipfile, though, you probably just need to make it the only entry on sys.path.\n", "I think here is the answer you want Unable to get python embedded to work with zip'd library\nBasically, you need:\nPy_NoSiteFlag=1;\nPy_SetProgramName(argv[0]);\nPy_SetPythonHome(\".\");\nPy_InitializeEx(0);\nPyRun_SimpleString(\"import sys\");\nPyRun_SimpleString(\"sys.path = ['.','python27.zip','python27.zip/DLLs','python27.zip/Lib','python27.zip/site-packages']\");\n\nin your c/c++ code for loading the python standard library.\nAnd in your python27.zip, all .py source code are located at python27.zip/Lib as described in the sys.path variable. \nHope this helps.\n", "There's a program called py2exe. I don't know if it's only available for Windows. Also, the latest version that I used does not wrap everything up into one .exe file. It creates a bunch of stuff that has to be distributed - a zip file, etc..\n", "You can compile to one .exe file using pyinstaller\npip install pyinstaller\n\npyinstaller --onefile urPythonScriptName.py\n\n" ]
[ 6, 6, 1, 1, 1, 0, 0 ]
[]
[]
[ "c", "distribution", "dll", "python" ]
stackoverflow_0002494468_c_distribution_dll_python.txt
Q: Why is Apache Beam `DoFn.setup()` called more then once after worker startup? I am currently experimenting with a streaming Dataflow pipeline (in Python). I read a stream of data which I like to write into a PG CloudSQL instance. To do so, I am looking for a proper place to create the database connection. As I am writing the data using a ParDo function, I'd thought the DoFn.setup() would be a good place. According to multiple resources, this should be a good place as setup() is only called once (when the worker starts). I ran some tests, but it seems that setup() is called way more often then only on initialization of the worker. It seems to run just as much as start_bundle() (which is after so many elements). I created a simple pipeline that reads some messages from PubSub, extracts an object's filename and outputs the filename. Besides that, it logs the times that setup() and start_bundle() are being called: import argparse import logging from datetime import datetime import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions setup_counter=0 bundle_counter=0 class GetFileName(beam.DoFn): """ Generate file path from PubSub message attributes """ def _now(self): return datetime.now().strftime("%Y/%m/%d %H:%M:%S") def setup(self): global setup_counter moment = self._now() logging.info("setup() called %s" % moment) setup_counter=setup_counter+1 logging.info(f"""setup_counter = {setup_counter}""") def start_bundle(self): global bundle_counter moment = self._now() logging.info("Bundle started %s" % moment) bundle_counter=bundle_counter+1 logging.info(f"""Bundle_counter = {bundle_counter}""") def process(self, element): attr = dict(element.attributes) objectid = attr["objectId"] # not sure if this is the prettiest way to create this uri, but works for the poc path = f'{objectid}' yield path def run(input_subscription, pipeline_args=None): pipeline_options = PipelineOptions( pipeline_args, streaming=True ) with beam.Pipeline(options=pipeline_options) as pipeline: files = (pipeline | "Read from PubSub" >> beam.io.ReadFromPubSub(subscription=input_subscription, with_attributes=True) | "Get filepath" >> beam.ParDo(GetFileName()) ) files | "Print results" >> beam.Map(logging.info) if __name__ == "__main__": logging.getLogger().setLevel(logging.INFO) parser = argparse.ArgumentParser() parser.add_argument( "--input_subscription", dest="input_subscription", required=True, help="The Cloud Pub/Sub subscription to read from." ) known_args, pipeline_args = parser.parse_known_args() run( known_args.input_subscription, pipeline_args ) Based on this, I would expect to see that setup() is only logged once (after starting the pipeline) and start_bundle() an arbitrary amount of times, when running this job on DirectRunner. However, it seems that setup() is called just as much as start_bundle(). Looking at the logs: python main.py \ > --runner DirectRunner \ > --input_subscription <my_subscription> \ > --direct_num_workers 1 \ > --streaming true ... 
INFO:root:setup() called 2022/11/16 15:11:13 INFO:root:setup_counter = 1 INFO:root:Bundle started 2022/11/16 15:11:13 INFO:root:Bundle_counter = 1 INFO:root:avro/20221116135543584-hlgeinp.avro INFO:root:avro/20221116135543600-hlsusop.avro INFO:root:avro/20221116135543592-hlmvtgp.avro INFO:root:avro/20221116135543597-hlsuppp.avro INFO:root:avro/20221116135553122-boevtdp.avro INFO:root:avro/20221116135553126-bomipep.avro INFO:root:avro/20221116135553127-hlsuppp.avro INFO:root:avro/20221116135155024-boripep.avro INFO:root:avro/20221116135155020-bolohdp.avro INFO:root:avro/20221116135155029-hlmvaep.avro ... INFO:root:setup() called 2022/11/16 15:11:16 INFO:root:setup_counter = 2 INFO:root:Bundle started 2022/11/16 15:11:16 INFO:root:Bundle_counter = 2 INFO:root:high-volume/20221112234700584-hlprenp.avro INFO:root:high-volume/20221113011240903-hlprenp.avro INFO:root:high-volume/20221113010654305-hlprenp.avro INFO:root:high-volume/20221113010822785-hlprenp.avro INFO:root:high-volume/20221113010927402-hlprenp.avro INFO:root:high-volume/20221113011248805-hlprenp.avro INFO:root:high-volume/20221112234730001-hlprenp.avro INFO:root:high-volume/20221112234738994-hlprenp.avro INFO:root:high-volume/20221113010956395-hlprenp.avro INFO:root:high-volume/20221113011648293-hlprenp.avro ... INFO:root:setup() called 2022/11/16 15:11:18 INFO:root:setup_counter = 3 INFO:root:Bundle started 2022/11/16 15:11:18 INFO:root:Bundle_counter = 3 INFO:root:high-volume/20221113012008604-hlprenp.avro INFO:root:high-volume/20221113011337394-hlprenp.avro INFO:root:high-volume/20221113011307598-hlprenp.avro INFO:root:high-volume/20221113011345403-hlprenp.avro INFO:root:high-volume/20221113012000982-hlprenp.avro INFO:root:high-volume/20221113011712190-hlprenp.avro INFO:root:high-volume/20221113011640005-hlprenp.avro INFO:root:high-volume/20221113012751380-hlprenp.avro INFO:root:high-volume/20221113011914286-hlprenp.avro INFO:root:high-volume/20221113012439206-hlprenp.avro Can someone clarify this behavior? I am wondering whether my understanding of setup()'s functionality is incorrect or whether this can be explained in another way. Because based on this test, it seems that setup() is not a great place to setup a DB connection. A: According to the Beam documentation, the setup method can be invoked more that once : DoFn.setup(): Called whenever the DoFn instance is deserialized on the worker. This means it can be called more than once per worker because multiple instances of a given DoFn subclass may be created (e.g., due to parallelization, or due to garbage collection after a period of disuse). This is a good place to connect to database instances, open network connections or other resources. But it still remains the best place to instantiate and create a connection pool for a database. The teardown is the best place to close the connections per worker. DoFn.teardown(): Called once (as a best effort) per DoFn instance when the DoFn instance is shutting down. This is a good place to close database instances, close network connections or other resources. Note that teardown is called as a best effort and is not guaranteed. For example, if the worker crashes, teardown might not be called.
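A sketch of that setup/teardown pairing for a per-instance database connection (psycopg2, the table name and the connection details are placeholder choices):
import apache_beam as beam
import psycopg2

class WriteToPostgres(beam.DoFn):
    def setup(self):
        # may run more than once per worker, but only once per DoFn instance
        self.conn = psycopg2.connect(host="...", dbname="...", user="...", password="...")

    def process(self, element):
        with self.conn.cursor() as cur:
            cur.execute("INSERT INTO files (path) VALUES (%s)", (element,))
        self.conn.commit()
        yield element

    def teardown(self):
        # best effort: close the connection when the instance shuts down
        conn = getattr(self, "conn", None)
        if conn is not None:
            conn.close()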
Why is Apache Beam `DoFn.setup()` called more than once after worker startup?
I am currently experimenting with a streaming Dataflow pipeline (in Python). I read a stream of data which I like to write into a PG CloudSQL instance. To do so, I am looking for a proper place to create the database connection. As I am writing the data using a ParDo function, I'd thought the DoFn.setup() would be a good place. According to multiple resources, this should be a good place as setup() is only called once (when the worker starts). I ran some tests, but it seems that setup() is called way more often then only on initialization of the worker. It seems to run just as much as start_bundle() (which is after so many elements). I created a simple pipeline that reads some messages from PubSub, extracts an object's filename and outputs the filename. Besides that, it logs the times that setup() and start_bundle() are being called: import argparse import logging from datetime import datetime import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions setup_counter=0 bundle_counter=0 class GetFileName(beam.DoFn): """ Generate file path from PubSub message attributes """ def _now(self): return datetime.now().strftime("%Y/%m/%d %H:%M:%S") def setup(self): global setup_counter moment = self._now() logging.info("setup() called %s" % moment) setup_counter=setup_counter+1 logging.info(f"""setup_counter = {setup_counter}""") def start_bundle(self): global bundle_counter moment = self._now() logging.info("Bundle started %s" % moment) bundle_counter=bundle_counter+1 logging.info(f"""Bundle_counter = {bundle_counter}""") def process(self, element): attr = dict(element.attributes) objectid = attr["objectId"] # not sure if this is the prettiest way to create this uri, but works for the poc path = f'{objectid}' yield path def run(input_subscription, pipeline_args=None): pipeline_options = PipelineOptions( pipeline_args, streaming=True ) with beam.Pipeline(options=pipeline_options) as pipeline: files = (pipeline | "Read from PubSub" >> beam.io.ReadFromPubSub(subscription=input_subscription, with_attributes=True) | "Get filepath" >> beam.ParDo(GetFileName()) ) files | "Print results" >> beam.Map(logging.info) if __name__ == "__main__": logging.getLogger().setLevel(logging.INFO) parser = argparse.ArgumentParser() parser.add_argument( "--input_subscription", dest="input_subscription", required=True, help="The Cloud Pub/Sub subscription to read from." ) known_args, pipeline_args = parser.parse_known_args() run( known_args.input_subscription, pipeline_args ) Based on this, I would expect to see that setup() is only logged once (after starting the pipeline) and start_bundle() an arbitrary amount of times, when running this job on DirectRunner. However, it seems that setup() is called just as much as start_bundle(). Looking at the logs: python main.py \ > --runner DirectRunner \ > --input_subscription <my_subscription> \ > --direct_num_workers 1 \ > --streaming true ... INFO:root:setup() called 2022/11/16 15:11:13 INFO:root:setup_counter = 1 INFO:root:Bundle started 2022/11/16 15:11:13 INFO:root:Bundle_counter = 1 INFO:root:avro/20221116135543584-hlgeinp.avro INFO:root:avro/20221116135543600-hlsusop.avro INFO:root:avro/20221116135543592-hlmvtgp.avro INFO:root:avro/20221116135543597-hlsuppp.avro INFO:root:avro/20221116135553122-boevtdp.avro INFO:root:avro/20221116135553126-bomipep.avro INFO:root:avro/20221116135553127-hlsuppp.avro INFO:root:avro/20221116135155024-boripep.avro INFO:root:avro/20221116135155020-bolohdp.avro INFO:root:avro/20221116135155029-hlmvaep.avro ... 
INFO:root:setup() called 2022/11/16 15:11:16 INFO:root:setup_counter = 2 INFO:root:Bundle started 2022/11/16 15:11:16 INFO:root:Bundle_counter = 2 INFO:root:high-volume/20221112234700584-hlprenp.avro INFO:root:high-volume/20221113011240903-hlprenp.avro INFO:root:high-volume/20221113010654305-hlprenp.avro INFO:root:high-volume/20221113010822785-hlprenp.avro INFO:root:high-volume/20221113010927402-hlprenp.avro INFO:root:high-volume/20221113011248805-hlprenp.avro INFO:root:high-volume/20221112234730001-hlprenp.avro INFO:root:high-volume/20221112234738994-hlprenp.avro INFO:root:high-volume/20221113010956395-hlprenp.avro INFO:root:high-volume/20221113011648293-hlprenp.avro ... INFO:root:setup() called 2022/11/16 15:11:18 INFO:root:setup_counter = 3 INFO:root:Bundle started 2022/11/16 15:11:18 INFO:root:Bundle_counter = 3 INFO:root:high-volume/20221113012008604-hlprenp.avro INFO:root:high-volume/20221113011337394-hlprenp.avro INFO:root:high-volume/20221113011307598-hlprenp.avro INFO:root:high-volume/20221113011345403-hlprenp.avro INFO:root:high-volume/20221113012000982-hlprenp.avro INFO:root:high-volume/20221113011712190-hlprenp.avro INFO:root:high-volume/20221113011640005-hlprenp.avro INFO:root:high-volume/20221113012751380-hlprenp.avro INFO:root:high-volume/20221113011914286-hlprenp.avro INFO:root:high-volume/20221113012439206-hlprenp.avro Can someone clarify this behavior? I am wondering whether my understanding of setup()'s functionality is incorrect or whether this can be explained in another way. Because based on this test, it seems that setup() is not a great place to setup a DB connection.
[ "According to the Beam documentation, the setup method can be invoked more that once :\nDoFn.setup(): Called whenever the DoFn instance is deserialized on the worker. \nThis means it can be called more than once per worker because multiple instances of a given DoFn subclass may be created \n(e.g., due to parallelization, or due to garbage collection \nafter a period of disuse). \nThis is a good place to connect to database instances, open network connections or other resources.\n\nBut it still remains the best place to instantiate and create a connection pool for a database.\nThe teardown is the best place to close the connections per worker.\nDoFn.teardown(): Called once (as a best effort) per DoFn instance when the DoFn instance is shutting down. \nThis is a good place to close database instances, close network connections or other resources.\n\nNote that teardown is called as a best effort and is not guaranteed. For example, \nif the worker crashes, teardown might not be called.\n\n" ]
[ 2 ]
[]
[]
[ "apache_beam", "google_cloud_dataflow", "python" ]
stackoverflow_0074462039_apache_beam_google_cloud_dataflow_python.txt
Q: Upload file to Google bucket directly from SFTP server using Python
I am trying to upload a file from an SFTP server to a GCS bucket using a cloud function, but this code is not working. I am able to connect over SFTP, but when I try to upload the file to the GCS bucket it doesn't work, and the requirement is to use a cloud function with Python. Any help will be appreciated. Here is the sample code I am trying; everything works except sftp.get("test_report.csv", bucket_destination). Please help.
destination_bucket ="gs://test-bucket/reports"
with pysftp.Connection(host, username, password=sftp_password) as sftp:
    print ("Connection successfully established ... ")
    # Switch to a remote directory
    sftp.cwd('/test/outgoing/')
    bucket_destination = "destination_bucket"
    sftp.cwd('/test/outgoing/')
    if sftp.exists("test_report.csv"):
        sftp.get("test_report.csv", bucket_destination)
    else:
        print("doesnt exist")

A: pysftp cannot write to GCS directly.
Imo, you cannot actually upload a file directly from SFTP to GCP anyhow, at least not from code running on yet another machine. But you can transfer the file without storing it on the intermediate machine, using pysftp Connection.open (or better, Paramiko SFTPClient.open) and the GCS API Blob.upload_from_file. That's what many actually mean by "directly".
client = storage.Client(credentials=credentials, project='myproject')
bucket = client.get_bucket('mybucket')
blob = bucket.blob('test_report.csv')

with sftp.open('test_report.csv', bufsize=32768) as f:
    blob.upload_from_file(f)

For the rest of the GCP code, see How to upload a file to Google Cloud Storage on Python 3?
For the purpose of bufsize, see Reading file opened with Python Paramiko SFTPClient.open method is slow.

Consider not using pysftp, it's a dead project. Use Paramiko directly (the code will be mostly the same). See pysftp vs. Paramiko.
A: I got the solution based on your reply, thanks. Here is the code:
bucket = client.get_bucket(destination_bucket)
blob = bucket.blob(destination_folder + filename)

with sftp.open(sftp_filename, bufsize=32768) as f:
    blob.upload_from_file(f)

A: This is exactly what we built SFTP Gateway for. We have a lot of customers that still want to use SFTP, but we needed to write files directly to Google Cloud Storage. Files don't get saved temporarily on another machine. The data is streamed directly from the SFTP client (python, filezilla, or any other client), straight to GCS.
https://console.cloud.google.com/marketplace/product/thorn-technologies-public/sftp-gateway?project=thorn-technologies-public
Full disclosure, this is our product and we use it for all our consulting clients. We are happy to help you get it set up if you want to try it.
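Since the first answer suggests switching to Paramiko, a sketch of the same streaming transfer without pysftp (host, credentials and paths are placeholders):
import paramiko
from google.cloud import storage

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

client = storage.Client()
bucket = client.get_bucket("test-bucket")
blob = bucket.blob("reports/test_report.csv")

# stream straight from SFTP into the bucket, no local temp file
with sftp.open("/test/outgoing/test_report.csv", bufsize=32768) as f:
    blob.upload_from_file(f)

sftp.close()
transport.close()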
Upload file to Google bucket directly from SFTP server using Python
I am trying to upload a file from an SFTP server to a GCS bucket using a cloud function, but this code is not working. I am able to connect over SFTP, but when I try to upload the file to the GCS bucket it doesn't work, and the requirement is to use a cloud function with Python. Any help will be appreciated. Here is the sample code I am trying; everything works except sftp.get("test_report.csv", bucket_destination). Please help.
destination_bucket ="gs://test-bucket/reports"
with pysftp.Connection(host, username, password=sftp_password) as sftp:
    print ("Connection successfully established ... ")
    # Switch to a remote directory
    sftp.cwd('/test/outgoing/')
    bucket_destination = "destination_bucket"
    sftp.cwd('/test/outgoing/')
    if sftp.exists("test_report.csv"):
        sftp.get("test_report.csv", bucket_destination)
    else:
        print("doesnt exist")
[ "The pysftp cannot work with GCP directly.\nImo, you cannot actually upload a file directly from SFTP to GCP anyhow, at least not from a code running on yet another machine. But you can transfer the file without storing it on the intermediate machine, using pysftp Connection.open (or better using Paramiko SFTPClient.open) and GCS API Blob.upload_from_file. That's what many actually mean by \"directly\".\nclient = storage.Client(credentials=credentials, project='myproject')\nbucket = client.get_bucket('mybucket')\nblob = bucket.blob('test_report.csv')\n\nwith sftp.open('test_report.csv', bufsize=32768) as f:\n blob.upload_from_file(f)\n\nFor the rest of the GCP code, see How to upload a file to Google Cloud Storage on Python 3?\nFor the purpose of bufsize, see Reading file opened with Python Paramiko SFTPClient.open method is slow.\n\nConsider not using pysftp, it's dead project. Use Paramiko directly (the code will be mostly the same). See pysftp vs. Paramiko.\n", "i got the solution based on your reply thanks here is the code\n bucket = client.get_bucket(destination_bucket)\n `blob = bucket.blob(destination_folder +filename)\n\n with sftp.open(sftp_filename, bufsize=32768) as f:\n blob.upload_from_file(f)\n\n", "This is exactly what we built SFTP Gateway for. We have a lot of customers that still want to use SFTP, but we needed to write files directly to Google Cloud Storage. Files don't get saved temporarily on another machine. The data is streamed directly from the SFTP Client (python, filezilla, or any other client), straight to GCS.\nhttps://console.cloud.google.com/marketplace/product/thorn-technologies-public/sftp-gateway?project=thorn-technologies-public\nFull disclosure, this is our product and we use it for all our consulting clients. We are happy to help you get it setup if you want to try it.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "gcs", "pysftp", "python", "sftp" ]
stackoverflow_0070911611_gcs_pysftp_python_sftp.txt
Q: Send image from Flask to React I'm trying to send a randomly generated image from a flask API to my React frontend. I started by just saving the image every time I generate it to the file system then trying to access it with react but this doesn't work with the production build. Now I'm using flask's send_file(), but I'm not sure of what I'm doing wrong on the frontend since this is my first React project. In flask I have: @main.route('/get-image', methods=['GET']) def get_image(): image_array = generate_random_image() img = Image.fromarray(image_array) img.save('img.png') return send_file('img.png', 'image/png') and on the front end class App extends React.Component { constructor(props) { super(props) this.state = { image: '/get-image' }; } updateImage() { fetch("/get-image").then(response => { this.setState({image: /** not sure what to have here **/}); }) } render() { return ( <div className="App"> <Container> <Row> <Col> <Image src={this.state.image} rounded/> </Col> </Row> <Row> <Container> <Container> <Row> <Col> <Button onClick={() => this.updateImage()}>Original</Button> </Col> </Row> </Container> </div> ); } } It might be worth noting that I'm running React on localhost:3000 and flask on localhost:5000. I have "proxy": "http://localhost:5000" in my package.json in the React directory. Any advice? I tried a bunch of things on the react end but none worked. Am I doing something fundamentally wrong? A: If you want your Flask endpoint to return an image, you don't need to first save it to a file. You can return the image directly after setting appropriate header(s), most importantly content-type: from flask import request @app.route("/get-image") def image_endpoint(): ... request.headers["content-type"] = "image/png" image = get_random_png_image_as_bytes() return image Then you can load image directly: <img src="http://localhost:5000/get-image" />
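Note that Flask's request.headers is read-only, so the content type has to go on the response instead. A sketch of the Flask side that streams the generated PNG from memory (generate_random_image is the question's own helper):
import io
from flask import Flask, send_file
from PIL import Image

app = Flask(__name__)

@app.route("/get-image")
def get_image():
    img = Image.fromarray(generate_random_image())
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

On the React side, the fetch callback can read the body with response.blob() and store URL.createObjectURL(blob) in this.state.image, or you can skip fetch entirely and point the img src straight at /get-image with a cache-busting query parameter.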
Send image from Flask to React
I'm trying to send a randomly generated image from a flask API to my React frontend. I started by just saving the image every time I generate it to the file system then trying to access it with react but this doesn't work with the production build. Now I'm using flask's send_file(), but I'm not sure of what I'm doing wrong on the frontend since this is my first React project. In flask I have: @main.route('/get-image', methods=['GET']) def get_image(): image_array = generate_random_image() img = Image.fromarray(image_array) img.save('img.png') return send_file('img.png', 'image/png') and on the front end class App extends React.Component { constructor(props) { super(props) this.state = { image: '/get-image' }; } updateImage() { fetch("/get-image").then(response => { this.setState({image: /** not sure what to have here **/}); }) } render() { return ( <div className="App"> <Container> <Row> <Col> <Image src={this.state.image} rounded/> </Col> </Row> <Row> <Container> <Container> <Row> <Col> <Button onClick={() => this.updateImage()}>Original</Button> </Col> </Row> </Container> </div> ); } } It might be worth noting that I'm running React on localhost:3000 and flask on localhost:5000. I have "proxy": "http://localhost:5000" in my package.json in the React directory. Any advice? I tried a bunch of things on the react end but none worked. Am I doing something fundamentally wrong?
[ "If you want your Flask endpoint to return an image, you don't need to first save it to a file. You can return the image directly after setting appropriate header(s), most importantly content-type:\nfrom flask import request\n\n@app.route(\"/get-image\")\ndef image_endpoint():\n ...\n request.headers[\"content-type\"] = \"image/png\"\n image = get_random_png_image_as_bytes()\n return image\n\nThen you can load image directly:\n<img src=\"http://localhost:5000/get-image\" />\n\n" ]
[ 0 ]
[]
[]
[ "flask", "image", "python", "reactjs" ]
stackoverflow_0059084001_flask_image_python_reactjs.txt
Q: How to remove a certain string before printing? answer = input('Enter a number: ') x = 10**(len(answer) - 1) print(answer, end = ' = ') for i in answer: if '0' in i: x = x//10 continue else: print('(' + i + ' * ' + str(x) + ')' , end = '') x = x//10 print(' + ', end = '') so i have this problem, when i enter any number, everything is great but at the end there is an extra ' + ' that i do not want. Now normally this wouldnt be an issue with lists and .remove function, however i am not allowed to use these for this problem. I cannot come up with any sort of solution that does not involve functions I tried matching the length but it didnt work because of '0' A: you can insert an extra condition in the else block: else: print('(' + i + ' * ' + str(x) + ')' , end = '') x = x//10 if x: print(' + ', end = '') this will help not to insert the last plus when it is not needed A: The error is that there is an extra '+' at the end of the output. This can be fixed by adding an 'if' statement to the end of the code that checks if the last character in the output is a '+' and, if so, removes it. A: Well Valery's answer is the best just add one more condition in case the answer was a 10 multiple if x and int(answer)%10 != 0: A: This kind of problem is best solved using the str.join() method. answer = input("Enter a number: ") x = 10**(len(answer) - 1) terms = [] for i in answer: if i == "0": x = x//10 continue else: terms.append(f"({i} * {x})") x = x//10 print(f"{answer} =", " + ".join(terms)) Sample interaction: Enter a number: 1025 1025 = (1 * 1000) + (2 * 10) + (5 * 1) Notes We build up the terms by appending them into the list terms At the end of the for loop, given 1025 as the input, the terms looks like this ['(1 * 1000)', '(2 * 10)', '(5 * 1)'] Update Here is a patch of your original solution: answer = input('Enter a number: ') x = 10**(len(answer) - 1) print(answer, end = ' = ') for i in answer: if '0' in i: x = x//10 continue else: print('(' + i + ' * ' + str(x) + ')' , end = '') x = x//10 if x == 0: print() else: print(' + ', end = '') The difference is in the last 4 lines where x (poor name, by the way), reaches 0, we know that we should not add any more plus signs. A: answer = input('Enter a number: ') #finds length of answer length = 0 for n in answer: length += 1 loop_counter = 0 ## zero_counter = 0 multiple_of_ten = 10 #finds if answer is multiple of 10 and if so by what magnitude while True: if int(answer) % multiple_of_ten == 0: #counts the zeroes aka multiple of 10 zero_counter += 1 multiple_of_ten = multiple_of_ten*10 else: break #finds the multiple of 10 needed for print output x = 10**(length - 1) print(answer, end = ' = ') for i in answer: # if its a 0 it will skip if '0' in i: x = x//10 #still divises x by 10 for the next loop pass else: print('(' + i + ' * ' + str(x) + ')' , end = '') x = x//10 #if position in loop and zeroes remaining plus one is equal to #the length of the integer provided, it means all the reamining #digits are 0 if loop_counter + zero_counter + 1 == length: break else: #adds ' + ' between strings print(' + ', end = '') # keeps track of position in loop loop_counter += 1 ended up implementing a counter to see how many zeroes there are, and a counter to see where we are in the for loop and stop the loop when its the same as amount of zeroes remaining A: I tested this code and it worked fine if x and int(answer)%10 != 0: Enter a number: 25 25 = (2 * 10) + (5 * 1) Enter a number: 1000 1000 = (1 * 1000) Enter a number: 117 117 = (1 * 100) + (1 * 10) + (7 * 1)
How to remove a certain string before printing?
answer = input('Enter a number: ')
x = 10**(len(answer) - 1)
print(answer, end = ' = ')
for i in answer: 
    if '0' in i:
        x = x//10
        continue
    else:
        print('(' + i + ' * ' + str(x) + ')' , end = '')
        x = x//10
        print(' + ', end = '')

So I have this problem: when I enter any number, everything is great, but at the end there is an extra ' + ' that I do not want. Normally this wouldn't be an issue with lists and the .remove function; however, I am not allowed to use these for this problem. I cannot come up with any sort of solution that does not involve functions.
I tried matching the length but it didn't work because of '0'
[ "you can insert an extra condition in the else block:\nelse:\n print('(' + i + ' * ' + str(x) + ')' , end = '')\n x = x//10\n if x:\n print(' + ', end = '')\n\nthis will help not to insert the last plus when it is not needed\n", "The error is that there is an extra '+' at the end of the output. This can be fixed by adding an 'if' statement to the end of the code that checks if the last character in the output is a '+' and, if so, removes it.\n", "Well Valery's answer is the best just add one more condition in case the answer was a 10 multiple\nif x and int(answer)%10 != 0:\n\n", "This kind of problem is best solved using the str.join() method.\nanswer = input(\"Enter a number: \")\nx = 10**(len(answer) - 1)\n\nterms = []\nfor i in answer: \n if i == \"0\":\n x = x//10\n continue\n else:\n terms.append(f\"({i} * {x})\")\n x = x//10\n\nprint(f\"{answer} =\", \" + \".join(terms))\n\nSample interaction:\nEnter a number: 1025\n1025 = (1 * 1000) + (2 * 10) + (5 * 1)\n\nNotes\n\nWe build up the terms by appending them into the list terms\n\nAt the end of the for loop, given 1025 as the input, the terms looks like this\n['(1 * 1000)', '(2 * 10)', '(5 * 1)']\n\n\nUpdate\nHere is a patch of your original solution:\nanswer = input('Enter a number: ')\nx = 10**(len(answer) - 1)\nprint(answer, end = ' = ')\nfor i in answer: \n if '0' in i:\n x = x//10\n continue\n else:\n print('(' + i + ' * ' + str(x) + ')' , end = '')\n x = x//10\n if x == 0:\n print()\n else:\n print(' + ', end = '')\n\nThe difference is in the last 4 lines where x (poor name, by the way), reaches 0, we know that we should not add any more plus signs.\n", "answer = input('Enter a number: ')\n\n#finds length of answer\nlength = 0\nfor n in answer:\n length += 1\nloop_counter = 0\n##\n\n\nzero_counter = 0\nmultiple_of_ten = 10\n\n#finds if answer is multiple of 10 and if so by what magnitude\nwhile True: \n if int(answer) % multiple_of_ten == 0:\n #counts the zeroes aka multiple of 10\n zero_counter += 1\n multiple_of_ten = multiple_of_ten*10\n else:\n break\n\n#finds the multiple of 10 needed for print output\nx = 10**(length - 1)\n\nprint(answer, end = ' = ')\nfor i in answer:\n # if its a 0 it will skip\n if '0' in i:\n x = x//10\n #still divises x by 10 for the next loop\n pass\n else:\n print('(' + i + ' * ' + str(x) + ')' , end = '')\n x = x//10\n \n #if position in loop and zeroes remaining plus one is equal to\n #the length of the integer provided, it means all the reamining\n #digits are 0\n if loop_counter + zero_counter + 1 == length:\n break\n else:\n #adds ' + ' between strings\n print(' + ', end = '')\n\n # keeps track of position in loop\n loop_counter += 1\n \n\nended up implementing a counter to see how many zeroes there are, and a counter to see where we are in the for loop and stop the loop when its the same as amount of zeroes remaining\n", "I tested this code and it worked fine\nif x and int(answer)%10 != 0:\n\nEnter a number: 25\n25 = (2 * 10) + (5 * 1)\n\nEnter a number: 1000\n1000 = (1 * 1000)\n\nEnter a number: 117\n117 = (1 * 100) + (1 * 10) + (7 * 1)\n\n" ]
[ 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074450970_python.txt