Dataset columns:
content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Graphviz (on Python) not showing Latin and Chinese characters on GitHub (encoding issue)
I have a Python notebook and I am using graphviz to draw a graph. All characters show correctly in VS Code, but on GitHub all Latin and Chinese characters render incorrectly, like this: I know the problem is in the encoding, but it's related to the graphviz library; I have no problem showing special characters in strings or other graphs. This is my code:

from graphviz import Digraph

h = Digraph("Shu")
h.attr(rankdir="LR", fontname="verdana")
with h.subgraph(name='cluster_wu') as c:
    c.attr(label='5 Shu antigos 五輸穴', style='rounded,filled', color='blanchedalmond')
    c.node_attr.update(style='filled', color='white')
    c.node('a1', 'Poço\n井穴', color='white')
    c.node('a2', "Manancial\n滎穴", color='lightgray')
    c.node('a3', "Riacho\n輸穴", color='darkgray')
    c.node('a4', "Rio\n經穴", color='gray40')
    c.node('a5', "Mar\n合穴", color='gray20', fontcolor='white')
    c.edges([('a1', 'a2'), ('a2', 'a3'), ('a3', 'a4'), ('a4', 'a5')])
h

I tried declaring UTF-8 with no luck:

# -*- coding: utf-8 -*-

Thanks for any help I can get.
[ "by default this library uses as fontname: Times-roman, this is caused because probably the fontname is not compatible with chinese characters, I can relate this to another similar question and I think that the most recommended thing is to change the fontname to a compatible font for your task:\ncommand line program exec:\n-Nfontname=\"YRDZST\"\nor\ndigraph {\n label=\"YRDZST\"\n fontname=\"YRDZST\"\n ...\n}\n\nYou can also use a external file, for this task you shall follow a series of steps that are indicated from documentation\n\nHow to add custom fonts\n\n" ]
[ 1 ]
[]
[]
[ "github", "graphviz", "jupyter_notebook", "python" ]
stackoverflow_0074513293_github_graphviz_jupyter_notebook_python.txt
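A minimal Python sketch of the fontname fix from the Graphviz answer above. The font name "Noto Sans CJK TC" is an assumption, not from the original answer; any font with CJK glyphs that is installed on the machine rendering the graph should work:

from graphviz import Digraph

CJK_FONT = "Noto Sans CJK TC"  # hypothetical choice; substitute an installed font

h = Digraph("Shu")
h.attr(rankdir="LR", fontname=CJK_FONT)      # graph-level labels
h.node_attr.update(fontname=CJK_FONT)        # node labels
h.edge_attr.update(fontname=CJK_FONT)        # edge labels
h.node("a1", "Poço\n井穴")
h.render("shu", format="png", cleanup=True)  # the font must exist where this renders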
Bootstrap carousel elements: how to decrease the built-in width of the section
The 1st image is here and the 2nd image is here. This is the carousel code of the website:

<div class="row testing">
    <div id="carouselExampleMen" class="carousel carousel-dark slide" data-bs-ride="carousel">
        <div class="carousel-inner mens-section">
            <div class="carousel-item active" data-bs-interval="10000">
                <img src="fashion/men1.png" class="d-block w-100" alt="...">
            </div>
            <div class="carousel-item" data-bs-interval="2000">
                <img src="fashion/men2.png" class="d-block w-100" alt="...">
            </div>
            <div class="carousel-item">
                <img src="fashion/men3.png" class="d-block w-100" alt="...">
            </div>
            <button class="carousel-control-prev" type="button" data-bs-target="#carouselExampleMen" data-bs-slide="prev">
                <span class="carousel-control-prev-icon" aria-hidden="true"></span>
                <span class="visually-hidden">Previous</span>
            </button>
            <button class="carousel-control-next" type="button" data-bs-target="#carouselExampleMen" data-bs-slide="next">
                <span class="carousel-control-next-icon" aria-hidden="true"></span>
                <span class="visually-hidden">Next</span>
            </button>
            <div class="carousel-indicators">
                <button type="button" data-bs-target="#carouselExampleMen" data-bs-slide-to="0" class="active" aria-current="true" aria-label="Slide 1"></button>
                <button type="button" data-bs-target="#carouselExampleMen" data-bs-slide-to="1" aria-label="Slide 2"></button>
                <button type="button" data-bs-target="#carouselExampleMen" data-bs-slide-to="2" aria-label="Slide 3"></button>
            </div>
        </div>
    </div>
</div>
[ "You can do that by overwriting its class:\n\n\n.testing {\n width: 100%;\n}\n.carousel-inner{\n width:50%;\n max-height: 200px !important;\n}\n\n\n\nchange the width percentage as you like\nand of course, you can use media queries for different screen sizes\n@media only screen and (max-width: 488px) {}\n\n" ]
[ 0 ]
[]
[]
[ "bootstrap_5", "css", "html", "python", "web" ]
stackoverflow_0074517324_bootstrap_5_css_html_python_web.txt
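A short CSS sketch filling in the empty media query from the answer above; the 75% value is an illustrative assumption, not a recommendation:

.carousel-inner {
    width: 50%;
}

/* on narrow screens, let the carousel take more of the viewport */
@media only screen and (max-width: 488px) {
    .carousel-inner {
        width: 75%; /* assumed value; tune to taste */
    }
}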
Problem with importing deepface in Python
I want to analyse pictures I've saved locally in Python using PyCharm. I found the module called deepface to do so. I've installed it via the Windows prompt and use this code in my script:

from deepface import DeepFace

result = DeepFace.analyze(img_path='C:\\Users\\...\\UC0f4MuOdnBnWbk_YuCmjwKA.jpg', actions=['gender','age'])
print(result)

But every time I get the following error:

ModuleNotFoundError: No module named 'tensorflow.python'

I've checked tensorflow and it's installed... I'm pretty new to Python, so please be kind, but what am I doing wrong?
[ "So I've foudn out that deepface doesn't work with Python 3.11 because of different dependencies (Status from 2022/11/20). For me Python 3.8 worked.\n" ]
[ 0 ]
[]
[]
[ "deepface", "python", "python_module" ]
stackoverflow_0074400500_deepface_python_python_module.txt
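A hedged sketch of the version pin described in the deepface answer above: check the interpreter before importing, so the failure is explicit rather than a missing-module error. The 3.11 cutoff reflects the answer's November 2022 status and may have changed since; the image path is a placeholder:

import sys

if sys.version_info >= (3, 11):
    raise RuntimeError("deepface's dependencies need Python <= 3.10 (e.g. 3.8) as of 2022-11")

from deepface import DeepFace

result = DeepFace.analyze(img_path="face.jpg", actions=["gender", "age"])
print(result)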
pandas MultiIndex columns rename
Hi, I would like to rename the columns of my df. It has MultiIndex columns and I would like to change the second level of it, i.e. I have:

('GDP US Chained 2012 Dollars SAAR', 'GDP CHWG Index')
('GDP US Personal Consumption Chained 2012 Dollars SAAR', 'GDPCTOT Index')
('US Gross Private Domestic Investment Total Chained 2012 SAAR', 'GPDITOTC Index')
1969-12-31 00:00:00    4947.1     3052.12    593.659
1970-03-31 00:00:00    4939.76    3071.06    575.953
1970-06-30 00:00:00    4946.77    3084.97    577.205
1970-09-30 00:00:00    4992.36    3112.01    586.598
1970-12-31 00:00:00    4938.86    3103.57    555.454

I would like to change the second column level: replace the "Index" with "" and delete the ' '. I tried:

df.columns.get_level_values(1).str.lower().str.replace('index', '', regex=True).str.strip()

It works, but I cannot put it back into the column names.
[ "Use rename with lambda function and parameter level=1:\nL = [('GDP US Chained 2012 Dollars SAAR', 'GDP CHWG Index'),\n ('GDP US Personal Consumption Chained 2012 Dollars SAAR', 'GDPCTOT Index'),\n ('US Gross Private Domestic Investment Total Chained 2012 SAAR', 'GPDITOTC Index')]\n\nc = pd.MultiIndex.from_tuples(L)\ndf = pd.DataFrame(columns=c, index=[0])\n\ndf = df.rename(columns=lambda x: x.lower().replace('index','').strip(), level=1)\nprint (df)\n GDP US Chained 2012 Dollars SAAR \\\n gdp chwg \n0 NaN \n\n GDP US Personal Consumption Chained 2012 Dollars SAAR \\\n gdpctot \n0 NaN \n\n US Gross Private Domestic Investment Total Chained 2012 SAAR \n gpditotc \n0 NaN \n\n", "df.columns = pd.MultiIndex.from_tuples([ (col0, col1.lower().replace('index', ''), *more_cols) for col0, col1, *more_cols in df.columns], names=df.columns.names)\n\nshoud do\n" ]
[ 0, 0 ]
[]
[]
[ "multi_index", "pandas", "python", "rename" ]
stackoverflow_0074517802_multi_index_pandas_python_rename.txt
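For completeness, the asker's own expression can be written back onto the columns with MultiIndex.set_levels, a sketch of a third option (set_levels rewrites the unique values of level 1 in place of the old ones):

df.columns = df.columns.set_levels(
    df.columns.levels[1].str.lower().str.replace('index', '', regex=True).str.strip(),
    level=1,
)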
Check Python function to determine isogram, from Codewars
An isogram is a word that has no repeating letters, consecutive or non-consecutive. Implement a function that determines whether a string that contains only letters is an isogram. Assume the empty string is an isogram. Ignore letter case.

is_isogram("Dermatoglyphics" ) == true
is_isogram("aba" ) == false
is_isogram("moOse" ) == false # -- ignore letter case

Here is my code:

def is_isogram(string):
    string = string.lower()
    for char in string:
        if string.count(char) > 1:
            return False
        else:
            return True

And when I tried to run the test code

Test.assert_equals(is_isogram("moOse"), False, "same chars may not be same case" )

It failed, but I thought I did convert everything into lowercase. Can someone help?
[ "How about using sets? Casting the string into a set will drop the duplicate characters, causing isograms to return as True, as the length of the set won't differ from the length of the original string:\ndef is_isogram(s):\n s = s.lower()\n return len(set(s)) == len(s)\n\nprint is_isogram(\"Dermatoglyphics\")\nprint is_isogram(\"aba\")\nprint is_isogram(\"moOse\")\nprint is_isogram(\"\")\n\nThis outputs:\nTrue\nFalse\nFalse\nTrue\n\n", "Try this:\ndef is_isogram(string):\n string = string.lower()\n for char in string:\n if string.count(char) > 1:\n return False\n return True\n\nIn your code when is_isogram(\"moose\") is called, it will see that the first character's ('m') count is not greater than 1. So it will return True. Once it hits the return statement, it will stop the execution for the rest string. So you should really write return True only after for-loop to make sure that the function checks for the whole string.\nIf however, at any point, it finds a character's count to be greater than 1, then it will simply return False and stop executing because there's no point of checking any more when one point is found where condition does not hold.\n", "Try this :\ndef is_isogram(s):\n string = s.lower()\n if len(s) == len(set(string)):\n return True\n return False\n\n", "Try this out:\ndef is_isogram(string):\n return len(string) == len(set(string.lower()))\n\n\"Implement a function that determines whether a string that contains only letters is an isogram.\"\nBy using sets, you can create unique elements. So if there are any repeating numbers, it will only select one. By calling len() on these strings, you can compare the length to the original.\nSorry if I explained it poorly. I am working on this.\n", "let us define an isogram well:\naccording to wikipedia An Isogram is a word in which no letter occurs more than once.\ncheck here for more about an isogram \njust remind letter\nI write this code and it works for me :\ndef is_isogram(argument):\n print(len(argument))\n if isinstance(argument,str):\n valeur=argument.lower()\n if not argument:\n return False\n else:\n for char in valeur:\n if valeur.count(char)>1 or not char.isalpha():\n return False\n return True\n else:\n raise TypeError(\"need a string \")\n\nNB: the hidden test is the fact that you must check if the char in the string is a alpha character a-z, when i add this it pass all the hiddens tests\nup vote if this help\n", "I reckon this might not be the best solution in terms of maximizing memory space and time. This answer is just for intuition purposes using a dictionary and two for loops:\ndef is_isogram(string):\n #your code here\n #create an empty dictionary\n m={}\n #loop through the string and check for repeating characters\n for char in string:\n #make all characters lower case to ignore case variations\n char = char.lower()\n if char in m:\n m[char] += 1\n else:\n m[char] = 1\n #loop through dictionary and get value counts.\n for j, v in m.items():\n #if there is a letter/character with a count > 1 return False\n if v > 1:\n return False\n #Notice where the \"return True\" command has been placed. It is outside. \n return True \n\n" ]
[ 4, 3, 1, 1, 0, 0 ]
[]
[]
[ "algorithm", "python", "python_2.7", "string" ]
stackoverflow_0037924869_algorithm_python_python_2.7_string.txt
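One more idiomatic variant of the isogram check, a sketch using collections.Counter:

from collections import Counter

def is_isogram(s):
    # every letter may appear at most once, ignoring case
    return all(count == 1 for count in Counter(s.lower()).values())

print(is_isogram("Dermatoglyphics"))  # True
print(is_isogram("moOse"))            # False
print(is_isogram(""))                 # True (empty Counter, so all() is vacuously true)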
Jupyter Notebook no output even though compilation successful
I am fairly new to Python and working with a Jupyter Notebook in which I am supposed to classify the MNIST dataset using a DecisionTreeClassifier. The dataset has previously been divided into the features and the target variables in separate files. When reading those in and working with them, I can't seem to get any output, even though it compiles fine. Restarting the kernel did not solve the issue. Other simpler operations produce an output. Is it perhaps due to the size of the dataset? Here's the code:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from joblib import dump

"""
Placeholder for comments:
"""

def mnistDTC():
    df = pd.read_csv('./data/mnist_target.csv', index_col=0)
    target = pd.read_csv('data/mnist_target.csv', index_col=0)
    tree_clf = DecisionTreeClassifier()
    df_train, df_test, target_train, target_test = train_test_split(df, target, test_size=0.2, random_state=0)
    tree_clf.fit(df_train, target_train)
    predictions = tree_clf.predict(df_test)
    print(predictions[:10])

Thanks in advance!
[ "You defined the function mnistDC, but did not call it, which is why there is no output.\nIn order to call the function, put the following line just after the definition, or in a new cell :\nmnistDC()\n\n" ]
[ 0 ]
[]
[]
[ "classification", "decision_tree", "jupyter", "output", "python" ]
stackoverflow_0074517761_classification_decision_tree_jupyter_output_python.txt
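A sketch of the corrected notebook cell. Note that the original also reads the features from mnist_target.csv; the mnist_data.csv path below is a guess at the intended features file:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def mnistDTC():
    df = pd.read_csv('./data/mnist_data.csv', index_col=0)  # assumed features file
    target = pd.read_csv('./data/mnist_target.csv', index_col=0)
    tree_clf = DecisionTreeClassifier()
    df_train, df_test, target_train, target_test = train_test_split(
        df, target, test_size=0.2, random_state=0)
    tree_clf.fit(df_train, target_train)
    predictions = tree_clf.predict(df_test)
    print(predictions[:10])

mnistDTC()  # the missing call; without it the cell only defines the function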
Encryption of an input with Caesar Cipher in Python
I have to write this code, where the function must receive a path to a text file which must contain text composed of only English letters and punctuation symbols, and a destination file for encrypted data. Punctuation symbols must be left as they are without any modification, and the encrypted text must be written to a different file. Also, I have to validate the inputs. I've done most of it, but in the first part, where I have to ask for a text, the code isn't accepting spaces or punctuation marks, and as I gather it's because of .isalpha; however, I couldn't find a way to fix it. I'm not sure if I have completed the aforementioned requirements, so any type of feedback / constructive criticism is appreciated.

while True:  # Validating input text
    string = input("Enter the text to be encrypted: ")
    if not string.isalpha():
        print("Please enter a valid text")
        continue
    else:
        break

while True:  # Validating input key
    key = input("Enter the key: ")
    try:
        key = int(key)
    except ValueError:
        print("Please enter a valid key: ")
        continue
    break

def caesarcipher(string, key):  # Caesar Cipher
    encrypted_string = []
    new_key = key % 26
    for letter in string:
        encrypted_string.append(getnewletter(letter, new_key))
    return ''.join(encrypted_string)

def getnewletter(letter, key):
    new_letter = ord(letter) + key
    return chr(new_letter) if new_letter <= 122 else chr(96 + new_letter % 122)

with open('Caesar.txt', 'a') as the_file:  # Writing to a text file
    the_file.write(caesarcipher(string, key))

print(caesarcipher(string, key))
print('Your text has been encrypted via Caesar-Cipher, the result is in Caesar.txt')
[ "You can validate input string by checking if it doesn't contain any Alphabet characters then it's invalid input:\nimport string\n\ndef check_valid_input(str):\n for c in str:\n if not c.isalpha() and (c not in string.punctuation):\n return False\n return True and any(c.isalpha() for c in str)\n \n\nwhile True: # Validating input text\n string = input(\"Enter the text to be encrypted: \")\n # checking if contain only alphabet characters and punctuations\n if not check_valid_input(string): \n print(\"Please enter a valid text\")\n continue\n else:\n break\n\nIn this way you only accept input string if there are alphabet characters and doesn't contain any special characters(other than punctuation) in it.\n", "Well, you could check it \"manualy\".\n# ____help_function____\ndef check_alpha(m_string):\n list_wanted = ['!', '?', '.', ',']\n\n for letter in m_string:\n if not (letter in list_wanted or letter.isalpha()):\n return False\n\n return True\n\n# ____in your code____\nwhile True:\n string = input(\"Enter the text to be encrypted: \")\n\n if check_aplha(string):\n break\n else:\n print('....')\n\n" ]
[ 1, 0 ]
[]
[]
[ "caesar_cipher", "python" ]
stackoverflow_0074517671_caesar_cipher_python.txt
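Both validators above still reject spaces, which the question explicitly wants to accept. A sketch that also allows whitespace (treating "spaces are valid" as an assumption drawn from the question text):

import string

def is_valid_text(text):
    allowed = set(string.ascii_letters + string.punctuation + ' ')
    # at least one letter, and nothing outside letters/punctuation/spaces
    return any(c.isalpha() for c in text) and all(c in allowed for c in text)

print(is_valid_text('Hello, world!'))  # True
print(is_valid_text('1234'))           # False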
Convert string of tuples to list of tuple objects in Python
I have a string like below:

[(.1, apple), (.2, orange), (.3, banana), (.4, jack), (.5, grape), (.6, mango)]

I need to convert the above string to a list of tuples in Python, like below:

[('.1', 'apple'), ('.2', 'orange'), ('.3', 'banana'), ('.4', 'jack'), ('.5', 'grape'), ('.6', 'mango')]

Is there an efficient way of converting this, either by using regex or any other way? Thanks in advance.
[ "you can do the following\nimport re\n\nstring = \"\"\"[(.1, apple), (.2, orange), (.3, banana), (.4, jack), (.5, grape), (.6, mango)]\"\"\"\nvalues = [tuple(ele.split(',')) for ele in re.findall(\".\\d, \\w+\", string)]\n\nthis outputs\nprint(values)\n>>> [('.1', ' apple'), ('.2', ' orange'), ('.3', ' banana'), ('.4', ' jack'), ('.5', ' grape'), ('.6', ' mango')]\n\n", "Using ast.literal_eval we can try first converting your string to a valid Python list, then convert to an object:\nimport ast\nimport re\n\ninp = \"[(.1, apple), (.2, orange), (.3, banana), (.4, jack), (.5, grape), (.6, mango)]\"\ninp = re.sub(r'([A-Za-z]+)', r\"'\\1'\", inp)\nobject = ast.literal_eval(inp)\nprint(object)\n\nThis prints:\n[(0.1, 'apple'), (0.2, 'orange'), (0.3, 'banana'), (0.4, 'jack'), (0.5, 'grape'), (0.6, 'mango')]\n\n" ]
[ 2, 0 ]
[]
[]
[ "list", "python", "string" ]
stackoverflow_0074517565_list_python_string.txt
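A variant of the regex approach above that captures both fields in one pass, so the results carry no stray spaces; findall with two groups already returns a list of tuples:

import re

s = "[(.1, apple), (.2, orange), (.3, banana), (.4, jack), (.5, grape), (.6, mango)]"
pairs = re.findall(r"\((\.\d+),\s*(\w+)\)", s)
print(pairs)
# [('.1', 'apple'), ('.2', 'orange'), ('.3', 'banana'), ('.4', 'jack'), ('.5', 'grape'), ('.6', 'mango')]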
What is requirements.txt, and what should be in it?
I am switching from Replit to PebbleHost to host my Python bot. What do I put in my requirements.txt? These are the imports that I have at the start of my bot:

import asyncio
import datetime
import functools
import io
import json
import os
import random
import re
import string
import urllib.parse
import urllib.request
import time
from urllib import parse, request
from itertools import cycle
from bs4 import BeautifulSoup as bs4
import cloudscraper
import discord, time
import random, threading
import asyncio
from discord.ext import commands
import aiohttp
import colorama
import discord
import numpy
import requests
from time import sleep
from discord import Permissions
from discord.ext import commands
from discord.utils import get
[ "You just write the Packages you‘ve installed in it.\nIf you write >= 1.17 the version has to be higher than 1.17\nLike:\nDiscord.py\nPillow\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python", "replit" ]
stackoverflow_0074506852_discord_discord.py_python_replit.txt
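Only third-party packages belong in requirements.txt; standard-library modules (asyncio, json, os, re, itertools, urllib, ...) ship with Python and must not be listed. For the imports above, a plausible unpinned file is sketched below; running pip freeze > requirements.txt in a working environment captures exact versions instead:

aiohttp
beautifulsoup4
cloudscraper
colorama
discord.py
numpy
requests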
How to move files with the exact same name as the XML file to another directory in Python
Hi, I have the following code that works just fine, but I do not know how to move the matching-name files to the same directory. For example, I have 3 files with the same name (xml, jpeg, txt); when I move the xml file, I want all the files with the same name to move with it. I was looking in the forum and did not find anything.

import shutil
from pathlib import Path
from xml.etree import ElementTree as ET


def contains_drone(path):
    tree = ET.parse(path.as_posix())
    root = tree.getroot()
    for obj in root.findall('object'):
        rank = obj.find('name').text
        if rank == 'car':
            return True
    return False


def move_drone_files(src="D:\\TomProject\\Images\\", dst="D:\\TomProject\\Done"):
    src, dst = Path(src), Path(dst)
    for path in src.iterdir():
        if path.suffix == '.xml' and contains_drone(path):
            print(f'Moving {path.as_posix()} to {dst.as_posix()}')
            shutil.move(path, dst)


if __name__ == "__main__":
    move_drone_files()
[ "You should do something like this:\nimport shutil\nfrom pathlib import Path\nfrom xml.etree import ElementTree as ET\n\n\n def contains_drone(path):\n tree = ET.parse(path.as_posix())\n root = tree.getroot()\n for obj in root.findall('object'):\n rank = obj.find('name').text\n if rank == 'drone': \n return True\n return False\n \n\ndef move_drone_files(src='D:\\\\TomProject\\\\Images', dst='D:\\\\TomProject\\\\Images\\\\Done'):\n src, dst = Path(src), Path(dst)\n for path in src.iterdir():\n if path.suffix == '.xml' and contains_drone(path):\n print(f'Moving {path.as_posix()} to {dst.as_posix()}')\n shutil.move(path, dst)\n\nif __name__=='__main__':\n move_drone_files()\n\nThen simply execute your file with python3 file.py and the code in main will be executed.\n", "You need to add if __name__ == \"__main__\": at the end of the file, with the function you want to call:\nimport shutil\nfrom pathlib import Path\nfrom xml.etree import ElementTree as ET\n\n\ndef contains_drone(path):\n tree = ET.parse(path.as_posix())\n root = tree.getroot()\n for obj in root.findall('object'):\n rank = obj.find('name').text\n if rank == 'drone': \n return True\n return False\n \n\ndef move_drone_files(src='D:\\\\TomProject\\\\Images', dst='D:\\\\TomProject\\\\Images\\\\Done'):\n src, dst = Path(src), Path(dst)\n for path in src.iterdir():\n if path.suffix == '.xml' and contains_drone(path):\n print(f'Moving {path.as_posix()} to {dst.as_posix()}')\n shutil.move(path, dst)\n\nif __name__ == \"__main__\":\n move_drone_files()\n\nIn your code, move_drone_files() is calling contains_drone(path) inside itself (see line if path.suffix == '.xml' and contains_drone(path):), so it seems that you only need to call move_drone_files() in the main section. Then you just need to execute the python script with a cmd like: python script.py, python3 script.py or python3.X script.py depending on which python version you installed.\nPD: I fixed some typos and tab errors you had in the code you posted\n", "I guess there is a syntax error in your code: replace src='D:\\\\TomProject\\\\Images, dst='D:\\\\TomProject\\\\Images\\\\Done' with src=\"D:\\\\TomProject\\\\Images\", dst=\"D:\\\\TomProject\\\\Images\\\\Done\".\nFurthermore, remove the four spaces in front of def move_drone_files(...), otherwise python will throw a syntax error.\nTo call one of these functions, you must type:\ncontainsdrone('path/to/file')\n# and\nmove_drone_files()\n\nThe modified code should look like this:\nimport shutil\nfrom pathlib import Path\nfrom xml.etree import ElementTree as ET\n\n\ndef contains_drone(path):\n tree = ET.parse(path.as_posix())\n root = tree.getroot()\n for obj in root.findall('object'):\n rank = obj.find('name').text\n if rank == 'drone': \n return True\n return False\n \n\ndef move_drone_files(src=\"D:\\\\TomProject\\\\Images\", dst=\"D:\\\\TomProject\\\\Images\\\\Done\"):\n src, dst = Path(src), Path(dst)\n for path in src.iterdir():\n if path.suffix == '.xml' and contains_drone(path):\n print(f'Moving {path.as_posix()} to {dst.as_posix()}')\n shutil.move(path, dst)\n\n\ncontainsdrone('path/to/file')\nmove_drone_files()\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074517895_python.txt
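None of the answers above actually moves the companion files the question asks about. A sketch of that missing piece: for each matching .xml, move every file in the source folder that shares its stem (this assumes file stems contain no glob metacharacters):

import shutil
from pathlib import Path

def move_with_siblings(xml_path, dst):
    # move the .xml plus any sibling sharing its name, e.g. the .jpeg and .txt
    for sibling in list(xml_path.parent.glob(xml_path.stem + '.*')):
        shutil.move(str(sibling), str(Path(dst) / sibling.name))

# inside move_drone_files, replace shutil.move(path, dst) with:
# move_with_siblings(path, dst)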
How to compare two JSON objects with the same elements in a different order equal?
How can I test whether two JSON objects are equal in Python, disregarding the order of lists? For example ...

JSON document a:

{
    "errors": [
        {"error": "invalid", "field": "email"},
        {"error": "required", "field": "name"}
    ],
    "success": false
}

JSON document b:

{
    "success": false,
    "errors": [
        {"error": "required", "field": "name"},
        {"error": "invalid", "field": "email"}
    ]
}

a and b should compare equal, even though the order of the "errors" lists is different.
[ "If you want two objects with the same elements but in a different order to compare equal, then the obvious thing to do is compare sorted copies of them - for instance, for the dictionaries represented by your JSON strings a and b:\nimport json\n\na = json.loads(\"\"\"\n{\n \"errors\": [\n {\"error\": \"invalid\", \"field\": \"email\"},\n {\"error\": \"required\", \"field\": \"name\"}\n ],\n \"success\": false\n}\n\"\"\")\n\nb = json.loads(\"\"\"\n{\n \"success\": false,\n \"errors\": [\n {\"error\": \"required\", \"field\": \"name\"},\n {\"error\": \"invalid\", \"field\": \"email\"}\n ]\n}\n\"\"\")\n\n>>> sorted(a.items()) == sorted(b.items())\nFalse\n\n... but that doesn't work, because in each case, the \"errors\" item of the top-level dict is a list with the same elements in a different order, and sorted() doesn't try to sort anything except the \"top\" level of an iterable.\nTo fix that, we can define an ordered function which will recursively sort any lists it finds (and convert dictionaries to lists of (key, value) pairs so that they're orderable):\ndef ordered(obj):\n if isinstance(obj, dict):\n return sorted((k, ordered(v)) for k, v in obj.items())\n if isinstance(obj, list):\n return sorted(ordered(x) for x in obj)\n else:\n return obj\n\nIf we apply this function to a and b, the results compare equal:\n>>> ordered(a) == ordered(b)\nTrue\n\n", "Another way could be to use json.dumps(X, sort_keys=True) option:\nimport json\na, b = json.dumps(a, sort_keys=True), json.dumps(b, sort_keys=True)\na == b # a normal string comparison\n\nThis works for nested dictionaries and lists.\n", "Decode them and compare them as mgilson comment.\nOrder does not matter for dictionary as long as the keys, and values matches. (Dictionary has no order in Python)\n>>> {'a': 1, 'b': 2} == {'b': 2, 'a': 1}\nTrue\n\nBut order is important in list; sorting will solve the problem for the lists.\n>>> [1, 2] == [2, 1]\nFalse\n>>> [1, 2] == sorted([2, 1])\nTrue\n\n\n>>> a = '{\"errors\": [{\"error\": \"invalid\", \"field\": \"email\"}, {\"error\": \"required\", \"field\": \"name\"}], \"success\": false}'\n>>> b = '{\"errors\": [{\"error\": \"required\", \"field\": \"name\"}, {\"error\": \"invalid\", \"field\": \"email\"}], \"success\": false}'\n>>> a, b = json.loads(a), json.loads(b)\n>>> a['errors'].sort()\n>>> b['errors'].sort()\n>>> a == b\nTrue\n\nAbove example will work for the JSON in the question. For general solution, see Zero Piraeus's answer.\n", "Yes! 
You can use jycm\nfrom jycm.helper import make_ignore_order_func\nfrom jycm.jycm import YouchamaJsonDiffer\n\na = {\n \"errors\": [\n {\"error\": \"invalid\", \"field\": \"email\"},\n {\"error\": \"required\", \"field\": \"name\"}\n ],\n \"success\": False\n}\nb = {\n \"success\": False,\n \"errors\": [\n {\"error\": \"required\", \"field\": \"name\"},\n {\"error\": \"invalid\", \"field\": \"email\"}\n ]\n}\nycm = YouchamaJsonDiffer(a, b, ignore_order_func=make_ignore_order_func([\n \"^errors\",\n]))\nycm.diff()\nassert ycm.to_dict(no_pairs=True) == {} # aka no diff\n\nfor a more complex example(value changes in deep structure)\nfrom jycm.helper import make_ignore_order_func\nfrom jycm.jycm import YouchamaJsonDiffer\n\na = {\n \"errors\": [\n {\"error\": \"invalid\", \"field\": \"email\"},\n {\"error\": \"required\", \"field\": \"name\"}\n ],\n \"success\": True\n}\n\nb = {\n \"success\": False,\n \"errors\": [\n {\"error\": \"required\", \"field\": \"name-1\"},\n {\"error\": \"invalid\", \"field\": \"email\"}\n ]\n}\nycm = YouchamaJsonDiffer(a, b, ignore_order_func=make_ignore_order_func([\n \"^errors\",\n]))\nycm.diff()\nassert ycm.to_dict() == {\n 'just4vis:pairs': [\n {'left': 'invalid', 'right': 'invalid', 'left_path': 'errors->[0]->error', 'right_path': 'errors->[1]->error'},\n {'left': {'error': 'invalid', 'field': 'email'}, 'right': {'error': 'invalid', 'field': 'email'},\n 'left_path': 'errors->[0]', 'right_path': 'errors->[1]'},\n {'left': 'email', 'right': 'email', 'left_path': 'errors->[0]->field', 'right_path': 'errors->[1]->field'},\n {'left': {'error': 'invalid', 'field': 'email'}, 'right': {'error': 'invalid', 'field': 'email'},\n 'left_path': 'errors->[0]', 'right_path': 'errors->[1]'},\n {'left': 'required', 'right': 'required', 'left_path': 'errors->[1]->error',\n 'right_path': 'errors->[0]->error'},\n {'left': {'error': 'required', 'field': 'name'}, 'right': {'error': 'required', 'field': 'name-1'},\n 'left_path': 'errors->[1]', 'right_path': 'errors->[0]'},\n {'left': 'name', 'right': 'name-1', 'left_path': 'errors->[1]->field', 'right_path': 'errors->[0]->field'},\n {'left': {'error': 'required', 'field': 'name'}, 'right': {'error': 'required', 'field': 'name-1'},\n 'left_path': 'errors->[1]', 'right_path': 'errors->[0]'},\n {'left': {'error': 'required', 'field': 'name'}, 'right': {'error': 'required', 'field': 'name-1'},\n 'left_path': 'errors->[1]', 'right_path': 'errors->[0]'}\n ],\n 'value_changes': [\n {'left': 'name', 'right': 'name-1', 'left_path': 'errors->[1]->field', 'right_path': 'errors->[0]->field',\n 'old': 'name', 'new': 'name-1'},\n {'left': True, 'right': False, 'left_path': 'success', 'right_path': 'success', 'old': True, 'new': False}\n ]\n}\n\nwhose results can be rendered as\n\n", "You can write your own equals function:\n\ndicts are equal if: 1) all keys are equal, 2) all values are equal\nlists are equal if: all items are equal and in the same order\nprimitives are equal if a == b\n\nBecause you're dealing with json, you'll have standard python types: dict, list, etc., so you can do hard type checking if type(obj) == 'dict':, etc.\nRough example (not tested):\ndef json_equals(jsonA, jsonB):\n if type(jsonA) != type(jsonB):\n # not equal\n return False\n if type(jsonA) == dict:\n if len(jsonA) != len(jsonB):\n return False\n for keyA in jsonA:\n if keyA not in jsonB or not json_equal(jsonA[keyA], jsonB[keyA]):\n return False\n elif type(jsonA) == list:\n if len(jsonA) != len(jsonB):\n return False\n for itemA, itemB in zip(jsonA, jsonB):\n if 
not json_equal(itemA, itemB):\n return False\n else:\n return jsonA == jsonB\n\n", "For the following two dicts 'dictWithListsInValue' and 'reorderedDictWithReorderedListsInValue' which are simply reordered versions of each other\ndictObj = {\"foo\": \"bar\", \"john\": \"doe\"}\nreorderedDictObj = {\"john\": \"doe\", \"foo\": \"bar\"}\ndictObj2 = {\"abc\": \"def\"}\ndictWithListsInValue = {'A': [{'X': [dictObj2, dictObj]}, {'Y': 2}], 'B': dictObj2}\nreorderedDictWithReorderedListsInValue = {'B': dictObj2, 'A': [{'Y': 2}, {'X': [reorderedDictObj, dictObj2]}]}\na = {\"L\": \"M\", \"N\": dictWithListsInValue}\nb = {\"L\": \"M\", \"N\": reorderedDictWithReorderedListsInValue}\n\nprint(sorted(a.items()) == sorted(b.items())) # gives false\n\ngave me wrong result i.e. false .\nSo I created my own cutstom ObjectComparator like this:\ndef my_list_cmp(list1, list2):\n if (list1.__len__() != list2.__len__()):\n return False\n\n for l in list1:\n found = False\n for m in list2:\n res = my_obj_cmp(l, m)\n if (res):\n found = True\n break\n\n if (not found):\n return False\n\n return True\n\n\ndef my_obj_cmp(obj1, obj2):\n if isinstance(obj1, list):\n if (not isinstance(obj2, list)):\n return False\n return my_list_cmp(obj1, obj2)\n elif (isinstance(obj1, dict)):\n if (not isinstance(obj2, dict)):\n return False\n exp = set(obj2.keys()) == set(obj1.keys())\n if (not exp):\n # print(obj1.keys(), obj2.keys())\n return False\n for k in obj1.keys():\n val1 = obj1.get(k)\n val2 = obj2.get(k)\n if isinstance(val1, list):\n if (not my_list_cmp(val1, val2)):\n return False\n elif isinstance(val1, dict):\n if (not my_obj_cmp(val1, val2)):\n return False\n else:\n if val2 != val1:\n return False\n else:\n return obj1 == obj2\n\n return True\n\n\ndictObj = {\"foo\": \"bar\", \"john\": \"doe\"}\nreorderedDictObj = {\"john\": \"doe\", \"foo\": \"bar\"}\ndictObj2 = {\"abc\": \"def\"}\ndictWithListsInValue = {'A': [{'X': [dictObj2, dictObj]}, {'Y': 2}], 'B': dictObj2}\nreorderedDictWithReorderedListsInValue = {'B': dictObj2, 'A': [{'Y': 2}, {'X': [reorderedDictObj, dictObj2]}]}\na = {\"L\": \"M\", \"N\": dictWithListsInValue}\nb = {\"L\": \"M\", \"N\": reorderedDictWithReorderedListsInValue}\n\nprint(my_obj_cmp(a, b)) # gives true\n\nwhich gave me the correct expected output!\nLogic is pretty simple:\nIf the objects are of type 'list' then compare each item of the first list with the items of the second list until found , and if the item is not found after going through the second list , then 'found' would be = false. 'found' value is returned\nElse if the objects to be compared are of type 'dict' then compare the values present for all the respective keys in both the objects. (Recursive comparison is performed)\nElse simply call obj1 == obj2 . It by default works fine for the object of strings and numbers and for those eq() is defined appropriately .\n(Note that the algorithm can further be improved by removing the items found in object2, so that the next item of object1 would not compare itself with the items already found in the object2)\n", "For others who'd like to debug the two JSON objects (usually, there is a reference and a target), here is a solution you may use. 
It will list the \"path\" of different/mismatched ones from target to the reference.\nlevel option is used for selecting how deep you would like to look into.\nshow_variables option can be turned on to show the relevant variable.\ndef compareJson(example_json, target_json, level=-1, show_variables=False):\n _different_variables = _parseJSON(example_json, target_json, level=level, show_variables=show_variables)\n return len(_different_variables) == 0, _different_variables\n\ndef _parseJSON(reference, target, path=[], level=-1, show_variables=False): \n if level > 0 and len(path) == level:\n return []\n \n _different_variables = list()\n # the case that the inputs is a dict (i.e. json dict) \n if isinstance(reference, dict):\n for _key in reference: \n _path = path+[_key]\n try:\n _different_variables += _parseJSON(reference[_key], target[_key], _path, level, show_variables)\n except KeyError:\n _record = ''.join(['[%s]'%str(p) for p in _path])\n if show_variables:\n _record += ': %s <--> MISSING!!'%str(reference[_key])\n _different_variables.append(_record)\n # the case that the inputs is a list/tuple\n elif isinstance(reference, list) or isinstance(reference, tuple):\n for index, v in enumerate(reference):\n _path = path+[index]\n try:\n _target_v = target[index]\n _different_variables += _parseJSON(v, _target_v, _path, level, show_variables)\n except IndexError:\n _record = ''.join(['[%s]'%str(p) for p in _path])\n if show_variables:\n _record += ': %s <--> MISSING!!'%str(v)\n _different_variables.append(_record)\n # the actual comparison about the value, if they are not the same, record it\n elif reference != target:\n _record = ''.join(['[%s]'%str(p) for p in path])\n if show_variables:\n _record += ': %s <--> %s'%(str(reference), str(target))\n _different_variables.append(_record)\n\n return _different_variables\n\n", "import json\n\n#API response sample\n# some JSON:\n\nx = '{ \"name\":\"John\", \"age\":30, \"city\":\"New York\"}'\n\n# parse x json to Python dictionary:\ny = json.loads(x)\n\n#access Python dictionary\nprint(y[\"age\"])\n\n\n# expected json as dictionary\nthisdict = { \"name\":\"John\", \"age\":30, \"city\":\"New York\"}\nprint(thisdict)\n\n\n# access Python dictionary\nprint(thisdict[\"age\"])\n\n# Compare Two access Python dictionary\n\nif thisdict == y:\n print (\"dict1 is equal to dict2\")\nelse:\n print (\"dict1 is not equal to dict2\")\n\n" ]
[ 198, 72, 20, 8, 2, 2, 1, 0 ]
[ "With KnoDL, it can match data without mapping fields.\n" ]
[ -1 ]
[ "comparison", "django", "json", "python" ]
stackoverflow_0025851183_comparison_django_json_python.txt
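Besides jycm, another third-party option for order-insensitive comparison is deepdiff (pip install deepdiff): DeepDiff(a, b, ignore_order=True) returns an empty diff when two structures match apart from ordering. A minimal sketch; the sample dicts are illustrative, not from the thread:

from deepdiff import DeepDiff

a = {'success': False, 'errors': [{'error': 'invalid'}, {'error': 'required'}]}
b = {'errors': [{'error': 'required'}, {'error': 'invalid'}], 'success': False}

diff = DeepDiff(a, b, ignore_order=True)
print(diff or 'equal ignoring order')  # an empty diff means the two objects match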
Q: Is it possible to Implement a node of data structures in artificial Intelligence? I am working on a project which have to do image predictions using artifical intelligence, this is the image, you can see that the nodes are attached with each other, and first encoding the image and then hidden layer and then decoding layer. My question is, the real implementation of autoencoder is very difficult to understand, is it possible to do coding of autoencoder nodes like we normaly do in data structures for creating linklist node BST node etc? I want the code looks easy to understand. like.... #include <iostream> using namespace std; struct node { double data[100][100]; struct node *next; }; class autoencoder { struct node *head; struct node *temp; // to traverse through the whole list public: LinkedList() { head = NULL; } void insert() { node *NewNode = new node; cout << "Enter data :: "; cin >> NewNode->data[100][100]; NewNode->next = 0; if (head == 0) { head = temp = NewNode; } else { temp->next = NewNode; temp = NewNode; // temp is treversing to newnode } } void activation() // some activation function void sigmoid fucntion // some sigmoid function } int main() { autoencoder obj; obj.insertnode() obj.activation() obj.sigmoid() } this is sudo code type. My wquestion is the real autoencoder implementation include so much libraries and other stuff which is not understandable, is it possible to implement the nodes of autoEncoder like shown in the image? I have a lot of search but didn't find any solution. If it is possible please let me know the guidence. If not please let me noe so that I will waste my time on searching this. A: Yes, it's definitely possible to implement neural networks using common data structures. For modern Neural Networks, the data structure of choice at top level is not the linked list but the Graph - linked lists are the simplest Graph type (just linear). More complex networks (e.g. ResNet) have more than one path. One of the key things that you should do is to keep the weights and inputs separate. double data[100][100] is unclear to me. Are those the weights, or the image data you're processing? And besides, it's an autoencoder. Those dimensions should vary across the layers, as your picture shows. (And in the real world, we choose nice round numbers like 64 or 128) Note that autoencoders aren't that special from a data structure perspective. They're just a set of layers. The key to autoencoders is how they're trained, and that's not visible in the structure, A: At its core, a node is fairly simple: struct Node { double value; double bias = 0; std::vector<std::pair<Node*, float>> connections; double compute() { value = bias; for (auto&& [node, weight] : connections) { value += node->value * weight; } return value; } }; A layer is a bunch of nodes: using Layer = std::vector<Node>; And a network is a bunch of layers. We assume everything is fully-connected: struct Network { std::vector<Layer> layers; void addFCLayer(int size) { Layer newLayer(size); if (!layers.empty()) { Layer& prevLayer = layers.back(); for (auto& newNode: newLayer) { for (auto& prevNode: prevLayer) { newNode.connections.emplace_back(&prevNode, rand()); } } } layers.push_back(std::move(newLayer)); } And finally, forward propagation is just calling compute layer per layer, skipping the first (input) layer. 
You can read the output values from network.layers.back() void forwardProp() { for (auto it = ++layers.begin(); it != layers.end(); it++) { for (auto& node: *it) { node.compute(); } } } }; There is lots of "exercise left to the reader", of course: using predefined weights and biases instead of using rand() using a different activation function some nice scaffolding so you can feed in an image instead of setting a few hundred nodes' values and reading them back out. actually training/testing the network and updating weights
Is it possible to Implement a node of data structures in artificial Intelligence?
I am working on a project which have to do image predictions using artifical intelligence, this is the image, you can see that the nodes are attached with each other, and first encoding the image and then hidden layer and then decoding layer. My question is, the real implementation of autoencoder is very difficult to understand, is it possible to do coding of autoencoder nodes like we normaly do in data structures for creating linklist node BST node etc? I want the code looks easy to understand. like.... #include <iostream> using namespace std; struct node { double data[100][100]; struct node *next; }; class autoencoder { struct node *head; struct node *temp; // to traverse through the whole list public: LinkedList() { head = NULL; } void insert() { node *NewNode = new node; cout << "Enter data :: "; cin >> NewNode->data[100][100]; NewNode->next = 0; if (head == 0) { head = temp = NewNode; } else { temp->next = NewNode; temp = NewNode; // temp is treversing to newnode } } void activation() // some activation function void sigmoid fucntion // some sigmoid function } int main() { autoencoder obj; obj.insertnode() obj.activation() obj.sigmoid() } this is sudo code type. My wquestion is the real autoencoder implementation include so much libraries and other stuff which is not understandable, is it possible to implement the nodes of autoEncoder like shown in the image? I have a lot of search but didn't find any solution. If it is possible please let me know the guidence. If not please let me noe so that I will waste my time on searching this.
[ "Yes, it's definitely possible to implement neural networks using common data structures. For modern Neural Networks, the data structure of choice at top level is not the linked list but the Graph - linked lists are the simplest Graph type (just linear). More complex networks (e.g. ResNet) have more than one path.\nOne of the key things that you should do is to keep the weights and inputs separate. double data[100][100] is unclear to me. Are those the weights, or the image data you're processing? And besides, it's an autoencoder. Those dimensions should vary across the layers, as your picture shows. (And in the real world, we choose nice round numbers like 64 or 128)\nNote that autoencoders aren't that special from a data structure perspective. They're just a set of layers. The key to autoencoders is how they're trained, and that's not visible in the structure,\n", "At its core, a node is fairly simple:\nstruct Node {\n double value;\n double bias = 0;\n std::vector<std::pair<Node*, float>> connections;\n\n double compute() {\n value = bias;\n for (auto&& [node, weight] : connections) {\n value += node->value * weight;\n }\n\n return value;\n }\n};\n\nA layer is a bunch of nodes:\nusing Layer = std::vector<Node>;\n\nAnd a network is a bunch of layers. We assume everything is fully-connected:\nstruct Network {\n std::vector<Layer> layers;\n\n void addFCLayer(int size) {\n Layer newLayer(size);\n if (!layers.empty()) {\n Layer& prevLayer = layers.back();\n for (auto& newNode: newLayer) {\n for (auto& prevNode: prevLayer) {\n newNode.connections.emplace_back(&prevNode, rand());\n }\n }\n }\n layers.push_back(std::move(newLayer));\n }\n\nAnd finally, forward propagation is just calling compute layer per layer, skipping the first (input) layer. You can read the output values from network.layers.back()\n void forwardProp() {\n for (auto it = ++layers.begin(); it != layers.end(); it++) {\n for (auto& node: *it) {\n node.compute();\n }\n }\n }\n};\n\nThere is lots of \"exercise left to the reader\", of course:\n\nusing predefined weights and biases instead of using rand()\nusing a different activation function\nsome nice scaffolding so you can feed in an image instead of setting a few hundred nodes' values and reading them back out.\nactually training/testing the network and updating weights\n\n" ]
[ 1, 0 ]
[]
[]
[ "autoencoder", "c++", "neural_network", "python" ]
stackoverflow_0074517187_autoencoder_c++_neural_network_python.txt
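Since the thread is tagged python, here is a Python sketch of the same node/layer/network idea from the C++ answer; sizes and weights are illustrative only:

import math
import random

class Node:
    def __init__(self):
        self.value = 0.0
        self.bias = 0.0
        self.connections = []  # list of (upstream Node, weight) pairs

    def compute(self):
        # weighted sum of upstream values plus bias, squashed by a sigmoid
        total = self.bias + sum(n.value * w for n, w in self.connections)
        self.value = 1.0 / (1.0 + math.exp(-total))
        return self.value

class Network:
    def __init__(self):
        self.layers = []

    def add_fc_layer(self, size):
        # fully connect each new node to every node of the previous layer
        layer = [Node() for _ in range(size)]
        for new in layer:
            for prev in (self.layers[-1] if self.layers else []):
                new.connections.append((prev, random.uniform(-1.0, 1.0)))
        self.layers.append(layer)

    def forward(self, inputs):
        for node, x in zip(self.layers[0], inputs):
            node.value = x
        for layer in self.layers[1:]:
            for node in layer:
                node.compute()
        return [n.value for n in self.layers[-1]]

net = Network()
for size in (4, 2, 4):  # toy encoder -> bottleneck -> decoder shape
    net.add_fc_layer(size)
print(net.forward([0.1, 0.9, 0.3, 0.7]))

Training (backpropagation) is still left as an exercise, exactly as in the C++ answer.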
Q: Chromedriver error when exiting an EC2 instance I'm trying to run a really simple script on an Ubuntu EC2 machine with Selenium. I put the next piece of code inside a loop since the script should run in the background forever: from selenium import webdriver def play(): chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("--headless") chrome_options.add_argument("--disable-gpu") chrome_options.add_argument("--no-sandbox") chrome_options.add_argument("enable-automation") chrome_options.add_argument("--disable-infobars") chrome_options.add_argument("--disable-dev-shm-usage") try: driver = webdriver.Chrome(executable_path='/usr/bin/chromedriver', options=chrome_options) except Exception as e: with open(f'{os.getcwd()}/error_log.txt', 'a') as f: f.write(str(datetime.datetime.now())) f.write(str(e)) While connected to the instance with ssh, the script runs perfectly, but when disconnected, I get this error: Message: Service /usr/bin/chromedriver unexpectedly exited. Status code was: 1 After re-connecting, the script works normally again with no touch. I'm running the script as follow: nohup python3 script.py & A: When you run a process from ssh, it is bound to your terminal session so as soon as you close the session, all subordinate processes are terminated. There are number of options how to deal. Nearly all of them implies that you have some additional tools installed and might be specific for your particular OS. Here are nice threads about the issue: https://serverfault.com/questions/463366/does-getting-disconnected-from-an-ssh-session-kill-your-programs https://askubuntu.com/questions/8653/how-to-keep-processes-running-after-ending-ssh-session https://superuser.com/questions/1293298/how-to-detach-ssh-session-without-killing-a-running-process#:~:text=ssh%20into%20your%20remote%20box,but%20leave%20your%20processes%20running. A: The command you are running is attached to your shell session. In order to keep the script running, make use of nohup, and this should allow the process to continue even after you have disconnected from your shell session. Try the following when you are on the machine nohup ./script.py > foo.out 2> foo.err < /dev/null & See the original answer here
Chromedriver error when exiting an EC2 instance
I'm trying to run a really simple script on an Ubuntu EC2 machine with Selenium. I put the next piece of code inside a loop since the script should run in the background forever: from selenium import webdriver def play(): chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("--headless") chrome_options.add_argument("--disable-gpu") chrome_options.add_argument("--no-sandbox") chrome_options.add_argument("enable-automation") chrome_options.add_argument("--disable-infobars") chrome_options.add_argument("--disable-dev-shm-usage") try: driver = webdriver.Chrome(executable_path='/usr/bin/chromedriver', options=chrome_options) except Exception as e: with open(f'{os.getcwd()}/error_log.txt', 'a') as f: f.write(str(datetime.datetime.now())) f.write(str(e)) While connected to the instance with ssh, the script runs perfectly, but when disconnected, I get this error: Message: Service /usr/bin/chromedriver unexpectedly exited. Status code was: 1 After re-connecting, the script works normally again with no touch. I'm running the script as follow: nohup python3 script.py &
[ "When you run a process from ssh, it is bound to your terminal session so as soon as you close the session, all subordinate processes are terminated.\nThere are number of options how to deal. Nearly all of them implies that you have some additional tools installed and might be specific for your particular OS.\nHere are nice threads about the issue:\nhttps://serverfault.com/questions/463366/does-getting-disconnected-from-an-ssh-session-kill-your-programs\nhttps://askubuntu.com/questions/8653/how-to-keep-processes-running-after-ending-ssh-session\nhttps://superuser.com/questions/1293298/how-to-detach-ssh-session-without-killing-a-running-process#:~:text=ssh%20into%20your%20remote%20box,but%20leave%20your%20processes%20running.\n", "The command you are running is attached to your shell session. In order to keep the script running, make use of nohup, and this should allow the process to continue even after you have disconnected from your shell session.\nTry the following when you are on the machine\nnohup ./script.py > foo.out 2> foo.err < /dev/null &\n\nSee the original answer here\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_ec2", "python", "selenium", "selenium_chromedriver" ]
stackoverflow_0074517809_amazon_ec2_python_selenium_selenium_chromedriver.txt
Q: Is there a way to get the url of popup js onclick dialogue using selenium? This is the website link which I am trying to scrape for data https://tis.nhai.gov.in/tollplazasataglance.aspx?language=en# There are links in 4th column in above site if clicked a popup window comes which has certain info along with href for the next link when we click More Information tab.We get to such links https://tis.nhai.gov.in/TollInformation.aspx?TollPlazaID=236 From selenium import webdriver driver = webdriver.Firefox() driver.maximize_window() driver.get("https://tis.nhai.gov.in/tollplazasataglance.aspx?language=en#") b = driver.find_element("xpath", '//*[@id="tollList"]/table/tbody/tr[2]/td[4]/a') c = driver.execute_script("arguments[0].click();", b) At this point I am stuck up as am unable to capture the href or url of the popup window... Kindly help me to get past to the other page from the pop up window A: Those pop-ups are the result of POST requests, where the payload is each location ID. Here is a way to get the locations IDs: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.keys import Keys import time as t chrome_options = Options() chrome_options.add_argument("--no-sandbox") chrome_options.add_argument('disable-notifications') chrome_options.add_argument("window-size=1280,720") webdriver_service = Service("chromedriver/chromedriver") ## path to where you saved chromedriver binary driver = webdriver.Chrome(service=webdriver_service, options=chrome_options) wait = WebDriverWait(driver, 25) url = 'https://tis.nhai.gov.in/tollplazasataglance.aspx?language=en#' driver.get(url) places = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@id="tollList"]//tbody/tr/td[4]' ))) for p in places: p_id = p.find_element(By.XPATH, './/a').get_attribute('onclick').split('(')[1].split(')')[0] print(p.text, p_id) Result in terminal: Aganampudi 236 Amakathadu 258 Badava 4486 Bandapalli 5697 Bandlapalli 5952 Basapuram 4542 Bathalapalli 5753 Bolapalli 252 [...] Once you have the IDs, you can go to each place' page with https://tis.nhai.gov.in/TollInformation.aspx?TollPlazaID={place_id}.
Is there a way to get the url of popup js onclick dialogue using selenium?
This is the website link which I am trying to scrape for data https://tis.nhai.gov.in/tollplazasataglance.aspx?language=en# There are links in 4th column in above site if clicked a popup window comes which has certain info along with href for the next link when we click More Information tab.We get to such links https://tis.nhai.gov.in/TollInformation.aspx?TollPlazaID=236 From selenium import webdriver driver = webdriver.Firefox() driver.maximize_window() driver.get("https://tis.nhai.gov.in/tollplazasataglance.aspx?language=en#") b = driver.find_element("xpath", '//*[@id="tollList"]/table/tbody/tr[2]/td[4]/a') c = driver.execute_script("arguments[0].click();", b) At this point I am stuck up as am unable to capture the href or url of the popup window... Kindly help me to get past to the other page from the pop up window
[ "Those pop-ups are the result of POST requests, where the payload is each location ID. Here is a way to get the locations IDs:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.keys import Keys\nimport time as t\n\nchrome_options = Options()\nchrome_options.add_argument(\"--no-sandbox\")\nchrome_options.add_argument('disable-notifications')\nchrome_options.add_argument(\"window-size=1280,720\")\n\nwebdriver_service = Service(\"chromedriver/chromedriver\") ## path to where you saved chromedriver binary\ndriver = webdriver.Chrome(service=webdriver_service, options=chrome_options)\nwait = WebDriverWait(driver, 25)\nurl = 'https://tis.nhai.gov.in/tollplazasataglance.aspx?language=en#'\ndriver.get(url)\nplaces = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@id=\"tollList\"]//tbody/tr/td[4]' )))\nfor p in places:\n p_id = p.find_element(By.XPATH, './/a').get_attribute('onclick').split('(')[1].split(')')[0]\n print(p.text, p_id)\n\nResult in terminal:\nAganampudi 236\nAmakathadu 258\nBadava 4486\nBandapalli 5697\nBandlapalli 5952\nBasapuram 4542\nBathalapalli 5753\nBolapalli 252\n[...]\n\nOnce you have the IDs, you can go to each place' page with https://tis.nhai.gov.in/TollInformation.aspx?TollPlazaID={place_id}.\n" ]
[ 0 ]
[]
[]
[ "python", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074517480_python_selenium_webdriver_web_scraping.txt
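Given the IDs printed above, each plaza page can then be fetched without Selenium at all; a sketch using requests, with the ID value taken from the sample output:

import requests

place_id = 236  # Aganampudi, from the list printed above
resp = requests.get(
    'https://tis.nhai.gov.in/TollInformation.aspx',
    params={'TollPlazaID': place_id},
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:300])  # raw HTML of the toll plaza information page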
Q: Finding whether 2 indices are adjacent in circular list Say I have a circular list which would look like this to a human: How can I determine whether two indices are adjacent please? So far I have: def is_next_to(a, b): if a == b: return False return abs(a - b) == 1 assert is_next_to(1, 1) is False assert is_next_to(1, 2) is True assert is_next_to(0, 1) is True assert is_next_to(5, 0) is True assert is_next_to(4, 3) is True assert is_next_to(3, 4) is True Do I need to make special cases for (0, 5) or (5, 0), or is there some way to use modular arithmetic to solve this? A: In a circle of 6, the number 5 is a neighbor of 0, but in a circle of 8, the number 5 would not be a neighbor of 0. So you can only reliable determine this when you know the size of the circle: this should be an extra parameter to your function. Once you have that, you can use this: def is_next_to(n, a, b): return abs(a - b) == 1 or a + b == n - 1 n = 6 assert is_next_to(n, 1, 1) is False assert is_next_to(n, 1, 2) is True assert is_next_to(n, 0, 1) is True assert is_next_to(n, 5, 0) is True assert is_next_to(n, 4, 3) is True assert is_next_to(n, 3, 4) is True With modular arithmetic it would look like this: return (a + 1) % n == b or (b + 1) % n == a or: return (a - b) % n in (1, n - 1)
Finding whether 2 indices are adjacent in circular list
Say I have a circular list which would look like this to a human: How can I determine whether two indices are adjacent please? So far I have: def is_next_to(a, b): if a == b: return False return abs(a - b) == 1 assert is_next_to(1, 1) is False assert is_next_to(1, 2) is True assert is_next_to(0, 1) is True assert is_next_to(5, 0) is True assert is_next_to(4, 3) is True assert is_next_to(3, 4) is True Do I need to make special cases for (0, 5) or (5, 0), or is there some way to use modular arithmetic to solve this?
[ "In a circle of 6, the number 5 is a neighbor of 0, but in a circle of 8, the number 5 would not be a neighbor of 0. So you can only reliable determine this when you know the size of the circle: this should be an extra parameter to your function.\nOnce you have that, you can use this:\ndef is_next_to(n, a, b):\n return abs(a - b) == 1 or a + b == n - 1\n\n \nn = 6 \nassert is_next_to(n, 1, 1) is False\nassert is_next_to(n, 1, 2) is True\nassert is_next_to(n, 0, 1) is True\nassert is_next_to(n, 5, 0) is True\nassert is_next_to(n, 4, 3) is True\nassert is_next_to(n, 3, 4) is True\n\nWith modular arithmetic it would look like this:\n return (a + 1) % n == b or (b + 1) % n == a \n\nor:\n return (a - b) % n in (1, n - 1) \n\n" ]
[ 1 ]
[]
[]
[ "circular_list", "modular_arithmetic", "python" ]
stackoverflow_0074517997_circular_list_modular_arithmetic_python.txt
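The single modular test from the answer can be checked quickly against both circle sizes it mentions: 5 and 0 come out adjacent for n=6 but not for n=8.

def is_next_to(n, a, b):
    return (a - b) % n in (1, n - 1)

for n in (6, 8):
    print(n, is_next_to(n, 5, 0))  # True for 6, False for 8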
Q: How to stop FastAPI app after raising an Exception? When handling exceptions in FastAPI, is there a way to stop the application after raising an HTTPException? An example of what I am trying to achieve: @api.route("/") def index(): try: do_something() except Exception as e: raise HTTPException(status_code=500, detail="Doing something failed!") sys.exit(1) if __name__ == "__main__": uvicorn.run(api) Raising the HTTPException alone won't stop my program, and every line of code after the raise won't be executed. Is there a good way to do something like this, or something similar with the same result? A: As described in the comments earlier, you can follow a similar approach described here, as well as here and here. Once an exception is raised, you can use a custom handler, in which you can stop the currently running event loop, using a Background Task (see Starlette's documentation as well). It is not necessary to do this inside a background task, as when stop() is called, the loop will run all scheduled tasks and then exit; however, if you do so (as in the example below), make sure you define the background task function with async def, as normal def endpoints/functions, as described in this answer, run in an external threadpool, and you, otherwise, wouldn't be able to get the running event loop. Using this approach, any operations you need to be executed when the application is shutting down, using a shutdown event handler, they will do so. Example: Accessing http://127.0.0.1:8000/hi will cause the app to terminate after returning the response. from fastapi import FastAPI, HTTPException, Request from fastapi.responses import PlainTextResponse from starlette.exceptions import HTTPException as StarletteHTTPException from starlette.background import BackgroundTask import asyncio app = FastAPI() @app.on_event('shutdown') def shutdown_event(): print('Shutting down...!') async def exit_app(): loop = asyncio.get_running_loop() loop.stop() @app.exception_handler(StarletteHTTPException) async def http_exception_handler(request, exc): task = BackgroundTask(exit_app) return PlainTextResponse(str(exc.detail), status_code=exc.status_code, background=task) @app.get('/{msg}') def main(msg: str): if msg == 'hi': raise HTTPException(status_code=500, detail='Something went wrong') return {'msg': msg} if __name__ == '__main__': import uvicorn uvicorn.run(app, host='0.0.0.0', port=8000) A: As you already know how to solve part of raising an exception and executing the code, last part is to stop the loop. You cannot do it with sys.exit(), you need to call stop directly: @api.route("/") def stop(): loop = asyncio.get_event_loop() loop.stop() Or kill the gunicorn process with subprocess.run and kill/pkill if for some reason loop cannot be stopped gracefully. Be careful of the concurrency here!
How to stop FastAPI app after raising an Exception?
When handling exceptions in FastAPI, is there a way to stop the application after raising an HTTPException? An example of what I am trying to achieve: @api.route("/") def index(): try: do_something() except Exception as e: raise HTTPException(status_code=500, detail="Doing something failed!") sys.exit(1) if __name__ == "__main__": uvicorn.run(api) Raising the HTTPException alone won't stop my program, and every line of code after the raise won't be executed. Is there a good way to do something like this, or something similar with the same result?
[ "As described in the comments earlier, you can follow a similar approach described here, as well as here and here. Once an exception is raised, you can use a custom handler, in which you can stop the currently running event loop, using a Background Task (see Starlette's documentation as well). It is not necessary to do this inside a background task, as when stop() is called, the loop will run all scheduled tasks and then exit; however, if you do so (as in the example below), make sure you define the background task function with async def, as normal def endpoints/functions, as described in this answer, run in an external threadpool, and you, otherwise, wouldn't be able to get the running event loop. Using this approach, any operations you need to be executed when the application is shutting down, using a shutdown event handler, they will do so.\nExample:\nAccessing http://127.0.0.1:8000/hi will cause the app to terminate after returning the response.\nfrom fastapi import FastAPI, HTTPException, Request\nfrom fastapi.responses import PlainTextResponse\nfrom starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.background import BackgroundTask\nimport asyncio\n\napp = FastAPI()\n\n@app.on_event('shutdown')\ndef shutdown_event():\n print('Shutting down...!')\n \nasync def exit_app():\n loop = asyncio.get_running_loop()\n loop.stop()\n \n@app.exception_handler(StarletteHTTPException)\nasync def http_exception_handler(request, exc):\n task = BackgroundTask(exit_app)\n return PlainTextResponse(str(exc.detail), status_code=exc.status_code, background=task)\n \n@app.get('/{msg}')\ndef main(msg: str):\n if msg == 'hi':\n raise HTTPException(status_code=500, detail='Something went wrong')\n\n return {'msg': msg}\n \nif __name__ == '__main__':\n import uvicorn\n uvicorn.run(app, host='0.0.0.0', port=8000)\n\n", "As you already know how to solve part of raising an exception and executing the code, last part is to stop the loop. You cannot do it with sys.exit(), you need to call stop directly:\n@api.route(\"/\")\ndef stop():\n loop = asyncio.get_event_loop()\n loop.stop()\n\nOr kill the gunicorn process with subprocess.run and kill/pkill if for some reason loop cannot be stopped gracefully.\nBe careful of the concurrency here!\n" ]
[ 1, 0 ]
[]
[]
[ "exception", "fastapi", "httpexception", "python" ]
stackoverflow_0074517267_exception_fastapi_httpexception_python.txt
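One more variant sometimes used for this: signal the server's own process so uvicorn runs its normal shutdown sequence, instead of stopping the loop by hand. A sketch, assuming a single-process uvicorn run on a Unix-like host:

import os
import signal
from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get('/{msg}')
def main(msg: str):
    if msg == 'hi':
        # uvicorn handles SIGTERM by finishing in-flight requests and exiting
        os.kill(os.getpid(), signal.SIGTERM)
        raise HTTPException(status_code=500, detail='Something went wrong')
    return {'msg': msg}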
Q: How to resolve django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS error? I am getting this error when I try to run python3 poppulate_first_app.py file (using Kali Linux and venv with django 3.1.7). Error... Traceback (most recent call last): File "/home/hadi/Documents/first_django_project/poppulate_first_app.py", line 4, in <module> from first_app.models import * File "/home/hadi/Documents/first_django_project/first_app/models.py", line 6, in <module> class Topic(models.Model): File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/db/models/base.py", line 108, in __new__ app_config = apps.get_containing_app_config(module) File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/apps/registry.py", line 253, in get_containing_app_config self.check_apps_ready() File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/apps/registry.py", line 135, in check_apps_ready settings.INSTALLED_APPS File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/conf/__init__.py", line 82, in __getattr__ self._setup(name) File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/conf/__init__.py", line 63, in _setup raise ImproperlyConfigured( django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. I tried to run it on Windows too, but the same error, also I recreate the project in case I miss with settings or something, also I tried most of the answers posted here but none of them worked for me. Here is my poppulate_first_app.py code: import django import random from faker import Faker from first_app.models import * import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'first_django_project.settings') django.setup() # FAKE POP SCRIPT fake_gen = Faker() topics = ['Search', 'Social', 'Marketplace', 'News', 'Games'] def add_topic(): t = Topic.objects.get_or_create(top_name=random.choice(topics))[0] t.save() return t def populate(n=5): for entry in range(n): # get the topic for the entry top = add_topic() # create the fake data for that entry fake_url = fake_gen.url() fake_date = fake_gen.date() fake_name = fake_gen.company() # create the new webpage entry webpg = Webpage.objects.get_or_create(topic=top, url=fake_url, name=fake_name)[0] # create a fake access record for that webpage acc_rec = AccessRecord(namme=webpg, date=fake_date) if __name__ == '__main__': print('Populating script!') populate(20) print('Populating complete!') Here is my settings.py code: """ Django settings for first_django_project project. Generated by 'django-admin startproject' using Django 3.1.7. For more information on this file, see https://docs.djangoproject.com/en/3.1/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/3.1/ref/settings/ """ from pathlib import Path # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent TEMPLATES_DIR = BASE_DIR / 'templates' STATIC_DIR = BASE_DIR / 'static' # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! 
SECRET_KEY = 'f*7mcgl1l+4l@$qxf!xp*91l%*1^cl@@rp8&_m5upzr&4j_dqr' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'first_app.apps.FirstAppConfig', 'django.contrib.admin', 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] # import django # django.setup() MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'first_django_project.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [TEMPLATES_DIR, ], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'first_django_project.wsgi.application' # Database # https://docs.djangoproject.com/en/3.1/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': BASE_DIR / 'db.sqlite3', } } # Password validation # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/3.1/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/3.1/howto/static-files/ STATIC_URL = '/static/' STATICFILES_DIRS = [ STATIC_DIR, ] Here is my models.py code: from django.db import models # Create your models here. class Topic(models.Model): top_name = models.CharField(max_length=264, unique=True) def __str__(self): return self.top_name class Webpage(models.Model): topic = models.ForeignKey(Topic, on_delete=models.DO_NOTHING) name = models.CharField(max_length=264, unique=True) url = models.URLField(unique=True) def __str__(self): return self.name class AccessRecord(models.Model): name = models.ForeignKey(Webpage, on_delete=models.DO_NOTHING) date = models.DateField() def __str__(self): return str(self.date) Here is my wsgi.py code: """ WSGI config for first_django_project project. It exposes the WSGI callable as a module-level variable named ``application``. For more information on this file, see https://docs.djangoproject.com/en/3.1/howto/deployment/wsgi/ """ import os from django.core.wsgi import get_wsgi_application os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'first_django_project.settings') application = get_wsgi_application() A: I just move the import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'first_django_project.settings') django.setup() before the from first_app.models import * A: The error message is telling what to do. Run the following line in your terminal. 
export DJANGO_SETTINGS_MODULE=first_django_project.settings A: You get this error when you are running python in the terminal instead of running python manage.py shell. If you are running python in the terminal, you have to manually set the DJANGO_SETTINGS_MODULE environment variable so that Django knows where to find your settings. When you use python manage.py shell, this is automatically done for you.

How to resolve django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS error?
I am getting this error when I try to run python3 poppulate_first_app.py file (using Kali Linux and venv with django 3.1.7). Error... Traceback (most recent call last): File "/home/hadi/Documents/first_django_project/poppulate_first_app.py", line 4, in <module> from first_app.models import * File "/home/hadi/Documents/first_django_project/first_app/models.py", line 6, in <module> class Topic(models.Model): File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/db/models/base.py", line 108, in __new__ app_config = apps.get_containing_app_config(module) File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/apps/registry.py", line 253, in get_containing_app_config self.check_apps_ready() File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/apps/registry.py", line 135, in check_apps_ready settings.INSTALLED_APPS File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/conf/__init__.py", line 82, in __getattr__ self._setup(name) File "/home/hadi/Documents/first_django_project/venv/lib/python3.9/site-packages/django/conf/__init__.py", line 63, in _setup raise ImproperlyConfigured( django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. I tried to run it on Windows too, but the same error, also I recreate the project in case I miss with settings or something, also I tried most of the answers posted here but none of them worked for me. Here is my poppulate_first_app.py code: import django import random from faker import Faker from first_app.models import * import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'first_django_project.settings') django.setup() # FAKE POP SCRIPT fake_gen = Faker() topics = ['Search', 'Social', 'Marketplace', 'News', 'Games'] def add_topic(): t = Topic.objects.get_or_create(top_name=random.choice(topics))[0] t.save() return t def populate(n=5): for entry in range(n): # get the topic for the entry top = add_topic() # create the fake data for that entry fake_url = fake_gen.url() fake_date = fake_gen.date() fake_name = fake_gen.company() # create the new webpage entry webpg = Webpage.objects.get_or_create(topic=top, url=fake_url, name=fake_name)[0] # create a fake access record for that webpage acc_rec = AccessRecord(namme=webpg, date=fake_date) if __name__ == '__main__': print('Populating script!') populate(20) print('Populating complete!') Here is my settings.py code: """ Django settings for first_django_project project. Generated by 'django-admin startproject' using Django 3.1.7. For more information on this file, see https://docs.djangoproject.com/en/3.1/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/3.1/ref/settings/ """ from pathlib import Path # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent TEMPLATES_DIR = BASE_DIR / 'templates' STATIC_DIR = BASE_DIR / 'static' # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'f*7mcgl1l+4l@$qxf!xp*91l%*1^cl@@rp8&_m5upzr&4j_dqr' # SECURITY WARNING: don't run with debug turned on in production! 
DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'first_app.apps.FirstAppConfig', 'django.contrib.admin', 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] # import django # django.setup() MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'first_django_project.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [TEMPLATES_DIR, ], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'first_django_project.wsgi.application' # Database # https://docs.djangoproject.com/en/3.1/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': BASE_DIR / 'db.sqlite3', } } # Password validation # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/3.1/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/3.1/howto/static-files/ STATIC_URL = '/static/' STATICFILES_DIRS = [ STATIC_DIR, ] Here is my models.py code: from django.db import models # Create your models here. class Topic(models.Model): top_name = models.CharField(max_length=264, unique=True) def __str__(self): return self.top_name class Webpage(models.Model): topic = models.ForeignKey(Topic, on_delete=models.DO_NOTHING) name = models.CharField(max_length=264, unique=True) url = models.URLField(unique=True) def __str__(self): return self.name class AccessRecord(models.Model): name = models.ForeignKey(Webpage, on_delete=models.DO_NOTHING) date = models.DateField() def __str__(self): return str(self.date) Here is my wsgi.py code: """ WSGI config for first_django_project project. It exposes the WSGI callable as a module-level variable named ``application``. For more information on this file, see https://docs.djangoproject.com/en/3.1/howto/deployment/wsgi/ """ import os from django.core.wsgi import get_wsgi_application os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'first_django_project.settings') application = get_wsgi_application()
[ "I just move the\nimport os\n\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'first_django_project.settings')\n\ndjango.setup()\n\nbefore the from first_app.models import *\n", "The error message is telling what to do. Run the following line in your terminal.\nexport DJANGO_SETTINGS_MODULE=poppulate_first_app.settings\n\n", "You get this error when you are running python on the terminal instead of running python manage.py shell.\nIf you are running the python, on terminal, you have to manually set DJANGO_SETTTINGS_MODULE environment variable so that django knows where to find your settings.\nWhen you use python manage.py shell, this is automatically done for you.\n" ]
[ 7, 0, 0 ]
[]
[]
[ "django", "python", "python_3.x" ]
stackoverflow_0066716375_django_python_python_3.x.txt
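Putting the accepted answer together, the top of poppulate_first_app.py has to configure settings and call django.setup() before any model import; the rest of the script stays as in the question:

import os
import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'first_django_project.settings')
django.setup()

# only after setup() is it safe to touch the ORM
import random
from faker import Faker
from first_app.models import *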
Q: selenium - wait when editbox is interactable (after button click) So I'm trying to learn about interacting with elements, after they are loaded (or enabled/interactable). In this case pressing button enables Edit box (after like 3-4secs), so you can write something. Here's link: http://the-internet.herokuapp.com/dynamic_controls Here is how it looks now - works, but what if this edit-box would load, for example, 6 seconds? Then it'd be wrecked up... enable = browser.find_element(By.XPATH, "/html/body/div[2]/div/div[1]/form[2]/button") enable.click() time.sleep(5) fillform = browser.find_element(By.XPATH, "/html/body/div[2]/div/div[1]/form[2]/input") fillform.send_keys("testtt") time.sleep(1) I also tried browser.implicitly_wait(20) but it is like ignored and does nothing. Browser just keeps closing, because it can't find ENABLED edit box. It gives the error: selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable I've tried another method for this - didn't work, as well... element = WebDriverWait(browser, 5).until(EC.visibility_of_element_located((By.XPATH, "/html/body/div[2]/div/div[1]/form[2]/input"))) I am using Chrome+Python. A: Use WebDriverWait() and wait for element_to_be_clickable(). Also use the following xpath option. driver.get('http://the-internet.herokuapp.com/dynamic_controls') WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Enable']"))).click() WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//form[@id='input-example']//input"))).send_keys("testtt") browser snapshot:
selenium - wait when editbox is interactable (after button click)
So I'm trying to learn about interacting with elements, after they are loaded (or enabled/interactable). In this case pressing button enables Edit box (after like 3-4secs), so you can write something. Here's link: http://the-internet.herokuapp.com/dynamic_controls Here is how it looks now - works, but what if this edit-box would load, for example, 6 seconds? Then it'd be wrecked up... enable = browser.find_element(By.XPATH, "/html/body/div[2]/div/div[1]/form[2]/button") enable.click() time.sleep(5) fillform = browser.find_element(By.XPATH, "/html/body/div[2]/div/div[1]/form[2]/input") fillform.send_keys("testtt") time.sleep(1) I also tried browser.implicitly_wait(20) but it is like ignored and does nothing. Browser just keeps closing, because it can't find ENABLED edit box. It gives the error: selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable I've tried another method for this - didn't work, as well... element = WebDriverWait(browser, 5).until(EC.visibility_of_element_located((By.XPATH, "/html/body/div[2]/div/div[1]/form[2]/input"))) I am using Chrome+Python.
[ "Use WebDriverWait() and wait for element_to_be_clickable(). Also use the following xpath option.\ndriver.get('http://the-internet.herokuapp.com/dynamic_controls')\nWebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, \"//button[text()='Enable']\"))).click()\nWebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, \"//form[@id='input-example']//input\"))).send_keys(\"testtt\")\n\nbrowser snapshot:\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "webdriverwait", "xpath" ]
stackoverflow_0074518012_python_selenium_selenium_webdriver_webdriverwait_xpath.txt
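For completeness, the WebDriverWait snippet in the answer assumes the standard Selenium imports:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC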
Q: How can i validate django field with either of two validators? Here is the code, I want ip_address to satisfy either of validate_fqdn or validate_ipv4_address. import re def validate_fqdn(value): pattern = re.compile(r'^[a-zA-Z0-9-_]+\.?[a-zA-Z0-9-_]+\.[a-zA-Z0-9-_]+$') if not pattern.match(value): raise ValidationError('Provided fqdn is not valid') return value class KSerializer(serializers.HyperlinkedModelSerializer): ip_address = serializers.CharField(max_length = 100, validators = [validate_fqdn, validate_ipv4_address]) How can I achieve this? A: A new validator will do: def validate_fqdn_or_ipv4_address(value): try: return validate_fqdn(value) except: return validate_ipv4_address(value)
How can I validate a Django field with either of two validators?
Here is the code, I want ip_address to satisfy either of validate_fqdn or validate_ipv4_address. import re def validate_fqdn(value): pattern = re.compile(r'^[a-zA-Z0-9-_]+\.?[a-zA-Z0-9-_]+\.[a-zA-Z0-9-_]+$') if not pattern.match(value): raise ValidationError('Provided fqdn is not valid') return value class KSerializer(serializers.HyperlinkedModelSerializer): ip_address = serializers.CharField(max_length = 100, validators = [validate_fqdn, validate_ipv4_address]) How can I achieve this?
[ "A new validator will do:\ndef validate_fqdn_or_ipv4_address(value):\n try:\n return validate_fqdn(value)\n except:\n return validate_ipv4_address(value)\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "orm", "python" ]
stackoverflow_0074518214_django_django_models_django_rest_framework_orm_python.txt
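The bare except in the answer will also swallow unrelated errors (typos, type errors); a slightly tighter sketch catches only validation failures and raises a combined message. Here validate_fqdn is the question's own function, and validate_ipv4_address comes from Django:

from django.core.exceptions import ValidationError
from django.core.validators import validate_ipv4_address

def validate_fqdn_or_ipv4_address(value):
    try:
        return validate_fqdn(value)
    except ValidationError:
        try:
            return validate_ipv4_address(value)
        except ValidationError:
            raise ValidationError('Provide a valid FQDN or IPv4 address')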
Q: Extracting a dictionary into a set of tuples Giving this dictionary: d = {'x': '999999999', 'y': ['888888888', '333333333'], 'z': '666666666', 'p': ['0000000', '11111111', '22222222'] } is it possible to make a set of tuples ? The output should be {( x, 999999999),(y,888888888, 333333333),...} I tried this : x_set = {(k, v) for k, values in d.items() for v in values} A: x_set = set() for k, v in d.items(): items = [k] if(type(v) == list): items.extend(v) else: items.append(v) x_set.add(tuple(items)) Check if the dictionary element is a list or not so you know whether to iterate through the element or simply append it. A: You could construct a set of tuples with cases depending on whether the dictionary values are lists or not. d = {'x': '999999999', 'y': ['888888888', '333333333'], 'z': '666666666', 'p': ['0000000', '11111111', '22222222'] } tuple_set = set(tuple([k] + list(map(int, v)) if isinstance(v,list) else [k, int(v)]) for k,v in d.items())
Extracting a dictionary into a set of tuples
Giving this dictionary: d = {'x': '999999999', 'y': ['888888888', '333333333'], 'z': '666666666', 'p': ['0000000', '11111111', '22222222'] } is it possible to make a set of tuples ? The output should be {( x, 999999999),(y,888888888, 333333333),...} I tried this : x_set = {(k, v) for k, values in d.items() for v in values}
[ "x_set = set()\nfor k, v in d.items():\n items = [k]\n if(type(v) == list):\n items.extend(v)\n else:\n items.append(v)\n x_set.add(tuple(items))\n\nCheck if the dictionary element is a list or not so you know whether to iterate through the element or simply append it.\n", "You could construct a set of tuples with cases depending on whether the dictionary values are lists or not.\nd = {'x': '999999999',\n'y': ['888888888', '333333333'],\n'z': '666666666',\n'p': ['0000000', '11111111', '22222222'] }\n\ntuple_set = set(tuple([k] + list(map(int, v)) if isinstance(v,list) else [k, int(v)]) for k,v in d.items())\n\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074518013_python.txt
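An equivalent set comprehension with tuple unpacking covers both the string and list cases in one pass (values stay strings, as in the input; wrap them with int() if numbers are wanted):

d = {'x': '999999999',
     'y': ['888888888', '333333333'],
     'z': '666666666',
     'p': ['0000000', '11111111', '22222222']}

x_set = {(k, *v) if isinstance(v, list) else (k, v) for k, v in d.items()}
print(x_set)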
Q: this code always go to else I tried to put if in if to force code to go to it but still go to else def add(x, y): return x + y def multiple(x, y): return x * y def subtrack(x, y): return x - y def divide(x, y): return x / y print('select your operation please') print('1-Add') print('2-Multiple') print('3-subtrack') print('4-Divide') chose=int(input('enter your selection please: ')) num1=int(input('enter your first num please: ')) num2=int(input('enter your second num please: ')) if chose == '1': print(num1,'+',num2,'=',add(num1,num2)) elif chose == '2': print(num1,'*',num2,'=',multiple(num1,num2)) elif chose == '3': print(num1, '-', num2, '=', subtrack(num1,num2)) elif chose == '4': print(num1,'/',num2,'=',divide(num1,num2)) else: print("invalid number operation") this code always go to else I tried to put if in if to force code to go to it but still go to else some solutions please A: When doing: if chose == '1' You're comparing to a char in python. If you do if chose == 1 you're actually comparing to a int. Which is also what you enter in the inputs. removing the ' around the right hand side of your if comparison operators, you will not keep getting pushed to the 'else' statement! A: I don't know what language this is but chose is an int an your if checks on an string(or char depending the language). if you change your comparison to: if chose == 1: print(num1,'+',num2,'=',add(num1,num2)) elif chose == 2: print(num1,'*',num2,'=',multiple(num1,num2)) elif chose == 3: print(num1, '-', num2, '=', subtrack(num1,num2)) elif chose == 4: print(num1,'/',num2,'=',divide(num1,num2)) else: print("invalid number operation") It should work as expected A: You are receiving user input and converting to int and while comparison you are comparing it with string i,e. if chose == '1': the solution is: 1.you receive user input in int or compare the integer value chose=int(input('enter your selection please: ')) if chose == 1: print(num1,'+',num2,'=',add(num1,num2)) elif chose == 2: print(num1,'*',num2,'=',multiple(num1,num2)) elif chose == 3: print(num1, '-', num2, '=', subtrack(num1,num2)) elif chose == 4: print(num1,'/',num2,'=',divide(num1,num2)) else: print("invalid number operation") 2.You receive user input in str and compare the string value chose=str(input('enter your selection please: '))'= if chose == '1': print(num1,'+',num2,'=',add(num1,num2)) elif chose == '2': print(num1,'*',num2,'=',multiple(num1,num2)) elif chose == '3': print(num1, '-', num2, '=', subtrack(num1,num2)) elif chose == '4': print(num1,'/',num2,'=',divide(num1,num2)) else: print("invalid number operation")
This code always goes to else; I tried putting an if inside an if to force the code into the branch, but it still goes to else
def add(x, y): return x + y def multiple(x, y): return x * y def subtrack(x, y): return x - y def divide(x, y): return x / y print('select your operation please') print('1-Add') print('2-Multiple') print('3-subtrack') print('4-Divide') chose=int(input('enter your selection please: ')) num1=int(input('enter your first num please: ')) num2=int(input('enter your second num please: ')) if chose == '1': print(num1,'+',num2,'=',add(num1,num2)) elif chose == '2': print(num1,'*',num2,'=',multiple(num1,num2)) elif chose == '3': print(num1, '-', num2, '=', subtrack(num1,num2)) elif chose == '4': print(num1,'/',num2,'=',divide(num1,num2)) else: print("invalid number operation") this code always go to else I tried to put if in if to force code to go to it but still go to else some solutions please
[ "When doing:\nif chose == '1' \nYou're comparing to a char in python.\nIf you do\nif chose == 1\nyou're actually comparing to a int. Which is also what you enter in the inputs.\nremoving the ' around the right hand side of your if comparison operators, you will not keep getting pushed to the 'else' statement!\n", "I don't know what language this is but chose is an int an your if checks on an string(or char depending the language).\nif you change your comparison to:\nif chose == 1:\n print(num1,'+',num2,'=',add(num1,num2))\nelif chose == 2:\n print(num1,'*',num2,'=',multiple(num1,num2))\nelif chose == 3:\n print(num1, '-', num2, '=', subtrack(num1,num2))\nelif chose == 4:\n print(num1,'/',num2,'=',divide(num1,num2))\nelse:\n print(\"invalid number operation\")\n\nIt should work as expected\n", "You are receiving user input and converting to int and while comparison you are comparing it with string i,e. if chose == '1':\nthe solution is:\n1.you receive user input in int or compare the integer value\nchose=int(input('enter your selection please: '))\nif chose == 1:\n print(num1,'+',num2,'=',add(num1,num2))\nelif chose == 2:\n print(num1,'*',num2,'=',multiple(num1,num2))\nelif chose == 3:\n print(num1, '-', num2, '=', subtrack(num1,num2))\nelif chose == 4:\n print(num1,'/',num2,'=',divide(num1,num2))\nelse:\n print(\"invalid number operation\")\n\n2.You receive user input in str and compare the string value\nchose=str(input('enter your selection please: '))'=\nif chose == '1':\n print(num1,'+',num2,'=',add(num1,num2))\nelif chose == '2':\n print(num1,'*',num2,'=',multiple(num1,num2))\nelif chose == '3':\n print(num1, '-', num2, '=', subtrack(num1,num2))\nelif chose == '4':\n print(num1,'/',num2,'=',divide(num1,num2))\nelse:\n print(\"invalid number operation\")\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0074518179_if_statement_python.txt
Q: pytest: ignore third party library warnings with -Werror I am running my unit tests as: pytest -Werror ... to ensure my code does not raise any warnings. However I am using third party libraries I cannot fix. These third party libraries are causing warnings which then will cause pytest -Werror to abort. In my case the warning is DeprecationWarning so I cannot disable this warning by the exception class, as it is too wide and affects other code too. How do I suppress the third party library warnings so that pytest ignores them? I have tried: with warnings.catch_warnings(): call_third_party_code_with_warnings() ... but pytest still terminates the test as ERROR and ignores warnings.catch_warnings. Specifically warning is File "/Users/moo/Library/Caches/pypoetry/virtualenvs/trade-executor-8Oz1GdY1-py3.10/lib/python3.10/site-packages/jsonschema/validators.py", line 196, in __init_subclass__ warnings.warn( DeprecationWarning: Subclassing validator classes is not intended to be part of their public API. A future version will make doing so an error, as the behavior of subclasses isn't guaranteed to stay the same between releases of jsonschema. Instead, prefer composition of validators, wrapping them in an object owned entirely by the downstream library. Disabling warning output in pyproject.toml seems to disable the warning output in pytest, but does not seem to affect -Werror flag exceptions. filterwarnings = [ "ignore:::.*.jsonschema", # DeprecationWarning: Subclassing validator classes is not intended to be part of their public API. A future version will make doing so an error, as the behavior of subclasses isn't guaranteed to stay the same between releases of jsonschema. Instead, prefer composition of validators, wrapping them in an object owned entirely by the downstream library. "ignore:::.*.validators", "ignore::DeprecationWarning:openapi_spec_validator.*:" ] A: You can add filterwarnings in pytest.ini [pytest] filterwarnings = ignore::DeprecationWarning:THIRD_PARTY_NAME.*: or ignore everything from this library if you prefer filterwarnings = ignore:::.*.THIRD_PARTY_NAME Edit: Instead of using the -Werror flag you can use pytest.ini to get the same result and ignore the library [pytest] filterwarnings = error ignore::DeprecationWarning I know you don't want to silence this warning completely; however, the warning class is not one defined by the third party, it is the builtin DeprecationWarning (defined in builtins). You could try something like ignore::jsonschema.validators.DeprecationWarning, but I doubt it will work.
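As a per-test alternative to the ini-file approach in the answer above (a standard pytest facility, not taken from the answer), the same filter strings work with the filterwarnings marker, so a global error filter can stay on while one test tolerates the jsonschema deprecation; the stand-in function below simulates the question's third-party call:
import warnings
import pytest

def call_third_party_code_with_warnings():
    # stand-in for the question's third-party call
    warnings.warn("Subclassing validator classes is deprecated", DeprecationWarning)

@pytest.mark.filterwarnings("ignore:Subclassing validator classes:DeprecationWarning")
def test_third_party_call():
    call_third_party_code_with_warnings()  # ignored here; other DeprecationWarnings still error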
pytest: ignore third party library warnings with -Werror
I am running my unit tests as: pytest -Werror ... to ensure my code does not raise any warnings. However I am using third party libraries I cannot fix. These third party libraries are causing warnings which then will cause pytest -Werror to abort. In my case the warning is DeprecationWarning so I cannot disable this warning by the exception class, as it is too wide and affects other code too. How do I suppress the third party library warnings so that pytest ignores them? I have tried: with warnings.catch_warnings(): call_third_party_code_with_warnings() ... but pytest still terminates the test as ERROR and ignores warnings.catch_warnings. Specifically warning is File "/Users/moo/Library/Caches/pypoetry/virtualenvs/trade-executor-8Oz1GdY1-py3.10/lib/python3.10/site-packages/jsonschema/validators.py", line 196, in __init_subclass__ warnings.warn( DeprecationWarning: Subclassing validator classes is not intended to be part of their public API. A future version will make doing so an error, as the behavior of subclasses isn't guaranteed to stay the same between releases of jsonschema. Instead, prefer composition of validators, wrapping them in an object owned entirely by the downstream library. Disabling warning output in pyproject.toml seems to disable the warning output in pytest, but does not seem to affect -Werror flag exceptions. filterwarnings = [ "ignore:::.*.jsonschema", # DeprecationWarning: Subclassing validator classes is not intended to be part of their public API. A future version will make doing so an error, as the behavior of subclasses isn't guaranteed to stay the same between releases of jsonschema. Instead, prefer composition of validators, wrapping them in an object owned entirely by the downstream library. "ignore:::.*.validators", "ignore::DeprecationWarning:openapi_spec_validator.*:" ]
[ "You can add filterwarnings in pytest.ini\n[pytest]\nfilterwarnings = ignore::DeprecationWarning:THIRD_PARTY_NAME.*:\n\nor ignore everything from this library if you prefer\nfilterwarnings = ignore:::.*.THIRD_PARTY_NAME\n\nEdit\nInstead of using -Werror flag you can use pytest.ini to get the same results and ignore the library\n[pytest]\nfilterwarnings = \n error\n ignore::DeprecationWarning\n\nI know you don't want to silence completely this warning, how ever it's not raised from the third-party, it's raised from builtins.py.\nYou could try something like ignore::jsonschema.validators.DeprecationWarning, but I doubt it will work.\n" ]
[ 3 ]
[]
[]
[ "pytest", "python" ]
stackoverflow_0074518081_pytest_python.txt
Q: Write into workbook with extension other than xlsx I use openpyxl to write into Excel with pandas and as long as I'm working on a file I'd like to use a different extension for it. The usual convention is to append .lock, but openpyxl refuses to cooperate with it and complains that the extension is invalid. Is there a way to disable this check or, alternatively, to make it accept it? A: I got it! This is how it goes: from pandas.io.excel import _OpenpyxlWriter class ExcelWriterWithLock(_OpenpyxlWriter): supported_extensions = (".xlsx", ".xlsm", ".lock")
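A hedged usage sketch for the subclass in the answer above; _OpenpyxlWriter is a private pandas API whose import path varies by version (newer pandas exposes it as pandas.io.excel._openpyxl.OpenpyxlWriter), and the direct instantiation plus the file name are assumptions:
import pandas as pd
from pandas.io.excel import _OpenpyxlWriter  # private API; path depends on pandas version

class ExcelWriterWithLock(_OpenpyxlWriter):
    supported_extensions = (".xlsx", ".xlsm", ".lock")

df = pd.DataFrame({"a": [1, 2]})
with ExcelWriterWithLock("workbook.lock") as writer:  # hypothetical file name
    df.to_excel(writer, index=False)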
Write into workbook with extension other than xlsx
I use openpyxl to write into Excel with pandas and as long as I'm working on a file I'd like to use a different extension for it. The usual convention is to append .lock, but openpyxl refuses to cooperate with it and complains that the extension is invalid. Is there a way to disable this check or, alternatively, to make it accept it?
[ "I got it! This is how it goes:\nfrom pandas.io.excel import _OpenpyxlWriter\n\nclass ExcelWriterWithLock(_OpenpyxlWriter):\n supported_extensions = (\".xlsx\", \".xlsm\", \".lock\")\n\n" ]
[ 1 ]
[]
[]
[ "openpyxl", "pandas", "python", "python_3.x" ]
stackoverflow_0074517986_openpyxl_pandas_python_python_3.x.txt
Q: Brownie: CompilerError: File outside of allowed directories I'm trying to import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol" to my contract but i encountered this error. CompilerError: solc returned the following errors: contracts/Lottery.sol:4:1: ParserError: Source "C:/Users/Алексей/.brownie/packages/smartcontractkit/chainlink-brownie contracts@1.1.1/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol" not found: File outside of allowed directories. import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol"; ^--------------------------------------------------------------------------^ This is my contract (Lottery.sol): // SPDX-License-Identifier: MIT pragma solidity ^0.6.6; import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol"; contract Lottery { address payable[] public players; uint256 public usdEnterFee; AggregatorV3Interface internal ethUsdPriceFeed; constructor(address _priceFeedAddress) public { usdEnterFee = 50 * (10 ** 18); ethUsdPriceFeed = AggregatorV3Interface(_priceFeedAddress); } function enter() public payable { players.push(msg.sender); } function getEnterFee() public view returns(uint256) { } function startLottery() public { } function endLottery() public { } } This is brownie-config.yaml: dependencies: - smartcontractkit/chainlink-brownie-contracts@1.1.1 compiler: solc: remappings: - '@chainlink=smartcontractkit/chainlink-brownie contracts@1.1.1' Error: enter image description here A: It doesn't find AggregatorV3Interface.sol. Have you installed it? Try npm install @chainlink/contracts. If you have already done it, check that it is in the right path. A: A "-" is missing in the "remappings:" entry; it should be: remappings: - '@chainlink=smartcontractkit/chainlink-brownie-contracts@1.1.1'
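For reference, a corrected brownie-config.yaml combining the fix above: the remapping must use the same hyphenated package name as the dependencies entry (the question's remapping has a space where the hyphen belongs, which matches the bad path in the compiler error):
dependencies:
  - smartcontractkit/chainlink-brownie-contracts@1.1.1
compiler:
  solc:
    remappings:
      - '@chainlink=smartcontractkit/chainlink-brownie-contracts@1.1.1'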
Brownie: CompilerError: File outside of allowed directories
I'm trying to import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol" to my contract but i encountered this error. CompilerError: solc returned the following errors: contracts/Lottery.sol:4:1: ParserError: Source "C:/Users/Алексей/.brownie/packages/smartcontractkit/chainlink-brownie contracts@1.1.1/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol" not found: File outside of allowed directories. import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol"; ^--------------------------------------------------------------------------^ This is my contract (Lottery.sol): // SPDX-License-Identifier: MIT pragma solidity ^0.6.6; import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol"; contract Lottery { address payable[] public players; uint256 public usdEnterFee; AggregatorV3Interface internal ethUsdPriceFeed; constructor(address _priceFeedAddress) public { usdEnterFee = 50 * (10 ** 18); ethUsdPriceFeed = AggregatorV3Interface(_priceFeedAddress); } function enter() public payable { players.push(msg.sender); } function getEnterFee() public view returns(uint256) { } function startLottery() public { } function endLottery() public { } } This is brownie-config.yaml: dependencies: - smartcontractkit/chainlink-brownie-contracts@1.1.1 compiler: solc: remappings: - '@chainlink=smartcontractkit/chainlink-brownie contracts@1.1.1' Error: enter image description here
[ "it doesn't find aggregatorV3interface.sol\nHave you installed it?\ntry pip3 install @chainlink/contracts or npm install @chainlink/contracts\nif you already done it, check if it is in the right path\n", "A \"-\" is missing in the \"remappings:\"\nit should be:\n remappings:\n - '@chainlink=smartcontractkit/chainlink-brownie-contracts@1.1.1'\n\n" ]
[ 0, 0 ]
[]
[]
[ "brownie", "python", "smartcontracts", "solidity" ]
stackoverflow_0071018981_brownie_python_smartcontracts_solidity.txt
Q: Strange error when trying to get data from database in another file I was trying to get count of items in databases. Getting count with second database is working as planned, but the first one is giving me this error KeyError: <weakref at 0x000001E85C863330; to "Flask" at 0x000001E8397750D0> This program is a very simplified, but removed elements are working fine(Get, Post, Delete methods...) So I have 3 files server1: app = Flask(__name__) api = Api(app) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///emp.db' app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False db = SQLAlchemy(app) class Value(db.Model): id = db.Column(db.Integer, primary_key=True) value = db.Column(db.Integer, nullable=False) class GetCount(Resource): @staticmethod def count(): count = Value.query.count() return count server2: app2 = Flask(__name__) api2 = Api(app2) app2.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///emp2.db' app2.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False db2 = SQLAlchemy(app2) class Value2(db2.Model): id = db2.Column(db2.Integer, primary_key=True) value = db2.Column(db2.Integer, nullable=False) class GetCount2(Resource): @staticmethod def count(): count = Value2.query.count() return count masternode: import time from server1 import app, Value from server2 import app2, Value2 app.app_context().push() app2.app_context().push() while True: c = Value.query.count() c2 = Value2.query.count() print(c, c2) time.sleep(1) I was trying to start this program, but got the error mentioned above. But when I deleted c = Value.query.count() from masternode file I got expected result(1 1 1 1 and so on) So I really don't understand why one program is working and other is not when they are practically the same Full error traceback: Traceback (most recent call last): File "C:\Users\Sergio\Desktop\Домашка\FlaskTest\masternode.py", line 15, in <module> c1 = Value.query.count() ^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 3175, in count return self._from_self(col).enable_eagerloads(False).scalar() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 2892, in scalar ret = self.one() ^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 2869, in one return self._iter().one() ^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 2915, in _iter result = self.session.execute( ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\session.py", line 1702, in execute bind = self.get_bind(**bind_arguments) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask_sqlalchemy\session.py", line 61, in get_bind engines = self._db.engines ^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask_sqlalchemy\extension.py", line 629, in engines return self._app_engines[app] ~~~~~~~~~~~~~~~~~^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\weakref.py", line 415, in __getitem__ return self.data[ref(key)] ~~~~~~~~~^^^^^^^^^^ KeyError: <weakref at 0x000001E85C863330; to 'Flask' at 0x000001E8397750D0> A: Flask uses app contexts to determine the current app, so queries for different apps should be run in their 
respective contexts. Something like this ought to work: while True: with app.app_context(): c = Value.query.count() with app2.app_context(): c2 = Value2.query.count() print(c, c2) time.sleep(1)
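Putting the fix together, a complete masternode.py with the question's own imports:
import time
from server1 import app, Value
from server2 import app2, Value2

while True:
    with app.app_context():   # Value's queries resolve against app's engine
        c = Value.query.count()
    with app2.app_context():  # Value2's queries resolve against app2's engine
        c2 = Value2.query.count()
    print(c, c2)
    time.sleep(1)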
Strange error when trying to get data from database in another file
I was trying to get count of items in databases. Getting count with second database is working as planned, but the first one is giving me this error KeyError: <weakref at 0x000001E85C863330; to "Flask" at 0x000001E8397750D0> This program is a very simplified, but removed elements are working fine(Get, Post, Delete methods...) So I have 3 files server1: app = Flask(__name__) api = Api(app) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///emp.db' app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False db = SQLAlchemy(app) class Value(db.Model): id = db.Column(db.Integer, primary_key=True) value = db.Column(db.Integer, nullable=False) class GetCount(Resource): @staticmethod def count(): count = Value.query.count() return count server2: app2 = Flask(__name__) api2 = Api(app2) app2.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///emp2.db' app2.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False db2 = SQLAlchemy(app2) class Value2(db2.Model): id = db2.Column(db2.Integer, primary_key=True) value = db2.Column(db2.Integer, nullable=False) class GetCount2(Resource): @staticmethod def count(): count = Value2.query.count() return count masternode: import time from server1 import app, Value from server2 import app2, Value2 app.app_context().push() app2.app_context().push() while True: c = Value.query.count() c2 = Value2.query.count() print(c, c2) time.sleep(1) I was trying to start this program, but got the error mentioned above. But when I deleted c = Value.query.count() from masternode file I got expected result(1 1 1 1 and so on) So I really don't understand why one program is working and other is not when they are practically the same Full error traceback: Traceback (most recent call last): File "C:\Users\Sergio\Desktop\Домашка\FlaskTest\masternode.py", line 15, in <module> c1 = Value.query.count() ^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 3175, in count return self._from_self(col).enable_eagerloads(False).scalar() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 2892, in scalar ret = self.one() ^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 2869, in one return self._iter().one() ^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\query.py", line 2915, in _iter result = self.session.execute( ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\session.py", line 1702, in execute bind = self.get_bind(**bind_arguments) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask_sqlalchemy\session.py", line 61, in get_bind engines = self._db.engines ^^^^^^^^^^^^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask_sqlalchemy\extension.py", line 629, in engines return self._app_engines[app] ~~~~~~~~~~~~~~~~~^^^^^ File "C:\Users\Sergio\AppData\Local\Programs\Python\Python311\Lib\weakref.py", line 415, in __getitem__ return self.data[ref(key)] ~~~~~~~~~^^^^^^^^^^ KeyError: <weakref at 0x000001E85C863330; to 'Flask' at 0x000001E8397750D0>
[ "Flask uses app contexts to determine the current app, so queries for different apps should be run in their respective contexts.\nSomething like this ought to work:\nwhile True:\n with app.app_context():\n c = Value.query.count()\n with app2.app_context:\n c2 = Value2.query.count()\n print(c, c2)\n time.sleep(1)\n\n" ]
[ 1 ]
[]
[]
[ "flask", "flask_sqlalchemy", "python" ]
stackoverflow_0074512773_flask_flask_sqlalchemy_python.txt
Q: Error setting spatial dimensions in netCDF file I'm trying to convert a NetCDF file to raster using rioxarray in python. However, when I try to set lat and lon as spatial dimensions (they are variables in my original .nc file), I get an error message. How can I set lon and lat from variables to dimensions? Is there an alternative way to convert a NetCDF file to raster? Thanks! Here is my full code: import xarray as xr import rioxarray as rio nc_file = xr.open_dataset('......') #path to nc data file nc_file = nc_file.set_spatial_dims(x_dim='nav_lon', y_dim='nav_lat') #set spatial dimensions bT = nc_file['votemper'] #extract variable from nc data bT.rio.write_crs("epsg:4326", inplace=True) #define CRS bT.rio.to_raster(r"bottomT_raster.tiff") #save as raster Here is the structure of my nc_file: nc_file Out[60]: <xarray.Dataset> Dimensions: (y: 86, x: 102, deptht: 1, time_counter: 132) Coordinates: * deptht (deptht) float32 3.047 * time_counter (time_counter) datetime64[ns] 2009-01-16T12:00:00 ... 2019-... Dimensions without coordinates: y, x Data variables: nav_lon (y, x) float32 -26.05 -26.0 -25.95 ... -21.1 -21.05 -21.0 nav_lat (y, x) float32 13.96 13.96 13.96 13.96 ... 18.04 18.04 18.04 votemper (time_counter, y, x) float32 ... vosaline (time_counter, y, x) float32 ... And here is the error when setting the spatial dimensions: nc_file = nc_file.set_spatial_dims(x_dim='nav_lon', y_dim='nav_lat') Traceback (most recent call last): File "/var/folders/fy/wyzk01_n36jgjq0csnn195080000gn/T/ipykernel_1587/4107632915.py", line 1, in <cell line: 1> nc_file = nc_file.set_spatial_dims(x_dim='nav_lon', y_dim='nav_lat') File "/opt/miniconda3/lib/python3.8/site-packages/xarray/core/common.py", line 239, in __getattr__ raise AttributeError( AttributeError: 'Dataset' object has no attribute 'set_spatial_dims' A: This error occurs because when you read the file a Dataset object is returned, which does not contain any method called set_spatial_dims. What you can do is convert it to an array-type object using the to_array function and then use the set_spatial_dims method through the .rio accessor: nc_file = xr.open_dataset('......') nc_file = nc_file.to_array() nc_file = nc_file.rio.write_crs("epsg:4326", inplace=True) and nc_file.rio.set_spatial_dims("nav_lon", "nav_lat", inplace=True) related topics: setting dimension using rioxarray set_spatial_dims() converting NetCDF dataset with 2D coordinates to GeoTiff #47
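A sketch of the full pipeline via the .rio accessor; note that in the question's dataset dump the spatial dimensions are actually named x and y (nav_lon and nav_lat are 2-D coordinate variables, not dimensions), so those dimension names and the file path are the assumptions here:
import xarray as xr
import rioxarray  # noqa: F401 - registers the .rio accessor

ds = xr.open_dataset("file.nc")  # assumed path
bT = ds["votemper"]
bT = bT.rio.set_spatial_dims(x_dim="x", y_dim="y", inplace=False)
bT.rio.write_crs("epsg:4326", inplace=True)
bT.rio.to_raster("bottomT_raster.tiff")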
Error setting spatial dimensions in netCDF file
I'm trying to convert a NetCDF file to raster using rioxarray in python. However, when I try to set lat and lon as spatial dimensions (they are variables in my original .nc file), I get an error message. How can I set lon and lat from variables to dimensions? Is there an alternative way to convert a NetCDF file to raster? Thanks! Here is my full code: import xarray as xr import rioxarray as rio nc_file = xr.open_dataset('......') #path to nc data file nc_file = nc_file.set_spatial_dims(x_dim='nav_lon', y_dim='nav_lat') #set spatial dimensions bT = nc_file['votemper'] #extract variable from nc data bT.rio.write_crs("epsg:4326", inplace=True) #define CRS bT.rio.to_raster(r"bottomT_raster.tiff") #save as raster Here is the structure of my nc_file: nc_file Out[60]: <xarray.Dataset> Dimensions: (y: 86, x: 102, deptht: 1, time_counter: 132) Coordinates: * deptht (deptht) float32 3.047 * time_counter (time_counter) datetime64[ns] 2009-01-16T12:00:00 ... 2019-... Dimensions without coordinates: y, x Data variables: nav_lon (y, x) float32 -26.05 -26.0 -25.95 ... -21.1 -21.05 -21.0 nav_lat (y, x) float32 13.96 13.96 13.96 13.96 ... 18.04 18.04 18.04 votemper (time_counter, y, x) float32 ... vosaline (time_counter, y, x) float32 ... And here is the error when setting the spatial dimensions: nc_file = nc_file.set_spatial_dims(x_dim='nav_lon', y_dim='nav_lat') Traceback (most recent call last): File "/var/folders/fy/wyzk01_n36jgjq0csnn195080000gn/T/ipykernel_1587/4107632915.py", line 1, in <cell line: 1> nc_file = nc_file.set_spatial_dims(x_dim='nav_lon', y_dim='nav_lat') File "/opt/miniconda3/lib/python3.8/site-packages/xarray/core/common.py", line 239, in __getattr__ raise AttributeError( AttributeError: 'Dataset' object has no attribute 'set_spatial_dims'
[ "This error is caused because when you read the file a type Dataset object is returned, so which does not contain any method called set_spatial_dimentions, what you could do is, convert it to an array type object using function to_array and then use the set_spatial_dimen method:\nnc_file = xr.open_dataset('......')\nnc_file = nc_file.to_array()\nnc_file = nc_file.rio.write_crs(\"epsg:4326\", inplace=True)\n\nand\nnc_file.rio.set_spatial_dims(\"nav_lon\", \"nav_lat\", inplace=True) \nrelated topics:\n\nsetting dimension using rioxarray set_spatial_dims()\nconverting NetCDF dataset with 2D coordinates to GeoTiff #47\n\n" ]
[ 0 ]
[]
[]
[ "geospatial", "gis", "netcdf", "python", "raster" ]
stackoverflow_0074517289_geospatial_gis_netcdf_python_raster.txt
Q: Can I run multiple windows at the same time when using selenium I want to open several more windows when starting selenium, and each window can run independently: from lxml import etree import re from selenium import webdriver from selenium.webdriver.common.by import By # selectors from selenium.webdriver.common.keys import Keys # keys from selenium.webdriver.support.wait import WebDriverWait # wait for the page to load, then look for elements from selenium.webdriver.support import expected_conditions as EC # wait for a specific element to load import time import pymysql import random import threading import string from requests_html import HTMLSession # build a request object session = HTMLSession() class A3dModelFile(object): def __init__(self): self.url = 'https://clara.io/signup' self.options = webdriver.ChromeOptions() self.options.add_experimental_option('excludeSwitches', ['enable-automation']) self.browser = webdriver.Chrome(options=self.options) time.sleep(1) self.browser.get(self.url) def other_window(self): self.browser.execute_script("window.open('https://clara.io/signup')") time.sleep(1) self.login() self.get_label() def main_window(self): self.login() self.get_label() def login(self): pass def get_label(self): pass def run(self): threads = [] # define the thread pool t1 = threading.Thread(target=a.main_window) threads.append(t1) # add the function to the thread pool index = 2 while True: t2 = threading.Thread(target=a.other_window) index += 1 threads.append(t2) if index == 3: return threads if __name__ == '__main__': a = A3dModelFile() threads = a.run() for t in threads: t.start() I've tried many times, but I can only run one window independently. I want every window to run the same function method, but only the main window can run. A: Yes, you can use multiple windows in Selenium: driver.get(2nd website) (opens a new window) press keys to switch tabs, like Ctrl+1 in Firefox driver.close() (closes only the current tab; driver.quit() would close the whole browser)
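A minimal sketch of driving two tabs from one driver via window handles; switch_to.new_window is the Selenium 4 API and the URLs are placeholders. Note that a single WebDriver session is not thread-safe, so commands to the tabs have to be issued sequentially rather than from parallel threads:
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")
driver.switch_to.new_window("tab")  # opens and focuses a second tab
driver.get("https://example.org")
first, second = driver.window_handles
driver.switch_to.window(first)      # work in the first tab again
driver.close()                      # closes only the current window
driver.switch_to.window(second)
driver.quit()                       # ends the whole session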
Can I run multiple windows at the same time when using selenium
I want to open several more windows when starting selenium, and each window can run independently: from lxml import etree import re from selenium import webdriver from selenium.webdriver.common.by import By # selectors from selenium.webdriver.common.keys import Keys # keys from selenium.webdriver.support.wait import WebDriverWait # wait for the page to load, then look for elements from selenium.webdriver.support import expected_conditions as EC # wait for a specific element to load import time import pymysql import random import threading import string from requests_html import HTMLSession # build a request object session = HTMLSession() class A3dModelFile(object): def __init__(self): self.url = 'https://clara.io/signup' self.options = webdriver.ChromeOptions() self.options.add_experimental_option('excludeSwitches', ['enable-automation']) self.browser = webdriver.Chrome(options=self.options) time.sleep(1) self.browser.get(self.url) def other_window(self): self.browser.execute_script("window.open('https://clara.io/signup')") time.sleep(1) self.login() self.get_label() def main_window(self): self.login() self.get_label() def login(self): pass def get_label(self): pass def run(self): threads = [] # define the thread pool t1 = threading.Thread(target=a.main_window) threads.append(t1) # add the function to the thread pool index = 2 while True: t2 = threading.Thread(target=a.other_window) index += 1 threads.append(t2) if index == 3: return threads if __name__ == '__main__': a = A3dModelFile() threads = a.run() for t in threads: t.start() I've tried many times, but I can only run one window independently. I want every window to run the same function method, but only the main window can run.
[ "Yes, you can use multiple windows in Selenium:\n\ndriver.get(2nd website) (opens a new window)\npresses key to switch tabs like ctrl + 1 for firefox\ndriver.quit() (closes only that tab but doesn't close the whole browser)\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074518223_python_selenium.txt
Q: Extract part of XML files in a folder I have a folder with a number of Pascal Voc XML annotations of images. The annotations looks like the one in below <annotation> <folder>images</folder> <filename>Norway_000000.jpg</filename> <size> <width>3650</width> <height>2044</height> <depth/> </size> <segmented>0</segmented> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1138.46</xmin> <ymin>1281.93</ymin> <xmax>1169.35</xmax> <ymax>1336.85</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D20</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1537.53</xmin> <ymin>1131.79</ymin> <xmax>1629.06</xmax> <ymax>1247.64</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1773.45</xmin> <ymin>1825.97</ymin> <xmax>1862.69</xmax> <ymax>2038.78</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1589.02</xmin> <ymin>1296.26</ymin> <xmax>1624.77</xmax> <ymax>1343.46</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1507.53</xmin> <ymin>1216.53</ymin> <xmax>1527.49</xmax> <ymax>1254.27</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> </annotation> I want to extract only the following part and save the new xml file. <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1138.46</xmin> <ymin>1281.93</ymin> <xmax>1169.35</xmax> <ymax>1336.85</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> I did not find any specific resource or guide to solve this except for manual removal of the unwanted parts. How can I read all the files in the folder, extract only the desired annotation and then save the new xml files? I need the images for custom object detection in tensorflow. A: This is a way you can extract each object. You can later simply iterate over them and search for a certain name. This is the part where I try to find all object elements: import xml.etree.ElementTree as ET xml_file = ET.parse('YourXml.xml') xml_root = xml_file.getroot() xml_objects = list() for i in xml_root: if i.tag == 'object': xml_objects.append(i) In this part I convert each Element to an ElementTree, which lets me write them to .xml files: for iter_i, i in enumerate(xml_objects): ET.ElementTree(i).write(f'D:\\object_{iter_i}.xml')
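Extending the answer above into the folder-wide, class-filtered version the question asks for; the folder path, the class name D00, and the output naming are assumptions:
import glob
import xml.etree.ElementTree as ET

for path in glob.glob('annotations/*.xml'):  # assumed folder of Pascal VOC files
    root = ET.parse(path).getroot()
    new_root = ET.Element('annotation')
    for obj in root.findall('object'):
        if obj.findtext('name') == 'D00':  # keep only this class
            new_root.append(obj)
    ET.ElementTree(new_root).write(path.replace('.xml', '_D00.xml'))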
Extract part of XML files in a folder
I have a folder with a number of Pascal Voc XML annotations of images. The annotations looks like the one in below <annotation> <folder>images</folder> <filename>Norway_000000.jpg</filename> <size> <width>3650</width> <height>2044</height> <depth/> </size> <segmented>0</segmented> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1138.46</xmin> <ymin>1281.93</ymin> <xmax>1169.35</xmax> <ymax>1336.85</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D20</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1537.53</xmin> <ymin>1131.79</ymin> <xmax>1629.06</xmax> <ymax>1247.64</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1773.45</xmin> <ymin>1825.97</ymin> <xmax>1862.69</xmax> <ymax>2038.78</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1589.02</xmin> <ymin>1296.26</ymin> <xmax>1624.77</xmax> <ymax>1343.46</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1507.53</xmin> <ymin>1216.53</ymin> <xmax>1527.49</xmax> <ymax>1254.27</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> </annotation> I want to extract only the following part and save the new xml file. <object> <name>D00</name> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>1138.46</xmin> <ymin>1281.93</ymin> <xmax>1169.35</xmax> <ymax>1336.85</ymax> </bndbox> <attributes> <attribute> <name>rotation</name> <value>0.0</value> </attribute> </attributes> </object> I did not find any specific resource or guide to solve this except for manual removal of the unwanted parts. How can I read all the files in the folder, extract only the desired annotation and then save the new xml files? I need the images for custom object detection in tensorflow.
[ "This is a way how you can extract each objects.\nYou can later on simply iterate inside them and search for certain name.\nThis is a part where I try to find all object elements:\nimport xml.etree.ElementTree as ET\n\nxml_file = ET.parse('YourXml.xml')\nxml_root = xml_file.getroot()\nxml_objects = list()\n\nfor i in xml_root:\n if i.tag == 'object':\n xml_objects.append(i)\n\nIn this part I convert Elements to ElementTree which lets me to write them to .xml file.\nfor iter_i, i in enumerate(xml_objects):\n ET.ElementTree(i).write(f'D:\\\\object_{iter_i}.xml')\n\n" ]
[ 0 ]
[]
[]
[ "annotations", "object_detection", "python", "xml" ]
stackoverflow_0074518273_annotations_object_detection_python_xml.txt
Q: Number changes in all row of array I created a 4x5 2D array using python, and when I wanted to change a number inside it, it automatically changes the number in every row rows,cols = (4,5) arr = [[0]*cols]*rows print (arr) And this is how the output shows [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] After I created the array, I decide to change a number in the first row arr[0][2] = 3 print(arr) But it appears like this [[0, 0, 3, 0, 0], [0, 0, 3, 0, 0], [0, 0, 3, 0, 0], [0, 0, 3, 0, 0]] I checked with it and I still can't find any problem in it. May someone help me with it? A: 'Multiplying' the list copies the value references repeatedly - which is fine for primitives but not so much for lists, like you've seen. If you want different instances of the list, you could use list comprehension: rows, cols = (4, 5) arr = [[0] * cols for _ in range(rows)] arr[0][2] = 3 print(arr) # [[0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
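The aliasing the answer describes is easy to verify with the is operator:
rows = [[0] * 5] * 4
print(rows[0] is rows[1])  # True: every row is the same inner list object
rows = [[0] * 5 for _ in range(4)]
print(rows[0] is rows[1])  # False: the comprehension builds a new list per row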
Number changes in all row of array
I created a 4x5 2D array using python, and when I wanted to change a number inside it, it automatically changes the number in every row rows,cols = (4,5) arr = [[0]*cols]*rows print (arr) And this is how the output shows [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] After I created the array, I decide to change a number in the first row arr[0][2] = 3 print(arr) But it appears like this [[0, 0, 3, 0, 0], [0, 0, 3, 0, 0], [0, 0, 3, 0, 0], [0, 0, 3, 0, 0]] I checked with it and I still can't find any problem in it. May someone help me with it?
[ "'Multiplying' the list copies the value references repeatedly - which is fine for primitives but not so much for lists, like you've seen. If you want different instances of the list, you could use list comprehension:\nrows, cols = (4, 5)\narr = [[0] * cols for _ in range(rows)]\narr[0][2] = 3\nprint(arr) # [[0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074518428_python.txt
Q: Playwright on google colab : Attempt to free invalid pointer 0x29000020c5a0 I was trying to run playwright on google colab but getting an error Installed playwright and chromium !pip install playwright !playwright install To run run async stuff in a notebook import nest_asyncio nest_asyncio.apply() My Code import time import asyncio from playwright.async_api import async_playwright async def main(): async with async_playwright() as p: browser = await p.chromium.launch(headless=True) page = await browser.new_page(storage_state='auth.json') await page.goto('https://www.google.com') time.sleep(6) html = await page.content() time.sleep(5) # await browser.close() asyncio.run(main()) which gives me following error /usr/lib/python3.7/asyncio/futures.py in result(self) 179 self.__log_traceback = False 180 if self._exception is not None: --> 181 raise self._exception 182 return self._result 183 Error: Browser closed. ==================== Browser output: ==================== <launching> /root/.cache/ms-playwright/chromium-1015/chrome-linux/chrome --disable-field-trial-config --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-back-forward-cache --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=ImprovedCookieControls,LazyFrameLoading,GlobalMediaControls,DestroyProfileOnBrowserClose,MediaRouter,DialMediaRouteProvider,AcceptCHFrame,AutoExpandDetailsElement,CertificateTransparencyComponentUpdater,AvoidUnnecessaryBeforeUnloadCheckSync --allow-pre-commit-input --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --no-service-autorun --export-tagged-pdf --no-sandbox --user-data-dir=/tmp/playwright_chromiumdev_profile-IAbW15 --remote-debugging-pipe --no-startup-window <launched> pid=656 [pid=656][err] src/tcmalloc.cc:283] Attempt to free invalid pointer 0x29000020c5a0 =========================== logs =========================== <launching> /root/.cache/ms-playwright/chromium-1015/chrome-linux/chrome --disable-field-trial-config --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-back-forward-cache --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=ImprovedCookieControls,LazyFrameLoading,GlobalMediaControls,DestroyProfileOnBrowserClose,MediaRouter,DialMediaRouteProvider,AcceptCHFrame,AutoExpandDetailsElement,CertificateTransparencyComponentUpdater,AvoidUnnecessaryBeforeUnloadCheckSync --allow-pre-commit-input --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --no-service-autorun --export-tagged-pdf --no-sandbox --user-data-dir=/tmp/playwright_chromiumdev_profile-IAbW15 --remote-debugging-pipe 
--no-startup-window <launched> pid=656 [pid=656][err] src/tcmalloc.cc:283] Attempt to free invalid pointer 0x29000020c5a0 A: When I try to run the chromium browser that downloaded by playwright using this command !/root/.cache/ms-playwright/chromium-1033/chrome-linux/chrome It gives this error src/tcmalloc.cc:283] Attempt to free invalid pointer 0x18400020c5a0 It means the problem is somehow related to the browser downloaded by playwright. We can use a different browser. First, install chromium. !apt install chromium-chromedriver Set executable_path and user_data_dir with launch_persistent_context in your code. import nest_asyncio nest_asyncio.apply() import asyncio from playwright.async_api import async_playwright async def main(): async with async_playwright() as p: browser = await p.chromium.launch_persistent_context( executable_path="/usr/bin/chromium-browser", user_data_dir="/content/random-user" ) page = await browser.new_page() await page.goto("https://google.com") title = await page.title() print(f"Title: {title}") await browser.close() asyncio.run(main()) I know this is not the right solution but it works.
Playwright on google colab : Attempt to free invalid pointer 0x29000020c5a0
I was trying to run playwright on google colab but getting an error Installed playwright and chromium !pip install playwright !playwright install To run run async stuff in a notebook import nest_asyncio nest_asyncio.apply() My Code import time import asyncio from playwright.async_api import async_playwright async def main(): async with async_playwright() as p: browser = await p.chromium.launch(headless=True) page = await browser.new_page(storage_state='auth.json') await page.goto('https://www.google.com') time.sleep(6) html = await page.content() time.sleep(5) # await browser.close() asyncio.run(main()) which gives me following error /usr/lib/python3.7/asyncio/futures.py in result(self) 179 self.__log_traceback = False 180 if self._exception is not None: --> 181 raise self._exception 182 return self._result 183 Error: Browser closed. ==================== Browser output: ==================== <launching> /root/.cache/ms-playwright/chromium-1015/chrome-linux/chrome --disable-field-trial-config --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-back-forward-cache --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=ImprovedCookieControls,LazyFrameLoading,GlobalMediaControls,DestroyProfileOnBrowserClose,MediaRouter,DialMediaRouteProvider,AcceptCHFrame,AutoExpandDetailsElement,CertificateTransparencyComponentUpdater,AvoidUnnecessaryBeforeUnloadCheckSync --allow-pre-commit-input --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --no-service-autorun --export-tagged-pdf --no-sandbox --user-data-dir=/tmp/playwright_chromiumdev_profile-IAbW15 --remote-debugging-pipe --no-startup-window <launched> pid=656 [pid=656][err] src/tcmalloc.cc:283] Attempt to free invalid pointer 0x29000020c5a0 =========================== logs =========================== <launching> /root/.cache/ms-playwright/chromium-1015/chrome-linux/chrome --disable-field-trial-config --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-back-forward-cache --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=ImprovedCookieControls,LazyFrameLoading,GlobalMediaControls,DestroyProfileOnBrowserClose,MediaRouter,DialMediaRouteProvider,AcceptCHFrame,AutoExpandDetailsElement,CertificateTransparencyComponentUpdater,AvoidUnnecessaryBeforeUnloadCheckSync --allow-pre-commit-input --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --no-service-autorun --export-tagged-pdf --no-sandbox --user-data-dir=/tmp/playwright_chromiumdev_profile-IAbW15 --remote-debugging-pipe --no-startup-window <launched> pid=656 [pid=656][err] src/tcmalloc.cc:283] Attempt to 
free invalid pointer 0x29000020c5a0
[ "When I try to run the chromium browser that downloaded by playwright using this command\n!/root/.cache/ms-playwright/chromium-1033/chrome-linux/chrome\n\nIt gives this error\nsrc/tcmalloc.cc:283] Attempt to free invalid pointer 0x18400020c5a0\n\nIt means the problem is somehow related to the browser downloaded by playwright.\nWe can use a different browser.\nFirst, install chromium.\n!apt install chromium-chromedriver\n\nSet executable_path and user_data_dir with launch_persistent_context in your code.\nimport nest_asyncio\nnest_asyncio.apply()\n\nimport asyncio\nfrom playwright.async_api import async_playwright\n\nasync def main():\n async with async_playwright() as p:\n browser = await p.chromium.launch_persistent_context(\n executable_path=\"/usr/bin/chromium-browser\",\n user_data_dir=\"/content/random-user\"\n )\n page = await browser.new_page()\n await page.goto(\"https://google.com\")\n title = await page.title()\n print(f\"Title: {title}\")\n await browser.close()\n\nasyncio.run(main())\n\nI know this is not the right solution but it works.\n" ]
[ 1 ]
[]
[]
[ "async_await", "exception", "playwright", "python", "python_asyncio" ]
stackoverflow_0073084960_async_await_exception_playwright_python_python_asyncio.txt
Q: Exclamation mark in python Hi I am curious about how you can describe an exclamation mark in python in a for loop. Input : 145 Output : It's a Strong Number. Explanation : Number = 145 145 = 1! + 4! + 5! 145 = 1 + 24 + 120 def exponent(n): res = 0 for i in str(n): a = int(i) res = res + (#exclamation mark) return res I have tried the code above but I get a little bit stuck. A: You can use math.factorial(n) for this (np.math.factorial is the same function re-exported by NumPy). Also notice that your "Output" line does not follow correct Python syntax; written as a string, the apostrophe in It's means you should wrap it in double quotes. You could do it like this: Output = "It's a strong number." For the main problem you're trying to solve: import math number = 7 result = math.factorial(number) # = 7*6*5*4*3*2*1
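A sketch completing the question's exponent function with math.factorial; 145 checks out as 1! + 4! + 5!:
import math

def digit_factorial_sum(n):
    # sum of the factorial of each digit, the '!' the question asks about
    return sum(math.factorial(int(d)) for d in str(n))

n = 145
if digit_factorial_sum(n) == n:
    print("It's a Strong Number.")  # 1 + 24 + 120 == 145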
Exclamation mark in python
Hi I am curious about how you can describe an exclamation mark in python in a for loop. Input : 145 Output : It's a Strong Number. Explanation : Number = 145 145 = 1! + 4! + 5! 145 = 1 + 24 + 120 def exponent(n): res = 0 for i in str(n): a = int(i) res = res + (#exclamation mark) return res I have tried the code above but I get a little bit stuck.
[ "You should definitely use the np.math.factorial(n) for this.\nAlso notice that your \"Output\" does not really follow correct syntax and the ' sign is causing it to be evaluated as a comment.\nYou could do it like this:\nOutput = \"It's a strong number.\"\n\nFor the main problem your trying to solve:\nimport numpy as np\nnumber = 7\nresult = np.math.factorial(number) # = 7*6*5*4*3*2*1\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074518291_python.txt
Q: Pandas: calculate the morning averaged values or afternoon averaged values I got a dataframe like this: gpi_data[['sig','hourtime']] Out[28]: sig hourtime datetime_doy 2007-01-02 -8.963545 2007-01-02 09:20:11.249998 2007-01-03 -8.671357 2007-01-03 10:39:31.874991 2007-01-03 -8.996480 2007-01-03 20:22:59.999006 2007-01-04 -8.835958 2007-01-04 10:18:56.249024 2007-01-05 -8.785034 2007-01-05 21:21:39.374002 ... ... 2019-12-30 -8.529724 2019-12-30 20:23:01.874996 2019-12-30 -8.563781 2019-12-30 20:48:28.125016 2019-12-30 -8.504211 2019-12-30 21:23:44.999996 2019-12-31 -8.460620 2019-12-31 09:39:31.873999 2019-12-31 -8.230092 2019-12-31 10:18:58.125014 [7983 rows x 2 columns] and I want to calculate the averaged values of each morning and each afternoon based on hour time. By morning I mean the data is observed around 10:00:00, and 22:00:00 for afternoon. If there is no values on the morning/evening on this day, fill it with np.nan. For example, on 2007-01-01 we don't have any morning or evening values of sig. Then we fill it with two np.nan values. Then on 2007-01-02 we only have morning value, so we fill the evening value of 2007-01-02 with np.nan. SPECIFICALLY, for 2019-12-30, we have 3 evening values which are 2019-12-30 20:23:01.874996, 2019-12-30 20:48:28.125016 and 2019-12-30 21:23:44.999996. So we need to calculate the average value of -8.529724, -8.563781 and -8.504211. It's same for the last two datapoints on the morning of 2019-12-31, we need to average them, and fill the np.nan to the evening of 2019-12-31. So ideally the final result would be: gpi_data[['sig','hourtime']] Out[28]: sig hourtime datetime_doy 2007-01-01 nan 2007-01-01 10:00:00 2007-01-01 nan 2007-01-01 22:00:00 2007-01-02 -8.963545 2007-01-02 09:20:11.249998 2007-01-02 nan 2007-01-02 22:00:00 2007-01-03 -8.671357 2007-01-03 10:39:31.874991 2007-01-03 -8.996480 2007-01-03 20:22:59.999006 2007-01-04 -8.835958 2007-01-04 10:18:56.249024 2007-01-04 nan 2007-01-04 22:00:00 2007-01-05 nan 2007-01-05 10:00:00 2007-01-05 -8.785034 2007-01-05 21:21:39.374002 ... ... 2019-12-30 -8.532572 2019-12-30 22:00:00 2019-12-31 -8.345356 2019-12-31 09:39:31.873999 2019-12-31 nan 2019-12-31 22:00:00 It's fine if we round all hourtime to 10:00:00 or 22:00:00 like below: gpi_data[['sig','hourtime']] Out[28]: sig hourtime datetime_doy 2007-01-01 nan 2007-01-01 10:00:00 2007-01-01 nan 2007-01-01 22:00:00 2007-01-02 -8.963545 2007-01-02 10:00:00 2007-01-02 nan 2007-01-02 22:00:00 2007-01-03 -8.671357 2007-01-03 10:00:00 2007-01-03 -8.996480 2007-01-03 22:00:00 2007-01-04 -8.835958 2007-01-04 10:00:00 2007-01-04 nan 2007-01-04 22:00:00 2007-01-05 nan 2007-01-05 10:00:00 2007-01-05 -8.785034 2007-01-05 22:00:00 ... ... 2019-12-30 -8.532572 2019-12-30 22:00:00 2019-12-31 -8.460620 2019-12-31 10:00:00 2019-12-31 nan 2019-12-31 22:00:00 How can I do it? is there anybody who can help me? Thanks! A: Use cut to bin each row's hour into the 10 (morning) or 22 (evening) slot; here 12 and 23 hours are used as the bin edges. Then create a MultiIndex from the minimal and maximal years with MultiIndex.from_product, aggregate the mean and add missing combinations with Series.reindex, and last create the hourtime column: df['hourtime'] = pd.cut(df['hourtime'].dt.hour, bins=[0,12,23], labels=[10,22]) start = pd.Timestamp(year=df.index.year.min(), month=1, day=1) end = pd.Timestamp(year=df.index.year.max(), month=12, day=31) mux = pd.MultiIndex.from_product([pd.date_range(start, end), [10,22]], names=['datetime_doy','h']) df = df.groupby([df.index, 'hourtime'])['sig'].mean().reindex(mux).reset_index(level=1) df['hourtime'] = df.index + pd.to_timedelta(df.pop('h'), unit='H') print (df) sig hourtime datetime_doy 2007-01-01 NaN 2007-01-01 10:00:00 2007-01-01 NaN 2007-01-01 22:00:00 2007-01-02 -8.963545 2007-01-02 10:00:00 2007-01-02 NaN 2007-01-02 22:00:00 2007-01-03 -8.671357 2007-01-03 10:00:00 ... ... 2019-12-29 NaN 2019-12-29 22:00:00 2019-12-30 NaN 2019-12-30 10:00:00 2019-12-30 -8.532572 2019-12-30 22:00:00 2019-12-31 -8.345356 2019-12-31 10:00:00 2019-12-31 NaN 2019-12-31 22:00:00 [9496 rows x 2 columns]
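A toy check of the binning step in the answer above: hours up to 12 map to the 10 label and hours 13 to 23 map to the 22 label:
import pandas as pd

hours = pd.Series(pd.to_datetime(['2007-01-03 10:39:31', '2007-01-03 20:22:59']))
print(pd.cut(hours.dt.hour, bins=[0, 12, 23], labels=[10, 22]).tolist())  # [10, 22]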
Pandas: calculate the morning averaged values or afternoon averaged values
I got a dataframe like this: gpi_data[['sig','hourtime']] Out[28]: sig hourtime datetime_doy 2007-01-02 -8.963545 2007-01-02 09:20:11.249998 2007-01-03 -8.671357 2007-01-03 10:39:31.874991 2007-01-03 -8.996480 2007-01-03 20:22:59.999006 2007-01-04 -8.835958 2007-01-04 10:18:56.249024 2007-01-05 -8.785034 2007-01-05 21:21:39.374002 ... ... 2019-12-30 -8.529724 2019-12-30 20:23:01.874996 2019-12-30 -8.563781 2019-12-30 20:48:28.125016 2019-12-30 -8.504211 2019-12-30 21:23:44.999996 2019-12-31 -8.460620 2019-12-31 09:39:31.873999 2019-12-31 -8.230092 2019-12-31 10:18:58.125014 [7983 rows x 2 columns] and I want to calculate the averaged values of each morning and each afternoon based on hour time. By morning I mean the data is observed around 10:00:00, and 22:00:00 for afternoon. If there is no values on the morning/evening on this day, fill it with np.nan. For example, on 2007-01-01 we don't have any morning or evening values of sig. Then we fill it with two np.nan values. Then on 2007-01-02 we only have morning value, so we fill the evening value of 2007-01-02 with np.nan. SPECIFICALLY, for 2019-12-30, we have 3 evening values which are 2019-12-30 20:23:01.874996, 2019-12-30 20:48:28.125016 and 2019-12-30 21:23:44.999996. So we need to calculate the average value of -8.529724, -8.563781 and -8.504211. It's same for the last two datapoints on the morning of 2019-12-31, we need to average them, and fill the np.nan to the evening of 2019-12-31. So ideally the final result would be: gpi_data[['sig','hourtime']] Out[28]: sig hourtime datetime_doy 2007-01-01 nan 2007-01-01 10:00:00 2007-01-01 nan 2007-01-01 22:00:00 2007-01-02 -8.963545 2007-01-02 09:20:11.249998 2007-01-02 nan 2007-01-02 22:00:00 2007-01-03 -8.671357 2007-01-03 10:39:31.874991 2007-01-03 -8.996480 2007-01-03 20:22:59.999006 2007-01-04 -8.835958 2007-01-04 10:18:56.249024 2007-01-04 nan 2007-01-04 22:00:00 2007-01-05 nan 2007-01-05 10:00:00 2007-01-05 -8.785034 2007-01-05 21:21:39.374002 ... ... 2019-12-30 -8.532572 2019-12-30 22:00:00 2019-12-31 -8.345356 2019-12-31 09:39:31.873999 2019-12-31 nan 2019-12-31 22:00:00 It's fine if we round all hourtime to 10:00:00 or 22:00:00 like below: gpi_data[['sig','hourtime']] Out[28]: sig hourtime datetime_doy 2007-01-01 nan 2007-01-01 10:00:00 2007-01-01 nan 2007-01-01 22:00:00 2007-01-02 -8.963545 2007-01-02 10:00:00 2007-01-02 nan 2007-01-02 22:00:00 2007-01-03 -8.671357 2007-01-03 10:00:00 2007-01-03 -8.996480 2007-01-03 22:00:00 2007-01-04 -8.835958 2007-01-04 10:00:00 2007-01-04 nan 2007-01-04 22:00:00 2007-01-05 nan 2007-01-05 10:00:00 2007-01-05 -8.785034 2007-01-05 22:00:00 ... ... 2019-12-30 -8.532572 2019-12-30 22:00:00 2019-12-31 -8.460620 2019-12-31 10:00:00 2019-12-31 nan 2019-12-31 22:00:00 How can I do it? is there anybody who can help me? Thanks!
[ "Use cut for defined 10 and 22 column by some thresholds, here is used 12 and 23 hours.\nThen create MultiIndex by minimal and maximal years in MultiIndex.from_product, aggregate mean and add missing combinations by Series.reindex, last create hourtime column:\ndf['hourtime'] = pd.cut(df['hourtime'].dt.hour, bins=[0,12,23], labels=[10,22])\n\nstart = pd.Timestamp(year=df.index.year.min(), month=1, day=1)\nend = pd.Timestamp(year=df.index.year.max(), month=12, day=31)\nmux = pd.MultiIndex.from_product([pd.date_range(start, end), [10,22]],\n names=['datetime_doy','h'])\n\ndf = df.groupby([df.index, 'hourtime'])['sig'].mean().reindex(mux).reset_index(level=1)\ndf['hourtime'] = df.index + pd.to_timedelta(df.pop('h'), unit='H')\nprint (df)\n sig hourtime\ndatetime_doy \n2007-01-01 NaN 2007-01-01 10:00:00\n2007-01-01 NaN 2007-01-01 22:00:00\n2007-01-02 -8.963545 2007-01-02 10:00:00\n2007-01-02 NaN 2007-01-02 22:00:00\n2007-01-03 -8.671357 2007-01-03 10:00:00\n ... ...\n2019-12-29 NaN 2019-12-29 22:00:00\n2019-12-30 NaN 2019-12-30 10:00:00\n2019-12-30 -8.532572 2019-12-30 22:00:00\n2019-12-31 -8.345356 2019-12-31 10:00:00\n2019-12-31 NaN 2019-12-31 22:00:00\n\n[9496 rows x 2 columns]\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "group_by", "pandas", "pandas_resample", "python" ]
stackoverflow_0074518311_dataframe_group_by_pandas_pandas_resample_python.txt
Q: How to convert a pandas DatetimeIndex to Array of Timestamps? I've been digging at this, but think I've confused myself on the various ways pandas can represent dates and times. I've imported a csv of data which includes columns for year, month, day, etc, and then converted that to a datetime column and then set it as an index - all good. # import and name columns epwNames = ['year','month','day','hour','minute','Datasource','DryBulb {C}','DewPoint {C}','RelHum {%}','Atmos Pressure {Pa}','ExtHorzRad {Wh/m2}','ExtDirRad {Wh/m2}','HorzIRSky {Wh/m2}','GloHorzRad {Wh/m2}','DirNormRad {Wh/m2}','DifHorzRad {Wh/m2}','GloHorzIllum {lux}','DirNormIllum {lux}','DifHorzIllum {lux}','ZenLum {Cd/m2}','WindDir {deg}','WindSpd {m/s}','TotSkyCvr {.1}','OpaqSkyCvr {.1}','Visibility {km}','Ceiling Hgt {m}','PresWeathObs','PresWeathCodes','Precip Wtr {mm}','Aerosol Opt Depth {.001}','SnowDepth {cm}','Days Last Snow','Albedo {.01}','Rain {mm}','Rain Quantity {hr}'] Weather = pd.read_csv(filepath,header=None,skiprows=8,names=epwNames) # Format timestamp index Weather['Datetime'] = pd.to_datetime(Weather[['year','month','day','hour']]) Weather.index = Weather['Datetime'] I have another function which uses the datetime, but is currently set up to require an array of Timestamps - this may or may not be the best way to do it, but for example I have things like this, where 'timestamp' is the array being passed in: get_julianDate = np.vectorize(pd.Timestamp.to_julian_date) julianDay = get_julianDate(timestamp) If I run it by passing in the DateTimeIndex, I get an attribute error that 'numpy.datetime64' object has no attribute 'year', which seems weird. Everything works fine if I pass through an array of Timestamps though. I've tried some simple conversions, like just passing the DateTimeIndex into pd.Timestamp, but I guess that would have been too easy. Is there a simple way to do this? A: Two ways might work: get_julianDate(timestamp.to_list()) # or use pd.Index.map directly: timestamp.map(pd.Timestamp.to_julian_date)
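A quick check of the map variant from the answer on a small index (2007-01-01 00:00 corresponds to Julian date 2454101.5):
import pandas as pd

idx = pd.date_range('2007-01-01', periods=3, freq='D')
print(idx.map(pd.Timestamp.to_julian_date).tolist())
# [2454101.5, 2454102.5, 2454103.5]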
How to convert a pandas DatetimeIndex to Array of Timestamps?
I've been digging at this, but think I've confused myself on the various ways pandas can represent dates and times. I've imported a csv of data which includes columns for year, month, day, etc., and then converted that to a datetime column and then set it as an index - all good.

# import and name columns
epwNames = ['year','month','day','hour','minute','Datasource','DryBulb {C}','DewPoint {C}','RelHum {%}','Atmos Pressure {Pa}','ExtHorzRad {Wh/m2}','ExtDirRad {Wh/m2}','HorzIRSky {Wh/m2}','GloHorzRad {Wh/m2}','DirNormRad {Wh/m2}','DifHorzRad {Wh/m2}','GloHorzIllum {lux}','DirNormIllum {lux}','DifHorzIllum {lux}','ZenLum {Cd/m2}','WindDir {deg}','WindSpd {m/s}','TotSkyCvr {.1}','OpaqSkyCvr {.1}','Visibility {km}','Ceiling Hgt {m}','PresWeathObs','PresWeathCodes','Precip Wtr {mm}','Aerosol Opt Depth {.001}','SnowDepth {cm}','Days Last Snow','Albedo {.01}','Rain {mm}','Rain Quantity {hr}']
Weather = pd.read_csv(filepath,header=None,skiprows=8,names=epwNames)

# Format timestamp index
Weather['Datetime'] = pd.to_datetime(Weather[['year','month','day','hour']])
Weather.index = Weather['Datetime']

I have another function which uses the datetime, but it is currently set up to require an array of Timestamps - this may or may not be the best way to do it, but for example I have things like this, where 'timestamp' is the array being passed in:

get_julianDate = np.vectorize(pd.Timestamp.to_julian_date)
julianDay = get_julianDate(timestamp)

If I run it by passing in the DatetimeIndex, I get an attribute error that 'numpy.datetime64' object has no attribute 'year', which seems weird. Everything works fine if I pass through an array of Timestamps though. I've tried some simple conversions, like just passing the DatetimeIndex into pd.Timestamp, but I guess that would have been too easy. Is there a simple way to do this?
[ "Two ways might work:\nget_julianDate(timestamp.to_list())\n# or use pd.Index.map directly:\ntimestamp.map(pd.Timestamp.to_julian_date)\n\n" ]
[ 0 ]
[]
[]
[ "datetimeindex", "pandas", "python", "timestamp" ]
stackoverflow_0074512162_datetimeindex_pandas_python_timestamp.txt
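For reference, a short runnable illustration of the mapping answer above on a hypothetical index. The date range below is made up; Weather.index from the question would work the same way:

import pandas as pd

# Hypothetical stand-in for Weather.index from the question.
idx = pd.date_range("2007-01-01", periods=3, freq="h")

# DatetimeIndex.map hands each element to the function as a pd.Timestamp,
# so no intermediate array of Timestamps is needed.
print(idx.map(pd.Timestamp.to_julian_date))

# Equivalent via an explicit list of Timestamps (usable with np.vectorize):
print([ts.to_julian_date() for ts in idx.to_list()])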
Q: fastest way to bruteforce a 6 character password I am trying to find a faster way to brute force a password with 6 characters in this format [abc123]: always 3 lower-case letters and 3 numbers after. So far I have tried a few different things, but I'm pretty sure there are more effective methods of solving this. It also must include hashing the password and comparing it to find the correct password. Here is the most effective code I have so far, using random iteration and multithreading:

import string
from itertools import product
from time import time
import threading
import random

password = input("write your 3 letter 3 number password here: ")
hashed = hash(password)
start = time()

def product_loop(hashing, generator):
    for p in generator:
        global stop_threads
        if stop_threads:
            break
        if hash(''.join(p)) == hashing:
            print('\nPassword:', ''.join(p))
            print(hash(''.join(p)))
            stop_threads = True
            end = time()
            print('Total time: %.2f seconds' % (end - start))
            return ''.join(p)
    return False

def bruteforce(hashing, max_nchar=8):
    for l in range(6, max_nchar + 1):
        if stop_threads:
            break
        print("\t..%d char" % l)
        generator = product(random.sample(string.ascii_lowercase, 26) + random.sample(string.digits, 10),
                            repeat=int(l))
        p = product_loop(hashing, generator)
        if p is not False:
            return p

if __name__ == "__main__":
    stop_threads = False
    t1 = threading.Thread(target=bruteforce, args=(hashed,))
    t2 = threading.Thread(target=bruteforce, args=(hashed,))
    t3 = threading.Thread(target=bruteforce, args=(hashed,))
    t4 = threading.Thread(target=bruteforce, args=(hashed,))
    t5 = threading.Thread(target=bruteforce, args=(hashed,))
    t6 = threading.Thread(target=bruteforce, args=(hashed,))
    t7 = threading.Thread(target=bruteforce, args=(hashed,))
    t8 = threading.Thread(target=bruteforce, args=(hashed,))
    t1.start()
    t2.start()
    t3.start()
    t4.start()
    t5.start()
    t6.start()
    t7.start()
    t8.start()

I am not sure how this code can be made faster, but since I'm a beginner I'm pretty sure there are some ways to improve it.
A: Try them all in order; there aren't many faster ways.
fastest way to bruteforce a 6 character password
I am trying to find a faster way to brute force a password with 6 characters in this format [abc123]: always 3 lower-case letters and 3 numbers after. So far I have tried a few different things, but I'm pretty sure there are more effective methods of solving this. It also must include hashing the password and comparing it to find the correct password. Here is the most effective code I have so far, using random iteration and multithreading:

import string
from itertools import product
from time import time
import threading
import random

password = input("write your 3 letter 3 number password here: ")
hashed = hash(password)
start = time()

def product_loop(hashing, generator):
    for p in generator:
        global stop_threads
        if stop_threads:
            break
        if hash(''.join(p)) == hashing:
            print('\nPassword:', ''.join(p))
            print(hash(''.join(p)))
            stop_threads = True
            end = time()
            print('Total time: %.2f seconds' % (end - start))
            return ''.join(p)
    return False

def bruteforce(hashing, max_nchar=8):
    for l in range(6, max_nchar + 1):
        if stop_threads:
            break
        print("\t..%d char" % l)
        generator = product(random.sample(string.ascii_lowercase, 26) + random.sample(string.digits, 10),
                            repeat=int(l))
        p = product_loop(hashing, generator)
        if p is not False:
            return p

if __name__ == "__main__":
    stop_threads = False
    t1 = threading.Thread(target=bruteforce, args=(hashed,))
    t2 = threading.Thread(target=bruteforce, args=(hashed,))
    t3 = threading.Thread(target=bruteforce, args=(hashed,))
    t4 = threading.Thread(target=bruteforce, args=(hashed,))
    t5 = threading.Thread(target=bruteforce, args=(hashed,))
    t6 = threading.Thread(target=bruteforce, args=(hashed,))
    t7 = threading.Thread(target=bruteforce, args=(hashed,))
    t8 = threading.Thread(target=bruteforce, args=(hashed,))
    t1.start()
    t2.start()
    t3.start()
    t4.start()
    t5.start()
    t6.start()
    t7.start()
    t8.start()

I am not sure how this code can be made faster, but since I'm a beginner I'm pretty sure there are some ways to improve it.
[ "Try them all in order; there aren't many faster ways.\n" ]
[ 0 ]
[]
[]
[ "brute_force", "multithreading", "python" ]
stackoverflow_0074240833_brute_force_multithreading_python.txt
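To make the terse answer above concrete: because the format is fixed (3 lower-case letters, then 3 digits), an ordered exhaustive search only has 26^3 * 10^3 = 17,576,000 candidates, far fewer than random draws over the full 36-character alphabet at every position. A minimal single-threaded sketch follows; note that Python's built-in hash() is randomized per process for strings, so this comparison only makes sense within one run, and a real cracker would target a stable digest such as hashlib.sha256:

import string
from itertools import product

def bruteforce(target_hash):
    # Walk aaa000 .. zzz999 in order and stop at the first match.
    for letters in product(string.ascii_lowercase, repeat=3):
        for digits in product(string.digits, repeat=3):
            candidate = "".join(letters) + "".join(digits)
            if hash(candidate) == target_hash:
                return candidate
    return None

print(bruteforce(hash("abc123")))  # -> abc123 (same process only; see note above)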
Q: matplotlib: change title and colorbar text and tick colors I wanted to know how to change the color of the ticks in the colorbar and how to change the font color of the title and colorbar in a figure. For example, things obviously are visible in temp.png but not in temp2.png: import matplotlib.pyplot as plt import numpy as np from numpy.random import randn fig = plt.figure() data = np.clip(randn(250,250),-1,1) cax = plt.imshow(data, interpolation='nearest') plt.title('my random fig') plt.colorbar() # works fine plt.savefig('temp.png') # title and colorbar ticks and text hidden plt.savefig('temp2.png', facecolor="black", edgecolor="none") Thanks A: Previous answer didnt give what I wanted. This is how I did it: import matplotlib.pyplot as plt import numpy as np from numpy.random import randn data = np.clip(randn(250,250),-1,1) data = np.ma.masked_where(data > 0.5, data) fig, ax1 = plt.subplots(1,1) im = ax1.imshow(data, interpolation='nearest') cb = plt.colorbar(im) fg_color = 'white' bg_color = 'black' # IMSHOW # set title plus title color ax1.set_title('ax1 title', color=fg_color) # set figure facecolor ax1.patch.set_facecolor(bg_color) # set tick and ticklabel color im.axes.tick_params(color=fg_color, labelcolor=fg_color) # set imshow outline for spine in im.axes.spines.values(): spine.set_edgecolor(fg_color) # COLORBAR # set colorbar label plus label color cb.set_label('colorbar label', color=fg_color) # set colorbar tick color cb.ax.yaxis.set_tick_params(color=fg_color) # set colorbar edgecolor cb.outline.set_edgecolor(fg_color) # set colorbar ticklabels plt.setp(plt.getp(cb.ax.axes, 'yticklabels'), color=fg_color) fig.patch.set_facecolor(bg_color) plt.tight_layout() plt.show() #plt.savefig('save/to/pic.png', dpi=200, facecolor=bg_color) A: (Update: The information in this answer is outdated, please scroll below for other answers which is up to date and better suited to new version) This can be done by inspecting and setting properties for object handler in matplotlib. I edited your code and put some explanation in comment: import matplotlib.pyplot as plt import numpy as np from numpy.random import randn fig = plt.figure() data = np.clip(randn(250,250),-1,1) cax = plt.imshow(data, interpolation='nearest') title_obj = plt.title('my random fig') #get the title property handler plt.getp(title_obj) #print out the properties of title plt.getp(title_obj, 'text') #print out the 'text' property for title plt.setp(title_obj, color='r') #set the color of title to red axes_obj = plt.getp(cax,'axes') #get the axes' property handler ytl_obj = plt.getp(axes_obj, 'yticklabels') #get the properties for # yticklabels plt.getp(ytl_obj) #print out a list of properties # for yticklabels plt.setp(ytl_obj, color="r") #set the color of yticks to red plt.setp(plt.getp(axes_obj, 'xticklabels'), color='r') #xticklabels: same color_bar = plt.colorbar() #this one is a little bit cbytick_obj = plt.getp(color_bar.ax.axes, 'yticklabels') #tricky plt.setp(cbytick_obj, color='r') plt.savefig('temp.png') plt.savefig('temp2.png', facecolor="black", edgecolor="none") A: While the other answers are surely correct, it seems this is easier being solved using either styles or specific rcParams, or using the tick_params function Styles Matplotlib provides a dark_background style. You may use it e.g. 
via plt.style.use("dark_background"): import matplotlib.pyplot as plt import numpy as np plt.style.use("dark_background") fig = plt.figure() data = np.clip(np.random.randn(150,150),-1,1) plt.imshow(data) plt.title('my random fig') plt.colorbar() plt.savefig('temp2.png', facecolor="black", edgecolor="none") plt.show() Or, if you need to create the same figure with and without black background styles may be used in a context. import matplotlib.pyplot as plt import numpy as np def create_plot(): fig = plt.figure() data = np.clip(np.random.randn(150,150),-1,1) plt.imshow(data) plt.title('my random fig') plt.colorbar() return fig # create white background plot create_plot() plt.savefig('white_bg.png') with plt.style.context("dark_background"): create_plot() plt.savefig('dark_bg.png', facecolor="black", edgecolor="none") Read more about this in the Customizing matplotlib tutorial. Specific rcParams You may individually set the required rcParams that compose a style where needed in your script. E.g. to make any text blue and yticks red: params = {"text.color" : "blue", "xtick.color" : "crimson", "ytick.color" : "crimson"} plt.rcParams.update(params) This will automatically also colorize the tickmarks. Customizing ticks and labels You may also customize the objects in the plot individually. For ticks and ticklabels there is a tick_params method. E.g. to only make the ticks of the colorbar red, cbar = plt.colorbar() cbar.ax.tick_params(color="red", width=5, length=10) A: Based on previous answer I added two lines to set the colorbar's box color and colorbar's ticks color: import matplotlib.pyplot as plt import numpy as np from numpy.random import randn fig = plt.figure() data = np.clip(randn(250,250),-1,1) cax = plt.imshow(data, interpolation='nearest') title_obj = plt.title('my random fig') #get the title property handler plt.setp(title_obj, color='w') #set the color of title to white axes_obj = plt.getp(cax,'axes') #get the axes' property handler plt.setp(plt.getp(axes_obj, 'yticklabels'), color='w') #set yticklabels color plt.setp(plt.getp(axes_obj, 'xticklabels'), color='w') #set xticklabels color color_bar = plt.colorbar() plt.setp(plt.getp(color_bar.ax.axes, 'yticklabels'), color='w') # set colorbar # yticklabels color ##### two new lines #### color_bar.outline.set_color('w') #set colorbar box color color_bar.ax.yaxis.set_tick_params(color='w') #set colorbar ticks color ##### two new lines #### plt.setp(cbytick_obj, color='r') plt.savefig('temp.png') plt.savefig('temp3.png', facecolor="black", edgecolor="none") A: Also, you can change the tick labels with: cax = plt.imshow(data) cbar = plt.colorbar(orientation='horizontal', alpha=0.8, label ='my label', fraction=0.075, pad=0.07, extend='max') #get the ticks and transform it to list, if you want to add strings. cbt = cbar.get_ticks().tolist() #edit the new list of ticks, for instance the firs element cbt[0]='$no$ $data$' # then, apply the changes on the actual colorbar cbar.ax.set_xticklabels(cbt)
matplotlib: change title and colorbar text and tick colors
I wanted to know how to change the color of the ticks in the colorbar and how to change the font color of the title and colorbar in a figure. For example, things obviously are visible in temp.png but not in temp2.png:

import matplotlib.pyplot as plt
import numpy as np
from numpy.random import randn

fig = plt.figure()
data = np.clip(randn(250,250),-1,1)
cax = plt.imshow(data, interpolation='nearest')
plt.title('my random fig')
plt.colorbar()

# works fine
plt.savefig('temp.png')

# title and colorbar ticks and text hidden
plt.savefig('temp2.png', facecolor="black", edgecolor="none")

Thanks
[ "Previous answer didnt give what I wanted. \nThis is how I did it:\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy.random import randn\ndata = np.clip(randn(250,250),-1,1)\ndata = np.ma.masked_where(data > 0.5, data)\n\n\nfig, ax1 = plt.subplots(1,1)\n\nim = ax1.imshow(data, interpolation='nearest')\ncb = plt.colorbar(im)\n\nfg_color = 'white'\nbg_color = 'black'\n\n# IMSHOW \n# set title plus title color\nax1.set_title('ax1 title', color=fg_color)\n\n# set figure facecolor\nax1.patch.set_facecolor(bg_color)\n\n# set tick and ticklabel color\nim.axes.tick_params(color=fg_color, labelcolor=fg_color)\n\n# set imshow outline\nfor spine in im.axes.spines.values():\n spine.set_edgecolor(fg_color) \n\n# COLORBAR\n# set colorbar label plus label color\ncb.set_label('colorbar label', color=fg_color)\n\n# set colorbar tick color\ncb.ax.yaxis.set_tick_params(color=fg_color)\n\n# set colorbar edgecolor \ncb.outline.set_edgecolor(fg_color)\n\n# set colorbar ticklabels\nplt.setp(plt.getp(cb.ax.axes, 'yticklabels'), color=fg_color)\n\nfig.patch.set_facecolor(bg_color) \nplt.tight_layout()\nplt.show()\n#plt.savefig('save/to/pic.png', dpi=200, facecolor=bg_color)\n\n\n", "(Update: The information in this answer is outdated, please scroll below for other answers which is up to date and better suited to new version)\nThis can be done by inspecting and setting properties for object handler in matplotlib.\nI edited your code and put some explanation in comment:\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy.random import randn\n\nfig = plt.figure()\ndata = np.clip(randn(250,250),-1,1)\ncax = plt.imshow(data, interpolation='nearest')\n\ntitle_obj = plt.title('my random fig') #get the title property handler\nplt.getp(title_obj) #print out the properties of title\nplt.getp(title_obj, 'text') #print out the 'text' property for title\nplt.setp(title_obj, color='r') #set the color of title to red\n\naxes_obj = plt.getp(cax,'axes') #get the axes' property handler\nytl_obj = plt.getp(axes_obj, 'yticklabels') #get the properties for \n # yticklabels\nplt.getp(ytl_obj) #print out a list of properties\n # for yticklabels\nplt.setp(ytl_obj, color=\"r\") #set the color of yticks to red\n\nplt.setp(plt.getp(axes_obj, 'xticklabels'), color='r') #xticklabels: same\n\ncolor_bar = plt.colorbar() #this one is a little bit\ncbytick_obj = plt.getp(color_bar.ax.axes, 'yticklabels') #tricky\nplt.setp(cbytick_obj, color='r')\n\nplt.savefig('temp.png')\nplt.savefig('temp2.png', facecolor=\"black\", edgecolor=\"none\")\n\n", "While the other answers are surely correct, it seems this is easier being solved using either styles or specific rcParams, or using the tick_params function\nStyles\nMatplotlib provides a dark_background style. You may use it e.g. 
via plt.style.use(\"dark_background\"):\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nplt.style.use(\"dark_background\")\n\nfig = plt.figure()\ndata = np.clip(np.random.randn(150,150),-1,1)\nplt.imshow(data)\nplt.title('my random fig')\nplt.colorbar() \n\nplt.savefig('temp2.png', facecolor=\"black\", edgecolor=\"none\")\nplt.show()\n\n\nOr, if you need to create the same figure with and without black background styles may be used in a context.\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef create_plot():\n fig = plt.figure()\n data = np.clip(np.random.randn(150,150),-1,1)\n plt.imshow(data)\n plt.title('my random fig')\n plt.colorbar()\n return fig\n\n# create white background plot\ncreate_plot()\nplt.savefig('white_bg.png')\n\nwith plt.style.context(\"dark_background\"):\n create_plot()\n plt.savefig('dark_bg.png', facecolor=\"black\", edgecolor=\"none\")\n\nRead more about this in the Customizing matplotlib tutorial.\nSpecific rcParams\nYou may individually set the required rcParams that compose a style where needed in your script. \nE.g. to make any text blue and yticks red:\nparams = {\"text.color\" : \"blue\",\n \"xtick.color\" : \"crimson\",\n \"ytick.color\" : \"crimson\"}\nplt.rcParams.update(params)\n\nThis will automatically also colorize the tickmarks.\n\nCustomizing ticks and labels\nYou may also customize the objects in the plot individually. For ticks and ticklabels there is a tick_params method. E.g. to only make the ticks of the colorbar red,\ncbar = plt.colorbar()\ncbar.ax.tick_params(color=\"red\", width=5, length=10)\n\n \n", "Based on previous answer I added two lines to set the colorbar's box color and colorbar's ticks color: \nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy.random import randn\n\nfig = plt.figure()\ndata = np.clip(randn(250,250),-1,1)\ncax = plt.imshow(data, interpolation='nearest')\n\ntitle_obj = plt.title('my random fig') #get the title property handler\nplt.setp(title_obj, color='w') #set the color of title to white\n\naxes_obj = plt.getp(cax,'axes') #get the axes' property handler\nplt.setp(plt.getp(axes_obj, 'yticklabels'), color='w') #set yticklabels color\nplt.setp(plt.getp(axes_obj, 'xticklabels'), color='w') #set xticklabels color\n\ncolor_bar = plt.colorbar() \nplt.setp(plt.getp(color_bar.ax.axes, 'yticklabels'), color='w') # set colorbar \n # yticklabels color\n##### two new lines ####\ncolor_bar.outline.set_color('w') #set colorbar box color\ncolor_bar.ax.yaxis.set_tick_params(color='w') #set colorbar ticks color \n##### two new lines ####\n\nplt.setp(cbytick_obj, color='r')\nplt.savefig('temp.png')\nplt.savefig('temp3.png', facecolor=\"black\", edgecolor=\"none\")\n\n", "Also, you can change the tick labels with:\ncax = plt.imshow(data)\ncbar = plt.colorbar(orientation='horizontal', alpha=0.8, label ='my label',\n fraction=0.075, pad=0.07, extend='max')\n#get the ticks and transform it to list, if you want to add strings.\ncbt = cbar.get_ticks().tolist() \n#edit the new list of ticks, for instance the firs element\ncbt[0]='$no$ $data$'\n# then, apply the changes on the actual colorbar\ncbar.ax.set_xticklabels(cbt)\n\n" ]
[ 42, 40, 18, 6, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0009662995_matplotlib_python.txt
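For reference, a minimal consolidation of the answers above using tick_params, which colors tick marks and tick labels in one call. The white-on-black scheme matches the temp2.png case from the question:

import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(facecolor="black")
im = ax.imshow(np.clip(np.random.randn(250, 250), -1, 1))

ax.set_title("my random fig", color="white")
ax.tick_params(colors="white")         # axes tick marks and labels

cbar = fig.colorbar(im, ax=ax)
cbar.ax.tick_params(colors="white")    # colorbar tick marks and labels
cbar.outline.set_edgecolor("white")    # colorbar box

fig.savefig("temp2.png", facecolor="black", edgecolor="none")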
Q: How to build a SystemTray app for Windows? I usually work on a Linux system, but I have a situation where I need to write a client app that would run on windows as a service. Can someone help me or direct, on how to build a system tray app (for example like dropbox) for the windows environment, which gets started on OS startup and the icon sits in the TaskBar and on clicking the app icon presents a menu. My scripting language is python. Thanks. A: You do this using the pywin32 (Python for Windows Extensions) module. Example Code for Python 2 Similar Question To make it run at startup you could mess around with services but it's actually much easier to install a link to the exe in the users "Startup Folder". Windows 7 and Vista c:\Users\[username]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup Windows XP c:\Documents and Settings\[username]\Start Menu\Programs\Startup A: I modified the SysTrayIcon.py Python 2 script to work in Python 3 You need to install pip install pywin32. After that you need to run python Scripts/pywin32_postinstall.py -install from your Python directory to register the dlls. For the test script to run, you need to have some *.ico files in your working directory - you can find lots of them in your c:\windows* folders (search for file:.ico). To hide the program, you can run it via pythonw.exe. If you need balloon notifications, have a look at this post: https://stackoverflow.com/a/42085439/2441026 (Plyer package). To have a menu with only the Quit button you need to pass menu_options = ((None, None, None),) - (or change the class to not always append menu_options). #!/usr/bin/env python # Module : SysTrayIcon.py # Synopsis : Windows System tray icon. # Programmer : Simon Brunning - simon@brunningonline.net - modified for Python 3 # Date : 13 February 2018 # Notes : Based on (i.e. ripped off from) Mark Hammond's # win32gui_taskbar.py and win32gui_menu.py demos from PyWin32 '''TODO For now, the demo at the bottom shows how to use it...''' import os import sys import win32api # package pywin32 import win32con import win32gui_struct try: import winxpgui as win32gui except ImportError: import win32gui class SysTrayIcon(object): '''TODO''' QUIT = 'QUIT' SPECIAL_ACTIONS = [QUIT] FIRST_ID = 1023 def __init__(self, icon, hover_text, menu_options, on_quit=None, default_menu_index=None, window_class_name=None,): self.icon = icon self.hover_text = hover_text self.on_quit = on_quit menu_options = menu_options + (('Quit', None, self.QUIT),) self._next_action_id = self.FIRST_ID self.menu_actions_by_id = set() self.menu_options = self._add_ids_to_menu_options(list(menu_options)) self.menu_actions_by_id = dict(self.menu_actions_by_id) del self._next_action_id self.default_menu_index = (default_menu_index or 0) self.window_class_name = window_class_name or "SysTrayIconPy" message_map = {win32gui.RegisterWindowMessage("TaskbarCreated"): self.restart, win32con.WM_DESTROY: self.destroy, win32con.WM_COMMAND: self.command, win32con.WM_USER+20 : self.notify,} # Register the Window class. window_class = win32gui.WNDCLASS() hinst = window_class.hInstance = win32gui.GetModuleHandle(None) window_class.lpszClassName = self.window_class_name window_class.style = win32con.CS_VREDRAW | win32con.CS_HREDRAW; window_class.hCursor = win32gui.LoadCursor(0, win32con.IDC_ARROW) window_class.hbrBackground = win32con.COLOR_WINDOW window_class.lpfnWndProc = message_map # could also specify a wndproc. classAtom = win32gui.RegisterClass(window_class) # Create the Window. 
style = win32con.WS_OVERLAPPED | win32con.WS_SYSMENU self.hwnd = win32gui.CreateWindow(classAtom, self.window_class_name, style, 0, 0, win32con.CW_USEDEFAULT, win32con.CW_USEDEFAULT, 0, 0, hinst, None) win32gui.UpdateWindow(self.hwnd) self.notify_id = None self.refresh_icon() win32gui.PumpMessages() def _add_ids_to_menu_options(self, menu_options): result = [] for menu_option in menu_options: option_text, option_icon, option_action = menu_option if callable(option_action) or option_action in self.SPECIAL_ACTIONS: self.menu_actions_by_id.add((self._next_action_id, option_action)) result.append(menu_option + (self._next_action_id,)) elif non_string_iterable(option_action): result.append((option_text, option_icon, self._add_ids_to_menu_options(option_action), self._next_action_id)) else: print('Unknown item', option_text, option_icon, option_action) self._next_action_id += 1 return result def refresh_icon(self): # Try and find a custom icon hinst = win32gui.GetModuleHandle(None) if os.path.isfile(self.icon): icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE hicon = win32gui.LoadImage(hinst, self.icon, win32con.IMAGE_ICON, 0, 0, icon_flags) else: print("Can't find icon file - using default.") hicon = win32gui.LoadIcon(0, win32con.IDI_APPLICATION) if self.notify_id: message = win32gui.NIM_MODIFY else: message = win32gui.NIM_ADD self.notify_id = (self.hwnd, 0, win32gui.NIF_ICON | win32gui.NIF_MESSAGE | win32gui.NIF_TIP, win32con.WM_USER+20, hicon, self.hover_text) win32gui.Shell_NotifyIcon(message, self.notify_id) def restart(self, hwnd, msg, wparam, lparam): self.refresh_icon() def destroy(self, hwnd, msg, wparam, lparam): if self.on_quit: self.on_quit(self) nid = (self.hwnd, 0) win32gui.Shell_NotifyIcon(win32gui.NIM_DELETE, nid) win32gui.PostQuitMessage(0) # Terminate the app. def notify(self, hwnd, msg, wparam, lparam): if lparam==win32con.WM_LBUTTONDBLCLK: self.execute_menu_option(self.default_menu_index + self.FIRST_ID) elif lparam==win32con.WM_RBUTTONUP: self.show_menu() elif lparam==win32con.WM_LBUTTONUP: pass return True def show_menu(self): menu = win32gui.CreatePopupMenu() self.create_menu(menu, self.menu_options) #win32gui.SetMenuDefaultItem(menu, 1000, 0) pos = win32gui.GetCursorPos() # See http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/menus_0hdi.asp win32gui.SetForegroundWindow(self.hwnd) win32gui.TrackPopupMenu(menu, win32con.TPM_LEFTALIGN, pos[0], pos[1], 0, self.hwnd, None) win32gui.PostMessage(self.hwnd, win32con.WM_NULL, 0, 0) def create_menu(self, menu, menu_options): for option_text, option_icon, option_action, option_id in menu_options[::-1]: if option_icon: option_icon = self.prep_menu_icon(option_icon) if option_id in self.menu_actions_by_id: item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text, hbmpItem=option_icon, wID=option_id) win32gui.InsertMenuItem(menu, 0, 1, item) else: submenu = win32gui.CreatePopupMenu() self.create_menu(submenu, option_action) item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text, hbmpItem=option_icon, hSubMenu=submenu) win32gui.InsertMenuItem(menu, 0, 1, item) def prep_menu_icon(self, icon): # First load the icon. 
ico_x = win32api.GetSystemMetrics(win32con.SM_CXSMICON) ico_y = win32api.GetSystemMetrics(win32con.SM_CYSMICON) hicon = win32gui.LoadImage(0, icon, win32con.IMAGE_ICON, ico_x, ico_y, win32con.LR_LOADFROMFILE) hdcBitmap = win32gui.CreateCompatibleDC(0) hdcScreen = win32gui.GetDC(0) hbm = win32gui.CreateCompatibleBitmap(hdcScreen, ico_x, ico_y) hbmOld = win32gui.SelectObject(hdcBitmap, hbm) # Fill the background. brush = win32gui.GetSysColorBrush(win32con.COLOR_MENU) win32gui.FillRect(hdcBitmap, (0, 0, 16, 16), brush) # unclear if brush needs to be feed. Best clue I can find is: # "GetSysColorBrush returns a cached brush instead of allocating a new # one." - implies no DeleteObject # draw the icon win32gui.DrawIconEx(hdcBitmap, 0, 0, hicon, ico_x, ico_y, 0, 0, win32con.DI_NORMAL) win32gui.SelectObject(hdcBitmap, hbmOld) win32gui.DeleteDC(hdcBitmap) return hbm def command(self, hwnd, msg, wparam, lparam): id = win32gui.LOWORD(wparam) self.execute_menu_option(id) def execute_menu_option(self, id): menu_action = self.menu_actions_by_id[id] if menu_action == self.QUIT: win32gui.DestroyWindow(self.hwnd) else: menu_action(self) def non_string_iterable(obj): try: iter(obj) except TypeError: return False else: return not isinstance(obj, str) # Minimal self test. You'll need a bunch of ICO files in the current working # directory in order for this to work... if __name__ == '__main__': import itertools, glob icons = itertools.cycle(glob.glob('*.ico')) hover_text = "SysTrayIcon.py Demo" def hello(sysTrayIcon): print("Hello World.") def simon(sysTrayIcon): print("Hello Simon.") def switch_icon(sysTrayIcon): sysTrayIcon.icon = next(icons) sysTrayIcon.refresh_icon() menu_options = (('Say Hello', next(icons), hello), ('Switch Icon', None, switch_icon), ('A sub-menu', next(icons), (('Say Hello to Simon', next(icons), simon), ('Switch Icon', next(icons), switch_icon), )) ) def bye(sysTrayIcon): print('Bye, then.') SysTrayIcon(next(icons), hover_text, menu_options, on_quit=bye, default_menu_index=1) A: There are (at least) a couple of libraries openly available for this now: pystray infi.systray I just started using infi.systray in a project, and it's worked well for me. Here's how little code you need to do something very basic (taken from their docs): from infi.systray import SysTrayIcon def say_hello(systray): print("Hello, World!") menu_options = (("Say Hello", None, say_hello),) systray = SysTrayIcon("icon.ico", "Example tray icon", menu_options) systray.start() A: I also modified sysTrayIcon.py to work in python 3 but no prerequisites are needed. #!/usr/bin/env python # Module : SysTrayIcon.py # Synopsis : Windows System tray icon. # Programmer : Simon Brunning - simon@brunningonline.net # Date : 11 April 2005 # Notes : Based on (i.e. 
ripped off from) Mark Hammond's # win32gui_taskbar.py and win32gui_menu.py demos from PyWin32 '''TODO For now, the demo at the bottom shows how to use it...''' import os import sys import win32api import win32con import win32gui_struct try: import winxpgui as win32gui except ImportError: import win32gui class SysTrayIcon(object): '''TODO''' QUIT = 'QUIT' SPECIAL_ACTIONS = [QUIT] FIRST_ID = 1023 def __init__(self, icon, hover_text, menu_options, on_quit=None, default_menu_index=None, window_class_name=None,): self.icon = icon self.hover_text = hover_text self.on_quit = on_quit menu_options = menu_options + (('Quit', None, self.QUIT),) self._next_action_id = self.FIRST_ID self.menu_actions_by_id = set() self.menu_options = self._add_ids_to_menu_options(list(menu_options)) self.menu_actions_by_id = dict(self.menu_actions_by_id) del self._next_action_id self.default_menu_index = (default_menu_index or 0) self.window_class_name = window_class_name or "SysTrayIconPy" message_map = {win32gui.RegisterWindowMessage("TaskbarCreated"): self.restart, win32con.WM_DESTROY: self.destroy, win32con.WM_COMMAND: self.command, win32con.WM_USER+20 : self.notify,} # Register the Window class. window_class = win32gui.WNDCLASS() hinst = window_class.hInstance = win32gui.GetModuleHandle(None) window_class.lpszClassName = self.window_class_name window_class.style = win32con.CS_VREDRAW | win32con.CS_HREDRAW; window_class.hCursor = win32gui.LoadCursor(0, win32con.IDC_ARROW) window_class.hbrBackground = win32con.COLOR_WINDOW window_class.lpfnWndProc = message_map # could also specify a wndproc. classAtom = win32gui.RegisterClass(window_class) # Create the Window. style = win32con.WS_OVERLAPPED | win32con.WS_SYSMENU self.hwnd = win32gui.CreateWindow(classAtom, self.window_class_name, style, 0, 0, win32con.CW_USEDEFAULT, win32con.CW_USEDEFAULT, 0, 0, hinst, None) win32gui.UpdateWindow(self.hwnd) self.notify_id = None self.refresh_icon() win32gui.PumpMessages() def _add_ids_to_menu_options(self, menu_options): result = [] for menu_option in menu_options: option_text, option_icon, option_action = menu_option if callable(option_action) or option_action in self.SPECIAL_ACTIONS: self.menu_actions_by_id.add((self._next_action_id, option_action)) result.append(menu_option + (self._next_action_id,)) elif non_string_iterable(option_action): result.append((option_text, option_icon, self._add_ids_to_menu_options(option_action), self._next_action_id)) else: print('Unknown item', option_text, option_icon, option_action) self._next_action_id += 1 return result def refresh_icon(self): # Try and find a custom icon hinst = win32gui.GetModuleHandle(None) if os.path.isfile(self.icon): icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE hicon = win32gui.LoadImage(hinst, self.icon, win32con.IMAGE_ICON, 0, 0, icon_flags) else: print("Can't find icon file - using default.") hicon = win32gui.LoadIcon(0, win32con.IDI_APPLICATION) if self.notify_id: message = win32gui.NIM_MODIFY else: message = win32gui.NIM_ADD self.notify_id = (self.hwnd, 0, win32gui.NIF_ICON | win32gui.NIF_MESSAGE | win32gui.NIF_TIP, win32con.WM_USER+20, hicon, self.hover_text) win32gui.Shell_NotifyIcon(message, self.notify_id) def restart(self, hwnd, msg, wparam, lparam): self.refresh_icon() def destroy(self, hwnd, msg, wparam, lparam): if self.on_quit: self.on_quit(self) nid = (self.hwnd, 0) win32gui.Shell_NotifyIcon(win32gui.NIM_DELETE, nid) win32gui.PostQuitMessage(0) # Terminate the app. 
def notify(self, hwnd, msg, wparam, lparam): if lparam==win32con.WM_LBUTTONDBLCLK: self.execute_menu_option(self.default_menu_index + self.FIRST_ID) elif lparam==win32con.WM_RBUTTONUP: self.show_menu() elif lparam==win32con.WM_LBUTTONUP: pass return True def show_menu(self): menu = win32gui.CreatePopupMenu() self.create_menu(menu, self.menu_options) #win32gui.SetMenuDefaultItem(menu, 1000, 0) pos = win32gui.GetCursorPos() # See http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/menus_0hdi.asp win32gui.SetForegroundWindow(self.hwnd) win32gui.TrackPopupMenu(menu, win32con.TPM_LEFTALIGN, pos[0], pos[1], 0, self.hwnd, None) win32gui.PostMessage(self.hwnd, win32con.WM_NULL, 0, 0) def create_menu(self, menu, menu_options): for option_text, option_icon, option_action, option_id in menu_options[::-1]: if option_icon: option_icon = self.prep_menu_icon(option_icon) if option_id in self.menu_actions_by_id: item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text, hbmpItem=option_icon, wID=option_id) win32gui.InsertMenuItem(menu, 0, 1, item) else: submenu = win32gui.CreatePopupMenu() self.create_menu(submenu, option_action) item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text, hbmpItem=option_icon, hSubMenu=submenu) win32gui.InsertMenuItem(menu, 0, 1, item) def prep_menu_icon(self, icon): # First load the icon. ico_x = win32api.GetSystemMetrics(win32con.SM_CXSMICON) ico_y = win32api.GetSystemMetrics(win32con.SM_CYSMICON) hicon = win32gui.LoadImage(0, icon, win32con.IMAGE_ICON, ico_x, ico_y, win32con.LR_LOADFROMFILE) hdcBitmap = win32gui.CreateCompatibleDC(0) hdcScreen = win32gui.GetDC(0) hbm = win32gui.CreateCompatibleBitmap(hdcScreen, ico_x, ico_y) hbmOld = win32gui.SelectObject(hdcBitmap, hbm) # Fill the background. brush = win32gui.GetSysColorBrush(win32con.COLOR_MENU) win32gui.FillRect(hdcBitmap, (0, 0, 16, 16), brush) # unclear if brush needs to be feed. Best clue I can find is: # "GetSysColorBrush returns a cached brush instead of allocating a new # one." - implies no DeleteObject # draw the icon win32gui.DrawIconEx(hdcBitmap, 0, 0, hicon, ico_x, ico_y, 0, 0, win32con.DI_NORMAL) win32gui.SelectObject(hdcBitmap, hbmOld) win32gui.DeleteDC(hdcBitmap) return hbm def command(self, hwnd, msg, wparam, lparam): id = win32gui.LOWORD(wparam) self.execute_menu_option(id) def execute_menu_option(self, id): menu_action = self.menu_actions_by_id[id] if menu_action == self.QUIT: win32gui.DestroyWindow(self.hwnd) else: menu_action(self) def non_string_iterable(obj): try: iter(obj) except TypeError: return False else: return not isinstance(obj, str) # Minimal self test. You'll need a bunch of ICO files in the current working # directory in order for this to work... if __name__ == '__main__': import itertools, glob icons = itertools.cycle(glob.glob('*.ico')) hover_text = "SysTrayIcon.py Demo" def hello(sysTrayIcon): print("Hello World.") def simon(sysTrayIcon): print("Hello Simon.") def switch_icon(sysTrayIcon): sysTrayIcon.icon = next(icons) sysTrayIcon.refresh_icon() menu_options = (('Say Hello', next(icons), hello), ('Switch Icon', None, switch_icon), ('A sub-menu', next(icons), (('Say Hello to Simon', next(icons), simon), ('Switch Icon', next(icons), switch_icon), )) ) def bye(sysTrayIcon): print('Bye, then.') SysTrayIcon(next(icons), hover_text, menu_options, on_quit=bye, default_menu_index=1) A: 2022 November Update for anyone still wondering: Tried both packages. Both work as intended. pystray consumes more RAM than compared to infi.systray. 
pystray: 112Mb vs infi.systray: 11Mb infi.systray works with Python 3 (Using Python 3.9.1) infi.systray only supports Windows. pystray supports Linux under Xorg, GNOME and Ubuntu, macOS and Windows Examples: (Used this code for noting the RAM usage) infi.systray: from infi.systray import SysTrayIcon def say_hello(systray): print ("Hello") menu_options = (("Say Hello", None, say_hello),) systray = SysTrayIcon("icon.ico", "Example tray icon", menu_options) systray.start() pystray: import pystray import PIL.Image def on_clicked(icon, item): print("Hello") image = PIL.Image.open("icon.png") tray = pystray.Icon("Tray", image, menu=pystray.Menu( pystray.MenuItem("Button-1", on_clicked) )) tray.run() Disclaimer: I have not tried every function/feature that the packages offer. Just wanted to share my findings
How to build a SystemTray app for Windows?
I usually work on a Linux system, but I have a situation where I need to write a client app that would run on Windows as a service. Can someone help me or direct me on how to build a system tray app (for example, like Dropbox) for the Windows environment, which gets started on OS startup, sits as an icon in the taskbar, and presents a menu when the icon is clicked? My scripting language is Python. Thanks.
[ "You do this using the pywin32 (Python for Windows Extensions) module.\nExample Code for Python 2\nSimilar Question\nTo make it run at startup you could mess around with services but it's actually much easier to install a link to the exe in the users \"Startup Folder\".\nWindows 7 and Vista\nc:\\Users\\[username]\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\nWindows XP\nc:\\Documents and Settings\\[username]\\Start Menu\\Programs\\Startup\n", "I modified the SysTrayIcon.py Python 2 script to work in Python 3 \n\nYou need to install pip install pywin32.\nAfter that you need to run python Scripts/pywin32_postinstall.py -install from your Python directory to register the dlls.\nFor the test script to run, you need to have some *.ico files in your working directory - you can find lots of them in your c:\\windows* folders (search for file:.ico). \nTo hide the program, you can run it via pythonw.exe.\nIf you need balloon notifications, have a look at this post: https://stackoverflow.com/a/42085439/2441026 (Plyer package).\nTo have a menu with only the Quit button you need to pass menu_options = ((None, None, None),) - (or change the class to not always append menu_options).\n\n\n#!/usr/bin/env python\n# Module : SysTrayIcon.py\n# Synopsis : Windows System tray icon.\n# Programmer : Simon Brunning - simon@brunningonline.net - modified for Python 3\n# Date : 13 February 2018\n# Notes : Based on (i.e. ripped off from) Mark Hammond's\n# win32gui_taskbar.py and win32gui_menu.py demos from PyWin32\n'''TODO\n\nFor now, the demo at the bottom shows how to use it...'''\n\nimport os\nimport sys\nimport win32api # package pywin32\nimport win32con\nimport win32gui_struct\ntry:\n import winxpgui as win32gui\nexcept ImportError:\n import win32gui\n\nclass SysTrayIcon(object):\n '''TODO'''\n QUIT = 'QUIT'\n SPECIAL_ACTIONS = [QUIT]\n\n FIRST_ID = 1023\n\n def __init__(self,\n icon,\n hover_text,\n menu_options,\n on_quit=None,\n default_menu_index=None,\n window_class_name=None,):\n\n self.icon = icon\n self.hover_text = hover_text\n self.on_quit = on_quit\n\n menu_options = menu_options + (('Quit', None, self.QUIT),)\n self._next_action_id = self.FIRST_ID\n self.menu_actions_by_id = set()\n self.menu_options = self._add_ids_to_menu_options(list(menu_options))\n self.menu_actions_by_id = dict(self.menu_actions_by_id)\n del self._next_action_id\n\n\n self.default_menu_index = (default_menu_index or 0)\n self.window_class_name = window_class_name or \"SysTrayIconPy\"\n\n message_map = {win32gui.RegisterWindowMessage(\"TaskbarCreated\"): self.restart,\n win32con.WM_DESTROY: self.destroy,\n win32con.WM_COMMAND: self.command,\n win32con.WM_USER+20 : self.notify,}\n # Register the Window class.\n window_class = win32gui.WNDCLASS()\n hinst = window_class.hInstance = win32gui.GetModuleHandle(None)\n window_class.lpszClassName = self.window_class_name\n window_class.style = win32con.CS_VREDRAW | win32con.CS_HREDRAW;\n window_class.hCursor = win32gui.LoadCursor(0, win32con.IDC_ARROW)\n window_class.hbrBackground = win32con.COLOR_WINDOW\n window_class.lpfnWndProc = message_map # could also specify a wndproc.\n classAtom = win32gui.RegisterClass(window_class)\n # Create the Window.\n style = win32con.WS_OVERLAPPED | win32con.WS_SYSMENU\n self.hwnd = win32gui.CreateWindow(classAtom,\n self.window_class_name,\n style,\n 0,\n 0,\n win32con.CW_USEDEFAULT,\n win32con.CW_USEDEFAULT,\n 0,\n 0,\n hinst,\n None)\n win32gui.UpdateWindow(self.hwnd)\n self.notify_id = None\n self.refresh_icon()\n\n 
win32gui.PumpMessages()\n\n def _add_ids_to_menu_options(self, menu_options):\n result = []\n for menu_option in menu_options:\n option_text, option_icon, option_action = menu_option\n if callable(option_action) or option_action in self.SPECIAL_ACTIONS:\n self.menu_actions_by_id.add((self._next_action_id, option_action))\n result.append(menu_option + (self._next_action_id,))\n elif non_string_iterable(option_action):\n result.append((option_text,\n option_icon,\n self._add_ids_to_menu_options(option_action),\n self._next_action_id))\n else:\n print('Unknown item', option_text, option_icon, option_action)\n self._next_action_id += 1\n return result\n\n def refresh_icon(self):\n # Try and find a custom icon\n hinst = win32gui.GetModuleHandle(None)\n if os.path.isfile(self.icon):\n icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE\n hicon = win32gui.LoadImage(hinst,\n self.icon,\n win32con.IMAGE_ICON,\n 0,\n 0,\n icon_flags)\n else:\n print(\"Can't find icon file - using default.\")\n hicon = win32gui.LoadIcon(0, win32con.IDI_APPLICATION)\n\n if self.notify_id: message = win32gui.NIM_MODIFY\n else: message = win32gui.NIM_ADD\n self.notify_id = (self.hwnd,\n 0,\n win32gui.NIF_ICON | win32gui.NIF_MESSAGE | win32gui.NIF_TIP,\n win32con.WM_USER+20,\n hicon,\n self.hover_text)\n win32gui.Shell_NotifyIcon(message, self.notify_id)\n\n def restart(self, hwnd, msg, wparam, lparam):\n self.refresh_icon()\n\n def destroy(self, hwnd, msg, wparam, lparam):\n if self.on_quit: self.on_quit(self)\n nid = (self.hwnd, 0)\n win32gui.Shell_NotifyIcon(win32gui.NIM_DELETE, nid)\n win32gui.PostQuitMessage(0) # Terminate the app.\n\n def notify(self, hwnd, msg, wparam, lparam):\n if lparam==win32con.WM_LBUTTONDBLCLK:\n self.execute_menu_option(self.default_menu_index + self.FIRST_ID)\n elif lparam==win32con.WM_RBUTTONUP:\n self.show_menu()\n elif lparam==win32con.WM_LBUTTONUP:\n pass\n return True\n\n def show_menu(self):\n menu = win32gui.CreatePopupMenu()\n self.create_menu(menu, self.menu_options)\n #win32gui.SetMenuDefaultItem(menu, 1000, 0)\n\n pos = win32gui.GetCursorPos()\n # See http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/menus_0hdi.asp\n win32gui.SetForegroundWindow(self.hwnd)\n win32gui.TrackPopupMenu(menu,\n win32con.TPM_LEFTALIGN,\n pos[0],\n pos[1],\n 0,\n self.hwnd,\n None)\n win32gui.PostMessage(self.hwnd, win32con.WM_NULL, 0, 0)\n\n def create_menu(self, menu, menu_options):\n for option_text, option_icon, option_action, option_id in menu_options[::-1]:\n if option_icon:\n option_icon = self.prep_menu_icon(option_icon)\n\n if option_id in self.menu_actions_by_id: \n item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text,\n hbmpItem=option_icon,\n wID=option_id)\n win32gui.InsertMenuItem(menu, 0, 1, item)\n else:\n submenu = win32gui.CreatePopupMenu()\n self.create_menu(submenu, option_action)\n item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text,\n hbmpItem=option_icon,\n hSubMenu=submenu)\n win32gui.InsertMenuItem(menu, 0, 1, item)\n\n def prep_menu_icon(self, icon):\n # First load the icon.\n ico_x = win32api.GetSystemMetrics(win32con.SM_CXSMICON)\n ico_y = win32api.GetSystemMetrics(win32con.SM_CYSMICON)\n hicon = win32gui.LoadImage(0, icon, win32con.IMAGE_ICON, ico_x, ico_y, win32con.LR_LOADFROMFILE)\n\n hdcBitmap = win32gui.CreateCompatibleDC(0)\n hdcScreen = win32gui.GetDC(0)\n hbm = win32gui.CreateCompatibleBitmap(hdcScreen, ico_x, ico_y)\n hbmOld = win32gui.SelectObject(hdcBitmap, hbm)\n # Fill the background.\n brush = 
win32gui.GetSysColorBrush(win32con.COLOR_MENU)\n win32gui.FillRect(hdcBitmap, (0, 0, 16, 16), brush)\n # unclear if brush needs to be feed. Best clue I can find is:\n # \"GetSysColorBrush returns a cached brush instead of allocating a new\n # one.\" - implies no DeleteObject\n # draw the icon\n win32gui.DrawIconEx(hdcBitmap, 0, 0, hicon, ico_x, ico_y, 0, 0, win32con.DI_NORMAL)\n win32gui.SelectObject(hdcBitmap, hbmOld)\n win32gui.DeleteDC(hdcBitmap)\n\n return hbm\n\n def command(self, hwnd, msg, wparam, lparam):\n id = win32gui.LOWORD(wparam)\n self.execute_menu_option(id)\n\n def execute_menu_option(self, id):\n menu_action = self.menu_actions_by_id[id] \n if menu_action == self.QUIT:\n win32gui.DestroyWindow(self.hwnd)\n else:\n menu_action(self)\n\ndef non_string_iterable(obj):\n try:\n iter(obj)\n except TypeError:\n return False\n else:\n return not isinstance(obj, str)\n\n# Minimal self test. You'll need a bunch of ICO files in the current working\n# directory in order for this to work...\nif __name__ == '__main__':\n import itertools, glob\n\n icons = itertools.cycle(glob.glob('*.ico'))\n hover_text = \"SysTrayIcon.py Demo\"\n def hello(sysTrayIcon): print(\"Hello World.\")\n def simon(sysTrayIcon): print(\"Hello Simon.\")\n def switch_icon(sysTrayIcon):\n sysTrayIcon.icon = next(icons)\n sysTrayIcon.refresh_icon()\n menu_options = (('Say Hello', next(icons), hello),\n ('Switch Icon', None, switch_icon),\n ('A sub-menu', next(icons), (('Say Hello to Simon', next(icons), simon),\n ('Switch Icon', next(icons), switch_icon),\n ))\n )\n def bye(sysTrayIcon): print('Bye, then.')\n\n SysTrayIcon(next(icons), hover_text, menu_options, on_quit=bye, default_menu_index=1)\n\n", "There are (at least) a couple of libraries openly available for this now:\n\npystray\ninfi.systray\n\nI just started using infi.systray in a project, and it's worked well for me. Here's how little code you need to do something very basic (taken from their docs):\nfrom infi.systray import SysTrayIcon\ndef say_hello(systray):\n print(\"Hello, World!\")\nmenu_options = ((\"Say Hello\", None, say_hello),)\nsystray = SysTrayIcon(\"icon.ico\", \"Example tray icon\", menu_options)\nsystray.start()\n\n", "I also modified sysTrayIcon.py to work in python 3 but no prerequisites are needed.\n#!/usr/bin/env python\n# Module : SysTrayIcon.py\n# Synopsis : Windows System tray icon.\n# Programmer : Simon Brunning - simon@brunningonline.net\n# Date : 11 April 2005\n# Notes : Based on (i.e. 
ripped off from) Mark Hammond's\n# win32gui_taskbar.py and win32gui_menu.py demos from PyWin32\n'''TODO\n\nFor now, the demo at the bottom shows how to use it...'''\n \nimport os\nimport sys\nimport win32api\nimport win32con\nimport win32gui_struct\ntry:\n import winxpgui as win32gui\nexcept ImportError:\n import win32gui\n\nclass SysTrayIcon(object):\n '''TODO'''\n QUIT = 'QUIT'\n SPECIAL_ACTIONS = [QUIT]\n \n FIRST_ID = 1023\n \n def __init__(self,\n icon,\n hover_text,\n menu_options,\n on_quit=None,\n default_menu_index=None,\n window_class_name=None,):\n \n self.icon = icon\n self.hover_text = hover_text\n self.on_quit = on_quit\n \n menu_options = menu_options + (('Quit', None, self.QUIT),)\n self._next_action_id = self.FIRST_ID\n self.menu_actions_by_id = set()\n self.menu_options = self._add_ids_to_menu_options(list(menu_options))\n self.menu_actions_by_id = dict(self.menu_actions_by_id)\n del self._next_action_id\n \n \n self.default_menu_index = (default_menu_index or 0)\n self.window_class_name = window_class_name or \"SysTrayIconPy\"\n \n message_map = {win32gui.RegisterWindowMessage(\"TaskbarCreated\"): self.restart,\n win32con.WM_DESTROY: self.destroy,\n win32con.WM_COMMAND: self.command,\n win32con.WM_USER+20 : self.notify,}\n # Register the Window class.\n window_class = win32gui.WNDCLASS()\n hinst = window_class.hInstance = win32gui.GetModuleHandle(None)\n window_class.lpszClassName = self.window_class_name\n window_class.style = win32con.CS_VREDRAW | win32con.CS_HREDRAW;\n window_class.hCursor = win32gui.LoadCursor(0, win32con.IDC_ARROW)\n window_class.hbrBackground = win32con.COLOR_WINDOW\n window_class.lpfnWndProc = message_map # could also specify a wndproc.\n classAtom = win32gui.RegisterClass(window_class)\n # Create the Window.\n style = win32con.WS_OVERLAPPED | win32con.WS_SYSMENU\n self.hwnd = win32gui.CreateWindow(classAtom,\n self.window_class_name,\n style,\n 0,\n 0,\n win32con.CW_USEDEFAULT,\n win32con.CW_USEDEFAULT,\n 0,\n 0,\n hinst,\n None)\n win32gui.UpdateWindow(self.hwnd)\n self.notify_id = None\n self.refresh_icon()\n \n win32gui.PumpMessages()\n\n def _add_ids_to_menu_options(self, menu_options):\n result = []\n for menu_option in menu_options:\n option_text, option_icon, option_action = menu_option\n if callable(option_action) or option_action in self.SPECIAL_ACTIONS:\n self.menu_actions_by_id.add((self._next_action_id, option_action))\n result.append(menu_option + (self._next_action_id,))\n elif non_string_iterable(option_action):\n result.append((option_text,\n option_icon,\n self._add_ids_to_menu_options(option_action),\n self._next_action_id))\n else:\n print('Unknown item', option_text, option_icon, option_action)\n self._next_action_id += 1\n return result\n \n def refresh_icon(self):\n # Try and find a custom icon\n hinst = win32gui.GetModuleHandle(None)\n if os.path.isfile(self.icon):\n icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE\n hicon = win32gui.LoadImage(hinst,\n self.icon,\n win32con.IMAGE_ICON,\n 0,\n 0,\n icon_flags)\n else:\n print(\"Can't find icon file - using default.\")\n hicon = win32gui.LoadIcon(0, win32con.IDI_APPLICATION)\n\n if self.notify_id: message = win32gui.NIM_MODIFY\n else: message = win32gui.NIM_ADD\n self.notify_id = (self.hwnd,\n 0,\n win32gui.NIF_ICON | win32gui.NIF_MESSAGE | win32gui.NIF_TIP,\n win32con.WM_USER+20,\n hicon,\n self.hover_text)\n win32gui.Shell_NotifyIcon(message, self.notify_id)\n\n def restart(self, hwnd, msg, wparam, lparam):\n self.refresh_icon()\n\n def destroy(self, hwnd, 
msg, wparam, lparam):\n if self.on_quit: self.on_quit(self)\n nid = (self.hwnd, 0)\n win32gui.Shell_NotifyIcon(win32gui.NIM_DELETE, nid)\n win32gui.PostQuitMessage(0) # Terminate the app.\n\n def notify(self, hwnd, msg, wparam, lparam):\n if lparam==win32con.WM_LBUTTONDBLCLK:\n self.execute_menu_option(self.default_menu_index + self.FIRST_ID)\n elif lparam==win32con.WM_RBUTTONUP:\n self.show_menu()\n elif lparam==win32con.WM_LBUTTONUP:\n pass\n return True\n \n def show_menu(self):\n menu = win32gui.CreatePopupMenu()\n self.create_menu(menu, self.menu_options)\n #win32gui.SetMenuDefaultItem(menu, 1000, 0)\n \n pos = win32gui.GetCursorPos()\n # See http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/menus_0hdi.asp\n win32gui.SetForegroundWindow(self.hwnd)\n win32gui.TrackPopupMenu(menu,\n win32con.TPM_LEFTALIGN,\n pos[0],\n pos[1],\n 0,\n self.hwnd,\n None)\n win32gui.PostMessage(self.hwnd, win32con.WM_NULL, 0, 0)\n \n def create_menu(self, menu, menu_options):\n for option_text, option_icon, option_action, option_id in menu_options[::-1]:\n if option_icon:\n option_icon = self.prep_menu_icon(option_icon)\n \n if option_id in self.menu_actions_by_id: \n item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text,\n hbmpItem=option_icon,\n wID=option_id)\n win32gui.InsertMenuItem(menu, 0, 1, item)\n else:\n submenu = win32gui.CreatePopupMenu()\n self.create_menu(submenu, option_action)\n item, extras = win32gui_struct.PackMENUITEMINFO(text=option_text,\n hbmpItem=option_icon,\n hSubMenu=submenu)\n win32gui.InsertMenuItem(menu, 0, 1, item)\n\n def prep_menu_icon(self, icon):\n # First load the icon.\n ico_x = win32api.GetSystemMetrics(win32con.SM_CXSMICON)\n ico_y = win32api.GetSystemMetrics(win32con.SM_CYSMICON)\n hicon = win32gui.LoadImage(0, icon, win32con.IMAGE_ICON, ico_x, ico_y, win32con.LR_LOADFROMFILE)\n\n hdcBitmap = win32gui.CreateCompatibleDC(0)\n hdcScreen = win32gui.GetDC(0)\n hbm = win32gui.CreateCompatibleBitmap(hdcScreen, ico_x, ico_y)\n hbmOld = win32gui.SelectObject(hdcBitmap, hbm)\n # Fill the background.\n brush = win32gui.GetSysColorBrush(win32con.COLOR_MENU)\n win32gui.FillRect(hdcBitmap, (0, 0, 16, 16), brush)\n # unclear if brush needs to be feed. Best clue I can find is:\n # \"GetSysColorBrush returns a cached brush instead of allocating a new\n # one.\" - implies no DeleteObject\n # draw the icon\n win32gui.DrawIconEx(hdcBitmap, 0, 0, hicon, ico_x, ico_y, 0, 0, win32con.DI_NORMAL)\n win32gui.SelectObject(hdcBitmap, hbmOld)\n win32gui.DeleteDC(hdcBitmap)\n \n return hbm\n\n def command(self, hwnd, msg, wparam, lparam):\n id = win32gui.LOWORD(wparam)\n self.execute_menu_option(id)\n \n def execute_menu_option(self, id):\n menu_action = self.menu_actions_by_id[id] \n if menu_action == self.QUIT:\n win32gui.DestroyWindow(self.hwnd)\n else:\n menu_action(self)\n \ndef non_string_iterable(obj):\n try:\n iter(obj)\n except TypeError:\n return False\n else:\n return not isinstance(obj, str)\n\n# Minimal self test. 
You'll need a bunch of ICO files in the current working\n# directory in order for this to work...\nif __name__ == '__main__':\n import itertools, glob\n \n icons = itertools.cycle(glob.glob('*.ico'))\n hover_text = \"SysTrayIcon.py Demo\"\n def hello(sysTrayIcon): print(\"Hello World.\")\n def simon(sysTrayIcon): print(\"Hello Simon.\")\n def switch_icon(sysTrayIcon):\n sysTrayIcon.icon = next(icons)\n sysTrayIcon.refresh_icon()\n menu_options = (('Say Hello', next(icons), hello),\n ('Switch Icon', None, switch_icon),\n ('A sub-menu', next(icons), (('Say Hello to Simon', next(icons), simon),\n ('Switch Icon', next(icons), switch_icon),\n ))\n )\n def bye(sysTrayIcon): print('Bye, then.')\n \n SysTrayIcon(next(icons), hover_text, menu_options, on_quit=bye, default_menu_index=1)\n\n", "2022 November Update for anyone still wondering:\nTried both packages. Both work as intended.\n\npystray consumes more RAM than compared to infi.systray.\npystray: 112Mb vs infi.systray: 11Mb\ninfi.systray works with Python 3 (Using Python 3.9.1)\ninfi.systray only supports Windows.\npystray supports Linux under Xorg, GNOME and Ubuntu, macOS and Windows\n\nExamples: (Used this code for noting the RAM usage)\ninfi.systray:\nfrom infi.systray import SysTrayIcon\n\ndef say_hello(systray):\n print (\"Hello\")\n \nmenu_options = ((\"Say Hello\", None, say_hello),)\nsystray = SysTrayIcon(\"icon.ico\", \"Example tray icon\", menu_options)\nsystray.start()\n\npystray:\nimport pystray\nimport PIL.Image\n\ndef on_clicked(icon, item):\n print(\"Hello\")\n\nimage = PIL.Image.open(\"icon.png\")\ntray = pystray.Icon(\"Tray\", image, menu=pystray.Menu(\n pystray.MenuItem(\"Button-1\", on_clicked)\n))\ntray.run()\n\nDisclaimer: I have not tried every function/feature that the packages offer. Just wanted to share my findings\n" ]
[ 41, 28, 28, 6, 0 ]
[]
[]
[ "appcelerator", "desktop_application", "macos", "python", "windows" ]
stackoverflow_0009494739_appcelerator_desktop_application_macos_python_windows.txt
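For reference, a self-contained pystray sketch that needs no .ico file on disk, because it draws its icon in memory with Pillow. The menu item names and the drawn shape are arbitrary examples:

import pystray
from PIL import Image, ImageDraw

# Draw a simple 64x64 icon in memory instead of loading one from disk.
image = Image.new("RGB", (64, 64), "black")
ImageDraw.Draw(image).ellipse((8, 8, 56, 56), fill="red")

def say_hello(icon, item):
    print("Hello, World!")

def quit_app(icon, item):
    icon.stop()  # ends icon.run() and removes the tray icon

icon = pystray.Icon(
    "demo", image, "Example tray icon",
    menu=pystray.Menu(
        pystray.MenuItem("Say Hello", say_hello),
        pystray.MenuItem("Quit", quit_app),
    ),
)
icon.run()  # blocks; recent pystray versions also offer icon.run_detached()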
Q: How to find center pixel value of bounding box in opencv python? I'm trying to find the pixel intensity at the center of a bounding box. To achieve this I'm finding the center coordinates of the bounding box and getting the pixel intensity at that coordinate, as shown below:

img_read = cv2.imread(r'image.png')
cv2.rectangle(img_read,(xmin,ymin),(xmax,ymax),(0,0,255),3)
center_x = int((xmin+xmax)//2)
center_y = int((ymin+ymax)//2)
print(center_x,center_y)
cv2.circle(img_read,(center_x,center_y),50,(0,0,255),3)
print('Pixel intensity at:',img_read[center_x][center_y])
plt.imshow(img_read[:,:,::-1])

When I run this I get an error as below:

IndexError: index 859 is out of bounds for axis 0 with size 815

but when I try to draw a circle from that point with cv2.circle, it draws the circle without any errors. How can I access the pixel intensity value at the point img_read[center_x][center_y]? I tried img_read[center_x,center_y] as well, but got the same error. Any help or suggestion to fix this issue will be appreciated, thanks.
A: 

# Read the image & get the dimensions
img_read = cv2.imread(r"C:\Users\Desktop\test_center_px.tiff")
dimensions = img_read.shape
h, w = dimensions[0], dimensions[1]

# create the bounding box if necessary (not in mine)
domain = cv2.rectangle(img_read,(0,0),(w,h),(255,0,0),20)
plt.imshow(domain,cmap='gray')

center_x = w/2
center_y = h/2

# all we need to do is pass in the (x, y)-coordinates as image[y, x]
(b, g, r) = img_read[np.int16(center_y), np.int16(center_x)]
print("Color at center pixel is - Red: {}, Green: {}, Blue: {}".format(r, g, b))

OUTPUT:
Color at center pixel is - Red: 152, Green: 152, Blue: 152
How to find center pixel value of bounding box in opencv python?
I'm trying to find the pixel intensity at the center of a bounding box. To achieve this, I'm finding the center coordinates of the bounding box and getting the pixel intensity at that coordinate, as shown below: img_read= cv2.imread(r'image.png') cv2.rectangle(img_read,(xmin,ymin),(xmax,ymax),(0,0,255),3) center_x = int((xmin+xmax)//2) center_y = int((ymin+ymax)//2) print(center_x,center_y) cv2.circle(img_read,(center_x,center_y),50,(0,0,255),3) print('Pixel intensity at:',img_read[center_x][center_y]) plt.imshow(img_read[:,:,::-1]) When I run this I get the error below: IndexError: index 859 is out of bounds for axis 0 with size 815 But when I try to draw a circle at that point with cv2.circle, it draws the circle without any errors. How can I access the pixel intensity value at the point img_read[center_x][center_y]? I tried img_read[center_x,center_y] as well but got the same error. Any help or suggestion to fix this issue will be appreciated, thanks.
[ "#Read the image & get the dimensions \n img_read= cv2.imread(r\"C:\\Users\\Desktop\\test_center_px.tiff\")\n dimensions = img_read.shape\n h, w=dimensions[0], dimensions[1] \n\n#create the bounding box if necessary (not in mine) \n domain = cv2.rectangle(img_read,(0,0),(w,h),(255,0,0),20)\n plt.imshow(domain,cmap='gray')\n \n center_x = w/2\n center_y = h/2\n\n#all we need to do is pass in the (x, y)-coordinates as image[y, x]\n (b, g, r) = img_read[np.int16(center_y), np.int16(center_x)]\n print(\"Color at center pixel is - Red: {}, Green: {}, Blue: {}\".format(r, g, b))\n\nOUTPUT:\nColor at center pixel is - Red: 152, Green: 152, Blue: 152\n" ]
[ 0 ]
[ "Try:\nimg_read[center_y,center_x] \n\n" ]
[ -3 ]
[ "cv2", "index_error", "numpy", "python" ]
stackoverflow_0074516088_cv2_index_error_numpy_python.txt
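The root cause in the question above is index order: NumPy images are indexed image[row, column], i.e. image[y, x], while cv2 drawing functions take (x, y) points — which is why cv2.circle succeeded where the direct indexing failed. A minimal sketch, assuming xmin/ymin/xmax/ymax and img_read are defined as in the question:

center_x = (xmin + xmax) // 2
center_y = (ymin + ymax) // 2
# NumPy indexing is [row, column] == [y, x]; cv2 points are (x, y)
print('Pixel intensity at center:', img_read[center_y, center_x])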
Q: How to redirect stdout and stderr to logger in Python I have a logger that has a RotatingFileHandler. I want to redirect all Stdout and Stderr to the logger. How to do so? A: Not enough rep to comment, but I wanted to add the version of this that worked for me in case others are in a similar situation. class LoggerWriter: def __init__(self, level): # self.level is really like using log.debug(message) # at least in my case self.level = level def write(self, message): # if statement reduces the amount of newlines that are # printed to the logger if message != '\n': self.level(message) def flush(self): # create a flush method so things can be flushed when # the system wants to. Not sure if simply 'printing' # sys.stderr is the correct way to do it, but it seemed # to work properly for me. self.level(sys.stderr) and this would look something like: log = logging.getLogger('foobar') sys.stdout = LoggerWriter(log.debug) sys.stderr = LoggerWriter(log.warning) A: UPDATE for Python 3: Including a dummy flush function which prevents an error where the function is expected (Python 2 was fine with just linebuf=''). Note that your output (and log level) appears different if it is logged from an interpreter session vs being run from a file. Running from a file produces the expected behavior (and output featured below). We still eliminate extra newlines which other solutions do not. class StreamToLogger(object): """ Fake file-like stream object that redirects writes to a logger instance. """ def __init__(self, logger, level): self.logger = logger self.level = level self.linebuf = '' def write(self, buf): for line in buf.rstrip().splitlines(): self.logger.log(self.level, line.rstrip()) def flush(self): pass Then test with something like: import StreamToLogger import sys import logging logging.basicConfig( level=logging.DEBUG, format='%(asctime)s:%(levelname)s:%(name)s:%(message)s', filename='out.log', filemode='a' ) log = logging.getLogger('foobar') sys.stdout = StreamToLogger(log,logging.INFO) sys.stderr = StreamToLogger(log,logging.ERROR) print('Test to standard out') raise Exception('Test to standard error') See below for old Python 2.x answer and the example output: All of the prior answers seem to have problems adding extra newlines where they aren't needed. The solution that works best for me is from http://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/, where he demonstrates how send both stdout and stderr to the logger: import logging import sys class StreamToLogger(object): """ Fake file-like stream object that redirects writes to a logger instance. 
""" def __init__(self, logger, log_level=logging.INFO): self.logger = logger self.log_level = log_level self.linebuf = '' def write(self, buf): for line in buf.rstrip().splitlines(): self.logger.log(self.log_level, line.rstrip()) logging.basicConfig( level=logging.DEBUG, format='%(asctime)s:%(levelname)s:%(name)s:%(message)s', filename="out.log", filemode='a' ) stdout_logger = logging.getLogger('STDOUT') sl = StreamToLogger(stdout_logger, logging.INFO) sys.stdout = sl stderr_logger = logging.getLogger('STDERR') sl = StreamToLogger(stderr_logger, logging.ERROR) sys.stderr = sl print "Test to standard out" raise Exception('Test to standard error') The output looks like: 2011-08-14 14:46:20,573:INFO:STDOUT:Test to standard out 2011-08-14 14:46:20,573:ERROR:STDERR:Traceback (most recent call last): 2011-08-14 14:46:20,574:ERROR:STDERR: File "redirect.py", line 33, in 2011-08-14 14:46:20,574:ERROR:STDERR:raise Exception('Test to standard error') 2011-08-14 14:46:20,574:ERROR:STDERR:Exception 2011-08-14 14:46:20,574:ERROR:STDERR:: 2011-08-14 14:46:20,574:ERROR:STDERR:Test to standard error Note that self.linebuf = '' is where the flush is being handled, rather than implementing a flush function. A: If it's an all-Python system (i.e. no C libraries writing to fds directly, as Ignacio Vazquez-Abrams asked about) then you might be able to use an approach as suggested here: class LoggerWriter: def __init__(self, logger, level): self.logger = logger self.level = level def write(self, message): if message != '\n': self.logger.log(self.level, message) and then set sys.stdout and sys.stderr to LoggerWriter instances. A: You can use redirect_stdout context manager: import logging from contextlib import redirect_stdout logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) logging.write = lambda msg: logging.info(msg) if msg != '\n' else None with redirect_stdout(logging): print('Test') or like this import logging from contextlib import redirect_stdout logger = logging.getLogger('Meow') logger.setLevel(logging.INFO) formatter = logging.Formatter( fmt='[{name}] {asctime} {levelname}: {message}', datefmt='%m/%d/%Y %H:%M:%S', style='{' ) ch = logging.StreamHandler() ch.setLevel(logging.INFO) ch.setFormatter(formatter) logger.addHandler(ch) logger.write = lambda msg: logger.info(msg) if msg != '\n' else None with redirect_stdout(logger): print('Test') A: Output Redirection Done Right! The Problem logger.log and the other functions (.info/.error/etc.) output each call as a separate line, i.e. implicitly add (formatting and) a newline to it. sys.stderr.write on the other hand just writes its literal input to stream, including partial lines. For example: The output "ZeroDivisionError: division by zero" is actually 4(!) separate calls to sys.stderr.write: sys.stderr.write('ZeroDivisionError') sys.stderr.write(': ') sys.stderr.write('division by zero') sys.stderr.write('\n') The 4 most upvoted approaches (1, 2, 3, 4) thus result in extra newlines -- simply put "1/0" into your program and you will get the following: 2021-02-17 13:10:40,814 - ERROR - ZeroDivisionError 2021-02-17 13:10:40,814 - ERROR - : 2021-02-17 13:10:40,814 - ERROR - division by zero The Solution Store the intermediate writes in a buffer. The reason I am using a list as buffer rather than a string is to avoid the Shlemiel the painter’s algorithm. 
TLDR: It is O(n) instead of potentially O(n^2) class LoggerWriter: def __init__(self, logfct): self.logfct = logfct self.buf = [] def write(self, msg): if msg.endswith('\n'): self.buf.append(msg.removesuffix('\n')) self.logfct(''.join(self.buf)) self.buf = [] else: self.buf.append(msg) def flush(self): pass # To access the original stdout/stderr, use sys.__stdout__/sys.__stderr__ sys.stdout = LoggerWriter(logger.info) sys.stderr = LoggerWriter(logger.error) 2021-02-17 13:15:22,956 - ERROR - ZeroDivisionError: division by zero For versions below Python 3.9, you could replace msg.removesuffix('\n') with either msg.rstrip('\n') or msg[:-1]. A: As an evolution to Cameron Gagnon's response, I've improved the LoggerWriter class to: class LoggerWriter(object): def __init__(self, writer): self._writer = writer self._msg = '' def write(self, message): self._msg = self._msg + message while '\n' in self._msg: pos = self._msg.find('\n') self._writer(self._msg[:pos]) self._msg = self._msg[pos+1:] def flush(self): if self._msg != '': self._writer(self._msg) self._msg = '' now uncontrolled exceptions look nicer: 2018-07-31 13:20:37,482 - ERROR - Traceback (most recent call last): 2018-07-31 13:20:37,483 - ERROR - File "mf32.py", line 317, in <module> 2018-07-31 13:20:37,485 - ERROR - main() 2018-07-31 13:20:37,486 - ERROR - File "mf32.py", line 289, in main 2018-07-31 13:20:37,488 - ERROR - int('') 2018-07-31 13:20:37,489 - ERROR - ValueError: invalid literal for int() with base 10: '' A: With flush added to Vinay Sajip's answer: class LoggerWriter: def __init__(self, logger, level): self.logger = logger self.level = level def write(self, message): if message != '\n': self.logger.log(self.level, message) def flush(self): pass A: Quick but Fragile One-Liner sys.stdout.write = logger.info sys.stderr.write = logger.error What this does is simply assign the logger functions to the stdout/stderr .write call which means any write call will instead invoke the logger functions. The downside of this approach is that both calls to .write and the logger functions typically add a newline so you will end up with extra lines in your log file, which may or may not be a problem depending on your use case. Another pitfall is that if your logger writes to stderr itself we get infinite recursion (a stack overflow error). So only output to a file. A: Solving problem where StreamHandler causes infinite Recursion My logger was causing an infinite recursion, because the StreamHandler was trying to write to stdout, which itself is a logger -> leading to infinite recursion. Solution Reinstate the original sys.__stdout__ for the StreamHandler ONLY, so that you can still see the logs showing in the terminal. class DefaultStreamHandler(logging.StreamHandler): def __init__(self, stream=sys.__stdout__): # Use the original sys.__stdout__ to write to stdout # for this handler, as sys.stdout will write out to logger. super().__init__(stream) class LoggerWriter(io.IOBase): """Class to replace the stderr/stdout calls to a logger""" def __init__(self, logger_name: str, log_level: int): """:param logger_name: Name to give the logger (e.g. 'stderr') :param log_level: The log level, e.g. logging.DEBUG / logging.INFO that the MESSAGES should be logged at. """ self.std_logger = logging.getLogger(logger_name) # Get the "root" logger by its name (i.e.
from a config dict or at the bottom of this file) # We will use this to create a copy of all its settings, except the name app_logger = logging.getLogger("myAppsLogger") [self.std_logger.addHandler(handler) for handler in app_logger.handlers] self.std_logger.setLevel(app_logger.level) # the minimum lvl msgs will show at self.level = log_level # the level msgs will be logged at self.buffer = [] def write(self, msg: str): """Stdout/stderr logs one line at a time, rather than 1 message at a time. Use this function to aggregate multi-line messages into 1 log call.""" msg = msg.decode() if issubclass(type(msg), bytes) else msg if not msg.endswith("\n"): return self.buffer.append(msg) self.buffer.append(msg.rstrip("\n")) message = "".join(self.buffer) self.std_logger.log(self.level, message) self.buffer = [] def replace_stderr_and_stdout_with_logger(): """Replaces calls to sys.stderr -> logger.info & sys.stdout -> logger.error""" # To access the original stdout/stderr, use sys.__stdout__/sys.__stderr__ sys.stdout = LoggerWriter("stdout", logging.INFO) sys.stderr = LoggerWriter("stderr", logging.ERROR) if __name__ == "__main__": # Load the logger & handlers logger = logging.getLogger("myAppsLogger") logger.setLevel(logging.DEBUG) # HANDLER = logging.StreamHandler() HANDLER = DefaultStreamHandler() # <--- replace the normal streamhandler with this logger.addHandler(HANDLER) logFormatter = logging.Formatter("[%(asctime)s] - %(name)s - %(levelname)s - %(message)s") HANDLER.setFormatter(logFormatter) # Run this AFTER you load the logger replace_stderr_and_stdout_with_logger() And then finally call the replace_stderr_and_stdout_with_logger() after you've initialised your logger (the last bit of the code) A: If you want to log info and error messages into separate streams (info into stdout, errors into stderr) you can use this trick: class ErrorStreamHandler(log.StreamHandler): """Print input log-message into stderr, print only error/warning messages""" def __init__(self, stream=sys.stderr): log.Handler.__init__(self, log.WARNING) self.stream = stream def emit(self, record): try: if record.levelno in (log.INFO, log.DEBUG, log.NOTSET): return msg = self.format(record) stream = self.stream # issue 35046: merged two stream.writes into one. stream.write(msg + self.terminator) self.flush()
How to redirect stdout and stderr to logger in Python
I have a logger that has a RotatingFileHandler. I want to redirect all Stdout and Stderr to the logger. How to do so?
[ "Not enough rep to comment, but I wanted to add the version of this that worked for me in case others are in a similar situation.\nclass LoggerWriter:\n def __init__(self, level):\n # self.level is really like using log.debug(message)\n # at least in my case\n self.level = level\n\n def write(self, message):\n # if statement reduces the amount of newlines that are\n # printed to the logger\n if message != '\\n':\n self.level(message)\n\n def flush(self):\n # create a flush method so things can be flushed when\n # the system wants to. Not sure if simply 'printing'\n # sys.stderr is the correct way to do it, but it seemed\n # to work properly for me.\n self.level(sys.stderr)\n\nand this would look something like:\nlog = logging.getLogger('foobar')\nsys.stdout = LoggerWriter(log.debug)\nsys.stderr = LoggerWriter(log.warning)\n\n", "UPDATE for Python 3:\n\nIncluding a dummy flush function which prevents an error where the function is expected (Python 2 was fine with just linebuf='').\nNote that your output (and log level) appears different if it is logged from an interpreter session vs being run from a file. Running from a file produces the expected behavior (and output featured below).\nWe still eliminate extra newlines which other solutions do not.\n\nclass StreamToLogger(object):\n \"\"\"\n Fake file-like stream object that redirects writes to a logger instance.\n \"\"\"\n def __init__(self, logger, level):\n self.logger = logger\n self.level = level\n self.linebuf = ''\n\n def write(self, buf):\n for line in buf.rstrip().splitlines():\n self.logger.log(self.level, line.rstrip())\n\n def flush(self):\n pass\n\nThen test with something like:\nimport StreamToLogger\nimport sys\nimport logging\n\nlogging.basicConfig(\n level=logging.DEBUG,\n format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',\n filename='out.log',\n filemode='a'\n )\nlog = logging.getLogger('foobar')\nsys.stdout = StreamToLogger(log,logging.INFO)\nsys.stderr = StreamToLogger(log,logging.ERROR)\nprint('Test to standard out')\nraise Exception('Test to standard error')\n\nSee below for old Python 2.x answer and the example output:\nAll of the prior answers seem to have problems adding extra newlines where they aren't needed. 
The solution that works best for me is from http://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/, where he demonstrates how send both stdout and stderr to the logger:\nimport logging\nimport sys\n \nclass StreamToLogger(object):\n \"\"\"\n Fake file-like stream object that redirects writes to a logger instance.\n \"\"\"\n def __init__(self, logger, log_level=logging.INFO):\n self.logger = logger\n self.log_level = log_level\n self.linebuf = ''\n \n def write(self, buf):\n for line in buf.rstrip().splitlines():\n self.logger.log(self.log_level, line.rstrip())\n \nlogging.basicConfig(\n level=logging.DEBUG,\n format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',\n filename=\"out.log\",\n filemode='a'\n)\n \nstdout_logger = logging.getLogger('STDOUT')\nsl = StreamToLogger(stdout_logger, logging.INFO)\nsys.stdout = sl\n \nstderr_logger = logging.getLogger('STDERR')\nsl = StreamToLogger(stderr_logger, logging.ERROR)\nsys.stderr = sl\n \nprint \"Test to standard out\"\nraise Exception('Test to standard error')\n\nThe output looks like:\n2011-08-14 14:46:20,573:INFO:STDOUT:Test to standard out\n2011-08-14 14:46:20,573:ERROR:STDERR:Traceback (most recent call last):\n2011-08-14 14:46:20,574:ERROR:STDERR: File \"redirect.py\", line 33, in \n2011-08-14 14:46:20,574:ERROR:STDERR:raise Exception('Test to standard error')\n2011-08-14 14:46:20,574:ERROR:STDERR:Exception\n2011-08-14 14:46:20,574:ERROR:STDERR::\n2011-08-14 14:46:20,574:ERROR:STDERR:Test to standard error\n\nNote that self.linebuf = '' is where the flush is being handled, rather than implementing a flush function.\n", "If it's an all-Python system (i.e. no C libraries writing to fds directly, as Ignacio Vazquez-Abrams asked about) then you might be able to use an approach as suggested here:\nclass LoggerWriter:\n def __init__(self, logger, level):\n self.logger = logger\n self.level = level\n\n def write(self, message):\n if message != '\\n':\n self.logger.log(self.level, message)\n\nand then set sys.stdout and sys.stderr to LoggerWriter instances.\n", "You can use redirect_stdout context manager:\nimport logging\nfrom contextlib import redirect_stdout\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nlogging.write = lambda msg: logging.info(msg) if msg != '\\n' else None\n\nwith redirect_stdout(logging):\n print('Test')\n\nor like this\nimport logging\nfrom contextlib import redirect_stdout\n\n\nlogger = logging.getLogger('Meow')\nlogger.setLevel(logging.INFO)\nformatter = logging.Formatter(\n fmt='[{name}] {asctime} {levelname}: {message}',\n datefmt='%m/%d/%Y %H:%M:%S',\n style='{'\n)\nch = logging.StreamHandler()\nch.setLevel(logging.INFO)\nch.setFormatter(formatter)\nlogger.addHandler(ch)\n\nlogger.write = lambda msg: logger.info(msg) if msg != '\\n' else None\n\nwith redirect_stdout(logger):\n print('Test')\n\n", "Output Redirection Done Right!\nThe Problem\nlogger.log and the other functions (.info/.error/etc.) output each call as a separate line, i.e. implicitly add (formatting and) a newline to it.\nsys.stderr.write on the other hand just writes its literal input to stream, including partial lines. For example: The output \"ZeroDivisionError: division by zero\" is actually 4(!) 
separate calls to sys.stderr.write:\nsys.stderr.write('ZeroDivisionError')\nsys.stderr.write(': ')\nsys.stderr.write('division by zero')\nsys.stderr.write('\\n')\n\nThe 4 most upvoted approaches (1, 2, 3, 4) thus result in extra newlines -- simply put \"1/0\" into your program and you will get the following:\n2021-02-17 13:10:40,814 - ERROR - ZeroDivisionError\n2021-02-17 13:10:40,814 - ERROR - : \n2021-02-17 13:10:40,814 - ERROR - division by zero\n\nThe Solution\nStore the intermediate writes in a buffer. The reason I am using a list as buffer rather than a string is to avoid the Shlemiel the painter’s algorithm. TLDR: It is O(n) instead of potentially O(n^2)\nclass LoggerWriter:\n def __init__(self, logfct):\n self.logfct = logfct\n self.buf = []\n\n def write(self, msg):\n if msg.endswith('\\n'):\n self.buf.append(msg.removesuffix('\\n'))\n self.logfct(''.join(self.buf))\n self.buf = []\n else:\n self.buf.append(msg)\n\n def flush(self):\n pass\n\n# To access the original stdout/stderr, use sys.__stdout__/sys.__stderr__\nsys.stdout = LoggerWriter(logger.info)\nsys.stderr = LoggerWriter(logger.error)\n\n2021-02-17 13:15:22,956 - ERROR - ZeroDivisionError: division by zero\n\nFor versions below Python 3.9, you could replace replace msg.removesuffix('\\n') with either msg.rstrip('\\n') or msg[:-1].\n", "As an evolution to Cameron Gagnon's response, I've improved the LoggerWriterclass to:\nclass LoggerWriter(object):\n def __init__(self, writer):\n self._writer = writer\n self._msg = ''\n\n def write(self, message):\n self._msg = self._msg + message\n while '\\n' in self._msg:\n pos = self._msg.find('\\n')\n self._writer(self._msg[:pos])\n self._msg = self._msg[pos+1:]\n\n def flush(self):\n if self._msg != '':\n self._writer(self._msg)\n self._msg = ''\n\nnow uncontrolled exceptions look nicer:\n2018-07-31 13:20:37,482 - ERROR - Traceback (most recent call last):\n2018-07-31 13:20:37,483 - ERROR - File \"mf32.py\", line 317, in <module>\n2018-07-31 13:20:37,485 - ERROR - main()\n2018-07-31 13:20:37,486 - ERROR - File \"mf32.py\", line 289, in main\n2018-07-31 13:20:37,488 - ERROR - int('')\n2018-07-31 13:20:37,489 - ERROR - ValueError: invalid literal for int() with base 10: ''\n\n", "With flush added to Vinay Sajip's answer:\nclass LoggerWriter:\n def __init__(self, logger, level): \n self.logger = logger\n self.level = level \n\n def write(self, message):\n if message != '\\n':\n self.logger.log(self.level, message)\n\n def flush(self): \n pass\n\n", "Quick but Fragile One-Liner\nsys.stdout.write = logger.info\n\nsys.stderr.write = logger.error\n\nWhat this does is simply assign the logger functions to the stdout/stderr .write call which means any write call will instead invoke the logger functions.\nThe downside of this approach is that both calls to .write and the logger functions typically add a newline so you will end up with extra lines in your log file, which may or may not be a problem depending on your use case.\nAnother pitfall is that if your logger writes to stderr itself we get infinite recursion (a stack overflow error). 
So only output to a file.\n", "Solving problem where StreamHandler causes infinite Recurison\nMy logger was causing an infinite recursion, because the Streamhandler was trying to write to stdout, which itself is a logger -> leading to infinite recursion.\nSolution\nReinstate the original sys.__stdout__ for the StreamHandler ONLY, so that you can still see the logs showing in the terminal.\nclass DefaultStreamHandler(logging.StreamHandler):\n def __init__(self, stream=sys.__stdout__):\n # Use the original sys.__stdout__ to write to stdout\n # for this handler, as sys.stdout will write out to logger.\n super().__init__(stream)\n\n\nclass LoggerWriter(io.IOBase):\n \"\"\"Class to replace the stderr/stdout calls to a logger\"\"\"\n\n def __init__(self, logger_name: str, log_level: int):\n \"\"\":param logger_name: Name to give the logger (e.g. 'stderr')\n :param log_level: The log level, e.g. logging.DEBUG / logging.INFO that\n the MESSAGES should be logged at.\n \"\"\"\n self.std_logger = logging.getLogger(logger_name)\n # Get the \"root\" logger from by its name (i.e. from a config dict or at the bottom of this file)\n # We will use this to create a copy of all its settings, except the name\n app_logger = logging.getLogger(\"myAppsLogger\")\n [self.std_logger.addHandler(handler) for handler in app_logger.handlers]\n self.std_logger.setLevel(app_logger.level) # the minimum lvl msgs will show at\n self.level = log_level # the level msgs will be logged at\n self.buffer = []\n\n def write(self, msg: str):\n \"\"\"Stdout/stderr logs one line at a time, rather than 1 message at a time.\n Use this function to aggregate multi-line messages into 1 log call.\"\"\"\n msg = msg.decode() if issubclass(type(msg), bytes) else msg\n\n if not msg.endswith(\"\\n\"):\n return self.buffer.append(msg)\n\n self.buffer.append(msg.rstrip(\"\\n\"))\n message = \"\".join(self.buffer)\n self.std_logger.log(self.level, message)\n self.buffer = []\n\n\ndef replace_stderr_and_stdout_with_logger():\n \"\"\"Replaces calls to sys.stderr -> logger.info & sys.stdout -> logger.error\"\"\"\n # To access the original stdout/stderr, use sys.__stdout__/sys.__stderr__\n sys.stdout = LoggerWriter(\"stdout\", logging.INFO)\n sys.stderr = LoggerWriter(\"stderr\", logging.ERROR)\n\n\nif __name__ == __main__():\n # Load the logger & handlers\n logger = logging.getLogger(\"myAppsLogger\")\n logger.setLevel(logging.DEBUG)\n # HANDLER = logging.StreamHandler()\n HANDLER = DefaultStreamHandler() # <--- replace the normal streamhandler with this\n logger.addHandler(HANDLER)\n logFormatter = logging.Formatter(\"[%(asctime)s] - %(name)s - %(levelname)s - %(message)s\")\n HANDLER.setFormatter(logFormatter)\n\n # Run this AFTER you load the logger\n replace_stderr_and_stdout_with_logger()\n\n\nAnd then finally call the replace_stderr_and_stdout_with_logger() after you've initialised your logger (the last bit of the code)\n", "If you want to logging info and error messages into separates stream (info into stdout, errors into stderr) you can use this trick:\nclass ErrorStreamHandler(log.StreamHandler):\n\"\"\"Print input log-message into stderr, print only error/warning messages\"\"\"\ndef __init__(self, stream=sys.stderr):\n log.Handler.__init__(self, log.WARNING)\n self.stream = stream\n\ndef emit(self, record):\n try:\n if record.levelno in (log.INFO, log.DEBUG, log.NOTSET):\n return\n msg = self.format(record)\n stream = self.stream\n # issue 35046: merged two stream.writes into one.\n stream.write(msg + self.terminator)\n self.flush()\n 
except RecursionError: # See issue 36272\n raise\n except Exception:\n self.handleError(record)\n\n\nclass OutStreamHandler(log.StreamHandler):\n\"\"\"Print input log-message into stdout, print only info/debug messages\"\"\"\ndef __init__(self, loglevel, stream=sys.stdout):\n log.Handler.__init__(self, loglevel)\n self.stream = stream\n\ndef emit(self, record):\n try:\n if record.levelno not in (log.INFO, log.DEBUG, log.NOTSET):\n return\n msg = self.format(record)\n stream = self.stream\n # issue 35046: merged two stream.writes into one.\n stream.write(msg + self.terminator)\n self.flush()\n except RecursionError: # See issue 36272\n raise\n except Exception:\n self.handleError(record)\n\nUsage:\nlog.basicConfig(level=settings.get_loglevel(),\n format=\"[%(asctime)s] %(levelname)s: %(message)s\",\n datefmt='%Y/%m/%d %H:%M:%S', handlers=[ErrorStreamHandler(), OutStreamHandler(settings.get_loglevel())])\n\n" ]
[ 54, 37, 18, 16, 14, 11, 5, 5, 1, 0 ]
[]
[]
[ "logging", "python", "python_3.x", "stdout" ]
stackoverflow_0019425736_logging_python_python_3.x_stdout.txt
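One detail the answers above only touch on: after replacing sys.stdout, the real stream is always reachable through sys.__stdout__. A minimal, hedged sketch of a redirect-then-restore pattern (the logger name 'demo' is arbitrary):

import sys
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger('demo')

class LoggerWriter:
    def __init__(self, logfct):
        self.logfct = logfct
    def write(self, message):
        if message.strip():              # skip the bare newlines print() emits
            self.logfct(message.rstrip('\n'))
    def flush(self):
        pass

sys.stdout = LoggerWriter(log.info)
try:
    print('this line goes to the logger')
finally:
    sys.stdout = sys.__stdout__          # restore the original stream
print('this line goes to the console again')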
Q: How to remove empty space in second paragraph? I'm trying to remove the extra space and "rebtel.bootstrappedData" in the second paragraph but for some reason it won't work. This is my output "welcome_offer_cuba.block_1_title":"SaveonrechargetoCuba","welcome_offer_cuba.block_1_cta":"Sendrecharge!","welcome_offer_cuba.block_1_cta_prebook":"Pre-bookRecarga","welcome_offer_cuba.block_1_footprint":"Offervalidfornewusersonly.","welcome_offer_cuba.block_2_key":"","welcome_offer_cuba.block_2_title":"Howtosendarecharge?","welcome_offer_cuba.block_2_content":"<ol><li>Simplyenterthenumberyou’dliketosendrechargeinthefieldabove.</li><li>Clickthe“{{buttonText}}”button.</li><li>CreateaRebtelaccountifyouhaven’talready.</li><li>Done!Yourfriendshouldreceivetherechargeshortly.</li></ol>","welcome_offer_cuba.block_3_title":"DownloadtheRebtelapp!","welcome_offer_cuba.block_3_content":"Sendno-feerechargeandenjoythebestcallingratestoCubainoneplace."},"canonical":{"string":"<linkrel=\"canonical\"href=\"https://www.rebtel.com/en/rates/\"/>"}}; rebtel.bootstrappedData={"links":{"summary":{"collection":"country_links","ids":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"params":{"locale":"en"},"meta":{}},"data":[{"title":"A","links":[{"iso2":"AF","route":"afghanistan","name":"Afghanistan","url":"/en/rates/afghanistan/","callingCardsUrl":"/en/calling-cards/afghanistan/","popular":false},{"iso2":"AL","route":"albania","name":"Albania","url":"/en/rates/albania/ And this is the code I used: import json import requests from bs4 import BeautifulSoup url = "https://www.rebtel.com/en/rates/" r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") x = range(132621, 132624) script = soup.find_all("script")[4].text.strip()[38:] print(script) What should I add to "script" so it will remove the empty spaces? A: Original answer You can change the definition of your script variable by: script = soup.find_all("script")[4].text.replace("\t", "")[38:] It will remove all tab characters from your text, and so the paragraph indentation. Edit after conversation in the comments You can use the following code to extract the data in JSON: import json import requests from bs4 import BeautifulSoup url = "https://www.rebtel.com/en/rates/" r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") script = list(filter(None, soup.find_all("script")[4].text.replace("\t", "").split("\r\n"))) app_data = json.loads(script[1].replace("rebtel.appData = ", "")[:-1]) bootstrapped_data = json.loads(script[2].replace("rebtel.bootstrappedData = ", "")) I extracted the lines of the script with split("\r\n") and got the wanted data from there.
How to remove empty space in second paragraph?
I'm trying to remove the extra space and "rebtel.bootstrappedData" in the second paragraph but for some reason it won't work. This is my output "welcome_offer_cuba.block_1_title":"SaveonrechargetoCuba","welcome_offer_cuba.block_1_cta":"Sendrecharge!","welcome_offer_cuba.block_1_cta_prebook":"Pre-bookRecarga","welcome_offer_cuba.block_1_footprint":"Offervalidfornewusersonly.","welcome_offer_cuba.block_2_key":"","welcome_offer_cuba.block_2_title":"Howtosendarecharge?","welcome_offer_cuba.block_2_content":"<ol><li>Simplyenterthenumberyou’dliketosendrechargeinthefieldabove.</li><li>Clickthe“{{buttonText}}”button.</li><li>CreateaRebtelaccountifyouhaven’talready.</li><li>Done!Yourfriendshouldreceivetherechargeshortly.</li></ol>","welcome_offer_cuba.block_3_title":"DownloadtheRebtelapp!","welcome_offer_cuba.block_3_content":"Sendno-feerechargeandenjoythebestcallingratestoCubainoneplace."},"canonical":{"string":"<linkrel=\"canonical\"href=\"https://www.rebtel.com/en/rates/\"/>"}}; rebtel.bootstrappedData={"links":{"summary":{"collection":"country_links","ids":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"params":{"locale":"en"},"meta":{}},"data":[{"title":"A","links":[{"iso2":"AF","route":"afghanistan","name":"Afghanistan","url":"/en/rates/afghanistan/","callingCardsUrl":"/en/calling-cards/afghanistan/","popular":false},{"iso2":"AL","route":"albania","name":"Albania","url":"/en/rates/albania/ And this is the code I used: import json import requests from bs4 import BeautifulSoup url = "https://www.rebtel.com/en/rates/" r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") x = range(132621, 132624) script = soup.find_all("script")[4].text.strip()[38:] print(script) What should I add to "script" so it will remove the empty spaces?
[ "Original answer\nYou can change the definition of your script variable by :\nscript = soup.find_all(\"script\")[4].text.replace(\"\\t\", \"\")[38:]\n\nIt will remove all tabulations on your text and so the alineas.\nEdit after conversation in the comments\nYou can use the following code to extract the data in json :\nimport json\nimport requests\nfrom bs4 import BeautifulSoup\n\nurl = \"https://www.rebtel.com/en/rates/\"\nr = requests.get(url)\nsoup = BeautifulSoup(r.content, \"html.parser\")\nscript = list(filter(None, soup.find_all(\"script\")[4].text.replace(\"\\t\", \"\").split(\"\\r\\n\")))\napp_data = json.loads(script[1].replace(\"rebtel.appData = \", \"\")[:-1])\nbootstrapped_data = json.loads(script[2].replace(\"rebtel.bootstrappedData = \", \"\"))\n\nI extracted the lines of the script with split(\"\\r\\n\") and get the wanted data from there.\n" ]
[ 0 ]
[]
[]
[ "html", "python" ]
stackoverflow_0074518444_html_python.txt
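An alternative to the offset slicing above, sketched under the assumption that the page still assigns rebtel.bootstrappedData = {...}; inside that script tag: pull the object out with a regular expression instead of counting characters.

import re
import json
import requests
from bs4 import BeautifulSoup

url = "https://www.rebtel.com/en/rates/"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
script = soup.find_all("script")[4].text
# non-greedy match up to the first "};" -- may need adjusting if the
# JSON itself ever contains that character pair inside a string
match = re.search(r"rebtel\.bootstrappedData\s*=\s*(\{.*?\});", script, re.S)
if match:
    bootstrapped_data = json.loads(match.group(1))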
Q: Pandas aggregation function: Merge text rows, but insert spaces between them? I managed to group rows in a dataframe, given one column (id). The problem is that one column consists of parts of sentences, and when I add them together, the spaces are missing. An example probably makes it easier to understand... My dataframe looks something like this: import pandas as pd #create dataFrame df = pd.DataFrame({'id': [101, 101, 102, 102, 102], 'text': ['The government changed', 'the legislation on import control.', 'Politics cannot solve all problems', 'but it should try to do its part.', 'That is the reason why these elections are important.'], 'date': [1990, 1990, 2005, 2005, 2005],}) id text date 0 101 The government changed 1990 1 101 the legislation on import control. 1990 2 102 Politics cannot solve all problems 2005 3 102 but it should try to do its part. 2005 4 102 That is the reason why these elections are imp... 2005 Then I used the aggregation function: aggregation_functions = {'id': 'first','text': 'sum', 'date': 'first'} df_new = df.groupby(df['id']).aggregate(aggregation_functions) which returns: id text date 0 101 The government changedthe legislation on import control. 1990 2 102 Politics cannot solve all problemsbut it should try to... 2005 So, for example, I need a space in between ' The government changed' and 'the legislation...'. Is that possible? A: If you need to put a space between the two phrases/rows, use str.join: ujoin = lambda s: " ".join(dict.fromkeys(s.astype(str))) out = df.groupby(["id", "date"], as_index=False).agg(**{"text": ("text", ujoin)})[df.columns] # Output: print(out.to_string()) id text date 0 101 The government changed the legislation on import control. 1990 1 102 Politics cannot solve all problems but it should try to do its part. That is the reason why these elections are important. 2005
Pandas aggregation function: Merge text rows, but insert spaces between them?
I managed to group rows in a dataframe, given one column (id). The problem is that one column consists of parts of sentences, and when I add them together, the spaces are missing. An example probably makes it easier to understand... My dataframe looks something like this: import pandas as pd #create dataFrame df = pd.DataFrame({'id': [101, 101, 102, 102, 102], 'text': ['The government changed', 'the legislation on import control.', 'Politics cannot solve all problems', 'but it should try to do its part.', 'That is the reason why these elections are important.'], 'date': [1990, 1990, 2005, 2005, 2005],}) id text date 0 101 The government changed 1990 1 101 the legislation on import control. 1990 2 102 Politics cannot solve all problems 2005 3 102 but it should try to do its part. 2005 4 102 That is the reason why these elections are imp... 2005 Then I used the aggregation function: aggregation_functions = {'id': 'first','text': 'sum', 'date': 'first'} df_new = df.groupby(df['id']).aggregate(aggregation_functions) which returns: id text date 0 101 The government changedthe legislation on import control. 1990 2 102 Politics cannot solve all problemsbut it should try to... 2005 So, for example I need a space in between ' The government changed' and 'the legislation...'. Is that possible?
[ "If you need to put a space between the two phrases/rows, use str.join :\nujoin = lambda s: \" \".join(dict.fromkeys(s.astype(str)))\n​\nout= df.groupby([\"id\", \"date\"], as_index=False).agg(**{\"text\": (\"text\", ujoin)})[df.columns]\n\n# Output :\nprint(out.to_string())\n\n id text date\n0 101 The government changed the legislation on import control. 1990\n1 102 Politics cannot solve all problems but it should try to do its part. That is the reason why these elections are important. 2005\n\n" ]
[ 0 ]
[]
[]
[ "aggregate", "dataframe", "group_by", "pandas", "python" ]
stackoverflow_0074518570_aggregate_dataframe_group_by_pandas_python.txt
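For the question above, the smallest change is to aggregate the text column with ' '.join instead of 'sum'; a minimal sketch using the question's own df:

aggregation_functions = {'text': ' '.join, 'date': 'first'}
df_new = df.groupby('id', as_index=False).aggregate(aggregation_functions)
# the text column now holds the sentence parts joined with single spaces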
Q: Error when I try to extract info in a json I have this code: api_key = "_________" ciudad = input("put the city: ") url = "https://api.openweathermap.org/data/2.5/forecast?q=" +ciudad+ "&appid=" + api_key print(url) data = urllib.request.urlopen(url).read().decode() js = json.loads(data) And it is all okey but I need the temp max and min and I try this: for res in js["list"][0]["main"]: print("the value of", res["main.temp_min"]) and the code give me this error TypeError: string indices must be integers The json it is like: {'cod': '200', 'message': 0, 'cnt': 40, 'list': [{'dt': 1669032000, 'main': {'temp': 288.99, 'feels_like': 288.35, 'temp_min': 286.43, 'temp_max': 288.99, 'pressure': 1012, 'sea_level': 1012, 'grnd_level': 1007, 'humidity': 66, 'temp_kf': 2.56}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10d'}], 'clouds': {'all': 75}, 'wind': {'speed': 9.85, 'deg': 296, 'gust': 13.2}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 1.55}, 'sys': {'pod': 'd'}, 'dt_txt': '2022-11-21 12:00:00'}, {'dt': 1669042800, 'main': {'temp': 287.59, 'feels_like': 286.94, 'temp_min': 284.8, 'temp_max': 287.59, 'pressure': 1014, 'sea_level': 1014, 'grnd_level': 1008, 'humidity': 71, 'temp_kf': 2.79}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10d'}], 'clouds': {'all': 78}, 'wind': {'speed': 9.77, 'deg': 314, 'gust': 14.1}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 2.28}, 'sys': {'pod': 'd'}, 'dt_txt': '2022-11-21 15:00:00'}, {'dt': 1669053600, 'main': {'temp': 286.12, 'feels_like': 285.14, 'temp_min': 284.68, 'temp_max': 286.12, 'pressure': 1016, 'sea_level': 1016, 'grnd_level': 1009, 'humidity': 64, 'temp_kf': 1.44}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'}], 'clouds': {'all': 86}, 'wind': {'speed': 8.5, 'deg': 308, 'gust': 12.41}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 1.46}, 'sys': {'pod': 'n'}, 'dt_txt': '2022-11-21 18:00:00'}, {'dt': 1669064400, 'main': {'temp': 284.63, 'feels_like': 283.53, 'temp_min': 284.63, 'temp_max': 284.63, 'pressure': 1019, 'sea_level': 1019, 'grnd_level': 1010, 'humidity': 65, 'temp_kf': 0}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'}], 'clouds': {'all': 100}, 'wind': {'speed': 7.04, 'deg': 300, 'gust': 10.08}, 'visibility': 10000, 'pop': 0.57, 'rain': {'3h': 0.42}, 'sys': {'pod': 'n'}, 'dt_txt': '2022-11-21 21:00:00'}, {'dt': 1669075200, 'main': {'temp': 284.82, 'feels_like': 283.95, 'temp_min': 284.82, 'temp_max': 284.82, 'pressure': 1018, 'sea_level': 1018, 'grnd_level': 1009, 'humidity': 73, 'temp_kf': 0} A: js["list"][0]["main"] is a dictionary: {'temp': 288.99, 'feels_like': 288.35, 'temp_min': 286.43, 'temp_max': 288.99, 'pressure': 1012, 'sea_level': 1012, 'grnd_level': 1007, 'humidity': 66, 'temp_kf': 2.56} for res in js["list"][0]["main"] iterates over its keys. So res is one of the keys in this dictionary which are strings (hence the error). What you probably want is: for l in js["list"]: print("the value of", l["main"]["temp_min"])
Error when I try to extract info in a json
I have this code: api_key = "_________" ciudad = input("put the city: ") url = "https://api.openweathermap.org/data/2.5/forecast?q=" +ciudad+ "&appid=" + api_key print(url) data = urllib.request.urlopen(url).read().decode() js = json.loads(data) And it is all okey but I need the temp max and min and I try this: for res in js["list"][0]["main"]: print("the value of", res["main.temp_min"]) and the code give me this error TypeError: string indices must be integers The json it is like: {'cod': '200', 'message': 0, 'cnt': 40, 'list': [{'dt': 1669032000, 'main': {'temp': 288.99, 'feels_like': 288.35, 'temp_min': 286.43, 'temp_max': 288.99, 'pressure': 1012, 'sea_level': 1012, 'grnd_level': 1007, 'humidity': 66, 'temp_kf': 2.56}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10d'}], 'clouds': {'all': 75}, 'wind': {'speed': 9.85, 'deg': 296, 'gust': 13.2}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 1.55}, 'sys': {'pod': 'd'}, 'dt_txt': '2022-11-21 12:00:00'}, {'dt': 1669042800, 'main': {'temp': 287.59, 'feels_like': 286.94, 'temp_min': 284.8, 'temp_max': 287.59, 'pressure': 1014, 'sea_level': 1014, 'grnd_level': 1008, 'humidity': 71, 'temp_kf': 2.79}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10d'}], 'clouds': {'all': 78}, 'wind': {'speed': 9.77, 'deg': 314, 'gust': 14.1}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 2.28}, 'sys': {'pod': 'd'}, 'dt_txt': '2022-11-21 15:00:00'}, {'dt': 1669053600, 'main': {'temp': 286.12, 'feels_like': 285.14, 'temp_min': 284.68, 'temp_max': 286.12, 'pressure': 1016, 'sea_level': 1016, 'grnd_level': 1009, 'humidity': 64, 'temp_kf': 1.44}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'}], 'clouds': {'all': 86}, 'wind': {'speed': 8.5, 'deg': 308, 'gust': 12.41}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 1.46}, 'sys': {'pod': 'n'}, 'dt_txt': '2022-11-21 18:00:00'}, {'dt': 1669064400, 'main': {'temp': 284.63, 'feels_like': 283.53, 'temp_min': 284.63, 'temp_max': 284.63, 'pressure': 1019, 'sea_level': 1019, 'grnd_level': 1010, 'humidity': 65, 'temp_kf': 0}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'}], 'clouds': {'all': 100}, 'wind': {'speed': 7.04, 'deg': 300, 'gust': 10.08}, 'visibility': 10000, 'pop': 0.57, 'rain': {'3h': 0.42}, 'sys': {'pod': 'n'}, 'dt_txt': '2022-11-21 21:00:00'}, {'dt': 1669075200, 'main': {'temp': 284.82, 'feels_like': 283.95, 'temp_min': 284.82, 'temp_max': 284.82, 'pressure': 1018, 'sea_level': 1018, 'grnd_level': 1009, 'humidity': 73, 'temp_kf': 0}
[ "js[\"list\"][0][\"main\"] is a dictionary:\n{'temp': 288.99, 'feels_like': 288.35, 'temp_min': 286.43, 'temp_max': 288.99, 'pressure': 1012, 'sea_level': 1012, 'grnd_level': 1007, 'humidity': 66, 'temp_kf': 2.56}\n\nfor res in js[\"list\"][0][\"main\"] iterates over its keys. So res is one of the keys in this dictionary which are strings (hence the error). What you probably want is:\nfor l in js[\"list\"]:\n print(\"the value of\", l[\"main\"][\"temp_min\"])\n\n" ]
[ 2 ]
[]
[]
[ "api", "python" ]
stackoverflow_0074518504_api_python.txt
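Since every element of js["list"] is a dict, both temperature bounds can be read in one loop; a minimal sketch, assuming js was parsed as in the question:

for entry in js["list"]:
    main = entry["main"]
    print(entry["dt_txt"], "min:", main["temp_min"], "max:", main["temp_max"])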
Q: How can I get specific columns from txt file and save them to new file using python I have this txt file sentences.txt that contains the text below: a01-000u-s00-00 0 ok 154 19 408 746 1661 89 A|MOVE|to|stop|Mr.|Gaitskell|from a01-000u-s00-01 0 ok 156 19 395 932 1850 105 nominating|any|more|Labour|life|Peers which contains 10 columns. I want to use a pandas DataFrame to extract only the file name (at column 0) and corresponding text (column 10) without the (|) character. I wrote this code: def load() -> pd.DataFrame: df = pd.read_csv('sentences.txt',sep=' ', header=None) data = [] with open('sentences.txt') as infile: for line in infile: file_name, _, _, _, _, _, _, _, _, text = line.strip().split(' ') data.append((file_name, cl_txt(text))) df = pd.DataFrame(data, columns=['file_name', 'text']) df.rename(columns={0: 'file_name', 9: 'text'}, inplace=True) df['file_name'] = df['file_name'].apply(lambda x: x + '.jpg') df = df[['file_name', 'text']] return df def cl_txt(input_text: str) -> str: text = input_text.replace('+', '-') text = text.replace('|', ' ') return text load() The error I got: ParserError: Error tokenizing data. C error: Expected 10 fields in line 4, saw 11 My expected process.txt file results should look like below, without the \n: a01-000u-s00-00 A MOVE to stop Mr. Gaitskell from a01-000u-s00-01 nominating any more Labour life Peers A: IIUC, you just need pandas.read_csv to read your .txt and then select the two columns. Try this: import pandas as pd df = ( pd.read_csv("test.txt", header=None, sep=r"(\d+)\s(?=\D)", engine="python", usecols=[0,4], names=["filename", "text"]) .assign(filename= lambda x: x["filename"].str.strip().add(".jpg"), text= lambda x: x["text"].str.replace(r'[\|"]', " ", regex=True) .str.replace(r"\s+", " ", regex=True)) ) # Output: print(df) filename text 0 a01-000u-s00-00.jpg A MOVE to stop Mr. Gaitskell from 1 a01-000u-s00-01.jpg nominating any more Labour life Peers 2 a01-003-s00-01.jpg large majority of Labour M Ps are likely to # .txt used:
How can I get specific columns from txt file and save them to new file using python
I have this txt file sentences.txt that contains the text below: a01-000u-s00-00 0 ok 154 19 408 746 1661 89 A|MOVE|to|stop|Mr.|Gaitskell|from a01-000u-s00-01 0 ok 156 19 395 932 1850 105 nominating|any|more|Labour|life|Peers which contains 10 columns. I want to use a pandas DataFrame to extract only the file name (at column 0) and corresponding text (column 10) without the (|) character. I wrote this code: def load() -> pd.DataFrame: df = pd.read_csv('sentences.txt',sep=' ', header=None) data = [] with open('sentences.txt') as infile: for line in infile: file_name, _, _, _, _, _, _, _, _, text = line.strip().split(' ') data.append((file_name, cl_txt(text))) df = pd.DataFrame(data, columns=['file_name', 'text']) df.rename(columns={0: 'file_name', 9: 'text'}, inplace=True) df['file_name'] = df['file_name'].apply(lambda x: x + '.jpg') df = df[['file_name', 'text']] return df def cl_txt(input_text: str) -> str: text = input_text.replace('+', '-') text = text.replace('|', ' ') return text load() The error I got: ParserError: Error tokenizing data. C error: Expected 10 fields in line 4, saw 11 My expected process.txt file results should look like below, without the \n: a01-000u-s00-00 A MOVE to stop Mr. Gaitskell from a01-000u-s00-01 nominating any more Labour life Peers
[ "IIUC, you just need pandas.read_csv to read your .txt and then select the two columns :\nTry this :\nimport pandas as pd\n\ndf= ( \n pd.read_csv(\"test.txt\", header=None, sep=r\"(\\d+)\\s(?=\\D)\", engine=\"python\",\n usecols=[0,4], names=[\"filename\", \"text\"])\n .assign(filename= lambda x: x[\"filename\"].str.strip().add(\".jpg\"),\n text= lambda x: x[\"text\"].str.replace(r'[\\|\"]', \" \", regex=True)\n .str.replace(r\"\\s+\", \" \", regex=True))\n )\n\n# Output :\nprint(df)\n\n filename text\n0 a01-000u-s00-00.jpg A MOVE to stop Mr. Gaitskell from\n1 a01-000u-s00-01.jpg nominating any more Labour life Peers\n2 a01-003-s00-01.jpg large majority of Labour M Ps are likely to\n\n# .txt used:\n\n" ]
[ 2 ]
[]
[]
[ "deep_learning", "nlp", "pandas", "python", "pytorch_lightning" ]
stackoverflow_0074518666_deep_learning_nlp_pandas_python_pytorch_lightning.txt
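If the regex separator above feels opaque, a plain-Python sketch works too, under the assumption that the first nine fields are separated by single spaces and only the last field can contain '|':

import pandas as pd

rows = []
with open('sentences.txt') as infile:
    for line in infile:
        parts = line.strip().split(' ', 9)   # split into at most 10 fields
        if len(parts) == 10:
            rows.append((parts[0] + '.jpg', parts[9].replace('|', ' ')))

df = pd.DataFrame(rows, columns=['file_name', 'text'])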
Q: Pandas cummax datetime when NaT values exist I have a column of "Purchase Dates". The column either contains NaT or an actual date. Date Last_Purchase Cummax_Purchase 2010-05-28 NaT NaT 2010-06-01 2010-06-01 2010-06-01 2010-06-02 2010-06-02 2010-06-02 2010-06-03 NaT NaT 2010-06-04 NaT NaT I want to do a cummax() on the column such that it returns the most recent purchase date. data['Purchase_Date'] = numpy.where(data['Buy Signal'] == True, data.index.astype(str), pandas.NaT) data['Cummax_Purchase'] = pandas.to_datetime(data['Purchase_Date']).cummax() The above cummax returns NaT whenever there is a NaT in a corresponding row, not the cummax. But whenever I change the pandas.NaT to 0, then it works. But I want to return NaT values when there is a NaT. Any advice? EDIT: here's a small sample code: data = pandas.DataFrame( {"Purchase Dates" : pandas.to_datetime(['01-01-2020', '02-01-2020',None,'04-01-2020'])}, index=pandas.to_datetime(['01-01-2020','02-01-2020','03-01-2020','04-01-2020'])) data['Cummax_date'] = data['Purchase Dates'].cummax() A: Unfortunately, you do not provide a fully runnable example (see https://stackoverflow.com/help/minimal-reproducible-example), which makes it a bit hard to answer. Here is an attempt nevertheless assuming that data is a pd.DataFrame: mask = ~data['Purchase Dates'].isna() data.loc[mask, 'Cummax_Purchase'] = data.loc[mask, 'Purchase Dates'].cummax() Essentially, the approach is to run cummax() only on the part of the data that is actually a value and not on the NaTs. EDIT: After a clarification on the requirements, the solution is: data['Cummax_date'] = data['Purchase Dates'].ffill() If the purchase dates are not in order, they have to be sorted first
Pandas cummax datetime when NaT values exist
I have a column of "Purchase Dates". The column either contains NaT or an actual date. Date Last_Purchase Cummax_Purchase 2010-05-28 NaT NaT 2010-06-01 2010-06-01 2010-06-01 2010-06-02 2010-06-02 2010-06-02 2010-06-03 NaT NaT 2010-06-04 NaT NaT I want to do a cummax() on the column such that it returns the most recent purchase date. data['Purchase_Date'] = numpy.where(data['Buy Signal'] == True, data.index.astype(str), pandas.NaT) data['Cummax_Purchase'] = pandas.to_datetime(data['Purchase_Date']).cummax() The above cummax returns an NaT whenever their is an NaT in a corresponding row, not the cummax. But whenever I change the pandas.NaT to 0, then it works. But I want to return NaT values when there is an NaT. Any advice? EDIT: here's a small sample code: data = pandas.DataFrame( {"Purchase Dates" : pandas.to_datetime(['01-01-2020', '02-01-2020',None,'04-01-2020'])}, index=pandas.to_datetime(['01-01-2020','02-01-2020','03-01-2020','04-01-2020'])) data['Cummax_date'] = df['Purchase Dates'].cummax()
[ "Unfortunately, you do not provide a fully runnable example (see https://stackoverflow.com/help/minimal-reproducible-example), which makes it a bit hard to answer. Here is an attempt nevertheless assuming that data is a pd.DataFrame:\nmask = ~data['Purchase Dates'].isna()\ndata.loc[mask, 'Cummax_Purchase'] = data.loc[mask, 'Purchase Dates'].cummax()\n\nEssentially, the approach is to run cummax() only on the part of the data that is actually a value and not on the NaTs\nEDIT:\nAfter a clarification on the requirements, the solution is:\ndata['Cummax_date'] = data['Purchase Dates'].ffill()\n\nIf the purchase dates are not in order, they have to be sorted first\n" ]
[ 0 ]
[]
[]
[ "max", "python" ]
stackoverflow_0074517822_max_python.txt
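A quick check of the ffill() suggestion on the question's own sample frame (data as defined there):

data['Cummax_date'] = data['Purchase Dates'].ffill()
# the NaT row (2020-03-01) now carries the last seen date, 2020-02-01,
# while rows that already had a purchase date are unchanged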
Q: Python: calling a method inside a method I am trying to implement collisions with Python; the collisions aren't the problem. I want to call a method inside another method using OOP, but it isn't recognised. Can you do this? How? def collision_test(self,rect,tiles,x,y): #CREATING A RECT FOR THE GAME MAP(TILES) hit_list = [] for tile in tiles: if rect.colliderect(tile): hit_list.append(tile) return hit_list def move(self,rect,x,y,tiles): #testing collisions collision_types = {'top': False, 'bottom': False, 'right': False, 'left': False} rect.x += x hit_list = collision_test(self,rect,tiles) for tile in hit_list: if self.move_right == True: rect.right = tile.left Here collision_test isn't recognised. A: You have to call self.collision_test(rect,tiles) instead of collision_test(self,rect,tiles). However, the signatures aren't matching. Your collision_test expects x and y arguments too. That might cause trouble too.
Python: calling a method inside a method
I am trying to implement collisions with Python; the collisions aren't the problem. I want to call a method inside another method using OOP, but it isn't recognised. Can you do this? How? def collision_test(self,rect,tiles,x,y): #CREATING A RECT FOR THE GAME MAP(TILES) hit_list = [] for tile in tiles: if rect.colliderect(tile): hit_list.append(tile) return hit_list def move(self,rect,x,y,tiles): #testing collisions collision_types = {'top': False, 'bottom': False, 'right': False, 'left': False} rect.x += x hit_list = collision_test(self,rect,tiles) for tile in hit_list: if self.move_right == True: rect.right = tile.left Here collision_test isn't recognised.
[ "You have to call self.collision_test(rect,tiles) instead of collision_test(self,rect,tiles).\nHowever, the signature aren't matching. Your collision_test expects x and y arguments too. That might causes troubles too.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074518695_python.txt
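A stripped-down sketch of the pattern the answer describes, with rect and tiles assumed to be pygame Rects as in the question; the key point is that methods on the same object are reached through self:

class Player:
    def collision_test(self, rect, tiles):
        # collect every tile the rect overlaps
        return [tile for tile in tiles if rect.colliderect(tile)]

    def move(self, rect, x, y, tiles):
        rect.x += x
        hit_list = self.collision_test(rect, tiles)  # call via self
        for tile in hit_list:
            rect.right = tile.left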
Q: recover initial allocation from final allocation and percentage changes Let's say I have a sequence of n percentage changes for 2 assets and I know that at time n the allocation is A. R = np.array([ [0.0, 0.0], [0.1, 0.02], [0.05, 0.01], [0.03, 0.03] ]) A = np.array([0.72345109, 0.27654891]) What should I do if I want to recover the initial allocation? ! Edited after @mozway response to clarify my objective ! I should be able to reverse the following procedure. Given an initial allocation A_t0 and a sequence of percentage changes R the portfolio cumulative return is given by: A_t0 = np.array([0.7, 0.3]) cumr = np.cumprod(R+1, axis=0) # cumulative return for each asset. print(cumr) >> [[1. 1. ] [1.1 1.02 ] [1.155 1.0302 ] [1.18965 1.061106]] w_cumr = cumr * A_t0 # weighted cumulative return. print(w_cumr) >> [[0.7 0.3 ] [0.77 0.306 ] [0.8085 0.30906 ] [0.832755 0.3183318]] p_cumr = np.sum(w_cumr, axis=1) # portfolio cumulative return. print(p_cumr) >> [1. 1.076 1.11756 1.1510868] # The following shows how my weights shifted as time passes p_alloc = w_cumr / np.sum(w_cumr, axis=1, keepdims=True) # portfolio allocation print(p_alloc) >> [[0.7 0.3 ] [0.71561338 0.28438662] [0.72345109 0.27654891] [0.72345109 0.27654891]] Now what if I have the same series of percentage changes R but instead of having the initial allocation I have the final allocation A_tn = [0.72345109 0.27654891], how can I recover the initial allocation A_t0 = [0.70, 0.30]? A: What you want to do is not entirely clear. Assuming you have an initial vector I, and that you successively increase (for the first value) by [0, 0.1, 0.05, 0.03] (+0%, +10%, +5%, +3%), then I can be computed from A using: I = A/np.prod(R+1, axis=0) Output: array([0.60812095, 0.26062326]) And indeed: I * np.cumprod(R+1, axis=0) array([[0.60812095, 0.26062326], [0.66893305, 0.26583573], [0.7023797 , 0.26849409], [0.72345109, 0.27654891], # nth step is A ])
recover initial allocation from final allocation and percentage changes
Let's say I have a sequence of n percentage changes for 2 assets and I know that at time n the allocation is A. R = np.array([ [0.0, 0.0], [0.1, 0.02], [0.05, 0.01], [0.03, 0.03] ]) A = np.array([0.72345109, 0.27654891]) What should I do if I want to recover the initial allocation? ! Edited after @mozway response to clarify my objective ! I should be able to reverse the following procedure. Given an initial allocation A_t0 and a sequence of percentage changes R the portfolio cumulative return is given by: A_t0 = np.array([0.7, 0.3]) cumr = np.cumprod(R+1, axis=0) # cumulative return for each asset. print(cumr) >> [[1. 1. ] [1.1 1.02 ] [1.155 1.0302 ] [1.18965 1.061106]] w_cumr = cumr * A_t0 # weighted cumulative return. print(w_cumr) >> [[0.7 0.3 ] [0.77 0.306 ] [0.8085 0.30906 ] [0.832755 0.3183318]] p_cumr = np.sum(w_cumr, axis=1) # portfolio cumulative return. print(p_cumr) >> [1. 1.076 1.11756 1.1510868] # The following shows how my weights shifted as time passes p_alloc = w_cumr / np.sum(w_cumr, axis=1, keepdims=True) # portfolio allocation print(p_alloc) >> [[0.7 0.3 ] [0.71561338 0.28438662] [0.72345109 0.27654891] [0.72345109 0.27654891]] Now what if I have the same series of percentage changes R but instead of having the initial allocation I have the final allocation A_tn = [0.72345109 0.27654891], how can I recover the initial allocation A_t0 = [0.70, 0.30]?
[ "What you want to do is not highly clear.\nAssuming you have a initial vector I, and that you successively increase (for the first value) by [0, 0.1, 0.05, 0.03] (+0%, +10%, +5%, +3%), then I can be computed from A using:\nI = A/np.prod(R+1, axis=0)\n\nOutput: array([0.60812095, 0.26062326])\nAnd indeed:\nI * np.cumprod(R+1, axis=0)\n\narray([[0.60812095, 0.26062326],\n [0.66893305, 0.26583573],\n [0.7023797 , 0.26849409],\n [0.72345109, 0.27654891], # nth step is A\n ])\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "portfolio", "python" ]
stackoverflow_0074518703_numpy_portfolio_python.txt
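A minimal sketch of the reversal the record above ultimately asks for, building on the answer it contains: dividing the final weights by each asset's cumulative growth gives the unnormalized initial holdings, and renormalizing them recovers A_t0 = [0.7, 0.3]. This assumes the same R and A_tn as in the question.

import numpy as np

R = np.array([[0.0, 0.0],
              [0.1, 0.02],
              [0.05, 0.01],
              [0.03, 0.03]])
A_tn = np.array([0.72345109, 0.27654891])

# Undo each asset's cumulative growth over the n periods...
unscaled = A_tn / np.prod(R + 1, axis=0)
# ...then renormalize so the weights sum to 1 again.
A_t0 = unscaled / unscaled.sum()
print(A_t0)  # ~[0.7, 0.3]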
Q: Reducing code for iterating over the same list with nested for loops but with different variables Is there any built-in Python iterating tool that reduces 3 rows of for loops into one row? Here are the nested for loops that I want to reduce. some_list = ["AB", "CD", "EF", "GH"] for word_1 in some_list: for word_2 in some_list: for word_3 in some_list: print(word_1, word_2, word_3) #Outputs all different combinations I have tried the following but with no success: some_list = ["AB", "CD", "EF", "GH"] for word_1 ,word_2, word_3 in zip(some_list, some_list, some_list): print(word_1 , word_2, word_3) #Outputs parallel elements, which is not what I want. A: Yes there is, it's product from itertools. from itertools import product some_list = ["AB", "CD", "EF", "GH"] for word_1 ,word_2, word_3 in product(some_list, repeat=3): print(word_1 , word_2, word_3) You can also use tuple unpacking to make it even more concise, like this some_list = ["AB", "CD", "EF", "GH"] for words in product(some_list, repeat=3): print(*words) Output (in both cases): AB AB AB AB AB CD AB AB EF AB AB GH AB CD AB AB CD CD AB CD EF AB CD GH AB EF AB AB EF CD ...
Reducing code for iterating over the same list with nested for loops but with different variables
Is there any built-in Python iterating tool that reduces 3 rows of for loops into one row? Here are the nested for loops that I want to reduce. some_list = ["AB", "CD", "EF", "GH"] for word_1 in some_list: for word_2 in some_list: for word_3 in some_list: print(word_1, word_2, word_3) #Outputs all different combinations I have tried the following but with no success: some_list = ["AB", "CD", "EF", "GH"] for word_1 ,word_2, word_3 in zip(some_list, some_list, some_list): print(word_1 , word_2, word_3) #Outputs parallel elements, which is not what I want.
[ "Yes there is, it's product from itertools.\nfrom itertools import product\n\nsome_list = [\"AB\", \"CD\", \"EF\", \"GH\"]\n\nfor word_1 ,word_2, word_3 in product(some_list, repeat=3):\n print(word_1 , word_2, word_3)\n\nYou can also use tuple unpacking to make it even more concise, like this\nsome_list = [\"AB\", \"CD\", \"EF\", \"GH\"]\n\nfor words in product(some_list, repeat=3):\n print(*words)\n\nOutput (in both cases):\nAB AB AB\nAB AB CD\nAB AB EF\nAB AB GH\nAB CD AB\nAB CD CD\nAB CD EF\nAB CD GH\nAB EF AB\nAB EF CD\n...\n\n" ]
[ 1 ]
[]
[]
[ "for_loop", "iteration", "list", "python", "reducing" ]
stackoverflow_0074518808_for_loop_iteration_list_python_reducing.txt
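A side note on the design choice in the record above, sketched under the assumption that the asker wants ordered triples: product(some_list, repeat=3) enumerates all orderings, so repeats and permutations both appear. If order were irrelevant, combinations_with_replacement would be the smaller alternative (the counts follow from n**k and C(n+k-1, k)).

from itertools import product, combinations_with_replacement

some_list = ["AB", "CD", "EF", "GH"]

# Ordered triples: 4**3 = 64 results, matching the nested loops.
assert len(list(product(some_list, repeat=3))) == 64

# Unordered triples with repetition: C(4+3-1, 3) = 20 results.
assert len(list(combinations_with_replacement(some_list, 3))) == 20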
Q: How to draw axis with arrows the same in Python I plot the function, and write code for plotting graph of this function: import seaborn as sns import matplotlib.pyplot as plt import numpy as np from matplotlib import ticker from matplotlib import rc rc('text', usetex=True) def fct(x): if -2 <= x < -1: y = 1.0 elif -1 <= x < 0: y = -1.0 elif 0 <= x < 0.5: y = 2.0 elif 0.5 <= x < 1: y = -2 else: y = 0 return y x = np.arange(-2.1, 1.2, 0.003) yf = [fct(i) for i in x] plt.style.use('science') sns.set(font_scale=2) sns.set_style("whitegrid") fig, ax = plt.subplots(figsize=(14, 12)) tick_spacing = 0.5 ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing)) ax.set_ylabel('$f(x)$', rotation='vertical') ax.set_xlabel('$x$', labelpad=5) ax.set_ylabel(ax.get_ylabel(), rotation=0, ha='right') sns.lineplot(x=x, y=yf, color='black', linestyle='-', linewidth=1.5, label='$f(x)$') ax.legend(loc="upper left") ax.set_xlim(-2.005, 1.005) fig.savefig('graph1', dpi=600, bbox_inches='tight') # save I want to draw axis and arrow of each axis. And my question is how to draw axis and arrows at the same style (from image)? A: To plot the axis with arrows, you can use the function matplotlib.pyplot.arrow. I have shown you one possible implementation in the following function plot_arrows. import seaborn as sns import matplotlib.pyplot as plt import numpy as np from matplotlib import ticker from matplotlib import rc rc('text', usetex=True) def plot_arrows( fig, ax, width:float=5e-4, head_length:float=8e-3, head_width_factor:float=20, color:str='black' ) -> None: figsize_x, figsize_y = fig.get_size_inches() xlim_min, xlim_max = ax.get_xlim() ylim_min, ylim_max = ax.get_ylim() span_x = xlim_max - xlim_min span_y = ylim_max - ylim_min widthx = width*figsize_x/span_x widthy = width*figsize_y/span_y head_lengthy = head_length*figsize_x/span_x head_lengthx = head_length*figsize_y/span_y plt.arrow(xlim_min, 0, xlim_max - xlim_min, 0, width=widthx, color=color, length_includes_head=True, head_width=head_width_factor*widthx, head_length=head_lengthx) plt.arrow(0, ylim_min, 0, ylim_max-ylim_min, width=widthy, color=color, length_includes_head=True, head_width=head_width_factor*widthy, head_length=head_lengthy) def fct(x): if -2 <= x < -1: y = 1.0 elif -1 <= x < 0: y = -1.0 elif 0 <= x < 0.5: y = 2.0 elif 0.5 <= x < 1: y = -2 else: y = 0 return y x = np.arange(-2.1, 1.2, 0.003) yf = [fct(i) for i in x] plt.style.use('science') sns.set(font_scale=2) sns.set_style("whitegrid") fig, ax = plt.subplots(figsize=(14, 12)) tick_spacing = 0.5 ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing)) ax.set_ylabel('$f(x)$', rotation='vertical') ax.set_xlabel('$x$', labelpad=5) ax.set_ylabel(ax.get_ylabel(), rotation=0, ha='right') sns.lineplot(x=x, y=yf, color='black', linestyle='-', linewidth=1.5, label='$f(x)$') ax.legend(loc="upper left") ax.set_xlim(-2.005, 1.005) fig.savefig('graph1', dpi=600, bbox_inches='tight') # save Output:
How to draw axis with arrows the same in Python
I plot the function, and write code for plotting graph of this function: import seaborn as sns import matplotlib.pyplot as plt import numpy as np from matplotlib import ticker from matplotlib import rc rc('text', usetex=True) def fct(x): if -2 <= x < -1: y = 1.0 elif -1 <= x < 0: y = -1.0 elif 0 <= x < 0.5: y = 2.0 elif 0.5 <= x < 1: y = -2 else: y = 0 return y x = np.arange(-2.1, 1.2, 0.003) yf = [fct(i) for i in x] plt.style.use('science') sns.set(font_scale=2) sns.set_style("whitegrid") fig, ax = plt.subplots(figsize=(14, 12)) tick_spacing = 0.5 ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing)) ax.set_ylabel('$f(x)$', rotation='vertical') ax.set_xlabel('$x$', labelpad=5) ax.set_ylabel(ax.get_ylabel(), rotation=0, ha='right') sns.lineplot(x=x, y=yf, color='black', linestyle='-', linewidth=1.5, label='$f(x)$') ax.legend(loc="upper left") ax.set_xlim(-2.005, 1.005) fig.savefig('graph1', dpi=600, bbox_inches='tight') # save I want to draw axis and arrow of each axis. And my question is how to draw axis and arrows at the same style (from image)?
[ "To plot the axis with arrows, you can use the function matplotlib.pyplot.arrow.\nI have shown you one possible implementation in the following function plot_arrows.\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib import ticker\nfrom matplotlib import rc\n\nrc('text', usetex=True)\n\n\ndef plot_arrows(\n fig, ax, \n width:float=5e-4, \n head_length:float=8e-3, \n head_width_factor:float=20, \n color:str='black'\n ) -> None:\n figsize_x, figsize_y = fig.get_size_inches()\n xlim_min, xlim_max = ax.get_xlim()\n ylim_min, ylim_max = ax.get_ylim()\n span_x = xlim_max - xlim_min\n span_y = ylim_max - ylim_min\n widthx = width*figsize_x/span_x\n widthy = width*figsize_y/span_y\n head_lengthy = head_length*figsize_x/span_x\n head_lengthx = head_length*figsize_y/span_y\n\n plt.arrow(xlim_min, 0, xlim_max - xlim_min, 0, width=widthx, color=color, length_includes_head=True, head_width=head_width_factor*widthx, head_length=head_lengthx)\n plt.arrow(0, ylim_min, 0, ylim_max-ylim_min, width=widthy, color=color, length_includes_head=True, head_width=head_width_factor*widthy, head_length=head_lengthy)\n\ndef fct(x):\n if -2 <= x < -1:\n y = 1.0\n elif -1 <= x < 0:\n y = -1.0\n elif 0 <= x < 0.5:\n y = 2.0\n elif 0.5 <= x < 1:\n y = -2\n else:\n y = 0\n return y\n\n\nx = np.arange(-2.1, 1.2, 0.003)\nyf = [fct(i) for i in x]\n\nplt.style.use('science') \nsns.set(font_scale=2)\nsns.set_style(\"whitegrid\")\nfig, ax = plt.subplots(figsize=(14, 12))\ntick_spacing = 0.5\nax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))\nax.set_ylabel('$f(x)$', rotation='vertical')\nax.set_xlabel('$x$', labelpad=5)\nax.set_ylabel(ax.get_ylabel(), rotation=0, ha='right')\nsns.lineplot(x=x, y=yf, color='black', linestyle='-', linewidth=1.5, label='$f(x)$')\nax.legend(loc=\"upper left\")\nax.set_xlim(-2.005, 1.005)\n\nfig.savefig('graph1', dpi=600, bbox_inches='tight') # save\n\n\nOutput:\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0071572533_matplotlib_python.txt
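An alternative to the plot_arrows approach in the record above, offered only as a sketch: matplotlib's "centered spines with arrows" idiom draws the arrow heads as axis-end markers, which keeps them the same visual size regardless of the data ranges. It assumes matplotlib >= 3.4 for the list-based spines access.

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Move the left/bottom spines to x=0 / y=0 and hide the other two.
ax.spines[["left", "bottom"]].set_position(("data", 0))
ax.spines[["top", "right"]].set_visible(False)
# Arrow heads as markers at the spine ends; clip_on=False lets them sit
# on the axes border, and the blended transforms pin them to
# (right edge, y=0) and (x=0, top edge) respectively.
ax.plot(1, 0, ">k", transform=ax.get_yaxis_transform(), clip_on=False)
ax.plot(0, 1, "^k", transform=ax.get_xaxis_transform(), clip_on=False)
plt.show()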
Q: Fast Pathfinder associative network algorithm (PFNET) in Python I've been trying to implement a "Fast Pathfinder" network pruning algorithm from https://doi.org/10.1016/j.ipm.2007.09.005 in Python/networkX, and have finally stumbled on something that is returning something that looks more or less right. I'm not quite competent enough to test if the results are consistently (or ever, for that matter) correct though. Especially for directed graphs I have my doubts, and I'm unsure if the original is even intended to work for directed graphs. I have not found a Python implementation for any pathfinder network algorithms yet, but if there is an existing alternative to use I would also be interested for comparing results. I know there is an implementation in R (https://rdrr.io/cran/comato/src/R/pathfinder.r) where I took some inspiration as well. Based on my best (read: poor) understanding, the algorithm described in the paper uses a distance matrix of shortest paths generated by the Floyd-Warshall algorithm, and compares those distances to the weighted adjacency matrix, picking only the matches as links. The intuition for the expected result in the undirected case is the union of all edges in all of its possible minimum spanning trees. That is what I am attempting to emulate with the below function: def minimal_pathfinder(G, r = float("inf")): """ Args: ----- G [networkX graph]: Graph to filter links from. r [float]: "r" parameter as in the paper. Returns: ----- PFNET [networkX graph]: Graph containing only the PFNET links. """ import networkx as nx from collections import defaultdict H = G.copy() # Initialize adjacency matrix W W = defaultdict(lambda: defaultdict(lambda: float("inf"))) # Set diagonal to 0 for u in H.nodes(): W[u][u] = 0 # Get weights and set W values for i, j, d in H.edges(data=True): W[i][j] = d['weight'] # Add weights to W # Get shortest path distance matrix D dist = nx.floyd_warshall_predecessor_and_distance(H, weight='weight')[1] # Iterate over all triples to get values for D for k in H.nodes(): for i in H.nodes(): for j in H.nodes(): if r == float("inf"): # adapted from the R-comato version which does a similar check # Discard non-shortest paths dist[i][j] = min(dist[i][j], (dist[i][k] + dist[k][j])) else: dist[i][j] = min(dist[i][j], (((dist[i][k]) ** r) + ((dist[k][j]) ** r )) ** (1/r)) # Check for type; set placeholder for either case if not H.is_directed(): PFNET = nx.Graph() PFNET.add_nodes_from(H.nodes(data=True)) else: PFNET = nx.DiGraph() PFNET.add_nodes_from(H.nodes(data=True)) # Add links D_ij only if == W_ij for i in H.nodes(): for j in H.nodes(): if dist[i][j] == W[i][j]: # If shortest path distance equals distance in adjacency if dist[i][j] == float("inf"): # Skip infinite path lengths pass elif i == j: # Skip the diagonal pass else: # Add link to PFNET weight = dist[i][j] PFNET.add_edge(i, j, weight=weight) return PFNET I've tested this with a bunch of real networks (both directed and undirected) and randomly generated networks, both cases ranging from 20ish nodes up to around 300 nodes, maximum few thousand edges (e.g. complete graphs, connected caveman graphs). In all cases it returns something, but I have little confidence the results are correct. As I find no other implementations I'm unsure how to verify this is working consistently (I'm not really using any other languages at all). I am fairly sure there is still something wrong with this but I am unsure of what it might be. 
Simple use case: G = nx.complete_graph(50) # Generate a complete graph # Add random weights for (u,v,w) in G.edges(data=True): w['weight'] = np.random.randint(1,20) PFNET = minimal_pathfinder(G) print(nx.info(G)) print(nx.info(PFNET)) Output: Graph with 50 nodes and 1225 edges Graph with 50 nodes and 236 edges I was wondering about two things: 1. Any idea what might be wrong with the implementation? Should I have confidence in the results? Any idea how this might converted to work with similarity data instead of distances? To the second I considered normalizing the weights to 0-1 range and converting all the distances to similarities by 1 - distance. But I am unsure if this is theoretically valid, and was hoping for a second opinion. EDIT: I possibly discovered solution to Q2. in the original paper: change float("inf") to float("-inf") and change min to max in the first loop. From the authors' footnote: Actually, using similarities or distances has no influence at all in our proposal. In case of using similarities, we would only need to replace MIN by MAX, ’>’ by ’<’, and use r = -inf to mimic the MIN function instead of the MAX function in the Fast Pathfinder algorithm. Any inputs much appreciated, thanks! EDIT (adding example of how it goes wrong from here) per comment, using the "example from a datafile" section: Adjacency in starting graph: matrix([[0, 1, 4, 2, 2], [1, 0, 2, 3, 0], [4, 2, 0, 3, 1], [2, 3, 3, 0, 3], [2, 0, 1, 3, 0]], dtype=int32) And after pruning with the function, converting first into a networkX undirected graph: matrix([[0, 1, 0, 2, 2], [1, 0, 2, 3, 0], [0, 2, 0, 3, 1], [2, 3, 3, 0, 3], [2, 0, 1, 3, 0]], dtype=int32) It seems to drop only the highest weight overall leaving all other edges. Since the expected result is in an edgelist on the linked example, here's the edgelist of the result I obtain as well: source target weight 1 2 1 1 4 2 1 5 2 2 3 2 2 4 3 3 4 3 3 5 1 4 5 3 A: Disclaimer : I am one of the author of the optimisation papers (Fast PFNET, but there is also a faster version, MST-PFNET). Note that the MST-PFNET version can only be applied to a subset of the original PFNET algorithm, ie, can only work with q=n-1 and r=oo. Sorry for the delay of my answer, but I just have seen this post today. I will try to address as many questions as possible: First, to avoid any confusion, as I see the both concepts are mixed in the post and the comments below, the Fast PFNET (or Fast Pathfinder) algorithm, an optimisation of the original PFNET algorithm from Schvaneveldt, is based on a shortest path algorithm. The MST PFNET version is even faster and is based on Minimum Spanning Trees (MST). Both optimisations work only with (different) subsets of the original algorithm parameters (see this page to see which ones). Thus they are not compatible. I am not aware too about a Python version. But if you are fluent with C, you can find all the versions of this algorithm on GIT here. Those versions should be straightforward to compile (using the Makefile) and to use (input file format are in Pajek format, some examples are included, the command line is <executable> <input_filename> and the output in Pajek format is directly sent to stdout). The original PFNET version from Schvaneveldt is intended to be used with directed and undirected graphs, but the optimised versions are defined only for undirected graphs. You will find a comparison table for all the versions on this page. 
I am not able to check your Python version now, but on the mentioned page there is a very simple example to test your implementation. The versions on GIT are also well tested (with thousands of random graphs against the original slow version, the code to create the random graphs is also on the GIT) so the output of any random graph can be considered secure enough. Why do you think your implementation might be wrong? Based only on the statistics, it is perfectly normal the algorithm prunes the edges but not the nodes, this is the nominal behaviour of this algorithm. The implementations on GIT are supposed to work with similarity and not distance for the weights of the graphs. As noted in the paper, switching from similarity to distance and vice versa does not change the algorithm itself, but we should only adapt the comparison operators and some other instructions. As said in one of the comments, MST-PFNET (or the original PFNET but with the restriction applied to the parameters) applied to a tree returns the exact same tree. If a graph has multiple/different MSTs, this means that some edges of these MSTs share the same weight. The result of MST-PFNET is the superposition of those multiple MSTs (ie, keeping each edge appearing in at least one of the MSTs). I confirm the behaviour for unweighted graph (or a graph having all the edges with the same weight): the result of MST PFNET should be the input graph itself. A: Below is a possible implementation of Fast-Pathfinder in Python using the networkx library. Note: the implementation corresponds to the paper. it is inspired from the C implementation found in GitHub. only the maximum variant is implemented, where the input matrix is a similarity matrix and not a distance matrix (edges with the highest values are kept). def fast_pfnet(G, q, r): s = G.number_of_nodes() weights_init = np.zeros((s,s)) weights = np.zeros((s,s)) hops = np.zeros((s,s)) pfnet = np.zeros((s,s)) for i, j, d in G.edges(data=True): weights_init[i,j] = d['weight'] weights_init[j,i] = d['weight'] for i in range(s): for j in range(s): weights[i,j] = -weights_init[i,j] if i==j: hops[i,j] = 0 else: hops[i,j] = 1 def update_weight_maximum(i, j, k, wik, wkj, weights, hops, p): if p<=q: if r==0: # r == infinity dist = max(wik, wkj) else: dist = (wik**r + wkj**r) ** (1/r) if dist < weights[i,j]: weights[i,j] = dist weights[j,i] = dist hops[i,j] = p hops[j,i] = p def is_equal(a, b): return abs(a-b)<0.00001 for k in range(s): for i in range(s): if i!=k: beg = i+1 for j in range(beg, s): if j!=k: update_weight_maximum(i, j, k, weights_init[i,k], weights_init[k,j], weights, hops, 2) update_weight_maximum(i, j, k, weights[i,k], weights[k,j], weights, hops, hops[i,k]+hops[k,j]) for i in range(s): for j in range(s): # Possible optimisation: in case of symmetrical matrices, we do not need to go from 0 to s but from i+1 to s if not is_equal(weights_init[i,j], 0): if is_equal(weights[i,j], -weights_init[i,j]): pfnet[i,j] = weights_init[i,j] else: pfnet[i,j] = 0 return nx.from_numpy_matrix(pfnet) Usage: m = np.matrix([[0, 1, 4, 2, 2], [1, 0, 2, 3, 0], [4, 2, 0, 3, 1], [2, 3, 3, 0, 3], [2, 0, 1, 3, 0]], dtype=np.int32) G = nx.from_numpy_matrix(m) # Fast-PFNET parameters set to emulate MST-PFNET # This variant is OK for other parameters (q, r) but for the ones below # it is faster to implement the MST-PFNET variant instead. 
q = G.number_of_nodes()-1 r = 0 P = fast_pfnet(G, q, r) list(P.edges(data=True)) This should return: [(0, 2, {'weight': 4.0}), (1, 3, {'weight': 3.0}), (2, 3, {'weight': 3.0}), (3, 4, {'weight': 3.0})] Which is similar to what is shown on the website (see the example in the section "After the application of Pathfinder").
Fast Pathfinder associative network algorithm (PFNET) in Python
I've been trying to implement a "Fast Pathfinder" network pruning algorithm from https://doi.org/10.1016/j.ipm.2007.09.005 in Python/networkX, and have finally stumbled on something that is returning something that looks more or less right. I'm not quite competent enough to test if the results are consistently (or ever, for that matter) correct though. Especially for directed graphs I have my doubts, and I'm unsure if the original is even intended to work for directed graphs. I have not found a Python implementation for any pathfinder network algorithms yet, but if there is an existing alternative to use I would also be interested for comparing results. I know there is an implementation in R (https://rdrr.io/cran/comato/src/R/pathfinder.r) where I took some inspiration as well. Based on my best (read: poor) understanding, the algorithm described in the paper uses a distance matrix of shortest paths generated by the Floyd-Warshall algorithm, and compares those distances to the weighted adjacency matrix, picking only the matches as links. The intuition for the expected result in the undirected case is the union of all edges in all of its possible minimum spanning trees. That is what I am attempting to emulate with the below function: def minimal_pathfinder(G, r = float("inf")): """ Args: ----- G [networkX graph]: Graph to filter links from. r [float]: "r" parameter as in the paper. Returns: ----- PFNET [networkX graph]: Graph containing only the PFNET links. """ import networkx as nx from collections import defaultdict H = G.copy() # Initialize adjacency matrix W W = defaultdict(lambda: defaultdict(lambda: float("inf"))) # Set diagonal to 0 for u in H.nodes(): W[u][u] = 0 # Get weights and set W values for i, j, d in H.edges(data=True): W[i][j] = d['weight'] # Add weights to W # Get shortest path distance matrix D dist = nx.floyd_warshall_predecessor_and_distance(H, weight='weight')[1] # Iterate over all triples to get values for D for k in H.nodes(): for i in H.nodes(): for j in H.nodes(): if r == float("inf"): # adapted from the R-comato version which does a similar check # Discard non-shortest paths dist[i][j] = min(dist[i][j], (dist[i][k] + dist[k][j])) else: dist[i][j] = min(dist[i][j], (((dist[i][k]) ** r) + ((dist[k][j]) ** r )) ** (1/r)) # Check for type; set placeholder for either case if not H.is_directed(): PFNET = nx.Graph() PFNET.add_nodes_from(H.nodes(data=True)) else: PFNET = nx.DiGraph() PFNET.add_nodes_from(H.nodes(data=True)) # Add links D_ij only if == W_ij for i in H.nodes(): for j in H.nodes(): if dist[i][j] == W[i][j]: # If shortest path distance equals distance in adjacency if dist[i][j] == float("inf"): # Skip infinite path lengths pass elif i == j: # Skip the diagonal pass else: # Add link to PFNET weight = dist[i][j] PFNET.add_edge(i, j, weight=weight) return PFNET I've tested this with a bunch of real networks (both directed and undirected) and randomly generated networks, both cases ranging from 20ish nodes up to around 300 nodes, maximum few thousand edges (e.g. complete graphs, connected caveman graphs). In all cases it returns something, but I have little confidence the results are correct. As I find no other implementations I'm unsure how to verify this is working consistently (I'm not really using any other languages at all). I am fairly sure there is still something wrong with this but I am unsure of what it might be. 
Simple use case: G = nx.complete_graph(50) # Generate a complete graph # Add random weights for (u,v,w) in G.edges(data=True): w['weight'] = np.random.randint(1,20) PFNET = minimal_pathfinder(G) print(nx.info(G)) print(nx.info(PFNET)) Output: Graph with 50 nodes and 1225 edges Graph with 50 nodes and 236 edges I was wondering about two things: 1. Any idea what might be wrong with the implementation? Should I have confidence in the results? Any idea how this might converted to work with similarity data instead of distances? To the second I considered normalizing the weights to 0-1 range and converting all the distances to similarities by 1 - distance. But I am unsure if this is theoretically valid, and was hoping for a second opinion. EDIT: I possibly discovered solution to Q2. in the original paper: change float("inf") to float("-inf") and change min to max in the first loop. From the authors' footnote: Actually, using similarities or distances has no influence at all in our proposal. In case of using similarities, we would only need to replace MIN by MAX, ’>’ by ’<’, and use r = -inf to mimic the MIN function instead of the MAX function in the Fast Pathfinder algorithm. Any inputs much appreciated, thanks! EDIT (adding example of how it goes wrong from here) per comment, using the "example from a datafile" section: Adjacency in starting graph: matrix([[0, 1, 4, 2, 2], [1, 0, 2, 3, 0], [4, 2, 0, 3, 1], [2, 3, 3, 0, 3], [2, 0, 1, 3, 0]], dtype=int32) And after pruning with the function, converting first into a networkX undirected graph: matrix([[0, 1, 0, 2, 2], [1, 0, 2, 3, 0], [0, 2, 0, 3, 1], [2, 3, 3, 0, 3], [2, 0, 1, 3, 0]], dtype=int32) It seems to drop only the highest weight overall leaving all other edges. Since the expected result is in an edgelist on the linked example, here's the edgelist of the result I obtain as well: source target weight 1 2 1 1 4 2 1 5 2 2 3 2 2 4 3 3 4 3 3 5 1 4 5 3
[ "Disclaimer : I am one of the author of the optimisation papers (Fast PFNET, but there is also a faster version, MST-PFNET). Note that the MST-PFNET version can only be applied to a subset of the original PFNET algorithm, ie, can only work with q=n-1 and r=oo. Sorry for the delay of my answer, but I just have seen this post today.\nI will try to address as many questions as possible:\n\nFirst, to avoid any confusion, as I see the both concepts are mixed in the post and the comments below, the Fast PFNET (or Fast Pathfinder) algorithm, an optimisation of the original PFNET algorithm from Schvaneveldt, is based on a shortest path algorithm. The MST PFNET version is even faster and is based on Minimum Spanning Trees (MST). Both optimisations work only with (different) subsets of the original algorithm parameters (see this page to see which ones). Thus they are not compatible.\n\nI am not aware too about a Python version. But if you are fluent with C, you can find all the versions of this algorithm on GIT here. Those versions should be straightforward to compile (using the Makefile) and to use (input file format are in Pajek format, some examples are included, the command line is <executable> <input_filename> and the output in Pajek format is directly sent to stdout).\n\nThe original PFNET version from Schvaneveldt is intended to be used with directed and undirected graphs, but the optimised versions are defined only for undirected graphs. You will find a comparison table for all the versions on this page.\n\nI am not able to check your Python version now, but on the mentioned page there is a very simple example to test your implementation. The versions on GIT are also well tested (with thousands of random graphs against the original slow version, the code to create the random graphs is also on the GIT) so the output of any random graph can be considered secure enough.\n\nWhy do you think your implementation might be wrong? Based only on the statistics, it is perfectly normal the algorithm prunes the edges but not the nodes, this is the nominal behaviour of this algorithm.\n\nThe implementations on GIT are supposed to work with similarity and not distance for the weights of the graphs. As noted in the paper, switching from similarity to distance and vice versa does not change the algorithm itself, but we should only adapt the comparison operators and some other instructions.\n\nAs said in one of the comments, MST-PFNET (or the original PFNET but with the restriction applied to the parameters) applied to a tree returns the exact same tree.\n\nIf a graph has multiple/different MSTs, this means that some edges of these MSTs share the same weight. The result of MST-PFNET is the superposition of those multiple MSTs (ie, keeping each edge appearing in at least one of the MSTs).\n\nI confirm the behaviour for unweighted graph (or a graph having all the edges with the same weight): the result of MST PFNET should be the input graph itself.\n\n\n", "Below is a possible implementation of Fast-Pathfinder in Python using the networkx library. 
Note:\n\nthe implementation corresponds to the paper.\nit is inspired from the C implementation found in GitHub.\nonly the maximum variant is implemented, where the input matrix is a similarity matrix and not a distance matrix (edges with the highest values are kept).\n\ndef fast_pfnet(G, q, r):\n \n s = G.number_of_nodes()\n weights_init = np.zeros((s,s))\n weights = np.zeros((s,s))\n hops = np.zeros((s,s))\n pfnet = np.zeros((s,s))\n\n for i, j, d in G.edges(data=True):\n weights_init[i,j] = d['weight']\n weights_init[j,i] = d['weight']\n\n for i in range(s):\n for j in range(s):\n weights[i,j] = -weights_init[i,j]\n if i==j:\n hops[i,j] = 0\n else:\n hops[i,j] = 1\n\n def update_weight_maximum(i, j, k, wik, wkj, weights, hops, p):\n if p<=q:\n if r==0:\n # r == infinity\n dist = max(wik, wkj)\n else:\n dist = (wik**r + wkj**r) ** (1/r)\n\n if dist < weights[i,j]:\n weights[i,j] = dist\n weights[j,i] = dist\n hops[i,j] = p\n hops[j,i] = p\n \n def is_equal(a, b):\n return abs(a-b)<0.00001\n\n for k in range(s):\n for i in range(s):\n if i!=k:\n beg = i+1\n for j in range(beg, s):\n if j!=k:\n update_weight_maximum(i, j, k, weights_init[i,k], weights_init[k,j], weights, hops, 2)\n update_weight_maximum(i, j, k, weights[i,k], weights[k,j], weights, hops, hops[i,k]+hops[k,j])\n\n for i in range(s):\n for j in range(s): # Possible optimisation: in case of symmetrical matrices, we do not need to go from 0 to s but from i+1 to s\n if not is_equal(weights_init[i,j], 0):\n if is_equal(weights[i,j], -weights_init[i,j]):\n pfnet[i,j] = weights_init[i,j]\n else:\n pfnet[i,j] = 0\n\n return nx.from_numpy_matrix(pfnet)\n\nUsage:\nm = np.matrix([[0, 1, 4, 2, 2],\n [1, 0, 2, 3, 0],\n [4, 2, 0, 3, 1],\n [2, 3, 3, 0, 3],\n [2, 0, 1, 3, 0]], dtype=np.int32)\n\nG = nx.from_numpy_matrix(m)\n\n# Fast-PFNET parameters set to emulate MST-PFNET\n# This variant is OK for other parameters (q, r) but for the ones below\n# it is faster to implement the MST-PFNET variant instead.\nq = G.number_of_nodes()-1\nr = 0\n\nP = fast_pfnet(G, q, r)\n\nlist(P.edges(data=True))\n\nThis should return:\n[(0, 2, {'weight': 4.0}),\n (1, 3, {'weight': 3.0}),\n (2, 3, {'weight': 3.0}),\n (3, 4, {'weight': 3.0})]\n\nWhich is similar to what is shown on the website (see the example in the section \"After the application of Pathfinder\").\n" ]
[ 1, 1 ]
[]
[]
[ "algorithm", "networkx", "path_finding", "python" ]
stackoverflow_0070262806_algorithm_networkx_path_finding_python.txt
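A cheap sanity check implied by the discussion in the record above, sketched under the assumption that the minimal_pathfinder function from the question is in scope: with the default r = inf, pruning a positively weighted tree must return the tree itself, since a tree is its own unique MST.

import numpy as np
import networkx as nx

T = nx.balanced_tree(2, 3)  # a 15-node binary tree; any tree works
rng = np.random.default_rng(0)
for u, v, d in T.edges(data=True):
    d['weight'] = int(rng.integers(1, 20))  # strictly positive weights

P = minimal_pathfinder(T)  # function defined in the question above

# Compare edges as frozensets so (u, v) vs (v, u) ordering is ignored.
assert {frozenset(e) for e in P.edges()} == {frozenset(e) for e in T.edges()}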
Q: How to make if else condition in python 2d array I have a 2d array with shape(3,6), then I want to create a condition to check a value of each array. My data array is as follows : array([[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]) if a number in the array is < 10 then its value will become 0. The result I expected: array([[ 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 10, 11, 12], [13, 14, 15, 16, 17, 18]]) The code I created is like this, but why doesn't it work as I expected? FCDataNew = [] a = [ [1,2,3,4,5,6], [7,8,9,10,11,12], [13,14,15,16,17,18] ] a = np.array(a) c = 0 c = np.array(c) for i in range(len(a)): if a[i].all()<10: FCDataNew.append(c) else: FCDataNew.append(a[i]) FCDataNew = np.array(FCDataNew) FCDataNew A: If you want to modify the array in place, use boolean indexing: FCDataNew = np.array([[1,2,3,4,5,6], [7,8,9,10,11,12], [13,14,15,16,17,18], ]) FCDataNew[FCDataNew<10] = 0 For a copy: out = np.where(FCDataNew<10, 0, FCDataNew) Output: array([[ 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 10, 11, 12], [13, 14, 15, 16, 17, 18]]) A: You can just use arr[arr < 10] = 0
How to make if else condition in python 2d array
I have a 2d array with shape(3,6), then I want to create a condition to check a value of each array. My data array is as follows : array([[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]) if a number in the array is < 10 then its value will become 0. The result I expected: array([[ 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 10, 11, 12], [13, 14, 15, 16, 17, 18]]) The code I created is like this, but why doesn't it work as I expected? FCDataNew = [] a = [ [1,2,3,4,5,6], [7,8,9,10,11,12], [13,14,15,16,17,18] ] a = np.array(a) c = 0 c = np.array(c) for i in range(len(a)): if a[i].all()<10: FCDataNew.append(c) else: FCDataNew.append(a[i]) FCDataNew = np.array(FCDataNew) FCDataNew
[ "If you want to modify the array in place, use boolean indexing:\nFCDataNew = np.array([[1,2,3,4,5,6],\n [7,8,9,10,11,12],\n [13,14,15,16,17,18],\n ])\n\nFCDataNew[FCDataNew<10] = 0\n\nFor a copy:\nout = np.where(FCDataNew<10, 0, FCDataNew)\n\nOutput:\narray([[ 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 10, 11, 12],\n [13, 14, 15, 16, 17, 18]])\n\n", "You can just use arr[arr < 10] = 0\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074518909_python.txt
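For completeness, a short sketch of why the original loop in the record above failed: a[i].all() collapses the whole row to a single boolean, so the comparison with 10 never inspects individual elements.

import numpy as np

a = np.array([[1, 2, 3, 4, 5, 6],
              [7, 8, 9, 10, 11, 12]])

# .all() reduces the row to one bool (True here, since no zeros),
# and True < 10 compares 1 < 10 -- always True, whatever the values.
print(a[0].all())       # True
print(a[0].all() < 10)  # True

# Element-wise masking is what the loop intended:
a[a < 10] = 0
print(a)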
Q: How can I use python conditionals to map columns in a dataframe with duplicates in them? I am trying to create a mapping where there are duplicates in certain columns in a dataframe. Here are two examples of dataframes I am working with: issue_status trading_state reason 100 'A0' 100 None 'F' 400 None 100 None 400 None 'SL' 100 'B2' 400 None 'L' 100 None 400 'A6' Here is what I need: 3 Python conditional rules that do the following: when we see the first issue_status of 100 and trading_state of None, map F in the reason column. when we see the second last issue_status of 400 and trading_state of None, map SL in the reason column. when we see the last issue_status of 400 and trading_state of None, map L in the reason column. Here is another example: issue_status trading_state reason 400 None 'SL' 100 'A0' 400 None 'L' 400 'A0' 100 None 'F' 100 None @jezrael, I am getting the following error for your last line of code (market_info_df1['reason'] = s1.combine_first(s2)): @wraps(func) def outer(*args, **kwargs): try: return_value = func(*args, **kwargs) except NormalisedDataError: raise except Exception as exc: original = exc.__class__.__name__ > raise NormalisedDataError(f'{original}: {exc}', original_exception_type=original) from exc SettingWithCopyError: E A value is trying to be set on a copy of a slice from a DataFrame. E Try using .loc[row_indexer,col_indexer] = value instead E E See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy Any idea what is causing this?
How can I use python conditionals to map columns in a dataframe with duplicates in them?
I am trying to create a mapping where there are duplicates in certain columns in a dataframe. Here are two examples of dataframes I am working with: issue_status trading_state reason 100 'A0' 100 None 'F' 400 None 100 None 400 None 'SL' 100 'B2' 400 None 'L' 100 None 400 'A6' Here is what I need: 3 Python conditional rules that do the following: when we see the first issue_status of 100 and trading_state of None, map F in the reason column. when we see the second last issue_status of 400 and trading_state of None, map SL in the reason column. when we see the last issue_status of 400 and trading_state of None, map L in the reason column. Here is another example: issue_status trading_state reason 400 None 'SL' 100 'A0' 400 None 'L' 400 'A0' 100 None 'F' 100 None @jezrael, I am getting the following error for your last line of code (market_info_df1['reason'] = s1.combine_first(s2)): @wraps(func) def outer(*args, **kwargs): try: return_value = func(*args, **kwargs) except NormalisedDataError: raise except Exception as exc: original = exc.__class__.__name__ > raise NormalisedDataError(f'{original}: {exc}', original_exception_type=original) from exc SettingWithCopyError: E A value is trying to be set on a copy of a slice from a DataFrame. E Try using .loc[row_indexer,col_indexer] = value instead E E See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy Any idea what is causing this?
[ "You can filter by 400 and None values for df1, create helper Series with range and mapping last and second last values, for first 100 and None values use Series.duplicated, last join both Series by Series.combine_first:\n#if None is string\n#m1 = df['trading_state'].eq('None')\nm1 = df['trading_state'].isna()\n\nm2 = df['issue_status'].eq(400)\nm3 = df['issue_status'].eq(100)\n\ndf1 = df[m1 & m2].copy()\n\ns1 = pd.Series(range(len(df1), 0, -1), index=df1.index).map({1:'L', 2:'SL'})\ns2 = df.loc[m1 & m3, 'issue_status'].copy().duplicated().map({False:'F'})\n\ndf['reason'] = s1.combine_first(s2)\nprint (df)\n issue_status trading_state reason\n0 100 'A0' NaN\n1 100 None F\n2 400 None NaN\n3 100 None NaN\n4 400 None SL\n5 100 'B2' NaN\n6 400 None L\n7 100 None NaN\n8 400 'A6' NaN\n\nFor second:\ndf['reason'] = s1.combine_first(s2)\nprint (df)\n issue_status trading_state reason\n0 400 None SL\n1 100 'A0' NaN\n2 400 None L\n3 400 'A0' NaN\n4 100 None F\n5 100 None NaN\n\nIf necessary empty strings in reason column use:\ndf['reason'] = df['reason'].fillna('')\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074518334_dataframe_pandas_python.txt
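On the SettingWithCopyError appended to the question above (which the answer does not address): it usually means market_info_df1 is a slice of a larger frame. A hedged sketch of the standard fix follows; the names market_info_df1, s1, and s2 come from the question, not from anything verified here.

# Take an explicit copy before adding the new column, so the assignment
# targets an independent DataFrame rather than a view of another one.
market_info_df1 = market_info_df1.copy()
market_info_df1['reason'] = s1.combine_first(s2)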
Q: Python: How to read and compare text files in 80 separate folders and merge them into one single pandas dataframe based on a condition? I have a folder(user) which contains 80 subfolders (1,2, 3,…, 80) and in each subfolder there are 2 text files (file1 and file2). file1 has 7 columns and file2 has 3 columns both without labels and are not in the same size. First column of file1 is time and the first and second columns of file2 are start_time and end_time. So, what I want to do is to: read file1 and file2 in each subfolder and compare whether the time in the first column of the file1 is in the range of (start_time,end_time) in the file2 or not. if yes, put the value in the third column of file2 in a new column in file1 corresponding to that time which is in the range and convert it to a pandas dataframe. if No, I mean if the time is not in the range (start_time,end_time),put NAN in the new created column. do this for all 2 text files in all folders(1,2,3,...80) so that at the end we will have 80 dataframes convert all the dataframes to one single pandas dataframe. file1 is something like this: 1493974279325 1251148166327 417683620715 250.0 50.847060192 -0.134362797 66.307039766 1493974280326 1252150237681 417683620715 350.0 50.847057006 -0.134359581 105.778622992 1493984243830 1253153644973 417683620715 350.0 50.847054933 -0.134318363 158.247842792 1493984243840 1254156207993 417683620715 350.0 50.847051726 -0.134282482 158.247842792 1493974283335 1255160442889 417683620715 350.0 50.847050123 -0.134264542 158.247842792 1493974284338 1256162859035 417683620715 350.0 50.847049321 -0.134255572 158.247842792 1493974285340 1257165017889 417683620715 350.0 50.847048921 -0.134251086 158.247842792 1493974286343 1258168318930 417683620715 350.0 50.84704872 -0.134248844 158.247842792 1493974287347 1259171307992 417683620715 350.0 50.84704862 -0.134247723 158.247842792 1493974288351 1260175022576 417683620715 350.0 50.84704857 -0.134247162 158.247842792 1493974289352 1261177816325 417683620715 350.0 50.847048545 -0.134246882 158.247842792 1493984243890 1262179719971 417683620715 350.0 50.847048532 -0.134246741 158.247842792 1493984243900 1263182887158 417683620715 350.0 50.847048526 -0.134246671 158.247842792 file2 is something like this: 1488377142416 1488378192537 7 1488379212697 1488380967936 3 1488381613217 1488382948229 4 1488383667626 1488384747965 4 1488385047398 1488385633069 5 1488386203182 1488386877333 5 1488386952452 1488388272444 4 1488388482553 1488389532601 3 1488389863114 1488391843248 4 expected output would be something like this( what I want is a pandas dataframe),in this output file the last column is the new created column and the values come from the third column in file2. 
(this is just an example to show the expected output and times in the first column are not in the range of start time and end time) 1493974279325 1251148166327 417683620715 250.0 50.847060192 -0.134362797 66.307039766 7 1493974280326 1252150237681 417683620715 350.0 50.847057006 -0.134359581 105.778622992 7 1493984243830 1253153644973 417683620715 350.0 50.847054933 -0.134318363 158.247842792 7 1493984243840 1254156207993 417683620715 350.0 50.847051726 -0.134282482 158.247842792 4 1493974283335 1255160442889 417683620715 350.0 50.847050123 -0.134264542 158.247842792 4 1493974284338 1256162859035 417683620715 350.0 50.847049321 -0.134255572 158.247842792 4 1493974285340 1257165017889 417683620715 350.0 50.847048921 -0.134251086 158.247842792 4 1493974286343 1258168318930 417683620715 350.0 50.84704872 -0.134248844 158.247842792 4 1493974287347 1259171307992 417683620715 350.0 50.84704862 -0.134247723 158.247842792 4 1493974288351 1260175022576 417683620715 350.0 50.84704857 -0.134247162 158.247842792 3 1493974289352 1261177816325 417683620715 350.0 50.847048545 -0.134246882 158.247842792 3 1493984243890 1262179719971 417683620715 350.0 50.847048532 -0.134246741 158.247842792 NAN 1493984243900 1263182887158 417683620715 350.0 50.847048526 -0.134246671 158.247842792 NAN what I have done: #1 def read_file1_data(path): location=pd.read_csv(path,delimiter='\t',header=None,names=['location']) location=location['location'].str.split(expand=True) location.columns=['Timestamp','Ignore2','Ignore3','accuracy(m)','latitude','longitude','altitude'] location.drop(['Ignore2','Ignore3','accuracy(m)','altitude'],axis=1,inplace=True) location['Timestamp'] = location['Timestamp'].astype('int64') return location #2 def read_file2(path): labels = pd.read_csv(path, skiprows=0, header=None, infer_datetime_format=True, delim_whitespace=True) # for clarity rename columns labels.columns = ['start_time', 'end_time', 'label'] return labels #3 def apply_labels(location, labels): indices = labels['start_time'].searchsorted(location['Timestamp'], side='right') - 1 no_label = (indices < 0) | (location['Timestamp'].values >= labels['end_time'].iloc[indices].values) location['label'] = labels['label'].iloc[indices].values location['label'][no_label] = np.NaN #4 def read_user(user_folder): labels = None location_files = glob.glob(os.path.join(user_folder,'Hips_Location.txt')) df = pd.concat([read_file1_data(f) for f in location_files]) labels_file = os.path.join(user_folder, 'labels_track_main.txt') if os.path.exists(labels_file): labels = read_file2(labels_file) apply_labels(df, labels) else: df['label'] = np.NAN return df def read_all_users(folder): subfolders = os.listdir(folder) dfs = [] for i, sf in enumerate(subfolders): df = read_user(os.path.join(folder,sf)) dfs.append(df) return pd.concat(dfs) #final dataframe DF = read_all_users('/content/drive/MyDrive/Sussex Trajectories/Data') However, it returns jus "NAN" in the new created column: Timestamp latitude longitude label 0 1496127155569 52.026602082620364 0.964491661294289 NaN 1 1496127157333 52.026602287 0.964491665 NaN 2 1496127158334 52.026603335 0.964496445 NaN 3 1496127159336 52.026602658 0.964503625 NaN 4 1496127160340 52.026600765 0.964518915 NaN ... ... ... ... ... 33156 1496349577578 50.846888356 -0.133469528 NaN 33157 1496349578581 50.84689128 -0.133483904 NaN 33158 1496349579583 50.84689199 -0.133497738 NaN 33159 1496349580587 50.846893418 -0.133511131 NaN 33160 1496349581590 50.846894132 -0.133517828 NaN Any suggestion would be greatly appreciated. 
A: Duplicate of this post. Short answer : give the same name, say ts, to the timestamp column in both dataframes, then use merge method : pd.merge(df1, df2, on='ts', how='outer') See also the documentation of merge method.
Python: How to read and compare text files in 80 separate folders and merge them into one single pandas dataframe based on a condition?
I have a folder(user) which contains 80 subfolders (1,2, 3,…, 80) and in each subfolder there are 2 text files (file1 and file2). file1 has 7 columns and file2 has 3 columns both without labels and are not in the same size. First column of file1 is time and the first and second columns of file2 are start_time and end_time. So, what I want to do is to: read file1 and file2 in each subfolder and compare whether the time in the first column of the file1 is in the range of (start_time,end_time) in the file2 or not. if yes, put the value in the third column of file2 in a new column in file1 corresponding to that time which is in the range and convert it to a pandas dataframe. if No, I mean if the time is not in the range (start_time,end_time),put NAN in the new created column. do this for all 2 text files in all folders(1,2,3,...80) so that at the end we will have 80 dataframes convert all the dataframes to one single pandas dataframe. file1 is something like this: 1493974279325 1251148166327 417683620715 250.0 50.847060192 -0.134362797 66.307039766 1493974280326 1252150237681 417683620715 350.0 50.847057006 -0.134359581 105.778622992 1493984243830 1253153644973 417683620715 350.0 50.847054933 -0.134318363 158.247842792 1493984243840 1254156207993 417683620715 350.0 50.847051726 -0.134282482 158.247842792 1493974283335 1255160442889 417683620715 350.0 50.847050123 -0.134264542 158.247842792 1493974284338 1256162859035 417683620715 350.0 50.847049321 -0.134255572 158.247842792 1493974285340 1257165017889 417683620715 350.0 50.847048921 -0.134251086 158.247842792 1493974286343 1258168318930 417683620715 350.0 50.84704872 -0.134248844 158.247842792 1493974287347 1259171307992 417683620715 350.0 50.84704862 -0.134247723 158.247842792 1493974288351 1260175022576 417683620715 350.0 50.84704857 -0.134247162 158.247842792 1493974289352 1261177816325 417683620715 350.0 50.847048545 -0.134246882 158.247842792 1493984243890 1262179719971 417683620715 350.0 50.847048532 -0.134246741 158.247842792 1493984243900 1263182887158 417683620715 350.0 50.847048526 -0.134246671 158.247842792 file2 is something like this: 1488377142416 1488378192537 7 1488379212697 1488380967936 3 1488381613217 1488382948229 4 1488383667626 1488384747965 4 1488385047398 1488385633069 5 1488386203182 1488386877333 5 1488386952452 1488388272444 4 1488388482553 1488389532601 3 1488389863114 1488391843248 4 expected output would be something like this( what I want is a pandas dataframe),in this output file the last column is the new created column and the values come from the third column in file2. 
(this is just an example to show the expected output and times in the first column are not in the range of start time and end time) 1493974279325 1251148166327 417683620715 250.0 50.847060192 -0.134362797 66.307039766 7 1493974280326 1252150237681 417683620715 350.0 50.847057006 -0.134359581 105.778622992 7 1493984243830 1253153644973 417683620715 350.0 50.847054933 -0.134318363 158.247842792 7 1493984243840 1254156207993 417683620715 350.0 50.847051726 -0.134282482 158.247842792 4 1493974283335 1255160442889 417683620715 350.0 50.847050123 -0.134264542 158.247842792 4 1493974284338 1256162859035 417683620715 350.0 50.847049321 -0.134255572 158.247842792 4 1493974285340 1257165017889 417683620715 350.0 50.847048921 -0.134251086 158.247842792 4 1493974286343 1258168318930 417683620715 350.0 50.84704872 -0.134248844 158.247842792 4 1493974287347 1259171307992 417683620715 350.0 50.84704862 -0.134247723 158.247842792 4 1493974288351 1260175022576 417683620715 350.0 50.84704857 -0.134247162 158.247842792 3 1493974289352 1261177816325 417683620715 350.0 50.847048545 -0.134246882 158.247842792 3 1493984243890 1262179719971 417683620715 350.0 50.847048532 -0.134246741 158.247842792 NAN 1493984243900 1263182887158 417683620715 350.0 50.847048526 -0.134246671 158.247842792 NAN what I have done: #1 def read_file1_data(path): location=pd.read_csv(path,delimiter='\t',header=None,names=['location']) location=location['location'].str.split(expand=True) location.columns=['Timestamp','Ignore2','Ignore3','accuracy(m)','latitude','longitude','altitude'] location.drop(['Ignore2','Ignore3','accuracy(m)','altitude'],axis=1,inplace=True) location['Timestamp'] = location['Timestamp'].astype('int64') return location #2 def read_file2(path): labels = pd.read_csv(path, skiprows=0, header=None, infer_datetime_format=True, delim_whitespace=True) # for clarity rename columns labels.columns = ['start_time', 'end_time', 'label'] return labels #3 def apply_labels(location, labels): indices = labels['start_time'].searchsorted(location['Timestamp'], side='right') - 1 no_label = (indices < 0) | (location['Timestamp'].values >= labels['end_time'].iloc[indices].values) location['label'] = labels['label'].iloc[indices].values location['label'][no_label] = np.NaN #4 def read_user(user_folder): labels = None location_files = glob.glob(os.path.join(user_folder,'Hips_Location.txt')) df = pd.concat([read_file1_data(f) for f in location_files]) labels_file = os.path.join(user_folder, 'labels_track_main.txt') if os.path.exists(labels_file): labels = read_file2(labels_file) apply_labels(df, labels) else: df['label'] = np.NAN return df def read_all_users(folder): subfolders = os.listdir(folder) dfs = [] for i, sf in enumerate(subfolders): df = read_user(os.path.join(folder,sf)) dfs.append(df) return pd.concat(dfs) #final dataframe DF = read_all_users('/content/drive/MyDrive/Sussex Trajectories/Data') However, it returns jus "NAN" in the new created column: Timestamp latitude longitude label 0 1496127155569 52.026602082620364 0.964491661294289 NaN 1 1496127157333 52.026602287 0.964491665 NaN 2 1496127158334 52.026603335 0.964496445 NaN 3 1496127159336 52.026602658 0.964503625 NaN 4 1496127160340 52.026600765 0.964518915 NaN ... ... ... ... ... 33156 1496349577578 50.846888356 -0.133469528 NaN 33157 1496349578581 50.84689128 -0.133483904 NaN 33158 1496349579583 50.84689199 -0.133497738 NaN 33159 1496349580587 50.846893418 -0.133511131 NaN 33160 1496349581590 50.846894132 -0.133517828 NaN Any suggestion would be greatly appreciated.
[ "Duplicate of this post. Short answer : give the same name, say ts, to the timestamp column in both dataframes, then use merge method :\npd.merge(df1, df2, on='ts', how='outer')\n\nSee also the documentation of merge method.\n" ]
[ 1 ]
[]
[]
[ "binary_search", "file_io", "python", "subdirectory", "text_files" ]
stackoverflow_0074517776_binary_search_file_io_python_subdirectory_text_files.txt
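The answer in the record above suggests a plain merge, but the question is really an interval lookup (start_time <= Timestamp < end_time). A sketch of how pandas.merge_asof could express that, assuming the location/labels frames produced by read_file1_data and read_file2 from the question:

import numpy as np
import pandas as pd

# merge_asof matches each Timestamp to the most recent start_time <= it
# (direction='backward' is the default); both sides must be sorted.
location = location.sort_values('Timestamp')
labels = labels.sort_values('start_time')
merged = pd.merge_asof(location, labels,
                       left_on='Timestamp', right_on='start_time')

# Timestamps that fall at or past their interval's end_time get NaN,
# matching the no_label condition in the question's apply_labels.
merged.loc[merged['Timestamp'] >= merged['end_time'], 'label'] = np.nan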
Q: __pycache__ showing up in .git/refs/remotes/origin I had an issue where an Azure DevOps pipeline stopped working, and it turned out to be because there was a ref .git/refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc which was "broken" (and clearly shouldn't have existed in the first place). So I logged into the server, deleted that __pycache__ directory, and the pipeline started working again as expected. Now, I just tried switching branches on my local, and I got the same error: $ git fetch --all --prune Fetching origin error: cannot lock ref 'refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc': unable to resolve reference 'refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc': reference broken error: could not remove reference refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc I'm quite puzzled at how this came to be. Has anyone seen this before? Is this potentially an issue where someone accidentally created a new branch like git checkout -b feature/__pycache__/app.cpython-310.pyc? Something else? A: I'm quite puzzled at how this came to be. Git stores names—branch names, tag names, remote-tracking names, and the like—in a key-value database where the full name, e.g., refs/heads/main, is the key, and the value is a hash ID. OK, but what does this have to do with anything? Well, the actual implementation of this key-value database is pretty sleazy, at the moment: sometimes the name-value pair consist of a line (perhaps with an auxiliary line added) in a file named .git/packed-refs; sometimes the value is stored in a plain-text file (no auxiliary line added) that is stored in a directory within .git/refs, with the directory name made up of the components of the reference (with the redundant refs/ elided). Sometimes it's even in both locations, although this means that the "unpacked" file ref overrides the packed one as it's presumed to be newer. So this means that if a branch named feature/foo exists, .git/refs/heads/feature/ might exist. Or it might not! OK, but so what? Well, if that directory does exist and you run a Python program and Python loads a file named app.py and byte-compiles it to a .pyc file (and you're using Python3), the python byte-compiler might write the .pyc file to a file named __pycache__/app.cpython-310.pyc.1 Python will then use this byte-compiled file to load and run things. But once Python has done that, Git will think that .git/refs/heads/feature/__pycache__/app.cpython-310.pyc is a valid ref and therefore there is a branch named feature/__pycache__/app.cpython-310.pyc. Unfortunately, its hash ID is garbage. The same sort of thing can happen with a remote-tracking name: the only difference is that the directory in question will begin with .git/refs/remotes/origin/feature/. In both cases, Git thinks the name is valid, but the value—the hash ID—is bogus. This is what causes the failure to "lock" the "broken" reference.2 The real question, then, is: What caused a running Python program to drop a __pycache__ file into a subdirectory of your Git repository? The location of the __pycache__ directory is supposed to be the same as the location of the loaded .py file, which would suggest that something wrote an app.py file inside this internal Git repository .git/refs/ directory, which would be bad (programs should not do that sort of thing: they should generate their temporary files in private temporary directories or /tmp or similar). Other than pointing to What is __pycache__? and If Python is interpreted, what are .pyc files? 
I must leave this mystery unsolved at this point, since I have no idea what created this app.py file in the first place. 1The compiled file may end with .pyo if byte-code-optimization is turned on. Python2 writes these files next to the .py files, rather than in a __pycache__ directory. The cpython-310 part in the middle means you're using CPython version 3.10; this version number insertion doesn't occur in some (older) versions of Python. 2The act of "locking" a ref in this case consists of creating a file whose name ends in .lock and is otherwise the same as the full ref file name. This file will be used to hold the new value, and an atomic rename operation will be used to swap the file into place to update and unlock the ref. All of this depends heavily on POSIX file semantics, which is one of the numerous reasons one should not put a .git repository into cloud-managed software folders, which don't obey POSIX file semantics. A: Figured I should update with an answer: I had a script for validating commits, testing whether files compiled and doing various linty things. This script was apparently iterating over the .git directory and trying to compile a (perhaps poorly named) refs/heads/feature/app.py branch. Fixed by deleting the __pycache__ directory and ensuring the validation script skipped over the .git directory :)
__pycache__ showing up in .git/refs/remotes/origin
I had an issue where an Azure DevOps pipeline stopped working, and it turned out to be because there was a ref .git/refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc which was "broken" (and clearly shouldn't have existed in the first place). So I logged into the server, deleted that __pycache__ directory, and the pipeline started working again as expected. Now, I just tried switching branches on my local, and I got the same error: $ git fetch --all --prune Fetching origin error: cannot lock ref 'refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc': unable to resolve reference 'refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc': reference broken error: could not remove reference refs/remotes/origin/feature/__pycache__/app.cpython-310.pyc I'm quite puzzled at how this came to be. Has anyone seen this before? Is this potentially an issue where someone accidentally created a new branch like git checkout -b feature/__pycache__/app.cpython-310.pyc? Something else?
[ "\nI'm quite puzzled at how this came to be.\n\nGit stores names—branch names, tag names, remote-tracking names, and the like—in a key-value database where the full name, e.g., refs/heads/main, is the key, and the value is a hash ID. OK, but what does this have to do with anything? Well, the actual implementation of this key-value database is pretty sleazy, at the moment:\n\nsometimes the name-value pair consist of a line (perhaps with an auxiliary line added) in a file named .git/packed-refs;\nsometimes the value is stored in a plain-text file (no auxiliary line added) that is stored in a directory within .git/refs, with the directory name made up of the components of the reference (with the redundant refs/ elided).\n\nSometimes it's even in both locations, although this means that the \"unpacked\" file ref overrides the packed one as it's presumed to be newer.\nSo this means that if a branch named feature/foo exists, .git/refs/heads/feature/ might exist. Or it might not!\nOK, but so what? Well, if that directory does exist and you run a Python program and Python loads a file named app.py and byte-compiles it to a .pyc file (and you're using Python3), the python byte-compiler might write the .pyc file to a file named __pycache__/app.cpython-310.pyc.1 Python will then use this byte-compiled file to load and run things.\nBut once Python has done that, Git will think that .git/refs/heads/feature/__pycache__/app.cpython-310.pyc is a valid ref and therefore there is a branch named feature/__pycache__/app.cpython-310.pyc. Unfortunately, its hash ID is garbage.\nThe same sort of thing can happen with a remote-tracking name: the only difference is that the directory in question will begin with .git/refs/remotes/origin/feature/. In both cases, Git thinks the name is valid, but the value—the hash ID—is bogus. This is what causes the failure to \"lock\" the \"broken\" reference.2\nThe real question, then, is: What caused a running Python program to drop a __pycache__ file into a subdirectory of your Git repository? The location of the __pycache__ directory is supposed to be the same as the location of the loaded .py file, which would suggest that something wrote an app.py file inside this internal Git repository .git/refs/ directory, which would be bad (programs should not do that sort of thing: they should generate their temporary files in private temporary directories or /tmp or similar).\nOther than pointing to What is __pycache__? and If Python is interpreted, what are .pyc files? I must leave this mystery unsolved at this point, since I have no idea what created this app.py file in the first place.\n\n1The compiled file may end with .pyo if byte-code-optimization is turned on. Python2 writes these files next to the .py files, rather than in a __pycache__ directory. The cpython-310 part in the middle means you're using CPython version 3.10; this version number insertion doesn't occur in some (older) versions of Python.\n2The act of \"locking\" a ref in this case consists of creating a file whose name ends in .lock and is otherwise the same as the full ref file name. This file will be used to hold the new value, and an atomic rename operation will be used to swap the file into place to update and unlock the ref. 
All of this depends heavily on POSIX file semantics, which is one of the numerous reasons one should not put a .git repository into cloud-managed software folders, which don't obey POSIX file semantics.\n", "Figured I should update with an answer:\nI had a script for validating commits, testing whether files compiled and doing various linty things. This script was apparently iterating over the .git directory and trying to compile a (perhaps poorly named) refs/heads/feature/app.py branch. Fixed by deleting the __pycache__ directory and ensuring the validation script skipped over the .git directory :)\n" ]
[ 1, 0 ]
[]
[]
[ "git", "python" ]
stackoverflow_0073556680_git_python.txt
Q: Why does mypy raise truthy-function error for assertion? I inherited a project from a dev who is no longer at the company. He wrote this test: from contextlib import nullcontext as does_not_raise def test_validation_raised_no_error_when_validation_succeeds(): # given given_df = DataFrame(data={"foo": [1, 2], "bar": ["a", "b"]}) given_schema = Schema( [ Column("foo", [InListValidation([1, 2])]), Column("bar", [InListValidation(["a", "b"])]), ] ) # when _validate_schema(given_df, given_schema) # then assert does_not_raise # line 251 This project has mypy configured and it complains about the assertion: test/clients/test_my_client.py:251: error: Function "Type[nullcontext[Any]]" could always be true in boolean context [truthy-function] Found 1 error in 1 file (checked 24 source files) I don't understand what the problem is. The documentation doesn't provide any meaningful advice. I can disable the check like this: assert does_not_raise # type: ignore but I'd rather understand the problem and address it properly. For reference, here is the mypy config: [mypy] python_version = 3.8 warn_return_any = True warn_unused_configs = True ignore_missing_imports = True A: The test intends to check that the # given and # when parts of the test case run without raising any exception. The # then part is probably only there to satisfy the given-when-then pattern. As mypy says, the line doesn't do anything, it is functionally equivalent to assert bool(some_existing_function_name) which is equivalent to assert True, which is functionally equivalent to just having a comment. Note that does_not_raise is a function, but it's not called. Just naming a function in an assert (or if) always returns True. You can keep silencing the mypy warning as you do now, replace it with assert True, assert "if this line runs, the previous parts didn't raise any exception", or just a comment – they will all be functionally equivalent.
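For contrast, a small sketch of what the test presumably meant to do — actually entering the nullcontext so that the call under test runs inside it (the _validate_schema call and the given_df/given_schema values are the ones from the post; this is an illustration, not the project's actual code):

from contextlib import nullcontext as does_not_raise

def test_validation_raises_no_error_when_validation_succeeds():
    # If _validate_schema raises, the exception propagates and the test
    # fails; this makes the "then" step meaningful, unlike a bare
    # `assert does_not_raise`, which is always truthy.
    with does_not_raise():
        _validate_schema(given_df, given_schema)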
Why does mypy raise truthy-function error for assertion?
I inherited a project from a dev who is no longer at the company. He wrote this test: from contextlib import nullcontext as does_not_raise def test_validation_raised_no_error_when_validation_succeeds(): # given given_df = DataFrame(data={"foo": [1, 2], "bar": ["a", "b"]}) given_schema = Schema( [ Column("foo", [InListValidation([1, 2])]), Column("bar", [InListValidation(["a", "b"])]), ] ) # when _validate_schema(given_df, given_schema) # then assert does_not_raise # line 251 This project has mypy configured and it complains about the assertion: test/clients/test_my_client.py:251: error: Function "Type[nullcontext[Any]]" could always be true in boolean context [truthy-function] Found 1 error in 1 file (checked 24 source files) I don't understand what the problem is. The documentation doesn't provide any meaningful advice. I can disable the check like this: assert does_not_raise # type: ignore but I'd rather understand the problem and address it properly. For reference, here is the mypy config: [mypy] python_version = 3.8 warn_return_any = True warn_unused_configs = True ignore_missing_imports = True
[ "The test intends to check that the # given and # when parts of the test case run without raising any exception. The # then part is probably only there to satisfy the given-when-then pattern. As mypy says, the line doesn't do anything, it is functionally equivalent to assert bool(some_existing_function_name) which is equivalent to assert True, which is functionally equivalent to just having a comment.\nNote that does_not_raise is a function, but it's not called. Just naming a function in an assert (or if) always returns True.\nYou can keep silencing the mypy warning as you do now, replace it with assert True, assert \"if this line runs, the previous parts didn't raise any exception\", or just a comment – they will all be functionally equivalent.\n" ]
[ 1 ]
[]
[]
[ "mypy", "python" ]
stackoverflow_0074518978_mypy_python.txt
Q: Getting output 4 instead of 1 Write a Python code to take one integer as input and store it in a variable namely, myNum. The output is a summation of digits in the tens and the hundreds places. Assume that you will never enter 1-digit or 2-digit integer as an input. (Hint, you have to use modulo and floor division operators, and few variables.) Sample 1 Enter a number: 103 The result is 1 ` def getSum(n): sum = 0 while (n != 0): sum = sum + (n % 10) n = n//10 return sum n = 103 print(getSum(n)) ` A: One option if you only want to handle hundreds + tens: def getSum(n): n = n//10 # first get rid of units n, total = divmod(n, 10) # get tens total += n%10 # get hundreds return total getSum(1234) # 5 getSum(234) # 5
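A worked sketch of the hinted modulo / floor-division approach (illustrative only, not the assignment's reference solution): floor division by a power of ten shifts digits to the right, and modulo 10 keeps the last remaining digit.

def tens_plus_hundreds(my_num):
    tens = (my_num // 10) % 10       # 103 // 10 = 10, 10 % 10 = 0
    hundreds = (my_num // 100) % 10  # 103 // 100 = 1, 1 % 10 = 1
    return tens + hundreds

print(tens_plus_hundreds(103))   # 1
print(tens_plus_hundreds(1234))  # 3 + 2 = 5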
Getting output 4 instead of 1
Write a Python code to take one integer as input and store it in a variable namely, myNum. The output is a summation of digits in the tens and the hundreds places. Assume that you will never enter 1-digit or 2-digit integer as an input. (Hint, you have to use modulo and floor division operators, and few variables.) Sample 1 Enter a number: 103 The result is 1 ` def getSum(n): sum = 0 while (n != 0): sum = sum + (n % 10) n = n//10 return sum n = 103 print(getSum(n)) `
[ "One option if you only want to handle hundreds + tens:\ndef getSum(n):\n n = n//10 # first get rid of units\n n, total = divmod(n, 10) # get tens\n total += n%10 # get hundreds\n return total\n\ngetSum(1234)\n# 5\n\ngetSum(234)\n# 5\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074519042_python.txt
Q: Best way to find the months between two dates I have the need to be able to accurately find the months between two dates in python. I have a solution that works but its not very good (as in elegant) or fast. dateRange = [datetime.strptime(dateRanges[0], "%Y-%m-%d"), datetime.strptime(dateRanges[1], "%Y-%m-%d")] months = [] tmpTime = dateRange[0] oneWeek = timedelta(weeks=1) tmpTime = tmpTime.replace(day=1) dateRange[0] = tmpTime dateRange[1] = dateRange[1].replace(day=1) lastMonth = tmpTime.month months.append(tmpTime) while tmpTime < dateRange[1]: if lastMonth != 12: while tmpTime.month <= lastMonth: tmpTime += oneWeek tmpTime = tmpTime.replace(day=1) months.append(tmpTime) lastMonth = tmpTime.month else: while tmpTime.month >= lastMonth: tmpTime += oneWeek tmpTime = tmpTime.replace(day=1) months.append(tmpTime) lastMonth = tmpTime.month So just to explain, what I'm doing here is taking the two dates and converting them from iso format into python datetime objects. Then I loop through adding a week to the start datetime object and check if the numerical value of the month is greater (unless the month is December then it checks if the date is less), If the value is greater I append it to the list of months and keep looping through until I get to my end date. It works perfectly it just doesn't seem like a good way of doing it... A: Start by defining some test cases, then you will see that the function is very simple and needs no loops from datetime import datetime def diff_month(d1, d2): return (d1.year - d2.year) * 12 + d1.month - d2.month assert diff_month(datetime(2010,10,1), datetime(2010,9,1)) == 1 assert diff_month(datetime(2010,10,1), datetime(2009,10,1)) == 12 assert diff_month(datetime(2010,10,1), datetime(2009,11,1)) == 11 assert diff_month(datetime(2010,10,1), datetime(2009,8,1)) == 14 You should add some test cases to your question, as there are lots of potential corner cases to cover - there is more than one way to define the number of months between two dates. A: One liner to find a list of datetimes, incremented by month, between two dates. 
import datetime from dateutil.rrule import rrule, MONTHLY strt_dt = datetime.date(2001,1,1) end_dt = datetime.date(2005,6,1) dates = [dt for dt in rrule(MONTHLY, dtstart=strt_dt, until=end_dt)] A: This worked for me - from datetime import datetime from dateutil import relativedelta date1 = datetime.strptime('2011-08-15 12:00:00', '%Y-%m-%d %H:%M:%S') date2 = datetime.strptime('2012-02-15', '%Y-%m-%d') r = relativedelta.relativedelta(date2, date1) r.months + (12*r.years) A: You can easily calculate this using rrule from dateutil module: from dateutil import rrule from datetime import date print(list(rrule.rrule(rrule.MONTHLY, dtstart=date(2013, 11, 1), until=date(2014, 2, 1)))) will give you: [datetime.datetime(2013, 11, 1, 0, 0), datetime.datetime(2013, 12, 1, 0, 0), datetime.datetime(2014, 1, 1, 0, 0), datetime.datetime(2014, 2, 1, 0, 0)] A: from dateutil import relativedelta r = relativedelta.relativedelta(date1, date2) months_difference = (r.years * 12) + r.months A: Get the ending month (relative to the year and month of the start month ex: 2011 January = 13 if your start date starts on 2010 Oct) and then generate the datetimes beginning the start month and that end month like so: dt1, dt2 = dateRange start_month=dt1.month end_months=(dt2.year-dt1.year)*12 + dt2.month+1 dates=[datetime.datetime(year=yr, month=mn, day=1) for (yr, mn) in ( ((m - 1) / 12 + dt1.year, (m - 1) % 12 + 1) for m in range(start_month, end_months) )] if both dates are on the same year, it could also be simply written as: dates=[datetime.datetime(year=dt1.year, month=mn, day=1) for mn in range(dt1.month, dt2.month + 1)] A: My simple solution: import datetime def months(d1, d2): return d1.month - d2.month + 12*(d1.year - d2.year) d1 = datetime.datetime(2009, 9, 26) d2 = datetime.datetime(2019, 9, 26) print(months(d1, d2)) A: This post nails it! Use dateutil.relativedelta. from datetime import datetime from dateutil import relativedelta date1 = datetime.strptime(str('2011-08-15 12:00:00'), '%Y-%m-%d %H:%M:%S') date2 = datetime.strptime(str('2012-02-15'), '%Y-%m-%d') r = relativedelta.relativedelta(date2, date1) r.months A: Update 2018-04-20: it seems that OP @Joshkunz was asking for finding which months are between two dates, instead of "how many months" are between two dates. So I am not sure why @JohnLaRooy is upvoted for more than 100 times. @Joshkunz indicated in the comment under the original question he wanted the actual dates [or the months], instead of finding the total number of months. So it appeared the question wanted, for between two dates 2018-04-11 to 2018-06-01 Apr 2018, May 2018, June 2018 And what if it is between 2014-04-11 to 2018-06-01? Then the answer would be Apr 2014, May 2014, ..., Dec 2014, Jan 2015, ..., Jan 2018, ..., June 2018 So that's why I had the following pseudo code many years ago. It merely suggested using the two months as end points and loop through them, incrementing by one month at a time. @Joshkunz mentioned he wanted the "months" and he also mentioned he wanted the "dates", without knowing exactly, it was difficult to write the exact code, but the idea is to use one simple loop to loop through the end points, and incrementing one month at a time. The answer 8 years ago in 2010: If adding by a week, then it will approximately do work 4.35 times the work as needed. Why not just: 1. get start date in array of integer, set it to i: [2008, 3, 12], and change it to [2008, 3, 1] 2. get end date in array: [2010, 10, 26] 3. 
add the date to your result by parsing i increment the month in i if month is >= 13, then set it to 1, and increment the year by 1 until either the year in i is > year in end_date, or (year in i == year in end_date and month in i > month in end_date) just pseudo code for now, haven't tested, but I think the idea along the same line will work. A: Define a "month" as 1/12 year, then do this: def month_diff(d1, d2): """Return the number of months between d1 and d2, such that d2 + month_diff(d1, d2) == d1 """ diff = (12 * d1.year + d1.month) - (12 * d2.year + d2.month) return diff You might try to define a month as "a period of either 29, 28, 30 or 31 days (depending on the year)". But if you do that, you have an additional problem to solve. While it's usually clear that June 15th + 1 month should be July 15th, it's usually not clear if January 30th + 1 month is in February or March. In the latter case, you may be compelled to compute the date as February 30th, then "correct" it to March 2nd. But when you do that, you'll find that March 2nd - 1 month is clearly February 2nd. Ergo, reductio ad absurdum (this operation is not well defined). A: Here's how to do this with Pandas FWIW: import pandas as pd pd.date_range("1990/04/03", "2014/12/31", freq="MS") DatetimeIndex(['1990-05-01', '1990-06-01', '1990-07-01', '1990-08-01', '1990-09-01', '1990-10-01', '1990-11-01', '1990-12-01', '1991-01-01', '1991-02-01', ... '2014-03-01', '2014-04-01', '2014-05-01', '2014-06-01', '2014-07-01', '2014-08-01', '2014-09-01', '2014-10-01', '2014-11-01', '2014-12-01'], dtype='datetime64[ns]', length=296, freq='MS') Notice it starts with the month after the given start date. A: Many people have already given you good answers to solve this but I have not read any using list comprehension so I give you what I used for a similar use case: def compute_months(first_date, second_date): year1, month1, year2, month2 = map( int, (first_date[:4], first_date[5:7], second_date[:4], second_date[5:7]) ) return [ '{:0>4}-{:0>2}'.format(year, month) for year in range(year1, year2 + 1) for month in range(month1 if year == year1 else 1, month2 + 1 if year == year2 else 13) ] >>> first_date = "2016-05" >>> second_date = "2017-11" >>> compute_months(first_date, second_date) ['2016-05', '2016-06', '2016-07', '2016-08', '2016-09', '2016-10', '2016-11', '2016-12', '2017-01', '2017-02', '2017-03', '2017-04', '2017-05', '2017-06', '2017-07', '2017-08', '2017-09', '2017-10', '2017-11'] A: There is a simple solution based on 360 day years, where all months have 30 days. It fits most use cases where, given two dates, you need to calculate the number of full months plus the remaining days.
from datetime import datetime, timedelta def months_between(start_date, end_date): #Add 1 day to end date to solve different last days of month s1, e1 = start_date , end_date + timedelta(days=1) #Convert to 360 days s360 = (s1.year * 12 + s1.month) * 30 + s1.day e360 = (e1.year * 12 + e1.month) * 30 + e1.day #Count days between the two 360 dates and return tuple (months, days) return divmod(e360 - s360, 30) print "Counting full and half months" print months_between( datetime(2012, 01, 1), datetime(2012, 03, 31)) #3m print months_between( datetime(2012, 01, 1), datetime(2012, 03, 15)) #2m 15d print months_between( datetime(2012, 01, 16), datetime(2012, 03, 31)) #2m 15d print months_between( datetime(2012, 01, 16), datetime(2012, 03, 15)) #2m print "Adding +1d and -1d to 31 day month" print months_between( datetime(2011, 12, 01), datetime(2011, 12, 31)) #1m 0d print months_between( datetime(2011, 12, 02), datetime(2011, 12, 31)) #-1d => 29d print months_between( datetime(2011, 12, 01), datetime(2011, 12, 30)) #30d => 1m print "Adding +1d and -1d to 29 day month" print months_between( datetime(2012, 02, 01), datetime(2012, 02, 29)) #1m 0d print months_between( datetime(2012, 02, 02), datetime(2012, 02, 29)) #-1d => 29d print months_between( datetime(2012, 02, 01), datetime(2012, 02, 28)) #28d print "Every month has 30 days - 26/M to 5/M+1 always counts 10 days" print months_between( datetime(2011, 02, 26), datetime(2011, 03, 05)) print months_between( datetime(2012, 02, 26), datetime(2012, 03, 05)) print months_between( datetime(2012, 03, 26), datetime(2012, 04, 05)) A: A somewhat prettified solution by @Vin-G. import datetime def monthrange(start, finish): months = (finish.year - start.year) * 12 + finish.month + 1 for i in xrange(start.month, months): year = (i - 1) / 12 + start.year month = (i - 1) % 12 + 1 yield datetime.date(year, month, 1) A: You can also use the arrow library. This is a simple example: from datetime import datetime import arrow start = datetime(2014, 1, 17) end = datetime(2014, 6, 20) for d in arrow.Arrow.range('month', start, end): print d.month, d.format('MMMM') This will print: 1 January 2 February 3 March 4 April 5 May 6 June Hope this helps! A: Get difference in number of days, months and years between two dates. import datetime from dateutil.relativedelta import relativedelta iphead_proc_dt = datetime.datetime.now() new_date = iphead_proc_dt + relativedelta(months=+25, days=+23) # Get Number of Days difference between two dates print((new_date - iphead_proc_dt).days) difference = relativedelta(new_date, iphead_proc_dt) # Get Number of Months difference between two dates print(difference.months + 12 * difference.years) # Get Number of Years difference between two dates print(difference.years) A: Try something like this. It presently includes the month if both dates happen to be in the same month. from datetime import datetime,timedelta def months_between(start,end): months = [] cursor = start while cursor <= end: if cursor.month not in months: months.append(cursor.month) cursor += timedelta(weeks=1) return months Output looks like: >>> start = datetime.now() - timedelta(days=120) >>> end = datetime.now() >>> months_between(start,end) [6, 7, 8, 9, 10] A: You could use python-dateutil.
See Python: Difference of 2 datetimes in months A: just like range function, when month is 13, go to next year def year_month_range(start_date, end_date): ''' start_date: datetime.date(2015, 9, 1) or datetime.datetime end_date: datetime.date(2016, 3, 1) or datetime.datetime return: datetime.date list of 201509, 201510, 201511, 201512, 201601, 201602 ''' start, end = start_date.strftime('%Y%m'), end_date.strftime('%Y%m') assert len(start) == 6 and len(end) == 6 start, end = int(start), int(end) year_month_list = [] while start < end: year, month = divmod(start, 100) if month == 13: start += 88 # 201513 + 88 = 201601 continue year_month_list.append(datetime.date(year, month, 1)) start += 1 return year_month_list example in python shell >>> import datetime >>> s = datetime.date(2015,9,1) >>> e = datetime.date(2016, 3, 1) >>> year_month_range(s, e) [datetime.date(2015, 11, 1), datetime.date(2015, 9, 1), datetime.date(2016, 1, 1), datetime.date(2016, 2, 1), datetime.date(2015, 12, 1), datetime.date(2015, 10, 1)] A: It can be done using datetime.timedelta, where the number of days for skipping to next month can be obtained by calendar.monthrange. monthrange returns weekday (0-6 ~ Mon-Sun) and number of days (28-31) for a given year and month. For example: monthrange(2017, 1) returns (6,31). Here is the script using this logic to iterate between two months. from datetime import timedelta import datetime as dt from calendar import monthrange def month_iterator(start_month, end_month): start_month = dt.datetime.strptime(start_month, '%Y-%m-%d').date().replace(day=1) end_month = dt.datetime.strptime(end_month, '%Y-%m-%d').date().replace(day=1) while start_month <= end_month: yield start_month start_month = start_month + timedelta(days=monthrange(start_month.year, start_month.month)[1]) A: It seems that the answers are unsatisfactory, and I have since used my own code, which is easier to understand: from datetime import datetime from dateutil import relativedelta date1 = datetime.strptime(str('2017-01-01'), '%Y-%m-%d') date2 = datetime.strptime(str('2019-03-19'), '%Y-%m-%d') difference = relativedelta.relativedelta(date2, date1) months = difference.months years = difference.years # add in the number of months (12) for difference in years months += 12 * difference.years months A: from datetime import datetime from dateutil import relativedelta def get_months(d1, d2): date1 = datetime.strptime(str(d1), '%Y-%m-%d') date2 = datetime.strptime(str(d2), '%Y-%m-%d') print (date2, date1) r = relativedelta.relativedelta(date2, date1) months = r.months + 12 * r.years if r.days > 0: months += 1 print (months) return months assert get_months('2018-08-13','2019-06-19') == 11 assert get_months('2018-01-01','2019-06-19') == 18 assert get_months('2018-07-20','2019-06-19') == 11 assert get_months('2018-07-18','2019-06-19') == 12 assert get_months('2019-03-01','2019-06-19') == 4 assert get_months('2019-03-20','2019-06-19') == 3 assert get_months('2019-01-01','2019-06-19') == 6 assert get_months('2018-09-09','2019-06-19') == 10 A: #This definition gives an array of months between two dates.
import datetime def MonthsBetweenDates(BeginDate, EndDate): firstyearmonths = [mn for mn in range(BeginDate.month, 13)] lastyearmonths = [mn for mn in range(1, EndDate.month+1)] months = [mn for mn in range(1, 13)] numberofyearsbetween = EndDate.year - BeginDate.year - 1 return firstyearmonths + months * numberofyearsbetween + lastyearmonths #example BD = datetime.datetime.strptime("2000-35", '%Y-%j') ED = datetime.datetime.strptime("2004-200", '%Y-%j') MonthsBetweenDates(BD, ED) A: Usually 90 days are NOT 3 months literally, just a reference. So, finally, you need to check if days are bigger than 15 to add +1 to the month counter. Or better, add another elif with a half-month counter. From this other stackoverflow answer I've finally ended up with this: #/usr/bin/env python # -*- coding: utf8 -*- import datetime from datetime import timedelta from dateutil.relativedelta import relativedelta import calendar start_date = datetime.date.today() end_date = start_date + timedelta(days=111) start_month = calendar.month_abbr[int(start_date.strftime("%m"))] print str(start_date) + " to " + str(end_date) months = relativedelta(end_date, start_date).months days = relativedelta(end_date, start_date).days print months, "months", days, "days" if days > 16: months += 1 print "around " + str(months) + " months", "(", for i in range(0, months): print calendar.month_abbr[int(start_date.strftime("%m"))], start_date = start_date + relativedelta(months=1) print ")" Output: 2016-02-29 2016-06-14 3 months 16 days around 4 months ( Feb Mar Apr May ) I've noticed that it doesn't work if you add more days than are left in the current year, and that is unexpected. A: Here is my solution for this: import time def calc_age_months(from_date, to_date): from_date = time.strptime(from_date, "%Y-%m-%d") to_date = time.strptime(to_date, "%Y-%m-%d") age_in_months = (to_date.tm_year - from_date.tm_year)*12 + (to_date.tm_mon - from_date.tm_mon) if to_date.tm_mday < from_date.tm_mday: return age_in_months - 1 else: return age_in_months This will handle some edge cases as well where the difference in months between 31st Dec 2018 and 1st Jan 2019 will be zero (since the difference is only a day). A: Assuming upperDate is always later than lowerDate and both are datetime.date objects: if lowerDate.year == upperDate.year: monthsInBetween = range( lowerDate.month + 1, upperDate.month ) elif upperDate.year > lowerDate.year: monthsInBetween = range( lowerDate.month + 1, 12 ) for year in range( lowerDate.year + 1, upperDate.year ): monthsInBetween.extend( range(1,13) ) monthsInBetween.extend( range( 1, upperDate.month ) ) I haven't tested this thoroughly, but it looks like it should do the trick. A: Try this: dateRange = [datetime.strptime(dateRanges[0], "%Y-%m-%d"), datetime.strptime(dateRanges[1], "%Y-%m-%d")] delta_time = max(dateRange) - min(dateRange) #Need to use min(dateRange).month to account for different length month #Note that timedelta returns a number of days delta_datetime = (datetime(1, min(dateRange).month, 1) + delta_time - timedelta(days=1)) #min y/m/d are 1 months = ((delta_datetime.year - 1) * 12 + delta_datetime.month - min(dateRange).month) print months Shouldn't matter what order you input the dates, and it takes into account the difference in month lengths.
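The day-of-month adjustment that several of the answers above apply can be written compactly; a sketch assuming both arguments are datetime.date objects and that a month only counts once its day of month has been reached:

from datetime import date

def full_months_between(start, end):
    # Whole calendar months from start to end; the last month counts
    # only when end.day has reached start.day.
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day < start.day:
        months -= 1
    return months

assert full_months_between(date(2018, 12, 31), date(2019, 1, 1)) == 0
assert full_months_between(date(2019, 1, 15), date(2019, 3, 15)) == 2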
A: Here is a method: import datetime def months_between(start_dt, stop_dt): month_list = [] total_months = 12*(stop_dt.year-start_dt.year)+(stop_dt.month-start_dt.month)+1 if total_months > 0: month_list=[ datetime.date(start_dt.year+int((start_dt.month+i-1)/12), ((start_dt.month-1+i)%12)+1, 1) for i in xrange(0,total_months) ] return month_list This is first computing the total number of months between the two dates, inclusive. Then it creates a list using the first date as the base and performs modulo arithmetic to create the date objects. A: I actually needed to do something pretty similar just now. Ended up writing a function which returns a list of tuples indicating the start and end of each month between two sets of dates so I could write some SQL queries off the back of it for monthly totals of sales etc. I'm sure it can be improved by someone who knows what they're doing but hope it helps... The returned value looks as follows (generating for today - 365 days until today as an example) [ (datetime.date(2013, 5, 1), datetime.date(2013, 5, 31)), (datetime.date(2013, 6, 1), datetime.date(2013, 6, 30)), (datetime.date(2013, 7, 1), datetime.date(2013, 7, 31)), (datetime.date(2013, 8, 1), datetime.date(2013, 8, 31)), (datetime.date(2013, 9, 1), datetime.date(2013, 9, 30)), (datetime.date(2013, 10, 1), datetime.date(2013, 10, 31)), (datetime.date(2013, 11, 1), datetime.date(2013, 11, 30)), (datetime.date(2013, 12, 1), datetime.date(2013, 12, 31)), (datetime.date(2014, 1, 1), datetime.date(2014, 1, 31)), (datetime.date(2014, 2, 1), datetime.date(2014, 2, 28)), (datetime.date(2014, 3, 1), datetime.date(2014, 3, 31)), (datetime.date(2014, 4, 1), datetime.date(2014, 4, 30)), (datetime.date(2014, 5, 1), datetime.date(2014, 5, 31))] Code as follows (has some debug stuff which can be removed): #!/usr/bin/env python import datetime def gen_month_ranges(start_date=None, end_date=None, debug=False): today = datetime.date.today() if not start_date: start_date = datetime.datetime.strptime( "{0}/01/01".format(today.year),"%Y/%m/%d").date() # start of this year if not end_date: end_date = today if debug: print("Start: {0} | End {1}".format(start_date, end_date)) # sense-check if end_date < start_date: print("Error. Start Date of {0} is greater than End Date of {1}?!".format(start_date, end_date)) return None date_ranges = [] # list of tuples (month_start, month_end) current_year = start_date.year current_month = start_date.month while current_year <= end_date.year: next_month = current_month + 1 next_year = current_year if next_month > 12: next_month = 1 next_year = current_year + 1 month_start = datetime.datetime.strptime( "{0}/{1}/01".format(current_year, current_month),"%Y/%m/%d").date() # start of month month_end = datetime.datetime.strptime( "{0}/{1}/01".format(next_year, next_month),"%Y/%m/%d").date() # start of next month month_end = month_end+datetime.timedelta(days=-1) # start of next month less one day range_tuple = (month_start, month_end) if debug: print("Month runs from {0} --> {1}".format( range_tuple[0], range_tuple[1])) date_ranges.append(range_tuple) if current_month == 12: current_month = 1 current_year += 1 if debug: print("End of year encountered, resetting months") else: current_month += 1 if debug: print("Next iteration for {0}-{1}".format( current_year, current_month)) if current_year == end_date.year and current_month > end_date.month: if debug: print("Final month encountered. Terminating loop") break return date_ranges if __name__ == '__main__': print("Running in standalone mode.
Debug set to True") from pprint import pprint pprint(gen_month_ranges(debug=True), indent=4) pprint(gen_month_ranges(start_date=datetime.date.today()+datetime.timedelta(days=-365), debug=True), indent=4) A: Assuming that you wanted to know the "fraction" of the month that dates were in, which I did, then you need to do a bit more work. from datetime import datetime, date import calendar def monthdiff(start_period, end_period, decimal_places = 2): if start_period > end_period: raise Exception('Start is after end') if start_period.year == end_period.year and start_period.month == end_period.month: days_in_month = calendar.monthrange(start_period.year, start_period.month)[1] days_to_charge = end_period.day - start_period.day+1 diff = round(float(days_to_charge)/float(days_in_month), decimal_places) return diff months = 0 # we have a start date within one month and not at the start, and an end date that is not # in the same month as the start date if start_period.day > 1: last_day_in_start_month = calendar.monthrange(start_period.year, start_period.month)[1] days_to_charge = last_day_in_start_month - start_period.day +1 months = months + round(float(days_to_charge)/float(last_day_in_start_month), decimal_places) start_period = datetime(start_period.year, start_period.month+1, 1) last_day_in_last_month = calendar.monthrange(end_period.year, end_period.month)[1] if end_period.day != last_day_in_last_month: # we have lest days in the last month months = months + round(float(end_period.day) / float(last_day_in_last_month), decimal_places) last_day_in_previous_month = calendar.monthrange(end_period.year, end_period.month - 1)[1] end_period = datetime(end_period.year, end_period.month - 1, last_day_in_previous_month) #whatever happens, we now have a period of whole months to calculate the difference between if start_period != end_period: months = months + (end_period.year - start_period.year) * 12 + (end_period.month - start_period.month) + 1 # just counter for any final decimal place manipulation diff = round(months, decimal_places) return diff assert monthdiff(datetime(2015,1,1), datetime(2015,1,31)) == 1 assert monthdiff(datetime(2015,1,1), datetime(2015,02,01)) == 1.04 assert monthdiff(datetime(2014,1,1), datetime(2014,12,31)) == 12 assert monthdiff(datetime(2014,7,1), datetime(2015,06,30)) == 12 assert monthdiff(datetime(2015,1,10), datetime(2015,01,20)) == 0.35 assert monthdiff(datetime(2015,1,10), datetime(2015,02,20)) == 0.71 + 0.71 assert monthdiff(datetime(2015,1,31), datetime(2015,02,01)) == round(1.0/31.0,2) + round(1.0/28.0,2) assert monthdiff(datetime(2013,1,31), datetime(2015,02,01)) == 12*2 + round(1.0/31.0,2) + round(1.0/28.0,2) provides an example that works out the number of months between two dates inclusively, including the fraction of each month that the date is in. This means that you can work out how many months is between 2015-01-20 and 2015-02-14, where the fraction of the date in the month of January is determined by the number of days in January; or equally taking into account that the number of days in February can change form year to year. For my reference, this code is also on github - https://gist.github.com/andrewyager/6b9284a4f1cdb1779b10 A: This works... 
from datetime import datetime as dt from dateutil.relativedelta import relativedelta def number_of_months(d1, d2): months = 0 r = relativedelta(d1,d2) if r.years==0: months = r.months if r.years>=1: months = 12*r.years+r.months return months #example number_of_months(dt(2017,9,1),dt(2016,8,1)) A: from datetime import datetime, date def diff_month(start_date,end_date): qty_month = ((end_date.year - start_date.year) * 12) + (end_date.month - start_date.month) d_days = end_date.day - start_date.day if d_days >= 0: adjust = 0 else: adjust = -1 qty_month += adjust return qty_month diff_month(date.today(), datetime(2019, 8, 24)) #Examples: #diff_month(datetime(2018, 2, 12), datetime(2019, 8, 24)) = 18 #diff_month(datetime(2018, 2, 12), datetime(2018, 8, 10)) = 5 A: This is my way to do this: Start_date = "2000-06-01" End_date = "2001-05-01" month_num = len(pd.date_range(start = Start_date[:7], end = End_date[:7] ,freq='M'))+1 I just use the month to create a date range and calculate the length. A: To get the number of full months between two dates: import datetime def difference_in_months(start, end): if start.year == end.year: months = end.month - start.month else: months = (12 - start.month) + (end.month) if start.day > end.day: months = months - 1 return months A: You can use the below code to get the months between two dates: OrderedDict(((start_date + timedelta(_)).strftime(date_format), None) for _ in xrange((end_date - start_date).days)).keys() where start_date and end_date must be proper dates and date_format is the format in which you want your resulting dates. In your case, date_format will be %b %Y. A: The question is really asking about the total months between 2 dates and not the difference of it. Hence a revisited answer with some extra functionality: from datetime import date, datetime from dateutil.rrule import rrule, MONTHLY def month_get_list(dt_to, dt_from, return_datetime=False, as_month=True): INDEX_MONTH_MAPPING = {1: 'january', 2: 'february', 3: 'march', 4: 'april', 5: 'may', 6: 'june', 7: 'july', 8: 'august', 9: 'september', 10: 'october', 11: 'november', 12: 'december'} if return_datetime: return [dt for dt in rrule(MONTHLY, dtstart=dt_from, until=dt_to)] if as_month: return [INDEX_MONTH_MAPPING[dt.month] for dt in rrule(MONTHLY, dtstart=dt_from, until=dt_to)] return [dt.month for dt in rrule(MONTHLY, dtstart=dt_from, until=dt_to)] month_list = month_get_list(date(2021, 12, 31), date(2021, 1, 1)) total_months = len(month_list) Result month_list = ['january', 'february', 'march', 'april', 'may', 'june', 'july', 'august', 'september', 'october', 'november', 'december'] total_months = 12 With as_month set to False month_list = month_get_list(date(2021, 12, 31), date(2021, 1, 1), as_month=False) # month_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] Returning datetime instead month_list = month_get_list(date(2021, 12, 31), date(2021, 1, 1), return_datetime=True) month_list = [datetime.datetime(2021, 1, 1, 0, 0), datetime.datetime(2021, 2, 1, 0, 0), datetime.datetime(2021, 3, 1, 0, 0), datetime.datetime(2021, 4, 1, 0, 0), datetime.datetime(2021, 5, 1, 0, 0), datetime.datetime(2021, 6, 1, 0, 0), datetime.datetime(2021, 7, 1, 0, 0), datetime.datetime(2021, 8, 1, 0, 0), datetime.datetime(2021, 9, 1, 0, 0), datetime.datetime(2021, 10, 1, 0, 0), datetime.datetime(2021, 11, 1, 0, 0), datetime.datetime(2021, 12, 1, 0, 0)] A: If fractions of a month are important to you, leverage the first day of the next month to work around the number of days each month may have, in a leap year or not: from
datetime import timedelta, datetime from dateutil.relativedelta import relativedelta def month_fraction(this_date): this_date_df=this_date.strftime("%Y-%m-%d") day1_same_month_as_this_date_df=this_date_df[:8]+'01' day1_same_month_as_this_date=datetime.strptime(day1_same_month_as_this_date_df, '%Y-%m-%d').date() next_month_as_this_date=this_date+relativedelta(months=1) next_month_as_this_date_df=next_month_as_this_date.strftime("%Y-%m-%d") day1_next_month_this_date_df=next_month_as_this_date_df[:8]+'01' day1_next_month_this_date=datetime.strptime(day1_next_month_this_date_df, '%Y-%m-%d').date() last_day_same_month_this_date=day1_next_month_this_date-timedelta(days=1) delta_days_from_month_beginning=(this_date-day1_same_month_as_this_date).days delta_days_whole_month=(last_day_same_month_this_date-day1_same_month_as_this_date).days fraction_beginning_of_month=round(delta_days_from_month_beginning/delta_days_whole_month,4) return fraction_beginning_of_month def delta_months_JLR(second_date,first_date): return (second_date.year - first_date.year) * 12 + second_date.month - first_date.month def delta_months_float(first_date,second_date): outgoing_fraction_first_date = month_fraction(first_date) incoming_fraction_second_date = month_fraction(second_date) delta_months=delta_months_JLR(second_date,first_date) #as on John La Rooy’s response months_float=round(delta_months-outgoing_fraction_first_date+incoming_fraction_second_date,4) return months_float first_date_df='2021-12-28' first_date=datetime.strptime(first_date_df, '%Y-%m-%d').date() second_date_df='2022-01-02' second_date=datetime.strptime(second_date_df, '%Y-%m-%d').date() print (delta_months_float(first_date,second_date)) 0.1333 A: Please give a start and an end date, and the below function will find the starting and ending date of every month in the range. $months = $this->getMonthsInRange('2022-03-15', '2022-07-12'); public function getMonthsInRange($startDate, $endDate) { $months = array(); while (strtotime($startDate) <= strtotime($endDate)) { $start_date = date('Y-m-d', strtotime($startDate)); $end_date = date("Y-m-t", strtotime($start_date)); if(strtotime($end_date) >= strtotime($endDate)) { $end_date = $endDate; } $months[] = array( 'start_date' => $start_date, 'end_date' => $end_date ); $startDate = date('01 M Y', strtotime($startDate . '+ 1 month')); } return $months; } A: Works fine for python 3 import datetime start = datetime.datetime.today() end = start + datetime.timedelta(days=365) month_range = [start.date()] diff = end - start months = round(diff.days / 30) temp = start for i in range(months): temp = temp + datetime.timedelta(days=30) if temp <= end: month_range.append(temp.date()) print(month_range)
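For completeness, the list-of-month-starts variant shown in several answers can also be done with the standard library alone; a sketch (not taken from any single answer above):

from datetime import date

def month_starts(start, end):
    # First day of every month from start's month through end's month.
    result = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        result.append(date(y, m, 1))
        y, m = (y, m + 1) if m < 12 else (y + 1, 1)
    return result

print(month_starts(date(2013, 11, 5), date(2014, 2, 10)))
# [datetime.date(2013, 11, 1), datetime.date(2013, 12, 1),
#  datetime.date(2014, 1, 1), datetime.date(2014, 2, 1)]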
Best way to find the months between two dates
I have the need to be able to accurately find the months between two dates in python. I have a solution that works but its not very good (as in elegant) or fast. dateRange = [datetime.strptime(dateRanges[0], "%Y-%m-%d"), datetime.strptime(dateRanges[1], "%Y-%m-%d")] months = [] tmpTime = dateRange[0] oneWeek = timedelta(weeks=1) tmpTime = tmpTime.replace(day=1) dateRange[0] = tmpTime dateRange[1] = dateRange[1].replace(day=1) lastMonth = tmpTime.month months.append(tmpTime) while tmpTime < dateRange[1]: if lastMonth != 12: while tmpTime.month <= lastMonth: tmpTime += oneWeek tmpTime = tmpTime.replace(day=1) months.append(tmpTime) lastMonth = tmpTime.month else: while tmpTime.month >= lastMonth: tmpTime += oneWeek tmpTime = tmpTime.replace(day=1) months.append(tmpTime) lastMonth = tmpTime.month So just to explain, what I'm doing here is taking the two dates and converting them from iso format into python datetime objects. Then I loop through adding a week to the start datetime object and check if the numerical value of the month is greater (unless the month is December then it checks if the date is less), If the value is greater I append it to the list of months and keep looping through until I get to my end date. It works perfectly it just doesn't seem like a good way of doing it...
[ "Start by defining some test cases, then you will see that the function is very simple and needs no loops\nfrom datetime import datetime\n\ndef diff_month(d1, d2):\n return (d1.year - d2.year) * 12 + d1.month - d2.month\n\nassert diff_month(datetime(2010,10,1), datetime(2010,9,1)) == 1\nassert diff_month(datetime(2010,10,1), datetime(2009,10,1)) == 12\nassert diff_month(datetime(2010,10,1), datetime(2009,11,1)) == 11\nassert diff_month(datetime(2010,10,1), datetime(2009,8,1)) == 14\n\nYou should add some test cases to your question, as there are lots of potential corner cases to cover - there is more than one way to define the number of months between two dates.\n", "One liner to find a list of datetimes, incremented by month, between two dates. \nimport datetime\nfrom dateutil.rrule import rrule, MONTHLY\n\nstrt_dt = datetime.date(2001,1,1)\nend_dt = datetime.date(2005,6,1)\n\ndates = [dt for dt in rrule(MONTHLY, dtstart=strt_dt, until=end_dt)]\n\n", "This worked for me -\nfrom datetime import datetime\nfrom dateutil import relativedelta\ndate1 = datetime.strptime('2011-08-15 12:00:00', '%Y-%m-%d %H:%M:%S')\ndate2 = datetime.strptime('2012-02-15', '%Y-%m-%d')\nr = relativedelta.relativedelta(date2, date1)\nr.months + (12*r.years)\n\n", "You can easily calculate this using rrule from dateutil module:\nfrom dateutil import rrule\nfrom datetime import date\n\nprint(list(rrule.rrule(rrule.MONTHLY, dtstart=date(2013, 11, 1), until=date(2014, 2, 1))))\n\nwill give you:\n [datetime.datetime(2013, 11, 1, 0, 0),\n datetime.datetime(2013, 12, 1, 0, 0),\n datetime.datetime(2014, 1, 1, 0, 0),\n datetime.datetime(2014, 2, 1, 0, 0)]\n\n", "from dateutil import relativedelta\n\nr = relativedelta.relativedelta(date1, date2)\n\nmonths_difference = (r.years * 12) + r.months\n\n", "Get the ending month (relative to the year and month of the start month ex: 2011 January = 13 if your start date starts on 2010 Oct) and then generate the datetimes beginning the start month and that end month like so:\ndt1, dt2 = dateRange\nstart_month=dt1.month\nend_months=(dt2.year-dt1.year)*12 + dt2.month+1\ndates=[datetime.datetime(year=yr, month=mn, day=1) for (yr, mn) in (\n ((m - 1) / 12 + dt1.year, (m - 1) % 12 + 1) for m in range(start_month, end_months)\n )]\n\nif both dates are on the same year, it could also be simply written as:\ndates=[datetime.datetime(year=dt1.year, month=mn, day=1) for mn in range(dt1.month, dt2.month + 1)]\n\n", "My simple solution:\nimport datetime\n\ndef months(d1, d2):\n return d1.month - d2.month + 12*(d1.year - d2.year)\n\nd1 = datetime.datetime(2009, 9, 26) \nd2 = datetime.datetime(2019, 9, 26) \n\nprint(months(d1, d2))\n\n", "This post nails it! Use dateutil.relativedelta.\nfrom datetime import datetime\nfrom dateutil import relativedelta\ndate1 = datetime.strptime(str('2011-08-15 12:00:00'), '%Y-%m-%d %H:%M:%S')\ndate2 = datetime.strptime(str('2012-02-15'), '%Y-%m-%d')\nr = relativedelta.relativedelta(date2, date1)\nr.months\n\n", "Update 2018-04-20: it seems that OP @Joshkunz was asking for finding which months are between two dates, instead of \"how many months\" are between two dates. So I am not sure why @JohnLaRooy is upvoted for more than 100 times. @Joshkunz indicated in the comment under the original question he wanted the actual dates [or the months], instead of finding the total number of months.\nSo it appeared the question wanted, for between two dates 2018-04-11 to 2018-06-01\nApr 2018, May 2018, June 2018 \n\nAnd what if it is between 2014-04-11 to 2018-06-01? 
Then the answer would be \nApr 2014, May 2014, ..., Dec 2014, Jan 2015, ..., Jan 2018, ..., June 2018\n\nSo that's why I had the following pseudo code many years ago. It merely suggested using the two months as end points and loop through them, incrementing by one month at a time. @Joshkunz mentioned he wanted the \"months\" and he also mentioned he wanted the \"dates\", without knowing exactly, it was difficult to write the exact code, but the idea is to use one simple loop to loop through the end points, and incrementing one month at a time.\nThe answer 8 years ago in 2010:\nIf adding by a week, then it will approximately do work 4.35 times the work as needed. Why not just:\n1. get start date in array of integer, set it to i: [2008, 3, 12], \n and change it to [2008, 3, 1]\n2. get end date in array: [2010, 10, 26]\n3. add the date to your result by parsing i\n increment the month in i\n if month is >= 13, then set it to 1, and increment the year by 1\n until either the year in i is > year in end_date, \n or (year in i == year in end_date and month in i > month in end_date)\n\njust pseduo code for now, haven't tested, but i think the idea along the same line will work.\n", "Define a \"month\" as 1/12 year, then do this: \ndef month_diff(d1, d2): \n \"\"\"Return the number of months between d1 and d2, \n such that d2 + month_diff(d1, d2) == d1\n \"\"\"\n diff = (12 * d1.year + d1.month) - (12 * d2.year + d2.month)\n return diff\n\nYou might try to define a month as \"a period of either 29, 28, 30 or 31 days (depending on the year)\". But you you do that, you have an additional problem to solve. \nWhile it's usually clear that June 15th + 1 month should be July 15th, it's not usually not clear if January 30th + 1 month is in February or March. In the latter case, you may be compelled to compute the date as February 30th, then \"correct\" it to March 2nd. But when you do that, you'll find that March 2nd - 1 month is clearly February 2nd. Ergo, reductio ad absurdum (this operation is not well defined). 
\n", "Here's how to do this with Pandas FWIW:\nimport pandas as pd\npd.date_range(\"1990/04/03\", \"2014/12/31\", freq=\"MS\")\n\nDatetimeIndex(['1990-05-01', '1990-06-01', '1990-07-01', '1990-08-01',\n '1990-09-01', '1990-10-01', '1990-11-01', '1990-12-01',\n '1991-01-01', '1991-02-01',\n ...\n '2014-03-01', '2014-04-01', '2014-05-01', '2014-06-01',\n '2014-07-01', '2014-08-01', '2014-09-01', '2014-10-01',\n '2014-11-01', '2014-12-01'],\n dtype='datetime64[ns]', length=296, freq='MS')\n\nNotice it starts with the month after the given start date.\n", "Many people have already given you good answers to solve this but I have not read any using list comprehension so I give you what I used for a similar use case :\n\ndef compute_months(first_date, second_date):\n year1, month1, year2, month2 = map(\n int, \n (first_date[:4], first_date[5:7], second_date[:4], second_date[5:7])\n )\n\n return [\n '{:0>4}-{:0>2}'.format(year, month)\n for year in range(year1, year2 + 1)\n for month in range(month1 if year == year1 else 1, month2 + 1 if year == year2 else 13)\n ]\n\n>>> first_date = \"2016-05\"\n>>> second_date = \"2017-11\"\n>>> compute_months(first_date, second_date)\n['2016-05',\n '2016-06',\n '2016-07',\n '2016-08',\n '2016-09',\n '2016-10',\n '2016-11',\n '2016-12',\n '2017-01',\n '2017-02',\n '2017-03',\n '2017-04',\n '2017-05',\n '2017-06',\n '2017-07',\n '2017-08',\n '2017-09',\n '2017-10',\n '2017-11']\n\n\n", "There is a simple solution based on 360 day years, where all months have 30 days.\nIt fits most use cases where, given two dates, you need to calculate the number of full months plus the remaining days.\nfrom datetime import datetime, timedelta\n\ndef months_between(start_date, end_date):\n #Add 1 day to end date to solve different last days of month \n s1, e1 = start_date , end_date + timedelta(days=1)\n #Convert to 360 days\n s360 = (s1.year * 12 + s1.month) * 30 + s1.day\n e360 = (e1.year * 12 + e1.month) * 30 + e1.day\n #Count days between the two 360 dates and return tuple (months, days)\n return divmod(e360 - s360, 30)\n\nprint \"Counting full and half months\"\nprint months_between( datetime(2012, 01, 1), datetime(2012, 03, 31)) #3m\nprint months_between( datetime(2012, 01, 1), datetime(2012, 03, 15)) #2m 15d\nprint months_between( datetime(2012, 01, 16), datetime(2012, 03, 31)) #2m 15d\nprint months_between( datetime(2012, 01, 16), datetime(2012, 03, 15)) #2m\nprint \"Adding +1d and -1d to 31 day month\"\nprint months_between( datetime(2011, 12, 01), datetime(2011, 12, 31)) #1m 0d\nprint months_between( datetime(2011, 12, 02), datetime(2011, 12, 31)) #-1d => 29d\nprint months_between( datetime(2011, 12, 01), datetime(2011, 12, 30)) #30d => 1m\nprint \"Adding +1d and -1d to 29 day month\"\nprint months_between( datetime(2012, 02, 01), datetime(2012, 02, 29)) #1m 0d\nprint months_between( datetime(2012, 02, 02), datetime(2012, 02, 29)) #-1d => 29d\nprint months_between( datetime(2012, 02, 01), datetime(2012, 02, 28)) #28d\nprint \"Every month has 30 days - 26/M to 5/M+1 always counts 10 days\"\nprint months_between( datetime(2011, 02, 26), datetime(2011, 03, 05))\nprint months_between( datetime(2012, 02, 26), datetime(2012, 03, 05))\nprint months_between( datetime(2012, 03, 26), datetime(2012, 04, 05))\n\n", "Somewhat a little prettified solution by @Vin-G.\nimport datetime\n\ndef monthrange(start, finish):\n months = (finish.year - start.year) * 12 + finish.month + 1 \n for i in xrange(start.month, months):\n year = (i - 1) / 12 + start.year \n month = (i - 1) % 12 + 1\n 
yield datetime.date(year, month, 1)\n\n", "You can also use the arrow library. This is a simple example:\nfrom datetime import datetime\nimport arrow\n\nstart = datetime(2014, 1, 17)\nend = datetime(2014, 6, 20)\n\nfor d in arrow.Arrow.range('month', start, end):\n print d.month, d.format('MMMM')\n\nThis will print:\n1 January\n2 February\n3 March\n4 April\n5 May\n6 June\n\nHope this helps!\n", "Get difference in number of days, months and years between two dates.\nimport datetime \nfrom dateutil.relativedelta import relativedelta\n\n\niphead_proc_dt = datetime.datetime.now()\nnew_date = iphead_proc_dt + relativedelta(months=+25, days=+23)\n\n# Get Number of Days difference bewtween two dates\nprint((new_date - iphead_proc_dt).days)\n\ndifference = relativedelta(new_date, iphead_proc_dt)\n\n# Get Number of Months difference bewtween two dates\nprint(difference.months + 12 * difference.years)\n\n# Get Number of Years difference bewtween two dates\nprint(difference.years)\n\n", "Try something like this. It presently includes the month if both dates happen to be in the same month.\nfrom datetime import datetime,timedelta\n\ndef months_between(start,end):\n months = []\n cursor = start\n\n while cursor <= end:\n if cursor.month not in months:\n months.append(cursor.month)\n cursor += timedelta(weeks=1)\n\n return months\n\nOutput looks like:\n>>> start = datetime.now() - timedelta(days=120)\n>>> end = datetime.now()\n>>> months_between(start,end)\n[6, 7, 8, 9, 10]\n\n", "You could use python-dateutil. See Python: Difference of 2 datetimes in months\n", "just like range function, when month is 13, go to next year\ndef year_month_range(start_date, end_date):\n '''\n start_date: datetime.date(2015, 9, 1) or datetime.datetime\n end_date: datetime.date(2016, 3, 1) or datetime.datetime\n return: datetime.date list of 201509, 201510, 201511, 201512, 201601, 201602\n '''\n start, end = start_date.strftime('%Y%m'), end_date.strftime('%Y%m')\n assert len(start) == 6 and len(end) == 6\n start, end = int(start), int(end)\n\n year_month_list = []\n while start < end:\n year, month = divmod(start, 100)\n if month == 13:\n start += 88 # 201513 + 88 = 201601\n continue\n year_month_list.append(datetime.date(year, month, 1))\n\n start += 1\n return year_month_list\n\nexample in python shell\n>>> import datetime\n>>> s = datetime.date(2015,9,1)\n>>> e = datetime.date(2016, 3, 1)\n>>> year_month_set_range(s, e)\n[datetime.date(2015, 11, 1), datetime.date(2015, 9, 1), datetime.date(2016, 1, 1), datetime.date(2016, 2, 1),\n datetime.date(2015, 12, 1), datetime.date(2015, 10, 1)]\n\n", "It can be done using datetime.timedelta, where the number of days for skipping to next month can be obtained by calender.monthrange. monthrange returns weekday (0-6 ~ Mon-Sun) and number of days (28-31) for a given year and month.\nFor example: monthrange(2017, 1) returns (6,31).\nHere is the script using this logic to iterate between two months. 
\nfrom datetime import timedelta\nimport datetime as dt\nfrom calendar import monthrange\n\ndef month_iterator(start_month, end_month):\n start_month = dt.datetime.strptime(start_month,\n '%Y-%m-%d').date().replace(day=1)\n end_month = dt.datetime.strptime(end_month,\n '%Y-%m-%d').date().replace(day=1)\n while start_month <= end_month:\n yield start_month\n start_month = start_month + timedelta(days=monthrange(start_month.year, \n start_month.month)[1])\n\n`\n", "it seems that the answers are unsatisfactory and I have since use my own code which is easier to understand\nfrom datetime import datetime\nfrom dateutil import relativedelta\n\ndate1 = datetime.strptime(str('2017-01-01'), '%Y-%m-%d')\ndate2 = datetime.strptime(str('2019-03-19'), '%Y-%m-%d')\n\ndifference = relativedelta.relativedelta(date2, date1)\nmonths = difference.months\nyears = difference.years\n# add in the number of months (12) for difference in years\nmonths += 12 * difference.years\nmonths\n\n", "from datetime import datetime\nfrom dateutil import relativedelta\n\ndef get_months(d1, d2):\n date1 = datetime.strptime(str(d1), '%Y-%m-%d')\n date2 = datetime.strptime(str(d2), '%Y-%m-%d')\n print (date2, date1)\n r = relativedelta.relativedelta(date2, date1)\n months = r.months + 12 * r.years\n if r.days > 0:\n months += 1\n print (months)\n return months\n\n\nassert get_months('2018-08-13','2019-06-19') == 11\nassert get_months('2018-01-01','2019-06-19') == 18\nassert get_months('2018-07-20','2019-06-19') == 11\nassert get_months('2018-07-18','2019-06-19') == 12\nassert get_months('2019-03-01','2019-06-19') == 4\nassert get_months('2019-03-20','2019-06-19') == 3\nassert get_months('2019-01-01','2019-06-19') == 6\nassert get_months('2018-09-09','2019-06-19') == 10\n\n", "#This definition gives an array of months between two dates.\nimport datetime\ndef MonthsBetweenDates(BeginDate, EndDate):\n firstyearmonths = [mn for mn in range(BeginDate.month, 13)]<p>\n lastyearmonths = [mn for mn in range(1, EndDate.month+1)]<p>\n months = [mn for mn in range(1, 13)]<p>\n numberofyearsbetween = EndDate.year - BeginDate.year - 1<p>\n return firstyearmonths + months * numberofyearsbetween + lastyearmonths<p>\n\n#example\nBD = datetime.datetime.strptime(\"2000-35\", '%Y-%j')\nED = datetime.datetime.strptime(\"2004-200\", '%Y-%j')\nMonthsBetweenDates(BD, ED)\n\n", "Usually 90 days are NOT 3 months literally, just a reference.\nSo, finally, you need to check if days are bigger than 15 to add +1 to month counter. 
or better, add another elif with half month counter.\nFrom this other stackoverflow answer i've finally ended with that:\n#/usr/bin/env python\n# -*- coding: utf8 -*-\n\nimport datetime\nfrom datetime import timedelta\nfrom dateutil.relativedelta import relativedelta\nimport calendar\n\nstart_date = datetime.date.today()\nend_date = start_date + timedelta(days=111)\nstart_month = calendar.month_abbr[int(start_date.strftime(\"%m\"))]\n\nprint str(start_date) + \" to \" + str(end_date)\n\nmonths = relativedelta(end_date, start_date).months\ndays = relativedelta(end_date, start_date).days\n\nprint months, \"months\", days, \"days\"\n\nif days > 16:\n months += 1\n\nprint \"around \" + str(months) + \" months\", \"(\",\n\nfor i in range(0, months):\n print calendar.month_abbr[int(start_date.strftime(\"%m\"))],\n start_date = start_date + relativedelta(months=1)\n\nprint \")\"\n\nOutput:\n2016-02-29 2016-06-14\n3 months 16 days\naround 4 months ( Feb Mar Apr May )\n\nI've noticed that doesn't work if you add more than days left in current year, and that's is unexpected.\n", "Here is my solution for this:\ndef calc_age_months(from_date, to_date):\n from_date = time.strptime(from_date, \"%Y-%m-%d\")\n to_date = time.strptime(to_date, \"%Y-%m-%d\")\n\n age_in_months = (to_date.tm_year - from_date.tm_year)*12 + (to_date.tm_mon - from_date.tm_mon)\n\n if to_date.tm_mday < from_date.tm_mday:\n return age_in_months -1\n else\n return age_in_months\n\nThis will handle some edge cases as well where the difference in months between 31st Dec 2018 and 1st Jan 2019 will be zero (since the difference is only a day).\n", "Assuming upperDate is always later than lowerDate and both are datetime.date objects:\nif lowerDate.year == upperDate.year:\n monthsInBetween = range( lowerDate.month + 1, upperDate.month )\nelif upperDate.year > lowerDate.year:\n monthsInBetween = range( lowerDate.month + 1, 12 )\n for year in range( lowerDate.year + 1, upperDate.year ):\n monthsInBetween.extend( range(1,13) )\n monthsInBetween.extend( range( 1, upperDate.month ) )\n\nI haven't tested this thoroughly, but it looks like it should do the trick.\n", "Try this:\n dateRange = [datetime.strptime(dateRanges[0], \"%Y-%m-%d\"),\n datetime.strptime(dateRanges[1], \"%Y-%m-%d\")]\ndelta_time = max(dateRange) - min(dateRange)\n#Need to use min(dateRange).month to account for different length month\n#Note that timedelta returns a number of days\ndelta_datetime = (datetime(1, min(dateRange).month, 1) + delta_time -\n timedelta(days=1)) #min y/m/d are 1\nmonths = ((delta_datetime.year - 1) * 12 + delta_datetime.month -\n min(dateRange).month)\nprint months\n\nShouldn't matter what order you input the dates, and it takes into account the difference in month lengths.\n", "Here is a method:\ndef months_between(start_dt, stop_dt):\n month_list = []\n total_months = 12*(stop_dt.year-start_dt.year)+(stop_dt.month-start_d.month)+1\n if total_months > 0:\n month_list=[ datetime.date(start_dt.year+int((start_dt+i-1)/12), \n ((start_dt-1+i)%12)+1,\n 1) for i in xrange(0,total_months) ]\n return month_list\n\nThis is first computing the total number of months between the two dates, inclusive. 
Then it creates a list using the first date as the base and performs modulo arithmetic to create the date objects.\n", "I actually needed to do something pretty similar just now.\nEnded up writing a function which returns a list of tuples indicating the start and end of each month between two sets of dates so I could write some SQL queries off the back of it for monthly totals of sales etc.\nI'm sure it can be improved by someone who knows what they're doing but hope it helps...\nThe returned value looks as follows (generating for today - 365 days until today as an example)\n[ (datetime.date(2013, 5, 1), datetime.date(2013, 5, 31)),\n (datetime.date(2013, 6, 1), datetime.date(2013, 6, 30)),\n (datetime.date(2013, 7, 1), datetime.date(2013, 7, 31)),\n (datetime.date(2013, 8, 1), datetime.date(2013, 8, 31)),\n (datetime.date(2013, 9, 1), datetime.date(2013, 9, 30)),\n (datetime.date(2013, 10, 1), datetime.date(2013, 10, 31)),\n (datetime.date(2013, 11, 1), datetime.date(2013, 11, 30)),\n (datetime.date(2013, 12, 1), datetime.date(2013, 12, 31)),\n (datetime.date(2014, 1, 1), datetime.date(2014, 1, 31)),\n (datetime.date(2014, 2, 1), datetime.date(2014, 2, 28)),\n (datetime.date(2014, 3, 1), datetime.date(2014, 3, 31)),\n (datetime.date(2014, 4, 1), datetime.date(2014, 4, 30)),\n (datetime.date(2014, 5, 1), datetime.date(2014, 5, 31))]\n\nCode as follows (has some debug stuff which can be removed):\n#! /usr/env/python\nimport datetime\n\ndef gen_month_ranges(start_date=None, end_date=None, debug=False):\n today = datetime.date.today()\n if not start_date: start_date = datetime.datetime.strptime(\n \"{0}/01/01\".format(today.year),\"%Y/%m/%d\").date() # start of this year\n if not end_date: end_date = today\n if debug: print(\"Start: {0} | End {1}\".format(start_date, end_date))\n\n # sense-check\n if end_date < start_date:\n print(\"Error. Start Date of {0} is greater than End Date of {1}?!\".format(start_date, end_date))\n return None\n\n date_ranges = [] # list of tuples (month_start, month_end)\n\n current_year = start_date.year\n current_month = start_date.month\n\n while current_year <= end_date.year:\n next_month = current_month + 1\n next_year = current_year\n if next_month > 12:\n next_month = 1\n next_year = current_year + 1\n\n month_start = datetime.datetime.strptime(\n \"{0}/{1}/01\".format(current_year,\n current_month),\"%Y/%m/%d\").date() # start of month\n month_end = datetime.datetime.strptime(\n \"{0}/{1}/01\".format(next_year,\n next_month),\"%Y/%m/%d\").date() # start of next month\n month_end = month_end+datetime.timedelta(days=-1) # start of next month less one day\n\n range_tuple = (month_start, month_end)\n if debug: print(\"Month runs from {0} --> {1}\".format(\n range_tuple[0], range_tuple[1]))\n date_ranges.append(range_tuple)\n\n if current_month == 12:\n current_month = 1\n current_year += 1\n if debug: print(\"End of year encountered, resetting months\")\n else:\n current_month += 1\n if debug: print(\"Next iteration for {0}-{1}\".format(\n current_year, current_month))\n\n if current_year == end_date.year and current_month > end_date.month:\n if debug: print(\"Final month encountered. Terminating loop\")\n break\n\n return date_ranges\n\n\nif __name__ == '__main__':\n print(\"Running in standalone mode. 
Debug set to True\")\n from pprint import pprint\n pprint(gen_month_ranges(debug=True), indent=4)\n pprint(gen_month_ranges(start_date=datetime.date.today()+datetime.timedelta(days=-365),\n debug=True), indent=4)\n\n", "Assuming that you wanted to know the \"fraction\" of the month that dates were in, which I did, then you need to do a bit more work.\nfrom datetime import datetime, date\nimport calendar\n\ndef monthdiff(start_period, end_period, decimal_places = 2):\n if start_period > end_period:\n raise Exception('Start is after end')\n if start_period.year == end_period.year and start_period.month == end_period.month:\n days_in_month = calendar.monthrange(start_period.year, start_period.month)[1]\n days_to_charge = end_period.day - start_period.day+1\n diff = round(float(days_to_charge)/float(days_in_month), decimal_places)\n return diff\n months = 0\n # we have a start date within one month and not at the start, and an end date that is not\n # in the same month as the start date\n if start_period.day > 1:\n last_day_in_start_month = calendar.monthrange(start_period.year, start_period.month)[1]\n days_to_charge = last_day_in_start_month - start_period.day +1\n months = months + round(float(days_to_charge)/float(last_day_in_start_month), decimal_places)\n start_period = datetime(start_period.year, start_period.month+1, 1)\n\n last_day_in_last_month = calendar.monthrange(end_period.year, end_period.month)[1]\n if end_period.day != last_day_in_last_month:\n # we have lest days in the last month\n months = months + round(float(end_period.day) / float(last_day_in_last_month), decimal_places)\n last_day_in_previous_month = calendar.monthrange(end_period.year, end_period.month - 1)[1]\n end_period = datetime(end_period.year, end_period.month - 1, last_day_in_previous_month)\n\n #whatever happens, we now have a period of whole months to calculate the difference between\n\n if start_period != end_period:\n months = months + (end_period.year - start_period.year) * 12 + (end_period.month - start_period.month) + 1\n\n # just counter for any final decimal place manipulation\n diff = round(months, decimal_places)\n return diff\n\nassert monthdiff(datetime(2015,1,1), datetime(2015,1,31)) == 1\nassert monthdiff(datetime(2015,1,1), datetime(2015,02,01)) == 1.04\nassert monthdiff(datetime(2014,1,1), datetime(2014,12,31)) == 12\nassert monthdiff(datetime(2014,7,1), datetime(2015,06,30)) == 12\nassert monthdiff(datetime(2015,1,10), datetime(2015,01,20)) == 0.35\nassert monthdiff(datetime(2015,1,10), datetime(2015,02,20)) == 0.71 + 0.71\nassert monthdiff(datetime(2015,1,31), datetime(2015,02,01)) == round(1.0/31.0,2) + round(1.0/28.0,2)\nassert monthdiff(datetime(2013,1,31), datetime(2015,02,01)) == 12*2 + round(1.0/31.0,2) + round(1.0/28.0,2)\n\nprovides an example that works out the number of months between two dates inclusively, including the fraction of each month that the date is in. 
This means that you can work out how many months are between 2015-01-20 and 2015-02-14, where the fraction of the date in the month of January is determined by the number of days in January; or equally taking into account that the number of days in February can change from year to year.\nFor my reference, this code is also on github - https://gist.github.com/andrewyager/6b9284a4f1cdb1779b10\n", "This works...\nfrom datetime import datetime as dt\nfrom dateutil.relativedelta import relativedelta\ndef number_of_months(d1, d2):\n months = 0\n r = relativedelta(d1,d2)\n if r.years==0:\n months = r.months\n if r.years>=1:\n months = 12*r.years+r.months\n return months\n#example \nnumber_of_months(dt(2017,9,1),dt(2016,8,1))\n\n", "from datetime import datetime\n\ndef diff_month(start_date,end_date):\n qty_month = ((end_date.year - start_date.year) * 12) + (end_date.month - start_date.month)\n\n d_days = end_date.day - start_date.day\n\n if d_days >= 0:\n adjust = 0\n else:\n adjust = -1\n qty_month += adjust\n\n return qty_month\n\ndiff_month(datetime.today(),datetime(2019, 8, 24))\n\n\n#Examples:\n#diff_month(datetime(2018, 2, 12),datetime(2019, 8, 24)) = 18\n#diff_month(datetime(2018, 2, 12),datetime(2018, 8, 10)) = 5\n\n", "This is my way to do this:\nStart_date = \"2000-06-01\"\nEnd_date = \"2001-05-01\"\n\nmonth_num = len(pd.date_range(start = Start_date[:7], end = End_date[:7] ,freq='M'))+1\n\nI just use the month to create a date range and calculate the length.\n", "To get the number of full months between two dates:\nimport datetime\n\ndef difference_in_months(start, end):\n if start.year == end.year:\n months = end.month - start.month\n else:\n months = (end.year - start.year) * 12 + (end.month - start.month)\n\n if start.day > end.day:\n months = months - 1\n\n return months\n\n", "You can use the below code to get the months between two dates:\nOrderedDict(((start_date + timedelta(_)).strftime(date_format), None) for _ in xrange((end_date - start_date).days)).keys()\n\nwhere start_date and end_date must be proper dates and date_format is the format in which you want your resulting dates.\nIn your case, date_format will be %b %Y.\n", "The question is really asking about the total months between 2 dates and not the difference of it\nHence a revisited answer with some extra functionality,\nfrom datetime import date, datetime\nfrom dateutil.rrule import rrule, MONTHLY\n\ndef month_get_list(dt_to, dt_from, return_datetime=False, as_month=True):\n INDEX_MONTH_MAPPING = {1: 'january', 2: 'february', 3: 'march', 4: 'april', 5: 'may', 6: 'june', 7: 'july',\n 8: 'august',\n 9: 'september', 10: 'october', 11: 'november', 12: 'december'}\n if return_datetime:\n return [dt for dt in rrule(MONTHLY, dtstart=dt_from, until=dt_to)]\n if as_month:\n return [INDEX_MONTH_MAPPING[dt.month] for dt in rrule(MONTHLY, dtstart=dt_from, until=dt_to)]\n\n return [dt.month for dt in rrule(MONTHLY, dtstart=dt_from, until=dt_to)]\n\nmonth_list = month_get_list(date(2021, 12, 31), date(2021, 1, 1))\ntotal_months = len(month_list)\n\nResult\nmonth_list = ['january', 'february', 'march', 'april', 'may', 'june', 'july', 'august', 'september', 'october', 'november', 'december']\ntotal_months = 12\n\nWith as_month set to False\nmonth_list = month_get_list(date(2021, 12, 31), date(2021, 1, 1),\nas_month=False)\n# month_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]\n\nReturning datetime instead\nmonth_list = month_get_list(date(2021, 12, 31), date(2021, 1, 1), return_datetime=True)\nmonth_list = [datetime.datetime(2021, 1, 1, 0, 0), 
datetime.datetime(2021, 2, 1, 0, 0), datetime.datetime(2021, 3, 1, 0, 0), datetime.datetime(2021, 4, 1, 0, 0), datetime.datetime(2021, 5, 1, 0, 0), datetime.datetime(2021, 6, 1, 0, 0), datetime.datetime(2021, 7, 1, 0, 0), datetime.datetime(2021, 8, 1, 0, 0), datetime.datetime(2021, 9, 1, 0, 0), datetime.datetime(2021, 10, 1, 0, 0), datetime.datetime(2021, 11, 1, 0, 0), datetime.datetime(2021, 12, 1, 0, 0)]\n\n", "If fractions of a month are important to you, leverage the first day of the next month to work around the number of days each month may have, in a leap year or not:\nfrom datetime import timedelta, datetime\nfrom dateutil.relativedelta import relativedelta\n\ndef month_fraction(this_date):\n this_date_df=this_date.strftime(\"%Y-%m-%d\")\n day1_same_month_as_this_date_df=this_date_df[:8]+'01'\n day1_same_month_as_this_date=datetime.strptime(day1_same_month_as_this_date_df, '%Y-%m-%d').date()\n next_month_as_this_date=this_date+relativedelta(months=1)\n next_month_as_this_date_df=next_month_as_this_date.strftime(\"%Y-%m-%d\")\n day1_next_month_this_date_df=next_month_as_this_date_df[:8]+'01'\n day1_next_month_this_date=datetime.strptime(day1_next_month_this_date_df, '%Y-%m-%d').date()\n last_day_same_month_this_date=day1_next_month_this_date-timedelta(days=1)\n delta_days_from_month_beginning=(this_date-day1_same_month_as_this_date).days\n delta_days_whole_month=(last_day_same_month_this_date-day1_same_month_as_this_date).days \n fraction_beginning_of_month=round(delta_days_from_month_beginning/delta_days_whole_month,4)\n return fraction_beginning_of_month\n\ndef delta_months_JLR(second_date,first_date):\n return (second_date.year - first_date.year) * 12 + second_date.month - first_date.month\n\ndef delta_months_float(first_date,second_date):\n outgoing_fraction_first_date = month_fraction(first_date)\n incoming_fraction_second_date = month_fraction(second_date)\n delta_months=delta_months_JLR(second_date,first_date) #as in John La Rooy's response\n months_float=round(delta_months-outgoing_fraction_first_date+incoming_fraction_second_date,4)\n return months_float\n\n\nfirst_date_df='2021-12-28'\nfirst_date=datetime.strptime(first_date_df, '%Y-%m-%d').date()\nsecond_date_df='2022-01-02'\nsecond_date=datetime.strptime(second_date_df, '%Y-%m-%d').date()\n\nprint (delta_months_float(first_date,second_date))\n0.1333\n\n", "Please give the start and end date, and the below function will find all month starting and ending dates.\n$months = $this->getMonthsInRange('2022-03-15', '2022-07-12');\n \n \n public function getMonthsInRange($startDate, $endDate)\n {\n $months = array();\n while (strtotime($startDate) <= strtotime($endDate)) {\n $start_date = date('Y-m-d', strtotime($startDate));\n $end_date = date(\"Y-m-t\", strtotime($start_date));\n if(strtotime($end_date) >= strtotime($endDate)) {\n $end_date = $endDate;\n }\n $months[] = array(\n 'start_date' => $start_date,\n 'end_date' => $end_date\n );\n $startDate = date('01 M Y', strtotime($startDate . '+ 1 month'));\n }\n return $months;\n }\n\n", "Works fine for python 3\n import datetime\n\n start = datetime.datetime.today()\n end = start + datetime.timedelta(days=365)\n month_range = [start.date()]\n diff = end - start\n months = round(diff.days / 30)\n temp = start\n \n for i in range(months):\n temp = temp + datetime.timedelta(days=30)\n if temp <= end:\n month_range.append(temp.date())\n\n print(month_range)\n\n" ]
[ 256, 59, 48, 15, 15, 10, 9, 8, 6, 6, 5, 5, 4, 4, 4, 4, 3, 3, 2, 2, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "You could use something like:\nimport datetime\ndays_in_month = 365.25 / 12 # represent the average of days in a month by year\nmonth_diff = lambda end_date, start_date, precision=0: round((end_date - start_date).days / days_in_month, precision)\nstart_date = datetime.date(1978, 12, 15)\nend_date = datetime.date(2012, 7, 9)\nmonth_diff(end_date, start_date) # should show 403.0 months\n\n", "import datetime\nfrom calendar import monthrange\n\ndef date_dif(from_date,to_date): # Изчислява разлика между две дати\n dd=(to_date-from_date).days\n if dd>=0:\n fromDM=from_date.year*12+from_date.month-1\n toDM=to_date.year*12+to_date.month-1\n mlen=monthrange(int((toDM)/12),(toDM)%12+1)[1]\n d=to_date.day-from_date.day\n dm=toDM-fromDM\n m=(dm-int(d<0))%12\n y=int((dm-int(d<0))/12)\n d+=int(d<0)*mlen\n # diference in Y,M,D, diference months,diference days, days in to_date month\n return[y,m,d,dm,dd,mlen]\n else:\n return[0,0,0,0,dd,0]\n\n" ]
[ -1, -1 ]
[ "date_math", "datetime", "monthcalendar", "python" ]
stackoverflow_0004039879_date_math_datetime_monthcalendar_python.txt
Q: Leaving punctuations untouched during Caesar Cipher in python I read multiple related threads about how to solve the same problem, but I couldn't apply the solutions to my code. Also, the code is supposed to receive a path to a text file which must contain text composed of only English letters and punctuation symbols, and a destination file for the encrypted data. Any suggestions?
def check_alpha(m_string):
    list_wanted = ['!', '?', '.', ',', ' ']
    for letter in m_string:
        if not (letter in list_wanted or letter.isalpha()):
            return False
    return True and any(letter.isalpha() for letter in m_string)


while True:
    string = input("Enter the text to be encrypted: ")
    if check_alpha(string):
        break
    else:
        print("Please enter a valid text: ")
        continue

while True: # Validating input key
    key = input("Enter the key: ")
    try:
        key = int(key)
    except ValueError:
        print("Please enter a valid key: ")
        continue
    break


def caesarcipher(string, key): # Caesar Cipher
    encrypted_string = []
    new_key = key % 26
    for letter in string:
        encrypted_string.append(getnewletter(letter, new_key))
    return ''.join(encrypted_string)


def getnewletter(letter, key):
    new_letter = ord(letter) + key
    return chr(new_letter) if new_letter <= 122 else chr(96 + new_letter % 122)


with open('Caesar.txt', 'a') as the_file: # Writing to a text file
    the_file.write(caesarcipher(string, key))

print(caesarcipher(string, key))
print('Your text has been encrypted via Caesar-Cipher, the result is in Caesar.txt')

A: As mentioned by Alex P you can simply handle all punctuation separately with an if condition:
def caesarcipher(string, key): # Caesar Cipher
    encrypted_string = []
    new_key = key % 26
    for letter in string:
        if letter in ['!', '?', '.', ',', ' ']:
            encrypted_string.append(letter)
        else:
            encrypted_string.append(getnewletter(letter, new_key))
    return ''.join(encrypted_string)

A: Well, you can make another function which builds on the check_alpha() function.
EDIT: I hope I understand your problem correctly. If not then let me know.
from pathlib import Path

def load_file(file_path):
    if not Path(file_path).exists():
        return False
    
    with open(file_path, 'r') as fin:
        for line in fin.readlines():
            if not check_alpha(line.strip('\n')):
                return False
        fin.seek(0)
        return fin.read() # this will create '\n' at the end of every line! keep that in mind

# check_alpha(m_string): keep the original definition from the question

while True:
    fin = input("Enter file path with text selected for encryption: ")
    uncoded = load_file(fin)
    if uncoded: # everything which is not None, empty, False or 0 is True
        break
    else:
        print("Please enter a valid file with only valid letters/punctuations.\n")
        continue

while True:
    fout = input("Enter file path for output file: ")

    try:
        if not Path(fout).parent.exists():
            Path(fout).parent.mkdir(parents=True, exist_ok=True)
            break
        else:
            break
    except NotADirectoryError as err:
        print(f'Error {err} has occurred. Probably wrong disk selection.')

In exactly this way you can do it for the encrypted data. I guess all rules apply to both encrypted and decrypted data. 
A: A simple example of the Caesar Cipher tailored for your needs def encrypt(text,s): result = "" # traverse the plain text for i in range(len(text)): char = text[i] # Encrypt uppercase characters in plain text if is_a_letter(char): if (char.isupper()): result += chr((ord(char) + s-65) % 26 + 65) # Encrypt lowercase characters in plain text else: result += chr((ord(char) + s - 97) % 26 + 97) else: result += char return result #check the above function text = "CEASER!CIPHER.DEMO" s = 4 # the length of the shift my_list = ['!', '?', '.', ',', ' '] # check if the character is in the above list def is_a_letter(text): for x in range(0,len(my_list)): if text == my_list[x]: return False return True print("Plain Text : " + text) print("Shift pattern : " + str(s)) print("Cipher: " + encrypt(text,s))
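For reference, the same shift can also be expressed with Python's built-in str.translate, which leaves any unmapped character (punctuation, digits, spaces) untouched automatically. This is a minimal sketch, not the asker's original design; the function name is illustrative:
import string

def caesar_translate(text, key):
    lower = string.ascii_lowercase
    upper = string.ascii_uppercase
    k = key % 26
    # build a translation table mapping each letter to its shifted letter
    table = str.maketrans(lower + upper,
                          lower[k:] + lower[:k] + upper[k:] + upper[:k])
    return text.translate(table)

print(caesar_translate("Hello, World!", 3))  # Khoor, Zruog!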
Leaving punctuations untouched during Caesar Cipher in python
I read multiple related threads about how to solve the same problem, but I couldn't apply the solutions to my code. Also, the code is supposed receive a path to a text file which must contain text composed of only English letters and punctuation symbols and a destination file for encrypted data. Any suggestions? def check_alpha(m_string): list_wanted = ['!', '?', '.', ',', ' '] for letter in m_string: if not (letter in list_wanted or letter.isalpha()): return False return True and any(letter.isalpha() for letter in m_string) while True: string = input("Enter the text to be encrypted: ") if check_alpha(string): break else: print("Please enter a valid text: ") continue while True: # Validating input key key = input("Enter the key: ") try: key = int(key) except ValueError: print("Please enter a valid key: ") continue break def caesarcipher(string, key): # Caesar Cipher encrypted_string = [] new_key = key % 26 for letter in string: encrypted_string.append(getnewletter(letter, new_key)) return ''.join(encrypted_string) def getnewletter(letter, key): new_letter = ord(letter) + key return chr(new_letter) if new_letter <= 122 else chr(96 + new_letter % 122) with open('Caesar.txt', 'a') as the_file: # Writing to a text file the_file.write(caesarcipher(string, key)) print(caesarcipher(string, key)) print('Your text has been encrypted via Caesar-Cipher, the result is in Caesar.txt')
[ "As mentioned by Alex P you can simple handle all punctuation separately with a if condition:\ndef caesarcipher(string, key): # Caesar Cipher\n encrypted_string = []\n new_key = key % 26\n for letter in string:\n if letter in ['!', '?', '.', ',', ' ']:\n encrypted_string.append(letter)\n else:\n encrypted_string.append(getnewletter(letter, new_key))\n return ''.join(encrypted_string)\n\n", "Well you can make another function which can relate to check_alpha() function.\nEDIT: I hope I understand your problem correctly. If not then let me know.\nimport pathlib\n\ndef load_file(file_path)\n from pathlib import Path\n \n if not Path(file_path).exist():\n return False\n \n with open(file_path, 'r') as fin:\n for line in fin.readlines():\n if not check_alpha(line.strip('\\n')):\n return False\n\n return fin.read() -> this will create '\\n' at every end of the line! keep that in mind\n\ndef check_aplha(m_string).... -> keep the original\n\nwhile True:\n fin = input(\"Enter file path with text selected for encryption: \")\n uncoded = load_file(fin)\n if uncoded: -> everything which is not None, Empty, False or 0 is True\n break\n else:\n print(\"Please enter a valid file with only valid letters/punctuations.\\n\")\n continue\n\nwhile True:\n fout = input (\"Enter file path for output file: \") \n\n try:\n if not Path(fout.parent).exists():\n Path(fout.parent).mkdir(parents=True, exist_ok=True)\n break\n else:\n break\n except NotADirectoryError as err:\n print(f'Error {err} has occured. Probably wrong disk selection.')\n\nIn this way exactly you can do it for encrypted data. I guess all rules apply either on encrypted either on decrypted data.\n", "A simple example of the Ceaser Cipher tailed for your need\ndef encrypt(text,s):\n result = \"\"\n # transverse the plain text\n for i in range(len(text)):\n char = text[i]\n # Encrypt uppercase characters in plain text\n if is_a_letter(char):\n \n if (char.isupper()):\n result += chr((ord(char) + s-65) % 26 + 65)\n # Encrypt lowercase characters in plain text\n else:\n result += chr((ord(char) + s - 97) % 26 + 97)\n else:\n result += char\n return result\n#check the above function\ntext = \"CEASER!CIPHER.DEMO\"\ns = 4 # the length of the shift \nmy_list = ['!', '?', '.', ',', ' ']\n\n# check if the character is one of the above list\ndef is_a_letter(text):\n for x in range(0,len(my_list)):\n if text == my_list[x]:\n return False\n return True\n\nprint(\"Plain Text : \" + text)\nprint(\"Shift pattern : \" + str(s))\nprint(\"Cipher: \" + encrypt(text,s))\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "caesar_cipher", "python" ]
stackoverflow_0074518944_caesar_cipher_python.txt
Q: How to extract a word from a line? I started a Python course not so long ago. I have a file "input.txt" with lines (id, animal, gender, name, date of birth, date arrived at the zoo): 7910 leopard male Leo 04.06.2001 05.15.2010. 9315 cat male Hiha 01.04.2004 03.24.2012. 2226 leopard female Lia 07.28.2007 08.24.2019. I need to extract from each line the kind of animal and then sort the kinds into a list by length: cat leopard I had some thoughts, but I don't know how to extract the kind of animal from each line with open('input.txt', 'r') as file: animals = set() for line in file.readlines(): animals.add(x) sorted(animals, key=lambda x: len(x)) print(animals) A: I think this can work: >>> file = open('input.txt', 'r') >>> {line.split()[1] for line in file.readlines()} {'leopard', 'cat'} Since the second word of each line is the kind of that animal, accessing index 1 of the split line gives you the kind.
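To also get the kinds sorted by length, as the question asks, the set comprehension above can be combined with sorted(); a minimal sketch assuming the same file layout:
with open('input.txt') as file:
    animals = {line.split()[1] for line in file}  # second column = kind
print(sorted(animals, key=len))
# ['cat', 'leopard']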
How to extract a word from a line?
I've started python course not so long time ago. I have a file "input.txt" with lines (id, animal, gender, name, date of birth, date arrived to the zoo): 7910 leopard male Leo 04.06.2001 05.15.2010. 9315 cat male Hiha 01.04.2004 03.24.2012. 2226 leopard female Lia 07.28.2007 08.24.2019. I need to extract from each line the kind of animal and then sort kinds into the list by length: cat leopard I had some thoughts but Idk how to extract kind of an animal from each line with open('input.txt', 'r') as file: animals = set() for line in file.readlines(): animals.add(x) sorted(animals, key=lambda x: len(x)) print(animals)
[ "I think this can work:\n>>> file = open('input.txt', 'r')\n>>> {line.split()[1] for line in file.readlines()}\n{'leopard', 'cat'}\n\nWhile the second word of each line is the kind of that animal, accessing the 1 index of it can give you the kind.\n" ]
[ 1 ]
[]
[]
[ "python", "set" ]
stackoverflow_0074519147_python_set.txt
Q: I want 3 conditions to be met with pandas If I have this dataset: IDUSER SOURCE numofvisit Transaction 1 direct 2 yes 1 google 1 no 2 google 1 no 3 yahoo 1 no 3 direct 2 yes so I want to be able to say "50% of users that did a transaction are from google and 50% are from yahoo" but If I filter based on the row that actually got a transaction it would tell me that 100% are direct I'm thinking in a really conventional way of solving it but I cannot see how to do it with pandas filter the users that got a transaction then for each user that got a transaction, check the source if numofvisit == 1 How could I do this with pandas? A: IIUC, you can use: # identify users for which there is at least one transaction keep = df['Transaction'].eq('yes').groupby(df['IDUSER']).any() # keep those users m1 = df['IDUSER'].isin(keep[keep].index) # remove the direct rows m2 = df['SOURCE'].ne('direct') # get the proportion of each source df.loc[m1&m2, 'SOURCE'].value_counts(normalize=True) Output: google 0.5 yahoo 0.5 Name: SOURCE, dtype: float64
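As a variant of the same idea, groupby(...).filter can keep the rows of transacting users in one step; a sketch assuming the same column names as above:
# keep only rows belonging to users that have at least one transaction
transacting = df.groupby('IDUSER').filter(lambda g: g['Transaction'].eq('yes').any())

# drop the direct rows, then compute the source proportions
print(transacting.loc[transacting['SOURCE'].ne('direct'), 'SOURCE']
      .value_counts(normalize=True))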
I want 3 conditions to be met with pandas
If I have this dataset: IDUSER SOURCE numofvisit Transaction 1 direct 2 yes 1 google 1 no 2 google 1 no 3 yahoo 1 no 3 direct 2 yes so I want to be able to say "50% of users that did a transaction are from google and 50% are from yahoo" but If I filter based on the row that actually got a transaction it would tell me that 100% are direct I'm thinking in a really conventional way of solving it but I cannot see how to do it with pandas filter the users that got a transaction then for each user that got a transaction, check the source if numofvisit == 1 How could I do this with pandas?
[ "IIUC, you can use:\n# identify users for which there is at least one transaction\nkeep = df['Transaction'].eq('yes').groupby(df['IDUSER']).any()\n\n# keep those users\nm1 = df['IDUSER'].isin(keep[keep].index)\n# remove the direct rows\nm2 = df['SOURCE'].ne('direct')\n\n# get the proportion of each source\ndf.loc[m1&m2, 'SOURCE'].value_counts(normalize=True)\n\nOutput:\ngoogle 0.5\nyahoo 0.5\nName: SOURCE, dtype: float64\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074519184_pandas_python.txt
Q: How to create classes from existing tables using Flask-SQLAlchemy I know I need to use the MetaData object, in SQLAlchemy, but I am not sure how to use it with a class, db = SQLAlchemy(app) meta =db.Metadata() class orders(db.model): pass How do I pass the meta object to the class so that it will auto generate table schema? A: Well you can use SQLAlchemy's autoload feature but I still haven't figured out how to use that from flask-sqlalchemy. Here's a tutorial if you want to read about it anyway: SQLAlchemy Connecting to pre-existing databases. The best solution I found for the time being is to use sqlautocode to generate the SQLAlchemy models from the existing tables in your database. I know it would be preferable if SQLAlchemy would handle that automatically but I can't find a way to do it from Flask. Here's how to use it: sqlautocode mysql://<dbuser>:<pass>@localhost:3306/<dbname> -o alchemy_models.py This will generate the Models and place them in the alchemy_models.py file. I hope this helps A: You can use sqlacodegen to generate the classes needed for sqlalchemy. pip install sqlacodegen sqlacodegen postgresql+psycopg2://username:password@host/database --outfile models.py I ran into an issue with the Base class and the query attribute. The error I received was: AttributeError: type object 'PaymentType' has no attribute 'query' I was able to make the sqlacodegen classes work by using a scoped_session. session = scoped_session(sessionmaker(autocommit=False,autoflush=False,bind=engine)) Base.query = session.query_property() print(PaymentType.query.all()) A: The db.Model.metadata.reflect(...) function is also super useful here; you don't have to define the model at all (it's just inferred from the existing database schema). See stay_hungry's answer here for a specific implementation
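A minimal sketch of that reflection approach with Flask-SQLAlchemy; the database URI and table name here are illustrative, not from the question, and depending on your Flask-SQLAlchemy version you may not need the explicit app context:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///example.db'  # assumed URI
db = SQLAlchemy(app)

with app.app_context():
    # load the existing schema into the model metadata
    db.Model.metadata.reflect(bind=db.engine)

class Orders(db.Model):
    # reuse the reflected table instead of declaring columns by hand
    __table__ = db.Model.metadata.tables['orders']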
How to create classes from existing tables using Flask-SQLAlchemy
I know I need to use the MetaData object, in SQLAlchemy, but I am not sure how to use it with a class, db = SQLAlchemy(app) meta =db.Metadata() class orders(db.model): pass How do I pass the meta object to the class so that it will auto generate table schema?
[ "Well you can use SQLAlchemy's autoload feature but I still haven't figured out how to use that from flask-sqlalchemy. Here's a tutorial if you want to read about it anyway: SQLAlchemy Connecting to pre-existing databases.\nThe best solution I found for the time being is to use sqlautocode to generate the SQLAlchemy models from the existing tables in your database. I know it would be preferable if SQLAlchemy would handle that automatically but I can't find a way to do it from Flask.\nHere's how to use it:\nsqlautocode mysql://<dbuser>:<pass>@localhost:3306/<dbname> -o alchemy_models.py\n\nThis will generate the Models and place them in the alchemy_models.py file. I hope this helps\n", "You can use sqlacodegen to generate the classes needed for sqlalchemy. \npip install sqlacodegen\n\nsqlacodegen postgresql+psycopg2://username:password@host/database --outfile models.py\n\nI ran into an issue with the Base class and the query attribute. The error I received was:\nAttributeError: type object 'PaymentType' has no attribute 'query' \n\nI was able to make the sqlacodegen classes work by using a scoped_session.\nsession = scoped_session(sessionmaker(autocommit=False,autoflush=False,bind=engine))\nBase.query = session.query_property()\nprint(PaymentType.query.all())\n\n", "The db.Model.metadata.reflect(...) function is also super useful here; you don't have to define the model at all (it's just inferred from the existing database schema).\nSee stay_hungry's answer here for a specific implementation\n" ]
[ 7, 3, 0 ]
[]
[]
[ "flask", "flask_sqlalchemy", "python", "sqlalchemy" ]
stackoverflow_0029455436_flask_flask_sqlalchemy_python_sqlalchemy.txt
Q: Python Multiprocessing Locks This multiprocessing code works as expected. It creates 4 Python processes, and uses them to print the numbers 0 through 39, with a delay after each print. import multiprocessing import time def job(num): print num time.sleep(1) pool = multiprocessing.Pool(4) lst = range(40) for i in lst: pool.apply_async(job, [i]) pool.close() pool.join() However, when I try to use a multiprocessing.Lock to prevent multiple processes from printing to standard out, the program just exits immediately without any output. import multiprocessing import time def job(lock, num): lock.acquire() print num lock.release() time.sleep(1) pool = multiprocessing.Pool(4) l = multiprocessing.Lock() lst = range(40) for i in lst: pool.apply_async(job, [l, i]) pool.close() pool.join() Why does the introduction of a multiprocessing.Lock make this code not work? Update: It works when the lock is declared globally (where I did a few non-definitive tests to check that the lock works), as opposed to the code above which passes the lock as an argument (Python's multiprocessing documentation shows locks being passed as arguments). The code below has a lock declared globally, as opposed to passing as an argument in the code above. import multiprocessing import time l = multiprocessing.Lock() def job(num): l.acquire() print num l.release() time.sleep(1) pool = multiprocessing.Pool(4) lst = range(40) for i in lst: pool.apply_async(job, [i]) pool.close() pool.join() A: If you change pool.apply_async to pool.apply, you get this exception: Traceback (most recent call last): File "p.py", line 15, in <module> pool.apply(job, [l, i]) File "/usr/lib/python2.7/multiprocessing/pool.py", line 244, in apply return self.apply_async(func, args, kwds).get() File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get raise self._value RuntimeError: Lock objects should only be shared between processes through inheritance pool.apply_async is just hiding it. I hate to say this, but using a global variable is probably the simplest way for your example. Let's just hope the velociraptors don't get you. A: Other answers already provide the answer that the apply_async silently fails unless an appropriate error_callback argument is provided. I still found OP's other point valid -- the official docs do indeed show multiprocessing.Lock being passed around as a function argument. In fact, the sub-section titled "Explicitly pass resources to child processes" in Programming guidelines recommends passing a multiprocessing.Lock object as function argument instead of a global variable. And, I have been writing a lot of code in which I pass a multiprocessing.Lock as an argument to the child process and it all works as expected. So, what gives? I first investigated whether multiprocessing.Lock is pickle-able or not. In Python 3, MacOS+CPython, trying to pickle multiprocessing.Lock produces the familiar RuntimeError encountered by others. 
>>> pickle.dumps(multiprocessing.Lock()) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-7-66dfe1355652> in <module> ----> 1 pickle.dumps(multiprocessing.Lock()) /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/synchronize.py in __getstate__(self) 99 100 def __getstate__(self): --> 101 context.assert_spawning(self) 102 sl = self._semlock 103 if sys.platform == 'win32': /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/context.py in assert_spawning(obj) 354 raise RuntimeError( 355 '%s objects should only be shared between processes' --> 356 ' through inheritance' % type(obj).__name__ 357 ) RuntimeError: Lock objects should only be shared between processes through inheritance To me, this confirms that multiprocessing.Lock is indeed not pickle-able. Aside begins But, the same lock still needs to be shared across two or more python processes which will have their own, potentially different address spaces (such as when we use "spawn" or "forkserver" as start methods). multiprocessing must be doing something special to send Lock across processes. This other StackOverflow post seems to indicate that in Unix systems, multiprocessing.Lock may be implemented via named semaphores that are supported by the OS itself (outside python). Two or more python processes can then link to the same lock that effectively resides in one location outside both python processes. There may be a shared memory implementation as well. Aside ends Can we pass multiprocessing.Lock object as an argument or not? After a few more experiments and more reading, it appears that the difference is between multiprocessing.Pool and multiprocessing.Process. multiprocessing.Process lets you pass multiprocessing.Lock as an argument but multiprocessing.Pool doesn't. Here is an example that works: import multiprocessing import time from multiprocessing import Process, Lock def task(n: int, lock): with lock: print(f'n={n}') time.sleep(0.25) if __name__ == '__main__': multiprocessing.set_start_method('forkserver') lock = Lock() processes = [Process(target=task, args=(i, lock)) for i in range(20)] for process in processes: process.start() for process in processes: process.join() Note the use of __name__ == '__main__' is essential as mentioned in the "Safe importing of main module" sub-section of Programming guidelines. multiprocessing.Pool seems to use queue.SimpleQueue which puts each task in a queue and that's where pickling happens. Most likely, multiprocessing.Process is not using pickling (or doing a special version of pickling). A: I think the reason is that the multiprocessing pool uses pickle to transfer objects between the processes. However, a Lock cannot be pickled: >>> import multiprocessing >>> import pickle >>> lock = multiprocessing.Lock() >>> lp = pickle.dumps(lock) Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> lp = pickle.dumps(lock) ... RuntimeError: Lock objects should only be shared between processes through inheritance >>> See the "Picklability" and "Better to inherit than pickle/unpickle" sections of https://docs.python.org/2/library/multiprocessing.html#all-platforms A: As mentioned in this stackoverflow post, Manager.Lock() might be appropriate here. It can be passed to the Pool, because it can be pickled. 
import multiprocessing import time def job(lock, num): lock.acquire() print num lock.release() time.sleep(1) pool = multiprocessing.Pool(4) m = multiprocessing.Manager() l = m.Lock() lst = range(40) for i in lst: pool.apply_async(job, [l, i]) pool.close() pool.join()
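Another common pattern, for completeness: multiprocessing.Pool accepts an initializer that runs once in each worker process, so the lock can be handed over at worker startup instead of being pickled with every task. This is a sketch, not taken from the answers above; print(num) is used so it runs under both Python 2 and 3:

import multiprocessing
import time

def init_pool(l):
    # each worker receives the lock once, at startup, and stores it globally
    global lock
    lock = l

def job(num):
    with lock:
        print(num)
    time.sleep(1)

l = multiprocessing.Lock()
pool = multiprocessing.Pool(4, initializer=init_pool, initargs=(l,))
for i in range(40):
    pool.apply_async(job, [i])
pool.close()
pool.join()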
Python Multiprocessing Locks
This multiprocessing code works as expected. It creates 4 Python processes, and uses them to print the numbers 0 through 39, with a delay after each print. import multiprocessing import time def job(num): print num time.sleep(1) pool = multiprocessing.Pool(4) lst = range(40) for i in lst: pool.apply_async(job, [i]) pool.close() pool.join() However, when I try to use a multiprocessing.Lock to prevent multiple processes from printing to standard out, the program just exits immediately without any output. import multiprocessing import time def job(lock, num): lock.acquire() print num lock.release() time.sleep(1) pool = multiprocessing.Pool(4) l = multiprocessing.Lock() lst = range(40) for i in lst: pool.apply_async(job, [l, i]) pool.close() pool.join() Why does the introduction of a multiprocessing.Lock make this code not work? Update: It works when the lock is declared globally (where I did a few non-definitive tests to check that the lock works), as opposed to the code above which passes the lock as an argument (Python's multiprocessing documentation shows locks being passed as arguments). The code below has a lock declared globally, as opposed to passing as an argument in the code above. import multiprocessing import time l = multiprocessing.Lock() def job(num): l.acquire() print num l.release() time.sleep(1) pool = multiprocessing.Pool(4) lst = range(40) for i in lst: pool.apply_async(job, [i]) pool.close() pool.join()
[ "If you change pool.apply_async to pool.apply, you get this exception:\nTraceback (most recent call last):\n File \"p.py\", line 15, in <module>\n pool.apply(job, [l, i])\n File \"/usr/lib/python2.7/multiprocessing/pool.py\", line 244, in apply\n return self.apply_async(func, args, kwds).get()\n File \"/usr/lib/python2.7/multiprocessing/pool.py\", line 558, in get\n raise self._value\nRuntimeError: Lock objects should only be shared between processes through inheritance\n\npool.apply_async is just hiding it. I hate to say this, but using a global variable is probably the simplest way for your example. Let's just hope the velociraptors don't get you.\n", "Other answers already provide the answer that the apply_async silently fails unless an appropriate error_callback argument is provided. I still found OP's other point valid -- the official docs do indeed show multiprocessing.Lock being passed around as a function argument. In fact, the sub-section titled \"Explicitly pass resources to child processes\" in Programming guidelines recommends passing a multiprocessing.Lock object as function argument instead of a global variable. And, I have been writing a lot of code in which I pass a multiprocessing.Lock as an argument to the child process and it all works as expected. \nSo, what gives?\nI first investigated whether multiprocessing.Lock is pickle-able or not. In Python 3, MacOS+CPython, trying to pickle multiprocessing.Lock produces the familiar RuntimeError encountered by others.\n>>> pickle.dumps(multiprocessing.Lock())\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\n<ipython-input-7-66dfe1355652> in <module>\n----> 1 pickle.dumps(multiprocessing.Lock())\n\n/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/synchronize.py in __getstate__(self)\n 99\n 100 def __getstate__(self):\n--> 101 context.assert_spawning(self)\n 102 sl = self._semlock\n 103 if sys.platform == 'win32':\n\n/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/context.py in assert_spawning(obj)\n 354 raise RuntimeError(\n 355 '%s objects should only be shared between processes'\n--> 356 ' through inheritance' % type(obj).__name__\n 357 )\n\nRuntimeError: Lock objects should only be shared between processes through inheritance\n\nTo me, this confirms that multiprocessing.Lock is indeed not pickle-able. \nAside begins\nBut, the same lock still needs to be shared across two or more python processes which will have their own, potentially different address spaces (such as when we use \"spawn\" or \"forkserver\" as start methods). multiprocessing must be doing something special to send Lock across processes. This other StackOverflow post seems to indicate that in Unix systems, multiprocessing.Lock may be implemented via named semaphores that are supported by the OS itself (outside python). Two or more python processes can then link to the same lock that effectively resides in one location outside both python processes. There may be a shared memory implementation as well.\nAside ends\nCan we pass multiprocessing.Lock object as an argument or not? \nAfter a few more experiments and more reading, it appears that the difference is between multiprocessing.Pool and multiprocessing.Process. \nmultiprocessing.Process lets you pass multiprocessing.Lock as an argument but multiprocessing.Pool doesn't. 
Here is an example that works:\nimport multiprocessing\nimport time\nfrom multiprocessing import Process, Lock\n\n\ndef task(n: int, lock):\n with lock:\n print(f'n={n}')\n time.sleep(0.25)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('forkserver')\n lock = Lock()\n processes = [Process(target=task, args=(i, lock)) for i in range(20)]\n for process in processes:\n process.start()\n for process in processes:\n process.join()\n\nNote the use of __name__ == '__main__' is essential as mentioned in the \"Safe importing of main module\" sub-section of Programming guidelines. \nmultiprocessing.Pool seems to use queue.SimpleQueue which puts each task in a queue and that's where pickling happens. Most likely, multiprocessing.Process is not using pickling (or doing a special version of pickling).\n", "I think the reason is that the multiprocessing pool uses pickle to transfer objects between the processes. However, a Lock cannot be pickled:\n>>> import multiprocessing\n>>> import pickle\n>>> lock = multiprocessing.Lock()\n>>> lp = pickle.dumps(lock)\nTraceback (most recent call last):\n File \"<pyshell#3>\", line 1, in <module>\n lp = pickle.dumps(lock)\n...\nRuntimeError: Lock objects should only be shared between processes through inheritance\n>>> \n\nSee the \"Picklability\" and \"Better to inherit than pickle/unpickle\" sections of https://docs.python.org/2/library/multiprocessing.html#all-platforms\n", "As mentioned in this stackoverflow post, Manager.Lock() might be appropriate here. It can be passed to the Pool, because it can be pickled.\nimport multiprocessing\nimport time\n\ndef job(lock, num):\n lock.acquire()\n print num\n lock.release()\n time.sleep(1)\n\npool = multiprocessing.Pool(4)\nm = multiprocessing.Manager()\nl = m.Lock()\n\nlst = range(40)\nfor i in lst:\n pool.apply_async(job, [l, i])\n\npool.close()\npool.join()\n\n" ]
[ 32, 16, 9, 1 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0028267972_multiprocessing_python.txt
Q: Completely delete + purge/expunge IMAP folders using Python imaplib I'm using this script to bulk delete empty IMAP folders: https://gitlab.com/puzzlement/delete-empty-imap-dirs #!/usr/bin/env python import getpass, imaplib, sys, argparse import parseNested IGNORE = set(["INBOX", "Postponed", "Sent", "Sent Items", "Trash", "Drafts", "MQEmail.INBOX", "MQEmail.Outbox", "MQEmail.Postponed"]) def main(): parser = argparse.ArgumentParser( description='This script deletes empty remote IMAP folders.') parser.add_argument('--port', '-p', metavar='PORT', type = int, help = 'Port number to connect to (143 or 993/SSL used by default)') parser.add_argument("-quiet", "--q", action="store_false", dest="verbose", default=True, help="don't print status messages to standard error") parser.add_argument("-s", "--ssl", action="store_true", dest="ssl", default=False, help="Use SSL encryption") parser.add_argument('hostname', help="Domain name/host name of IMAP " "server to delete folders on") parser.add_argument('username', help="Username/login " "on IMAP server") args = parser.parse_args() if args.ssl: IMAPClass = imaplib.IMAP4_SSL else: IMAPClass = imaplib.IMAP4 if args.port: M = IMAPClass(args.hostname, args.port) else: M = IMAPClass(args.hostname) M.login(args.username, getpass.getpass('IMAP password for user %s at server %s: ' % (args.username, args.hostname))) listresponse = M.list() mailboxes = [] for chunk in listresponse[1]: nested = parseNested.parseNestedParens(chunk) if '\\HasNoChildren' in nested[0] and '\\NoSelect' in nested[0]: if args.verbose: sys.stderr.write("%s has no children and is not selectable, deleting\n" % nested[2]) M.delete(nested[2]) elif not '\\HasChildren' in nested[0]: mboxname = nested[2] ignoretest = set([]) ignoretest.add(mboxname) ignoretest.add("INBOX." + mboxname) ignoretest.add(mboxname.lstrip("INBOX.")) if not ignoretest.intersection(IGNORE): mailboxes.append(mboxname) for mailbox in mailboxes: reply, data = M.select(mailbox) if reply != 'OK': print >> sys.stderr, "Cannot select mailbox '%s', reply was '%s', skipping" % (mailbox, str(data[0])) continue else: nomessages = int(data[0]) M.close() if nomessages == 0: if args.verbose: sys.stderr.write("%s is empty of messages, deleting\n" % mailbox) M.delete(mailbox) M.logout() if __name__ == '__main__': main() It's using this line of code to delete the folders: M.delete(mailbox) However this doesn't seem to completely delete the folders: In Outlook, the folders remain there like nothing happened In Thunderbird, the folder names change from having black text to gray In the webmail at the host (Network Solutions), the folders are gone If I re-run this Python script again, it doesn't see the folders it deleted on the previous run I know that IMAP has 2 steps for deleting actual email messages (delete, then purge/expunge)... so I'm guessing this is something similar, but I can't find much info on how this works with folders on IMAP anywhere at all. How can I: Have this Python script see the "partially deleted" folders again, on subsequent runs Have it fully delete all traces of the folders, i.e. do any kind of purge/expunge needed A: In IMAP4Rev1, there are a couple reasons why a folder may continue to exist in some form after you DELETE it: It has child folders (it will then appear as a \NoSelect folder). 
It is a required system folder Or it is still subscribed In the latter case, the folder does not exist, but the server may continue to return it as a result to the LSUB command, which some clients use to present their hierarchy. You could add M.unsubscribe() to your folder deletion code to remove it from the subscription list as well. You may also need to use M.lsub() to find these folders.
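A minimal sketch of that combined delete-and-unsubscribe step, using the same imaplib connection object M as in the script above (the helper name is illustrative):
def purge_folder(M, mailbox):
    # remove the folder and drop it from the subscription list,
    # so it no longer shows up in LSUB-based clients
    M.delete(mailbox)
    M.unsubscribe(mailbox)

# listing folders that are still subscribed (possibly already deleted):
typ, data = M.lsub()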
Completely delete + purge/expunge IMAP folders using Python imaplib
I'm using this script to bulk delete empty IMAP folders: https://gitlab.com/puzzlement/delete-empty-imap-dirs #!/usr/bin/env python import getpass, imaplib, sys, argparse import parseNested IGNORE = set(["INBOX", "Postponed", "Sent", "Sent Items", "Trash", "Drafts", "MQEmail.INBOX", "MQEmail.Outbox", "MQEmail.Postponed"]) def main(): parser = argparse.ArgumentParser( description='This script deletes empty remote IMAP folders.') parser.add_argument('--port', '-p', metavar='PORT', type = int, help = 'Port number to connect to (143 or 993/SSL used by default)') parser.add_argument("-quiet", "--q", action="store_false", dest="verbose", default=True, help="don't print status messages to standard error") parser.add_argument("-s", "--ssl", action="store_true", dest="ssl", default=False, help="Use SSL encryption") parser.add_argument('hostname', help="Domain name/host name of IMAP " "server to delete folders on") parser.add_argument('username', help="Username/login " "on IMAP server") args = parser.parse_args() if args.ssl: IMAPClass = imaplib.IMAP4_SSL else: IMAPClass = imaplib.IMAP4 if args.port: M = IMAPClass(args.hostname, args.port) else: M = IMAPClass(args.hostname) M.login(args.username, getpass.getpass('IMAP password for user %s at server %s: ' % (args.username, args.hostname))) listresponse = M.list() mailboxes = [] for chunk in listresponse[1]: nested = parseNested.parseNestedParens(chunk) if '\\HasNoChildren' in nested[0] and '\\NoSelect' in nested[0]: if args.verbose: sys.stderr.write("%s has no children and is not selectable, deleting\n" % nested[2]) M.delete(nested[2]) elif not '\\HasChildren' in nested[0]: mboxname = nested[2] ignoretest = set([]) ignoretest.add(mboxname) ignoretest.add("INBOX." + mboxname) ignoretest.add(mboxname.lstrip("INBOX.")) if not ignoretest.intersection(IGNORE): mailboxes.append(mboxname) for mailbox in mailboxes: reply, data = M.select(mailbox) if reply != 'OK': print >> sys.stderr, "Cannot select mailbox '%s', reply was '%s', skipping" % (mailbox, str(data[0])) continue else: nomessages = int(data[0]) M.close() if nomessages == 0: if args.verbose: sys.stderr.write("%s is empty of messages, deleting\n" % mailbox) M.delete(mailbox) M.logout() if __name__ == '__main__': main() It's using this line of code to delete the folders: M.delete(mailbox) However this doesn't seem to completely delete the folders: In Outlook, the folders remain there like nothing happened In Thunderbird, the folder names change from having black text to gray In the webmail at the host (Network Solutions), the folders are gone If I re-run this Python script again, it doesn't see the folders it deleted on the previous run I know that IMAP has 2 steps for deleting actual email messages (delete, then purge/expunge)... so I'm guessing this is something similar, but I can't find much info on how this works with folders on IMAP anywhere at all. How can I: Have this Python script see the "partially deleted" folders again, on subsequent runs Have it fully delete all traces of the folders, i.e. do any kind of purge/expunge needed
[ "In IMAP4Rev1, there are a couple reasons why a folder may continue to exist in some form after you DELETE it:\n\nIt has child folders (it will then appear as a \\NoSelect folder).\nIt is a required system folder\nOr it is still subscribed\n\nIn the latter case, the folder does not exist, but the server may continue to return it as result to the LSUB command, which some clients use to present their heirarchy.\nYou could add M.unsubscribe() to your folder deletion code to remove it from the subscription list as well. You may also need to use M.lsub() to find these folders.\n" ]
[ 1 ]
[]
[]
[ "imap", "imaplib", "python" ]
stackoverflow_0074499286_imap_imaplib_python.txt
Q: Python - Add a line break after every single line I need to add a line break to every single line of text I have changed a CSV file to a text file, in that text file I need to add a line break at the end of every line/sentence I can only manage currently to add a single line break on the first line of text, I cannot work out how to do it for subsequent lines An example of what I have when I change the CSV file to Txt: Device_1 A 10.0.0.1 Device_2 A 10.0.0.2 Device_3 A 10.0.0.3 An example of what I want it to look like Device_1 A 10.0.0.1 Device_2 A 10.0.0.2 Device_3 A 10.0.0.3 An example of what it actually looks like after the python script runs Device_1 A 10.0.0.1 Device_2 A 10.0.0.2 Device_3 A 10.0.0.3 The code I have tried: import shutil source = 'DNS_Entries.csv' target = 'DNS_Entries_Copy.txt' shutil.copy(source, target) def DNS_Entries(self,DNS): with open(DNS) as f: lines = [line+'\n' for line in f.readlines()] with open(DNS+'DNS_Entries_Copy.txt', 'w') as f: f.write(''.join(lines)) I have also tried: import shutil source = 'DNS_Entries.csv' target = 'DNS_Entries_Copy.txt' shutil.copy(source, target) def DNS_Entries(self,DNS): with open(DNS) as f: lines = [line+'\n' for line in f.readlines()] with open(DNS) as source: with open(DNS+'DNS_Entries_Copy.txt', 'w') as output: for line in source: f.write(line+'\n') Both of the above scripts add a single line break at the end of the first line of text. Unfortunately it's never the same number or character at the end of the line to reference and each line can differ with the amount of characters/numbers per line. Thanks A: Sounds like a job for regex: text = """Device_1 A 10.0.0.1 Device_2 A 10.0.0.2 Device_3 A 10.0.0.3""" import re print(re.sub("\n", "\n\n", text)) Be warned: as a famous rapper once almost sang, I got 99 problems and then I tried to use regex on one of them and now I have 100.
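Applied to the files in the question, a minimal sketch without regex — str.replace is enough for a fixed substitution like this (filenames taken from the question):
with open('DNS_Entries.csv') as source:
    text = source.read()

with open('DNS_Entries_Copy.txt', 'w') as output:
    # double every newline so each line is followed by a blank line
    output.write(text.replace('\n', '\n\n'))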
Python - Add a line break after every single line
I need to add a line break to every single line of text I have changed a CSV file to a text file, in that text file I need to add a line break at the end of every line/sentance I can only manage currently to add a single line break on the first line of text, I can not work out how to do it for subsequent lines An example of what I have when I change the CSV file to Txt: Device_1 A 10.0.0.1 Device_2 A 10.0.0.2 Device_3 A 10.0.0.3 An example of what I want it to look like Device_1 A 10.0.0.1 Device_2 A 10.0.0.2 Device_3 A 10.0.0.3 An example of what it actually looks like after the python script runs Device_1 A 10.0.0.1 Device_2 A 10.0.0.2 Device_3 A 10.0.0.3 The code I have tried: import shutil source = 'DNS_Entries.csv' target = 'DNS_Entries_Copy.txt' shutil.copy(source, target) def DNS_Entries(self,DNS): with open(DNS) as f: lines = [line+'\n' for line in f.readlines()] with open(DNS+'DNS_Entries_Copy.txt', 'w') as f: f.write(''.join(lines)) I have also tried: import shutil source = 'DNS_Entries.csv' target = 'DNS_Entries_Copy.txt' shutil.copy(source, target) def DNS_Entries(self,DNS): with open(DNS) as f: lines = [line+'\n' for line in f.readlines()] with open(DNS) as source: with open(DNS+'DNS_Entries_Copy.txt', 'w') as output: for line in source: f.write(line+'\n') Both of the above scripts add a single line break at the end of the first line of text. Unfortunately its never the same number or character at the end of the line to reference and each line can differ with the amount of characters/numbers per line. Thanks
[ "Sounds like a job for regex:\ntext = \"\"\"Device_1 A 10.0.0.1\nDevice_2 A 10.0.0.2\nDevice_3 A 10.0.0.3\"\"\"\n\nimport re\nprint(re.sub(\"\\n\", \"\\n\\n\", text))\n\nBe warned: as a famous rapper once almost sang, I got 99 problems and then I tried to use regex on one of them and now I have 100.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074518821_python.txt
Q: Dynamic name to function What I want to do is to make a procedural variable name for a pygame draw function inside a for loop. But I just can't figure out how to do it. I tried to follow some guides that I saw about dynamic names, but they only showcased making a variable name for ints and strings. I want to give all of the rectangles their own name with a number at the end to show in what loop they were created. But I do not know how to change the name of the variable from one loop to the other This is the variable I need to name: pygame.draw.rect(screen, self.color, (self.pos[0] + self.scale * i , (self.pos[1] + (self.scale * x)), self.scale, self.scale)) Example: rect_(procedural number) = pygame.draw.rect(screen, self.color, (self.pos[0] + self.scale * i , (self.pos[1] + (self.scale * x)), self.scale, self.scale)) def invRect(self): tset = 0 for x in range(self.rows): for i in range(self.columns): tset += 1 this is the function I want to give a dynamic name -----> pygame.draw.rect(screen, self.color, (self.pos[0] + self.scale * i , (self.pos[1] + (self.scale * x)), self.scale, self.scale)) A: You can make a dict which stores a pointer to a function (if you do not add parentheses, the variable acts as a function). If you add parentheses, it only stores the result of the function -> its return value. funcs = dict({}) funcs[variable_name] = pygame.draw.rect afterwards you can call it as funcs[variable_name](parameters)
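For the pygame use case specifically, a dict keyed by the loop counters avoids dynamic variable names entirely; pygame.draw.rect returns a Rect object you can store. A sketch using the attribute names from the question (the self.rects name is illustrative):
def invRect(self):
    self.rects = {}
    for x in range(self.rows):
        for i in range(self.columns):
            rect = pygame.draw.rect(screen, self.color,
                (self.pos[0] + self.scale * i,
                 self.pos[1] + self.scale * x,
                 self.scale, self.scale))
            self.rects[(x, i)] = rect  # look up later as self.rects[(0, 3)]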
Dynamic name to function
What i want to do is to make a procedural variable name for a pygame draw function inside a for loop. But i just cant figure out how to do it. I tried to follow some guides that i saw about dynamic names but they only showcased making a variable name for ints and strings. I want to give all of rectangles their own name with a number at the end to show in what loop they were created. But i do not know how change the name of the variable from one loop to the other This is the variable i need to name: pygame.draw.rect(screen, self.color, (self.pos[0] + self.scale * i , (self.pos[1] + (self.scale * x)), self.scale, self.scale)) Example: rect_(procedural number) = pygame.draw.rect(screen, self.color, (self.pos[0] + self.scale * i , (self.pos[1] + (self.scale * x)), self.scale, self.scale)) def invRect(self): tset = 0 for x in range(self.rows): for i in range(self.columns): tset += 1 this is the function i want to give a dynamic name -----> pygame.draw.rect(screen, self.color, (self.pos[0] + self.scale * i , (self.pos[1] + (self.scale * x)), self.scale, self.scale))
[ "You can make a dict which stores pointer to a function (if you do not add parentheses the variable acts as a function). If you add parentheses it only stores the result of the function -> return value.\nfuncs = dict({})\n\nfuncs[variable_name] = pygame.draw.rect\n\nafterwards you can call it as\nfuncs[variable_name](parameters)\n\n" ]
[ 0 ]
[]
[]
[ "procedural", "python" ]
stackoverflow_0074519246_procedural_python.txt
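A small runnable sketch of the dictionary pattern from the answer above, using plain tuples in place of pygame rects so it runs without pygame installed; the (row, column) key carries the information that a dynamically named variable like rect_0_1 would otherwise encode:

rects = {}
rows, columns, scale = 2, 3, 10

for x in range(rows):
    for i in range(columns):
        # key (x, i) replaces a dynamically generated variable name
        rects[(x, i)] = (scale * i, scale * x, scale, scale)

print(rects[(1, 2)])  # (20, 10, 10, 10)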
Q: How to group rows in a dataframe which are in a sequence? Consider I have a data frame ID Column B 10 item 1 10 item 1 10 item 1 9 item 2 8 item 3 8 item 3 8 item 3 8 item 3 7 item 4 6 item 5 4 item 6 4 item 6 5 item 7 5 item 7 and I want to add a new column, result, if the ID column is in decreasing order. I want something like this ID Column B result 10 item 1 1 10 item 1 1 10 item 1 1 9 item 2 1 8 item 3 1 8 item 3 1 8 item 3 1 8 item 3 1 7 item 4 1 6 item 5 1 4 item 6 2 4 item 6 2 5 item 7 2 5 item 7 2 The condition is that I should group together rows whose ID column decreases by only one value. I tried using the code df["result"] = (df["X2"] > df["X2"].shift(1)).cumsum() A: You can use diff to compare the successive values; if the difference is less than -1, we start a new group, with the help of cumsum: df['result'] = df['ID'].diff().lt(-1).cumsum().add(1) Output: ID Column B result 0 10 item 1 1 1 10 item 1 1 2 10 item 1 1 3 9 item 2 1 4 8 item 3 1 5 8 item 3 1 6 8 item 3 1 7 8 item 3 1 8 7 item 4 1 9 6 item 5 1 10 4 item 6 2 11 4 item 6 2 12 5 item 7 2 13 5 item 7 2
How to group rows in a dataframe which are in a sequence?
Consider I have a data frame ID Column B 10 item 1 10 item 1 10 item 1 9 item 2 8 item 3 8 item 3 8 item 3 8 item 3 7 item 4 6 item 5 4 item 6 4 item 6 5 item 7 5 item 7 and I want to add a new column, result, if the ID column is in decreasing order. I want something like this ID Column B result 10 item 1 1 10 item 1 1 10 item 1 1 9 item 2 1 8 item 3 1 8 item 3 1 8 item 3 1 8 item 3 1 7 item 4 1 6 item 5 1 4 item 6 2 4 item 6 2 5 item 7 2 5 item 7 2 The condition is that I should group together rows whose ID column decreases by only one value. I tried using the code df["result"] = (df["X2"] > df["X2"].shift(1)).cumsum()
[ "You can use diff to compare the successive values, if >-1, this means we start a new group, with help of cumsum:\ndf['result'] = df['ID'].diff().lt(-1).cumsum().add(1)\n\nOutput:\n ID Column B result\n0 10 item 1 1\n1 10 item 1 1\n2 10 item 1 1\n3 9 item 2 1\n4 8 item 3 1\n5 8 item 3 1\n6 8 item 3 1\n7 8 item 3 1\n8 7 item 4 1\n9 6 item 5 1\n10 4 item 6 2\n11 4 item 6 2\n12 5 item 7 2\n13 5 item 7 2\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074519425_dataframe_pandas_python.txt
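The accepted one-liner from the answer above, reproduced as a self-contained script (assuming pandas is installed):

import pandas as pd

df = pd.DataFrame({'ID': [10, 10, 10, 9, 8, 8, 8, 8, 7, 6, 4, 4, 5, 5]})

# a drop of more than 1 between consecutive IDs marks the start of a new group
df['result'] = df['ID'].diff().lt(-1).cumsum().add(1)
print(df)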
Q: Python list with type strings I have got a python list Year= [‘1997JAN’, ‘1997FEB’, ‘1997MAR’‘1997APR’………………………’2021SEP’’2021OCT’] I would like to extract only the years from the above list, not the months. How can I extract only the years? Year = [1997,1997,1997,…………………2021,2021] A: If you have these dates: dates = ['1997JAN', '1997FEB', '1997MAR','1997APR', '2022NOV'] Just use this to extract the years from dates: years = [int(x[:4]) for x in dates]
Python list with type strings
I have got a python list Year= [‘1997JAN’, ‘1997FEB’, ‘1997MAR’‘1997APR’………………………’2021SEP’’2021OCT’] I would like to extract only the years from the above list, not the months. How can I extract only the years? Year = [1997,1997,1997,…………………2021,2021]
[ "If you have these dates:\ndates = ['1997JAN', '1997FEB', '1997MAR','1997APR', '2022NOV']\n\nJust use this to extract years from dates:\nyears = [int(x[:4]) for x in dates]\n\n", "You import and use the module re:\nimport re\n\nYear= ['1997JAN', '1997FEB', '1997MAR','1997APR','2021SEP','2021OCT']\n\nYears_only=[re.findall(r'\\d+', year)[0] for year in Year]\n\nYears_only\n\nOutput\n['1997', '1997', '1997', '1997', '2021', '2021']\n\n", "You can first extract the numbers with filter and str.isdigit like this:\ninput = '1997JAN'\noutput = ''.join(filter(str.isdigit, input))\nprint(output)\n# '1997' (This is still string)\n\nNow we should cast it to integer:\noutput = int(''.join(filter(str.isdigit, input)))\nprint(output)\n# 1997\n\nYou can do this in all elements in the list with map:\noutput = list(map(lambda input: int(''.join(filter(str.isdigit, input))), Year))\nprint(output)\n# [1997,1997,1997,…………………2021,2021]\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "extract", "list", "numbers", "python", "string" ]
stackoverflow_0074519289_extract_list_numbers_python_string.txt
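Both approaches suggested in the answers above, combined in one runnable snippet; the slice assumes every entry starts with a four-digit year, while the regex variant tolerates other layouts:

import re

year = ['1997JAN', '1997FEB', '2021SEP', '2021OCT']

by_slice = [int(x[:4]) for x in year]
by_regex = [int(re.findall(r'\d+', x)[0]) for x in year]

print(by_slice)  # [1997, 1997, 2021, 2021]
print(by_regex)  # [1997, 1997, 2021, 2021]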
Q: How to convert a tuple in a list to a normal list? language: Python 3.7.0 mysql-connector-python==8.0.31 I'm working on a website and have just implemented a database. The response I'm getting from the database looks like this: [('indigo', 'admin')] How do I extract the two values from the tuple in a list and convert it to a list only? Expected output: ["indigo", "admin"] Thanks, indigo A: Use tuple unpacking response = [('indigo', 'admin')] data = [*response[0]] print(data) Output: ['indigo', 'admin'] A: For this very specific example you can just access the first element of the list a = [('indigo', 'admin')] via your_tuple = a[0] which returns your_tuple = ('indigo', 'admin'). Then this tuple can be converted to a list via list(your_tuple). In general it is better not to do these steps in between. I just put them to be more pedagogical. You get the desired result with: list(a[0]) A: you can access the first elem of the origin list [('indigo', 'admin')] (it has only one elem) to get the tuple. then use list function to convert the tuple into list. A: You can use: response=[('indigo', 'admin')] data=[response[0][i] for i in [0,1]] data Output ['indigo', 'admin']
How to convert a tuple in a list to a normal list?
language: Python 3.7.0 mysql-connector-python==8.0.31 I'm working on a website and have just implemented a database. The response I'm getting from the database looks like this: [('indigo', 'admin')] How do I extract the two values from the tuple in a list and convert it to a list only? Expected output: ["indigo", "admin"] Thanks, indigo
[ "Use tuple unpacking\nresponse = [('indigo', 'admin')]\ndata = [*response[0]]\nprint(data)\n\nOutput: ['indigo', 'admin']\n", "For this very specific example you can just access the first element of the list a = [('indigo', 'admin')] via your_tuple = a[0] which returns your_tuple = ('indigo', 'admin'). Then this tuple can be converted to a list via list(your_tuple).\nIn general it is better not to do these steps in between. I just put them to be more pedagogical. You get the desired result with:\nlist(a[0])\n\n", "you can access the first elem of the origin list [('indigo', 'admin')] (it has only one elem) to get the tuple. then use list function to convert the tuple into list.\n", "You can use:\nresponse=[('indigo', 'admin')]\n\ndata=[response[0][i] for i in [0,1]]\n\ndata\n\nOutput\n['indigo', 'admin']\n\n" ]
[ 1, 1, 1, 1 ]
[]
[]
[ "database", "list", "mysql", "python", "tuples" ]
stackoverflow_0074519459_database_list_mysql_python_tuples.txt
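For a result set with several rows, the same idea from the answers above generalizes with a comprehension; a small sketch in which the second row is a hypothetical addition:

response = [('indigo', 'admin'), ('violet', 'user')]

first_row = list(response[0])             # ['indigo', 'admin']
all_rows = [list(row) for row in response]

print(first_row)
print(all_rows)  # [['indigo', 'admin'], ['violet', 'user']]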
Q: How can I return a list from a python function https://stackoverflow.com/a/8978435/1335492 ...shows how to call a python script from LibreOffice BASIC: (How can I call a Python macro in a cell formula in OpenOffice.Org Calc? ) Function invokeScriptFunc(..., args As Array, outIdxs As Array, outArgs As Array) ... invokeScriptFunc = oScript.invoke(args, outIdxs, outArgs) end Function But that doesn't work for me. I get "BASIC runtime error. Argument is not optional" for outArgs. On the other hand, "oScript.invoke(args, Array(), Array())" is not an error. The example has not been wrong for 10 years, it's unlikely to be wrong today. But I've not got an example of it working with a python script that returns a list: perhaps that is my problem. The script I am trying to use is: def MyFunc(a,b): return [a,b] I don't get the error when I try Function invokeScriptFunc(..., args As Array, outIdxs As Array) ... dim outArgs as array invokeScriptFunc = oScript.invoke(args, outIdxs, outArgs) end Function or invokeScriptFunc = oScript.invoke(args, outIdxs, array()) but either way, I'm no closer to seeing the return value I want. FWIW, when I "dim outArgs as array", .invoke returns an object with lbound=0 and ubound=-1. outArgs(0) is not valid. I'm not trying to parse the output: that comes later. I'm just trying to get it to run without error. A: The first parameter to .invoke is (all arguments). The second parameter to .invoke is a list indicating which arguments are output arguments. The third parameter to .invoke is (output arguments). Because in Java, method arguments are immutable. The Java interface returns values in (output arguments). The python interface does not use (output arguments): output is returned as the return value of .invoke. So for python scripts, outIdxs should be an empty array, and outArgs will be an empty array. This does not explain why passing an empty array to outIdxs and to outArgs is sometimes an error, depending on how the empty array has been declared. That has to do with how declarations happen and errors are defined, detected and reported in LibreOffice BASIC, which is a completely separate subject. A: Everything necessary for returning a list is explained in the documentation at https://help.libreoffice.org/latest/en-US/text/sbasic/guide/basic_2_python.html. Here is working code. Function invokeScriptFunc(args As Array) oScript = GetPythonScript("filename.py$MyFunc", "user") invokeScriptFunc = oScript.invoke(args, Array(), Array()) End Function Sub call_invokeScriptFunct result = invokeScriptFunc(Array(5, 5)) MsgBox result(1) End Sub I modified the python code so that it actually does something. def MyFunc(a, b): return [a+1, b-1] Result: 4 which is 5 minus 1
How can I return a list from a python function
https://stackoverflow.com/a/8978435/1335492 ...shows how to call a python script from LibreOffice BASIC: (How can I call a Python macro in a cell formula in OpenOffice.Org Calc? ) Function invokeScriptFunc(..., args As Array, outIdxs As Array, outArgs As Array) ... invokeScriptFunc = oScript.invoke(args, outIdxs, outArgs) end Function But that doesn't work for me. I get "BASIC runtime error. Argument is not optional" for outArgs. On the other hand, "oScript.invoke(args, Array(), Array())" is not an error. The example has not been wrong for 10 years, it's unlikely to be wrong today. But I've not got an example of it working with a python script that returns a list: perhaps that is my problem. The script I am trying to use is: def MyFunc(a,b): return [a,b] I don't get the error when I try Function invokeScriptFunc(..., args As Array, outIdxs As Array) ... dim outArgs as array invokeScriptFunc = oScript.invoke(args, outIdxs, outArgs) end Function or invokeScriptFunc = oScript.invoke(args, outIdxs, array()) but either way, I'm no closer to seeing the return value I want. FWIW, when I "dim outArgs as array", .invoke returns an object with lbound=0 and ubound=-1. outArgs(0) is not valid. I'm not trying to parse the output: that comes later. I'm just trying to get it to run without error.
[ "The first parameter to .invoke is (all arguments).\nThe second parameter to .invoke is a list indicating which arguments are output arguments.\nThe third parameter to .invoke is (output arguments). Because in Java, method arguments are immutable. The Java interface returns values in (output arguments). The python interface does not use (output arguments): output is returned as the return value of .invoke.\nSo for python scripts, outIdxs should be an empty array, and outArgs will be an empty array.\nThis does not explain why passing an empty array to outIdxs and to outArgs is sometimes an error, depending on how the empty array has been declared. That has to do with how declarations happen and errors are defined, detected and reported in LibreOffice BASIC, which is a completely separate subject.\n", "Everything necessary for returning a list is explained in the documentation at https://help.libreoffice.org/latest/en-US/text/sbasic/guide/basic_2_python.html. Here is working code.\nFunction invokeScriptFunc(args As Array)\n oScript = GetPythonScript(\"filename.py$MyFunc\", \"user\")\n invokeScriptFunc = oScript.invoke(args, Array(), Array())\nEnd Function\n\nSub call_invokeScriptFunct\n result = invokeScriptFunc(Array(5, 5))\n MsgBox result(1)\nEnd Sub\n\nI modified the python code so that it actually does something.\ndef MyFunc(a, b):\n return [a+1, b-1]\n\nResult: 4 which is 5 minus 1\n" ]
[ 0, 0 ]
[]
[]
[ "libreoffice", "libreoffice_basic", "python" ]
stackoverflow_0074507680_libreoffice_libreoffice_basic_python.txt
Q: How can I run code that my Python program stored in a string? So, I'm trying to make a script that takes code from a pastebin post and runs it. But, for some reason, it doesn't run the code, and I don't know why. Could someone explain why this won't work so I can fix the issue? I tried the following (don't mind the imports; I'm going to use those later): import os from json import loads, dumps from base64 import b64decode from urllib.request import Request, urlopen from subprocess import Popen, PIPE def get_code(): test = 'None' try: test = urlopen(Request('https://pastebin.com/raw/4dnZntN3')).read().decode() except: pass return test test = get_code() def main(): test main() The output is empty, with no errors. A: In your main function, instead of just referencing test, use exec(test): def main(): exec(test)
How can I run code that my Python program stored in a string?
So, I'm trying to make a script that takes code from a pastebin post and runs it. But, for some reason, it doesn't run the code, and I don't know why. Could someone explain why this won't work so I can fix the issue? I tried the following (don't mind the imports; I'm going to use those later): import os from json import loads, dumps from base64 import b64decode from urllib.request import Request, urlopen from subprocess import Popen, PIPE def get_code(): test = 'None' try: test = urlopen(Request('https://pastebin.com/raw/4dnZntN3')).read().decode() except: pass return test test = get_code() def main(): test main() The output is empty, with no errors.
[ "In your main function instead of just printing test\nuse exec(test)\ndef main():\n exec(test)\n\n" ]
[ 0 ]
[ "you are printing nothing and 'return test' wont be ran because it is outside of the try block\n" ]
[ -2 ]
[ "pastebin", "python", "urlopen" ]
stackoverflow_0074519531_pastebin_python_urlopen.txt
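A sketch of exec with an explicit namespace, so names defined by the downloaded code do not leak into or shadow the caller's globals; note that executing code fetched from pastebin is inherently unsafe and should only be done with sources you fully control:

code = "x = 2 + 2\nprint('result is', x)"  # stands in for the downloaded string

namespace = {}
exec(code, namespace)    # runs the string as Python code
print(namespace['x'])    # 4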
Q: Auto list fields from many-to-many model I've created a model of analysis types and then I created a table that groups several analyses into one group: class AnalysisType(models.Model): a_name = models.CharField(max_length=16,primary_key=True) a_measur = models.CharField(max_length=16) a_ref_min = models.DecimalField(max_digits=5, decimal_places=2, null=True, blank=True) a_ref_max = models.DecimalField(max_digits=5, decimal_places=2, null=True, blank=True) # analysis_group = models.ForeignKey(AnalysysGroup, on_delete=models.CASCADE, default=1) def __str__(self): return f"{self.a_name} - {self.a_measur}" class AnalysysGroup(models.Model): group_name = models.CharField(max_length=32) analysis = models.ManyToManyField(AnalysisType, blank=True) def __str__(self): return f"{self.group_name}" I want to have the option to add multiple values via the admin panel (i.e., I choose an Analysis type and then fields to fill appear below) class PatientGroupAnalysis(models.Model): patient = models.ForeignKey(Patient, on_delete=models.CASCADE) analysis_date = models.DateTimeField() analysis_type = models.ForeignKey(AnalysysGroup, on_delete=models.CASCADE, default=1) # amalysis_data = ??? def __str__(self): return f"{self.patient}: {self.analysis_date} - {self.analysis_type} - {self.analysis_data}" I tried to use analysis_data = analysis.type.objects.all() etc., but that's the wrong way. A: Try this: Admin panel with StackedInline from django.contrib import admin from .models import AnalysisType, PatientGroupAnalysis # Register your models here. class PatientGroupAnalysisInline(admin.StackedInline): model = PatientGroupAnalysis @admin.register(AnalysisType) class AnalysisTypeAdmin(admin.ModelAdmin): list_display = ["id", "a_name", "a_measur", "a_ref_min", "a_ref_max"] search_fields = ("id", "a_name") inlines = [PatientGroupAnalysisInline]
Auto list fields from many-to-many model
I've created a model of analysis types and then I created a table that groups several analyses into one group: class AnalysisType(models.Model): a_name = models.CharField(max_length=16,primary_key=True) a_measur = models.CharField(max_length=16) a_ref_min = models.DecimalField(max_digits=5, decimal_places=2, null=True, blank=True) a_ref_max = models.DecimalField(max_digits=5, decimal_places=2, null=True, blank=True) # analysis_group = models.ForeignKey(AnalysysGroup, on_delete=models.CASCADE, default=1) def __str__(self): return f"{self.a_name} - {self.a_measur}" class AnalysysGroup(models.Model): group_name = models.CharField(max_length=32) analysis = models.ManyToManyField(AnalysisType, blank=True) def __str__(self): return f"{self.group_name}" I want to have the option to add multiple values via the admin panel (i.e., I choose an Analysis type and then fields to fill appear below) class PatientGroupAnalysis(models.Model): patient = models.ForeignKey(Patient, on_delete=models.CASCADE) analysis_date = models.DateTimeField() analysis_type = models.ForeignKey(AnalysysGroup, on_delete=models.CASCADE, default=1) # amalysis_data = ??? def __str__(self): return f"{self.patient}: {self.analysis_date} - {self.analysis_type} - {self.analysis_data}" I tried to use analysis_data = analysis.type.objects.all() etc., but that's the wrong way.
[ "Try this:\nAdmin panel with StackedInline\nfrom django.contrib import admin\nfrom .models import AnalysisType, PatientGroupAnalysis\n\n# Register your models here.\n\nclass PatientGroupAnalysisInline(admin.StackedInline):\n model = PatientGroupAnalysis\n\n\n@admin.register(AnalysisType)\nclass AnalysisTypeAdmin(admin.ModelAdmin):\n list_display = [\"id\", \"a_name\", \"a_measur\", \"a_ref_min\", \"a_ref_max\"]\n search_fields = (\"id\", \"a_name\")\n inlines = [PatientGroupAnalysisInline]\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_class_based_views", "python" ]
stackoverflow_0074519224_django_django_class_based_views_python.txt
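Note that PatientGroupAnalysis holds a ForeignKey to AnalysysGroup rather than to AnalysisType, so an inline like the one in the answer would usually be attached to an AnalysysGroup admin instead; a hedged, untested sketch of that variant, which also inlines the many-to-many rows via the auto-created through model:

from django.contrib import admin
from .models import AnalysysGroup, PatientGroupAnalysis

class PatientGroupAnalysisInline(admin.StackedInline):
    model = PatientGroupAnalysis
    extra = 1

class AnalysisInline(admin.TabularInline):
    # inline rows for the AnalysysGroup.analysis ManyToManyField
    model = AnalysysGroup.analysis.through
    extra = 1

@admin.register(AnalysysGroup)
class AnalysysGroupAdmin(admin.ModelAdmin):
    list_display = ["id", "group_name"]
    inlines = [PatientGroupAnalysisInline, AnalysisInline]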
Q: How to detect when an image needs perspective transform? I have a set of images in which I need to detect which of them need a perspective transform. The images might be plain documents or photos taken with phone cameras with perspective distortion, and I need to perform a perspective transform on those. How can I detect which ones need a perspective transform in OpenCV? I can perform the perspective transform; however, I'm not able to detect when an image actually needs one.
How to detect when an image needs perspective transform?
I have a set of images in which I need to detect which of them need a perspective transform. The images might be plain documents or photos taken with phone cameras with perspective distortion, and I need to perform a perspective transform on those. How can I detect which ones need a perspective transform in OpenCV? I can perform the perspective transform; however, I'm not able to detect when an image actually needs one.
[ "This could be a possible approach:\n\nTake a reference picture (which does not require a perspective transform).\nDefine four points of interest- (x1,y1) (x2,y2) (x3,y3) (x4,y4) in your reference image. Consider these points as your destination points.\nNow in every other image that you want to check if a perspective transform is necessary, you will detect the same points of interest in those images. Lets call them source points.\nNext you have to check if the source points match your destination points. Also you will have to check if the dimensions(width & height) match.\nIf neither of the two matches(the points or the dimension), there's a need for perspective transform.\n\n" ]
[ 0 ]
[]
[]
[ "computer_vision", "opencv", "python" ]
stackoverflow_0074473938_computer_vision_opencv_python.txt
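A minimal OpenCV sketch of the transform step itself once four source points are known, following the answer above; the file name and corner coordinates are hypothetical placeholders:

import cv2
import numpy as np

img = cv2.imread('document.jpg')  # hypothetical input image

# four detected corners: top-left, top-right, bottom-right, bottom-left
src = np.float32([[40, 60], [580, 50], [600, 420], [30, 430]])
dst = np.float32([[0, 0], [600, 0], [600, 400], [0, 400]])

M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (600, 400))
cv2.imwrite('warped.jpg', warped)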
Q: FileNotFoundError: scipy.libs I'm trying to build an exe file using cx_Freeze. but when I run the resulting file I get an error: FileNotFoundError: ..\build\exe.win-amd64-3.8\lib\scipy.libs please tell me how to fix this problem? I run the following code: from cx_Freeze import setup, Executable build_exe_options = {"packages": ["torch", 'tensorflow']} target = Executable( script='sub.py' ) setup( name='my', options={'build_exe': build_exe_options}, executables=[target] ) A: I had this exact problem, this is only a short term fix but if you search for 'scipy.libs' in your python install location 'site-packages' folder (or virtual environment if you're using one) and copy/paste it into the libs folder in your build it should solve the issue. I'll edit my answer if I come across the root cause and a more permanent fix... Hope this helps!
FileNotFoundError: scipy.libs
I'm trying to build an exe file using cx_Freeze. but when I run the resulting file I get an error: FileNotFoundError: ..\build\exe.win-amd64-3.8\lib\scipy.libs please tell me how to fix this problem? I run the following code: from cx_Freeze import setup, Executable build_exe_options = {"packages": ["torch", 'tensorflow']} target = Executable( script='sub.py' ) setup( name='my', options={'build_exe': build_exe_options}, executables=[target] )
[ "I had this exact problem, this is only a short term fix but if you search for 'scipy.libs' in your python install location 'site-packages' folder (or virtual environment if you're using one) and copy/paste it into the libs folder in your build it should solve the issue.\nI'll edit my answer if I come across the root cause and a more permanent fix...\nHope this helps!\n" ]
[ 1 ]
[]
[]
[ "cx_freeze", "exe", "python", "pytorch", "tensorflow" ]
stackoverflow_0074454338_cx_freeze_exe_python_pytorch_tensorflow.txt
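Rather than copying the folder by hand after every build, cx_Freeze can bundle it automatically through the include_files build option; a hedged sketch of the setup script from the question with that option added -- the site-packages path is an assumption and must be adjusted to your environment:

import os
from cx_Freeze import setup, Executable

# path to scipy.libs inside your environment's site-packages -- adjust as needed
scipy_libs = os.path.join('env', 'Lib', 'site-packages', 'scipy.libs')

build_exe_options = {
    "packages": ["torch", "tensorflow"],
    "include_files": [(scipy_libs, os.path.join('lib', 'scipy.libs'))],
}

setup(
    name='my',
    options={'build_exe': build_exe_options},
    executables=[Executable(script='sub.py')],
)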
Q: How can I change localhost IP of azure function code when running it locally? I am new to azure function. I want to run my azure function code locally (in an azure virtual machine). I'm running my code using this line in a linux VM terminal: . env/bin/activate && func host start It was successful with this output. Azure Functions Core Tools Core Tools Version: 4.0.4785 Commit hash: N/A (64-bit) Function Runtime Version: 4.10.4.19213 Functions: update-info: [GET,POST] http://localhost:7071/update-info However, I wonder if it is possible to change localhost:7071 to the IP of my virtual machine so that it will be available online. Is it? If yes, how?; if not, how can I run HTTPS request/response program in a VM? Another question if it is possible is can I change it from http to https? If yes, how? Edited: - Adding Settings from the config files. function.json { "scriptFile": "__init__.py", "bindings": [ { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "$return" } ] } host.json { "version": "2.0", "logging": { "applicationInsights": { "samplingSettings": { "isEnabled": true, "excludedTypes": "Request" } } }, "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[2.*, 3.0.0)" }, "extensions": { "http": { "routePrefix": "" } } } Btw, I already figured out how to run it using https. I had to add --useHTTPS on the command like this: . env/bin/activate && func host start --useHttps I just need to know how to change the localhost to the VM IP address. A: Created the Azure Linux VM > Hosted Azure Functions Python Project (Http Trigger Function) on it. Enabled the Ports HTTP, HTTPS & RDP for checking using the browser by enabling the XRDP & installed the Firefox browser Glad that enabling the HTTPS flag is resolved by yourself. I'm able to get the Function App Result with the local host and private IP address in Azure Linux VM:
How can I change localhost IP of azure function code when running it locally?
I am new to azure function. I want to run my azure function code locally (in an azure virtual machine). I'm running my code using this line in a linux VM terminal: . env/bin/activate && func host start It was successful with this output. Azure Functions Core Tools Core Tools Version: 4.0.4785 Commit hash: N/A (64-bit) Function Runtime Version: 4.10.4.19213 Functions: update-info: [GET,POST] http://localhost:7071/update-info However, I wonder if it is possible to change localhost:7071 to the IP of my virtual machine so that it will be available online. Is it? If yes, how?; if not, how can I run HTTPS request/response program in a VM? Another question if it is possible is can I change it from http to https? If yes, how? Edited: - Adding Settings from the config files. function.json { "scriptFile": "__init__.py", "bindings": [ { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "$return" } ] } host.json { "version": "2.0", "logging": { "applicationInsights": { "samplingSettings": { "isEnabled": true, "excludedTypes": "Request" } } }, "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[2.*, 3.0.0)" }, "extensions": { "http": { "routePrefix": "" } } } Btw, I already figured out how to run it using https. I had to add --useHTTPS on the command like this: . env/bin/activate && func host start --useHttps I just need to know how to change the localhost to the VM IP address.
[ "\nCreated the Azure Linux VM > Hosted Azure Functions Python Project (Http Trigger Function) on it.\nEnabled the Ports HTTP, HTTPS & RDP for checking using the browser by enabling the XRDP & installed the Firefox browser\n\nGlad that enabling the HTTPS flag is resolved by yourself.\nI'm able to get the Function App Result with the local host and private IP address in Azure Linux VM:\n\n\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_functions", "json", "linux", "python" ]
stackoverflow_0074360611_azure_azure_functions_json_linux_python.txt
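If your Core Tools version offers no way to change the bind address, one common workaround (not specific to Azure Functions) is an SSH tunnel from the client machine to the VM, which reaches the host's localhost-only port without opening it publicly; the user name and IP below are placeholders:

ssh -L 7071:localhost:7071 azureuser@<vm-public-ip>

After that, http://localhost:7071/update-info on the client machine is forwarded to the function running on the VM.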
Q: Python function repeating itself after if statement satisfied I am a beginner Python user and I am stuck with a time-calculator program I am trying to create as part of an online certification. The program calculates, in AM/PM format, the time obtained by adding a duration to the initial time, together with the correct weekday. I have been having problems with this part because, for reasons unknown to me, the function restarts after having found the new weekday, assigns the integer "2" to the variable weekday, and then breaks. Here is the code snippet: day_names = [ "monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ] def weekday_calculator( weekday, day_count, new_hour, new_minute,): # this function calculates the right weekday for the new time > print(f"starting weekday:{weekday}") > weekday = weekday.lower() > starting_day_index = day_names.index(weekday) > print(f"This is the starting day of the week's index: {starting_day_index}") > print(f"This is the day count {day_count}") > weekday_calculate = starting_day_index + day_count > if weekday_calculate <= 6: >> new_weekday = day_names[weekday_calculate] # to be fixed >> print(f"This is the new weekday {new_weekday}") >> result_printer(new_hour, new_minute, new_am_pm, day_count, new_weekday) > elif weekday_calculate > 6: >> print("let's adjust the weekday") >> adjust_weekday(define_weekday) weekday_calculator(weekday = "tuesday", daycount = 1) #this is only the data relevant to this snippet This is the expected output: Let's calculate the weekday starting weekday:tuesday This is the starting day of the week's index: 1 This is the day count 1 This is the new weekday Wednesday (proceeds to the next function) This is what has been happening Let's calculate the weekday starting weekday:tuesday This is the starting day of the week's index: 1 This is the day count 1 This is the new weekday wednesday tuesday starting weekday:2 Traceback (most recent call last) line 52, in weekday_calculator weekday = weekday.lower() AttributeError: 'int' object has no attribute 'lower' # of course that is because you cannot call lower on an integer Does anyone have an idea of how to fix this problem? I have no idea where the value "2" for weekday is coming from, nor why the function repeats itself instead of jumping directly to the next one at the end of the if statement. I have tried to change the structure of the function and the variable names so that the program does not confuse weekday and new weekday, but to no avail.
As you rightly requested, I have edited the post and added the rest of the code: day_names = [ "monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ] timeday_am = ["PM", "AM"] * 100 timeday_pm = ["AM", "PM"] * 100 weekday = 0 def result_printer(new_hour, new_minute, new_am_pm, day_count, weekday): new_time = [new_hour, new_minute] for number in new_time: if number < 10: return f"0{number}" if day_count != 0: if day_count == 1: day = "(next day)" else: day = f"({day_count} days later)" print(f"{new_time[0]}:{new_time[1]} {new_am_pm}, {weekday} {day}") def adjust_weekday( define_weekday, ): # this is to adjust the weekday index if it is more than 6 adjusted_weekday = day_names[define_weekday % len(day_names)] print((adjusted_weekday)) def weekday_calculator( weekday, day_count, new_hour, new_minute, new_am_pm ): # this function calculates the right weekday for the new time print(f"starting weekday:{weekday}") weekday = weekday.lower() starting_day_index = day_names.index(weekday) print(f"This is the starting day of the week's index: {starting_day_index}") print(f"This is the day count {day_count}") weekday_calculate = starting_day_index + day_count if weekday_calculate <= 6: new_weekday = day_names[weekday_calculate] # to be fixed print(f"This is the new weekday {new_weekday}") result_printer(new_hour, new_minute, new_am_pm, day_count, new_weekday) elif weekday_calculate > 6: print("let's adjust the weekday") adjust_weekday(define_weekday) def day_calculator( new_hour, new_minute, new_am_pm, am_pm, weekday, day_count ): # this function calculates the right AM PM of the new hour, and the number of days between times (if applicable) day_count = day_count if new_am_pm == "AM": new_am_pm = timeday_am[am_pm] print(f"This is the new time of the day list {new_am_pm}") day_new = timeday_am[:am_pm] print(f"this is the new day {day_new}") day_count = day_new.count( "AM" ) # this is to count how many days have passed from the starting day print(f"this is the day count {day_count}") elif new_am_pm == "PM": new_am_pm_day = timeday_pm[am_pm] print(f"This is the new time of the day {new_am_pm}") day_new = timeday_pm[:am_pm] print(f"this is how it is calculated {day_new}") day_count = day_new.count("AM") print(f"this is the day count {day_count}") if weekday is not None: print(weekday) print("Let's calculate the weekday") weekday_calculator(weekday, day_count, new_hour, new_minute, new_am_pm) result_printer(new_hour, new_minute, new_am_pm, day_count, weekday) def time_calculator(init_time: str, add_time: str, weekday: str): day_count = 0 new_am_pm = init_time.split(" ")[1] init_hour = int(init_time.split(":")[0]) init_minute = init_time.split(":")[1] init_minute = int( init_minute.split(" ")[0] ) # this is to avoid to include AM/PM in the string #this results in problem when python cannot convert string to integer because of formatting ex 00: add_hour = int(add_time.split(":")[0]) add_minute = int(add_time.split(":")[1]) print( f"1. 
This is the hour to be added: {init_hour} and this is the minute: {init_minute}" ) # @ control string new_minute = init_minute + add_minute new_hour = init_hour + add_hour if new_minute >= 60: new_minute -= 60 new_hour = new_hour + 1 # calculate am or pm am_pm = ( new_hour // 12 ) # this starts the process to calculate the right time of the day and day of the week, floor division rounds the number down print(f"This is {am_pm} am pm coefficent") # @control string print(type(am_pm)) # adapt new hour to hour format 0-12 if new_hour > 12: new_hour = new_hour - (am_pm * 12) print( f"This is the new hour: {new_hour} and this is the new minute: {new_minute}" ) # @ control string if am_pm < 1: new_am_pm = new_am_pm else: day_calculator(new_hour, new_minute, new_am_pm, am_pm, weekday, day_count) if weekday is not None: weekday_calculator(new_hour, new_minute, new_am_pm, weekday, day_count) result_printer(new_hour, new_minute, new_am_pm, day_count, weekday) time_calculator("3:10 PM", "23:20", "tuesday") A: When I ran your code removing result_printer and adjust_weekday calls as i don't have it in the code you sent, my output is starting weekday:tuesday This is the starting day of the week's index: 1 This is the day count 1 This is the new weekday wednesday I believe the problem comes from the other functions result_printer and adjust_weekday, returning an index instead of the str value. Maybe send this two functions code to help us to find the problem EDIT: so here is your problem: in time_calculator(), you call weekday_calculator(new_hour, new_minute, new_am_pm, weekday, day_count) but weekday_calculator is defined with parameter weekday, day_count, new_hour, new_minute, new_am_pm, you cannot use the parameters so in an other order, you should specify your parameters names as weekday_calculator(new_hour=new_hour, new_minute=new_minute, new_am_pm=new_am_pm, weekday=weekday, day_count=day_count) or use the parameters in the same order than defined:weekday_calculator(weekday, day_count, new_hour, new_minute, new_am_pm) A: The problem is in your time_calculator function. In the last few lines you first call day_calculator(new_hour, new_minute, new_am_pm, am_pm, weekday, day_count) which in turn calls weekday_calculator (this is fine). However, in the next two lines you also call weekday_calculator with completely wrong arguments. Your function defines the arguments as weekday, day_count, new_hour, new_minute, new_am_pm. But looking at the call to the function, you pass new_hour, new_minute, new_am_pm, weekday, day_count. A: your problem is here: starting weekday:2 should be string, not int.
Python function repeating itself after if statement satisfied
I am a beginner Python user and I am stuck with a time-calculator program I am trying to create as part of an online certification. The program calculates, in AM/PM format, the time obtained by adding a duration to the initial time, together with the correct weekday. I have been having problems with this part because, for reasons unknown to me, the function restarts after having found the new weekday, assigns the integer "2" to the variable weekday, and then breaks. Here is the code snippet: day_names = [ "monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ] def weekday_calculator( weekday, day_count, new_hour, new_minute,): # this function calculates the right weekday for the new time > print(f"starting weekday:{weekday}") > weekday = weekday.lower() > starting_day_index = day_names.index(weekday) > print(f"This is the starting day of the week's index: {starting_day_index}") > print(f"This is the day count {day_count}") > weekday_calculate = starting_day_index + day_count > if weekday_calculate <= 6: >> new_weekday = day_names[weekday_calculate] # to be fixed >> print(f"This is the new weekday {new_weekday}") >> result_printer(new_hour, new_minute, new_am_pm, day_count, new_weekday) > elif weekday_calculate > 6: >> print("let's adjust the weekday") >> adjust_weekday(define_weekday) weekday_calculator(weekday = "tuesday", daycount = 1) #this is only the data relevant to this snippet This is the expected output: Let's calculate the weekday starting weekday:tuesday This is the starting day of the week's index: 1 This is the day count 1 This is the new weekday Wednesday (proceeds to the next function) This is what has been happening Let's calculate the weekday starting weekday:tuesday This is the starting day of the week's index: 1 This is the day count 1 This is the new weekday wednesday tuesday starting weekday:2 Traceback (most recent call last) line 52, in weekday_calculator weekday = weekday.lower() AttributeError: 'int' object has no attribute 'lower' # of course that is because you cannot call lower on an integer Does anyone have an idea of how to fix this problem? I have no idea where the value "2" for weekday is coming from, nor why the function repeats itself instead of jumping directly to the next one at the end of the if statement. I have tried to change the structure of the function and the variable names so that the program does not confuse weekday and new weekday, but to no avail.
As you rightly requested, I have edited the post and added the rest of the code: day_names = [ "monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ] timeday_am = ["PM", "AM"] * 100 timeday_pm = ["AM", "PM"] * 100 weekday = 0 def result_printer(new_hour, new_minute, new_am_pm, day_count, weekday): new_time = [new_hour, new_minute] for number in new_time: if number < 10: return f"0{number}" if day_count != 0: if day_count == 1: day = "(next day)" else: day = f"({day_count} days later)" print(f"{new_time[0]}:{new_time[1]} {new_am_pm}, {weekday} {day}") def adjust_weekday( define_weekday, ): # this is to adjust the weekday index if it is more than 6 adjusted_weekday = day_names[define_weekday % len(day_names)] print((adjusted_weekday)) def weekday_calculator( weekday, day_count, new_hour, new_minute, new_am_pm ): # this function calculates the right weekday for the new time print(f"starting weekday:{weekday}") weekday = weekday.lower() starting_day_index = day_names.index(weekday) print(f"This is the starting day of the week's index: {starting_day_index}") print(f"This is the day count {day_count}") weekday_calculate = starting_day_index + day_count if weekday_calculate <= 6: new_weekday = day_names[weekday_calculate] # to be fixed print(f"This is the new weekday {new_weekday}") result_printer(new_hour, new_minute, new_am_pm, day_count, new_weekday) elif weekday_calculate > 6: print("let's adjust the weekday") adjust_weekday(define_weekday) def day_calculator( new_hour, new_minute, new_am_pm, am_pm, weekday, day_count ): # this function calculates the right AM PM of the new hour, and the number of days between times (if applicable) day_count = day_count if new_am_pm == "AM": new_am_pm = timeday_am[am_pm] print(f"This is the new time of the day list {new_am_pm}") day_new = timeday_am[:am_pm] print(f"this is the new day {day_new}") day_count = day_new.count( "AM" ) # this is to count how many days have passed from the starting day print(f"this is the day count {day_count}") elif new_am_pm == "PM": new_am_pm_day = timeday_pm[am_pm] print(f"This is the new time of the day {new_am_pm}") day_new = timeday_pm[:am_pm] print(f"this is how it is calculated {day_new}") day_count = day_new.count("AM") print(f"this is the day count {day_count}") if weekday is not None: print(weekday) print("Let's calculate the weekday") weekday_calculator(weekday, day_count, new_hour, new_minute, new_am_pm) result_printer(new_hour, new_minute, new_am_pm, day_count, weekday) def time_calculator(init_time: str, add_time: str, weekday: str): day_count = 0 new_am_pm = init_time.split(" ")[1] init_hour = int(init_time.split(":")[0]) init_minute = init_time.split(":")[1] init_minute = int( init_minute.split(" ")[0] ) # this is to avoid to include AM/PM in the string #this results in problem when python cannot convert string to integer because of formatting ex 00: add_hour = int(add_time.split(":")[0]) add_minute = int(add_time.split(":")[1]) print( f"1. 
This is the hour to be added: {init_hour} and this is the minute: {init_minute}" ) # @ control string new_minute = init_minute + add_minute new_hour = init_hour + add_hour if new_minute >= 60: new_minute -= 60 new_hour = new_hour + 1 # calculate am or pm am_pm = ( new_hour // 12 ) # this starts the process to calculate the right time of the day and day of the week, floor division rounds the number down print(f"This is {am_pm} am pm coefficent") # @control string print(type(am_pm)) # adapt new hour to hour format 0-12 if new_hour > 12: new_hour = new_hour - (am_pm * 12) print( f"This is the new hour: {new_hour} and this is the new minute: {new_minute}" ) # @ control string if am_pm < 1: new_am_pm = new_am_pm else: day_calculator(new_hour, new_minute, new_am_pm, am_pm, weekday, day_count) if weekday is not None: weekday_calculator(new_hour, new_minute, new_am_pm, weekday, day_count) result_printer(new_hour, new_minute, new_am_pm, day_count, weekday) time_calculator("3:10 PM", "23:20", "tuesday")
[ "When I ran your code removing result_printer and adjust_weekday calls as i don't have it in the code you sent, my output is\nstarting weekday:tuesday\nThis is the starting day of the week's index: 1\nThis is the day count 1\nThis is the new weekday wednesday\n\nI believe the problem comes from the other functions result_printer and adjust_weekday, returning an index instead of the str value.\nMaybe send this two functions code to help us to find the problem\nEDIT:\nso here is your problem:\nin time_calculator(), you call weekday_calculator(new_hour, new_minute, new_am_pm, weekday, day_count)\nbut weekday_calculator is defined with parameter weekday, day_count, new_hour, new_minute, new_am_pm, you cannot use the parameters so in an other order, you should specify your parameters names as\nweekday_calculator(new_hour=new_hour, new_minute=new_minute, new_am_pm=new_am_pm, weekday=weekday, day_count=day_count)\n\nor use the parameters in the same order than defined:weekday_calculator(weekday, day_count, new_hour, new_minute, new_am_pm)\n", "The problem is in your time_calculator function. In the last few lines you first call day_calculator(new_hour, new_minute, new_am_pm, am_pm, weekday, day_count) which in turn calls weekday_calculator (this is fine). However, in the next two lines you also call weekday_calculator with completely wrong arguments.\nYour function defines the arguments as weekday, day_count, new_hour, new_minute, new_am_pm.\nBut looking at the call to the function, you pass new_hour, new_minute, new_am_pm, weekday, day_count.\n", "your problem is here: starting weekday:2 should be string, not int.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "function", "if_statement", "python", "repeat" ]
stackoverflow_0074519394_function_if_statement_python_repeat.txt
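The root cause identified in the answers above -- positional arguments passed in a different order than the function defines them -- can be avoided by calling with keyword arguments; a minimal sketch:

def weekday_calculator(weekday, day_count, new_hour, new_minute, new_am_pm):
    print(weekday, day_count, new_hour, new_minute, new_am_pm)

# a positional call in the wrong order would bind new_hour to weekday;
# keywords make the binding explicit regardless of order
weekday_calculator(new_hour=2, new_minute=30, new_am_pm='PM',
                   weekday='tuesday', day_count=1)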
Q: How to make an IF statement with conditions articulated with OR that stops as soon as the first True condition is reached? Let's take an example: I would like to check if the variable s is a string with length equal to or less than 3. I tried the following: if (not isinstance(s,str)) | (len(s)>3) : print("The value of s is not correct : must be a string, with length equal or less than 3") But it is not correct, as the code evaluates the second condition regardless of the result of the first one. For example, with s = 2, the code returns the error: object of type 'int' has no len() I would have thought that since the first condition is True, the rest of the line would not be evaluated. How can I make the code stop evaluating as soon as the first True condition is reached? A: | is a bitwise or; use the keyword or instead. The or will short-circuit, as you correctly mention in your question, so if s is not a string the second part will not be evaluated, preventing the error of trying to apply len to a non-string object. if not isinstance(s, str) or len(s) > 3: print("The value of s is not correct : must be a string, with length equal or less than 3")
How to make an IF statement with conditions articulated with OR that stops as soon as the first True condition is reached?
Let's take an example: I would like to check if the variable s is a string with length equal to or less than 3. I tried the following: if (not isinstance(s,str)) | (len(s)>3) : print("The value of s is not correct : must be a string, with length equal or less than 3") But it is not correct, as the code evaluates the second condition regardless of the result of the first one. For example, with s = 2, the code returns the error: object of type 'int' has no len() I would have thought that since the first condition is True, the rest of the line would not be evaluated. How can I make the code stop evaluating as soon as the first True condition is reached?
[ "| is a bitwise or. use the keyword or instead.\nThe or will shortcircuit as you correctly mention in your question, so if s is not a string the second part will not evaluate, preventing the error of trying to apply len to a non-string object.\nif not isinstance(s, str) or len(s) > 3:\n print(\"The value of s is not correct : must be a string, with length equal or less than 3\")\n\n" ]
[ 2 ]
[]
[]
[ "conditional_statements", "if_statement", "python" ]
stackoverflow_0074519773_conditional_statements_if_statement_python.txt
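A small demonstration of the difference described in the answer above: or short-circuits, so len() is never reached for a non-string, while | evaluates both operands first and raises:

s = 2

if not isinstance(s, str) or len(s) > 3:         # short-circuits: len(s) never runs
    print("invalid s")

try:
    if (not isinstance(s, str)) | (len(s) > 3):  # both sides evaluated eagerly
        print("invalid s")
except TypeError as e:
    print("bitwise | raised:", e)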
Q: setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 When I try to install odoo-server, I got the following error: error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Could anyone help me to solve this issue? A: I encountered the same problem in college having installed Linux Mint for the main project of my final year, the third solution below worked for me. When encountering this error please note before the error it may say you are missing a package or header file — you should find those and install them and verify if it works (e.g. ssl → libssl). For Python 2.x use: sudo apt-get install python-dev For Python 2.7 use: sudo apt-get install libffi-dev For Python 3.x use: sudo apt-get install python3-dev or for a specific version of Python 3, replace x with the minor version in sudo apt-get install python3.x-dev A: Python.h is nothing but a header file. It is used by gcc to build applications. You need to install a package called python-dev. This package includes header files, a static library and development tools for building Python modules, extending the Python interpreter or embedding Python in applications. enter: $ sudo apt-get install python-dev or # apt-get install python-dev see http://www.cyberciti.biz/faq/debian-ubuntu-linux-python-h-file-not-found-error-solution/ A: Try installing these packages. sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-pil python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev libssl-dev sudo easy_install greenlet sudo easy_install gevent A: You need to install these packages: sudo apt-get install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev libsasl2-dev libffi-dev A: For Python 3.4 use: sudo apt-get install python3.4-dev For Python 3.5 use: sudo apt-get install python3.5-dev For Python 3.6 use: sudo apt-get install python3.6-dev For Python 3.7 use: sudo apt-get install python3.7-dev For Python 3.8 use: sudo apt-get install python3.8-dev ... and so on ... A: $ sudo apt-get install gcc $ sudo apt-get install python-dateutil python-docutils python-feedparser python-gdata python-jinja2 python-ldap python-libxslt1 python-lxml python-mako python-mock python-openid python-psycopg2 python-psutil python-pybabel python-pychart python-pydot python-pyparsing python-reportlab python-simplejson python-tz python-unittest2 python-vatnumber python-vobject python-webdav python-werkzeug python-xlwt python-yaml python-zsi OR TRY THIS: $ sudo apt-get install libxml2-dev libxslt1-dev A: For me none of above worked. However, I solved problem with installing libssl-dev. sudo apt-get install libssl-dev This might work if you have same error message as in my case: fatal error: openssl/opensslv.h: No such file or directory ... .... command 'x86_64-linux-gnu-gcc' failed with exit status 1 A: In my case, it was missing package libffi-dev. What worked: sudo apt-get install libffi-dev A: In my case following command did the magic sudo apt-get install gcc python3-dev if the above command didn't work try following two commands sudo apt-get install gcc python-dev this is the case when you want it to install for the python version set as default python in your machine. Or sudo apt-get install gcc python3.x-dev where python3.x represent the version number of python installed on your machine. 
A: on ubuntu 14.04: sudo apt-file search ffi.h returned: chipmunk-dev: /usr/include/chipmunk/chipmunk_ffi.h ghc-doc: /usr/share/doc/ghc-doc/html/users_guide/ffi.html jython-doc: /usr/share/doc/jython-doc/html/javadoc/org/python/modules/jffi/jffi.html libffi-dev: /usr/include/x86_64-linux-gnu/ffi.h libffi-dev: /usr/share/doc/libffi6/html/Using-libffi.html libgirepository1.0-dev: /usr/include/gobject-introspection-1.0/girffi.h libgirepository1.0-doc: /usr/share/gtk-doc/html/gi/gi-girffi.html mlton-basis: /usr/lib/mlton/include/basis-ffi.h pypy-doc: /usr/share/doc/pypy-doc/html/config/objspace.usemodules._ffi.html pypy-doc: /usr/share/doc/pypy-doc/html/config/objspace.usemodules._rawffi.html pypy-doc: /usr/share/doc/pypy-doc/html/rffi.html I chose to install libffi-dev sudo apt-get install libffi-dev worked perfectly A: In my case pip was unable to install libraries, I tried solutions given above, but none worked but the below worked for me: sudo apt upgrade gcc A: Despite being an old question, I'll add my opinion. I think the right answer depends on the error message of the gcc compiler, something like "Missing xxxx.h" This might help in some cases: sudo apt-get install build-essential python-dev A: This was enough for me: sudo apt-get install build-essential A: In Linux Mint with python3 $ sudo apt install build-essential python3-dev should be enough A: below answer worked for me, you can try: sudo apt-get install python3-lxml A: Error : error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Executing sudo apt-get install python-dev solved the error. A: After upgrade my computer with pip today, and check the other answers here, I can tell you that it could be ANYTHING. You should check error by error, looking for what's the specific library that you need. In my case, these were the libraries that I had to install: $ sudo apt-get install libssl-dev $ sudo apt-get install libffi-dev $ sudo apt-get install libjpeg-dev $ sudo apt-get install libvirt-dev $ sudo apt-get install libsqlite3-dev $ sudo apt-get install libcurl4-openssl-dev $ sudo apt-get install libxml2-dev libxslt1-dev python-dev HTH A: This works for me, 12.04, python2.7.6 sudo apt-get install libxml2 libxml2-dev libxslt1-dev sudo apt-get install lxml A: Using Ubuntu 14.04 LTS with a virtualenv running python 3.5, I had to do: sudo apt-get install python3.5-dev The other commands: sudo apt-get install python-dev sudo apt-get install python3-dev Did not help. I think this is because the virtualenv needs to rely on the system-wide python-dev package and it must match the virtualenv's python version. However, using the above commands installs python-dev for python 2.x and the python 3.x that comes with Ubuntu 14.04 which is 3.4, not 3.5. A: In my case the command sudo apt-get install unixodbc-dev resolved the issue. I was getting an error specific to the sql.h header file. A: Tip: Please do not consider this as an answer. Just to help someone else too. I had similar issue while installing psycopg2. I installedbuild-essential, python-dev and also libpq-dev but it thrown same error. error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 As I was in hurry in deployment so finally just copied full line from @user3440631's answer. 
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev And It worked like a charm. but could not find which package has resolved my issue. Please update the comment if anyone have idea about psycopg2 dependancy package from above command. A: first you need to find out what the actual problem was. what you're seeing is that the C compiler failed but you don't yet know why. scroll up to where you get the original error. in my case, trying to install some packages using pip3, I found: Complete output from command /usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-4u59c_8b/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-itjeh3va-record/install-record.txt --single-version-externally-managed --compile --user: c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory #include <ffi.h> ^ compilation terminated. so in my case I needed to install libffi-dev. A: error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Lot of time I got the same error when installing M2Crypto & pygraphviz and installed all the things mention in the approved answer. But this below line solved all my problems with the other packages in approved answer too. sudo apt-get install libssl-dev swig sudo apt-get install -y graphviz-dev This swig package saved my life as the solution for M2Crypto and graphviz-dev for pygraphviz. I hope this will help someone. A: sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev sudo easy_install greenlet sudo easy_install gevent A: For me I had to make sure I was using the correct version of cryptography. pip.freeze had and older version and once I used the latest the problem when away. A: For Centos 7 Use below command to install Python Development Package Python 2.7 sudo yum install python-dev Python 3.4 sudo yum install python34-devel Still if your problem not solved then try installing below packages - sudo yum install libffi-devel sudo yum install openssl-devel A: None of the above answers worked for me when I had the same issue on my Ubuntu 14.04 However, this solved the error: sudo apt-get install python-numpy libicu-dev A: For me it helped to install libxml2-dev and libxslt1-dev. 
sudo apt-get install libxml2-dev A: My stack was like that: > > ^ > > In file included from /usr/include/openssl/ssl.h:156:0, > > from OpenSSL/crypto/x509.h:17, > > from OpenSSL/crypto/crypto.h:17, > > from OpenSSL/crypto/crl.c:3: > > /usr/include/openssl/x509.h:751:15: note: previous declaration of ‘X509_REVOKED_dup’ was here > > X509_REVOKED *X509_REVOKED_dup(X509_REVOKED *rev); > > ^ > > error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 > > > > ---------------------------------------- Rolling back uninstall of > pyOpenSSL Command "/home/marta/env/pb/bin/python -u -c > "import setuptools, > > tokenize;__file__='/tmp/pip-build-14ekWY/pyOpenSSL/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', > > '\n');f.close();exec(compile(code, __file__, 'exec'))" install > > --record /tmp/pip-2HERvW-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/marta/env/pb/include/site/python2.7/pyOpenSSL" failed with error > > code 1 in /tmp/pip-build-14ekWY/pyOpenSSL/ in the same case, please consider the typo (bug) in one of the installation files and edit it manually by changing "X509_REVOKED_dup" to "X509_REVOKED_dupe" (no quotes). I have edited the x509.h file: sed -e's/X509_REVOKED_dup/X509_REVOKED_dupe/g' -i usr/include/openssl/x509.h and it worked for me, but please consult with the post linked below, as they edited another file: sed -e's/X509_REVOKED_dup/X509_REVOKED_dupe/g' -i OpenSSL/crypto/crl.c https://groups.google.com/forum/#!topic/kivy-users/Qt0jNIOACZc A: For python3: sudo apt-get install python3-dev \ build-essential libssl-dev libffi-dev \ libxml2-dev libxslt1-dev zlib1g-dev \ python3-pip For Python2: sudo apt-get install python2-dev \ build-essential libssl-dev libffi-dev \ libxml2-dev libxslt1-dev zlib1g-dev \ python2-pip A: Like Robin Winslow says in a comment : I found my solution over here: stackoverflow.com/a/5178444/613540 In my case, my complete error message was : /usr/bin/ld: cannot find -lz collect2: error: ld returned 1 exit status error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 I was trying to install torrench : sudo python3 setup.py install With given stackoverflow link, I solve this issue by : sudo apt install zlib1g-dev Note that the following packages were already installed : libxslt1-dev is already the newest version. python3-dev is already the newest version. libxml2-dev is already the newest version. Hope that will help ! A: In my case, it was oursql that was causing the same(generic) error as below. In file included from oursqlx/oursql.c:236:0: oursqlx/compat.h:13:19: fatal error: mysql.h: No such file or directory compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Failed building wheel for oursql Running setup.py clean for oursql So, I knew that I need to have libmysqlcppconn-dev package. sudo apt-get install libmysqlcppconn-dev And all good! 
A: This worked for me: sudo apt install zlib1g-dev A: In addition to some other helpful answers, if docker-compose brought you here: with your venv set, run: easy_install docker-compose A: While installing ssdeep I was getting the same error. Please check: the actual error can be something else. I was also getting the same message, but above this error there was another one, fuzzy.h: no file or directory, and then I tried this: apt-get -y install libfuzzy-dev Worked like a charm. A: TL;DR: run the below command sudo apt-get install python2-dev gcc I had this problem when trying to pip install a module for python2.7. Lots of answers mention that a fix for this is sudo apt-get install python-dev. However, this did not work for me, as the package was not found. However, the command shown at the top of this comment exists, and I was finally able to pip install the module. A: This problem can originate from any one of several missing packages, especially in newer builds. creating build/temp.linux-x86_64-cpython-39/src x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/vipin/.cache/pypoetry/virtualenvs/bbox-drf-QjIedbEI-py3.9/include -I/usr/include/python3.9 -c src/base64.c -o build/temp.linux-x86_64-cpython-39/src/base64.o x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/vipin/.cache/pypoetry/virtualenvs/bbox-drf-QjIedbEI-py3.9/include -I/usr/include/python3.9 -c src/kerberos.c -o build/temp.linux-x86_64-cpython-39/src/kerberos.o In file included from src/kerberos.c:20: src/kerberosbasic.h:17:10: fatal error: gssapi/gssapi.h: No such file or directory 17 | #include <gssapi/gssapi.h> | ^~~~~~~~~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] Most of us search using the second-to-last line: error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 But if you look closely, a few lines above, you can actually see which package is missing. It clearly states that a directory or file is missing: gssapi/gssapi.h: No such file or directory Searching for what provides that header could be the solution you are looking for.
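As a quick, hedged sanity check before re-running pip, you can also probe from Python whether the shared libraries behind some of the headers mentioned above are visible at all. This is a minimal sketch only; note that finding the runtime library does not guarantee the corresponding -dev header package is installed, and the library names here are just the examples from the answers above:

from ctypes.util import find_library

# None suggests the library (and very likely its -dev headers) is missing;
# a name being returned still does not prove the headers themselves exist.
for lib in ("ffi", "ssl", "xml2", "z"):
    print(lib, "->", find_library(lib))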
setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
When I try to install odoo-server, I get the following error: error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Could anyone help me solve this issue?
[ "I encountered the same problem in college having installed Linux Mint for the main project of my final year, the third solution below worked for me.\nWhen encountering this error please note before the error it may say you are missing a package or header file — you should find those and install them and verify if it works (e.g. ssl → libssl).\nFor Python 2.x use:\nsudo apt-get install python-dev\n\nFor Python 2.7 use:\nsudo apt-get install libffi-dev\n\nFor Python 3.x use:\nsudo apt-get install python3-dev\n\nor for a specific version of Python 3, replace x with the minor version in\nsudo apt-get install python3.x-dev\n\n", "\nPython.h is nothing but a header file. It is used by gcc to build applications. You need to install a package called python-dev. This package includes header files, a static library and development tools for building Python modules, extending the Python interpreter or embedding Python in applications.\n\nenter:\n$ sudo apt-get install python-dev\n\nor\n# apt-get install python-dev\n\nsee http://www.cyberciti.biz/faq/debian-ubuntu-linux-python-h-file-not-found-error-solution/\n", "Try installing these packages.\nsudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-pil python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev libssl-dev\n\nsudo easy_install greenlet\n\nsudo easy_install gevent\n\n", "You need to install these packages: \nsudo apt-get install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev libsasl2-dev libffi-dev\n\n", "For Python 3.4 use:\nsudo apt-get install python3.4-dev\n\nFor Python 3.5 use:\nsudo apt-get install python3.5-dev\n\nFor Python 3.6 use:\nsudo apt-get install python3.6-dev\n\nFor Python 3.7 use:\nsudo apt-get install python3.7-dev\n\nFor Python 3.8 use:\nsudo apt-get install python3.8-dev\n\n... and so on ...\n", "$ sudo apt-get install gcc\n$ sudo apt-get install python-dateutil python-docutils python-feedparser python-gdata python-jinja2 python-ldap python-libxslt1 python-lxml python-mako python-mock python-openid python-psycopg2 python-psutil python-pybabel python-pychart python-pydot python-pyparsing python-reportlab python-simplejson python-tz python-unittest2 python-vatnumber python-vobject python-webdav python-werkzeug python-xlwt python-yaml python-zsi\n\nOR TRY THIS: \n$ sudo apt-get install libxml2-dev libxslt1-dev\n\n", "For me none of above worked. However, I solved problem with installing libssl-dev. \nsudo apt-get install libssl-dev\n\nThis might work if you have same error message as in my case: \n\nfatal error: openssl/opensslv.h: No such file or directory ... 
....\n command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\n", "In my case, it was missing package libffi-dev.\nWhat worked:\nsudo apt-get install libffi-dev\n\n", "In my case following command did the magic\nsudo apt-get install gcc python3-dev\n\nif the above command didn't work try following two commands\nsudo apt-get install gcc python-dev\n\n\nthis is the case when you want it to install for the python version set as default python in your machine.\n\nOr\nsudo apt-get install gcc python3.x-dev\n\n\nwhere python3.x represent the version number of python installed on your machine.\n\n", "on ubuntu 14.04:\nsudo apt-file search ffi.h \n\nreturned:\nchipmunk-dev: /usr/include/chipmunk/chipmunk_ffi.h\nghc-doc: /usr/share/doc/ghc-doc/html/users_guide/ffi.html\njython-doc: /usr/share/doc/jython-doc/html/javadoc/org/python/modules/jffi/jffi.html\nlibffi-dev: /usr/include/x86_64-linux-gnu/ffi.h\nlibffi-dev: /usr/share/doc/libffi6/html/Using-libffi.html\nlibgirepository1.0-dev: /usr/include/gobject-introspection-1.0/girffi.h\nlibgirepository1.0-doc: /usr/share/gtk-doc/html/gi/gi-girffi.html\nmlton-basis: /usr/lib/mlton/include/basis-ffi.h\npypy-doc: /usr/share/doc/pypy-doc/html/config/objspace.usemodules._ffi.html\npypy-doc: /usr/share/doc/pypy-doc/html/config/objspace.usemodules._rawffi.html\npypy-doc: /usr/share/doc/pypy-doc/html/rffi.html\n\nI chose to install libffi-dev\nsudo apt-get install libffi-dev\n\nworked perfectly\n", "In my case pip was unable to install libraries, I tried solutions given above, but none worked but the below worked for me:\nsudo apt upgrade gcc\n\n", "Despite being an old question, I'll add my opinion.\nI think the right answer depends on the error message of the gcc compiler, something like \"Missing xxxx.h\"\nThis might help in some cases:\nsudo apt-get install build-essential python-dev\n\n", "This was enough for me:\nsudo apt-get install build-essential\n\n", "In Linux Mint with python3\n$ sudo apt install build-essential python3-dev\n\nshould be enough\n", "below answer worked for me, you can try:\nsudo apt-get install python3-lxml\n\n", "\nError : error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\nExecuting sudo apt-get install python-dev solved the error.\n", "After upgrade my computer with pip today, and check the other answers here, I can tell you that it could be ANYTHING. You should check error by error, looking for what's the specific library that you need. In my case, these were the libraries that I had to install:\n$ sudo apt-get install libssl-dev\n$ sudo apt-get install libffi-dev\n$ sudo apt-get install libjpeg-dev\n$ sudo apt-get install libvirt-dev\n$ sudo apt-get install libsqlite3-dev\n$ sudo apt-get install libcurl4-openssl-dev\n$ sudo apt-get install libxml2-dev libxslt1-dev python-dev\n\nHTH\n", "This works for me, 12.04, python2.7.6 \nsudo apt-get install libxml2 libxml2-dev libxslt1-dev\nsudo apt-get install lxml\n\n", "Using Ubuntu 14.04 LTS with a virtualenv running python 3.5, I had to do:\nsudo apt-get install python3.5-dev\n\nThe other commands:\nsudo apt-get install python-dev\nsudo apt-get install python3-dev\n\nDid not help. I think this is because the virtualenv needs to rely on the system-wide python-dev package and it must match the virtualenv's python version. However, using the above commands installs python-dev for python 2.x and the python 3.x that comes with Ubuntu 14.04 which is 3.4, not 3.5.\n", "In my case the command sudo apt-get install unixodbc-dev resolved the issue. 
I was getting an error specific to the sql.h header file.\n", "Tip: Please do not consider this as an answer. Just to help someone else too.\nI had similar issue while installing psycopg2. I installedbuild-essential, python-dev and also libpq-dev but it thrown same error.\nerror: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\nAs I was in hurry in deployment so finally just copied full line from\n@user3440631's answer.\nsudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev\n\nAnd It worked like a charm. but could not find which package has resolved my issue. \nPlease update the comment if anyone have idea about psycopg2 dependancy package from above command.\n", "first you need to find out what the actual problem was. what you're seeing is that the C compiler failed but you don't yet know why. scroll up to where you get the original error. in my case, trying to install some packages using pip3, I found:\n Complete output from command /usr/bin/python3 -c \"import setuptools, tokenize;__file__='/tmp/pip-build-4u59c_8b/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record /tmp/pip-itjeh3va-record/install-record.txt --single-version-externally-managed --compile --user:\n c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory\n\n #include <ffi.h>\n\n ^\n\ncompilation terminated.\n\nso in my case I needed to install libffi-dev.\n", "error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\nLot of time I got the same error when installing M2Crypto & pygraphviz and installed all the things mention in the approved answer. But this below line solved all my problems with the other packages in approved answer too.\nsudo apt-get install libssl-dev swig\nsudo apt-get install -y graphviz-dev\n\nThis swig package saved my life as the solution for M2Crypto and graphviz-dev for pygraphviz. 
I hope this will help someone.\n", "sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev\n\nsudo easy_install greenlet\n\nsudo easy_install gevent\n\n", "For me I had to make sure I was using the correct version of cryptography.\npip.freeze had and older version and once I used the latest the problem when away.\n", "For Centos 7 Use below command to install Python Development Package\nPython 2.7\n\nsudo yum install python-dev\n\nPython 3.4\n\nsudo yum install python34-devel\n\nStill if your problem not solved then try installing below packages - \n\nsudo yum install libffi-devel\nsudo yum install openssl-devel\n\n", "None of the above answers worked for me when I had the same issue on my Ubuntu 14.04\nHowever, this solved the error:\nsudo apt-get install python-numpy libicu-dev\n", "For me it helped to install libxml2-dev and libxslt1-dev.\nsudo apt-get install libxml2-dev\n\n", "My stack was like that:\n> > ^\n> > In file included from /usr/include/openssl/ssl.h:156:0,\n> > from OpenSSL/crypto/x509.h:17,\n> > from OpenSSL/crypto/crypto.h:17,\n> > from OpenSSL/crypto/crl.c:3:\n> > /usr/include/openssl/x509.h:751:15: note: previous declaration of ‘X509_REVOKED_dup’ was here\n> > X509_REVOKED *X509_REVOKED_dup(X509_REVOKED *rev);\n> > ^\n> > error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n> > \n> > ---------------------------------------- Rolling back uninstall of > pyOpenSSL Command \"/home/marta/env/pb/bin/python -u -c\n> \"import setuptools,\n> > tokenize;__file__='/tmp/pip-build-14ekWY/pyOpenSSL/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n',\n> > '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install\n> > --record /tmp/pip-2HERvW-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/marta/env/pb/include/site/python2.7/pyOpenSSL\" failed with error\n> > code 1 in /tmp/pip-build-14ekWY/pyOpenSSL/\n\nin the same case, please consider the typo (bug) in one of the installation files and edit it manually by changing \"X509_REVOKED_dup\" to \"X509_REVOKED_dupe\" (no quotes). 
I have edited the x509.h file:\n\nsed -e's/X509_REVOKED_dup/X509_REVOKED_dupe/g' -i\n usr/include/openssl/x509.h\n\nand it worked for me, but please consult with the post linked below, as they edited another file:\n\nsed -e's/X509_REVOKED_dup/X509_REVOKED_dupe/g' -i OpenSSL/crypto/crl.c\n\nhttps://groups.google.com/forum/#!topic/kivy-users/Qt0jNIOACZc\n", "For python3:\nsudo apt-get install python3-dev \\\n build-essential libssl-dev libffi-dev \\\n libxml2-dev libxslt1-dev zlib1g-dev \\\n python3-pip\n\nFor Python2:\nsudo apt-get install python2-dev \\\n build-essential libssl-dev libffi-dev \\\n libxml2-dev libxslt1-dev zlib1g-dev \\\n python2-pip\n\n", "Like Robin Winslow says in a comment :\n\nI found my solution over here: stackoverflow.com/a/5178444/613540\n\nIn my case, my complete error message was :\n/usr/bin/ld: cannot find -lz \ncollect2: error: ld returned 1 exit status\nerror: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\nI was trying to install torrench :\nsudo python3 setup.py install\n\nWith given stackoverflow link, I solve this issue by :\nsudo apt install zlib1g-dev\n\nNote that the following packages were already installed :\nlibxslt1-dev is already the newest version.\npython3-dev is already the newest version.\nlibxml2-dev is already the newest version.\n\nHope that will help !\n", "In my case, it was oursql that was causing the same(generic) error as below.\nIn file included from oursqlx/oursql.c:236:0:\n oursqlx/compat.h:13:19: fatal error: mysql.h: No such file or directory\n compilation terminated.\n error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\n ----------------------------------------\n Failed building wheel for oursql\n Running setup.py clean for oursql\n\nSo, I knew that I need to have libmysqlcppconn-dev package.\nsudo apt-get install libmysqlcppconn-dev\n\nAnd all good!\n", "This Worked for me:\nsudo apt install zlib1g-dev\n", "In addition to some other helpful answers, if docker-compose brought you here--With your venv set, run:\n\neasy_install docker-compose\n\n\n", "While installing ssdeep i was getting same error Please check actual error can be something else Like i was also getting same but above this error there was an error fuzzy.h no file or directory and then i tried this\napt-get -y install libfuzzy-dev\nWork like charm\n", "TL;DR: run the below command\nsudo apt-get install python2-dev gcc\n\nI had this problem when trying to pip install a module for python2.7.\nLots of answers mention that a fix for this is sudo apt-get install python-dev. However, this did not work for me, as the package was not found. 
However, the command shown at the top of this comment exists, and I was finally able to pip install the module.\n", "This problem can originate from one of any missing packages especially in newer builds.\n creating build/temp.linux-x86_64-cpython-39/src\n x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/vipin/.cache/pypoetry/virtualenvs/bbox-drf-QjIedbEI-py3.9/include -I/usr/include/python3.9 -c src/base64.c -o build/temp.linux-x86_64-cpython-39/src/base64.o\n x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/vipin/.cache/pypoetry/virtualenvs/bbox-drf-QjIedbEI-py3.9/include -I/usr/include/python3.9 -c src/kerberos.c -o build/temp.linux-x86_64-cpython-39/src/kerberos.o\n In file included from src/kerberos.c:20:\n src/kerberosbasic.h:17:10: fatal error: gssapi/gssapi.h: No such file or directory\n 17 | #include <gssapi/gssapi.h>\n | ^~~~~~~~~~~~~~~~~\n compilation terminated.\n error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1\n [end of output]\n\nmost of us search using the second last line.\n\nerror: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1\n\nBut if you look closely, a few lines above, you can actually see which package is missing. It clearly states that a directory or file is missing\n\ngssapi/gssapi.h: No such file or directory\n\nsearching why this package could be the solution you are looking for.\n" ]
[ 558, 289, 197, 136, 86, 74, 39, 38, 27, 17, 17, 11, 9, 6, 5, 5, 4, 4, 4, 4, 3, 3, 3, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "After installing a lot of libraries, the one that worked for me! was swig:\nsudo apt-get install swig\n\nThe error arose when installing python's M2Crypto library.\n:)\n" ]
[ -1 ]
[ "gcc", "odoo", "pip", "python" ]
stackoverflow_0026053982_gcc_odoo_pip_python.txt
Q: Need to know if the same ID is repeated but with a different DATE I'm trying to see if the same "ID" is repeated but with a different "DATE" value. I was thinking of using numpy.where, so I created the column "Count" to use something like this: df['FULFILL?'] = np.where((df['Count']>1) & (df['DATE']), 'YES', 'NO') But then I got stuck because I was not sure how to finish the second condition. Here's an example: ID Count DATE 111 3 01/01/2020 222 2 02/12/2020 111 3 01/01/2020 222 2 02/12/2020 111 3 02/10/2020 333 2 01/25/2020 333 2 05/02/2020 444 1 01/01/2020 I'm looking for an output like this: ID Count DATE FULFILL? 111 3 01/01/2020 YES 222 2 02/12/2020 NO 111 3 01/01/2020 YES 222 2 02/12/2020 NO 111 3 02/10/2020 YES 333 2 01/25/2020 YES 333 2 05/02/2020 YES 444 1 01/01/2020 NO Sorry if my English isn't very good :) A: Use GroupBy.transform with DataFrameGroupBy.nunique to test the number of unique values per group; the first condition (df['Count']>1) is removed, because for a single value per group the number of unique values is never greater than 1: df['FULFILL?'] = np.where(df.groupby('ID')['DATE'].transform('nunique').gt(1), 'YES', 'NO') print (df) ID Count DATE FULFILL? 0 111 3 01/01/2020 YES 1 222 2 02/12/2020 NO 2 111 3 01/01/2020 YES 3 222 2 02/12/2020 NO 4 111 3 02/10/2020 YES 5 333 2 01/25/2020 YES 6 333 2 05/02/2020 YES 7 444 1 01/01/2020 NO Details: print (df.groupby('ID')['DATE'].transform('nunique')) 0 2 1 1 2 2 3 1 4 2 5 2 6 2 7 1 Name: DATE, dtype: int64 A: You can use nunique: # get number of unique dates per id s = df.groupby('ID')['DATE'].nunique() # identify non unique ones df['FULFILL?'] = np.where(df['ID'].isin(s[s>1].index), 'YES', 'NO') Or with both conditions: df['FULFILL?'] = np.where(df['Count'].gt(1) & df['ID'].isin(s[s>1].index), 'YES', 'NO') NB. using an intermediate s and isin is more efficient than groupby.transform('nunique'). Output: ID Count DATE FULFILL? 0 111 3 01/01/2020 YES 1 222 2 02/12/2020 NO 2 111 3 01/01/2020 YES 3 222 2 02/12/2020 NO 4 111 3 02/10/2020 YES 5 333 2 01/25/2020 YES 6 333 2 05/02/2020 YES 7 444 1 01/01/2020 NO Count per group: ID 111 2 222 1 333 2 444 1 Name: DATE, dtype: int64
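For completeness, here is one more equivalent sketch of the same idea, counting distinct dates per ID via drop_duplicates. It is self-contained and uses the sample data from the question, so you can run it as-is to verify the expected YES/NO column:

import pandas as pd

df = pd.DataFrame({
    'ID': [111, 222, 111, 222, 111, 333, 333, 444],
    'Count': [3, 2, 3, 2, 3, 2, 2, 1],
    'DATE': ['01/01/2020', '02/12/2020', '01/01/2020', '02/12/2020',
             '02/10/2020', '01/25/2020', '05/02/2020', '01/01/2020'],
})

# an ID "fulfills" when it appears with more than one distinct DATE
multi_date = df.drop_duplicates(['ID', 'DATE'])['ID'].value_counts().gt(1)
df['FULFILL?'] = df['ID'].map(multi_date).map({True: 'YES', False: 'NO'})
print(df)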
Need to know if the same ID is repeated but with a different DATE
I'm trying to see if the same "ID" is repeated but with a different "DATE" value. I was thinking of using numpy.where, so I created the column "Count" to use something like this: df['FULFILL?'] = np.where((df['Count']>1) & (df['DATE']), 'YES', 'NO') But then I got stuck because I was not sure how to finish the second condition. Here's an example: ID Count DATE 111 3 01/01/2020 222 2 02/12/2020 111 3 01/01/2020 222 2 02/12/2020 111 3 02/10/2020 333 2 01/25/2020 333 2 05/02/2020 444 1 01/01/2020 I'm looking for an output like this: ID Count DATE FULFILL? 111 3 01/01/2020 YES 222 2 02/12/2020 NO 111 3 01/01/2020 YES 222 2 02/12/2020 NO 111 3 02/10/2020 YES 333 2 01/25/2020 YES 333 2 05/02/2020 YES 444 1 01/01/2020 NO Sorry if my English isn't very good :)
[ "Use GroupBy.transform with DataFrameGroupBy.nunique for test number of unique values per groups, first condition (df['Count']>1) is removed, because for single value per groups number of unique values is not greater like 1:\ndf['FULFILL?'] = np.where(df.groupby('ID')['DATE'].transform('nunique').gt(1), 'YES', 'NO')\nprint (df)\n ID Count DATE FULFILL?\n0 111 3 01/01/2020 YES\n1 222 2 02/12/2020 NO\n2 111 3 01/01/2020 YES\n3 222 2 02/12/2020 NO\n4 111 3 02/10/2020 YES\n5 333 2 01/25/2020 YES\n6 333 2 05/02/2020 YES\n7 444 1 01/01/2020 NO\n\nDetails:\nprint (df.groupby('ID')['DATE'].transform('nunique'))\n0 2\n1 1\n2 2\n3 1\n4 2\n5 2\n6 2\n7 1\nName: DATE, dtype: int64\n\n", "You can use nunique:\n# get number of unique dates per id\ns = df.groupby('ID')['DATE'].nunique()\n# identify non unique ones\ndf['FULFILL?'] = np.where(df['ID'].isin(s[s>1].index), 'YES', 'NO')\n\nOr with both conditions:\ndf['FULFILL?'] = np.where(df['Count'].gt(1) & df['ID'].isin(s[s>1].index),\n 'YES', 'NO')\n\nNB. using an intermediate s and isin, as it is more efficient than groupby.transform('nunique').\nOutput:\n ID Count DATE FULFILL?\n0 111 3 01/01/2020 YES\n1 222 2 02/12/2020 NO\n2 111 3 01/01/2020 YES\n3 222 2 02/12/2020 NO\n4 111 3 02/10/2020 YES\n5 333 2 01/25/2020 YES\n6 333 2 05/02/2020 YES\n7 444 1 01/01/2020 NO\n\nCount per group:\nID\n111 2\n222 1\n333 2\n444 1\nName: DATE, dtype: int64\n\n" ]
[ 0, 0 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074519837_numpy_pandas_python.txt
Q: How to activate a specifically named virtualenv using pipenv? I created a specifically named virtualenv by setting PIPENV_CUSTOM_VENV_NAME before doing pipenv shell as outlined in this GitHub issue thread on "How to set the full name of the virtualenv created". I can confirm a virtualenv with the name given exists in /Users/username/.local/share/virtualenvs/. Now, how do I activate this specific virtualenv again? Doing pipenv shell in the project directory simply creates a new one, so how do I activate the one with a given name? A: You will always have to export that PIPENV_CUSTOM_VENV_NAME environment variable. It's the same as what that contributor did in that GitHub issue thread: export PIPENV_CUSTOM_VENV_NAME=mycustomname pipenv install pipenv shell etc. etc. The export line sets that environment variable for all subsequent pipenv commands, and this includes activation of the environment: # There is no virtual env yet # --- myapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv --venv No virtualenv has been created for this project(/path/to/myapp) yet! Aborted! # Let's create one named `foo` ! # --- myapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv install Creating a virtualenv for this project... Pipfile: /path/to/myapp/Pipfile Using /usr/local/bin/python3 (3.10.8) to create virtualenv... ⠹ Creating virtual environment...created virtual environment ... Virtualenv location: /path/to/.venvs/foo # There is now a virtual env ! # --- myapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv --venv /path/to/.venvs/foo # Let's activate it ! # --- myapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv shell Launching subshell in virtual environment... myapp$ . /path/to/.venvs/foo/bin/activate (myapp) myapp$ # Let's check if it's really installing packages in the right place... # --- (myapp) myapp$ pipenv install flask ... (myapp) myapp$ find ~/.venvs/foo/lib/python3.10/site-packages -name flask /path/to/.venvs/foo/lib/python3.10/site-packages/flask Now, this is a bit inconvenient. But you can define it per project in your .env file, as per the docs https://pipenv.pypa.io/en/latest/advanced/#virtual-environment-name The logical place to specify this would be in a user's .env file in the root of the project, which gets loaded by pipenv when it is invoked. So, in your project, create a .env file, and define it there: myapp$ cat .env PIPENV_CUSTOM_VENV_NAME=foo So now every time you run pipenv shell in that folder, pipenv will read your .env file in that same folder, and apply PIPENV_CUSTOM_VENV_NAME: myapp$ cat .env PIPENV_CUSTOM_VENV_NAME=foo myapp$ python3.10 -m pipenv shell Loading .env environment variables... Loading .env environment variables... Launching subshell in virtual environment... myapp$ . /path/to/.venvs/foo/bin/activate
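As a small follow-up sketch, the same per-project setting can be resolved from a script as well. This assumes pipenv is on your PATH, and reuses the hypothetical custom name foo from the transcript above:

import os
import subprocess

# Mirror what the .env file does, then ask pipenv where that venv lives.
os.environ.setdefault("PIPENV_CUSTOM_VENV_NAME", "foo")
result = subprocess.run(["pipenv", "--venv"], capture_output=True, text=True)
print(result.stdout.strip())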
How to activate a specifically named virtualenv using pipenv?
I created a specifically named virtualenv by setting PIPENV_CUSTOM_VENV_NAME before doing pipenv shell as outlined in this Github issue thread on "How to set the full name of the virtualenv created". I can confirm a virtualenv with the name given exists in /Users/username/.local/share/virtualenvs/. Now, how do I activate this specific virtualenv again? Doing pipenv shell in the project directory simply creates a new one, so how do I activate the one with a given name?
[ "You will have to always export that PIPENV_CUSTOM_VENV_NAME environment variable.\nIt's the same as what that contributor did in that Github issue thread:\n\nexport PIPENV_CUSTOM_VENV_NAME=mycustomname \npipenv install \npipenv shell \netc. etc. \n\n\nThe export link sets that environment variable for all subsequent pipenv commands, and this includes activation of the environment:\n# There is no virtual env yet\n# ---\n\nmyapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv --venv\nNo virtualenv has been created for this project(/path/to/myapp) yet!\nAborted!\n\n# Let's create one named `foo` !\n# ---\n\nmyapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv install\nCreating a virtualenv for this project...\nPipfile: /path/to/myapp/Pipfile\nUsing /usr/local/bin/python3 (3.10.8) to create virtualenv...\n⠹ Creating virtual environment...created virtual environment \n...\n\nVirtualenv location: /path/to/.venvs/foo\n\n# There is now a virtual env !\n# ---\n\nmyapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv --venv\n/path/to/.venvs/foo\n\n# Let's activate it !\n# ---\n\nmyapp$ PIPENV_CUSTOM_VENV_NAME=foo python3.8 -m pipenv shell\nLaunching subshell in virtual environment...\nmyapp$ . /path/to/.venvs/foo/bin/activate\n(myapp) myapp$\n\n# Let's check if it's really installing packages in the right place...\n# ---\n\n(myapp) myapp$ pipenv install flask\n...\n(myapp) myapp$ find ~/.venvs/foo/lib/python3.10/site-packages -name flask\n/path/to/.venvs/foo/lib/python3.10/site-packages/flask\n\nNow, this is a bit inconvenient. But you can define it per project in your .env file, as per the docs https://pipenv.pypa.io/en/latest/advanced/#virtual-environment-name\n\nThe logical place to specify this would be in a user’s .env file in the root of the project, which gets loaded by pipenv when it is invoked.\n\nSo, in your project, create a .env file, and define it there:\nmyapp$ cat .env\nPIPENV_CUSTOM_VENV_NAME=foo\n\nSo now every time you run pipenv shell in that folder, pipenv would read your .env file in that same folder, and apply PIPENV_CUSTOM_VENV_NAME:\nmyapp$ cat .env\nPIPENV_CUSTOM_VENV_NAME=foo\n\nmyapp$ python3.10 -m pipenv shell\nLoading .env environment variables...\nLoading .env environment variables...\nLaunching subshell in virtual environment...\nmyapp$ . /path/to/.venvs/foo/bin/activate\n\n" ]
[ 1 ]
[]
[]
[ "pipenv", "python" ]
stackoverflow_0074390453_pipenv_python.txt
Q: Communication between a python server and a C# client (Unity) Below you can see both the Python server and the C# client scripts; the process is to send and receive packets. I connect to the server via the cloud, using PuTTY to connect to it; the client is an application created using Unity and a C# script. server.py: import socket port = 80 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) host = socket.gethostname() s.bind((host, port)) s.listen(5) print("server listening...") while True: client, adr = s.accept() print(f"got connection from ",adr) client.send(bytes("welcome from the server","utf-8")) data = client.recv(1024) print("server received", data.decode("utf-8")) client.close() client.cs: I use the WebSocketSharp library here. public class client: MonoBehaviour { byte[] buffer = Encoding.UTF8.GetBytes("Hello I'm the client"); void Start() { using (WebSocket ws = new WebSocket("ws://arb-server.tunis-plm.com/")) { ws.Connect(); ws.Send(buffer); ws.OnMessage += Ws_OnMessage; } } private void Ws_OnMessage(object sender, MessageEventArgs e) { Debug.Log(e.Data); } } The problem here is that I can't receive data from the server, and the same on the other side: I can't receive data from the client. Also, I don't know whether my message was sent or not; however, in the server console I received a connection from the client, thanks to these two lines of code in the server script: client, adr = s.accept() print(f"Got connection from ",adr) So the result is Got connection from ('193.168.1.255',3112). Here is the complete output from the server side: And this is what I receive from the web browser: I've made an effort to solve this issue but have no result yet; if someone can help me I would appreciate it. A: I think the issue is your using block. It will Dispose the ws as soon as the Start method has finished (or actually as soon as the using block has reached the end). I think you'd rather do something like private WebSocket ws; private void Start () { ws = new WebSocket("ws://arb-server.tunis-plm.com/"); // In general I would move this up // what if you already receive a message right when connected // but before your code had time to add the callback ws.OnMessage += Ws_OnMessage; ws.Connect(); ws.Send(buffer); } private void OnDestroy() { ws?.Dispose(); } A: I think the question is no longer relevant, but nevertheless I will try to answer. It's all about the HTTP protocol. As I see it, you are trying to connect to your server like a website, but for this you need to send a correct response so that the client can see it. You can read more details here -> https://developer.mozilla.org/en-US/docs/Web/HTTP/Overview#responses In short, you need to send a response like: HTTP/1.1 200 OK Content-Length: 11 Hello World Or for python client.send(b'HTTP/1.1 200 OK\nContent-Length: 11\n\nHello World') This applies to cases where you want to implement communication over the HTTP protocol, and your server would send a response in HTML or JSON format. I hope I could help you clarify this situation.
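One more hedged sketch for the server side: a plain TCP socket never completes the WebSocket handshake that WebSocketSharp expects, so an actual WebSocket server is needed. Below is a minimal echo-style example using the third-party websockets package (an assumption on my part, not part of the question; the port and messages just mirror the original script, and in websockets versions older than 10 the handler takes a second path argument):

import asyncio
import websockets  # pip install websockets

async def handler(ws):
    # greet the client, then read back what it sends
    await ws.send("welcome from the server")
    message = await ws.recv()
    print("server received", message)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 80):
        await asyncio.Future()  # run forever

asyncio.run(main())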
Communication between a python server and a C# client (Unity)
Below you can see both the Python server and the C# client scripts; the process is to send and receive packets. I connect to the server via the cloud, using PuTTY to connect to it; the client is an application created using Unity and a C# script. server.py: import socket port = 80 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) host = socket.gethostname() s.bind((host, port)) s.listen(5) print("server listening...") while True: client, adr = s.accept() print(f"got connection from ",adr) client.send(bytes("welcome from the server","utf-8")) data = client.recv(1024) print("server received", data.decode("utf-8")) client.close() client.cs: I use the WebSocketSharp library here. public class client: MonoBehaviour { byte[] buffer = Encoding.UTF8.GetBytes("Hello I'm the client"); void Start() { using (WebSocket ws = new WebSocket("ws://arb-server.tunis-plm.com/")) { ws.Connect(); ws.Send(buffer); ws.OnMessage += Ws_OnMessage; } } private void Ws_OnMessage(object sender, MessageEventArgs e) { Debug.Log(e.Data); } } The problem here is that I can't receive data from the server, and the same on the other side: I can't receive data from the client. Also, I don't know whether my message was sent or not; however, in the server console I received a connection from the client, thanks to these two lines of code in the server script: client, adr = s.accept() print(f"Got connection from ",adr) So the result is Got connection from ('193.168.1.255',3112). Here is the complete output from the server side: And this is what I receive from the web browser: I've made an effort to solve this issue but have no result yet; if someone can help me I would appreciate it.
[ "I think the issue is your using block. It will Dispose the ws as soon as the Start method has finished (or actually as soon as the using block has reached the end).\nI think you'd rather do something like\nprivate WebSocket ws;\n\nprivate void Start ()\n{\n ws = new WebSocket(\"ws://arb-server.tunis-plm.com/\"))\n\n // In general I would move this up\n // what if you already receive a message right when connected\n // but before your code had time to add the callback\n ws.OnMessage += Ws_OnMessage;\n \n ws.Connect();\n\n ws.Send(buffer); \n}\n\nprivate void OnDestroy()\n{\n ws?.Dispose();\n}\n\n", "I think the question is no longer relevant, but nevertheless I will try to answer.\nIt's all about the HTTP protocol. As I see it, you are trying to connect to your server like a website, but for this you need to send a correct response so that the client can see it. You can read more details here -> https://developer.mozilla.org/en-US/docs/Web/HTTP/Overview#responses\nIn short, you need to send a response like:\nHTTP/1.1 200 OK\nContent Length: 11\n\nHello World\n\nOr for python\nclient.send(b'HTTP/1.1 200 OK\\nContent-Length: 11\\n\\nHello World')\n\nThis applies to cases if you want to implement communication over the HTTP protocol, and your server would send a response in the form of HTML or JSON format.\nI hope I could help you clarify this situation.\n" ]
[ 0, 0 ]
[]
[]
[ "c#", "python", "unity3d", "websocket" ]
stackoverflow_0067731405_c#_python_unity3d_websocket.txt
Q: Count matches of two elements on corresponding index positions in two arrays I have two arrays where one contains integers and the other words. It can look like this with arrays of 4 elements: arr1 = ['cat', 'cat', 'dog', 'cow'] arr2 = [0, 0, 1, 2] Further, I have created indexing for all possible pairs: pairs = [] for i in range(4) : for j in range(i+1, 4) : pairs.append((i, j)) pairs = np.array(pairs) ### Outputs ### array([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]) I now want to check, for both of my arrays, if the two elements corresponding to the index of the unordered pairs are equal for both of the arrays, and get the count of how many there are. For the above example with arr1 & arr2 I'd want: arr1[pairs]# [['cat' 'cat'] ['cat' 'dog'] ['cat' 'cow'] ['cat' 'dog'] ['cat' 'cow'] ['dog' 'cow']] # First inner array ['cat', 'cat'] is matching arr2[pairs] # [[0 0] [0 1] [0 2] [0 1] [0 2] [1 2]] # First inner array [0 0] is matching # So I'd want match_count = equal(arr1[pairs], arr2[pairs]) = 1 So for each possible pairing, if the two elements on the corresponding index are matching in both of the arrays, I want the total number of such matches. I figure I could do something like the below to check if an inner array is a match, but I'm not sure how to apply it to the dimensions of the arrays and with the conditions I've mentioned. inner_arr2 = arr2[pairs][0] print(np.all(inner_arr2 == inner_arr2[0])) # True In my provided example there were N=4 elements; this will be much larger (N=500 & N=1000), so I'm hoping there's some numpy way to deal with this so I don't have to loop over all possible pairs. A: Compare the columns of each array's pairs, and then simply get where both are a match (True) with a logical_and operation. You can get the count afterwards with count_nonzero() or sum(). np.count_nonzero( np.logical_and( arr1[pairs][:, 0] == arr1[pairs][:, 1], arr2[pairs][:, 0] == arr2[pairs][:, 1] ) )
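To tie the answer back to the question's sample data, here is the same computation as a self-contained, runnable sketch (itertools.combinations simply replaces the question's pair-building loop; the indexing avoids computing arr[pairs] twice):

import numpy as np
from itertools import combinations

arr1 = np.array(['cat', 'cat', 'dog', 'cow'])
arr2 = np.array([0, 0, 1, 2])
pairs = np.array(list(combinations(range(len(arr1)), 2)))

i, j = pairs[:, 0], pairs[:, 1]
# a pair counts only when BOTH arrays match at those two positions
match_count = np.count_nonzero((arr1[i] == arr1[j]) & (arr2[i] == arr2[j]))
print(match_count)  # 1 -> only the (0, 1) pair matches in both arrays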
Count matches of two elements on corresponding index positions in two arrays
I have two arrays where one contains integers and the other words. It can look like this with arrays of 4 elements: arr1 = ['cat', 'cat', 'dog', 'cow'] arr2 = [0, 0, 1, 2] Further, I have created indexing for all possible pairs: pairs = [] for i in range(4) : for j in range(i+1, 4) : pairs.append((i, j)) pairs = np.array(pairs) ### Outputs ### array([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]) I now want to check, for both of my arrays, if the two elements corresponding to the index of the unordered pairs are equal for both of the arrays, and get the count of how many there are. For the above example with arr1 & arr2 I'd want: arr1[pairs]# [['cat' 'cat'] ['cat' 'dog'] ['cat' 'cow'] ['cat' 'dog'] ['cat' 'cow'] ['dog' 'cow']] # First inner array ['cat', 'cat'] is matching arr2[pairs] # [[0 0] [0 1] [0 2] [0 1] [0 2] [1 2]] # First inner array [0 0] is matching # So I'd want match_count = equal(arr1[pairs], arr2[pairs]) = 1 So for each possible pairing, if the two elements on the corresponding index are matching in both of the arrays, I want the total number of such matches. I figure I could do something like the below to check if an inner array is a match, but I'm not sure how to apply it to the dimensions of the arrays and with the conditions I've mentioned. inner_arr2 = arr2[pairs][0] print(np.all(inner_arr2 == inner_arr2[0])) # True In my provided example there were N=4 elements; this will be much larger (N=500 & N=1000), so I'm hoping there's some numpy way to deal with this so I don't have to loop over all possible pairs.
[ "Compare the columns of each array's pairs, and then simply get where there are both a match (True) with a logical_and operation. You can get the count afterwards with count_nonzero() or sum().\nnp.count_nonzero(\n np.logical_and(\n arr1[pairs][:, 0] == arr1[pairs][:, 1],\n arr2[pairs][:, 0] == arr2[pairs][:, 1]\n )\n)\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074519680_arrays_numpy_python.txt
Q: Why does saving text containing HTML inside a variable cause unexpected behavior with beautifulsoup4? I am using beautifulsoup to automate posting products on one of the shopping platforms; unfortunately their API is currently disabled, so the only option right now is to use beautifulsoup. How is the program expected to work? The program is expected to read a .csv file (I provide the name of the file) and store the product data within variables - after that, it goes through multiple steps (filling out the form) - like inputting the name, which it gets from a variable. Example of it: ime_artikla = driver.find_element(By.ID, 'kategorija_sug').send_keys(csvName) #Here it inputs the name where csvName is a value passed to the function along with some other parameters: def postAutomation(csvName, csvPrice, csvproductDescription): The way that I am reading the file is the following: filename = open(naziv_fajla, 'r', encoding="utf-8") #File name to open + utf-8 encoding was necessary file = csv.DictReader(filename) The above lines of code are within the try: statement. The way that I am reading columns from the csv file is the following: for col in file: print("Reading column with following SKU: " + color.GREEN + color.BOLD + (col['SKU']) + color.END + "\n") csvSKU = (col['SKU']) csvName = (col['Name']) #csvCategory = (col['Categories']) csvPrice = (col['Price']) csvproductDescription = (col['Description']) print(csvName) #print(csvCategory) print(csvPrice) print(csvproductDescription) postAutomation(csvName, csvPrice, csvproductDescription) i+=1 counterOfProducts = counterOfProducts + 1 This works as expected (the product is published on the online store successfully) until there's HTML and/or inline CSS in the product description. The problem: As I've said, the problem happens when I have a column containing HTML. I am populating the field for the product description (Tools > Source Code); the site is using TinyMCE for editing text and entering the description etc... There are actually two scenarios where the program is not acting as expected: Case: In the first case, the product is published successfully, but the <li> and \n are not treated as HTML for some reason. Here's an example of one product's description (where this problem occurs): <p style="text-align: center;">Some product description.\n<ul>\n <li>Product feature 1</li>\n <li>Prod Feature 2</li>\n<li>Prod Feature 3</li>\n<li>Prod Feature 3</li>\n<li>Prod feature 4</li>\n</ul> What I get when I submit this code: \n\nProduct feature 1\nProd Feature 2\nProd Feature 3\nProd Feature 3\nProd feature 4\n Case: In the second case, the program crashes. What happens is the following: somehow the product description taken from the csv file confuses the program (I think it's due to complex HTML) - part of the product description gets into the field for the price &nbsp..., <--- this, which is on the next page entirely (you have to click next at the end of the page where the product description goes and then input the price), which seems weird to me. The weird thing is that I have a template for the product description (which is HTML and CSS) and I save it into a string literal, as template1 = """ A LOT OF HTML AND INLINE CSS """ and end_of_template = """ A LOT OF HTML AND INLINE CSS """ and it gets rendered perfectly after doing this: final_description = template1 + csvproductDescription + end_of_template But the HTML and inline CSS inside the csvproductDescription variable doesn't get treated as HTML and CSS. How can I fix this?
A: Seems like the problem was that I had whitespace inside the product description, so I solved it like this: import re final_description = html_and_css final_description = final_description + csvproductDescription final_description = final_description + html_and_css2 final_description = " ".join(re.split(r"\s+", final_description, flags=re.UNICODE))
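A slightly simpler equivalent of that whitespace collapse, shown as a sketch (re.sub behaves the same as the split/join combination here):

import re

def collapse_whitespace(text: str) -> str:
    # replace any run of whitespace (including newlines) with a single space
    return re.sub(r"\s+", " ", text).strip()

print(collapse_whitespace("<p>\n  <li>Feature 1</li>\n</p>"))
# <p> <li>Feature 1</li> </p>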
Why does saving text containing HTML inside a variable cause unexpected behavior with beautifulsoup4?
I am using beautifulsoup to automate posting products on one of the shopping platforms; unfortunately their API is currently disabled, so the only option right now is to use beautifulsoup. How is the program expected to work? The program is expected to read a .csv file (I provide the name of the file) and store the product data within variables - after that, it goes through multiple steps (filling out the form) - like inputting the name, which it gets from a variable. Example of it: ime_artikla = driver.find_element(By.ID, 'kategorija_sug').send_keys(csvName) #Here it inputs the name where csvName is a value passed to the function along with some other parameters: def postAutomation(csvName, csvPrice, csvproductDescription): The way that I am reading the file is the following: filename = open(naziv_fajla, 'r', encoding="utf-8") #File name to open + utf-8 encoding was necessary file = csv.DictReader(filename) The above lines of code are within the try: statement. The way that I am reading columns from the csv file is the following: for col in file: print("Reading column with following SKU: " + color.GREEN + color.BOLD + (col['SKU']) + color.END + "\n") csvSKU = (col['SKU']) csvName = (col['Name']) #csvCategory = (col['Categories']) csvPrice = (col['Price']) csvproductDescription = (col['Description']) print(csvName) #print(csvCategory) print(csvPrice) print(csvproductDescription) postAutomation(csvName, csvPrice, csvproductDescription) i+=1 counterOfProducts = counterOfProducts + 1 This works as expected (the product is published on the online store successfully) until there's HTML and/or inline CSS in the product description. The problem: As I've said, the problem happens when I have a column containing HTML. I am populating the field for the product description (Tools > Source Code); the site is using TinyMCE for editing text and entering the description etc... There are actually two scenarios where the program is not acting as expected: Case: In the first case, the product is published successfully, but the <li> and \n are not treated as HTML for some reason. Here's an example of one product's description (where this problem occurs): <p style="text-align: center;">Some product description.\n<ul>\n <li>Product feature 1</li>\n <li>Prod Feature 2</li>\n<li>Prod Feature 3</li>\n<li>Prod Feature 3</li>\n<li>Prod feature 4</li>\n</ul> What I get when I submit this code: \n\nProduct feature 1\nProd Feature 2\nProd Feature 3\nProd Feature 3\nProd feature 4\n Case: In the second case, the program crashes. What happens is the following: somehow the product description taken from the csv file confuses the program (I think it's due to complex HTML) - part of the product description gets into the field for the price &nbsp..., <--- this, which is on the next page entirely (you have to click next at the end of the page where the product description goes and then input the price), which seems weird to me. The weird thing is that I have a template for the product description (which is HTML and CSS) and I save it into a string literal, as template1 = """ A LOT OF HTML AND INLINE CSS """ and end_of_template = """ A LOT OF HTML AND INLINE CSS """ and it gets rendered perfectly after doing this: final_description = template1 + csvproductDescription + end_of_template But the HTML and inline CSS inside the csvproductDescription variable doesn't get treated as HTML and CSS. How can I fix this?
[ "Seems like problem was that I have had whitespaces inside of the product description, so I have solved it like this:\nfinal_description = html_and_css\nfinal_description = final_description + csvproductDescription\nfinal_description = final_description + html_and_css2\nfinal_description = \" \".join(re.split(\"\\s+\", final_description, flags=re.UNICODE))\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "python_3.x", "string" ]
stackoverflow_0074403416_beautifulsoup_python_python_3.x_string.txt
Q: Python can't open file in autostart When I start my program via autostart I get the error [Errno 13] Permission denied when it should open the file. When I then start the program manually it all works and my program opens the file. I autostart my program as a registry key in Windows. I use with open('save.macros', mode='rb') as f to open the file. The file is in the same directory, and the program also finds the file but can't open it on startup. A: Possible reasons this error occurs when running your file in Python: the file path is not exact; the file has the wrong extension; the path separator differs between Windows and Linux (Windows => \ , Linux => /); or the save.macros file is being used by another process. Make sure you placed the save.macros file in the same place as your Python code, and try this: with open('.\save.macros', mode='rb') as f
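A common cause with Windows autostart specifically is that the process starts with a different working directory, so relative paths stop resolving. Here is a minimal sketch that anchors the path to the script's own folder instead (this is an assumption about the cause, but it is cheap to try):

import os

# build an absolute path next to this script instead of relying on the
# working directory the autostart entry happens to use
script_dir = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(script_dir, "save.macros"), mode="rb") as f:
    data = f.read()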
Python can't open file in autostart
When I start my program via autostart I get the error [Errno 13] Permission denied when it should open the file. When I then start the program manually it all works and my program opens the file. I autostart my program as a registry key in Windows. I use with open('save.macros', mode='rb') as f to open the file. The file is in the same directory, and the program also finds the file but can't open it on startup.
[ "The reason of occurs error on running your file in python,\n\nFile path is not exact\nwrong File's extension\ndifference for windows and linux path slash (windows => \\ , linux => /)\nor save.macros file is using in another process\n\nand make your you placed that save.macros file in same place with your python code.\nand try this :\nwith open('.\\save.macros', mode='rb') as f\n\n" ]
[ 0 ]
[]
[]
[ "python", "registry" ]
stackoverflow_0074519856_python_registry.txt
Q: How can I update an existing dataframe to add values, without overwriting other existing values in the same column? I have an existing dataframe with two columns as follows: reason market_state 0 NaN UNSCHEDULED_AUCTION 1 NaN None 2 NaN CLOSED 3 NaN CONTINUOUS_TRADING 4 NaN None 5 NaN UNSCHEDULED_AUCTION 6 NaN UNSCHEDULED_AUCTION 7 F None 8 NaN CONTINUOUS_TRADING 9 SL None 10 NaN HALTED 11 NaN None 12 NaN None 13 L None I am trying to apply the following 3 mappings to the above dataframe: market_info_df['market_state'] = market_info_df['reason'].map({'F': OPENING_AUCTION}) market_info_df['market_state'] = market_info_df['reason'].map({'SL': CLOSING_AUCTION}) market_info_df['market_state'] = market_info_df['reason'].map({'L': CLOSED}) But when I run the above 3 lines, it seems to overwrite the existing mappings: market_state reason 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN F 8 NaN NaN 9 NaN SL 10 NaN NaN 11 NaN NaN 12 NaN NaN 13 CLOSED L (And it seems to have swapped the columns? - though this doesn't matter) Each of the lines seems to overwrite the dataframe. Is there a way simply to update the dataframe, i.e. so it just updates the three mappings, like this: reason market_state 0 NaN UNSCHEDULED_AUCTION 1 NaN None 2 NaN CLOSED 3 NaN CONTINUOUS_TRADING 4 NaN None 5 NaN UNSCHEDULED_AUCTION 6 NaN UNSCHEDULED_AUCTION 7 F OPENING_AUCTION 8 NaN CONTINUOUS_TRADING 9 SL CLOSING_AUCTION 10 NaN HALTED 11 NaN None 12 NaN None 13 L CLOSED A: Combine the values into one dictionary and add Series.fillna with the same market_state column: d = {'F': 'OPENING_AUCTION','SL': 'CLOSING_AUCTION', 'L': 'CLOSED'} market_info_df['market_state'] = (market_info_df['reason'].map(d) .fillna(market_info_df['market_state'])) print (market_info_df) reason market_state 0 NaN UNSCHEDULED_AUCTION 1 NaN None 2 NaN CLOSED 3 NaN CONTINUOUS_TRADING 4 NaN None 5 NaN UNSCHEDULED_AUCTION 6 NaN UNSCHEDULED_AUCTION 7 F OPENING_AUCTION 8 NaN CONTINUOUS_TRADING 9 SL CLOSING_AUCTION 10 NaN HALTED 11 NaN None 12 NaN None 13 L CLOSED A: Use a single dictionary, then fillna with the original values if needed: market_info_df['market_state'] = ( market_info_df['reason'] .map({'F': 'OPENING_AUCTION', # only ONE dictionary 'SL': 'CLOSING_AUCTION', 'L': 'CLOSED'}) .fillna(market_info_df['market_state']) ) Or, to only update the NA values: market_info_df.loc[market_info_df['market_state'].isna(), 'market_state'] = ( market_info_df['reason'] .map({'F': 'OPENING_AUCTION', # only ONE dictionary 'SL': 'CLOSING_AUCTION', 'L': 'CLOSED'}) ) Output: reason market_state 0 NaN UNSCHEDULED_AUCTION 1 NaN None 2 NaN CLOSED 3 NaN CONTINUOUS_TRADING 4 NaN None 5 NaN UNSCHEDULED_AUCTION 6 NaN UNSCHEDULED_AUCTION 7 F OPENING_AUCTION 8 NaN CONTINUOUS_TRADING 9 SL CLOSING_AUCTION 10 NaN HALTED 11 NaN None 12 NaN None 13 L CLOSED
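As a third variant sketch, pandas.Series.update gives the same "fill only where the mapping hit" behavior, since update() ignores NaN values in the passed series (the copy-and-reassign dance keeps it safe under pandas copy-on-write):

d = {'F': 'OPENING_AUCTION', 'SL': 'CLOSING_AUCTION', 'L': 'CLOSED'}

# update() skips NaN in the mapped series, so unmapped rows keep their value
updated = market_info_df['market_state'].copy()
updated.update(market_info_df['reason'].map(d))
market_info_df['market_state'] = updated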
How can I update an existing dataframe to add values, without overwriting other existing values in the same column?
I have an existing dataframe with two columns as follows: reason market_state 0 NaN UNSCHEDULED_AUCTION 1 NaN None 2 NaN CLOSED 3 NaN CONTINUOUS_TRADING 4 NaN None 5 NaN UNSCHEDULED_AUCTION 6 NaN UNSCHEDULED_AUCTION 7 F None 8 NaN CONTINUOUS_TRADING 9 SL None 10 NaN HALTED 11 NaN None 12 NaN None 13 L None I am trying to apply the following 3 mappings to the above dataframe: market_info_df['market_state'] = market_info_df['reason'].map({'F': OPENING_AUCTION}) market_info_df['market_state'] = market_info_df['reason'].map({'SL': CLOSING_AUCTION}) market_info_df['market_state'] = market_info_df['reason'].map({'L': CLOSED}) But when I run the above 3 lines, it seems to overwrite the existing mappings: market_state reason 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN F 8 NaN NaN 9 NaN SL 10 NaN NaN 11 NaN NaN 12 NaN NaN 13 CLOSED L (And it seems to have swapped the columns? - though this doesn't matter) Each of the lines seems to overwrite the dataframe. Is there a way simply to update the dataframe, i.e. so it just updates the three mappings, like this: reason market_state 0 NaN UNSCHEDULED_AUCTION 1 NaN None 2 NaN CLOSED 3 NaN CONTINUOUS_TRADING 4 NaN None 5 NaN UNSCHEDULED_AUCTION 6 NaN UNSCHEDULED_AUCTION 7 F OPENING_AUCTION 8 NaN CONTINUOUS_TRADING 9 SL CLOSING_AUCTION 10 NaN HALTED 11 NaN None 12 NaN None 13 L CLOSED
[ "Join values to one dictionary and add Series.fillna by same column market_state:\nd = {'F': 'OPENING_AUCTION','SL': 'CLOSING_AUCTION', 'L': 'CLOSED'}\nmarket_info_df['market_state'] = (market_info_df['reason'].map(d)\n .fillna(market_info_df['market_state']))\nprint (market_info_df)\n reason market_state\n0 NaN UNSCHEDULED_AUCTION\n1 NaN None\n2 NaN CLOSED\n3 NaN CONTINUOUS_TRADING\n4 NaN None\n5 NaN UNSCHEDULED_AUCTION\n6 NaN UNSCHEDULED_AUCTION\n7 F OPENING_AUCTION\n8 NaN CONTINUOUS_TRADING\n9 SL CLOSING_AUCTION\n10 NaN HALTED\n11 NaN None\n12 NaN None\n13 L CLOSED\n\n", "Use a single dictionary, then fillna with the original values if needed:\nmarket_info_df['market_state'] = (\n market_info_df['reason']\n .map({'F': 'OPENING_AUCTION', # only ONE dictionary\n 'SL': 'CLOSING_AUCTION',\n 'L': 'CLOSED'})\n .fillna(market_info_df['market_state'])\n)\n\nOr, to only update the NA values:\ndf.loc[df['market_state'].isna(), 'market_state'] = (\n market_info_df['reason']\n .map({'F': 'OPENING_AUCTION', # only ONE dictionary\n 'SL': 'CLOSING_AUCTION',\n 'L': 'CLOSED'})\n)\n\nOutput:\n reason market_state\n0 NaN UNSCHEDULED_AUCTION\n1 NaN None\n2 NaN CLOSED\n3 NaN CONTINUOUS_TRADING\n4 NaN None\n5 NaN UNSCHEDULED_AUCTION\n6 NaN UNSCHEDULED_AUCTION\n7 F OPENING_AUCTION\n8 NaN CONTINUOUS_TRADING\n9 SL CLOSING_AUCTION\n10 NaN HALTED\n11 NaN None\n12 NaN None\n13 L CLOSED\n\n" ]
[ 2, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074519937_dataframe_pandas_python.txt
Q: check if a txt file exists in a specific directory or not? I am trying to find out if a txt file named 'newfile' exists in a specified directory or not, and if not, create a new txt file import os.path if (os.path.exists("newfile.txt") == False): open("count.txt", "w") but it does not work since I cannot access the current or specified directory with this code. A: import inspect import os module_path = inspect.getfile(inspect.currentframe()) module_dir = os.path.realpath(os.path.dirname(module_path)) os.chdir(module_dir) # set working directory to where this file is if not os.path.exists("C:\\absolute\\directory\\newfile.txt"): open("count.txt", "w") You can replace the path with Unix-style directories. A: Solution 1: import os.path file = r'yourpath\file_to_check.txt' # example text file (raw string avoids backslash escape issues) if os.path.isfile(file): print("Yes. it is a file") elif os.path.isdir(file): print("Yes. it is a directory") elif os.path.exists(file): print("Something exists") else : print("Nothing") Solution 2: from pathlib import Path my_file = Path(r"your_path\file_to_check.txt") # this path style only works on Windows if my_file.is_file(): print("Yes it is a file") A: You can use glob to list the directory. import glob file_paths = glob.glob('../your/file_directory/*') if not any(path.endswith('count.txt') for path in file_paths): with open('../your/file_directory/count.txt', 'w') as f: f.write('Create new text file')
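And a compact pathlib-only sketch combining the ideas above (the directory is a hypothetical placeholder; touch() creates the file if it is missing and the parent folder already exists):

from pathlib import Path

directory = Path(r"C:\some\directory")  # hypothetical target folder
if not (directory / "newfile.txt").is_file():
    (directory / "count.txt").touch()   # creates an empty file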
check if a txt file exists in a specific directory or not?
I am trying to find out if a txt file named 'newfile' exists in a specified directory or not, and if not, create a new txt file import os.path if (os.path.exists("newfile.txt") == False): open("count.txt", "w") but it does not work since I cannot access the current or specified directory with this code.
[ "import inspect\nimport os\n\nmodule_path = inspect.getfile(inspect.currentframe())\nmodule_dir = os.path.realpath(os.path.dirname(module_path))\nos.chdir(module_dir) # set working directory to where file is\n\nif not os.path.exists(\"C:\\\\absolute\\\\directory\\\\newfile.txt\"):\n open(\"count.txt\", \"w\")\n\nYou can replace the path with unix style directories.\n", "Solve 1:\nimport os.path\n\nfile = 'yourpath\\file_to_check.txt' # 예제 Textfile\n\nif os.path.isfile(file):\n print(\"Yes. it is a file\")\nelif os.path.isdir(file):\n print(\"Yes. it is a directory\")\nelif os.path.exists(file):\n print(\"Something exist\")\nelse :\n print(\"Nothing\")\n\nSolve 2:\nfrom pathlib import Path\n\nmy_file = Path(\"your_path\\file_to_check.txt\") # This way only works in windows os\nif my_file.is_file():\n print(\"Yes it is a file\") \n\n", "You can use glob to locate the directory.\nimport glob\n\nfile_path = glob.glob('../your/file_directory/*')\nif \"count.txt\" not in file_path:\n with open('../your/file_directory/count.txt', 'w') as f:\n f.write('Create new text file')\n\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074519810_python.txt
Q: Write a filter Factory populated at import time in Python using metaclasses I want a simple way to implement new filters in a module. They would eventually be automatically recognized by the library at import. For example, if I want the list of all filters I do: >>> FilterFactory.available_filters { 'upper': __main__.FilterUpper, 'lower': __main__.FilterLower, 'trim': __main__.FilterTrim } My first approach was to use a classmethod and a LRU Cache: class FilterFactory: @classmethod @lru_cache() def available_filters(cls): fmap = {} for _, member in inspect.getmembers(sys.modules[__name__]): if not inspect.isclass(member) or not hasattr(member, 'name'): continue if member.name() == 'base': continue fmap[member.name()] = member return fmap Then I realized that it is better to build the factory when the module is loaded using metaclasses: from abc import abstractmethod class FilterFactory: available_filters = {} @classmethod def register(cls, filter_: type): # if not issubclass(filter_, Filter): # raise InvalidFilterError(f'Invalid filter: {filter_}') cls.available_filters[filter_.name] = filter_ setattr(cls, filter_.name, filter_) def __new__(cls, name, *args, **kwargs): if name not in cls.available_filters: raise ValueError(f'Unknown filter: {name}') return cls.available_filters[name](*args, **kwargs) class MetaFilter(type): def __new__(cls, name, bases, attrs): new_class = super().__new__(cls, name, bases, attrs) if not name.startswith('Filter') and name != 'BaseFilter': raise ValueError('Filter class names must start with "Filter"') new_class.name = name.split('Filter', maxsplit=1)[1].lower() if name != 'BaseFilter': FilterFactory.register(new_class) return new_class class BaseFilter(metaclass=MetaFilter): """ Base class for filters. """ @abstractmethod def filter(self, value: str) -> str: raise NotImplementedError("Filter.filter() must be implemented") def __init__(self, *args, **kwargs): ... def __repr__(self): return f'{self.__class__.__name__}' def __call__(self, value: str) -> str: return self.filter(value) class FilterUpper(BaseFilter): def filter(self, value: str) -> str: return value.upper() class FilterRegex(BaseFilter): def __init__(self, pattern: str, replace: str): self.pattern = re.compile(pattern) self.replace = replace def filter(self, value: str) -> str: return self.pattern.sub(value, self.replace) This looks neat, but it has some flaws: I cannot ensure the filter passed to register is indeed a subclass of BaseFilter because this base class isn't yet declared. Unlike C++ I cannot do forward declarations in Python. I must specifically prevent the abstract class BaseFilter to be added to the available_filters. This pattern looks a bit odd. The goal is to be able to use FilterFactory.available_filters to build a JSON schema validator that only accepts available filters. And use the factory to create then apply filters multiple times during the execution of the program. The validation may be done with voluptuous by adding some extra in the metaclass: class MetaFilter(type): def __new__(cls, name, bases, attrs): ... new_class.__params__, new_class.__types__ = cls.extract_parameters(new_class) return new_class @classmethod def extract_parameters(cls, new_class): """ Extract parameters from the class. 
Ensure that all the parameters are annotated.""" params = dict(inspect.signature(new_class.__init__).parameters) for key in ['self', 'args', 'kwargs']: if key in params: del params[key] for param, value in params.items(): if value.annotation is inspect.Parameter.empty: raise ValueError( f'Filter {new_class.name} has an untyped parameter: {param}' ) return (params.keys(), [p.annotation for p in params.values()]) Then I can create a validation schema with: filters = {} for filter_name, filter_class in FilterFactory.available_filters.items(): filters[Optional(filter_name)] = All( ExactSequence(filter_class.__types__), lambda args: FilterFactory(filter_name, *args) ) schema = Schema({'filter': filters}) s = schema({ 'filter': { 'regex': ['foo', 'bar'] } }) assert(s['filter']['regex'].filter('foo') == 'bar') If the filter is missing from the implementation, the validation fails. Adding a new filter to the application is as simple as adding this filter in the filters.py module. Is this implementation Zen and Pythonic? What better option can I use? A: TL;DR: The idea is good - I don't see the problem of "can't forward reference classes" as a real one, as a filter class will have to import BaseFilter anyway, even if it is in a different file, and therefore, it has to be made available early, or the program won't even run. (That is: you won't get a class declared as inheriting from BaseFilter that does not, in fact, do so.) That said, since Python 3.6 there is a new feature in the language that does away with the need for a metaclass in this case (and as a bonus, it even simplifies the fact that BaseFilter itself is not a filter): the __init_subclass__ method. It should be written as a plain method on a base class - it will always be a class method, even without being decorated with @classmethod, and it will be called for each new subclass, with the subclass as first argument: you can write all your registering logic in that method. (And it is not called for the base class itself, where it should be declared.) init subclass documentation
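A minimal sketch of the registration approach the answer describes, using __init_subclass__ instead of a metaclass. The names reuse the question's; how the pieces are wired together here is an assumption, not the answerer's code.

class BaseFilter:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if not cls.__name__.startswith('Filter'):
            raise ValueError('Filter class names must start with "Filter"')
        cls.name = cls.__name__.split('Filter', maxsplit=1)[1].lower()
        FilterFactory.register(cls)  # never runs for BaseFilter itself

    def __call__(self, value: str) -> str:
        return self.filter(value)

class FilterLower(BaseFilter):
    def filter(self, value: str) -> str:
        return value.lower()

Note that FilterFactory.register can now safely check issubclass(filter_, BaseFilter), since BaseFilter is fully defined by the time any subclass triggers the hook.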
Write a filter Factory populated at import time in Python using metaclasses
I want a simple way to implement new filters in a module. They would eventually be automatically recognized by the library at import. For example, if I want the list of all filters I do: >>> FilterFactory.available_filters { 'upper': __main__.FilterUpper, 'lower': __main__.FilterLower, 'trim': __main__.FilterTrim } My first approach was to use a classmethod and a LRU Cache: class FilterFactory: @classmethod @lru_cache() def available_filters(cls): fmap = {} for _, member in inspect.getmembers(sys.modules[__name__]): if not inspect.isclass(member) or not hasattr(member, 'name'): continue if member.name() == 'base': continue fmap[member.name()] = member return fmap Then I realized that it is better to build the factory when the module is loaded using metaclasses: from abc import abstractmethod class FilterFactory: available_filters = {} @classmethod def register(cls, filter_: type): # if not issubclass(filter_, Filter): # raise InvalidFilterError(f'Invalid filter: {filter_}') cls.available_filters[filter_.name] = filter_ setattr(cls, filter_.name, filter_) def __new__(cls, name, *args, **kwargs): if name not in cls.available_filters: raise ValueError(f'Unknown filter: {name}') return cls.available_filters[name](*args, **kwargs) class MetaFilter(type): def __new__(cls, name, bases, attrs): new_class = super().__new__(cls, name, bases, attrs) if not name.startswith('Filter') and name != 'BaseFilter': raise ValueError('Filter class names must start with "Filter"') new_class.name = name.split('Filter', maxsplit=1)[1].lower() if name != 'BaseFilter': FilterFactory.register(new_class) return new_class class BaseFilter(metaclass=MetaFilter): """ Base class for filters. """ @abstractmethod def filter(self, value: str) -> str: raise NotImplementedError("Filter.filter() must be implemented") def __init__(self, *args, **kwargs): ... def __repr__(self): return f'{self.__class__.__name__}' def __call__(self, value: str) -> str: return self.filter(value) class FilterUpper(BaseFilter): def filter(self, value: str) -> str: return value.upper() class FilterRegex(BaseFilter): def __init__(self, pattern: str, replace: str): self.pattern = re.compile(pattern) self.replace = replace def filter(self, value: str) -> str: return self.pattern.sub(value, self.replace) This looks neat, but it has some flaws: I cannot ensure the filter passed to register is indeed a subclass of BaseFilter because this base class isn't yet declared. Unlike C++ I cannot do forward declarations in Python. I must specifically prevent the abstract class BaseFilter to be added to the available_filters. This pattern looks a bit odd. The goal is to be able to use FilterFactory.available_filters to build a JSON schema validator that only accepts available filters. And use the factory to create then apply filters multiple times during the execution of the program. The validation may be done with voluptuous by adding some extra in the metaclass: class MetaFilter(type): def __new__(cls, name, bases, attrs): ... new_class.__params__, new_class.__types__ = cls.extract_parameters(new_class) return new_class @classmethod def extract_parameters(cls, new_class): """ Extract parameters from the class. 
Ensure that all the parameters are annotated.""" params = dict(inspect.signature(new_class.__init__).parameters) for key in ['self', 'args', 'kwargs']: if key in params: del params[key] for param, value in params.items(): if value.annotation is inspect.Parameter.empty: raise ValueError( f'Filter {new_class.name} has an untyped parameter: {param}' ) return (params.keys(), [p.annotation for p in params.values()]) Then I can create a validation schema with: filters = {} for filter_name, filter_class in FilterFactory.available_filters.items(): filters[Optional(filter_name)] = All( ExactSequence(filter_class.__types__), lambda args: FilterFactory(filter_name, *args) ) schema = Schema({'filter': filters}) s = schema({ 'filter': { 'regex': ['foo', 'bar'] } }) assert(s['filter']['regex'].filter('foo') == 'bar') If the filter is missing from the implementation, the validation fails. Adding a new filter to the application is as simple as adding this filter in the filters.py module. Is this implementation Zen and Pythonic? What better option can I use?
[ "TL;DR:\nThe idea is good - I don't see the problem of \"can't forward reference classes\" as a real one,a s a filter class will have to import BaseFilter anyway, even if it is in a different file, and therefore, it has to be made available early, or the program won't even run. (that is: you won't get a class declared as inheriting from BaseFilter that dos not, in fact, does so).\nThat said, since Python 3.6 there is a new feature in the language that does away with the need for a metaclass in this case (and as a bonus, it even simplifies the fact that BaseFilter itself is not a filter): the __init_subclass__ method.\nIt should be written as a plain method on a base-class - it will always be a class method, even without being decorated with @classmethod, and it will be called for each new subclass, with the subclass as first argument: you can write all your registering logic in that method. (ANd it is not called for the base class, where it should be declared, itself).\ninit subclass documentation\n" ]
[ 1 ]
[]
[]
[ "design_patterns", "factory", "metaclass", "oop", "python" ]
stackoverflow_0074467584_design_patterns_factory_metaclass_oop_python.txt
Q: Pyautogui not importing "No module named 'pyautogui' " import pyautogui print("hello") After running this I am presented with the following: C:\Users\Darkm\Anaconda3\envs\PythonChallenges\python.exe C:/Users/Darkm/PycharmProjects/PythonChallenges/Automation1.py Traceback (most recent call last): File "C:/Users/Darkm/PycharmProjects/PythonChallenges/Automation1.py", line 1, in <module> import pyautogui ModuleNotFoundError: No module named 'pyautogui' Process finished with exit code 1 Could somebody help me understand why I cannot import pyautogui? Some background information: 1.) I only have one version of python (3.7.4) 2.) I have already installed the module through "pip install pyautogui" in cmd prompt. 3.) Pyautogui is installed under C:\Users\Darkm\Anaconda3\Lib\site-packages 4.) Pyautogui does not show up when I go into file > settings > project interpreter and try to add it manually (it just doesn't show up). 5.) Have restarted computer multiple times At this point I cannot figure out why I'm unable to import pyautogui, any help would be greatly appreciated! A: Why are you getting this error? Because you are using PyCharm. In PyCharm you don't need to install Python packages from the command prompt; you install them through the PyCharm Project Interpreter. Here are some tips that can help you! Step 1: Go to PyCharm settings and go to this directory: Preferences and select Interpreter Settings Screenshot: Step 2: Click on this plus icon. Screenshot: Step 3: Type your package name and select the package. Screenshot: Step 4: Then click on the install button. Step 5: Click on okay Then wait for two to three minutes and try again. Hopefully it'll work. A: You can also do this trick on imports import sys import subprocess try: import pyautogui except ImportError: subprocess.call([sys.executable, "-m", "pip", "install", "pyautogui"]) A: It's probably because of PyCharm. If it's not PyCharm and you still have problems you can try: Cmd: python -m pip install < module > To append to PYTHONPATH: IDE: import sys sys.path.append('< path >') A: If you face module-not-found in a Jupyter environment, you have to install the package in the Jupyter environment instead of installing it from the command prompt, with this command (for Windows) !pip install pyautogui After that you can easily import and use it. Whenever you want to tell Jupyter that this is a system command you should put ! before your command. A: Make sure you have the latest version of Python installed on your computer On Windows: Open up Windows PowerShell. Type py -m pip install pyautogui and wait for 2 minutes. Then, restart your IDE. Try running your code, it should work now. On a Mac: Open up iTerm. Type pip3 install pyautogui or python3 -m pip install pyautogui, and wait for 2 minutes. Then, restart your IDE. Then just simply try to run your code, it should be working. Hope this helped! A: Try this... # Install a pip package in the current Jupyter kernel import sys !{sys.executable} -m pip install numpy Tutorial from Pythonic Perambulations A: Go to terminal. type "pip install pyautogui" pip install pyautogui Your problem should be solved now. If it isn't try this. !pip install pyautogui A: You are using the PyCharm IDE. To install packages in PyCharm, you can use the project interpreter in settings or type the following in the terminal: pip install pyautogui A: Make sure the library is installed for the same version of Python that runs your code.
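One diagnostic cuts across most of these answers: the traceback shows the script running under the Anaconda environment PythonChallenges, while background point 3 says pyautogui landed in the base environment's site-packages - a mismatch. A small hedged sketch for confirming which interpreter runs your code and installing into exactly that one:

import sys
import subprocess

print(sys.executable)  # should print ...\envs\PythonChallenges\python.exe here

# install pyautogui into the interpreter that is actually running this script
subprocess.check_call([sys.executable, "-m", "pip", "install", "pyautogui"])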
Pyautogui not importing "No module named 'pyautogui' "
import pyautogui print("hello") After running this I am presented with the following: C:\Users\Darkm\Anaconda3\envs\PythonChallenges\python.exe C:/Users/Darkm/PycharmProjects/PythonChallenges/Automation1.py Traceback (most recent call last): File "C:/Users/Darkm/PycharmProjects/PythonChallenges/Automation1.py", line 1, in <module> import pyautogui ModuleNotFoundError: No module named 'pyautogui' Process finished with exit code 1 Could somebody help me understand why I cannot import pyautogui? Some background information: 1.) I only have one version of python (3.7.4) 2.) I have already installed the module through "pip install pyautogui" in cmd prompt. 3.) Pyautogui is installed under C:\Users\Darkm\Anaconda3\Lib\site-packages 4.) Pyautogui does not show up when I go into file > settings > project interpreter and try to add it manually (it just doesn't show up). 5.) Have restarted computer multiple times At this point I cannot figure out why I'm unable to import pyautogui, any help would be greatly appreciated!
[ "Why are you getting this error?\nBecause you are using PyCharm. \nIn PyCharm you don't need to install python packages from command prompt, in PyCharm you need to install python packages from PyCharm Project Interpreter.\nHere are some tips that can help you!\nStep 1: Go to PyCharm settings and go to this directory: Preferences and select Interpreter Settings\nScreenshot:\n\nStep 2: Click on this plus icon.\nScreenshot:\n\nStep 3: Type your package name and select package.\nScreenshot:\n\nStep4: Then click on install button.\n\nStep 5: Click on okay\nThen wait for two to three minutes and try again.\nHopefully it'll work.\n", "You and also do this trick on imports\nimport subprocess\n\ntry:\n import pyautogui\nexcept ImportError:\n subprocess.call(\"pip install pyautogui\")`\n\n", "It's probably because of pycharm.\nIf it's not pycharm and you still have problems you can try:\nCmd:\npython -m pip install < module >\n\nTo append to PYTHONPATH:\nIDE:\nimport sys\nsys.path.append('< path >')\n\n", "If you face module not found on Jupyter environment you had to install it on Jupyter environment instead of installing it on command prompt\nby this command (for windows)\n!pip install pyautogui\n\nAfter that you can easily import and use it.\nWhenever you want to tell Jupyter that this is system command you should put ! before your command.\n", "Make sure you have the latest version of python installed on your computer\nOn Windows:\nOpen up Windows PowerShell. Type py -m pip install pyautogui and wait for 2 minutes. Then, restart your IDE. Try running your code, it should work now.\nOn a Mac:\nOpen up iTerm. Type pip3 install pyautogui or py -m pip3 install pyautogui, and wait for 2 minutes. Then, restart your IDE. Then just simply try to run your code, it should be working.\nHope this helped!\n", "Try this...\n# Install a pip package in the current Jupyter kernel\nimport sys\n!{sys.executable} -m pip install numpy\n\nTutorial from Pythonic Perambulations\n", "Go to terminal.\ntype \"pip install pyautogui\"\npip install pyautogui\n\nYour problem should be solved now.\nIf it isn't try this.\n!pip install pyautogui\n\n", "You are using Pycharm IDE. To install packages in Pycharm, you can use the project interpreter in settings or type the following in the terminal:\npip install pyautogui\n", "make sure that library is installed in same version of python\n" ]
[ 3, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "anaconda", "pyautogui", "python", "python_import" ]
stackoverflow_0058887481_anaconda_pyautogui_python_python_import.txt
Q: No module named 'cuda._lib'; 'cuda' is not a package After following the steps on cuda-python to install cuda-python with the conda instructions, I try to from cuda import cuda, nvrtc as in the example in the PyCharm Python console, but it raises an error: Traceback (most recent call last): File "D:\Anaconda\envs\hierot\lib\code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "D:\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "cuda\cuda.pyx", line 1, in init cuda.cuda # Copyright 2021-2022 NVIDIA Corporation. All rights reserved. File "D:\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) ModuleNotFoundError: No module named 'cuda._lib'; 'cuda' is not a package But the code above can be successfully run in the terminal (hierot) D:\Projects\SimPlatform>python Python 3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from cuda import cuda, nvrtc >>> Please help me with the problem, thanks in advance. Further information provided on request. I searched with ModuleNotFoundError: No module named 'xxx' Solutions suggest configuring the correct Python interpreter, but I believe my interpreter is already properly configured. I also searched with No module named 'xxx'; 'yyy' is not a package Some say the cause is that the name cuda is shadowed by the package name cuda, but I don't know how to fix it. A: I finally solved this problem by fixing the interpreter paths: I had initially added site-packages/cuda to the paths while debugging another problem, and that entry shadowed the name cuda. (The image below is after deleting the redundant path)
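A quick way to confirm this kind of shadowing, sketched as a first debugging step rather than a guaranteed fix: check where Python resolves the name cuda from, and which interpreter-path entries mention it.

import sys
import cuda

# a bare .../site-packages/cuda entry on the path shadows the real package
print(getattr(cuda, "__file__", None), getattr(cuda, "__path__", None))
print([p for p in sys.path if "cuda" in p.lower()])  # any such entry is suspect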
No module named 'cuda._lib'; 'cuda' is not a package
After following the steps on cuda-python to install cuda-python with conda instruction, I try to from cuda import cuda, nvrtc as in the example in the pycharm python console, but it raises an error: Traceback (most recent call last): File "D:\Anaconda\envs\hierot\lib\code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "D:\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "cuda\cuda.pyx", line 1, in init cuda.cuda # Copyright 2021-2022 NVIDIA Corporation. All rights reserved. File "D:\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) ModuleNotFoundError: No module named 'cuda._lib'; 'cuda' is not a package But the code above can be successfully run in the terminal (hierot) D:\Projects\SimPlatform>python Python 3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from cuda import cuda, nvrtc >>> Please help me with the problem, thanks in advance. Further information provided on request. I searched with ModuleNotFoundError: No module named 'xxx' Solutions suggest configure correct python interpreter, but I believe my interpreter is already properly configured. And search with No module named 'xxx'; 'yyy' is not a package Some says the cause is the name cuda is shadowed by the package name cuda, but I don't know how to fix it.
[ "Oh I finally solved this problem, by configuring interpreter path, which in the beginning I added site-packages/cuda because I was trying to debug another problem at that time, and thus the shadow of the name cuda. (The image below is after deleting the redundant path)\n\n" ]
[ 1 ]
[]
[]
[ "cuda", "pycharm", "python" ]
stackoverflow_0074515262_cuda_pycharm_python.txt
Q: Flask Pagination without SQLAlchemy Looking to paginate the data set without using SQLAlchemy. I am getting the data from the database using a query. def data(): cursor.execute("""select * from table_data""") records = cursor.fetchall() return render_template('data.html') And I am rendering this result in the @app.route method. Pagination can be done using the paginate and query methods in SQLAlchemy, but since I am getting the data directly from the database, suggestions on pagination would be helpful. A: The approach below can be used to paginate in Flask without using SQLAlchemy; it relies on the flask-paginate package: from flask_paginate import Pagination, get_page_parameter def data(): page = request.args.get(get_page_parameter(), type=int, default=1) limit = 20 offset = page*limit - limit cursor = connection.cursor() # count the rows instead of fetching them all just to measure the total cursor.execute("""select count(*) from table_data""") total = cursor.fetchone()[0] print(total) cursor.execute("""select * from table_data limit %s offset %s""", (limit, offset)) data = cursor.fetchall() pagination = Pagination(page=page, per_page=limit, total=total) return render_template('data.html', pagination=pagination, data=data)
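A hedged usage note, assuming the flask-paginate package (which is where Pagination and get_page_parameter come from): the Pagination object renders its own navigation, so emitting {{ pagination.links }} in data.html after the rows prints the page links. The constructor also accepts styling hints, for example:

pagination = Pagination(page=page, per_page=limit, total=total,
                        css_framework='bootstrap4',  # assumption: match the CSS framework your template uses
                        record_name='rows')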
Flask Pagination without SQLAlchemy
Looking to paginate the data set without using SQLAlchemy. I am getting the data from the database using a query. def data(): cursor.execute("""select * from table_data""") records = cursor.fetchall() return render_template('data.html') And I am rendering this result in the @app.route method. Pagination can be done using the paginate and query methods in SQLAlchemy, but since I am getting the data directly from the database, suggestions on pagination would be helpful.
[ "Below approach can be used to paginate in Flaks without using SQLAlchemy,\ndef data():\n\n page = request.args.get(get_page_parameter(), type=int, default=1)\n limit=20\n offset = page*limit - limit\n cursor = connection.cursor()\n cursor.execute(\"\"\"select * from user_listing\n table_data\"\"\")\n result = cursor.fetchall()\n\n total = len(result)\n print(total)\n\n cursor.execute(\"\"\"select * from table_data\n limit %s offset %s\"\"\", (limit, offset))\n data = cursor.fetchall()\n\n pagination = Pagination(page=page, page_per=limit, total=total)\n return render_template('data.html', pagination=pagination, data = data)\n\n" ]
[ 0 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0074520043_flask_python.txt
Q: Python: recursively move all files from folders and sub folders into a root folder Given a file tree with much depth like this: ├── movepy.py # the file I want to use to move all other files └── testfodlerComp ├── asdas │   └── erwer.txt ├── asdasdas │   └── sdffg.txt └── asdasdasdasd ├── hoihoi.txt ├── hoihej.txt └── asd ├── dfsdf.txt └── dsfsdfsd.txt How can I then move all items recursively into the current working directory: ├── movepy.py │── erwer.txt │── sdffg.txt ├── hoihoi.txt ├── hoihej.txt ├── dfsdf.txt └── dsfsdfsd.txt The file tree in this question is an example; in reality I want to move a tree that has many nested sub folders with many nested files. A: import os import shutil from pathlib import Path cwd = Path(os.getcwd()) to_remove = set() for root, dirnames, files in os.walk(cwd): for d in dirnames: to_remove.add(root / Path(d)) for f in files: p = root / Path(f) if p != cwd and p.parent != cwd: print(f"Moving {p} -> {cwd}") shutil.move(p, cwd) # Remove directories for d in to_remove: if os.path.exists(d): print(d) shutil.rmtree(d) A: This should do the trick for what you're trying to achieve. import os import shutil # store the path to your root directory base = '.' # traverse the root directory, listing directories as dirs and files as files for root, dirs, files in os.walk(base): # skip files that are already sitting in the base folder if root == base: continue for file in files: # move file from nested folder into the base folder shutil.move(os.path.join(root, file), os.path.join(base, file))
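Both answers silently overwrite when two nested files share a name. A hedged variant that de-duplicates by suffixing a counter (the renaming scheme is an assumption; adapt as needed):

import os
import shutil

def unique_dest(folder, name):
    # append _1, _2, ... before the extension until the name is free
    base, ext = os.path.splitext(name)
    candidate, i = name, 1
    while os.path.exists(os.path.join(folder, candidate)):
        candidate = f"{base}_{i}{ext}"
        i += 1
    return os.path.join(folder, candidate)

root_dir = "."
for dirpath, _, files in os.walk(root_dir):
    if dirpath == root_dir:
        continue  # files already at the top level stay put
    for f in files:
        shutil.move(os.path.join(dirpath, f), unique_dest(root_dir, f))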
Python: recursively move all files from folders and sub folders into a root folder
Given a file tree with much dept like this: ├── movepy.py # the file I want use to move all other files └── testfodlerComp ├── asdas │   └── erwer.txt ├── asdasdas │   └── sdffg.txt └── asdasdasdasd ├── hoihoi.txt ├── hoihej.txt └── asd ├── dfsdf.txt └── dsfsdfsd.txt How can I then move all items recursively into the current working directory: ├── movepy.py │── erwer.txt │── sdffg.txt ├── hoihoi.txt ├── hoihej.txt ├── dfsdf.txt └── dsfsdfsd.txt The file tree in this question is an example, in reality I want to move a tree that has many nested sub folders with many nested files.
[ "import os\nimport shutil\nfrom pathlib import Path\n\ncwd = Path(os.getcwd())\n\nto_remove = set()\nfor root, dirnames, files in os.walk(cwd):\n for d in dirnames:\n to_remove.add(root / Path(d))\n\n for f in files:\n p = root / Path(f)\n if p != cwd and p.parent != cwd:\n print(f\"Moving {p} -> {cwd}\")\n shutil.move(p, cwd)\n\n# Remove directories\nfor d in to_remove:\n if os.path.exists(d):\n print(d)\n shutil.rmtree(d)\n\n", "This should do the trick for what you're trying to achieve.\nimport os\nimport shutil\n\n#store the path to your root directory\nbase='.'\n\n# traverse root directory, and list directories as dirs and files as files\nfor root, dirs, files in os.walk(base):\n path = root.split(os.sep)\n\n for file in files:\n if not os.path.isdir(file):\n \n # move file from nested folder into the base folder\n shutil.move(os.path.join(root,file),os.path.join(base,file))\n\n" ]
[ 1, 1 ]
[]
[]
[ "move", "python", "python_3.x" ]
stackoverflow_0074519777_move_python_python_3.x.txt
Q: Efficient way to calculate difference from pandas datetime columns based on days I have a dataframe with a few million rows where I want to calculate the difference on a daily basis between two columns which are in datetime format. There are Stack Overflow questions which answer this question by computing the difference on a timestamp basis (see here). Doing it on a timestamp basis felt quite fast: df["Difference"] = (df["end_date"] - df["start_date"]).dt.days But doing it on a daily basis felt quite slow: df["Difference"] = (df["end_date"].dt.date - df["start_date"].dt.date).dt.days I was wondering if there is an easy but better/faster way to achieve the same result? Example Code: import pandas as pd import numpy as np data = {'Condition' :["a", "a", "b"], 'start_date': [pd.Timestamp('2022-01-01 23:00:00.000000'), pd.Timestamp('2022-01-01 23:00:00.000000'), pd.Timestamp('2022-01-01 23:00:00.000000')], 'end_date': [pd.Timestamp('2022-01-02 01:00:00.000000'), pd.Timestamp('2022-02-01 23:00:00.000000'), pd.Timestamp('2022-01-02 01:00:00.000000')]} df = pd.DataFrame(data) df["Right_Difference"] = np.where((df["Condition"] == "a"), ((df["end_date"].dt.date - df["start_date"].dt.date).dt.days), np.nan) df["Wrong_Difference"] = np.where((df["Condition"] == "a"), ((df["end_date"] - df["start_date"]).dt.days), np.nan) A: Use Series.dt.to_period; faster still are Series.dt.normalize or Series.dt.floor: #300k rows df = pd.concat([df] * 100000, ignore_index=True) In [286]: %timeit (df["end_date"].dt.date - df["start_date"].dt.date).dt.days 1.14 s ± 135 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [287]: %timeit df["end_date"].dt.to_period('d').astype('int') - df["start_date"].dt.to_period('d').astype('int') 64.1 ms ± 3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [288]: %timeit (df["end_date"].dt.normalize() - df["start_date"].dt.normalize()).dt.days 27.7 ms ± 316 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [289]: %timeit (df["end_date"].dt.floor('d') - df["start_date"].dt.floor('d')).dt.days 27.7 ms ± 937 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
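Folding the benchmark winner back into the question's conditional form - a small sketch assuming the same example frame:

import numpy as np

df["Difference"] = np.where(
    df["Condition"] == "a",
    (df["end_date"].dt.normalize() - df["start_date"].dt.normalize()).dt.days,
    np.nan,
)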
Efficient way to calculate difference from pandas datetime columns based on days
I have a dataframe with a few million rows where I want to calculate the difference on a daily basis between two columns which are in datetime format. There are Stack Overflow questions which answer this question by computing the difference on a timestamp basis (see here). Doing it on a timestamp basis felt quite fast: df["Difference"] = (df["end_date"] - df["start_date"]).dt.days But doing it on a daily basis felt quite slow: df["Difference"] = (df["end_date"].dt.date - df["start_date"].dt.date).dt.days I was wondering if there is an easy but better/faster way to achieve the same result? Example Code: import pandas as pd import numpy as np data = {'Condition' :["a", "a", "b"], 'start_date': [pd.Timestamp('2022-01-01 23:00:00.000000'), pd.Timestamp('2022-01-01 23:00:00.000000'), pd.Timestamp('2022-01-01 23:00:00.000000')], 'end_date': [pd.Timestamp('2022-01-02 01:00:00.000000'), pd.Timestamp('2022-02-01 23:00:00.000000'), pd.Timestamp('2022-01-02 01:00:00.000000')]} df = pd.DataFrame(data) df["Right_Difference"] = np.where((df["Condition"] == "a"), ((df["end_date"].dt.date - df["start_date"].dt.date).dt.days), np.nan) df["Wrong_Difference"] = np.where((df["Condition"] == "a"), ((df["end_date"] - df["start_date"]).dt.days), np.nan)
[ "Use Series.dt.to_period, faster is Series.dt.normalize or Series.dt.floor :\n#300k rows\ndf = pd.concat([df] * 100000, ignore_index=True)\n\nIn [286]: %timeit (df[\"end_date\"].dt.date - df[\"start_date\"].dt.date).dt.days\n1.14 s ± 135 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nIn [287]: %timeit df[\"end_date\"].dt.to_period('d').astype('int') - df[\"start_date\"].dt.to_period('d').astype('int')\n64.1 ms ± 3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nIn [288]: %timeit (df[\"end_date\"].dt.normalize() - df[\"start_date\"].dt.normalize()).dt.days\n27.7 ms ± 316 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nIn [289]: %timeit (df[\"end_date\"].dt.floor('d') - df[\"start_date\"].dt.floor('d')).dt.days\n27.7 ms ± 937 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n" ]
[ 1 ]
[]
[]
[ "datetime", "pandas", "python" ]
stackoverflow_0074520051_datetime_pandas_python.txt
Q: ITK: Cannot find ITKConfig.cmake file in ITK-5.2.1? I've installed itk-5.2.1 with pip in an Anaconda environment with Python 3.9. Separately, I'm trying to run CMake to build Greedy. In the CMake console (I'm using Linux) I'm asked for the directory where ITKConfig.cmake or itk-config.cmake is located. I have been searching in the ITK binaries directory among the Anaconda environment files, but I didn't find it. Does anyone know where that file is located? A: It is usually in ...\lib\cmake\ITK-5.2\ITKConfig.cmake. One full path on my computer is M:\a\Seg3D-deb\Externals\Install\ITK_external\lib\cmake\ITK-5.3\ITKConfig.cmake.
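A quick way to hunt for the file from Python itself - a sketch assuming the active conda environment is the one where ITK was installed:

import sys
from pathlib import Path

# search the environment tree for the CMake config file
hits = list(Path(sys.prefix).rglob("ITKConfig.cmake"))
print(hits)  # pass the parent directory of a hit to CMake as ITK_DIR

If the search comes up empty, the pip wheel likely does not ship the CMake files at all, and a source (or conda) build of ITK may be needed for building C++ projects like Greedy.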
ITK: Cannot find ITKConfig.cmake file in ITK-5.2.1?
I've installed itk-5.2.1 with pip in an Anaconda environment with Python 3.9. Separately, I'm trying to run CMake to build Greedy. In the CMake console (I'm using Linux) I'm asked for the directory where ITKConfig.cmake or itk-config.cmake is located. I have been searching in the ITK binaries directory among the Anaconda environment files, but I didn't find it. Does anyone know where that file is located?
[ "It is usually in ...\\lib\\cmake\\ITK-5.2\\ITKConfig.cmake. One full path on my computer is M:\\a\\Seg3D-deb\\Externals\\Install\\ITK_external\\lib\\cmake\\ITK-5.3\\ITKConfig.cmake.\n" ]
[ 0 ]
[]
[]
[ "cmake", "itk", "python", "python_3.x" ]
stackoverflow_0074518440_cmake_itk_python_python_3.x.txt
Q: Time interval calculation between commits I have a dataframe that looks like this: commitdates api_spec_id 0 2021-04-07 84 1 2021-05-31 84 2 2021-06-21 84 3 2021-06-18 84 4 2020-12-06 124 commits commitDate 0 32 2021-04-07 12:52:56 1 32 2021-05-31 03:12:37 2 32 2021-06-21 06:50:33 3 32 2021-06-18 05:11:23 4 37 2020-12-06 20:35:45 I want to calculate the time interval elapsed between the first commit and the last commit; api_spec_id corresponds to the relevant API and each API has different commits, hence I want to find the first and last and calculate the interval between them. My desired output is: api_spec_id Age (in days) 0 84 89 1 84 89 2 84 89 3 84 67 4 124 56 I tried doing the following after scanning through similar posts here on Stack Overflow: gb = final_api.groupby('api_spec_id')['commitDate'] (gb.max() - gb.min()) / pd.Timedelta(days=1) and got the following output: api_spec_id 84 74.748345 124 22.486979 164 124.080359 184 921.732488 214 11.994167 ... 224530 1.987951 224606 8.221690 224613 67.541366 224627 151.838333 224665 657.721481 And another method: s = final_api.groupby(['api_spec_id','commitdates'])['timestamp'].agg(['min','max']); s['max']-s['min'] which returned 0 days. I am not sure if this is correct, and also I would like to append this result to a new dataframe column, but I am not sure how to do this. Any help would be appreciated. A: Use groupby.transform with min/max (or first/last if you really want the order, not the values, to matter): # pre-requisite df[['commitdates', 'commitDate']] = df[['commitdates', 'commitDate']].apply(pd.to_datetime) g = df.groupby('api_spec_id')['commitdates'] df['Age (in days)'] = g.transform('max').sub(g.transform('min')) Output: commitdates api_spec_id commits commitDate Age (in days) 0 2021-04-07 84 32 2021-04-07 12:52:56 75 days 1 2021-05-31 84 32 2021-05-31 03:12:37 75 days 2 2021-06-21 84 32 2021-06-21 06:50:33 75 days 3 2021-06-18 84 32 2021-06-18 05:11:23 75 days 4 2020-12-06 124 37 2020-12-06 20:35:45 0 days
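The transform above yields a Timedelta column, while the desired output shows plain integers; a one-line sketch to finish the answer's code:

# Timedelta -> integer number of days, matching the 'Age (in days)' column in the question
df['Age (in days)'] = df['Age (in days)'].dt.days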
Time interval calculation between commits
I have a dataframe that looks like this: commitdates api_spec_id/ 0 2021-04-07 84 1 2021-05-31 84 2 2021-06-21 84 3 2021-06-18 84 4 2020-12-06 124 commits commitDate 0 32 2021-04-07 12:52:56 1 32 2021-05-31 03:12:37 2 32 2021-06-21 06:50:33 3 32 2021-06-18 05:11:23 4 37 2020-12-06 20:35:45 I want to calculate the time interval elapsed between the first commit and last commit, api_spec_id corresponds to the relevant API, each API has different commits, hence I want to find the first and last and calculate the interval between them. My desired output is : api_spec_id Age (in days) 0 84 89 1 84 89 2 84 89 3 84 67 4 124 56 I tried doing the following after scanning through similar posts here on stack: gb = final_api.groupby('api_spec_id')['commitDate'] (gb.max() - gb.min()) / pd.Timedelta(days=1) and got the following output: api_spec_id 84 74.748345 124 22.486979 164 124.080359 184 921.732488 214 11.994167 ... 224530 1.987951 224606 8.221690 224613 67.541366 224627 151.838333 224665 657.721481 And another method: s = final_api.groupby(['api_spec_id','commitdates'])['timestamp'].agg(['min','max']); s['max']-s['min'] which returned me 0 days. I am not sure if this is correct, and also I would like to append this result to a new dataframe column, but not sure how to do this. Any help would be appreciated.
[ "Use groupby.transform with min/max (or first/last if you really want the order, not values to matter)):\n# pre-requisite\ndf[['commitdates', 'commitDate']] = df[['commitdates', 'commitDate']].apply(pd.to_datetime)\n\ng = df.groupby('api_spec_id')['commitdates']\ndf['Age (in days)'] = g.transform('max').sub(g.transform('min'))\n\nOutput:\n commitdates api_spec_id commits commitDate Age (in days)\n0 2021-04-07 84 32 2021-04-07 12:52:56 75 days\n1 2021-05-31 84 32 2021-05-31 03:12:37 75 days\n2 2021-06-21 84 32 2021-06-21 06:50:33 75 days\n3 2021-06-18 84 32 2021-06-18 05:11:23 75 days\n4 2020-12-06 124 37 2020-12-06 20:35:45 0 days\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074520045_pandas_python.txt
Q: basic question about python code on NLTK(sent(), list(s), for ~in~) I'm a Python beginner. I know that this is very easy code but it is actually difficult for me. I'm sorry. I found somebody's Python code on the internet about word2vec for embedding words. The following code is what I'm confused about. There are 2 things I can't understand: 1. why do we have to use [ ] in line 2? 2. what is the meaning of sents()? from nltk.corpus import movie_reviews sentences = [list(s) for s in movie_reviews.sents()] A: Answer to question 1: The square brackets form a list comprehension - the expression builds a new list by evaluating list(s) for every sentence s, so sentences ends up as a list of sentences. Answer to question 2: sents() is a corpus-reader method in NLTK that returns the sentences of the corpus, with each sentence already given as a list of word tokens; wrapping each one in list(s) just copies it into a plain Python list that word2vec can consume.
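A small sketch to see concretely what sents() yields (assumes the movie_reviews corpus has been downloaded, e.g. via nltk.download('movie_reviews'); the tokens shown are illustrative):

from nltk.corpus import movie_reviews

first = movie_reviews.sents()[0]
print(first)  # a sentence as a list of word tokens, e.g. ['plot', ':', 'two', ...]
print(len(movie_reviews.sents()))  # total number of sentences in the corpus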
basic question about python code on NLTK(sent(), list(s), for ~in~)
I'm a Python beginner. I know that this is very easy code but it is actually difficult for me. I'm sorry. I found somebody's Python code on the internet about word2vec for embedding words. The following code is what I'm confused about. There are 2 things I can't understand: 1. why do we have to use [ ] in line 2? 2. what is the meaning of sents()? from nltk.corpus import movie_reviews sentences = [list(s) for s in movie_reviews.sents()]
[ "Answer of question no. 1\nI think, it return code in list. [Maybe]\nAnswer of question no. 2\nThe send() method returns the next value yielded by the generator, or raises StopIteration if the generator exits without yielding another value\n" ]
[ 0 ]
[]
[]
[ "nltk", "python" ]
stackoverflow_0055646710_nltk_python.txt
Q: Best way to rotate (and translate) a set of points in python I have two sets of points (x,y) that I have plotted with matplotlib. Just visually I can see that it seems there is some kind of rotation between those. I would like to rotate one set of points around a certain point (would like to try several points of rotation) and plot them again. What would be the best way to rotate said set of points with python? I have read that perhaps shapely could be used but a simple example would help me understand how. A: Use numpy to store your points For example, if you have an nx2 array, each line being a point, like this xy=np.array([[50, 60], [10, 30], [30, 10]]) You can plot them like this plt.scatter(xy[:,0], xy[:,1]) And to rotate them, you need a rotation matrix def rotateMatrix(a): return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) You can apply this matrix to your xy set of points like this newxy = xy @ rotateMatrix(a).T Note that I transpose the rotation matrix, since the points are stored as row vectors. But, in this case, because of the specific form of the rotation matrix, you can generate the transposed one directly by just passing -a newxy = xy @ rotateMatrix(-a) If you need to rotate around a center (x0,y0) other than (0,0), just rotate not xy but xy-(x0,y0) (that is the displacement vector from center to points), and then add that rotated vector to the center. newxy = (xy-[x0,y0]) @ rotateMatrix(-a) + [x0,y0] Application import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation import matplotlib import time def rotateMatrix(a): return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) xy=np.random.randint(0,100, (200,2)) fig=plt.figure() plt.plot(xy[:,0], xy[:,1], 'o') plt.xlim(-30,130) plt.ylim(-50,110) plotdata,=plt.plot(xy[:,0], xy[:,1],'o') x0=20 y0=50 def anim(i): newxy=(xy-[x0,y0]) @ rotateMatrix(-2*i*np.pi/180) + [x0,y0] plotdata.set_data(newxy[:,0], newxy[:,1]) return [plotdata] theAnim=animation.FuncAnimation(fig, anim, interval=40, blit=False, frames=360, repeat=False) #theAnim.save('rotate.nosync.gif') plt.show()
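Since the question mentions Shapely: for 2D it can do this in one call. A small sketch, assuming the points are wrapped in a MultiPoint (shapely.affinity.rotate takes degrees by default and accepts an explicit rotation origin):

from shapely.geometry import MultiPoint
from shapely.affinity import rotate

pts = MultiPoint([(50, 60), (10, 30), (30, 10)])
rotated = rotate(pts, 30, origin=(20, 50))  # 30 degrees around (20, 50)
print([(p.x, p.y) for p in rotated.geoms])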
Best way to rotate (and translate) a set of points in python
I have two sets of points (x,y) that I have plotted with matplotlib Just visually I can see that it seems there is some kind of rotation between those. I would like to rotate one set of points around a certain point (would like to try several points of rotation) and plot them again. What would be the best way to rotate said set of points with python? I have read that perhaps shapely could be used but a simple example would help me understand how.
[ "Use numpy to store your points\nFor example, if you have a nx2 array, each line being a point, like this\nxy=np.array([[50, 60],\n [10, 30],\n [30, 10]])\n\nYou can plot them like this\nplt.scatter(xy[:,0], xy[:,1])\n\nAnd to rotate them, you need a rotation matrix\ndef rotateMatrix(a):\n return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])\n\nYou can apply this matrix to your xy set of points like this\nnewxy = xy @ rotateMatrix(a).T\n\nNote that I transpose the rotation matrix, to keep acuracy. But, in this case, because of the specific form of rotation matrix, you could generate directly the transposed one by just passing -a\nnewxy = xy @ rotateMatrix(-a)\n\nIf you need no rotate around a center (x0,y0) other than (0,0), just rotate not xy but xy-(x0,y0) (that is the displacement vector from center to points), and then add that rotated vector to the center.\nnewxy = (xy-[x0,y0]) @ rotateMatrix(-a) + [x0,y0]\n\nApplication\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nimport matplotlib\nimport time\n\ndef rotateMatrix(a):\n return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])\n\nxy=np.random.randint(0,100, (200,2))\n\nfig=plt.figure()\nplt.plot(xy[:,0], xy[:,1], 'o')\nplt.xlim(-30,130)\nplt.ylim(-50,110)\n\nplotdata,=plt.plot(xy[:,0], xy[:,1],'o')\n\nx0=20\ny0=50\n\ndef anim(i):\n newxy=(xy-[x0,y0]) @ rotateMatrix(-2*i*np.pi/180) + [x0,y0]\n plotdata.set_data(newxy[:,0], newxy[:,1])\n return [plotdata]\n\ntheAnim=animation.FuncAnimation(fig, anim, interval=40, blit=False, frames=360, repeat=False)\n#theAnim.save('rotate.nosync.gif')\nplt.show()\n\n\n" ]
[ 1 ]
[]
[]
[ "graph", "math", "python", "rotation" ]
stackoverflow_0074519927_graph_math_python_rotation.txt
Q: SQLAlchemy raises an exception for unknown version while connecting to GaussDB (for openGauss) As the following code shows, we may choose an ORM (Object-Relational Mapping) module to connect to GaussDB (for openGauss). The most popular third-party library in Python I know is SQLAlchemy. But when I connect to openGauss through the following code, an exception about an unknown version is raised. from sqlalchemy.engine import create_engine from sqlalchemy.orm import sessionmaker # ... dsn = '{}://{}:{}@{}:{}/{}'.format(db_type, username, password, host, port, database) engine = create_engine(dsn, pool_pre_ping=True) session_maker = sessionmaker(bind=engine) # Base is a base class for each table. # We want to create tables' schema, but the exception is raised. Base.metadata.create_all( engine, checkfirst=check_first ) The exception we mentioned is: File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\engine\create.py", line 674, in first_connect dialect.initialize(c) File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\dialects\postgresql\psycopg2.py", line 775, in initialize super(PGDialect_psycopg2, self).initialize(connection) File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\dialects\postgresql\base.py", line 3182, in initialize super(PGDialect, self).initialize(connection) File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\engine\default.py", line 394, in initialize self.server_version_info = self._get_server_version_info( File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\dialects\postgresql\base.py", line 3435, in _get_server_version_info raise AssertionError( AssertionError: Could not determine version from string '(GaussDB Kernel V500R002C00 build 434c09d8) compiled at 2021-06-26 10:18:58 commit 0 last mr 1692 debug on x86_64-unknown-linux-gnu, compiled by g++ (GCC) 7.3.0, 64-bit' A: I solved the problem. SQLAlchemy has a version check, so we can override the method before using SQLAlchemy. from sqlalchemy.dialects.postgresql.base import PGDialect PGDialect._get_server_version_info = lambda *args: (9, 2)
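A sketch of where the monkey-patch has to sit - before the first connection, since SQLAlchemy probes the server version when it first connects. The (9, 2) tuple just needs to be a PostgreSQL version the dialect accepts:

from sqlalchemy.dialects.postgresql.base import PGDialect
from sqlalchemy.engine import create_engine

# patch first, then build the engine and connect
PGDialect._get_server_version_info = lambda *args: (9, 2)

engine = create_engine(dsn, pool_pre_ping=True)  # dsn built as in the question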
SQLAlchemy raises an exception for unknown version while connecting to GaussDB (for openGauss)
As the following code shows, we may choose an ORM (Object-Relational Mapping) module to connect to GaussDB (for openGauss). The most popular third-party library in Python I know is SQLAlchemy. But when I connect to openGauss through the following code, an exception about an unknown version is raised. from sqlalchemy.engine import create_engine from sqlalchemy.orm import sessionmaker # ... dsn = '{}://{}:{}@{}:{}/{}'.format(db_type, username, password, host, port, database) engine = create_engine(dsn, pool_pre_ping=True) session_maker = sessionmaker(bind=engine) # Base is a base class for each table. # We want to create tables' schema, but the exception is raised. Base.metadata.create_all( engine, checkfirst=check_first ) The exception we mentioned is: File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\engine\create.py", line 674, in first_connect dialect.initialize(c) File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\dialects\postgresql\psycopg2.py", line 775, in initialize super(PGDialect_psycopg2, self).initialize(connection) File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\dialects\postgresql\base.py", line 3182, in initialize super(PGDialect, self).initialize(connection) File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\engine\default.py", line 394, in initialize self.server_version_info = self._get_server_version_info( File "C:\Users\wotchin\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\dialects\postgresql\base.py", line 3435, in _get_server_version_info raise AssertionError( AssertionError: Could not determine version from string '(GaussDB Kernel V500R002C00 build 434c09d8) compiled at 2021-06-26 10:18:58 commit 0 last mr 1692 debug on x86_64-unknown-linux-gnu, compiled by g++ (GCC) 7.3.0, 64-bit'
[ "I solved the problem.\nSQLAlchemy has a version check. Hence, we can modify the function action before using SQLAlchemy.\n from sqlalchemy.dialects.postgresql.base import PGDialect\n PGDialect._get_server_version_info = lambda *args: (9, 2)\n\n" ]
[ 2 ]
[ "You may try the project opengauss-sqlalchemy at now.\nThe openGauss recently provides this project to support SQLAlchemy.\nOverride the inner method _get_server_version_info of PGDialect could solve the problem of connecting to openGauss, but some SQLs of openGauss are different from PostgreSQL. Besides, openGauss has more reserved words than PostgreSQL, we may not notice that until we use these words as table or column name.\nThe project has been published on pypi\n, gitee\nand github\n" ]
[ -1 ]
[ "postgresql", "python", "sqlalchemy" ]
stackoverflow_0070588587_postgresql_python_sqlalchemy.txt
Q: creating a partial-like object with dynamic arguments I'm trying to create a partial function but with dynamic arguments that are stored as class attributes and changed accordingly. Something like the following code: from functools import partial def foo(*args, msg): print(msg) class Bar: def __init__(self, msg): self.msg = msg self.functions = dict() self.functions['foo'] = partial(foo, msg=self.msg) def foo_method(self, *args): return self.functions['foo'](*args) b =Bar('1') b.foo_method() b.msg = '2' b.foo_method() Only, of course, both statements will print '1' as the partial object fixes the arguments. The only alternative I found was changing the attribute to a property and manually changing the partial attributes with the setter: class Bar: def __init__(self, msg): self._msg = None self.functions = dict() self.functions['foo'] = partial(foo) self.msg = msg def foo_method(self, *args): return self.functions['foo'](*args) @property def msg(self): return self._msg @msg.setter def msg(self, msg): self._msg = msg self.functions['foo'].keywords['msg'] = msg I would like to know if there is a more "pythonic" / efficient way to do this, since I really don't need to use properties except for this workaround. A: You can use lambda instead of partial for deferred (or often referred to as "lazy") evaluation of the arguments, so that self.msg is not evaluated until the function is called: class Bar: def __init__(self, msg): self.msg = msg self.functions = dict() self.functions['foo'] = lambda *args: foo(*args, msg=self.msg) def foo_method(self, *args): return self.functions['foo'](*args) A: What's wrong with just storing a reference to the passed function and constructing the call on the spot? i.e.: class Bar: def __init__(self, msg): self.msg = msg self.foo = foo # a reference to foo, not needed here but used as an example def foo_method(self, *args): return self.foo(*args, msg=self.msg) # or just: foo(*args, msg=self.msg) A: A thing that seems to be working as well is defining the function to work with a class attribute. You can then define a function using partial with one of the arguments being the class. class myContex: a = 5 def my_fun(context, b, c): print(context.a, b, c) my_fun_partial = partial(my_fun, myContext) my_fun_partial(4,7) # Output: 5 4 7 myContext.a = 50 my_fun_partial = partial(my_fun, myContext) my_fun_partial(4,7) # Output: 50, 4, 7 A: The simplest possible way I can think of would be just constructing a dict and passing it double-starred to the function to unpack. Something like: def some_func(msg, some_arg=None): print("Hello world") # ignore the msg for now call_args = {} call_args['some_arg'] = 2 # single field call_args.update({'msg': 1, 'stuff': [2,3,4]}) # multiple at once some_func(**call_args) Right now, some_func will throw a TypeError because we've passed more args than the function takes. You could work around this either by having the function accept **kwargs in the signature, trimming down the arguments you don't expect or some other approach. For now, continuing the last session: call_args = {'msg': 'abc'} # let's get rid of those extra args some_func(**call_args) # => prints 'Hello world'
creating a partial-like object with dynamic arguments
I'm trying to create a partial function but with dynamic arguments that are stored as class attributes and changed accordingly. Something like the following code: from functools import partial def foo(*args, msg): print(msg) class Bar: def __init__(self, msg): self.msg = msg self.functions = dict() self.functions['foo'] = partial(foo, msg=self.msg) def foo_method(self, *args): return self.functions['foo'](*args) b =Bar('1') b.foo_method() b.msg = '2' b.foo_method() Only, of course, both statements will print '1' as the partial object fixes the arguments. The only alternative I found was changing the attribute to a property and manually changing the partial attributes with the setter: class Bar: def __init__(self, msg): self._msg = None self.functions = dict() self.functions['foo'] = partial(foo) self.msg = msg def foo_method(self, *args): return self.functions['foo'](*args) @property def msg(self): return self._msg @msg.setter def msg(self, msg): self._msg = msg self.functions['foo'].keywords['msg'] = msg I would like to know if there is a more "pythonic" / efficient way to do this, since I really don't need to use properties except for this workaround.
[ "You can use lambda instead of partial for deferred (or often referred to as \"lazy\") evaluation of the arguments, so that self.msg is not evaluated until the function is called:\nclass Bar:\n def __init__(self, msg):\n self.msg = msg\n self.functions = dict()\n self.functions['foo'] = lambda *args: foo(*args, msg=self.msg)\n\n def foo_method(self, *args):\n return self.functions['foo'](*args)\n\n", "What's wrong with just storing a reference to the passed function and constructing the call on the spot? i.e.:\nclass Bar:\n\n def __init__(self, msg):\n self.msg = msg\n self.foo = foo # a reference to foo, not needed here but used as an example\n\n def foo_method(self, *args):\n return self.foo(*args, msg=self.msg) # or just: foo(*args, msg=self.msg)\n\n", "A thing that seems to be working as well is defining the function to work with a class attribute.\nYou can then define a function using partial with one of the arguments being the class.\nclass myContex:\n a = 5\n \ndef my_fun(context, b, c):\n print(context.a, b, c)\n \nmy_fun_partial = partial(my_fun, myContext)\nmy_fun_partial(4,7)\n\n# Output: 5 4 7\n\nmyContext.a = 50\nmy_fun_partial = partial(my_fun, myContext)\nmy_fun_partial(4,7)\n\n# Output: 50, 4, 7\n\n", "The simplest possible way I can think of would be just constructing a dict and passing it double-starred to the function to unpack.\nSomething like:\ndef some_func(msg, some_arg=None):\n print(\"Hello world\") # ignore the msg for now\n\ncall_args = {}\ncall_args['some_arg'] = 2 # single field\ncall_args.update({'msg': 1, 'stuff': [2,3,4]}) # multiple at once\n\nsome_func(**call_args)\n\nRight now, some_func will throw a TypeError because we've passed more args than the function takes. You could work around this either by having the function accept **kwargs in the signature, trimming down the arguments you don't expect or some other approach. \nFor now, continuing the last session:\ncall_args = {'msg': 'abc'} # let's get rid of those extra args\nsome_func(**call_args) # => prints 'Hello world'\n\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0052406813_python_python_3.x.txt
Q: How to search for flights using the Amadeus API and Python, by considering the originRadius and destinationRadius parameters? I am trying to get Amadeus API flight data by considering the originRadius and destinationRadius parameters. Can someone help me with that? How can I search for flights by considering these two parameters? Currently, I have implemented following code: def check_flights( self, originLocationCode, destinationLocationCode, departureDate, returnDate, adults, currencyCode ): ''' Return a list of FlightData objects based on the API search results. ''' amadeus = Client(client_id=API_KEY, client_secret=API_SECRET) try: response = amadeus.get( API_URL, originLocationCode=originLocationCode, destinationLocationCode=destinationLocationCode, departureDate=departureDate, returnDate=returnDate, adults=adults, currencyCode=currencyCode ) data = response.data self.save_data_to_file(data=response.body) except ResponseError as error: # TO DO: If error occurs, render error in available_flights return error A: For that you will have to use the POST method of the Flight Offers Search API. I leave an example below that takes into consideration the originRadius. This parameter includes other possible locations around the point, located less than this distance in kilometers away with a max of 300km and it can not be combined with dateWindow or timeWindow. POST https://test.api.amadeus.com/v2/shopping/flight-offers { "originDestinations": [ { "id": "1", "originLocationCode": "MAD", "destinationLocationCode": "ATH", "originRadius": "299", "departureDateTimeRange": { "date": "2023-03-03" } } ], "travelers": [ { "id": "1", "travelerType": "ADULT" } ], "sources": [ "GDS" ] } The logic is the same for the destinationRadius. For more details check the Amadeus for Developers API reference.
How to search for flights using the Amadeus API and Python, by considering the originRadius and destinationRadius parameters?
I am trying to get Amadeus API flight data while taking the originRadius and destinationRadius parameters into account. Can someone help me with that? How can I search for flights using these two parameters? Currently, I have implemented the following code:
def check_flights(
        self,
        originLocationCode,
        destinationLocationCode,
        departureDate,
        returnDate,
        adults,
        currencyCode
):
    '''
    Return a list of FlightData objects based on the API search results.
    '''
    amadeus = Client(client_id=API_KEY, client_secret=API_SECRET)
    try:
        response = amadeus.get(
            API_URL,
            originLocationCode=originLocationCode,
            destinationLocationCode=destinationLocationCode,
            departureDate=departureDate,
            returnDate=returnDate,
            adults=adults,
            currencyCode=currencyCode
        )
        data = response.data
        self.save_data_to_file(data=response.body)
    except ResponseError as error:
        # TO DO: If error occurs, render error in available_flights
        return error
[ "For that you will have to use the POST method of the Flight Offers Search API. I leave an example below that takes into consideration the originRadius. This parameter includes other possible locations around the point, located less than this distance in kilometers away with a max of 300km and it can not be combined with dateWindow or timeWindow.\nPOST https://test.api.amadeus.com/v2/shopping/flight-offers\n{\n \"originDestinations\": [\n {\n \"id\": \"1\",\n \"originLocationCode\": \"MAD\",\n \"destinationLocationCode\": \"ATH\",\n \"originRadius\": \"299\",\n \"departureDateTimeRange\": {\n \"date\": \"2023-03-03\"\n }\n }\n ],\n \"travelers\": [\n {\n \"id\": \"1\",\n \"travelerType\": \"ADULT\"\n }\n ],\n \"sources\": [\n \"GDS\"\n ]\n}\n\nThe logic is the same for the destinationRadius.\nFor more details check the Amadeus for Developers API reference.\n" ]
[ 0 ]
[]
[]
[ "amadeus", "python" ]
stackoverflow_0074500894_amadeus_python.txt
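For reference, the POST request from the answer above can also be issued through the Amadeus Python SDK; assuming your SDK version exposes the shopping.flight_offers_search.post endpoint (check the SDK docs for your release), a sketch would look like this, with API_KEY and API_SECRET as placeholders:

from amadeus import Client, ResponseError

amadeus = Client(client_id=API_KEY, client_secret=API_SECRET)
body = {
    'originDestinations': [{
        'id': '1',
        'originLocationCode': 'MAD',
        'destinationLocationCode': 'ATH',
        'originRadius': '299',  # km, max 300; not combinable with dateWindow/timeWindow
        'departureDateTimeRange': {'date': '2023-03-03'},
    }],
    'travelers': [{'id': '1', 'travelerType': 'ADULT'}],
    'sources': ['GDS'],
}
try:
    response = amadeus.shopping.flight_offers_search.post(body)
    print(response.data)
except ResponseError as error:
    print(error)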
Q: 3D geometry intersections in Python
I have a genetic programming algorithm that evolves solutions for a drone trajectory (3D lines) through obstacles, which are no-fly zones in a city. In order to evaluate the fitness of the solutions, I need to check whether they intersect the no-fly areas (~200 evaluations per iteration, for thousands of iterations).
example route through barriers
In 2D I have done it pretty easily using Shapely. However, the 3D problem is much more complicated. I am currently using solutions with 100 vertices, i.e. 99 line segments. Typical 3D flight paths (in test cases) are 2-10 km long and involve 10 to 20 barriers in the immediate vicinity. Currently the barriers are 2D polygons with a height property, but I would like to be able to use concave 3D meshes for the barriers (as well as surfaces). Converting/creating meshes, polyhedrons, or sets of faces is not an issue, as it will only have to be done once. The evaluations, however, have to be done thousands of times.
I am curious whether anyone is aware of a simpler/faster way to do this kind of Boolean intersection check. I am considering these options, none of them perfect so far:

check if line vertices are contained in convex hulls of polygons (scipy.spatial or this) - this limits my barriers to convexity
check if line segments intersect/collide with mesh faces (Panda3D, Blender API, ..)
use Shapely to check if the intersection geometry bound lies within the "active" z-coordinate range of the barriers - this works, but severely limits the kind of 3D barriers that can be used; I would call it a 2.5D solution

Here is a bad example gif of a 2D scenario: I get hundreds of lines and want to check if they cross the green zones (only a few lines are shown here)
solution evolution
There are 3D geometry packages like trimesh and rhino3dm, but they only support line-plane intersections. I don't know if I need a huge library or if it makes more sense to write something myself. Also, since speed is a priority, I am considering solutions not written in Python as well. I am hoping that there is a mostly mathematical way to do it (without the overhead that I suspect comes with Panda3D or Blender, for example).
A: In one of your comments, you said:

Even something that just checks line-segment vs triangle intersection should work

So what I could suggest is to use the Möller–Trumbore algorithm for fast, minimum-storage ray-triangle intersection.
I have developed a Python implementation that you can find here. As you can see from the docs, it's quite easy to use, for instance:
>>> vertices = np.array([[0.0, 0.0, 0.0], [0.0, 10.0, 0.0], [10.0, 0.0, 0.0]])
>>> ray_origin = np.array([1.0, 1.0, 1.0])
>>> ray_direction = np.array([0.0, 0.0, -1.0])
>>> ray_triangle_intersection(vertices, ray_origin, ray_direction)
(1.0, 0.1, 0.1)

And yes, you said that you are not actually dealing with rays, but that could be a starting point.
3D geometry intersections in Python
I have a genetic programming algorithm that evolves solutions for a drone trajectory (3D lines) through obstacles, which are no-fly zones in a city. In order to evaluate the fitness of the solutions, I need to check whether they intersect the no-fly areas (~200 evaluations per iteration, for thousands of iterations).
example route through barriers
In 2D I have done it pretty easily using Shapely. However, the 3D problem is much more complicated. I am currently using solutions with 100 vertices, i.e. 99 line segments. Typical 3D flight paths (in test cases) are 2-10 km long and involve 10 to 20 barriers in the immediate vicinity. Currently the barriers are 2D polygons with a height property, but I would like to be able to use concave 3D meshes for the barriers (as well as surfaces). Converting/creating meshes, polyhedrons, or sets of faces is not an issue, as it will only have to be done once. The evaluations, however, have to be done thousands of times.
I am curious whether anyone is aware of a simpler/faster way to do this kind of Boolean intersection check. I am considering these options, none of them perfect so far:

check if line vertices are contained in convex hulls of polygons (scipy.spatial or this) - this limits my barriers to convexity
check if line segments intersect/collide with mesh faces (Panda3D, Blender API, ..)
use Shapely to check if the intersection geometry bound lies within the "active" z-coordinate range of the barriers - this works, but severely limits the kind of 3D barriers that can be used; I would call it a 2.5D solution

Here is a bad example gif of a 2D scenario: I get hundreds of lines and want to check if they cross the green zones (only a few lines are shown here)
solution evolution
There are 3D geometry packages like trimesh and rhino3dm, but they only support line-plane intersections. I don't know if I need a huge library or if it makes more sense to write something myself. Also, since speed is a priority, I am considering solutions not written in Python as well. I am hoping that there is a mostly mathematical way to do it (without the overhead that I suspect comes with Panda3D or Blender, for example).
[ "In one of your comments, you said:\n\nEven something that just checks line-segment vs triangle intersection should work\n\nSo what I could suggest is to use the Möller–Trumbore algorithm for fast, minimum storage ray-triangle intersection.\nI have developed a Python implementation that you can find here. As you can see from the docs, it's quite easy to use, for instance:\n>>> vertices = np.array([[0.0, 0.0, 0.0], [0.0, 10.0, 0.0], [10.0, 0.0, 0.0]])\n>>> ray_origin = np.array([1.0, 1.0, 1.0])\n>>> ray_direction = np.array([0.0, 0.0, -1.0])\n>>> intersection = ray_triangle_intersection(vertices, ray_origin, ray_direction)\n(1.0, 0.1, 0.1)\n\nAnd yes, you said that you are not actually dealing with rays, but that could be a starting point.\n" ]
[ 1 ]
[]
[]
[ "3d", "geometry", "mesh", "python" ]
stackoverflow_0074510900_3d_geometry_mesh_python.txt
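Because the trajectories above are line segments rather than true rays, the Möller–Trumbore test suggested in the answer can be restricted to segments by requiring the hit parameter t to lie in [0, 1]. Here is a minimal NumPy sketch of that idea (a hand-rolled implementation, not the linked library):

import numpy as np

def segment_hits_triangle(p0, p1, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore, with the "ray" restricted to the segment p0 -> p1.
    d = p1 - p0
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:              # segment parallel to the triangle plane
        return False
    f = 1.0 / a
    s = p0 - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:        # outside the triangle in barycentric u
        return False
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:    # outside the triangle in barycentric v
        return False
    t = f * np.dot(e2, q)
    return 0.0 <= t <= 1.0        # hit must lie within the segment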
Q: pandas case sensitive column names
I have data with duplicate column names; some differ only in case, and a few are in exactly the same case. Pandas automatically renames only the columns that are in the same case while loading the data into a DataFrame. Is there any way to rename the columns case-insensitively?
Input data:
-------------------------------------------
| id | Name | class | class | name |
-------------------------------------------
| 1 | A | 5 | i | W |
| 2 | B | 4 | iv | X |
| 3 | C | 10 | x | Y |
| 4 | D | 8 | viii | Z |
-------------------------------------------

Default o/p:
----------------------------------------------
| id | Name | class | class .1 | name |
----------------------------------------------
| 1 | A | 5 | i | W |
| 2 | B | 4 | iv | X |
| 3 | C | 10 | x | Y |
| 4 | D | 8 | viii | Z |
----------------------------------------------

Expected o/p:
-----------------------------------------------------
| id | Name .1 | class .1 | class .2 | name .2|
-----------------------------------------------------
| 1 | A | 5 | i | W |
| 2 | B | 4 | iv | X |
| 3 | C | 10 | x | Y |
| 4 | D | 8 | viii | Z |
-----------------------------------------------------
pandas case sensitive column names
I have data with duplicate column names; some differ only in case, and a few are in exactly the same case. Pandas automatically renames only the columns that are in the same case while loading the data into a DataFrame. Is there any way to rename the columns case-insensitively?
Input data:
-------------------------------------------
| id | Name | class | class | name |
-------------------------------------------
| 1 | A | 5 | i | W |
| 2 | B | 4 | iv | X |
| 3 | C | 10 | x | Y |
| 4 | D | 8 | viii | Z |
-------------------------------------------

Default o/p:
----------------------------------------------
| id | Name | class | class .1 | name |
----------------------------------------------
| 1 | A | 5 | i | W |
| 2 | B | 4 | iv | X |
| 3 | C | 10 | x | Y |
| 4 | D | 8 | viii | Z |
----------------------------------------------

Expected o/p:
-----------------------------------------------------
| id | Name .1 | class .1 | class .2 | name .2|
-----------------------------------------------------
| 1 | A | 5 | i | W |
| 2 | B | 4 | iv | X |
| 3 | C | 10 | x | Y |
| 4 | D | 8 | viii | Z |
-----------------------------------------------------
[]
[]
[ "You can use:\n# get lowercase name\ns = df.columns.str.lower()\n\n# group by identical names and count\nsuffix = df.groupby(s, axis=1).cumcount().add(1).astype(str)\n\n# de-duplicate \ndf.columns = np.where(s.duplicated(keep=False),\n df.columns+'.'+suffix,\n df.columns)\n\nOutput:\n id Name.1 class.1 class.2 name.2\n0 1 A 5 i W\n1 2 B 4 iv X\n2 3 C 10 x Y\n3 4 D 8 viii Z\n\nUsed input:\n id Name class class name\n0 1 A 5 i W\n1 2 B 4 iv X\n2 3 C 10 x Y\n3 4 D 8 viii Z\n\n" ]
[ -1 ]
[ "pandas", "python" ]
stackoverflow_0074520195_pandas_python.txt
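If the axis=1 groupby in the answer above is unavailable (column-wise groupby has been deprecated in newer pandas releases), a plain-Python, case-insensitive de-duplication of the names is a simple fallback. This helper is a sketch, not a pandas API; it reproduces the ' .1'/' .2' suffixing from the expected output:

from collections import Counter

def dedup_case_insensitive(columns):
    # Suffix ' .1', ' .2', ... onto every name whose lowercase form repeats.
    lowered = [c.lower() for c in columns]
    total = Counter(lowered)
    seen = Counter()
    out = []
    for name, low in zip(columns, lowered):
        seen[low] += 1
        out.append(f'{name} .{seen[low]}' if total[low] > 1 else name)
    return out

df.columns = dedup_case_insensitive(df.columns)
# ['id', 'Name .1', 'class .1', 'class .2', 'name .2']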
Q: Euclidean distance matrix between two matrices
I have the following function that calculates the Euclidean distance between all combinations of the vectors in Matrix A and Matrix B:
def distance_matrix(A, B):
    n = A.shape[1]
    m = B.shape[1]
    C = np.zeros((n, m))
    for ai, a in enumerate(A.T):
        for bi, b in enumerate(B.T):
            C[ai][bi] = np.linalg.norm(a - b)
    return C

This works fine and creates an n*m matrix from a d*n matrix and a d*m matrix, containing the Euclidean distance between all combinations of the column vectors.
>>> print(A)
[[-1 -1 1 1 2]
[ 1 -1 2 -1 1]]
>>> print(B)
[[-2 -1 1 2]
[-1 2 1 -1]]
>>> print(distance_matrix(A,B))
[[2.23606798 1. 2. 3.60555128]
[1. 3. 2.82842712 3. ]
[4.24264069 2. 1. 3.16227766]
[3. 3.60555128 2. 1. ]
[4.47213595 3.16227766 1. 2. ]]

I spent some time looking for a numpy or scipy function to achieve this in a more efficient way. Is there such a function, or what would be the vectorized way to do this?
A: You can use:
np.linalg.norm(A[:,:,None]-B[:,None,:],axis=0)

or (totally equivalent but without the built-in function)
((A[:,:,None]-B[:,None,:])**2).sum(axis=0)**0.5

We need a 5x4 final array, so we extend our arrays this way:
A[:,:,None] -> 2,5,1
 ↑ ↓ 
B[:,None,:] -> 2,1,4

A[:,:,None] - B[:,None,:] -> 2,5,4

and we apply our sum over axis 0 to finally get a 5,4 ndarray.
A: Yes, you can broadcast your vectors:
A = np.array([[-1, -1, 1, 1, 2], [ 1, -1, 2, -1, 1]])
B = np.array([[-2, -1, 1, 2], [-1, 2, 1, -1]])

C = np.linalg.norm(A.T[:, None, :] - B.T[None, :, :], axis=-1)
print(C)

array([[2.23606798, 1. , 2. , 3.60555128],
[1. , 3. , 2.82842712, 3. ],
[4.24264069, 2. , 1. , 3.16227766],
[3. , 3.60555128, 2. , 1. ],
[4.47213595, 3.16227766, 1. , 2. ]])

You can get an explanation of how it works here:
https://sparrow.dev/pairwise-distance-in-numpy/
Euclidean distance matrix between two matrices
I have the following function that calculates the Euclidean distance between all combinations of the vectors in Matrix A and Matrix B:
def distance_matrix(A, B):
    n = A.shape[1]
    m = B.shape[1]
    C = np.zeros((n, m))
    for ai, a in enumerate(A.T):
        for bi, b in enumerate(B.T):
            C[ai][bi] = np.linalg.norm(a - b)
    return C

This works fine and creates an n*m matrix from a d*n matrix and a d*m matrix, containing the Euclidean distance between all combinations of the column vectors.
>>> print(A)
[[-1 -1 1 1 2]
[ 1 -1 2 -1 1]]
>>> print(B)
[[-2 -1 1 2]
[-1 2 1 -1]]
>>> print(distance_matrix(A,B))
[[2.23606798 1. 2. 3.60555128]
[1. 3. 2.82842712 3. ]
[4.24264069 2. 1. 3.16227766]
[3. 3.60555128 2. 1. ]
[4.47213595 3.16227766 1. 2. ]]

I spent some time looking for a numpy or scipy function to achieve this in a more efficient way. Is there such a function, or what would be the vectorized way to do this?
[ "You can use:\nnp.linalg.norm(A[:,:,None]-B[:,None,:],axis=0)\n\nor (totaly equivalent but without in-built function)\n((A[:,:,None]-B[:,None,:])**2).sum(axis=0)**0.5\n\nWe need a 5x4 final array so we extend our array this way:\nA[:,:,None] -> 2,5,1\n ↑ ↓ \nB[:,None,:] -> 2,1,4\n\nA[:,:,None] - B[:,None,:] -> 2,5,4\n\nand we apply our sum over the axis 0 to finally get a 5,4 ndarray.\n", "Yes, you can broadcast your vectors:\nA = np.array([[-1, -1, 1, 1, 2], [ 1, -1, 2, -1, 1]])\nB = np.array([[-2, -1, 1, 2], [-1, 2, 1, -1]])\n\nC = np.linalg.norm(A.T[:, None, :] - B.T[None, :, :], axis=-1)\nprint(C)\n\narray([[2.23606798, 1. , 2. , 3.60555128],\n [1. , 3. , 2.82842712, 3. ],\n [4.24264069, 2. , 1. , 3.16227766],\n [3. , 3.60555128, 2. , 1. ],\n [4.47213595, 3.16227766, 1. , 2. ]])\n\nYou can get an explanation of how it works here:\nhttps://sparrow.dev/pairwise-distance-in-numpy/\n" ]
[ 2, 1 ]
[]
[]
[ "euclidean_distance", "matrix", "numpy", "python", "scipy" ]
stackoverflow_0074520084_euclidean_distance_matrix_numpy_python_scipy.txt
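To answer the "is there such a function" part directly: SciPy ships one. scipy.spatial.distance.cdist computes all pairwise distances between the rows of two arrays (Euclidean by default), so it applies to the transposed column-vector matrices from the question:

import numpy as np
from scipy.spatial.distance import cdist

A = np.array([[-1, -1, 1, 1, 2], [1, -1, 2, -1, 1]])
B = np.array([[-2, -1, 1, 2], [-1, 2, 1, -1]])

C = cdist(A.T, B.T)  # same 5x4 matrix as distance_matrix(A, B)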
Q: Tensorflow output image is black
I am using the code below to crop the image, but the saved image is all black. How can I get the correct image?
# Crop Image
image_open = open(fullpath, 'rb')
read_image = image_open.read()
decode = tf.image.decode_jpeg(read_image)
expand = tf.expand_dims(decode, 0)
cropped_image = tf.image.crop_and_resize(expand,
    boxes=[[y_min, x_min, y_max - y_min, x_max - x_min]],
    crop_size=[300, 300],
    box_indices=[0])
score = bscores[idx] * 100
file_name = OUTPUT_PATH + image_name[:-4] + '_' + str(idx) + '_' + class_label + '_' + str(round(score)) + '%' + '_' + os.path.splitext(image_name)[1]
#writefile = tf.io.write_file(file_name, encode)
tf.keras.utils.save_img(file_name, np.squeeze(cropped_image)) # squeezing because save_img expects a 3-dim shape

Output Image -
A: This happens because matplotlib (plt) is only a convenient display tool; the tensor you show or save must have the correct format and dimensions - the shape of the matrix needs to compose a valid picture, or you need to specify the axis explicitly.

Sample: the output needs to match the target dimensions, or you have to work on the channels separately.

import os

import tensorflow as tf

import matplotlib.pyplot as plt

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
CROP_SIZE = [210, 160]
OUTPUT_PATH = "F:\\temp\\20221121"

# read the image and decode it
fullpath = "F:\\Pictures\\actor -Kib\\272066225_493107378843948_6905696102089601304_n.jpg"
image = tf.io.read_file(fullpath)
decode = tf.image.decode_jpeg(image)

# create the target filename and path
filename = os.path.basename(fullpath)
file_name_str = OUTPUT_PATH + "\\" + str(filename.split(".")[0]) + "." + str(filename.split(".")[1])

# crop the image; boxes are normalized [y1, x1, y2, x2] corners
boxes = tf.constant([0.26, 0.05, 0.8, 1.0], shape=(1, 4))
box_indices = tf.constant([0], shape=(1, ))
image_array = tf.keras.preprocessing.image.img_to_array(decode)
image_cropped = tf.image.crop_and_resize(tf.expand_dims(image_array, axis=0), boxes, box_indices, CROP_SIZE)
image_cropped = tf.squeeze(image_cropped)
image_cropped = tf.keras.utils.array_to_img(image_cropped)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Output
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
plt.imshow(image_cropped)
plt.show()

# save the image
image_cropped = tf.keras.preprocessing.image.img_to_array(image_cropped)
tf.keras.utils.save_img(file_name_str, tf.squeeze(image_cropped))

input('...')

Output: the saved image looks similar to its input.
Tensorflow output image is black
I am using the code below to crop the image, but the saved image is all black. How can I get the correct image?
# Crop Image
image_open = open(fullpath, 'rb')
read_image = image_open.read()
decode = tf.image.decode_jpeg(read_image)
expand = tf.expand_dims(decode, 0)
cropped_image = tf.image.crop_and_resize(expand,
    boxes=[[y_min, x_min, y_max - y_min, x_max - x_min]],
    crop_size=[300, 300],
    box_indices=[0])
score = bscores[idx] * 100
file_name = OUTPUT_PATH + image_name[:-4] + '_' + str(idx) + '_' + class_label + '_' + str(round(score)) + '%' + '_' + os.path.splitext(image_name)[1]
#writefile = tf.io.write_file(file_name, encode)
tf.keras.utils.save_img(file_name, np.squeeze(cropped_image)) # squeezing because save_img expects a 3-dim shape

Output Image -
[ "it is because PLT is a convenient tool you need to make it correct format and dimensions when the shape of the matrix needs to compose a picture or you do need to specify the axis.\n\nSample: The screen needs to match of target dimension or separate work on channels.\n\nimport os\nfrom os.path import exists\n\nimport tensorflow as tf\n\nimport matplotlib.pyplot as plt\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Variables\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nCROP_SIZE = [ 210, 160 ]\nOUTPUT_PATH = \"F:\\\\temp\\\\20221121\"\n\n# read image and decode\nfullpath = \"F:\\\\Pictures\\\\actor -Kib\\\\272066225_493107378843948_6905696102089601304_n.jpg\"\nimage = tf.io.read_file( fullpath )\ndecode = tf.image.decode_jpeg( image )\nexpand = tf.expand_dims(decode, 0)\n\n# create filename and path target\nfilename = os.path.basename( fullpath )\nimage_name = str( filename.split(\".\")[0] ) + \".\" + filename.split(\".\")[1]\nfile_name_str = OUTPUT_PATH + \"\\\\\" + str( filename.split(\".\")[0] ) + \".\" + str( filename.split(\".\")[1] )\n\n# crop image\nboxes = tf.constant([ 0.26, 0.05, 0.8, 1.0 ], shape=(1, 4))\nbox_indices = tf.constant([ 0 ], shape=(1, ))\nimage_array = tf.keras.preprocessing.image.img_to_array( decode )\nimage_cropped = tf.image.crop_and_resize( tf.expand_dims(image_array, axis=0), boxes, box_indices, CROP_SIZE )\nimage_cropped = tf.squeeze( image_cropped )\nimage_cropped = tf.keras.utils.array_to_img( image_cropped )\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Output\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nplt.imshow( image_cropped )\nplt.show()\n\n# for save image\nimage_cropped = tf.keras.preprocessing.image.img_to_array( image_cropped )\ntf.keras.utils.save_img( file_name_str, tf.squeeze( image_cropped ))\n\n\n\ninput('...')\n\n\nOutput: Save image is look similar to its input.\n\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "image_processing", "python", "tensorflow" ]
stackoverflow_0074519821_deep_learning_image_processing_python_tensorflow.txt
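A plausible root cause of the all-black crop in the question (though not confirmed by the asker): tf.image.crop_and_resize expects boxes as normalized corner coordinates [y1, x1, y2, x2] in [0, 1], not pixel [y, x, height, width]; out-of-range boxes sample the extrapolation value, which defaults to 0 (black). A hedged sketch of the fix, assuming y_min/x_min/y_max/x_max are pixel coordinates and fullpath/file_name come from the question:

import tensorflow as tf

decoded = tf.image.decode_jpeg(tf.io.read_file(fullpath))
batch = tf.expand_dims(decoded, 0)

h = tf.cast(tf.shape(decoded)[0], tf.float32)
w = tf.cast(tf.shape(decoded)[1], tf.float32)

# Normalized [y1, x1, y2, x2] corners, not [y, x, height, width].
boxes = [[y_min / h, x_min / w, y_max / h, x_max / w]]
cropped = tf.image.crop_and_resize(batch, boxes=boxes,
                                   box_indices=[0], crop_size=[300, 300])

tf.keras.utils.save_img(file_name, tf.squeeze(cropped, axis=0))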