Q: pocketsphinx ERROR: Could not build wheels for pocketsphinx, which is required to install pyproject.toml-based projects

I am having trouble installing the Python module pocketsphinx on Windows. I've tried everything: installing it manually, using git, and more. I updated pip, installed Visual Studio with the Python workload as the error insisted, updated my Python, and even updated my laptop, but I still can't get it to work. I'm not giving up, although I am very frustrated.

Full error:

    -- Trying "NMake Makefiles (Visual Studio 15 2017 x64 v141)" generator - failure
    --------------------------------------------------------------------------------
    ********************************************************************************
    scikit-build could not get a working generator for your system. Aborting build.
    Building windows wheels for Python 3.11 requires Microsoft Visual Studio 2022.
    Get it with "Visual Studio 2017": https://visualstudio.microsoft.com/vs/
    Or with "Visual Studio 2019": https://visualstudio.microsoft.com/vs/
    Or with "Visual Studio 2022": https://visualstudio.microsoft.com/vs/
    ********************************************************************************
    [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for pocketsphinx
    Failed to build pocketsphinx
    ERROR: Could not build wheels for pocketsphinx, which is required to install pyproject.toml-based projects

What I've tried: updating my Python, updating pip, updating my laptop, installing it manually, using git, and several solutions from YouTube (none of them worked).

A: OK, I just found the problem: it turns out Python 3.11 is completely unsupported, meaning you will have to downgrade to 3.10 to use this module.
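If you want a setup script to fail fast on this instead of dying inside the wheel build, a small guard can check the interpreter first (a hedged sketch; the 3.10 cutoff reflects the situation described in the answer and may change once 3.11 wheels are published):

```python
import sys

def can_build_pocketsphinx(version_info=None):
    """Return True if this interpreter is at most Python 3.10,
    the newest version the answer reports as working."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) <= (3, 10)

if not can_build_pocketsphinx():
    print("Python {}.{} detected: pocketsphinx wheels may fail to build; "
          "consider a 3.10 environment.".format(*sys.version_info[:2]))
```

The same check could be wired into a `setup.py` or a pre-install script; the function name here is illustrative, not part of pocketsphinx itself.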
[ "module", "pip", "pocketsphinx", "python", "python_3.x" ]
stackoverflow_0074562577_module_pip_pocketsphinx_python_python_3.x.txt
Q: Decorator Factory

The factory accepts a function (a lambda) as input and returns a decorator; the decorated function is then called with the result of applying the factory's function to its original first argument. The function that the factory accepts (in the example below, it is a lambda) can only take one positional parameter. Example:

@decorator_apply(lambda user_id: user_id + 1)
def return_user_id(num: int):
    return num

return_user_id(42)
>>> 43

A: Just nest functions until you reach the required depth, then apply them:

def decorator_apply(transform):
    def wrapper(f):
        def wrapped(x, /):
            return f(transform(x))
        return wrapped
    return wrapper


@decorator_apply(lambda user_id: user_id + 1)
def return_user_id(num: int):
    return num

return_user_id(42)
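An optional refinement not in the original answer: wrapping with functools.wraps keeps the decorated function's metadata (name, docstring) intact, which helps with debugging and introspection:

```python
import functools

def decorator_apply(transform):
    def wrapper(f):
        @functools.wraps(f)  # preserve f's __name__ and __doc__ on the wrapper
        def wrapped(x, /):
            return f(transform(x))
        return wrapped
    return wrapper

@decorator_apply(lambda user_id: user_id + 1)
def return_user_id(num: int):
    return num
```

return_user_id(42) still evaluates to 43, and return_user_id.__name__ remains "return_user_id" instead of "wrapped".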
[ "decorator", "python", "python_decorators" ]
stackoverflow_0074562575_decorator_python_python_decorators.txt
Q: How do you remove duplicates in a 2d list with the same values but different order

I have a list which contains lists. I am trying to remove any duplicate sublists that share the same items but in a different order. For example, if I have:

nestedlist = [[1,2,3,4], [4,3,2,1], [1,5,8,7]]

I would like a function that returns something like:

[[1,2,3,4], [1,5,8,7]]

A: Make a set of sets out of the list of lists, then convert it back to a list of lists:

sos = {frozenset(l) for l in nestedlist}
unique_nested = [list(s) for s in sos]

We need to use frozenset instead of set because sets are mutable and therefore not hashable.

A: Sort each sublist and compare accordingly. (The sorted copies have to be tracked separately; comparing sorted(i) against the unsorted entries only works when the first occurrence happens to already be sorted.)

new = []
seen = []
nestedlist = [[1,2,3,4], [4,3,2,1], [1,5,8,7]]
for i in nestedlist:
    s = sorted(i)
    if s not in seen:
        seen.append(s)
        new.append(i)

print(new)

Gives:

[[1, 2, 3, 4], [1, 5, 8, 7]]
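The two answers can be combined into an order-preserving helper (a sketch; note that frozenset ignores how many times a value appears in a sublist, so use tuple(sorted(sub)) as the key instead if multiplicity matters):

```python
def dedupe_unordered(nested):
    """Keep the first occurrence of each sublist, comparing
    sublists as unordered collections of values."""
    seen = set()
    result = []
    for sub in nested:
        key = frozenset(sub)   # order-insensitive, hashable key
        if key not in seen:
            seen.add(key)
            result.append(sub)
    return result
```

Unlike the set-of-sets approach, this preserves both the original order of the outer list and the element order inside each kept sublist.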
[ "duplicates", "python" ]
stackoverflow_0074563000_duplicates_python.txt
Q: How do I create a timeseries sliding window tensorflow dataset where some features have different batch sizes than others?

Currently I am able to create a timeseries sliding-window batched dataset that contains ordered 'feature sets' like 'inputs', 'targets', 'benchmarks', etc. Originally I had developed my model and dataset so that the targets had the same batch size as all the other inputs; however, that has proven detrimental to tuning the input batch size, and it won't be helpful when it comes time to run this on live data, where I only care to produce a single sample output of the shape (1, horizon, targets), or perhaps just (horizon, targets), given an input dataset of (samples, horizon, features). As an overview, I want to take N historical samples of horizon-length features at time T, run them through the model, and output a single sample of horizon-length targets; repeat until the dataset is run through in its entirety. Assuming a pandas DataFrame of length Z, all resulting Datasets should have a length of Z - horizon. The 'targets' Dataset should have a batch size of 1, and the 'inputs' Dataset should have a batch size of batch_size. Here's a stripped-down snippet of what I currently use to generate a standard batch size for all feature sets:

import tensorflow as tf
import pandas as pd

horizon = 5
batch_size = 10
columns = {
    "inputs": ["input_1", "input_2"],
    "targets": ["target_1"],
}
batch_options = {
    "drop_remainder": True,
    "deterministic": True,
}

d = range(100)
df = pd.DataFrame(data={'input_1': d, 'input_2': d, 'target_1': d})
slices = tuple(df[x].astype("float32") for x in columns.values())
data = (
    tf.data.Dataset.from_tensor_slices(slices)
    .window(horizon, shift=1, drop_remainder=True)
    .flat_map(
        lambda *c: tf.data.Dataset.zip(
            tuple(col.batch(horizon, **batch_options) for col in c)
        )
    )
    .batch(batch_size, **batch_options)
)

A: We can create two sliding windowed datasets and zip them:

inputs = df[['input_1', 'input_2']].to_numpy()
labels = df['target_1'].to_numpy()

window_size = 10
stride = 1
data1 = (
    tf.data.Dataset.from_tensor_slices(inputs)
    .window(window_size, shift=stride, drop_remainder=True)
    .flat_map(lambda x: x.batch(window_size))
)
data2 = (
    tf.data.Dataset.from_tensor_slices(labels)  # length-1 windows over the targets
    .window(1, shift=stride, drop_remainder=True)
    .flat_map(lambda x: x.batch(1))
)
data = tf.data.Dataset.zip((data1, data2))
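The windowing arithmetic can be sanity-checked without TensorFlow. A small NumPy sketch of the same pairing (the helper name and toy shapes here are illustrative assumptions, not part of the original answer):

```python
import numpy as np

def sliding_windows(a, window, stride=1):
    """Stack the overlapping windows of `a` along a new leading axis."""
    n = (len(a) - window) // stride + 1
    return np.stack([a[i * stride: i * stride + window] for i in range(n)])

series = np.arange(20)
x = sliding_windows(series, window=5)            # inputs: one row per window
y = sliding_windows(series, window=1)[:len(x)]   # targets: one length-1 window each
```

Here x has shape (16, 5) and y has shape (16, 1), matching the "inputs windowed, targets of size 1" pairing that the zip of the two tf.data pipelines produces.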
[ "python", "tensorflow", "tensorflow_datasets" ]
stackoverflow_0074552302_python_tensorflow_tensorflow_datasets.txt
Q: scrollable dynamically generated popup in kivy

I'm trying to create a scrollable popup window dynamically in kivy (without a kv file). My goal is two things:

Have the popup window scroll if the popup_label has too much text.
Make the popup_label text wrap and use its own space without overflowing onto other widgets if the text is too large.

The problem: I've been trying many variations of the code below but I can't get the popup to scroll. As you can see from the screenshot attached, I've got text starting to go outside the top of the popup window and big spaces between widgets, and when viewed on the phone the white popup_label text is all bunched together and overflowing over the blue header labels. Any advice please?

PS: I'm self-taught in python and kivy and it's my first app, so please respond with understandable lingo :) IE: x - y for x in x / cos(*%s , %d) will go over my head :)

Snippet of relevant code so far:

class LocationPopupMenu(Popup):
    """
    Add a popup dialog box when a user clicks an employer's location pin.
    This is called from "EmployerMarker.py"
    """

    def __init__(self, user_data):
        self.user = user_data
        super().__init__()
        root = App.get_running_app()
        regotype = root.registration_type
        measurement = root.firebase_user['measurement']

        # Set all the required fields of current_user data.
        # Update this list with matching data you want to add to the popup.
        if regotype == "Employer":
            headers = "Preferred Catagories : ,Preferred Positions : ,Distance Away : "
        else:
            headers = "Hiring Catagories : ,Positions : ,Distance Away : "
        headers = headers.split(',')

        # Generate a GridLayout for the popup
        layout = GridLayout(cols=1, padding=5, spacing=5, size_hint_y=None)
        # attempt to bind like I would in a kv file, i.e. height: self.minimum_height
        layout.bind(minimum_height=layout.setter("height"))
        scrollLayout = ScrollView(orientation='vertical', scroll_type=['content'])
        layout.add_widget(scrollLayout)
        closeButton = None
        attribute_value = ""

        for i in range(len(headers)):
            # iterate over the attributes of the current_user
            attribute_name = headers[i]
            if i == 0:
                attribute_value = user_data['position_categories']
            if i == 1:
                attribute_value = user_data['position_sub_categories']
            if i == 2:
                attribute_value = f"{user_data['distance']} {measurement}"

            # Add the attributes to a label then add the label widget to the layout.
            # Add the blue header labels.
            popup_header = MDLabel(text=f"{attribute_name}",
                                   font_size="22sp",
                                   text_size=(self.width, None),
                                   halign="left",
                                   valign="center",
                                   size_hint_y=None,
                                   theme_text_color="Custom",
                                   text_color=root.theme_cls.accent_color)
            popup_header.bind(height=popup_header.setter("texture_size"))

            # Add the white labels.
            popup_label = MDLabel(text=f"{attribute_value}",
                                  font_size="22sp",
                                  text_size=(self.width, None),
                                  halign="left",
                                  valign="top",
                                  size_hint_y=None)
            # attempt to bind texture_size as I would in a kv file, i.e. height: self.texture_size[1],
            # so that if popup_label has a large amount of text it doesn't just display over the top
            # of other widgets; it displays inside its own allocated grid
            popup_label.bind(height=popup_label.setter("texture_size"))

            layout.add_widget(popup_header)
            layout.add_widget(popup_label)

        # Add buttons to the bottom of the popup
        closeButton = MDRoundFlatButton(text="Close me",
                                        opposite_colors=True,
                                        font_size="17sp",
                                        halign="center",
                                        valign="center",
                                        size_hint_y=None,
                                        height="40dp",
                                        opacity=1,
                                        line_color=root.theme_cls.primary_dark,
                                        text_color="yellow",
                                        on_release=self.dismiss)
        profileButton = MDRoundFlatButton(text="View Profile",
                                          opposite_colors=True,
                                          font_size="17sp",
                                          halign="center",
                                          valign="center",
                                          size_hint_y=None,
                                          height="40dp",
                                          opacity=1,
                                          line_color=root.theme_cls.primary_dark,
                                          text_color=root.theme_cls.accent_color,
                                          on_release=self.viewProfile)
        if regotype == "Employer":
            layout.add_widget(profileButton)
        layout.add_widget(closeButton)

        # set the attributes to the popup class
        setattr(self, "title", f"{user_data['registration_type']} Details")
        setattr(self, "content", layout)
        setattr(self, "auto_dismiss", False)

(Screenshot: the popup window when run on Linux.)

A: The problem is that you are adding the ScrollView to the GridLayout. It should be the opposite. Try changing:

layout.add_widget(scrollLayout)

to:

scrollLayout.add_widget(layout)

and change:

setattr(self, "content", layout)

to:

setattr(self, "content", scrollLayout)

or:

self.content = scrollLayout
[ "kivy", "kivymd", "python" ]
stackoverflow_0074556878_kivy_kivymd_python.txt
Q: TypeError: Cannot interpret '12.779999999999998' as a data type

I am trying to plot my data on a chart with matplotlib but I keep getting an error that states that 12.7799 cannot be interpreted as a data type. It works when I take out the figure for the predicted gas prices but not when I include it. I have tried converting it to an int but the error still shows up.

# Comparing Gas Prices in Ireland From 2021 & 2022
import time
import matplotlib.pyplot as plt
import numpy as np

# The first number is the gas price in July of 2021, the second in July of 2022
gp = [1.75, 2.13]
diff = round(gp[1] - gp[0], 2)
diff1 = '€' + str(diff)
percInc = round(((gp[1] - gp[0]) / gp[0]) * 100, 2)
percInc1 = str(percInc) + '%'

print('Today we will be comparing the Gas Prices in Ireland from 2021 & 2022')
time.sleep(4)
print('Gas Prices in July 2021 were €1.75 per litre and in July 2022 were €2.13 per litre.')
time.sleep(2)
print('The difference in price between 2021 and 2022 is', diff1)
time.sleep(2)
print('The percentage increase in price was', percInc1)
time.sleep(2)
print('')
print('Now it is your turn to predict the gas prices in 2023')

userInf = int(input('Enter the inflation rate for the next year (whole number between -10 and 10: '))
gp1 = gp[1] * userInf
gp23 = gp1 + gp[1]
print('The predicted price for 2023 is', gp23)

x = np.array(['2021', '2022', '2023'])
y = np.array([gp[0], gp[1]], gp23)  # where the problem occurs

plt.title('Comparing Gas Prices in Ireland')
plt.xlabel('Years')
plt.ylabel('Prices')
plt.bar(x, y, align='edge', width=-0.4)
plt.show()

This is the error I am receiving:

Traceback (most recent call last):
  File "main.py", line 30, in <module>
    y = np.array([gp[0], gp[1]], gp23)
TypeError: Cannot interpret '12.779999999999998' as a data type

Can anyone please help me identify the mistake?

A: Try this:

y = np.array([x, y, z]) instead of y = np.array([x, y], z)

I checked it on my end and it works ;)

y = np.array([gp[0], gp[1], gp23])
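The error message makes more sense once you know np.array's signature: the second positional parameter is dtype, so np.array([gp[0], gp[1]], gp23) asks NumPy to interpret the float 12.78 as a data type. A minimal sketch of the failure and the fix:

```python
import numpy as np

gp23 = 12.78

# Broken: gp23 lands in the dtype slot of np.array -> TypeError
try:
    np.array([1.75, 2.13], gp23)
except TypeError:
    pass  # "Cannot interpret '12.78' as a data type"

# Fixed: all three prices go inside one list
y = np.array([1.75, 2.13, gp23])
```

The fix is exactly what the answer shows: move the third value inside the list so np.array receives a single sequence and infers the dtype itself.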
[ "matplotlib", "python", "typeerror" ]
stackoverflow_0074562943_matplotlib_python_typeerror.txt
Q: Problems in installing jupyter notebook

I have installed Jupyter Notebook with pip, through both Python and Anaconda, multiple times, but I cannot find it in my Start menu or any other location on my computer. I have also tried using pip download and then pip install. It is successful in cmd but I still cannot find it in my GUI.

A: I think the best way to go about using Jupyter notebooks is to first create a conda environment that has jupyter installed. You can do this with the following line:

conda create -n my_env python jupyter

Optionally, you can even specify a Python version as follows:

conda create -n my_env python=3.7 jupyter

Now, activate your environment:

conda activate my_env

Finally, launch Jupyter notebooks:

jupyter notebook

If you already have an environment created, just activate it, install jupyter, and launch Jupyter notebooks:

conda activate my_env
pip install jupyter
jupyter notebook
[ "jupyter_notebook", "pip", "python" ]
stackoverflow_0074560122_jupyter_notebook_pip_python.txt
Q: Django authenticate: usage in login vs. register (signup): how do they differ?

I have noticed that Django's authenticate() is used in the same way in both the login view and the register view; both return a User object to be used in login(). In the login view, authenticate() takes the username and password from the submitted form, then checks that the credentials are valid:

if request.method == 'POST':
    username = request.POST['username']
    password = request.POST['password1']
    user = authenticate(request, username=username, password=password)
    if user is not None:
        login(request, user)

The register view looks very similar to the login view. It gets the credentials from the submitted form and uses the user to log in:

if request.method == "POST":
    form = UserCreationForm(request.POST)
    if form.is_valid():
        form.save()
        username = form.cleaned_data['username']
        password = form.cleaned_data['password1']
        user = authenticate(request, username=username, password=password)
        login(request, user)

Both call user = authenticate(request, username=username, password=password). Apart from saving the form in the register view, what is the difference here? The login view (I guess) is only checking that the credentials are valid, while the register view is creating a new user, so the credentials are new data coming in. Am I getting this correctly?

A: You do not need to authenticate a user in your register method. It can be as simple as:

form = CreateUserForm()
if request.method == 'POST':
    form = CreateUserForm(request.POST)
    if form.is_valid():
        form.save()
        return redirect('<urlname>')
context = {'form': form}
return render(request, '<appname>/<filename>.html', context)

Hence, authentication is only required while logging in as a user. And you can make your login method even simpler by doing this:

if request.method == 'POST':
    username = request.POST.get('username')
    password = request.POST.get('password')
    user = authenticate(request, username=username, password=password)
    if user:
        login(request, user)
        return redirect('<urlname>')
return render(request, '<appname>/<filename>.html')

Hope this clarifies the situation for you :)
[ "django", "python" ]
stackoverflow_0074562369_django_python.txt
Q: How can I lower down values to a specific number in a numpy array

Let's say I have an array like this:

[1, 5, 2, 6, 6.7, 8, 10]

I want to cap the numbers that are larger than n. So, for example, if n is 6, the array will look like this:

[1, 5, 2, 6, 6, 6, 6]

I have tried a solution using numpy.vectorize:

lower_down = lambda x: min(6, x)
lower_down = numpy.vectorize(lower_down)

It works but it's too slow. How can I make this faster? Is there a numpy function for achieving the same result?

A: You could use numpy.minimum (or numpy.maximum) if you want to limit it:

>>> numpy.minimum(1, [1, 2])
array([1, 1])

>>> numpy.maximum(2, [1, 2])
array([2, 2])

If you need to limit both minimum and maximum, try the numpy.clip function:

>>> np.clip([1, 2, 3, 4], 2, 3)
array([2, 2, 3, 3])

From the docs: Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1. Equivalent to but faster than np.minimum(a_max, np.maximum(a, a_min)).

A: Numpy already has a minimum function; no need to create your own.

>>> np.minimum(6, [1, 5, 2, 6, 6.7, 8, 10])
array([1., 5., 2., 6., 6., 6., 6.])

A: Try something like this:

import numpy as np

data = np.array([1, 5, 2, 6, 6.7, 8, 10])
data[data > 6] = 6

A: You could do this:

import numpy as np

array = [1, 5, 2, 6, 6.7, 8, 10]
array = np.array(array)

array[array >= 6] = 6

new_array = array
print(new_array)

[1. 5. 2. 6. 6. 6. 6.]
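For the exact array in the question, the vectorized ufuncs do the capping in one call, avoiding the per-element Python function call that makes np.vectorize slow (a quick sketch; both forms give the same result):

```python
import numpy as np

arr = np.array([1, 5, 2, 6, 6.7, 8, 10])

capped = np.minimum(arr, 6)      # elementwise min against the scalar 6
clipped = np.clip(arr, None, 6)  # same result: no lower bound, upper bound 6
```

np.clip with a None lower bound is handy when you later want to add a floor as well; otherwise np.minimum is the most direct translation of min(6, x).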
How can I lower down values to a specific number in a numpy array
Let's say I have an array like this: [1,5, 2, 6, 6.7, 8, 10] I want to lower down the numbers that are larger than n. So for example if n is 6, the array will look like this: [1,5, 2, 6, 6, 6, 6] I have tried a solution using numpy.vectorize: lower_down = lambda x : min(6,x) lower_down = numpy.vectorize(lower_down) It works but it's too slow. How can I make this faster? Is there a numpy function for achieving the same result?
[ "You could use numpy.minimum (or numpy.maximum) if you want to limit it:\n>>> numpy.minimum(1, [1, 2])\narray([1, 1])\n\n>>> numpy.maximum(2, [1, 2])\narray([2, 2])\n\nIf you need to limit both minimum and maximum, try numpy.clip function:\n>>> np.clip([1, 2, 3, 4], 2, 3)\narray([2, 2, 3, 3])\n\nFrom docs:\nClip (limit) the values in an array.\nGiven an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1.\nEquivalent to but faster than np.minimum(a_max, np.maximum(a, a_min)).\n", "Numpy already has a minimum function, no need to create your own.\n>>> np.minimum(6, [1,5, 2, 6, 6.7, 8, 10])\narray([1., 5., 2., 6., 6., 6., 6.])\n\n", "Try something like this:\nimport numpy as np\n\ndata = np.array([1,5, 2, 6, 6,7, 8, 10])\n\ndata[data >6 ] = 6\n\n", "You could do this:\nimport numpy as np\n\narray = [1,5, 2, 6, 6.7, 8, 10]\narray = np.array(array)\n\narray[array >= 6] = 6\n\nnew_array = array\n\nprint(new_array)\n\n[1. 5. 2. 6. 6. 6. 6.]\n" ]
[ 3, 3, 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074563149_numpy_python.txt
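As a quick, runnable recap of the answers above (assuming NumPy is installed): np.minimum and np.clip give identical results on the question's data and avoid np.vectorize entirely.

```python
import numpy as np

# Cap values at 6, as in the answers above.
data = np.array([1, 5, 2, 6, 6.7, 8, 10])

capped = np.minimum(6, data)      # element-wise minimum against the scalar 6
clipped = np.clip(data, None, 6)  # same result; a_min=None leaves the lower end open

print(capped.tolist())                  # [1.0, 5.0, 2.0, 6.0, 6.0, 6.0, 6.0]
print(np.array_equal(capped, clipped))  # True
```

Both run as vectorized C loops inside NumPy, which is why they beat the per-element Python call that np.vectorize makes.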
Q: Removing a character (number, letter, anything) in a string that is the same as the one that came before or after? (Python) So, I've seen a lot of answers for removing duplicate characters in strings, but I'm not trying to remove all duplicates - just the ones that are beside each other. This is probably a lot more simple than what I'm doing, but this is what I've been attempting to do (and failing miserably at) for j in range(2, len(string)-1): char = string[j] plus = string[j+1] minus = string[j-1] if char == plus or char == minus: string.replace(char, "") For reference, the code SHOULD act as: input: ppmpvvpmmp output: pmpvmp But instead, the output does not change at all. Again, I'm aware that this is most likely very easy and I'm overcomplicating, but I'm genuinely struggling here and have tried a lot of similar variations A: I would use a regular expression replacement here: inp = "ppmpvvpmmp" output = re.sub(r'(\w)\1', r'\1', inp) print(output) # pmpvpmp The above assumes that a duplicate is limited to a single pair of same letters. If instead you want to reduce 3 or more, then use: inp = "ppmpvvvvvpmmmp" output = re.sub(r'(\w)\1+', r'\1', inp) print(output) # pmpvpmp
Removing a character (number, letter, anything) in a string that is the same as the one that came before or after? (Python)
So, I've seen a lot of answers for removing duplicate characters in strings, but I'm not trying to remove all duplicates - just the ones that are beside each other. This is probably a lot more simple than what I'm doing, but this is what I've been attempting to do (and failing miserably at) for j in range(2, len(string)-1): char = string[j] plus = string[j+1] minus = string[j-1] if char == plus or char == minus: string.replace(char, "") For reference, the code SHOULD act as: input: ppmpvvpmmp output: pmpvmp But instead, the output does not change at all. Again, I'm aware that this is most likely very easy and I'm overcomplicating, but I'm genuinely struggling here and have tried a lot of similar variations
[ "I would use a regular expression replacement here:\ninp = \"ppmpvvpmmp\"\noutput = re.sub(r'(\\w)\\1', r'\\1', inp)\nprint(output) # pmpvpmp\n\nThe above assumes that a duplicate is limited to a single pair of same letters. If instead you want to reduce 3 or more, then use:\ninp = \"ppmpvvvvvpmmmp\"\noutput = re.sub(r'(\\w)\\1+', r'\\1', inp)\nprint(output) # pmpvpmp\n\n" ]
[ 0 ]
[]
[]
[ "duplicates", "python" ]
stackoverflow_0074563172_duplicates_python.txt
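As an alternative sketch to the regex answer above (not from the original posts), itertools.groupby collapses runs of adjacent equal characters and produces the same result as re.sub(r'(\w)\1+', r'\1', ...):

```python
from itertools import groupby

def dedupe_adjacent(s):
    # groupby yields one (key, run) pair per run of equal characters;
    # keeping just the key drops the duplicates within each run.
    return ''.join(key for key, _ in groupby(s))

print(dedupe_adjacent("ppmpvvpmmp"))      # pmpvpmp
print(dedupe_adjacent("ppmpvvvvvpmmmp"))  # pmpvpmp
```

This avoids the original loop's pitfall: str.replace returns a new string (strings are immutable), so calling it without assigning the result never changes anything.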
Q: How do I make the program stop only when something specific happens? I just started coding and I want to write a small program where you have to guess a number. The problem is you basically only have one guess and then it tells you the right number. How do I make the program only stop as soon as the user guessed the right number? This is my current code: import random a = int(input()) x = random.randint(1,5) if a > x: print("you guessed to high") elif a < x: print("you guessed to low") elif a == x: print("you guessed right") I have tried 'while' but it didn't work.
How do I make the program stop only when something specific happens?
I just started coding and I want to write a small program where you have to guess a number. The problem is you basically only have one guess and then it tells you the right number. How do I make the program only stop as soon as the user guessed the right number? This is my current code: import random a = int(input()) x = random.randint(1,5) if a > x: print("you guessed to high") elif a < x: print("you guessed to low") elif a == x: print("you guessed right") I have tried 'while' but it didn't work.
[ "I guess you can figure what is happening\nimport random\n\nx = random.randint(1,5)\n\nguessed_right = False\n\nwhile not guessed_right:\n\n a = int(input())\n\n if a > x:\n print(\"you guessed to high\")\n\n elif a < x:\n print(\"you guessed to low\")\n\n elif a == x:\n print(\"you guessed right\")\n guessed_right = True\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074563236_python.txt
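The answer's guessed_right flag can also be written with while True and break. In this sketch (not the original poster's code) a list stands in for the interactive int(input()) calls so the loop runs unattended:

```python
import random

x = random.randint(1, 5)
guesses = iter([3, 1, 5, 2, 4])  # stands in for repeated int(input()) calls

while True:
    a = next(guesses)  # in the real game: a = int(input())
    if a > x:
        print("you guessed too high")
    elif a < x:
        print("you guessed too low")
    else:
        print("you guessed right")
        break  # leave the loop only on a correct guess
```

Because every value from 1 to 5 appears in the stand-in list, the loop is guaranteed to reach the correct guess and break.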
Q: Values grabbed through one class are not being passed to the inheriting class in PyQT5 I am building a GUI for a hand tracking app using PyQT5. There are two windows: one 'MainWindow' which displays the camera view, and one called 'Window 2' which holds multiple checkboxes that correspond to actions that can be enabled/disabled while the program is running. The code for the two classes is as follows: class MainWindow(QMainWindow): def __init__(self): super(MainWindow, self).__init__() loadUi("window1.ui", self) self.setWindowTitle("Tracker") self.display_width = 640 self.display_height = 480 # start the thread self.gesture.clicked.connect(self.gotowindow2) self.startbutton.clicked.connect(self.startvideo) def gotowindow2(self): widget.setCurrentIndex(widget.currentIndex() + 1) # changes index by 1 to change page def startvideo(self): # Change label color to light blue self.startbutton.clicked.disconnect(self.startvideo) # Change button to stop self.startbutton.setText('Stop video') values = Window2() hello = values.returnvalues() print(hello) self.thread = VideoThread(1,1,0) self.thread.change_pixmap_signal.connect(self.update_image) # start the thread self.thread.start() self.startbutton.clicked.connect(self.thread.stop) # Stop the video if button clicked self.startbutton.clicked.connect(self.stopvideo) def stopvideo(self): self.thread.change_pixmap_signal.disconnect() self.startbutton.setText('Start video') self.startbutton.clicked.disconnect(self.stopvideo) self.startbutton.clicked.disconnect(self.thread.stop) self.startbutton.clicked.connect(self.startvideo) def closeEvent(self, event): self.thread.stop() event.accept() @pyqtSlot(np.ndarray) def update_image(self, img): """Updates the image_label with a new opencv image""" qt_img = self.convert_cv_qt(img) self.image_label.setPixmap(qt_img) def convert_cv_qt(self, img): """Convert from an opencv image to QPixmap""" rgb_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) h, w, ch = rgb_image.shape bytes_per_line = ch 
* w convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888) p = convert_to_Qt_format.scaled(self.display_width, self.display_height, Qt.KeepAspectRatio) return QPixmap.fromImage(p) class Window2(QWidget): def __init__(self): super(Window2, self).__init__() loadUi("window2.ui", self) self.backbutton.clicked.connect(self.returnvalues) self.backbutton.clicked.connect(self.gotowindow1) self.outputbutton.clicked.connect(self.printvalues) def printvalues(self): print(self.returnvalues()) def gotowindow1(self): widget.setCurrentIndex(widget.currentIndex() - 1) def returnvalues(self): left = self.leftclickbutton.isChecked() scrollup = self.scrollupbutton.isChecked() scrolldown = self.scrolldownbutton.isChecked() checkboxvalues = [left, scrollup, scrolldown] return checkboxvalues if __name__ == "__main__": app = QApplication(sys.argv) widget = QtWidgets.QStackedWidget() a = MainWindow() b = Window2() widget.resize(1000, 600) widget.addWidget(a) widget.addWidget(b) widget.show() sys.exit(app.exec_()) My problem is that when I check the checkboxes and press the "outputbutton", it correctly displays the three values. When these values are fetched by hello = values.returnvalues(), it is always [False, False, False] How can the correct values get passed into class "MainWindow" stored under variable "hello" A: It seems like you are not familiar with Object Oriented Programming. values = Window2() hello = values.returnvalues() This block of code actually creates a new object Window2, and its checkboxes are indeed unchecked, which is why you always get [False, False, False]. The simplest (but not the best) way to solve your problem would be to give your Window2 instance to your mainWindow. 
So instead of: a = MainWindow() b = Window2() you do: b = Window2() a = MainWindow(b) Instead of: class MainWindow(QMainWindow): def __init__(self): super(MainWindow, self).__init__() you should do: class MainWindow(QMainWindow): def __init__(self, window_2): super(MainWindow, self).__init__() self.window_2 = window_2 and instead of: values = Window2() hello = values.returnvalues() you should do: hello = self.window_2.returnvalues()
Values grabbed through one class are not being passed to the inheriting class in PyQT5
I am building a GUI for a hand tracking app using PyQT5. There are two windows: one 'MainWindow' which displays the camera view, and one called 'Window 2' which holds multiple checkboxes that correspond to actions that can be enabled/disabled while the program is running. The code for the two classes is as follows: class MainWindow(QMainWindow): def __init__(self): super(MainWindow, self).__init__() loadUi("window1.ui", self) self.setWindowTitle("Tracker") self.display_width = 640 self.display_height = 480 # start the thread self.gesture.clicked.connect(self.gotowindow2) self.startbutton.clicked.connect(self.startvideo) def gotowindow2(self): widget.setCurrentIndex(widget.currentIndex() + 1) # changes index by 1 to change page def startvideo(self): # Change label color to light blue self.startbutton.clicked.disconnect(self.startvideo) # Change button to stop self.startbutton.setText('Stop video') values = Window2() hello = values.returnvalues() print(hello) self.thread = VideoThread(1,1,0) self.thread.change_pixmap_signal.connect(self.update_image) # start the thread self.thread.start() self.startbutton.clicked.connect(self.thread.stop) # Stop the video if button clicked self.startbutton.clicked.connect(self.stopvideo) def stopvideo(self): self.thread.change_pixmap_signal.disconnect() self.startbutton.setText('Start video') self.startbutton.clicked.disconnect(self.stopvideo) self.startbutton.clicked.disconnect(self.thread.stop) self.startbutton.clicked.connect(self.startvideo) def closeEvent(self, event): self.thread.stop() event.accept() @pyqtSlot(np.ndarray) def update_image(self, img): """Updates the image_label with a new opencv image""" qt_img = self.convert_cv_qt(img) self.image_label.setPixmap(qt_img) def convert_cv_qt(self, img): """Convert from an opencv image to QPixmap""" rgb_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) h, w, ch = rgb_image.shape bytes_per_line = ch * w convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, 
QtGui.QImage.Format_RGB888) p = convert_to_Qt_format.scaled(self.display_width, self.display_height, Qt.KeepAspectRatio) return QPixmap.fromImage(p) class Window2(QWidget): def __init__(self): super(Window2, self).__init__() loadUi("window2.ui", self) self.backbutton.clicked.connect(self.returnvalues) self.backbutton.clicked.connect(self.gotowindow1) self.outputbutton.clicked.connect(self.printvalues) def printvalues(self): print(self.returnvalues()) def gotowindow1(self): widget.setCurrentIndex(widget.currentIndex() - 1) def returnvalues(self): left = self.leftclickbutton.isChecked() scrollup = self.scrollupbutton.isChecked() scrolldown = self.scrolldownbutton.isChecked() checkboxvalues = [left, scrollup, scrolldown] return checkboxvalues if __name__ == "__main__": app = QApplication(sys.argv) widget = QtWidgets.QStackedWidget() a = MainWindow() b = Window2() widget.resize(1000, 600) widget.addWidget(a) widget.addWidget(b) widget.show() sys.exit(app.exec_()) My problem is that when I check the checkboxes and press the "outputbutton", it correctly displays the three values. When these values are fetched by hello = values.returnvalues(), it is always [False, False, False] How can the correct values get passed into class "MainWindow" stored under variable "hello"
[ "It seems like you are not familiar with Object Oriented Programming.\nvalues = Window2()\nhello = values.returnvalues()\n\nThis block of code actually creates a new object Window2, and its checkboxes are indeed unchecked, which is why you always get [False, False, False].\nThe simplest (but not the best) way to solve your problem would be to give your Window2 instance to your mainWindow.\nSo instead of:\na = MainWindow()\nb = Window2()\n\nyou do:\nb = Window2()\na = MainWindow(b)\n\nInstead of:\nclass MainWindow(QMainWindow):\n def __init__(self):\n super(MainWindow, self).__init__()\n\nyou should do:\nclass MainWindow(QMainWindow):\n def __init__(self, window_2):\n super(MainWindow, self).__init__()\n self.window_2 = window_2\n\nand instead of:\nvalues = Window2()\nhello = values.returnvalues()\n\nyou should do:\nhello = self.window_2.returnvalues()\n\n" ]
[ 0 ]
[]
[]
[ "class", "inheritance", "pyqt5", "python" ]
stackoverflow_0074563079_class_inheritance_pyqt5_python.txt
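A GUI-free sketch of the answer's point (class names here are illustrative stand-ins, not PyQt5 classes): reading through the same shared instance sees the user's changes, while constructing a fresh object always reports the defaults.

```python
class Settings:
    """Stand-in for Window2: holds checkbox-like state."""
    def __init__(self):
        self.left = False  # mirrors an unchecked checkbox

    def returnvalues(self):
        return [self.left]


class Main:
    """Stand-in for MainWindow: receives the shared Settings instance."""
    def __init__(self, settings):
        self.settings = settings  # keep a reference; do NOT build a new Settings

    def read(self):
        return self.settings.returnvalues()


shared = Settings()
main = Main(shared)

shared.left = True           # the user ticks the box in the other window
print(main.read())           # [True] -- visible through the shared reference

fresh = Settings()           # the bug in the question: a brand-new instance...
print(fresh.returnvalues())  # [False] -- ...always starts in the default state
```

This mirrors the fix in the answer: pass the existing Window2 into MainWindow's constructor instead of calling Window2() inside startvideo.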
Q: Python gTTS several mp3 files issue I am creating an app based on speech. Everything works fine but I do not want my app to use outside program to open mp3 file. At the moment program can do several commands only if I will use: cmd def speak(text): tts = gTTS(text=text, lang='pl') filename = 'speak.mp3' tts.save(filename) cmd = filename #works for several commands with external program os.system(cmd) What I would like to do is something like this: def speak(text): tts = gTTS(text=text, lang='pl') filename = 'speak.mp3' tts.save(filename) playsound.playsound(filename) return speak Unfortunately it works only for first audio input, second one gives error: File "C:\Users\Admin\AppData\Local\Programs\Python\Python38-32\lib\site-packages\gtts\tts.py", line 294, in save with open(str(savefile), 'wb') as f: PermissionError: [Errno 13] Permission denied: 'speak.mp3' I was trying to delete the mp3 file after it has been saved and played but it did not help. Any idea how to solve it? A: Try to see the permissions of the file that has been created. It might have only read permissions. A: When passing in the function, attempt to supply various file names for each of the distinct texts, or use the random.randint() method to set different file names, or use the current time as your file name using the time module.
Python gTTS several mp3 files issue
I am creating an app based on speech. Everything works fine but I do not want my app to use outside program to open mp3 file. At the moment program can do several commands only if I will use: cmd def speak(text): tts = gTTS(text=text, lang='pl') filename = 'speak.mp3' tts.save(filename) cmd = filename #works for several commands with external program os.system(cmd) What I would like to do is something like this: def speak(text): tts = gTTS(text=text, lang='pl') filename = 'speak.mp3' tts.save(filename) playsound.playsound(filename) return speak Unfortunately it works only for first audio input, second one gives error: File "C:\Users\Admin\AppData\Local\Programs\Python\Python38-32\lib\site-packages\gtts\tts.py", line 294, in save with open(str(savefile), 'wb') as f: PermissionError: [Errno 13] Permission denied: 'speak.mp3' I was trying to delete the mp3 file after it has been saved and played but it did not help. Any idea how to solve it?
[ "Try to see the permissions of the file that has been created. It might have only read permissions.\n", "When passing in the function, attempt to supply various file names for each of the distinct texts, or use the random.randint() method to set different file names, or use the current time as your file name using the time module.\n" ]
[ 0, 0 ]
[]
[]
[ "gtts", "python" ]
stackoverflow_0061712950_gtts_python.txt
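The second answer's unique-filename idea can be sketched with the standard library alone (the gTTS and playsound calls are omitted here; only the naming scheme is shown):

```python
import tempfile

def unique_mp3_name():
    # NamedTemporaryFile picks a path no concurrent call will reuse, so every
    # utterance gets its own file and nothing holds a shared 'speak.mp3' open.
    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
        return f.name

a = unique_mp3_name()
b = unique_mp3_name()
print(a != b)  # True -- two utterances never fight over the same file
```

In the real speak() function you would pass the returned name to tts.save(...) and playsound.playsound(...), then optionally os.remove() it once playback finishes.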
Q: Creating a string text from values and indexes that meet condition Hello Guys I have the following dataset: # creating dataset dataset = pd.DataFrame() dataset['name'] = ['Alex', 'Alex', 'Alex','Alex','Alex', 'Marie', 'Marie', 'Marie','Marie','Marie', 'Luke', 'Luke', 'Luke','Luke','Luke'] dataset['sales'] = [690,451,478,524,750,452,784,523,451,125,854,745,856,900,105] dataset.info() dataset.shape I want to create and print a string that will let me know the sales reps whose mean sale's are above 500 units, in order to achieve this I first grouped the data and calculated the mean by name like so: result_grouped=dataset.groupby(['name']).aggregate({'sales': 'mean'}) If I can use the following code to filter my target sales reps print(list(result_grouped.index[(result_grouped['sales']>500)])) result_grouped[(result_grouped['sales']>500)] which gives: ['Alex', 'Luke'] sales name Alex 578.6 Luke 692.0 but my desired output will be something like so: a printable string in the format: "The reps in the target metrics are: name1 with mean1, name2 with mean2, ... 
, namen with meann" for this example my output will be: "The reps in the target metrics are Alex with 578.6, Luke with 692.0" I am very new to python and in the verge of a mental breakdown I know that this in the code genre does not seeem too hard but guys I come from an R enviroment and Python just seems to be very difficult for me I trully appreciate your help with this thank you so much for your help A: you can use: result_grouped=dataset.groupby(['name']).aggregate({'sales': 'mean'}).reset_index() result_grouped=result_grouped[(result_grouped['sales']>500)] result_grouped['text']=result_grouped['name'] + ' with ' + result_grouped['sales'].astype(str) listt=', '.join(result_grouped['text'].to_list()) final="The reps in the target metrics are {}".format(listt) print(final) ''' The reps in the target metrics are Alex with 578.6, Luke with 692.0 ''' A: You can chain the pandas functions then use a list comp to unpack the list of tuples into a sentence. result_grouped = ( dataset .groupby(['name']) .aggregate({'sales': 'mean'}) .query('sales.gt(500)') .reset_index() .to_records(index=False) .tolist() ) print(f"The reps in the target metrics are {', '.join([f'{x[0]} with {x[1]}' for x in result_grouped])}") Output: The reps in the target metrics are Alex with 578.6, Luke with 692.0
Creating a string text from values and indexes that meet condition
Hello Guys I have the following dataset: # creating dataset dataset = pd.DataFrame() dataset['name'] = ['Alex', 'Alex', 'Alex','Alex','Alex', 'Marie', 'Marie', 'Marie','Marie','Marie', 'Luke', 'Luke', 'Luke','Luke','Luke'] dataset['sales'] = [690,451,478,524,750,452,784,523,451,125,854,745,856,900,105] dataset.info() dataset.shape I want to create and print a string that will let me know the sales reps whose mean sales are above 500 units, in order to achieve this I first grouped the data and calculated the mean by name like so: result_grouped=dataset.groupby(['name']).aggregate({'sales': 'mean'}) If I can use the following code to filter my target sales reps print(list(result_grouped.index[(result_grouped['sales']>500)])) result_grouped[(result_grouped['sales']>500)] which gives: ['Alex', 'Luke'] sales name Alex 578.6 Luke 692.0 but my desired output will be something like so: a printable string in the format: "The reps in the target metrics are: name1 with mean1, name2 with mean2, ... , namen with meann" for this example my output will be: "The reps in the target metrics are Alex with 578.6, Luke with 692.0" I am very new to python and on the verge of a mental breakdown I know that this in the code genre does not seem too hard but guys I come from an R environment and Python just seems to be very difficult for me I truly appreciate your help with this thank you so much for your help
[ "you can use:\nresult_grouped=dataset.groupby(['name']).aggregate({'sales': 'mean'}).reset_index()\nresult_grouped=result_grouped[(result_grouped['sales']>500)]\n\nresult_grouped['text']=result_grouped['name'] + ' with ' + result_grouped['sales'].astype(str)\nlistt=', '.join(result_grouped['text'].to_list())\nfinal=\"The reps in the target metrics are {}\".format(listt)\n\nprint(final)\n'''\nThe reps in the target metrics are Alex with 578.6, Luke with 692.0\n'''\n\n", "You can chain the pandas functions then use a list comp to unpack the list of tuples into a sentence.\nresult_grouped = (\n dataset\n .groupby(['name'])\n .aggregate({'sales': 'mean'})\n .query('sales.gt(500)')\n .reset_index()\n .to_records(index=False)\n .tolist()\n)\n\n\nprint(f\"The reps in the target metrics are {', '.join([f'{x[0]} with {x[1]}' for x in result_grouped])}\")\n\nOutput:\nThe reps in the target metrics are Alex with 578.6, Luke with 692.0\n\n" ]
[ 1, 1 ]
[]
[]
[ "pandas", "printing", "python" ]
stackoverflow_0074562694_pandas_printing_python.txt
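Condensing the two answers above into one runnable sketch on the question's data (assuming pandas is installed):

```python
import pandas as pd

dataset = pd.DataFrame({
    'name': ['Alex'] * 5 + ['Marie'] * 5 + ['Luke'] * 5,
    'sales': [690, 451, 478, 524, 750,
              452, 784, 523, 451, 125,
              854, 745, 856, 900, 105],
})

means = dataset.groupby('name')['sales'].mean()  # Alex 578.6, Luke 692.0, Marie 467.0
target = means[means > 500]                      # keep only reps above 500
parts = [f"{name} with {value}" for name, value in target.items()]
sentence = "The reps in the target metrics are " + ", ".join(parts)
print(sentence)  # The reps in the target metrics are Alex with 578.6, Luke with 692.0
```

Working on the Series returned by groupby(...)['sales'].mean() keeps the names available via .items(), so no reset_index or to_records round-trip is needed.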
Q: Floating-point errors in cube root of exact cubic input I found myself needing to compute the "integer cube root", meaning the cube root of an integer, rounded down to the nearest integer. In Python, we could use the NumPy floating-point cbrt() function: import numpy as np def icbrt(x): return int(np.cbrt(x)) Though this works most of the time, it fails at certain input x, with the result being one less than expected. For example, icbrt(15**3) == 14, which comes about because np.cbrt(15**3) == 14.999999999999998. The following finds the first 100,000 such failures: print([x for x in range(100_000) if (icbrt(x) + 1)**3 == x]) # [3375, 19683, 27000, 50653] == [15**3, 27**3, 30**3, 37**3] Question: What is special about 15, 27, 30, 37, ..., making cbrt() return ever so slightly below the exact result? I can find no obvious underlying pattern for these numbers. A few observations: The story is the same if we switch from NumPy's cbrt() to that of Python's math module, or if we switch from Python to C (not surprising, as I believe that both numpy.cbrt() and math.cbrt() delegate to cbrt() from the C math library in the end). Replacing cbrt(x) with x**(1/3) (pow(x, 1./3.) in C) leads to many more cases of failure. Let us stick to cbrt(). For the square root, a similar problem does not arise, meaning that import numpy as np def isqrt(x): return int(np.sqrt(x)) returns the correct result for all x (tested up to 100,000,000). Test code: print([x for x in range(100_000) if (y := np.sqrt(x))**2 != x and (y + 1)**2 <= x]) Extra As the above icbrt() only seems to fail on cubic input, we can correct for the occasional mistakes by adding a fixup, like so: import numpy as np def icbrt(x): y = int(np.cbrt(x)) if (y + 1)**3 == x: y += 1 return y A different solution is to stick to exact integer computation, implementing icbrt() without the use of floating-point numbers. This is discussed e.g. in this SO question. 
An extra benefit of such approaches is that they are (or can be) faster than using the floating-point cbrt(). To be clear, my question is not about how to write a better icbrt(), but about why cbrt() fails at some specific inputs. A: This problem is caused by a bad implementation of cbrt. It is not caused by floating-point arithmetic because floating-point arithmetic is not a barrier to computing the cube root well enough to return an exactly correct result when the exactly correct result is representable in the floating-point format. For example, if one were to use integer arithmetic to compute nine-fifths of 80, we would expect a correct result of 144. If a routine to compute nine-fifths of a number were implemented as int NineFifths(int x) { return 9/5*x; }, we would blame that routine for being implemented incorrectly, not blame integer arithmetic for not handling fractions. Similarly, if a routine uses floating-point arithmetic to calculate an incorrect result when a correct result is representable, we blame the routine, not floating-point arithmetic. Some mathematical functions are difficult to calculate, and we accept some amount of error in them. In fact, for some of the routines in the math library, humans have not yet figured out how to calculate them with correct rounding in a known-bounded execution time. So we accept that not every math routine is correctly rounded. However, when the mathematical value of a function is exactly representable in a floating-point format, the correct result can be obtained by faithful rounding rather than correct rounding. So this is a desirable goal for math library functions. Correctly rounded means the computed result equals the number you would obtain by rounding the exact mathematical result to the nearest representable value.1 Faithfully rounded means the computed result is less than one ULP from the exact mathematical result. 
An ULP is the unit of least precision, the distance between two adjacent representable numbers. Correctly rounding a function can be difficult because, in general, a function can be arbitrarily close to a rounding decision point. For round-to-nearest, this is midway between two adjacent representable numbers. Consider two adjacent representable numbers a and b. Their midpoint is m = (a+b)/2. If the mathematical value of some function f(x) is just below m, it should be rounded to a. If it is just above, it should be rounded to b. As we implement f in software, we might compute it with some very small error e. When we compute f(x), if our computed result lies in [m-e, m+e], and we only know the error bound is e, then we cannot tell whether f(x) is below m or above m. And because, in general, a function f(x) can be arbitrarily close to m, this is always a problem: No matter how accurately we compute f, no matter how small we make the error bound e, there is a possibility that our computed value will lie very close to a midpoint m, closer than e, and therefore our computation will not tell us whether to round down or to round up. For some specific functions and floating-point formats, studies have been made and proofs have been written about how close the functions approach such rounding decision points, and so certain functions like sine and cosine can be implemented with correct rounding with known bounds on the compute time. Other functions have eluded proof so far. In contrast, faithful rounding is easier to implement. If we compute a function with an error bound less than Β½ ULP, then we can always return a faithfully rounded result, one that is within one ULP of the exact mathematical result. Once we have computed some result y, we round that to the nearest representable value2 and return that. Starting with y having error less than Β½ ULP, the rounding may add up to Β½ ULP more error, so the total error is less than one ULP, which is faithfully rounded. 
A benefit of faithful rounding is that a faithfully rounded implementation of a function always produces the exact result when the exact result is representable. This is because the next nearest result is one ULP away, but faithful rounding always has an error less than one ULP. Thus, a faithfully rounded cbrt function returns exact results when they are representable. What is special about 15, 27, 30, 37, ..., making cbrt() return ever so slightly below the exact result? I can find no obvious underlying pattern for these numbers. The bad cbrt implementation might compute the cube root by reducing the argument to a value in [1, 8) or similar interval and then applying a precomputed polynomial approximation. Each addition and multiplication in that polynomial may introduce a rounding error as the result of each operation is rounded to the nearest representable value in floating-point format. Additionally, the polynomial has inherent error. Rounding errors behave somewhat like a random process, sometimes rounding up, sometimes down. As they accumulate over several calculations, they may happen to round in different directions and cancel, or they may round in the same direction and reinforce. If the errors happen to cancel by the end of the calculations, you get an exact result from cbrt. Otherwise, you may get an incorrect result from cbrt. Footnotes 1 In general, there is a choice of rounding rules. The default and most common is round-to-nearest, ties-to-even. Others include round-upward, round-downward, and round-toward-zero. This answer focuses on round-to-nearest. 2 Inside a mathematical function, numbers may be computed using extended precision, so we may have computed results that are not representable in the destination floating-point format; they will have more precision.
Floating-point errors in cube root of exact cubic input
I found myself needing to compute the "integer cube root", meaning the cube root of an integer, rounded down to the nearest integer. In Python, we could use the NumPy floating-point cbrt() function: import numpy as np def icbrt(x): return int(np.cbrt(x)) Though this works most of the time, it fails at certain input x, with the result being one less than expected. For example, icbrt(15**3) == 14, which comes about because np.cbrt(15**3) == 14.999999999999998. The following finds the first 100,000 such failures: print([x for x in range(100_000) if (icbrt(x) + 1)**3 == x]) # [3375, 19683, 27000, 50653] == [15**3, 27**3, 30**3, 37**3] Question: What is special about 15, 27, 30, 37, ..., making cbrt() return ever so slightly below the exact result? I can find no obvious underlying pattern for these numbers. A few observations: The story is the same if we switch from NumPy's cbrt() to that of Python's math module, or if we switch from Python to C (not surprising, as I believe that both numpy.cbrt() and math.cbrt() delegate to cbrt() from the C math library in the end). Replacing cbrt(x) with x**(1/3) (pow(x, 1./3.) in C) leads to many more cases of failure. Let us stick to cbrt(). For the square root, a similar problem does not arise, meaning that import numpy as np def isqrt(x): return int(np.sqrt(x)) returns the correct result for all x (tested up to 100,000,000). Test code: print([x for x in range(100_000) if (y := np.sqrt(x))**2 != x and (y + 1)**2 <= x]) Extra As the above icbrt() only seems to fail on cubic input, we can correct for the occasional mistakes by adding a fixup, like so: import numpy as np def icbrt(x): y = int(np.cbrt(x)) if (y + 1)**3 == x: y += 1 return y A different solution is to stick to exact integer computation, implementing icbrt() without the use of floating-point numbers. This is discussed e.g. in this SO question. An extra benefit of such approaches is that they are (or can be) faster than using the floating-point cbrt(). 
To be clear, my question is not about how to write a better icbrt(), but about why cbrt() fails at some specific inputs.
[ "This problem is caused by a bad implementation of cbrt. It is not caused by floating-point arithmetic because floating-point arithmetic is not a barrier to computing the cube root well enough to return an exactly correct result when the exactly correct result is representable in the floating-point format.\nFor example, if one were to use integer arithmetic to compute nine-fifths of 80, we would expect a correct result of 144. If a routine to compute nine-fifths of a number were implemented as int NineFifths(int x) { return 9/5*x; }, we would blame that routine for being implemented incorrectly, not blame integer arithmetic for not handling fractions. Similarly, if a routine uses floating-point arithmetic to calculate an incorrect result when a correct result is representable, we blame the routine, not floating-point arithmetic.\nSome mathematical functions are difficult to calculate, and we accept some amount of error in them. In fact, for some of the routines in the math library, humans have not yet figured out how to calculate them with correct rounding in a known-bounded execution time. So we accept that not every math routine is correctly rounded.\nHowver, when the mathematical value of a function is exactly representable in a floating-point format, the correct result can be obtained by faithful rounding rather than correct rounding. So this is a desirable goal for math library functions.\nCorrectly rounded means the computed result equals the number you would obtain by rounding the exact mathematical result to the nearest representable value.1 Faithfully rounded means the computed result is less than one ULP from the exact mathematical result. An ULP is the unit of least precision, the distance between two adjacent representable numbers.\nCorrectly rounding a function can be difficult because, in general, a function can be arbitrarily close to a rounding decision point. For round-to-nearest, this is midway between two adjacent representable numbers. 
Consider two adjacent representable numbers a and b. Their midpoint is m = (a+b)/2. If the mathematical value of some function f(x) is just below m, it should be rounded to a. If it is just above, it should be rounded to b. As we implement f in software, we might compute it with some very small error e. When we compute f(x), if our computed result lies in [m-e, m+e], and we only know the error bound is e, then we cannot tell whether f(x) is below m or above m. And because, in general, a function f(x) can be arbitrarily close to m, this is always a problem: No matter how accurately we compute f, no matter how small we make the error bound e, there is a possibility that our computed value will lie very close to a midpoint m, closer than e, and therefore our computation will not tell us whether to round down or to round up.\nFor some specific functions and floating-point formats, studies have been made and proofs have been written about how close the functions approach such rounding decision points, and so certain functions like sine and cosine can be implemented with correct rounding with known bounds on the compute time. Other functions have eluded proof so far.\nIn contrast, faithful rounding is easier to implement. If we compute a function with an error bound less than Β½ ULP, then we can always return a faithfully rounded result, one that is within one ULP of the exact mathematical result. Once we have computed some result y, we round that to the nearest representable value2 and return that. Starting with y having error less than Β½ ULP, the rounding may add up to Β½ ULP more error, so the total error is less than one ULP, which is faithfully rounded.\nA benefit of faithful rounding is that a faithfully rounded implementation of a function always produces the exact result when the exact result is representable. This is because the next nearest result is one ULP away, but faithful rounding always has an error less than one ULP. 
Thus, a faithfully rounded cbrt function returns exact results when they are representable.\n\nWhat is special about 15, 27, 30, 37, ..., making cbrt() return ever so slightly below the exact result? I can find no obvious underlying pattern for these numbers.\n\nThe bad cbrt implementation might compute the cube root by reducing the argument to a value in [1, 8) or similar interval and then applying a precomputed polynomial approximation. Each addition and multiplication in that polynomial may introduce a rounding error as the result of each operation is rounded to the nearest representable value in floating-point format. Additionally, the polynomial has inherent error. Rounding errors behave somewhat like a random process, sometimes rounding up, sometimes down. As they accumulate over several calculations, they may happen to round in different directions and cancel, or they may round in the same direction and reinforce. If the errors happen to cancel by the end of the calculations, you get an exact result from cbrt. Otherwise, you may get an incorrect result from cbrt.\nFootnotes\n1 In general, there is a choice of rounding rules. The default and most common is round-to-nearest, ties-to-even. Others include round-upward, round-downward, and round-toward-zero. This answer focuses on round-to-nearest.\n2 Inside a mathematical function, numbers may be computed using extended precision, so we may have computed results that are not representable in the destination floating-point format; they will have more precision.\n" ]
[ 2 ]
[]
[]
[ "floating_accuracy", "floating_point", "integer", "math", "python" ]
stackoverflow_0074553553_floating_accuracy_floating_point_integer_math_python.txt
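A small executable sketch of the answer's point: even when a cube-root routine is off by an ULP for some perfect cubes, rounding and then correcting by one recovers the exact integer cube root. The icbrt below is a hypothetical stand-in, not the asker's own icbrt(); the pow-based seed stands in for a possibly-unfaithful cbrt().

```python
def icbrt(n):
    """Integer cube root of a non-negative perfect (or any) cube candidate."""
    r = round(n ** (1.0 / 3.0))   # seed; may be off by one ULP, like a bad cbrt()
    while r ** 3 > n:             # correct downward if the seed overshot
        r -= 1
    while (r + 1) ** 3 <= n:      # correct upward if the seed undershot
        r += 1
    return r

# The problematic inputs from the question now come out exact:
print(icbrt(15 ** 3), icbrt(27 ** 3), icbrt(30 ** 3))  # 15 27 30
print(all(icbrt(k ** 3) == k for k in range(1, 10_000)))  # True
```

The correction loop is what a faithfully rounded cbrt makes unnecessary: with error below one ULP, rounding the float result alone would already land on the exact representable answer.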
Q: How to avoid blank spaces between bars which are plotted next to each other when exporting as SVG from matplotlib? I am plotting with matplotlib a bar chart, where the bars are next to each other without a space in between. However, when exporting as .svg, blank (white) spaces between the bars are visible. bar chart with blank spaces between bars when exported as .svg When exporting to PDF (also as vector graphic) no blank spaces are shown. bar chart without blank spaces between bars when exported as .pdf A: If your bars aren't perfectly aligned to pixel boundaries, the SVG renderer will try to antialias them, resulting in lighter patches between adjacent bars. For example, here are two SVGs that are both 200 pixels wide. The first shows a bar chart with 21 bars. Since the bars are not aligned with pixel boundaries, there are gaps between them. The second has 20 bars. Each bar has integer x coordinates at the left and right edges, and there are no visible gaps. <svg width="200" height="100" viewBox="0 0 200 100"> <g fill="#f00"> <!-- Notice the non-integer x coordinates (0.0, 9.5, 19.0, 28.6, etc. 
--> <path d="M0.0 50.0V90.0H9.5V50.0Z"/> <path d="M9.5 42.6V90.0H19.0V42.6Z"/> <path d="M19.0 35.9V90.0H28.6V35.9Z"/> <path d="M28.6 30.5V90.0H38.1V30.5Z"/> <path d="M38.1 26.7V90.0H47.6V26.7Z"/> <path d="M47.6 25.1V90.0H57.1V25.1Z"/> <path d="M57.1 25.6V90.0H66.7V25.6Z"/> <path d="M66.7 28.3V90.0H76.2V28.3Z"/> <path d="M76.2 33.0V90.0H85.7V33.0Z"/> <path d="M85.7 39.2V90.0H95.2V39.2Z"/> <path d="M95.2 46.3V90.0H104.8V46.3Z"/> <path d="M104.8 53.7V90.0H114.3V53.7Z"/> <path d="M114.3 60.8V90.0H123.8V60.8Z"/> <path d="M123.8 67.0V90.0H133.3V67.0Z"/> <path d="M133.3 71.7V90.0H142.9V71.7Z"/> <path d="M142.9 74.4V90.0H152.4V74.4Z"/> <path d="M152.4 74.9V90.0H161.9V74.9Z"/> <path d="M161.9 73.3V90.0H171.4V73.3Z"/> <path d="M171.4 69.5V90.0H181.0V69.5Z"/> <path d="M181.0 64.1V90.0H190.5V64.1Z"/> <path d="M190.5 57.4V90.0H200.0V57.4Z"/> </g> </svg> <svg width="200" height="100" viewBox="0 0 200 100"> <g fill="#080"> <!-- Here, the x coordinates are all integers (0, 10, 20, 30, etc.) --> <path d="M0.0 50.0V90.0H10.0V50.0Z"/> <path d="M10.0 42.3V90.0H20.0V42.3Z"/> <path d="M20.0 35.3V90.0H30.0V35.3Z"/> <path d="M30.0 29.8V90.0H40.0V29.8Z"/> <path d="M40.0 26.2V90.0H50.0V26.2Z"/> <path d="M50.0 25.0V90.0H60.0V25.0Z"/> <path d="M60.0 26.2V90.0H70.0V26.2Z"/> <path d="M70.0 29.8V90.0H80.0V29.8Z"/> <path d="M80.0 35.3V90.0H90.0V35.3Z"/> <path d="M90.0 42.3V90.0H100.0V42.3Z"/> <path d="M100.0 50.0V90.0H110.0V50.0Z"/> <path d="M110.0 57.7V90.0H120.0V57.7Z"/> <path d="M120.0 64.7V90.0H130.0V64.7Z"/> <path d="M130.0 70.2V90.0H140.0V70.2Z"/> <path d="M140.0 73.8V90.0H150.0V73.8Z"/> <path d="M150.0 75.0V90.0H160.0V75.0Z"/> <path d="M160.0 73.8V90.0H170.0V73.8Z"/> <path d="M170.0 70.2V90.0H180.0V70.2Z"/> <path d="M180.0 64.7V90.0H190.0V64.7Z"/> <path d="M190.0 57.7V90.0H200.0V57.7Z"/> </g> </svg> (Alternatively, you could try adding shape-rendering="crispEdges" to your SVG. This disables antialiasing altogether, so it might cause other problems.)
How to avoid blank spaces between bars which are plotted next to each other when exporting as SVG from matplotlib?
I am plotting with matplotlib a bar chart, where the bars are next to each other without a space in between. However, when exporting as .svg, blank (white) spaces between the bars are visible. bar chart with blank spaces between bars when exported as .svg When exporting to PDF (also as vector graphic) no blank spaces are shown. bar chart without blank spaces between bars when exported as .pdf
[ "If your bars aren't perfectly aligned to pixel boundaries, the SVG renderer will try to antialias them, resulting in lighter patches between adjacent bars.\nFor example, here are two SVGs that are both 200 pixels wide. The first shows a bar chart with 21 bars. Since the bars are not aligned with pixel boundaries, there are gaps between them. The second has 20 bars. Each bar has integer x coordinates at the left and right edges, and there are no visible gaps.\n\n\n<svg width=\"200\" height=\"100\" viewBox=\"0 0 200 100\">\n<g fill=\"#f00\">\n<!-- Notice the non-integer x coordinates (0.0, 9.5, 19.0, 28.6, etc. -->\n<path d=\"M0.0 50.0V90.0H9.5V50.0Z\"/>\n<path d=\"M9.5 42.6V90.0H19.0V42.6Z\"/>\n<path d=\"M19.0 35.9V90.0H28.6V35.9Z\"/>\n<path d=\"M28.6 30.5V90.0H38.1V30.5Z\"/>\n<path d=\"M38.1 26.7V90.0H47.6V26.7Z\"/>\n<path d=\"M47.6 25.1V90.0H57.1V25.1Z\"/>\n<path d=\"M57.1 25.6V90.0H66.7V25.6Z\"/>\n<path d=\"M66.7 28.3V90.0H76.2V28.3Z\"/>\n<path d=\"M76.2 33.0V90.0H85.7V33.0Z\"/>\n<path d=\"M85.7 39.2V90.0H95.2V39.2Z\"/>\n<path d=\"M95.2 46.3V90.0H104.8V46.3Z\"/>\n<path d=\"M104.8 53.7V90.0H114.3V53.7Z\"/>\n<path d=\"M114.3 60.8V90.0H123.8V60.8Z\"/>\n<path d=\"M123.8 67.0V90.0H133.3V67.0Z\"/>\n<path d=\"M133.3 71.7V90.0H142.9V71.7Z\"/>\n<path d=\"M142.9 74.4V90.0H152.4V74.4Z\"/>\n<path d=\"M152.4 74.9V90.0H161.9V74.9Z\"/>\n<path d=\"M161.9 73.3V90.0H171.4V73.3Z\"/>\n<path d=\"M171.4 69.5V90.0H181.0V69.5Z\"/>\n<path d=\"M181.0 64.1V90.0H190.5V64.1Z\"/>\n<path d=\"M190.5 57.4V90.0H200.0V57.4Z\"/>\n</g>\n</svg>\n<svg width=\"200\" height=\"100\" viewBox=\"0 0 200 100\">\n<g fill=\"#080\">\n<!-- Here, the x coordinates are all integers (0, 10, 20, 30, etc.) 
-->\n<path d=\"M0.0 50.0V90.0H10.0V50.0Z\"/>\n<path d=\"M10.0 42.3V90.0H20.0V42.3Z\"/>\n<path d=\"M20.0 35.3V90.0H30.0V35.3Z\"/>\n<path d=\"M30.0 29.8V90.0H40.0V29.8Z\"/>\n<path d=\"M40.0 26.2V90.0H50.0V26.2Z\"/>\n<path d=\"M50.0 25.0V90.0H60.0V25.0Z\"/>\n<path d=\"M60.0 26.2V90.0H70.0V26.2Z\"/>\n<path d=\"M70.0 29.8V90.0H80.0V29.8Z\"/>\n<path d=\"M80.0 35.3V90.0H90.0V35.3Z\"/>\n<path d=\"M90.0 42.3V90.0H100.0V42.3Z\"/>\n<path d=\"M100.0 50.0V90.0H110.0V50.0Z\"/>\n<path d=\"M110.0 57.7V90.0H120.0V57.7Z\"/>\n<path d=\"M120.0 64.7V90.0H130.0V64.7Z\"/>\n<path d=\"M130.0 70.2V90.0H140.0V70.2Z\"/>\n<path d=\"M140.0 73.8V90.0H150.0V73.8Z\"/>\n<path d=\"M150.0 75.0V90.0H160.0V75.0Z\"/>\n<path d=\"M160.0 73.8V90.0H170.0V73.8Z\"/>\n<path d=\"M170.0 70.2V90.0H180.0V70.2Z\"/>\n<path d=\"M180.0 64.7V90.0H190.0V64.7Z\"/>\n<path d=\"M190.0 57.7V90.0H200.0V57.7Z\"/>\n</g>\n</svg>\n\n\n\n(Alternatively, you could try adding shape-rendering=\"crispEdges\" to your SVG. This disables antialiasing altogether, so it might cause other problems.)\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "python", "svg", "vector_graphics" ]
stackoverflow_0074562936_matplotlib_python_svg_vector_graphics.txt
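The answer's diagnosis is that non-integer bar edges trigger antialiasing seams. A hedged sketch of the workaround it implies: choose bar edges that land on integer pixel coordinates, so adjacent bars share an edge exactly and the renderer has nothing to blend between them. pixel_aligned_edges is a hypothetical helper, not a matplotlib API.

```python
def pixel_aligned_edges(n_bars, width_px):
    """Return n_bars + 1 bar edges snapped to integer pixel coordinates.

    Rounding each edge (instead of using one fixed float bar width)
    keeps neighbouring bars touching with no sub-pixel gap.
    """
    return [round(i * width_px / n_bars) for i in range(n_bars + 1)]

print(pixel_aligned_edges(20, 200)[:4])  # [0, 10, 20, 30]
```

With matplotlib one would feed these edges in as bar positions and widths; an alternative workaround sometimes used is giving each bar an edgecolor equal to its facecolor so the seam is painted over, though neither approach is guaranteed for every renderer.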
Q: Why does Python think its timezone is UTC, when the system is PDT/PST? On my Ubuntu Linux system, the system timezone is correctly set to America/Vancouver: $ file /etc/localtime /etc/localtime: symbolic link to /usr/share/zoneinfo/America/Vancouver $ date Thu 17 Nov 10:31:38 PST 2022 $ date '+%Y-%m-%d %H:%M:%S%z' 2022-11-17 10:32:57-0800 $ date '+%Y-%m-%d %H:%M:%S%Z' 2022-11-17 10:33:04PST This is all correct and working as expected. However, python appears to think its timezone is UTC: $ python -c "import time; print(time.tzname); print(time.localtime());" ('UTC', 'UTC') time.struct_time(tm_year=2022, tm_mon=11, tm_mday=17, tm_hour=18, tm_min=34, tm_sec=0, tm_wday=3, tm_yday=321, tm_isdst=0) $ python -c 'from datetime import datetime; print(datetime.utcnow()); print(datetime.now());' 2022-11-17 18:34:38.878930 2022-11-17 18:34:38.878956 This system is running Python 3.10.8 & Ubuntu 22.04.1 LTS. Am I misunderstanding something about how Python's timezone handling works? What's going on? How do I get python's time.tzname to match the system timezone of PST? I don't really want to manually hardcode the timezone in my python script - I just want it to use the current system's local timezone. A: Turns out this was caused by Homebrew/linuxbrew. It had installed its own versions of python as dependencies and poetry had picked up one of those, as you can see from the poetry debug info here. I removed Homebrew completely (its Linux implementation has always felt like a bad idea to me, and this was enough for me to drop it). This broke poetry, so I uninstalled & reinstalled poetry - and that fixed it.
Why does Python think its timezone is UTC, when the system is PDT/PST?
On my Ubuntu Linux system, the system timezone is correctly set to America/Vancouver: $ file /etc/localtime /etc/localtime: symbolic link to /usr/share/zoneinfo/America/Vancouver $ date Thu 17 Nov 10:31:38 PST 2022 $ date '+%Y-%m-%d %H:%M:%S%z' 2022-11-17 10:32:57-0800 $ date '+%Y-%m-%d %H:%M:%S%Z' 2022-11-17 10:33:04PST This is all correct and working as expected. However, python appears to think its timezone is UTC: $ python -c "import time; print(time.tzname); print(time.localtime());" ('UTC', 'UTC') time.struct_time(tm_year=2022, tm_mon=11, tm_mday=17, tm_hour=18, tm_min=34, tm_sec=0, tm_wday=3, tm_yday=321, tm_isdst=0) $ python -c 'from datetime import datetime; print(datetime.utcnow()); print(datetime.now());' 2022-11-17 18:34:38.878930 2022-11-17 18:34:38.878956 This system is running Python 3.10.8 & Ubuntu 22.04.1 LTS. Am I misunderstanding something about how Python's timezone handling works? What's going on? How do I get python's time.tzname to match the system timezone of PST? I don't really want to manually hardcode the timezone in my python script - I just want it to use the current system's local timezone.
[ "Turns out this was caused by Homebrew/linuxbrew.\nIt had installed its own versions of python as dependencies and poetry had picked up one of those, as you can see from the poetry debug info here.\nI removed Homebrew completely (its Linux implementation has always felt like a bad idea to me, and this was enough for me to drop it). This broke poetry, so I uninstalled & reinstalled poetry - and that fixed it.\n" ]
[ 0 ]
[]
[]
[ "python", "time", "timezone" ]
stackoverflow_0074480620_python_time_timezone.txt
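The fix here was removing a stray Homebrew interpreter. A hedged diagnostic sketch for a similar situation: print which interpreter is actually running, and use the TZ environment variable with time.tzset() (a real but Unix-only API) to check whether that interpreter can resolve IANA zone names at all. The expected tzname is an assumption that the system's tz database is installed.

```python
import os
import sys
import time

# Which interpreter is this? The path exposes a brew/pyenv/venv Python.
print(sys.executable)

# Force a zone and re-read it (Unix-only; time.tzset does not exist on Windows).
os.environ["TZ"] = "America/Vancouver"
time.tzset()
print(time.tzname)  # expected ('PST', 'PDT') when tzdata is available
```

If time.tzname stays ('UTC', 'UTC') even after this, the interpreter cannot find the zone database, which points at a broken or bundled Python rather than at the script.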
Q: How to block terminal/console outputs in python I'm using Python to execute some bash commands. The problem is that the terminal outputs from these bash scripts are spamming my terminal. Is there any way to block the output messages from these scripts? I have tried the step in this answer, but it only blocks the print calls I make; it does not block the console output from the bash commands. Can anyone suggest a better solution? A: In Bash you can simply use: $ eclipse &>/dev/null This catches both stdout and stderr to the redirect point (in bash). (Here eclipse is my command line.)
How to block terminal/console outputs in python
I'm using Python to execute some bash commands. The problem is that the terminal outputs from these bash scripts are spamming my terminal. Is there any way to block the output messages from these scripts? I have tried the step in this answer, but it only blocks the print calls I make; it does not block the console output from the bash commands. Can anyone suggest a better solution?
[ "In Bash you can simply use:\n$ eclipse &>/dev/null\n\nThis catches both stdout and stderr to the redirect point (in bash).\n(here eclipse is my command line)\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074563179_python.txt
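The accepted answer redirects at the shell level; since the asker is launching the commands from Python, the usual Python-side equivalent is subprocess with both streams sent to DEVNULL. `echo` below is a stand-in for whatever noisy bash command is being run.

```python
import subprocess

# Python-side equivalent of `cmd &>/dev/null`: nothing reaches the terminal.
result = subprocess.run(
    ["echo", "noisy output"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print(result.returncode)  # 0
```

This silences the child process itself, which is what the question's own sys.stdout-based attempt could not do: redirecting Python's stdout object does not affect file descriptors inherited by a subprocess.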
Q: Match the content of sentence/sequence in python Suppose we have 2 sequences of words sentence1 = 'Ram is eating' sentence2 = 'is Ram eating' sentence3 = 'is Ram playing' sentence4 = 'movie Ram watching is' How to get the match% of two such sequences? difflib's SequenceMatcher matches letter by letter. Is there any way to find the match % in these cases? match% between sentence1 and sentence2 = 3/3 i.e. 100% match% between sentence1 and sentence3 = 2/3 i.e. 66.66% match% between sentence1 and sentence4 = 2/3 i.e. 66.66% match% = (number of words matching in sentence1 and sentence2 irrespective of position/total number of words in sentence1)*100 A: How about converting the strings to lists and finding the matching percentage? sentence1 = 'Ram is eating' sentence2 = 'is Ram eating' sentence1 = sentence1.split() sentence2 = sentence2.split() longest = max(sentence1, sentence2, key=len) per = len(set(sentence1) & set(sentence2)) result = per/len(longest) print (f'{result *100}% matched') Gives # 100.0% matched Case 2 sentence1 = 'Ram is eating' sentence2 = 'is Ram' sentence1 = sentence1.split() sentence2 = sentence2.split() longest = max(sentence1, sentence2, key=len) per = len(set(sentence1) & set(sentence2)) result = per/len(longest) print (f'{result *100}% matched') Gives # 66.66666666666666% matched Case 3 sentence1 = 'Ram is eating' sentence2 = 'is Ram' sentence3 = 'is Ram playing' sentence1 = sentence1.split() sentence2 = sentence2.split() sentence3 = sentence3.split() longest = max(sentence1, sentence3, key=len) per = len(set(sentence1) & set(sentence3)) result = per/len(longest) print (f'{result *100}% matched')
Match the content of sentence/sequence in python
Suppose we have 2 sequences of words sentence1 = 'Ram is eating' sentence2 = 'is Ram eating' sentence3 = 'is Ram playing' sentence4 = 'movie Ram watching is' How to get the match% of two such sequences? difflib's SequenceMatcher matches letter by letter. Is there any way to find the match % in these cases? match% between sentence1 and sentence2 = 3/3 i.e. 100% match% between sentence1 and sentence3 = 2/3 i.e. 66.66% match% between sentence1 and sentence4 = 2/3 i.e. 66.66% match% = (number of words matching in sentence1 and sentence2 irrespective of position/total number of words in sentence1)*100
[ "How about converting string to list and find matching percentage.\nsentence1 = 'Ram is eating'\nsentence2 = 'is Ram eating'\n\nsentence1 = sentence1.split()\nsentence2 = sentence2.split()\n\nlongest = max(sentence1, sentence2, key=len)\n\nper = len(set(sentence1) & set(sentence2)) \nresult = per/len(longest)\nprint (f'{result *100}% matched')\n \n\nGives #\n100.0% matched\n\nCase 2\nsentence1 = 'Ram is eating' \nsentence2 = 'is Ram'\n\nsentence1 = sentence1.split()\nsentence2 = sentence2.split()\nlongest = max(sentence1, sentence2, key=len)\n\nper = len(set(sentence1) & set(sentence2))\nresult = per/len(longest)\nprint (f'{result *100}% matched')\n \n\nGives #\n66.66666666666666% matched\n\nCase 3\nsentence1 = 'Ram is eating'\n\nsentence2 = 'is Ram'\nsentence3 = 'is Ram playing'\n\nsentence1 = sentence1.split()\nsentence2 = sentence2.split()\nsentence3 = sentence3.split()\n\n\nlongest = max(sentence1, sentence3, key=len)\n\nper = len(set(sentence1) & set(sentence3)) \nresult = per/len(longest)\nprint (f'{result *100}% matched')\n\nGives #\n66.66666666666666% matched\n \n\n" ]
[ 1 ]
[]
[]
[ "difflib", "nlp", "python", "sentence", "sequence" ]
stackoverflow_0074563252_difflib_nlp_python_sentence_sequence.txt
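Note that the answer divides by the longer sentence, while the question's stated formula divides by the word count of sentence1; the two differ when the sentences have different lengths. A direct sketch of the formula as the question states it:

```python
def match_percent(sentence1, sentence2):
    """match% = shared words (position-independent) / word count of sentence1 * 100."""
    words1, words2 = sentence1.split(), sentence2.split()
    shared = set(words1) & set(words2)  # word overlap, ignoring position
    return len(shared) / len(words1) * 100

print(match_percent('Ram is eating', 'is Ram eating'))            # 100.0
print(round(match_percent('Ram is eating', 'is Ram playing'), 2))  # 66.67
print(round(match_percent('Ram is eating', 'movie Ram watching is'), 2))  # 66.67
```

These reproduce the three expected values from the question exactly. One caveat of the set-based approach (shared by the answer above): repeated words are counted only once.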
Q: Check if date is within three weeks from other date I recently made a birthday tracker app that basically reads name/birthdate info from a CSV file, puts all data in Person objects, and orders them by date in a list, then re-orders the list, so it starts at the current date. For example, right now the first few items in the list have birthdates in late November and December, and the rest follow still in order starting in January going all the way to November 23rd. Thing is, at some point in my program I have to display current birthdays (simple, just pop the first element of the list if its birthdate is equal to current date), and also upcoming birthdays. Here's the code for it: today_plus_3weeks = today + d.timedelta(days=21) # Take people who were born today and put them in a separate list while people[0].birthdate.month == today.month and people[0].birthdate.day == today.day: birthdays_today.append(people.pop(0)) # Take people who were born on dates between tomorrow and x later (default 3 weeks) # and put them in a separate list while initPeople.is_first_date_later(today_plus_3weeks, people[0].birthdate): upcoming_birthdays.append(people.pop(0)) Up until recently, the upcoming birthdays thing was working fine, but I ran into a problem today. My program checks the first few upcoming dates, and they're all within three weeks. However, the next date is in January, so, in theory, it should be rejected by the program and the loop should end... but it doesn't. The problem is obvious when you look at the code I use to compare dates: def is_first_date_later(date1, date2): """Returns True if the first date is later in the year than the second, False if not""" return date1.month > date2.month or (date1.month == date2.month and date1.day > date2.day) The condition is always True because the date it's using for comparisons is in mid-December, so the month is always greater, or when it's equal the day is greater. 
So I empty my whole list and end up with IndexError: list index out of range. I can't really use the years to compare, because the dates are several years apart and years are not relevant to use in this context. I have access to them though, if I need them for anything at some point. I'm really not sure what the best solution would be without massively complicating the code. If it's needed, the full code is available here. A: I would start by writing a function to find the year and date of someone's next birthday. There are two cases: Their birthday is this year. Their birthday is next year. def next_birthday(birthday_month, birthday_dayofmonth): today = datetime.date.today() bday_this_year = datetime.date(year=today.year, month=birthday_month, day=birthday_dayofmonth) bday_next_year = datetime.date(year=today.year + 1, month=birthday_month, day=birthday_dayofmonth) if bday_this_year >= today: return bday_this_year elif bday_next_year >= today: return bday_next_year else: raise Exception("This is impossible") You can then use that function to find the number of days until the next birthday. The datetime library lets you subtract dates and find the number of days between them. def days_until_next_birthday(birthday_month, birthday_dayofmonth): """Days until next birthday. If it is their birthday, return 0.""" today = datetime.date.today() next_bday = next_birthday(birthday_month, birthday_dayofmonth) return (next_bday - today).days
Check if date is within three weeks from other date
I recently made a birthday tracker app that basically reads name/birthdate info from a CSV file, puts all data in Person objects, and orders them by date in a list, then re-orders the list, so it starts at the current date. For example, right now the first few items in the list have birthdates in late November and December, and the rest follow still in order starting in January going all the way to November 23rd. Thing is, at some point in my program I have to display current birthdays (simple, just pop the first element of the list if its birthdate is equal to current date), and also upcoming birthdays. Here's the code for it: today_plus_3weeks = today + d.timedelta(days=21) # Take people who were born today and put them in a separate list while people[0].birthdate.month == today.month and people[0].birthdate.day == today.day: birthdays_today.append(people.pop(0)) # Take people who were born on dates between tomorrow and x later (default 3 weeks) # and put them in a separate list while initPeople.is_first_date_later(today_plus_3weeks, people[0].birthdate): upcoming_birthdays.append(people.pop(0)) Up until recently, the upcoming birthdays thing was working fine, but I ran into a problem today. My program checks the first few upcoming dates, and they're all within three weeks. However, the next date is in January, so, in theory, it should be rejected by the program and the loop should end... but it doesn't. The problem is obvious when you look at the code I use to compare dates: def is_first_date_later(date1, date2): """Returns True if the first date is later in the year than the second, False if not""" return date1.month > date2.month or (date1.month == date2.month and date1.day > date2.day) The condition is always True because the date it's using for comparisons is in mid-December, so the month is always greater, or when it's equal the day is greater. So I empty my whole list and end up with IndexError: list index out of range. 
I can't really use the years to compare, because the dates are several years apart and years are not relevant to use in this context. I have access to them though, if I need them for anything at some point. I'm really not sure what the best solution would be without massively complicating the code. If it's needed, the full code is available here.
[ "I would start by writing a function to find the year and date of someone's next birthday. There are two cases:\n\nTheir birthday is this year.\nTheir birthday is next year.\n\ndef next_birthday(birthday_month, birthday_dayofmonth):\n today = datetime.date.today()\n bday_this_year = datetime.date(year=today.year, month=birthday_month, day=birthday_dayofmonth)\n bday_next_year = datetime.date(year=today.year + 1, month=birthday_month, day=birthday_dayofmonth)\n if bday_this_year >= today:\n return bday_this_year\n elif bday_next_year >= today:\n return bday_next_year\n else:\n raise Exception(\"This is impossible\")\n\nYou can then use that function to find the number of days until the next birthday. The datetime library lets you subtract dates and find the number of days between them.\ndef days_until_next_birthday(birthday_month, birthday_dayofmonth):\n \"\"\"Days until next birthday. If it is their birthday, return 0.\"\"\"\n today = datetime.date.today()\n next_bday = next_birthday(birthday_month, birthday_dayofmonth)\n return (next_bday - today).days\n\n" ]
[ 1 ]
[]
[]
[ "date", "python" ]
stackoverflow_0074562991_date_python.txt
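The answer's two-case helper can be condensed into one function with date.replace. This is a hedged variant, not the answer verbatim: `today` is injectable so it can be tested against fixed dates, and it assumes no February 29 birthdays (date.replace would raise ValueError for them in non-leap years).

```python
import datetime

def days_until_birthday(month, day, today=None):
    """Days until the next occurrence of (month, day); 0 if that is today.

    Assumption: (month, day) is not Feb 29.
    """
    today = today or datetime.date.today()
    nxt = today.replace(month=month, day=day)
    if nxt < today:                        # already passed this year
        nxt = nxt.replace(year=today.year + 1)
    return (nxt - today).days

today = datetime.date(2022, 11, 24)
print(days_until_birthday(12, 25, today))  # 31
print(days_until_birthday(11, 24, today))  # 0
print(days_until_birthday(1, 5, today))    # 42, correctly crossing the year boundary
```

A person is then "upcoming" when 0 < days_until_birthday(...) <= 21, which replaces the month/day comparison that broke at the December-to-January boundary.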
Q: Problems with global variable Django I am writing a quiz web site, and I need to save answers from users. Some of them have the same username. This is my start function global new_user_answer user_group = request.user.groups.values_list() university = user_group[0][1] num = Answers.objects.all().count() new_user_answer = num + 1 new_line = Answers(id=new_user_answer, id_user=user) new_line.save() return redirect(f'/1') Here I create a new line in my DB. The second function saves user answers. data = Answers.objects.get(id=new_user_answer) setattr(data, question, answers) data.save() if int(id_questions) < 47: return redirect(f'/{int(id_questions) +1 }') else: return render(request, 'index.html') Sometimes I get a 500 error: name new_user_answer is not defined How can I solve this problem? A: It probably makes no sense to use a global variable that way. You could define a session variable instead (a cookie). Edit the MIDDLEWARE setting and make sure it contains django.contrib.sessions.middleware.SessionMiddleware. #start function user_group = request.user.groups.values_list() university = user_group[0][1] num = Answers.objects.all().count() request.session['new_user_answer'] = num + 1 new_line = Answers(id=request.session['new_user_answer'], id_user=user) new_line.save() return redirect(f'/1') #Second function if 'new_user_answer' in request.session: data = Answers.objects.get(id=request.session['new_user_answer']) setattr(data, question, answers) data.save() if int(id_questions) < 47: return redirect(f'/{int(id_questions) +1 }') else: return render(request, 'index.html') More info: https://docs.djangoproject.com/en/dev/topics/http/sessions/#session-object-guidelines
Problems with global variable Django
I am writing a quiz web site, and I need to save answers from users. Some of them have the same username. This is my start function global new_user_answer user_group = request.user.groups.values_list() university = user_group[0][1] num = Answers.objects.all().count() new_user_answer = num + 1 new_line = Answers(id=new_user_answer, id_user=user) new_line.save() return redirect(f'/1') Here I create a new line in my DB. The second function saves user answers. data = Answers.objects.get(id=new_user_answer) setattr(data, question, answers) data.save() if int(id_questions) < 47: return redirect(f'/{int(id_questions) +1 }') else: return render(request, 'index.html') Sometimes I get a 500 error: name new_user_answer is not defined How can I solve this problem?
[ "Probably it has no sense to use a global variable on that way. You could define a session variable instead (a cookie).\nEdit the MIDDLEWARE setting and make sure it contains django.contrib.sessions.middleware.SessionMiddleware.\n#start function\nuser_group = request.user.groups.values_list()\nuniversity = user_group[0][1]\nnum = Answers.objects.all().count()\nrequest.session['new_user_answer'] = num + 1\nnew_line = Answers(id=request.session['new_user_answer'], id_user=user)\nnew_line.save()\nreturn redirect(f'/1')\n\n#Second function\nif 'new_user_answer' in request.session:\n data = Answers.objects.get(id=request.session['new_user_answer'])\n setattr(data, question, answers)\n data.save()\nif int(id_questions) < 47:\n return redirect(f'/{int(id_questions) +1 }')\nelse:\n return render(request, 'index.html')\n\nMore info: https://docs.djangoproject.com/en/dev/topics/http/sessions/#session-object-guidelines\n" ]
[ 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074562839_django_python.txt
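A framework-free sketch of why the session fix works: per-user state is keyed by the session, so two concurrent users no longer clobber one shared global. The dict stands in for Django's request.session and the list for the Answers table; all names here are hypothetical illustrations, not Django APIs.

```python
answers_db = []                        # stands in for the Answers table

def start_quiz(session):
    new_id = len(answers_db) + 1       # same counting scheme as the original view
    answers_db.append({"id": new_id, "answers": {}})
    session["new_user_answer"] = new_id  # stored per user, not in a global

def save_answer(session, question, answer):
    row_id = session["new_user_answer"]  # always defined for this user's session
    answers_db[row_id - 1]["answers"][question] = answer

alice, bob = {}, {}                    # two independent sessions
start_quiz(alice)
start_quiz(bob)
save_answer(alice, "q1", "A")
save_answer(bob, "q1", "B")
print(answers_db[0]["answers"], answers_db[1]["answers"])  # {'q1': 'A'} {'q1': 'B'}
```

With the global, Bob's start_quiz would overwrite Alice's id before she saved, which is exactly the cross-user interference the asker saw; the NameError 500 appears when a worker process handles the second view without ever having run the first.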
Q: Scraping Google images with Python3 (requests + BeautifulSoup) I would like to download bulk images, using Google image search. My first method, downloading the page source to a file and then opening it with open(), works fine, but I would like to be able to fetch image urls by just running the script and changing keywords. First method: Go to the image search (https://www.google.no/search?q=tower&client=opera&hs=UNl&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiM5fnf4_zKAhWIJJoKHYUdBg4Q_AUIBygB&biw=1920&bih=982). View the page source in the browser and save it to an html file. When I then open() that html file with the script, the script works as expected and I get a neat list of all the urls of the images on the search page. This is what line 6 of the script does (uncomment to test). If, however, I use the requests.get() function to parse the webpage, as shown in line 7 of the script, it fetches a different html document that does not contain the full urls of the images, so I cannot extract them. Please help me extract the correct urls of the images. Edit: link to the tower.html, I am using: https://www.dropbox.com/s/yy39w1oc8sjkp3u/tower.html?dl=0 This is the code I have written so far: import requests from bs4 import BeautifulSoup # define the url to be scraped url = 'https://www.google.no/search?q=tower&client=opera&hs=cTQ&source=lnms&tbm=isch&sa=X&ved=0ahUKEwig3LOx4PzKAhWGFywKHZyZAAgQ_AUIBygB&biw=1920&bih=982' # top line is using the attached "tower.html" as source, bottom line is using the url. The html file contains the source of the above url. #page = open('tower.html', 'r').read() page = requests.get(url).text # parse the text as html soup = BeautifulSoup(page, 'html.parser') # iterate on all "a" elements. for raw_link in soup.find_all('a'): link = raw_link.get('href') # if the link is a string and contains "imgurl" (there are other links on the page, that are not interesting... 
if type(link) == str and 'imgurl' in link: # print the part of the link that is between "=" and "&" (which is the actual url of the image) print(link.split('=')[1].split('&')[0]) A: Just so you're aware: # http://www.google.com/robots.txt User-agent: * Disallow: /search I would like to preface my answer by saying that Google heavily relies on scripting. It's very possible that you're getting different results because the page you're requesting via requests doesn't do anything with the scripts supplied on the page, whereas loading the page in a web browser does. Here's what I get when I request the url you supplied. The text I get back from requests.get(url).text doesn't contain 'imgurl' in it anywhere. Your script is looking for that as part of its criteria and it's not there. I do however see a bunch of <img> tags with the src attribute set to an image url. If that's what you're after, then try this script: import requests from bs4 import BeautifulSoup url = 'https://www.google.no/search?q=tower&client=opera&hs=cTQ&source=lnms&tbm=isch&sa=X&ved=0ahUKEwig3LOx4PzKAhWGFywKHZyZAAgQ_AUIBygB&biw=1920&bih=982' # page = open('tower.html', 'r').read() page = requests.get(url).text soup = BeautifulSoup(page, 'html.parser') for raw_img in soup.find_all('img'): link = raw_img.get('src') if link: print(link) Which returns the following results: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQyxRHrFw0NM-ZcygiHoVhY6B6dWwhwT4va727380n_IekkU9sC1XSddAg https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRfuhcCcOnC8DmOfweuWMKj3cTKXHS74XFh9GYAPhpD0OhGiCB7Z-gidkVk https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSOBZ9iFTXR8sGYkjWwPG41EO5Wlcv2rix0S9Ue1HFcts4VcWMrHkD5y10 https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTEAZM3UoqqDCgcn48n8RlhBotSqvDLcE1z11y9n0yFYw4MrUFucPTbQ0Ma https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSJvthsICJuYCKfS1PaKGkhfjETL22gfaPxqUm0C2-LIH9HP58tNap7bwc 
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQGNtqD1NOwCaEWXZgcY1pPxQsdB8Z2uLGmiIcLLou6F_1c55zylpMWvSo https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSdRxvQjm4KWaxhAnJx2GNwTybrtUYCcb_sPoQLyAde2KMBUhR-65cm55I https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQLVqQ7HLzD7C-mZYQyrwBIUjBRl8okRDcDoeQE-AZ2FR0zCPUfZwQ8Q20 https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQHNByVCZzjSuMXMd-OV7RZI0Pj7fk93jVKSVs7YYgc_MsQqKu2v0EP1M0 https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcS_RUkfpGZ1xJ2_7DCGPommRiIZOcXRi-63KIE70BHOb6uRk232TZJdGzc https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSxv4ckWM6eg_BtQlSkFP9hjRB6yPNn1pRyThz3D8MMaLVoPbryrqiMBvlZ https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQWv_dHMr5ZQzOj8Ort1gItvLgVKLvgm9qaSOi4Uomy13-gWZNcfk8UNO8 https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcRRwzRc9BJpBQyqLNwR6HZ_oPfU1xKDh63mdfZZKV2lo1JWcztBluOrkt_o https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcQdGCT2h_O16OptH7OofZHNvtUhDdGxOHz2n8mRp78Xk-Oy3rndZ88r7ZA https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcRnmn9diX3Q08e_wpwOwn0N7L1QpnBep1DbUFXq0PbnkYXfO0wBy6fkpZY https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSaP9Ok5n6dL5K1yKXw0TtPd14taoQ0r3HDEwU5F9mOEGdvcIB0ajyqXGE https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTcyaCvbXLYRtFspKBe18Yy5WZ_1tzzeYD8Obb-r4x9Yi6YZw83SfdOF5fm https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTnS1qCjeYrbUtDSUNcRhkdO3fc3LTtN8KaQm-rFnbj_JagQEPJRGM-DnY0 https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcSiX_elwJQXGlToaEhFD5j2dBkP70PYDmA5stig29DC5maNhbfG76aDOyGh https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQb3ughdUcPUgWAF6SkPFnyiJhe9Eb-NLbEZl_r7Pvt4B3mZN1SVGv0J-s A: You can find the attributes by using 'data-src' or 'src' attribute. 
import os
import logging
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup

_logger = logging.getLogger(__name__)

REQUEST_HEADER = {
    'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.134 Safari/537.36"}

def get_images_new(self, prod_id, name, header, **kw):
    i = 1
    man_code = "apple"  # anything you want to search for
    url = "https://www.google.com.au/search?q=%s&source=lnms&tbm=isch" % man_code
    _logger.info("Search url: %s" % url)
    response = urlopen(Request(url, headers=REQUEST_HEADER))
    html = response.read().decode('utf-8')
    soup = BeautifulSoup(html, "html.parser")
    image_elements = soup.find_all("img", {"class": "rg_i Q4LuWd"})
    for img in image_elements:
        # small thumbnails sit in 'src'; the larger previews in 'data-src'
        temp = img.get('data-src')
        if temp and i < 7:
            image = temp
            filename = str(i)
            path = "/your/directory/" + str(prod_id)  # your target directory
            if not os.path.exists(path):
                os.mkdir(path)
            imagefile = open(path + "/" + filename + ".png", 'wb+')
            req = Request(image, headers=REQUEST_HEADER)
            resp = urlopen(req)
            imagefile.write(resp.read())
            imagefile.close()
            i += 1

A: You can extract Google Images using regular expressions, because the data you need renders dynamically but can be found in the inline JSON. It's a faster method than using browser automation. To do that, we can search for the first image title in the page source (Ctrl+U) to find the matches we need; if there are any in the <script> elements, then it is most likely an inline JSON. From there we can extract the data. To find the original images, we first need to find the thumbnails.
After that, we need to strip out the already-matched part of the parsed inline JSON, which gives an easier way to parse the original-resolution images:

# https://regex101.com/r/SxwJsW/1
matched_google_images_thumbnails = ", ".join(
    re.findall(r'\[\"(https\:\/\/encrypted-tbn0\.gstatic\.com\/images\?.*?)\",\d+,\d+\]',
               str(matched_google_image_data))).split(", ")

thumbnails = [bytes(bytes(thumbnail, "ascii").decode("unicode-escape"), "ascii").decode("unicode-escape") for thumbnail in matched_google_images_thumbnails]

# removing previously matched thumbnails for easier full resolution image matches.
removed_matched_google_images_thumbnails = re.sub(
    r'\[\"(https\:\/\/encrypted-tbn0\.gstatic\.com\/images\?.*?)\",\d+,\d+\]', "", str(matched_google_image_data))

# https://regex101.com/r/fXjfb1/4
# https://stackoverflow.com/a/19821774/15164646
matched_google_full_resolution_images = re.findall(r"(?:'|,),\[\"(https:|http.*?)\",\d+,\d+\]", removed_matched_google_images_thumbnails)

Unfortunately, this method does not make it possible to find absolutely all the pictures, since they are added to the page while scrolling. In case you need to collect absolutely all the pictures, you need to use browser automation, such as selenium or playwright, if you don't want to reverse engineer it. There's an "ijn" URL parameter that defines the page number to get (greater than or equal to 0). It is used in combination with a pagination token that is also located in the inline JSON. Check code in online IDE.
import requests, re, json, lxml
from bs4 import BeautifulSoup

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
}

google_images = []

params = {
    "q": "tower",   # search query
    "tbm": "isch",  # image results
    "hl": "en",     # language of the search
    "gl": "us"      # country where the search comes from
}

html = requests.get("https://google.com/search", params=params, headers=headers, timeout=30)
soup = BeautifulSoup(html.text, "lxml")

all_script_tags = soup.select("script")

# https://regex101.com/r/RPIbXK/1
matched_images_data = "".join(re.findall(r"AF_initDataCallback\(([^<]+)\);", str(all_script_tags)))

matched_images_data_fix = json.dumps(matched_images_data)
matched_images_data_json = json.loads(matched_images_data_fix)

# https://regex101.com/r/NRKEmV/1
matched_google_image_data = re.findall(r'\"b-GRID_STATE0\"(.*)sideChannel:\s?{}}', matched_images_data_json)

# https://regex101.com/r/SxwJsW/1
matched_google_images_thumbnails = ", ".join(
    re.findall(r'\[\"(https\:\/\/encrypted-tbn0\.gstatic\.com\/images\?.*?)\",\d+,\d+\]',
               str(matched_google_image_data))).split(", ")

thumbnails = [bytes(bytes(thumbnail, "ascii").decode("unicode-escape"), "ascii").decode("unicode-escape") for thumbnail in matched_google_images_thumbnails]

# removing previously matched thumbnails for easier full resolution image matches.
removed_matched_google_images_thumbnails = re.sub(
    r'\[\"(https\:\/\/encrypted-tbn0\.gstatic\.com\/images\?.*?)\",\d+,\d+\]', "", str(matched_google_image_data))

# https://regex101.com/r/fXjfb1/4
# https://stackoverflow.com/a/19821774/15164646
matched_google_full_resolution_images = re.findall(r"(?:'|,),\[\"(https:|http.*?)\",\d+,\d+\]", removed_matched_google_images_thumbnails)

full_res_images = [
    bytes(bytes(img, "ascii").decode("unicode-escape"), "ascii").decode("unicode-escape") for img in matched_google_full_resolution_images
]

for index, (metadata, thumbnail, original) in enumerate(zip(soup.select('.isv-r.PNCib.MSM1fd.BUooTd'), thumbnails, full_res_images), start=1):
    google_images.append({
        "title": metadata.select_one(".VFACy.kGQAp.sMi44c.lNHeqe.WGvvNb")["title"],
        "link": metadata.select_one(".VFACy.kGQAp.sMi44c.lNHeqe.WGvvNb")["href"],
        "source": metadata.select_one(".fxgdke").text,
        "thumbnail": thumbnail,
        "original": original
    })

print(json.dumps(google_images, indent=2, ensure_ascii=False))

Example output:

[
  {
    "title": "Eiffel Tower - Wikipedia",
    "link": "https://en.wikipedia.org/wiki/Eiffel_Tower",
    "source": "Wikipedia",
    "thumbnail": "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTsuYzf9os1Qb1ssPO6fWn-5Jm6ASDXAxUFYG6eJfvmehywH-tJEXDW0t7XLR3-i8cNd-0&usqp=CAU",
    "original": "https://upload.wikimedia.org/wikipedia/commons/thumb/8/85/Tour_Eiffel_Wikimedia_Commons_%28cropped%29.jpg/640px-Tour_Eiffel_Wikimedia_Commons_%28cropped%29.jpg"
  },
  {
    "title": "tower | architecture | Britannica",
    "link": "https://www.britannica.com/technology/tower",
    "source": "Encyclopedia Britannica",
    "thumbnail": "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcR8EsWofNiFTe6alwRlwXVR64RdWTG2fuBQ0z1FX4tg3HbL7Mxxvz6GnG1rGZQA8glVNA4&usqp=CAU",
    "original": "https://cdn.britannica.com/51/94351-050-86B70FE1/Leaning-Tower-of-Pisa-Italy.jpg"
  },
  {
    "title": "Tower - Wikipedia",
    "link": "https://en.wikipedia.org/wiki/Tower",
    "source": "Wikipedia",
    "thumbnail": "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcT3L9LA0VamqmevhCtkrHZvM9MlBf9EjtTT7KhyzRP3zi3BmuCOmn0QFQG42xFfWljcsho&usqp=CAU",
    "original": "https://upload.wikimedia.org/wikipedia/commons/3/3e/Tokyo_Sky_Tree_2012.JPG"
  },
  # ...
]

You can also use the Google Images API from SerpApi. It's a paid API with a free plan. The difference is that it will bypass blocks (including CAPTCHA) from Google, with no need to create the parser and maintain it. Simple code example:

from serpapi import GoogleSearch
import os, json

image_results = []

# search query parameters
params = {
    "engine": "google",              # search engine. Google, Bing, Yahoo, Naver, Baidu...
    "q": "tower",                    # search query
    "tbm": "isch",                   # image results
    "num": "100",                    # number of images per page
    "ijn": 0,                        # page number: 0 -> first page, 1 -> second...
    "api_key": os.getenv("API_KEY")  # your serpapi api key
    # other query parameters: hl (lang), gl (country), etc
}

search = GoogleSearch(params)  # where data extraction happens

images_is_present = True
while images_is_present:
    results = search.get_dict()  # JSON -> Python dictionary

    # checks for "Google hasn't returned any results for this query."
    if "error" not in results:
        for image in results["images_results"]:
            if image["original"] not in image_results:
                image_results.append(image["original"])

        # update to the next page
        params["ijn"] += 1
    else:
        images_is_present = False
        print(results["error"])

print(json.dumps(image_results, indent=2))

Output:

[
  "https://cdn.rt.emap.com/wp-content/uploads/sites/4/2022/08/10084135/shutterstock-woods-bagot-rough-site-for-leadenhall-tower.jpg",
  "https://dynamic-media-cdn.tripadvisor.com/media/photo-o/1c/60/ff/c5/ambuluwawa-tower-is-the.jpg?w=1200&h=-1&s=1",
  "https://cdn11.bigcommerce.com/s-bf3bb/product_images/uploaded_images/find-your-nearest-cell-tower-in-five-minutes-or-less.jpeg",
  "https://s3.amazonaws.com/reuniontower/Reunion-Tower-Exterior-Skyline.jpg",
  "https://assets2.rockpapershotgun.com/minecraft-avengers-tower.jpg/BROK/resize/1920x1920%3E/format/jpg/quality/80/minecraft-avengers-tower.jpg",
  "https://images.adsttc.com/media/images/52ab/5834/e8e4/4e0f/3700/002e/large_jpg/PERTAMINA_1_Tower_from_Roundabout.jpg?1386960835",
  "https://awoiaf.westeros.org/images/7/78/The_tower_of_joy_by_henning.jpg",
  "https://eu-assets.simpleview-europe.com/plymouth2016/imageresizer/?image=%2Fdmsimgs%2Fsmeatontower3_606363908.PNG&action=ProductDetailNew",
  # ...
]

There's a Scrape and download Google Images with Python blog post if you need a little bit more code explanation.
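A note on the nested bytes(...).decode("unicode-escape") calls used in the answer above: the inline JSON contains image urls with doubly escaped unicode sequences, so two decoding passes are needed. A minimal, self-contained sketch (the url below is a made-up example, not real Google output):

```python
# A doubly escaped url as it might appear in scraped page source:
# the characters \\u003d stand for \u003d, which in turn stands for "=".
raw = r"https://example.com/images?q\\u003dtbn"

# First pass collapses "\\" into "\", leaving a single \u003d escape.
once = bytes(raw, "ascii").decode("unicode-escape")

# Second pass resolves \u003d into the actual "=" character.
twice = bytes(once, "ascii").decode("unicode-escape")

print(once)   # https://example.com/images?q\u003dtbn
print(twice)  # https://example.com/images?q=tbn
```

This is why the snippets wrap the decode twice; a single pass would leave \u003d-style escapes inside the urls.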
Q: PySpark - Create a Temp Tables for each unique item in loop I hope you will be able to help me. I have one big table with information about resolved tasks by user. I need to create a random sample where the size of the sample is equal to 10% of the total items per user. I already created a temporary table with information about the size of the sample (Table 1): https://i.stack.imgur.com/7dM97.jpg And now I would like to: Create a loop (based on Table 1) with a temp table (created from the general table) for each user with the appropriate number of tasks. Merge all temp tables into one master table with the sample results. Drop the temp tables (optional). General overview Is something like this possible to perform in PySpark? A: I already found a solution to create a dynamic table, but I still had a problem with the size of the sample:

from pyspark.sql.types import IntegerType

# df5 - column with Size of Sample
df5 = df5.withColumn("Size", df5["Size"].cast(IntegerType()))

dataCollect = df5.collect()
df5.show()
for row in dataCollect:
    print(row['User'])
    print(row['Size'])
    # df2 - INPUT with all Records
    df6 = df2.filter(df2.User == row['User'])
    # the column is named "Size", so it must be looked up as row['Size']
    # (row['Sizes'] would raise an error); note that limit() is not random -
    # use df6.orderBy(F.rand()).limit(...) if the sample must be random
    df6 = df6.limit(row['Size'])
    # df_Final must be initialised before the loop,
    # e.g. as an empty DataFrame with df2's schema
    df_Final = df_Final.union(df6).distinct()

And later I create a table with all selected samples.
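A side note on the answer above: if the goal is just a 10% random sample per user, the per-user loop can often be avoided. PySpark's DataFrame.sampleBy draws an approximate per-key Bernoulli sample, e.g. df2.sampleBy("User", fractions={u: 0.1 for u in users}, seed=42). The same idea is sketched below in plain pandas so it runs without a Spark session; the column names are illustrative:

```python
import pandas as pd

# Toy stand-in for the big table of resolved tasks per user.
df = pd.DataFrame({
    "User": ["A"] * 10 + ["B"] * 20,
    "Task": range(30),
})

# Draw an exact 10% sample within each user's group.
sample = df.groupby("User", group_keys=False).sample(frac=0.1, random_state=42)

print(sample["User"].value_counts().to_dict())  # {'B': 2, 'A': 1}
```

Unlike the limit()-based loop, this draws a random subset. Note that sampleBy in Spark is approximate (each row is kept independently with probability 0.1), while pandas' groupby(...).sample is exact per group.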
Q: how to coalesce every element of join pyspark I have an array of join args (columns): attrs = ['surname', 'name', 'patronymic', 'birth_date', 'doc_type', 'doc_series', 'doc_number'] I'm trying to join two tables just like this, but I need to coalesce each column for the join to behave normally (because it won't join correctly if there are nulls): new_df = pre_df.join(res_df, attrs, how='leftanti') I've tried listing every condition, but is there a possibility to do this another way? A: So I've figured this out:

join_attrs = [F.coalesce(pre_df[elem], F.lit('')) == F.coalesce(res_df[elem], F.lit('')) for elem in attrs]

Also this works, but I'm not sure which is faster:

join_attrs = [pre_df[elem].eqNullSafe(res_df[elem]) for elem in attrs]

A: If you are trying to combine two datasets with the same columns, you don't need a join but a union. Try df = df.unionByName(df2)
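One caveat about the coalesce trick in the first answer: replacing nulls with '' makes a genuinely empty string and a null indistinguishable as join keys, whereas eqNullSafe keeps them distinct (it has the semantics of SQL's IS NOT DISTINCT FROM). The two behaviours, sketched in plain Python so the difference is visible without a Spark session:

```python
def eq_coalesce(a, b, sentinel=""):
    # coalesce-style comparison: map None to a sentinel value, then compare
    a = sentinel if a is None else a
    b = sentinel if b is None else b
    return a == b

def eq_null_safe(a, b):
    # eqNullSafe-style comparison: two nulls compare equal,
    # a null never equals a non-null value
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b

print(eq_coalesce(None, None))   # True
print(eq_null_safe(None, None))  # True
print(eq_coalesce(None, ""))     # True  (collision: null matches the empty string)
print(eq_null_safe(None, ""))    # False (kept distinct)
```

If empty strings can never occur in the key columns, both variants behave the same; otherwise eqNullSafe is the safer choice.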
Q: Multidimensional array restructuring like in pandas.stack Consider the following code to create a dummy dataset:

import numpy as np
from scipy.stats import norm
import pandas as pd

np.random.seed(10)
n = 3
space = norm(20, 5).rvs(n)
time = norm(10, 2).rvs(n)
values = np.kron(space, time).reshape(n, n) + norm(1, 1).rvs([n, n])

### Output
array([[267.39784458, 300.81493866, 229.19163206],
       [236.1940266 , 266.49469945, 204.01294305],
       [122.55912977, 140.00957047, 106.28339745]])

I can put these data in a pandas dataframe using:

space_names = ['A', 'B', 'C']
time_names = [2000, 2001, 2002]
df = pd.DataFrame(values, index=space_names, columns=time_names)
df

### Output
         2000        2001        2002
A  267.397845  300.814939  229.191632
B  236.194027  266.494699  204.012943
C  122.559130  140.009570  106.283397

This is considered a wide dataset, where each observation lies in a table with 2 variables that act as coordinates to identify it. To make it a long-tidy dataset we can use the .stack method of the pandas dataframe:

df.columns.name = 'time'
df.index.name = 'space'
df.stack().rename('value').reset_index()

### Output
  space  time       value
0     A  2000  267.397845
1     A  2001  300.814939
2     A  2002  229.191632
3     B  2000  236.194027
4     B  2001  266.494699
5     B  2002  204.012943
6     C  2000  122.559130
7     C  2001  140.009570
8     C  2002  106.283397

My question is: how do I do exactly this thing but for a 3-dimensional dataset? Let's imagine I have 2 observations for each space-time couple:

s = 3
t = 4
r = 2
space_mus = norm(20, 5).rvs(s)
time_mus = norm(10, 2).rvs(t)
values = np.kron(space_mus, time_mus)
values = values.repeat(r).reshape(s, t, r) + norm(0, 1).rvs([s, t, r])
values

### Output
array([[[286.50322099, 288.51266345],
        [176.64303485, 175.38175877],
        [136.01675917, 134.44328617]],

       [[187.07608546, 185.4068411 ],
        [112.86398438, 111.983463  ],
        [ 85.99035255,  86.67236986]],

       [[267.66833894, 269.45295404],
        [162.30044715, 162.50564386],
        [124.6374401 , 126.2315447 ]]])

How can I obtain the same structure for the dataframe as above?
Ugly solution Personally i don't like this solution, and i think one might do it in a more elegant and pythonic way, but still might be useful for someone else so I will post my solution. labels = ['{}{}{}'.format(i,j,k) for i in range(s) for j in range(t) for k in range(r)] #space, time, repetition def flatten3d(k): return [i for l in k for s in l for i in s] value_series = pd.Series(flatten3d(values)).rename('y') split_labels= [[i for i in l] for l in labels] df = pd.DataFrame(split_labels, columns=['s','t','r']) pd.concat([df, value_series], axis=1) ### Output s t r y 0 0 0 0 266.2408815208753 1 0 0 1 266.13662442609433 2 0 1 0 299.53178992512954 3 0 1 1 300.13941632567605 4 0 2 0 229.39037800681405 5 0 2 1 227.22227496248507 6 0 3 0 281.76357915411995 7 0 3 1 280.9639352062619 8 1 0 0 235.8137644198259 9 1 0 1 234.23202459516452 10 1 1 0 265.19681013560034 11 1 1 1 266.5462102589883 12 1 2 0 200.730100791878 13 1 2 1 199.83217739700535 14 1 3 0 246.54018839875374 15 1 3 1 248.5496308586532 16 2 0 0 124.90916276929234 17 2 0 1 123.64788669199066 18 2 1 0 139.65391860786775 19 2 1 1 138.08044561039517 20 2 2 0 106.45276370157518 21 2 2 1 104.78351933651582 22 2 3 0 129.86043618610572 23 2 3 1 128.97991481257253 A: This does not use stack, but maybe it is acceptable for your problem: import numpy as np import pandas as pd values = np.arange(18).reshape(3, 3, 2) # Your values here index = pd.MultiIndex.from_product([space_names, space_names, time_names], names=["space1", "space2", "time"]) df = pd.DataFrame({"value": values.ravel()}, index=index).reset_index() # df: # space1 space2 time value # 0 A A 2000 0 # 1 A A 2001 1 # 2 A B 2000 2 # 3 A B 2001 3 # 4 A C 2000 4 # 5 A C 2001 5 # 6 B A 2000 6 # 7 B A 2001 7 # 8 B B 2000 8 # 9 B B 2001 9 # 10 B C 2000 10 # 11 B C 2001 11 # 12 C A 2000 12 # 13 C A 2001 13 # 14 C B 2000 14 # 15 C B 2001 15 # 16 C C 2000 16 # 17 C C 2001 17
Multidimensional array restructuring like in pandas.stack
Consider the following code to create a dummy dataset import numpy as np from scipy.stats import norm import pandas as pd np.random.seed(10) n=3 space= norm(20, 5).rvs(n) time= norm(10,2).rvs(n) values = np.kron(space, time).reshape(n,n) + norm(1,1).rvs([n,n]) ### Output array([[267.39784458, 300.81493866, 229.19163206], [236.1940266 , 266.49469945, 204.01294305], [122.55912977, 140.00957047, 106.28339745]]) I can put these data in a pandas dataframe using space_names = ['A','B','C'] time_names = [2000,2001,2002] df = pd.DataFrame(values, index=space_names, columns=time_names) df ### Output 2000 2001 2002 A 267.397845 300.814939 229.191632 B 236.194027 266.494699 204.012943 C 122.559130 140.009570 106.283397 This is considered a wide dataset, where each observation lies in a table with 2 variable that acts as coordinates to identify it. To make it a long-tidy dataset we can suse the .stack method of pandas dataframe df.columns.name = 'time' df.index.name = 'space' df.stack().rename('value').reset_index() ### Output space time value 0 A 2000 267.397845 1 A 2001 300.814939 2 A 2002 229.191632 3 B 2000 236.194027 4 B 2001 266.494699 5 B 2002 204.012943 6 C 2000 122.559130 7 C 2001 140.009570 8 C 2002 106.283397 My question is: how do I do exactly this thing but for a 3-dimensional dataset? Let's imagine I have 2 observation for each space-time couple s = 3 t = 4 r = 2 space_mus = norm(20, 5).rvs(s) time_mus = norm(10,2).rvs(t) values = np.kron(space_mus, time_mus) values = values.repeat(r).reshape(s,t,r) + norm(0,1).rvs([s,t,r]) values ### Output array([[[286.50322099, 288.51266345], [176.64303485, 175.38175877], [136.01675917, 134.44328617]], [[187.07608546, 185.4068411 ], [112.86398438, 111.983463 ], [ 85.99035255, 86.67236986]], [[267.66833894, 269.45295404], [162.30044715, 162.50564386], [124.6374401 , 126.2315447 ]]]) How can I obtain the same structure for the dataframe as above? 
Ugly solution Personally i don't like this solution, and i think one might do it in a more elegant and pythonic way, but still might be useful for someone else so I will post my solution. labels = ['{}{}{}'.format(i,j,k) for i in range(s) for j in range(t) for k in range(r)] #space, time, repetition def flatten3d(k): return [i for l in k for s in l for i in s] value_series = pd.Series(flatten3d(values)).rename('y') split_labels= [[i for i in l] for l in labels] df = pd.DataFrame(split_labels, columns=['s','t','r']) pd.concat([df, value_series], axis=1) ### Output s t r y 0 0 0 0 266.2408815208753 1 0 0 1 266.13662442609433 2 0 1 0 299.53178992512954 3 0 1 1 300.13941632567605 4 0 2 0 229.39037800681405 5 0 2 1 227.22227496248507 6 0 3 0 281.76357915411995 7 0 3 1 280.9639352062619 8 1 0 0 235.8137644198259 9 1 0 1 234.23202459516452 10 1 1 0 265.19681013560034 11 1 1 1 266.5462102589883 12 1 2 0 200.730100791878 13 1 2 1 199.83217739700535 14 1 3 0 246.54018839875374 15 1 3 1 248.5496308586532 16 2 0 0 124.90916276929234 17 2 0 1 123.64788669199066 18 2 1 0 139.65391860786775 19 2 1 1 138.08044561039517 20 2 2 0 106.45276370157518 21 2 2 1 104.78351933651582 22 2 3 0 129.86043618610572 23 2 3 1 128.97991481257253
[ "This does not use stack, but maybe it is acceptable for your problem:\nimport numpy as np\nimport pandas as pd\n\nvalues = np.arange(18).reshape(3, 3, 2) # Your values here\nindex = pd.MultiIndex.from_product([space_names, space_names, time_names], names=[\"space1\", \"space2\", \"time\"])\n\ndf = pd.DataFrame({\"value\": values.ravel()}, index=index).reset_index()\n\n# df:\n# space1 space2 time value\n# 0 A A 2000 0\n# 1 A A 2001 1\n# 2 A B 2000 2\n# 3 A B 2001 3\n# 4 A C 2000 4\n# 5 A C 2001 5\n# 6 B A 2000 6\n# 7 B A 2001 7\n# 8 B B 2000 8\n# 9 B B 2001 9\n# 10 B C 2000 10\n# 11 B C 2001 11\n# 12 C A 2000 12\n# 13 C A 2001 13\n# 14 C B 2000 14\n# 15 C B 2001 15\n# 16 C C 2000 16\n# 17 C C 2001 17\n\n" ]
[ 2 ]
[]
[]
[ "data_wrangling", "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074562840_data_wrangling_dataframe_numpy_pandas_python.txt
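The accepted answer builds the long-format frame from a MultiIndex; the sketch below adapts that same idea to the question's actual (space, time, repetition) axes with s=3, t=4, r=2. The axis labels here are invented for illustration, and the key point is that `MultiIndex.from_product` enumerates tuples in the same C order that `.ravel()` flattens the array (last axis varies fastest).

```python
import numpy as np
import pandas as pd

# A 3-D array indexed by (space, time, repetition), as in the question.
s, t, r = 3, 4, 2
values = np.arange(s * t * r).reshape(s, t, r)

space_names = ['A', 'B', 'C']                  # length s (invented labels)
time_names = [2000, 2001, 2002, 2003]          # length t
rep_names = [0, 1]                             # length r

# Cartesian product of the three axes in C order, matching .ravel():
# space varies slowest, repetition fastest.
index = pd.MultiIndex.from_product(
    [space_names, time_names, rep_names],
    names=['space', 'time', 'rep'],
)
long_df = pd.DataFrame({'value': values.ravel()}, index=index).reset_index()
print(long_df.head())
```

Because the flattening order and the product order agree, each value lands next to the coordinates that produced it, with no manual label-string splitting as in the "ugly solution".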
Q: Twitter No such element error python-selenium After printing the username on the login screen on twitter, I want it to press the login button, but I get a "no such element" error. from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") options.add_argument('--disable-notifications') webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 20) url = "https://twitter.com/login" driver.get(url) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[autocomplete='username']"))).send_keys("bla") driver.find_element(By.XPATH, "//div[@role='button'][contains(.,'Next')]").click() i try with do xpath css selector but never work A: This code worked for me! With no changes from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") options.add_argument('--disable-notifications') webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 10) url = "https://twitter.com/login" driver.get(url) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[autocomplete='username']"))).send_keys("bla") driver.find_element(By.XPATH, "//div[@role='button'][contains(.,'Next')]").click() The result is - we went to the next page I tried several times.
Twitter No such element error python-selenium
After printing the username on the login screen on twitter, I want it to press the login button, but I get a "no such element" error. from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") options.add_argument('--disable-notifications') webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 20) url = "https://twitter.com/login" driver.get(url) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[autocomplete='username']"))).send_keys("bla") driver.find_element(By.XPATH, "//div[@role='button'][contains(.,'Next')]").click() i try with do xpath css selector but never work
[ "This code worked for me!\nWith no changes\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument('--disable-notifications')\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://twitter.com/login\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[autocomplete='username']\"))).send_keys(\"bla\")\ndriver.find_element(By.XPATH, \"//div[@role='button'][contains(.,'Next')]\").click()\n\nThe result is - we went to the next page\nI tried several times.\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "twitter" ]
stackoverflow_0074563424_python_selenium_twitter.txt
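The fix above hinges on `WebDriverWait(...).until(...)`, which simply polls a condition until it holds or a timeout elapses. Since running a real browser isn't always possible, here is a stripped-down, stdlib-only sketch of that wait-then-act pattern; the helper and the fake "element" condition are hypothetical stand-ins, not selenium API.

```python
import time

def wait_until(predicate, timeout=10.0, poll=0.5):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses.

    This mirrors the idea behind selenium's WebDriverWait.until(): evaluate
    a condition repeatedly, sleeping between attempts, and raise on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Hypothetical usage: pretend the "element" becomes clickable on poll #3.
state = {'tries': 0}

def element_is_clickable():
    state['tries'] += 1
    return state['tries'] >= 3

element = wait_until(element_is_clickable, timeout=5.0, poll=0.01)

# A condition that never holds ends in TimeoutError instead of hanging.
timed_out = False
try:
    wait_until(lambda: False, timeout=0.05, poll=0.01)
except TimeoutError:
    timed_out = True
```

This is why increasing the wait timeout (20 vs 10 seconds) was never the real issue: the wait succeeds as soon as the condition holds, and only the failure case runs to the deadline.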
Q: Remove an open file if an error occurs Is it possible to close and delete while using 'with open()'? I will occasionally encounter an error while doing calculations/extractions/queries in a routine called 'write_file'. try: with open(some_file, 'w') as report: write_file(report, other_variables) except: logging.error("Report {} did not compile".format(some_file)) I wrapped this in a try/except, but it still wrote the report up to the exception. A: If you're comfortable deleting the file after encountering any exception at all, then this will suffice: import os try: with open(some_file, 'w') as report: write_file(report, other_variables) except: logging.error("Report {} did not compile".format(some_file)) os.remove(some_file) Just keep in mind that it's almost always better to be specific about the exception you're catching. Some free advice: if I were concerned about writing nonsense to a file, I would separate what you're doing into two distinct steps. First, I would determine prior to even opening the file to being written whether or not some calculation or statement will throw an exception. If it does, I wouldn't even bother opening the file. Second, if the first step passes without an exception, I would open up the file and write to it. You can optionally wrap this step around a try/except block to catch file IO errors. The benefit of splitting up your work like this is that it makes it easier to diagnose an issue should one occur. The class of exceptions resulting from the first step are bound to be distinct from the class of exceptions that would result from the second step. A: After some digging I found that, no, you cannot close and delete an open file. In this case it makes more sense to use tempfile. If a report complies correctly I can then read from the tempfile and write the actual report. This way the script is not creating, writing and then deleting an actual file. 
with tempfile.TemporaryFile() as report: write_file(report, other_variables) https://docs.python.org/3.4/library/tempfile.html A: Here's a handy context manager I sometimes use. It has the benefit of still raising the exception, while deleting partial files: from contextlib import contextmanager from pathlib import Path @contextmanager def safe_open(file, *args, **kwargs): try: with open(file, *args, **kwargs) as f: yield f except: Path(file).unlink(missing_ok=True) raise Important! Make sure you only use it with mode 'w' (open for writing, truncating the file first), or it would happily delete files open for reading.
Remove an open file if an error occurs
Is it possible to close and delete while using 'with open()'? I will occasionally encounter an error while doing calculations/extractions/queries in a routine called 'write_file'. try: with open(some_file, 'w') as report: write_file(report, other_variables) except: logging.error("Report {} did not compile".format(some_file)) I wrapped this in a try/except, but it still wrote the report up to the exception.
[ "If you're comfortable deleting the file after encountering any exception at all, then this will suffice:\nimport os\n\ntry:\n with open(some_file, 'w') as report:\n write_file(report, other_variables)\nexcept:\n logging.error(\"Report {} did not compile\".format(some_file))\n os.remove(some_file)\n\nJust keep in mind that it's almost always better to be specific about the exception you're catching.\nSome free advice: if I were concerned about writing nonsense to a file, I would separate what you're doing into two distinct steps.\nFirst, I would determine prior to even opening the file to being written whether or not some calculation or statement will throw an exception. If it does, I wouldn't even bother opening the file.\nSecond, if the first step passes without an exception, I would open up the file and write to it. You can optionally wrap this step around a try/except block to catch file IO errors.\nThe benefit of splitting up your work like this is that it makes it easier to diagnose an issue should one occur. The class of exceptions resulting from the first step are bound to be distinct from the class of exceptions that would result from the second step.\n", "After some digging I found that, no, you cannot close and delete an open file. In this case it makes more sense to use tempfile. If a report complies correctly I can then read from the tempfile and write the actual report. This way the script is not creating, writing and then deleting an actual file.\nwith tempfile.TemporaryFile() as report:\n write_file(report, other_variables)\n\nhttps://docs.python.org/3.4/library/tempfile.html\n", "Here's a handy context manager I sometimes use. 
It has the benefit of still raising the exception, while deleting partial files:\nfrom contextlib import contextmanager\nfrom pathlib import Path\n\n\n@contextmanager\ndef safe_open(file, *args, **kwargs):\n try:\n with open(file, *args, **kwargs) as f:\n yield f\n except:\n Path(file).unlink(missing_ok=True)\n raise\n\nImportant!\nMake sure you only use it with mode 'w' (open for writing, truncating the file first), or it would happily delete files open for reading.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "contextmanager", "python" ]
stackoverflow_0026855536_contextmanager_python.txt
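The `safe_open` context manager from the last answer can be exercised end to end without any real report logic; the failing write below uses a throwaway temp directory and a deliberately raised `ValueError` as a stand-in for a failing `write_file`.

```python
import os
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def safe_open(file, *args, **kwargs):
    # From the last answer: delete the (partial) file if the body raises,
    # then re-raise so the caller still sees the original exception.
    try:
        with open(file, *args, **kwargs) as f:
            yield f
    except:
        Path(file).unlink(missing_ok=True)
        raise

tmpdir = tempfile.mkdtemp()
report_path = os.path.join(tmpdir, 'report.txt')

# Failure case: the partial file should be cleaned up, the error re-raised.
try:
    with safe_open(report_path, 'w') as report:
        report.write('partial data...')
        raise ValueError('computation failed halfway through')
except ValueError:
    pass
partial_file_removed = not os.path.exists(report_path)

# Success case: a clean write should leave the file in place.
with safe_open(report_path, 'w') as report:
    report.write('complete report')
file_kept = os.path.exists(report_path)
```

Note that `Path.unlink(missing_ok=True)` needs Python 3.8+, and the caveat from the answer stands: only use this with mode `'w'`, or a failed *read* would delete the file too.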
Q: Using GPIOzero play an MP3 sound file whilst a button is held and an alternative MP3 when button is released (Python, Pygame, Raspberry Pi)) So what I am trying to do is have an MP3 playing when a button on my solderless Breadboard is not pressed and a different one playing when the button is held - best popular example is the 'Deal or No Deal' phone if the Banker on the other end was just a recorded message. I am using a Raspberry Pi 3B using the GPIO pins to hook up a button on a breadboard (later to include a speaker bonnet but that's later me's problem). Here is some code; Below is the main script from gpiozero import Button from signal import pause button = Button(2) while True: if button.is_pressed == True: import playLeft #run the playLeft script when button is pressed else: import playRight #run the playRight script when not pressed playLeft script #import the Pygame sound module import pygame #designate where the files are located declared in variables path = "/home/pi/" sound_files = "phoneLEFT.mp3" #initialise Pygame pygame.mixer.init() speaker_volume = 1.0 #100% speaker volume pygame.mixer.music.set_volume(speaker_volume) #sets the mixer volume to the speaker_volume variable for sound_file in sound_files: #iterating loop playing the files in the sound_files variable pygame.mixer.music.load("/home/john/phoneLEFT.mp3") #using the pygame module loads the sound file... 
pygame.mixer.music.play() #plays the sound file playRight script #import the Pygame sound module import pygame #designate where the files are located declared in variables path = "/home/pi/" sound_files = "handsetright.mp3" #initialise Pygame pygame.mixer.init() speaker_volume = 1.0 #100% speaker volume pygame.mixer.music.set_volume(speaker_volume) #sets the mixer volume to the speaker_volume variable for sound_file in sound_files: #iterating loop playing the files in the sound_files variable pygame.mixer.music.load("/home/john/handsetright.mp3") #using the pygame module loads the sound file... pygame.mixer.music.play() #plays the sound file The current result is that upon running the code it will play the 'Right' script, will accept the button being held and go onto play the 'Left' script but upon release of the button will not return to playing the 'Right' script. Can any Genius' out there give us a hand? Taking any clever recommendations :) A: #Always comment your code like a violent psychopath will be maintaining it and they know where you live ;) from pygame import mixer #imports Mixer class from the Pygame module to run the sound from gpiozero import Button #imports the Button element only from GPIOzero import time #imports the time class to slow the loop, saving processing. 
button = Button(2) #designates the GPIO pin 2 to the physical button input it is attached to mixer.init() #instantiate the mixer class from Pygame def ringing(): #a funtion to manage loading, playing and unloading the PhoneLEFT.mp3 (Ringing when set down) mixer.music.stop() #stop the current soundfile, which will be the alternate one running mixer.music.set_volume(1.0) #sets the mixer volume mixer.music.load('phoneLEFT.mp3') #Load the named sound file located in the same folder (Current Working Directory) mixer.music.play() #plays the loaded sound file def talking(): #a funtion to manage loading, playing and unloading the handsetright.mp3 (Informational when handset used) mixer.music.stop() #stop the current soundfile, which will be the alternate one running mixer.music.set_volume(1.0) #sets the mixer volume mixer.music.load('handsetright.mp3') #Load the named sound file located in the same folder (CWD) mixer.music.play() #plays the sound file while True: #initialise a while loop button.when_pressed = ringing #when button is held perform the 'ringing' function button.when_released = talking #when the button is not held perform the 'talking' function time.sleep(0.05) #slows the while loop to iterate every 0.05seconds So in the end i changed up the idea of having different scripts importing in the sound and thatwas the begining of the end. There was a reasonable amount of messing about with the updates of Pygame but once it was all back together this above was a result
Using GPIOzero, play an MP3 sound file whilst a button is held and an alternative MP3 when the button is released (Python, Pygame, Raspberry Pi)

So what I am trying to do is have an MP3 playing when a button on my solderless Breadboard is not pressed and a different one playing when the button is held - best popular example is the 'Deal or No Deal' phone if the Banker on the other end was just a recorded message. I am using a Raspberry Pi 3B using the GPIO pins to hook up a button on a breadboard (later to include a speaker bonnet but that's later me's problem). Here is some code; Below is the main script from gpiozero import Button from signal import pause button = Button(2) while True: if button.is_pressed == True: import playLeft #run the playLeft script when button is pressed else: import playRight #run the playRight script when not pressed playLeft script #import the Pygame sound module import pygame #designate where the files are located declared in variables path = "/home/pi/" sound_files = "phoneLEFT.mp3" #initialise Pygame pygame.mixer.init() speaker_volume = 1.0 #100% speaker volume pygame.mixer.music.set_volume(speaker_volume) #sets the mixer volume to the speaker_volume variable for sound_file in sound_files: #iterating loop playing the files in the sound_files variable pygame.mixer.music.load("/home/john/phoneLEFT.mp3") #using the pygame module loads the sound file... pygame.mixer.music.play() #plays the sound file playRight script #import the Pygame sound module import pygame #designate where the files are located declared in variables path = "/home/pi/" sound_files = "handsetright.mp3" #initialise Pygame pygame.mixer.init() speaker_volume = 1.0 #100% speaker volume pygame.mixer.music.set_volume(speaker_volume) #sets the mixer volume to the speaker_volume variable for sound_file in sound_files: #iterating loop playing the files in the sound_files variable pygame.mixer.music.load("/home/john/handsetright.mp3") #using the pygame module loads the sound file... 
pygame.mixer.music.play() #plays the sound file The current result is that upon running the code it will play the 'Right' script, will accept the button being held and go onto play the 'Left' script but upon release of the button will not return to playing the 'Right' script. Can any Genius' out there give us a hand? Taking any clever recommendations :)
[ "#Always comment your code like a violent psychopath will be maintaining it and they know where you live ;)\nfrom pygame import mixer #imports Mixer class from the Pygame module to run the sound\nfrom gpiozero import Button #imports the Button element only from GPIOzero\nimport time #imports the time class to slow the loop, saving processing.\n\nbutton = Button(2) #designates the GPIO pin 2 to the physical button input it is attached to \nmixer.init() #instantiate the mixer class from Pygame\n\ndef ringing(): #a funtion to manage loading, playing and unloading the PhoneLEFT.mp3 (Ringing when set down)\n mixer.music.stop() #stop the current soundfile, which will be the alternate one running\n mixer.music.set_volume(1.0) #sets the mixer volume \n mixer.music.load('phoneLEFT.mp3') #Load the named sound file located in the same folder (Current Working Directory) \n mixer.music.play() #plays the loaded sound file\n\ndef talking(): #a funtion to manage loading, playing and unloading the handsetright.mp3 (Informational when handset used)\n mixer.music.stop() #stop the current soundfile, which will be the alternate one running\n mixer.music.set_volume(1.0) #sets the mixer volume \n mixer.music.load('handsetright.mp3') #Load the named sound file located in the same folder (CWD)\n mixer.music.play() #plays the sound file\n \n\nwhile True: #initialise a while loop\n button.when_pressed = ringing #when button is held perform the 'ringing' function\n button.when_released = talking #when the button is not held perform the 'talking' function\n time.sleep(0.05) #slows the while loop to iterate every 0.05seconds\n\nSo in the end i changed up the idea of having different scripts importing in the sound and thatwas the begining of the end. There was a reasonable amount of messing about with the updates of Pygame but once it was all back together this above was a result\n" ]
[ 0 ]
[]
[]
[ "button", "mp3", "pygame", "python", "raspberry_pi" ]
stackoverflow_0074238538_button_mp3_pygame_python_raspberry_pi.txt
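gpiozero and pygame need real hardware and an audio device, but the core of the accepted fix — wiring `when_pressed`/`when_released` callbacks once instead of importing a script per state change — can be sketched and checked with a plain stand-in button class. Everything below is a hypothetical mock-up, not the gpiozero API.

```python
class FakeButton:
    """Stand-in for gpiozero.Button so the callback wiring can be shown
    without a Raspberry Pi attached."""

    def __init__(self):
        self.when_pressed = None
        self.when_released = None

    # Helpers that simulate the physical button being toggled.
    def press(self):
        if self.when_pressed:
            self.when_pressed()

    def release(self):
        if self.when_released:
            self.when_released()

played = []  # records which sound "plays", instead of pygame.mixer

def ringing():            # handset set down -> ringing loop
    played.append('phoneLEFT.mp3')

def talking():            # handset lifted -> recorded message
    played.append('handsetright.mp3')

button = FakeButton()
# The key idea from the answer: assign the handlers once, up front,
# rather than re-importing playLeft/playRight inside a polling loop.
button.when_pressed = ringing
button.when_released = talking

button.press()
button.release()
button.press()
```

Each simulated edge fires exactly one handler, which is why the real version switches cleanly back and forth where the import-based polling loop could not (Python caches imports, so `import playRight` only runs the script the first time).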
Q: Raise an exception using mock when a specific Django model is called I have a class based view inheriting from FormView with an overridden form_valid() method that I would like to test. As you can see, form_valid() is required to access the CustomUser model which is wrapped with a try and except. What I am trying to do is raise an exception whenever create_user is called, but I am having no success. The CustomUser has been created in the usual way via a CustomUserManager with a create_user method. CustomUser = get_user_model() class SignUpView(FormView): template_name = 'accounts/signup.html' form_class = SignUpForm def form_valid(self, form): try: self.user = CustomUser.objects.filter(email=form.cleaned_data['email']).first() if not self.user: self.user = CustomUser.objects.create_user(email=form.cleaned_data['email'], full_name=form.cleaned_data['full_name'], password=form.cleaned_data['password'], is_verified=False ) else: if self.user.is_verified: self.send_reminder() return super().form_valid(form) self.send_code() except: messages.error(self.request, _('Something went wrong, please try to register again')) return redirect(reverse('accounts:signup')) return super().form_valid(form) What I have tried: def test_database_fail(self): with patch.object(CustomUserManager, 'create_user') as mock_method: mock_method.side_effect = Exception(ValueError) view = SignUpView.as_view() url = reverse('accounts:signup') data = {'email': 'test@test.com', 'full_name': 'Test Tester', 'password': 'Abnm1234'} request = self.factory.post(url, data) request.session = {} request.messages = {} response = view(request) and .. 
def test_database_fail(self): CustomUser = Mock() CustomUser.objects.create_user.side_effect = CustomUser.ValueError view = SignUpView.as_view() url = reverse('accounts:signup') data = {'email': 'test@test.com', 'full_name': 'Test Tester', 'password': 'Abnm1234'} request = self.factory.post(url, data) request.session = {} request.messages = {} response = view(request) In both cases no exception is triggered. My question is, how can I raise an exception whenever create_user is called using mock? A: You need to write the whole test under mock.patch context manager. Otherwise once with statement is finished, mock doesn't work anymore and has no effect. Try this: def test_database_fail(self): with patch.object(CustomUserManager, 'create_user') as mock_method: mock_method.side_effect = Exception(ValueError) view = SignUpView.as_view() url = reverse('accounts:signup') data = {'email': 'test@test.com', 'full_name': 'Test Tester', 'password': 'Abnm1234'} request = self.factory.post(url, data) request.session = {} request.messages = {} response = view(request) Also note that you can specify side_effect directly in patch.object method: with patch.object(CustomUserManager, 'create_user', side_effect=Exception): # rest of your code
Raise an exception using mock when a specific Django model is called
I have a class based view inheriting from FormView with an overridden form_valid() method that I would like to test. As you can see, form_valid() is required to access the CustomUser model which is wrapped with a try and except. What I am trying to do is raise an exception whenever create_user is called, but I am having no success. The CustomUser has been created in the usual way via a CustomUserManager with a create_user method. CustomUser = get_user_model() class SignUpView(FormView): template_name = 'accounts/signup.html' form_class = SignUpForm def form_valid(self, form): try: self.user = CustomUser.objects.filter(email=form.cleaned_data['email']).first() if not self.user: self.user = CustomUser.objects.create_user(email=form.cleaned_data['email'], full_name=form.cleaned_data['full_name'], password=form.cleaned_data['password'], is_verified=False ) else: if self.user.is_verified: self.send_reminder() return super().form_valid(form) self.send_code() except: messages.error(self.request, _('Something went wrong, please try to register again')) return redirect(reverse('accounts:signup')) return super().form_valid(form) What I have tried: def test_database_fail(self): with patch.object(CustomUserManager, 'create_user') as mock_method: mock_method.side_effect = Exception(ValueError) view = SignUpView.as_view() url = reverse('accounts:signup') data = {'email': 'test@test.com', 'full_name': 'Test Tester', 'password': 'Abnm1234'} request = self.factory.post(url, data) request.session = {} request.messages = {} response = view(request) and .. def test_database_fail(self): CustomUser = Mock() CustomUser.objects.create_user.side_effect = CustomUser.ValueError view = SignUpView.as_view() url = reverse('accounts:signup') data = {'email': 'test@test.com', 'full_name': 'Test Tester', 'password': 'Abnm1234'} request = self.factory.post(url, data) request.session = {} request.messages = {} response = view(request) In both cases no exception is triggered. 
My question is, how can I raise an exception whenever create_user is called using mock?
[ "You need to write the whole test under mock.patch context manager. Otherwise once with statement is finished, mock doesn't work anymore and has no effect. Try this:\ndef test_database_fail(self):\n with patch.object(CustomUserManager, 'create_user') as mock_method:\n mock_method.side_effect = Exception(ValueError)\n\n view = SignUpView.as_view()\n url = reverse('accounts:signup')\n data = {'email': 'test@test.com', 'full_name': 'Test Tester', 'password': 'Abnm1234'}\n request = self.factory.post(url, data)\n request.session = {}\n request.messages = {}\n response = view(request)\n\nAlso note that you can specify side_effect directly in patch.object method:\nwith patch.object(CustomUserManager, 'create_user', side_effect=Exception):\n # rest of your code\n\n" ]
[ 2 ]
[]
[]
[ "django", "django_forms", "django_testing", "mocking", "python" ]
stackoverflow_0074563124_django_django_forms_django_testing_mocking_python.txt
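The scoping point in the answer — the patch only holds inside the `with` block — can be demonstrated without Django at all, using a minimal stand-in for the manager class. The class and return value below are invented for illustration; the `mock.patch.object(..., side_effect=...)` call is the real stdlib API.

```python
from unittest import mock

class CustomUserManager:
    # Minimal stand-in for the Django manager in the question.
    def create_user(self, **kwargs):
        return 'real user'

manager = CustomUserManager()

# Inside the context manager, create_user raises instead of returning;
# the original method is restored automatically on exit.
with mock.patch.object(CustomUserManager, 'create_user',
                       side_effect=ValueError('boom')):
    try:
        manager.create_user(email='test@test.com')
        raised = False
    except ValueError:
        raised = True

restored = manager.create_user(email='test@test.com')
```

This also shows why the questioner's second attempt failed: rebinding the local name `CustomUser = Mock()` in the test never touches the `CustomUser` that the view module imported, whereas `patch.object` swaps the attribute on the class every caller shares.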
Q: SQLAlchemy: Making a subquery of query.from_statement(text(...)) raising AttributeError I'm building a tool which relies heavily on SQLAlchemy's query builder, but which allows the user to specify literal text of subqueries to join against in cases where the model is insufficient. However, when I try something like this: q = session.query().from_statement(sa.text(subquery_text)).subquery(subquery_name) ...an exception occurs: File ".../lib/sqlalchemy/orm/query.py", line 473, in subquery return q.alias(name=name) AttributeError: 'AnnotatedTextClause' object has no attribute 'alias' Looking at the implementation of .subquery() in SQLAlchemy's codebase raises some clarity on how we got from a Query object to an AnnotatedTextClause: def subquery(self, name=None, with_labels=False, reduce_columns=False): # docstring in the original omitted here for brevity q = self.enable_eagerloads(False) if with_labels: q = q.with_labels() q = q.statement if reduce_columns: q = q.reduce_columns() return q.alias(name=name) ...but I'm finding myself unenlightened as to whether what I'm attempting to do is possible, and if so how it would be accomplished. A: I have found a valid syntax: q = sa.text(subquery_text).columns(Table.col_a, Table.col_b).alias(subquery_name) q can then be used as a standard subquery
SQLAlchemy: Making a subquery of query.from_statement(text(...)) raising AttributeError
I'm building a tool which relies heavily on SQLAlchemy's query builder, but which allows the user to specify literal text of subqueries to join against in cases where the model is insufficient. However, when I try something like this: q = session.query().from_statement(sa.text(subquery_text)).subquery(subquery_name) ...an exception occurs: File ".../lib/sqlalchemy/orm/query.py", line 473, in subquery return q.alias(name=name) AttributeError: 'AnnotatedTextClause' object has no attribute 'alias' Looking at the implementation of .subquery() in SQLAlchemy's codebase raises some clarity on how we got from a Query object to an AnnotatedTextClause: def subquery(self, name=None, with_labels=False, reduce_columns=False): # docstring in the original omitted here for brevity q = self.enable_eagerloads(False) if with_labels: q = q.with_labels() q = q.statement if reduce_columns: q = q.reduce_columns() return q.alias(name=name) ...but I'm finding myself unenlightened as to whether what I'm attempting to do is possible, and if so how it would be accomplished.
[ "I have found a valid syntax:\nq = sa.text(subquery_text).columns(Table.col_a, Table.col_b).alias(subquery_name)\n\nq can then be used as a standard subquery\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0034169993_python_sqlalchemy.txt
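The answer's text().columns().alias() pattern can be sketched as a minimal self-contained example. The SQL string, column names, and alias name here ("some_table", "col_a", "col_b", "user_sub") are hypothetical stand-ins for the asker's subquery_text and schema:

```python
import sqlalchemy as sa

# Hypothetical literal SQL standing in for the user-supplied subquery_text
subquery_text = "SELECT col_a, col_b FROM some_table WHERE col_a > 0"

# Declare the columns the textual statement exposes, then alias it so it
# behaves like any other selectable that can be joined against.
subq = (
    sa.text(subquery_text)
    .columns(sa.column("col_a", sa.Integer), sa.column("col_b", sa.String))
    .alias("user_sub")
)

print(subq.name)                 # user_sub
print([c.name for c in subq.c])  # ['col_a', 'col_b']
```

Because the alias carries a .c column collection, it can then be referenced in joins and select constructs like a mapped table.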
Q: pygame.display.set_mode() too slow with threading (up to 18 seconds) I want to use pygame drawing/event handling in its separate thread, while leaving the main thread for all computations. Here is a minimal working example. Everything works as intended, after initialization the performance is fine. But the self.screen = pygame.display.set_mode([400, 400]) line takes 18 seconds to execute. import threading import time import pygame class Window: def __init__(self): self.screen = None self.gui_thread = None self.gui_initialized = False self.keep_running_gui_thread = True self.gui_thread = threading.Thread(target=self._gui_loop) self.gui_thread.start() while not self.gui_initialized: pass # await gui initialization def _gui_loop(self): pygame.init() self.screen = pygame.display.set_mode([400, 400]) self.gui_initialized = True while self.keep_running_gui_thread: ... # handle events etc. pygame.quit() def close(self): self.keep_running_gui_thread = False self.gui_thread.join() if __name__ == "__main__": tick = time.perf_counter() window = Window() print(f"init: {time.perf_counter() - tick} seconds") window.close() What's even weirder is that when I try to run it with Scalene it starts to run much faster: the loading time is reduced from 18 to 1-2 seconds. Regarding the core design: I want to run it using multithreading/multiprocessing etc. for smooth gui without redesigning my computation-heavy application around pygame from the very beginning. I want to run pygame-related stuff in the secondary thread (not the main one) so that my computation-heavy application controls (creates/manages) the gui and not vice-versa. A: Figured it out. Apparently while not self.gui_initialized: pass is really bad because it causes a nearly complete freeze on all threads due to GIL. Simply adding a time.sleep(0.1) inside the loop fixes the problem.
pygame.display.set_mode() too slow with threading (up to 18 seconds)
I want to use pygame drawing/event handling in its separate thread, while leaving the main thread for all computations. Here is a minimal working example. Everything works as intended, after initialization the performance is fine. But the self.screen = pygame.display.set_mode([400, 400]) line takes 18 seconds to execute. import threading import time import pygame class Window: def __init__(self): self.screen = None self.gui_thread = None self.gui_initialized = False self.keep_running_gui_thread = True self.gui_thread = threading.Thread(target=self._gui_loop) self.gui_thread.start() while not self.gui_initialized: pass # await gui initialization def _gui_loop(self): pygame.init() self.screen = pygame.display.set_mode([400, 400]) self.gui_initialized = True while self.keep_running_gui_thread: ... # handle events etc. pygame.quit() def close(self): self.keep_running_gui_thread = False self.gui_thread.join() if __name__ == "__main__": tick = time.perf_counter() window = Window() print(f"init: {time.perf_counter() - tick} seconds") window.close() What's even weirder is that when I try to run it with Scalene it starts to run much faster: the loading time is reduced from 18 to 1-2 seconds. Regarding the core design: I want to run it using multithreading/multiprocessing etc. for smooth gui without redesigning my computation-heavy application around pygame from the very beginning. I want to run pygame-related stuff in the secondary thread (not the main one) so that my computation-heavy application controls (creates/manages) the gui and not vice-versa.
[ "Figured it out. Apparently\nwhile not self.gui_initialized:\n pass\n\nis really bad because it causes a nearly complete freeze on all threads due to GIL.\nSimply adding a time.sleep(0.1) inside the loop fixes the problem.\n" ]
[ 0 ]
[]
[]
[ "multithreading", "pygame", "python" ]
stackoverflow_0074563456_multithreading_pygame_python.txt
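The poster's time.sleep(0.1) fix works because it stops the spin loop from hogging the GIL, but the standard-library way to wait for another thread without spinning at all is threading.Event. A minimal sketch, independent of pygame (the sleep stands in for the slow initialization):

```python
import threading
import time

initialized = threading.Event()

def gui_loop():
    time.sleep(0.2)    # stands in for pygame.init() / pygame.display.set_mode()
    initialized.set()  # signal the waiting thread

gui_thread = threading.Thread(target=gui_loop)
gui_thread.start()

# Blocks without spinning, so the GUI thread can acquire the GIL and make
# progress; `while not flag: pass` would instead compete for the interpreter.
initialized.wait(timeout=5)
gui_thread.join()
print(initialized.is_set())  # True
```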
Q: on message event in cog doesn't work (discord.py) The bot does not even print the messages from the on_message event and I cannot understand why (no errors or anything; just nothing happens). @commands.Cog.listener("on_message") async def on_message(self, message: discord.Message, ctx): print(message) if message.author.id == self.bot.user: return msg_content = message.content.lower() CurseWord = ['curse1', 'curse2'] # delete curse word if match with the list if msg_content in CurseWord: await message.delete() await ctx.send("Dont say that again") I can't find any errors, so I cannot understand where the problem is. A: on_message only takes one argument, being the message. You can't just add random arguments to events and expect it to work. How would the library know what to pass in? Docs: https://discordpy.readthedocs.io/en/stable/api.html?highlight=on_message#discord.on_message Also, if you don't get any errors you probably didn't configure logging properly, or you've overridden the error handler & you're not handling that type of error. Docs on logging: https://discordpy.readthedocs.io/en/stable/logging.html?highlight=logging
on message event in cog doesn't work (discord.py)
The bot does not even print the messages from the on_message event and I cannot understand why (no errors or anything; just nothing happens). @commands.Cog.listener("on_message") async def on_message(self, message: discord.Message, ctx): print(message) if message.author.id == self.bot.user: return msg_content = message.content.lower() CurseWord = ['curse1', 'curse2'] # delete curse word if match with the list if msg_content in CurseWord: await message.delete() await ctx.send("Dont say that again") I can't find any errors, so I cannot understand where the problem is.
[ "on_message only takes one argument, being the message. You can't just add random arguments to events and expect it to work. How would the library know what to pass in?\nDocs: https://discordpy.readthedocs.io/en/stable/api.html?highlight=on_message#discord.on_message\nAlso if you don't get any errors you probably didn't configure logging probably, or you've overridden the error handler & you're not handling that type of error. Docs on logging: https://discordpy.readthedocs.io/en/stable/logging.html?highlight=logging\n" ]
[ 0 ]
[]
[]
[ "discord.py", "events", "python" ]
stackoverflow_0074562260_discord.py_events_python.txt
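The answer's point, that the event dispatcher decides what arguments a listener receives, can be demonstrated without discord.py. This is a toy dispatcher, not the library itself; it calls each listener with exactly one argument, the way on_message listeners are invoked:

```python
# Toy event dispatcher (NOT discord.py) illustrating why declaring an
# extra parameter such as `ctx` makes a listener uncallable.
listeners = []

def listener(func):
    listeners.append(func)
    return func

def dispatch(message):
    delivered, errors = [], []
    for func in listeners:
        try:
            func(message)  # exactly one argument, like on_message(message)
            delivered.append(func.__name__)
        except TypeError as exc:
            errors.append(str(exc))
    return delivered, errors

@listener
def on_message(message):           # matches what dispatch() passes
    pass

@listener
def bad_on_message(message, ctx):  # the dispatcher has no `ctx` to supply
    pass

delivered, errors = dispatch("hello")
print(delivered)  # ['on_message']
print(errors)     # one TypeError about the missing 'ctx' argument
```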
Q: Type for function that accepts module containing specific function? I'm trying to use a more functional syntax, and I've got two modules: # foo.py def bar(): # Do something pass # baz.py def baz(qux: ???): qux.bar() # Usage: import foo as Foo import baz baz(Foo) I want baz to accept an argument (qux) which has an attribute bar which is a callable. This parameter could refer to a module, or an instance of some class - I don't really care. Is this possible? A: You can use a Protocol, as mentioned by @jonrsharpe in the comments: # baz.py from typing import Protocol class SupportsBar(Protocol): def bar(self) -> None: ... def baz(qux: SupportsBar) -> None: qux.bar() # Usage: import foo as Foo baz(Foo) # no errors
Type for function that accepts module containing specific function?
I'm trying to use a more functional syntax, and I've got two modules: # foo.py def bar(): # Do something pass # baz.py def baz(qux: ???): qux.bar() # Usage: import foo as Foo import baz baz(Foo) I want baz to accept an argument (qux) which has an attribute bar which is a callable. This parameter could refer to a module, or an instance of some class - I don't really care. Is this possible?
[ "You can use a Protocol, as mentioned by @jonrsharpe in the comments:\n# baz.py\n\nfrom typing import Protocol\n\n\nclass SupportsBar(Protocol):\n def bar(self) -> None:\n ...\n\n\ndef baz(qux: SupportsBar) -> None:\n qux.bar()\n\n\n# Usage:\nimport foo as Foo\n\nbaz(Foo) # no errors\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_typing" ]
stackoverflow_0074483387_python_python_typing.txt
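Because Protocol checks structure rather than inheritance, the same annotation covers both of the cases the asker mentions, a module or a class instance. Adding typing.runtime_checkable also makes the protocol usable with isinstance(). A self-contained sketch (the module here is built with types.ModuleType to stand in for `import foo`):

```python
import types
from typing import Protocol, runtime_checkable

@runtime_checkable  # enables isinstance() checks against the protocol
class SupportsBar(Protocol):
    def bar(self) -> None: ...

def baz(qux: SupportsBar) -> None:
    qux.bar()

# Stand-in for `import foo`: a module object exposing a bar() function
foo = types.ModuleType("foo")
def _bar() -> None:
    pass
foo.bar = _bar

class HasBar:
    def bar(self) -> None:
        pass

baz(foo)       # a module satisfies the protocol
baz(HasBar())  # so does a class instance
print(isinstance(foo, SupportsBar))       # True
print(isinstance(HasBar(), SupportsBar))  # True
```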
Q: Nested Function in Python What benefit or implications could we get with Python code like this: class some_class(parent_class): def doOp(self, x, y): def add(x, y): return x + y return add(x, y) I found this in an open-source project, doing something useful inside the nested function, but doing absolutely nothing outside it except calling it. (The actual code can be found here.) Why might someone code it like this? Is there some benefit or side effect for writing the code inside the nested function rather than in the outer, normal function? A: Normally you do it to make closures: def make_adder(x): def add(y): return x + y return add plus5 = make_adder(5) print(plus5(12)) # prints 17 Inner functions can access variables from the enclosing scope (in this case, the local variable x). If you're not accessing any variables from the enclosing scope, they're really just ordinary functions with a different scope. A: Aside from function generators, where internal function creation is almost the definition of a function generator, the reason I create nested functions is to improve readability. If I have a tiny function that will only be invoked by the outer function, then I inline the definition so you don't have to skip around to determine what that function is doing. I can always move the inner method outside of the encapsulating method if I find a need to reuse the function at a later date. Toy example: import sys def Foo(): def e(s): sys.stderr.write('ERROR: ') sys.stderr.write(s) sys.stderr.write('\n') e('I regret to inform you') e('that a shameful thing has happened.') e('Thus, I must issue this desultory message') e('across numerous lines.') Foo() A: One potential benefit of using inner methods is that it allows you to use outer method local variables without passing them as arguments. 
def helper(feature, resultBuffer): resultBuffer.print(feature) resultBuffer.printLine() resultBuffer.flush() def save(item, resultBuffer): helper(item.description, resultBuffer) helper(item.size, resultBuffer) helper(item.type, resultBuffer) can be written as follows, which arguably reads better def save(item, resultBuffer): def helper(feature): resultBuffer.print(feature) resultBuffer.printLine() resultBuffer.flush() helper(item.description) helper(item.size) helper(item.type) A: I can't imagine any good reason for code like that. Maybe there was a reason for the inner function in older revisions, like other Ops. For example, this makes slightly more sense: class some_class(parent_class): def doOp(self, op, x, y): def add(x, y): return x + y def sub(x,y): return x - y return locals()[op](x,y) some_class().doOp('add', 1,2) but then the inner function should be ("private") class methods instead: class some_class(object): def _add(self, x, y): return x + y def doOp(self, x, y): return self._add(x,y) A: The idea behind local methods is similar to local variables: don't pollute the larger namespace. Obviously the benefits are limited since most languages don't also provide such functionality directly. A: Are you sure the code was exactly like this? The normal reason for doing something like this is for creating a partial - a function with baked-in parameters. Calling the outer function returns a callable that needs no parameters, and so can be stored and used somewhere it is impossible to pass parameters. However, the code you've posted won't do that - it calls the function immediately and returns the result, rather than the callable. It might be useful to post the actual code you saw. A: In Python, you can use a nested function to create a decorator like @decorator. *My answer explains more about decorators. 
I created multiply_by_5() to use it as the decorator for sum() as shown below: # (4 + 6) x 5 = 50 def multiply_by_5(func): def core(*args, **kwargs): result = func(*args, **kwargs) return result * 5 return core @multiply_by_5 # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) Output: 50 The code below is the case of not using the decorator: # (4 + 6) x 5 = 50 # ... # @multiply_by_5 def sum(num1, num2): return num1 + num2 f1 = multiply_by_5(sum) # Here result = f1(4, 6) print(result) Or: # (4 + 6) x 5 = 50 # ... # @multiply_by_5 def sum(num1, num2): return num1 + num2 result = multiply_by_5(sum)(4, 6) # Here print(result) Output: 50
Nested Function in Python
What benefit or implications could we get with Python code like this: class some_class(parent_class): def doOp(self, x, y): def add(x, y): return x + y return add(x, y) I found this in an open-source project, doing something useful inside the nested function, but doing absolutely nothing outside it except calling it. (The actual code can be found here.) Why might someone code it like this? Is there some benefit or side effect for writing the code inside the nested function rather than in the outer, normal function?
[ "Normally you do it to make closures:\ndef make_adder(x):\n def add(y):\n return x + y\n return add\n\nplus5 = make_adder(5)\nprint(plus5(12)) # prints 17\n\nInner functions can access variables from the enclosing scope (in this case, the local variable x). If you're not accessing any variables from the enclosing scope, they're really just ordinary functions with a different scope.\n", "Aside from function generators, where internal function creation is almost the definition of a function generator, the reason I create nested functions is to improve readability. If I have a tiny function that will only be invoked by the outer function, then I inline the definition so you don't have to skip around to determine what that function is doing. I can always move the inner method outside of the encapsulating method if I find a need to reuse the function at a later date.\nToy example:\nimport sys\n\ndef Foo():\n def e(s):\n sys.stderr.write('ERROR: ')\n sys.stderr.write(s)\n sys.stderr.write('\\n')\n e('I regret to inform you')\n e('that a shameful thing has happened.')\n e('Thus, I must issue this desultory message')\n e('across numerous lines.')\nFoo()\n\n", "One potential benefit of using inner methods is that it allows you to use outer method local variables without passing them as arguments.\ndef helper(feature, resultBuffer):\n resultBuffer.print(feature)\n resultBuffer.printLine()\n resultBuffer.flush()\n\ndef save(item, resultBuffer):\n\n helper(item.description, resultBuffer)\n helper(item.size, resultBuffer)\n helper(item.type, resultBuffer)\n\ncan be written as follows, which arguably reads better\ndef save(item, resultBuffer):\n\n def helper(feature):\n resultBuffer.print(feature)\n resultBuffer.printLine()\n resultBuffer.flush()\n\n helper(item.description)\n helper(item.size)\n helper(item.type)\n\n", "I can't image any good reason for code like that.\nMaybe there was a reason for the inner function in older revisions, like other Ops. 
\nFor example, this makes slightly more sense:\nclass some_class(parent_class):\n def doOp(self, op, x, y):\n def add(x, y):\n return x + y\n def sub(x,y):\n return x - y\n return locals()[op](x,y)\n\nsome_class().doOp('add', 1,2)\n\nbut then the inner function should be (\"private\") class methods instead:\nclass some_class(object):\n def _add(self, x, y):\n return x + y\n def doOp(self, x, y):\n return self._add(x,y)\n\n", "The idea behind local methods is similar to local variables: don't pollute the larger name space. Obviously the benefits are limited since most languages don't also provide such functionality directly.\n", "Are you sure the code was exactly like this? The normal reason for doing something like this is for creating a partial - a function with baked-in parameters. Calling the outer function returns a callable that needs no parameters, and so therefore can be stored and used somewhere it is impossible to pass parameters. However, the code you've posted won't do that - it calls the function immediately and returns the result, rather than the callable. It might be useful to post the actual code you saw.\n", "In Python, you can use a nested function to create a decorator like @decorator. 
*My answer explains more about decorators.\nI created multiply_by_5() to use it as the decorator for sum() as shown below:\n# (4 + 6) x 5 = 50\n\ndef multiply_by_5(func):\n def core(*args, **kwargs):\n result = func(*args, **kwargs)\n return result * 5\n return core\n\n@multiply_by_5 # Here\ndef sum(num1, num2):\n return num1 + num2\n\nresult = sum(4, 6)\nprint(result)\n\nOutput:\n50\n\nThe code below is the case of not using the decorator:\n# (4 + 6) x 5 = 50\n\n# ...\n\n# @multiply_by_5\ndef sum(num1, num2):\n return num1 + num2\n\nf1 = multiply_by_5(sum) # Here\nresult = f1(4, 6)\nprint(result)\n\nOr:\n# (4 + 6) x 5 = 50\n\n# ...\n\n# @multiply_by_5\ndef sum(num1, num2):\n return num1 + num2\n\nresult = multiply_by_5(sum)(4, 6) # Here\nprint(result)\n\nOutput:\n50\n\n" ]
[ 119, 62, 27, 8, 6, 1, 0 ]
[]
[]
[ "nested_function", "python" ]
stackoverflow_0001589058_nested_function_python.txt
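One refinement of the decorator in the last answer: wrapping the inner function with functools.wraps, so the decorated function keeps its own name and docstring instead of the wrapper's. A sketch (the function is named sum_two here to avoid shadowing the built-in sum):

```python
import functools

def multiply_by_5(func):
    @functools.wraps(func)  # copy __name__, __doc__, etc. onto the wrapper
    def core(*args, **kwargs):
        return func(*args, **kwargs) * 5
    return core

@multiply_by_5
def sum_two(num1, num2):
    """Return num1 + num2."""
    return num1 + num2

print(sum_two(4, 6))     # 50
print(sum_two.__name__)  # sum_two  (without wraps it would be 'core')
```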
Q: Selenium webpage not loading properly I am trying to web scrape university ranking information from the USNews site. The problem is that when I use selenium to open the webpage, the 'Load More Button' is not working properly. (I think I successfully click it, but in the Chrome window opened by webdriver, when I scroll down to the button, it says 'We're sorry, there was a problem loading the next page of search results'.) I am new to web crawlers and I did a lot of research on this; there are several similar questions but none of those answers helped. I really need some help. Here is my code: driver_path = 'xxx' (chromedriver path) driver = webdriver.Chrome(executable_path=driver_path) url2 = 'https://www.usnews.com/education/best-global-universities/rankings' wait = WebDriverWait(driver, 30) driver.get(url2) driver.maximize_window() count = 1 while True: try: print(1) # driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") wait.until(EC.visibility_of_element_located((By.XPATH,"//*[@id='rankings']/div[3]/button"))) print(2) show_more = wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='rankings']/div[3]/button"))) ActionChains(browser).move_to_element(show_more).click().perform() print(3) # driver.find_element(By.XPATH,"//*[@id='rankings']/div[3]/button").click() # print(4) # wait.until(EC.visibility_of_element_located((By.XPATH,"//*[@id='rankings']/div[3]/button"))) # print(5) count += 1 time.sleep(2) if count >=2: break Even though I did not write code to close the ad, I don't think the ad is the problem, since when I manually close it and then click the button, it is still not working. Is it a problem with the website? 
import requests import os from bs4 import BeautifulSoup from selenium import webdriver import time from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By A: It is clear that there is an anti-scraping control on this specific site. It is always recommended to consult the robots.txt file beforehand and check whether scraping is allowed on a certain site or not. In general, this site just blocks the IP (try to go to other pages afterwards; you will see that you get a 403 error). In general, however, the approach you used does not seem wrong to me. You can try contacting the site directly to see if the problem can be solved in some other way.
Selenium webpage not loading properly
I am trying to web scrape university ranking information from the USNews site. The problem is that when I use selenium to open the webpage, the 'Load More Button' is not working properly. (I think I successfully click it, but in the Chrome window opened by webdriver, when I scroll down to the button, it says 'We're sorry, there was a problem loading the next page of search results'.) I am new to web crawlers and I did a lot of research on this; there are several similar questions but none of those answers helped. I really need some help. Here is my code: driver_path = 'xxx' (chromedriver path) driver = webdriver.Chrome(executable_path=driver_path) url2 = 'https://www.usnews.com/education/best-global-universities/rankings' wait = WebDriverWait(driver, 30) driver.get(url2) driver.maximize_window() count = 1 while True: try: print(1) # driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") wait.until(EC.visibility_of_element_located((By.XPATH,"//*[@id='rankings']/div[3]/button"))) print(2) show_more = wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='rankings']/div[3]/button"))) ActionChains(browser).move_to_element(show_more).click().perform() print(3) # driver.find_element(By.XPATH,"//*[@id='rankings']/div[3]/button").click() # print(4) # wait.until(EC.visibility_of_element_located((By.XPATH,"//*[@id='rankings']/div[3]/button"))) # print(5) count += 1 time.sleep(2) if count >=2: break Even though I did not write code to close the ad, I don't think the ad is the problem, since when I manually close it and then click the button, it is still not working. Is it a problem with the website? import requests import os from bs4 import BeautifulSoup from selenium import webdriver import time from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By
[ "It is clear that there is an anti scraping control in the specific site. It is always recommended to consult the robots.txt file beforehand and check whether scraping is possible on a certain site or not.\nIn general, this site blocks just the IP (try to go to other pages afterwards, you will see that you will get a 403 error).\nIn general, however, the approach you used does not seem wrong to me. You can try contacting the site directly to see if the problem can be solved in some other way.\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "web_scraping" ]
stackoverflow_0074563125_python_selenium_web_scraping.txt
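The answer recommends consulting robots.txt before scraping; the standard library's urllib.robotparser can perform that check programmatically. A minimal sketch that parses example rules locally (the rules below are made up for illustration and are not the real usnews.com robots.txt):

```python
from urllib import robotparser

# Hypothetical robots.txt content -- NOT the site's actual rules
EXAMPLE_RULES = """\
User-agent: *
Disallow: /education/best-global-universities/rankings
"""

rp = robotparser.RobotFileParser()
rp.parse(EXAMPLE_RULES.splitlines())
# Against a live site you would instead do:
#   rp.set_url("https://example.com/robots.txt"); rp.read()

print(rp.can_fetch("*", "https://example.com/about"))  # True
print(rp.can_fetch("*", "https://example.com/education/best-global-universities/rankings"))  # False
```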
Q: Python function comparing characters in two strings I would like to write my own Python function (i.e. without using any other non-base Python functions) to compare the characters in two strings in the following way. If the letter in position i of string 1 is the same as the letter in position i of string 2 then "Green" is returned If the letter in position i of string 1 is the same as the letter in position [i-1] or [i+1] of string 2 then "Blue" is returned If the letter in position i of string 1 is not the same as the letters in position [i-1], i or [i+1] of string 2 then "White" is returned The final output of the function should be a tuple of the "Green" / "Blue" / "White" output for each letter. For example, if we call the function letter_comparison and write: def letter_comparison(string1, string2): ..... letter_comparison("chain", "chant") would return "Green", "Green", "Green", "White", "Blue". Any ideas would be appreciated. A: Have you tried using a for loop? def letter_comparison(string1, string2): myList = [] if len(string1) != len(string2): return for i in range(len(string1)): try: l2p = string2[i+1] except: l2p = None try: l2m = string2[i-1] except: l2m = None if string1[i] == string2[i]: myList.append("Green") elif string1[i] == l2p or string1[i] == l2m: myList.append("Blue") else: myList.append("White") return myList otherList = letter_comparison("chain", "chant") print(otherList) Otherwise, the question was pretty clear and you did a good job. A: This way, the function still works even when the two words differ in length. I have divided it into two functions for convenience. 
def Test(word1,word2,pos): if word1[pos] == word2[pos]: return "Green" elif (pos < (len(word1)-2)) and (word1[pos] == word2[pos+1]): return "Blue" elif (pos>0) and (word1[pos] == word2[pos-1]): return "Blue" else: return "White" def letter_comparison(test1, test2): risultato = [] if len(test1) < len(test2): for count, char in enumerate(test1): risultato.append(Test(test1, test2, count)) else: for count, char in enumerate(test2): risultato.append(Test(test1, test2, count)) return risultato
Python function comparing characters in two strings
I would like to write my own Python function (i.e. without using any other non base Python functions) to compare the characters in two strings in the following way. If the letter in position i of string 1 is the same as the letter in position i of string 2 then "Green" is returned If the letter in position i of string 1 is the same as the letter in position [i-1] or [i+1] of string 2 then "Blue" is returned If the letter in position i of string 1 is not the same as the letters in position [i-1] , i or [i+1] of string 2 then "White" is returned The final output of the function should be a tuple of the "Green" / "Blue" / "White" output for each letter. For example, if we call the function letter_comparison and write: def letter_comparison(string1, string2): ..... letter_comparison("chain", "chant") would return "Green", "Green", "Green", "White", "Blue". Any ideas would be appreciated.
[ "Have you tried using a for loop?\ndef letter_comparison(string1, string2):\n myList = []\n\n if len(string1) != len(string2):\n return\n \n for i in range(len(string1)):\n try:\n l2p = string2[i+1]\n except:\n l2p = None\n try:\n l2m = string2[i-1]\n except:\n l2m = None\n\n if string1[i] == string2[i]:\n myList.append(\"Green\")\n elif string1[i] == l2p or string1[i] == l2m:\n myList.append(\"Blue\")\n else:\n myList.append(\"White\")\n\n return myList\n\notherList = letter_comparison(\"chain\", \"chant\")\nprint(otherList)\n\nOtherwise, the question was pretty clear and you did a good job.\n", "In this way, the function still works depending on the length of the words.\nI have divided it into two functions for convenience.\ndef Test(word1,word2,pos):\n if word1[pos] == word2[pos]:\n return \"Green\"\n elif (pos < (len(word1)-2)) and (word1[pos] == word2[pos+1]):\n return \"Blue\"\n elif (pos>0) and (word1[pos] == word2[pos-1]):\n return \"Blue\"\n else:\n return \"White\"\n\ndef letter_comparison(test1, test2):\n risultato = []\n if len(test1) < len(test2):\n for count, char in enumerate(test1):\n risultato.append(Test(test1, test2, count))\n else:\n for count, char in enumerate(test2):\n risultato.append(Test(test1, test2, count))\n return risultato\n\n" ]
[ 0, 0 ]
[]
[]
[ "compare", "function", "python" ]
stackoverflow_0074562852_compare_function_python.txt
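Both answers special-case the boundary indices by hand; the same logic can be written more compactly by slicing the window string2[max(i-1, 0):i+2], since Python slices clamp at both ends automatically. A sketch of that variant, which also returns the tuple the question asks for:

```python
def letter_comparison(string1, string2):
    result = []
    for i, ch in enumerate(string1[: len(string2)]):
        if ch == string2[i]:
            result.append("Green")
        # The slice clamps at both ends, so no special cases for i == 0 or
        # the last index; the i-th letter itself was already handled above.
        elif ch in string2[max(i - 1, 0) : i + 2]:
            result.append("Blue")
        else:
            result.append("White")
    return tuple(result)  # the question asks for a tuple

print(letter_comparison("chain", "chant"))
# ('Green', 'Green', 'Green', 'White', 'Blue')
```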
Q: Extra coefficient in Ridge Regression I have 9 predictors (Clean df) but when I run the model I get 10 coefficients. Here is my code: #Get clean df with only more relevant columns Clean_indices = wkospi[['Open_sp','Close_sp','Close_jp','Open_eur','High_eur','Open_kos','Close_kos','1 Mo','2 Mo','1 Yr','2 Yr','Open_oil','Open_gold']] Clean_df = wkospi[['Close_jp','Open_sp','Open_eur','High_eur','Close_kos','3 Mo','6 Mo', '1 Yr', '2 Yr']] #Run the test for Clean df ALIAS "cd" cdx_train, cdx_test, cdy_train, cdy_test = train_test_split(Clean_df, Clean_indices['Close_sp'] , test_size=0.6, random_state = 4, shuffle = True) #Prepare train data and test data as polynomials cpr=PolynomialFeatures(degree=1) cdp_train=cpr.fit_transform(cdx_train) cdp_test=cpr.fit_transform(cdx_test) RigeModel_cd=Ridge(alpha = 1000) RigeModel_cd.fit(cdp_train, cdy_train) yhat_cd = RigeModel_cd.predict(cdp_test) But when I check the coefficients I get 10 instead. in>> RigeModel_cd.coef_ out>> array([ 0.00000000e+00, 4.66393448e-03, 9.60826030e-01, -8.18000961e-01, 8.78056763e-01, -9.08744162e-05, -3.30052619e-01, -4.24748286e-01, -5.42880494e-01, -6.49848520e-01]) Does anybody know why this is happening? A: PolynomialFeatures has include_bias=True by default, which adds a column of all 1s. Note that the first coefficient is exactly zero, because Ridge has killed that term in favor of its own intercept. A: Off the top of my head, there are probably 9 weights for your predictors and one constant as a whole-model bias or offset (I believe it is properly called an intercept, as in other models or regressions).
Extra coefficient in Ridge Regression
I have 9 predictors (Clean df) but when I run the model I get 10 coefficients. Here is my code: #Get clean df with only more relevant columns Clean_indices = wkospi[['Open_sp','Close_sp','Close_jp','Open_eur','High_eur','Open_kos','Close_kos','1 Mo','2 Mo','1 Yr','2 Yr','Open_oil','Open_gold']] Clean_df = wkospi[['Close_jp','Open_sp','Open_eur','High_eur','Close_kos','3 Mo','6 Mo', '1 Yr', '2 Yr']] #Run the test for Clean df ALIAS "cd" cdx_train, cdx_test, cdy_train, cdy_test = train_test_split(Clean_df, Clean_indices['Close_sp'] , test_size=0.6, random_state = 4, shuffle = True) #Prepare train data and test data as polynomials cpr=PolynomialFeatures(degree=1) cdp_train=cpr.fit_transform(cdx_train) cdp_test=cpr.fit_transform(cdx_test) RigeModel_cd=Ridge(alpha = 1000) RigeModel_cd.fit(cdp_train, cdy_train) yhat_cd = RigeModel_cd.predict(cdp_test) But when I check the coefficients I get 10 instead. in>> RigeModel_cd.coef_ out>> array([ 0.00000000e+00, 4.66393448e-03, 9.60826030e-01, -8.18000961e-01, 8.78056763e-01, -9.08744162e-05, -3.30052619e-01, -4.24748286e-01, -5.42880494e-01, -6.49848520e-01]) Does anybody know why this is happening?
[ "PolynomialFeatures has include_bias=True by default, which adds a column of all 1s. Note that the first coefficient is exactly zero, because Ridge has killed that term in favor of its own intercept.\n", "From the top of my head, there are probably 9 weigths for your predictors and one constant as a whole model bias or offset (i believe it is properly called intercept, as in other models or regressions).\n" ]
[ 2, 0 ]
[]
[]
[ "model", "python", "regression", "scikit_learn" ]
stackoverflow_0074563411_model_python_regression_scikit_learn.txt
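The accepted answer's include_bias=True point can be illustrated without sklearn. This pure-Python stand-in (not sklearn's actual implementation) mimics only the degree-1 behavior of PolynomialFeatures: it prepends a constant 1.0 column, which is why 9 predictors come back as 10 columns and hence 10 coefficients:

```python
def degree1_features(rows, include_bias=True):
    # Mimics the degree-1 case of sklearn's PolynomialFeatures: with
    # include_bias=True a constant 1.0 column is prepended to each row.
    return [([1.0] if include_bias else []) + list(row) for row in rows]

row = list(range(9))  # 9 predictors, as in the question
print(len(degree1_features([row])[0]))                      # 10 columns
print(len(degree1_features([row], include_bias=False)[0]))  # 9 columns
```

Passing include_bias=False to the real PolynomialFeatures would likewise drop the redundant column, since Ridge fits its own intercept.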
Q: Dataframe object not callable when working Python script moved from Replit to local TabPy server I have written a Python script that calls a National Oceanic and Atmospheric Administration (NOAA) endpoint with a zip code and gets a list of weather stations in response. The script then converts the response to a Pandas dataframe. I believe I have it working correctly based on this Replit.The dataframe appears to print to console correctly and I can inspect it using breakpoints. Using this blog tutorial as my guide, my real goal is to leverage this Python script in a Tableau Prep flow. Tableau Prep is basically a desktop ETL tool, similar to PowerQuery, but different :). I have a local working instance of a TabPy server, whose logs also appear to be showing proper construction of the dataframe (image below). However, I'm getting a TypeError : 'DataFrame' object is not callable. I've also provided an image of the same error surfaced in the Tableau Prep interface. Any help is sincerely appreciated. Here's the syntax of the actual script running on my TabPy server - with minimal modifications from what's on Replit. 
import requests; import pandas as pd; import json; zip = '97034' userToken = 'foobar123' headerCreds = dict(token = userToken) url = 'https://www.ncei.noaa.gov/cdo-web/api/v2/stations?&locationid=ZIP:' + zip global dfWorking def get_stations_for_zip(): r = requests.get(url, headers = headerCreds) data = json.loads(r.text) if 'results' in data: data = data.get('results') dfWorking = pd.DataFrame(data) # Column datatypes as received # elevation float64 # mindate object # maxdate object # latitude float64 # name int64 # datacoverage float64 # id object # elevationUnit object # longitude float64 dfWorking = dfWorking.astype({'name': 'str'}) # dfWorking['name'] = dfWorking.index # defining an index converts back to float64 print(dfWorking) else: print('no results object in response') return dfWorking # Note: the below prep functions are undefined until they are on a TabPy server def get_output_schema(): return pd.DataFrame({ 'elevation' : prep_decimal(), 'mindate' : prep_string(), 'maxdate' : prep_decimal(), 'latitude' : prep_date(), 'name' : prep_string(), 'datacoverage' : prep_decimal(), 'id' : prep_decimal(), 'name' : prep_string(), 'elevationUnit' : prep_decimal(), 'longitude' : prep_decimal() }); get_stations_for_zip() A: The solution required two changes: In the Tableau Prep interface, when stating the function name, I had get_stations_for_zip(), but needed get_stations_for_zip without parentheses. In my script, the get_stations_for_zip function needed to take "df" (for dataframe) as an argument. So def get_stations_for_zip(df):. Strangely, this argument is never used within the function, but it's necessary, and the blog I was referencing shows the same. Here's a quote from help.tableau.com's article "Use Python scripts in your flow": When you create your script, include a function that specifies a pandas (pd.DataFrame) as an argument of the function. This will call your data from Tableau Prep Builder.
A: This line is wrong: execution_result = get_stations_for_zip()(pd.DataFrame(_arg1)) because get_stations_for_zip is returning DataFrame, and you are treating it as a Python function, so you are trying: df = get_stations_for_zip() df(pd.DataFrame(_arg1)) # and error is right here
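Putting the accepted answer's two fixes together, the working shape of the script can be sketched like this — the station rows are stubbed out here (hypothetical values) so the pattern of the unused df argument is visible without calling the NOAA API:

```python
import pandas as pd

def get_stations_for_zip(df):
    # Tableau Prep / TabPy hands the flow's rows in as `df`; per the accepted
    # answer the parameter must exist even though this script never reads it.
    stations = [  # stubbed stand-in for the NOAA response's 'results' list
        {"name": "HYPOTHETICAL STATION A", "elevation": 12.0},
        {"name": "HYPOTHETICAL STATION B", "elevation": 47.5},
    ]
    dfWorking = pd.DataFrame(stations).astype({"name": "str"})
    return dfWorking
```

In the Prep interface the function is then referenced as get_stations_for_zip, with no parentheses.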
Dataframe object not callable when working Python script moved from Replit to local TabPy server
I have written a Python script that calls a National Oceanic and Atmospheric Administration (NOAA) endpoint with a zip code and gets a list of weather stations in response. The script then converts the response to a Pandas dataframe. I believe I have it working correctly based on this Replit.The dataframe appears to print to console correctly and I can inspect it using breakpoints. Using this blog tutorial as my guide, my real goal is to leverage this Python script in a Tableau Prep flow. Tableau Prep is basically a desktop ETL tool, similar to PowerQuery, but different :). I have a local working instance of a TabPy server, whose logs also appear to be showing proper construction of the dataframe (image below). However, I'm getting a TypeError : 'DataFrame' object is not callable. I've also provided an image of the same error surfaced in the Tableau Prep interface. Any help is sincerely appreciated. Here's the syntax of the actual script running on my TabPy server - with minimal modifications from what's on Replit. 
import requests; import pandas as pd; import json; zip = '97034' userToken = 'foobar123' headerCreds = dict(token = userToken) url = 'https://www.ncei.noaa.gov/cdo-web/api/v2/stations?&locationid=ZIP:' + zip global dfWorking def get_stations_for_zip(): r = requests.get(url, headers = headerCreds) data = json.loads(r.text) if 'results' in data: data = data.get('results') dfWorking = pd.DataFrame(data) # Column datatypes as received # elevation float64 # mindate object # maxdate object # latitude float64 # name int64 # datacoverage float64 # id object # elevationUnit object # longitude float64 dfWorking = dfWorking.astype({'name': 'str'}) # dfWorking['name'] = dfWorking.index # defining an index converts back to float64 print(dfWorking) else: print('no results object in response') return dfWorking # Note: the below prep functions are undefined until they are on a TabPy server def get_output_schema(): return pd.DataFrame({ 'elevation' : prep_decimal(), 'mindate' : prep_string(), 'maxdate' : prep_decimal(), 'latitude' : prep_date(), 'name' : prep_string(), 'datacoverage' : prep_decimal(), 'id' : prep_decimal(), 'name' : prep_string(), 'elevationUnit' : prep_decimal(), 'longitude' : prep_decimal() }); get_stations_for_zip()
[ "The solution required two changes:\n\nIn the Tableau Prep interface where stating the function name, I had get_stations_for_zip(), but needed get_stations_for_zip without parenthesis\n\n\n\nIn my script, the get_stations_for_zip function needed to take \"df\" (for dataframe) as an argument. So def get_stations_for_zip(df):. Strangely this argument is never used within the function, but it's necessary and the blog I was referencing shows the same.\n\nHere's a quote from help.tableau.com's article Use Python scripts in your flow\n\nWhen you create your script, include a function that specifies a pandas (pd.DataFrame) as an argument of the function. This will call your data from Tableau Prep Builder.\n\n", "This line is wrong:\nexecution_result = get_stations_for_zip()(pd.DataFrame(_arg1))\n\nbecause get_stations_for_zip is returning DataFrame, and you are threating it as python function, so you are trying:\ndf = get_stations_for_zip()\ndf(pd.DataFrame(_arg1)) # and error is right here\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python", "tableau_prep", "tabpy" ]
stackoverflow_0074553651_pandas_python_tableau_prep_tabpy.txt
Q: Airflow: How to get the current date of when data is inserted into a BigQuery table? I am inserting data from a GCS Bucket to BigQuery, and I am unsure how to get the current date of when the data is inserted into a column. This is my schema: load_csv = gcs_to_bq.GoogleCloudStorageToBigQueryOperator( task_id='gcs_to_bq_example', bucket='cloud-samples-data', source_objects=['SOURCE-FILE-LOCATION'], destination_project_dataset_table='airflow_test.gcs_to_bq_table', schema_fields=[ {'name': 'item', 'type': 'STRING', 'mode': 'NULLABLE'}, {'name': 'date', 'type': 'DATE', 'mode': 'NULLABLE'}, ], write_disposition='WRITE_TRUNCATE', dag=dag) So, in my schema, I have item and date. Therefore, when triggering my DAG to insert the data from the GCS Bucket to BigQuery, how do I make it so that the date column contains the current date of when the data gets inserted? For example, if I insert it today, then the date column should be 2022-11-24. A: There might be 2 ways to reach the desired result but not sure of either. The first one is to use default values as described here and add a column to your schema: schema_fields=[ {'name': 'item', 'type': 'STRING', 'mode': 'NULLABLE'}, {'name': 'date', 'type': 'DATE', 'mode': 'NULLABLE'}, {'name': 'load_date', 'type': 'DATE', 'default': 'CURRENT_DATE'}, ] However, this is pre-GA so not sure whether you can use it (also I haven't tested sorry). 
Other possibility would be to use Airflow templating ability and add another step: load_csv = gcs_to_bq.GoogleCloudStorageToBigQueryOperator( task_id='gcs_to_bq_example', bucket='cloud-samples-data', source_objects=['SOURCE-FILE-LOCATION'], destination_project_dataset_table='airflow_test.gcs_to_bq_table_{{ ds_nodash }}', schema_fields=[ {'name': 'item', 'type': 'STRING', 'mode': 'NULLABLE'}, {'name': 'date', 'type': 'DATE', 'mode': 'NULLABLE'}, ], write_disposition='WRITE_TRUNCATE', dag=dag) With this operation you'll get your file in a table, with the ingestion date (or timestamp if you use ts_nodash) in the table name. You're then free to use the BigqueryOperator to insert this staged data into your destination data with some SQL.
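For that last SQL step, the statement a downstream BigQuery task could run might look like the following — a sketch only: the staging/destination table names come from the snippet above, {{ ds_nodash }} is Airflow's templated run date, and CURRENT_DATE() is standard BigQuery SQL:

```python
# Build the INSERT statement a downstream BigQuery task would run.
# "{{ ds_nodash }}" stays literal here; Airflow resolves it at task runtime.
staging_table = "airflow_test.gcs_to_bq_table_{{ ds_nodash }}"
destination_table = "airflow_test.gcs_to_bq_table"

insert_sql = (
    f"INSERT INTO `{destination_table}` (item, date, load_date) "
    f"SELECT item, date, CURRENT_DATE() AS load_date "
    f"FROM `{staging_table}`"
)
```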
Airflow: How to get the current date of when data is inserted into a BigQuery table?
I am inserting data from a GCS Bucket to BigQuery, and I am unsure how to get the current date of when the data is inserted into a column. This is my schema: load_csv = gcs_to_bq.GoogleCloudStorageToBigQueryOperator( task_id='gcs_to_bq_example', bucket='cloud-samples-data', source_objects=['SOURCE-FILE-LOCATION'], destination_project_dataset_table='airflow_test.gcs_to_bq_table', schema_fields=[ {'name': 'item', 'type': 'STRING', 'mode': 'NULLABLE'}, {'name': 'date', 'type': 'DATE', 'mode': 'NULLABLE'}, ], write_disposition='WRITE_TRUNCATE', dag=dag) So, in my schema, I have item and date. Therefore, when triggering my DAG to insert the data from the GCS Bucket to BigQuery, how do I make it so that the date column contains the current date of when the data gets inserted? For example, if I insert it today, then the date column should be 2022-11-24.
[ "There might be 2 ways to reach the desired result but not sure of either.\nThe first one is to use default values as described here and add a column to your schema:\nschema_fields=[\n {'name': 'item', 'type': 'STRING', 'mode': 'NULLABLE'},\n {'name': 'date', 'type': 'DATE', 'mode': 'NULLABLE'},\n\n {'name': 'load_date', 'type': 'DATE', 'default': 'CURRENT_DATE'},\n]\n\nHowever, this is pre-GA so not sure whether you can use it (also I haven't tested sorry).\nOther possibility would be to use Airflow templating ability and add another step:\nload_csv = gcs_to_bq.GoogleCloudStorageToBigQueryOperator(\ntask_id='gcs_to_bq_example',\nbucket='cloud-samples-data',\nsource_objects=['SOURCE-FILE-LOCATION'],\ndestination_project_dataset_table='airflow_test.gcs_to_bq_table_{{ ds_nodash }}',\nschema_fields=[\n {'name': 'item', 'type': 'STRING', 'mode': 'NULLABLE'},\n {'name': 'date', 'type': 'DATE', 'mode': 'NULLABLE'},\n],\nwrite_disposition='WRITE_TRUNCATE',\ndag=dag)\n\nWith this operation you'll get your file in a table, with the ingestion date (or timestamp if you use ts_nodash) in the table name. You're then free to use the BigqueryOperator to insert this staged data into your destination data with some SQL.\n" ]
[ 1 ]
[]
[]
[ "airflow", "directed_acyclic_graphs", "google_bigquery", "google_cloud_platform", "python" ]
stackoverflow_0074561928_airflow_directed_acyclic_graphs_google_bigquery_google_cloud_platform_python.txt
Q: Check if user react with an certian emoji with cogs I have a problem. How can I check which emoji the user reacted with? That did not work for me How do you check if a specific user reacts to a specific message [discord.py] I want to check if the reaction is βœ… or ❌ folder structure β”œβ”€β”€ main.py β”œβ”€β”€ cogs β”‚ β”œβ”€β”€ member.py The problem is that I don't get an error message. Nothing happens. As soon as I react with an emoji, nothing happens. member.py import discord from discord.ext import commands from datetime import datetime class command(commands.Cog): def __init__(self, bot): self.bot = bot @commands.Cog.listener() async def on_message(self, message): ... ... for emoji in emojis: await msg.add_reaction(emoji) await message.author.send('Send me that βœ… reaction, mate') def check(reaction, user): return user == message.author and str(reaction.emoji) == 'βœ…' res = await self.bot.wait_for( "reaction_add", check=check, timeout=None ) print(res.emoji) if res.content.lower() == "βœ…": await message.author.send("Got it") else: await message.author.send("Thanks") confirmation = await self.bot.wait_for("reaction_add", check=check) await message.author.send("You responded with {}".format(reaction.emoji)) async def setup(bot): await bot.add_cog(command(bot)) import asyncio import os from dotenv import load_dotenv import discord from discord.ext import commands from discord.ext.commands import has_permissions load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') import discord from discord.utils import get class MyBot(commands.Bot): def __init__(self): intents = discord.Intents.default() intents.message_content = True super().__init__(command_prefix=commands.when_mentioned_or('-'), intents=intents, max_messages=1000) async def on_ready(self): print(f'Logged in as {self.user} (ID: {self.user.id})') async def setup_hook(self): for file in os.listdir("./cogs"): if file.endswith(".py"): extension = file[:-3] try: await self.load_extension(f"cogs.{extension}") 
print(f"Loaded extension '{extension}'") except Exception as e: exception = f"{type(e).__name__}: {e}" print(f"Failed to load extension {extension}\n{exception}") bot = MyBot() bot.run(TOKEN) A: You're using a function called check, and it doesn't exist - like your error is telling you. There's one in your class, but that isn't in the same scope so you can't just call it using its name. To access a method in a class, use self.<name>. Also, you should only pass the check function, not call it. (..., check=self.check) EDIT the code in the original question was edited. You're not loading your cogs asynchronously. Extensions & cogs were made async in 2.0. Docs: https://discordpy.readthedocs.io/en/stable/migrating.html#extension-and-cog-loading-unloading-is-now-asynchronous A: Put this code in your @commands.Cog.listener() decorator, and the code will work if your cogs loader is working. If you would like me to show you my cogs loader, I can. accept = 'βœ…' decline = '' def check(reaction, user): return user == author message = await ctx.send("test") await message.add_reaction(accept) await message.add_reaction(decline) reaction = await self.bot.wait_for("reaction_add", check=check) if str(reaction[0]) == accept: await ctx.send("Accepted") elif str(reaction[0]) == decline: await ctx.send("Denied")
Check if user reacts with a certain emoji with cogs
I have a problem. How can I check which emoji the user reacted with? That did not work for me How do you check if a specific user reacts to a specific message [discord.py] I want to check if the reaction is βœ… or ❌ folder structure β”œβ”€β”€ main.py β”œβ”€β”€ cogs β”‚ β”œβ”€β”€ member.py The problem is that I don't get an error message. Nothing happens. As soon as I react with an emoji, nothing happens. member.py import discord from discord.ext import commands from datetime import datetime class command(commands.Cog): def __init__(self, bot): self.bot = bot @commands.Cog.listener() async def on_message(self, message): ... ... for emoji in emojis: await msg.add_reaction(emoji) await message.author.send('Send me that βœ… reaction, mate') def check(reaction, user): return user == message.author and str(reaction.emoji) == 'βœ…' res = await self.bot.wait_for( "reaction_add", check=check, timeout=None ) print(res.emoji) if res.content.lower() == "βœ…": await message.author.send("Got it") else: await message.author.send("Thanks") confirmation = await self.bot.wait_for("reaction_add", check=check) await message.author.send("You responded with {}".format(reaction.emoji)) async def setup(bot): await bot.add_cog(command(bot)) import asyncio import os from dotenv import load_dotenv import discord from discord.ext import commands from discord.ext.commands import has_permissions load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') import discord from discord.utils import get class MyBot(commands.Bot): def __init__(self): intents = discord.Intents.default() intents.message_content = True super().__init__(command_prefix=commands.when_mentioned_or('-'), intents=intents, max_messages=1000) async def on_ready(self): print(f'Logged in as {self.user} (ID: {self.user.id})') async def setup_hook(self): for file in os.listdir("./cogs"): if file.endswith(".py"): extension = file[:-3] try: await self.load_extension(f"cogs.{extension}") print(f"Loaded extension '{extension}'") except Exception 
as e: exception = f"{type(e).__name__}: {e}" print(f"Failed to load extension {extension}\n{exception}") bot = MyBot() bot.run(TOKEN)
[ "You're using a function called check, and it doesn't exist - like your error is telling you. There's one in your class, but that isn't in the same scope so you can't just call it using its name.\nTo access a method in a class, use self.<name>.\nAlso, you should only pass the check function, not call it.\n(..., check=self.check\n\nEDIT the code in the original question was edited. You're not loading your cogs asynchronously. Extensions & cogs were made async in 2.0. Docs: https://discordpy.readthedocs.io/en/stable/migrating.html#extension-and-cog-loading-unloading-is-now-asynchronous\n", "Put this code in your @commands.Cog.listener() decorator, and the code will work if your cogs loader is working. If you would like me to show you my cogs loader, I can.\naccept = 'βœ…'\ndecline = ''\n\ndef check(reaction, user):\n return user == author\n\nmesssage = await ctx.send(\"test\")\nawait message.add_reaction(accept)\nawait message.add_reaction(decline)\n\nreaction = await self.bot.wait_for(\"reaction_add\", check=check)\n\nif str(reaction[0]) == accept:\n await ctx.send(\"Accepted\")\nelif str(reaction[0]) == decline:\n await ctx.send(\"Denied\") \n\n" ]
[ 1, 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074485957_discord_discord.py_python.txt
Q: discord.py how does hybrid commands work? I have a problem with my discord.py code for my bot. It is not showing up as a slash command in Discord's chat box. I wanted to rewrite my bot that I have been running for many months with discord.py 1.7.3, so I wanted to introduce slash commands. Now I have the problem that with my code the slash commands are not displayed BUT they work with the prefix (!), can anyone help me? Sideinfos: Discord.py = 2.1.0 Server = Linux Discord-Server ID: 1000794475683123362 Its not a cog! import sys import discord from discord import app_commands from discord.ext import commands, tasks import os import json from discord.ext.commands import Bot import random from random import randint import datetime import traceback import urllib.request, json import urllib from dotenv import load_dotenv from discord.ext.commands import clean_content from discord.ext.commands.cooldowns import BucketType from dislash import InteractionClient, SelectMenu, SelectOption from PIL import Image,ImageFont,ImageDraw from easy_pil import Editor, load_image_async, Font bot = commands.Bot(command_prefix=["!","?",","],intents=discord.Intents.all()) botcolor = 0xffffff @bot.hybrid_command(name='test',description='TEST') @app_commands.guilds(discord.Object(1000794475683123362)) async def test(ctx): await ctx.send("Test!") bot.run(MyToken) Ive researched the Internet but i didnt find any good anwsers that worked. A: Slash commands have to be registered to Discord. This is done through a process called syncing. By calling tree.sync(), you can push your changes to Discord to let them know about your commands. If you never sync, Discord has no idea you have slash commands. The exact same applies for regular slash commands as well, this isn't just for hybrids. Important note: do NOT auto-sync (syncing automatically in on_ready or setup_hook). A lot of people online do this, and it's a really bad idea. 
You should only have to sync whenever you change/remove a command, so not every time your bot starts. The ratelimits for this API call are very low and unforgiving, so if you spam this by doing it in on_ready (every single time your bot starts - which can be very often in development) then you'll get ratelimited. Syncing should be done in an owner-only message command, so only you can call it. If other people have access to this command, they can ratelimit you so hard you'll never be able to sync again. Syncing is as easy as calling await bot.tree.sync() in a message command. If you want to sync to a guild instead of globally, you can pass an ID as an argument. If you're wondering: the reason that message commands don't have to be synced is because they don't really exist. Discord doesn't know or care about them. These are parsed in your bot itself when the on_message event is triggered. Slash commands are integrated into the Discord UI, so this can't be done without pushing it somewhere.
discord.py how do hybrid commands work?
I have a problem with my discord.py code for my bot. It is not showing up as a slash command in Discord's chat box. I wanted to rewrite my bot that I have been running for many months with discord.py 1.7.3, so I wanted to introduce slash commands. Now I have the problem that with my code the slash commands are not displayed BUT they work with the prefix (!), can anyone help me? Sideinfos: Discord.py = 2.1.0 Server = Linux Discord-Server ID: 1000794475683123362 Its not a cog! import sys import discord from discord import app_commands from discord.ext import commands, tasks import os import json from discord.ext.commands import Bot import random from random import randint import datetime import traceback import urllib.request, json import urllib from dotenv import load_dotenv from discord.ext.commands import clean_content from discord.ext.commands.cooldowns import BucketType from dislash import InteractionClient, SelectMenu, SelectOption from PIL import Image,ImageFont,ImageDraw from easy_pil import Editor, load_image_async, Font bot = commands.Bot(command_prefix=["!","?",","],intents=discord.Intents.all()) botcolor = 0xffffff @bot.hybrid_command(name='test',description='TEST') @app_commands.guilds(discord.Object(1000794475683123362)) async def test(ctx): await ctx.send("Test!") bot.run(MyToken) Ive researched the Internet but i didnt find any good anwsers that worked.
[ "Slash commands have to be registered to Discord. This is done through a process called syncing. By calling tree.sync(), you can push your changes to Discord to let them know about your commands. If you never sync, Discord has no idea you have slash commands.\nThe exact same applies for regular slash commands as well, this isn't just for hybrids.\nImportant note: do NOT auto-sync (syncing automatically in on_ready or setup_hook). A lot of people online do this, and it's a really bad idea. You should only have to sync whenever you change/remove a command, so not every time your bot starts.\nThe ratelimits for this API call are very low and unforgiving, so if you spam this by doing it in on_ready (every single time your bot starts - which can be very often in development) then you'll get ratelimited.\nSyncing should be done in an owner-only message command, so only you can call it. If other people have access to this command, they can ratelimit you so hard you'll never be able to sync again.\nSyncing is as easy as calling await bot.tree.sync() in a message command. If you want to sync to a guild instead of globally, you can pass an ID as an argument.\nIf you're wondering: the reason that message commands don't have to be synced is because they don't really exist. Discord doesn't know or care about them. These are parsed in your bot itself when the on_message event is triggered. Slash commands are integrated into the Discord UI, so this can't be done without pushing it somewhere.\n" ]
[ 1 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074563671_discord_discord.py_python.txt
Q: How to load HTML table into Plotly hover...? I've been trying to implement an HTML table into my graph label, I have one column in my df with the html code but it isn't showing in the right format. Does anyone knows how can I change this? plotly code error if you hover the cursor in the graph you will see something like: table border=1 class="dataframe"><thead><tr style="text-align: right;"><th>Index</th><th>Place name</th><th>Qtd</th><th>AVG</th> A: Unfortunately, at this time it's impossible to render a table in the way that I intended. They are planning to make a future update where it will be possible.
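While a real HTML <table> won't render, Plotly hover labels do honor <br> line breaks, so a common fallback is to flatten each row's table into one <br>-joined string and assign it to the trace's hovertext. A sketch with made-up rows:

```python
rows = [  # hypothetical per-point tables
    {"Place name": "Alpha", "Qtd": 3, "AVG": 1.5},
    {"Place name": "Beta", "Qtd": 7, "AVG": 2.0},
]

def to_hovertext(row):
    # One "key: value" line per column, joined with Plotly-supported <br> tags
    return "<br>".join(f"{k}: {v}" for k, v in row.items())

hovertexts = [to_hovertext(r) for r in rows]  # pass as hovertext= on the trace
```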
How to load HTML table into Plotly hover...?
I've been trying to implement an HTML table into my graph label, I have one column in my df with the html code but it isn't showing in the right format. Does anyone knows how can I change this? plotly code error if you hover the cursor in the graph you will see something like: table border=1 class="dataframe"><thead><tr style="text-align: right;"><th>Index</th><th>Place name</th><th>Qtd</th><th>AVG</th>
[ "Unfortunately, at this time it's impossible to render a table in the way that I intended. They are planning to make a future update where it will be possible.\n" ]
[ 0 ]
[]
[]
[ "html", "plotly", "python" ]
stackoverflow_0074174997_html_plotly_python.txt
Q: Sphinx autodoc dies on ImportError of third party package There's any way to exclude the import part of a module and then document it with sphinx-python? I have a module that imports another package (other different project) and then the sphinx gives this error: """ File "/usr/local/lib/python2.7/dist-packages/Sphinx-1.1.3-py2.7.egg/sphinx/ext/autodoc.py", line 321, in import_object import(self.modname) File "/home/x/GitHub/project/mod_example1.py", line 33, in from other_pck import Klass, KlassStuff ImportError: No module named other_pck """ And if I comment the import parts in the module that calls/imports that package the sphinx can do the autodoc. I tried with all the sphinx autodoc modules: autoclass, automodule, etc... but the result is always the same once it try's to import the other package. Thanks A: You are fixing the issue wrong way. The correct way to fix the issue is to make Sphinx aware of your existing other packages as autodoc functionality must import Python packages to scan the source code. Python packages cannot be imported without all their dependencies resolved and you cannot cherry-pick lines of source code out of it, because this is how Python is built(*) Possible solutions are Creating a Python virtualenv environment where both Sphinx and the other packages reside, so that they can see each other http://opensourcehacker.com/2012/09/16/recommended-way-for-sudo-free-installation-of-python-software-with-virtualenv/ Setting PYTHONPATH environment variable or editing sys.path in Sphinx config file, so that missing packages are added in the import list when Sphinx is run http://scienceoss.com/minimal-sphinx-setup-for-autodocumenting-python-modules/ *) In theory you can, but this is outside the scope of Sphinx and this question A: Sometimes it's due to the machine that you're using to generating docs does not have the third party package imported by the python project. For example, there isn't pytorch on my current machine to write the docs. 
You can mock the required packages by adding the following in conf.py autodoc_mock_imports = ['packages', 'to', 'mock'] In my case, its autodoc_mock_imports = ['torch'] See this blog for detail.
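Both remedies from the answers live in conf.py and can be combined; a sketch, where the ../ path and the mocked module name are placeholders for your own layout:

```python
# conf.py (Sphinx configuration) -- both remedies shown together
import os
import sys

# Remedy 1: put the package's source tree on sys.path so autodoc can import it
sys.path.insert(0, os.path.abspath(".."))  # hypothetical relative location

# Remedy 2: mock away third-party imports not installed where the docs build
autodoc_mock_imports = ["other_pck"]
```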
Sphinx autodoc dies on ImportError of third party package
There's any way to exclude the import part of a module and then document it with sphinx-python? I have a module that imports another package (other different project) and then the sphinx gives this error: """ File "/usr/local/lib/python2.7/dist-packages/Sphinx-1.1.3-py2.7.egg/sphinx/ext/autodoc.py", line 321, in import_object import(self.modname) File "/home/x/GitHub/project/mod_example1.py", line 33, in from other_pck import Klass, KlassStuff ImportError: No module named other_pck """ And if I comment the import parts in the module that calls/imports that package the sphinx can do the autodoc. I tried with all the sphinx autodoc modules: autoclass, automodule, etc... but the result is always the same once it try's to import the other package. Thanks
[ "You are fixing the issue wrong way. The correct way to fix the issue is to make Sphinx aware of your existing other packages as autodoc functionality must import Python packages to scan the source code. Python packages cannot be imported without all their dependencies resolved and you cannot cherry-pick lines of source code out of it, because this is how Python is built(*)\nPossible solutions are\n\nCreating a Python virtualenv environment where both Sphinx and the other packages reside, so that they can see each other http://opensourcehacker.com/2012/09/16/recommended-way-for-sudo-free-installation-of-python-software-with-virtualenv/\nSetting PYTHONPATH environment variable or editing sys.path in Sphinx config file, so that missing packages are added in the import list when Sphinx is run http://scienceoss.com/minimal-sphinx-setup-for-autodocumenting-python-modules/\n\n*) In theory you can, but this is outside the scope of Sphinx and this question\n", "Sometimes it's due to the machine that you're using to generating docs does not have the third party package imported by the python project. For example, there isn't pytorch on my current machine to write the docs. You can mock the required packages by adding the following in conf.py\nautodoc_mock_imports = ['packages', 'to', 'mock']\n\nIn my case, its\nautodoc_mock_imports = ['torch']\n\nSee this blog for detail.\n" ]
[ 4, 0 ]
[]
[]
[ "autodoc", "python", "python_sphinx" ]
stackoverflow_0015088792_autodoc_python_python_sphinx.txt
Q: Affect groups() to panda column I have this dataframe and I want to split a column with a regular expression and create new columns in this dataframe : data = ["a:b-c","d:e-f"] df = pd.DataFrame(data, columns=['expr']) >>> df expr 0 a:b-c 1 d:e-f And here is what I want : >>> df expr one two three 0 a:b-c a b c 1 d:e-f d e f I tried this command but with the error : >>> df["one"], df["two"], df["three"] = df["expr"].apply(lambda x: re.match("^(.*):(.*)-(.*)$",x).groups()) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: not enough values to unpack (expected 3, got 2) How to have the expected result ? Thanx A: data = ["a:b-c","d:e-f"] df = pd.DataFrame(data, columns=['expr']) df expr 0 a:b-c 1 d:e-f df[['one', 'two', 'three']] = df['expr'].str.split(':|-', expand=True) df expr one two three 0 a:b-c a b c 1 d:e-f d e f
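An alternative that avoids juggling three separate assignments is str.extract with named groups, which yields the new columns directly — a sketch using the question's own data and regex:

```python
import pandas as pd

df = pd.DataFrame({"expr": ["a:b-c", "d:e-f"]})

# Named groups become the column names; join puts them beside the original.
parts = df["expr"].str.extract(r"^(?P<one>.*):(?P<two>.*)-(?P<three>.*)$")
df = df.join(parts)
```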
Affect groups() to panda column
I have this dataframe and I want to split a column with a regular expression and create new columns in this dataframe : data = ["a:b-c","d:e-f"] df = pd.DataFrame(data, columns=['expr']) >>> df expr 0 a:b-c 1 d:e-f And here is what I want : >>> df expr one two three 0 a:b-c a b c 1 d:e-f d e f I tried this command but with the error : >>> df["one"], df["two"], df["three"] = df["expr"].apply(lambda x: re.match("^(.*):(.*)-(.*)$",x).groups()) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: not enough values to unpack (expected 3, got 2) How to have the expected result ? Thanx
[ "data = [\"a:b-c\",\"d:e-f\"]\ndf = pd.DataFrame(data, columns=['expr'])\ndf\n\n expr\n0 a:b-c\n1 d:e-f\n\ndf[['one', 'two', 'three']] = df['expr'].str.split(':|-', expand=True)\ndf\n\n expr one two three\n0 a:b-c a b c\n1 d:e-f d e f\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074563702_pandas_python.txt
Q: Python class containing a temporary file I would like to create a Python class which contains a temporary file. If I use the usual tempfile.TemporaryFile() with a context manager to create a member variable in the constructor, then the context manager will close/delete the temporary file when the constructor exits. This is no good because I want the file to exist for the lifetime of the class. I see that I could create my own context managed class using __enter__ and __exit__ methods, does anyone have any examples of this? (Maybe I just need to add a line to delete the file to the example in the link?) Or maybe there's a better way of doing this? A: I came up with the following from tempfile import NamedTemporaryFile class TemporaryFile: def __init__(self, *, data: str): self._data = data def __enter__(self): self._file = NamedTemporaryFile("w+") self._file.write(self._data) self._file.flush() return self def __exit__(self, exc_type, exc_value, exc_tb): self._file.close() def sample_method(self): pass Which is used like so: with TemporaryFile(data=data) as file: file.sample_method()
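If the file should also be able to outlive any single with block, another sketch (stdlib only; the class and method names are illustrative) is to create it with delete=False and remove it explicitly when the owning object is done:

```python
import os
import tempfile

class ScratchFile:
    """Owns a named temp file for the object's lifetime."""

    def __init__(self, data: str = ""):
        # delete=False: closing the handle no longer removes the file,
        # so it persists until close() runs.
        self._file = tempfile.NamedTemporaryFile("w+", delete=False)
        self._file.write(data)
        self._file.flush()

    @property
    def name(self) -> str:
        return self._file.name

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        self.close()

    def close(self):
        self._file.close()
        if os.path.exists(self._file.name):
            os.unlink(self._file.name)
```

It can be used either as with ScratchFile("hi") as f: ... or kept as a long-lived attribute and close()d when the owner is finished.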
Python class containing a temporary file
I would like to create a Python class which contains a temporary file. If I use the usual tempfile.TemporaryFile() with a context manager to create a member variable in the constructor, then the context manager will close/delete the temporary file when the constructor exits. This is no good because I want the file to exist for the lifetime of the class. I see that I could create my own context managed class using __enter__ and __exit__ methods, does anyone have any examples of this? (Maybe I just need to add a line to delete the file to the example in the link?) Or maybe there's a better way of doing this?
[ "I came up with the following\nclass TemporaryFile:\n def __init__(self, *, data: str):\n self._data = data\n\n def __enter__(self):\n self._file = NamedTemporaryFile()\n self._file.write(data)\n self._file.flush()\n return self\n\n def __exit__(self, exc_type, exc_value, exc_tb):\n self._file.close()\n\n def sample_method():\n pass\n\nWhich is used like so:\nwith TemporaryFile(data=data) as file:\n file.sample_method()\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074559659_python.txt
Q: How to pass variables/functions from javaScript to Python and vice versa? I am creating a website in HTML, CSS and JavaScript where I require an AI powered chatbot. I have the required python file which consists of the logic for the chatbot (AI, NLTK). Now, in the python file, I have a function named "response()" which takes the user message as an argument and returns the processed response after running the NLP logic. What I want to do is, As soon as the user sends a message, The JavaScript would store that message in a variable (say, user-response) and should send that variable as an argument to the python file's "response()" function: response(user-response) The Python file should use the response(user-response) function and send the processed output to the JavaScript file How do I achieve this? Here's the python logic def response(user_response): #This argument has to be passed from JavaScript robo_response = '' sent_tokens.append(user_response) TfIdVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english') tfidf = TfIdVec.fit_transform(sent_tokens) vals = cosine_similarity(tfidf[-1], tfidf) idx = vals.argsort()[0][-2] flat = vals.flatten() flat.sort() req_tfidf = flat[-2] GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up", "hey") GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello", "I'm glad you're talking to me"] for word in user_response.split(): if (word.lower() in GREETING_INPUTS): return random.choice(GREETING_RESPONSES) if(req_tfidf == 0): robo_response = [ "Sorry, I have not been trained to answer that yet!", "Sorry, I cannot answer to that! 
] return random.choice(robo_response); robo_response = robo_response+sent_tokens[idx] return robo_response; response("") #This response has to be sent back to JavaScript Here's the JavaScript code function returnsUserMessage(){ var userResponse = document.getElementById('input-chat').value; console.log(userResponse); return userResponse; } A: I will put few steps for you to go through but as @Pointy said in the comment, "Exactly how you do all that is a very large topic for a single Stack Overflow question", so consider this as a roadmap. Side note: I assume you don't want to execute the AI logic in the frontend as this will be heavy on the client. 1- Create a backend server (or REST API) with Python. 2- Inject your AI logic in HTTP requests (GET/POST). Backend is a big topic but I will provide a small example here: from flask import Flask, json, request def response(user_response): ... api = Flask(__name__) @api.route('/response', methods=['POST']) def post_response(): user_response = json.loads(request.json)["user_response"] return json.dumps({response: response(user_response)}) if __name__ == '__main__': api.run() 3- From the frontend, send the user input to the backend (using the HTTP request you created in step 2) and then write back the response. Example: <button onclick="returnsUserMessage()">Send</button> <script> async function returnsUserMessage() { var user_input = document.getElementById("input-chat").value; var bot_reponse = await fetch('localhost/response', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({user_response: user_input}) }); // Then you need to present the bot response in your element } </script>
How to pass variables/functions from javaScript to Python and vice versa?
I am creating a website in HTML, CSS and JavaScript where I require an AI powered chatbot. I have the required python file which consists of the logic for the chatbot (AI, NLTK). Now, in the python file, I have a function named "response()" which takes the user message as an argument and returns the processed response after running the NLP logic. What I want to do is, As soon as the user sends a message, The JavaScript would store that message in a variable (say, user-response) and should send that variable as an argument to the python file's "response()" function: response(user-response) The Python file should use the response(user-response) function and send the processed output to the JavaScript file How do I achieve this? Here's the python logic def response(user_response): #This argument has to be passed from JavaScript robo_response = '' sent_tokens.append(user_response) TfIdVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english') tfidf = TfIdVec.fit_transform(sent_tokens) vals = cosine_similarity(tfidf[-1], tfidf) idx = vals.argsort()[0][-2] flat = vals.flatten() flat.sort() req_tfidf = flat[-2] GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up", "hey") GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello", "I'm glad you're talking to me"] for word in user_response.split(): if (word.lower() in GREETING_INPUTS): return random.choice(GREETING_RESPONSES) if(req_tfidf == 0): robo_response = [ "Sorry, I have not been trained to answer that yet!", "Sorry, I cannot answer to that! ] return random.choice(robo_response); robo_response = robo_response+sent_tokens[idx] return robo_response; response("") #This response has to be sent back to JavaScript Here's the JavaScript code function returnsUserMessage(){ var userResponse = document.getElementById('input-chat').value; console.log(userResponse); return userResponse; }
[ "I will put few steps for you to go through but as @Pointy said in the comment, \"Exactly how you do all that is a very large topic for a single Stack Overflow question\", so consider this as a roadmap.\nSide note: I assume you don't want to execute the AI logic in the frontend as this will be heavy on the client.\n1- Create a backend server (or REST API) with Python.\n2- Inject your AI logic in HTTP requests (GET/POST).\nBackend is a big topic but I will provide a small example here:\nfrom flask import Flask, json, request\n\ndef response(user_response):\n ...\napi = Flask(__name__)\n\n@api.route('/response', methods=['POST'])\ndef post_response():\n user_response = json.loads(request.json)[\"user_response\"]\n return json.dumps({response: response(user_response)})\n\nif __name__ == '__main__':\n api.run()\n\n3- From the frontend, send the user input to the backend (using the HTTP request you created in step 2) and then write back the response.\nExample:\n<button onclick=\"returnsUserMessage()\">Send</button>\n\n<script>\nasync function returnsUserMessage() {\n var user_input = document.getElementById(\"input-chat\").value;\n var bot_reponse = await fetch('localhost/response',\n {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify({user_response: user_input})\n });\n // Then you need to present the bot response in your element\n}\n</script>\n\n\n" ]
[ 0 ]
[]
[]
[ "artificial_intelligence", "javascript", "nltk", "python" ]
stackoverflow_0074563585_artificial_intelligence_javascript_nltk_python.txt
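The JSON handling in the Flask sketch above can be exercised without a running server; a minimal, framework-free version of the request/response round-trip (the `response` stub is illustrative, standing in for the NLTK logic from the question):

```python
import json

GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up", "hey")

def response(user_response: str) -> str:
    # stub for the real NLP logic in the question
    for word in user_response.split():
        if word.lower() in GREETING_INPUTS:
            return "hi there"
    return "Sorry, I have not been trained to answer that yet!"

def handle_request(body: str) -> str:
    """What the POST /response endpoint does with its JSON payload."""
    user_response = json.loads(body)["user_response"]
    return json.dumps({"response": response(user_response)})

reply = handle_request(json.dumps({"user_response": "hello bot"}))
```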
Q: How to merge similar columns into a single dictionary column in pandas I am trying to convert a dataframe that has similar naming convention into a single json format column. Sample Data: import pandas as pd df = pd.DataFrame({'id' : 1, 'userName' : 'john', 'productlist0.name' : 'shoe', 'productlist0.price' : 45.89, 'productlist0.brand' : 'nike', 'productlist1.name' : 'jeans', 'productlist1.price' : 19.45, 'productlist1.brand' : 'howes', 'productlist2.name' : 'watch', 'productlist2.price' : 60.0, 'productlist2.brand' : 'fossil' }, index = [0]) So we have bunch of columns starting with productlist and that share the column names after the period. I need to covert these columns into below json format: df1 = pd.DataFrame( {'id' : 1, 'userName' : 'john', 'productlist' : '''[{'name' : 'shoe', 'price' : 45.89, 'brand' : 'nike'}, {'name' : 'jeans', 'price' : 19.45, 'brand' : 'howes'}, {'name' : 'watch', 'price' : 60.0, 'brand' : 'fossil'}]''' }, index = [0] ) df1 id userName productlist 0 1 john [{'name' : 'shoe', 'price' : 45.89, 'brand' : 'nike'}, {'name' : 'jeans', 'price' : 19.45, 'brand' : 'howes'}, {'name' : 'watch', 'price' : 60.0, 'brand' : 'fossil'}] I tried to use the stack approach and got here: df.filter(regex = '^productlist').rename(columns = lambda x : re.sub(r'productlist\d\.', '', x)).stack().reset_index().\ groupby(['level_0', 'level_1'])[0].agg(dict) level_0 level_1 0 brand {2: 'nike', 5: 'howes', 8: 'fossil'} name {0: 'shoe', 3: 'jeans', 6: 'watch'} price {1: 45.89, 4: 19.45, 7: 60.0} Name: 0, dtype: object But I am not sure how to proceed from here. Could someone please help me on this. 
A: You can do it like: df1 = df.filter(regex="^productlist\d+.").T df1.index = pd.MultiIndex.from_tuples([(a[0], a[1]) for a in df1.index.str.split(".")]) product_values = df1.unstack().droplevel(0, axis=1).to_dict("records") df1 = pd.concat( [ df[["id", "userName"]], pd.DataFrame({"productlist": [product_values]}, index=[0]), ], axis=1, ) First get productlist columns. Then reindex with (productlist, key) format and unstack and get a dict in the form of "records" out of it. That will be your list of dicts. Use that to create a new dataframe with column "productlist" and then concat it to original df without productlist\d+. columns print(df1): id userName productlist 0 1 john [{'brand': 'nike', 'name': 'shoe', 'price': 45...
How to merge similar columns into a single dictionary column in pandas
I am trying to convert a dataframe that has similar naming convention into a single json format column. Sample Data: import pandas as pd df = pd.DataFrame({'id' : 1, 'userName' : 'john', 'productlist0.name' : 'shoe', 'productlist0.price' : 45.89, 'productlist0.brand' : 'nike', 'productlist1.name' : 'jeans', 'productlist1.price' : 19.45, 'productlist1.brand' : 'howes', 'productlist2.name' : 'watch', 'productlist2.price' : 60.0, 'productlist2.brand' : 'fossil' }, index = [0]) So we have bunch of columns starting with productlist and that share the column names after the period. I need to covert these columns into below json format: df1 = pd.DataFrame( {'id' : 1, 'userName' : 'john', 'productlist' : '''[{'name' : 'shoe', 'price' : 45.89, 'brand' : 'nike'}, {'name' : 'jeans', 'price' : 19.45, 'brand' : 'howes'}, {'name' : 'watch', 'price' : 60.0, 'brand' : 'fossil'}]''' }, index = [0] ) df1 id userName productlist 0 1 john [{'name' : 'shoe', 'price' : 45.89, 'brand' : 'nike'}, {'name' : 'jeans', 'price' : 19.45, 'brand' : 'howes'}, {'name' : 'watch', 'price' : 60.0, 'brand' : 'fossil'}] I tried to use the stack approach and got here: df.filter(regex = '^productlist').rename(columns = lambda x : re.sub(r'productlist\d\.', '', x)).stack().reset_index().\ groupby(['level_0', 'level_1'])[0].agg(dict) level_0 level_1 0 brand {2: 'nike', 5: 'howes', 8: 'fossil'} name {0: 'shoe', 3: 'jeans', 6: 'watch'} price {1: 45.89, 4: 19.45, 7: 60.0} Name: 0, dtype: object But I am not sure how to proceed from here. Could someone please help me on this.
[ "You can do it like:\ndf1 = df.filter(regex=\"^productlist\\d+.\").T\ndf1.index = pd.MultiIndex.from_tuples([(a[0], a[1]) for a in df1.index.str.split(\".\")])\nproduct_values = df1.unstack().droplevel(0, axis=1).to_dict(\"records\")\ndf1 = pd.concat(\n [\n df[[\"id\", \"userName\"]],\n pd.DataFrame({\"productlist\": [product_values]}, index=[0]),\n ],\n axis=1,\n)\n\nFirst get productlist columns. Then reindex with (productlist, key) format and unstack and get a dict in the form of \"records\" out of it. That will be your list of dicts. Use that to create a new dataframe with column \"productlist\" and then concat it to original df without productlist\\d+. columns\nprint(df1):\n id userName productlist\n0 1 john [{'brand': 'nike', 'name': 'shoe', 'price': 45...\n\n" ]
[ 1 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074563440_json_pandas_python.txt
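The core of the accepted answer, grouping `productlistN.key` columns into per-N records, can also be done without pandas reshaping; a plain-Python sketch (the `collapse` helper name is illustrative):

```python
import re
from collections import defaultdict

row = {
    "id": 1,
    "userName": "john",
    "productlist0.name": "shoe",
    "productlist0.price": 45.89,
    "productlist0.brand": "nike",
    "productlist1.name": "jeans",
    "productlist1.price": 19.45,
    "productlist1.brand": "howes",
}

def collapse(row, prefix="productlist"):
    """Fold prefixN.key columns into a list of dicts under `prefix`."""
    pattern = re.compile(rf"^{prefix}(\d+)\.(\w+)$")
    records = defaultdict(dict)
    rest = {}
    for key, value in row.items():
        m = pattern.match(key)
        if m:
            records[int(m.group(1))][m.group(2)] = value
        else:
            rest[key] = value
    rest[prefix] = [records[i] for i in sorted(records)]
    return rest

result = collapse(row)
```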
Q: how to embed fonts in PDFs produced by matplotlib? I'm using a font called a ttf font called FreeSans on linux with matplotlib. I create my figure as: from matplotlib import rc plt.rcParams['ps.useafm'] = True rc('font',**{'family':'sans-serif','sans-serif':['FreeSans']}) plt.rcParams['pdf.fonttype'] = 42 plt.figure() # plot figure... plt.savefig("myfig.pdf") When I open it on another program (e.g. illustrator on Mac OS X) then the font does not appear and the default font is used instead, since FreeSans is unavailable. How can I make it so matplotlib embeds the font in every PDF it produces? I don't mind if the file is larger. Thanks. A: I have the same problem when producing pdf with matplotlib. Interesting if I specify using TrueType in pdf, the font will be embedded: matplotlib.rc('pdf', fonttype=42) A: Are you sure that it's not doing it already? From the website: matplotlib has excellent text support, including mathematical expressions, truetype support for raster and vector outputs, newline separated text with arbitrary rotations, and unicode support. Because we embed the fonts directly in the output documents, eg for postscript or PDF, what you see on the screen is what you get in the hardcopy. Back in the day, I used to output a .ps document and use ps2pdf with the -dEmbedAllFonts=true option. A: I recently had the same issue and then realized that if I turn off "text.usetex" with text.usetex : False in my mplstyle file; the fonts will be fine. I have tried with "fonttype=42", but it simply does not work. I think the issue is that when rendering the texts with LaTeX, the fonts are not properly embedded somehow... I tried to change fonts, but as long as text.usetex is set as True, I can not get the fonts appropriately embedded. So, unfortunately, the current solution is to turn off the LaTeX rendering...
how to embed fonts in PDFs produced by matplotlib?
I'm using a font called a ttf font called FreeSans on linux with matplotlib. I create my figure as: from matplotlib import rc plt.rcParams['ps.useafm'] = True rc('font',**{'family':'sans-serif','sans-serif':['FreeSans']}) plt.rcParams['pdf.fonttype'] = 42 plt.figure() # plot figure... plt.savefig("myfig.pdf") When I open it on another program (e.g. illustrator on Mac OS X) then the font does not appear and the default font is used instead, since FreeSans is unavailable. How can I make it so matplotlib embeds the font in every PDF it produces? I don't mind if the file is larger. Thanks.
[ "I have the same problem when producing pdf with matplotlib.\nInteresting if I specify using TrueType in pdf, the font will be embedded:\nmatplotlib.rc('pdf', fonttype=42)\n\n", "Are you sure that it's not doing it already? From the website:\n\nmatplotlib has excellent text support, including mathematical\n expressions, truetype support for raster and vector outputs, newline\n separated text with arbitrary rotations, and unicode support. Because\n we embed the fonts directly in the output documents, eg for postscript\n or PDF, what you see on the screen is what you get in the hardcopy.\n\nBack in the day, I used to output a .ps document and use ps2pdf with the -dEmbedAllFonts=true option.\n", "I recently had the same issue and then realized that if I turn off \"text.usetex\" with text.usetex : False in my mplstyle file; the fonts will be fine.\nI have tried with \"fonttype=42\", but it simply does not work.\nI think the issue is that when rendering the texts with LaTeX, the fonts are not properly embedded somehow... I tried to change fonts, but as long as text.usetex is set as True, I can not get the fonts appropriately embedded.\nSo, unfortunately, the current solution is to turn off the LaTeX rendering...\n" ]
[ 14, 2, 0 ]
[]
[]
[ "matplotlib", "pdf", "python" ]
stackoverflow_0009054884_matplotlib_pdf_python.txt
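The `fonttype=42` setting mentioned in the first answer can also be made permanent in a `matplotlibrc` file rather than set per script; a config fragment along these lines (42 selects TrueType output, while the default of 3 emits Type 3 fonts that some external tools handle poorly):

```
pdf.fonttype : 42   # TrueType; the default 3 (Type 3) trips up some PDF tools
ps.fonttype  : 42
```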
Q: creating multiple files using os in Python Hi folks, I am using ROS Noetic and I have to create 12 files named x.bag, with x ranging up to 12. The code is the following. import rospy import os for x in range(12): cmd='rosbag record -o /home/mubashir/catkin_ws/src/germany1_trush/rosbag/x.bag /web_cam --duration 5 ' os.system(cmd) How do I get the value of x in cmd? I am creating 12 files of 5-second duration using os, each with a different name, but I am not able to access the value of x inside cmd.
creating multiple files using os in Python
Hi folks, I am using ROS Noetic and I have to create 12 files named x.bag, with x ranging up to 12. The code is the following. import rospy import os for x in range(12): cmd='rosbag record -o /home/mubashir/catkin_ws/src/germany1_trush/rosbag/x.bag /web_cam --duration 5 ' os.system(cmd) How do I get the value of x in cmd? I am creating 12 files of 5-second duration using os, each with a different name, but I am not able to access the value of x inside cmd.
[ "I'm not sure I understand your question exactly. I think what you want is to run the following command 12 times (from 0 to 11):\nimport rospy\nimport os\nfor x in range(12): \n cmd = f'rosbag record -o /home/mubashir/catkin_ws/src/germany1_trush/rosbag/{x}.bag /web_cam --duration 5'\n os.system(cmd)\n\nYou probably want 1..12 which you can easily do with {x + 1}.\nBTW, this is called a \"Literal String Interpolation\", aka f-string. Pretty handy.\n" ]
[ 1 ]
[]
[]
[ "for_loop", "linux", "python" ]
stackoverflow_0074563392_for_loop_linux_python.txt
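The f-string interpolation the answer relies on can be checked in isolation before shelling out to rosbag; a small sketch (the `bag_command` helper name is illustrative):

```python
def bag_command(x: int,
                base: str = "/home/mubashir/catkin_ws/src/germany1_trush/rosbag") -> str:
    # builds the shell command for the x-th bag file
    return f"rosbag record -o {base}/{x}.bag /web_cam --duration 5"

commands = [bag_command(x) for x in range(12)]
```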
Q: Using "python -m" to call Python script not working Basically the title. File structure below with code examples. Relevant project structure: drf/ β”œβ”€ backend/ β”œβ”€ py_client/ β”‚ β”œβ”€ basic.py β”œβ”€ venv/ β”œβ”€ requirements.txt I know that using "python -m" is best practice for venvs, and I understand that the reason for this is to use the currently activated Python version, managing dependencies etc. etc. But what I don't understand is why it affects running a script via CLI the way it does. Method + Response 1: (venv) PS C:\Users\cjrow\DjangoProjects\drf> python -m py_client/basic.py C:\Users\cjrow\DjangoProjects\drf\venv\Scripts\python.exe: Error while finding module specification for 'py_client/basic.py' (ModuleNotFoundError: No module named 'py_client/basic'). Try using 'py_client/basic' instead of 'py_client/basic.py' as the module name. So I followed the suggestion and removed the .py, though I don't even understand why that was a suggestion. Method + Result 2: (venv) PS C:\Users\cjrow\DjangoProjects\drf> python -m py_client/basic C:\Users\cjrow\DjangoProjects\drf\venv\Scripts\python.exe: No module named py_client/basic This obviously didn't work at all. So I tried without -m: Method + Result 3 (basic.py just contains print("It's working"): (venv) PS C:\Users\cjrow\DjangoProjects\drf> python py_client/basic.py It's working And then, just out of curiosity: Method + Result 4: (venv) PS C:\Users\cjrow\DjangoProjects\drf> python py_client/basic C:\Users\cjrow\AppData\Local\Programs\Python\Python311\python.exe: can't open file 'C:\\Users\\cjrow\\DjangoProjects\\drf\\py_client\\basic': [Errno 2] No such file or directory I understand why 3 works and I understand why 4 doesn't work, but I don't understand why neither 1 nor 2 work. Thanks! A: When you use -m flag you are telling python to read the script as a module, this requires a special file in the folder where your script is with the name __init__.py more info here Why init.py. 
Then you can use -m flag. I reproduced you error with the following structure. b/ |--a/ |--|script.py Solved adding. b/ |--a/ |--|script.py |--|__init__.py file script.py contains print('hi') b contains a and a contains script.py and init.py so from terminal I can do python3 -m b.a.script hi if I am in folder where b is or if you are in b itself then python3 -m a.script hi Or in a itself python3 -m script hi Hope this helps, this is my first answer.
Using "python -m" to call Python script not working
Basically the title. File structure below with code examples. Relevant project structure: drf/ β”œβ”€ backend/ β”œβ”€ py_client/ β”‚ β”œβ”€ basic.py β”œβ”€ venv/ β”œβ”€ requirements.txt I know that using "python -m" is best practice for venvs, and I understand that the reason for this is to use the currently activated Python version, managing dependencies etc. etc. But what I don't understand is why it affects running a script via CLI the way it does. Method + Response 1: (venv) PS C:\Users\cjrow\DjangoProjects\drf> python -m py_client/basic.py C:\Users\cjrow\DjangoProjects\drf\venv\Scripts\python.exe: Error while finding module specification for 'py_client/basic.py' (ModuleNotFoundError: No module named 'py_client/basic'). Try using 'py_client/basic' instead of 'py_client/basic.py' as the module name. So I followed the suggestion and removed the .py, though I don't even understand why that was a suggestion. Method + Result 2: (venv) PS C:\Users\cjrow\DjangoProjects\drf> python -m py_client/basic C:\Users\cjrow\DjangoProjects\drf\venv\Scripts\python.exe: No module named py_client/basic This obviously didn't work at all. So I tried without -m: Method + Result 3 (basic.py just contains print("It's working"): (venv) PS C:\Users\cjrow\DjangoProjects\drf> python py_client/basic.py It's working And then, just out of curiosity: Method + Result 4: (venv) PS C:\Users\cjrow\DjangoProjects\drf> python py_client/basic C:\Users\cjrow\AppData\Local\Programs\Python\Python311\python.exe: can't open file 'C:\\Users\\cjrow\\DjangoProjects\\drf\\py_client\\basic': [Errno 2] No such file or directory I understand why 3 works and I understand why 4 doesn't work, but I don't understand why neither 1 nor 2 work. Thanks!
[ "When you use -m flag you are telling python to read the script as a module, this requires a special file in the folder where your script is with the name __init__.py more info here Why init.py.\nThen you can use -m flag.\nI reproduced you error with the following structure.\nb/\n|--a/\n|--|script.py\n\nSolved adding.\nb/\n|--a/\n|--|script.py\n|--|__init__.py\n\nfile script.py contains\nprint('hi')\n\nb contains a and a contains script.py and init.py so from terminal I can do python3 -m b.a.script\nhi\n\nif I am in folder where b is or if you are in b itself then python3 -m a.script \nhi\n\nOr in a itself python3 -m script\nhi\n\nHope this helps, this is my first answer.\n" ]
[ 1 ]
[]
[]
[ "python", "terminal" ]
stackoverflow_0074562771_python_terminal.txt
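A rough illustration of why `python -m py_client/basic.py` fails while `python py_client/basic.py` works: `-m` takes a dotted module name, not a filesystem path. The checker below is a deliberate simplification (it ignores the package `__init__.py` requirement discussed in the answer):

```python
def looks_like_module_name(name: str) -> bool:
    """Crude check: dot-separated identifiers, no slashes, no .py suffix."""
    return all(part.isidentifier() for part in name.split("."))

ok = looks_like_module_name("py_client.basic")      # what -m expects
bad = looks_like_module_name("py_client/basic.py")  # what was typed
```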
Q: Pycharm does not display database tables After updating PyCharm (version 2017.1), PyCharm does not display sqlite3 database tables anymore. I've tested the connection and it's working. In sqlite client I can list all tables and make queries. Has anyone else had this problem? And if so, how did you solve it? A: I am using PyCharm Professional v2017.3. In Database pane, click the plus button and add Data Source -> Sqlite (Xerial). Data Sources and Drivers settings will open up where you will see Driver: Sqlite (Xerial). This does not mean drivers are fully installed. Look at the bottom-left of the pain for a message. If it has a warning sign about missing files, install missing files. Otherwise it will say no objects. A: After clicking on the View => Tools => Window => Database click on the green plus icon and then on Data Source => Sqlite (Xerial). Then, on the window that opens install the driver (it's underneath the Test Connection button) that is proposing (Sqlite (Xerial)). That should do it both for db.sqlite3 and identifier.sqlite. I have never any problem with Sqlite database, showing on PyCharm IDE. A: Yeah, Pycharm seems stupid on that, I've stubled upon the solution with a bunch of luck: Make sure you have all schemas shown: db->Schemas Still, the tables do not show..until you select the db and hit Reload: Cmd+R or Context menu A: It might be stupid thing but consider that you have to connect actual DB and not only indentifier. The fastest way is to double tap on the DB file in the file explorer and Pycharm will automatically suggests everything. Pycharm screen
Pycharm does not display database tables
After updating PyCharm (version 2017.1), PyCharm does not display sqlite3 database tables anymore. I've tested the connection and it's working. In sqlite client I can list all tables and make queries. Has anyone else had this problem? And if so, how did you solve it?
[ "I am using PyCharm Professional v2017.3.\nIn Database pane, click the plus button and add Data Source -> Sqlite (Xerial). Data Sources and Drivers settings will open up where you will see Driver: Sqlite (Xerial). This does not mean drivers are fully installed. Look at the bottom-left of the pain for a message. If it has a warning sign about missing files, install missing files. Otherwise it will say no objects.\n", "After clicking on the View => Tools => Window => Database click on the green plus icon and then on Data Source => Sqlite (Xerial). Then, on the window that opens install the driver (it's underneath the Test Connection button) that is proposing (Sqlite (Xerial)).\nThat should do it both for db.sqlite3 and identifier.sqlite. I have never any problem with Sqlite database, showing on PyCharm IDE.\n", "Yeah, Pycharm seems stupid on that, I've stubled upon the solution with a bunch of luck:\n\nMake sure you have all schemas shown: db->Schemas\nStill, the tables do not show..until you select the db and hit Reload: Cmd+R or Context menu\n\n", "It might be stupid thing but consider that you have to connect actual DB and not only indentifier.\nThe fastest way is to double tap on the DB file in the file explorer and Pycharm will automatically suggests everything.\nPycharm screen\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "django", "pycharm", "python", "sqlite" ]
stackoverflow_0043075420_django_pycharm_python_sqlite.txt
Q: Matplotlib PDF export uses wrong font I want to generate high-quality diagrams for a presentation. I’m using Python’s matplotlib to generate the graphics. Unfortunately, the PDF export seems to ignore my font settings. I tried setting the font both by passing a FontProperties object to the text drawing functions and by setting the option globally. For the record, here is a MWE to reproduce the problem: import scipy import matplotlib matplotlib.use('cairo') import matplotlib.pylab as pylab import matplotlib.font_manager as fm data = scipy.arange(5) for font in ['Helvetica', 'Gill Sans']: fig = pylab.figure() ax = fig.add_subplot(111) ax.bar(data, data) ax.set_xticks(data) ax.set_xticklabels(data, fontproperties = fm.FontProperties(family = font)) pylab.savefig('foo-%s.pdf' % font) In both cases, the produced output is identical and uses Helvetica (and yes, I do have both fonts installed). Just to be sure, the following doesn’t help either: matplotlib.rc('font', family = 'Gill Sans') Finally, if I replace the backend, instead using the native viewer: matplotlib.use('MacOSX') I do get the correct font displayed – but only in the viewer GUI. The PDF output is once again wrong. To be sure – I can set other fonts – but only other classes of font families: I can set serif fonts or fantasy or monospace. But all sans-serif fonts seem to default to Helvetica. 
A: Basically, @Jouni’s is the right answer but since I still had some trouble getting it to work, here’s my final solution: #!/usr/bin/env python2.6 import scipy import matplotlib matplotlib.use('cairo') import matplotlib.pylab as pylab import matplotlib.font_manager as fm font = fm.FontProperties( family = 'Gill Sans', fname = '/Library/Fonts/GillSans.ttc') data = scipy.arange(5) fig = pylab.figure() ax = fig.add_subplot(111) ax.bar(data, data) ax.set_yticklabels(ax.get_yticks(), fontproperties = font) ax.set_xticklabels(ax.get_xticks(), fontproperties = font) pylab.savefig('foo.pdf') Notice that the font has to be set explicitly using the fontproperties key. Apparently, there’s no rc setting for the fname property (at least I didn’t find it). Giving a family key in the instantiation of font isn’t strictly necessary here, it will be ignored by the PDF backend. This code works with the cairo backend only. Using MacOSX won’t work. A: The "family" argument and the corresponding rc parameter are not meant to specify the name of the font can actually be used this way. There's an (arguably baroque) CSS-like font selection system that helps the same script work on different computers, selecting the closest font available. The usually recommended way to use e.g. Gill Sans is to add it to the front of the value of the rc parameter font.sans-serif (see sample rc file), and then set font.family to sans-serif. This can be annoying if the font manager decides for some obscure reason that Gill Sans is not the closest match to your specification. A way to bypass the font selection logic is to use FontProperties(fname='/path/to/font.ttf') (docstring). In your case, I suspect that the MacOSX backend uses fonts via the operating system's mechanisms and so automatically supports all kinds of fonts, but the pdf backend has its own font support code that doesn't support your version of Gill Sans. A: This is an addition to the answers above if you came here for a non-cairo backend. 
The pdf-backend of matplotlib does not yet support true type font collections (saved as .ttc files). See this issue. The currently suggested workaround is to extract the font-of-interest from a .ttc file and save it as a .ttf file. And then use that font in the way described by Konrad Rudolph. You can use the python-package fonttools to achieve this: font = TTFont("/System/Library/Fonts/Helvetica.ttc", fontNumber=0) font.save("Helvetica-regular.ttf") As far as I can see, it is not possible to make this setting "global" by passing the path to this new .ttf file to the rc. If you are really desperate, you could try to extract all fonts from a .ttc into separate .ttf files, uninstall the .ttc and install the ttfs separately. To have the extracted font side-by-side with the original font from the .ttc, you need to change the font name with tools like FontForge. I haven't tested this, though. A: Check if you are rendering the text with LaTeX, i.e., if text.usetex is set to True. Because LaTeX rendering only supports a few fonts, it largely ignores/overwrites your other fonts settings. This might be the cause.
Matplotlib PDF export uses wrong font
I want to generate high-quality diagrams for a presentation. I’m using Python’s matplotlib to generate the graphics. Unfortunately, the PDF export seems to ignore my font settings. I tried setting the font both by passing a FontProperties object to the text drawing functions and by setting the option globally. For the record, here is a MWE to reproduce the problem: import scipy import matplotlib matplotlib.use('cairo') import matplotlib.pylab as pylab import matplotlib.font_manager as fm data = scipy.arange(5) for font in ['Helvetica', 'Gill Sans']: fig = pylab.figure() ax = fig.add_subplot(111) ax.bar(data, data) ax.set_xticks(data) ax.set_xticklabels(data, fontproperties = fm.FontProperties(family = font)) pylab.savefig('foo-%s.pdf' % font) In both cases, the produced output is identical and uses Helvetica (and yes, I do have both fonts installed). Just to be sure, the following doesn’t help either: matplotlib.rc('font', family = 'Gill Sans') Finally, if I replace the backend, instead using the native viewer: matplotlib.use('MacOSX') I do get the correct font displayed – but only in the viewer GUI. The PDF output is once again wrong. To be sure – I can set other fonts – but only other classes of font families: I can set serif fonts or fantasy or monospace. But all sans-serif fonts seem to default to Helvetica.
[ "Basically, @Jouni’s is the right answer but since I still had some trouble getting it to work, here’s my final solution:\n#!/usr/bin/env python2.6\n\nimport scipy\nimport matplotlib\nmatplotlib.use('cairo')\nimport matplotlib.pylab as pylab\nimport matplotlib.font_manager as fm\n\nfont = fm.FontProperties(\n family = 'Gill Sans', fname = '/Library/Fonts/GillSans.ttc')\n\ndata = scipy.arange(5)\nfig = pylab.figure()\nax = fig.add_subplot(111)\nax.bar(data, data)\nax.set_yticklabels(ax.get_yticks(), fontproperties = font)\nax.set_xticklabels(ax.get_xticks(), fontproperties = font)\npylab.savefig('foo.pdf')\n\nNotice that the font has to be set explicitly using the fontproperties key. Apparently, there’s no rc setting for the fname property (at least I didn’t find it).\nGiving a family key in the instantiation of font isn’t strictly necessary here, it will be ignored by the PDF backend.\nThis code works with the cairo backend only. Using MacOSX won’t work.\n", "The \"family\" argument and the corresponding rc parameter are not meant to specify the name of the font can actually be used this way. There's an (arguably baroque) CSS-like font selection system that helps the same script work on different computers, selecting the closest font available. The usually recommended way to use e.g. Gill Sans is to add it to the front of the value of the rc parameter font.sans-serif (see sample rc file), and then set font.family to sans-serif.\nThis can be annoying if the font manager decides for some obscure reason that Gill Sans is not the closest match to your specification. 
A way to bypass the font selection logic is to use FontProperties(fname='/path/to/font.ttf') (docstring).\nIn your case, I suspect that the MacOSX backend uses fonts via the operating system's mechanisms and so automatically supports all kinds of fonts, but the pdf backend has its own font support code that doesn't support your version of Gill Sans.\n", "This is an addition to the answers above if you came here for a non-cairo backend.\nThe pdf-backend of matplotlib does not yet support true type font collections (saved as .ttc files). See this issue.\nThe currently suggested workaround is to extract the font-of-interest from a .ttc file and save it as a .ttf file. And then use that font in the way described by Konrad Rudolph.\nYou can use the python-package fonttools to achieve this:\nfont = TTFont(\"/System/Library/Fonts/Helvetica.ttc\", fontNumber=0)\nfont.save(\"Helvetica-regular.ttf\")\n\nAs far as I can see, it is not possible to make this setting \"global\" by passing the path to this new .ttf file to the rc. If you are really desperate, you could try to extract all fonts from a .ttc into separate .ttf files, uninstall the .ttc and install the ttfs separately. To have the extracted font side-by-side with the original font from the .ttc, you need to change the font name with tools like FontForge. I haven't tested this, though. \n", "Check if you are rendering the text with LaTeX, i.e., if text.usetex is set to True. Because LaTeX rendering only supports a few fonts, it largely ignores/overwrites your other fonts settings. This might be the cause.\n" ]
[ 8, 3, 0, 0 ]
[]
[]
[ "cairo", "macos", "matplotlib", "python" ]
stackoverflow_0002797525_cairo_macos_matplotlib_python.txt
Q: Use Git commands within Python code I have been asked to write a script that pulls the latest code from Git, makes a build, and performs some automated unit tests. I found that there are two Python modules for interacting with Git that are readily available: GitPython and libgit2. What approach/module should I use? A: An easier solution would be to use the Python subprocess module to call git. In your case, this would pull the latest code and build: import subprocess subprocess.call(["git", "pull"]) subprocess.call(["make"]) subprocess.call(["make", "test"]) Docs: subprocess - Python 2.x subprocess - Python 3.x A: I agree with Ian Wetherbee. You should use subprocess to call git directly. If you need to perform some logic on the output of the commands then you would use the following subprocess call format. import subprocess PIPE = subprocess.PIPE branch = 'my_branch' process = subprocess.Popen(['git', 'pull', branch], stdout=PIPE, stderr=PIPE) stdoutput, stderroutput = process.communicate() if 'fatal' in stdoutput: # Handle error case else: # Success! A: So with Python 3.5 and later, the .call() method has been superseded. https://docs.python.org/3.6/library/subprocess.html#older-high-level-api The current recommended method is to use the .run() method on subprocess. import subprocess subprocess.run(["git", "pull"]) subprocess.run(["make"]) subprocess.run(["make", "test"]) Adding this as when I went to read the docs, the links above contradicted the accepted answer and I had to do some research. Adding my 2 cents to hopefully save someone else a bit of time. A: In EasyBuild, we rely on GitPython, and that's working out fine. See here for examples of how to use it. A: If the GitPython package doesn't work for you there are also the PyGit and Dulwich packages. These can be easily installed through pip. But, I have personally just used the subprocess calls. Works perfectly for what I needed, which was just basic git calls. 
For something more advanced, I'd recommend a git package. A: I had to use shlex on top of the run call because my command was too complex for the subprocess alone to understand. import subprocess import shlex git_command = "git <command>" subprocess.run(shlex.split(git_command))
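A small addendum to the subprocess answers above: a hedged sketch (the helper name run_cmd is mine, not from any answer) showing how subprocess.run with check=True replaces the manual 'fatal' string test, since a failing command then raises CalledProcessError. The demo uses the Python interpreter as a stand-in command so the snippet runs even where git is not installed.

```python
import subprocess
import sys

def run_cmd(args):
    """Run a command, raising CalledProcessError on a non-zero exit.

    check=True replaces the manual `'fatal' in stdoutput` test shown
    above: a failing `git pull` raises instead of being silently ignored.
    """
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# A real build script would call, e.g.:
#   run_cmd(["git", "pull"]); run_cmd(["make"]); run_cmd(["make", "test"])
# The demo below uses the Python interpreter as a stand-in command so the
# sketch is runnable even where git is not on PATH.
print(run_cmd([sys.executable, "-c", "print('pulled')"]).strip())
```

Note that capture_output=True requires Python 3.7+; on older versions pass stdout=subprocess.PIPE, stderr=subprocess.PIPE instead.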
Use Git commands within Python code
I have been asked to write a script that pulls the latest code from Git, makes a build, and performs some automated unit tests. I found that there are two Python modules for interacting with Git that are readily available: GitPython and libgit2. What approach/module should I use?
[ "An easier solution would be to use the Python subprocess module to call git. In your case, this would pull the latest code and build:\nimport subprocess\nsubprocess.call([\"git\", \"pull\"])\nsubprocess.call([\"make\"])\nsubprocess.call([\"make\", \"test\"])\n\nDocs:\n\nsubprocess - Python 2.x\nsubprocess - Python 3.x\n\n", "I agree with Ian Wetherbee. You should use subprocess to call git directly. If you need to perform some logic on the output of the commands then you would use the following subprocess call format. \nimport subprocess\nPIPE = subprocess.PIPE\nbranch = 'my_branch'\n\nprocess = subprocess.Popen(['git', 'pull', branch], stdout=PIPE, stderr=PIPE)\nstdoutput, stderroutput = process.communicate()\n\nif 'fatal' in stdoutput:\n # Handle error case\nelse:\n # Success!\n\n", "So with Python 3.5 and later, the .call() method has been deprecated.\nhttps://docs.python.org/3.6/library/subprocess.html#older-high-level-api\nThe current recommended method is to use the .run() method on subprocess.\nimport subprocess\nsubprocess.run([\"git\", \"pull\"])\nsubprocess.run([\"make\"])\nsubprocess.run([\"make\", \"test\"])\n\nAdding this as when I went to read the docs, the links above contradicted the accepted answer and I had to do some research. Adding my 2 cents to hopefully save someone else a bit of time.\n", "In EasyBuild, we rely on GitPython, and that's working out fine.\nSee here, for examples of how to use it.\n", "If GitPython package doesn't work for you there are also the PyGit and Dulwich packages. These can be easily installed through pip. \nBut, I have personally just used the subprocess calls. Works perfect for what I needed, which was just basic git calls. For something more advanced, I'd recommend a git package.\n", "I had to use shlex on top of the run call because my command was too complex for the subprocess alone to understand.\nimport subprocess\nimport shlex\ngit_command = \"git <command>\"\nsubprocess.run(shlex.split(git_command))\n\n" ]
[ 47, 24, 21, 2, 2, 0 ]
[ "If you're on Linux or Mac, why use python at all for this task? Write a shell script.\n#!/bin/sh\nset -e\ngit pull\nmake\n./your_test #change this line to actually launch the thing that does your test\n\n" ]
[ -8 ]
[ "git", "python" ]
stackoverflow_0011113896_git_python.txt
Q: Is it possible to access a list stored in a dataframe in a vectorized manner? Considering a dataframe like so: data = { 'lists': [[0, 1, 2],[3, 4, 5],[6, 7, 8]], 'indexes': [0, 1, 2] } df = pd.DataFrame(data=data) lists indexes 0 [0, 1, 2] 0 1 [3, 4, 5] 1 2 [6, 7, 8] 2 I want to create a new column 'extracted_value' which would be the value contained in the list at 'indexes' index (list = [0, 1, 2], indexes = 0 -> 0, indexes = 1 -> 1, and so on) lists indexes extracted_values 0 [0, 1, 2] 0 0 1 [3, 4, 5] 1 4 2 [6, 7, 8] 2 8 Doing it with iterrows() is extremely slow as I work with dataframes containing several million rows. I have tried the following: df['extracted_value'] = df['lists'][df['indexes']] But it results in: lists indexes extracted_value 0 [0, 1, 2] 0 [0, 1, 2] 1 [3, 4, 5] 1 [3, 4, 5] 2 [6, 7, 8] 2 [6, 7, 8] The following will just result in extracted_value containing the whole list: df['extracted_value'] = df['lists'][0] Thank you for your help. A: What you tried was almost OK; you only needed to put it into pd.DataFrame.apply while setting the axis argument to 1 to make sure the function is applied on each row: df['extracted_values'] = df.apply(lambda x: x['lists'][x['indexes']], axis=1) df lists indexes extracted_values 0 [0, 1, 2] 0 0 1 [3, 4, 5] 1 4 2 [6, 7, 8] 2 8
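For completeness, a sketch of an alternative that avoids apply(axis=1) entirely (my own variant, not from the answer): since the lists live in an object-dtype column, a plain comprehension over zip of the two columns is usually faster because it skips the per-row Series construction that apply performs.

```python
import pandas as pd

df = pd.DataFrame({
    'lists': [[0, 1, 2], [3, 4, 5], [6, 7, 8]],
    'indexes': [0, 1, 2],
})

# zip iterates the two columns directly, skipping the per-row Series
# construction that DataFrame.apply(axis=1) performs under the hood.
df['extracted_value'] = [lst[i] for lst, i in zip(df['lists'], df['indexes'])]
print(df['extracted_value'].tolist())  # [0, 4, 8]
```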
Is it possible to access a list stored in a dataframe in a vectorized manner?
Considering a dataframe like so: data = { 'lists': [[0, 1, 2],[3, 4, 5],[6, 7, 8]], 'indexes': [0, 1, 2] } df = pd.DataFrame(data=data) lists indexes 0 [0, 1, 2] 0 1 [3, 4, 5] 1 2 [6, 7, 8] 2 I want to create a new column 'extracted_value' which would be the value contained in the list at 'indexes' index (list = [0, 1, 2], indexes = 0 -> 0, indexes = 1 -> 1, and so on) lists indexes extracted_values 0 [0, 1, 2] 0 0 1 [3, 4, 5] 1 4 2 [6, 7, 8] 2 8 Doing it with iterrows() is extremely slow as I work with dataframes containing several million rows. I have tried the following: df['extracted_value'] = df['lists'][df['indexes']] But it results in: lists indexes extracted_value 0 [0, 1, 2] 0 [0, 1, 2] 1 [3, 4, 5] 1 [3, 4, 5] 2 [6, 7, 8] 2 [6, 7, 8] The following will just result in extracted_value containing the whole list: df['extracted_value'] = df['lists'][0] Thank you for your help.
[ "What you tried was almost ok, you only needed to put it into pd.DataFrame.apply while setting axis argument as 1 to make sure the function is applied on each row:\ndf['extracted_values'] = df.apply(lambda x: x['lists'][x['indexes']], axis=1)\ndf\n\n lists indexes extracted_values\n0 [0, 1, 2] 0 0\n1 [3, 4, 5] 1 4\n2 [6, 7, 8] 2 8\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python", "vectorization" ]
stackoverflow_0074563708_dataframe_pandas_python_vectorization.txt
Q: Serve directory in Python 3 I've got this basic python3 server but can't figure out how to serve a directory. class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): def do_GET(self): print(self.path) if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'Going Up') if self.path == '/down': self.send_response(200) self.end_headers() self.wfile.write(B'Going Down') httpd = socketserver.TCPServer(("", PORT), SimpleHTTPRequestHandler) print("Server started on ", PORT) httpd.serve_forever() If instead of the custom class above, I simply pass Handler = http.server.SimpleHTTPRequestHandler into TCPServer():, the default functionality is to serve a directory, but I want to serve that directory and have functionality on my two GETs above. As an example, if someone were to go to localhost:8080/index.html, I'd want that file to be served to them. A: if you are using 3.7, you can simply serve up the directory holding your html files, e.g. index.html: python -m http.server 8080 --bind 127.0.0.1 --directory /path/to/dir See the docs. A: The simple way You want to extend the functionality of SimpleHTTPRequestHandler, so you subclass it! Check for your special condition(s); if none of them apply, call super().do_GET() and let it do the rest. Example: class MyHandler(http.server.SimpleHTTPRequestHandler): def do_GET(self): if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'up') else: super().do_GET() The long way To serve files, you basically just have to open them, read the contents and send them. To serve directories (indexes), use os.listdir(). (If you want, you can first check for an index.html when receiving a directory and then, if that fails, serve an index listing.) 
Putting this into your code will give you: class MyHandler(http.server.BaseHTTPRequestHandler): def do_GET(self): print(self.path) if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'Going up') elif os.path.isdir(self.path): try: self.send_response(200) self.end_headers() self.wfile.write(str(os.listdir(self.path)).encode()) except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') else: try: with open(self.path, 'rb') as f: data = f.read() self.send_response(200) self.end_headers() self.wfile.write(data) except FileNotFoundError: self.send_response(404) self.end_headers() self.wfile.write(b'not found') except PermissionError: self.send_response(403) self.end_headers() self.wfile.write(b'no permission') except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') This example has a lot of error handling. You might want to move it somewhere else. The problem is this serves from your root directory. To stop this, you'll have to (easy way) just add the serving directory to the beginning of self.path. Also check if .. cause you to land higher than you want. 
A way to do this is os.path.abspath(serve_from+self.path).startswith(serve_from) Putting this inside (after the check for /up): class MyHandler(http.server.BaseHTTPRequestHandler): def do_GET(self): print(self.path) path = serve_from + self.path if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'Going up') elif not os.path.abspath(path).startswith(serve_from): self.send_response(403) self.end_headers() self.wfile.write(b'Private!') elif os.path.isdir(path): try: self.send_response(200) self.end_headers() self.wfile.write(str(os.listdir(path)).encode()) except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') else: try: with open(path, 'rb') as f: data = f.read() self.send_response(200) self.end_headers() self.wfile.write(data) # error handling skipped except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') Note you define path and use it subsequently, otherwise you will still serve from / A: @user24343's answer to subclass SimpleHTTPRequestHandler is really helpful! One detail I couldn't figure out was how to customize the directory= constructor arg when I pass MyHandler into HTTPServer. Use any of these answers, i.e. HTTPServer(('', 8001), lambda *_: MyHandler(*_, directory=sys.path[0])) A: With python3, you can serve the current directory by simply running: python3 -m http.server 8080 Of course you can configure many parameters as per the documentation.
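Pulling the subclassing answer and the directory= remark together, here is a hedged, self-contained sketch (the temp directory and handler name are illustrative): since Python 3.7, SimpleHTTPRequestHandler accepts a directory keyword, and functools.partial is a tidy way to pin it when handing the handler class to the server.

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

class MyHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/up':
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'Going Up')
        else:
            super().do_GET()  # default file serving from `directory`

# A throwaway directory with one file, just so the sketch is self-contained.
root = tempfile.mkdtemp()
with open(os.path.join(root, 'index.html'), 'w') as f:
    f.write('hello')

# partial pins directory=; port 0 asks the OS for any free port.
server = http.server.ThreadingHTTPServer(
    ('127.0.0.1', 0), functools.partial(MyHandler, directory=root))
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
up_body = urllib.request.urlopen(f'http://127.0.0.1:{port}/up').read()
file_body = urllib.request.urlopen(f'http://127.0.0.1:{port}/index.html').read()
print(up_body, file_body)
```

functools.partial plays the same role as the lambda in the answer above, just with the argument binding made explicit.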
Serve directory in Python 3
I've got this basic python3 server but can't figure out how to serve a directory. class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): def do_GET(self): print(self.path) if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'Going Up') if self.path == '/down': self.send_response(200) self.end_headers() self.wfile.write(B'Going Down') httpd = socketserver.TCPServer(("", PORT), SimpleHTTPRequestHandler) print("Server started on ", PORT) httpd.serve_forever() If instead of the custom class above, I simply pass Handler = http.server.SimpleHTTPRequestHandler into TCPServer():, the default functionality is to serve a directory, but I want to serve that directory and have functionality on my two GETs above. As an example, if someone were to go to localhost:8080/index.html, I'd want that file to be served to them.
[ "if you are using 3.7, you can simply serve up a directory where your html files, eg. index.html is still\npython -m http.server 8080 --bind 127.0.0.1 --directory /path/to/dir\n\nfor the docs\n", "The simple way\nYou want to extend the functionality of SimpleHTTPRequestHandler, so you subclass it! Check for your special condition(s), if none of them apply, call super().do_GET() and let it do the rest.\nExample:\nclass MyHandler(http.server.SimpleHTTPRequestHandler):\n def do_GET(self):\n if self.path == '/up':\n self.send_response(200)\n self.end_headers()\n self.wfile.write(b'up')\n else:\n super().do_GET()\n\nThe long way\nTo serve files, you basically just have to open them, read the contents and send it.\nTo serve directories (indexes), use os.listdir(). (If you want, you can when receiving directories first check for an index.html and then, if that fails, serve an index listing).\nPutting this into your code will give you:\nclass MyHandler(http.server.BaseHTTPRequestHandler):\n def do_GET(self):\n print(self.path)\n if self.path == '/up':\n self.send_response(200)\n self.end_headers()\n self.wfile.write(b'Going up')\n elif os.path.isdir(self.path):\n try:\n self.send_response(200)\n self.end_headers()\n self.wfile.write(str(os.listdir(self.path)).encode())\n except Exception:\n self.send_response(500)\n self.end_headers()\n self.wfile.write(b'error')\n else:\n try:\n with open(self.path, 'rb') as f:\n data = f.read()\n self.send_response(200)\n self.end_headers()\n self.wfile.write(data)\n except FileNotFoundError:\n self.send_response(404)\n self.end_headers()\n self.wfile.write(b'not found')\n except PermissionError:\n self.send_response(403)\n self.end_headers()\n self.wfile.write(b'no permission')\n except Exception:\n self.send_response(500)\n self.end_headers()\n self.wfile.write(b'error')\n\nThis example has a lot of error handling. You might want to move it somewhere else.\nThe problem is this serves from your root directory. 
To stop this, you'll have to (easy way) just add the serving directory to the beginning of self.path. Also check if .. cause you to land higher than you want. A way to do this is os.path.abspath(serve_from+self.path).startswith(serve_from)\nPutting this inside (after the check for /up):\nclass MyHandler(http.server.BaseHTTPRequestHandler):\n def do_GET(self):\n print(self.path)\n path = serve_from + self.path\n if self.path == '/up':\n self.send_response(200)\n self.end_headers()\n self.wfile.write(b'Going up')\n elif not os.path.abspath(path).startswith(serve_from):\n self.send_response(403)\n self.end_headers()\n self.wfile.write(b'Private!')\n elif os.path.isdir(path):\n try:\n self.send_response(200)\n self.end_headers()\n self.wfile.write(str(os.listdir(path)).encode())\n except Exception:\n self.send_response(500)\n self.end_headers()\n self.wfile.write(b'error')\n else:\n try:\n with open(path, 'rb') as f:\n data = f.read()\n self.send_response(200)\n self.end_headers()\n self.wfile.write(data)\n # error handling skipped\n except Exception:\n self.send_response(500)\n self.end_headers()\n self.wfile.write(b'error')\n\nNote you define path and use it subsequently, otherwise you will still serve from /\n", "@user24343's answer to subclass SimpleHTTPRequestHandler is really helpful! One detail I couldn't figure out was how to customize the directory= constructor arg when I pass MyHandler into HTTPServer. Use any of these answers, i.e.\nHTTPServer(('', 8001), lambda *_: MyHandler(*_, directory=sys.path[0]))\n\n", "With python3, you can serve the current directory by simply running:\npython3 -m http.server 8080\nOf course you can configure many parameters as per the documentation.\n" ]
[ 19, 4, 1, 0 ]
[]
[]
[ "python", "python_3.x", "server" ]
stackoverflow_0055052811_python_python_3.x_server.txt
Q: Extract a number from a txt file Apologies to all, I am rewriting the question to be clearer than before. I have text files that are named like this: 1.txt, 2.txt, ... etc. (for a total of 195 files). These text files contain two blocks made like this: Alpha occ. eigenvalues -- -0.40198 -0.39833 -0.39431 -0.38246 -0.38026 Alpha occ. eigenvalues -- -0.37706 -0.36582 -0.35944 -0.35207 -0.34057 Alpha occ. eigenvalues -- -0.33953 -0.32519 -0.31472 -0.30868 -0.28488 Alpha occ. eigenvalues -- -0.27287 -0.26713 -0.26506 -0.26428 -0.25603 Alpha virt. eigenvalues -- -0.08714 -0.06790 -0.04446 -0.03750 0.00174 Alpha virt. eigenvalues -- 0.01408 0.01679 0.03779 0.04314 0.05398 ... Optimization complete ... Alpha occ. eigenvalues -- -0.39708 -0.39539 -0.37817 -0.37390 -0.36335 Alpha occ. eigenvalues -- -0.35790 -0.35095 -0.34682 -0.34412 -0.34011 Alpha occ. eigenvalues -- -0.33775 -0.32013 -0.31434 -0.30201 -0.28924 Alpha occ. eigenvalues -- -0.28686 -0.28518 -0.28216 -0.27672 -0.27505 Alpha virt. eigenvalues -- -0.12386 -0.10541 -0.02072 0.00150 0.02156 Alpha virt. eigenvalues -- 0.03129 0.03997 0.04449 0.04675 0.05155 First of all: I only need the second block, so the code should read the text part after "Optimization complete" and ignore the previous one. The blocks are not whole; I copied only part of them as an example. I need two values: HOMO and LUMO. HOMO is the fifth number in the last row where " Alpha occ. eigenvalues -- " appears. LUMO is the first number of the first line (again in the second block) in which "Alpha virt. eigenvalues --" appears. In this particular case, then, HOMO is -0.27505, while LUMO is -0.12386. What I need is a loop that reads the first file and writes me HOMO and LUMO, opens the second file and again HOMO and LUMO and so on for all the files. Eventually I should get a table where each row is represented by a file and two columns with the HOMO and LUMO values of each file. 
This is the code I've written so far, it works for one file but I can't make it loop for all the files. Consider that I will put all the files in the same directory of course. If you notice something strange, it is because I don't know how to make the reading start from "optimization complete" so I used a stratagem based on the mutual size of the numbers between the blocks to get what I wanted and get rid of the values in the first block. lst = [] n = int(input("Enter the total number of files:\n")) i=1 for i in range(1, n+1): lst.append("{}.txt".format(i)) i=+1 HOMO=[] LUMO=[] lumo=[] homo=[] with open(lst[0]) as f: lines = f.readlines() for line in lines: if line.startswith(" Alpha virt. eigenvalues --"): lumo.append(line.split()[4]); elif line.startswith(" Alpha occ. eigenvalues --"): homo.append(line.split()[-1]); for i,x in enumerate(lumo): if i == 0 or i == (len(lumo)-1): pass else: pre = float(lumo[i-1]) test=float(lumo[i]) diff=pre-test if diff>15: LUMO.append(lumo[i]) HOMO.append(homo[-1]) print(HOMO) print(LUMO) A: Assuming your data is stored in text.txt file. I only take the last 5 elements of a line. with open('text.txt') as f: file_list = f.readlines() new_list = [] for sentence in file_list: new_list.append(sentence.replace('\n', '')) list_number = [] for element in new_list: list_number.append(element.split()[:-6:-1]) Output: [['-0.34011', '-0.34412', '-0.34682', '-0.35095', '-0.35790'], ['-0.28924', '-0.30201', '-0.31434', '-0.32013', '-0.33775'], ['-0.27505', '-0.27672', '-0.28216', '-0.28518', '-0.28686'], ['0.02156', '0.00150', '-0.02072', '-0.10541', '-0.12386'], ['0.05155', '0.04675', '0.04449', '0.03997', '0.03129'], ['0.08236', '0.08193', '0.07459', '0.06358', '0.06062']]
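Since the accepted answer only collects trailing numbers, here is a sketch that follows the question's own definitions directly (the function name and the commented loop at the end are mine): split on the "Optimization complete" marker, then take the last number of the last occ line and the first number of the first virt line.

```python
def homo_lumo(text):
    """Return (HOMO, LUMO) from the block after 'Optimization complete'.

    HOMO: last number of the last 'Alpha occ.' line in that block
          (equivalently the fifth number when rows hold five values).
    LUMO: first number of the first 'Alpha virt.' line in that block.
    """
    block = text.split('Optimization complete')[-1]
    last_occ, first_virt = None, None
    for line in block.splitlines():
        if 'Alpha occ. eigenvalues' in line:
            last_occ = line.split()          # keep overwriting: last one wins
        elif 'Alpha virt. eigenvalues' in line and first_virt is None:
            first_virt = line.split()        # only the first one is kept
    return float(last_occ[-1]), float(first_virt[4])

sample = """
 Alpha occ. eigenvalues --   -0.27287  -0.26713  -0.26506  -0.26428  -0.25603
 Alpha virt. eigenvalues --   -0.08714  -0.06790  -0.04446  -0.03750   0.00174
 Optimization complete
 Alpha occ. eigenvalues --   -0.28686  -0.28518  -0.28216  -0.27672  -0.27505
 Alpha virt. eigenvalues --   -0.12386  -0.10541  -0.02072   0.00150   0.02156
"""
print(homo_lumo(sample))  # (-0.27505, -0.12386)

# Looping over 1.txt ... n.txt would then be, e.g.:
#   rows = []
#   for i in range(1, n + 1):
#       with open(f"{i}.txt") as f:
#           rows.append((f"{i}.txt", *homo_lumo(f.read())))
```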
Extract a number from a txt file
Apologies to all, I am rewriting the question to be clearer than before. I have text files that are named like this: 1.txt, 2.txt, ... etc. (for a total of 195 files). These text files contain two blocks made like this: Alpha occ. eigenvalues -- -0.40198 -0.39833 -0.39431 -0.38246 -0.38026 Alpha occ. eigenvalues -- -0.37706 -0.36582 -0.35944 -0.35207 -0.34057 Alpha occ. eigenvalues -- -0.33953 -0.32519 -0.31472 -0.30868 -0.28488 Alpha occ. eigenvalues -- -0.27287 -0.26713 -0.26506 -0.26428 -0.25603 Alpha virt. eigenvalues -- -0.08714 -0.06790 -0.04446 -0.03750 0.00174 Alpha virt. eigenvalues -- 0.01408 0.01679 0.03779 0.04314 0.05398 ... Optimization complete ... Alpha occ. eigenvalues -- -0.39708 -0.39539 -0.37817 -0.37390 -0.36335 Alpha occ. eigenvalues -- -0.35790 -0.35095 -0.34682 -0.34412 -0.34011 Alpha occ. eigenvalues -- -0.33775 -0.32013 -0.31434 -0.30201 -0.28924 Alpha occ. eigenvalues -- -0.28686 -0.28518 -0.28216 -0.27672 -0.27505 Alpha virt. eigenvalues -- -0.12386 -0.10541 -0.02072 0.00150 0.02156 Alpha virt. eigenvalues -- 0.03129 0.03997 0.04449 0.04675 0.05155 First of all: I only need the second block, so the code should read the text part after "Optimization complete" and ignore the previous one. The blocks are not whole; I copied only part of them as an example. I need two values: HOMO and LUMO. HOMO is the fifth number in the last row where " Alpha occ. eigenvalues -- " appears. LUMO is the first number of the first line (again in the second block) in which "Alpha virt. eigenvalues --" appears. In this particular case, then, HOMO is -0.27505, while LUMO is -0.12386. What I need is a loop that reads the first file and writes me HOMO and LUMO, opens the second file and again HOMO and LUMO and so on for all the files. Eventually I should get a table where each row is represented by a file and two columns with the HOMO and LUMO values of each file. This is the code I've written so far; it works for one file, but I can't make it loop for all the files. 
Consider that I will put all the files in the same directory of course. If you notice something strange, it is because I don't know how to make the reading start from "optimization complete" so I used a stratagem based on the mutual size of the numbers between the blocks to get what I wanted and get rid of the values in the first block. lst = [] n = int(input("Enter the total number of files:\n")) i=1 for i in range(1, n+1): lst.append("{}.txt".format(i)) i=+1 HOMO=[] LUMO=[] lumo=[] homo=[] with open(lst[0]) as f: lines = f.readlines() for line in lines: if line.startswith(" Alpha virt. eigenvalues --"): lumo.append(line.split()[4]); elif line.startswith(" Alpha occ. eigenvalues --"): homo.append(line.split()[-1]); for i,x in enumerate(lumo): if i == 0 or i == (len(lumo)-1): pass else: pre = float(lumo[i-1]) test=float(lumo[i]) diff=pre-test if diff>15: LUMO.append(lumo[i]) HOMO.append(homo[-1]) print(HOMO) print(LUMO)
[ "Assuming your data is stored in text.txt file. I only take the last 5 elements of a line.\nwith open('text.txt') as f:\n file_list = f.readlines()\nnew_list = [] \nfor sentence in file_list:\n new_list.append(sentence.replace('\\n', ''))\nlist_number = []\nfor element in new_list:\n list_number.append(element.split()[:-6:-1])\n\nOutput:\n[['-0.34011', '-0.34412', '-0.34682', '-0.35095', '-0.35790'],\n ['-0.28924', '-0.30201', '-0.31434', '-0.32013', '-0.33775'],\n ['-0.27505', '-0.27672', '-0.28216', '-0.28518', '-0.28686'],\n ['0.02156', '0.00150', '-0.02072', '-0.10541', '-0.12386'],\n ['0.05155', '0.04675', '0.04449', '0.03997', '0.03129'],\n ['0.08236', '0.08193', '0.07459', '0.06358', '0.06062']]\n\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074563187_python.txt
Q: Python Requests - Get Server IP I'm making a small tool that tests CDN performance and would like to check where the response comes from. I thought of getting the host's IP and then using one of the geolocation APIs on GitHub to check the country. I've tried doing so with import socket ... raw._fp.fp._sock.getpeername() ...however that only works when I use stream=True for the request and that in turn breaks the tool's functionality. Is there any other option to get the server IP with requests or in a completely different way? A: The socket.gethostbyname() function from Python's socket library should solve your problem. You can check it out in the Python docs here. Here is an example of how to use it: import socket url="cdnjs.cloudflare.com" print("IP:",socket.gethostbyname(url)) All you need to do is pass the url to socket.gethostbyname() and it will do the rest. Just make sure to remove the http:// before the URL because that will trip it up. A: I could not get Akilan's solution to give the IP address of a different host that I was using. socket.gethostbyname() and getpeername() were not working for me. They are not even available. His solution did open the door. However, navigating the socket object, I did find this: socket.getaddrinfo('host name', 443)[0][4][0] I wrapped this in a try/except block. Maybe there is a prettier way.
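A sketch wrapping the getaddrinfo lookup from the last answer in a helper (the name resolve_ip is mine). One caveat worth keeping in mind for a CDN tester: this is a DNS resolution, so it returns whichever edge the resolver hands back, not necessarily the exact peer a later HTTP request connects to; getpeername() on the live socket remains the only authoritative source for that.

```python
import socket

def resolve_ip(host, port=443):
    # Each getaddrinfo entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the IP string for both IPv4 and IPv6 entries.
    return socket.getaddrinfo(host, port)[0][4][0]

print(resolve_ip('localhost'))  # 127.0.0.1 or ::1, depending on the resolver
```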
Python Requests - Get Server IP
I'm making a small tool that tests CDN performance and would like to check where the response comes from. I thought of getting the host's IP and then using one of the geolocation APIs on GitHub to check the country. I've tried doing so with import socket ... raw._fp.fp._sock.getpeername() ...however that only works when I use stream=True for the request and that in turn breaks the tool's functionality. Is there any other option to get the server IP with requests or in a completely different way?
[ "The socket.gethostbyname() function from Python's socket library should solve your problem. You can check it out in the Python docs here.\nHere is an example of how to use it:\nimport socket\nurl=\"cdnjs.cloudflare.com\"\nprint(\"IP:\",socket.gethostbyname(url))\n\nAll you need to do is pass the url to socket.gethostbyname() and it will do the rest. Just make sure to remove the http:// before the URL because that will trip it up.\n", "I could not get Akilan's solution to give the IP address of a different host that I was using. socket.gethostbyname() and getpeername() were not working for me. They are not even available. His solution did open the door.\nHowever, navigating the socket object, I did find this:\nsocket.getaddrinfo('host name', 443)[0][4][0]\nI wrapped this in a try/except block.\nMaybe there is a prettier way.\n" ]
[ 1, 0 ]
[]
[]
[ "ip", "networking", "python", "python_requests", "sockets" ]
stackoverflow_0067459725_ip_networking_python_python_requests_sockets.txt
Q: vectorize a function on a 3D numpy array using a specific signature I'd like to apply a function f(x, y) on a numpy array a of shape (N,M,2), whose last axis (2) contains the variables x and y to give as input to f. Example. a = np.array([[[1, 1], [2, 1], [3, 1]], [[1, 2], [2, 2], [3, 2]], [[1, 3], [2, 3], [3, 3]]]) def function_to_vectorize(x, y): # the function body is totally random and not important if x>2 and y-x>0: sum = 0 for i in range(y): sum+=i return sum else: sum = y for i in range(x): sum-=i return sum I'd like to apply function_to_vectorize in this way: [[function_to_vectorize(element[0], element[1]) for element in vector] for vector in a] #array([[ 1, 0, -2], # [ 2, 1, -1], # [ 3, 2, 0]]) How can I vectorize this function with np.vectorize? A: With that function, the np.vectorize result will also expect 2 arguments. 'signature' is determined by the function, not by the array(s) you expect to supply. In [184]: f = np.vectorize(function_to_vectorize) In [185]: f(1,2) Out[185]: array(2) In [186]: a = np.array([[[1, 1], ...: [2, 1], ...: [3, 1]], ...: ...: [[1, 2], ...: [2, 2], ...: [3, 2]], ...: ...: [[1, 3], ...: [2, 3], ...: [3, 3]]]) Just supply the 2 columns of a: In [187]: f(a[:,:,0],a[:,:,1]) Out[187]: array([[ 1, 0, -2], [ 2, 1, -1], [ 3, 2, 0]])
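Since the title asks about a specific signature: np.vectorize also accepts a signature argument, which lets a wrapper consume the whole trailing length-2 axis at once instead of two separate column arguments. A sketch (the lambda wrapper is mine; the function body is condensed but equivalent to the loops in the question):

```python
import numpy as np

def function_to_vectorize(x, y):
    # condensed but equivalent to the loop version in the question
    if x > 2 and y - x > 0:
        return sum(range(y))
    return y - sum(range(x))

# signature='(n)->()' declares one core dimension: each call receives the
# whole length-2 pair p, and the (N, M) loop dimensions are handled for us.
f = np.vectorize(lambda p: function_to_vectorize(p[0], p[1]),
                 signature='(n)->()')

a = np.array([[[1, 1], [2, 1], [3, 1]],
              [[1, 2], [2, 2], [3, 2]],
              [[1, 3], [2, 3], [3, 3]]])
print(f(a))
```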
vectorize a function on a 3D numpy array using a specific signature
I'd like to apply a function f(x, y) on a numpy array a of shape (N,M,2), whose last axis (2) contains the variables x and y to give as input to f. Example. a = np.array([[[1, 1], [2, 1], [3, 1]], [[1, 2], [2, 2], [3, 2]], [[1, 3], [2, 3], [3, 3]]]) def function_to_vectorize(x, y): # the function body is totally random and not important if x>2 and y-x>0: sum = 0 for i in range(y): sum+=i return sum else: sum = y for i in range(x): sum-=i return sum I'd like to apply function_to_vectorize in this way: [[function_to_vectorize(element[0], element[1]) for element in vector] for vector in a] #array([[ 1, 0, -2], # [ 2, 1, -1], # [ 3, 2, 0]]) How can I vectorize this function with np.vectorize?
[ "With that function, the np.vectorize result will also expect 2 arguments. 'signature' is determined by the function, not by the array(s) you expect to supply.\nIn [184]: f = np.vectorize(function_to_vectorize)\n\nIn [185]: f(1,2)\nOut[185]: array(2)\n\nIn [186]: a = np.array([[[1, 1],\n ...: [2, 1],\n ...: [3, 1]],\n ...: \n ...: [[1, 2],\n ...: [2, 2],\n ...: [3, 2]],\n ...: \n ...: [[1, 3],\n ...: [2, 3],\n ...: [3, 3]]])\n\nJust supply the 2 columns of a:\nIn [187]: f(a[:,:,0],a[:,:,1])\nOut[187]: \narray([[ 1, 0, -2],\n [ 2, 1, -1],\n [ 3, 2, 0]])\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "python", "vectorization" ]
stackoverflow_0074561431_numpy_python_vectorization.txt
Q: openpyxl - Check if sheet contains errors I have a workbook and one sheet in the workbook: wb = openpyxl.load_workbook("path/to/workbook.xlsx") ws = wb.worksheets[0] How can I return all cells that contain an error in the spreadsheet? I know there could be different types of errors (formula error, N/As, data validation error etc...), just trying to see what's possible through the Python library. A: It can be done like so. Read the excel file, take only cell values, and compare them with a list of error codes. test.xlsx from openpyxl import load_workbook ERROR_CODES = ('#NULL!', '#DIV/0!', '#VALUE!', '#REF!', '#NAME?', '#NUM!', '#N/A') wb = load_workbook('test.xlsx', data_only=True) ws = wb.active cell_error = [ws.cell(row=i, column=j) for j in range(1, ws.max_column + 1) for i in range(1, ws.max_row + 1) if ws.cell(row=i, column=j).value in ERROR_CODES] print(cell_error) ----------------------------------------- [<Cell 'Лист1'.A3>, <Cell 'Лист1'.C1>, <Cell 'Лист1'.C2>]
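The nested range() lookups in the answer can be written more compactly with ws.iter_rows(). The sketch below builds a tiny in-memory workbook with made-up cell values so it is self-contained; on a real file you would load_workbook(path, data_only=True) so formula cells carry their cached results, including error strings.

```python
from openpyxl import Workbook

ERROR_CODES = {'#NULL!', '#DIV/0!', '#VALUE!', '#REF!', '#NAME?', '#NUM!', '#N/A'}

wb = Workbook()
ws = wb.active
ws['A1'] = 42
ws['B1'] = '#DIV/0!'  # what a cached formula error looks like via data_only=True
ws['A2'] = '#N/A'

# iter_rows yields cells row by row, so no manual max_row/max_column math.
bad_cells = [cell.coordinate
             for row in ws.iter_rows()
             for cell in row
             if cell.value in ERROR_CODES]
print(bad_cells)  # ['B1', 'A2']
```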
openpyxl - Check if sheet contains errors
I have a workbook and one sheet in the workbook: wb = openpyxl.load_workbook("path/to/workbook.xlsx") ws = wb.worksheets[0] How can I return all cells that contain an error in the spreadsheet? I know there could be different types of errors (formula error, N/As, data validation error etc...), just trying to see what's possible through the Python library.
[ "It can be so. Reading an excel file, taking only cell values and comparing them with a list of errors.\ntest.xlsx\n\n\nfrom openpyxl import load_workbook\n\nERROR_CODES = ('#NULL!', '#DIV/0!', '#VALUE!', '#REF!', '#NAME?', '#NUM!', '#N/A')\n\nwb = load_workbook('test.xlsx', data_only=True)\nws = wb.active\ncell_error = [ws.cell(row=i, column=j) for j in range(1, ws.max_column + 1)\n for i in range(1, ws.max_row + 1)\n if ws.cell(row=i, column=j).value in ERROR_CODES]\n\nprint(cell_error)\n\n-----------------------------------------\n\n[<Cell 'Лист1'.A3>, <Cell 'Лист1'.C1>, <Cell 'Лист1'.C2>]\n\n" ]
[ 1 ]
[]
[]
[ "openpyxl", "python" ]
stackoverflow_0074561075_openpyxl_python.txt
Q: beautiful soup to grab forex prices I'm new to using beautiful soup and I have been following tutorials on scraping with it. I am trying to use it to return high and low prices from common forex pairs. I'm not sure if it is the sites that I'm trying to get the information from, but I can find the div tag that I want the info from, and I believe the text is hidden in the span, but I am still having trouble with it coming back NoneType. Can anyone help me figure this out? url : https://www.centralcharts.com/en/6748-aud-nzd/quotes div class="tabMini-wrapper" (this is the whole table). Is it because of the format the site has it in? import requests from bs4 import BeautifulSoup import re URL = "https://www.centralcharts.com/en/6748-aud-nzd/quotes" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") spans=soup.find('span', attrs = {'class' , 'tabMini tabQuotes'}) print(spans) I tried a bunch of different ways but this was my most recent attempt. I was trying to get it from the span after the .find() returned NoneType for the table. A: OK, version 2: it seems like OP wants to capture the complete table, with dates. 
Since this is an HTML table, you'll need to make a custom loop that will map both the headers (th) and rows (tr > td). Steps the script takes: Find the table For each header, append the data-date to the result object Find all the tr's in the table Index 5 is high, index 6 is low (this could be improved by searching for the text) Ensure high and low have the same amount of items Map the index of the row with the index of the header, append to result (could probably be improved using something like zip()) import requests from bs4 import BeautifulSoup import re result = [] URL = "https://www.centralcharts.com/en/6748-aud-nzd/quotes" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") table = soup.select('.prod-left-content .tabMini-wrapper table')[0] for header in table.find('thead').find_all('span', attrs = { 'class': 'set-locale-date' }): result.append({ 'date': header.get('data-date'), 'high': None, 'low': None }) trs = table.select('tbody > tr') high = trs[5].find_all('span') low = trs[6].find_all('span') if (len(high) != len(low)): print('Mmmm, something went wrong?') exit() for i in range(len(high)): result[i]['low'] = low[i].text result[i]['high'] = high[i].text for o in result: print(o['date'] + "\t\t" + o['high'] + "\t" + o['low']) Gives: 2022-11-18 1.0915 1.0834 2022-11-21 1.0854 1.0811 2022-11-22 1.0837 1.0782 2022-11-23 1.0837 1.0746 2022-11-24 1.0812 1.0765 A: UPDATED ANSWER 11-25-2022 Sorry for the slow reply I was away from my computer for the American holiday. Here is another way to accomplish your task. This one uses multiple list comprehensions and zip to iterate over the data. 
import requests from bs4 import BeautifulSoup URL = "https://www.centralcharts.com/en/6748-aud-nzd/quotes" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") results = [] table_data = soup.select_one('h2:-soup-contains("5 days quotes")').find_next('table') dates = [element.get('data-date') for element in table_data.find_all('span', attrs={'set-locale-date'})] daily_high_quotes = [element.text for element in table_data.find('td', text='High').find_next_siblings('td')] daily_low_quotes = [element.text for element in table_data.find('td', text='Low').find_next_siblings('td')] for quote_date, daily_high, daily_low in zip(dates, daily_high_quotes, daily_low_quotes): results.append([quote_date, daily_high, daily_low]) print(results) Output: [['2022-11-21', '1.0854', '1.0811'], ['2022-11-22', '1.0837', '1.0782'], ['2022-11-23', '1.0837', '1.0746'], ['2022-11-24', '1.0814', '1.0765'], ['2022-11-25', '1.0822', '1.0796']] ORIGINAL ANSWER 11-24-2022 There are multiple tables on the pages, so you need to extract the data from the correct table. This code will help. import requests from bs4 import BeautifulSoup URL = "https://www.centralcharts.com/en/6748-aud-nzd/quotes" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") for tag_element in soup.find_all('table', attrs={'class', 'tabMini tabQuotes'}): for item in tag_element.find_all('td'): print(item)
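As the first answer hints, its index-mapping loop can indeed be collapsed with zip(). A small sketch, assuming the dates, highs and lows have already been extracted into flat lists (names are illustrative):

```python
def combine_quotes(dates, highs, lows):
    """Pair each column date with its high/low cell text.

    zip() silently stops at the shortest input, so an explicit length
    check stands in for the answer's manual high/low comparison.
    """
    if not (len(dates) == len(highs) == len(lows)):
        raise ValueError("header and row lengths differ")
    return [{'date': d, 'high': h, 'low': l}
            for d, h, l in zip(dates, highs, lows)]
```

This keeps the scraping (finding the table, the header spans, and the High/Low rows) separate from the reshaping, which makes each half easier to test.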
beautiful soup to grab forex prices
I'm new to using beautiful soup and I have been following tutorials on scraping with it. I am trying to use it to return high and low prices from common forex pairs. I'm not sure if it is the sites that I'm trying to get the information from, but I can find the div tag that I want the info from. I believe the text is hidden in the span, but I am still having trouble with it coming back as NoneType. Can anyone help me figure this out? url : https://www.centralcharts.com/en/6748-aud-nzd/quotes div class="tabMini-wrapper" this is the whole table ^^ div class.. Is it because of the format the site has it in? import requests from bs4 import BeautifulSoup import re URL = "https://www.centralcharts.com/en/6748-aud-nzd/quotes" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") spans=soup.find('span', attrs = {'class' , 'tabMini tabQuotes'}) print(spans) I tried a bunch of different ways but this was my most recent attempt. I was trying to get it from the span after the .find() returned NoneType for the table
[ "Oke, Version 2..\n\nIt seems like OP want to capture the complete table, with date.\nSince this is an HTML table, you'll need to make a custom loop that will map both the headers (th) and rows (tr > td)\n\nsteps the script takes:\n\nFind the table\n\nFor each header, append the data-date to the result object\n\nFind all the tr's in the table\n\nIndex 5 is high, index 6 is low (this could be improved by searching for the text)\n\nEnsure high and low have the same amount of items\n\nMap the index of the row with the index of the header, append to result (could probably be improved using something like zip()\n\n\nimport requests\nfrom bs4 import BeautifulSoup\nimport re\n\nresult = []\n\nURL = \"https://www.centralcharts.com/en/6748-aud-nzd/quotes\"\npage = requests.get(URL)\n\nsoup = BeautifulSoup(page.content, \"html.parser\")\n\ntable = soup.select('.prod-left-content .tabMini-wrapper table')[0]\n\nfor header in table.find('thead').find_all('span', attrs = { 'class': 'set-locale-date' }):\n result.append({ 'date': header.get('data-date'), 'high': None, 'low': None })\n\ntrs = table.select('tbody > tr')\n\nhigh = trs[5].find_all('span')\nlow = trs[6].find_all('span')\n\nif (len(high) != len(low)):\n print('Mmmm, somehting went wrong?')\n exit()\n\nfor i in range(len(high)):\n result[i]['low'] = low[i].text\n result[i]['high'] = high[i].text\n\nfor o in result:\n print(o['date'] + \"\\t\\t\" + o['high'] + \"\\t\" + o['low'])\n\nGives:\n2022-11-18 1.0915 1.0834\n2022-11-21 1.0854 1.0811\n2022-11-22 1.0837 1.0782\n2022-11-23 1.0837 1.0746\n2022-11-24 1.0812 1.0765\n\n", "UPDATED ANSWER 11-25-2022\nSorry for the slow reply I was away from my computer for the American holiday. Here is another way to accomplish your task. 
This one uses multiple list comprehensions and the zip to iterator over the data.\nimport requests\nfrom bs4 import BeautifulSoup\n\nURL = \"https://www.centralcharts.com/en/6748-aud-nzd/quotes\"\npage = requests.get(URL)\n\nsoup = BeautifulSoup(page.content, \"html.parser\")\n\n\nresults = []\ntable_data = soup.select_one('h2:-soup-contains(\"5 days quotes\")').find_next('table')\ndates = [element.get('data-date') for element in table_data.find_all('span', attrs={'set-locale-date'})]\ndaily_high_quotes = [element.text for element in table_data.find('td', text='High').find_next_siblings('td')]\ndaily_low_quotes = [element.text for element in table_data.find('td', text='Low').find_next_siblings('td')]\nfor quote_date, daily_high, daily_low in zip(dates, daily_high_quotes, daily_low_quotes):\n results.append([quote_date, daily_high, daily_low])\nprint(results)\n\nOutput:\n[['2022-11-21', '1.0854', '1.0811'], \n['2022-11-22', '1.0837', '1.0782'], \n['2022-11-23', '1.0837', '1.0746'], \n['2022-11-24', '1.0814', '1.0765'], \n['2022-11-25', '1.0822', '1.0796']]\n\nORIGINAL ANSWER 11-24-2022\nThere are multiple tables on the pages, so you need to extract the data from the correct table. This code will help.\nimport requests\nfrom bs4 import BeautifulSoup\n\nURL = \"https://www.centralcharts.com/en/6748-aud-nzd/quotes\"\npage = requests.get(URL)\n\nsoup = BeautifulSoup(page.content, \"html.parser\")\n\nfor tag_element in soup.find_all('table', attrs={'class', 'tabMini tabQuotes'}):\n for item in tag_element.find_all('td'):\n print(item)\n\n" ]
[ 2, 1 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0074563876_beautifulsoup_python.txt
Q: Optimize nested for loops with numpy Suppose I have the following loop: N=5 a=np.zeros((N,N)) for i in range(N): for j in range(N): for k in range(N): for l in range(N): a[i,j]+=np.exp(1j*(2*np.pi/N*i*k+2*np.pi*j*l)) How can I optimize this? I'm out of ideas A: import numpy as np x = np.arange(N) i = x[:, None, None, None] j = x[None, :, None, None] k = x[None, None, :, None] l = x[None, None, None, :] out = np.exp(1j*(2*np.pi/N*i*k + 2*np.pi*j*l)).sum(axis=(2, 3)) # >>> np.allclose(a, out) # True
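Because the exponent is a sum, the exponential splits into a product, so the double sum over k and l factorizes into two one-dimensional sums: O(N**2) memory instead of the 4-D broadcast's O(N**4). A sketch, reusing N from the question:

```python
import numpy as np

N = 5
x = np.arange(N)

# exp(1j*(2*pi/N*i*k + 2*pi*j*l)) = exp(2j*pi/N*i*k) * exp(2j*pi*j*l),
# so the (k, l) double sum is a product of two 1-D sums.
col = np.exp(2j * np.pi / N * np.outer(x, x)).sum(axis=1)  # sum over k, one value per i
row = np.exp(2j * np.pi * np.outer(x, x)).sum(axis=1)      # sum over l, one value per j
out = np.outer(col, row)
```

Two side notes: the question's a = np.zeros((N, N)) is a real array, so accumulating complex exponentials into it will lose (or refuse to cast) the imaginary part; dtype=complex is probably wanted there. And since j and l are integers, exp(1j*2*np.pi*j*l) is identically 1, so the second factor is just N; it only matters if that 2*np.pi was meant to be 2*np.pi/N.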
Optimize nested for loops with numpy
Suppose I have the following loop: N=5 a=np.zeros((N,N)) for i in range(N): for j in range(N): for k in range(N): for l in range(N): a[i,j]+=np.exp(1j*(2*np.pi/N*i*k+2*np.pi*j*l)) How can I optimize this? I'm out of ideas
[ "import numpy as np\n\nx = np.arange(N)\ni = x[:, None, None, None]\nj = x[None, :, None, None]\nk = x[None, None, :, None]\nl = x[None, None, None, :]\n\nout = np.exp(1j*(2*np.pi/N*i*k + 2*np.pi*j*l)).sum(axis=(2, 3))\n\n# >>> np.allclose(a, out)\n# True\n\n" ]
[ 3 ]
[]
[]
[ "for_loop", "numpy", "optimization", "performance", "python" ]
stackoverflow_0074563814_for_loop_numpy_optimization_performance_python.txt
Q: List of lists to Tree Diagram Print I have a list of lists that make up a tree, similar to a top level directory with a recursive listing of directories and files. I want to visualize this as a printed tree. How can I see a list of lists printed as a tree? Data tree = [ ['Main University'], ['Main University', 'Academic Affairs'], ['Main University', 'Academic Affairs', 'College of Health Sciences'], ['Main University', 'Academic Affairs', 'College of Arts & Science'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Physics'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Biochemistry & Molecular Bio'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Earth Sciences'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Environmental Studies'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Social Work'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Chemistry'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Health Sciences'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Occupational Therapy'] ] Desired Output (or similar; i.e., glyphs like ¦--, °--, etc. 
don't matter) Main University °--Academic Affairs ¦--College of Arts & Science ¦ ¦--Chemistry/Physics ¦ ¦ ¦--Physics ¦ ¦ °--Chemistry ¦ °--Biology ¦ ¦--Biochemistry & Molecular Bio ¦ ¦--Earth Sciences ¦ °--Environmental Studies °--College of Health Sciences ¦--Health Sciences ¦--Occupational Therapy °--Social Work A: if the array is not too large you can convert it to a tree first then print it #!/bin/env python3 from collections import OrderedDict tree = [ ['Main University'], ['Main University', 'Academic Affairs'], ['Main University', 'Academic Affairs', 'College of Health Sciences'], ['Main University', 'Academic Affairs', 'College of Arts & Science'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Physics'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Biochemistry & Molecular Bio'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Earth Sciences'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Environmental Studies'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Social Work'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Chemistry'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Health Sciences'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Occupational Therapy'] ] class Tree: def __init__(self, data): self.data = data self.children = OrderedDict() def add_array(self, obj): if obj: if obj[0] not in self.children: self.children[obj[0]] = Tree(obj[0]) self.children[obj[0]].add_array(obj[1:]) def print_tree(self, l): s = " " * l * 4 + self.data + "\n" for x in self.children.values(): s += x.print_tree(l+1)
return s def __repr__(self): return self.print_tree(0) root = Tree("root") for x in tree: root.add_array(x) print(root) prints out the following root Main University Academic Affairs College of Health Sciences Social Work Health Sciences Occupational Therapy College of Arts & Science Biology Biochemistry & Molecular Bio Earth Sciences Environmental Studies Chemistry/Physics Physics Chemistry A: The bigtree package can do this nicely: from bigtree import list_to_tree, print_tree path_list = ['/'.join(x) for x in tree] root = list_to_tree(path_list) print_tree(root) Yields: Main University └── Academic Affairs ├── College of Health Sciences │ ├── Social Work │ ├── Health Sciences │ └── Occupational Therapy └── College of Arts & Science ├── Biology │ ├── Biochemistry & Molecular Bio │ ├── Earth Sciences │ └── Environmental Studies └── Chemistry └── Physics ├── Physics └── Chemistry
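A dependency-free middle ground between the two answers: fold the path lists into a nested dict (insertion-ordered since Python 3.7), then render it with the same box-drawing connectors bigtree prints. Names are illustrative:

```python
def build(paths):
    """Fold a list of path lists into a nested dict of dicts."""
    root = {}
    for path in paths:
        node = root
        for part in path:
            node = node.setdefault(part, {})
    return root

def render(node, prefix=""):
    """Return the tree as a list of lines with box-drawing connectors."""
    lines = []
    items = list(node.items())
    for i, (name, child) in enumerate(items):
        last = i == len(items) - 1
        lines.append(prefix + ("└── " if last else "├── ") + name)
        lines.extend(render(child, prefix + ("    " if last else "│   ")))
    return lines

# With the question's data: print("\n".join(render(build(tree))))
```

Unlike the '/'-joined bigtree approach, this never splits names like 'Chemistry/Physics' on the separator, since the path components stay as list elements throughout.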
List of lists to Tree Diagram Print
I have a list of lists that make up a tree, similar to a top level directory with a recursive listing of directories and files. I want to visualize this as a printed tree. How can I see a list of lists printed as a tree? Data tree = [ ['Main University'], ['Main University', 'Academic Affairs'], ['Main University', 'Academic Affairs', 'College of Health Sciences'], ['Main University', 'Academic Affairs', 'College of Arts & Science'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Physics'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Biochemistry & Molecular Bio'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Earth Sciences'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Environmental Studies'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Social Work'], ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Chemistry'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Health Sciences'], ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Occupational Therapy'] ] Desired Output (or similar; i.e., glyphs like ¦--, °--, etc. don't matter) Main University °--Academic Affairs ¦--College of Arts & Science ¦ ¦--Chemistry/Physics ¦ ¦ ¦--Physics ¦ ¦ °--Chemistry ¦ °--Biology ¦ ¦--Biochemistry & Molecular Bio ¦ ¦--Earth Sciences ¦ °--Environmental Studies °--College of Health Sciences ¦--Health Sciences ¦--Occupational Therapy °--Social Work
[ "if the array is not too large you can convert it to a tree first then print it\n#!/bin/env python3\nfrom collections import OrderedDict\n\ntree = [\n ['Main University'],\n ['Main University', 'Academic Affairs'],\n ['Main University', 'Academic Affairs', 'College of Health Sciences'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Physics'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Biochemistry & Molecular Bio'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Earth Sciences'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Biology', 'Environmental Studies'],\n ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Social Work'],\n ['Main University', 'Academic Affairs', 'College of Arts & Science', 'Chemistry/Physics', 'Chemistry'],\n ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Health Sciences'],\n ['Main University', 'Academic Affairs', 'College of Health Sciences', 'Occupational Therapy']\n]\n\nclass Tree:\n def __init__(self, data):\n self.data = data\n self.children = OrderedDict()\n\n def add_array(self, obj):\n if obj:\n if obj[0] not in self.children:\n self.children[obj[0]] = Tree(obj[0])\n self.children[obj[0]].add_array(obj[1:])\n\n def print_tree(self, l):\n s = \" \" * l * 4 + self.data + \"\\n\"\n for x in self.children.values():\n s += x.print_tree(l+1)\n\n return s\n\n def __repr__(self):\n return self.print_tree(0)\n\n\n\nroot = Tree(\"root\")\nfor x in tree:\n root.add_array(x)\n\nprint(root)\n\nprints out the following\nroot\n Main University\n Academic Affairs\n College of Health Sciences\n Social Work\n Health 
Sciences\n Occupational Therapy\n College of Arts & Science\n Biology\n Biochemistry & Molecular Bio\n Earth Sciences\n Environmental Studies\n Chemistry/Physics\n Physics\n Chemistry\n\n", "The bigtree package can do this nicely:\nfrom bigtree import list_to_tree, print_tree\npath_list = ['/'.join(x) for x in tree]\nroot = list_to_tree(path_list)\nprint_tree(root)\n\nYields:\nMain University\n└── Academic Affairs\n β”œβ”€β”€ College of Health Sciences\n β”‚ β”œβ”€β”€ Social Work\n β”‚ β”œβ”€β”€ Health Sciences\n β”‚ └── Occupational Therapy\n └── College of Arts & Science\n β”œβ”€β”€ Biology\n β”‚ β”œβ”€β”€ Biochemistry & Molecular Bio\n β”‚ β”œβ”€β”€ Earth Sciences\n β”‚ └── Environmental Studies\n └── Chemistry\n └── Physics\n β”œβ”€β”€ Physics\n └── Chemistry\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074553128_python_python_3.x.txt
Q: Implement touch using Python? touch is a Unix utility that sets the modification and access times of files to the current time of day. If the file doesn't exist, it is created with default permissions. How would you implement it as a Python function? Try to be cross platform and complete. (Current Google results for "python touch file" are not that great, but point to os.utime.) A: Looks like this is new as of Python 3.4 - pathlib. from pathlib import Path Path('path/to/file.txt').touch() This will create a file.txt at the path. -- Path.touch(mode=0o777, exist_ok=True) Create a file at this given path. If mode is given, it is combined with the process’ umask value to determine the file mode and access flags. If the file already exists, the function succeeds if exist_ok is true (and its modification time is updated to the current time), otherwise FileExistsError is raised. A: This tries to be a little more race-free than the other solutions. (The with keyword is new in Python 2.5.) import os def touch(fname, times=None): with open(fname, 'a'): os.utime(fname, times) Roughly equivalent to this. import os def touch(fname, times=None): fhandle = open(fname, 'a') try: os.utime(fname, times) finally: fhandle.close() Now, to really make it race-free, you need to use futimes and change the timestamp of the open filehandle, instead of opening the file and then changing the timestamp on the filename (which may have been renamed). Unfortunately, Python doesn't seem to provide a way to call futimes without going through ctypes or similar... EDIT As noted by Nate Parsons, Python 3.3 will add specifying a file descriptor (when os.supports_fd) to functions such as os.utime, which will use the futimes syscall instead of the utimes syscall under the hood. 
In other words: import os def touch(fname, mode=0o666, dir_fd=None, **kwargs): flags = os.O_CREAT | os.O_APPEND with os.fdopen(os.open(fname, flags=flags, mode=mode, dir_fd=dir_fd)) as f: os.utime(f.fileno() if os.utime in os.supports_fd else fname, dir_fd=None if os.supports_fd else dir_fd, **kwargs) A: def touch(fname): if os.path.exists(fname): os.utime(fname, None) else: open(fname, 'a').close() A: Why not try this?: import os def touch(fname): try: os.utime(fname, None) except OSError: open(fname, 'a').close() I believe this eliminates any race condition that matters. If the file does not exist then an exception will be thrown. The only possible race condition here is if the file is created before open() is called but after os.utime(). But this does not matter because in this case the modification time will be as expected since it must have happened during the call to touch(). A: For a more low-level solution one can use os.close(os.open("file.txt", os.O_CREAT)) A: This answer is compatible with all versions since Python-2.5 when keyword with has been released. 1. Create file if does not exist + Set current time (exactly same as command touch) import os fname = 'directory/filename.txt' with open(fname, 'a'): # Create file if does not exist os.utime(fname, None) # Set access/modified times to now # May raise OSError if file does not exist A more robust version: import os with open(fname, 'a'): try: # Whatever if file was already existing os.utime(fname, None) # => Set current time anyway except OSError: pass # File deleted between open() and os.utime() calls 2. Just create the file if does not exist (does not update time) with open(fname, 'a'): # Create file if does not exist pass 3. 
Just update file access/modified times (does not create file if not existing) import os try: os.utime(fname, None) # Set access/modified times to now except OSError: pass # File does not exist (or no permission) Using os.path.exists() does not simplify the code: from __future__ import (absolute_import, division, print_function) import os if os.path.exists(fname): try: os.utime(fname, None) # Set access/modified times to now except OSError: pass # File deleted between exists() and utime() calls # (or no permission) Bonus: Update time of all files in a directory from __future__ import (absolute_import, division, print_function) import os number_of_files = 0 # Current directory which is "walked through" # | Directories in root # | | Files in root Working directory # | | | | for root, _, filenames in os.walk('.'): for fname in filenames: pathname = os.path.join(root, fname) try: os.utime(pathname, None) # Set access/modified times to now number_of_files += 1 except OSError as why: print('Cannot change time of %r because %r', pathname, why) print('Changed time of %i files', number_of_files) A: Here's some code that uses ctypes (only tested on Linux): from ctypes import * libc = CDLL("libc.so.6") # struct timespec { # time_t tv_sec; /* seconds */ # long tv_nsec; /* nanoseconds */ # }; # int futimens(int fd, const struct timespec times[2]); class c_timespec(Structure): _fields_ = [('tv_sec', c_long), ('tv_nsec', c_long)] class c_utimbuf(Structure): _fields_ = [('atime', c_timespec), ('mtime', c_timespec)] utimens = CFUNCTYPE(c_int, c_char_p, POINTER(c_utimbuf)) futimens = CFUNCTYPE(c_int, c_char_p, POINTER(c_utimbuf)) # from /usr/include/i386-linux-gnu/bits/stat.h UTIME_NOW = ((1l << 30) - 1l) UTIME_OMIT = ((1l << 30) - 2l) now = c_timespec(0,UTIME_NOW) omit = c_timespec(0,UTIME_OMIT) # wrappers def update_atime(fileno): assert(isinstance(fileno, int)) libc.futimens(fileno, byref(c_utimbuf(now, omit))) def update_mtime(fileno): assert(isinstance(fileno, int)) 
libc.futimens(fileno, byref(c_utimbuf(omit, now))) # usage example: # # f = open("/tmp/test") # update_mtime(f.fileno()) A: Simplistic: def touch(fname): open(fname, 'a').close() os.utime(fname, None) The open ensures there is a file there the utime ensures that the timestamps are updated Theoretically, it's possible someone will delete the file after the open, causing utime to raise an exception. But arguably that's OK, since something bad did happen. A: with open(file_name,'a') as f: pass A: The following is sufficient: import os def func(filename): if os.path.exists(filename): os.utime(filename) else: with open(filename,'a') as f: pass If you want to set a specific time for touch, use os.utime as follows: os.utime(filename,(atime,mtime)) Here, atime and mtime both should be int/float and should be equal to epoch time in seconds to the time which you want to set. A: Complex (possibly buggy): def utime(fname, atime=None, mtime=None): if type(atime) is tuple: atime, mtime = atime if atime is None or mtime is None: statinfo = os.stat(fname) if atime is None: atime = statinfo.st_atime if mtime is None: mtime = statinfo.st_mtime os.utime(fname, (atime, mtime)) def touch(fname, atime=None, mtime=None): if type(atime) is tuple: atime, mtime = atime open(fname, 'a').close() utime(fname, atime, mtime) This tries to also allow setting the access or modification time, like GNU touch. A: write_text() from pathlib.Path can be used. >>> from pathlib import Path >>> Path('aa.txt').write_text("") 0 A: It might seem logical to create a string with the desired variables, and pass it to os.system: touch = 'touch ' + dir + '/' + fileName os.system(touch) This is inadequate in a number of ways (e.g., it doesn't handle whitespace), so don't do it. 
A more robust method is to use subprocess: subprocess.call(['touch', os.path.join(dirname, fileName)]) While this is much better than using a subshell (with os.system), it is still only suitable for quick-and-dirty scripts; use the accepted answer for cross-platform programs. A: Why don't you try: newfile.py #!/usr/bin/env python import sys inputfile = sys.argv[1] with open(inputfile, 'r+') as file: pass python newfile.py foobar.txt or use subprocess: import subprocess subprocess.call(["touch", "barfoo.txt"]) A: There is also a Python module for touch >>> from touch import touch >>> touch(file_name) You can install it with pip install touch A: I have a program that I use for backups: https://stromberg.dnsalias.org/~strombrg/backshift/ I profiled it using vmprof, and identified that touch was by far the most time-consuming part of it. So I looked into ways of touching files quickly. I found that on CPython 3.11, this was the fastest: def touch3(filename, flags=os.O_CREAT | os.O_RDWR): """Touch a file using os.open+os.close - fastest on CPython 3.11.""" os.close(os.open(filename, flags, 0o644)) And on Pypy3 7.3.9, this was the fastest: def touch1(filename): """Touch a file using pathlib - fastest on pypy3, and fastest overall.""" Path(filename).touch() Of the two, pypy3's best was only slightly faster than cpython's best. I may create a web page about this someday, but for now all I have is a Subversion repo: https://stromberg.dnsalias.org/svn/touch/trunk It includes the 4 ways of doing touches I tried.
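Pulling the thread's pieces together: a small cross-platform helper sketched around pathlib's touch, with the optional explicit-times behavior of GNU touch layered on via os.utime. The signature is illustrative:

```python
import os
from pathlib import Path

def touch(path, times=None):
    """Create `path` if missing and set its access/modified times.

    `times` is an optional (atime, mtime) pair in epoch seconds;
    by default both are set to now, like the Unix utility.
    """
    Path(path).touch(exist_ok=True)  # creates the file, or bumps its times to now
    if times is not None:
        os.utime(path, times)
```

Like the accepted answer, this relies on Path.touch() updating the timestamps of an existing file; the os.utime call only runs when an explicit time pair is requested.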
Implement touch using Python?
touch is a Unix utility that sets the modification and access times of files to the current time of day. If the file doesn't exist, it is created with default permissions. How would you implement it as a Python function? Try to be cross platform and complete. (Current Google results for "python touch file" are not that great, but point to os.utime.)
[ "Looks like this is new as of Python 3.4 - pathlib.\nfrom pathlib import Path\n\nPath('path/to/file.txt').touch()\n\nThis will create a file.txt at the path.\n--\n\nPath.touch(mode=0o777, exist_ok=True)\nCreate a file at this given path. If mode is given, it is combined with the process’ umask value to determine the file mode and access flags. If the file already exists, the function succeeds if exist_ok is true (and its modification time is updated to the current time), otherwise FileExistsError is raised. \n\n", "This tries to be a little more race-free than the other solutions. (The with keyword is new in Python 2.5.)\nimport os\ndef touch(fname, times=None):\n with open(fname, 'a'):\n os.utime(fname, times)\n\nRoughly equivalent to this.\nimport os\ndef touch(fname, times=None):\n fhandle = open(fname, 'a')\n try:\n os.utime(fname, times)\n finally:\n fhandle.close()\n\nNow, to really make it race-free, you need to use futimes and change the timestamp of the open filehandle, instead of opening the file and then changing the timestamp on the filename (which may have been renamed). Unfortunately, Python doesn't seem to provide a way to call futimes without going through ctypes or similar...\n\nEDIT\nAs noted by Nate Parsons, Python 3.3 will add specifying a file descriptor (when os.supports_fd) to functions such as os.utime, which will use the futimes syscall instead of the utimes syscall under the hood. 
In other words:\nimport os\ndef touch(fname, mode=0o666, dir_fd=None, **kwargs):\n flags = os.O_CREAT | os.O_APPEND\n with os.fdopen(os.open(fname, flags=flags, mode=mode, dir_fd=dir_fd)) as f:\n os.utime(f.fileno() if os.utime in os.supports_fd else fname,\n dir_fd=None if os.supports_fd else dir_fd, **kwargs)\n\n", "def touch(fname):\n if os.path.exists(fname):\n os.utime(fname, None)\n else:\n open(fname, 'a').close()\n\n", "Why not try this?:\nimport os\n\ndef touch(fname):\n try:\n os.utime(fname, None)\n except OSError:\n open(fname, 'a').close()\n\nI believe this eliminates any race condition that matters. If the file does not exist then an exception will be thrown.\nThe only possible race condition here is if the file is created before open() is called but after os.utime(). But this does not matter because in this case the modification time will be as expected since it must have happened during the call to touch().\n", "For a more low-level solution one can use\nos.close(os.open(\"file.txt\", os.O_CREAT))\n\n", "This answer is compatible with all versions since Python-2.5 when keyword with has been released.\n1. Create file if does not exist + Set current time\n(exactly same as command touch)\nimport os\n\nfname = 'directory/filename.txt'\nwith open(fname, 'a'): # Create file if does not exist\n os.utime(fname, None) # Set access/modified times to now\n # May raise OSError if file does not exist\n\nA more robust version:\nimport os\n\nwith open(fname, 'a'):\n try: # Whatever if file was already existing\n os.utime(fname, None) # => Set current time anyway\n except OSError:\n pass # File deleted between open() and os.utime() calls\n\n2. Just create the file if does not exist\n(does not update time) \nwith open(fname, 'a'): # Create file if does not exist\n pass\n\n3. 
Just update file access/modified times\n(does not create file if not existing)\nimport os\n\ntry:\n os.utime(fname, None) # Set access/modified times to now\nexcept OSError:\n pass # File does not exist (or no permission)\n\nUsing os.path.exists() does not simplify the code:\nfrom __future__ import (absolute_import, division, print_function)\nimport os\n\nif os.path.exists(fname):\n try:\n os.utime(fname, None) # Set access/modified times to now\n except OSError:\n pass # File deleted between exists() and utime() calls\n # (or no permission)\n\nBonus: Update time of all files in a directory\nfrom __future__ import (absolute_import, division, print_function)\nimport os\n\nnumber_of_files = 0\n\n# Current directory which is \"walked through\"\n# | Directories in root\n# | | Files in root Working directory\n# | | | |\nfor root, _, filenames in os.walk('.'):\n for fname in filenames:\n pathname = os.path.join(root, fname)\n try:\n os.utime(pathname, None) # Set access/modified times to now\n number_of_files += 1\n except OSError as why:\n print('Cannot change time of %r because %r', pathname, why)\n\nprint('Changed time of %i files', number_of_files)\n\n", "Here's some code that uses ctypes (only tested on Linux):\nfrom ctypes import *\nlibc = CDLL(\"libc.so.6\")\n\n# struct timespec {\n# time_t tv_sec; /* seconds */\n# long tv_nsec; /* nanoseconds */\n# };\n# int futimens(int fd, const struct timespec times[2]);\n\nclass c_timespec(Structure):\n _fields_ = [('tv_sec', c_long), ('tv_nsec', c_long)]\n\nclass c_utimbuf(Structure):\n _fields_ = [('atime', c_timespec), ('mtime', c_timespec)]\n\nutimens = CFUNCTYPE(c_int, c_char_p, POINTER(c_utimbuf))\nfutimens = CFUNCTYPE(c_int, c_char_p, POINTER(c_utimbuf)) \n\n# from /usr/include/i386-linux-gnu/bits/stat.h\nUTIME_NOW = ((1l << 30) - 1l)\nUTIME_OMIT = ((1l << 30) - 2l)\nnow = c_timespec(0,UTIME_NOW)\nomit = c_timespec(0,UTIME_OMIT)\n\n# wrappers\ndef update_atime(fileno):\n assert(isinstance(fileno, int))\n 
libc.futimens(fileno, byref(c_utimbuf(now, omit)))\ndef update_mtime(fileno):\n assert(isinstance(fileno, int))\n libc.futimens(fileno, byref(c_utimbuf(omit, now)))\n\n# usage example:\n#\n# f = open(\"/tmp/test\")\n# update_mtime(f.fileno())\n\n", "Simplistic:\ndef touch(fname):\n open(fname, 'a').close()\n os.utime(fname, None)\n\n\nThe open ensures there is a file there\nthe utime ensures that the timestamps are updated\n\nTheoretically, it's possible someone will delete the file after the open, causing utime to raise an exception. But arguably that's OK, since something bad did happen.\n", "with open(file_name,'a') as f: \n pass\n\n", "The following is sufficient:\nimport os\ndef func(filename):\n if os.path.exists(filename):\n os.utime(filename)\n else:\n with open(filename,'a') as f:\n pass\n\nIf you want to set a specific time for touch, use os.utime as follows:\nos.utime(filename,(atime,mtime))\n\nHere, atime and mtime both should be int/float and should be equal to epoch time in seconds to the time which you want to set.\n", "Complex (possibly buggy):\ndef utime(fname, atime=None, mtime=None)\n if type(atime) is tuple:\n atime, mtime = atime\n\n if atime is None or mtime is None:\n statinfo = os.stat(fname)\n if atime is None:\n atime = statinfo.st_atime\n if mtime is None:\n mtime = statinfo.st_mtime\n\n os.utime(fname, (atime, mtime))\n\n\ndef touch(fname, atime=None, mtime=None):\n if type(atime) is tuple:\n atime, mtime = atime\n\n open(fname, 'a').close()\n utime(fname, atime, mtime)\n\nThis tries to also allow setting the access or modification time, like GNU touch.\n", "write_text() from pathlib.Path can be used.\n>>> from pathlib import Path\n>>> Path('aa.txt').write_text(\"\")\n0\n\n", "It might seem logical to create a string with the desired variables, and pass it to os.system:\ntouch = 'touch ' + dir + '/' + fileName\nos.system(touch)\n\nThis is inadequate in a number of ways (e.g.,it doesn't handle whitespace), so don't do it. 
\nA more robust method is to use subprocess :\nsubprocess.call(['touch', os.path.join(dirname, fileName)])\nWhile this is much better than using a subshell (with os.system), it is still only suitable for quick-and-dirty scripts; use the accepted answer for cross-platform programs.\n", "Why don't you try:\nnewfile.py\n#!/usr/bin/env python\nimport sys\ninputfile = sys.argv[1]\n\nwith open(inputfile, 'r+') as file:\n pass\n\n\npython newfile.py foobar.txt\n\nor\n\nuse subprocess:\n\nimport subprocess\nsubprocess.call([\"touch\", \"barfoo.txt\"])\n\n", "There is also a python module for touch\n>>> from touch import touch\n>>> touch(file_name)\n\nYou can install it with pip install touch\n", "I have a program that I use for backups: https://stromberg.dnsalias.org/~strombrg/backshift/\nI profiled it using vmprof, and identified that touch was by far the most time-consuming part of it.\nSo I looked into ways of touching files quickly.\nI found that on CPython 3.11, this was the fastest:\ndef touch3(filename, flags=os.O_CREAT | os.O_RDWR): \n \"\"\"Touch a file using os.open+os.close - fastest on CPython 3.11.\"\"\" \n os.close(os.open(filename, flags, 0o644)) \n\n \n\nAnd on Pypy3 7.3.9, this was the fastest:\ndef touch1(filename): \n \"\"\"Touch a file using pathlib - fastest on pypy3, and fastest overall.\"\"\" \n Path(filename).touch() \n\nOf the two, pypy3's best was only slightly faster cpython's best.\nI may create a web page about this someday, but for now all I have is a Subversion repo:\nhttps://stromberg.dnsalias.org/svn/touch/trunk\nIt includes the 4 ways of doing touches I tried.\n" ]
[ 512, 253, 46, 37, 19, 18, 8, 5, 4, 4, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "python", "utility" ]
stackoverflow_0001158076_python_utility.txt
Q: Combine Pandas columns into a nested list I am attempting to combine elements of a dataframe into a nested list. Say I have the following: df = pd.DataFrame(np.random.randn(100,4), columns=list('abcd')) df.head(4) a b c d 0 0.455258 1.135895 0.573383 -0.637943 1 0.262079 -0.397168 -0.980062 -1.600837 2 0.921582 0.767232 -0.298590 -0.159964 3 -0.645110 -0.709058 1.223899 0.382212 Then, I would like to create a fifth column e that looks like: a b c d e 0 0.455258 1.135895 0.573383 -0.637943 [[0.455258 1.135895 0.573383 -0.637943]] 1 0.262079 -0.397168 -0.980062 -1.600837 [[0.262079 -0.397168 -0.980062 -1.600837]] 2 0.921582 0.767232 -0.298590 -0.159964 [[0.921582 0.767232 -0.298590 -0.159964]] 3 -0.645110 -0.709058 1.223899 0.382212 [[-0.645110 -0.709058 1.223899 0.382212]] efficiently. My most efficient but wrong guess so far has been to do df['e'] = df.values.tolist() But that just results in: a b c d e 0 0.455258 1.135895 0.573383 -0.637943 [0.455258 1.135895 0.573383 -0.637943] 1 0.262079 -0.397168 -0.980062 -1.600837 [0.262079 -0.397168 -0.980062 -1.600837] 2 0.921582 0.767232 -0.298590 -0.159964 [0.921582 0.767232 -0.298590 -0.159964] 3 -0.645110 -0.709058 1.223899 0.382212 [-0.645110 -0.709058 1.223899 0.382212] My least efficient but correct guess has been: a = [] for index, row in df.iterrows(): a.append([[row['a'],row['b'],row['c'],row['d']]]) Is there a better way? A: Another possible solution: df['e'] = df.values.tolist() df['e'] = df['e'].map(lambda x: [x]) Output: a b c d \ 0 -1.594129 1.692562 0.602186 -1.620295 1 -0.561567 -0.033658 -1.259215 1.054229 2 0.450852 -0.483194 0.126173 0.354781 3 2.060968 -0.428400 -0.973516 -0.201786 4 -0.977307 -0.123215 -1.494138 -0.175432 e 0 [[-1.5941291794267378, 1.6925620764107292, 0.6... 1 [[-0.5615669341251519, -0.03365818317800309, -... 2 [[0.45085184068754164, -0.48319360005444034, 0... 3 [[2.0609676606685086, -0.42839969840552594, -0... 4 [[-0.9773067339895964, -0.12321466907036417, -... 
A: Let's use np.array_split: df['e'] = np.array_split(df.to_numpy(), df.shape[0], axis=0) Output: a b c d e 0 -0.164745 -0.498313 -0.247778 -1.531003 [[-0.16474534230721335, -0.49831346259483156, ... 1 0.079485 0.125790 0.002755 -0.182361 [[0.0794845071834397, 0.12579014367640728, 0.0... 2 0.790263 0.488152 -0.752555 0.432949 [[0.790263001866772, 0.48815219760288764, -0.7... 3 -0.139499 -1.493593 -1.708668 -2.495497 [[-0.13949904491921675, -1.493593498340277, -1... 4 2.662431 0.247559 -0.949407 2.746299 [[2.662430989009563, 0.2475588133223812, -0.94... .. ... ... ... ... ... 95 0.252663 1.018614 -0.491736 -0.290786 [[0.252663350866794, 1.018613617727022, -0.491... 96 1.023089 -0.367463 0.437327 -0.017441 [[1.0230888404185123, -0.3674628009130751, 0.4... 97 0.571278 0.450803 0.441102 1.176884 [[0.5712775025212533, 0.4508029251387083, 0.44... 98 1.336477 0.166516 0.408941 0.972896 [[1.3364769455886123, 0.16651649771088423, 0.4... 99 -1.298205 1.868477 -0.174665 0.065565 [[-1.2982050517578514, 1.8684774453090633, -0.... A: try: df["e"]=df.apply(lambda x:[[x[column] for column in df.columns]],axis=1)
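For reference, the same outer-list wrapping can be done in one pass with a plain list comprehension over df.values.tolist(), with no second map step — a minimal sketch assuming numpy and pandas are available (the 100x4 random frame mirrors the question's setup):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 4), columns=list('abcd'))

# Wrap each row's value list in one more list as the rows are produced.
df['e'] = [[row] for row in df.values.tolist()]

print(df['e'].iloc[0])  # e.g. [[0.4552, 1.1358, 0.5733, -0.6379]]
```

Each cell of e is then a list holding a single inner list of the four row values, matching the [[...]] shape asked for.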
Combine Pandas columns into a nested list
I am attempting to combine elements of a dataframe into a nested list. Say I have the following: df = pd.DataFrame(np.random.randn(100,4), columns=list('abcd')) df.head(4) a b c d 0 0.455258 1.135895 0.573383 -0.637943 1 0.262079 -0.397168 -0.980062 -1.600837 2 0.921582 0.767232 -0.298590 -0.159964 3 -0.645110 -0.709058 1.223899 0.382212 Then, I would like to create a fifth column e that looks like: a b c d e 0 0.455258 1.135895 0.573383 -0.637943 [[0.455258 1.135895 0.573383 -0.637943]] 1 0.262079 -0.397168 -0.980062 -1.600837 [[0.262079 -0.397168 -0.980062 -1.600837]] 2 0.921582 0.767232 -0.298590 -0.159964 [[0.921582 0.767232 -0.298590 -0.159964]] 3 -0.645110 -0.709058 1.223899 0.382212 [[-0.645110 -0.709058 1.223899 0.382212]] efficiently. My most efficient but wrong guess so far has been to do df['e'] = df.values.tolist() But that just results in: a b c d e 0 0.455258 1.135895 0.573383 -0.637943 [0.455258 1.135895 0.573383 -0.637943] 1 0.262079 -0.397168 -0.980062 -1.600837 [0.262079 -0.397168 -0.980062 -1.600837] 2 0.921582 0.767232 -0.298590 -0.159964 [0.921582 0.767232 -0.298590 -0.159964] 3 -0.645110 -0.709058 1.223899 0.382212 [-0.645110 -0.709058 1.223899 0.382212] My least efficient but correct guess has been: a = [] for index, row in df.iterrows(): a.append([[row['a'],row['b'],row['c'],row['d']]]) Is there a better way?
[ "Another possible solution:\ndf['e'] = df.values.tolist()\ndf['e'] = df['e'].map(lambda x: [x])\n\nOutput:\n a b c d \\\n0 -1.594129 1.692562 0.602186 -1.620295 \n1 -0.561567 -0.033658 -1.259215 1.054229 \n2 0.450852 -0.483194 0.126173 0.354781 \n3 2.060968 -0.428400 -0.973516 -0.201786 \n4 -0.977307 -0.123215 -1.494138 -0.175432 \n\n e \n0 [[-1.5941291794267378, 1.6925620764107292, 0.6... \n1 [[-0.5615669341251519, -0.03365818317800309, -... \n2 [[0.45085184068754164, -0.48319360005444034, 0... \n3 [[2.0609676606685086, -0.42839969840552594, -0... \n4 [[-0.9773067339895964, -0.12321466907036417, -... \n\n", "Let's use np.array_split:\ndf['e'] = np.array_split(df.to_numpy(), df.shape[0], axis=0)\n\nOutput:\n a b c d e\n0 -0.164745 -0.498313 -0.247778 -1.531003 [[-0.16474534230721335, -0.49831346259483156, ...\n1 0.079485 0.125790 0.002755 -0.182361 [[0.0794845071834397, 0.12579014367640728, 0.0...\n2 0.790263 0.488152 -0.752555 0.432949 [[0.790263001866772, 0.48815219760288764, -0.7...\n3 -0.139499 -1.493593 -1.708668 -2.495497 [[-0.13949904491921675, -1.493593498340277, -1...\n4 2.662431 0.247559 -0.949407 2.746299 [[2.662430989009563, 0.2475588133223812, -0.94...\n.. ... ... ... ... ...\n95 0.252663 1.018614 -0.491736 -0.290786 [[0.252663350866794, 1.018613617727022, -0.491...\n96 1.023089 -0.367463 0.437327 -0.017441 [[1.0230888404185123, -0.3674628009130751, 0.4...\n97 0.571278 0.450803 0.441102 1.176884 [[0.5712775025212533, 0.4508029251387083, 0.44...\n98 1.336477 0.166516 0.408941 0.972896 [[1.3364769455886123, 0.16651649771088423, 0.4...\n99 -1.298205 1.868477 -0.174665 0.065565 [[-1.2982050517578514, 1.8684774453090633, -0....\n\n", "try:\ndf[\"e\"]=df.apply(lambda x:[[x[column] for column in df.columns]],axis=1)\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074563854_dataframe_pandas_python.txt
Q: Python script on PBS fails with error =>> PBS: job killed: ncpus 37.94 exceeded limit 36 (sum) I get the error mentioned in the title when I run a python script (using Miniconda) on a PBS scheduler. I think that numpy is doing some multithreading/processing but I can't stop it from doing so. I added these lines to my PBS script: export MKL_NUM_THREADS=1 export NUMEXPR_NUM_THREADS=1 export OMP_NUM_THREADS=1 export OPENBLAS_NUM_THREADS=1 export VECLIB_MAXIMUM_THREADS=1 I also added these lines to my main.py, just for good measure: import os os.environ["OMP_NUM_THREADS"] = "1" os.environ["OPENBLAS_NUM_THREADS"] = "1" os.environ["MKL_NUM_THREADS"] = "1" os.environ["VECLIB_MAXIMUM_THREADS"] = "1" os.environ["NUMEXPR_NUM_THREADS"] = "1" import numpy as np # Import numpy AFTER setting these variables But to no avail --- I still get the same error. I run my script as qsub -q <QUEUE_NAME> -lnodes=1:ppn=36 path/to/script.sh Sources: Two answers that tell you how to stop all/most unwanted multithreading/multiprocessing: https://stackoverflow.com/a/48665619/3670097, https://stackoverflow.com/a/51954326/3670097 Summarizes how to do it from within a script: https://stackoverflow.com/a/53224849/3670097 This also fails I went to each numpy computationally intensive function and placed it in a context manager: import threadpoolctl with threadpoolctl.threadpool_limits(limits=1, user_api="blas"): D, P = np.linalg.eig(M, right=True) A: Runtime fix from https://stackoverflow.com/a/57505958/3528321 : try: import mkl mkl.set_num_threads(1) except: pass
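Since the *_NUM_THREADS variables only take effect if they are set before numpy's BLAS backend is first loaded, one way to guarantee the ordering — a sketch, not a drop-in fix — is to launch the worker in a child process whose environment already carries the limits (the child below just echoes the variable back; a real job would run main.py instead):

```python
import os
import subprocess
import sys

# Build an environment with the thread limits already in place.
env = dict(os.environ,
           OMP_NUM_THREADS="1",
           OPENBLAS_NUM_THREADS="1",
           MKL_NUM_THREADS="1",
           NUMEXPR_NUM_THREADS="1",
           VECLIB_MAXIMUM_THREADS="1")

# The child inherits the limits before it imports anything, so its
# thread pools are sized correctly from the start.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OMP_NUM_THREADS'])"],
    env=env, capture_output=True, text=True)

print(result.stdout.strip())  # -> 1
```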
Python script on PBS fails with error =>> PBS: job killed: ncpus 37.94 exceeded limit 36 (sum)
I get the error mentioned in the title when I run a python script (using Miniconda) on a PBS scheduler. I think that numpy is doing some multithreading/processing but I can't stop it from doing so. I added these lines to my PBS script: export MKL_NUM_THREADS=1 export NUMEXPR_NUM_THREADS=1 export OMP_NUM_THREADS=1 export OPENBLAS_NUM_THREADS=1 export VECLIB_MAXIMUM_THREADS=1 I also added these lines to my main.py, just for good measure: import os os.environ["OMP_NUM_THREADS"] = "1" os.environ["OPENBLAS_NUM_THREADS"] = "1" os.environ["MKL_NUM_THREADS"] = "1" os.environ["VECLIB_MAXIMUM_THREADS"] = "1" os.environ["NUMEXPR_NUM_THREADS"] = "1" import numpy as np # Import numpy AFTER setting these variables But to no avail --- I still get the same error. I run my script as qsub -q <QUEUE_NAME> -lnodes=1:ppn=36 path/to/script.sh Sources: Two answers that tell you how to stop all/most unwanted multithreading/multiprocessing: https://stackoverflow.com/a/48665619/3670097, https://stackoverflow.com/a/51954326/3670097 Summarizes how to do it from within a script: https://stackoverflow.com/a/53224849/3670097 This also fails I went to each numpy computationally intensive function and placed it in a context manager: import threadpoolctl with threadpoolctl.threadpool_limits(limits=1, user_api="blas"): D, P = np.linalg.eig(M, right=True)
[ "Runtime fix from https://stackoverflow.com/a/57505958/3528321 :\ntry:\n import mkl\n mkl.set_num_threads(1)\nexcept:\n pass\n\n" ]
[ 0 ]
[]
[]
[ "anaconda", "multithreading", "numpy", "pbs", "python" ]
stackoverflow_0074429606_anaconda_multithreading_numpy_pbs_python.txt
Q: how to filter csv in python I have a csv file named film.csv the title of each column is as follows (with a couple of example rows): Year;Length;Title;Subject;Actor;Actress;Director;Popularity;Awards;*Image 1990;111;Tie Me Up! Tie Me Down!;Comedy;Banderas, Antonio;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png 1991;113;High Heels;Comedy;Bosé, Miguel;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png 1983;104;Dead Zone, The;Horror;Walken, Christopher;Adams, Brooke;Cronenberg, David;79;No;NicholasCage.png 1979;122;Cuba;Action;Connery, Sean;Adams, Brooke;Lester, Richard;6;No;seanConnery.png 1978;94;Days of Heaven;Drama;Gere, Richard;Adams, Brooke;Malick, Terrence;14;No;NicholasCage.png 1983;140;Octopussy;Action;Moore, Roger;Adams, Maud;Glen, John;68;No;NicholasCage.png I need to parse this csv with basic commands (not using Pandas) How would I extract all movie titles with the actor first name = Richard , made before year 1985 , and award = yes ? (I have been able to get it to show a list where awards == yes , but not the rest) How can I count how many times any given actor appears in the list? 
file_name = "film.csv" print('loading file') lines = (line for line in open(file_name,encoding='cp1252')) #generator to capture lines print('removing ;') lists = (s.rstrip().split(";") for s in lines) #generators to capture lists containing values from lines print('2-filter by awards') sel = input() if sel == '2': cols=next(lists) #obtains only the header print(cols) collections = (dict(zip(cols,data)) for data in lists) filtered = (col["Title"] for col in collections if col["Awards"][0]== "Y") for item in filtered: print(item) # input() #browse lists and index them per header values, then filter all movies that have been awarded #using a new generator object else: A: To read and filter the data you can use next example (I'm using award == No, because you don't have movie with award == Yes and other criteria in your example): import csv from collections import Counter with open("data.csv", "r") as f_in: reader = csv.DictReader(f_in, delimiter=";") data = list(reader) # extract all movie titles with the actor first name = Richard , made before year 1985 , and award = No for d in data: if ( d["Actor"].split(", ")[-1] == "Richard" and int(d["Year"]) < 1985 and d["Awards"] == "No" ): print(d) Prints: { "Year": "1978", "Length": "94", "Title": "Days of Heaven", "Subject": "Drama", "Actor": "Gere, Richard", "Actress": "Adams, Brooke", "Director": "Malick, Terrence", "Popularity": "14", "Awards": "No", "*Image": "NicholasCage.png", } To get counter of actors you can use collections.Counter: cnt = Counter(d["Actor"] for d in data) print(cnt) Prints: Counter( { "Banderas, Antonio": 1, "Bosé, Miguel": 1, "Walken, Christopher": 1, "Connery, Sean": 1, "Gere, Richard": 1, "Moore, Roger": 1, } ) A: This will print out all movie titles that the actor's first name is Richard, made before 1985 and awards == Yes: filter = {} lines = open('test.csv', 'r').readlines() columns = lines[0].strip().split(';') lines.pop(0) for i in lines: x = i.strip().split(';') # Checking if the movie 
was made before 1985 if int(x[columns.index('Year')]) < 1985: # Checking if the actor's first name is Richard if x[columns.index('Actor')].split(', ')[1] == 'Richard': # Checking if awards == Yes if x[columns.index('Awards')] == 'Yes': # Printing out the title of the movie print(x[columns.index('Title')]) Counting if any given actor appears in the list: name = "Gere, Richard" # Given actor name count = 0 for i in lines: x = i.strip().split(';') # Checking if the actor's name is the given name if x[columns.index('Actor')] == name: # If it is, add 1 to the count count += 1 Output: count: 1
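For completeness, both steps — the filter and the per-actor count — can be combined into one self-contained pass; the sketch below inlines a two-row sample via io.StringIO so it runs without film.csv (the column layout is the one from the question):

```python
import csv
import io
from collections import Counter

raw = """Year;Length;Title;Subject;Actor;Actress;Director;Popularity;Awards;*Image
1979;122;Cuba;Action;Connery, Sean;Adams, Brooke;Lester, Richard;6;No;seanConnery.png
1978;94;Days of Heaven;Drama;Gere, Richard;Adams, Brooke;Malick, Terrence;14;No;NicholasCage.png"""

rows = list(csv.DictReader(io.StringIO(raw), delimiter=';'))

# Titles where the actor's first name is Richard and the year is before 1985.
titles = [r['Title'] for r in rows
          if r['Actor'].split(', ')[-1] == 'Richard' and int(r['Year']) < 1985]
print(titles)  # ['Days of Heaven']

# How many times each actor appears.
actor_counts = Counter(r['Actor'] for r in rows)
print(actor_counts['Gere, Richard'])  # 1
```

Note that the Cuba row is skipped even though "Richard" appears in it: Lester, Richard is the director, and only the Actor column is checked.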
how to filter csv in python
I have a csv file named film.csv the title of each column is as follows (with a couple of example rows): Year;Length;Title;Subject;Actor;Actress;Director;Popularity;Awards;*Image 1990;111;Tie Me Up! Tie Me Down!;Comedy;Banderas, Antonio;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png 1991;113;High Heels;Comedy;Bosé, Miguel;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png 1983;104;Dead Zone, The;Horror;Walken, Christopher;Adams, Brooke;Cronenberg, David;79;No;NicholasCage.png 1979;122;Cuba;Action;Connery, Sean;Adams, Brooke;Lester, Richard;6;No;seanConnery.png 1978;94;Days of Heaven;Drama;Gere, Richard;Adams, Brooke;Malick, Terrence;14;No;NicholasCage.png 1983;140;Octopussy;Action;Moore, Roger;Adams, Maud;Glen, John;68;No;NicholasCage.png I need to parse this csv with basic commands (not using Pandas) How would I extract all movie titles with the actor first name = Richard , made before year 1985 , and award = yes ? (I have been able to get it to show a list where awards == yes , but not the rest) How can I count how many times any given actor appears in the list? file_name = "film.csv" print('loading file') lines = (line for line in open(file_name,encoding='cp1252')) #generator to capture lines print('removing ;') lists = (s.rstrip().split(";") for s in lines) #generators to capture lists containing values from lines print('2-filter by awards') sel = input() if sel == '2': cols=next(lists) #obtains only the header print(cols) collections = (dict(zip(cols,data)) for data in lists) filtered = (col["Title"] for col in collections if col["Awards"][0]== "Y") for item in filtered: print(item) # input() #browse lists and index them per header values, then filter all movies that have been awarded #using a new generator object else:
[ "To read and filter the data you can use next example (I'm using award == No, because you don't have movie with award == Yes and other criteria in your example):\nimport csv\nfrom collections import Counter\n\nwith open(\"data.csv\", \"r\") as f_in:\n reader = csv.DictReader(f_in, delimiter=\";\")\n data = list(reader)\n\n# extract all movie titles with the actor first name = Richard , made before year 1985 , and award = No\n\nfor d in data:\n if (\n d[\"Actor\"].split(\", \")[-1] == \"Richard\"\n and int(d[\"Year\"]) < 1985\n and d[\"Awards\"] == \"No\"\n ):\n print(d)\n\nPrints:\n{\n \"Year\": \"1978\",\n \"Length\": \"94\",\n \"Title\": \"Days of Heaven\",\n \"Subject\": \"Drama\",\n \"Actor\": \"Gere, Richard\",\n \"Actress\": \"Adams, Brooke\",\n \"Director\": \"Malick, Terrence\",\n \"Popularity\": \"14\",\n \"Awards\": \"No\",\n \"*Image\": \"NicholasCage.png\",\n}\n\n\nTo get counter of actors you can use collections.Counter:\ncnt = Counter(d[\"Actor\"] for d in data)\nprint(cnt)\n\nPrints:\nCounter(\n {\n \"Banderas, Antonio\": 1,\n \"Bosé, Miguel\": 1,\n \"Walken, Christopher\": 1,\n \"Connery, Sean\": 1,\n \"Gere, Richard\": 1,\n \"Moore, Roger\": 1,\n }\n)\n\n", "This will print out all movie titles that the actor's first name is Richard, made before 1985 and awards == Yes:\nfilter = {}\nlines = open('test.csv', 'r').readlines()\ncolumns = lines[0].strip().split(';')\n\nlines.pop(0)\n\nfor i in lines:\n x = i.strip().split(';')\n # Checking if the movie was made before 1985\n if int(x[columns.index('Year')]) < 1985:\n # Checking if the actor's first name is Richard\n if x[columns.index('Actor')].split(', ')[1] == 'Richard':\n # Checking if awards == Yes\n if x[columns.index('Awards')] == 'Yes':\n # Printing out the title of the movie\n print(x[columns.index('Title')])\n\nCounting if any given actor appears in the list:\nname = \"Gere, Richard\" # Given actor name\n\ncount = 0\nfor i in lines:\n x = i.strip().split(';')\n # Checking if the actor's 
name is the given name\n if x[columns.index('Actor')] == name:\n # If it is, add 1 to the count\n count += 1\n\n\nOutput: count: 1\n\n" ]
[ 1, 0 ]
[]
[]
[ "csv", "parsing", "python" ]
stackoverflow_0074562025_csv_parsing_python.txt
Q: Text File Manipulation using For Loop I have a text file that looks like this: line1 #commentA line2 line3 #commentB line4 line5 line6 #commentC line7 line8 line9 line10 line11 #commentD line12 I want to reformat it to look like this: line1 #commentA line2 line3 #commentB line4line5 line6 #commentC line7line8line9line10 line11 #commentD line12 The number of lines between lines that have comments is variable, and can be large (up to 1024 lines). So I'm looking for a way to leave the line with the comment untouched, but append the text of all lines between lines with comments into one line. Can you give a suggestion? My start at this is as follows: with open("myfile.txt", mode='r+') as f: lines = f.readlines() for n, content in enumerate(lines): lines[n] = content if '#' in lines[n]: print(lines[n]) # not sure how to combine the lines in between those with comments A: You can use itertools.groupby: text = """\ line1 #commentA line2 line3 #commentB line4 line5 line6 #commentC line7 line8 line9 line10 line11 #commentD line12""" from itertools import groupby for _, g in groupby(text.splitlines(), lambda l: "#" in l): print("".join(g)) Prints: line1 #commentA line2 line3 #commentB line4line5 line6 #commentC line7line8line9line10 line11 #commentD line12 To load text from your file you can use: with open('my_file.txt', 'r') as f_in: text = f_in.read() A: One way to do it : new_lines = [] for line in lines: if "#" in line: new_lines.append(f"\n{line}\n") else: new_lines.append(line) It will gives you another list of lines, than you can then save to a new file with this (note the "".join because the lines already contains the \n) : with open('file.txt', 'w') as fo: fo.write("".join(new_lines)) However, if the first line contains a comment, this will add an extra new line at the beginning of the file. 
To remove it, you can either check for it after the loop : if new_lines[0].startswith("\n"): new_lines[0] = new_lines[0][1:] Or change the loop as such : new_lines: list[str] = [] for index, line in enumerate(lines): if "#" in line: new_lines.append(("\n" if index > 0 else "") + f"{line}\n") else: new_lines.append(line)
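The same joining can also be done without itertools, using a plain buffer that flushes whenever a comment line is seen — a sketch that streams line by line, so memory stays constant even with 1024 lines between comments (the function name here is made up for illustration):

```python
def join_between_comments(lines):
    """Yield comment lines unchanged; merge each run of plain lines into one."""
    buf = []
    for line in lines:
        if '#' in line:
            if buf:                 # flush the run that preceded this comment
                yield ''.join(buf)
                buf = []
            yield line
        else:
            buf.append(line)
    if buf:                         # flush a trailing run, if any
        yield ''.join(buf)

lines = ['line1 #commentA', 'line2', 'line3 #commentB', 'line4', 'line5']
merged = list(join_between_comments(lines))
print(merged)  # ['line1 #commentA', 'line2', 'line3 #commentB', 'line4line5']
```

Because it is a generator, the same function works unchanged when lines is an open file handle instead of a list.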
Text File Manipulation using For Loop
I have a text file that looks like this: line1 #commentA line2 line3 #commentB line4 line5 line6 #commentC line7 line8 line9 line10 line11 #commentD line12 I want to reformat it to look like this: line1 #commentA line2 line3 #commentB line4line5 line6 #commentC line7line8line9line10 line11 #commentD line12 The number of lines between lines that have comments is variable, and can be large (up to 1024 lines). So I'm looking for a way to leave the line with the comment untouched, but append the text of all lines between lines with comments into one line. Can you give a suggestion? My start at this is as follows: with open("myfile.txt", mode='r+') as f: lines = f.readlines() for n, content in enumerate(lines): lines[n] = content if '#' in lines[n]: print(lines[n]) # not sure how to combine the lines in between those with comments
[ "You can use itertools.groupby:\ntext = \"\"\"\\\nline1 #commentA\nline2\nline3 #commentB\nline4\nline5\nline6 #commentC\nline7\nline8\nline9\nline10\nline11 #commentD\nline12\"\"\"\n\nfrom itertools import groupby\n\nfor _, g in groupby(text.splitlines(), lambda l: \"#\" in l):\n print(\"\".join(g))\n\nPrints:\nline1 #commentA\nline2\nline3 #commentB\nline4line5\nline6 #commentC\nline7line8line9line10\nline11 #commentD\nline12\n\n\nTo load text from your file you can use:\nwith open('my_file.txt', 'r') as f_in:\n text = f_in.read()\n\n", "One way to do it :\nnew_lines = []\nfor line in lines:\n if \"#\" in line:\n new_lines.append(f\"\\n{line}\\n\")\n else:\n new_lines.append(line)\n\nIt will gives you another list of lines, than you can then save to a new file with this (note the \"\".join because the lines already contains the \\n) :\nwith open('file.txt', 'w') as fo:\n fo.write(\"\".join(new_lines))\n\nHowever, if the first line contains a comment, this will add an extra new line at the beginning of the file. To remove it, you can either check for it after the loop :\nif new_lines[0].startswith(\"\\n\"):\n new_lines[0] = new_lines[0][1:]\n\nOr change the loop as such :\nnew_lines: list[str] = []\nfor index, line in enumerate(lines):\n if \"#\" in line:\n new_lines.append((\"\\n\" if index > 0 else \"\") + f\"{line}\\n\")\n else:\n new_lines.append(line)\n\n" ]
[ 1, 0 ]
[ "I think you can use lines = f.read().splitlines() instead of f.readlines()\nand in your for loop, you can do something like\nfor l in lines\n tmp = \"\"\n if '#' in l:\n print(tmp)\n tmp = \"\"\n print(l)\n else:\n tmp+=l\n ```\n\n" ]
[ -1 ]
[ "file", "python", "string", "text" ]
stackoverflow_0074562233_file_python_string_text.txt
Q: In python, how can I jump to a def? I want to jump from def number1 to def number2. I tried this: def number1(): print("from here to ") number2() number1() def blablabla(): print("blablabla") blablabla() def number2(): print("here") number2() but I received this error: Traceback (most recent call last): File "C:\Users\i5 9400f\Documents\projetos python\test.py", line 4, in <module> number1() File "C:\Users\i5 9400f\Documents\projetos python\test.py", line 3, in number1 number2() ^^^^^^^ NameError: name 'number2' is not defined. Did you mean: 'number1'? from here to Process finished with exit code 1 I tried using the number2() it did not work A: Python just runs your code from top to bottom sequentially, so if you try to access something that is only defined later you won't succeed. What you need to do is to define all the functions first, then call them later: def number1(): print("from here to ") number2() def blablabla(): print("blablabla") def number2(): print("here") number1() blablabla() number2() A: def number1(): print("from here to ") number2() def number2(): print("here") def blablabla(): print("blablabla") number1() blablabla() number2() ###Before running def functions, you should put them at the top of your script
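The rule both answers rely on — a name inside a function body is resolved when the function is called, not when it is defined — can be checked with a small sketch (the calls list is only there to record the order):

```python
calls = []

def number1():
    calls.append('number1')
    number2()          # fine: 'number2' is only looked up at call time

def number2():
    calls.append('number2')

number1()              # by now both names exist, so the call chain works
print(calls)           # ['number1', 'number2']
```

This is why the original code failed: number1() was called at module level before the line defining number2 had been executed.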
In python, how can I jump to a def?
I want to jump from def number1 to def number2. I tried this: def number1(): print("from here to ") number2() number1() def blablabla(): print("blablabla") blablabla() def number2(): print("here") number2() but I received this error: Traceback (most recent call last): File "C:\Users\i5 9400f\Documents\projetos python\test.py", line 4, in <module> number1() File "C:\Users\i5 9400f\Documents\projetos python\test.py", line 3, in number1 number2() ^^^^^^^ NameError: name 'number2' is not defined. Did you mean: 'number1'? from here to Process finished with exit code 1 I tried using the number2() it did not work
[ "Python just runs your code from top to bottom sequentially, so if you try to access something that is only defined later you won't succeed. What you need to do is to define all the functions first, then call them later:\ndef number1():\n print(\"from here to \")\n number2()\n\ndef blablabla():\n print(\"blablabla\")\n\ndef number2():\n print(\"here\")\n\nnumber1()\nblablabla()\nnumber2()\n\n", "def number1():\n print(\"from here to \")\n number2()\n\ndef number2():\n print(\"here\")\n\ndef blablabla():\n print(\"blablabla\")\n\nnumber1()\nblablabla()\nnumber2()\n\n###Before running def functions, you should put them at the top of your script\n" ]
[ 1, 0 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074564064_function_python.txt
Q: How can I fix this "IndentationError: expected an indented block"? def remove_stopwords(text,nlp,custom_stop_words=None,remove_small_tokens=True,min_len=2): if custom_stop_words: nlp.Defaults.stop_words |= custom_stop_words filtered_sentence =[] doc = nlp (text) for token in doc: if token.is_stop == False: if remove_small_tokens: if len(token.text)>min_len: filtered_sentence.append(token.text) else: filtered_sentence.append(token.text) return " ".join(filtered_sentence) if len(filtered_sentence)>0 else None I am getting the error for the last else: The goal of this last part is, if after the stopword removal, words are still left in the sentence, then the sentence should be returned as a string else return null. I'd be so thankful for any advice. else None ^ IndentationError: expected an indented block A: I guess you want to use the ternary operator. The format for it is x if condition else y this is on the same line and without the : after the if else. So your last return statement should be : return " ".join(filtered_sentence) if len(filtered_sentence)>0 else None A: Your entire code is not properly indented def remove_stopwords(text,nlp,custom_stop_words=None,remove_small_tokens=True,min_len=2): if custom_stop_words: nlp.Defaults.stop_words |= custom_stop_words filtered_sentence =[] doc = nlp (text) for token in doc: if token.is_stop == False: if remove_small_tokens: if len(token.text)>min_len: filtered_sentence.append(token.text) else: filtered_sentence.append(token.text) if len(filtered_sentence) > 0: return " ".join(filtered_sentence) else: return None
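Spelled out on its own, the conditional expression that the last return line uses has the one-line form x if condition else y — no colon and no indented block; a quick sketch with made-up sample words:

```python
filtered_sentence = ['stopword', 'removal', 'works']

# A conditional *expression* is one line: value_if_true if cond else value_if_false
result = ' '.join(filtered_sentence) if len(filtered_sentence) > 0 else None
print(result)  # stopword removal works

# With nothing left after filtering, the same expression yields None.
print(' '.join([]) if len([]) > 0 else None)  # None
```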
How can I fix this "IndentationError: expected an indented block"?
def remove_stopwords(text,nlp,custom_stop_words=None,remove_small_tokens=True,min_len=2): if custom_stop_words: nlp.Defaults.stop_words |= custom_stop_words filtered_sentence =[] doc = nlp (text) for token in doc: if token.is_stop == False: if remove_small_tokens: if len(token.text)>min_len: filtered_sentence.append(token.text) else: filtered_sentence.append(token.text) return " ".join(filtered_sentence) if len(filtered_sentence)>0 else None I am getting the error for the last else: The goal of this last part is, if after the stopword removal, words are still left in the sentence, then the sentence should be returned as a string else return null. I'd be so thankful for any advice. else None ^ IndentationError: expected an indented block
[ "I guess you want to use the ternary operator.\nThe format for it is x if condition else y this is on the same line and without the : after the if else.\nSo your last return statement should be :\nreturn \" \".join(filtered_sentence) if len(filtered_sentence)>0 else None\n\n", "Your entire code is not properly indented\ndef remove_stopwords(text,nlp,custom_stop_words=None,remove_small_tokens=True,min_len=2):\n if custom_stop_words:\n nlp.Defaults.stop_words |= custom_stop_words\n\n filtered_sentence =[] \n doc = nlp (text)\n for token in doc:\n \n if token.is_stop == False: \n \n if remove_small_tokens:\n if len(token.text)>min_len:\n filtered_sentence.append(token.text)\n else:\n filtered_sentence.append(token.text)\n \n if len(filtered_sentence) > 0: \n return \" \".join(filtered_sentence) \n else:\n return None\n\n\n" ]
[ 1, 1 ]
[]
[]
[ "if_statement", "nlp", "python", "python_3.x", "topic_modeling" ]
stackoverflow_0074563930_if_statement_nlp_python_python_3.x_topic_modeling.txt
Q: how to get raw events from Firebase analytics using api without BigQuery? I need to extract raw events from Firebase Analytics using the Python SDK. Actually, we can link BigQuery to a Firebase project and access raw events through BigQuery. But it is not clear from the documentation whether there are any other ways to extract events without BigQuery. A: There is an Analytics Data API that you can use to run Analytics reports and retrieve data. The Google Analytics Data API gives programmatic access to users by country. In Data API requests, you'll need to identify your Google Analytics 4 (GA4) property by its ID; this ID is different from the Firebase project. There are client libraries in Java, Python, Node.js, and other languages to simplify your implementation. A: You could download a CSV file of your Analytics reports. Though, Google Analytics exports only up to 5k rows when you download a report as a CSV. To export a report: Click in the top right of most reports. Click Download File. Select an option: Export to Google Sheets Download CSV You could check this for your reference.
how to get raw events from Firebase analytics using api without BigQuery?
I need to extract raw events from Firebase Analytics using python SDK. Actually, we can link a BigQuery to a firebase and access raw events through BigQuery. But it is not clear from the documentation is there any other ways to extract events without BigQuery?
[ "There is an Analytics Data API that you can use to run Analytics reports and retrieve data.\nThe Google Analytics Data API gives programmatic access to users by country.\nIn Data API requests, you'll need to identify your Google Analytics 4 (GA4) property by its ID; this ID is different from the Firebase project. There are client libraries in Java, Python, Node.js, and other languages to simplify your implementation.\n", "You could download CSV file of your Analytics reports. Though, Google Analytics exports only up to 5k rows when you download a report as a CSV.\nTo export a report:\n\nClick in the top right of most reports.\nClick Download File.\nSelect an option:\n\n\nExport to Google Sheets\nDownload CSV\n\nYou could check this for your reference.\n" ]
[ 0, 0 ]
[]
[]
[ "api", "firebase", "firebase_analytics", "google_bigquery", "python" ]
stackoverflow_0073631180_api_firebase_firebase_analytics_google_bigquery_python.txt
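As a rough illustration of the Data API route mentioned in the first answer, this sketch only builds the JSON body one would POST to the GA4 `runReport` REST method; the field names follow the public v1beta reference, but the dates and the `eventName`/`eventCount` dimension/metric choices are placeholder assumptions, and no request is actually sent:

```python
import json

def build_run_report_body(start_date, end_date):
    # Shape of the v1beta runReport request body (POST to
    # https://analyticsdata.googleapis.com/v1beta/properties/{PROPERTY_ID}:runReport).
    # "eventName"/"eventCount" are standard GA4 names, used here only as examples.
    return {
        "dateRanges": [{"startDate": start_date, "endDate": end_date}],
        "dimensions": [{"name": "eventName"}],
        "metrics": [{"name": "eventCount"}],
    }

body = build_run_report_body("2022-11-01", "2022-11-07")
print(json.dumps(body, indent=2))
```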
Q: Removing lines that have strings with same hexadecimal values from a text file I have a file in1.txt info="0x0000b573" data="0x7" id="sp. PCU(Si)" info="0x0000b573" data="0x00000007" id="HI all. SHa" info="0x00010AC3" data="0x00000003" id="abc_16. PS" info="0x00010ac3" data="0x00000045" id="hB2_RC/BS (Spr)" info="0x205" data="0x00000010" id="cgc_15. PK" info="0x205" data="0x10" id="cgsd_GH/BS (Scd)" Expected output: out.txt info="0x00010AC3" data="0x00000003" id="abc_16. PS" info="0x00010ac3" data="0x00000045" id="hB2_RC/BS (Spr)" I need only lines that have same info values and different data values to be written to out.txt. but the current code removes all the line that have string data in it. with open("in.txt", "r") as fin,open("out.txt", "w") as fout: for line in fin: if 'data' not in line: fout.write(line.strip()+'\n') what i need is for eg: line 1 and line 2 is having same info="0x0000b573" and data is "0x7" & "0x00000007" which is same then remove that line. A: You can use regex import re s = '''info="0x0000b573" data="0x7" id="sp. PCU(Si)" info="0x0000b573" data="0x00000007" id="HI all. SHa" info="0x00010AC3" data="0x00000003" id="abc_16. PS" info="0x00010ac3" data="0x00000045" id="hB2_RC/BS (Spr)" info="0x205" data="0x00000010" id="cgc_15. PK" info="0x205" data="0x10" id="cgsd_GH/BS (Scd)"''' parsed_data = re.findall(r'info="([^"]+)" data="([^"]+)" id="[^"]+"', s, re.MULTILINE) parsed_data = sorted([list(map(lambda x: int(x, 16), i)) + [index] for index,i in enumerate(parsed_data)]) row_numbers = [j for i in [[parsed_data[i][-1], parsed_data[i+1][-1]] for i in range(0,len(parsed_data),2) if parsed_data[i][1] != parsed_data[i+1][1]] for j in i] final_output = [] for index,line in enumerate(s.split('\n')): if index in row_numbers: final_output.append(line) final_out_text = '\n'.join(final_output) print(final_out_text) # info="0x00010AC3" data="0x00000003" id="abc_16. PS" # info="0x00010ac3" data="0x00000045" id="hB2_RC/BS (Spr)" A: You could try something like that too, I think #!/usr/bin/python3 records = {} items = [] info = [] data = [] with open("in.dat", "r") as fin: for line in fin: items=line.split(' ') info = items[0].split('=') data = items[1].split('=') try: key = info[1].strip('"').lower() value = str(int(data[1].strip('"'), 16)) records[key][value] += 1 except KeyError: try: records[key][value] = 1 except KeyError: records[key] = {value: 1} out = dict() for key in records: for value in records[key]: if records[key][value] == 1: try: out[key].append(value) except KeyError: out[key] = [value] with open("out.dat", "w") as fout: for key in out: for value in out[key]: fout.write(f"{key}={value}\n") A: Something like this could work: found_info_values = [] with open("in.txt", "r") as fin,open("out.txt", "w") as fout: for line in fin: info = line.split('"')[1] if info not in found_info_values: fout.write(line.strip()+'\n') found_info_values += info
Removing lines that have strings with same hexadecimal values from a text file
I have a file in1.txt info="0x0000b573" data="0x7" id="sp. PCU(Si)" info="0x0000b573" data="0x00000007" id="HI all. SHa" info="0x00010AC3" data="0x00000003" id="abc_16. PS" info="0x00010ac3" data="0x00000045" id="hB2_RC/BS (Spr)" info="0x205" data="0x00000010" id="cgc_15. PK" info="0x205" data="0x10" id="cgsd_GH/BS (Scd)" Expected output: out.txt info="0x00010AC3" data="0x00000003" id="abc_16. PS" info="0x00010ac3" data="0x00000045" id="hB2_RC/BS (Spr)" I need only lines that have same info values and different data values to be written to out.txt. but the current code removes all the line that have string data in it. with open("in.txt", "r") as fin,open("out.txt", "w") as fout: for line in fin: if 'data' not in line: fout.write(line.strip()+'\n') what i need is for eg: line 1 and line 2 is having same info="0x0000b573" and data is "0x7" & "0x00000007" which is same then remove that line.
[ "You can use regex\nimport re\n\ns = '''info=\"0x0000b573\" data=\"0x7\" id=\"sp. PCU(Si)\"\ninfo=\"0x0000b573\" data=\"0x00000007\" id=\"HI all. SHa\"\ninfo=\"0x00010AC3\" data=\"0x00000003\" id=\"abc_16. PS\"\ninfo=\"0x00010ac3\" data=\"0x00000045\" id=\"hB2_RC/BS (Spr)\"\ninfo=\"0x205\" data=\"0x00000010\" id=\"cgc_15. PK\"\ninfo=\"0x205\" data=\"0x10\" id=\"cgsd_GH/BS (Scd)\"'''\n\nparsed_data = re.findall(r'info=\"([^\"]+)\" data=\"([^\"]+)\" id=\"[^\"]+\"', s, re.MULTILINE)\nparsed_data = sorted([list(map(lambda x: int(x, 16), i)) + [index] for index,i in enumerate(parsed_data)])\n\nrow_numbers = [j for i in [[parsed_data[i][-1], parsed_data[i+1][-1]] for i in range(0,len(parsed_data),2) if parsed_data[i][1] != parsed_data[i+1][1]] for j in i]\n\n\nfinal_output = []\n\nfor index,line in enumerate(s.split('\\n')):\n if index in row_numbers:\n final_output.append(line)\n \n \nfinal_out_text = '\\n'.join(final_output)\nprint(final_out_text)\n\n# info=\"0x00010AC3\" data=\"0x00000003\" id=\"abc_16. PS\"\n# info=\"0x00010ac3\" data=\"0x00000045\" id=\"hB2_RC/BS (Spr)\"\n\n", "You could try something like that too, I think\n#!/usr/bin/python3\n\nrecords = {}\nitems = []\ninfo = []\ndata = []\n\nwith open(\"in.dat\", \"r\") as fin:\n for line in fin:\n items=line.split(' ')\n info = items[0].split('=')\n data = items[1].split('=')\n try:\n key = info[1].strip('\"').lower()\n value = str(int(data[1].strip('\"'), 16))\n records[key][value] += 1\n except KeyError:\n try:\n records[key][value] = 1\n except KeyError:\n records[key] = {value: 1}\n\n\nout = dict()\nfor key in records:\n for value in records[key]:\n if records[key][value] == 1:\n try:\n out[key].append(value)\n except KeyError:\n out[key] = [value]\n \n\nwith open(\"out.dat\", \"w\") as fout:\n for key in out:\n for value in out[key]:\n fout.write(f\"{key}={value}\\n\")\n\n", "Something like this could work:\nfound_info_values = []\n\nwith open(\"in.txt\", \"r\") as fin,open(\"out.txt\", \"w\") as fout:\n for line in fin:\n info = line.split('\"')[1]\n if info not in found_info_values:\n fout.write(line.strip()+'\\n')\n found_info_values += info\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074558591_python_python_3.x.txt
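One way to express the accepted idea more directly is to group lines by the numeric value of `info` (so `0x00010AC3` and `0x00010ac3` match, and `0x7` equals `0x00000007`) and keep a group only when its `data` values differ; this is a sketch against the sample lines from the question:

```python
import re
from collections import defaultdict

lines = [
    'info="0x0000b573" data="0x7" id="sp. PCU(Si)"',
    'info="0x0000b573" data="0x00000007" id="HI all. SHa"',
    'info="0x00010AC3" data="0x00000003" id="abc_16. PS"',
    'info="0x00010ac3" data="0x00000045" id="hB2_RC/BS (Spr)"',
    'info="0x205" data="0x00000010" id="cgc_15. PK"',
    'info="0x205" data="0x10" id="cgsd_GH/BS (Scd)"',
]

groups = defaultdict(list)
for line in lines:
    m = re.search(r'info="([^"]+)" data="([^"]+)"', line)
    info, data = m.groups()
    # Compare hex values numerically so case and leading zeros are ignored.
    groups[int(info, 16)].append((int(data, 16), line))

kept = []
for pairs in groups.values():
    if len({d for d, _ in pairs}) > 1:  # keep only groups whose data values differ
        kept.extend(line for _, line in pairs)

for line in kept:
    print(line)
```

Only the two `0x00010AC3`/`0x00010ac3` lines survive, matching the expected `out.txt`.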
Q: django restframework how to edit POST form im working on api that resizes images. I want to upload just one file save it and resize and keep it in another folder. models.py from django.db import models from django.conf import settings from django_resized import ResizedImageField from django.contrib.auth import get_user_model User = get_user_model() class Image(models.Model): file = models.ImageField(upload_to="files/") file1 = models.ImageField() author = models.ForeignKey(User, on_delete=models.CASCADE) def save(self, *args, **kwargs): if self.file: self.file1 = ResizedImageField(self.file, size=[200, 200]) super(Image, self).save(*args, **kwargs) views.py from rest_framework import generics from .models import Image from .serializers import ImageSerializer class ListCreateImage(generics.ListCreateAPIView): serializer_class = ImageSerializer def get_queryset(self): queryset = Image.objects.filter(author=self.request.user) return queryset def perform_create(self, serializer): kwargs = {"author": self.request.user} serializer.save(**kwargs) class DetailImage(generics.RetrieveAPIView): serializer_class = ImageSerializer def get_queryset(self): queryset = Image.objects.filter(author=self.request.user) return queryset serializers.py from rest_framework import serializers from .models import Image class ImageSerializer(serializers.ModelSerializer): class Meta: model = Image fields = ("file", "file1") read_only_fields = ( "id", "author", ) Problem is that my code works, but HTML form asks me for 2 files, I want to have the same result using just one file. So the output should look like that: { "file": "http://127.0.0.1:8000/files/register_og.png", "file1": "http://127.0.0.1:8000/files/thumb200px/register_og.jpg" }, A: In your Meta class of the ImageSerializer there is a fields attribute. It should only contain the files you want to upload. fields ("file") not fields ("file", "file1") Going of your intention in the comment I suggest you also add the "file1" to the read only fields read_only_fields = ( "id", "author", "file1", )
django restframework how to edit POST form
im working on api that resizes images. I want to upload just one file save it and resize and keep it in another folder. models.py from django.db import models from django.conf import settings from django_resized import ResizedImageField from django.contrib.auth import get_user_model User = get_user_model() class Image(models.Model): file = models.ImageField(upload_to="files/") file1 = models.ImageField() author = models.ForeignKey(User, on_delete=models.CASCADE) def save(self, *args, **kwargs): if self.file: self.file1 = ResizedImageField(self.file, size=[200, 200]) super(Image, self).save(*args, **kwargs) views.py from rest_framework import generics from .models import Image from .serializers import ImageSerializer class ListCreateImage(generics.ListCreateAPIView): serializer_class = ImageSerializer def get_queryset(self): queryset = Image.objects.filter(author=self.request.user) return queryset def perform_create(self, serializer): kwargs = {"author": self.request.user} serializer.save(**kwargs) class DetailImage(generics.RetrieveAPIView): serializer_class = ImageSerializer def get_queryset(self): queryset = Image.objects.filter(author=self.request.user) return queryset serializers.py from rest_framework import serializers from .models import Image class ImageSerializer(serializers.ModelSerializer): class Meta: model = Image fields = ("file", "file1") read_only_fields = ( "id", "author", ) Problem is that my code works, but HTML form asks me for 2 files, I want to have the same result using just one file. So the output should look like that: { "file": "http://127.0.0.1:8000/files/register_og.png", "file1": "http://127.0.0.1:8000/files/thumb200px/register_og.jpg" },
[ "In your Meta class of the ImageSerializer there is a fields attribute. It should only contain the files you want to upload.\nfields (\"file\")\n\nnot\nfields (\"file\", \"file1\")\n\nGoing of your intention in the comment I suggest you also add the \"file1\" to the read only fields\nread_only_fields = (\n \"id\",\n \"author\",\n \"file1\",\n )\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python" ]
stackoverflow_0074563888_django_django_rest_framework_python.txt
Q: Put a XML file inside a Python script? I'm trying to create a face-detection script using Python's OpenCV using the haar cascade XML file. My goal is to upload a python file to a website but due to some weird policies, I can only upload the Python file, without the XML... The question is, is it possible to somehow put the XML file inside the Python script, say, convert it to a String or something and then generate an XML from that String? A: xml = """<?xml version="1.0" encoding="UTF-8"?> <a> <b>Yes, you can embed XML in a string literal in Python.</b> </a>""" A: Not answer to title but answer of your description question. Haar cascade doesn't support non-file XML strings. Also, if you try to put an XML file to a website and give a link to an XML file with cv2.CascadeClassifier(), it will give an error. But you can use the request module on python to achieve what you want. It gets XML from the website, then puts it into a file def function(self, image): # download XML from server link = LINK_TO_XML r = requests.get(link, allow_redirects=True) open('haarcascade_frontalface_default.xml', 'wb').write(r.content) # end of download haar_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
Put a XML file inside a Python script?
I'm trying to create a face-detection script using Python's OpenCV using the haar cascade XML file. My goal is to upload a python file to a website but due to some weird policies, I can only upload the Python file, without the XML... The question is, is it possible to somehow put the XML file inside the Python script, say, convert it to a String or something and then generate an XML from that String?
[ "xml = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<a>\n <b>Yes, you can embed XML in a string literal in Python.</b>\n</a>\"\"\"\n\n", "Not answer to title but answer of your description question.\nHaar cascade doesn't support non-file XML strings. Also, if you try to put an XML file to a website and give a link to an XML file with cv2.CascadeClassifier(), it will give an error.\nBut you can use the request module on python to achieve what you want.\nIt gets XML from the website, then puts it into a file\ndef function(self, image):\n # download XML from server\n link = LINK_TO_XML\n r = requests.get(link, allow_redirects=True)\n open('haarcascade_frontalface_default.xml', 'wb').write(r.content)\n # end of download \n \n haar_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')\n\n" ]
[ 2, 0 ]
[ "First, copy the contents of the XML file into the python file and assign the whole thing to a string. Then use XML library to create a tree type data structure named root which contains the contents of the XML file. This tree is traversable and you can do what you like with it in your program:\n import xml.etree.ElementTree as ET \n root = ET.fromstring(XML_file_example_as_string). \n\nTo generate XML from the string you can use ElementTree.write() like this:\n tree = ET.ElementTree(root)\n tree.write('example.xml')\n\n" ]
[ -2 ]
[ "python", "string", "xml" ]
stackoverflow_0051431774_python_string_xml.txt
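To make the first answer concrete, here is a minimal sketch that embeds XML in a string literal, parses it with the standard library, and also writes it back to disk for APIs that only accept a file path; the cascade content is a made-up placeholder, and the OpenCV call is referenced only in a comment:

```python
import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0" encoding="UTF-8"?>
<opencv_storage><cascade>placeholder, not a real Haar cascade</cascade></opencv_storage>"""

root = ET.fromstring(xml_text)  # parse straight from the embedded string
print(root.tag)

# Some APIs (e.g. cv2.CascadeClassifier) insist on a path, so dump the string
# to a real file first and pass that path instead:
with open("embedded.xml", "w", encoding="utf-8") as f:
    f.write(xml_text)
```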
Q: Using python bytes with winrt I'm attempting to use BitmapDecoder from the winrt package with bytes I've read from a file with python. I can do it if I use winrt to read the bytes from the file: import os from winrt.windows.storage import StorageFile, FileAccessMode from winrt.windows.graphics.imaging import BitmapDecoder async def process_image(path): # get image from disk file = await StorageFile.get_file_from_path_async(os.fspath(path)) stream = await file.open_async(FileAccessMode.READ) decoder = await BitmapDecoder.create_async(stream) return await decoder.get_software_bitmap_async() The problem is that I'd really like to use the bytes in python-land before sending them to BitmapDecoder instead of getting them with StorageFile. Browsing the MS docs, I see there's an InMemoryRandomAccessStream, which sounds like what I want, but I can't seem to get that working. I tried this: from winrt.windows.storage.streams import InMemoryRandomAccessStream, DataWriter stream = InMemoryRandomAccessStream() writer = DataWriter(stream) await writer.write_bytes(bytes_) That gives me RuntimeError: The parameter is incorrect. for the await writer.write_bytes(bytes_) line. Not sure what to try next. A: To use python bytes you do the following. The key was writer.write_bytes is not async and calling writer.store_async(). async def process_image(bytes_): stream = InMemoryRandomAccessStream() writer = DataWriter(stream) writer.write_bytes(bytes_) writer.store_async() stream.seek(0) decoder = await BitmapDecoder.create_async(stream) return await decoder.get_software_bitmap_async()
Using python bytes with winrt
I'm attempting to use BitmapDecoder from the winrt package with bytes I've read from a file with python. I can do it if I use winrt to read the bytes from the file: import os from winrt.windows.storage import StorageFile, FileAccessMode from winrt.windows.graphics.imaging import BitmapDecoder async def process_image(path): # get image from disk file = await StorageFile.get_file_from_path_async(os.fspath(path)) stream = await file.open_async(FileAccessMode.READ) decoder = await BitmapDecoder.create_async(stream) return await decoder.get_software_bitmap_async() The problem is that I'd really like to use the bytes in python-land before sending them to BitmapDecoder instead of getting them with StorageFile. Browsing the MS docs, I see there's an InMemoryRandomAccessStream, which sounds like what I want, but I can't seem to get that working. I tried this: from winrt.windows.storage.streams import InMemoryRandomAccessStream, DataWriter stream = InMemoryRandomAccessStream() writer = DataWriter(stream) await writer.write_bytes(bytes_) That gives me RuntimeError: The parameter is incorrect. for the await writer.write_bytes(bytes_) line. Not sure what to try next.
[ "To use python bytes you do the following. The key was writer.write_bytes is not async and calling writer.store_async().\nasync def process_image(bytes_):\n stream = InMemoryRandomAccessStream()\n writer = DataWriter(stream)\n writer.write_bytes(bytes_)\n writer.store_async()\n stream.seek(0)\n\n decoder = await BitmapDecoder.create_async(stream)\n return await decoder.get_software_bitmap_async()\n\n" ]
[ 0 ]
[]
[]
[ "python", "windows", "windows_runtime" ]
stackoverflow_0074554823_python_windows_windows_runtime.txt
Q: Best practice to rename a method parameter in a deployed Python module Say I maintain a Python module with some method foo(): def foo(BarArg=None, AnotherArg=False): return True But now I'm not satisfied with the PascalCase of my argument names, and would like to rename them as such: def foo(bar_arg=None, another_arg=False): ... How can I introduce this change without breaking existing client code? I wouldn't really want deprecation warnings (but maybe that's the best practice), and also would very much would like to keep my function's name... For now, **kwargs plus some input validation logic is the only solution that comes to mind, but it seems like the wrong direction. A: You can use a decorator factory to intercept any uses of the incorrect args: def re_arg(kwarg_map): def decorator(func): def wrapped(*args, **kwargs): new_kwargs = {} for k, v in kwargs.items(): if k in kwarg_map: print(f"DEPRECATION WARNING: keyword argument '{k}' is no longer valid. Use '{kwarg_map[k]}' instead.") new_kwargs[kwarg_map.get(k, k)] = v return func(*args, **new_kwargs) return wrapped return decorator # change your kwarg names as desired, and pass the kwarg re-mapping to the decorator factory @re_arg({"BarArg": "bar_arg", "AnotherArg": "another_arg"}) def foo(bar_arg=None, another_arg=False): return True Demo: In [7]: foo(BarArg="hello") DEPRECATION WARNING: keyword argument 'BarArg' is no longer valid. Use 'bar_arg' instead. Out[7]: True In [8]: foo(AnotherArg="hello") DEPRECATION WARNING: keyword argument 'AnotherArg' is no longer valid. Use 'another_arg' instead. Out[8]: True In [9]: foo(x="hello") # still errors out on invalid kwargs --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [9], line 1 ----> 1 foo(x="hello") Cell In [4], line 9, in re_arg.<locals>.wrapped(**kwargs) 7 print(f"DEPRECATION WARNING: keyword argument '{k}' is no longer valid. Use '{kwarg_map[k]}' instead.") 8 new_kwargs[kwarg_map.get(k, k)] = v ----> 9 return func(**new_kwargs) TypeError: foo() got an unexpected keyword argument 'x' In [10]: foo(another_arg="hello") # no warning if you pass a correct arg (`bar_arg` has a default so it doesn't show up in `new_kwargs`. Out[10]: True In [11]: foo(BarArg="world", AnotherArg="hello") DEPRECATION WARNING: keyword argument 'BarArg' is no longer valid. Use 'bar_arg' instead. DEPRECATION WARNING: keyword argument 'AnotherArg' is no longer valid. Use 'another_arg' instead. Out[11]: True You could get super fancy and leave in the old kwargs alongside the new ones, inspect the signature, extract the old kwargs and build the kwarg_map dynamically, but that'd be quite a bit more work for probably not much gain in my opinion, so I'll "leave it as an exercise for the reader". Another solution would be to simply add a new_foo function, transfer the old foo implementation over, and simply call new_foo from foo with the kwarg re-mapping shown above, and with a deprecation warning, but I think this is cleaner than having to maintain a bunch of stubs. You may also want to check out the deprecation library: https://pypi.org/project/deprecation/
Best practice to rename a method parameter in a deployed Python module
Say I maintain a Python module with some method foo(): def foo(BarArg=None, AnotherArg=False): return True But now I'm not satisfied with the PascalCase of my argument names, and would like to rename them as such: def foo(bar_arg=None, another_arg=False): ... How can I introduce this change without breaking existing client code? I wouldn't really want deprecation warnings (but maybe that's the best practice), and also would very much would like to keep my function's name... For now, **kwargs plus some input validation logic is the only solution that comes to mind, but it seems like the wrong direction.
[ "You can use a decorator factory to intercept any uses of the incorrect args:\ndef re_arg(kwarg_map):\n def decorator(func):\n def wrapped(*args, **kwargs):\n new_kwargs = {}\n for k, v in kwargs.items():\n if k in kwarg_map:\n print(f\"DEPRECATION WARNING: keyword argument '{k}' is no longer valid. Use '{kwarg_map[k]}' instead.\")\n new_kwargs[kwarg_map.get(k, k)] = v\n return func(*args, **new_kwargs)\n return wrapped\n return decorator\n\n\n# change your kwarg names as desired, and pass the kwarg re-mapping to the decorator factory\n@re_arg({\"BarArg\": \"bar_arg\", \"AnotherArg\": \"another_arg\"})\ndef foo(bar_arg=None, another_arg=False):\n return True\n\nDemo:\nIn [7]: foo(BarArg=\"hello\")\nDEPRECATION WARNING: keyword argument 'BarArg' is no longer valid. Use 'bar_arg' instead.\nOut[7]: True\n\nIn [8]: foo(AnotherArg=\"hello\")\nDEPRECATION WARNING: keyword argument 'AnotherArg' is no longer valid. Use 'another_arg' instead.\nOut[8]: True\n\nIn [9]: foo(x=\"hello\") # still errors out on invalid kwargs\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\nCell In [9], line 1\n----> 1 foo(x=\"hello\")\n\nCell In [4], line 9, in re_arg.<locals>.wrapped(**kwargs)\n 7 print(f\"DEPRECATION WARNING: keyword argument '{k}' is no longer valid. Use '{kwarg_map[k]}' instead.\")\n 8 new_kwargs[kwarg_map.get(k, k)] = v\n----> 9 return func(**new_kwargs)\n\nTypeError: foo() got an unexpected keyword argument 'x'\n\nIn [10]: foo(another_arg=\"hello\") # no warning if you pass a correct arg (`bar_arg` has a default so it doesn't show up in `new_kwargs`.\nOut[10]: True\n\nIn [11]: foo(BarArg=\"world\", AnotherArg=\"hello\")\nDEPRECATION WARNING: keyword argument 'BarArg' is no longer valid. Use 'bar_arg' instead.\nDEPRECATION WARNING: keyword argument 'AnotherArg' is no longer valid. Use 'another_arg' instead.\nOut[11]: True\n\nYou could get super fancy and leave in the old kwargs alongside the new ones, inspect the signature, extract the old kwargs and build the kwarg_map dynamically, but that'd be quite a bit more work for probably not much gain in my opinion, so I'll \"leave it as an exercise for the reader\".\nAnother solution would be to simply add a new_foo function, transfer the old foo implementation over, and simply call new_foo from foo with the kwarg re-mapping shown above, and with a deprecation warning, but I think this is cleaner than having to maintain a bunch of stubs.\nYou may also want to check out the deprecation library: https://pypi.org/project/deprecation/\n" ]
[ 3 ]
[]
[]
[ "python" ]
stackoverflow_0074564140_python.txt
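A common refinement of the accepted decorator (not something the original answer mandates) is to emit a real `DeprecationWarning` via the `warnings` module instead of `print`, so callers can filter or escalate it; a minimal sketch:

```python
import warnings

def re_arg(kwarg_map):
    # Same remapping idea as above, but with warnings.warn instead of print.
    def decorator(func):
        def wrapped(*args, **kwargs):
            new_kwargs = {}
            for k, v in kwargs.items():
                if k in kwarg_map:
                    warnings.warn(
                        f"'{k}' is deprecated; use '{kwarg_map[k]}' instead",
                        DeprecationWarning,
                        stacklevel=2,
                    )
                new_kwargs[kwarg_map.get(k, k)] = v
            return func(*args, **new_kwargs)
        return wrapped
    return decorator

@re_arg({"BarArg": "bar_arg"})
def foo(bar_arg=None):
    return bar_arg

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = foo(BarArg=7)  # old name still works, but now warns

print(result, len(caught))  # 7 1
```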
Q: When multiple symbols and days occur, how to only keep the first occurrence of the day and symbol? If I have a dataframe of daily data which contain symbols and different dates: level_0 index date symbol open ... volume_10_day is_downtrending is_downtrending_lookback consolidating_10 consolidating_10_lookback 0 3608 3608 2022-10-26 CIFR 0.8600 ... 3883.2 0 0 0 1 1 11367 11367 2022-09-12 CLVS 1.2800 ... 24749.8 0 0 0 1 2 13031 13031 2022-10-06 CGC 3.0700 ... 3807474.9 0 0 0 1 3 13044 13044 2022-10-25 CGC 2.4000 ... 4213340.1 0 0 0 1 4 13864 13864 2022-09-02 CMCM 4.9100 ... 3560.0 0 0 0 1 .. ... ... ... ... ... ... ... ... ... ... ... 353 684622 684622 2022-10-24 SOBR 3.2500 ... 65830.2 0 0 0 1 354 685045 685045 2022-08-29 SNTG 2.6500 ... 12765.3 0 1 0 1 355 685093 685093 2022-11-04 SNTG 4.6889 ... 17969582.7 0 0 0 0 356 686851 686851 2022-10-11 WNW 0.8700 ... 5172.1 0 0 0 1 357 688103 688103 2022-10-11 BHG 0.8750 ... 1489.5 0 1 0 1 [358 rows x 18 columns] Sometimes, there are multiplies of the same days but with different symbols. For example, on 2022-10-11 there are two symbols which occur: WNW, BHG. 356 686851 686851 2022-10-11 WNW 0.8700 ... 5172.1 0 0 0 1 357 688103 688103 2022-10-11 BHG 0.8750 ... 1489.5 0 1 0 1 When this happens, I only want the first instance to be returned (all other symbols occurring on the same day should be removed), something like: level_0 index date symbol open ... volume_10_day is_downtrending is_downtrending_lookback consolidating_10 consolidating_10_lookback 0 3608 3608 2022-10-26 CIFR 0.8600 ... 3883.2 0 0 0 1 1 11367 11367 2022-09-12 CLVS 1.2800 ... 24749.8 0 0 0 1 2 13031 13031 2022-10-06 CGC 3.0700 ... 3807474.9 0 0 0 1 3 13044 13044 2022-10-25 CGC 2.4000 ... 4213340.1 0 0 0 1 4 13864 13864 2022-09-02 CMCM 4.9100 ... 3560.0 0 0 0 1 .. ... ... ... ... ... ... ... ... ... ... ... 353 684622 684622 2022-10-24 SOBR 3.2500 ... 65830.2 0 0 0 1 354 685045 685045 2022-08-29 SNTG 2.6500 ... 12765.3 0 1 0 1 355 685093 685093 2022-11-04 SNTG 4.6889 ... 17969582.7 0 0 0 0 356 686851 686851 2022-10-11 WNW 0.8700 ... 5172.1 0 0 0 1 [357 rows x 18 columns] Where in the duplicate of WNW, BHG, only the first one (WNW) is returned. How can I do this? Something like: df_filtered.drop_duplicates(subset=['date', 'symbol'], inplace=True) Any help is much appreciated A: Per the discussion in the comments, this solution works: df_filtered.drop_duplicates(subset=['date'], keep='first', inplace=True)
When multiple symbols and days occur, how to only keep the first occurrence of the day and symbol?
If I have a dataframe of daily data which contain symbols and different dates: level_0 index date symbol open ... volume_10_day is_downtrending is_downtrending_lookback consolidating_10 consolidating_10_lookback 0 3608 3608 2022-10-26 CIFR 0.8600 ... 3883.2 0 0 0 1 1 11367 11367 2022-09-12 CLVS 1.2800 ... 24749.8 0 0 0 1 2 13031 13031 2022-10-06 CGC 3.0700 ... 3807474.9 0 0 0 1 3 13044 13044 2022-10-25 CGC 2.4000 ... 4213340.1 0 0 0 1 4 13864 13864 2022-09-02 CMCM 4.9100 ... 3560.0 0 0 0 1 .. ... ... ... ... ... ... ... ... ... ... ... 353 684622 684622 2022-10-24 SOBR 3.2500 ... 65830.2 0 0 0 1 354 685045 685045 2022-08-29 SNTG 2.6500 ... 12765.3 0 1 0 1 355 685093 685093 2022-11-04 SNTG 4.6889 ... 17969582.7 0 0 0 0 356 686851 686851 2022-10-11 WNW 0.8700 ... 5172.1 0 0 0 1 357 688103 688103 2022-10-11 BHG 0.8750 ... 1489.5 0 1 0 1 [358 rows x 18 columns] Sometimes, there are multiplies of the same days but with different symbols. For example, on 2022-10-11 there are two symbols which occur: WNW, BHG. 356 686851 686851 2022-10-11 WNW 0.8700 ... 5172.1 0 0 0 1 357 688103 688103 2022-10-11 BHG 0.8750 ... 1489.5 0 1 0 1 When this happens, I only want the first instance to be returned (all other symbols occurring on the same day should be removed), something like: level_0 index date symbol open ... volume_10_day is_downtrending is_downtrending_lookback consolidating_10 consolidating_10_lookback 0 3608 3608 2022-10-26 CIFR 0.8600 ... 3883.2 0 0 0 1 1 11367 11367 2022-09-12 CLVS 1.2800 ... 24749.8 0 0 0 1 2 13031 13031 2022-10-06 CGC 3.0700 ... 3807474.9 0 0 0 1 3 13044 13044 2022-10-25 CGC 2.4000 ... 4213340.1 0 0 0 1 4 13864 13864 2022-09-02 CMCM 4.9100 ... 3560.0 0 0 0 1 .. ... ... ... ... ... ... ... ... ... ... ... 353 684622 684622 2022-10-24 SOBR 3.2500 ... 65830.2 0 0 0 1 354 685045 685045 2022-08-29 SNTG 2.6500 ... 12765.3 0 1 0 1 355 685093 685093 2022-11-04 SNTG 4.6889 ... 17969582.7 0 0 0 0 356 686851 686851 2022-10-11 WNW 0.8700 ... 5172.1 0 0 0 1 [357 rows x 18 columns] Where in the duplicate of WNW, BHG, only the first one (WNW) is returned. How can I do this? Something like: df_filtered.drop_duplicates(subset=['date', 'symbol'], inplace=True) Any help is much appreciated
[ "Per the discussion in the comments, this solution works:\ndf_filtered.drop_duplicates(subset=['date'], keep='first', inplace=True)\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074554462_numpy_pandas_python.txt
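The `keep='first'` behaviour can also be checked without pandas; this dict-based sketch mirrors what `drop_duplicates(subset=['date'], keep='first')` does on the WNW/BHG example rows:

```python
rows = [
    {"date": "2022-10-11", "symbol": "WNW"},
    {"date": "2022-10-11", "symbol": "BHG"},
    {"date": "2022-10-24", "symbol": "SOBR"},
]

seen = set()
first_per_date = []
for row in rows:  # iterate in file order, so the first symbol on each date wins
    if row["date"] not in seen:
        seen.add(row["date"])
        first_per_date.append(row)

print([r["symbol"] for r in first_per_date])  # ['WNW', 'SOBR']
```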
Q: How to split a column into 4 columns based on spaces in python? I have seen similar questions, but it seems all would not work in Python 3.8. I have a dataframe like index id 0 0001 01 12537.30 0 1 0001 01 1278.50 1 2 0001 03 53.10 0 where id column should be split into 4 columns, but they are in one column now. I need to split it based on space. I have tried df2 = df1['id'].apply(lambda x: pd.Series(x.split(' '))) or df1[["City", "State", "Country",'indicator']] = df1["id"].str.split(pat=" ", expand=True) All told me that TypeError: 'function' object is not subscriptable. Does anyone have any idea? Thanks A: Instead of going through the hassle of splitting the column, the easy path for you would be to create a new Dataframe with read_csv method where the sep argument as spaces or tab (whatever you have there) and skiprows argument as 1 and names=[co1, col2, col3,...]. Here is an example to further clarify the above point: new_df = pd.read_csv(path_to_your_file, sep='\\t', skiprows=1, names=['id', 'col1', 'col2', 'col3'] A: Pretty late, but this is what you can do: df['id'].str.extractall('(\w+)')[0].unstack() Here is the link to the original answer (Link
How to split a column into 4 columns based on spaces in python?
I have seen similar questions, but it seems all would not work in Python 3.8. I have a dataframe like index id 0 0001 01 12537.30 0 1 0001 01 1278.50 1 2 0001 03 53.10 0 where id column should be split into 4 columns, but they are in one column now. I need to split it based on space. I have tried df2 = df1['id'].apply(lambda x: pd.Series(x.split(' '))) or df1[["City", "State", "Country",'indicator']] = df1["id"].str.split(pat=" ", expand=True) All told me that TypeError: 'function' object is not subscriptable. Does anyone have any idea? Thanks
[ "Instead of going through the hassle of splitting the column, the easy path for you would be to create a new Dataframe with read_csv method where the sep argument as spaces or tab (whatever you have there) and skiprows argument as 1 and names=[co1, col2, col3,...].\nHere is an example to further clarify the above point:\nnew_df = pd.read_csv(path_to_your_file, sep='\\\\t', skiprows=1, names=['id', 'col1', 'col2', 'col3']\n\n", "Pretty late, but this is what you can do:\ndf['id'].str.extractall('(\\w+)')[0].unstack()\n\nHere is the link to the original answer (Link\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0069453110_pandas_python.txt
Q: Where can I find a catalogue of error messages for the standard packages in Python? I am referring to Errors like ValueError: dictionary update sequence element #0 has length 1; 2 is required which happen when you >>> a_dictionary = {} >>> a_dictionary.update([[1]]) Is there a place where these Errors for standard packages like dictionaries are documented? Online research didn't yield any results. *Edit: I am not asking for Exception types, but rather the specific error messages. There can be for example different messages for ValueErrors, I am looking for a catalog of those error messages. A: I think all errors are subclasses of the Exception class. Thus, the following code will list them for you: def get_all_subclasses(cls): all_subclasses = [] for subclass in cls.__subclasses__(): all_subclasses.append(subclass) all_subclasses.extend(get_all_subclasses(subclass)) return all_subclasses errors = get_all_subclasses(Exception) print(errors)
Where can I find a catalogue of error messages for the standard packages in Python?
I am referring to Errors like ValueError: dictionary update sequence element #0 has length 1; 2 is required which happen when you >>> a_dictionary = {} >>> a_dictionary.update([[1]]) Is there a place where these Errors for standard packages like dictionaries are documented? Online research didn't yield any results. *Edit: I am not asking for Exception types, but rather the specific error messages. There can be for example different messages for ValueErrors, I am looking for a catalog of those error messages.
[ "I think all errors are subclasses of the Exception class. Thus, the following code will list them for you:\ndef get_all_subclasses(cls):\n all_subclasses = []\n\n for subclass in cls.__subclasses__():\n all_subclasses.append(subclass)\n all_subclasses.extend(get_all_subclasses(subclass))\n\n return all_subclasses\n\nerrors = get_all_subclasses(Exception)\nprint(errors)\n\n" ]
[ 0 ]
[]
[]
[ "documentation", "python" ]
stackoverflow_0074564340_documentation_python.txt
Q: Losing information when saving an image as uint8 So I have an image, I'm just testing it with any random Google image, that I saved as "Picture.png". Now I want to normalize that image and save it as an .npy file, so I use the code: from PIL import Image import numpy as np temp = Image.open("Picture.png") image = np.asarray(temp) def NormalizeData(data): return ((data - np.min(data)) / (np.max(data) - np.min(data))) image = NormalizeData(image) np.save("Picture.npy", image) Then, I can retrieve the image with the code: import matplotlib.pyplot as plt image = np.load("Picture.npy") plt.imshow(image) plt.show() The problem is that the .npy file is too big, so I added .astype('uint8') to the NormalizeData function, which saves tons of space. But now, when I try to plt.show() on the new uint8 .npy file, I get a white canvas. What am I doing wrong? A: you are normalizing the data to be between 0 and 1, then you are converting it to an integer. which will round all numbers to 0. you should just multiply the numbers by 255 before using the astype(np.uint8) so numbers will be between 0 and 255 which is the correct range for unsigned 8 bit integers.
Losing information when saving an image as uint8
So I have an image, I'm just testing it with any random Google image, that I saved as "Picture.png". Now I want to normalize that image and save it as an .npy file, so I use the code: from PIL import Image import numpy as np temp = Image.open("Picture.png") image = np.asarray(temp) def NormalizeData(data): return ((data - np.min(data)) / (np.max(data) - np.min(data))) image = NormalizeData(image) np.save("Picture.npy", image) Then, I can retrieve the image with the code: import matplotlib.pyplot as plt image = np.load("Picture.npy") plt.imshow(image) plt.show() The problem is that the .npy file is too big, so I added .astype('uint8') to the NormalizeData function, which saves tons of space. But now, when I try to plt.show() on the new uint8 .npy file, I get a white canvas. What am I doing wrong?
[ "you are normalizing the data to be between 0 and 1, then you are converting it to an integer. which will round all numbers to 0.\nyou should just multiply the numbers by 255 before using the astype(np.uint8) so numbers will be between 0 and 255 which is the correct range for unsigned 8 bit integers.\n" ]
[ 2 ]
[]
[]
[ "numpy", "python", "python_imaging_library" ]
stackoverflow_0074564391_numpy_python_python_imaging_library.txt
Q: How to create an object with built-in "object()" in Python? I found there is object() which is a built-in function in Python. *You can find object() in Built-in Functions And, the documentation says below: Return a new featureless object. object is a base for all classes. It has methods that are common to all instances of Python classes. This function does not accept any arguments. As the documentation says, object() can create an object but I don't know how to do it. My questions: How to create an object with object()? When to use object()? or What are the use cases of object()? A: To create an object with object, just call it: object(). However, it is never (as noted in the comments, it may be sometimes useful, when you need to have a something but you don't care what it is) used as is. object is just the (implicit in Python 3) base class of all classes. It provides basic features, such as allocation and magic methods, that you never directly manipulate in Python. The naming follows one of Python's catchphrases "everything is an object". A: You can create an object with only object() as shown below: print(type(object())) Then,the object of object class is created: <class 'object'> In addition, you can create and initialize the object of Person class with object() as shown below: class Person: def __init__(self, name): self.name = name obj = object().__new__(Person) # Creates the object of "Person" class print(type(obj)) obj.__init__("John") # Initializes the object of "Person" class print(obj.name) Then, the object of Person class is created and initialized: <class '__main__.Person'> John
How to create an object with built-in "object()" in Python?
I found there is object() which is a built-in function in Python. *You can find object() in Built-in Functions And, the documentation says below: Return a new featureless object. object is a base for all classes. It has methods that are common to all instances of Python classes. This function does not accept any arguments. As the documentation says, object() can create an object but I don't know how to do it. My questions: How to create an object with object()? When to use object()? or What are the use cases of object()?
[ "To create an object with object, just call it: object(). However, it is never (as noted in the comments, it may be sometimes useful, when you need to have a something but you don't care what it is) used as is. object is just the (implicit in Python 3) base class of all classes. It provides basic features, such as allocation and magic methods, that you never directly manipulate in Python.\nThe naming follows one of Python's catchphrases \"everything is an object\".\n", "You can create an object with only object() as shown below:\nprint(type(object()))\n\nThen,the object of object class is created:\n<class 'object'>\n\nIn addition, you can create and initialize the object of Person class with object() as shown below:\nclass Person:\n def __init__(self, name):\n self.name = name\n\nobj = object().__new__(Person) # Creates the object of \"Person\" class\nprint(type(obj))\n\nobj.__init__(\"John\") # Initializes the object of \"Person\" class\nprint(obj.name)\n\nThen, the object of Person class is created and initialized:\n<class '__main__.Person'>\nJohn\n\n" ]
[ 2, 0 ]
[]
[]
[ "built_in", "class", "object", "python", "python_3.x" ]
stackoverflow_0074434245_built_in_class_object_python_python_3.x.txt
Q: Flask extract value from drop down menu if value is url_for() I want to be able to serve a static html file (located in static/{platform}/graph.html) within an iframe based on the value ({platform}) selected from a drop down menu. I also want to use the value selected in drop down menu in other places as well. Right now I have something that works for serving static html file in the iframe and updating it based on drop down menu value, but the problem is that I'm unable to use the value from the drop down menu elsewhere. I think it's because the value is a url_for(), but I'm not sure. views.py @app.route('/', methods=['GET']) def index(): return render_template('index.html') @app.route('/maps', methods=['GET','POST']) def maps(): print(request.form.get('location')) ## This prints None return render_template('maps.html') @app.route('/static/<string:platform>/graph.html', methods=['GET', 'POST']) def show_plot(platform): print(request.form.get('location')) ## This prints None try: return send_file('static/{platform}/graph.html'.format(platform=platform)) except FileNotFoundError: print("Error") maps.html <head> <!-- define iframe src based on drop down menu value --> <script type="text/javascript"> function setIframeSource() { var theSelect = document.getElementById('location'); var theIframe = document.getElementById('plot'); var theUrl; theUrl = theSelect.options[theSelect.selectedIndex].value; theIframe.src = theUrl; } </script> </head> <body> <!-- drop down menu --> <form id="select-platform" method="post"> <label> Select Platform</label> <select name="location" id="location" onchange="setIframeSource()"> <option value="{{ url_for('show_plot', platform='web') }}">Web</option> <option value="{{ url_for('show_plot', platform='app') }}">App</option> </select> </form> <!-- embed static html file based on drop down menu value --> <div class="iframe-container"> <iframe id="plot" onload='javascript:resizeIframe(this);' style="border:0" src="{{ url_for('show_plot', platform='web') }}" > </iframe> </div> </body>
Flask extract value from drop down menu if value is url_for()
I want to be able to serve a static html file (located in static/{platform}/graph.html) within an iframe based on the value ({platform}) selected from a drop down menu. I also want to use the value selected in drop down menu in other places as well. Right now I have something that works for serving static html file in the iframe and updating it based on drop down menu value, but the problem is that I'm unable to use the value from the drop down menu elsewhere. I think it's because the value is a url_for(), but I'm not sure. views.py @app.route('/', methods=['GET']) def index(): return render_template('index.html') @app.route('/maps', methods=['GET','POST']) def maps(): print(request.form.get('location')) ## This prints None return render_template('maps.html') @app.route('/static/<string:platform>/graph.html', methods=['GET', 'POST']) def show_plot(platform): print(request.form.get('location')) ## This prints None try: return send_file('static/{platform}/graph.html'.format(platform=platform)) except FileNotFoundError: print("Error") maps.html <head> <!-- define iframe src based on drop down menu value --> <script type="text/javascript"> function setIframeSource() { var theSelect = document.getElementById('location'); var theIframe = document.getElementById('plot'); var theUrl; theUrl = theSelect.options[theSelect.selectedIndex].value; theIframe.src = theUrl; } </script> </head> <body> <!-- drop down menu --> <form id="select-platform" method="post"> <label> Select Platform</label> <select name="location" id="location" onchange="setIframeSource()"> <option value="{{ url_for('show_plot', platform='web') }}">Web</option> <option value="{{ url_for('show_plot', platform='app') }}">App</option> </select> </form> <!-- embed static html file based on drop down menu value --> <div class="iframe-container"> <iframe id="plot" onload='javascript:resizeIframe(this);' style="border:0" src="{{ url_for('show_plot', platform='web') }}" > </iframe> </div> </body>
[ "Adding a submit button that made a POST request to a new _refresh_plot endpoint allowed me to see the URL from the url_for() value from drop down menu value\nviews.py\n@app.route('/_refresh_plot', methods=['POST'])\ndef _refresh_plot():\n pattern = '\\/static\\/([\\w]+)\\/graph\\.html'\n url = request.form.get('location')\n platform = re.search(pattern, url)[1]\n print(platform)\n\nmaps.html\n<form id=\"select-platform\" method=\"post\" action=\"{{ url_for('_refresh_plot') }}>\n <label> Select Platform</label>\n <select name=\"location\" id=\"location\" onchange=\"setIframeSource()\">\n <option value=\"{{ url_for('show_plot', platform='web') }}\">Web</option>\n <option value=\"{{ url_for('show_plot', platform='app') }}\">App</option>\n </select>\n <button type=\"submit\">Refresh</button> \n</form>\n\n" ]
[ 0 ]
[]
[]
[ "flask", "html", "iframe", "javascript", "python" ]
stackoverflow_0074452268_flask_html_iframe_javascript_python.txt
Q: Pyomo with glpk solver doesn't solve anything Shouldn't the following result in a number different than zero? import pyomo.environ as pyo from pyomo.opt import SolverFactory m = pyo.ConcreteModel() m.x = pyo.Var([1,2], domain=pyo.Reals,initialize=0) m.obj = pyo.Objective(expr = 2*m.x[1] + 3*m.x[2],sense=pyo.minimize) m.c1 = pyo.Constraint(expr = 3*m.x[1] + 4*m.x[2] >= 3) SolverFactory('glpk', executable='/usr/bin/glpsol').solve(m) pyo.value(m.x[1]) I have tried following the documentation but its quite limited for simple examples. When I execute this code it just prints zero... A: The problem you have written is unbounded. Try changing the domain of x to NonNegativeReals or put in constraints to do same. You should always check the solver status, which you seem to have skipped over and will state β€œunbounded” for this model.
Pyomo with glpk solver doesn't solve anything
Shouldn't the following result in a number different than zero? import pyomo.environ as pyo from pyomo.opt import SolverFactory m = pyo.ConcreteModel() m.x = pyo.Var([1,2], domain=pyo.Reals,initialize=0) m.obj = pyo.Objective(expr = 2*m.x[1] + 3*m.x[2],sense=pyo.minimize) m.c1 = pyo.Constraint(expr = 3*m.x[1] + 4*m.x[2] >= 3) SolverFactory('glpk', executable='/usr/bin/glpsol').solve(m) pyo.value(m.x[1]) I have tried following the documentation but its quite limited for simple examples. When I execute this code it just prints zero...
[ "The problem you have written is unbounded. Try changing the domain of x to NonNegativeReals or put in constraints to do same.\nYou should always check the solver status, which you seem to have skipped over and will state β€œunbounded” for this model.\n" ]
[ 1 ]
[]
[]
[ "pyomo", "python", "solver" ]
stackoverflow_0074563966_pyomo_python_solver.txt
Q: Capturing output from bash script run using os.system() python I'm using Python to run a bash script using os.system. The problem is that the bash executable prints so many outputs to the console which is spamming my screen. Is there any way to block all the print calls from such external routines/modules in python? Here is a small toy example showing the problem, I have a small bash script which makes a file and prints this text #!/bin/bash touch "SomeFile.dat" echo "Spam Spam Spam Spam" echo "Spam Spam Spam Spam" echo "Spam Spam Spam Spam" echo "Spam Spam Spam Spam" and I have this python file which calls this bash file import os print ("Job starting") #text1 os.system("./blue.sh") print ("Job finished") #text2 So when I run this, I want text1 and text2 to be printed and to block all outputs from the bash script. How can we do this in Python? P.S: I can not edit the bash-script, I want to achieve this through Python. A: The os.system() does not provide a way to capture the stdout of the process which is run. os.system(command) Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command. If command generates any output, it will be sent to the interpreter standard output stream. The C standard does not specify the meaning of the return value of the C function, so the return value of the Python function is system-dependent. The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. Using subprocess.run() you can define stdin and stdout of the spawned process. If you want to ignore the stdout and stderr of the process, you can redirect it to subprocess.DEVNULL import subprocess print("Job starting") #text1 subprocess.run(["./blue.sh"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) print("Job finished") #text2 should produce the desired result of only logging the print statements but not the output of the blue.sh If you want to suppress only the stdout and not the stderr then you can remove the stderr=subprocess.DEVNULL from the subprocess.run() arguments.
Capturing output from bash script run using os.system() python
I'm using Python to run a bash script using os.system. The problem is that the bash executable prints so many outputs to the console which is spamming my screen. Is there any way to block all the print calls from such external routines/modules in python? Here is a small toy example showing the problem, I have a small bash script which makes a file and prints this text #!/bin/bash touch "SomeFile.dat" echo "Spam Spam Spam Spam" echo "Spam Spam Spam Spam" echo "Spam Spam Spam Spam" echo "Spam Spam Spam Spam" and I have this python file which calls this bash file import os print ("Job starting") #text1 os.system("./blue.sh") print ("Job finished") #text2 So when I run this, I want text1 and text2 to be printed and to block all outputs from the bash script. How can we do this in Python? P.S: I can not edit the bash-script, I want to achieve this through Python.
[ "The os.system() does not provide a way to capture the stdout of the process which is run.\n\nos.system(command)\nExecute the command (a string) in a subshell. This is implemented by\ncalling the Standard C function system(), and has the same\nlimitations. Changes to sys.stdin, etc. are not reflected in the\nenvironment of the executed command. If command generates any output,\nit will be sent to the interpreter standard output stream. The C\nstandard does not specify the meaning of the return value of the C\nfunction, so the return value of the Python function is\nsystem-dependent.\n\nThe subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.\nUsing subprocess.run() you can define stdin and stdout of the spawned process. If you want to ignore the stdout and stderr of the process, you can redirect it to subprocess.DEVNULL\nimport subprocess\nprint(\"Job starting\") #text1\nsubprocess.run([\"./blue.sh\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)\nprint(\"Job finished\") #text2\n\nshould produce the desired result of only logging the print statements but not the output of the blue.sh\nIf you want to suppress only the stdout and not the stderr then you can remove the stderr=subprocess.DEVNULL from the subprocess.run() arguments.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074564067_python.txt
Q: Can we create a history track on Fusion spreadsheet? I have a Fusion spreadsheet on Foundry whose history I want to track whenever a user types something new on it or modifies its content. Can we do something similar? A: Assuming your fusion sheet is synced to a dataset, you might be able to achieve this through an upstream incremental build as described here. Something like: from pyspark.sql import functions as F @incremental(snapshot_inputs=['input_data']) @transform( input_data=Input("/path/to/snapshot/input"), history=Output("/path/to/historical/dataset"), ) def my_compute_function(input_data, history): input_df = input_data.dataframe() # note that you can also use current_timestamp() below # if the input will change > 1x/day input_df = input_df.withColumn('date', F.current_date()) history.write_dataframe(input_df) You would also need to set up a schedule such that it would build every time the upstream fusion back dataset is updated. This schedule would be set up as shown in the screenshot below. You can get more information on setting up schedules here. It wouldn't capture whenever the user types but it would capture the changes made to the dataset which would be relatively close to what you want.
Can we create a history track on Fusion spreadsheet?
I have a Fusion spreadsheet on Foundry whose history I want to track whenever a user types something new on it or modifies its content. Can we do something similar?
[ "Assuming your fusion sheet is synced to a dataset, you might be able to achieve this through an upstream incremental build as described here.\nSomething like:\nfrom pyspark.sql import functions as F\n\n@incremental(snapshot_inputs=['input_data'])\n@transform(\n input_data=Input(\"/path/to/snapshot/input\"),\n history=Output(\"/path/to/historical/dataset\"),\n)\ndef my_compute_function(input_data, history):\n input_df = input_data.dataframe()\n\n # note that you can also use current_timestamp() below\n # if the input will change > 1x/day\n input_df = input_df.withColumn('date', F.current_date())\n\n history.write_dataframe(input_df)\n\nYou would also need to set up a schedule such that it would build every time the upstream fusion back dataset is updated. This schedule would be set up as shown in the screenshot below. You can get more information on setting up schedules here.\n\nIt wouldn't capture whenever the user types but it would capture the changes made to the dataset which would be relatively close to what you want.\n" ]
[ 0 ]
[]
[]
[ "palantir_foundry", "palantir_foundry_api", "pyspark", "python" ]
stackoverflow_0074564224_palantir_foundry_palantir_foundry_api_pyspark_python.txt
Q: Why does this for loop loop twice? This is what I'm trying to accomplish: list_dic_gen(['One','Two'], [['First','Second']]) is [{'One': 'First', 'Two': 'Second'}] As a second example of this, the function call: list_dic_gen(['Second'], [['One'],['Third Fourth']]) would be expected to return: [{'Second': 'One'}, {'Second': 'Third Fourth'}] But my code def list_dic_gen(lst,lol): acc=[] a=0 for x in lol: accd={} for b in x: accd[lst[a]]=b acc.append(accd) if len(lst)>1: a+=1 return acc This code works for the second example, but for the first example, the for loop makes 2 dictionaries even though there is 1 list in the list [{'One': 'First', 'Two': 'Second'}, {'One': 'First', 'Two': 'Second'}] A: Because there are 2 b's in x and you are appending for both. Append acc after the inner loop. def list_dic_gen(lst,lol): acc=[] a=0 for x in lol: accd={} for b in x: accd[lst[a]]=b a += len(lst)>1 #you don't need a condition acc.append(accd) #append the list after the loop return acc print(list_dic_gen(['One','Two'], [['First','Second']])) I believe my complete refactor of your function does the same thing that yours does. I don't have enough test cases to be positive. def list_dic_gen(keys:list, data:list) -> list[dict]: return [{keys.pop(0):value for value in values} for values in data]
Why does this for loop loop twice?
This is what I'm trying to accomplish: list_dic_gen(['One','Two'], [['First','Second']]) is [{'One': 'First', 'Two': 'Second'}] As a second example of this, the function call: list_dic_gen(['Second'], [['One'],['Third Fourth']]) would be expected to return: [{'Second': 'One'}, {'Second': 'Third Fourth'}] But my code def list_dic_gen(lst,lol): acc=[] a=0 for x in lol: accd={} for b in x: accd[lst[a]]=b acc.append(accd) if len(lst)>1: a+=1 return acc This code works for the second example, but for the first example, the for loop makes 2 dictionaries even though there is 1 list in the list [{'One': 'First', 'Two': 'Second'}, {'One': 'First', 'Two': 'Second'}]
[ "Because there are 2 b's in x and you are appending for both. Append acc after the inner loop.\ndef list_dic_gen(lst,lol):\n acc=[]\n a=0\n for x in lol:\n accd={}\n for b in x:\n accd[lst[a]]=b\n a += len(lst)>1 #you don't need a condition\n acc.append(accd) #append the list after the loop\n return acc\n\nprint(list_dic_gen(['One','Two'], [['First','Second']]))\n\nI believe my complete refactor of your function does the same thing that yours does. I don't have enough test cases to be positive.\ndef list_dic_gen(keys:list, data:list) -> list[dict]:\n return [{keys.pop(0):value for value in values} for values in data]\n\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074564380_python.txt
Q: I'm trying to properly visualise this lambda problem but I'm having a lot of trouble These 2 lines of code are from an exam paper and I'm trying to figure out how to properly visualise how the variables move about. The output is 8. f = lambda x, y: lambda z: (x)(y)(z) print((f)(lambda x: lambda y: x, lambda z: z * 2)(3)(4)) I've tried using online python visualiser websites but I still can't understand how the process works. Is it possible if someone is kind enough to rewrite this as a 'def function()'? A: f can be written as a function as follows: def f(x, y): def inner(z): return x(y)(z) return inner f takes two functions, x and y. x is a function that accepts another function (y), and returns a third function that accepts an argument z. The print statement calls f with a couple of anonymous functions called in-line: print((f)(lambda x: lambda y: x, lambda z: z * 2)(3)(4)) We can rewrite this print statement as follows: print((f)(lambda_x, lambda_z)(3)(4)) with lambda_x defined as: def lambda_x(x): def lambda_y(y): return x return lambda_y lambda_x is a function that accepts some function x. It then creates a function lambda_y, that accepts some argument y. regardless of what y is, lambda_y returns the original function passed to lambda_x - x. In other words, lambda_x can be rewritten as: def lambda_x(x): return x So you can see that y is just a red herring here. lambda_z can be rewritten as: def lambda_z(z): return z*2 When we run the print statement, we call f with the arguments lambda_x and lambda_z. In running f, we create a new function inner, that calls lambda_x with the argument lambda_z. We've seen already that if we call lambda_x and pass to it some function, we simply get that function back. So when we call f with lambda_x and lambda_z, what we get out of it is just lambda_z. The definition of lambda_x, however, requires a superfluous argument y to be passed - to which nothing is done, and from which no value is obtained. In this print statement, 3 plays this role. You can rerun your original two lines of code with anything in place of 3, and get the same result - try it with 'foo', or 3+j or any other argument of any other type in place of 3 in the print statement - it'll make no difference. f therefore returns the function lambda_z, which consumes the final argument 4, and per the definition of lambda_z, returns 8. Blow by blow: If you throw in print statements along the way, as follows, you can follow along the various function calls in the resulting output. Here, I've set up the print statement with 'foo' in place of 3 to demonstrate its superfluity: def f(x, y): print (f'Calling "f", with arguments {x} and {y}') def inner(z): print (f'Calling function "inner" with argument {z}') return x(y)(z) print (f'returning "inner"') return inner def lambda_x(x): print (f'calling lambda_x with argument {x}') def lambda_y(y): print (f'calling lambda_y with argument {y}, returning {x}') return x return lambda_y def lambda_z(z): print (f'calling lambda_z with argument {z}') return z*2 print((f)(lambda_x, lambda_z)('foo')(4)) With the result: Calling "f", with arguments <function lambda_x at 0x0000017EC49109D0> and <function lambda_z at 0x0000017EC4910940> returning "inner" Calling function "inner" with argument foo calling lambda_x with argument <function lambda_z at 0x0000017EC4910940> calling lambda_y with argument foo, returning <function lambda_z at 0x0000017EC4910940> calling lambda_z with argument 4 8 Hopefully that helps clarify?
I'm trying to properly visualise this lambda problem but I'm having a lot of trouble
These 2 lines of code are from an exam paper and I'm trying to figure out how to properly visualise how the variables move about. The output is 8. f = lambda x, y: lambda z: (x)(y)(z) print((f)(lambda x: lambda y: x, lambda z: z * 2)(3)(4)) I've tried using online python visualiser websites but I still can't understand how the process works. Is it possible if someone is kind enough to rewrite this as a 'def function()'?
[ "f can be written as a function as follows:\ndef f(x, y):\n def inner(z):\n return x(y)(z)\n return inner\n\nf takes two functions, x and y. x is a function that accepts another function (y), and returns a third function that accepts an argument z.\nThe print statement calls f with a couple of anonymous functions called in-line:\nprint((f)(lambda x: lambda y: x, lambda z: z * 2)(3)(4))\n\nWe can rewrite this print statement as follows:\nprint((f)(lambda_x, lambda_z)(3)(4))\n\nwith lambda_x defined as:\ndef lambda_x(x):\n def lambda_y(y):\n return x\n return lambda_y \n\nlambda_x is a function that accepts some function x. It then creates a function lambda_y, that accepts some argument y. regardless of what y is, lambda_y returns the original function passed to lambda_x - x. In other words, lambda_x can be rewritten as:\ndef lambda_x(x):\n return x\n\nSo you can see that y is just a red herring here.\nlambda_z can be rewritten as:\ndef lambda_z(z):\n return z*2\n\nWhen we run the print statement, we call f with the arguments lambda_x and lambda_z. In running f, we create a new function inner, that calls lambda_x with the argument lambda_z. We've seen already that if we call lambda_x and pass to it some function, we simply get that function back. So when we call f with lambda_x and lambda_z, what we get out of it is just lambda_z.\nThe definition of lambda_x, however, requires a superfluous argument y to be passed - to which nothing is done, and from which no value is obtained. In this print statement, 3 plays this role. You can rerun your original two lines of code with anything in place of 3, and get the same result - try it with 'foo', or 3+j or any other argument of any other type in place of 3 in the print statement - it'll make no difference.\nf therefore returns the function lambda_z, which consumes the final argument 4, and per the definition of lambda_z, returns 8.\nBlow by blow:\nIf you throw in print statements along the way, as follows, you can follow along the various function calls in the resulting output. Here, I've set up the print statement with 'foo' in place of 3 to demonstrate its superfluity:\ndef f(x, y):\n print (f'Calling \"f\", with arguments {x} and {y}')\n def inner(z):\n print (f'Calling function \"inner\" with argument {z}')\n return x(y)(z)\n print (f'returning \"inner\"')\n return inner\n\ndef lambda_x(x):\n print (f'calling lambda_x with argument {x}')\n def lambda_y(y):\n print (f'calling lambda_y with argument {y}, returning {x}')\n return x\n return lambda_y\n\ndef lambda_z(z):\n print (f'calling lambda_z with argument {z}')\n return z*2\n\nprint((f)(lambda_x, lambda_z)('foo')(4))\n\nWith the result:\nCalling \"f\", with arguments <function lambda_x at 0x0000017EC49109D0> and <function lambda_z at 0x0000017EC4910940>\nreturning \"inner\"\nCalling function \"inner\" with argument foo\ncalling lambda_x with argument <function lambda_z at 0x0000017EC4910940>\ncalling lambda_y with argument foo, returning <function lambda_z at 0x0000017EC4910940>\ncalling lambda_z with argument 4\n8\n\nHopefully that helps clarify?\n" ]
[ 3 ]
[]
[]
[ "lambda", "python", "python_3.x" ]
stackoverflow_0074563389_lambda_python_python_3.x.txt
Q: 'numpy.ndarray' object has no attribute 'tick_params' when plotting histograms in 'for' loop but not sure why I have the following code. I am trying to loop through columns of a dataframe (newerdf) and plot a histogram for each one. I am then saving each plot as a .png file on my desktop. However, the following code gives me the error: 'numpy.ndarray' object has no attribute 'tick_params'. I would be so grateful for a helping hand! listedvariables = ['distance','age'] for i in range(0,len(listedvariables)): x = newerdf[[listedvariables[i]]].hist(figsize=(50,50)) x.tick_params(axis='x',labelsize=60) x.tick_params(axis='y',labelsize=60) x.set_xlabel(var,fontsize=70,labelpad=30,weight='bold') x.set_ylabel('Number of participants',fontsize=70,labelpad=30,weight='bold') x.set_title(var,fontsize=70,pad=30,weight='bold') dir_name = "/Users/macbook/Desktop/UCL PhD Work/" plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name)) plt.savefig(var+' '+'histogram') plt.show() The first 10 rows of newerdf['age'] look like this: 0 21.0 1 24.0 2 47.0 3 32.0 5 29.0 6 29.0 7 22.0 8 23.0 9 32.0 10 22.0 A: The DataFrame.hist() function returns, according to its documentation, a matplotlib axes or a numpy array of them, if your dataframe has more than one column. This function calls matplotlib.pyplot.hist(), on each series in the DataFrame, resulting in one histogram per column. Thus in this line, x = newerdf[[listedvariables[i]]].hist(figsize=(50,50)) x is set to a numpy array, and numpy arrays have no attribute tick_params. To fix this, loop over the values in x as: for i in range(0,len(listedvariables)): x = newerdf[[listedvariables[i]]].hist(figsize=(50,50)) for hist in x: # remainder of your code A: The following code works. I added '.plot' between the df 'newerdf' and 'hist'. 
listedvariables = ['distance','age'] for i in range(0,len(listedvariables)): x = newerdf[[listedvariables[i]]].plot.hist(figsize=(50,50)) x.tick_params(axis='x',labelsize=60) x.tick_params(axis='y',labelsize=60) x.set_xlabel(listedvariables[i],fontsize=70,labelpad=30,weight='bold') x.set_ylabel('Number of participants',fontsize=70,labelpad=30,weight='bold') x.set_title(listedvariables[i],fontsize=70,pad=30,weight='bold') dir_name = "/Users/macbook/Desktop/UCL PhD Work/" plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name)) plt.savefig(listedvariables[i]+' '+'histogram') plt.show()
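The first answer's point can be checked in a few lines. This sketch uses small made-up data and a headless matplotlib backend rather than the asker's real newerdf, but shows the same return type and the per-Axes fix:

```python
# Minimal sketch (assumed data) of why the asker's code fails: DataFrame.hist()
# returns a NumPy array of Axes objects, not a single Axes.
import matplotlib
matplotlib.use("Agg")  # headless backend, no window needed
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({"distance": [1.0, 2.5, 3.2, 4.8], "age": [21, 24, 47, 32]})

axes = df[["age"]].hist()  # still an ndarray, even for a single column
print(type(axes))          # <class 'numpy.ndarray'>

# Style each Axes object individually instead of calling methods on the array
for ax in axes.flatten():
    ax.tick_params(axis="x", labelsize=12)
    ax.tick_params(axis="y", labelsize=12)
    ax.set_xlabel("age")
plt.close("all")
```

The second answer sidesteps the issue because `DataFrame.plot.hist()` returns one Axes for the whole frame instead of an array.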
'numpy.ndarray' object has no attribute 'tick_params' when plotting histograms in 'for' loop but not sure why
I have the following code. I am trying to loop through columns of a dataframe (newerdf) and plot a histogram for each one. I am then saving each plot as a .png file on my desktop. However, the following code gives me the error: 'numpy.ndarray' object has no attribute 'tick_params'. I would be so grateful for a helping hand! listedvariables = ['distance','age'] for i in range(0,len(listedvariables)): x = newerdf[[listedvariables[i]]].hist(figsize=(50,50)) x.tick_params(axis='x',labelsize=60) x.tick_params(axis='y',labelsize=60) x.set_xlabel(var,fontsize=70,labelpad=30,weight='bold') x.set_ylabel('Number of participants',fontsize=70,labelpad=30,weight='bold') x.set_title(var,fontsize=70,pad=30,weight='bold') dir_name = "/Users/macbook/Desktop/UCL PhD Work/" plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name)) plt.savefig(var+' '+'histogram') plt.show() The first 10 rows of newerdf['age'] look like this: 0 21.0 1 24.0 2 47.0 3 32.0 5 29.0 6 29.0 7 22.0 8 23.0 9 32.0 10 22.0
[ "The DataFrame.hist() function returns, according to its documentation, a matplotlib axes or a numpy array of them, if your dataframe has more then one column.\n\nThis function calls matplotlib.pyplot.hist(), on each series in the DataFrame, resulting in one histogram per column.\n\nThus in this line,\nx = newerdf[[listedvariables[i]]].hist(figsize=(50,50))\n\nx is set to a numpy array, and numpy arrays have no attribute tick_params.\nto fix this, loop over the values in x as:\nfor i in range(0,len(listedvariables)): \n x = newerdf[[listedvariables[i]]].hist(figsize=(50,50))\n for hist in x:\n # remainder of your code\n\n", "The following code works.\nI added '.plot' between the df 'newer[df]' and 'hist'.\nlistedvariables = ['distance','age']\nfor i in range(0,len(listedvariables)): \n x = newerdf[[listedvariables[i]]].plot.hist(figsize=(50,50))\n x.tick_params(axis='x',labelsize=60) \n x.tick_params(axis='y',labelsize=60)\n x.set_xlabel(listedvariables[i],fontsize=70,labelpad=30,weight='bold')\n x.set_ylabel('Number of participants',fontsize=70,labelpad=30,weight='bold') \n x.set_title(listedvariables[i],fontsize=70,pad=30,weight='bold') \n dir_name = \"/Users/macbook/Desktop/UCL PhD Work/\"\n plt.rcParams[\"savefig.directory\"] = os.chdir(os.path.dirname(dir_name))\n plt.savefig(listedvariables[i]+' '+'histogram')\n plt.show()\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "histogram", "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074564245_dataframe_histogram_jupyter_notebook_pandas_python.txt
Q: Rearranging values mixed up in incorrect columns I'm cleaning a dataframe and have this column Description that I would like to split into 4 separate new columns(Type, Stories, Bedrooms, Bathrooms). The column contains entries mainly in this format: Type: Detached; Style: 2-Story; 3 Bedrooms; 2 Bathrooms which is the correct format I want every entry in the column to have. The problem is some entries are mixed up, e.g. 1 Bathroom; 2 Bedrooms; Type: Bunaglow; Style: 1-Story; or 3 Bedrooms; Type: Detached; Style: 2-Story; 2 Bathrooms; both of which are in the incorrect order and do not follow the format above. I already carried out a .str() split to create the 4 new columns but I have no idea how to deal with these mixed-up entries. I found a solution online that somewhat works and implemented it into my dataframe but the problem with it is I have to manually specify each and every mixed-up entry and my dataframe contains over 950 rows. Is there any sort of way I could define a criteria for my column 'Description' and implement a check that if certain entries do not match up with the criteria they should be sorted correctly and then updated to the data frame, or can I tackle this in a better way than manually specifying each individual row? 
Code to manually sort one of the mixed-up entries, the problem here is I have to specify out of which of the 4 new columns I want the individual values swapped around for m = df1['Type'] == '2 Bathrooms' mp = {'Type': 'Bathrooms', 'Bathrooms': 'Type'} df1.update(df1.loc[m].rename(mp, axis=1)) df1 Original Dataframe Before Str Split String Split df1[['Type','Stories','Bedrooms','Bathrooms']] = df1['Description'].str.split(';', expand=True) df1 = df1.drop('Description', axis=1) Current State of Dataframe df1.head().to_dict() 1: '2016-01-07', 2: '2016-01-10', 3: '2016-01-10', 4: '2016-01-10'}, 'Price(€)': {0: 638740.0, 1: 546330.0, 2: 376039.0, 3: 506446.0, 4: 494491.0}, 'Location': {0: 'Brookville', 1: 'Brookville', 2: 'West End', 3: 'West End', 4: 'West End'}, 'Year Built': {0: 2011, 1: 2009, 2: 1963, 3: 2013, 4: 2004}, 'Size(sq ft)': {0: 1849, 1: 1551, 2: 1073, 3: 1206, 4: 1687}, 'Description': {0: 'Type: Detached; Style: 2-Story; 3 Bedrooms; 2 Bathrooms', 1: 'Type: Detached; Style: 1-Story; 3 Bedrooms; 2 Bathrooms', 2: 'Type: Terraced; Style: 1-Story; 3 Bedrooms; 1 Bathroom', 3: 'Type: Detached; Style: 1.5-Story; 2 Bedrooms; 2 Bathrooms', 4: 'Type: Detached; Style: 2-Story; 3 Bedrooms; 2 Bathrooms'}} A: Try using .str accessor,extract, regex, and named capture groups like this: regstr = 'Type: (?P<Type>.*); Style: (?P<Style>.*); (?P<Bedrooms>\d+) Bedrooms; (?P<Bathrooms>\d+)' df.join(df['Description'].str.extract(regstr)) Output: Date of Sale Price(€) Location Year Built Size(sq ft) Description Type Style Bedrooms Bathrooms 0 2016-01-03 638740.0 Brookville 2011 1849 Type: Detached; Style: 2-Story; 3 Bedrooms; 2 ... Detached 2-Story 3 2 1 2016-01-07 546330.0 Brookville 2009 1551 Type: Detached; Style: 1-Story; 3 Bedrooms; 2 ... Detached 1-Story 3 2 2 2016-01-10 376039.0 West End 1963 1073 Type: Terraced; Style: 1-Story; 3 Bedrooms; 1 ... Terraced 1-Story 3 1 3 2016-01-10 506446.0 West End 2013 1206 Type: Detached; Style: 1.5-Story; 2 Bedrooms; ... 
Detached 1.5-Story 2 2 4 2016-01-10 494491.0 West End 2004 1687 Type: Detached; Style: 2-Story; 3 Bedrooms; 2 ... Detached 2-Story 3 2
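The capture-group pattern above assumes the fields appear in a fixed order, while the asker's problem entries are shuffled. A hedged variant is to search for each field independently, so order no longer matters; the field names and formats below are taken from the question's examples:

```python
# Order-independent parsing sketch: one small regex per field instead of one
# ordered pattern, so shuffled entries like the asker's still parse.
import re

def parse_description(desc):
    patterns = {
        "Type": r"Type:\s*([^;]+)",
        "Style": r"Style:\s*([^;]+)",
        "Bedrooms": r"(\d+)\s+Bedrooms?",
        "Bathrooms": r"(\d+)\s+Bathrooms?",
    }
    out = {}
    for field, pat in patterns.items():
        m = re.search(pat, desc)
        out[field] = m.group(1).strip() if m else None
    return out

# One of the question's mixed-up entries (spelling normalized):
print(parse_description("1 Bathroom; 2 Bedrooms; Type: Bungalow; Style: 1-Story;"))
# {'Type': 'Bungalow', 'Style': '1-Story', 'Bedrooms': '2', 'Bathrooms': '1'}
```

On a dataframe this could then be applied as `df["Description"].apply(parse_description).apply(pd.Series)` to expand the dicts into the four columns.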
Rearranging values mixed up in incorrect columns
I'm cleaning a dataframe and have this column Description that I would like to split into 4 separate new columns(Type, Stories, Bedrooms, Bathrooms). The column contains entries mainly in this format: Type: Detached; Style: 2-Story; 3 Bedrooms; 2 Bathrooms which is the correct format I want every entry in the column to have. The problem is some entries are mixed up, e.g. 1 Bathroom; 2 Bedrooms; Type: Bunaglow; Style: 1-Story; or 3 Bedrooms; Type: Detached; Style: 2-Story; 2 Bathrooms; both of which are in the incorrect order and do not follow the format above. I already carried out a .str() split to create the 4 new columns but I have no idea how to deal with these mixed-up entries. I found a solution online that somewhat works and implemented it into my dataframe but the problem with it is I have to manually specify each and every mixed-up entry and my dataframe contains over 950 rows. Is there any sort of way I could define a criteria for my column 'Description' and implement a check that if certain entries do not match up with the criteria they should be sorted correctly and then updated to the data frame, or can I tackle this in a better way than manually specifying each individual row? 
Code to manually sort one of the mixed-up entries, the problem here is I have to specify out of which of the 4 new columns I want the individual values swapped around for m = df1['Type'] == '2 Bathrooms' mp = {'Type': 'Bathrooms', 'Bathrooms': 'Type'} df1.update(df1.loc[m].rename(mp, axis=1)) df1 Original Dataframe Before Str Split String Split df1[['Type','Stories','Bedrooms','Bathrooms']] = df1['Description'].str.split(';', expand=True) df1 = df1.drop('Description', axis=1) Current State of Dataframe df1.head().to_dict() 1: '2016-01-07', 2: '2016-01-10', 3: '2016-01-10', 4: '2016-01-10'}, 'Price(€)': {0: 638740.0, 1: 546330.0, 2: 376039.0, 3: 506446.0, 4: 494491.0}, 'Location': {0: 'Brookville', 1: 'Brookville', 2: 'West End', 3: 'West End', 4: 'West End'}, 'Year Built': {0: 2011, 1: 2009, 2: 1963, 3: 2013, 4: 2004}, 'Size(sq ft)': {0: 1849, 1: 1551, 2: 1073, 3: 1206, 4: 1687}, 'Description': {0: 'Type: Detached; Style: 2-Story; 3 Bedrooms; 2 Bathrooms', 1: 'Type: Detached; Style: 1-Story; 3 Bedrooms; 2 Bathrooms', 2: 'Type: Terraced; Style: 1-Story; 3 Bedrooms; 1 Bathroom', 3: 'Type: Detached; Style: 1.5-Story; 2 Bedrooms; 2 Bathrooms', 4: 'Type: Detached; Style: 2-Story; 3 Bedrooms; 2 Bathrooms'}}
[ "Try using .str accessor,extract, regex, and named capture groups like this:\nregstr = 'Type: (?P<Type>.*); Style: (?P<Style>.*); (?P<Bedrooms>\\d+) Bedrooms; (?P<Bathrooms>\\d+)'\ndf.join(df['Description'].str.extract(regstr))\n\nOutput:\n Date of Sale Price(€) Location Year Built Size(sq ft) Description Type Style Bedrooms Bathrooms\n0 2016-01-03 638740.0 Brookville 2011 1849 Type: Detached; Style: 2-Story; 3 Bedrooms; 2 ... Detached 2-Story 3 2\n1 2016-01-07 546330.0 Brookville 2009 1551 Type: Detached; Style: 1-Story; 3 Bedrooms; 2 ... Detached 1-Story 3 2\n2 2016-01-10 376039.0 West End 1963 1073 Type: Terraced; Style: 1-Story; 3 Bedrooms; 1 ... Terraced 1-Story 3 1\n3 2016-01-10 506446.0 West End 2013 1206 Type: Detached; Style: 1.5-Story; 2 Bedrooms; ... Detached 1.5-Story 2 2\n4 2016-01-10 494491.0 West End 2004 1687 Type: Detached; Style: 2-Story; 3 Bedrooms; 2 ... Detached 2-Story 3 2\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074564102_dataframe_pandas_python.txt
Q: Fixing Confusion Matrix plot lines I am trying to plot a confusion matrix as shown below cm = confusion_matrix(testY.argmax(axis=1), predictions.argmax(axis=1)) disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=lb.classes_) disp = disp.plot(include_values=True, cmap='viridis', ax=None, xticks_rotation='horizontal') plt.show() The result: As you can see, it's showing the axes of the boxes instead of outlining the boxes. I can't see the numbers outside the yellow boxes, because of the axes. I am not good with plots. So I can't find out what I need to change. What I expect: FOUND SOLUTION plt.tick_params(axis=u'both', which=u'both',length=0) plt.grid(b=None) A: Turn the grid off E.g., import matplotlib.pyplot as plt fig, _ = plt.subplots(nrows=1, figsize=(10,10)) ax = plt.subplot(1, 1, 1) ax.grid(False) ... disp = ConfusionMatrixDisplay(...) _ = disp.plot(..., ax=ax, ...) A: cm = confusion_matrix(testY.argmax(axis=1), predictions.argmax(axis=1)) disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=lb.classes_) disp = disp.plot(include_values=True, cmap='viridis', ax=None, xticks_rotation='horizontal') plt.grid(False) plt.show() A: Change your cmap parameter in plot() function. It stands for colour-mapping your integer values with colors. Check https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html for more details. As the answer cm = confusion_matrix(testY.argmax(axis=1), predictions.argmax(axis=1)) disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=lb.classes_) disp = disp.plot(include_values=True, cmap='Blues', ax=None, xticks_rotation='horizontal') plt.show() A: The graph which you are showing as example is by sns plot. You can use sns heatmap to plot your matrix. import seaborn as sns categories = lb.classes_ sns.heatmap(cm, annot=True,categories =categories, cmap='Blues') A: I used plt.rcParams['axes.grid'] = True in one of the first cells (for another matplotlib charts). 
So before the ConfusionMatrixDisplay I turned it off. import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay plt.rcParams['axes.grid'] = True ... plt.rcParams['axes.grid'] = False fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6)) disp_rfc.plot(ax = ax[0], cmap='coolwarm') disp_cbc.plot(ax = ax[1], cmap='coolwarm') plt.show()
Fixing Confusion Matrix plot lines
I am trying to plot a confusion matrix as shown below cm = confusion_matrix(testY.argmax(axis=1), predictions.argmax(axis=1)) disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=lb.classes_) disp = disp.plot(include_values=True, cmap='viridis', ax=None, xticks_rotation='horizontal') plt.show() The result: As you can see, it's showing the axes of the boxes instead of outlining the boxes. I can't see the numbers outside the yellow boxes, because of the axes. I am not good with plots. So I can't find out what I need to change. What I expect: FOUND SOLUTION plt.tick_params(axis=u'both', which=u'both',length=0) plt.grid(b=None)
[ "Turn the grid off\nE.g.,\nimport matplotlib.pyplot as plt\nfig, _ = plt.subplots(nrows=1, figsize=(10,10))\nax = plt.subplot(1, 1, 1)\nax.grid(False)\n\n...\n\ndisp = ConfusionMatrixDisplay(...)\n_ = disp.plot(..., ax=ax, ...)\n\n", "cm = confusion_matrix(testY.argmax(axis=1), predictions.argmax(axis=1))\n\ndisp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=lb.classes_)\ndisp = disp.plot(include_values=True, cmap='viridis', ax=None, xticks_rotation='horizontal')\nplt.grid(False)\nplt.show()\n\n", "Change your cmap parameter in plot() function. It stands for colour-mapping your integer values with colors.\nCheck\nhttps://matplotlib.org/3.1.0/tutorials/colors/colormaps.html\nfor more details.\nAs the answer\ncm = confusion_matrix(testY.argmax(axis=1), predictions.argmax(axis=1))\n\ndisp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=lb.classes_)\ndisp = disp.plot(include_values=True, cmap='Blues', ax=None, xticks_rotation='horizontal')\n\nplt.show()\n\n", "The graph which you are showing as example is by sns plot. You can use sns heatmap to plot your matrix.\nimport seaborn as sns\ncategories = lb.classes_\nsns.heatmap(cm, annot=True,categories =categories, cmap='Blues')\n\n", "I used plt.rcParams['axes.grid'] = True in one of the first cells (for another matplotlib charts). So before the ConfusionMatrixDisplay I turned it off.\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay\nplt.rcParams['axes.grid'] = True\n\n...\n\nplt.rcParams['axes.grid'] = False\nfig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6))\n\ndisp_rfc.plot(ax = ax[0], cmap='coolwarm')\ndisp_cbc.plot(ax = ax[1], cmap='coolwarm')\n\nplt.show()\n\n" ]
[ 5, 2, 0, 0, 0 ]
[]
[]
[ "confusion_matrix", "python", "scikit_learn" ]
stackoverflow_0063591238_confusion_matrix_python_scikit_learn.txt
Q: Python Regular Expression - Get Text starting in the next line after the match was found I have a question on using regular expressions in Python. This is a part of the text I am analysing. Amit Jawaharlaz Daryanani, Evercore ISI Institutional Equities, Research Division - Senior MD & Fundamental Research Analyst [19]\n I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?\n My Goal is to extract this part of the text by matching the name of the analyst which is Amit Jawaharlaz Daryanani: \n I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?\n I cannot just do from \n to \n because the text is much longer and I specifically need the line of text which comes after his name. I tried: re.findall(r'(?<=Amit Jawaharlaz Daryanani).*?(?=\n)', text) But the Output here is [', Evercore ISI Institutional Equities, Research Division - Senior MD & Fundamental Research Analyst [19]' So how can I start after the first \n that comes after his name until the second \n after his name? A: You can use a capture group: \bAmit Jawaharlaz Daryanani\b.*\n\s*(.*)\n Explanation \bAmit Jawaharlaz Daryanani\b Match the name .*\n Match the rest of the line and a newline \s*(.*)\n Match optional whitespace chars, and capture a whole line in group 1 followed by matching a newline See a regex demo and a Python demo. 
import re pattern = r"\bAmit Jawaharlaz Daryanani\b.*\n\s*(.*)\n" s = ("Amit Jawaharlaz Daryanani, Evercore ISI Institutional Equities, Research Division - Senior MD & Fundamental Research Analyst [19]\n" " I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?\n" " \n") m = re.search(pattern, s) if m: print(m.group(1)) Output I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation? A: Try this: non-capturing group for the name look for the first \n capturing group until the second \n re.findall(r'(?:Amit Jawaharlaz Daryanani).*?\n(.*?)\n', text) This works because of .*?, which is non-greedy. This means it stops before the first \n that is encountered. Output: [' I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?']
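The first answer's idea generalizes to any analyst with a small wrapper; `re.escape` is added here as a precaution so punctuation in a name cannot break the pattern (an assumption beyond the original answer, which hard-coded the name). The transcript below is an abbreviated version of the question's sample:

```python
# Hedged sketch: find the remark line that follows a given speaker's header line.
import re

def remark_after(name, transcript):
    # header line containing the name, then newline, then the remark line;
    # note \s* also absorbs the remark's leading indentation
    pattern = re.escape(name) + r".*\n\s*(.*)\n"
    m = re.search(pattern, transcript)
    return m.group(1) if m else None

text = ("Amit Jawaharlaz Daryanani, Evercore ISI - Senior MD [19]\n"
        " I have 2 as well. I guess, first off, on the channel inventory.\n"
        " \n")
print(remark_after("Amit Jawaharlaz Daryanani", text))
```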
Python Regular Expression - Get Text starting in the next line after the match was found
I have a question on using regular expressions in Python. This is a part of the text I am analysing. Amit Jawaharlaz Daryanani, Evercore ISI Institutional Equities, Research Division - Senior MD & Fundamental Research Analyst [19]\n I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?\n My Goal is to extract this part of the text by matching the name of the analyst which is Amit Jawaharlaz Daryanani: \n I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?\n I cannot just do from \n to \n because the text is much longer and I specifically need the line of text which comes after his name. I tried: re.findall(r'(?<=Amit Jawaharlaz Daryanani).*?(?=\n)', text) But the Output here is [', Evercore ISI Institutional Equities, Research Division - Senior MD & Fundamental Research Analyst [19]' So how can I start after the first \n that comes after his name until the second \n after his name?
[ "You can use a capture group:\n\\bAmit Jawaharlaz Daryanani\\b.*\\n\\s*(.*)\\n\n\nExplanation\n\n\\bAmit Jawaharlaz Daryanani\\b Match the name\n.*\\n Match the rest of the line and a newline\n\\s*(.*)\\n Match optional whitespace chars, and capture a whole line in group 1 followed by matching a newline\n\nSee a regex demo and a Python demo.\nimport re\n\npattern = r\"\\bAmit Jawaharlaz Daryanani\\b.*\\n\\s*(.*)\\n\"\n\ns = (\"Amit Jawaharlaz Daryanani, Evercore ISI Institutional Equities, Research Division - Senior MD & Fundamental Research Analyst [19]\\n\"\n \" I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?\\n\"\n \" \\n\")\n\nm = re.search(pattern, s)\nif m:\n print(m.group(1))\n\nOutput\nI have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?\n\n", "Try this:\n\nnon-capturing group for the name\nlook for the first \\n\ncapturing group until the second \\n\n\nre.findall(r'(?:Amit Jawaharlaz Daryanani).*?\\n(.*?)\\n', text)\n\nThis works because of .*?, which is non-greedy. This means it stops before the first \\n that is encountered.\nOutput:\n[' I have 2 as well. I guess, first off, on the channel inventory, I was hoping if you could talk about how did channel inventory look like in the March quarter because it sounds like it may be below the historical ranges. 
And then the discussion you had for June quarter performance of iPhones, what are you embedding from a channel building back inventory levels in that expectation?']\n\n" ]
[ 2, 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074564654_python_regex.txt
Q: How to add columns to Pandas DataFrame with minute of the day, month, year using time stamp from other column? I have a dataframe containing various data, including a column from Linux Timestamp. For further analysis, I need to extract the minutes of each period (hour minute number, day minute number, week minute number, month minute number, year minute number) from the Linux Timestamp column. I have: TimeStamp var1 var2 1659494100 5.22 6.34 1659494160 4.33 7.33 1659494220 5.46 7.21 1659494280 4.33 4.51 1659494340 6.45 5.67 ... I need to have: TimeStamp var1 var2 minute_of_hour minute_of_day minute_of_week minute_of_month minute_of_year 1659494100 5.22 6.34 35 155 3035 3035 308315 1659494160 4.33 7.33 36 156 3036 3036 308316 1659494220 5.46 7.21 37 157 3037 3037 308317 1659494280 4.33 4.51 38 158 3038 3038 308318 1659494340 6.45 5.67 39 159 3039 3039 308319 I have a large table and using loops is not an option. Do you have any ideas? A: import pandas as pd # Your dataframe here: df = pd.DataFrame({ "Timestamp": [1659494100, 1659494160, 1659494220, 1659494280, 1659494340], "var1": [5.22, 4.33, 5.46, 4.33, 6.45], "var2": [6.34, 7.33, 7.21, 4.51, 5.67] }) timestamps = pd.to_datetime(df["Timestamp"], unit="s") freqs = { "hour": "H", "day": "D", "week": "W", "month": "M", "year": "Y" } for name, freq in freqs.items(): df[f"minute_of_{name}"] = ( timestamps - timestamps.dt.to_period(freq).dt.start_time ) // pd.Timedelta("1Min") Output: Timestamp var1 var2 minute_of_hour minute_of_day minute_of_week \ 0 1659494100 5.22 6.34 35 155 3035 1 1659494160 4.33 7.33 36 156 3036 2 1659494220 5.46 7.21 37 157 3037 3 1659494280 4.33 4.51 38 158 3038 4 1659494340 6.45 5.67 39 159 3039 minute_of_month minute_of_year 0 3035 308315 1 3036 308316 2 3037 308317 3 3038 308318 4 3039 308319 Note that some columns can be calculated more directly, but this method makes the code consistent for all frequencies.
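The pandas answer above can be cross-checked for a single timestamp with only the standard library, deriving the "minute of" values from the UTC calendar fields (the timestamp is the first row of the question's sample):

```python
# Stdlib sanity check of the pandas approach for one Unix timestamp.
from datetime import datetime, timezone

ts = 1659494100
dt = datetime.fromtimestamp(ts, tz=timezone.utc)

minute_of_hour = dt.minute
minute_of_day = dt.hour * 60 + dt.minute
# tm_yday is 1-based, so subtract 1 to count full elapsed days of the year
minute_of_year = (dt.timetuple().tm_yday - 1) * 1440 + minute_of_day

print(minute_of_hour, minute_of_day, minute_of_year)  # 35 155 308315
```

These match the expected-output table in the question (35, 155, 308315).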
How to add columns to Pandas DataFrame with minute of the day, month, year using time stamp from other column?
I have a dataframe containing various data, including a column from Linux Timestamp. For further analysis, I need to extract the minutes of each period (hour minute number, day minute number, week minute number, month minute number, year minute number) from the Linux Timestamp column. I have: TimeStamp var1 var2 1659494100 5.22 6.34 1659494160 4.33 7.33 1659494220 5.46 7.21 1659494280 4.33 4.51 1659494340 6.45 5.67 ... I need to have: TimeStamp var1 var2 minute_of_hour minute_of_day minute_of_week minute_of_month minute_of_year 1659494100 5.22 6.34 35 155 3035 3035 308315 1659494160 4.33 7.33 36 156 3036 3036 308316 1659494220 5.46 7.21 37 157 3037 3037 308317 1659494280 4.33 4.51 38 158 3038 3038 308318 1659494340 6.45 5.67 39 159 3039 3039 308319 I have a large table and using loops is not an option. Do you have any ideas?
[ "import pandas as pd\n\n# Your dataframe here:\ndf = pd.DataFrame({\n \"Timestamp\": [1659494100, 1659494160, 1659494220, 1659494280, 1659494340],\n \"var1\": [5.22, 4.33, 5.46, 4.33, 6.45],\n \"var2\": [6.34, 7.33, 7.21, 4.51, 5.67]\n})\n\ntimestamps = pd.to_datetime(df[\"Timestamp\"], unit=\"s\")\n\nfreqs = {\n \"hour\": \"H\",\n \"day\": \"D\",\n \"week\": \"W\",\n \"month\": \"M\",\n \"year\": \"Y\"\n}\n\nfor name, freq in freqs.items():\n df[f\"minute_of_{name}\"] = (\n timestamps - timestamps.dt.to_period(freq).dt.start_time\n ) // pd.Timedelta(\"1Min\")\n\nOutput:\n Timestamp var1 var2 minute_of_hour minute_of_day minute_of_week \\\n0 1659494100 5.22 6.34 35 155 3035 \n1 1659494160 4.33 7.33 36 156 3036 \n2 1659494220 5.46 7.21 37 157 3037 \n3 1659494280 4.33 4.51 38 158 3038 \n4 1659494340 6.45 5.67 39 159 3039 \n\n minute_of_month minute_of_year \n0 3035 308315 \n1 3036 308316 \n2 3037 308317 \n3 3038 308318 \n4 3039 308319 \n\nNote that some columns can be calculated more directly, but this method makes the code consistent for all frequencies.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python", "timestamp" ]
stackoverflow_0074564536_dataframe_pandas_python_timestamp.txt
Q: searching values from one dataframe in another dataframe using pandas I have two datasets, patient data and disease data. The patient dataset has diseases written in alphanumeric code format which I want to search in the disease dataset to display the disease name. Patient dataset snapshot Disease dataset snapshot I want to use the groupby function on the ICD column and find out the occurrence of a disease and rank it in descending order to display the top 5. I have been trying to find a reference for the same, but could not. Would appreciate the help! EDIT!! avg2 = joined.groupby('disease_name').TIME_DELTA.mean().disease_name.value_counts() I am getting this error "'Series' object has no attribute 'disease_name'" A: Assuming that the data you have are in two pandas dataframes called patients and diseases and that the diseases dataset has the column names disease_id and disease_name this could be a solution: joined = patients.merge(diseases, left_on='ICD', right_on='disease_id') top_5 = joined.disease_name.value_counts().head(5) This solution joins the data together and then uses value_counts instead of grouping. It should solve what I perceive to be what you are asking for even if it is not exactly the functionality you asked for.
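A runnable version of the answer with tiny made-up data; the column names (ICD, disease_id, disease_name) and the disease labels are the answer's assumptions, not the asker's real datasets:

```python
# Sketch of the merge + value_counts approach on synthetic data.
import pandas as pd

patients = pd.DataFrame({"patient": [1, 2, 3, 4],
                         "ICD": ["A01", "B02", "A01", "A01"]})
diseases = pd.DataFrame({"disease_id": ["A01", "B02"],
                         "disease_name": ["Typhoid", "Leprosy"]})

# Join the disease names onto each patient row via the ICD code
joined = patients.merge(diseases, left_on="ICD", right_on="disease_id")

# value_counts already sorts in descending order, so head(5) gives the top 5
top = joined["disease_name"].value_counts().head(5)
print(top)
```

This also explains the asker's EDIT error: `.mean()` returns a Series, which has no `disease_name` attribute to chain on.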
searching values from one dataframe in another dataframe using pandas
I have two datasets, patient data and disease data. The patient dataset has diseases written in alphanumeric code format which I want to search in the disease dataset to display the disease name. Patient dataset snapshot Disease dataset snapshot I want to use the groupby function on the ICD column and find out the occurrence of a disease and rank it in descending order to display the top 5. I have been trying to find a reference for the same, but could not. Would appreciate the help! EDIT!! avg2 = joined.groupby('disease_name').TIME_DELTA.mean().disease_name.value_counts() I am getting this error "'Series' object has no attribute 'disease_name'"
[ "Assuming that the data you have are in two pandas dataframes called patients and diseases and that the diseases dataset has the column names disease_id and disease_name this could be a solution:\njoined = patients.merge(diseases, left_on='ICD', right_on='disease_id')\n\ntop_5 = joined.disease_name.value_counts().head(5)\n\nThis solution joins the data together and then use value_counts instead of grouping. It should solve what I preceive to be what you are asking for even if it is not exactly the functionality you asked for.\n" ]
[ 0 ]
[]
[]
[ "columnsorting", "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074564513_columnsorting_dataframe_numpy_pandas_python.txt
Q: Finding the median of a list of even numbers I'm doing a coding challenge where I need to find the min, max, average and median of a list and output two tuples (one of them being squared). I've managed to output the correct results apart from the median of a list that has odd numbers (e.g. ([7,2,4,5]) should return [(2, 4.5, 4.5, 7), (4, 23.5, 20.5, 49)]. Instead, I get ((2, 4.5, 3.0, 7), (4, 23.5, 10.0, 49)). It would also be helpful if anyone knew how to round the numbers. def exercise3(l): l2 = [number ** 2 for number in l] def median(l): l.copy().sort() if len(l)%2 != 0: median = l[len(l)//2] return median elif len(l)%2 == 0: mid = (len(l)//2)-1 median = (l[mid] + l[mid+1]) / 2 return median def calcStats(l): minL = min(l) avgL = sum(l) / len(l) medL = median(l) maxL = max(l) return minL, avgL, medL, maxL return calcStats(l), calcStats(l2) A: If usage of python standard library is not prohibited by the rules of your contest I would go with from statistics import median median(l)
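The wrong even-length result likely comes from `l.copy().sort()`, which sorts a throwaway copy and leaves `l` unsorted (an observation beyond the posted answer, so treat it as a diagnosis rather than a confirmed fix). Binding the sorted list repairs the median, and `statistics.median` from the answer agrees:

```python
# Sketch of the sorting bug fix; statistics.median is used as a cross-check.
from statistics import median as stats_median

def median(l):
    l = sorted(l)              # keep the sorted copy instead of discarding it
    mid = len(l) // 2
    if len(l) % 2 != 0:
        return l[mid]
    return (l[mid - 1] + l[mid]) / 2

print(median([7, 2, 4, 5]))        # 4.5, matching the question's expected output
print(stats_median([7, 2, 4, 5]))  # 4.5
```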
Finding the median of a list of even numbers
I'm doing a coding challenge where I need to find the min, max, average and median of a list and output two tuples (one of them being squared). I've managed to output the correct results apart from the median of a list with an even number of elements (e.g. [7,2,4,5] should return [(2, 4.5, 4.5, 7), (4, 23.5, 20.5, 49)]). Instead, I get ((2, 4.5, 3.0, 7), (4, 23.5, 10.0, 49)). It would also be helpful if anyone knew how to round the numbers. def exercise3(l): l2 = [number ** 2 for number in l] def median(l): l.copy().sort() if len(l)%2 != 0: median = l[len(l)//2] return median elif len(l)%2 == 0: mid = (len(l)//2)-1 median = (l[mid] + l[mid+1]) / 2 return median def calcStats(l): minL = min(l) avgL = sum(l) / len(l) medL = median(l) maxL = max(l) return minL, avgL, medL, maxL return calcStats(l), calcStats(l2)
[ "If usage of python standard library is not prohibited by the rules of your contest I would go with\nfrom statistics import median\nmedian(l)\n\n" ]
[ 1 ]
[]
[]
[ "list", "python", "statistics" ]
stackoverflow_0074564696_list_python_statistics.txt
Q: Converting indices in marching cubes to original x,y,z space - visualizing isosurface 3d skimage I want to draw a volume in x1,x2,x3-space. The volume is an isocurve found by the marching cubes algorithm in skimage. The function generating the volume is pdf_grid = f(x1,x2,x3) and I want to draw the volume where pdf = 60% max(pdf). My issue is that the marching cubes algorithm generates vertices and faces, but how do I map those back to the x1, x2, x3-space? My (rather limited) understanding of marching cubes is that "vertices" refer to the indices in the volume (pdf_grid in my case). If "vertices" contained only the exact indices in the grid this would have been easy, but "vertices" contains floats and not integers. It seems like marching cubes do some interpolation between grid points (according to https://www.cs.carleton.edu/cs_comps/0405/shape/marching_cubes.html), so the question is then how to recover exactly the values of x1,x2,x3? import numpy as np import scipy.stats import matplotlib.pyplot as plt #Make some random data cov = np.array([[1, .2, -.5], [.2, 1.2, .1], [-.5, .1, .8]]) dist = scipy.stats.multivariate_normal(mean = [1., 3., 2], cov = cov) N = 500 x_samples = dist.rvs(size=N).T #Create the kernel density estimator - approximation of a pdf kernel = scipy.stats.gaussian_kde(x_samples) x_mean = x_samples.mean(axis=1) #Find the mode res = scipy.optimize.minimize(lambda x: -kernel.logpdf(x), x_mean #x0, initial guess ) x_mode = res["x"] num_el = 50 #number of elements in the grid x_min = np.min(x_samples, axis = 1) x_max = np.max(x_samples, axis = 1) x1g, x2g, x3g = np.mgrid[x_min[0]:x_max[0]:num_el*1j, x_min[1]:x_max[1]:num_el*1j, x_min[2]:x_max[2]:num_el*1j ] pdf_grid = np.zeros(x1g.shape) #implicit function/grid for the marching cubes for a in range(x1g.shape[0]): for b in range(x1g.shape[1]): for c in range(x1g.shape[2]): pdf_grid[a,b,c] = kernel(np.array([x1g[a,b,c], x2g[a,b,c], x3g[a,b,c]] )) from mpl_toolkits.mplot3d.art3d import
Poly3DCollection from skimage import measure iso_level = .6 #draw a volume which contains pdf_val(mode)*60% verts, faces, normals, values = measure.marching_cubes(pdf_grid, kernel(x_mode)*iso_level) #How to convert the figure back to x1,x2,x3 space? I just draw the output as it was done in the skimage example here https://scikit-image.org/docs/0.16.x/auto_examples/edges/plot_marching_cubes.html#sphx-glr-auto-examples-edges-plot-marching-cubes-py so you can see the volume # Fancy indexing: `verts[faces]` to generate a collection of triangles mesh = Poly3DCollection(verts[faces], alpha = .5, label = f"KDE = {iso_level}"+r"$x_{mode}$", linewidth = .1) mesh.set_edgecolor('k') fig, ax = plt.subplots(subplot_kw=dict(projection='3d')) c1 = ax.add_collection3d(mesh) c1._facecolors2d=c1._facecolor3d c1._edgecolors2d=c1._edgecolor3d #Plot the samples. Marching cubes volume does not capture these samples pdf_val = kernel(x_samples) #get density value for each point (for color-coding) x1, x2, x3 = x_samples scatter_plot = ax.scatter(x1, x2, x3, c=pdf_val, alpha = .2, label = r" samples") ax.scatter(x_mode[0], x_mode[1], x_mode[2], c = "r", alpha = .2, label = r"$x_{mode}$") ax.set_xlabel(r"$x_1$") ax.set_ylabel(r"$x_2$") ax.set_zlabel(r"$x_3$") # ax.set_box_aspect([np.ptp(i) for i in x_samples]) # equal aspect ratio cbar = fig.colorbar(scatter_plot, ax=ax) cbar.set_label(r"$KDE(w) \approx pdf(w)$") ax.legend() #Make the axis limit so that the volume and samples are shown. ax.set_xlim(- 5, np.max(verts, axis=0)[0] + 3) ax.set_ylim(- 5, np.max(verts, axis=0)[1] + 3) ax.set_zlim(- 5, np.max(verts, axis=0)[2] + 3) A: This is probably way too late of an answer to help OP, but in case anyone else comes across this post looking for a solution to this problem, the issue stems from the marching cubes algorithm outputting the relevant vertices in array space.
This space is defined by the number of elements per dimension of the mesh grid and the marching cubes algorithm does indeed do some interpolation in this space (explaining the presence of floats). Anyways, in order to transform the vertices back into x1,x2,x3 space you just need to scale and shift them by the appropriate quantities. These quantities are defined by the range, number of elements of the mesh grid, and the minimum value in each dimension respectively. So using the variables defined in the OP, the following will provide the actual location of the vertices: verts_actual = verts*((x_max-x_min)/pdf_grid.shape) + x_min
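The mapping in the answer can be sanity-checked with numpy alone. One hedged caveat: `np.mgrid` with a `num_el*1j` step includes both endpoints, so the exact inverse uses a spacing of `(x_max - x_min)/(shape - 1)`; dividing by `pdf_grid.shape` as in the answer is off by one cell width, which may be negligible on a fine grid. A minimal sketch with fabricated index-space vertices (the grid bounds below are made up, not from the question's random data):

```python
import numpy as np

# Fabricated setup mirroring the question: num_el grid points per axis
# spanning [x_min, x_max] inclusive (what num_el*1j in np.mgrid produces).
num_el = 50
x_min = np.array([-2.0, 0.0, -1.0])
x_max = np.array([4.0, 6.0, 5.0])
shape = np.array([num_el, num_el, num_el])

# Vertices as marching_cubes would return them: float indices in array space.
verts = np.array([[0.0, 0.0, 0.0],        # first grid point
                  [49.0, 49.0, 49.0],     # last grid point (num_el - 1)
                  [10.5, 20.25, 3.0]])    # an interpolated vertex

# Endpoint-inclusive grid -> cell spacing is range / (num_el - 1).
verts_actual = verts * ((x_max - x_min) / (shape - 1)) + x_min

print(verts_actual[0])  # maps back to x_min
print(verts_actual[1])  # maps back to x_max
```

Alternatively, `skimage.measure.marching_cubes` accepts a `spacing` argument, which lets it emit scaled coordinates directly (a shift by `x_min` is still needed).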
Converting indices in marching cubes to original x,y,z space - visualizing isosurface 3d skimage
I want to draw a volume in x1,x2,x3-space. The volume is an isocurve found by the marching cubes algorithm in skimage. The function generating the volume is pdf_grid = f(x1,x2,x3) and I want to draw the volume where pdf = 60% max(pdf). My issue is that the marching cubes algorithm generates vertices and faces, but how do I map those back to the x1, x2, x3-space? My (rather limited) understanding of marching cubes is that "vertices" refer to the indices in the volume (pdf_grid in my case). If "vertices" contained only the exact indices in the grid this would have been easy, but "vertices" contains floats and not integers. It seems like marching cubes do some interpolation between grid points (according to https://www.cs.carleton.edu/cs_comps/0405/shape/marching_cubes.html), so the question is then how to recover exactly the values of x1,x2,x3? import numpy as np import scipy.stats import matplotlib.pyplot as plt #Make some random data cov = np.array([[1, .2, -.5], [.2, 1.2, .1], [-.5, .1, .8]]) dist = scipy.stats.multivariate_normal(mean = [1., 3., 2], cov = cov) N = 500 x_samples = dist.rvs(size=N).T #Create the kernel density estimator - approximation of a pdf kernel = scipy.stats.gaussian_kde(x_samples) x_mean = x_samples.mean(axis=1) #Find the mode res = scipy.optimize.minimize(lambda x: -kernel.logpdf(x), x_mean #x0, initial guess ) x_mode = res["x"] num_el = 50 #number of elements in the grid x_min = np.min(x_samples, axis = 1) x_max = np.max(x_samples, axis = 1) x1g, x2g, x3g = np.mgrid[x_min[0]:x_max[0]:num_el*1j, x_min[1]:x_max[1]:num_el*1j, x_min[2]:x_max[2]:num_el*1j ] pdf_grid = np.zeros(x1g.shape) #implicit function/grid for the marching cubes for a in range(x1g.shape[0]): for b in range(x1g.shape[1]): for c in range(x1g.shape[2]): pdf_grid[a,b,c] = kernel(np.array([x1g[a,b,c], x2g[a,b,c], x3g[a,b,c]] )) from mpl_toolkits.mplot3d.art3d import Poly3DCollection from skimage import measure iso_level = .6 #draw a volume which contains pdf_val(mode)*60%
verts, faces, normals, values = measure.marching_cubes(pdf_grid, kernel(x_mode)*iso_level) #How to convert the figure back to x1,x2,x3 space? I just draw the output as it was done in the skimage example here https://scikit-image.org/docs/0.16.x/auto_examples/edges/plot_marching_cubes.html#sphx-glr-auto-examples-edges-plot-marching-cubes-py so you can see the volume # Fancy indexing: `verts[faces]` to generate a collection of triangles mesh = Poly3DCollection(verts[faces], alpha = .5, label = f"KDE = {iso_level}"+r"$x_{mode}$", linewidth = .1) mesh.set_edgecolor('k') fig, ax = plt.subplots(subplot_kw=dict(projection='3d')) c1 = ax.add_collection3d(mesh) c1._facecolors2d=c1._facecolor3d c1._edgecolors2d=c1._edgecolor3d #Plot the samples. Marching cubes volume does not capture these samples pdf_val = kernel(x_samples) #get density value for each point (for color-coding) x1, x2, x3 = x_samples scatter_plot = ax.scatter(x1, x2, x3, c=pdf_val, alpha = .2, label = r" samples") ax.scatter(x_mode[0], x_mode[1], x_mode[2], c = "r", alpha = .2, label = r"$x_{mode}$") ax.set_xlabel(r"$x_1$") ax.set_ylabel(r"$x_2$") ax.set_zlabel(r"$x_3$") # ax.set_box_aspect([np.ptp(i) for i in x_samples]) # equal aspect ratio cbar = fig.colorbar(scatter_plot, ax=ax) cbar.set_label(r"$KDE(w) \approx pdf(w)$") ax.legend() #Make the axis limit so that the volume and samples are shown. ax.set_xlim(- 5, np.max(verts, axis=0)[0] + 3) ax.set_ylim(- 5, np.max(verts, axis=0)[1] + 3) ax.set_zlim(- 5, np.max(verts, axis=0)[2] + 3)
[ "This is probably way too late of an answer to help OP, but in case anyone else comes across this post looking for a solution to this problem, the issue stems from the marching cubes algorithm outputting the relevant vertices in array space. This space is defined by the number of elements per dimension of the mesh grid and the marching cubes algorithm does indeed do some interpolation in this space (explaining the presence of floats).\nAnyways, in order to transform the vertices back into x1,x2,x3 space you just need to scale and shift them by the appropriate quantities. These quantities are defined by the range, number of elements of the mesh grid, and the minimum value in each dimension respectively. So using the variables defined in the OP, the following will provide the actual location of the vertices:\nverts_actual = verts*((x_max-x_min)/pdf_grid.shape) + x_min\n\n" ]
[ 0 ]
[]
[]
[ "isosurface", "marching_cubes", "python", "scikit_image" ]
stackoverflow_0070834443_isosurface_marching_cubes_python_scikit_image.txt
Q: Python- Return true if all statements are true I have a method and I want it to return true if all 3 statements are true. In case any of them is false the method should return false. def check_valid(self, a, b): statement1 = self.x == 0 statement2 = self.y == a statment3 = self.z = b return statement1 ^ statement2 ^ statement3 I am using xor to validate if all statements have the same value but if all statements are false then the method will return true, which is not the intended behavior. In order to fix this I am thinking in adding a true to the return statement like this: return true ^ statement1 ^ statement2 ^ statement3 But I don't think that it is the best approach. Is there a cleaner/better way to do this? A: This way would be a better approach and much more readable: def check_valid(self, a, b): if not self.x == 0: return False if not self.y == a: return False if not self.z == b: return False return True
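A hedged alternative sketch: chaining the comparisons with `and` (or `all()`) expresses "all three are true" directly, returns a plain bool, and short-circuits on the first failure. The class below is a hypothetical stand-in, since the original class is not shown in the question:

```python
class Widget:
    # Hypothetical container for the x, y, z attributes used in the question.
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def check_valid(self, a, b):
        # `and` short-circuits on the first False; equivalently:
        # return all([self.x == 0, self.y == a, self.z == b])
        return self.x == 0 and self.y == a and self.z == b

print(Widget(0, 3, 7).check_valid(3, 7))   # True
print(Widget(0, 3, 7).check_valid(3, 8))   # False
```

Unlike the XOR approach, this returns False whenever any single statement is false, including the all-false case that the question worries about.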
Python- Return true if all statements are true
I have a method and I want it to return true if all 3 statements are true. In case any of them is false the method should return false. def check_valid(self, a, b): statement1 = self.x == 0 statement2 = self.y == a statment3 = self.z = b return statement1 ^ statement2 ^ statement3 I am using xor to validate if all statements have the same value but if all statements are false then the method will return true, which is not the intended behavior. In order to fix this I am thinking in adding a true to the return statement like this: return true ^ statement1 ^ statement2 ^ statement3 But I don't think that it is the best approach. Is there a cleaner/better way to do this?
[ "This way would be a better approach and much more readable:\ndef check_valid(self, a, b):\n if not self.x == 0: return False\n if not self.y == a: return False\n if not self.z == b: return False\n return True\n\n\n" ]
[ 2 ]
[]
[]
[ "boolean", "boolean_logic", "boolean_operations", "python", "xor" ]
stackoverflow_0074564703_boolean_boolean_logic_boolean_operations_python_xor.txt
Q: How to split a list into pairs in all possible ways I have a list (say 6 elements for simplicity) L = [0, 1, 2, 3, 4, 5] and I want to chunk it into pairs in ALL possible ways. I show some configurations: [(0, 1), (2, 3), (4, 5)] [(0, 1), (2, 4), (3, 5)] [(0, 1), (2, 5), (3, 4)] and so on. Here (a, b) = (b, a) and the order of pairs is not important i.e. [(0, 1), (2, 3), (4, 5)] = [(0, 1), (4, 5), (2, 3)] The total number of such configurations is 1*3*5*...*(N-1) where N is the length of my list. How can I write a generator in Python that gives me all possible configurations for an arbitrary N? A: Take a look at itertools.combinations. matt@stanley:~$ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import itertools >>> list(itertools.combinations(range(6), 2)) [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)] A: I don't think there's any function in the standard library that does exactly what you need. Just using itertools.combinations can get you a list of all possible individual pairs, but doesn't actually solve the problem of all valid pair combinations. You could solve this easily with: import itertools def all_pairs(lst): for p in itertools.permutations(lst): i = iter(p) yield zip(i,i) But this will get you duplicates as it treats (a,b) and (b,a) as different, and also gives all orderings of pairs. In the end, I figured it's easier to code this from scratch than trying to filter the results, so here's the correct function. 
def all_pairs(lst): if len(lst) < 2: yield [] return if len(lst) % 2 == 1: # Handle odd length list for i in range(len(lst)): for result in all_pairs(lst[:i] + lst[i+1:]): yield result else: a = lst[0] for i in range(1,len(lst)): pair = (a,lst[i]) for rest in all_pairs(lst[1:i]+lst[i+1:]): yield [pair] + rest It's recursive, so it will run into stack issues with a long list, but otherwise does what you need. >>> for x in all_pairs([0,1,2,3,4,5]): print x [(0, 1), (2, 3), (4, 5)] [(0, 1), (2, 4), (3, 5)] [(0, 1), (2, 5), (3, 4)] [(0, 2), (1, 3), (4, 5)] [(0, 2), (1, 4), (3, 5)] [(0, 2), (1, 5), (3, 4)] [(0, 3), (1, 2), (4, 5)] [(0, 3), (1, 4), (2, 5)] [(0, 3), (1, 5), (2, 4)] [(0, 4), (1, 2), (3, 5)] [(0, 4), (1, 3), (2, 5)] [(0, 4), (1, 5), (2, 3)] [(0, 5), (1, 2), (3, 4)] [(0, 5), (1, 3), (2, 4)] [(0, 5), (1, 4), (2, 3)] A: How about this: items = ["me", "you", "him"] [(items[i],items[j]) for i in range(len(items)) for j in range(i+1, len(items))] [('me', 'you'), ('me', 'him'), ('you', 'him')] or items = [1, 2, 3, 5, 6] [(items[i],items[j]) for i in range(len(items)) for j in range(i+1, len(items))] [(1, 2), (1, 3), (1, 5), (1, 6), (2, 3), (2, 5), (2, 6), (3, 5), (3, 6), (5, 6)] A: Conceptually similar to @shang's answer, but it does not assume that groups are of size 2: import itertools def generate_groups(lst, n): if not lst: yield [] else: for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)): for groups in generate_groups([x for x in lst if x not in group], n): yield [group] + groups pprint(list(generate_groups([0, 1, 2, 3, 4, 5], 2))) This yields: [[(0, 1), (2, 3), (4, 5)], [(0, 1), (2, 4), (3, 5)], [(0, 1), (2, 5), (3, 4)], [(0, 2), (1, 3), (4, 5)], [(0, 2), (1, 4), (3, 5)], [(0, 2), (1, 5), (3, 4)], [(0, 3), (1, 2), (4, 5)], [(0, 3), (1, 4), (2, 5)], [(0, 3), (1, 5), (2, 4)], [(0, 4), (1, 2), (3, 5)], [(0, 4), (1, 3), (2, 5)], [(0, 4), (1, 5), (2, 3)], [(0, 5), (1, 2), (3, 4)], [(0, 5), (1, 3), (2, 4)], [(0, 5), (1, 4), (2, 3)]] A: 
My boss is probably not going to be happy I spent a little time on this fun problem, but here's a nice solution that doesn't need recursion, and uses itertools.product. It's explained in the docstring :). The results seem OK, but I haven't tested it too much. import itertools def all_pairs(lst): """Generate all sets of unique pairs from a list `lst`. This is equivalent to all _partitions_ of `lst` (considered as an indexed set) which have 2 elements in each partition. Recall how we compute the total number of such partitions. Starting with a list [1, 2, 3, 4, 5, 6] one takes off the first element, and chooses its pair [from any of the remaining 5]. For example, we might choose our first pair to be (1, 4). Then, we take off the next element, 2, and choose which element it is paired to (say, 3). So, there are 5 * 3 * 1 = 15 such partitions. That sounds like a lot of nested loops (i.e. recursion), because 1 could pick 2, in which case our next element is 3. But, if one abstracts "what the next element is", and instead just thinks of what index it is in the remaining list, our choices are static and can be aided by the itertools.product() function. """ N = len(lst) choice_indices = itertools.product(*[ xrange(k) for k in reversed(xrange(1, N, 2)) ]) for choice in choice_indices: # calculate the list corresponding to the choices tmp = lst[:] result = [] for index in choice: result.append( (tmp.pop(0), tmp.pop(index)) ) yield result cheers! A: A non-recursive function to find all the possible pairs where the order does not matter, i.e., (a,b) = (b,a) def combinantorial(lst): count = 0 index = 1 pairs = [] for element1 in lst: for element2 in lst[index:]: pairs.append((element1, element2)) index += 1 return pairs Since it is non-recursive you won't experience memory issues with long lists. 
Example of usage: my_list = [1, 2, 3, 4, 5] print(combinantorial(my_list)) >>> [(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)] A: Try the following recursive generator function: def pairs_gen(L): if len(L) == 2: yield [(L[0], L[1])] else: first = L.pop(0) for i, e in enumerate(L): second = L.pop(i) for list_of_pairs in pairs_gen(L): list_of_pairs.insert(0, (first, second)) yield list_of_pairs L.insert(i, second) L.insert(0, first) Example usage: >>> for pairs in pairs_gen([0, 1, 2, 3, 4, 5]): ... print pairs ... [(0, 1), (2, 3), (4, 5)] [(0, 1), (2, 4), (3, 5)] [(0, 1), (2, 5), (3, 4)] [(0, 2), (1, 3), (4, 5)] [(0, 2), (1, 4), (3, 5)] [(0, 2), (1, 5), (3, 4)] [(0, 3), (1, 2), (4, 5)] [(0, 3), (1, 4), (2, 5)] [(0, 3), (1, 5), (2, 4)] [(0, 4), (1, 2), (3, 5)] [(0, 4), (1, 3), (2, 5)] [(0, 4), (1, 5), (2, 3)] [(0, 5), (1, 2), (3, 4)] [(0, 5), (1, 3), (2, 4)] [(0, 5), (1, 4), (2, 3)] A: I made a small test suite for all the compliant solutions. I had to change the functions a bit to get them to work in Python 3. Interestingly, the fastest function in PyPy is the slowest function in Python 2/3 in some cases. 
import itertools import time from collections import OrderedDict def tokland_org(lst, n): if not lst: yield [] else: for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)): for groups in tokland_org([x for x in lst if x not in group], n): yield [group] + groups tokland = lambda x: tokland_org(x, 2) def gatoatigrado(lst): N = len(lst) choice_indices = itertools.product(*[ range(k) for k in reversed(range(1, N, 2)) ]) for choice in choice_indices: # calculate the list corresponding to the choices tmp = list(lst) result = [] for index in choice: result.append( (tmp.pop(0), tmp.pop(index)) ) yield result def shang(X): lst = list(X) if len(lst) < 2: yield lst return a = lst[0] for i in range(1,len(lst)): pair = (a,lst[i]) for rest in shang(lst[1:i]+lst[i+1:]): yield [pair] + rest def smichr(X): lst = list(X) if not lst: yield [tuple()] elif len(lst) == 1: yield [tuple(lst)] elif len(lst) == 2: yield [tuple(lst)] else: if len(lst) % 2: for i in (None, True): if i not in lst: lst = lst + [i] PAD = i break else: while chr(i) in lst: i += 1 PAD = chr(i) lst = lst + [PAD] else: PAD = False a = lst[0] for i in range(1, len(lst)): pair = (a, lst[i]) for rest in smichr(lst[1:i] + lst[i+1:]): rv = [pair] + rest if PAD is not False: for i, t in enumerate(rv): if PAD in t: rv[i] = (t[0],) break yield rv def adeel_zafar(X): L = list(X) if len(L) == 2: yield [(L[0], L[1])] else: first = L.pop(0) for i, e in enumerate(L): second = L.pop(i) for list_of_pairs in adeel_zafar(L): list_of_pairs.insert(0, (first, second)) yield list_of_pairs L.insert(i, second) L.insert(0, first) if __name__ =="__main__": import timeit import pprint candidates = dict(tokland=tokland, gatoatigrado=gatoatigrado, shang=shang, smichr=smichr, adeel_zafar=adeel_zafar) for i in range(1,7): results = [ frozenset([frozenset(x) for x in candidate(range(i*2))]) for candidate in candidates.values() ] assert len(frozenset(results)) == 1 print("Times for getting all permutations of sets of 
unordered pairs consisting of two draws from a 6-element deck until it is empty") times = dict([(k, timeit.timeit('list({0}(range(6)))'.format(k), setup="from __main__ import {0}".format(k), number=10000)) for k in candidates.keys()]) pprint.pprint([(k, "{0:.3g}".format(v)) for k,v in OrderedDict(sorted(times.items(), key=lambda t: t[1])).items()]) print("Times for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty") times = dict([(k, timeit.timeit('list(islice({0}(range(52)), 800))'.format(k), setup="from itertools import islice; from __main__ import {0}".format(k), number=100)) for k in candidates.keys()]) pprint.pprint([(k, "{0:.3g}".format(v)) for k,v in OrderedDict(sorted(times.items(), key=lambda t: t[1])).items()]) """ print("The 10000th permutations of the previous series:") gens = dict([(k,v(range(52))) for k,v in candidates.items()]) tenthousands = dict([(k, list(itertools.islice(permutations, 10000))[-1]) for k,permutations in gens.items()]) for pair in tenthousands.items(): print(pair[0]) print(pair[1]) """ They all seem to generate the exact same order, so the sets aren't necessary, but this way it's future proof. I experimented a bit with the Python 3 conversion, it is not always clear where to construct the list, but I tried some alternatives and chose the fastest. 
Here are the benchmark results: % echo "pypy"; pypy all_pairs.py; echo "python2"; python all_pairs.py; echo "python3"; python3 all_pairs.py pypy Times for getting all permutations of sets of unordered pairs consisting of two draws from a 6-element deck until it is empty [('gatoatigrado', '0.0626'), ('adeel_zafar', '0.125'), ('smichr', '0.149'), ('shang', '0.2'), ('tokland', '0.27')] Times for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty [('gatoatigrado', '0.29'), ('adeel_zafar', '0.411'), ('smichr', '0.464'), ('shang', '0.493'), ('tokland', '0.553')] python2 Times for getting all permutations of sets of unordered pairs consisting of two draws from a 6-element deck until it is empty [('gatoatigrado', '0.344'), ('adeel_zafar', '0.374'), ('smichr', '0.396'), ('shang', '0.495'), ('tokland', '0.675')] Times for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty [('adeel_zafar', '0.773'), ('shang', '0.823'), ('smichr', '0.841'), ('tokland', '0.948'), ('gatoatigrado', '1.38')] python3 Times for getting all permutations of sets of unordered pairs consisting of two draws from a 6-element deck until it is empty [('gatoatigrado', '0.385'), ('adeel_zafar', '0.419'), ('smichr', '0.433'), ('shang', '0.562'), ('tokland', '0.837')] Times for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty [('smichr', '0.783'), ('shang', '0.81'), ('adeel_zafar', '0.835'), ('tokland', '0.969'), ('gatoatigrado', '1.3')] % pypy --version Python 2.7.12 (5.6.0+dfsg-0~ppa2~ubuntu16.04, Nov 11 2016, 16:31:26) [PyPy 5.6.0 with GCC 5.4.0 20160609] % python3 --version Python 3.5.2 So I say, go with gatoatigrado's solution. 
A:
def f(l):
    if l == []:
        yield []
        return
    ll = l[1:]
    for j in range(len(ll)):
        for end in f(ll[:j] + ll[j+1:]):
            yield [(l[0], ll[j])] + end

Usage: for x in f([0,1,2,3,4,5]): print x >>> [(0, 1), (2, 3), (4, 5)] [(0, 1), (2, 4), (3, 5)] [(0, 1), (2, 5), (3, 4)] [(0, 2), (1, 3), (4, 5)] [(0, 2), (1, 4), (3, 5)] [(0, 2), (1, 5), (3, 4)] [(0, 3), (1, 2), (4, 5)] [(0, 3), (1, 4), (2, 5)] [(0, 3), (1, 5), (2, 4)] [(0, 4), (1, 2), (3, 5)] [(0, 4), (1, 3), (2, 5)] [(0, 4), (1, 5), (2, 3)] [(0, 5), (1, 2), (3, 4)] [(0, 5), (1, 3), (2, 4)] [(0, 5), (1, 4), (2, 3)] A: L = [1, 1, 2, 3, 4] answer = [] for i in range(len(L)): for j in range(i+1, len(L)): if (L[i],L[j]) not in answer: answer.append((L[i],L[j])) print answer [(1, 1), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] Hope this helps A: Hope this will help: L = [0, 1, 2, 3, 4, 5] [(i,j) for i in L for j in L] output: [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 0), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (4, 0), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (5, 0), (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)] A: This code works when the length of the list is not a multiple of 2; it employs a hack to make it work. Perhaps there are better ways to do this... It also ensures that the pairs are always in a tuple and that it works whether the input is a list or tuple. def all_pairs(lst): """Return all combinations of pairs of items of ``lst`` where order within the pair and order of pairs does not matter. Examples ======== >>> for i in range(6): ... list(all_pairs(range(i))) ...
[[()]] [[(0,)]] [[(0, 1)]] [[(0, 1), (2,)], [(0, 2), (1,)], [(0,), (1, 2)]] [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)]] [[(0, 1), (2, 3), (4,)], [(0, 1), (2, 4), (3,)], [(0, 1), (2,), (3, 4)], [(0, 2) , (1, 3), (4,)], [(0, 2), (1, 4), (3,)], [(0, 2), (1,), (3, 4)], [(0, 3), (1, 2) , (4,)], [(0, 3), (1, 4), (2,)], [(0, 3), (1,), (2, 4)], [(0, 4), (1, 2), (3,)], [(0, 4), (1, 3), (2,)], [(0, 4), (1,), (2, 3)], [(0,), (1, 2), (3, 4)], [(0,), (1, 3), (2, 4)], [(0,), (1, 4), (2, 3)]] Note that when the list has an odd number of items, one of the pairs will be a singleton. References ========== http://stackoverflow.com/questions/5360220/ how-to-split-a-list-into-pairs-in-all-possible-ways """ if not lst: yield [tuple()] elif len(lst) == 1: yield [tuple(lst)] elif len(lst) == 2: yield [tuple(lst)] else: if len(lst) % 2: for i in (None, True): if i not in lst: lst = list(lst) + [i] PAD = i break else: while chr(i) in lst: i += 1 PAD = chr(i) lst = list(lst) + [PAD] else: PAD = False a = lst[0] for i in range(1, len(lst)): pair = (a, lst[i]) for rest in all_pairs(lst[1:i] + lst[i+1:]): rv = [pair] + rest if PAD is not False: for i, t in enumerate(rv): if PAD in t: rv[i] = (t[0],) break yield rv A: I'm adding in my own contribution, which builds on the great solutions provided by @shang and @tokland. My problem was that in a group of 12, I wanted to also see all the possible pairs when your pair size does not divide perfectly with the group size. For instance, for an input list size of 12, I want to see all possible pairs with 5 elements. 
This snip of code and small modification should address that issue: import itertools def generate_groups(lst, n): if not lst: yield [] else: if len(lst) % n == 0: for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)): for groups in generate_groups([x for x in lst if x not in group], n): yield [group] + groups else: for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)): group2 = [x for x in lst if x not in group] for grp in (((group2[0],) + xs2) for xs2 in itertools.combinations(group2[1:], n-1)): yield [group] + [grp] Thus, in this case, if I run the following snip of code, I get the results below. The final snip of code is a sanity check that I have no overlapping elements. i = 0 for x in generate_groups([1,2,3,4,5,6,7,8,9,10,11,12], 5): print(x) for elem in x[0]: if elem in x[1]: print('break') break >>> [(1, 2, 3, 4, 5), (6, 7, 8, 9, 10)] [(1, 2, 3, 4, 5), (6, 7, 8, 9, 11)] [(1, 2, 3, 4, 5), (6, 7, 8, 9, 12)] [(1, 2, 3, 4, 5), (6, 7, 8, 10, 11)] [(1, 2, 3, 4, 5), (6, 7, 8, 10, 12)] [(1, 2, 3, 4, 5), (6, 7, 8, 11, 12)] [(1, 2, 3, 4, 5), (6, 7, 9, 10, 11)] ...
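As a sanity check on the answers above, the recursive even-length generator (the one from shang's answer) can be tested against the question's closed-form count 1*3*5*...*(N-1). The generator below is copied from that answer, not new logic:

```python
from math import prod

def all_pairs(lst):
    # Even-length recursive generator from the answer above.
    if len(lst) < 2:
        yield []
        return
    a = lst[0]
    for i in range(1, len(lst)):
        pair = (a, lst[i])
        for rest in all_pairs(lst[1:i] + lst[i + 1:]):
            yield [pair] + rest

# For even N, the number of perfect pairings is the double factorial
# 1 * 3 * 5 * ... * (N - 1).
for n in (2, 4, 6, 8):
    count = sum(1 for _ in all_pairs(list(range(n))))
    print(n, count, prod(range(1, n, 2)))
```

For N = 6 both the enumeration and the formula give 15, matching the fifteen configurations listed in the answers.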
How to split a list into pairs in all possible ways
I have a list (say 6 elements for simplicity) L = [0, 1, 2, 3, 4, 5] and I want to chunk it into pairs in ALL possible ways. I show some configurations: [(0, 1), (2, 3), (4, 5)] [(0, 1), (2, 4), (3, 5)] [(0, 1), (2, 5), (3, 4)] and so on. Here (a, b) = (b, a) and the order of pairs is not important i.e. [(0, 1), (2, 3), (4, 5)] = [(0, 1), (4, 5), (2, 3)] The total number of such configurations is 1*3*5*...*(N-1) where N is the length of my list. How can I write a generator in Python that gives me all possible configurations for an arbitrary N?
[ "Take a look at itertools.combinations.\nmatt@stanley:~$ python\nPython 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) \n[GCC 4.4.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import itertools\n>>> list(itertools.combinations(range(6), 2))\n[(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)]\n\n", "I don't think there's any function in the standard library that does exactly what you need. Just using itertools.combinations can get you a list of all possible individual pairs, but doesn't actually solve the problem of all valid pair combinations.\nYou could solve this easily with:\nimport itertools\ndef all_pairs(lst):\n for p in itertools.permutations(lst):\n i = iter(p)\n yield zip(i,i)\n\nBut this will get you duplicates as it treats (a,b) and (b,a) as different, and also gives all orderings of pairs. In the end, I figured it's easier to code this from scratch than trying to filter the results, so here's the correct function.\ndef all_pairs(lst):\n if len(lst) < 2:\n yield []\n return\n if len(lst) % 2 == 1:\n # Handle odd length list\n for i in range(len(lst)):\n for result in all_pairs(lst[:i] + lst[i+1:]):\n yield result\n else:\n a = lst[0]\n for i in range(1,len(lst)):\n pair = (a,lst[i])\n for rest in all_pairs(lst[1:i]+lst[i+1:]):\n yield [pair] + rest\n\nIt's recursive, so it will run into stack issues with a long list, but otherwise does what you need.\n>>> for x in all_pairs([0,1,2,3,4,5]):\n print x\n\n[(0, 1), (2, 3), (4, 5)]\n[(0, 1), (2, 4), (3, 5)]\n[(0, 1), (2, 5), (3, 4)]\n[(0, 2), (1, 3), (4, 5)]\n[(0, 2), (1, 4), (3, 5)]\n[(0, 2), (1, 5), (3, 4)]\n[(0, 3), (1, 2), (4, 5)]\n[(0, 3), (1, 4), (2, 5)]\n[(0, 3), (1, 5), (2, 4)]\n[(0, 4), (1, 2), (3, 5)]\n[(0, 4), (1, 3), (2, 5)]\n[(0, 4), (1, 5), (2, 3)]\n[(0, 5), (1, 2), (3, 4)]\n[(0, 5), (1, 3), (2, 4)]\n[(0, 5), (1, 4), (2, 3)]\n", "How about this:\nitems = [\"me\", \"you\", 
\"him\"]\n[(items[i],items[j]) for i in range(len(items)) for j in range(i+1, len(items))]\n\n[('me', 'you'), ('me', 'him'), ('you', 'him')]\n\nor \nitems = [1, 2, 3, 5, 6]\n[(items[i],items[j]) for i in range(len(items)) for j in range(i+1, len(items))]\n\n[(1, 2), (1, 3), (1, 5), (1, 6), (2, 3), (2, 5), (2, 6), (3, 5), (3, 6), (5, 6)]\n\n", "Conceptually similar to @shang's answer, but it does not assume that groups are of size 2:\nimport itertools\n\ndef generate_groups(lst, n):\n if not lst:\n yield []\n else:\n for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)):\n for groups in generate_groups([x for x in lst if x not in group], n):\n yield [group] + groups\n\npprint(list(generate_groups([0, 1, 2, 3, 4, 5], 2)))\n\nThis yields:\n[[(0, 1), (2, 3), (4, 5)],\n [(0, 1), (2, 4), (3, 5)],\n [(0, 1), (2, 5), (3, 4)],\n [(0, 2), (1, 3), (4, 5)],\n [(0, 2), (1, 4), (3, 5)],\n [(0, 2), (1, 5), (3, 4)],\n [(0, 3), (1, 2), (4, 5)],\n [(0, 3), (1, 4), (2, 5)],\n [(0, 3), (1, 5), (2, 4)],\n [(0, 4), (1, 2), (3, 5)],\n [(0, 4), (1, 3), (2, 5)],\n [(0, 4), (1, 5), (2, 3)],\n [(0, 5), (1, 2), (3, 4)],\n [(0, 5), (1, 3), (2, 4)],\n [(0, 5), (1, 4), (2, 3)]]\n\n", "My boss is probably not going to be happy I spent a little time on this fun problem, but here's a nice solution that doesn't need recursion, and uses itertools.product. It's explained in the docstring :). The results seem OK, but I haven't tested it too much.\nimport itertools\n\n\ndef all_pairs(lst):\n \"\"\"Generate all sets of unique pairs from a list `lst`.\n\n This is equivalent to all _partitions_ of `lst` (considered as an indexed\n set) which have 2 elements in each partition.\n\n Recall how we compute the total number of such partitions. Starting with\n a list\n\n [1, 2, 3, 4, 5, 6]\n\n one takes off the first element, and chooses its pair [from any of the\n remaining 5]. 
For example, we might choose our first pair to be (1, 4).\n Then, we take off the next element, 2, and choose which element it is\n paired to (say, 3). So, there are 5 * 3 * 1 = 15 such partitions.\n\n That sounds like a lot of nested loops (i.e. recursion), because 1 could\n pick 2, in which case our next element is 3. But, if one abstracts \"what\n the next element is\", and instead just thinks of what index it is in the\n remaining list, our choices are static and can be aided by the\n itertools.product() function.\n \"\"\"\n N = len(lst)\n choice_indices = itertools.product(*[\n xrange(k) for k in reversed(xrange(1, N, 2)) ])\n\n for choice in choice_indices:\n # calculate the list corresponding to the choices\n tmp = lst[:]\n result = []\n for index in choice:\n result.append( (tmp.pop(0), tmp.pop(index)) )\n yield result\n\ncheers!\n", "A non-recursive function to find all the possible pairs where the order does not matter, i.e., (a,b) = (b,a)\ndef combinantorial(lst):\n count = 0\n index = 1\n pairs = []\n for element1 in lst:\n for element2 in lst[index:]:\n pairs.append((element1, element2))\n index += 1\n\n return pairs\n\nSince it is non-recursive you won't experience memory issues with long lists.\nExample of usage:\nmy_list = [1, 2, 3, 4, 5]\nprint(combinantorial(my_list))\n>>>\n[(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)]\n\n", "Try the following recursive generator function:\ndef pairs_gen(L):\n if len(L) == 2:\n yield [(L[0], L[1])]\n else:\n first = L.pop(0)\n for i, e in enumerate(L):\n second = L.pop(i)\n for list_of_pairs in pairs_gen(L):\n list_of_pairs.insert(0, (first, second))\n yield list_of_pairs\n L.insert(i, second)\n L.insert(0, first)\n\nExample usage:\n>>> for pairs in pairs_gen([0, 1, 2, 3, 4, 5]):\n... 
print pairs\n...\n[(0, 1), (2, 3), (4, 5)]\n[(0, 1), (2, 4), (3, 5)]\n[(0, 1), (2, 5), (3, 4)]\n[(0, 2), (1, 3), (4, 5)]\n[(0, 2), (1, 4), (3, 5)]\n[(0, 2), (1, 5), (3, 4)]\n[(0, 3), (1, 2), (4, 5)]\n[(0, 3), (1, 4), (2, 5)]\n[(0, 3), (1, 5), (2, 4)]\n[(0, 4), (1, 2), (3, 5)]\n[(0, 4), (1, 3), (2, 5)]\n[(0, 4), (1, 5), (2, 3)]\n[(0, 5), (1, 2), (3, 4)]\n[(0, 5), (1, 3), (2, 4)]\n[(0, 5), (1, 4), (2, 3)]\n\n", "I made a small test suite for all the compliant solutions. I had to change the functions a bit to get them to work in Python 3. Interestingly, the fastest function in PyPy is the slowest function in Python 2/3 in some cases.\nimport itertools \nimport time\nfrom collections import OrderedDict\n\ndef tokland_org(lst, n):\n if not lst:\n yield []\n else:\n for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)):\n for groups in tokland_org([x for x in lst if x not in group], n):\n yield [group] + groups\n\ntokland = lambda x: tokland_org(x, 2)\n\ndef gatoatigrado(lst):\n N = len(lst)\n choice_indices = itertools.product(*[\n range(k) for k in reversed(range(1, N, 2)) ])\n\n for choice in choice_indices:\n # calculate the list corresponding to the choices\n tmp = list(lst)\n result = []\n for index in choice:\n result.append( (tmp.pop(0), tmp.pop(index)) )\n yield result\n\ndef shang(X):\n lst = list(X)\n if len(lst) < 2:\n yield lst\n return\n a = lst[0]\n for i in range(1,len(lst)):\n pair = (a,lst[i])\n for rest in shang(lst[1:i]+lst[i+1:]):\n yield [pair] + rest\n\ndef smichr(X):\n lst = list(X)\n if not lst:\n yield [tuple()]\n elif len(lst) == 1:\n yield [tuple(lst)]\n elif len(lst) == 2:\n yield [tuple(lst)]\n else:\n if len(lst) % 2:\n for i in (None, True):\n if i not in lst:\n lst = lst + [i]\n PAD = i\n break\n else:\n while chr(i) in lst:\n i += 1\n PAD = chr(i)\n lst = lst + [PAD]\n else:\n PAD = False\n a = lst[0]\n for i in range(1, len(lst)):\n pair = (a, lst[i])\n for rest in smichr(lst[1:i] + lst[i+1:]):\n rv = [pair] + 
rest\n if PAD is not False:\n for i, t in enumerate(rv):\n if PAD in t:\n rv[i] = (t[0],)\n break\n yield rv\n\ndef adeel_zafar(X):\n L = list(X)\n if len(L) == 2:\n yield [(L[0], L[1])]\n else:\n first = L.pop(0)\n for i, e in enumerate(L):\n second = L.pop(i)\n for list_of_pairs in adeel_zafar(L):\n list_of_pairs.insert(0, (first, second))\n yield list_of_pairs\n L.insert(i, second)\n L.insert(0, first)\n\nif __name__ ==\"__main__\":\n import timeit\n import pprint\n\n candidates = dict(tokland=tokland, gatoatigrado=gatoatigrado, shang=shang, smichr=smichr, adeel_zafar=adeel_zafar)\n\n for i in range(1,7):\n results = [ frozenset([frozenset(x) for x in candidate(range(i*2))]) for candidate in candidates.values() ]\n assert len(frozenset(results)) == 1\n\n print(\"Times for getting all permutations of sets of unordered pairs consisting of two draws from a 6-element deck until it is empty\")\n times = dict([(k, timeit.timeit('list({0}(range(6)))'.format(k), setup=\"from __main__ import {0}\".format(k), number=10000)) for k in candidates.keys()])\n pprint.pprint([(k, \"{0:.3g}\".format(v)) for k,v in OrderedDict(sorted(times.items(), key=lambda t: t[1])).items()])\n\n print(\"Times for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty\")\n times = dict([(k, timeit.timeit('list(islice({0}(range(52)), 800))'.format(k), setup=\"from itertools import islice; from __main__ import {0}\".format(k), number=100)) for k in candidates.keys()])\n pprint.pprint([(k, \"{0:.3g}\".format(v)) for k,v in OrderedDict(sorted(times.items(), key=lambda t: t[1])).items()])\n\n \"\"\"\n print(\"The 10000th permutations of the previous series:\")\n gens = dict([(k,v(range(52))) for k,v in candidates.items()])\n tenthousands = dict([(k, list(itertools.islice(permutations, 10000))[-1]) for k,permutations in gens.items()])\n for pair in tenthousands.items():\n print(pair[0])\n print(pair[1])\n \"\"\"\n\nThey all seem 
to generate the exact same order, so the sets aren't necessary, but this way it's future proof. I experimented a bit with the Python 3 conversion, it is not always clear where to construct the list, but I tried some alternatives and chose the fastest.\nHere are the benchmark results:\n% echo \"pypy\"; pypy all_pairs.py; echo \"python2\"; python all_pairs.py; echo \"python3\"; python3 all_pairs.py\npypy\nTimes for getting all permutations of sets of unordered pairs consisting of two draws from a 6-element deck until it is empty\n[('gatoatigrado', '0.0626'),\n ('adeel_zafar', '0.125'),\n ('smichr', '0.149'),\n ('shang', '0.2'),\n ('tokland', '0.27')]\nTimes for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty\n[('gatoatigrado', '0.29'),\n ('adeel_zafar', '0.411'),\n ('smichr', '0.464'),\n ('shang', '0.493'),\n ('tokland', '0.553')]\npython2\nTimes for getting all permutations of sets of unordered pairs consisting of two draws from a 6-element deck until it is empty\n[('gatoatigrado', '0.344'),\n ('adeel_zafar', '0.374'),\n ('smichr', '0.396'),\n ('shang', '0.495'),\n ('tokland', '0.675')]\nTimes for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty\n[('adeel_zafar', '0.773'),\n ('shang', '0.823'),\n ('smichr', '0.841'),\n ('tokland', '0.948'),\n ('gatoatigrado', '1.38')]\npython3\nTimes for getting all permutations of sets of unordered pairs consisting of two draws from a 6-element deck until it is empty\n[('gatoatigrado', '0.385'),\n ('adeel_zafar', '0.419'),\n ('smichr', '0.433'),\n ('shang', '0.562'),\n ('tokland', '0.837')]\nTimes for getting the first 2000 permutations of sets of unordered pairs consisting of two draws from a 52-element deck until it is empty\n[('smichr', '0.783'),\n ('shang', '0.81'),\n ('adeel_zafar', '0.835'),\n ('tokland', '0.969'),\n ('gatoatigrado', '1.3')]\n% pypy --version\nPython 
2.7.12 (5.6.0+dfsg-0~ppa2~ubuntu16.04, Nov 11 2016, 16:31:26)\n[PyPy 5.6.0 with GCC 5.4.0 20160609]\n% python3 --version\nPython 3.5.2\n\nSo I say, go with gatoatigrado's solution.\n", "def f(l):\n if l == []:\n yield []\n return\n ll = l[1:]\n for j in range(len(ll)):\n for end in f(ll[:j] + ll[j+1:]):\n yield [(l[0], ll[j])] + end\n\nUsage:\nfor x in f([0,1,2,3,4,5]):\n print x\n\n>>> \n[(0, 1), (2, 3), (4, 5)]\n[(0, 1), (2, 4), (3, 5)]\n[(0, 1), (2, 5), (3, 4)]\n[(0, 2), (1, 3), (4, 5)]\n[(0, 2), (1, 4), (3, 5)]\n[(0, 2), (1, 5), (3, 4)]\n[(0, 3), (1, 2), (4, 5)]\n[(0, 3), (1, 4), (2, 5)]\n[(0, 3), (1, 5), (2, 4)]\n[(0, 4), (1, 2), (3, 5)]\n[(0, 4), (1, 3), (2, 5)]\n[(0, 4), (1, 5), (2, 3)]\n[(0, 5), (1, 2), (3, 4)]\n[(0, 5), (1, 3), (2, 4)]\n[(0, 5), (1, 4), (2, 3)]\n\n", "L = [1, 1, 2, 3, 4]\nanswer = []\nfor i in range(len(L)):\n for j in range(i+1, len(L)):\n if (L[i],L[j]) not in answer:\n answer.append((L[i],L[j]))\n\nprint answer\n[(1, 1), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]\n\nHope this helps\n", "Hope this will help:\n\nL = [0, 1, 2, 3, 4, 5]\n[(i,j) for i in L for j in L]\n\noutput:\n[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 0), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (4, 0), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (5, 0), (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)]\n", "This code works when the length of the list is not a multiple of 2; it employs a hack to make it work. Perhaps there are better ways to do this...It also ensures that the pairs are always in a tuple and that it works whether the input is a list or tuple.\ndef all_pairs(lst):\n \"\"\"Return all combinations of pairs of items of ``lst`` where order\n within the pair and order of pairs does not matter.\n\n Examples\n ========\n\n >>> for i in range(6):\n ... 
list(all_pairs(range(i)))\n ...\n [[()]]\n [[(0,)]]\n [[(0, 1)]]\n [[(0, 1), (2,)], [(0, 2), (1,)], [(0,), (1, 2)]]\n [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)]]\n [[(0, 1), (2, 3), (4,)], [(0, 1), (2, 4), (3,)], [(0, 1), (2,), (3, 4)], [(0, 2)\n , (1, 3), (4,)], [(0, 2), (1, 4), (3,)], [(0, 2), (1,), (3, 4)], [(0, 3), (1, 2)\n , (4,)], [(0, 3), (1, 4), (2,)], [(0, 3), (1,), (2, 4)], [(0, 4), (1, 2), (3,)],\n [(0, 4), (1, 3), (2,)], [(0, 4), (1,), (2, 3)], [(0,), (1, 2), (3, 4)], [(0,),\n (1, 3), (2, 4)], [(0,), (1, 4), (2, 3)]]\n\n Note that when the list has an odd number of items, one of the\n pairs will be a singleton.\n\n References\n ==========\n\n http://stackoverflow.com/questions/5360220/\n how-to-split-a-list-into-pairs-in-all-possible-ways\n\n \"\"\"\n if not lst:\n yield [tuple()]\n elif len(lst) == 1:\n yield [tuple(lst)]\n elif len(lst) == 2:\n yield [tuple(lst)]\n else:\n if len(lst) % 2:\n for i in (None, True):\n if i not in lst:\n lst = list(lst) + [i]\n PAD = i\n break\n else:\n while chr(i) in lst:\n i += 1\n PAD = chr(i)\n lst = list(lst) + [PAD]\n else:\n PAD = False\n a = lst[0]\n for i in range(1, len(lst)):\n pair = (a, lst[i])\n for rest in all_pairs(lst[1:i] + lst[i+1:]):\n rv = [pair] + rest\n if PAD is not False:\n for i, t in enumerate(rv):\n if PAD in t:\n rv[i] = (t[0],)\n break\n yield rv\n\n", "I'm adding in my own contribution, which builds on the great solutions provided by @shang and @tokland. My problem was that in a group of 12, I wanted to also see all the possible pairs when your pair size does not divide perfectly with the group size. 
For instance, for an input list size of 12, I want to see all possible pairs with 5 elements.\nThis snip of code and small modification should address that issue:\nimport itertools\n\ndef generate_groups(lst, n):\n if not lst:\n yield []\n else:\n \n if len(lst) % n == 0:\n \n \n for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)):\n for groups in generate_groups([x for x in lst if x not in group], n):\n yield [group] + groups\n \n else:\n \n for group in (((lst[0],) + xs) for xs in itertools.combinations(lst[1:], n-1)):\n group2 = [x for x in lst if x not in group]\n for grp in (((group2[0],) + xs2) for xs2 in itertools.combinations(group2[1:], n-1)):\n yield [group] + [grp]\n\nThus, in this case, if I run the following snip of code, I get the results below. The final snip of code is a sanity check that I have no overlapping elements.\ni = 0\nfor x in generate_groups([1,2,3,4,5,6,7,8,9,10,11,12], 5):\n print(x)\n for elem in x[0]:\n if elem in x[1]:\n print('break')\n break\n\n>>>\n[(1, 2, 3, 4, 5), (6, 7, 8, 9, 10)]\n[(1, 2, 3, 4, 5), (6, 7, 8, 9, 11)]\n[(1, 2, 3, 4, 5), (6, 7, 8, 9, 12)]\n[(1, 2, 3, 4, 5), (6, 7, 8, 10, 11)]\n[(1, 2, 3, 4, 5), (6, 7, 8, 10, 12)]\n[(1, 2, 3, 4, 5), (6, 7, 8, 11, 12)]\n[(1, 2, 3, 4, 5), (6, 7, 9, 10, 11)]\n...\n\n" ]
[ 149, 61, 28, 17, 8, 8, 6, 4, 3, 2, 2, 1, 0 ]
[ "Not the most efficient or fastest, but probably the easiest. The last line is a simple way to dedupe a list in python. In this case, pairs like (0,1) and (1,0) are in the output. Not sure if you'd consider those duplicates or not.\nl = [0, 1, 2, 3, 4, 5]\npairs = []\nfor x in l:\n for y in l:\n pairs.append((x,y))\npairs = list(set(pairs))\nprint(pairs)\n\nOutput:\n[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 0), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (4, 0), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (5, 0), (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)]\n\n" ]
[ -2 ]
[ "python" ]
stackoverflow_0005360220_python.txt
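The recursive generators in the answers above are written for Python 2 (`print x`, `xrange`). A minimal Python 3 rendering of the same pair-partition idea — assuming an even-length input, as in the original six-element examples — might look like this:

```python
# Python 3 sketch of the recursive pair-partition generator discussed
# above: fix the first element, pair it with each remaining element,
# and recurse on what is left.  Assumes len(lst) is even.
def all_pairs(lst):
    if len(lst) < 2:
        yield []
        return
    a, rest = lst[0], lst[1:]
    for i, b in enumerate(rest):
        pair = (a, b)
        for others in all_pairs(rest[:i] + rest[i + 1:]):
            yield [pair] + others

for grouping in all_pairs([0, 1, 2, 3, 4, 5]):
    print(grouping)
```

For a list of 2n elements this yields (2n-1)·(2n-3)···1 groupings — 15 for six elements, in the same order as the outputs shown in the answers.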
Q: Can I use python to replace bmp file header with icon file header? Join icon header to bmp image data and create new icon. The bmp and icon are 72x72 256 color. Using a hex editor to view the headers I tried to splice these files in the correct place. It seems there are read errors anytime I try to read data from a non-text file. from PyQt5.QtCore import QFile # get icon header a = open("images/brown.ico") s = a.read() a.seek(0) a.write(s[:61]) a.close() # get image data from bmp a = QFile("images/football.bmp") s = a.read() a.seek(0) a.write(s[53:]) a.close() # Python program to # demonstrate merging # of two files data = data2 = "" # Reading data from file1 with open("images/brown.ico") as fp: data = fp.read() # Reading data from file2 with open("images/football.bmp") as fp: data2 = fp.read() # Merging 2 files # To add the data of file2 # from next line data += data2 with open('deed.ico', 'w') as fp: fp.write(data) UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 30: character maps to A: import codecs # get 72x72 icon data with codecs.open("images/brown.ico", encoding='iso-8859-1') as fp: icon_data = fp.read() fp.close() # 72x72 bmp to convert to icon--- do not reduce bmp to 256 color with codecs.open("images/tiger.bmp", encoding='iso-8859-1') as fp: b = fp.read() fp.close() # Insert $FF every fourth byte pixdata = "" count = 3 for letter in b: zz = count % 3 if zz == 0: pixdata = pixdata + chr(255) pixdata = pixdata + letter count -= 1 pixdata = pixdata + chr(255) # first 72 bytes of pixdata are garbage now # connect icon header, bmp image data, and icon footer package = ''.join((icon_data[:62], pixdata[73:], icon_data[20798:])) # Save as new icon with codecs.open('images/new_icon.ico', 'w', encoding='iso-8859-1') as fp: fp.write(package) fp.close()`
Can I use python to replace bmp file header with icon file header?
Join icon header to bmp image data and create new icon. The bmp and icon are 72x72 256 color. Using a hex editor to view the headers I tried to splice these files in the correct place. It seems there are read errors anytime I try to read data from a non-text file. from PyQt5.QtCore import QFile # get icon header a = open("images/brown.ico") s = a.read() a.seek(0) a.write(s[:61]) a.close() # get image data from bmp a = QFile("images/football.bmp") s = a.read() a.seek(0) a.write(s[53:]) a.close() # Python program to # demonstrate merging # of two files data = data2 = "" # Reading data from file1 with open("images/brown.ico") as fp: data = fp.read() # Reading data from file2 with open("images/football.bmp") as fp: data2 = fp.read() # Merging 2 files # To add the data of file2 # from next line data += data2 with open('deed.ico', 'w') as fp: fp.write(data) UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 30: character maps to
[ "import codecs\n\n# get 72x72 icon data\nwith codecs.open(\"images/brown.ico\", encoding='iso-8859-1') as fp:\n icon_data = fp.read()\nfp.close()\n\n# 72x72 bmp to convert to icon--- do not reduce bmp to 256 color\nwith codecs.open(\"images/tiger.bmp\", encoding='iso-8859-1') as fp:\n b = fp.read()\nfp.close()\n\n# Insert $FF every fourth byte\npixdata = \"\"\ncount = 3\nfor letter in b:\n zz = count % 3\n if zz == 0:\n pixdata = pixdata + chr(255)\n pixdata = pixdata + letter\n count -= 1\npixdata = pixdata + chr(255)\n# first 72 bytes of pixdata are garbage now\n\n# connect icon header, bmp image data, and icon \nfooter\npackage = ''.join((icon_data[:62], pixdata[73:], icon_data[20798:]))\n\n# Save as new icon\nwith codecs.open('images/new_icon.ico', 'w', encoding='iso-8859-1') as fp:\n fp.write(package)\nfp.close()`\n\n" ]
[ 0 ]
[]
[]
[ "bmp", "file_io", "icons", "python" ]
stackoverflow_0074476953_bmp_file_io_icons_python.txt
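The latin-1 round-trip in the answer above works, but opening the files in binary mode avoids the UnicodeDecodeError altogether, since bytes objects slice and concatenate just like the decoded strings. A sketch under the same assumptions (62 icon-header bytes kept, 54 BMP-header bytes dropped — offsets taken from the question and answer, not verified against the actual files):

```python
ICON_HEADER_LEN = 62  # bytes kept from the existing icon (per the answer)
BMP_HEADER_LEN = 54   # BITMAPFILEHEADER (14) + BITMAPINFOHEADER (40)

def splice(icon_bytes: bytes, bmp_bytes: bytes) -> bytes:
    """Prepend the icon's header to the BMP's pixel data."""
    return icon_bytes[:ICON_HEADER_LEN] + bmp_bytes[BMP_HEADER_LEN:]

# Hypothetical usage with the files from the question:
# with open("images/brown.ico", "rb") as f:
#     icon = f.read()
# with open("images/football.bmp", "rb") as f:
#     bmp = f.read()
# with open("deed.ico", "wb") as f:
#     f.write(splice(icon, bmp))
```

Note that binary mode (`"rb"` / `"wb"`) sidesteps any codec entirely, which is why no `charmap` error can occur.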
Q: How to correctly import parent modules in submodules, while still being able to run them on their own and via main.py? My project has this file structure:
src
├── API
│   ├── API.py
│   └── __init__.py
├── DataBase
│   ├── CreateDB.py
│   ├── DB.py
│   ├── SpacyTags.py
│   ├── __init__.py
├── ML
│   ├── FeaturePipe.py
│   ├── Labeler.py
│   ├── Predictor.py
│   ├── Transformer.py
│   ├── ModelCreator.py
│   ├── ModelOptimizer.py
│   └── __init__.py
├── News
│   ├── CNBCSpider.py
│   ├── CNBC_parse_article.py
│   ├── HistNewsAPI.py
│   ├── NewsListener.py
│   ├── __init__.py
│   └── new_article_funcs.py
├── Stock
│   ├── HistStockAPI.py
│   ├── ISIN.py
│   ├── PortfolioListener.py
│   ├── UpdateEntries.py
│   ├── __init__.py
├── Testing
│   └── ArticleFeatureTesting.ipynb
├── Exceptions.py
├── __init__.py
├── config.py
└── main.py
Previously I ran my program via PyCharm and used imports like this (example from ModelCreator.py):
from src.ML.Labeler import Labeler
from src.ML.ModelOptimizer import optimize_model
from src.Stock.ISIN import ISIN_LIST
Now that I am migrating to my Raspberry Pi, I am unsure how to import, for example, src/config.py in src/DataBase/DB.py. I still want to be able to run main.py, which basically imports most of the modules and submodules, while also being able to run the submodules on their own for testing. I also don't like the idea of adding
import sys
sys.path.append(xyz)
to every file. So I was wondering if there is a cleaner option that allows all of this.
I've tried to use relative paths for the import, but this led to errors like ImportError: attempted relative import beyond top-level package. Most of the solutions I found didn't work for me, because I want to be able to import my modules in main.py, but still want to run them on their own. Thanks in advance. A: Unfortunately, I do not believe there is a more elegant solution. Perhaps it is a limitation of Python, as several tutorials and other SO threads suggest:
Geeks for Geeks
SO - How to properly import parent module/other submodules in Python
In this case, I would choose the solution that best fits your needs. Perhaps PYTHONPATH (or other env variables)? As I have found in the industry, sometimes there just isn't a better way and the "ugly" way is the only way.
How to correctly import parent modules in submodules, while still being able to run them on their own and via main.py?
My project has this file structure:
src
├── API
│   ├── API.py
│   └── __init__.py
├── DataBase
│   ├── CreateDB.py
│   ├── DB.py
│   ├── SpacyTags.py
│   ├── __init__.py
├── ML
│   ├── FeaturePipe.py
│   ├── Labeler.py
│   ├── Predictor.py
│   ├── Transformer.py
│   ├── ModelCreator.py
│   ├── ModelOptimizer.py
│   └── __init__.py
├── News
│   ├── CNBCSpider.py
│   ├── CNBC_parse_article.py
│   ├── HistNewsAPI.py
│   ├── NewsListener.py
│   ├── __init__.py
│   └── new_article_funcs.py
├── Stock
│   ├── HistStockAPI.py
│   ├── ISIN.py
│   ├── PortfolioListener.py
│   ├── UpdateEntries.py
│   ├── __init__.py
├── Testing
│   └── ArticleFeatureTesting.ipynb
├── Exceptions.py
├── __init__.py
├── config.py
└── main.py
Previously I ran my program via PyCharm and used imports like this (example from ModelCreator.py):
from src.ML.Labeler import Labeler
from src.ML.ModelOptimizer import optimize_model
from src.Stock.ISIN import ISIN_LIST
Now that I am migrating to my Raspberry Pi, I am unsure how to import, for example, src/config.py in src/DataBase/DB.py. I still want to be able to run main.py, which basically imports most of the modules and submodules, while also being able to run the submodules on their own for testing. I also don't like the idea of adding
import sys
sys.path.append(xyz)
to every file. So I was wondering if there is a cleaner option that allows all of this.
I've tried to use relative paths for the import, but this led to errors like
ImportError: attempted relative import beyond top-level package
Most of the solutions I found didn't work for me, because I want to be able to import my modules in main.py, but still want to run them on their own.
Thanks in advance.
[ "Unfortunately, I do not believe there is a more elegant solution. Perhaps it is a limitation of python as several tutorials and other SO threads say similar:\n\nGeeks for Geeks\nSO - How to properly import parent module/other submodules in Python\n\nIn this case, I would choose the solution that best fits your needs. Perhaps PYTHONPATH (or other env variables)?\nAs I have found in the industry, sometimes there just isn't a better way and the \"ugly\" way is the only way.\n" ]
[ 0 ]
[]
[]
[ "git_submodules", "import", "module", "python" ]
stackoverflow_0074564756_git_submodules_import_module_python.txt
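One way to make the PYTHONPATH idea from the answer above concrete, without editing an env variable by hand, is a small bootstrap each runnable submodule can call. This is only a sketch — the `add_project_root` helper and its two-levels-up assumption (a file in src/&lt;package&gt;/ sits two directories below the folder containing src) are illustrative, not from the original answer:

```python
import sys
from pathlib import Path

def add_project_root(file_path, levels_up=2):
    """Put the ancestor directory that contains `src` on sys.path.

    parents[0] is the file's own directory, so parents[levels_up]
    climbs `levels_up` directories above it.  For src/DataBase/DB.py,
    parents[2] is the project folder containing `src`, which makes
    `from src.config import ...` work when DB.py is run directly.
    """
    root = Path(file_path).resolve().parents[levels_up]
    if str(root) not in sys.path:
        sys.path.insert(0, str(root))
    return str(root)

# Hypothetically, at the top of src/DataBase/DB.py:
# add_project_root(__file__)
# from src.config import ...
```

An alternative that needs no bootstrap at all is running submodules as modules from the project root — `python -m src.DataBase.DB` — which keeps the `from src....` imports valid both when run standalone and when imported via main.py.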
Q: How can I convert a string type of special character into the original? I wanted to do maths calculations using an asterisk (*), but what if it is in string format? How can I convert it to normal? I tried 4 "*" 5 and, first of all, I was not even expecting it to multiply, as the operator is in string format, but it gave me an error. A: An asterisk in the form of a string variable cannot be converted directly to a mathematical operator; however, it is possible to take the string "*" and use it to perform multiplication by using an if statement. Let's say you are given some string variable 'operator', and two integer variables 'a' and 'b'. The code would look something like this:
if operator == "*":
    product = a * b
    print(product)
In your case, you would set the 'a' and 'b' variables to 4 and 5 respectively, meaning the example above would print 20 if the 'operator' variable was "*".
How can I convert a string type of special character into the original?
I wanted to do maths calculations using an asterisk (*), but what if it is in string format? How can I convert it to normal? I tried 4 "*" 5 and, first of all, I was not even expecting it to multiply, as the operator is in string format, but it gave me an error.
[ "An asterisk in the form of a string variable cannot be converted directly to a mathematical operator; however, it is possible to take the string \"*\" and use it to perform multiplication by using an if statement.\nLet's say you are given some string variable 'operator', and two integer variables 'a' and 'b'. The code would look something like this:\nif operator == \"*\":\n product = a * b\n print(product)\n\nIn your case, you would set the 'a' and 'b' variables to 4 and 5 respectively, meaning the example above would print 20 if the 'operator' variable was \"*\".\n" ]
[ 0 ]
[]
[]
[ "operators", "python", "special_characters" ]
stackoverflow_0074564587_operators_python_special_characters.txt
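The if-statement in the answer above generalizes neatly via the standard-library `operator` module: map each operator string to the function that implements it. A small sketch — the `calculate` helper and the set of supported symbols are illustrative choices, not part of the original answer:

```python
import operator

# Dispatch table: operator symbol (a string) -> implementing function.
OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}

def calculate(a, op, b):
    """Apply the operator named by the string `op` to a and b."""
    try:
        return OPS[op](a, b)
    except KeyError:
        raise ValueError(f"unsupported operator: {op!r}") from None

print(calculate(4, "*", 5))  # 20
```

This avoids a growing chain of if/elif branches and raises a clear error for symbols that were never mapped.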
Q: removing whitespace from dataframe titles I am trying to remove whitespace from the titles of columns on a dataframe.
my_df=pd.DataFrame({' name_1':[1, 2],' name_2':[3, 4],})
After some research, I've tried:
my_df.columns.map(lstrip())
df.columns.to_series().map(lstrip)
These both give: NameError: name 'lstrip' is not defined even though mystr.lstrip() works fine. How can I do this without getting the NameError? And why am I getting it? A: Try:
my_df.columns = my_df.columns.str.strip()
A: lstrip is a method of the str class, therefore lstrip() alone is going to produce that error while str.lstrip() or mystr.lstrip() (with mystr being a string) won't. So, you can use
my_df.columns.map(str.lstrip)
but because pandas has vectorized versions of the str methods under pandas.Series.str, it is considered more pythonic (pandastic?) to use my_df.columns.str.lstrip() instead. As noted by @saad_saeed in their comment, using .strip() instead of .lstrip() is recommended to eliminate both trailing and leading whitespaces, which are a common source of KeyError.
removing whitespace from dataframe titles
I am trying to remove whitespace from the titles of columns on a dataframe.
my_df=pd.DataFrame({' name_1':[1, 2],' name_2':[3, 4],})
After some research, I've tried:
my_df.columns.map(lstrip())
df.columns.to_series().map(lstrip)
These both give: NameError: name 'lstrip' is not defined even though mystr.lstrip() works fine. How can I do this without getting the NameError? And why am I getting it?
[ "Try:\nmy_df.columns = my_df.columns.str.strip()\n\n", "lstrip is a method of the str class, therefore lstrip() alone is going to produce that error while str.lstrip() or mystr.lstrip() (whit mystr being a string) won't.\nSo, you can use\nmy_df.columns.map(str.lstrip)\n\nbut because pandas has vecorized versions of the str methods under pandas.Series.str, is considered more pythonic (pandastic?) to use instead my_df.columns.str.lstrip().\nAs noted by @saad_saeed in their comment, using .strip() instead of .lstrip() is recommended to eliminate both trailing and leading whitespaces, which are a common source of KeyError.\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074564387_dataframe_python.txt
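The NameError in the question comes from referring to `lstrip` as if it were a free function; it only exists as a method of `str`, so it must be passed through the class as `str.lstrip`. The same idea in plain Python (no pandas) also shows why `str.strip` is usually the better choice for column names:

```python
cols = [" name_1", " name_2 "]

# map(lstrip, cols) raises NameError: lstrip is a str method, not a
# module-level function.  Referencing it through the class works:
lstripped = list(map(str.lstrip, cols))  # leading whitespace only
stripped = list(map(str.strip, cols))    # leading and trailing

print(lstripped)  # ['name_1', 'name_2 ']
print(stripped)   # ['name_1', 'name_2']
```

The pandas accessor `my_df.columns.str.strip()` is the vectorized equivalent of the `str.strip` mapping above.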