Dataset schema, per record: content (string, 85-101k chars); title (string, 0-150 chars); question (string, 15-48k chars); answers (list); answers_scores (list); non_answers (list); non_answers_scores (list); tags (list); name (string, 35-137 chars).
Q: Using Ordinal Variables as categories in XGBoost Python

I am trying to train a multi-class classifier using XGBoost. The data contains 4 independent variables which are ordinal in nature. I want to use these variables as-is because they are already encoded. The data looks like below:

Column name    Values
target         ['high', 'medium', 'low']
feature_1      Values ranging from 1-5
feature_2      Values ranging from 1-5
feature_3      Values ranging from 1-5
feature_4      Values ranging from 1-5

My code currently looks like this:

y = data['target']
X = data.drop(['target'], axis=1)
X = X.fillna(0)
X = X.astype('int').astype('category')

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.20,
                                                    random_state=random_state, stratify=y)

# Create instance of model
xgb_model = XGBClassifier()

# Create the random grid
xgb_grid = {'n_estimators': [int(x) for x in np.linspace(start=100, stop=500, num=5)],
            'max_depth': [3, 5, 8, 10],
            'learning_rate': [0.01, 0.05, 0.1, 0.2, 0.3]}

xgb_model_tuned = RandomizedSearchCV(estimator=xgb_model, param_distributions=xgb_grid,
                                     n_iter=50, cv=5, scoring='roc_auc', verbose=2,
                                     random_state=random_state, n_jobs=-1)

# Pass training data into model
xgb_model_tuned.fit(x_train, y_train)

I get the following error when I run this:

ValueError: DataFrame.dtypes for data must be int, float, bool or categorical. When categorical type is supplied, DMatrix parameter `enable_categorical` must be set to `True`. feature_1, feature_2, feature_3, feature_4

The dtype is category for all the variables. This worked well with RandomForestClassifier but not with XGBoost. If I cannot use the category datatype, how can I pass the ordinal variables as categories?

A: If you want them treated as ordinal, then just make the column type int: xgboost will make splits as though they were continuous, which preserves the ordered nature.

A: You are almost there! Based on the XGBoost documentation, you need to set enable_categorical=True, and the supported tree methods are gpu_hist, approx, and hist.

# Create instance of model
xgb_model = XGBClassifier(tree_method="gpu_hist", enable_categorical=True)

Also, ensure that your XGBoost version is 1.5 or above.
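A minimal sketch of the first answer's int-dtype approach, assuming the same column layout as the question; the frame contents here are hypothetical, not from the source:

import pandas as pd
from xgboost import XGBClassifier

# Hypothetical frame mirroring the question's layout
data = pd.DataFrame({'feature_1': [1, 3, 5, 2], 'feature_2': [2, 2, 4, 1],
                     'feature_3': [5, 1, 3, 3], 'feature_4': [4, 4, 2, 5],
                     'target': ['high', 'low', 'medium', 'high']})

y = data['target'].astype('category').cat.codes           # XGBClassifier wants integer labels
X = data.drop(['target'], axis=1).fillna(0).astype('int')  # plain int, no 'category' cast

XGBClassifier().fit(X, y)  # splits now respect the 1-5 ordering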
Answer scores: [1, 0]
Tags: classification, data_preprocessing, pandas, python, xgboost
Source: stackoverflow_0074478807_classification_data_preprocessing_pandas_python_xgboost.txt
Q: Side-by-side Labels in StackLayout: Why is second label missing? (kivy, python)

How can I display two labels side-by-side in a Kivy StackLayout? Consider the following code:

#!/usr/bin/env python3

from kivy.uix.button import Button
from kivy.lang import Builder
from kivy.app import App

KV = """
StackLayout:
    orientation: 'lr-tb'

    Label:
        text: "Hello"

    Label:
        text: "World"
"""

class MyApp(App):
    def build(self):
        return Builder.load_string(KV)

MyApp().run()

I'm trying to make two text labels appear side-by-side. Originally I was using a BoxLayout and a GridLayout, but I found that those would make the width of each widget correspond to the width of the app. Whereas I want:

- The first label to be only as wide as it needs to be for the text it contains
- The second label to be placed immediately next to the first label, where the only gap between the text is the layout's spacing

Unfortunately, the above code doesn't even display a second label -- it's just totally not there. Why? How can I display two labels right next to each other, without Kivy adding additional spacing or mysteriously not displaying my second label at all when using the StackLayout?

A: To make this work as expected, you have to:

- override size_hint to None and
- set the size of the widget to its texture_size (which is the actual pixels needed to render the font -- but you may actually want to pad this with some pixels)

For example:

#!/usr/bin/env python3

from kivy.uix.button import Button
from kivy.lang import Builder
from kivy.app import App

KV = """
StackLayout:
    orientation: 'lr-tb'

    Label:
        text: "Hello"
        size: self.texture_size
        size_hint: None, None

    Label:
        text: "World"
        size: self.texture_size
        size_hint: None, None
"""

class MyApp(App):
    def build(self):
        return Builder.load_string(KV)

MyApp().run()
Answer scores: [2]
Tags: kivy, kivy_language, layout, python, stacklayout
Source: stackoverflow_0074480862_kivy_kivy_language_layout_python_stacklayout.txt
Q: How to check if tuple having a list or dictionary is empty

I have a tuple:

details = ({}, [])

As there is no data in the tuple, I want to return a null response. For this I am writing:

if not details:
    return Response({})
else:
    print "Not null"

But this does not seem to work, as it always goes into the else part and prints "Not null". I am new to Python. Any help is appreciated.

A: Note: if you write:

if <expr>:
    pass

then Python will not check that <expr> == True, it will evaluate the truthiness of the <expr>. Objects have some sort of defined "truthiness" value. The truthiness of True and False are respectively True and False. For None, the truthiness is False; for numbers, usually the truthiness is True if and only if the number is different from zero; for collections (tuples, sets, dictionaries, lists, etc.), the truthiness is True if the collection contains at least one element. By default, custom classes always have True as truthiness, but by overriding __bool__ (or __len__), one can define custom rules.

The truthiness of a tuple is True given the tuple itself contains one or more items (and False otherwise). What these elements are is irrelevant.

In case you want to check that at least one of the items of the tuple has truthiness True, we can use any(..):

if not any(details):  # all items are empty
    return Response({})
else:
    print "Not null"

So from the moment the list contains at least one element, or the dictionary, or both, the else case will fire; otherwise the if body will fire.

If we want to check that all elements in the tuple have truthiness True, we can use all(..):

if not all(details):  # one or more items are empty
    return Response({})
else:
    print "Not null"

A: The accepted answer implies that any does not perform a deep search for truth. To demonstrate it, the double negation not not <expression> is handy:

not not []              # False
not not ([],)           # True
not not any(([],))      # False
not not any(([1],))     # True
not not any(([None],))  # Still True, as expected.
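As a small illustration of the __bool__ override mentioned in the first answer (a hypothetical wrapper class, not from the source; in Python 2, which this question targets, the hook is named __nonzero__):

class Details(object):
    def __init__(self, mapping, sequence):
        self.mapping = mapping
        self.sequence = sequence

    def __bool__(self):  # spelled __nonzero__ in Python 2
        # truthy only if either container actually holds data
        return bool(self.mapping) or bool(self.sequence)

print(bool(Details({}, [])))   # False
print(bool(Details({}, [1])))  # True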
Answer scores: [11, 0]
Tags: list, python, python_2.7, tuples
Source: stackoverflow_0048660923_list_python_python_2.7_tuples.txt
Q: Optuna, recover original study name to load .db file

I created a study to optimize a model with Optuna, which produced a .db file with the same name as the study_name. The problem is that I'm trying to load the results by using:

study = optuna.create_study(study_name=study_name,
                            storage=f"sqlite:///{results_folder}/results.db",
                            directions=["maximize", "maximize"],
                            load_if_exists=True)

but I renamed the original .db file, and I can't remember what its original name was (i.e., the original study_name value). I understand that I can rename the file and use the new file name in the "storage" argument when loading the study, but I should use the original value of study_name. In case I don't, I get a message saying:

[W 2022-08-31 14:55:26,962] Study instance does not contain completed trials.
[W 2022-08-31 14:55:26,964] Your study does not have any completed trials.

Is there any way I can get it back from the .db file?

A: Yep! The db file is query-able. Study names are stored in a table called "studies", so you can use your database interface of choice to query that table. However, if you're storing multiple study results in this one db file, you'll need to be able to remember which of the study names you find is actually the one you're after.

Here's a quick example of how to do it in Python, probably not the best code but still:

import sqlite3

con = sqlite3.connect("/path/to/your/file/results.db")
cur = con.cursor()
res = cur.execute("SELECT * FROM studies")
res.fetchall()
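As an alternative sketch that stays inside the Optuna API instead of raw SQL (assuming a reasonably recent Optuna version; the storage URL below is a placeholder for the renamed file):

import optuna

# Lists every study stored in the database, including its name
summaries = optuna.get_all_study_summaries(storage="sqlite:///results.db")
for summary in summaries:
    print(summary.study_name)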
Answer scores: [1]
Tags: optuna, python
Source: stackoverflow_0073556231_optuna_python.txt
Q: fast api stopping after a while on google cloud vm

I have an ML model with a FastAPI wrapper running on a Google Cloud VM. It runs fine while the SSH terminal is open, but once I close the terminal it runs for maybe 10 more minutes and then the API returns 502 Bad Gateway. I'm using nginx with this config:

server {
    listen 80;
    server_name: public ip;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}

Please let me know if there is any way I can fix this problem. I reran everything, still the same error.

A: When you close the SSH terminal session, the applications that you started will be killed. Use a program such as tmux, screen, etc. to create sessions that you can attach to and detach from.

However, since you are using Nginx, there are better methods of managing applications that are being proxied. For development, your current method is OK. For production, your proxied applications should be started and managed by the system as a service.
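A minimal sketch of the "run it as a service" suggestion, assuming the app is served with uvicorn; the unit name, user, paths, and module name here are all hypothetical:

# /etc/systemd/system/myapi.service
[Unit]
Description=FastAPI model server
After=network.target

[Service]
User=youruser
WorkingDirectory=/home/youruser/app
ExecStart=/home/youruser/app/venv/bin/uvicorn main:app --host 127.0.0.1 --port 8000
Restart=always

[Install]
WantedBy=multi-user.target

After writing the file, sudo systemctl daemon-reload and sudo systemctl enable --now myapi start the API and keep it running across SSH disconnects and reboots.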
Answer scores: [0]
Tags: fastapi, google_cloud_platform, nginx, python, python_3.x
Source: stackoverflow_0074481058_fastapi_google_cloud_platform_nginx_python_python_3.x.txt
Q: Video Recording with mss in python

I'm capturing my screen using OpenCV on Windows. It works fine, but I have heard mss is much faster than PIL. I have seen this code in a YouTube video but am unable to figure out how to save the frames to a .wav file or similar:

from mss import mss
import cv2
from PIL import Image
import numpy as np
from time import time

mon = {'top': 100, 'left': 200, 'width': 1600, 'height': 1024}
sct = mss()

while 1:
    begin_time = time()
    sct_img = sct.grab(mon)
    img = Image.frombytes('RGB', (sct_img.size.width, sct_img.size.height), sct_img.rgb)
    img_bgr = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
    cv2.imshow('test', np.array(img_bgr))
    print('This frame takes {} seconds.'.format(time() - begin_time))
    if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break

Credits

I tried different approaches writing the frames to an array but failed. Any answers and help are welcome.

A: Here is a basic example to get you started:

import cv2
import numpy as np
import mss
from time import time

width = 640
height = 400
fps = 25
frame_delta = 1 / fps

# part of the screen to capture
monitor = {"top": 10, "left": 10, "width": width, "height": height}

# open video writer
video = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))

with mss.mss() as sct:
    next_frame = time()

    while True:
        next_frame += frame_delta

        img = np.array(sct.grab(monitor))
        img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
        video.write(img)
        cv2.imshow("video", img)

        # calculate wait time to meet the defined fps
        wait_ms = max(int((next_frame - time()) * 1000), 1)

        if cv2.waitKey(wait_ms) != -1:
            break

cv2.destroyAllWindows()
video.release()
Answer scores: [0]
Tags: opencv, python, recording
Source: stackoverflow_0074365876_opencv_python_recording.txt
Q: Python: JSON File Format not printing out correctly

I'm trying to develop a parser that extracts data from a JSON-formatted file, but when I was testing reading the file and outputting its contents, it didn't print the data the way I expected. Disclaimer: this is my first time working with JSON, so please go easy. Here are the contents of the file (it's quite dense, so I'm only putting in a part of it, and some of the values are made up):

{
  "jobs" : [
    {
      "jobname" : "workload",
      "groupid" : 0,
      "eta" : 0,
      "elapsed" : 69,
      "job options" : {
        "bs" : "4k",
        "rw" : "randread"
      },
      "read" : {
        "io_bytes" : 2000,
        "bw" : 560,
        "slat_ns" : {
          "min" : 0,
          "max" : 0,
          "mean" : 0
        }
      }
    }
  ]
}

So now I have Python code that opens the JSON file and returns it as a dictionary, then iterates through the list:

import json

# Opening JSON file
f = open('workload.log')

# returns JSON object as a dictionary
data = json.load(f)

# Iterating through the json list
for i in data['jobs']:
    print(i)

# Closing file
f.close()

Here's the link to the code I found online: https://www.geeksforgeeks.org/read-json-file-using-python/

Now, from my understanding of how the JSON format works, I assumed that when I print the file contents, the output would be:

{'jobname': 'workload', 'groupid': 0, 'eta': 0, 'elapsed': 69, 'job options', 'read'}

I think 'job options' would be its own category, or at least be printed separately from 'jobname', 'groupid', etc. However, this is what I get instead:

{'jobname': 'workload', 'groupid': 0, 'eta': 0, 'elapsed': 69, 'job options': {'bs': '4k', 'rw': 'randread'}, 'read': {'io_bytes': 2000, 'bw': 560, 'slat_ns': {'min': 0, 'max': 0, 'mean': 0}}

There's a lot more data than that, but that's the gist of it. They are all printed on one line. Is the formatting wrong? I've used this code on other sample JSON formats and it works just fine. I feel like at least the "job options" and "read" sections in the file should be accessible through the data label like data['jobs']['job options'] or something. I want to figure out how to print out these sections separately.

A: Your code iterates over each job, and then prints that job, one per line. There's only one job in your example file, so you get one line of output.

Why would you expect the print statement to drop some of the data?

Those subkeys are available, exactly as you expect:

for job in data['jobs']:
    # job is now the entire job object you were printing
    print(job['job options'])
    # or make it pretty
    print(json.dumps(job['job options'], sort_keys=True, indent=2))

{
  "bs": "4k",
  "rw": "randread"
}
Answer scores: [0]
Tags: file, format, json, printing, python
Source: stackoverflow_0074482380_file_format_json_printing_python.txt
Q: Hello everyone! I am a beginner in programming and I decided to write a small program. And, as usually happens with beginners, I have a problem

My goal in this program is that when the user enters data, it gets written to the dictionary. In my case, only the last entry the user typed is recorded in the dictionary and displayed on the screen. I apologize in advance for not writing comments. The program is simple, but I still want to know what is wrong.

dictionary = {}

def make_album(artist_name, album_title, number_of_songs_in_album=None):
    all_info_here = {artist_name.title(): album_title.title()}
    if number_of_songs_in_album:
        all_info_here['number_of_songs_in_album'] = number_of_songs_in_album
    return all_info_here

while True:
    print("Please enter an artist and album's title:")
    print("(enter 'q' at any time to quit)")

    artist_name = input("\nEnter an artist: ")
    if artist_name == 'q':
        break
    album_title = input("Enter album's title: ")
    if album_title == 'q':
        break
    number_of_songs = input("Enter the number of songs in album "
                            "(if you don't know the number print '99'): ")
    if number_of_songs == 99:
        continue

    formatted_information = make_album(artist_name, album_title, number_of_songs)
    dictionary.update(formatted_information)

print(dictionary)

I looked for a solution on the internet but didn't find anything.

A:

from collections import defaultdict
dictionary = defaultdict(list)

def make_album(artist_name, album_title, number_of_songs_in_album=None):
    all_info_here = {'artist_name': artist_name, 'album_title': album_title}
    if number_of_songs_in_album:
        all_info_here['number_of_songs_in_album'] = number_of_songs_in_album
    return all_info_here

while True:
    print("Please enter an artist and album's title:")
    print("(enter 'q' at any time to quit)")

    artist_name = input("\nEnter an artist: ")
    if artist_name == 'q':
        break
    album_title = input("Enter album's title: ")
    if album_title == 'q':
        break
    number_of_songs = input("Enter the number of songs in album "
                            "(if you don't know the number print '99'): ")
    if number_of_songs == 99:
        continue

    # formatted_information = make_album(artist_name, album_title, number_of_songs)
    dictionary['artist_name'].append(artist_name)
    dictionary['album_title'].append(album_title)
    dictionary['number_of_songs_in_album'].append(number_of_songs)

print(dictionary)
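An alternative sketch that keeps each album's fields together by collecting the make_album() results in a list instead of merging everything into one dictionary. Note also that input() returns a string, so the sentinel check in the original loop needs to compare against '99', not the integer 99:

albums = []

while True:
    artist_name = input("\nEnter an artist ('q' to quit): ")
    if artist_name == 'q':
        break
    album_title = input("Enter album's title ('q' to quit): ")
    if album_title == 'q':
        break
    number_of_songs = input("Enter the number of songs ('99' if unknown): ")
    if number_of_songs == '99':  # input() gives a str, so compare to '99'
        number_of_songs = None

    albums.append(make_album(artist_name, album_title, number_of_songs))

print(albums)  # every entered album, not just the last one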
Answer scores: [0]
Tags: linux, pycharm, python, windows
Source: stackoverflow_0074482341_linux_pycharm_python_windows.txt
Q: Is there a way to be notified when a cell finishes execution in VSCode Jupyter Notebook?

I'm using Jupyter Notebook in VSCode and would like to be notified when a cell finishes execution. I searched and was not able to find any extension for this task. Is there a way to get this working?

A: You could play a sound at the end of your section after your code finishes. :-P

from playsound import playsound
playsound('/path/to/note.wav')  # .wav file
playsound('/path/to/note.mp3')  # .mp3 file

It's a way of creating an audio alert, if that suits your needs. You can borrow one of the audio alerts that come with whichever OS you are using.

If you are looking for a remote notification system, you could maybe email yourself or set up a Twilio account.

A: Crucially, nobody wants to be notified when each and every cell is done executing.

Rather, we want to be notified when a long-running cell finishes. So there should be a way to set a condition such that if a cell finishes running under a threshold of time, there's no sound alert, but cells that take a long time to run play the alert sound upon completion.

Otherwise your notebook will sound like an orchestra of unnecessary "false positives" playing audible alerts for short-running cells.

A: There are audio cues for

- Notebook Cell Completed
- Notebook Cell Failed

being added to VSCode; see "Implement Audio cues on cell execution completed".

They should be under the settings Audio Cues: Notebook Cell Completed and Audio Cues: Notebook Cell Failed.
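A minimal sketch of the threshold idea from the second answer, combining it with the playsound approach from the first; the alert path and the 30-second cutoff are placeholders:

import time
from playsound import playsound

ALERT_AFTER_S = 30   # only alert for cells slower than this
_start = time.time()

# ... the long-running cell body goes here ...

if time.time() - _start > ALERT_AFTER_S:
    playsound('/path/to/alert.wav')  # fires only for slow cells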
Answer scores: [2, 2, 2]
Tags: jupyter_notebook, python, python_3.x, visual_studio_code
Source: stackoverflow_0071317132_jupyter_notebook_python_python_3.x_visual_studio_code.txt
Q: What is an "instance method"?

From 3. Data model:

    Instance methods
    An instance method object combines a class, a class instance and any callable object (normally a user-defined function).

If it is a definition, what does it mean? If it is not a definition, what is the definition of an "instance method"?

Is an "instance method" the same concept as a method of a class?

Since someone brings up class methods and static methods, bound methods and unbound methods, let me clarify:

- I understand a method of a class can be an ordinary method, a class method, or a static method.
- I understand a method of a class accessed via the class or its instance can be bound or a function.
- I have never heard of "an instance method". I don't know what it is even after looking at the quote, and am not sure if it is related to an ordinary method, a class method, or a static method, or a bound method or function.

A:

>>> class Foo:
...     def im_a_method(self):
...         pass
...
>>> x = Foo()
>>> x.im_a_method
<bound method Foo.im_a_method of <__main__.Foo object at 0x7f4f1993dd30>>

Tada! That's an instance method object. It's the thing you get when you retrieve a method of an object, before you call it.

A: What is an instance method?

An instance method is a function that is bound to a class instance. The instance of the class is implicitly passed as the first argument to instance methods. It essentially belongs to that specific instance. An instance method is the "normal" type of method people use. This is opposed to a static method or class method created using staticmethod and classmethod respectively.

Here's an example of an instance method:

>>> class Class:
...     def method(self):
...         pass

>>> Class.method
<bound method Class.method of <Class object at 0x7f12781c5b70>>

It's that simple.

A: Your confusion comes from what exactly this definition is about. The term "instance method" is actually used to describe both the concept (a method that works on an instance - by opposition with a classmethod or staticmethod) and its technical implementation. The definition you quote is about the technical implementation.

If you want to understand the context of this definition, you can read this article in the Python wiki, which explains how Python turns functions into methods at runtime.

A: An instance method:

- can call instance and class variables and instance, class and static methods by self.
- can call class variables and instance, class and static methods by class name, but not instance variables.
- can be called by object.
- can also be called directly by class name, but when called directly by class name, we need to pass one argument to the instance method, because self becomes a normal parameter which doesn't have the ability to call instance and class variables and instance, class and static methods.
- needs self for the 1st argument, otherwise the instance method cannot be called by an object; the instance method can still be called directly by class name, and the name self is used by convention, so other names instead of self still work.

*I also explain @classmethod and @staticmethod in my answer for "@classmethod vs @staticmethod in Python".

For example, the instance method can call the instance and class variables and the instance, class and static methods by self, and it can call the class variable and the instance, class and static methods by class name (but not the instance variables), and it can be called by object, as shown below:

class Person:
    x = "Hello"
    def __init__(self, name):
        self.name = name

    def test1(self): # Instance method
        print(self.name) # Instance variable by "self"
        print(self.x) # Class variable by "self"
        self.test2() # Instance method by "self"
        self.test3() # Class method by "self"
        self.test4() # Static method by "self"

        print()

        print(Person.x) # Class variable by class name
        Person.test2("Test2") # Instance method by class name
        Person.test3() # Class method by class name
        Person.test4() # Static method by class name

    def test2(self):
        print("Test2")

    @classmethod
    def test3(cls):
        print("Test3")

    @staticmethod
    def test4():
        print("Test4")

obj = Person("John")
obj.test1() # By object

Output:

John  # Instance variable by "self"
Hello # Class variable by "self"
Test2 # Instance method by "self"
Test3 # Class method by "self"
Test4 # Static method by "self"

Hello # Class variable by class name
Test2 # Instance method by class name
Test3 # Class method by class name
Test4 # Static method by class name

And, if the instance method tries to call the instance variable by class name as shown below:

# ...

    def test1(self): # Instance method
        print(Person.name) # Instance variable by class name

obj = Person("John")
obj.test1()

The error below occurs:

AttributeError: type object 'Person' has no attribute 'name'

And, the instance method can also be called directly by class name, but when called directly by class name, we need to pass one argument to the instance method as shown below, because self becomes a normal parameter which doesn't have the ability to call the instance and class variables and the instance, class and static methods by self:

# ...

    def test1(self): # Instance method
        print(self)

# ...

Person.test1("Test1") # Here

Output:

Test1

So, if the instance method tries to call the instance and class variables and the instance, class and static methods by self as shown below:

# ...

    def test1(self): # Instance method
        print(self.name) # Instance variable or
        print(self.x) # Class variable or
        self.test2() # Instance method or
        self.test3() # Class method or
        self.test4() # Static method

# ...

Person.test1("Test1") # Here

The errors below occur, because again, self becomes a normal parameter which doesn't have the ability to call the instance and class variables and the instance, class and static methods:

AttributeError: 'str' object has no attribute 'name'
AttributeError: 'str' object has no attribute 'x'
AttributeError: 'str' object has no attribute 'test2'
AttributeError: 'str' object has no attribute 'test3'
AttributeError: 'str' object has no attribute 'test4'

And, if one argument is not passed to the instance method as shown below:

# ...

    def test1(self): # Instance method
        print(self)

# ...

Person.test1() # Here

The error below occurs:

TypeError: test1() missing 1 required positional argument: 'self'

And, the instance method needs self for the 1st argument, otherwise the instance method cannot be called by object as shown below:

# ...

    def test1(): # Without "self"
        print("Test1")

# ...

obj = Person("John")
obj.test1() # Here

Then, the error below occurs:

TypeError: test1() takes 0 positional arguments but 1 was given

But, the instance method without self can still be called directly by class name as shown below:

# ...

    def test1(): # Without "self"
        print("Test1")

# ...

Person.test1() # Here

Output:

Test1

And, the name self is used by convention in an instance method, so another name instead of self still works, as shown below:

# ...
    # Here
    def test1(orange): # Instance method
        print(orange.name) # Instance variable
        print(orange.x) # Class variable
        orange.test2() # Instance method
        orange.test3() # Class method
        orange.test4() # Static method

# ...

obj = Person("John")
obj.test1()

Output:

John
Hello
Test2
Test3
Test4
Answer scores: [4, 3, 2, 0]
Tags: python, python_3.x
Source: stackoverflow_0046230482_python_python_3.x.txt
Q: How can I put data from api to django

I have stored some data in Python and now I want to display it in Django. How can I do that?

animeUrl = "https://api.jikan.moe/v4/top/anime"
animeResponse = requests.get(animeUrl).json()

def topAnime():
    for idx, video in enumerate(animeResponse['data']):
        animeUrl = video['url']
        title = video['title']
        status = video['status']
        type = video['type']
        images = video['images']['jpg']['image_url']
        #if status == "Currently Airing":
        print(idx+1, ":", animeUrl, ":", status, ":", title, ":", type, ":", images)

topAnime()

This is my stored data, and now I want to display it on a website. How can I do that? I'm a newbie in Django and I'm looking for some suggestions. I have tried using templates but it didn't work.

A: The question is answered here: How to pass data to a template in Django?

In order to display data in Django from a URL into HTML files, there are two ways.

Method 1: Rendering the template along with the data -- Django Templates. How to use it in the project: Render HTML Pages in Django. You can easily set up the Jinja syntax with the help of the above two links.

Method 2: Using Django REST Framework. Prefer this method if you have already worked with APIs and with sending AJAX requests from JavaScript.

Sample code structure for Method 1:

main.py

import requests
from django.shortcuts import render

animeUrl = "https://api.jikan.moe/v4/top/anime"
animeResponse = requests.get(animeUrl).json()

def topAnime():
    rows = []
    for idx, video in enumerate(animeResponse['data']):  # ['data'] holds the fields pulled out below
        animeUrl = video['url']
        title = video['title']
        status = video['status']
        type = video['type']
        images = video['images']['jpg']['image_url']
        #if status == "Currently Airing":
        rows.append((idx+1, animeUrl, status, title, type, images))
    return rows

def requestSender(request):
    # render() takes the request, a template name, and a context dict
    return render(request, "index.html", {"data": topAnime()})

index.html

<body>
<p> {{ data }} </p>
</body>

A: If you are new, I highly recommend that you build your first project by following the tutorial; it helps you grasp some key concepts. You can find your solution in the third part.

But, to answer your question:

In your app's views.py:

from django.shortcuts import render
import requests

def get_animes(request):
    url = "https://api.jikan.moe/v4/top/anime"
    response = requests.get(url).json()
    return render(request, 'animes.html', {'data': response['data']})

In your app's urls.py:

from django.urls import path
from . import views

urlpatterns = [
    path('animes/', views.get_animes, name='animes'),
]

In your animes.html file:

{% for obj in data %}
    {{ obj.url }}
    <br>
    {{ obj.title }}
    <br>
    {{ obj.status }}
    <br>
    {{ obj.type }}
    <br>
    {{ obj.images.jpg.image_url }}
    <br>
    <br>
{% endfor %}

In your root urls.py:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('myApp.urls')),
]

Finally, run your development server:

python manage.py runserver

Open your browser and send a request to your URL:

http://localhost:8000/animes/
Answer scores: [0, 0]
Tags: api, django, python
Source: stackoverflow_0074481937_api_django_python.txt
Q: How to generate __init__.py in all subdirectories of current directory in cmake?

I use out-of-tree builds with CMake. I have a CMake custom command that generates *_pb2.py files from proto files. Since proto files may reside in an unknown number of subdirectories (package namespace), like $SRC/package1/package2/file.proto, the build directory will contain something like $BLD/package1/package2/file_pb2.py.

I want to implicitly make packages from the auto-generated *_pb2.py files and, thus, I want to automagically generate __init__.py files in all subfolders ($BLD/package1, $BLD/package1/package2, etc.) and then install them. How can I do that?

P.S. I've tried the macro from "CMake: How to get the name of all subdirectories of a directory?" (changed GLOB to GLOB_RECURSE) but it returns only subdirs that contain files. I can't get the package1 subdir from the example above.

A: If you are working under a *NIX OS (including Mac) you could use the shell find command like:

ROOT="./"
for DIR in $(find $ROOT -type d); do
    touch $DIR/__init__.py
done

or with a Python script:

from os.path import isdir, walk, join

root = "/path/to/project"
finit = '__init__.py'
def visitor(arg, dirname, fnames):
    fnames = [fname for fname in fnames if isdir(fname)]
    # here you could do some additional checks ...
    print "adding %s to : %s" %(finit, dirname)
    with open(join(dirname, finit), 'w') as file_: file_.write('')

walk(root, visitor, None)

A: The following should give you a list of directories as required in the variable AllPaths:

# Get paths to all .py files (relative to build dir)
file(GLOB_RECURSE SubDirs RELATIVE ${CMAKE_BINARY_DIR} "${CMAKE_BINARY_DIR}/*.py")
# Clear the variable AllPaths ready to take the list of results
set(AllPaths)
foreach(SubDir ${SubDirs})
    # Strip the filename from the path
    get_filename_component(SubDir ${SubDir} PATH)
    # Change the path to a semi-colon separated list
    string(REPLACE "/" ";" PathParts ${SubDir})
    # Incrementally rebuild path, appending each partial path to list of results
    set(RebuiltPath ${CMAKE_BINARY_DIR})
    foreach(PathPart ${PathParts})
        set(RebuiltPath "${RebuiltPath}/${PathPart}")
        set(AllPaths ${AllPaths} ${RebuiltPath})
    endforeach()
endforeach()
# Remove duplicates
list(REMOVE_DUPLICATES AllPaths)

A: Here's a one-line version of the other answer at https://stackoverflow.com/a/11449316/827437:

find $DIR -type d -exec touch {}/__init__.py \;

This creates an __init__.py file within every directory in $DIR, by executing the touch command. Run find $DIR -type d to see the directories that will include the file.
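To actually create the files from the AllPaths list built in the second answer, a short CMake-side sketch (a hypothetical continuation, not from the source; file(WRITE) creates the file and any missing parent directories):

# Create an empty __init__.py in every collected package directory
foreach(PackageDir ${AllPaths})
    if(NOT EXISTS "${PackageDir}/__init__.py")
        file(WRITE "${PackageDir}/__init__.py" "")
    endif()
endforeach()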
How to generate __init__.py in all subdirectories of current directory in cmake?
I use out-of-tree builds with CMake. I have a CMake custom command that generates *_pb2.py files from proto-files. Since proto-files may reside in an unknown number of subdirectories (package namespace), like $SRC/package1/package2/file.proto, the build directory will contain something like $BLD/package1/package2/file_pb2.py. I want to implicitly make packages from auto-generated *_pb2.py files and, thus, I want to automagically generate __init__.py files in all subfolders ($BLD/package1, $BLD/package1/package2, etc.) and then install them. How can I do that? P.S. I've tried the macro from CMake : How to get the name of all subdirectories of a directory? (changed GLOB to GLOB_RECURSE) but it returns only subdirs that contain files. I can't get the package1 subdir from the example above.
[ "If you are working under a *NIX os (including mac) you could use the shell find command like:\nROOT=\"./\"\nfor DIR in $(find $ROOT -type d); do\n touch $DIR/__init__.py\ndone\n\nor with a python script:\nfrom os.path import isdir, walk, join\n\nroot = \"/path/to/project\"\nfinit = '__init__.py'\ndef visitor(arg, dirname, fnames):\n fnames = [fname for fname in fnames if isdir(fname)]\n # here you could do some additional checks ...\n print \"adding %s to : %s\" %(finit, dirname)\n with open(join(dirname, finit), 'w') as file_: file_.write('')\n\nwalk(root, visitor, None)\n\n", "The following should give you a list of directories as required in the variable AllPaths:\n# Get paths to all .py files (relative to build dir)\nfile(GLOB_RECURSE SubDirs RELATIVE ${CMAKE_BINARY_DIR} \"${CMAKE_BINARY_DIR}/*.py\")\n# Clear the variable AllPaths ready to take the list of results\nset(AllPaths)\nforeach(SubDir ${SubDirs})\n # Strip the filename from the path\n get_filename_component(SubDir ${SubDir} PATH)\n # Change the path to a semi-colon separated list\n string(REPLACE \"/\" \";\" PathParts ${SubDir})\n # Incrementally rebuild path, appending each partial path to list of results\n set(RebuiltPath ${CMAKE_BINARY_DIR})\n foreach(PathPart ${PathParts})\n set(RebuiltPath \"${RebuiltPath}/${PathPart}\")\n set(AllPaths ${AllPaths} ${RebuiltPath})\n endforeach()\nendforeach()\n# Remove duplicates\nlist(REMOVE_DUPLICATES AllPaths)\n\n", "Here's a one-line version of the other answer at https://stackoverflow.com/a/11449316/827437:\nfind $DIR -type d -exec touch {}/__init__.py \\;\nThis creates an __init__.py file within every directory in $DIR, by executing the touch command. Run find $DIR -type d to see the directories that will include the file.\n" ]
[ 6, 2, 1 ]
[]
[]
[ "cmake", "protocol_buffers", "python" ]
stackoverflow_0011449117_cmake_protocol_buffers_python.txt
Q: `ResourceExhaustedError: Graph execution error` when trying to train tensorflow model using model.fit() A few days back, I got the same error at 12th epoch. This time, it happens at the 1st. I have no idea why that is happening as I did not make any changes to the model. I only normalized the input to give X_train.max() as 1 after scaling like it should be. Does it have something to do with patch size? Should I reduce it? Why do I get this error and how can I fix it? my_model.summary() Model: "U-Net" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_6 (InputLayer) [(None, 64, 64, 64, 0 [] 3)] conv3d_95 (Conv3D) (None, 64, 64, 64, 5248 ['input_6[0][0]'] 64) batch_normalization_90 (BatchN (None, 64, 64, 64, 256 ['conv3d_95[0][0]'] ormalization) 64) activation_90 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_90[0][0]'] 64) conv3d_96 (Conv3D) (None, 64, 64, 64, 110656 ['activation_90[0][0]'] 64) batch_normalization_91 (BatchN (None, 64, 64, 64, 256 ['conv3d_96[0][0]'] ormalization) 64) activation_91 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_91[0][0]'] 64) max_pooling3d_20 (MaxPooling3D (None, 32, 32, 32, 0 ['activation_91[0][0]'] ) 64) conv3d_97 (Conv3D) (None, 32, 32, 32, 221312 ['max_pooling3d_20[0][0]'] 128) batch_normalization_92 (BatchN (None, 32, 32, 32, 512 ['conv3d_97[0][0]'] ormalization) 128) activation_92 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_92[0][0]'] 128) conv3d_98 (Conv3D) (None, 32, 32, 32, 442496 ['activation_92[0][0]'] 128) batch_normalization_93 (BatchN (None, 32, 32, 32, 512 ['conv3d_98[0][0]'] ormalization) 128) activation_93 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_93[0][0]'] 128) max_pooling3d_21 (MaxPooling3D (None, 16, 16, 16, 0 ['activation_93[0][0]'] ) 128) conv3d_99 (Conv3D) (None, 16, 16, 16, 884992 ['max_pooling3d_21[0][0]'] 256) batch_normalization_94 (BatchN (None, 16, 16, 16, 1024 ['conv3d_99[0][0]'] ormalization) 256) activation_94 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_94[0][0]'] 256) conv3d_100 (Conv3D) (None, 16, 16, 16, 1769728 ['activation_94[0][0]'] 256) batch_normalization_95 (BatchN (None, 16, 16, 16, 1024 ['conv3d_100[0][0]'] ormalization) 256) activation_95 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_95[0][0]'] 256) max_pooling3d_22 (MaxPooling3D (None, 8, 8, 8, 256 0 ['activation_95[0][0]'] ) ) conv3d_101 (Conv3D) (None, 8, 8, 8, 512 3539456 ['max_pooling3d_22[0][0]'] ) batch_normalization_96 (BatchN (None, 8, 8, 8, 512 2048 ['conv3d_101[0][0]'] ormalization) ) activation_96 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_96[0][0]'] ) conv3d_102 (Conv3D) (None, 8, 8, 8, 512 7078400 ['activation_96[0][0]'] ) batch_normalization_97 (BatchN (None, 8, 8, 8, 512 2048 ['conv3d_102[0][0]'] ormalization) ) activation_97 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_97[0][0]'] ) max_pooling3d_23 (MaxPooling3D (None, 4, 4, 4, 512 0 ['activation_97[0][0]'] ) ) conv3d_103 (Conv3D) (None, 4, 4, 4, 102 14156800 ['max_pooling3d_23[0][0]'] 4) batch_normalization_98 (BatchN (None, 4, 4, 4, 102 4096 ['conv3d_103[0][0]'] ormalization) 4) activation_98 (Activation) (None, 4, 4, 4, 102 0 ['batch_normalization_98[0][0]'] 4) conv3d_104 (Conv3D) (None, 4, 4, 4, 102 28312576 ['activation_98[0][0]'] 4) batch_normalization_99 (BatchN (None, 4, 4, 4, 102 4096 
['conv3d_104[0][0]'] ormalization) 4) activation_99 (Activation) (None, 4, 4, 4, 102 0 ['batch_normalization_99[0][0]'] 4) conv3d_transpose_20 (Conv3DTra (None, 8, 8, 8, 512 4194816 ['activation_99[0][0]'] nspose) ) concatenate_20 (Concatenate) (None, 8, 8, 8, 102 0 ['conv3d_transpose_20[0][0]', 4) 'activation_97[0][0]'] conv3d_105 (Conv3D) (None, 8, 8, 8, 512 14156288 ['concatenate_20[0][0]'] ) batch_normalization_100 (Batch (None, 8, 8, 8, 512 2048 ['conv3d_105[0][0]'] Normalization) ) activation_100 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_100[0][0]'] ) conv3d_106 (Conv3D) (None, 8, 8, 8, 512 7078400 ['activation_100[0][0]'] ) batch_normalization_101 (Batch (None, 8, 8, 8, 512 2048 ['conv3d_106[0][0]'] Normalization) ) activation_101 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_101[0][0]'] ) conv3d_transpose_21 (Conv3DTra (None, 16, 16, 16, 1048832 ['activation_101[0][0]'] nspose) 256) concatenate_21 (Concatenate) (None, 16, 16, 16, 0 ['conv3d_transpose_21[0][0]', 512) 'activation_95[0][0]'] conv3d_107 (Conv3D) (None, 16, 16, 16, 3539200 ['concatenate_21[0][0]'] 256) batch_normalization_102 (Batch (None, 16, 16, 16, 1024 ['conv3d_107[0][0]'] Normalization) 256) activation_102 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_102[0][0]'] 256) conv3d_108 (Conv3D) (None, 16, 16, 16, 1769728 ['activation_102[0][0]'] 256) batch_normalization_103 (Batch (None, 16, 16, 16, 1024 ['conv3d_108[0][0]'] Normalization) 256) activation_103 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_103[0][0]'] 256) conv3d_transpose_22 (Conv3DTra (None, 32, 32, 32, 262272 ['activation_103[0][0]'] nspose) 128) concatenate_22 (Concatenate) (None, 32, 32, 32, 0 ['conv3d_transpose_22[0][0]', 256) 'activation_93[0][0]'] conv3d_109 (Conv3D) (None, 32, 32, 32, 884864 ['concatenate_22[0][0]'] 128) batch_normalization_104 (Batch (None, 32, 32, 32, 512 ['conv3d_109[0][0]'] Normalization) 128) activation_104 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_104[0][0]'] 128) conv3d_110 (Conv3D) (None, 32, 32, 32, 442496 ['activation_104[0][0]'] 128) batch_normalization_105 (Batch (None, 32, 32, 32, 512 ['conv3d_110[0][0]'] Normalization) 128) activation_105 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_105[0][0]'] 128) conv3d_transpose_23 (Conv3DTra (None, 64, 64, 64, 65600 ['activation_105[0][0]'] nspose) 64) concatenate_23 (Concatenate) (None, 64, 64, 64, 0 ['conv3d_transpose_23[0][0]', 128) 'activation_91[0][0]'] conv3d_111 (Conv3D) (None, 64, 64, 64, 221248 ['concatenate_23[0][0]'] 64) batch_normalization_106 (Batch (None, 64, 64, 64, 256 ['conv3d_111[0][0]'] Normalization) 64) activation_106 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_106[0][0]'] 64) conv3d_112 (Conv3D) (None, 64, 64, 64, 110656 ['activation_106[0][0]'] 64) batch_normalization_107 (Batch (None, 64, 64, 64, 256 ['conv3d_112[0][0]'] Normalization) 64) activation_107 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_107[0][0]'] 64) conv3d_113 (Conv3D) (None, 64, 64, 64, 260 ['activation_107[0][0]'] 4) ================================================================================================== Total params: 90,319,876 Trainable params: 90,308,100 Non-trainable params: 11,776 __________________________________________________________________________________________________ None Error Message Log: Epoch 1/100 --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) <ipython-input-52-ec522ff5ad08> in 
<module>() 5 epochs=100, 6 verbose=1, ----> 7 validation_data=(X_test, y_test)) 1 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 53 ctx.ensure_initialized() 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 55 inputs, attrs, num_outputs) 56 except core._NotOkStatusException as e: 57 if name is not None: ResourceExhaustedError: Graph execution error: Detected at node 'U-Net/concatenate_23/concat' defined at (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance app.start() File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start self.io_loop.start() File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever self._run_once() File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once handle._run() File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events handler_func(fileobj, events) File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events self._handle_recv() File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv self._run_callback(callback, msg) File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback callback(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell handler(stream, idents, msg) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes if self.run_code(code, result): File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-52-ec522ff5ad08>", line 7, in <module> validation_data=(X_test, y_test)) File 
"/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit tmp_logs = self.train_function(iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function return step_function(self, iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step outputs = model.train_step(data) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step y_pred = self(x, training=True) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) packages/keras/layers/merge.py", line 531, in _merge_function return backend.concatenate(inputs, axis=self.axis) File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 3313, in concatenate return tf.concat([to_dense(x) for x in tensors], axis) Node: 'U-Net/concatenate_23/concat' OOM when allocating tensor with shape[8,128,64,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node U-Net/concatenate_23/concat}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [Op:__inference_train_function_24517] GPU details: nvidia-smi command: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 | | N/A 72C P0 73W / 149W | 11077MiB / 11441MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+ I'm new to Tensorflow and all of this ML stuff honestly. Would really appreciate any help. Thanks. A: I had the same error as you ,it's a resource exhausted problem, resolved by just reducing batch_size value(I had a Model which try to learn from dataset of big images I reduce it's value from 32 to 16) .and it's worked fine A: Just by looking at your SS from nvidia-smi command, it seems like your GPU is not being used for this model training. So, you might wanna look into it and start using your GPU for computation during model training.
`ResourceExhaustedError: Graph execution error` when trying to train tensorflow model using model.fit()
A few days back, I got the same error at 12th epoch. This time, it happens at the 1st. I have no idea why that is happening as I did not make any changes to the model. I only normalized the input to give X_train.max() as 1 after scaling like it should be. Does it have something to do with patch size? Should I reduce it? Why do I get this error and how can I fix it? my_model.summary() Model: "U-Net" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_6 (InputLayer) [(None, 64, 64, 64, 0 [] 3)] conv3d_95 (Conv3D) (None, 64, 64, 64, 5248 ['input_6[0][0]'] 64) batch_normalization_90 (BatchN (None, 64, 64, 64, 256 ['conv3d_95[0][0]'] ormalization) 64) activation_90 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_90[0][0]'] 64) conv3d_96 (Conv3D) (None, 64, 64, 64, 110656 ['activation_90[0][0]'] 64) batch_normalization_91 (BatchN (None, 64, 64, 64, 256 ['conv3d_96[0][0]'] ormalization) 64) activation_91 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_91[0][0]'] 64) max_pooling3d_20 (MaxPooling3D (None, 32, 32, 32, 0 ['activation_91[0][0]'] ) 64) conv3d_97 (Conv3D) (None, 32, 32, 32, 221312 ['max_pooling3d_20[0][0]'] 128) batch_normalization_92 (BatchN (None, 32, 32, 32, 512 ['conv3d_97[0][0]'] ormalization) 128) activation_92 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_92[0][0]'] 128) conv3d_98 (Conv3D) (None, 32, 32, 32, 442496 ['activation_92[0][0]'] 128) batch_normalization_93 (BatchN (None, 32, 32, 32, 512 ['conv3d_98[0][0]'] ormalization) 128) activation_93 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_93[0][0]'] 128) max_pooling3d_21 (MaxPooling3D (None, 16, 16, 16, 0 ['activation_93[0][0]'] ) 128) conv3d_99 (Conv3D) (None, 16, 16, 16, 884992 ['max_pooling3d_21[0][0]'] 256) batch_normalization_94 (BatchN (None, 16, 16, 16, 1024 ['conv3d_99[0][0]'] ormalization) 256) activation_94 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_94[0][0]'] 256) conv3d_100 (Conv3D) (None, 16, 16, 16, 1769728 ['activation_94[0][0]'] 256) batch_normalization_95 (BatchN (None, 16, 16, 16, 1024 ['conv3d_100[0][0]'] ormalization) 256) activation_95 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_95[0][0]'] 256) max_pooling3d_22 (MaxPooling3D (None, 8, 8, 8, 256 0 ['activation_95[0][0]'] ) ) conv3d_101 (Conv3D) (None, 8, 8, 8, 512 3539456 ['max_pooling3d_22[0][0]'] ) batch_normalization_96 (BatchN (None, 8, 8, 8, 512 2048 ['conv3d_101[0][0]'] ormalization) ) activation_96 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_96[0][0]'] ) conv3d_102 (Conv3D) (None, 8, 8, 8, 512 7078400 ['activation_96[0][0]'] ) batch_normalization_97 (BatchN (None, 8, 8, 8, 512 2048 ['conv3d_102[0][0]'] ormalization) ) activation_97 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_97[0][0]'] ) max_pooling3d_23 (MaxPooling3D (None, 4, 4, 4, 512 0 ['activation_97[0][0]'] ) ) conv3d_103 (Conv3D) (None, 4, 4, 4, 102 14156800 ['max_pooling3d_23[0][0]'] 4) batch_normalization_98 (BatchN (None, 4, 4, 4, 102 4096 ['conv3d_103[0][0]'] ormalization) 4) activation_98 (Activation) (None, 4, 4, 4, 102 0 ['batch_normalization_98[0][0]'] 4) conv3d_104 (Conv3D) (None, 4, 4, 4, 102 28312576 ['activation_98[0][0]'] 4) batch_normalization_99 (BatchN (None, 4, 4, 4, 102 4096 ['conv3d_104[0][0]'] ormalization) 4) activation_99 (Activation) (None, 4, 4, 4, 102 0 ['batch_normalization_99[0][0]'] 
4) conv3d_transpose_20 (Conv3DTra (None, 8, 8, 8, 512 4194816 ['activation_99[0][0]'] nspose) ) concatenate_20 (Concatenate) (None, 8, 8, 8, 102 0 ['conv3d_transpose_20[0][0]', 4) 'activation_97[0][0]'] conv3d_105 (Conv3D) (None, 8, 8, 8, 512 14156288 ['concatenate_20[0][0]'] ) batch_normalization_100 (Batch (None, 8, 8, 8, 512 2048 ['conv3d_105[0][0]'] Normalization) ) activation_100 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_100[0][0]'] ) conv3d_106 (Conv3D) (None, 8, 8, 8, 512 7078400 ['activation_100[0][0]'] ) batch_normalization_101 (Batch (None, 8, 8, 8, 512 2048 ['conv3d_106[0][0]'] Normalization) ) activation_101 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_101[0][0]'] ) conv3d_transpose_21 (Conv3DTra (None, 16, 16, 16, 1048832 ['activation_101[0][0]'] nspose) 256) concatenate_21 (Concatenate) (None, 16, 16, 16, 0 ['conv3d_transpose_21[0][0]', 512) 'activation_95[0][0]'] conv3d_107 (Conv3D) (None, 16, 16, 16, 3539200 ['concatenate_21[0][0]'] 256) batch_normalization_102 (Batch (None, 16, 16, 16, 1024 ['conv3d_107[0][0]'] Normalization) 256) activation_102 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_102[0][0]'] 256) conv3d_108 (Conv3D) (None, 16, 16, 16, 1769728 ['activation_102[0][0]'] 256) batch_normalization_103 (Batch (None, 16, 16, 16, 1024 ['conv3d_108[0][0]'] Normalization) 256) activation_103 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_103[0][0]'] 256) conv3d_transpose_22 (Conv3DTra (None, 32, 32, 32, 262272 ['activation_103[0][0]'] nspose) 128) concatenate_22 (Concatenate) (None, 32, 32, 32, 0 ['conv3d_transpose_22[0][0]', 256) 'activation_93[0][0]'] conv3d_109 (Conv3D) (None, 32, 32, 32, 884864 ['concatenate_22[0][0]'] 128) batch_normalization_104 (Batch (None, 32, 32, 32, 512 ['conv3d_109[0][0]'] Normalization) 128) activation_104 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_104[0][0]'] 128) conv3d_110 (Conv3D) (None, 32, 32, 32, 442496 ['activation_104[0][0]'] 128) batch_normalization_105 (Batch (None, 32, 32, 32, 512 ['conv3d_110[0][0]'] Normalization) 128) activation_105 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_105[0][0]'] 128) conv3d_transpose_23 (Conv3DTra (None, 64, 64, 64, 65600 ['activation_105[0][0]'] nspose) 64) concatenate_23 (Concatenate) (None, 64, 64, 64, 0 ['conv3d_transpose_23[0][0]', 128) 'activation_91[0][0]'] conv3d_111 (Conv3D) (None, 64, 64, 64, 221248 ['concatenate_23[0][0]'] 64) batch_normalization_106 (Batch (None, 64, 64, 64, 256 ['conv3d_111[0][0]'] Normalization) 64) activation_106 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_106[0][0]'] 64) conv3d_112 (Conv3D) (None, 64, 64, 64, 110656 ['activation_106[0][0]'] 64) batch_normalization_107 (Batch (None, 64, 64, 64, 256 ['conv3d_112[0][0]'] Normalization) 64) activation_107 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_107[0][0]'] 64) conv3d_113 (Conv3D) (None, 64, 64, 64, 260 ['activation_107[0][0]'] 4) ================================================================================================== Total params: 90,319,876 Trainable params: 90,308,100 Non-trainable params: 11,776 __________________________________________________________________________________________________ None Error Message Log: Epoch 1/100 --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) <ipython-input-52-ec522ff5ad08> in <module>() 5 epochs=100, 6 verbose=1, ----> 7 validation_data=(X_test, y_test)) 1 frames 
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 53 ctx.ensure_initialized() 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 55 inputs, attrs, num_outputs) 56 except core._NotOkStatusException as e: 57 if name is not None: ResourceExhaustedError: Graph execution error: Detected at node 'U-Net/concatenate_23/concat' defined at (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance app.start() File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start self.io_loop.start() File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever self._run_once() File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once handle._run() File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events handler_func(fileobj, events) File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events self._handle_recv() File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv self._run_callback(callback, msg) File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback callback(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell handler(stream, idents, msg) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes if self.run_code(code, result): File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-52-ec522ff5ad08>", line 7, in <module> validation_data=(X_test, y_test)) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return 
fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit tmp_logs = self.train_function(iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function return step_function(self, iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step outputs = model.train_step(data) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step y_pred = self(x, training=True) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) packages/keras/layers/merge.py", line 531, in _merge_function return backend.concatenate(inputs, axis=self.axis) File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 3313, in concatenate return tf.concat([to_dense(x) for x in tensors], axis) Node: 'U-Net/concatenate_23/concat' OOM when allocating tensor with shape[8,128,64,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node U-Net/concatenate_23/concat}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [Op:__inference_train_function_24517] GPU details: nvidia-smi command: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 | | N/A 72C P0 73W / 149W | 11077MiB / 11441MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+ I'm new to Tensorflow and all of this ML stuff honestly. Would really appreciate any help. Thanks.
[ "I had the same error as you ,it's a resource exhausted problem, resolved by just reducing batch_size value(I had a Model which try to learn from dataset of big images I reduce it's value from 32 to 16) .and it's worked fine\n", "Just by looking at your SS from nvidia-smi command, it seems like your GPU is not being used for this model training. So, you might wanna look into it and start using your GPU for computation during model training.\n" ]
[ 2, 0 ]
[]
[]
[ "google_colaboratory", "python", "tensorflow" ]
stackoverflow_0072122939_google_colaboratory_python_tensorflow.txt
Q: More concise way of filling blocks of NaN values with CAGR between beginning and ending periods with Pandas Sample data: data = {'year':[2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020], 'revenue' : [100, np.nan, np.nan, 108, 118, np.nan, np.nan, np.nan, 127, 135]} df = pd.DataFrame(data).set_index('year') df Output: revenue year 2011 100.0 2012 NaN 2013 NaN 2014 108.0 2015 118.0 2016 NaN 2017 NaN 2018 NaN 2019 127.0 2020 135.0 I would like to fill in each NaN value corresponding to the Compound Annual Growth Rate (CAGR) of the first and last known periods that it is contained within. By using the following equation to calculate CAGR of the 2 blocks of NaN values pictured: growthA=((df.loc[2014,'revenue']/df.loc[2011,'revenue'])**(1/len(df.loc[2011:2014]))) growthB=((df.loc[2019,'revenue']/df.loc[2015,'revenue'])**(1/len(df.loc[2015:2019]))) Note: I left out the -1 so I can just multiply each iteration by my growth variables. Now I can fill in the NaN values as follows: df.loc[2012, 'revenue'] = df.loc[2011, 'revenue'] * growthA df.loc[2013, 'revenue'] = df.loc[2012, 'revenue'] * growthA df.loc[2016, 'revenue'] = df.loc[2015, 'revenue'] * growthB df.loc[2017, 'revenue'] = df.loc[2016, 'revenue'] * growthB df.loc[2018, 'revenue'] = df.loc[2017, 'revenue'] * growthB df Yielding my desired output: revenue year 2011 100.000000 2012 101.942655 2013 103.923048 2014 108.000000 2015 118.000000 2016 119.747471 2017 121.520820 2018 123.320431 2019 127.000000 2020 135.000000 This works, but isn't very efficient when working with a much larger dataset for obvious reasons. My goal is to write a script that automates filling multiple blocks of NaN values in the fashion I have shown, without having to go year by year within each block of NaNs, and going block by block across the entire dataset. What would be a good place to start to achieve this? A: You are trying to interpolate data in your DataFrame. From what I understand, you want to apply a kind of exponential growth rate between each date. As pandas does not have direct exponential interpolation, an idea would be to first apply a log (understand it as natural logarithm, or ln in maths) function to your revenue column, apply a linear interpolation, then putting back the exponential to see the nan values filled with the interpolation: import pandas as pd import numpy as np data = {'year': [2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020], 'revenue': [100, np.nan, np.nan, 108, 118, np.nan, np.nan, np.nan, 127, 135]} df = pd.DataFrame(data).set_index('year') df['revenue2'] = df['revenue'].apply(np.log) df['revenue2'] = df['revenue2'].interpolate('linear') df['revenue2'] = df['revenue2'].apply(np.exp) By checking closely this does not seem to fit exactly your desired output, but it is very close. Hope this helps
More concise way of filling blocks of NaN values with CAGR between beginning and ending periods with Pandas
Sample data: data = {'year':[2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020], 'revenue' : [100, np.nan, np.nan, 108, 118, np.nan, np.nan, np.nan, 127, 135]} df = pd.DataFrame(data).set_index('year') df Output: revenue year 2011 100.0 2012 NaN 2013 NaN 2014 108.0 2015 118.0 2016 NaN 2017 NaN 2018 NaN 2019 127.0 2020 135.0 I would like to fill in each NaN value corresponding to the Compound Annual Growth Rate (CAGR) of the first and last known periods that it is contained within. By using the following equation to calculate CAGR of the 2 blocks of NaN values pictured: growthA=((df.loc[2014,'revenue']/df.loc[2011,'revenue'])**(1/len(df.loc[2011:2014]))) growthB=((df.loc[2019,'revenue']/df.loc[2015,'revenue'])**(1/len(df.loc[2015:2019]))) Note: I left out the -1 so I can just multiply each iteration by my growth variables. Now I can fill in the NaN values as follows: df.loc[2012, 'revenue'] = df.loc[2011, 'revenue'] * growthA df.loc[2013, 'revenue'] = df.loc[2012, 'revenue'] * growthA df.loc[2016, 'revenue'] = df.loc[2015, 'revenue'] * growthB df.loc[2017, 'revenue'] = df.loc[2016, 'revenue'] * growthB df.loc[2018, 'revenue'] = df.loc[2017, 'revenue'] * growthB df Yielding my desired output: revenue year 2011 100.000000 2012 101.942655 2013 103.923048 2014 108.000000 2015 118.000000 2016 119.747471 2017 121.520820 2018 123.320431 2019 127.000000 2020 135.000000 This works, but isn't very efficient when working with a much larger dataset for obvious reasons. My goal is to write a script that automates filling multiple blocks of NaN values in the fashion I have shown, without having to go year by year within each block of NaNs, and going block by block across the entire dataset. What would be a good place to start to achieve this?
[ "You are trying to interpolate data in your DataFrame. From what I understand, you want to apply a kind of exponential growth rate between each date.\nAs pandas does not have direct exponential interpolation, an idea would be to first apply a log (understand it as natural logarithm, or ln in maths) function to your revenue column, apply a linear interpolation, then putting back the exponential to see the nan values filled with the interpolation:\nimport pandas as pd\nimport numpy as np\n\ndata = {'year': [2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020],\n 'revenue': [100, np.nan, np.nan, 108, 118, np.nan, np.nan, np.nan, 127, 135]}\ndf = pd.DataFrame(data).set_index('year')\n\ndf['revenue2'] = df['revenue'].apply(np.log)\ndf['revenue2'] = df['revenue2'].interpolate('linear')\ndf['revenue2'] = df['revenue2'].apply(np.exp)\n\n\nBy checking closely this does not seem to fit exactly your desired output, but it is very close.\nHope this helps\n" ]
[ 0 ]
[]
[]
[ "dataframe", "fillna", "pandas", "python" ]
stackoverflow_0074480937_dataframe_fillna_pandas_python.txt
Q: pytest-django Use env vars in settings.py I have an API in Django that uses quite a few environment variables. The idea is to add pytest-django to test all its functionalities (I know it would have been smarter to build the tests together with the project). Currently, it is in the manage.py file that I load the environment variables, as follows: def main(): dotenv.read_dotenv() And in my api settings.py file I use some of these environment variables as follows: os.environ.get('one_key') In my pytest.ini file I have correctly configured my settings.py as follows: DJANGO_SETTINGS_MODULE = api.settings The problem is that when I run pytest I get the error that it does not find those environment variables, because manage.py has not been executed and therefore they have not been loaded. Is there any way to make pytest load a .env file before running the tests and the settings.py? A: I was having the same issue. The suggestion of calling dotenv.read_dotenv() from pytest_sessionstart() in conftest.py did not work for me. I also tried the pytest-dotenv library that was linked. It made pytest work, but broke my manage.py module (due to a namespace conflict between django-dotenv [which my manage.py is using to read the .env file], and python-dotenv [which is a dependency of pytest-dotenv]). Ultimately I ended up finding pytest-django-dotenv, which seems to be doing the trick.
pytest-django Use env vars in settings.py
I have an API in Django that uses quite a few environment variables. The idea is to add pytest-django to test all its functionalities (I know it would have been smarter to build the tests together with the project). Currently, it is in the manage.py file that I load the environment variables, as follows: def main(): dotenv.read_dotenv() And in my api settings.py file I use some of these environment variables as follows: os.environ.get('one_key') In my pytest.ini file I have correctly configured my settings.py as follows: DJANGO_SETTINGS_MODULE = api.settings The problem is that when I run pytest I get the error that it does not find those environment variables, because manage.py has not been executed and therefore they have not been loaded. Is there any way to make pytest load a .env file before running the tests and the settings.py?
[ "I was having the same issue. The suggestion of calling dotenv.read_dotenv() from pytest_sessionstart() in conftest.py did not work for me. I also tried the pytest-dotenv library that was linked. It made pytest work, but broke my manage.py module (due to a namespace conflict between django-dotenv [which my manage.py is using to read the .env file], and python-dotenv [which is a dependency of pytest-dotenv]).\nUltimately I ended up finding \npytest-django-dotenv, which seems to be doing the trick.\n" ]
[ 1 ]
[]
[]
[ "django", "pytest", "pytest_django", "python" ]
stackoverflow_0073021144_django_pytest_pytest_django_python.txt
Q: Split function returning NaN for non matching patterns in pandas I am getting NaN for a non-matching pattern with respect to split in pandas. Source Data: Attr [ABC].[xyz] CDE Code Used: df['Extr_Attr'] = np.where((df.Attr.str.contains('.')),df['Attr'].str.split('.',1).str[1], df.Attr) This returns NaN for data that does not have a match of '.' in the source data. Expected output: Attr Extr_Attr [ABC].[xyz] [xyz] CDE CDE A: Assuming you want the last chunk after a dot (if any, else the full string). If you want to split, use rsplit and slice the last item: df['Extr_Attr'] = df['Attr'].str.rsplit('.', 1).str[-1] Or more efficiently, with extract (get all non-. characters at the end of the string): df['Extr_Attr'] = df['Attr'].str.extract(r'([^.]+)$') Output: Attr Extr_Attr 0 [ABC].[xyz] [xyz] 1 CDE CDE A: I think we can skip the str.contains and use .split and .fillna df['Extr_Attr'] = df['Attr'].str.split('.').str[1].fillna(df['Attr']) Attr Extr_Attr 0 [ABC].[xyz] [xyz] 1 CDE CDE
Split function returning NaN for non matching patterns in pandas
I am getting NaN for a non-matching pattern with respect to split in pandas. Source Data: Attr [ABC].[xyz] CDE Code Used: df['Extr_Attr'] = np.where((df.Attr.str.contains('.')),df['Attr'].str.split('.',1).str[1], df.Attr) This returns NaN for data that does not have a match of '.' in the source data. Expected output: Attr Extr_Attr [ABC].[xyz] [xyz] CDE CDE
[ "Assuming you want the last chunk after a dot (if any, else the full string).\nIf you want to split, use rsplit and slice the last item:\ndf['Extr_Attr'] = df['Attr'].str.rsplit('.', 1).str[-1]\n\nOr more efficiently, with extract (get all non-. characters at the end of the string):\ndf['Extr_Attr'] = df['Attr'].str.extract(r'([^.]+)$')\n\nOutput:\n Attr Extr_Attr\n0 [ABC].[xyz] [xyz]\n1 CDE CDE\n\n", "I think we can skip the str.contains and use .split and .fillna\ndf['Extr_Attr'] = df['Attr'].str.split('.').str[1].fillna(df['Attr'])\n\n Attr Extr_Attr\n0 [ABC].[xyz] [xyz]\n1 CDE CDE\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074482602_pandas_python.txt
Q: Django : Can't import 'module'. Check that module AppConfig.name is correct Might look like an already answered question, actually here you have the same problem (kind of) i had. My problem is, it's just a trick, one line, no explanation (and still it's different but the solution given works, and that's part of my problem). Here's my project structure, simplified: manage.py compfactu/---settings.py |--__init__.py |--core/--------__init__.py |-apps.py So here is how I added my app in INSTALLED_APPS: apps.py from django.apps import AppConfig class CoreConfig(AppConfig): name = 'core' settings.py INSTALLED_APPS = [ ... #compfactu modules 'compfactu.core.apps.CoreConfig', ] As I read the django 1.11 documentation, and I quote : New applications should avoid default_app_config. Instead they should require the dotted path to the appropriate AppConfig subclass to be configured explicitly in INSTALLED_APPS. Well nice, it's a new application so i should do that : but i'm getting an error. And it's not a problem of pythonpath, cause i just opened a python shell and I can do from compfactu.core.apps import CoreConfig with no problem (print the sys.path too, everything's fine). But I have this error, here's a full traceback: Traceback (most recent call last): File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/apps/config.py", line 147, in create app_module = import_module(app_name) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 986, in _gcd_import File "<frozen importlib._bootstrap>", line 969, in _find_and_load File "<frozen importlib._bootstrap>", line 956, in _find_and_load_unlocked ImportError: No module named 'core' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/autoreload.py", line 228, in wrapper fn(*args, **kwargs) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/core/management/commands/runserver.py", line 117, in inner_run autoreload.raise_last_exception() File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/autoreload.py", line 251, in raise_last_exception six.reraise(*_exception) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise raise value.with_traceback(tb) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/autoreload.py", line 228, in wrapper fn(*args, **kwargs) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/apps/config.py", line 151, in create app_name, mod_path, cls_name, django.core.exceptions.ImproperlyConfigured: Cannot import 'core'. Check that 'compfactu.core.apps.CoreConfig.name' is correct. And from there, all files and class have been generated by django (manage.py startapp). And when I actually do what's told in the question I linked above, doing like : INSTALLED_APPS = [ ... #compfactu modules 'compfactu.core', ] it works ! And I don't get that point ! 
Reading the doc (the part I've just quoted), it SHOULD NOT work (noting that I don't have a default_app_config in my __init__.py). So, like the question where I found the "trick" but no explanation, I'm here asking why it works this way when it shouldn't, and why the solution in the official doc doesn't work? Thank you in advance for your time. A: According to the documentation, AppConfig.name is the full Python path to the application. AppConfig.name Full Python path to the application, e.g. 'django.contrib.admin'. This attribute defines which application the configuration applies to. It must be set in all AppConfig subclasses. It must be unique across a Django project. https://docs.djangoproject.com/en/2.2/ref/applications/#django.apps.AppConfig.name Try this: class CoreConfig(AppConfig): name = 'compfactu.core' A: In your apps.py, ensure to include the full path of the app: class CoreConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'compfactu.core'
Django : Can't import 'module'. Check that module AppConfig.name is correct
Might look like an already answered question, actually here you have the same problem (kind of) i had. My problem is, it's just a trick, one line, no explanation (and still it's different but the solution given works, and that's part of my problem). Here's my project structure, simplified: manage.py compfactu/---settings.py |--__init__.py |--core/--------__init__.py |-apps.py So here is how I added my app in INSTALLED_APPS: apps.py from django.apps import AppConfig class CoreConfig(AppConfig): name = 'core' settings.py INSTALLED_APPS = [ ... #compfactu modules 'compfactu.core.apps.CoreConfig', ] As I read the django 1.11 documentation, and I quote : New applications should avoid default_app_config. Instead they should require the dotted path to the appropriate AppConfig subclass to be configured explicitly in INSTALLED_APPS. Well nice, it's a new application so i should do that : but i'm getting an error. And it's not a problem of pythonpath, cause i just opened a python shell and I can do from compfactu.core.apps import CoreConfig with no problem (print the sys.path too, everything's fine). But I have this error, here's a full traceback: Traceback (most recent call last): File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/apps/config.py", line 147, in create app_module = import_module(app_name) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 986, in _gcd_import File "<frozen importlib._bootstrap>", line 969, in _find_and_load File "<frozen importlib._bootstrap>", line 956, in _find_and_load_unlocked ImportError: No module named 'core' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/autoreload.py", line 228, in wrapper fn(*args, **kwargs) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/core/management/commands/runserver.py", line 117, in inner_run autoreload.raise_last_exception() File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/autoreload.py", line 251, in raise_last_exception six.reraise(*_exception) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise raise value.with_traceback(tb) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/utils/autoreload.py", line 228, in wrapper fn(*args, **kwargs) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/home/jbjaillet/Projets/venvcompfactu/lib/python3.5/site-packages/django/apps/config.py", line 151, in create app_name, mod_path, cls_name, django.core.exceptions.ImproperlyConfigured: Cannot import 'core'. Check that 'compfactu.core.apps.CoreConfig.name' is correct. And from there, all files and class have been generated by django (manage.py startapp). And when I actually do what's told in the question I linked above, doing like : INSTALLED_APPS = [ ... #compfactu modules 'compfactu.core', ] it works ! And I don't get that point ! 
Reading the doc (the part I've just quoted), it SHOULD NOT work (noting that I don't have a default_app_config in my __init__.py). So, like the question where I found the "trick" but no explanation, I'm here asking why it works this way when it shouldn't, and why the solution in the official doc doesn't work? Thank you in advance for your time.
[ "According to the documentation, AppConfig.name is a full python path to the application.\n\nAppConfig.name\nFull Python path to the application, e.g. 'django.contrib.admin'.\nThis attribute defines which application the configuration applies to.\n It must be set in all AppConfig subclasses.\nIt must be unique across a Django project.\n\nhttps://docs.djangoproject.com/en/2.2/ref/applications/#django.apps.AppConfig.name\nTry this:\nclass CoreConfig(AppConfig):\n name = 'compfactu.core'\n\n", "In your apps.py ensure to include the full path off the app:\nclass CoreConfig(AppConfig):\ndefault_auto_field = 'django.db.models.BigAutoField'\nname = 'compfactu.core'\n\n" ]
[ 131, 0 ]
[]
[]
[ "django", "python", "python_3.x" ]
stackoverflow_0046177499_django_python_python_3.x.txt
Q: How do I make my discord.py bot pick a random line in a .txt file and reply to a message Ok, so I am making a gen bot for me and my friends, and I wanted to know how to make my bot reply to a message with an account from a .txt file that I put in the folder with the bot. Please help me, thank you. I tried ` import random @client.command() async def color(ctx): responses = ['red', 'blue', 'green', 'purple', 'Add more',] await ctx.send(f'Color: {random.choice(responses)}') ` but I didn't want to put it all in one line in the responses. A: You could add something like this to your code: import random def readTxtLines(filename): with open(filename, "r") as f: lines = f.readlines() return lines def getRandomLine(lines): return random.choice(lines).strip() print(getRandomLine(readTxtLines("test.txt"))) A: Here is a method in cog form: import random @commands.command() async def color(self,ctx): filename1=open('cogs/color.txt','r') wordList1=[line.rstrip('\n') for line in filename1] filename1.close() out1 = random.choice(wordList1) await ctx.send(f"color: {out1}")
How do I make my discord.py bot pick a random line in a .txt file and reply to a message
Ok, so I am making a gen bot for me and my friends, and I wanted to know how to make my bot reply to a message with an account from a .txt file that I put in the folder with the bot. Please help me, thank you. I tried ` import random @client.command() async def color(ctx): responses = ['red', 'blue', 'green', 'purple', 'Add more',] await ctx.send(f'Color: {random.choice(responses)}') ` but I didn't want to put it all in one line in the responses.
[ "You could add something like this to your code:\nimport random\n\n\ndef readTxtLines(filename):\n with open(filename, \"r\") as f:\n lines = f.readlines()\n return lines\n\n\ndef getRandomLine(lines):\n return random.choice(lines).strip()\n\n\nprint(getRandomLine(readTxtLines(\"test.txt\")))\n\n", "Here is a method in cog form:\nimport random\n\n@commands.command()\nasync def color(self,ctx):\n filename1=open('cogs/color.txt','r')\n wordList1=[line.rstrip('\\n') for line in filename1]\n filename1.close()\n out1 = random.choice(wordList1)\n await ctx.send(f\"color: {out1}\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "bots", "discord", "discord.py", "python" ]
stackoverflow_0074438914_bots_discord_discord.py_python.txt
Q: Select rows between multiple values for all columns in a dataframe I am trying to select multiple values between a range fom all rows per each column and plot them all-together. The values in the dataframe are between 0 and 100. I want to select a range of values between 0 to 10 for all rows of one column, and then repeat that iteration every 10 values until 100 (e.g.: values between 0 to 10: 2, 4, 6, 9, 1 and then 10 to 20, 20 to 30, etc.) for each column. After importing data import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame(np.random.randint(0, 100, size = (100, 10)), columns=list('ABCDEFGHIJ')) I am aware that I can select rows based on multiple column conditions with the following function df_1 = df.A[(df.A > 0) & (df.A < 10)] However, by doing it this way I can only select one range of values per one column at a time. Is there a better way how to do this for a full range of values (0 to 100 every 10 iterations) and for all columns, rather than doing it manually for every range for all columns? If done manually, I need to set up 10 conditions per column and since there are 10 columns in the dataframe, it would end up with 100 conditions, which I wish to omit if possible. I am also interested if there is a counter library that can do this kind of operation, just to provide an output of how many rows are between 0 to 10 every 10 iterations for each column. A: Try : df_1 = df[(df['A']>0) & (df['A']<10)] You will get : >>> df_1 A B C D E F G H I J 43 5 91 98 63 55 32 6 79 28 18 47 3 88 62 6 52 21 16 64 33 60 50 8 43 84 6 8 6 70 93 0 95 65 5 24 7 80 89 92 70 65 12 44 78 2 99 15 14 5 46 99 62 33 40 80 5 66 86 22 57 86 15 44 78 37 95 2 36 70 53 81 76 70 2 9 42 A: Here is a small example which you can use for your purpose by modifying the values of rows to 100 and num_group to 10: import pandas as pd import numpy as np pd.set_option('display.max_rows', 500) # generate data rows = 6 rng = np.random.default_rng(1) df = pd.DataFrame(rng.integers(0, 100, size = (rows, 10)), columns=list('ABCDEFGHIJ')) print(df) num_group = 3 print(df.index//num_group) # Counter to track the number of groups visited i = 0 def count(x): global i num_rows = [] for col in x.columns: num_rows.append(np.sum(x[col].between(i*10, (i+1)*10, inclusive="left"))) i += 1 return pd.Series(dict(zip(list('ABCDEFGHIJ'), num_rows))) print(df.groupby(df.index // num_group).apply(lambda x: count(x))) This gives: # df A B C D E F G H I J 0 47 51 75 95 3 14 82 94 24 31 1 86 42 27 82 25 40 64 54 8 2 2 86 75 83 53 81 32 45 78 12 30 3 12 45 97 13 38 40 90 20 50 26 4 1 75 6 28 49 48 11 98 74 96 5 9 72 29 54 92 27 72 16 32 96 # df.index // num_group Int64Index([0, 0, 0, 1, 1, 1], dtype='int64') and the expected output for counting the number of rows whose value in a particular column lie in the range of [0, 10) and [10, 20) for the first and second groups, respectively: A B C D E F G H I J 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 1 0 0 1 1 0 0 To understand how grouping works on rows, please see this answer or pandas.groupby. To check if a value at a particular index of pandas Series lies within a particular range, we have used pandas.Series.between. Most importantly, we have created a function count that takes the grouped data frame, and for each column, gives us the count of rows that lie within a particular range. Finally, as num_rows is just a simple list, we convert it to pandas.Series before returning the output to pandas.apply. 
So, the answer will be: df.groupby(df.index // num_group).apply( lambda x: pd.Series(dict(zip( list('ABCDEFGHIJ'), x.apply(lambda x: np.sum(x.between((x.index//num_group) * 10, ((x.index//num_group) +1) * 10))))))) Please be aware that the number of groups is equal to the unique integers in df.index // num_group and not num_group. A: Try this; pd.cut ==> cut your column into bins, and you can give any label name to these bins. For example, with this code, every number 0 to 10 will be 1, 11 to 20 will be 2... etc. Note that with 11 bin edges there are 10 intervals, so 10 labels are needed: for i in df.columns: bins = [0,10,20,30,40,50,60,70,80,90,100] labels = [1,2,3,4,5,6,7,8,9,10] df[i] = pd.cut(df[i], bins=bins, labels=labels)
Select rows between multiple values for all columns in a dataframe
I am trying to select multiple values within a range from all rows of each column and plot them all together. The values in the dataframe are between 0 and 100. I want to select a range of values between 0 to 10 for all rows of one column, and then repeat that iteration every 10 values until 100 (e.g.: values between 0 to 10: 2, 4, 6, 9, 1 and then 10 to 20, 20 to 30, etc.) for each column. After importing data import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame(np.random.randint(0, 100, size = (100, 10)), columns=list('ABCDEFGHIJ')) I am aware that I can select rows based on multiple column conditions with the following function df_1 = df.A[(df.A > 0) & (df.A < 10)] However, by doing it this way I can only select one range of values per one column at a time. Is there a better way to do this for a full range of values (0 to 100 every 10 iterations) and for all columns, rather than doing it manually for every range for all columns? If done manually, I need to set up 10 conditions per column and since there are 10 columns in the dataframe, it would end up with 100 conditions, which I would like to avoid if possible. I am also interested if there is a counter library that can do this kind of operation, just to provide an output of how many rows are between 0 to 10 every 10 iterations for each column.
[ "Try :\ndf_1 = df[(df['A']>0) & (df['A']<10)]\n\nYou will get :\n>>> df_1\n\n A B C D E F G H I J\n43 5 91 98 63 55 32 6 79 28 18\n47 3 88 62 6 52 21 16 64 33 60\n50 8 43 84 6 8 6 70 93 0 95\n65 5 24 7 80 89 92 70 65 12 44\n78 2 99 15 14 5 46 99 62 33 40\n80 5 66 86 22 57 86 15 44 78 37\n95 2 36 70 53 81 76 70 2 9 42\n\n", "Here is a small example which you can use for your purpose by modifying the values of rows to 100 and num_group to 10:\nimport pandas as pd\nimport numpy as np \npd.set_option('display.max_rows', 500)\n\n# generate data\nrows = 6\nrng = np.random.default_rng(1)\ndf = pd.DataFrame(rng.integers(0, 100, size = (rows, 10)), columns=list('ABCDEFGHIJ'))\n\nprint(df)\n\nnum_group = 3\nprint(df.index//num_group)\n\n# Counter to track the number of groups visited\ni = 0\ndef count(x):\n global i\n num_rows = []\n for col in x.columns:\n num_rows.append(np.sum(x[col].between(i*10, (i+1)*10, inclusive=\"left\")))\n i += 1\n return pd.Series(dict(zip(list('ABCDEFGHIJ'), num_rows)))\n\nprint(df.groupby(df.index // num_group).apply(lambda x: count(x)))\n\nThis gives:\n# df\n A B C D E F G H I J\n0 47 51 75 95 3 14 82 94 24 31\n1 86 42 27 82 25 40 64 54 8 2\n2 86 75 83 53 81 32 45 78 12 30\n3 12 45 97 13 38 40 90 20 50 26\n4 1 75 6 28 49 48 11 98 74 96\n5 9 72 29 54 92 27 72 16 32 96\n\n# df.index // num_group\nInt64Index([0, 0, 0, 1, 1, 1], dtype='int64')\n\nand the expected output for counting the number of rows whose value in a particular column lie in the range of [0, 10) and [10, 20) for the first and second groups, respectively:\n A B C D E F G H I J\n0 0 0 0 0 1 0 0 0 1 1\n1 1 0 0 1 0 0 1 1 0 0\n\nTo understand how grouping works on rows, please see this answer or pandas.groupby. To check if a value at a particular index of pandas Series lies within a particular range, we have used pandas.Series.between.\nMost importantly, we have created a function count that takes the grouped data frame, and for each column, gives us the count of rows that lie within a particular range. Finally, as num_rows is just a simple list, we convert it to pandas.Series before returning the output to pandas.apply.\nSo, the answer will be:\ndf.groupby(df.index // num_group).apply(\n lambda x: \n pd.Series(dict(zip(\n list('ABCDEFGHIJ'), \n x.apply(lambda x: \n np.sum(x.between((x.index//num_group) * 10, ((x.index//num_group) +1) * 10)))))))\n\nPlease be aware that the number of groups is equal to the unique integers in df.index // num_group and not num_group.\n", "Try this; pd.cut ==> cut your column to bins and you can give any label name between this numbers. For example with this code , every number 0 to 10 will be 1, 11 to 20 will be 2... etc\nfor i in df.columns:\n bins = [0,10,20,30,40,50,60,70,80,90,100]\n labels = [1,2,3,4,5,6,7,8,9] \n df[i] = pd.cut(df[i], bins=bins, labels=labels)\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074481753_pandas_python.txt
Q: I can read from a local file in PySpark but I can't write a data frame to a local file df.write.csv("sdf") " 21/07/24 15:27:23 ERROR FileFormatWriter: Aborting job a9914f88-3ab9-480a-984f-33d0e598c0fc. java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method) at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:645) at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1230) at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1435) at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:493) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:678) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.getAllCommittedTaskPaths(FileOutputCommitter.java:332) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:402) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:375) at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:182) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:220) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131) at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180) at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131) at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438) at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293) at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:979) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Unknown Source) Traceback (most recent call last): File "", line 1, in File "C:\spark\python\pyspark\sql\readwriter.py", line 1372, in csv self._jwrite.csv(path) File "C:\spark\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py", line 1305, in call File "C:\spark\python\pyspark\sql\utils.py", line 111, in deco return f(*a, **kw) File "C:\spark\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py", line 328, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o40.csv. : org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131) at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180) at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131) at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438) at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293) at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:979) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at 
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Unknown Source) Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method) at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:645) at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1230) at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1435) at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:493) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:678) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.getAllCommittedTaskPaths(FileOutputCommitter.java:332) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:402) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:375) at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:182) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:220) A: Apart from getting winutils.exe and setting HADOOP_HOME, please check whether you have the hadoop.dll binary in your bin folder. If it is not there, download it from the github repo. https://github.com/cdarlint/winutils/blob/master/hadoop-3.2.1/bin/hadoop.dll It worked for me. A: For PySpark 3.3.1, Win10, Java 18. Download the bin folder (from here https://github.com/cdarlint/winutils or here https://github.com/steveloughran/winutils ) with winutils.exe and hadoop.dll inside it (my version is 3.0.0, or any other version greater than 3.0.0), and put the folder into C:\hadoop. Go to System variables, create HADOOP_HOME, and set it to C:\hadoop. Then add %HADOOP_HOME%\bin to Path under System variables. If any issues still occur when writing json or parquet files etc., try putting the hadoop.dll file into the System32 folder. That's it.
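Note (editorial addition): as a quick sanity check before touching system settings, HADOOP_HOME and PATH can also be set from the script itself, as long as this happens before the SparkSession (and hence the JVM) is created. A sketch, assuming the winutils bin folder was unpacked to C:\hadoop\bin as described in the answer above:

import os

# assumed location of winutils.exe and hadoop.dll -- adjust to your machine
os.environ["HADOOP_HOME"] = r"C:\hadoop"
os.environ["PATH"] = r"C:\hadoop\bin;" + os.environ["PATH"]

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("write-test").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
df.write.mode("overwrite").csv("sdf")  # should now commit without the NativeIO error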
I can read from a local file in PySpark but I can't write a data frame to a local file
df.write.csv("sdf") " 21/07/24 15:27:23 ERROR FileFormatWriter: Aborting job a9914f88-3ab9-480a-984f-33d0e598c0fc. java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method) at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:645) at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1230) at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1435) at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:493) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:678) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.getAllCommittedTaskPaths(FileOutputCommitter.java:332) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:402) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:375) at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:182) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:220) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131) at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180) at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131) at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438) at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293) at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:979) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Unknown Source) Traceback (most recent call last): File "", line 1, in File "C:\spark\python\pyspark\sql\readwriter.py", line 1372, in csv self._jwrite.csv(path) File "C:\spark\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py", line 1305, in call File "C:\spark\python\pyspark\sql\utils.py", line 111, in deco return f(*a, **kw) File "C:\spark\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py", line 328, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o40.csv. : org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131) at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180) at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131) at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438) at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293) at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:979) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at 
py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Unknown Source) Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method) at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:645) at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1230) at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1435) at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:493) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:678) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.getAllCommittedTaskPaths(FileOutputCommitter.java:332) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:402) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:375) at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:182) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:220)
[ "Apart from getting winutils.exe and setting the hadoop_home.\nPlease check if you have hadoop.dll binary in your bin or not.\nIf not there then download it from the github repo.\nhttps://github.com/cdarlint/winutils/blob/master/hadoop-3.2.1/bin/hadoop.dll\nIt worked for me.\n", "For PySpark 3.3.1, Win10, Java 18.\n\nDownload bin folder (from here https://github.com/cdarlint/winutils or here https://github.com/steveloughran/winutils ) with winutils.exe and hadoop.dll inside it (my version is 3.0.0 or any other greater then 3.0.0), put the folder into C:\\hadoop\nGo to System variables and create HADOOP_HOME and set to C\\hadoop.\nThen add it to path under System vars like that %HADOOP_HOME%\\bin .\n\nIf any issues occur during writing the file json or parquet etc. Just try putting the hadoop.dll file into System32 folder. That's it.\n" ]
[ 1, 0 ]
[]
[]
[ "apache_spark", "pyspark", "python" ]
stackoverflow_0068509434_apache_spark_pyspark_python.txt
Q: Break Python Functions into Other Files So, to keep it simple: I just want to be able to reference the function in C from A, and I'm not sure how to do this in Python without directly referencing c, which I don't want to do. a.py references b.py; b.py references c.py; c.py has a function in it called foo. I want to call foo from a.py, but for abstraction purposes I want to only reference b.py. I tried to set up a simple example like the one above before attempting this on my actual codebase, but it seems I can't see the member in C from A. A: Not the most ideal solution, but it appears that if I create a member for each sub-file in b.py, it will allow me to access it from a.py. It would look something like this c.py def call_me_c(): print("It works from c") b.py import c cfile = c a.py import b b.cfile.call_me_c()
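Note (editorial addition): the more idiomatic way to build this kind of facade in Python is to re-export the name in b.py, so a.py never sees c at all. A minimal sketch using the same files as the answer:

c.py

def call_me_c():
    print("It works from c")

b.py

from c import call_me_c  # re-export: b's callers never reference c

a.py

import b

b.call_me_c()  # a.py only references b

Optionally add __all__ = ["call_me_c"] to b.py to make the re-exported public surface explicit.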
Break Python Functions into Other Files
So to keep it simple I just want to be able to reference the function in C from A and I'm not sure how to do this in python without directly referencing c which I don't want to do a.py references b.py b.py references c.py c.py has a function in it called foo I want to call foo from a.py but for abstraction purposes I want to only reference b.py I tried to set up a simple example like the one above before I start attempting this on my actual codebase but it seems I can't see the member in C from A
[ "So not the most ideal solution but it appears if I create a member for each sub file in b.py it will allow me to access it from a.py\nIt would look something like this\nc.py\ndef call_me_c():\n print(\"It works from c\")\n\nb.py\nimport c\ncfile = c\n\na.py\nimport b\nb.cfile.call_me_c()\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074482785_python.txt
Q: How to unstack a dataset to a certain dataframe? I have a dataset like this data = {'weight': ['NaN',2,3,4,'NaN',6,7,8,9,'NaN',11,12,13,14,15], 'MI': ['NaN', 21, 19, 18, 'NaN',16,15,14,13,'NaN',11,10,9,8,7]} df = pd.DataFrame(data, index= ['group1', "gene1", "gene2", 'gene3', 'group2', "gene1", 'gene21', 'gene4', 'gene7', 'group3', 'gene2', 'gene10', 'gene3', 'gene43', 'gene1']) I need to unstack it into a gene-by-group dataframe with MI values. If there are no gene values for a particular group, the imputed value should be 0.1. The 'weight' column should be removed. The final dataframe should look like this A: You can use: m = df['weight'].ne('NaN') (df[m] .set_index((~m).cumsum()[m], append=True)['MI'] .unstack('weight', fill_value=0.1) .add_prefix('group') ) Variant with pivot: m = df['weight'].ne('NaN') (df.assign(col=(~m).cumsum()) .loc[m] .pivot(columns='col', values='MI') .fillna(0.1) .add_prefix('group') ) Output: weight group1 group2 group3 gene1 21 16 7 gene10 0.1 0.1 10 gene2 19 0.1 11 gene21 0.1 15 0.1 gene3 18 0.1 9 gene4 0.1 14 0.1 gene43 0.1 0.1 8 gene7 0.1 13 0.1 A: Another option, still with pivot: (df .assign( grp = np.where(df.weight.eq('NaN'), df.index, np.nan), group = lambda df: df.grp.ffill()) .loc[lambda df: df.weight.ne('NaN'), ['MI', 'group']] .pivot(index=None, columns='group', values='MI') .fillna(0.1) ) group group1 group2 group3 gene1 21.0 16.0 7.0 gene10 0.1 0.1 10.0 gene2 19.0 0.1 11.0 gene21 0.1 15.0 0.1 gene3 18.0 0.1 9.0 gene4 0.1 14.0 0.1 gene43 0.1 0.1 8.0 gene7 0.1 13.0 0.1 Another process, for the fun of it, and possibly an inkling into the complexities that pivot abstracts: # get positions for group1,2,3: positions = df.weight.eq('NaN').to_numpy().nonzero()[0] groups = df.index[positions] #get lengths of rows between group1,2,3: positions = np.diff(np.append(positions, len(df))) - 1 # compute the cumulative positions and pass it to np.split: arrays = np.split(df.MI.loc[df.MI.ne('NaN')], np.cumsum(positions)) #concatenate the arrays, get the final output: pd.concat(arrays, axis = 1, keys = groups).fillna(0.1) group1 group2 group3 gene1 21.0 16.0 7.0 gene2 19.0 0.1 11.0 gene3 18.0 0.1 9.0 gene21 0.1 15.0 0.1 gene4 0.1 14.0 0.1 gene7 0.1 13.0 0.1 gene10 0.1 0.1 10.0 gene43 0.1 0.1 8.0
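Note (editorial addition, a sketch only, written against the sample data in the question): the group labels from the second answer and the unstack/fill_value idea from the first can also be combined without apply:

import numpy as np
import pandas as pd

m = df['weight'].ne('NaN').to_numpy()  # True on gene rows
# propagate 'group1', 'group2', ... down onto the gene rows below them
group = pd.Series(np.where(m, np.nan, df.index), index=df.index).ffill().to_numpy()

out = (pd.Series(pd.to_numeric(df['MI'], errors='coerce').to_numpy()[m],
                 index=pd.MultiIndex.from_arrays([df.index.to_numpy()[m], group[m]],
                                                 names=['gene', 'group']))
         .unstack('group', fill_value=0.1))
print(out)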
How to unstack a dataset to a certain dataframe?
I have a dataset like this data = {'weight': ['NaN',2,3,4,'NaN',6,7,8,9,'NaN',11,12,13,14,15], 'MI': ['NaN', 21, 19, 18, 'NaN',16,15,14,13,'NaN',11,10,9,8,7]} df = pd.DataFrame(data, index= ['group1', "gene1", "gene2", 'gene3', 'group2', "gene1", 'gene21', 'gene4', 'gene7', 'group3', 'gene2', 'gene10', 'gene3', 'gene43', 'gene1']) I need to unstack it into a gene-by-group dataframe with MI values. If there are no gene values for a particular group, the imputed value should be 0.1. The 'weight' column should be removed. The final dataframe should look like this
[ "You can use:\nm = df['weight'].ne('NaN')\n\n(df[m]\n .set_index((~m).cumsum()[m], append=True)['MI']\n .unstack('weight', fill_value=0.1)\n .add_prefix('group')\n )\n\nVariant with pivot:\nm = df['weight'].ne('NaN')\n\n(df.assign(col=(~m).cumsum())\n .loc[m]\n .pivot(columns='col', values='MI')\n .fillna(0.1)\n .add_prefix('group')\n )\n\nOutput:\nweight group1 group2 group3\ngene1 21 16 7\ngene10 0.1 0.1 10\ngene2 19 0.1 11\ngene21 0.1 15 0.1\ngene3 18 0.1 9\ngene4 0.1 14 0.1\ngene43 0.1 0.1 8\ngene7 0.1 13 0.1\n\n", "Another option, still with pivot:\n(df\n.assign(\n grp = np.where(df.weight.eq('NaN'), df.index, np.nan), \n group = lambda df: df.grp.ffill())\n.loc[lambda df: df.weight.ne('NaN'), ['MI', 'group']]\n.pivot(index=None, columns='group', values='MI')\n.fillna(0.1)\n)\n\ngroup group1 group2 group3\ngene1 21.0 16.0 7.0\ngene10 0.1 0.1 10.0\ngene2 19.0 0.1 11.0\ngene21 0.1 15.0 0.1\ngene3 18.0 0.1 9.0\ngene4 0.1 14.0 0.1\ngene43 0.1 0.1 8.0\ngene7 0.1 13.0 0.1\n\nAnother process, for the fun of it, and possibly an inkling into the complexities that pivot abstracts:\n# get positions for group1,2,3:\npositions = df.weight.eq('NaN').to_numpy().nonzero()[0]\ngroups = df.index[positions]\n\n#get lengths of rows between group1,2,3:\npositions = np.diff(np.append(positions, len(df))) - 1\n\n# compute the cumulative positions and pass it to np.split:\narrays = np.split(df.MI.loc[df.MI.ne('NaN')], np.cumsum(positions))\n\n#concatenate the arrays, get the final output:\npd.concat(arrays, axis = 1, keys = groups).fillna(0.1)\n\n group1 group2 group3\ngene1 21.0 16.0 7.0\ngene2 19.0 0.1 11.0\ngene3 18.0 0.1 9.0\ngene21 0.1 15.0 0.1\ngene4 0.1 14.0 0.1\ngene7 0.1 13.0 0.1\ngene10 0.1 0.1 10.0\ngene43 0.1 0.1 8.0\n\n" ]
[ 4, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074481613_pandas_python.txt
Q: How can I deal with unwanted numbers in my dataframe where I only want date format? In my dataframe I have a column called Competencia which should contain only dates in the format YYYY-MM-DD. But I just found out that I have a couple of zeros and, out of nowhere, a number like 11, which shouldn't be in it. The column Competencia currently has the dtype object. I normally created a new column called dt_COMP and converted all the dates into datetime, which is the format I need. Now my question is, what do I do with the unwanted 0's and the 11, and how do I do it? If I try to run my code ignoring the 0's and 11, I get this error, which I don't understand: ParserError: Unknown string format: COMPETENCIA This is the code that I use to convert COMPETENCIA into datetime in a new column called dt_COMP: #Converting 'COMPETENCIA' into date -> new Column: 'dt_COMP' df30_new['dt_COMP'] = pd.to_datetime(df30_new['COMPETENCIA'], yearfirst=True) df30_new['dt_COMP'] = pd.to_datetime(df30_new['dt_COMP']).dt.date df30_new["dt_COMP"] = pd.to_datetime(df30_new["COMPETENCIA"], format="%Y/%m/%d") In the screenshot you can see the zeros and the 11. A: Something along the lines of this if I remember rightly - doing from memory: from datetime import datetime df30_new['COMPETENCIA_TRIMMED'] = df30_new['COMPETENCIA'].apply(lambda x: x if isinstance(x, datetime) else None) (Series.apply takes no axis argument, so none is passed here.) Maybe check the datatype of one of the correct values for this to work as expected. Check out apply in the pandas docs.
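Note (editorial addition): the stray 0s and 11 can also be handled directly by pd.to_datetime with errors='coerce', which turns anything unparseable into NaT instead of raising. A sketch, assuming the valid entries really are YYYY-MM-DD strings:

import pandas as pd

df30_new['dt_COMP'] = pd.to_datetime(df30_new['COMPETENCIA'],
                                     format='%Y-%m-%d', errors='coerce')

# rows where dt_COMP is NaT are the unwanted 0s and 11s; inspect or drop them
bad = df30_new[df30_new['dt_COMP'].isna()]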
How can I deal with unwanted numbers in my dataframe where I only want date format?
In my dataframe I have a column called Competencia which should contain only dates in the format YYYY-MM-DD. But I just found out that I have a couple of zeros and, out of nowhere, a number like 11, which shouldn't be in it. The column Competencia currently has the dtype object. I normally created a new column called dt_COMP and converted all the dates into datetime, which is the format I need. Now my question is, what do I do with the unwanted 0's and the 11, and how do I do it? If I try to run my code ignoring the 0's and 11, I get this error, which I don't understand: ParserError: Unknown string format: COMPETENCIA This is the code that I use to convert COMPETENCIA into datetime in a new column called dt_COMP: #Converting 'COMPETENCIA' into date -> new Column: 'dt_COMP' df30_new['dt_COMP'] = pd.to_datetime(df30_new['COMPETENCIA'], yearfirst=True) df30_new['dt_COMP'] = pd.to_datetime(df30_new['dt_COMP']).dt.date df30_new["dt_COMP"] = pd.to_datetime(df30_new["COMPETENCIA"], format="%Y/%m/%d") In the screenshot you can see the zeros and the 11.
[ "Something along the lines of this if I remember rightly - doing from memory\ndf30_new['COMPETENCIA_TRIMMED'] = df30_new['COMPETENCIA'].apply(lambda x: x if type(x) is datetime else None, axis=1)\nMaybe check the datatype of one of the correct values for this to work as expected.\nCheck out apply in the pandas docs.\n" ]
[ 0 ]
[]
[]
[ "datetime", "dtype", "object", "pandas", "python" ]
stackoverflow_0074482804_datetime_dtype_object_pandas_python.txt
Q: How to convert negative strings into float numbers in pandas? I have a series of negative strings in my dataset. I'd like to convert them into negative floats, but get the ValueError: could not convert string to float: '-'. I suppose there is a problem with the encoding format, so I tried to replace - with the Unicode - hyphen, but got the same error anyway. I've tried to replace every possible Unicode code with a normal hyphen, but it didn't work. I use Python 3.8.1 and pandas 1.0.2. Are there any workarounds? P.S. There is a similar question here, but it didn't help. Here is what I've done: The dataset is here. It's called '1240K+HO', extension .anno. Then: # open file df = pd.read_table('v42.4.1240K_HO.anno', index_col=0, usecols=['Index', 'Instance ID', 'Master ID', 'Average of 95.4% date range in calBP (defined as 1950 CE)', 'Country', 'Lat.', 'Long.'], na_values='..') Then I try to convert the strings in the 'Lat.' column to float numbers. # convert strings to floats df['Lat.'] = df['Lat.'].astype(float) A: The issue is that there is at least one '-' value. That's it, just a hyphen with no figure after it. You can do this: import numpy as np df['Lat.'] = df['Lat.'].replace('-',np.nan) Then this will work: df['Lat.'] = df['Lat.'].astype(float) A: In case you still get an error, you can use pd.to_numeric with errors='coerce' to convert non-numeric elements to NaN. You can then convert all NaN to 0 or whatever you wish from there. import pandas as pd df['Lat.'] = pd.to_numeric(df['Lat.'],errors='coerce')
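Note (editorial addition): before coercing, it can help to see exactly which strings fail to parse; pd.to_numeric makes that a one-liner diagnostic:

import pandas as pd

lat = pd.to_numeric(df['Lat.'], errors='coerce')
bad = df.loc[lat.isna() & df['Lat.'].notna(), 'Lat.']
print(bad.unique())  # e.g. ['-'] -- the bare hyphens the first answer points at

df['Lat.'] = lat  # keep the coerced float column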
How to convert negative strings into float numbers in pandas?
I have a series of negative strings in my dataset. I'd like to convert them into negative floats, but get the ValueError: could not convert string to float: '-'. I suppose there is a problem with the encoding format, so I tried to replace - with the Unicode - hyphen, but got the same error anyway. I've tried to replace every possible Unicode code with a normal hyphen, but it didn't work. I use Python 3.8.1 and pandas 1.0.2. Are there any workarounds? P.S. There is a similar question here, but it didn't help. Here is what I've done: The dataset is here. It's called '1240K+HO', extension .anno. Then: # open file df = pd.read_table('v42.4.1240K_HO.anno', index_col=0, usecols=['Index', 'Instance ID', 'Master ID', 'Average of 95.4% date range in calBP (defined as 1950 CE)', 'Country', 'Lat.', 'Long.'], na_values='..') Then I try to convert the strings in the 'Lat.' column to float numbers. # convert strings to floats df['Lat.'] = df['Lat.'].astype(float)
[ "The issue is that there is at least one '-' value. That's it, just a hyphen with no figure after it.\nYou can do this:\nimport numpy as np\n\ndf['Lat.'] = df['Lat.'].replace('-',np.nan)\n\nThen this will work:\ndf['Lat.'] = df['Lat.'].astype(float)\n\n", "in case you still get an error you can use pd.to_numeric with coerce to convert non-numeric elements to NaN. you can then get convert all NaN to 0 or whatever you wish from there\nimport pandas as pd\n\ndf['Lat.'] = pd.to_numeric(df['Lat.'],errors='coerce')\n\n" ]
[ 1, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0060748254_pandas_python.txt
Q: Download the attachment only from the latest outlook email using Python? I am trying to download and save the outlook email attachment from the most recent email in a folder. I have code that downloads all of the attachments from an Outlook folder and saves them. Any help is appreciated. from pathlib import Path import win32com.client output_dir = Path.home()/r"Documents\Test" output_dir.mkdir(parents=True, exist_ok=True) outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI") inbox = outlook.GetDefaultFolder(6).folders("Sample Folder").folders("Sample Subfolder") messages = inbox.Items message = messages.GetFirst() for message in messages: if message.Subject == 'Sample Subject' or message.Subject == 'Sample Subject 2': attachments = message.Attachments subject = messages.GetFirst().Subject for attachment in attachments: attachment.SaveAsFile(output_dir / str(attachment)) A: I am trying to download and save the outlook email attachment from the most recent email in a folder. To get the most recent item from the folder you need to sort the collection first by using the Sort method in the following way (note that the property name must be spelled ReceivedTime, and passing True sorts descending so that GetFirst returns the newest item): messages = inbox.Items messages.Sort("[ReceivedTime]", True) message = messages.GetFirst() Also iterating over all items in the folder is not really a good idea: for message in messages: if message.Subject == 'Sample Subject' or message.Subject == 'Sample Subject 2': Instead, you need to use the Find/FindNext or Restrict methods of the Items class. They allow getting items that correspond to your conditions without iterating over all items in the folder. Read more about these methods in the articles that I wrote for the technical blog: How To: Use Find and FindNext methods to retrieve Outlook mail items from a folder (C#, VB.NET) How To: Use Restrict method to retrieve Outlook mail items from a folder
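Note (editorial addition): putting the answer's two suggestions together gives roughly the following sketch. The folder names and subject are the question's placeholders; the Restrict filter uses the standard Outlook Jet syntax:

from pathlib import Path
import win32com.client

output_dir = Path.home() / "Documents" / "Test"
output_dir.mkdir(parents=True, exist_ok=True)

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6).Folders("Sample Folder").Folders("Sample Subfolder")

messages = inbox.Items.Restrict("[Subject] = 'Sample Subject'")  # filter without looping
messages.Sort("[ReceivedTime]", True)  # newest first
latest = messages.GetFirst()

if latest is not None:
    for attachment in latest.Attachments:
        attachment.SaveAsFile(str(output_dir / attachment.FileName))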
Download the attachment only from the latest outlook email using Python?
I am trying to download and save the outlook email attachment from the most recent email in a folder. I have code that downloads all of the attachments from an Outlook folder and saves them. Any help is appreciated. from pathlib import Path import win32com.client output_dir = Path.home()/r"Documents\Test" output_dir.mkdir(parents=True, exist_ok=True) outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI") inbox = outlook.GetDefaultFolder(6).folders("Sample Folder").folders("Sample Subfolder") messages = inbox.Items message = messages.GetFirst() for message in messages: if message.Subject == 'Sample Subject' or message.Subject == 'Sample Subject 2': attachments = message.Attachments subject = messages.GetFirst().Subject for attachment in attachments: attachment.SaveAsFile(output_dir / str(attachment))
[ "\nI am trying to download and save the outlook email attachment from the most recent email in a folder.\n\nTo get the most recent item from the folder you need to sort the collection first by using the Sort method in the following way (VBA):\nmessages = inbox.Items\nmessages.Sort(\"[RecievedTime]\", false) \nmessage = messages.GetFirst()\n\nAlso iterating over all items in the folder is not really a good idea:\nfor message in messages:\n if message.Subject == 'Sample Subject' or message.Subject == 'Sample Subject 2':\n\nInstead, you need to use the Find/FindNext or Restrict methods of the Items class. They allows getting items that correspond to your conditions without iterating over all items in the folder. Read more about these methods in the articles that I wrote for the technical blog:\n\nHow To: Use Find and FindNext methods to retrieve Outlook mail items from a folder (C#, VB.NET)\nHow To: Use Restrict method to retrieve Outlook mail items from a folder\n\n" ]
[ 0 ]
[]
[]
[ "email_attachments", "office_automation", "outlook", "python", "pywin32" ]
stackoverflow_0074471096_email_attachments_office_automation_outlook_python_pywin32.txt
Q: How to read a csv file into a multidimensional list in Python I'm trying to read a csv file into a multidimensional list with 52 rows and 7 columns. Currently it's only displaying me the last line as 52 rows of the csv file. I am pretty sure there is something wrong in my readfile function but I couldn't figure it out where I'm making the mistake. Here is my code: rows = 52 cols = 7 def readFile(): matrix = [] file = open("rainfall.csv","r") for line in file: data = line.split(",") for row in range(rows): matrix.append([]) for col in range(cols): matrix[row].append(data[col]) return matrix def display(matrix): for counter in matrix: for values in counter: print(values, end=" ") print() matrix = readFile() display(matrix) Here is the output: 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 I have the following csv file: 0,0,30,2,21,13,23 29,3,29,30,7,8,25 26,5,26,13,4,13,4 22,30,13,15,15,0,2 3,12,11,10,17,0,15 8,13,11,24,30,24,27 22,18,2,29,11,13,18 15,1,29,23,18,7,0 23,27,3,7,13,14,28 6,25,24,14,20,23,5 24,29,26,22,0,9,18 22,27,22,20,24,29,21 23,13,14,4,13,1,21 25,21,21,6,28,17,19 4,6,11,10,21,1,5 11,7,22,11,10,24,15 25,11,23,3,23,8,3 22,23,0,29,15,12,5 21,11,18,22,1,4,3 11,10,3,1,30,14,22 2,16,10,2,12,9,9 2,29,17,16,13,18,7 22,15,27,19,6,26,11 21,7,18,4,14,14,2 6,30,12,4,26,22,11 21,16,14,11,28,20,3 19,10,22,18,30,9,27 8,15,17,4,11,16,6 19,17,16,6,18,18,6 2,15,3,25,27,16,11 15,5,26,24,24,30,5 15,11,16,22,14,23,28 25,6,7,20,26,18,16 5,5,21,22,24,16,5 6,27,11,8,24,1,16 28,4,1,4,3,19,24 19,3,27,14,12,24,0 6,3,26,15,15,22,26 18,5,0,14,15,7,26 10,5,12,22,8,7,11 11,1,18,29,6,9,26 3,23,2,21,29,15,25 5,7,1,6,15,18,24 28,11,0,6,28,11,26 4,28,9,24,11,13,2 6,2,14,18,20,21,1 20,29,22,21,11,14,20 28,23,14,17,25,3,18 6,27,6,20,19,5,24 25,3,27,22,7,12,21 12,22,8,7,0,11,8 8,25,1,6,21,23,0 A: Use the built-in csv module: import csv from pprint import pprint with open('input.csv', newline='') as f: reader = csv.reader(f) data = [[int(x) for x in line] for line in reader] pprint(data) Output: [[0, 0, 30, 2, 21, 13, 23], [29, 3, 29, 30, 7, 8, 25], [26, 5, 26, 13, 4, 13, 4], [22, 30, 13, 15, 15, 0, 2], [3, 12, 11, 10, 17, 0, 15], [8, 13, 11, 24, 30, 24, 27], [22, 18, 2, 29, 11, 13, 18], [15, 1, 29, 23, 18, 7, 0], [23, 27, 3, 7, 13, 14, 28], [6, 25, 24, 14, 20, 23, 5], [24, 29, 26, 22, 0, 9, 18], [22, 27, 22, 20, 24, 29, 21], [23, 13, 14, 4, 13, 1, 21], [25, 21, 21, 6, 28, 17, 19], [4, 6, 11, 10, 21, 1, 5], [11, 7, 22, 11, 10, 24, 15], [25, 11, 23, 3, 23, 8, 3], [22, 23, 0, 29, 15, 12, 5], [21, 11, 18, 22, 1, 4, 3], [11, 10, 3, 1, 30, 14, 22], [2, 16, 10, 2, 12, 9, 9], [2, 29, 17, 16, 13, 18, 7], [22, 15, 27, 19, 6, 26, 11], [21, 7, 18, 4, 14, 14, 2], [6, 30, 12, 4, 26, 22, 11], [21, 16, 
14, 11, 28, 20, 3], [19, 10, 22, 18, 30, 9, 27], [8, 15, 17, 4, 11, 16, 6], [19, 17, 16, 6, 18, 18, 6], [2, 15, 3, 25, 27, 16, 11], [15, 5, 26, 24, 24, 30, 5], [15, 11, 16, 22, 14, 23, 28], [25, 6, 7, 20, 26, 18, 16], [5, 5, 21, 22, 24, 16, 5], [6, 27, 11, 8, 24, 1, 16], [28, 4, 1, 4, 3, 19, 24], [19, 3, 27, 14, 12, 24, 0], [6, 3, 26, 15, 15, 22, 26], [18, 5, 0, 14, 15, 7, 26], [10, 5, 12, 22, 8, 7, 11], [11, 1, 18, 29, 6, 9, 26], [3, 23, 2, 21, 29, 15, 25], [5, 7, 1, 6, 15, 18, 24], [28, 11, 0, 6, 28, 11, 26], [4, 28, 9, 24, 11, 13, 2], [6, 2, 14, 18, 20, 21, 1], [20, 29, 22, 21, 11, 14, 20], [28, 23, 14, 17, 25, 3, 18], [6, 27, 6, 20, 19, 5, 24], [25, 3, 27, 22, 7, 12, 21], [12, 22, 8, 7, 0, 11, 8], [8, 25, 1, 6, 21, 23, 0]] A: Change for line in file: data = line.split(",") to: data=[] for line in file: data.append(line.split(",")) Then, because data is now a list of row lists, the inner loop should index both the row and the column: matrix[row].append(data[row][col]) A: Please use the stdlib's csv module for this. It is designed for this exact purpose. Also, use the with statement for opening files. It will handle closing the file for you, even in the case that an exception is raised. import csv def readFile(): with open("rainfall.csv","r") as f: matrix = list(csv.reader(f)) return matrix A: In just 4 lines: #Read data file with open('rainfall.csv', 'r') as f: lst = [[int(num) for num in line.split(',')] for line in f] #Display result for i in lst: print(*i)
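Note (editorial addition): if numpy is already a dependency, the whole file loads in one call — a sketch, not taken from the answers above:

import numpy as np

matrix = np.loadtxt('rainfall.csv', delimiter=',', dtype=int)  # shape (52, 7)
print(matrix.tolist())  # plain nested lists, if a list of lists is required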
How to read a csv file into a multidimensional list in Python
I'm trying to read a csv file into a multidimensional list with 52 rows and 7 columns. Currently it's only displaying me the last line as 52 rows of the csv file. I am pretty sure there is something wrong in my readfile function but I couldn't figure it out where I'm making the mistake. Here is my code: rows = 52 cols = 7 def readFile(): matrix = [] file = open("rainfall.csv","r") for line in file: data = line.split(",") for row in range(rows): matrix.append([]) for col in range(cols): matrix[row].append(data[col]) return matrix def display(matrix): for counter in matrix: for values in counter: print(values, end=" ") print() matrix = readFile() display(matrix) Here is the output: 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 8 25 1 6 21 23 0 I have the following csv file: 0,0,30,2,21,13,23 29,3,29,30,7,8,25 26,5,26,13,4,13,4 22,30,13,15,15,0,2 3,12,11,10,17,0,15 8,13,11,24,30,24,27 22,18,2,29,11,13,18 15,1,29,23,18,7,0 23,27,3,7,13,14,28 6,25,24,14,20,23,5 24,29,26,22,0,9,18 22,27,22,20,24,29,21 23,13,14,4,13,1,21 25,21,21,6,28,17,19 4,6,11,10,21,1,5 11,7,22,11,10,24,15 25,11,23,3,23,8,3 22,23,0,29,15,12,5 21,11,18,22,1,4,3 11,10,3,1,30,14,22 2,16,10,2,12,9,9 2,29,17,16,13,18,7 22,15,27,19,6,26,11 21,7,18,4,14,14,2 6,30,12,4,26,22,11 21,16,14,11,28,20,3 19,10,22,18,30,9,27 8,15,17,4,11,16,6 19,17,16,6,18,18,6 2,15,3,25,27,16,11 15,5,26,24,24,30,5 15,11,16,22,14,23,28 25,6,7,20,26,18,16 5,5,21,22,24,16,5 6,27,11,8,24,1,16 28,4,1,4,3,19,24 19,3,27,14,12,24,0 6,3,26,15,15,22,26 18,5,0,14,15,7,26 10,5,12,22,8,7,11 11,1,18,29,6,9,26 3,23,2,21,29,15,25 5,7,1,6,15,18,24 28,11,0,6,28,11,26 4,28,9,24,11,13,2 6,2,14,18,20,21,1 20,29,22,21,11,14,20 28,23,14,17,25,3,18 6,27,6,20,19,5,24 25,3,27,22,7,12,21 12,22,8,7,0,11,8 8,25,1,6,21,23,0
[ "Use the built-in csv module:\nimport csv\nfrom pprint import pprint\n\nwith open('input.csv', newline='') as f:\n reader = csv.reader(f)\n data = [[int(x) for x in line] for line in reader]\n\npprint(data)\n\nOutput:\n[[0, 0, 30, 2, 21, 13, 23],\n [29, 3, 29, 30, 7, 8, 25],\n [26, 5, 26, 13, 4, 13, 4],\n [22, 30, 13, 15, 15, 0, 2],\n [3, 12, 11, 10, 17, 0, 15],\n [8, 13, 11, 24, 30, 24, 27],\n [22, 18, 2, 29, 11, 13, 18],\n [15, 1, 29, 23, 18, 7, 0],\n [23, 27, 3, 7, 13, 14, 28],\n [6, 25, 24, 14, 20, 23, 5],\n [24, 29, 26, 22, 0, 9, 18],\n [22, 27, 22, 20, 24, 29, 21],\n [23, 13, 14, 4, 13, 1, 21],\n [25, 21, 21, 6, 28, 17, 19],\n [4, 6, 11, 10, 21, 1, 5],\n [11, 7, 22, 11, 10, 24, 15],\n [25, 11, 23, 3, 23, 8, 3],\n [22, 23, 0, 29, 15, 12, 5],\n [21, 11, 18, 22, 1, 4, 3],\n [11, 10, 3, 1, 30, 14, 22],\n [2, 16, 10, 2, 12, 9, 9],\n [2, 29, 17, 16, 13, 18, 7],\n [22, 15, 27, 19, 6, 26, 11],\n [21, 7, 18, 4, 14, 14, 2],\n [6, 30, 12, 4, 26, 22, 11],\n [21, 16, 14, 11, 28, 20, 3],\n [19, 10, 22, 18, 30, 9, 27],\n [8, 15, 17, 4, 11, 16, 6],\n [19, 17, 16, 6, 18, 18, 6],\n [2, 15, 3, 25, 27, 16, 11],\n [15, 5, 26, 24, 24, 30, 5],\n [15, 11, 16, 22, 14, 23, 28],\n [25, 6, 7, 20, 26, 18, 16],\n [5, 5, 21, 22, 24, 16, 5],\n [6, 27, 11, 8, 24, 1, 16],\n [28, 4, 1, 4, 3, 19, 24],\n [19, 3, 27, 14, 12, 24, 0],\n [6, 3, 26, 15, 15, 22, 26],\n [18, 5, 0, 14, 15, 7, 26],\n [10, 5, 12, 22, 8, 7, 11],\n [11, 1, 18, 29, 6, 9, 26],\n [3, 23, 2, 21, 29, 15, 25],\n [5, 7, 1, 6, 15, 18, 24],\n [28, 11, 0, 6, 28, 11, 26],\n [4, 28, 9, 24, 11, 13, 2],\n [6, 2, 14, 18, 20, 21, 1],\n [20, 29, 22, 21, 11, 14, 20],\n [28, 23, 14, 17, 25, 3, 18],\n [6, 27, 6, 20, 19, 5, 24],\n [25, 3, 27, 22, 7, 12, 21],\n [12, 22, 8, 7, 0, 11, 8],\n [8, 25, 1, 6, 21, 23, 0]]\n\n", "Change\n for line in file:\n data = line.split(\",\")\n\nto:\n data=[]\n for line in file:\n data.append(line.split(\",\"))\n\n", "Please use the stdlib's csv module for this. It is designed for this exact purpose. Also, use the with statement for opening files. It will handle closing the file for you, even in the case that an exception is raised.\nimport csv\n\ndef readFile():\n with open(\"rainfall.csv\",\"r\") as f:\n matrix = list(csv.reader(f))\n return matrix\n\n", "Just on 4 lines:\n#Read data file\nwith open('rainfall.csv', 'r') as f:\n lst = [[int(num) for num in line.split(',')] for line in f]\n\n#Display result\nfor i in lst:\n print(*i)\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074482853_python.txt
Q: How to change color of data points on a scatter plot according to an age range? !(https://i.stack.imgur.com/FX1vB.png) !(https://i.stack.imgur.com/mGajr.png) Hello everyone, I am very new to Python so bear with me. I am sure this is an easy answer. Above is my scatter plot, with GOLF Data from Kaggle. The X variable is Fairway Hit% and the Y variable is Average Driving Distance. I can see there is a slight negative correlation in the data. Each red dot is a player. I want to make each dot a different color based on the age of the player. There is a whole series in my data set titled 'AGE' and it varies from 21 to 49. For example, I want to have players that are aged 20-29 be a blue dot, aged 30-39 be a red dot, and aged 40-49 be a yellow dot. I have tried to research this without much success, and I tried to write code like the third picture above. I tried to define a subseries of 'AGE' as something like 'AGE' >= 20 <= 29. I haven't had any luck and I'm sure this isn't too difficult, so any help would be appreciated. Thank you. INCORRECT DATA I tried to make each dot a different color that was representative of the age of the golfer. A: import pandas as pd df = pd.DataFrame({'Age': [18, 22, 26, 36, 47, 78]}) YOUNG = df[(df['Age']>=20) & (df['Age']<=29)] YOUNG Or, if the type of Age is string: import pandas as pd df = pd.DataFrame({'Age': ['18', '22', '26', '36', '47', '78']}) df['Age'] = df['Age'].astype('int64') YOUNG = df[(df['Age']>=20) & (df['Age']<=29)] YOUNG
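Note (editorial addition): to turn those age-range subsets into actual dot colors, one scatter call per range works and also produces a legend. A sketch — the axis column names are placeholders, since the Kaggle dataset's exact headers are not shown in the question:

import matplotlib.pyplot as plt

# hypothetical column names -- replace with the dataset's real headers
for lo, hi, color in [(20, 29, 'blue'), (30, 39, 'red'), (40, 49, 'yellow')]:
    sub = df[(df['AGE'] >= lo) & (df['AGE'] <= hi)]
    plt.scatter(sub['FAIRWAY_PCT'], sub['AVG_DRIVE_DIST'], c=color, label=f'{lo}-{hi}')

plt.xlabel('Fairway Hit%')
plt.ylabel('Average Driving Distance')
plt.legend(title='Age')
plt.show()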
How to change color of data points on a scatter plot according to an age range?
!(https://i.stack.imgur.com/FX1vB.png) !(https://i.stack.imgur.com/mGajr.png) Hello everyone, I am very new to Python so bear with me. I am sure this is an easy answer. Above is my scatter plot, with GOLF Data from Kaggle. The X variable is Fairway Hit% and the Y variable is Average Driving Distance. I can see there is a slight negative correlation in the data. Each red dot is a player. I want to make each dot a different color based on the age of the player. There is a whole series in my data set titled 'AGE' and it varies from 21 to 49. For example, I want to have players that are aged 20-29 be a blue dot, aged 30-39 be a red dot, and aged 40-49 be a yellow dot. I have tried to research this to not much avail, as I tried to write code like the third picture above. I tried to define a subseries of 'AGE' as something like 'AGE' >= 20 <= 29. I haven't had any luck and I'm sure this isn't too difficult, so any help would be appreciated. Thank you. INCORRECT DATA I tried to make each dot a different color that was representative of the age of the golfer.
[ "import pandas as pd\ndf = pd.DataFrame({'Age': [18, 22, 26,36, 47,78]})\nYOUNG = df[(df['Age']>=20) & (df['Age']<=29)]\nYOUNG\n\nOr if the type of Age is string,\nimport pandas as pd\ndf = pd.DataFrame({'Age': ['18', '22', '26', '36', '47', '78']})\ndf['Age'] = df['Age'].astype('int64')\nYOUNG = df[(df['Age']>=20) & (df['Age']<=29)]\nYOUNG\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "pandas", "python", "scatter_plot" ]
stackoverflow_0074483044_matplotlib_pandas_python_scatter_plot.txt
Q: Extract FQDNs from a text file using Python I am trying to create a python script that downloads text files from a list of URLs and then concatenates them into a single file. This is what I have: import urllib import urllib.request import re import sys original_stdout = sys.stdout with open("blocklist_urls.txt", "r") as a: urls = a.readlines() retrieved_pages = [] for url in urls: retrieved_pages.append(urllib.request.urlopen(url).read()) with open('blocklist_raw.txt', 'w') as f: for page in retrieved_pages: sys.stdout = f decoded_line = page.decode("utf-8") print(decoded_line) sys.stdout = original_stdout and it does grab text files from all of the URLs I listed in my text file without error. It creates blocklist_raw.txt and it contains several blocklists formatted in such a way: # AdAway default blocklist # Blocking mobile ad providers and some analytics providers # # Project home page: # https://github.com/AdAway/adaway.github.io/ # # Fetch the latest version of this file: # https://raw.githubusercontent.com/AdAway/adaway.github.io/master/hosts.txt # # License: # CC Attribution 3.0 (http://creativecommons.org/licenses/by/3.0/) # # Contributions by: # Kicelo, Dominik Schuermann. # Further changes and contributors maintained in the commit history at # https://github.com/AdAway/adaway.github.io/commits/master # # Contribute: # Create an issue at https://github.com/AdAway/adaway.github.io/issues # # [163.com] 127.0.0.1 analytics.163.com 127.0.0.1 crash.163.com 127.0.0.1 crashlytics.163.com 127.0.0.1 iad.g.163.com # [1mobile.com] 127.0.0.1 ads.1mobile.com 127.0.0.1 api4.1mobile.com # [1rx.io] 127.0.0.1 sync.1rx.io 127.0.0.1 tag.1rx.io # [206ads.com] 127.0.0.1 s.206ads.com # [247-inc.net] 127.0.0.1 api.247-inc.net 127.0.0.1 tie.247-inc.net nav.booksonlineclub.com navi.businessconsults.net navi.earthsolution.org nci.bigdepression.net nci.dnsweb.org nci.safalife.com ncih.dnsweb.org ncsc.businessconsults.net ne.hugesoft.org nes.nationtour.net net.firefoxupdata.com net.infosupports.com new.arrowservice.net new.booksonlineclub.com new.firefoxupdata.com new.globalowa.com newport.bigdepression.net newport.infosupports.com newport.safalife.com news.advanbusiness.com news.aoldaily.com news.aolon1ine.com # Oldest record: 2021-09-03T02:06:30+02:00 # Number of source websites: 873 # Number of source subdomains: 1990306 # Number of source DNS records: ~2E9 + 1298142 # # Input rules: asns: 6, zones: 48 # Subsequent rules: asns: 6, hostnames: 122196, ip4s: 64, zones: 48 # … no duplicates: asns: 6, hostnames: 89794, zones: 48 # Output rules: hostnames: 122196 # 0.0.0.0 0001.ya-man.com 0.0.0.0 0002.onlyminerals.jp 0.0.0.0 000.affex.org # Title: NoTrack Malware Blocklist # Description: Domains classified as malware, phishing or adware # Author: QuidsUp # License: GNU General Public License v3.0 # Home: https://quidsup.net/notrack/blocklist.php # @ GitLab : https://gitlab.com/quidsup/notrack-blocklists # Updated: 08 Sep 2021 #LatestVersion 21.08 # Domain Count: 348 #=============================================================== 2track.info #Adware - Malware 4dsply.com #Adware - Malware acountscr.cool #Adware Lnkr - Malware ad2up.com #Adware - Malware adaranth.com #Adware - Malware adbigline.network #Malware - Malware addr.cx #Adware Lnkr - Malware adfuture.cn #Android Trojan - Malware adsunflower.com #Android Trojan - Malware adultsonly.pro #Generic - Malware Is there a way that I can keep only the FQDNs from blocklist_raw.txt and remove all other text? Any point in the right direction is appreciated. Thank you. 
EDIT: This is what I have thus far. I have never written Python before, so a lot of this may not make a whole bunch of sense: #!/usr/bin/env python3 import urllib import urllib.request import re import sys original_stdout = sys.stdout def removeStr(val): if val.count('.') >= 2: if val.count('/') <= 0: return val with open("blocklist_urls.txt", "r") as a: urls = a.readlines() retrieved_pages = [] for url in urls: retrieved_pages.append(urllib.request.urlopen(url).read()) for page in retrieved_pages: decoded_line = page.decode("utf-8") each_line = "\n".join(filter(removeStr, decoded_line.split())) urls_filtered_raw = each_line.replace('0.0.0.0', '\b').replace('127.0.0.1', '\b') with open('blocklist_raw.txt', 'w') as b: for page in retrieved_pages: sys.stdout = b print(urls_filtered_raw.rstrip("\n").rstrip("^H")) sys.stdout = original_stdout links = set() with open('blocklist_raw.txt', 'r') as fp: for line in fp.readlines(): links.add(line) with open('blocklist_raw.txt', 'w') as fp: for line in links: fp.write(line) sorted_urls_raw = open('blocklist_raw.txt', 'r') sorted_urls_list = sorted_urls_raw.readlines() split_hosts = [] for h in sorted_urls_list: segments = h.split('.') segments.reverse() split_hosts.append(segments) split_hosts.sort() for segments in split_hosts: segments.reverse() print(".".join(segments)) I'm trying to find a way to sort the output in alphabetical order and write the output back to file. Thanks guys. A: The simplest way is probably to use the IANA database of Top Level Domains (.com, .org, .net, ...). With this list, create a regex pattern to find all strings that match something like '*.tld': # Additional import import re # Get TLD database resp = urllib.request.urlopen('http://data.iana.org/TLD/tlds-alpha-by-domain.txt') # Create a reverse sorted list of TLD ('.com' must be before '.co') tld = sorted([tld.strip().lower().decode('utf-8') for tld in resp.readlines()[1:]], reverse=True) # Compile the regex pattern FQDN = re.compile(fr"([^\s]*\.(?:{'|'.join(tld)}))") # Find all fqdn with open('blocklist_raw.txt') as fp: fqdn_list = [] for line in fp.readlines(): line = line.strip().lower() # Remove comments and blank lines if (len(line) == 0) or line.startswith('#'): continue # Extract FQDN fqdn = FQDN.findall(line) if fqdn: fqdn_list.append(fqdn[0]) Output: >>> fqdn_list ['analytics.163.com', 'crash.163.com', 'crashlytics.163.com', 'iad.g.163.com', 'ads.1mobile.com', 'api4.1mobile.com', 'sync.1rx.io', 'tag.1rx.io', 's.206ads.com', 'api.247-inc.net', 'tie.247-inc.net', 'nav.booksonlineclub.com', 'navi.businessconsults.net', 'navi.earthsolution.org', 'nci.bigdepression.net', 'nci.dnsweb.org', 'nci.safalife.com', 'ncih.dnsweb.org', 'ncsc.businessconsults.net', 'ne.hugesoft.org', 'nes.nationtour.net', 'net.firefoxupdata.com', 'net.infosupports.com', 'new.arrowservice.net', 'new.booksonlineclub.com', 'new.firefoxupdata.com', 'new.globalowa.com', 'newport.bigdepression.net', 'newport.infosupports.com', 'newport.safalife.com', 'news.advanbusiness.com', 'news.aoldaily.com', 'news.aolon1ine.com', '0001.ya-man.com', '0002.onlyminerals.jp', '000.affex.org', '2track.info', '4dsply.com', 'acountscr.cool', 'ad2up.com', 'adaranth.com', 'adbigline.network', 'addr.cx', 'adfuture.cn', 'adsunflower.com', 'adultsonly.pro'] A: Here's one way: fqdns_re = re.compile( r'^(([a-zA-Z]{1})|([a-zA-Z]{1}[a-zA-Z]{1})|' r'([a-zA-Z]{1}[0-9]{1})|([0-9]{1}[a-zA-Z]{1})|' r'([a-zA-Z0-9][-_.a-zA-Z0-9]{0,61}[a-zA-Z0-9]))\.'
r'([a-zA-Z]{2,13}|[a-zA-Z0-9-]{2,30}.[a-zA-Z]{2,3})$' ) splits_re = re.compile(r'[#\s/]') def match(word): m = fqdns_re.match(word) if m: return m.group(0) with open('/tmp/blocklist_raw.txt') as f: fqdns = [word for row in f.readlines() for word in splits_re.split(row) if match(word)] print(sorted(fqdns)) Returns: ['000.affex.org', '0001.ya-man.com', '0002.onlyminerals.jp', '2track.info', '4dsply.com', 'acountscr.cool', 'ad2up.com', 'adaranth.com', 'adaway.github.io', 'adaway.github.io', 'adaway.github.io', 'adaway.github.io', 'adbigline.network', 'addr.cx', 'adfuture.cn', 'ads.1mobile.com', 'adsunflower.com', 'adultsonly.pro', 'analytics.163.com', 'api.247-inc.net', 'api4.1mobile.com', 'blocklist.php', 'crash.163.com', 'crashlytics.163.com', 'creativecommons.org', 'github.com', 'github.com', 'github.com', 'gitlab.com', 'hosts.txt', 'iad.g.163.com', 'nav.booksonlineclub.com', 'navi.businessconsults.net', 'navi.earthsolution.org', 'nci.bigdepression.net', 'nci.dnsweb.org', 'nci.safalife.com', 'ncih.dnsweb.org', 'ncsc.businessconsults.net', 'ne.hugesoft.org', 'nes.nationtour.net', 'net.firefoxupdata.com', 'net.infosupports.com', 'new.arrowservice.net', 'new.booksonlineclub.com', 'new.firefoxupdata.com', 'new.globalowa.com', 'newport.bigdepression.net', 'newport.infosupports.com', 'newport.safalife.com', 'news.advanbusiness.com', 'news.aoldaily.com', 'news.aolon1ine.com', 'quidsup.net', 'raw.githubusercontent.com', 's.206ads.com', 'sync.1rx.io', 'tag.1rx.io', 'tie.247-inc.net']
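Neither answer covers the sorting-and-rewriting step asked about in the EDIT. A minimal sketch of that part, assuming the extracted hostnames have already been written one per line to blocklist_raw.txt; the reversed-label sort key mirrors the split/reverse idea the question attempts:

def write_sorted_unique(path="blocklist_raw.txt"):
    # Read the extracted hostnames and de-duplicate them with a set
    with open(path) as fp:
        hosts = {line.strip() for line in fp if line.strip()}
    # Sort by reversed label order so e.g. all *.163.com entries group together
    ordered = sorted(hosts, key=lambda h: list(reversed(h.split("."))))
    with open(path, "w") as fp:
        fp.write("\n".join(ordered) + "\n")

write_sorted_unique()

Plain sorted(hosts) also works if ordinary alphabetical order is enough.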
Extract FQDNS from a text file using python
I am trying to create a python script that downloads text files from a list of URLs and then concatenates them into a single file. This is what I have: import urllib import urllib.request import re with open("blocklist_urls.txt", "r") as a: urls = a.readlines() retrieved_pages = [] for url in urls: retrieved_pages.append(urllib.request.urlopen(url).read()) with open('blocklist_raw.txt', 'w') as f: for page in retrieved_pages: sys.stdout = f decoded_line = page.decode("utf-8") print(decoded_line) sys.stdout = original_stdout and it does grab text files from all of the URLs I listed in my text file without error. It creates blocklist_raw.txt and it contains several blocklists formatted in such a way: # AdAway default blocklist # Blocking mobile ad providers and some analytics providers # # Project home page: # https://github.com/AdAway/adaway.github.io/ # # Fetch the latest version of this file: # https://raw.githubusercontent.com/AdAway/adaway.github.io/master/hosts.txt # # License: # CC Attribution 3.0 (http://creativecommons.org/licenses/by/3.0/) # # Contributions by: # Kicelo, Dominik Schuermann. # Further changes and contributors maintained in the commit history at # https://github.com/AdAway/adaway.github.io/commits/master # # Contribute: # Create an issue at https://github.com/AdAway/adaway.github.io/issues # # [163.com] 127.0.0.1 analytics.163.com 127.0.0.1 crash.163.com 127.0.0.1 crashlytics.163.com 127.0.0.1 iad.g.163.com # [1mobile.com] 127.0.0.1 ads.1mobile.com 127.0.0.1 api4.1mobile.com # [1rx.io] 127.0.0.1 sync.1rx.io 127.0.0.1 tag.1rx.io # [206ads.com] 127.0.0.1 s.206ads.com # [247-inc.net] 127.0.0.1 api.247-inc.net 127.0.0.1 tie.247-inc.net nav.booksonlineclub.com navi.businessconsults.net navi.earthsolution.org nci.bigdepression.net nci.dnsweb.org nci.safalife.com ncih.dnsweb.org ncsc.businessconsults.net ne.hugesoft.org nes.nationtour.net net.firefoxupdata.com net.infosupports.com new.arrowservice.net new.booksonlineclub.com new.firefoxupdata.com new.globalowa.com newport.bigdepression.net newport.infosupports.com newport.safalife.com news.advanbusiness.com news.aoldaily.com news.aolon1ine.com # Oldest record: 2021-09-03T02:06:30+02:00 # Number of source websites: 873 # Number of source subdomains: 1990306 # Number of source DNS records: ~2E9 + 1298142 # # Input rules: asns: 6, zones: 48 # Subsequent rules: asns: 6, hostnames: 122196, ip4s: 64, zones: 48 # … no duplicates: asns: 6, hostnames: 89794, zones: 48 # Output rules: hostnames: 122196 # 0.0.0.0 0001.ya-man.com 0.0.0.0 0002.onlyminerals.jp 0.0.0.0 000.affex.org # Title: NoTrack Malware Blocklist # Description: Domains classified as malware, phishing or adware # Author: QuidsUp # License: GNU General Public License v3.0 # Home: https://quidsup.net/notrack/blocklist.php # @ GitLab : https://gitlab.com/quidsup/notrack-blocklists # Updated: 08 Sep 2021 #LatestVersion 21.08 # Domain Count: 348 #=============================================================== 2track.info #Adware - Malware 4dsply.com #Adware - Malware acountscr.cool #Adware Lnkr - Malware ad2up.com #Adware - Malware adaranth.com #Adware - Malware adbigline.network #Malware - Malware addr.cx #Adware Lnkr - Malware adfuture.cn #Android Trojan - Malware adsunflower.com #Android Trojan - Malware adultsonly.pro #Generic - Malware Is there a way that I can only keep the FQDN's from the blocklist_raw.txt and remove all other text? Any point in the right direction is appreciated. Thank you. EDIT: This is what I have thus far. 
I have never written python before, so a lot of this may not make a whole bunch of sense: #!/usr/bin/env python3 import urllib import urllib.request import re import sys original_stdout = sys.stdout def removeStr(val): if val.count('.') >= 2: if val.count('/') <= 0: return val with open("blocklist_urls.txt", "r") as a: urls = a.readlines() retrieved_pages = [] for url in urls: retrieved_pages.append(urllib.request.urlopen(url).read()) for page in retrieved_pages: decoded_line = page.decode("utf-8") each_line = "\n".join(filter(removeStr, decoded_line.split())) urls_filtered_raw = each_line.replace('0.0.0.0', '\b').replace('127.0.0.1', '\b') with open('blocklist_raw.txt', 'w') as b: for page in retrieved_pages: sys.stdout = b print(urls_filtered_raw.rstrip("\n").rstrip("^H")) sys.stdout = original_stdout links = set() with open('blocklist_raw.txt', 'r') as fp: for line in fp.readlines(): links.add(line) with open('blocklist_raw.txt', 'w') as fp: for line in links: fp.write(line) sorted_urls_raw = open('blocklist_raw.txt', 'r') sorted_urls_list = sorted_urls_raw.readlines split_hosts = [] for h in sorted_urls_raw: segments = h.split('.') segments.reverse() split_hosts.append(segments) split_hosts.sort() for segments in split_hosts: segments.reverse() print(".".join(segments)) I'm trying to find a way to sort the output in alphabetical order and write the output back to file. Thanks guys.
[ "The simplest way is probably to use the IANA database of Top Level Domain (.com, .org, .net, ...). With this list, create a regex pattern to find all strings that match something like '*.tld':\n# Additional import\nimport re\n\n# Get TLD database\nresp = urllib.request.urlopen('http://data.iana.org/TLD/tlds-alpha-by-domain.txt')\n\n# Create a reverse sorted list of TLD ('.com' must be before '.co')\ntld = sorted([tld.strip().lower().decode('utf-8')\n for tld in resp.readlines()[1:]], reverse=True)\n\n# Compile the regex pattern\nFQDN = re.compile(fr\"([^\\s]*\\.(?:{'|'.join(tld)}))\")\n\n\n# Find all fqdn\nwith open('blocklist_raw.txt') as fp:\n fqdn_list = []\n for line in fp.readlines():\n line = line.strip().lower()\n\n # Remove comments and blank lines\n if (len(line) == 0) or line.startswith('#'):\n continue\n\n # Extract FQDN\n fqdn = FQDN.findall(line)\n if fqdn:\n fqdn_list.append(fqdn[0])\n\nOutput:\n>>> fqdn_list\n['analytics.163.com',\n 'crash.163.com',\n 'crashlytics.163.com',\n 'iad.g.163.com',\n 'ads.1mobile.com',\n 'api4.1mobile.com',\n 'sync.1rx.io',\n 'tag.1rx.io',\n 's.206ads.com',\n 'api.247-inc.net',\n 'tie.247-inc.net',\n 'nav.booksonlineclub.com',\n 'navi.businessconsults.net',\n 'navi.earthsolution.org',\n 'nci.bigdepression.net',\n 'nci.dnsweb.org',\n 'nci.safalife.com',\n 'ncih.dnsweb.org',\n 'ncsc.businessconsults.net',\n 'ne.hugesoft.org',\n 'nes.nationtour.net',\n 'net.firefoxupdata.com',\n 'net.infosupports.com',\n 'new.arrowservice.net',\n 'new.booksonlineclub.com',\n 'new.firefoxupdata.com',\n 'new.globalowa.com',\n 'newport.bigdepression.net',\n 'newport.infosupports.com',\n 'newport.safalife.com',\n 'news.advanbusiness.com',\n 'news.aoldaily.com',\n 'news.aolon1ine.com',\n '0001.ya-man.com',\n '0002.onlyminerals.jp',\n '000.affex.org',\n '2track.info',\n '4dsply.com',\n 'acountscr.cool',\n 'ad2up.com',\n 'adaranth.com',\n 'adbigline.network',\n 'addr.cx',\n 'adfuture.cn',\n 'adsunflower.com',\n 'adultsonly.pro']\n\n", "Here's one way:\n fqdns_re = re.compile(\n r'^(([a-zA-Z]{1})|([a-zA-Z]{1}[a-zA-Z]{1})|'\n r'([a-zA-Z]{1}[0-9]{1})|([0-9]{1}[a-zA-Z]{1})|'\n r'([a-zA-Z0-9][-_.a-zA-Z0-9]{0,61}[a-zA-Z0-9]))\\.'\n r'([a-zA-Z]{2,13}|[a-zA-Z0-9-]{2,30}.[a-zA-Z]{2,3})$'\n )\n splits_re = re.compile(r'[#\\s/]')\n\n def match(word):\n m = fqdns_re.match(word)\n if m:\n return m.group(0)\n\n with open('/tmp/blocklist_raw.txt') as f:\n fqdns = [word for row in f.readlines()\n for word in splits_re.split(row) if match(word)]\n print(sorted(fqdns))\n\nReturns:\n['000.affex.org', '0001.ya-man.com', '0002.onlyminerals.jp', '2track.info', '4dsply.com', 'acountscr.cool', 'ad2up.com', 'adaranth.com', 'adaway.github.io', 'adaway.github.io', 'adaway.github.io', 'adaway.github.io', 'adbigline.network', 'addr.cx', 'adfuture.cn', 'ads.1mobile.com', 'adsunflower.com', 'adultsonly.pro', 'analytics.163.com', 'api.247-inc.net', 'api4.1mobile.com', 'blocklist.php', 'crash.163.com', 'crashlytics.163.com', 'creativecommons.org', 'github.com', 'github.com', 'github.com', 'gitlab.com', 'hosts.txt', 'iad.g.163.com', 'nav.booksonlineclub.com', 'navi.businessconsults.net', 'navi.earthsolution.org', 'nci.bigdepression.net', 'nci.dnsweb.org', 'nci.safalife.com', 'ncih.dnsweb.org', 'ncsc.businessconsults.net', 'ne.hugesoft.org', 'nes.nationtour.net', 'net.firefoxupdata.com', 'net.infosupports.com', 'new.arrowservice.net', 'new.booksonlineclub.com', 'new.firefoxupdata.com', 'new.globalowa.com', 'newport.bigdepression.net', 'newport.infosupports.com', 'newport.safalife.com', 
'news.advanbusiness.com', 'news.aoldaily.com', 'news.aolon1ine.com', 'quidsup.net', 'raw.githubusercontent.com', 's.206ads.com', 'sync.1rx.io', 'tag.1rx.io', 'tie.247-inc.net']\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0069144646_python.txt
Q: Cannot create .exe with pyinstaller from .py with torchaudio (CPU): AttributeError: '_OpNamespace' 'torchaudio' object has no attribute 'cuda_version' I have a .py script that uses torchaudio (without GPU) to process some sound in Windows. To distribute it, I've used pyinstaller to turn it into a .exe. You can reproduce the issue with this simple script: import torchaudio import time if __name__ == '__main__': t = torchaudio.transforms time.sleep(3) print("Success") This script correctly runs from a python console python test.py but I want to create a test.exe that works in Windows (without having python installed). I create test.exe by using pyinstaller: pyinstaller test.py. This creates a build/test folder with all the required dependencies (around 1GB). test.exe is located inside that folder but when I double click on it, it fails with the following error: Traceback (most recent call last): File "torch\_ops.py", line 501, in __getattr__ op, overload_names = torch._C._jit_get_operation(qualified_op_name) RuntimeError: No such operator torchaudio::cuda_version The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 1, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module File "torchaudio\__init__.py", line 1, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module File "torchaudio\_extension.py", line 136, in <module> File "torchaudio\_extension.py", line 121, in _check_cuda_version File "torch\_ops.py", line 505, in __getattr__ raise AttributeError( AttributeError: '_OpNamespace' 'torchaudio' object has no attribute 'cuda_version' [11648] Failed to execute script 'test' due to unhandled exception! The environment uses: python==3.9.15 torch==1.13.0 six==1.15.0 numpy==1.22.4 scipy==1.6.0 sounddevice==0.4.5 torchaudio==0.13.0 pyinstaller==5.6.2 Note: I tried the same installing torch with cuda ending up with the same error and a build 4 times bigger. A: I was able to make the script work. Here are the steps I took to get it to run. Create a new empty directory and pasted your script in as main.py py -m venv venv && venv\scripts\activate && py -m pip install --upgrade pip pyinstaller pip install torchaudio==0.13.0 torch==1.13.0 numpy=1.22.4 sounddevice==0.4.5 six==1.15.0 scipy pyinstaller -F main.py Go into venv\Lib\site-packages and copy the entire torchaudio folder and paste it into the top level directory alongside venv and main.py In main.spec set datas=[('./torchaudio','./torchaudio')] pyinstaller main.spec And after compiling the executable runs... it still gives off a few warnings, but it runs and prints the success message.
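For reference, the datas edit described in the answer goes into the Analysis block of the spec file that PyInstaller generates; a sketch of the relevant fragment (the path assumes the torchaudio folder was copied next to main.py, as in the steps above):

# main.spec fragment -- not standalone Python; PyInstaller supplies Analysis
a = Analysis(
    ['main.py'],
    pathex=[],
    binaries=[],
    # Ship the copied package so its compiled extension (presumably the part
    # that registers the torchaudio::cuda_version operator) ends up in the build
    datas=[('./torchaudio', './torchaudio')],
    hiddenimports=[],
)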
Cannot create .exe with pyinstaller from .py with torchaudio (CPU): AttributeError: '_OpNamespace' 'torchaudio' object has no attribute 'cuda_version'
I have a .py script that uses torchaudio (without GPU) to process some sound in Windows. To distribute it, I've used pyinstaller to turn it into a .exe. You can reproduce the issue with this simple script: import torchaudio import time if __name__ == '__main__': t = torchaudio.transforms time.sleep(3) print("Success") This script correctly runs from a python console python test.py but I want to create a test.exe that works in Windows (without having python installed). I create test.exe by using pyinstaller: pyinstaller test.py. This creates a build/test folder with all the required dependencies (around 1GB). test.exe is located inside that folder but when I double click on it, it fails with the following error: Traceback (most recent call last): File "torch\_ops.py", line 501, in __getattr__ op, overload_names = torch._C._jit_get_operation(qualified_op_name) RuntimeError: No such operator torchaudio::cuda_version The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 1, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module File "torchaudio\__init__.py", line 1, in <module> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module File "torchaudio\_extension.py", line 136, in <module> File "torchaudio\_extension.py", line 121, in _check_cuda_version File "torch\_ops.py", line 505, in __getattr__ raise AttributeError( AttributeError: '_OpNamespace' 'torchaudio' object has no attribute 'cuda_version' [11648] Failed to execute script 'test' due to unhandled exception! The environment uses: python==3.9.15 torch==1.13.0 six==1.15.0 numpy==1.22.4 scipy==1.6.0 sounddevice==0.4.5 torchaudio==0.13.0 pyinstaller==5.6.2 Note: I tried the same installing torch with cuda ending up with the same error and a build 4 times bigger.
[ "I was able to make the script work. Here are the steps I took to get it to run.\n\nCreate a new empty directory and pasted your script in as main.py\n\npy -m venv venv && venv\\scripts\\activate && py -m pip install --upgrade pip pyinstaller\n\npip install torchaudio==0.13.0 torch==1.13.0 numpy=1.22.4 sounddevice==0.4.5 six==1.15.0 scipy\n\npyinstaller -F main.py\n\nGo into venv\\Lib\\site-packages and copy the entire torchaudio folder and paste it into the top level directory alongside venv and main.py\n\nIn main.spec set datas=[('./torchaudio','./torchaudio')]\n\npyinstaller main.spec\n\n\nAnd after compiling the executable runs... it still gives off a few warnings, but it runs and prints the success message.\n" ]
[ 1 ]
[]
[]
[ "pyinstaller", "python", "torch", "torchaudio" ]
stackoverflow_0074451478_pyinstaller_python_torch_torchaudio.txt
Q: Add data to DataFrame by function this is my function: import pandas as pd shopping_list = pd.DataFrame() shopping_list = shopping_list.assign(Order=0, Type=0, Price=0, Quantity=0) def add_item(order: str, type_transaction: str, price: float, quantity: int, shopping_list=shopping_list): new_item_data = pd.DataFrame({"Order": [order], "Type": [type_transaction], "Price": [price], "Quantity": [quantity]}) return pd.concat([shopping_list, new_item_data], ignore_index=True) add_item(order="Buy", type_transaction="Add", price=20.0, quantity=100) add_item(order="Sell", type_transaction="Add", price=25.0, quantity=200) print(shopping_list) Output: Empty DataFrame Columns: [Order, Type, Price, Quantity] Index: [] What should I do to add these items to my DataFrame? They vanish and I don't know why. A: Here with some changes to make things work: import pandas as pd shopping_list = pd.DataFrame() shopping_list = shopping_list.assign(Order=0, Type=0, Price=0, Quantity=0) def add_item(order: str, type_transaction: str, price: float, quantity: int, shopping_list=shopping_list): new_item_data = pd.DataFrame({"Order": [order], "Type": [type_transaction], "Price": [price], "Quantity": [quantity]}) return pd.concat([shopping_list, new_item_data], ignore_index=True) shopping_list = add_item(order="Buy", type_transaction="Add", price=20.0, quantity=100, shopping_list=shopping_list) shopping_list = add_item(order="Sell", type_transaction="Add", price=25.0, quantity=200, shopping_list=shopping_list) print(shopping_list) Firstly, pd.concat returns a concatenation of the data frames you pass in, but it leaves the inputs untouched. So, you need to assign what you return from add_item to a variable. Secondly, default arguments for python functions are only evaluated once when the function is defined. That means that your default for shopping_list will always be that initial empty DataFrame, no matter what the variable shopping_list contains. A: You can improve the performance of the operation by generating a list of dictionary objects and performing a single call to pd.concat(). Appending data on a row-by-row basis is never performant. import pandas as pd data = [{"Order": "Buy", "Type": "Add", "Price": 20.0, "Quantity": 100}, {"Order": "Sell", "Type": "Add", "Price": 25.0, "Quantity": 200}] shopping_list = pd.DataFrame() shopping_list = shopping_list.assign(Order=0, Type=0, Price=0, Quantity=0) shopping_list = pd.concat([shopping_list, pd.DataFrame(data)], ignore_index=True)
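The second point from the first answer is easy to see in isolation; a minimal demonstration with hypothetical names, showing why the default argument never sees later reassignments:

import pandas as pd

df = pd.DataFrame(columns=["Order"])

def add(row, frame=df):  # the default is captured once, at definition time
    return pd.concat([frame, pd.DataFrame([row])], ignore_index=True)

df = add({"Order": "Buy"})          # rebinds the name df to a new 1-row frame
print(len(add({"Order": "Sell"})))  # 1, not 2: the default still holds the old empty frame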
Add data to DataFrame by function
this is my function: import pandas as pd shopping_list = pd.DataFrame() shopping_list = shopping_list.assign(Order=0, Type=0, Price=0, Quantity=0) def add_item(order: str, type_transaction: str, price: float, quantity: int, shopping_list=shopping_list): new_item_data = pd.DataFrame({"Order": [order], "Type": [type_transaction], "Price": [price], "Quantity": [quantity]}) return pd.concat([shopping_list, new_item_data], ignore_index=True) add_item(order="Buy", type_transaction="Add", price=20.0, quantity=100) add_item(order="Sell", type_transaction="Add", price=25.0, quantity=200) print(shopping_list) Output: Empty DataFrame Columns: [Order, Type, Price, Quantity] Index: [] What should I do to add these items to my DataFrame? They vanish and I don't know why.
[ "Here with some changes to make things work:\nimport pandas as pd\n\nshopping_list = pd.DataFrame()\nshopping_list = shopping_list.assign(Order=0, Type=0, Price=0, Quantity=0)\n\n\ndef add_item(order: str, type_transaction: str, price: float, quantity: int, shopping_list=shopping_list):\n new_item_data = pd.DataFrame({\"Order\": [order],\n \"Type\": [type_transaction],\n \"Price\": [price],\n \"Quantity\": [quantity]})\n return pd.concat([shopping_list, new_item_data], ignore_index=True)\n\nshopping_list = add_item(order=\"Buy\", type_transaction=\"Add\", price=20.0, quantity=100, shopping_list=shopping_list)\nshopping_list = add_item(order=\"Sell\", type_transaction=\"Add\", price=25.0, quantity=200, shopping_list=shopping_list)\n\nprint(shopping_list)\n\nFirstly, pd.concat returns a concatenation of the data frames you pass in, but it leaves the inputs untouched. So, you need to assign what you return fomr add_item to a variable.\nSecondly, default arguments for python functions are only evaluated once when the function is defined. That means that your default for shopping_list will always be an empty list, no matter what the variable shopping_list contains.\n", "You can improve the performance of the operation by generating a list of dictionary objects and performing a single call to pd.concat(). Appending data on a row-by-row basis is never performant.\nimport pandas as pd\n\ndata = [{\"Order\": \"Buy\", \"Type\": \"Add\", \"Price\": 20.0, \"Quantity\": 100},\n {\"Order\": \"Sell\", \"Type\": \"Add\", \"Price\": 25.0, \"Quantity\": 200}]\n\nshopping_list = pd.DataFrame()\nshopping_list = shopping_list.assign(Order=0, Type=0, Price=0, Quantity=0)\nshopping_list = pd.concat([shopping_list, pd.DataFrame(data)], ignore_index=True)\n\n" ]
[ 1, 1 ]
[]
[]
[ "concatenation", "dataframe", "python" ]
stackoverflow_0074482457_concatenation_dataframe_python.txt
Q: No module named _cffi_backend I have Python 2.6 in my Linux rhel-5. I have installed pip and required CFFI packages. When I try to run a sample CFFI program: ffi = FFI() it says: File "/usr/lib/python2.6/site-packages/cffi/api.py", line 56, in __init__ import _cffi_backend as backend ImportError: No module named _cffi_backend What could be the possible error? Did I miss something during installation? I have installed pip, wheel, pycparser, pytest and cffi. A: For python2.x use following command: python -m pip install cffi for python3.x python3 -m pip install cffi A: I needed to uninstall and install it again: sudo pip uninstall cryptography sudo pip uninstall paramiko then install pagamiko again sudo pip install paramiko and it start to work for me A: I recently had the same issue and none of the above solutions worked for me. Here is what worked. sudo apt remove python3-cffi sudo python3 -m pip install cffi A: Did you compile Python from source, and if so, did it give you any errors during the configure/make/make install phase? Compiling Python from source can be a real beast on older Red Hat systems, so if you installed that way, I'd suggest combing through the configure and make output to be sure that no modules were left out. In order to get pip install cffi to succeed with no errors, I had to install gcc and libffi-devel from the EL5 repos. From there, I was able to instantiate an FFI instance with no problems: >>> from cffi import FFI >>> ffi = FFI() >>> Here's the output of pip freeze, for reference: [root@machine ~]# pip freeze argparse==1.2.1 autobahn==0.8.10 cffi==1.5.2 characteristic==14.3.0 pika==0.9.13 pyasn1==0.1.7 pyasn1-modules==0.0.8 pycparser==2.14 pycrypto==2.6.1 pyOpenSSL==0.12 pysnmp==4.2.5 requests==2.7.0 service-identity==14.0.0 six==1.7.3 Twisted==14.0.0 version-utils==0.2.2 wheel==0.24.0 zope.interface==4.1.1 If you've got the same or better versions of the relevant packages installed, I'd try a pip -vvv install --upgrade --force-reinstall cffi, just to see if there are perhaps errors that pip was masking, and go from there. A: You have to first remove the following packages: cryptography bcrypt paramiko Now use the following command to install: pip -vvv install --upgrade --force-reinstall cffi A: I had the same problem, following this thread https://github.com/pyca/cryptography/issues/4403, I solved the problem by reinstalling and upgrading with the command: pip install -U cffi A: Have the same problem. After many attempts adding import cffi solve the issue. Make sure you have cffi and cryptography installed. A: You could look at the code L56 in /usr/lib/python2.6/site-packages/cffi/api.py It needs the _cffi_backend.so in your pythonpath. You could install the python-cffi for it. But not sure whether it is in your RPM repo, especially you are using RHEL-5. Here is an RPM for CENTOS http://cbs.centos.org/koji/rpminfo?rpmID=20613 Hope it helps. I am still searching the source code for building the _cffi_backend.so. A: For me there was no way to install cffi on python3.8 because of this: ImportError: cannot import name 'sysconfig' from 'distutils' (/usr/lib/python3.8/distutils/__init__.py) Somehow, the package python3-distutils does not exist in Ubuntu 16.04. So I ended up installing python3.7 and now I finally could install cffi, fixing the problem mentioned by the TS. A: You should install cffi via pip install cffi to get the latest version. I had to restart my application for it to recognize the cffi installation. 
A: I was getting this error while trying to get the cryptography module to work with Python 3.8 for AWS Lambda. Adding the cffi*manylinux*.whl files to my Lambda Layer (as suggested here) worked. The cffi module comes built in for many python distributions, but not on AWS Lambda. A: For AWS Lambda I was facing the same issue when running on Python3.7. When I downgraded it to Python3.6, this issue was resolved. I think this package might have been present in the Python3.6 runtime and later was removed. Adding this package while making layers for AWS Lambda might resolve the problem for Python3.7. A: I encountered this issue when trying to install packages in a local directory using pip install -t . and then running python (2.7). My solution was to remove the -t and not install into a local directory. A: It worked after adding " import cffi " in my application. Please refer to the documentation for more details. https://buildmedia.readthedocs.org/media/pdf/cffi/latest/cffi.pdf A: Thanks to @MPlanchard, whose answer helped identify the cause. In my case, the issue was related to python3.9; changing to python3.8 made it work. A: After many futile efforts to install the right packages, the right python versions and building the perfect layer, resorting to installing Fabric solved it for me. A: I got this issue running an Ansible playbook using python 3.9 under Ubuntu-18.04 in WSL2. It was sorted by doing: sudo apt-get remove -y python3-cffi-backend sudo apt-get install -y python3-cffi-backend
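Before reinstalling anything, it can help to check whether the compiled backend that cffi/api.py tries to import is present at all; a small diagnostic sketch (not from any of the answers):

try:
    import _cffi_backend
    print("compiled backend OK:", _cffi_backend.__file__)
except ImportError as exc:
    print("compiled backend missing:", exc)

import cffi
print("cffi package version:", cffi.__version__)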
No module named _cffi_backend
I have Python 2.6 in my Linux rhel-5. I have installed pip and required CFFI packages. When I try to run a sample CFFI program: ffi = FFI() it says: File "/usr/lib/python2.6/site-packages/cffi/api.py", line 56, in __init__ import _cffi_backend as backend ImportError: No module named _cffi_backend What could be the possible error? Did I miss something during installation? I have installed pip, wheel, pycparser, pytest and cffi.
[ "For python2.x use following command:\npython -m pip install cffi\n\nfor python3.x\npython3 -m pip install cffi\n\n", "I needed to uninstall and install it again:\nsudo pip uninstall cryptography\n\nsudo pip uninstall paramiko\n\nthen install pagamiko again\nsudo pip install paramiko\n\nand it start to work for me\n", "I recently had the same issue and none of the above solutions worked for me.\nHere is what worked.\nsudo apt remove python3-cffi\nsudo python3 -m pip install cffi\n\n", "Did you compile Python from source, and if so, did it give you any errors during the configure/make/make install phase? Compiling Python from source can be a real beast on older Red Hat systems, so if you installed that way, I'd suggest combing through the configure and make output to be sure that no modules were left out.\nIn order to get pip install cffi to succeed with no errors, I had to install gcc and libffi-devel from the EL5 repos. From there, I was able to instantiate an FFI instance with no problems:\n>>> from cffi import FFI\n>>> ffi = FFI()\n>>>\n\nHere's the output of pip freeze, for reference:\n[root@machine ~]# pip freeze\nargparse==1.2.1\nautobahn==0.8.10\ncffi==1.5.2\ncharacteristic==14.3.0\npika==0.9.13\npyasn1==0.1.7\npyasn1-modules==0.0.8\npycparser==2.14\npycrypto==2.6.1\npyOpenSSL==0.12\npysnmp==4.2.5\nrequests==2.7.0\nservice-identity==14.0.0\nsix==1.7.3\nTwisted==14.0.0\nversion-utils==0.2.2\nwheel==0.24.0\nzope.interface==4.1.1\n\nIf you've got the same or better versions of the relevant packages installed, I'd try a pip -vvv install --upgrade --force-reinstall cffi, just to see if there are perhaps errors that pip was masking, and go from there.\n", "You have to first remove the following packages:\ncryptography\nbcrypt\nparamiko\n\nNow use the following command to install:\npip -vvv install --upgrade --force-reinstall cffi\n\n", "I had the same problem, following this thread https://github.com/pyca/cryptography/issues/4403, I solved the problem by reinstalling and upgrading with the command:\npip install -U cffi\n\n", "Have the same problem. After many attempts adding import cffi solve the issue.\nMake sure you have cffi and cryptography installed.\n", "You could look at the code L56 in /usr/lib/python2.6/site-packages/cffi/api.py\nIt needs the _cffi_backend.so in your pythonpath. You could install the python-cffi for it. But not sure whether it is in your RPM repo, especially you are using RHEL-5.\nHere is an RPM for CENTOS http://cbs.centos.org/koji/rpminfo?rpmID=20613\nHope it helps. I am still searching the source code for building the _cffi_backend.so.\n", "For me there was no way to install cffi on python3.8 because of this:\nImportError: cannot import name 'sysconfig' from 'distutils' (/usr/lib/python3.8/distutils/__init__.py)\n\nSomehow, the package python3-distutils does not exist in Ubuntu 16.04.\nSo I ended up installing python3.7 and now I finally could install cffi, fixing the problem mentioned by the TS.\n", "You should install cffi via pip install cffi \nto get the latest version. I had to restart my application for it to recognize the cffi installation.\n", "I was getting this error while trying to get the cryptography module to work with Python 3.8 for AWS Lambda.\nAdding the cffi*manylinux*.whl files to my Lambda Layer (as suggested here) worked.\nThe cffi module comes built in for many python distributions, but not on AWS Lambda\n", "For AWS Lambda I was facing the same issue when running on Python3.7. 
When I downgraded it to Python3.6, this issue was resolved.\nI think this package might have been present in the Python3.6 runtime and later was removed. Adding this package while making layers for AWS Lambda might resolve the problem for Python3.7.\n", "I encountered this issue when trying to install packages in a local directory using pip install -t . and then running python (2.7). My solution was to remove the -t and not install into a local directory.\n", "It worked after adding \" import cffi \" in my application.\nPlease refer to the documentation for more details.\nhttps://buildmedia.readthedocs.org/media/pdf/cffi/latest/cffi.pdf\n", "Thanks to @MPlanchard, whose answer helped identify the cause.\nIn my case, the issue was related to python3.9; changing to python3.8 made it work.\n", "After many futile efforts to install the right packages, the right python versions and building the perfect layer, resorting to installing Fabric solved it for me.\n", "I got this issue running an Ansible playbook using python 3.9 under Ubuntu-18.04 in WSL2. It was sorted by doing:\nsudo apt-get remove -y python3-cffi-backend\n\nsudo apt-get install -y python3-cffi-backend\n\n" ]
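Several of the answers amount to making the pure-Python package and its compiled backend agree again; the same condition can be checked programmatically (assuming _cffi_backend exposes __version__, which cffi's own import-time check relies on):

import cffi
import _cffi_backend

# cffi compares these two values when it loads; a mismatch means a stale backend
assert cffi.__version__ == _cffi_backend.__version__, \
    "package and compiled backend disagree -- reinstall cffi"
print("cffi", cffi.__version__, "matches its compiled backend")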
[ 55, 19, 12, 9, 6, 5, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "python", "python_2.6", "python_cffi" ]
stackoverflow_0034370962_python_python_2.6_python_cffi.txt
Q: Expected Expression Error in my Python script I'm on Visual Studio trying to update my Itch Quiz Game and when I tried to test an else branch followed by a print call, it said I had an Expected Expression error. I tried to fix it but it didn't work. Please help. Here's my code: import time print("Math Game") print("") score = 0 #intro print("Welcome to 5 Questions") answer6 = input("Type Begin to Begin") if answer6 == "Begin": score +=0 #Question 1 print("Whats 2 + 2") answer1 = input("Enter answer") if answer1 == "4": else: ---THIS IS THE ERROR print("Test") score += 1 #Question 2 print("Whats 4 * 2") answer2 = input("Enter answer") if answer2 == "8": score += 1 #Question 3 print("Whats the root square of 16") answer3 = input("Enter answer") if answer3 == "4": score += 1 #Question 4 print("Who made the laws of gravity") answer4 = input("Enter answer") if answer4 == "Issac Newton": score += 1 #Question 5 print("Whats Apples frist device the Phone or the Computer") answer4 = input("Enter answer") if answer4 == "Computer": score += 1 print("you got " + str(score) + "/5") time.sleep(5) print("Good Bye!") A: The main issue is that the body of the if-else statement in Question 1 is empty. You need to have at least one line of code inside every if/else statement. If nothing should be done, you can use the keyword pass: if answer1 == "4": # ... score += 1 # this should go here, when answer1=='4', right? # ... else: pass https://www.w3schools.com/python/ref_keyword_pass.asp Also, all clauses of the if-else statement should have the same indentation level.
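Applied to Question 1 from the script, the fix looks like this (a sketch; the else branch is kept only because the original snippet had one):

score = 0
print("Whats 2 + 2")
answer1 = input("Enter answer")
if answer1 == "4":
    score += 1        # the if body is no longer empty
else:
    print("Test")     # the else from the original, now syntactically valid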
Expected Expression Error in my Python script
I'm on Visual Studio trying to update my Itch Quiz Game and when I tried to test an else branch followed by a print call, it said I had an Expected Expression error. I tried to fix it but it didn't work. Please help. Here's my code: import time print("Math Game") print("") score = 0 #intro print("Welcome to 5 Questions") answer6 = input("Type Begin to Begin") if answer6 == "Begin": score +=0 #Question 1 print("Whats 2 + 2") answer1 = input("Enter answer") if answer1 == "4": else: ---THIS IS THE ERROR print("Test") score += 1 #Question 2 print("Whats 4 * 2") answer2 = input("Enter answer") if answer2 == "8": score += 1 #Question 3 print("Whats the root square of 16") answer3 = input("Enter answer") if answer3 == "4": score += 1 #Question 4 print("Who made the laws of gravity") answer4 = input("Enter answer") if answer4 == "Issac Newton": score += 1 #Question 5 print("Whats Apples frist device the Phone or the Computer") answer4 = input("Enter answer") if answer4 == "Computer": score += 1 print("you got " + str(score) + "/5") time.sleep(5) print("Good Bye!")
[ "The main issue is that the body of the if-else statement in Question 1 is empty. You need to have at least one line of code inside every if/else statement. If nothing should be done, you can use the keyword pass:\nif answer1 == \"4\":\n # ...\n score += 1 # this should go here, when answer1=='4', right?\n # ...\nelse: \n pass\n\nhttps://www.w3schools.com/python/ref_keyword_pass.asp\n\nAlso, all clauses of the if-else statement should have the same indentation level.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074483073_python.txt
Q: QtMultimedia is not currently supported on this platform or compiler. PyInstaller I'm using PyInstaller v5.6.2. I prescribe pyinstaller mp3pyqt6.py, after which I add the necessary files to mp3pyqt6.spec, updating the pyinstaller mp3pyqt6.spec command. The console that comes with the application says: could not load multimedia backend "" QtMultimedia is not currently supported on this platform or compiler Is there a way to solve this? Any alternatives to PyInstaller? mp3pyqt6.py from PyQt6.QtWidgets import QApplication, QWidget, QSizeGrip, QFileDialog from PyQt6.QtCore import Qt, QUrl from PyQt6 import uic, QtGui, QtWidgets, QtMultimedia import png_icons6 import sys import os GLOBAL_STATE = 0 count = 0 play_count = 0 loop_count = 0 class App(QWidget): def __init__(self): super().__init__() self.dragPos = None self.ui = uic.loadUi('MP3Player6.ui', self) self.ui.position = 0 self.ui.dial.setValue(50) # Size Grip self.size_grip = QSizeGrip(self.ui.f_status_right) self.size_grip.setToolTip("Resize Window") self.size_grip.setStyleSheet("QSizeGrip " "{ width: 20px; height: 20px; margin: 5px } " "QSizeGrip:hover " "{ background-color: none; }") # Title bar self.setWindowFlag(Qt.WindowType.FramelessWindowHint) self.setAttribute(Qt.WidgetAttribute.WA_TranslucentBackground) # Buttons self.ui.btn_close.clicked.connect(lambda: self.close()) self.ui.btn_hide.clicked.connect(lambda: self.showMinimized()) self.ui.btn_maximize.clicked.connect(lambda: self.window_size()) self.ui.btn_play.clicked.connect(lambda: self.play_btn()) self.ui.dial.valueChanged.connect(self.dial_volume) self.ui.btn_repeat.clicked.connect(lambda: self.loop_mp3()) # Player self.player = QtMultimedia.QMediaPlayer() self.audio_output = QtMultimedia.QAudioOutput() # self.ui.postion = 0 self.player.positionChanged.connect(self.slider_pos) self.player.durationChanged.connect(self.duration) self.ui.horizontalSlider.sliderMoved.connect(lambda: self.set_position(self.ui.horizontalSlider.value())) self.ui.horizontalSlider.valueChanged.connect(lambda: self.timer()) self.load_files() def move_window(event): if GLOBAL_STATE == 1: self.window_size() if event.buttons() == Qt.MouseButton.LeftButton: pos = self.pos() glpos = event.globalPosition().toPoint() self.move(pos + glpos - self.dragPos.toPoint()) self.dragPos = event.globalPosition() event.accept() self.ui.f_title.mouseMoveEvent = move_window self.show() def mousePressEvent(self, event): self.dragPos = event.globalPosition() def window_size(self): global GLOBAL_STATE status = GLOBAL_STATE if status == 0: self.showMaximized() self.ui.main_frame.setStyleSheet('background-color:' ' qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:1, stop:0 rgba(42, 44, 111, 255), stop:0.522727 rgba(28, 29, 73, 255));' ' border-radius: 0px') GLOBAL_STATE = 1 else: self.showNormal() self.ui.main_frame.setStyleSheet('background-color:' ' qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:1, stop:0 rgba(42, 44, 111, 255), stop:0.522727 rgba(28, 29, 73, 255));' ' border-radius: 15px;') GLOBAL_STATE = 0 def load_files(self): spisok = [] for file in os.listdir(r"C:\Users\Michael\Мусор\Music"): if file.endswith('.mp3'): spisok.append(file) self.ui.btn = QtWidgets.QPushButton() font = QtGui.QFont() font.setPointSize(20) self.ui.btn.setFont(font) self.ui.path = r'C:\Users\Michael\Мусор\Music\\' + str(file) self.ui.btn.setText(file) self.ui.url = QUrl.fromLocalFile(self.ui.path) self.ui.verticalLayout_6.addWidget(self.ui.btn) self.ui.btn.clicked.connect(lambda checked, g=self.ui.url, h=file: self.play_mp3_file(g, h)) 
def play_mp3_file(self, g, h): global play_count if play_count == 0: self.player.setAudioOutput(self.audio_output) self.player.setSource(g) self.audio_output.setVolume(0.5) self.player.setPosition(self.ui.position) self.ui.dial.setValue(50) self.ui.btn_play.setStyleSheet('QPushButton{image: url(:/пауза/png_icons/Пауза/icons8-pause-52.png);}QPushButton:hover{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (1).png);}QPushButton:pressed{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (2).png);}') self.ui.label_song.setText(h) self.ui.label_song.setFont(QtGui.QFont("Times", 10)) self.player.play() play_count = 1 else: self.player.stop() self.player.setAudioOutput(self.audio_output) self.player.setSource(g) self.audio_output.setVolume(0.5) self.player.setPosition(self.ui.position) self.ui.dial.setValue(50) self.ui.label_song.setText(h) self.ui.label_song.setFont(QtGui.QFont("Times", 10)) self.ui.btn_play.setStyleSheet('QPushButton{image: url(:/пауза/png_icons/Пауза/icons8-pause-52.png);}QPushButton:hover{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (1).png);}QPushButton:pressed{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (2).png);}') self.player.play() def play_btn(self): global play_count if play_count == 0: self.ui.btn_play.setStyleSheet('QPushButton{image: url(:/пауза/png_icons/Пауза/icons8-pause-52.png);}QPushButton:hover{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (1).png);}QPushButton:pressed{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (2).png);}') self.player.play() play_count = 1 else: self.player.pause() self.ui.btn_play.setStyleSheet('QPushButton {image: url(:/старт/png_icons/Старт/icons8-play-button-circled-50.png);}QPushButton:hover{image: url(:/старт/png_icons/Старт/icons8-play-button-circled-50 (1).png);}QPushButton:pressed{image: url(:/старт/png_icons/Старт/icons8-play-button-circled-50 (2).png);}') play_count = 0 def dial_volume(self, i): self.audio_output.setVolume(float(i/100)) def loop_mp3(self): global loop_count if loop_count == 0: self.player.setLoops(-1) loop_count = 1 self.ui.btn_repeat.setStyleSheet('QPushButton{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (2).png);}QPushButton:hover{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (1).png);}QPushButton:pressed{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48.png);}') else: self.player.setLoops(1) loop_count = 0 self.ui.btn_repeat.setStyleSheet('QPushButton{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48.png);}QPushButton:hover{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (1).png);}QPushButton:pressed{ image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (2).png);}') def slider_pos(self, position): self.ui.horizontalSlider.setValue(position) def duration(self, duration): self.ui.horizontalSlider.setRange(0, duration) def set_position(self, position): self.player.setPosition(position) def timer(self): total_milliseconds = self.player.duration() total_seconds, total_milliseconds = divmod(total_milliseconds, 1000) total_minutes, total_seconds = divmod(total_seconds, 60) total_hours, total_minutes = divmod(total_minutes, 60) elapsed_milliseconds = self.ui.horizontalSlider.value() elapsed_seconds, elapsed_milliseconds = divmod(elapsed_milliseconds, 1000) elapsed_minutes, elapsed_seconds = divmod(elapsed_seconds, 60) elapsed_hours, elapsed_minutes = divmod(elapsed_minutes, 60) self.ui.l_left_label.setText(f'{elapsed_minutes}:{elapsed_seconds}') 
self.ui.l_right_label.setText(f'{total_minutes}:{total_seconds}') if __name__ == '__main__': app = QApplication(sys.argv) application = App() sys.exit(app.exec()) mp3pyqt6.spec # -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis( ['mp3pyqt6.py'], pathex=[], binaries=[], datas=[('musicalnoteeightflat_105984.ico', '.'), ('png_icons6.py', '.'), ('MP3Player6.ui', '.')], hiddenimports=[], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE( pyz, a.scripts, [], exclude_binaries=True, name='mp3pyqt6', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) coll = COLLECT( exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, upx_exclude=[], name='mp3pyqt6', ) minimal reproducible example import sys from PyQt6 import uic from PyQt6.QtCore import QUrl from PyQt6 import QtMultimedia from PyQt6.QtWidgets import QApplication, QWidget, QFileDialog class App(QWidget): def __init__(self): super().__init__() self.player = QtMultimedia.QMediaPlayer() self.audio_output = QtMultimedia.QAudioOutput() self.path = r'C:\Users\Michael\Мусор\Music\ac-dc-highway-to-hell-(best-muzon.cc).mp3' self.player.setAudioOutput(self.audio_output) self.player.setSource(QUrl.fromLocalFile(self.path)) self.audio_output.setVolume(20) self.player.play() if __name__ == '__main__': app = QApplication(sys.argv) application = App() sys.exit(app.exec()) I already compiled a project using QtMultimedia on pyqt5, everything went well there. But in this player PyInstaller doesn't work. A: I got it to work, and these are the steps I took to get it to run and compile. Create a new directory and paste your script inside of it as main.py. py -m venv venv && venv\scripts\activate py -m pip install --upgrade pip pyinstaller PyQt6 pyinstaller -F main.py Go into the venv\Lib\site-packages folder and copy the PyQt6 directory to the top level directory next to main.py and venv inside main.spec set the datas=[('./PyQt6', './PyQt6')] pyinstaller main.spec Once it compiles the executable should run. It pops a few warnings for me, but otherwise it does what its supposed too.
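The "could not load multimedia backend" line usually means Qt's multimedia plugin DLLs were not collected into the build. A small diagnostic to print where Qt expects its plugins (a sketch; the QLibraryInfo enum spelling below is the PyQt6 one, worth double-checking against your version):

from PyQt6.QtCore import QLibraryInfo

# The frozen app needs a 'multimedia' subfolder with backend DLLs here
plugins = QLibraryInfo.path(QLibraryInfo.LibraryPath.PluginsPath)
print("Qt plugin search path:", plugins)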
QtMultimedia is not currently supported on this platform or compiler. PyInstaller
I'm using PyInstaller v5.6.2. I prescribe pyinstaller mp3pyqt6.py, after which I add the necessary files to mp3pyqt6.spec, updating the pyinstaller mp3pyqt6.spec command. The console that comes with the application says: could not load multimedia backend "" QtMultimedia is not currently supported on this platform or compiler Is there a way to solve this? Any alternatives to PyInstaller? mp3pyqt6.py from PyQt6.QtWidgets import QApplication, QWidget, QSizeGrip, QFileDialog from PyQt6.QtCore import Qt, QUrl from PyQt6 import uic, QtGui, QtWidgets, QtMultimedia import png_icons6 import sys import os GLOBAL_STATE = 0 count = 0 play_count = 0 loop_count = 0 class App(QWidget): def __init__(self): super().__init__() self.dragPos = None self.ui = uic.loadUi('MP3Player6.ui', self) self.ui.position = 0 self.ui.dial.setValue(50) # Size Grip self.size_grip = QSizeGrip(self.ui.f_status_right) self.size_grip.setToolTip("Resize Window") self.size_grip.setStyleSheet("QSizeGrip " "{ width: 20px; height: 20px; margin: 5px } " "QSizeGrip:hover " "{ background-color: none; }") # Title bar self.setWindowFlag(Qt.WindowType.FramelessWindowHint) self.setAttribute(Qt.WidgetAttribute.WA_TranslucentBackground) # Buttons self.ui.btn_close.clicked.connect(lambda: self.close()) self.ui.btn_hide.clicked.connect(lambda: self.showMinimized()) self.ui.btn_maximize.clicked.connect(lambda: self.window_size()) self.ui.btn_play.clicked.connect(lambda: self.play_btn()) self.ui.dial.valueChanged.connect(self.dial_volume) self.ui.btn_repeat.clicked.connect(lambda: self.loop_mp3()) # Player self.player = QtMultimedia.QMediaPlayer() self.audio_output = QtMultimedia.QAudioOutput() # self.ui.postion = 0 self.player.positionChanged.connect(self.slider_pos) self.player.durationChanged.connect(self.duration) self.ui.horizontalSlider.sliderMoved.connect(lambda: self.set_position(self.ui.horizontalSlider.value())) self.ui.horizontalSlider.valueChanged.connect(lambda: self.timer()) self.load_files() def move_window(event): if GLOBAL_STATE == 1: self.window_size() if event.buttons() == Qt.MouseButton.LeftButton: pos = self.pos() glpos = event.globalPosition().toPoint() self.move(pos + glpos - self.dragPos.toPoint()) self.dragPos = event.globalPosition() event.accept() self.ui.f_title.mouseMoveEvent = move_window self.show() def mousePressEvent(self, event): self.dragPos = event.globalPosition() def window_size(self): global GLOBAL_STATE status = GLOBAL_STATE if status == 0: self.showMaximized() self.ui.main_frame.setStyleSheet('background-color:' ' qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:1, stop:0 rgba(42, 44, 111, 255), stop:0.522727 rgba(28, 29, 73, 255));' ' border-radius: 0px') GLOBAL_STATE = 1 else: self.showNormal() self.ui.main_frame.setStyleSheet('background-color:' ' qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:1, stop:0 rgba(42, 44, 111, 255), stop:0.522727 rgba(28, 29, 73, 255));' ' border-radius: 15px;') GLOBAL_STATE = 0 def load_files(self): spisok = [] for file in os.listdir(r"C:\Users\Michael\Мусор\Music"): if file.endswith('.mp3'): spisok.append(file) self.ui.btn = QtWidgets.QPushButton() font = QtGui.QFont() font.setPointSize(20) self.ui.btn.setFont(font) self.ui.path = r'C:\Users\Michael\Мусор\Music\\' + str(file) self.ui.btn.setText(file) self.ui.url = QUrl.fromLocalFile(self.ui.path) self.ui.verticalLayout_6.addWidget(self.ui.btn) self.ui.btn.clicked.connect(lambda checked, g=self.ui.url, h=file: self.play_mp3_file(g, h)) def play_mp3_file(self, g, h): global play_count if play_count == 0: 
self.player.setAudioOutput(self.audio_output) self.player.setSource(g) self.audio_output.setVolume(0.5) self.player.setPosition(self.ui.position) self.ui.dial.setValue(50) self.ui.btn_play.setStyleSheet('QPushButton{image: url(:/пауза/png_icons/Пауза/icons8-pause-52.png);}QPushButton:hover{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (1).png);}QPushButton:pressed{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (2).png);}') self.ui.label_song.setText(h) self.ui.label_song.setFont(QtGui.QFont("Times", 10)) self.player.play() play_count = 1 else: self.player.stop() self.player.setAudioOutput(self.audio_output) self.player.setSource(g) self.audio_output.setVolume(0.5) self.player.setPosition(self.ui.position) self.ui.dial.setValue(50) self.ui.label_song.setText(h) self.ui.label_song.setFont(QtGui.QFont("Times", 10)) self.ui.btn_play.setStyleSheet('QPushButton{image: url(:/пауза/png_icons/Пауза/icons8-pause-52.png);}QPushButton:hover{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (1).png);}QPushButton:pressed{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (2).png);}') self.player.play() def play_btn(self): global play_count if play_count == 0: self.ui.btn_play.setStyleSheet('QPushButton{image: url(:/пауза/png_icons/Пауза/icons8-pause-52.png);}QPushButton:hover{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (1).png);}QPushButton:pressed{image: url(:/пауза/png_icons/Пауза/icons8-pause-52 (2).png);}') self.player.play() play_count = 1 else: self.player.pause() self.ui.btn_play.setStyleSheet('QPushButton {image: url(:/старт/png_icons/Старт/icons8-play-button-circled-50.png);}QPushButton:hover{image: url(:/старт/png_icons/Старт/icons8-play-button-circled-50 (1).png);}QPushButton:pressed{image: url(:/старт/png_icons/Старт/icons8-play-button-circled-50 (2).png);}') play_count = 0 def dial_volume(self, i): self.audio_output.setVolume(float(i/100)) def loop_mp3(self): global loop_count if loop_count == 0: self.player.setLoops(-1) loop_count = 1 self.ui.btn_repeat.setStyleSheet('QPushButton{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (2).png);}QPushButton:hover{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (1).png);}QPushButton:pressed{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48.png);}') else: self.player.setLoops(1) loop_count = 0 self.ui.btn_repeat.setStyleSheet('QPushButton{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48.png);}QPushButton:hover{image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (1).png);}QPushButton:pressed{ image: url(:/повтор/png_icons/Повтор/icons8-update-left-rotation-48 (2).png);}') def slider_pos(self, position): self.ui.horizontalSlider.setValue(position) def duration(self, duration): self.ui.horizontalSlider.setRange(0, duration) def set_position(self, position): self.player.setPosition(position) def timer(self): total_milliseconds = self.player.duration() total_seconds, total_milliseconds = divmod(total_milliseconds, 1000) total_minutes, total_seconds = divmod(total_seconds, 60) total_hours, total_minutes = divmod(total_minutes, 60) elapsed_milliseconds = self.ui.horizontalSlider.value() elapsed_seconds, elapsed_milliseconds = divmod(elapsed_milliseconds, 1000) elapsed_minutes, elapsed_seconds = divmod(elapsed_seconds, 60) elapsed_hours, elapsed_minutes = divmod(elapsed_minutes, 60) self.ui.l_left_label.setText(f'{elapsed_minutes}:{elapsed_seconds}') self.ui.l_right_label.setText(f'{total_minutes}:{total_seconds}') if __name__ == '__main__': app = 
QApplication(sys.argv) application = App() sys.exit(app.exec()) mp3pyqt6.spec # -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis( ['mp3pyqt6.py'], pathex=[], binaries=[], datas=[('musicalnoteeightflat_105984.ico', '.'), ('png_icons6.py', '.'), ('MP3Player6.ui', '.')], hiddenimports=[], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE( pyz, a.scripts, [], exclude_binaries=True, name='mp3pyqt6', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) coll = COLLECT( exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, upx_exclude=[], name='mp3pyqt6', ) minimal reproducible example import sys from PyQt6 import uic from PyQt6.QtCore import QUrl from PyQt6 import QtMultimedia from PyQt6.QtWidgets import QApplication, QWidget, QFileDialog class App(QWidget): def __init__(self): super().__init__() self.player = QtMultimedia.QMediaPlayer() self.audio_output = QtMultimedia.QAudioOutput() self.path = r'C:\Users\Michael\Мусор\Music\ac-dc-highway-to-hell-(best-muzon.cc).mp3' self.player.setAudioOutput(self.audio_output) self.player.setSource(QUrl.fromLocalFile(self.path)) self.audio_output.setVolume(20) self.player.play() if __name__ == '__main__': app = QApplication(sys.argv) application = App() sys.exit(app.exec()) I already compiled a project using QtMultimedia on pyqt5, everything went well there. But in this player PyInstaller doesn't work.
[ "I got it to work, and these are the steps I took to get it to run and compile.\n\nCreate a new directory and paste your script inside of it as main.py.\npy -m venv venv && venv\\scripts\\activate\npy -m pip install --upgrade pip pyinstaller PyQt6\npyinstaller -F main.py\nGo into the venv\\Lib\\site-packages folder and copy the PyQt6 directory to the top level directory next to main.py and venv\ninside main.spec set the datas=[('./PyQt6', './PyQt6')]\npyinstaller main.spec\n\nOnce it compiles the executable should run. It pops a few warnings for me, but otherwise it does what its supposed too.\n" ]
[ 0 ]
[]
[]
[ "pyinstaller", "pyqt6", "python" ]
stackoverflow_0074415173_pyinstaller_pyqt6_python.txt
Q: Python ParseError Document is empty I'm lost. You won't be able to run the code because of existing files in the directory. Does anyone know why this occurs? Below is the code and the executed error. It runs up to 1900 before stopping. Why 1900? I've run it 5 times, and it's always 1900. I would understand the issue more if it crashed immediately, but it runs and then doesn't half way through? import os import pandas as pd #this replace parse_data_live SCORE_DIR = "data/scores" box_scores = os.listdir(SCORE_DIR) box_scores = [os.path.join(SCORE_DIR, f) for f in box_scores if f.endswith(".html")] from bs4 import BeautifulSoup def parse_html(box_score): with open(box_score, encoding="utf-8") as f: html = f.read() #with open(box_score) as f: #html = f.read() soup = BeautifulSoup(html, 'lxml') [s.decompose() for s in soup.select("tr.over_header")] [s.decompose() for s in soup.select("tr.thead")] return soup def read_season_info(soup): nav = soup.select("#bottom_nav_container")[0] hrefs = [a["href"] for a in nav.find_all('a')] season = os.path.basename(hrefs[1]).split("_")[0] return season def read_line_score(soup): line_score = pd.read_html(str(soup), attrs={'id': 'line_score'})[0] cols = list(line_score.columns) cols[0] = "team" cols[-1] = "total" line_score.columns = cols line_score = line_score[["team", "total"]] return line_score def read_stats(soup, team, stat): df = pd.read_html(str(soup), attrs={'id': f'box-{team}-game-{stat}'}, index_col=0)[0] df = df.apply(pd.to_numeric, errors="coerce") return df games = [] base_cols = None for box_score in box_scores: soup = parse_html(box_score) line_score = read_line_score(soup) teams = list(line_score["team"]) summaries = [] for team in teams: basic = read_stats(soup, team, "basic") advanced = read_stats(soup, team, "advanced") totals = pd.concat([basic.iloc[-1, :], advanced.iloc[-1, :]]) totals.index = totals.index.str.lower() maxes = pd.concat([basic.iloc[:-1].max(), advanced.iloc[:-1].max()]) maxes.index = maxes.index.str.lower() + "_max" summary = pd.concat([totals, maxes]) if base_cols is None: base_cols = list(summary.index.drop_duplicates(keep="first")) base_cols = [b for b in base_cols if "bpm" not in b] summary = summary[base_cols] summaries.append(summary) summary = pd.concat(summaries, axis=1).T game = pd.concat([summary, line_score], axis=1) game["home"] = [0, 1] game_opp = game.iloc[::-1].reset_index() game_opp.columns += "_opp" full_game = pd.concat([game, game_opp], axis=1) full_game["season"] = read_season_info(soup) full_game["date"] = os.path.basename(box_score)[:8] full_game["date"] = pd.to_datetime(full_game["date"], format="%Y%m%d") full_game["won"] = full_game["total"] > full_game["total_opp"] games.append(full_game) if len(games) % 100 == 0: print(f"{len(games)} / {len(box_scores)}") games_df = pd.concat(games, ignore_index=True) print(games_df) games_df.to_csv("nba_games.csv") #outcome 100 / 8394 200 / 8394 300 / 8394 400 / 8394 500 / 8394 600 / 8394 700 / 8394 800 / 8394 900 / 8394 1000 / 8394 1100 / 8394 1200 / 8394 1300 / 8394 1400 / 8394 1500 / 8394 1600 / 8394 1700 / 8394 1800 / 8394 1900 / 8394 Traceback (most recent call last): File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 730, in _build_doc r = parse(self.io, parser=parser) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\lxml\html\__init__.py", line 937, in parse return etree.parse(filename_or_url, parser, base_url=base_url, **kw) File "src\lxml\etree.pyx", line 3538, in lxml.etree.parse File 
"src\lxml\parser.pxi", line 1876, in lxml.etree._parseDocument File "src\lxml\parser.pxi", line 1902, in lxml.etree._parseDocumentFromURL File "src\lxml\parser.pxi", line 1805, in lxml.etree._parseDocFromFile File "src\lxml\parser.pxi", line 1177, in lxml.etree._BaseParser._parseDocFromFile File "src\lxml\parser.pxi", line 615, in lxml.etree._ParserContext._handleParseResultDoc File "src\lxml\parser.pxi", line 725, in lxml.etree._handleParseResult File "src\lxml\parser.pxi", line 652, in lxml.etree._raiseParseError OSError: Error reading file '': failed to load external entity "" During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Martin\PycharmProjects\Dog\nba game.py", line 52, in <module> line_score = read_line_score(soup) File "C:\Users\Martin\PycharmProjects\Dog\nba game.py", line 30, in read_line_score line_score = pd.read_html(str(soup), attrs={'id': 'line_score'})[0] File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 1098, in read_html return _parse( File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 906, in _parse tables = p.parse_tables() File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 222, in parse_tables tables = self._parse_tables(self._build_doc(), self.match, self.attrs) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 738, in _build_doc r = fromstring(self.io, parser=parser) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\lxml\html\__init__.py", line 873, in fromstring doc = document_fromstring(html, parser=parser, base_url=base_url, **kw) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\lxml\html\__init__.py", line 761, in document_fromstring raise etree.ParserError( lxml.etree.ParserError: Document is empty Process finished with exit code 1 A: I'm working on the same project and ran into the same issue. After running this code, I found 3 empty box scores: for b in box_scores: if os.path.getsize(b) == 0: # check for empty files print(b) I removed the 3 empty files using the following list comprehension: box_scores = [b for b in box_scores if os.path.getsize(b) != 0] Currently rerunning the code now, will let you know if it works.
Python ParseError Document is empty
I'm lost. You won't be able to run the code yourself because it depends on existing files in the directory. Does anyone know why this occurs? Below is the code and the executed error. It runs up to 1900 before stopping. Why 1900? I've run it 5 times, and it's always 1900. I would understand the issue more if it crashed immediately, but it runs and then fails halfway through? import os import pandas as pd #this replaces parse_data_live SCORE_DIR = "data/scores" box_scores = os.listdir(SCORE_DIR) box_scores = [os.path.join(SCORE_DIR, f) for f in box_scores if f.endswith(".html")] from bs4 import BeautifulSoup def parse_html(box_score): with open(box_score, encoding="utf-8") as f: html = f.read() #with open(box_score) as f: #html = f.read() soup = BeautifulSoup(html, 'lxml') [s.decompose() for s in soup.select("tr.over_header")] [s.decompose() for s in soup.select("tr.thead")] return soup def read_season_info(soup): nav = soup.select("#bottom_nav_container")[0] hrefs = [a["href"] for a in nav.find_all('a')] season = os.path.basename(hrefs[1]).split("_")[0] return season def read_line_score(soup): line_score = pd.read_html(str(soup), attrs={'id': 'line_score'})[0] cols = list(line_score.columns) cols[0] = "team" cols[-1] = "total" line_score.columns = cols line_score = line_score[["team", "total"]] return line_score def read_stats(soup, team, stat): df = pd.read_html(str(soup), attrs={'id': f'box-{team}-game-{stat}'}, index_col=0)[0] df = df.apply(pd.to_numeric, errors="coerce") return df games = [] base_cols = None for box_score in box_scores: soup = parse_html(box_score) line_score = read_line_score(soup) teams = list(line_score["team"]) summaries = [] for team in teams: basic = read_stats(soup, team, "basic") advanced = read_stats(soup, team, "advanced") totals = pd.concat([basic.iloc[-1, :], advanced.iloc[-1, :]]) totals.index = totals.index.str.lower() maxes = pd.concat([basic.iloc[:-1].max(), advanced.iloc[:-1].max()]) maxes.index = maxes.index.str.lower() + "_max" summary = pd.concat([totals, maxes]) if base_cols is None: base_cols = list(summary.index.drop_duplicates(keep="first")) base_cols = [b for b in base_cols if "bpm" not in b] summary = summary[base_cols] summaries.append(summary) summary = pd.concat(summaries, axis=1).T game = pd.concat([summary, line_score], axis=1) game["home"] = [0, 1] game_opp = game.iloc[::-1].reset_index() game_opp.columns += "_opp" full_game = pd.concat([game, game_opp], axis=1) full_game["season"] = read_season_info(soup) full_game["date"] = os.path.basename(box_score)[:8] full_game["date"] = pd.to_datetime(full_game["date"], format="%Y%m%d") full_game["won"] = full_game["total"] > full_game["total_opp"] games.append(full_game) if len(games) % 100 == 0: print(f"{len(games)} / {len(box_scores)}") games_df = pd.concat(games, ignore_index=True) print(games_df) games_df.to_csv("nba_games.csv") #outcome 100 / 8394 200 / 8394 300 / 8394 400 / 8394 500 / 8394 600 / 8394 700 / 8394 800 / 8394 900 / 8394 1000 / 8394 1100 / 8394 1200 / 8394 1300 / 8394 1400 / 8394 1500 / 8394 1600 / 8394 1700 / 8394 1800 / 8394 1900 / 8394 Traceback (most recent call last): File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 730, in _build_doc r = parse(self.io, parser=parser) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\lxml\html\__init__.py", line 937, in parse return etree.parse(filename_or_url, parser, base_url=base_url, **kw) File "src\lxml\etree.pyx", line 3538, in lxml.etree.parse File "src\lxml\parser.pxi", line 1876, in 
lxml.etree._parseDocument File "src\lxml\parser.pxi", line 1902, in lxml.etree._parseDocumentFromURL File "src\lxml\parser.pxi", line 1805, in lxml.etree._parseDocFromFile File "src\lxml\parser.pxi", line 1177, in lxml.etree._BaseParser._parseDocFromFile File "src\lxml\parser.pxi", line 615, in lxml.etree._ParserContext._handleParseResultDoc File "src\lxml\parser.pxi", line 725, in lxml.etree._handleParseResult File "src\lxml\parser.pxi", line 652, in lxml.etree._raiseParseError OSError: Error reading file '': failed to load external entity "" During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Martin\PycharmProjects\Dog\nba game.py", line 52, in <module> line_score = read_line_score(soup) File "C:\Users\Martin\PycharmProjects\Dog\nba game.py", line 30, in read_line_score line_score = pd.read_html(str(soup), attrs={'id': 'line_score'})[0] File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 1098, in read_html return _parse( File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 906, in _parse tables = p.parse_tables() File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 222, in parse_tables tables = self._parse_tables(self._build_doc(), self.match, self.attrs) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\pandas\io\html.py", line 738, in _build_doc r = fromstring(self.io, parser=parser) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\lxml\html\__init__.py", line 873, in fromstring doc = document_fromstring(html, parser=parser, base_url=base_url, **kw) File "C:\Users\Martin\PycharmProjects\Dog\venv\lib\site-packages\lxml\html\__init__.py", line 761, in document_fromstring raise etree.ParserError( lxml.etree.ParserError: Document is empty Process finished with exit code 1
[ "I'm working on the same project and ran into the same issue.\nAfter running this code, I found 3 empty box scores:\nfor b in box_scores:\n if os.path.getsize(b) == 0: # check for empty files\n print(b)\n\nI removed the 3 empty files using the following list comprehension:\nbox_scores = [b for b in box_scores if os.path.getsize(b) != 0]\n\nCurrently rerunning the code now, will let you know if it works.\n" ]
[ 0 ]
[]
[]
[ "pandas", "parsing", "python" ]
stackoverflow_0074176923_pandas_parsing_python.txt
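A minimal defensive-loading sketch for the record above, combining both ideas from the thread: drop zero-byte files up front and guard the parse so one bad download cannot abort a multi-hour run. Here parse_html is the question's own function, and catching lxml.etree.ParserError is an assumption based on the traceback shown:

import os
from lxml.etree import ParserError

SCORE_DIR = "data/scores"
box_scores = [os.path.join(SCORE_DIR, f) for f in os.listdir(SCORE_DIR) if f.endswith(".html")]

# Skip zero-byte downloads, which are what make lxml raise "Document is empty"
box_scores = [b for b in box_scores if os.path.getsize(b) > 0]

for box_score in box_scores:
    try:
        soup = parse_html(box_score)  # parse_html as defined in the question
    except ParserError:
        print(f"skipping unparsable file: {box_score}")
        continue
    # ... rest of the per-game processing from the question ...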
Q: How to cast String float to Float in PySpark? I have the following PySpark dataframe: df = spark.createDataFrame( [ ('31,2', 'foo'), ('33,1', 'bar'), ], ['cost', 'label'] ) I need to cast the 'cost' column to float. I do it as follows: df = df.withColumn('cost', df.cost.cast('float')) However, as a result I get null values instead of numbers in the cost column. How can I convert cost to float numbers? A: This should work for you. from pyspark.sql import functions as F df = (df.withColumn('cost', F.regexp_replace(df.cost, ',', '.')) .withColumn('cost', df.cost.cast('float'))) A: I think a simple lambda expression should take care of most things. df.loc[:, 'cost'] = df.cost.apply(lambda x: float(x.replace(',', '.')))
How to cast String float to Float in PySpark?
I have the following PySpark dataframe: df = spark.createDataFrame( [ ('31,2', 'foo'), ('33,1', 'bar'), ], ['cost', 'label'] ) I need to cast the 'cost' column to float. I do it as follows: df = df.withColumn('cost', df.cost.cast('float')) However, as a result I get null values instead of numbers in the cost column. How can I convert cost to float numbers?
[ "This should work for you.\nfrom pyspark.sql import functions as F\n\ndf = (df.withColumn('cost', F.regexp_replace(df.cost, ',', '.'))\n .withColumn('cost', df.cost.cast('float')))\n\n\n", "I think a simple lambda expression should take care of most things.\n df.loc[:, 'cost'] = df.cost.apply(lambda x: float(x.replace(',', '.')))\n\n" ]
[ 2, 1 ]
[]
[]
[ "pyspark", "python" ]
stackoverflow_0074481067_pyspark_python.txt
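An end-to-end sketch of the regexp_replace approach from the first answer, runnable as-is; the decimal comma is the reason the plain cast in the question produced nulls:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([('31,2', 'foo'), ('33,1', 'bar')], ['cost', 'label'])

# Swap the decimal comma for a point, then cast; '31,2' is not a valid
# float literal, which is why the direct cast returned null
df = df.withColumn('cost', F.regexp_replace('cost', ',', '.').cast('float'))
df.show()  # cost is now FloatType: 31.2 and 33.1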
Q: How to use model.fit generator I have these data generators #Data Preprocessing train_datagen = ImageDataGenerator(rescale=1.0/255) train_generator = train_datagen.flow_from_directory(directory=images_train,target_size=(1024,1024),class_mode='categorical',batch_size=32) val_datagen = ImageDataGenerator(rescale=1.0/255) val_generator = train_datagen.flow_from_directory(directory=images_valid,target_size=(1024,1024),class_mode='categorical',batch_size=32) test_datagen = ImageDataGenerator(rescale=1.0/255) test_generator = train_datagen.flow_from_directory(directory=images_test,target_size=(1024,1024),class_mode='categorical',batch_size=32) I want to pass them to model.fit but I don't know how. It keeps raising an error. This is the error --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-032c8242ff66> in <module> 2 steps_per_epoch=len(train_generator)//32, 3 epochs=20,validation_data=val_generator, ----> 4 validation_steps=len(val_generator)//32) 1 frames /usr/local/lib/python3.7/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1418 logs = tf_utils.sync_to_numpy_or_python_type(logs) 1419 if logs is None: -> 1420 raise ValueError('Unexpected result of `train_function` ' 1421 '(Empty logs). Please use ' 1422 '`Model.compile(..., run_eagerly=True)`, or ' ValueError: Unexpected result of `train_function` (Empty logs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`. How do I use the model.fit function? A: before calling model.fit(x, y) please compile the model first, e.g.: model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) Ref: https://keras.io/api/models/model_training_apis/
How to use model.fit generator
I have these data generators #Data Preprocessing train_datagen = ImageDataGenerator(rescale=1.0/255) train_generator = train_datagen.flow_from_directory(directory=images_train,target_size=(1024,1024),class_mode='categorical',batch_size=32) val_datagen = ImageDataGenerator(rescale=1.0/255) val_generator = train_datagen.flow_from_directory(directory=images_valid,target_size=(1024,1024),class_mode='categorical',batch_size=32) test_datagen = ImageDataGenerator(rescale=1.0/255) test_generator = train_datagen.flow_from_directory(directory=images_test,target_size=(1024,1024),class_mode='categorical',batch_size=32) I want to pass them to model.fit but I don't know how. It keeps raising an error. This is the error --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-032c8242ff66> in <module> 2 steps_per_epoch=len(train_generator)//32, 3 epochs=20,validation_data=val_generator, ----> 4 validation_steps=len(val_generator)//32) 1 frames /usr/local/lib/python3.7/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1418 logs = tf_utils.sync_to_numpy_or_python_type(logs) 1419 if logs is None: -> 1420 raise ValueError('Unexpected result of `train_function` ' 1421 '(Empty logs). Please use ' 1422 '`Model.compile(..., run_eagerly=True)`, or ' ValueError: Unexpected result of `train_function` (Empty logs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`. How do I use the model.fit function?
[ "before calling model.fit(x, y) please compile the model first, e.g.:\nmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n\nRef:\nhttps://keras.io/api/models/model_training_apis/\n" ]
[ 0 ]
[]
[]
[ "keras", "python", "tf.keras" ]
stackoverflow_0074482746_keras_python_tf.keras.txt
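A minimal sketch of compiling before fitting, using the generators from the question; the tiny convolutional architecture is a placeholder, not the asker's model. Note also that the question builds val_generator and test_generator from train_datagen, which looks like a copy-paste slip:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(1024, 1024, 3)),
    keras.layers.Conv2D(16, 3, activation='relu'),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(train_generator.num_classes, activation='softmax'),
])

# compile() must run before fit(); categorical_crossentropy matches class_mode='categorical'
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# len(train_generator) is already the number of batches, so steps_per_epoch
# can simply be omitted and Keras will infer it from the generator
model.fit(train_generator, validation_data=val_generator, epochs=20)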
Q: Does pytest support multiprocessing.set_start_method? The doc of multiprocessing.set_start_method notes that: Note that this should be called at most once, and it should be protected inside the if __name__ == '__main__' clause of the main module. However, if I put multiprocessing.set_start_method('spawn') in a pytest module fixture, I do not know whether it will work correctly. A: Indeed, as stated in the documentation, you will be in trouble if you try to call multiprocessing.set_start_method() from multiple unit test functions. Moreover, this will affect your whole program and may interoperate badly with the entire test suite. However, there exists a workaround which is described in the documentation too: Alternatively, you can use get_context() to obtain a context object. Context objects have the same API as the multiprocessing module, and allow one to use multiple start methods in the same program. import multiprocessing as mp def foo(q): q.put('hello') if __name__ == '__main__': ctx = mp.get_context('spawn') q = ctx.Queue() p = ctx.Process(target=foo, args=(q,)) p.start() print(q.get()) p.join() This method can be used per-test to avoid the compatibility issues discussed. It can be combined with "monkeypatching" or "mocking" to test your class with different start methods: # my_class.py import multiprocessing class MyClass: def __init__(self): self._queue = multiprocessing.Queue() def process(self, x): # Very simplified example of a method using a multiprocessing Queue self._queue.put(x) return self._queue.get() # tests/test_my_class.py import multiprocessing import my_class def test_spawn(monkeypatch): ctx = multiprocessing.get_context('spawn') monkeypatch.setattr(my_class.multiprocessing, "Queue", ctx.Queue) obj = my_class.MyClass() assert obj.process(6) == 6 def test_fork(monkeypatch): ctx = multiprocessing.get_context('fork') monkeypatch.setattr(my_class.multiprocessing, "Queue", ctx.Queue) obj = my_class.MyClass() assert obj.process(6) == 6 A: If you really do always want to use the same start method, you can set it in a session-scoped fixture in the file conftest.py in the root of your source tree. E.g. # conftest.py import multiprocessing import pytest @pytest.fixture(scope="session", autouse=True) def always_spawn(): multiprocessing.set_start_method("spawn")
Does pytest support multiprocessing.set_start_method?
The doc of multiprocessing.set_start_method notes that: Note that this should be called at most once, and it should be protected inside the if __name__ == '__main__' clause of the main module. However, if I put multiprocessing.set_start_method('spawn') in a pytest module fixture, I do not know whether it will work correctly.
[ "Indeed, as stated in the documentation, you will be in trouble if you try to call multiprocessing.set_start_method() from multiple unit test functions. Moreover, this will affect your whole program and may interoperate badly with the entire test suite.\nHowever, there exists a workaround which is described in the documentation too:\n\nAlternatively, you can use get_context() to obtain a context\n object. Context objects have the same API as the multiprocessing\n module, and allow one to use multiple start methods in the same\n program.\nimport multiprocessing as mp\n\ndef foo(q):\n q.put('hello')\n\nif __name__ == '__main__':\n ctx = mp.get_context('spawn')\n q = ctx.Queue()\n p = ctx.Process(target=foo, args=(q,))\n p.start()\n print(q.get())\n p.join()\n\n\nThis method can be used per-test to avoid the compatibility issues discussed. It can be combined with \"monkeypatching\" or \"mocking\" to test your class with different start methods:\n# my_class.py\n\nimport multiprocessing\n\nclass MyClass:\n\n def __init__(self):\n self._queue = multiprocessing.Queue()\n\n def process(self, x):\n # Very simplified example of a method using a multiprocessing Queue \n self._queue.put(x)\n return self._queue.get()\n\n# tests/test_my_class.py\n\nimport multiprocessing\nimport my_class\n\ndef test_spawn(monkeypatch):\n ctx = multiprocessing.get_context('spawn')\n monkeypatch.setattr(my_class.multiprocessing, \"Queue\", ctx.Queue)\n obj = my_class.MyClass()\n assert obj.process(6) == 6\n\ndef test_fork(monkeypatch):\n ctx = multiprocessing.get_context('fork')\n monkeypatch.setattr(my_class.multiprocessing, \"Queue\", ctx.Queue)\n obj = my_class.MyClass()\n assert obj.process(6) == 6\n\n", "If you really do always want to use the same start method, you can set it in a session-scoped fixture in the file conftest.py in the root of your source tree. E.g.\n# conftest.py\nimport multiprocessing\nimport pytest\n\n@pytest.fixture(scope=\"session\", autouse=True)\ndef always_spawn():\n multiprocessing.set_start_method(\"spawn\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "pytest", "python", "python_multiprocessing" ]
stackoverflow_0052921309_pytest_python_python_multiprocessing.txt
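One caveat with the session-scoped fixture above: if anything else has already set the start method, set_start_method raises "RuntimeError: context has already been set". The documented force=True flag sidesteps that; a sketch:

# conftest.py
import multiprocessing
import pytest

@pytest.fixture(scope="session", autouse=True)
def always_spawn():
    # force=True re-applies the start method even if it was already set,
    # avoiding "RuntimeError: context has already been set"
    multiprocessing.set_start_method("spawn", force=True)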
Q: Phone number parser producing undesirable white space phone_number = int(input()) line_number = phone_number % 10000 area_code_prefix = phone_number // 10000 area_code = area_code_prefix // 1000 prefix = area_code_prefix % 1000 print('(',area_code,')',prefix,'-',line_number) and I can't figure out how to fix it. I've already tried a few different str() conversions to try to solve this and none has helped. A: Simple fix: separating print() arguments with ',' inserts a space; concatenate strings instead phone_number = int(input("Enter phone number")) line_number = phone_number % 10000 area_code_prefix = phone_number // 10000 area_code = area_code_prefix // 1000 prefix = area_code_prefix % 1000 print('('+str(area_code)+')'+str(prefix)+'-'+str(line_number))
Phone number parser producing undesirable white space
phone_number = int(input()) line_number = phone_number % 10000 area_code_prefix = phone_number // 10000 area_code = area_code_prefix // 1000 prefix = area_code_prefix % 1000 print('(',area_code,')',prefix,'-',line_number) and I can't figure out how to fix it. I've already tried a few different str() conversions to try to solve this and none has helped.
[ "Simple fix: separating print() arguments with ',' inserts a space; concatenate strings instead\n\nphone_number = int(input(\"Enter phone number\"))\nline_number = phone_number % 10000\narea_code_prefix = phone_number // 10000\narea_code = area_code_prefix // 1000\nprefix = area_code_prefix % 1000\n\nprint('('+str(area_code)+')'+str(prefix)+'-'+str(line_number))\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074483149_python.txt
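An alternative sketch using an f-string, which avoids the separator spaces entirely; the zero-padded format specs are an extra safeguard, since the integer arithmetic in the question would otherwise drop leading zeros in the prefix or line number:

phone_number = int(input())
line_number = phone_number % 10000
area_code_prefix = phone_number // 10000
area_code = area_code_prefix // 1000
prefix = area_code_prefix % 1000

# :03d and :04d keep leading zeros, e.g. prefix 055 or line number 0042
print(f'({area_code}){prefix:03d}-{line_number:04d}')  # e.g. (800)555-1234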
Q: How to extract edges from polydata as connected features? I have a polydata structure and its edges extracted with the extract_feature_edges function, but as unconnected cells (separate lines). Is it possible to connect those cells (lines) from their common points and then get the different features (lands, islands such as what you can see in the image - Antarctica, Australia, ... - BTW they are paleo continents)? In summary, I would like to extract from my grid and its edges the different land parts as separate polydata. I have tried with the python module shapely and the polygonize function, it works but not with 3D coordinates (https://shapely.readthedocs.io/en/latest/reference/shapely.polygonize.html). import pyvista as pv ! wget -q -nc https://thredds-su.ipsl.fr/thredds/fileServer/ipsl_thredds/brocksce/pyvista/mesh.vtk mesh = pv.PolyData('mesh.vtk') edges = mesh.extract_feature_edges(boundary_edges=True) pl = pv.Plotter() pl.add_mesh(pv.Sphere(radius=0.999, theta_resolution=360, phi_resolution=180)) pl.add_mesh(mesh, show_edges=True, edge_color="gray") pl.add_mesh(edges, color="red", line_width=2) viewer = pl.show(jupyter_backend='pythreejs', return_viewer=True) display(viewer) Any idea? A: Here is a solution using vtk.vtkStripper() to join contiguous segments into polylines. See thread from https://discourse.vtk.org/t/get-a-continuous-line-from-a-polydata-structure/9864 import pyvista as pv import vtk import random ! wget -q -nc https://thredds-su.ipsl.fr/thredds/fileServer/ipsl_thredds/brocksce/pyvista/mesh.vtk mesh = pv.PolyData('mesh.vtk') edges = mesh.extract_feature_edges(boundary_edges=True) pl = pv.Plotter() pl.add_mesh(pv.Sphere(radius=0.999, theta_resolution=360, phi_resolution=180)) pl.add_mesh(mesh, show_edges=True, edge_color="gray") regions = edges.connectivity() regCount = len(set(pv.get_array(regions, name="RegionId"))) connectivityFilter = vtk.vtkPolyDataConnectivityFilter() stripper = vtk.vtkStripper() for r in range(regCount): connectivityFilter.SetInputData(edges) connectivityFilter.SetExtractionModeToSpecifiedRegions() connectivityFilter.InitializeSpecifiedRegionList() connectivityFilter.AddSpecifiedRegion(r) connectivityFilter.Update() stripper.SetInputData(connectivityFilter.GetOutput()) stripper.SetJoinContiguousSegments(True) stripper.Update() reg = stripper.GetOutput() random_color = "#"+''.join([random.choice('0123456789ABCDEF') for i in range(6)]) pl.add_mesh(reg, color=random_color, line_width=4) viewer = pl.show(jupyter_backend='pythreejs', return_viewer=True) display(viewer) A: This has come up before in github discussions. The conclusion was that PyVista doesn't have anything built-in to reorder edges, but there might be third-party libraries that can do this (this answer mentioned libigl, but I have no experience with that). I have some ideas on how to tackle this, but there are concerns about the applicability of such a helper in the generic case. In your specific case, however, we know that every edge is a closed loop, and that there aren't very many of them, so we don't have to worry about performance (and especially memory footprint) that much. 
Here's a manual approach to reordering the edges by building an adjacency graph and walking until we end up where we started on each loop: from collections import defaultdict import pyvista as pv # load example mesh mesh = pv.read('mesh.vtk') # get edges edges = mesh.extract_feature_edges(boundary_edges=True) # build undirected adjacency graph from edges (2-length lines) # (potential performance improvement: use connectivity to only do this for each closed loop) # (potentially via calling edges.split_bodies()) lines = edges.lines.reshape(-1, 3)[:, 1:] adjacency = defaultdict(set) # {2: {1, 3}, ...} if there are lines from point 2 to point 1 and 3 for first, second in lines: adjacency[first].add(second) adjacency[second].add(first) # start looping from whichever point, keep going until we run out of adjacent points points_left = set(range(edges.n_points)) loops = [] while points_left: point = points_left.pop() # starting point for next loop loop = [point] loops.append(loop) while True: # keep walking the loop neighb = adjacency[point].pop() loop.append(neighb) if neighb == loop[0]: # this loop is done break # make sure we never backtrack adjacency[neighb].remove(point) # bookkeeping points_left.discard(neighb) point = neighb # assemble new lines based on the existing ones, flatten lines = sum(([len(loop)] + loop for loop in loops), []) # overwrite the lines in the original edges; optionally we could create a copy here edges.lines = lines # edges are long, closed loops by construction, so it's probably correct # plot each curve with an individual colour just to be safe plotter = pv.Plotter() plotter.add_mesh(pv.Sphere(radius=0.999)) plotter.add_mesh(edges, scalars=range(edges.n_cells), line_width=3, show_scalar_bar=False) plotter.enable_anti_aliasing('msaa') plotter.show() This code replaces your original 1760 2-length lines with 14 larger lines defining each loop. You have to be a bit careful, though: north of Australia you have a loop that self-intersects: The intersection point appears 4 times instead of 2. This means that my brute-force solver doesn't give a well-defined result: it will choose at the intersection randomly, and if by (bad) luck we start the loop from the intersection point the algorithm will probably fail. Making it more robust is left as an exercise to the reader (my comment about splitting the edges into individual ones could help with this issue).
How to extract edges from polydata as connected features?
I have a polydata structure and its edges extracted with the extract_feature_edges function, but as unconnected cells (separate lines). Is it possible to connect those cells (lines) from their common points and then get the different features (lands, islands such as what you can see in the image - Antarctica, Australia, ... - BTW they are paleo continents)? In summary, I would like to extract from my grid and its edges the different land parts as separate polydata. I have tried with the python module shapely and the polygonize function, it works but not with 3D coordinates (https://shapely.readthedocs.io/en/latest/reference/shapely.polygonize.html). import pyvista as pv ! wget -q -nc https://thredds-su.ipsl.fr/thredds/fileServer/ipsl_thredds/brocksce/pyvista/mesh.vtk mesh = pv.PolyData('mesh.vtk') edges = mesh.extract_feature_edges(boundary_edges=True) pl = pv.Plotter() pl.add_mesh(pv.Sphere(radius=0.999, theta_resolution=360, phi_resolution=180)) pl.add_mesh(mesh, show_edges=True, edge_color="gray") pl.add_mesh(edges, color="red", line_width=2) viewer = pl.show(jupyter_backend='pythreejs', return_viewer=True) display(viewer) Any idea?
[ "Here is a solution using vtk.vtkStripper() to join contiguous segments into polylines.\nSee thread from https://discourse.vtk.org/t/get-a-continuous-line-from-a-polydata-structure/9864\nimport pyvista as pv\nimport vtk\nimport random\n\n! wget -q -nc https://thredds-su.ipsl.fr/thredds/fileServer/ipsl_thredds/brocksce/pyvista/mesh.vtk\nmesh = pv.PolyData('mesh.vtk')\nedges = mesh.extract_feature_edges(boundary_edges=True)\n\npl = pv.Plotter()\n\npl.add_mesh(pv.Sphere(radius=0.999, theta_resolution=360, phi_resolution=180))\npl.add_mesh(mesh, show_edges=True, edge_color=\"gray\")\n\nregions = edges.connectivity()\nregCount = len(set(pv.get_array(regions, name=\"RegionId\")))\n\nconnectivityFilter = vtk.vtkPolyDataConnectivityFilter()\nstripper = vtk.vtkStripper()\n\nfor r in range(regCount):\n connectivityFilter.SetInputData(edges)\n connectivityFilter.SetExtractionModeToSpecifiedRegions()\n connectivityFilter.InitializeSpecifiedRegionList()\n connectivityFilter.AddSpecifiedRegion(r)\n connectivityFilter.Update()\n \n stripper.SetInputData(connectivityFilter.GetOutput())\n stripper.SetJoinContiguousSegments(True)\n stripper.Update()\n reg = stripper.GetOutput()\n \n random_color = \"#\"+''.join([random.choice('0123456789ABCDEF') for i in range(6)])\n pl.add_mesh(reg, color=random_color, line_width=4)\n\nviewer = pl.show(jupyter_backend='pythreejs', return_viewer=True)\ndisplay(viewer)\n\n", "This has come up before in github discussions. The conclusion was that PyVista doesn't have anything built-in to reorder edges, but there might be third-party libraries that can do this (this answer mentioned libigl, but I have no experience with that).\nI have some ideas on how to tackle this, but there are concerns about the applicability of such a helper in the generic case. 
In your specific case, however, we know that every edge is a closed loop, and that there aren't very many of them, so we don't have to worry about performance (and especially memory footprint) that much.\nHere's a manual approach to reordering the edges by building an adjacency graph and walking until we end up where we started on each loop:\nfrom collections import defaultdict\n\nimport pyvista as pv\n\n# load example mesh\nmesh = pv.read('mesh.vtk')\n\n# get edges\nedges = mesh.extract_feature_edges(boundary_edges=True)\n\n# build undirected adjacency graph from edges (2-length lines)\n# (potential performance improvement: use connectivity to only do this for each closed loop)\n# (potentially via calling edges.split_bodies())\nlines = edges.lines.reshape(-1, 3)[:, 1:]\nadjacency = defaultdict(set) # {2: {1, 3}, ...} if there are lines from point 2 to point 1 and 3\nfor first, second in lines:\n adjacency[first].add(second)\n adjacency[second].add(first)\n\n# start looping from whichever point, keep going until we run out of adjacent points\npoints_left = set(range(edges.n_points))\nloops = []\nwhile points_left:\n point = points_left.pop() # starting point for next loop\n loop = [point]\n loops.append(loop)\n while True:\n # keep walking the loop\n neighb = adjacency[point].pop()\n loop.append(neighb)\n if neighb == loop[0]:\n # this loop is done\n break\n # make sure we never backtrack\n adjacency[neighb].remove(point)\n # bookkeeping\n points_left.discard(neighb)\n point = neighb\n\n# assemble new lines based on the existing ones, flatten\nlines = sum(([len(loop)] + loop for loop in loops), [])\n\n# overwrite the lines in the original edges; optionally we could create a copy here\nedges.lines = lines\n\n# edges are long, closed loops by construction, so it's probably correct\n# plot each curve with an individual colour just to be safe\nplotter = pv.Plotter()\nplotter.add_mesh(pv.Sphere(radius=0.999))\nplotter.add_mesh(edges, scalars=range(edges.n_cells), line_width=3, show_scalar_bar=False)\nplotter.enable_anti_aliasing('msaa')\nplotter.show()\n\nThis code replaces your original 1760 2-length lines with 14 larger lines defining each loop. You have to be a bit careful, though: north of Australia you have a loop that self-intersects:\n\nThe intersection point appears 4 times instead of 2. This means that my brute-force solver doesn't give a well-defined result: it will choose at the intersection randomly, and if by (bad) luck we start the loop from the intersection point the algorithm will probably fail. Making it more robust is left as an exercise to the reader (my comment about splitting the edges into individual ones could help with this issue).\n" ]
[ 0, 0 ]
[]
[]
[ "python", "pyvista", "vtk" ]
stackoverflow_0074467727_python_pyvista_vtk.txt
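Since the question reports shapely.polygonize choking on 3D coordinates, a third option (a sketch, untested against this exact mesh, and assuming shapely >= 2.0) is to project the unit-sphere edge points to longitude/latitude and polygonize in 2D; loops crossing the antimeridian would need extra handling:

import numpy as np
import pyvista as pv
from shapely import LineString, polygonize

mesh = pv.read('mesh.vtk')
edges = mesh.extract_feature_edges(boundary_edges=True)

# Project points on the (near-)unit sphere to lon/lat in degrees
pts = edges.points
lon = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))
lat = np.degrees(np.arcsin(np.clip(pts[:, 2] / np.linalg.norm(pts, axis=1), -1, 1)))
lonlat = np.column_stack([lon, lat])

# Each VTK line cell is stored as [2, i0, i1]; build 2D segments from them
segments = edges.lines.reshape(-1, 3)[:, 1:]
land_outlines = polygonize([LineString(lonlat[s]) for s in segments])
print(len(land_outlines.geoms), "closed land polygons")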
Q: I am trying to convert a string from a list into an integer without losing the decimal places I want to convert a string to integer without rounding. For example s = "99.7" x = int(float(s)) Output: 99 But I want the output to be 99.7 I was thinking of just adding all the strings to a list and somehow converting the list to an integer but I am not sure how to do that or how to even do it individually. Desired output: x = '99.7' z = int(x) output: 99.7 A: An integer in Python cannot have a fractional part. To keep the decimal places you should use float(x) This will prevent any rounding.
I am trying to convert a string from a list into an integer without losing the decimal places
I want to convert a string to integer without rounding. For example s = "99.7" x = int(float(s)) Output: 99 But I want the output to be 99.7 I was thinking of just adding all the strings to a list and somehow converting the list to an integer but I am not sure how to do that or how to even do it individually. Desired output: x = '99.7' z = int(x) output: 99.7
[ "An integer in Python cannot have a fractional part. To keep the decimal places you should use\nfloat(x)\n\nThis will prevent any rounding.\n" ]
[ 1 ]
[]
[]
[ "decimal", "integer", "list", "python", "string" ]
stackoverflow_0074483183_decimal_integer_list_python_string.txt
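A short sketch contrasting the conversions, plus decimal.Decimal for when the value must stay exactly 99.7 rather than the nearest binary float:

s = "99.7"

x = float(s)        # 99.7 -- keeps the fractional part
y = int(float(s))   # 99   -- int() truncates the fraction

from decimal import Decimal
z = Decimal(s)      # Decimal('99.7'), exact decimal representation

prices = [float(v) for v in ["99.7", "3.14", "42"]]  # converting a whole list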
Q: Scrapy [scrapy.core.scraper] ERROR: Error processing I am trying to scrape some data from a website using scrapy. I am scraping the data using these lines of code: def parse(self, response): data = json.loads(response.body) flat = FlatItem() for item in data["_embedded"]["estates"]: flat['flat'] = item['price'] yield flat and the FlatItem() contains a field like this: from scrapy.item import Item, Field class FlatItem(Item): flat = Field() Then, I am trying to insert it into a PostgreSQL database, with a command like this: def process_item(self, item, spider): self.current.execute("""insert into flats(content, tags, author) values(%s)""", ( item["flat"], )) self.connection.commit() return item Unfortunately, when I'm trying to run the crawler, it gives me an exception like this: 2022-11-17 11:32:07 [scrapy.core.scraper] ERROR: Error processing {'flat': 3299000} Traceback (most recent call last): File "/Users/XY/.pyenv/versions/3.10.7/lib/python3.10/site-packages/twisted/internet/defer.py", line 892, in _runCallbacks current.result = callback( # type: ignore[misc] File "/Users/XY/.pyenv/versions/3.10.7/lib/python3.10/site-packages/scrapy/utils/defer.py", line 285, in f return deferred_from_coro(coro_f(*coro_args, **coro_kwargs)) File "/Users/XY/Library/CloudStorage/Creative/project/pipelines.py", line 29, in process_item self.current.execute("""insert into flats(content, tags, author) values(%s)""", ( psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block I've been searching for a solution for hours but unfortunately found nothing. Any idea? I am trying to pass scraped data from Scrapy to a PostgreSQL database, but keep getting this error. A: What is probably happening is that one of your items has a None value and postgresql doesn't accept None as a value. Try changing your pipeline process_item() to this: def process_item(self, item, spider): print(item) print(item["flat"]) if item["flat"]: self.current.execute( """insert into flats(content, tags, author) values(%s)""", (item["flat"],) ) self.connection.commit() return item
Scrapy [scrapy.core.scraper] ERROR: Error processing
I am trying to scrape some data from a website using scrapy. I am scraping the data using these lines of code: def parse(self, response): data = json.loads(response.body) flat = FlatItem() for item in data["_embedded"]["estates"]: flat['flat'] = item['price'] yield flat and the FlatItem() contains a field like this: from scrapy.item import Item, Field class FlatItem(Item): flat = Field() Then, I am trying to insert it into a PostgreSQL database, with a command like this: def process_item(self, item, spider): self.current.execute("""insert into flats(content, tags, author) values(%s)""", ( item["flat"], )) self.connection.commit() return item Unfortunately, when I'm trying to run the crawler, it gives me an exception like this: 2022-11-17 11:32:07 [scrapy.core.scraper] ERROR: Error processing {'flat': 3299000} Traceback (most recent call last): File "/Users/XY/.pyenv/versions/3.10.7/lib/python3.10/site-packages/twisted/internet/defer.py", line 892, in _runCallbacks current.result = callback( # type: ignore[misc] File "/Users/XY/.pyenv/versions/3.10.7/lib/python3.10/site-packages/scrapy/utils/defer.py", line 285, in f return deferred_from_coro(coro_f(*coro_args, **coro_kwargs)) File "/Users/XY/Library/CloudStorage/Creative/project/pipelines.py", line 29, in process_item self.current.execute("""insert into flats(content, tags, author) values(%s)""", ( psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block I've been searching for a solution for hours but unfortunately found nothing. Any idea? I am trying to pass scraped data from Scrapy to a PostgreSQL database, but keep getting this error.
[ "What is probably happening is that one of your items has a None value and postgresql doesn't accept None as a value.\nTry changing your pipeline process_item() to this:\ndef process_item(self, item, spider):\n print(item)\n print(item[\"flat\"])\n if item[\"flat\"]:\n self.current.execute(\n \"\"\"insert into flats(content, tags, author) values(%s)\"\"\", \n (item[\"flat\"],)\n )\n self.connection.commit()\n return item\n\n" ]
[ 0 ]
[]
[]
[ "postgresql", "python", "scrapy" ]
stackoverflow_0074474061_postgresql_python_scrapy.txt
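Worth noting for this record: psycopg2's InFailedSqlTransaction means an earlier statement in the same transaction already failed and everything afterwards is ignored until a rollback; here the INSERT names three columns but supplies a single %s placeholder, which would fail on the very first item. A hedged sketch of a pipeline that matches placeholders to columns and rolls back on error (storing only the price, since that is all the item carries; adjust columns to your schema):

def process_item(self, item, spider):
    try:
        self.current.execute(
            "insert into flats(content) values (%s)",  # one column, one placeholder
            (item["flat"],),
        )
        self.connection.commit()
    except Exception:
        # roll back so the connection is usable for the next item,
        # instead of staying in the aborted-transaction state
        self.connection.rollback()
        raise
    return item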
Q: How to fix "could not find or load the Qt platform plugin windows" while using Matplotlib in PyCharm I am getting the error "could not find or load the Qt platform plugin windows" while using matplotlib in PyCharm. How can I solve this? A: I had the same problem with Anaconda3 4.2.0 and 4.3.0.1 (64-bit). When I tried to run a simple program that uses matplotlib, I got this error message: This application failed to start because it could not find or load the Qt platform plugin "windows" Reinstalling the application may fix this problem. Reinstalling didn't fix it. What helped was this (found here): Look for the Anaconda directory and set the Library\plugins subdir (here c:\ProgramData\Anaconda3\Library\plugins) as environment variable QT_PLUGIN_PATH under Control Panel / System / Advanced System Settings / Environment Variables. After setting the variable you might need to restart PyCharm, if the change does not have an immediate effect. Even though after that the command line Python worked, TexWorks (which uses Qt as well) displayed an error message very much like it. Setting the QT_PLUGIN_PATH to the directory containing TexWorks' Qt DLLs (here C:\Users\chris\AppData\Local\Programs\MiKTeX 2.9\miktex\bin\x64) fixed the problem for both programs. A: If you want to visualize your matplotlibs in an alternative way, use a different backend that generates the graphs, charts etc. import matplotlib matplotlib.use('TKAgg') This worked for me. A: If you are running PyQt5 and PySide2, this solved the problem for me: Copy the following files: \Anaconda3\Lib\site-packages\PySide2\plugins\platforms\qminimal.dll \Anaconda3\Lib\site-packages\PySide2\plugins\platforms\qoffscreen.dll \Anaconda3\Lib\site-packages\PySide2\plugins\platforms\qwindows.dll to: \Anaconda3\Library\plugins\platforms\ A: I tried the following at Anaconda's prompt, and it solved this problem: conda remove qt conda remove pyqt conda install qt conda install pyqt A: I found that this was being caused by having the MiKTeX binaries in my PATH variable; and the wrong Qt dll's were being found. I just needed to re-arrange the PATH entries. (Dependency Walker is such a useful tool.) A: I had a similar problem with PyCharm where things worked great in main run but not in debugger, getting the same error message. This happened for me because I had moved my Anaconda installation to a different directory. The debugger goes and checks a qt.conf file that is located at the same place as python. This location can be found by running import sys; print sys.executable. I found this solution through a pile of web searches and it was buried deep here. The qt.conf file needs to have correct paths for debugger to work. My qt.conf files looks like this in notepad: [Paths] Prefix = E:/python/Anaconda3_py35/Library Binaries = E:/python/Anaconda3_py35/Library/bin Libraries = E:/python/Anaconda3_py35/Library/lib Headers = E:/python/Anaconda3_py35/Library/include/qt A: Just add a system variable: QT_QPA_PLATFORM_PLUGIN_PATH and set its value to C:\Python34\Lib\site-packages\PyQt4\plugins\platforms Voilà. Done A: I have found a solution that worked for me. This solution includes a code snippet to add before you import any modules from Pyside2 or PyQt5 package. See "Qt platform plugin "windows" #2" for more information. 
This code snippet is from the link: import os import PySide2 dirname = os.path.dirname(PySide2.__file__) plugin_path = os.path.join(dirname, 'plugins', 'platforms') os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = plugin_path from PySide2.QtWidgets import * ''' Your code goes here ''' This solution works for PyQt5 and PySide2 modules. I don't know if it's relevant but I added the QT_PLUGIN_PATH environment variable in the system before. That solution enabled me to test PySide2 scripts in IDLE. However, I faced the same error when I tried to run a bundled script (exe). With some shallow debugging, it's evident that plugin folder itself is missing. I fixed the problem by adding the plugin folder in the appropriate location: C:\Users\xxxx\.spyder-py3\My_QtProjects\Project 1\dist\MyQt_1\PySide2\ A: If the Pycharm console or debugger are showing "Could not find or load the Qt platform plugin windows", the Python EXE file may be located at a different location for the PyCharm interpreter. You might manually select it in File -> Settings -> Interpreter. Set the working directory: File -> Settings -> Build, Execution, Deployment -> Console -> Python Console -> Working directory. Set it to the parent directory where your all code exists. Open Control Panel -> System Settings -> Advanced System Settings -> Environment Variables -> New. Set the variable name QT_PLUGIN_PATH , Variable Directory: Users\<Username>\Appdata\Local\Continuum\Anaconda2\Library\plugins. Restart Pycharm. A: I solved it by: Adding a path: \Anaconda3\Lib\site-packages\PyQt5\Qt\bin to PATH. Setting an environment variable: QT_PLUGIN_PATH as \Anaconda3\Lib\site-packages\PyQt5\Qt\plugins or \Anaconda3\Library\plugins. Also, you can try: pyqt = os.path.dirname(PyQt5.__file__) os.environ['QT_PLUGIN_PATH'] = os.path.join(pyqt, "Qt/plugins") A: First, use the command: conda remove pyqt qt qtpy Then install using: conda install pyqt qt qtpy This worked for me. A: Copy the folder \Anaconda3\Library\plugins\platforms to \$\ where $ is your project interpreter folder. For example: "\project\anaconda_env\Scripts\" because PyCharm calls the python.exe in this folder, not the one in \Anaconda3. A: You may need to copy the "plugins" file in Anaconda3\Library. For example, on my computer it is S:\Anaconda3\Library\plugins to the same path of your .exe file. A: On Windows: Copy the folder platforms: C:\Users\%USERNAME%\AppData\Roaming\pyinstaller\bincache00_py35_64bit\pyqt5\qt\plugins\platforms Paste the folder platform into the folder location of the file .exe: Example: c:\MyFolder\yourFile.exe c:\MyFolder\platforms A: SOLUTION FOR WINDOWS USERS Create new environment variable with: name: QT_PLUGIN_PATH path: C:\yourpythonpath\Lib\site-packages\PyQt5\Qt\plugins after that exe file will work A: copy the plugins from PySide2 and paste and overwrite the existing plugins in Miniconda worked for me. (base) C:\ProgramData\Miniconda3\Lib\site-packages\PySide2\plugins\platforms>copy *.dll C:\ProgramData\Miniconda3\Library\plugins\platforms\ A: I had the same problem with Anaconda. For me, although not very elegant, the fastest solution was to unistall and reinstall Ananconda completely. After that, everything worked well again. A: I had the same issue. Following "Activating an environment" in "Managing environments" solved the issue. In the command line: conda activate myenv where myenv=base for my setup. 
A: I have the same issue and fixed in this way In Anaconda installation folder I went to : (change it to your installed path): C:\ProgramData\Anaconda3\Lib\site-packages\PySide2 Edit this file by adding the following code lines : # below the line 23 type.__signature__ pyside_package_dir = os.path.abspath(os.path.dirname(__file__)) dirname = os.path.dirname(__file__) plugin_path = os.path.join(dirname, 'plugins', 'platforms') os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = plugin_path save this file and try again and the issue should be gone :) A: Please try this in the script qt_path= os.path.dirname(PyQt5.__file__) os.environ['QT_PLUGIN_PATH'] = os.path.join(qt_path, "Qt/plugins") A: I know everyone above had provided various ways to fix OP's issue. I just want to add on some suggestions. By adding the QT_PLUGIN_PATH = C:\Users{YOUR_USERNAME}\Anaconda3\Library\plugins as your local machine environment variable it helps to fix OP's PyCharm issue above. However, this will break other systems in your machine like: Dropbox reports missing QT, AMD settings fails to launch(which happens on my side) etc. Instead of adding QT_PLUGIN_PATH to your machine locally, one can add the environment variable in PyCharm's python interpreter setting as shown below: This method not only allow your PyCharm's python.exe able to search those DLLs but also not breaking other systems' QT lookup PATH. Thanks A: I installed a package that had a QT-gui that I didn't need. So I just removed all the Qt modules from my environment. pip freeze | grep -i qt PyQt5==5.15.4 PyQt5-Qt5==5.15.2 PyQt5-sip==12.9.0 QtPy==1.9.0 pip uninstall PyQt5 pip uninstall PyQt5-Qt5 pip uninstall PyQt5-sip pip uninstall QtPy Problem solved. A: Inspired by Osama Adly, I think this kind of problems are all caused by Anaconda configuration for Qt DLLs on Windows platform. Just try to install PyQt/PySide in an empty environment besides Anaconda, for example a standalone Python program. You will find that the plugins about platforms are in the site-package directory itself. For comparation: \site-packages\PyQt6\Qt6\plugins\platforms \site-packages\PySide6\plugins\platforms But it seems that Anaconda contains some software depending on PyQt5 or Qt. Anaconda moves the platforms directory from PyQt5 to another folder and this folder might be contained in the PATH variable when using Anaconda. \Anaconda3\Library\plugins\platforms This could lead to unneccessary problems. These DLLs reserve the same name across different generation of Qt. For example, when I tried PySide6 in a virtual environment created with Anaconda, its call for DLLs will mistakenly use the Qt5 DLLS rather than the DLLs in its folder. A: if you are using anaconda/miniconda with matplotlib installed. you'll have to install uninstall anaconda/miniconda and use miniconda without matplotlib, a fix is to use normal python not anaconda. it has be a know issue here enter link description here A: In my situation, I did everything listed above and on other forum post: Copying and pasting files Adding system variables Uninstalling, downloading, and reinstalling programs Restarting the computer Enabling debugging mode Running the source code instead of the compiled program Running sfc /scannow All of this did not work. In my case, the solution was to update Windows. The computer was apparently running a very outdated version of Windows (10?) After 2-3 hours of installing the update, problem solved. 
Source/Inspiration: https://www.partitionwizard.com/clone-disk/no-qt-platform-plugin-could-be-initialized.html
How to fix "could not find or load the Qt platform plugin windows" while using Matplotlib in PyCharm
I am getting the error "could not find or load the Qt platform plugin windows" while using matplotlib in PyCharm. How can I solve this?
[ "I had the same problem with Anaconda3 4.2.0 and 4.3.0.1 (64-bit). When I tried to run a simple program that uses matplotlib, I got this error message:\nThis application failed to start because it could not find or load the Qt platform plugin \"windows\"\n\nReinstalling the application may fix this problem.\n\nReinstalling didn't fix it. \nWhat helped was this (found here):\nLook for the Anaconda directory and set the Library\\plugins subdir (here c:\\ProgramData\\Anaconda3\\Library\\plugins) as environment variable QT_PLUGIN_PATH under Control Panel / System / Advanced System Settings / Environment Variables.\nAfter setting the variable you might need to restart PyCharm, if the change does not have an immediate effect.\n\nEven though after that the command line Python worked, TexWorks (which uses Qt as well) displayed an error message very much like it. Setting the QT_PLUGIN_PATH to the directory containing TexWorks' Qt DLLs (here C:\\Users\\chris\\AppData\\Local\\Programs\\MiKTeX 2.9\\miktex\\bin\\x64) fixed the problem for both programs.\n", "If you want to visualize your matplotlibs in an alternative way, use a different backend that generates the graphs, charts etc. \nimport matplotlib\nmatplotlib.use('TKAgg')\n\nThis worked for me. \n", "If you are running PyQt5 and PySide2, this solved the problem for me:\nCopy the following files:\n\\Anaconda3\\Lib\\site-packages\\PySide2\\plugins\\platforms\\qminimal.dll\n\\Anaconda3\\Lib\\site-packages\\PySide2\\plugins\\platforms\\qoffscreen.dll\n\\Anaconda3\\Lib\\site-packages\\PySide2\\plugins\\platforms\\qwindows.dll\n\nto:\n\\Anaconda3\\Library\\plugins\\platforms\\\n\n", "I tried the following at Anaconda's prompt, and it solved this problem: \nconda remove qt\nconda remove pyqt \nconda install qt \nconda install pyqt\n\n", "I found that this was being caused by having the MiKTeX binaries in my PATH variable; and the wrong Qt dll's were being found. I just needed to re-arrange the PATH entries.\n(Dependency Walker is such a useful tool.)\n", "I had a similar problem with PyCharm where things worked great in main run but not in debugger, getting the same error message. This happened for me because I had moved my Anaconda installation to a different directory. The debugger goes and checks a qt.conf file that is located at the same place as python. This location can be found by running import sys; print sys.executable. I found this solution through a pile of web searches and it was buried deep here. The qt.conf file needs to have correct paths for debugger to work. \nMy qt.conf files looks like this in notepad:\n[Paths]\nPrefix = E:/python/Anaconda3_py35/Library\nBinaries = E:/python/Anaconda3_py35/Library/bin\nLibraries = E:/python/Anaconda3_py35/Library/lib\nHeaders = E:/python/Anaconda3_py35/Library/include/qt\n\n", "Just add a system variable:\nQT_QPA_PLATFORM_PLUGIN_PATH\n\nand set its value to \nC:\\Python34\\Lib\\site-packages\\PyQt4\\plugins\\platforms\n\nVoilà. Done\n", "I have found a solution that worked for me. This solution includes a code snippet to add before you import any modules from Pyside2 or PyQt5 package. See \"Qt platform plugin \"windows\" #2\" for more information.\nThis code snippet is from the link:\nimport os\nimport PySide2\n\ndirname = os.path.dirname(PySide2.__file__)\nplugin_path = os.path.join(dirname, 'plugins', 'platforms')\nos.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = plugin_path\n\nfrom PySide2.QtWidgets import *\n'''\nYour code goes here\n'''\n\nThis solution works for PyQt5 and PySide2 modules. 
\nI don't know if it's relevant but I added the QT_PLUGIN_PATH environment variable in the system before. \nThat solution enabled me to test PySide2 scripts in IDLE.\nHowever, I faced the same error when I tried to run a bundled script (exe).\nWith some shallow debugging, it's evident that plugin folder itself is missing. I fixed the problem by adding the plugin folder in the appropriate location:\nC:\\Users\\xxxx\\.spyder-py3\\My_QtProjects\\Project 1\\dist\\MyQt_1\\PySide2\\\n\n", "If the Pycharm console or debugger are showing \"Could not find or load the Qt platform plugin windows\", the Python EXE file may be located at a different location for the PyCharm interpreter. You might manually select it in File -> Settings -> Interpreter.\n\nSet the working directory: File -> Settings -> Build, Execution, Deployment -> Console -> Python Console -> Working directory. Set it to the parent directory where your all code exists.\nOpen Control Panel -> System Settings -> Advanced System Settings -> Environment Variables -> New. Set the variable name QT_PLUGIN_PATH , Variable Directory: Users\\<Username>\\Appdata\\Local\\Continuum\\Anaconda2\\Library\\plugins.\nRestart Pycharm.\n\n", "I solved it by:\n\nAdding a path: \n\\Anaconda3\\Lib\\site-packages\\PyQt5\\Qt\\bin to PATH.\n\nSetting an environment variable: \nQT_PLUGIN_PATH as \\Anaconda3\\Lib\\site-packages\\PyQt5\\Qt\\plugins or \\Anaconda3\\Library\\plugins.\nAlso, you can try:\npyqt = os.path.dirname(PyQt5.__file__)\nos.environ['QT_PLUGIN_PATH'] = os.path.join(pyqt, \"Qt/plugins\")\n\n\n", "First, use the command:\nconda remove pyqt qt qtpy\nThen install using:\nconda install pyqt qt qtpy\nThis worked for me.\n", "Copy the folder\n\\Anaconda3\\Library\\plugins\\platforms\n\nto\n\\$\\\n\nwhere $ is your project interpreter folder. For example:\n\"\\project\\anaconda_env\\Scripts\\\"\n\nbecause PyCharm calls the python.exe in this folder, not the one in \\Anaconda3.\n", "You may need to copy the \"plugins\" file in Anaconda3\\Library. For example, on my computer it is \nS:\\Anaconda3\\Library\\plugins\n\nto the same path of your .exe file.\n", "On Windows:\n\nCopy the folder platforms:\nC:\\Users\\%USERNAME%\\AppData\\Roaming\\pyinstaller\\bincache00_py35_64bit\\pyqt5\\qt\\plugins\\platforms \n\nPaste the folder platform into the folder location of the file .exe:\nExample:\nc:\\MyFolder\\yourFile.exe\nc:\\MyFolder\\platforms\n\n\n", "SOLUTION FOR WINDOWS USERS\nCreate new environment variable with:\nname: QT_PLUGIN_PATH\npath: C:\\yourpythonpath\\Lib\\site-packages\\PyQt5\\Qt\\plugins\nafter that exe file will work\n", "copy the plugins from PySide2 and paste and overwrite the existing plugins in Miniconda worked for me.\n(base) C:\\ProgramData\\Miniconda3\\Lib\\site-packages\\PySide2\\plugins\\platforms>copy *.dll C:\\ProgramData\\Miniconda3\\Library\\plugins\\platforms\\\n\n", "I had the same problem with Anaconda. For me, although not very elegant, the fastest solution was to unistall and reinstall Ananconda completely. After that, everything worked well again.\n", "I had the same issue. 
Following \"Activating an environment\" in \"Managing environments\" solved the issue.\nIn the command line: \nconda activate myenv\n\nwhere myenv=base for my setup.\n", "I have the same issue and fixed in this way\nIn Anaconda installation folder I went to : (change it to your installed path):\nC:\\ProgramData\\Anaconda3\\Lib\\site-packages\\PySide2\nEdit this file by adding the following code lines :\n# below the line 23 type.__signature__\n pyside_package_dir = os.path.abspath(os.path.dirname(__file__))\n dirname = os.path.dirname(__file__)\n plugin_path = os.path.join(dirname, 'plugins', 'platforms')\n os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = plugin_path\n\nsave this file and try again and the issue should be gone :) \n", "Please try this in the script\nqt_path= os.path.dirname(PyQt5.__file__)\nos.environ['QT_PLUGIN_PATH'] = os.path.join(qt_path, \"Qt/plugins\")\n\n", "I know everyone above had provided various ways to fix OP's issue. I just want to add on some suggestions.\nBy adding the QT_PLUGIN_PATH = C:\\Users{YOUR_USERNAME}\\Anaconda3\\Library\\plugins as your local machine environment variable it helps to fix OP's PyCharm issue above. However, this will break other systems in your machine like: Dropbox reports missing QT, AMD settings fails to launch(which happens on my side) etc.\nInstead of adding QT_PLUGIN_PATH to your machine locally, one can add the environment variable in PyCharm's python interpreter setting as shown below:\n\nThis method not only allow your PyCharm's python.exe able to search those DLLs but also not breaking other systems' QT lookup PATH.\nThanks\n", "I installed a package that had a QT-gui that I didn't need.\nSo I just removed all the Qt modules from my environment.\npip freeze | grep -i qt\n\nPyQt5==5.15.4\nPyQt5-Qt5==5.15.2\nPyQt5-sip==12.9.0\nQtPy==1.9.0\n\npip uninstall PyQt5\npip uninstall PyQt5-Qt5\npip uninstall PyQt5-sip\npip uninstall QtPy\n\nProblem solved.\n", "Inspired by Osama Adly, I think this kind of problems are all caused by Anaconda configuration for Qt DLLs on Windows platform.\nJust try to install PyQt/PySide in an empty environment besides Anaconda, for example a standalone Python program. You will find that the plugins about platforms are in the site-package directory itself.\nFor comparation:\n\\site-packages\\PyQt6\\Qt6\\plugins\\platforms\n\\site-packages\\PySide6\\plugins\\platforms\n\nBut it seems that Anaconda contains some software depending on PyQt5 or Qt. Anaconda moves the platforms directory from PyQt5 to another folder and this folder might be contained in the PATH variable when using Anaconda.\n\\Anaconda3\\Library\\plugins\\platforms\n\nThis could lead to unneccessary problems. These DLLs reserve the same name across different generation of Qt. 
For example, when I tried PySide6 in a virtual environment created with Anaconda, its call for DLLs will mistakenly use the Qt5 DLLs rather than the DLLs in its own folder.\n", "If you are using Anaconda/Miniconda with matplotlib installed, you'll have to uninstall Anaconda/Miniconda and use Miniconda without matplotlib; a fix is to use normal Python, not Anaconda.\nIt is a known issue.\n", "In my situation, I did everything listed above and on other forum posts:\n\nCopying and pasting files\nAdding system variables\nUninstalling, downloading, and reinstalling programs\nRestarting the computer\nEnabling debugging mode\nRunning the source code instead of the compiled program\nRunning sfc /scannow\n\nAll of this did not work.\nIn my case, the solution was to update Windows.\nThe computer was apparently running a very outdated version of Windows (10?)\nAfter 2-3 hours of installing the update, problem solved.\nSource/Inspiration: https://www.partitionwizard.com/clone-disk/no-qt-platform-plugin-could-be-initialized.html\n" ]
[ 57, 27, 27, 20, 20, 17, 15, 12, 7, 4, 3, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "In my case, I had multiple combined problems in order to make PyQt5 run on Windows, see DLL load failed when importing PyQt5\n", "I had the same issue with Qt 5.9 example btscanner.exe. What works in my case is:\n\nCreate a folder where is btscanner.exe ( my is c:\\temp\\BlueTouth )\nRun from command prompt windeployqt.exe as follow:\n c:\\qt\\qt5.9.0\\msvc2015\\bin\\windeployqt c:\\temp\\BlueTouth\n/* windeplyqt is the standard Qt tool to packet your application with any needed\nlibraries or extra files and ready to deploy on other machine */\nResult should be something like that: \n\nC:\\temp\\BlueTouth\\btscanner.exe 32 bit, release executable\nAdding Qt5Svg for qsvgicon.dll\nSkipping plugin qtvirtualkeyboardplugin.dll due to disabled dependencies.\nDirect dependencies: Qt5Bluetooth Qt5Core Qt5Gui Qt5Widgets\nAll dependencies : Qt5Bluetooth Qt5Core Qt5Gui Qt5Widgets\nTo be deployed : Qt5Bluetooth Qt5Core Qt5Gui Qt5Svg Qt5Widgets\nWarning: Cannot find Visual Studio installation directory, VCINSTALLDIR is not set.\nUpdating Qt5Bluetooth.dll.\nUpdating Qt5Core.dll.\nUpdating Qt5Gui.dll.\nUpdating Qt5Svg.dll.\nUpdating Qt5Widgets.dll.\nUpdating libGLESV2.dll.\nUpdating libEGL.dll.\nUpdating D3Dcompiler_47.dll.\nUpdating opengl32sw.dll.\nPatching Qt5Core.dll...\nCreating directory C:/temp/BlueTouth/iconengines.\nUpdating qsvgicon.dll.\nCreating directory C:/temp/BlueTouth/imageformats.\nUpdating qgif.dll.\nUpdating qicns.dll.\nUpdating qico.dll.\nUpdating qjpeg.dll.\nUpdating qsvg.dll.\nUpdating qtga.dll.\nUpdating qtiff.dll.\nUpdating qwbmp.dll.\nUpdating qwebp.dll.\nCreating directory C:/temp/BlueTouth/platforms.\nUpdating qwindows.dll.\nCreating C:\\temp\\BlueTouth\\translations...\nCreating qt_bg.qm...\nCreating qt_ca.qm...\nCreating qt_cs.qm...\nCreating qt_da.qm...\nCreating qt_de.qm...\nCreating qt_en.qm...\nCreating qt_es.qm...\nCreating qt_fi.qm...\nCreating qt_fr.qm...\nCreating qt_gd.qm...\nCreating qt_he.qm...\nCreating qt_hu.qm...\nCreating qt_it.qm...\nCreating qt_ja.qm...\nCreating qt_ko.qm...\nCreating qt_lv.qm...\nCreating qt_pl.qm...\nCreating qt_ru.qm...\nCreating qt_sk.qm...\nCreating qt_uk.qm...\n\n\nIf you take e look at c:\\temp\\BlueTouth folder will see\nthe folders iconengines, imageformats, platforms, translations,\nand files D3Dcompiler_47.dll, libEGL.dll, libGLESV2.dll, opengl32sw.dll,\nQt5Bluetouth.dll, Qt5Core.dll, Qt5Gui.dll, Qt5Svg.dll, Qt5Widgets.dll.\n\nThese are all of the files and folders need to run btscanner.exe on\n this or another machine. Just copy whole folder on other machine and\n run the file.\n", "copy platforms from Anaconda3\\Library\\plugins and put it in the Anaconda3.\nfor env put the platforms in the specific env\\ folder\n" ]
[ -1, -1, -2 ]
[ "pycharm", "python", "python_3.x" ]
stackoverflow_0041994485_pycharm_python_python_3.x.txt
Q: Stick the dataframe rows and columns in one row + replace the NaN values with the day before or after I have a df and I want to stick its values together in one row. First I want to select a specific time window and replace the NaN values with the value from the day before. Here is a simple example: I only want to choose the values in 2020, stick them together ordered by time, and also replace each NaN value with the day before's value.
df = pd.DataFrame()
df['day'] =[ '2020-01-01', '2019-01-01', '2020-01-02','2020-01-03', '2018-01-01',
             '2020-01-15','2020-03-01', '2020-02-01', '2017-01-01' ]
df['value_1'] = [ 1, np.nan, 32, 48, 5, -1, 5,10,2]
df['value_2'] = [ np.nan, 121, 23, 34, 15, 21, 15, 12, 39]
df

          day  value_1  value_2
0  2020-01-01      1.0      NaN
1  2019-01-01      NaN    121.0
2  2020-01-02     32.0     23.0
3  2020-01-03     48.0     34.0
4  2018-01-01      5.0     15.0
5  2020-01-15     -1.0     21.0
6  2020-03-01      5.0     15.0
7  2020-02-01     10.0     12.0
8  2017-01-01      2.0     39.0

The output:
   _1   _2  _3  _4  _5  _6  _7  _8  _9  _10  _11  _12
0   1  121   1  23  48  34  -1  21  10   12   -1   21

I have tried to use the following code, but it does not solve my problem:
val_cols = df.filter(like='value_').columns
output = (df.pivot('day', val_cols).groupby(level=0, axis=1).apply(lambda x:x.ffill(axis=1).bfill(axis=1)).sort_index(axis=1, level=1))

A: I don't know what the output is supposed to be but I think this should do at least part of what you're trying to do:
df['day'] = pd.to_datetime(df['day'], format='%Y-%m-%d')
df = df.sort_values(by=['day'])

filter_2020 = df['day'].dt.year == 2020
val_cols = df.filter(like='value_').columns

df.loc[filter_2020, val_cols] = df.loc[:,val_cols].ffill().loc[filter_2020]
print(df)

          day  value_1  value_2
8  2017-01-01      2.0     39.0
4  2018-01-01      5.0     15.0
1  2019-01-01      NaN    121.0
0  2020-01-01      1.0    121.0
2  2020-01-02     32.0     23.0
3  2020-01-03     48.0     34.0
5  2020-01-15     -1.0     21.0
7  2020-02-01     10.0     12.0
6  2020-03-01      5.0     15.0
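A minimal follow-up sketch for the remaining step — flattening the filled 2020 rows into the single wide row the question asks for. It assumes the df produced by the answer above; the helper names (wide, flat, one_row) are illustrative rather than from the original post, and since the asker's expected row is ambiguous this shows one plausible flattening:
wide = df.loc[filter_2020, val_cols]    # the six 2020 rows, already forward-filled
flat = wide.to_numpy().ravel()          # row by row: value_1, value_2, value_1, ...
one_row = pd.DataFrame([flat], columns=[f'_{i}' for i in range(1, flat.size + 1)])
print(one_row)                          # one row with columns _1 .. _12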
Stick the dataframe rows and columns in one row + replace the NaN values with the day before or after
I have a df and I want to stick the values of it. At first I want to select the specific time, and replace the Nan values with the same in the day before. Here is a simple example: I only want to choose the values in 2020, I want to stick its value based on the time, and also replace the nan value same as day before. df = pd.DataFrame() df['day'] =[ '2020-01-01', '2019-01-01', '2020-01-02','2020-01-03', '2018-01-01', '2020-01-15','2020-03-01', '2020-02-01', '2017-01-01' ] df['value_1'] = [ 1, np.nan, 32, 48, 5, -1, 5,10,2] df['value_2'] = [ np.nan, 121, 23, 34, 15, 21, 15, 12, 39] df day value_1 value_2 0 2020-01-01 1.0 NaN 1 2019-01-01 NaN 121.0 2 2020-01-02 32.0 23.0 3 2020-01-03 48.0 34.0 4 2018-01-01 5.0 15.0 5 2020-01-15 -1.0 21.0 6 2020-03-01 5.0 15.0 7 2020-02-01 10.0 12.0 8 2017-01-01 2.0 39.0 The output: _1 _2 _3 _4 _5 _6 _7 _8 _9 _10 _11 _12 0 1 121 1 23 48 34 -1 21 10 12 -1 21 I have tried to use the follwing code, but it does not solve my problem: val_cols = df.filter(like='value_').columns output = (df.pivot('day', val_cols).groupby(level=0, axis=1).apply(lambda x:x.ffill(axis=1).bfill(axis=1)).sort_index(axis=1, level=1))
[ "I don't know what the output is supposed to be but i think this should do at least part of what you're trying to do\ndf['day'] = pd.to_datetime(df['day'], format='%Y-%m-%d')\ndf = df.sort_values(by=['day'])\n\nfilter_2020 = df['day'].dt.year == 2020\nval_cols = df.filter(like='value_').columns\n\ndf.loc[filter_2020, val_cols] = df.loc[:,val_cols].ffill().loc[filter_2020]\nprint(df)\n\n day value_1 value_2\n8 2017-01-01 2.0 39.0\n4 2018-01-01 5.0 15.0\n1 2019-01-01 NaN 121.0\n0 2020-01-01 1.0 121.0\n2 2020-01-02 32.0 23.0\n3 2020-01-03 48.0 34.0\n5 2020-01-15 -1.0 21.0\n7 2020-02-01 10.0 12.0\n6 2020-03-01 5.0 15.0\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074480636_dataframe_pandas_python.txt
Q: Trying to zip datetime64[D], getting error: too many values to unpack (expected 2) I am trying to zip three iterables - a list of high temperature values, a list of low temperature values, and a date index of dtype = datetime64[D]. I am using VS Code. Here is my code:
date_index = np.arange('2015-01-01','2016-01-01', dtype='datetime64[D]')

(dates_high,break_high) = [(x,a) for a, b, x in zip(high, tmax, date_index) if a > b]

This is the error:
ValueError                            Traceback (most recent call last)
Cell In [27], line 8
      5 low = df_2015f[('Data_Value', 'min')].tolist()
      6 high = df_2015f[('Data_Value', 'max')].tolist()
----> 8 (dates_high,break_high) = [(x,a) for a, b, x in zip(high, tmax, date_index) if a > b]
      9 (dates_low,break_low) = [(x,a) for a, b, x in zip(low, tmin, date_index) if a < b]

ValueError: too many values to unpack (expected 2)

I am trying to collect, in dates_high and break_high respectively, the date and the max temperature whenever the temperature in high is greater than the max temperature in tmax for that date; after that I will plot a scatter of high temperature (y-axis) against date_index (x-axis). high, low, tmin, tmax are lists of columns of a dataframe that I converted using tolist.
I believe the error is because of date_index, as it works if I remove dates_high, break_high, date_index and x from that line of code.
If it's required I can post the whole code.

A: Here's a simple example like your problem line:
In [580]: [(a,b) for a,b,c in zip([1,2,3,4],[5,6,7,8],[9,10,11,12])]
Out[580]: [(1, 5), (2, 6), (3, 7), (4, 8)]

Tell me how it's supposed to unpack that into two variables? There are 4 items in the list.
You don't need to zip to get the elements of the respective lists/arrays:
In [586]: a,b,_ = [1,2,3,4],[5,6,7,8],[9,10,11,12]
In [587]: a
Out[587]: [1, 2, 3, 4]
In [588]: b
Out[588]: [5, 6, 7, 8]

list(zip(*...)) is a kind of list 'transpose'
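As a hedged addendum to the answer: if the goal is two parallel sequences (the dates and the record-breaking highs), the usual fix is to build the filtered pairs first and then transpose them with zip(*...), assuming high, tmax and date_index all have the same length:
pairs = [(x, a) for a, b, x in zip(high, tmax, date_index) if a > b]
if pairs:                                 # zip(*[]) would leave nothing to unpack
    dates_high, break_high = zip(*pairs)
else:
    dates_high, break_high = (), ()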
Trying to zip datetime64[D], getting error: too many values to unpack (expected 2)
I am trying to zip 3 iterators - list of high temperature values , list of low temperature values and a date index of dtype = datetime64[D]. I am using vs code. here is my code: date_index = np.arange('2015-01-01','2016-01-01', dtype='datetime64[D]') (dates_high,break_high) = [(x,a) for a, b, x in zip(high, tmax, date_index) if a > b] this is the error ValueError Traceback (most recent call last) Cell In [27], line 8 5 low = df_2015f[('Data_Value', 'min')].tolist() 6 high = df_2015f[('Data_Value', 'max')].tolist() ----> 8 (dates_high,break_high) = [(x,a) for a, b, x in zip(high, tmax, date_index) if a > b] 9 (dates_low,break_low) = [(x,a) for a, b, x in zip(low, tmin, date_index) if a < b] ValueError: too many values to unpack (expected 2) I am trying to take the date and max temperature here if the temperature which is in high is greater than max temperature which is in tmax for that day[date] in dates_high and break_high respectively after that i will plot a scatter of high temperature(y axis) on that date_index(x-axis). high,low,tmin,tmax are list of column of a dataframe I converted using tolist I believe the error is because of date_index as it works if I remove dates_high, break_high, date_index and x from that line of code If its required I can post whole code
[ "Here's a simple example like your problem line:\nIn [580]: [(a,b) for a,b,c in zip([1,2,3,4],[5,6,7,8],[9,10,11,12])]\nOut[580]: [(1, 5), (2, 6), (3, 7), (4, 8)]\n\nTell me how it's supposed to unpack that into two variable? There are 4 items in the list.\nYou don't need to zip to get the elements of the respective lists/arrays:\nIn [586]: a,b,_ = [1,2,3,4],[5,6,7,8],[9,10,11,12]\nIn [587]: a\nOut[587]: [1, 2, 3, 4]\nIn [588]: b\nOut[588]: [5, 6, 7, 8]\n\nlist(zip(*...)) is a kind of list 'transpose'\n" ]
[ 0 ]
[]
[]
[ "numpy", "pandas", "python", "zip" ]
stackoverflow_0074481786_numpy_pandas_python_zip.txt
Q: Python logging - different logs from every instance of one class Is there any way in Python to get different logs from every instance of one class without the necessity of modifying existing log calls?
import logging

log = logging.getLogger("my_module")

class MyClass:
    def __init__(self, name):
        self.name = name

    def function(self):
        log.debug("My log!")

c1 = MyClass("first class")
c2 = MyClass("second class")
c1.function()
c2.function()

I want to achieve output like:
my_module - first class: My log!
my_module - second class: My log!

Is there any way to do that without passing an extra parameter to log.debug and configuring the formatter? Like it's described in documentation: DOCS
I am using the aiomisc library, maybe there is some option to do this? I've looked at the documentation, but I don't see anything useful.

A: You can fairly easily set up a different logger with a different name for each instance:
class MyClass:
    def __init__(self, name):
        self.name = name
        self.log = logging.getLogger(f'{__name__}.{type(self).__name__}.{name}')

    def function(self):
        self.log.debug("My log!")

If your formatter is adding the logger name already, you will not need to do anything extra: the name of your logger is now going to be my_module.MyClass.first class, my_module.MyClass.second class.
If you want to send some custom messages to the module logger instead, you can add a function for internal use to your class:
class MyClass:
    def __init__(self, name):
        self.name = name

    def function(self):
        self.log(logging.DEBUG, "My log!")

    def log(self, level, msg, *args, **kwargs):
        log.log(level, self.name + ' - ' + msg, *args, **kwargs)

You might also take a look at using a logging.LoggerAdapter.
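Since the answer points at logging.LoggerAdapter, here is a minimal self-contained sketch of that route (the format string is illustrative — any formatter that references %(instance)s works):
import logging

log = logging.getLogger("my_module")

class MyClass:
    def __init__(self, name):
        self.name = name
        # the adapter injects per-instance context into every record
        self.log = logging.LoggerAdapter(log, {"instance": name})

    def function(self):
        self.log.debug("My log!")

logging.basicConfig(level=logging.DEBUG,
                    format="%(name)s - %(instance)s: %(message)s")
MyClass("first class").function()    # my_module - first class: My log!
MyClass("second class").function()   # my_module - second class: My log!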
Python logging - different logs from every instance of one class
Is there any way in python to get other logs from every instance of one class without necessitiy of modifying existing logs? import logging log = logging.getLogger("my_module") class MyClass: def __init__(self, name): self.name = name def function(self): log.debug("My log!") c1 = MyClass("first class") c2 = MyClass("second class") c1.function() c2.function()` I want to achive output like: my_module - first class: My log! my_module - second class: My log! Is any way to do that without passing extra parameter to log.debug and configuring formatter? Like it's described in documentation: DOCS I am using aiomisc library, maybe there is some option to do this? I've looked at the documentation, but I don't see anything useful.
[ "You can fairly easily set up a different logger with a different name for each instance:\nclass MyClass:\n def __init__(self, name):\n self.name = name\n self.log = logging.getLogger(f'{__name__}.{type(self).__name__}.{name}')\n\n def function(self):\n self.log.debug(\"My log!\")\n\nIf your formatter is adding the logger name already, you will not need to do anything extra: the name of your logger is now going to be my_module.MyClass.first class, my_module.MyClass.second class.\nIf you want to send some custom messages to the module logger instead, you can add a function for internal use to your class:\nclass MyClass:\n def __init__(self, name):\n self.name = name\n\n def function(self):\n self.log(logging.DEBUG, \"My log!\")\n\n def log(level, msg, *args, **kwargs):\n log.log(level, self.name + ' - ' + msg, *args, **kwargs)\n\nYou might also take a look at using a logging.LoggerAdapter.\n" ]
[ 0 ]
[]
[]
[ "logging", "python", "python_3.x" ]
stackoverflow_0074483210_logging_python_python_3.x.txt
Q: Using dictionary to calculate totals from a text file in python Below is a sample input file:
A,B,C
Location:London
A, 46
B, 93
C, 32
A, 48
Location:Amsterdam
A, 83
B, 21
C, 92
B, 39
Location:Paris
A, 29
B, 91
C, 10

The output should be as follows:
name_set = { A, B, C }
location_set = {London, Amsterdam, Paris}

Generate a dictionary that maps name to total:
dic = {A: 206, B: 244, C:134}

I was able to create different sets for location and name, but I don't know how to get the total and generate a dictionary.
name_set = set()
location_set = set()
num_set = set ()

userfile = input("Enter input file name:")
input_file2 = open(userfile, "r")
input_file = input_file2.readlines()

name = input_file[0].strip().split(',') #get names from first line
name_set.update(name)

for next_line in input_file:
    if next_line.startswith('Location'):
        location = next_line.strip().split(":")[-1] #get location
        location_set.add(location)

#calculate scores and totals (?)
name_to_num = {}
for k in name:
    for v in score:
        name_to_num[k] = v #assign key (party name) to value (score)
    print(str(name_to_num))

A: Create a dictionary with the names as the keys and 0 for the values, and increment the appropriate value as you read the file.
userfile = "input.txt"
locations = set()
with open(userfile) as f:
    names = dict.fromkeys(next(f).strip().split(","), 0)
    location = ""
    for line in f:
        if line.startswith('Location:'):
            _, location = line.strip().split(":")
            locations.add(location)
        else:
            name, val = line.strip().split(", ")
            names[name] += int(val)

print(f"name_set = {set(names)} location_set = {locations}")
print(f"dic = {names}")

name_set = {'C', 'A', 'B'} location_set = {'Amsterdam', 'London', 'Paris'}
dic = {'A': 206, 'B': 244, 'C': 134}
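A minimal alternative sketch using collections.Counter, assuming the same file layout as the sample above:
from collections import Counter

totals = Counter()
with open("input.txt") as f:
    next(f)                                   # skip the header line "A,B,C"
    for line in f:
        if line.startswith("Location:") or not line.strip():
            continue
        name, val = line.strip().split(", ")
        totals[name] += int(val)

print(dict(totals))   # {'A': 206, 'B': 244, 'C': 134}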
Using dictionary to calculate totals from a text file in python
Below is a sample input file: A,B,C Location:London A, 46 B, 93 C, 32 A, 48 Location:Amsterdam A, 83 B, 21 C, 92 B, 39 Location:Paris A, 29 B, 91 C, 10 The output should be as follows: name_set = { A, B, C } location_set = {London, Amsterdam, Paris} Generate a dictonary that maps name to total: dic = {A: 206, B: 244, C:134} I was able to create different sets for location and name but I don't know how to get the total and generate a dictonary name_set = set() location_set = set() num_set = set () userfile = input("Enter input file name:") input_file2 = open(userfile, "r") input_file = input_file2.readlines() name = input_file[0].strip().split(',') #get names from first line name_set.update(name) for next_line in input_file: if next_line.startswith('Location'): location = next_line.strip().split(":")[-1] #get location location_set.add(location) #calculate scores and totals (?) name_to_num = {} for k in name: for v in score: name_to_num[k] = v #assign key (party name) to value (score) print(str(name_to_num))
[ "Create a dictionary with the names as the keys and 0 for the values, and increment the appropriate value as you read the file.\nuserfile = \"input.txt\"\nlocations = set()\nwith open(userfile) as f:\n names = dict.fromkeys(next(f).strip().split(\",\"), 0)\n location = \"\"\n for line in f:\n if line.startswith('Location:'):\n _, location = line.strip().split(\":\")\n locations.add(location)\n else:\n name, val = line.strip().split(\", \")\n names[name] += int(val)\n\nprint(f\"name_set = {set(names)} location_set = {locations}\")\nprint(f\"dic = {names}\")\n\nname_set = {'C', 'A', 'B'} location_set = {'Amsterdam', 'London', 'Paris'}\ndic = {'A': 206, 'B': 244, 'C': 134}\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "file", "python", "set" ]
stackoverflow_0074483206_dictionary_file_python_set.txt
Q: reroute terminal to interface of the application I have built a small desktop application which edits data (.ags format) and then saves it to a selected folder. Earlier I had an issue where I could run it as a Python file, but it would crash when I made it an .exe. I figured out the problem. The reason was that a particular line of code tries to print to the terminal, but the .exe did not have one. I deleted the sg.Output() line from the code, then used pyinstaller to make it an .exe. Earlier I was using psgcompiler. Now it works fine. However, when I open the software the terminal opens as well (attached photo). Is there any chance to hide it, or add it to the software itself? I tried Multiline. I have tried to add the line below, but it did not work.
[sg.Multiline(size=(55, 5), reroute_stdout=True)],

Thanks

A: By default pyinstaller compiles executables in console mode, which means that unless you tell it otherwise, a console window will always appear when the application is run outside of the command line, e.g. by double-clicking the .exe.
To avoid this, simply use the windowed mode of pyinstaller with the -w flag when compiling:
pyinstaller -w myapp.py
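For completeness, --noconsole and --windowed are aliases for -w, and the same switch is recorded in the generated .spec file, so rebuilds keep the setting (spec excerpt below; other EXE arguments omitted):
# myapp.spec (excerpt)
exe = EXE(
    pyz,
    a.scripts,
    name='myapp',
    console=False,   # False = windowed build, no terminal window
)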
reroute terminal to interface of the application
I have built a small desktop application which edits data(.ags format) and then saves to selected folder. Before i had an issue that, i could run it as python file, but it would crash when I make it .exe. I figured out the problem. The reason was that, particular line of code tries to prints to terminal, but .exe did not have it. I deleted sg.output() line from code, then used pyinstaller to make it .exe. Earlier i was using psgcompiler. Now it works fine. However, when i open software the terminal opens as well (attached photo). Is there any chance to hide it, or add it to software itself? I tried multiline. I have tried to add, but it did not work. [sg.Multiline(size=(55, 5), reroute_stdout=True)], Thanks
[ "By default pyinstaller compiles executables in console mode... which means that unless you tell it otherwise when the application is run outside of the command line, e.g. by double clicking the .exe a console window will always appear.\nTo avoid this simply use the windowed mode of pyinstaller with the -w flag when compiling.\npyinstaller -w myapp.py\n" ]
[ 0 ]
[]
[]
[ "exe", "pyinstaller", "pysimplegui", "python", "terminal" ]
stackoverflow_0074479039_exe_pyinstaller_pysimplegui_python_terminal.txt
Q: pd.read_csv: delimiter = '\t' and header=None not compatible I have this line of code:
df = pd.read_csv('some_file.txt',engine ='python', delimiter = '\t', header=None, encoding="utf-16")

I'm using those txt files quite often in my lab; one of our machines gives them as output. If I only use the delimiter I get a nice table, but with the first element as header for everything.
If I only use header = None I get rid of the header, but have a bunch of \t everywhere.
If I try to use both commands, I get this error:
ParserError: Expected 1 fields in line 3, saw 23

When removing engine = 'python' I get a similar error. (I also tried separator and a bunch of other things.)
Help would be very much appreciated!
Edit:
As requested, this is how the file looks:
##BLOCKS= 1
Plate: Plate1 1.3 PlateFormat Endpoint Absorbance Raw FALSE 1 1 562 1 12 96 1 8
 Temperature(¡C) 1 2 3 4 5 6 7 8 9 10 11 12
 26.5 0.8368 0.5211 0.321 0.2707 0.2124 0.1768 0.1694 0.1635 0.1659 0.1029 0.1032 0.104
 0.7142 0.4866 0.2968 0.252 0.2111 0.1737 0.1633 0.162 0.1599 0.1009 0.1007 0.1025
 0.3499 0.2119 0.2799 0.2097 0.3114 0.3393 0.2544 0.2965 0.2392 0.3063 0.3093 0.2655
 0.305 0.2068 0.2573 0.2008 0.287 0.2765 0.2373 0.2703 0.2357 0.2865 0.2926 0.263
 0.2922 0.3456 0.1964 0.2667 0.3022 0.2596 0.2256 0.2387 0.2498 0.2936 0.2396 0.3411
 0.3018 0.349 0.2069 0.272 0.2926 0.2444 0.2141 0.2348 0.2486 0.2678 0.2346 0.2944
 0.2965 0.3505 0.2427 0.3322 0.1873 0.2286 0.3758 0.208 0.3023 0.3573 0.3141 0.2658
 0.2956 0.3155 0.2514 0.2929 0.1985 0.2379 0.1898 0.2101 0.3211 0.3558 0.3121 0.2567
 ~End
Original Filename: 20220725_Benedikt_DEF; Date Last Saved: 7/25/2022 2:31:30 PM

This is how it looks when I read it without pandas:
['##BLOCKS= 1\n', 'Plate:\tPlate1\t1.3\tPlateFormat\tEndpoint\tAbsorbance\tRaw\tFALSE\t1\t\t\t\t\t\t1\t562 \t1\t12\t96\t1\t8\t\t\n', '\tTemperature(¡C)\t1\t2\t3\t4\t5\t6\t7\t8\t9\t10\t11\t12\t\t\n', '\t26.5\t0.8368\t0.5211\t0.321\t0.2707\t0.2124\t0.1768\t0.1694\t0.1635\t0.1659\t0.1029\t0.1032\t0.104\t\t\n', '\t\t0.7142\t0.4866\t0.2968\t0.252\t0.2111\t0.1737\t0.1633\t0.162\t0.1599\t0.1009\t0.1007\t0.1025\t\t\n', '\t\t0.3499\t0.2119\t0.2799\t0.2097\t0.3114\t0.3393\t0.2544\t0.2965\t0.2392\t0.3063\t0.3093\t0.2655\t\t\n', '\t\t0.305\t0.2068\t0.2573\t0.2008\t0.287\t0.2765\t0.2373\t0.2703\t0.2357\t0.2865\t0.2926\t0.263\t\t\n', '\t\t0.2922\t0.3456\t0.1964\t0.2667\t0.3022\t0.2596\t0.2256\t0.2387\t0.2498\t0.2936\t0.2396\t0.3411\t\t\n', '\t\t0.3018\t0.349\t0.2069\t0.272\t0.2926\t0.2444\t0.2141\t0.2348\t0.2486\t0.2678\t0.2346\t0.2944\t\t\n', '\t\t0.2965\t0.3505\t0.2427\t0.3322\t0.1873\t0.2286\t0.3758\t0.208\t0.3023\t0.3573\t0.3141\t0.2658\t\t\n', '\t\t0.2956\t0.3155\t0.2514\t0.2929\t0.1985\t0.2379\t0.1898\t0.2101\t0.3211\t0.3558\t0.3121\t0.2567\t\t\n', '\n', '~End\n', 'Original Filename: some_file; Date Last Saved: 7/25/2022 2:31:30 PM\n']

When I just use pd.read_csv(file, encoding='utf-16') I get this:

It's basically a file that is stating the wavelength absorbance from a sample plate with 8 rows and 12 columns (96 samples).

A: Assuming all the files have the same structure and you only want the data: skip the first four rows, don't use the last three rows, whitespace delimiter, no header, python engine.
>>> df = pd.read_csv(csv,skiprows=4,skipfooter=3,header=None,delim_whitespace=True,engine='python') >>> df 0 1 2 3 4 5 6 7 8 9 10 11 0 0.7142 0.4866 0.2968 0.2520 0.2111 0.1737 0.1633 0.1620 0.1599 0.1009 0.1007 0.1025 1 0.3499 0.2119 0.2799 0.2097 0.3114 0.3393 0.2544 0.2965 0.2392 0.3063 0.3093 0.2655 2 0.3050 0.2068 0.2573 0.2008 0.2870 0.2765 0.2373 0.2703 0.2357 0.2865 0.2926 0.2630 3 0.2922 0.3456 0.1964 0.2667 0.3022 0.2596 0.2256 0.2387 0.2498 0.2936 0.2396 0.3411 4 0.3018 0.3490 0.2069 0.2720 0.2926 0.2444 0.2141 0.2348 0.2486 0.2678 0.2346 0.2944 5 0.2965 0.3505 0.2427 0.3322 0.1873 0.2286 0.3758 0.2080 0.3023 0.3573 0.3141 0.2658 6 0.2956 0.3155 0.2514 0.2929 0.1985 0.2379 0.1898 0.2101 0.3211 0.3558 0.3121 0.2567 >>>
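One caveat worth flagging: skiprows=4 also skips the first sample row (the one that carries the 26.5 temperature), so only 7 of the 8 plate rows survive. A hedged sketch that keeps all 8 rows, assuming the tab layout shown in the question (leading tab, then a temperature column, then 12 values):
import pandas as pd

df = pd.read_csv('some_file.txt', sep='\t', skiprows=3, skipfooter=3,
                 header=None, engine='python', encoding='utf-16')
temperature = df.iloc[0, 1]    # 26.5 — present only on the first data row
plate = df.iloc[:, 2:14]       # the 12 absorbance columns for all 8 rows
plate.columns = range(1, 13)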
pd.read_csv: delimiter = '\t' and header=None not compatible
I have this line of code: df = pd.read_csv('some_file.txt',engine ='python', delimiter = '\t', header=None, encoding="utf-16") I'm using those txt files quiet often in my lab, one of our machines gives them as output. If I only use the delimiter I get a nice table, but with the first element as header for everything. If I only use header = None I get rid of the header, but have a bunch of \t everywhere. If I try to use both commands, I get this error: ParserError: Expected 1 fields in line 3, saw 23 When removing enigne = 'python' I get a similar error. (also tried seperator and a bunch of other things) Help would be very much appreciated! Edit: As requested that's how the file looks like: ##BLOCKS= 1 Plate: Plate1 1.3 PlateFormat Endpoint Absorbance Raw FALSE 1 1 562 1 12 96 1 8 Temperature(¡C) 1 2 3 4 5 6 7 8 9 10 11 12 26.5 0.8368 0.5211 0.321 0.2707 0.2124 0.1768 0.1694 0.1635 0.1659 0.1029 0.1032 0.104 0.7142 0.4866 0.2968 0.252 0.2111 0.1737 0.1633 0.162 0.1599 0.1009 0.1007 0.1025 0.3499 0.2119 0.2799 0.2097 0.3114 0.3393 0.2544 0.2965 0.2392 0.3063 0.3093 0.2655 0.305 0.2068 0.2573 0.2008 0.287 0.2765 0.2373 0.2703 0.2357 0.2865 0.2926 0.263 0.2922 0.3456 0.1964 0.2667 0.3022 0.2596 0.2256 0.2387 0.2498 0.2936 0.2396 0.3411 0.3018 0.349 0.2069 0.272 0.2926 0.2444 0.2141 0.2348 0.2486 0.2678 0.2346 0.2944 0.2965 0.3505 0.2427 0.3322 0.1873 0.2286 0.3758 0.208 0.3023 0.3573 0.3141 0.2658 0.2956 0.3155 0.2514 0.2929 0.1985 0.2379 0.1898 0.2101 0.3211 0.3558 0.3121 0.2567 ~End Original Filename: 20220725_Benedikt_DEF; Date Last Saved: 7/25/2022 2:31:30 PM That's how it looks like when I read it without pandas: ['##BLOCKS= 1\n', 'Plate:\tPlate1\t1.3\tPlateFormat\tEndpoint\tAbsorbance\tRaw\tFALSE\t1\t\t\t\t\t\t1\t562 \t1\t12\t96\t1\t8\t\t\n', '\tTemperature(¡C)\t1\t2\t3\t4\t5\t6\t7\t8\t9\t10\t11\t12\t\t\n', '\t26.5\t0.8368\t0.5211\t0.321\t0.2707\t0.2124\t0.1768\t0.1694\t0.1635\t0.1659\t0.1029\t0.1032\t0.104\t\t\n', '\t\t0.7142\t0.4866\t0.2968\t0.252\t0.2111\t0.1737\t0.1633\t0.162\t0.1599\t0.1009\t0.1007\t0.1025\t\t\n', '\t\t0.3499\t0.2119\t0.2799\t0.2097\t0.3114\t0.3393\t0.2544\t0.2965\t0.2392\t0.3063\t0.3093\t0.2655\t\t\n', '\t\t0.305\t0.2068\t0.2573\t0.2008\t0.287\t0.2765\t0.2373\t0.2703\t0.2357\t0.2865\t0.2926\t0.263\t\t\n', '\t\t0.2922\t0.3456\t0.1964\t0.2667\t0.3022\t0.2596\t0.2256\t0.2387\t0.2498\t0.2936\t0.2396\t0.3411\t\t\n', '\t\t0.3018\t0.349\t0.2069\t0.272\t0.2926\t0.2444\t0.2141\t0.2348\t0.2486\t0.2678\t0.2346\t0.2944\t\t\n', '\t\t0.2965\t0.3505\t0.2427\t0.3322\t0.1873\t0.2286\t0.3758\t0.208\t0.3023\t0.3573\t0.3141\t0.2658\t\t\n', '\t\t0.2956\t0.3155\t0.2514\t0.2929\t0.1985\t0.2379\t0.1898\t0.2101\t0.3211\t0.3558\t0.3121\t0.2567\t\t\n', '\n', '~End\n', 'Original Filename: some_file; Date Last Saved: 7/25/2022 2:31:30 PM\n'] When I use just use the pd.read_csv(file, encoding =''utf-16') I get this: It' basically a file that is stating the wavelength absorbance from a sample plate with 8 rows and 12 columns (96 samples).
[ "Assuming all the files have the same structure and you only want the data; skip the first four rows, don't use the last three rows, whitespace delimiter, no header, python engine.\n>>> df = pd.read_csv(csv,skiprows=4,skipfooter=3,header=None,delim_whitespace=True,engine='python')\n>>> df\n 0 1 2 3 4 5 6 7 8 9 10 11\n0 0.7142 0.4866 0.2968 0.2520 0.2111 0.1737 0.1633 0.1620 0.1599 0.1009 0.1007 0.1025\n1 0.3499 0.2119 0.2799 0.2097 0.3114 0.3393 0.2544 0.2965 0.2392 0.3063 0.3093 0.2655\n2 0.3050 0.2068 0.2573 0.2008 0.2870 0.2765 0.2373 0.2703 0.2357 0.2865 0.2926 0.2630\n3 0.2922 0.3456 0.1964 0.2667 0.3022 0.2596 0.2256 0.2387 0.2498 0.2936 0.2396 0.3411\n4 0.3018 0.3490 0.2069 0.2720 0.2926 0.2444 0.2141 0.2348 0.2486 0.2678 0.2346 0.2944\n5 0.2965 0.3505 0.2427 0.3322 0.1873 0.2286 0.3758 0.2080 0.3023 0.3573 0.3141 0.2658\n6 0.2956 0.3155 0.2514 0.2929 0.1985 0.2379 0.1898 0.2101 0.3211 0.3558 0.3121 0.2567\n>>>\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074480594_dataframe_pandas_python.txt
Q: Retrieving identity of most recent insert in Oracle DB 12c I'd like to have returned to me (via cx_oracle in python) the value of the Identity that's created for a row that I'm inserting. I think I can figure out the python bit on my own, if someone could please state how to modify my SQL statement to get the ID of the newly-created row. I have a table that's created with something like the following: CREATE TABLE hypervisor ( id NUMBER GENERATED BY DEFAULT AS IDENTITY ( START WITH 1 NOCACHE ORDER ) NOT NULL , name VARCHAR2 (50) ) LOGGING ; ALTER TABLE hypervisor ADD CONSTRAINT hypervisor_PK PRIMARY KEY ( id ) ; And I have SQL that's similar to the following: insert into hypervisor ( name ) values ('my hypervisor') Is there an easy way to obtain the id of the newly inserted row? I'm happy to modify my SQL statement to have it returned, if that's possible. Most of the google hits on this issue were for version 11 and below, which don't support automatically-generated identity columns so hopefully someone here can help out. A: Taking what user2502422 said above and adding the python bit: newest_id_wrapper = cursor.var(cx_Oracle.STRING) sql_params = { "newest_id_sql_param" : newest_id_wrapper } sql = "insert into hypervisor ( name ) values ('my hypervisor') " + \ "returning id into :python_var" cursor.execute(sql, sql_params) newest_id=newest_id_wrapper.getvalue() A: This example taken from learncodeshare.net has helped me grasp the correct syntax. cur = con.cursor() new_id = cur.var(cx_Oracle.NUMBER) statement = 'insert into cx_people(name, age, notes) values (:1, :2, :3) returning id into :4' cur.execute(statement, ('Sandy', 31, 'I like horses', new_id)) sandy_id = new_id.getvalue() pet_statement = 'insert into cx_pets (name, owner, type) values (:1, :2, :3)' cur.execute(pet_statement, ('Big Red', sandy_id, 'horse')) con.commit() It's only slightly different from ragerdl's answer, but different enough to be added here I believe! Notice the absence of sql_params = { "newest_id_sql_param" : newest_id_wrapper } A: Use the returning clause of the insert statement. insert into hypervisor (name ) values ('my hypervisor') returning id into :python_var You said you could handle the Python bit ? You should be able to "bind" the return parameter in your program. A: I liked the answer by Marco Polo, but it is incomplete. The answer from FelDev is good too but does not address named parameters. Here is a more complete example from code I wrote with a simplified table (less fields). I have omitted code on how to set up a cursor since that is well documented elsewhere. import cx_Oracle INSERT_A_LOG = '''INSERT INTO A_LOG(A_KEY, REGION, DIR_NAME, FILENAME) VALUES(A_KEY_Sequence.nextval, :REGION, :DIR_NAME, :FILENAME) RETURNING A_KEY INTO :A_LOG_ID''' CURSOR = None class DataProcessor(Process): # Other code for setting up connection to DB and storing it in CURSOR def save_log_entry(self, row): global CURSOR # Oracle variable to hold value of last insert log_var = CURSOR.var(cx_Oracle.NUMBER) row['A_LOG_ID'] = log_var row['REGION'] = 'R7' # Other entries set elsewhere try: # This will fail unless row.keys() = # ['REGION', 'DIR_NAME', 'FILE_NAME', 'A_LOG_ID'] CURSOR.execute(INSERT_A_LOG, row) except Exception as e: row['REJCTN_CD'] = 'InsertFailed' raise # Get last inserted ID from Oracle for update self.last_log_id = log_var.getvalue() print('Insert id was {}'.format(self.last_log_id)) A: Agreeing with the older answers. 
However, depending on your version of cx_Oracle (7.0 and newer), var.getvalue() might return an array instead of a scalar. This is to support multiple return values as stated in this comment. Also note, that cx_Oracle is deprecated and has moved to oracledb now. Example: newId = cur.var(oracledb.NUMBER, outconverter=int) sql = """insert into Locations(latitude, longitude) values (:latitude, :longitude) returning locationId into :newId""" sqlParam = [latitude, longitude, newId] cur.execute(sql, sqlParam) newIdValue = newId.getvalue() newIdValue would return [1] instead of 1
Retrieving identity of most recent insert in Oracle DB 12c
I'd like to have returned to me (via cx_oracle in python) the value of the Identity that's created for a row that I'm inserting. I think I can figure out the python bit on my own, if someone could please state how to modify my SQL statement to get the ID of the newly-created row. I have a table that's created with something like the following: CREATE TABLE hypervisor ( id NUMBER GENERATED BY DEFAULT AS IDENTITY ( START WITH 1 NOCACHE ORDER ) NOT NULL , name VARCHAR2 (50) ) LOGGING ; ALTER TABLE hypervisor ADD CONSTRAINT hypervisor_PK PRIMARY KEY ( id ) ; And I have SQL that's similar to the following: insert into hypervisor ( name ) values ('my hypervisor') Is there an easy way to obtain the id of the newly inserted row? I'm happy to modify my SQL statement to have it returned, if that's possible. Most of the google hits on this issue were for version 11 and below, which don't support automatically-generated identity columns so hopefully someone here can help out.
[ "Taking what user2502422 said above and adding the python bit:\nnewest_id_wrapper = cursor.var(cx_Oracle.STRING)\nsql_params = { \"newest_id_sql_param\" : newest_id_wrapper }\nsql = \"insert into hypervisor ( name ) values ('my hypervisor') \" + \\ \n \"returning id into :python_var\"\ncursor.execute(sql, sql_params)\nnewest_id=newest_id_wrapper.getvalue()\n\n", "This example taken from learncodeshare.net has helped me grasp the correct syntax.\ncur = con.cursor()\n\nnew_id = cur.var(cx_Oracle.NUMBER)\n\nstatement = 'insert into cx_people(name, age, notes) values (:1, :2, :3) returning id into :4'\ncur.execute(statement, ('Sandy', 31, 'I like horses', new_id))\n\nsandy_id = new_id.getvalue()\n\npet_statement = 'insert into cx_pets (name, owner, type) values (:1, :2, :3)'\ncur.execute(pet_statement, ('Big Red', sandy_id, 'horse'))\n\ncon.commit()\n\nIt's only slightly different from ragerdl's answer, but different enough to be added here I believe!\nNotice the absence of sql_params = { \"newest_id_sql_param\" : newest_id_wrapper }\n", "Use the returning clause of the insert statement.\ninsert into hypervisor (name ) values ('my hypervisor')\n returning id into :python_var\n\nYou said you could handle the Python bit ? You should be able to \"bind\" the return parameter in your program.\n", "I liked the answer by Marco Polo, but it is incomplete. \nThe answer from FelDev is good too but does not address named parameters.\nHere is a more complete example from code I wrote with a simplified table (less fields). I have omitted code on how to set up a cursor since that is well documented elsewhere.\nimport cx_Oracle\n\nINSERT_A_LOG = '''INSERT INTO A_LOG(A_KEY, REGION, DIR_NAME, FILENAME)\nVALUES(A_KEY_Sequence.nextval, :REGION, :DIR_NAME, :FILENAME)\nRETURNING A_KEY INTO :A_LOG_ID'''\n\nCURSOR = None\n\nclass DataProcessor(Process):\n # Other code for setting up connection to DB and storing it in CURSOR\n def save_log_entry(self, row):\n global CURSOR\n # Oracle variable to hold value of last insert\n log_var = CURSOR.var(cx_Oracle.NUMBER)\n row['A_LOG_ID'] = log_var\n\n row['REGION'] = 'R7' # Other entries set elsewhere\n try:\n # This will fail unless row.keys() = \n # ['REGION', 'DIR_NAME', 'FILE_NAME', 'A_LOG_ID']\n CURSOR.execute(INSERT_A_LOG, row)\n except Exception as e:\n row['REJCTN_CD'] = 'InsertFailed'\n raise\n\n # Get last inserted ID from Oracle for update\n self.last_log_id = log_var.getvalue()\n print('Insert id was {}'.format(self.last_log_id))\n\n", "Agreeing with the older answers. However, depending on your version of cx_Oracle (7.0 and newer), var.getvalue() might return an array instead of a scalar.\nThis is to support multiple return values as stated in this comment.\nAlso note, that cx_Oracle is deprecated and has moved to oracledb now.\nExample:\nnewId = cur.var(oracledb.NUMBER, outconverter=int)\nsql = \"\"\"insert into Locations(latitude, longitude) values (:latitude, :longitude) returning locationId into :newId\"\"\"\nsqlParam = [latitude, longitude, newId]\ncur.execute(sql, sqlParam)\nnewIdValue = newId.getvalue()\n\nnewIdValue would return [1] instead of 1\n" ]
[ 12, 7, 4, 1, 0 ]
[]
[]
[ "oracle", "oracle12c", "python" ]
stackoverflow_0035327135_oracle_oracle12c_python.txt
Q: How to make input not case sensitive So I recently started my coding journey and am a freshman computer science student, and I decided to do a side project where I input a country and it tells me the continent, you know, for fun. However, when I write the input it doesn't work, and I think it has to do with it being case-sensitive. Please help; here's the code I currently have:
country = input('Tell me a country... ')

# all the countries in africa
countries_in_africa = ["Sao Tome and Principe",
                       "senegal",
                       "Seychelles",
                       "Sierra Leone",
                       "Somalia",
                       "South africa",
                       "South Sudan",
]

if country.upper() in countries_in_africa:
    print('country is in africa')
else:
    print('country is not in africa')

A: When you write the following
if country.upper() in countries_in_africa:
    print('country is in africa')

You're searching for an all-caps string. Casing is important as you've noted. To get around it you can also convert all strings in countries_in_africa to uppercase.
Either do it manually when you define the list, or on the fly:
if country.upper() in list(map(str.upper, countries_in_africa)):
    print('country is in africa')
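A side note on the same fix: str.casefold() is the more robust case normalizer than upper(), and precomputing a set makes each membership test O(1):
countries_folded = {c.casefold() for c in countries_in_africa}

if country.casefold() in countries_folded:
    print('country is in africa')
else:
    print('country is not in africa')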
How to make input not case sensitive
so i recently started my coding journey and am a freshman computer science student and i decided to do a side project where i input a country and it tells me the continent uknow for fun, although when i write the input it dosnt work and i think it has to do with it being case sensitive. pls help here's the code i currently made country = input('Tell me a country... ') # all the countries in africa countries_in_africa = ["Sao Tome and Principe", "senegal", "Seychelles", "Sierra Leone", "Somalia", "South africa", "South Sudan", ] if country.upper() in countries_in_africa: print('country is in africa') else: print('country is not in africa')
[ "When you write the following\nif country.upper() in countries_in_africa:\n print('country is in africa')\n\nYou're searching for an all-caps string. Casing is important as you've noted. To get around it you can also convert all strings in countries_in_africa to uppercase.\nEither do it manually when you define the list, or on the fly\nif country.upper() in list(map(str.upper, countries_in_africa)):\n print('country is in africa')\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074483304_python.txt
Q: Write a function that, given natural numbers n, m, determines the smallest natural number k such that n^k >= m, in time O(log k) I can do it only in O(k) time; could someone be so kind as to help me? I cannot use built-in functions.
def potnr(a, b):
    rez = 1
    while b>0:
        if b%2:
            rez = rez * a
        b = b // 2
        a = a * a
    return rez

def liczba(n, m):
    k = 1
    while potnr(n, k) < m:
        k += 1
    return k

print(liczba(2, 16))

I can do it only in O(k) time; could someone be so kind as to help me?

A: n^k >= m if and only if k >= log m base n
Since log m base n = log m / log n, this is as simple as:
from math import log, ceil
def smallest_k(n, m):
    return ceil(log(m)/log(n))

This runs in O(1) time.

A: This one should work (I just fixed the value of k returned, for there was no guarantee it was the smallest value with the previous return):
import math
def min_power(n,m):
    b=1
    while n**b < m:
        b *= 2
    a = b/2
    while b-a > 1:
        c = (a+b)/2
        if n**c < m:
            a = c
        else:
            b = c
    k = math.ceil(a)
    return k if (n**k >= m) else k+1

min_power(35,10**250)
# Out[23]: 162

A: First determine any natural number k for which n ^ k >= m. Then refine your estimate to find the smallest such k.
It's easiest to find the initial estimate for k as a power of 2. Have a temporary value which holds n ^ k. Start from k = 1, repeatedly multiply k by 2, and square your temporary variable, until your k is sufficiently big.
Your real k will be greater than half the estimate you found. Numbers in that range have log2(k) bits. Check each bit, starting from the most significant one. For each such bit, calculate n ^ k for two values of k: with that bit equal to 0 and 1. Compare with m - this will tell you the value of that bit. Proceed to lower-significant bits, until you get to bit 0 (least significant bit).
I am not sure you are allowed to assume that calculating n ^ k has O(1) complexity. If not, you have to store intermediate results for all n ^ k calculations at first stage, or alternatively, use sqrt to calculate lesser powers of n.
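A sketch that combines the ideas from the last two answers — exponential search for an upper bound, then binary search — using only integer arithmetic, so very large m cannot run into float-precision trouble (assumes n >= 2 and m >= 1; the function name is illustrative):
def smallest_k(n, m):
    if n >= m:               # k is a natural number, so n**1 >= m means k = 1
        return 1
    hi = 1
    while n ** hi < m:       # exponential search: O(log k) doublings
        hi *= 2
    lo = hi // 2             # invariant: n**lo < m <= n**hi
    while hi - lo > 1:       # binary search: O(log k) halvings
        mid = (lo + hi) // 2
        if n ** mid < m:
            lo = mid
        else:
            hi = mid
    return hi

print(smallest_k(2, 16))         # 4
print(smallest_k(35, 10**250))   # 162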
Write a function that, given natural numbers n, m, determines the smallest natural number k such that n^k >= m, in time O(log k)
I can do it in only O(k) time can someone be that kind to help me. I can not use build in functions. def potnr(a, b): rez = 1 while b>0: if b%2: rez = rez * a b = b // 2 a = a * a return rez def liczba(n, m): k = 1 while potnr(n, k) < m: k += 1 return k print(liczba(2, 16)) I can do it in only O(k) time can someone be that kind to help me
[ "n^k >= m if and only if k >= log m base n\nSince log m base n = log m / log n, this is as simple as:\nfrom math import log, ceil\ndef smallest_k(n, m):\n return ceil(log(m)/log(n))\n\nThis runs in O(1) time.\n", "This one should work (I just fixed the value of k returned, for there was no guarantee it was the smallest value with the previous return):\nimport math\ndef min_power(n,m):\n b=1\n while n**b < m:\n b *= 2\n a = b/2\n while b-a > 1:\n c = (a+b)/2\n if n**c < m:\n a = c\n else:\n b = c\n k = math.ceil(a)\n return k if (n**k >= m) else k+1\n\nmin_power(35,10**250)\n# Out[23]: 162\n\n", "First determine any natural number k for which n ^ k >= m. Then refine your estimate to find the smallest such k.\nIt's easiest to find the initial estimate for k as a power of 2. Have a temporary value which holds n ^ k. Start from k = 1, repeatedly multiply k by 2, and square your temporary variable, until your k is sufficiently big.\nYour real k will be greater than half the estimate you found. Numbers in that range have log2(k) bits. Check each bit, starting from the most significant one. For each such bit, calculate n ^ k for two values of k: with that bit equal to 0 and 1. Compare with m - this will tell you the value of that bit. Proceed to lower-significant bits, until you get to bit 0 (least significant bit).\nI am not sure you are allowed to assume that calculating n ^ k has O(1) complexity. If not, you have to store intermediate results for all n ^ k calculations at first stage, or alternatively, use sqrt to calculate lesser powers of n.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074482502_python.txt
Q: Django - Pagination test When moving a test from a separate class to a class with other tests, it starts showing 4 posts on the second page instead of 3. If range is changed to 12 it shows 2 posts. Please suggest what is the problem. def test_correct_page_context_guest_client(self): posts = [Post(text=f'Тестовый текст {i}', group=self.group0, author=self.user0) for i in range( 13)] Post.objects.bulk_create(posts) pages = (reverse('posts:posts_list'), reverse('posts:group_list', kwargs={'slug': f'{self.group0.slug}'}), reverse('posts:profile', kwargs={'username': f'{self.user0.username}'})) for page in pages: for page_number in range(2): with self.subTest(page=page): response = self.guest_client0.get( page, {'page': page_number+1}) self.assertEqual(len(response.context['page_obj']), POSTS_COUNT[page_number]) If the test is left in a separate class PaginatorViewsTest(TestCase): then everything works as it should, but this is the task of the reviewer Here is the class and SetUpClass in which the test is located class PostPagesTests(TestCase): @classmethod def setUpClass(cls): super().setUpClass() cls.user = User.objects.create_user(username='test') cls.user2 = User.objects.create_user(username='test2') cls.user_unfollow = User.objects.create_user(username='test3') cls.guest_client = Client() cls.authorized_client = Client() cls.authorized_client.force_login(cls.user) cls.authorized_client_no_follow = Client() cls.authorized_client_no_follow.force_login(cls.user_unfollow) cls.guest_client0 = Client() cls.user0 = User.objects.create_user(username='auth') cls.group0 = Group.objects.create(title='Тестовая группа', slug='test_group') cls.group = Group.objects.create( title='test', slug='test', description='test' ) cls.group2 = Group.objects.create( title='test2', slug='test2', description='test2' ) cls.post = Post.objects.create( author=cls.user, group=cls.group, text='test' ) A: I added Post created in SetUpClass to SetUp and deleted it before pagination test with self.post.delete() command
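For reference, a minimal sketch of the fix described in the answer above — create the shared Post per test in setUp so the pagination test can remove it before bulk-creating its own 13 posts (method bodies abbreviated):
class PostPagesTests(TestCase):
    # setUpClass as above, but without creating cls.post there

    def setUp(self):
        self.post = Post.objects.create(
            author=self.user, group=self.group, text='test'
        )

    def test_correct_page_context_guest_client(self):
        self.post.delete()   # keep exactly 13 posts for the paginator
        # ... bulk-create the 13 posts and assert page sizes as before ...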
Django - Pagination test
When moving a test from a separate class to a class with other tests, it starts showing 4 posts on the second page instead of 3. If range is changed to 12 it shows 2 posts. Please suggest what is the problem. def test_correct_page_context_guest_client(self): posts = [Post(text=f'Тестовый текст {i}', group=self.group0, author=self.user0) for i in range( 13)] Post.objects.bulk_create(posts) pages = (reverse('posts:posts_list'), reverse('posts:group_list', kwargs={'slug': f'{self.group0.slug}'}), reverse('posts:profile', kwargs={'username': f'{self.user0.username}'})) for page in pages: for page_number in range(2): with self.subTest(page=page): response = self.guest_client0.get( page, {'page': page_number+1}) self.assertEqual(len(response.context['page_obj']), POSTS_COUNT[page_number]) If the test is left in a separate class PaginatorViewsTest(TestCase): then everything works as it should, but this is the task of the reviewer Here is the class and SetUpClass in which the test is located class PostPagesTests(TestCase): @classmethod def setUpClass(cls): super().setUpClass() cls.user = User.objects.create_user(username='test') cls.user2 = User.objects.create_user(username='test2') cls.user_unfollow = User.objects.create_user(username='test3') cls.guest_client = Client() cls.authorized_client = Client() cls.authorized_client.force_login(cls.user) cls.authorized_client_no_follow = Client() cls.authorized_client_no_follow.force_login(cls.user_unfollow) cls.guest_client0 = Client() cls.user0 = User.objects.create_user(username='auth') cls.group0 = Group.objects.create(title='Тестовая группа', slug='test_group') cls.group = Group.objects.create( title='test', slug='test', description='test' ) cls.group2 = Group.objects.create( title='test2', slug='test2', description='test2' ) cls.post = Post.objects.create( author=cls.user, group=cls.group, text='test' )
[ "I added Post created in SetUpClass to SetUp and deleted it before pagination test with self.post.delete() command\n" ]
[ 0 ]
[]
[]
[ "django", "python", "python_3.x" ]
stackoverflow_0074482153_django_python_python_3.x.txt
Q: How to use new Spark Context I am currently running a jupyter notebook on GCP dataproc and hoping to increase the memory available via my config: I first stopped my spark context: import pyspark sc = spark.sparkContext sc.stop() Waited until running the next code block so sc.stop() can finish conf = pyspark.SparkConf().setAll([('spark.driver.maxResultSize','8g')]) sc = pyspark.SparkContext(conf=conf) However when I run data = spark.read.parquet('link to data bucket'), it raises a Py4JJavaError: An error occurred while calling o152.parquet. : java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext. This stopped SparkContext was created at: ... The currently active SparkContext was created at: ... The line above runs fine if I use the spark context originally provided when starting up a new pyspark notebook. The error implies that though I created a new spark context, whenever I call methods via spark it is still pointing towards the old context. How would I go about using the new SparkContext I created? A: You've created a SparkContext, not a new SparkSession. You will need to use spark = SparkSession.builder.config(key, value).getOrCreate() after stopping the context. Alternatively (recommended) You should also be able to set PYSPARK_SUBMIT_ARGS='-c spark.driver.maxResultSize=8g' in the Notebook's environment variables, and it should accomplish a similar goal. aside: 8g for the notebook driver is a bit excessive. Perhaps you meant to change the executor memory? And your read parquet file's dataframe would be distributed anyway, so I still don't think you'll need that much.
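A minimal sketch of the SparkSession route from the answer, assuming the usual Dataproc/Jupyter setup where spark is predefined (the bucket path is a placeholder):
from pyspark.sql import SparkSession

spark.stop()   # stops the session and its underlying SparkContext

spark = (SparkSession.builder
         .config('spark.driver.maxResultSize', '8g')
         .getOrCreate())

data = spark.read.parquet('gs://your-bucket/path')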
How to use new Spark Context
I am currently running a jupyter notebook on GCP dataproc and hoping to increase the memory available via my config: I first stopped my spark context: import pyspark sc = spark.sparkContext sc.stop() Waited until running the next code block so sc.stop() can finish conf = pyspark.SparkConf().setAll([('spark.driver.maxResultSize','8g')]) sc = pyspark.SparkContext(conf=conf) However when I run data = spark.read.parquet('link to data bucket'), it raises a Py4JJavaError: An error occurred while calling o152.parquet. : java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext. This stopped SparkContext was created at: ... The currently active SparkContext was created at: ... The line above runs fine if I use the spark context originally provided when starting up a new pyspark notebook. The error implies that though I created a new spark context, whenever I call methods via spark it is still pointing towards the old context. How would I go about using the new SparkContext I created?
[ "You've created a SparkContext, not a new SparkSession.\nYou will need to use spark = SparkSession.builder.config(key, value).getOrCreate() after stopping the context.\nAlternatively (recommended) You should also be able to set PYSPARK_SUBMIT_ARGS='-c spark.driver.maxResultSize=8g' in the Notebook's environment variables, and it should accomplish a similar goal.\naside: 8g for the notebook driver is a bit excessive. Perhaps you meant to change the executor memory? And your read parquet file's dataframe would be distributed anyway, so I still don't think you'll need that much.\n" ]
[ 1 ]
[]
[]
[ "apache_spark", "dataproc", "google_cloud_platform", "pyspark", "python" ]
stackoverflow_0074483399_apache_spark_dataproc_google_cloud_platform_pyspark_python.txt
Q: Split list into N lists, and assign each list to a worker in multithreading I'm writing a script that takes N records from a table, and processes the said records via multithreading. Previously I simply used Order by RAND() in my SQL statement within each worker definition, and hoped that there would be no duplicates. This sort of works (deduping is done later), however, I would like to make my script more efficient by: 1) querying the table once, extract N records, and assign them to a list 2) split the big list into ~equally-sized lists of Y lists, which can be accomplished via : number_of_workers = 2 first_names = ['Steve', 'Jane', 'Sara', 'Mary','Jack'] def chunkify(lst,n): return [lst[i::n] for i in xrange(n)] list1 = chunkify(first_names, number_of_workers) print list1 3) When defining the worker function in multithreading, pass on a different sublist to each worker. Note - the number of workers (and parts I want to split the query result into) is defined at the beginning of the function. However, as I'm fairly new to Python, I have no idea how to pass on each sublist to a separate worker (or is it even doable?) Any help, other suggestions, etc. would be much appreciated! Example of multithreading code is below. How would I use import threading import random def worker(): assign sublistN to worker N print sublistN threads = [] for i in range(number_of_workers): print i print "" t = threading.Thread(target=worker) threads.append(t) t.start() Thank you in advance! A: Two things: First, take a look at the Queue object. You don't even need to split the lists apart yourself this way. It's used for splitting a collection of objects between multiple threads (there's also a multi-process varient, which is where I'm getting to). The docs contain very good examples that fit your requirements. Second, unless your workers involve waiting on things such as IO, network requests etc. threading in python is no quicker (probably slower actually) than processing sequentially. Threading does not make use of multi-processing, only one thread is ever running at one time. If this is your case, you'll probably want Multiprocessing which actually spins up a whole new python process for working. You've got similar tools such as queues in here. A: As SCB mentioned, this was solved by utilizing que. Here is a quick example that takes a list of names -> passes a name to each worker (2 workers) -> each workers simply prints the name they were given. from Queue import Queue from threading import Thread from time import sleep first_names = ['Steve', 'Jane', 'Sara', 'Mary','Jack','tara','bobby'] q = Queue(first_names) num_threads = 2 def do_stuff(q): while True: print q.get() sleep(1) q.task_done() for i in range(num_threads): worker = Thread(target=do_stuff, args=(q,)) worker.start() for x in first_names: q.put(x) q.join() Code adapted from here. A: Much Needed Fixes In @FlyingZebra1. from queue import Queue from threading import Thread from time import sleep first_names = ['Steve', 'Jane', 'Sara', 'Mary','Jack','tara','bobby'] q = Queue() # This will be Empty num threads = 2 # No of Threads def do_stuff(): while True: item = q.get() if item is None: # Our Script will not Break it this is Missing break print q.get() sleep(1) q.task_done() for i in range(num_threads): worker = Thread(target=do_stuff) worker. Start() q.join() for x in first_names: q.put(None) Just a Fix.
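As an aside, on Python 3 the same fan-out is usually written with concurrent.futures, which does the queueing and joining for you — a minimal sketch, with process_record standing in for the real per-record work:
from concurrent.futures import ThreadPoolExecutor

first_names = ['Steve', 'Jane', 'Sara', 'Mary', 'Jack']

def process_record(name):    # placeholder for the real work
    print(name)

with ThreadPoolExecutor(max_workers=2) as pool:
    # records are handed out one by one; no manual chunking needed
    list(pool.map(process_record, first_names))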
Split list into N lists, and assign each list to a worker in multithreading
I'm writing a script that takes N records from a table, and processes the said records via multithreading. Previously I simply used Order by RAND() in my SQL statement within each worker definition, and hoped that there would be no duplicates. This sort of works (deduping is done later), however, I would like to make my script more efficient by: 1) querying the table once, extract N records, and assign them to a list 2) split the big list into ~equally-sized lists of Y lists, which can be accomplished via : number_of_workers = 2 first_names = ['Steve', 'Jane', 'Sara', 'Mary','Jack'] def chunkify(lst,n): return [lst[i::n] for i in xrange(n)] list1 = chunkify(first_names, number_of_workers) print list1 3) When defining the worker function in multithreading, pass on a different sublist to each worker. Note - the number of workers (and parts I want to split the query result into) is defined at the beginning of the function. However, as I'm fairly new to Python, I have no idea how to pass on each sublist to a separate worker (or is it even doable?) Any help, other suggestions, etc. would be much appreciated! Example of multithreading code is below. How would I use import threading import random def worker(): assign sublistN to worker N print sublistN threads = [] for i in range(number_of_workers): print i print "" t = threading.Thread(target=worker) threads.append(t) t.start() Thank you in advance!
[ "Two things:\nFirst, take a look at the Queue object. You don't even need to split the lists apart yourself this way. It's used for splitting a collection of objects between multiple threads (there's also a multi-process varient, which is where I'm getting to). The docs contain very good examples that fit your requirements.\nSecond, unless your workers involve waiting on things such as IO, network requests etc. threading in python is no quicker (probably slower actually) than processing sequentially. Threading does not make use of multi-processing, only one thread is ever running at one time. If this is your case, you'll probably want Multiprocessing which actually spins up a whole new python process for working. You've got similar tools such as queues in here.\n", "As SCB mentioned, this was solved by utilizing que.\nHere is a quick example that takes a list of names -> passes a name to each worker (2 workers) -> each workers simply prints the name they were given.\nfrom Queue import Queue\nfrom threading import Thread\nfrom time import sleep\nfirst_names = ['Steve', 'Jane', 'Sara', 'Mary','Jack','tara','bobby']\n\n\nq = Queue(first_names)\nnum_threads = 2\n\ndef do_stuff(q):\n while True:\n print q.get()\n sleep(1)\n q.task_done()\n\n\n\nfor i in range(num_threads):\n worker = Thread(target=do_stuff, args=(q,))\n worker.start()\n\nfor x in first_names:\n q.put(x)\n\nq.join()\n\nCode adapted from here.\n", "Much Needed Fixes In @FlyingZebra1.\nfrom queue import Queue\nfrom threading import Thread\nfrom time import sleep\nfirst_names = ['Steve', 'Jane', 'Sara', 'Mary','Jack','tara','bobby']\n \nq = Queue() # This will be Empty\nnum threads = 2 # No of Threads\n\ndef do_stuff():\n while True:\n item = q.get()\n if item is None: # Our Script will not Break it this is Missing\n break\n print q.get()\n sleep(1)\n q.task_done() \n\n\nfor i in range(num_threads):\n worker = Thread(target=do_stuff)\n worker. Start()\n\nq.join()\n\nfor x in first_names:\n q.put(None)\n\nJust a Fix.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "list", "multithreading", "python" ]
stackoverflow_0047900922_list_multithreading_python.txt
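The first answer in the record above recommends multiprocessing for CPU-bound work. A minimal sketch of that route, reusing the question's names and chunkify-style split and handing each chunk to a separate process via the standard library's multiprocessing.Pool (process_chunk is a placeholder for the real per-record work):
from multiprocessing import Pool

def process_chunk(chunk):
    return [name.upper() for name in chunk]  # placeholder work on one sublist

if __name__ == '__main__':
    first_names = ['Steve', 'Jane', 'Sara', 'Mary', 'Jack']
    number_of_workers = 2
    chunks = [first_names[i::number_of_workers] for i in range(number_of_workers)]
    with Pool(number_of_workers) as pool:
        print(pool.map(process_chunk, chunks))  # one chunk per worker process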
Q: I can't print my entire csv file in python on OnlineGDB
I'm trying to print a csv file on OnlineGDB. However, when I do, I can only print the first and last column and 5 rows. It also prints [5 rows X 6 columns] at the very end. Although this is not terrible, my csv file contains over a thousand rows and 6 columns. Is there any way I can print the entirety of my csv file? Here's my code for reference:
import pandas as pd
import csv

if __name__ == "__main__":
    df = pd.read_csv('data1.csv')
    print(df.head())
This prints
   time_s  ...  speed_mph
0       0  ...          0
1       1  ...          0
2       2  ...          0
3       3  ...          0
4       4  ...          0
A: you can use pandas display options to show all columns and rows:
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
Note that df.head() still returns only the first 5 rows regardless of display settings, so print the full DataFrame with print(df) instead.
I hope this helps!
I can't print my entire csv file in python on OnlineGDB
I'm trying to print a csv file on OnlineGDB. However, when I do, I can only print the first and last column and 5 rows. It also prints [5 rows X 6 columns] at the very end. Although this is not terrible, my csv file contains over a thousand rows and 6 columns. Is there any way I can print the entirety of my csv file? Here's my code for reference:
import pandas as pd
import csv

if __name__ == "__main__":
    df = pd.read_csv('data1.csv')
    print(df.head())
This prints
   time_s  ...  speed_mph
0       0  ...          0
1       1  ...          0
2       2  ...          0
3       3  ...          0
4       4  ...          0
[ "you can use pandas display options to show all columns and rows:\npd.set_option('display.max_columns', None)\npd.set_option('display.max_rows', None)\n\nNote that df.head() still returns only the first 5 rows regardless of display settings, so print the full DataFrame with print(df) instead.\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "csv", "file", "printing", "python" ]
stackoverflow_0074483402_csv_file_printing_python.txt
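If changing global options is undesirable, pandas also offers scoped alternatives. A small sketch, reusing the question's data1.csv:
import pandas as pd

df = pd.read_csv('data1.csv')
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    print(df)          # full frame; the options are restored afterwards
print(df.to_string())  # or render everything without touching options at all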
Q: Tell Python that two sympy symbols are related by a complex conjugate
My Problem
I am using Sympy v. 1.11.1 on (Jupyter Notebook) Python v. 3.8.5. I am dealing with a large Hessian, where terms such as these appear:
Pi+ and Pi- are complex Sympy symbols. However, one is the complex conjugate of the other, that is conjugate(Pi+) = Pi- and vice versa. This means that the product Pi+ * Pi- is real and the derivatives can be easily evaluated by removing the Re/Im (in one case Re(Pi+ * Pi-) = Pi+ * Pi-, in the other Im(Pi+ * Pi-) = 0).
My Question
Is it possible to tell Sympy that Pi+ and Pi- are related by a complex conjugate, and it can therefore simplify the derivatives as explained above? Or does there exist some other way to simplify my derivatives?
My Attempts
Optimally, I would like to find a way to express the above relation between Pi+ and Pi- to Python, such that it can make simplifications where needed throughout the code.
Initially I wanted to use Sympy global assumptions and try to set an assumption that (Pi+ * Pi-) is real. However, when I try to use global assumptions it says name 'global_assumptions' is not defined and when I try to explicitly import it (instead of import *), it says cannot import name 'global_assumptions' from 'sympy.assumptions'. I could not figure out the root of this problem.
My next attempt was to replace all instances of Re(Pi+ * Pi-) -> Pi+ * Pi- etc. manually with the Sympy function subs. The code replaced these instances successfully, but never evaluated the derivatives, so I got stuck with these instead:
Please let me know if any clarification is needed.
I found a similar question Setting Assumptions on Variables in Sympy Relative to Other Variables and it seems from the discussion there that there does not exist an efficient way to do this. However, seeing that this was asked back in 2013, and the discussions pointed towards the possibility of implementation of a new improved assumption system within Sympy in the near future, it would be nice to know if any new such useful methods exist.
A: Given one and the other, try replacing one with conjugate(other):
>>> one = x; other = y
>>> p = one*other; q = p.subs(one, conjugate(other)); re(q), im(q)
(Abs(y)**2, 0)
If you want to get back the original symbol after the simplifications wrought by the first replacement, follow up with a second replacement:
>>> p.subs(one, conjugate(other)).subs(conjugate(other), one)
x*y
Tell Python that two sympy symbols are related by a complex conjugate
My Problem I am using Sympy v. 1.11.1 on (Jupyter Notebook) Python v. 3.8.5. I am dealing with a large Hessian, where terms such as these appear: Pi+ and Pi- are complex Sympy symbols. However, one is the complex conjugate of the other, that is conjugate(Pi+) = Pi- and vice versa. This means that the product Pi+ * Pi- is real and the derivatives can be easily evaluated by removing the Re/Im (in one case Re(Pi+ * Pi-) = Pi+ * Pi-, in the other Im(Pi+ * Pi-) = 0). My Question Is it possible to tell Sympy that Pi+ and Pi- are related by a complex conjugate, and it can therefore simplify the derivatives as explained above? Or does there exist some other way to simplify my derivatives? My Attempts Optimally, I would like to find a way to express the above relation between Pi+ and Pi- to Python, such that it can make simplifications where needed throughout the code. Initially I wanted to use Sympy global assumptions and try to set an assumption that (Pi+ * Pi-) is real. However, when I try to use global assumptions it says name 'global_assumptions' is not defined and when I try to explicitly import it (instead of import *), it says cannot import name 'global_assumptions' from 'sympy.assumptions' I could not figure out the root of this problem. My next attempt was to replace all instances of Re(Pi+ * Pi-) -> Pi+ * Pi- etc. manually with the Sympy function subs. The code replaced these instances successfully, but never evaluated the derivatives, so I got stuck with these instead: Please let me know if any clarification is needed. I found a similar question Setting Assumptions on Variables in Sympy Relative to Other Variables and it seems from the discussion there that there does not exist an efficient way to do this. However, seeing that this was asked back in 2013, and the discussions pointed towards the possibility of implementation of a new improved assumption system within Sympy in the near future, it would be nice to know if any new such useful methods exist.
[ "Given one and the other, try replacing one with conjugate(other):\n>>> one = x; other = y\n>>> p = one*other; q = p.subs(one, conjugate(other)); re(q), im(q)\n(Abs(y)**2, 0)\n\nIf you want to get back the original symbol after the simplifications wrought by the first replacement, follow up with a second replacement:\n>>> p.subs(one, conjugate(other)).subs(conjugate(other), one)\nx*y\n\n" ]
[ 1 ]
[]
[]
[ "python", "sympy" ]
stackoverflow_0074482997_python_sympy.txt
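A short runnable sketch of the substitution idea from the answer above. Plain x and y stand in for Pi+ and Pi-, since the question's symbol names are not valid Python identifiers:
from sympy import symbols, conjugate, re, im

x, y = symbols('x y')        # x plays Pi+, y plays Pi-
p = x * y
q = p.subs(y, conjugate(x))  # encode the relation Pi- = conjugate(Pi+)
print(im(q))                 # simplifies to 0
print(re(q))                 # the real product, i.e. |x|**2 (written as re(x)**2 + im(x)**2)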
Q: how to define python generic classes I have a class: T = TypeVar('T') class Stack(Generic[T]): def __init__(self) -> None: self.items: list[T] = [] def push(self, item: T) -> None: self.items.append(item) def pop(self) -> T: return self.items.pop() def empty(self) -> bool: return not self.items but I can also do: T = TypeVar('T') class Stack: def __init__(self) -> None: # Create an empty list with items of type T self.items: list[T] = [] def push(self, item: T) -> None: self.items.append(item) def pop(self) -> T: return self.items.pop() def empty(self) -> bool: return not self.items what is the difference between these two samples? which on should I use? I tried running both, and both worked. A: Type checking vs runtime After writing this, I finally understood @Alexander point in first comment: whatever you write in annotations, it does not affect runtime, and your code is executed in the same way (sorry, I missed that you're looking just not from type checking perspective). This is core principle of python typing, as opposed to strongly typed languages (which makes it wonderful IMO): you can always say "I don't need types here - save my time and mental health". Type annotations are used to help some third-party tools, like mypy (type checker maintained by python core team) and IDEs. IDEs can suggest you something based on this information, and mypy checks whether your code can work if your types match the reality. Generic version T = TypeVar('T') class Stack(Generic[T]): def __init__(self) -> None: self.items: list[T] = [] def push(self, item: T) -> None: self.items.append(item) def pop(self) -> T: return self.items.pop() def empty(self) -> bool: return not self.items You can treat type variables like regular variables, but intended for "meta" usage and ignored (well, there are some runtime traces, but they exist primary for introspection purpose) on runtime. They are substituted once for every binding context (more about it - below), and can be defined only once per module scope. The code above declares normal generic class with one type argument. Now you can say Stack[int] to refer to a stack of integers, which is great. Current definition allows either explicit typing or using implicit Any parametrization: # Explicit type int_stack: Stack[int] = Stack() reveal_type(int_stack) # N: revealed type is "__main__.Stack[builtins.int] int_stack.push(1) # ok int_stack.push('foo') # E: Argument 1 to "push" of "Stack" has incompatible type "str"; expected "int" [arg-type] reveal_type(int_stack.pop()) # N: revealed type is "builtins.int" # No type results in mypy error, similar to `x = []` any_stack = Stack() # E: need type annotation for any_stack # But if you ignore it, the type becomes `Stack[Any]` reveal_type(any_stack) # N: revealed type is "__main__.Stack[Any] any_stack.push(1) # ok any_stack.push('foo') # ok too reveal_type(any_stack.pop()) # N: revealed type is "Any" To make the intended usage easier, you can allow initialization from iterable (I'm not covering the fact that you should be using collections.deque instead of list and maybe instead of this Stack class, assuming it is just a toy collection): from collections.abc import Iterable class Stack(Generic[T]): def __init__(self, items: Iterable[T] | None) -> None: # Create an empty list with items of type T self.items: list[T] = list(items or []) ... 
deduced_int_stack = Stack([1]) reveal_type(deduced_int_stack) # N: revealed type is "__main__.Stack[builtins.int]" To sum up, generic classes have some type variable bound to the class body. When you create an instance of such class, it can be parametrized with some type - it may be another type variable or some fixed type, like int or tuple[str, Callable[[], MyClass[bool]]]. Then all occurrences of T in its body (except for nested classes, which are perhaps out of "quick glance" explanation context) are replaced with this type (or Any, if it is not given and cannot be deduced). This type can be deduced iff at least one of __init__ or __new__ arguments has type referring to T (just T or, say, list[T]), and otherwise you have to specify it. Note that if you have T used in __init__ of non-generic class, it is not very cool, although currently not disallowed. Now, if you use T in some methods of generic class, it refers to that replaced value and results in typecheck errors, if passed types are not compatible with expected. You can play with this example here. Working outside of generic context However, not all usages of type variables are related to generic classes. Fortunately, you cannot declare generic function with possibility to declare generic arg on calling side (like function<T> fun(x: number): int and fun<string>(0)), but there is enough more stuff. Let's begin with simpler examples - pure functions: T = TypeVar('T') def func1() -> T: return 1 def func2(x: T) -> int: return 1 def func3(x: T) -> T: return x def func4(x: T, y: T) -> int: return 1 First function is declared to return some value of unbound type T. It obviously makes no sense, and recent mypy versions even learned to mark it as error. Your function return depends only on arguments and external state - and type variable must be present there, right? You cannot also declare global variable of type T in module scope, because T is still unbound - and thus neither func1 args nor module-scoped variables can depend on T. Second function is more interesting. It does not cause mypy error, although still makes not very much sense: we can bind some type to T, but what is the difference between this and func2_1(x: Any) -> int: ...? We can speculate that now T can be used as annotation in function body, which can help in some corner case with type variable having upper bound, and I won't say it is impossible - but I cannot quickly construct such example, and have never seen such usage in proper context (it was always a mistake). Similar example is even explicitly referenced in PEP as valid. The third and fourth functions are typical examples of type variables in functions. The third declares function returning the same type as it's argument. The fourth function takes two arguments of the same type (arbitrary one). It is more useful if you have T = TypeVar('T', bound=Something) or T = TypeVar('T', str, bytes): you can concatenate two arguments of type T, but cannot - of type str | bytes, like in the below example: T = TypeVar('T', str, bytes) def total_length(x: T, y: T) -> int: return len(x + y) The most important fact about all examples above in this section: T doesnot have to be the same for different functions. You can call func3(1), then func3(['bar']) and then func4('foo', 'bar'). T is int, list[str] and str in these calls - no need to match. 
With this in mind your second solution is clear: T = TypeVar('T') class Stack: def __init__(self) -> None: # Create an empty list with items of type T self.items: list[T] = [] # E: Type variable "__main__.T" is unbound [valid-type] def push(self, item: T) -> None: self.items.append(item) def pop(self) -> T: # E: A function returning TypeVar should receive at least one argument containing the same TypeVar [type-var] return self.items.pop() Here is mypy issue, discussing similar case. __init__ says that we set attribute x to value of type T, but this T is lost later (T is scoped only within __init__) - so mypy rejects the assignment. push is ill-formed and T has no meaning here, but it does not result in invalid typing situation, so is not rejected (type of argument is erased to Any, so you still can call push with some argument). pop is invalid, because typechecker needs to know what my_stack.pop() will return. It could say "I give up - just have your Any", and will be perfectly valid (PEP does not enforce this). but mypy is more smart and denies invalid-by-design usage. Edge case: you can return SomeGeneric[T] with unbound T, for example, in factory functions: def make_list() -> list[T]: ... mylist: list[str] = make_list() because otherwise type argument couldn't have been specified on calling site For better understanding of type variables and generics in python, I suggest you to read PEP483 and PEP484 - usually PEPs are more like a boring standard, but these are really good as a starting point. There are many edge cases omitted there, which still cause hot discussions in mypy team (and probably other typecheckers too) - say, type variables in staticmethods of generic classes, or binding in classmethods used as constructors - mind that they can be used on instances too. However, basically you can: have a TypeVar bound to class (Generic or Protocol, or some Generic subclass - if you subclass Iterable[T], your class is already generic in T) - then all methods use the same T and can contain it in one or both sides or have a method-scoped/function-scoped type variable - then it's useful if repeated in the signature more than once (not necessary "clean" - it may be parametrizing another generic) or use type variables in generic aliases (like LongTuple = tuple[T, T, T, T] - then you can do x: LongTuple[int] = (1, 2, 3, 4) or do something more exotic with type variables, which is probably out of scope
how to define python generic classes
I have a class: T = TypeVar('T') class Stack(Generic[T]): def __init__(self) -> None: self.items: list[T] = [] def push(self, item: T) -> None: self.items.append(item) def pop(self) -> T: return self.items.pop() def empty(self) -> bool: return not self.items but I can also do: T = TypeVar('T') class Stack: def __init__(self) -> None: # Create an empty list with items of type T self.items: list[T] = [] def push(self, item: T) -> None: self.items.append(item) def pop(self) -> T: return self.items.pop() def empty(self) -> bool: return not self.items what is the difference between these two samples? which on should I use? I tried running both, and both worked.
[ "Type checking vs runtime\nAfter writing this, I finally understood @Alexander point in first comment: whatever you write in annotations, it does not affect runtime, and your code is executed in the same way (sorry, I missed that you're looking just not from type checking perspective). This is core principle of python typing, as opposed to strongly typed languages (which makes it wonderful IMO): you can always say \"I don't need types here - save my time and mental health\". Type annotations are used to help some third-party tools, like mypy (type checker maintained by python core team) and IDEs. IDEs can suggest you something based on this information, and mypy checks whether your code can work if your types match the reality.\nGeneric version\nT = TypeVar('T')\n\nclass Stack(Generic[T]):\n def __init__(self) -> None:\n self.items: list[T] = []\n\n def push(self, item: T) -> None:\n self.items.append(item)\n\n def pop(self) -> T:\n return self.items.pop()\n\n def empty(self) -> bool:\n return not self.items\n\nYou can treat type variables like regular variables, but intended for \"meta\" usage and ignored (well, there are some runtime traces, but they exist primary for introspection purpose) on runtime. They are substituted once for every binding context (more about it - below), and can be defined only once per module scope.\nThe code above declares normal generic class with one type argument. Now you can say Stack[int] to refer to a stack of integers, which is great. Current definition allows either explicit typing or using implicit Any parametrization:\n# Explicit type\nint_stack: Stack[int] = Stack()\nreveal_type(int_stack) # N: revealed type is \"__main__.Stack[builtins.int]\nint_stack.push(1) # ok\nint_stack.push('foo') # E: Argument 1 to \"push\" of \"Stack\" has incompatible type \"str\"; expected \"int\" [arg-type]\nreveal_type(int_stack.pop()) # N: revealed type is \"builtins.int\"\n\n# No type results in mypy error, similar to `x = []`\nany_stack = Stack() # E: need type annotation for any_stack\n# But if you ignore it, the type becomes `Stack[Any]`\nreveal_type(any_stack) # N: revealed type is \"__main__.Stack[Any]\nany_stack.push(1) # ok\nany_stack.push('foo') # ok too\nreveal_type(any_stack.pop()) # N: revealed type is \"Any\"\n\nTo make the intended usage easier, you can allow initialization from iterable (I'm not covering the fact that you should be using collections.deque instead of list and maybe instead of this Stack class, assuming it is just a toy collection):\nfrom collections.abc import Iterable\n\nclass Stack(Generic[T]):\n def __init__(self, items: Iterable[T] | None) -> None:\n # Create an empty list with items of type T\n self.items: list[T] = list(items or [])\n ...\n\ndeduced_int_stack = Stack([1])\nreveal_type(deduced_int_stack) # N: revealed type is \"__main__.Stack[builtins.int]\"\n\nTo sum up, generic classes have some type variable bound to the class body. When you create an instance of such class, it can be parametrized with some type - it may be another type variable or some fixed type, like int or tuple[str, Callable[[], MyClass[bool]]]. Then all occurrences of T in its body (except for nested classes, which are perhaps out of \"quick glance\" explanation context) are replaced with this type (or Any, if it is not given and cannot be deduced). This type can be deduced iff at least one of __init__ or __new__ arguments has type referring to T (just T or, say, list[T]), and otherwise you have to specify it. 
Note that if you have T used in __init__ of non-generic class, it is not very cool, although currently not disallowed.\nNow, if you use T in some methods of generic class, it refers to that replaced value and results in typecheck errors, if passed types are not compatible with expected.\nYou can play with this example here.\nWorking outside of generic context\nHowever, not all usages of type variables are related to generic classes. Fortunately, you cannot declare generic function with possibility to declare generic arg on calling side (like function<T> fun(x: number): int and fun<string>(0)), but there is enough more stuff. Let's begin with simpler examples - pure functions:\nT = TypeVar('T')\n\ndef func1() -> T:\n return 1\ndef func2(x: T) -> int:\n return 1\ndef func3(x: T) -> T:\n return x\ndef func4(x: T, y: T) -> int:\n return 1\n\nFirst function is declared to return some value of unbound type T. It obviously makes no sense, and recent mypy versions even learned to mark it as error. Your function return depends only on arguments and external state - and type variable must be present there, right? You cannot also declare global variable of type T in module scope, because T is still unbound - and thus neither func1 args nor module-scoped variables can depend on T.\nSecond function is more interesting. It does not cause mypy error, although still makes not very much sense: we can bind some type to T, but what is the difference between this and func2_1(x: Any) -> int: ...? We can speculate that now T can be used as annotation in function body, which can help in some corner case with type variable having upper bound, and I won't say it is impossible - but I cannot quickly construct such example, and have never seen such usage in proper context (it was always a mistake). Similar example is even explicitly referenced in PEP as valid.\nThe third and fourth functions are typical examples of type variables in functions. The third declares function returning the same type as it's argument.\nThe fourth function takes two arguments of the same type (arbitrary one). It is more useful if you have T = TypeVar('T', bound=Something) or T = TypeVar('T', str, bytes): you can concatenate two arguments of type T, but cannot - of type str | bytes, like in the below example:\nT = TypeVar('T', str, bytes)\n\ndef total_length(x: T, y: T) -> int:\n return len(x + y)\n\nThe most important fact about all examples above in this section: T doesnot have to be the same for different functions. You can call func3(1), then func3(['bar']) and then func4('foo', 'bar'). 
T is int, list[str] and str in these calls - no need to match.\nWith this in mind your second solution is clear:\nT = TypeVar('T')\n\nclass Stack:\n def __init__(self) -> None:\n # Create an empty list with items of type T\n self.items: list[T] = [] # E: Type variable \"__main__.T\" is unbound [valid-type]\n\n def push(self, item: T) -> None:\n self.items.append(item)\n\n def pop(self) -> T: # E: A function returning TypeVar should receive at least one argument containing the same TypeVar [type-var]\n return self.items.pop()\n\nHere is mypy issue, discussing similar case.\n__init__ says that we set attribute x to value of type T, but this T is lost later (T is scoped only within __init__) - so mypy rejects the assignment.\npush is ill-formed and T has no meaning here, but it does not result in invalid typing situation, so is not rejected (type of argument is erased to Any, so you still can call push with some argument).\npop is invalid, because typechecker needs to know what my_stack.pop() will return. It could say \"I give up - just have your Any\", and will be perfectly valid (PEP does not enforce this). but mypy is more smart and denies invalid-by-design usage.\nEdge case: you can return SomeGeneric[T] with unbound T, for example, in factory functions:\ndef make_list() -> list[T]: ...\n\nmylist: list[str] = make_list()\n\nbecause otherwise type argument couldn't have been specified on calling site\nFor better understanding of type variables and generics in python, I suggest you to read PEP483 and PEP484 - usually PEPs are more like a boring standard, but these are really good as a starting point.\nThere are many edge cases omitted there, which still cause hot discussions in mypy team (and probably other typecheckers too) - say, type variables in staticmethods of generic classes, or binding in classmethods used as constructors - mind that they can be used on instances too. However, basically you can:\n\nhave a TypeVar bound to class (Generic or Protocol, or some Generic subclass - if you subclass Iterable[T], your class is already generic in T) - then all methods use the same T and can contain it in one or both sides\nor have a method-scoped/function-scoped type variable - then it's useful if repeated in the signature more than once (not necessary \"clean\" - it may be parametrizing another generic)\nor use type variables in generic aliases (like LongTuple = tuple[T, T, T, T] - then you can do x: LongTuple[int] = (1, 2, 3, 4)\nor do something more exotic with type variables, which is probably out of scope\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "python_typing" ]
stackoverflow_0074472798_python_python_3.x_python_typing.txt
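As a follow-up to the record above: on Python 3.12+ the same generic class can be written without an explicit TypeVar, using the PEP 695 type-parameter syntax. A sketch, assuming a 3.12+ interpreter:
class Stack[T]:
    def __init__(self) -> None:
        self.items: list[T] = []

    def push(self, item: T) -> None:
        self.items.append(item)

    def pop(self) -> T:
        return self.items.pop()

    def empty(self) -> bool:
        return not self.items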
Q: GEKKO error in model expression with array of variables and intermediates I am trying to use GEKKO for fitting and function parameters estimation. I need to use arrays of variables and arrays of intermediate-type variables because of changing number of parameters to fit. And got an error I think in a model. apm some_ip_here_gk_model14 <br><pre> ---------------------------------------------------------------- APMonitor, Version 1.0.1 APMonitor Optimization Suite ---------------------------------------------------------------- --------- APM Model Size ------------ Each time step contains Objects : 0 Constants : 2 Variables : 15 Intermediates: 22 Connections : 0 Equations : 24 Residuals : 2 @error: Model Expression *** Error in syntax of function string: Invalid element: none Position: 1 none ? how to check what is this error? I am running this code in jupyter notebook and I tried to look apm file - didn't find it in the folder where this jupyter notebook is situated. Where should I search? Here is the code. import numpy as np from gekko import GEKKO import math M = 10; m = 1; gj =1; n = 1 num_pulses_in_window = 4 сonstant = 1; ac = 1 el_init_guess = [1,2,3,4] borders_left = [1,2,3,4] borders_right = [1,2,3,4] A1_c = (M/(M+m))*сonstant gj_c = gj # using GEKKO for preliminary estomation xData = np.array([1,2,3,4]) yData = np.array([2.5,1.2,3.2,1.1]) model = GEKKO() # parameters x = model.Param(value = xData) z = model.Param(value = yData) # constants A1 = model.Const(A1_c) gj = model.Const(gj_c) # variables E = model.Array(model.Var, num_pulses_in_window) G1 = model.Array(model.Var, num_pulses_in_window) G2 = model.Array(model.Var, num_pulses_in_window) Gg = model.Array(model.Var, num_pulses_in_window) #Intermediates k_alfa = model.Intermediate(A1*model.sqrt(x)) ro = model.Intermediate(k_alfa*ac) phi = model.Intermediate(ro) G = model.Array(model.Intermediate, num_pulses_in_window, equation=None) d = model.Array(model.Intermediate, num_pulses_in_window, equation=None) f = model.Array(model.Intermediate, num_pulses_in_window, equation=None) for i in range(0, num_pulses_in_window): E[i].value = el_init_guess[i] E[i].lower = borders_left[i] E[i].upper = borders_right[i] #G1 G1[i].lower = 0.0000001 G1[i].upper = 1 #G2 G2[i].lower = 0 G2[i].upper = 0 #Gg Gg[i].lower = 0.0000001 Gg[i].upper = 1 G[i] = model.Intermediate(G1[i]+G2[i]+Gg[i]) d[i] = model.Intermediate((E[i]-x)**2+(G[i]/2)**2) f[i] = model.Intermediate((1-(1-(G[i]*G1[i]/(2*d[i])))*model.cos(2*phi)-((E[i]-x)*G[i]/d[i])*model.sin(2*phi))) sigma_sum = model.Intermediate(2*math.pi*gj/k_alfa * (model.sum(f))) y = model.Var() model.Equation(y == model.exp(-n*sigma_sum)) model.Minimize(((y-z))**2) model.options.IMODE = 2 model.options.SOLVER = 3 model.options.MAX_ITER = 1000 model.solve(disp=1) A: Intermediates are not defined with m.Array() because they are defined with the m.Intermediate() method. Try using an empty list instead: G = [None]*num_pulses_in_window d = [None]*num_pulses_in_window f = [None]*num_pulses_in_window For troubleshooting, open the run folder with model.open_folder() and inspect gk_model0.apm with a text editor. This is a plain text version of the model. The 4th and onward intermediates are not defined correctly. Model Constants i0 = 0.9090909090909091 i1 = 1 End Constants Parameters p1 p2 End Parameters Variables v1 = 1, <= 1, >= 1 v2 = 2, <= 2, >= 2 v3 = 3, <= 3, >= 3 v4 = 4, <= 4, >= 4 ... 
v13 = 0, <= 1, >= 1e-07 v14 = 0, <= 1, >= 1e-07 v15 = 0, <= 1, >= 1e-07 v16 = 0, <= 1, >= 1e-07 v17 = 0 End Variables Intermediates i2=((i0)*(sqrt(p1))) i3=((i2)*(1)) i4=i3 i5=None i6=None i7=None i8=None i9=None ... Here is a script that runs successfully: import numpy as np from gekko import GEKKO import math M = 10; m = 1; gj =1; n = 1 num_pulses_in_window = 4 сonstant = 1; ac = 1 el_init_guess = [1,2,3,4] borders_left = [1,2,3,4] borders_right = [1,2,3,4] A1_c = (M/(M+m))*сonstant gj_c = gj # using GEKKO for preliminary estomation xData = np.array([1,2,3,4]) yData = np.array([2.5,1.2,3.2,1.1]) model = GEKKO() # parameters x = model.Param(value = xData) z = model.Param(value = yData) # constants A1 = model.Const(A1_c) gj = model.Const(gj_c) # variables E = model.Array(model.Var, num_pulses_in_window) G1 = model.Array(model.Var, num_pulses_in_window) G2 = model.Array(model.Var, num_pulses_in_window) Gg = model.Array(model.Var, num_pulses_in_window) #Intermediates k_alfa = model.Intermediate(A1*model.sqrt(x)) ro = model.Intermediate(k_alfa*ac) phi = model.Intermediate(ro) G = [None]*num_pulses_in_window d = [None]*num_pulses_in_window f = [None]*num_pulses_in_window for i in range(0, num_pulses_in_window): E[i].value = el_init_guess[i] E[i].lower = borders_left[i] E[i].upper = borders_right[i] #G1 G1[i].lower = 0.0000001 G1[i].upper = 1 #G2 G2[i].lower = 0 G2[i].upper = 0 #Gg Gg[i].lower = 0.0000001 Gg[i].upper = 1 G[i] = model.Intermediate(G1[i]+G2[i]+Gg[i]) d[i] = model.Intermediate((E[i]-x)**2+(G[i]/2)**2) f[i] = model.Intermediate((1-(1-(G[i]*G1[i]/(2*d[i])))*model.cos(2*phi)-((E[i]-x)*G[i]/d[i])*model.sin(2*phi))) sigma_sum = model.Intermediate(2*math.pi*gj/k_alfa * (model.sum(f))) y = model.Var() model.Equation(y == model.exp(-n*sigma_sum)) model.Minimize(((y-z))**2) model.options.IMODE = 2 model.options.SOLVER = 3 model.options.MAX_ITER = 1000 model.solve(disp=1) Don't forget to include dummy values in your script so that it runs and produces the error. I edited the question to include sample values in your question: M = 10; m = 1; gj =1; n = 1 num_pulses_in_window = 4 сonstant = 1; ac = 1 el_init_guess = [1,2,3,4] borders_left = [1,2,3,4] borders_right = [1,2,3,4] A1_c = (M/(M+m))*сonstant gj_c = gj # using GEKKO for preliminary estomation xData = np.array([1,2,3,4]) yData = np.array([2.5,1.2,3.2,1.1])
GEKKO error in model expression with array of variables and intermediates
I am trying to use GEKKO for fitting and function parameters estimation. I need to use arrays of variables and arrays of intermediate-type variables because of changing number of parameters to fit. And got an error I think in a model. apm some_ip_here_gk_model14 <br><pre> ---------------------------------------------------------------- APMonitor, Version 1.0.1 APMonitor Optimization Suite ---------------------------------------------------------------- --------- APM Model Size ------------ Each time step contains Objects : 0 Constants : 2 Variables : 15 Intermediates: 22 Connections : 0 Equations : 24 Residuals : 2 @error: Model Expression *** Error in syntax of function string: Invalid element: none Position: 1 none ? how to check what is this error? I am running this code in jupyter notebook and I tried to look apm file - didn't find it in the folder where this jupyter notebook is situated. Where should I search? Here is the code. import numpy as np from gekko import GEKKO import math M = 10; m = 1; gj =1; n = 1 num_pulses_in_window = 4 сonstant = 1; ac = 1 el_init_guess = [1,2,3,4] borders_left = [1,2,3,4] borders_right = [1,2,3,4] A1_c = (M/(M+m))*сonstant gj_c = gj # using GEKKO for preliminary estomation xData = np.array([1,2,3,4]) yData = np.array([2.5,1.2,3.2,1.1]) model = GEKKO() # parameters x = model.Param(value = xData) z = model.Param(value = yData) # constants A1 = model.Const(A1_c) gj = model.Const(gj_c) # variables E = model.Array(model.Var, num_pulses_in_window) G1 = model.Array(model.Var, num_pulses_in_window) G2 = model.Array(model.Var, num_pulses_in_window) Gg = model.Array(model.Var, num_pulses_in_window) #Intermediates k_alfa = model.Intermediate(A1*model.sqrt(x)) ro = model.Intermediate(k_alfa*ac) phi = model.Intermediate(ro) G = model.Array(model.Intermediate, num_pulses_in_window, equation=None) d = model.Array(model.Intermediate, num_pulses_in_window, equation=None) f = model.Array(model.Intermediate, num_pulses_in_window, equation=None) for i in range(0, num_pulses_in_window): E[i].value = el_init_guess[i] E[i].lower = borders_left[i] E[i].upper = borders_right[i] #G1 G1[i].lower = 0.0000001 G1[i].upper = 1 #G2 G2[i].lower = 0 G2[i].upper = 0 #Gg Gg[i].lower = 0.0000001 Gg[i].upper = 1 G[i] = model.Intermediate(G1[i]+G2[i]+Gg[i]) d[i] = model.Intermediate((E[i]-x)**2+(G[i]/2)**2) f[i] = model.Intermediate((1-(1-(G[i]*G1[i]/(2*d[i])))*model.cos(2*phi)-((E[i]-x)*G[i]/d[i])*model.sin(2*phi))) sigma_sum = model.Intermediate(2*math.pi*gj/k_alfa * (model.sum(f))) y = model.Var() model.Equation(y == model.exp(-n*sigma_sum)) model.Minimize(((y-z))**2) model.options.IMODE = 2 model.options.SOLVER = 3 model.options.MAX_ITER = 1000 model.solve(disp=1)
[ "Intermediates are not defined with m.Array() because they are defined with the m.Intermediate() method. Try using an empty list instead:\nG = [None]*num_pulses_in_window\nd = [None]*num_pulses_in_window\nf = [None]*num_pulses_in_window\n\nFor troubleshooting, open the run folder with model.open_folder() and inspect gk_model0.apm with a text editor. This is a plain text version of the model. The 4th and onward intermediates are not defined correctly.\nModel\nConstants\n i0 = 0.9090909090909091\n i1 = 1\nEnd Constants\nParameters\n p1\n p2\nEnd Parameters\nVariables\n v1 = 1, <= 1, >= 1\n v2 = 2, <= 2, >= 2\n v3 = 3, <= 3, >= 3\n v4 = 4, <= 4, >= 4\n ...\n v13 = 0, <= 1, >= 1e-07\n v14 = 0, <= 1, >= 1e-07\n v15 = 0, <= 1, >= 1e-07\n v16 = 0, <= 1, >= 1e-07\n v17 = 0\nEnd Variables\nIntermediates\n i2=((i0)*(sqrt(p1)))\n i3=((i2)*(1))\n i4=i3\n i5=None\n i6=None\n i7=None\n i8=None\n i9=None\n ...\n\nHere is a script that runs successfully:\nimport numpy as np\nfrom gekko import GEKKO\nimport math\n\nM = 10; m = 1; gj =1; n = 1\nnum_pulses_in_window = 4\nсonstant = 1; ac = 1\nel_init_guess = [1,2,3,4]\nborders_left = [1,2,3,4]\nborders_right = [1,2,3,4]\nA1_c = (M/(M+m))*сonstant\ngj_c = gj\n\n# using GEKKO for preliminary estomation\nxData = np.array([1,2,3,4])\nyData = np.array([2.5,1.2,3.2,1.1])\n\nmodel = GEKKO()\n\n# parameters\nx = model.Param(value = xData) \nz = model.Param(value = yData) \n\n# constants\nA1 = model.Const(A1_c)\ngj = model.Const(gj_c)\n\n# variables\nE = model.Array(model.Var, num_pulses_in_window)\nG1 = model.Array(model.Var, num_pulses_in_window)\nG2 = model.Array(model.Var, num_pulses_in_window)\nGg = model.Array(model.Var, num_pulses_in_window)\n\n#Intermediates\nk_alfa = model.Intermediate(A1*model.sqrt(x))\nro = model.Intermediate(k_alfa*ac)\nphi = model.Intermediate(ro)\n\nG = [None]*num_pulses_in_window\nd = [None]*num_pulses_in_window\nf = [None]*num_pulses_in_window\n\nfor i in range(0, num_pulses_in_window):\n E[i].value = el_init_guess[i]\n E[i].lower = borders_left[i]\n E[i].upper = borders_right[i]\n \n #G1\n G1[i].lower = 0.0000001\n G1[i].upper = 1\n #G2\n G2[i].lower = 0\n G2[i].upper = 0\n #Gg\n Gg[i].lower = 0.0000001\n Gg[i].upper = 1\n\n G[i] = model.Intermediate(G1[i]+G2[i]+Gg[i])\n d[i] = model.Intermediate((E[i]-x)**2+(G[i]/2)**2)\n f[i] = model.Intermediate((1-(1-(G[i]*G1[i]/(2*d[i])))*model.cos(2*phi)-((E[i]-x)*G[i]/d[i])*model.sin(2*phi)))\n\nsigma_sum = model.Intermediate(2*math.pi*gj/k_alfa * (model.sum(f)))\n\ny = model.Var()\n\nmodel.Equation(y == model.exp(-n*sigma_sum))\n\nmodel.Minimize(((y-z))**2)\n\nmodel.options.IMODE = 2\nmodel.options.SOLVER = 3\n\nmodel.options.MAX_ITER = 1000\n\nmodel.solve(disp=1)\n\nDon't forget to include dummy values in your script so that it runs and produces the error. I edited the question to include sample values in your question:\nM = 10; m = 1; gj =1; n = 1\nnum_pulses_in_window = 4\nсonstant = 1; ac = 1\nel_init_guess = [1,2,3,4]\nborders_left = [1,2,3,4]\nborders_right = [1,2,3,4]\nA1_c = (M/(M+m))*сonstant\ngj_c = gj\n\n# using GEKKO for preliminary estomation\nxData = np.array([1,2,3,4])\nyData = np.array([2.5,1.2,3.2,1.1])\n\n" ]
[ 0 ]
[]
[]
[ "curve_fitting", "gekko", "python" ]
stackoverflow_0074482108_curve_fitting_gekko_python.txt
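The pattern from the answer above reduced to its core — a minimal, self-contained sketch of building GEKKO Intermediates in a plain Python list (toy objective, not the question's model):
from gekko import GEKKO

m = GEKKO(remote=False)
x = m.Array(m.Var, 3, value=1)
f = [None]*3                       # plain list, not m.Array(m.Intermediate, ...)
for i in range(3):
    f[i] = m.Intermediate(x[i]**2)  # one Intermediate per element
m.Minimize(m.sum(f))
m.solve(disp=False)
print([xi.value[0] for xi in x])    # all near 0 at the optimum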
Q: Loop to remove string in selected dataframe column header I wonder if it is possible to create a loop to remove strings in dataframe column. I have multiple dataframes which look like the structure below. df = pd.DataFrame({ 'xyz CODE': [1,2,3,3,4, 5,6,7,7,8], 'a': [4, 5, 3, 1, 2, 20, 10, 40, 50, 30], 'b': [20, 10, 40, 50, 30, 4, 5, 3, 1, 2], 'c': [25, 20, 5, 15, 10, 25, 20, 5, 15, 10] }) For each dataframe I want to remove string 'CODE' in the first column. I wrote the following if __name__ == '__main__': path = os.getcwd() csv_files = glob.glob(os.path.join(path, "*.xlsx")) dataframes_list = [] for file in csv_files: dataframes_list.append(pd.read_excel(file)) for i in dataframes_list: i.columns[0] = i.columns[0].replace('CODE', '') print(i.columns[0]) i = dosomethingtoeachdf(i) i.to_excel(f'{i.columns[0]}' + '.xlsx') I ran into an error TypeError: Index does not support mutable operations. I know I'm missing some basics here, appreciate any help! A: Try to use DataFrame.rename: df = df.rename(columns={df.columns[0]: df.columns[0].replace(" CODE", "")}) print(df) Prints: xyz a b c 0 1 4 20 25 1 2 5 10 20 2 3 3 40 5 3 3 1 50 15 4 4 2 30 10 5 5 20 4 25 6 6 10 5 20 7 7 40 3 5 8 7 50 1 15 9 8 30 2 10 A: df.columns = [column.replace(' CODE', '') if index == 0 else column for index, column in enumerate(df.columns)] Or you can just apply the str replace method to all columns if you don't care about it impacting columns other than the first e.g. df.columns = df.columns.str.replace(' CODE', '')
Loop to remove string in selected dataframe column header
I wonder if it is possible to create a loop to remove strings in dataframe column. I have multiple dataframes which look like the structure below. df = pd.DataFrame({ 'xyz CODE': [1,2,3,3,4, 5,6,7,7,8], 'a': [4, 5, 3, 1, 2, 20, 10, 40, 50, 30], 'b': [20, 10, 40, 50, 30, 4, 5, 3, 1, 2], 'c': [25, 20, 5, 15, 10, 25, 20, 5, 15, 10] }) For each dataframe I want to remove string 'CODE' in the first column. I wrote the following if __name__ == '__main__': path = os.getcwd() csv_files = glob.glob(os.path.join(path, "*.xlsx")) dataframes_list = [] for file in csv_files: dataframes_list.append(pd.read_excel(file)) for i in dataframes_list: i.columns[0] = i.columns[0].replace('CODE', '') print(i.columns[0]) i = dosomethingtoeachdf(i) i.to_excel(f'{i.columns[0]}' + '.xlsx') I ran into an error TypeError: Index does not support mutable operations. I know I'm missing some basics here, appreciate any help!
[ "Try to use DataFrame.rename:\ndf = df.rename(columns={df.columns[0]: df.columns[0].replace(\" CODE\", \"\")})\nprint(df)\n\nPrints:\n xyz a b c\n0 1 4 20 25\n1 2 5 10 20\n2 3 3 40 5\n3 3 1 50 15\n4 4 2 30 10\n5 5 20 4 25\n6 6 10 5 20\n7 7 40 3 5\n8 7 50 1 15\n9 8 30 2 10\n\n", "df.columns = [column.replace(' CODE', '') if index == 0 else column for index, column in enumerate(df.columns)]\n\nOr you can just apply the str replace method to all columns if you don't care about it impacting columns other than the first e.g.\ndf.columns = df.columns.str.replace(' CODE', '')\n\n" ]
[ 1, 0 ]
[]
[]
[ "loops", "pandas", "python", "python_3.x" ]
stackoverflow_0074483455_loops_pandas_python_python_3.x.txt
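A sketch of the accepted fix applied inside the question's original loop; dosomethingtoeachdf stands for the OP's unspecified processing step:
for df in dataframes_list:
    new_name = df.columns[0].replace(' CODE', '')
    df = df.rename(columns={df.columns[0]: new_name})  # rename returns a new frame
    df = dosomethingtoeachdf(df)
    df.to_excel(f'{new_name}.xlsx')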
Q: Minimizing rows with a merge/squish in Pandas DataFrame with Multiple indexes
With a DataFrame like,
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'id_1': [33,33,33,33,22,22,88,100],
    'id_2': [64,64,64,64,12,12,77,100],
    'col_1': [np.nan, 'dog', np.nan, 'kangaroo', np.nan, np.nan, np.nan, np.nan],
    'col_2': ['bike', 'car', np.nan, np.nan, 'train', np.nan, 'horse', np.nan],
    'col_3': [np.nan, np.nan, 'star', 'meteor', np.nan, 'rock', np.nan, np.nan]
})
"""
   id_1  id_2     col_1  col_2   col_3
0    33    64       NaN   bike     NaN
1    33    64       dog    car     NaN
2    33    64       NaN    NaN    star
3    33    64  kangaroo    NaN  meteor
4    22    12       NaN  train     NaN
5    22    12       NaN    NaN    rock
6    88    77       NaN  horse     NaN
7   100   100       NaN    NaN     NaN
"""
How can it be transformed into a minimum amount of rows without aggregating or losing data like the following?
   id_1  id_2     col_1  col_2   col_3
0    33    64       dog   bike    star
1    33    64  kangaroo    car  meteor
3    22    12       NaN  train    rock
4    88    77       NaN  horse     NaN
5   100   100       NaN    NaN     NaN
Basically, for each group of id_X columns, the col_X columns' NaN values are replaced with other group values if applicable.
A: # melt (wide to long) on id_1, id_2 and sort the values
# this brings the NaN to the top
df2=df.melt(id_vars=['id_1', 'id_2'], var_name='col').sort_values(['id_1', 'id_2','col', 'value'])

# create a seq, to make the keys unique and pivot
df3=(df2.assign(seq=df2.groupby(['id_1','id_2','col']).cumcount())
    .pivot(index=['id_1','id_2','seq'], columns=['col'], values='value').reset_index()
)

# for id_1 = 100, you have all NaN and still want to keep it
# so remove rows with all NaN except when its for seq=0
df3=df3.loc[~((df3['seq']>0) & (df3[['col_1','col_2','col_3']].isna().all(axis=1)))]

# drop the seq (temp) column
df3.drop(columns='seq', inplace=True)
df3

col  id_1  id_2     col_1  col_2   col_3
0      22    12       NaN  train    rock
2      33    64       dog   bike  meteor
3      33    64  kangaroo    car    star
6      88    77       NaN  horse     NaN
7     100   100       NaN    NaN     NaN
A: Another possible solution:
# this is to push up all not NaN values to the top of each column
df.loc[:, 'col_1':'col_3'] = df.groupby(
    ['id_1', 'id_2'], sort=False).transform(lambda x: sorted(x, key=pd.isnull))

# this is to remove all useless rows of NaN
df.loc[~(df.duplicated(['id_1', 'id_2']) &
         df.loc[:, 'col_1':'col_3'].isna().all(axis=1))]
Output:
   id_1  id_2     col_1  col_2   col_3
0    33    64       dog   bike    star
1    33    64  kangaroo    car  meteor
4    22    12       NaN  train    rock
6    88    77       NaN  horse     NaN
7   100   100       NaN    NaN     NaN
A: To avoid illegible Pandas voodoo, after your imports and df instantiation, you can do
def get_max_vals_from_row_sets(row, cols):
    mn = 1
    for col in cols:
        mn = max(mn, len(row[col]))
    return mn

def add_id_row(d, row, ids, cols):
    max_vals = get_max_vals_from_row_sets(row, cols)

    for _ in range(max_vals):
        for id_ in ids:
            d[id_].append(row[id_])

        for col in cols:
            if len(row[col]) != 0:
                d[col].append(row[col].pop())
            else:
                d[col].append(np.nan)

def drop_set_nans(row, cols):
    for col in cols:
        if np.nan in row[col]:
            row[col].remove(np.nan)
    return row

def squash_out_redundant_nans(df, ids, cols):
    df = df.groupby(ids).agg(set).reset_index()
    d = {k: [] for k in df.columns}
    for _, row in df.iterrows():  # fixed: iterate over df, not an undefined df1
        drop_set_nans(row, cols)
        add_id_row(d, row, ids, cols)

    df = pd.DataFrame(d)
    return df

ids = ['id_1', 'id_2']
cols = ['col_1', 'col_2', 'col_3']
df = squash_out_redundant_nans(df, ids, cols)
print(df)
Minimizing rows with a merge/squish in Pandas DataFrame with Multiple indexes
With a DataFrame like, import pandas as pd import numpy as np df = pd.DataFrame({ 'id_1': [33,33,33,33,22,22,88,100], 'id_2': [64,64,64,64,12,12,77,100], 'col_1': [np.nan, 'dog', np.nan, 'kangaroo', np.nan, np.nan, np.nan, np.nan], 'col_2': ['bike', 'car', np.nan, np.nan, 'train', np.nan, 'horse', np.nan], 'col_3': [np.nan, np.nan, 'star', 'meteor', np.nan, 'rock', np.nan, np.nan] }) """ id_1 id_2 col_1 col_2 col_3 0 33 64 NaN bike NaN 1 33 64 dog car NaN 2 33 64 NaN NaN star 3 33 64 kangaroo NaN meteor 4 22 12 NaN train NaN 5 22 12 NaN NaN rock 6 88 77 NaN horse NaN 7 100 100 NaN NaN NaN """ How can it be transformed into a minimum amount of rows without aggregating or losing data like the following? id_1 id_2 col_1 col_2 col_3 0 33 64 dog bike star 1 33 64 kangaroo car meteor 3 22 12 NaN train rock 4 88 77 NaN horse NaN 5 100 100 NaN NaN NaN Basically, for each group of id_X columns, the col_X columns' NaN values are replaced with other group values if applicable.
[ "# melt (wide to long) on id_1, id_2 and sort the values\n# this brings the NaN to the top\n\ndf2=df.melt(id_vars=['id_1', 'id_2'], var_name='col').sort_values(['id_1', 'id_2','col', 'value'])\n\n# create a seq, to make the keys unique and pivot\ndf3=(df2.assign(seq=df2.groupby(['id_1','id_2','col']).cumcount())\n    .pivot(index=['id_1','id_2','seq'], columns=['col'], values='value').reset_index()\n)\n\n# for id_1 = 100, you have all NaN and still want to keep it\n# so remove rows with all NaN except when its for seq=0\ndf3=df3.loc[~((df3['seq']>0) & \n (df3[['col_1','col_2','col_3']].isna().all(axis=1)))]\n\n# drop the seq (temp) column\ndf3.drop(columns='seq', inplace=True)\ndf3\n\n\ncol id_1 id_2 col_1 col_2 col_3\n0 22 12 NaN train rock\n2 33 64 dog bike meteor\n3 33 64 kangaroo car star\n6 88 77 NaN horse NaN\n7 100 100 NaN NaN NaN\n\n", "Another possible solution:\n# this is to push up all not NaN values to the top of each column\ndf.loc[:, 'col_1':'col_3'] = df.groupby(\n ['id_1', 'id_2'], sort=False).transform(lambda x: sorted(x, key=pd.isnull))\n\n# this is to remove all useless rows of NaN\ndf.loc[~(df.duplicated(['id_1', 'id_2']) &\n df.loc[:, 'col_1':'col_3'].isna().all(axis=1))]\n\nOutput:\n id_1 id_2 col_1 col_2 col_3\n0 33 64 dog bike star\n1 33 64 kangaroo car meteor\n4 22 12 NaN train rock\n6 88 77 NaN horse NaN\n7 100 100 NaN NaN NaN\n\n", "To avoid illegible Pandas voodoo, after your imports and df instantiation, you can do\ndef get_max_vals_from_row_sets(row, cols):\n    mn = 1\n    for col in cols:\n        mn = max(mn, len(row[col]))\n    return mn\n\ndef add_id_row(d, row, ids, cols):\n    max_vals = get_max_vals_from_row_sets(row, cols)\n\n    for _ in range(max_vals):\n        for id_ in ids:\n            d[id_].append(row[id_])\n\n        for col in cols:\n            if len(row[col]) != 0:\n                d[col].append(row[col].pop())\n            else:\n                d[col].append(np.nan)\n\ndef drop_set_nans(row, cols):\n    for col in cols:\n        if np.nan in row[col]:\n            row[col].remove(np.nan)\n    return row\n\ndef squash_out_redundant_nans(df, ids, cols):\n    df = df.groupby(ids).agg(set).reset_index()\n    d = {k: [] for k in df.columns}\n    for _, row in df.iterrows():  # fixed: iterate over df, not an undefined df1\n        drop_set_nans(row, cols)\n        add_id_row(d, row, ids, cols)\n\n    df = pd.DataFrame(d)\n    return df\n\nids = ['id_1', 'id_2']\ncols = ['col_1', 'col_2', 'col_3']\ndf = squash_out_redundant_nans(df, ids, cols)\nprint(df)\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074478461_dataframe_pandas_python.txt
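One more route for the record above, assuming the melt-and-pivot reshape is acceptable: drop the NaNs in long form, number the surviving values per group/column, and pivot back. Groups that were entirely NaN (the 100/100 row) disappear in the dropna step, so they are merged back at the end. A sketch, using the question's df:
long = df.melt(['id_1', 'id_2']).dropna(subset=['value'])
long['slot'] = long.groupby(['id_1', 'id_2', 'variable']).cumcount()
wide = (long.pivot(index=['id_1', 'id_2', 'slot'], columns='variable', values='value')
            .reset_index().drop(columns='slot'))
# re-attach groups that held only NaN values
out = df[['id_1', 'id_2']].drop_duplicates().merge(wide, how='left')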
Q: Python Higher order-function with varying arguments I am trying to write a higher-order function that takes a varying amount of arguments. For instance something like this def higher(fnc, args): print(f"Calling function {fnc}") fnc(argv) def one_arg(only_arg): print(f"Here is the only arg {only}") def two_arg(first, second): print(f"Here is the first {first} And here is the second {second}") higher(one_arg, "Only one argument") higher(two_arg, "Here's one arg", "and Another one") Is it possible to do this without changing the functions one_arg() or two_arg() ? I've looked into using *argv but I don't think I understand it well enough or see a way to use that without changing those two functions A: you can just use * to define multiple args. def higher(fnc, *args): print(f"Calling function {fnc}") fnc(*args) def one_arg(only_arg): print(f"Here is the only arg {only_arg}") def two_arg(first, second): print(f"Here is the first {first} And here is the second {second}") higher(one_arg, "Only one argument") higher(two_arg, "Here's one arg", "and Another one") Also for more details regarding functions and object oriented programming in python you can refer to this link There are a lot more additional resources available online for you to learn A: Define higher and call fnc like this: def higher(fnc, *args): print(f"Calling function {fnc}") fnc(*args) Within the body of higher, args is a tuple of the positional arguments passed after fnc. Calling fnc(*args) spreads that tuple into individual positional arguments to fnc.
Python Higher order-function with varying arguments
I am trying to write a higher-order function that takes a varying amount of arguments. For instance something like this def higher(fnc, args): print(f"Calling function {fnc}") fnc(argv) def one_arg(only_arg): print(f"Here is the only arg {only}") def two_arg(first, second): print(f"Here is the first {first} And here is the second {second}") higher(one_arg, "Only one argument") higher(two_arg, "Here's one arg", "and Another one") Is it possible to do this without changing the functions one_arg() or two_arg() ? I've looked into using *argv but I don't think I understand it well enough or see a way to use that without changing those two functions
[ "you can just use * to define multiple args.\ndef higher(fnc, *args):\n print(f\"Calling function {fnc}\")\n fnc(*args)\n\ndef one_arg(only_arg):\n print(f\"Here is the only arg {only_arg}\")\n\ndef two_arg(first, second):\n print(f\"Here is the first {first} And here is the second {second}\")\n\nhigher(one_arg, \"Only one argument\")\nhigher(two_arg, \"Here's one arg\", \"and Another one\")\n\nAlso for more details regarding functions and object oriented programming in python you can refer to this link\nThere are a lot more additional resources available online for you to learn\n", "Define higher and call fnc like this:\ndef higher(fnc, *args):\n print(f\"Calling function {fnc}\")\n fnc(*args)\n\nWithin the body of higher, args is a tuple of the positional arguments passed after fnc. Calling fnc(*args) spreads that tuple into individual positional arguments to fnc.\n" ]
[ 1, 0 ]
[]
[]
[ "higher_order_functions", "python", "python_3.x" ]
stackoverflow_0074483499_higher_order_functions_python_python_3.x.txt
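The same forwarding pattern extends to keyword arguments. A small sketch building on the answers above, reusing the question's two_arg:
def higher(fnc, *args, **kwargs):
    print(f"Calling function {fnc.__name__}")
    return fnc(*args, **kwargs)  # forward positionals and keywords unchanged

higher(two_arg, "positional first", second="keyword second")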
Q: How to split text into columns? I want to split these ascii characters into 4 columns so it will look more convenient.. Uploaded a picture as an example.. for i in range(1,121): a = chr(i) print(str(i)+". "+str(a)) I have tried the .format or split(), but they don't seem to work as intended A: Because print prints line-by-line, we're going to have to figure out which characters go on the same line, and then format those into a string. Since some characters are not printable, we'll have to replace them, especially characters like "\n" and "\t", which would break our formatting. Luckily, python provides a str.isprintable method that tells us exactly this. If a character is not printable, I've replaced it with an uppercase X (but you can choose any other character you want). Next, we need to play with string formatting. Here's a cheatsheet that describes the f-string syntax I've used. E.g., f"{num:>3}" formats the integer num into our string, right justified to three places. Similarly, f"{disp_str:<10}" formats the string disp_str, left-justified to a length of 10 The last thing to note is the use of the argument end="" in print(). The default value of this argument is "\n", which means print adds a newline after whatever we ask it to print. Since we don't want that after each column, we add end="" (which tells python to add nothing after the argument to print). After all columns are done, we print nothing, but let it add the default newline, which takes us onto a new line in the terminal. n_cols = 4 n_rows = 30 for row in range(n_rows): for col in range(n_cols): num = row + n_rows * col # Find which character goes in this row/col char = chr(num) # Get the ascii character if not char.isprintable(): char = "X" # If it is not printable, replace it with a glyph for display disp_str = f"{num:>3}. {char}" # Format num and char into a string print(f"{disp_str:<10}", end="") # Left-justify disp_str to 10 places. Do not print the default newline print("") # After the row is done, we can print nothing (plus the default newline) This gives the output: 0. X 30. X 60. < 90. Z 1. X 31. X 61. = 91. [ 2. X 32. 62. > 92. \ 3. X 33. ! 63. ? 93. ] 4. X 34. " 64. @ 94. ^ 5. X 35. # 65. A 95. _ 6. X 36. $ 66. B 96. ` 7. X 37. % 67. C 97. a 8. X 38. & 68. D 98. b 9. X 39. ' 69. E 99. c 10. X 40. ( 70. F 100. d 11. X 41. ) 71. G 101. e 12. X 42. * 72. H 102. f ... A: To print multiple characters on the same line you will need to either print multiple in one statement or alternatively change the default end "\n" to prevent the newline after each print statement. Then you can perform a check to print a newline only after 4 characters have been printed. The below code prints the characters left to right, and then down. for i in range(1,121): a = chr(i) print(str(i)+". "+str(a), end=" ") if i % 4 == 0: print() I believe you may see some formatting issues due to printing some ASCII characters, however.
How to split text into columns?
I want to split these ascii characters into 4 columns so it will look more convenient.. Uploaded a picture as an example.. for i in range(1,121): a = chr(i) print(str(i)+". "+str(a)) I have tried the .format or split(), but they don't seem to work as intended
[ "Because print prints line-by-line, we're going to have to figure out which characters go on the same line, and then format those into a string. Since some characters are not printable, we'll have to replace them, especially characters like \"\\n\" and \"\\t\", which would break our formatting. Luckily, python provides a str.isprintable method that tells us exactly this. If a character is not printable, I've replaced it with an uppercase X (but you can choose any other character you want).\nNext, we need to play with string formatting. Here's a cheatsheet that describes the f-string syntax I've used. E.g., f\"{num:>3}\" formats the integer num into our string, right justified to three places. Similarly, f\"{disp_str:<10}\" formats the string disp_str, left-justified to a length of 10\nThe last thing to note is the use of the argument end=\"\" in print(). The default value of this argument is \"\\n\", which means print adds a newline after whatever we ask it to print. Since we don't want that after each column, we add end=\"\" (which tells python to add nothing after the argument to print). After all columns are done, we print nothing, but let it add the default newline, which takes us onto a new line in the terminal.\nn_cols = 4\nn_rows = 30\n\nfor row in range(n_rows):\n for col in range(n_cols):\n num = row + n_rows * col # Find which character goes in this row/col\n char = chr(num) # Get the ascii character\n if not char.isprintable(): \n char = \"X\" # If it is not printable, replace it with a glyph for display\n disp_str = f\"{num:>3}. {char}\" # Format num and char into a string\n print(f\"{disp_str:<10}\", end=\"\") # Left-justify disp_str to 10 places. Do not print the default newline\n\n print(\"\") # After the row is done, we can print nothing (plus the default newline)\n\nThis gives the output:\n 0. X 30. X 60. < 90. Z \n 1. X 31. X 61. = 91. [ \n 2. X 32. 62. > 92. \\ \n 3. X 33. ! 63. ? 93. ] \n 4. X 34. \" 64. @ 94. ^ \n 5. X 35. # 65. A 95. _ \n 6. X 36. $ 66. B 96. ` \n 7. X 37. % 67. C 97. a \n 8. X 38. & 68. D 98. b \n 9. X 39. ' 69. E 99. c \n 10. X 40. ( 70. F 100. d \n 11. X 41. ) 71. G 101. e \n 12. X 42. * 72. H 102. f \n...\n\n", "To print multiple characters on the same line you will need to either print multiple in one statement or alternatively change the default end \"\\n\" to prevent the newline after each print statement.\nThen you can perform a check to print a newline only after 4 characters have been printed.\nThe below code prints the characters left to right, and then down.\nfor i in range(1,121):\n a = chr(i)\n print(str(i)+\". \"+str(a), end=\" \")\n\n if i % 4 == 0:\n print()\n\nI believe you may see some formatting issues due to printing some ASCII characters, however.\n" ]
[ 2, 0 ]
[]
[]
[ "ascii", "python" ]
stackoverflow_0074483371_ascii_python.txt
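An alternative arrangement of the first answer's idea, a sketch that uses zip to transpose fixed-height column slices (same X placeholder for unprintable characters):
items = [f"{i:>3}. {chr(i) if chr(i).isprintable() else 'X'}" for i in range(1, 121)]
rows = 30
for line in zip(*(items[c*rows:(c+1)*rows] for c in range(4))):  # 4 columns of 30
    print("".join(f"{cell:<10}" for cell in line))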
Q: extract a specific table from web page I want to extract the first table of this page https://www.sec.gov/cgi-bin/own-disp?action=getissuer&CIK=1318605 For the second table of the page I use the id of the table url=f'https://www.sec.gov/cgi-bin/own-disp?action=getissuer&CIK=1318605' response = requests.get(url) web = response.content soup = BeautifulSoup(web, 'html.parser') transaction = soup.find('table', {'id':'transaction-report'}) report = pd.read_html(str(transaction))[0] for the first table I do not see such id easily usable How to extract the first one ? A: Try: import requests import pandas as pd from bs4 import BeautifulSoup url = "https://www.sec.gov/cgi-bin/own-disp?action=getissuer&CIK=1318605" soup = BeautifulSoup(requests.get(url).content, "html.parser") # select correct table table = soup.select_one("table:not(:has(table)):has(a:-soup-contains(Owner))") # make first row header (to have column names in Pandas) for td in table.tr.select("td"): td.name = "th" df = pd.read_html(str(table))[0] print(df) Prints: Owner Filings Transaction Date Type of Owner 0 Gebbia Joseph 1834171 2022-09-25 director 1 DENHOLM ROBYN M 1242782 2022-05-02 director 2 Taneja Vaibhav 1771340 2022-01-05 officer: Chief Accounting Officer 3 Musk Elon 1494730 2021-11-08 director, 10 percent owner, officer: CEO 4 Ehrenpreis Ira Matthew 1412598 2021-10-27 director 5 Baglino Andrew D 1790565 2020-06-05 officer: SVP Powertrain and Energy Eng. 6 Mizuno Hiromichi 1811230 2020-04-23 director 7 ELLISON LAWRENCE JOSEPH 901999 2020-02-14 director 8 Kirkhorn Zachary 1771364 2019-03-13 officer: Chief Financial Officer 9 Wilson-Thompson Kathleen 1331680 2018-12-27 director 10 MORTON DAVID H JR 1476070 2018-08-06 officer: Chief Accounting Officer 11 FIELD JOHN DOUGLAS 1650649 2017-11-02 officer: Senior VP, Engineering 12 McNeill Jon 1670512 2017-08-14 officer: President, WW Sales/Service 13 RICE LINDA JOHNSON 1188735 2017-07-17 director 14 MURDOCH JAMES R 1420590 2017-07-17 director 15 Branderiz Eric 1352816 2016-10-24 officer: VP, Chief Accounting Officer 16 Jason Wheeler S 1660228 2015-11-30 officer: Chief Financial Officer, other: Chief Financial Officer 17 Reichow Gregory 1584531 2015-11-06 officer: VP Manufacturing 18 Guillen Jerome M 1584518 2015-07-15 officer: VP Service and Sales Ops 19 Kroeger Harald 1565080 2012-12-12 director 20 WHITAKER ERIC S 1234046 2010-10-28 officer: General Counsel 21 Blankenship George 1503210 2010-10-06 other: VP Sales and Service 22 Jurvetson Stephen T 1314917 2010-06-28 director 23 Buss Brad W 1336664 2010-06-28 director 24 Straubel Jeffrey B 1494727 2010-06-28 officer: Chief Technology Officer 25 Walker John K 1494729 2010-06-28 officer: VP, No. Amer. Sales & Mktg 26 Musk Kimbal 1494731 2010-06-28 director 27 Ahuja Deepak 1494732 2010-06-28 officer: Chief Financial Officer 28 Passin Gilbert 1494806 2010-06-28 officer: Vice President, Manufacturing 29 Kohler Herbert 1495013 2010-06-28 director 30 Gracias Antonio J. 1495158 2010-06-28 director 31 Al Darmaki H.E. Ahmed Saif 1495205 2010-06-28 director
extract a specific table from web page
I want to extract the first table of this page https://www.sec.gov/cgi-bin/own-disp?action=getissuer&CIK=1318605 For the second table of the page I use the id of the table url=f'https://www.sec.gov/cgi-bin/own-disp?action=getissuer&CIK=1318605' response = requests.get(url) web = response.content soup = BeautifulSoup(web, 'html.parser') transaction = soup.find('table', {'id':'transaction-report'}) report = pd.read_html(str(transaction))[0] For the first table I do not see such an easily usable id. How can I extract the first one?
[ "Try:\nimport requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup\n\n\nurl = \"https://www.sec.gov/cgi-bin/own-disp?action=getissuer&CIK=1318605\"\nsoup = BeautifulSoup(requests.get(url).content, \"html.parser\")\n\n# select correct table\ntable = soup.select_one(\"table:not(:has(table)):has(a:-soup-contains(Owner))\")\n\n# make first row header (to have column names in Pandas)\nfor td in table.tr.select(\"td\"):\n td.name = \"th\"\n\ndf = pd.read_html(str(table))[0]\nprint(df)\n\nPrints:\n Owner Filings Transaction Date Type of Owner\n0 Gebbia Joseph 1834171 2022-09-25 director\n1 DENHOLM ROBYN M 1242782 2022-05-02 director\n2 Taneja Vaibhav 1771340 2022-01-05 officer: Chief Accounting Officer\n3 Musk Elon 1494730 2021-11-08 director, 10 percent owner, officer: CEO\n4 Ehrenpreis Ira Matthew 1412598 2021-10-27 director\n5 Baglino Andrew D 1790565 2020-06-05 officer: SVP Powertrain and Energy Eng.\n6 Mizuno Hiromichi 1811230 2020-04-23 director\n7 ELLISON LAWRENCE JOSEPH 901999 2020-02-14 director\n8 Kirkhorn Zachary 1771364 2019-03-13 officer: Chief Financial Officer\n9 Wilson-Thompson Kathleen 1331680 2018-12-27 director\n10 MORTON DAVID H JR 1476070 2018-08-06 officer: Chief Accounting Officer\n11 FIELD JOHN DOUGLAS 1650649 2017-11-02 officer: Senior VP, Engineering\n12 McNeill Jon 1670512 2017-08-14 officer: President, WW Sales/Service\n13 RICE LINDA JOHNSON 1188735 2017-07-17 director\n14 MURDOCH JAMES R 1420590 2017-07-17 director\n15 Branderiz Eric 1352816 2016-10-24 officer: VP, Chief Accounting Officer\n16 Jason Wheeler S 1660228 2015-11-30 officer: Chief Financial Officer, other: Chief Financial Officer\n17 Reichow Gregory 1584531 2015-11-06 officer: VP Manufacturing\n18 Guillen Jerome M 1584518 2015-07-15 officer: VP Service and Sales Ops\n19 Kroeger Harald 1565080 2012-12-12 director\n20 WHITAKER ERIC S 1234046 2010-10-28 officer: General Counsel\n21 Blankenship George 1503210 2010-10-06 other: VP Sales and Service\n22 Jurvetson Stephen T 1314917 2010-06-28 director\n23 Buss Brad W 1336664 2010-06-28 director\n24 Straubel Jeffrey B 1494727 2010-06-28 officer: Chief Technology Officer\n25 Walker John K 1494729 2010-06-28 officer: VP, No. Amer. Sales & Mktg\n26 Musk Kimbal 1494731 2010-06-28 director\n27 Ahuja Deepak 1494732 2010-06-28 officer: Chief Financial Officer\n28 Passin Gilbert 1494806 2010-06-28 officer: Vice President, Manufacturing\n29 Kohler Herbert 1495013 2010-06-28 director\n30 Gracias Antonio J. 1495158 2010-06-28 director\n31 Al Darmaki H.E. Ahmed Saif 1495205 2010-06-28 director\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "pandas", "python" ]
stackoverflow_0074483513_beautifulsoup_pandas_python.txt
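For the same page, a shorter route (a sketch, not taken from the thread) is to let pandas pick the table by its header text via read_html's match parameter; the User-Agent value below is an assumption, since SEC endpoints often reject requests that do not identify themselves. If nested layout tables also match, pick the right index from the returned list.

import requests
import pandas as pd

url = "https://www.sec.gov/cgi-bin/own-disp?action=getissuer&CIK=1318605"
# Hypothetical identification string; adjust to your own contact details.
headers = {"User-Agent": "research-script example@example.com"}
html = requests.get(url, headers=headers).text

# read_html keeps only tables whose text matches the regex, so the
# owners table can be selected without knowing an id; header=0 promotes
# the first row to column names.
df = pd.read_html(html, match="Type of Owner", header=0)[0]
print(df.head())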
Q: Looping through dataframe rows and compare them I am working with a huge dataframe and I want to loop through rows and compare them. If Value and DATE are the same in different rows I would like to merge them doing some statistics, eg, minimum of minimums etc.. Value MIN MAX MEAN STD DATE 0 -2460 -454 -1413.1 254.8 20181223 1 -2361 619 -1348.3 443.0 20181223 0 -2677 -483 -1626.3 258.8 20181227 1 -2629 256 -1477.5 378.0 20181227 2 -2682 598 -1486.0 319.4 20181227 Any ideas? A: This is pretty easy. df = pandas.read_clipboard(sep='\\s+') df df.groupby(['Value'])['MIN'].min() df.groupby(['Value'])[['MIN','MAX','MEAN','STD']].min()
Looping through dataframe rows and compare them
I am working with a huge dataframe and I want to loop through rows and compare them. If Value and DATE are the same in different rows, I would like to merge them by computing some statistics, e.g., minimum of minimums, etc. Value MIN MAX MEAN STD DATE 0 -2460 -454 -1413.1 254.8 20181223 1 -2361 619 -1348.3 443.0 20181223 0 -2677 -483 -1626.3 258.8 20181227 1 -2629 256 -1477.5 378.0 20181227 2 -2682 598 -1486.0 319.4 20181227 Any ideas?
[ "This is pretty easy.\ndf = pandas.read_clipboard(sep='\\\\s+')\ndf\n\ndf.groupby(['Value'])['MIN'].min()\n\n\ndf.groupby(['Value'])[['MIN','MAX','MEAN','STD']].min()\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "if_statement", "loops", "python" ]
stackoverflow_0061122128_dataframe_if_statement_loops_python.txt
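Since the question asks to merge rows where both Value and DATE agree, here is a sketch that extends the answer (column names come from the sample frame; min-of-mins and max-of-maxes follow the question, while the aggregates chosen for MEAN and STD are assumptions):

import pandas as pd

# df as built in the question
out = (df.groupby(["Value", "DATE"])
         .agg(MIN=("MIN", "min"),
              MAX=("MAX", "max"),
              MEAN=("MEAN", "mean"),
              STD=("STD", "mean"))
         .reset_index())
print(out)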
Q: jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 553305856 bytes. BufferAssignment OOM I'm getting this error when running a jax script on multiple GPU. jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 553305856 bytes. BufferAssignment OOM Are there things I can do to solve this? A: This seems to have worked for me. os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"]="false" os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"]=".XX" os.environ["XLA_PYTHON_CLIENT_ALLOCATOR"]="platform" https://jax.readthedocs.io/en/latest/gpu_memory_allocation.html
jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 553305856 bytes. BufferAssignment OOM
I'm getting this error when running a JAX script on multiple GPUs. jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 553305856 bytes. BufferAssignment OOM Are there things I can do to solve this?
[ "This seems to have worked for me.\nos.environ[\"XLA_PYTHON_CLIENT_PREALLOCATE\"]=\"false\"\nos.environ[\"XLA_PYTHON_CLIENT_MEM_FRACTION\"]=\".XX\"\nos.environ[\"XLA_PYTHON_CLIENT_ALLOCATOR\"]=\"platform\"\n\nhttps://jax.readthedocs.io/en/latest/gpu_memory_allocation.html\n" ]
[ 0 ]
[]
[]
[ "gpu", "jax", "python", "tensorflow" ]
stackoverflow_0074143812_gpu_jax_python_tensorflow.txt
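One caveat worth adding to the answer: JAX reads these variables when it first initializes its GPU backend, so in practice they should be set at the top of the script, before importing or touching jax. A minimal sketch (the 0.5 fraction is only an example value):

import os

# Set before any JAX import/work so the allocator sees them.
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.5"

import jax
import jax.numpy as jnp

print(jax.devices())
print(jnp.ones(3))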
Q: How can I artificially nest schemas in Marshmallow? In Marshmallow, is there a way to pass the current object to a Nested field in order to produce artificially nested serializations? For example, consider this object that I'm serializing: example = Example( name="Foo", address="301 Elm Street", city="Kalamazoo", state="MI", ) I want to produce JSON for this that looks like this: { "name": "Foo", "address": { "street": "301 Elm Street", "city": "Kalamazoo", "state": "MI" } } Essentially, this would be a nested AddressSchema inside the ExampleSchema, something like this: class AddressSchema: street = fields.String(attribute="address") city = fields.String() state = fields.String() class ExampleSchema: name = fields.String() address = fields.Nested(AddressSchema) ...but that doesn't quite do what I'd like. I can use a custom function, but I'd like to use a built-in method if possible. A: I managed to figure out a solution that allows me to preserve introspection and use only built-in fields; it's a little odd, though. I modified ExampleSchema to include a @pre_dump hook that adds a self-referential attribute, and pointed the field at that: class ExampleSchema: name = fields.String() address = fields.Nested( AddressSchema, attribute="_marshmallow_self_reference" ) @pre_dump def add_self_reference(self, data): setattr(data, "_marshmallow_self_reference", data) A: You can define the address field as follows address = fields.Function(lambda x: {'street': x.street, 'city': x.city, 'state': x.state}) Credit goes to this post: Nesting in Python and Marshmallow with a field that does not exist in the database A: I couldn't get the accepted answer to work. I wanted to nest some members as as a "preferences" schema inside the account. class Color: id = Column(UUID) name = Column(Unicode(100)) class Account: id = Column(UUID()) name = Column(Integer()) age = Column(Integer()) # These are the account's "preferences", which are stored # directly in the account model. color_ids = Column(ARRAY(UUID)) colors = relationship("Color", primaryjoin... etc) opt_in = Column(Boolean()) I used marshmallow's pre_dump decorator to call a function that sets a preferences object onto the model: class ColorSchema(SQLAlchemySchema): class Meta: model = models.Color fields = [ "id", "name" ] # Subset of Account that I want returned as a nested entry class PreferencesSchema(SQLAlchemySchema): class Meta: model = models.Account fields = [ "colors", "opt_in" ] colors = marshmallow.fields.List(marshmallow.fields.Nested(ColorSchema)) class AccountSchema(SQLAlchemySchema): class Meta: model = models.Account fields = [ "id", "name", "age", "preferences" # Note: does not exist on model ] preferences = fields.Nested(PreferencesSchema) @pre_dump def set_preferences(self, data, **kwargs): preferences = { "colors": data.colors, "opt_in": data.opt_in, } setattr(data, "preferences", preferences) return data Now the response would be: { "id": "uuid...", "name": "Joe", "age": 25, "preferences": { "colors": [ { "id": "uuid...", "name": "blue" }, { "id": "uuid...", "name": "red" }, ], "opt_in": false } }
How can I artificially nest schemas in Marshmallow?
In Marshmallow, is there a way to pass the current object to a Nested field in order to produce artificially nested serializations? For example, consider this object that I'm serializing: example = Example( name="Foo", address="301 Elm Street", city="Kalamazoo", state="MI", ) I want to produce JSON for this that looks like this: { "name": "Foo", "address": { "street": "301 Elm Street", "city": "Kalamazoo", "state": "MI" } } Essentially, this would be a nested AddressSchema inside the ExampleSchema, something like this: class AddressSchema: street = fields.String(attribute="address") city = fields.String() state = fields.String() class ExampleSchema: name = fields.String() address = fields.Nested(AddressSchema) ...but that doesn't quite do what I'd like. I can use a custom function, but I'd like to use a built-in method if possible.
[ "I managed to figure out a solution that allows me to preserve introspection and use only built-in fields; it's a little odd, though. I modified ExampleSchema to include a @pre_dump hook that adds a self-referential attribute, and pointed the field at that:\nclass ExampleSchema:\n name = fields.String()\n address = fields.Nested(\n AddressSchema, attribute=\"_marshmallow_self_reference\"\n )\n\n @pre_dump\n def add_self_reference(self, data):\n setattr(data, \"_marshmallow_self_reference\", data)\n\n", "You can define the address field as follows\naddress = fields.Function(lambda x: {'street': x.street, 'city': x.city, 'state': x.state})\n\nCredit goes to this post: Nesting in Python and Marshmallow with a field that does not exist in the database\n", "I couldn't get the accepted answer to work. I wanted to nest some members as as a \"preferences\" schema inside the account.\nclass Color:\n id = Column(UUID)\n name = Column(Unicode(100))\n\nclass Account:\n id = Column(UUID())\n name = Column(Integer())\n age = Column(Integer())\n\n # These are the account's \"preferences\", which are stored\n # directly in the account model.\n color_ids = Column(ARRAY(UUID))\n colors = relationship(\"Color\", primaryjoin... etc)\n opt_in = Column(Boolean())\n\nI used marshmallow's pre_dump decorator to call a function that sets a preferences object onto the model:\nclass ColorSchema(SQLAlchemySchema):\n class Meta:\n model = models.Color\n fields = [ \"id\", \"name\" ]\n\n# Subset of Account that I want returned as a nested entry\nclass PreferencesSchema(SQLAlchemySchema):\n class Meta:\n model = models.Account\n fields = [ \"colors\", \"opt_in\" ]\n\n colors = marshmallow.fields.List(marshmallow.fields.Nested(ColorSchema))\n\nclass AccountSchema(SQLAlchemySchema):\n class Meta:\n model = models.Account\n fields = [\n \"id\",\n \"name\",\n \"age\",\n \"preferences\" # Note: does not exist on model\n ]\n preferences = fields.Nested(PreferencesSchema)\n\n @pre_dump\n def set_preferences(self, data, **kwargs):\n preferences = {\n \"colors\": data.colors,\n \"opt_in\": data.opt_in,\n }\n setattr(data, \"preferences\", preferences)\n return data\n\nNow the response would be:\n{\n \"id\": \"uuid...\",\n \"name\": \"Joe\",\n \"age\": 25,\n \"preferences\": {\n \"colors\": [\n { \"id\": \"uuid...\", \"name\": \"blue\" },\n { \"id\": \"uuid...\", \"name\": \"red\" },\n ],\n \"opt_in\": false\n }\n }\n\n" ]
[ 4, 1, 0 ]
[]
[]
[ "marshmallow", "python", "serialization" ]
stackoverflow_0051951669_marshmallow_python_serialization.txt
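One caveat on the accepted pattern: in marshmallow 3, @pre_dump methods must return the (possibly modified) data, otherwise the schema ends up dumping None. A corrected minimal sketch, assuming marshmallow 3 and the Example object from the question:

from marshmallow import Schema, fields, pre_dump

class AddressSchema(Schema):
    street = fields.String(attribute="address")
    city = fields.String()
    state = fields.String()

class ExampleSchema(Schema):
    name = fields.String()
    address = fields.Nested(AddressSchema, attribute="_marshmallow_self_reference")

    @pre_dump
    def add_self_reference(self, data, **kwargs):
        # Point the nested schema back at the same object.
        setattr(data, "_marshmallow_self_reference", data)
        return data  # required in marshmallow 3

print(ExampleSchema().dump(example))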
Q: How to extract the text from (bs4.element.Tag) How can I extract the "Data Engineer" text from <a class="jobTitle-link" href="/job/Data-Engineer/861664201/">Data Engineer</a> Sample Code should be fine. A: const text = document.querySelector(".jobTitle-link").innerText; console.log(text); <a class="jobTitle-link" href="/job/Data-Engineer/861664201/">Data Engineer</a>
How to extract the text from (bs4.element.Tag)
How can I extract the "Data Engineer" text from <a class="jobTitle-link" href="/job/Data-Engineer/861664201/">Data Engineer</a>? Sample code would be fine.
[ "\n\nconst text = document.querySelector(\".jobTitle-link\").innerText;\nconsole.log(text);\n<a class=\"jobTitle-link\" href=\"/job/Data-Engineer/861664201/\">Data Engineer</a>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "alfresco_webscripts", "html", "jupyter", "python", "web_scraping" ]
stackoverflow_0074483323_alfresco_webscripts_html_jupyter_python_web_scraping.txt
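The recorded answer is JavaScript, but the question is about a Python bs4.element.Tag, so here is a minimal BeautifulSoup sketch:

from bs4 import BeautifulSoup

html = '<a class="jobTitle-link" href="/job/Data-Engineer/861664201/">Data Engineer</a>'
soup = BeautifulSoup(html, "html.parser")

tag = soup.find("a", class_="jobTitle-link")
print(tag.get_text())  # Data Engineer
print(tag.text)        # equivalent shorthand
print(tag["href"])     # the attribute, if needed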
Q: Adding input variables to plot title/legend in Python I would like to display the current value of a parameter used to plot a certain function in the plot title/legend/annotated text. As a simple example, let's take a straight line: import numpy import matplotlib.pyplot as plt def line(m,c): x = numpy.linspace(0,1) y = m*x+c plt.plot(x,y) plt.text(0.1, 2.8, "The gradient is" *the current m-value should go here*) plt.show() print line(1.0, 2.0) In this case, I would like my text to say "The gradient is 1.0", but I'm not sure what the syntax is. Moreover, how would I include the second (and more) parameter(s) below, so that it reads: "The gradient is 1.0 The intercept is 2.0." A: Use string formatting with the .format() method: plt.text(0.1, 2.8, "The gradient is {}, the intercept is {}".format(m, c)) Where m and c are the variables you want to substitute in. You can directly write the variables like this in Python 3.6+ if you prefix the string with an f whcih denotes a formatted string literal: f"the gradient is {m}, the intercept is {c}" A: In python 3.6+ you can do it by prefixing the string with f, and putting the variable in curly brackets. For earlier python version there have been various ways of doing it, look up string formatting message = f"The slope is {m}" plt.text(message) (by the way, gradient is usually called slope when referring to single variable linear equation) A: The other answers didn't work for my code, but adaption of it did. Shown below: Showing y = m*x + c printed on the plot in log format. a1 = coefs[0] # variable 1 a2 = coefs[1] # variable 2 message = f"log(L/Lo) = {a1} * log(M/Mo) + {a2}" # Define axes left = 0.01 width = 0.9 bottom = 0.01 height = 0.9 right = left + width top = bottom + height ax = plt.gca() # Transform axes ax.set_transform(ax.transAxes) # Define text ax.text(0.5 * (left + right), 0.5 * (bottom + top), message, horizontalalignment='center', verticalalignment='center', size= 10, color='r', transform=ax.transAxes) plt.show() Using code from @ https://pythonguides.com/add-text-to-plot-matplotlib/
Adding input variables to plot title/legend in Python
I would like to display the current value of a parameter used to plot a certain function in the plot title/legend/annotated text. As a simple example, let's take a straight line: import numpy import matplotlib.pyplot as plt def line(m,c): x = numpy.linspace(0,1) y = m*x+c plt.plot(x,y) plt.text(0.1, 2.8, "The gradient is" *the current m-value should go here*) plt.show() print line(1.0, 2.0) In this case, I would like my text to say "The gradient is 1.0", but I'm not sure what the syntax is. Moreover, how would I include the second (and more) parameter(s) below, so that it reads: "The gradient is 1.0 The intercept is 2.0."
[ "Use string formatting with the .format() method:\nplt.text(0.1, 2.8, \"The gradient is {}, the intercept is {}\".format(m, c))\n\nWhere m and c are the variables you want to substitute in. \nYou can directly write the variables like this in Python 3.6+ if you prefix the string with an f whcih denotes a formatted string literal:\nf\"the gradient is {m}, the intercept is {c}\"\n\n", "In python 3.6+ you can do it by prefixing the string with f, and putting the variable in curly brackets. For earlier python version there have been various ways of doing it, look up string formatting\nmessage = f\"The slope is {m}\"\nplt.text(message)\n\n(by the way, gradient is usually called slope when referring to single variable linear equation)\n", "The other answers didn't work for my code, but adaption of it did. Shown below:\nShowing y = m*x + c printed on the plot in log format.\na1 = coefs[0] # variable 1\na2 = coefs[1] # variable 2\n\nmessage = f\"log(L/Lo) = {a1} * log(M/Mo) + {a2}\"\n\n# Define axes\nleft = 0.01\nwidth = 0.9\nbottom = 0.01\nheight = 0.9\nright = left + width\ntop = bottom + height\nax = plt.gca()\n\n# Transform axes\nax.set_transform(ax.transAxes)\n\n# Define text\nax.text(0.5 * (left + right), 0.5 * (bottom + top), message,\n horizontalalignment='center',\n verticalalignment='center',\n size= 10,\n color='r',\n transform=ax.transAxes)\n\nplt.show()\n\nUsing code from @ https://pythonguides.com/add-text-to-plot-matplotlib/\n\n" ]
[ 8, 3, 0 ]
[]
[]
[ "legend", "matplotlib", "python", "text" ]
stackoverflow_0051812323_legend_matplotlib_python_text.txt
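Pulling the answers together, a runnable sketch that puts both parameters in the title and in a two-line annotation (the :.1f format specs are illustrative choices):

import numpy as np
import matplotlib.pyplot as plt

def line(m, c):
    x = np.linspace(0, 1)
    y = m * x + c
    plt.plot(x, y)
    plt.title(f"y = {m:.1f}x + {c:.1f}")
    # The current parameter values, formatted into the annotation
    plt.text(0.1, 2.8, f"The gradient is {m}\nThe intercept is {c}")
    plt.show()

line(1.0, 2.0)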
Q: Trying to concatenate a string with a int but the min() command is there and is causing mayhem Im trying to do something for a school project and have the code ask the users for some numbers then print the smallest from the bunch.The main issue with this is that i have to put a string with the print so that the grading system gives a 100.Im not sure on how to do that with my knowledge.Here is my code- num1=int(input("Enter a number: ")) num2=int(input("Enter a number: ")) num3=int(input("Enter a number: ")) print(min("Smallest:", num1 , num2 , num3)) and the error message- Traceback (most recent call last): File "<string>", line 4, in <module> TypeError: '<' not supported between instances of 'int' and 'str' I have tried making the variables strings like such- num1=int(input("Enter a number: ")) num2=int(input("Enter a number: ")) num3=int(input("Enter a number: ")) print(min("Smallest:", str(num1 , num2 , num3))) and even just having the str() command with each variable but it doesn't like my attempt to fix it. A: Hello @NindeBonic in order to show the Smallest number, you need to delete the "Smallest" string that you are trying to concat, instead use: num1 = int(input("Enter a number: ")) num2 = int(input("Enter a number: ")) num3 = int(input("Enter a number: ")) # print the minumum of the three numbers print("Smallest: ", min(num1, num2, num3))
Trying to concatenate a string with a int but the min() command is there and is causing mayhem
I'm trying to do something for a school project: have the code ask the user for some numbers, then print the smallest of the bunch. The main issue is that I have to put a string in the print so that the grading system gives a 100. I'm not sure how to do that with my knowledge. Here is my code- num1=int(input("Enter a number: ")) num2=int(input("Enter a number: ")) num3=int(input("Enter a number: ")) print(min("Smallest:", num1 , num2 , num3)) and the error message- Traceback (most recent call last): File "<string>", line 4, in <module> TypeError: '<' not supported between instances of 'int' and 'str' I have tried making the variables strings like such- num1=int(input("Enter a number: ")) num2=int(input("Enter a number: ")) num3=int(input("Enter a number: ")) print(min("Smallest:", str(num1 , num2 , num3))) and even just wrapping each variable in str(), but it doesn't like my attempt to fix it.
[ "Hello @NindeBonic in order to show the Smallest number, you need to delete the \"Smallest\" string that you are trying to concat, instead use:\nnum1 = int(input(\"Enter a number: \"))\nnum2 = int(input(\"Enter a number: \"))\nnum3 = int(input(\"Enter a number: \"))\n\n# print the minumum of the three numbers\nprint(\"Smallest: \", min(num1, num2, num3))\n\n\n" ]
[ 1 ]
[ "num1=int(input(\"Enter a number: \"))\nnum2=int(input(\"Enter a number: \"))\nnum3=int(input(\"Enter a number: \"))\nprint(\"Smallest: \" + str(min(num1 , num2 , num3)))\n\n", "You have the \"min\" in the wrong spot, it should be after the text:\nprint(\"Smallest:\", min(num1 , num2 , num3))\n" ]
[ -1, -2 ]
[ "python" ]
stackoverflow_0074483465_python.txt
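A variant that scales past three inputs, should the exercise grow (a sketch, not something the grader requires):

nums = [int(input("Enter a number: ")) for _ in range(3)]
print(f"Smallest: {min(nums)}")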
Q: I want to know python gekko optimization solve I am currently using Python. However, I am struggling with one error. This is the tool I have made so far. from gekko import GEKKO import numpy as np m = GEKKO(remote=False) m.options.SOLVER = 1 hour = 24 Num_EV = 1 p_i =m.Array(m.Var,(hour,Num_EV)) TOU = [64.9,64.9,64.9,64.9,64.9,64.9,64.9,64.9,152.6,239.8, 239.8,152.6,239.8,239.8,239.8,239.8,152.6,152.6, 152.6,152.6,152.6,152.6,152.6,64.9] n=len(TOU) inp = m.Array(m.Var, (n), value=0.0, lb=0.0, ub=7.0, integer=True) # EV min/Max setting for tt in range(0,hour): p_i[tt,0].lower = 30 p_i[tt,0].upper = 70 # EV Charger min/Max setting Num_EV_C = 1 p_j = m.Array(m.Var, (hour, Num_EV_C)) for tt in range(0,hour): p_j[tt,0].lower = 0 p_j[tt,0].upper = 7 # s.t : EV SOC p_i[0,0] = 30 # inital EV SOC eq_EV_SOC = np.zeros((hour,1)) eq_EV_SOC = list(eq_EV_SOC) for tt in range(0,hour): for i in range(0,Num_EV): eq_EV_SOC[tt] = p_i[tt-1,i] + p_i[tt,i] == p_i[tt,0] m.Equation(eq_EV_SOC) # s.t : EV charging rate p_j[0,0] = 0 eq_EV_C = np.zeros((hour,1)) eq_EV_C = list(eq_EV_C) for tt in range(0,hour): for i in range(0,Num_EV_C): eq_EV_C[tt] = p_j[tt,0] >= p_j[tt,i] m.Equation(eq_EV_C) # Object Function : sum[i=n]*sum[t=T]() F = np.zeros((hour*Num_EV)) F = F.tolist() for tt in range(0,hour): for i in range(0,Num_EV): F[i+tt*Num_EV] = p_i[tt,i] * p_j[tt,i] F_Obj = m.sum(F) m.Minimize(F_Obj) m.solve(disp=True) Exception: @error: Equation Definition Equation without an equality (=) or inequality (>,<) true STOPPING... I want to know this problem. Below is a description of constraints and objective functions. s.t is constraint. First constraint is EV SOC range. EV SOC minimum is 30 and Maxmium is 70. EV SOC form is (inital SOC + time by EV SOC). Second constraint is EV Charging range. EV Charging range is from 0 to 7. Finally, Object function is to minimize the product of tou and charging rate. A: There are a few problems with the model that can be observed by opening the model file in the run directory. Use m.open_folder() and open the gk_model0.apm file with a text editor. Here are some of the equations that indicate that there is a problem with the formulation: True v50>=v50 v51>=v51 v52>=v52 v53>=v53 The True expression is because a constant is evaluated with another constant in the first cycle of: for tt in range(0,hour): for i in range(0,Num_EV_C): eq_EV_C[tt] = p_j[tt,0] >= p_j[tt,i] This gives a Boolean True result. The initial EV SOC should either be changed to fixed or else include a simple equation: # s.t : EV SOC m.Equation(p_i[0,0]== 30) # inital EV SOC # s.t : EV charging rate m.Equation(p_j[0,0]==0) It appears that the charging rate should decrease over time with this constraint: m.Equation(p_j[0,0]==0) for tt in range(0,hour): for i in range(1,Num_EV_C): m.Equation(p_j[tt,0] >= p_j[tt,i]) The index is changed to p_j[tt,0] >= p_j[tt,i] so that p_j[tt,0] >= p_j[tt,0] is not included as an equation. Should the range for time also be adjusted here to start at 1? for tt in range(1,hour): for i in range(0,Num_EV): m.Equation(p_i[tt-1,i] + p_i[tt,i] == p_i[tt,0]) The problem is currently infeasible, even with these corrections. 
Maybe this problem can help: from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO() m.options.SOLVER = 1 m.options.IMODE = 3 Num_car = 1 TOU = [64.9,64.9,64.9,64.9,64.9,64.9,64.9,64.9,152.6,239.8, 239.8,152.6,239.8,239.8,239.8,239.8,152.6,152.6, 152.6,152.6,152.6,152.6,152.6,64.9] n=len(TOU) inp = m.Array(m.Var, (n), value = 0.0, lb = 0.0, ub = 7.0, integer = True) SOC_Min = 30; SOC_Max = 90 # set bounds 30-90 SOC_t = m.Array(m.Var,(n, Num_car),lb=SOC_Min,ub=SOC_Max) # set new bounds 30-70 for tt in range(0,n): for j in range(Num_car): SOC_t[tt,j].lower = 30 SOC_t[tt,j].upper = 70 for j in range(Num_car): # initial SOC m.Equation(SOC_t[0,j]==30) # initial charge at start m.Equation(SOC_t[n-1,j]==70) # desired charge at end for tt in range(1,n): m.Equation(SOC_t[tt,j] == SOC_t[tt-1,j] + inp[tt]) for tt in range(n): m.Minimize(TOU[tt]*inp[tt]) m.options.IMODE = 3 m.options.SOLVER = 1 m.solve(disp=True) plt.figure(figsize=(8,5)) plt.subplot(3,1,1) for j in range(Num_car): p = np.empty(n) for tt in range(n): p[tt] = SOC_t[tt,j].value[0] plt.plot(p,'r.-',label='vehicle '+str(j+1)) plt.legend(); plt.ylabel('SOC'); plt.grid() plt.subplot(3,1,2) p = np.empty(n) for tt in range(n): p[tt] = inp[tt].value[0] plt.plot(p,'ko-',label='charge rate') plt.legend(); plt.ylabel('charge'); plt.grid() plt.subplot(3,1,3) plt.plot(TOU,'bs-',label='electricity price') plt.ylabel('price'); plt.grid() plt.legend(); plt.xlabel('Time (hr)') plt.tight_layout() plt.savefig('soc_results.png',dpi=300) plt.show() This is a solution to this question: How to optimize the electric vehicle charging cost using Gekko? It looks like you may be working on a similar problem. Here are additional similar questions: How to model a time-dependent constraint in Gekko? GEKKO RTO vs MPC MODES Mixed-Integer Model Predictive Control using Gekko Variable bounds in MPC with GEKKO
I want to know python gekko optimization solve
I am currently using Python. However, I am struggling with one error. This is the tool I have made so far. from gekko import GEKKO import numpy as np m = GEKKO(remote=False) m.options.SOLVER = 1 hour = 24 Num_EV = 1 p_i =m.Array(m.Var,(hour,Num_EV)) TOU = [64.9,64.9,64.9,64.9,64.9,64.9,64.9,64.9,152.6,239.8, 239.8,152.6,239.8,239.8,239.8,239.8,152.6,152.6, 152.6,152.6,152.6,152.6,152.6,64.9] n=len(TOU) inp = m.Array(m.Var, (n), value=0.0, lb=0.0, ub=7.0, integer=True) # EV min/Max setting for tt in range(0,hour): p_i[tt,0].lower = 30 p_i[tt,0].upper = 70 # EV Charger min/Max setting Num_EV_C = 1 p_j = m.Array(m.Var, (hour, Num_EV_C)) for tt in range(0,hour): p_j[tt,0].lower = 0 p_j[tt,0].upper = 7 # s.t : EV SOC p_i[0,0] = 30 # inital EV SOC eq_EV_SOC = np.zeros((hour,1)) eq_EV_SOC = list(eq_EV_SOC) for tt in range(0,hour): for i in range(0,Num_EV): eq_EV_SOC[tt] = p_i[tt-1,i] + p_i[tt,i] == p_i[tt,0] m.Equation(eq_EV_SOC) # s.t : EV charging rate p_j[0,0] = 0 eq_EV_C = np.zeros((hour,1)) eq_EV_C = list(eq_EV_C) for tt in range(0,hour): for i in range(0,Num_EV_C): eq_EV_C[tt] = p_j[tt,0] >= p_j[tt,i] m.Equation(eq_EV_C) # Object Function : sum[i=n]*sum[t=T]() F = np.zeros((hour*Num_EV)) F = F.tolist() for tt in range(0,hour): for i in range(0,Num_EV): F[i+tt*Num_EV] = p_i[tt,i] * p_j[tt,i] F_Obj = m.sum(F) m.Minimize(F_Obj) m.solve(disp=True) Exception: @error: Equation Definition Equation without an equality (=) or inequality (>,<) true STOPPING... I want to know this problem. Below is a description of constraints and objective functions. s.t is constraint. First constraint is EV SOC range. EV SOC minimum is 30 and Maxmium is 70. EV SOC form is (inital SOC + time by EV SOC). Second constraint is EV Charging range. EV Charging range is from 0 to 7. Finally, Object function is to minimize the product of tou and charging rate.
[ "There are a few problems with the model that can be observed by opening the model file in the run directory. Use m.open_folder() and open the gk_model0.apm file with a text editor. Here are some of the equations that indicate that there is a problem with the formulation:\nTrue\nv50>=v50\nv51>=v51\nv52>=v52\nv53>=v53\n\nThe True expression is because a constant is evaluated with another constant in the first cycle of:\nfor tt in range(0,hour): \n for i in range(0,Num_EV_C):\n eq_EV_C[tt] = p_j[tt,0] >= p_j[tt,i]\n\nThis gives a Boolean True result.\nThe initial EV SOC should either be changed to fixed or else include a simple equation:\n# s.t : EV SOC\nm.Equation(p_i[0,0]== 30) # inital EV SOC\n\n# s.t : EV charging rate\nm.Equation(p_j[0,0]==0)\n\nIt appears that the charging rate should decrease over time with this constraint:\nm.Equation(p_j[0,0]==0)\nfor tt in range(0,hour): \n for i in range(1,Num_EV_C):\n m.Equation(p_j[tt,0] >= p_j[tt,i])\n\nThe index is changed to p_j[tt,0] >= p_j[tt,i] so that p_j[tt,0] >= p_j[tt,0] is not included as an equation. Should the range for time also be adjusted here to start at 1?\nfor tt in range(1,hour): \n for i in range(0,Num_EV):\n m.Equation(p_i[tt-1,i] + p_i[tt,i] == p_i[tt,0])\n\nThe problem is currently infeasible, even with these corrections. Maybe this problem can help:\nfrom gekko import GEKKO\nimport numpy as np \nimport matplotlib.pyplot as plt \n\nm = GEKKO() \nm.options.SOLVER = 1 \nm.options.IMODE = 3\n\nNum_car = 1\nTOU = [64.9,64.9,64.9,64.9,64.9,64.9,64.9,64.9,152.6,239.8,\n 239.8,152.6,239.8,239.8,239.8,239.8,152.6,152.6,\n 152.6,152.6,152.6,152.6,152.6,64.9]\nn=len(TOU)\ninp = m.Array(m.Var, (n), value = 0.0, \n lb = 0.0, ub = 7.0, integer = True)\n\nSOC_Min = 30; SOC_Max = 90\n# set bounds 30-90\nSOC_t = m.Array(m.Var,(n, Num_car),lb=SOC_Min,ub=SOC_Max)\n\n# set new bounds 30-70\nfor tt in range(0,n):\n for j in range(Num_car):\n SOC_t[tt,j].lower = 30 \n SOC_t[tt,j].upper = 70 \n\nfor j in range(Num_car):\n # initial SOC\n m.Equation(SOC_t[0,j]==30) # initial charge at start\n m.Equation(SOC_t[n-1,j]==70) # desired charge at end\n for tt in range(1,n):\n m.Equation(SOC_t[tt,j] == SOC_t[tt-1,j] + inp[tt])\n\nfor tt in range(n):\n m.Minimize(TOU[tt]*inp[tt])\n\nm.options.IMODE = 3\nm.options.SOLVER = 1\nm.solve(disp=True)\n\nplt.figure(figsize=(8,5))\nplt.subplot(3,1,1)\nfor j in range(Num_car):\n p = np.empty(n)\n for tt in range(n):\n p[tt] = SOC_t[tt,j].value[0]\n plt.plot(p,'r.-',label='vehicle '+str(j+1))\nplt.legend(); plt.ylabel('SOC'); plt.grid()\n\nplt.subplot(3,1,2)\np = np.empty(n)\nfor tt in range(n):\n p[tt] = inp[tt].value[0]\nplt.plot(p,'ko-',label='charge rate')\nplt.legend(); plt.ylabel('charge'); plt.grid()\n\nplt.subplot(3,1,3)\nplt.plot(TOU,'bs-',label='electricity price')\nplt.ylabel('price'); plt.grid()\nplt.legend(); plt.xlabel('Time (hr)')\nplt.tight_layout()\nplt.savefig('soc_results.png',dpi=300)\nplt.show()\n\nThis is a solution to this question: How to optimize the electric vehicle charging cost using Gekko? It looks like you may be working on a similar problem.\nHere are additional similar questions:\n\nHow to model a time-dependent constraint in Gekko?\nGEKKO RTO vs MPC MODES\nMixed-Integer Model Predictive Control using Gekko\nVariable bounds in MPC with GEKKO\n\n" ]
[ 0 ]
[]
[]
[ "gekko", "nonlinear_optimization", "optimization", "python" ]
stackoverflow_0074416838_gekko_nonlinear_optimization_optimization_python.txt
Q: How can I visualize single image with only one convolutional layer and one pooling layers? I wrote this sample code to show only a single image after passing it to my model. The model should have only one convolutional layer and one pooling layer. Or in another way, how can I visualize a single image by passing it to a simple neural network that has one convolutional and one pooling layer? import torch import torch.nn as nn #creating neural network from PIL import Image from numpy import asarray # Set up GPU device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # load the image image = Image.open('./img.png') # convert image to numpy array data = asarray(image) print(type(data)) print(data.shape) now building the arch. class ConvNet(nn.Module): def __init__(self): super().__init__() #convolutional layer self.layer = nn.Sequential( nn.Conv2d(in_channels=3, out_channels=3, kernel_size=2, stride=1, padding=0), nn.MaxPool2d(kernel_size=2, stride=2)) def forward(self, x): out = self.layer(x) return out convnet = ConvNet().to(device) #set up for GPU if available convnet pass image to my model outputs = convnet(data) imshow(outputs) got the error below TypeError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_3184/1768392595.py in <module> ----> 1 outputs = convnet(data) 2 imshow(outputs) TypeError: conv2d() received an invalid combination of arguments - got (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of: * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int) * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int) I expect to show image after passed during this sample network A: The input to the CNN needs to be of the torch.Tensor type. You can do this by applying the transform directly on the PIL image, as: data = torchvision.transforms.functional.to_tensor(image) or transform = torchvision.transforms.ToTensor() # can be composed with other transforms if necessary data = transform(image) A: as GoodDeeds mentioned, CNN expects the data to be of type Tensor you have read the image using PIL and then converted it to NumPy array, you will need to convert the NumPy array to Tensor using torch.from_numpy(data) Below code will solve the issue import torch import torch.nn as nn #creating neural network from PIL import Image from numpy import asarray #Set up GPU device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") Her I am loading my image # load the image image = Image.open('./img.png') # convert image to numpy array data = asarray(image) data=torch.from_numpy(data) print(type(data)) print(data.shape)`
How can I visualize single image with only one convolutional layer and one pooling layers?
I wrote this sample code to show only a single image after passing it to my model. The model should have only one convolutional layer and one pooling layer. Or in another way, how can I visualize a single image by passing it to a simple neural network that has one convolutional and one pooling layer? import torch import torch.nn as nn #creating neural network from PIL import Image from numpy import asarray # Set up GPU device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # load the image image = Image.open('./img.png') # convert image to numpy array data = asarray(image) print(type(data)) print(data.shape) now building the arch. class ConvNet(nn.Module): def __init__(self): super().__init__() #convolutional layer self.layer = nn.Sequential( nn.Conv2d(in_channels=3, out_channels=3, kernel_size=2, stride=1, padding=0), nn.MaxPool2d(kernel_size=2, stride=2)) def forward(self, x): out = self.layer(x) return out convnet = ConvNet().to(device) #set up for GPU if available convnet pass image to my model outputs = convnet(data) imshow(outputs) got the error below TypeError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_3184/1768392595.py in <module> ----> 1 outputs = convnet(data) 2 imshow(outputs) TypeError: conv2d() received an invalid combination of arguments - got (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of: * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int) * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int) I expect to show image after passed during this sample network
[ "The input to the CNN needs to be of the torch.Tensor type. You can do this by applying the transform directly on the PIL image, as:\ndata = torchvision.transforms.functional.to_tensor(image)\n\nor\ntransform = torchvision.transforms.ToTensor() # can be composed with other transforms if necessary\ndata = transform(image)\n\n", "as GoodDeeds mentioned, CNN expects the data to be of type Tensor you have read the image using PIL and then converted it to NumPy array, you will need to convert the NumPy array to Tensor using torch.from_numpy(data)\nBelow code will solve the issue\nimport torch\nimport torch.nn as nn #creating neural network\nfrom PIL import Image\nfrom numpy import asarray\n\n#Set up GPU\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nHer I am loading my image \n# load the image\nimage = Image.open('./img.png')\n# convert image to numpy array\ndata = asarray(image)\ndata=torch.from_numpy(data)\nprint(type(data))\nprint(data.shape)`\n\n" ]
[ 1, 1 ]
[]
[]
[ "conv_neural_network", "deep_learning", "python", "pytorch" ]
stackoverflow_0074452216_conv_neural_network_deep_learning_python_pytorch.txt
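Both answers stop at the tensor conversion, but Conv2d additionally expects a float tensor shaped (batch, channels, H, W); here is a fuller sketch, assuming an RGB image and the convnet/device from the question:

import torch
import torchvision.transforms.functional as TF
import matplotlib.pyplot as plt
from PIL import Image

image = Image.open("./img.png").convert("RGB")  # force 3 channels
data = TF.to_tensor(image)           # float32, (C, H, W), values in [0, 1]
data = data.unsqueeze(0).to(device)  # add the batch dimension: (1, C, H, W)

outputs = convnet(data)

# Back to (H, W, C) on the CPU for display; values outside [0, 1] get clipped
img = outputs.squeeze(0).detach().cpu().permute(1, 2, 0).numpy()
plt.imshow(img)
plt.show()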
Q: How to find circle faster than by Hough Transform in python opencv I’m trying to make Hough Transform find a circle faster or find another function that can do it faster. (i do not need to stick to open cv, but it needs to be opensource) I need to get centerpoint and radius. My use case: I have a square picture in grayscale, 4 aruco markers in the corners(detected earlier), and a black circle approximately in the middle. Rest of the picture is quite uniformly white-isch/gray. Picture size is 1600x1600. I know the approximate center position and radius of the circle I use: cv2.HoughCircles(image, cv2.HOUGH_GRADIENT, 1, 100, 100, 30, 200,250) This takes about 35-40ms. I would love to get it down to about 15ms I tried reducing resolution by half, but id does not give me very big benefit and makes result a bit "jittery". Image i try to recognize: Image i try to recognize A: I didn't have much success with the method I suggested in the comments so I tried a different approach: #!/usr/bin/env python3 import cv2 # Load image in greyscale im = cv2.imread('J65Xt.jpg', cv2.IMREAD_GRAYSCALE) # Define region of interest to exclude corner markers and reduce processing time ROI = im[400:-400,400:-400] # Threshold and invert _, thr = cv2.threshold(ROI, 80,255,type=cv2.THRESH_BINARY_INV) # Find contours contours, _ = cv2.findContours(thr, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Get centroid from moments - may not really need this - could get from bounding box M = cv2.moments(contours[0]) cX = int(M["m10"] / M["m00"]) cY = int(M["m01"] / M["m00"]) print(f'Centroid: cX={cX}, cY={cY} do not forget 400 offset of ROI') # Mark centre in black thr[cY-3:cY+3, cX-3:cX+3] = 0 # Get bounding box and draw it on x, y, w, h = cv2.boundingRect(contours[0]) print(f'x={x}, y={y}, w={w}, h={h}') print(f'cX={x+w/2}, cY={y+h/2}') cv2.rectangle(thr, (x, y), (x + w, y + h), 255, 1) cv2.imwrite('result.png', thr) Output Centroid: cX=432, cY=394 do not forget 400 offset of ROI cX=432.5, cY=395.0 It takes 127 microseconds on my Mac if you exclude loading the input image and saving the output image.
How to find circle faster than by Hough Transform in python opencv
I’m trying to make the Hough Transform find a circle faster, or find another function that can do it faster. (I do not need to stick to OpenCV, but it needs to be open source.) I need to get the center point and radius. My use case: I have a square picture in grayscale, 4 ArUco markers in the corners (detected earlier), and a black circle approximately in the middle. The rest of the picture is quite uniformly whitish/gray. Picture size is 1600x1600. I know the approximate center position and radius of the circle. I use: cv2.HoughCircles(image, cv2.HOUGH_GRADIENT, 1, 100, 100, 30, 200, 250) This takes about 35-40ms. I would love to get it down to about 15ms. I tried reducing the resolution by half, but it does not give me much benefit and makes the result a bit "jittery". Image I try to recognize: Image I try to recognize
[ "I didn't have much success with the method I suggested in the comments so I tried a different approach:\n#!/usr/bin/env python3\n\nimport cv2\n\n# Load image in greyscale\nim = cv2.imread('J65Xt.jpg', cv2.IMREAD_GRAYSCALE)\n\n# Define region of interest to exclude corner markers and reduce processing time\nROI = im[400:-400,400:-400]\n\n# Threshold and invert\n_, thr = cv2.threshold(ROI, 80,255,type=cv2.THRESH_BINARY_INV)\n\n# Find contours\ncontours, _ = cv2.findContours(thr, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n\n# Get centroid from moments - may not really need this - could get from bounding box\nM = cv2.moments(contours[0])\ncX = int(M[\"m10\"] / M[\"m00\"])\ncY = int(M[\"m01\"] / M[\"m00\"])\nprint(f'Centroid: cX={cX}, cY={cY} do not forget 400 offset of ROI')\n# Mark centre in black\nthr[cY-3:cY+3, cX-3:cX+3] = 0\n\n# Get bounding box and draw it on\nx, y, w, h = cv2.boundingRect(contours[0])\nprint(f'x={x}, y={y}, w={w}, h={h}')\nprint(f'cX={x+w/2}, cY={y+h/2}')\ncv2.rectangle(thr, (x, y), (x + w, y + h), 255, 1)\ncv2.imwrite('result.png', thr)\n\n\nOutput\nCentroid: cX=432, cY=394 do not forget 400 offset of ROI\ncX=432.5, cY=395.0\n\nIt takes 127 microseconds on my Mac if you exclude loading the input image and saving the output image.\n" ]
[ 1 ]
[]
[]
[ "hough_transform", "opencv", "python" ]
stackoverflow_0074438250_hough_transform_opencv_python.txt
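Since the question also wants the radius, the same contour can feed cv2.minEnclosingCircle, which returns the centre and radius directly (a small extension of the answer above):

import cv2

im = cv2.imread("J65Xt.jpg", cv2.IMREAD_GRAYSCALE)
roi = im[400:-400, 400:-400]
_, thr = cv2.threshold(roi, 80, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(thr, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

(cx, cy), radius = cv2.minEnclosingCircle(contours[0])
# Add the 400-pixel ROI offset back to report full-image coordinates
print(f"centre=({cx + 400:.1f}, {cy + 400:.1f}), radius={radius:.1f}")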
Q: python loop function in cs description: Python can loop functions in eachother. can cS loop function too? Example python: def func(): x=input(">") func() Example c# expected: namespace f {class f{ static void main(string[] args){ void stuff() { Console.readLine() stuff() } } }} i dont think its possible to loop function in the function in cs. what i mean by looping function is by putting the void inside the container. here is what i mean python: def g(): x=input(">") g() output (typer): Python Latest Update >h >bruh >new line >new new line >line >infinite input lines > repeating function i use this because in python i added commands in the script and i do it so i wont need to retype until the python stops the input. example: Problem (python script): def func(): x=input(">") if x=="help": print("commands: help") x=input(">") if x=="help": #repeat Solution (python script): def func(): x=input(">") if x=="help": print("commands: help") func() why i put the examples in python script: idk if you can do this in c# so im not going to confuse anyone Can this happen in C#? A: I'm not familiar with C# but hopefully this page can help Recursive Function C# What you're trying to make is called a recursive function A: Every modern language supports recursion. The problem in your example was you had a nested function, which C# doesn't do. You'd write it like this: namespace f { class f{ static void stuff() { Console.readLine(); stuff(); } static void main(string[] args){ stuff(); } } } But I want to reiterate that this is poor practice. There are some languages in which the compiler can catch this "tail recursion" and optimize for it, turning it into a "jump" that doesn't use stack space. Python and C# do not do that. The proper way is just: namespace f { class f{ static void stuff() { while( 1 ) { Console.readLine(); } } static void main(string[] args){ stuff(); } } } Ordinarily, you would have some condition inside the loop signalling it was time to end, and you'd do a break to stop the loop.
python loop function in cs
Description: Python can loop functions into each other. Can C# loop a function too? Example Python: def func(): x=input(">") func() Example C# expected: namespace f {class f{ static void main(string[] args){ void stuff() { Console.readLine() stuff() } } }} I don't think it's possible to loop a function within the function in C#. What I mean by looping a function is putting the void inside the container. Here is what I mean in Python: def g(): x=input(">") g() output (typed): Python Latest Update >h >bruh >new line >new new line >line >infinite input lines > repeating function I use this because in Python I added commands to the script, and I do it so I won't need to retype until Python stops the input. Example: Problem (Python script): def func(): x=input(">") if x=="help": print("commands: help") x=input(">") if x=="help": #repeat Solution (Python script): def func(): x=input(">") if x=="help": print("commands: help") func() Why I put the examples in Python script: I don't know if you can do this in C#, so I'm not going to confuse anyone. Can this happen in C#?
[ "I'm not familiar with C# but hopefully this page can help Recursive Function C#\nWhat you're trying to make is called a recursive function\n", "Every modern language supports recursion. The problem in your example was you had a nested function, which C# doesn't do. You'd write it like this:\nnamespace f {\n class f{\n static void stuff() {\n Console.readLine();\n stuff();\n }\n static void main(string[] args){\n stuff();\n }\n }\n}\n\nBut I want to reiterate that this is poor practice. There are some languages in which the compiler can catch this \"tail recursion\" and optimize for it, turning it into a \"jump\" that doesn't use stack space. Python and C# do not do that.\nThe proper way is just:\nnamespace f {\n class f{\n static void stuff() {\n while( 1 ) {\n Console.readLine();\n }\n }\n static void main(string[] args){\n stuff();\n }\n }\n}\n\nOrdinarily, you would have some condition inside the loop signalling it was time to end, and you'd do a break to stop the loop.\n" ]
[ 1, 1 ]
[]
[]
[ "c#", "python" ]
stackoverflow_0074483652_c#_python.txt
Q: Bbox For Image Grabbing So , I'm Trying To Make A Automated App Actually , i'm making it for Dino Web Game Everything Is Fine , But ! Colors Array's Number Will Not Change , I Think It is Boxing Problem Can You Guide Me With Correct Values In This Box ? from PIL import ImageGrab, ImageOps from webbrowser import open_new_tab as new from pyautogui import keyDown from time import sleep from numpy import * site_url = "https://trex-runner.com/" dinasour = (692, 494) def pressSpaceButton(): sleep(0.007) keyDown('space') def openGamePage(): # Open Game URL In New Tab new(site_url) def restartGame(): # Press Space Button To Start/Restart Game keyDown('space') print("Game Has Been Started / Restarted") def FindCactuses(): # Find Cactuses In Screen box = ( #(top_left_x, top_left_y, bottom_right_x, bottom_right_y) dinasour[0] + 30, dinasour[1], dinasour[0] + 120, dinasour[1] + 2 ) image = ImageGrab.grab(box) grayImage = ImageOps.grayscale(image) a = array(grayImage.getcolors()) print(a) return a.sum() sleep(3) # Wait 3 Seconds openGamePage() sleep(5)# Wait 5 Seconds restartGame() while True: FindCactuses() if FindCactuses != 697: pressSpaceButton() It's Going To Recognize Black And White Colors And When It Finds A Black Color it will press Space Button A: If you just want to pixel match a certain rgb value in a certain x,y position then you can use pyautogui.pixelMatchesColor(x, y, (r, g, b)) which is perfect for this situation So in your code (keep in mind you will have to change the x,y values): while True: #if at x 497 and y 524 the pixel matches your rgb color value of 83,83,83 then if pyautogui.pixelMatchesColor(497, 524, (83, 83, 83)): #press up pyautogui.press('up')
Bbox For Image Grabbing
So, I'm trying to make an automated app. Actually, I'm making it for the Dino web game. Everything is fine, but the colors array's numbers will not change. I think it is a boxing problem. Can you guide me to correct values in this box? from PIL import ImageGrab, ImageOps from webbrowser import open_new_tab as new from pyautogui import keyDown from time import sleep from numpy import * site_url = "https://trex-runner.com/" dinasour = (692, 494) def pressSpaceButton(): sleep(0.007) keyDown('space') def openGamePage(): # Open Game URL In New Tab new(site_url) def restartGame(): # Press Space Button To Start/Restart Game keyDown('space') print("Game Has Been Started / Restarted") def FindCactuses(): # Find Cactuses In Screen box = ( #(top_left_x, top_left_y, bottom_right_x, bottom_right_y) dinasour[0] + 30, dinasour[1], dinasour[0] + 120, dinasour[1] + 2 ) image = ImageGrab.grab(box) grayImage = ImageOps.grayscale(image) a = array(grayImage.getcolors()) print(a) return a.sum() sleep(3) # Wait 3 Seconds openGamePage() sleep(5)# Wait 5 Seconds restartGame() while True: FindCactuses() if FindCactuses != 697: pressSpaceButton() It's going to recognize black and white colors, and when it finds a black color it will press the space button.
[ "If you just want to pixel match a certain rgb value in a certain x,y position then you can use pyautogui.pixelMatchesColor(x, y, (r, g, b)) which is perfect for this situation\nSo in your code (keep in mind you will have to change the x,y values):\nwhile True:\n #if at x 497 and y 524 the pixel matches your rgb color value of 83,83,83 then\n if pyautogui.pixelMatchesColor(497, 524, (83, 83, 83)):\n #press up\n pyautogui.press('up')\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "pyautogui", "python", "python_imaging_library", "webbrowser_control" ]
stackoverflow_0074474813_numpy_pyautogui_python_python_imaging_library_webbrowser_control.txt
Q: Filling NaN values with rolling mean of the previous non-NaN values I have recently come across a case where I would like to replace NaN values with the rolling mean of the previous non-NaN values in such a way that each newly generated rolling mean is then considered a non-NaN and is used for the next NaN. This is the sample data set: df = pd.DataFrame({'col1': [1, 3, 4, 5, 6, np.NaN, np.NaN, np.NaN]}) df col1 0 1.0 1 3.0 2 4.0 3 5.0 4 6.0 5 NaN # (6.0 + 5.0) / 2 6 NaN # (5.5 + 6.0) / 2 7 NaN # ... I have also found a solution for this which I am struggling to understand: from functools import reduce reduce(lambda x, _: x.fillna(x.rolling(2, min_periods=2).mean().shift()), range(df['col1'].isna().sum()), df) My problem with this solution is reduce function takes 3 arguments, where we first define the lambda function then we specify the iterator. In the solution above I don't understand the last df we put in the function call for reduce and I struggle to understand how it works in general to populate the NaN. I would appreciate any explanation of how it works. Also if there is any pandas, numpy based solution as reduce is not seemingly efficient here. A: for i in df.index: if np.isnan(df["col1"][i]): df["col1"][i] = (df["col1"][i - 1] + df["col1"][i - 2]) / 2 This can be a start using for loop, it will fail if the first 2 values of the dataframe are NAN
Filling NaN values with rolling mean of the previous non-NaN values
I have recently come across a case where I would like to replace NaN values with the rolling mean of the previous non-NaN values in such a way that each newly generated rolling mean is then considered a non-NaN and is used for the next NaN. This is the sample data set: df = pd.DataFrame({'col1': [1, 3, 4, 5, 6, np.NaN, np.NaN, np.NaN]}) df col1 0 1.0 1 3.0 2 4.0 3 5.0 4 6.0 5 NaN # (6.0 + 5.0) / 2 6 NaN # (5.5 + 6.0) / 2 7 NaN # ... I have also found a solution for this which I am struggling to understand: from functools import reduce reduce(lambda x, _: x.fillna(x.rolling(2, min_periods=2).mean().shift()), range(df['col1'].isna().sum()), df) My problem with this solution is that the reduce function takes 3 arguments: we first define the lambda function, then we specify the iterator. In the solution above I don't understand the last df we put in the call to reduce, and I struggle to understand how it works in general to populate the NaNs. I would appreciate any explanation of how it works. Also, is there any pandas/numpy based solution, as reduce does not seem efficient here?
[ "for i in df.index:\n if np.isnan(df[\"col1\"][i]):\n df[\"col1\"][i] = (df[\"col1\"][i - 1] + df[\"col1\"][i - 2]) / 2\n\nThis can be a start using for loop, it will fail if the first 2 values of the dataframe are NAN\n" ]
[ 2 ]
[]
[]
[ "pandas", "python", "reduce" ]
stackoverflow_0074482996_pandas_python_reduce.txt
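On the reduce puzzle itself: functools.reduce(function, iterable, initializer) threads the initializer through every call, so the trailing df is simply the starting accumulator. Each pass fills only the first remaining NaN (the shifted rolling mean is defined just one step past the already-filled region), and range(df['col1'].isna().sum()) runs exactly enough passes. An equivalent explicit sketch without reduce, assuming at least two non-NaN values before the first gap:

import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, 3, 4, 5, 6, np.nan, np.nan, np.nan]})

col = df['col1'].to_numpy()
for i in np.flatnonzero(np.isnan(col)):
    col[i] = col[i-2:i].mean()  # mean of the two previous (now filled) values
df['col1'] = col
print(df)  # 5.5, 5.75, 5.625 in the last three rows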
Q: Getting EOF error but running my code in Thonny produces no errors I'm learning python and one of my labs required me to: Write a program whose input is a string which contains a character and a phrase, and whose output indicates the number of times the character appears in the phrase. The output should include the input character and use the plural form, n's, if the number of times the characters appears is not exactly 1. My code ended up being: char = input() string = input() count = 0 for i in string: if i == char: count +=1 if count > 1 or count == 0: print(f"{count} {char}'s") else: print(f'{count} {char}') Whenever I run the code in Thonny or in the Zybooks development tab it works but when I select the submit option I keep getting and EOF error: Traceback (most recent call last): File "main.py", line 2, in <module> string = input() EOFError: EOF when reading a line Does anyone know what's causing the error? I tried using the break command but it didn't help though I think if I used break at the end of my for statement it wouldn't count all the way. Any ideas folks? A: Thank you Mr. Roberts the number of inputs was the issue. I had to create a single input and pull what I needed from that single line. My code ended up being: string = input() char = string[0] phrase = string[1:] count = 0 for i in phrase: if i == char: count +=1 All good now.
Getting EOF error but running my code in Thonny produces no errors
I'm learning Python and one of my labs required me to: Write a program whose input is a string which contains a character and a phrase, and whose output indicates the number of times the character appears in the phrase. The output should include the input character and use the plural form, n's, if the number of times the character appears is not exactly 1. My code ended up being: char = input() string = input() count = 0 for i in string: if i == char: count +=1 if count > 1 or count == 0: print(f"{count} {char}'s") else: print(f'{count} {char}') Whenever I run the code in Thonny or in the Zybooks development tab it works, but when I select the submit option I keep getting an EOF error: Traceback (most recent call last): File "main.py", line 2, in <module> string = input() EOFError: EOF when reading a line Does anyone know what's causing the error? I tried using the break command but it didn't help, though I think if I used break at the end of my for statement it wouldn't count all the way. Any ideas, folks?
[ "Thank you Mr. Roberts the number of inputs was the issue. I had to create a single input and pull what I needed from that single line. My code ended up being:\nstring = input()\n\nchar = string[0]\n\nphrase = string[1:]\n\ncount = 0\n\nfor i in phrase:\n \nif i == char:\n \ncount +=1\n\nAll good now.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074483609_python.txt
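For the usual zyBooks input layout of "n Monday" (one line: the character, a space, then the phrase), str.count gives the same result without a loop; a sketch assuming that layout:

line = input()
char, phrase = line[0], line[2:]  # skip the separating space

count = phrase.count(char)
suffix = "" if count == 1 else "'s"
print(f"{count} {char}{suffix}")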
Q: conda activate fails (results in IndexError: list index out of range) I have a fresh copy of Ubuntu 22.04 that I'm running in a Hyper-V virtual machine in Windows 11 Pro. I just installed Anaconda from anaconda.com. Everything seems fine (I've also added Conda to the path.) I created a new environment using: conda create --name proto202211 Environment is created successfully and conda tells me: to activate this environment, use $ conda activate proto202211 So I do that, but I get a long ERROR REPORT and the environment fails to activate. I rebooted the Linux machine. Same thing. I created a new environment, myenv, and I can't switch to that either (same error). I can list the environments, and conda sees proto202211 and myenv. It recognizes that they're there. But if I try to activate them, I get this long failure. I've attached images of the errors that I'm getting. Help. A: I found the solution to this problem. After installing anaconda I manually edited my PATH using: echo "export PATH=$PATH:/home/nate/anaconda3/bin">> ~/.bashrc This was a bad idea. I removed that line from .bashrc using gedit, restarted bash, and now I can switch environments. I got the idea to try this from this similar (but non-identical) question: unable to activate existing conda environments
conda activate fails (results in IndexError: list index out of range)
I have a fresh copy of Ubuntu 22.04 that I'm running in a Hyper-V virtual machine in Windows 11 Pro. I just installed Anaconda from anaconda.com. Everything seems fine (I've also added Conda to the path.) I created a new environment using: conda create --name proto202211 Environment is created successfully and conda tells me: to activate this environment, use $ conda activate proto202211 So I do that, but I get a long ERROR REPORT and the environment fails to activate. I rebooted the Linux machine. Same thing. I created a new environment, myenv, and I can't switch to that either (same error). I can list the environments, and conda sees proto202211 and myenv. It recognizes that they're there. But if I try to activate them, I get this long failure. I've attached images of the errors that I'm getting. Help.
[ "I found the solution to this problem. After installing anaconda I manually edited my PATH using:\necho \"export PATH=$PATH:/home/nate/anaconda3/bin\">> ~/.bashrc\nThis was a bad idea. I removed that line from .bashrc using gedit, restarted bash, and now I can switch environments.\nI got the idea to try this from this similar (but non-identical) question: unable to activate existing conda environments\n" ]
[ 0 ]
[]
[]
[ "anaconda", "conda", "linux", "python", "ubuntu" ]
stackoverflow_0074478348_anaconda_conda_linux_python_ubuntu.txt
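As a reference point, the setup conda's own docs recommend is to let conda install its shell hook instead of exporting anaconda3/bin onto PATH by hand — a shell sketch (this record's fix is shell configuration), assuming the default install location from the question:

# remove the manual 'export PATH=...anaconda3/bin' line from ~/.bashrc, then:
~/anaconda3/bin/conda init bash   # writes a managed block into ~/.bashrc
exec bash                         # reload the shell so the hook takes effect
conda activate proto202211        # activation now works without PATH hacks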
Q: beginner python on looping back to the start of my simple number guessing game This is my code so far (in PyCharm), I am writing a very simple number guessing game that has integers from 1-9. I am still trying to master thought & flow as well as loops, and I hit a roadblock: import random Player_Name = input("What is your name?\n") print(f"Hello {Player_Name}!\n") random_num = random.randint(1, 10) guess = int(input("What is the number you want to pick? Guess one, 1-9\n")) def number_game(): if guess == random_num: print(f"You guessed right, the number is confirmed to be {random_num}.") else: print(f"You guessed the wrong number. Try again.\n") number_game() I called the function and ran the code... everything appears to be working except I really can't figure out how to keep the game going in a loop until the player gets the right number out of 1-9...and end it when I need to. I tried searching all my resources and am quite stuck on this beginner practice coding. Any help is appreciated. What I wrote and tried is above... googling and stackoverflow just confused me more. A: Honestly, there are many ways to do what you want. But using your code as a base, this is one possible solution. import random Player_Name = input("What is your name?\n") print(f"Hello {Player_Name}!\n") random_num = random.randint(1, 10) def number_game(): guess = int(input("What is the number you want to pick? Guess one, 1-9\n")) if guess == random_num: print(f"You guessed right, the number is confirmed to be {random_num}.") return True else: print(f"You guessed the wrong number. Try again.\n") return False while True: guessed_right = number_game() if guessed_right: break A: while True: number_game() Replace the last line of your script with this!
beginner python on looping back to the start of my simple number guessing game
This is my code so far (in PyCharm), I am writing a very simple number guessing game that has integers from 1-9. I am still trying to master thought & flow as well as loops, and I hit a roadblock: import random Player_Name = input("What is your name?\n") print(f"Hello {Player_Name}!\n") random_num = random.randint(1, 10) guess = int(input("What is the number you want to pick? Guess one, 1-9\n")) def number_game(): if guess == random_num: print(f"You guessed right, the number is confirmed to be {random_num}.") else: print(f"You guessed the wrong number. Try again.\n") number_game() I called the function and ran the code... everything appears to be working except I really can't figure out how to keep the game going in a loop until the player gets the right number out of 1-9...and end it when I need to. I tried searching all my resources and am quite stuck on this beginner practice coding. Any help is appreciated. What I wrote and tried is above... googling and stackoverflow just confused me more.
[ "Honestly, there are many ways to do what you want. But using your code as base, this is one possible solution.\nimport random\n\n\nPlayer_Name = input(\"What is your name?\\n\")\nprint(f\"Hello {Player_Name}!\\n\")\nrandom_num = random.randint(1, 10)\n\n\n\ndef number_game():\n guess = int(input(\"What is the number you want to pick? Guess one, 1-9\\n\"))\n if guess == random_num:\n print(f\"You guessed right, the number is confirmed to be {random_num}.\")\n return True\n else:\n print(f\"You guessed the wrong number. Try again.\\n\")\n return False\n\n\nwhile True:\n guessed_right = number_game()\n\n if guessed_right:\n quit()\n else:\n number_game()\n\n", "while True:\n number_game()\n\nReplace the last line of your script with this!\n" ]
[ 1, 0 ]
[]
[]
[ "integer", "loops", "python", "random" ]
stackoverflow_0074483670_integer_loops_python_random.txt
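Pulling the two answers together, a compact variant of the loop — with two hedged tweaks to the question's code: randint(1, 9) so the draw actually matches the "1-9" prompt (the original randint(1, 10) can produce 10), and break instead of quit() so only the loop ends, not the interpreter:

import random

random_num = random.randint(1, 9)   # match the 1-9 prompt

while True:
    guess = int(input("Guess a number from 1-9: "))
    if guess == random_num:
        print(f"You guessed right, the number is {random_num}.")
        break                       # leave the loop once the guess is correct
    print("You guessed the wrong number. Try again.")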
Q: How to extract all nested tags and their content with BeautifulSoup? I'm trying to pull out all nested <option> tags and their values using BeautifulSoup in Python. The first block of code provides the desired Unicode-type result (more than 60 pages of output). Part of the HTML tree is included below. Please note that the desired <option> tags are nested. Issue: The second block of code below does not provide the output, throwing no error. from bs4 import BeautifulSoup import requests def main(base_url): response = requests.get(base_url) soup = BeautifulSoup(response.text, "html.parser") print(soup.prettify) main('https://meps.ahrq.gov/data_stats/download_data_files.jsp') from bs4 import BeautifulSoup import requests def main(base_url): response = requests.get(base_url) soup = BeautifulSoup(response.text, "html.parser") select_id = soup.find_all("select", id="pufnumber") print(select_id) nested_option = [x.find_all("option") for x in select_id] print(nested_option) main('https://meps.ahrq.gov/data_stats/download_data_files.jsp') Part of the output from print(soup.prettify): </table> <!-- 3/23/06 <img src="../images/bullets/spacer.gif" width="1" height="3" alt=""> <table role="presentation" width="430" height="15" border="0" cellpadding="6" cellspacing="0"> <tr> <td height="0" bgcolor="#F9F9F9" class="contentStyle"><strong><font color="#006600">Option 2: </font><font color="#003399"><label for="pufnumber">Select by data file number/title </label></font></strong></td> </tr> </table> <table role="presentation" width="430" height="25" border="0" cellpadding="5" cellspacing="0" class="BlueBox"> <tr> <td width="430" height="0"> <span class="contentStyle"> <select id="pufnumber" size=1 name="cboPufNumber"> <option value="All">All data files</option> <option value="HC-225">MEPS HC-225: MEPS Panel 24 Longitudinal Data File</option> <option value="HC-224">MEPS HC-224: 2020 Full Year Consolidated Data File</option> <option value="HC-223">MEPS HC-223: 2020 Person Round Plan File</option> My goal is to pull out nested option tags like this: <option value="HC-225">MEPS HC-225: MEPS Panel 24 Longitudinal Data File</option> I'm not interested in the following <option> tags: <option value="All">All available years</option> <option value="2020">2020</option> <option value="2019">2019</option> <option value="2018">2018</option> <option value="2017">2017</option> <option value="2016">2016</option> ... A: I noticed that the part of the HTML you want to process is in a comment block, which means the BeautifulSoup cannot process the content. <!-- 3/23/06 <img src=" --> Try the code below to see all the comments, import requests from bs4 import BeautifulSoup, Comment def main(base_url): response = requests.get(base_url) soup = BeautifulSoup(response.text, "html.parser") comments = soup.find_all(string=lambda text: isinstance(text, Comment)) for c in comments: print(c) print("===========") c.extract() main('https://meps.ahrq.gov/data_stats/download_data_files.jsp') Now, your problem becomes how to process the comments to extract the data you want. Here is a working example, and I used the regular expression to process the raw text. Note that this is only designed for the specific web page structure and might not be useful for other sites. 
import requests from bs4 import BeautifulSoup, Comment import re # find all options match the start and end string def extractOptions(inputData): sub1 = str(re.escape('<option value="All">All data files</option>')) sub2 = str(re.escape('</select>')) result = re.findall(sub1+"(.*)"+sub2, inputData, flags=re.S) if len(result) > 0: return result[0] # find the actual data from each option def extracData(inputData): sub1 = str(re.escape('>')) sub2 = str(re.escape('</option>')) result = re.findall(sub1+"(.*)"+sub2, inputData, flags=re.S) if len(result) > 0: return result[0] return '' def main(base_url): response = requests.get(base_url) soup = BeautifulSoup(response.text, "html.parser") comments = soup.find_all(string=lambda text: isinstance(text, Comment)) for c in comments: if '<select id="pufnumber" size=1 name="cboPufNumber">' in c: options = extractOptions(c) ops = options.splitlines() #split text into lines for op in ops: data = extracData(op) if data != '': #check if the data found print(data) main('https://meps.ahrq.gov/data_stats/download_data_files.jsp')
How to extract all nested tags and their content with BeautifulSoup?
I'm trying to pull out all nested <option> tags and their values using BeautifulSoup in Python. The first block of code provides the desired Unicode-type result (more than 60 pages of output). Part of the HTML tree is included below. Please note that the desired <option> tags are nested. Issue: The second block of code below does not provide the output, throwing no error. from bs4 import BeautifulSoup import requests def main(base_url): response = requests.get(base_url) soup = BeautifulSoup(response.text, "html.parser") print(soup.prettify) main('https://meps.ahrq.gov/data_stats/download_data_files.jsp') from bs4 import BeautifulSoup import requests def main(base_url): response = requests.get(base_url) soup = BeautifulSoup(response.text, "html.parser") select_id = soup.find_all("select", id="pufnumber") print(select_id) nested_option = [x.find_all("option") for x in select_id] print(nested_option) main('https://meps.ahrq.gov/data_stats/download_data_files.jsp') Part of the output from print(soup.prettify): </table> <!-- 3/23/06 <img src="../images/bullets/spacer.gif" width="1" height="3" alt=""> <table role="presentation" width="430" height="15" border="0" cellpadding="6" cellspacing="0"> <tr> <td height="0" bgcolor="#F9F9F9" class="contentStyle"><strong><font color="#006600">Option 2: </font><font color="#003399"><label for="pufnumber">Select by data file number/title </label></font></strong></td> </tr> </table> <table role="presentation" width="430" height="25" border="0" cellpadding="5" cellspacing="0" class="BlueBox"> <tr> <td width="430" height="0"> <span class="contentStyle"> <select id="pufnumber" size=1 name="cboPufNumber"> <option value="All">All data files</option> <option value="HC-225">MEPS HC-225: MEPS Panel 24 Longitudinal Data File</option> <option value="HC-224">MEPS HC-224: 2020 Full Year Consolidated Data File</option> <option value="HC-223">MEPS HC-223: 2020 Person Round Plan File</option> My goal is to pull out nested option tags like this: <option value="HC-225">MEPS HC-225: MEPS Panel 24 Longitudinal Data File</option> I'm not interested in the following <option> tags: <option value="All">All available years</option> <option value="2020">2020</option> <option value="2019">2019</option> <option value="2018">2018</option> <option value="2017">2017</option> <option value="2016">2016</option> ...
[ "I noticed that the part of the HTML you want to process is in a comment block, which means the BeautifulSoup cannot process the content.\n<!-- 3/23/06 <img src=\" -->\n\nTry the code below to see all the comments,\nimport requests\nfrom bs4 import BeautifulSoup, Comment\n\ndef main(base_url):\n response = requests.get(base_url)\n soup = BeautifulSoup(response.text, \"html.parser\")\n comments = soup.find_all(string=lambda text: isinstance(text, Comment))\n for c in comments:\n print(c)\n print(\"===========\")\n c.extract()\nmain('https://meps.ahrq.gov/data_stats/download_data_files.jsp')\n\nNow, your problem becomes how to process the comments to extract the data you want.\nHere is a working example, and I used the regular expression to process the raw text. Note that this is only designed for the specific web page structure and might not be useful for other sites.\nimport requests\nfrom bs4 import BeautifulSoup, Comment\nimport re\n\n# find all options match the start and end string\ndef extractOptions(inputData):\n sub1 = str(re.escape('<option value=\"All\">All data files</option>'))\n sub2 = str(re.escape('</select>'))\n result = re.findall(sub1+\"(.*)\"+sub2, inputData, flags=re.S)\n if len(result) > 0:\n return result[0]\n\n# find the actual data from each option\ndef extracData(inputData):\n sub1 = str(re.escape('>'))\n sub2 = str(re.escape('</option>'))\n result = re.findall(sub1+\"(.*)\"+sub2, inputData, flags=re.S)\n if len(result) > 0:\n return result[0]\n return ''\n\ndef main(base_url):\n response = requests.get(base_url)\n soup = BeautifulSoup(response.text, \"html.parser\")\n comments = soup.find_all(string=lambda text: isinstance(text, Comment))\n\n for c in comments:\n if '<select id=\"pufnumber\" size=1 name=\"cboPufNumber\">' in c:\n options = extractOptions(c)\n ops = options.splitlines() #split text into lines\n for op in ops:\n data = extracData(op)\n if data != '': #check if the data found\n print(data)\n \n \nmain('https://meps.ahrq.gov/data_stats/download_data_files.jsp')\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "python_3.x" ]
stackoverflow_0074483202_beautifulsoup_python_python_3.x.txt
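As an alternative to the regular expressions in the answer, a Comment node's text can be handed back to BeautifulSoup and parsed as HTML in its own right, so the <option> tags come out as normal elements — a sketch that assumes the commented-out markup stays well-formed enough for html.parser:

import requests
from bs4 import BeautifulSoup, Comment

response = requests.get('https://meps.ahrq.gov/data_stats/download_data_files.jsp')
soup = BeautifulSoup(response.text, "html.parser")

for c in soup.find_all(string=lambda text: isinstance(text, Comment)):
    if 'pufnumber' in c:
        inner = BeautifulSoup(c, "html.parser")   # re-parse the comment body
        for option in inner.select('select#pufnumber option'):
            if option.get('value') != 'All':      # skip the "All data files" entry
                print(option.get('value'), option.get_text(strip=True))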
Q: Flask and sqlalchemy: Receiving a "can't adapt type 'ABCMeta'" error when posting to database When I try to create a new user in the database I receive an error that reads sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'ABCMeta' I've seen similar responses to this error here, but I am unsure of what this error is telling me. Would anyone be able to give me clarity on what this error means and how can I solve it? Code: from extensions import db class User(db.Model): __tablename__ = 'user' id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), nullable=False, unique=True) email = db.Column(db.String(200), nullable=False, unique=True) password = db.Column(db.String(200)) is_active = db.Column(db.Boolean(), default=False) created_at = db.Column(db.DateTime(), nullable=False, server_default=db.func.now()) updated_at = db.Column(db.DateTime(), nullable=False, server_default=db.func.now(), onupdate=db.func.now()) recipes = db.relationship('Recipe', backref='user') @classmethod def get_user_by_username(cls, username): return cls.query.filter_by(username=username).first() @classmethod def get_user_by_email(cls, email): return cls.query.filter_by(email=email).first() def save(self): db.session.add(self) db.session.commit() Error: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 828-615-892 127.0.0.1 - - [30/May/2022 16:15:10] "POST /users HTTP/1.1" 500 - Traceback (most recent call last): File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1820, in _execute_context cursor, statement, parameters, context File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute cursor.execute(statement, parameters) psycopg2.ProgrammingError: can't adapt type 'ABCMeta' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 2328, in __call__ return self.wsgi_app(environ, start_response) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 2314, in wsgi_app response = self.handle_exception(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 269, in error_router return original_handler(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1760, in handle_exception reraise(exc_type, exc_value, tb) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise raise value.with_traceback(tb) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 2311, in wsgi_app response = self.full_dispatch_request() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1834, in full_dispatch_request rv = self.handle_user_exception(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 269, in error_router return original_handler(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1737, in handle_user_exception reraise(exc_type, exc_value, tb) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise raise 
value.with_traceback(tb) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1832, in full_dispatch_request rv = self.dispatch_request() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1818, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 458, in wrapper resp = resource(*args, **kwargs) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/views.py", line 88, in view return self.dispatch_request(*args, **kwargs) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 573, in dispatch_request resp = meth(*args, **kwargs) File "/Users/lawrence/Documents/smilecook/resources/user.py", line 31, in post user.save() File "/Users/lawrence/Documents/smilecook/models/user.py", line 29, in save db.session.commit() File "<string>", line 2, in commit File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1435, in commit self._transaction.commit(_to_root=self.future) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 829, in commit self._prepare_impl() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl self.session.flush() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3367, in flush self._flush(objects) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3507, in _flush transaction.rollback(_capture_exception=True) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__ with_traceback=exc_tb, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3467, in _flush flush_context.execute() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute rec.execute(self) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 633, in execute uow, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 250, in save_obj insert, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1241, in _emit_insert_statements execution_options=execution_options, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20 return meth(self, args_10style, kwargs_10style, execution_options) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 326, in _execute_on_connection self, multiparams, params, execution_options File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1508, in _execute_clauseelement cache_hit=cache_hit, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1863, in _execute_context e, 
statement, parameters, cursor, context File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2044, in _handle_dbapi_exception sqlalchemy_exception, with_traceback=exc_info[2], from_=e File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1820, in _execute_context cursor, statement, parameters, context File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'ABCMeta' [SQL: INSERT INTO "user" (username, email, password, is_active) VALUES (%(username)s, %(email)s, %(password)s, %(is_active)s) RETURNING "user".id] [parameters: {'username': 'ray', 'email': 'ray@gmail.com', 'password': <class 'passlib.handlers.pbkdf2.pbkdf2_sha256'>, 'is_active': False}] A: It appears the password value you are trying to save is not a string, as the typing of the password column suggests you intended, but a class -- specifically, the class passlib.handlers.pbkdf2.pbkdf2_sha256. I think maybe you meant to call that class when you were setting the value of password (i.e., do this: password = passlib.handlers.pbkdf2.pbkdf2_sha256(), with the parentheses), but you instead set it to the class you intended to call (i.e., did this: password = passlib.handlers.pbkdf2.pbkdf2_sha256, without the parentheses). I'm not totally sure what is going on with that particular error you are getting, but it suggests that the meta class of passlib.handlers.pbkdf2.pbkdf2_sha256 is ABCMeta, which would be the case if passlib.handlers.pbkdf2.pbkdf2_sha256 is an abstract class.
Flask and sqlalchemy: Receiving a "can't adapt type 'ABCMeta'" error when posting to database
When I try to create a new user in the database I receive an error that reads sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'ABCMeta' I've seen similar responses to this error here, but I am unsure of what this error is telling me. Would anyone be able to give me clarity on what this error means and how can I solve it? Code: from extensions import db class User(db.Model): __tablename__ = 'user' id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), nullable=False, unique=True) email = db.Column(db.String(200), nullable=False, unique=True) password = db.Column(db.String(200)) is_active = db.Column(db.Boolean(), default=False) created_at = db.Column(db.DateTime(), nullable=False, server_default=db.func.now()) updated_at = db.Column(db.DateTime(), nullable=False, server_default=db.func.now(), onupdate=db.func.now()) recipes = db.relationship('Recipe', backref='user') @classmethod def get_user_by_username(cls, username): return cls.query.filter_by(username=username).first() @classmethod def get_user_by_email(cls, email): return cls.query.filter_by(email=email).first() def save(self): db.session.add(self) db.session.commit() Error: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 828-615-892 127.0.0.1 - - [30/May/2022 16:15:10] "POST /users HTTP/1.1" 500 - Traceback (most recent call last): File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1820, in _execute_context cursor, statement, parameters, context File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute cursor.execute(statement, parameters) psycopg2.ProgrammingError: can't adapt type 'ABCMeta' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 2328, in __call__ return self.wsgi_app(environ, start_response) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 2314, in wsgi_app response = self.handle_exception(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 269, in error_router return original_handler(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1760, in handle_exception reraise(exc_type, exc_value, tb) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise raise value.with_traceback(tb) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 2311, in wsgi_app response = self.full_dispatch_request() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1834, in full_dispatch_request rv = self.handle_user_exception(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 269, in error_router return original_handler(e) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1737, in handle_user_exception reraise(exc_type, exc_value, tb) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise raise value.with_traceback(tb) File 
"/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1832, in full_dispatch_request rv = self.dispatch_request() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/app.py", line 1818, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 458, in wrapper resp = resource(*args, **kwargs) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask/views.py", line 88, in view return self.dispatch_request(*args, **kwargs) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/flask_restful/__init__.py", line 573, in dispatch_request resp = meth(*args, **kwargs) File "/Users/lawrence/Documents/smilecook/resources/user.py", line 31, in post user.save() File "/Users/lawrence/Documents/smilecook/models/user.py", line 29, in save db.session.commit() File "<string>", line 2, in commit File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1435, in commit self._transaction.commit(_to_root=self.future) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 829, in commit self._prepare_impl() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl self.session.flush() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3367, in flush self._flush(objects) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3507, in _flush transaction.rollback(_capture_exception=True) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__ with_traceback=exc_tb, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3467, in _flush flush_context.execute() File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute rec.execute(self) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 633, in execute uow, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 250, in save_obj insert, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1241, in _emit_insert_statements execution_options=execution_options, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20 return meth(self, args_10style, kwargs_10style, execution_options) File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 326, in _execute_on_connection self, multiparams, params, execution_options File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1508, in _execute_clauseelement cache_hit=cache_hit, File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1863, in _execute_context e, statement, parameters, cursor, 
context File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2044, in _handle_dbapi_exception sqlalchemy_exception, with_traceback=exc_info[2], from_=e File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1820, in _execute_context cursor, statement, parameters, context File "/Users/lawrence/Documents/smilecook/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'ABCMeta' [SQL: INSERT INTO "user" (username, email, password, is_active) VALUES (%(username)s, %(email)s, %(password)s, %(is_active)s) RETURNING "user".id] [parameters: {'username': 'ray', 'email': 'ray@gmail.com', 'password': <class 'passlib.handlers.pbkdf2.pbkdf2_sha256'>, 'is_active': False}]
[ "It appears the password value you are trying to save is not a string, as the typing of the password column suggests you intended, but a class -- specifically, the class passlib.handlers.pbkdf2.pbkdf2_sha256. I think maybe you meant to call that class when you were setting the value of password (i.e., do this: password = passlib.handlers.pbkdf2.pbkdf2_sha256(), with the parentheses), but you instead set it to the class you intended to call (i.e., did this: password = passlib.handlers.pbkdf2.pbkdf2_sha256, without the parentheses).\nI'm not totally sure what is going on with that particular error you are getting, but it suggests that the meta class of passlib.handlers.pbkdf2.pbkdf2_sha256 is ABCMeta, which would be the case if passlib.handlers.pbkdf2.pbkdf2_sha256 is an abstract class.\n" ]
[ 0 ]
[]
[]
[ "flask", "flask_sqlalchemy", "python", "sqlalchemy" ]
stackoverflow_0072440229_flask_flask_sqlalchemy_python_sqlalchemy.txt
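For context on the fix the answer points toward: passlib's documented way to produce a storable string is pbkdf2_sha256.hash(), which returns text that fits the String(200) column — where exactly this call belongs in the resource code is an assumption, since that file isn't shown:

from passlib.hash import pbkdf2_sha256

plain = "secret-password"                 # hypothetical value from the request body
hashed = pbkdf2_sha256.hash(plain)        # returns a str, not the class object

user = User(username="ray", email="ray@gmail.com", password=hashed)
user.save()

# later, at login time:
assert pbkdf2_sha256.verify(plain, hashed)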
Q: Python question using Monte carlo simulation and for loops This is the problem: Simulate the average of rolling two dice. This is the code I have so far: from random import seed, randint def simulate(): """ Roll two dice and return their sum """ dice_1 = randint(1,6) dice_2 = randint(1,6) sum = dice_1 + dice_2 ### Main seed(0) total = 0 # Use a for loop that runs for 1000 iterations for trial in range(1000): simulate() The next steps are to do this: # Call simulate() inside the loop to generate the sum # of two dice and add it into total For now, I have already called the simulate() function, but I am a bit confused on how to add it into the total variable I have. A: I didn't get exactly what you want to do, but is this enough for you? from random import seed, randint def simulate(): """ Roll two dice and return their sum """ dice_1 = randint(1,6) dice_2 = randint(1,6) sum = dice_1 + dice_2 # Add a return statement to get your sum after calling the function return sum ### Main seed(0) total = 0 # Use a for loop that runs for 1000 iterations for trial in range(1000): # add each simulation result to the total total = total + simulate()
Python question using Monte carlo simulation and for loops
This is the problem: Simulate the average of rolling two dice. This is the code I have so far: from random import seed, randint def simulate(): """ Roll two dice and return their sum """ dice_1 = randint(1,6) dice_2 = randint(1,6) sum = dice_1 + dice_2 ### Main seed(0) total = 0 # Use a for loop that runs for 1000 iterations for trial in range(1000): simulate() The next steps are to do this: # Call simulate() inside the loop to generate the sum # of two dice and add it into total For now, I have already called the simulate() function, but I am a bit confused on how to add it into the total variable I have.
[ "I didn't get exactly what you want to do, but is this enough for you ?\nfrom random import seed, randint\n\ndef simulate():\n \"\"\"\n Roll two dice and return their sum\n \"\"\"\n\n\n dice_1 = randint(1,6)\n dice_2 = randint(1,6)\n sum = dice_1 + dice_2\n # Add A return statement to get your sum after calling the function\n return sum\n\n\n\n### Main\n\nseed(0) \n\ntotal = 0\n\n# Use a for loop that runs for 1000 iterations\nfor trial in range(1000):\n # add each time the simulation result to the total\n total = total + simulate()\n\n\n\n" ]
[ 0 ]
[]
[]
[ "montecarlo", "python" ]
stackoverflow_0074483826_montecarlo_python.txt
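To close out the exercise — the lab asks for the average, not just the total — one division at the end finishes it; a sketch building on the answer above:

from random import seed, randint

def simulate():
    """Roll two dice and return their sum."""
    return randint(1, 6) + randint(1, 6)

seed(0)
trials = 1000
total = sum(simulate() for _ in range(trials))
print("Average of two dice:", total / trials)   # expected value is 7.0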
Q: The function takes a string as a parameter. If the string is anything but "Arizona" you return the string passed in as an argument I don't know why it's still printing Arizona and not raising a ValueError. The function also needs to be able to take Arizona in any case mix, for example "ArIzOna", in the argument. def raising_arizona(string): try: print(string) return True except: if string.upper() == 'Arizona' or string.lower() == 'arizona': raise ValueError return False raising_arizona('Arizona') I tried using an if statement in order for Arizona to be case-mixed by saying the string will be taken whether it's lower or upper case. A: The "try...except" statement first runs the code in the try: section, and if it doesn't raise any exceptions (doesn't have an error) then it will skip over the except: section. So, in your case, you are trying to print the passed argument and returning true. Neither of these lines throws an error, so the except section is skipped. Also, you can do this with an if statement by first making all the letters of the passed argument lowercase and checking if that text is equal to "arizona": def raising_arizona(string): if string.lower() == 'arizona': return string raising_arizona('Arizona')
The function takes a string as a parameter. If the string is anything but "Arizona" you return the string passed in as an argument
I don't know why it's still printing Arizona and not raising a ValueError. The function also needs to be able to take Arizona in any case mix, for example "ArIzOna", in the argument. def raising_arizona(string): try: print(string) return True except: if string.upper() == 'Arizona' or string.lower() == 'arizona': raise ValueError return False raising_arizona('Arizona') I tried using an if statement in order for Arizona to be case-mixed by saying the string will be taken whether it's lower or upper case.
[ "The \"try...except statement first runs the code in the try: section, and if it doesn't raise any exceptions (doesn't have an error) then it will skip over the except: section.\nSo, in you case, you are trying to print the passed argument and returning true. Neither of these lines throw an error, so the except section is skipped.\nAlso, you can do this with an if statement by first making all the letters of the passed argument lowercase and checking if that text is equal to \"arizona\":\ndef raising_arizona(string):\n if string.lower() == 'arizona':\n return string\n\nraising_arizona('Arizona')\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074483770_python.txt
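Reading the assignment literally — return the string unless it is "Arizona" in any case mix, and raise ValueError when it is — a minimal sketch would invert the answer's condition (this reading of the spec is an assumption based on the title):

def raising_arizona(string):
    if string.lower() == 'arizona':     # catches 'Arizona', 'ArIzOna', ...
        raise ValueError("Arizona is not allowed")
    return string

print(raising_arizona('Phoenix'))   # -> Phoenix
raising_arizona('ArIzOna')          # -> raises ValueError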
Q: Finding the indexes of an array If I have an array like this a = np.array([[False, False, False, False, False, False, False, False, False, False],[False, False, False, False, False, False, False, False, False, False],[False, False, False, True, True, False, False, False, False, False],]) I tried np.random.choice but it doesn't work for 1-D arrays :(
Finding the indexes of an array
If I have an array like this a = np.array([[False, False, False, False, False, False, False, False, False, False],[False, False, False, False, False, False, False, False, False, False],[False, False, False, True, True, False, False, False, False, False],]) I tried np.random.choice but it doesn't work for 1-D arrays :(
[ "A possible solution is to loop through different indexes until you find a match\nE.g.\n\nimport random\nindex_0 = 0\nindex_1 = 0\nfound = False\nwhile not found:\n temp_i0 = random.randint(len(array))\n temp_i1 = random.randint(len(array[0]))\n if array[temp_i0][temp_i1]:\n index_0 = temp_i0\n index_1 = temp_i1\n found = True\n\n\n\n", "You show a 2d array:\nIn [581]: a = np.array([[False, False, False, False, False, False, False, False, False, False],[False, False, False, False, False, False, False, False, False, False],[False, False, False, True, True, False, False, False, False, False],])\n\nIn [582]: a\nOut[582]: \narray([[False, False, False, False, False, False, False, False, False,\n False],\n [False, False, False, False, False, False, False, False, False,\n False],\n [False, False, False, True, True, False, False, False, False,\n False]])\n\nIn [583]: a.shape\nOut[583]: (3, 10)\n\nnp.nonzero (or np.where) finds the indices of the True elements:\nIn [584]: np.nonzero(a)\nOut[584]: (array([2, 2], dtype=int64), array([3, 4], dtype=int64))\n\nThat's a tuple of arrays. It can be used for indexing as in:\nIn [585]: a[_]\nOut[585]: array([ True, True])\n\nRead its docs\n" ]
[ 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074483572_numpy_python.txt
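Combining the two answers — if the end goal is one random True cell, np.argwhere turns the mask into an (N, 2) array of coordinates that a single random index can pick from, with no Python-level loop:

import numpy as np

a = np.array([[False] * 10,
              [False] * 10,
              [False, False, False, True, True, False, False, False, False, False]])

coords = np.argwhere(a)                        # array([[2, 3], [2, 4]])
i, j = coords[np.random.randint(len(coords))]  # one random True position
print(i, j, a[i, j])                           # e.g. 2 4 True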
Q: What does asyncio.create_task actually do? I'm trying to understand how does asyncio.create_task actually work. Suppose I have following code: import asyncio import time async def delayer(): await asyncio.sleep(1) async def messenger(): await asyncio.sleep(1) return "A Message" async def main(): message = await messenger() await delayer() start_time = time.time() asyncio.run(main()) end_time = time.time() - start_time print(end_time) The code will take about 2 seconds. But if I make some changes to the body of main like this: import asyncio import time async def delayer(): await asyncio.sleep(1) async def messenger(): await asyncio.sleep(1) return "A Message" async def main(): task1 = asyncio.create_task(delayer()) task2 = asyncio.create_task(delayer()) await task1 await task2 start_time = time.time() asyncio.run(main()) end_time = time.time() - start_time print(end_time) Now the code will take about 1 second. My understanding from what I read is that await is a blocking process as we can see from the first code. In that code we need to wait 1 second for the messenger function to return, then another second for delayer function. Now the real question come from the second code. We just learnt that await need us to wait for its expression to return. So even if we use async.create_task, shouldn't awaits in one of the function's body block the process and then return whenever it finishes its job, thus should give us 2 seconds for the program to end? If that wasn't the case, can you help me understand the asyncio.create_task? What I know: await is a blocking process await executes coroutine function and task object await makes us possible to pause coroutine process (I don't quite understand about this, too) create_task creates task object and then schedule and execute it as soon as possible What I am expecting: I hope I can get a simple but effective answer about how does asyncio.create_task conduct its work using my sample code. A: Perhaps it will help to think in the following way. You cannot understand what await does until you understand what an event loop is. This line: asyncio.run(main()) creates and executes an event loop, which is basically an infinite loop with some methods for allowing an exit - a "semi-infinite" loop, so to speak. Until that loop exits, it will be entirely responsible for executing the program. (Here I am assuming that your program has only a single thread and a single Process. I'm not talking about concurrent program in any form.) Each unit of code that can run within an event loop is called a "Task." The idea of the loop is that it can run multiple Tasks by switching from one to another, thus giving the illusion that the CPU is doing more than one thing at a time. The asyncio.run() call does a second thing: it creates a Task, main(). At that moment, it's the only Task. The event loop begins to run the Task at its first line. Initially it runs just like any other function: async def main(): task1 = asyncio.create_task(delayer()) task2 = asyncio.create_task(delayer()) await task1 await task2 It creates two more tasks, task1 and task2. Now there are 3 Tasks but only one of them can be active. That's still main(). Then you come to this line: await task1 The await keyword is what allows this whole rigmarole to work. It is an instruction to the event loop to suspend the active task right here, at this point, and possibly allow another Task to become the active one. So to address your first bullet point, await is neither "blocking" nor is it a "process". 
Its purpose is to mark a point at which the event loop gets control back from the active Task. There is another thing happening here. The object that follows the await is called, unimaginatively, an "awaitable" object. Its crucial property is whether or not it is "done." The event loop keeps track of this object; as the loop cycles through its Tasks it will keep checking this object. If it's not done, main() doesn't resume. (This isn't exactly how it's implemented because that would be inefficient, but it's conceptually what's happening.) If you want to say that the await is "blocking" main() until task1 is finished, that's sort-of true; but "blocking" has a technical meaning so it's not the best word to use. In any case, the event loop is not "blocked" at all - it can keep running other Tasks until the awaitable task1 is done. After task1 becomes "done" and main() gets its turn to be the active task, execution continues to the next line of code. Your second bullet point, "await executes coroutine function and task object" is not correct. await doesn't execute anything. As I said, it just marks a point where the Task gets suspended and the event loop gets control back. Its awaitable determines when the Task can be resumed. You say, "await makes [it] possible to pause coroutine process". Not quite right - it ALWAYS suspends the current Task. Whether or not there is a significant delay in the Task's execution depends on whether there are other Tasks that are ready to take over, and also the state of its awaitable. "create_task creates task object and then schedule and execute it as soon as possible." Correct. But "as soon as possible" means the next time the current Task hits an await expression. Other Tasks may get a turn to run first, before the new Task gets a chance to start. Those details are up to the implementation of the event loop. But eventually the new Task will get a turn. In the comments you ask, "Is it safe if I say that plain await, not being involved in any event loop or any kind of it, works in blocking manner?" It's absolutely not safe to say that. First of all, there is no such thing as a "plain await". Your task must wait FOR something, otherwise how would the event loop know when to resume? An await without an event loop is either a syntax error or a runtime error - it makes no sense, because await is a point where the Task and the event loop interact. The main point is that event loops and await expression are intimately related: an await without an event loop is an error; an event loop without any await expressions is useless. The closest you can come to a plain await is this expression: await asyncio.sleep(0) which has the effect of suspending the current Task momentarily, giving the event loop a chance to run other tasks, resuming this Task as soon as possible. One other point is that the code: await task1 is an expression which has a value, in this case the returned value from task1. Since your task1 doesn't return anything this will be None. But if your delayer function looked like this: async def delayer(): await asyncio.sleep(1) return "Hello" then in main() you could write: print(await task1) and you would see "Hello" on the console.
What does asyncio.create_task actually do?
I'm trying to understand how does asyncio.create_task actually work. Suppose I have following code: import asyncio import time async def delayer(): await asyncio.sleep(1) async def messenger(): await asyncio.sleep(1) return "A Message" async def main(): message = await messenger() await delayer() start_time = time.time() asyncio.run(main()) end_time = time.time() - start_time print(end_time) The code will take about 2 seconds. But if I make some changes to the body of main like this: import asyncio import time async def delayer(): await asyncio.sleep(1) async def messenger(): await asyncio.sleep(1) return "A Message" async def main(): task1 = asyncio.create_task(delayer()) task2 = asyncio.create_task(delayer()) await task1 await task2 start_time = time.time() asyncio.run(main()) end_time = time.time() - start_time print(end_time) Now the code will take about 1 second. My understanding from what I read is that await is a blocking process as we can see from the first code. In that code we need to wait 1 second for the messenger function to return, then another second for delayer function. Now the real question come from the second code. We just learnt that await need us to wait for its expression to return. So even if we use async.create_task, shouldn't awaits in one of the function's body block the process and then return whenever it finishes its job, thus should give us 2 seconds for the program to end? If that wasn't the case, can you help me understand the asyncio.create_task? What I know: await is a blocking process await executes coroutine function and task object await makes us possible to pause coroutine process (I don't quite understand about this, too) create_task creates task object and then schedule and execute it as soon as possible What I am expecting: I hope I can get a simple but effective answer about how does asyncio.create_task conduct its work using my sample code.
[ "Perhaps it will help to think in the following way.\nYou cannot understand what await does until you understand what an event loop is. This line:\nasyncio.run(main())\n\ncreates and executes an event loop, which is basically an infinite loop with some methods for allowing an exit - a \"semi-infinite\" loop, so to speak. Until that loop exits, it will be entirely responsible for executing the program. (Here I am assuming that your program has only a single thread and a single Process. I'm not talking about concurrent program in any form.) Each unit of code that can run within an event loop is called a \"Task.\" The idea of the loop is that it can run multiple Tasks by switching from one to another, thus giving the illusion that the CPU is doing more than one thing at a time.\nThe asyncio.run() call does a second thing: it creates a Task, main(). At that moment, it's the only Task. The event loop begins to run the Task at its first line. Initially it runs just like any other function:\nasync def main():\n task1 = asyncio.create_task(delayer())\n task2 = asyncio.create_task(delayer())\n\n await task1\n await task2\n\nIt creates two more tasks, task1 and task2. Now there are 3 Tasks but only one of them can be active. That's still main(). Then you come to this line:\n await task1\n\nThe await keyword is what allows this whole rigmarole to work. It is an instruction to the event loop to suspend the active task right here, at this point, and possibly allow another Task to become the active one. So to address your first bullet point, await is neither \"blocking\" nor is it a \"process\". Its purpose is to mark a point at which the event loop gets control back from the active Task.\nThere is another thing happening here. The object that follows the await is called, unimaginatively, an \"awaitable\" object. Its crucial property is whether or not it is \"done.\" The event loop keeps track of this object; as the loop cycles through its Tasks it will keep checking this object. If it's not done, main() doesn't resume. (This isn't exactly how it's implemented because that would be inefficient, but it's conceptually what's happening.) If you want to say that the await is \"blocking\" main() until task1 is finished, that's sort-of true; but \"blocking\" has a technical meaning so it's not the best word to use. In any case, the event loop is not \"blocked\" at all - it can keep running other Tasks until the awaitable task1 is done. After task1 becomes \"done\" and main() gets its turn to be the active task, execution continues to the next line of code.\nYour second bullet point, \"await executes coroutine function and task object\" is not correct. await doesn't execute anything. As I said, it just marks a point where the Task gets suspended and the event loop gets control back. Its awaitable determines when the Task can be resumed.\nYou say, \"await makes [it] possible to pause coroutine process\". Not quite right - it ALWAYS suspends the current Task. Whether or not there is a significant delay in the Task's execution depends on whether there are other Tasks that are ready to take over, and also the state of its awaitable.\n\"create_task creates task object and then schedule and execute it as soon as possible.\" Correct. But \"as soon as possible\" means the next time the current Task hits an await expression. Other Tasks may get a turn to run first, before the new Task gets a chance to start. Those details are up to the implementation of the event loop. 
But eventually the new Task will get a turn.\nIn the comments you ask, \"Is it safe if I say that plain await, not being involved in any event loop or any kind of it, works in blocking manner?\" It's absolutely not safe to say that. First of all, there is no such thing as a \"plain await\". Your task must wait FOR something, otherwise how would the event loop know when to resume? An await without an event loop is either a syntax error or a runtime error - it makes no sense, because await is a point where the Task and the event loop interact. The main point is that event loops and await expression are intimately related: an await without an event loop is an error; an event loop without any await expressions is useless.\nThe closest you can come to a plain await is this expression:\nawait asyncio.sleep(0)\n\nwhich has the effect of suspending the current Task momentarily, giving the event loop a chance to run other tasks, resuming this Task as soon as possible.\nOne other point is that the code:\nawait task1\n\nis an expression which has a value, in this case the returned value from task1. Since your task1 doesn't return anything this will be None. But if your delayer function looked like this:\nasync def delayer():\n await asyncio.sleep(1)\n return \"Hello\"\n\nthen in main() you could write:\nprint(await task1)\n\nand you would see \"Hello\" on the console.\n" ]
[ 1 ]
[]
[]
[ "coroutine", "python", "python_asyncio" ]
stackoverflow_0074480673_coroutine_python_python_asyncio.txt
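A compact equivalent of the two create_task/await pairs is asyncio.gather, which schedules the coroutines concurrently and collects their return values — this variant should likewise finish in about one second:

import asyncio

async def delayer(name):
    await asyncio.sleep(1)
    return f"{name} done"

async def main():
    # both coroutines run during the same one-second sleep
    results = await asyncio.gather(delayer("task1"), delayer("task2"))
    print(results)   # ['task1 done', 'task2 done']

asyncio.run(main())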
Q: Code for TensorFlow's EfficientNet preprocess_input()? I am using EfficientNet and I want to remove TensorFlow dependencies from my code, and for this I want to make preprocess_input on my own. from tensorflow.keras.applications.efficientnet import preprocess_input Can anyone tell me how to write the preprocess_input function of EfficientNet without using TensorFlow? def preprocess_input(): ...... return I found this repository so far. https://github.com/keras-team/keras-applications/blob/master/keras_applications/efficientnet.py But I am not able to understand the code. A: The EfficientNet model expects the images to have pixels in the range from 0 to 255, so if your images have pixels in that range you do not need to preprocess the input.
Code for TensorFlow's EfficientNet preprocess_input()?
I am using EfficientNet and I want to remove TensorFlow dependencies from my code, and for this I want to make preprocess_input on my own. from tensorflow.keras.applications.efficientnet import preprocess_input Can anyone tell me how to write the preprocess_input function of EfficientNet without using TensorFlow? def preprocess_input(): ...... return I found this repository so far. https://github.com/keras-team/keras-applications/blob/master/keras_applications/efficientnet.py But I am not able to understand the code.
[ "Efficient net model expect the images to have pixels in the range from 0 to 255 so if your images have pixels in that range you do not need to preprocess the input\n" ]
[ 1 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0074472305_keras_python_tensorflow.txt
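Taking the answer at face value, a TensorFlow-free stand-in can simply pass the pixels through — tf.keras documents its EfficientNet preprocess_input as a pass-through because the Rescaling/Normalization layers live inside the model. This is a hedged sketch of that behavior; note the older keras-applications file linked in the question did apply its own scaling:

import numpy as np

def preprocess_input(x):
    """Pass-through stand-in for tf.keras EfficientNet preprocessing.

    Inputs are expected to stay in the 0-255 range; the model itself
    normalizes them internally.
    """
    return np.asarray(x, dtype="float32")

batch = np.random.randint(0, 256, size=(1, 224, 224, 3))
print(preprocess_input(batch).dtype, preprocess_input(batch).max())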
Q: save the pillow img result, and without vector line just the pure picture Purpose: (Python) save the Pillow image result without the axes/vector lines, just the pure picture. I'm working with the picture's RGB/HSV (0~255) values to make the image color. I accidentally saved the masked one; I want to save the plt.show output (the one after filtering with the mask). Here is the pic link: https://imgur.com/a/eYVqHA9 and my script (it's a simple issue; I'm new to using Pillow and dealing with images in Python): from PIL import Image import pytesseract import cv2 import numpy as np from os import listdir from os.path import isfile, join import matplotlib.pyplot as plt path_01 = "/home/student_joy/desktop/output_02/" output_02_onlyfiles = [f for f in listdir(path_01) if isfile(join(path_01, f))] print(output_02_onlyfiles) k = 0 while k < 29: each_file_path_output_02 = '/home/student_joy/desktop/output_02/'+ output_02_onlyfiles[k] # Read the image in grayscale img = cv2.imread(each_file_path_output_02, cv2.IMREAD_GRAYSCALE) img_filtered = img.copy() # Simple editing through a loop on pixels value # 0 ~255 => ( 0 ~ 80), ( 80 ~ 160) , ( 160 ~ 255) for i in range (img.shape[0]): for j in range(img.shape[1]): if (img[i,j] < 40): img_filtered[i,j] = 0 elif (img[i,j] < 185): img_filtered[i,j] = 120 else: img_filtered[i,j] = 255 plt.imshow(img_filtered, cmap='gray') plt.show() plt.imsave(f"/home/student_joy/desktop/output_04_{k}.png", img_filtered) k +=1 I expect to save the pure picture output like (pic 3) in the link. A: You mention you used plt.imsave; plt.savefig should work instead. The working script will be: from PIL import Image import pytesseract import cv2 import numpy as np from os import listdir from os.path import isfile, join import matplotlib.pyplot as plt path_01 = "/home/student_joy/desktop/output_02/" output_02_onlyfiles = [f for f in listdir(path_01) if isfile(join(path_01, f))] print(output_02_onlyfiles) k = 0 while k < 29: each_file_path_output_02 = '/home/student_joy/desktop/output_02/'+ output_02_onlyfiles[k] # Read the image in grayscale img = cv2.imread(each_file_path_output_02, cv2.IMREAD_GRAYSCALE) img_filtered = img.copy() # Simple editing through a loop on pixels value # 0 ~255 => ( 0 ~ 80), ( 80 ~ 160) , ( 160 ~ 255) for i in range (img.shape[0]): for j in range(img.shape[1]): if (img[i,j] < 40): img_filtered[i,j] = 0 elif (img[i,j] < 185): img_filtered[i,j] = 120 else: img_filtered[i,j] = 255 plt.imshow(img_filtered, cmap='gray') plt.show() # this line instead plt.savefig(f"/home/student_joy/desktop/output_04_{k}.png") k +=1
save the pillow img result, and without vector line just the pure picture
Purpose: (python) save the pillow img result, and without vector line just the pure picture I'm making the picture RGB/HSV (0~255) make img color I accidentally saved the mask one; I want to save the plt.show output (the one after filtering with the mask) here is the pic link: https://imgur.com/a/eYVqHA9 and my script: (it is a simple issue; I'm new to using Pillow and dealing with images in Python) from PIL import Image import pytesseract import cv2 import numpy as np from os import listdir from os.path import isfile, join import matplotlib.pyplot as plt path_01 = "/home/student_joy/desktop/output_02/" output_02_onlyfiles = [f for f in listdir(path_01) if isfile(join(path_01, f))] print(output_02_onlyfiles) k = 0 while k < 29: each_file_path_output_02 = '/home/student_joy/desktop/output_02/'+ output_02_onlyfiles[k] # Read the image in grayscale img = cv2.imread(each_file_path_output_02, cv2.IMREAD_GRAYSCALE) img_filtered = img.copy() # Simple editing through a loop on pixels value # 0 ~255 => ( 0 ~ 80), ( 80 ~ 160) , ( 160 ~ 255) for i in range (img.shape[0]): for j in range(img.shape[1]): if (img[i,j] < 40): img_filtered[i,j] = 0 elif (img[i,j] < 185): img_filtered[i,j] = 120 else: img_filtered[i,j] = 255 plt.imshow(img_filtered, cmap='gray') plt.show() plt.imsave(f"/home/student_joy/desktop/output_04_{k}.png", img_filtered) k +=1 I expect to save the pure picture output like (pic 3) in the link
[ "you mention you used plt.imsave\nthe plt.savefig should work\nthe working script will be:\nfrom PIL import Image\nimport pytesseract\nimport cv2 \nimport numpy as np\nfrom os import listdir\nfrom os.path import isfile, join\nimport matplotlib.pyplot as plt\n\npath_01 = \"/home/student_joy/desktop/output_02/\"\noutput_02_onlyfiles = [f for f in listdir(path_01) if isfile(join(path_01, f))]\n\nprint(output_02_onlyfiles)\n\nk = 0\nwhile k < 29:\n each_file_path_output_02 = '/home/student_joy/desktop/output_02/'+ output_02_onlyfiles[k]\n \n # Read the image in grayscale\n img = cv2.imread(each_file_path_output_02, cv2.IMREAD_GRAYSCALE)\n img_filtered = img.copy()\n\n # Simple editing through a loop on pixels value\n\n # 0 ~255 => ( 0 ~ 80), ( 80 ~ 160) , ( 160 ~ 255)\n for i in range (img.shape[0]):\n for j in range(img.shape[1]):\n if (img[i,j] < 40):\n img_filtered[i,j] = 0\n elif (img[i,j] < 185):\n img_filtered[i,j] = 120\n else:\n img_filtered[i,j] = 255\n\n plt.imshow(img_filtered, cmap='gray')\n plt.show()\n\n# this line instead\n plt.savefig(f\"/home/student_joy/desktop/output_04_{k}.png\")\n k +=1\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python", "python_imaging_library" ]
stackoverflow_0074473059_matplotlib_python_python_imaging_library.txt
Q: AttributeError: module 'matplotlib.pyplot' has no attribute 'xlabel' I read all the similar questions about this error, they are either spelling mistakes or importing the matplotlib.pyplot as plt wrong. My code is as follows. import matplotlib.pyplot as plt import matplotlib %matplotlib inline plt.hist(raw_data['smoker'], bins=3, color='gray') plt.xlabel('Smoker') plt.show() I'm not sure what is the reason for this error. Might be library version? I didn't find anything about that This is the error: AttributeError: module 'matplotlib.pyplot' has no attribute 'xlabel' A: I am using matplotlib version 3.5.2 and it works. You can try with this command: pip install matplotlib==3.5.2
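A quick diagnostic sketch to add here (not part of the original answer): this AttributeError very often means Python is importing something other than the real library, for example a local file named matplotlib.py shadowing it, so checking what actually got imported is worth a try before reinstalling:

import matplotlib

print(matplotlib.__version__)  # which release is actually loaded
print(matplotlib.__file__)     # if this path points into your own project,
                               # a local matplotlib.py is shadowing the
                               # real package and produces this error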
AttributeError: module 'matplotlib.pyplot' has no attribute 'xlabel'
I read all the similar questions about this error, they are either spelling mistakes or importing the matplotlib.pyplot as plt wrong. My code is as follows. import matplotlib.pyplot as plt import matplotlib %matplotlib inline plt.hist(raw_data['smoker'], bins=3, color='gray') plt.xlabel('Smoker') plt.show() I'm not sure what is the reason for this error. Might be library version? I didn't find anything about that This is the error: AttributeError: module 'matplotlib.pyplot' has no attribute 'xlabel'
[ "I am using matplotlib version 3.5.2 and it works\nYou can try with this command:\npip install matplotlib==3.5.2 \n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074483898_matplotlib_python.txt
Q: How do I get os.environ to list environment variables of my system, not a user? For context, I am trying to run code that tries to read an environment variable and spits an error: _PySpin.SpinnakerException: Spinnaker: System instance cannot be acquired. Could not load producer. Make sure that the environment variable FLIR_GENTL64_CTI_VS140 exists, and points to the location of the file FLIR_GenTL_v140.cti [-1012] So, after much digging, I found where the file is and went into windows system Properties -> Advanced -> Environment Variables, and to my surprise, there is a variable FLIR_GENTL64_CTI_VS140 and sure enough it points to the appropriate file. In python, if I import os and run os.environ, the following is printed: environ({'ALLUSERSPROFILE': 'C:\\ProgramData', 'APPDATA': 'C:\\Users\\Kingdel\\AppData\\Roaming', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'COMMONPROGRAMFILES(X86)': 'C:\\Program Files (x86)\\Common Files', 'COMMONPROGRAMW6432': 'C:\\Program Files\\Common Files', 'COMPUTERNAME': 'KINGDEL', 'COMSPEC': 'C:\\WINDOWS\\system32\\cmd.exe', 'CONDA_DEFAULT_ENV': 'PointLock_pyspin', 'CONDA_PREFIX': 'C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin', 'CONDA_PROMPT_MODIFIER': '(PointLock_pyspin) ', 'CONDA_SHLVL': '1', 'DRIVERDATA': 'C:\\Windows\\System32\\Drivers\\DriverData', 'FC2PATH': 'C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64', 'FPS_BROWSER_APP_PROFILE_STRING': 'Internet Explorer', 'FPS_BROWSER_USER_PROFILE_STRING': 'Default', 'HOMEDRIVE': 'C:', 'HOMEPATH': '\\Users\\Kingdel', 'IDEA_INITIAL_DIRECTORY': 'C:\\Users\\Kingdel\\Desktop', 'LOCALAPPDATA': 'C:\\Users\\Kingdel\\AppData\\Local', 'LOGONSERVER': '\\\\KINGDEL', 'NIEXTCCOMPILERSUPP': 'C:\\Program Files (x86)\\National Instruments\\Shared\\ExternalCompilerSupport\\C\\', 'NUMBER_OF_PROCESSORS': '4', 'ONEDRIVE': 'C:\\Users\\Kingdel\\OneDrive', 'OS': 'Windows_NT', 'PATH': 'C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Library\\mingw-w64\\bin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Library\\usr\\bin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Library\\bin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Scripts;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\bin;C:\\ProgramData\\Anaconda3\\condabin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\\WINDOWS\\System32\\OpenSSH;C:\\Program Files (x86)\\IVI Foundation\\VISA\\WinNT\\Bin;C:\\Program Files\\IVI Foundation\\VISA\\Win64\\Bin;C:\\Program Files (x86)\\IVI Foundation\\VISA\\WinNT\\Bin;C:\\Program Files\\MATLAB\\R2012b\\runtime\\win64;C:\\Program Files\\MATLAB\\R2012b\\bin;C:\\Program Files\\Microsoft Windows Performance Toolkit;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64;C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64\\vs2013;C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64\\vs2015;C:\\Users\\Kingdel\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\Kingdel\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\Kingdel\\AppData\\Local\\Microsoft\\WindowsApps;.', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC', 'PROCESSOR_ARCHITECTURE': 'AMD64', 'PROCESSOR_IDENTIFIER': 'Intel64 Family 6 Model 58 Stepping 9, GenuineIntel', 'PROCESSOR_LEVEL': '6', 'PROCESSOR_REVISION': '3a09', 'PROGRAMDATA': 'C:\\ProgramData', 'PROGRAMFILES': 'C:\\Program Files', 'PROGRAMFILES(X86)': 'C:\\Program Files (x86)', 'PROGRAMW6432': 'C:\\Program 
Files', 'PROMPT': '(PointLock_pyspin) $P$G', 'PSMODULEPATH': 'C:\\Program Files\\WindowsPowerShell\\Modules;C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\Modules', 'PUBLIC': 'C:\\Users\\Public', 'PYCHARM_HOSTED': '1', 'PYTHONIOENCODING': 'UTF-8', 'PYTHONPATH': 'C:\\Users\\Kingdel\\Documents\\GitHub\\spinnaker_python', 'PYTHONUNBUFFERED': '1', 'SESSIONNAME': 'Console', 'SYSTEMDRIVE': 'C:', 'SYSTEMROOT': 'C:\\WINDOWS', 'TEMP': 'C:\\Users\\Kingdel\\AppData\\Local\\Temp', 'TMP': 'C:\\Users\\Kingdel\\AppData\\Local\\Temp', 'USERDOMAIN': 'KINGDEL', 'USERDOMAIN_ROAMINGPROFILE': 'KINGDEL', 'USERNAME': 'Kingdel', 'USERPROFILE': 'C:\\Users\\Kingdel', 'VS100COMNTOOLS': 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\Common7\\Tools\\', 'VXIPNPPATH': 'C:\\Program Files (x86)\\IVI Foundation\\VISA\\', 'VXIPNPPATH64': 'C:\\Program Files\\IVI Foundation\\VISA\\', 'WINDIR': 'C:\\WINDOWS'}) Anyway, the point is that it is a different set of variables than I see in windows system Properties -> Advanced -> Environment Variables: Also, what os.environ prints does not seem to be the User variables for the my current user either. Anyway, my primary question is, of course, how do I get FLIR_GENTL64_CTI_VS140 to show up as an environment variable in my python, given that it is an environment variable, at least on my system? I suspect that the answer has something to do with python os.environ['USERNAME'] being 'Kingdel', while the environment variables listed under windows system Properties -> Advanced -> Environment Variables is 'SYSTEM'. This is probably because python is installed on the user Kindel instead of above any user. But, I am hoping for a solution that does not require reinstalling python. Is there a os.change_user type of command or something? Or maybe I can clone over environment variables from my system to the appropriate user somehow or something like that? Thank you! I tried uninstalling the SDK (and it's associated programs) that I am trying to use and to reinstall it on my user rather than directly on C drive, thinking that might automatically create the correct variables on my user, but it turns out that I cannot install the program within the user directory (maybe that is a windows thing as I am primarily a Mac user). I tried to find a way to change user with python using os.setuid(), but the solution I found for doing that used pwd package, but this is being done on windows; so, I could not do that and did not find a work around. A: Well, the problem is fixed. os.environ now prints out the environment variables found in windows system properties->advanced->environment variables. I am only guessing, but I think that restarting my windows machine fixed this issue. I deleted my conda environment. I uninstalled the Spinnaker SDK and then reinstalled the Spinnaker SDK—in the same location as originally installed. Then, I made my environment again from scratch and followed the identical process for installing, except I did add to my path (in windows system properties->advanced->environment variables) the path to my environment in conda which holds python. I honestly do not think that the uninstall/reinstall nor adding the path did anything, because again, the problem was os.environ printing different environment variables than listed in windows system properties->advanced->environment variables, which seems strange. Nominally, I expect this to have something to do with when os.environ made its mapping to my environment variables, which according to the docs is when I import os. 
Obviously, I was rerunning that import every time that I ran my code, but it was not being updated. Suspecting that maybe it had to do with when I opened my IDE, I had closed and relaunched my IDE, but that did nothing. So, I think it was fixed when I rebooted. Or perhaps, the uninstalling process did not remove the Environment variables from my system such that when I reinstalled everything, including my new environment with the os package being on that environment, that is when os created its mapping and this time included the "new" environment variables that I needed. I am not entirely sure. I guess if trying to use python spinnaker, install everything once, reboot if you run into problems, and then maybe uninstall everything and reinstall. And btw, import numpy before you import PySpin. Hope there are no more issues! os.environ docs
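For readers hitting the same snapshot problem, a workaround sketch (an addition, not from the original answer) is to read machine-level variables directly from the Windows registry, which avoids rebooting; the registry path below is the standard location for system-wide environment variables:

import os
import winreg

def read_system_env(name):
    # machine-level variables live at this registry path; reading it
    # bypasses the snapshot that os.environ took at interpreter start
    path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        value, _ = winreg.QueryValueEx(key, name)
        return value

# hypothetical usage for the variable discussed above:
# os.environ["FLIR_GENTL64_CTI_VS140"] = read_system_env("FLIR_GENTL64_CTI_VS140")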
How do I get os.environ to list environment variables of my system, not a user?
For context, I am trying to run code that tries to read an environment variable and spits an error: _PySpin.SpinnakerException: Spinnaker: System instance cannot be acquired. Could not load producer. Make sure that the environment variable FLIR_GENTL64_CTI_VS140 exists, and points to the location of the file FLIR_GenTL_v140.cti [-1012] So, after much digging, I found where the file is and went into windows system Properties -> Advanced -> Environment Variables, and to my surprise, there is a variable FLIR_GENTL64_CTI_VS140 and sure enough it points to the appropriate file. In python, if I import os and run os.environ, the following is printed: environ({'ALLUSERSPROFILE': 'C:\\ProgramData', 'APPDATA': 'C:\\Users\\Kingdel\\AppData\\Roaming', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'COMMONPROGRAMFILES(X86)': 'C:\\Program Files (x86)\\Common Files', 'COMMONPROGRAMW6432': 'C:\\Program Files\\Common Files', 'COMPUTERNAME': 'KINGDEL', 'COMSPEC': 'C:\\WINDOWS\\system32\\cmd.exe', 'CONDA_DEFAULT_ENV': 'PointLock_pyspin', 'CONDA_PREFIX': 'C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin', 'CONDA_PROMPT_MODIFIER': '(PointLock_pyspin) ', 'CONDA_SHLVL': '1', 'DRIVERDATA': 'C:\\Windows\\System32\\Drivers\\DriverData', 'FC2PATH': 'C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64', 'FPS_BROWSER_APP_PROFILE_STRING': 'Internet Explorer', 'FPS_BROWSER_USER_PROFILE_STRING': 'Default', 'HOMEDRIVE': 'C:', 'HOMEPATH': '\\Users\\Kingdel', 'IDEA_INITIAL_DIRECTORY': 'C:\\Users\\Kingdel\\Desktop', 'LOCALAPPDATA': 'C:\\Users\\Kingdel\\AppData\\Local', 'LOGONSERVER': '\\\\KINGDEL', 'NIEXTCCOMPILERSUPP': 'C:\\Program Files (x86)\\National Instruments\\Shared\\ExternalCompilerSupport\\C\\', 'NUMBER_OF_PROCESSORS': '4', 'ONEDRIVE': 'C:\\Users\\Kingdel\\OneDrive', 'OS': 'Windows_NT', 'PATH': 'C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Library\\mingw-w64\\bin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Library\\usr\\bin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Library\\bin;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\Scripts;C:\\ProgramData\\Anaconda3\\envs\\PointLock_pyspin\\bin;C:\\ProgramData\\Anaconda3\\condabin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\\WINDOWS\\System32\\OpenSSH;C:\\Program Files (x86)\\IVI Foundation\\VISA\\WinNT\\Bin;C:\\Program Files\\IVI Foundation\\VISA\\Win64\\Bin;C:\\Program Files (x86)\\IVI Foundation\\VISA\\WinNT\\Bin;C:\\Program Files\\MATLAB\\R2012b\\runtime\\win64;C:\\Program Files\\MATLAB\\R2012b\\bin;C:\\Program Files\\Microsoft Windows Performance Toolkit;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64;C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64\\vs2013;C:\\Program Files\\Point Grey Research\\FlyCapture2\\bin64\\vs2015;C:\\Users\\Kingdel\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\Kingdel\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\Kingdel\\AppData\\Local\\Microsoft\\WindowsApps;.', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC', 'PROCESSOR_ARCHITECTURE': 'AMD64', 'PROCESSOR_IDENTIFIER': 'Intel64 Family 6 Model 58 Stepping 9, GenuineIntel', 'PROCESSOR_LEVEL': '6', 'PROCESSOR_REVISION': '3a09', 'PROGRAMDATA': 'C:\\ProgramData', 'PROGRAMFILES': 'C:\\Program Files', 'PROGRAMFILES(X86)': 'C:\\Program Files (x86)', 'PROGRAMW6432': 'C:\\Program Files', 'PROMPT': '(PointLock_pyspin) $P$G', 'PSMODULEPATH': 'C:\\Program 
Files\\WindowsPowerShell\\Modules;C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\Modules', 'PUBLIC': 'C:\\Users\\Public', 'PYCHARM_HOSTED': '1', 'PYTHONIOENCODING': 'UTF-8', 'PYTHONPATH': 'C:\\Users\\Kingdel\\Documents\\GitHub\\spinnaker_python', 'PYTHONUNBUFFERED': '1', 'SESSIONNAME': 'Console', 'SYSTEMDRIVE': 'C:', 'SYSTEMROOT': 'C:\\WINDOWS', 'TEMP': 'C:\\Users\\Kingdel\\AppData\\Local\\Temp', 'TMP': 'C:\\Users\\Kingdel\\AppData\\Local\\Temp', 'USERDOMAIN': 'KINGDEL', 'USERDOMAIN_ROAMINGPROFILE': 'KINGDEL', 'USERNAME': 'Kingdel', 'USERPROFILE': 'C:\\Users\\Kingdel', 'VS100COMNTOOLS': 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\Common7\\Tools\\', 'VXIPNPPATH': 'C:\\Program Files (x86)\\IVI Foundation\\VISA\\', 'VXIPNPPATH64': 'C:\\Program Files\\IVI Foundation\\VISA\\', 'WINDIR': 'C:\\WINDOWS'}) Anyway, the point is that it is a different set of variables than I see in windows system Properties -> Advanced -> Environment Variables: Also, what os.environ prints does not seem to be the User variables for the my current user either. Anyway, my primary question is, of course, how do I get FLIR_GENTL64_CTI_VS140 to show up as an environment variable in my python, given that it is an environment variable, at least on my system? I suspect that the answer has something to do with python os.environ['USERNAME'] being 'Kingdel', while the environment variables listed under windows system Properties -> Advanced -> Environment Variables is 'SYSTEM'. This is probably because python is installed on the user Kindel instead of above any user. But, I am hoping for a solution that does not require reinstalling python. Is there a os.change_user type of command or something? Or maybe I can clone over environment variables from my system to the appropriate user somehow or something like that? Thank you! I tried uninstalling the SDK (and it's associated programs) that I am trying to use and to reinstall it on my user rather than directly on C drive, thinking that might automatically create the correct variables on my user, but it turns out that I cannot install the program within the user directory (maybe that is a windows thing as I am primarily a Mac user). I tried to find a way to change user with python using os.setuid(), but the solution I found for doing that used pwd package, but this is being done on windows; so, I could not do that and did not find a work around.
[ "Well, the problem is fixed. os.environ now prints out the environment variables found in windows system properties->advanced->environment variables.\nI am only guessing, but I think that restarting my windows machine fixed this issue. I deleted my conda environment. I uninstalled the Spinnaker SDK and then reinstalled the Spinnaker SDK—in the same location as originally installed. Then, I made my environment again from scratch and followed the identical process for installing, except I did add to my path (in windows system properties->advanced->environment variables) the path to my environment in conda which holds python.\nI honestly do not think that the uninstall/reinstall nor adding the path did anything, because again, the problem was os.environ printing different environment variables than listed in windows system properties->advanced->environment variables, which seems strange.\nNominally, I expect this to have something to do with when os.environ made its mapping to my environment variables, which according to the docs is when I import os. Obviously, I was rerunning that import every time that I ran my code, but it was not being updated. Suspecting that maybe it had to do with when I opened my IDE, I had closed and relaunched my IDE, but that did nothing. So, I think it was fixed when I rebooted. Or perhaps, the uninstalling process did not remove the Environment variables from my system such that when I reinstalled everything, including my new environment with the os package being on that environment, that is when os created its mapping and this time included the \"new\" environment variables that I needed. I am not entirely sure.\nI guess if trying to use python spinnaker, install everything once, reboot if you run into problems, and then maybe uninstall everything and reinstall. And btw, import numpy before you import PySpin. Hope there are no more issues!\nos.environ docs\n" ]
[ 0 ]
[]
[]
[ "environment_variables", "flir", "python", "spinnaker" ]
stackoverflow_0074483730_environment_variables_flir_python_spinnaker.txt
Q: Can I query elasticsearch inside spark map method? I can query elasticsearch from spark like this: spark.read.format( "es" ).options( **{ "es.index.auto.create": "true", 'es.resource': index_name, 'es.nodes.wan.only': 'true', 'es.nodes': elasticsearch_host, 'es.port': elasticsearch_port, 'es.net.http.auth.user': elasticsearch_user, 'es.net.http.auth.pass': elasticsearch_password, 'es.query': query } ).load() but how can I access ES inside the map method? something like this: df.rdd.map( lambda x: query_es({"match": {"name": x[1]}}) ) A: I fixed this problem by myself yesterday. The solution is relatively simple. df.rdd.map( lambda x: ElasticSearch().search(index=index, query={"match": {"name": x[1]}}) ) Yes, simply constructing a new ElasticSearch() object works. If you encounter obstacles at this step, such as a connection error, try setting xpack.security.enabled=false and changing the protocol from https to http
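A refinement sketch (an addition, with a placeholder host and index name, and assuming the official elasticsearch-py client in its 8.x form with the query keyword): creating one client per row is expensive, so mapPartitions with one client per partition is usually the cheaper pattern:

from elasticsearch import Elasticsearch

def query_partition(rows):
    # one client per partition instead of one per record
    es = Elasticsearch("http://elasticsearch_host:9200")  # placeholder address
    for x in rows:
        yield es.search(index="my_index", query={"match": {"name": x[1]}})

results = df.rdd.mapPartitions(query_partition)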
Can I query elasticsearch inside spark map method?
I can query elasticsearch from spark like this: spark.read.format( "es" ).options( **{ "es.index.auto.create": "true", 'es.resource': index_name, 'es.nodes.wan.only': 'true', 'es.nodes': elasticsearch_host, 'es.port': elasticsearch_port, 'es.net.http.auth.user': elasticsearch_user, 'es.net.http.auth.pass': elasticsearch_password, 'es.query': query } ).load() but how can I access ES inside the map method? something like this: df.rdd.map( lambda x: query_es({"match": {"name": x[1]}}) )
[ "I fixed this problem by myself yesterday. The solution is relatively simple.\ndf.rdd.map(\n lambda x: ElasticSearch().search(index=index, query={\"match\": {\"name\": x[1]}})\n)\n\nyes, just new an object of ElasticSearch() will works.\nIf you encountered obstacles in this step such as Connection Error or etc. Try set xpack.security.enabled=false and modify the procotol from https to http\n" ]
[ 0 ]
[]
[]
[ "elasticsearch", "pyspark", "python" ]
stackoverflow_0074473030_elasticsearch_pyspark_python.txt
Q: How can I groupby row with multi-column with pandas? I'm a beginner with pandas, so I have a question below. There are a lot of answers about grouping rows, but I can't find the one I want. Anyway, my data table is below. COLUMN1 COLUMN2 COLUMN3 0 APPLE RED JOHN, JANE 1 BANANA YELLOW SMITH 1 BANANA YELLOW EMILY 2 GRAPE VIOLET JESSICA 2 GRAPE VIOLET REIRA 2 GRAPE VIOLET EMMA 2 GRAPE PURPLE JOE 2 GRAPE PURPLE LISA 3 MELON GREEN RIO 3 MELON GREEN REIRA .. and I want to get this table. (edit : EXCEPT YELLOW) COLUMN1 COLUMN2 COLUMN3 0 APPLE RED JOHN, JANE 1 BANANA YELLOW SMITH 1 BANANA YELLOW EMILY 2 GRAPE VIOLET JESSICA, REIRA, EMMA 2 GRAPE PURPLE JOE, LISA 3 MELON GREEN RIO, REIRA .. How can I get this? Please give me a hint or an answer; I'd appreciate it a lot. Thank you. A: import pandas as pd df = pd.DataFrame({'col1': ['Apple', 'Banana', 'Banana', 'Grape', 'Grape', 'Grape', 'Apple'], 'col2': ['Red', 'Yellow', 'Yellow', 'Violet', 'Violet', 'Purple', 'Red'], 'col3':['John, Jane', 'Smith', 'Emily', 'Jecica', 'Reira', 'Joe', 'Rio']}) df2 = df.groupby(['col1', 'col2'])['col3'].apply(list).reset_index() df2['col3'] = df2['col3'].apply(lambda x: ', '.join(map(str, x))) df2 To avoid splitting Yellow of Banana, import pandas as pd df = pd.DataFrame({'col1': ['Apple', 'Banana', 'Banana', 'Grape', 'Grape', 'Grape', 'Apple'], 'col2': ['Red', 'Yellow', 'Yellow*', 'Violet', 'Violet', 'Purple', 'Red'], 'col3':['John, Jane', 'Smith', 'Emily', 'Jecica', 'Reira', 'Joe', 'Rio']}) df2 = df.groupby(['col1', 'col2'])['col3'].apply(list).reset_index() df2['col3'] = df2['col3'].apply(lambda x: ', '.join(map(str, x))) df2['col2'] = df2['col2'].replace('Yellow*', 'Yellow') df2 A: You must provide a reproducible example with your question. EXAMPLE: data = [['APPLE', 'RED', 'JOHN, JANE'], ['BANANA', 'YELLOW', 'SMITH'], ['BANANA', 'YELLOW', 'EMILY'], ['GRAPE', 'VIOLET', 'JESSICA'], ['GRAPE', 'VIOLET', 'REIRA'], ['GRAPE', 'VIOLET', 'EMMA'], ['GRAPE', 'PURPLE', 'JOE'], ['GRAPE', 'PURPLE', 'LISA'], ['MELON', 'GREEN', 'RIO'], ['MELON', 'GREEN', 'REIRA']] df = pd.DataFrame(data, index=[0, 1, 1, 2, 2, 2, 2, 2, 3, 3], columns=['col1', 'col2', 'col3']) output(df): col1 col2 col3 0 APPLE RED JOHN, JANE 1 BANANA YELLOW SMITH 1 BANANA YELLOW EMILY 2 GRAPE VIOLET JESSICA 2 GRAPE VIOLET REIRA 2 GRAPE VIOLET EMMA 2 GRAPE PURPLE JOE 2 GRAPE PURPLE LISA 3 MELON GREEN RIO 3 MELON GREEN REIRA First, make col3 into a list: df1 = df.groupby([df.index, 'col1', 'col2']).agg(list).reset_index() output(df1): level_0 col1 col2 col3 0 0 APPLE RED [JOHN, JANE] 1 1 BANANA YELLOW [SMITH, EMILY] 2 2 GRAPE PURPLE [JOE, LISA] 3 2 GRAPE VIOLET [JESSICA, REIRA, EMMA] 4 3 MELON GREEN [RIO, REIRA] Second, join col3 except for YELLOW and explode the YELLOW rows: df1.assign(col3=df1.apply(lambda x: ','.join(x['col3']) if x['col2'] != 'YELLOW' else x['col3'] , axis=1)).explode('col3') result: level_0 col1 col2 col3 0 0 APPLE RED JOHN, JANE 1 1 BANANA YELLOW SMITH 1 1 BANANA YELLOW EMILY 2 2 GRAPE PURPLE JOE,LISA 3 2 GRAPE VIOLET JESSICA,REIRA,EMMA 4 3 MELON GREEN RIO,REIRA If you want level_0 as the index, use set_index or set_axis, and next time please include a reproducible example.
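A shorter route to the same result, sketched as an addition and assuming the df from the second answer (uppercase values, index [0, 1, 1, 2, ...]): join col3 for every group except YELLOW, then append the YELLOW rows back unchanged so they stay on separate lines:

import pandas as pd

mask = df['col2'] != 'YELLOW'
joined = (df[mask]
          .groupby([df[mask].index, 'col1', 'col2'])['col3']
          .agg(', '.join)                      # 'JESSICA, REIRA, EMMA' etc.
          .reset_index(level=['col1', 'col2']))
result = pd.concat([joined, df[~mask]]).sort_index()
print(result)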
How can I groupby row with multi-column with pandas?
I'm a beginner with pandas, so I have a question below. There are a lot of answers about grouping rows, but I can't find the one I want. Anyway, my data table is below. COLUMN1 COLUMN2 COLUMN3 0 APPLE RED JOHN, JANE 1 BANANA YELLOW SMITH 1 BANANA YELLOW EMILY 2 GRAPE VIOLET JESSICA 2 GRAPE VIOLET REIRA 2 GRAPE VIOLET EMMA 2 GRAPE PURPLE JOE 2 GRAPE PURPLE LISA 3 MELON GREEN RIO 3 MELON GREEN REIRA .. and I want to get this table. (edit : EXCEPT YELLOW) COLUMN1 COLUMN2 COLUMN3 0 APPLE RED JOHN, JANE 1 BANANA YELLOW SMITH 1 BANANA YELLOW EMILY 2 GRAPE VIOLET JESSICA, REIRA, EMMA 2 GRAPE PURPLE JOE, LISA 3 MELON GREEN RIO, REIRA .. How can I get this? Please give me a hint or an answer; I'd appreciate it a lot. Thank you.
[ "import pandas as pd\ndf = pd.DataFrame({'col1': ['Apple', 'Banana', 'Banana', 'Grape', 'Grape', 'Grape', 'Apple'], 'col2': ['Red', 'Yellow', 'Yellow', 'Violet', 'Violet', 'Purple', 'Red'], 'col3':['John, Jane', 'Smith', 'Emily', 'Jecica', 'Reira', 'Joe', 'Rio']})\ndf2 = df.groupby(['col1', 'col2'])['col3'].apply(list).reset_index()\ndf2['col3'] = df2['col3'].apply(lambda x: ', '.join(map(str, x)))\ndf2\n\nTo avoid splitting Yellow of Banana,\nimport pandas as pd\ndf = pd.DataFrame({'col1': ['Apple', 'Banana', 'Banana', 'Grape', 'Grape', 'Grape', 'Apple'], 'col2': ['Red', 'Yellow', 'Yellow*', 'Violet', 'Violet', 'Purple', 'Red'], 'col3':['John, Jane', 'Smith', 'Emily', 'Jecica', 'Reira', 'Joe', 'Rio']})\ndf2 = df.groupby(['col1', 'col2'])['col3'].apply(list).reset_index()\ndf2['col3'] = df2['col3'].apply(lambda x: ', '.join(map(str, x)))\ndf2['col2'] = df2['col2'].replace('Yellow*', 'Yellow')\ndf2\n\n", "must make reproducible example for answer\nEXAMPLE\ndata = [['APPLE', 'RED', 'JOHN, JANE'],\n ['BANANA', 'YELLOW', 'SMITH'],\n ['BANANA', 'YELLOW', 'EMILY'],\n ['GRAPE', 'VIOLET', 'JESSICA'],\n ['GRAPE', 'VIOLET', 'REIRA'],\n ['GRAPE', 'VIOLET', 'EMMA'],\n ['GRAPE', 'PURPLE', 'JOE'],\n ['GRAPE', 'PURPLE', 'LISA'],\n ['MELON', 'GREEN', 'RIO'],\n ['MELON', 'GREEN', 'REIRA']]\ndf = pd.DataFrame(data, index=[0, 1, 1, 2, 2, 2, 2, 2, 3, 3], columns=['col1', 'col2', 'col3'])\n\noutput(df):\n col1 col2 col3\n0 APPLE RED JOHN, JANE\n1 BANANA YELLOW SMITH\n1 BANANA YELLOW EMILY\n2 GRAPE VIOLET JESSICA\n2 GRAPE VIOLET REIRA\n2 GRAPE VIOLET EMMA\n2 GRAPE PURPLE JOE\n2 GRAPE PURPLE LISA\n3 MELON GREEN RIO\n3 MELON GREEN REIRA\n\n\nFirst\nmake col3 to list\n\ndf1 = df.groupby([df.index, 'col1', 'col2']).agg(list).reset_index()\n\noutput(df1):\n level_0 col1 col2 col3\n0 0 APPLE RED [JOHN, JANE]\n1 1 BANANA YELLOW [SMITH, EMILY]\n2 2 GRAPE PURPLE [JOE, LISA]\n3 2 GRAPE VIOLET [JESSICA, REIRA, EMMA]\n4 3 MELON GREEN [RIO, REIRA]\n\n\nSecond\njoin col3 except yellow and explode yellow\n\ndf1.assign(col3=df1.apply(lambda x: ','.join(x['col3']) if x['col2'] != 'YELLOW' else x['col3'] , axis=1)).explode('col3')\n\nresult:\n level_0 col1 col2 col3\n0 0 APPLE RED JOHN, JANE\n1 1 BANANA YELLOW SMITH\n1 1 BANANA YELLOW EMILY\n2 2 GRAPE PURPLE JOE,LISA\n3 2 GRAPE VIOLET JESSICA,REIRA,EMMA\n4 3 MELON GREEN RIO,REIRA\n\nif you want level_0 to index, use set_index or set_axis\nand at next time make reproducible exmaple for answer.\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074483651_pandas_python.txt
Q: how can I store more objects from another class in other classes Field I've been looking for my problem in the Django documentation and couldn't find a solution. My problem is that in the API panel I cannot insert more objects from the "ActorsAndDirectors" class into the "cast" Field in the "Movie" class. I can only insert one. How do I transform the cast field so I can insert multiple objects from the "ActorsAndDirectors" class into the "Movie" class? This is the code: ` class ActorsAndDirectors(models.Model): name = models.CharField(max_length=32, blank=False, null=False) surname = models.CharField(max_length=32, blank=False, null=False) role = models.CharField(max_length=11, blank=False, null=False) def __str__(self): return f"{self.name} {self.surname}" class Movie(models.Model): title = models.CharField(max_length=50, blank=False, null=False, unique=True) description = models.TextField(max_length=400) cast = models.ForeignKey(ActorsAndDirectors, on_delete=models.CASCADE) premiere = models.DateField() updated = models.DateTimeField() slug = models.SlugField() def number_of_ratings(self): return Rating.objects.filter(movie=self).count() def avg_rating(self): score = 0 ratings = Rating.objects.filter(movie=self) for rating in ratings: score +=rating.stars if len(ratings) > 0: return score/len(ratings) else: return 0 def __str__(self): return f"{self.title}, ({self.premiere})" ` I looked through the Django documentation for some kind of list Field but with no good results. I'm looking for help with how to transform the field, or maybe some other explanation of my problem. A: What you are looking for is a many-to-many relation, where many actors and directors can participate in many different movies. I would also add that when querying the database it is slower to look up strings. Maybe you should check the choices option for your ActorsAndDirectors role field. This would help if you try to filter directors or actors later on. Another option would be a table and a FK.
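A minimal sketch of the suggested change, assuming the models from the question (the related_name is an illustrative choice, not from the answer):

from django.db import models

class Movie(models.Model):
    # a ManyToManyField lets one movie hold any number of cast members
    # and one person appear in any number of movies
    cast = models.ManyToManyField(ActorsAndDirectors, related_name='movies')
    # ... the remaining fields from the original model stay unchanged

# usage sketch after migrating (variable names are illustrative):
# movie.cast.add(actor_one, actor_two)
# movie.cast.all()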
how can I store more objects from another class in other classes Field
I've been looking for my problem in the Django documentation and couldn't find a solution. My problem is that in the API panel I cannot insert more objects from the "ActorsAndDirectors" class into the "cast" Field in the "Movie" class. I can only insert one. How do I transform the cast field so I can insert multiple objects from the "ActorsAndDirectors" class into the "Movie" class? This is the code: ` class ActorsAndDirectors(models.Model): name = models.CharField(max_length=32, blank=False, null=False) surname = models.CharField(max_length=32, blank=False, null=False) role = models.CharField(max_length=11, blank=False, null=False) def __str__(self): return f"{self.name} {self.surname}" class Movie(models.Model): title = models.CharField(max_length=50, blank=False, null=False, unique=True) description = models.TextField(max_length=400) cast = models.ForeignKey(ActorsAndDirectors, on_delete=models.CASCADE) premiere = models.DateField() updated = models.DateTimeField() slug = models.SlugField() def number_of_ratings(self): return Rating.objects.filter(movie=self).count() def avg_rating(self): score = 0 ratings = Rating.objects.filter(movie=self) for rating in ratings: score +=rating.stars if len(ratings) > 0: return score/len(ratings) else: return 0 def __str__(self): return f"{self.title}, ({self.premiere})" ` I looked through the Django documentation for some kind of list Field but with no good results. I'm looking for help with how to transform the field, or maybe some other explanation of my problem.
[ "What you are looking for is a Many to Many relation. Where many actors and directors can participate in many different movies.\nI would like to complement that when querying the database its slower to look for strings. Maybe you should check this choices option for your ActorsAndDirectors role field.\nThis would help if you try to filter directors or actors later on. Another option would be a Table and a FK.\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "python" ]
stackoverflow_0074483636_django_django_models_django_rest_framework_python.txt
Q: Adding a list to df and getting Error invalid __array_struct__ Hi, so I have a list of 54000 items, some of which say None. I want to add this list as a column to a df that has 54000 rows as well. I think I need to add an N/A to the empty rows but I can't seem to do that. This one gives me: Error invalid __array_struct__ df.insert(loc = 0, column = 'name', value = list) while this gives me: object of type 'NoneType' has no len() df.append(pd.DataFrame(list, columns=['name']), ignore_index=True) This is what the first few items in the list print as: [[40.614479, -73.926401], None, None, [38.851787574468084, -77.32203340425532]] A: What you are describing should be as easy as making the list an additional column in the dataframe: df['new_name'] = the_list Unless there's something very unusual with the list. Btw I'm assuming you are using a different name for the list than 'list', and this is just an example. If you aren't, you've either overwritten the variable for the function, or are trying to assign the function to the dataframe. I hope that helps.
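A sketch that may avoid the error with df.insert specifically (an addition; the_list follows the answer's placeholder name): build an object-dtype Series aligned to the frame's index, so pandas keeps each element (a [lat, lon] pair or None) as-is instead of trying to coerce the nested lists into a 2-D array:

import pandas as pd

coords = pd.Series(the_list, index=df.index, dtype=object)
df.insert(loc=0, column='name', value=coords)  # None entries count as missing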
Adding a list to df and getting Error invalid __array_struct__
Hi, so I have a list of 54000 items, some of which say None. I want to add this list as a column to a df that has 54000 rows as well. I think I need to add an N/A to the empty rows but I can't seem to do that. This one gives me: Error invalid __array_struct__ df.insert(loc = 0, column = 'name', value = list) while this gives me: object of type 'NoneType' has no len() df.append(pd.DataFrame(list, columns=['name']), ignore_index=True) This is what the first few items in the list print as: [[40.614479, -73.926401], None, None, [38.851787574468084, -77.32203340425532]]
[ "What you are describing should be as easy as making the list an additional column in the dataframe:\ndf['new_name'] = the_list\n\nUnless there's something very unusual with the list.\nBtw I'm assuming you are using a different name for the list than 'list', and this is just an example. If you aren't, you've either overwritten the variable for the function, or are trying to assign the function to the dataframe.\nI hope that helps.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074483951_pandas_python.txt
Q: Is there any quick way to do the following in sql or python? I have a dataset of size 1TB containing 3 columns and about 20 billion rows. I would like to split this data in some random order into two sub-datasets in approximately 80/20 chunks. However, the two datasets should be non-overlapping, meaning no entry in one chunk should appear in another chunk. An entry in one column of one chunk should not appear in any column of the other chunk. As an example, suppose an example data is: fruit apple seeds vegetable carrot yellow crops fruit lettuce green onion vegetable lettuce red health The two sub-datasets can be fruit apple seeds crops fruit lettuce lettuce red health and vegetable carrot yellow green onion vegetable Is there any efficient way to do this for such large data?
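A sketch of one way to honor the cross-column constraint (an addition beyond the thread; toy-scale, so at 20 billion rows it would need a distributed or disk-backed variant): rows that share any value must land in the same chunk, which makes this a connected-components problem, solvable with union-find:

from collections import defaultdict

parent = {}

def find(v):
    parent.setdefault(v, v)
    while parent[v] != v:
        parent[v] = parent[parent[v]]  # path compression
        v = parent[v]
    return v

def union(a, b):
    parent[find(a)] = find(b)

rows = [
    ("fruit", "apple", "seeds"),
    ("vegetable", "carrot", "yellow"),
    ("crops", "fruit", "lettuce"),
    ("green", "onion", "vegetable"),
    ("lettuce", "red", "health"),
]
for first, *rest in rows:
    for value in rest:
        union(first, value)  # every value in a row joins one component

components = defaultdict(list)
for row in rows:
    components[find(row[0])].append(row)

# whole components are then routed to the 80% or 20% side until the target
# proportions are reached; here the two components match the example split
print(list(components.values()))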
Is there any quick way to do the following in sql or python?
I have a dataset of size 1TB containing 3 columns and about 20 billion rows. I would like to split this data in some random order into two sub-datasets in approximately 80/20 chunks. However, the two datasets should be non-overlapping, meaning no entry in one chunk should appear in another chunk. An entry in one column of one chunk should not appear in any column of the other chunk. As an example, suppose an example data is: fruit apple seeds vegetable carrot yellow crops fruit lettuce green onion vegetable lettuce red health The two sub-datasets can be fruit apple seeds crops fruit lettuce lettuce red health and vegetable carrot yellow green onion vegetable Is there any efficient way to do this for such large data?
[]
[]
[ "You can just iterate over the file and randomly assign rows to sub-data-1 and sub-data-2 according to the proportions you've laid out.\nimport random\nwith open('large_file', 'r') as lf, \nopen('s1', 'w') as s1, open('s2', 'w') as s2:\n for line in lf:\n if random.random() < 0.8:\n s1.write(line)\n else:\n s2.write(line)\n\n" ]
[ -1 ]
[ "python", "sql" ]
stackoverflow_0074483972_python_sql.txt
Q: How can I connect myremotesql to Python? I tried to connect RemoteMySql as a host with PyMySql; it neither shows an error nor does it work. The code is below: db = pymysql.connect( host="remotemysql.com",user="USER", password="PASSWORD",db="DBNAME") cur = db.cursor() cur.execute("INSERT INTO `users` (ID, name, password,email) VALUES (93454623021,'Jeff','12345','mail@gmail.com');") db.close() I also changed the host to localhost, but it showed this error: pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([WinError 10061] No connection could be made because the target machine actively refused it)") It does work when I test it in phpMyAdmin, but it does not work when I do it in any other environment; it does not insert the data into the table. So, what am I actually missing? A: This source code is correct. At issue is: "does the client have TCP connectivity to the server?". It's easy to check. Use any one of these commands. $ ncat remotemysql.com 3306 L 8.0.13-4???3E>Z/l8Q?????hC+!h&CsNmysql_native_password ^C $ $ telnet remotemysql.com 3306 Trying 37.59.55.185... Connected to remotemysql.com. Escape character is '^]'. L 8.0.13-4}??/}>lkfJ??zf+*P }XNN^mysql_native_password ^] telnet> q Connection closed. $ $ time curl http://remotemysql.com:3306 curl: (1) Received HTTP/0.9 when not allowed real 0m0.500s If your local firewall is blocking such packets, and you can change the firewall config, arrange for it to pass outbound TCP port 3306 connections, to support the mysql DB protocol. If you are blocked and cannot change the config, then seek another connectivity solution.
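One more check worth adding to the connectivity advice above: PyMySQL does not autocommit by default, so an INSERT that raises no error but never shows up in the table is the classic symptom of a missing commit. A sketch, assuming the same placeholder credentials:

import pymysql

db = pymysql.connect(host="remotemysql.com", user="USER",
                     password="PASSWORD", db="DBNAME")
try:
    with db.cursor() as cur:
        cur.execute(
            "INSERT INTO `users` (ID, name, password, email) "
            "VALUES (%s, %s, %s, %s)",
            (93454623021, "Jeff", "12345", "mail@gmail.com"),
        )
    db.commit()  # without this, the insert is rolled back on close
finally:
    db.close()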
How can I connect myremotesql to Python?
I tried to connect RemoteMySql as a host with PyMySql; it neither shows an error nor does it work. The code is below: db = pymysql.connect( host="remotemysql.com",user="USER", password="PASSWORD",db="DBNAME") cur = db.cursor() cur.execute("INSERT INTO `users` (ID, name, password,email) VALUES (93454623021,'Jeff','12345','mail@gmail.com');") db.close() I also changed the host to localhost, but it showed this error: pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([WinError 10061] No connection could be made because the target machine actively refused it)") It does work when I test it in phpMyAdmin, but it does not work when I do it in any other environment; it does not insert the data into the table. So, what am I actually missing?
[ "This source code is correct.\nAt issue is: \"does the client have TCP connectivity to the server?\".\nIt's easy to check.\nUse any one of these commands.\n$ ncat remotemysql.com 3306\nL\n8.0.13-4???3E>Z/l8Q?????hC+!h&CsNmysql_native_password\n^C\n$\n$ telnet remotemysql.com 3306\nTrying 37.59.55.185...\nConnected to remotemysql.com.\nEscape character is '^]'.\nL\n8.0.13-4}??/}>lkfJ??zf+*P\n }XNN^mysql_native_password\n^]\ntelnet> q\nConnection closed.\n$\n$ time curl http://remotemysql.com:3306\ncurl: (1) Received HTTP/0.9 when not allowed\n\nreal 0m0.500s\n\nIf your local firewall is blocking such\npackets, and you can change the firewall\nconfig, arrange for it to pass outbound\nTCP port 3306 connections, to support\nthe mysql DB protocol.\nIf you are blocked and cannot change\nthe config, then seek another connectivity\nsolution.\n" ]
[ 1 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0074481975_mysql_python.txt
Q: Print parameter value if it exists for all class methods I have a class I wrote where a majority (but not all) of its methods take in an int parameter foo: class MyClass: def fun_one(self, foo: int): pass def fun_two(self, foo: int, flu: int): pass def fun_three(self, flu: str, foo: int): pass def fun_four(self): pass Is there any way I can make my program log the values of foo whenever they come in in any of the methods without needing to manually go to every relevant function and add print(foo)? Also important to note, sometimes None will be passed as a parameter to these functions. There needs to be a distinction between functions which simply don't have the parameter and functions where the parameter's value is None. The easiest solution I can think of is using regex to find every instance of def … foo … :, then adding a line break and print statement after each line, but I'm trying to see if there's a built-in, nicer way I can do this. Thank you so much! A: You could add this to every method: if 'foo' in locals(): print(foo) A: As a note to people in the future, I ended up using a regex script to do this. In VS Code's find/replace, I used the following: Search: (def.*foo: int.*\n) Replace: $1 print(foo)\n (I'm sure there's a more efficient way to do this, but I'll leave that to the regex experts) This changes the script to the following: class MyClass: def fun_one(self, foo: int): print(foo) pass def fun_two(self, foo: int, flu: int): print(foo) pass def fun_three(self, flu: str, foo: int): print(foo) pass def fun_four(self): pass
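For readers wanting the "nicer, built-in way" the question asks about, here is a decorator sketch (an addition, not from either answer): it inspects each method's signature, wraps only those that declare foo, and still distinguishes foo=None from foo not being passed at all:

import functools
import inspect

def log_foo(cls):
    # wrap every method that declares a 'foo' parameter; no per-method edits
    for name, method in list(vars(cls).items()):
        if not callable(method) or name.startswith("__"):
            continue
        if "foo" not in inspect.signature(method).parameters:
            continue  # methods without foo are left untouched
        def make_wrapper(m):
            @functools.wraps(m)
            def wrapper(*args, **kwargs):
                bound = inspect.signature(m).bind(*args, **kwargs)
                if "foo" in bound.arguments:  # absent vs. explicitly None
                    print(f"{m.__name__}: foo={bound.arguments['foo']}")
                return m(*args, **kwargs)
            return wrapper
        setattr(cls, name, make_wrapper(method))
    return cls

@log_foo
class MyClass:
    def fun_one(self, foo: int): pass
    def fun_four(self): pass

MyClass().fun_one(None)  # prints "fun_one: foo=None"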
Print parameter value if it exists for all class methods
I have a class I wrote where a majority (but not all) of its methods take in an int parameter foo: class MyClass: def fun_one(self, foo: int): pass def fun_two(self, foo: int, flu: int): pass def fun_three(self, flu: str, foo: int): pass def fun_four(self): pass Is there any way I can make my program log the values of foo whenever they come in in any of the methods without needing to manually go to every relevant function and add print(foo)? Also important to note, sometimes None will be passed as a parameter to these functions. There needs to be a distinction between functions which simply don't have the parameter and functions where the parameter's value is None. The easiest solution I can think of is using regex to find every instance of def … foo … :, then adding a line break and print statement after each line, but I'm trying to see if there's a built-in, nicer way I can do this. Thank you so much!
[ "You could add this to every method:\nif 'foo' in locals():\n print(foo)\n\n", "As a note to people in the future, I ended up using a regex script to do this. In VS Code's find/replace, I used the following:\n\nSearch: (def.*foo: int.*\\n)\nReplace: $1 print(foo)\\n\n\n(I'm sure there's a more efficient way to do this, but I'll leave that to the regex experts)\nThis changes the script to the following:\nclass MyClass:\n def fun_one(self, foo: int):\n print(foo)\n pass\n\n def fun_two(self, foo: int, flu: int):\n print(foo)\n pass\n\n def fun_three(self, flu: str, foo: int):\n print(foo)\n pass\n\n def fun_four(self):\n pass\n\n" ]
[ 0, 0 ]
[]
[]
[ "parameters", "python" ]
stackoverflow_0074482117_parameters_python.txt
Q: Align an array of traces in python I have an array of traces that look like this: a really small low part, then a big high part, ending with a low part again. I want to be able to align all those traces ... as closely as I can (so the changes from low to high and the opposite will be at the same indexes). I tried to use cross-correlation but that gave me 0 offset ... I don't know why. def give_offset(y,y2): # limit your signal length to speed things up lim = 2000 # do the actual correlation corr = scipy.signal.correlate(y[:lim], y2[:lim], mode='full') # The offset is the maximum of your correlation array, # itself being offset by (lim - 1): offset = np.argmax(corr) - (lim - 1) return offset I didn't find anything over the internet and I'm so confused since it seems like a common problem that many have faced before. A: I know this is kind of an old question, but I'll answer it anyway for future people interested. The way I solved this problem was by taking a section of the signal from one array and cross-correlating the other arrays with the section. I chose the max correlation and subtracted the max index of the subarray to get the offset. Subtracting the mean of the subarray for cross-correlation is important to get the right answer. This example will confirm it: a = np.concatenate((np.random.normal(size=100),np.random.normal(size=100)-10)) b = np.concatenate((np.random.normal(size=45),np.random.normal(size=155)-10)) max_ind = 150 min_ind = 50 small_sect = a[min_ind:max_ind] cc = scipy.signal.correlate(b-np.mean(small_sect),small_sect-np.mean(small_sect)) offset = np.argmax(cc) - max_ind + 1
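A follow-up sketch using the a, b, and offset variables from the answer above (an addition; the sign convention depends on the correlation setup, so it's worth verifying on one pair first):

import numpy as np

# shift b so its step lines up with a; with the offset defined above,
# rolling by -offset moves b's edge onto a's
aligned_b = np.roll(b, -offset)

# sanity check: the indexes of the sharp drops should now coincide
print(np.argmin(np.diff(a)), np.argmin(np.diff(aligned_b)))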
Align an array of traces in python
I have an array of traces that look like this: a really small low part, then a big high part, ending with a low part again. I want to be able to align all those traces ... as closely as I can (so the changes from low to high and the opposite will be at the same indexes). I tried to use cross-correlation but that gave me 0 offset ... I don't know why. def give_offset(y,y2): # limit your signal length to speed things up lim = 2000 # do the actual correlation corr = scipy.signal.correlate(y[:lim], y2[:lim], mode='full') # The offset is the maximum of your correlation array, # itself being offset by (lim - 1): offset = np.argmax(corr) - (lim - 1) return offset I didn't find anything over the internet and I'm so confused since it seems like a common problem that many have faced before.
[ "I know this is kind of an old question, but I'll answer it anyway for future people interested. The way I solved this problem was by taking a section of the signal from one array and cross-correlating the other arrays with the section. I chose the max correlation and subtracted the max index of the subarray to get the offset. Subtracting the mean of the subarray for cross-correlation is important to get the right answer. This example will confirm\na = np.concatenate((np.random.normal(size=100),np.random.normal(size=100)-10))\nb = np.concatenate((np.random.normal(size=45),np.random.normal(size=155)-10))\nmax_ind = 150\nmin_ind = 50\nsmall_sect = a[min_ind:max_ind]\ncc = scipy.signal.correlate(b-np.mean(small_sect),small_sect-np.mean(small_sect))\noffset = np.argmax(cc) - max_ind + 1\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "python", "sequence_alignment", "signals" ]
stackoverflow_0059492958_numpy_python_sequence_alignment_signals.txt
Q: Passing a dictionary to aggregate function in Python - Alternative Way? I have a bit of an odd issue - the following code works in Jupyter Notebook but it does not work in Databricks: df = df.set_index('date') groups = ['ABC', 'XYZ'] df_grouped = df.groupby(groups) df_grouped = df_grouped.resample('Q') df_grouped_agg = dict( sum_area=('shop_area', 'sum'), total_count=('name', 'count'), sum_total_1=('total_cost_customer', 'sum'), sum_total_2=('total_cost_item', 'sum'), ) df_grouped = df_grouped.agg(**df_grouped_agg) When I run this code in Databricks, I get the following error: aggregate() missing 1 required positional argument: 'func' I'm not too sure how to fix this - most answers to this issue state that updating Pandas is what fixes it, but since I'm using Databricks everything is already updated. If anyone has an idea I'd greatly appreciate it! EDIT: This question does provide a bit more insight, but I'm not sure how to re-write my code in a similar way: <class 'TypeError'>: aggregate() missing 1 required positional argument: 'func_or_funcs' A: For anyone who has this problem in the future (especially with Databricks for some reason) here's a workaround: Basically, replace: df_grouped_agg = dict( sum_area=('shop_area', 'sum'), total_count=('name', 'count'), sum_total_1=('total_cost_customer', 'sum'), sum_total_2=('total_cost_item', 'sum'), ) df_grouped = df_grouped.agg(**df_grouped_agg) with df_grouped = df_grouped.apply(lambda x: pd.Series({ 'sum_area': x['shop_area'].sum(), 'total_count': x['name'].count(), 'sum_total_1': x['total_cost_customer'].sum(), 'sum_total_2': x['total_cost_item'].sum()}))
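Another workaround sketch (an addition, assuming the older-pandas behavior is the culprit): the column-to-function dict form of agg long predates named aggregation, so mapping each source column to its function and renaming afterwards avoids the row-wise apply:

df_grouped_agg = {
    'shop_area': 'sum',
    'name': 'count',
    'total_cost_customer': 'sum',
    'total_cost_item': 'sum',
}
df_grouped = df_grouped.agg(df_grouped_agg).rename(columns={
    'shop_area': 'sum_area',
    'name': 'total_count',
    'total_cost_customer': 'sum_total_1',
    'total_cost_item': 'sum_total_2',
})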
Passing a dictionary to aggregate function in Python - Alternative Way?
I have a bit of an odd issue - the following code works in Jupyter Notebook but it does not work in Databricks: df = df.set_index('date') groups = ['ABC', 'XYZ'] df_grouped = df.groupby(groups) df_grouped = df_grouped.resample('Q') df_grouped_agg = dict( sum_area=('shop_area', 'sum'), total_count=('name', 'count'), sum_total_1=('total_cost_customer', 'sum'), sum_total_2=('total_cost_item', 'sum'), ) df_grouped = df_grouped.agg(**df_grouped_agg) When I run this code in Databricks, I get the following error: aggregate() missing 1 required positional argument: 'func' I'm not too sure how to fix this - most answers to this issue state that updating Pandas is what fixes it, but since I'm using Databricks everything is already updated. If anyone has an idea I'd greatly appreciate it! EDIT: This question does provide a bit more insight, but I'm not sure how to re-write my code in a similar way: <class 'TypeError'>: aggregate() missing 1 required positional argument: 'func_or_funcs'
[ "For anyone who has this problem in the future (especially with Databricks for some reason) here's a workaround:\nBasically, replace:\ndf_grouped_agg = dict(\n sum_area=('shop_area', 'sum'),\n total_count=('name', 'count'),\n sum_total_1=('total_cost_customer', 'sum'),\n sum_total_2=('total_cost_item', 'sum'),\n)\n\ndf_grouped = df_grouped.agg(**df_grouped_agg)\n\nwith\ndf_grouped = df_grouped.apply(lambda x: pd.Series({ \n 'sum_area': x['shop_area'].sum(),\n 'total_count': x['name'].count(), \n 'sum_total_1': x['total_cost_customer'].sum(), \n 'sum_total_2': x['total_cost_item'].sum()}))\n\n" ]
[ 0 ]
[]
[]
[ "aggregate", "databricks", "dictionary", "python" ]
stackoverflow_0074470344_aggregate_databricks_dictionary_python.txt
Q: Django rest filter by serializermethodfield with custom filter As declared in the question title, I got a task to filter results by a field that is not present in the model but is calculated by the serializer. The model: class Recipe(models.Model): tags = models.ManyToManyField( Tag, related_name='recipe_tags' ) author = models.ForeignKey( User, on_delete=models.CASCADE, related_name='author_recipes' ) ingredients = models.ManyToManyField( Ingredient, related_name='recipe_ingredients' ) name = models.CharField(max_length=200) image = models.ImageField() text = models.TextField() cooking_time = models.PositiveSmallIntegerField( validators=[MinValueValidator(1)] ) class Meta: ordering = ("-id",) verbose_name = "Recipe" verbose_name_plural = "Recipes" def __str__(self): return self.name Here is the view code: class RecipeViewSet(ModelViewSet): queryset = Recipe.objects.all() permission_classes = [IsAdminOrAuthorOrReadOnly, ] serializer_class = RecipeInSerializer pagination_class = LimitPageNumberPagination filter_backends = [DjangoFilterBackend] filterset_fields = ['tags', ] filter_class = RecipeFilter Serializer: class RecipeOutSerializer(serializers.ModelSerializer): tags = ManyRelatedField(child_relation=TagSerializer()) author = CustomUserSerializer() ingredients = serializers.SerializerMethodField() is_favorite = serializers.SerializerMethodField() is_in_shopping_cart = serializers.SerializerMethodField() class Meta: fields = '__all__' model = Recipe def get_ingredients(self, obj): ingredients = IngredientAmount.objects.filter(recipe=obj) return GetIngredientSerializer(ingredients, many=True).data def get_is_favorite(self, obj): request = self.context.get("request") if request.user.is_anonymous: return False return Favorite.objects.filter(recipe=obj, user=request.user).exists() def get_is_in_shopping_cart(self, obj): request = self.context.get("request") if not request or request.user.is_anonymous: return False return ShoppingCart.objects.filter(recipe=obj, user=request.user).exists() And custom filter code: class RecipeFilter(rest_framework.FilterSet): tags = ModelMultipleChoiceFilter( field_name='tags__slug', to_field_name="slug", queryset=Tag.objects.all() ) favorite = BooleanFilter(field_name='is_favorite', method='filter_favorite') def filter_favorite(self, queryset, name, value): return queryset.filter(is_favorite__exact=True) class Meta: model = Recipe fields = ['tags', ] The target is the is_favorited field, which returns a boolean value. I tried writing a function in the custom filter class that returns a queryset, but it didn't work, and the documentation didn't help me with examples. Hope for your help. A: We can use queryset annotate: from django.db import models from rest_framework import serializers class RecipeViewSet(ModelViewSet): def get_queryset(self): user = self.request.user user_id = user.id if not user.is_anonymous else None return Recipe.objects.all().annotate( total_favorite=models.Count( "favorite", filter=models.Q(favorite__user_id=user_id) ), is_favorite=models.Case( models.When(total_favorite__gte=1, then=True), default=False, output_field=models.BooleanField() ) ) class RecipeOutSerializer(serializers.ModelSerializer): is_favorite = serializers.BooleanField(read_only=True) class Meta: model = Recipe fields = ( # ... is_favorite, ) class RecipeFilter(rest_framework.FilterSet): favorite = BooleanFilter(field_name='is_favorite')
Django rest filter by serializermethodfield with custom filter
As declared in the question title, I got a task to filter results by a field that is not present in the model but is calculated by the serializer. The model: class Recipe(models.Model): tags = models.ManyToManyField( Tag, related_name='recipe_tags' ) author = models.ForeignKey( User, on_delete=models.CASCADE, related_name='author_recipes' ) ingredients = models.ManyToManyField( Ingredient, related_name='recipe_ingredients' ) name = models.CharField(max_length=200) image = models.ImageField() text = models.TextField() cooking_time = models.PositiveSmallIntegerField( validators=[MinValueValidator(1)] ) class Meta: ordering = ("-id",) verbose_name = "Recipe" verbose_name_plural = "Recipes" def __str__(self): return self.name Here is the view code: class RecipeViewSet(ModelViewSet): queryset = Recipe.objects.all() permission_classes = [IsAdminOrAuthorOrReadOnly, ] serializer_class = RecipeInSerializer pagination_class = LimitPageNumberPagination filter_backends = [DjangoFilterBackend] filterset_fields = ['tags', ] filter_class = RecipeFilter Serializer: class RecipeOutSerializer(serializers.ModelSerializer): tags = ManyRelatedField(child_relation=TagSerializer()) author = CustomUserSerializer() ingredients = serializers.SerializerMethodField() is_favorite = serializers.SerializerMethodField() is_in_shopping_cart = serializers.SerializerMethodField() class Meta: fields = '__all__' model = Recipe def get_ingredients(self, obj): ingredients = IngredientAmount.objects.filter(recipe=obj) return GetIngredientSerializer(ingredients, many=True).data def get_is_favorite(self, obj): request = self.context.get("request") if request.user.is_anonymous: return False return Favorite.objects.filter(recipe=obj, user=request.user).exists() def get_is_in_shopping_cart(self, obj): request = self.context.get("request") if not request or request.user.is_anonymous: return False return ShoppingCart.objects.filter(recipe=obj, user=request.user).exists() And custom filter code: class RecipeFilter(rest_framework.FilterSet): tags = ModelMultipleChoiceFilter( field_name='tags__slug', to_field_name="slug", queryset=Tag.objects.all() ) favorite = BooleanFilter(field_name='is_favorite', method='filter_favorite') def filter_favorite(self, queryset, name, value): return queryset.filter(is_favorite__exact=True) class Meta: model = Recipe fields = ['tags', ] The target is the is_favorited field, which returns a boolean value. I tried writing a function in the custom filter class that returns a queryset, but it didn't work, and the documentation didn't help me with examples. Hope for your help.
[ "We can use queryset annotate:\nfrom django.db import models\nfrom rest_framework import serializers\n\nclass RecipeViewSet(ModelViewSet):\n def get_queryset(self):\n user = self.request.user\n user_id = user.id if not user.is_anonymous else None\n return Recipe.objects.all().annotate(\n total_favorite=models.Count(\n \"favorite\",\n filter=models.Q(favorite__user_id=user_id)\n ),\n is_favorite=models.Case(\n models.When(total_favorite__gte=1, then=True),\n default=False,\n output_field=BooleanField()\n )\n )\n\n\nclass RecipeOutSerializer(serializers.ModelSerializer)\n is_favorite = serializers.BooleanField(read_only=True)\n\n class Meta:\n model = Recipe\n fields = (\n # ...\n is_favorite,\n )\n\nclass RecipeFilter(rest_framework.FilterSet):\n favorite = BooleanFilter(field_name='is_favorite')\n \n\n" ]
[ 1 ]
[]
[]
[ "django", "django_filter", "django_rest_framework", "python" ]
stackoverflow_0074477101_django_django_filter_django_rest_framework_python.txt
Q: Aliasing commands in Python Argparse I'd like some of my argparse commands to have an alias. For example, let's say I have the command mycli test --true posarg. In this example, mycli is the name of the program (the parent parser), test is a subcommand (a subparser), --true is a boolean flag argument, and posarg is a positional argument. I would like to keep it this way, but also have an alias mycli true-test posarg that points to the definition of mycli test --true posarg. I would like to do this without setting an OS-level alias in .bashrc or .zshrc. Is this possible?
A: Not an alias in the sense that you define one command that refers to another, but you can define two separate subcommands that achieve the same result. One has an option --true; the other has a positional argument with the same name.
import argparse

p = argparse.ArgumentParser()
sp = p.add_subparsers()
p1 = sp.add_parser("test")
p1.add_argument("--true")

p2 = sp.add_parser("test-true")
p2.add_argument("true")


print(p.parse_args())

Then
$ python3 tmp.py test --true 3
Namespace(true='3')

$ python3 tmp.py test-true 3
Namespace(true='3')
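If --true is meant as a boolean flag, another option is to bake the flag into the aliased subcommand with set_defaults. This is a sketch under the assumption that both spellings should produce the same Namespace shape; the names mirror the question and are not from the answer above:
import argparse

p = argparse.ArgumentParser(prog="mycli")
sp = p.add_subparsers(dest="command")

# canonical form: mycli test --true posarg
p1 = sp.add_parser("test")
p1.add_argument("--true", action="store_true")
p1.add_argument("posarg")

# alias form: mycli true-test posarg
p2 = sp.add_parser("true-test")
p2.add_argument("posarg")
p2.set_defaults(true=True)  # the alias always behaves as if --true were passed

args = p.parse_args()
# "mycli test --true x" and "mycli true-test x" both yield true=True, posarg='x'

If you only need a plain second name for the same subcommand, without folding in the flag, add_parser also accepts an aliases keyword, e.g. sp.add_parser("test", aliases=["t"]).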
Aliasing commands in Python Argparse
I'd like some of my argparse commands to have an alias. For example, let's say I have the command mycli test --true posarg. In this example, mycli is the name of the program (the parent parser), test is a subcommand (a subparser), --true is a boolean flag argument, and posarg is a positional argument. I would like to keep it this way, but also have an alias mycli true-test posarg that points to the definition of mycli test --true posarg. I would like to do this without setting an OS-level alias in .bashrc or .zshrc. Is this possible?
[ "Not an alias in the sense that you define one command that refers to another, but you can define two separate subcommands that achieve the same result. One has an option --true; the other has a positional argument with the same name.\nimport argparse\n\np = argparse.ArgumentParser()\nsp = p.add_subparsers()\np1 = sp.add_parser(\"test\")\np1.add_argument(\"--true\")\n\np2 = sp.add_parser(\"test-true\")\np2.add_argument(\"true\")\n\n\nprint(p.parse_args())\n\nThen\n$ python3 tmp.py test --true 3\nNamespace(true='3')\n\n$ python3 tmp.py test-true 3\nNamespace(true='3')\n\n" ]
[ 0 ]
[]
[]
[ "argparse", "python" ]
stackoverflow_0074483693_argparse_python.txt
Q: How to connect to an existing firefox instance using selenium (python) Is there any way to open a Firefox browser and then connect to it using Selenium? I know this is possible in Chrome by launching it from the command line with the --remote-debugging-port argument, like this:
import subprocess
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

subprocess.Popen('"C:/Program Files (x86)/Google/Chrome/Application/chrome.exe" --remote-debugging-port=9222', shell=True)
options = Options()
options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
driver = webdriver.Chrome(executable_path=PATH, options=options)

Can this be done in Firefox? I have been searching and checking questions relating to this for a while now, but with no luck. The only lead I found is that geckodriver has a --connect-existing argument, but I am not sure how to use it. How do you pass arguments to geckodriver and use it in Selenium? Any help would be appreciated. If it can't be done, please let me know. Thank you.
EDIT: Okay, I have made some progress; I now know how to pass geckodriver args to Selenium:
driver = webdriver.Firefox(service=Service(PATH, service_args=['--marionette-port', '9394', '--connect-existing']))

The problem now is that even though I start Firefox with a debugger server like this:
firefox.exe -marionette -start-debugger-server <PORT>

when I run the code, it either raises this error message:
Traceback (most recent call last):
  File "c:\Users\maxis\Desktop\Python\Freelance\Application for Opening Web Browsers\browsers\firefox.py", line 107, in <module>
    driver = webdriver.Firefox(service=Service(PATH, service_args=['--marionette-port', '9394', '--connect-existing']))
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\firefox\webdriver.py", line 180, in __init__
    RemoteWebDriver.__init__(
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 275, in __init__
    self.start_session(capabilities, browser_profile)
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 365, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 430, in execute
    self.error_handler.check_response(response)
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 247, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: No connection could be made because the target machine actively refused it. (os error 10061)

or I get multiple popups telling me there is an incoming request to Firefox. Even when I click OK, nothing seems to happen.
A: CMD:
C:\Program Files\Mozilla Firefox\
firefox.exe -marionette -start-debugger-server 2828 //only use 2828

Python script:
from selenium import webdriver

driver = webdriver.Firefox(executable_path="YOUR GECKODRIVER PATH", service_args=['--marionette-port', '2828', '--connect-existing'])

pageSource = driver.page_source
print(pageSource)

A: I got the same error, but it worked when I used the default Marionette port of 2828. Go to about:config in your Firefox and look up marionette.port, and make sure it is the same as the port in your web driver's service_args. Then, start a Firefox instance simply with the -marionette option but without the -start-debugger-server option.
A: First step: open CMD and execute: firefox.exe --marionette
This command will open a Firefox instance whose marionette port is 2828 (the default). (Write about:config in the URL bar of the Firefox instance, press Enter, and then search: marionette.port.)
Then:
from selenium import webdriver
from selenium.webdriver.firefox.service import Service

firefox_services = Service(executable_path='firefoxdriver', port=3000, service_args=['--marionette-port', '2828', '--connect-existing'])
driver = webdriver.Firefox(service=firefox_services)
driver.get('https://youtube.com')
driver.execute_script('alert(\'your favorite music is here\')')

executable_path='firefoxdriver': my geckodriver.exe is inside the firefoxdriver folder (screenshot: my VSCode).
port=3000: I want to send my geckodriver commands through port 3000.
service_args=['--marionette-port', '2828', '--connect-existing']: I want to control an open Firefox instance whose marionette port is 2828.
Suggestion: delete all Firefox profile folders that were created in %temp% before you knew how to connect to an open Firefox instance. Delete these folders in %temp%.
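Pulling the answers together, an end-to-end flow with the Selenium 4 API might look like the sketch below. This is untested scaffolding rather than code from the answers: the geckodriver path is a placeholder, and 2828 is Firefox's default Marionette port.
# Step 1 (shell): start Firefox with Marionette on its default port 2828
#   firefox.exe -marionette

# Step 2 (Python): tell geckodriver to attach to that running instance
from selenium import webdriver
from selenium.webdriver.firefox.service import Service

service = Service(
    executable_path="path/to/geckodriver",  # placeholder: your geckodriver location
    service_args=["--marionette-port", "2828", "--connect-existing"],
)
driver = webdriver.Firefox(service=service)
print(driver.title)  # drives the window that was already open

With --connect-existing, geckodriver does not own the browser process, so be cautious with driver.quit(); depending on the versions involved it may or may not close the attached window.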
How to connect to an existing firefox instance using selenium(python)
Is there any way to open a Firefox browser and then connect to it using Selenium? I know this is possible in Chrome by launching it from the command line with the --remote-debugging-port argument, like this:
import subprocess
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

subprocess.Popen('"C:/Program Files (x86)/Google/Chrome/Application/chrome.exe" --remote-debugging-port=9222', shell=True)
options = Options()
options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
driver = webdriver.Chrome(executable_path=PATH, options=options)

Can this be done in Firefox? I have been searching and checking questions relating to this for a while now, but with no luck. The only lead I found is that geckodriver has a --connect-existing argument, but I am not sure how to use it. How do you pass arguments to geckodriver and use it in Selenium? Any help would be appreciated. If it can't be done, please let me know. Thank you.
EDIT: Okay, I have made some progress; I now know how to pass geckodriver args to Selenium:
driver = webdriver.Firefox(service=Service(PATH, service_args=['--marionette-port', '9394', '--connect-existing']))

The problem now is that even though I start Firefox with a debugger server like this:
firefox.exe -marionette -start-debugger-server <PORT>

when I run the code, it either raises this error message:
Traceback (most recent call last):
  File "c:\Users\maxis\Desktop\Python\Freelance\Application for Opening Web Browsers\browsers\firefox.py", line 107, in <module>
    driver = webdriver.Firefox(service=Service(PATH, service_args=['--marionette-port', '9394', '--connect-existing']))
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\firefox\webdriver.py", line 180, in __init__
    RemoteWebDriver.__init__(
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 275, in __init__
    self.start_session(capabilities, browser_profile)
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 365, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 430, in execute
    self.error_handler.check_response(response)
  File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 247, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: No connection could be made because the target machine actively refused it. (os error 10061)

or I get multiple popups telling me there is an incoming request to Firefox. Even when I click OK, nothing seems to happen.
[ "CMD:\nC:\\Program Files\\Mozilla Firefox\\\n\nfirefox.exe -marionette -start-debugger-server 2828 //only use 2828\n\nPython Script:\nfrom selenium import webdriver\n\ndriver = webdriver.Firefox(executable_path = \"YOUR GECKODRIVER PATH\", service_args = ['--marionette-port', '2828', '--connect-existing'] )\n\npageSource = driver.page_source\nprint(pageSource)\n\n", "I got the same error but it worked when I used the default Marionette port of 2828. Go to about:config in your Firefox and look up marionette.port, and make sure it is the same as the port in your web driver's service_args. Then, start a Firefox instance simply with the -marionette option but without the -start-debugger-server option.\n", "First Step:\nOpen CMD and execute: firefox.exe --marionette\nThis command will open a firefox instance that has its marionette-port=2828 (by default)\n(writes about:config in url bar of the firefox instance, press enter and then search: marionette.port)\n\nThen:\nfrom selenium import webdriver\nfrom selenium.webdriver.firefox.service import Service\n\nfirefox_services = Service(executable_path='firefoxdriver', port=3000, service_args=['--marionette-port', '2828', '--connect-existing'])\ndriver = webdriver.Firefox(service=firefox_services)\ndriver.get('https://youtube.com')\ndriver.execute_script('alert(\\'your favorite music is here\\')')\n\nexecutable_path='firefoxdriver'\nMy geckodriver.exe is inside firefoxdriver folder\nScreenshot my VSCode\nport=3000\nI want to send my 'geckodriver orders' throught port 3000\nservice_args=['--marionette-port', '2828', '--connect-existing']\nI want to control an open firefox instance that has its marionette-port=2828\n\nSuggestion:\nDelete all firefox profile folders that were created in %temp% before you knew how to connect to an open firefox instance\nDelete these folders in %temp%\n" ]
[ 2, 1, 1 ]
[]
[]
[ "firefox", "geckodriver", "python", "selenium", "selenium_webdriver" ]
stackoverflow_0072331816_firefox_geckodriver_python_selenium_selenium_webdriver.txt