content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (lengths 35 to 137)
Q: JSON format inside record in csv file - convert to column Python This is my data. The 'device' and 'geoNetwork' columns store their data as a dict/JSON string. I would like to create new columns based on the data from those columns, for example the new columns should be 'browser', 'browserVersion', 'continent' and so on. I have tried a lot of solutions, but it doesn't work. DATA ,date,device,fullVisitorId,geoNetwork 0,20180420,"{""browser"": ""Chrome"", ""browserVersion"": ""not available in demo dataset"", ""browserSize"": ""not available in demo dataset"", ""operatingSystem"": ""Macintosh""}",3.37108036201195E+018,"{""continent"": ""Americas"", ""subContinent"": ""Northern America"", ""country"": ""United States"", ""region"": ""California""}" 1,20180328,"{""browser"": ""Chrome"", ""browserVersion"": ""not available in demo dataset"", ""browserSize"": ""not available in demo dataset"", ""operatingSystem"": ""Macintosh""}",1.27350339266773E+018,"{""continent"": ""Americas"", ""subContinent"": ""Northern America"", ""country"": ""Canada"", ""region"": ""State of Sao Paulo""}" A little help with how to solve my problem would be appreciated. A: On the attached picture the first new column name is highlighted. It is a JSON file import csv import json def csv_to_json(csvFilePath, jsonFilePath): jsonArray = [] # read csv file with open(csvFilePath, encoding='utf-8') as csvf: csvReader = csv.DictReader(csvf) for row in csvReader: jsonArray.append(row) with open(jsonFilePath, 'w', encoding='utf-8') as jsonf: jsonString = json.dumps(jsonArray, indent=4) jsonf.write(jsonString) csvFilePath = r'dane.csv' jsonFilePath = r'data.json' csv_to_json(csvFilePath, jsonFilePath)
JSON format inside record in csv file - convert to column Python
This is my data. The 'device' and 'geoNetwork' columns store their data as a dict/JSON string. I would like to create new columns based on the data from those columns, for example the new columns should be 'browser', 'browserVersion', 'continent' and so on. I have tried a lot of solutions, but it doesn't work. DATA ,date,device,fullVisitorId,geoNetwork 0,20180420,"{""browser"": ""Chrome"", ""browserVersion"": ""not available in demo dataset"", ""browserSize"": ""not available in demo dataset"", ""operatingSystem"": ""Macintosh""}",3.37108036201195E+018,"{""continent"": ""Americas"", ""subContinent"": ""Northern America"", ""country"": ""United States"", ""region"": ""California""}" 1,20180328,"{""browser"": ""Chrome"", ""browserVersion"": ""not available in demo dataset"", ""browserSize"": ""not available in demo dataset"", ""operatingSystem"": ""Macintosh""}",1.27350339266773E+018,"{""continent"": ""Americas"", ""subContinent"": ""Northern America"", ""country"": ""Canada"", ""region"": ""State of Sao Paulo""}" A little help with how to solve my problem would be appreciated.
[ "enter image description here\nOn the attached picture first new column name is highlited. It is json file\nimport csv\nimport json\n\n\ndef csv_to_json(csvFilePath, jsonFilePath):\n jsonArray = []\n\n # read csv file\n with open(csvFilePath, encoding='utf-8') as csvf:\n csvReader = csv.DictReader(csvf)\n for row in csvReader:\n jsonArray.append(row)\n\n with open(jsonFilePath, 'w', encoding='utf-8') as jsonf:\n jsonString = json.dumps(jsonArray, indent=4)\n jsonf.write(jsonString)\n\n\ncsvFilePath = r'dane.csv'\njsonFilePath = r'data.json'\ncsv_to_json(csvFilePath, jsonFilePath)\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "csv", "dataframe", "json", "python" ]
stackoverflow_0074485676_arrays_csv_dataframe_json_python.txt
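The answer above round-trips the whole CSV to JSON but never creates the new columns the asker wants. A more direct route, as a minimal sketch that assumes the file is the 'dane.csv' from the answer and that the 'device' and 'geoNetwork' cells hold valid JSON strings:

import json
import pandas as pd

df = pd.read_csv('dane.csv')
for col in ['device', 'geoNetwork']:
    # each cell is one JSON object serialized as a string: parse it,
    # then expand its keys ('browser', 'continent', ...) into columns
    expanded = pd.json_normalize(df[col].apply(json.loads).tolist())
    df = df.drop(columns=col).join(expanded)
print(df.columns.tolist())

json_normalize aligns on the default RangeIndex here, so this works on a freshly read frame; reset the index first if the frame has already been filtered.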
Q: Resolving bottleneck on database connection in Dataflow pipeline We have a Dataflow streaming job that consumes messages from Pub/Sub, does some transformations, and performs DML (INSERT, UPDATE, DELETE) on a CloudSQL Postgres instance. We observed that the bottleneck is in the database. The code is written in Python and uses SQLAlchemy as the library to interface with Postgres. Common issues we observed are: It maxes out the allowed database connections; multiple connection pools are created. When there is high-volume data coming in from Pub/Sub, the DoFn responsible for writing to the database throws these Exceptions: Task was destroyed but it is pending! task: <Task pending name='Task-194770'... Task exception was never retrieved future: <Task finished name='Task-196602'... RuntimeError: aiohttp.client_exceptions.ClientResponseError: 429, message='Too Many Requests', url=URL('https://sqladmin.googleapis.com/sql/v1beta4/projects/.../instances/db-csql:generateEphemeralCert') [while running 'write_data-ptransform-48'] It seems that the Cloud SQL API hits the rate limit here. This would be our ideal scenario: Regardless of the volume and the number of workers created by Dataflow, we should only have one ConnectionPool (a singleton) throughout the pipeline, with a static number of connections (max of 50 allotted to the Dataflow job, out of 200 max connections configured in the database). In moments of high-volume flow from Pub/Sub, there should be some mechanism to throttle the rate of incoming requests to the database, or to not scale the number of workers for the DoFn responsible for writing to the database. Can you recommend a way to accomplish this? From my experience a single global connection pool is not possible because you cannot pass the connection object to workers (pickle/unpickle). Is this true? A: You should try to batch the call to your database. The pseudocode would look like this (taken from the beam programming guide) class BufferDoFn(DoFn): BUFFER = BagStateSpec('buffer', EventCoder()) IS_TIMER_SET = ReadModifyWriteStateSpec('is_timer_set', BooleanCoder()) OUTPUT = TimerSpec('output', TimeDomain.REAL_TIME) def process(self, buffer=DoFn.StateParam(BUFFER), is_timer_set=DoFn.StateParam(IS_TIMER_SET), timer=DoFn.TimerParam(OUTPUT)): buffer.add(element) if not is_timer_set.read(): timer.set(Timestamp.now() + Duration(seconds=10)) is_timer_set.write(True) @on_timer(OUTPUT) def output_callback(self, buffer=DoFn.StateParam(BUFFER), is_timer_set=DoFn.StateParam(IS_TIMER_SET)): send_rpc(list(buffer.read())) buffer.clear() is_timer_set.clear() In principle, you would need to write a splittable dofn and use timers and states.
Resolving bottleneck on database connection in Dataflow pipeline
We have a Dataflow streaming job that consumes messages from Pub/Sub, does some transformations, and performs DML (INSERT, UPDATE, DELETE) on a CloudSQL Postgres instance. We observed that the bottleneck is in the database. The code is written in Python and uses SQLAlchemy as the library to interface with Postgres. Common issues we observed are: It maxes out the allowed database connections; multiple connection pools are created. When there is high-volume data coming in from Pub/Sub, the DoFn responsible for writing to the database throws these Exceptions: Task was destroyed but it is pending! task: <Task pending name='Task-194770'... Task exception was never retrieved future: <Task finished name='Task-196602'... RuntimeError: aiohttp.client_exceptions.ClientResponseError: 429, message='Too Many Requests', url=URL('https://sqladmin.googleapis.com/sql/v1beta4/projects/.../instances/db-csql:generateEphemeralCert') [while running 'write_data-ptransform-48'] It seems that the Cloud SQL API hits the rate limit here. This would be our ideal scenario: Regardless of the volume and the number of workers created by Dataflow, we should only have one ConnectionPool (a singleton) throughout the pipeline, with a static number of connections (max of 50 allotted to the Dataflow job, out of 200 max connections configured in the database). In moments of high-volume flow from Pub/Sub, there should be some mechanism to throttle the rate of incoming requests to the database, or to not scale the number of workers for the DoFn responsible for writing to the database. Can you recommend a way to accomplish this? From my experience a single global connection pool is not possible because you cannot pass the connection object to workers (pickle/unpickle). Is this true?
[ "You should try to batch the call to your database. The pseudocode would look like this (taken from the beam programming guide)\nclass BufferDoFn(DoFn):\n BUFFER = BagStateSpec('buffer', EventCoder())\n IS_TIMER_SET = ReadModifyWriteStateSpec('is_timer_set', BooleanCoder())\n OUTPUT = TimerSpec('output', TimeDomain.REAL_TIME)\n\n def process(self,\n buffer=DoFn.StateParam(BUFFER),\n is_timer_set=DoFn.StateParam(IS_TIMER_SET),\n timer=DoFn.TimerParam(OUTPUT)):\n buffer.add(element)\n if not is_timer_set.read():\n timer.set(Timestamp.now() + Duration(seconds=10))\n is_timer_set.write(True)\n\n @on_timer(OUTPUT)\n def output_callback(self,\n buffer=DoFn.StateParam(BUFFER),\n is_timer_set=DoFn.StateParam(IS_TIMER_SET)):\n send_rpc(list(buffer.read()))\n buffer.clear()\n is_timer_set.clear()\n\nIn principle, you would need to write a splittable dofn and use timers and states.\n" ]
[ 2 ]
[]
[]
[ "apache_beam", "google_cloud_dataflow", "google_cloud_sql", "python", "sqlalchemy" ]
stackoverflow_0074476994_apache_beam_google_cloud_dataflow_google_cloud_sql_python_sqlalchemy.txt
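A pattern that complements the batching answer above, offered as a hedged sketch rather than something from the accepted answer: build one capped SQLAlchemy engine per worker in DoFn.setup(). A truly global pool is indeed impossible, since connections do not pickle, but setup() runs once per worker, so this at least bounds the connection count per worker. The DSN and table below are placeholders:

import apache_beam as beam
from sqlalchemy import create_engine, text

class WriteToPostgres(beam.DoFn):
    def setup(self):
        # one engine (and pool) per worker process, reused across bundles
        self.engine = create_engine(
            'postgresql+pg8000://user:pass@host/db',  # placeholder DSN
            pool_size=5, max_overflow=0)              # hard cap per worker

    def process(self, element):
        with self.engine.begin() as conn:
            # hypothetical table and statement; substitute the real DML
            conn.execute(text('INSERT INTO events (payload) VALUES (:p)'),
                         {'p': element})

    def teardown(self):
        self.engine.dispose()

Total connections are then roughly pool_size times the worker count, so capping autoscaling (max_num_workers) is still needed to stay under the instance limit.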
Q: Sniffing network traffic with scapy, get "Could not activate the pcap handler" I am trying to sniff a network adapter (TP-LINK, 0bda:b711) with Python 3, but I get an OSError: Could not activate the pcap handler from scapy.all import * from scapy.config import conf from scapy.layers.dot11 import Dot11 conf.use_pcap = True def callBack(pkg): if pkg.haslayer(Dot11): if pkg.type == 0 and pkg.subtype == 8: print("dBm_AntSignal=", pkg.dBm_AntSignal) print("dBm_AntNoise=", pkg.dBm_AntNoise) sniff(iface='wlp1s1', monitor='True', prn=callBack) I think there is something wrong with libpcap. I want to get dBm_AntSignal and dBm_AntNoise from sniff; the code runs on a MacBook according to other people (you can browse my last question). Can somebody help solve this issue? A: If you posted issue #1136 on the libpcap issues list, then you presumably somehow managed to determine that pcap_activate() returned PCAP_ERROR. If you did that by modifying the Scapy code, try modifying it further to, if pcap_activate() returns PCAP_ERROR, report the result of pcap_geterr(), in order to try to find out why, in this particular instance, pcap_activate() returned PCAP_ERROR. The problem is that PCAP_ERROR can be returned for a number of different reasons, and it's difficult if not impossible to guess which one it was. (And then file an issue on Scapy's issue list indicating that the error message for pcap_activate() failing should be based on both the return value from pcap_activate() and, for certain errors, the result of pcap_geterr(). They should also distinguish between error returns from pcap_activate(), which are negative numbers, and warning returns from pcap_activate(), which indicate that the "pcap handler" could be activated, but something unexpected happened, and are positive numbers.) Update: No need to file the Scapy issue; I've already submitted a pull request for the change to fix the error reporting, and it's been merged. Apply the changes from that pull request to Scapy and try again.
Sniffing network traffic with scapy, get "Could not activate the pcap handler"
I am trying to sniff a network adapter (TP-LINK, 0bda:b711) with Python 3, but I get an OSError: Could not activate the pcap handler from scapy.all import * from scapy.config import conf from scapy.layers.dot11 import Dot11 conf.use_pcap = True def callBack(pkg): if pkg.haslayer(Dot11): if pkg.type == 0 and pkg.subtype == 8: print("dBm_AntSignal=", pkg.dBm_AntSignal) print("dBm_AntNoise=", pkg.dBm_AntNoise) sniff(iface='wlp1s1', monitor='True', prn=callBack) I think there is something wrong with libpcap. I want to get dBm_AntSignal and dBm_AntNoise from sniff; the code runs on a MacBook according to other people (you can browse my last question). Can somebody help solve this issue?
[ "If you posted issue #1136 on the libpcap issues list, then you presumably somehow managed to determine that pcap_activate() returned PCAP_ERROR. If you did that by modifying the Scapy code, try modifying it further to, if pcap_activate() returns PCAP_ERROR, report the result of pcap_geterr(), in order to try to find out why, in this particular instance, pcap_activate() returned PCAP_ERROR. The problem is that PCAP_ERROR can be returned for a number of different reasons, and it's difficult if not impossible to guess which one it was.\n(And then file an issue on Scapy's issue list indicating that the error message for pcap_activate() failing should be based on both the return value from pcap_activate() and, for certain errors, the result of pcap_geterr(). They should also distinguish between error returns from pcap_activate(), which are negative numbers, and warning returns from pcap_activate(), which indicate that the \"pcap handler\" could be activated, but something unexpected happened, and are positive numbers.)\nUpdate:\nNo need to file the Scapy issue; I've already submitted a pull request for the change to fix the error reporting, and it's been merged. Apply the changes from that pull request to Scapy and try again.\n" ]
[ 0 ]
[]
[]
[ "linux", "python", "wifi" ]
stackoverflow_0074336825_linux_python_wifi.txt
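One frequent cause of PCAP_ERROR in this exact setup, stated as an assumption since the answer above deliberately avoids guessing: libpcap is asked for monitor mode on an interface that is still in managed mode. A sketch that flips the card to monitor mode first (Linux only, requires root, and assumes the iw/ip tools are present):

import subprocess
from scapy.all import sniff
from scapy.layers.dot11 import Dot11

iface = 'wlp1s1'
# take the interface down, switch it to monitor mode, bring it back up
for cmd in (f'ip link set {iface} down',
            f'iw dev {iface} set type monitor',
            f'ip link set {iface} up'):
    subprocess.run(cmd.split(), check=True)

sniff(iface=iface, prn=lambda p: p.summary() if p.haslayer(Dot11) else None)

If the adapter's driver does not support monitor mode at all, the iw command itself will fail, which narrows the diagnosis considerably.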
Q: Is there a way to run Python scripts client side in the browser using a front end framework like react? Introduction There are so many cool and useful python scripts/programs out there that aren't living up to their potential, in my opinion. Because all of these are executed locally in the command line, they aren't very user friendly and not very accessible to the normal person. I wanted to create a site that could run such a script on it, but from the browser with an easy to navigate UI; that way the user won't have to download a file in order to use the service. My approach I'm still very much a beginner even though I've dedicated quite a bit of time to learning js, but I just don't seem to see how I would be able to make this happen. I'm thinking I might have to set up an API that could run on the back end and feed the information to the front end. However, the specific script/file that I want to run is called spleeter; it takes in audio files and splits them into stems. I'm sure it's possible to upload a file through an API, however I can see that becoming very heavy server side, especially if a lot of people were to visit the site at the same time. I quite naively thought this issue of essentially scalability could be solved by running the script locally client side, so the user wouldn't have to upload anything; two birds with one stone, in the sense that I don't have to pay server costs of hosting the uploaded files, nor the computational power of running the script. After searching up on this topic for a week now, I now realize that this was indeed VERY naïve of me. I've sorta looked into a thing called Brython and a thing called Transcrypt but don't quite see how it would be applicable. Basically the question All of this explanation to essentially ask the following question(s): is what I want to do even possible? Does this kind of setup have a name that I'm just not searching correctly for? And if possible, could someone nudge me in the right direction? I hope a solution to this exists and if not I really think there's a gold mine ahead to anyone coupling this together. A: I have not used this tool yet myself, but Brython boasts this functionality. I will be looking into using this myself, as this is something I've been looking forward to utilizing. A: I'm personally a fan of Transcrypt and use it myself to create full React applications that are programmed with Python instead of JavaScript. But like any React application, that does require the code to be pre-compiled into JavaScript before being able to be run in a web browser. That said, thanks to sourcemaps, you can still troubleshoot the Python code in the browser. If you want to actually run Python code from source in the browser, you would likely need one of the following: Brython (which gives you Python script tags, compiled to JS on load) Skulpt (which provides a JS based Python interpreter) Pyodide (A WASM based Python interpreter that runs in the browser) Beyond those, there are a few framework type projects like Dash and IDOM that let you use Python to create browser applications. Finally, there is Anvil that gives you a VB style experience for developing web applications using Python. Given all of the above options, it sounds like Skulpt may be something you might want to look into some more. A: Now it is possible to run the python code directly in browser without any development environment other than a web browser. Say Hello to PyScript.
As per the current official documentation : Please be advised that PyScript is very alpha and under heavy development. There are many known issues, from usability to loading times, and you should expect things to change often. We encourage people to play and explore with PyScript, but at this time we do not recommend using it for production. Here is the demo to print Hello World and live date time using pyscript directly in the browser. <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" /> <script defer src="https://pyscript.net/latest/pyscript.js"></script> <py-script> from datetime import datetime print('Hello world!') now = datetime.now() now.strftime("%m/%d/%Y, %H:%M:%S") </py-script>
Is there a way to run Python scripts client side in the browser using a front end framework like react?
Introduction There are so many cool and useful python scripts/programs out there that aren't living up to their potential, in my opinion. Because all of these are executed locally in the command line, they aren't very user friendly and not very accessible to the normal person. I wanted to create a site that could run such a script on it, but from the browser with an easy to navigate UI; that way the user won't have to download a file in order to use the service. My approach I'm still very much a beginner even though I've dedicated quite a bit of time to learning js, but I just don't seem to see how I would be able to make this happen. I'm thinking I might have to set up an API that could run on the back end and feed the information to the front end. However, the specific script/file that I want to run is called spleeter; it takes in audio files and splits them into stems. I'm sure it's possible to upload a file through an API, however I can see that becoming very heavy server side, especially if a lot of people were to visit the site at the same time. I quite naively thought this issue of essentially scalability could be solved by running the script locally client side, so the user wouldn't have to upload anything; two birds with one stone, in the sense that I don't have to pay server costs of hosting the uploaded files, nor the computational power of running the script. After searching up on this topic for a week now, I now realize that this was indeed VERY naïve of me. I've sorta looked into a thing called Brython and a thing called Transcrypt but don't quite see how it would be applicable. Basically the question All of this explanation to essentially ask the following question(s): is what I want to do even possible? Does this kind of setup have a name that I'm just not searching correctly for? And if possible, could someone nudge me in the right direction? I hope a solution to this exists and if not I really think there's a gold mine ahead to anyone coupling this together.
[ "I have not used this tool yet myself, but Brython boasts this functionality. I will be looking into using this myself, as this is something I've been looking forward to utilizing.\n", "I'm personally a fan of Transcrypt and use it myself to create full React applications that are programmed with Python instead of JavaScript. But like any React application, that does require the code to be pre-compiled into JavaScript before being able to be run in a web browser. That said, thanks to sourcemaps, you can still troubleshoot the Python code in the browser.\nIf you want to actually run Python code from source in the browser, you would likely need one of the following:\n\nBrython (which gives you Python script tags, compiled to JS on load)\nSkulpt (which provides a JS based Python interpreter)\nPyodide (A WASM based Python interpreter that runs in the browser)\n\nBeyond those, there are a few framework type projects like Dash and IDOM that let you use Python to create browser applications.\nFinally, there is Anvil that gives you a VB style experience for developing web applications using Python.\nGiven all of the above options, it sounds like Skulpt may be something you might want to look into some more.\n", "Now it is possible to run the python code directly in browser without any development environment other than a web browser. Say Hello to PyScript.\nAs per the current official documentation :\n\nPlease be advised that PyScript is very alpha and under heavy\ndevelopment. There are many known issues, from usability to loading\ntimes, and you should expect things to change often. We encourage\npeople to play and explore with PyScript, but at this time we do not\nrecommend using it for production.\n\nHere is the demo to print Hello World and live date time using pyscript directly in the browser.\n\n\n<link rel=\"stylesheet\" href=\"https://pyscript.net/latest/pyscript.css\" />\n<script defer src=\"https://pyscript.net/latest/pyscript.js\"></script>\n\n<py-script>\n from datetime import datetime\n print('Hello world!')\n now = datetime.now()\n now.strftime(\"%m/%d/%Y, %H:%M:%S\")\n</py-script>\n\n\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "api", "client_side", "javascript", "python", "reactjs" ]
stackoverflow_0070900950_api_client_side_javascript_python_reactjs.txt
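To show what browser-side Python looks like beyond printing, here is a sketch that assumes a Pyodide or PyScript environment, where the js bridge module is available; it will not run under plain CPython:

# runs inside Pyodide/PyScript only; `js` is their JavaScript bridge module
import js

div = js.document.createElement('div')
div.textContent = 'Rendered from Python in the browser'
js.document.body.appendChild(div)

Note that a heavy library like spleeter depends on compiled audio/ML stacks that these in-browser interpreters generally cannot load, so the API-backend route remains the realistic one for that particular script.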
Q: How to get text from commandline with python I need to read out the text from a cmd prompt window (which updates every second) in my Python program. I'm using Windows 7. Anyone have an idea how to do this? edit: I didn’t explain it very well. The cmd prompt is already open and I need to read out everything it prints. I have to "link" my Python program to the command prompt. A: You should read the subprocess module docs: >>> subprocess.check_output(["echo", "Hello World!"]) 'Hello World!\n' A: Have you tried sys.argv? import sys print(sys.argv[1:]) The first argument will be your file name. A: Just do subprocess.check_output() import subprocess command = "dir" output = subprocess.check_output(command, shell=True, text=True) print(output) Don't forget to add the shell=True and text=True arguments
How to get text from commandline with python
I need to read out the text from a cmd prompt window (which updates every second) in my Python program. I'm using Windows 7. Anyone have an idea how to do this? edit: I didn’t explain it very well. The cmd prompt is already open and I need to read out everything it prints. I have to "link" my Python program to the command prompt.
[ "You should read :module-subprocess \n>>> subprocess.check_output([\"echo\", \"Hello World!\"])\n'Hello World!\\n'\n\n", "Have you tried sys.argv ?.\nimport sys\nprint sys.argv[1:]\n\nThe first argument will be your file name.\n", "Just do subprocess.check_output()\nimport subprocess\ncommand = \"dir\"\noutput = subprocess.check_output(command, shell=True, text=True)\nprint(output)\n\nDon't forget to add shell=True and text=True argument\n" ]
[ 1, 0, 0 ]
[]
[]
[ "cmd", "python", "windows" ]
stackoverflow_0022197322_cmd_python_windows.txt
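A one-shot check_output call, as in the answers above, only helps if the script launches the command itself; subprocess cannot attach to a cmd window that is already open (that would need Windows-specific UI automation, which is out of scope here). Under that assumption, a sketch that launches the command and streams its output line by line as it appears:

import subprocess

# ping is just a stand-in for any long-running command that keeps printing
proc = subprocess.Popen(['ping', '127.0.0.1'],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    print(line, end='')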
Q: run python script from within html in the browser This is a python inside html question. I have been looking for some time and have come across this code that embeds python inside a webpage (html). I have tested it and it works. Here is the code: The code script is saved as *.html and opens (and runs) in the browser. <!DOCTYPE html> <head> <script type="module" src="https://cdn.jsdelivr.net/gh/vanillawc/wc-code@1.0.3/src/wc-code.js"></script> </head> <body> <wc-code mode="python"> <script type="wc-content"> a = 1 b = 1 print(a+b) </script> </wc-code> </body> The <script> tag in the code above contains some basic python code. However, I cannot seem to import modules with a standard import module call and was wondering if there is a way of doing this? Also, similarly, is there a way of calling or importing a custom python script like my_code.py? Thanks A: I think you should learn Flask or Django; the Flask solution here can be helpful A: Pyodide is a solution for embedding python inside a webpage. A: Now it can be achieved by using PyScript. You can use import statements inside the <py-script> tag. Live Demo : <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" /> <script defer src="https://pyscript.net/latest/pyscript.js"></script> <py-script> from datetime import datetime datetime.now().strftime("%m/%d/%Y, %H:%M:%S") </py-script> Note (As per the current official documentation) : PyScript is very alpha and under heavy development. There are many known issues, from usability to loading times, and you should expect things to change often. We encourage people to play and explore with PyScript, but at this time we do not recommend using it for production.
run python script from within html in the browser
This is a python inside html question. I have been looking for some time and have come across this code that embeds python inside a webpage (html). I have tested it and it works. Here is the code: The code script is saved as *.html and opens (and runs) in the browser. <!DOCTYPE html> <head> <script type="module" src="https://cdn.jsdelivr.net/gh/vanillawc/wc-code@1.0.3/src/wc-code.js"></script> </head> <body> <wc-code mode="python"> <script type="wc-content"> a = 1 b = 1 print(a+b) </script> </wc-code> </body> The <script> tag in the code above contains some basic python code. However, I cannot seem to import modules with a standard import module call and was wondering if there is a way of doing this? Also, similarly, is there a way of calling or importing a custom python script like my_code.py? Thanks
[ "I think you should learn flask or Django \\\nthe solution here FLASK can be helpful\n", "Pyodide is a solution for embedding python inside a webpage.\n", "Now it can be achievable by using PyScript. You can use the import statements inside the <py-script> tag.\nLive Demo :\n\n\n<link rel=\"stylesheet\" href=\"https://pyscript.net/latest/pyscript.css\" />\n<script defer src=\"https://pyscript.net/latest/pyscript.js\"></script>\n\n<py-script>\n from datetime import datetime\n datetime.now().strftime(\"%m/%d/%Y, %H:%M:%S\")\n</py-script>\n\n\n\nNote (As per the current official documentation) :\n\nPyScript is very alpha and under heavy development. There are many\nknown issues, from usability to loading times, and you should expect\nthings to change often. We encourage people to play and explore with\nPyScript, but at this time we do not recommend using it for\nproduction.\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "html", "python" ]
stackoverflow_0071636540_html_python.txt
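To make the Flask suggestion in the first answer concrete, here is a minimal sketch; my_code and its run() function are hypothetical stand-ins for the asker's custom script, since the real Python executes server-side and the browser only receives the output:

from flask import Flask
import my_code  # hypothetical module standing in for the asker's script

app = Flask(__name__)

@app.route('/')
def index():
    return str(my_code.run())  # assumption: my_code defines a run() function

if __name__ == '__main__':
    app.run(debug=True)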
Q: Selenium: Go URL's "up" Is there a way to step the number at the end of a URL up? For example: driver.get("Example.com/1") then driver.get("Example.com/2") then driver.get("Example.com/3") and so on, until it is at 69. I tried googling, but I didn't find anything. A: Just use range for that: for i in range(1, 70): driver.get(f'https://example.com/{i}') # Do what you want driver.quit()
Selenium: Go URL's "up"
Is there a way to step the number at the end of a URL up? For example: driver.get("Example.com/1") then driver.get("Example.com/2") then driver.get("Example.com/3") and so on, until it is at 69. I tried googling, but I didn't find anything.
[ "Just use range for that:\nfor i in range(1, 70):\n driver.get(f'example/{i}')\n # Do what you want\n driver.quit()\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "selenium_chromedriver" ]
stackoverflow_0074485633_python_selenium_selenium_chromedriver.txt
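A complete, runnable version of the answer above, assuming chromedriver is on PATH and using example.com as a stand-in for the real site:

from selenium import webdriver

driver = webdriver.Chrome()  # assumes chromedriver is on PATH
for i in range(1, 70):       # visits .../1 through .../69
    driver.get(f'https://example.com/{i}')
    # scrape or interact with each page here
driver.quit()                # quit once, after the loop finishes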
Q: How to convert list into row using pandas if the name and their value are stored in one list Analyte_line 0 NaN 1 [['Urea', 3.0, '3', ''], ['Creatinine', 3.0, '... 2 [['Total Protein', '', '6', ''], ['Albumin', '... 3 [['HGB', '', '18', ''], ['RBC', '', '1', ''], ... 4 [['Total Protein', '', '23', ''], ['Albumin', ... .. ... 102 [['Rapid Malaria', '', 'NEGATIVE', '']] 103 [['Rapid Malaria', '', 'POSITIVE (P.VIVAX)', '']] 104 [['Rapid Malaria', '', 'NEGATIVE', '']] 105 [['Rapid Malaria', '', 'POSITIVE (P.VIVAX)', '']] 106 [['Rapid Malaria', '', 'NEGATIVE', '']] This data is stored in a DataFrame. df = pd.DataFrame(data) print(df) In the data, each name and its value are stored in one list. How can we convert this DataFrame into rows and columns like Urea Creatinine Uric Acid Alkaline Phosphatase Test 3.0 , 3 3.0 , 3 3.0 , 3 4 Positive A: You can use: out = (df .set_index(0).T.astype(str) .agg(lambda s: ','.join(x for x in s if x)) .to_frame().T ) Or: out = pd.DataFrame.from_dict({0: {k: ','.join(x for x in map(str, l) if x) for k, *l in data}}, orient='index') Output: Urea Creatinine Uric Acid Alkaline Phosphatase Test 0 3.0,3 3.0,3 3.0,3 4 Positive
How to convert list into row using pandas if the name and their value are stored in one list
Analyte_line 0 NaN 1 [['Urea', 3.0, '3', ''], ['Creatinine', 3.0, '... 2 [['Total Protein', '', '6', ''], ['Albumin', '... 3 [['HGB', '', '18', ''], ['RBC', '', '1', ''], ... 4 [['Total Protein', '', '23', ''], ['Albumin', ... .. ... 102 [['Rapid Malaria', '', 'NEGATIVE', '']] 103 [['Rapid Malaria', '', 'POSITIVE (P.VIVAX)', '']] 104 [['Rapid Malaria', '', 'NEGATIVE', '']] 105 [['Rapid Malaria', '', 'POSITIVE (P.VIVAX)', '']] 106 [['Rapid Malaria', '', 'NEGATIVE', '']] This data is stored in a DataFrame. df = pd.DataFrame(data) print(df) In the data, each name and its value are stored in one list. How can we convert this DataFrame into rows and columns like Urea Creatinine Uric Acid Alkaline Phosphatase Test 3.0 , 3 3.0 , 3 3.0 , 3 4 Positive
[ "You can use:\nout = (df\n .set_index(0).T.astype(str)\n .agg(lambda s: ','.join(x for x in s if x))\n .to_frame().T\n)\n\nOr:\nout = pd.DataFrame.from_dict({0: {k: ','.join(x for x in map(str, l) if x)\n for k, *l in data}}, orient='index')\n\nOutput:\n Urea Creatinine Uric Acid Alkaline Phosphatase Test\n0 3.0,3 3.0,3 3.0,3 4 Positive\n\n" ]
[ 1 ]
[]
[]
[ "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074486198_csv_dataframe_pandas_python.txt
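To see the accepted approach end to end, here is a self-contained run on toy rows shaped like the question's data (the values are assumed for illustration):

import pandas as pd

data = [['Urea', 3.0, '3', ''],
        ['Creatinine', 3.0, '3', ''],
        ['Rapid Malaria', '', 'NEGATIVE', '']]
df = pd.DataFrame(data)

out = (df
    .set_index(0).T.astype(str)                  # analyte names become columns
    .agg(lambda s: ','.join(x for x in s if x))  # join the non-empty values
    .to_frame().T)
print(out)
#     Urea Creatinine Rapid Malaria
# 0  3.0,3      3.0,3      NEGATIVE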
Q: Is it possible to run python code in a web browser? My code below builds a pandas DataFrame; I've tried Transcrypt and Flask to view this code in my browser, but have had errors. Any help would be greatly appreciated. import pandas as pd columns = ['cap', 'title', 'price'] df = pd.read_csv('asdawhiskey.csv', names=columns) items = df[df['cap'] == '70cl'] print(items.to_html()) A: Try Streamlit. https://streamlit.io You don't have to write a single line of html import streamlit as st import pandas as pd columns = ['cap', 'title', 'price'] df = pd.read_csv('asdawhiskey.csv', names=columns) items = df[df['cap'] == '70cl'] st.write(items) Then run streamlit run example.py A: Is it possible to run Python code in a web browser? Yes, it is now possible with the help of PyScript. Thanks to this framework, users can create rich Python applications in the browser. There is no installation required. We can just use the PyScript assets served on https://pyscript.net/ Live Demo : <html> <head> <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" /> <script defer src="https://pyscript.net/latest/pyscript.js"></script> </head> <body> <py-script> print('Hello, World!') </py-script> </body> </html> Note (As per the current official documentation) : PyScript is very alpha and under heavy development. There are many known issues, from usability to loading times, and you should expect things to change often. We encourage people to play and explore with PyScript, but at this time we do not recommend using it for production.
Is it possible to run python code in a web browser?
My code below builds a pandas DataFrame; I've tried Transcrypt and Flask to view this code in my browser, but have had errors. Any help would be greatly appreciated. import pandas as pd columns = ['cap', 'title', 'price'] df = pd.read_csv('asdawhiskey.csv', names=columns) items = df[df['cap'] == '70cl'] print(items.to_html())
[ "Try Streamlit. https://streamlit.io\nYou don't have to write a single line of html\nimport streamlit as st\nimport pandas as pd\n\ncolumns = ['cap', 'title', 'price']\ndf = pd.read_csv('asdawhiskey.csv', names=columns)\n\nitems = df[df['cap'] == '70cl']\n\nst.write(items)\n\nThen run streamlit run example.py\n", "Is it possible to run python code in a web browser ? - Yes, It is now possible with the help of PyScript. Say thanks to this framework that allows users to create rich Python applications in the browser\nThere is no installation required. We can just use the PyScript assets served on https://pyscript.net/\nLive Demo :\n\n\n<html>\n <head>\n <link rel=\"stylesheet\" href=\"https://pyscript.net/latest/pyscript.css\" />\n <script defer src=\"https://pyscript.net/latest/pyscript.js\"></script>\n </head>\n <body> <py-script> print('Hello, World!') </py-script> </body>\n</html>\n\n\n\nNote (As per the current official documentation) :\n\nPyScript is very alpha and under heavy development. There are many\nknown issues, from usability to loading times, and you should expect\nthings to change often. We encourage people to play and explore with\nPyScript, but at this time we do not recommend using it for\nproduction.\n\n" ]
[ 3, 0 ]
[]
[]
[ "html", "pandas", "python" ]
stackoverflow_0062942057_html_pandas_python.txt
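If the goal is specifically pandas in the browser, Pyodide ships a prebuilt pandas wheel. A hedged sketch of the Python side, assuming it is executed through Pyodide's runPythonAsync (where top-level await is allowed):

# runs inside Pyodide, not plain CPython
import micropip                   # Pyodide's package installer
await micropip.install('pandas')  # fetches the prebuilt pandas wheel

import pandas as pd
df = pd.DataFrame({'cap': ['70cl', '1l'], 'price': [20.0, 35.0]})
print(df[df['cap'] == '70cl'])

Reading a local file like asdawhiskey.csv is the harder part in a browser sandbox; it would have to be fetched over HTTP or uploaded first.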
Q: No symbols loaded for C++ in mixed debugging (from Python) I have a large project where the computation-heavy parts are written in C++ and the "glue code" and the start script are written in Python. The code has been compiled with the VS 2017 compiler (V141) and Python 3.7.0-32bit for a few years now and I want to upgrade to V143 and Python 3.10-64; I can build and run the code with the new environment. I get some small errors when I run the test suite (probably due to the change from 32bit to 64bit). The issue occurs when I try to debug the C++ code in Visual Studio 2022. I can debug the Python code without any errors, but when I want to debug the C++ code I can't get any symbols to load. I've tried to follow this guide: https://learn.microsoft.com/en-us/visualstudio/python/debugging-mixed-mode-c-cpp-python-in-visual-studio?view=vs-2022 (But I fail to enable both Native and Python in step 2 of "Enable mixed-mode debugging in a Python project". When I select Python in Select Code type I get this error: "Python debugging is not compatible with Native. Would you like to uncheck Native?". Similarly, I don't find the option Python/Native debugging in step 2 of "Enable mixed-mode debugging in a C/C++ project", and I've installed Python native development tools using the VS installer etc.) Update: To simplify and show a simple case I followed this tutorial https://www.youtube.com/watch?v=9q-LHP7cMfg&ab_channel=MichaelVanslembrouck and was able to build and set breakpoints when I started the debugging with the C++ project as startup project (as described in the YouTube tutorial). The issue occurs when I try to do it with the PythonApplication as startup project. (I have tried to use both Python39-64 and Python310-64) I added the content of the necessary files to reproduce the issue.
This is the solution file: Microsoft Visual Studio Solution File, Format Version 12.00 # Visual Studio Version 17 VisualStudioVersion = 17.3.32929.385 MinimumVisualStudioVersion = 10.0.40219.1 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "spam", "spam\spam.vcxproj", "{18FA7E0B-56F8-433D-ABCC-523E412FFB83}" EndProject Project("{888888A0-9F3D-457C-B088-3A5042F75D52}") = "PythonApplication", "PythonApplication\PythonApplication.pyproj", "{DD2AA21D-1713-4B24-AB58-2695C05684C1}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Any CPU = Debug|Any CPU Debug|x64 = Debug|x64 Debug|x86 = Debug|x86 Release|Any CPU = Release|Any CPU Release|x64 = Release|x64 Release|x86 = Release|x86 EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|Any CPU.ActiveCfg = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|Any CPU.Build.0 = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x64.ActiveCfg = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x64.Build.0 = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x86.ActiveCfg = Debug|Win32 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x86.Build.0 = Debug|Win32 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|Any CPU.ActiveCfg = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|Any CPU.Build.0 = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x64.ActiveCfg = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x64.Build.0 = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x86.ActiveCfg = Release|Win32 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x86.Build.0 = Release|Win32 {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Debug|x64.ActiveCfg = Debug|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Debug|x86.ActiveCfg = Debug|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Release|Any CPU.ActiveCfg = Release|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Release|x64.ActiveCfg = Release|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Release|x86.ActiveCfg = Release|Any CPU EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE EndGlobalSection GlobalSection(ExtensibilityGlobals) = postSolution SolutionGuid = {A9526BE0-4988-4327-B7A0-AFD3DF109D47} EndGlobalSection EndGlobal This is the PythonApplication.pyproj file: <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration> <SchemaVersion>2.0</SchemaVersion> <ProjectGuid>dd2aa21d-1713-4b24-ab58-2695c05684c1</ProjectGuid> <ProjectHome>.</ProjectHome> <StartupFile>PythonApplication.py</StartupFile> <SearchPath> </SearchPath> <WorkingDirectory>.</WorkingDirectory> <OutputPath>.</OutputPath> <Name>PythonApplication</Name> <RootNamespace>PythonApplication</RootNamespace> <InterpreterId>MSBuild|venv10|$(MSBuildProjectFullPath)</InterpreterId> <LaunchProvider>Standard Python launcher</LaunchProvider> <EnableNativeCodeDebugging>True</EnableNativeCodeDebugging> </PropertyGroup> <PropertyGroup Condition=" '$(Configuration)' == 'Debug' "> <DebugSymbols>true</DebugSymbols> <EnableUnmanagedDebugging>false</EnableUnmanagedDebugging> </PropertyGroup> <PropertyGroup Condition=" '$(Configuration)' == 'Release' "> <DebugSymbols>true</DebugSymbols> <EnableUnmanagedDebugging>false</EnableUnmanagedDebugging> </PropertyGroup> <ItemGroup> 
<Compile Include="PythonApplication.py" /> </ItemGroup> <ItemGroup> <Interpreter Include="venv10\"> <Id>venv10</Id> <Version>3.10</Version> <Description>venv10 (Python 3.10 (64-bit))</Description> <InterpreterPath>Scripts\python.exe</InterpreterPath> <WindowsInterpreterPath>Scripts\pythonw.exe</WindowsInterpreterPath> <PathEnvironmentVariable>PYTHONPATH</PathEnvironmentVariable> <Architecture>X64</Architecture> </Interpreter> </ItemGroup> <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Python Tools\Microsoft.PythonTools.targets" /> <!-- Uncomment the CoreCompile target to enable the Build command in Visual Studio and specify your pre- and post-build commands in the BeforeBuild and AfterBuild targets below. --> <!--<Target Name="CoreCompile" />--> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project> This is the PythonAppliction.py file: import spam print(spam.add_one(123)) This is my spam.vcxproj file: <?xml version="1.0" encoding="utf-8"?> <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <ItemGroup Label="ProjectConfigurations"> <ProjectConfiguration Include="Debug|Win32"> <Configuration>Debug</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release|Win32"> <Configuration>Release</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Debug|x64"> <Configuration>Debug</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release|x64"> <Configuration>Release</Configuration> <Platform>x64</Platform> </ProjectConfiguration> </ItemGroup> <PropertyGroup Label="Globals"> <VCProjectVersion>16.0</VCProjectVersion> <Keyword>Win32Proj</Keyword> <ProjectGuid>{18fa7e0b-56f8-433d-abcc-523e412ffb83}</ProjectGuid> <RootNamespace>spam</RootNamespace> <WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion> </PropertyGroup> <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" /> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseDebugLibraries>true</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseDebugLibraries>false</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <WholeProgramOptimization>true</WholeProgramOptimization> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration"> <ConfigurationType>DynamicLibrary</ConfigurationType> <UseDebugLibraries>true</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseDebugLibraries>false</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <WholeProgramOptimization>true</WholeProgramOptimization> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" /> <ImportGroup Label="ExtensionSettings"> </ImportGroup> <ImportGroup Label="Shared"> </ImportGroup> <ImportGroup Label="PropertySheets" 
Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" /> </ImportGroup> <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" /> </ImportGroup> <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" /> </ImportGroup> <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" /> </ImportGroup> <PropertyGroup Label="UserMacros" /> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'"> <IncludePath>c:\Python310-64\include;$(IncludePath)</IncludePath> <OutDir>$(SolutionDir)\PythonApplication\</OutDir> <TargetExt>.pyd</TargetExt> </PropertyGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <GenerateDebugInformation>true</GenerateDebugInformation> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <FunctionLevelLinking>true</FunctionLevelLinking> <IntrinsicFunctions>true</IntrinsicFunctions> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <EnableCOMDATFolding>true</EnableCOMDATFolding> <OptimizeReferences>true</OptimizeReferences> <GenerateDebugInformation>true</GenerateDebugInformation> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <GenerateDebugInformation>true</GenerateDebugInformation> <AdditionalLibraryDirectories>C:\Python310-64\libs;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <FunctionLevelLinking>true</FunctionLevelLinking> <IntrinsicFunctions>true</IntrinsicFunctions> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <EnableCOMDATFolding>true</EnableCOMDATFolding> <OptimizeReferences>true</OptimizeReferences> 
<GenerateDebugInformation>true</GenerateDebugInformation> </Link> </ItemDefinitionGroup> <ItemGroup> <ClCompile Include="main.cpp" /> </ItemGroup> <ItemGroup> <ClInclude Include="include.h" /> </ItemGroup> <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" /> <ImportGroup Label="ExtensionTargets"> </ImportGroup> </Project> My main.cpp: #include "include.h" PyObject* some_function(PyObject* self, PyObject* args) { __int64 input_value; if (!PyArg_ParseTuple(args, "L", &input_value)) { goto error; } return PyLong_FromLongLong(input_value + 1); error: return 0; } PyMethodDef SpamMethods[] = { {"add_one",(PyCFunction)some_function, METH_VARARGS, 0}, {0,0,0,0} }; static struct PyModuleDef spammodule = { PyModuleDef_HEAD_INIT, "spam", /* name of module */ "spam_doc", /* module documentation, may be NULL */ -1, /* size of per-interpreter state of the module, or -1 if the module keeps state in global variables. */ SpamMethods }; PyMODINIT_FUNC PyInit_spam(void) { return PyModule_Create(&spammodule); } Finally here is my include.h file: #if defined(_MSC_VER) && defined(_DEBUG) #undef _DEBUG #include <Python.h> #define _DEBUG 1 #else #include <Python.h> #endif So when I debug the c++ project as startup project I can set (and hit) breakpoints in the main.cpp. When I tries to run the code with PythonApplication as startup project I can use breakpoints in my Python code until I activate Native debugging than the symbols for the c++ project does not get loaded, the program runs and print the result and ends but does not hit any breakpoints in Python or c++. Any ideas what I'm missing? **My original text: ** I've built the code with debug info etc. and enabeled Native debugging in the Python project in VS 2022 and when I run the code with debugging the break points in the c++ doesn't get hit. (the pdb, pyd, lib and exp files should have the extension project_name.cp310-win_amd64.pdb for python 3.10-64 right?) I can also see that it doesn't load the Symbols for my 4 c++ projects (Screenshot of the loaded modules after pausing the debugging while running the code): Compared to the list of modules when I run the project with Python 3.7.0: I've tried to run the project in vscode but I can't get it to load the debugging there either. Any suggestion how to get the symbols loaded? A: Woho! I finally figured it out. The issue was connected to the venv. Running from the global Python works but if you need to run it from a venv you need to create it with --symlinks like this C:\Python310-64\python.exe -m venv venv --symlinks. You also need to run this command as administrator in the cmd or from PowerShell to get it to work!
No symbols loaded for C++ in mixed debugging (from Python)
I have a large project where the major part computation heavy stuff is written in c++ and the "glue code" and the start script is written in Python. The code has been compiled with the VS 2017 compiler (V141) and Python 3.7.0-32bit for a few years now and I want to upgrade to V143 and Python 3.10-64, I can build and run the code with the new environment. I get some small errors when I run the test suite (probably due to the change from 32bit to 64bit). The issue occurs when I try to debug the C++ code in Visual Studio 2022. I can debug the Python code without any errors but when I want to debug the code I can't get any symbols from the c++ to load. I've tried to follow this guide: https://learn.microsoft.com/en-us/visualstudio/python/debugging-mixed-mode-c-cpp-python-in-visual-studio?view=vs-2022 (But I fail to enable both Native and Python in step 2 of Enable mixed-mode debugging in a Python project. When I select Python in Select Code type I get this error: "Python debugging is not compatible with Native. Would you like to uncheck Native?". Similar I don't find the option Python/Native debugging in step 2 of "Enable mixed-mode debugging in a C/C++ project" and I've installed Python native development tools using the VS installer etc.) Update: To simplify and show a simple case I followed this tutorial https://www.youtube.com/watch?v=9q-LHP7cMfg&ab_channel=MichaelVanslembrouck and was able to build and set breakpoints when I started the debugging with the c++ project as startup project (as described in the YouTube tutorial). The issue occurs when I try to do it with the PythonApplication as startup project. (I have tried to use both Python39-64 and Python310-64) I added the content of the nessessary files to reproduce the issue. This is the solution file: Microsoft Visual Studio Solution File, Format Version 12.00 # Visual Studio Version 17 VisualStudioVersion = 17.3.32929.385 MinimumVisualStudioVersion = 10.0.40219.1 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "spam", "spam\spam.vcxproj", "{18FA7E0B-56F8-433D-ABCC-523E412FFB83}" EndProject Project("{888888A0-9F3D-457C-B088-3A5042F75D52}") = "PythonApplication", "PythonApplication\PythonApplication.pyproj", "{DD2AA21D-1713-4B24-AB58-2695C05684C1}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Any CPU = Debug|Any CPU Debug|x64 = Debug|x64 Debug|x86 = Debug|x86 Release|Any CPU = Release|Any CPU Release|x64 = Release|x64 Release|x86 = Release|x86 EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|Any CPU.ActiveCfg = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|Any CPU.Build.0 = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x64.ActiveCfg = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x64.Build.0 = Debug|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x86.ActiveCfg = Debug|Win32 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Debug|x86.Build.0 = Debug|Win32 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|Any CPU.ActiveCfg = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|Any CPU.Build.0 = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x64.ActiveCfg = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x64.Build.0 = Release|x64 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x86.ActiveCfg = Release|Win32 {18FA7E0B-56F8-433D-ABCC-523E412FFB83}.Release|x86.Build.0 = Release|Win32 {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Debug|Any CPU.ActiveCfg = Debug|Any CPU 
{DD2AA21D-1713-4B24-AB58-2695C05684C1}.Debug|x64.ActiveCfg = Debug|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Debug|x86.ActiveCfg = Debug|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Release|Any CPU.ActiveCfg = Release|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Release|x64.ActiveCfg = Release|Any CPU {DD2AA21D-1713-4B24-AB58-2695C05684C1}.Release|x86.ActiveCfg = Release|Any CPU EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE EndGlobalSection GlobalSection(ExtensibilityGlobals) = postSolution SolutionGuid = {A9526BE0-4988-4327-B7A0-AFD3DF109D47} EndGlobalSection EndGlobal This is the PythonApplication.pyproj file: <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration> <SchemaVersion>2.0</SchemaVersion> <ProjectGuid>dd2aa21d-1713-4b24-ab58-2695c05684c1</ProjectGuid> <ProjectHome>.</ProjectHome> <StartupFile>PythonApplication.py</StartupFile> <SearchPath> </SearchPath> <WorkingDirectory>.</WorkingDirectory> <OutputPath>.</OutputPath> <Name>PythonApplication</Name> <RootNamespace>PythonApplication</RootNamespace> <InterpreterId>MSBuild|venv10|$(MSBuildProjectFullPath)</InterpreterId> <LaunchProvider>Standard Python launcher</LaunchProvider> <EnableNativeCodeDebugging>True</EnableNativeCodeDebugging> </PropertyGroup> <PropertyGroup Condition=" '$(Configuration)' == 'Debug' "> <DebugSymbols>true</DebugSymbols> <EnableUnmanagedDebugging>false</EnableUnmanagedDebugging> </PropertyGroup> <PropertyGroup Condition=" '$(Configuration)' == 'Release' "> <DebugSymbols>true</DebugSymbols> <EnableUnmanagedDebugging>false</EnableUnmanagedDebugging> </PropertyGroup> <ItemGroup> <Compile Include="PythonApplication.py" /> </ItemGroup> <ItemGroup> <Interpreter Include="venv10\"> <Id>venv10</Id> <Version>3.10</Version> <Description>venv10 (Python 3.10 (64-bit))</Description> <InterpreterPath>Scripts\python.exe</InterpreterPath> <WindowsInterpreterPath>Scripts\pythonw.exe</WindowsInterpreterPath> <PathEnvironmentVariable>PYTHONPATH</PathEnvironmentVariable> <Architecture>X64</Architecture> </Interpreter> </ItemGroup> <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Python Tools\Microsoft.PythonTools.targets" /> <!-- Uncomment the CoreCompile target to enable the Build command in Visual Studio and specify your pre- and post-build commands in the BeforeBuild and AfterBuild targets below. 
--> <!--<Target Name="CoreCompile" />--> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project> This is the PythonAppliction.py file: import spam print(spam.add_one(123)) This is my spam.vcxproj file: <?xml version="1.0" encoding="utf-8"?> <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <ItemGroup Label="ProjectConfigurations"> <ProjectConfiguration Include="Debug|Win32"> <Configuration>Debug</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release|Win32"> <Configuration>Release</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Debug|x64"> <Configuration>Debug</Configuration> <Platform>x64</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release|x64"> <Configuration>Release</Configuration> <Platform>x64</Platform> </ProjectConfiguration> </ItemGroup> <PropertyGroup Label="Globals"> <VCProjectVersion>16.0</VCProjectVersion> <Keyword>Win32Proj</Keyword> <ProjectGuid>{18fa7e0b-56f8-433d-abcc-523e412ffb83}</ProjectGuid> <RootNamespace>spam</RootNamespace> <WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion> </PropertyGroup> <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" /> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseDebugLibraries>true</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseDebugLibraries>false</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <WholeProgramOptimization>true</WholeProgramOptimization> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration"> <ConfigurationType>DynamicLibrary</ConfigurationType> <UseDebugLibraries>true</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration"> <ConfigurationType>Application</ConfigurationType> <UseDebugLibraries>false</UseDebugLibraries> <PlatformToolset>v143</PlatformToolset> <WholeProgramOptimization>true</WholeProgramOptimization> <CharacterSet>Unicode</CharacterSet> </PropertyGroup> <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" /> <ImportGroup Label="ExtensionSettings"> </ImportGroup> <ImportGroup Label="Shared"> </ImportGroup> <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" /> </ImportGroup> <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" /> </ImportGroup> <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" 
Label="LocalAppDataPlatform" /> </ImportGroup> <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'"> <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" /> </ImportGroup> <PropertyGroup Label="UserMacros" /> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'"> <IncludePath>c:\Python310-64\include;$(IncludePath)</IncludePath> <OutDir>$(SolutionDir)\PythonApplication\</OutDir> <TargetExt>.pyd</TargetExt> </PropertyGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <GenerateDebugInformation>true</GenerateDebugInformation> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <FunctionLevelLinking>true</FunctionLevelLinking> <IntrinsicFunctions>true</IntrinsicFunctions> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <EnableCOMDATFolding>true</EnableCOMDATFolding> <OptimizeReferences>true</OptimizeReferences> <GenerateDebugInformation>true</GenerateDebugInformation> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <GenerateDebugInformation>true</GenerateDebugInformation> <AdditionalLibraryDirectories>C:\Python310-64\libs;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories> </Link> </ItemDefinitionGroup> <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'"> <ClCompile> <WarningLevel>Level3</WarningLevel> <FunctionLevelLinking>true</FunctionLevelLinking> <IntrinsicFunctions>true</IntrinsicFunctions> <SDLCheck>true</SDLCheck> <PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions> <ConformanceMode>true</ConformanceMode> </ClCompile> <Link> <SubSystem>Console</SubSystem> <EnableCOMDATFolding>true</EnableCOMDATFolding> <OptimizeReferences>true</OptimizeReferences> <GenerateDebugInformation>true</GenerateDebugInformation> </Link> </ItemDefinitionGroup> <ItemGroup> <ClCompile Include="main.cpp" /> </ItemGroup> <ItemGroup> <ClInclude Include="include.h" /> </ItemGroup> <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" /> <ImportGroup Label="ExtensionTargets"> </ImportGroup> </Project> My main.cpp: #include "include.h" PyObject* some_function(PyObject* self, PyObject* args) { __int64 input_value; if (!PyArg_ParseTuple(args, "L", &input_value)) { goto error; } return PyLong_FromLongLong(input_value + 1); error: return 0; } PyMethodDef SpamMethods[] = { {"add_one",(PyCFunction)some_function, METH_VARARGS, 0}, {0,0,0,0} }; static struct PyModuleDef spammodule = { PyModuleDef_HEAD_INIT, "spam", /* name of module */ "spam_doc", /* module documentation, may be NULL */ -1, 
/* size of per-interpreter state of the module, or -1 if the module keeps state in global variables. */ SpamMethods }; PyMODINIT_FUNC PyInit_spam(void) { return PyModule_Create(&spammodule); } Finally, here is my include.h file: #if defined(_MSC_VER) && defined(_DEBUG) #undef _DEBUG #include <Python.h> #define _DEBUG 1 #else #include <Python.h> #endif So when I debug the C++ project as the startup project I can set (and hit) breakpoints in main.cpp. When I try to run the code with PythonApplication as the startup project I can use breakpoints in my Python code, but as soon as I activate Native debugging the symbols for the C++ project do not get loaded; the program runs, prints the result and ends, but does not hit any breakpoints in Python or C++. Any ideas what I'm missing? **My original text: ** I've built the code with debug info etc. and enabled Native debugging in the Python project in VS 2022, and when I run the code with debugging the breakpoints in the C++ code don't get hit. (The pdb, pyd, lib and exp files should have the extension project_name.cp310-win_amd64.pdb for Python 3.10 64-bit, right?) I can also see that it doesn't load the symbols for my 4 C++ projects (screenshot of the loaded modules after pausing the debugging while running the code): Compared to the list of modules when I run the project with Python 3.7.0: I've tried to run the project in VS Code but I can't get the debugging to load there either. Any suggestions on how to get the symbols loaded?
[ "Woho! I finally figured it out. The issue was connected to the venv. Running from the global Python works but if you need to run it from a venv you need to create it with --symlinks like this C:\\Python310-64\\python.exe -m venv venv --symlinks. You also need to run this command as administrator in the cmd or from PowerShell to get it to work!\n" ]
[ 2 ]
[]
[]
[ "c++", "debugging", "python", "visual_studio", "windows" ]
stackoverflow_0074190153_c++_debugging_python_visual_studio_windows.txt
Q: Add names to a list without splitting in Python I'm trying to create a program where a user copy-pastes a list of names separated by spaces and Python saves it as a list. However, when I use list.extend it splits every name into individual characters. How can I fix it so that each name is saved as a whole string? A: You could use regex: import re input = "program where a user copy pastes a list of names seperated" output = re.split(" ", input) Result: ['program', 'where', 'a', 'user', 'copy', 'pastes', 'a', 'list', 'of', 'names', 'seperated'] A: Python strings have the split function, that split by white space: s = "foo bar baz" s.split() ['foo', 'bar', 'baz']
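To tie the two answers back to the question's use of list.extend: extend iterates over whatever it is given, so extending with a raw string adds one character per element, while extending with the list returned by split() adds whole names. A small hedged sketch (the pasted text is a made-up example):

names = []
pasted = "Alice Bob Carol"       # hypothetical pasted input
names.extend(pasted)             # adds 'A', 'l', 'i', ... one character at a time
names = []
names.extend(pasted.split())     # adds 'Alice', 'Bob', 'Carol' as whole strings
print(names)                     # ['Alice', 'Bob', 'Carol']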
Add names to a list without splitting in Python
I'm trying to create a program where a user copy-pastes a list of names separated by spaces and Python saves it as a list. However, when I use list.extend it splits every name into individual characters. How can I fix it so that each name is saved as a whole string?
[ "You could use regex:\nimport re\ninput = \"program where a user copy pastes a list of names seperated\"\noutput = re.split(\" \", input)\n\nResult:\n['program', 'where', 'a', 'user', 'copy', 'pastes', 'a', 'list', 'of', 'names', 'seperated']\n\n", "Python strings have the split function, that split by white space:\ns = \"foo bar baz\"\ns.split()\n['foo', 'bar', 'baz']\n\n" ]
[ 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074486054_list_python.txt
Q: PyQT drag and drop to reorder items Im trying to make a small app that would help me rename pictures. Since i want to manually order them, i had the idea to simply show a window with thumbnails inside in a grid ( or small scale images, doesnt matter ), and then drag and drop reorder them as i see fit. Afterwards its just click a button and they get properly named acording to the order. Is there any container or something that would allow its inside widgets to be moved around like that while also properly displaying the inside widgets? The ways im thinking of currently since i cant find anything else, is to make the whole background a canvas, move x/y on drag/drop of the pictures and then calculate where im dropping it off and manually reorder the whole canvas again and keep redrawing. Im open to different python solution if anyone has them, but after checking wxwidgets and tkinter, i havent found anything that would be a solution to this without a lot of manual code. A: After ekhumoro hint, i was able to solve it. Heres a sample code that reads the current folder of its files, shows them as "thumbnails", and allows reordering. #!/usr/bin/python import sys, os from PyQt5.QtWidgets import (QListWidget, QWidget, QMessageBox, QApplication, QVBoxLayout,QAbstractItemView,QListWidgetItem ) from PyQt5.QtGui import QIcon from PyQt5.QtCore import QSize, Qt from PyQt5.QtWidgets import QListView class Example(QWidget): def __init__(self): super().__init__() self.icon_size = 200 self.initUI() def loadImageItem(self, fajl,folder=None): icon = QIcon() item = QListWidgetItem() if folder is not None: pot = os.path.join(folder,fajl) else: pot = fajl icon.addFile(pot,size=QSize(self.icon_size,self.icon_size)) item.setIcon(icon) item.setTextAlignment(Qt.AlignBottom) return item def initUI(self): vbox = QVBoxLayout(self) listWidget = QListWidget() #make it icons listWidget.setDragDropMode(QAbstractItemView.InternalMove) listWidget.setFlow(QListView.LeftToRight) listWidget.setWrapping(True) listWidget.setResizeMode(QListView.Adjust) listWidget.setMovement(QListView.Snap) listWidget.setIconSize(QSize(200,200)) folder = os.getcwd() #folder = "/mnt/Data/pictures/2022-10-30 Sveta Katarina/izbor/1" files = os.listdir(folder) files = [f for f in files if os.path.isfile(os.path.join(folder,f))] for foo in files: listWidget.addItem(self.loadImageItem(foo,folder=folder)) vbox.addWidget(listWidget) self.setLayout(vbox) self.setGeometry(10, 10, 1260, 820) self.setWindowTitle('Image renamer') self.show() def main(): App = QApplication(sys.argv) ex = Example() sys.exit(App.exec()) if __name__ == '__main__': main()
PyQT drag and drop to reorder items
I'm trying to make a small app that would help me rename pictures. Since I want to order them manually, I had the idea to simply show a window with thumbnails in a grid (or small-scale images, it doesn't matter), and then drag-and-drop to reorder them as I see fit. Afterwards it's just a click of a button and they get properly named according to the order. Is there any container or widget that would allow its inner widgets to be moved around like that while also properly displaying them? The way I'm currently thinking of, since I can't find anything else, is to make the whole background a canvas, move x/y on drag/drop of the pictures, then calculate where I'm dropping them and manually reorder and redraw the whole canvas. I'm open to different Python solutions if anyone has them, but after checking wxWidgets and tkinter, I haven't found anything that would solve this without a lot of manual code.
[ "After ekhumoro hint, i was able to solve it.\nHeres a sample code that reads the current folder of its files, shows them as \"thumbnails\", and allows reordering.\n#!/usr/bin/python\n\nimport sys, os\nfrom PyQt5.QtWidgets import (QListWidget, QWidget, QMessageBox,\n QApplication, QVBoxLayout,QAbstractItemView,QListWidgetItem )\nfrom PyQt5.QtGui import QIcon\nfrom PyQt5.QtCore import QSize, Qt\nfrom PyQt5.QtWidgets import QListView\n\n\nclass Example(QWidget):\n\n def __init__(self):\n super().__init__()\n self.icon_size = 200\n self.initUI()\n\n\n def loadImageItem(self, fajl,folder=None):\n icon = QIcon()\n item = QListWidgetItem()\n if folder is not None:\n pot = os.path.join(folder,fajl)\n else:\n pot = fajl\n icon.addFile(pot,size=QSize(self.icon_size,self.icon_size))\n item.setIcon(icon)\n item.setTextAlignment(Qt.AlignBottom)\n return item\n\n def initUI(self):\n\n vbox = QVBoxLayout(self)\n\n listWidget = QListWidget()\n #make it icons \n listWidget.setDragDropMode(QAbstractItemView.InternalMove)\n listWidget.setFlow(QListView.LeftToRight)\n listWidget.setWrapping(True)\n listWidget.setResizeMode(QListView.Adjust)\n listWidget.setMovement(QListView.Snap)\n listWidget.setIconSize(QSize(200,200))\n\n folder = os.getcwd()\n #folder = \"/mnt/Data/pictures/2022-10-30 Sveta Katarina/izbor/1\"\n files = os.listdir(folder)\n files = [f for f in files if os.path.isfile(os.path.join(folder,f))]\n\n for foo in files:\n listWidget.addItem(self.loadImageItem(foo,folder=folder))\n\n vbox.addWidget(listWidget)\n self.setLayout(vbox)\n self.setGeometry(10, 10, 1260, 820)\n self.setWindowTitle('Image renamer')\n self.show()\n\n\ndef main():\n\n App = QApplication(sys.argv)\n ex = Example()\n sys.exit(App.exec())\n\nif __name__ == '__main__':\n main()\n\n" ]
[ 0 ]
[]
[]
[ "pyqt", "python", "user_interface" ]
stackoverflow_0074474875_pyqt_python_user_interface.txt
Q: Unable to locate or select a select element from dropdown menu with Selenium - element not visible edited: [https://www.bellsofsteel.us/checkout/][1] I'm unable to locate or select an option from a drop-down menu using selenium. I'm attempting to get the various taxes and shipping for an item, by iterating through a list with cities, states and zip codes The HTML for the select element is as follows: <span class="woocommerce-input-wrapper"> <select name="billing_state" id="billing_state" class="state_select select2-hidden-accessible" autocomplete="address-level1" data-placeholder="State" data-input-classes="" data-label="State / County" tabindex="-1" aria-hidden="true"> <option value="">Select an option…</option> <option value="AL">Alabama</option> <option value="AZ">Arizona</option> <option value="AR">Arkansas</option> <option value="CA">California</option> ......... <option value="WY">Wyoming</option> </select> <span class="select2 select2-container select2-container--default select2-container--above select2-container--open" dir="ltr" style="width: 100%;"> <span class="selection"> <span class="select2-selection select2-selection--single" aria-haspopup="true" aria-expanded="true" tabindex="0" aria-label="State / County" role="combobox" aria-owns="select2-billing_state-results" aria-activedescendant="select2-billing_state-result-og2a-AR"> <span class="select2-selection__rendered" id="select2-billing_state-container" role="textbox" aria-readonly="true"> <span class="select2-selection__placeholder">State</span> </span> <span class="select2-selection__arrow" role="presentation"> <b role="presentation"></b> </span> </span> </span> <span class="dropdown-wrapper" aria-hidden="true"> </span> </span> I've tried this: dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_visible_text('California') Which returns the error: Message: element not interactable: Element is not currently visible and may not be manipulated I've also used Expected Conditions element_present = EC.text_to_be_present_in_element((By.CSS_SELECTOR,'select[name="billing_state"]'),item[1]) WebDriverWait(driver, 20).until(element_present) Which will just time out. I can select the actual drop down menu with element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.selection') WebDriverWait(driver, 30).until(element_present) driver.find_element(By.CSS_SELECTOR, 'span.selection').click() which will open the drop-down menu but not make the elements clickable Any help would be greatly appreaciated! 
Posted some of the code I've been using here are the things I've tried: ###This will click on the drop down menu so that you can see it open in the selenium window: element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.selection')) WebDriverWait(driver, 30).until(element_present) try: driver.find_element(By.CSS_SELECTOR, 'span.selection').click() except: clicker = driver.find_element(By.CSS_SELECTOR, 'span.selection') driver.execute_script("arguments[0].click();", clicker) ##This attempts to select from the select options: dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_visible_text('California') essentially the same thing but with the span class directly proceeding the select element (receives click but does not drop down menu) ###This will click on the drop down menu so that you can see it open in the selenium window: element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.woocommerce-input-wrapper')) WebDriverWait(driver, 30).until(element_present) try: driver.find_element(By.CSS_SELECTOR, 'span.woocommerce-input-wrapper').click() except: clicker = driver.find_element(By.CSS_SELECTOR, 'span.woocommerce-input-wrapper') driver.execute_script("arguments[0].click();", clicker) ##This attempts to select from the select options: dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_visible_text('California') Both return the following: Message: element not interactable: Element is not currently visible and may not be manipulated When I manually click on the option I can see the HTML change from: <span class="select2-selection__rendered" id="select2-billing_state-container" role="textbox" aria-readonly="true"> <span class="select2-selection__placeholder">State</span> To <span class="select2-selection__rendered" id="select2-billing_state-container" role="textbox" aria-readonly="true" title="California">California</span> I thought I might be able to f string literal the title like so: element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.select2-selection__rendered')) WebDriverWait(driver, 30).until(element_present) driver.find_element(By.CSS_SELECTOR, 'span.select2-selection__rendered').click() element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, element_present_click(f'span[title=\"California\"]')) WebDriverWait(driver, 30).until(element_present) driver.find_element(By.CSS_SELECTOR, element_present_click(f'span[title=\"California\"]').click() but it also timed out as well. Also tried to select by value: driver.find_element(By.CSS_SELECTOR, 'span.selection').click() dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_value('CA') Same thing - element not interactable A: That dropdown is not a 'Select' type element, it is a 'ul' type element, so you can't use Select. 
In the checkout page, add the below code and try: # scrolling to the element - 'First name' label first_name_label = driver.find_element(By.XPATH, ".//label[@for='billing_first_name']") driver.execute_script("arguments[0].scrollIntoView(true)", first_name_label) # clicking on the 'State / County' dropdown driver.find_element(By.XPATH, "(.//*[@aria-label='State / County'])[1]").click() sleep(1) # getting the list of all the states list_of_states = driver.find_elements(By.CSS_SELECTOR, "#select2-billing_state-results li") # state name to be selected state_to_select = "South Dakota" i = 0 # select the state for state in list_of_states: if state.text == state_to_select: driver.find_element(By.XPATH, ".//ul[@id='select2-billing_state-results']/li[" + str(i + 1) + "]").click() break i += 1
Unable to locate or select a select element from dropdown menu with Selenium - element not visible
edited: [https://www.bellsofsteel.us/checkout/][1] I'm unable to locate or select an option from a drop-down menu using selenium. I'm attempting to get the various taxes and shipping for an item, by iterating through a list with cities, states and zip codes The HTML for the select element is as follows: <span class="woocommerce-input-wrapper"> <select name="billing_state" id="billing_state" class="state_select select2-hidden-accessible" autocomplete="address-level1" data-placeholder="State" data-input-classes="" data-label="State / County" tabindex="-1" aria-hidden="true"> <option value="">Select an option…</option> <option value="AL">Alabama</option> <option value="AZ">Arizona</option> <option value="AR">Arkansas</option> <option value="CA">California</option> ......... <option value="WY">Wyoming</option> </select> <span class="select2 select2-container select2-container--default select2-container--above select2-container--open" dir="ltr" style="width: 100%;"> <span class="selection"> <span class="select2-selection select2-selection--single" aria-haspopup="true" aria-expanded="true" tabindex="0" aria-label="State / County" role="combobox" aria-owns="select2-billing_state-results" aria-activedescendant="select2-billing_state-result-og2a-AR"> <span class="select2-selection__rendered" id="select2-billing_state-container" role="textbox" aria-readonly="true"> <span class="select2-selection__placeholder">State</span> </span> <span class="select2-selection__arrow" role="presentation"> <b role="presentation"></b> </span> </span> </span> <span class="dropdown-wrapper" aria-hidden="true"> </span> </span> I've tried this: dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_visible_text('California') Which returns the error: Message: element not interactable: Element is not currently visible and may not be manipulated I've also used Expected Conditions element_present = EC.text_to_be_present_in_element((By.CSS_SELECTOR,'select[name="billing_state"]'),item[1]) WebDriverWait(driver, 20).until(element_present) Which will just time out. I can select the actual drop down menu with element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.selection') WebDriverWait(driver, 30).until(element_present) driver.find_element(By.CSS_SELECTOR, 'span.selection').click() which will open the drop-down menu but not make the elements clickable Any help would be greatly appreaciated! 
Posted some of the code I've been using here are the things I've tried: ###This will click on the drop down menu so that you can see it open in the selenium window: element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.selection')) WebDriverWait(driver, 30).until(element_present) try: driver.find_element(By.CSS_SELECTOR, 'span.selection').click() except: clicker = driver.find_element(By.CSS_SELECTOR, 'span.selection') driver.execute_script("arguments[0].click();", clicker) ##This attempts to select from the select options: dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_visible_text('California') essentially the same thing but with the span class directly proceeding the select element (receives click but does not drop down menu) ###This will click on the drop down menu so that you can see it open in the selenium window: element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.woocommerce-input-wrapper')) WebDriverWait(driver, 30).until(element_present) try: driver.find_element(By.CSS_SELECTOR, 'span.woocommerce-input-wrapper').click() except: clicker = driver.find_element(By.CSS_SELECTOR, 'span.woocommerce-input-wrapper') driver.execute_script("arguments[0].click();", clicker) ##This attempts to select from the select options: dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_visible_text('California') Both return the following: Message: element not interactable: Element is not currently visible and may not be manipulated When I manually click on the option I can see the HTML change from: <span class="select2-selection__rendered" id="select2-billing_state-container" role="textbox" aria-readonly="true"> <span class="select2-selection__placeholder">State</span> To <span class="select2-selection__rendered" id="select2-billing_state-container" role="textbox" aria-readonly="true" title="California">California</span> I thought I might be able to f string literal the title like so: element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, 'span.select2-selection__rendered')) WebDriverWait(driver, 30).until(element_present) driver.find_element(By.CSS_SELECTOR, 'span.select2-selection__rendered').click() element_present = EC.element_to_be_clickable((By.CSS_SELECTOR, element_present_click(f'span[title=\"California\"]')) WebDriverWait(driver, 30).until(element_present) driver.find_element(By.CSS_SELECTOR, element_present_click(f'span[title=\"California\"]').click() but it also timed out as well. Also tried to select by value: driver.find_element(By.CSS_SELECTOR, 'span.selection').click() dropdown1 = Select(driver.find_element(By.CSS_SELECTOR, 'select[name="billing_state"]')) dropdown1.select_by_value('CA') Same thing - element not interactable
[ "That dropdown is not a 'Select' type element, it is a 'ul' type element, so you can't use Select.\nIn the checkout page, add the below code and try:\n# scrolling to the element - 'First name' label\nfirst_name_label = driver.find_element(By.XPATH, \".//label[@for='billing_first_name']\")\ndriver.execute_script(\"arguments[0].scrollIntoView(true)\", first_name_label)\n\n# clicking on the 'State / County' dropdown\ndriver.find_element(By.XPATH, \"(.//*[@aria-label='State / County'])[1]\").click()\nsleep(1)\n# getting the list of all the states\nlist_of_states = driver.find_elements(By.CSS_SELECTOR, \"#select2-billing_state-results li\")\n\n# state name to be selected\nstate_to_select = \"South Dakota\"\ni = 0\n\n# select the state\nfor state in list_of_states:\n if state.text == state_to_select:\n driver.find_element(By.XPATH, \".//ul[@id='select2-billing_state-results']/li[\" + str(i + 1) + \"]\").click()\n break\n i += 1\n\n" ]
[ 0 ]
[]
[]
[ "drop_down_menu", "ironwebscraper", "python", "selenium", "selenium_webdriver" ]
stackoverflow_0074470588_drop_down_menu_ironwebscraper_python_selenium_selenium_webdriver.txt
Q: How to improve this pandas code "Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated" I have a CSV with more than 500,000 results with a lot of duplicates. I'm creating a new dataframe with unique values and trying to find the minimum value of a date. I have this code here that works but takes way too much time to run. How can I improve this? for i in range(len(df_leads)): df_leads.loc[i,'Created Date'] = df[df['customer_lead_id'] == df_leads.loc[i,'Lead']].min()['created_date'] A: I'm creating a new dataframe with unique values and trying to find the min value of a date. The second df comes from doing the unique of df['customer_lead_id'] Instead of that, we can drop_duplicates in a DataFrame after we sort_values of the date - this will keep the first occurrence, i. e. the min value: df.sort_values('created_date').drop_duplicates('customer_lead_id')
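To then put the earliest date back onto df_leads without the row-by-row loop, one option is a groupby followed by map — a sketch that assumes the column names from the question ('Lead' on df_leads, 'customer_lead_id' and 'created_date' on df):

first_dates = df.groupby('customer_lead_id')['created_date'].min()
df_leads['Created Date'] = df_leads['Lead'].map(first_dates)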
How to improve this pandas code "Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated"
I have a CSV with more than 500,000 results with a lot of duplicates. I'm creating a new dataframe with unique values and trying to find the minimum value of a date. I have this code here that works but takes way too much time to run. How can I improve this? for i in range(len(df_leads)): df_leads.loc[i,'Created Date'] = df[df['customer_lead_id'] == df_leads.loc[i,'Lead']].min()['created_date']
[ "\nI'm creating a new dataframe with unique values and trying to find the min value of a date.\nThe second df comes from doing the unique of df['customer_lead_id']\n\nInstead of that, we can drop_duplicates in a DataFrame after we sort_values of the date - this will keep the first occurrence, i. e. the min value:\ndf.sort_values('created_date').drop_duplicates('customer_lead_id')\n\n" ]
[ 0 ]
[]
[]
[ "optimization", "pandas", "python" ]
stackoverflow_0074474798_optimization_pandas_python.txt
Q: How to find if a value unique to first data frame when comparing to another data frame For example, I have 2 dataframes with 2 columns: df1 df2 AAA BBB AAA KKK BBB CCC BBB LLL CCC FFF CCC FFF DDD None None None I want to spot whats on df1 is not in df2, then the result is DDD (exclude None). How can I achieve this? A: import pandas as pd df1 = pd.DataFrame([['AAA', 'BBB'], ['BBB', 'CCC'], ['CCC', 'FFF'], ['DDD', None]]) df2 = pd.DataFrame([['AAA', 'KKK'], ['BBB', 'LLL'], ['CCC', 'FFF'], [None, None]]) df1_uniq = [] df2_uniq = [] for col in df1.columns: for string in df1[col].unique(): df1_uniq.append(string) for col in df2.columns: for string in df2[col].unique(): df2_uniq.append(string) result = [x for x in df1_uniq if not x in df2_uniq] print(result)
How to find if a value unique to first data frame when comparing to another data frame
For example, I have 2 dataframes with 2 columns: df1 df2 AAA BBB AAA KKK BBB CCC BBB LLL CCC FFF CCC FFF DDD None None None I want to spot whats on df1 is not in df2, then the result is DDD (exclude None). How can I achieve this?
[ "import pandas as pd\n\ndf1 = pd.DataFrame([['AAA', 'BBB'], ['BBB', 'CCC'], ['CCC', 'FFF'], ['DDD', None]])\ndf2 = pd.DataFrame([['AAA', 'KKK'], ['BBB', 'LLL'], ['CCC', 'FFF'], [None, None]])\n\ndf1_uniq = []\ndf2_uniq = []\nfor col in df1.columns:\n for string in df1[col].unique():\n df1_uniq.append(string)\nfor col in df2.columns:\n for string in df2[col].unique():\n df2_uniq.append(string)\n\nresult = [x for x in df1_uniq if not x in df2_uniq]\nprint(result)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074485939_dataframe_pandas_python.txt
Q: Getting text from multiple webpages(Pagination) in selenium python I wanted to extract text from multiple pages. Currently, I am able to extract data from the first page but I want to append and go to muliple pages and extract the data from pagination. I have written this simple code which extracts data from the first page. I am not able to extract the data from multiple pages which is dynamic in number. ` element_list = [] opts = webdriver.ChromeOptions() opts.headless = True driver = webdriver.Chrome(ChromeDriverManager().install()) base_url = "XYZ" driver.maximize_window() driver.get(base_url) driver.set_page_load_timeout(50) element = WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.ID, 'all-my-groups'))) l = [] l = driver.find_elements_by_xpath("//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]") for i in l: print(i.text) ` I have shared the images of class if this could help from pagination. If we could extract the automate and extract from all the pages that would be awesome. Also, I am new so please pardon me for asking silly questions. Thanks in advance. A: You have provided the code just for the previous page button. I guess you need to go to the next page until next page exists. As I don't know what site we are talking about I can only guess its behavior. So I'm assuming the button 'next' disappears when no next page exists. If so, it can be done like this: element_list = [] opts = webdriver.ChromeOptions() opts.headless = True driver = webdriver.Chrome(ChromeDriverManager().install()) base_url = "XYZ" driver.maximize_window() driver.get(base_url) driver.set_page_load_timeout(50) element = WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.ID, 'all-my-groups'))) l = [] l = driver.find_elements_by_xpath("//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]") while True: try: next_page = driver.find_element(By.XPATH, '//button[@label="Next page"]') except NoSuchElementException: break next_page.click() l.extend(driver.find_elements(By.XPATH, "//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]")) for i in l: print(i.text) To be able to catch the exception this import has to be added: from selenium.common.exceptions import NoSuchElementException Also note that the method find_elements_by_xpath is deprecated and it would be better to replace this line: l = driver.find_elements_by_xpath("//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]") by this one: l = driver.find_elements(By.XPATH, "//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]")
Getting text from multiple webpages(Pagination) in selenium python
I wanted to extract text from multiple pages. Currently, I am able to extract data from the first page but I want to append and go to muliple pages and extract the data from pagination. I have written this simple code which extracts data from the first page. I am not able to extract the data from multiple pages which is dynamic in number. ` element_list = [] opts = webdriver.ChromeOptions() opts.headless = True driver = webdriver.Chrome(ChromeDriverManager().install()) base_url = "XYZ" driver.maximize_window() driver.get(base_url) driver.set_page_load_timeout(50) element = WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.ID, 'all-my-groups'))) l = [] l = driver.find_elements_by_xpath("//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]") for i in l: print(i.text) ` I have shared the images of class if this could help from pagination. If we could extract the automate and extract from all the pages that would be awesome. Also, I am new so please pardon me for asking silly questions. Thanks in advance.
[ "You have provided the code just for the previous page button. I guess you need to go to the next page until next page exists. As I don't know what site we are talking about I can only guess its behavior. So I'm assuming the button 'next' disappears when no next page exists. If so, it can be done like this:\nelement_list = []\nopts = webdriver.ChromeOptions()\nopts.headless = True\ndriver = webdriver.Chrome(ChromeDriverManager().install())\nbase_url = \"XYZ\"\ndriver.maximize_window()\ndriver.get(base_url)\ndriver.set_page_load_timeout(50)\nelement = WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.ID, 'all-my-groups')))\n\nl = []\nl = driver.find_elements_by_xpath(\"//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]\")\n\nwhile True:\n try:\n next_page = driver.find_element(By.XPATH, '//button[@label=\"Next page\"]')\n except NoSuchElementException:\n break\n next_page.click()\n l.extend(driver.find_elements(By.XPATH, \"//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]\"))\n \n\nfor i in l:\n print(i.text)\n\nTo be able to catch the exception this import has to be added:\nfrom selenium.common.exceptions import NoSuchElementException\n\nAlso note that the method find_elements_by_xpath is deprecated and it would be better to replace this line:\nl = driver.find_elements_by_xpath(\"//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]\")\n\nby this one:\nl = driver.find_elements(By.XPATH, \"//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]\")\n\n" ]
[ 0 ]
[]
[]
[ "automation", "pagination", "python", "selenium" ]
stackoverflow_0074485737_automation_pagination_python_selenium.txt
Q: My else statement doesn't work. What is wrong? So in the if-statement, I want to print a message if the player doesn't input the correct things. The problem is that when I start up the code and type something random, it jumps to the first if where it checks yes and moves on. It even replied to the next function for me. Is there a way to fix this? import time yes_no = ['yes', 'y', 'no', 'n'] directions = ['north', 'n', 'south', 's', 'east', 'e', 'west', 'w'] else_msg = "Invalid command" def start(): print("-------------------------------") print("Welcome to Haunted Funhouse") print("Do you want to proceed? (y/n)") cmd = input(">") if cmd in yes_no: if cmd == "yes" or "y": time.sleep(1) print("\n------------------") print("Get ready") print("--------------------\n") starting_room() elif cmd == "no" or "n": time.sleep(1) print("Okay, shutting down") quit() else: print(else_msg) def starting_room(): print("-----------Start-----------") print("You stand in a octagon room") print( "There are windows to the northwest, northeast, southeast, southwest") print("There are doors to the:\n- North\n- South\n- East\n- West") print("---------------------------------------------------------") print("Where do you want to go? (n/s/e/w)") cmd = input(">") if cmd in directions: if cmd == "north" or "n": time.sleep(1) print("You enter through the north door") elif cmd == "south" or "s": time.sleep(1) print("You go through the south door") elif cmd == "west" or "w": time.sleep(1) print("You enter through the west door") elif cmd == "east" or "e": time.sleep(1) print("You enter through the east door") else: print(else_msg) start() I've tried changing '''if cmd not in yes_no''' to '''if cmd in yes_no''', but it didn't work. I ran it through Thonny and the code checker said it was fine A: You need to change: if cmd == "yes" or "y" to if cmd == "yes" or cmd == "y" ..and similarly in other parts of your code. If cmd == 'no' then this test would be True because a non-empty string is truthy. An alternative construct could be: if cmd in {'yes', 'y'} A: change as following: if cmd in yes_no: if cmd == "yes" or cmd == "y": time.sleep(1) print("\n------------------") print("Get ready") print("--------------------\n") starting_room() elif cmd == "no" or cmd == "n": time.sleep(1) print("Okay, shutting down") quit()
My else statement doesn't work. What is wrong?
So in the if-statement, I want to print a message if the player doesn't input the correct things. The problem is that when I start up the code and type something random, it jumps to the first if where it checks yes and moves on. It even replied to the next function for me. Is there a way to fix this? import time yes_no = ['yes', 'y', 'no', 'n'] directions = ['north', 'n', 'south', 's', 'east', 'e', 'west', 'w'] else_msg = "Invalid command" def start(): print("-------------------------------") print("Welcome to Haunted Funhouse") print("Do you want to proceed? (y/n)") cmd = input(">") if cmd in yes_no: if cmd == "yes" or "y": time.sleep(1) print("\n------------------") print("Get ready") print("--------------------\n") starting_room() elif cmd == "no" or "n": time.sleep(1) print("Okay, shutting down") quit() else: print(else_msg) def starting_room(): print("-----------Start-----------") print("You stand in a octagon room") print( "There are windows to the northwest, northeast, southeast, southwest") print("There are doors to the:\n- North\n- South\n- East\n- West") print("---------------------------------------------------------") print("Where do you want to go? (n/s/e/w)") cmd = input(">") if cmd in directions: if cmd == "north" or "n": time.sleep(1) print("You enter through the north door") elif cmd == "south" or "s": time.sleep(1) print("You go through the south door") elif cmd == "west" or "w": time.sleep(1) print("You enter through the west door") elif cmd == "east" or "e": time.sleep(1) print("You enter through the east door") else: print(else_msg) start() I've tried changing '''if cmd not in yes_no''' to '''if cmd in yes_no''', but it didn't work. I ran it through Thonny and the code checker said it was fine
[ "You need to change:\nif cmd == \"yes\" or \"y\"\n\nto\nif cmd == \"yes\" or cmd == \"y\"\n\n..and similarly in other parts of your code.\nIf cmd == 'no' then this test would be True because a non-empty string is truthy.\nAn alternative construct could be:\nif cmd in {'yes', 'y'}\n\n", "change as following:\nif cmd in yes_no:\n if cmd == \"yes\" or cmd == \"y\":\n time.sleep(1)\n print(\"\\n------------------\")\n print(\"Get ready\")\n print(\"--------------------\\n\")\n starting_room()\n elif cmd == \"no\" or cmd == \"n\":\n time.sleep(1)\n print(\"Okay, shutting down\")\n quit()\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074486393_python.txt
Q: How to detect figures in a paper news image in Python? So i have this project in Python (Computer Vision), which is seperating text from figures of an image (like a paper news image). My question is what's the best way to detect those figures in the paper ? (in Python). Paper image example : Paper . Haven't try anything. I have no idea .. A: I found layout-parser python toolkit which is very helpful for your project. Layout Parser is a unified toolkit for Deep Learning Based Document Image Analysis. With the help of Deep Learning, layoutparser supports the analysis very complex documents and processing of the hierarchical structure in the layouts. Check this complete notebook example on detecting newspaper layouts (separating images and text regions on the newspaper image) it's recommended to use Jupyter notebook on Linux or macOS because layout-parser isn't supported on windows OS, or you can use Google Colab which I used for direct running of the toolkit. Requirements for installing the toolkit pip install layoutparser # Install the base layoutparser library with pip install "layoutparser[layoutmodels]" # Install DL layout model toolkit pip install "layoutparser[ocr]" # Install OCR toolkit Then installing the detectron2 model backend dependencies pip install layoutparser torchvision && pip install "git+https://github.com/facebookresearch/detectron2.git@v0.5#egg=detectron2" Running the toolkit on newspaper image import layoutparser as lp import cv2 # Convert the image from BGR (cv2 default loading style) # to RGB image = cv2.imread("test.jpg") image = image[..., ::-1] # Load the deep layout model from the layoutparser API # For all the supported model, please check the Model # Zoo Page: https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html model = lp.models.Detectron2LayoutModel('lp://PrimaLayout/mask_rcnn_R_50_FPN_3x/config', extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.7], label_map={1:"TextRegion", 2:"ImageRegion", 3:"TableRegion", 4:"MathsRegion", 5:"SeparatorRegion", 6:"OtherRegion"}) # Detect the layout of the input image layout = model.detect(image) # Show the detected layout of the input image lp.draw_box(image, layout, box_width=3) From the result image you can see text layouts regions in orange box and image layouts regions (figure) in white box. It's amazing deep learning toolkit for image recognition. A: I would get started with the OpenCV module in Python, as it has a lot of really useful tools for image recognition. I'll link it here: https://pypi.org/project/opencv-python/ https://github.com/opencv Got to the first link to download the module package, and then check out the github link if you need help or have any issues. A: you can use image segmentation approach. Use connected components labelling algorithm so that all the text and images are detected as components. The components with larger area than a particular threshold can be detected as images in the paper. The connectedcomponentswithstats method can help to get components and get area of all components. Hope this helps. 
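A minimal sketch of that connected-components idea (it assumes OpenCV is installed, reuses the paper-news.png placeholder file name from the next answer, and the 5000-pixel area threshold is a made-up value to tune for the scan):

import cv2

image = cv2.imread('paper-news.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Invert and binarise so that dark ink becomes the foreground
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
for i in range(1, num_labels):          # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 5000:                     # large components are more likely figures than text
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite('figures_marked.png', image)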
A: import cv2 import numpy as np # Read the image image = cv2.imread('paper-news.png') # Convert to grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Blur the image blurred = cv2.GaussianBlur(gray, (5, 5), 0) canny = cv2.Canny(blurred, 30, 150) # Find contours in the image contours, hierarchy = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Iterate over the contours for contour in contours: # Get the rectangle bounding the contour # Draw the rectangle cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2) # Show the image cv2.imshow('Image with Figures Detected', image) cv2.waitKey(0) this will help you. A: Simple way would be to detect the region of text using this resource. Detect text region in image using Opencv Then, do white color background thresholding and blob detection in the remaining region to find the images using this resource. Detecting and counting blobs/connected objects with opencv
How to detect figures in a paper news image in Python?
So I have this project in Python (computer vision), which separates text from figures in an image (like a newspaper image). My question is: what is the best way to detect those figures in the paper (in Python)? Paper image example: Paper. I haven't tried anything yet; I have no idea.
[ "I found layout-parser python toolkit which is very helpful for your project.\nLayout Parser is a unified toolkit for Deep Learning Based Document Image Analysis.\nWith the help of Deep Learning, layoutparser supports the analysis very complex documents and processing of the hierarchical structure in the layouts.\nCheck this complete notebook example on detecting newspaper layouts (separating images and text regions on the newspaper image)\nit's recommended to use Jupyter notebook on Linux or macOS because layout-parser isn't supported on windows OS, or you can use Google Colab which I used for direct running of the toolkit.\nRequirements for installing the toolkit\npip install layoutparser # Install the base layoutparser library with \npip install \"layoutparser[layoutmodels]\" # Install DL layout model toolkit \npip install \"layoutparser[ocr]\" # Install OCR toolkit\n\nThen installing the detectron2 model backend dependencies\npip install layoutparser torchvision && pip install \"git+https://github.com/facebookresearch/detectron2.git@v0.5#egg=detectron2\" \n\nRunning the toolkit on newspaper image\nimport layoutparser as lp\nimport cv2\n\n# Convert the image from BGR (cv2 default loading style)\n# to RGB\nimage = cv2.imread(\"test.jpg\")\nimage = image[..., ::-1] \n\n# Load the deep layout model from the layoutparser API \n# For all the supported model, please check the Model \n# Zoo Page: https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html \nmodel = lp.models.Detectron2LayoutModel('lp://PrimaLayout/mask_rcnn_R_50_FPN_3x/config', \n extra_config=[\"MODEL.ROI_HEADS.SCORE_THRESH_TEST\", 0.7],\n label_map={1:\"TextRegion\", 2:\"ImageRegion\", 3:\"TableRegion\", 4:\"MathsRegion\", 5:\"SeparatorRegion\", 6:\"OtherRegion\"})\n \n# Detect the layout of the input image\nlayout = model.detect(image)\n \n# Show the detected layout of the input image\nlp.draw_box(image, layout, box_width=3)\n \n\n\nFrom the result image you can see text layouts regions in orange box and image layouts regions (figure) in white box. It's amazing deep learning toolkit for image recognition.\n", "I would get started with the OpenCV module in Python, as it has a lot of really useful tools for image recognition. I'll link it here:\nhttps://pypi.org/project/opencv-python/\nhttps://github.com/opencv\nGot to the first link to download the module package, and then check out the github link if you need help or have any issues.\n", "you can use image segmentation approach. Use connected components labelling algorithm so that all the text and images are detected as components. The components with larger area than a particular threshold can be detected as images in the paper. 
The connectedcomponentswithstats method can help to get components and get area of all components.\nHope this helps.\n", "import cv2\nimport numpy as np\n\n# Read the image\nimage = cv2.imread('paper-news.png')\n\n# Convert to grayscale\ngray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n# Blur the image\nblurred = cv2.GaussianBlur(gray, (5, 5), 0)\n\ncanny = cv2.Canny(blurred, 30, 150)\n\n# Find contours in the image\ncontours, hierarchy = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n\n# Iterate over the contours\nfor contour in contours:\n # Get the rectangle bounding the contour\n # Draw the rectangle\n cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2)\n\n# Show the image\ncv2.imshow('Image with Figures Detected', image)\ncv2.waitKey(0)\n\nthis will help you.\n", "\nSimple way would be to detect the region of text using this resource.\n\nDetect text region in image using Opencv\n\nThen, do white color background thresholding and blob detection in the remaining region to find the images using this resource.\n\nDetecting and counting blobs/connected objects with opencv\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "computer_vision", "document_layout_analysis", "image_processing", "object_detection", "python" ]
stackoverflow_0074485252_computer_vision_document_layout_analysis_image_processing_object_detection_python.txt
Q: Find non-numeric values in pandas dataframe column I got a a column in a dataframe that contains numbers and strings. So I replaced the strings by numbers via df.column.replace(["A", "B", "C", "D"], [1, 2, 3, 4], inplace=True). But the column is still dtype "object". I can not sort the column (TypeError error: '<' not supported between instances of 'str' and 'int'). Now how can I identify those numbers that are strings? I tried print(df[pd.to_numeric(df['column']).isnull()]) and it gives back an empty dataframe, as expected. However I read that this does not work in my case (actual numbers saved as strings). So how can I identify those numbers saved as a string? Am I right that if a column only contains REAL numbers (int or float) it will automatically change to dtype int or float? Thank you! A: You can use pd.to_numeric with something like: df['column'] = pd.to_numeric(df['column'], errors='coerce') For the errors argument you have few option, see reference documentation here A: Expanding on Francesco's answer, it's possible to create a mask of non-numeric values and identify unique instances to handle or remove. This uses the fact that where values cant be coerced, they are treated as nulls. is_non_numeric = pd.to_numeric(df['column'], errors='coerce').isnull() df[is_non_numeric]['column'].unique() Or alternatively in a single line: df[pd.to_numeric(df['column'], errors='coerce').isnull()]['column'].unique() A: you can change dtype df.column.dtype=df.column.astype(int)
Find non-numeric values in pandas dataframe column
I got a a column in a dataframe that contains numbers and strings. So I replaced the strings by numbers via df.column.replace(["A", "B", "C", "D"], [1, 2, 3, 4], inplace=True). But the column is still dtype "object". I can not sort the column (TypeError error: '<' not supported between instances of 'str' and 'int'). Now how can I identify those numbers that are strings? I tried print(df[pd.to_numeric(df['column']).isnull()]) and it gives back an empty dataframe, as expected. However I read that this does not work in my case (actual numbers saved as strings). So how can I identify those numbers saved as a string? Am I right that if a column only contains REAL numbers (int or float) it will automatically change to dtype int or float? Thank you!
[ "You can use pd.to_numeric with something like:\ndf['column'] = pd.to_numeric(df['column'], errors='coerce')\n\nFor the errors argument you have few option, see reference documentation here\n", "Expanding on Francesco's answer, it's possible to create a mask of non-numeric values and identify unique instances to handle or remove.\nThis uses the fact that where values cant be coerced, they are treated as nulls.\nis_non_numeric = pd.to_numeric(df['column'], errors='coerce').isnull()\ndf[is_non_numeric]['column'].unique()\n\nOr alternatively in a single line:\ndf[pd.to_numeric(df['column'], errors='coerce').isnull()]['column'].unique()\n\n", "you can change dtype\n df.column.dtype=df.column.astype(int)\n\n" ]
[ 0, 0, -1 ]
[]
[]
[ "dataframe", "dtype", "pandas", "python", "python_3.x" ]
stackoverflow_0062376326_dataframe_dtype_pandas_python_python_3.x.txt
Q: Custom methods for a Pandas DataFrame I'd like to write custom functions that I can call on my pd.DataFrame df using the df.method() notation. For instance, def my_pd_method(df: pd.DataFrame, col: str)->pd.DataFrame: '''apply my_function to df[col] of df''' df_copy = df.copy(deep = True) df_copy[col] = df_copy[col].apply(lambda x: my_function(x), axis = 1) return df_copy After this, I can run the command PandasObject.my_pd_method = my_pd_method to define the my_pd_method as a pd method. After this, df.my_pd_method(col) will run as expected. Is there some way to do this in a single function that I can put it in a library, import it and start using it, without having to run PandasObject.my_pd_method = my_pd_method? A: Your best shot is to use inheritance i.e to create your own custom class that inherits from pandas DataFrame class. Example: class CustomDataFrame(pd.DataFrame): def my_method(self, col): df_copy = self.copy(deep = True) df_copy[col] = df_copy[col].apply(lambda x: my_function(x), axis = 1) return df_copy Then you will be able to call your method like you wanted: df.my_method(col)
Custom methods for a Pandas DataFrame
I'd like to write custom functions that I can call on my pd.DataFrame df using the df.method() notation. For instance, def my_pd_method(df: pd.DataFrame, col: str)->pd.DataFrame: '''apply my_function to df[col] of df''' df_copy = df.copy(deep = True) df_copy[col] = df_copy[col].apply(lambda x: my_function(x), axis = 1) return df_copy After this, I can run the command PandasObject.my_pd_method = my_pd_method to define the my_pd_method as a pd method. After this, df.my_pd_method(col) will run as expected. Is there some way to do this in a single function that I can put it in a library, import it and start using it, without having to run PandasObject.my_pd_method = my_pd_method?
[ "Your best shot is to use inheritance i.e to create your own custom class that inherits from pandas DataFrame class.\nExample:\nclass CustomDataFrame(pd.DataFrame):\n def my_method(self, col):\n df_copy = self.copy(deep = True)\n df_copy[col] = df_copy[col].apply(lambda x: my_function(x), axis = 1)\n\n return df_copy \n\nThen you will be able to call your method like you wanted:\ndf.my_method(col)\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "methods", "oop", "pandas", "python" ]
stackoverflow_0074485623_dataframe_methods_oop_pandas_python.txt
Q: find Coordinates on a 3D hexgon Coordinates of the hexagon [Hexagon](https://i.stack.imgur.com/tww83.png) Now I want to add coordinates between these coordinates such that these 8 coordinates 27 coordinates as shownenter image description here an find the new coordinates too I tried to make a code but not working much well xi = [-1.0, 0.0, 1.0] ele_arr is the coordinates given cord1 = np.zeros([(N+1)*(N+1)*(NPZ+1),4],dtype = 'float64') cord1[0,:] = ele_arr[-1,:] cord1[N,:] = ele_arr[-2,:] cord1[N*(N+1),:] = ele_arr[3,:] cord1[((N+1)*(N+1))-1,:] = ele_arr[2,:] cord1[-1,:] = ele_arr[0,:] cord1[-1-(N),:] = ele_arr[1,:] cord1[-1-(N*(N+1)),:] = ele_arr[-4,:] cord1[-((N+1)*(N+1)),:] = ele_arr[-3,:] for i in range(1,NPZ): gap = (N+1) * (N+1) cord1[i*gap,1:4] = np.array([cord1[0,1] + ((cord1[-((N+1)*(N+1)),1] - cord1[0,1]) * ((xi[i] - xi[0])/2)), cord1[0,2] + ((cord1[-((N+1)*(N+1)),2] - cord1[0,2]) * ((xi[i] - xi[0])/2)), cord1[0,3] + ((cord1[-((N+1)*(N+1)),3] - cord1[0,3]) * ((xi[i] - xi[0])/2))]) cord1[i*gap + N,1:4] = np.array([cord1[N,1] + ((cord1[-1-(N*(N+1)),1] - cord1[N,1]) * ((xi[i] - xi[0])/2)), cord1[N,2] + ((cord1[-1-(N*(N+1)),2] - cord1[N,2]) * ((xi[i] - xi[0])/2)), cord1[N,3] + ((cord1[-1-(N*(N+1)),3] - cord1[N,3]) * ((xi[i] - xi[0])/2))]) cord1[i*gap + N + (N+1)*N,1:4] = np.array([cord1[((N+1)*(N+1))-1,1] + ((cord1[-1,1] - cord1[((N+1)*(N+1))-1,1]) * ((xi[i] - xi[0])/2)), cord1[((N+1)*(N+1))-1,2] + ((cord1[-1,2] - cord1[((N+1)*(N+1))-1,2]) * ((xi[i] - xi[0])/2)), cord1[((N+1)*(N+1))-1,3] + ((cord1[-1,3] - cord1[((N+1)*(N+1))-1,3]) * ((xi[i] - xi[0])/2))]) cord1[i*gap + (N+1)*N,1:4] = np.array([ cord1[N*(N+1),1] + ((cord1[-1-N,1] - cord1[N*(N+1),1]) * ((xi[i] - xi[0])/2)), cord1[N*(N+1),2] + ((cord1[-1-N,2] - cord1[N*(N+1),2]) * ((xi[i] - xi[0])/2)), cord1[N*(N+1),3] + ((cord1[-1-N,3] - cord1[N*(N+1),3]) * ((xi[i] - xi[0])/2))]) for i in range(NPZ+1): for j in range(1,N): cord1[i*(N+1)*(N+1)+j,1:4] = [cord1[i*(N+1)*(N+1),1] + ((cord1[i*(N+1)*(N+1)+N,1] - cord1[i*(N+1)*(N+1),1]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1),2] + ((cord1[i*(N+1)*(N+1)+N,2] - cord1[i*(N+1)*(N+1),2]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1),3] + ((cord1[i*(N+1)*(N+1)+N,3] - cord1[i*(N+1)*(N+1),3]) * (((xi[j] - xi[0])/2)))] cord1[i*(N+1)*(N+1)+(N*(N+1))+j,1:4] = [cord1[i*(N+1)*(N+1)+(N*(N+1)),1] + ((cord1[i*(N+1)*(N+1)+(N*(N+1))+N,1] - cord1[i*(N+1)*(N+1)+(N*(N+1)),1]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1)+(N*(N+1)),2] + ((cord1[i*(N+1)*(N+1)+(N*(N+1))+N,2] - cord1[i*(N+1)*(N+1)+(N*(N+1)),2]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1)+(N*(N+1)),3] + ((cord1[i*(N+1)*(N+1)+(N*(N+1))+N,3] - cord1[i*(N+1)*(N+1)+(N*(N+1)),3]) * (((xi[j] - xi[0])/2)))] for i in range(NPZ+1): for j in range(N+1): for k in range(1,N): cord1[(i*gap)+(k*(N+1))+j,1:4] = [cord1[(i*gap)+j,1] + ((cord1[(i*gap)+j+(N*(N+1)),1] - cord1[(i*gap)+j,1]) * (((xi[k] - xi[0])/2))), cord1[(i*gap)+j,2] + ((cord1[(i*gap)+j+(N*(N+1)),2] - cord1[(i*gap)+j,2]) * (((xi[k] - xi[0])/2))), cord1[(i*gap)+j,3] + ((cord1[(i*gap)+j+(N*(N+1)),3] - cord1[(i*gap)+j,3]) * (((xi[k] - xi[0])/2)))] A: Hard code it: let a, b, c, d, e, f, g, h be the eight points. Then add the averages of (a, b), (b, c), (c, d), (d, a), (d, e), (e, f), (g, h), (h, d), (a, d), (b, e), (c, f), (d, h), (a, b, c, d), (a, d, b, e), (e, f, g, h), (b, e, c, f), (c, f, d, h), (d, h, a, d), (a, b, c, d, e, f, g).
find Coordinates on a 3D hexagon
Coordinates of the hexagon [Hexagon](https://i.stack.imgur.com/tww83.png) Now I want to add coordinates between these coordinates such that these 8 coordinates 27 coordinates as shownenter image description here an find the new coordinates too I tried to make a code but not working much well xi = [-1.0, 0.0, 1.0] ele_arr is the coordinates given cord1 = np.zeros([(N+1)*(N+1)*(NPZ+1),4],dtype = 'float64') cord1[0,:] = ele_arr[-1,:] cord1[N,:] = ele_arr[-2,:] cord1[N*(N+1),:] = ele_arr[3,:] cord1[((N+1)*(N+1))-1,:] = ele_arr[2,:] cord1[-1,:] = ele_arr[0,:] cord1[-1-(N),:] = ele_arr[1,:] cord1[-1-(N*(N+1)),:] = ele_arr[-4,:] cord1[-((N+1)*(N+1)),:] = ele_arr[-3,:] for i in range(1,NPZ): gap = (N+1) * (N+1) cord1[i*gap,1:4] = np.array([cord1[0,1] + ((cord1[-((N+1)*(N+1)),1] - cord1[0,1]) * ((xi[i] - xi[0])/2)), cord1[0,2] + ((cord1[-((N+1)*(N+1)),2] - cord1[0,2]) * ((xi[i] - xi[0])/2)), cord1[0,3] + ((cord1[-((N+1)*(N+1)),3] - cord1[0,3]) * ((xi[i] - xi[0])/2))]) cord1[i*gap + N,1:4] = np.array([cord1[N,1] + ((cord1[-1-(N*(N+1)),1] - cord1[N,1]) * ((xi[i] - xi[0])/2)), cord1[N,2] + ((cord1[-1-(N*(N+1)),2] - cord1[N,2]) * ((xi[i] - xi[0])/2)), cord1[N,3] + ((cord1[-1-(N*(N+1)),3] - cord1[N,3]) * ((xi[i] - xi[0])/2))]) cord1[i*gap + N + (N+1)*N,1:4] = np.array([cord1[((N+1)*(N+1))-1,1] + ((cord1[-1,1] - cord1[((N+1)*(N+1))-1,1]) * ((xi[i] - xi[0])/2)), cord1[((N+1)*(N+1))-1,2] + ((cord1[-1,2] - cord1[((N+1)*(N+1))-1,2]) * ((xi[i] - xi[0])/2)), cord1[((N+1)*(N+1))-1,3] + ((cord1[-1,3] - cord1[((N+1)*(N+1))-1,3]) * ((xi[i] - xi[0])/2))]) cord1[i*gap + (N+1)*N,1:4] = np.array([ cord1[N*(N+1),1] + ((cord1[-1-N,1] - cord1[N*(N+1),1]) * ((xi[i] - xi[0])/2)), cord1[N*(N+1),2] + ((cord1[-1-N,2] - cord1[N*(N+1),2]) * ((xi[i] - xi[0])/2)), cord1[N*(N+1),3] + ((cord1[-1-N,3] - cord1[N*(N+1),3]) * ((xi[i] - xi[0])/2))]) for i in range(NPZ+1): for j in range(1,N): cord1[i*(N+1)*(N+1)+j,1:4] = [cord1[i*(N+1)*(N+1),1] + ((cord1[i*(N+1)*(N+1)+N,1] - cord1[i*(N+1)*(N+1),1]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1),2] + ((cord1[i*(N+1)*(N+1)+N,2] - cord1[i*(N+1)*(N+1),2]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1),3] + ((cord1[i*(N+1)*(N+1)+N,3] - cord1[i*(N+1)*(N+1),3]) * (((xi[j] - xi[0])/2)))] cord1[i*(N+1)*(N+1)+(N*(N+1))+j,1:4] = [cord1[i*(N+1)*(N+1)+(N*(N+1)),1] + ((cord1[i*(N+1)*(N+1)+(N*(N+1))+N,1] - cord1[i*(N+1)*(N+1)+(N*(N+1)),1]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1)+(N*(N+1)),2] + ((cord1[i*(N+1)*(N+1)+(N*(N+1))+N,2] - cord1[i*(N+1)*(N+1)+(N*(N+1)),2]) * (((xi[j] - xi[0])/2))), cord1[i*(N+1)*(N+1)+(N*(N+1)),3] + ((cord1[i*(N+1)*(N+1)+(N*(N+1))+N,3] - cord1[i*(N+1)*(N+1)+(N*(N+1)),3]) * (((xi[j] - xi[0])/2)))] for i in range(NPZ+1): for j in range(N+1): for k in range(1,N): cord1[(i*gap)+(k*(N+1))+j,1:4] = [cord1[(i*gap)+j,1] + ((cord1[(i*gap)+j+(N*(N+1)),1] - cord1[(i*gap)+j,1]) * (((xi[k] - xi[0])/2))), cord1[(i*gap)+j,2] + ((cord1[(i*gap)+j+(N*(N+1)),2] - cord1[(i*gap)+j,2]) * (((xi[k] - xi[0])/2))), cord1[(i*gap)+j,3] + ((cord1[(i*gap)+j+(N*(N+1)),3] - cord1[(i*gap)+j,3]) * (((xi[k] - xi[0])/2)))]
[ "Hard code it: let a, b, c, d, e, f, g, h be the eight points. Then add the averages of\n(a, b), (b, c), (c, d), (d, a), (d, e), (e, f), (g, h), (h, d), (a, d), (b, e), (c, f), (d, h), (a, b, c, d), (a, d, b, e), (e, f, g, h), (b, e, c, f), (c, f, d, h), (d, h, a, d), (a, b, c, d, e, f, g).\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074486559_python.txt
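A general way to compute the 27 node coordinates from the 8 corner coordinates, without hand-indexing every slot as in the question, is trilinear interpolation over the reference coordinates xi in {-1, 0, 1}; at the midpoints this reproduces exactly the averages the answer describes. The sketch below is a minimal illustration, not the asker's exact node ordering, and it assumes corners is an (8, 3) array whose rows follow itertools.product over (-1, 1) for (x, y, z):

import itertools
import numpy as np

def lattice_points(corners, xi=(-1.0, 0.0, 1.0)):
    # corners: (8, 3) array; row i matches the i-th sign pattern produced by
    # itertools.product((-1, 1), repeat=3), i.e. (-1,-1,-1), (-1,-1,1), ...
    ref = np.array(list(itertools.product((-1.0, 1.0), repeat=3)))
    points = []
    for z in xi:
        for y in xi:
            for x in xi:
                # standard trilinear shape functions of a hexahedral element:
                # weight 1 at the matching corner, 0 at the others, averages in between
                w = (1 + ref[:, 0] * x) * (1 + ref[:, 1] * y) * (1 + ref[:, 2] * z) / 8.0
                points.append(w @ corners)
    return np.array(points)  # (27, 3): corners, edge/face midpoints, center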
Q: What is the most efficient option to implement a C based library into my python code? I have a moderate amount of experience in python and a little experience in C++ and c#. I am currently doing an optimization challenge where I am gated by efficiency, and am hoping to use a C library in python to increase efficiency. I have no experience using C in python, but I won't need to marshall many variables. I will need to call a function in python, then from there it can be entirely C. An example of what I am hoping the code would look like is: import cLibrary as C #start python code def runFunction(string): #start C run function in C, have to marshall string #end C runFunction(string) #end python I am confident with the C/C++ code itself, primary issue is what library/module to use, how to call that library, and how to convert the string from python to C. A: In general, you can't write your C code inline within your Python file. Instead, you need to create a separate C program which can get compiled into a library which defines a function that can be imported by Python. This article appears to have reasonable instructions. A: Better way to do this will be create a ".dll" (for window) and ".so" file (for Linux) and invoke c code with help of those file.
What is the most efficient option to implement a C based library into my python code?
I have a moderate amount of experience in Python and a little experience in C++ and C#. I am currently doing an optimization challenge where I am gated by efficiency, and am hoping to use a C library in Python to increase efficiency. I have no experience using C in Python, but I won't need to marshal many variables. I will need to call a function in Python, and from there it can be entirely C. An example of what I am hoping the code would look like:
import cLibrary as C

# start python code
def runFunction(string):
    # start C
    # run function in C, have to marshal string
    # end C

runFunction(string)
# end python
I am confident with the C/C++ code itself; my main questions are which library/module to use, how to call that library, and how to convert the string from Python to C.
[ "In general, you can't write your C code inline within your Python file. Instead, you need to create a separate C program which can get compiled into a library which defines a function that can be imported by Python.\nThis article appears to have reasonable instructions.\n", "A better way to do this is to create a \".dll\" (on Windows) or \".so\" file (on Linux) and invoke the C code with the help of that file (see the ctypes sketch after this entry).\n" ]
[ 0, 0 ]
[]
[]
[ "c", "python", "python_c_api" ]
stackoverflow_0074467742_c_python_python_c_api.txt
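For the string-passing case in the question, ctypes from the standard library is one common route; here is a minimal sketch, assuming you have compiled your C code into a shared library. The library name libmylib.so and the function name process are hypothetical placeholders:

import ctypes

# assumes a C function `int process(const char *s);` compiled into ./libmylib.so
# (e.g. gcc -shared -fPIC mylib.c -o libmylib.so); on Windows this would be a .dll
lib = ctypes.CDLL("./libmylib.so")
lib.process.argtypes = [ctypes.c_char_p]
lib.process.restype = ctypes.c_int

def run_function(s: str) -> int:
    # encoding the Python str to bytes is the only marshalling a C string needs
    return lib.process(s.encode("utf-8"))

print(run_function("hello"))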
Q: Send email in Python Now, as the Lesser secure apps feature in Gmail has been disabled, I am trying to find alternatives for email sending. I am trying freemail.hu as an alternative which supports SMTP protocol, but any other suggestion is highly welcome. According to the web page, the data for SMTP are the following: Server name: smtp.freemail.hu Port: 587 (with STARTTLS) Username: email address Password: the same as used on the web My code looks like this: import smtplib import ssl try: server = smtplib.SMTP('smtp.freemail.hu', 587) server.starttls(context=ssl.create_default_context()) server.login('[myuser]@freemail.hu', '[mypassword]') server.sendmail('[myuser]@freemail.hu', ['[myprivatemail]@gmail.com'], 'Test mail.') except Exception as e: print(e) finally: server.quit() The username is password is correct: I checked them several times + it works on the web interface. However, I am getting the following error message: (535, b'5.7.8 Error: authentication failed: [encoded value]') Does anyone has an idea what the problem could be? I tried two email providers (freemail.hu, mail.com), tried to log in with and without server name, tried to enter the password from command prompt, checked the settings looking for the feature similar to Lesser secure apps in Google, but nothing helped. A: For Gmail the App Passwords as described on page https://support.google.com/mail/answer/185833 works well. The 2 step verification should be turned on, and then a 16 character long random password can be generated on the App passwords section.
Send email in Python
Now, as the Lesser secure apps feature in Gmail has been disabled, I am trying to find alternatives for email sending. I am trying freemail.hu as an alternative which supports the SMTP protocol, but any other suggestion is highly welcome. According to the web page, the data for SMTP are the following:
Server name: smtp.freemail.hu
Port: 587 (with STARTTLS)
Username: email address
Password: the same as used on the web
My code looks like this:
import smtplib
import ssl

try:
    server = smtplib.SMTP('smtp.freemail.hu', 587)
    server.starttls(context=ssl.create_default_context())
    server.login('[myuser]@freemail.hu', '[mypassword]')
    server.sendmail('[myuser]@freemail.hu', ['[myprivatemail]@gmail.com'], 'Test mail.')
except Exception as e:
    print(e)
finally:
    server.quit()

The username and password are correct: I checked them several times + it works on the web interface. However, I am getting the following error message:
(535, b'5.7.8 Error: authentication failed: [encoded value]')
Does anyone have an idea what the problem could be? I tried two email providers (freemail.hu, mail.com), tried to log in with and without the server name, tried to enter the password from the command prompt, and checked the settings looking for a feature similar to Lesser secure apps in Google, but nothing helped.
[ "For Gmail, the App Passwords feature described at https://support.google.com/mail/answer/185833 works well. Two-step verification should be turned on, and then a 16-character random password can be generated in the App passwords section (see the sketch after this entry).\n" ]
[ 0 ]
[]
[]
[ "python", "smtplib" ]
stackoverflow_0074395874_python_smtplib.txt
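A minimal sketch of the app-password approach from the answer above, using Gmail's SMTP server; the addresses and the 16-character app password are placeholders you generate in your Google account:

import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@gmail.com"      # placeholder sender
msg["To"] = "you@example.com"     # placeholder recipient
msg["Subject"] = "Test mail"
msg.set_content("Test mail.")

# SMTP_SSL on port 465; port 587 with starttls(), as in the question, works too
with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=ssl.create_default_context()) as server:
    server.login("me@gmail.com", "abcdefghijklmnop")  # 16-char app password placeholder
    server.send_message(msg)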
Q: Python - Sort Dictionary Key Alphabetically while sorting Value list elements by length I have a Dictionary here: test_dict = {'gfg': ['One', 'six', 'three'], 'is': ['seven', 'eight', 'nine'], 'best': ['ten', 'six']} I tried: for i in range(len(test_dict)): values = list(test_dict.values()) keys = list(test_dict) value_sorted_list = values[i] value_sorted_list = keys[i] keys_sorted_list = random.shuffle(value_sorted_list) test_dict.update({f"{keys_sorted_list}":value_sorted_list}) I want to sort the keys alphabetically while the value list by length Something like this: test_dict = {'best': ['six', 'ten'], 'gfg': ['One', 'six', 'three'], 'is': ['nine', 'eight', 'seven]} I also want another function similar to the one i mentioned above but if the elements are similar length, to sort them randomly. As well as another function to sort value list randomly. A: Sorting keys alphabetically and values by length. new_dict = {} for key in sorted(test_dict.keys()): sorted_values = sorted(test_dict[key], key=len) new_dict[key] = sorted_values print(new_dict) A: dict preserves insertion order since 3.7. Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6. Therefore, you can simply construct a new dictionary according to the sorted key and ensure the corresponding value is sorted. From the output your posted, the value is sorted by length first then alphabetically. result = {key: sorted(value, key=lambda x: (len(x), x)) for key, value in sorted(test_dict.items())} # Thanks to Masklinn print(result) # {'best': ['six', 'ten'], 'gfg': ['One', 'six', 'three'], 'is': ['nine', 'eight', 'seven']} Reference: dict-comprehension - a way to construct a dictionary sorted - The key function here achieves sorting by length first then alphabetically. You can change it according to your sorting rules. A: This can be achieved with a dictionary comprehension as follows: test_dict = {'gfg': ['One', 'six', 'three'], 'is': ['seven', 'eight', 'nine'], 'best': ['ten', 'six']} new_dict = {k:sorted(v, key=len) for k, v in sorted(test_dict.items())} print(new_dict) Output: {'best': ['ten', 'six'], 'gfg': ['One', 'six', 'three'], 'is': ['nine', 'seven', 'eight']}
Python - Sort Dictionary Key Alphabetically while sorting Value list elements by length
I have a Dictionary here:
test_dict = {'gfg': ['One', 'six', 'three'],
             'is': ['seven', 'eight', 'nine'],
             'best': ['ten', 'six']}
I tried:
for i in range(len(test_dict)):
    values = list(test_dict.values())
    keys = list(test_dict)
    value_sorted_list = values[i]
    value_sorted_list = keys[i]
    keys_sorted_list = random.shuffle(value_sorted_list)
    test_dict.update({f"{keys_sorted_list}":value_sorted_list})
I want to sort the keys alphabetically while sorting the value lists by length. Something like this:
test_dict = {'best': ['six', 'ten'],
             'gfg': ['One', 'six', 'three'],
             'is': ['nine', 'eight', 'seven']}
I also want another function similar to the one I mentioned above, but one that, if the elements are of similar length, sorts them randomly. As well as another function to sort the value lists randomly.
[ "Sorting keys alphabetically and values by length.\nnew_dict = {}\nfor key in sorted(test_dict.keys()):\n sorted_values = sorted(test_dict[key], key=len)\n new_dict[key] = sorted_values\nprint(new_dict)\n\n", "dict preserves insertion order since 3.7.\n\nChanged in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6.\n\nTherefore, you can simply construct a new dictionary according to the sorted key and ensure the corresponding value is sorted. From the output your posted, the value is sorted by length first then alphabetically.\nresult = {key: sorted(value, key=lambda x: (len(x), x)) for key, value in sorted(test_dict.items())} # Thanks to Masklinn\nprint(result)\n# {'best': ['six', 'ten'], 'gfg': ['One', 'six', 'three'], 'is': ['nine', 'eight', 'seven']}\n\nReference:\ndict-comprehension - a way to construct a dictionary\nsorted - The key function here achieves sorting by length first then alphabetically. You can change it according to your sorting rules.\n", "This can be achieved with a dictionary comprehension as follows:\ntest_dict = {'gfg': ['One', 'six', 'three'], \n 'is': ['seven', 'eight', 'nine'], \n 'best': ['ten', 'six']}\n\nnew_dict = {k:sorted(v, key=len) for k, v in sorted(test_dict.items())}\n\nprint(new_dict)\n\nOutput:\n{'best': ['ten', 'six'], 'gfg': ['One', 'six', 'three'], 'is': ['nine', 'seven', 'eight']}\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dictionary", "function", "list", "python", "sorting" ]
stackoverflow_0074485945_dictionary_function_list_python_sorting.txt
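The answers above cover the main case; for the two follow-up requests in the question (random order among values of equal length, and a fully random order), one possible sketch is to extend the sort key with a random component:

import random

test_dict = {'gfg': ['One', 'six', 'three'],
             'is': ['seven', 'eight', 'nine'],
             'best': ['ten', 'six']}

# sort by length, breaking ties between equal-length strings randomly
by_len_random_ties = {k: sorted(v, key=lambda s: (len(s), random.random()))
                      for k, v in sorted(test_dict.items())}

# shuffle each value list completely (random.sample returns a shuffled copy)
shuffled = {k: random.sample(v, len(v)) for k, v in sorted(test_dict.items())}

print(by_len_random_ties)
print(shuffled)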
Q: --proto_path passed empty directory name I'm trying to compile a proto file with the following command: protoc -I=. --python_out=. ./message.proto --proto_path=. But I'm getting this error: --proto_path passed empty directory name. (Use "." for current directory.) What to do? A: You should remove = in -I=. and also remove --proto_path=. flag A: The command works for me. NOTE -I == --proto_path so using both with the same value is redundant One 'wrinkle' with protoc is that the protobuf files must be encapsulated by a proto_path. So, in your case, the current directory must contain a valid message.proto file, both for ./message.proto to be a valid reference and because your --proto_path includes the current directory. A: In my case, mixing short and long flags -I= and --python_out= was the problem. $ protoc -I=. --python_out=. ./message.proto --proto_path passed empty directory name. (Use "." for current directory.) solution: $ protoc --proto_path=. --python_out=. ./message.proto In your case, removing -I=. will fix the problem. $ --python_out=. ./message.proto --proto_path=.
--proto_path passed empty directory name
I'm trying to compile a proto file with the following command: protoc -I=. --python_out=. ./message.proto --proto_path=. But I'm getting this error: --proto_path passed empty directory name. (Use "." for current directory.) What to do?
[ "You should remove the = in -I=. and also remove the --proto_path=. flag\n", "The command works for me.\n\nNOTE -I == --proto_path so using both with the same value is redundant\n\nOne 'wrinkle' with protoc is that the protobuf files must be encapsulated by a proto_path.\nSo, in your case, the current directory must contain a valid message.proto file, both for ./message.proto to be a valid reference and because your --proto_path includes the current directory.\n", "In my case, mixing short and long flags -I= and --python_out= was the problem.\n$ protoc -I=. --python_out=. ./message.proto\n--proto_path passed empty directory name. (Use \".\" for current directory.)\n\nsolution:\n$ protoc --proto_path=. --python_out=. ./message.proto\n\nIn your case, removing -I=. will fix the problem.\n$ protoc --python_out=. ./message.proto --proto_path=.\n\n" ]
[ 8, 0, 0 ]
[]
[]
[ "proto", "protocol_buffers", "python" ]
stackoverflow_0064048132_proto_protocol_buffers_python.txt
Q: Visual Studio Code Python refuses to write to file I'm trying to have a program output data to a JSON file, but VS code or Python itself seems to have a problem with that. Specifically, I'm trying to output this(Tlist and Slist are lists of integers): output = {"Time": Tlist, "Space": Slist} json_data = json.dumps(output, indent=4) with open("sortsOutput.json", "a") as outfile: outfile.write(json_data) But nothing seems to be happening. SortsOutput.json was never made, and even with a pre-existing SortsOuput.json nothing happened. Heck, this doesn't even work: out = open("blah.txt", "w") out.write("Egg") out.close() What might be going wrong for my software for this to happen? I'm using Python v2022.16.1, for the record, and every time the program runs for the first time the command "conda activate base" happens with some error text that doesn't seem to affect the rest of my program, so is it that? How do I fix that? A: out = file.open("blah.txt", "w") In your second example, it seems that you don't need file.. open() is a built-in method of Python. You can use out = open("blah.txt", "w") directly. At the same time, this problem seems to have nothing to do with vscode. I think this problem is more likely due to the path problem. The .json file you executed does not exist in the first level directory under the workspace. For example, Workspace -.vscode -sortsOutput.json -test.py You need to use the workspace as the root directory to tell the specific location of python files: with open(".vscode/sortsOutput.json", "a") as outfile: outfile.write(json_data) A: I was unable to write file using MS Visual Studio Community (python), in my case it was an encoding issue. I found the solution at: https://peps.python.org/pep-0263/ just put a special comment: # coding=<encoding name> at the first line of the python script (in my case: # coding=utf-8)
Visual Studio Code Python refuses to write to file
I'm trying to have a program output data to a JSON file, but VS Code or Python itself seems to have a problem with that. Specifically, I'm trying to output this (Tlist and Slist are lists of integers):
output = {"Time": Tlist, "Space": Slist}
json_data = json.dumps(output, indent=4)

with open("sortsOutput.json", "a") as outfile:
    outfile.write(json_data)

But nothing seems to be happening. sortsOutput.json was never created, and even with a pre-existing sortsOutput.json nothing happened. Heck, this doesn't even work:
out = open("blah.txt", "w")
out.write("Egg")
out.close()

What might be going wrong with my setup for this to happen? I'm using the Python extension v2022.16.1, for the record, and every time the program runs for the first time, the command "conda activate base" runs with some error text that doesn't seem to affect the rest of my program; could that be the cause? How do I fix it?
[ "\nout = file.open(\"blah.txt\", \"w\")\n\nIn your second example, it seems that you don't need the file. prefix; open() is a built-in function in Python.\nYou can use out = open(\"blah.txt\", \"w\") directly.\nAt the same time, this problem seems to have nothing to do with vscode.\nI think this problem is more likely a path problem: the .json file is not in the top-level directory of the workspace, and relative paths are resolved against the workspace root.\nFor example,\nWorkspace\n -.vscode\n -sortsOutput.json\n -test.py\n\nYou need to give the path relative to the workspace root to point at the file's specific location:\nwith open(\".vscode/sortsOutput.json\", \"a\") as outfile:\n outfile.write(json_data)\n\n", "I was unable to write a file using MS Visual Studio Community (Python); in my case it was an encoding issue. I found the solution at https://peps.python.org/pep-0263/: just put a special comment # coding=<encoding name> on the first line of the Python script (in my case: # coding=utf-8).\n" ]
[ 0, 0 ]
[]
[]
[ "file_io", "python", "visual_studio_code" ]
stackoverflow_0074140974_file_io_python_visual_studio_code.txt
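One way to take the working directory out of the equation entirely, whichever folder VS Code happens to launch the program from, is to build the path from the script's own location; a minimal sketch (the data here is a placeholder):

import json
import os

# resolve the output file next to this script, not relative to the cwd
out_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "sortsOutput.json")

output = {"Time": [1, 2, 3], "Space": [4, 5, 6]}  # placeholder data
with open(out_path, "a") as outfile:
    outfile.write(json.dumps(output, indent=4))

print("wrote to", out_path)  # shows the absolute path that was actually used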
Q: Distutils build include C++ library with setup I'm trying to link glfw to my C++ python extension but I can't do it without cmake. Here is how I did it in cmake: cmake_minimum_required(VERSION 3.0) project(Test) add_subdirectory(lib/glfw) add_executable(Test py_extend.cpp) target_include_directories(Test PRIVATE ${OPENGL_INCLUDE_DIR} ) target_link_libraries(Test PRIVATE ${OPENGL_LIBRARY} glfw ) This works fine. This is what I've tried to do in distutils: from distutils.core import setup, Extension from distutils.ccompiler import CCompiler, new_compiler our_compiler = new_compiler() our_compiler.add_include_dir("glfw") our_compiler.add_library("glfw") our_compiler.add_library_dir("lib/glfw") module1 = Extension('test', sources = ['py_extend.cpp', 'src/glfw_binder.cpp'], include_dirs=["include/glfw_binder.h"], libraries=["glfw"], library_dirs=["lib/glfw"] ) setup (name = 'python_test', version = '1.0', description = 'This is a demo package', ext_modules = [module1] ) When I run this I get this error: python3 build.py build running build running build_ext building 'test' extension x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Iinclude/glfw_binder.h -I/usr/include/python3.10 -c py_extend.cpp -o build/temp.linux-x86_64-3.10/py_extend.o cc1plus: warning: include/glfw_binder.h: not a directory In file included from py_extend.cpp:4: ./include/glfw_binder.h:4:10: fatal error: GLFW/glfw3.h: No such file or directory 4 | #include <GLFW/glfw3.h> | ^~~~~~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 How do I properly implement that cmake code into distutils code? A: I did it by changing to setuptools (I should change to it since distutils will be depricated soon.) and writing it like this: from setuptools import setup, Extension module1 = Extension('nerveblox_test', sources = ['py_extend.cpp', 'src/nerveblox_VM.cpp'], include_dirs=["include/nerveblox_VM.h", "lib/glfw/include"], libraries=["lib/glfw"], library_dirs=["lib/glfw"], extra_objects=["lib/glfw/include"] ) setup (name = 'Nerveblox_python_test', version = '1.0', description = 'This is a demo package', ext_modules = [module1] )
Distutils build include C++ library with setup
I'm trying to link glfw to my C++ python extension but I can't do it without cmake. Here is how I did it in cmake: cmake_minimum_required(VERSION 3.0) project(Test) add_subdirectory(lib/glfw) add_executable(Test py_extend.cpp) target_include_directories(Test PRIVATE ${OPENGL_INCLUDE_DIR} ) target_link_libraries(Test PRIVATE ${OPENGL_LIBRARY} glfw ) This works fine. This is what I've tried to do in distutils: from distutils.core import setup, Extension from distutils.ccompiler import CCompiler, new_compiler our_compiler = new_compiler() our_compiler.add_include_dir("glfw") our_compiler.add_library("glfw") our_compiler.add_library_dir("lib/glfw") module1 = Extension('test', sources = ['py_extend.cpp', 'src/glfw_binder.cpp'], include_dirs=["include/glfw_binder.h"], libraries=["glfw"], library_dirs=["lib/glfw"] ) setup (name = 'python_test', version = '1.0', description = 'This is a demo package', ext_modules = [module1] ) When I run this I get this error: python3 build.py build running build running build_ext building 'test' extension x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Iinclude/glfw_binder.h -I/usr/include/python3.10 -c py_extend.cpp -o build/temp.linux-x86_64-3.10/py_extend.o cc1plus: warning: include/glfw_binder.h: not a directory In file included from py_extend.cpp:4: ./include/glfw_binder.h:4:10: fatal error: GLFW/glfw3.h: No such file or directory 4 | #include <GLFW/glfw3.h> | ^~~~~~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 How do I properly implement that cmake code into distutils code?
[ "I did it by changing to setuptools (which I should do anyway, since distutils will be deprecated soon) and writing it like this:\nfrom setuptools import setup, Extension\n\nmodule1 = Extension('nerveblox_test',\n sources = ['py_extend.cpp', 'src/nerveblox_VM.cpp'],\n include_dirs=[\"include/nerveblox_VM.h\", \"lib/glfw/include\"],\n libraries=[\"lib/glfw\"],\n library_dirs=[\"lib/glfw\"],\n extra_objects=[\"lib/glfw/include\"]\n )\n\nsetup (name = 'Nerveblox_python_test',\n version = '1.0',\n description = 'This is a demo package',\n ext_modules = [module1]\n )\n\n\n" ]
[ 0 ]
[]
[]
[ "c++", "cmake", "distutils", "python" ]
stackoverflow_0074486430_c++_cmake_distutils_python.txt
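For reference, the Extension fields are conventionally split differently from the answer above: include_dirs takes header directories (not header files), library_dirs takes directories containing the built library, and libraries takes bare linker names (glfw, not a path). A hedged sketch, assuming GLFW was built so that libglfw.so or libglfw3.a lands under lib/glfw/src:

from setuptools import setup, Extension

module1 = Extension(
    "nerveblox_test",
    sources=["py_extend.cpp", "src/nerveblox_VM.cpp"],
    include_dirs=["include", "lib/glfw/include"],  # directories, not .h files
    library_dirs=["lib/glfw/src"],                 # assumed build output location
    libraries=["glfw"],                            # or "glfw3", depending on the build
)

setup(
    name="Nerveblox_python_test",
    version="1.0",
    description="This is a demo package",
    ext_modules=[module1],
)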
Q: Fetch not receiving headers JS from Flask send_file() I'm writing a full stack app. I have a python backend using flask that sends a file and a Vue client that receives. Its been working fine up until the point when I try to send the filename over using a Content-Disposition header. On the backend I've tried: return send_file(base_path + filename, as_attachment=True, download_name=filename) And to set the headers manually, response = make_response(send_file(base_path + filename)) response.headers['Content-Disposition'] = f"attachment; filename=\"{filename}\"" return response I've also tried to put in headers that would not be blocked by CORS just to see if the request would receive the header but to no avail, response = make_response(send_file(base_path + filename)) response.headers['Content-Type'] = "sample/info" return response I'm printing the header to the console by doing fetch('http://localhost:4999/rdownload/' + this.$route.params.id, { method: 'GET' }).then(res =\> { if (res.status == '500') { } console.log(res.headers) //const header = res.headers.get('Content-Disposition'); //console.log(header) res.blob().then((blob) => { /* ... */ }) }) Any help would be appreciated! Thanks :) A: Research In the interest of logging the solution I found and helping out anyone in the future who may be interested in knowing the answer here's what I discovered: There is a restriction to access response headers when you are using Fetch API over CORS. https://stackoverflow.com/a/44816592/20342081 So, no matter what using the JS fetch-api you will be unable to access all headers (outside of Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, and Pragma) unless you expose them by specifying them in a request header. That would look something like this: fetch('https://myrequest/requestend/', { headers: { 'Access-Control-Expose-Headers': 'Content-Disposition' } }) When a cross-origin source accesses your API you will have to expose the header from the back end as well. https://stackoverflow.com/a/66291644/20342081 I was also confused about how the differences between Access-Control-Expose-Headers and Access-Control-Allow-Headers. In my case the solution was use "expose headers" on both the frontend and the backend (and allow wouldn't work). However, Allow has its own applications which I have yet to understand fully. For those endeavoring check out: https://stackoverflow.com/a/28108431/20342081 Solution I implemented these things in my code by doing: class RequestResult(Resource): def get(self, index): base_path = f"Requests/{index}/" filename = os.listdir(base_path)[0] response = make_response(send_file(base_path + filename, as_attachment=True, download_name=filename)) response.headers['Access-Control-Expose-Headers'] = "Content-Disposition" return response And on the front end exposing the header as well on the fetch request: fetch('http://localhost:4999/rdownload/' + this.$route.params.id, { method: 'GET', mode: 'cors', headers: { 'Access-Control-Expose-Headers': 'Content-Disposition' } }) I hope this is helpful for the next 5 people who open this in the next 10 years!
Fetch not receiving headers JS from Flask send_file()
I'm writing a full-stack app. I have a Python backend using Flask that sends a file, and a Vue client that receives it. It's been working fine up until the point when I try to send the filename over using a Content-Disposition header.
On the backend I've tried:
return send_file(base_path + filename, as_attachment=True, download_name=filename)

And to set the headers manually,
response = make_response(send_file(base_path + filename))
response.headers['Content-Disposition'] = f"attachment; filename=\"{filename}\""
return response

I've also tried to put in headers that would not be blocked by CORS just to see if the request would receive the header, but to no avail,
response = make_response(send_file(base_path + filename))
response.headers['Content-Type'] = "sample/info"
return response

I'm printing the header to the console by doing
fetch('http://localhost:4999/rdownload/' + this.$route.params.id, {
    method: 'GET'
}).then(res => {
    if (res.status == '500') {
    }
    console.log(res.headers)
    //const header = res.headers.get('Content-Disposition');
    //console.log(header)
    res.blob().then((blob) => {
        /* ... */
    })
})

Any help would be appreciated! Thanks :)
[ "Research\nIn the interest of logging the solution I found and helping out anyone in the future who may be interested in knowing the answer, here's what I discovered:\n\nThere is a restriction to access response headers when you are using Fetch API over CORS.\n\nhttps://stackoverflow.com/a/44816592/20342081\nSo, no matter what, using the JS fetch API you will be unable to access all headers (outside of Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, and Pragma) unless you expose them by specifying them in a request header. That would look something like this:\nfetch('https://myrequest/requestend/', {\n headers: {\n 'Access-Control-Expose-Headers': 'Content-Disposition'\n }\n})\n\n\nWhen a cross-origin source accesses your API you will have to expose the header from the back end as well. https://stackoverflow.com/a/66291644/20342081\n\n\nI was also confused about the differences between Access-Control-Expose-Headers and Access-Control-Allow-Headers. In my case the solution was to use \"expose headers\" on both the frontend and the backend (and allow wouldn't work). However, Allow has its own applications which I have yet to understand fully. For those endeavoring, check out: https://stackoverflow.com/a/28108431/20342081\nSolution\nI implemented these things in my code by doing:\nclass RequestResult(Resource):\n def get(self, index):\n base_path = f\"Requests/{index}/\"\n filename = os.listdir(base_path)[0]\n\n response = make_response(send_file(base_path + filename, as_attachment=True, download_name=filename))\n response.headers['Access-Control-Expose-Headers'] = \"Content-Disposition\"\n\n return response\n\nAnd on the front end exposing the header as well on the fetch request:\nfetch('http://localhost:4999/rdownload/' + this.$route.params.id, {\n method: 'GET',\n mode: 'cors',\n headers: {\n 'Access-Control-Expose-Headers': 'Content-Disposition'\n }\n})\n\nI hope this is helpful for the next 5 people who open this in the next 10 years!\n" ]
[ 0 ]
[]
[]
[ "fetch_api", "flask", "javascript", "python", "vue.js" ]
stackoverflow_0074473164_fetch_api_flask_javascript_python_vue.js.txt
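On the Flask side, the third-party flask-cors extension can set the Access-Control-Expose-Headers response header for the whole app instead of per response; note that it is a response header, so setting it server-side is what actually matters. A minimal sketch, where the route and file path are placeholders:

from flask import Flask, send_file
from flask_cors import CORS  # pip install flask-cors

app = Flask(__name__)
# expose Content-Disposition on every CORS response from this app
CORS(app, expose_headers=["Content-Disposition"])

@app.route("/rdownload/<idx>")
def rdownload(idx):
    # placeholder path logic for the demo
    return send_file(f"Requests/{idx}/result.pdf", as_attachment=True)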
Q: AuthorizationPermissionMismatch when copy blobs across different storage accounts I am using below code to copy blob across different storage accounts, but it fails with the below error src_blob = '{0}/{1}?{2}'.format('source_url',b_name,'sp=rw&st=2022-11-17T20:44:03Z&se=2022-12-31T04:44:03Z&spr=https&sv=2021-06-08&sr=c&sig=ZXRe2FptVF5ArRM%2BKDAkLboCN%2FfaD9Mx38yZGWhnps0%3D') destination_client = BlobServiceClient.from_connection_string("destination_connection_string")//The connection string has sas token which has sr=c copied_blob = destination_client.get_blob_client('standardfeed', b_name) copied_blob.start_copy_from_url(src_blob) ErrorCode: AuthorizationPermissionMismatch This request is not authorized to perform this operation using this permission. Any thing missing or did I copy the wrong SAS token? A: I tried in my environment and successfully copied blob from one storage account to another storage account. Code: from azure.storage.blob import BlobServiceClient b_name="sample1.pdf" src_blob = '{0}/{1}?{2}'.format('https://venkat123.blob.core.windows.net/test',b_name,'sp=r&st=2022-11-18T07:46:10Z&se=2022-11-18T15:46:10Z&spr=https&sv=<SAS token >) destination_client = BlobServiceClient.from_connection_string("<connection string>") copied_blob = destination_client.get_blob_client('test1', b_name) copied_blob.start_copy_from_url(src_blob) Console: Portal: Make sure you has necessary permission for authentication purpose you need to assign roles in your storage account. Storage Blob Data Contributor Storage Blob Data Reader Portal: Update: You can get the connection string through portal: Reference: Azure Blob Storage "Authorization Permission Mismatch" error for get request with AD token - Stack Overflow
AuthorizationPermissionMismatch when copy blobs across different storage accounts
I am using the below code to copy a blob across different storage accounts, but it fails with the below error
src_blob = '{0}/{1}?{2}'.format('source_url',b_name,'sp=rw&st=2022-11-17T20:44:03Z&se=2022-12-31T04:44:03Z&spr=https&sv=2021-06-08&sr=c&sig=ZXRe2FptVF5ArRM%2BKDAkLboCN%2FfaD9Mx38yZGWhnps0%3D')

destination_client = BlobServiceClient.from_connection_string("destination_connection_string")//The connection string has sas token which has sr=c

copied_blob = destination_client.get_blob_client('standardfeed', b_name)
copied_blob.start_copy_from_url(src_blob)

ErrorCode: AuthorizationPermissionMismatch
This request is not authorized to perform this operation using this permission.
Is anything missing, or did I copy the wrong SAS token?
[ "I tried this in my environment and successfully copied a blob from one storage account to another storage account.\nCode:\nfrom azure.storage.blob import BlobServiceClient\n\nb_name=\"sample1.pdf\"\n\nsrc_blob = '{0}/{1}?{2}'.format('https://venkat123.blob.core.windows.net/test',b_name,'sp=r&st=2022-11-18T07:46:10Z&se=2022-11-18T15:46:10Z&spr=https&sv=<SAS token>')\n\ndestination_client = BlobServiceClient.from_connection_string(\"<connection string>\")\n\ncopied_blob = destination_client.get_blob_client('test1', b_name)\n\ncopied_blob.start_copy_from_url(src_blob)\n\nConsole:\n\nPortal:\n\nMake sure you have the necessary permissions; for authorization purposes you need to assign these roles on your storage account:\n\nStorage Blob Data Contributor\nStorage Blob Data Reader\n\nPortal:\n\nUpdate:\nYou can get the connection string through the portal:\n\nReference:\nAzure Blob Storage \"Authorization Permission Mismatch\" error for get request with AD token - Stack Overflow\n" ]
[ 1 ]
[]
[]
[ "azure_blob_storage", "azure_python_sdk", "azure_storage", "python" ]
stackoverflow_0074485713_azure_blob_storage_azure_python_sdk_azure_storage_python.txt
Q: Vscode Unittests not discovered when using conda environment with mamba installed I am trying to use tests in vscode. When I am on the default interpreter in /usr/bin/python3 I have no problem and my simple tests are discovered. However when I select the conda interpreter the tests disappear and if I configure the tests again they won't appear. This is the python output when I try to discover tests on the conda env: When I take that command conda run -n uavsar --no-capture-output python ~/.vscode/extensions/ms-python.python-2022.6.3/pythonFiles/get_output_via_markers.py ~/.vscode/extensions/ms-python.python-2022.6.3/pythonFiles/testing_tools/unittest_discovery.py ./tests test*.py and run it without the --no-capture-output in the vscode terminal I see results from my tests. I can also get the tests to be discovered with the conda env activated from the terminal with python -m unittest discover. vscode version - 1.67.2 python version - most current (2022.6.3) Any advice or thoughts? A: Just in case anyone else finds this and has this problem I had installed mamba directly into my conda environments instead of into the base environment and recreating those environments. The mamba installation instructions specifically warns against installing mamba into anywhere other than the base environment. mamba installation docs For some reason installing mamba into a pre-existing environment meant vscode would not find my tests. I deleted all my conda environments with mamba installed, installed mamba into the base environment and now vscode can find my tests. A: There might be a reason for mamba forcing you to install it in base environment. But in most cases, installing packages in base environment to avoid a test issue sounds dimishing the point of having conda environments... I ran into the same issue but with other packages. In my case, the issue is an open VSCode bug. See the replies for workarounds that don't require you to stick to the base environment. This works for me: Activate the conda environment in command line and the also launch the VSCode from there. Create a launch.json file in VSCode to tell the debugger to use the integrated terminal: { "version": "0.2.0", "configurations": [ { "name": "Debug file", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", }, { "name": "Debug tests", "type": "python", "request": "test", "console": "integratedTerminal", } ] } Also, if you have Anaconda Navigate, you can launch the bundled VSCode in the correct environment from the UI. That should launch a VSCode that has everything already configured for the environment.
Vscode Unittests not discovered when using conda environment with mamba installed
I am trying to use tests in vscode. When I am on the default interpreter in /usr/bin/python3 I have no problem and my simple tests are discovered. However when I select the conda interpreter the tests disappear and if I configure the tests again they won't appear. This is the python output when I try to discover tests on the conda env: When I take that command conda run -n uavsar --no-capture-output python ~/.vscode/extensions/ms-python.python-2022.6.3/pythonFiles/get_output_via_markers.py ~/.vscode/extensions/ms-python.python-2022.6.3/pythonFiles/testing_tools/unittest_discovery.py ./tests test*.py and run it without the --no-capture-output in the vscode terminal I see results from my tests. I can also get the tests to be discovered with the conda env activated from the terminal with python -m unittest discover. vscode version - 1.67.2 python version - most current (2022.6.3) Any advice or thoughts?
[ "Just in case anyone else finds this and has this problem:\nI had installed mamba directly into my conda environments instead of into the base environment. The mamba installation instructions specifically warn against installing mamba anywhere other than the base environment. mamba installation docs\nFor some reason installing mamba into a pre-existing environment meant vscode would not find my tests. I deleted all my conda environments with mamba installed, installed mamba into the base environment, and now vscode can find my tests.\n", "There might be a reason for mamba forcing you to install it in the base environment. But in most cases, installing packages in the base environment to avoid a test issue sounds like it defeats the point of having conda environments...\nI ran into the same issue but with other packages.\nIn my case, the issue is an open VSCode bug. See the replies for workarounds that don't require you to stick to the base environment.\nThis works for me:\n\nActivate the conda environment on the command line and then also launch VSCode from there.\nCreate a launch.json file in VSCode to tell the debugger to use the integrated terminal:\n\n {\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"name\": \"Debug file\",\n \"type\": \"python\",\n \"request\": \"launch\",\n \"program\": \"${file}\",\n \"console\": \"integratedTerminal\",\n },\n {\n \"name\": \"Debug tests\",\n \"type\": \"python\",\n \"request\": \"test\",\n \"console\": \"integratedTerminal\",\n }\n ]\n}\n\nAlso, if you have Anaconda Navigator, you can launch the bundled VSCode in the correct environment from the UI. That should launch a VSCode that has everything already configured for the environment.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "unit_testing", "visual_studio_code" ]
stackoverflow_0072508876_python_unit_testing_visual_studio_code.txt
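For reference, VS Code's unittest discovery is driven by workspace settings; below is a minimal .vscode/settings.json sketch matching the ./tests layout from the question. The interpreter path is a placeholder for wherever the conda environment actually lives:

{
    "python.testing.unittestEnabled": true,
    "python.testing.pytestEnabled": false,
    "python.testing.unittestArgs": ["-v", "-s", "./tests", "-p", "test*.py"],
    "python.defaultInterpreterPath": "~/miniconda3/envs/uavsar/bin/python"
}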
Q: Scraping a table from html using python and beautifulsoup I am trying to scrape data from a table in a government website, I have tried to watch some tutorials but so far to no avail (coding dummy over here!!!) I would like to get a .csv file out of the tables they have containing the date, the type of event, and the adopted measures for a project. I leave here the website if any of you wants to crack it! https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps !pip install beautifulsoup4 !pip install requests from bs4 import BeautifulSoup import requests url= "https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps" r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') FIRST_table = soup.find('table', class_ = 'tableau-timeline') print(FIRST_table) for timeline in FIRST_table.find_all('tbody'): rows= timeline.find_all('tr') for row in rows: pl_timeline = row.find('td', class_ = 'date').text print(pl_timeline) p I was expecting to get in order the dates and to use the same for loop to get the also the other two columns by tweaking it for "Type d'événement" and "Mesures adoptées" What am I doing wrong? How can I tweak it? (I am using colab if it makes any difference) Thanks in advance A: Make your life easier and just use pandas to parse the tables and dump the data to a .csv file. For example, this gets you all the tables, merges them, and spits out a .csv file: import pandas as pd import requests url = "https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps" tables = pd.read_html(requests.get(url).text, flavor="lxml") df = pd.concat(tables).to_csv("data.csv", index=False) Output:
Scraping a table from html using python and beautifulsoup
I am trying to scrape data from a table on a government website. I have tried to watch some tutorials, but so far to no avail (coding dummy over here!!!). I would like to get a .csv file out of the tables they have, containing the date, the type of event, and the adopted measures, for a project. I leave here the website if any of you wants to crack it!
https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps
!pip install beautifulsoup4
!pip install requests

from bs4 import BeautifulSoup
import requests

url= "https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps"
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')

FIRST_table = soup.find('table', class_ = 'tableau-timeline')
print(FIRST_table)

for timeline in FIRST_table.find_all('tbody'):
    rows = timeline.find_all('tr')
    for row in rows:
        pl_timeline = row.find('td', class_ = 'date').text
        print(pl_timeline)

I was expecting to get the dates in order, and to use the same for loop to get the other two columns as well by tweaking it for "Type d'événement" and "Mesures adoptées". What am I doing wrong? How can I tweak it? (I am using Colab, if it makes any difference.) Thanks in advance
[ "Make your life easier and just use pandas to parse the tables and dump the data to a .csv file.\nFor example, this gets you all the tables, merges them, and spits out a .csv file:\nimport pandas as pd\nimport requests\n\nurl = \"https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps\"\ntables = pd.read_html(requests.get(url).text, flavor=\"lxml\")\ndf = pd.concat(tables).to_csv(\"data.csv\", index=False)\n\nOutput:\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "google_colaboratory", "python", "web_scraping" ]
stackoverflow_0074483452_beautifulsoup_google_colaboratory_python_web_scraping.txt
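If you want to stay with BeautifulSoup instead, the likely failure in the question's loop is that row.find('td', class_='date') returns None for header or spacer rows, so .text raises an AttributeError. A hedged sketch that guards against that (the table class comes from the question and may have changed on the site):

import requests
from bs4 import BeautifulSoup

url = "https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps"
soup = BeautifulSoup(requests.get(url).text, "html.parser")

rows = []
for table in soup.find_all("table", class_="tableau-timeline"):
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:  # header rows use th, so their td list is empty
            rows.append(cells)

print(rows[:5])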
Q: How to train real-time LSTM with input and output of varying length? I have the data of banner clicks by minute. I have the following data: hour, minute, and was the banner clicked by someone in that minute. There are some other features (I omitted them in the example dataframe). I need to predict will be any clicks on banner for all following minutes of this hour. For example I have data for the first 11 minutes of an hour. hour minute is_click 1 1 0 1 2 0 1 3 1 1 4 0 1 5 1 1 6 0 1 7 0 1 8 0 1 9 1 1 10 1 1 11 0 My goal is to make prediction for 12, 13 ... 59, 60 minute. It will be real-time model that makes predictions every minute using the latest data. For example, I made the prediction at 18:00 for the next 59 minutes (until 18:59). Now it is 18:01 and I get the real data about clicks at 18:00, so I want to make more precise prediction for following 58 minutes (from 18:02 to 18:59). And so on. My idea was to mask-out the passed minutes with -1 I created the example of 11 minutes.There are targets: minute target vector 1 -1 0 1 0 1 0 0 0 1 1 0 2 -1 -1 1 0 1 0 0 0 1 1 0 3 -1 -1 -1 0 1 0 0 0 1 1 0 4 -1 -1 -1 -1 1 0 0 0 1 1 0 5 -1 -1 -1 -1 -1 0 0 0 1 1 0 6 -1 -1 -1 -1 -1 -1 0 0 1 1 0 7 -1 -1 -1 -1 -1 -1 -1 0 1 1 0 8 -1 -1 -1 -1 -1 -1 -1 -1 1 1 0 9 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 0 10 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0 11 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 However it seems strange to me to train a model to predict this mask values of -1. I think for neural network it will be not obvious that these -1 are just a padding. The another idea was to use a current minute as a feature and ,therefore, to predict always the sequence of 60 - minute length and then cut the extra prediction. However, the input will have different lengths anyway, so it does not solve the problem. So how I should preprocess the data to use LSTM? Should I use described above padding so all vectors will be have the same length of 60? Is there any better solution? A: An RNN (or LSTM) will return an output for every input, as well as the final hidden state (and cell state for LSTM). So one possible solution: Pad your input of future minutes with with a different token and use an embedding layer with 3 embeddings (0, 1, 2 where 2 represents unseen value). For example, at timestep 3 the input = [0, 0, 1, 2, 2, 2,...2]. After this goes through an embedding layer each token will mapped to some embedding dimension (e.g. 16) and this would be pass to the LSTM. So the input size for your LSTM would be 16 and the hidden size would be one (so that you get a scalar output for every timestep of the input). Then you pass this output through a sigmoid so each prediction is between 0 and 1 and use binary cross entropy between the predictions and targets as your loss function. Additionally, since you probably don't care how accurate the predictions are for the minutes you've already seen, you could ignore their contribution to the loss.
How to train real-time LSTM with input and output of varying length?
I have data on banner clicks by minute. I have the following data: hour, minute, and whether the banner was clicked by someone in that minute. There are some other features (I omitted them in the example dataframe). I need to predict whether there will be any clicks on the banner for all following minutes of this hour. For example, I have data for the first 11 minutes of an hour.
hour minute is_click
1 1 0
1 2 0
1 3 1
1 4 0
1 5 1
1 6 0
1 7 0
1 8 0
1 9 1
1 10 1
1 11 0

My goal is to make predictions for minutes 12, 13, ... 59, 60. It will be a real-time model that makes predictions every minute using the latest data. For example, I made the prediction at 18:00 for the next 59 minutes (until 18:59). Now it is 18:01 and I get the real data about clicks at 18:00, so I want to make a more precise prediction for the following 58 minutes (from 18:02 to 18:59). And so on.
My idea was to mask out the already-passed minutes with -1. I created an example for 11 minutes. These are the targets:
minute target vector
1 -1 0 1 0 1 0 0 0 1 1 0
2 -1 -1 1 0 1 0 0 0 1 1 0
3 -1 -1 -1 0 1 0 0 0 1 1 0
4 -1 -1 -1 -1 1 0 0 0 1 1 0
5 -1 -1 -1 -1 -1 0 0 0 1 1 0
6 -1 -1 -1 -1 -1 -1 0 0 1 1 0
7 -1 -1 -1 -1 -1 -1 -1 0 1 1 0
8 -1 -1 -1 -1 -1 -1 -1 -1 1 1 0
9 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 0
10 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0
11 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1

However, it seems strange to me to train a model to predict these mask values of -1. I think for a neural network it will not be obvious that these -1s are just padding.
Another idea was to use the current minute as a feature and, therefore, always predict a sequence of length 60 and then cut off the extra predictions. However, the input will have different lengths anyway, so it does not solve the problem.
So how should I preprocess the data to use an LSTM? Should I use the padding described above so all vectors will have the same length of 60? Is there any better solution?
[ "An RNN (or LSTM) will return an output for every input, as well as the final hidden state (and cell state for LSTM). So one possible solution: pad your input of future minutes with a different token and use an embedding layer with 3 embeddings (0, 1, 2, where 2 represents an unseen value). For example, at timestep 3 the input = [0, 0, 1, 2, 2, 2,...2].\nAfter this goes through an embedding layer, each token will be mapped to some embedding dimension (e.g. 16) and this would be passed to the LSTM. So the input size for your LSTM would be 16 and the hidden size would be one (so that you get a scalar output for every timestep of the input). Then you pass this output through a sigmoid so each prediction is between 0 and 1, and use binary cross entropy between the predictions and targets as your loss function. Additionally, since you probably don't care how accurate the predictions are for the minutes you've already seen, you could ignore their contribution to the loss.\n" ]
[ 1 ]
[]
[]
[ "deep_learning", "lstm", "python", "pytorch", "recurrent_neural_network" ]
stackoverflow_0074482446_deep_learning_lstm_python_pytorch_recurrent_neural_network.txt
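A minimal PyTorch sketch of the scheme the answer describes: tokens 0/1 for observed minutes and 2 for future ones, a 16-dimensional embedding, a one-unit LSTM, a sigmoid per timestep, and a BCE loss that only counts the unseen minutes. All shapes and hyperparameters here are illustrative, not tuned:

import torch
import torch.nn as nn

class MinuteModel(nn.Module):
    def __init__(self, emb_dim=16):
        super().__init__()
        self.emb = nn.Embedding(3, emb_dim)        # tokens 0, 1, 2 (2 = unseen minute)
        self.lstm = nn.LSTM(emb_dim, 1, batch_first=True)

    def forward(self, tokens):                     # tokens: (batch, 60) int64
        out, _ = self.lstm(self.emb(tokens))       # (batch, 60, 1)
        return torch.sigmoid(out.squeeze(-1))      # (batch, 60), each in (0, 1)

model = MinuteModel()
tokens = torch.full((4, 60), 2, dtype=torch.long)  # dummy batch, all minutes unseen
tokens[:, :11] = torch.randint(0, 2, (4, 11))      # first 11 minutes observed
targets = torch.randint(0, 2, (4, 60)).float()     # dummy click labels

preds = model(tokens)
mask = (tokens == 2).float()                       # score only the future minutes
loss = nn.functional.binary_cross_entropy(preds, targets, weight=mask)
loss.backward()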
Q: layoutparser has no attribute Detectron2LayoutModel Im working on a projet where I need to extract informations from resume in pdf format, the problem is when I use libraries like pdfminer ect sometimes the text extracted is not the good result because it gets lines overlapped with other lines from another box of text, thats why I thought of using layout parser first before extracting the text to extract text based on boxes of text pytesseract.pytesseract.tesseract_cmd ="C/Users/faty/Downloads/tesseract-ocr-w64-setup-v5.1.0.20220510.exe" poppler_path="C:/Users/faty/Downloads/Release-22.04.0-0/poppler-22.04.0/Library/bin" model = lp.Detectron2LayoutModel('lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config', extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.5], label_map={0: "Text", 1: "Title", 2: "List", 3:"Table",4:"Figure"}) layout_result = model.detect(img) lp.draw_box(img, layout_result, box_width=5, box_alpha=0.2, show_element_type=True) I Get this error : AttributeError: module layoutparser has no attribute Detectron2LayoutModel A: Actually the attribute Detectron2LayoutModel can be accessed only inside models: model = lp.models.Detectron2LayoutModel('lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config', extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.5], label_map={0: "Text", 1: "Title", 2: "List", 3:"Table",4:"Figure"})
layoutparser has no attribute Detectron2LayoutModel
I'm working on a project where I need to extract information from resumes in PDF format. The problem is that when I use libraries like pdfminer etc., the extracted text is sometimes wrong, because lines get overlapped with lines from another box of text. That's why I thought of running layout parser first, before extracting the text, so that I can extract text based on the boxes of text.
pytesseract.pytesseract.tesseract_cmd ="C/Users/faty/Downloads/tesseract-ocr-w64-setup-v5.1.0.20220510.exe"
poppler_path="C:/Users/faty/Downloads/Release-22.04.0-0/poppler-22.04.0/Library/bin"

model = lp.Detectron2LayoutModel('lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config',
                                 extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.5],
                                 label_map={0: "Text", 1: "Title", 2: "List",
                                 3:"Table",4:"Figure"})

layout_result = model.detect(img)

lp.draw_box(img, layout_result, box_width=5, box_alpha=0.2, show_element_type=True)

I get this error:
AttributeError: module layoutparser has no attribute Detectron2LayoutModel
[ "Actually, the attribute Detectron2LayoutModel can be accessed only through the models submodule:\nmodel = lp.models.Detectron2LayoutModel('lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config',\n extra_config=[\"MODEL.ROI_HEADS.SCORE_THRESH_TEST\", 0.5],\n label_map={0: \"Text\", 1: \"Title\", 2: \"List\", \n 3:\"Table\",4:\"Figure\"})\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0072572315_python.txt
Q: Python Threading consumer being called before producer I wrote a simple python program to understand threading. This program has no thread blocking logic like time.sleep(x) but still I don't understand how possibly can consumer thread be called before producer thread. Producer -> appends a numeric value to list Consumer -> gets/prints the numeric value from this list. How can consumer print even before producer generates the number? import threading import time import datetime from colorama import Fore as colorify def main() -> None: data=[] threads = [ threading.Thread(target=producer,args=(data,5),daemon=True), threading.Thread(target=consumer,args=(data,5),daemon=True) ] [t.start() for t in threads] [t.join() for t in threads] print(f"{colorify.YELLOW}Execution completed") def producer(data: list, num:int) -> None: for i in range(num): _t = datetime.datetime.now() sqrt=i*i data.append([i,_t]) print(f"{colorify.BLUE} producer {i} {threading.get_ident()}") def consumer(data : list, num: int) -> None: for i in data: item= i number=item[0] _t=item[1] print(f"{colorify.RED} consumer {number} {threading.get_ident()}") output: producer 0 18064 producer 1 18064 producer 2 18064 producer 3 18064 consumer 0 8028 consumer 1 8028 consumer 2 8028 consumer 3 8028 producer 4 18064 Execution completed 0.0028671000036410987 seconds A: This is a race condition, and if the list was bigger and there were more elements in it you would only print the elements that were in the list the moment the second thread started, which is only a portion of the list. Printing is also buffered so it doesn't happen at the order in which you call print, the actual printing is done when the buffer is full,and a context switch can happen even between writing to the list and printing, and the print buffer can be written to concurrently, so writing to buffers (including printing) is a race condition, and your entire code is a race condition. You can flush the stdout using flush=True argument of print to avoid buffering, but to fix the race condition that happens when accessing elements you need to use a queue, and you can also share a lock between the two threads to make sure adding to queue and printing will not be interrupted by a context switch. import threading import time import datetime from colorama import Fore as colorify import queue def main() -> None: data = queue.Queue() lock = threading.Lock() threads = [ threading.Thread(target=producer, args=(data, 5, lock), daemon=True), threading.Thread(target=consumer, args=(data, 5, lock), daemon=True) ] [t.start() for t in threads] [t.join() for t in threads] print(f"{colorify.YELLOW}Execution completed", flush=True) def producer(data: queue.Queue, num: int, lock: threading.Lock) -> None: for i in range(num): _t = datetime.datetime.now() sqrt = i * i with lock: data.put([i, _t]) print(f"{colorify.BLUE} producer {i} {threading.get_ident()}", flush=True) def consumer(data: queue.Queue, num: int, lock: threading.Lock) -> None: for i in range(num): item = data.get() number = item[0] _t = item[1] with lock: print(f"{colorify.RED} consumer {number} {threading.get_ident()}", flush=True) main() producer 0 5692 producer 1 5692 producer 2 5692 producer 3 5692 producer 4 5692 consumer 0 12644 consumer 1 12644 consumer 2 12644 consumer 3 12644 consumer 4 12644 Execution completed note that the flush in this code is not needed, as the lock will prevent concurrent writes to stdout buffer.
Python Threading consumer being called before producer
I wrote a simple Python program to understand threading. This program has no thread-blocking logic like time.sleep(x), but I still don't understand how the consumer thread can possibly be called before the producer thread.
Producer -> appends a numeric value to a list
Consumer -> gets/prints the numeric value from this list.
How can the consumer print even before the producer generates the number?
import threading
import time
import datetime
from colorama import Fore as colorify

def main() -> None:
    data=[]
    threads = [
        threading.Thread(target=producer,args=(data,5),daemon=True),
        threading.Thread(target=consumer,args=(data,5),daemon=True)
    ]
    [t.start() for t in threads]
    [t.join() for t in threads]
    print(f"{colorify.YELLOW}Execution completed")

def producer(data: list, num:int) -> None:
    for i in range(num):
        _t = datetime.datetime.now()
        sqrt=i*i
        data.append([i,_t])
        print(f"{colorify.BLUE} producer {i} {threading.get_ident()}")

def consumer(data : list, num: int) -> None:
    for i in data:
        item= i
        number=item[0]
        _t=item[1]
        print(f"{colorify.RED} consumer {number} {threading.get_ident()}")

output:
producer 0 18064
producer 1 18064
producer 2 18064
producer 3 18064
consumer 0 8028
consumer 1 8028
consumer 2 8028
consumer 3 8028
producer 4 18064
Execution completed
0.0028671000036410987 seconds
[ "This is a race condition, and if the list were bigger and there were more elements in it, you would only print the elements that were in the list the moment the second thread started, which is only a portion of the list.\nPrinting is also buffered, so it doesn't happen in the order in which you call print; the actual printing is done when the buffer is full, and a context switch can happen even between writing to the list and printing, and the print buffer can be written to concurrently, so writing to buffers (including printing) is a race condition, and your entire code is a race condition.\nYou can flush the stdout using the flush=True argument of print to avoid buffering, but to fix the race condition that happens when accessing elements you need to use a queue, and you can also share a lock between the two threads to make sure adding to the queue and printing will not be interrupted by a context switch.\nimport threading\nimport time\nimport datetime\nfrom colorama import Fore as colorify\nimport queue\n\n\ndef main() -> None:\n data = queue.Queue()\n lock = threading.Lock()\n threads = [\n\n threading.Thread(target=producer, args=(data, 5, lock), daemon=True),\n threading.Thread(target=consumer, args=(data, 5, lock), daemon=True)\n\n ]\n\n [t.start() for t in threads]\n\n [t.join() for t in threads]\n\n print(f\"{colorify.YELLOW}Execution completed\", flush=True)\n\n\ndef producer(data: queue.Queue, num: int, lock: threading.Lock) -> None:\n for i in range(num):\n _t = datetime.datetime.now()\n sqrt = i * i\n with lock:\n data.put([i, _t])\n print(f\"{colorify.BLUE} producer {i} {threading.get_ident()}\", flush=True)\n\n\ndef consumer(data: queue.Queue, num: int, lock: threading.Lock) -> None:\n for i in range(num):\n item = data.get()\n number = item[0]\n _t = item[1]\n with lock:\n print(f\"{colorify.RED} consumer {number} {threading.get_ident()}\", flush=True)\n\n\nmain()\n\n producer 0 5692\n producer 1 5692\n producer 2 5692\n producer 3 5692\n producer 4 5692\n consumer 0 12644\n consumer 1 12644\n consumer 2 12644\n consumer 3 12644\n consumer 4 12644\nExecution completed\n\nnote that the flush in this code is not needed, as the lock will prevent concurrent writes to the stdout buffer.\n" ]
[ 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0074470184_multithreading_python.txt
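A minimal sketch of a closely related pattern for the threading record above: if the consumer should keep consuming until the producer is done, a common approach is to pass a sentinel value through the queue instead of counting items. This is an illustrative sketch, not the answerer's code; the item values and thread setup are made up.

import queue
import threading

def producer(q: queue.Queue, num: int) -> None:
    for i in range(num):
        q.put(i)      # hand each item to the consumer
    q.put(None)       # sentinel: tells the consumer there is nothing more

def consumer(q: queue.Queue) -> None:
    while True:
        item = q.get()        # blocks until an item is available
        if item is None:      # sentinel received, producer is finished
            break
        print(f"consumed {item}")

q = queue.Queue()
threads = [threading.Thread(target=producer, args=(q, 5)),
           threading.Thread(target=consumer, args=(q,))]
[t.start() for t in threads]
[t.join() for t in threads]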
Q: read excel with pandas Hi guys, I'm new to Python and pandas. I have a question about this tutorial page from the pandas docs, https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced-indexing-with-hierarchical-index : how do I get a list of the loc labels? A B C first second bar one 0.895717 0.410835 -1.413681 two 0.805244 0.813850 1.607920 baz one -1.206412 0.132003 1.024180 two 2.565646 -0.827317 0.569605 foo one 1.431256 -0.076467 0.875906 two 1.340309 -1.187678 -2.211372 qux one -1.170299 1.130127 0.974466 two -0.226169 -1.436737 -2.006747 In [43]: df.loc["bar"] Out[43]: A B C second one 0.895717 0.410835 -1.413681 two 0.805244 0.813850 1.607920 In that tutorial, "bar" was coded directly inside the brackets. My question is how to get a list of the loc labels, like loc=[bar, baz, foo, qux], via some method that, when called, prints the list of loc labels [bar, baz, foo, qux]. A: try this df.index.unique(level=0) returns Index(['bar', 'baz', 'foo', 'qux'], dtype='object', name='first')
read excel with pandas
Hi guys, I'm new to Python and pandas. I have a question about this tutorial page from the pandas docs, https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced-indexing-with-hierarchical-index : how do I get a list of the loc labels? A B C first second bar one 0.895717 0.410835 -1.413681 two 0.805244 0.813850 1.607920 baz one -1.206412 0.132003 1.024180 two 2.565646 -0.827317 0.569605 foo one 1.431256 -0.076467 0.875906 two 1.340309 -1.187678 -2.211372 qux one -1.170299 1.130127 0.974466 two -0.226169 -1.436737 -2.006747 In [43]: df.loc["bar"] Out[43]: A B C second one 0.895717 0.410835 -1.413681 two 0.805244 0.813850 1.607920 In that tutorial, "bar" was coded directly inside the brackets. My question is how to get a list of the loc labels, like loc=[bar, baz, foo, qux], via some method that, when called, prints the list of loc labels [bar, baz, foo, qux].
[ "try this\ndf.index.unique(level=0)\nreturns\nIndex(['bar', 'baz', 'foo', 'qux'], dtype='object', name='first')\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074486041_pandas_python.txt
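A short, self-contained sketch of the accepted approach above, built on a made-up MultiIndex frame; df.index.unique(level=0) and Index.get_level_values(0).unique() are equivalent ways to list the first-level labels in their original order.

import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product([["bar", "baz", "foo", "qux"], ["one", "two"]],
                                 names=["first", "second"])
df = pd.DataFrame(np.random.randn(8, 3), index=idx, columns=list("ABC"))

print(df.index.unique(level=0))               # Index(['bar', 'baz', 'foo', 'qux'], ...)
print(df.index.get_level_values(0).unique())  # equivalent spelling
print(list(df.index.unique(level=0)))         # as a plain Python list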
Q: Pandas - Pivot Table based on Column A while ensuring order in Column B Given the following data frame containing three columns: Text Line_Number Value 521998135749 15 Pamphlet 2716253485 15 Local SRM 15 Info 12 B WAY 16 Info DANUBE 17 Info 520898004500 18 Pamphlet 2746254789 18 Local OLO 18 Info 14TH ST N 19 Info VOLGA 20 Info 534598195562 21 Pamphlet 2867365825 21 Local JDM 21 Info 896 VT 22 Info FALLS RD 23 Info I want to transform it such that Line_Number column and the unique values in Value column form the headers of the new table. The values in Text column should be filled appropriately. The challenge here is, if certain values in the columns are missing, they should be left blank as shown below: Line_Number Pamphlet Local Info 15 521998135749 2716253485 SRM 16 12 B WAY 17 DANUBE 18 520898004500 2746254789 OLO 19 14TH ST N 20 VOLGA 21 534598195562 2867365825 JDM 22 896 VT 23 FALLS RD The table must be filled strictly according to the order of values in "Line_Number" column. There are many examples of transposing and pivoting tables, but I haven't come across any examples where the order of a column is preserved. A: Use pivot and reindex by unique so: (df.pivot('Line_Number', 'Value', 'Text') .reindex(index=df['Line_Number'].unique(), columns=df['Value'].unique()))
Pandas - Pivot Table based on Column A while ensuring order in Column B
Given the following data frame containing three columns: Text Line_Number Value 521998135749 15 Pamphlet 2716253485 15 Local SRM 15 Info 12 B WAY 16 Info DANUBE 17 Info 520898004500 18 Pamphlet 2746254789 18 Local OLO 18 Info 14TH ST N 19 Info VOLGA 20 Info 534598195562 21 Pamphlet 2867365825 21 Local JDM 21 Info 896 VT 22 Info FALLS RD 23 Info I want to transform it such that Line_Number column and the unique values in Value column form the headers of the new table. The values in Text column should be filled appropriately. The challenge here is, if certain values in the columns are missing, they should be left blank as shown below: Line_Number Pamphlet Local Info 15 521998135749 2716253485 SRM 16 12 B WAY 17 DANUBE 18 520898004500 2746254789 OLO 19 14TH ST N 20 VOLGA 21 534598195562 2867365825 JDM 22 896 VT 23 FALLS RD The table must be filled strictly according to the order of values in "Line_Number" column. There are many examples of transposing and pivoting tables, but I haven't come across any examples where the order of a column is preserved.
[ "Use pivot and reindex by unique so:\n(df.pivot('Line_Number', 'Value', 'Text')\n .reindex(index=df['Line_Number'].unique(), columns=df['Value'].unique()))\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074486802_dataframe_pandas_python.txt
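To make the reindex trick above concrete, here is a runnable sketch on a tiny made-up frame; note that recent pandas versions expect keyword arguments for pivot, so the positional form shown in the answer may need to be rewritten as below.

import pandas as pd

df = pd.DataFrame({
    "Text": ["521998135749", "SRM", "12 B WAY", "DANUBE"],
    "Line_Number": [15, 15, 16, 17],
    "Value": ["Pamphlet", "Info", "Info", "Info"],
})

out = (df.pivot(index="Line_Number", columns="Value", values="Text")
         .reindex(index=df["Line_Number"].unique(),
                  columns=df["Value"].unique()))
print(out)  # rows keep the order 15, 16, 17; columns keep Pamphlet, Info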
Q: Python playwright - existing browser and variable in function Below is the code of my web scraper, where I am trying to scrape the car names. I also have a list with locations. I would like to use the variable loc from the location list in my function, and also use the opened browser window, but the function does not recognize the syntax 'page.' that is on line 16. For example, in Selenium you define your webdriver once at the top of your code and use it inside and outside any function without having to call it again. How can I use the page in my function -page.goto("https://www.sixt.co.uk/")- from playwright.sync_api import Playwright, sync_playwright, expect locationList = [ 'London Luton Airport', 'London Hilton', 'London City', 'London Wembley', 'London Battersea', 'London Shepherds Bush' ] chromium = playwright.chromium browser = chromium.launch(channel="chrome", headless=False) page = browser.new_page() page.set_viewport_size({"width": 1920, "height": 1080}) page.goto("https://www.sixt.co.uk/") page.locator("[data-testid=\"uc-accept-all-button\"]").click() page.locator("[placeholder=\"Find a location\"]").click() page.locator("[placeholder=\"Find a location\"]").fill("luton") page.locator("text=London Luton Airport (GB)").click() page.locator("button:has-text(\"Show offers\")").click() def run(playwright: Playwright): page.locator("[placeholder=\"Find a location\"]").click() page.locator("[placeholder=\"Find a location\"]").fill(loc) page.locator('//div[text()="' + loc + '"]').click() carnames = page.locator("//h2[@class='vehicle-item__title']") #Get all carnames in a list carnamelist = [cars.text for cars in carnames] for loc in locationList: with sync_playwright() as playwright: run(playwright) A: Here's how to initialize playwright browser globally without context managers. In your particular code, this will work like below: from playwright.sync_api import Playwright, sync_playwright locationList = [ 'London Luton Airport', 'London Hilton', 'London City', 'London Wembley', 'London Battersea', 'London Shepherds Bush' ] playwright = sync_playwright().start() # <-- Use this to initialize playwright globally chromium = playwright.chromium browser = chromium.launch(channel="chrome", headless=False) page = browser.new_page() page.set_viewport_size({"width": 1920, "height": 1080}) page.goto("https://www.sixt.co.uk/") page.locator("[data-testid=\"uc-accept-all-button\"]").click() page.locator("[placeholder=\"Find a location\"]").click() page.locator("[placeholder=\"Find a location\"]").fill("luton") page.locator("text=London Luton Airport (GB)").click() page.locator("button:has-text(\"Show offers\")").click() def run(playwright: Playwright): page.locator("[placeholder=\"Find a location\"]").click() page.locator("[placeholder=\"Find a location\"]").fill(loc) page.locator('//div[text()="' + loc + '"]').click() carnames = page.locator("//h2[@class='vehicle-item__title']") #Get all carnames in a list carnamelist = [cars.text for cars in carnames] for loc in locationList: run(playwright) playwright.stop() # --> Cleanup resources properly when done However, you can also initialize page and browser from inside the context manager you create in the original question, and it will work as expected.
Python playwright - existing browser and variable in function
Below is the code of my web scraper, where I am trying to scrape the car names. I also have a list with locations. I would like to use the variable loc from the location list in my function, and also use the opened browser window, but the function does not recognize the syntax 'page.' that is on line 16. For example, in Selenium you define your webdriver once at the top of your code and use it inside and outside any function without having to call it again. How can I use the page in my function -page.goto("https://www.sixt.co.uk/")- from playwright.sync_api import Playwright, sync_playwright, expect locationList = [ 'London Luton Airport', 'London Hilton', 'London City', 'London Wembley', 'London Battersea', 'London Shepherds Bush' ] chromium = playwright.chromium browser = chromium.launch(channel="chrome", headless=False) page = browser.new_page() page.set_viewport_size({"width": 1920, "height": 1080}) page.goto("https://www.sixt.co.uk/") page.locator("[data-testid=\"uc-accept-all-button\"]").click() page.locator("[placeholder=\"Find a location\"]").click() page.locator("[placeholder=\"Find a location\"]").fill("luton") page.locator("text=London Luton Airport (GB)").click() page.locator("button:has-text(\"Show offers\")").click() def run(playwright: Playwright): page.locator("[placeholder=\"Find a location\"]").click() page.locator("[placeholder=\"Find a location\"]").fill(loc) page.locator('//div[text()="' + loc + '"]').click() carnames = page.locator("//h2[@class='vehicle-item__title']") #Get all carnames in a list carnamelist = [cars.text for cars in carnames] for loc in locationList: with sync_playwright() as playwright: run(playwright)
[ "Here's how to initialize playwright browser globally without context managers.\nIn your particular code, this will work like below:\nfrom playwright.sync_api import Playwright, sync_playwright\n\nlocationList = [\n 'London Luton Airport',\n 'London Hilton',\n 'London City',\n 'London Wembley',\n 'London Battersea',\n 'London Shepherds Bush'\n]\n\nplaywright = sync_playwright().start() # <-- Use this to initialize playwright globally\n\nchromium = playwright.chromium\nbrowser = chromium.launch(channel=\"chrome\", headless=False)\npage = browser.new_page()\npage.set_viewport_size({\"width\": 1920, \"height\": 1080})\npage.goto(\"https://www.sixt.co.uk/\")\npage.locator(\"[data-testid=\\\"uc-accept-all-button\\\"]\").click()\npage.locator(\"[placeholder=\\\"Find a location\\\"]\").click()\npage.locator(\"[placeholder=\\\"Find a location\\\"]\").fill(\"luton\")\npage.locator(\"text=London Luton Airport (GB)\").click()\npage.locator(\"button:has-text(\\\"Show offers\\\")\").click()\n\ndef run(playwright: Playwright):\n\n page.locator(\"[placeholder=\\\"Find a location\\\"]\").click()\n page.locator(\"[placeholder=\\\"Find a location\\\"]\").fill(loc)\n page.locator('//div[text()=\"' + loc + '\"]').click()\n carnames = page.locator(\"//h2[@class='vehicle-item__title']\")\n #Get all carnames in a list\n carnamelist = [cars.text for cars in carnames]\n\nfor loc in locationList:\n run(playwright)\n\nplaywright.stop() # --> Cleanup resources properly when done\n\nHowever, you can also initialize page and browser from inside the context manager you create in the original question, and it will work as expected.\n" ]
[ 0 ]
[]
[]
[ "playwright", "python" ]
stackoverflow_0074480301_playwright_python.txt
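Alongside the start()/stop() style above, the context-manager style also works if the page is created inside the with block and passed to the function explicitly; a minimal sketch (URL and selectors taken from the question, the rest is illustrative):

from playwright.sync_api import Page, sync_playwright

def scrape_location(page: Page, loc: str) -> None:
    page.locator('[placeholder="Find a location"]').click()
    page.locator('[placeholder="Find a location"]').fill(loc)
    # ... continue interacting with the shared page here ...

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://www.sixt.co.uk/")
    for loc in ["London City", "London Wembley"]:
        scrape_location(page, loc)  # the page object is passed in, not global
    browser.close()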
Q: Accessing MySQL docker container from Flask APP I have been trying all day, unsuccessfully, to connect to a MySQL running container from my SQLALCHEMY flask app. Don't know what I'm doing wrong; perhaps I'm missing something that you might call to my attention to resolve my issue. Just for context, I have the following files: docker-compose.yml file with MySQL and Adminer: version: '3.1' services: db: image: mysql:5.7 restart: unless-stopped environment: MYSQL_DATABASE: mysqldb MYSQL_ROOT_PASSWORD: master MYSQL_USER: user MYSQL_PASSWORD: password networks: - mysqlcomposenetwork volumes: - ./data:/data ports: - 100:100 adminer: image: adminer restart: unless-stopped ports: - 8080:8080 networks: - mysqlcomposenetwork networks: mysqlcomposenetwork: driver: bridge db.yml file to store connection string information isolated from the app. mysql_host: 'localhost:100' mysql_user: 'user' # Enter your password in field below mysql_password: 'password' mysql_db: 'mysqldb' And finally my app.py, which is a basic Flask app leveraging ORM via the flask_sqlalchemy module. Note that before running app.py the mysql and adminer containers are both running and the db.yml file is in the same directory as app.py; thus, accessible. from flask import Flask from flask_sqlalchemy import SQLAlchemy import pymysql import datetime import yaml app = Flask(__name__) db = yaml.full_load(open('db.yml')) #mysql://username:password@host:port/database_name connection_str='mysql+pymysql://'+db['mysql_user']+':'+db['mysql_password']+'@'+db['mysql_host']+'/'+db['mysql_db'] app.config['SQLALCHEMY_DATABASE_URI']=connection_str print(connection_str) db = SQLAlchemy(app) class Visitor(db.Model): accessed_at=db.Column(db.Float,primary_key=True) user_id=db.Column(db.Integer) page_id=db.Column(db.Integer) def __init__(self,accessed_at,user_id,page_id): self.accessed_at=accessed_at self.user_id=user_id self.page_id=page_id if __name__ == '__main__': with app.app_context(): db.create_all() visitor=Visitor(datetime.datetime.now().timestamp(),1000,5) db.session.add(visitor) db.session.commit() print(Visitor.query.all()) After running app.py the following error is prompted: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([WinError 10061] Already tried to change mysql_host: 'localhost:100' to mysql_host: 'db:100' and mysql_host: 'db'. None of them worked. I appreciate your attention and possible feedback to help me overcome this crippling issue in my program. I am unable to connect to the MySQL container database from outside. Specifically, from a flask app. A: I have resolved your question in this github repo : https://github.com/jcroyoaun/stackoverflow-flask-question I would appreciate if you could clone my repo and try docker-compose up --build first. After that, if it works for you please read on the explanation of what was wrong. (I modified a few things after running it on my Mac as I have to specify platform: "linux/amd64" because I use an M1 chip with arm architecture and the container for the mysql image wouldn't run on my local, so if you have any issues while running it, post them here) NOTE : for simplicity, I'm only adding the lines of code that I changed from the one you shared. For a full view of the codebase, please see the repo shared above. 
What I changed from your code changed on main.py connection_str='mysql://'+db['mysql_user']+':'+db['mysql_password']+'@'+db['mysql_host']+'/'+db['mysql_db'] app.config['SQLALCHEMY_DATABASE_URI']=connection_str print(connection_str) EXPLANATION : -> I didn't need pymysql for your particular situation, so I had it removed from the connection string then on your db.yml file, I changed from mysql_host: 'localhost:100' to mysql_host: 'db:3306' EXPLANATION : -> When you use networking in Docker compose, the value of a service name can be resolved as a hostname. Networking-wise, your "adminer" Flask app, being in a different container, cannot resolve "localhost" as something outside of itself; therefore, you may use a hostname, as it relies on DNS resolution so that it can connect to another host from the outside world outside of the adminer container. I also changed the port, as by default, mysql listens to port 3306 so internally mapping port 100 wouldn't make sense unless you modify the configurations of the base docker image (source) I changed the ports on db service declared on the docker-compose.yml as well, this way we allow the connection to go through by matching the connection string read from db.yml ports we changed earlier. Changes went from ports: - 100:100 to ports: - 3306:3306 Until this point, the connection works fine, however I found some bugs on your code that I needed to modify as well. Additional tweaks I made First, I had to re-create your schema and add a table with a few parameters changed. You had the accessed_at defined as CREATE TABLE `visitor` ( `accessed_at` float(10,7), but then you're passing numbers like 1668759725.166818 Essentially what 10,7 means is that you want : 10 total units of precision with 7 units past the decimal place, hence only 3 units before the decimal place Therefore I added : CREATE TABLE `visitor` ( `accessed_at` float(18,6), As I found it more appropriate for the epoch timestamps you were passing as the primary key. Some other things I changed briefly (you can see on the GitHub repo) a) I think it's a good practice to copy a schema.sql to docker-entrypoint-initdb.d/ to pre-load the tables and have them ready for your first-time run of the application+db, otherwise you'll have to insert the schema name and tables manually the first time. I created a Dockerfile for this under ./mysqldockerfile/Dockerfile dir on the project. b) I also modified the volume you mounted around line 14 on your docker-compose.yml, /var/lib/mysql is usually where the data can be stored/retrieved for the Databases, therefore, mounting a volume from your local (./data:/var/lib/mysql) will allow the data inserted to persist after restart (since it's now stored on your local hard drive). NOTE : there are other bugs / exceptions being thrown, but the connection issue works fine now.
Accessing MySQL docker container from Flask APP
I have been trying all day, unsuccessfully, to connect to a MySQL running container from my SQLALCHEMY flask app. Don't know what I'm doing wrong; perhaps I'm missing something that you might call to my attention to resolve my issue. Just for context, I have the following files: docker-compose.yml file with MySQL and Adminer: version: '3.1' services: db: image: mysql:5.7 restart: unless-stopped environment: MYSQL_DATABASE: mysqldb MYSQL_ROOT_PASSWORD: master MYSQL_USER: user MYSQL_PASSWORD: password networks: - mysqlcomposenetwork volumes: - ./data:/data ports: - 100:100 adminer: image: adminer restart: unless-stopped ports: - 8080:8080 networks: - mysqlcomposenetwork networks: mysqlcomposenetwork: driver: bridge db.yml file to store connection string information isolated from the app. mysql_host: 'localhost:100' mysql_user: 'user' # Enter your password in field below mysql_password: 'password' mysql_db: 'mysqldb' And finally my app.py, which is a basic Flask app leveraging ORM via the flask_sqlalchemy module. Note that before running app.py the mysql and adminer containers are both running and the db.yml file is in the same directory as app.py; thus, accessible. from flask import Flask from flask_sqlalchemy import SQLAlchemy import pymysql import datetime import yaml app = Flask(__name__) db = yaml.full_load(open('db.yml')) #mysql://username:password@host:port/database_name connection_str='mysql+pymysql://'+db['mysql_user']+':'+db['mysql_password']+'@'+db['mysql_host']+'/'+db['mysql_db'] app.config['SQLALCHEMY_DATABASE_URI']=connection_str print(connection_str) db = SQLAlchemy(app) class Visitor(db.Model): accessed_at=db.Column(db.Float,primary_key=True) user_id=db.Column(db.Integer) page_id=db.Column(db.Integer) def __init__(self,accessed_at,user_id,page_id): self.accessed_at=accessed_at self.user_id=user_id self.page_id=page_id if __name__ == '__main__': with app.app_context(): db.create_all() visitor=Visitor(datetime.datetime.now().timestamp(),1000,5) db.session.add(visitor) db.session.commit() print(Visitor.query.all()) After running app.py the following error is prompted: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([WinError 10061] Already tried to change mysql_host: 'localhost:100' to mysql_host: 'db:100' and mysql_host: 'db'. None of them worked. I appreciate your attention and possible feedback to help me overcome this crippling issue in my program. I am unable to connect to the MySQL container database from outside. Specifically, from a flask app.
[ "I have resolved your question in this github repo :\nhttps://github.com/jcroyoaun/stackoverflow-flask-question\nI would appreciate if you could clone my repo and try\ndocker-compose up --build\n\nfirst.\nAfter that, if it works for you please read on the explanation of what was wrong. (I modified a few things after running it on my Mac as I have to specify platform: \"linux/amd64\" because I use an M1 chip with arm architecture and the container for the mysql image wouldn't run on my local, so if you have any issues while running it, post them here)\nNOTE : for simplicity, I'm only adding the lines of code that I changed from the one you shared.\nFor a full view of the codebase, please see the repo shared above.\nWhat I changed from your code\n\nchanged on main.py\n\nconnection_str='mysql://'+db['mysql_user']+':'+db['mysql_password']+'@'+db['mysql_host']+'/'+db['mysql_db']\napp.config['SQLALCHEMY_DATABASE_URI']=connection_str\nprint(connection_str)\n\nEXPLANATION :\n-> I didn't need pymysql for your particular situation, so I had it removed from the connection string\n\nthen on your db.yml file, I changed from\nmysql_host: 'localhost:100'\n\nto\nmysql_host: 'db:3306'\n\nEXPLANATION : -> When you use networking in Docker compose, the value of a service name can be resolved as a hostname. Networking-wise, your \"adminer\" Flask app, being in a different container, cannot resolve \"localhost\" as something outside of itself; therefore, you may use a hostname, as it relies on DNS resolution so that it can connect to another host from the outside world outside of the adminer container.\nI also changed the port, as by default, mysql listens to port 3306 so internally mapping port 100 wouldn't make sense unless you modify the configurations of the base docker image (source)\n\nI changed the ports on db service declared on the docker-compose.yml as well, this way we allow the connection to go through by matching the connection string read from db.yml ports we changed earlier. Changes went from\n ports:\n - 100:100\n\nto\n ports:\n - 3306:3306\n\nUntil this point, the connection works fine, however I found some bugs on your code that I needed to modify as well.\nAdditional tweaks I made\nFirst, I had to re-create your schema and add a table with a few parameters changed. You had the accessed_at defined as\nCREATE TABLE `visitor` (\n `accessed_at` float(10,7),\n\nbut then you're passing numbers like\n1668759725.166818\nEssentially what 10,7 means is that you want :\n\n10 total units of precision\nwith 7 units past the decimal place, hence\nonly 3 units before the decimal place\n\nTherefore I added :\nCREATE TABLE `visitor` (\n `accessed_at` float(18,6),\n\nAs I found it more appropriate for the epoch timestamps you were passing as the primary key.\nSome other things I changed briefly (you can see on the GitHub repo)\na) I think it's a good practice to copy a schema.sql to docker-entrypoint-initdb.d/ to pre-load the tables and have them ready for your first-time run of the application+db, otherwise you'll have to insert the schema name and tables manually the first time. \nI created a Dockerfile for this under ./mysqldockerfile/Dockerfile dir on the project.\nb) I also modified the volume you mounted around line 14 on your docker-compose.yml, /var/lib/mysql is usually where the data can be stored/retrieved for the Databases, therefore, mounting a volume from your local (./data:/var/lib/mysql) will allow the data inserted to persist after restart (since it's now stored on your local hard drive).\nNOTE : there are other bugs / exceptions being thrown, but the connection issue works fine now.\n" ]
[ 0 ]
[]
[]
[ "docker_compose", "flask", "flask_sqlalchemy", "mysql_python", "python" ]
stackoverflow_0074482882_docker_compose_flask_flask_sqlalchemy_mysql_python_python.txt
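A companion issue with MySQL in Compose is that the database may not be ready yet when the Flask container starts. A hedged sketch of a retry loop with SQLAlchemy, assuming the service name db and the credentials from the compose file above (either the mysql:// or mysql+pymysql:// URL works, provided the matching driver is installed):

import time
import sqlalchemy
from sqlalchemy.exc import OperationalError

engine = sqlalchemy.create_engine("mysql+pymysql://user:password@db:3306/mysqldb")

for attempt in range(10):
    try:
        with engine.connect():       # opens and immediately closes a test connection
            print("database is ready")
            break
    except OperationalError:
        print(f"not ready yet (attempt {attempt + 1}), retrying...")
        time.sleep(3)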
Q: TensorFlow.js model converters.: error: unrecognized arguments: I am trying to follow this tutorial, How to deploy your custom TensorFlow model to react native, to convert my model so it can be deployed to my reactjs webapp. (Thesis) C:\Users\JHON MICHEAL>tensorflowjs_converter --input_format=keras C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\rabbit.h5 C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\rabbit.h5 2022-11-18 15:30:20.943680: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2022-11-18 15:30:20.944067: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. usage: TensorFlow.js model converters. [-h] [--input_format {tf_frozen_model,tf_saved_model,tf_hub,tfjs_layers_model,keras_saved_model,keras}] [--output_format {tfjs_layers_model,keras_saved_model,keras,tfjs_graph_model}] [--signature_name SIGNATURE_NAME] [--saved_model_tags SAVED_MODEL_TAGS] [--quantize_float16 [QUANTIZE_FLOAT16]] [--quantize_uint8 [QUANTIZE_UINT8]] [--quantize_uint16 [QUANTIZE_UINT16]] [--quantization_bytes {1,2}] [--split_weights_by_layer] [--version] [--skip_op_check] [--strip_debug_ops STRIP_DEBUG_OPS] [--use_structured_outputs_names USE_STRUCTURED_OUTPUTS_NAMES] [--weight_shard_size_bytes WEIGHT_SHARD_SIZE_BYTES] [--output_node_names OUTPUT_NODE_NAMES] [--control_flow_v2 CONTROL_FLOW_V2] [--experiments EXPERIMENTS] [--metadata METADATA] [input_path] [output_path] TensorFlow.js model converters.: error: unrecognized arguments: C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\rabbit.h5 but this is the result I get. A: It seems that this argument is not correct --input_format=keras and needs to be changed to --input_format keras you can see details here https://www.tensorflow.org/js/tutorials/conversion/import_keras
TensorFlow.js model converters.: error: unrecognized arguments:
I am trying to follow this tutorial, How to deploy your custom TensorFlow model to react native, to convert my model so it can be deployed to my reactjs webapp. (Thesis) C:\Users\JHON MICHEAL>tensorflowjs_converter --input_format=keras C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\rabbit.h5 C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\rabbit.h5 2022-11-18 15:30:20.943680: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2022-11-18 15:30:20.944067: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. usage: TensorFlow.js model converters. [-h] [--input_format {tf_frozen_model,tf_saved_model,tf_hub,tfjs_layers_model,keras_saved_model,keras}] [--output_format {tfjs_layers_model,keras_saved_model,keras,tfjs_graph_model}] [--signature_name SIGNATURE_NAME] [--saved_model_tags SAVED_MODEL_TAGS] [--quantize_float16 [QUANTIZE_FLOAT16]] [--quantize_uint8 [QUANTIZE_UINT8]] [--quantize_uint16 [QUANTIZE_UINT16]] [--quantization_bytes {1,2}] [--split_weights_by_layer] [--version] [--skip_op_check] [--strip_debug_ops STRIP_DEBUG_OPS] [--use_structured_outputs_names USE_STRUCTURED_OUTPUTS_NAMES] [--weight_shard_size_bytes WEIGHT_SHARD_SIZE_BYTES] [--output_node_names OUTPUT_NODE_NAMES] [--control_flow_v2 CONTROL_FLOW_V2] [--experiments EXPERIMENTS] [--metadata METADATA] [input_path] [output_path] TensorFlow.js model converters.: error: unrecognized arguments: C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\rabbit.h5 but this is the result I get.
[ "It seems that this argument is not correct\n\n--input_format=keras\n\nand needs to be changed to\n\n--input_format keras\n\nyou can see details here\nhttps://www.tensorflow.org/js/tutorials/conversion/import_keras\n" ]
[ 0 ]
[]
[]
[ "python", "reactjs", "tensorflow" ]
stackoverflow_0074486398_python_reactjs_tensorflow.txt
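Worth noting for this record: the unrecognized-arguments error most likely comes from the unquoted spaces in the Windows path (C:\Users\JHON MICHEAL\...), which the argument parser splits into extra arguments; wrapping both paths in double quotes on the command line usually resolves it. Alternatively, the conversion can be done from Python; a sketch assuming the tensorflowjs package is installed, with a hypothetical output directory:

import tensorflowjs as tfjs
from tensorflow import keras

model = keras.models.load_model(r"C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\rabbit.h5")
tfjs.converters.save_keras_model(model, r"C:\Users\JHON MICHEAL\Desktop\Tan\Model\image-model\tfjs")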
Q: How to Extract all Urls from href under an <a> tag, but it seems to give me an error all the time category_tag = soup.find_all('div' , {'class': '_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8'}) Output of category_tag: <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318158031">Action &amp; Adventure</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318052031">Arts, Film &amp; Photography</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318064031">Biographies, Diaries &amp; True Accounts</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318068031">Business &amp; Economics</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318073031">Children's &amp; Young Adult</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318104031">Comics &amp; Mangas</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318105031">Computing, Internet &amp; Digital Media</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318118031">Crafts, Home &amp; Lifestyle</a></div>, Now the problem is, I am not able to extract href from the <a> tags. It keeps showing an error. I have already tried: category_url_tag = category_tag.find('a')['href'] But it keeps showing an error. category_url = [] for tag in category_tag: category_url.append(tag.get('href')) print(category_url) This printed a list containing None. 
A: Try to select your elements more specifically, and use id and tag structure over dynamic classes: soup.select('#zg-left-col a') or, to be more strict, use only paths that start with a specific pattern: soup.select('#zg-left-col a[href^="/gp/bestsellers/books"]') So the list could be created via a list comprehension: ['https://www.amazon.in'+a.get('href') for a in soup.select('#zg-left-col a[href^="/gp/bestsellers/books"]')] Example This uses a dict comprehension to get only unique urls and, on top, also the category name: import requests from bs4 import BeautifulSoup soup = BeautifulSoup(requests.get('https://www.amazon.in/gp/bestsellers/books/').text) {'https://www.amazon.in'+a.get('href'):a.text for a in soup.select('#zg-left-col a[href^="/gp/bestsellers/books"]')} Output {'https://www.amazon.in/gp/bestsellers/books/1318158031': 'Action & Adventure', 'https://www.amazon.in/gp/bestsellers/books/1318052031': 'Arts, Film & Photography', 'https://www.amazon.in/gp/bestsellers/books/1318064031': 'Biographies, Diaries & True Accounts', 'https://www.amazon.in/gp/bestsellers/books/1318068031': 'Business & Economics', 'https://www.amazon.in/gp/bestsellers/books/1318073031': "Children's & Young Adult", 'https://www.amazon.in/gp/bestsellers/books/1318104031': 'Comics & Mangas', 'https://www.amazon.in/gp/bestsellers/books/1318105031': 'Computing, Internet & Digital Media', 'https://www.amazon.in/gp/bestsellers/books/1318118031': 'Crafts, Home & Lifestyle', 'https://www.amazon.in/gp/bestsellers/books/1318161031': 'Crime, Thriller & Mystery', 'https://www.amazon.in/gp/bestsellers/books/22960344031': 'Engineering',...} A: You are looping over the div elements. You should find the <a> inside each div. Please check the following code. It should give you the expected result. category_tag = soup.find_all('div' , {'class': '_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8'}) categories = [(cat.find('a').text, cat.find('a')['href']) for cat in category_tag[1:]]
How to Extract all Urls from href under an <a> tag, but it seems to give me an error all the time
category_tag = soup.find_all('div' , {'class': '_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8'}) Output of category_tag: <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318158031">Action &amp; Adventure</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318052031">Arts, Film &amp; Photography</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318064031">Biographies, Diaries &amp; True Accounts</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318068031">Business &amp; Economics</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318073031">Children's &amp; Young Adult</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318104031">Comics &amp; Mangas</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318105031">Computing, Internet &amp; Digital Media</a></div>, <div class="_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8" role="treeitem"><a href="/gp/bestsellers/books/1318118031">Crafts, Home &amp; Lifestyle</a></div>, Now the problem is, I am not able to extract href from the <a> tags. It keeps showing an error. I have already tried: category_url_tag = category_tag.find('a')['href'] But it keeps showing an error. category_url = [] for tag in category_tag: category_url.append(tag.get('href')) print(category_url) This printed a list containing None.
[ "Try to select your elements more specifically, and use id and tag structure over dynamic classes:\nsoup.select('#zg-left-col a')\n\nor, to be more strict, use only paths that start with a specific pattern:\nsoup.select('#zg-left-col a[href^=\"/gp/bestsellers/books\"]')\n\nSo the list could be created via a list comprehension:\n['https://www.amazon.in'+a.get('href') for a in soup.select('#zg-left-col a[href^=\"/gp/bestsellers/books\"]')]\n\nExample\nThis uses a dict comprehension to get only unique urls and, on top, also the category name:\nimport requests\nfrom bs4 import BeautifulSoup\n\nsoup = BeautifulSoup(requests.get('https://www.amazon.in/gp/bestsellers/books/').text)\n\n\n{'https://www.amazon.in'+a.get('href'):a.text for a in soup.select('#zg-left-col a[href^=\"/gp/bestsellers/books\"]')}\n\nOutput\n{'https://www.amazon.in/gp/bestsellers/books/1318158031': 'Action & Adventure',\n 'https://www.amazon.in/gp/bestsellers/books/1318052031': 'Arts, Film & Photography',\n 'https://www.amazon.in/gp/bestsellers/books/1318064031': 'Biographies, Diaries & True Accounts',\n 'https://www.amazon.in/gp/bestsellers/books/1318068031': 'Business & Economics',\n 'https://www.amazon.in/gp/bestsellers/books/1318073031': \"Children's & Young Adult\",\n 'https://www.amazon.in/gp/bestsellers/books/1318104031': 'Comics & Mangas',\n 'https://www.amazon.in/gp/bestsellers/books/1318105031': 'Computing, Internet & Digital Media',\n 'https://www.amazon.in/gp/bestsellers/books/1318118031': 'Crafts, Home & Lifestyle',\n 'https://www.amazon.in/gp/bestsellers/books/1318161031': 'Crime, Thriller & Mystery',\n 'https://www.amazon.in/gp/bestsellers/books/22960344031': 'Engineering',...}\n\n", "You are looping over the div elements. You should find the <a> inside each div.\nPlease check the following code. It should give you the expected result.\ncategory_tag = soup.find_all('div' , {'class': '_p13n-zg-nav-tree-all_style_zg-browse-item__1rdKf _p13n-zg-nav-tree-all_style_zg-browse-height-large__1z5B8'})\ncategories = [(cat.find('a').text, cat.find('a')['href']) for cat in category_tag[1:]]\n\n" ]
[ 1, 0 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074486499_beautifulsoup_python_web_scraping.txt
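The None values in this record come from calling .get('href') on the div, while the href attribute lives on the nested <a>. A tiny self-contained sketch of the difference (the HTML string is made up):

from bs4 import BeautifulSoup

html = '<div class="item"><a href="/gp/bestsellers/books/1318158031">Action</a></div>'
soup = BeautifulSoup(html, "html.parser")

div = soup.find("div", class_="item")
print(div.get("href"))   # None: the div itself has no href attribute
print(div.a["href"])     # /gp/bestsellers/books/1318158031
print(div.a.get_text())  # Action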
Q: Is there any way to make an anti-aliased circle in OpenCV I'm trying to draw a circle in a picture using OpenCV with Python. Here is the picture I wish to make: Here is the code I wrote: import cv2 import numpy as np import imutils from PIL import Image, ImageDraw, ImageFont text1 = "10x" text2 = "20gr" # Load image in OpenCV image = cv2.imread('Sasa.jfif') resized = imutils.resize(image, width=500) cv2.circle(resized,(350,150),65,(102,51,17),thickness=-1) # Convert the image to RGB (OpenCV uses BGR) cv2_im_rgb = cv2.cvtColor(resized,cv2.COLOR_BGR2RGB) # Pass the image to PIL pil_im = Image.fromarray(cv2_im_rgb) draw = ImageDraw.Draw(pil_im) # use a truetype font font1 = ImageFont.truetype("arial.ttf", 50) font2 = ImageFont.truetype("arial.ttf", 25) # Draw the text draw.text((310,110), text1, font=font1) draw.text((325,170), text2, font=font2) # Get back the image to OpenCV cv2_im_processed = cv2.cvtColor(np.array(pil_im), cv2.COLOR_RGB2BGR) cv2.imshow('Fonts', cv2_im_processed) cv2.waitKey(1) But this is what my code generates: The circle line is not precise. Is there anything I can do to make the line more precise, or is there any other library that draws circles with precise lines? Any suggestion will be very much appreciated! A: You can use anti aliasing to make the circle look better as described here: cv2.circle(resized,(350,150),65,(102,51,17),thickness=-1,lineType=cv2.LINE_AA)
Is there any way to make an anti-aliased circle in OpenCV
I'm trying to draw a circle in a picture using OpenCV with Python. Here is the picture I wish to make: Here is the code I wrote: import cv2 import numpy as np import imutils from PIL import Image, ImageDraw, ImageFont text1 = "10x" text2 = "20gr" # Load image in OpenCV image = cv2.imread('Sasa.jfif') resized = imutils.resize(image, width=500) cv2.circle(resized,(350,150),65,(102,51,17),thickness=-1) # Convert the image to RGB (OpenCV uses BGR) cv2_im_rgb = cv2.cvtColor(resized,cv2.COLOR_BGR2RGB) # Pass the image to PIL pil_im = Image.fromarray(cv2_im_rgb) draw = ImageDraw.Draw(pil_im) # use a truetype font font1 = ImageFont.truetype("arial.ttf", 50) font2 = ImageFont.truetype("arial.ttf", 25) # Draw the text draw.text((310,110), text1, font=font1) draw.text((325,170), text2, font=font2) # Get back the image to OpenCV cv2_im_processed = cv2.cvtColor(np.array(pil_im), cv2.COLOR_RGB2BGR) cv2.imshow('Fonts', cv2_im_processed) cv2.waitKey(1) But this is what my code generates: The circle line is not precise. Is there anything I can do to make the line more precise, or is there any other library that draws circles with precise lines? Any suggestion will be very much appreciated!
[ "You can use anti aliasing to make the circle look better as described here:\ncv2.circle(resized,(350,150),65,(102,51,17),thickness=-1,lineType=cv2.LINE_AA)\n\n" ]
[ 5 ]
[]
[]
[ "draw", "opencv", "python" ]
stackoverflow_0074486877_draw_opencv_python.txt
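If LINE_AA alone is not smooth enough, one further option (an illustrative sketch, not from the answer) is supersampling: draw the circle at a higher resolution and shrink it back down with area interpolation.

import cv2
import numpy as np

img = np.full((300, 300, 3), 255, dtype=np.uint8)  # stand-in for the loaded photo
scale = 4

big = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
cv2.circle(big, (150 * scale, 150 * scale), 65 * scale, (102, 51, 17),
           thickness=-1, lineType=cv2.LINE_AA)
smooth = cv2.resize(big, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_AREA)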
Q: Find multiple text patterns and then output the next value in a string in pandas I have a dataframe with the following values Call Data 1 [{'b_id': '31358658', 'incentive': 0}, {'b_id': 'D8384E90', 'incentive': 0}, {'b_id': '681B405A','incentive': 100}] 2 [{'b_id': 'D8384E90','incentive': 0 }, {'b_id': '31358658', 'incentive': 0}, {'b_id': '681B405A', 'incentive': 120}] 3 [{'b_id': '971C0B58','incentive': 0,}] 4 [{'b_id': '00450AAA','incentive': 0}, {'b_id': '0BCAEC4F','incentive': 0}, {'b_id': 'F2AD1313','incentive': 220},{'b_id': '971C0B58', 'incentive': 0}] Ideally I would like the output in this format Call B_id incentive 1 [31358658,D8384E90,681B405A] [0,0,100] 2 [D8384E90,31358658,681B405A] [0,0,120] 3 [971C0B58] [0] 4 [00450AAA,0BCAEC4F,F2AD1313,971C0B58] [0,0,220,0] The length of the data column can vary. So far I have tried df1 = df1.join(df1['Data'].str.split('b_id',expand=True).add_prefix('data')) Is there a way to search for each b_id in the string, take the value that follows the ":", and then add it to a list? #sample data code Call = [1,2,3,4,5,6,7,8,9] Data= [ [{'b_id': '31358658', 'incentive': 0}, {'b_id': 'D8384E90', 'incentive': 0}, {'b_id': '681B405A','incentive': 100}], [{'b_id': 'D8384E90','incentive': 0 }, {'b_id': '31358658', 'incentive': 0}, {'b_id': '681B405A', 'incentive': 120}], [{'b_id': '971C0B58','incentive': 0}], [{'b_id': '00450AAA','incentive': 0}, {'b_id': '0BCAEC4F','incentive': 0}, {'b_id': 'F2AD1313','incentive': 220},{'b_id': '971C0B58', 'incentive': 0}], [{'b_id': '90591CC5','incentive': 0}, {'b_id': '31358658','incentive': 0,}], [{'b_id': '20E32751', 'incentive': 0}, {'b_id': '339A574F','incentive': 0}], [{'b_id': '971C0B58','incentive': 0}], [], ] df = pd.DataFrame(list(zip(Call,Data)), columns =['Call','Data']) All help is appreciated. A: you can use a loop in a lambda function: import ast df['Data']=df['Data'].apply(ast.literal_eval) df['B_id']=df['Data'].apply(lambda x: [i['b_id'] for i in x]) df['incentive']=df['Data'].apply(lambda x: [i['incentive'] for i in x]) print(df.head(1)) ''' Call Data B_id incentive 0 1 [......] ['31358658', 'D8384E90', '681B405A'] [0, 0, 100] .... '''
Find multiple text patterns and then output the next value in a string in pandas
I have a dataframe with the following values Call Data 1 [{'b_id': '31358658', 'incentive': 0}, {'b_id': 'D8384E90', 'incentive': 0}, {'b_id': '681B405A','incentive': 100}] 2 [{'b_id': 'D8384E90','incentive': 0 }, {'b_id': '31358658', 'incentive': 0}, {'b_id': '681B405A', 'incentive': 120}] 3 [{'b_id': '971C0B58','incentive': 0,}] 4 [{'b_id': '00450AAA','incentive': 0}, {'b_id': '0BCAEC4F','incentive': 0}, {'b_id': 'F2AD1313','incentive': 220},{'b_id': '971C0B58', 'incentive': 0}] Ideally I would like the output in this format Call B_id incentive 1 [31358658,D8384E90,681B405A] [0,0,100] 2 [D8384E90,31358658,681B405A] [0,0,120] 3 [971C0B58] [0] 4 [00450AAA,0BCAEC4F,F2AD1313,971C0B58] [0,0,220,0] The length of the data column can vary. So far I have tried df1 = df1.join(df1['Data'].str.split('b_id',expand=True).add_prefix('data')) Is there a way to search for each b_id in the string, take the value that follows the ":", and then add it to a list? #sample data code Call = [1,2,3,4,5,6,7,8,9] Data= [ [{'b_id': '31358658', 'incentive': 0}, {'b_id': 'D8384E90', 'incentive': 0}, {'b_id': '681B405A','incentive': 100}], [{'b_id': 'D8384E90','incentive': 0 }, {'b_id': '31358658', 'incentive': 0}, {'b_id': '681B405A', 'incentive': 120}], [{'b_id': '971C0B58','incentive': 0}], [{'b_id': '00450AAA','incentive': 0}, {'b_id': '0BCAEC4F','incentive': 0}, {'b_id': 'F2AD1313','incentive': 220},{'b_id': '971C0B58', 'incentive': 0}], [{'b_id': '90591CC5','incentive': 0}, {'b_id': '31358658','incentive': 0,}], [{'b_id': '20E32751', 'incentive': 0}, {'b_id': '339A574F','incentive': 0}], [{'b_id': '971C0B58','incentive': 0}], [], ] df = pd.DataFrame(list(zip(Call,Data)), columns =['Call','Data']) All help is appreciated.
[ "you can use a loop in a lambda function:\nimport ast\ndf['Data']=df['Data'].apply(ast.literal_eval)\n\ndf['B_id']=df['Data'].apply(lambda x: [i['b_id'] for i in x])\ndf['incentive']=df['Data'].apply(lambda x: [i['incentive'] for i in x])\nprint(df.head(1))\n'''\n Call Data B_id incentive\n0 1 [......] ['31358658', 'D8384E90', '681B405A'] [0, 0, 100]\n....\n'''\n \n" ]
[ 1 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074486950_json_pandas_python.txt
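An alternative to the lambda loops in this record is explode plus json_normalize, which also scales to longer lists; a sketch on made-up rows:

import pandas as pd

data = [[{"b_id": "31358658", "incentive": 0}, {"b_id": "681B405A", "incentive": 100}],
        [{"b_id": "971C0B58", "incentive": 0}]]
df = pd.DataFrame({"Call": [1, 2], "Data": data})

exploded = df.explode("Data")                        # one dict per row
flat = pd.json_normalize(exploded["Data"].tolist())  # dict keys become columns
flat["Call"] = exploded["Call"].to_numpy()           # positional alignment
out = flat.groupby("Call").agg(list).reset_index()   # collapse back to lists per call
print(out)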
Q: How to find the common elements and total count of elements that are in DataFrame in Pandas I have the single DataFrame below, with columns Order_id and Order_date. Order_id Order_date 1 2022-11-16 2 2022-11-16 3 2022-11-16 4 2022-11-16 5 2022-11-17 6 2022-11-17 2 2022-11-17 1 2022-11-17 2 2022-11-18 7 2022-11-18 Here order_ids 2 and 1 are in both 2022-11-16 and 2022-11-17. Similarly, in 2022-11-17 & 2022-11-18 order_id 2 is repeating. So I want to compare the dates and see how many order_ids are coming from the previous date, i.e. count the order_ids which are coming from the previous day. Could someone please help. Expected Output: 2 - as a count of 2 order_ids come from 16 Nov to 17 Nov 1 - as a count of 1 order_id comes from 17 Nov to 18 Nov Not sure how to achieve this. Did some research on it but did not find anything. A: You can sort the dates, compute the successive differences per group to filter the deltas equal to 1 day, and count them: df['Order_date'] = pd.to_datetime(df['Order_date']) (df.sort_values(by=['Order_id', 'Order_date']) .groupby('Order_id')['Order_date'] .apply(lambda s: s.diff().eq('1d').sum()) .loc[lambda s: s.gt(0)] ) Output: Order_id 1 1 2 2 Name: Order_date, dtype: int64
How to find the common elements and total count of elements that are in DataFrame in Pandas
I have the single DataFrame below, with columns Order_id and Order_date. Order_id Order_date 1 2022-11-16 2 2022-11-16 3 2022-11-16 4 2022-11-16 5 2022-11-17 6 2022-11-17 2 2022-11-17 1 2022-11-17 2 2022-11-18 7 2022-11-18 Here order_ids 2 and 1 are in both 2022-11-16 and 2022-11-17. Similarly, in 2022-11-17 & 2022-11-18 order_id 2 is repeating. So I want to compare the dates and see how many order_ids are coming from the previous date, i.e. count the order_ids which are coming from the previous day. Could someone please help. Expected Output: 2 - as a count of 2 order_ids come from 16 Nov to 17 Nov 1 - as a count of 1 order_id comes from 17 Nov to 18 Nov Not sure how to achieve this. Did some research on it but did not find anything.
[ "You can sort the dates, compute the successive differences per group to filter the deltas equal to 1 day, and count them:\ndf['Order_date'] = pd.to_datetime(df['Order_date'])\n\n(df.sort_values(by=['Order_id', 'Order_date'])\n .groupby('Order_id')['Order_date']\n .apply(lambda s: s.diff().eq('1d').sum())\n .loc[lambda s: s.gt(0)]\n)\n\nOutput:\nOrder_id\n1 1\n2 2\nName: Order_date, dtype: int64\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074486723_dataframe_pandas_python.txt
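A set-based variant of the same idea, sketched on the sample rows, that reports the carried-over ids per consecutive pair of dates rather than per Order_id (it simply treats adjacent dates in the data as consecutive days):

import pandas as pd

df = pd.DataFrame({
    "Order_id": [1, 2, 3, 4, 5, 6, 2, 1, 2, 7],
    "Order_date": pd.to_datetime(["2022-11-16"] * 4 + ["2022-11-17"] * 4 + ["2022-11-18"] * 2),
})

by_day = df.groupby("Order_date")["Order_id"].agg(set).sort_index()
for prev_day, day in zip(by_day.index[:-1], by_day.index[1:]):
    carried = by_day[day] & by_day[prev_day]
    print(f"{prev_day.date()} -> {day.date()}: {len(carried)} repeated {sorted(carried)}")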
Q: Best way to groupby and join in pyspark Hi, I have two dataframes like this: df_1: id item activity 1 2 a 34 14 b 1 2 b . . . Activity has two unique values, a and b. df_2: id item activity 1 2 c 34 14 c 1 2 c Here activity has the same value c throughout. Now I want a final df where I have to groupby using id and item and get the count of unique activities from df_1 and df_2 and later join them using id and item. df_1_grp (Groupby using id and item and get count of activity frequency record): df_1_grp = df_1.groupby("id", "item").agg(f.count(f.when(f.col('activity') == 'a', 1)).alias('a'), f.count(f.when(f.col('activity') == 'b', 1)).alias('b')) id item a b 1 2 1 1 34 14 0 1 df_2_grp (Groupby using id and item and just get the count of records as all values in activity are the same): df_2_grp = df_2.groupBy("id", "item").count().select('id', 'item', f.col('count').alias('c')) id item c 1 2 2 34 14 1 And now join them to get the final df: df = df_1_grp.join(df_2_grp, on = ['id', 'item'], how = 'inner') Expected Output: id item a b c 1 2 1 1 2 34 14 0 1 1 Now, because my dataframe is very big (probably 4 TB, or 1 billion records), I'm running out of disk storage. Is there a more optimized and efficient way of doing it? Spark Config: spark_config["spark.executor.memory"] = "32G" spark_config["spark.executor.memoryOverhead"] = "32G" spark_config["spark.executor.cores"] = "32" spark_config["spark.driver.memory"] = "8G" spark_config["spark.dynamicAllocation.minExecutors"] = "200" spark_config["spark.dynamicAllocation.maxExecutors"] = "300" A: You can change the spark config to allow for more memory. Can you clarify if it is storage space you're lacking or memory when running the program? One method to reduce the memory of the spark session is to save each table to the disk before joining them. This helps runtime and memory usage (or at least it did for me). Be sure to create new spark sessions after each save. A: Join is redundant; a much more efficient way is to first union the two dataframes and then to perform groupBy (union is a cheap operation that doesn't require a shuffle). my_df = df_1.union(df_2) my_df.groupby("id", "item").agg( f.count(f.when(f.col('activity') == 'a', 1)).alias('a'), f.count(f.when(f.col('activity') == 'b', 1)).alias('b'), f.count(f.when(f.col('activity') == 'c', 1)).alias('c'), ) Also, for large table shuffles, as in your case, you need to verify you have adequate values for spark.sql.shuffle.partitions and spark.default.parallelism - at least 2000 in case of a 4TB table (default is 200).
Best way to groupby and join in pyspark
Hi, I have two dataframes like this: df_1: id item activity 1 2 a 34 14 b 1 2 b . . . Activity has two unique values, a and b. df_2: id item activity 1 2 c 34 14 c 1 2 c Here activity has the same value c throughout. Now I want a final df where I have to groupby using id and item and get the count of unique activities from df_1 and df_2 and later join them using id and item. df_1_grp (Groupby using id and item and get count of activity frequency record): df_1_grp = df_1.groupby("id", "item").agg(f.count(f.when(f.col('activity') == 'a', 1)).alias('a'), f.count(f.when(f.col('activity') == 'b', 1)).alias('b')) id item a b 1 2 1 1 34 14 0 1 df_2_grp (Groupby using id and item and just get the count of records as all values in activity are the same): df_2_grp = df_2.groupBy("id", "item").count().select('id', 'item', f.col('count').alias('c')) id item c 1 2 2 34 14 1 And now join them to get the final df: df = df_1_grp.join(df_2_grp, on = ['id', 'item'], how = 'inner') Expected Output: id item a b c 1 2 1 1 2 34 14 0 1 1 Now, because my dataframe is very big (probably 4 TB, or 1 billion records), I'm running out of disk storage. Is there a more optimized and efficient way of doing it? Spark Config: spark_config["spark.executor.memory"] = "32G" spark_config["spark.executor.memoryOverhead"] = "32G" spark_config["spark.executor.cores"] = "32" spark_config["spark.driver.memory"] = "8G" spark_config["spark.dynamicAllocation.minExecutors"] = "200" spark_config["spark.dynamicAllocation.maxExecutors"] = "300"
[ "You can change the spark config to allow for more memory. Can you clarify if it is storage space you're lacking or memory when running the program?\nOne method to reduce the memory of the spark session is to save each table to the disk before joining them. This helps runtime and memory usage (or at least it did for me). Be sure to create new spark sessions after each save.\n", "Join is redundant; a much more efficient way is to first union the two dataframes and then to perform groupBy (union is a cheap operation that doesn't require a shuffle).\nmy_df = df_1.union(df_2)\nmy_df.groupby(\"id\", \"item\").agg(\n    f.count(f.when(f.col('activity') == 'a', 1)).alias('a'),\n    f.count(f.when(f.col('activity') == 'b', 1)).alias('b'),\n    f.count(f.when(f.col('activity') == 'c', 1)).alias('c'),\n)\n\nAlso, for large table shuffles, as in your case, you need to verify you have adequate values for spark.sql.shuffle.partitions and spark.default.parallelism - at least 2000 in case of a 4TB table (default is 200).\n" ]
[ 0, 0 ]
[]
[]
[ "apache_spark", "join", "pyspark", "python", "scala" ]
stackoverflow_0074483461_apache_spark_join_pyspark_python_scala.txt
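The same union-then-aggregate idea can also be written with pivot, which avoids spelling out one count per activity; a sketch with tiny inline frames (listing the pivot values up front spares Spark an extra pass to discover them):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df_1 = spark.createDataFrame([(1, 2, "a"), (34, 14, "b"), (1, 2, "b")], ["id", "item", "activity"])
df_2 = spark.createDataFrame([(1, 2, "c"), (34, 14, "c"), (1, 2, "c")], ["id", "item", "activity"])

result = (df_1.unionByName(df_2)
              .groupBy("id", "item")
              .pivot("activity", ["a", "b", "c"])
              .count()
              .na.fill(0))
result.show()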
Q: Error in installing aria2c in python while installing Windows VM I'm trying to install a Windows VM in accordance with the instructions https://sugary-selenium-eb9.notion.site/Power-BI-installation-guide-for-Mac-ed07be30d6b94cf2ad9325dddd38d9d3 I have a Mac with an Intel chip. I enter the following commands: chmod +x uup_download_macos.sh ./uup_download_macos.sh And I get this error: aria2c does not seem to be installed Check the readme.unix.md for details I can't find aria2c on the official Python site. How can I solve it? I tried pip install aria2c brew tap aria2c A: You may need to install Homebrew (via Terminal): Enter /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" then press enter/return Run echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile Run brew install aria2 cabextract wimlib cdrtools sidneys/homebrew/chntpw Here's a reference up until #2
Error in installing aria2c in python while installing Windows VM
I'm trying to install a Windows VM in accordance with the instructions https://sugary-selenium-eb9.notion.site/Power-BI-installation-guide-for-Mac-ed07be30d6b94cf2ad9325dddd38d9d3 I have a Mac with an Intel chip. I enter the following commands: chmod +x uup_download_macos.sh ./uup_download_macos.sh And I get this error: aria2c does not seem to be installed Check the readme.unix.md for details I can't find aria2c on the official Python site. How can I solve it? I tried pip install aria2c brew tap aria2c
[ "You may need to install Homebrew (via Terminal):\n\nEnter /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\nthen press enter/return\nRun echo 'eval \"$(/opt/homebrew/bin/brew shellenv)\"' >> ~/.zprofile\nRun brew install aria2 cabextract wimlib cdrtools sidneys/homebrew/chntpw\n\nHere's a reference up until #2\n" ]
[ 0 ]
[]
[]
[ "homebrew", "python", "virtual_machine" ]
stackoverflow_0074373316_homebrew_python_virtual_machine.txt
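After installing, the quickest sanity check is to confirm the binaries are actually on PATH before re-running the script. A small Python sketch; the exact tool names are an assumption based on what the brew command above installs:

import shutil

for tool in ["aria2c", "cabextract", "wimlib-imagex", "chntpw"]:
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'MISSING from PATH'}")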
Q: How to show image in grayscale For some reason this isn't working. I may be making a silly mistake somewhere; please help. # importing modules import urllib.request import matplotlib.pyplot as plt import matplotlib.cm as cm import numpy as np from PIL import Image # download the Mona Lisa image urllib.request.urlretrieve( 'https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg/1024px-Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg', "Mona_Lisa.png") # open the file img = Image.open("/content/Mona_Lisa.png") # convert from rgba to rgb rgb_image = img.convert('RGB') rgb_image_rgb = np.array(rgb_image) # show image plt.imshow(rgb_image_rgb, cmap = cm.Greys_r) A: Have you tried this answer? How can I convert an RGB image into grayscale in Python? from PIL import Image img = Image.open('image.png').convert('L') img.save('greyscale.png') A: You can convert the image to grayscale using PIL.Image.convert: img = Image.open("/content/Mona_Lisa.png").convert("L")
How to show image in grayscale
For some reason this isn't working. I may be making a silly mistake somewhere; please help. # importing modules import urllib.request import matplotlib.pyplot as plt import matplotlib.cm as cm import numpy as np from PIL import Image # download the Mona Lisa image urllib.request.urlretrieve( 'https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg/1024px-Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg', "Mona_Lisa.png") # open the file img = Image.open("/content/Mona_Lisa.png") # convert from rgba to rgb rgb_image = img.convert('RGB') rgb_image_rgb = np.array(rgb_image) # show image plt.imshow(rgb_image_rgb, cmap = cm.Greys_r)
[ "Have you tried this answer?\nHow can I convert an RGB image into grayscale in Python?\nfrom PIL import Image\nimg = Image.open('image.png').convert('L')\nimg.save('greyscale.png')\n\n", "You can convert the image to grayscale using PIL.Image.convert:\nimg = Image.open(\"/content/Mona_Lisa.png\").convert(\"L\")\n\n" ]
[ 3, 1 ]
[]
[]
[ "image", "matplotlib", "python" ]
stackoverflow_0074487021_image_matplotlib_python.txt
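The underlying gotcha in this record: matplotlib's imshow applies a colormap only to 2-D arrays; an RGB array of shape (H, W, 3) is shown as-is and the cmap argument is silently ignored. A minimal sketch of the difference (file name taken from the question):

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = Image.open("Mona_Lisa.png")
rgb = np.array(img.convert("RGB"))  # (H, W, 3): cmap is ignored
gray = np.array(img.convert("L"))   # (H, W):    cmap applies

fig, axes = plt.subplots(1, 2)
axes[0].imshow(rgb)                 # full colour, despite any cmap
axes[1].imshow(gray, cmap="gray")   # true grayscale rendering
plt.show()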
Q: Pandas dataframe row counts to plt incrementally after every 4 consecutive rows I am trying to assign an ID to a pandas dataframe based on row count. For this I am trying to apply the below logic to the pandas dataframe: num = df.shape[0] for i in range(num): print(math.ceil(i/4)) So the idea is that for every 4 consecutive rows, an ID would be assigned. So the resultant dataframe would look like col_1 Group_ID v_1 1 v_2 1 v_3 1 v_4 1 v_5 2 v_6 2 v_7 2 v_8 2 v_9 3 v_10 3 --- And so on. Just a quick thought: how can I use the apply function on df.index? Can I use the below code? df['Index'] = df.index df['GroupID'] = df['Index'].apply(np.ceil) Any hints? A: You can pass a function to apply, so create a named function and pass it def everyFour(rowIdx): return math.ceil(rowIdx / 4) df['GroupId'] = df['Index'].apply(everyFour) or just use a lambda df['GroupId'] = df['Index'].apply(lambda rowIdx: math.ceil(rowIdx / 4)) Note that this will leave the first row with index 0 at 0, so you might want to add 1 to the rowIndex before dividing by 4.
Pandas dataframe row counts to plt incrementally after every 4 consecutive rows
I am trying to assign an ID to a pandas dataframe based on row count. For this I am trying to apply the below logic to the pandas dataframe:
num = df.shape[0]
for i in range(num):
    print(math.ceil(i/4))

So the idea is that for every 4 consecutive rows, an ID would be assigned. So the resultant dataframe would look like
col_1 Group_ID
v_1      1
v_2      1
v_3      1
v_4      1
v_5      2
v_6      2
v_7      2
v_8      2
v_9      3
v_10     3
---

And so on.
Just a quick thought. How can I use the apply function on df.index? Can I use the below code?
df['Index'] = df.index
df['GroupID'] = df['Index'].apply(np.ceil)

Any hints?
[ "You can pass a function to apply, so create a named function and pass it\ndef everyFour(rowIdx):\n return math.ceil(rowIdx / 4)\n\ndf['GroupId'] = df['Index'].apply(everyFour)\n\nor just use a lambda\ndf['GroupId'] = df['Index'].apply(lambda rowIdx: math.ceil(rowIdx / 4))\n\nNote that this will leave the first row with index 0 at 0, so you might want to add 1 to the rowIndex before dividing by 4.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074486976_python.txt
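For the group-ID question above, the apply-based answer works, but on large frames a vectorized integer division over the row positions is simpler and much faster. A sketch, assuming a default RangeIndex:

import numpy as np
import pandas as pd

df = pd.DataFrame({"col_1": [f"v_{i}" for i in range(1, 11)]})

# Row positions 0-3 map to 1, 4-7 map to 2, 8-11 map to 3, and so on.
df["Group_ID"] = np.arange(len(df)) // 4 + 1
print(df)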
Q: Python OS - check if file exists, if so rename, check again, then save I have a script that takes a file from a form, renames it, uploads it to a folder and inserts a record into a database. I would like to add functionality where, before the file is saved, it checks the upload folder to determine if the filename exists. If it does exist, it renames the file in a loop and then saves the file.
What I have currently:
file = request.files['xx']
extension = os.path.splitext(file.filename)[1]

xx = str(uuid.uuid4()) + extension

## if xx exists .. xx = str(uuid.uuid4()) + extension.. loop endlessly.

file.save(os.path.join(app.config['UPLOAD_FOLDER'], xx))

A: Haven't tested this yet but you can use os.path.isfile() to check if a file already exists (for directories, use os.path.exists).
import os

def save():
    file = request.files['xx']
    extension = os.path.splitext(file.filename)[1]

    xx = generate_filename(extension)

    file.save(os.path.join(app.config['UPLOAD_FOLDER'], xx))

def generate_filename(extension):
    xx = str(uuid.uuid4()) + extension
    if os.path.isfile(os.path.join(app.config['UPLOAD_FOLDER'], xx)):
        return generate_filename(extension)
    return xx

A: Quick and dirty, haven't tested this. Uses the check-and-rename function recursively to add "_1", "_2" etc to the end of the file name until it can be saved.
def check_and_rename(file, add=0):
    original_file = file
    if add != 0:
        split = file.split(".")
        part_1 = split[0] + "_" + str(add)
        file = ".".join([part_1, split[1]])
    if not os.path.isfile(file):
        pass  # save here
    else:
        check_and_rename(original_file, add + 1)

A: This will check if a file exists and generate a new name that does not exist by increasing a number:
from os import path

def check_file(filePath):
    if path.exists(filePath):
        numb = 1
        while True:
            newPath = "{0}_{2}{1}".format(*path.splitext(filePath) + (numb,))
            if path.exists(newPath):
                numb += 1
            else:
                return newPath
    return filePath

A: Have you tried to use the glob Module, it provides an interface similar to ls, you can use it as it follows:
import os
import glob
file_list = glob.glob('my_file')
if len(file_list) > 0:
    os.rename('my_file', 'new_name')

A: if not os.path.isfile(xx):
    file.save(os.path.join(app.config['UPLOAD_FOLDER'], xx))
else:
    print("File already exists")

A: Improving on N.Walters' answer, so you have a function that just takes the file_path and gives you a valid one back, using the standard library Path class:
import os
from pathlib import Path

def check_and_rename(file_path: Path, add: int = 0) -> Path:
    original_file_path = file_path
    if add != 0:
        file_path = file_path.with_stem(file_path.stem + "_" + str(add))
    if not os.path.isfile(file_path):
        return file_path
    else:
        return check_and_rename(original_file_path, add + 1)
Python OS - check if file exists, if so rename, check again, then save
I have a script that takes a file from a form, renames it, uploads it to a folder and inserts a record into a database. I would like to add functionality where, before the file is saved, it checks the upload folder to determine if the filename exists. If it does exist, it renames the file in a loop and then saves the file.
What I have currently:
file = request.files['xx']
extension = os.path.splitext(file.filename)[1]

xx = str(uuid.uuid4()) + extension

## if xx exists .. xx = str(uuid.uuid4()) + extension.. loop endlessly.

file.save(os.path.join(app.config['UPLOAD_FOLDER'], xx))
[ "Haven't tested this yet but you can use os.path.isfile() to check if a file already exists (for directories, use os.path.exists).\nimport os\n\ndef save():\n file = request.files['xx']\n extension = os.path.splitext(file.filename)[1]\n\n xx = generate_filename(extension)\n\n file.save(os.path.join(app.config['UPLOAD_FOLDER'], xx))\n\ndef generate_filename(extension):\n xx = str(uuid.uuid4()) + extension\n if os.path.isfile(os.path.join(app.config['UPLOAD_FOLDER'], xx)):\n return generate_filename(extension)\n return xx\n\n", "quick and dirty, haven't tested this. using the check and rename function recursively to add \"_1\", \"_2\" etc to the end of the file name until it can be saved.\ndef check_and_rename(file, add=0):\n original_file = file\n if add != 0:\n split = file.split(\".\")\n part_1 = split[0] + \"_\" + str(add)\n file = \".\".join([part1, split[1]])\n if not os.path.isfile(file):\n # save here\n else:\n check_and_rename(original_file, add+=1)\n\n", "This will check if a file exist and generate a new name that does not exist by increasing a number:\nfrom os import path\n\ndef check_file(filePath):\n if path.exists(filePath):\n numb = 1\n while True:\n newPath = \"{0}_{2}{1}\".format(*path.splitext(filePath) + (numb,))\n if path.exists(newPath):\n numb += 1\n else:\n return newPath\n return filePath\n\n", "Have you tried to use the glob Module, it provides an interface similar to ls, you can use it as it follows:\nimport os\nimport glob\nfile_list = glob.glob('my_file')\nif len(file_list) > 0:\n os.rename('my_file', 'new_name')\n\n", "if not os.path.isfile(xx):\n file.save(os.path.join(app.config['UPLOAD_FOLDER'], xx)\nelse:\n print(\"File does not exist\")\n\n", "Improving on N.Walters answer, but so you have a function that just parses the file_path and gives you a valid one back and using the internal Path class:\nimport os\nfrom pathlib import Path\n\ndef check_and_rename(file_path: Path, add: int = 0) -> Path:\n original_file_path = file_path\n if add != 0:\n file_path = file_path.with_stem(file_path.stem + \"_\" + str(add))\n if not os.path.isfile(file_path):\n return file_path\n else:\n return check_and_rename(original_file_path, add + 1)\n\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0043107577_python_python_3.x.txt
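An iterative variant of the rename-until-free idea from the answers above, which avoids unbounded recursion; the names here are illustrative. Note the usual caveat that a check-then-save sequence is racy if another process writes to the same folder (open(path, "x") gives an atomic exclusive-creation guarantee if you need one):

import os

def unique_path(folder, filename):
    # Append _1, _2, ... to the stem until the name is unused.
    stem, ext = os.path.splitext(filename)
    candidate = os.path.join(folder, filename)
    counter = 1
    while os.path.exists(candidate):
        candidate = os.path.join(folder, f"{stem}_{counter}{ext}")
        counter += 1
    return candidate

print(unique_path("uploads", "report.pdf"))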
Q: discord.errors.NotFound: 404 Not Found (error code: 0): 404: Not Found I have this code:
class Adduser(discord.ui.Modal, title="add"):
    answer = ui.TextInput(label = "ID de l'utilisateur à ajouter au ticket", style = discord.TextStyle.short, placeholder = "ID", required= True)
    def __init__(self, channel):
        self.channel = channel
        super().__init__(timeout=None)

    async def on_submit(self, interaction: discord.Interaction, user=answer):
        member = await interaction.guild.fetch_member(user)
        if "ticket-de-" in interaction.channel.name:
            await interaction.channel.set_permissions(member, view_channel=True, send_messages=True, attach_files=True, embed_links=True)

but I have this error:
2022-11-18 08:35:47 ERROR    discord.ui.modal Ignoring exception in modal <Adduser timeout=None children=1>:
Traceback (most recent call last):
  File "C:\Users\Distool - User\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ui\modal.py", line 186, in _scheduled_task
    await self.on_submit(interaction)
  File "C:\Users\Distool - User\distool\env\scripts\bot.py", line 422, in on_submit
    member = await interaction.guild.fetch_member(user)
  File "C:\Users\Distool - User\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\guild.py", line 2094, in fetch_member
    data = await self._state.http.get_member(self.id, member_id)
  File "C:\Users\Distool - User\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\http.py", line 740, in request
    raise NotFound(response, data)
discord.errors.NotFound: 404 Not Found (error code: 0): 404: Not Found

Can someone help me? The user is on the server and I have tried copying his ID for the test, but the client doesn't find him.
A: Got it now.
You are using the Modal in the wrong way. You only get interaction being passed. Also, the answer variable contains a TextInput instance, not the entered value. You could just pass int(user.value) to fetch_member(), but that wouldn't be proper Python syntax.
Instead use
async def on_submit(self, interaction: discord.Interaction):
    member = await interaction.guild.fetch_member(int(self.answer.value))
    # Rest of your code goes here

You should add error handling in case the user isn't found or an invalid ID was entered.
discord.errors.NotFound: 404 Not Found (error code: 0): 404: Not Found
I have this code:
class Adduser(discord.ui.Modal, title="add"):
    answer = ui.TextInput(label = "ID de l'utilisateur à ajouter au ticket", style = discord.TextStyle.short, placeholder = "ID", required= True)
    def __init__(self, channel):
        self.channel = channel
        super().__init__(timeout=None)

    async def on_submit(self, interaction: discord.Interaction, user=answer):
        member = await interaction.guild.fetch_member(user)
        if "ticket-de-" in interaction.channel.name:
            await interaction.channel.set_permissions(member, view_channel=True, send_messages=True, attach_files=True, embed_links=True)

but I have this error:
2022-11-18 08:35:47 ERROR    discord.ui.modal Ignoring exception in modal <Adduser timeout=None children=1>:
Traceback (most recent call last):
  File "C:\Users\Distool - User\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ui\modal.py", line 186, in _scheduled_task
    await self.on_submit(interaction)
  File "C:\Users\Distool - User\distool\env\scripts\bot.py", line 422, in on_submit
    member = await interaction.guild.fetch_member(user)
  File "C:\Users\Distool - User\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\guild.py", line 2094, in fetch_member
    data = await self._state.http.get_member(self.id, member_id)
  File "C:\Users\Distool - User\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\http.py", line 740, in request
    raise NotFound(response, data)
discord.errors.NotFound: 404 Not Found (error code: 0): 404: Not Found

Can someone help me? The user is on the server and I have tried copying his ID for the test, but the client doesn't find him.
[ "Got it now.\nYou are using the Modal in the wrong way. You only get interaction being passed. Also, the answer variable contains a modal instance. You could just pass int(user.value) to fetch_member() but that wouldn‘t be proper python syntax\ninstead use\nasync def on_submit(self, interaction: discord.Interaction):\n member = await interaction.guild.fetch_member(int(self.answer.value))\n # Rest of your code goes here\n\n\nYou should add error handling in case the user isnt found or a not valid ID was entered\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074486433_discord.py_python.txt
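Putting the accepted answer's advice together, a hedged sketch of the corrected modal (discord.py 2.x assumed; the response strings are made up for illustration):

import discord
from discord import ui

class Adduser(discord.ui.Modal, title="add"):
    answer = ui.TextInput(label="ID", style=discord.TextStyle.short, required=True)

    async def on_submit(self, interaction: discord.Interaction):
        try:
            member = await interaction.guild.fetch_member(int(self.answer.value))
        except (ValueError, discord.NotFound):
            # Entered text was not a number, or no member has that ID.
            await interaction.response.send_message("User not found.", ephemeral=True)
            return
        if "ticket-de-" in interaction.channel.name:
            await interaction.channel.set_permissions(member, view_channel=True, send_messages=True)
            await interaction.response.send_message(f"Added {member.mention}.", ephemeral=True)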
Q: Positional Only Parameter Syntax '/' in Google Colaboratory Firstly, I'm using Python 3.11.0 in Colab. When I use / to make parameters positional-only, a SyntaxError is raised. However, when I use * for keyword-only parameters, no error is raised. Does anyone know why this happens?
# SyntaxError is raised
def foo(x, y, /):
    return x + y

  File "<ipython-input-28-57597574dc0a>", line 1
    def foo(x, y, /):
                  ^
SyntaxError: invalid syntax

# This was ok
def foo(*, x, y):
    return x + y

A: Python introduced this function syntax in version 3.8 (PEP 570). Parameters that come before the / forward slash are positional-only, and parameters that come after * are keyword-only. The rest of the arguments, which come between / and *, can be passed either positionally or by keyword.
Google Colab version:

A: I tried with a Python 3.10 kernel and it works fine.
See this notebook.
https://colab.research.google.com/drive/1rNF0KhX2A2UwazoYuUGeikCJm2YGxcgg
Positional Only Parameter Syntax '/' in Google Colaboratory
Firstly, I'm using Python 3.11.0 in Colab. When I use / to make parameters positional-only, a SyntaxError is raised. However, when I use * for keyword-only parameters, no error is raised. Does anyone know why this happens?
# SyntaxError is raised
def foo(x, y, /):
    return x + y

  File "<ipython-input-28-57597574dc0a>", line 1
    def foo(x, y, /):
                  ^
SyntaxError: invalid syntax

# This was ok
def foo(*, x, y):
    return x + y
[ "Python introduces the new function syntax in Python3.8.2 Version, Where we can introduce the / forward slash to compare the positional only parameter which comes before the / slash and parameters that comes after * is keyword only arguments. Rest of the arguments that are come between / and * can be either positional or keyword type of argument.\nGoogle colab version :\n\n", "I try with Python 3.10 kernel and it works fine.\nSee this notebook.\nhttps://colab.research.google.com/drive/1rNF0KhX2A2UwazoYuUGeikCJm2YGxcgg\n" ]
[ 0, 0 ]
[]
[]
[ "google_colaboratory", "python" ]
stackoverflow_0074486841_google_colaboratory_python.txt
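On the Colab question above: the / syntax was added in Python 3.8 (PEP 570), so a SyntaxError usually means the notebook kernel is still running an older interpreter, even if a newer Python was installed into the environment. A quick check, as a sketch:

import sys
print(sys.version_info)   # the kernel's interpreter is what parses your cells

def foo(x, y, /):
    return x + y

print(foo(1, 2))   # fine
# foo(x=1, y=2)    # would raise TypeError: positional-only arguments passed as keywords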
Q: Assign Pyspark window function the same uuid for each window I have a Pyspark dataframe as such:
data = [
 {"master_record_id": "001-0073-683496", 'dob': datetime.date(2000, 1, 1), "patient_ssn": "123456789", "dodid": "1234567891", "dqi_id":"123", "site_id":700},
 {"master_record_id": "001-0013-101321", 'dob': datetime.date(2000, 1, 1), "patient_ssn": "123456789", "dodid": "1234567891", "dqi_id":"123", "site_id":701},
 {"master_record_id": "001-0046-2845712", 'dob': datetime.date(1999, 2, 3), "patient_ssn": "987654321", "dodid": "0987654322", "dqi_id":None, "site_id":775},
 {"master_record_id": "001-0048-2845712", 'dob': datetime.date(1999, 2, 3), "patient_ssn": "987654321", "dodid": "0987654322", "dqi_id":None, "site_id":775}]

df = spark.createDataFrame(data=data)

I want to be able to assign a uuid for records that share the same dodid, patient_ssn, and dob using a window function. At the moment I have a working solution, but it does not scale to millions of records. I believe what is slowing it down is looping through clusters and creating several dataframes. Is there a way to assign a uuid directly in the Window function?
My working but inefficient solution below:
# Filter out any records with null dob/patient_ssn/dodid
df = df.filter("dob is NOT NULL AND patient_ssn is NOT NULL AND dodid is NOT NULL")

# Create a cluster id based on dob/patient_ssn/dodid
window = Window.orderBy(["dob","patient_ssn","dodid"])
df = df.withColumn("cluster_id",lit(f.dense_rank().over(window)-1))

cluster_list = set(df.select("cluster_id").rdd.flatMap(lambda x: x).collect())

df_list = []

# Iterate through clusters assigning uuid to each cluster. Each cluster will now be a new dataframe
for cluster in cluster_list:
    temp = df.filter(col("cluster_id")==cluster)
    df_list.append(temp.withColumn("uuid",f.lit(str(uuid.uuid1()))))

# Union all dataframes from df_list
df = reduce(DataFrame.unionAll,df_list)

A: I don't see the reason to use Window function.
How about concatenating all three "key" columns and then running md5?
.withColumn('uuid',md5(concat('dodid','patient_ssn', 'dob' )))
Assign Pyspark window function the same uuid for each window
I have a Pyspark dataframe as such:
data = [
 {"master_record_id": "001-0073-683496", 'dob': datetime.date(2000, 1, 1), "patient_ssn": "123456789", "dodid": "1234567891", "dqi_id":"123", "site_id":700},
 {"master_record_id": "001-0013-101321", 'dob': datetime.date(2000, 1, 1), "patient_ssn": "123456789", "dodid": "1234567891", "dqi_id":"123", "site_id":701},
 {"master_record_id": "001-0046-2845712", 'dob': datetime.date(1999, 2, 3), "patient_ssn": "987654321", "dodid": "0987654322", "dqi_id":None, "site_id":775},
 {"master_record_id": "001-0048-2845712", 'dob': datetime.date(1999, 2, 3), "patient_ssn": "987654321", "dodid": "0987654322", "dqi_id":None, "site_id":775}]

df = spark.createDataFrame(data=data)

I want to be able to assign a uuid for records that share the same dodid, patient_ssn, and dob using a window function. At the moment I have a working solution, but it does not scale to millions of records. I believe what is slowing it down is looping through clusters and creating several dataframes. Is there a way to assign a uuid directly in the Window function?
My working but inefficient solution below:
# Filter out any records with null dob/patient_ssn/dodid
df = df.filter("dob is NOT NULL AND patient_ssn is NOT NULL AND dodid is NOT NULL")

# Create a cluster id based on dob/patient_ssn/dodid
window = Window.orderBy(["dob","patient_ssn","dodid"])
df = df.withColumn("cluster_id",lit(f.dense_rank().over(window)-1))

cluster_list = set(df.select("cluster_id").rdd.flatMap(lambda x: x).collect())

df_list = []

# Iterate through clusters assigning uuid to each cluster. Each cluster will now be a new dataframe
for cluster in cluster_list:
    temp = df.filter(col("cluster_id")==cluster)
    df_list.append(temp.withColumn("uuid",f.lit(str(uuid.uuid1()))))

# Union all dataframes from df_list
df = reduce(DataFrame.unionAll,df_list)
[ "I don't see the reason to use Window function.\nHow about concatenating all three \"key\" columns and then running md5?\n.withColumn('uuid',md5(concat('dodid','patient_ssn', 'dob' )))\n" ]
[ 0 ]
[]
[]
[ "group", "pyspark", "python", "uuid", "window" ]
stackoverflow_0074483910_group_pyspark_python_uuid_window.txt
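A sketch of the hash-based idea from the answer above, spelled out with explicit imports. md5 over the concatenated keys yields a deterministic 32-character hex ID per (dodid, patient_ssn, dob) group rather than a true time-based uuid1, which is usually what deduplication actually needs; concat_ws with a separator avoids accidental collisions like "1"+"23" versus "12"+"3":

from pyspark.sql import functions as F

df = df.filter("dob IS NOT NULL AND patient_ssn IS NOT NULL AND dodid IS NOT NULL")

df = df.withColumn(
    "uuid",
    F.md5(F.concat_ws("|", F.col("dodid"), F.col("patient_ssn"), F.col("dob").cast("string")))
)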
Q: What is the difference between these two join methods in threading in Python? I want to use the threading package to calculate the square of num and my code is like,
import threading
def my_squr(num): #if this function takes a long time to run
    print(num*num)
    return num*num

if __name__ == "__main__":
    l1 = [1,3,5,7,11,13,15,17]
    for i, item in enumerate(l1):
        if i % 3 == 0:
            t1 = threading.Thread(target=my_squr, args=(item,))
            t1.start()
            t1.join()
        elif i % 3 == 1:
            t2 = threading.Thread(target=my_squr, args=(item,))
            t2.start()
            t2.join()
        else:
            t3 = threading.Thread(target=my_squr, args=(item,))
            t3.start()
            t3.join()

    # t1.join()
    # t2.join()
    # t3.join()

    print("Done")

However, I am confused about where I should put the join() method. Although they both give the same answer, I guess there are some differences between them.
A: If you join immediately after starting a thread, the main thread waits until it finishes. However, this is no different from calling the function normally in the main thread. Assume the function takes a while to run and you need to process several calls at the same time.
Then you can start them all and uncomment the joins.
This is your current code snippet's workflow
->Create thread x and start
->wait for finish of thread x
->Create thread y and start
->wait for finish of thread y
... and so on.

However if you change the comments of the joins this is the new workflow
->Create thread x and start
->Create thread y and start
->Create thread z and start
... 

at the end
->wait thread x to finish
->wait thread y to finish
...

So here even while you are waiting for X to finish, your other threads like y and z are still processing whatever you are doing inside.
EDIT:
You should remove the joins that come right after each start and uncomment the joins at the end. That would be more appropriate. Also, as processors are fast enough to complete simple math in a millisecond, you will not experience any difference here.
Where you should use the joins is completely dependent on your program.
For your situation, joining at the end would probably be best.
However, assume you will have another thread called X_2 that will use the result of thread X_1. Then before creating thread X_2, you should join thread X_1.
A: You could construct a list of the threads then join them all once your loop terminates:
import threading
def my_squr(num): #if this function takes a long time to run
    print(num*num)
    return num*num

if __name__ == "__main__":
    threads = []
    l1 = [1,3,5,7,11,13,15,17]
    for i, item in enumerate(l1):
        if i % 3 == 0:
            t1 = threading.Thread(target=my_squr, args=(item,))
            t1.start()
            threads.append(t1)
        elif i % 3 == 1:
            t2 = threading.Thread(target=my_squr, args=(item,))
            t2.start()
            threads.append(t2)
        else:
            t3 = threading.Thread(target=my_squr, args=(item,))
            t3.start()
            threads.append(t3)

    for thread in threads:
        thread.join()

    print("Done")

A: Join simply stops the application from ending before the thread completes.
So you want to join threads AFTER they have started.
import threading
def my_squr(num): #if this function takes a long time to run
    print(num*num)
    return num*num

if __name__ == "__main__":
    threads = list()
    l1 = [1,3,5,7,11,13,15,17]
    for i, item in enumerate(l1):
        if i % 3 == 0:
            t1 = threading.Thread(target=my_squr, args=(item,))
            threads.append(t1)
            t1.start()
        elif i % 3 == 1:
            t2 = threading.Thread(target=my_squr, args=(item,))
            threads.append(t2)
            t2.start()
        else:
            t3 = threading.Thread(target=my_squr, args=(item,))
            threads.append(t3)
            t3.start()

    for t in threads:
        t.join()

    print("Done")
What is the difference between these two join methods in threading in Python?
I want to use the threading package to calculate the square of num and my code is like,
import threading
def my_squr(num): #if this function takes a long time to run
    print(num*num)
    return num*num

if __name__ == "__main__":
    l1 = [1,3,5,7,11,13,15,17]
    for i, item in enumerate(l1):
        if i % 3 == 0:
            t1 = threading.Thread(target=my_squr, args=(item,))
            t1.start()
            t1.join()
        elif i % 3 == 1:
            t2 = threading.Thread(target=my_squr, args=(item,))
            t2.start()
            t2.join()
        else:
            t3 = threading.Thread(target=my_squr, args=(item,))
            t3.start()
            t3.join()

    # t1.join()
    # t2.join()
    # t3.join()

    print("Done")

However, I am confused about where I should put the join() method. Although they both give the same answer, I guess there are some differences between them.
[ "If you immediately join after started an thread, it means that wait until it executes. However, this isn't different than calling function normally inside main thread. Assume that functions works takes a bit of time and you need to process them at the same time.\nThen you can start them and uncomment joins.\nThis is your current code snippet workflow\n->Create thread x and start\n->wait for finish of thread x\n->Create thread y and start\n->wait for finish of thread y\n... and so on.\n\nHowever if you change comments of joins this is new workflow\n->Create thread x and start\n->Create thread y and start\n->Create thread z and start\n... \n\nat the end\n->wait thread x to finish\n->wait thread y to finish\n...\n\nSo here even when you are waiting to finish X, your other threads like y and z still processing whatever you are doing inside.\nEDIT:\nYou should remove the joins where right after the start and uncomment threads that are been in the end. That would be a more appropriate. Also, as processors are fast enough to complete simple math just in a millisecond, you will not experience any difference.\nWhere u should use the joins is completely dependent on your program.\nFor your situation using at the end probably would be the best.\nHowever, assume you will have another thread called X_2 that will use result of thread X_1. Then before creating thread X_2, you should join thread X_1.\n", "You could construct a list of the threads then join them all once your loop terminates:\nimport threading\ndef my_squr(num): #if this function take long time to run\n print(num*num)\n return num*num\n\nif __name__ == \"__main__\":\n threads = []\n l1 = [1,3,5,7,11,13,15,17]\n for i, item in enumerate(l1):\n if i % 3 == 0:\n t1 = threading.Thread(target=my_squr, args=(item,))\n t1.start()\n threads.append(t1)\n elif i % 3 == 1:\n t2 = threading.Thread(target=my_squr, args=(item,))\n t2.start()\n threads.append(t2)\n else:\n t3 = threading.Thread(target=my_squr, args=(item,))\n t3.start()\n threads.append(t3)\n\n for thread in threads:\n thread.join()\n\n print(\"Done\")\n\n", "Join simply stops the application from ending before the thread completes.\nSo you want to join threads AFTER they have started.\nimport threading\ndef my_squr(num): #if this function take long time to run\n print(num*num)\n return num*num\n\nif __name__ == \"__main__\":\n threads = list()\n l1 = [1,3,5,7,11,13,15,17]\n for i, item in enumerate(l1):\n if i % 3 == 0:\n t1 = threading.Thread(target=my_squr, args=(item,))\n threads.append(t1)\n t1.start()\n elif i % 3 == 1:\n t2 = threading.Thread(target=my_squr, args=(item,))\n threads.append(t2)\n t2.start()\n else:\n t3 = threading.Thread(target=my_squr, args=(item,))\n threads.append(t3)\n t3.start()\n\n for t in threads:\n t,join()\n\n print(\"Done\")\n\n" ]
[ 3, 1, 1 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0074486815_multithreading_python.txt
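The start-all-then-join-all pattern from the answers above is exactly what concurrent.futures packages up; a sketch:

from concurrent.futures import ThreadPoolExecutor

def my_squr(num):
    print(num * num)
    return num * num

if __name__ == "__main__":
    l1 = [1, 3, 5, 7, 11, 13, 15, 17]
    # map starts the work across the pool; leaving the with-block
    # implicitly joins every worker, like the joins at the end.
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(my_squr, l1))
    print("Done", results)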
Q: Shuffle list values into sublists of 19 values each I have a large list of around 200 values. The list looks like this:
list_ids = [10148, 10149, 10150, 10151, 10152, 10153, 10154, 10155, 10156, 10157, 10158, 10159, 10160, 10161, 10163, 10164, 10165, 10167, 10168, 10169, 10170, 10171, 10172, 10173, 10174, 10175, 10177, 10178, 10179, 10180, 10181, 10182, 10183, 7137, 7138, 7139, 7142, 7143, 7148, 7150, 7151, 7152, 7153, 7155, 7156, 7157, 9086, 9087, 9088, 9089, 9090, 9091, 9094, 9095, 9096, 9097, 2164]

I would like to shuffle this list and split it into sublists of 19 values each.
I tried:
list_ids.sort(key=lambda list_ids, r={b: random.random() for a, b in list_ids}: r[list_ids[1]])

But it didn't work. Looks like I am missing something.
The end result should be sublists of shuffled values, 19 values each.
A: you can shuffle the list with random.shuffle:
import random

# shuffles list in place
random.shuffle(list_ids)

#split into lists containing 19 elements
splits = list([list_ids[i:i+19] for i in range(0,len(list_ids),19)])

A: import random

s = 19
random.shuffle(list_ids)
sub_lists = [list_ids[s*i:s*(i+1)] for i in range(len(list_ids) // s)]

A: Convert to pandas series and get a sample of size 19:
import pandas as pd

ids = pd.Series(list_ids)
ids.sample(19).values

for random numbers between 0 and 1:
import random
random.shuffle(list_ids)
result = {}
for i in list_ids:
    result[i] = [random.random() for x in range(19)]
result

for random numbers from the original list:
import random
random.shuffle(list_ids)
result = {}
for i in list_ids:
    result[i] = [ids.sample(19).values]
result
Shuffle list values into sublists of 19 values each
I have a large list of around 200 values. The list looks like this:
list_ids = [10148, 10149, 10150, 10151, 10152, 10153, 10154, 10155, 10156, 10157, 10158, 10159, 10160, 10161, 10163, 10164, 10165, 10167, 10168, 10169, 10170, 10171, 10172, 10173, 10174, 10175, 10177, 10178, 10179, 10180, 10181, 10182, 10183, 7137, 7138, 7139, 7142, 7143, 7148, 7150, 7151, 7152, 7153, 7155, 7156, 7157, 9086, 9087, 9088, 9089, 9090, 9091, 9094, 9095, 9096, 9097, 2164]

I would like to shuffle this list and split it into sublists of 19 values each.
I tried:
list_ids.sort(key=lambda list_ids, r={b: random.random() for a, b in list_ids}: r[list_ids[1]])

But it didn't work. Looks like I am missing something.
The end result should be sublists of shuffled values, 19 values each.
[ "you can shuffle the list with random.shuffle:\nimport random\n\n# shuffles list in place\nrandom.shuffle(list_ids)\n\n#split into lists containg 19 elements\nsplits = list([list_ids[i:i+19] for i in range(0,len(list_ids),19)])\n\n", "import random\n\ns = 19\nrandom.shuffle(list_ids)\nsub_lists = [list_ids[s*i:s*(i+1)] for i in range(len(list_ids) // s)]\n\n", "Convert to pandas series and get a sample of size 19:\nimport pandas as pd\n\nids = pd.Series(list_ids)\nids.sample(19).values\n\nfor random numbers between 0 and 1:\nimport random\nrandom.shuffle(list_ids)\nresult = {}\nfor i in list_ids:\n result[i] = [random.random() for x in range(19)]\nresult\n\nfor random numbers from the original list:\nimport random\nrandom.shuffle(list_ids)\nresult = {}\nfor i in list_ids:\n result[i] = [ids.sample(19).values]\nresult\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074487051_python.txt
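One detail worth flagging on the chunking answers above: iterating range(len(list_ids) // 19) drops any leftover elements, while slicing with a range step keeps a shorter final sublist. A sketch with placeholder data:

import random

list_ids = list(range(200))  # placeholder data
random.shuffle(list_ids)

size = 19
# The range step walks 0, 19, 38, ... so the last slice may be shorter,
# but no element is lost.
sublists = [list_ids[i:i + size] for i in range(0, len(list_ids), size)]
print([len(s) for s in sublists])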
Q: NameError: name 'reduce' is not defined in Python I'm using Python 3.2. Tried this:
xor = lambda x,y: (x+y)%2
l = reduce(xor, [1,2,3,4])

And got the following error:
l = reduce(xor, [1,2,3,4])
NameError: name 'reduce' is not defined

Tried printing reduce in the interactive console - got this error:
NameError: name 'reduce' is not defined

Is reduce really removed in Python 3.2? If that's the case, what's the alternative?
A: It was moved to functools.
A: You can add
from functools import reduce

before you use the reduce.
A: Or if you use the six library
from six.moves import reduce

A: In this case I believe that the following is equivalent:
l = sum([1,2,3,4]) % 2

The only problem with this is that it creates big numbers, but maybe that is better than repeated modulo operations?
A: The reduce function is no longer a Python built-in.
So first, you should import the reduce function:
from functools import reduce

A: you need to import reduce from the functools package (it is part of the standard library, so there is nothing to install)
A: Use it like this.
# Use reduce function

from functools import reduce

def reduce_func(n1, n2):
    return n1 + n2


data_list = [2, 7, 9, 21, 33]

x = reduce(reduce_func, data_list)

print(x)
NameError: name 'reduce' is not defined in Python
I'm using Python 3.2. Tried this:
xor = lambda x,y: (x+y)%2
l = reduce(xor, [1,2,3,4])

And got the following error:
l = reduce(xor, [1,2,3,4])
NameError: name 'reduce' is not defined

Tried printing reduce in the interactive console - got this error:
NameError: name 'reduce' is not defined

Is reduce really removed in Python 3.2? If that's the case, what's the alternative?
[ "It was moved to functools.\n", "You can add\nfrom functools import reduce\n\nbefore you use the reduce.\n", "Or if you use the six library\nfrom six.moves import reduce\n\n", "In this case I believe that the following is equivalent:\nl = sum([1,2,3,4]) % 2\n\nThe only problem with this is that it creates big numbers, but maybe that is better than repeated modulo operations?\n", "Reduce function is not defined in the Python built-in function.\nSo first, you should import the reduce function\nfrom functools import reduce\n\n", "you need to install and import reduce from functools python package\n", "Use it like this.\n# Use reduce function\n\nfrom functools import reduce\n\ndef reduce_func(n1, n2):\n\n return n1 + n2\n\n\ndata_list = [2, 7, 9, 21, 33]\n\nx = reduce(reduce_func, data_list)\n\nprint(x)\n\n" ]
[ 336, 261, 10, 2, 2, 1, 1 ]
[]
[]
[ "python", "python_3.2", "reduce" ]
stackoverflow_0008689184_python_python_3.2_reduce.txt
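Applied to the original XOR example from the question above, the fix is a one-line import:

from functools import reduce  # reduce left the builtins in Python 3

xor = lambda x, y: (x + y) % 2
print(reduce(xor, [1, 2, 3, 4]))  # 0, since 1+2+3+4 = 10 is even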
Q: RuntimeWarning: coroutine 'Command.__call__' was never awaited self._run_job(job) Trying to send an embed at a set time only on weekdays. I can use asyncio sleep to run every day (including weekends) but I want to skip Saturday and Sunday. I've now imported schedule and get the error RuntimeWarning: coroutine 'Command.__call__' was never awaited self._run_job(job)
import discord
from discord.ext import commands, tasks
from discord import Embed
import asyncio
import schedule
import time
import aiohttp
import tracemalloc


client = commands.Bot(command_prefix='..')

@client.command() 
async def startpoll(ctx): #command name
#scheduler

    #get channel
    channel = client.get_channel(1039973136361345034)
    
    # @everyone
    if ctx.author.guild_permissions.manage_roles:
        message = await ctx.send(f'{ctx.guild.default_role}')
        
    # embed
    description = []
    embed = discord.Embed(title='**Daily Market Prediction**',
                          description='<:Green:865709217678360636> **Bullish**\n\n<:Red:865709217980350464> **Bearish**', 
                          colour=6398207)
    
    react_message = await channel.send(embed=embed)
    await react_message.add_reaction('<:Green:865709217678360636>')
    await react_message.add_reaction('<:Red:865709217980350464>')


while True:
    schedule.every().thursday.at('23:18').do(startpoll)
    schedule.run_pending()
    time.sleep(1)

I was expecting a post while testing at 23:18, but once this time is hit the error appears.
A: I don't know the schedule library, but you should have a look at the tasks extension of discord.py; it can handle tasks at specific times for you.
The underlying issue is simple: the library you're using cannot handle async functions, which is what a discord.py command is.
Just use said extension. It's built into discord.py and works perfectly fine with async code. If you need further help with it, just ask ;-)
RuntimeWarning: coroutine 'Command.__call__' was never awaited self._run_job(job)
Trying to send an embed at a set time only on weekdays. I can use asyncio sleep to run every day (including weekends) but I want to skip Saturday and Sunday. I've now imported schedule and get the error RuntimeWarning: coroutine 'Command.__call__' was never awaited self._run_job(job)
import discord
from discord.ext import commands, tasks
from discord import Embed
import asyncio
import schedule
import time
import aiohttp
import tracemalloc


client = commands.Bot(command_prefix='..')

@client.command() 
async def startpoll(ctx): #command name
#scheduler

    #get channel
    channel = client.get_channel(1039973136361345034)
    
    # @everyone
    if ctx.author.guild_permissions.manage_roles:
        message = await ctx.send(f'{ctx.guild.default_role}')
        
    # embed
    description = []
    embed = discord.Embed(title='**Daily Market Prediction**',
                          description='<:Green:865709217678360636> **Bullish**\n\n<:Red:865709217980350464> **Bearish**', 
                          colour=6398207)
    
    react_message = await channel.send(embed=embed)
    await react_message.add_reaction('<:Green:865709217678360636>')
    await react_message.add_reaction('<:Red:865709217980350464>')


while True:
    schedule.every().thursday.at('23:18').do(startpoll)
    schedule.run_pending()
    time.sleep(1)

I was expecting a post while testing at 23:18, but once this time is hit the error appears.
[ "I dont know the schedule library but you should have a look into the tasks extension of discord.py, it can handle tasks at specific times for you.\nBut it‘s very simple, the library you‘re using cannot handle async functions, what a command in fact is...\nJust use said extension. It‘s build in in discord.py and works perfectly fine with async code and put your. If you need further help with it, just ask ;-)\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python", "python_3.x" ]
stackoverflow_0074484913_discord.py_python_python_3.x.txt
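A sketch of the tasks-extension approach suggested in the answer above, adapted to the weekday-only posting from the question (discord.py 2.x assumed; note that a naive datetime.time passed to tasks.loop is interpreted as UTC):

import datetime
import discord
from discord.ext import tasks

post_time = datetime.time(hour=23, minute=18)  # UTC unless tzinfo is set

@tasks.loop(time=post_time)
async def daily_poll():
    if datetime.date.today().weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return
    channel = client.get_channel(1039973136361345034)
    embed = discord.Embed(title="**Daily Market Prediction**")
    await channel.send(embed=embed)

@client.event
async def on_ready():
    if not daily_poll.is_running():
        daily_poll.start()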
Q: How can I print squares of the numbers in a full triangle (not left and right triangle) pattern in Python? Hi, I have a problem with a nested loop where the output must be like this:
Required pattern
                1 
              4   9 
            16  25  36 
          49  64  81  100 

I need exactly the same result.
A: def square_pyramid(n):
    number = 1 
    k=n*2
    for i in range(0,n):
        for j in range(0,k):
            print(end= " ") #this sets spaces before each number
        k = k-2 #reduce space in front of every number, decreases row wise 
        for j in range(0,i+1):
            print(number*number, "  ",end=" ")
            number += 1 
        print("\r") #new line 
square_pyramid(4)
How can I print squares of the numbers in a full triangle (not left and right triangle) pattern in Python?
Hi, I have a problem with a nested loop where the output must be like this:
Required pattern
                1 
              4   9 
            16  25  36 
          49  64  81  100 

I need exactly the same result.
[ "def square_pyramid(n):\n number = 1 \n k=n*2\n for i in range(0,n):\n for j in range(0,k):\n print(end= \" \") #this sets spaces before each number\n k = k-2 #reduce space in front of every number, decreases row wise \n for j in range(0,i+1):\n print(number*number, \" \",end=\" \")\n number += 1 \n print(\"\\r\") #new line \nsquare_pyramid(4)\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074486471_python.txt
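An alternative sketch for the triangle entry above that builds each row first and then centres it on the widest row, which avoids hand-tuning a space counter:

def square_pyramid(n):
    number = 1
    rows = []
    for i in range(1, n + 1):
        squares = []
        for _ in range(i):
            squares.append(str(number * number))
            number += 1
        rows.append("  ".join(squares))
    width = len(rows[-1])         # the last row is the widest
    for row in rows:
        print(row.center(width))  # centre every row to form the full triangle

square_pyramid(4)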
Q: TypeError: '>=' not supported between instances of 'builtin_function_or_method' and 'datetime.time' I'm trying to compare two datetime.time values but I get this error:
File "c:\Users\xxxxxx\Desktop\pyfiche\code.py", line 33, in timeofweek
if (a.time >= d0.time() and a.time() <= d6.time()) or (a.time() >= d21.time() and a.time() <= d23.time()):
TypeError: '>=' not supported between instances of 'builtin_function_or_method' and 'datetime.time'

and this is my code:
def timeofweek(a):

    d6 = a.replace(hour=6, minute=0, second=0, microsecond=0)
    d21 = a.replace(hour=21, minute=0, second=0, microsecond=0)
    d0 = a.replace(hour=0, minute=0, second=0, microsecond=0)
    d23 = a.replace(hour=23, minute=0, second=0, microsecond=0)

    if a.weekday() in range(0,6):
        print(a.time(),type(a.time())) #for debugging and they are both datetime.time
        print(d0.time(),type(d0.time())) #for debugging and they are both datetime.time
        if (a.time() >= d6.time() and a.time() < d21.time()):
            return 1
        if (a.time() >= d0.time() and a.time() < d6.time()) or (a.time() >= d21.time() and a.time() <= d23.time()):
            return 2
    if a.weekday() == 6:
        if (a.time() >= d6.time() and a.time() <= d21.time()):
            return 2
        if (a.time >= d0.time() and a.time() <= d6.time()) or (a.time() >= d21.time() and a.time() <= d23.time()):
            return 3

I'm sure that d6, d21, d0, d23 are all the same type, datetime.time. The problem started when the input value changed from 24h to 12h format, but when I print the value for debugging I still get the 24h format every time.
A: You aren't calling the .time() method on a in the second-to-last row (you're missing the parentheses).
TypeError: '>=' not supported between instances of 'builtin_function_or_method' and 'datetime.time'
I'm trying to compare two datetime.time values but I get this error:
File "c:\Users\xxxxxx\Desktop\pyfiche\code.py", line 33, in timeofweek
if (a.time >= d0.time() and a.time() <= d6.time()) or (a.time() >= d21.time() and a.time() <= d23.time()):
TypeError: '>=' not supported between instances of 'builtin_function_or_method' and 'datetime.time'

and this is my code:
def timeofweek(a):

    d6 = a.replace(hour=6, minute=0, second=0, microsecond=0)
    d21 = a.replace(hour=21, minute=0, second=0, microsecond=0)
    d0 = a.replace(hour=0, minute=0, second=0, microsecond=0)
    d23 = a.replace(hour=23, minute=0, second=0, microsecond=0)

    if a.weekday() in range(0,6):
        print(a.time(),type(a.time())) #for debugging and they are both datetime.time
        print(d0.time(),type(d0.time())) #for debugging and they are both datetime.time
        if (a.time() >= d6.time() and a.time() < d21.time()):
            return 1
        if (a.time() >= d0.time() and a.time() < d6.time()) or (a.time() >= d21.time() and a.time() <= d23.time()):
            return 2
    if a.weekday() == 6:
        if (a.time() >= d6.time() and a.time() <= d21.time()):
            return 2
        if (a.time >= d0.time() and a.time() <= d6.time()) or (a.time() >= d21.time() and a.time() <= d23.time()):
            return 3

I'm sure that d6, d21, d0, d23 are all the same type, datetime.time. The problem started when the input value changed from 24h to 12h format, but when I print the value for debugging I still get the 24h format every time.
[ "you aren't calling the .time() method for a in the second to last row. (You're missing brackets)\n" ]
[ 0 ]
[]
[]
[ "datetime", "python", "time", "typeerror" ]
stackoverflow_0074487261_datetime_python_time_typeerror.txt
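For the datetime comparison above, once the missing parentheses are added, Python's chained comparisons make the failing condition shorter and avoid the repeated calls; a hedged rewrite of that one line:

t = a.time()
# equivalent to (t >= d0.time() and t <= d6.time()) or (t >= d21.time() and t <= d23.time())
if d0.time() <= t <= d6.time() or d21.time() <= t <= d23.time():
    return 3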
Q: Filter and group list of duplicate names How do I change the shape of this data so that the subject column becomes a top row header containing all unique values. Name and Surnames are listed in the 1st and 2nd column as unique values. Then in each cell I need a true or false of whether the person is in that subject class. I need to transpose or reshape the data but how on earth is this done in code? SUBJECT NAME SURNAME Art person1 Surname1 Art person2 surname2 Art person3 Surname3 Art person4 Surname4 Art person5 Surname5 Art person6 Surname6 Art person7 Surname7 Art person8 Surname8 DT person1 Surname1 DT person3 Surname3 DT person5 Surname5 Photography person1 Surname1 Photography person2 surname2 Photography person3 Surname3 Photography person5 Surname5 Photography person8 Surname8 Games person4 Surname4 Games person5 Surname5 Games person6 Surname6 Games person7 Surname7 Games person8 Surname8 Games person9 Surname9 So that it looks like this: Name Surname Art DT Photography Games person1 Surname1 True False True etc.... person2 surname2 False True False etc... person3 Surname3 person4 Surname4 person5 Surname5 person6 Surname6 person7 Surname7 person8 Surname8 person9 Surname9 A: Use: (df.value_counts().unstack(0) .notnull() .reindex(columns=df['SUBJECT'].unique()) .reset_index() .rename_axis(columns=None)) Output: NAME SURNAME Art DT Photography Games 0 person1 Surname1 True True True False 1 person2 surname2 True False True False 2 person3 Surname3 True True True False 3 person4 Surname4 True False False True 4 person5 Surname5 True True True True 5 person6 Surname6 True False False True 6 person7 Surname7 True False False True 7 person8 Surname8 True False True True 8 person9 Surname9 False False False True A: This is a crosstab converted to boolean: out = (pd .crosstab([df['NAME'], df['SURNAME']], df['SUBJECT']) .astype(bool) .reset_index().rename_axis(columns=None) ) Output: NAME SURNAME Art DT Games Photography 0 person1 Surname1 True True False True 1 person2 surname2 True False False True 2 person3 Surname3 True True False True 3 person4 Surname4 True False True False 4 person5 Surname5 True True True True 5 person6 Surname6 True False True False 6 person7 Surname7 True False True False 7 person8 Surname8 True False True True 8 person9 Surname9 False False True False
Filter and group list of duplicate names
How do I change the shape of this data so that the subject column becomes a top row header containing all unique values. Name and Surnames are listed in the 1st and 2nd column as unique values. Then in each cell I need a true or false of whether the person is in that subject class. I need to transpose or reshape the data but how on earth is this done in code? SUBJECT NAME SURNAME Art person1 Surname1 Art person2 surname2 Art person3 Surname3 Art person4 Surname4 Art person5 Surname5 Art person6 Surname6 Art person7 Surname7 Art person8 Surname8 DT person1 Surname1 DT person3 Surname3 DT person5 Surname5 Photography person1 Surname1 Photography person2 surname2 Photography person3 Surname3 Photography person5 Surname5 Photography person8 Surname8 Games person4 Surname4 Games person5 Surname5 Games person6 Surname6 Games person7 Surname7 Games person8 Surname8 Games person9 Surname9 So that it looks like this: Name Surname Art DT Photography Games person1 Surname1 True False True etc.... person2 surname2 False True False etc... person3 Surname3 person4 Surname4 person5 Surname5 person6 Surname6 person7 Surname7 person8 Surname8 person9 Surname9
[ "Use:\n(df.value_counts().unstack(0)\n .notnull()\n .reindex(columns=df['SUBJECT'].unique())\n .reset_index()\n .rename_axis(columns=None))\n\nOutput:\n NAME SURNAME Art DT Photography Games\n0 person1 Surname1 True True True False\n1 person2 surname2 True False True False\n2 person3 Surname3 True True True False\n3 person4 Surname4 True False False True\n4 person5 Surname5 True True True True\n5 person6 Surname6 True False False True\n6 person7 Surname7 True False False True\n7 person8 Surname8 True False True True\n8 person9 Surname9 False False False True\n\n", "This is a crosstab converted to boolean:\nout = (pd\n .crosstab([df['NAME'], df['SURNAME']], df['SUBJECT'])\n .astype(bool)\n .reset_index().rename_axis(columns=None)\n)\n\nOutput:\n NAME SURNAME Art DT Games Photography\n0 person1 Surname1 True True False True\n1 person2 surname2 True False False True\n2 person3 Surname3 True True False True\n3 person4 Surname4 True False True False\n4 person5 Surname5 True True True True\n5 person6 Surname6 True False True False\n6 person7 Surname7 True False True False\n7 person8 Surname8 True False True True\n8 person9 Surname9 False False True False\n\n" ]
[ 2, 1 ]
[]
[]
[ "excel", "pandas", "python" ]
stackoverflow_0074487176_excel_pandas_python.txt
Q: Fetching data from postgres database on jupyter notebook I have this script to fetch data from a Postgres DB.
POSTGRES_PORT = 'xxxx' 
POSTGRES_USERNAME = 'xxx' ## CHANGE THIS TO YOUR PANOPLY/POSTGRES USERNAME
POSTGRES_PASSWORD = 'xxx' ## CHANGE THIS TO YOUR PANOPLY/POSTGRES PASSWORD 
POSTGRES_DBNAME = 'xxxx' ## CHANGE THIS TO YOUR DATABASE NAME
POSTGRES_DBNAME = 'xxx'
postgres_str = (f'postgresql://{POSTGRES_USERNAME}:{POSTGRES_PASSWORD}@{POSTGRES_ADDRESS}:{POSTGRES_PORT}/{POSTGRES_DBNAME}')

# Create the connection
cnx = create_engine(postgres_str)

When I use the limit, I'm able to fetch it.
table_name = pd.read_sql_query("""SELECT * FROM public.timeline limit 1000""", cnx)
table_name

When I try to fetch without limit, I get this error:
"Connection failed
A connection to the notebook server could not be established. The notebook will continue trying to reconnect. Check your network connection or notebook server configuration."
In this case, would you recommend I use PySpark, as it looks like the data is big data? I used count and got 66231781 rows.
A: By default the database driver for Postgresql uses a client side cursor, but you can use a server side cursor and stream the data to the client in batches. The following code will iterate through the query result in batches of 1,000 rows as set by the chunksize parameter. You can adjust the value of chunksize to meet your needs.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(f"postgresql://{POSTGRES_USERNAME}:{POSTGRES_PASSWORD}@{POSTGRES_ADDRESS}:{POSTGRES_PORT}/{POSTGRES_DBNAME}")

with engine.connect().execution_options(stream_results=True) as conn:
    for chunk_df in pd.read_sql("SELECT * FROM public.timeline", conn, chunksize=1000):
        print(f"Dataframe has {len(chunk_df)} rows.")
Fetching data from postgres database on jupyter notebook
I have this script to fetch data from a Postgres DB.
POSTGRES_PORT = 'xxxx' 
POSTGRES_USERNAME = 'xxx' ## CHANGE THIS TO YOUR PANOPLY/POSTGRES USERNAME
POSTGRES_PASSWORD = 'xxx' ## CHANGE THIS TO YOUR PANOPLY/POSTGRES PASSWORD 
POSTGRES_DBNAME = 'xxxx' ## CHANGE THIS TO YOUR DATABASE NAME
POSTGRES_DBNAME = 'xxx'
postgres_str = (f'postgresql://{POSTGRES_USERNAME}:{POSTGRES_PASSWORD}@{POSTGRES_ADDRESS}:{POSTGRES_PORT}/{POSTGRES_DBNAME}')

# Create the connection
cnx = create_engine(postgres_str)

When I use the limit, I'm able to fetch it.
table_name = pd.read_sql_query("""SELECT * FROM public.timeline limit 1000""", cnx)
table_name

When I try to fetch without limit, I get this error:
"Connection failed
A connection to the notebook server could not be established. The notebook will continue trying to reconnect. Check your network connection or notebook server configuration."
In this case, would you recommend I use PySpark, as it looks like the data is big data? I used count and got 66231781 rows.
[ "By default the database driver for Postgresql uses a client side cursor, but you can use a server side cursor and stream the data to the client in batches. The following code will iterate through the query result in batches of 1,000 rows as set by the chunksize parameter. You can adjust the value of chunksize to meet your needs.\nimport pandas as pd\nfrom sqlalchemy import create_engine\n\nengine = create_engine(f\"postgresql://{POSTGRES_USERNAME}:{POSTGRES_PASSWORD}@{POSTGRES_ADDRESS}:{POSTGRES_PORT}/{POSTGRES_DBNAME}\")\n\nwith engine.connect().execution_options(stream_results=True) as conn:\n for chunk_df in pd.read_sql(\"SELECT * FROM public.timeline\", conn, chunksize=1000):\n print(f\"Dataframe has {len(chunk_df)} rows.\")\n\n" ]
[ 0 ]
[]
[]
[ "postgresql", "python" ]
stackoverflow_0074486768_postgresql_python.txt
Q: I want to add two y axis values in a bokeh line plot What's wrong with this code? Please help.
import pandas as pd
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, show, output_file
output_file ('newfile.html')
data = pd.read_excel(r'C:\Users\ASyed\OneDrive - NKT\PythonProject\bokehPractise\newfile\19AJ100429-GC3-FR-003-A1-KP240.000-KP248.831-SL-AT.xlsx', \
sheet_name= 'Listing')
df = pd.DataFrame(data)
df.columns = [x.replace("\n", " ") for x in df.columns.to_list()]
SOURCE = ColumnDataSource(data = df)
p = figure (plot_width = 800, plot_height = 600)
p.line(x= 'KP [km]', y =[['DOL [m]'], ['DOC [m]']], source = SOURCE )
p.title.text = 'DOL Visualization'
p.xaxis.axis_label = 'Kilometer Point'
p.yaxis.axis_label = 'DOC'
show(p)

I am using pandas and bokeh. I want a plot with two y-axis lines derived from two different columns. The above code gives the following error and a blank Bokeh plot in the HTML.
Expected y to reference fields in the supplied data source.
When a 'source' argument is passed to a glyph method, values that are sequences
(like lists or arrays) must come from references to data columns in the source.

For instance, as an example:

    source = ColumnDataSource(data=dict(x=a_list, y=an_array))

    p.circle(x='x', y='y', source=source, ...) # pass column names and a source

Alternatively, *all* data sequences may be provided as literals as long as a
source is *not* provided:

    p.circle(x=a_list, y=an_array, ...) # pass actual sequences and no source

A: When you are using source as an input to your figure, you are only allowed to add strings as a pointer to the data.
In your case use
p.line(x= 'KP [km]', y ='DOL [m]', source = SOURCE )

instead of
p.line(x= 'KP [km]', y =[['DOL [m]'], ['DOC [m]']], source = SOURCE )

If you want to add multiple lines to one figure, call the p.line() once for each line.
Minimal Example
import pandas as pd

from bokeh.plotting import figure, show, output_notebook
from bokeh.models import ColumnDataSource
output_notebook()

df = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6]})
source = ColumnDataSource(df)

p = figure(width=300, height=300)
p.line(x='index', y='a', source=source)
p.line(x='index', y='b', source=source, color='red')

show(p)

See also
Check out the multiline example on the official documentation.
I want to add two y axis values in a bokeh line plot
What's wrong with this code? Please help.
import pandas as pd
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, show, output_file
output_file ('newfile.html')
data = pd.read_excel(r'C:\Users\ASyed\OneDrive - NKT\PythonProject\bokehPractise\newfile\19AJ100429-GC3-FR-003-A1-KP240.000-KP248.831-SL-AT.xlsx', \
sheet_name= 'Listing')
df = pd.DataFrame(data)
df.columns = [x.replace("\n", " ") for x in df.columns.to_list()]
SOURCE = ColumnDataSource(data = df)
p = figure (plot_width = 800, plot_height = 600)
p.line(x= 'KP [km]', y =[['DOL [m]'], ['DOC [m]']], source = SOURCE )
p.title.text = 'DOL Visualization'
p.xaxis.axis_label = 'Kilometer Point'
p.yaxis.axis_label = 'DOC'
show(p)

I am using pandas and bokeh. I want a plot with two y-axis lines derived from two different columns. The above code gives the following error and a blank Bokeh plot in the HTML.
Expected y to reference fields in the supplied data source.
When a 'source' argument is passed to a glyph method, values that are sequences
(like lists or arrays) must come from references to data columns in the source.

For instance, as an example:

    source = ColumnDataSource(data=dict(x=a_list, y=an_array))

    p.circle(x='x', y='y', source=source, ...) # pass column names and a source

Alternatively, *all* data sequences may be provided as literals as long as a
source is *not* provided:

    p.circle(x=a_list, y=an_array, ...) # pass actual sequences and no source
[ "When you are using source as an input to your figure, you are only allowed to add strings as a pointer to the data.\nIn your case use\np.line(x= 'KP [km]', y ='DOL [m]', source = SOURCE )\n\ninstead of\np.line(x= 'KP [km]', y =[['DOL [m]'], ['DOC [m]']], source = SOURCE )\n\nIf you want to add multiple lines to one figure, call the p.line() once for each line.\nMinimal Example\nimport pandas as pd\n\nfrom bokeh.plotting import figure, show, output_notebook\nfrom bokeh.models import ColumnDataSource\noutput_notebook()\n\ndf = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6]})\nsource = ColumnDataSource(df)\n\np = figure(width=300, height=300)\np.line(x='index', y='a', source=source)\np.line(x='index', y='b', source=source, color='red')\n\nshow(p)\n\n\nSee also\nCheck out the multiline example on the official documentation.\n" ]
[ 0 ]
[]
[]
[ "bokeh", "pandas", "python" ]
stackoverflow_0074486794_bokeh_pandas_python.txt
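If the Bokeh question's title is taken literally, i.e. two separate y axes rather than two lines sharing one axis, Bokeh supports extra named ranges. A sketch with placeholder data and axis bounds to adapt to your columns:

from bokeh.models import ColumnDataSource, LinearAxis, Range1d
from bokeh.plotting import figure, show

source = ColumnDataSource(dict(x=[0, 1, 2], dol=[10, 12, 11], doc=[0.4, 0.7, 0.5]))

p = figure(width=800, height=600)
p.line(x="x", y="dol", source=source, legend_label="DOL [m]")

# Register a second y range and attach a new axis on the right for it.
p.extra_y_ranges = {"doc": Range1d(start=0, end=1)}
p.add_layout(LinearAxis(y_range_name="doc", axis_label="DOC"), "right")
p.line(x="x", y="doc", source=source, y_range_name="doc", color="red", legend_label="DOC [m]")

show(p)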
Q: What does 'index 0 is out of bounds for axis 0 with size 0' mean? I am new to both python and numpy. I ran a code that I wrote and I am getting this message: 'index 0 is out of bounds for axis 0 with size 0' Without the context, I just want to figure out what this means. It might be silly to ask this but what do they mean by axis 0 and size 0? index 0 means the first value in the array, but I can't figure out what axis 0 and size 0 mean. The 'data' is a text file with lots of numbers in two columns.
x = np.linspace(1735.0,1775.0,100)
column1 = (data[0,0:-1]+data[0,1:])/2.0
column2 = data[1,1:]
x_column1 = np.zeros(x.size+2)
x_column1[1:-1] = x
x_column1[0] = x[0]+x[0]-x[1]
x_column1[-1] = x[-1]+x[-1]-x[-2]
experiment = np.zeros_like(x)
for i in range(np.size(x_edges)-2):
    indexes = np.flatnonzero(np.logical_and((column1>=x_column1[i]),(column1<x_column1[i+1])))
    temp_column2 = column2[indexes]
    temp_column2[0] -= column2[indexes[0]]*(x_column1[i]-column1[indexes[0]-1])/(column1[indexes[0]]-column1[indexes[0]-1])
    temp_column2[-1] -= column2[indexes[-1]]*(column1[indexes[-1]+1]-x_column1[i+1])/(column1[indexes[-1]+1]-column1[indexes[-1]])
    experiment[i] = np.sum(temp_column2)
return experiment

A: In numpy, index and dimension numbering starts with 0. So axis 0 means the 1st dimension. Also in numpy a dimension can have length (size) 0. The simplest case is:
In [435]: x = np.zeros((0,), int)
In [436]: x
Out[436]: array([], dtype=int32)
In [437]: x[0]
...
IndexError: index 0 is out of bounds for axis 0 with size 0

I also get it if x = np.zeros((0,5), int), a 2d array with 0 rows, and 5 columns.
So someplace in your code you are creating an array with a size 0 first axis.
When asking about errors, it is expected that you tell us where the error occurs.
Also when debugging problems like this, the first thing you should do is print the shape (and maybe the dtype) of the suspected variables.
Applied to pandas

The same error can occur for those using pandas, when sending a Series or DataFrame to a numpy.array, as with the following:

pandas.Series.values or pandas.Series.to_numpy() or pandas.Series.array
pandas.DataFrame.values or pandas.DataFrame.to_numpy()


Resolving the error:

Use a try-except block
Verify the size of the array is not 0

if x.size != 0:


A: Essentially it means you don't have the index you are trying to reference. For example:
df = pd.DataFrame()
df['this']=np.nan
df['my']=np.nan
df['data']=np.nan
df['data'][0]=5 #I haven't yet assigned how long df[data] should be!
print(df)

will give me the error you are referring to, because I haven't told Pandas how long my dataframe is. Whereas if I do the exact same code but I DO assign an index length, I don't get an error:
df = pd.DataFrame(index=[0,1,2,3,4])
df['this']=np.nan
df['is']=np.nan
df['my']=np.nan
df['data']=np.nan
df['data'][0]=5 #since I've properly labelled my index, I don't run into this problem!
print(df)

Hope that answers your question!
A: This is an IndexError in python, which means that we're trying to access an index which isn't there in the tensor. Below is a very simple example to understand this error.
# create an empty array of dimension `0`
In [14]: arr = np.array([], dtype=np.int64) 

# check its shape 
In [15]: arr.shape 
Out[15]: (0,)

with this array arr in place, if we now try to assign any value to some index, for example to the index 0 as in the case below
In [16]: arr[0] = 23 

Then, we will get an IndexError, as below:

IndexError Traceback (most recent call last)
<ipython-input-16-0891244a3c59> in <module>
----> 1 arr[0] = 23

IndexError: index 0 is out of bounds for axis 0 with size 0

The reason is that we are trying to access an index (here at 0th position), which is not there (i.e. it doesn't exist because we have an array of size 0). 
In [19]: arr.size * arr.itemsize 
Out[19]: 0

So, in essence, such an array is useless and cannot be used for storing anything. Thus, in your code, you've to follow the traceback and look for the place where you're creating an array/tensor of size 0 and fix that.
A: I encountered this error and found that it was my data type causing the error.
The type was an object, after converting it to an int or float the issue was solved.
I used the following code:
df = df.astype({"column": new_data_type,
            "example": float})
What does 'index 0 is out of bounds for axis 0 with size 0' mean?
I am new to both python and numpy. I ran some code that I wrote and I am getting this message: 'index 0 is out of bounds for axis 0 with size 0' Without the context, I just want to figure out what this means. It might be silly to ask this, but what do they mean by axis 0 and size 0? Index 0 means the first value in the array, but I can't figure out what axis 0 and size 0 mean. The 'data' is a text file with lots of numbers in two columns. x = np.linspace(1735.0,1775.0,100) column1 = (data[0,0:-1]+data[0,1:])/2.0 column2 = data[1,1:] x_column1 = np.zeros(x.size+2) x_column1[1:-1] = x x_column1[0] = x[0]+x[0]-x[1] x_column1[-1] = x[-1]+x[-1]-x[-2] experiment = np.zeros_like(x) for i in range(np.size(x_edges)-2): indexes = np.flatnonzero(np.logical_and((column1>=x_column1[i]),(column1<x_column1[i+1]))) temp_column2 = column2[indexes] temp_column2[0] -= column2[indexes[0]]*(x_column1[i]-column1[indexes[0]-1])/(column1[indexes[0]]-column1[indexes[0]-1]) temp_column2[-1] -= column2[indexes[-1]]*(column1[indexes[-1]+1]-x_column1[i+1])/(column1[indexes[-1]+1]-column1[indexes[-1]]) experiment[i] = np.sum(temp_column2) return experiment
[ "In numpy, index and dimension numbering starts with 0. So axis 0 means the 1st dimension. Also in numpy a dimension can have length (size) 0. The simplest case is:\nIn [435]: x = np.zeros((0,), int)\nIn [436]: x\nOut[436]: array([], dtype=int32)\nIn [437]: x[0]\n...\nIndexError: index 0 is out of bounds for axis 0 with size 0\n\nI also get it if x = np.zeros((0,5), int), a 2d array with 0 rows, and 5 columns.\nSo someplace in your code you are creating an array with a size 0 first axis.\nWhen asking about errors, it is expected that you tell us where the error occurs.\nAlso when debugging problems like this, the first thing you should do is print the shape (and maybe the dtype) of the suspected variables.\nApplied to pandas\n\nThe same error can occur for those using pandas, when sending a Series or DataFrame to a numpy.array, as with the following:\n\npandas.Series.values or pandas.Series.to_numpy() or pandas.Series.array\npandas.DataFrame.values or pandas.DataFrame.to_numpy()\n\n\n\nResolving the error:\n\nUse a try-except block\nVerify the size of the array is not 0\n\nif x.size != 0:\n\n\n\n", "Essentially it means you don't have the index you are trying to reference. For example:\ndf = pd.DataFrame()\ndf['this']=np.nan\ndf['my']=np.nan\ndf['data']=np.nan\ndf['data'][0]=5 #I haven't yet assigned how long df[data] should be!\nprint(df)\n\nwill give me the error you are referring to, because I haven't told Pandas how long my dataframe is. Whereas if I do the exact same code but I DO assign an index length, I don't get an error:\ndf = pd.DataFrame(index=[0,1,2,3,4])\ndf['this']=np.nan\ndf['is']=np.nan\ndf['my']=np.nan\ndf['data']=np.nan\ndf['data'][0]=5 #since I've properly labelled my index, I don't run into this problem!\nprint(df)\n\nHope that answers your question!\n", "This is an IndexError in python, which means that we're trying to access an index which isn't there in the tensor. Below is a very simple example to understand this error.\n# create an empty array of dimension `0`\nIn [14]: arr = np.array([], dtype=np.int64) \n\n# check its shape \nIn [15]: arr.shape \nOut[15]: (0,)\n\nwith this array arr in place, if we now try to assign any value to some index, for example to the index 0 as in the case below\nIn [16]: arr[0] = 23 \n\nThen, we will get an IndexError, as below:\n\n\nIndexError Traceback (most recent call last)\n<ipython-input-16-0891244a3c59> in <module>\n----> 1 arr[0] = 23\n\nIndexError: index 0 is out of bounds for axis 0 with size 0\n\n\nThe reason is that we are trying to access an index (here at 0th position), which is not there (i.e. it doesn't exist because we have an array of size 0). \nIn [19]: arr.size * arr.itemsize \nOut[19]: 0\n\nSo, in essence, such an array is useless and cannot be used for storing anything. Thus, in your code, you've to follow the traceback and look for the place where you're creating an array/tensor of size 0 and fix that.\n", "I encountered this error and found that it was my data type causing the error.\nThe type was an object, after converting it to an int or float the issue was solved.\nI used the following code:\ndf = df.astype({\"column\": new_data_type,\n \"example\": float})\n\n" ]
[ 46, 10, 9, 0 ]
[]
[]
[ "error_handling", "index_error", "indexing", "numpy", "python" ]
stackoverflow_0041492288_error_handling_index_error_indexing_numpy_python.txt
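The first answer above recommends printing shapes and checking .size but stops short of a combined snippet; a minimal sketch of that advice, assuming a hypothetical two-column text file standing in for the question's 'data':

import numpy as np

data = np.loadtxt("data.txt")   # hypothetical input file; any array works here
print(data.shape, data.dtype)   # first step: inspect the suspected variable

if data.size != 0:              # guard before touching index 0 of axis 0
    first_row = data[0]
else:
    print("array is empty along axis 0 - fix whatever produced it upstream")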
Q: Measure of how well two dataframe columns move with each other I have two columns in a df and rows of dates. I'd like to see how well each column matches the other and moves in sync with the other column - ie. do they move in tandem and does one influence the movements in the other. Col1 Col2 Date 1991-01-01 00:00:00+00:00 6.945847 3.4222 1991-04-01 00:00:00+00:00 8.377481 6.7783 1991-07-01 00:00:00+00:00 7.869787 4.6666 ... ... Is there a way to do this in pandas? I thought of dividing each row by the value in the first row to see the % increase, but wondered if there was a better statistical way of doing this. Thanks. A: If you want to calculate Spearman correlation coefficient you can use Scipy package df = pd.DataFrame({'Date': ['1991-01-01 00:00:00+00:00', '1991-04-01 00:00:00+00:00', '1991-07-01 00:00:00+00:00'], 'Col1': [6.945847 , 8.377481, 7.869787], 'Col2': [3.4222, 6.7783, 4.6666]}).set_index('Date') from scipy import stats stats.spearmanr(df['Col1'], df['Col2']) >>> SpearmanrResult(correlation=1.0, pvalue=0.0) A: https://en.wikipedia.org/wiki/Cross-correlation https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html Spearman only tells you the correlation at zero shift.
Measure of how well two dataframe columns move with each other
I have two columns in a df and rows of dates. I'd like to see how well each column matches the other and moves in sync with the other column - i.e. do they move in tandem and does one influence the movements in the other. Col1 Col2 Date 1991-01-01 00:00:00+00:00 6.945847 3.4222 1991-04-01 00:00:00+00:00 8.377481 6.7783 1991-07-01 00:00:00+00:00 7.869787 4.6666 ... ... Is there a way to do this in pandas? I thought of dividing each row by the value in the first row to see the % increase, but wondered if there was a better statistical way of doing this. Thanks.
[ "If you want to calculate Spearman correlation coefficient you can use Scipy package\ndf = pd.DataFrame({'Date': ['1991-01-01 00:00:00+00:00', '1991-04-01 00:00:00+00:00', '1991-07-01 00:00:00+00:00'], \n 'Col1': [6.945847 , 8.377481, 7.869787], \n 'Col2': [3.4222, 6.7783, 4.6666]}).set_index('Date')\n\nfrom scipy import stats\nstats.spearmanr(df['Col1'], df['Col2'])\n>>>\nSpearmanrResult(correlation=1.0, pvalue=0.0)\n\n", "https://en.wikipedia.org/wiki/Cross-correlation\nhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html\nSpearman only tells you the correlation at zero shift.\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074487023_dataframe_pandas_python.txt
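The second answer above points to cross-correlation only through links; a minimal sketch with scipy.signal.correlate, assuming the df from the question (correlation_lags needs SciPy 1.6+):

import numpy as np
from scipy import signal

a = df['Col1'].to_numpy()
b = df['Col2'].to_numpy()
# remove the means so constant offsets do not dominate the correlation
xcorr = signal.correlate(a - a.mean(), b - b.mean(), mode='full')
lags = signal.correlation_lags(len(a), len(b), mode='full')
print("best alignment at lag", lags[np.argmax(xcorr)])  # 0 means the series move together with no shift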
Q: How to create Dict from a Python list I have a list of random objects generated from a Model (querySet). I intend to create a separate list of objects using some but not all of the values of the objects from the original list. For instance, people = [ {'name': 'John', 'age': 20, 'location': 'Lagos'}, {'name': 'Kate', 'age': 40, 'location': 'Athens'}, {'name': 'Mike', 'age': 30, 'location': 'Delhi'}, {'name': 'Ben', 'age': 48, 'location': 'New York'} ] Here's what I've tried: my_own_list = [] my_obj = {} for person in people: my_obj['your_name'] = person['name'] my_obj['your_location'] = person['location'] my_own_list.append(my_obj) However, my code created only one obj, repeatedly four times. A: Your code corrected my_own_list = [] for person in people: # every time you create a new dictionary my_obj = {} my_obj['your_name'] = person['name'] my_obj['your_location'] = person['location'] my_own_list.append(my_obj) One-liner that uses list and dict comprehension :-) [{k:v for k,v in p.items() if k in ['name', 'location']} for p in people] A: You have to create a new object for every new person: my_own_list = [] for person in people: my_obj = {} my_obj['your_name'] = person['name'] my_obj['your_location'] = person['location'] my_own_list.append(my_obj) A: Your getting the same entry because you predefine the dict you put into a list and overwrite the same dict - so each dict in your list references to the same dict in memory and hence they're all the same you could use a list comprehension like so : my_own_list = [{"a": person["name"], "b": person["location"]} for person in people] A: please declare my_obj inside the for loop . it will work .
How to create Dict from a Python list
I have a list of random objects generated from a Model (querySet). I intend to create a separate list of objects using some but not all of the values of the objects from the original list. For instance, people = [ {'name': 'John', 'age': 20, 'location': 'Lagos'}, {'name': 'Kate', 'age': 40, 'location': 'Athens'}, {'name': 'Mike', 'age': 30, 'location': 'Delhi'}, {'name': 'Ben', 'age': 48, 'location': 'New York'} ] Here's what I've tried: my_own_list = [] my_obj = {} for person in people: my_obj['your_name'] = person['name'] my_obj['your_location'] = person['location'] my_own_list.append(my_obj) However, my code only created one object and appended it four times, so every entry in the list shows the last person's values.
[ "Your code corrected\nmy_own_list = []\n\nfor person in people:\n # every time you create a new dictionary\n my_obj = {}\n my_obj['your_name'] = person['name']\n my_obj['your_location'] = person['location']\n my_own_list.append(my_obj)\n\nOne-liner that uses list and dict comprehension :-)\n[{k:v for k,v in p.items() if k in ['name', 'location']} for p in people]\n\n", "You have to create a new object for every new person:\nmy_own_list = []\n\nfor person in people:\n my_obj = {}\n my_obj['your_name'] = person['name']\n my_obj['your_location'] = person['location']\n my_own_list.append(my_obj)\n\n", "Your getting the same entry because you predefine the dict you put into a list and overwrite the same dict - so each dict in your list references to the same dict in memory and hence they're all the same\nyou could use a list comprehension like so :\nmy_own_list = [{\"a\": person[\"name\"], \"b\": person[\"location\"]} for person in people]\n\n", "please declare my_obj inside the for loop . it will work .\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074487324_python.txt
Q: Python XML parsing removing empty CDATA nodes I'm using minidom from xml.dom to parse an xml document. I make some changes to it and then re-export it back to a new xml file. This file is generated by a program as an export and I use the changed document as an import. Upon importing, the program tells me that there are missing CDATA nodes and that it cannot import. I simplified my code to test the process: from xml.dom import minidom filename = 'Test.xml' dom = minidom.parse(filename) with open( filename.replace('.xml','_Generated.xml'), mode='w', encoding='utf8' ) as fh: fh.write(dom.toxml()) Using this for the Test.xml: <?xml version="1.0" encoding="UTF-8"?> <body> <![CDATA[]]> </body> This is what the Text_Generated.xml file is: <?xml version="1.0" ?><body> </body> A simple solution is to first open the document and change all the empty CDATA nodes to include some value before parsing then removing the value from the new file after generation but this seems like unnecessary work and time for execution as some of these documents include tens of thousands of lines. I partially debugged the issue down to the explatbuilder.py and it's parser. The parser is installed with custom callbacks. The callback that handles the data from the CDATA nodes is the character_data_handler_cdata method. The data that is supplied to this method is already missing after parsing. Anyone know what is going on with this? A: Unfortunately the XML specification is not 100% explicit about what counts as significant information in a document and what counts as noise. But there's a fairly wide consensus that CDATA tags serve no purpose other than to delimit text that hasn't been escaped: so % and &#37; and &#x25 and <!CDATA[%]]> are different ways of writing the same content, and whichever of these you use in your input, the XML parser will produce the same output. On that assumption, an empty <!CDATA[]]> represents "no content" and a parser will remove it. If your document design attaches signficance to CDATA tags then it's out of line with usual practice followed by most XML tooling, and it would be a good idea to revise the design to use element tags instead. Having said that, many XML parsers do have an option to report CDATA tags to the application, so you may be able to find a way around this, but it's still not a good design choice.
Python XML parsing removing empty CDATA nodes
I'm using minidom from xml.dom to parse an xml document. I make some changes to it and then export it back to a new xml file. This file is generated by a program as an export and I use the changed document as an import. Upon importing, the program tells me that there are missing CDATA nodes and that it cannot import. I simplified my code to test the process: from xml.dom import minidom filename = 'Test.xml' dom = minidom.parse(filename) with open( filename.replace('.xml','_Generated.xml'), mode='w', encoding='utf8' ) as fh: fh.write(dom.toxml()) Using this for the Test.xml: <?xml version="1.0" encoding="UTF-8"?> <body> <![CDATA[]]> </body> This is what the Test_Generated.xml file is: <?xml version="1.0" ?><body> </body> A simple solution is to first open the document and change all the empty CDATA nodes to include some value before parsing, then remove the value from the new file after generation, but this seems like unnecessary work and execution time as some of these documents include tens of thousands of lines. I partially debugged the issue down to expatbuilder.py and its parser. The parser is installed with custom callbacks. The callback that handles the data from the CDATA nodes is the character_data_handler_cdata method. The data that is supplied to this method is already missing after parsing. Anyone know what is going on with this?
[ "Unfortunately the XML specification is not 100% explicit about what counts as significant information in a document and what counts as noise. But there's a fairly wide consensus that CDATA tags serve no purpose other than to delimit text that hasn't been escaped: so % and &#37; and &#x25 and <!CDATA[%]]> are different ways of writing the same content, and whichever of these you use in your input, the XML parser will produce the same output. On that assumption, an empty <!CDATA[]]> represents \"no content\" and a parser will remove it.\nIf your document design attaches signficance to CDATA tags then it's out of line with usual practice followed by most XML tooling, and it would be a good idea to revise the design to use element tags instead.\nHaving said that, many XML parsers do have an option to report CDATA tags to the application, so you may be able to find a way around this, but it's still not a good design choice.\n" ]
[ 1 ]
[]
[]
[ "minidom", "python", "xml" ]
stackoverflow_0074482942_minidom_python_xml.txt
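The answer above mentions that many XML parsers can be told to report CDATA sections; one concrete option is lxml (a separate install), whose parser takes a strip_cdata flag. A sketch on the question's Test.xml - whether an empty CDATA section survives the round-trip well enough for the target program's importer still has to be verified:

from lxml import etree

parser = etree.XMLParser(strip_cdata=False)   # keep CDATA sections in the tree
tree = etree.parse('Test.xml', parser)
tree.write('Test_Generated.xml', xml_declaration=True, encoding='UTF-8')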
Q: How to check my CPU usage, heap usage, thread count, process count using python? I have a multiprocessing program. I need to check its CPU usage and other variables like heap memory usage, the number of processes used, and the number of threads used. How do I do this? I just want to know how my code is affecting my system. A: Using the psutil library saves a lot of work here. You can install it via the various methods described on this page Getting CPU usage, heap usage, thread count, and process count: import psutil import threading print("CPU usage", psutil.cpu_percent()) print("HEAP usage", psutil.virtual_memory()) print("Active thread count", threading.active_count()) print("Process count", len([*psutil.process_iter()]))
How to check my CPU usage, heap usage, thread count, process count using python?
I have a multiprocessing program. I need to check its CPU usage and other variables like heap memory usage, the number of processes used, and the number of threads used. How do I do this? I just want to know how my code is affecting my system.
[ "Using the psutil library saves a lot of work here. \nYou can install it via the various methods described on this page\n\nGetting CPU usage, heap usage, thread count, and process count:\nimport psutil\nimport threading\n\n\nprint(\"CPU usage\", psutil.cpu_percent())\nprint(\"HEAP usage\", psutil.virtual_memory())\nprint(\"Active thread count\", threading.active_count())\nprint(\"Process count\", len([*psutil.process_iter()]))\n\n" ]
[ 1 ]
[]
[]
[ "cpu", "cpu_usage", "heap_memory", "python", "python_3.x" ]
stackoverflow_0074485730_cpu_cpu_usage_heap_memory_python_python_3.x.txt
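The answer above reports system-wide numbers; since the question asks how the questioner's own multiprocessing program behaves, the same measurements can be scoped to the current process with psutil's Process API (a sketch; RSS is used as a practical stand-in for heap usage):

import os
import psutil

proc = psutil.Process(os.getpid())
print("CPU percent", proc.cpu_percent(interval=1.0))         # this process only, sampled over 1 s
print("RSS memory (bytes)", proc.memory_info().rss)          # resident memory of this process
print("Thread count", proc.num_threads())
print("Worker processes", len(proc.children(recursive=True)))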
Q: Pythonnet importing class: "ModuleNotFoundError: No module named..." I am importing a C# dll into python using pythonnet. import sys import clr assemblydir = r"C:\Users\Nathan_Dehnel\source\repos\TFSHygiene\bin\Debug\net5.0-windows" sys.path.append(assemblydir) clr.AddReference("TFSHygiene") from TFSHygiene import QueryExecutor The DLL is present inside assemblydir. Inside TFSHygiene: namespace TFSHygiene { public class QueryExecutor { ... } } I followed the answer in this question: "No module named" error when attempting to importing c# dll using Python.NET However I get this error when building: Traceback (most recent call last): File "C:\Users\Nathan_Dehnel\OneDrive - Dell Technologies\Documents\ADO TFS\ADO TFS\main.py", line 12, in <module> from TFSHygiene import QueryExecutor ModuleNotFoundError: No module named 'TFSHygiene' Built with .NET 5.0 target. A: I never got it to work with .NET 5.0. For me it worked with .NET Framework 4.8. See my answer for more details. A: For .net core, you need to add load("coreclr") before import clr: load("coreclr") import clr
Pythonnet importing class: "ModuleNotFoundError: No module named..."
I am importing a C# dll into python using pythonnet. import sys import clr assemblydir = r"C:\Users\Nathan_Dehnel\source\repos\TFSHygiene\bin\Debug\net5.0-windows" sys.path.append(assemblydir) clr.AddReference("TFSHygiene") from TFSHygiene import QueryExecutor The DLL is present inside assemblydir. Inside TFSHygiene: namespace TFSHygiene { public class QueryExecutor { ... } } I followed the answer in this question: "No module named" error when attempting to importing c# dll using Python.NET However I get this error when building: Traceback (most recent call last): File "C:\Users\Nathan_Dehnel\OneDrive - Dell Technologies\Documents\ADO TFS\ADO TFS\main.py", line 12, in <module> from TFSHygiene import QueryExecutor ModuleNotFoundError: No module named 'TFSHygiene' Built with .NET 5.0 target.
[ "I never got it to work with .NET 5.0. For me it worked with .NET Framework 4.8. See my answer for more details.\n", "For .net core, you need to add load(\"coreclr\") before import clr:\nload(\"coreclr\")\nimport clr\n\n" ]
[ 0, 0 ]
[]
[]
[ ".net_5", "c#", "python", "python.net", "python_import" ]
stackoverflow_0072353293_.net_5_c#_python_python.net_python_import.txt
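The load("coreclr") call in the second answer is shown without its import; in pythonnet 3.x it comes from the pythonnet module and must run before clr is imported. A sketch, with the assembly path as a hypothetical stand-in for the question's directory:

import sys
from pythonnet import load   # pythonnet 3.x API

load("coreclr")              # select the .NET (Core) runtime before importing clr
import clr

sys.path.append(r"C:\path\to\net5.0-windows")   # hypothetical; use the real build output folder
clr.AddReference("TFSHygiene")
from TFSHygiene import QueryExecutor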
Q: Mapping of a row of a dataframe in pandas I have following dataframe named df. id letter 1 x,y 2 z 3 a The mapping condition is {'x' : 1, 'z' : 2, 'ELSE' : 0} my desired output dataframe should look like, id letter map 1 x,y 1 2 z 2 2 a 0 Which means, even any of the letters in column letter is x, then the column map should be 1. Without iterating through each row of the dataframe, is there any way to do that? A: You can use pure pandas cond = {'x' : 1, 'z' : 2, 'ELSE' : 0} df['map'] = (df['letter'] .str.split(',').explode() .map(lambda x: cond.get(x, cond['ELSE'])) .groupby(level=0).max() ) In case of multiple values I would get the max. Alternative for the first valid match: df['map'] = (df['letter'] .str.split(',').explode() .map(cond) .groupby(level=0).first() .fillna(cond['ELSE'], downcast='infer') ) list comprehension Or using a list comprehension, here the first valid match would be used: cond = {'x' : 1, 'z' : 2, 'ELSE' : 0} df['map'] = [next((cond[x] for x in s.split(',') if x in cond), cond['ELSE']) for s in df['letter']] id letter map 0 1 x,y 1 1 2 z 2 2 3 a 0 A: use np.select import numpy as np cond1 = df['letter'].str.contains('x') cond2 = df['letter'].str.contains('z') df.assign(map=np.select([cond1, cond2], [1, 2], 0)) output: id letter map 0 1 x,y 1 1 2 z 2 2 3 a 0
Mapping of a row of a dataframe in pandas
I have the following dataframe named df. id letter 1 x,y 2 z 3 a The mapping condition is {'x' : 1, 'z' : 2, 'ELSE' : 0} My desired output dataframe should look like this: id letter map 1 x,y 1 2 z 2 3 a 0 Which means, if any of the letters in column letter is x, then the column map should be 1. Without iterating through each row of the dataframe, is there any way to do that?
[ "You can use\npure pandas\ncond = {'x' : 1, 'z' : 2, 'ELSE' : 0}\n\ndf['map'] = (df['letter']\n .str.split(',').explode()\n .map(lambda x: cond.get(x, cond['ELSE']))\n .groupby(level=0).max()\n)\n\nIn case of multiple values I would get the max.\nAlternative for the first valid match:\ndf['map'] = (df['letter']\n .str.split(',').explode()\n .map(cond)\n .groupby(level=0).first()\n .fillna(cond['ELSE'], downcast='infer')\n)\n\nlist comprehension\nOr using a list comprehension, here the first valid match would be used:\ncond = {'x' : 1, 'z' : 2, 'ELSE' : 0}\n\ndf['map'] = [next((cond[x] for x in s.split(',') if x in cond),\n cond['ELSE']) for s in df['letter']]\n\n id letter map\n0 1 x,y 1\n1 2 z 2\n2 3 a 0\n\n", "use np.select\nimport numpy as np\n\ncond1 = df['letter'].str.contains('x')\ncond2 = df['letter'].str.contains('z')\ndf.assign(map=np.select([cond1, cond2], [1, 2], 0))\n\noutput:\n id letter map\n0 1 x,y 1\n1 2 z 2\n2 3 a 0\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074487208_pandas_python.txt
Q: I got some errors when I pip install bfieldtools PS C:\Python\source> pip install bfieldtools Collecting bfieldtools Using cached bfieldtools-0.9.13.3-py3-none-any.whl (1.2 MB) Collecting trimesh Using cached trimesh-3.16.4-py3-none-any.whl (663 kB) Collecting mayavi Using cached mayavi-4.8.1.tar.gz (20.6 MB) Installing build dependencies ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\billlin\AppData\Local\Programs\Python\Python311\python.exe' 'C:\Users\billlin\AppData\Local\Temp\pip-standalone-pip-gje45057\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\billlin\AppData\Local\Temp\pip-build-env-32lpxcys\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- oldest-supported-numpy setuptools vtk wheel cwd: None Complete output (8 lines): Collecting oldest-supported-numpy Using cached oldest_supported_numpy-2022.8.16-py3-none-any.whl (3.9 kB) Collecting setuptools Using cached setuptools-65.5.1-py3-none-any.whl (1.2 MB) ERROR: Could not find a version that satisfies the requirement vtk (from versions: none) ERROR: No matching distribution found for vtk WARNING: You are using pip version 21.3.1; however, version 22.3.1 is available. You should consider upgrading via the 'C:\Users\billlin\AppData\Local\Programs\Python\Python311\python.exe -m pip install --upgrade pip' command. Could someone please help me ? I list the log as above. I was using version 22.3.1 and someone asked me to use it back to the previous version of pip for another issue. A: Install below vtx by entering following command pip install vtk==9.2.2 https://pypi.org/project/vtk/
I got some errors when I pip install bfieldtools
PS C:\Python\source> pip install bfieldtools Collecting bfieldtools Using cached bfieldtools-0.9.13.3-py3-none-any.whl (1.2 MB) Collecting trimesh Using cached trimesh-3.16.4-py3-none-any.whl (663 kB) Collecting mayavi Using cached mayavi-4.8.1.tar.gz (20.6 MB) Installing build dependencies ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\billlin\AppData\Local\Programs\Python\Python311\python.exe' 'C:\Users\billlin\AppData\Local\Temp\pip-standalone-pip-gje45057\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\billlin\AppData\Local\Temp\pip-build-env-32lpxcys\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- oldest-supported-numpy setuptools vtk wheel cwd: None Complete output (8 lines): Collecting oldest-supported-numpy Using cached oldest_supported_numpy-2022.8.16-py3-none-any.whl (3.9 kB) Collecting setuptools Using cached setuptools-65.5.1-py3-none-any.whl (1.2 MB) ERROR: Could not find a version that satisfies the requirement vtk (from versions: none) ERROR: No matching distribution found for vtk WARNING: You are using pip version 21.3.1; however, version 22.3.1 is available. You should consider upgrading via the 'C:\Users\billlin\AppData\Local\Programs\Python\Python311\python.exe -m pip install --upgrade pip' command. Could someone please help me? The log is listed above. I was using pip version 22.3.1, but someone asked me to roll back to a previous version of pip for another issue.
[ "Install below vtx\nby entering following command\npip install vtk==9.2.2\n\nhttps://pypi.org/project/vtk/\n" ]
[ 0 ]
[]
[]
[ "installation", "pip", "python", "windows" ]
stackoverflow_0074485347_installation_pip_python_windows.txt
Q: How to call async function in sync code and break async/await chain (i.e. how to wrap an async function in a sync function) All my code is written without asyncio in mind; however, I use one function that is async (written by another developer; for my purposes it's a black box). Let's call this func_1. I need to call this function from within another function, call it func_2 (which itself may be called in an arbitrarily long chain of functions func_3, func_4 etc...). Since func_1 is async, I need to await it, but since I call it in func_2, I need to make func_2 async as well (I can't await within a non-async function). And this goes on an on; I need to turn the entire chain of functions func_2, func_3, func_4 into async functions. Is there a way to avoid this? I just want to call func_1, wait for it to finish, and use the results in the rest of my normal python code. Can I create a wrapper around func_1 to allow this? What I want is essentially the following, which doesn't work: # This is the function defined by someone else async def func_1(*args): return something(*args) # This is my wrapper def func_1_wrapper(*args): return await func_1(*args) # So that I can call it like normal within the rest of my code def func_2(*args): # do something a = func_1_wrapper(*args) # do something else A: Essentially, you want to start an event loop (the "engine" of async code), submit your function to the event loop, and then wait until this function is done being executed by the event loop. One approach is asyncio.run(func_1()), but you can run into issues if something else in your code already started an event loop or if you are running this in a multithreaded context. A simple way to handle these edge cases is to use a library like asgiref.sync which allows you to do: func_1_sync = async_to_sync(func_1) and then call func_1_sync() directly from your synchronous function. Here is a snippet from their pypi page: These [helper functions] allow you to wrap or decorate async or sync functions to call them from the other style (so you can call async functions from a synchronous thread, or vice-versa). In particular: AsyncToSync lets a synchronous subthread stop and wait while the async function is called on the main thread’s event loop, and then control is returned to the thread when the async function is finished. SyncToAsync lets async code call a synchronous function, which is run in a threadpool and control returned to the async coroutine when the synchronous function completes. The idea is to make it easier to call synchronous APIs from async code and asynchronous APIs from synchronous code so it’s easier to transition code from one style to the other. Another library that does this is syncer: Sometimes (mainly in test) we need to convert asynchronous functions to normal, synchronous functions and run them synchronously. It can be done by ayncio.get_event_loop().run_until_complete(), but it’s quite long… Syncer makes this conversion easy. Convert coroutine-function (defined by aync def) to normal (synchronous) function Run coroutines synchronously Support both async def and decorator (@asyncio.coroutine) style A: If you just want to run it, and the code is already having a loop running in the background, try next(coro.__await__())
How to call async function in sync code and break async/await chain (i.e. how to wrap an async function in a sync function)
All my code is written without asyncio in mind; however, I use one function that is async (written by another developer; for my purposes it's a black box). Let's call this func_1. I need to call this function from within another function, call it func_2 (which itself may be called in an arbitrarily long chain of functions func_3, func_4 etc...). Since func_1 is async, I need to await it, but since I call it in func_2, I need to make func_2 async as well (I can't await within a non-async function). And this goes on and on; I need to turn the entire chain of functions func_2, func_3, func_4 into async functions. Is there a way to avoid this? I just want to call func_1, wait for it to finish, and use the results in the rest of my normal python code. Can I create a wrapper around func_1 to allow this? What I want is essentially the following, which doesn't work: # This is the function defined by someone else async def func_1(*args): return something(*args) # This is my wrapper def func_1_wrapper(*args): return await func_1(*args) # So that I can call it like normal within the rest of my code def func_2(*args): # do something a = func_1_wrapper(*args) # do something else
[ "Essentially, you want to start an event loop (the \"engine\" of async code), submit your function to the event loop, and then wait until this function is done being executed by the event loop.\nOne approach is asyncio.run(func_1()), but you can run into issues if something else in your code already started an event loop or if you are running this in a multithreaded context.\nA simple way to handle these edge cases is to use a library like asgiref.sync which allows you to do: func_1_sync = async_to_sync(func_1) and then call func_1_sync() directly from your synchronous function. Here is a snippet from their pypi page:\n\nThese [helper functions] allow you to wrap or decorate async or sync functions to call\nthem from the other style (so you can call async functions from a\nsynchronous thread, or vice-versa).\nIn particular:\nAsyncToSync lets a synchronous subthread stop and wait while the async\nfunction is called on the main thread’s event loop, and then control\nis returned to the thread when the async function is finished.\nSyncToAsync lets async code call a synchronous function, which is run\nin a threadpool and control returned to the async coroutine when the\nsynchronous function completes. The idea is to make it easier to call\nsynchronous APIs from async code and asynchronous APIs from\nsynchronous code so it’s easier to transition code from one style to\nthe other.\n\nAnother library that does this is syncer:\n\nSometimes (mainly in test) we need to convert asynchronous functions\nto normal, synchronous functions and run them synchronously. It can be\ndone by ayncio.get_event_loop().run_until_complete(), but it’s quite\nlong…\nSyncer makes this conversion easy.\n\nConvert coroutine-function (defined by aync def) to normal (synchronous) function\nRun coroutines synchronously\nSupport both async def and decorator (@asyncio.coroutine) style\n\n\n", "If you just want to run it, and the code is already having a loop running in the background, try next(coro.__await__())\n" ]
[ 1, 0 ]
[]
[]
[ "async_await", "python", "python_asyncio" ]
stackoverflow_0068744830_async_await_python_python_asyncio.txt
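A minimal sketch of the asgiref approach the first answer describes, applied to the question's func_2 (asgiref is a separate install; func_1 is the black-box coroutine from the question):

from asgiref.sync import async_to_sync

def func_2(*args):
    a = async_to_sync(func_1)(*args)   # runs func_1 to completion and returns its result
    # ... the rest of the ordinary synchronous code ...
    return a

When no event loop is running at all, the standard-library equivalent is a = asyncio.run(func_1(*args)).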
Q: Printing from Numba CUDA kernel (In google Colab) I'm trying to learn CUDA for python using Numba in a Google Colab jupyter notebook. To learn how to apply 3D thread allocation for nested loops I wrote the following kernel: from numba import cuda as cd # Kernel to loop over 3D grid @cd.jit def grid_coordinate_GPU(): i = cd.blockDim.x * cd.blockIdx.x + cd.threadIdx.x j = cd.blockDim.y * cd.blockIdx.y + cd.threadIdx.y k = cd.blockDim.z * cd.blockIdx.z + cd.threadIdx.z print(f"[{i},{j},{k}]") # Grid Dimensions Nx = 2 Ny = 2 Nz = 2 threadsperblock = (1,1,1) blockspergrid = (Nx,Ny,Nz) grid_coordinate_GPU[blockspergrid, threadsperblock]() The problem I however find is that printing the coordinates in format string does not work. The exact error I get is: TypingError: Failed in cuda mode pipeline (step: nopython frontend) No implementation of function Function(<class 'str'>) found for signature: >>> str(int64) There are 10 candidate implementations: - Of which 8 did not match due to: Overload of function 'str': File: <numerous>: Line N/A. With argument(s): '(int64)': No match. - Of which 2 did not match due to: Overload in function 'integer_str': File: numba/cpython/unicode.py: Line 2394. With argument(s): '(int64)': Rejected as the implementation raised a specific error: NumbaRuntimeError: Failed in nopython mode pipeline (step: native lowering) NRT required but not enabled During: lowering "s = call $76load_global.17(kind, char_width, length, $const84.21, func=$76load_global.17, args=[Var(kind, unicode.py:2408), Var(char_width, unicode.py:2409), Var(length, unicode.py:2407), Var($const84.21, unicode.py:2410)], kws=(), vararg=None, varkwarg=None, target=None)" at /usr/local/lib/python3.7/dist-packages/numba/cpython/unicode.py (2410) raised from /usr/local/lib/python3.7/dist-packages/numba/core/runtime/context.py:19 During: resolving callee type: Function(<class 'str'>) During: typing of call at <ipython-input-12-4a28d7f41e76> (12) To solve this I tried a couple of things. Firstly I tried to initialise the CUDA simulator by setting the environment variable NUMBA_ENABLE_CUDASIM = 1 following the Numba Documentation. This however dit not change much. Secondly I thought that the problem laid within the inability of the Jupiter notebook to print the result in the notebook instead of the terminal. I tried to solve this by following this GitHub post which instructed me to use wurlitzer. This however did not do much. Lastly I added cd.synchronize() after the call to the kernel to try and mimic the c++ example I tried to implement in the first place. This sadly did not work either. It would be amazing if someone could help me out! A: The simple solution was to skip the formatted string and just use print(i,j,k) within the kernel instead.
Printing from Numba CUDA kernel (In google Colab)
I'm trying to learn CUDA for python using Numba in a Google Colab Jupyter notebook. To learn how to apply 3D thread allocation for nested loops I wrote the following kernel: from numba import cuda as cd # Kernel to loop over 3D grid @cd.jit def grid_coordinate_GPU(): i = cd.blockDim.x * cd.blockIdx.x + cd.threadIdx.x j = cd.blockDim.y * cd.blockIdx.y + cd.threadIdx.y k = cd.blockDim.z * cd.blockIdx.z + cd.threadIdx.z print(f"[{i},{j},{k}]") # Grid Dimensions Nx = 2 Ny = 2 Nz = 2 threadsperblock = (1,1,1) blockspergrid = (Nx,Ny,Nz) grid_coordinate_GPU[blockspergrid, threadsperblock]() The problem I find, however, is that printing the coordinates in an f-string does not work. The exact error I get is: TypingError: Failed in cuda mode pipeline (step: nopython frontend) No implementation of function Function(<class 'str'>) found for signature: >>> str(int64) There are 10 candidate implementations: - Of which 8 did not match due to: Overload of function 'str': File: <numerous>: Line N/A. With argument(s): '(int64)': No match. - Of which 2 did not match due to: Overload in function 'integer_str': File: numba/cpython/unicode.py: Line 2394. With argument(s): '(int64)': Rejected as the implementation raised a specific error: NumbaRuntimeError: Failed in nopython mode pipeline (step: native lowering) NRT required but not enabled During: lowering "s = call $76load_global.17(kind, char_width, length, $const84.21, func=$76load_global.17, args=[Var(kind, unicode.py:2408), Var(char_width, unicode.py:2409), Var(length, unicode.py:2407), Var($const84.21, unicode.py:2410)], kws=(), vararg=None, varkwarg=None, target=None)" at /usr/local/lib/python3.7/dist-packages/numba/cpython/unicode.py (2410) raised from /usr/local/lib/python3.7/dist-packages/numba/core/runtime/context.py:19 During: resolving callee type: Function(<class 'str'>) During: typing of call at <ipython-input-12-4a28d7f41e76> (12) To solve this I tried a couple of things. Firstly I tried to initialise the CUDA simulator by setting the environment variable NUMBA_ENABLE_CUDASIM = 1 following the Numba Documentation. This, however, did not change much. Secondly I thought that the problem lay with the inability of the Jupyter notebook to print the result in the notebook instead of the terminal. I tried to solve this by following this GitHub post which instructed me to use wurlitzer. This however did not do much. Lastly I added cd.synchronize() after the call to the kernel to try and mimic the c++ example I tried to implement in the first place. This sadly did not work either. It would be amazing if someone could help me out!
[ "The simple solution was to skip the formatted string and just use print(i,j,k) within the kernel instead.\n" ]
[ 2 ]
[]
[]
[ "cuda", "google_colaboratory", "numba", "printing", "python" ]
stackoverflow_0074475992_cuda_google_colaboratory_numba_printing_python.txt
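The accepted fix, put back into the question's kernel (CUDA device-side print accepts comma-separated arguments but not f-strings; this needs a CUDA-capable runtime):

from numba import cuda as cd

@cd.jit
def grid_coordinate_GPU():
    i = cd.blockDim.x * cd.blockIdx.x + cd.threadIdx.x
    j = cd.blockDim.y * cd.blockIdx.y + cd.threadIdx.y
    k = cd.blockDim.z * cd.blockIdx.z + cd.threadIdx.z
    print(i, j, k)   # plain print works inside kernels; f-strings do not compile

grid_coordinate_GPU[(2, 2, 2), (1, 1, 1)]()
cd.synchronize()   # flush the kernel's output before the script moves on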
Q: Pytorch Batchwise block diagonal I have two tensors containing batches of matrices of the same batch size (first dimension) but different matrix structure (all other dimensions). For example, A of shape (n,d,d) and B of shape (n,e,e). Now I would like to build block diagonals of A and B for all n, so that the output has shape (n,(d+e),(d+e)). Is there an implementation for a problem like this? I could only find torch.block_diag, which is not suited for dimensions higher than 2. A: Unfortunately there's no vectorized implementation, you'd have to loop through the batch: A = torch.rand((2, 2, 2)) B = torch.rand((2, 3, 3)) C = torch.zeros((2, 5, 5)) for i in range(2): C[i] = torch.block_diag(A[i], B[i])
Pytorch Batchwise block diagonal
I have two tensors containing batches of matrices of the same batch size (first dimension) but different matrix structure (all other dimensions). For example, A of shape (n,d,d) and B of shape (n,e,e). Now I would like to build block diagonals of A and B for all n, so that the output has shape (n,(d+e),(d+e)). Is there an implementation for a problem like this? I could only find torch.block_diag, which is not suited for dimensions higher than 2.
[ "Unfortunately there's no vectorized implementation, you'd have to loop through the batch:\nA = torch.rand((2, 2, 2))\nB = torch.rand((2, 3, 3))\nC = torch.zeros((2, 5, 5))\nfor i in range(2):\n C[i] = torch.block_diag(A[i], B[i])\n\n" ]
[ 0 ]
[]
[]
[ "diagonal", "python", "pytorch", "torch" ]
stackoverflow_0074487220_diagonal_python_pytorch_torch.txt
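For this two-tensor case the loop can in fact be avoided: every A block sits in the top-left corner and every B block in the bottom-right, so plain slice assignment fills the whole batch at once (a sketch with the answer's shapes):

import torch

n, d, e = 2, 2, 3
A = torch.rand((n, d, d))
B = torch.rand((n, e, e))

C = torch.zeros((n, d + e, d + e))
C[:, :d, :d] = A   # top-left block of every matrix in the batch
C[:, d:, d:] = B   # bottom-right block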
Q: How to make a filename dynamic when extracting a table from BigQuery to a GCS Bucket? I am new to Stack Overflow, and this is my first post. I am also new to GCP, and I am writing a Python script (Airflow DAG) to extract a table from BigQuery to a GCS Bucket using the following: project = "bigquery-public-data" dataset_id = "samples" table_id = "my_dataset" destination_uri = "gs://{}/{}".format(bucket_name, "mydataset.csv") dataset_ref = bigquery.DatasetReference(project, dataset_id) table_ref = dataset_ref.table(table_id) extract_job = client.extract_table(     table_ref,     destination_uri,     # Location must match that of the source table.     location="US", )  # API request extract_job.result()  # Waits for job to complete. print(     "Exported {}:{}.{} to {}".format(project, dataset_id, table_id, destination_uri) ) The issue I have is, how do I makw my filename dynamic? I am wanting my filename to look something like this: my_file_mmddyyyy.csv. How do I go about doing this, as I am really not sure what to do. Any help would be great. Thank you. I have tried looking at the GCP documentation, but I have not had much looking finding an answer to what I am wanting to achieve. A: You can use the following operator with Airflow : BigQueryToGCSOperator : from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator date_str = ... BigQueryToGCSOperator( task_id="task_id", source_project_dataset_table="{my_project}:{my_dataset}.{my_table}", export_format='CSV', destination_cloud_storage_uris=[ f"gs://my_bucket/my_folder/my_file_{date_str}" ], field_delimiter=',' ) source_project_dataset_table is the BigQuery source table destination_cloud_storage_uris is the destination file name in the destination bucket, you can generate your file name as needed dynamically.
How to make a filename dynamic when extracting a table from BigQuery to a GCS Bucket?
I am new to Stack Overflow, and this is my first post. I am also new to GCP, and I am writing a Python script (Airflow DAG) to extract a table from BigQuery to a GCS Bucket using the following: project = "bigquery-public-data" dataset_id = "samples" table_id = "my_dataset" destination_uri = "gs://{}/{}".format(bucket_name, "mydataset.csv") dataset_ref = bigquery.DatasetReference(project, dataset_id) table_ref = dataset_ref.table(table_id) extract_job = client.extract_table(     table_ref,     destination_uri,     # Location must match that of the source table.     location="US", )  # API request extract_job.result()  # Waits for job to complete. print(     "Exported {}:{}.{} to {}".format(project, dataset_id, table_id, destination_uri) ) The issue I have is, how do I make my filename dynamic? I want my filename to look something like this: my_file_mmddyyyy.csv. How do I go about doing this, as I am really not sure what to do. Any help would be great. Thank you. I have tried looking at the GCP documentation, but I have not had much luck finding an answer to what I am wanting to achieve.
[ "You can use the following operator with Airflow : BigQueryToGCSOperator :\nfrom airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator\n\ndate_str = ...\n\nBigQueryToGCSOperator(\n task_id=\"task_id\",\n source_project_dataset_table=\"{my_project}:{my_dataset}.{my_table}\",\n export_format='CSV',\n destination_cloud_storage_uris=[\n f\"gs://my_bucket/my_folder/my_file_{date_str}\"\n ],\n field_delimiter=','\n )\n\n\nsource_project_dataset_table is the BigQuery source table\ndestination_cloud_storage_uris is the destination file name in the destination bucket, you can generate your file name as needed dynamically.\n\n" ]
[ 0 ]
[]
[]
[ "airflow", "google_bigquery", "google_cloud_platform", "google_cloud_storage", "python" ]
stackoverflow_0074487080_airflow_google_bigquery_google_cloud_platform_google_cloud_storage_python.txt
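The my_file_mmddyyyy.csv name from the question can be built with strftime and used either in the plain client code or in the operator's destination URI (a sketch; bucket_name comes from the question's setup):

from datetime import datetime

date_str = datetime.now().strftime("%m%d%Y")   # e.g. 11302022
destination_uri = "gs://{}/{}".format(bucket_name, "my_file_{}.csv".format(date_str))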
Q: python decoding bytes from a GPS Module GY-NEO6MV2 connected to raspberry pi I have a GPS Module GY-NEO6MV2 connected to a raspberry pi 3. When I use the gpsmon command I get the data perfectly the output of gpsmon but I'm trying to use python to read the data from the serial. This is my code import serial ser = serial.Serial('/dev/ttyACM0') ser.flushInput() while True: try: ser_bytes = ser.readline() decoded_bytes = ser_bytes print(decoded_bytes) except: break output b'\xb5b\x01\x064\x00\x00\xda@\x12T\xe9\x05\x00\xb3\x08\x03\xdd/\xb6l\x1b\xd0\xb6P\x16\xfc\x9a\xec\r\xb1\x05\x00\x00\x16\x00\x00\x00\xf0\xff\xff\xff\x04\x00\x00\x00\\\x00\x00\x00\xcb\x00\x03\x08"g\x01\x00\xa4\x84\xb5b\x010\xb0\x00\x00\xda@\x12\x0e\x02\x00\x00\x00\x01\r\x04\x12\x1d\xb7\x00K\x00\x00\x00\x05\x03\x04\x04\x10?\\\x00\x00\x00\x00\x00\x03\x04\r\x04\r.\t\x00\x19\x00\x00\x00\x01\x06\r\x05\x1f\x127\x01~\xfe\xff\xff\x0f\x07\r\x07%,\xe5\x00m\x00\x00\x00\t\t\r\x05\x1d(=\x016\x03\x00\x00\n' b'T\x00\x00\x00\x00\x00\x07\x15\r\x04\x10\x0c\xab\x00k\x07\x00\x00\x0e\x1a\r\x04\x16\x119\x00\n' b'\x10\x01\x00\xa5\x00\x00\x00\x00\x00\x00\x08\x1e\r\x04\x1c\n' b'\xdf\x00{\xf9\xff\xff\x0cx\x10\x01\x00\x19\xff\x00\x00\x00\x00\x00\x04|\x10\x01\x00:\xdc\x00\x00\x00\x00\x00\x0b~\x10\x01\x00<\xd6\x00\x00\x00\x00\x00`\xc6\xb5b\x01\x04\x12\x00\x00\xda@\x12\xde\x00\xcb\x00X\x00\xb6\x00[\x007\x00H\x00\xd4\xb0\xb5b\x01 \x10\x00\x00\xda@\x12T\xe9\x05\x00\xb3\x08\x12\x07\x16\x00\x00\x00\x89\xa0' b'\xb5b\x01\x064\x00\xe8\xdd@\x12U\t\x06\x00\xb3\x08\x03\xddY\xb6l\x1b\xce\xb6P\x16\x07\x9b\xec\r\xc1\x05\x00\x00\x11\x00\x00\x00\xf8\xff\xff\xff\x01\x00\x00\x00`\x00\x00\x00\xdc\x00\x03\x08"g\x01\x00\n' b'\xb0\x00\xe8\xdd@\x12\x0e\x02\x00\x00\x00\x01\r\x04\x13\x1d\xb7\x00\xb2\x00\x00\x00\x05\x03\x04\x04\x10?\\\x00\x00\x00\x00\x00\x03\x04\r\x04\x0e.\t\x00\xa2\x00\x00\x00\x01\x06\r\x06 \x127\x01u\xfe\xff\xff\x0f\x07\r\x07&,\xe5\x00\x80\x00\x00\x00\t\t\r\x05\x1d(=\x01=\x03\x00\x00\n' I tried using decoded_bytes = ser_bytes.decode('utf-8') I get this error Traceback (most recent call last): File "/test10.py", line 8, in <module> decoded_bytes = ser_bytes.decode("utf-8") UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 0: invalid start byte and tried using the unicode_escape decoded_bytes = ser_bytes.decode('unicode_escape') and got this rubbish output òj:2�*rþÿÿ xÿ|:Ü ´("gæµb0°`ÉUxÅUýédÔ_BD¾Éµb xÅUç�Dwµb4`ÉUw·Ý�¼l▒¿P°¢ì 9q *IK;þÿÿ CðÿÿYPuTTY ▒ %åYþÿÿ xÿ|:Ü I'm tring to read the line Thank you A: The data is packed info byte-array to unpack data use the Python struct library. >>> import struct >>> data =b'\x10\x01\x00\xa5\x00\x00\x00\x00\x00\x00\x08\x1e\r\x04\x1c\n' >>> struct.unpack("4f", data) (-1.1102590235254148e-16, 0.0, 7.199780051661553e-21, 7.511888650386347e-33)
python decoding bytes from a GPS Module GY-NEO6MV2 connected to raspberry pi
I have a GPS Module GY-NEO6MV2 connected to a raspberry pi 3. When I use the gpsmon command I get the data perfectly the output of gpsmon but I'm trying to use python to read the data from the serial. This is my code import serial ser = serial.Serial('/dev/ttyACM0') ser.flushInput() while True: try: ser_bytes = ser.readline() decoded_bytes = ser_bytes print(decoded_bytes) except: break output b'\xb5b\x01\x064\x00\x00\xda@\x12T\xe9\x05\x00\xb3\x08\x03\xdd/\xb6l\x1b\xd0\xb6P\x16\xfc\x9a\xec\r\xb1\x05\x00\x00\x16\x00\x00\x00\xf0\xff\xff\xff\x04\x00\x00\x00\\\x00\x00\x00\xcb\x00\x03\x08"g\x01\x00\xa4\x84\xb5b\x010\xb0\x00\x00\xda@\x12\x0e\x02\x00\x00\x00\x01\r\x04\x12\x1d\xb7\x00K\x00\x00\x00\x05\x03\x04\x04\x10?\\\x00\x00\x00\x00\x00\x03\x04\r\x04\r.\t\x00\x19\x00\x00\x00\x01\x06\r\x05\x1f\x127\x01~\xfe\xff\xff\x0f\x07\r\x07%,\xe5\x00m\x00\x00\x00\t\t\r\x05\x1d(=\x016\x03\x00\x00\n' b'T\x00\x00\x00\x00\x00\x07\x15\r\x04\x10\x0c\xab\x00k\x07\x00\x00\x0e\x1a\r\x04\x16\x119\x00\n' b'\x10\x01\x00\xa5\x00\x00\x00\x00\x00\x00\x08\x1e\r\x04\x1c\n' b'\xdf\x00{\xf9\xff\xff\x0cx\x10\x01\x00\x19\xff\x00\x00\x00\x00\x00\x04|\x10\x01\x00:\xdc\x00\x00\x00\x00\x00\x0b~\x10\x01\x00<\xd6\x00\x00\x00\x00\x00`\xc6\xb5b\x01\x04\x12\x00\x00\xda@\x12\xde\x00\xcb\x00X\x00\xb6\x00[\x007\x00H\x00\xd4\xb0\xb5b\x01 \x10\x00\x00\xda@\x12T\xe9\x05\x00\xb3\x08\x12\x07\x16\x00\x00\x00\x89\xa0' b'\xb5b\x01\x064\x00\xe8\xdd@\x12U\t\x06\x00\xb3\x08\x03\xddY\xb6l\x1b\xce\xb6P\x16\x07\x9b\xec\r\xc1\x05\x00\x00\x11\x00\x00\x00\xf8\xff\xff\xff\x01\x00\x00\x00`\x00\x00\x00\xdc\x00\x03\x08"g\x01\x00\n' b'\xb0\x00\xe8\xdd@\x12\x0e\x02\x00\x00\x00\x01\r\x04\x13\x1d\xb7\x00\xb2\x00\x00\x00\x05\x03\x04\x04\x10?\\\x00\x00\x00\x00\x00\x03\x04\r\x04\x0e.\t\x00\xa2\x00\x00\x00\x01\x06\r\x06 \x127\x01u\xfe\xff\xff\x0f\x07\r\x07&,\xe5\x00\x80\x00\x00\x00\t\t\r\x05\x1d(=\x01=\x03\x00\x00\n' I tried using decoded_bytes = ser_bytes.decode('utf-8') I get this error Traceback (most recent call last): File "/test10.py", line 8, in <module> decoded_bytes = ser_bytes.decode("utf-8") UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 0: invalid start byte and tried using the unicode_escape decoded_bytes = ser_bytes.decode('unicode_escape') and got this rubbish output òj:2�*rþÿÿ xÿ|:Ü ´("gæµb0°`ÉUxÅUýédÔ_BD¾Éµb xÅUç�Dwµb4`ÉUw·Ý�¼l▒¿P°¢ì 9q *IK;þÿÿ CðÿÿYPuTTY ▒ %åYþÿÿ xÿ|:Ü I'm tring to read the line Thank you
[ "The data is packed info byte-array to unpack data use the Python struct library.\n>>> import struct\n>>> data =b'\\x10\\x01\\x00\\xa5\\x00\\x00\\x00\\x00\\x00\\x00\\x08\\x1e\\r\\x04\\x1c\\n'\n>>> struct.unpack(\"4f\", data)\n(-1.1102590235254148e-16, 0.0, 7.199780051661553e-21, 7.511888650386347e-33)\n\n" ]
[ 0 ]
[]
[]
[ "decode", "gps", "python", "raspberry_pi" ]
stackoverflow_0073717885_decode_gps_python_raspberry_pi.txt
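The dumps in the question all begin with b'\xb5b' - the sync bytes 0xB5 0x62 of u-blox's binary UBX protocol - so the port is emitting UBX frames rather than NMEA text, which is why decode('utf-8') fails while gpsmon (which understands both) works. A hedged sketch that walks the stream frame by frame; the field layout follows the UBX spec, and decoding the payloads themselves is better left to a dedicated library such as pyubx2:

import serial

ser = serial.Serial('/dev/ttyACM0')
while True:
    # hunt for the UBX sync pair 0xB5 0x62, one byte at a time
    if ser.read(1) != b'\xb5' or ser.read(1) != b'b':
        continue
    cls_id = ser.read(2)                            # message class and message id
    length = int.from_bytes(ser.read(2), 'little')  # payload length, little-endian
    payload = ser.read(length)
    ser.read(2)                                     # two checksum bytes, ignored in this sketch
    print(cls_id.hex(), length, payload.hex())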
Q: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: users I'm developing an application with sqlalchemy and I'm constantly getting a no such table error when I try to add data to a table. However, such a table exists. Here is my code in models.py: DATABASE_NAME = 'test_task_db.sqlite' engine = create_engine('sqlite:///:memory:', echo=True) Base = declarative_base() metadata = MetaData() class Roles(Base): __tablename__ = "roles" id = Column(Integer, primary_key=True) name = Column(VARCHAR(50)) class Users(Base): __tablename__ = 'users' id = Column(BIGINT, primary_key=True) fio = Column(TEXT) datar = Column(DATE) id_role = (Integer, ForeignKey('roles.id')) Base.metadata.create_all(engine) main.py: BASE_DIR = os.path.dirname(os.path.abspath(__file__)) db_path = os.path.join(BASE_DIR, "test_task_db.sqlite") with Session(engine) as session: newUser = Users(fio=name, datar=datetime(2012, 3, 3, 10, 10, 10), id_role=randint(1, 2)) session.add(newUser) session.commit() A: I think the issue is with your connection String for sqlite. On sqlite3, for :memory: database, you may pass an empty connection string : sqlite:// You don't need to specify :memory: at the end. If you want to specify a table, then you do something like : sqlite://{path}/dbname.db: The number of slashes depend on your path, you should add three slashes for relative paths : sqlite:///./dbname.db Four slashes for absolute paths, for example if you want to give an absolute path called "/var/dir/" sqlite:////var/dir/dbname.db I've also seen db names have a .db extension, did you specifically modify that? Here's a version of a FastAPI app I was connecting to sqlite3 so you can get an idea of what to try next. I have since migrated it to postgersql, however I'm linking a revision history with the older version. I'd suggest you to test creating the DB on the project directory so you can add the "." relative path connection string and lastly, make sure that the DB name and the extension you're providing in your models.py is correct.
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: users
I'm developing an application with sqlalchemy and I'm constantly getting a no such table error when I try to add data to a table. However, such a table exists. Here is my code in models.py: DATABASE_NAME = 'test_task_db.sqlite' engine = create_engine('sqlite:///:memory:', echo=True) Base = declarative_base() metadata = MetaData() class Roles(Base): __tablename__ = "roles" id = Column(Integer, primary_key=True) name = Column(VARCHAR(50)) class Users(Base): __tablename__ = 'users' id = Column(BIGINT, primary_key=True) fio = Column(TEXT) datar = Column(DATE) id_role = (Integer, ForeignKey('roles.id')) Base.metadata.create_all(engine) main.py: BASE_DIR = os.path.dirname(os.path.abspath(__file__)) db_path = os.path.join(BASE_DIR, "test_task_db.sqlite") with Session(engine) as session: newUser = Users(fio=name, datar=datetime(2012, 3, 3, 10, 10, 10), id_role=randint(1, 2)) session.add(newUser) session.commit()
[ "I think the issue is with your connection String for sqlite. On sqlite3, for :memory: database, you may pass an empty connection string :\nsqlite://\n\nYou don't need to specify :memory: at the end.\nIf you want to specify a table, then you do something like :\nsqlite://{path}/dbname.db:\n\nThe number of slashes depend on your path,\nyou should add three slashes for relative paths :\nsqlite:///./dbname.db\n\nFour slashes for absolute paths, for example if you want to give an absolute path called \"/var/dir/\"\nsqlite:////var/dir/dbname.db\n\nI've also seen db names have a .db extension, did you specifically modify that?\n\nHere's a version of a FastAPI app I was connecting to sqlite3 so you can get an idea of what to try next. I have since migrated it to postgersql, however I'm linking a revision history with the older version. I'd suggest you to test creating the DB on the project directory so you can add the \".\" relative path connection string and lastly, make sure that the DB name and the extension you're providing in your models.py is correct.\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0074487345_python_sqlalchemy.txt
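Beyond the connection-string format, note that models.py builds an in-memory engine ('sqlite:///:memory:') while main.py computes db_path and never uses it; if main.py ends up with its own engine or connection, the in-memory tables are invisible there. Pointing one shared engine at the file is the usual fix (a sketch, assuming both modules import this single engine; separately, id_role = (Integer, ForeignKey('roles.id')) builds a plain tuple and was presumably meant to be a Column):

from sqlalchemy import create_engine

engine = create_engine(f"sqlite:///{db_path}", echo=True)  # db_path as computed in main.py
Base.metadata.create_all(engine)                           # tables now persist in the file on disk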
Q: streamlit latex Can anyone help me with the layout of a streamlit website? I create a website with streamit, however, I use st.latex and the layout of my website looks like this. enter image description here I want the lines to be aligned to the left. Sample Code import streamlit as st st.set_page_config(layout="wide") st.latex(r''' \text{This is a gradient algorithm where step size } \alpha_{k}\text{ is chosen to minimize }\phi_{k}(\alpha)=f\left(\mathbf{x}^{k}-\alpha \nabla f\left(\mathbf{x}^{k}\right)\right). ''') st.latex(r''' \bullet\textbf{ Step 1:}\text{ Let }\mathbf{x}^{0}\text{ be a starting point.} ''') st.latex(r''' \bullet\textbf{ Step 2:}\text{Assign k := 0} ''') st.latex(r''' \bullet\textbf{ Step 3:}\text{ Find }\nabla f\left(\mathbf{x}^{k}\right).\text{ If }\nabla f\left(\mathbf{x}^{k}\right)=0,\text{ then go to }7^{\text {th }}\text{ Step, otherwise go to next step.} ''') Sample output A: Here is a hack on css on class katex-html. Add the text-align: left; Add it after st.set_page_config(layout="wide") from sample code above. st.markdown(''' <style> .katex-html { text-align: left; } </style>''', unsafe_allow_html=True ) Output
streamlit latex
Can anyone help me with the layout of a streamlit website? I created a website with streamlit; however, I use st.latex and the layout of my website looks like this. enter image description here I want the lines to be aligned to the left. Sample Code import streamlit as st st.set_page_config(layout="wide") st.latex(r''' \text{This is a gradient algorithm where step size } \alpha_{k}\text{ is chosen to minimize }\phi_{k}(\alpha)=f\left(\mathbf{x}^{k}-\alpha \nabla f\left(\mathbf{x}^{k}\right)\right). ''') st.latex(r''' \bullet\textbf{ Step 1:}\text{ Let }\mathbf{x}^{0}\text{ be a starting point.} ''') st.latex(r''' \bullet\textbf{ Step 2:}\text{Assign k := 0} ''') st.latex(r''' \bullet\textbf{ Step 3:}\text{ Find }\nabla f\left(\mathbf{x}^{k}\right).\text{ If }\nabla f\left(\mathbf{x}^{k}\right)=0,\text{ then go to }7^{\text {th }}\text{ Step, otherwise go to next step.} ''') Sample output
[ "Here is a hack on css on class katex-html. Add the text-align: left;\nAdd it after st.set_page_config(layout=\"wide\") from sample code above.\nst.markdown('''\n<style>\n.katex-html {\n text-align: left;\n}\n</style>''',\nunsafe_allow_html=True\n)\n\nOutput\n\n" ]
[ 1 ]
[]
[]
[ "latex", "python", "streamlit", "web" ]
stackoverflow_0074456419_latex_python_streamlit_web.txt
Q: does not appear to have any patterns in it. or circular import recently i started to studying django but there is a problem into my code and i cant find what exactly could be a decent fix to this problem so i thought i would be good to ask. ... from django.urls import path from . import views ulrpatterns = [ path('' , views.index) ] ... well this is my code for urls.py into the articles directory. ... from django.contrib import admin from django.urls import path , include from home import views as home_views urlpatterns = [ path("" ,home_views.index ), path("about", home_views.about), path("Contact" ,home_views.contact ), path('admin/', admin.site.urls), path('articles/' , include('articles.urls')), ] ... this one is my main urls.py from what im seeing i called articles.urls but when im running my server it keeps givin me this error raise ImproperlyConfigured(msg.format(name=self.urlconf_name)) django.core.exceptions.ImproperlyConfigured: The included URLconf '' does not appear to have any patterns in it. If you see valid patterns in the file then the issue is probably caused by a circular import. A: ulrpatterns = [ path('' , views.index) Django wants to see the urlpatterns in the urls.py file. You cannot rename it. A: I had the same error when I was doing this crash course Django project. I found at least 50% errors were made by TYPO ;) from django.urls import path from . import views **ulrpatterns** = [ path('' , views.index) ] change this variable to 'urlpatterns', done.
does not appear to have any patterns in it. or circular import
Recently I started studying Django, but there is a problem in my code and I can't find a decent fix for it, so I thought it would be good to ask. ... from django.urls import path from . import views ulrpatterns = [ path('' , views.index) ] ... This is my code for urls.py in the articles directory. ... from django.contrib import admin from django.urls import path , include from home import views as home_views urlpatterns = [ path("" ,home_views.index ), path("about", home_views.about), path("Contact" ,home_views.contact ), path('admin/', admin.site.urls), path('articles/' , include('articles.urls')), ] ... This one is my main urls.py. As far as I can see I included articles.urls, but when I run my server it keeps giving me this error: raise ImproperlyConfigured(msg.format(name=self.urlconf_name)) django.core.exceptions.ImproperlyConfigured: The included URLconf '' does not appear to have any patterns in it. If you see valid patterns in the file then the issue is probably caused by a circular import.
[ "ulrpatterns = [\n path('' , views.index)\n\nDjango wants to see the urlpatterns in the urls.py file. You cannot rename it. \n", "I had the same error when I was doing this crash course Django project. I found at least 50% errors were made by TYPO ;)\nfrom django.urls import path\nfrom . import views\n\n**ulrpatterns** = [\n path('' , views.index)\n]\n\nchange this variable to 'urlpatterns', done.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0061146625_django_python.txt
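A minimal corrected articles/urls.py for the record above, assuming the views module from the question. The error occurs because Django only looks for a module-level list named exactly urlpatterns, so the misspelled ulrpatterns is silently ignored and the included URLconf appears empty.

from django.urls import path

from . import views

# The name must be exactly "urlpatterns" for include('articles.urls') to find it.
urlpatterns = [
    path('', views.index),
]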
Q: How can I use tqdm to add a progress bar in this function? I have received from one colleague a Python script but (due to the large amount of data and the time processing) I would like to include a progress bar to check at each time its progress. from Bio import SeqIO from tqdm import tqdm import csv import pandas as pd import re import time # Find code in "metadata_silva_simplified.txt" file path_to_file = "metadata_silva_simplified.txt" df = pd.read_csv("Name.csv") counter = 0 Code=[] Names=[] Missing=[] t = time.time() for index in df.index: #print("------------------------------------------------------------------------") #print(str(counter) + "- " + df["0"][index]) name=str(df["0"][index]) with open(path_to_file,"r") as file: for line in file: coincident=0 ref=line[(line.find("|")+1):] ref=ref[:(ref.find("|")-1)] ref=ref.strip() if name == ref: #if ref.find(name) != -1: coincident=1 position = line.find("|")-1 Code.append("kraken:taxid|" + line[:position]) Names.append(name) #print("kraken:taxid|" + line[:position]) break if coincident==0: Missing.append(name) counter += 1 if (counter%1000) == 0: print(str(round(counter/5105.08))+"% completed") Code = {'Code':Code,'Name':Names} dfcodes = pd.DataFrame(Code) dfcodes.to_csv("Codes_secondpart.csv", index=False) missing = pd.DataFrame(Missing) missing.to_csv("Missing_secondpart.csv", index=False) elapsed = time.time() - t print("Mean time per sample=" + str(elapsed/counter)) I thought incorporating the progress bar through the use of tqdm Python tool, but I don't know how to include in the previous function attached above to run it. A: You already imported tqdm. Wrap your loop in a tqdm call and it should work: for index in tqdm(df.index):
How can I use tqdm to add a progress bar in this function?
I have received from one colleague a Python script but (due to the large amount of data and the time processing) I would like to include a progress bar to check at each time its progress. from Bio import SeqIO from tqdm import tqdm import csv import pandas as pd import re import time # Find code in "metadata_silva_simplified.txt" file path_to_file = "metadata_silva_simplified.txt" df = pd.read_csv("Name.csv") counter = 0 Code=[] Names=[] Missing=[] t = time.time() for index in df.index: #print("------------------------------------------------------------------------") #print(str(counter) + "- " + df["0"][index]) name=str(df["0"][index]) with open(path_to_file,"r") as file: for line in file: coincident=0 ref=line[(line.find("|")+1):] ref=ref[:(ref.find("|")-1)] ref=ref.strip() if name == ref: #if ref.find(name) != -1: coincident=1 position = line.find("|")-1 Code.append("kraken:taxid|" + line[:position]) Names.append(name) #print("kraken:taxid|" + line[:position]) break if coincident==0: Missing.append(name) counter += 1 if (counter%1000) == 0: print(str(round(counter/5105.08))+"% completed") Code = {'Code':Code,'Name':Names} dfcodes = pd.DataFrame(Code) dfcodes.to_csv("Codes_secondpart.csv", index=False) missing = pd.DataFrame(Missing) missing.to_csv("Missing_secondpart.csv", index=False) elapsed = time.time() - t print("Mean time per sample=" + str(elapsed/counter)) I thought incorporating the progress bar through the use of tqdm Python tool, but I don't know how to include in the previous function attached above to run it.
[ "You already imported tqdm. Wrap your loop in a tqdm call and it should work:\nfor index in tqdm(df.index):\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074487670_python.txt
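A short sketch of the accepted fix for the record above: wrap the iterable in tqdm(). The total and desc arguments are optional niceties, and the file and column names are the ones assumed in the question.

import pandas as pd
from tqdm import tqdm

df = pd.read_csv("Name.csv")  # file name taken from the question

# tqdm() around df.index is the only change needed; the bar advances once per
# iteration and estimates the remaining time automatically.
for index in tqdm(df.index, total=len(df.index), desc="Matching names"):
    name = str(df["0"][index])
    # ... the existing per-row lookup logic from the question goes here ...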
Q: pandas average across dynamic number of columns I have a dataframe like as shown below customer_id revenue_m7 revenue_m8 revenue_m9 revenue_m10 1 1234 1231 1256 1239 2 5678 3425 3255 2345 I would like to do the below a) get average of revenue for each customer based on latest two columns (revenue_m9 and revenue_m10) b) get average of revenue for each customer based on latest four columns (revenue_m7, revenue_m8, revenue_m9 and revenue_m10) So, I tried the below df['revenue_mean_2m'] = (df['revenue_m10']+df['revenue_m9'])/2 df['revenue_mean_4m'] = (df['revenue_m10']+df['revenue_m9']+df['revenue_m8']+df['revenue_m7'])/4 df['revenue_mean_4m'] = df.mean(axis=1) # i also tried this but how to do for only two columns (and not all columns) But if I wish to compute average for past 12 months, then it may not be elegant to write this way. Is there any other better or efficient way to write this? I can just key in number of columns to look back and it can compute the average based on keyed in input I expect my output to be like as below customer_id revenue_m7 revenue_m8 revenue_m9 revenue_m10 revenue_mean_2m revenue_mean_4m 1 1234 1231 1256 1239 1867 1240 2 5678 3425 3255 2345 2800 3675.75 A: Use filter and slicing: # keep only the "revenue_" columns df2 = df.filter(like='revenue_') # or # df2 = df.filter(regex=r'revenue_m\d+') # get last 2/4 columns and aggregate as mean df['revenue_mean_2m'] = df2.iloc[:, -2:].mean(axis=1) df['revenue_mean_4m'] = df2.iloc[:, -4:].mean(axis=1) Output: customer_id revenue_m7 revenue_m8 revenue_m9 revenue_m10 \ 0 1 1234 1231 1256 1239 1 2 5678 3425 3255 2345 revenue_mean_2m revenue_mean_4m 0 1247.5 1240.00 1 2800.0 3675.75 if column order it not guaranteed Sort them with natural sorting # shuffle the DataFrame columns for demo df = df.sample(frac=1, axis=1) # filter and reorder the needed columns from natsort import natsort_key df2 = df.filter(regex=r'revenue_m\d+').sort_index(key=natsort_key, axis=1) A: you could try something like this in reference to this post: n_months = 4 # you could also do this in a loop for all months range(1, 12) df[f'revenue_mean_{n_months}m'] = df.iloc[:, -n_months:-1].mean(axis=1)
pandas average across dynamic number of columns
I have a dataframe like as shown below customer_id revenue_m7 revenue_m8 revenue_m9 revenue_m10 1 1234 1231 1256 1239 2 5678 3425 3255 2345 I would like to do the below a) get average of revenue for each customer based on latest two columns (revenue_m9 and revenue_m10) b) get average of revenue for each customer based on latest four columns (revenue_m7, revenue_m8, revenue_m9 and revenue_m10) So, I tried the below df['revenue_mean_2m'] = (df['revenue_m10']+df['revenue_m9'])/2 df['revenue_mean_4m'] = (df['revenue_m10']+df['revenue_m9']+df['revenue_m8']+df['revenue_m7'])/4 df['revenue_mean_4m'] = df.mean(axis=1) # i also tried this but how to do for only two columns (and not all columns) But if I wish to compute average for past 12 months, then it may not be elegant to write this way. Is there any other better or efficient way to write this? I can just key in number of columns to look back and it can compute the average based on keyed in input I expect my output to be like as below customer_id revenue_m7 revenue_m8 revenue_m9 revenue_m10 revenue_mean_2m revenue_mean_4m 1 1234 1231 1256 1239 1867 1240 2 5678 3425 3255 2345 2800 3675.75
[ "Use filter and slicing:\n# keep only the \"revenue_\" columns\ndf2 = df.filter(like='revenue_')\n# or\n# df2 = df.filter(regex=r'revenue_m\\d+')\n\n# get last 2/4 columns and aggregate as mean\ndf['revenue_mean_2m'] = df2.iloc[:, -2:].mean(axis=1)\ndf['revenue_mean_4m'] = df2.iloc[:, -4:].mean(axis=1)\n\nOutput:\n customer_id revenue_m7 revenue_m8 revenue_m9 revenue_m10 \\\n0 1 1234 1231 1256 1239 \n1 2 5678 3425 3255 2345 \n\n revenue_mean_2m revenue_mean_4m \n0 1247.5 1240.00 \n1 2800.0 3675.75 \n\nif column order it not guaranteed\nSort them with natural sorting\n# shuffle the DataFrame columns for demo\ndf = df.sample(frac=1, axis=1)\n\n# filter and reorder the needed columns\nfrom natsort import natsort_key\ndf2 = df.filter(regex=r'revenue_m\\d+').sort_index(key=natsort_key, axis=1)\n\n", "you could try something like this in reference to this post:\nn_months = 4 # you could also do this in a loop for all months range(1, 12)\n\ndf[f'revenue_mean_{n_months}m'] = df.iloc[:, -n_months:-1].mean(axis=1)\n\n" ]
[ 3, 1 ]
[]
[]
[ "dataframe", "mean", "pandas", "python", "series" ]
stackoverflow_0074487627_dataframe_mean_pandas_python_series.txt
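A parameterised sketch building on the answers above: one loop produces a revenue_mean_Nm column for any set of look-back windows, using the question's revenue_m<number> naming convention.

import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2],
    "revenue_m7": [1234, 5678],
    "revenue_m8": [1231, 3425],
    "revenue_m9": [1256, 3255],
    "revenue_m10": [1239, 2345],
})

revenue = df.filter(regex=r"revenue_m\d+")  # keep only the revenue columns

for n in (2, 4):  # any list of look-back windows, e.g. range(2, 13)
    df[f"revenue_mean_{n}m"] = revenue.iloc[:, -n:].mean(axis=1)

print(df)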
Q: Sub directory not available as package on parent package despite having __init__.py file in every folder If importing a package, it does not load its child packages, despite all folders having __init__.py. The example: mkdir -p parent/child touch parent/__init__.py touch parent/child/__init__.py Looks like: $ tree parent parent ├── child │   ├── file.py │   └── __init__.py └── __init__.py $ python3 Python 3.11.0 (main, Oct 26 2022, 14:04:35) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import parent >>> parent.child Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'parent' has no attribute 'child' This works, but is annoying when you have a lot of subdirectories to explicitly import them all. >>> from parent import child Why doesn't this work? And how would I make all my subpackage easily available on the parent package?
Sub directory not available as package on parent package despite having __init__.py file in every folder
If importing a package, it does not load its child packages, despite all folders having __init__.py. The example: mkdir -p parent/child touch parent/__init__.py touch parent/child/__init__.py Looks like: $ tree parent parent ├── child │   ├── file.py │   └── __init__.py └── __init__.py $ python3 Python 3.11.0 (main, Oct 26 2022, 14:04:35) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import parent >>> parent.child Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'parent' has no attribute 'child' This works, but is annoying when you have a lot of subdirectories to explicitly import them all. >>> from parent import child Why doesn't this work? And how would I make all my subpackage easily available on the parent package?
[]
[]
[ "Create init.py file like this it worked for me.\n$ tree\n├── __init__.py\n├── main.py\n└── parent\n ├── __init__.py\n └── child\n ├── __init__.py\n └── file.py\n\n" ]
[ -1 ]
[ "import", "python", "python_2.7", "python_3.x", "python_import" ]
stackoverflow_0074487179_import_python_python_2.7_python_3.x_python_import.txt
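Since the record above has no accepted answer: importing a package only runs parent/__init__.py, so sub-packages are not loaded automatically. The conventional fix is to import them explicitly from the parent's __init__.py; a minimal sketch using the layout from the question, with an optional pkgutil variant for many sub-packages.

# parent/__init__.py
# Re-export each sub-package so that `import parent` also exposes `parent.child`.
from . import child

# Alternative when there are many sub-packages: discover them at import time.
# import importlib, pkgutil
# for _info in pkgutil.iter_modules(__path__):
#     importlib.import_module(f"{__name__}.{_info.name}")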
Q: How can I read a c3d file on Python? I'm trying to read a c3d file on Python with the btk library but I didn't succeed. I downloaded the library here https://code.google.com/archive/p/b-tk/downloads for Windows 10 (64 bit) and then wrote this code: A: try https://pypi.org/project/c3d/ or https://github.com/pyomeca/pyomeca its easier with them than with btk toolkit. You will need to see docs to get it done. I am using those to read only, then switch to numpy or pandas.
How can I read a c3d file on Python?
I'm trying to read a c3d file on Python with the btk library but I didn't succeed. I downloaded the library here https://code.google.com/archive/p/b-tk/downloads for Windows 10 (64 bit) and then wrote this code:
[ "try https://pypi.org/project/c3d/\nor https://github.com/pyomeca/pyomeca\nits easier with them than with btk toolkit. You will need to see docs to get it done. I am using those to read only, then switch to numpy or pandas.\n" ]
[ 0 ]
[]
[]
[ "installation", "jsc3d", "pandas", "python", "windows" ]
stackoverflow_0072137839_installation_jsc3d_pandas_python_windows.txt
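Because the question's btk code was lost with the screenshot, here is a hedged sketch of the answer's suggestion using the c3d package's Reader interface as documented on PyPI; the file name is hypothetical and the exact call signatures should be checked against the current c3d documentation.

import c3d  # pip install c3d

# Hypothetical file name; replace with a real capture file.
with open("trial01.c3d", "rb") as handle:
    reader = c3d.Reader(handle)
    # According to the package docs, read_frames() yields
    # (frame_number, points, analog) for each frame.
    for frame_no, points, analog in reader.read_frames():
        print(frame_no, points.shape, analog.shape)
        break  # just inspect the first frame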
Q: How to zip 2D arrays a=np.array([[1,2,3],[4,5,6],[7,8,9]]) b=np.array([[1,2,3],[4,5,6],[7,8,9]]) I've 2 identical 2D arrays, I'm trying to zip them element-wise. It should look like: [[(1,1) (2,2), (3,3)] [(4,4) (5,5) (6,6)] [(7,7) (8,8) (9,9)]] I've tried the method below but it didn't work out. First flatten the arrays, zip them, convert it into a list, then convert it into an array and reshape it. np.array(list(zip(np.ndarray.flatten(a),np.ndarray.flatten(b)))).reshape(a.shape) I'm getting the following error cannot reshape array of size 18 into shape (3,3) It's not treating the elements (1,1) (2,2) etc. of the final array as tuples but as individual elements. Hence, 18 elements. This question has been posted once but I didn't find an answer that worked for me. A: Don't zip, use numpy native functions! You want a dstack: out = np.dstack([a, b]) output: array([[[1, 1], [2, 2], [3, 3]], [[4, 4], [5, 5], [6, 6]], [[7, 7], [8, 8], [9, 9]]])
How to zip 2D arrays
a=np.array([[1,2,3],[4,5,6],[7,8,9]]) b=np.array([[1,2,3],[4,5,6],[7,8,9]]) I've 2 identical 2D arrays, I'm trying to zip them element-wise. It should look like: [[(1,1) (2,2), (3,3)] [(4,4) (5,5) (6,6)] [(7,7) (8,8) (9,9)]] I've tried the method below but it didn't work out. First flatten the arrays, zip them, convert it into a list, then convert it into an array and reshape it. np.array(list(zip(np.ndarray.flatten(a),np.ndarray.flatten(b)))).reshape(a.shape) I'm getting the following error cannot reshape array of size 18 into shape (3,3) It's not treating the elements (1,1) (2,2) etc. of the final array as tuples but as individual elements. Hence, 18 elements. This question has been posted once but I didn't find an answer that worked for me.
[ "Don't zip, use numpy native functions! You want a dstack:\nout = np.dstack([a, b])\n\noutput:\narray([[[1, 1],\n [2, 2],\n [3, 3]],\n\n [[4, 4],\n [5, 5],\n [6, 6]],\n\n [[7, 7],\n [8, 8],\n [9, 9]]])\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "numpy_ndarray", "python", "python_zip" ]
stackoverflow_0074487809_arrays_numpy_numpy_ndarray_python_python_zip.txt
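An equivalent spelling of the dstack answer above that generalises to more than two arrays: np.stack along the last axis. A short sketch on the arrays from the question.

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

paired = np.stack([a, b], axis=-1)  # shape (3, 3, 2), same result as np.dstack
print(paired[0, 0])                 # -> [1 1], the pair for element (0, 0)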
Q: How to catch scrapy response from spider to pipeline? I need all the scrapy response with settings, pipelines, urls and everything in pipeline where i create model objects? Is there any way of catching it? pipeline.py class ScraperPipeline(object): def process_item(self, item, spider): logger = get_task_logger("logs") logger.info("Pipeline activated.") id = item['id'][0] user= item['user'][0] text = item['text'][0] Mail.objects.create(user=User.objects.get_or_create( id=id, user=user), text=text, date=today) logger.info(f"Pipeline disacvtivated") spider.py class Spider(CrawlSpider): name = 'spider' allowed_domains = ['xxx.com'] def start_requests(self): urls = [ 'xxx.com', ] for url in urls: yield scrapy.Request(url=url, callback=self.parse, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, ' 'like Gecko) Chrome/107.0.0.0 Safari/537.36'}) def parse(self, response): item = MailItem() for row in response.xpath('xpath thins'): ip['id'] = row.xpath('td[1]//text()').extract_first(), ip['user'] = row.xpath('td[2]//text()').extract_first(), ip['text'] = row.xpath('td[3]//text()').extract_first(), yield item I've tried to call response from pipeline, but i have only item. Also the things from created object are not enough from me. Any ideas? A: You can pass the full response along with the item in your callback methods if you need access to the response or request in your pipeline. For example: class SpiderClass(scrapy.Spider): ... ... def parse(self, response): for i in response.xpath(...): field1 = ... yield {'field1': field1, 'response': response} Then in your pipeline you will have access to the response as a field of the item in the process_item method. You can also access the settings from this method by using the crawler attribute of the spider argument. For example: class MyPipeline: def process_item(self, item, spider): response = item['response'] request = response.request settings = spider.crawler.settings ... do something del item['response'] return item Then you just need to activate the pipeline in your settings.
How to catch scrapy response from spider to pipeline?
I need all the scrapy response with settings, pipelines, urls and everything in pipeline where i create model objects? Is there any way of catching it? pipeline.py class ScraperPipeline(object): def process_item(self, item, spider): logger = get_task_logger("logs") logger.info("Pipeline activated.") id = item['id'][0] user= item['user'][0] text = item['text'][0] Mail.objects.create(user=User.objects.get_or_create( id=id, user=user), text=text, date=today) logger.info(f"Pipeline disacvtivated") spider.py class Spider(CrawlSpider): name = 'spider' allowed_domains = ['xxx.com'] def start_requests(self): urls = [ 'xxx.com', ] for url in urls: yield scrapy.Request(url=url, callback=self.parse, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, ' 'like Gecko) Chrome/107.0.0.0 Safari/537.36'}) def parse(self, response): item = MailItem() for row in response.xpath('xpath thins'): ip['id'] = row.xpath('td[1]//text()').extract_first(), ip['user'] = row.xpath('td[2]//text()').extract_first(), ip['text'] = row.xpath('td[3]//text()').extract_first(), yield item I've tried to call response from pipeline, but i have only item. Also the things from created object are not enough from me. Any ideas?
[ "You can pass the full response along with the item in your callback methods if you need access to the response or request in your pipeline.\nFor example:\nclass SpiderClass(scrapy.Spider):\n ...\n ...\n\n def parse(self, response):\n for i in response.xpath(...):\n field1 = ...\n yield {'field1': field1, 'response': response}\n\nThen in your pipeline you will have access to the response as a field of the item in the process_item method. You can also access the settings from this method by using the crawler attribute of the spider argument.\nFor example:\nclass MyPipeline:\n\n def process_item(self, item, spider):\n response = item['response']\n request = response.request\n settings = spider.crawler.settings\n ... do something \n del item['response']\n return item\n\nThen you just need to activate the pipeline in your settings.\n" ]
[ 0 ]
[]
[]
[ "django", "pipeline", "python", "scrapy" ]
stackoverflow_0074483604_django_pipeline_python_scrapy.txt
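If only the crawler settings are needed in the pipeline (rather than the full response), Scrapy's standard from_crawler hook avoids attaching the response to every item as the answer above does. A minimal sketch; the setting name is hypothetical.

class ScraperPipeline:
    def __init__(self, settings):
        # Hypothetical custom setting, shown only for illustration.
        self.my_flag = settings.getbool("MY_CUSTOM_FLAG", False)

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this when building the pipeline and passes the crawler,
        # which owns the settings, stats and signals.
        return cls(crawler.settings)

    def process_item(self, item, spider):
        # spider.crawler.settings is also reachable here, as in the answer.
        return item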
Q: Time Limit exceeded on LeetCode I'm trying to solve the Leetcode's Two Sum problem (https://leetcode.com/problems/two-sum/) and my code runs well on small lists, but the website outputs me 'time limit exceeded' when trying this list and target (https://leetcode.com/submissions/detail/845707290/testcase/) (19999), although my coding environment outputs [9998, 9999] (after some time though) x = 0 y = 1 while x < len(nums): if x == y: y += 1 if (nums[x] + nums[y]) == target: L = [x, y] print(L) break if y == len(nums) - 1: x += 1 y = 0 if (nums[x] + nums[y]) == target: L = [x, y] print(L) break #if x == len(nums) - 1: # y += 1 # x = 0 elif (nums[x] + nums[y]) == target: L = [x, y] print(L) break y += 1 (My code in Leetcode contains return instead of print as it's a part of function) Thanks. Here is the code on LeetCode class Solution: def twoSum(self, nums: List[int], target: int) -> List[int]: x = 0 y = 1 while x < len(nums): if x == y: y += 1 if (nums[x] + nums[y]) == target: L = [x, y] return L break if y == len(nums) - 1: x += 1 y = 0 if (nums[x] + nums[y]) == target: L = [x, y] return L break #if x == len(nums) - 1: # y += 1 # x = 0 if (nums[x] + nums[y]) == target: L = [x, y] return L break y += 1 UPDATE 1 class Solution: def twoSum(self, nums: List[int], target: int) -> List[int]: x = 0 y = 1 while x < len(nums): if (nums[x] + nums[y]) == target: L = [x, y] return L if y == len(nums) - 1: x += 1 y = x + 1 if (nums[x] + nums[y]) == target: L = [x, y] return L y += 1 A: Use a dictionary to create a performant lookup. nums = [1,5,2,7,21] target = 23 lookup = { k:v for (v,k) in enumerate(nums) } for a in nums: b = target - a if lookup.get(b, None): print([lookup[a], lookup[b]]) break Output: [2, 4] Moving lookup.get() to the assignment of b will improve performance further: def sum_of_two(nums, target): lookup = { value:index for (index, value) in enumerate(nums) } for a in nums: b = lookup.get(target - a, None) if b is not None: return([lookup[a], b]) EDIT: As pointed out by @Nineteendo in the comments, the straight up dictionary lookup can't handle duplicates in the list. However if the dictionary is populated while iterating through the list of numbers then this case is handled. def sum_of_two_edit(nums, target): lookup = {} for i, a in enumerate(nums): b = target - a j = lookup.get(b, None) if j is not None: return [j, i] lookup[a] = i Sample input: nums = [1,2,3,3,6] target = 6 sum_of_two_edit(nums, target) Result: [2, 3] A: I think the code isn't quite linearly scaling yet, maybe search further. (Definitely use datetime to track the time). 
Attempt 1 of Mohamed Hassan: from datetime import datetime start = datetime.now() nums = list(range(1,10_000 + 1)) target = 19_999 x = 0 y = 1 while x < len(nums): if x == y: y += 1 if (nums[x] + nums[y]) == target: print([x, y]) break if y == len(nums) - 1: x += 1 y = 0 if (nums[x] + nums[y]) == target: print([x, y]) break if (nums[x] + nums[y]) == target: print([x, y]) break y += 1 print("Finished in:", datetime.now() - start) Result: [9998, 9999] Finished in: 0:04:43.237951 Attempt 2 of Mohamed Hassan, using a class: from datetime import datetime from typing import List class Solution: def twoSum(self, nums: List[int], target: int) -> List[int]: x = 0 y = 1 while x < len(nums): if x == y: y += 1 if (nums[x] + nums[y]) == target: return [x, y] if y == len(nums) - 1: x += 1 y = 0 if (nums[x] + nums[y]) == target: return [x, y] if (nums[x] + nums[y]) == target: return [x, y] y += 1 start = datetime.now() nums = list(range(1, 10_000 + 1)) target = 19_999 print(Solution().twoSum(nums, target)) print("Finished in:", datetime.now() - start) Result: [9998, 9999] Finished in: 0:03:47.205079 attempt 3 of Mohamed Hassan, with some optimisations: from datetime import datetime from typing import List class Solution: def twoSum(self, nums: List[int], target: int) -> List[int]: x = 0 y = 1 while x < len(nums): if (nums[x] + nums[y]) == target: return [x, y] if y == len(nums) - 1: x += 1 y = x + 1 if (nums[x] + nums[y]) == target: return [x, y] y += 1 start = datetime.now() nums = list(range(1, 10_000 + 1)) target = 19_999 print(Solution().twoSum(nums, target)) print("Finished in:", datetime.now() - start) Result: [9998, 9999] Finished in: 0:01:25.186796 Attempt 1 of me, Nineteendo, using a for loop instead: from datetime import datetime start = datetime.now() nums = list(range(1,10_000 + 1)) target = 19_999 for i, x in enumerate(nums): for j, y in enumerate(nums[i + 1:]): if x + y == target: break if x + y == target: print([i, i + 1 + j]) break print("Finished in:", datetime.now() - start) Result: [9998, 9999] Finished in: 0:00:28.605655 Attempt 2 of me, Nineteendo, using a function too to avoid the second break, it's also a lot faster: from datetime import datetime def find_sum(nums, target): for i, x in enumerate(nums): for j, y in enumerate(nums[i + 1:]): if x + y == target: return [i, i + 1 + j] start = datetime.now() nums = list(range(1, 10_000 + 1)) target = 19_999 print(find_sum(nums, target)) print("Finished in:", datetime.now() - start) Result: [9998, 9999] Finished in: 0:00:18.117496 Attempt 3 of Dan Nagle, using a math solution, and a function: from datetime import datetime def sum_of_two_edit(nums, target): lookup = {} for i, a in enumerate(nums): b = target - a j = lookup.get(b, None) if j is not None: return [j, i] lookup[a] = i start = datetime.now() nums = list(range(1, 10_000 + 1)) target = 19_999 print(sum_of_two_edit(nums, target)) print("Finished in:", datetime.now() - start) Result: [9998, 9999] Finished in: 0:00:00.010349 Attempt 1 of Cobra, also using a for loop: from datetime import datetime def twoSum(nums, target): d = {} for i, n in enumerate(nums): if (j := d.get(target - n)) is not None: return [i, j] d[n] = i start = datetime.now() nums = list(range(1, 10_000 + 1)) target = 19_999 print(twoSum(nums, target)) print("Finished in:", datetime.now() - start) Result: [9999, 9998] Finished in: 0:00:00.009762 A: If you're looking for a reasonably efficient implementation then you could try this: def twoSum(nums, target): d = {} for i, n in enumerate(nums): if (b := target - n) in 
d: return [i, d[b]] d[n] = i For the list containing inclusive values 1 -> 10000 and a target of 19999, this returns [9999, 9998] in 0.0012s
Time Limit exceeded on LeetCode
I'm trying to solve the Leetcode's Two Sum problem (https://leetcode.com/problems/two-sum/) and my code runs well on small lists, but the website outputs me 'time limit exceeded' when trying this list and target (https://leetcode.com/submissions/detail/845707290/testcase/) (19999), although my coding environment outputs [9998, 9999] (after some time though) x = 0 y = 1 while x < len(nums): if x == y: y += 1 if (nums[x] + nums[y]) == target: L = [x, y] print(L) break if y == len(nums) - 1: x += 1 y = 0 if (nums[x] + nums[y]) == target: L = [x, y] print(L) break #if x == len(nums) - 1: # y += 1 # x = 0 elif (nums[x] + nums[y]) == target: L = [x, y] print(L) break y += 1 (My code in Leetcode contains return instead of print as it's a part of function) Thanks. Here is the code on LeetCode class Solution: def twoSum(self, nums: List[int], target: int) -> List[int]: x = 0 y = 1 while x < len(nums): if x == y: y += 1 if (nums[x] + nums[y]) == target: L = [x, y] return L break if y == len(nums) - 1: x += 1 y = 0 if (nums[x] + nums[y]) == target: L = [x, y] return L break #if x == len(nums) - 1: # y += 1 # x = 0 if (nums[x] + nums[y]) == target: L = [x, y] return L break y += 1 UPDATE 1 class Solution: def twoSum(self, nums: List[int], target: int) -> List[int]: x = 0 y = 1 while x < len(nums): if (nums[x] + nums[y]) == target: L = [x, y] return L if y == len(nums) - 1: x += 1 y = x + 1 if (nums[x] + nums[y]) == target: L = [x, y] return L y += 1
[ "Use a dictionary to create a performant lookup.\nnums = [1,5,2,7,21]\ntarget = 23\n\nlookup = { k:v for (v,k) in enumerate(nums) }\n\nfor a in nums:\n b = target - a\n if lookup.get(b, None):\n print([lookup[a], lookup[b]])\n break\n\nOutput:\n[2, 4]\n\nMoving lookup.get() to the assignment of b will improve performance further:\ndef sum_of_two(nums, target):\n lookup = { value:index for (index, value) in enumerate(nums) }\n for a in nums:\n b = lookup.get(target - a, None)\n if b is not None:\n return([lookup[a], b])\n\nEDIT: As pointed out by @Nineteendo in the comments, the straight up dictionary lookup can't handle duplicates in the list. However if the dictionary is populated while iterating through the list of numbers then this case is handled.\ndef sum_of_two_edit(nums, target):\n lookup = {}\n for i, a in enumerate(nums):\n b = target - a\n j = lookup.get(b, None)\n if j is not None:\n return [j, i]\n lookup[a] = i\n\nSample input:\nnums = [1,2,3,3,6]\ntarget = 6\n\nsum_of_two_edit(nums, target)\n\nResult:\n[2, 3]\n\n", "I think the code isn't quite linearly scaling yet, maybe search further. (Definitely use datetime to track the time).\nAttempt 1 of Mohamed Hassan:\nfrom datetime import datetime\nstart = datetime.now()\nnums = list(range(1,10_000 + 1))\ntarget = 19_999\nx = 0\ny = 1\nwhile x < len(nums):\n if x == y:\n y += 1\n if (nums[x] + nums[y]) == target:\n print([x, y])\n break\n if y == len(nums) - 1:\n x += 1\n y = 0\n if (nums[x] + nums[y]) == target:\n print([x, y])\n break\n if (nums[x] + nums[y]) == target:\n print([x, y])\n break\n y += 1\nprint(\"Finished in:\", datetime.now() - start)\n\nResult:\n[9998, 9999]\nFinished in: 0:04:43.237951\n\nAttempt 2 of Mohamed Hassan, using a class:\nfrom datetime import datetime\nfrom typing import List\nclass Solution:\n def twoSum(self, nums: List[int], target: int) -> List[int]:\n x = 0\n y = 1\n while x < len(nums):\n if x == y:\n y += 1\n if (nums[x] + nums[y]) == target:\n return [x, y]\n if y == len(nums) - 1:\n x += 1\n y = 0\n if (nums[x] + nums[y]) == target:\n return [x, y] \n if (nums[x] + nums[y]) == target: \n return [x, y]\n y += 1\n\nstart = datetime.now()\nnums = list(range(1, 10_000 + 1))\ntarget = 19_999\nprint(Solution().twoSum(nums, target))\nprint(\"Finished in:\", datetime.now() - start)\n\nResult:\n[9998, 9999]\nFinished in: 0:03:47.205079\n\nattempt 3 of Mohamed Hassan, with some optimisations:\nfrom datetime import datetime\nfrom typing import List\nclass Solution:\n def twoSum(self, nums: List[int], target: int) -> List[int]:\n x = 0\n y = 1\n while x < len(nums):\n if (nums[x] + nums[y]) == target:\n return [x, y]\n if y == len(nums) - 1:\n x += 1\n y = x + 1\n if (nums[x] + nums[y]) == target:\n return [x, y]\n y += 1\n\nstart = datetime.now()\nnums = list(range(1, 10_000 + 1))\ntarget = 19_999\nprint(Solution().twoSum(nums, target))\nprint(\"Finished in:\", datetime.now() - start)\n\nResult:\n[9998, 9999]\nFinished in: 0:01:25.186796\n\nAttempt 1 of me, Nineteendo, using a for loop instead:\nfrom datetime import datetime\nstart = datetime.now()\nnums = list(range(1,10_000 + 1))\ntarget = 19_999\nfor i, x in enumerate(nums):\n for j, y in enumerate(nums[i + 1:]):\n if x + y == target:\n break\n if x + y == target:\n print([i, i + 1 + j])\n break\nprint(\"Finished in:\", datetime.now() - start)\n\nResult:\n[9998, 9999]\nFinished in: 0:00:28.605655\n\nAttempt 2 of me, Nineteendo, using a function too to avoid the second break, it's also a lot faster:\nfrom datetime import datetime\ndef find_sum(nums, 
target):\n for i, x in enumerate(nums):\n for j, y in enumerate(nums[i + 1:]):\n if x + y == target:\n return [i, i + 1 + j]\n\nstart = datetime.now()\nnums = list(range(1, 10_000 + 1))\ntarget = 19_999\nprint(find_sum(nums, target))\nprint(\"Finished in:\", datetime.now() - start)\n\nResult:\n[9998, 9999]\nFinished in: 0:00:18.117496\n\nAttempt 3 of Dan Nagle, using a math solution, and a function:\nfrom datetime import datetime\ndef sum_of_two_edit(nums, target):\n lookup = {}\n for i, a in enumerate(nums):\n b = target - a\n j = lookup.get(b, None)\n if j is not None:\n return [j, i]\n lookup[a] = i\n\nstart = datetime.now()\nnums = list(range(1, 10_000 + 1))\ntarget = 19_999\nprint(sum_of_two_edit(nums, target))\nprint(\"Finished in:\", datetime.now() - start)\n\nResult:\n[9998, 9999]\nFinished in: 0:00:00.010349\n\nAttempt 1 of Cobra, also using a for loop:\nfrom datetime import datetime\ndef twoSum(nums, target):\n d = {}\n for i, n in enumerate(nums):\n if (j := d.get(target - n)) is not None:\n return [i, j]\n d[n] = i\nstart = datetime.now()\nnums = list(range(1, 10_000 + 1))\ntarget = 19_999\nprint(twoSum(nums, target))\nprint(\"Finished in:\", datetime.now() - start)\n\nResult:\n[9999, 9998]\nFinished in: 0:00:00.009762\n\n", "If you're looking for a reasonably efficient implementation then you could try this:\n def twoSum(nums, target):\n d = {}\n for i, n in enumerate(nums):\n if (b := target - n) in d:\n return [i, d[b]]\n d[n] = i \n\nFor the list containing inclusive values 1 -> 10000 and a target of 19999, this returns [9999, 9998] in 0.0012s\n" ]
[ 3, 0, 0 ]
[ "I thought about the O(n²) problematic and additonally I wanted to have it elegant :D.\nPythons tim-sort (sorted(), [].sort()) is O(n*log(n)) which is indeed better :).\nSo what I did:\n\nSort the input array (O(n * log(n))\niterate with two indizes through the array: one on the right and one on the left O(2*n) = O(n)\nWhen the result is too large: move right one to the left\nwhen the result is too low: move left one to the right\nWhen the result == target: find the indizes O(2*n) again\nTada.\n\nIn code:\ndef twoSum(self, nums: List[int], target: int) -> List[int]:\n sorted_list = list(sorted(nums)) # the most expensive part\n left, right = 0, len(nums) - 1 # start at the ends\n result = -1 # any value which is not \"possible\" suffices\n while result != target:\n if result < target:\n left += 1\n elif result > targe:t\n right -= 1\n value_left, value_right = sorted_list[left], sorted_list[right]\n result = value_left + value_right\n\n return [nums.index(value_left), nums.index(value_right)]\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074487302_python.txt
Q: Changing Time Series Dataframe to Array I need the following 'ftse_change' dataframe to be in the format of the 'ts'. I have loaded the ftse_change data as CSV into python, but the 'ts' data is already built into a package that I am loading to run a few models from. (Screenshots below). I'm not sure what the difference between the two is to begin with (namely the time series array is something I haven't seen before). And I need to be able to convert the ftse data frame into the second (array) format for it to work in the ML model. Thanks! A: Assuming what you want is to extract only the 'Change column' to the format shown in the second screenshot. This is how I would do it. arr = [[[i]] for i in ftse_change.Change.to_list()] eg: col1 = [1,2,3,4,5] col2 = [1,2,3,4,5] ftse_change = pd.DataFrame({"Date":col1, "Change": col2}) ftse_change.head() Date Change 0 1 1 1 2 2 2 3 3 3 4 4 4 5 5 arr = [[[i]] for i in ftse_change.Change.to_list()] print(arr) #gives: [[[1]], [[2]], [[3]], [[4]], [[5]]]
Changing Time Series Dataframe to Array
I need the following 'ftse_change' dataframe to be in the format of the 'ts'. I have loaded the ftse_change data as CSV into python, but the 'ts' data is already built into a package that I am loading to run a few models from. (Screenshots below). I'm not sure what the difference between the two is to begin with (namely the time series array is something I haven't seen before). And I need to be able to convert the ftse data frame into the second (array) format for it to work in the ML model. Thanks!
[ "Assuming what you want is to extract only the 'Change column' to the format shown in the second screenshot. This is how I would do it.\narr = [[[i]] for i in ftse_change.Change.to_list()]\n\neg:\ncol1 = [1,2,3,4,5]\ncol2 = [1,2,3,4,5]\n\nftse_change = pd.DataFrame({\"Date\":col1, \"Change\": col2})\nftse_change.head()\n\n Date Change\n0 1 1\n1 2 2\n2 3 3\n3 4 4\n4 5 5\narr = [[[i]] for i in ftse_change.Change.to_list()]\nprint(arr)\n#gives: [[[1]], [[2]], [[3]], [[4]], [[5]]]\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "python", "time_series" ]
stackoverflow_0074487723_arrays_python_time_series.txt
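The nested [[value]] shape per row in the record above is just an (n, 1, 1) array, so a NumPy reshape gives the same result without a Python loop; the column name follows the question's 'Change'.

import numpy as np
import pandas as pd

ftse_change = pd.DataFrame({"Date": [1, 2, 3, 4, 5], "Change": [1, 2, 3, 4, 5]})

arr = ftse_change["Change"].to_numpy().reshape(-1, 1, 1)  # shape (5, 1, 1)
print(arr.tolist())  # [[[1]], [[2]], [[3]], [[4]], [[5]]], same as the answer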
Q: Timestamp to iso I have a dataframe with timestamp of different formats one with 05-28-2022 14:05:30 and one with 06-04-2022 03:04:13.002 both I want to convert into iso format how can I do that? input output 05-28-2022 14:05:30 -> 2022-05-28T14:05:30.000+0000 06-04-2022 03:04:13.002 -> 2022-06-04T03:04:13.002+0000 A: You can use strptime() + strftime(). Here is an example: from datetime import datetime import pytz # parse str to instance first = datetime.strptime('05-28-2022 14:05:30', '%m-%d-%Y %H:%M:%S') first = first.replace(tzinfo=pytz.UTC) print(first.strftime('%Y-%m-%dT%H:%M:%S.%f%z')) print(f'{first.isoformat()}') second = datetime.strptime('06-04-2022 03:04:13.002', '%m-%d-%Y %H:%M:%S.%f') second = second.replace(tzinfo=pytz.UTC) print(second.strftime('%Y-%m-%dT%H:%M:%S.%f%z')) print(second.isoformat()) # 2022-05-28T14:05:30.000000+0000 # 2022-05-28T14:05:30+00:00 # 2022-06-04T03:04:13.002000+0000 # 2022-06-04T03:04:13.002000+00:00 See datetime docs. Also you can use other packages for dates processing / formatting: iso8601 pendulum dateutil arrow Example with dataframe: import pandas as pd import pytz from datetime import datetime df = pd.DataFrame({'date': ['05-28-2022 14:05:30', '06-04-2022 03:04:13.002']}) def convert_date(x): dt_format = '%m-%d-%Y %H:%M:%S.%f' if x.rfind('.', 1) > -1 else '%m-%d-%Y %H:%M:%S' dt = datetime.strptime(x, dt_format).replace(tzinfo=pytz.UTC) return dt.strftime('%Y-%m-%dT%H:%M:%S.%f%z') df['new_date'] = df['date'].apply(convert_date) print(df) date new_date 0 05-28-2022 14:05:30 2022-05-28T14:05:30.000000+0000 1 06-04-2022 03:04:13.002 2022-06-04T03:04:13.002000+0000
Timestamp to iso
I have a dataframe with timestamps in different formats, one like 05-28-2022 14:05:30 and one like 06-04-2022 03:04:13.002. I want to convert both into ISO format. How can I do that? input output 05-28-2022 14:05:30 -> 2022-05-28T14:05:30.000+0000 06-04-2022 03:04:13.002 -> 2022-06-04T03:04:13.002+0000
[ "You can use strptime() + strftime(). Here is an example:\nfrom datetime import datetime\nimport pytz\n\n# parse str to instance\nfirst = datetime.strptime('05-28-2022 14:05:30', '%m-%d-%Y %H:%M:%S')\nfirst = first.replace(tzinfo=pytz.UTC)\nprint(first.strftime('%Y-%m-%dT%H:%M:%S.%f%z'))\nprint(f'{first.isoformat()}')\n\nsecond = datetime.strptime('06-04-2022 03:04:13.002', '%m-%d-%Y %H:%M:%S.%f')\nsecond = second.replace(tzinfo=pytz.UTC)\nprint(second.strftime('%Y-%m-%dT%H:%M:%S.%f%z'))\nprint(second.isoformat())\n\n# 2022-05-28T14:05:30.000000+0000\n# 2022-05-28T14:05:30+00:00\n# 2022-06-04T03:04:13.002000+0000\n# 2022-06-04T03:04:13.002000+00:00\n\nSee datetime docs. Also you can use other packages for dates processing / formatting:\n\niso8601\npendulum\ndateutil\narrow\n\nExample with dataframe:\nimport pandas as pd\nimport pytz\nfrom datetime import datetime\n\n\ndf = pd.DataFrame({'date': ['05-28-2022 14:05:30', '06-04-2022 03:04:13.002']})\n\n\ndef convert_date(x):\n dt_format = '%m-%d-%Y %H:%M:%S.%f' if x.rfind('.', 1) > -1 else '%m-%d-%Y %H:%M:%S'\n dt = datetime.strptime(x, dt_format).replace(tzinfo=pytz.UTC)\n return dt.strftime('%Y-%m-%dT%H:%M:%S.%f%z')\n\n\ndf['new_date'] = df['date'].apply(convert_date)\nprint(df)\n date new_date\n0 05-28-2022 14:05:30 2022-05-28T14:05:30.000000+0000\n1 06-04-2022 03:04:13.002 2022-06-04T03:04:13.002000+0000\n\n" ]
[ 1 ]
[]
[]
[ "formatting", "iso", "pandas", "python" ]
stackoverflow_0074485470_formatting_iso_pandas_python.txt
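When the timestamps already sit in a DataFrame, as in the record above, pandas can parse both precisions and format the result in a vectorised way, which is usually faster than calling datetime.strptime per row. A short sketch on the sample dates from the question; format="mixed" needs pandas 2.0 or newer.

import pandas as pd

df = pd.DataFrame({"date": ["05-28-2022 14:05:30", "06-04-2022 03:04:13.002"]})

# format="mixed" lets the two precisions coexist; on older pandas, parse each
# format separately or fall back to the per-row approach in the answer.
parsed = pd.to_datetime(df["date"], format="mixed").dt.tz_localize("UTC")
df["new_date"] = parsed.dt.strftime("%Y-%m-%dT%H:%M:%S.%f%z")
print(df)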
Q: Activating existing conda enviornments in snakemake How do I get snakemake to activate a conda environment that already exists in my environment list? I know you can use the --use-conda with a .yaml environment file but that seems to generate a new environment which is just annoying when the environment already exists. Any help with this would be much appreciated. I have tried using the: conda: path/to/some/yamlFile but it just returns command not found errors for packages in the environment A: This isn't possible and I'd argue it's mostly a good thing. Snakemake having sole ownership of the env helps improve reproducibility by requiring one to update the YAML instead of directly manipulating the env with conda (install|update|remove). Note that such a practice of updating a YAML and recreating is a Conda best practice when mixing in Pip, and it definitely doesn't hurt to adopt it generally. Conda does a lot of hardlinking, so I wouldn't sweat the duplication too much - it's mostly superficial. Moreover, if you create a YAML from the existing environment you wish to use (conda env export > env.yaml) and give that to Snakemake, then all the identical packages that you already have downloaded will be used in the environment that Snakemake creates. If space really is such a tight resource, you can simply not use Snakemake's --use-conda flag and instead activate your named envs as part of the shell command or script you provide. I would be very careful not to manipulate those envs or at least be very diligent about tracking changes made to them. Perhaps, tracking the output of conda env export > env.yaml under version control and putting that YAML as an input file in the Snakemake rules that activate the env. A: It is possible. It is essentially an environment config issue. You need to call bash in the snakemake rules and load conda-init'd bash profiles there. Below example works with me: rule test_conda: shell: """ bash -c ' . $HOME/.bashrc # if not loaded automatically conda activate base conda deactivate' """ In addition, --use-conda is not necessary in this case at all. A: Follow up to answer by liagy, since snakemake runs with strict bash mode (set -u flag), conda activate or deactivate may throw an error showing unbound variable related to conda environment. I ended up editing parent conda.sh file which contains activate function. Doing so will temporarily disable u flag while activating or deactivating conda environments but will preserve bash strict mode for rest of snakemake workflow. Here is what I did: Edit (after backing up the original file) ~/anaconda3/etc/profile.d/conda.sh and add following from the first line within __conda_activate() block: __conda_activate() { if [[ "$-" =~ .*u.* ]]; then local bash_set_u bash_set_u="on" ## temporarily disable u flag ## allow unbound variables from conda env ## during activate/deactivate commands in ## subshell else script will fail with set -u flag ## https://github.com/conda/conda/issues/8186#issuecomment-532874667 set +u else local bash_set_u bash_set_u="off" fi # ... rest of code from the original script And also add following code at the end of __conda_activate() block to re-enable bash strict mode only if present prior to running conda activate/deactivate functions. ## reenable set -u if it was enabled prior to ## conda activate/deactivate operation if [[ "${bash_set_u}" == "on" ]]; then set -u fi } Then in Snakefile, you can have following shell commands to manage existing conda environments. 
shell:""" ## check current set flags echo "$-" ## switch conda env source ~/anaconda3/etc/profile.d/conda.sh && conda activate r-reticulate ## Confirm that set flags are same as prior to conda activate command echo "$-" ## switch conda env again conda activate dev echo "$-" which R samtools --version ## revert to previous: r-reticulate conda deactivate """ You do not need to add above patch for __conda_deactivate function as it sources activate script. PS: Editing ~/anaconda3/etc/profile.d/conda.sh is not ideal. Always backup the original and edited filed. Updating conda will most likely overwrite these changes. A: This question is still trending on Google, so an update: Since snakemake=6.14.0 (2022-01-26) using an existing, named conda environment is a supported feature. You simply put the name of the environment some-env-name into the rules conda directive (instead of the .yaml file) and use snakemake --use-conda: rule NAME: input: "table.txt" output: "plots/myplot.pdf" conda: "some-env-name" script: "scripts/plot-stuff.R" Documentation: https://snakemake.readthedocs.io/en/stable/snakefiles/deployment.html#using-already-existing-named-conda-environments Note: It is recommended to use the feature sparsely and prefer to specify a environment.yaml file instead to increase reproduceability.
Activating existing conda environments in snakemake
How do I get snakemake to activate a conda environment that already exists in my environment list? I know you can use the --use-conda with a .yaml environment file but that seems to generate a new environment which is just annoying when the environment already exists. Any help with this would be much appreciated. I have tried using the: conda: path/to/some/yamlFile but it just returns command not found errors for packages in the environment
[ "This isn't possible and I'd argue it's mostly a good thing. Snakemake having sole ownership of the env helps improve reproducibility by requiring one to update the YAML instead of directly manipulating the env with conda (install|update|remove). Note that such a practice of updating a YAML and recreating is a Conda best practice when mixing in Pip, and it definitely doesn't hurt to adopt it generally.\nConda does a lot of hardlinking, so I wouldn't sweat the duplication too much - it's mostly superficial. Moreover, if you create a YAML from the existing environment you wish to use (conda env export > env.yaml) and give that to Snakemake, then all the identical packages that you already have downloaded will be used in the environment that Snakemake creates.\n\nIf space really is such a tight resource, you can simply not use Snakemake's --use-conda flag and instead activate your named envs as part of the shell command or script you provide. I would be very careful not to manipulate those envs or at least be very diligent about tracking changes made to them. Perhaps, tracking the output of conda env export > env.yaml under version control and putting that YAML as an input file in the Snakemake rules that activate the env.\n", "It is possible. It is essentially an environment config issue. You need to call bash in the snakemake rules and load conda-init'd bash profiles there. Below example works with me:\nrule test_conda:\n shell:\n \"\"\"\n bash -c '\n . $HOME/.bashrc # if not loaded automatically\n conda activate base\n conda deactivate'\n \"\"\"\n\nIn addition, --use-conda is not necessary in this case at all.\n", "Follow up to answer by liagy, since snakemake runs with strict bash mode (set -u flag), conda activate or deactivate may throw an error showing unbound variable related to conda environment. I ended up editing parent conda.sh file which contains activate function. Doing so will temporarily disable u flag while activating or deactivating conda environments but will preserve bash strict mode for rest of snakemake workflow.\nHere is what I did:\nEdit (after backing up the original file) ~/anaconda3/etc/profile.d/conda.sh and add following from the first line within __conda_activate() block:\n__conda_activate() {\n if [[ \"$-\" =~ .*u.* ]]; then\n local bash_set_u\n bash_set_u=\"on\"\n ## temporarily disable u flag\n ## allow unbound variables from conda env\n ## during activate/deactivate commands in\n ## subshell else script will fail with set -u flag\n ## https://github.com/conda/conda/issues/8186#issuecomment-532874667 \n set +u\n else\n local bash_set_u\n bash_set_u=\"off\"\n fi\n\n# ... 
rest of code from the original script\n\nAnd also add following code at the end of __conda_activate() block to re-enable bash strict mode only if present prior to running conda activate/deactivate functions.\n ## reenable set -u if it was enabled prior to\n ## conda activate/deactivate operation\n if [[ \"${bash_set_u}\" == \"on\" ]]; then\n set -u\n fi\n}\n\nThen in Snakefile, you can have following shell commands to manage existing conda environments.\n shell:\"\"\"\n ## check current set flags\n echo \"$-\"\n ## switch conda env\n source ~/anaconda3/etc/profile.d/conda.sh && conda activate r-reticulate\n ## Confirm that set flags are same as prior to conda activate command\n echo \"$-\"\n\n ## switch conda env again\n conda activate dev\n echo \"$-\"\n which R\n samtools --version\n\n ## revert to previous: r-reticulate\n conda deactivate\n \"\"\"\n\nYou do not need to add above patch for __conda_deactivate function as it sources activate script.\nPS: Editing ~/anaconda3/etc/profile.d/conda.sh is not ideal. Always backup the original and edited filed. Updating conda will most likely overwrite these changes.\n", "This question is still trending on Google, so an update:\nSince snakemake=6.14.0 (2022-01-26) using an existing, named conda environment is a supported feature.\nYou simply put the name of the environment some-env-name into the rules conda directive (instead of the .yaml file) and use snakemake --use-conda:\nrule NAME:\n input:\n \"table.txt\"\n output:\n \"plots/myplot.pdf\"\n conda:\n \"some-env-name\"\n script:\n \"scripts/plot-stuff.R\"\n\nDocumentation: https://snakemake.readthedocs.io/en/stable/snakefiles/deployment.html#using-already-existing-named-conda-environments\nNote: It is recommended to use the feature sparsely and prefer to specify a environment.yaml file instead to increase reproduceability.\n" ]
[ 2, 2, 2, 0 ]
[]
[]
[ "conda", "python", "snakemake" ]
stackoverflow_0059107413_conda_python_snakemake.txt
Q: scala map get keys from Map as Sequence sorting by both keys and values In Python I can do: in_dd = {"aaa": 1, "bbb": 7, "zzz": 3, "hhh": 9, "ggg": 10, "ccc": 3} out_ll = ['ggg', 'hhh', 'bbb', 'aaa', 'ccc', 'zzz'] so, I want to get keys sorted by value in descending order while having keys in ascending order taking into consideration sorted values How can I do it in Scala? In Scala I know I can do: val m = Map("aaa" -> 3, "bbb" -> 7, "zzz" -> 3, "hhh" -> 9, "ggg" -> 10, "ccc" -> 3) m.toSeq.sortWith(_._2 > _._2) but I do not know how to sort by two cases. EDIT: I have tried also such approach but it does not return desired result: m.toSeq.sortWith((x,y) => x._2 > y._2 && x._1 < y._1).map(_.1) List((ggg,10), (hhh,9), (bbb,7), (ccc,3), (zzz,3), (aaa,3)) notice it shall be aaa,ccc,zzz A: In scala you could use: m.toSeq.sortBy(a => (a._2, a._1) )(Ordering.Tuple2(Ordering.Int.reverse, Ordering.String.reverse)) for List((ggg,10), (hhh,9), (bbb,7), (zzz,3), (ccc,3), (aaa,3)) and m.toSeq.sortBy(a => (-a._2, a._1) ) for List((ggg,10), (hhh,9), (bbb,7), (aaa,3), (ccc,3), (zzz,3))
scala map get keys from Map as Sequence sorting by both keys and values
In Python I can do: in_dd = {"aaa": 1, "bbb": 7, "zzz": 3, "hhh": 9, "ggg": 10, "ccc": 3} out_ll = ['ggg', 'hhh', 'bbb', 'aaa', 'ccc', 'zzz'] so, I want to get keys sorted by value in descending order while having keys in ascending order taking into consideration sorted values How can I do it in Scala? In Scala I know I can do: val m = Map("aaa" -> 3, "bbb" -> 7, "zzz" -> 3, "hhh" -> 9, "ggg" -> 10, "ccc" -> 3) m.toSeq.sortWith(_._2 > _._2) but I do not know how to sort by two cases. EDIT: I have tried also such approach but it does not return desired result: m.toSeq.sortWith((x,y) => x._2 > y._2 && x._1 < y._1).map(_.1) List((ggg,10), (hhh,9), (bbb,7), (ccc,3), (zzz,3), (aaa,3)) notice it shall be aaa,ccc,zzz
[ "In scala you could use:\nm.toSeq.sortBy(a => (a._2, a._1) )(Ordering.Tuple2(Ordering.Int.reverse, Ordering.String.reverse))\n\nfor List((ggg,10), (hhh,9), (bbb,7), (zzz,3), (ccc,3), (aaa,3))\nand\nm.toSeq.sortBy(a => (-a._2, a._1) )\n\nfor List((ggg,10), (hhh,9), (bbb,7), (aaa,3), (ccc,3), (zzz,3))\n" ]
[ 2 ]
[]
[]
[ "dictionary", "python", "scala", "sorting" ]
stackoverflow_0074487703_dictionary_python_scala_sorting.txt
Q: How do I use python pandas to read an already opened excel sheet Assuming I have an excel sheet already open, make some changes in the file and use pd.read_excel to create a dataframe based on that sheet, I understand that the dataframe will only reflect the data in the last saved version of the excel file. I would have to save the sheet first in order for pandas dataframe to take into account the change. Is there anyway for pandas or other python packages to read an opened excel file and be able to refresh its data real time (without saving or closing the file)? A: Have you tried using mitosheet package? It doesn't answer your question directly, but it allows you working on pandas dataframes as you would do in excel sheets. In this way, you may edit the data on the fly as in excel and still get a pandas dataframe as a result (meanwhile generating the code to perform the same operations with python). Does this help? A: There is no way to do this. The table is not saved to disk, so pandas can not read it from disk. A: Be careful not to over-engineer, that being said: Depending on your use case, if this is really needed, I could theoretically imagine a Robotic Process Automation like e.g. BluePrism, UiPath or PowerAutomate loading live data from Excel into a Python environment with a pandas DataFrame continuously and then changing it. This use case would have to be a really important process though, otherwise licensing RPA is not worth it here. A: df = pd.read_excel("path") In variable explorer you can see the data if you run the program in SPYDER ide
How do I use python pandas to read an already opened excel sheet
Assuming I have an Excel sheet already open and I make some changes in the file, then use pd.read_excel to create a dataframe based on that sheet, I understand that the dataframe will only reflect the data in the last saved version of the Excel file. I would have to save the sheet first for the pandas dataframe to take the change into account. Is there any way for pandas or other Python packages to read an opened Excel file and refresh its data in real time (without saving or closing the file)?
[ "Have you tried using mitosheet package? It doesn't answer your question directly, but it allows you working on pandas dataframes as you would do in excel sheets. In this way, you may edit the data on the fly as in excel and still get a pandas dataframe as a result (meanwhile generating the code to perform the same operations with python). Does this help?\n", "There is no way to do this. The table is not saved to disk, so pandas can not read it from disk.\n", "Be careful not to over-engineer, that being said:\nDepending on your use case, if this is really needed, I could theoretically imagine a Robotic Process Automation like e.g. BluePrism, UiPath or PowerAutomate loading live data from Excel into a Python environment with a pandas DataFrame continuously and then changing it.\nThis use case would have to be a really important process though, otherwise licensing RPA is not worth it here.\n", "df = pd.read_excel(\"path\")\nIn variable explorer you can see the data if you run the program in SPYDER ide\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "excel", "pandas", "python" ]
stackoverflow_0052862768_excel_pandas_python.txt
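The answers above are right that pd.read_excel only ever sees the saved file. On a machine where Excel itself is running, one commonly used workaround is xlwings, which talks to the live Excel instance (COM on Windows, AppleScript on macOS) instead of reading from disk. A hedged sketch — the workbook and sheet names are assumptions, and it requires Excel plus the xlwings package.

import pandas as pd
import xlwings as xw

# Attach to the already-open workbook (name is hypothetical).
wb = xw.Book("Book1.xlsx")
sheet = wb.sheets["Sheet1"]

# Read the current, unsaved cell values starting at A1 into a DataFrame.
df = sheet.range("A1").options(pd.DataFrame, index=False, expand="table").value
print(df.head())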
Q: make a loop for videocapture with 5 minutees delay hi i use a code that take capture image from my webcam and do some image processing on image. i need to repeat the total code consecutive n times. paraphrase take image and do image processing consecutively every five minutes. thanks. import time import cv2 videoCaptureObject = cv2.VideoCapture(0) result = True while(result): ret,frame = videoCaptureObject.read() cv2.imwrite("NewPicture.jpg",frame) result = False videoCaptureObject.release() import numpy as np image = cv2.imread('Newpicture.jpg') blur = cv2.GaussianBlur(image, (3,3), 0) gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY_INV)[1] x, y, w, h = cv2.boundingRect(thresh) # Replaced code # left = (x, np.argmax(thresh[:, x])) # right = (x+w-1, np.argmax(thresh[:, x+w-1])) # top = (np.argmax(thresh[y, :]), y) # bottom = (np.argmax(thresh[y+h-1, :]), y+h-1) # cv2.circle(image, left, 8, (0, 50, 255), -1) cv2.circle(image, right, 8, (0, 255, 255), -1) cv2.circle(image, top, 8, (255, 50, 0), -1) cv2.circle(image, bottom, 8, (255, 255, 0), -1) print('left: {}'.format(left)) print('right: {}'.format(right)) print('top: {}'.format(top)) print('bottom: {}'.format(bottom)) cv2.imshow('thresh', thresh) cv2.imshow('image', image) cv2.waitKey() time.sleep(300) i need to repeat consecutive every five minutes A: There are so many issues with your code I don't know where to start. Let's go for simple. I think you want a structure like this: import numpy as np def captureImage(): print('Capturing image') # Code to capture image goes here, simulate a random one for now im = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8) return im def processImage(im): print('Processing image') # Code to process image goes here, simulate processing with "np.mean()" mean = np.mean(im) print(f'Mean: {mean}') N = 5 for i in range(N): im = captureImage() processImage(im) sleep(5) # so you see results sooner # more like actual code... sleep(5*60)
make a loop for videocapture with 5 minutes delay
hi i use a code that take capture image from my webcam and do some image processing on image. i need to repeat the total code consecutive n times. paraphrase take image and do image processing consecutively every five minutes. thanks. import time import cv2 videoCaptureObject = cv2.VideoCapture(0) result = True while(result): ret,frame = videoCaptureObject.read() cv2.imwrite("NewPicture.jpg",frame) result = False videoCaptureObject.release() import numpy as np image = cv2.imread('Newpicture.jpg') blur = cv2.GaussianBlur(image, (3,3), 0) gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY_INV)[1] x, y, w, h = cv2.boundingRect(thresh) # Replaced code # left = (x, np.argmax(thresh[:, x])) # right = (x+w-1, np.argmax(thresh[:, x+w-1])) # top = (np.argmax(thresh[y, :]), y) # bottom = (np.argmax(thresh[y+h-1, :]), y+h-1) # cv2.circle(image, left, 8, (0, 50, 255), -1) cv2.circle(image, right, 8, (0, 255, 255), -1) cv2.circle(image, top, 8, (255, 50, 0), -1) cv2.circle(image, bottom, 8, (255, 255, 0), -1) print('left: {}'.format(left)) print('right: {}'.format(right)) print('top: {}'.format(top)) print('bottom: {}'.format(bottom)) cv2.imshow('thresh', thresh) cv2.imshow('image', image) cv2.waitKey() time.sleep(300) i need to repeat consecutive every five minutes
[ "There are so many issues with your code I don't know where to start. Let's go for simple. I think you want a structure like this:\nimport numpy as np\n\ndef captureImage():\n print('Capturing image')\n # Code to capture image goes here, simulate a random one for now\n im = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)\n return im\n\ndef processImage(im):\n print('Processing image')\n # Code to process image goes here, simulate processing with \"np.mean()\"\n mean = np.mean(im)\n print(f'Mean: {mean}')\n\nN = 5\nfor i in range(N):\n im = captureImage()\n processImage(im)\n sleep(5) # so you see results sooner\n # more like actual code... sleep(5*60)\n\n" ]
[ 0 ]
[]
[]
[ "image", "python" ]
stackoverflow_0074434959_image_python.txt
Q: dataframe get 2 rows with different values I want to query two rows with different values in the same column. name age Martin 28 Josh 37 Peter 24 Claire 57 df = pd.read_csv('names.csv') df.query('name.str.contains("Martin") & name.str.contains("Claire")') Now I expect to see the two rows where Claire and Martin are in. But if I delete Claire, the query will return only Martin. The query should only return if both values are true and give me both rows as a result. Can anybody help me out? A: try this: import pandas as pd df = pd.DataFrame({'name':['Martin', 'Josh', 'Peter', 'Claire'], 'age':[28, 37, 24, 57]}) new_df = df[df.name.isin(['Claire', 'Martin'])] This gives name age 0 Martin 28 3 Claire 57
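For context on why the & version matches nothing: a single name value can never contain both "Martin" and "Claire", so the two conditions have to be combined with | ("or") or replaced by a membership test. A small sketch (the engine='python' argument is only needed because .str methods are not supported by the default numexpr engine):

import pandas as pd

df = pd.DataFrame({'name': ['Martin', 'Josh', 'Peter', 'Claire'],
                   'age': [28, 37, 24, 57]})

by_or = df.query('name.str.contains("Martin") | name.str.contains("Claire")', engine='python')
by_in = df.query('name in ["Martin", "Claire"]')   # simpler equivalent
print(by_in)
#      name  age
# 0  Martin   28
# 3  Claire   57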
dataframe get 2 rows with different values
I want to query two rows with different values in the same column. name age Martin 28 Josh 37 Peter 24 Claire 57 df = pd.read_csv('names.csv') df.query('name.str.contains("Martin") & name.str.contains("Claire")') Now I expect to see the two rows where Claire and Martin are in. But if I delete Claire, the query will return only Martin. The query should only return if both values are true and give me both rows as a result. Can anybody help me out?
[ "try this:\n\nimport pandas as pd\ndf = pd.DataFrame({'name':['Martin', 'Josh', 'Peter', 'Claire'],\n 'age':[28, 37, 24, 57]})\n\nnew_df = df[df.name.isin(['Claire', 'Martin'])]\n\n\nThis gives\n name age\n0 Martin 28\n3 Claire 57\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074487904_dataframe_python.txt
Q: CondaValueError: The target prefix is the base prefix. Aborting. in anaconda prompt I've got following error when I try to make a virtual environment in anaconda prompt which is executed with administrator (base) C:\WINDOWS\system32>d: (base) D:\>cd anaconda (base) D:\anaconda>cd Scripts (base) D:\anaconda\Scripts>conda create -m ppt python=3.8 CondaValueError: The target prefix is the base prefix. Aborting. How can I make a virtual environment in anaconda prompt in this case? A: Create an environment in anaconda command prompt with the below commands: conda create -n myenv3.7 python=3.7 conda activate myenv3.7 Similarly, you could try creating & activating an environment with python 3.8 with the below commands: conda create -n myenv3.8 python=3.8 conda activate myenv3.8 A: conda create --name Envname or conda create -n ppt python=3.8 notice that you are writing -m instead of -n
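In other words, without -n/--name (or -p/--prefix) conda has no target environment and falls back to the base prefix, which is exactly the error in the question. A minimal sequence using the environment name ppt from the question (the final check is just a sanity check, not required):

conda create -n ppt python=3.8
conda activate ppt
python --version   # should report the interpreter inside the new environment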
CondaValueError: The target prefix is the base prefix. Aborting. in anaconda prompt
I've got following error when I try to make a virtual environment in anaconda prompt which is executed with administrator (base) C:\WINDOWS\system32>d: (base) D:\>cd anaconda (base) D:\anaconda>cd Scripts (base) D:\anaconda\Scripts>conda create -m ppt python=3.8 CondaValueError: The target prefix is the base prefix. Aborting. How can I make a virtual environment in anaconda prompt in this case?
[ "Create an environment in anaconda command prompt with the below command:\nconda create -n myenv3.7 python=3.7\nconda create myenv3.7\n\n\nSimilarly, you could try creating & activating an environment with python 3.8 with the below commands:\nconda create -n myenv3.8 python=3.8\nconda create myenv3.8\n\n", "conda create --name Envname \n\nor\nconda create -n ppt python=3.8\n\nnotice that you are writing -m\n" ]
[ 2, 0 ]
[]
[]
[ "admin", "anaconda", "prompt", "python" ]
stackoverflow_0067849025_admin_anaconda_prompt_python.txt
Q: Twitter bot with selenium I want to make Twitter bot with selenium but the Chrome window closes as soon as it opens. How can I fix this? My Code: from userinfo import username, password from selenium import webdriver import time from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys class twitter: def __init__(self,username,password): self.browserProfile=webdriver.ChromeOptions() self.browserProfile.add_experimental_option('prefs',{'intl.accept_languages':'en,en_US'}) self.browser=webdriver.Chrome('chromedriver.exe',chrome_options=self.browserProfile) self.username=username self.password=password def sıgnIn(self): self.browser.get("https://twitter.com/i/flow/login") time.sleep(2) self.browser.maximize_window() search=self.browser.find_element(By.XPATH,"//*[@id='react-root']/div/div/div/main/div/div/div/div[2]/div[2]/div/div[5]/label/div/div[2]/div/input") search.click() search.send_keys(self.username) search.send_keys(Keys.ENTER) sifre=self.browser.find_element(By.XPATH,"//*[@id='react-root']/div/div/div/main/div/div/div/div[2]/div[2]/div[1]/div/div/div/div[3]/div/label/div/div[2]/div[1]/input") sifre.click() time.sleep(2) sifre.send_keys(self.password) sifre.send_keys(Keys.ENTER) giris=twitter(username,password) giris.sıgnIn() A: Add the below chrome options in __init__() function: self.browserProfile.add_experimental_option("detach", True)
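A sketch of where the suggested option would go in the question's own constructor ("detach" is the standard Chrome experimental option for keeping the window open after the script ends; everything else simply mirrors the question's code):

from selenium import webdriver

class twitter:
    def __init__(self, username, password):
        options = webdriver.ChromeOptions()
        options.add_experimental_option('prefs', {'intl.accept_languages': 'en,en_US'})
        # Keep the Chrome window open after the Python script finishes
        options.add_experimental_option("detach", True)
        self.browser = webdriver.Chrome('chromedriver.exe', chrome_options=options)
        self.username = username
        self.password = password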
Twitter bot with selenium
I want to make Twitter bot with selenium but the Chrome window closes as soon as it opens. How can I fix this? My Code: from userinfo import username, password from selenium import webdriver import time from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys class twitter: def __init__(self,username,password): self.browserProfile=webdriver.ChromeOptions() self.browserProfile.add_experimental_option('prefs',{'intl.accept_languages':'en,en_US'}) self.browser=webdriver.Chrome('chromedriver.exe',chrome_options=self.browserProfile) self.username=username self.password=password def sıgnIn(self): self.browser.get("https://twitter.com/i/flow/login") time.sleep(2) self.browser.maximize_window() search=self.browser.find_element(By.XPATH,"//*[@id='react-root']/div/div/div/main/div/div/div/div[2]/div[2]/div/div[5]/label/div/div[2]/div/input") search.click() search.send_keys(self.username) search.send_keys(Keys.ENTER) sifre=self.browser.find_element(By.XPATH,"//*[@id='react-root']/div/div/div/main/div/div/div/div[2]/div[2]/div[1]/div/div/div/div[3]/div/label/div/div[2]/div[1]/input") sifre.click() time.sleep(2) sifre.send_keys(self.password) sifre.send_keys(Keys.ENTER) giris=twitter(username,password) giris.sıgnIn()
[ "Add the below chrome options in __init__() function:\nself.browserProfile.add_experimental_option(\"detach\", True)\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver" ]
stackoverflow_0074483109_python_selenium_selenium_webdriver.txt
Q: Euclidean Distance for Arrays of 3D points in Python I have two .csv files of 3D points (numeric coordinate data) and associated attribute data (strings + numeric). I need to calculate the Euclidean distance between each point and every other point, and maintain the attribute data for each point associated with the difference. I have a method that works for this, but it uses a loop and I'm hoping that there is a better way to do this that is less resource intensive. Here is the code I am using currently: import pandas as pd import numpy as np # read .csv dataset_1 = pd.read_csv(dataset1 path) dataset_2 = pd.read_csv(dataset2 path) # convert to numpy array array_1 = dataset_1.to_numpy() array_2 = dataset_2.to_numpy() # define data types for new array. This includes the attribute data I want to maintain data_type = np.dtype('f4, f4, f4, U10, U10, f4, f4, f4, U10, U10, U10, f4, f4, U10, U100') #define the new array new_array = np.empty((len(array_1)*len(array_2)), dtype=data_type) #calculate the Euclidean distance between each set of 3D coordinates, and populate the new array with the results as well as data from the input arrays number3 = 0 for number in range(len(array_1)): for number2 in range(len(array_2)): Euclidean_Dist = np.linalg.norm(array_1[number, 0:3]-array_2[number2, 0:3]) new_array[number3] = (array_1[number, 0], array_1[number, 1], array_1[number, 2], array_1[number, 3], array_1[number, 7], array_2[number2, 0], array_2[number2, 1],array_2[number2, 2], array_2[number2, 3], array_2[number2, 6], array_2[number2, 7], array_2[number2, 12], array_2[number2, 13], dist,''.join(sorted((str(array_2[number2, 0]) + str(array_2[number2, 1]) + str(array_2[number2, 2]) + str(array_2[number2, 3]))))) number3+=1 #Convert results to pandas dataframe new_df = pd.DataFrame(new_array) I work with very large datasets, so if anyone could suggest a more efficient way to do this I would be very grateful. Thanks, The code presented above works for my problem, but I'm looking for something to improve efficiency Edit to show example input datasets (dataset_1 & dataset_2) and desired output dataset (new_df). The key is that for the output dataset I need to maintain the attributes from the input dataset associated with the Euclidean Distance. I could use scipy.spatial.distance.cdist to calculate the distances, but I'm not sure of the best way to maintain the attributes from the input data in the output data. A: Two methods. Setup: import numpy as np import pandas as pd import string from scipy.spatial.distance import cdist upper = list(string.ascii_uppercase) lower = list(string.ascii_lowercase) df1 = pd.DataFrame(np.random.rand(26,3), columns = lower[-3:], index = lower ) df2 = pd.DataFrame(np.random.rand(25,3), columns = lower[-3:], index = upper[:-1] ) #testing different lengths Using .merge(*, how='cross'), this gives your intended output I think new_df = df1.reset_index().merge(df2.reset_index(), how = 'cross', suffixes = ['1', '2']) new_df['dist'] = cdist(df1, df2).flatten() A 2D 'ravelled' method that maintains the original data as MultiIndexes: new_df2 = pd.DataFrame(cdist(df1, df2), index = pd.MultiIndex.from_arrays(df1.reset_index().values.T, names = df1.reset_index().columns), columns = pd.MultiIndex.from_arrays(df2.reset_index().values.T, names = df2.reset_index().columns))
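A sketch of the first method applied in the shape of the question's data, assuming the 3D coordinates are the first three columns of each file and everything after them is attribute data to carry along (the tiny frames here only stand in for the two CSVs; how='cross' needs pandas 1.2 or newer):

import pandas as pd
from scipy.spatial.distance import cdist

dataset_1 = pd.DataFrame({'x': [0.0, 1.0], 'y': [0.0, 0.0], 'z': [0.0, 0.0], 'attr_a': ['p1', 'p2']})
dataset_2 = pd.DataFrame({'x': [0.0, 3.0], 'y': [4.0, 4.0], 'z': [0.0, 0.0], 'attr_b': ['q1', 'q2']})

# Every pairwise combination of rows, keeping the attribute columns from both inputs
pairs = dataset_1.merge(dataset_2, how='cross', suffixes=('_1', '_2'))

# cdist returns the len(dataset_1) x len(dataset_2) distance matrix;
# ravel() lays it out in the same row order the cross merge produces
pairs['dist'] = cdist(dataset_1.iloc[:, :3], dataset_2.iloc[:, :3]).ravel()
print(pairs)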
Euclidean Distance for Arrays of 3D points in Python
I have two .csv files of 3D points (numeric coordinate data) and associated attribute data (strings + numeric). I need to calculate the Euclidean distance between each point and every other point, and maintain the attribute data for each point associated with the difference. I have a method that works for this, but it uses a loop and I'm hoping that there is a better way to do this that is less resource intensive. Here is the code I am using currently: import pandas as pd import numpy as np # read .csv dataset_1 = pd.read_csv(dataset1 path) dataset_2 = pd.read_csv(dataset2 path) # convert to numpy array array_1 = dataset_1.to_numpy() array_2 = dataset_2.to_numpy() # define data types for new array. This includes the attribute data I want to maintain data_type = np.dtype('f4, f4, f4, U10, U10, f4, f4, f4, U10, U10, U10, f4, f4, U10, U100') #define the new array new_array = np.empty((len(array_1)*len(array_2)), dtype=data_type) #calculate the Euclidean distance between each set of 3D coordinates, and populate the new array with the results as well as data from the input arrays number3 = 0 for number in range(len(array_1)): for number2 in range(len(array_2)): Euclidean_Dist = np.linalg.norm(array_1[number, 0:3]-array_2[number2, 0:3]) new_array[number3] = (array_1[number, 0], array_1[number, 1], array_1[number, 2], array_1[number, 3], array_1[number, 7], array_2[number2, 0], array_2[number2, 1],array_2[number2, 2], array_2[number2, 3], array_2[number2, 6], array_2[number2, 7], array_2[number2, 12], array_2[number2, 13], dist,''.join(sorted((str(array_2[number2, 0]) + str(array_2[number2, 1]) + str(array_2[number2, 2]) + str(array_2[number2, 3]))))) number3+=1 #Convert results to pandas dataframe new_df = pd.DataFrame(new_array) I work with very large datasets, so if anyone could suggest a more efficient way to do this I would be very grateful. Thanks, The code presented above works for my problem, but I'm looking for something to improve efficiency Edit to show example input datasets (dataset_1 & dataset_2) and desired output dataset (new_df). The key is that for the output dataset I need to maintain the attributes from the input dataset associated with the Euclidean Distance. I could use scipy.spatial.distance.cdist to calculate the distances, but I'm not sure of the best way to maintain the attributes from the input data in the output data.
[ "Two methods. Setup:\nimport numpy as np\nimport pandas as pd\nimport string\nfrom scipy.spatial.distance import cdist\n\nupper = list(string.ascii_uppercase)\nlower = list(string.ascii_lowercase)\n\ndf1 = pd.DataFrame(np.random.rand(26,3), \n columns = lower[-3:], \n index = lower )\n\ndf2 = pd.DataFrame(np.random.rand(25,3), \n columns = lower[-3:], \n index = upper[:-1] ) #testing different lengths\n\nUsing .merge(*, how='cross'), this gives your intended output I think\nnew_df = df1.reset_index().merge(df2.reset_index(), \n how = 'cross',\n suffixes = ['1', '2'])\nnew_df['dist'] = cdist(df1, df2).flatten()\n\nA 2D 'ravelled' method that maintains the original data as MultiIndexes:\nnew_df2 = pd.DataFrame(cdist(df1, df2), \n index = pd.MultiIndex.from_arrays(df1.reset_index().values.T, \n names = df1.reset_index().columns), \n columns = pd.MultiIndex.from_arrays(df2.reset_index().values.T, \n names = df2.reset_index().columns))\n\n" ]
[ 1 ]
[]
[]
[ "euclidean_distance", "numpy", "python", "spatial_data" ]
stackoverflow_0074486648_euclidean_distance_numpy_python_spatial_data.txt
Q: Sympy not properly displaying conjugate of square root of real I keep getting expressions like this image, despite declaring these symbols as reals. The code to reproduce is: import sympy as sp delta = sp.Symbol('delta', real=True) f = sp.sqrt(1/delta) prod = sp.conjugate(f)*f prod.subs(delta,delta) I expected to get 1/delta Also trying simplify() does not work either. A: According to the official SymPy Docs for conjugate, it looks like the function is supposed to return the complex conjugate for its input. In other words, it takes the complex part of the number and flips the sign. In your example, you are taking the square root of a variable. If delta = -1, then the resulting conjugate could be unreal and thus different than if delta was any other integer. Thus, SymPy wraps the result in a conjugate object. If you want to tell Sympy that your variable delta is positive (and thus f must be real), then you should define it as delta = sp.Symbol('delta', real=True, positive=True).
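A short check of the accepted explanation (everything here follows the question's own snippet, with only the positive=True assumption added):

import sympy as sp

delta = sp.Symbol('delta', real=True, positive=True)
f = sp.sqrt(1 / delta)

prod = sp.conjugate(f) * f
print(sp.simplify(prod))   # 1/delta, since sqrt(1/delta) is now known to be real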
Sympy not properly displaying conjugate of square root of real
I keep getting expressions like this image, despite declaring these symbols as reals. The code to reproduce is: import sympy as sp delta = sp.Symbol('delta', real=True) f = sp.sqrt(1/delta) prod = sp.conjugate(f)*f prod.subs(delta,delta) I expected to get 1/delta Also trying simplify() does not work either.
[ "According to the official SymPy Docs for conjugate, it looks like the function is supposed to return the complex conjugate for its input. In other words, it takes the complex part of the number and flips the sign.\nIn your example, you are taking the square root of a variable. If delta = -1, then the resulting conjugate could be unreal and thus different than if delta was any other integer. Thus, SymPy wraps the result in a conjugate object.\nIf you want to tell Sympy that your variable delta is positive (and thus f must be real), then you should define it as delta = sp.Symbol('delta', real=True, positive=True).\n" ]
[ 2 ]
[]
[]
[ "ipython", "jupyter_notebook", "python", "simplify", "sympy" ]
stackoverflow_0074487874_ipython_jupyter_notebook_python_simplify_sympy.txt
Q: smtplib.SMTPSenderRefused Authentication Error in Flask App I have created a flask app and whenever I use the forgot password module I get this error I have used this code in my init.py file login_manager = LoginManager(app) login_manager.login_view = 'login' login_manager.login_message_category ='info' app.config['MAIL_SERVER']='smtp.gmail.com' app.config['MAIL_PORT']= 587 app.config['MAIL_USE_TLS']=True app.config['MAIL_USERNAME']=os.environ.get('EMAIL_USER') app.config['MAIL_PASSWORD']=os.environ.get('EMAIL_PASS') mail = Mail(app) The error smtplib.SMTPSenderRefused: (530, b'5.5.1 Authentication Required. Learn more at\n5.5.1 https://support.google.com/mail/?p=WantAuthError d16sm4144946pgb.4 - gsmtp', 'noreply@demo.com') A: Change: app.config['MAIL_SERVER']='smtp.gmail.com' app.config['MAIL_PORT']= 587 app.config['MAIL_USE_TLS']=True app.config['MAIL_USERNAME']=os.environ.get('EMAIL_USER') app.config['MAIL_PASSWORD']=os.environ.get('EMAIL_PASS') To: app.config.update( MAIL_SERVER='smtp.gmail.com', MAIL_PORT='587', MAIL_USE_TLS=True, MAIL_USERNAME=os.environ.get('EMAIL_USER'), MAIL_PASSWORD=os.environ.get('EMAIL_PASS') ) And don't forget to turn on less secure apps: https://support.google.com/accounts/answer/6010255?hl=en Greetings A: On my server, app.config['MAIL_USE_TLS']='True' works. It's a string, not a bool, in this param.
smtplib.SMTPSenderRefused Authentication Error in Flask App
I have created a flask app and whenever I use the forgot password module I get this error I have used this code in my init.py file login_manager = LoginManager(app) login_manager.login_view = 'login' login_manager.login_message_category ='info' app.config['MAIL_SERVER']='smtp.gmail.com' app.config['MAIL_PORT']= 587 app.config['MAIL_USE_TLS']=True app.config['MAIL_USERNAME']=os.environ.get('EMAIL_USER') app.config['MAIL_PASSWORD']=os.environ.get('EMAIL_PASS') mail = Mail(app) The error smtplib.SMTPSenderRefused: (530, b'5.5.1 Authentication Required. Learn more at\n5.5.1 https://support.google.com/mail/?p=WantAuthError d16sm4144946pgb.4 - gsmtp', 'noreply@demo.com')
[ "Change:\napp.config['MAIL_SERVER']='smtp.gmail.com'\napp.config['MAIL_PORT']= 587\napp.config['MAIL_USE_TLS']=True\napp.config['MAIL_USERNAME']=os.environ.get('EMAIL_USER')\napp.config['MAIL_PASSWORD']=os.environ.get('EMAIL_PASS')\n\nTo:\napp.config.update(\nMAIL_SERVER='smtp.gmail.com',\nMAIL_PORT='587',\nMAIL_USE_TLS=True,\nMAIL_USERNAME=os.environ.get('EMAIL_USER'),\nMAIL_PASSWORD=os.environ.get('EMAIL_PASS')\n)\n\nAnd don't forget to turn on less secure apps: https://support.google.com/accounts/answer/6010255?hl=en\nGreetings\n", "on my server, app.config['MAIL_USE_TLS']='True' is work. It's string but not bool in this params.\n" ]
[ 0, 0 ]
[]
[]
[ "flask", "python", "smtplib" ]
stackoverflow_0056911441_flask_python_smtplib.txt
Q: How to insert newline while logging lists and arrays? I am trying to output lists to a log file, using Python's logging module. The code I used is: import logging import os logging.basicConfig(filename = 'Log.log', level = logging.DEBUG, filemode = 'w', format = '%(asctime)s \t %(levelname)s \t %(message)s', datefmt="[%Y-%m-%d %H:%M:%S]") file_list = [] for root, directories, files in os.walk('./Directory'): files = [f for f in files] for file in files: file_list.append(os.path.join(root, file)) logging.info('Files in list: %s', file_list) This gives me the output in the log file as a single line. Output [2015-03-14 11:41:53] INFO Files in list: ['./Directory/Subdirectory 1/file_a.dat', './Directory/Subdirectory 1/file_1b.dat', './Directory/Subdirectory 1/Subdirectory 11/file_11a.dat', './Directory/Subdirectory 1/Subdirectory 11/Subdirectory 111/file_111a.dat', './Directory/Subdirectory 2/Subdirectory 22/Subdirectory 221/file_221a.dat', './Directory/Subdirectory 2/Subdirectory 22/Subdirectory 221/file_221b.dat', './Directory/Subdirectory 2/Subdirectory 22/Subdirectory 221/file_221c.dat'] What I require is only the file names in the list in a new line. Desired output [2015-03-14 11:41:53] INFO Files in list: file_a.dat file_1b.dat file_11a.dat file_111a.dat file_221a.dat file_221b.dat file_221c.dat How can this be done in Python? A: Use standard Pretty Print module: from pprint import pformat logging.info('Files in list:\n%s', pformat(file_list)) Output example: [2015-03-14 14:23:47] INFO Files in list: ['./tmpvYWsRB.png', './Log.log', './tmpCG2Dn2', './ tmp7I36mh.png', To print filenames only, use standard os.path.basename as user4815162342 suggested. A: Use '\n'.join to merge strings with newline as separator and a generator expression to remove the directory names: logging.info('Files in list: %s', '\n'.join(os.path.basename(f) for f in file_list))) A: Sharing the generic way to do this for a list of tuples. Probably not the best way though, but has the least dependencies: Tldr; Brute Force it. print('\n'.join(str(my_list_of_tuples).split('),')))
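Putting the question's os.walk loop together with the second answer, a minimal end-to-end sketch that logs only the base names, one per line (the directory path and log format are taken from the question):

import logging
import os

logging.basicConfig(filename='Log.log', level=logging.DEBUG, filemode='w',
                    format='%(asctime)s \t %(levelname)s \t %(message)s',
                    datefmt="[%Y-%m-%d %H:%M:%S]")

file_list = []
for root, directories, files in os.walk('./Directory'):
    for f in files:
        file_list.append(os.path.join(root, f))

# '\n'.join puts every entry on its own line; os.path.basename drops the directory part
logging.info('Files in list:\n%s', '\n'.join(os.path.basename(f) for f in file_list))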
How to insert newline while logging lists and arrays?
I am trying to output lists to a log file, using Python's logging module. The code I used is: import logging import os logging.basicConfig(filename = 'Log.log', level = logging.DEBUG, filemode = 'w', format = '%(asctime)s \t %(levelname)s \t %(message)s', datefmt="[%Y-%m-%d %H:%M:%S]") file_list = [] for root, directories, files in os.walk('./Directory'): files = [f for f in files] for file in files: file_list.append(os.path.join(root, file)) logging.info('Files in list: %s', file_list) This gives me the output in the log file as a single line. Output [2015-03-14 11:41:53] INFO Files in list: ['./Directory/Subdirectory 1/file_a.dat', './Directory/Subdirectory 1/file_1b.dat', './Directory/Subdirectory 1/Subdirectory 11/file_11a.dat', './Directory/Subdirectory 1/Subdirectory 11/Subdirectory 111/file_111a.dat', './Directory/Subdirectory 2/Subdirectory 22/Subdirectory 221/file_221a.dat', './Directory/Subdirectory 2/Subdirectory 22/Subdirectory 221/file_221b.dat', './Directory/Subdirectory 2/Subdirectory 22/Subdirectory 221/file_221c.dat'] What I require is only the file names in the list in a new line. Desired output [2015-03-14 11:41:53] INFO Files in list: file_a.dat file_1b.dat file_11a.dat file_111a.dat file_221a.dat file_221b.dat file_221c.dat How can this be done in Python?
[ "Use standard Pretty Print module:\nfrom pprint import pformat\nlogging.info('Files in list:\\n%s', pformat(file_list))\n\nOutput example:\n[2015-03-14 14:23:47] INFO Files in list:\n['./tmpvYWsRB.png',\n './Log.log',\n './tmpCG2Dn2',\n './ tmp7I36mh.png',\n\nTo print filenames only, use standard os.path.basename as user4815162342 suggested.\n", "Use '\\n'.join to merge strings with newline as separator and a generator expression to remove the directory names:\nlogging.info('Files in list: %s', '\\n'.join(os.path.basename(f)\n for f in file_list)))\n\n", "Sharing the generic way to do this for a list of tuples. Probably not the best way though, but has the least dependencies:\nTldr; Brute Force it.\nprint('\\n'.join(str(my_list_of_tuples).split('),')))\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "list", "logging", "python" ]
stackoverflow_0029047319_list_logging_python.txt
Q: Specific web page doesn't load (empty page) HTML and CSS with Selenium? I started working with Selenium, it works for any website I tried except one (myvisit.com) that doesn't load the page. It opens Chrome but the page is empty. I tried to set number of delays but it still doesn't load. When I go to the website on a regular Chrome (without Selenium) it loads everything. Here is my simple code, not sure how to continue from that: import os import random import time # selenium libraries from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import ChromiumOptions def delay(): time.sleep(random.randint(2,3)) driver = webdriver.Chrome(os.getcwd()+"\\webdriver\\chromedriver.exe") driver.get("https://myvisit.com") delay() delay() delay() delay() I also tried to use ChromiumOptions with flags like --no-sandbox but it didn't help: A: from selenium.webdriver.chrome.options import Options options = Options() options.add_argument('--disable-blink-features=AutomationControlled') driver = webdriver.Chrome(os.getcwd()+"\\webdriver\\chromedriver.exe",options=options) Simply add the arguement to remove it from determining it's an automation.
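The flag in the answer stops Chrome from exposing the navigator.webdriver automation fingerprint; a slightly fuller sketch with the options that are commonly combined with it (the two extra experimental options are standard Chrome options, but whether this particular site checks for them is an assumption):

import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--disable-blink-features=AutomationControlled')
# Often combined with the flag above to hide the "controlled by automated software" state
options.add_experimental_option('excludeSwitches', ['enable-automation'])
options.add_experimental_option('useAutomationExtension', False)

driver = webdriver.Chrome(os.getcwd() + "\\webdriver\\chromedriver.exe", options=options)
driver.get("https://myvisit.com")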
Specific web page doesn't load (empty page) HTML and CSS with Selenium?
I started working with Selenium, it works for any website I tried except one (myvisit.com) that doesn't load the page. It opens Chrome but the page is empty. I tried to set number of delays but it still doesn't load. When I go to the website on a regular Chrome (without Selenium) it loads everything. Here is my simple code, not sure how to continue from that: import os import random import time # selenium libraries from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import ChromiumOptions def delay(): time.sleep(random.randint(2,3)) driver = webdriver.Chrome(os.getcwd()+"\\webdriver\\chromedriver.exe") driver.get("https://myvisit.com") delay() delay() delay() delay() I also tried to use ChromiumOptions with flags like --no-sandbox but it didn't help:
[ "from selenium.webdriver.chrome.options import Options\noptions = Options()\noptions.add_argument('--disable-blink-features=AutomationControlled')\ndriver = webdriver.Chrome(os.getcwd()+\"\\\\webdriver\\\\chromedriver.exe\",options=options)\n\nSimply add the arguement to remove it from determining it's an automation.\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "selenium_chromedriver", "selenium_webdriver" ]
stackoverflow_0074487869_python_selenium_selenium_chromedriver_selenium_webdriver.txt
Q: How to combine diagonal data? I am new to python, apologies if I do not explain well or provide partial solutions yet... I have a dataframe as below: a key, some dates (distributed in rows), and many other columns (same key, same value) Key Date 1 Date 2 Date 3 Column X Column Y Key 1 2022-01-01 X11111111 Y11111111 Key 1 2022-01-02 X11111111 Y11111111 Key 1 2022-01-03 X11111111 Y11111111 Key 2 2022-12-01 X22222222 Y22222222 Key 2 2022-12-02 X22222222 Y22222222 Key 2 2022-12-03 X22222222 Y22222222 And I want to aggregate them like below, where the dates are aggregate, other columns keep the same Key Date 1 Date 2 Date 3 Column X Column Y Key 1 2022-01-01 2022-01-02 2022-01-03 X11111111 Y11111111 Key 2 2022-12-01 2022-12-02 2022-12-03 X22222222 Y22222222 What would be the most efficient way of doing it? Thank you. I have tried normal pivot and aggregation but did not work as I want ... A: Assuming the empty cells are NaN, use groupby.first: out = df.groupby('Key', as_index=False).first() NB. If the empty cells are empty strings, use df.replace('', float('nan')).groupby('Key', as_index=False).first(). Output: Key Date 1 Date 2 Date 3 Column X Column Y 0 Key 1 2022-01-01 2022-01-02 2022-01-03 X11111111 Y11111111 1 Key 2 2022-12-01 2022-12-02 2022-12-03 X22222222 Y22222222 A: Another possible solution: (df.groupby('Key', group_keys=True) .apply(lambda g: g.ffill().bfill()) .drop_duplicates() .reset_index(drop=True)) Output: Key Date 1 Date 2 Date 3 Column X Column Y 0 Key 1 2022-01-01 2022-01-02 2022-01-03 X11111111 Y11111111 1 Key 2 2022-12-01 2022-12-02 2022-12-03 X22222222 Y22222222
How to combine diagonal data?
I am new to python, apologies if I do not explain well or provide partial solutions yet... I have a dataframe as below: a key, some dates (distributed in rows), and many other columns (same key, same value) Key Date 1 Date 2 Date 3 Column X Column Y Key 1 2022-01-01 X11111111 Y11111111 Key 1 2022-01-02 X11111111 Y11111111 Key 1 2022-01-03 X11111111 Y11111111 Key 2 2022-12-01 X22222222 Y22222222 Key 2 2022-12-02 X22222222 Y22222222 Key 2 2022-12-03 X22222222 Y22222222 And I want to aggregate them like below, where the dates are aggregate, other columns keep the same Key Date 1 Date 2 Date 3 Column X Column Y Key 1 2022-01-01 2022-01-02 2022-01-03 X11111111 Y11111111 Key 2 2022-12-01 2022-12-02 2022-12-03 X22222222 Y22222222 What would be the most efficient way of doing it? Thank you. I have tried normal pivot and aggregation but did not work as I want ...
[ "Assuming the empty cells are NaN, use groupby.first:\nout = df.groupby('Key', as_index=False).first()\n\nNB. If the empty cells are empty strings, use df.replace('', float('nan')).groupby('Key', as_index=False).first().\nOutput:\n Key Date 1 Date 2 Date 3 Column X Column Y\n0 Key 1 2022-01-01 2022-01-02 2022-01-03 X11111111 Y11111111\n1 Key 2 2022-12-01 2022-12-02 2022-12-03 X22222222 Y22222222\n\n", "Another possible solution:\n(df.groupby('Key', group_keys=True)\n .apply(lambda g: g.ffill().bfill())\n .drop_duplicates()\n .reset_index(drop=True))\n\nOutput:\n Key Date 1 Date 2 Date 3 Column X Column Y\n0 Key 1 2022-01-01 2022-01-02 2022-01-03 X11111111 Y11111111\n1 Key 2 2022-12-01 2022-12-02 2022-12-03 X22222222 Y22222222\n\n" ]
[ 2, 2 ]
[]
[]
[ "aggregate", "dataframe", "pandas", "pivot", "python" ]
stackoverflow_0074487910_aggregate_dataframe_pandas_pivot_python.txt
Q: How can I get the line.column index for the right-click event on Tkinter Text widget? Say - I am working on a tk.Text widget with the following content: this is a test text. which has two lines. Say I right-click on the beginning of the first "h" in the first word "which" in the second line - and I want to access its location in the tk.Text's line.column format just as follows: rc_index = tk.Text.get_right_click_index('current') print(rc_index) The output shall be: 2.1 Is there a way to do this? A: Assume text is the instance of the Text widget, then you can bind "<Button-3>" (right click event of mouse in Windows) to a callback and get the index inside the callback: def on_right_click(event): idx = event.widget.index("current") print(idx) text.bind("<Button-3>", on_right_click)
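A runnable sketch of the accepted approach (the sample text reproduces the question; the "@x,y" line is an optional alternative that resolves the exact click coordinates rather than the character under the pointer, and the Button-2 note only applies to some macOS setups):

import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.insert("1.0", "this is a test text.\nwhich has two lines.")
text.pack()

def on_right_click(event):
    print(event.widget.index("current"))                 # character under the mouse pointer
    print(event.widget.index(f"@{event.x},{event.y}"))   # same idea, pinned to the click position

text.bind("<Button-3>", on_right_click)   # right mouse button on Windows/Linux; <Button-2> on some macOS builds
root.mainloop()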
How can I get the line.column index for the right-click event on Tkinter Text widget?
Say - I am working on a tk.Text widget with the following content: this is a test text. which has two lines. Say I right-click on the beginning of the first "h" in the first word "which" in the second line - and I want to access its location in the tk.Text's line.column format just as follows: rc_index = tk.Text.get_right_click_index('current') print(rc_index) The output shall be: 2.1 Is there a way to do this?
[ "Assume text is the instance of the Text widget, then you can bind \"<Button-3>\" (right click event of mouse in Windows) to a callback and get the index inside the callback:\ndef on_right_click(event):\n idx = event.widget.index(\"current\")\n print(idx)\n\ntext.bind(\"<Button-3>\", on_right_click)\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074487818_python_tkinter.txt
Q: Extracting values from a df Problem statement: There are multiple instances of charging and discharging for each vehicle, get the minimum charge, maximum charge, min discharge and max discharge for each vehicle for a particular day. df1 Date Time vehicle_no soc SOC Diff 0 2022-10-01 02:27:56 DL21GD0100 80.0 0 1 2022-10-01 02:28:26 DL21GD0100 80.0 Discharging 2 2022-10-01 02:28:56 DL21GD0100 80.0 Discharging 3 2022-10-01 02:29:26 DL21GD0100 80.0 Discharging 4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging 5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging 6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging 7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging 8 2022-10-01 03:16:00 DL21GD0100 63.0 Discharging 9 2022-10-01 03:16:30 DL21GD0100 23.0 Discharging 10 2022-10-01 04:17:00 DL21GD0100 54.0 Charging 11 2022-10-01 09:17:30 WB25M9298 24.0 Charging 12 2022-10-01 09:18:00 WB25M9298 25.0 Charging A: Read the whole answer for 3 different options mapping strictly charge/discharge You can use groupby.diff to get the difference per group, then numpy.sign and map: df['status'] = np.sign(df.groupby('vehicle_no')['soc'].diff() ).map({1: 'Charging', -1: 'Discharging'}) Or with numpy.select: s = df.groupby('vehicle_no')['soc'].diff() df['status'] = np.select([s>0, s<0], ['Charging', 'Discharging'], np.nan) Output: Date Time vehicle_no soc status 0 2022-10-01 02:27:56 DL21GD0100 80.0 NaN 2 2022-10-01 02:28:56 DL21GD0100 80.0 NaN 3 2022-10-01 02:29:26 DL21GD0100 80.0 NaN 4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging 5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging 6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging 7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging 8 2022-10-01 09:16:00 WB25M9298 23.0 NaN 9 2022-10-01 09:16:30 WB25M9298 23.0 NaN 10 2022-10-01 09:17:00 WB25M9298 24.0 Charging 11 2022-10-01 09:17:30 WB25M9298 24.0 NaN 12 2022-10-01 09:18:00 WB25M9298 25.0 Charging mapping Charge/Discharge with steady as Discharge If you want to consider equal value as Discharging: df['status'] = np.where(df.groupby('vehicle_no')['soc'].diff().gt(0), 'Charging', 'Discharging') Output: Date Time vehicle_no soc status 0 2022-10-01 02:27:56 DL21GD0100 80.0 Discharging 2 2022-10-01 02:28:56 DL21GD0100 80.0 Discharging 3 2022-10-01 02:29:26 DL21GD0100 80.0 Discharging 4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging 5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging 6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging 7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging 8 2022-10-01 09:16:00 WB25M9298 23.0 Discharging 9 2022-10-01 09:16:30 WB25M9298 23.0 Discharging 10 2022-10-01 09:17:00 WB25M9298 24.0 Charging 11 2022-10-01 09:17:30 WB25M9298 24.0 Discharging 12 2022-10-01 09:18:00 WB25M9298 25.0 Charging mapping Charge/Discharge with steady as previous state: d = {1: 'Charging', -1: 'Discharging'} df['status'] = (df.groupby('vehicle_no')['soc'] .transform(lambda s: np.sign(s.diff()).map(d).ffill()) .fillna('Discharging') ) Output: Date Time vehicle_no soc status 0 2022-10-01 02:27:56 DL21GD0100 80.0 Discharging 2 2022-10-01 02:28:56 DL21GD0100 80.0 Discharging 3 2022-10-01 02:29:26 DL21GD0100 80.0 Discharging 4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging 5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging 6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging 7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging 8 2022-10-01 09:16:00 WB25M9298 23.0 Discharging 9 2022-10-01 09:16:30 WB25M9298 23.0 Discharging 10 2022-10-01 09:17:00 WB25M9298 24.0 Charging 11 2022-10-01 09:17:30 WB25M9298 24.0 Charging 12 2022-10-01 09:18:00 WB25M9298 25.0 Charging A: Try 
this- df1.groupby(['vehicle_no','status']).agg({'soc':[min,max]})
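Combining the question's columns with the second answer's aggregation, a sketch of the per-day, per-vehicle minimum and maximum SOC for charging and discharging separately (the small frame below only stands in for the question's df1, and the status column used here is the question's 'SOC Diff' rather than a separate 'status' column):

import pandas as pd

# Illustrative rows in the shape of the question's df1
df1 = pd.DataFrame({
    'Date': ['2022-10-01'] * 6,
    'vehicle_no': ['DL21GD0100'] * 4 + ['WB25M9298'] * 2,
    'soc': [69.0, 72.0, 23.0, 54.0, 24.0, 25.0],
    'SOC Diff': ['Discharging', 'Charging', 'Discharging', 'Charging', 'Charging', 'Charging'],
})

summary = (df1[df1['SOC Diff'].isin(['Charging', 'Discharging'])]   # drop the initial 0 rows
           .groupby(['Date', 'vehicle_no', 'SOC Diff'])['soc']
           .agg(['min', 'max'])
           .reset_index())
print(summary)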
Extracting values from a df
Problem statement: There are multiple instances of charging and discharging for each vehicle, get the minimum charge, maximum charge, min discharge and max discharge for each vehicle for a particular day. df1 Date Time vehicle_no soc SOC Diff 0 2022-10-01 02:27:56 DL21GD0100 80.0 0 1 2022-10-01 02:28:26 DL21GD0100 80.0 Discharging 2 2022-10-01 02:28:56 DL21GD0100 80.0 Discharging 3 2022-10-01 02:29:26 DL21GD0100 80.0 Discharging 4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging 5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging 6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging 7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging 8 2022-10-01 03:16:00 DL21GD0100 63.0 Discharging 9 2022-10-01 03:16:30 DL21GD0100 23.0 Discharging 10 2022-10-01 04:17:00 DL21GD0100 54.0 Charging 11 2022-10-01 09:17:30 WB25M9298 24.0 Charging 12 2022-10-01 09:18:00 WB25M9298 25.0 Charging
[ "Read the whole answer for 3 different options\nmapping strictly charge/discharge\nYou can use groupby.diff to get the difference per group, then numpy.sign and map:\ndf['status'] = np.sign(df.groupby('vehicle_no')['soc'].diff()\n ).map({1: 'Charging', -1: 'Discharging'})\n\nOr with numpy.select:\ns = df.groupby('vehicle_no')['soc'].diff()\n\ndf['status'] = np.select([s>0, s<0], ['Charging', 'Discharging'], np.nan)\n\nOutput:\n Date Time vehicle_no soc status\n0 2022-10-01 02:27:56 DL21GD0100 80.0 NaN\n2 2022-10-01 02:28:56 DL21GD0100 80.0 NaN\n3 2022-10-01 02:29:26 DL21GD0100 80.0 NaN\n4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging\n5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging\n6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging\n7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging\n8 2022-10-01 09:16:00 WB25M9298 23.0 NaN\n9 2022-10-01 09:16:30 WB25M9298 23.0 NaN\n10 2022-10-01 09:17:00 WB25M9298 24.0 Charging\n11 2022-10-01 09:17:30 WB25M9298 24.0 NaN\n12 2022-10-01 09:18:00 WB25M9298 25.0 Charging\n\nmapping Charge/Discharge with steady as Discharge\nIf you want to consider equal value as Discharging:\ndf['status'] = np.where(df.groupby('vehicle_no')['soc'].diff().gt(0), 'Charging', 'Discharging')\n\nOutput:\n Date Time vehicle_no soc status\n0 2022-10-01 02:27:56 DL21GD0100 80.0 Discharging\n2 2022-10-01 02:28:56 DL21GD0100 80.0 Discharging\n3 2022-10-01 02:29:26 DL21GD0100 80.0 Discharging\n4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging\n5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging\n6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging\n7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging\n8 2022-10-01 09:16:00 WB25M9298 23.0 Discharging\n9 2022-10-01 09:16:30 WB25M9298 23.0 Discharging\n10 2022-10-01 09:17:00 WB25M9298 24.0 Charging\n11 2022-10-01 09:17:30 WB25M9298 24.0 Discharging\n12 2022-10-01 09:18:00 WB25M9298 25.0 Charging\n\nmapping Charge/Discharge with steady as previous state:\nd = {1: 'Charging', -1: 'Discharging'}\ndf['status'] = (df.groupby('vehicle_no')['soc']\n .transform(lambda s: np.sign(s.diff()).map(d).ffill())\n .fillna('Discharging')\n )\n\nOutput:\n Date Time vehicle_no soc status\n0 2022-10-01 02:27:56 DL21GD0100 80.0 Discharging\n2 2022-10-01 02:28:56 DL21GD0100 80.0 Discharging\n3 2022-10-01 02:29:26 DL21GD0100 80.0 Discharging\n4 2022-10-01 02:29:56 DL21GD0100 69.0 Discharging\n5 2022-10-01 02:29:56 DL21GD0100 70.0 Charging\n6 2022-10-01 02:29:56 DL21GD0100 71.0 Charging\n7 2022-10-01 02:29:56 DL21GD0100 72.0 Charging\n8 2022-10-01 09:16:00 WB25M9298 23.0 Discharging\n9 2022-10-01 09:16:30 WB25M9298 23.0 Discharging\n10 2022-10-01 09:17:00 WB25M9298 24.0 Charging\n11 2022-10-01 09:17:30 WB25M9298 24.0 Charging\n12 2022-10-01 09:18:00 WB25M9298 25.0 Charging\n\n", "Try this-\ndf1.groupby(['vehicle_no','status']).agg({'soc':[min,max]})\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074431697_pandas_python.txt