Q: Python range error: how do I make the program display "Out of range" if the credits entered are not one of 0, 20, 40, 60, 80, 100 or 120?

I'm writing a sort of grade calculator in Python, and I need the program to display 'Out of range' if the credits entered are not one of 0, 20, 40, 60, 80, 100 or 120. Where it currently says "Total incorrect" it should say "Out of range" instead. This is from the spec of the grade calculator I'm trying to make; so far everything works apart from Part 1B, where it's supposed to display "Out of range". The second bullet point of the spec is the one I'm struggling with. This is what I tried:

    def Validation(StudentPassCredits, StudentDeferCredits, StudentFailCredits):
        if StudentPassCredits + StudentDeferCredits + StudentFailCredits == 120:
            if StudentPassCredits % 20 != 0 or StudentDeferCredits % 20 != 0 or StudentFailCredits % 20 != 0:
                print('Range error')
            else:
                Main_Version()
        else:
            print('Total incorrect')

It doesn't work: it only ever displays "Total incorrect" instead of "Range error", and I don't really know what to do now. The rest of the program works as needed; it's just the out-of-range check that's the issue:

    def Main_Version():
        if StudentPassCredits == 120:
            print('Your academic year progression outcome is :', 'Progress')
        elif StudentPassCredits == 100:
            print('Your academic year progression outcome is :', 'Progress (module trailer)')
        elif StudentFailCredits >= 80:
            print('Your academic year progression outcome is :', 'Exclude')
        else:
            print('Your academic year progression outcome is :', 'Do not progress – module retriever')

    def Validation(StudentPassCredits, StudentDeferCredits, StudentFailCredits):
        if StudentPassCredits + StudentDeferCredits + StudentFailCredits == 120:
            if StudentPassCredits % 20 != 0 or StudentDeferCredits % 20 != 0 or StudentFailCredits % 20 != 0:
                print('Range error')
            else:
                Main_Version()
        else:
            print('Total incorrect')

    try:
        StudentPassCredits = int(input('Please enter your current pass credits: '))
        StudentDeferCredits = int(input('Please enter your current defer credits: '))
        StudentFailCredits = int(input('Please enter your current fail credits: '))
        Validation(StudentPassCredits, StudentDeferCredits, StudentFailCredits)
    except ValueError:
        print('Integer required')

A: You can validate each credits value on its own. Note that your original code checks the total first, so any input whose sum is not 120 is reported as "Total incorrect" before the range check is ever reached; test each value against the allowed range first instead:

    from typing import Container

    def _validate_credits_in_range(credits: int, container: Container[int] = range(0, 121, 20)) -> bool:
        return credits in container

    def Validation(StudentPassCredits: int, StudentDeferCredits: int, StudentFailCredits: int) -> None:
        if all(_validate_credits_in_range(credits)
               for credits in (StudentPassCredits, StudentDeferCredits, StudentFailCredits)):
            if StudentPassCredits + StudentDeferCredits + StudentFailCredits != 120:
                print('Total incorrect')
            else:
                Main_Version()
        else:
            print('Range error')
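As a side note, the `range(0, 121, 20)` membership test used in the answer can be checked in isolation; `credits_in_range` below is an illustrative helper name, not part of the original program:

```python
# range(0, 121, 20) contains exactly 0, 20, 40, 60, 80, 100, 120.
# Membership tests on a range of ints are computed arithmetically,
# so this check is O(1) and allocates no list.
def credits_in_range(credits: int) -> bool:
    return credits in range(0, 121, 20)

assert credits_in_range(0)
assert credits_in_range(120)
assert not credits_in_range(121)  # just past the top of the range
assert not credits_in_range(30)   # not a multiple of 20
assert not credits_in_range(-20)  # negatives are rejected too
```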
Tags: python
Source: stackoverflow_0074551908_python.txt
Q: Elasticsearch Connection Refused

    from elasticsearch import Elasticsearch  # elasticsearch==7.17.3 used

    esDB = Elasticsearch(
        ['https://13.XX.242.XX'],
        use_ssl=False,
        ca_certs=False,
        verify_certs=False,
        http_auth=('elastic', 'elastic'),
        port=5601,
    )

I'm trying to connect to a remote Elasticsearch server but get this error:

    elasticsearch.exceptions.ConnectionError: ConnectionError(<urllib3.connection.HTTPSConnection object at 0x7f3f597e8430>: Failed to establish a new connection: [Errno 111] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPSConnection object at 0x7f3f597e8430>: Failed to establish a new connection: [Errno 111] Connection refused)

A: Elasticsearch listens on port 9200 for API requests; 5601 is Kibana's port. You have the two mixed up in your code, so it should be something like:

    from elasticsearch import Elasticsearch  # elasticsearch==7.17.3 used

    esDB = Elasticsearch(
        ['https://13.XX.242.XX'],
        use_ssl=False,
        ca_certs=False,
        verify_certs=False,
        http_auth=('elastic', 'elastic'),
        port=9200,
    )
Tags: elasticsearch, python
Source: stackoverflow_0074541730_elasticsearch_python.txt
Q: 'gbk' codec can't encode character '\u2022' in position 32: illegal multibyte sequence

I have a question about writing a file. When I call

    data.to_csv('/home/bio_kang/Learning/Python/film_project/top250_film_info.csv', index=None, encoding='gbk')

it gives me the error 'gbk' codec can't encode character '\u2022' in position 32: illegal multibyte sequence. The data come from the website https://movie.douban.com/top250; I use requests, beautifulsoup and re to scrape them. Here is the relevant part of my code:

    uni_num = []
    years = []
    countries = []
    directors = []
    actors = []
    descriptions = []

    for i in range(250):
        with open('/home/bio_kang/Learning/Python/film_project/film_info/film_{}.html'.format(i), 'rb') as f:
            film_info = f.read().decode('utf-8', 'ignore')
        pattern_uni_num = re.compile(r'<span class="pl">IMDb:</span> (.*?)<br/>')  # unique number
        pattern_year = re.compile(r'<span class="year">\((.*?)\)</span>')  # year
        pattern_country = re.compile(r'<span class="pl">制片国家/地区:</span>(.*?)<br/>')  # country
        pattern_director = re.compile(r'<meta content=(.*?) property="video:director"/>')  # director
        pattern_actor = re.compile(r'<meta content="(.*?)" property="video:actor"/>')  # actors
        pattern_description = re.compile(r'<meta content="(.*?)property="og:description">')  # description
        uni_num.append(str(re.findall(pattern_uni_num, film_info)).strip("[]").strip("'"))
        years.append(str(re.findall(pattern_year, film_info)).strip("[]").strip("'"))
        countries.append(str(re.findall(pattern_country, film_info)).strip("[]").strip("'").split('/')[0])
        directors.append(re.findall(pattern_director, film_info))
        actors.append(re.findall(pattern_actor, film_info))
        descriptions.append(str(re.findall(pattern_description, film_info)).strip('[]').strip('\''))

    raw_data = {'encoding': uni_num,
                'name': names,
                'description': descriptions,
                'country': countries,
                'director': new_director,
                'actor': new_actor,
                'vote': new_votes,
                'score': scores,
                'year': years,
                'link': urls}
    data = pd.DataFrame(raw_data)
    data.to_csv('/home/bio_kang/Learning/Python/film_project/top250_film_info.csv', index=None, encoding='gbk')

A: Use an encoding that can actually represent the character when you write the file, e.g. encoding='utf-8' (or 'utf-16') in to_csv instead of 'gbk'. (Note that open(..., 'rb') is binary mode and does not accept an encoding argument, so the read side is not where the fix belongs; the error comes from the write in to_csv.)
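The error in the question is easy to reproduce without pandas at all, since it comes from Python's codec layer. The snippet below is a minimal sketch showing why GBK fails on '\u2022' and two ways out (writing UTF-8 instead, or replacing unmappable characters):

```python
text = "IMDb \u2022 top250"  # '\u2022' is the bullet character

# GBK has no mapping for the bullet, so encoding raises UnicodeEncodeError
failed = False
try:
    text.encode("gbk")
except UnicodeEncodeError:
    failed = True
assert failed

# Option 1: write the file as UTF-8 instead, which covers all of Unicode
assert text.encode("utf-8").decode("utf-8") == text

# Option 2: keep GBK but substitute '?' for unmappable characters
assert text.encode("gbk", errors="replace").decode("gbk") == "IMDb ? top250"
```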
Tags: encoding, python, python_3.x
Source: stackoverflow_0073540470_encoding_python_python_3.x.txt
Q: Moving Tkinter Window.destroy to a function from a button, not so simple?

I'm giving myself a crash course in Python and Tkinter, but there is one small detail I can't grasp: closing a Toplevel window from a function instead of a button. My button alone works perfectly:

    button = Button(UpdateWindow, text="Destroy Window", command=UpdateWindow.destroy)

Using a button with a reference to a close function bombs:

    def Close():
        tkMessageBox.showwarning('', 'Close function called', icon="warning")
        command = UpdateWindow.destroy

    btn_updatecon = Button(ContactForm, text="Update", width=20, command=lambda: [UpdateData(), Close()])

What am I missing in the function? It is being called, but there is no close. The SQLite3 project I'm working with is here. Any guidance greatly appreciated.

A: command=UpdateWindow.destroy inside Close() merely assigns the method to a variable; nothing gets called. If you want to keep the command variable, call it afterwards with command(); the recommended way is simply to call UpdateWindow.destroy() directly.
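The distinction the answer draws can be demonstrated without Tk at all; Window below is a dummy stand-in class, not tkinter, but the method-reference-versus-call behaviour is identical:

```python
class Window:
    """Stand-in for a Tk Toplevel: records whether destroy() ran."""
    def __init__(self):
        self.destroyed = False

    def destroy(self):
        self.destroyed = True

win = Window()

def close_wrong():
    command = win.destroy   # binds the bound method to a local name; runs nothing

def close_right():
    win.destroy()           # the parentheses make the call actually happen

close_wrong()
assert not win.destroyed    # the window is still "alive"
close_right()
assert win.destroyed        # now it is "destroyed"
```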
Tags: python
Source: stackoverflow_0074551987_python.txt
Q: Python + PyCharm file structure issue: AttributeError: 'module' object has no attribute 'X'

I have the following file structure:

    main.py
    Crypto/
        GetGenerators.py
    Utils/
        RecHash.py
        ToInteger.py
        Utils.py

GetGenerators.py looks like this:

    import unittest
    import os, sys
    import gmpy2
    from gmpy2 import mpz

    sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
    from Utils.Utils import AssertInt, AssertClass
    from Utils.ToInteger import ToInteger
    from Utils.RecHash import RecHash

    def GetGenerators(n):
        AssertInt(n)
        assert n >= 0, "n must be greater than or equal 0"
        generators = []
        # ... irrelevant code ...
        return generators

    class GetGeneratorsTest(unittest.TestCase):
        def testGetGenerators(self):
            self.assertEqual(len(GetGenerators(50)), 50)

    if __name__ == '__main__':
        unittest.main()

When I use the function GetGenerators from inside main.py, it works fine. However, when I run the unit tests by right-clicking the file and choosing "Run Unittests in GetGenerators.py", I get the following error:

    File "C:\Program Files (x86)\JetBrains\PyCharm 2016.3.2\helpers\pycharm\nose_helper\util.py", line 70, in resolve_name
        obj = getattr(obj, part)
    AttributeError: 'module' object has no attribute 'GetGenerators'

I suppose it has something to do with the structure of my files, but I don't see the problem.

A: I haven't had your exact problem before, but I think I've had one like it. When I use PyCharm, files that I created within a PyCharm project work fine: I can import them and run them with no problems. The problems I run into (which are similar to yours) are with files that were not created within a PyCharm project: I can't import them, and sometimes can't even run them correctly. Maybe it's just me, or maybe it's a real bug in PyCharm. If you haven't already, it might be worth creating a project in PyCharm and copying the file contents into files you create within PyCharm. For some reason, that has worked for me in the past.

A: I ran into a similar problem with PyCharm 2022.2.2, and the solution above didn't help me. What worked instead was checking my code to make sure I didn't have any object named 'module' defined anywhere, and I renamed some files, such as "face_landmarks.py" and "face_recognition.py" to "landmarks.py" and the like, to avoid a clash when calling a similarly named module from the face_recognition package. I also tried marking the project folder as a namespace package; since I did several things at once, I'm not sure which change had the effect. The problem was resolved, but this file-structure issue is still there in PyCharm even six years later.
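Independently of the PyCharm test-runner quirk, a quick way to debug any "'module' object has no attribute 'X'" error is to check which file Python actually imported under that name, since a shadowing module elsewhere on sys.path is a common cause. The stdlib json module is used here only as a stand-in:

```python
import importlib

# Import by name, exactly as the failing code would, then inspect the result.
mod = importlib.import_module("json")  # stand-in for the module that "loses" an attribute
print(mod.__file__)                    # the file Python really loaded -- if this is not
                                       # the file you expected, something shadows it
assert hasattr(mod, "loads")           # the attribute we expect is present here
```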
Tags: pycharm, python, python_3.x
Source: stackoverflow_0044011169_pycharm_python_python_3.x.txt
Q: Django Sass Compressor django_libsass.SassCompiler: command not found

I'm using a django-compressor filter as part of Wagtail (a Django-based CMS with a super cool UI). Environment: Wagtail 0.2 + Python 2.7 + Django 1.6 + virtualenv + FastCGI + Apache shared hosting. The issue occurs when trying to access the admin/login page of the CMS. Django shows a template rendering error:

    Error during template rendering
    In template /home/username/env/lib/python2.7/site-packages/wagtail/wagtailadmin/templates/wagtailadmin/skeleton.html, error at line 20
    /bin/sh: django_libsass.SassCompiler: command not found

Line 20 of skeleton.html is inside this excerpt (the numbered lines 19-21 mark the reported error location):

    <!doctype html>
    {% load compress %}
    <!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <![endif]-->
    <!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <![endif]-->
    <!--[if IE 8]> <html class="no-js lt-ie9" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <![endif]-->
    <!--[if gt IE 8]><!--> <html class="no-js" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <!--<![endif]-->
    <title>Wagtail - {% block titletag %}{% endblock %}</title>
    <meta name="description" content="" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <script src="//cdnjs.cloudflare.com/ajax/libs/modernizr/2.6.2/modernizr.min.js"></script>
    {% block css %}{# Block defined for timing breakdowns in django debug toolbar - not expected to be overridden #}
    <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Open+Sans:300,400,600,700" />
    <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Bitter:400,700" />
    19
    20  {% compress css %}
    21  <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/normalize.css" />
        <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/vendor/jquery-ui/jquery-ui-1.10.3.verdant.css" />
        <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/vendor/jquery.timepicker.css" />
        <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/core.scss" type="text/x-scss" />
        {% endcompress %}
    {% block extra_css %}{% endblock %}
    {% endblock %}
    </head>
    <body class="{% block bodyclass %}{% endblock %} {% if messages %}has-messages{% endif %}">

The precompiler in my settings.py (DEBUG is set to True):

    COMPRESS_ENABLED = True
    COMPRESS_PRECOMPILERS = (
        ('text/x-scss', 'django_libsass.SassCompiler'),
    )

TRIAL AND ERROR #1

I've tried changing it to:

    ('text/x-scss', '/home/username/env/lib/python2.7/site-packages/django_libsass {infile} {outfile}')

but that leads to a "dictionary update sequence element #0" error. I have django_libsass and compressor installed, and have also tried pip install libsass, npm install lessc, pip install sass, setting DEBUG = False, and adding COMPRESSOR_OFFLINE and COMPRESSOR_ENABLED as suggested in other similar questions. Running manage.py compress returns the same error. I've rechecked that site-packages and django_libsass are indeed on my sys.path, and SassCompiler can be found in ~/env/lib/python2.7/site-packages/django_libsass.py.

TRIAL AND ERROR #2

I rechecked that sass is installed and on my path, and changed the setting to:

    COMPRESS_PRECOMPILERS = (
        ('text/x-scss', 'sass --scss {infile} {outfile}'),
    )

which returns:

    Exception Type: FilterError
    Exception Value: /bin/sh: sass: command not found

TRIAL AND ERROR #3

After reading this feedly issue, I tried pip install compass, to no effect.

TRIAL AND ERROR #4

Following gasman's comment, I ran python manage.py shell and tried to import SassCompiler. It works with no errors:

    Python 2.7.6 (default, Nov 11 2013, 18:34:29)
    [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    (InteractiveConsole)
    >>> from django_libsass import SassCompiler
    >>>

Full traceback (I apologize if this question is getting too long):

    Traceback:
    File "/home/username/env/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
      139. response = response.render()
    File "/home/username/env/lib/python2.7/site-packages/django/template/response.py" in render
      105. self.content = self.rendered_content
    File "/home/username/env/lib/python2.7/site-packages/django/template/response.py" in rendered_content
      82. content = template.render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render
      140. return self._render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in _render
      134. return self.nodelist.render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render
      840. bit = self.render_node(node, context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/debug.py" in render_node
      78. return node.render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/loader_tags.py" in render
      123. return compiled_parent._render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in _render
      134. return self.nodelist.render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render
      840. bit = self.render_node(node, context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/debug.py" in render_node
      78. return node.render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/loader_tags.py" in render
      62. result = block.nodelist.render(context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render
      840. bit = self.render_node(node, context)
    File "/home/username/env/lib/python2.7/site-packages/django/template/debug.py" in render_node
      78. return node.render(context)
    File "/home/username/env/lib/python2.7/site-packages/compressor/templatetags/compress.py" in render
      147. return self.render_compressed(context, self.kind, self.mode, forced=forced)
    File "/home/username/env/lib/python2.7/site-packages/compressor/templatetags/compress.py" in render_compressed
      107. rendered_output = self.render_output(compressor, mode, forced=forced)
    File "/home/username/env/lib/python2.7/site-packages/compressor/templatetags/compress.py" in render_output
      119. return compressor.output(mode, forced=forced)
    File "/home/username/env/lib/python2.7/site-packages/compressor/css.py" in output
      51. ret.append(subnode.output(*args, **kwargs))
    File "/home/username/env/lib/python2.7/site-packages/compressor/css.py" in output
      53. return super(CssCompressor, self).output(*args, **kwargs)
    File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in output
      246. content = self.filter_input(forced)
    File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in filter_input
      194. for hunk in self.hunks(forced):
    File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in hunks
      169. precompiled, value = self.precompile(value, **options)
    File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in precompile
      226. **kwargs)
    File "/home/username/env/lib/python2.7/site-packages/django_libsass.py" in input
      51. return compile(filename=self.filename)
    File "/home/username/env/lib/python2.7/site-packages/django_libsass.py" in compile
      41. return sass.compile(**kwargs)

    Exception Type: AttributeError at /admin/login/
    Exception Value: 'module' object has no attribute 'compile'

A: (Reposting my comments as an answer, as requested.) The original error:

    django_libsass.SassCompiler: command not found

meant that it was failing to import the django-libsass library. (django-compressor responded to that failure by trying to treat the setting as a shell command instead; django_libsass is not a runnable command, so that failed too, giving the error seen here.) The solution is to ensure that django-libsass is installed; it should show up in the output of pip freeze.

The second error:

    'module' object has no attribute 'compile'

meant that another installed package defined a module called sass, and this was being loaded in place of the one we wanted from the libsass package. The solution is to uninstall all sass-related packages apart from django-libsass and libsass.

A: I had the same problem in my Python/Django project with rvm. The problem is that the compress precompiler does not know which rvm to use. The solution is to add the Ruby path to the environment and tell the precompiler where to find sass in settings.py. In my case (run which sass to find the path):

    environ['PATH'] = '/Users/username/.rvm/gems/ruby-2.1.5/bin:/Users/username/.rvm/gems/ruby-2.1.5@global/bin:/Users/username/.rvm/rubies/ruby-2.1.5/bin:Users/username/bin'

    COMPRESS_PRECOMPILERS = (
        ('text/x-scss', '/Users/username/.rvm/gems/ruby-2.1.5/bin/sass --scss --trace {infile} {outfile}'),
    )

Hope this helps someone.

A: The error

    /bin/sh: django_libsass.SassCompiler: command not found

indicates that django-compressor is trying to run django_libsass.SassCompiler as a shell command, which fails because django_libsass.SassCompiler isn't a valid program that can be run from the command line. Your best bet is to first install Sass, if you haven't already, by following the instructions at http://sass-lang.com/install, then change your settings to:

    COMPRESS_PRECOMPILERS = (
        ('text/x-scss', 'sass --scss {infile} {outfile}'),
    )

You'll have to make sure the sass command is on your path. From the docs (http://django-compressor.readthedocs.org/en/latest/settings/#django.conf.settings.COMPRESS_PRECOMPILERS), the second part of the tuple is:

    The command to call on each of the files. Modern Python string formatting will be provided for the two placeholders {infile} and {outfile} whose existence in the command string also triggers the actual creation of those temporary files. If not given in the command string, Django Compressor will use stdin and stdout respectively instead.

A: This worked for me (also install the latest django-libsass):

    pip install django_compressor==2.4
    pip install django-libsass
Django Sass Compressor django_libsass.SassCompiler: command not found
I'm using a Django-Compressor Filter as part of Wagtail (Django variant CMS with super cool UI). Environment is Wagtail 0.2 + Python 2.7 + Django 1.6 + Virtualenv + FastCGI + Apache shared hosting. Issue occurs when trying to access admin/login page of the CMS. Django shows an Error rendering template Error during template rendering In template /home/username/env/lib/python2.7/site-packages/wagtail/wagtailadmin/templates/wagtailadmin/skeleton.html, error at line 20 /bin/sh: django_libsass.SassCompiler: command not found Line 20 of the skeleton.html is: <!doctype html> {% load compress %} <!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <![endif]--> <!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <![endif]--> <!--[if IE 8]> <html class="no-js lt-ie9" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <![endif]--> <!--[if gt IE 8]><!--> <html class="no-js" lang="{{ LANGUAGE_CODE|default:"en-gb" }}"> <!--<![endif]--> <title>Wagtail - {% block titletag %}{% endblock %}</title> <meta name="description" content="" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <script src="//cdnjs.cloudflare.com/ajax/libs/modernizr/2.6.2/modernizr.min.js"></script> {% block css %}{# Block defined for timing breakdowns in django debug toolbar - not expected to be overridden #} <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Open+Sans:300,400,600,700" /> <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Bitter:400,700" /> 19 20 {% compress css %} 21 <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/normalize.css" /> <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/vendor/jquery-ui/jquery-ui-1.10.3.verdant.css" /> <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/vendor/jquery.timepicker.css" /> <link rel="stylesheet" href="{{ STATIC_URL }}wagtailadmin/scss/core.scss" type="text/x-scss" /> {% endcompress 
%} {% block extra_css %}{% endblock %} {% endblock %} </head> <body class="{% block bodyclass %}{% endblock %} {% if messages %}has-messages{% endif %}"> the precompiler in my settings.py, DEBUG is set to True: COMPRESS_ENABLED = True COMPRESS_PRECOMPILERS = ( ('text/x-scss', 'django_libsass.SassCompiler'), ) TRIAL AND ERROR #1 I've tried changing to: ('text/x-scss', '/home/username/env/lib/python2.7/site-packages/django_libsass {infile} {outfile}') but that leads me to a dictionary update sequence element #0 error. I have django_libsass & compressor installed, also tried pip install libsass, ngm install lessc, pip install sass, turning DEBUG = False, adding COMPRESSOR_OFFLINE, adding COMPRESSOR_ENABLED as suggested in other similar questions. Running manage.py compress returns the same error. Have rechecked and site-packages and django_libsass are indeed on my sys.path SassCompiler can be found in ~/env/lib/python2.7/site-packages/django_libsass.py TRIAL AND ERROR #2 Re-checked that sass is installed and on my path. Change code to: COMPRESS_PRECOMPILERS = ( ('text/x-scss', 'sass --scss {infile} {outfile}'), ) Returns error: Exception Type: FilterError Exception Value: /bin/sh: sass: command not found TRIAL AND ERROR #3 After reading this feedly issue, tried pip install compass to no effect TRIAL AND ERROR #4 Following gasman's comment, I ran python manage.py shell and tried to import SassCompiler It works with no errors. Python 2.7.6 (default, Nov 11 2013, 18:34:29) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from django_libsass import SassCompiler >>> Full Traceback I do apologize if this question is getting too long. Traceback: File "/home/username/env/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 139. response = response.render() File "/home/username/env/lib/python2.7/site-packages/django/template/response.py" in render 105. 
self.content = self.rendered_content File "/home/username/env/lib/python2.7/site-packages/django/template/response.py" in rendered_content 82. content = template.render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render 140. return self._render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render 840. bit = self.render_node(node, context) File "/home/username/env/lib/python2.7/site-packages/django/template/debug.py" in render_node 78. return node.render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/loader_tags.py" in render 123. return compiled_parent._render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render 840. bit = self.render_node(node, context) File "/home/username/env/lib/python2.7/site-packages/django/template/debug.py" in render_node 78. return node.render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/loader_tags.py" in render 62. result = block.nodelist.render(context) File "/home/username/env/lib/python2.7/site-packages/django/template/base.py" in render 840. bit = self.render_node(node, context) File "/home/username/env/lib/python2.7/site-packages/django/template/debug.py" in render_node 78. return node.render(context) File "/home/username/env/lib/python2.7/site-packages/compressor/templatetags/compress.py" in render 147. return self.render_compressed(context, self.kind, self.mode, forced=forced) File "/home/username/env/lib/python2.7/site-packages/compressor/templatetags/compress.py" in render_compressed 107. 
rendered_output = self.render_output(compressor, mode, forced=forced) File "/home/username/env/lib/python2.7/site-packages/compressor/templatetags/compress.py" in render_output 119. return compressor.output(mode, forced=forced) File "/home/username/env/lib/python2.7/site-packages/compressor/css.py" in output 51. ret.append(subnode.output(*args, **kwargs)) File "/home/username/env/lib/python2.7/site-packages/compressor/css.py" in output 53. return super(CssCompressor, self).output(*args, **kwargs) File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in output 246. content = self.filter_input(forced) File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in filter_input 194. for hunk in self.hunks(forced): File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in hunks 169. precompiled, value = self.precompile(value, **options) File "/home/username/env/lib/python2.7/site-packages/compressor/base.py" in precompile 226. **kwargs) File "/home/username/env/lib/python2.7/site-packages/django_libsass.py" in input 51. return compile(filename=self.filename) File "/home/username/env/lib/python2.7/site-packages/django_libsass.py" in compile 41. return sass.compile(**kwargs) Exception Type: AttributeError at /admin/login/ Exception Value: 'module' object has no attribute 'compile'
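For completeness: the final AttributeError (`'module' object has no attribute 'compile'`) is the classic symptom of the name `sass` resolving to the wrong module — the `pip install sass` from trial #1 installs a different PyPI project than libsass, and a stray local `sass.py` has the same effect. A small diagnostic sketch (the `diagnose` helper is mine, not part of any of these packages; the demo call uses a stdlib module so it runs anywhere):

```python
import importlib

def diagnose(module_name, attr):
    """Return (file the module was loaded from, whether it has `attr`).

    Useful when two installed packages fight over one import name,
    e.g. the PyPI package 'sass' vs. libsass, both imported as `sass`.
    """
    mod = importlib.import_module(module_name)
    origin = getattr(mod, "__file__", "<built-in>")
    return origin, hasattr(mod, attr)

# In the failing environment you would run: diagnose("sass", "compile")
# and expect the second element to be False, pointing at the shadower.
origin, ok = diagnose("json", "dumps")  # stdlib stand-in for the demo
```

If the reported origin is not libsass's `sass` module, uninstalling the impostor package (or removing the local `sass.py`) should clear the error.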
[ "(Reposting my comments as an answer, as requested...)\nThe original error: django_libsass.SassCompiler: command not found\nmeant that it was failing to import the django-libsass library. (django-compressor responded to that failure by trying to treat it as a shell command instead - django_libsass is not a runnable...
[ 8, 1, 0, 0 ]
[ "pip install django_pyscss will fix your issue.\n" ]
[ -2 ]
[ "django", "django_compressor", "python", "python_2.7", "wagtail" ]
stackoverflow_0022515611_django_django_compressor_python_python_2.7_wagtail.txt
Q: What can I do to avoid race conditions for multiple callbacks in plotly? I have a dashboard very similar to this one- import datetime import dash from dash import dcc, html import plotly from dash.dependencies import Input, Output # pip install pyorbital from pyorbital.orbital import Orbital satellite = Orbital('TERRA') external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) app.layout = html.Div( html.Div([ html.H4('TERRA Satellite Live Feed'), html.Div(id='live-update-text'), dcc.Graph(id='live-update-graph'), dcc.Interval( id='interval-component', interval=1*1000, # in milliseconds n_intervals=0 ) ]) ) # Multiple components can update everytime interval gets fired. @app.callback(Output('live-update-graph', 'figure'), Input('live-update-graph', 'relayout'), Input('interval-component', 'n_intervals')) def update_graph_live(relayout, n): if ctx.triggered_id == 'relayout': * code that affects the y axis * return fig else: satellite = Orbital('TERRA') data = { 'time': [], 'Latitude': [], 'Longitude': [], 'Altitude': [] } # Collect some data for i in range(180): time = datetime.datetime.now() - datetime.timedelta(seconds=i*20) lon, lat, alt = satellite.get_lonlatalt( time ) data['Longitude'].append(lon) data['Latitude'].append(lat) data['Altitude'].append(alt) data['time'].append(time) # Create the graph with subplots fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2) fig['layout']['margin'] = { 'l': 30, 'r': 10, 'b': 30, 't': 10 } fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'} fig.append_trace({ 'x': data['time'], 'y': data['Altitude'], 'name': 'Altitude', 'mode': 'lines+markers', 'type': 'scatter' }, 1, 1) fig.append_trace({ 'x': data['Longitude'], 'y': data['Latitude'], 'text': data['time'], 'name': 'Longitude vs Latitude', 'mode': 'lines+markers', 'type': 'scatter' }, 2, 1) return fig if __name__ == '__main__': app.run_server(debug=True) In my 
case, I have three different inputs. One input gets triggered by a dcc.Interval timer, like in the example. Another input gets triggered when a user zooms in on the dashboard using the Input('live-update-graph', 'relayoutData') input, and the last triggers when a button gets pressed. All three inputs are totally independent. One updates the data stored in fig['data'], another updates the data stored in fig['layout']['xaxis'] and the last updates the stuff in fig['layout']['yaxis']. I am concerned about this situation: The dcc.Interval input gets triggered and the function starts to update the data The user zooms in on the dashboard and triggers the relayout data The dcc.Interval callback returns a figure Now, because the relayout input got triggered second, it has stale data. There is a race condition and it is possible that the dcc.Interval update gets undone as a result. What can I do to avoid the race condition? I wonder if it's possible to update only a part of the figure with a callback rather than editing the whole object. A: I think this code does what you want. Update data, while keeping layout. You can adapt it to exactly what you would like; your example is copied anyhow and not really working (ex: you have a ctx there that is not defined). The idea of the code below is: rather than update the complete object server side (in the callback), have different "parts" of the object (data-patch1, data-patch2, etc.) and "merge" them in the browser (see deep_merge). Depending on what you want to keep/adjust, you can adjust that function and fill the data-patch accordingly. For the code below you can just zoom in/zoom out, but you could also patch colors, sizes, etc.
# From https://github.com/plotly/dash-core-components/issues/881 import dash import datetime import random import dash_html_components as html import dash_core_components as dcc from dash.dependencies import Input, Output import plotly.express as px import plotly.graph_objects as go import plotly figure = go.Figure() app = dash.Dash(__name__) app.layout = html.Div(children = [ html.Div(id="patchstore", **{'data-figure':figure, 'data-patch1':{}, 'data-patch2':{}, 'data-patch3':{}}), dcc.Graph(id="graph"), dcc.Interval( id='interval-component', interval=2*1000, # in milliseconds n_intervals=0) ]) deep_merge = """ function batchAssign(patches) { function recursiveAssign(input, patch){ var outputR = Object(input); for (var key in patch) { if(outputR[key] && typeof patch[key] == "object" && key!="data") { outputR[key] = recursiveAssign(outputR[key], patch[key]) } else { outputR[key] = patch[key]; } } return outputR; } return Array.prototype.reduce.call(arguments, recursiveAssign, {}); } """ app.clientside_callback( deep_merge, Output('graph', 'figure'), [Input('patchstore', 'data-figure'), Input('patchstore', 'data-patch1'), Input('patchstore', 'data-patch2'), Input('patchstore', 'data-patch3')] ) @app.callback(Output('patchstore', 'data-patch1'),[Input('interval-component', 'n_intervals')]) def callback_data_generation(n_intervals): data = { 'time': [], 'Latitude': [], 'Longitude': [], 'Altitude': [] } # Collect some data for i in range(30): time = datetime.datetime.now() - datetime.timedelta(seconds=i*20) data['Longitude'].append(random.randint(1,10)) data['Latitude'].append(random.randint(1,10)) data['Altitude'].append(random.randint(1,10)) data['time'].append(time) # Create the graph with subplots fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2) fig['layout']['margin'] = { 'l': 30, 'r': 10, 'b': 30, 't': 10 } fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'} fig.append_trace({ 'x': data['time'], 'y': data['Altitude'], 'name': 
'Altitude', 'mode': 'lines+markers', 'type': 'scatter' }, 1, 1) fig.append_trace({ 'x': data['Longitude'], 'y': data['Latitude'], 'text': data['time'], 'name': 'Longitude vs Latitude', 'mode': 'lines+markers', 'type': 'scatter' }, 2, 1) if n_intervals==0: fig.layout = None return fig app.run_server()
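The clientside `deep_merge` above is just a recursive overlay of each patch onto the base figure dict, with the `data` key replaced wholesale. A pure-Python rendition of that merge (toy dicts, for understanding and testing only — in the app the merge must run in the browser); note that newer Dash releases (2.9+) also ship `dash.Patch` for exactly this kind of partial figure update:

```python
def recursive_assign(base, patch):
    """Overlay `patch` onto `base`, descending into nested dicts except
    for the special 'data' key, which is replaced wholesale (mirroring
    the clientside deep_merge JavaScript above)."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict) \
                and key != "data":
            out[key] = recursive_assign(out[key], value)  # merge deeper
        else:
            out[key] = value  # leaf (or 'data'): patch wins outright
    return out

# Toy figure and patch, invented for illustration:
figure = {"data": [1, 2],
          "layout": {"xaxis": {"range": [0, 5]}, "margin": {"l": 30}}}
patch = {"layout": {"xaxis": {"range": [2, 3]}}}
merged = recursive_assign(figure, patch)
# layout.xaxis.range is patched; layout.margin and data are preserved
```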
What can I do to avoid race conditions for multiple callbacks in plotly?
I have a dashboard very similar to this one- import datetime import dash from dash import dcc, html import plotly from dash.dependencies import Input, Output # pip install pyorbital from pyorbital.orbital import Orbital satellite = Orbital('TERRA') external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) app.layout = html.Div( html.Div([ html.H4('TERRA Satellite Live Feed'), html.Div(id='live-update-text'), dcc.Graph(id='live-update-graph'), dcc.Interval( id='interval-component', interval=1*1000, # in milliseconds n_intervals=0 ) ]) ) # Multiple components can update everytime interval gets fired. @app.callback(Output('live-update-graph', 'figure'), Input('live-update-graph', 'relayout'), Input('interval-component', 'n_intervals')) def update_graph_live(relayout, n): if ctx.triggered_id == 'relayout': * code that affects the y axis * return fig else: satellite = Orbital('TERRA') data = { 'time': [], 'Latitude': [], 'Longitude': [], 'Altitude': [] } # Collect some data for i in range(180): time = datetime.datetime.now() - datetime.timedelta(seconds=i*20) lon, lat, alt = satellite.get_lonlatalt( time ) data['Longitude'].append(lon) data['Latitude'].append(lat) data['Altitude'].append(alt) data['time'].append(time) # Create the graph with subplots fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2) fig['layout']['margin'] = { 'l': 30, 'r': 10, 'b': 30, 't': 10 } fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'} fig.append_trace({ 'x': data['time'], 'y': data['Altitude'], 'name': 'Altitude', 'mode': 'lines+markers', 'type': 'scatter' }, 1, 1) fig.append_trace({ 'x': data['Longitude'], 'y': data['Latitude'], 'text': data['time'], 'name': 'Longitude vs Latitude', 'mode': 'lines+markers', 'type': 'scatter' }, 2, 1) return fig if __name__ == '__main__': app.run_server(debug=True) In my case, I have three different intputs. 
One input gets triggered by a dcc.Interval timer, like in the example. Another input gets triggered when a user zooms in on the dashboard using the Input('live-update-graph', 'relayoutData') input, and the last triggers when a button gets pressed. All three inputs are totally independent. One updates the data stored in fig['data'], another updates the data stored in fig['layout']['xaxis'] and the last updates the stuff in fig['layout']['yaxis']. I am concerned about this situation: The dcc.Interval input gets triggered and the function starts to update the data The user zooms in on the dashboard and triggers the relayout data The dcc.Interval callback returns a figure Now, because the relayout input got triggered second, it has stale data. There is a race condition and it is possible that the dcc.Interval update gets undone as a result. What can I do to avoid the race condition? I wonder if it's possible to update only a part of the figure with a callback rather than editing the whole object.
[ "I think this code does what you want. Update data, while keeping layout. You can adapt to exactly what you would like, your example is copied anyhow and not really working (ex: you have a ctx there that is not defined)\nThe idea of the code below is: rather than update the complete object server side (in the callb...
[ 1 ]
[]
[]
[ "dashboard", "plotly", "plotly_dash", "plotly_python", "python" ]
stackoverflow_0074442539_dashboard_plotly_plotly_dash_plotly_python_python.txt
Q: Comparison between datetime and datetime64[ns] in pandas I'm writing a program that checks an excel file and if today's date is in the excel file's date column, I parse it I'm using: cur_date = datetime.today() for today's date. I'm checking if today is in the column with: bool_val = cur_date in df['date'] #evaluates to false I do know for a fact that today's date is in the file in question. The dtype of the series is datetime64[ns] Also, I am only checking the date itself and not the timestamp afterwards, if that matters. I'm doing this to make the timestamp 00:00:00: cur_date = datetime.strptime(cur_date.strftime('%Y_%m_%d'), '%Y_%m_%d') And the type of that object after printing is datetime as well A: For anyone who also stumbled across this when comparing a dataframe date to a variable date, and this did not exactly answer your question; you can use the code below. Instead of: self.df["date"] = pd.to_datetime(self.df["date"]) You can import datetime and then add .dt.date to the end like: self.df["date"] = pd.to_datetime(self.df["date"]).dt.date A: You can use pd.Timestamp('today') or pd.to_datetime('today') But both of those give the date and time for 'now'. Try this instead: pd.Timestamp('today').floor('D') or pd.to_datetime('today').floor('D') You could have also passed the datetime object to pandas.to_datetime but I like the other option more. pd.to_datetime(datetime.datetime.today()).floor('D') Pandas also has a Timedelta object pd.Timestamp('now').floor('D') + pd.Timedelta(-3, unit='D') Or you can use the offsets module pd.Timestamp('now').floor('D') + pd.offsets.Day(-3) To check for membership, try one of these cur_date in df['date'].tolist() Or df['date'].eq(cur_date).any() A: When converting datetime64 type using pd.Timestamp() it is important to note that you should compare it to another timestamp type.
(not a datetime.date type) Convert a date to numpy.datetime64 date = '2022-11-20 00:00:00' date64 = np.datetime64(date) Seven days ago - timestamp type sevenDaysAgoTs = (pd.to_datetime('today')-timedelta(days=7)) convert date64 to Timestamp and see if it was in the last 7 days print(pd.Timestamp(pd.to_datetime(date64)) >= sevenDaysAgoTs)
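To make the membership pitfall in the question concrete (the dates below are made up): `in` on a pandas Series tests the *index* labels, not the values, which is why the original `cur_date in df['date']` evaluated to False even though the date was present. A minimal sketch:

```python
import pandas as pd
from datetime import datetime

df = pd.DataFrame({"date": pd.to_datetime(["2023-01-01", "2023-01-02"])})

# Normalize 'now' to midnight so it can match a date-only column:
cur_date = pd.Timestamp(datetime(2023, 1, 2)).floor("D")

assert cur_date not in df["date"]         # `in` checks the index (0, 1)!
assert cur_date in df["date"].tolist()    # membership on the values
assert df["date"].eq(cur_date).any()      # element-wise comparison
```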
Comparison between datetime and datetime64[ns] in pandas
I'm writing a program that checks an excel file and if today's date is in the excel file's date column, I parse it I'm using: cur_date = datetime.today() for today's date. I'm checking if today is in the column with: bool_val = cur_date in df['date'] #evaluates to false I do know for a fact that today's date is in the file in question. The dtype of the series is datetime64[ns] Also, I am only checking the date itself and not the timestamp afterwards, if that matters. I'm doing this to make the timestamp 00:00:00: cur_date = datetime.strptime(cur_date.strftime('%Y_%m_%d'), '%Y_%m_%d') And the type of that object after printing is datetime as well
[ "For anyone who also stumbled across this when comparing a dataframe date to a variable date, and this did not exactly answer your question; you can use the code below.\nInstead of:\nself.df[\"date\"] = pd.to_datetime(self.df[\"date\"])\n\nYou can import datetime and then add .dt.date to the end like:\nself.df[\"da...
[ 80, 41, 0 ]
[]
[]
[ "datetime", "datetime64", "pandas", "python" ]
stackoverflow_0051827134_datetime_datetime64_pandas_python.txt
Q: Get location of pixel upon click in Kivy I am using Kivy to design an application to draw an n-sided polygon over a live video stream to demarcate regions of interest. The problem I have is that Kivy provides coordinates w.r.t to the whole window and not just the image. What I would like is to have the location of the pixel (in x and y cords) clicked. I have looked at to_local() method but it didn't make much sense, neither did it produce desired results. Any help would be appreciated, below is the MRE. from kivy.app import App from kivy.uix.image import Image from kivy.uix.boxlayout import BoxLayout from kivy.graphics import Color, Ellipse, Line from random import random class ImageView(Image): def on_touch_down(self, touch): ## # This provides touch cords for the entire app and not just the location of the pixel clicked# print("Touch Cords", touch.x, touch.y) ## color = (random(), 1, 1) with self.canvas: Color(*color, mode='hsv') d = 30. Ellipse(pos=(touch.x - d / 2, touch.y - d / 2), size=(d, d)) touch.ud['line'] = Line(points=(touch.x, touch.y)) def on_touch_move(self, touch): touch.ud['line'].points += [touch.x, touch.y] class DMSApp(App): def build(self): imagewidget = ImageView(source="/home/red/Downloads/600.png") imagewidget.size_hint = (1, .5) imagewidget.pos_hint = {"top": 1} layout = BoxLayout(size_hint=(1, 1)) layout.add_widget(imagewidget) return layout if __name__ == '__main__': DMSApp().run() A: You can calculate the coordinates of the touch relative to the lower left corner of the actual image that appears in the GUI. Those coordinates can then be scaled to the actual size of the source image to get a reasonable estimate of the actual pixel coordinates within the source. 
Here is a modified version of your in_touch_down() method that does that (only minimal testing performed): def on_touch_down(self, touch): if not self.collide_point(*touch.pos): return super(ImageView, self).on_touch_down(touch) lr_space = (self.width - self.norm_image_size[0]) / 2 # empty space in Image widget left and right of actual image tb_space = (self.height - self.norm_image_size[1]) / 2 # empty space in Image widget above and below actual image print('lr_space =', lr_space, ', tb_space =', tb_space) print("Touch Cords", touch.x, touch.y) print('Size of image within ImageView widget:', self.norm_image_size) print('ImageView widget:, pos:', self.pos, ', size:', self.size) print('image extents in x:', self.x + lr_space, self.right - lr_space) print('image extents in y:', self.y + tb_space, self.top - tb_space) pixel_x = touch.x - lr_space - self.x # x coordinate of touch measured from lower left of actual image pixel_y = touch.y - tb_space - self.y # y coordinate of touch measured from lower left of actual image if pixel_x < 0 or pixel_y < 0: print('clicked outside of image\n') return True elif pixel_x > self.norm_image_size[0] or \ pixel_y > self.norm_image_size[1]: print('clicked outside of image\n') return True else: print('clicked inside image, coords:', pixel_x, pixel_y) # scale coordinates to actual pixels of the Image source print('actual pixel coords:', pixel_x * self.texture_size[0] / self.norm_image_size[0], pixel_y * self.texture_size[1] / self.norm_image_size[1], '\n') color = (random(), 1, 1) with self.canvas: Color(*color, mode='hsv') d = 30. Ellipse(pos=(touch.x - d / 2, touch.y - d / 2), size=(d, d)) touch.ud['line'] = Line(points=(touch.x, touch.y)) return True
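The coordinate arithmetic in the method above can be factored into a pure function, which makes the mapping easy to check without a running Kivy app (the widget geometry below is invented for the test):

```python
def touch_to_pixel(touch, widget_pos, widget_size,
                   norm_image_size, texture_size):
    """Map a window-space touch to source-image pixel coordinates,
    mirroring the arithmetic in the on_touch_down above."""
    lr_space = (widget_size[0] - norm_image_size[0]) / 2  # letterbox x
    tb_space = (widget_size[1] - norm_image_size[1]) / 2  # letterbox y
    px = touch[0] - lr_space - widget_pos[0]
    py = touch[1] - tb_space - widget_pos[1]
    if not (0 <= px <= norm_image_size[0] and 0 <= py <= norm_image_size[1]):
        return None  # touch landed in the empty space around the image
    # Scale from displayed size to the source image's real pixel grid:
    return (px * texture_size[0] / norm_image_size[0],
            py * texture_size[1] / norm_image_size[1])

# Widget at origin, 200x100; a 100x100 image centered inside it;
# the source texture is actually 400x400 pixels.
center = touch_to_pixel((100, 50), (0, 0), (200, 100), (100, 100), (400, 400))
```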
Get location of pixel upon click in Kivy
I am using Kivy to design an application to draw an n-sided polygon over a live video stream to demarcate regions of interest. The problem I have is that Kivy provides coordinates w.r.t to the whole window and not just the image. What I would like is to have the location of the pixel (in x and y cords) clicked. I have looked at to_local() method but it didn't make much sense, neither did it produce desired results. Any help would be appreciated, below is the MRE. from kivy.app import App from kivy.uix.image import Image from kivy.uix.boxlayout import BoxLayout from kivy.graphics import Color, Ellipse, Line from random import random class ImageView(Image): def on_touch_down(self, touch): ## # This provides touch cords for the entire app and not just the location of the pixel clicked# print("Touch Cords", touch.x, touch.y) ## color = (random(), 1, 1) with self.canvas: Color(*color, mode='hsv') d = 30. Ellipse(pos=(touch.x - d / 2, touch.y - d / 2), size=(d, d)) touch.ud['line'] = Line(points=(touch.x, touch.y)) def on_touch_move(self, touch): touch.ud['line'].points += [touch.x, touch.y] class DMSApp(App): def build(self): imagewidget = ImageView(source="/home/red/Downloads/600.png") imagewidget.size_hint = (1, .5) imagewidget.pos_hint = {"top": 1} layout = BoxLayout(size_hint=(1, 1)) layout.add_widget(imagewidget) return layout if __name__ == '__main__': DMSApp().run()
[ "You can calculate the coordinates of the touch relative to the lower left corner of the actual image that appears in the GUI. Those coordinates can then be scaled to the actual size of the source image to get a reasonable estimate of the actual pixel coordinates within the source. Here is a modified version of you...
[ 1 ]
[]
[]
[ "kivy", "python", "user_interface" ]
stackoverflow_0074543030_kivy_python_user_interface.txt
Q: Downloading big file(800MB) from url into GCS bucket using Cloud function I have written a code which works in my local and then I tried to replicate the same in cloud function. The basic purpose is to download a massive file of around 800 MB to a gcs bucket. However I am getting the below error: Function invocation was interrupted. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging there is also a warning which precedes the error Container worker exceeded memory limit of 256 MiB with 256 MiB used after servicing 1 requests total. Consider setting a larger instance class. It seems that a Cloud Function won't be able to download such a big file; is my assumption correct? What is the max limit on CF for such a task, i.e. downloading data from a URL to GCS (I am aware that a GCS bucket can save an object of up to 5TB)? What other options do I have? I tried to change the code to include the chunksize option but even that doesn't work. Code snapshot: import requests import pandas as pd import time from google.cloud import storage url = "" def main(request): s_time_chunk = time.time() chunk = pd.read_csv(url, chunksize=1000 , usecols = ['Mk','Cn','m (kg)','Enedc (g/km)','Ewltp (g/km)','Ft','ec (cm3)','year'] ) e_time_chunk = time.time() print("With chunks: ", (e_time_chunk-s_time_chunk), "sec") df = pd.concat(chunk) df.to_csv("/tmp/eea.csv",index=False) storage_client = storage.Client(project='XXXXXXX') bucket_name = "XXXXXXX" bucket = storage_client.get_bucket(bucket_name) blob = bucket.blob("eea.csv") blob.upload_from_filename("/tmp/eea.csv") print('File uploaded to bucket') print("Success") return f"OK" A: Cloud Functions stores data in memory when you download it. Even if you use a file system path, it's an in-memory file system and will consume memory. The solution is to increase the memory of your cloud function (try with 1 or 2 GB).
Use the second gen Cloud Functions if you want more granularity and more memory.
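If the function must stay on a small instance, another option (beyond what the answer covers) is to stream the download in fixed-size chunks instead of materializing it with pandas — with `requests.get(url, stream=True).raw` as the source and, assuming a reasonably recent google-cloud-storage, `blob.open('wb')` as the destination, peak memory stays near one chunk. A generic sketch of the chunked copy (the file objects below are in-memory stand-ins):

```python
import io

def stream_copy(src, dst, chunk_size=1024 * 1024):
    """Copy src -> dst in chunks so peak memory is ~chunk_size,
    not the full (e.g. 800 MB) payload."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total

# Demo with in-memory buffers; in the Cloud Function, src would be the
# streamed HTTP response and dst the GCS blob's writable file handle.
src, dst = io.BytesIO(b"x" * 2500), io.BytesIO()
copied = stream_copy(src, dst, chunk_size=1000)
```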
Downloading big file(800MB) from url into GCS bucket using Cloud function
I have written a code which works in my local and then I tried to replicate the same in cloud function. The basic purpose is to download a massive file of around 800 MB to a gcs bucket. However I am getting the below error: Function invocation was interrupted. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging there is also a warning which precedes the error Container worker exceeded memory limit of 256 MiB with 256 MiB used after servicing 1 requests total. Consider setting a larger instance class. It seems that a Cloud Function won't be able to download such a big file; is my assumption correct? What is the max limit on CF for such a task, i.e. downloading data from a URL to GCS (I am aware that a GCS bucket can save an object of up to 5TB)? What other options do I have? I tried to change the code to include the chunksize option but even that doesn't work. Code snapshot: import requests import pandas as pd import time from google.cloud import storage url = "" def main(request): s_time_chunk = time.time() chunk = pd.read_csv(url, chunksize=1000 , usecols = ['Mk','Cn','m (kg)','Enedc (g/km)','Ewltp (g/km)','Ft','ec (cm3)','year'] ) e_time_chunk = time.time() print("With chunks: ", (e_time_chunk-s_time_chunk), "sec") df = pd.concat(chunk) df.to_csv("/tmp/eea.csv",index=False) storage_client = storage.Client(project='XXXXXXX') bucket_name = "XXXXXXX" bucket = storage_client.get_bucket(bucket_name) blob = bucket.blob("eea.csv") blob.upload_from_filename("/tmp/eea.csv") print('File uploaded to bucket') print("Success") return f"OK"
[ "Cloud Functions stores data in memory when you download it. Even if you use a file system path, it's an in memory file system and will consume memory.\nThe solution is to increase the memory of your cloud function (try with 1 or 2 Gb). Use the second gen Cloud Functions if you want more granularity and more memory...
[ 0 ]
[]
[]
[ "google_cloud_functions", "google_cloud_storage", "pandas", "python" ]
stackoverflow_0074548275_google_cloud_functions_google_cloud_storage_pandas_python.txt
Q: PEP-8 break up for loop I am trying to figure out a way to break a long for loop to make it PEP-8 valid. (I'm using flake8 vscode extension). This is the code: for result_row in soup.find_all('div', {"class": "b2c-inner-data-wrapper"}): .............. The error I get is: line too long (88 > 79 characters) I've tried: for result_row in soup.find_all('div', {"class": "b2c-inner-data-wrapper"}): But I get: continuation line under-indented for visual indent What is the right way of doing it? Thanks. A: result_rows = soup.find_all('div', {"class": "b2c-inner-data-wrapper"}) for result_row in result_rows: ..............
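PEP 8 also accepts breaking inside the call's parentheses with a hanging indent (indented an extra level so the continuation doesn't blend into the loop body), which avoids the temporary variable. A sketch with a hypothetical stub standing in for the BeautifulSoup object so the snippet runs on its own:

```python
class _Soup:                         # stand-in for a BeautifulSoup object
    def find_all(self, tag, attrs):
        return [(tag, attrs)]        # real find_all returns matching tags

soup = _Soup()
rows = []
# Hanging indent: the continuation line is double-indented, so flake8's
# "continuation line under-indented" complaint goes away.
for result_row in soup.find_all(
        'div', {"class": "b2c-inner-data-wrapper"}):
    rows.append(result_row)          # loop body, still under 79 columns
```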
PEP-8 break up for loop
I am trying to figure out a way to break a long for loop to make it PEP-8 valid. (I'm using flake8 vscode extension). This is the code: for result_row in soup.find_all('div', {"class": "b2c-inner-data-wrapper"}): .............. The error I get is: line too long (88 > 79 characters) I've tried: for result_row in soup.find_all('div', {"class": "b2c-inner-data-wrapper"}): But I get: continuation line under-indented for visual indent What is the right way of doing it? Thanks.
[ "result_rows = soup.find_all('div', {\"class\": \"b2c-inner-data-wrapper\"})\nfor result_row in result_rows:\n ..............\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "flake8", "pep8", "python" ]
stackoverflow_0074552071_beautifulsoup_flake8_pep8_python.txt
Q: "No source for code" message in Coverage.py I ran a build last night, successfully. I got up this morning and ran another without changing any configuration or modifying any source code. Now my build is failing with the message "No source for code" when running my nosetests with coverage. NoSource: No source for code: '/home/matthew/.hudson/jobs/myproject/workspace/tests/unit/util.py' . . . No source for code: '/home/matthew/.hudson/jobs/myproject/workspace/__init__.py' The only clue I have is that the files it says it can't find aren't there, but they never were and they're not supposed to be. For example, in the latter, Hudson's workspace isn't a Python module, so __init__.py wouldn't be there. Update: I've confirmed that this isn't a Hudson issue. When I run nosetests with coverage in the directory itself, I see similar messages. Again, the files that coverage is looking for were never there to begin with, making this very puzzling. A: I'm not sure why it thinks that file exists, but you can tell coverage.py to ignore these problems with a coverage xml -i switch. If you want to track down the error, drop me a line (ned at ned batchelder com). A: Ensure there's no .pyc file there, that may have existed in the past. A: Summary: Existing .coverage data is kept around when running nosetests --with-coverage, so remove it first. Details: I too just encountered this via Hudson and nosetests. This error was coming from coverage/results.py:18 (coverage 3.3.1 - there were 3 places raising this error, but this was the relevant one). It's trying to open the .py file corresponding to the module that was actually traced. A small demo: $ echo print > hello.py $ echo import hello > main.py $ coverage run main.py $ rm hello.py $ coverage xml No source for code: '/tmp/aoeu/hello.py' Apparently I had a file stopwords.pyc that was executed/traced, but no stopwords.py. Yet nowhere in my code was I importing stopwords, and even removing the .pyc I still got the error.
A simple strings .coverage then revealed that the reference to stopwords.py still existed. nosetests --with-coverage is using coverage's append or merge functionality, meaning the old .coverage data still lingers around. Indeed, removing .coverage addressed the issue. A: Just use the '--cover-erase' argument. It fixes this error and you don't have to manually delete coverage files nosetests --with-coverage --cover-erase I'd strongly recommend checking out the help to see what other args you're missing too and don't forget those plugins either A: The problem is that the .pyc file still exists. A quick and dirty solution is to delete all .pyc files in that directory: find . -name "*.pyc" -exec rm -rf {} \; A: coverage report -m can be called just so, without providing any argument (see official quick instructions). But it works on 1st script coverage-ed, not on 2nd. E.g. coverage run -m f1.py coverage report -m # works coverage run -m f2.py coverage report -m # fails (f2.py instead of f1.py in last coverage run) Instead, always indicate script as argument of coverage report -m: file="f2.py" && coverage run $file && coverage report -m $file Coverage reporting docs A: Maybe this will help, but I ran into a similar error today. And it's a permission error. My code is using a checkout from another user (by design, down ask) and I need to sudo in order for coverage to work. So your issue may have something to it. A: I ran into this problem as well when trying to run nosetests coverage through setuptools. As mentioned, it is possible to delete existing .pyc files but that can be cumbersome. I ended up having to create a .coveragerc file with the following [report] ignore_errors = True to fix this error. A: I had this problem. pytest-cov claimed there is no code for existing files full of valid and covered code. I removed those warnings just by removing .coverage file. It is of course recreated on next runs. A: A bit another case, but anyway... 
Don't be as foolish as me; use just coverage html, not coverage report html
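The .pyc cleanup suggested in the answers can be scripted; this sketch targets the side-by-side .pyc layout of the Python 2 era the question dates from (Python 3 keeps differently named .pyc files under `__pycache__`, so the pairing check would need adjusting there):

```python
from pathlib import Path

def remove_stale_pyc(root="."):
    """Delete orphaned .pyc files (ones with no matching .py next to
    them) -- the usual cause of 'No source for code' per the answers."""
    removed = []
    for pyc in Path(root).rglob("*.pyc"):
        if not pyc.with_suffix(".py").exists():  # no source alongside it
            pyc.unlink()
            removed.append(str(pyc))
    return removed
```

Running this before `coverage` (together with deleting the stale `.coverage` data file, or passing `--cover-erase` to nosetests) clears both failure modes described above.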
"No source for code" message in Coverage.py
I ran a build last night, successfully. I got up this morning and ran another without changing any configuration or modifying any source code. Now my build is failing with the message "No source for code" when running my nosetests with coverage. NoSource: No source for code: '/home/matthew/.hudson/jobs/myproject/workspace/tests/unit/util.py' . . . No source for code: '/home/matthew/.hudson/jobs/myproject/workspace/__init__.py' The only clue I have is that the files it says it can't find aren't there, but they never were and they're not supposed to be. For example, in the latter, Hudson's workspace isn't a Python module, so __init__.py wouldn't be there. Update: I've confirmed that this isn't a Hudson issue. When I run nosetests with coverage in the directory itself, I see similar messages. Again, the files that coverage is looking for were never there to begin with, making this very puzzling.
[ "I'm not sure why it thinks that file exists, but you can tell coverage.py to ignore these problems with a coverage xml -i switch. \nIf you want to track down the error, drop me a line (ned at ned batchelder com).\n", "Ensure theres no .pyc file there, that may have existed in the past.\n", "Summary: Existing ...
[ 48, 46, 17, 10, 6, 1, 0, 0, 0, 0 ]
[]
[]
[ "code_coverage", "continuous_integration", "nosetests", "python" ]
stackoverflow_0002386975_code_coverage_continuous_integration_nosetests_python.txt
Q: DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() Since some columns are strings, I used _get_numeric_data to extract only those columns into a DF, but I received an error message when I tried to convert the negative numbers to zero #cleaning negative minimum to equal only 0s df2 = df._get_numeric_data() if df2 < 0: then = 0 /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/core/generic.py in __nonzero__(self) 1535 @final 1536 def __nonzero__(self): -> 1537 raise ValueError( 1538 f"The truth value of a {type(self).__name__} is ambiguous. " 1539 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). A: this method worked without using an if statement df2 = df._get_numeric_data() df2[df2 < 0] = 0
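The answer's boolean-mask assignment is the idiomatic pandas fix; the same clamp-negatives-to-zero logic can be sketched on a plain Python list to make the intent concrete (a toy illustration, not the pandas API):

```python
# Clamp negatives to zero, mirroring the answer's df2[df2 < 0] = 0 on a plain list.
def clamp_negatives(values):
    return [0 if v < 0 else v for v in values]

print(clamp_negatives([3, -1, 0, -7, 2]))  # [3, 0, 0, 0, 2]
```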
DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
Since some columns are strings, I did get numeric to exact those columns only to DF, but I received an error message when I tried to convert the negative numbers to zero #cleaning negative minimum to equal only 0s df2 = df._get_numeric_data() if df2 < 0: then = 0 /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/core/generic.py in __nonzero__(self) 1535 @final 1536 def __nonzero__(self): -> 1537 raise ValueError( 1538 f"The truth value of a {type(self).__name__} is ambiguous. " 1539 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
[ "this method worked without using an if statment\ndf2 = df._get_numeric_data()\n\ndf2[df2 < 0] = 0\n\n\n" ]
[ 0 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0074552136_if_statement_python.txt
Q: How to get python logging output in shell script I want to get the stdout of the python file into a shell script. I initially used print and it worked fine, but it is not working for logging. ab.py import logging logger = logging.getLogger() logging.basicConfig(level=logging.INFO) logging.info('2222222222222') print("111111111111") cd.sh #!/bin/bash set -e set -x communities=$(python3 ab.py) echo $communities When executing cd.sh, I am getting the output only as 111111111111 and not 2222222222222 A: logging prints to stderr and not stdout. These are 2 separate streams of data. You can manage these by learning up a bit on Input/Output Redirection. But, it is kind of a big topic to summarize here. tl;dr: Make your command be python3 ab.py 2>&1 and you will get the output you want, kinda. You'll see that the logging messages appear before the print messages. This is because stderr is unbuffered and stdout is buffered. A: By default, the Python logging system logs to stderr. You are capturing just stdout. To configure the Python logging system to output to stdout, which it sounds like is what you want, you can slightly modify your code, as follows: import logging import sys logger = logging.getLogger() logging.basicConfig(level=logging.INFO, stream=sys.stdout) logging.info('2222222222222') print("111111111111") With this, both lines of output go to stdout and so your shell script will catch both of them.
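The stream-based fix in the second answer can be checked without a shell at all by pointing basicConfig at an in-memory stream (force=True, available since Python 3.8, is assumed here just to clear any previously installed handlers):

```python
import io
import logging

# Route the root logger to an in-memory stream, mirroring
# basicConfig(level=logging.INFO, stream=sys.stdout) from the answer.
buf = io.StringIO()
logging.basicConfig(level=logging.INFO, stream=buf, force=True)
logging.info('2222222222222')

print('captured:', buf.getvalue().strip())  # captured: INFO:root:2222222222222
```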
How to get python logging output in shell script
I would want to get the stdout of the python file into shell script. I initially used print and it worked fine but it is not working for logging. ab.py import logging logger = logging.getLogger() logging.basicConfig(level=logging.INFO) logging.info('2222222222222') print("111111111111") cd.sh #/bin/bash set -e set -x communities=$(python3 ab.py) echo $communities The output of executing cd.sh I am getting the output only as 111111111111 and not 2222222222222
[ "logging prints to stderr and not stdout. These are 2 separate streams of data. You can manage these by learning up a bit on Input/Output Redirection. But, it is kind of a big topic to summarize here.\ntl/dr: Make your command be python3 ab.py 2>&1 and you will get the output you want, kinda. You'll see that t...
[ 2, 2 ]
[]
[]
[ "logging", "python", "shell" ]
stackoverflow_0074552292_logging_python_shell.txt
Q: Drawing point of view for an object in PyQt6 I'm really new to PyQt6. I want to draw an object that moves in space and draw around it its point of view site. This is exactly what I want: And this is what I get till now. from PyQt6.QtWidgets import * from PyQt6.QtCore import * from PyQt6.QtGui import * import sys class Window(QWidget): def __init__(self): super().__init__() self.painter = QPainter(self) self.setWindowTitle("test 5") self.setStyleSheet("background-color:rgb(20, 20, 20);font-size:20px;") self.setGeometry(0, 0, 600, 600) def paintEvent(self, event): self.painter.begin(self) self.painter.setPen(QPen(QColor(30, 130, 30), 0, Qt.PenStyle.SolidLine)) self.painter.setBrush(QBrush(QColor(0, 100, 0), Qt.BrushStyle.SolidPattern)) self.painter.translate(300, 300) self.painter.rotate(-90) path = QPainterPath() path.moveTo(-20, 0) path.lineTo(-30, 15) path.lineTo(20, 0) path.lineTo(-30, -15) path.lineTo(-20, 0) self.painter.drawPath(path) self.painter.setPen(QPen(QColor(130, 30, 30), 0, Qt.PenStyle.SolidLine)) self.painter.setBrush(QBrush(QColor(100, 0, 0, 100), Qt.BrushStyle.SolidPattern)) path = QPainterPath() path.moveTo(100, 0) path.arcTo(-100, -100, 200, 200, -90 /2 * 16, 180 * 16) self.painter.drawPath(path) self.painter.end() if __name__ == "__main__": App = QApplication(sys.argv) window = Window() window.show() try: sys.exit(App.exec()) except SystemExit: print("closing window ...") I want to draw an arc around the object but it draws a full circle. A: You're using the wrong angle for QPainterPath, which uses degrees, as opposed to QPainter which uses sixteenths of a degree. Also, you need to use arcMoveTo() (not moveTo) in order to place the path at the correct position when starting a new arc. Finally, you have to close the path, using closeSubpath(). path = QPainterPath() outRect = QRectF(-100, -100, 200, 200) path.arcMoveTo(outRect, -60) path.arcTo(outRect, -60, 120) path.arcTo(-50, -50, 100, 100, 60, 240) path.closeSubpath() painter.drawPath(path)
Drawing point of view for an object in PyQt6
I'm really new to PyQt6. I want to draw an object that moves in space and draw around it its point of view site. This is exactly what I want: And this is what I get till now. from PyQt6.QtWidgets import * from PyQt6.QtCore import * from PyQt6.QtGui import * import sys class Window(QWidget): def __init__(self): super().__init__() self.painter = QPainter(self) self.setWindowTitle("test 5") self.setStyleSheet("background-color:rgb(20, 20, 20);font-size:20px;") self.setGeometry(0, 0, 600, 600) def paintEvent(self, event): self.painter.begin(self) self.painter.setPen(QPen(QColor(30, 130, 30), 0, Qt.PenStyle.SolidLine)) self.painter.setBrush(QBrush(QColor(0, 100, 0), Qt.BrushStyle.SolidPattern)) self.painter.translate(300, 300) self.painter.rotate(-90) path = QPainterPath() path.moveTo(-20, 0) path.lineTo(-30, 15) path.lineTo(20, 0) path.lineTo(-30, -15) path.lineTo(-20, 0) self.painter.drawPath(path) self.painter.setPen(QPen(QColor(130, 30, 30), 0, Qt.PenStyle.SolidLine)) self.painter.setBrush(QBrush(QColor(100, 0, 0, 100), Qt.BrushStyle.SolidPattern)) path = QPainterPath() path.moveTo(100, 0) path.arcTo(-100, -100, 200, 200, -90 /2 * 16, 180 * 16) self.painter.drawPath(path) self.painter.end() if __name__ == "__main__": App = QApplication(sys.argv) window = Window() window.show() try: sys.exit(App.exec()) except SystemExit: print("closing window ...") I want to draw an arc around the object but it draws a full circle.
[ "You're using the wrong angle for QPainterPath, which uses degrees, as opposed to QPainter which uses sixteenths of a degree.\nAlso, you need to use arcMoveTo() (not moveTo) in order to place the path at the correct position when starting a new arc.\nFinally, you have to close the path, using closeSubpath().\n ...
[ 1 ]
[]
[]
[ "graphics", "pyqt", "pyqt6", "python", "python_3.x" ]
stackoverflow_0074549261_graphics_pyqt_pyqt6_python_python_3.x.txt
Q: How to search a dictionary for a value then delete that value I have a dictionary where the values are lists. I want to search these for a specific value. Right now it reports whether the value is in each list individually, but I just want one overall result; then it deletes. Here's what it returns right now: marie true marie false marie false tom false tom true tom false jane false jane false jane false Here is what I want: marie true tom true jane false Here is the code: dictionary = {'nyu': ['marie', 'taylor', 'jim'], 'msu': ['tom', 'josh'], ' csu': ['tyler', 'mark', 'john']} #made in different method in same class class example: def get_names(self, name_list): for i in range(len(name_list)): for j in dictionary: if name_list[i] in dictionary[j]: print('true') dictionary[j].remove(name_list[i]) else: print('false') def main(): name_list = ['marie', 'tom', 'jane'] e = example() e.get_names(name_list) main() A: You need to finish iterating over all dict values before you can say yes or no dictionary = {'nyu': ['marie', 'taylor', 'jim'], 'msu': ['tom', 'josh'], ' csu': ['tyler', 'mark', 'john']} def get_names(names): for name in names: name_found = False for dict_names in dictionary.values(): if name in dict_names: name_found = True break print(name, name_found) name_list = ['marie', 'tom', 'jane'] get_names(name_list) A: You must not remove from something you are iterating. NEVER! But you may iterate a copy while deleting from the original, as in: dictionary = {'nyu': ['marie', 'taylor', 'jim'], 'msu': ['tom', 'josh'], ' csu': ['tyler', 'mark', 'john']} #made in different method in same class class example: def remove_names( self, name_list): dic = dictionary.copy() for name in name_list: for k in dic: if name in dic[k]: dictionary[k].remove( name) def main(): name_list = ['marie', 'tom', 'jane'] e = example() e.remove_names(name_list) print(dictionary) main() It will print: {'nyu': ['taylor', 'jim'], 'msu': ['josh'], ' csu': ['tyler', 'mark', 'john']}
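The first answer's inner loop can also be collapsed with any(), which short-circuits as soon as one of the value lists contains the name (same behaviour, one line per lookup):

```python
dictionary = {'nyu': ['marie', 'taylor', 'jim'],
              'msu': ['tom', 'josh'],
              'csu': ['tyler', 'mark', 'john']}

def name_found(name):
    # True as soon as any of the value lists contains the name.
    return any(name in names for names in dictionary.values())

for name in ['marie', 'tom', 'jane']:
    print(name, name_found(name))  # marie True / tom True / jane False
```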
How to search a dictionary for a value then delete that value
I have a dictionary where the values are lists. I want to search these for a specific value. right now it returns if the value is in each list individually but i just want overall then it deletes Here's what it returns right now: marie true marie false marie false tom false tom true tom false jane false jane false jane false Here is what I want: marie true tom true jane false Here is the code: dictionary = {'nyu': ['marie', 'taylor', 'jim'], 'msu': ['tom', 'josh'], ' csu': ['tyler', 'mark', 'john']} #made in different method in same class class example: def get_names(self, name_list): for i in range(len(name_list)): for j in dictionary: if name_list[i] in dictionary[j]: print('true') dictionary[j].remove(name_list[i]) else: print('false') def main(): name_list = ['marie', 'tom', 'jane'] e = example() e.get_names(name_list) main()
[ "You need to wait the iteration on all dict values before being able to yes yes or no\ndictionary = {'nyu': ['marie', 'taylor', 'jim'],\n 'msu': ['tom', 'josh'],\n ' csu': ['tyler', 'mark', 'john']}\n\ndef get_names(names):\n for name in names:\n name_found = False\n for d...
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074551969_python.txt
Q: how to edit txt file with regular expressions (re) in python Im having a trouble with editing a txt file on python. Hi guys, Im having a trouble with editing a txt file on python. Here is the first few lines of the txt file m0 +++$+++ 10 things i hate about you +++$+++ 1999 +++$+++ 6.90 +++$+++ 62847 +++$+++ ['comedy', 'romance'] m1 +++$+++ 1492: conquest of paradise +++$+++ 1992 +++$+++ 6.20 +++$+++ 10421 +++$+++ ['adventure', 'biography', 'drama', 'history'] here is my code: import re file = open('datasets/movie_titles_metadata.txt') def extract_categories(file): for line in file: line: str = line.rstrip() if re.search(" ", line): line = re.sub(r"[0-9]", "", line) line = re.sub(r"[$ + : . ]", "", line) return line extract_categories(file) i need to get an out put that looks like this: ['action', 'comedy', 'crime', 'drama', 'thriller'] can someone help? A: Regex is not the correct solution for this. Each of your lists is at the end of each line, so use str.rsplit: from io import StringIO import ast content = """m0 +++$+++ 10 things i hate about you +++$+++ 1999 +++$+++ 6.90 +++$+++ 62847 +++$+++ ['comedy', 'romance'] m1 +++$+++ 1492: conquest of paradise +++$+++ 1992 +++$+++ 6.20 +++$+++ 10421 +++$+++ ['adventure', 'biography', 'drama', 'history']""" # this is a mock file-handle, use your file instead here with StringIO(content) as fh: genres = [] for line in fh: # the 1 means that only 1 split occurs _, lst = line.rsplit('+++$+++', 1) # use ast to convert the string representation # to a python list lst = ast.literal_eval(lst.strip()) # extend your result list genres.extend(lst) print(genres) ['comedy', 'romance', 'adventure', 'biography', 'drama', 'history'] A: Alternatively, if you want to use regex instead: def extract_categories(file): categories = [] for line in file: _, line = line.rsplit('+++$+++', 1) if re.search(r"\['[a-z]+", line): res = re.findall(r"'([a-z]+)'", line) categories.extend(res) return categories
how to edit txt file with regular expressions (re) in python
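The rsplit-plus-regex variant from the second answer can be exercised on one sample line from the question to confirm it yields just the genre names:

```python
import re

line = "m1 +++$+++ 1492: conquest of paradise +++$+++ 1992 +++$+++ 6.20 +++$+++ 10421 +++$+++ ['adventure', 'biography', 'drama', 'history']"

# Keep only the last field, then pull the quoted genre names out of it.
_, tail = line.rsplit('+++$+++', 1)
genres = re.findall(r"'([a-z]+)'", tail)
print(genres)  # ['adventure', 'biography', 'drama', 'history']
```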
Im having a trouble with editing a txt file on python. Hi guys, Im having a trouble with editing a txt file on python. Here is the first few lines of the txt file m0 +++$+++ 10 things i hate about you +++$+++ 1999 +++$+++ 6.90 +++$+++ 62847 +++$+++ ['comedy', 'romance'] m1 +++$+++ 1492: conquest of paradise +++$+++ 1992 +++$+++ 6.20 +++$+++ 10421 +++$+++ ['adventure', 'biography', 'drama', 'history'] here is my code: import re file = open('datasets/movie_titles_metadata.txt') def extract_categories(file): for line in file: line: str = line.rstrip() if re.search(" ", line): line = re.sub(r"[0-9]", "", line) line = re.sub(r"[$ + : . ]", "", line) return line extract_categories(file) i need to get an out put that looks like this: ['action', 'comedy', 'crime', 'drama', 'thriller'] can someone help?
[ "Regex is not the correct solution for this. Each of your lists is at the end of each line, so use str.rsplit:\nfrom io import StringIO\nimport ast\n\ncontent = \"\"\"m0 +++$+++ 10 things i hate about you +++$+++ 1999 +++$+++ 6.90 +++$+++ 62847 +++$+++ ['comedy', 'romance']\nm1 +++$+++ 1492: conquest of paradise ++...
[ 1, 0 ]
[]
[]
[ "python", "python_re", "txt" ]
stackoverflow_0074552131_python_python_re_txt.txt
Q: How can I solve this double sub-string and astype problem In this case I have a problem with this: Im trying to split a column "Ubicación Finalizada" in two, but I have a problem with a "," string, I can't take it out and that make some problems in the after code because I can't convert it to float in the "Latitud" column with the 'Longitud' I dont have problem. Code: df['Latitud'] = df['Ubicación finalizada'].str.slice(0, 12).astype(str) df["Latitud"] = df["Latitud"].replace(",", "" ) df['Latitud'] = df['Latitud'].astype('float') df['Longitud'] = df['Ubicación finalizada'].str.slice(12, 25) df['Longitud1'] = df['Longitud'].astype('float') df['Cliente'] = df['Cliente'].astype('str') df['Conductor'] = df['Conductor'].astype('str') df['Zona de tarifa'] = df['Zona de tarifa'].str.slice(4, 7) df["Zona de tarifa1"] = 'Zona ' + df['Zona de tarifa'].astype('str') + ' $'+ df['Precio'].astype('str') + ' Conductor: ' + df['Conductor'].astype('str') colors= ['lightgray', 'darkred', 'lightgreen', 'green', 'pink', 'darkgreen', 'lightblue', 'darkpurple', 'black', 'lightred', 'gray', 'orange', 'cadetblue', 'blue', 'purple', 'beige', 'white', 'darkblue', 'red'] map = folium.Map(location=[df.Latitud.mean(), df.Longitud.mean()], zoom_start=14, control_scale=True) for index, location_info in df.iterrows(): folium.Marker([location_info["Latitud"], location_info["Longitud"]],icon=folium.Icon(color=colors[int(location_info['Zona de tarifa'])]),popup=location_info["Cliente"]+' '+location_info["Zona de tarifa1"]).add_to(map) map ''' Error: ValueError Traceback (most recent call last) <ipython-input-136-ccd19cd31b4c> in <module> 1 df['Latitud'] = df['Ubicación finalizada'].str.slice(1, 12).astype(str) 2 df["Latitud"] = df["Latitud"].replace(",", ",0" ) ----> 3 df['Latitud'] = df['Latitud'].astype('float') 4 df['Longitud'] = df['Ubicación finalizada'].str.slice(12, 25) 5 df['Longitud1'] = df['Longitud'].astype('float') 6 frames 
/usr/local/lib/python3.7/dist-packages/pandas/core/dtypes/cast.py in astype_nansafe(arr, dtype, copy, skipna) 1199 if copy or is_object_dtype(arr.dtype) or is_object_dtype(dtype): 1200 # Explicit copy, or required since NumPy can't view from / to object. -> 1201 return arr.astype(dtype, copy=True) 1202 1203 return arr.astype(dtype, copy=copy) ValueError: could not convert string to float: '34.9051275,' Also I want to convert "# Externo" that is in the format 4.176466e+10 - to normal numbers. A: You can use .str.split to split the string column and then convert it: df[["lat", "lon"]] = df["Ubicación finalizada"].str.split(",", expand=True) df[["lat", "lon"]] = df[["lat", "lon"]].astype(float) print(df) Prints: Ubicación finalizada lat lon 0 -34.123,-56.156 -34.12300 -56.156 1 -35.1234,-57.156 -35.12340 -57.156 2 -36.12356,-58.156 -36.12356 -58.156 df used: Ubicación finalizada 0 -34.123,-56.156 1 -35.1234,-57.156 2 -36.12356,-58.156
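The core of the answer's fix — split on the comma, then convert each half — can be checked in plain Python on the latitude from the traceback (the longitude value below is made up, since only the latitude appears in the error message):

```python
# '34.9051275' is the latitude that, with a trailing comma, broke astype('float');
# pairing it with a made-up longitude gives a full "lat,lon" value to split.
raw = '34.9051275,-56.1881600'

lat_str, lon_str = raw.split(',')
lat, lon = float(lat_str), float(lon_str)
print(lat, lon)  # 34.9051275 -56.18816
```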
How can I solve this double sub-string and astype problem
In this case I have a problem with this: Im trying to split a column "Ubicación Finalizada" in two, but I have a problem with a "," string, I can't take it out and that make some problems in the after code because I can't convert it to float in the "Latitud" column with the 'Longitud' I dont have problem. Code: df['Latitud'] = df['Ubicación finalizada'].str.slice(0, 12).astype(str) df["Latitud"] = df["Latitud"].replace(",", "" ) df['Latitud'] = df['Latitud'].astype('float') df['Longitud'] = df['Ubicación finalizada'].str.slice(12, 25) df['Longitud1'] = df['Longitud'].astype('float') df['Cliente'] = df['Cliente'].astype('str') df['Conductor'] = df['Conductor'].astype('str') df['Zona de tarifa'] = df['Zona de tarifa'].str.slice(4, 7) df["Zona de tarifa1"] = 'Zona ' + df['Zona de tarifa'].astype('str') + ' $'+ df['Precio'].astype('str') + ' Conductor: ' + df['Conductor'].astype('str') colors= ['lightgray', 'darkred', 'lightgreen', 'green', 'pink', 'darkgreen', 'lightblue', 'darkpurple', 'black', 'lightred', 'gray', 'orange', 'cadetblue', 'blue', 'purple', 'beige', 'white', 'darkblue', 'red'] map = folium.Map(location=[df.Latitud.mean(), df.Longitud.mean()], zoom_start=14, control_scale=True) for index, location_info in df.iterrows(): folium.Marker([location_info["Latitud"], location_info["Longitud"]],icon=folium.Icon(color=colors[int(location_info['Zona de tarifa'])]),popup=location_info["Cliente"]+' '+location_info["Zona de tarifa1"]).add_to(map) map ''' Error: ValueError Traceback (most recent call last) <ipython-input-136-ccd19cd31b4c> in <module> 1 df['Latitud'] = df['Ubicación finalizada'].str.slice(1, 12).astype(str) 2 df["Latitud"] = df["Latitud"].replace(",", ",0" ) ----> 3 df['Latitud'] = df['Latitud'].astype('float') 4 df['Longitud'] = df['Ubicación finalizada'].str.slice(12, 25) 5 df['Longitud1'] = df['Longitud'].astype('float') 6 frames /usr/local/lib/python3.7/dist-packages/pandas/core/dtypes/cast.py in astype_nansafe(arr, dtype, copy, skipna) 1199 if 
copy or is_object_dtype(arr.dtype) or is_object_dtype(dtype): 1200 # Explicit copy, or required since NumPy can't view from / to object. -> 1201 return arr.astype(dtype, copy=True) 1202 1203 return arr.astype(dtype, copy=copy) ValueError: could not convert string to float: '34.9051275,' Also I want to convert "# Externo" that is in the format 4.176466e+10 - to normal numbers.
[ "You can use .str.split to split the string column and then convert it:\ndf[[\"lat\", \"lon\"]] = df[\"Ubicación finalizada\"].str.split(\",\", expand=True)\ndf[[\"lat\", \"lon\"]] = df[[\"lat\", \"lon\"]].astype(float)\nprint(df)\n\nPrints:\n Ubicación finalizada lat lon\n0 -34.123,-56.156 -34.1230...
[ 1 ]
[]
[]
[ "pandas", "python", "slice", "string", "types" ]
stackoverflow_0074552377_pandas_python_slice_string_types.txt
Q: cv2 onMouse() has a missing argument when using importlib to create a class at runtime I want to load a python file (module) dynamically when the user inputs the filename (here for example: RegionOfInterest.py). The file will contain one class (here: RegionOfInterest). All classes have a method with the same name (here: start) which I want to call. This works fine, but if I call another method from the start method I get an error: TypeError: testMethod() missing 1 required positional argument: 'self' Any help is greatly appreciated. :) Minimal Working Example: main.py import importlib if __name__ == '__main__': module_name = "RegionOfInterest" # will be user input class_name = module_name # same as module_name myModule = importlib.import_module(f"{module_name}") myClass = myModule.__getattribute__(class_name) myClass.__init__(myClass) # apparently is not called automatically ? myMethod = myClass.__getattribute__(myClass, "start") myMethod(myClass) RegionOfInterest.py class RegionOfInterest: def start(self): self.testMethod() def testMethod(self): pass A: I'm not sure what you're trying to achieve with this code: myClass.__init__(myClass, img) myMethod = myClass.__getattribute__(myClass, "start") myMethod(myClass) myClass is a class, not an instance of that class. So you're passing an invalid object as self to the __init__() method. The following seems much simpler to me and does work without the error: module_name = "RegionOfInterest" # will be user input myModule = importlib.import_module(f"{module_name}") myClass = myModule.__getattribute__(module_name) roi = myClass(img) roi.start() (Note: this answer was based on an earlier revision of the question. The code has been changed a bit since then, but the principle remains the same.)
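The answer's point — look the class up, then instantiate it so that self is bound automatically — can be demonstrated without a file on disk by building the module object in memory (a sketch; real code would keep importlib.import_module on the user-supplied name, and getattr(myModule, class_name) is the idiomatic spelling of the __getattribute__ calls above):

```python
import types

# Build an in-memory stand-in for the dynamically imported module.
source = """
class RegionOfInterest:
    def start(self):
        return self.test_method()
    def test_method(self):
        return 'started'
"""
module = types.ModuleType('RegionOfInterest')
exec(source, module.__dict__)

# getattr on the module, then instantiate -- so 'self' is bound normally.
cls = getattr(module, 'RegionOfInterest')
instance = cls()
print(instance.start())  # started
```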
cv2 onMouse() has a missing argument when using importlib to create a class at runtime
I want to load a python file (module) dynamically when the user inputs the filename (here for example: RegionOfInterest.py). In the file will be one class (here: RegionOfInterest). All classes have a method with the same name (here: start) which i want to call. This works fine, but if i call another method from the start method i get an error: TypeError: testMethod() missing 1 required positional argument: 'self' Any help is greatly appreciated. :) Minimal Working Example: main.py import importlib if __name__ == '__main__': module_name = "RegionOfInterest" # will be user input class_name = module_name # same as module_name myModule = importlib.import_module(f"{module_name}") myClass = myModule.__getattribute__(class_name) myClass.__init__(myClass) # apparently is not called automatically ? myMethod = myClass.__getattribute__(myClass, "start") myMethod(myClass) RegionOfInterest.py class RegionOfInterest: def start(self): self.testMethod() def testMethod(self): pass
[ "I'm not sure what you're trying to achieve with this code:\nmyClass.__init__(myClass, img)\nmyMethod = myClass.__getattribute__(myClass, \"start\")\nmyMethod(myClass)\n\nmyClass is a class, not an instance of that class. So your passing an invalid object as self to the __init__() method.\nThe following seems much ...
[ 0 ]
[]
[]
[ "python", "python_importlib", "self" ]
stackoverflow_0074551092_python_python_importlib_self.txt
Q: Why doesn't my function with variables update data? def overwriteFlat(top, curTable, rawEntrylist, columns): rawEntrylist = rawEntrylist entryList = list() for value in rawEntrylist: entryList.append(value.get()) conn = sqlite3.connect('data.db') c = conn.cursor() for i in range(len(columns)): if entryList[i] != '': c.execute("""UPDATE """+curTable+""" SET """+columns[i]+""" = :"""+columns[i]+""" WHERE """+columns[0]+""" = """ + str(entryList[0]), {columns[i]: entryList[i]}) print(curTable,columns[i],entryList[i]) conn.commit() c.close() conn.close() closeWin(top) Output: Flat ID 23 Flat Street Test Flat Street_Number 100 I put in "Test" and "100" so that works. I provide a window for input, the input gets put into here and everything provided gets overwritten in provided ID. Because of print() I see it goes into the right table, it also selects the right column and doesn't throw any exception. But it doesn't update database. Database not locked. Variables all valid and work. No exception is thrown. Vulnerable to injection, soon as it works I'll change it. 
A: Thanks to @JohnGordon I found the solution. But just so, if someone wants to use variables in SQLite I will explain how, as this is hardly explained anywhere on the Internet (at least at my beginner-programmer level). Usually SQL commands work like this and are pretty static: "UPDATE Your_Table SET Your_Column = :Your_Column WHERE IndexColumn = Your_Index", {Your_Column: Your_Value} But by using +Variable+ you can use variables in there, so it's the same thing but with whatever variable you want: "UPDATE "+curTable+" SET "+columns[i]+" = :"+columns[i]+" WHERE "+columns[i]+" = " + str(entryList[0]), {columns[i]: entryList[i]} You can now have the variables "curTable", "columns", "entryList" set to whatever you want and don't need a static line for everything. The same works with INSERT and the other things too. Edit: (it's now 3 hours later, 1 AM, and I got the safer way) NOW THAT YOU GOT THAT, READ THIS: you will be vulnerable to SQL injection, and you still need to change that code to this: query = " UPDATE "+curTable+" SET "+columns[i]+" = ? WHERE "+columns[0]+" = ?" c.execute(query, (entryList[i], entryList[0], )) This makes it safer, but as I am not a pro yet maybe someone can confirm. Edit: Removed triple-quotes as they are only needed for multi-line SQL statements, thanks for the hint @Tim Roberts
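The safer pattern from the edit can be run end-to-end against an in-memory SQLite database (table and column names are invented for the demo; note that ? placeholders only protect values — identifier names still need to come from a trusted allow-list):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE Flat (ID INTEGER, Street TEXT)')
c.execute('INSERT INTO Flat VALUES (?, ?)', (23, 'old'))

# Identifiers still come from trusted variables; only the values use ? placeholders.
table, column, key_column = 'Flat', 'Street', 'ID'
query = 'UPDATE ' + table + ' SET ' + column + ' = ? WHERE ' + key_column + ' = ?'
c.execute(query, ('Test', 23))
conn.commit()

row = c.execute('SELECT Street FROM Flat WHERE ID = 23').fetchone()
print(row)  # ('Test',)
```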
Why doesn't my function with variables update data?
def overwriteFlat(top, curTable, rawEntrylist, columns): rawEntrylist = rawEntrylist entryList = list() for value in rawEntrylist: entryList.append(value.get()) conn = sqlite3.connect('data.db') c = conn.cursor() for i in range(len(columns)): if entryList[i] != '': c.execute("""UPDATE """+curTable+""" SET """+columns[i]+""" = :"""+columns[i]+""" WHERE """+columns[0]+""" = """ + str(entryList[0]), {columns[i]: entryList[i]}) print(curTable,columns[i],entryList[i]) conn.commit() c.close() conn.close() closeWin(top) Output: Flat ID 23 Flat Street Test Flat Street_Number 100 I put in "Test" and "100" so that works. I provide a window for input, the input gets put into here and everything provided gets overwritten in provided ID. Because of print() I see it goes into the right table, it also selects the right column and doesn't throw any exception. But it doesn't update database. Database not locked. Variables all valid and work. No exception is thrown. Vulnerable to injection, soon as it works I'll change it.
[ "Thanks to @JohnGordon i found the solution\nBut just so if someone wants to use Variables in Sqlite i will explain how as this is hardly explained anywhere on the Internet (at least at my Beginner-Programmer-Level)\nUsually Sql commands work like this and are pretty static:\n\"UPDATE Your_Table SET Your_Column = :...
[ 0 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0074552305_python_sqlite.txt
Q: Pandas dataframe to SQL table using presto-python-client syntax error: mismatched input ';' I am connecting to a presto db, trying to write a dataframe into a sql table. I can "CREATE TABLE" but df.to_sql throws a syntax error: PrestoUserError: PrestoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:61: mismatched input ';'. Expecting: '%', '*', '+', '-', '.', '/', 'AND', 'AT', 'EXCEPT', 'FETCH', 'GROUP', 'HAVING', 'INTERSECT', 'LIMIT', 'OFFSET', 'OR', 'ORDER', 'UNION', 'WINDOW', '[', '||', <EOF>", query_id=20220420_155736_92426_nkmur) Code: conn = prestodb.dbapi.connect( host = hostname, port = 8443, user = username, catalog = 'the-catalog', http_scheme = 'https', auth = prestodb.auth.BasicAuthentication(username, password), ) cur = conn.cursor() query = '''create table if not exists employees (id integer, name varchar(10), salary integer, dept_id integer)''' cur.execute(query) temp = cur.fetchall() data = {'id':[1,2],'name':['fname','lname'],'salary':[1000,10001],'dept_id':[3,2]} df = pd.DataFrame(data, columns= ['id','name','salary','dept_id']) #Everything works perfectly above. The error is in this line- df.to_sql('employees', conn, if_exists='replace', index = False) A: SQLAlchemy has limited dialect options, i.e. databases that it can connect to. Refer to https://docs.sqlalchemy.org/en/13/dialects/ for the list. Presto is not part of it.
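Because to_sql needs an SQLAlchemy engine, one workaround with a plain DBAPI connection is to build the INSERT yourself and feed the rows through executemany; sketched here with sqlite3 standing in for the presto connection (placeholder syntax and executemany support vary by driver, so treat this as the shape of the idea, not presto-python-client's exact API):

```python
import sqlite3

# Rows as df.itertuples(index=False) would yield them from the question's DataFrame.
rows = [(1, 'fname', 1000, 3), (2, 'lname', 10001, 2)]

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER, dept_id INTEGER)')

# One placeholder per column; executemany repeats the INSERT for every row.
cur.executemany('INSERT INTO employees VALUES (?, ?, ?, ?)', rows)
conn.commit()

count = cur.execute('SELECT COUNT(*) FROM employees').fetchone()[0]
print(count)  # 2
```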
Pandas dataframe to SQL table using presto-python-client syntax error: mismatched input ';'
I am connecting to a presto db, trying to write a dataframe into a sql table. I can "CREATE TABLE" but df.to_sql throws a syntax error: PrestoUserError: PrestoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:61: mismatched input ';'. Expecting: '%', '*', '+', '-', '.', '/', 'AND', 'AT', 'EXCEPT', 'FETCH', 'GROUP', 'HAVING', 'INTERSECT', 'LIMIT', 'OFFSET', 'OR', 'ORDER', 'UNION', 'WINDOW', '[', '||', <EOF>", query_id=20220420_155736_92426_nkmur) Code: conn = prestodb.dbapi.connect( host = hostname, port = 8443, user = username, catalog = 'the-catalog', http_scheme = 'https', auth = prestodb.auth.BasicAuthentication(username, password), ) cur = conn.cursor() query = '''create table if not exists employees (id integer, name varchar(10), salary integer, dept_id integer)''' cur.execute(query) temp = cur.fetchall() data = {'id':[1,2],'name':['fname','lname'],'salary':[1000,10001],'dept_id':[3,2]} df = pd.DataFrame(data, columns= ['id','name','salary','dept_id']) #Everything works perfect above. The error is in this line- df.to_sql('employees', conn, if_exists='replace', index = False)
[ "SQLAlchemy have limited dialect options or databases that it can connect to.\nRefer : https://docs.sqlalchemy.org/en/13/dialects/ for the list.\nPresto is not part of it.\n" ]
[ 0 ]
[]
[]
[ "presto", "python", "python_db_api", "sqlalchemy" ]
stackoverflow_0071942942_presto_python_python_db_api_sqlalchemy.txt
Q: How to analyse data collected through a sensor every minute? I have a dataset from a sensor which collected air quality features like CO2, temperature etc every minute for 30 days. I have tried pandas and Tableau but I don't seem to get anywhere. Any suggestions or solutions? A: Usually using sampled data is the better option for time-series data analysis. You can refer to this answer to know more about sampling and how it is useful for time-series data analysis.
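The answer's sampling suggestion — collapsing minute-level readings into coarser buckets — can be sketched in plain Python before reaching for pandas.DataFrame.resample (the readings and the hour-long bucket size below are invented for illustration):

```python
from collections import defaultdict

# (minute_index, co2_ppm) readings; minutes 0-59 -> hour 0, 60-119 -> hour 1, ...
readings = [(0, 400), (30, 410), (59, 420), (60, 500), (90, 520)]

buckets = defaultdict(list)
for minute, value in readings:
    buckets[minute // 60].append(value)

hourly_means = {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}
print(hourly_means)  # {0: 410.0, 1: 510.0}
```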
How to analyse data collected through a sensor every minute?
I have a dataset from a sensor which collected air quality features like CO2, temperature etc every minute for 30 days. I have tried pandas and tableu but I don't seem to get anywhere. Any suggestions or solutions?
[ "Usually using sampled data is the better option for Time-Series data analysis. You can refer this answer to know more about the sampling and how it is useful for time-series data analysis.\n" ]
[ 0 ]
[]
[]
[ "analysis", "data_analysis", "machine_learning", "pandas", "python" ]
stackoverflow_0074552496_analysis_data_analysis_machine_learning_pandas_python.txt
Q: ArgumentError: Python argument types in rdkit.Chem.rdMolDescriptors.GetMorganFingerprintAsBitVect(NoneType, int) did not match C++ signature So I'm working with RDKit and Python to convert SMILES strings to ECFP4 fingerprints, and my code is as shown below. I got an error, but I have also checked with this question over here but I seem to have the correct code? But why am I still getting an error? Is there an alternative way to code this? bits = 1024 PandasTools.AddMoleculeColumnToFrame(data, smilesCol='SMILES') data_ECFP4 = [AllChem.GetMorganFingerprintAsBitVect(x, 3, nBits = bits) for x in data['ROMol']] data_ecfp4_lists = [list(l) for l in data_ECFP4] ecfp4_name = [f'B{i+1}' for i in range(1024)] data_ecfp4_df = pd.DataFrame(data_ecfp4_lists, index = data.TARGET, columns = ecfp4_name) The error I got is: ArgumentError: Python argument types in rdkit.Chem.rdMolDescriptors.GetMorganFingerprintAsBitVect(NoneType, int) did not match C++ signature: GetMorganFingerprintAsBitVect(class RDKit::ROMol mol, int radius, unsigned int nBits=2048, class boost::python::api::object invariants=[], class boost::python::api::object fromAtoms=[], bool useChirality=False, bool useBondTypes=True, bool useFeatures=False, class boost::python::api::object bitInfo=None, bool includeRedundantEnvironments=False) A: It is likely that your list contains irregular values, such as 'None', nan, or just empty items. In your code the 'x' should correspond to a single string. A: You may have irregular molecules or ions in your data set! To fix this use MolVS (https://molvs.readthedocs.io/en/latest/). An example of use: https://www.cheminformania.com/a-deep-tox21-neural-network-with-rdkit-and-keras/
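Both answers point at the same root cause: MolFromSmiles returns None for SMILES it cannot parse, and that NoneType then reaches GetMorganFingerprintAsBitVect. The filtering idea can be sketched with a stand-in parser (the toy parse function below is not RDKit; it just mimics the returns-None-on-failure contract):

```python
smiles = ['CCO', 'not-a-smiles', 'c1ccccc1']

def parse(s):
    # Stand-in for Chem.MolFromSmiles, which returns None for unparseable input.
    return s if s.isalnum() else None

mols = [parse(s) for s in smiles]
valid = [m for m in mols if m is not None]  # drop failures before fingerprinting
print(len(valid), 'of', len(mols), 'parsed')  # 2 of 3 parsed
```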
[ "It is likely that your list contains irregular values, such as 'None', nan, or just empty items. In your code the 'x' should correspond to a single string.\n", "You may have irregular molecules or ions in your data set!\nTo fix this use MolVS (https://molvs.readthedocs.io/en/latest/).\nAn example of use:\nhttps...
[ 1, 0 ]
[]
[]
[ "python", "rdkit" ]
stackoverflow_0071776465_python_rdkit.txt
Q: Raspi Pico W [Errno 98] EADDRINUSE despite using socket.SO_REUSEADDR

I'm trying to set up a simple server/client connection using the socket module on a Raspberry Pi Pico W running the latest nightly build image rp2-pico-w-20221123-unstable-v1.19.1-713-g7fe7c55bb.uf2, which I've downloaded from https://micropython.org/download/rp2-pico-w/

The following code runs fine for the first connection (network connectivity can be assumed at this point).

    import socket

    def await_connection():
        print(' >> Awaiting connection ...')
        try:
            host_addr = socket.getaddrinfo('0.0.0.0', 46321)[0][-1]
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(host_addr)
            sock.listen(1)
            con, addr = sock.accept()
            while True:
                # keep receiving commands on the open connection until the client stops sending
                cmd = con.recv(1024)
                if not cmd:
                    print(f' >> {addr} disconnected')
                    break
                elif cmd.decode() == 'foo':
                    response = 'bar'.encode()
                else:
                    response = cmd
                print(f"Received {cmd.decode()}")
                print(f"Returning {response.decode()}")
                con.sendall(response)
        except OSError as e:
            print(f' >> ERROR: {e}')
        finally:
            # apparently, context managers are currently not supported in MicroPython,
            # therefore the connection is closed manually
            con.close()
            print(' >> Connection closed.')

    while True:
        # main loop, causing the program to await a new connection as soon as the previous one is closed
        await_connection()

If the client closes the connection and tries to re-connect, the infamous [Errno 98] EADDRINUSE is thrown. Please note that I've already implemented the sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) statement, as recommended here, but to no avail. However, if I run the exact same code on a Raspberry Pi 3 B with Python 3.7.3 within the same network, everything works as expected: the client can disconnect and reconnect multiple times without issues. How do I get the Pico to reuse the address after the initial connection, just like it works in Python 3.7.3?

A: While I was able to mitigate the reconnection crash by adding a sock.close() statement after the con.close(), the main issue with my code was the structure itself, as Steffen Ullrich pointed out. The actual fix was to move the operations on the sock object out of the loop.

    import socket

    def await_connection():
        print(' >> Awaiting connection ...')
        try:
            con, addr = sock.accept()
            while True:
                # keep receiving commands on the open connection until the client stops sending
                cmd = con.recv(1024)
                if not cmd:
                    print(f' >> {addr} disconnected')
                    break
                else:
                    response = cmd
                print(f"Received {cmd.decode()}")
                print(f"Returning {response.decode()}")
                con.sendall(response)
        except OSError as e:
            print(f' >> ERROR: {e}')
        finally:
            # apparently, context managers are currently not supported in MicroPython,
            # therefore the connection is closed manually
            con.close()
            print(' >> Connection closed.')

    host_addr = socket.getaddrinfo('0.0.0.0', 46321)[0][-1]
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(host_addr)
    sock.listen(1)

    while True:
        # main loop, causing the program to await a new connection as soon as the previous one is closed
        await_connection()
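The structural fix (bind and listen once, then accept repeatedly on the same listening socket) can be sketched as a self-contained loopback test in standard CPython; the background thread and port 0 are illustrative details so that one process can play both server and client:

```python
import socket
import threading

def serve_once(listener):
    # Accept ONE connection on an already-bound, listening socket and echo back.
    con, addr = listener.accept()
    data = con.recv(1024)
    con.sendall(data)
    con.close()

# Bind and listen ONCE, outside the per-connection loop.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

# Two back-to-back client connections reuse the same listening socket,
# which is exactly what fails when bind() is repeated per connection.
replies = []
for msg in (b'foo', b'bar'):
    t = threading.Thread(target=serve_once, args=(listener,))
    t.start()
    client = socket.create_connection(('127.0.0.1', port))
    client.sendall(msg)
    replies.append(client.recv(1024))
    client.close()
    t.join()

listener.close()
print(replies)  # [b'foo', b'bar']
```

Only `con` is closed between connections; the listening socket stays open for the lifetime of the program, so no rebind (and no EADDRINUSE) can occur.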
[ "While I was able to mitigate the reconnection crash by adding a sock.close() statement after the con.close(), the main issue with my code was the structure itself, as Steffen Ullrich pointed out.\nThe actual fix was to move the operations on the sock object out of the loop.\nimport socket\n\ndef await_connection()...
[ 1 ]
[]
[]
[ "micropython", "python", "raspberry_pi_pico", "sockets" ]
stackoverflow_0074551529_micropython_python_raspberry_pi_pico_sockets.txt
Q: Conditions for statespace statsmodel mlemodel

I'm trying to run a TVP-VAR with statsmodels on big data, but there seems to be a problem when validating the matrix and vector shapes, especially when defining the start and update parameters (mostly the dimension and structure of the update parameters). My model is a TVP-PVAR in normal linear state space form, composed of the state equation and the measurement equation, where X̃t = Xt Ξ and ut = Xt′ + ut with Ut ∼ N(0, (I + σ²Xt′Xt)) × Σ. There is a large amount of data, therefore I'm using numerous variables in the model.

The issue I cannot solve: I have specified k_states=702, which means the state_cov matrix must be 702 x 702. In the update method, I'm setting state_cov to a 3x3 matrix, which is wrong. The error is telling me that state_cov cannot be a 3x3 matrix; desired is a 702 x 702 matrix, i.e. k_states x k_states, if I get this right. I do not get where this 3x3 came from.

I have two questions:

1. How could I correct the full code, or how could I modify my code properly in order to get the right results? I cannot see where and how to set up the state_cov matrix to be 702x702.
2. Since this is a TVP-VAR, and a unit root is assumed, my understanding is that I have to keep only the "diffuse" initialization and remove the "stationary" one. The same goes for "constrain_stationary_multivariate" and "unconstrain_stationary_multivariate". Am I getting this right?

Much appreciate a full version of the code!

Traceback:

    Traceback (most recent call last):
      File "/Users/user/Documents/PYTHON/Spider/tvp/tvpstandard5.py", line 246, in <module>
        preliminary = mod.fit(maxiter=1000)
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/tsa/statespace/mlemodel.py", line 704, in fit
        mlefit = super(MLEModel, self).fit(start_params, method=method,
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/base/model.py", line 563, in fit
        xopt, retvals, optim_settings = optimizer._fit(f, score, start_params,
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/base/optimizer.py", line 241, in _fit
        xopt, retvals = func(objective, gradient, start_params, fargs, kwargs,
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/base/optimizer.py", line 651, in _fit_lbfgs
        retvals = optimize.fmin_l_bfgs_b(func, start_params, maxiter=maxiter,
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/scipy/optimize/lbfgsb.py", line 197, in fmin_l_bfgs_b
        res = _minimize_lbfgsb(fun, x0, args=args, jac=jac, bounds=bounds,
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/scipy/optimize/lbfgsb.py", line 306, in _minimize_lbfgsb
        sf = _prepare_scalar_function(fun, x0, jac=jac, args=args, epsilon=eps,
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/scipy/optimize/optimize.py", line 261, in _prepare_scalar_function
        sf = ScalarFunction(fun, x0, args, grad, hess,
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/scipy/optimize/_differentiable_functions.py", line 140, in __init__
        self._update_fun()
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/scipy/optimize/_differentiable_functions.py", line 233, in _update_fun
        self._update_fun_impl()
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/scipy/optimize/_differentiable_functions.py", line 137, in update_fun
        self.f = fun_wrapped(self.x)
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/scipy/optimize/_differentiable_functions.py", line 134, in fun_wrapped
        return fun(np.copy(x), *args)
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/base/model.py", line 531, in f
        return -self.loglike(params, *args) / nobs
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/tsa/statespace/mlemodel.py", line 933, in loglike
        self.update(params, transformed=True, includes_fixed=True,
      File "/Users/user/Documents/PYTHON/Spider/tvp/tvpstandard5.py", line 218, in update
        self['state_cov'] = np.diag([params[2]**2, params[3]**2, params[4]**2])  # W
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/tsa/statespace/mlemodel.py", line 239, in __setitem__
        return self.ssm.__setitem__(key, value)
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/tsa/statespace/representation.py", line 420, in __setitem__
        setattr(self, key, value)
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/tsa/statespace/representation.py", line 54, in __set__
        value = self._set_matrix(obj, value, shape)
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/tsa/statespace/representation.py", line 68, in _set_matrix
        validate_matrix_shape(
      File "/opt/anaconda3/envs/spyder-env/lib/python3.10/site-packages/statsmodels/tsa/statespace/tools.py", line 1474, in validate_matrix_shape
        raise ValueError('Invalid dimensions for %s matrix: requires %d'
    ValueError: Invalid dimensions for state covariance matrix matrix: requires 702 rows, got 3

Code:

    class TVPVAR(sm.tsa.statespace.MLEModel):
        def __init__(self, y):
            # Create a matrix with [y_t' : y_{t-1}'] for t = 2, ..., T
            augmented = sm.tsa.lagmat(y, 1, trim='both', original='in', use_pandas=True)
            # Separate into y_t and z_t = [1 : y_{t-1}']
            p = y.shape[1]
            y_t = augmented.iloc[:, :p]
            z_t = sm.add_constant(augmented.iloc[:, p:])
            nobs = y.shape[0]
            T = y.shape[0]
            # Recall that the length of the state vector is p * (p + 1)
            k_states = p * (p + 1)
            super(TVPVAR, self).__init__(y_t, exog=None, k_states=k_states, k_posdef=k_states)
            self.k_y = p
            self.k_states = p * (p + 1)
            self.nobs = T
            self['design'] = np.zeros((self.k_y, self.k_states, 1))
            self['transition'] = np.eye(k_states)  # G
            self['selection'] = np.eye(k_states)   # R=1

        def update_variances(self, obs_cov, state_cov_diag):
            self['obs_cov'] = obs_cov
            self['state_cov'] = np.diag(state_cov_diag)  # W
            init = initialization.Initialization(self.k_states)
            init.set((0, 2), 'diffuse')
            init.set((2, 4), 'stationary')
            self.ssm.initialize(init)

        def constrain_stationary_multivariate(unconstrained, variance, transform_variance=False, prefix=None):
            unconstrained = np.zeros_like(k_y * k_y * order)
            variance = np.zeros_like(k_y * k_y)
            order = k_y
            prefix = find_best_blas_type([unconstrained, variance])
            dtype = prefix_dtype_map[prefix]
            unconstrained = np.asfortranarray(unconstrained, dtype=dtype)
            variance = np.asfortranarray(variance, dtype=dtype)
            # Step 1: convert from arbitrary matrices to those with singular values
            # less than one.
            # sv_constrained = _constrain_sv_less_than_one(unconstrained, order,
            #                                              k_y, prefix)
            sv_constrained = prefix_sv_map[prefix](unconstrained, order, k_y)
            # Step 2: convert matrices from our "partial autocorrelation matrix"
            # space (matrices with singular values less than one) to the space of
            # stationary coefficient matrices
            constrained, variance = prefix_pacf_map[prefix](
                sv_constrained, variance, transform_variance, order, k_y)
            constrained = np.zeros_like(constrained, dtype=dtype)
            variance = np.zeros_like(variance, dtype=dtype)
            return constrained, variance

        def unconstrain_stationary_multivariate(constrained, error_variance):
            constrained = np.zeros_like(k_y * k_y * order)
            error_variance = np.zeros_like(k_y * k_y)
            # Step 1: convert matrices from the space of stationary
            # coefficient matrices to our "partial autocorrelation matrix" space
            # (matrices with singular values less than one)
            partial_autocorrelations = _compute_multivariate_pacf_from_coefficients(
                constrained, error_variance, order, k_y)
            unconstrained = _unconstrain_sv_less_than_one(
                partial_autocorrelations, order, k_y)
            return unconstrained, error_variance

        def update(self, params, **kwargs):
            params = super().update(params, **kwargs)
            self['transition', 2, 2] = params[0]
            self['transition', 3, 2] = params[1]
            self['state_cov'] = np.diag([params[2]**2, params[3]**2, params[4]**2])  # W

        @property
        def state_names(self):
            state_names = np.empty((self.k_y, self.k_y + 1), dtype=object)
            for i in range(self.k_y):
                endog_nam  # (code truncated in the original post)

I have debugged, and the error shows up in self['state_cov'] = np.diag([params[2]**2, params[3]**2, params[4]**2]) after it calls validate_matrix_shape() in tools.py. Any help is highly appreciated.

A: On this line:

    self['state_cov'] = np.diag([params[2]**2, params[3]**2, params[4]**2])

You're calling the function np.diag(). According to the documentation, when np.diag() is given a 1-D vector of N elements, it creates an NxN matrix, with the elements of the vector along the diagonal. You gave it three elements, so it creates a 3x3 matrix.
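The answer's point is easy to verify: np.diag builds an N x N matrix from a length-N vector, so a length-702 diagonal is needed here. The repeated-variance parameterization below is only an illustrative way to reach the required shape, not necessarily the right specification for this model (it assumes k_states = 702, i.e. p = 26 so p * (p + 1) = 702, as in the question):

```python
import numpy as np

k_states = 702  # p * (p + 1) with p = 26, matching the question

# np.diag of a length-3 vector builds a 3 x 3 matrix, hence the error.
small = np.diag([1.0, 2.0, 3.0])
print(small.shape)  # (3, 3)

# To satisfy the k_states x k_states requirement, supply a length-702
# diagonal, e.g. one variance parameter repeated (an assumption made here
# purely for shape illustration):
diag_values = np.full(k_states, 0.5 ** 2)
state_cov = np.diag(diag_values)
print(state_cov.shape)  # (702, 702)
```

In the model's update method, the mapping from the parameter vector to the 702 diagonal entries has to be spelled out explicitly; three squared parameters can never fill a 702 x 702 covariance.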
[ "On this line:\nself['state_cov'] = np.diag([params[2]**2, params[3]**2, params[4]**2])\n\nYou're calling the function np.diag(). According to the documentation, when np.diag() is given a 1-D vector of N elements, it creates an NxN matrix, with the element of the vector along the diagonal. You gave it three element...
[ 0 ]
[]
[]
[ "bayesian", "kalman_filter", "python", "state_space", "statsmodels" ]
stackoverflow_0074547854_bayesian_kalman_filter_python_state_space_statsmodels.txt
Q: Repack inference model with requirements.txt inside source_dir without installing them during the process in SageMaker

After training a custom model, I need to create an inference model and then deploy the relevant endpoint. When, in the execution of the pipeline, I have to inject a custom inference script, a model repacking process is triggered. The inference model needs to have the requirements.txt file (the same as the trained model). When the repacking process is started, a default machine ml.m5.large with the training image sagemaker-scikit-learn:0.23-1-cpu-py3 is instantiated. If the requirements.txt file is present in the inference code folder, this process will try to install the packages (although it is not necessary; it should be a simple repacking of a tar.gz!). Unfortunately, having specified particular library versions, it will fail. For example:

    ERROR: Ignored the following versions that require a different python version:
        1.22.0 Requires-Python >=3.8; 1.22.0rc1 Requires-Python >=3.8; 1.22.0rc2 Requires-Python >=3.8;
        1.22.0rc3 Requires-Python >=3.8; 1.22.1 Requires-Python >=3.8; 1.22.2 Requires-Python >=3.8;
        1.22.3 Requires-Python >=3.8; 1.22.4 Requires-Python >=3.8; 1.23.0 Requires-Python >=3.8;
        1.23.0rc1 Requires-Python >=3.8; 1.23.0rc2 Requires-Python >=3.8; 1.23.0rc3 Requires-Python >=3.8;
        1.23.1 Requires-Python >=3.8; 1.23.2 Requires-Python >=3.8; 1.23.3 Requires-Python >=3.8;
        1.23.4 Requires-Python >=3.8
    ERROR: Could not find a version that satisfies the requirement numpy==1.23.0

This is the code I'm running:

    inf_img_uri = sagemaker.image_uris.retrieve(
        framework='pytorch',
        region=region,
        image_scope='inference',
        version="1.12.0",
        instance_type='ml.m5.xlarge',
        py_version='py38'
    )
    pytorch_model = Model(
        image_uri=inf_img_uri,
        model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
        role=role,
        entry_point='inference.py',
        sagemaker_session=PipelineSession(),
        source_dir=os.path.join(LIB_DIR, "test"),  # here is inference.py and requirements.txt
        name=model_name,
    )
    step_create_model = ModelStep(
        name="infTest",
        step_args=pytorch_model.create(instance_type="ml.m5.xlarge"),
        description='Create model for inference'
    )

Is there any way to prevent the model repack from trying to install packages from the requirements.txt?

My current solution: I have omitted the file from the directory and manually install the packages with subprocess.check_call([sys.executable, "-m", "pip", "install", package]) in the inference code. But I find this approach wrong for batch inference processes (since it would be executed every time) and also inconsistent.
[ "The SageMaker SDK will always repackage the tar ball to include the inference.py script and then re-upload the tar ball to S3.\nIn general, SageMaker Framework containers will install the packages specified in the requirements.txt file.\nIf you do not want this to occur you can leave out the requirements.txt file ...
[ 1 ]
[]
[]
[ "amazon_sagemaker", "amazon_web_services", "boto3", "pipeline", "python" ]
stackoverflow_0074405489_amazon_sagemaker_amazon_web_services_boto3_pipeline_python.txt
Q: Can't recreate Conda environment in docker I created a conda environment from a fresh installation of miniconda3. After that I exported it and this is the content of the file (my only extra install was flask): name: myenv channels: - defaults dependencies: - ca-certificates=2018.03.07=0 - certifi=2018.11.29=py37_0 - click=7.0=py37_0 - flask=1.0.2=py37_1 - itsdangerous=1.1.0=py37_0 - jinja2=2.10=py37_0 - libcxx=4.0.1=hcfea43d_1 - libcxxabi=4.0.1=hcfea43d_1 - libedit=3.1.20170329=hb402a30_2 - libffi=3.2.1=h475c297_4 - markupsafe=1.1.0=py37h1de35cc_0 - ncurses=6.1=h0a44026_1 - openssl=1.1.1a=h1de35cc_0 - pip=18.1=py37_0 - python=3.7.1=haf84260_7 - readline=7.0=h1de35cc_5 - setuptools=40.6.2=py37_0 - sqlite=3.26.0=ha441bb4_0 - tk=8.6.8=ha441bb4_0 - werkzeug=0.14.1=py37_0 - wheel=0.32.3=py37_0 - xz=5.2.4=h1de35cc_4 - zlib=1.2.11=h1de35cc_3 prefix: /Users/rossid/miniconda3/envs/phadmin now what I want, is to recreate this environment in a docket image so I created this Dockefile FROM continuumio/miniconda3 ADD * myappdir/ RUN conda env create -f /myappdir/environment.yml but it will fail with: Step 1/5 : FROM continuumio/miniconda3 ---> d3c252f8727b Step 2/5 : ADD * myappdir/ ---> Using cache ---> 2afbf5ea75bd Step 3/5 : RUN conda env create -f /myappdir/environment.yml ---> Running in 7f916bd46979 Solving environment: ...working... failed ResolvePackageNotFound: - tk==8.6.8=ha441bb4_0 - ncurses==6.1=h0a44026_1 - markupsafe==1.1.0=py37h1de35cc_0 - readline==7.0=h1de35cc_5 - zlib==1.2.11=h1de35cc_3 - openssl==1.1.1a=h1de35cc_0 - xz==5.2.4=h1de35cc_4 - libcxxabi==4.0.1=hcfea43d_1 - libcxx==4.0.1=hcfea43d_1 - libffi==3.2.1=h475c297_4 - sqlite==3.26.0=ha441bb4_0 - python==3.7.1=haf84260_7 - libedit==3.1.20170329=hb402a30_2 why is this happening? If I try to do the same to create another environment it works. If I remove the build version, some dependencies are resolved (I mean the third coordinate in dependencies). 
I tried to add more channels like conda-forge, but nothing. Also my .condarc file is empty. Does anyone know how to fix this? A: I had a similar problem and I find multiple ways to solve it. The main problem with your approach is conda is not platform independent, so will force the environments to use pip. 1. Conda Like Solution Change your my_env.yml so that all the dependencies apart from pip goes under the pip dependency. Notice that the syntax is different when you move under the pip. For instance: name: myenv channels: - defaults dependencies: - pip=18.1 - pip: - wheel==0.32.3 Then go to your Dockerfile and add the following line: RUN conda env update -n base --file myenv.yml 2. Good old Pip way Export your conda environment into a pip requirements file as at this answer conda install pip pip freeze > requirements.txt Then go to your Docker file and add the following line: RUN python -m pip install -r requirements.txt A: The docker build fails because your input yml file includes platform-specific build constraints. For example: ResolvePackageNotFound: - tk==8.6.8=ha441bb4_0 - ncurses==6.1=h0a44026_1 - markupsafe==1.1.0=py37h1de35cc_0 - readline==7.0=h1de35cc_5 - zlib==1.2.11=h1de35cc_3 - openssl==1.1.1a=h1de35cc_0 - xz==5.2.4=h1de35cc_4 - libcxxabi==4.0.1=hcfea43d_1 - libcxx==4.0.1=hcfea43d_1 - libffi==3.2.1=h475c297_4 - sqlite==3.26.0=ha441bb4_0 - python==3.7.1=haf84260_7 - libedit==3.1.20170329=hb402a30_2 Those packages contain platform specific hashs (e.g. ha441bb4_0). Basically, you are trying to install packages from the OS platform on a linux platform. That's why berkay's answer would work for most of the use case. A simpler way to solve this problem is add from-history argument while exporting your conda environment. conda env export -f env_explicit.yml --from-history This argument will just includes packages, which you explicitly specified during installation. This argument will also ignore any platform-specific dependencies. 
And your new Dockerfile will looks like following: FROM continuumio/miniconda3 ADD * myappdir/ RUN conda env create -f /myappdir/env_explicit.yml Reference: conda fails to create environment from yml
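The `--from-history` export above is the cleanest fix. When re-exporting from the original machine is not possible, a rough stdlib sketch like the following can strip the platform-specific build strings (the third `=`-separated component) from an existing `environment.yml`. This is a line-based heuristic, not a real YAML parser:

```python
def strip_build_strings(yml_text):
    """Drop the build hash from lines like '- tk=8.6.8=ha441bb4_0',
    keeping '- tk=8.6.8'. Other lines pass through unchanged."""
    out = []
    for line in yml_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- ") and stripped.count("=") == 2:
            name, version, _build = stripped[2:].split("=")
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}- {name}={version}")
        else:
            out.append(line)
    return "\n".join(out)

portable = strip_build_strings(
    "dependencies:\n"
    "  - tk=8.6.8=ha441bb4_0\n"
    "  - python=3.7.1=haf84260_7\n"
    "  - pip=18.1=py37_0"
)
```

Note that even without build strings, a package version may still be unavailable on the target platform, so `--from-history` remains the more reliable route.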
Can't recreate Conda environment in docker
I created a conda environment from a fresh installation of miniconda3. After that I exported it and this is the content of the file (my only extra install was flask): name: myenv channels: - defaults dependencies: - ca-certificates=2018.03.07=0 - certifi=2018.11.29=py37_0 - click=7.0=py37_0 - flask=1.0.2=py37_1 - itsdangerous=1.1.0=py37_0 - jinja2=2.10=py37_0 - libcxx=4.0.1=hcfea43d_1 - libcxxabi=4.0.1=hcfea43d_1 - libedit=3.1.20170329=hb402a30_2 - libffi=3.2.1=h475c297_4 - markupsafe=1.1.0=py37h1de35cc_0 - ncurses=6.1=h0a44026_1 - openssl=1.1.1a=h1de35cc_0 - pip=18.1=py37_0 - python=3.7.1=haf84260_7 - readline=7.0=h1de35cc_5 - setuptools=40.6.2=py37_0 - sqlite=3.26.0=ha441bb4_0 - tk=8.6.8=ha441bb4_0 - werkzeug=0.14.1=py37_0 - wheel=0.32.3=py37_0 - xz=5.2.4=h1de35cc_4 - zlib=1.2.11=h1de35cc_3 prefix: /Users/rossid/miniconda3/envs/phadmin now what I want, is to recreate this environment in a docket image so I created this Dockefile FROM continuumio/miniconda3 ADD * myappdir/ RUN conda env create -f /myappdir/environment.yml but it will fail with: Step 1/5 : FROM continuumio/miniconda3 ---> d3c252f8727b Step 2/5 : ADD * myappdir/ ---> Using cache ---> 2afbf5ea75bd Step 3/5 : RUN conda env create -f /myappdir/environment.yml ---> Running in 7f916bd46979 Solving environment: ...working... failed ResolvePackageNotFound: - tk==8.6.8=ha441bb4_0 - ncurses==6.1=h0a44026_1 - markupsafe==1.1.0=py37h1de35cc_0 - readline==7.0=h1de35cc_5 - zlib==1.2.11=h1de35cc_3 - openssl==1.1.1a=h1de35cc_0 - xz==5.2.4=h1de35cc_4 - libcxxabi==4.0.1=hcfea43d_1 - libcxx==4.0.1=hcfea43d_1 - libffi==3.2.1=h475c297_4 - sqlite==3.26.0=ha441bb4_0 - python==3.7.1=haf84260_7 - libedit==3.1.20170329=hb402a30_2 why is this happening? If I try to do the same to create another environment it works. If I remove the build version, some dependencies are resolved (I mean the third coordinate in dependencies). I tried to add more channels like conda-forge, but nothing. Also my .condarc file is empty. 
Does anyone know how to fix this?
[ "I had a similar problem and I find multiple ways to solve it. The main problem with your approach is conda is not platform independent, so will force the environments to use pip.\n1. Conda Like Solution\nChange your my_env.yml so that all the dependencies apart from pip goes under the pip dependency. Notice that t...
[ 3, 0 ]
[]
[]
[ "docker", "dockerfile", "miniconda", "python", "python_3.x" ]
stackoverflow_0053819954_docker_dockerfile_miniconda_python_python_3.x.txt
Q: Remove outliers using groupby in data with several categories I have a time-series with several products. I want to remove outliers using the Tukey Fence method. The idea is to create a column with a flag indicating outlier or not, using groupby. It should be like that (flag column is added by the groupby): date prod units flag 1 a 100 0 2 a 90 0 3 a 80 0 4 a 15 1 1 b 200 0 2 b 180 0 3 b 190 0 4 b 30000 1 I was able to do it separating the prods using a for-loop and then making corresponding joins, but I wish to do it more cleanly. A: I would compute the quantiles first; then derive IQR from them. Compute the fence bounds and call merge() to map these limits to the original dataframe and call eval() to check if the units are within their respective Tukey fence bounds. # compute quantiles quantiles = df.groupby('prod')['units'].quantile([0.25, 0.75]).unstack() # compute interquartile range for each prod iqr = quantiles.diff(axis=1).bfill(axis=1) # compute fence bounds fence_bounds = quantiles + iqr * [-1.5, 1.5] # check if units are outside their respective tukey ranges df['flag'] = df.merge(fence_bounds, left_on='prod', right_index=True).eval('not (`0.25` < units < `0.75`)').astype(int) df The intermediate fence bounds are:
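The pandas approach above can be cross-checked with a plain-Python version of the Tukey fence. The sketch below uses only the standard library on the question's sample data; note that `statistics.quantiles` with `method="inclusive"` interpolates quartiles the same way as pandas' default linear interpolation:

```python
import statistics
from collections import defaultdict

rows = [  # (prod, units), matching the question's sample data
    ("a", 100), ("a", 90), ("a", 80), ("a", 15),
    ("b", 200), ("b", 180), ("b", 190), ("b", 30000),
]

# group units by product
by_prod = defaultdict(list)
for prod, units in rows:
    by_prod[prod].append(units)

# Tukey fences per group: [q1 - 1.5*iqr, q3 + 1.5*iqr]
fences = {}
for prod, units in by_prod.items():
    q1, _q2, q3 = statistics.quantiles(units, n=4, method="inclusive")
    iqr = q3 - q1
    fences[prod] = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# flag = 1 for values outside their group's fence
flags = [0 if fences[p][0] <= u <= fences[p][1] else 1 for p, u in rows]
```

On this data the 15 in group a and the 30000 in group b fall outside their fences, matching the flag column in the question.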
Remove outliers using groupby in data with several categories
I have a time-series with several products. I want to remove outliers using the Tukey Fence method. The idea is to create a column with a flag indicating outlier or not, using groupby. It should be like that (flag column is added by the groupby): date prod units flag 1 a 100 0 2 a 90 0 3 a 80 0 4 a 15 1 1 b 200 0 2 b 180 0 3 b 190 0 4 b 30000 1 I was able to do it separating the prods using a for-loop and then making corresponding joins, but I wish to do it more cleanly.
[ "I would compute the quantiles first; then derive IQR from them. Compute the fence bounds and call merge() to map these limits to the original dataframe and call eval() to check if the units are within their respective Tukey fence bounds.\n# compute quantiles\nquantiles = df.groupby('prod')['units'].quantile([0.25,...
[ 1 ]
[]
[]
[ "group_by", "pandas", "python", "time_series", "vectorization" ]
stackoverflow_0074551771_group_by_pandas_python_time_series_vectorization.txt
Q: Python. Generate imitation of sentence from random letters from large text I'm a total newbie. I'm writing a code for my python classes and I'm looking for help. This is supposed to be a code imitating language, in this case latin. I want to take random letter from long string. When I have letter1, I'd like to find all indexes of the same letter in the text. Then randomly take one index and take +1 to pick the next letter. And I want to keep adding letters in this way until it will generate '.' end of sentence. I have two first letters and I'm stuck. The third letter should be connected with second etc. Don't know, how to generate the next letters, maybe in a while loop. import random text = 'Lorem ipsum dolor sit amet. Consectetur adipiscing elit. Fusce accumsan, dolor eu maximus vulputate. Urna tortor vestibulum justo. Et fermentum libero tellus quis diam. Aenean massa nisi.' while True: letter = random.choice(text) print(letter, end='') indexes3 = [i for i,x in enumerate(text) if x == letter] nextindex = random.choice(indexes3)+1 print(text[nextindex], end='') if letter == '.': break I'd be very grateful for your help! A: You are close. You pick a letter randomly from the whole string only the first time, so this should be before the loop. Then you pick one of the succeeding letters. Now reset letter to the picked one and repeat the process: import random text = 'Lorem ipsum dolor sit amet. Consectetur adipiscing elit. Fusce accumsan, dolor eu maximus vulputate. Urna tortor vestibulum justo. Et fermentum libero tellus quis diam. Aenean massa nisi.' letter = random.choice(text) while letter != '.': print(letter, end='') indexes = [i for i, x in enumerate(text) if x == letter] next_index = random.choice(indexes) + 1 letter = text[next_index] print(letter) # to print a dot and a newline
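The accepted fix can also be wrapped in a function with a seedable RNG so runs are reproducible, plus a length guard in case a walk never reaches a period. A small sketch along the same lines as the answer:

```python
import random

TEXT = ('Lorem ipsum dolor sit amet. Consectetur adipiscing elit. '
        'Fusce accumsan, dolor eu maximus vulputate. Urna tortor vestibulum justo. '
        'Et fermentum libero tellus quis diam. Aenean massa nisi.')

def imitate_sentence(text, seed=None, max_len=10000):
    """Start from a random letter, then repeatedly jump to a random
    occurrence of the current letter and take its successor, stopping
    at '.' (max_len guards against a pathological non-terminating walk)."""
    rng = random.Random(seed)
    letter = rng.choice(text)
    out = []
    while letter != '.' and len(out) < max_len:
        out.append(letter)
        occurrences = [i for i, ch in enumerate(text) if ch == letter]
        letter = text[rng.choice(occurrences) + 1]
    out.append(letter)  # the terminating '.' (or the last letter if the guard hit)
    return ''.join(out)

sentence = imitate_sentence(TEXT, seed=42)
```

Indexing `+ 1` is safe here because the source text ends with `'.'`, so no non-period letter can be the last character.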
Python. Generate imitation of sentence from random letters from large text
I'm a total newbie. I'm writing a code for my python classes and I'm looking for help. This is supposed to be a code imitating language, in this case latin. I want to take random letter from long string. When I have letter1, I'd like to find all indexes of the same letter in the text. Then randomly take one index and take +1 to pick the next letter. And I want to keep adding letters in this way until it will generate '.' end of sentence. I have two first letters and I'm stuck. The third letter should be connected with second etc. Don't know, how to generate the next letters, maybe in a while loop. import random text = 'Lorem ipsum dolor sit amet. Consectetur adipiscing elit. Fusce accumsan, dolor eu maximus vulputate. Urna tortor vestibulum justo. Et fermentum libero tellus quis diam. Aenean massa nisi.' while True: letter = random.choice(text) print(letter, end='') indexes3 = [i for i,x in enumerate(text) if x == letter] nextindex = random.choice(indexes3)+1 print(text[nextindex], end='') if letter == '.': break I'd be very grateful for your help!
[ "You are close. You pick a letter randomly from the whole string only the first time, so this should be before the loop. Then you pick one of the succeeding letters. Now reset letter to the picked one and repeat the process:\nimport random\n\ntext = 'Lorem ipsum dolor sit amet. Consectetur adipiscing elit. Fusce ac...
[ 0 ]
[]
[]
[ "python", "while_loop" ]
stackoverflow_0074552623_python_while_loop.txt
Q: Mouse not interacting with object correctly using collidepoint function So I used the collidepoint function to test out whether or not my mouse is interacting or can interact with the images on the surface Surface but the variable mouse_pos does give out a position yet the mouse cannot ever collide with the object (see A is always false rather than true when the mouse hit the object). How do I solve this Code: import pygame from sys import exit pygame.init() widthscreen = 1440 #middle 720 heightscreen = 790 #middle 395 w_surface = 800 h_surface = 500 midalignX_lg = (widthscreen-w_surface)/2 midalignY_lg = (heightscreen-h_surface)/2 #blue = player #yellow = barrier screen = pygame.display.set_mode((widthscreen,heightscreen)) pygame.display.set_caption("Collision Game") clock = pygame.time.Clock() test_font = pygame.font.Font('font/Pixeltype.ttf', 45) surface = pygame.Surface((w_surface,h_surface)) surface.fill('Light Yellow') blue_b = pygame.image.load('images/blue.png').convert_alpha() blue_b = pygame.transform.scale(blue_b,(35,35)) yellow_b = pygame.image.load('images/yellow.png').convert_alpha() yellow_b = pygame.transform.scale(yellow_b,(35,35)) text_surface = test_font.render('Ball Option:', True, 'White') barrier_1_x = 0 barrier_1_surf = pygame.image.load('images/yellow.png').convert_alpha() barrier_1_surf = pygame.transform.scale(barrier_1_surf,(35,35)) barrier_1_rect = barrier_1_surf.get_rect(center = (100, 350)) player_surf = pygame.image.load('images/blue.png').convert_alpha() player_surf = pygame.transform.scale(player_surf,(35,35)) player_rect = player_surf.get_rect(center = (0,350)) while True: #elements & update #event loop for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() screen.blit(surface, (midalignX_lg,midalignY_lg)) screen.blit(blue_b,(150,250)) screen.blit(yellow_b, (150,300)) screen.blit(text_surface,(150, 200)) #barrier_1_x += 3 #if barrier_1_x > 800: barrier_1_x = 0 #barrier_1_rect.x += 3 #if 
barrier_1_rect.x > 800: barrier_1_rect.x = 0 barrier_1_rect.x += 2 if barrier_1_rect.right >= 820: barrier_1_rect.left = -10 player_rect.x += 3 if player_rect.right >= 820: player_rect.left = -10 surface = pygame.Surface((w_surface,h_surface)) surface.fill('Light Yellow') surface.blit(barrier_1_surf, barrier_1_rect) surface.blit(player_surf, player_rect) '''if player_rect.colliderect(barrier_1_rect): print('collision')''' A = False; mouse_pos = pygame.mouse.get_pos() if player_rect.collidepoint(mouse_pos): A = True print(A) pygame.display.update() clock.tick(60) I am not sure what else to do. I think it may be something wrong with the layering of the surface? A: You are not drawing the objects on the screen, but on the surface. Therefore the coordinates of player_rect are relative to the surface and you also have to calculate the mouse position relative to the surface. The top left coordinate of the surface is (midalignX_lg, midalignY_lg): while True: # [...] mouse_pos = pygame.mouse.get_pos() rel_x = mouse_pos[0] - midalignX_lg rel_y = mouse_pos[1] - midalignY_lg if player_rect.collidepoint(rel_x, rel_y): print("hit")
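The coordinate translation in the answer can be checked without opening a display. The sketch below mimics `Rect.collidepoint` with plain tuples (no pygame needed), using the question's screen and surface sizes; the player position is a hypothetical example:

```python
def screen_to_surface(mouse_pos, surface_topleft):
    """Convert a screen-space mouse position to surface-local coordinates."""
    return (mouse_pos[0] - surface_topleft[0], mouse_pos[1] - surface_topleft[1])

def point_in_rect(point, rect):
    """rect is (left, top, width, height); mimics Rect.collidepoint."""
    x, y = point
    left, top, w, h = rect
    return left <= x < left + w and top <= y < top + h

# surface blitted at the question's (midalignX_lg, midalignY_lg)
surface_topleft = ((1440 - 800) / 2, (790 - 500) / 2)   # (320.0, 145.0)

player_rect = (0, 332, 35, 35)     # surface-local player rect (hypothetical position)
mouse_on_player = (330, 490)       # screen coordinates over the player

local = screen_to_surface(mouse_on_player, surface_topleft)
hit = point_in_rect(local, player_rect)
```

The same mouse position fails the test without the translation, which is exactly the bug in the question.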
Mouse not interacting with object correctly using collidepoint function
So I used the collidepoint function to test out whether or not my mouse is interacting or can interact with the images on the surface Surface but the variable mouse_pos does give out a position yet the mouse cannot ever collide with the object (see A is always false rather than true when the mouse hit the object). How do I solve this Code: import pygame from sys import exit pygame.init() widthscreen = 1440 #middle 720 heightscreen = 790 #middle 395 w_surface = 800 h_surface = 500 midalignX_lg = (widthscreen-w_surface)/2 midalignY_lg = (heightscreen-h_surface)/2 #blue = player #yellow = barrier screen = pygame.display.set_mode((widthscreen,heightscreen)) pygame.display.set_caption("Collision Game") clock = pygame.time.Clock() test_font = pygame.font.Font('font/Pixeltype.ttf', 45) surface = pygame.Surface((w_surface,h_surface)) surface.fill('Light Yellow') blue_b = pygame.image.load('images/blue.png').convert_alpha() blue_b = pygame.transform.scale(blue_b,(35,35)) yellow_b = pygame.image.load('images/yellow.png').convert_alpha() yellow_b = pygame.transform.scale(yellow_b,(35,35)) text_surface = test_font.render('Ball Option:', True, 'White') barrier_1_x = 0 barrier_1_surf = pygame.image.load('images/yellow.png').convert_alpha() barrier_1_surf = pygame.transform.scale(barrier_1_surf,(35,35)) barrier_1_rect = barrier_1_surf.get_rect(center = (100, 350)) player_surf = pygame.image.load('images/blue.png').convert_alpha() player_surf = pygame.transform.scale(player_surf,(35,35)) player_rect = player_surf.get_rect(center = (0,350)) while True: #elements & update #event loop for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() screen.blit(surface, (midalignX_lg,midalignY_lg)) screen.blit(blue_b,(150,250)) screen.blit(yellow_b, (150,300)) screen.blit(text_surface,(150, 200)) #barrier_1_x += 3 #if barrier_1_x > 800: barrier_1_x = 0 #barrier_1_rect.x += 3 #if barrier_1_rect.x > 800: barrier_1_rect.x = 0 barrier_1_rect.x += 2 if 
barrier_1_rect.right >= 820: barrier_1_rect.left = -10 player_rect.x += 3 if player_rect.right >= 820: player_rect.left = -10 surface = pygame.Surface((w_surface,h_surface)) surface.fill('Light Yellow') surface.blit(barrier_1_surf, barrier_1_rect) surface.blit(player_surf, player_rect) '''if player_rect.colliderect(barrier_1_rect): print('collision')''' A = False; mouse_pos = pygame.mouse.get_pos() if player_rect.collidepoint(mouse_pos): A = True print(A) pygame.display.update() clock.tick(60) I am not sure what else to do. i think it may be something wrong with the layering of the surface?
[ "You are not drawing the objects on the screen, but on the surface. Therefore the coordinates of player_rect are relative to the surface and you also have to calculate the mouse position relative to the surface. The top left coordinate of the surface is (midalignX_lg, midalignY_lg):\nwhile True:\n # [...]\n \n...
[ 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074552642_pygame_python.txt
Q: How to handle API response in python I wonder how I can properly handle this invalid response. How to check if the response has status: 400 or perhaps has a title: Bad Request? url = f"https://api123.com" substring = "title: Bad Request" response = requests.get(url, headers=headers).json() if substring in response: print ("Your Data is Invalid") else: print ("Valid data") response: #-- Response from url https://api.xyzabc.com/data/abc1234 {'type': 'h---s://abc.com/data-errors/bad_request', 'title': 'Bad Request', 'status': 400, 'detail': 'The request you sent was invalid.', 'extras': {'invalid_field': 'account_id', 'reason': 'Account is invalid'}} A: You have a dictionary right there. >>> response = {'type': 'h---s://abc.com/data-errors/bad_request', 'title': 'Bad Request', 'status': 400, 'detail': 'The request you sent was invalid.', 'extras': {'invalid_field': 'account_id', 'reason': 'Account is invalid'}} >>> response["status"] == 400 True response = {'type': 'h---s://abc.com/data-errors/bad_request', 'title': 'Bad Request', 'status': 400, 'detail': 'The request you sent was invalid.', 'extras': {'invalid_field': 'account_id', 'reason': 'Account is invalid'}} if response["status"] == 400: print(response["extras"]["reason"]) # prints: Account is invalid A: An example to check status import requests url = 'https://api.github.com/search/repositories?q=language:python&sort=starts' headers = {'Accept': 'application/vnd.github.v3+json'} r = requests.get(url, headers=headers) # check the request (200 is successful) print(f"Satus code: {r.status_code}") #get dict from json response_dict = r.json()
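As both answers point out, the parsed JSON is just a dict, so key lookups replace substring checks. A slightly more defensive sketch using `.get()` so missing keys don't raise — the field names follow the error payload shown in the question:

```python
def classify_response(payload):
    """Return a short verdict string for a payload shaped like the
    question's error response. Uses .get() so missing keys don't raise."""
    if payload.get("status") == 400 or payload.get("title") == "Bad Request":
        extras = payload.get("extras", {})
        return f"invalid: {extras.get('reason', 'unknown reason')}"
    return "valid"

bad = {
    "type": "h---s://abc.com/data-errors/bad_request",
    "title": "Bad Request",
    "status": 400,
    "detail": "The request you sent was invalid.",
    "extras": {"invalid_field": "account_id", "reason": "Account is invalid"},
}

verdict = classify_response(bad)
```

In a real client you would also check `response.status_code` on the requests object itself before calling `.json()`, since a non-JSON error body would make `.json()` raise.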
How to handle API response in python
I wonder how I can properly handle this invalid response. How to check if the response has status: 400 or perhaps has a title: Bad Request? url = f"https://api123.com" substring = "title: Bad Request" response = requests.get(url, headers=headers).json() if substring in response: print ("Your Data is Invalid") else: print ("Valid data") response: #-- Response from url https://api.xyzabc.com/data/abc1234 {'type': 'h---s://abc.com/data-errors/bad_request', 'title': 'Bad Request', 'status': 400, 'detail': 'The request you sent was invalid.', 'extras': {'invalid_field': 'account_id', 'reason': 'Account is invalid'}}
[ "You have a dictionary right there.\n>>> response = {'type': 'h---s://abc.com/data-errors/bad_request', 'title': 'Bad Request', 'status': 400, 'detail': 'The request you sent was invalid.', 'extras': {'invalid_field': 'account_id', 'reason': 'Account is invalid'}} \n>>> response[\"status\"] == 400 ...
[ 1, 1 ]
[]
[]
[ "api", "python", "request" ]
stackoverflow_0074552447_api_python_request.txt
Q: Python - Complementary suppression of 2nd/lowest value row-wise I'm working on a data suppression script in python where I need to 1) suppress small values (between 1 and 5) and 2) make sure that there are at least 2 values suppressed at the smallest level of aggregation. I've done the first step, replacing small values with -1 (which I'll later recode to "s"). And I created a new helper column that counts how many suppressed values there are per row ('sup_cnt'). That yields something like this: Subgroup cat1 cat2 cat3 sup_cnt Group1 0 -1 0 1 Group2 -1 22 6 1 Group3 -1 14 -1 2 Group4 -1 -1 0 2 data = {'group':['group1','group2','group3','group4'],'cat1':[0,-1,-1,-1],'cat2':[-1,22,14,-1],'cat3':[0,0,-1,0],'sup_cnt':[1,1,2,3]} df = pd.DataFrame(data) So for Group1 and Group2, which only have one value suppressed, I want a second value -- the lowest (including zeroes) -- to be replaced with -1. In Group1, one of the zeroes would be replaced; in Group2, 6 would be replaced. So the result would be like this: Subgroup cat1 cat2 cat3 sup_cnt Group1 -1 -1 0 1 Group2 -1 22 -1 1 Group3 -1 14 -1 2 Group4 -1 -1 0 2 If there are more than one columns with the same lowest value (like with Group1, which has 2 zeroes), I only want one of those to be replaced (doesn't matter which). Originally started this in R and switched to python/pandas (but I'm new to pandas). My idea was to write a function that takes the cat values as arguments, determines the minimum non-negative integer among those, loops through the data columns in a row and replaces the first instance of that min value in the row, then breaks. Not sure if that's the right approach though (or exactly how to carry it out). Any ideas? 
A: I hope I've understood your question right: def fn(x): cols = x.filter(regex=r"^cat") x = cols[cols >= 0].sort_values()[: 2 - x["sup_cnt"]] df.loc[x.name, x.index] = -1 df[df.sup_cnt < 2].apply(fn, axis=1) print(df) Prints: Subgroup cat1 cat2 cat3 sup_cnt 0 Group1 -1 -1 0 1 1 Group2 -1 22 -1 1 2 Group3 -1 14 -1 2 3 Group4 -1 -1 0 2
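The apply-based answer can be cross-checked with a plain-Python version of the rule — suppress the smallest non-negative cat values until each row has at least two -1s. A stdlib sketch on data matching the question's table (group2's third value taken as 6, per the table):

```python
rows = [
    {"group": "group1", "cat1": 0,  "cat2": -1, "cat3": 0},
    {"group": "group2", "cat1": -1, "cat2": 22, "cat3": 6},
    {"group": "group3", "cat1": -1, "cat2": 14, "cat3": -1},
    {"group": "group4", "cat1": -1, "cat2": -1, "cat3": 0},
]
CATS = ["cat1", "cat2", "cat3"]

def enforce_secondary_suppression(row, min_suppressed=2):
    """Suppress (set to -1) the smallest non-negative cat values until
    the row has at least `min_suppressed` suppressed cells."""
    row = dict(row)  # don't mutate the caller's row
    suppressed = sum(row[c] == -1 for c in CATS)
    # candidates sorted by value; the stable sort keeps column order on ties,
    # so only one of several equal minima gets suppressed
    candidates = sorted((c for c in CATS if row[c] >= 0), key=lambda c: row[c])
    for col in candidates[: max(0, min_suppressed - suppressed)]:
        row[col] = -1
    return row

result = [enforce_secondary_suppression(r) for r in rows]
```

Group1 loses one of its zeros, group2 loses the 6, and the already-compliant rows pass through unchanged — matching the desired output in the question.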
Python - Complementary suppression of 2nd/lowest value row-wise
I'm working on a data suppression script in python where I need to 1) suppress small values (between 1 and 5) and 2) make sure that there are at least 2 values suppressed at the smallest level of aggregation. I've done the first step, replacing small values with -1 (which I'll later recode to "s"). And I created a new helper column that counts how many suppressed values there are per row ('sup_cnt'). That yields something like this: Subgroup cat1 cat2 cat3 sup_cnt Group1 0 -1 0 1 Group2 -1 22 6 1 Group3 -1 14 -1 2 Group4 -1 -1 0 2 data = {'group':['group1','group2','group3','group4'],'cat1':[0,-1,-1,-1],'cat2':[-1,22,14,-1],'cat3':[0,0,-1,0],'sup_cnt':[1,1,2,3]} df = pd.DataFrame(data) So for Group1 and Group2, which only have one value suppressed, I want a second value -- the lowest (including zeroes) -- to be replaced with -1. In Group1, one of the zeroes would be replaced; in Group2, 6 would be replaced. So the result would be like this: Subgroup cat1 cat2 cat3 sup_cnt Group1 -1 -1 0 1 Group2 -1 22 -1 1 Group3 -1 14 -1 2 Group4 -1 -1 0 2 If there are more than one columns with the same lowest value (like with Group1, which has 2 zeroes), I only want one of those to be replaced (doesn't matter which). Originally started this in R and switched to python/pandas (but I'm new to pandas). My idea was to write a function that takes the cat values as arguments, determines the minimum non-negative integer among those, loops through the data columns in a row and replaces the first instance of that min value in the row, then breaks. Not sure if that's the right approach though (or exactly how to carry it out). Any ideas?
[ "I hope I've understood your question right:\ndef fn(x):\n cols = x.filter(regex=r\"^cat\")\n x = cols[cols >= 0].sort_values()[: 2 - x[\"sup_cnt\"]]\n df.loc[x.name, x.index] = -1\n\n\ndf[df.sup_cnt < 2].apply(fn, axis=1)\nprint(df)\n\nPrints:\n Subgroup cat1 cat2 cat3 sup_cnt\n0 Group1 -1 -1...
[ 1 ]
[]
[]
[ "pandas", "python", "replace", "suppression" ]
stackoverflow_0074552418_pandas_python_replace_suppression.txt
Q: Amazon Web Scraping - retrieving price data I'm currently working on my first project experimenting with web scraping in Python. I am attempting to retrieve price data from an Amazon URL but am having some issues. url = 'https://www.amazon.ca/Nintendo-SwitchTM-Neon-Blue-Joy-E2-80-91ConTM-dp-B0BFJWCYTL/dp/B0BFJWCYTL/ref=dp_ob_title_vg' headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"} page = requests.get(url, headers=headers) soup1 = BeautifulSoup(page.content, "lxml") soup2 = BeautifulSoup(soup1.prettify(), "lxml") title = soup2.find(id='productTitle').get_text() price = soup2.find(id='corePriceDisplay_desktop_feature_div').get_text() When I print the price variable, my output is a bit weird: $394.00 $ 394 . 00 There's a lot of whitespace and the numbers are formatted in a weird way with a lot of newlines. How do I gather just the price so that when I print it, it displays just $394.00? I believe this can be solved with the span class but I could not figure it out. A: As you can see below, searching for "corePriceDisplay_desktop_feature_div" is far too broad. Searching for a span element with class="a-offscreen" should fit your needs. Try: price = soup.find("span", {"class": "a-offscreen"})
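Even when the broader div is scraped, the extra whitespace and duplicated digits come from nested spans being flattened by `get_text()`. The visible text can be normalized with the standard library; the sketch below works on a stand-in string shaped like the question's output:

```python
import re

raw = "\n $394.00\n $\n 394\n .\n 00\n"  # stand-in for the scraped div text

def first_price(text):
    """Collapse all whitespace, then pull the first $xxx.yy-looking token."""
    collapsed = re.sub(r"\s+", "", text)
    match = re.search(r"\$\d+(?:,\d{3})*\.\d{2}", collapsed)
    return match.group(0) if match else None

price = first_price(raw)
```

This is a fallback for messy text; targeting the narrower `a-offscreen` span, as the answer suggests, avoids the duplication in the first place.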
Amazon Web Scraping - retrieving price data
I'm currently working on my first project experimenting with web scraping in Python. I am attempting to retrieve price data from an Amazon URL but am having some issues. url = 'https://www.amazon.ca/Nintendo-SwitchTM-Neon-Blue-Joy-E2-80-91ConTM-dp-B0BFJWCYTL/dp/B0BFJWCYTL/ref=dp_ob_title_vg' headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"} page = requests.get(url, headers=headers) soup1 = BeautifulSoup(page.content, "lxml") soup2 = BeautifulSoup(soup1.prettify(), "lxml") title = soup2.find(id='productTitle').get_text() price = soup2.find(id='corePriceDisplay_desktop_feature_div').get_text() When I print the price variable, my output is a bit weird: $394.00 $ 394 . 00 There's a lot of whitespace and the numbers are formatted in a weird way with a lot of newlines. How do I gather just the price so that when I print it, it displays just $394.00? I believe this can be solved with the span class but I could not figure it out.
[ "As you can see below, searching for \"corePriceDisplay_desktop_feature_div\" is far to broad. Searching for span element with class=\"a-offscreen\" should fit your needs.\nTry:\nprice = soup.find(\"span\", {\"class\": \"a-offscreen\"})\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "jupyter_notebook", "python", "web_scraping" ]
stackoverflow_0074551550_beautifulsoup_jupyter_notebook_python_web_scraping.txt
Q: how to change the date if it is sunday or saturday in batch file I have to run a python script in the command line and pass the date as an argument, and below is my code to parse the date: parser = argparse.ArgumentParser() parser.add_argument('date') tdate= parser.parse_args().date I would like to change this to check if it is Sunday or Saturday. If it is Saturday, tdate should be tdate-1, and if it is Sunday, it should be tdate-2. I tried to get weekno in python: if it is 5, I am considering this as Saturday, and if it is 6 then it is Sunday. #this code is in python getting today() weekday() weekno = datetime.datetime.today().weekday() if weekno==5: tdate= parser.parse_args().date-1 elif weekno==6: tdate= parser.parse_args().date-2 else: tdate= parser.parse_args().date I am missing something in this code. Can anyone please help me? A: If I'm understanding correctly, you want it so that if the current day is either Saturday or Sunday, then the entered date that is passed as a command line argument should be date - 1 day or date - 2 days. So, if today's date is passed (11/23/2022) on Monday-Friday it'll return the date as is, but if the same date is passed on Saturday, it'll return 11/22/2022. To do the operation of subtracting days from the current day you'll need to use timedelta.
This is my solution based on my assumptions of what you are asking import argparse from datetime import datetime, timedelta parser = argparse.ArgumentParser() parser.add_argument('date') passed_in_date = parser.parse_args().date tdate = datetime.strptime(passed_in_date, "%m/%d/%Y") weekno = datetime.today().weekday() print(f"weekno of today is: {weekno}") if weekno==5: tdate = tdate - timedelta(days=1) elif weekno==6: tdate = tdate - timedelta(days=2) print(tdate.date()) Output #running on a Wednesday (for example) date returns same as passed python .\date.py 11/26/2022 weekno of today is: 2 2022-11-26 #example for testing if running on a Sunday python .\date.py 11/23/2022 weekno of today is: 6 2022-11-21 If you wanted to see the date change, just manually set weekno to 5 or 6 for testing.
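The weekday adjustment itself can be isolated into a pure function that is easy to unit-test with fixed dates. The sketch below keys off the weekday of the date being adjusted (an alternative reading of the question, rather than today's weekday as in the answer):

```python
from datetime import date, timedelta

def adjust_for_weekend(d):
    """If d falls on Saturday, go back 1 day; on Sunday, back 2 days —
    i.e. snap weekend dates to the preceding Friday."""
    weekno = d.weekday()   # Monday=0 ... Sunday=6
    if weekno == 5:        # Saturday
        return d - timedelta(days=1)
    if weekno == 6:        # Sunday
        return d - timedelta(days=2)
    return d

saturday = date(2022, 11, 26)       # 2022-11-26 was a Saturday
adjusted = adjust_for_weekend(saturday)
```

Keeping the rule in a function like this means the argparse plumbing and the business logic can be tested separately.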
how to change the date if it is sunday or saturday in batch file
I have to a run a python script in command line and pass the date as argument and below is my command to run the date parser = argparse.ArgumentParser() parser.add_argument('date') tdate= parser.parse_args().date i would like to change this to check if it is sunday or saturday. If it is saturday, tdate should be tdate-1 and if it is sunday , it should be tdate-2. I tried to get weekno in python and if it is 5, i am considering this as saturday and if it is 6 then it is sunday. #this code is in python getting today() weekday() weekno = datetime.datetime.today().weekday() if weekno==5: tdate= parser.parse_args().date-1 elif weekno==6: tdate= parser.parse_args().date-2 else: tdate= parser.parse_args().date I am missing something in this code. Can anyone please help me ?
[ "If I'm understanding correctly, you want it so that if the current day is either Saturday or Sunday, then the entered date that is passed as a command line argument should be date - 1 day or date - 2 days.\nSo, if today's date is passed (11/23/2022) on Monday-Friday it'll return the date as is but if the same date...
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074551335_pandas_python.txt
Q: Is there a fast way to find the first offset at which two large byte sequences differ? I can use a for loop to loop over two byte sequences and return the index at the first difference of course: bytes1 = b'12345' bytes2 = b'1F345' for index, pair in enumerate(zip(bytes1, bytes2)): if pair[0] != pair[1]: print(index) break But I don't think that's a smart and fast way to do it. I would hope a native method exists that I can call to get this done. Is there something that can help me here? I can also use numpy if it helps. I also want to clarify that this will run many times, with medium sequences. Approximately 300MB is expected, chunked by 100kB. I might be able to change that for larger if it helps significantly. A: a solution with numpy is to convert them to an array of uint8 then xor them and use argmax to get the first non-zero. import numpy as np bytes1 = b'12345' bytes2 = b'1F345' bytes3 = np.frombuffer(bytes1,dtype=np.uint8) bytes4 = np.frombuffer(bytes2,dtype=np.uint8) max_loc = np.flatnonzero(bytes3 ^ bytes4)[0] print(max_loc) 1 problem is that this still iterates to the end of the string on all functions, it's done in C so it is not too slow, slicing long array into multiple smaller ones can reduce the overhead for very long arrays. Edit: modified argmax to the correct flatnonzero as pointed by @jasonharper, which throws an indexError if both inputs are equal. A: If using numba is ok: import numba @numba.jit() def method2(bytes1, bytes2): idx = 0 while idx < len(bytes1): if bytes1[idx] != bytes2[idx]: return idx idx += 1 return idx Note that first run of this function will be significantly slower (due to compilation performed by numba). Takes like 2 seconds. Then, for each next run of the function: easy case you posted, index = 1 -> numba is 2x faster, for index = 100 -> numba is 33x faster
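For the 100 kB chunks mentioned in the question, a pure-Python middle ground (no numpy or numba) is to compare fixed-size slices first — bytes slice equality is a C-level memcmp — and only scan byte-by-byte inside the first mismatching slice. A sketch:

```python
def first_diff(a, b, chunk=4096):
    """Index of the first differing byte between bytes a and b, or None
    if they agree over their common prefix (up to min(len(a), len(b)))."""
    n = min(len(a), len(b))
    for start in range(0, n, chunk):
        end = min(start + chunk, n)
        if a[start:end] != b[start:end]:      # fast C-speed comparison
            for i in range(start, end):       # narrow scan inside one chunk
                if a[i] != b[i]:
                    return i
    return None

offset = first_diff(b"12345", b"1F345")
```

This keeps the Python-level loop count at roughly len/chunk iterations instead of one per byte, while the inner scan touches at most one chunk.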
Is there a fast way to find the first offset at which two large byte sequences differ?
I can use a for loop to loop over two byte sequences and return the index at the first difference of course: bytes1 = b'12345' bytes2 = b'1F345' for index, pair in enumerate(zip(bytes1, bytes2)): if pair[0] != pair[1]: print(index) break But I don't think that's a smart and fast way to do it. I would hope a native method exists that I can call to get this done. Is there something that can help me here? I can also use numpy if it helps. I also want to clarify that this will run many times, with medium sequences. Approximately 300MB is expected, chunked by 100kB. I might be able to change that for larger if it helps significantly.
[ "a solution with numpy is to convert them to an array of uint8 then xor them and use argmax to get the first non-zero.\nimport numpy as np\nbytes1 = b'12345'\nbytes2 = b'1F345'\nbytes3 = np.frombuffer(bytes1,dtype=np.uint8)\nbytes4 = np.frombuffer(bytes2,dtype=np.uint8)\nmax_loc = np.flatnonzero(bytes3 ^ bytes4)[0]...
[ 2, 1 ]
[]
[]
[ "performance", "python" ]
stackoverflow_0074552142_performance_python.txt
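The chunking idea mentioned in the first answer can also be written with the standard library only: compare fixed-size slices for equality first (a fast C-level comparison), and scan byte-by-byte only inside the first chunk that differs. This is a sketch; the function name and default chunk size are my own choices:

```python
def first_diff(b1: bytes, b2: bytes, chunk: int = 4096) -> int:
    """Index of the first differing byte, or -1 if the common prefix
    of the two sequences is identical."""
    n = min(len(b1), len(b2))
    for start in range(0, n, chunk):
        end = min(start + chunk, n)
        # slice equality is a single C-level memcmp-style comparison
        if b1[start:end] != b2[start:end]:
            for i in range(start, end):
                if b1[i] != b2[i]:
                    return i
    return -1

print(first_diff(b'12345', b'1F345'))  # 1
print(first_diff(b'same', b'same'))    # -1
```

Unlike the numpy XOR approach, this stops at the first differing chunk instead of always scanning the whole buffer, which helps when differences tend to appear early in 100 kB blocks.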
Q: Reverse a get_dummies encoding in pandas Column names are: ID,1,2,3,4,5,6,7,8,9. The col values are either 0 or 1 My dataframe looks like this: ID 1 2 3 4 5 6 7 8 9 1002 0 1 0 1 0 0 0 0 0 1003 0 0 0 0 0 0 0 0 0 1004 1 1 0 0 0 0 0 0 0 1005 0 0 0 0 1 0 0 0 0 1006 0 0 0 0 0 1 0 0 0 1007 1 0 1 0 0 0 0 0 0 1000 0 0 0 0 0 0 0 0 0 1009 0 0 1 0 0 0 1 0 0 I want the column names in front of the ID where the value in a row is 1. The Dataframe i want should look like this: ID Col2 1002 2 // has 1 at Col(2) and Col(4) 1002 4 1004 1 // has 1 at col(1) and col(2) 1004 2 1005 5 // has 1 at col(5) 1006 6 // has 1 at col(6) 1007 1 // has 1 at col(1) and col(3) 1007 3 1009 3 // has 1 at col(3) and col(7) 1009 7 Please help me in this, Thanks in advance A: Pretty one-liner :) new_df = df.idxmax(axis=1) A: Several great answers for the OP post. However, often get_dummies is used for multiple categorical features. Pandas uses a prefix separator prefix_sep to distinguish different values for a column. The following function collapses a "dummified" dataframe while keeping the order of columns: def undummify(df, prefix_sep="_"): cols2collapse = { item.split(prefix_sep)[0]: (prefix_sep in item) for item in df.columns } series_list = [] for col, needs_to_collapse in cols2collapse.items(): if needs_to_collapse: undummified = ( df.filter(like=col) .idxmax(axis=1) .apply(lambda x: x.split(prefix_sep, maxsplit=1)[1]) .rename(col) ) series_list.append(undummified) else: series_list.append(df[col]) undummified_df = pd.concat(series_list, axis=1) return undummified_df Example >>> df a b c 0 A_1 B_1 C_1 1 A_2 B_2 C_2 >>> df2 = pd.get_dummies(df) >>> df2 a_A_1 a_A_2 b_B_1 b_B_2 c_C_1 c_C_2 0 1 0 1 0 1 0 1 0 1 0 1 0 1 >>> df3 = undummify(df2) >>> df3 a b c 0 A_1 B_1 C_1 1 A_2 B_2 C_2 A: set_index + stack, stack will dropna by default df.set_index('ID',inplace=True) df[df==1].stack().reset_index().drop(0, axis=1) Out[363]: ID level_1 0 1002 2 1 1002 4 2 1004 1 3 1004 2 4 1005 5 5 1006 6 6 1007 1 7 
1007 3 8 1009 3 9 1009 7 A: np.argwhere v = np.argwhere(df.drop('ID', 1).values).T pd.DataFrame({'ID' : df.loc[v[0], 'ID'], 'Col2' : df.columns[1:][v[1]]}) Col2 ID 0 2 1002 0 4 1002 2 1 1004 2 2 1004 3 5 1005 4 6 1006 5 1 1007 5 3 1007 7 3 1009 7 7 1009 argwhere gets the i, j indices of all non-zero elements in your DataFrame. Use the first column of indices to index into column ID, and the second column of indices to index into df.columns. I transpose v before step 2 for cache efficiency, and less typing. A: Use: df = (df.melt('ID', var_name='Col2') .query('value== 1') .sort_values(['ID', 'Col2']) .drop('value',1)) Alternative solution: df = (df.set_index('ID') .mask(lambda x: x == 0) .stack() .reset_index() .drop(0,1)) print (df) ID Col2 8 1002 2 24 1002 4 2 1004 1 10 1004 2 35 1005 5 44 1006 6 5 1007 1 21 1007 3 23 1009 3 55 1009 7 Explanation: First reshape values by melt or set_index with unstack Filter only 1 by query or convert 0 to NaNs by mask sort_values for first solution create columns from MultiIndex by reset_index Last remove unnecessary columns by drop A: As of pandas v.1.5.0, the following will do the trick dummy_cols = [col1, col2, col3] pd.from_dummies(df[dummy_cols]) A: New in pandas 1.5.0 there is a builtin that inverts the operation performed by get_dummies(). Most of the time a prefix was added using the original label. Use the sep= parameter to get back original values. df_w_dummies.head() >>> | pitch_type_FF | pitch_type_CU | pitch_type_CH -------------------------------------------------- 1| 0 | 0 | 1 2| 1 | 0 | 0 3| 1 | 0 | 0 4| 0 | 1 | 0 5| 0 | 0 | 1 # .from_dummies() returns a data frame df_reversed = pd.from_dummies(df_w_dummies, sep='pitch_type_').rename(columns={'': 'pitch_type'}) df_reversed.head() >>> | pitch_type --------------- 1| CH 2| FF 3| FF 4| CU 5| CH
Reverse a get_dummies encoding in pandas
Column names are: ID,1,2,3,4,5,6,7,8,9. The col values are either 0 or 1 My dataframe looks like this: ID 1 2 3 4 5 6 7 8 9 1002 0 1 0 1 0 0 0 0 0 1003 0 0 0 0 0 0 0 0 0 1004 1 1 0 0 0 0 0 0 0 1005 0 0 0 0 1 0 0 0 0 1006 0 0 0 0 0 1 0 0 0 1007 1 0 1 0 0 0 0 0 0 1000 0 0 0 0 0 0 0 0 0 1009 0 0 1 0 0 0 1 0 0 I want the column names in front of the ID where the value in a row is 1. The Dataframe i want should look like this: ID Col2 1002 2 // has 1 at Col(2) and Col(4) 1002 4 1004 1 // has 1 at col(1) and col(2) 1004 2 1005 5 // has 1 at col(5) 1006 6 // has 1 at col(6) 1007 1 // has 1 at col(1) and col(3) 1007 3 1009 3 // has 1 at col(3) and col(7) 1009 7 Please help me in this, Thanks in advance
[ "Pretty one-liner :)\nnew_df = df.idxmax(axis=1)\n\n", "Several great answers for the OP post. However, often get_dummies is used for multiple categorical features. Pandas uses a prefix separator prefix_sep to distinguish different values for a column. \nThe following function collapses a \"dummified\" dataframe ...
[ 29, 19, 18, 4, 3, 1, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0050607740_dataframe_pandas_python.txt
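The set_index + stack idea from the answers can be condensed into a small pipeline. This sketch assumes a 0/1 dummy frame shaped like the one in the question (a trimmed-down sample here):

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [1002, 1003, 1004],
    '1':  [0, 0, 1],
    '2':  [1, 0, 1],
    '4':  [1, 0, 0],
})

# Keep only the 1s, then stack: the inner index level holds the column name
out = (df.set_index('ID')
         .stack()
         .loc[lambda s: s == 1]
         .reset_index()
         .rename(columns={'level_1': 'Col2'})[['ID', 'Col2']])
print(out)
#      ID Col2
# 0  1002    2
# 1  1002    4
# 2  1004    1
# 3  1004    2
```

IDs with all-zero rows (like 1003) drop out automatically, matching the desired output in the question.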
Q: How to sort MultiIndex using values from a given column I have a DataFrame with a 2-level index and a column with numerical values. I want to sort it by the level-0 and level-1 index in such a way that the order of the 0-level index is determined by the sum of values from the Value column (descending), and the order of the 1-level index is also determined by the values in the Value column. This is my code: import pandas as pd df = pd.DataFrame() df["Index1"] = ["A", "A", "B", "B", "C", "C"] df["Index2"] = ["X", "Y", "X", "Y", "X", "Y"] df["Value"] = [1, 4, 7, 3, 2, 7] df = df.set_index(["Index1", "Index2"]) df And this is the desired output (B is at the top because the sum is 10 and then we have X first because 7 > 3): A: You can do this with pandas.DataFrame.sort_values: out= ( df .assign(temp_col = df.groupby(level=0).transform("sum")) .sort_values(by=["temp_col", "Value"], ascending=[False, False]) .drop(columns="temp_col") ) # Output : print(out) Value Index1 Index2 B X 7 Y 3 C Y 7 X 2 A Y 4 X 1
How to sort MultiIndex using values from a given column
I have a DataFrame with a 2-level index and a column with numerical values. I want to sort it by the level-0 and level-1 index in such a way that the order of the 0-level index is determined by the sum of values from the Value column (descending), and the order of the 1-level index is also determined by the values in the Value column. This is my code: import pandas as pd df = pd.DataFrame() df["Index1"] = ["A", "A", "B", "B", "C", "C"] df["Index2"] = ["X", "Y", "X", "Y", "X", "Y"] df["Value"] = [1, 4, 7, 3, 2, 7] df = df.set_index(["Index1", "Index2"]) df And this is the desired output (B is at the top because the sum is 10 and then we have X first because 7 > 3):
[ "You can do this with pandas.DataFrame.sort_values :\nout= (\n df\n .assign(temp_col = df.groupby(level=0).transform(\"sum\"))\n .sort_values(by=[\"temp_col\", \"Value\"], ascending=[False, False])\n .drop(columns=\"temp_col\")\n )\n\n# Output :\nprint(out)\n\n Value...
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074552607_dataframe_pandas_python.txt
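An equivalent way to get the same ordering without a temporary column: compute the level-0 group sums, sort them, and rebuild the frame group by group. A sketch using the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "Index1": ["A", "A", "B", "B", "C", "C"],
    "Index2": ["X", "Y", "X", "Y", "X", "Y"],
    "Value":  [1, 4, 7, 3, 2, 7],
}).set_index(["Index1", "Index2"])

# Level-0 labels ordered by descending group sum: B (10), C (9), A (5)
order = df.groupby(level=0)["Value"].sum().sort_values(ascending=False).index

# Within each group, sort rows by Value descending, then stitch groups together
out = pd.concat(
    [df.loc[[k]].sort_values("Value", ascending=False) for k in order]
)
print(out)
```

This reproduces the answer's output (B X 7, B Y 3, C Y 7, C X 2, A Y 4, A X 1); the assign/drop version in the answer is the more idiomatic one-pass form.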
Q: When is the usage of the Python Ellipsis to be preferred over 'pass'? I was just looking up the way to go for defining an abstract base class in Python, when I found the abc module (https://docs.python.org/3/library/abc.html). After a bit of reading I saw the following class: class C(ABC): @abstractmethod def my_abstract_method(self, ...): ... @classmethod @abstractmethod def my_abstract_classmethod(cls, ...): ... Wondering about the triple dots, which I found is called the Ellipsis (https://docs.python.org/3/library/constants.html#Ellipsis). I've seen and used it so far only in combination with type hints, where it made perfect sense. But why would one use the Ellipsis in the definition of an abstract method? Personally, I would either do def my_abstract_method(self): raise RuntimeError("NotImplemented") or def my_abstract_method(self): pass So why is the Ellipsis preferred over pass in the official documentation? Is it just opinionated? A: Using the Ellipsis literal as the body of a function does nothing. It's purely a matter of style if you use it instead of pass or some other statement. If you give your function a docstring, you don't even need to put any statement after the line with the docstring. If the official Python documentation is inconsistent, it's probably because the Python core developers who write the docs don't themselves have a consistent opinion on which looks best. PEP 8 does not give any recommendation on what to use for the body of dummy functions (though it does use lots of ...s to stand in for things in its examples, sometimes in places where you can't actually use an Ellipsis in real code). So for your own code, use whatever you like best. If you want to be formal about it, write down your preferences in your style guide and then always do whatever the guide says. You don't need to follow the same style as anyone else (unless that person is your boss). One final note: You can't use ...
as a parameter name in a function definition (def my_abstract_method(self, ...): raises a SyntaxError, regardless of what's on the next line). A: I looked into python (3.7) docs and pydoc ... The Ellipsis Object ******************* This object is commonly used by slicing (see Slicings). It supports no special operations. There is exactly one ellipsis object, named "Ellipsis" (a built-in name). "type(Ellipsis)()" produces the "Ellipsis" singleton. It is written as "Ellipsis" or "...". Related help topics: SLICINGS and pydoc pass The "pass" statement ******************** pass_stmt ::= "pass" "pass" is a null operation: when it is executed, nothing happens. It is useful as a placeholder when a statement is required syntactically, but no code needs to be executed, for example: def f(arg): pass # a function that does nothing (yet) class C: pass # a class with no methods (yet) So it looks like, at least from the pydoc point of view, pass is a way to define a function which does nothing. A: I almost only ever use ellipses in abstract classes, because of how abstract classes work. To me, pass means "do nothing" or "NOOP", but ... means "details to be defined elsewhere". An abstract method literally cannot not be overridden. Now, you can technically call super().abstractmeth() and run code that's defined in the abstractmethod definition, but the usual pattern I use is to use ellipses here to indicate when reading that the content of the method should be completely redefined somewhere else. If you just put a pass someone might think that this is a stub method to be filled out later, but the ellipsis usually makes people realize that something a little different is going on. A: I agree with @Blckknght and @noharmpun. The resulting bytecode is pretty much the same.
In [13]: dis.dis("def B(): ...; B()") 1 0 LOAD_CONST 0 (<code object B at 0x111bf6130, file "<dis>", line 1>) 2 LOAD_CONST 1 ('B') 4 MAKE_FUNCTION 0 6 STORE_NAME 0 (B) 8 LOAD_CONST 2 (None) 10 RETURN_VALUE Disassembly of <code object B at 0x111bf6130, file "<dis>", line 1>: 1 0 LOAD_GLOBAL 0 (B) 2 CALL_FUNCTION 0 4 POP_TOP 6 LOAD_CONST 0 (None) 8 RETURN_VALUE In [14]: dis.dis("def A(): pass; A()") 1 0 LOAD_CONST 0 (<code object A at 0x111bf59a0, file "<dis>", line 1>) 2 LOAD_CONST 1 ('A') 4 MAKE_FUNCTION 0 6 STORE_NAME 0 (A) 8 LOAD_CONST 2 (None) 10 RETURN_VALUE Disassembly of <code object A at 0x111bf59a0, file "<dis>", line 1>: 1 0 LOAD_GLOBAL 0 (A) 2 CALL_FUNCTION 0 4 POP_TOP 6 LOAD_CONST 0 (None) 8 RETURN_VALUE A: One more little note: I'm using pass over ... because the last one is not so easy to exclude in coverage reports. Both pass and ... will be reported as uncovered lines inside @abstractmethod. coverage has special syntax to exclude lines, but ... considerated as regex "skip all text". Probably it can be solved somehow by escape symbols or special syntax... P.S. Another workaround is to exclude @abstractmehod lines in coverage.
When is the usage of the Python Ellipsis to be preferred over 'pass'?
I was just looking up the way to go for defining an abstract base class in Python, when I found the abc module (https://docs.python.org/3/library/abc.html). After a bit of reading I saw the following class: class C(ABC): @abstractmethod def my_abstract_method(self, ...): ... @classmethod @abstractmethod def my_abstract_classmethod(cls, ...): ... Wondering about the triple dots, which I found is called the Ellipsis (https://docs.python.org/3/library/constants.html#Ellipsis). I've seen and used it so far only in combination with type hints, where it made perfect sense. But why would one use the Ellipsis in the definition of an abstract method? Personally, I would either do def my_abstract_method(self): raise RuntimeError("NotImplemented") or def my_abstract_method(self): pass So why is the Ellipsis preferred over pass in the official documentation? Is it just opinionated?
[ "Using the Ellipsis literal as the body of a function does nothing. It's purely a matter of style if you use it instead of pass or some other statement. If you give your function a docstring, you don't even need to put any statement after the line with the docstring.\nIf the official Python documentation is inconsi...
[ 37, 17, 13, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0055274977_python.txt
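To make the "purely a matter of style" point concrete, here is a minimal abstract class where one stub body is `...` and the other is `pass`. Both compile to the same do-nothing body, and neither prevents the usual abc behaviour (instantiation blocked until all abstract methods are overridden). The class names are my own:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        ...          # stylistic stand-in for the missing body

    @abstractmethod
    def name(self) -> str:
        pass         # behaves exactly the same as ... here

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2
    def name(self):
        return "square"

try:
    Shape()          # abstract methods prevent instantiation
except TypeError as e:
    print("cannot instantiate:", e)

print(Square(3).area())  # 9
```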
Q: AttributeError: 'WebDriver' object has no attribute 'execute_cdp_cmd' I am trying to set geolocation based on this documentation: https://www.selenium.dev/documentation/en/support_packages/chrome_devtools/ I am using selenium==4.0.0b2 I am running remote webdriver instead of local, and I am getting the following error: AttributeError: 'WebDriver' object has no attribute 'execute_cdp_cmd' This is my sample code: from selenium.webdriver.common.desired_capabilities import DesiredCapabilities from selenium import webdriver import time driver = webdriver.Remote( command_executor='http://localhost:4444/wd/hub', desired_capabilities=DesiredCapabilities.CHROME ) Map_coordinates = dict({ "latitude": 21.841, "longitude": -97.948, "accuracy": 100 }) print(Map_coordinates) driver.execute_cdp_cmd("Emulation.setGeolocationOverride", Map_coordinates) driver.get('http://www.google.com/') time.sleep(50) What am I missing?? A: I also have this problem, but I am using VB.NET in Visual Studio and cannot find any Chrome DevTools support. In the end I found a NuGet package, https://github.com/ToCSharp/AsyncChromeDriver, which you can take a look at. A: In my case I was following the example from Selenium - python. how to capture network traffic's response and got it working with "python3 script.py". After numerous executions I started seeing this error due to the wrong Python version being used. If someone has the same error: it can happen if the system loads python3.6 instead of python3.9, so I needed to specify the exact Python version.
AttributeError: 'WebDriver' object has no attribute 'execute_cdp_cmd'
I am trying to set geolocation based on this documentation: https://www.selenium.dev/documentation/en/support_packages/chrome_devtools/ I am using selenium==4.0.0b2 I am running remote webdriver instead of local, and I am getting the following error: AttributeError: 'WebDriver' object has no attribute 'execute_cdp_cmd' This is my sample code: from selenium.webdriver.common.desired_capabilities import DesiredCapabilities from selenium import webdriver import time driver = webdriver.Remote( command_executor='http://localhost:4444/wd/hub', desired_capabilities=DesiredCapabilities.CHROME ) Map_coordinates = dict({ "latitude": 21.841, "longitude": -97.948, "accuracy": 100 }) print(Map_coordinates) driver.execute_cdp_cmd("Emulation.setGeolocationOverride", Map_coordinates) driver.get('http://www.google.com/') time.sleep(50) What am I missing??
[ "I also have this problem. But I am using is VB.NET at Visual Studio cannot find any Chrome Devtool.\nAnd at the end I found a nuget that is\nhttps://github.com/ToCSharp/AsyncChromeDriver\nYou can try to look at.\n", "In my case I was following example from Selenium - python. how to capture network traffic's resp...
[ 0, 0 ]
[]
[]
[ "python", "python_3.x", "selenium", "selenium4", "selenium_chromedriver" ]
stackoverflow_0066817547_python_python_3.x_selenium_selenium4_selenium_chromedriver.txt
Q: Does tf.histogram_fixed_width() support back propagation? I want to use the histogram of the output from a CNN to compute the loss. I am wondering whether tf.histogram_fixed_width() supports the gradient to flow back to its former layer. Only if it works can I add a loss layer after calculating the histogram. A: tf.histogram_fixed_width() does not support autogradient functionality since a histogram is not a continuously differentiable function. You can look at the following example which returns gradient None. import numpy as np import keras.backend as K import tensorflow as tf value_range = [0.0, 5.0] a = np.array([-1.0, 0.0, 1.5, 2.0, 5.0, 15]) x = K.variable(a) hist = tf.histogram_fixed_width(x, value_range, nbins=5, dtype=tf.float32) gradient = K.gradients(hist, x) # output is [None] A: I had a similar problem. There are two ways you can try: 1. After the output layer, add an extra layer to produce the histogram; 2. Use something like tf.RegisterGradient or tf.custom_gradient to define your own gradients for the operations. A: If someone is looking for a solution: As Stephen mentioned, a histogram isn't a continuous function (or even mostly continuous), so it's not differentiable. However, what you're probably looking for is actually a density estimate for the distribution of intensities. You can calculate one using kernel density estimation (KDE). It's similar to a histogram, except the intensity values are interpolated so a continuous change in intensity will correspond to a continuous change in the density estimate. The easiest way to do this in Tensorflow currently is using Tensorflow Probability.
There's an example in their original paper (see section 5.1): f = lambda x: tfd.Independent(tfd.Normal( loc=x, scale=1.)) n = x.shape[0].value kde = tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical( probs=[1 / n] * n), components_distribution=f(x)) Note you'll have to install tensorflow_probability (pip works) and import tensorflow_probability as tfp and/or import tensorflow_probability.Distributions as tfd. It's also possible to roll your own, although working out a way to do it without writing a custom op is a bit tricky. For the adventurous, a good place to start is this paper (see eq. 6; you only need the second term for a 1d histogram/distribution). My (lightly tested) effort: import tensorflow as tf def cubicSplineFunction(arg): """ Applies the cubic spline basis function to the argument """ absX = tf.math.abs(arg) sqrX = tf.math.square(arg) coef1 = (4.0 - 6.0*sqrX + 3.0*sqrX*absX) / 6.0 # |arg| < 1.0 coef2 = (8.0 - 12.0*absX + 6.0*sqrX - sqrX*absX) / 6.0 # |arg| < 2.0 lt1 = tf.cast(tf.where(absX <= 1,1,0),tf.float32) lt2 = tf.cast(tf.where(absX < 2,1,0),tf.float32) * (1-lt1) out = coef1 * lt1 + coef2 * lt2 return out def bincountWithWeights(h,bins,weights): """ Adds weights[i] into h[bins[i]] for all i""" return tf.tensor_scatter_nd_add(h,tf.reshape(tf.cast(bins,tf.int32),[-1,1]),weights) def parzenDensityEstimate(x,histN): """ Returns a cubic spline interpolated probability density estimate for x """ padding = 2 minOb = tf.reduce_min(x)-1e-4; maxOb = tf.reduce_max(x)+1e-4; delta = (maxOb-minOb) / (histN-2*padding) min = minOb / delta - padding max = maxOb / delta + padding xs = tf.range(minOb-2*delta,maxOb+2*delta,delta) h = tf.zeros_like(xs) xn = x/delta - min xb = tf.math.floor(xn,tf.int32) for offset in range(-2,3): splineArg = (xb+offset)-xn+0.5 # 0.5 is to find the distance from the bin centre v = cubicSplineFunction(splineArg) h = bincountWithWeights(h=h,bins=xb+offset,weights=v) h = h / tf.reduce_sum(h) return h,xs density, bins = 
parzenDensityEstimate([0.,10.,10.,10.,20.],histN=20)
Does tf.histogram_fixed_width() support back propagation?
I want to use the histogram of the output from a CNN to compute the loss. I am wondering whether tf.histogram_fixed_width() supports the gradient to flow back to its former layer. Only if it works can I add a loss layer after calculating the histogram.
[ "tf.histogram_fixed_width() does not support autogradient functionality since histogram is not a continuous differential function. You can look at the following example which returns gradient None.\nimport keras.backend as K\nimport tensorflow as tf\n\nvalue_range = [0.0, 5.0]\na = np.array([-1.0, 0.0, 1.5, 2.0, 5....
[ 2, 0, 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0046394659_python_tensorflow.txt
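The KDE idea from the last answer can be illustrated without TensorFlow at all: a "soft" histogram replaces the hard bin assignment with a Gaussian weight, so every sample contributes smoothly to every bin and the result is differentiable in the sample values. A pure-Python sketch (the bin centres and bandwidth are my own choices, not from the question):

```python
import math

def soft_histogram(samples, centers, bandwidth=0.5):
    """Differentiable stand-in for a histogram: each sample adds
    exp(-(x - c)^2 / (2*bw^2)) weight to every bin centre c,
    and the result is normalised to sum to 1."""
    hist = []
    for c in centers:
        w = sum(math.exp(-((x - c) ** 2) / (2 * bandwidth ** 2))
                for x in samples)
        hist.append(w)
    total = sum(hist)
    return [w / total for w in hist]

h = soft_histogram([0.1, 0.2, 1.9, 2.0], centers=[0.0, 1.0, 2.0])
print([round(v, 3) for v in h])  # mass concentrates near bins 0 and 2
```

Written with tensor ops instead of loops, the same expression is differentiable end to end, which is what the TensorFlow Probability MixtureSameFamily construction in the answer achieves.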
Q: Launch multiple tkinter applications in separate threads I am trying to launch multiple instances of a tkinter class in separate threads. Everytime the second screen appears, the following error is thrown: "_tkinter.TclError: image "pyimage2" doesn't exist". I expected them to be two separate instances, but according to internet searches this is not the case. Here's a simplified PoC: import subprocess, threading, time import tkinter from PIL import Image, ImageTk class App: def __init__(self, display: str): self.root = tkinter.Tk(screenName=display) self.background_image = Image.new("RGB", (200, 200), (200, 200, 200)) self.background = ImageTk.PhotoImage(self.background_image) self.label_background = tkinter.Label( self.root, image=self.background, borderwidth=0 ) self.root.mainloop() class X(threading.Thread): def __init__(self, display): super().__init__() self.display = display subprocess.Popen(["Xephyr", self.display]) # wait for X server to become available while ( subprocess.run( ["xset", "-q", "-display", self.display], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, ).returncode != 0 ): time.sleep(0.01) def run(self): # blocking call self.app = App(self.display) x1 = X(":1") x1.start() x2 = X(":2") x2.start() x2.join() And the according output: $ python3 tkinter_test.py Exception in thread Thread-2: Traceback (most recent call last): File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner self.run() File "/home/user/pyrdp/tkinter_test.py", line 35, in run self.app = App(self.display) File "/home/user/pyrdp/tkinter_test.py", line 11, in __init__ self.label_background = tkinter.Label( File "/usr/lib/python3.10/tkinter/__init__.py", line 3187, in __init__ Widget.__init__(self, master, 'label', cnf, kw) File "/usr/lib/python3.10/tkinter/__init__.py", line 2601, in __init__ self.tk.call( _tkinter.TclError: image "pyimage2" doesn't exist Is it even possible to run multiple tkinter instances in different threads or do I need a different 
approach here? These instances will have to run in two separate X servers and do not need to share any resources or communicate among each other. A: Everytime the second screen appears, the following error is thrown: "_tkinter.TclError: image "pyimage2" doesn't exist". I expected them to be two separate instances, but according to internet searches this is not the case. They are separate instances, which is why you're seeing the error. You didn't specify a master for each image so each one defaults to the first instance of Tk that was created. Thus, all of your images belong to the first instance and are not visible in the second instance. Thus, image "pyimage2" doesn't exist. The second image has an internal name of pyimage2, but there's only an image with that name in the first instance of Tk. The solution is to pass a value for master when creating the image so that the image is created in the appropriate tkinter window. self.background = ImageTk.PhotoImage(self.background_image, master=self.root)
Launch multiple tkinter applications in separate threads
I am trying to launch multiple instances of a tkinter class in separate threads. Everytime the second screen appears, the following error is thrown: "_tkinter.TclError: image "pyimage2" doesn't exist". I expected them to be two separate instances, but according to internet searches this is not the case. Here's a simplified PoC: import subprocess, threading, time import tkinter from PIL import Image, ImageTk class App: def __init__(self, display: str): self.root = tkinter.Tk(screenName=display) self.background_image = Image.new("RGB", (200, 200), (200, 200, 200)) self.background = ImageTk.PhotoImage(self.background_image) self.label_background = tkinter.Label( self.root, image=self.background, borderwidth=0 ) self.root.mainloop() class X(threading.Thread): def __init__(self, display): super().__init__() self.display = display subprocess.Popen(["Xephyr", self.display]) # wait for X server to become available while ( subprocess.run( ["xset", "-q", "-display", self.display], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, ).returncode != 0 ): time.sleep(0.01) def run(self): # blocking call self.app = App(self.display) x1 = X(":1") x1.start() x2 = X(":2") x2.start() x2.join() And the according output: $ python3 tkinter_test.py Exception in thread Thread-2: Traceback (most recent call last): File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner self.run() File "/home/user/pyrdp/tkinter_test.py", line 35, in run self.app = App(self.display) File "/home/user/pyrdp/tkinter_test.py", line 11, in __init__ self.label_background = tkinter.Label( File "/usr/lib/python3.10/tkinter/__init__.py", line 3187, in __init__ Widget.__init__(self, master, 'label', cnf, kw) File "/usr/lib/python3.10/tkinter/__init__.py", line 2601, in __init__ self.tk.call( _tkinter.TclError: image "pyimage2" doesn't exist Is it even possible to run multiple tkinter instances in different threads or do I need a different approach here? 
These instances will have to run in two separate X servers and do not need to share any resources or communicate among each other.
[ "\nEverytime the second screen appears, the following error is thrown: \"_tkinter.TclError: image \"pyimage2\" doesn't exist\". I expected them to be two separate instances, but according to internet searches this is not the case.\n\nThey are separate instances, which is why you're seeing the error. You didn't spec...
[ 2 ]
[]
[]
[ "python", "tcl", "tk_toolkit", "tkinter" ]
stackoverflow_0074552455_python_tcl_tk_toolkit_tkinter.txt

Q: Create empty square Dataframe from single column DataFrame I have the following single column DataFrame: df: data = {'YEAR': [2020,2021,2022,2023,2024,2025,2026,2027,2028,2029,2030], } df = pd.DataFrame(data) df How can I create an empty square Dataframe from df like the following DatFrame: I´m kinda new to Python. I have tried converting the original Dataframme to list and the create a new dataframe from there without success. I also tried to do somekind concatenation but it does not work either. I guess that its not as hard, but I dont know how to do that. A: Try provide both index and columns as Year when creating the data frame: df = pd.DataFrame([], index=data['YEAR'], columns=data['YEAR']) df 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2021 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2022 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2023 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2024 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2025 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2026 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2027 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2028 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2029 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2030 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN A: You can also use df.dot and replace: df.set_index('YEAR').dot(df.set_index('YEAR').T).replace({0:''}) A: Use reindex: df.reindex(columns=df.columns.union(df['YEAR'])) Output: YEAR 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 0 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1 2021 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 2022 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 3 2023 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 4 2024 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 5 2025 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 6 2026 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 7 2027 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 8 2028 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 
NaN 9 2029 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 10 2030 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN A: You can do this by simply using pandas.DataFrame.loc : df.loc[:, df.set_index("YEAR").index.tolist()]= np.NaN #or "" # Output : print(df) YEAR 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 0 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1 2021 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 2022 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 3 2023 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 4 2024 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 5 2025 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 6 2026 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 7 2027 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 8 2028 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 9 2029 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 10 2030 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
Create empty square Dataframe from single column DataFrame
I have the following single column DataFrame: df: data = {'YEAR': [2020,2021,2022,2023,2024,2025,2026,2027,2028,2029,2030], } df = pd.DataFrame(data) df How can I create an empty square DataFrame from df like the following DataFrame: I'm kinda new to Python. I have tried converting the original DataFrame to a list and then creating a new DataFrame from there, without success. I also tried some kind of concatenation but it does not work either. I guess that it's not that hard, but I don't know how to do that.
[ "Try provide both index and columns as Year when creating the data frame:\ndf = pd.DataFrame([], index=data['YEAR'], columns=data['YEAR'])\ndf\n 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030\n2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n2021 NaN NaN NaN NaN NaN NaN NaN NaN NaN ...
[ 1, 1, 1, 0 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074552855_dataframe_numpy_pandas_python.txt
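The first answer's approach can be checked directly: because both axes are built from the same year list, the result is guaranteed to be square and entirely empty (NaN). A small sketch with a shortened year list:

```python
import pandas as pd

years = [2020, 2021, 2022, 2023, 2024]
square = pd.DataFrame(index=years, columns=years)  # no data -> all cells NaN

print(square.shape)                          # (5, 5)
print(square.index.equals(square.columns))   # True
```

Passing no data to the constructor fills every cell with NaN, which is what "empty" means here; use `square.fillna('')` if blank strings are preferred for display.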
Q: Python Z3, rule to make 2 numbers be 2 certain numbers in a 2D array If I have 2 z3 Ints, for example x1 and x2, and a 2d array of numbers, for example: list = [[1,2],[12,13],[45,7]] I need to write a rule so that x1 and x2 are any of the pairs of numbers in the list; for example, x1 would be 1 and x2 would be 2, or x1 is 12 and x2 is 13. I'm guessing it would be something like: solver = Solver() for i in range(0,len(list)): solver.add(And((x1==list[i][0]),(x2==list[i][1]))) but this would obviously just always be unsat, so I need to write it so that x1 and x2 can be any of the pairs in the list. It's worth noting that the number of pairs in the list could be anything, not just 3 pairs. A: You're on the right track. Simply iterate and form the disjunction instead. Something like: from z3 import * list = [[1,2],[12,13],[45,7]] s = Solver() x1, x2 = Ints('x1 x2') s.add(Or([And(x1 == p[0], x2 == p[1]) for p in list])) while s.check() == sat: m = s.model() print("x1 = %2d, x2 = %2d" % (m[x1].as_long(), m[x2].as_long())) s.add(Or(x1 != m[x1], x2 != m[x2])) When run, this prints: x1 = 1, x2 = 2 x1 = 12, x2 = 13 x1 = 45, x2 = 7
Python Z3, rule to make 2 numbers be 2 certain numbers in a 2D array
If I have 2 z3 Ints, for example x1 and x2, and a 2D array of numbers, for example: list = [[1,2],[12,13],[45,7]] I need to write a rule so that x1 and x2 are any of the pairs of numbers in the list, for example x1 would be 1 and x2 would be 2, or x1 is 12 and x2 is 13. I'm guessing it would be something like: solver = Solver() for i in range(0,len(list)): solver.add(And((x1==list[i][0]),(x2==list[i][1]))) but this would obviously just always be unsat, so I need to write it so that x1 and x2 can be any of the pairs in the list. It's worth noting that the number of pairs in the list could be anything, not just 3 pairs.
[ "You're on the right track. Simply iterate and form the disjunction instead. Something like:\nfrom z3 import *\n\nlist = [[1,2],[12,13],[45,7]]\n\ns = Solver()\nx1, x2 = Ints('x1 x2')\ns.add(Or([And(x1 == p[0], x2 == p[1]) for p in list]))\n\nwhile s.check() == sat:\n m = s.model()\n print(\"x1 = %2d, x2 = %2d\" ...
[ 0 ]
[]
[]
[ "list", "list_comprehension", "python", "z3", "z3py" ]
stackoverflow_0074552664_list_list_comprehension_python_z3_z3py.txt
Q: Find enums listed in Python DESCRIPTOR for ProtoBuf I have received a Google ProtoBuf using Python, and am attempting to compare the value for an enum to a string representation of it. Based on this and this I should be able to use something like enum_values_by_name to get the info I need. However, all the enum* related attributes are empty: >>> type(my_message) <class 'myObjects_pb2.myObject'> >>> my_message # nID: 53564 # nAge: 2 # type: OBJECT_CLASS_SIGN # view: OBJECT_VIEW_FRONT >>> my_message.type # 1 >>> filter(lambda s: s.startswith('enum'), dir(my_message.DESCRIPTOR)) # ['enum_types', 'enum_types_by_name', 'enum_values_by_name'] >>> my_message.DESCRIPTOR.enum_types # [] >>> my_message.DESCRIPTOR.enum_types_by_name # {} >>> my_message.DESCRIPTOR.enum_values_by_name # {} Perhaps it is related to the fact that my protobuf is defined in many files, and the enums I want are not defined in the main file which I'm importing (but which is used to decode my_message)? Why am I getting these empty collections and (more importantly) how do find information about the enums? A: I don't know why the DESCRIPTOR for the message includes enum attributes that are not populated. (This seems like a bug to me.) However, there are (at least) two solutions to this: 1) If you know the name of the file where the enums are defined, you can 'hack' get the enum value by name via: # This is the root ProtoBuf definition from mydir import rootapi_pb2 # We happen to know that the enums are defined in myenums.proto enum_file = rootapi_pb2.myenums__pb2 # NOTE: Extra Underscore! 
enum_value = getattr(enum_file, 'OBJECT_CLASS_SIGN') If you don't want to rely on this hack, however, you can eventually find the enum descriptor, and thus the value from the name, via: my_message.DESCRIPTOR.fields_by_name['type'].enum_type.values_by_name['OBJECT_CLASS_SIGN'].number Because that's so horrific, here it is wrapped up as a safe, re-usable function: def enum_value(msg, field_name, enum_name): """Return the integer for the enum name of a field, or None if the field does not exist, is not an enum, or the enum does not have a value with the supplied name.""" field = msg.DESCRIPTOR.fields_by_name.get(field_name,None) if field and field.enum_type: enum = field.enum_type.values_by_name.get(enum_name,None) return enum and enum.number print(enum_value(my_message, 'type', 'OBJECT_CLASS_SIGN')) # 1 A: ProtoBuf for python is super ugly...but I had to use it anyway... I feel like the function by @Phrogz could be simplified a bit. This is the function I've came with: def get_enum_name_by_number(parent, field_name): field_value = getattr(parent, field_name) return parent.DESCRIPTOR.fields_by_name[field_name].enum_type.values_by_number.get(field_value).name print(my_message.type) # 1 print(get_enum_name_by_number(my_message, 'type')) # OBJECT_CLASS_SIGN A: Fyi, if you want to go in the opposite direction, from int value to key in a protobuf enum I've been using this: def lookup_template_name(template_id: int) -> str: """Lookup the template name from the template id""" try: return notify.EmailTemplate.Name(template_id) except Exception as e: logger.error(f"Error looking up template name: {e}") return e logger.info(f"Template name is: {lookup_template_name(1)}") # 2022-11-23 16:04:11:977 INFO Template name is: EMAIL_TEMPLATE_WELCOME_EMAIL logger.info(f"Template name is: {lookup_template_name(123456)}") # 2022-11-23 16:04:50:334 ERROR Error looking up template name: Enum EmailTemplate has no name defined for value 123456 The only issue is that you need to already know 
the name of the enum (in this case EmailTemplate)
Find enums listed in Python DESCRIPTOR for ProtoBuf
I have received a Google ProtoBuf using Python, and am attempting to compare the value for an enum to a string representation of it. Based on this and this I should be able to use something like enum_values_by_name to get the info I need. However, all the enum* related attributes are empty: >>> type(my_message) <class 'myObjects_pb2.myObject'> >>> my_message # nID: 53564 # nAge: 2 # type: OBJECT_CLASS_SIGN # view: OBJECT_VIEW_FRONT >>> my_message.type # 1 >>> filter(lambda s: s.startswith('enum'), dir(my_message.DESCRIPTOR)) # ['enum_types', 'enum_types_by_name', 'enum_values_by_name'] >>> my_message.DESCRIPTOR.enum_types # [] >>> my_message.DESCRIPTOR.enum_types_by_name # {} >>> my_message.DESCRIPTOR.enum_values_by_name # {} Perhaps it is related to the fact that my protobuf is defined in many files, and the enums I want are not defined in the main file which I'm importing (but which is used to decode my_message)? Why am I getting these empty collections and (more importantly) how do I find information about the enums?
[ "I don't know why the DESCRIPTOR for the message includes enum attributes that are not populated. (This seems like a bug to me.) However, there are (at least) two solutions to this:\n1) If you know the name of the file where the enums are defined, you can 'hack' get the enum value by name via:\n# This is the root P...
[ 3, 1, 1 ]
[]
[]
[ "protocol_buffers", "python", "python_3.x" ]
stackoverflow_0040226049_protocol_buffers_python_python_3.x.txt
Q: Faster list manipulation I have a large NumPy array whose elements I individually want to multiply with other indexes and then sum up. My current code is relatively slow; does anyone have an idea how I could make it faster: result = 0 n = 1 int_array = np.array((3,16,3,29,36)) for i in int_array: result += int(i) * n n *= 10
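The vectorised answer can be sanity-checked against the original loop; a small sketch (assumes NumPy is installed):

```python
import numpy as np

int_array = np.array([3, 16, 3, 29, 36])

# Original loop: weight each element by successive powers of 10.
result, n = 0, 1
for i in int_array:
    result += int(i) * n
    n *= 10

# Vectorised equivalent from the answer: multiply element-wise, then sum.
powers = np.power(10, np.arange(int_array.shape[0]))
vectorised = int((powers * int_array).sum())
```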
Faster list manipulation
I have a large NumPy array whose elements I individually want to multiply with other indexes and then sum up. My current code is relatively slow; does anyone have an idea how I could make it faster: result = 0 n = 1 int_array = np.array((3,16,3,29,36)) for i in int_array: result += int(i) * n n *= 10
[ "In every iteration 10 * prev(10 * ...), So you can use 10 ^ [0, 1, 2, ...] = [1, 10, 100, ...] with numpy.array & numpy.power. Then you need [1*int_arr[0], 10*int_arr[1], ...]. At the end, you need numpy.sum().\nres = (np.power(10, np.arange(int_array.shape[0])) * int_array).sum()\n\nprint(res)\n\nOutput:\n389463\...
[ 2 ]
[ "I think i have understood that you want to progressively multiply each number of the array by n, and n is going to be multiplied by 10 each loop. Is that is what you want to do, I think there is nothing much to do. The only thing is that you dont need to convert i to an int, as i is already one\n" ]
[ -1 ]
[ "numpy", "python" ]
stackoverflow_0074552950_numpy_python.txt
Q: Print specified text from .txt file I'm a beginner with Python. I wrote a simple web scraper script which returned HTML code and printed it into the output2.txt file, and I need to print out what's behind "title=", but I can't seem to do so. I tried the .find method, which returned the list of indexes; now I'm stuck at how to print the text starting from the indexes in the list, if it's even possible. Here's the code: with open('output2.txt', 'r', encoding='utf-8') as f: output = [] for line in f: titles = line.find("title") if titles >= 0: output.append(titles) output.sort() print(output) As I said, I don't know if it's even possible to do this, please be kind. I'll be glad for any advice. Thanks in advance :) A: Take a look at this SO answer: Extract title with BeautifulSoup Copying from that answer, it looks like you should be doing something like from bs4 import BeautifulSoup html = "" with open('output2.txt', 'r', encoding='utf-8') as f: html = f.read() soup = BeautifulSoup(html, 'html.parser') title = soup.title print(title) print(title.string)
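If installing BeautifulSoup is not an option, the standard library's html.parser module can pull out the <title> text as well. A minimal sketch (the sample HTML string is illustrative):

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside the first <title> element."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        # Only text encountered between <title> and </title> is kept.
        if self._in_title:
            self.title += data

parser = TitleExtractor()
parser.feed("<html><head><title>My Page</title></head><body></body></html>")
```

For a file, read its contents first and pass the whole string to `parser.feed()`.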
Print specified text from .txt file
I'm a beginner with Python. I wrote a simple web scraper script which returned HTML code and printed it into the output2.txt file, and I need to print out what's behind "title=", but I can't seem to do so. I tried the .find method, which returned the list of indexes; now I'm stuck at how to print the text starting from the indexes in the list, if it's even possible. Here's the code: with open('output2.txt', 'r', encoding='utf-8') as f: output = [] for line in f: titles = line.find("title") if titles >= 0: output.append(titles) output.sort() print(output) As I said, I don't know if it's even possible to do this, please be kind. I'll be glad for any advice. Thanks in advance :)
[ "Take a look at this SO answer: Extract title with BeautifulSoup\nCopying from that answer, it looks like you should be doing something like\nfrom bs4 import BeautifulSoup\n\nhtml = \"\"\nwith open('output2.txt', 'r', encoding='utf-8') as f:\n html = f.read()\n\nsoup = BeautifulSoup(html, 'html.parser')\ntitle =...
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074552796_python_python_3.x.txt
Q: Using the unpacking operator '*' in Python I have encountered some weird behaviour using the unpacking operator '*' in Python. L = [1,2,3] print(*L if len(L)<=2 else f"{L[0]}-{L[-1]}") Running the above code I was expecting the output "1-3" but instead I get "1 - 3". Am I using the '*'-operator wrong? Or are my if/else-statements wrong? I tried changing the "*L" to "L" which resolved the problem. This, however, messes up the output when len(L)<3. Changing the "L" to "L" fixes the problem. But since len(L) is not <=2 this should not affect the output, right? A: This expression is parsed as print(*(L if len(L)<=2 else f"{L[0]}-{L[-1]}")) which, for the given L, is equivalent to: print(*'1-3') which is in turn the same as: print('1', '-', '3') And that's where the spaces are coming from.
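The precedence described in the answer is easy to verify: the conditional expression is evaluated first, and unpacking a string yields its individual characters. A sketch of both the pitfall and one possible fix (variable names are illustrative):

```python
L = [1, 2, 3]

# For len(L) > 2 the conditional expression evaluates to the f-string
# "1-3", and * unpacks that string into single characters.
unpacked = list(L if len(L) <= 2 else f"{L[0]}-{L[-1]}")

# Fix: build the final string first, then print it without unpacking.
text = " ".join(map(str, L)) if len(L) <= 2 else f"{L[0]}-{L[-1]}"

short = [1, 2]
text_short = " ".join(map(str, short)) if len(short) <= 2 else f"{short[0]}-{short[-1]}"
```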
Using the unpacking operator '*' in Python
I have encountered some weird behaviour using the unpacking operator '*' in Python. L = [1,2,3] print(*L if len(L)<=2 else f"{L[0]}-{L[-1]}") Running the above code I was expecting the output "1-3" but instead i get "1 - 3". Am I using the '*'-operator wrong? Or are my if/else-statements wrong? I tried changing the "*L" to "L" which resolved the problem. This, however messes up the output when len(L)<3. Changing the "L" to "L" fixes the problem. But since len(L) is not <=2 this should not affect the output, right?
[ "This expression is parsed as\nprint(*(L if len(L)<=2 else f\"{L[0]}-{L[-1]}\"))\n\nwhich, for the given L, is equivalent to:\nprint(*'1-3')\n\nwhich is in turn the same as:\nprint('1', '-', '3')\n\nAnd that's where the spaces are coming from.\n" ]
[ 3 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074552920_python_python_3.x.txt
Q: How to use variable instead of xpath value in Selenium ...find_element(By.XPATH, "xpath") Is there a way to use a variable instead of a string value for the xpath? I am using: driver.find_element(By.XPATH, "xpath-value-string") But I need to replace the string (the xpath value) and use a variable instead, like this: my_xpath = "xpath-value-string" driver.find_element(By.XPATH, my_xpath) The goal is to pull various xpath values from a list and use them in a for loop, dynamically inserted as the xpath values. I have found some advice about using .format; str(); f"{my_xpath}" etc. to insert the values, but nothing was working. I am a Python beginner, so maybe I am getting it wrong and it must be a string as the attribute, not a variable. Any ideas? Thank you.
How to use variable instead of xpath value in Selenium ...find_element(By.XPATH, "xpath")
Is there a way to use a variable instead of a string value for the xpath? I am using: driver.find_element(By.XPATH, "xpath-value-string") But I need to replace the string (the xpath value) and use a variable instead, like this: my_xpath = "xpath-value-string" driver.find_element(By.XPATH, my_xpath) The goal is to pull various xpath values from a list and use them in a for loop, dynamically inserted as the xpath values. I have found some advice about using .format; str(); f"{my_xpath}" etc. to insert the values, but nothing was working. I am a Python beginner, so maybe I am getting it wrong and it must be a string as the attribute, not a variable. Any ideas? Thank you.
[ "Maybe something like this:\nxpaths = ['xpath_example_1', 'xpath_example_2']\nfor xpath in xpaths:\n driver.find_element(By.XPATH, xpath)\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium4", "variables", "xpath" ]
stackoverflow_0074552982_python_selenium4_variables_xpath.txt
Q: how to exclude words in regex using Negative Lookahead? I am trying to exclude a word from a sentence, but if the excluded word does not appear, the regex should keep searching for characters until the excluded word is found. For example, let's suppose I have a list like this: S.no Vehicle Status 1 car sold 2 car not sold 3 car sold 4 car Repair I want to match all those cars which don't have a status of sold (they could be anything but sold), and I want it to catch the status too (if not sold). I tried this regex: f"car(?!\s+sold)" But how can I tell it to continue if it doesn't find the "sold" in the negative lookahead (but still search with that filter)?
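The suggested pattern can be exercised directly against the example rows from the question (the row strings below are illustrative reconstructions of the table):

```python
import re

# Illustrative reconstruction of the question's rows.
rows = ["car sold", "car not sold", "car sold", "car Repair"]

# \bcar\b must NOT be immediately followed by whitespace + the word "sold";
# .+ then requires (and matches) the rest of the status text.
pattern = re.compile(r"\bcar\b(?!\s+sold\b).+")

not_sold = [row for row in rows if pattern.search(row)]
```

Note the `\b` after `sold` matters: without it, "car not sold" would still pass, but a status like "soldier" would be wrongly excluded.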
how to exclude words in regex using Negative Lookahead?
I am trying to exclude a word from a sentence, but if the excluded word does not appear, the regex should keep searching for characters until the exclude word is found. For example, lets suppose I have a list like this: S.no Vehicle Status 1 car sold 2 car not sold 3 car sold 4 car Repair I want to match all those cars which don't have a status of sold (they could be anything but sold) and I want it to catch the status too (if not sold) I tried this regex: f"car(?!\s+sold)" But how can I tell it to continue if it doesn't find the "sold" in the negative lookahead (but still search with that filter)
[ "You can write the pattern like this:\npattern = r\"\\bcar\\b(?!\\s+sold\\b).+\"\n\nExplanation\n\n\\bcar\\b Match the word car\n(?!\\s+sold\\b) Assert not 1+ whitespace chars followed by the word \"sold\" to the right\n.+ Match 1+ chars\n\nSee a regex demo.\nIf there has to be a non whitespace char present after \...
[ 1 ]
[]
[]
[ "python", "regex", "regex_lookarounds", "web_crawler", "web_scraping" ]
stackoverflow_0074549686_python_regex_regex_lookarounds_web_crawler_web_scraping.txt
Q: How can I get a pandas dataframe into CSV format with different formats per columns? Issues for pandas' DataFrame text output functions: to_csv() does not support the 'formatters' parameter of to_string(). I need different formats for each column. to_string() does not support a separator. What I do so far: I generate a list of formatting strings in the new Python formatting format, so e.g. ['{8.1f}','{9.3f}',...,], and then do this hack: f.write(', '.join(fmt).format(*data)+'\r\n') Is there a way I can have pandas do some of this hacking for me or is that a feature request and I already did all the work? ;) A: np.savetxt can do it (by supplying fmt argument, which can be a list), but that means the column names has to be written separately (header argument): np.savetxt('temp.csv', df.values, fmt=['%8.1f','%9.3f','%8.1f','%9.3f','%8.1f'], delimiter=',', header=' '+' ,'.join(df.columns), comments='') A: You can try this: import pandas as pd df = pd.DataFrame({ 'a': [-8.114445, 888.303217, 80], 'b': ['d', 'gg5g', '9ghhhh'], 'c': [14.34e+4,12.1e-1,1e-5], 'd': [34,65373,-176576]} ) df.to_string("temp.txt", formatters={ "a": "{:6,.2f}".format, "b": "{:<}".format, "c": "{:9,.5e}".format, "d": "{:7d}".format }, index=False ) Content of "temp.txt": a b c d -8.11 d 1.43400e+05 34 888.30 gg5g 1.21000e+00 65373 80.00 9ghhhh 1.00000e-05 -176576
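If staying within the standard library is preferred, the same per-column formatting idea can be emulated with the csv module by formatting each value before it is written (a sketch; the column names, widths, and format strings are illustrative):

```python
import csv
import io

# One format string per column (illustrative widths/precision).
rows = [(-8.114445, 143400.0), (888.303217, 1.21)]
fmts = ["{:8.1f}", "{:9.3e}"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["a", "b"])                      # header row
for row in rows:
    # Format each value with its column's format before writing.
    writer.writerow([f.format(v) for f, v in zip(fmts, row)])

lines = buf.getvalue().splitlines()
```

Writing to a real file works the same way; pass the file object to `csv.writer` instead of the `StringIO` buffer.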
How can I get a pandas dataframe into CSV format with different formats per columns?
Issues for pandas' DataFrame text output functions: to_csv() does not support the 'formatters' parameter of to_string(). I need different formats for each column. to_string() does not support a separator. What I do so far: I generate a list of formatting strings in the new Python formatting format, so e.g. ['{8.1f}','{9.3f}',...,], and then do this hack: f.write(', '.join(fmt).format(*data)+'\r\n') Is there a way I can have pandas do some of this hacking for me or is that a feature request and I already did all the work? ;)
[ "np.savetxt can do it (by supplying fmt argument, which can be a list), but that means the column names has to be written separately (header argument):\nnp.savetxt('temp.csv', df.values, fmt=['%8.1f','%9.3f','%8.1f','%9.3f','%8.1f'], \n delimiter=',', header=' '+' ,'.join(df.columns), comments='')...
[ 3, 0 ]
[]
[]
[ "csv", "io", "pandas", "python" ]
stackoverflow_0023124128_csv_io_pandas_python.txt
Q: Manipulate string data into excel file I have data in string format like str1 = "[0,-1.5],[-12.5,1.5],[12.5,1.5],[12.5,-1.5],[-12.5,-1.5])" I want to put this data into an excel file. means 1st value from the array will go in x Col and 2nd value will go in Y col. this will be repeated until the whole string will be added to the x and y columns. I am attempting like, first convert the string into datafram and then datafram to an excel file. but it's giving me an error of "Empty DataFrame". bad_chars = [';', ':', '(', ')', '[', ']'] s = "" for i in str1: if i not in bad_chars: s += i print(s) StringData = StringIO(s) df = pd.read_csv(StringData, sep=",") # Print the dataframe print(df) A: You can use pandas.Series.str.extractall : out = ( pd.Series([str1]) .str.extractall(r"(-?\d+\.?\d*,-?\d+\.?\d*)") .reset_index(drop=True) [0].str.split(",", expand=True) .rename(columns= {0: "X", 1: "Y"}) .applymap('="{}"'.format) ) ​ # Output : print(out) X Y 0 ="0" ="-1.5" 1 ="-12.5" ="1.5" 2 ="12.5" ="1.5" 3 ="12.5" ="-1.5" 4 ="-12.5" ="-1.5" Then, you can use pandas.DataFrame.to_excel to put this dataframe in a spreadsheet: out.to_excel("path_to_the_file.xlsx", index=False)
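The extraction step in the answer amounts to pulling signed-number pairs out of the string; the same can be done with the standard re module alone before handing the columns to pandas. A sketch using the question's data:

```python
import re

str1 = "[0,-1.5],[-12.5,1.5],[12.5,1.5],[12.5,-1.5],[-12.5,-1.5])"

# Each [...] group holds one "x,y" pair; capture both signed numbers.
pairs = re.findall(r"\[(-?\d+\.?\d*),(-?\d+\.?\d*)\]", str1)

xs = [float(x) for x, _ in pairs]
ys = [float(y) for _, y in pairs]
```

From here the two lists can go into a DataFrame, e.g. `pd.DataFrame({'X': xs, 'Y': ys})`, and then to `to_excel`.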
Manipulate string data into excel file
I have data in string format like str1 = "[0,-1.5],[-12.5,1.5],[12.5,1.5],[12.5,-1.5],[-12.5,-1.5])" I want to put this data into an Excel file, meaning the 1st value from each pair will go in the X col and the 2nd value will go in the Y col. This is repeated until the whole string has been added to the X and Y columns. My attempt is to first convert the string into a DataFrame and then write the DataFrame to an Excel file, but it's giving me an "Empty DataFrame" error. bad_chars = [';', ':', '(', ')', '[', ']'] s = "" for i in str1: if i not in bad_chars: s += i print(s) StringData = StringIO(s) df = pd.read_csv(StringData, sep=",") # Print the dataframe print(df)
[ "You can use pandas.Series.str.extractall :\nout = (\n pd.Series([str1])\n .str.extractall(r\"(-?\\d+\\.?\\d*,-?\\d+\\.?\\d*)\")\n .reset_index(drop=True)\n [0].str.split(\",\", expand=True)\n .rename(columns= {0: \"X\", 1: \"Y\"})\n .applymap('=\"{}\"'....
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074552906_dataframe_pandas_python.txt
Q: How to read Youtube Videos with OpenCV at specific timestamps and set duration without downloading it? I have tried using pafy but it plays the video from beginning, I want to run my model on specific parts of the video. If this is possible please guide me how to do it. Any help is appreciated, thanks in advance :) A: It was quite easy actually, I figured it out :) Here's the code: import cv2 import pafy #Ask the user for url input url = input("Enter Youtube Video URL: ") #Getting video id from the url string url_data = urlparse.urlparse(url) query = urlparse.parse_qs(url_data.query) id = query["v"][0] video = 'https://youtu.be/{}'.format(str(id)) #Using the pafy library for youtube videos urlPafy = pafy.new(video) videoplay = urlPafy.getbest(preftype="any") cap = cv2.VideoCapture(videoplay.url) #Asking the user for video start time and duration in seconds milliseconds = 1000 start_time = int(input("Enter Start time: ")) end_time = int(input("Enter Length: ")) end_time = start_time + end_time # Passing the start and end time for CV2 cap.set(cv2.CAP_PROP_POS_MSEC, start_time*milliseconds) #Will execute till the duration specified by the user while True and cap.get(cv2.CAP_PROP_POS_MSEC)<=end_time*milliseconds: success, img = cap.read() cv2.imshow("Image", img) cv2.waitKey(1) A: update for python 3.11 (changes to url parse) import cv2 import pafy import urllib #Ask the user for url input url = input("Enter Youtube Video URL: ") #Getting video id from the url string url_data = urllib.parse.urlparse(url) query = urllib.parse.parse_qs(url_data.query) id = query["v"][0] video = 'https://youtu.be/{}'.format(str(id)) #Using the pafy library for youtube videos urlPafy = pafy.new(video) videoplay = urlPafy.getbest(preftype="any") cap = cv2.VideoCapture(videoplay.url) #Asking the user for video start time and duration in seconds milliseconds = 1000 start_time = int(input("Enter Start time: ")) end_time = int(input("Enter Length: ")) end_time = start_time + end_time # 
Passing the start and end time for CV2 cap.set(cv2.CAP_PROP_POS_MSEC, start_time*milliseconds) #Will execute till the duration specified by the user while True and cap.get(cv2.CAP_PROP_POS_MSEC)<=end_time*milliseconds: success, img = cap.read() cv2.imshow("Image", img) cv2.waitKey(1) Also, I had to install the pafy package from git source due to like_count not being reliable pip install git+https://github.com/mps-youtube/pafy.git See this issue
How to read Youtube Videos with OpenCV at specific timestamps and set duration without downloading it?
I have tried using pafy, but it plays the video from the beginning; I want to run my model on specific parts of the video. If this is possible, please guide me on how to do it. Any help is appreciated, thanks in advance :)
[ "It was quite easy actually, I figured it out :)\nHere's the code:\nimport cv2\nimport pafy\n\n#Ask the user for url input\nurl = input(\"Enter Youtube Video URL: \")\n\n#Getting video id from the url string\nurl_data = urlparse.urlparse(url)\nquery = urlparse.parse_qs(url_data.query)\nid = query[\"v\"][0]\nvideo =...
[ 1, 1 ]
[]
[]
[ "opencv", "pafy", "python", "youtube", "youtube_dl" ]
stackoverflow_0069400984_opencv_pafy_python_youtube_youtube_dl.txt
Q: append values to spinner dynamically I'm trying to write some basic GUI using Kivy. My program loads some data from a CSV file that contains an unknown number of rows. The first column is called sValue which basically tells me the "id" of the spinner and the second column has a Name value. My goal is to iterate all of the rows inside this CSV and to create dynamically x spinners based on the different numbers of "id" that the CSV has and in each spinner to show the values that it might have. For example, if the CSV looks as follows: sValue Name 1 a 1 b 2 a 3 a 3 b 3 c I want the code to create 3 spinners where in spinner 1 it will have the values a,b. spinner 2 will have the value a and spinner 3 will have a,b,c. I wrote the following code however it only shows me the last value that was added (I guess because it always makes a new spinner instead of appending): from kivy.uix.label import Label from kivy.uix.spinner import Spinner from kivy.uix.floatlayout import FloatLayout from kivy.app import App import pandas as pd def loadData(): data = pd.read_csv('data.csv') SValues = range(min(data['sValue']),max(data['sValue'])) return data, SValues class MainApp(App): def build(self): Data, SValues = loadData() layout = self.initializeScreen(Data,SValues) return layout def initializeScreen(self, Data, SValues): self.spinnerObject = {} self.imageObject = {} complete_layout = FloatLayout() s_layout = FloatLayout() for ind, row in Data.iterrows(): self.labelObject = Label(text='S %d' % row['sValue'], bold=True) self.labelObject.size_hint = (1/len(SValues), 0.05) self.labelObject.pos_hint={'x': (row['sValue']-1)/len(SValues), 'y':0.8} s_layout.add_widget(self.labelObject) self.spinnerObject[row['sValue']] = Spinner(text='sValue %d' % row['sValue'], values=row['Name']) self.spinnerObject[row['sValue']].size_hint = (1/len(SValues), 0.1) self.spinnerObject[row['sValue']].pos_hint={'x': (row['sValue']-1)/len(SValues), 'y':0.7} 
s_layout.add_widget(self.spinnerObject[row['sValue']]) complete_layout.add_widget(s_layout) return complete_layout if __name__ == '__main__': MainApp().run() What I get looks like this: A: You can modify your loop to add the values to existing Spinner instances: for ind, row in Data.iterrows(): try: spinner = self.spinnerObject[row['sValue']] except: spinner = Spinner(text='sValue %d' % row['sValue'], values=[]) self.spinnerObject[row['sValue']] = spinner self.spinnerObject[row['sValue']].size_hint = (1 / len(SValues), 0.1) self.spinnerObject[row['sValue']].pos_hint = {'x': (row['sValue'] - 1) / len(SValues), 'y': 0.7} s_layout.add_widget(self.spinnerObject[row['sValue']]) spinner.values.append(row['Name'])
append values to spinner dynamically
I'm trying to write some basic GUI using Kivy. My program loads some data from a CSV file that contains an unknown number of rows. The first column is called sValue which basically tells me the "id" of the spinner and the second column has a Name value. My goal is to iterate all of the rows inside this CSV and to create dynamically x spinners based on the different numbers of "id" that the CSV has and in each spinner to show the values that it might have. For example, if the CSV looks as follows: sValue Name 1 a 1 b 2 a 3 a 3 b 3 c I want the code to create 3 spinners where in spinner 1 it will have the values a,b. spinner 2 will have the value a and spinner 3 will have a,b,c. I wrote the following code however it only shows me the last value that was added (I guess because it always makes a new spinner instead of appending): from kivy.uix.label import Label from kivy.uix.spinner import Spinner from kivy.uix.floatlayout import FloatLayout from kivy.app import App import pandas as pd def loadData(): data = pd.read_csv('data.csv') SValues = range(min(data['sValue']),max(data['sValue'])) return data, SValues class MainApp(App): def build(self): Data, SValues = loadData() layout = self.initializeScreen(Data,SValues) return layout def initializeScreen(self, Data, SValues): self.spinnerObject = {} self.imageObject = {} complete_layout = FloatLayout() s_layout = FloatLayout() for ind, row in Data.iterrows(): self.labelObject = Label(text='S %d' % row['sValue'], bold=True) self.labelObject.size_hint = (1/len(SValues), 0.05) self.labelObject.pos_hint={'x': (row['sValue']-1)/len(SValues), 'y':0.8} s_layout.add_widget(self.labelObject) self.spinnerObject[row['sValue']] = Spinner(text='sValue %d' % row['sValue'], values=row['Name']) self.spinnerObject[row['sValue']].size_hint = (1/len(SValues), 0.1) self.spinnerObject[row['sValue']].pos_hint={'x': (row['sValue']-1)/len(SValues), 'y':0.7} s_layout.add_widget(self.spinnerObject[row['sValue']]) 
complete_layout.add_widget(s_layout) return complete_layout if __name__ == '__main__': MainApp().run() What I get looks like this:
[ "You can modify your loop to add the values to existing Spinner instances:\n for ind, row in Data.iterrows():\n try:\n spinner = self.spinnerObject[row['sValue']]\n except:\n spinner = Spinner(text='sValue %d' % row['sValue'], values=[])\n self.spinnerObject[row['sV...
[ 0 ]
[]
[]
[ "kivy", "python", "spinner" ]
stackoverflow_0074551260_kivy_python_spinner.txt
Q: transfer the value from def() to another def() How can I transfer the value of the mes variable from one function to another? def forwardmes2withdelay(message): print(message.text) if message.text == 'Главное меню': button = types.ReplyKeyboardMarkup(resize_keyboard=True, row_width=1) contacts = types.KeyboardButton('Контакты') post = types.KeyboardButton('Разместить пост') button.add(contacts, post) bot.send_message(message.chat.id, 'Главное меню', parse_mode='') bot.send_message(message.chat.id, 'Выберите действие', reply_markup=button) else: try: mes = message.id return mes finally: mesg = bot.send_message(message.chat.id,'Укажите желаемое вами время формата <i>ЧЧ:СС</i>', parse_mode='HTML') bot.register_next_step_handler(mesg, forwardmestime) def forwardmestime(message): print(mes) timeobj = datetime.strptime(message.text, '%H:%M').time() if f'{currentdatetime.hour}:{currentdatetime.minute}' == f'{timeobj.hour}:{timeobj.minute}': bot.copy_message(chat2, message.chat.id, mes) else: def tusk(): bot.copy_message(chat2, message.chat.id, mes) return schedule.CancelJob schedule.every().day.at(f'{timeobj}').do(tusk) while True: schedule.run_pending() time.sleep(1) I want the mes value from the first function to go to another A: First, you need to have your def functions inside the class. Second. you need to have your variables declared inside that class. This way you can change that variable in one function and then use it in another function. Just like the example below. To access it you need to use "self." class Person(): def __init__(self): self._name = None def changeNameToJames(self): self._name = "James" def nameIs(self): print(self._name)
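A minimal, bot-free sketch of the class-attribute approach described in the answer (the class and method names are illustrative): the first handler stores the id on the instance, and the second reads it back via self.

```python
class MessageRelay:
    """Holds state shared between two handler functions."""

    def __init__(self):
        self._mes = None

    def remember(self, message_id):
        # First handler: stash the id on the instance instead of returning it.
        self._mes = message_id

    def forward(self):
        # Second handler: read the stashed id back via self.
        return self._mes


relay = MessageRelay()
relay.remember(42)           # stands in for forwardmes2withdelay saving message.id
forwarded = relay.forward()  # stands in for forwardmestime using mes
```

In pyTelegramBotAPI specifically, `register_next_step_handler` also accepts extra positional arguments that are passed on to the callback, which can avoid shared state altogether.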
Q: how does indexing work in a for loop when appending a list?

I'm trying to write a function that returns a list with the positions of each capital letter. It doesn't work when the capital letter in the string is repeated.

def capital_indexes(word):
    cap = []
    for i in word:
        if i == i.upper():
            cap.append(word.index(i))
    return cap

print(capital_indexes("HelloWorld"))
[0, 5]
print(capital_indexes("HoHoHoHo"))
[0, 0, 0, 0]

A: I think that the index method is returning the first occurrence of the character. Maybe you should control the index of the letter with another variable, like this:

def capital_indexes(word):
    cap = []
    index = 0
    for i in word:
        if i == i.upper():
            cap.append(index)
        index = index + 1
    return cap

A: It doesn't work as you expect because the index method returns the first occurrence of the searched element (some docs here). So... Let's say your code finds the third H (for instance) in the string HoHoHoHo. Then, the index method asks "Hey, give me the first H", which happens at position 0 (HoHoHoHo).

I suggest you print what you are iterating over. I say this because the fact that you're calling your item i (usually used for indices) makes me think that maaaaaybe (just maaaaaybe) there's a tiny confusion between the index and the element. Maybe just renaming your control variable will help a bit:

def capital_indexes(word):
    cap = []
    for letter in word:
        if letter == letter.upper():
            cap.append(word.index(letter))
    return cap

With that in mind, you can now take a look at enumerate, which will give you both the item and the position it's in:

def capital_indexes(word):
    cap = []
    for idx, letter in enumerate(word):
        if letter == letter.upper():
            cap.append(idx)
    return cap

If you want to stick to indices, you have to retrieve the contents of position idx in the string:

def capital_indexes(word):
    cap = []
    for idx in range(len(word)):
        letter = word[idx]
        if letter == letter.upper():
            cap.append(idx)
    return cap
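The enumerate version can be condensed to a single comprehension. Note that ch.isupper() is a slightly stricter test than ch == ch.upper(): digits and punctuation are equal to their own upper() form, so the original test would also count them.

```python
def capital_indexes(word):
    # keep only the positions whose character is an uppercase letter
    return [i for i, ch in enumerate(word) if ch.isupper()]

print(capital_indexes("HelloWorld"))  # [0, 5]
print(capital_indexes("HoHoHoHo"))    # [0, 2, 4, 6]
```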
Q: How to detect #N/A in a data frame (data taken from xlsx file) using pandas?

The blank cells with no data can be checked with:

if pd.isna(dataframe.loc[index_name, column_name]) == True:

but if the cell has #N/A, the above command does not work, nor does dataframe.loc[index, column_name] == '#N/A'. On reading that cell, it shows NaN, but the above code does not work. My main target is to capture the release dates and store them in a list.

A: If you're reading your dataframe tft from a spreadsheet (and it seems to be the case here), you can use the na_values parameter of pandas.read_excel to treat some values (e.g. #N/A) as NaN values, like below:

tft = pd.read_excel("path_to_the_file.xlsx", na_values=["#N/A"])

Otherwise, if you want to preserve those #N/A values/strings, you can check/select them like this:

tft.loc[tft["Release Date"].eq("#N/A")]  # will return a dataframe

In the first scenario, your code would be like this:

rel_date = []
for i in range(len(tft)):
    if pd.isna(tft.loc[i, "Release Date"]):
        continue
    else:
        rel_date.append(int(str(tft.loc[i, "Release Date"]).split()[1]))

However, there is no need for the loop here; you can make a list of the release dates with this:

rel_date = (
    tft["Release Date"]
    .str.extract(r"Release (\d{8})", expand=False)
    .dropna()
    .astype(int)
    .drop_duplicates()
    .tolist()
)

print(rel_date)
[20220603, 20220610]
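For intuition, the pandas chain above does roughly the same thing as this plain-Python sketch. The sample cell values are made up for illustration; they just mimic a "Release Date" column where some rows are missing:

```python
import re

# hypothetical cell values: some rows hold "Release YYYYMMDD", others are empty
cells = ["Release 20220603", None, "Release 20220610", "Release 20220603"]

rel_date = []
for cell in cells:
    if cell is None:                           # what pd.isna / .dropna() filters out
        continue
    m = re.search(r"Release (\d{8})", cell)    # what .str.extract() does
    if m:
        value = int(m.group(1))                # what .astype(int) does
        if value not in rel_date:              # what .drop_duplicates() does
            rel_date.append(value)

print(rel_date)  # [20220603, 20220610]
```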
Q: How to pass '\x00' from python to c function?

python:

msg = b'aaa\x01' + b'\x00' + b'\x23\x45cc'
dl = cdll.LoadLibrary
lib = dl("./test.so")
lib.send.argtypes = [c_char_p]
lib.send(msg)

c (test.so):

void send(char * msg)
{
    // printf("%s", msg);
    SSL_write(ssl, msg, strlen(msg));
}

How can I pass '\x00' and what's behind it? Thanks!

A: As indicated in comments, printf with %s only prints a char* up to the first null byte found. The C function needs to know the size, and some of your bytes are non-printing characters, so printing the bytes in hexadecimal for the specified length is an option:

test.c

#include <stdio.h>
#include <stdlib.h>

#ifdef _WIN32
#   define API __declspec(dllexport)
#else
#   define API
#endif

API void send(const char* msg, size_t size)
{
    for(size_t i = 0; i < size; ++i)
        printf("%02X ", msg[i]);
    printf("\n");
}

test.py

import ctypes as ct

msg = b'aaa\x01\x00\x23\x45cc'

dll = ct.CDLL('./test')
dll.send.argtypes = ct.c_char_p, ct.c_size_t
dll.send.restype = None

dll.send(msg, len(msg))

Output:

61 61 61 01 00 23 45 63 63
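To see why strlen() is the wrong length here, you can inspect the bytes from the Python side. Everything after the first \x00 is invisible to strlen, which is exactly why the explicit size argument is needed:

```python
msg = b'aaa\x01' + b'\x00' + b'\x23\x45cc'

# what the C side should receive, byte by byte:
print(' '.join(f'{b:02X}' for b in msg))   # 61 61 61 01 00 23 45 63 63

# what strlen() would measure: only the bytes before the first NUL
print(msg.split(b'\x00')[0])               # b'aaa\x01' -> strlen would say 4
print(len(msg))                            # 9, the length ctypes should pass
```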
Q: How to compare two serializer field and show whichever is higher in django rest

I have a product serializer which returns category_offer_price & product_offer_price. Before returning this response I want to compare both prices and only return whichever is the highest price.

#Serializer.py
class ProductSerializer(ModelSerializer):
    category = CategorySerializer()
    product_offer_price = SerializerMethodField()
    category_offer_price = SerializerMethodField()

    class Meta:
        model = Products
        fields = [
            "id",
            "product_name",
            "slug",
            "category",
            "description",
            "category_offer_price",
            "product_offer_price",
            "base_price",
            "stock",
            "is_available",
            "created_date",
            "images",
            "images_two",
            "images_three",
        ]

    def get_product_offer_price(self, obj):
        try:
            product_offer = ProductOffer.objects.get(product=obj)
            if product_offer.is_active:
                offer_price = product_offer.product_offer_price()
                return offer_price
        except Exception:
            pass
        return None

    def get_category_offer_price(self, obj):
        try:
            category_offer = CategoryOffer.objects.get(category=obj.category)
            if category_offer.is_active:
                offer_price = category_offer.category_offer_price(obj)
                return offer_price
        except Exception:
            pass
        return None

#Models.py
class Products(models.Model):
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
    product_name = models.CharField(max_length=50, unique=True)
    slug = models.SlugField(max_length=100, unique=True)
    description = models.TextField(max_length=500)
    base_price = models.IntegerField()
    images = models.ImageField(upload_to="photos/products")
    images_two = models.ImageField(upload_to="photos/products")
    images_three = models.ImageField(upload_to="photos/products")
    stock = models.IntegerField()
    is_available = models.BooleanField(default=True)
    created_date = models.DateTimeField(auto_now_add=True)
    modified_date = models.DateTimeField(auto_now=True)

    class Meta:
        verbose_name_plural = "Products"

    def __str__(self):
        return self.product_name

I'd like to know: is it possible to compare serializer fields in a serializer class?

A: You can move this into one method to validate your field. Also, substitute your try/except with the get_object_or_404 helper, and your serializer fields with the '__all__' value since you are using everything, to have much cleaner code.

from django.shortcuts import get_object_or_404

class ProductSerializer(ModelSerializer):
    category = CategorySerializer()
    price = SerializerMethodField()

    class Meta:
        model = Products
        fields = '__all__'

    def get_price(self, obj):
        product_offer = get_object_or_404(ProductOffer, product=obj)
        category_offer = get_object_or_404(CategoryOffer, category=obj.category)
        if product_offer.is_active and category_offer.is_active:
            if product_offer.product_offer_price() > category_offer.category_offer_price(obj):
                return product_offer.product_offer_price()
            else:
                return category_offer.category_offer_price(obj)
        elif product_offer.is_active and not category_offer.is_active:
            return product_offer.product_offer_price()
        elif category_offer.is_active and not product_offer.is_active:
            return category_offer.category_offer_price(obj)

EDIT: As you can see, I used the classic if/else chain in this solution, although since Python 3.10 you can use the match statement to replace these condition chains.

In case the objects may not exist:

class ProductSerializer(ModelSerializer):
    category = CategorySerializer()
    price = SerializerMethodField()

    class Meta:
        model = Products
        fields = '__all__'

    def get_price(self, obj):
        try:
            product_offer = ProductOffer.objects.filter(product=obj).first()
            category_offer = CategoryOffer.objects.filter(category=obj.category).first()
            if not product_offer and not category_offer:
                return obj.base_price
            elif not category_offer:
                return product_offer.product_offer_price()
            elif not product_offer:
                return category_offer.category_offer_price(obj)
            elif category_offer and product_offer:
                if category_offer.is_active and not product_offer.is_active:
                    return category_offer.category_offer_price(obj)
                elif product_offer.is_active and not category_offer.is_active:
                    return product_offer.product_offer_price()
                elif category_offer.is_active and product_offer.is_active:
                    if category_offer.category_offer_price(obj) > product_offer.product_offer_price():
                        return category_offer.category_offer_price(obj)
                    else:
                        return product_offer.product_offer_price()
        except:
            return obj.base_price

Although, to be honest, if there could be no objects then the is_active field is redundant.

A: You can override to_representation(). Example:

class ProductSerializer(ModelSerializer):
    category = CategorySerializer()
    product_offer_price = SerializerMethodField()
    category_offer_price = SerializerMethodField()
    ...

    def to_representation(self, instance):
        data = super().to_representation(instance)
        # access the required fields like this
        product_offer_price = data['product_offer_price']
        category_offer_price = data['category_offer_price']
        # do calculations here, returning the desired field as `calculated_price`
        if category_offer_price > product_offer_price:
            data['calculated_price'] = category_offer_price
        else:
            data['calculated_price'] = product_offer_price
        return data

A: Not sure if it's what you want, but you could use a field of type SerializerMethodField, which allows you to add a computed field that you could call category_offer_higher_price. Its value is computed by a function that returns the highest one. See the following link: https://www.django-rest-framework.org/api-guide/fields/#serializermethodfield
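Stripped of the ORM calls, the price-selection logic in these answers boils down to a small pure-Python rule. The helper below (best_price is a hypothetical name, not part of the question's code) just makes the comparison explicit, treating a missing or inactive offer as None:

```python
def best_price(product_price, category_price, base_price):
    # collect whichever offer prices actually exist, then take the highest;
    # with no offers at all, fall back to the base price
    candidates = [p for p in (product_price, category_price) if p is not None]
    return max(candidates) if candidates else base_price

print(best_price(90, 120, 150))     # 120 -- the higher of the two offers
print(best_price(None, 120, 150))   # 120 -- only the category offer exists
print(best_price(None, None, 150))  # 150 -- no offers, base price wins
```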
Q: player collision not working properly with jump mechanic

When jumping multiple times, the player can phase through the map. I would assume this is due to the player.y -= yvel line, but I can't exactly see a workaround. Here is my code; I'm not sure why the section that stops collision isn't working, as it seems like it should stop all movement that results in collision. Hope this isn't a duplicate. Thanks.

import pygame
import sys

window = pygame.display.set_mode((768,768))

tile_map = []
for i in range(24):
    tile_map.append(pygame.Rect(i*32, 400, 32, 32))
for i in range(12):
    tile_map.append(pygame.Rect(i*32+256, 200, 32, 32))

player = pygame.Rect(300,300,32,32)

xvel = 0
yvel = 0
a = False
d = False
s = False
w = False
space = False

def collision_check(player, tm):
    for t in tm:
        if player.colliderect(t):
            return True
    return False

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_a:
                a = True
            if event.key == pygame.K_d:
                d = True
            if event.key == pygame.K_w:
                w = True
            if event.key == pygame.K_s:
                s = True
            if event.key == pygame.K_SPACE:
                space = True
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_a:
                a = False
            if event.key == pygame.K_d:
                d = False
            if event.key == pygame.K_w:
                w = False
            if event.key == pygame.K_s:
                s = False
            if event.key == pygame.K_SPACE:
                space = False

    window.fill(0)

    yvel += .1
    if space:
        space = False
        yvel -= 5
    if yvel > 5:
        yvel = 5
    if xvel > 5:
        xvel = 5
    if a:
        xvel = -1
    elif d:
        xvel = 1
    if w:
        yvel = -1
    elif s:
        yvel = 1

    player.y += yvel
    if collision_check(player, tile_map):
        player.y -= yvel
        yvel = 0
    player.x += xvel
    if collision_check(player, tile_map):
        player.x -= xvel
        xvel = 0

    pygame.draw.rect(window, (0,0,255), player)
    for t in tile_map:
        pygame.draw.rect(window, (255,0,0), t)

    pygame.display.update()
    pygame.time.Clock().tick(60)

A: You must limit the position of the player rectangle, depending on the direction of movement, with the bottom or top of the tile with which the player collides. e.g.:

while True:
    # [...]

    player.y += yvel
    ti = player.collidelist(tile_map)
    if ti >= 0:
        if yvel < 0:
            player.top = tile_map[ti].bottom
        else:
            player.bottom = tile_map[ti].top
        yvel = 0
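pygame aside, the answer's snap-to-face idea is plain interval arithmetic. This hypothetical helper shows the same rule with bare numbers, so it can be reasoned about without a game loop:

```python
def snap_vertical(tile_top, tile_height, player_height, yvel):
    """Return the corrected player top after a vertical collision.

    Moving up (yvel < 0): the player's top snaps to the tile's bottom.
    Moving down:          the player's bottom snaps to the tile's top.
    """
    if yvel < 0:
        return tile_top + tile_height
    return tile_top - player_height

# falling onto a tile at y=400 (32px tall) with a 32px-tall player:
print(snap_vertical(400, 32, 32, yvel=2))   # 368, so player.bottom == 400
# jumping up into the same tile:
print(snap_vertical(400, 32, 32, yvel=-2))  # 432, so player.top == 432
```

Snapping to a face (rather than subtracting yvel, as in the question) works no matter how far the player penetrated the tile during the frame, which is what prevents phasing through the map on repeated jumps.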
Q: How to replace values in nested dictionaries by new values in a list in python

I am iterating through a list of dictionaries. I need to update the values for one specific key in all the dictionaries, and I have the new values stored in a list. The list of new values is ordered so that the 1st new value belongs to a key in the 1st dictionary, the 2nd new value to a key in the 2nd dictionary, etc. My data looks something like this:

dict_list = [{'person':'Tom', 'job':'student'},
             {'person':'John', 'job':'teacher'},
             {'person':'Mary', 'job':'manager'}]
new_jobs = ['lecturer', 'cook', 'driver']

And I want to transform the list of dictionaries using the list of new jobs, according to my description, into this:

dict_list = [{'person':'Tom', 'job':'lecturer'},
             {'person':'John', 'job':'cook'},
             {'person':'Mary', 'job':'driver'}]

As I have a very long list of dictionaries, I would like to define a function that would do this automatically, but I am struggling with how to do it with for loops and zip(). Any suggestions?

I tried the for loop below. I guess the following code could work if it was possible to index the dictionaries like this: dictionary['job'][i]. Unfortunately dictionaries don't work like this as far as I know.

def update_dic_list():
    for dictionary in dict_list:
        for i in range(len(new_jobs)):
            dictionary['job'] = new_jobs[i]
        print(dict_list)

The output the code above gave me was this:

[{'person': 'Tom', 'job': 'driver'}, {'person': 'John', 'job': 'teacher'}, {'person': 'Mary', 'job': 'manager'}]
[{'person': 'Tom', 'job': 'driver'}, {'person': 'John', 'job': 'driver'}, {'person': 'Mary', 'job': 'manager'}]
[{'person': 'Tom', 'job': 'driver'}, {'person': 'John', 'job': 'driver'}, {'person': 'Mary', 'job': 'driver'}]

A: If your new_jobs list has the right job for each corresponding entry in the dict list, you could use zip:

dict_list = [{'person':'Tom', 'job':'student'},
             {'person':'John', 'job':'teacher'},
             {'person':'Mary', 'job':'manager'}]
new_jobs = ['lecturer', 'cook', 'driver']

for d, j in zip(dict_list, new_jobs):
    d['job'] = j

print(dict_list)

prints

[{'person': 'Tom', 'job': 'lecturer'}, {'person': 'John', 'job': 'cook'}, {'person': 'Mary', 'job': 'driver'}]

A: You only need to remove the inner loop, because you are changing the dictionary key job more than one time for each item of the outer loop:

def update_dic_list():
    i = 0
    for dictionary in dict_list:
        dictionary['job'] = new_jobs[i]
        i += 1
    print(dict_list)

Or alternatively you could use enumerate:

def update_dic_list():
    for i, dictionary in enumerate(dict_list):
        dictionary['job'] = new_jobs[i]
    print(dict_list)

Output:

[{'person': 'Tom', 'job': 'lecturer'}, {'person': 'John', 'job': 'cook'}, {'person': 'Mary', 'job': 'driver'}]

A: With your loop, for each dictionary, you're going through the new jobs and updating that same dictionary over and over with each of the jobs until the last one. So by the end of it, they'll all be drivers, because that's the last one.

You can do

dict_list = [{'person':'Tom', 'job':'student'},
             {'person':'John', 'job':'teacher'},
             {'person':'Mary', 'job':'manager'}]
new_jobs = ['lecturer', 'cook', 'driver']

def update_dic_list():
    for job, _dict in zip(new_jobs, dict_list):
        _dict['job'] = job

or

def update_dict_list():
    for i, job in enumerate(new_jobs):
        dict_list[i]['job'] = job
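If you'd rather not mutate the original dictionaries, the same zip pairing also works in a comprehension that builds fresh copies:

```python
dict_list = [{'person': 'Tom', 'job': 'student'},
             {'person': 'John', 'job': 'teacher'},
             {'person': 'Mary', 'job': 'manager'}]
new_jobs = ['lecturer', 'cook', 'driver']

# {**d, 'job': j} copies each dict and then overrides its 'job' key
updated = [{**d, 'job': j} for d, j in zip(dict_list, new_jobs)]

print(updated[0])           # {'person': 'Tom', 'job': 'lecturer'}
print(dict_list[0]['job'])  # 'student' -- the originals are untouched
```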
Q: Python class decorator arguments I'm trying to pass optional arguments to my class decorator in python. Below the code I currently have: class Cache(object): def __init__(self, function, max_hits=10, timeout=5): self.function = function self.max_hits = max_hits self.timeout = timeout self.cache = {} def __call__(self, *args): # Here the code returning the correct thing. @Cache def double(x): return x * 2 @Cache(max_hits=100, timeout=50) def double(x): return x * 2 The second decorator with arguments to overwrite the default one (max_hits=10, timeout=5 in my __init__ function), is not working and I got the exception TypeError: __init__() takes at least 2 arguments (3 given). I tried many solutions and read articles about it, but here I still can't make it work. Any idea to resolve this? Thanks! A: @Cache(max_hits=100, timeout=50) calls __init__(max_hits=100, timeout=50), so you aren't satisfying the function argument. You could implement your decorator via a wrapper method that detected whether a function was present. If it finds a function, it can return the Cache object. Otherwise, it can return a wrapper function that will be used as the decorator. class _Cache(object): def __init__(self, function, max_hits=10, timeout=5): self.function = function self.max_hits = max_hits self.timeout = timeout self.cache = {} def __call__(self, *args): # Here the code returning the correct thing. # wrap _Cache to allow for deferred calling def Cache(function=None, max_hits=10, timeout=5): if function: return _Cache(function) else: def wrapper(function): return _Cache(function, max_hits, timeout) return wrapper @Cache def double(x): return x * 2 @Cache(max_hits=100, timeout=50) def double(x): return x * 2 A: @Cache def double(...): ... is equivalent to def double(...): ... double=Cache(double) While @Cache(max_hits=100, timeout=50) def double(...): ... is equivalent to def double(...): ... 
double = Cache(max_hits=100, timeout=50)(double) Cache(max_hits=100, timeout=50)(double) has very different semantics than Cache(double). It's unwise to try to make Cache handle both use cases. You could instead use a decorator factory that can take optional max_hits and timeout arguments, and returns a decorator: class Cache(object): def __init__(self, function, max_hits=10, timeout=5): self.function = function self.max_hits = max_hits self.timeout = timeout self.cache = {} def __call__(self, *args): # Here the code returning the correct thing. def cache_hits(max_hits=10, timeout=5): def _cache(function): return Cache(function,max_hits,timeout) return _cache @cache_hits() def double(x): return x * 2 @cache_hits(max_hits=100, timeout=50) def double(x): return x * 2 PS. If the class Cache has no other methods besides __init__ and __call__, you can probably move all the code inside the _cache function and eliminate Cache altogether. A: I'd rather to include the wrapper inside the class's __call__ method: UPDATE: This method has been tested in python 3.6, so I'm not sure about the higher or earlier versions. class Cache: def __init__(self, max_hits=10, timeout=5): # Remove function from here and add it to the __call__ self.max_hits = max_hits self.timeout = timeout self.cache = {} def __call__(self, function): def wrapper(*args): value = function(*args) # saving to cache codes return value return wrapper @Cache() def double(x): return x * 2 @Cache(max_hits=100, timeout=50) def double(x): return x * 2 A: I've learned a lot from this question, thanks all. Isn't the answer just to put empty brackets on the first @Cache? Then you can move the function parameter to __call__. class Cache(object): def __init__(self, max_hits=10, timeout=5): self.max_hits = max_hits self.timeout = timeout self.cache = {} def __call__(self, function, *args): # Here the code returning the correct thing. 
@Cache() def double(x): return x * 2 @Cache(max_hits=100, timeout=50) def double(x): return x * 2 Although I think this approach is simpler and more concise: def cache(max_hits=10, timeout=5): def caching_decorator(fn): def decorated_fn(*args ,**kwargs): # Here the code returning the correct thing. return decorated_fn return decorator If you forget the parentheses when using the decorator, unfortunately you still don't get an error until runtime, as the outer decorator parameters are passed the function you're trying to decorate. Then at runtime the inner decorator complains: TypeError: caching_decorator() takes exactly 1 argument (0 given). However you can catch this, if you know your decorator's parameters are never going to be a callable: def cache(max_hits=10, timeout=5): assert not callable(max_hits), "@cache passed a callable - did you forget to parenthesize?" def caching_decorator(fn): def decorated_fn(*args ,**kwargs): # Here the code returning the correct thing. return decorated_fn return decorator If you now try: @cache def some_method() pass You get an AssertionError on declaration. On a total tangent, I came across this post looking for decorators that decorate classes, rather than classes that decorate. In case anyone else does too, this question is useful. A: You can use a classmethod as a factory method, this should handle all the use cases (with or without parenthesis). import functools class Cache(): def __init__(self, function): functools.update_wrapper(self, function) self.function = function self.max_hits = self.__class__.max_hits self.timeout = self.__class__.timeout self.cache = {} def __call__(self, *args): # Here the code returning the correct thing. 
@classmethod def Cache_dec(cls, _func = None, *, max_hits=10, timeout=5): cls.max_hits = max_hits cls.timeout = timeout if _func is not None: #when decorator is passed parenthesis return cls(_func) else: return cls #when decorator is passed without parenthesis @Cache.Cache_dec def double(x): return x * 2 @Cache.Cache_dec() def double(x): return x * 2 @Cache.Cache_dec(timeout=50) def double(x): return x * 2 @Cache.Cache_dec(max_hits=100) def double(x): return x * 2 @Cache.Cache_dec(max_hits=100, timeout=50) def double(x): return x * 2 A: Define decorator that takes optional argument: from functools import wraps, partial def _cache(func=None, *, instance=None): if func is None: return partial(_cache, instance=instance) @wraps(func) def wrapper(*ar, **kw): print(instance) return func(*ar, **kw) return wrapper And pass the instance object to decorator in __call__, or use other helper class that is instantiated on each __call__. This way you can use decorator without brackets, with params or even define a __getattr__ in proxy Cache class to apply some params. 
class Cache: def __call__(self, *ar, **kw): return _cache(*ar, instance=self, **kw) cache = Cache() @cache def f(): pass f() # prints <__main__.Cache object at 0x7f5c1bde4880> A: class myclass2: def __init__(self,arg): self.arg=arg print("call to init") def __call__(self,func): print("call to __call__ is made") self.function=func def myfunction(x,y,z): return x+y+z+self.function(x,y,z) self.newfunction=myfunction return self.newfunction @classmethod def prints(cls,arg): cls.prints_arg=arg print("call to prints is made") return cls(arg) @myclass2.prints("x") def myfunction1(x,y,z): return x+y+z print(myfunction1(1,2,3)) remember it goes like this: first call return object get second argument usually if applicable it goes like argument,function,old function arguments A: I made a helper decorator for this purpose: from functools import update_wrapper class ClassWrapper: def __init__(self, cls): self.cls = cls def __call__(self, *args, **kwargs): class ClassWrapperInner: def __init__(self, cls, *args, **kwargs): # This combines previous information to get ready to recieve the actual function in the __call__ method. self._cls = cls self.args = args self.kwargs = kwargs def __call__(self, func, *args, **kw): # Basically "return self._cls(func, *self.args, **self.kwargs)", but with an adjustment to update the info of the new class & verify correct arguments assert len(args) == 0 and len(kw) == 0 and callable(func), f"{self._cls.__name__} got invalid arguments. Did you forget to parenthesize?" obj = self._cls(func, *self.args, **self.kwargs) update_wrapper(obj, func) return obj return ClassWrapperInner(self.cls, *args, **kwargs) This weird code makes more sense in the context of how it will be executed: double = ClassWrapper(Cache)(max_hits=100, timeout=50)(double) ClassWrapper.__init__ stores the class it will be wrapping, (Cache). ClassWrapper.__call__ passes on its arguments (max_hits=100, timeout=50) to ClassWrapperInner.__init__, which stores them for the next call. 
ClassWrapper.__call__ combines all of the previous arguments and (func) together and gives them to an instance of your class, Cache, which it returns for use as the new double. It also updates your class's arguments, __name__ and __doc__ with the functools library. It's kind of like a way more complicated version of 2d list flattening where it's function arguments instead of lists. With this class decorating it, your original function behaves as expected, except that you need to put parentheses around it in all cases. @ClassWrapper class Cache(object): def __init__(self, function, max_hits=10, timeout=5): self.function = function self.max_hits = max_hits self.timeout = timeout self.cache = {} def __call__(self, *args): ... # Here the code returning the correct thing. @Cache() def double(x): return x * 2 @Cache(max_hits=100, timeout=50) def double(x): return x * 2 You could try to edit ClassWrapperInner.__call__ so that the parentheses are not required, but this approach is hacky and doesn't really make sense; it's like trying to add logic to each method of a class so that calling them without a self parameter works correctly. EDIT: After writing this answer, I realized there was a much better way to make the decorator: def class_wrapper(cls): def decorator1(*args, **kwargs): def decorator2(func): return cls(func, *args, **kwargs) return decorator2 return decorator1 With functools functions for updating the name & things: def class_wrapper(cls): def decorator1(*args, **kwargs): @wraps(cls) def decorator2(func): obj = cls(func, *args, **kwargs) update_wrapper(obj, func) return obj return decorator2 return decorator1
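A runnable sketch of the optional-argument pattern the first answer describes, using `functools.partial` so the same decorator works both with and without parentheses. The `cache` name and the stored attributes here are illustrative stand-ins, not the asker's actual caching logic:

```python
from functools import partial, wraps

def cache(func=None, *, max_hits=10, timeout=5):
    # Used as @cache(...): func is None, so return the decorator itself,
    # pre-loaded with the keyword arguments via partial.
    if func is None:
        return partial(cache, max_hits=max_hits, timeout=timeout)

    @wraps(func)
    def wrapper(*args, **kwargs):
        # A real implementation would consult/populate a cache here.
        return func(*args, **kwargs)

    wrapper.max_hits = max_hits   # expose the settings for inspection
    wrapper.timeout = timeout
    return wrapper

@cache                             # bare use: defaults apply
def double(x):
    return x * 2

@cache(max_hits=100, timeout=50)   # parameterized use
def triple(x):
    return x * 3

print(double(2), double.max_hits)   # -> 4 10
print(triple(2), triple.max_hits)   # -> 6 100
```

The same None-check trick carries over to the asker's classmethod version: a bare `@cache` receives the function directly, while `@cache(...)` must first return something callable that will receive it.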
Python class decorator arguments
I'm trying to pass optional arguments to my class decorator in python. Below the code I currently have: class Cache(object): def __init__(self, function, max_hits=10, timeout=5): self.function = function self.max_hits = max_hits self.timeout = timeout self.cache = {} def __call__(self, *args): # Here the code returning the correct thing. @Cache def double(x): return x * 2 @Cache(max_hits=100, timeout=50) def double(x): return x * 2 The second decorator with arguments to overwrite the default one (max_hits=10, timeout=5 in my __init__ function), is not working and I got the exception TypeError: __init__() takes at least 2 arguments (3 given). I tried many solutions and read articles about it, but here I still can't make it work. Any idea to resolve this? Thanks!
[ "@Cache(max_hits=100, timeout=50) calls __init__(max_hits=100, timeout=50), so you aren't satisfying the function argument.\nYou could implement your decorator via a wrapper method that detected whether a function was present. If it finds a function, it can return the Cache object. Otherwise, it can return a wrappe...
[ 24, 24, 7, 4, 3, 0, 0, 0 ]
[]
[]
[ "arguments", "decorator", "python" ]
stackoverflow_0007492068_arguments_decorator_python.txt
Q: Can I use two Gherkin Annotations on a single step implementation? Is it possible to use two different Gherkin steps on a step implementation? @Given('a user signs up for a new "{country}" account') @Then('Select "{country}" on country selector') def choose_country(context, country): match country: case "Country A": context.country_code = "A" case "Country B": context.country_code = "B" A: Short answer, no. It should recognize one step as not being implemented. Perhaps some steps shouldn't exist? Here's a link to more information about the subject.
Can I use two Gherkin Annotations on a single step implementation?
Is it possible to use two different Gherkin steps on a step implementation? @Given('a user signs up for a new "{country}" account') @Then('Select "{country}" on country selector') def choose_country(context, country): match country: case "Country A": context.country_code = "A" case "Country B": context.country_code = "B"
[ "Short answer, no. It should recognize one step as not being implemented. Perhaps some steps shouldn't exist ? Here's a link to more information about the subject.\n" ]
[ 1 ]
[]
[]
[ "bdd", "cucumber", "gherkin", "python" ]
stackoverflow_0074553140_bdd_cucumber_gherkin_python.txt
Q: clicking a tab on a page to allow selenium to scrape from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from time import sleep from datetime import datetime import pandas as pd import warnings import os warnings.filterwarnings('ignore') url = 'https://www.fifa.com/fifaplus/en/match-centre/match/17/255711/285063/400128082?country=IE&wtw-filter=ALL' option = Options() option.headless = True driver = webdriver.Chrome("C:/Users/paulc/Documents/Medium Football/chromedriver.exe",options=option) # Scraping the data HomeTeam = driver.find_element(By.XPATH, "/html/body/div[1]/main/div/div[1]/div/section/div[1]/div[1]/div[2]/div[3]/div/div[1]/div/div/div[1]/div[1]/p").text AwayTeam = driver.find_element(By.XPATH, "/html/body/div[1]/main/div/div[1]/div/section/div[1]/div[1]/div[2]/div[3]/div/div[1]/div/div/div[3]/div[2]").text Result = driver.find_element(By.XPATH, "/html/body/div[1]/main/div/div[1]/div/section/div[1]/div[1]/div[2]/div[3]/div/div[1]/div/div/div[2]").text elem = WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[1]/main/div/div[3]/div/div[1]/div/div[4]"))) elem.click() Hi Guys, I am looking to scrape the world cup data, I have managed to start it easily by getting the team names and scores. The in game statistics are housed in the stats tab in the image. So to start scraping them I need to be able to click on it with selenium and make the page active. Am I missing something obvious and that this cannot be done through an xpath? help is appreciated. A: In case you want to click on "LINE UP" tab this can be done with the following XPath: "//div[text()='LINE UP']". To click on "STATS" tab you can use this XPath: "//div[text()='STATS']".
So, Selenium code line can be as following: WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='LINE UP']"))).click() For the LINE UP tab and WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='STATS']"))).click() for the "STATS" tab. Also, you need to improve your locators. Very long absolute XPaths are very breakable.
clicking a tab on a page to allow selenium to scrape
from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from time import sleep from datetime import datetime import pandas as pd import warnings import os warnings.filterwarnings('ignore') url = 'https://www.fifa.com/fifaplus/en/match-centre/match/17/255711/285063/400128082?country=IE&wtw-filter=ALL' option = Options() option.headless = True driver = webdriver.Chrome("C:/Users/paulc/Documents/Medium Football/chromedriver.exe",options=option) # Scraping the data HomeTeam = driver.find_element(By.XPATH, "/html/body/div[1]/main/div/div[1]/div/section/div[1]/div[1]/div[2]/div[3]/div/div[1]/div/div/div[1]/div[1]/p").text AwayTeam = driver.find_element(By.XPATH, "/html/body/div[1]/main/div/div[1]/div/section/div[1]/div[1]/div[2]/div[3]/div/div[1]/div/div/div[3]/div[2]").text Result = driver.find_element(By.XPATH, "/html/body/div[1]/main/div/div[1]/div/section/div[1]/div[1]/div[2]/div[3]/div/div[1]/div/div/div[2]").text elem = WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[1]/main/div/div[3]/div/div[1]/div/div[4]"))) elem.click() Hi Guys, I am looking to scrape the world cup data, I have managed to start it easily by getting the team names and scores. The in game statistics are housed in the stats tab in the image. So to start scraping them I need to be able to click on it with selenium and make the page active. Am I missing something obvious and that this cannot be done through an xpath? help is appreciated.
[ "In case you want to click on \"LINE UP\" tab this can be done with the following XPath: \"//div[text()='LINE UP']\". To click on \"STATS\" tab you can use this XPath: \"//div[text()='STATS']\". So, Selenium code line can be as following:\nWebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, \"//di...
[ 1 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "web_scraping", "xpath" ]
stackoverflow_0074553164_python_selenium_selenium_webdriver_web_scraping_xpath.txt
Q: making a request to find members of a group in GCP this is the request I need to make (trying to get list of members in a google group): GET https://admin.googleapis.com/admin/directory/v1/groups/{groupKey}/members https://developers.google.com/admin-sdk/directory/reference/rest/v1/members/list?apix_params=%7B%22groupKey%22%3A%22devops%40flight-analytics.com%22%2C%22includeDerivedMembership%22%3Atrue%2C%22maxResults%22%3A200%2C%22prettyPrint%22%3Atrue%7D group key in this case is email for the group eg: xyx@myorg.com how can i make it into a python script and get my results. this is the attempt I was expecting to work. def make_request(request): """ HTTP Cloud Function that makes another HTTP request. Args: request (flask.Request): The request object. <http://flask.pocoo.org/docs/1.0/api/#flask.Request> Returns: The response text, or any set of values that can be turned into a Response object using `make_response` <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>. """ import requests # The URL to send the request to #url = 'https://google.com' # Process the request response = requests.get(request) response.raise_for_status() print(response) print(response.text) print(response.raise_for_status()) print("Success") return 'Success!' 
make_request("https://admin.googleapis.com/admin/directory/v1/groups/xyx@myorg.com/members") But the script fails saying i dont have proper login but i did ran: gcloud auth login gcloud auth application-default login before firing the python script in same shell but end up getting error: Traceback (most recent call last): File "/Users/deepak.sandhu/Desktop/test.py", line 55, in <module> make_request("https://admin.googleapis.com/admin/directory/v1/groups/xyz@myorg.com/members") File "/Users/deepak.sandhu/Desktop/test.py", line 48, in make_request response.raise_for_status() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/models.py", line 953, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://admin.googleapis.com/admin/directory/v1/groups/xyz@myorg.com/members A: gcloud auth login authorizes the CLI (gcloud). Use gcloud auth application-default login to authorize application credentials. gcloud auth application-default login
making a request to find members of a group in GCP
this is the request I need to make (trying to get list of members in a google group): GET https://admin.googleapis.com/admin/directory/v1/groups/{groupKey}/members https://developers.google.com/admin-sdk/directory/reference/rest/v1/members/list?apix_params=%7B%22groupKey%22%3A%22devops%40flight-analytics.com%22%2C%22includeDerivedMembership%22%3Atrue%2C%22maxResults%22%3A200%2C%22prettyPrint%22%3Atrue%7D group key in this case is email for the group eg: xyx@myorg.com how can i make it into a python script and get my results. this is the attempt I was expecting to work. def make_request(request): """ HTTP Cloud Function that makes another HTTP request. Args: request (flask.Request): The request object. <http://flask.pocoo.org/docs/1.0/api/#flask.Request> Returns: The response text, or any set of values that can be turned into a Response object using `make_response` <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>. """ import requests # The URL to send the request to #url = 'https://google.com' # Process the request response = requests.get(request) response.raise_for_status() print(response) print(response.text) print(response.raise_for_status()) print("Success") return 'Success!' 
make_request("https://admin.googleapis.com/admin/directory/v1/groups/xyx@myorg.com/members") But the script fails saying i dont have proper login but i did ran: gcloud auth login gcloud auth application-default login before firing the python script in same shell but end up getting error: Traceback (most recent call last): File "/Users/deepak.sandhu/Desktop/test.py", line 55, in <module> make_request("https://admin.googleapis.com/admin/directory/v1/groups/xyz@myorg.com/members") File "/Users/deepak.sandhu/Desktop/test.py", line 48, in make_request response.raise_for_status() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/models.py", line 953, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://admin.googleapis.com/admin/directory/v1/groups/xyz@myorg.com/members
[ "gcloud auth login authorizes the CLI (gcloud). Use gcloud auth application-default login to authorize application credentials.\ngcloud auth application-default login\n" ]
[ 2 ]
[]
[]
[ "authentication", "google_cloud_platform", "google_groups_api", "python", "python_requests" ]
stackoverflow_0074551482_authentication_google_cloud_platform_google_groups_api_python_python_requests.txt
Q: yarl/_quoting.c:196:12: fatal error: 'longintrepr.h' file not found - 1 error generated Python version: 3.11 Installing dependencies for an application by pip install -r requirements.txt gives the error below. This error is specific to Python 3.11 version. On Python with 3.10.6 version installation goes fine. Related question: ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects Running setup.py install for yarl ... error error: subprocess-exited-with-error × Running setup.py install for yarl did not run successfully.\ │ exit code: 1 ╰─> [45 lines of output] **** * Accellerated build * **** /data/data/com.termux/files/home/folder_for_app/venv/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) running install /data/data/com.termux/files/home/folder_for_app/venv/lib/python3.11/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build/lib.linux-armv8l-cpython-311 creating build/lib.linux-armv8l-cpython-311/yarl copying yarl/__init__.py -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/quoting.py -> build/lib.linux-armv8l-cpython-311/yarl running egg_info writing yarl.egg-info/PKG-INFO writing dependency_links to yarl.egg-info/dependency_links.txt writing requirements to yarl.egg-info/requires.txt writing top-level names to yarl.egg-info/top_level.txt reading manifest file 'yarl.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.cache' found anywhere in distribution warning: no previously-included files found matching 'yarl/_quoting.html' warning: no previously-included files found matching 'yarl/_quoting.*.so' warning: no previously-included files found matching 'yarl/_quoting.pyd' warning: no previously-included files found matching 'yarl/_quoting.*.pyd' no previously-included directories found matching 'docs/_build' adding license file 'LICENSE' writing manifest file 'yarl.egg-info/SOURCES.txt' copying yarl/__init__.pyi -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/_quoting.c -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/_quoting.pyx -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/py.typed -> build/lib.linux-armv8l-cpython-311/yarl running build_ext building 'yarl._quoting' extension creating build/temp.linux-armv8l-cpython-311 creating build/temp.linux-armv8l-cpython-311/yarl arm-linux-androideabi-clang -mfloat-abi=softfp -mfpu=vfpv3-d16 -DNDEBUG -g -fwrapv -O3 -Wall -march=armv7-a -mfpu=neon -mfloat-abi=softfp -mthumb -fstack-protector-strong -O3 -march=armv7-a -mfpu=neon -mfloat-abi=softfp -mthumb -fstack-protector-strong -O3 -fPIC -I/data/data/com.termux/files/home/folder_for_app/venv/include -I/data/data/com.termux/files/usr/include/python3.11 
-c yarl/_quoting.c -o build/temp.linux-armv8l-cpython-311/yarl/_quoting.o yarl/_quoting.c:196:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~ 1 error generated. error: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> yarl note: This is an issue with the package mentioned above, not pip. A: Solution for this error: need to update requirements.txt. Not working versions of modules with Python 3.11: yarl==1.4.2 frozenlist==1.3.0 aiohttp==3.8.1 Working versions: yarl==1.8.1 frozenlist==1.3.1 aiohttp==3.8.2 Links to the corresponding issues with fixes: https://github.com/aio-libs/yarl/issues/706 https://github.com/aio-libs/frozenlist/issues/305 https://github.com/aio-libs/aiohttp/issues/6600
yarl/_quoting.c:196:12: fatal error: 'longintrepr.h' file not found - 1 error generated
Python version: 3.11 Installing dependencies for an application by pip install -r requirements.txt gives the error below. This error is specific to Python 3.11 version. On Python with 3.10.6 version installation goes fine. Related question: ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects Running setup.py install for yarl ... error error: subprocess-exited-with-error × Running setup.py install for yarl did not run successfully.\ │ exit code: 1 ╰─> [45 lines of output] **** * Accellerated build * **** /data/data/com.termux/files/home/folder_for_app/venv/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) running install /data/data/com.termux/files/home/folder_for_app/venv/lib/python3.11/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build/lib.linux-armv8l-cpython-311 creating build/lib.linux-armv8l-cpython-311/yarl copying yarl/__init__.py -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/quoting.py -> build/lib.linux-armv8l-cpython-311/yarl running egg_info writing yarl.egg-info/PKG-INFO writing dependency_links to yarl.egg-info/dependency_links.txt writing requirements to yarl.egg-info/requires.txt writing top-level names to yarl.egg-info/top_level.txt reading manifest file 'yarl.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.cache' found anywhere in distribution warning: no previously-included files found matching 'yarl/_quoting.html' warning: no previously-included files found matching 'yarl/_quoting.*.so' warning: no previously-included files found matching 'yarl/_quoting.pyd' warning: no previously-included files found matching 'yarl/_quoting.*.pyd' no previously-included directories found matching 'docs/_build' adding license file 'LICENSE' writing manifest file 'yarl.egg-info/SOURCES.txt' copying yarl/__init__.pyi -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/_quoting.c -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/_quoting.pyx -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/py.typed -> build/lib.linux-armv8l-cpython-311/yarl running build_ext building 'yarl._quoting' extension creating build/temp.linux-armv8l-cpython-311 creating build/temp.linux-armv8l-cpython-311/yarl arm-linux-androideabi-clang -mfloat-abi=softfp -mfpu=vfpv3-d16 -DNDEBUG -g -fwrapv -O3 -Wall -march=armv7-a -mfpu=neon -mfloat-abi=softfp -mthumb -fstack-protector-strong -O3 -march=armv7-a -mfpu=neon -mfloat-abi=softfp -mthumb -fstack-protector-strong -O3 -fPIC -I/data/data/com.termux/files/home/folder_for_app/venv/include -I/data/data/com.termux/files/usr/include/python3.11 
-c yarl/_quoting.c -o build/temp.linux-armv8l-cpython-311/yarl/_quoting.o yarl/_quoting.c:196:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~ 1 error generated. error: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> yarl note: This is an issue with the package mentioned above, not pip.
[ "Solution for this error: need to update requirements.txt.\nNot working versions of modules with Python 3.11:\nyarl==1.4.2\nfrozenlist==1.3.0\naiohttp==3.8.1\n\nWorking versions:\nyarl==1.8.1\nfrozenlist==1.3.1\naiohttp==3.8.2\n\nLinks to the corresponding issues with fixes:\n\nhttps://github.com/aio-libs/yarl/issu...
[ 0 ]
[]
[]
[ "linux", "pip", "python", "termux" ]
stackoverflow_0074553366_linux_pip_python_termux.txt
Q: AttributeError: module 'collections' has no attribute 'Iterator' python 3.10 django 2.0 Hello, it's a clone project, but when I try "python manage.py makemigrations" I'm getting this error. How can I fix it? requirements django==2.0 django-ckeditor==5.4.0 django-cleanup==2.1.0 django-crispy-forms==1.7.2 django-js-asset==1.0.0 this error error 2 A: You are using a very old version of Django (we are currently in version 4.1.x!) which is incompatible with python3.10. Iterator was moved from collections to collections.abc (I think in version 3.3), and the old alias was removed in python3.10. In order to fix this issue you must either downgrade your python version to something before python3.10, or upgrade your django version (which might be very hard, depending on your application). You can actually see this change in a 4 year old commit in the django repo: https://github.com/django/django/commit/aba9763b5117494ca1ef1e420397e3845ad5b262. Good luck! A: A simple fix that works for python3.10: Under directory /usr/lib/python3.10/collections/__init__.py Note: The path might change depending on your system. Add this line of code: from _collections_abc import Iterator
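The move the first answer describes is easy to verify with nothing beyond the standard library — the ABCs have lived in `collections.abc` since Python 3.3, and the old `collections.Iterator` alias was removed outright in 3.10, which is exactly where Django 2.0 breaks:

```python
import sys
import collections
import collections.abc

# The ABC lives in collections.abc (since Python 3.3):
print(isinstance(iter([1, 2, 3]), collections.abc.Iterator))  # -> True

# The deprecated alias collections.Iterator is gone on Python 3.10+:
if sys.version_info >= (3, 10):
    print(hasattr(collections, "Iterator"))  # -> False
```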
AttributeError: module 'collections' has no attribute 'Iterator' python 3.10 django 2.0
Hello, it's a clone project, but when I try "python manage.py makemigrations" I'm getting this error. How can I fix it? requirements django==2.0 django-ckeditor==5.4.0 django-cleanup==2.1.0 django-crispy-forms==1.7.2 django-js-asset==1.0.0 this error error 2
[ "You are using a very old version of Django (we are currently in version 4.1.x!) which is incompatible with python3.10.\nIterator was moved from collections to collections.abc (I think in version 3.3).\nIn order to fix this issue you must either downgrade your python version to something before python3.3 (which is ...
[ 4, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0072330021_django_python.txt
Q: SageMaker Batch Transform Input filter I am trying to run inference using SageMaker Batch Transform. My data input for prediction has two columns: ID, Text Basically, doing some prediction based on test data. However, I do not need ID field going to the prediction. I tried using input_filter="[1:]", and the job keeps failing. Below is my setup: transformer.transform( batch_input_path, content_type="text/csv", split_type="Line", logs=True, input_filter="$[1:]", ) Any suggestion on how I can achieve this without manually dropping the ID columns? Also, what does "$[1:]" represent in input_filter? is it filtering the rows or columns? Thanks in advance! A: $[1:]: indicates that we are excluding column 0 (the 'ID') before processing the inferences and keeping everything from column 1 to the last column (all the features) Kindly see this example that showcases how to make use of input/output/joining filter feature of Batch Transform.
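To make the answer's point concrete: `$` is a JSONPath-style reference to one whole input record, and because `split_type="Line"` turns each CSV line into one record, `$[1:]` slices that record's columns, not the file's rows. The slice semantics can be illustrated in plain Python — the sample row below is made up, and the real filtering is applied by the Batch Transform service, not client-side:

```python
def apply_input_filter(csv_record: str) -> str:
    """Illustrates what input_filter="$[1:]" keeps from one CSV record."""
    columns = csv_record.split(",")  # "$" = the whole record as a list of columns
    return ",".join(columns[1:])     # "[1:]" keeps column 1 onward, dropping the ID

record = "12345,some text to run inference on"
print(apply_input_filter(record))  # -> some text to run inference on
```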
SageMaker Batch Transform Input filter
I am trying to run inference using SageMaker Batch Transform. My data input for prediction has two columns: ID, Text Basically, doing some prediction based on test data. However, I do not need ID field going to the prediction. I tried using input_filter="[1:]", and the job keeps failing. Below is my setup: transformer.transform( batch_input_path, content_type="text/csv", split_type="Line", logs=True, input_filter="$[1:]", ) Any suggestion on how I can achieve this without manually dropping the ID columns? Also, what does "$[1:]" represent in input_filter? is it filtering the rows or columns? Thanks in advance!
[ "$[1:]: indicates that we are excluding column 0 (the 'ID') before processing the inferences and keeping everything from column 1 to the last column (all the features)\nKindly see this example that showcases how to make use of input/output/joining filter feature of Batch Transform.\n" ]
[ 0 ]
[]
[]
[ "amazon_sagemaker", "python" ]
stackoverflow_0074495545_amazon_sagemaker_python.txt
Q: How to get information from excel through an input using pandas python? How can I get information from specific lines of an Excel file using a variable entry placed by the user? I managed to find the information, but I can't integrate it. import pandas as pd dados = pd.read_excel(r"Dados.xlsx") CPF = input('Digite o CPF: ') if CPF in dados: print(dados.iloc[CPF][['CPF', 'Agencia', 'Conta']]) else:print('Não encontrado') the iloc print works when I put the line to be printed, but I wanted the CPF indicated in the input to be the vector to search the line, since the information is on the same line. A: If I understand your question well, use this: import pandas as pd dados = pd.read_excel(r"Dados.xlsx") CPF = int(input('Digite o CPF: ')) if CPF in dados['CPF'].to_list(): print(dados.loc[dados['CPF'].eq(CPF), ['CPF', 'Agencia', 'Conta']]) else: print('Não encontrado')
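The answer's lookup can be sanity-checked without the Excel file by swapping in an in-memory frame (the CPF numbers and account values below are fabricated for illustration):

```python
import pandas as pd

# Stand-in for: dados = pd.read_excel(r"Dados.xlsx")
dados = pd.DataFrame({
    "CPF":     [111, 222, 333],
    "Agencia": ["0001", "0002", "0003"],
    "Conta":   ["12345-6", "23456-7", "34567-8"],
})

cpf = 222  # what int(input('Digite o CPF: ')) might return
if cpf in dados["CPF"].to_list():
    # Boolean mask on the CPF column, keeping only the wanted columns.
    resultado = dados.loc[dados["CPF"].eq(cpf), ["CPF", "Agencia", "Conta"]]
    print(resultado)
else:
    print("Não encontrado")
```

Note that casting the input with `int()` matters: `input()` returns a string, and `"222"` would never match the integer CPF values in the column.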
How to get information from excel through an input using pandas python?
How can I get information from specific lines of an Excel file using a variable entry placed by the user? I managed to find the information, but I can't integrate it. import pandas as pd dados = pd.read_excel(r"Dados.xlsx") CPF = input('Digite o CPF: ') if CPF in dados: print(dados.iloc[CPF][['CPF', 'Agencia', 'Conta']]) else:print('Não encontrado') the iloc print works when I put the line to be printed, but I wanted the CPF indicated in the input to be the vector to search the line, since the information is on the same line.
[ "If I understand well your question use this :\nimport pandas as pd\n\ndados = pd.read_excel(r\"Dados.xlsx\")\n\nCPF = int(input('Digite o CPF: '))\n\nif CPF in dados['CPF'].to_list():\n print(dados.loc[dados['CPF'].eq(CPF), ['CPF', 'Agencia', 'Conta']])\nelse:\n print('Não encontrado')\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074553290_pandas_python.txt
Q: Travelling Salesman in scipy How do I solve a Travelling Salesman problem in python? I did not find any library, there should be a way using scipy functions for optimization or other libraries. My hacky-extremely-lazy-pythonic bruteforcing solution is: tsp_solution = min( (sum( Dist[i] for i in izip(per, per[1:])), n, per) for n, per in enumerate(i for i in permutations(xrange(Dist.shape[0]), Dist.shape[0])) )[2] where Dist (numpy.array) is the distance matrix. If Dist is too big this will take forever. Suggestions? A: The scipy.optimize functions are not constructed to allow straightforward adaptation to the traveling salesman problem (TSP). For a simple solution, I recommend the 2-opt algorithm, which is a well-accepted algorithm for solving the TSP and relatively straightforward to implement. Here is my implementation of the algorithm: import numpy as np # Calculate the Euclidean distance in n-space of the route r traversing cities c, ending at the path start. path_distance = lambda r,c: np.sum([np.linalg.norm(c[r[p]]-c[r[p-1]]) for p in range(len(r))]) # Reverse the order of all elements from element i to element k in array r. two_opt_swap = lambda r,i,k: np.concatenate((r[0:i],r[k:-len(r)+i-1:-1],r[k+1:len(r)])) def two_opt(cities,improvement_threshold): # 2-opt Algorithm adapted from https://en.wikipedia.org/wiki/2-opt route = np.arange(cities.shape[0]) # Make an array of row numbers corresponding to cities. improvement_factor = 1 # Initialize the improvement factor. best_distance = path_distance(route,cities) # Calculate the distance of the initial path. while improvement_factor > improvement_threshold: # If the route is still improving, keep going! distance_to_beat = best_distance # Record the distance at the beginning of the loop. 
for swap_first in range(1,len(route)-2): # From each city except the first and last, for swap_last in range(swap_first+1,len(route)): # to each of the cities following, new_route = two_opt_swap(route,swap_first,swap_last) # try reversing the order of these cities new_distance = path_distance(new_route,cities) # and check the total distance with this modification. if new_distance < best_distance: # If the path distance is an improvement, route = new_route # make this the accepted best route best_distance = new_distance # and update the distance corresponding to this route. improvement_factor = 1 - best_distance/distance_to_beat # Calculate how much the route has improved. return route # When the route is no longer improving substantially, stop searching and return the route. Here is an example of the function being used: # Create a matrix of cities, with each row being a location in 2-space (function works in n-dimensions). cities = np.random.RandomState(42).rand(70,2) # Find a good route with 2-opt ("route" gives the order in which to travel to each city by row number.) route = two_opt(cities,0.001) And here is the approximated solution path shown on a plot: import matplotlib.pyplot as plt # Reorder the cities matrix by route order in a new matrix for plotting. new_cities_order = np.concatenate((np.array([cities[route[i]] for i in range(len(route))]),np.array([cities[0]]))) # Plot the cities. plt.scatter(cities[:,0],cities[:,1]) # Plot the path. plt.plot(new_cities_order[:,0],new_cities_order[:,1]) plt.show() # Print the route as row numbers and the total distance travelled by the path. print("Route: " + str(route) + "\n\nDistance: " + str(path_distance(route,cities))) If the speed of algorithm is important to you, I recommend pre-calculating the distances and storing them in a matrix. This dramatically decreases the convergence time. 
Edit: Custom Start and End Points For a non-circular path (one which ends at a location different from where it starts), edit the path distance formula to path_distance = lambda r,c: np.sum([np.linalg.norm(c[r[p+1]]-c[r[p]]) for p in range(len(r)-1)]) and then reorder the cities for plotting using new_cities_order = np.array([cities[route[i]] for i in range(len(route))]) With the code as it is, the starting city is fixed as the first city in cities, and the ending city is variable. To make the ending city the last city in cities, restrict the range of swappable cities by changing the range of swap_first and swap_last in two_opt() with the code for swap_first in range(1,len(route)-3): for swap_last in range(swap_first+1,len(route)-1): To make both the starting and ending cities variable, instead expand the range of swap_first and swap_last with for swap_first in range(0,len(route)-2): for swap_last in range(swap_first+1,len(route)): A: I recently found out this option to use linear optimization for the TSP problem https://gist.github.com/mirrornerror/a684b4d439edbd7117db66a56f2483e0 Nonetheless I agree with some of the other comments, just a remainder that there are ways to use linear optimization for this problem. Some academic publications include the following http://www.opl.ufc.br/post/tsp/ https://phabi.ch/2021/09/19/tsp-subtour-elimination-by-miller-tucker-zemlin-constraint/
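Following the 2-opt answer's closing tip about pre-calculating distances, here is one way to build the matrix with NumPy broadcasting (the 10-city problem size is arbitrary); the path distance then becomes a table lookup instead of repeated norm calls inside the swap loops:

```python
import numpy as np

cities = np.random.RandomState(42).rand(10, 2)  # 10 cities in 2-space

# Precompute all pairwise Euclidean distances in one shot.
diff = cities[:, None, :] - cities[None, :, :]
dist_matrix = np.sqrt((diff ** 2).sum(axis=-1))

def path_distance(route, dist_matrix):
    # Circular path distance via lookups, matching the answer's original formula.
    return sum(dist_matrix[route[p], route[p - 1]] for p in range(len(route)))

route = np.arange(cities.shape[0])
print(path_distance(route, dist_matrix))
```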
Travelling Salesman in scipy
How do I solve a Travelling Salesman problem in Python? I did not find any library; there should be a way using scipy functions for optimization or other libraries. My hacky-extremely-lazy-pythonic bruteforcing solution is: tsp_solution = min( (sum( Dist[i] for i in izip(per, per[1:])), n, per) for n, per in enumerate(i for i in permutations(xrange(Dist.shape[0]), Dist.shape[0])) )[2] where Dist (numpy.array) is the distance matrix. If Dist is too big this will take forever. Suggestions?
[ "The scipy.optimize functions are not constructed to allow straightforward adaptation to the traveling salesman problem (TSP). For a simple solution, I recommend the 2-opt algorithm, which is a well-accepted algorithm for solving the TSP and relatively straightforward to implement. Here is my implementation of th...
[ 25, 0 ]
[]
[]
[ "optimization", "python", "scipy", "traveling_salesman" ]
stackoverflow_0025585401_optimization_python_scipy_traveling_salesman.txt
Q: How to save all the lines in lists Python I am trying to get all the lines from my input file and save them in the lists dataset_texts and dataset_labels. But instead I am getting only the last line of my input file. The variable text_str gets the text sequence in the line and the variable labels_str saves the vector that correspond to the text sequence in the same line. The variable label saves the position of 1 in the vector. Finally I want to save these lines in two lists dataset_texts and dataset_labels, but for some reason that I could not understand, it's saving only the last line. Please advice how can I get the lists with all my lines and their respective positions of 1 in the vector? This is the code that I have so far and checked line by line. from transformers import BertTokenizer import torch import re training_set_path = '../test.txt' regexp = r'^(.*)\t(\d+)$' dataset_texts = list() dataset_labels = list() input_file = open(training_set_path, 'rb' ) print("Dataset loaded") num_labels = 0 print("Num_labels") print(num_labels) #labels_str = [] # added by me for line in input_file: line = line.decode( errors = 'replace' ) #print(line) if re.match(regexp, line): text_str = re.findall( regexp, line )[0][0] # getting the aa sequence print("here text_str") print(text_str) labels_str = re.findall( regexp, line )[0][1] # getting the corresponding vector print("here labels_str") print(labels_str) label = labels_str.index('1') print("here label") print(label) dataset_texts.append( text_str ) dataset_labels.append( label ) num_labels = len(labels_str) print("Here length num_labels") print(num_labels) counter += 1 # else: # break input_file.close() print("______________________________________________________________________") print("Here dataset_text") print(dataset_texts) print("Here dataset_labels") print(dataset_labels) output_file = open( logs_path, 'w') num_labels = len(labels_str) My output is as follows: Dataset loaded Num_labels 0 here text_str Q Q L R 
K P A E E L G R E I T H Q L F L L G C G A Q M L K Y A S P P M A Q A W C Q V M L D T R G G V R L S E Q I Q N D L L here labels_str 1000000000000000000000000000000000000000000000000000000000000 here label 0 Here length num_labels 61 ______________________________________________________________________ Here dataset_text ['Q Q L R K P A E E L G R E I T H Q L F L L G C G A Q M L K Y A S P P M A Q A W C Q V M L D T R G G V R L S E Q I Q N D L L'] Here dataset_labels [0] A: I believe the issue is with your regex. Change regexp = r'^(.*)\t(\d+)$' to regexp = r'^(.*)\t(\d+)(\r\n|\r|\n)$' so that it matches new line characters at the end of each line I ran into an error with this label = labels_str.index('1') after fixing the regex. So, you may want to remove that. You will also need to define counter outside of the loop before trying to add to it. The code will also error out if there are no matches because you print out variables at the end that are only defined when there is a match. So I would probably also define all those variables outside the loop as blank string. Hopefully I guessed right in the format of your input file. Some text followed by a tab and then some digits. sample output Here dataset_text ['abasd\tTEST', 'FASDASD\t345678 TEST', 'FASDASD\t345678 TEST', 'FASDASD\t345678 TEST', 'FASDASD\t345678 TEST'] Here dataset_labels ['1234', '4321', '8964', '1234', '1234']
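An alternative to patching the regex is to avoid anchoring on line endings altogether and split each line on the tab instead; this is a hedged sketch (the sample lines are made up, not taken from the asker's file):

```python
# Parse "text<TAB>digit-vector" lines without worrying about \n vs \r\n endings.
raw_lines = ["Q Q L R\t100\n", "A B C\t010\r\n", "no tab here\n"]

dataset_texts = []
dataset_labels = []
for line in raw_lines:
    line = line.rstrip("\r\n")           # strip the newline before matching
    if "\t" not in line:
        continue                          # skip malformed lines
    text, digits = line.rsplit("\t", 1)   # split on the last tab only
    if not digits.isdigit():
        continue
    dataset_texts.append(text)
    dataset_labels.append(digits.index("1"))  # position of the 1 in the vector

print(dataset_texts)   # ['Q Q L R', 'A B C']
print(dataset_labels)  # [0, 1]
```

Because nothing here anchors on end-of-line, it behaves identically whether the file was written with Unix or Windows line endings.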
How to save all the lines in lists Python
I am trying to get all the lines from my input file and save them in the lists dataset_texts and dataset_labels. But instead I am getting only the last line of my input file. The variable text_str gets the text sequence in the line and the variable labels_str saves the vector that correspond to the text sequence in the same line. The variable label saves the position of 1 in the vector. Finally I want to save these lines in two lists dataset_texts and dataset_labels, but for some reason that I could not understand, it's saving only the last line. Please advice how can I get the lists with all my lines and their respective positions of 1 in the vector? This is the code that I have so far and checked line by line. from transformers import BertTokenizer import torch import re training_set_path = '../test.txt' regexp = r'^(.*)\t(\d+)$' dataset_texts = list() dataset_labels = list() input_file = open(training_set_path, 'rb' ) print("Dataset loaded") num_labels = 0 print("Num_labels") print(num_labels) #labels_str = [] # added by me for line in input_file: line = line.decode( errors = 'replace' ) #print(line) if re.match(regexp, line): text_str = re.findall( regexp, line )[0][0] # getting the aa sequence print("here text_str") print(text_str) labels_str = re.findall( regexp, line )[0][1] # getting the corresponding vector print("here labels_str") print(labels_str) label = labels_str.index('1') print("here label") print(label) dataset_texts.append( text_str ) dataset_labels.append( label ) num_labels = len(labels_str) print("Here length num_labels") print(num_labels) counter += 1 # else: # break input_file.close() print("______________________________________________________________________") print("Here dataset_text") print(dataset_texts) print("Here dataset_labels") print(dataset_labels) output_file = open( logs_path, 'w') num_labels = len(labels_str) My output is as follows: Dataset loaded Num_labels 0 here text_str Q Q L R K P A E E L G R E I T H Q L F L L G C G A Q M 
L K Y A S P P M A Q A W C Q V M L D T R G G V R L S E Q I Q N D L L here labels_str 1000000000000000000000000000000000000000000000000000000000000 here label 0 Here length num_labels 61 ______________________________________________________________________ Here dataset_text ['Q Q L R K P A E E L G R E I T H Q L F L L G C G A Q M L K Y A S P P M A Q A W C Q V M L D T R G G V R L S E Q I Q N D L L'] Here dataset_labels [0]
[ "I believe the issue is with your regex.\nChange regexp = r'^(.*)\\t(\\d+)$' to regexp = r'^(.*)\\t(\\d+)(\\r\\n|\\r|\\n)$' so that it matches new line characters at the end of each line\nI ran into an error with this label = labels_str.index('1') after fixing the regex. So, you may want to remove that. You will al...
[ 0 ]
[]
[]
[ "line", "list", "python", "string" ]
stackoverflow_0074547790_line_list_python_string.txt
Q: How to create a dictionary using a list? I have this list (newList) where I'm trying to create a dictionary with the product as a key and the price (identified with a $) as a value. While the output of the keys is correct, the only key that has a value is the last one, and I don't know what I'm doing wrong. I would appreciate some help, thank you. Here is a summarized version of the code I was trying newList = ["banana", "apple", "$10", "$15"] dict = {} product = "" price = "" for i in newList: if "$" not in i: product = i else: price = i dict[product] = price print(dict) And this is the output: {'banana': '', 'apple': '$15'} Desired output: {'banana': '$10', 'apple': '$15'} A: Split the list into two lists, then combine them into a dictionary. products = [x for x in newList if not x.startswith("$")] prices = [x for x in newList if x.startswith("$")] product_prices = dict(zip(products, prices))
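One caveat with the zip-based answer: zip() silently truncates at the shorter list if the counts ever differ. A defensive variant (the explicit length check is an addition for illustration, not part of the original answer):

```python
newList = ["banana", "apple", "$10", "$15"]

products = [x for x in newList if not x.startswith("$")]
prices = [x for x in newList if x.startswith("$")]

# zip() stops at the shorter list, so guard against unpaired items first.
if len(products) != len(prices):
    raise ValueError("each product needs exactly one price")

product_prices = dict(zip(products, prices))
print(product_prices)  # {'banana': '$10', 'apple': '$15'}
```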
How to create a dictionary using a list?
I have this list (newList) where I'm trying to create a dictionary with the product as a key and the price (identified with a $) as a value. While the output of the keys is correct, the only key that has a value is the last one, and I don't know what I'm doing wrong. I would appreciate some help, thank you. Here is a summarized version of the code I was trying newList = ["banana", "apple", "$10", "$15"] dict = {} product = "" price = "" for i in newList: if "$" not in i: product = i else: price = i dict[product] = price print(dict) And this is the output: {'banana': '', 'apple': '$15'} Desired output: {'banana': '$10', 'apple': '$15'}
[ "Split the list into two lists, then combine them into a dictionary.\nproducts = [x for x in newList if not x.startswith(\"$\")]\nprices = [x for x in newList if x.startswith(\"$\")]\n\nproduct_prices = dict(zip(products, prices))\n\n" ]
[ 3 ]
[]
[]
[ "arraylist", "dictionary", "python" ]
stackoverflow_0074553432_arraylist_dictionary_python.txt
Q: Invalid character in identifier I am working on the letter distribution problem from HP code wars 2012. I keep getting an error message that says "invalid character in identifier". What does this mean and how can it be fixed? Here is the page with the information. import string def text_analyzer(text): '''The text to be parsed and the number of occurrences of the letters given back be. Punctuation marks, and I ignore the EOF simple. The function is thus very limited. ''' result = {} # Processing for a in string.ascii_lowercase: result [a] = text.lower (). count (a) return result def analysis_result (results): # I look at the data keys = analysis.keys () values \u200b\u200b= list(analysis.values \u200b\u200b()) values.sort (reverse = True ) # I turn to the dictionary and # Must avoid that letters will be overwritten w2 = {} list = [] for key in keys: item = w2.get (results [key], 0 ) if item = = 0 : w2 [analysis results [key]] = [key] else : item.append (key) w2 [analysis results [key]] = item # We get the keys keys = list (w2.keys ()) keys.sort (reverse = True ) for key in keys: list = w2 [key] liste.sort () for a in list: print (a.upper (), "*" * key) text = """I have a dream that one day this nation will rise up and live out the true meaning of its creed: "We hold these truths to be self-evident, that all men are created equal. "I have a dream that my four little children will one day live in a nation where they will not be Judged by the color of their skin but by the content of their character. # # # """ analysis result = text_analyzer (text) analysis_results (results) A: The error SyntaxError: invalid character in identifier means you have some character in the middle of a variable name, function, etc. that's not a letter, number, or underscore. 
The actual error message will look something like this: File "invalchar.py", line 23 values = list(analysis.values ()) ^ SyntaxError: invalid character in identifier That tells you what the actual problem is, so you don't have to guess "where do I have an invalid character"? Well, if you look at that line, you've got a bunch of non-printing garbage characters in there. Take them out, and you'll get past this. If you want to know what the actual garbage characters are, I copied the offending line from your code and pasted it into a string in a Python interpreter: >>> s=' values ​​= list(analysis.values ​​())' >>> s ' values \u200b\u200b= list(analysis.values \u200b\u200b())' So, that's \u200b, or ZERO WIDTH SPACE. That explains why you can't see it on the page. Most commonly, you get these because you've copied some formatted (not plain-text) code off a site like StackOverflow or a wiki, or out of a PDF file. If your editor doesn't give you a way to find and fix those characters, just delete and retype the line. Of course you've also got at least two IndentationErrors from not indenting things, at least one more SyntaxError from stay spaces (like = = instead of ==) or underscores turned into spaces (like analysis results instead of analysis_results). The question is, how did you get your code into this state? If you're using something like Microsoft Word as a code editor, that's your problem. Use a text editor. If not… well, whatever the root problem is that caused you to end up with these garbage characters, broken indentation, and extra spaces, fix that, before you try to fix your code. A: If your keyboard is set to English US (International) rather than English US the double quotation marks don't work. This is why the single quotation marks worked in your case. A: Similar to the previous answers, the problem is some character (possibly invisible) that the Python interpreter doesn't recognize. 
Because this is often due to copy-pasting code, re-typing the line is one option. But if you don't want to re-type the line, you can paste your code into this tool or something similar (Google "show unicode characters online"), and it will reveal any non-standard characters. For example, s=' values ​​= list(analysis.values ​​())' becomes s=' values U+200B U+200B​​ = list(analysis.values U+200B U+200B ​​())' You can then delete the non-standard characters from the string. A: Carefully see your quotation, is this correct or incorrect! Sometime double quotation doesn’t work properly, it's depend on your keyboard layout. A: I got a similar issue. My solution was to change minus character from: — to - A: I got that error, when sometimes I type in Chinese language. When it comes to punctuation marks, you do not notice that you are actually typing the Chinese version, instead of the English version. The interpreter will give you an error message, but for human eyes, it is hard to notice the difference. For example, "," in Chinese; and "," in English. So be careful with your language setting. A: Not sure this is right on but when i copied some code form a paper on using pgmpy and pasted it into the editor under Spyder, i kept getting the "invalid character in identifier" error though it didn't look bad to me. The particular line was grade_cpd = TabularCPD(variable='G',\ For no good reason I replaced the ' with " throughout the code and it worked. Not sure why but it did work A: A little bit late but I got the same error and I realized that it was because I copied some code from a PDF. Check the difference between these two: - − The first one is from hitting the minus sign on keyboard and the second is from a latex generated PDF. A: This error occurs mainly when copy-pasting the code. Try editing/replacing minus(-), bracket({) symbols. A: You don't get a good error message in IDLE if you just Run the module. 
Try typing an import command from within IDLE shell, and you'll get a much more informative error message. I had the same error and that made all the difference. (And yes, I'd copied the code from an ebook and it was full of invisible "wrong" characters.) A: My solution was to switch my Mac keyboard from Unicode to U.S. English. A: it is similar for me as well after copying the code from my email. def update(self, k=1, step = 2): if self.start.get() and not self.is_paused.get(): U+A0 x_data.append([i for i in range(0,k,1)][-1]) y = [i for i in range(0,k,step)][-1] There is additional U+A0 character after checking with the tool as recommended by @Jacob Stern.
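The tool-based approach in the answers can also be done in a few lines of Python; this sketch flags any character outside printable ASCII (the sample string deliberately contains U+200B zero-width spaces, matching the offending line from the question):

```python
line = " values \u200b\u200b= list(analysis.values \u200b\u200b())"

# Report every non-ASCII character with its position and code point.
suspects = [(i, f"U+{ord(ch):04X}") for i, ch in enumerate(line) if ord(ch) > 127]
print(suspects)  # four U+200B zero-width spaces

# Strip them to recover a clean source line.
cleaned = "".join(ch for ch in line if ord(ch) <= 127)
print(cleaned)   # ' values = list(analysis.values ())'
```

Note that a blanket "strip all non-ASCII" pass would also remove legitimate characters in string literals or comments, so this is best used as a diagnostic rather than an automatic fix.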
Invalid character in identifier
I am working on the letter distribution problem from HP code wars 2012. I keep getting an error message that says "invalid character in identifier". What does this mean and how can it be fixed? Here is the page with the information. import string def text_analyzer(text): '''The text to be parsed and the number of occurrences of the letters given back be. Punctuation marks, and I ignore the EOF simple. The function is thus very limited. ''' result = {} # Processing for a in string.ascii_lowercase: result [a] = text.lower (). count (a) return result def analysis_result (results): # I look at the data keys = analysis.keys () values \u200b\u200b= list(analysis.values \u200b\u200b()) values.sort (reverse = True ) # I turn to the dictionary and # Must avoid that letters will be overwritten w2 = {} list = [] for key in keys: item = w2.get (results [key], 0 ) if item = = 0 : w2 [analysis results [key]] = [key] else : item.append (key) w2 [analysis results [key]] = item # We get the keys keys = list (w2.keys ()) keys.sort (reverse = True ) for key in keys: list = w2 [key] liste.sort () for a in list: print (a.upper (), "*" * key) text = """I have a dream that one day this nation will rise up and live out the true meaning of its creed: "We hold these truths to be self-evident, that all men are created equal. "I have a dream that my four little children will one day live in a nation where they will not be Judged by the color of their skin but by the content of their character. # # # """ analysis result = text_analyzer (text) analysis_results (results)
[ "The error SyntaxError: invalid character in identifier means you have some character in the middle of a variable name, function, etc. that's not a letter, number, or underscore. The actual error message will look something like this:\n File \"invalchar.py\", line 23\n values = list(analysis.values ())\n ...
[ 106, 12, 5, 3, 3, 2, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0014844687_python_python_3.x.txt
Q: Convert a lambda function to a regular function I'm trying to understand how I can convert a lambda function to a normal one. I have this lambda function that is supposed to fill the null values of each column with the mode def fill_nn(data): df = data.apply(lambda column: column.fillna(column.mode()[0])) return df I tried this: def fill_nn(df): for column in df: if df[column].isnull().any(): return df[column].fillna(df[column].mode()[0]) A: Hi, hope you are doing well! If I understood your question correctly then the best possible way will be similar to this: import pandas as pd def fill_missing_values(series: pd.Series) -> pd.Series: """Fill missing values in series/column.""" value_to_use = series.mode()[0] return series.fillna(value=value_to_use) df = pd.DataFrame( { "A": [1, 2, 3, 4, 5], "B": [None, 2, 3, 4, None], "C": [None, None, 3, 4, None], } ) df = df.apply(fill_missing_values) # type: ignore print(df) # A B C # 0 1 2.0 3.0 # 1 2 2.0 3.0 # 2 3 3.0 3.0 # 3 4 4.0 4.0 # 4 5 2.0 3.0 but personally, I would still use the lambda as it requires less code and is easier to handle (especially for such a small task).
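Outside pandas, the same fill-with-mode idea can be illustrated in plain Python; here statistics.mode stands in for Series.mode()[0] (a sketch under that assumption, not the pandas API):

```python
from statistics import mode

# Named-function version of the column-filling lambda: replace each None
# with the most common non-None value in the column.
def fill_with_mode(values):
    fill_value = mode(v for v in values if v is not None)
    return [fill_value if v is None else v for v in values]

# The anonymous one-liner does the same work; the conversion is just
# giving the body a name and a def header.
fill_nn = lambda values: fill_with_mode(values)

print(fill_with_mode([1, None, 1, 2]))  # [1, 1, 1, 2]
```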
Convert a lambda function to a regular function
I'm trying to understand how I can convert a lambda function to a normal one. I have this lambda function that is supposed to fill the null values of each column with the mode def fill_nn(data): df = data.apply(lambda column: column.fillna(column.mode()[0])) return df I tried this: def fill_nn(df): for column in df: if df[column].isnull().any(): return df[column].fillna(df[column].mode()[0])
[ "Hi Hope you are doing well!\nIf I understood your question correctly then the best possible way will be similar to this:\nimport pandas as pd\n\n\ndef fill_missing_values(series: pd.Series) -> pd.Series:\n \"\"\"Fill missing values in series/column.\"\"\"\n\n value_to_use = series.mode()[0]\n return seri...
[ 0 ]
[]
[]
[ "fillna", "function", "lambda", "python" ]
stackoverflow_0074551640_fillna_function_lambda_python.txt
Q: Class attribute named the same as a local variable? I've got the following Python code: def GF(p, m=1): # Yeah, I know this should be a metaclass, but I can't find a good tutorial on them q = p ** m class _element: p = p m = m q = q def __init__(self, value): self.value = value def __repr__(self): return 'GF(%i, %i)(%i)' % (self.p, self.m, self.value) ... return _element The problem here is with the line p = p: the left side "binds" p to the class' definition, preventing it from pulling in p from the locals. One fix I can see is hacking around the variable names to fit: def GF(p, m=1): q = p ** m _p, _m, _q = p, m, q class _element: p = _p m = _m q = _q def __init__(self, value): self.value = value def __repr__(self): return 'GF(%i, %i)(%i)' % (self.p, self.m, self.value) ... return _element Is there a more recognized or stable way to do this? A: Firstly, here's a good tutorial. Secondly, you can use the type function in python to create classes. type(name, bases, attrs) So to answer your question here's some code : def display(self): return 'GF(%i, %i)(%i)' % (self.p, self.m, self.value) def constructor(self, value): self.value = value def GF(p, m=1): q = p ** m return type("_element", (), {'p': p, 'm': m, 'q': q, '__repr__':display, '__init__':constructor})
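A quick usage check of the type()-built class from the answer (the parameter values below are illustrative):

```python
def display(self):
    return 'GF(%i, %i)(%i)' % (self.p, self.m, self.value)

def constructor(self, value):
    self.value = value

def GF(p, m=1):
    q = p ** m
    # type(name, bases, attrs) builds the class object directly, so the local
    # p, m, q never collide with class-body names.
    return type("_element", (), {'p': p, 'm': m, 'q': q,
                                 '__repr__': display, '__init__': constructor})

cls = GF(3, 2)      # field parameters are captured as class attributes
e = cls(5)
print(repr(e))      # GF(3, 2)(5)
print(e.q)          # 9
```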
Class attribute named the same as a local variable?
I've got the following Python code: def GF(p, m=1): # Yeah, I know this should be a metaclass, but I can't find a good tutorial on them q = p ** m class _element: p = p m = m q = q def __init__(self, value): self.value = value def __repr__(self): return 'GF(%i, %i)(%i)' % (self.p, self.m, self.value) ... return _element The problem here is with the line p = p: the left side "binds" p to the class' definition, preventing it from pulling in p from the locals. One fix I can see is hacking around the variable names to fit: def GF(p, m=1): q = p ** m _p, _m, _q = p, m, q class _element: p = _p m = _m q = _q def __init__(self, value): self.value = value def __repr__(self): return 'GF(%i, %i)(%i)' % (self.p, self.m, self.value) ... return _element Is there a more recognized or stable way to do this?
[ "Firstly, here's a good tutorial. Secondly, you can use the type function in python to create classes.\ntype(name, bases, attrs)\nSo to answer your question here's some code :\ndef display(self):\n return 'GF(%i, %i)(%i)' % (self.p, self.m, self.value)\n\ndef constructor(self, value):\n self.value = value\n\n...
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074553344_python.txt
Q: How to write Python code for generating Italian car number plates? The three-digit number changes first, then the letters from right to left. So, first plate is AA 000 AA, followed by AA 001 AA...AA 999 AA, then AA 000 AB to AA 999 AZ, then AA 000 BA to AA 999 ZZ, then AB 000 AA to AZ 999 ZZ, then BA 000 AA to ZZ 999 ZZ. A: Another solution, an alternative to nested loops, also quite readable. Use itertools.product() instead of nested loops: "cartesian product, equivalent to a nested for-loop". Use a generator expression to build the plate string, in order to keep the whole thing "iterable". Since product() varies its rightmost factor fastest, range(1000) goes last so the number changes first, as required. from string import ascii_uppercase from itertools import product plate_gen = (f"{l1}{l2} {n:03d} {l3}{l4}" for l1, l2, l3, l4, n in product(ascii_uppercase, ascii_uppercase, ascii_uppercase, ascii_uppercase, range(1000))) Then use the generator, in a "streaming mode", to print or whatever you want. for plate in plate_gen: print(plate) A: Just use a couple for loops. But this is a very big output (456_976_000 lines): from string import ascii_uppercase as letters for l1 in letters: for l2 in letters: for l3 in letters: for l4 in letters: for i in range(1000): print(f'{l1}{l2} {i:03d} {l3}{l4}') A: I suggest writing a function that calculates the string of the nth number plate based on number n: def numberplate(i): chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" # Remove invalid ones c = len(chars) return chars[i//1000//c**3] + chars[(i//1000%c**3)//c**2] + "{:03d}".format(i%1000) + chars[(i//1000%c**2)//c] + chars[i//(1000)%c**1] print(numberplate(0)) print(numberplate(456)) print(numberplate(1000)) print(numberplate(1335)) print(numberplate(1000*26**4-1)) # The below will run for a while... for i in range(1000*26**4): print(numberplate(i)) Edit: I am not using ascii_uppercase on purpose as in many countries, some characters like O and I are not allowed in license plates.
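Because itertools.product() varies its rightmost factor fastest, putting range(1000) last makes the three-digit number change first and the letters roll over right to left, exactly as the question specifies. A short standalone check of that ordering:

```python
from itertools import islice, product
from string import ascii_uppercase

# Number last => number varies fastest; letters then change right to left.
plates = (f"{l1}{l2} {n:03d} {l3}{l4}"
          for l1, l2, l3, l4, n in product(ascii_uppercase, ascii_uppercase,
                                           ascii_uppercase, ascii_uppercase,
                                           range(1000)))

first_three = list(islice(plates, 3))
print(first_three)  # ['AA 000 AA', 'AA 001 AA', 'AA 002 AA']

# After AA 999 AA the rightmost letter rolls over: the 1001st plate.
plate_1001 = next(islice(plates, 997, None))
print(plate_1001)   # AA 000 AB
```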
How to write Python code for generating Italian car number plates?
The three-digit number changes first, then the letters from right to left. So, first plate is AA 000 AA, followed by AA 001 AA...AA 999 AA, then AA 000 AB to AA 999 AZ, then AA 000 BA to AA 999 ZZ, then AB 000 AA to AZ 999 ZZ, then BA 000 AA to ZZ 999 ZZ.
[ "Another solution, an alternative to nested loops, also quite readable.\nUse itertools.product() instead of nested loops : \"cartesian product, equivalent to a nested for-loop\".\nUse a generator expression to build the plate string, in order to keep the whole thing \"iterable\".\nfrom string import ascii_uppercase...
[ 1, 0, 0 ]
[ "Here's a reasonably general approach to this problem:\ndigits = [ \n # value, init, count, suffix, order \n [ 'A', 'A', 26 , '', 6], \n [ 'A', 'A', 26 , ' ', 5], \n [ '0', '0', 10, '', 2 ], \n [ '0', '0', 10, '', 1 ], \n [ '0', '0', 10, ' ', 0 ], \n [ 'A', 'A', 26 , '', 4], \n [ 'A', 'A', 26 , '', 3], \n...
[ -1 ]
[ "python" ]
stackoverflow_0074552511_python.txt
Q: Pandas. Collapse rows with a certain time difference by summing data in other columns I have a table with time points (timestamp). If less than a minute has passed between the points, I need to sum the timedelta and update the point's datetime boundaries. For the rows where 'time_break' is less than 60 I need the following: sum 'time_delta' with 'time_delta' from the previous row; update 'ts_start' with the value 'ts_start' from the previous row; calculate new 'time_break', so it's correct for updated time points Input data: df = pd.DataFrame({'ts_start': [1647854644, 1647855323, 1647855454, 1647855521, 1647858807, 1647858858, 1647858970], 'ts_end': [1647854699, 1647855421, 1647855521, 1647856205, 1647858810, 1647858958, 1647859020], 'time_break': [105.0, 624.0, 33.0, 0.0, 2602.0, 48.0, 12.0], 'time_delta': [55, 98, 67, 625, 3, 100, 50]}) Expected output: df_out = pd.DataFrame({'ts_start': [1647854644, 1647855323, 1647858807], 'ts_end': [1647854699, 1647856205, 1647859020], 'time_break': [105.0, 624.0, 2602.0], 'time_delta': [55, 790, 153]}) But I understand it's working slowly and it misses doubled time_breaks. I think there's a possibility of using groupby, but I can't come up with a solution. Would appreciate any help, thanks! 
I've tried this: for i in range(1, len(df)): try: if 0 <= df.iloc[i, df.columns.get_loc('time_break')] <= 60: df.iloc[i, df.columns.get_loc('time_delta')] += df.iloc[i-1, df.columns.get_loc('time_delta')] df.iloc[i, df.columns.get_loc('ts_start')] = df.iloc[i-1, df.columns.get_loc('ts_start')] df.iloc[i, df.columns.get_loc('time_break')] = df.iloc[i, df.columns.get_loc('ts_start')] - df.iloc[i-2, df.columns.get_loc('ts_end')] df.drop(index=i-1, inplace=True) df = df.reset_index(drop=True) except IndexError: break; My output: (screenshot not reproduced) A: Try: x = ( df.groupby((df["time_break"] >= 60).cumsum()) .agg( { "ts_start": "first", "ts_end": "last", "time_break": "first", "time_delta": "sum", } ) .reset_index(drop=True) ) print(x) Prints: ts_start ts_end time_break time_delta 0 1647854644 1647854699 105.0 55 1 1647855323 1647856205 624.0 790 2 1647858807 1647859020 2602.0 153
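The trick in the groupby answer is that (time_break >= 60).cumsum() gives every row a group id that only increments at a real break, so consecutive "close" rows share one id. The same idea sketched in plain Python with the question's data (no pandas required, purely for illustration):

```python
from itertools import groupby

# (ts_start, ts_end, time_break, time_delta) rows from the question
rows = [
    (1647854644, 1647854699, 105.0, 55),
    (1647855323, 1647855421, 624.0, 98),
    (1647855454, 1647855521, 33.0, 67),
    (1647855521, 1647856205, 0.0, 625),
    (1647858807, 1647858810, 2602.0, 3),
    (1647858858, 1647858958, 48.0, 100),
    (1647858970, 1647859020, 12.0, 50),
]

# Running count of "real" breaks = the cumsum group key.
group_ids, g = [], 0
for _, _, time_break, _ in rows:
    if time_break >= 60:
        g += 1
    group_ids.append(g)

merged = []
for _, grp in groupby(zip(group_ids, rows), key=lambda t: t[0]):
    grp = [r for _, r in grp]
    merged.append((grp[0][0],                # first ts_start
                   grp[-1][1],               # last ts_end
                   grp[0][2],                # first time_break
                   sum(r[3] for r in grp)))  # summed time_delta

print(merged)
# [(1647854644, 1647854699, 105.0, 55),
#  (1647855323, 1647856205, 624.0, 790),
#  (1647858807, 1647859020, 2602.0, 153)]
```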
Pandas. Collapse rows with a certain time difference by summing data in other columns
I have a table with time points (timestamp). If less than a minute has passed between the points, I need to sum the timedelta and update the point's datetime boundaries. For the rows where 'time_break' is less than 60 I need the following: sum 'time_delta' with 'time_delta' from the previous row; update 'ts_start' with the value 'ts_start' from the previous row; calculate new 'time_break', so it's correct for updated time points Input data: df = pd.DataFrame({'ts_start': [1647854644, 1647855323, 1647855454, 1647855521, 1647858807, 1647858858, 1647858970], 'ts_end': [1647854699, 1647855421, 1647855521, 1647856205, 1647858810, 1647858958, 1647859020], 'time_break': [105.0, 624.0, 33.0, 0.0, 2602.0, 48.0, 12.0], 'time_delta': [55, 98, 67, 625, 3, 100, 50]}) Expected output: df_out = pd.DataFrame({'ts_start': [1647854644, 1647855323, 1647858807], 'ts_end': [1647854699, 1647856205, 1647859020], 'time_break': [105.0, 624.0, 2602.0], 'time_delta': [55, 790, 153]}) But I understand it's working slowly and it misses doubled time_breaks. I think there's a possibility of using groupby, but I can't come up with a solution. Would appreciate any help, thanks! I've tried this: for i in range(1, len(df)): try: if 0 <= df.iloc[i, df.columns.get_loc('time_break')] <= 60: df.iloc[i, df.columns.get_loc('time_delta')] += df.iloc[i-1, df.columns.get_loc('time_delta')] df.iloc[i, df.columns.get_loc('ts_start')] = df.iloc[i-1, df.columns.get_loc('ts_start')] df.iloc[i, df.columns.get_loc('time_break')] = df.iloc[i, df.columns.get_loc('ts_start')] - df.iloc[i-2, df.columns.get_loc('ts_end')] df.drop(index=i-1, inplace=True) df = df.reset_index(drop=True) except IndexError: break; My output: (screenshot not reproduced)
[ "Try:\nx = (\n df.groupby((df[\"time_break\"] >= 60).cumsum())\n .agg(\n {\n \"ts_start\": \"first\",\n \"ts_end\": \"last\",\n \"time_break\": \"first\",\n \"time_delta\": \"sum\",\n }\n )\n .reset_index(drop=True)\n)\n\nprint(x)\n\nPrints:\n ...
[ 0 ]
[]
[]
[ "dataframe", "datetime", "merge", "pandas", "python" ]
stackoverflow_0074553438_dataframe_datetime_merge_pandas_python.txt
Q: Getting coordinates of surface nodes using pyvista I'm wondering if anyone could help me figure out how to apply pyvista to extract the surface nodes of a 3D object. For example, suppose I have a collection of points that builds out a sphere, including 'interior' and 'surface' points: import numpy as np import matplotlib.pyplot as plt N = 50 max_rad = 1 thetavec = np.linspace(0,np.pi,N) phivec = np.linspace(0,2*np.pi,2*N) [th, ph] = np.meshgrid(thetavec,phivec) R = np.random.rand(*th.shape) * max_rad x = R*np.sin(th)*np.cos(ph) y = R*np.sin(th)*np.sin(ph) z = R*np.cos(th) ax = plt.axes(projection='3d') ax.plot(x.flatten(), y.flatten(), z.flatten(), '*') Now I'd like to apply pyvista's extract_surface to locate the 'nodes' that live on the surface, together with their coordinates. That is, I'd like for extract_surface to return an array or dataframe of the coordinates of the surface points. I've tried to build a polydata object just with the vertices above (see link and section 'Initialize with just vertices') Any help is much appreciated. Thanks! A: Since you've confirmed in a comment that you're looking for a convex hull, you can do this using the delaunay_3d() filter. The output of the triangulation is an UnstructuredGrid that contains a grid of tetrahedra that fills the convex hull of your mesh. Calling extract_surface() on this space-filling mesh will give you the actual exterior, i.e.
the convex hull: import numpy as np import pyvista as pv # your example data N = 50 max_rad = 1 thetavec = np.linspace(0,np.pi,N) phivec = np.linspace(0,2*np.pi,2*N) [th, ph] = np.meshgrid(thetavec,phivec) R = np.random.rand(*th.shape) * max_rad x = R*np.sin(th)*np.cos(ph) y = R*np.sin(th)*np.sin(ph) z = R*np.cos(th) # create a PyVista point cloud (in a PolyData) points = np.array([x, y, z]).reshape(3, -1).T # shape (n_points, 3) cloud = pv.PolyData(points) # extract surface by Delaunay triangulation to get the convex hull convex_hull = cloud.delaunay_3d().extract_surface() # contains faces surface_points = convex_hull.cast_to_pointset() # only points # check what we've got surface_points.plot( render_points_as_spheres=True, point_size=10, background='paleturquoise', scalar_bar_args={'color': 'black'}, ) (On older PyVista versions where PolyData.cast_to_pointset() is not available, one can convex_hull.extract_points(range(convex_hull.n_points))). The result looks like this: Playing around with this interactively it's obvious that it only contains points from the convex hull (i.e. it doesn't contain interior points). Also note the colouring: the scalars used are called 'vtkOriginalPointIds' which are what you would actually expect if you tried to guess: it is the index of each point in the original point cloud. So we can use these scalars to extract the indices of the points making up the point cloud: # grab original point indices surface_point_inds = surface_points.point_data['vtkOriginalPointIds'] # confirm that the indices are correct print(np.array_equal(surface_points.points, cloud.points[surface_point_inds, :])) # True Of course if you don't need to identify the surface points in the original point cloud then you can just use surface_points.points or even convex_hull.points to get a standalone array of convex hull point coordinates.
Getting coordinates of surface nodes using pyvista
I'm wondering if anyone could help me figure out how to apply pyvista to extract the surface nodes of a 3D object. For example, suppose I have a collection of points that builds out a sphere, including 'interior' and 'surface' points: import numpy as np import matplotlib.pyplot as plt N = 50 max_rad = 1 thetavec = np.linspace(0,np.pi,N) phivec = np.linspace(0,2*np.pi,2*N) [th, ph] = np.meshgrid(thetavec,phivec) R = np.random.rand(*th.shape) * max_rad x = R*np.sin(th)*np.cos(ph) y = R*np.sin(th)*np.sin(ph) z = R*np.cos(th) ax = plt.axes(projection='3d') ax.plot(x.flatten(), y.flatten(), z.flatten(), '*') Now I'd like to apply pyvista's extract_surface to locate the 'nodes' that live on the surface, together with their coordinates. That is, I'd like for extract_surface to return an array or dataframe of the coordinates of the surface points. I've tried to build a polydata object just with the vertices above (see link and section 'Initialize with just vertices') Any help is much appreciated. Thanks!
[ "Since you've confirmed in a comment that you're looking for a convex hull, you can do this using the delaunay_3d() filter. The output of the triangulation is an UnstructuredGrid that contains a grid of tetrahedra that fills the convex hull of you mesh. Calling extract_surface() on this space-filling mesh will give...
[ 1 ]
[]
[]
[ "python", "pyvista" ]
stackoverflow_0074535351_python_pyvista.txt
Q: Saving OpenCV output in Motion JPEG format. Getting a "'MJPG' is not supported with codec id 7" error I'd like to save video camera output in motion JPEG (MJPG) format. The below code, import cv2 import numpy as np cap = cv2.VideoCapture(0) if (cap.isOpened() == False): print("Unable to read camera feed") frame_width = int(cap.get(3)) frame_height = int(cap.get(4)) frame_per_sec = int('10') out = cv2.VideoWriter('output.mjpeg',cv2.VideoWriter_fourcc('M','J','P','G'), (frame_per_sec), (frame_width,frame_height)) while(True): ret, frame = cap.read() if ret == True: # Write the frame into the file 'output.mjpeg' out.write(frame) # Display the resulting frame cv2.imshow('frame',frame) # Press Q on keyboard to stop recording if cv2.waitKey(1) & 0xFF == ord('q'): break else: break cap.release() out.release() cv2.destroyAllWindows() While it will run, I am getting the following error(s), [ WARN:0] OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1 OpenCV: FFMPEG: tag 0x67706a6d/'mjpg' is not supported with codec id 7 and format 'mjpeg / raw MJPEG video' What can I do to resolve these? I've tried changing the case, ('M','J','P','G' to 'm','j','p','g') with no success. Appreciate any suggestions regarding resolving the above issue, as well as the GStreamer issue. Thanks in advance. A: .mjpeg is not a valid suffix for any known container format. I'm sure you didn't intend to write a raw MJPG stream without a container. That is very very rarely useful at all and requires expert knowledge. You have two options: use MJPG in a .avi container, because that's built into OpenCV and doesn't even require ffmpeg use whatever ffmpeg understands, which would be a .mpg container, or .mov or .mkv or whatever else
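The tag in the FFMPEG error (0x67706a6d/'mjpg') is just the four ASCII characters packed little-endian, so lowercase input produces a different tag than uppercase, and changing case alone cannot fix a container mismatch. A small sketch of the packing (the helper below is hypothetical; cv2.VideoWriter_fourcc performs the same computation):

```python
def fourcc(c1, c2, c3, c4):
    # pack four ASCII characters into one little-endian 32-bit tag,
    # the same layout cv2.VideoWriter_fourcc produces
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

print(hex(fourcc('M', 'J', 'P', 'G')))  # 0x47504a4d
print(hex(fourcc('m', 'j', 'p', 'g')))  # 0x67706a6d -- the tag in the error
```

The real fix is the container choice described above (e.g. an .avi file), not the spelling of the codec tag.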
Saving OpenCV output in Motion JPEG format. Getting a "'MJPG' is not supported with codec id 7" error
I'd like to save video camera output in motion JPEG (MJPG) format. The below code, import cv2 import numpy as np cap = cv2.VideoCapture(0) if (cap.isOpened() == False): print("Unable to read camera feed") frame_width = int(cap.get(3)) frame_height = int(cap.get(4)) frame_per_sec = int('10') out = cv2.VideoWriter('output.mjpeg',cv2.VideoWriter_fourcc('M','J','P','G'), (frame_per_sec), (frame_width,frame_height)) while(True): ret, frame = cap.read() if ret == True: # Write the frame into the file 'output.mjpeg' out.write(frame) # Display the resulting frame cv2.imshow('frame',frame) # Press Q on keyboard to stop recording if cv2.waitKey(1) & 0xFF == ord('q'): break else: break cap.release() out.release() cv2.destroyAllWindows() While it will run, I am getting the following error(s), [ WARN:0] OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1 OpenCV: FFMPEG: tag 0x67706a6d/'mjpg' is not supported with codec id 7 and format 'mjpeg / raw MJPEG video' What can I do to resolve these? I've tried changing the case, ('M','J','P','G' to 'm','j','p','g') with no success. Appreciate any suggestions regarding resolving the above issue, as well as the GStreamer issue. Thanks in advance.
[ ".mjpeg is not a valid suffix for any known container format.\nI'm sure you didn't intend to write a raw MJPG stream without a container. That is very very rarely useful at all and requires expert knowledge.\nYou have two options:\n\nuse MJPG in a .avi container, because that's built into OpenCV and doesn't even re...
[ 2 ]
[]
[]
[ "ffmpeg", "mjpeg", "opencv", "python", "video" ]
stackoverflow_0074553092_ffmpeg_mjpeg_opencv_python_video.txt
Q: What causes the error "_pickle.UnpicklingError: invalid load key, ' '."? I'm trying to store 5000 data elements on an array. These 5000 elements are stored on an existing file (therefore it's not empty). But I'm getting an error. IN: def array(): name = 'puntos.df4' m = open(name, 'rb') v = []*5000 m.seek(-5000, io.SEEK_END) fp = m.tell() sz = os.path.getsize(name) while fp < sz: pt = pickle.load(m) v.append(pt) m.close() return v OUT: line 23, in array pt = pickle.load(m) _pickle.UnpicklingError: invalid load key, ''. A: pickling is recursive, not sequential. Thus, to pickle a list, pickle will start to pickle the containing list, then pickle the first element… diving into the first element and pickling dependencies and sub-elements until the first element is serialized. Then it moves on to the next element of the list, and so on, until it finally finishes the list and finishes serializing the enclosing list. In short, it's hard to treat a recursive pickle as sequential, except for some special cases. It's better to use a smarter pattern on your dump, if you want to load in a special way. The most common pattern is to pickle everything with a single dump to a file -- but then you have to load everything at once with a single load. However, if you open a file handle and do multiple dump calls (e.g. one for each element of the list, or a tuple of selected elements), then your load will mirror that… you open the file handle and do multiple load calls until you have all the list elements and can reconstruct the list. It's still not easy to selectively load only certain list elements, however. To do that, you'd probably have to store your list elements as a dict (with the index of the element or chunk as the key) using a package like klepto, which can break up a pickled dict into several files transparently, and enables easy loading of specific elements. Saving and loading multiple objects in pickle file?
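The "multiple dump calls mirrored by multiple load calls" pattern from this answer can be sketched with an in-memory buffer (stdlib only):

```python
import io
import pickle

buf = io.BytesIO()
for item in [1, 'two', [3, 4]]:
    pickle.dump(item, buf)  # one self-contained pickle stream per element

buf.seek(0)
loaded = []
while True:
    try:
        loaded.append(pickle.load(buf))  # load until the stream is exhausted
    except EOFError:
        break
print(loaded)  # [1, 'two', [3, 4]]
```

Each dump writes a complete pickle stream, so load stops cleanly at each boundary; seeking to an arbitrary byte offset (as in the question's m.seek(-5000, ...)) lands mid-stream and produces exactly the "invalid load key" error.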
A: This may not be relevant to your specific issue, but I had a similar problem when the pickle archive had been created using gzip. For example if a compressed pickle archive is made like this, import gzip, pickle with gzip.open('test.pklz', 'wb') as ofp: pickle.dump([1,2,3], ofp) Trying to open it without gzip throws the error with open('test.pklz', 'rb') as ifp: print(pickle.load(ifp)) Traceback (most recent call last): File "<stdin>", line 2, in <module> _pickle.UnpicklingError: invalid load key, ''. But, if the pickle file is opened using gzip all is harmonious with gzip.open('test.pklz', 'rb') as ifp: print(pickle.load(ifp)) [1, 2, 3] A: If you transferred these files through disk or other means, it is likely they were not saved properly. A: I solved my issue by: Remove the cloned project Install git lfs: sudo apt-get install git-lfs Set up git lfs for your user account: git lfs install Clone the project again. A: I am not completely sure what you're trying to achieve by seeking to a specific offset and attempting to load individual values manually, the typical usage of the pickle module is: # save data to a file with open('myfile.pickle','wb') as fout: pickle.dump([1,2,3],fout) # read data from a file with open('myfile.pickle') as fin: print pickle.load(fin) # output >> [1, 2, 3] If you dumped a list, you'll load a list, there's no need to load each item individually. You're saying that you got an error before you were seeking to the -5000 offset, maybe the file you're trying to read is corrupted. If you have access to the original data, I suggest you try saving it to a new file and reading it as in the example. A: I received a similar error while loading a pickled sklearn model. The problem was that the pickle was created via sklearn.externals.joblib and I was trying to load it via the standard pickle library. Using joblib has solved my problem. A: I had a similar error but with different context when I uploaded a *.p file to Google Drive.
I tried to use it later in a Google Colab session, and got this error: 1 with open("/tmp/train.p", mode='rb') as training_data: ----> 2 train = pickle.load(training_data) UnpicklingError: invalid load key, '<'. I solved it by compressing the file, uploading it and then unzipping it in the session. It looks like the pickle file is not saved correctly when you upload/download it so it gets corrupted. A: I just encountered that issue which was initiated by the bad pickle file (not fully copied). My solution: Check the pickle file status (corrupted or not). A: In my case, I ran into this issue due to multiple processes trying to read from the same pickled file. The first of these actually creates a pickle (write operation) and some quick threads start reading from it too soon. Just by retrying the read when catching these 2 errors EOFError, UnpicklingError I don't see these errors anymore.
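The gzip case described above is easy to reproduce: the first two bytes of a gzip stream are the magic number 0x1f 0x8b, which pickle rejects as an invalid load key. A short stdlib sketch:

```python
import gzip
import io
import pickle

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
    pickle.dump([1, 2, 3], f)

raw = buf.getvalue()
print(raw[:2])  # b'\x1f\x8b' -- gzip magic bytes, not a pickle opcode
# feeding raw straight to pickle.loads(raw) reproduces the question's
# error: UnpicklingError: invalid load key, '\x1f'.

# reading back through gzip restores the original object
with gzip.GzipFile(fileobj=io.BytesIO(raw), mode='rb') as f:
    print(pickle.load(f))  # [1, 2, 3]
```

The same mismatch explains several of the reports above: the reader must use the exact wrapping (compression, joblib, complete copy) that the writer used.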
What causes the error "_pickle.UnpicklingError: invalid load key, ' '."?
I'm trying to store 5000 data elements on an array. This 5000 elements are stored on an existinng file (therefore it's not empty). But I'm getting an error. IN: def array(): name = 'puntos.df4' m = open(name, 'rb') v = []*5000 m.seek(-5000, io.SEEK_END) fp = m.tell() sz = os.path.getsize(name) while fp < sz: pt = pickle.load(m) v.append(pt) m.close() return v OUT: line 23, in array pt = pickle.load(m) _pickle.UnpicklingError: invalid load key, ''.
[ "pickling is recursive, not sequential. Thus, to pickle a list, pickle will start to pickle the containing list, then pickle the first element… diving into the first element and pickling dependencies and sub-elements until the first element is serialized. Then moves on to the next element of the list, and so on, u...
[ 27, 22, 12, 12, 3, 3, 1, 0, 0 ]
[ "\nClose the opened file\nfilepath = 'model_v1.pkl' with open(filepath, 'rb') as f: p = cPickle.Unpickler(f) model = p.load() f.close()\n\nIf step 1 doesn't work; restart the session\n\n\n", "Pickling error - _pickle.UnpicklingError: invalid load key, '<'. \nThis kind of error comes when Weights are complete or s...
[ -1, -1 ]
[ "pickle", "python" ]
stackoverflow_0033049688_pickle_python.txt
Q: How can I make command line argument values get assigned to selection of 2 variables in python? I am using the argparse package here. There are 4 possible command line arguments in this code. I need to choose any combination of only 2 of them, for example "python script.py -arg1 int1 int2 int3 -arg4 int1 int2 int3" and have those int values assigned to variables in for loops(see below). How can I make it so that it doesn't matter which of the 4 command line arguments are input, and their int values get assigned to one of the two for loops? It doesn't matter which for loop they get into, as long as all combinations are possible. Does this even make sense? Sorry if it doesn't import numpy as np import argparse parser = argparse.ArgumentParser(description = 'Test') parser.add_argument('-arg1', nargs =3, required = False, type = int) parser.add_argument('-arg2', nargs = 3, required = False, type = int) parser.add_argument('-arg3', nargs = 3, required = False, type = int) parser.add_argument('-arg4', nargs = 3, required = False, type = int) args = parser.parse_args() if arg1: args.arg1[0] = #start1 or start2 args.arg1[1] = #stop1 or stop2 args.arg1[2] = #num_samples1 or numsamples2 if arg2: args.arg2[0] = #start1 or start2 args.arg2[1] = #stop1 or stop2 args.arg2[2] = #num_samples1 or numsamples2 if arg3: args.arg3[0] = #start1 or start2 args.arg3[1] = #stop1 or stop2 args.arg3[2] = #num_samples1 or numsamples2 if arg4: args.arg4[0] = #start1 or start2 args.arg4[1] = #stop1 or stop2 args.arg4[2] = #num_samples1 or numsamples2 for a in np.linspace(start1, stop1, num_samples1): for b in np.linspace(start2,stop2,num_samples2): #do something with these values A: First get your two args by iterating over the four possibilities and selecting those that aren't None: two_args = [a for a in (args.arg1, args.arg2, args.arg3, args.arg4) if a] if len(two_args) != 2: print("Exactly two of arg1, arg2, arg3, and/or arg4 must be provided") exit() Then you can get your six values 
like this: (start1, stop1, num_samples1), (start2, stop2, num_samples2) = two_args
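Putting the answer together end to end (parser, the two-of-four filter, and the unpacking) can be exercised without touching the real command line by passing an argument list to parse_args; a sketch using the names from the question:

```python
import argparse

parser = argparse.ArgumentParser()
for name in ('-arg1', '-arg2', '-arg3', '-arg4'):
    parser.add_argument(name, nargs=3, type=int)

# simulates: python script.py -arg1 0 10 5 -arg4 2 8 3
args = parser.parse_args(['-arg1', '0', '10', '5', '-arg4', '2', '8', '3'])

# keep only the flags the user actually supplied
two_args = [a for a in (args.arg1, args.arg2, args.arg3, args.arg4) if a]
(start1, stop1, num_samples1), (start2, stop2, num_samples2) = two_args
print(start1, stop1, num_samples1)  # 0 10 5
print(start2, stop2, num_samples2)  # 2 8 3
```

Note that option names starting with '-' are optional by default, so the required=False in the question is redundant but harmless.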
How can I make command line argument values get assigned to selection of 2 variables in python?
I am using the argparse package here. There are 4 possible command line arguments in this code. I need to choose any combination of only 2 of them, for example "python script.py -arg1 int1 int2 int3 -arg4 int1 int2 int3" and have those int values assigned to variables in for loops(see below). How can I make it so that it doesn't matter which of the 4 command line arguments are input, and their int values get assigned to one of the two for loops? It doesn't matter which for loop they get into, as long as all combinations are possible. Does this even make sense? Sorry if it doesn't import numpy as np import argparse parser = argparse.ArgumentParser(description = 'Test') parser.add_argument('-arg1', nargs =3, required = False, type = int) parser.add_argument('-arg2', nargs = 3, required = False, type = int) parser.add_argument('-arg3', nargs = 3, required = False, type = int) parser.add_argument('-arg4', nargs = 3, required = False, type = int) args = parser.parse_args() if arg1: args.arg1[0] = #start1 or start2 args.arg1[1] = #stop1 or stop2 args.arg1[2] = #num_samples1 or numsamples2 if arg2: args.arg2[0] = #start1 or start2 args.arg2[1] = #stop1 or stop2 args.arg2[2] = #num_samples1 or numsamples2 if arg3: args.arg3[0] = #start1 or start2 args.arg3[1] = #stop1 or stop2 args.arg3[2] = #num_samples1 or numsamples2 if arg4: args.arg4[0] = #start1 or start2 args.arg4[1] = #stop1 or stop2 args.arg4[2] = #num_samples1 or numsamples2 for a in np.linspace(start1, stop1, num_samples1): for b in np.linspace(start2,stop2,num_samples2): #do something with these values
[ "First get your two args by iterating over the four possibilities and selecting those that aren't None:\ntwo_args = [a for a in (args.arg1, args.arg2, args.arg3, args.arg4) if a]\nif len(two_args) != 2:\n print(\"Exactly two of arg1, arg2, arg3, and/or arg4 must be provided\")\n exit()\n\nThen you can get you...
[ 0 ]
[]
[]
[ "argparse", "command_line_arguments", "python" ]
stackoverflow_0074553495_argparse_command_line_arguments_python.txt
Q: cv2.read or Image from PIL changes PNG image color Seems like cv2.imread() or Image.fromarray() is changing original image color to a bluish color. What i am trying to accomplish is to crop the original png image and keep the same colors but the color changes. Not sure how to revert to original color. Please help! ty ` # start cropping logic img = cv2.imread("image.png") # import cv2 crop = img[1280:, 2250:2730] cropped_rendered_image = Image.fromarray(crop) #from PIL import Image cropped_rendered_image.save("newImageName.png") ` tried this and other fixes but no luck yet https://stackoverflow.com/a/50720612/13206968 A: There is no "changing" going on. It's simply a matter of channel order. OpenCV natively uses BGR order (in numpy arrays) PIL natively uses RGB order Numpy doesn't care When you call cv.imread(), you're getting BGR data in a numpy array. When you repackage that into a PIL Image, you are giving it BGR order data, but you're telling it that it's RGB, so PIL takes your word for it... and misinterprets the data. You can try telling PIL that it's BGR;24 data. See https://pillow.readthedocs.io/en/stable/handbook/concepts.html Or you can use cv.cvtColor() with the cv.COLOR_BGR2RGB flag (because you have BGR and you want RGB). For the opposite direction, there is the cv.COLOR_RGB2BGR flag.
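The channel-order mixup is easy to see with a tiny synthetic image; reversing the last axis in NumPy performs the same swap as cv.cvtColor(img, cv.COLOR_BGR2RGB). A sketch using NumPy only, no OpenCV needed:

```python
import numpy as np

# a 1x2 "image" in BGR order: one pure-blue pixel, one pure-red pixel
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# reversing the channel axis turns BGR into RGB (and vice versa)
rgb = bgr[..., ::-1]
print(rgb[0, 0])  # the blue pixel: blue now sits in the last channel
print(rgb[0, 1])  # the red pixel: red now sits first, as RGB expects
```

Passing the swapped array to Image.fromarray then yields the intended colors; applying [..., ::-1] twice is a no-op, which is why the same trick works in both directions.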
cv2.read or Image from PIL changes PNG image color
Seems like cv2.imread() or Image.fromarray() is changing original image color to a bluish color. What i am trying to accomplish is to crop the original png image and keep the same colors but the color changes. Not sure how to revert to original color. Please help! ty ` # start cropping logic img = cv2.imread("image.png") # import cv2 crop = img[1280:, 2250:2730] cropped_rendered_image = Image.fromarray(crop) #from PIL import Image cropped_rendered_image.save("newImageName.png") ` tried this and other fixes but no luck yet https://stackoverflow.com/a/50720612/13206968
[ "There is no \"changing\" going on. It's simply a matter of channel order.\n\nOpenCV natively uses BGR order (in numpy arrays)\nPIL natively uses RGB order\nNumpy doesn't care\n\nWhen you call cv.imread(), you're getting BGR data in a numpy array.\nWhen you repackage that into a PIL Image, you are giving it BGR ord...
[ 2 ]
[]
[]
[ "cv2", "opencv", "python", "python_imaging_library" ]
stackoverflow_0074552758_cv2_opencv_python_python_imaging_library.txt
Q: sql select column from nested associated tables I have 4 tables in PostgreSQL: Projects Organizations Organization_membership User CREATE TABLE IF NOT EXISTS organization( id uuid PRIMARY KEY DEFAULT uuid_generate_v4 (), CONSTRAINT plan_id_fk FOREIGN KEY (plan_type) REFERENCES plan(plan_type) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ); CREATE TABLE IF NOT EXISTS users( id varchar(100) PRIMARY KEY, email VARCHAR(50) NOT NULL UNIQUE, ); CREATE TABLE IF NOT EXISTS organization_membership( organization_id uuid not null, user_id varchar(100) not null, CONSTRAINT organization_id_fk FOREIGN KEY (organization_id) REFERENCES organization(id) MATCH SIMPLE ON UPDATE CASCADE ON DELETE NO ACTION, CONSTRAINT users_uuid_fk FOREIGN KEY (user_id) REFERENCES users(id) MATCH SIMPLE ON UPDATE CASCADE ON DELETE NO ACTION, PRIMARY KEY (organization_id, user_id) ); CREATE TABLE IF NOT EXISTS project( id uuid PRIMARY KEY DEFAULT uuid_generate_v4 (), owner uuid NOT NULL, project_name VARCHAR(100), CONSTRAINT project_owner_fk FOREIGN KEY (owner) REFERENCES organization(id) MATCH SIMPLE ON UPDATE CASCADE ON DELETE NO ACTION, ); I am trying to get projects which belongs to user 1, so I am trying to get all projects for user 1 from all organizations of this user I just need raw sql code I tried this: await database.fetch_all( query="SELECT organization_membership.*, organization.id FROM organization JOIN organization_membership ON organization.id = organization_membership.organization_id WHERE organization_membership.user_id = :id", values={'id': acting_user.id}, ) but this returns only organizations for this user also I have tried this: await database.fetch_all( query="SELECT * from project JOIN organization ON project.owner = organization.id JOIN organization_membership ON organization.id = organization_membership.organization_id WHERE organization_membership.user_id = :id", values={'id': acting_user.id}, ) this returns empty data A: select p.* from users u join 
organization_membership om on u.id = om.user_id join organization o on om.organization_id = o.id join project p on o.id = p.owner where u.id = '1'; Edit: Updated as per question A: It depends a lot on how you have your relationship scheme. But what about nesting SQL statements? SELECT * FROM projects WHERE organization_id in ( SELECT organization_id FROM organizations WHERE user_id = 1);
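The join chain in the first answer can be checked against a throwaway database; a sketch using sqlite3 from the standard library with a simplified schema (text ids instead of uuids, constraints omitted):

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.executescript("""
    CREATE TABLE users(id TEXT PRIMARY KEY);
    CREATE TABLE organization(id TEXT PRIMARY KEY);
    CREATE TABLE organization_membership(organization_id TEXT, user_id TEXT);
    CREATE TABLE project(id TEXT PRIMARY KEY, owner TEXT, project_name TEXT);
    INSERT INTO users VALUES ('u1'), ('u2');
    INSERT INTO organization VALUES ('o1'), ('o2');
    INSERT INTO organization_membership VALUES ('o1', 'u1'), ('o2', 'u2');
    INSERT INTO project VALUES ('p1', 'o1', 'alpha'), ('p2', 'o2', 'beta');
""")

# all projects owned by any organization the user belongs to
rows = cur.execute("""
    SELECT p.project_name
    FROM organization_membership om
    JOIN project p ON p.owner = om.organization_id
    WHERE om.user_id = ?
""", ('u1',)).fetchall()
print(rows)  # [('alpha',)]
```

Joining through users is only needed when you filter on a user attribute other than the id; since organization_membership already stores user_id, the membership and project tables are enough here.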
sql select column from nested associated tables
I have 4 tables in PostgreSQL: Projects Organizations Organization_membership User CREATE TABLE IF NOT EXISTS organization( id uuid PRIMARY KEY DEFAULT uuid_generate_v4 (), CONSTRAINT plan_id_fk FOREIGN KEY (plan_type) REFERENCES plan(plan_type) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ); CREATE TABLE IF NOT EXISTS users( id varchar(100) PRIMARY KEY, email VARCHAR(50) NOT NULL UNIQUE, ); CREATE TABLE IF NOT EXISTS organization_membership( organization_id uuid not null, user_id varchar(100) not null, CONSTRAINT organization_id_fk FOREIGN KEY (organization_id) REFERENCES organization(id) MATCH SIMPLE ON UPDATE CASCADE ON DELETE NO ACTION, CONSTRAINT users_uuid_fk FOREIGN KEY (user_id) REFERENCES users(id) MATCH SIMPLE ON UPDATE CASCADE ON DELETE NO ACTION, PRIMARY KEY (organization_id, user_id) ); CREATE TABLE IF NOT EXISTS project( id uuid PRIMARY KEY DEFAULT uuid_generate_v4 (), owner uuid NOT NULL, project_name VARCHAR(100), CONSTRAINT project_owner_fk FOREIGN KEY (owner) REFERENCES organization(id) MATCH SIMPLE ON UPDATE CASCADE ON DELETE NO ACTION, ); I am trying to get projects which belongs to user 1, so I am trying to get all projects for user 1 from all organizations of this user I just need raw sql code I tried this: await database.fetch_all( query="SELECT organization_membership.*, organization.id FROM organization JOIN organization_membership ON organization.id = organization_membership.organization_id WHERE organization_membership.user_id = :id", values={'id': acting_user.id}, ) but this returns only organizations for this user also I have tried this: await database.fetch_all( query="SELECT * from project JOIN organization ON project.owner = organization.id JOIN organization_membership ON organization.id = organization_membership.organization_id WHERE organization_membership.user_id = :id", values={'id': acting_user.id}, ) this returns empty data
[ "select p.* from users u\njoin organization_membership om on u.id = om.user_id\njoin organization o on om.organization_id = o.id\njoin project p on o.id = p.owner\nwhere u.id = '1';\n\nEdit: Updated as per question\n", "It depends a lot on how you have your relationship scheme. But what about nesting SQL statemen...
[ 0, 0 ]
[]
[]
[ "postgresql", "python", "sql" ]
stackoverflow_0074553488_postgresql_python_sql.txt
Q: Bigquery parquet file treats list<item: string> as list<item: null> when empty array is passed I have a large nested terabyte sized jsonl(s) which I am converting to parquet files and writing to a hive-partitioned google cloud storage bucket. The issue is as follows. One of the nested fields is a list of strings; ideally the schema I expect for this field is billing_code_modifier: list<item: string>, but there is a rare case where sometimes the length of the list is 0 for all records, in which case pandas writes the billing_code_modifier: list<item: null>. This causes an issue since the third party tool [bigquery] which is being used to read these parquet files fails to read these due to an inconsistent schema, expecting list<item: string> not list<item: null> [it defaults empty arrays to int32, blame google not me]. How does one get around this? Is there a way to specify the schema while writing parquet files? Since I am dealing with a bucket I cannot write an empty parquet and then add the data to the file in 2 separate write operations, as GCP does not allow you to modify files, only overwrite them. A: For Pandas you can specify an Arrow schema as a kwarg which should provide the correct schema. See Pyarrow apply schema when using pandas to_parquet() for details.
Bigquery parquet file treats list<item: string> as list<item: null> when empty array is passed
I have a large nested terabyte sized jsonl(s) which I am converting to parquet files and writing to a have partitioned google cloud storage bucket. The issue is as follows. One of the nested fields is a list of string ideally the schema for this field I expect is billing_code_modifier: list<item: string>, but there is a rare case the sometimes the length of the list is 0 for all records in which case pandas writes the billing_code_modifier: list<item: null> This causes an issue since the third party tool [bigquery] which is being used to read these parquet files fail to read these due to inconsistent schema expecting list not list [it defaults empty arrays to int32 , blame google not me] How does one get around this. Is there a way to specify the schema while writing parquet files. Since I am dealing with a bucket I cannot write an empty parquet and then add the data to the file in 2 separate write operations as GCP does not allow you to modify files only overwrite
[ "For Pandas you can specify an Arrow schema as a kwarg which should provide the correct schema. See Pyarrow apply schema when using pandas to_parquet() for details.\n" ]
[ 1 ]
[]
[]
[ "google_bigquery", "parquet", "pyarrow", "python" ]
stackoverflow_0074533897_google_bigquery_parquet_pyarrow_python.txt
Q: Can't get my python code to find jpg file even though it is located in the correct folder Okay, so I cannot understand why this does not work. Every time I run the code it searches for a different picture but it never finds it. When I go to the specified location and search for the image I find it immediately. But I somehow still get the same error message. id='../content/drive/MyDrive/dog-breed-identification/train/000bec180eb18c7604dcecc8fe0dba07.jpg' dogs = pd.read_csv('../content/drive/MyDrive/dog-breed-identification/labels.csv') dogs = dogs.sample(20) dogs['file'] = dogs.id.map(lambda id: f'../content/drive/MyDrive/dog-breed-identification/train/{id}.jpg') dogs['image'] = dogs.file.map(lambda f: get_thumbnail(f)) dogs.head() I am trying to get this code to work: https://www.kaggle.com/code/samayshikhar/dog-breed-identification-2-0 Does anyone know what the issue can be? (I run the code in google colab) I have tried changing {id} to a specific name, and that works. The problem is that the model will not be able to predict dog breeds. Error code highlighted in the attached screenshot.
from google.colab import drive drive.mount('/content/gdrive') # this creates a symbolic link so that now the path /content/gdrive/My\ Drive/ is equal to /mydrive !ln -s /content/gdrive/My\ Drive/ /mydrive # list the contents of /mydrive !ls /mydrive #Navigate to /mydrive/dog-breed-identification %cd /mydrive/dog-breed-identification Use your code id='/mydrive/dog-breed-identification/train/000bec180eb18c7604dcecc8fe0dba07.jpg' dogs = pd.read_csv('/mydrive/dog-breed-identification/labels.csv') dogs = dogs.sample(20) dogs['file'] = dogs.id.map(lambda id: f'/mydrive/dog-breed-identification/train/{id}.jpg') dogs['image'] = dogs.file.map(lambda f: get_thumbnail(f)) dogs.head()
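Whichever path prefix you settle on, it pays to validate it once before mapping it over thousands of rows: a missing mount or a stray ../ then fails immediately with a readable message instead of deep inside get_thumbnail. A small sketch (the helper name is made up for illustration):

```python
import os
import tempfile

def checked_path(path):
    # fail fast with the offending path instead of a cryptic error later
    if not os.path.exists(path):
        raise FileNotFoundError(f'missing file: {path}')
    return path

# demo with a real temporary file standing in for one training image
with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as tmp:
    img_path = tmp.name
print(checked_path(img_path) == img_path)  # True
os.remove(img_path)
```

In the notebook this would wrap the lambda, e.g. dogs.id.map(lambda i: checked_path(f'/mydrive/dog-breed-identification/train/{i}.jpg')).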
Can't get my python code to find jpg file even though it is located in the correct folder
Okay, so I cannot understand why this does not work. Every time I run the code it searches for a different picture but it never finds it. When I go to the specified location and search for the image I find it immediately. But i somehow still get the same error message.[enter image description here] id='../content/drive/MyDrive/dog-breed-identification/train/000bec180eb18c7604dcecc8fe0dba07.jpg' dogs = pd.read_csv('../content/drive/MyDrive/dog-breed-identification/labels.csv') dogs = dogs.sample(20) dogs['file'] = dogs.id.map(lambda id: f'../content/drive/MyDrive/dog-breed-identification/train/{id}.jpg') dogs['image'] = dogs.file.map(lambda f: get_thumbnail(f)) dogs.head() I am trying to get this code to work: https://www.kaggle.com/code/samayshikhar/dog-breed-identification-2-0 Does anyone know what the issue can be? (I run the code in google colab) I have tried changing {id} to a specific name, and that works. The problem is that the model will ot be able to predict dog breeds. Error code (highlighted in image)[screenshot] 1
[ "You can follow these steps to get it work:\n\nMount drive and create symbolic link to specific folder you will be using, which is dog-breed-identification in your case\n\n#mount drive\n%cd ..\nfrom google.colab import drive\ndrive.mount('/content/gdrive')\n\n# this creates a symbolic link so that now the path /con...
[ 0 ]
[]
[]
[ "google_colaboratory", "python" ]
stackoverflow_0074553331_google_colaboratory_python.txt
Q: Reuse colors in plot I have a project in Jupyter notebooks where I am comparing two dataframes. Both are indexed by year, and both have the same columns representing the proportion of followers of a religion in the population. The two dataframes represent two different populations. I want to be able to display both sets of data in the same line plot, with the same color used for each religion, but with the lines for one population solid, while the lines for the other population are dashed. I thought I'd be able to do something like this: ax1.plot(area1_df, color=[col1,col2,col3,col4]) ax1.plot(area2_df, color=[col1,col2,col3,col4], ls=':',alpha=0.5, linewidth=3.0) But that doesn't work. At the moment I have this: import matplotlib.pyplot as plt fig, ax1 = plt.subplots(1,1, sharex = True, sharey=True, figsize=(15,5)) plt.style.use('seaborn') ax1.plot(area1_df); ax1.plot(area2_df, ls=':',alpha=0.5, linewidth=3.0); ax1.legend(area1_df.keys(), loc=2) ax1.set_ylabel('% of population') plt.tight_layout() Maybe there's another way of doing this entirely(?) Bonus points for any ideas as to how best to create a unified legend, with entries for the columns from both dataframes. A: To give each line a particular color, you could capture the output of ax1.plot and iterate through that list of lines. Each line can be given its color. And also a label for the legend. The following code first generates some toy data and then iterates through the lines of both plots. A legend with two columns is created using the assigned labels. 
import numpy as np import pandas as pd import matplotlib.pylab as plt years = np.arange(1990, 2021, 1) N = years.size area1_df = pd.DataFrame({f'group {i}': 10+i+np.random.uniform(-1, 1, N).cumsum() for i in range(1, 5)}, index=years) area2_df = pd.DataFrame({f'group {i}': 10+i+np.random.uniform(-1, 1, N).cumsum() for i in range(1, 5)}, index=years) fig, ax1 = plt.subplots(figsize=(15,5)) plot1 = ax1.plot(area1_df) plot2 = ax1.plot(area2_df, ls=':',alpha=0.5, linewidth=3.0) for l1, l2, label1, label2, color in zip(plot1, plot2, area1_df.columns, area2_df.columns, ['crimson', 'limegreen', 'dodgerblue', 'turquoise']): l1.set_color(color) l1.set_label(label1) l2.set_color(color) l2.set_label(label2) ax1.legend(ncol=2, title='area1 / area2') plt.show() Alternatively, you could plot via pandas, which does allow assigning a color for each column: fig, ax1 = plt.subplots(figsize=(15, 5)) colors = plt.cm.Dark2.colors area1_df.plot(color=colors, ax=ax1) area2_df.plot(color=colors, ls=':', alpha=0.5, linewidth=3.0, ax=ax1) ax1.legend(ncol=2, title='area1 / area2') A: The principle of color assignment in pyplot is based on a cycler, a list of colors, which is reset after the last one has been used. Hence it's possible to reuse colors by selecting the proper number of colors in the cycler. In the code below, a cycler is created with colors from the default cycler. There are two lists of curves to plot. The number of colors is made equal to the number of curves in the first list, curves from the second list are plotted after the cycler has reset itself. 
from numpy import linspace, pi, cos, random import matplotlib.pyplot as plt # Time t = linspace(-0.5*pi, 0.5*pi, 100) # Curves a_p = (1.2, 0), (1, -3*pi/2), (1.4, -pi/4) series_1 = [a * cos(t+p) for a, p in a_p] series_2 = [c + 0.5 * (random.rand(len(c)) - 0.5) for c in series_1] # Create a color cycler with 3 colors colors = plt.rcParams['axes.prop_cycle'][0:len(series_1)] cycler_2 = plt.cycler(color=colors) # Associate cycler to axis fig, ax = plt.subplots() ax.set_prop_cycle(cycler_2) # Plot for c in series_1: ax.plot(t, c, ls='--', lw=3) for c in series_2: ax.plot(t, c, ls=':')
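The wrap-around behaviour the second answer relies on can be verified without rendering anything: once the cycler's colors are exhausted it starts over, so the i-th solid line and the i-th dotted line come out in the same color. A sketch (using the Agg backend so no display is needed):

```python
import matplotlib
matplotlib.use('Agg')  # draw off-screen; no display required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_prop_cycle(plt.cycler(color=['crimson', 'limegreen', 'dodgerblue']))

solid = [ax.plot([0, 1], [i, i])[0] for i in range(3)]
dotted = [ax.plot([0, 1], [i, i + 0.5], ls=':')[0] for i in range(3)]

# the 3-color cycler wraps after three lines, so the pairs share colors
print([s.get_color() == d.get_color() for s, d in zip(solid, dotted)])
# [True, True, True]
```

This is why matching the cycler length to the number of curves in the first series is the key step in the second answer.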
Reuse colors in plot
I have a project in Jupyter notebooks where I am comparing two dataframes. Both are indexed by year, and both have the same columns representing the proportion of followers of a religion in the population. The two dataframes represent two different populations. I want to be able to display both sets of data in the same line plot, with the same color used for each religion, but with the lines for one population solid, while the lines for the other population are dashed. I thought I'd be able to do something like this: ax1.plot(area1_df, color=[col1,col2,col3,col4]) ax1.plot(area2_df, color=[col1,col2,col3,col4], ls=':',alpha=0.5, linewidth=3.0) But that doesn't work. At the moment I have this: import matplotlib.pyplot as plt fig, ax1 = plt.subplots(1,1, sharex = True, sharey=True, figsize=(15,5)) plt.style.use('seaborn') ax1.plot(area1_df); ax1.plot(area2_df, ls=':',alpha=0.5, linewidth=3.0); ax1.legend(area1_df.keys(), loc=2) ax1.set_ylabel('% of population') plt.tight_layout() Maybe there's another way of doing this entirely(?) Bonus points for any ideas as to how best to create a unified legend, with entries for the columns from both dataframes.
[ "To give each line a particular color, you could capture the output of ax1.plot and iterate through that list of lines. Each line can be given its color. And also a label for the legend. \nThe following code first generates some toy data and then iterates through the lines of both plots. A legend with two columns i...
[ 1, 0 ]
[]
[]
[ "jupyter_notebook", "matplotlib", "python" ]
stackoverflow_0061841011_jupyter_notebook_matplotlib_python.txt
Q: How do I set a default value for flag in argparse if the flag is given alone import argparse parser = argparse.ArgumentParser() parser.add_argument('-c', '--cookies', nargs='?', default=5, type=int, ) args = parser.parse_args() if args.cookies: print('cookies flag is set: ' + args.cookies) else: print('cookies flag not set: ' + str(args.cookies)) I want it to work so that if the user gives -c then we know they want cookies, but we don't know how many cookies they want so we give them 5 by default (-c == 5 :). If the user types -c 25 then we know they want 25 cookies. If the user does not give a -c flag then we know they do not want cookies and cookies flag should not be set. The way it works as above is that -c == 5 only when -c is not set by the user. But we do not want to give them cookies if they do not ask for it! If they ask for a specific amount of cookies (ex: -c 10), then the code above works fine. I fixed this problem by using a short custom action that checks if the flag is set and if no value is passed in I give it the default value. This seems a bit convoluted and there must be an easier way. I've searched the argparse docs (looked at nargs, default, and const) but couldn't figure out a solution. Any ideas? Thank you for your time. A: You're looking for the const parameter, which the docs don't do a very good job of explaining. default always sets the value, even if the flag is not provided, unless it is overridden by a user input. const only sets the value if the flag is provided and no overriding value is provided. 
The nargs section has an example of how to use the const parameter: >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', nargs='?', const='c', default='d') >>> parser.add_argument('bar', nargs='?', default='d') >>> parser.parse_args(['XX', '--foo', 'YY']) Namespace(bar='XX', foo='YY') >>> parser.parse_args(['XX', '--foo']) Namespace(bar='XX', foo='c') Although the default keyword isn't necessary in your case, since you want the value to be None if the user does not provide the flag.
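The `const` vs. `default` behaviour described above can be checked directly on the question's cookies flag; a minimal sketch (the explicit argument lists passed to `parse_args` are for illustration only):

```python
import argparse

parser = argparse.ArgumentParser()
# const fires only when -c is given with no value; with no default set,
# args.cookies stays None when the flag is absent entirely.
parser.add_argument('-c', '--cookies', nargs='?', const=5, type=int)

print(parser.parse_args([]).cookies)            # None -> no cookies wanted
print(parser.parse_args(['-c']).cookies)        # 5    -> flag alone, use const
print(parser.parse_args(['-c', '25']).cookies)  # 25   -> explicit amount
```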
How do I set a default value for flag in argparse if the flag is given alone
import argparse parser = argparse.ArgumentParser() parser.add_argument('-c', '--cookies', nargs='?', default=5, type=int, ) args = parser.parse_args() if args.cookies: print('cookies flag is set: ' + args.cookies) else: print('cookies flag not set: ' + str(args.cookies)) I want it to work so that if the user gives -c then we know they want cookies, but we don't know how many cookies they want so we give them 5 by default (-c == 5 :). If the user types -c 25 then we know they want 25 cookies. If the user does not give a -c flag then we know they do not want cookies and cookies flag should not be set. The way it works as above is that -c == 5 only when -c is not set by the user. But we do not want to give them cookies if they do not ask for it! If they ask for a specific amount of cookies (ex: -c 10), then the code above works fine. I fixed this problem by using a short custom action that checks if the flag is set and if no value is passed in I give it the default value. This seems a bit convoluted and there must be an easier way. I've searched the argparse docs (looked at nargs, default, and const) but couldn't figure out a solution. Any ideas? Thank you for your time.
[ "You're looking for the const parameter, which the docs don't do a very good job of explaining.\ndefault always sets the value, even if the flag is not provided, unless it is overridden by a user input.\nconst only sets the value if the flag is provided and no overriding value is provided.\nThe nargs section has an...
[ 0 ]
[]
[]
[ "argparse", "python" ]
stackoverflow_0074517842_argparse_python.txt
Q: Autosuggestion In Flask Api I am creating a Flask API, for movie Recommendation i have dataset and i want autosuggestion functionality when i type any letter i will get movie related to that word. This my app.py file:- from flask import Flask, jsonify, request, render_template from flask_cors import CORS import pandas as pd item_similarity_df = pd.read_csv("movie_similarity.csv", index_col=0) app = Flask(__name__) CORS(app) @app.route("/") def hello_from_root(): return jsonify(message='Hello from root!') @app.route("/recms", methods = ["POST"]) def make_rec(): if request.method == "POST": data = request.json movie = data["movie_title"] #curl -X POST http://0.0.0.0:8080/recms -H 'Content-Type: application/json' -d '{"movie_title":"Heat (1995)"}' try: similar_score = item_similarity_df[movie] similar_movies = similar_score.sort_values(ascending=False)[1:50] api_recommendations = similar_movies.index.to_list() except: api_recommendations = ['Movie not found'] return render_template("index.html",api_recommendations = api_recommendations) if __name__ == "__main__": app.run(host='0.0.0.0', port=8080) This is my index.htlm:- <!DOCTYPE html> <html> <head> <title>AutoComplete</title> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.js"> </script> <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.js"> </script> <link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/themes/ui-lightness/jquery-ui.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Welcome to GFG</h1> <input type="text" id="tags"> <script> $( function() { var availableTags = [ {% for api_recommendations in api_recommendations %} "{{api_recommendations}}", {% endfor %} ]; $( "#tags" ).autocomplete({ source: availableTags }); } ); </script> </body> </html> Code is executing fine and i am getting;-{"message":"Hello from root!"} but when i am executing:- curl -X POST http://10.0.0.72:8080/recms -H 'Content-Type: application/json' -d 
'{"movie_title":"Heat (1995)"}' I get the recommended movies. But I want that, instead of typing the full title "Heat (1995)", I can type only "h" and get all the movie names containing the letter h.
query: request.term }, function(data) { response(data); } ); } }); }); </script> </body> </html>
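The filtering step inside the `/lookup` route can be exercised on its own; a small sketch with a hypothetical three-row frame (same case-insensitive substring match as the route):

```python
import re

import pandas as pd

# Hypothetical subset of the titles above.
df = pd.DataFrame({'movie_title': ['Heat (1995)', 'Rock, The (1996)', 'Casino (1995)']})
query = 'h'
# Case-insensitive substring match, then sort and flatten to a title list,
# exactly as the /lookup route does before jsonify.
matches = df[df['movie_title'].str.contains(query, flags=re.IGNORECASE)]
titles = matches.sort_values(by=['movie_title'])['movie_title'].tolist()
print(titles)  # ['Heat (1995)', 'Rock, The (1996)']
```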
Autosuggestion In Flask Api
I am creating a Flask API, for movie Recommendation i have dataset and i want autosuggestion functionality when i type any letter i will get movie related to that word. This my app.py file:- from flask import Flask, jsonify, request, render_template from flask_cors import CORS import pandas as pd item_similarity_df = pd.read_csv("movie_similarity.csv", index_col=0) app = Flask(__name__) CORS(app) @app.route("/") def hello_from_root(): return jsonify(message='Hello from root!') @app.route("/recms", methods = ["POST"]) def make_rec(): if request.method == "POST": data = request.json movie = data["movie_title"] #curl -X POST http://0.0.0.0:8080/recms -H 'Content-Type: application/json' -d '{"movie_title":"Heat (1995)"}' try: similar_score = item_similarity_df[movie] similar_movies = similar_score.sort_values(ascending=False)[1:50] api_recommendations = similar_movies.index.to_list() except: api_recommendations = ['Movie not found'] return render_template("index.html",api_recommendations = api_recommendations) if __name__ == "__main__": app.run(host='0.0.0.0', port=8080) This is my index.htlm:- <!DOCTYPE html> <html> <head> <title>AutoComplete</title> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.js"> </script> <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.js"> </script> <link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/themes/ui-lightness/jquery-ui.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Welcome to GFG</h1> <input type="text" id="tags"> <script> $( function() { var availableTags = [ {% for api_recommendations in api_recommendations %} "{{api_recommendations}}", {% endfor %} ]; $( "#tags" ).autocomplete({ source: availableTags }); } ); </script> </body> </html> Code is executing fine and i am getting;-{"message":"Hello from root!"} but when i am executing:- curl -X POST http://10.0.0.72:8080/recms -H 'Content-Type: application/json' -d '{"movie_title":"Heat (1995)"}' i am getting 
the recommended movies, but I want that, instead of typing the full title "Heat (1995)", typing only "h" gives all the movie names containing the letter h.
[ "The following example uses AJAX to search for entries in the DataFrame whose title column contains the substring that was sent.\nA GET request is sent to the server for each substring entered. This searches for rows whose title column contains the string, ignoring upper and lower case. Matching rows are then extra...
[ 1 ]
[]
[]
[ "ajax", "flash", "flask", "pandas", "python" ]
stackoverflow_0074546737_ajax_flash_flask_pandas_python.txt
Q: Create a network graph manually I am looking to create a program in Python that will allow me to draw a graph manually, that is to say, by placing the points (vertices) and clicking on two vertices in order to connect them and thus form an edge. Above all, I want this graph to be more than just a drawing, because I have to extract the adjacency and incidence matrices from it and also apply algorithms to it (DFS, BFS, Dijkstra). I tried and searched everywhere but I can't; I can only create vertices and edges using the functions of the NetworkX library (add_node and add_edge), which do not meet my needs because with these I can create a graph only programmatically. Here is the interface of the program I am making: I want that when I click on the "Vertex" option a new window opens and I can place the vertices, and then with the "Edge" option connect two vertices. Thank you for helping me. A: If you have a decent GUI framework, then this sort of thing is easily accomplished. As a first step, you need to implement the following Add node move mouse cursor to location click right mouse button select "Add Node" menu item from pop-up node will be displayed at mouse location as a black circle Select node move mouse cursor over node click left mouse button selected node will be drawn in red Add link select first node move mouse cursor over second node click right mouse button select "link" menu item from pop-up link will be drawn between nodes Move node move mouse cursor over node press and hold left mouse button drag mouse while holding down left button (the node and connected links will follow the mouse cursor) release button at required new location As an example of how this can be done, you may find it helpful to take a look at an implementation using C++ at https://github.com/JamesBremner/graphex2. 
This uses the WINDEX gui framework - most gui frameworks offer the same methods, so it should be a straightforward task to port this line by line to python.
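Independently of the GUI framework chosen, the editor needs a data model behind the drawing so the adjacency matrix can be extracted for BFS/DFS/Dijkstra. A hypothetical minimal sketch (the GUI's "Add Node" and "link" callbacks would call these methods):

```python
class EditableGraph:
    def __init__(self):
        self.nodes = []     # node index -> (x, y) screen position
        self.edges = set()  # frozensets of node-index pairs (undirected)

    def add_node(self, x, y):
        # Called from the "Add Node" menu item; returns the new node's index.
        self.nodes.append((x, y))
        return len(self.nodes) - 1

    def add_edge(self, a, b):
        # Called from the "link" menu item with the two selected nodes.
        self.edges.add(frozenset((a, b)))

    def adjacency_matrix(self):
        # Read the matrix off the stored edges at any time.
        n = len(self.nodes)
        m = [[0] * n for _ in range(n)]
        for e in self.edges:
            a, b = tuple(e)
            m[a][b] = m[b][a] = 1
        return m

g = EditableGraph()
a = g.add_node(0, 0)
b = g.add_node(1, 1)
g.add_edge(a, b)
print(g.adjacency_matrix())  # [[0, 1], [1, 0]]
```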
Create a network graph manually
I am looking to create a program in Python that will allow me to draw a graph manually, that is to say, by placing the points (vertices) and clicking on two vertices in order to connect them and thus form an edge. Above all, I want this graph to be more than just a drawing, because I have to extract the adjacency and incidence matrices from it and also apply algorithms to it (DFS, BFS, Dijkstra). I tried and searched everywhere but I can't; I can only create vertices and edges using the functions of the NetworkX library (add_node and add_edge), which do not meet my needs because with these I can create a graph only programmatically. Here is the interface of the program I am making: I want that when I click on the "Vertex" option a new window opens and I can place the vertices, and then with the "Edge" option connect two vertices. Thank you for helping me.
[ "If you have a decent GUI framework, then this sort of thing is easily accomplished.\nAs a first step, you need to implement the following\n\nAdd node\n\nmove mouse cursor to location\nclick right mouse button\nselect \"Add Node\" menu item from pop-up\nnode will be displayed at mouse location as a black circle\n\n...
[ 1 ]
[]
[]
[ "graph", "python", "user_interface" ]
stackoverflow_0074548869_graph_python_user_interface.txt
Q: How to return binary data from lambda function in AWS in Python? I cannot get python lambda to return binary data. The node-template for thumbnail images works fine but I cannot get a python lambda to work. Below is the relevant lines from my lambda. The print("image_data " + image_64_encode) line prints a base64 encoded image to the logs. def lambda_handler(event, context): img_base64 = event.get('base64Image') if img_base64 is None: return respond(True, "No base64Image key") img = base64.decodestring(img_base64) name = uuid.uuid4() path = '/tmp/{}.png'.format(name) print("path " + path) image_result = open(path, 'wb') image_result.write(img) image_result.close() process_image(path) image_processed_path = '/tmp/{}-processed.png'.format(name) print("image_processed_path " + image_processed_path) image_processed = open(image_processed_path, 'rb') image_processed_data = image_processed.read() image_processed.close() image_64_encode = base64.encodestring(image_processed_data) print("image_data " + image_64_encode) return respond(False, image_64_encode) def respond(err, res): return { 'statusCode': '400' if err else '200', 'body': res, 'headers': { 'Content-Type': 'image/png', }, 'isBase64Encoded': 'true' } Any pointers to what I'm doing wrong? A: I finally figured this out. Returning binary data from a python lambda is doable. Follow the instructions here: https://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/ Be sure to check the 'Use Lambda Proxy integration' when creating a new method. Also be sure your Python Lambda response returns a base64-encoded body, sets isBase64Encoded to True, and an appropriate content type: import base64 def lambda_handler(event, context): # ... 
body = base64.b64encode(bin_data) return {'isBase64Encoded' : True, 'statusCode' : 200, 'headers' : { 'Content-Type': content_type }, 'body' : body } THEN: For each of your routes/methods issue: apigateway update-integration-response --rest-api-id <api-id> --resource-id <res-id> --http-method POST --status-code 200 --patch-operations "[{\"op\" : \"replace\", \"path\" : \"/contentHandling\", \"value\" : \"CONVERT_TO_BINARY\"}]" In the AWS console. The and can be seen in the API Gateway 'breadcrumbs' ex: <api-id> = zdb7jsoey8 <res-id> = zy2b5g THEN: You need to 'Deploy API'. From what I found only it only worked AFTER deploying the API. Be sure you setup the 'Binary Media Types' before deploying. Hint: Nice AWS shell terminal here: https://github.com/awslabs/aws-shell pip install aws-shell A: Following all the steps above didn't work on my case, because having the binary support for content-type = */* will convert all responses to binary. My case: Multiple lambda functions returning json (text), just a single lambda returning a binary file. All have lambda proxy enabled. The lambdas are in an API Gateway The API Gateway is behind CloudFront Hint: I have notice an important information in the API Gateway -> Settings Quoting: API Gateway will look at the Content-Type and Accept HTTP headers to decide how to handle the body. This means that the Content-Type response header must match Accept request header Solution: Set Binary Media Types in API gateway to your mime type: image/jpg In your HTTP request set Accept: image/jpg In your HTTP response set Content-Type: image/jpg { "isBase64Encoded": True, "statusCode": 200, "headers": { "content-type": "image/jpg"}, "body": base64.b64encode(content_bytes).decode("utf-8") } Next we must tell CloudFront to accept the 'Accept' header from the request. 
So, in CloudFront distribution, click on your API Gateway instance (ID is clickable) and once redirected to CloudFront instance go to Behaviour tab, select the path-pattern of your API (example: /api/*) and click on Edit button. On the new screen, you have to add Accept header to Whitelist. Note 1: If you have multiple file types, you must add them all to Binary Media Types in the API gateway settings Note 2: For those coming from serverless and want to set the binary types when deploying your lambdas, then check this post: setting binary media types for API gateway plugins: - serverless-apigw-binary custom: apigwBinary: types: - 'image/jpeg' The serverless.yml file for cloudfront should contain: resources: WebAppCloudFrontDistribution: Type: AWS::CloudFront::Distribution Properties: DistributionConfig: ... CacheBehaviors: ... - #API calls ... ForwardedValues: ... Headers: - Authorization - Accept A: As far as I can tell, this is also the case with Python 3. I'm trying to return a binary data (bytes). It's not working at all. I also tried to use base-64 encoding and I have had no success. This is with API Gateway and Proxy Integration. [update] I finally realized how to do this. I enabled binary support for type */* and then returned this: return({ "isBase64Encoded": True, "statusCode": 200, "headers": { "content-type": "image/jpg", }, 'body': base64.b64encode(open('image.jpg', 'rb').read()).decode('utf-8') }) A: I faced the same problem about 6 months ago. Looks like although there is now binary support (and examples in JS) in API Gateway, Python 2.7 Lambda still does not support valid binary response, not sure about Python 3.6. Base64 encoded response is having problems because of JSON wrapping. I wrote a custom JS on client side taking the base-64 image out of this JSON manually, but this was also a poor solution. Upload the result to S3 (behind the CloudFront) and return 301 to CloudFront seems to be a good workaround. Works best for me. 
A: My issue was different - you need to redeploy Lambda after you do changes Binary Media Types. That wasn't obvious and I was changing Binary Media Types without any effect and stuck with the issue for days. For reference .NET core response: var response = new APIGatewayProxyResponse { StatusCode = 200, Body = base64String, IsBase64Encoded = true, Headers = new Dictionary<string, string> { {"Content-Type", "application/pdf"} } };
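The proxy-integration response shape used throughout the answers above can be built and round-tripped locally; a small sketch (`binary_response` is a hypothetical helper name, and the PNG magic bytes stand in for real image data):

```python
import base64

def binary_response(bin_data, content_type):
    # Body must be base64 text with isBase64Encoded set, so API Gateway
    # can convert it back to binary on the way out.
    return {
        'isBase64Encoded': True,
        'statusCode': 200,
        'headers': {'Content-Type': content_type},
        'body': base64.b64encode(bin_data).decode('utf-8'),
    }

resp = binary_response(b'\x89PNG\r\n', 'image/png')
print(resp['body'])  # iVBORw0K
print(base64.b64decode(resp['body']) == b'\x89PNG\r\n')  # True: lossless round trip
```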
How to return binary data from lambda function in AWS in Python?
I cannot get python lambda to return binary data. The node-template for thumbnail images works fine but I cannot get a python lambda to work. Below is the relevant lines from my lambda. The print("image_data " + image_64_encode) line prints a base64 encoded image to the logs. def lambda_handler(event, context): img_base64 = event.get('base64Image') if img_base64 is None: return respond(True, "No base64Image key") img = base64.decodestring(img_base64) name = uuid.uuid4() path = '/tmp/{}.png'.format(name) print("path " + path) image_result = open(path, 'wb') image_result.write(img) image_result.close() process_image(path) image_processed_path = '/tmp/{}-processed.png'.format(name) print("image_processed_path " + image_processed_path) image_processed = open(image_processed_path, 'rb') image_processed_data = image_processed.read() image_processed.close() image_64_encode = base64.encodestring(image_processed_data) print("image_data " + image_64_encode) return respond(False, image_64_encode) def respond(err, res): return { 'statusCode': '400' if err else '200', 'body': res, 'headers': { 'Content-Type': 'image/png', }, 'isBase64Encoded': 'true' } Any pointers to what I'm doing wrong?
[ "I finally figured this out. Returning binary data from a python lambda is doable.\nFollow the instructions here:\nhttps://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/\nBe sure to check the 'Use Lambda Proxy integration' when creating a new method.\nAlso be sure your Pyt...
[ 20, 19, 10, 4, 0 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "binary", "image", "python" ]
stackoverflow_0044860486_amazon_web_services_aws_lambda_binary_image_python.txt
Q: How do I get a specific piece of data from a user input list I need the code to find the gradient (slope calculation) using the first two inputs but I can't get the code to find the individual inputs heart_rate = [] max_length = 5 while len(heart_rate) < max_length: hr = int(input("enter heartrate after exercise: ")) heart_rate.append(hr) #Print data set print(heart_rate) #Calculate the gradient of HR recovery for the data entered n = len(heart_rate) def HR_gradient(heart_rate,n): time = [0,1,2,3,4,5] for idx in list(heart_rate): gradient = (time[0]-time[1])/(len[0]-len[1]) return (gradient) I've tried len() but it's not working. A: len is a function, not a list. To get the first 2 inputs of the list you would have to use heart_rate[0] and heart_rate[1] respectively. The code should be like this instead: gradient = (time[0]-time[1])/(heart_rate[0]-heart_rate[1]) But keep in mind that just changing this won't fix your HR_gradient function, since: You're not using the n variable at all inside the function even though you have it as an argument. You're iterating over heart_rate with idx, but you're not using it, so every time it loops, it will just keep using the first 2 inputs of time and heart_rate, ignoring the rest.
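Putting the answer's fix together, a minimal working sketch (the readings are hypothetical; this computes only the first-interval slope, as in the question):

```python
# Slope between the first two readings: rise over run, indexing the
# heart_rate list itself instead of the built-in len.
def hr_gradient(heart_rate, time):
    return (heart_rate[1] - heart_rate[0]) / (time[1] - time[0])

print(hr_gradient([150, 140, 120, 100, 95], [0, 1, 2, 3, 4]))  # -10.0
```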
How do I get a specific piece of data from a user input list
I need the code to find the gradient (slope calculation) using the first two inputs but I can't get the code to find the individual inputs heart_rate = [] max_length = 5 while len(heart_rate) < max_length: hr = int(input("enter heartrate after exercise: ")) heart_rate.append(hr) #Print data set print(heart_rate) #Calculate the gradient of HR recovery for the data entered n = len(heart_rate) def HR_gradient(heart_rate,n): time = [0,1,2,3,4,5] for idx in list(heart_rate): gradient = (time[0]-time[1])/(len[0]-len[1]) return (gradient) I've tried len() but it's not working.
[ "len is a function, not a list. To get the first 2 inputs of the list you would have to use heart_rate[0] and heart_rate[1] respectively. The code should be like this instead:\ngradient = (time[0]-time[1])/(heart_rate[0]-heart_rate[1])\n\nBut keep in mind that just changing this wont fix your HR_gradient function, ...
[ 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074553487_list_python.txt
Q: How do I change abbreviated words in a column in Python, eg NY to New York, US to United States at the same time? How do I replace all abbreviations in a column with words, eg NY to New York, US to United States et al.? I could do it for just NY with inplace=True, but I need to do it collectively for all abbreviations. I tried df['prod_state'].replace('NY', 'New York', inplace= True), it worked but when I included US, errors started popping A: You can use pandas.Series.replace, which accepts dictionaries as the to_replace argument: dico_abbrv= {'NY': 'New York', 'US': 'United States'} df['prod_state'].replace(dico_abbrv, inplace= True) You can also use pandas.Series.map: df['prod_state']= df['prod_state'].map(dico_abbrv)
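A small demo of the dict-based replace from the answer, with a hypothetical column in which one value ('CA') is deliberately missing from the mapping:

```python
import pandas as pd

df = pd.DataFrame({'prod_state': ['NY', 'US', 'CA']})
dico_abbrv = {'NY': 'New York', 'US': 'United States'}
# replace swaps mapped values and leaves unmapped ones untouched.
print(df['prod_state'].replace(dico_abbrv).tolist())
# ['New York', 'United States', 'CA']
```

Note the difference between the two options: `map` would instead return NaN for values absent from the dictionary, so `replace` is the safer choice when the mapping is incomplete.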
How do I change abbreviated words in a column in Python, eg NY to New York, US to United States at the same time?
How do I replace all abbreviations in a column with words, eg NY to New York, US to United States et al.? I could do it for just NY with inplace=True, but I need to do it collectively for all abbreviations. I tried df['prod_state'].replace('NY', 'New York', inplace= True), it worked but when I included US, errors started popping
[ "You can use pandas.Series.replace, which accepts dictionaries as the to_replace argument:\ndico_abbrv= {'NY': 'New York', 'US': 'United States'}\n\ndf['prod_state'].replace(dico_abbrv, inplace= True)\n\nYou can also use pandas.Series.map:\ndf['prod_state']= df['prod_state'].map(dico_abbrv)\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074553662_pandas_python.txt
Q: TypeError: 'Artihref' object is not iterable z3-solver I am trying to create a function called nochange which keeps the variable passed through within the same transition state it was already in. variables = [('pc_thrd1', 'int'), ('pc_thrd2', 'int'), ('flag1', 'int'), ('flag2', 'int'), ('x', 'int'), ('turn', 'int'), ('pid', 'int')] variables_enc_0, variables_enc_1 = bmchecker.add_variables(variables) #aliases of state variables pc_thrd1 = variables_enc_0[0] pc_thrd2 = variables_enc_0[1] flag1 = variables_enc_0[2] flag2 = variables_enc_0[3] x = variables_enc_0[4] turn = variables_enc_0[5] pid = variables_enc_0[6] pc_thrd1_x = variables_enc_1[0] pc_thrd2_x = variables_enc_1[1] flag1_x = variables_enc_1[2] flag2_x = variables_enc_1[3] x_x = variables_enc_1[4] turn_x = variables_enc_1[5] state0_enc = And(Or(x == 0, x == 1), (flag1 == 0), (flag2 == 0), (turn == 0), (pc_thrd1 == 0), (pc_thrd2 == 0), (pid == 0)) bmchecker.add_initial_state_enc(state0_enc) def nochange(l): c = None for i in l: x, y = i if c is None: c = (x == y) else: c = And(x == y, (c)) return c I then created this variable calling in the function on the int variables thr1 = Or(And(pc_thrd1 == 0, flag1_x == 1, pc_thrd1_x == 1, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 1, flag2 < 1, pc_thrd1 == 6, nochange(flag1), nochange(flag1_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 1, flag2 >= 1, pc_thrd1_x == 2, nochange(flag1), nochange(flag1_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 2, turn == 0, pc_thrd1_x == 6, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 2, Not(turn == 0), pc_thrd1 == 3, nochange(flag2), 
nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 3, flag2 == 0, pc_thrd1 == 4, nochange(flag1), nochange(flag1_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 4, turn == 0, pc_thrd1 == 5, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 4, Not(turn == 0), pc_thrd1 == 4, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 5, flag1 == 1, pc_thrd1 == 6, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 6, x == 0, pc_thrd1 == 7, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(flag1), nochange(flag1_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 7, turn == 1, pc_thrd1 == 8, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 8, flag1 == 0, pc_thrd1 == 0, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x))) all_thrds = And(And(0<=pid, pid<=1), Or(And(pid == 0, turn == 1, thr1, pc_thrd2 == pc_thrd2_x), And(pid == 1, turn == 0, pc_thrd1 == pc_thrd1_x))) I expected to receive an error trace print which would allow me to create a finite state machine however I recieved this syntax error File "c:\Users\madom\Python projects\Bounded_Model_Check\Prof_Z_example\MultiThread_1.py", line 82, in <module> thr1 = Or(And(pc_thrd1 == 0, flag1_x == 1, pc_thrd1_x == 1, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), 
nochange(pc_thrd2), nochange(pc_thrd2_x)), File "c:\Users\madom\Python projects\Bounded_Model_Check\Prof_Z_example\MultiThread_1.py", line 73, in nochange for i in l: TypeError: 'ArithRef' object is not iterable In essence, thr1 is a set of constraints for a thread represented by C Code which I did not think I needed to add, the nochange function is supposed to manipulate any transitory state, for example, flag1, flag1_x would remain equal to whatever value it was equal too and then that would get passed through Z3 to allow me to find a suitable model. A: You need to post an MRE (minimal-reproducible-example). See here: https://stackoverflow.com/help/minimal-reproducible-example Without a reproducible example, it's impossible to opine on what exactly is going on wrong. However, I see a clear red-flag. You have: flag2 == 0 and also: nochange(flag2) The first one suggests flag2 is some sort of an arithmetic object, and the error you're getting (ArithRef) corroborates this. But your function nochange expects a Python list. Clearly, an arithmetic object is not iterable, which explains the error-message. Long story short: nochange wants a list, and you're passing it a numeric value; thus the error message. Posting an MRE (see the link above) might help us help you further.
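The mismatch the answer describes can be reproduced without z3: `nochange` iterates and unpacks pairs, so passing it a single value fails the same way (plain Python ints stand in here for the z3 `ArithRef` objects):

```python
def nochange(l):
    # Expects an iterable of (current_state, next_state) pairs,
    # mirroring the question's function.
    c = None
    for x, y in l:
        c = (x == y) if c is None else ((x == y) and c)
    return c

print(nochange([(1, 1), (2, 2)]))  # True: every pair is unchanged

try:
    nochange(5)  # a bare value, like the ArithRef passed in the question
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```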
TypeError: 'ArithRef' object is not iterable z3-solver
I am trying to create a function called nochange which keeps the variable passed through within the same transition state it was already in. variables = [('pc_thrd1', 'int'), ('pc_thrd2', 'int'), ('flag1', 'int'), ('flag2', 'int'), ('x', 'int'), ('turn', 'int'), ('pid', 'int')] variables_enc_0, variables_enc_1 = bmchecker.add_variables(variables) #aliases of state variables pc_thrd1 = variables_enc_0[0] pc_thrd2 = variables_enc_0[1] flag1 = variables_enc_0[2] flag2 = variables_enc_0[3] x = variables_enc_0[4] turn = variables_enc_0[5] pid = variables_enc_0[6] pc_thrd1_x = variables_enc_1[0] pc_thrd2_x = variables_enc_1[1] flag1_x = variables_enc_1[2] flag2_x = variables_enc_1[3] x_x = variables_enc_1[4] turn_x = variables_enc_1[5] state0_enc = And(Or(x == 0, x == 1), (flag1 == 0), (flag2 == 0), (turn == 0), (pc_thrd1 == 0), (pc_thrd2 == 0), (pid == 0)) bmchecker.add_initial_state_enc(state0_enc) def nochange(l): c = None for i in l: x, y = i if c is None: c = (x == y) else: c = And(x == y, (c)) return c I then created this variable calling in the function on the int variables thr1 = Or(And(pc_thrd1 == 0, flag1_x == 1, pc_thrd1_x == 1, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 1, flag2 < 1, pc_thrd1 == 6, nochange(flag1), nochange(flag1_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 1, flag2 >= 1, pc_thrd1_x == 2, nochange(flag1), nochange(flag1_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 2, turn == 0, pc_thrd1_x == 6, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 2, Not(turn == 0), pc_thrd1 == 3, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), 
nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 3, flag2 == 0, pc_thrd1 == 4, nochange(flag1), nochange(flag1_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 4, turn == 0, pc_thrd1 == 5, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 4, Not(turn == 0), pc_thrd1 == 4, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 5, flag1 == 1, pc_thrd1 == 6, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 6, x == 0, pc_thrd1 == 7, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(flag1), nochange(flag1_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 7, turn == 1, pc_thrd1 == 8, nochange(flag2), nochange(flag2_x), nochange(flag1), nochange(flag1_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), And(pc_thrd1 == 8, flag1 == 0, pc_thrd1 == 0, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x))) all_thrds = And(And(0<=pid, pid<=1), Or(And(pid == 0, turn == 1, thr1, pc_thrd2 == pc_thrd2_x), And(pid == 1, turn == 0, pc_thrd1 == pc_thrd1_x))) I expected to receive an error trace print which would allow me to create a finite state machine however I recieved this syntax error File "c:\Users\madom\Python projects\Bounded_Model_Check\Prof_Z_example\MultiThread_1.py", line 82, in <module> thr1 = Or(And(pc_thrd1 == 0, flag1_x == 1, pc_thrd1_x == 1, nochange(flag2), nochange(flag2_x), nochange(turn), nochange(turn_x), nochange(x), nochange(x_x), nochange(pc_thrd2), nochange(pc_thrd2_x)), File 
"c:\Users\madom\Python projects\Bounded_Model_Check\Prof_Z_example\MultiThread_1.py", line 73, in nochange for i in l: TypeError: 'ArithRef' object is not iterable In essence, thr1 is a set of constraints for a thread represented by C Code which I did not think I needed to add, the nochange function is supposed to manipulate any transitory state, for example, flag1, flag1_x would remain equal to whatever value it was equal too and then that would get passed through Z3 to allow me to find a suitable model.
[ "You need to post an MRE (minimal-reproducible-example). See here: https://stackoverflow.com/help/minimal-reproducible-example\nWithout a reproducible example, it's impossible to opine on what exactly is going on wrong. However, I see a clear red-flag. You have:\nflag2 == 0\n\nand also:\nnochange(flag2)\n\nThe firs...
[ 0 ]
[]
[]
[ "python", "sat", "smt", "z3", "z3py" ]
stackoverflow_0074553489_python_sat_smt_z3_z3py.txt
Q: separating 2d numpy array into nxn chunks How would you separate a 2D numpy array into n×n chunks? For example, the following array of shape (4,4): arr = [[1,2,3,4], [5,6,7,8], [9,10,11,12], [13,14,15,16]] Transformed to this array, of shape (4,2,2), by splitting it into (2x2) blocks: new_arr = [[[1,2], [5,6]], [[3,4], [7,8]], [[9,10], [13,14]], [[11,12], [15,16]]] A: You can use np.vsplit to split the array into multiple subarrays vertically. Similarly you can use np.hsplit to split the array into multiple subarrays horizontally. To better understand this, examine the generalized ressample function below, which makes use of the np.vsplit and np.hsplit methods. Use this: def ressample(arr, N): A = [] for v in np.vsplit(arr, arr.shape[0] // N): A.extend([*np.hsplit(v, arr.shape[0] // N)]) return np.array(A) Example 1: The given 2D array is of shape 4x4 and we want to subsample it into chunks of shape 2x2. arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]) print(ressample(arr, 2)) #--> chunk size 2 Output 1: [[[ 1 2] [ 5 6]] [[ 3 4] [ 7 8]] [[ 9 10] [13 14]] [[11 12] [15 16]]] Example 2: Consider a 2D array containing 8 rows and 8 columns. Now we subsample this array into chunks of shape 4x4.
arr = np.random.randint(0, 10, 64).reshape(8, 8) print(ressample(arr, 4)) #--> chunk size 4 Sample Output 2: [[[8 3 7 5] [7 2 6 1] [7 9 2 2] [3 1 8 8]] [[2 0 3 2] [2 9 0 8] [2 6 3 9] [2 4 4 8]] [[9 9 1 8] [9 1 5 0] [8 5 1 2] [2 7 5 1]] [[7 8 9 6] [9 0 9 5] [8 9 8 3] [7 3 6 3]]] A: You could do the following, and adjust it to your array: import numpy as np arr = [[1,2,3,4], [5,6,7,8], [9,10,11,12], [13,14,15,16]] arr_new = np.array([[arr[i][j:j+2], arr[i+1][j:j+2]] for j in range(len(arr[0])-2) for i in range(len(arr)-2)]) print(arr_new) print(arr_new.shape) This gives the following output: [[[ 1 2] [ 5 6]] [[ 5 6] [ 9 10]] [[ 2 3] [ 6 7]] [[ 6 7] [10 11]]] (4, 2, 2) A: You could use the hsplit() and vsplit() methods to achieve the above. import numpy as np arr = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) ls1,ls2 = np.hsplit(arr, 2) ls1 = np.vsplit(ls1,2) ls2 = np.vsplit(ls2,2) ls = ls1 + ls2 result = np.array(ls) print(result) >>> [[[ 1 2] [ 5 6]] [[ 9 10] [13 14]] [[ 3 4] [ 7 8]] [[11 12] [15 16]]] print(result.tolist()) >>> [[[1, 2], [5, 6]], [[9, 10], [13, 14]], [[3, 4], [7, 8]], [[11, 12], [15, 16]]] A: There is no need to split or anything; the same can be achieved by reshaping and reordering the axes. result = np.swapaxes(arr.reshape(2, 2, 2, 2), 1, 2).reshape(-1, 2, 2) Dividing an (N, N) array into (n, n) chunks is also basically a sliding-window op with an (n, n) window and a stride of n. from numpy.lib.stride_tricks import sliding_window_view result = sliding_window_view(arr, (2, 2))[::2, ::2].reshape(-1, 2, 2)
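The reshape/swapaxes idea in the last answer generalizes to any chunk size n that divides the side length. A hedged helper (the name to_chunks is illustrative; it assumes a square input whose side is divisible by n):

```python
import numpy as np

def to_chunks(arr, n):
    # Assumes a square array whose side length N is divisible by n.
    # reshape splits each axis into (block index, offset within block);
    # swapaxes groups the two block indices together before flattening.
    N = arr.shape[0]
    return arr.reshape(N // n, n, N // n, n).swapaxes(1, 2).reshape(-1, n, n)

arr = np.arange(1, 17).reshape(4, 4)
print(to_chunks(arr, 2))
# first chunk is [[1, 2], [5, 6]], matching the question's expected output
```

This matches the ordering the question asks for (row-blocks left to right, then top to bottom) without copying the data into Python lists.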
separating 2d numpy array into nxn chunks
How would you separate a 2D numpy array into a nxn chunks? For example, the following array of shape (4,4): arr = [[1,2,3,4], [5,6,7,8], [9,10,11,12], [13,14,15,16]] Transformed to this array, of shape (4,2,2), by subsampling with a different (2x2) array: new_arr = [[[1,2], [5,6]], [[3,4], [7,8]], [[9,10], [13,14]], [[11,12], [15,16]]]
[ "You can use np.vsplit to split the array into multiple subarrays vertically. Similarly you can use np.hsplit to split the array into multiple subarrays horizontally. To better understand this examine the generalized resample function which makes the use of np.vsplit and np.hsplit methods.\nUse this:\ndef ressample...
[ 1, 0, 0, 0 ]
[]
[]
[ "arrays", "numpy", "python", "slice" ]
stackoverflow_0061094337_arrays_numpy_python_slice.txt
Q: Logging from 2 root logger to 1 file I have a function for quickly setting up the logger: def init_logger(name: str, log_path: str): logger = logging.getLogger(name) logger.setLevel(logging.DEBUG) cli_handler = logging.StreamHandler() cli_handler.setFormatter(CLILoggerFormatter()) cli_handler.setLevel(logging.INFO) logger.addHandler(cli_handler) file_handler = logging.handlers.TimedRotatingFileHandler(filename=log_path, when="midnight", encoding="utf-8") file_handler.setFormatter(FileLoggerFormatter()) file_handler.setLevel(logging.DEBUG) logger.addHandler(file_handler) Also, my python package my_lib has these lines of code: logger = logging.getLogger(__name__) logger.addHandler(logging.NullHandler()) And this is the main.py file import my_lib import logging init_logger(__name__, "logs/log.log") logger = logging.getLogger(__name__) init_logger("my_lib", "logs/log.log") By design, all logs from the python package my_lib and from the main.py file should be written to 1 file log.log (which changes every midnight). But at midnight an error is thrown: --- Logging error --- Traceback (most recent call last): File "C:\Program Files\Python310\lib\logging\handlers.py", line 74, in emit self.doRollover() File "C:\Program Files\Python310\lib\logging\handlers.py", line 435, in doRollover self.rotate(self.baseFilename, dfn) File "C:\Program Files\Python310\lib\logging\handlers.py", line 115, in rotate os.rename(source, dest) PermissionError: [WinError 32] The process cannot access the file because it is occupied by another process: 'C:\\Users\\Woopertail\\Desktop\\Test\\logs\\log.log' -> 'C:\\Users\\Woopertail\\Desktop\\Test\\logs\\log.log.2022-11-23' As far as I understand, this happens because 2 root loggers are trying to create and open a file one after another, and the second one gets an error like this. Is there any way to make 2 root loggers save records to 1 file? A: If anyone has a similar problem, here's a solution that helped me. 
Instead of adding a handler to each logger object, I used logging.config.dictConfig: import logging import logging.config CONFIG = { "version": 1, "handlers": { "file_handler": { "class": "logging.handlers.TimedRotatingFileHandler", "level": "DEBUG", "formatter": "file_formatter", "filename": "logs/log.log", "when": "midnight", "encoding": "utf-8" }, "cli_handler": { "class": "logging.StreamHandler", "level": "INFO", "formatter": "cli_formatter" } }, "formatters": { "file_formatter": { "()": "MyFormatters.FileLoggerFormatter" }, "cli_formatter": { "()": "MyFormatters.CLILoggerFormatter" } }, "loggers": { "main": { "handlers": ["file_handler", "cli_handler"], "level": "DEBUG" }, "my_lib": { "handlers": ["file_handler", "cli_handler"], "level": "DEBUG" } } } logging.config.dictConfig(CONFIG) logger = logging.getLogger("main")
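An alternative that keeps the question's init_logger style: build the rotating file handler once and attach the same instance to every logger, so only one handler ever owns the log file at rollover time. The sketch below uses a StringIO-backed StreamHandler as a stand-in for TimedRotatingFileHandler so it runs anywhere; the logger names "main" and "my_lib" follow the question.

```python
import io
import logging

# One shared handler instance (stand-in for a single TimedRotatingFileHandler).
stream = io.StringIO()
shared = logging.StreamHandler(stream)
shared.setFormatter(logging.Formatter("%(name)s:%(message)s"))

# Attach the SAME handler object to both loggers - do not create it twice.
for name in ("main", "my_lib"):
    lg = logging.getLogger(name)
    lg.setLevel(logging.DEBUG)
    lg.addHandler(shared)

logging.getLogger("main").info("hello")
logging.getLogger("my_lib").info("world")
print(stream.getvalue())  # main:hello\nmy_lib:world\n
```

Both approaches (dictConfig above, or sharing one handler object) fix the same root cause: two separate TimedRotatingFileHandler instances both hold the file open, so the second one's os.rename fails at midnight on Windows.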
Logging from 2 root logger to 1 file
I have a function for quickly setting up the logger: def init_logger(name: str, log_path: str): logger = logging.getLogger(name) logger.setLevel(logging.DEBUG) cli_handler = logging.StreamHandler() cli_handler.setFormatter(CLILoggerFormatter()) cli_handler.setLevel(logging.INFO) logger.addHandler(cli_handler) file_handler = logging.handlers.TimedRotatingFileHandler(filename=log_path, when="midnight", encoding="utf-8") file_handler.setFormatter(FileLoggerFormatter()) file_handler.setLevel(logging.DEBUG) logger.addHandler(file_handler) Also, my python package my_lib has these lines of code: logger = logging.getLogger(__name__) logger.addHandler(logging.NullHandler()) And this is the main.py file import my_lib import logging init_logger(__name__, "logs/log.log") logger = logging.getLogger(__name__) init_logger("my_lib", "logs/log.log") By design, all logs from the python package my_lib and from the main.py file should be written to 1 file log.log (which changes every midnight). But at midnight an error is thrown: --- Logging error --- Traceback (most recent call last): File "C:\Program Files\Python310\lib\logging\handlers.py", line 74, in emit self.doRollover() File "C:\Program Files\Python310\lib\logging\handlers.py", line 435, in doRollover self.rotate(self.baseFilename, dfn) File "C:\Program Files\Python310\lib\logging\handlers.py", line 115, in rotate os.rename(source, dest) PermissionError: [WinError 32] The process cannot access the file because it is occupied by another process: 'C:\\Users\\Woopertail\\Desktop\\Test\\logs\\log.log' -> 'C:\\Users\\Woopertail\\Desktop\\Test\\logs\\log.log.2022-11-23' As far as I understand, this happens because 2 root loggers are trying to create and open a file one after another, and the second one gets an error like this. Is there any way to make 2 root loggers save records to 1 file?
[ "If anyone has a similar problem, here's a solution that helped me.\nInstead of adding a handler to each logger object, I used logging.config.dictConfig:\nimport logging\nimport logging.config\n\n\nCONFIG = {\n \"version\": 1,\n \"handlers\": {\n \"file_handler\": {\n \"class\": \"logging.ha...
[ 0 ]
[]
[]
[ "logging", "python", "python_3.x" ]
stackoverflow_0074553143_logging_python_python_3.x.txt
Q: Retrieve output parameter(s) from a SQL Server stored procedure Trying to run a MS SQL stored procedure that has an output parameter. I have followed documentation on how to do this, but when I run my code I get this error: SystemError: <class 'pyodbc.Error'> returned a result with an error set. Here is my code: my_stored_procedure CREATE PROCEDURE [dbo].[my_stored_procedure] @IN1 INT @IN2 INT , @OUT INT OUTPUT AS BEGIN SET @OUT = @IN + 1 END myclass.py z = sqlalchemy.sql.expression.outparam("ret_%d" % 0, type_=int) x = 1 y = 2 exec = self.context.\ execute(text(f"EXEC my_stored_procedure :x, :y, :z OUTPUT"), {"x": x, "y": y, "z": z}) result = exec.fetchall() context.py def execute(self, statement, args=None): if not args: return self.session.execute(statement) else: return self.session.execute(statement, args) Any suggestions or can anyone see what I am doing wrong? A: .outparam() was added way back in SQLAlchemy 0.4 to address a specific requirement when working with Oracle and is only really used by that dialect. As mentioned in the current SQLAlchemy documentation here, working with stored procedures is one of the more database/dialect-specific tasks because of the significant variations in the way the different DBAPI layers deal with them. For SQL Server, the pyodbc Wiki explains how to do it here. The good news is that if the stored procedure does not return result sets in addition to the output/return values then you don't need to resort to using a raw DBAPI connection. 
For your (corrected) stored procedure CREATE PROCEDURE [dbo].[my_stored_procedure] @IN INT , @OUT INT OUTPUT AS BEGIN SET @OUT = @IN + 1 END you can use this: import sqlalchemy as sa engine = sa.create_engine("mssql+pyodbc://scott:tiger^5HHH@mssql_199") sql = """\ SET NOCOUNT ON; DECLARE @out_param int; EXEC dbo.my_stored_procedure @IN = :in_value, @OUT = @out_param OUTPUT; SELECT @out_param AS the_output; """ with engine.begin() as conn: result = conn.execute(sa.text(sql), {"in_value": 1}).scalar() print(result) # 2
Retrieve output parameter(s) from a SQL Server stored procedure
Trying to run a MS SQL stored procedure that has an output parameter. I have followed documentation on how to do this, but when I run my code I get this error: SystemError: <class 'pyodbc.Error'> returned a result with an error set. Here is my code: my_stored_procedure CREATE PROCEDURE [dbo].[my_stored_procedure] @IN1 INT @IN2 INT , @OUT INT OUTPUT AS BEGIN SET @OUT = @IN + 1 END myclass.py z = sqlalchemy.sql.expression.outparam("ret_%d" % 0, type_=int) x = 1 y = 2 exec = self.context.\ execute(text(f"EXEC my_stored_procedure :x, :y, :z OUTPUT"), {"x": x, "y": y, "z": z}) result = exec.fetchall() context.py def execute(self, statement, args=None): if not args: return self.session.execute(statement) else: return self.session.execute(statement, args) Any suggestions or can anyone see what I am doing wrong?
[ ".outparam() was added way back in SQLAlchemy 0.4 to address a specific requirement when working with Oracle and is only really used by that dialect. As mentioned in the current SQLAlchemy documentation here, working with stored procedures is one of the more database/dialect-specific tasks because of the significan...
[ 0 ]
[]
[]
[ "output_parameter", "python", "sql_server", "sqlalchemy", "stored_procedures" ]
stackoverflow_0074552026_output_parameter_python_sql_server_sqlalchemy_stored_procedures.txt
Q: How to find/get class object from a class method, static method or instance method For example. class Klass: def f(self): pass f = Klass().f() klass = # I want to get the Klass object. I searched for a method in the inspect module but could not find one. I did find a private function named _findclass, but it does not run correctly on a local class, for example: def test(): class K: def f(self): pass class_object = _findclass(K().f()) A: Once you get the output of a function, you can't get any information about it. In your example, Klass().f() returns None. Once this value is returned, it is indistinguishable from any other None. But if you instead do f = Klass().f, you can get the instance with f.__self__. For example: class Klass: def f(self): pass obj = Klass() f = obj.f print(f.__self__ is obj) # prints True obj.foo = 4 print(f.__self__.foo) # prints 4 If you want to get the class, you can use __class__: f.__self__.__class__ is the same object as Klass. If you really need the class from the output, you could try having every method return its class: class Klass: def f(self): normal_result = "whatever" return normal_result, Klass result, cls = Klass().f()
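The question also asks about classmethods and staticmethods, and the binding rules differ slightly. A quick survey of the three cases (verifiable in any Python 3 REPL; parsing __qualname__ is essentially what inspect's private _findclass does):

```python
class Klass:
    def f(self):
        pass

    @classmethod
    def cm(cls):
        pass

    @staticmethod
    def sm():
        pass

obj = Klass()
print(obj.f.__self__.__class__ is Klass)  # True: instance method binds the instance
print(Klass.cm.__self__ is Klass)         # True: a bound classmethod binds the class itself
print(hasattr(Klass.sm, "__self__"))      # False: a staticmethod is a plain function
print(Klass.sm.__qualname__)              # 'Klass.sm' - the qualname still names the class
```

So for instance methods and classmethods, __self__ gets you there directly; for staticmethods there is no binding at all, and the __qualname__ string is the only remaining clue.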
How to find/get class object from a class method, static method or instance method
For example. class Klass: def f(sel): pass f = Klass().f() klass = # i want to get Klass object. I searched a methond in inspect module but I can not find, Actually I find a private function named _findclass but this method does not run correctly in local class for example def test(): class K: def f(self): pass class_object = _findclass(K().f())
[ "Once you get the output of a function, you can't get any information about it. In your example, Klass().f() returns None. Once this value is returned, it is indistinguishable from any other None. But if you instead do f = Klass().f, you can get the instance with f.__self__.\nFor example:\nclass Klass:\n def f(s...
[ 0 ]
[]
[]
[ "inspect", "python" ]
stackoverflow_0074553431_inspect_python.txt
Q: How to take individual words from a list I am currently running some Python code to extract words from a list and create a list of these words. The list I am using is from a .txt file with some lines from Romeo and Juliet. I read in the file, trimmed the whitespace, split each word up, and added these words to a list. I am now trying to create a list that does not include any words repeating. I know that I need to create a loop of some sort to go through the list, add the words, and then discard the repeated words. This is the code I currently have: fname = input ("Enter file name: ") #Here we check to see if the file is in the correct format #If it is not, we will return a personalized error message #and quit the programme. try : fh = open(fname) except : print("Enter a valid file name: ") quit() #Here I read in the file so that it returns as a complete #string. fread = fh.read() #print(fread) #Here we are stripping the file of any unnecessary #whitespace fstrip = fread.rstrip() #print(fstrip) #Here we are splitting all the words into individual values #in a list. This will allow us to write a loop to check #for repeating words. fsplit = fstrip.split() lst = list(fsplit) #print(lst) #This is going to be our for loop. wdlst = list() Any help would be greatly appreciated, I am new to Python and I just cannot seem to figure out what combination of statements needs to be added to create this new list. Many thanks, A: A set requires that it has only unique elements. To remove repeated elements from a list, you can convert it into a set and then back again. list_without_duplicates = list(set(lst)) Here's a simple way to do it while preserving the order of the words: new_list = [] added_words = set() for word in lst: if word not in added_words: new_list.append(word) added_words.add(word) Then, new_list will contain all words in the list with duplicates removed. A: You do not require a for loop to remove the duplicates of a list.
Instead use set to remove the duplicates. E.g.: l1=['hello', 'world' , 'hello', 'people'] print(list(set(l1))) Result: ['hello', 'world', 'people']
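Since Python 3.7, regular dicts preserve insertion order, so dict.fromkeys gives order-preserving deduplication in one line, a middle ground between the set() one-liner (order lost) and the explicit loop:

```python
words = ['hello', 'world', 'hello', 'people']

print(list(set(words)))            # unique, but order is arbitrary
print(list(dict.fromkeys(words)))  # ['hello', 'world', 'people'] - first-seen order kept
```

Each word becomes a dict key exactly once, and keys keep the order of first appearance, which is usually what "remove the repeats" means for text.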
How to take individual words from a list
I am currently running some python code to extract words from a list and create a list of these words. The list I am using is from a .txt file with some lines from romeo and juliet. I read in the file, trimmed the whitespace, split each word up, and added these words to a list. I am now trying to create a list that does not include any words repeating. I know that I need to create a loop of some sort to go through the list, add the words, and then discard the repeated words. This is the code I currently have: fname = input ("Enter file name: ") #Here we check to see if the file is in the correct format #If it is not, we will return a personalized error message #and quit the programme. try : fh = open(fname) except : print("Enter a valid file name: ") quit() #Here I read in the file so that it returns as a complete #string. fread = fh.read() #print(fread) #Here we are stripping the file of any unnecessary #whitespace fstrip = fread.rstrip() #print(fstrip) #Here we are splitting all the words into individual values #in a list. This will allow us to write a loop to check #for repeating words. fsplit = fstrip.split() lst = list(fsplit) #print(lst) #This is going to be our for loop. wdlst = list() Any help would be greatly appreciated, I am new to python and I just cannot seem to figure out what combination of statements needs to be added to create this new list. Many thanks,
[ "A set requires that it has only unique elements. To remove repeated elements from a list, you can convert it into a set and then back again.\nlist_without_duplicates = list(set(lst))\n\nHere's a simple way to do it while preserving the order of the words:\nnew_list = []\nadded_words = {}\nfor word in lst:\n if ...
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074553807_python.txt
Q: Python: Guessing a number in User's mind I am stuck on a question from my Introduction to Python course. I have to write code wherein the user keeps an integer in their mind, and the computer guesses. If the user's number is higher than the computer's guess, the user types "+", and the computer guesses higher. If the user's number is lower, the user types "-", and the computer guesses a lower number. If the computer guesses correctly, the user types "y", and the program ends. Use the builtin function "input" to get text from the user. If the user types anything other than "+", "-", or "y", the function should throw an exception. Your function should take no arguments and return nothing. I have to write the code in Python. The problem I am facing is that after checking for the input the first time, how to change the range and make the user enter their response again. I have just started coding, so please forgive me if it is a very basic question. A: I was having the same problem you have and here is my solution for it: low = 1 high = 100 int(input("Enter the number for computer to guess: ")) while low != high: guess = low + (high - low) // 2 high_low = input(f"Computer guess is: {guess} \n " f"Press (H) for higher, (L) for lower and (C) for correct: ").casefold() if high_low == "h": low = guess + 1 elif high_low == "l": high = guess - 1 elif high_low == "c": print(f"I got it the number is {guess}") break else: print("Please enter h, l or c") else: print(f"Your number is {low}") Here I am using the binary search algorithm with the formula: guess = low + (high - low) // 2. To be more clear, what this formula does is start off by guessing the mid point between the high and low values. If we are told to guess higher, that must mean our answer is between 51 and 100.
So the lowest it can be is 51, and that is our new low value for the range; the mid point now becomes 51 + (100 - 51) // 2 (true division would give 75.5; integer division rounds down), so the result is 75. And if we are now told to guess lower, the answer must be somewhere between 51 and 74, so the new high becomes 74 and the mid point becomes 51 + (74 - 51) // 2, which is 62, and the pattern continues like that. You can change H with +, L with -, and C with y and you will get your solution. I hope you find this useful :).
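Adapting that answer to the assignment's exact spec (+/-/y input, no arguments, no return value, and an exception on anything else) might look like the sketch below; the 1–100 range is an assumption, since the question does not state one.

```python
def guess_users_number():
    low, high = 1, 100  # assumed range; the assignment does not specify one
    while True:
        guess = low + (high - low) // 2
        answer = input(f"Is your number {guess}? (+ = higher, - = lower, y = yes): ")
        if answer == "+":
            low = guess + 1
        elif answer == "-":
            high = guess - 1
        elif answer == "y":
            return  # correct guess: end with no return value, per the spec
        else:
            raise ValueError("expected '+', '-' or 'y'")
```

The loop body answers the asker's actual question: re-prompting happens simply because input() sits inside the while loop, and each "+" or "-" narrows the [low, high] range before the next guess.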
Python: Guessing a number in User's mind
I am stuck on a question from my Introduction to Python course. I have to write a code wherein the user keeps an integer in their mind, and the computer guesses. If the user's number is higher than the computer's guess, the user types "+", and the computer guesses higher. If the user's number is lower, the user types "-", and the computer guesses lower numer. If the computer guesses correctly, the user types "y", and the program ends. Use the builtin function "input" to get a text from the user. If the user types anything other than "+", "-", or "y", the function should throw an exception. Your function should take no arguments and return nothing. I have to write the code in python. The problem I am facing is that after checking for the input the first time, how to change the range and make the user enter their response again. I have just started coding, so please forgive me if it is a very basic question.
[ "I was having the same problem you have and here is my solution for it:\nimport random\n\n\nlow = 1\nhigh = 100\n\nint(input(\"Enter the number for computer to guess: \"))\n\n\nwhile low != high:\n guess = low + (high - low) // 2\n high_low = input(f\"Computer guess is: {guess} \\n \"\n f\"Press (H) for h...
[ 0 ]
[ "There are several solutions here, and some are more sophisticated than others.\nThe simplest solution in my opinion would simply be something like this (without validations):\nif user_input == \"+\":\n number = number + 1\nelif user_input == \"-\":\n number = number - 1\nelif user_input = \"y\":\n print(\...
[ -1 ]
[ "python" ]
stackoverflow_0073859371_python.txt
Q: Problem loading parallel datasets even after using SubsetRandomSampler I have two parallel datasets dataset1 and dataset2 and following is my code to load them in parallel using SubsetRandomSampler where I provide train_indices for dataloading. P.S. Even after setting num_workers=0 and seeding np as well as torch, the samples do not get loaded in parallel. Any suggestions are heartily welcome including methods other than SubsetRandomSampler. import torch, numpy as np from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler dataset1 = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) dataset2 = torch.tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) train_indices = list(range(len(dataset1))) torch.manual_seed(12) np.random.seed(12) np.random.shuffle(train_indices) sampler = SubsetRandomSampler(train_indices) dataloader1 = DataLoader(dataset1, batch_size=2, num_workers=0, sampler=sampler) dataloader2 = DataLoader(dataset2, batch_size=2, num_workers=0, sampler=sampler) for i, (data1, data2) in enumerate(zip(dataloader1, dataloader2)): x = data1 y = data2 print(x, y) Output: tensor([5, 1]) tensor([15, 18]) tensor([0, 2]) tensor([14, 12]) tensor([4, 6]) tensor([16, 10]) tensor([8, 9]) tensor([11, 19]) tensor([7, 3]) tensor([17, 13]) Expected Output: tensor([5, 1]) tensor([15, 11]) tensor([0, 2]) tensor([10, 12]) tensor([4, 6]) tensor([14, 16]) tensor([8, 9]) tensor([18, 19]) tensor([7, 3]) tensor([17, 13]) A: It looks like you are trying to load the two datasets in parallel, but have them maintain the same shuffled order. Currently, the code is shuffling the indices for dataset1 and then using those same shuffled indices to sample from both dataset1 and dataset2. However, this does not guarantee that the same elements will be paired together in the output, as dataset2 is shuffled separately from dataset1. To achieve your expected output, you would need to shuffle both datasets together, and then use the shuffled indices to sample from both datasets. 
One way to do this would be to first combine the two datasets into a single dataset containing tuples of corresponding elements from each dataset, and then shuffle the combined dataset. Then, you could use the shuffled indices to create two separate dataloaders, each of which would return the corresponding elements from each dataset. Here is an example of how this could be done: # combine the two datasets into a single dataset of tuples combined_dataset = list(zip(dataset1, dataset2)) # shuffle the combined dataset train_indices = list(range(len(combined_dataset))) np.random.seed(12) np.random.shuffle(train_indices) # create the dataloaders dataloader = DataLoader(combined_dataset, batch_size=2, num_workers=0, sampler=SubsetRandomSampler(train_indices)) # unpack the elements from the tuples in each batch for i, (data1, data2) in enumerate(dataloader): x = data1 y = data2 print(x, y) A: Since I was using a random sampler, the random indices are expected. To yield the same (shuffled) indices from both DataLoaders, it is better to create the indices first, and then use a custom sampler: class MySampler(torch.utils.data.sampler.Sampler): def __init__(self, indices): self.indices = indices def __iter__(self): return iter(self.indices) def __len__(self): return len(self.indices) dataset1 = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) dataset2 = torch.tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) train_indices = list(range(len(dataset1))) np.random.seed(12) np.random.shuffle(train_indices) sampler = MySampler(train_indices) dataloader1 = DataLoader(dataset1, batch_size=2, num_workers=0, sampler=sampler) dataloader2 = DataLoader(dataset2, batch_size=2, num_workers=0, sampler=sampler) for i, (data1, data2) in enumerate(zip(dataloader1, dataloader2)): x = data1 y = data2 print(x, y) P.S. got the solution by cross-posting on Pytorch forums but still want to keep it for future readers. Credits to ptrblck.
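The underlying behaviour can be illustrated without torch: SubsetRandomSampler draws a fresh permutation every time it is iterated, so two DataLoaders iterating over it each see a different order, while the custom-sampler fix freezes the order once. The class names below are illustrative stand-ins, not torch classes:

```python
import random

class ReshuffleSampler:
    """Stand-in for SubsetRandomSampler: a new permutation on every __iter__."""
    def __init__(self, indices):
        self.indices = list(indices)

    def __iter__(self):
        return iter(random.sample(self.indices, len(self.indices)))

class FixedSampler:
    """Stand-in for the custom sampler: the same order on every __iter__."""
    def __init__(self, indices):
        self.indices = list(indices)

    def __iter__(self):
        return iter(self.indices)

idx = list(range(10))
random.shuffle(idx)          # shuffle ONCE, up front
fixed = FixedSampler(idx)
print(list(fixed) == list(fixed))  # True: two "loaders" would see identical orders
```

Seeding numpy or torch does not help here, because each DataLoader still triggers its own independent draw from the sampler; the fix is to make the sampler deterministic, not the seed.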
Problem loading parallel datasets even after using SubsetRandomSampler
I have two parallel datasets dataset1 and dataset2 and following is my code to load them in parallel using SubsetRandomSampler where I provide train_indices for dataloading. P.S. Even after setting num_workers=0 and seeding np as well as torch, the samples do not get loaded in parallel. Any suggestions are heartily welcome including methods other than SubsetRandomSampler. import torch, numpy as np from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler dataset1 = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) dataset2 = torch.tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) train_indices = list(range(len(dataset1))) torch.manual_seed(12) np.random.seed(12) np.random.shuffle(train_indices) sampler = SubsetRandomSampler(train_indices) dataloader1 = DataLoader(dataset1, batch_size=2, num_workers=0, sampler=sampler) dataloader2 = DataLoader(dataset2, batch_size=2, num_workers=0, sampler=sampler) for i, (data1, data2) in enumerate(zip(dataloader1, dataloader2)): x = data1 y = data2 print(x, y) Output: tensor([5, 1]) tensor([15, 18]) tensor([0, 2]) tensor([14, 12]) tensor([4, 6]) tensor([16, 10]) tensor([8, 9]) tensor([11, 19]) tensor([7, 3]) tensor([17, 13]) Expected Output: tensor([5, 1]) tensor([15, 11]) tensor([0, 2]) tensor([10, 12]) tensor([4, 6]) tensor([14, 16]) tensor([8, 9]) tensor([18, 19]) tensor([7, 3]) tensor([17, 13])
[ "It looks like you are trying to load the two datasets in parallel, but have them maintain the same shuffled order.\nCurrently, the code is shuffling the indices for dataset1 and then using those same shuffled indices to sample from both dataset1 and dataset2. However, this does not guarantee that the same elements...
[ 1, 0 ]
[]
[]
[ "python", "pytorch", "pytorch_dataloader", "tensor" ]
stackoverflow_0074537660_python_pytorch_pytorch_dataloader_tensor.txt
Q: How to round down a float using import math I have a variable that is a float, and I can't find out how to round it down. I did a Google search and it said I should use trunc, but trunc didn't work for me. A: if you always want to round down and are getting positive floats, just use int(); otherwise you can use math.floor(), which can handle negative floats too >>> math.floor(3.99) 3 >>> int(-0.1) 0 >>> int(-1.2) -1 >>> math.floor(-1.2) -2
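Since the question mentions trunc: math.trunc rounds toward zero, which matches "round down" only for non-negative values, likely why it appeared not to work. A quick side-by-side:

```python
import math

print(math.trunc(3.99), math.trunc(-1.2))  # 3 -1  (drops the fraction: toward zero)
print(math.floor(3.99), math.floor(-1.2))  # 3 -2  (always toward negative infinity)
print(int(3.99), int(-1.2))                # 3 -1  (int() truncates, same as trunc)
```

So math.floor() is the one that always rounds down; trunc and int() only agree with it for numbers greater than or equal to zero.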
How to round down a float using import math
I have a variable that is a float, and I can't find out how to round it down. I did a google search and it said I should use trunc, but trunc didn't work for me.
[ "if you always want to round down and are getting positive floats, just use int() .. otherwise you can use math.floor(), which can handle negative floats too\n>>> math.floor(3.99)\n3\n>>> int(-0.1)\n0\n>>> int(-1.2)\n-1\n>>> math.floor(-1.2)\n-2\n\n" ]
[ 0 ]
[]
[]
[ "python", "rounding" ]
stackoverflow_0074553939_python_rounding.txt