content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: How to call Python pip in VSCode? After upgrading pip I can't call pip in VSCode. If it's not 'pip' or 'pip3' then what is it? PS C:\Users\dyhli\OneDrive\Рабочий стол\Python\Python> pip install pandas pip : The name "pip" is not recognized as the name of a cmdlet, function, script file, or executable program. Check the spelling of the name, as well as the presence and correctness of the path, then try again. At line:1 char:1 + pip install pandas + ~~~ + CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException I tried calling 'pip', 'pip3' and 'pip22' A: You need to add the path of your pip installation to your PATH system variable. By default, pip is installed to C:\Python34\Scripts\pip (pip now comes bundled with new versions of python), so the path "C:\Python34\Scripts" needs to be added to your PATH variable. To check if it is already in your PATH variable, type echo %PATH% at the CMD prompt To add the path of your pip installation to your PATH variable, you can use the Control Panel or the setx command. For example: setx PATH "%PATH%;C:\Python34\Scripts"
How to call Python pip in VSCode?
After upgrading pip I can't call pip in VSCode. If it's not 'pip' or 'pip3' then what is it? PS C:\Users\dyhli\OneDrive\Рабочий стол\Python\Python> pip install pandas pip : The name "pip" is not recognized as the name of a cmdlet, function, script file, or executable program. Check the spelling of the name, as well as the presence and correctness of the path, then try again. At line:1 char:1 + pip install pandas + ~~~ + CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException I tried calling 'pip', 'pip3' and 'pip22'
[ "You need to add the path of your pip installation to your PATH system variable.\nBy default, pip is installed to C:\\Python34\\Scripts\\pip (pip now comes bundled with new versions of python), so the path \"C:\\Python34\\Scripts\" needs to be added to your PATH variable.\nTo check if it is already in your PATH variable, type echo %PATH% at the CMD prompt\nTo add the path of your pip installation to your PATH variable,\nyou can use the Control Panel or the setx command. For example:\nsetx PATH \"%PATH%;C:\\Python34\\Scripts\"\n\n" ]
[ 1 ]
[]
[]
[ "pip", "python", "visual_studio_code" ]
stackoverflow_0074456998_pip_python_visual_studio_code.txt
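A minimal sketch of the same check done from Python rather than the CMD prompt, assuming the interpreter that VS Code's terminal launches is the one pip was upgraded for (nothing below comes from the original post):

import os
import sysconfig

# Locate this interpreter's Scripts directory (e.g. C:\PythonXY\Scripts on Windows)
# and test whether it already appears on PATH, which is what the answer asks you
# to verify with echo %PATH%.
scripts_dir = sysconfig.get_path("scripts")
path_entries = os.environ.get("PATH", "").split(os.pathsep)
on_path = any(os.path.normcase(p.rstrip("\\/")) == os.path.normcase(scripts_dir)
              for p in path_entries)
print(scripts_dir, "is already on PATH" if on_path else "is NOT on PATH")

If the directory is missing, adding it with setx as in the answer (or simply invoking pip as python -m pip install pandas) should make the command resolve.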
Q: Python - Convert JPG to text file Good morning all, I've made a Python script that adds text on top of images, based on a preset template. I'm now developing a template editor that will let the user edit the template in GUI, then save the template as a config file. The idea is that one user can create a template, export it, send it to a new user on a separate computer, who can import it into their config file. The second user will retain full edit abilities on the template (if any changes needs to be made). Now, in addition to the text, I also want the ability to add up to two images (company logos, ect.) to the template/stills. Now, my question: Is there a way to convert a JPG to pure text data, that can be saved to a config file, and that can be reinterpreted to a JPG at the receiving system. And if not, what would be the best way to achieve this? What I'm hoping to avoid is the user having to send the image files separately. A: Sounds questionable that you want to ship an image as text file (it's easy, base64 is supplied with python, but it drastically increases the amount of bytes. I'd strongly recommend not doing that). I'd rather take the text and embed it in the image metadata! That way, you would still have a valid image file, but if loaded with your application, that application could read the metadata, interpret it as text config. There's EXIF and XMP metadata, for both there's python modules. Alternatively, would make more sense to simply put images and config files into one archive file (you know .docx word documents? They do exactly that, just like .odt; java jar files? Same. Android APK files? All archive files with multiple files inside) python brings a zip module to enable you to do that easily. Instead of an archive, you could also build a PDF file. That way, you could simply have the images embedded in the PDF, the text editable on top of it, any browser can display it, and the text stays editable. Operating on pdf files can be done in many ways, but I like Fitz from the PyMuPDF package. Just make a document the size of your image, add the image file, put the text on top. On the reader side, find the image and text elements. It's relatively ok to do! PDF is a very flexible format, if you need more config that just text information, you can add arbitrary text streams to the document that are not displayed. A: If I understand properly, you want to use the config file as a settings file that stores the preferences of a user, you could store such data as JSON/XML/YAML or similar, such files are used to store data in pure readable text than binary can be parsed into a Python dict object. As for storing the images, you can have the generated images uploaded to a server then use their URL when they are needed to re-download them, unless if I didn’t understand the question?
Python - Convert JPG to text file
Good morning all, I've made a Python script that adds text on top of images, based on a preset template. I'm now developing a template editor that will let the user edit the template in a GUI, then save the template as a config file. The idea is that one user can create a template, export it, send it to a new user on a separate computer, who can import it into their config file. The second user will retain full edit abilities on the template (if any changes need to be made). Now, in addition to the text, I also want the ability to add up to two images (company logos, etc.) to the template/stills. Now, my question: Is there a way to convert a JPG to pure text data that can be saved to a config file and reinterpreted as a JPG on the receiving system? And if not, what would be the best way to achieve this? What I'm hoping to avoid is the user having to send the image files separately.
[ "Sounds questionable that you want to ship an image as text file (it's easy, base64 is supplied with python, but it drastically increases the amount of bytes. I'd strongly recommend not doing that).\nI'd rather take the text and embed it in the image metadata! That way, you would still have a valid image file, but if loaded with your application, that application could read the metadata, interpret it as text config.\nThere's EXIF and XMP metadata, for both there's python modules.\nAlternatively, would make more sense to simply put images and config files into one archive file (you know .docx word documents? They do exactly that, just like .odt; java jar files? Same. Android APK files? All archive files with multiple files inside) python brings a zip module to enable you to do that easily.\nInstead of an archive, you could also build a PDF file. That way, you could simply have the images embedded in the PDF, the text editable on top of it, any browser can display it, and the text stays editable. Operating on pdf files can be done in many ways, but I like Fitz from the PyMuPDF package. Just make a document the size of your image, add the image file, put the text on top. On the reader side, find the image and text elements. It's relatively ok to do!\nPDF is a very flexible format, if you need more config that just text information, you can add arbitrary text streams to the document that are not displayed.\n", "If I understand properly, you want to use the config file as a settings file that stores the preferences of a user, you could store such data as JSON/XML/YAML or similar, such files are used to store data in pure readable text than binary can be parsed into a Python dict object. As for storing the images, you can have the generated images uploaded to a server then use their URL when they are needed to re-download them, unless if I didn’t understand the question?\n" ]
[ 1, 0 ]
[]
[]
[ "config", "python" ]
stackoverflow_0074458732_config_python.txt
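For completeness, here is a minimal sketch of the base64 round-trip the first answer mentions (and advises against, since it inflates the data by roughly a third). The config key logo_jpg_b64 and the function names are made up for illustration:

import base64
import json

def save_config_with_logo(config, image_path, config_path):
    # Encode the JPG bytes as base64 text so the image can travel inside the config file.
    with open(image_path, "rb") as f:
        config["logo_jpg_b64"] = base64.b64encode(f.read()).decode("ascii")
    with open(config_path, "w") as f:
        json.dump(config, f)

def load_logo_from_config(config_path, out_path):
    # Decode the base64 text back into the original JPG bytes on the receiving system.
    with open(config_path) as f:
        config = json.load(f)
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(config["logo_jpg_b64"]))
    return config

The same pattern works just as well with the YAML or XML config formats mentioned in the second answer.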
Q: SSL errors when sending file from Azure databricks to SharePoint We are working in a Python notebook on Databricks and want to send a file to a SharePoint site. To achieve this, we obtained a client_id and client_secret from https://<SP_domain>.sharepoint.com/sites/<my_site_name>/_layouts/15/appregnew.aspx Locally, I can successfully send a file to SharePoint using these secrets. On DataBricks, I receive SSL Errors. Normally, something like verify=false within the request can be provided, ignoring SSL certificate checks (if that is the actual issue). But this does not seem to be supported in the Python package that I am using: Office365-REST-Python-Client The message of the errors that are received without any attempt to circumvent the issue. SSLError: HTTPSConnectionPool(host='<SP_domain>.sharepoint.com', port=443): Max retries exceeded with url: /sites/<my sites name>(Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) Reproducible code sharepoint_url = 'https://....sharepoint.com/sites/...' client_credentials = ClientCredential(client_id=, client_secret=) ctx = ClientContext(sharepoint_url).with_credentials(client_credentials) web = ctx.web ctx.load(web) ctx.execute_query() # <<< Crashes here print(web.properties["Url"]) Results in: AttributeError: 'NoneType' object has no attribute 'text' Actual (not the last) error states: MaxRetryError: HTTPSConnectionPool(host='nsdigitaal.sharepoint.com', port=443): Max retries exceeded with url: /sites/Team-Camerainspectie (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) Full stack (sorry in advance :P) --------------------------------------------------------------------------- SSLEOFError Traceback (most recent call last) /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 698 # Make the request on the httplib connection object. 
--> 699 httplib_response = self._make_request( 700 conn, /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 381 try: --> 382 self._validate_conn(conn) 383 except (SocketTimeout, BaseSSLError) as e: /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 1009 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` -> 1010 conn.connect() 1011 /databricks/python/lib/python3.9/site-packages/urllib3/connection.py in connect(self) 415 --> 416 self.sock = ssl_wrap_socket( 417 sock=conn, /databricks/python/lib/python3.9/site-packages/urllib3/util/ssl_.py in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir, key_password, ca_cert_data, tls_in_tls) 448 if send_sni: --> 449 ssl_sock = _ssl_wrap_socket_impl( 450 sock, context, tls_in_tls, server_hostname=server_hostname /databricks/python/lib/python3.9/site-packages/urllib3/util/ssl_.py in _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname) 492 if server_hostname: --> 493 return ssl_context.wrap_socket(sock, server_hostname=server_hostname) 494 else: /usr/lib/python3.9/ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session) 499 # ctx._wrap_socket() --> 500 return self.sslsocket_class._create( 501 sock=sock, /usr/lib/python3.9/ssl.py in _create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session) 1039 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets") -> 1040 self.do_handshake() 1041 except (OSError, ValueError): /usr/lib/python3.9/ssl.py in do_handshake(self, block) 1308 self.settimeout(None) -> 1309 self._sslobj.do_handshake() 1310 finally: SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1129) During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) /databricks/python/lib/python3.9/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 438 if not chunked: --> 439 resp = conn.urlopen( 440 method=request.method, /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 754 --> 755 retries = retries.increment( 756 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] /databricks/python/lib/python3.9/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 573 if new_retry.is_exhausted(): --> 574 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 575 MaxRetryError: HTTPSConnectionPool(host='<tenant name>.sharepoint.com', port=443): Max retries exceeded with url: /sites/<site name> (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) During handling of the above exception, another exception occurred: SSLError Traceback (most recent call last) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in get_app_only_access_token(self) 40 try: ---> 41 realm = self._get_realm_from_target_url() 42 url_info = urlparse(self.url) 
/local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in _get_realm_from_target_url(self) 69 def _get_realm_from_target_url(self): ---> 70 response = requests.head(url=self.url, headers={'Authorization': 'Bearer'}) 71 return self.process_realm_response(response) /databricks/python/lib/python3.9/site-packages/requests/api.py in head(url, **kwargs) 101 kwargs.setdefault('allow_redirects', False) --> 102 return request('head', url, **kwargs) 103 /databricks/python/lib/python3.9/site-packages/requests/api.py in request(method, url, **kwargs) 60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 /databricks/python/lib/python3.9/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 541 send_kwargs.update(settings) --> 542 resp = self.send(prep, **send_kwargs) 543 /databricks/python/lib/python3.9/site-packages/requests/sessions.py in send(self, request, **kwargs) 654 # Send the request --> 655 r = adapter.send(request, **kwargs) 656 /databricks/python/lib/python3.9/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 513 # This branch is for urllib3 v1.22 and later. --> 514 raise SSLError(e, request=request) 515 SSLError: HTTPSConnectionPool(host='<tenant name>.sharepoint.com', port=443): Max retries exceeded with url: /sites/<site name> (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) <command-4083654498839573> in <cell line: 14>() 12 web = ctx.web 13 ctx.load(web) ---> 14 ctx.execute_query() 15 print(web.properties["Url"]) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/client_runtime_context.py in execute_query(self) 145 def execute_query(self): 146 """Submit request(s) to the server""" --> 147 self.pending_request().execute_query() 148 149 def add_query(self, query): /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/client_request.py in execute_query(self) 72 request = self.build_request(qry) 73 self.beforeExecute.notify(request) ---> 74 response = self.execute_request_direct(request) 75 response.raise_for_status() 76 self.process_response(response) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/odata/request.py in execute_request_direct(self, request) 34 """ 35 self._build_specific_request(request) ---> 36 return super(ODataRequest, self).execute_request_direct(request) 37 38 def build_request(self, query): /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/client_request.py in execute_request_direct(self, request) 84 :type request: office365.runtime.http.request_options.RequestOptions 85 """ ---> 86 self.context.authenticate_request(request) 87 if request.method == HttpMethod.Post: 88 if request.is_bytes or request.is_file: /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/sharepoint/client_context.py in authenticate_request(self, 
request) 238 239 def authenticate_request(self, request): --> 240 self.authentication_context.authenticate_request(request) 241 242 def _build_modification_query(self, request): /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/authentication_context.py in authenticate_request(self, request) 95 :type request: office365.runtime.http.request_options.RequestOptions 96 """ ---> 97 self._provider.authenticate_request(request) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in authenticate_request(self, request) 29 :type request: office365.runtime.http.request_options.RequestOptions 30 """ ---> 31 self.ensure_app_only_access_token() 32 request.set_header('Authorization', self._get_authorization_header()) 33 /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in ensure_app_only_access_token(self) 34 def ensure_app_only_access_token(self): 35 if self._cached_token is None: ---> 36 self._cached_token = self.get_app_only_access_token() 37 return self._cached_token and self._cached_token.is_valid 38 /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in get_app_only_access_token(self) 43 return self._get_app_only_access_token(url_info.hostname, realm) 44 except requests.exceptions.RequestException as e: ---> 45 self.error = e.response.text 46 raise ValueError(e.response.text) 47 AttributeError: 'NoneType' object has no attribute 'text' Tried solutions: Attempt 1: ctx = ClientContext(sharepoint_url).with_credentials(client_credentials) request = RequestOptions("{0}/_api/web/".format(sharepoint_url)) request.verify = False response = ctx.execute_request_direct(request) # <<< crashes here... example outdated? json = json.loads(response.content) web_title = json['d']['Title'] print("Web title: {0}".format(web_title)) Results in: TypeError: sequence item 2: expected str instance, RequestOptions found Attempt 2: Based on this SO thread. # If you're using a third-party module and want to disable the checks, # here's a context manager that monkey patches `requests` and changes # it so that verify=False is the default and suppresses the warning. import warnings import contextlib import requests from urllib3.exceptions import InsecureRequestWarning old_merge_environment_settings = requests.Session.merge_environment_settings @contextlib.contextmanager def no_ssl_verification(): opened_adapters = set() def merge_environment_settings(self, url, proxies, stream, verify, cert): # Verification happens only once per connection so we need to close # all the opened adapters once we're done. Otherwise, the effects of # verify=False persist beyond the end of this context manager. 
opened_adapters.add(self.get_adapter(url)) settings = old_merge_environment_settings(self, url, proxies, stream, verify, cert) settings['verify'] = False return settings requests.Session.merge_environment_settings = merge_environment_settings try: with warnings.catch_warnings(): warnings.simplefilter('ignore', InsecureRequestWarning) yield finally: requests.Session.merge_environment_settings = old_merge_environment_settings for adapter in opened_adapters: try: adapter.close() except: pass And running that like: with no_ssl_verification(): function_to_send_file_to_sharepoint() Results in the same Max number of attempts error Attempt 3: Based on this github issue. def disable_ssl(request): request.verify = False # Disable certification verification ctx.get_pending_request().beforeExecute += disable_ssl web = ctx.web ctx.load(web) ctx.execute_query() print(web.properties["Url"]) This code needs an update, since the thread was outdated. The current api provides pending_request and not get_pending_request(). With the fix applied, it results in the following: A: We got it working. The network configuration of databricks was configured with a firewall that blocked both these URLs which are both needed: https://<tenant name>.sharepoint.com/ https://accounts.accesscontrol.windows.net Then it worked flawlessly. I didn't figure out why the error is shown like this: AttributeError: 'NoneType' object has no attribute 'text'
SSL errors when sending file from Azure databricks to SharePoint
We are working in a Python notebook on Databricks and want to send a file to a SharePoint site. To achieve this, we obtained a client_id and client_secret from https://<SP_domain>.sharepoint.com/sites/<my_site_name>/_layouts/15/appregnew.aspx Locally, I can successfully send a file to SharePoint using these secrets. On DataBricks, I receive SSL Errors. Normally, something like verify=false within the request can be provided, ignoring SSL certificate checks (if that is the actual issue). But this does not seem to be supported in the Python package that I am using: Office365-REST-Python-Client The message of the errors that are received without any attempt to circumvent the issue. SSLError: HTTPSConnectionPool(host='<SP_domain>.sharepoint.com', port=443): Max retries exceeded with url: /sites/<my sites name>(Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) Reproducible code sharepoint_url = 'https://....sharepoint.com/sites/...' client_credentials = ClientCredential(client_id=, client_secret=) ctx = ClientContext(sharepoint_url).with_credentials(client_credentials) web = ctx.web ctx.load(web) ctx.execute_query() # <<< Crashes here print(web.properties["Url"]) Results in: AttributeError: 'NoneType' object has no attribute 'text' Actual (not the last) error states: MaxRetryError: HTTPSConnectionPool(host='nsdigitaal.sharepoint.com', port=443): Max retries exceeded with url: /sites/Team-Camerainspectie (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) Full stack (sorry in advance :P) --------------------------------------------------------------------------- SSLEOFError Traceback (most recent call last) /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 698 # Make the request on the httplib connection object. 
--> 699 httplib_response = self._make_request( 700 conn, /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 381 try: --> 382 self._validate_conn(conn) 383 except (SocketTimeout, BaseSSLError) as e: /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 1009 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` -> 1010 conn.connect() 1011 /databricks/python/lib/python3.9/site-packages/urllib3/connection.py in connect(self) 415 --> 416 self.sock = ssl_wrap_socket( 417 sock=conn, /databricks/python/lib/python3.9/site-packages/urllib3/util/ssl_.py in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir, key_password, ca_cert_data, tls_in_tls) 448 if send_sni: --> 449 ssl_sock = _ssl_wrap_socket_impl( 450 sock, context, tls_in_tls, server_hostname=server_hostname /databricks/python/lib/python3.9/site-packages/urllib3/util/ssl_.py in _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname) 492 if server_hostname: --> 493 return ssl_context.wrap_socket(sock, server_hostname=server_hostname) 494 else: /usr/lib/python3.9/ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session) 499 # ctx._wrap_socket() --> 500 return self.sslsocket_class._create( 501 sock=sock, /usr/lib/python3.9/ssl.py in _create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session) 1039 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets") -> 1040 self.do_handshake() 1041 except (OSError, ValueError): /usr/lib/python3.9/ssl.py in do_handshake(self, block) 1308 self.settimeout(None) -> 1309 self._sslobj.do_handshake() 1310 finally: SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1129) During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) /databricks/python/lib/python3.9/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 438 if not chunked: --> 439 resp = conn.urlopen( 440 method=request.method, /databricks/python/lib/python3.9/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 754 --> 755 retries = retries.increment( 756 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] /databricks/python/lib/python3.9/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 573 if new_retry.is_exhausted(): --> 574 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 575 MaxRetryError: HTTPSConnectionPool(host='<tenant name>.sharepoint.com', port=443): Max retries exceeded with url: /sites/<site name> (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) During handling of the above exception, another exception occurred: SSLError Traceback (most recent call last) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in get_app_only_access_token(self) 40 try: ---> 41 realm = self._get_realm_from_target_url() 42 url_info = urlparse(self.url) 
/local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in _get_realm_from_target_url(self) 69 def _get_realm_from_target_url(self): ---> 70 response = requests.head(url=self.url, headers={'Authorization': 'Bearer'}) 71 return self.process_realm_response(response) /databricks/python/lib/python3.9/site-packages/requests/api.py in head(url, **kwargs) 101 kwargs.setdefault('allow_redirects', False) --> 102 return request('head', url, **kwargs) 103 /databricks/python/lib/python3.9/site-packages/requests/api.py in request(method, url, **kwargs) 60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 /databricks/python/lib/python3.9/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 541 send_kwargs.update(settings) --> 542 resp = self.send(prep, **send_kwargs) 543 /databricks/python/lib/python3.9/site-packages/requests/sessions.py in send(self, request, **kwargs) 654 # Send the request --> 655 r = adapter.send(request, **kwargs) 656 /databricks/python/lib/python3.9/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 513 # This branch is for urllib3 v1.22 and later. --> 514 raise SSLError(e, request=request) 515 SSLError: HTTPSConnectionPool(host='<tenant name>.sharepoint.com', port=443): Max retries exceeded with url: /sites/<site name> (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))) During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) <command-4083654498839573> in <cell line: 14>() 12 web = ctx.web 13 ctx.load(web) ---> 14 ctx.execute_query() 15 print(web.properties["Url"]) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/client_runtime_context.py in execute_query(self) 145 def execute_query(self): 146 """Submit request(s) to the server""" --> 147 self.pending_request().execute_query() 148 149 def add_query(self, query): /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/client_request.py in execute_query(self) 72 request = self.build_request(qry) 73 self.beforeExecute.notify(request) ---> 74 response = self.execute_request_direct(request) 75 response.raise_for_status() 76 self.process_response(response) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/odata/request.py in execute_request_direct(self, request) 34 """ 35 self._build_specific_request(request) ---> 36 return super(ODataRequest, self).execute_request_direct(request) 37 38 def build_request(self, query): /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/client_request.py in execute_request_direct(self, request) 84 :type request: office365.runtime.http.request_options.RequestOptions 85 """ ---> 86 self.context.authenticate_request(request) 87 if request.method == HttpMethod.Post: 88 if request.is_bytes or request.is_file: /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/sharepoint/client_context.py in authenticate_request(self, 
request) 238 239 def authenticate_request(self, request): --> 240 self.authentication_context.authenticate_request(request) 241 242 def _build_modification_query(self, request): /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/authentication_context.py in authenticate_request(self, request) 95 :type request: office365.runtime.http.request_options.RequestOptions 96 """ ---> 97 self._provider.authenticate_request(request) /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in authenticate_request(self, request) 29 :type request: office365.runtime.http.request_options.RequestOptions 30 """ ---> 31 self.ensure_app_only_access_token() 32 request.set_header('Authorization', self._get_authorization_header()) 33 /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in ensure_app_only_access_token(self) 34 def ensure_app_only_access_token(self): 35 if self._cached_token is None: ---> 36 self._cached_token = self.get_app_only_access_token() 37 return self._cached_token and self._cached_token.is_valid 38 /local_disk0/.ephemeral_nfs/envs/pythonEnv-e6edc2d5-a811-4e43-a0ea-d29958d03122/lib/python3.9/site-packages/office365/runtime/auth/providers/acs_token_provider.py in get_app_only_access_token(self) 43 return self._get_app_only_access_token(url_info.hostname, realm) 44 except requests.exceptions.RequestException as e: ---> 45 self.error = e.response.text 46 raise ValueError(e.response.text) 47 AttributeError: 'NoneType' object has no attribute 'text' Tried solutions: Attempt 1: ctx = ClientContext(sharepoint_url).with_credentials(client_credentials) request = RequestOptions("{0}/_api/web/".format(sharepoint_url)) request.verify = False response = ctx.execute_request_direct(request) # <<< crashes here... example outdated? json = json.loads(response.content) web_title = json['d']['Title'] print("Web title: {0}".format(web_title)) Results in: TypeError: sequence item 2: expected str instance, RequestOptions found Attempt 2: Based on this SO thread. # If you're using a third-party module and want to disable the checks, # here's a context manager that monkey patches `requests` and changes # it so that verify=False is the default and suppresses the warning. import warnings import contextlib import requests from urllib3.exceptions import InsecureRequestWarning old_merge_environment_settings = requests.Session.merge_environment_settings @contextlib.contextmanager def no_ssl_verification(): opened_adapters = set() def merge_environment_settings(self, url, proxies, stream, verify, cert): # Verification happens only once per connection so we need to close # all the opened adapters once we're done. Otherwise, the effects of # verify=False persist beyond the end of this context manager. 
opened_adapters.add(self.get_adapter(url)) settings = old_merge_environment_settings(self, url, proxies, stream, verify, cert) settings['verify'] = False return settings requests.Session.merge_environment_settings = merge_environment_settings try: with warnings.catch_warnings(): warnings.simplefilter('ignore', InsecureRequestWarning) yield finally: requests.Session.merge_environment_settings = old_merge_environment_settings for adapter in opened_adapters: try: adapter.close() except: pass And running that like: with no_ssl_verification(): function_to_send_file_to_sharepoint() Results in the same Max number of attempts error Attempt 3: Based on this github issue. def disable_ssl(request): request.verify = False # Disable certification verification ctx.get_pending_request().beforeExecute += disable_ssl web = ctx.web ctx.load(web) ctx.execute_query() print(web.properties["Url"]) This code needs an update, since the thread was outdated. The current api provides pending_request and not get_pending_request(). With the fix applied, it results in the following:
[ "We got it working.\nThe network configuration of databricks was configured with a firewall that blocked both these URLs which are both needed:\n\nhttps://<tenant name>.sharepoint.com/\nhttps://accounts.accesscontrol.windows.net\n\nThen it worked flawlessly.\nI didn't figure out why the error is shown like this:\nAttributeError: 'NoneType' object has no attribute 'text'\n" ]
[ 0 ]
[]
[]
[ "azure_databricks", "python", "sharepoint", "ssl" ]
stackoverflow_0074377680_azure_databricks_python_sharepoint_ssl.txt
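A small sketch for verifying the accepted answer's fix from a Databricks notebook, assuming outbound HTTPS is what the firewall was blocking; the hostnames are placeholders exactly as in the answer:

import requests

# Both endpoints must be reachable from the driver for Office365-REST-Python-Client
# to complete its ACS token handshake; a blocked host typically surfaces as the
# SSLEOFError seen in the question.
for url in [
    "https://<tenant name>.sharepoint.com/",
    "https://accounts.accesscontrol.windows.net/",
]:
    try:
        r = requests.head(url, timeout=10)
        print(url, "reachable, status", r.status_code)
    except requests.exceptions.RequestException as e:
        print(url, "NOT reachable:", e)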
Q: Mysql connector python error violation of protocol (_ssl.c:2483) Im running an application running React, python and MySQL with a docker compose, When I run the application everything works fine, but when a pettition to the database (from the frontend with axios) is made multiple times, the connection breaks and the following error appears. Everything is running locally "2055: Lost connection to MySQL server at 'database:3306', system error: 8 EOF occurred in violation of protocol (_ssl.c:2483)" Here is the configuration of my connection and my dockerfile import mysql.connector return mysql.connector.connect( host=os.environ.get("database"), user="root", password="root", database="locatec", port=3306, auth_plugin='mysql_native_password' ) database: image: mysql container_name: database restart: always ports: - '3306:3306' #command: --init-file /init.sql volumes: - ./init.sql:/docker-entrypoint-initdb.d/init.sql - ~/apps/mysql:/var/lib/locatec5 environment: MYSQL_ROOT_PASSWORD: root MYSQL_USER: test MYSQL_PASSWORD: test MYSQL_ROOT_HOST: '%' TIMEZONE: UTC networks: - locatec server: build: context: ../../../ dockerfile: infra/deploy/backend/dev/Dockerfile restart: always ports: - "5000:5000" container_name: server networks: - locatec depends_on: - database I tried using mysql-connector-python==8.0.30 , also I configured the param of ssl to false A: This looks like an error I faced some time ago. In my case, it was due to a client initiating a secured connection to a non-secured server. The error message indicates that the client has initiated a SSL/TLS connection and is expecting to receive more data than the server has sent during the handshake procedure. There are two ways to fix this, either by: enabling SSL/TLS on server side so that the client can initiate a SSL/TLS handshake and get a consistent response. [Docker side] disabling the SSL/TLS from the database connector. [Python side]
Mysql connector python error violation of protocol (_ssl.c:2483)
I'm running an application using React, Python and MySQL with docker compose. When I run the application everything works fine, but when a request to the database (from the frontend with axios) is made multiple times, the connection breaks and the following error appears. Everything is running locally. "2055: Lost connection to MySQL server at 'database:3306', system error: 8 EOF occurred in violation of protocol (_ssl.c:2483)" Here is the configuration of my connection and my docker compose file import mysql.connector return mysql.connector.connect( host=os.environ.get("database"), user="root", password="root", database="locatec", port=3306, auth_plugin='mysql_native_password' ) database: image: mysql container_name: database restart: always ports: - '3306:3306' #command: --init-file /init.sql volumes: - ./init.sql:/docker-entrypoint-initdb.d/init.sql - ~/apps/mysql:/var/lib/locatec5 environment: MYSQL_ROOT_PASSWORD: root MYSQL_USER: test MYSQL_PASSWORD: test MYSQL_ROOT_HOST: '%' TIMEZONE: UTC networks: - locatec server: build: context: ../../../ dockerfile: infra/deploy/backend/dev/Dockerfile restart: always ports: - "5000:5000" container_name: server networks: - locatec depends_on: - database I tried using mysql-connector-python==8.0.30, and I also configured the ssl param to false
[ "This looks like an error I faced some time ago. In my case, it was due to a client initiating a secured connection to a non-secured server.\nThe error message indicates that the client has initiated a SSL/TLS connection and is expecting to receive more data than the server has sent during the handshake procedure.\nThere are two ways to fix this, either by:\n\nenabling SSL/TLS on server side so that the client can initiate a SSL/TLS handshake and get a consistent response. [Docker side]\ndisabling the SSL/TLS from the database connector. [Python side]\n\n" ]
[ 1 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0074416286_mysql_python.txt
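A minimal sketch of the second option from the answer (disabling TLS on the connector side), reusing the question's own connection settings; ssl_disabled is a standard mysql-connector-python argument, but turning TLS off is only reasonable here because both containers talk over a private docker-compose network:

import os
import mysql.connector

# Explicitly disable TLS so the connector no longer attempts the handshake that
# was breaking with "EOF occurred in violation of protocol".
conn = mysql.connector.connect(
    host=os.environ.get("database"),
    user="root",
    password="root",
    database="locatec",
    port=3306,
    auth_plugin="mysql_native_password",
    ssl_disabled=True,
)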
Q: Example of jqgrid with python So I've cloned this https://bitbucket.org/romildo/django-jqgrid-demo.git as I am looking for a working example of jqgrid with django. I've been updating the code (as this seems like it was written for a version 2 of django and I'm workng on 4.1) I'm completely stumped by the lines from jqgrid import JqGrid giving me this error ModuleNotFoundError: No module named 'jqgrid' I cannot find a reference to jqgrid within pip and I cannot install one (jqgrid is not a python package) I understand that jqgrid is a javascript component around jquery but how do I get that to work in Python I have google for django-jqgrid and on youtube. None of the answers provide enough information to get a simple working example up. There seems to be an assumption that everything is installed and I'd like to understand what is required where and how to reference What am I missing? A: Simply you can install this library: pip install js.jqgrid And now your above error will solve
Example of jqgrid with python
So I've cloned this https://bitbucket.org/romildo/django-jqgrid-demo.git as I am looking for a working example of jqgrid with Django. I've been updating the code (as this seems like it was written for version 2 of Django and I'm working on 4.1). I'm completely stumped by the line from jqgrid import JqGrid giving me this error: ModuleNotFoundError: No module named 'jqgrid' I cannot find a reference to jqgrid within pip and I cannot install one (jqgrid is not a Python package). I understand that jqgrid is a JavaScript component built around jQuery, but how do I get that to work in Python? I have googled for django-jqgrid and searched on YouTube. None of the answers provide enough information to get a simple working example up. There seems to be an assumption that everything is installed, and I'd like to understand what is required, where, and how to reference it. What am I missing?
[ "Simply you can install this library:\npip install js.jqgrid\n\nAnd now your above error will solve\n" ]
[ 0 ]
[]
[]
[ "django", "jqgrid", "python" ]
stackoverflow_0074458777_django_jqgrid_python.txt
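Since jqGrid itself is only JavaScript (served as static files and wired up with a <script> tag in the template), the Python side usually amounts to a JSON view that the grid's url option points at. A hypothetical Django sketch, with an invented Item model and using the classic jqGrid jsonReader shape (page/total/records/rows):

# views.py
from django.http import JsonResponse

from .models import Item  # hypothetical model

def item_grid_json(request):
    # jqGrid sends ?page=..&rows=.. by default; return one page of records.
    page = int(request.GET.get("page", 1))
    rows = int(request.GET.get("rows", 20))
    qs = Item.objects.all()
    records = qs.count()
    start, end = (page - 1) * rows, page * rows
    data = {
        "page": page,
        "total": (records + rows - 1) // rows,
        "records": records,
        "rows": [{"id": obj.pk, "cell": [obj.pk, str(obj)]} for obj in qs[start:end]],
    }
    return JsonResponse(data)

The from jqgrid import JqGrid line in the demo appears to refer to a Python helper module from the old django-jqgrid project rather than to the JavaScript plugin itself.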
Q: Structure Folder Path with Repository path Cloud Composer DAG I need to run the DAG with the repository folder name, and I need to call the other modules from another directory from another path repository deployed. So, I have a cloudbuild.yaml that will deploy the script into DAG folder and Plugins folder, but I still didn't know, how to get the other modules from the other path on Cloud Composer Bucket Storage. This is my Bucket Storage path cloud-composer-bucket/ dags/ github_my_repository_deployed-testing/ test_dag.py plugins/ github_my_repository_deployed-testing/ planning/ modules_1.py I need to call modules_1.py from my test_dag.py, I used this command to call the module from planning.modules_1 import get_data But from this method, I got an error shown like this Broken DAG: [/home/airflow/gcs/dags/github_my_repository_deployed-testing/test_dag.py] Traceback (most recent call last): File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/airflow/gcs/dags/github_my_repository_deployed-testing/test_dag.py", line 7, in <module> from planning.modules_1 import get_date ModuleNotFoundError: No module named 'planning' This is my cloudbuild.yaml steps: - id: 'Push into Composer DAG' name: 'google/cloud-sdk' entrypoint: 'sh' args: [ '-c', 'gsutil -m rsync -d -r ./dags ${_COMPOSER_BUCKET}/dags/$REPO_NAME'] - id: 'Push into Composer Plugins' name: 'google/cloud-sdk' entrypoint: 'sh' args: [ '-c', 'gsutil -m rsync -d -r ./plugins ${_COMPOSER_BUCKET}/plugins/$REPO_NAME'] - id: 'Code Scanning' name: 'python:3.7-slim' entrypoint: 'sh' args: [ '-c', 'pip install bandit && bandit --exit-zero -r ./'] substitutions: _CONTAINER_VERSION: v0.0.1 _COMPOSER_BUCKET: gs://asia-southeast1-testing-cloud-composer-025c0511-bucket My question is, what is the best and how to call the other modules into DAG? A: You can put every modules in the Cloud Composer DAG folder, example : cloud-composer-bucket/ dags/ github_my_repository_deployed-testing/ test_dag.py planning/ modules_1.py setup.py On the DAG Python code, you can import your module with the following way : from planning.modules_1 import get_data As I remembered, the setup.py is created by Cloud Composer in the DAG root folder, if it's not the case, you can copy the setup.py in the DAG folder : bucket/dags/setup.py Example of setup.py file : from setuptools import find_packages, setup setup( name="composer_env_python_lib", version="0.0.1", install_requires=[], data_files=[], packages=find_packages(), ) Other possible solution You can also use internal Python packages from GCP Artifact registry if you want (example with your package planning). Then you can download your internal Python packages from Cloud Composer via PyPiPackages, I share with you a link about this : private repo Composer Artifact registry
Structure Folder Path with Repository path Cloud Composer DAG
I need to run the DAG with the repository folder name, and I need to call the other modules from another directory in another deployed repository path. So, I have a cloudbuild.yaml that will deploy the script into the DAG folder and the Plugins folder, but I still don't know how to get the other modules from the other path in the Cloud Composer bucket storage. This is my Bucket Storage path cloud-composer-bucket/ dags/ github_my_repository_deployed-testing/ test_dag.py plugins/ github_my_repository_deployed-testing/ planning/ modules_1.py I need to call modules_1.py from my test_dag.py. I used this statement to import the module: from planning.modules_1 import get_data But with this approach, I got an error like this: Broken DAG: [/home/airflow/gcs/dags/github_my_repository_deployed-testing/test_dag.py] Traceback (most recent call last): File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/airflow/gcs/dags/github_my_repository_deployed-testing/test_dag.py", line 7, in <module> from planning.modules_1 import get_date ModuleNotFoundError: No module named 'planning' This is my cloudbuild.yaml steps: - id: 'Push into Composer DAG' name: 'google/cloud-sdk' entrypoint: 'sh' args: [ '-c', 'gsutil -m rsync -d -r ./dags ${_COMPOSER_BUCKET}/dags/$REPO_NAME'] - id: 'Push into Composer Plugins' name: 'google/cloud-sdk' entrypoint: 'sh' args: [ '-c', 'gsutil -m rsync -d -r ./plugins ${_COMPOSER_BUCKET}/plugins/$REPO_NAME'] - id: 'Code Scanning' name: 'python:3.7-slim' entrypoint: 'sh' args: [ '-c', 'pip install bandit && bandit --exit-zero -r ./'] substitutions: _CONTAINER_VERSION: v0.0.1 _COMPOSER_BUCKET: gs://asia-southeast1-testing-cloud-composer-025c0511-bucket My question is: what is the best way to call the other modules from the DAG?
[ "You can put every modules in the Cloud Composer DAG folder, example :\ncloud-composer-bucket/\n dags/\n github_my_repository_deployed-testing/\n test_dag.py\n planning/\n modules_1.py\n \n setup.py\n\nOn the DAG Python code, you can import your module with the following way :\nfrom planning.modules_1 import get_data\n\nAs I remembered, the setup.py is created by Cloud Composer in the DAG root folder, if it's not the case, you can copy the setup.py in the DAG folder :\nbucket/dags/setup.py\n\nExample of setup.py file :\nfrom setuptools import find_packages, setup\n\nsetup(\n name=\"composer_env_python_lib\",\n version=\"0.0.1\",\n install_requires=[],\n data_files=[],\n packages=find_packages(),\n)\n\nOther possible solution\nYou can also use internal Python packages from GCP Artifact registry if you want (example with your package planning).\nThen you can download your internal Python packages from Cloud Composer via PyPiPackages, I share with you a link about this :\nprivate repo Composer Artifact registry\n" ]
[ 1 ]
[]
[]
[ "airflow", "google_cloud_composer", "google_cloud_storage", "python" ]
stackoverflow_0074458726_airflow_google_cloud_composer_google_cloud_storage_python.txt
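A minimal sketch of what test_dag.py could look like with the layout from the answer, assuming Airflow 2.x on Cloud Composer; because planning/ sits directly under the dags/ folder (which Airflow puts on sys.path), the package-style import works:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

from planning.modules_1 import get_data  # module deployed next to the DAGs

with DAG(
    dag_id="test_dag",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # Wrap the imported helper in a task; get_data is the function from the question.
    PythonOperator(task_id="get_data", python_callable=get_data)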
Q: Finding the most likely correct string from multiple OCR results of the same text in Python I have run EasyOCR in Python over a large number of black and white images of the text on soldered components, with the goal of collecting the writing on each of them. The results are mostly good, but there are some inconsistent results that I would like to filter out. I have used multiple pictures of the same component and they are all labeled, so my DataFrame looks like this. ID OCR Guesses component 1 [RNGSE, BN65E, 8NGse, BN65E, BN65E] component 2 [DFEAW, DFEAW, DF3AW, DFEAW] component 3 [1002, 1002, l002, 1002] As you can see, most of the letters are identified correctly, but sometimes one of the letters is identified as a number or vice versa. Is there an easy method to "take the average" of these strings to find the most likely correct OCR result? The result I am aiming for would look like the following: ID OCR Guesses Correct component 1 [RNGSE, BN65E, 8NGse, BN65E, BN65E] BNGSE component 2 [DFEAW, DFEAW, DF3AW, DFEAW] DFEAW component 3 [1002, 1002, l002, 1002] 1002 It would be great if there was a module that takes into account common confusing characters such as 1 and l, 6 and G, B and R etc. Any help is appreciated. Thanks! A: You can find the Levenshtein distance (or edit distance) for each pair of guesses, and then select the one which is closer to all other. There are many libraries implementing Levenshtein distance, for this example I'll use editdistance (there may be better implementations with more parameters to tune, this is one I just found). import numpy as np import editdistance guesses = ['foo', 'foo 2', 'Foo 2'] pair_distances = np.zeros((len(guesses), len(guesses)) for i, gi in enumerate (guesses): for j, gj in enumerate (guesses): pair_distances[i, j] = editdistance.eval(gi, gj) sum_distances = np.sum(pair_distances, axis=0) idx_min = np.argmin(sum_distances) best_guess = guesses[idx_min] Note that np.argmin broke ties by keeping the first match. Previous code may lead to situations where multiple candidates has the best distance. You can take some other decision to break ties, like considering the best guess with case-insesitives (i.e. just same code but convert guesses to low case before computing). However, this may also lead to ties. That said, this code snippet should work, but it is not so efficient (every distance is computed twice since d(i, j) == d(j, i) and d(i, i) is always 0, so don't need to compute it)) but I think it is clear enough to explain my point. A: One simple way would be to count the number of occurrences of each characters and to take each time the most frequent character. For example: pred_list = ["DFEAW", "DFEAW", "DF3AW", "DFEAW"] avg_string = "" for i in range(len(pred_list[0])): character_count = {} for pred in pred_list: if pred[i] not in character_count: character_count[pred[i]] = 1 else: character_count[pred[i]] += 1 avg_string += max(character_count, key=character_count.get) print(avg_string) Result: "DFEAW" Note that this approach doesn't take into account the frequently confused characters. If there's a possibility of misalignment between the OCR results (e.g. the OCR predicted two characters instead of one, there is an extra space...) you would need to first align the different strings between each other (see: Multiple Sequence Alignement). The python-Levenshtein module can be useful in that case: import Levenshtein Levenshtein.median([" DFEA W", "DFEAW", "DF3AW", "DFEAVV"]) Result: "DFEAW"
Finding the most likely correct string from multiple OCR results of the same text in Python
I have run EasyOCR in Python over a large number of black and white images of the text on soldered components, with the goal of collecting the writing on each of them. The results are mostly good, but there are some inconsistent results that I would like to filter out. I have used multiple pictures of the same component and they are all labeled, so my DataFrame looks like this. ID OCR Guesses component 1 [RNGSE, BN65E, 8NGse, BN65E, BN65E] component 2 [DFEAW, DFEAW, DF3AW, DFEAW] component 3 [1002, 1002, l002, 1002] As you can see, most of the letters are identified correctly, but sometimes one of the letters is identified as a number or vice versa. Is there an easy method to "take the average" of these strings to find the most likely correct OCR result? The result I am aiming for would look like the following: ID OCR Guesses Correct component 1 [RNGSE, BN65E, 8NGse, BN65E, BN65E] BNGSE component 2 [DFEAW, DFEAW, DF3AW, DFEAW] DFEAW component 3 [1002, 1002, l002, 1002] 1002 It would be great if there was a module that takes into account common confusing characters such as 1 and l, 6 and G, B and R etc. Any help is appreciated. Thanks!
[ "You can find the Levenshtein distance (or edit distance) for each pair of guesses, and then select the one which is closer to all other.\nThere are many libraries implementing Levenshtein distance, for this example I'll use editdistance (there may be better implementations with more parameters to tune, this is one I just found).\nimport numpy as np\nimport editdistance\n\nguesses = ['foo', 'foo 2', 'Foo 2']\npair_distances = np.zeros((len(guesses), len(guesses))\n\nfor i, gi in enumerate (guesses):\n for j, gj in enumerate (guesses):\n pair_distances[i, j] = editdistance.eval(gi, gj)\n\nsum_distances = np.sum(pair_distances, axis=0)\n\nidx_min = np.argmin(sum_distances)\n\nbest_guess = guesses[idx_min]\n\nNote that np.argmin broke ties by keeping the first match. Previous code may lead to situations where multiple candidates has the best distance. You can take some other decision to break ties, like considering the best guess with case-insesitives (i.e. just same code but convert guesses to low case before computing). However, this may also lead to ties.\nThat said, this code snippet should work, but it is not so efficient (every distance is computed twice since d(i, j) == d(j, i) and d(i, i) is always 0, so don't need to compute it)) but I think it is clear enough to explain my point.\n", "One simple way would be to count the number of occurrences of each characters and to take each time the most frequent character.\nFor example:\npred_list = [\"DFEAW\", \"DFEAW\", \"DF3AW\", \"DFEAW\"]\navg_string = \"\"\n\nfor i in range(len(pred_list[0])):\n character_count = {}\n \n for pred in pred_list:\n if pred[i] not in character_count:\n character_count[pred[i]] = 1\n else: \n character_count[pred[i]] += 1\n \n avg_string += max(character_count, key=character_count.get)\n\nprint(avg_string)\n\nResult: \"DFEAW\"\nNote that this approach doesn't take into account the frequently confused characters.\nIf there's a possibility of misalignment between the OCR results (e.g. the OCR predicted two characters instead of one, there is an extra space...) you would need to first align the different strings between each other (see: Multiple Sequence Alignement).\nThe python-Levenshtein module can be useful in that case:\nimport Levenshtein \nLevenshtein.median([\" DFEA W\", \"DFEAW\", \"DF3AW\", \"DFEAVV\"])\n\nResult: \"DFEAW\"\n" ]
[ 0, 0 ]
[]
[]
[ "ocr", "pandas", "python", "text_recognition" ]
stackoverflow_0073427276_ocr_pandas_python_text_recognition.txt
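Neither answer addresses the confusable characters the question mentions (1/l, 6/G, B/R and so on). One illustrative way to fold that into the per-position vote, assuming the guesses are equal-length and already aligned, and using a hand-made (and deliberately lossy) mapping:

from collections import Counter

# Map characters that OCR commonly confuses onto one canonical symbol, then take a
# per-position majority vote. The table below is invented for the question's examples
# and collapses real digits into letters, so tune it to the expected label format.
CONFUSABLES = str.maketrans({"l": "1", "I": "1", "O": "0", "o": "0",
                             "8": "B", "R": "B", "6": "G", "5": "S"})

def consensus(guesses):
    normalized = [g.translate(CONFUSABLES).upper() for g in guesses]
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*normalized))

print(consensus(["RNGSE", "BN65E", "8NGse", "BN65E", "BN65E"]))  # BNGSE
print(consensus(["1002", "1002", "l002", "1002"]))               # 1002

For guesses of different lengths, aligning them first (or falling back to Levenshtein.median as in the second answer) is the safer route.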
Q: How to use tabulate library with zip_longest? I am trying to use tabulate with the zip_longest function. So I have it like this: from __future__ import print_function from tabulate import tabulate from itertools import zip_longest import itertools import locale import operator import re verdi50 ="[' \n\na)\n\n \n\nFactuur\nVerdi Import Schoolfruit\nFactuur nr. : 71201 Koopliedenweg 33\nDeb. nr. : 108636 2991 LN BARENDRECHT\nYour VAT nr. : NL851703884B01 Nederland\nFactuur datum : 10-12-21\nAantal Omschrijving Prijs Bedrag\nOrder number : 77553 Loading date : 09-12-21 Incoterm: : FOT\nYour ref. : SCHOOLFRUIT Delivery date :\nWK50\nD.C. Schoolfruit\n16 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 123,20\n360 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 2.772,00\n6 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,/0 € 46,20\n75 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 577,50\n9 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 69,30\n688 Appels Royal Gala 13kg 60/65 Generica PL I € 5,07 € 3.488,16\n22 Sinaasappels Valencias 15kg 105 Elara ZAI € 6,25 € 137,50\n80 Sinaasappels Valencias 15kg 105 Elara ZAI € 6,25 € 500,00\n160 Sinaasappels Valencias 15kg 105 FVC ZAI € 6,25 € 1.000,00\n320 Sinaasappels Valencias 15kg 105 Generica ZAI € 6,25 € 2.000,00\n160 Sinaasappels Valencias 15kg 105 Noordhoek ZA I € 6,25 € 1.000,00\n61 Sinaasappels Valencias 15kg 105 Noordhoek ZA I € 6,25 € 381,25\nTotaal Colli Totaal Netto Btw Btw Bedrag Totaal Bedrag\n€ 12.095,11 1.088,56\nBetaling binnen 30 dagen\nAchterstand wordt gemeld bij de kredietverzekeringsmaatschappij\nVerDi Import BV ING Bank NV. Rotterdam IBAN number: NL17INGB0006959173 ~~\n\n \n\nKoopliedenweg 38, 2991 LN Barendrecht, The Netherlands SWIFT/BIC: INGBNL2A, VAT number: NL851703884B01 i\nTel, +31 (0}1 80 61 88 11, Fax +31 (0)1 8061 88 25 Chamber of Commerce Rotterdam no. 55424309 VerDi\n\nE-mail: sales@verdiimport.nl, www.verdiimport.nl Dutch law shall apply. The Rotterdam District Court shall have exclusive jurisdiction.\n\nrut ard wegetables\n\x0c']" fruit_words = ['Appels', 'Ananas', 'Peen Waspeen', 'Tomaten Cherry', 'Sinaasappels', 'Watermeloenen', 'Rettich', 'Peren', 'Peen', 'Mandarijnen', 'Meloenen', 'Grapefruit'] def total_amount_fruit_regex(format_=re.escape): return r"(\d*(?:\.\d+)*)\s*~?=?\s*(" + '|'.join( format_(word) for word in fruit_words) + ')' def total_fruit_per_sort(): number_found = re.findall(total_amount_fruit_regex(), verdi50) fruit_dict = {} for n, f in number_found: fruit_dict[f] = fruit_dict.get(f, 0) + int(n) result = '\n'.join(f'{key}: {val}' for key, val in fruit_dict.items()) return result def fruit_list(format_=re.escape): return "|".join(format_(word) for word in fruit_words) def findallfruit(regex): return re.findall(regex, verdi50) def verdi_total_number_fruit_regex(): return rf"(\d*(?:\.\d+)*)\s*\W+(?:{fruit_list()})" def show_extracted_data_from_file(): regexes = [ verdi_total_number_fruit_regex(), ] matches = [findallfruit(regex) for regex in regexes] fruit_list = total_fruit_per_sort().split("\n") return "\n".join(" \t ".join(items) for items in zip_longest(tabulate(*matches, fruit_list, headers=['header','header2'], fillvalue='', ))) print(show_extracted_data_from_file()) But then I get this error: TypeError at /controlepunt140 tabulate() got multiple values for argument 'headers' So how to improve this? So if you remove the tabulate function. 
Then the format looks like this: 16 Watermeloenen: 466 360 Appels: 688 6 Sinaasappels: 803 75 9 688 22 80 160 320 160 61 So expected output is with headers: header1 header2 ------- ------- 16 Watermeloenen: 466 360 Appels: 688 6 Sinaasappels: 803 75 9 688 22 80 160 320 160 61 Like how it works in tabulate. A: You should be passing a single table to the tabulate() function, passing multiple lists results in the TypeError: tabulate() got multiple values for argument 'headers' you are seeing. Updating your return statement - def show_extracted_data_from_file(): regexes = [ verdi_total_number_fruit_regex(), ] matches = [findallfruit(regex) for regex in regexes] fruit_list = total_fruit_per_sort().split("\n") return tabulate(zip_longest(*matches, fruit_list), headers=['header1','header2']) Output: header1 header2 --------- ------------------ 16 Watermeloenen: 466 360 Appels: 688 6 Sinaasappels: 803 75 9 688 22 80 160 320 160 61
How to use tabulate library with zip_longest?
I am trying to use tabulate with the zip_longest function. So I have it like this: from __future__ import print_function from tabulate import tabulate from itertools import zip_longest import itertools import locale import operator import re verdi50 ="[' \n\na)\n\n \n\nFactuur\nVerdi Import Schoolfruit\nFactuur nr. : 71201 Koopliedenweg 33\nDeb. nr. : 108636 2991 LN BARENDRECHT\nYour VAT nr. : NL851703884B01 Nederland\nFactuur datum : 10-12-21\nAantal Omschrijving Prijs Bedrag\nOrder number : 77553 Loading date : 09-12-21 Incoterm: : FOT\nYour ref. : SCHOOLFRUIT Delivery date :\nWK50\nD.C. Schoolfruit\n16 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 123,20\n360 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 2.772,00\n6 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,/0 € 46,20\n75 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 577,50\n9 Watermeloenen Quetzali 16kg 4 IMPERIAL BR I € 7,70 € 69,30\n688 Appels Royal Gala 13kg 60/65 Generica PL I € 5,07 € 3.488,16\n22 Sinaasappels Valencias 15kg 105 Elara ZAI € 6,25 € 137,50\n80 Sinaasappels Valencias 15kg 105 Elara ZAI € 6,25 € 500,00\n160 Sinaasappels Valencias 15kg 105 FVC ZAI € 6,25 € 1.000,00\n320 Sinaasappels Valencias 15kg 105 Generica ZAI € 6,25 € 2.000,00\n160 Sinaasappels Valencias 15kg 105 Noordhoek ZA I € 6,25 € 1.000,00\n61 Sinaasappels Valencias 15kg 105 Noordhoek ZA I € 6,25 € 381,25\nTotaal Colli Totaal Netto Btw Btw Bedrag Totaal Bedrag\n€ 12.095,11 1.088,56\nBetaling binnen 30 dagen\nAchterstand wordt gemeld bij de kredietverzekeringsmaatschappij\nVerDi Import BV ING Bank NV. Rotterdam IBAN number: NL17INGB0006959173 ~~\n\n \n\nKoopliedenweg 38, 2991 LN Barendrecht, The Netherlands SWIFT/BIC: INGBNL2A, VAT number: NL851703884B01 i\nTel, +31 (0}1 80 61 88 11, Fax +31 (0)1 8061 88 25 Chamber of Commerce Rotterdam no. 55424309 VerDi\n\nE-mail: sales@verdiimport.nl, www.verdiimport.nl Dutch law shall apply. The Rotterdam District Court shall have exclusive jurisdiction.\n\nrut ard wegetables\n\x0c']" fruit_words = ['Appels', 'Ananas', 'Peen Waspeen', 'Tomaten Cherry', 'Sinaasappels', 'Watermeloenen', 'Rettich', 'Peren', 'Peen', 'Mandarijnen', 'Meloenen', 'Grapefruit'] def total_amount_fruit_regex(format_=re.escape): return r"(\d*(?:\.\d+)*)\s*~?=?\s*(" + '|'.join( format_(word) for word in fruit_words) + ')' def total_fruit_per_sort(): number_found = re.findall(total_amount_fruit_regex(), verdi50) fruit_dict = {} for n, f in number_found: fruit_dict[f] = fruit_dict.get(f, 0) + int(n) result = '\n'.join(f'{key}: {val}' for key, val in fruit_dict.items()) return result def fruit_list(format_=re.escape): return "|".join(format_(word) for word in fruit_words) def findallfruit(regex): return re.findall(regex, verdi50) def verdi_total_number_fruit_regex(): return rf"(\d*(?:\.\d+)*)\s*\W+(?:{fruit_list()})" def show_extracted_data_from_file(): regexes = [ verdi_total_number_fruit_regex(), ] matches = [findallfruit(regex) for regex in regexes] fruit_list = total_fruit_per_sort().split("\n") return "\n".join(" \t ".join(items) for items in zip_longest(tabulate(*matches, fruit_list, headers=['header','header2'], fillvalue='', ))) print(show_extracted_data_from_file()) But then I get this error: TypeError at /controlepunt140 tabulate() got multiple values for argument 'headers' So how to improve this? So if you remove the tabulate function. 
Then the format looks like this: 16 Watermeloenen: 466 360 Appels: 688 6 Sinaasappels: 803 75 9 688 22 80 160 320 160 61 So expected output is with headers: header1 header2 ------- ------- 16 Watermeloenen: 466 360 Appels: 688 6 Sinaasappels: 803 75 9 688 22 80 160 320 160 61 Like how it works in tabulate.
[ "You should be passing a single table to the tabulate() function, passing multiple lists results in the TypeError: tabulate() got multiple values for argument 'headers' you are seeing.\nUpdating your return statement -\ndef show_extracted_data_from_file():\n regexes = [\n verdi_total_number_fruit_regex(),\n ]\n matches = [findallfruit(regex) for regex in regexes]\n fruit_list = total_fruit_per_sort().split(\"\\n\")\n return tabulate(zip_longest(*matches, fruit_list), headers=['header1','header2'])\n\nOutput:\n header1 header2\n--------- ------------------\n 16 Watermeloenen: 466\n 360 Appels: 688\n 6 Sinaasappels: 803\n 75\n 9\n 688\n 22\n 80\n 160\n 320\n 160\n 61\n\n" ]
[ 1 ]
[]
[]
[ "python", "tabulate" ]
stackoverflow_0074458320_python_tabulate.txt
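A minimal, self-contained sketch of the accepted answer's pattern, with made-up lists standing in for the regex matches: zip_longest pads the shorter column with the fillvalue, and tabulate receives that single table of rows plus the headers keyword.

from itertools import zip_longest
from tabulate import tabulate

counts = ['16', '360', '6', '75']                # hypothetical first column
totals = ['Watermeloenen: 466', 'Appels: 688']   # hypothetical second column

rows = list(zip_longest(counts, totals, fillvalue=''))  # pads the shorter list
print(tabulate(rows, headers=['header1', 'header2']))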
Q: How to resize a table with xlwings? I am trying to figure out how to resize a table using xlwings but can't figure out how. I have tried using the resize(range) property but I am getting getting the following error : AttributeError: 'str' object has no attribute 'api' This is the code I got the error with : import xlwings as xw tableau = xw.books['test_book.xlsx'].sheets[0].tables[0] tableau.resize('<Range [test_book.xlsx]Feuil1!$A$1:$B$6>') I tried different values for the range attribut like $A$1:$B$6 or A1:B6 but still couldn't make it work. So how can I manage to resize my table using xlwings ? A: Shouldn't need to, the table is resized automatically. This is the code I ran for your previous question using an Excel file with a table, 'Table1' consisting of 3 columns and 3 rows with header row so the table range is A1:C4. The code adds two additional rows as individual cells and as a tuple. As each row is added the size of the table increases to the include the added rows. import xlwings as xw workbook = 'table1.xlsx' wb = xw.Book(workbook) ws = wb.sheets('Sheet1') print("Original Table") table = ws.cells.table print("Table Range: " + str(table.range.address)) # Added data by cell ws.range('A5').value = '444' ws.range('B5').value = '333' ws.range('C5').value = '222' table2 = ws.cells.table print("Table Range after 1 row added: " + str(table2.range.address)) ### Added row ws['A6'].value = [111, 999, 987] table3 = ws.cells.table print("Table Range after 2nd row added: " + str(table3.range.address)) wb.save(workbook) wb.close() Ouput Original Table Table Range: $A$1:$C$4 Added rows Table Range after 1 row added: $A$1:$C$5 Table Range after 2nd row added: $A$1:$C$6
How to resize a table with xlwings?
I am trying to figure out how to resize a table using xlwings but can't figure out how. I have tried using the resize(range) property but I am getting the following error: AttributeError: 'str' object has no attribute 'api' This is the code I got the error with: import xlwings as xw tableau = xw.books['test_book.xlsx'].sheets[0].tables[0] tableau.resize('<Range [test_book.xlsx]Feuil1!$A$1:$B$6>') I tried different values for the range attribute, like $A$1:$B$6 or A1:B6, but still couldn't make it work. So how can I manage to resize my table using xlwings?
[ "Shouldn't need to, the table is resized automatically.\nThis is the code I ran for your previous question using an Excel file with a table, 'Table1' consisting of 3 columns and 3 rows with header row so the table range is A1:C4. The code adds two additional rows as individual cells and as a tuple. As each row is added the size of the table increases to the include the added rows.\nimport xlwings as xw\n\nworkbook = 'table1.xlsx'\n\nwb = xw.Book(workbook)\nws = wb.sheets('Sheet1')\n\nprint(\"Original Table\")\ntable = ws.cells.table\nprint(\"Table Range: \" + str(table.range.address))\n\n# Added data by cell\nws.range('A5').value = '444'\nws.range('B5').value = '333'\nws.range('C5').value = '222'\ntable2 = ws.cells.table\nprint(\"Table Range after 1 row added: \" + str(table2.range.address))\n\n### Added row\nws['A6'].value = [111, 999, 987]\ntable3 = ws.cells.table\nprint(\"Table Range after 2nd row added: \" + str(table3.range.address))\n\nwb.save(workbook)\nwb.close() \n\nOuput\nOriginal Table \nTable Range: $A$1:$C$4\n\nAdded rows \nTable Range after 1 row added: $A$1:$C$5 \nTable Range after 2nd row added: $A$1:$C$6\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "xlwings" ]
stackoverflow_0074458079_python_python_3.x_xlwings.txt
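If an explicit resize is still wanted, note that the AttributeError in the question comes from passing the printed representation of a Range as a string. Assuming a recent xlwings release that exposes Table.resize, it expects an actual Range object; a sketch, with the workbook and range taken from the question:

import xlwings as xw

wb = xw.books['test_book.xlsx']
ws = wb.sheets[0]
table = ws.tables[0]

# pass a Range object, not the string '<Range ...>' printed by repr()
table.resize(ws.range('$A$1:$B$6'))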
Q: How to round the minute of a datetime object I have a datetime object produced using strptime(). >>> tm datetime.datetime(2010, 6, 10, 3, 56, 23) What I need to do is round the minute to the closest 10th minute. What I have been doing up to this point was taking the minute value and using round() on it. min = round(tm.minute, -1) However, as with the above example, it gives an invalid time when the minute value is greater than 56. i.e.: 3:60 What is a better way to do this? Does datetime support this? A: This will get the 'floor' of a datetime object stored in tm rounded to the 10 minute mark before tm. tm = tm - datetime.timedelta(minutes=tm.minute % 10, seconds=tm.second, microseconds=tm.microsecond) If you want classic rounding to the nearest 10 minute mark, do this: discard = datetime.timedelta(minutes=tm.minute % 10, seconds=tm.second, microseconds=tm.microsecond) tm -= discard if discard >= datetime.timedelta(minutes=5): tm += datetime.timedelta(minutes=10) or this: tm += datetime.timedelta(minutes=5) tm -= datetime.timedelta(minutes=tm.minute % 10, seconds=tm.second, microseconds=tm.microsecond) A: General function to round a datetime at any time lapse in seconds: def roundTime(dt=None, roundTo=60): """Round a datetime object to any time lapse in seconds dt : datetime.datetime object, default now. roundTo : Closest number of seconds to round to, default 1 minute. Author: Thierry Husson 2012 - Use it as you want but don't blame me. """ if dt == None : dt = datetime.datetime.now() seconds = (dt.replace(tzinfo=None) - dt.min).seconds rounding = (seconds+roundTo/2) // roundTo * roundTo return dt + datetime.timedelta(0,rounding-seconds,-dt.microsecond) Samples with 1 hour rounding & 30 minutes rounding: print roundTime(datetime.datetime(2012,12,31,23,44,59,1234),roundTo=60*60) 2013-01-01 00:00:00 print roundTime(datetime.datetime(2012,12,31,23,44,59,1234),roundTo=30*60) 2012-12-31 23:30:00 A: I used Stijn Nevens code (thank you Stijn) and have a little add-on to share. Rounding up, down and rounding to nearest. update 2019-03-09 = comment Spinxz incorporated; thank you. update 2019-12-27 = comment Bart incorporated; thank you. Tested for date_delta of "X hours" or "X minutes" or "X seconds". import datetime def round_time(dt=None, date_delta=datetime.timedelta(minutes=1), to='average'): """ Round a datetime object to a multiple of a timedelta dt : datetime.datetime object, default now. dateDelta : timedelta object, we round to a multiple of this, default 1 minute. 
from: http://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object-python """ round_to = date_delta.total_seconds() if dt is None: dt = datetime.now() seconds = (dt - dt.min).seconds if seconds % round_to == 0 and dt.microsecond == 0: rounding = (seconds + round_to / 2) // round_to * round_to else: if to == 'up': # // is a floor division, not a comment on following line (like in javascript): rounding = (seconds + dt.microsecond/1000000 + round_to) // round_to * round_to elif to == 'down': rounding = seconds // round_to * round_to else: rounding = (seconds + round_to / 2) // round_to * round_to return dt + datetime.timedelta(0, rounding - seconds, - dt.microsecond) # test data print(round_time(datetime.datetime(2019,11,1,14,39,00), date_delta=datetime.timedelta(seconds=30), to='up')) print(round_time(datetime.datetime(2019,11,2,14,39,00,1), date_delta=datetime.timedelta(seconds=30), to='up')) print(round_time(datetime.datetime(2019,11,3,14,39,00,776980), date_delta=datetime.timedelta(seconds=30), to='up')) print(round_time(datetime.datetime(2019,11,4,14,39,29,776980), date_delta=datetime.timedelta(seconds=30), to='up')) print(round_time(datetime.datetime(2018,11,5,14,39,00,776980), date_delta=datetime.timedelta(seconds=30), to='down')) print(round_time(datetime.datetime(2018,11,6,14,38,59,776980), date_delta=datetime.timedelta(seconds=30), to='down')) print(round_time(datetime.datetime(2017,11,7,14,39,15), date_delta=datetime.timedelta(seconds=30), to='average')) print(round_time(datetime.datetime(2017,11,8,14,39,14,999999), date_delta=datetime.timedelta(seconds=30), to='average')) print(round_time(datetime.datetime(2019,11,9,14,39,14,999999), date_delta=datetime.timedelta(seconds=30), to='up')) print(round_time(datetime.datetime(2012,12,10,23,44,59,7769),to='average')) print(round_time(datetime.datetime(2012,12,11,23,44,59,7769),to='up')) print(round_time(datetime.datetime(2010,12,12,23,44,59,7769),to='down',date_delta=datetime.timedelta(seconds=1))) print(round_time(datetime.datetime(2011,12,13,23,44,59,7769),to='up',date_delta=datetime.timedelta(seconds=1))) print(round_time(datetime.datetime(2012,12,14,23,44,59),date_delta=datetime.timedelta(hours=1),to='down')) print(round_time(datetime.datetime(2012,12,15,23,44,59),date_delta=datetime.timedelta(hours=1),to='up')) print(round_time(datetime.datetime(2012,12,16,23,44,59),date_delta=datetime.timedelta(hours=1))) print(round_time(datetime.datetime(2012,12,17,23,00,00),date_delta=datetime.timedelta(hours=1),to='down')) print(round_time(datetime.datetime(2012,12,18,23,00,00),date_delta=datetime.timedelta(hours=1),to='up')) print(round_time(datetime.datetime(2012,12,19,23,00,00),date_delta=datetime.timedelta(hours=1))) A: From the best answer I modified to an adapted version using only datetime objects, this avoids having to do the conversion to seconds and makes the calling code more readable: def roundTime(dt=None, dateDelta=datetime.timedelta(minutes=1)): """Round a datetime object to a multiple of a timedelta dt : datetime.datetime object, default now. dateDelta : timedelta object, we round to a multiple of this, default 1 minute. Author: Thierry Husson 2012 - Use it as you want but don't blame me. 
Stijn Nevens 2014 - Changed to use only datetime objects as variables """ roundTo = dateDelta.total_seconds() if dt == None : dt = datetime.datetime.now() seconds = (dt - dt.min).seconds # // is a floor division, not a comment on following line: rounding = (seconds+roundTo/2) // roundTo * roundTo return dt + datetime.timedelta(0,rounding-seconds,-dt.microsecond) Samples with 1 hour rounding & 15 minutes rounding: print roundTime(datetime.datetime(2012,12,31,23,44,59),datetime.timedelta(hour=1)) 2013-01-01 00:00:00 print roundTime(datetime.datetime(2012,12,31,23,44,49),datetime.timedelta(minutes=15)) 2012-12-31 23:30:00 A: Pandas has a datetime round feature, but as with most things in Pandas it needs to be in Series format. >>> ts = pd.Series(pd.date_range(Dt(2019,1,1,1,1),Dt(2019,1,1,1,4),periods=8)) >>> print(ts) 0 2019-01-01 01:01:00.000000000 1 2019-01-01 01:01:25.714285714 2 2019-01-01 01:01:51.428571428 3 2019-01-01 01:02:17.142857142 4 2019-01-01 01:02:42.857142857 5 2019-01-01 01:03:08.571428571 6 2019-01-01 01:03:34.285714285 7 2019-01-01 01:04:00.000000000 dtype: datetime64[ns] >>> ts.dt.round('1min') 0 2019-01-01 01:01:00 1 2019-01-01 01:01:00 2 2019-01-01 01:02:00 3 2019-01-01 01:02:00 4 2019-01-01 01:03:00 5 2019-01-01 01:03:00 6 2019-01-01 01:04:00 7 2019-01-01 01:04:00 dtype: datetime64[ns] Docs - Change the frequency string as needed. A: Here is a simpler generalized solution without floating point precision issues and external library dependencies: import datetime def time_mod(time, delta, epoch=None): if epoch is None: epoch = datetime.datetime(1970, 1, 1, tzinfo=time.tzinfo) return (time - epoch) % delta def time_round(time, delta, epoch=None): mod = time_mod(time, delta, epoch) if mod < delta / 2: return time - mod return time + (delta - mod) def time_floor(time, delta, epoch=None): mod = time_mod(time, delta, epoch) return time - mod def time_ceil(time, delta, epoch=None): mod = time_mod(time, delta, epoch) if mod: return time + (delta - mod) return time In your case: >>> tm = datetime.datetime(2010, 6, 10, 3, 56, 23) >>> time_round(tm, datetime.timedelta(minutes=10)) datetime.datetime(2010, 6, 10, 4, 0) >>> time_floor(tm, datetime.timedelta(minutes=10)) datetime.datetime(2010, 6, 10, 3, 50) >>> time_ceil(tm, datetime.timedelta(minutes=10)) datetime.datetime(2010, 6, 10, 4, 0) A: if you don't want to use condition, you can use modulo operator: minutes = int(round(tm.minute, -1)) % 60 UPDATE did you want something like this? def timeround10(dt): a, b = divmod(round(dt.minute, -1), 60) return '%i:%02i' % ((dt.hour + a) % 24, b) timeround10(datetime.datetime(2010, 1, 1, 0, 56, 0)) # 0:56 # -> 1:00 timeround10(datetime.datetime(2010, 1, 1, 23, 56, 0)) # 23:56 # -> 0:00 .. if you want result as string. for obtaining datetime result, it's better to use timedelta - see other responses ;) A: i'm using this. it has the advantage of working with tz aware datetimes. def round_minutes(some_datetime: datetime, step: int): """ round up to nearest step-minutes """ if step > 60: raise AttrbuteError("step must be less than 60") change = timedelta( minutes= some_datetime.minute % step, seconds=some_datetime.second, microseconds=some_datetime.microsecond ) if change > timedelta(): change -= timedelta(minutes=step) return some_datetime - change it has the disadvantage of only working for timeslices less than an hour. 
A: A straightforward approach: def round_time(dt, round_to_seconds=60): """Round a datetime object to any number of seconds dt: datetime.datetime object round_to_seconds: closest number of seconds for rounding, Default 1 minute. """ rounded_epoch = round(dt.timestamp() / round_to_seconds) * round_to_seconds rounded_dt = datetime.datetime.fromtimestamp(rounded_epoch).astimezone(dt.tzinfo) return rounded_dt A: yes, if your data belongs to a DateTime column in a pandas series, you can round it up using the built-in pandas.Series.dt.round function. See documentation here on pandas.Series.dt.round. In your case of rounding to 10min it will be Series.dt.round('10min') or Series.dt.round('600s') like so: pandas.Series(tm).dt.round('10min') Edit to add Example code: import datetime import pandas tm = datetime.datetime(2010, 6, 10, 3, 56, 23) tm_rounded = pandas.Series(tm).dt.round('10min') print(tm_rounded) >>> 0 2010-06-10 04:00:00 dtype: datetime64[ns] A: I came up with this very simple function, working with any timedelta as long as it's either a multiple or divider of 60 seconds. It's also compatible with timezone-aware datetimes. #!/usr/env python3 from datetime import datetime, timedelta def round_dt_to_delta(dt, delta=timedelta(minutes=30)): ref = datetime.min.replace(tzinfo=dt.tzinfo) return ref + round((dt - ref) / delta) * delta Output: In [1]: round_dt_to_delta(datetime(2012,12,31,23,44,49), timedelta(seconds=15)) Out[1]: datetime.datetime(2012, 12, 31, 23, 44, 45) In [2]: round_dt_to_delta(datetime(2012,12,31,23,44,49), timedelta(minutes=15)) Out[2]: datetime.datetime(2012, 12, 31, 23, 45) A: General Function to round down times of minutes: from datetime import datetime def round_minute(date: datetime = None, round_to: int = 1): """ round datetime object to minutes """ if not date: date = datetime.now() date = date.replace(second=0, microsecond=0) delta = date.minute % round_to return date.replace(minute=date.minute - delta) A: Those seem overly complex def round_down_to(): num = int(datetime.utcnow().replace(second=0, microsecond=0).minute) return num - (num%10) A: This will do it, I think it uses a very useful application of round. 
from typing import Literal import math def round_datetime(dt: datetime.datetime, step: datetime.timedelta, d: Literal['no', 'up', 'down'] = 'no'): step = step.seconds round_f = {'no': round, 'up': math.ceil, 'down': math.floor} return datetime.datetime.fromtimestamp(step * round_f[d](dt.timestamp() / step)) date = datetime.datetime(year=2022, month=11, day=16, hour=10, minute=2, second=30, microsecond=424242)# print('Original:', date) print('Standard:', round_datetime(date, datetime.timedelta(minutes=5))) print('Down: ', round_datetime(date, datetime.timedelta(minutes=5), d='down')) print('Up: ', round_datetime(date, datetime.timedelta(minutes=5), d='up')) The result: Original: 2022-11-16 10:02:30.424242 Standard: 2022-11-16 10:05:00 Down: 2022-11-16 10:00:00 Up: 2022-11-16 10:05:00 A: def get_rounded_datetime(self, dt, freq, nearest_type='inf'): if freq.lower() == '1h': round_to = 3600 elif freq.lower() == '3h': round_to = 3 * 3600 elif freq.lower() == '6h': round_to = 6 * 3600 else: raise NotImplementedError("Freq %s is not handled yet" % freq) # // is a floor division, not a comment on following line: seconds_from_midnight = dt.hour * 3600 + dt.minute * 60 + dt.second if nearest_type == 'inf': rounded_sec = int(seconds_from_midnight / round_to) * round_to elif nearest_type == 'sup': rounded_sec = (int(seconds_from_midnight / round_to) + 1) * round_to else: raise IllegalArgumentException("nearest_type should be 'inf' or 'sup'") dt_midnight = datetime.datetime(dt.year, dt.month, dt.day) return dt_midnight + datetime.timedelta(0, rounded_sec) A: Based on Stijn Nevens and modified for Django use to round current time to the nearest 15 minute. from datetime import date, timedelta, datetime, time def roundTime(dt=None, dateDelta=timedelta(minutes=1)): roundTo = dateDelta.total_seconds() if dt == None : dt = datetime.now() seconds = (dt - dt.min).seconds # // is a floor division, not a comment on following line: rounding = (seconds+roundTo/2) // roundTo * roundTo return dt + timedelta(0,rounding-seconds,-dt.microsecond) dt = roundTime(datetime.now(),timedelta(minutes=15)).strftime('%H:%M:%S') dt = 11:45:00 if you need full date and time just remove the .strftime('%H:%M:%S') A: Not the best for speed when the exception is caught, however this would work. def _minute10(dt=datetime.utcnow()): try: return dt.replace(minute=round(dt.minute, -1)) except ValueError: return dt.replace(minute=0) + timedelta(hours=1) Timings %timeit _minute10(datetime(2016, 12, 31, 23, 55)) 100000 loops, best of 3: 5.12 µs per loop %timeit _minute10(datetime(2016, 12, 31, 23, 31)) 100000 loops, best of 3: 2.21 µs per loop A: A two line intuitive solution to round to a given time unit, here seconds, for a datetime object t: format_str = '%Y-%m-%d %H:%M:%S' t_rounded = datetime.strptime(datetime.strftime(t, format_str), format_str) If you wish to round to a different unit simply alter format_str. This approach does not round to arbitrary time amounts as above methods, but is a nicely Pythonic way to round to a given hour, minute or second. 
A: Other solution: def round_time(timestamp=None, lapse=0): """ Round a timestamp to a lapse according to specified minutes Usage: >>> import datetime, math >>> round_time(datetime.datetime(2010, 6, 10, 3, 56, 23), 0) datetime.datetime(2010, 6, 10, 3, 56) >>> round_time(datetime.datetime(2010, 6, 10, 3, 56, 23), 1) datetime.datetime(2010, 6, 10, 3, 57) >>> round_time(datetime.datetime(2010, 6, 10, 3, 56, 23), -1) datetime.datetime(2010, 6, 10, 3, 55) >>> round_time(datetime.datetime(2019, 3, 11, 9, 22, 11), 3) datetime.datetime(2019, 3, 11, 9, 24) >>> round_time(datetime.datetime(2019, 3, 11, 9, 22, 11), 3*60) datetime.datetime(2019, 3, 11, 12, 0) >>> round_time(datetime.datetime(2019, 3, 11, 10, 0, 0), 3) datetime.datetime(2019, 3, 11, 10, 0) :param timestamp: Timestamp to round (default: now) :param lapse: Lapse to round in minutes (default: 0) """ t = timestamp or datetime.datetime.now() # type: Union[datetime, Any] surplus = datetime.timedelta(seconds=t.second, microseconds=t.microsecond) t -= surplus try: mod = t.minute % lapse except ZeroDivisionError: return t if mod: # minutes % lapse != 0 t += datetime.timedelta(minutes=math.ceil(t.minute / lapse) * lapse - t.minute) elif surplus != datetime.timedelta() or lapse < 0: t += datetime.timedelta(minutes=(t.minute / lapse + 1) * lapse - t.minute) return t Hope this helps! A: The shortest way I know min = tm.minute // 10 * 10 A: Most of the answers seem to be too complicated for such a simple question. Assuming your_time is the datetime object your have, the following rounds (actually floors) it at a desired resolution defined in minutes. from math import floor your_time = datetime.datetime.now() g = 10 # granularity in minutes print( datetime.datetime.fromtimestamp( floor(your_time.timestamp() / (60*g)) * (60*g) )) A: The function below with minimum of import will do the job. You can round to anything you want by setting te parameters unit, rnd, and frm. Play with the function and you will see how easy it will be. def toNearestTime(ts, unit='sec', rnd=1, frm=None): ''' round to nearest Time format param ts = time string to round in '%H:%M:%S' or '%H:%M' format : param unit = specify unit wich must be rounded 'sec' or 'min' or 'hour', default is seconds : param rnd = to which number you will round, the default is 1 : param frm = the output (return) format of the time string, as default the function take the unit format''' from time import strftime, gmtime ts = ts + ':00' if len(ts) == 5 else ts if 'se' in unit.lower(): frm = '%H:%M:%S' if frm is None else frm elif 'm' in unit.lower(): frm = '%H:%M' if frm is None else frm rnd = rnd * 60 elif 'h' in unit.lower(): frm = '%H' if frm is None else frm rnd = rnd * 3600 secs = sum(int(x) * 60 ** i for i, x in enumerate(reversed(ts.split(':')))) rtm = int(round(secs / rnd, 0) * rnd) nt = strftime(frm, gmtime(rtm)) return nt Call function as follow: Round to nearest 5 minutes with default ouput format = hh:mm as follow ts = '02:27:29' nt = toNearestTime(ts, unit='min', rnd=5) print(nt) output: '02:25' Or round to nearest hour with ouput format hh:mm:ss as follow ts = '10:30:01' nt = toNearestTime(ts, unit='hour', rnd=1, frm='%H:%M:%S') print(nt) output: '11:00:00' last updated version
How to round the minute of a datetime object
I have a datetime object produced using strptime(). >>> tm datetime.datetime(2010, 6, 10, 3, 56, 23) What I need to do is round the minute to the closest 10th minute. What I have been doing up to this point was taking the minute value and using round() on it. min = round(tm.minute, -1) However, as with the above example, it gives an invalid time when the minute value is greater than 56. i.e.: 3:60 What is a better way to do this? Does datetime support this?
[ "This will get the 'floor' of a datetime object stored in tm rounded to the 10 minute mark before tm.\ntm = tm - datetime.timedelta(minutes=tm.minute % 10,\n seconds=tm.second,\n microseconds=tm.microsecond)\n\nIf you want classic rounding to the nearest 10 minute mark, do this:\ndiscard = datetime.timedelta(minutes=tm.minute % 10,\n seconds=tm.second,\n microseconds=tm.microsecond)\ntm -= discard\nif discard >= datetime.timedelta(minutes=5):\n tm += datetime.timedelta(minutes=10)\n\nor this:\ntm += datetime.timedelta(minutes=5)\ntm -= datetime.timedelta(minutes=tm.minute % 10,\n seconds=tm.second,\n microseconds=tm.microsecond)\n\n", "General function to round a datetime at any time lapse in seconds:\ndef roundTime(dt=None, roundTo=60):\n \"\"\"Round a datetime object to any time lapse in seconds\n dt : datetime.datetime object, default now.\n roundTo : Closest number of seconds to round to, default 1 minute.\n Author: Thierry Husson 2012 - Use it as you want but don't blame me.\n \"\"\"\n if dt == None : dt = datetime.datetime.now()\n seconds = (dt.replace(tzinfo=None) - dt.min).seconds\n rounding = (seconds+roundTo/2) // roundTo * roundTo\n return dt + datetime.timedelta(0,rounding-seconds,-dt.microsecond)\n\nSamples with 1 hour rounding & 30 minutes rounding:\nprint roundTime(datetime.datetime(2012,12,31,23,44,59,1234),roundTo=60*60)\n2013-01-01 00:00:00\n\nprint roundTime(datetime.datetime(2012,12,31,23,44,59,1234),roundTo=30*60)\n2012-12-31 23:30:00\n\n", "I used Stijn Nevens code (thank you Stijn) and have a little add-on to share. Rounding up, down and rounding to nearest.\nupdate 2019-03-09 = comment Spinxz incorporated; thank you.\nupdate 2019-12-27 = comment Bart incorporated; thank you.\nTested for date_delta of \"X hours\" or \"X minutes\" or \"X seconds\".\nimport datetime\n\ndef round_time(dt=None, date_delta=datetime.timedelta(minutes=1), to='average'):\n \"\"\"\n Round a datetime object to a multiple of a timedelta\n dt : datetime.datetime object, default now.\n dateDelta : timedelta object, we round to a multiple of this, default 1 minute.\n from: http://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object-python\n \"\"\"\n round_to = date_delta.total_seconds()\n if dt is None:\n dt = datetime.now()\n seconds = (dt - dt.min).seconds\n\n if seconds % round_to == 0 and dt.microsecond == 0:\n rounding = (seconds + round_to / 2) // round_to * round_to\n else:\n if to == 'up':\n # // is a floor division, not a comment on following line (like in javascript):\n rounding = (seconds + dt.microsecond/1000000 + round_to) // round_to * round_to\n elif to == 'down':\n rounding = seconds // round_to * round_to\n else:\n rounding = (seconds + round_to / 2) // round_to * round_to\n\n return dt + datetime.timedelta(0, rounding - seconds, - dt.microsecond)\n\n# test data\nprint(round_time(datetime.datetime(2019,11,1,14,39,00), date_delta=datetime.timedelta(seconds=30), to='up'))\nprint(round_time(datetime.datetime(2019,11,2,14,39,00,1), date_delta=datetime.timedelta(seconds=30), to='up'))\nprint(round_time(datetime.datetime(2019,11,3,14,39,00,776980), date_delta=datetime.timedelta(seconds=30), to='up'))\nprint(round_time(datetime.datetime(2019,11,4,14,39,29,776980), date_delta=datetime.timedelta(seconds=30), to='up'))\nprint(round_time(datetime.datetime(2018,11,5,14,39,00,776980), date_delta=datetime.timedelta(seconds=30), to='down'))\nprint(round_time(datetime.datetime(2018,11,6,14,38,59,776980), date_delta=datetime.timedelta(seconds=30), 
to='down'))\nprint(round_time(datetime.datetime(2017,11,7,14,39,15), date_delta=datetime.timedelta(seconds=30), to='average'))\nprint(round_time(datetime.datetime(2017,11,8,14,39,14,999999), date_delta=datetime.timedelta(seconds=30), to='average'))\nprint(round_time(datetime.datetime(2019,11,9,14,39,14,999999), date_delta=datetime.timedelta(seconds=30), to='up'))\nprint(round_time(datetime.datetime(2012,12,10,23,44,59,7769),to='average'))\nprint(round_time(datetime.datetime(2012,12,11,23,44,59,7769),to='up'))\nprint(round_time(datetime.datetime(2010,12,12,23,44,59,7769),to='down',date_delta=datetime.timedelta(seconds=1)))\nprint(round_time(datetime.datetime(2011,12,13,23,44,59,7769),to='up',date_delta=datetime.timedelta(seconds=1)))\nprint(round_time(datetime.datetime(2012,12,14,23,44,59),date_delta=datetime.timedelta(hours=1),to='down'))\nprint(round_time(datetime.datetime(2012,12,15,23,44,59),date_delta=datetime.timedelta(hours=1),to='up'))\nprint(round_time(datetime.datetime(2012,12,16,23,44,59),date_delta=datetime.timedelta(hours=1)))\nprint(round_time(datetime.datetime(2012,12,17,23,00,00),date_delta=datetime.timedelta(hours=1),to='down'))\nprint(round_time(datetime.datetime(2012,12,18,23,00,00),date_delta=datetime.timedelta(hours=1),to='up'))\nprint(round_time(datetime.datetime(2012,12,19,23,00,00),date_delta=datetime.timedelta(hours=1)))\n\n", "From the best answer I modified to an adapted version using only datetime objects, this avoids having to do the conversion to seconds and makes the calling code more readable:\ndef roundTime(dt=None, dateDelta=datetime.timedelta(minutes=1)):\n \"\"\"Round a datetime object to a multiple of a timedelta\n dt : datetime.datetime object, default now.\n dateDelta : timedelta object, we round to a multiple of this, default 1 minute.\n Author: Thierry Husson 2012 - Use it as you want but don't blame me.\n Stijn Nevens 2014 - Changed to use only datetime objects as variables\n \"\"\"\n roundTo = dateDelta.total_seconds()\n\n if dt == None : dt = datetime.datetime.now()\n seconds = (dt - dt.min).seconds\n # // is a floor division, not a comment on following line:\n rounding = (seconds+roundTo/2) // roundTo * roundTo\n return dt + datetime.timedelta(0,rounding-seconds,-dt.microsecond)\n\nSamples with 1 hour rounding & 15 minutes rounding:\nprint roundTime(datetime.datetime(2012,12,31,23,44,59),datetime.timedelta(hour=1))\n2013-01-01 00:00:00\n\nprint roundTime(datetime.datetime(2012,12,31,23,44,49),datetime.timedelta(minutes=15))\n2012-12-31 23:30:00\n\n", "Pandas has a datetime round feature, but as with most things in Pandas it needs to be in Series format.\n>>> ts = pd.Series(pd.date_range(Dt(2019,1,1,1,1),Dt(2019,1,1,1,4),periods=8))\n>>> print(ts)\n0 2019-01-01 01:01:00.000000000\n1 2019-01-01 01:01:25.714285714\n2 2019-01-01 01:01:51.428571428\n3 2019-01-01 01:02:17.142857142\n4 2019-01-01 01:02:42.857142857\n5 2019-01-01 01:03:08.571428571\n6 2019-01-01 01:03:34.285714285\n7 2019-01-01 01:04:00.000000000\ndtype: datetime64[ns]\n\n>>> ts.dt.round('1min')\n0 2019-01-01 01:01:00\n1 2019-01-01 01:01:00\n2 2019-01-01 01:02:00\n3 2019-01-01 01:02:00\n4 2019-01-01 01:03:00\n5 2019-01-01 01:03:00\n6 2019-01-01 01:04:00\n7 2019-01-01 01:04:00\ndtype: datetime64[ns]\n\nDocs - Change the frequency string as needed.\n", "Here is a simpler generalized solution without floating point precision issues and external library dependencies:\nimport datetime\n\ndef time_mod(time, delta, epoch=None):\n if epoch is None:\n epoch = datetime.datetime(1970, 1, 1, 
tzinfo=time.tzinfo)\n return (time - epoch) % delta\n\ndef time_round(time, delta, epoch=None):\n mod = time_mod(time, delta, epoch)\n if mod < delta / 2:\n return time - mod\n return time + (delta - mod)\n\ndef time_floor(time, delta, epoch=None):\n mod = time_mod(time, delta, epoch)\n return time - mod\n\ndef time_ceil(time, delta, epoch=None):\n mod = time_mod(time, delta, epoch)\n if mod:\n return time + (delta - mod)\n return time\n\nIn your case:\n>>> tm = datetime.datetime(2010, 6, 10, 3, 56, 23)\n>>> time_round(tm, datetime.timedelta(minutes=10))\ndatetime.datetime(2010, 6, 10, 4, 0)\n>>> time_floor(tm, datetime.timedelta(minutes=10))\ndatetime.datetime(2010, 6, 10, 3, 50)\n>>> time_ceil(tm, datetime.timedelta(minutes=10))\ndatetime.datetime(2010, 6, 10, 4, 0)\n\n", "if you don't want to use condition, you can use modulo operator:\nminutes = int(round(tm.minute, -1)) % 60\n\nUPDATE\ndid you want something like this?\ndef timeround10(dt):\n a, b = divmod(round(dt.minute, -1), 60)\n return '%i:%02i' % ((dt.hour + a) % 24, b)\n\ntimeround10(datetime.datetime(2010, 1, 1, 0, 56, 0)) # 0:56\n# -> 1:00\n\ntimeround10(datetime.datetime(2010, 1, 1, 23, 56, 0)) # 23:56\n# -> 0:00\n\n.. if you want result as string. for obtaining datetime result, it's better to use timedelta - see other responses ;)\n", "i'm using this. it has the advantage of working with tz aware datetimes.\ndef round_minutes(some_datetime: datetime, step: int):\n \"\"\" round up to nearest step-minutes \"\"\"\n if step > 60:\n raise AttrbuteError(\"step must be less than 60\")\n\n change = timedelta(\n minutes= some_datetime.minute % step,\n seconds=some_datetime.second,\n microseconds=some_datetime.microsecond\n )\n\n if change > timedelta():\n change -= timedelta(minutes=step)\n\n return some_datetime - change\n\nit has the disadvantage of only working for timeslices less than an hour.\n", "A straightforward approach:\ndef round_time(dt, round_to_seconds=60):\n \"\"\"Round a datetime object to any number of seconds\n dt: datetime.datetime object\n round_to_seconds: closest number of seconds for rounding, Default 1 minute.\n \"\"\"\n rounded_epoch = round(dt.timestamp() / round_to_seconds) * round_to_seconds\n rounded_dt = datetime.datetime.fromtimestamp(rounded_epoch).astimezone(dt.tzinfo)\n return rounded_dt\n\n", "yes, if your data belongs to a DateTime column in a pandas series, you can round it up using the built-in pandas.Series.dt.round function.\nSee documentation here on pandas.Series.dt.round.\nIn your case of rounding to 10min it will be Series.dt.round('10min') or Series.dt.round('600s') like so:\npandas.Series(tm).dt.round('10min')\n\nEdit to add Example code:\nimport datetime\nimport pandas\n\ntm = datetime.datetime(2010, 6, 10, 3, 56, 23)\ntm_rounded = pandas.Series(tm).dt.round('10min')\nprint(tm_rounded)\n\n>>> 0 2010-06-10 04:00:00\ndtype: datetime64[ns]\n\n", "I came up with this very simple function, working with any timedelta as long as it's either a multiple or divider of 60 seconds. 
It's also compatible with timezone-aware datetimes.\n#!/usr/env python3\nfrom datetime import datetime, timedelta\n\ndef round_dt_to_delta(dt, delta=timedelta(minutes=30)):\n ref = datetime.min.replace(tzinfo=dt.tzinfo)\n return ref + round((dt - ref) / delta) * delta\n\nOutput:\nIn [1]: round_dt_to_delta(datetime(2012,12,31,23,44,49), timedelta(seconds=15))\nOut[1]: datetime.datetime(2012, 12, 31, 23, 44, 45)\nIn [2]: round_dt_to_delta(datetime(2012,12,31,23,44,49), timedelta(minutes=15))\nOut[2]: datetime.datetime(2012, 12, 31, 23, 45)\n\n", "General Function to round down times of minutes:\nfrom datetime import datetime\ndef round_minute(date: datetime = None, round_to: int = 1):\n \"\"\"\n round datetime object to minutes\n \"\"\"\n if not date:\n date = datetime.now()\n date = date.replace(second=0, microsecond=0)\n delta = date.minute % round_to\n return date.replace(minute=date.minute - delta)\n\n", "Those seem overly complex\ndef round_down_to():\n num = int(datetime.utcnow().replace(second=0, microsecond=0).minute)\n return num - (num%10)\n\n", "This will do it, I think it uses a very useful application of round.\nfrom typing import Literal\nimport math\n\ndef round_datetime(dt: datetime.datetime, step: datetime.timedelta, d: Literal['no', 'up', 'down'] = 'no'):\n step = step.seconds\n round_f = {'no': round, 'up': math.ceil, 'down': math.floor}\n return datetime.datetime.fromtimestamp(step * round_f[d](dt.timestamp() / step))\n\ndate = datetime.datetime(year=2022, month=11, day=16, hour=10, minute=2, second=30, microsecond=424242)#\nprint('Original:', date)\nprint('Standard:', round_datetime(date, datetime.timedelta(minutes=5)))\nprint('Down: ', round_datetime(date, datetime.timedelta(minutes=5), d='down'))\nprint('Up: ', round_datetime(date, datetime.timedelta(minutes=5), d='up'))\n\nThe result:\nOriginal: 2022-11-16 10:02:30.424242\nStandard: 2022-11-16 10:05:00\nDown: 2022-11-16 10:00:00\nUp: 2022-11-16 10:05:00\n\n", "def get_rounded_datetime(self, dt, freq, nearest_type='inf'):\n\n if freq.lower() == '1h':\n round_to = 3600\n elif freq.lower() == '3h':\n round_to = 3 * 3600\n elif freq.lower() == '6h':\n round_to = 6 * 3600\n else:\n raise NotImplementedError(\"Freq %s is not handled yet\" % freq)\n\n # // is a floor division, not a comment on following line:\n seconds_from_midnight = dt.hour * 3600 + dt.minute * 60 + dt.second\n if nearest_type == 'inf':\n rounded_sec = int(seconds_from_midnight / round_to) * round_to\n elif nearest_type == 'sup':\n rounded_sec = (int(seconds_from_midnight / round_to) + 1) * round_to\n else:\n raise IllegalArgumentException(\"nearest_type should be 'inf' or 'sup'\")\n\n dt_midnight = datetime.datetime(dt.year, dt.month, dt.day)\n\n return dt_midnight + datetime.timedelta(0, rounded_sec)\n\n", "Based on Stijn Nevens and modified for Django use to round current time to the nearest 15 minute.\nfrom datetime import date, timedelta, datetime, time\n\n def roundTime(dt=None, dateDelta=timedelta(minutes=1)):\n\n roundTo = dateDelta.total_seconds()\n\n if dt == None : dt = datetime.now()\n seconds = (dt - dt.min).seconds\n # // is a floor division, not a comment on following line:\n rounding = (seconds+roundTo/2) // roundTo * roundTo\n return dt + timedelta(0,rounding-seconds,-dt.microsecond)\n\n dt = roundTime(datetime.now(),timedelta(minutes=15)).strftime('%H:%M:%S')\n\n dt = 11:45:00\n\nif you need full date and time just remove the .strftime('%H:%M:%S')\n", "Not the best for speed when the exception is caught, however this would work.\ndef 
_minute10(dt=datetime.utcnow()):\n try:\n return dt.replace(minute=round(dt.minute, -1))\n except ValueError:\n return dt.replace(minute=0) + timedelta(hours=1)\n\nTimings\n%timeit _minute10(datetime(2016, 12, 31, 23, 55))\n100000 loops, best of 3: 5.12 µs per loop\n\n%timeit _minute10(datetime(2016, 12, 31, 23, 31))\n100000 loops, best of 3: 2.21 µs per loop\n\n", "A two line intuitive solution to round to a given time unit, here seconds, for a datetime object t:\nformat_str = '%Y-%m-%d %H:%M:%S'\nt_rounded = datetime.strptime(datetime.strftime(t, format_str), format_str)\n\nIf you wish to round to a different unit simply alter format_str. \nThis approach does not round to arbitrary time amounts as above methods, but is a nicely Pythonic way to round to a given hour, minute or second.\n", "Other solution:\ndef round_time(timestamp=None, lapse=0):\n \"\"\"\n Round a timestamp to a lapse according to specified minutes\n\n Usage:\n\n >>> import datetime, math\n >>> round_time(datetime.datetime(2010, 6, 10, 3, 56, 23), 0)\n datetime.datetime(2010, 6, 10, 3, 56)\n >>> round_time(datetime.datetime(2010, 6, 10, 3, 56, 23), 1)\n datetime.datetime(2010, 6, 10, 3, 57)\n >>> round_time(datetime.datetime(2010, 6, 10, 3, 56, 23), -1)\n datetime.datetime(2010, 6, 10, 3, 55)\n >>> round_time(datetime.datetime(2019, 3, 11, 9, 22, 11), 3)\n datetime.datetime(2019, 3, 11, 9, 24)\n >>> round_time(datetime.datetime(2019, 3, 11, 9, 22, 11), 3*60)\n datetime.datetime(2019, 3, 11, 12, 0)\n >>> round_time(datetime.datetime(2019, 3, 11, 10, 0, 0), 3)\n datetime.datetime(2019, 3, 11, 10, 0)\n\n :param timestamp: Timestamp to round (default: now)\n :param lapse: Lapse to round in minutes (default: 0)\n \"\"\"\n t = timestamp or datetime.datetime.now() # type: Union[datetime, Any]\n surplus = datetime.timedelta(seconds=t.second, microseconds=t.microsecond)\n t -= surplus\n try:\n mod = t.minute % lapse\n except ZeroDivisionError:\n return t\n if mod: # minutes % lapse != 0\n t += datetime.timedelta(minutes=math.ceil(t.minute / lapse) * lapse - t.minute)\n elif surplus != datetime.timedelta() or lapse < 0:\n t += datetime.timedelta(minutes=(t.minute / lapse + 1) * lapse - t.minute)\n return t\n\nHope this helps!\n", "The shortest way I know\n\nmin = tm.minute // 10 * 10\n\n", "Most of the answers seem to be too complicated for such a simple question.\nAssuming your_time is the datetime object your have, the following rounds (actually floors) it at a desired resolution defined in minutes.\nfrom math import floor\n\nyour_time = datetime.datetime.now() \n\ng = 10 # granularity in minutes\nprint(\ndatetime.datetime.fromtimestamp(\nfloor(your_time.timestamp() / (60*g)) * (60*g)\n))\n\n", "The function below with minimum of import will do the job. You can round to anything you want by setting te parameters unit, rnd, and frm. 
Play with the function and you will see how easy it will be.\ndef toNearestTime(ts, unit='sec', rnd=1, frm=None):\n ''' round to nearest Time format\n param ts = time string to round in '%H:%M:%S' or '%H:%M' format :\n param unit = specify unit wich must be rounded 'sec' or 'min' or 'hour', default is seconds :\n param rnd = to which number you will round, the default is 1 :\n param frm = the output (return) format of the time string, as default the function take the unit format'''\n from time import strftime, gmtime\n\n ts = ts + ':00' if len(ts) == 5 else ts\n if 'se' in unit.lower():\n frm = '%H:%M:%S' if frm is None else frm\n elif 'm' in unit.lower():\n frm = '%H:%M' if frm is None else frm\n rnd = rnd * 60\n elif 'h' in unit.lower():\n frm = '%H' if frm is None else frm\n rnd = rnd * 3600\n secs = sum(int(x) * 60 ** i for i, x in enumerate(reversed(ts.split(':'))))\n rtm = int(round(secs / rnd, 0) * rnd)\n nt = strftime(frm, gmtime(rtm))\n return nt\n\nCall function as follow:\nRound to nearest 5 minutes with default ouput format = hh:mm as follow\nts = '02:27:29'\nnt = toNearestTime(ts, unit='min', rnd=5)\nprint(nt)\noutput: '02:25'\n\nOr round to nearest hour with ouput format hh:mm:ss as follow\nts = '10:30:01'\nnt = toNearestTime(ts, unit='hour', rnd=1, frm='%H:%M:%S')\nprint(nt)\noutput: '11:00:00'\n\nlast updated version\n" ]
[ 161, 112, 21, 20, 14, 13, 3, 3, 3, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "datetime", "python", "rounding" ]
stackoverflow_0003463930_datetime_python_rounding.txt
Q: How to analyze only .py files in SonarQube properties file? I am using SonarQube for my work project. We have a repository with a lot of files and folders in our project. I want to filter out only the python files (.py files). My goal is to analyze only the .py files in the folder repo/src. I tried using the wildcards, but to no avail. Here are the two things I tried : sonar.sources=repo/src/**/*.py sonar.sources=repo/src/**.py The option of just passing the whole folder where the code is located (repo/src/) works, but the plan my work pays for SonarQube doesn't allow us to analyze the whole project, hence why I want to analyze only python files. What am I doing wrong? Can someone let me know what I am misunderstanding here. A: Try specifying sonar.exclusions in addition to sonar.sources. Before, there was sonar.language, which you could set to py for Python: Set the language of the source code to analyze. Browse the Plugin Library page to get the list of all available languages. If not set, a multi-language analysis will be triggered But that has long been deprecated and now does nothing in latest versions. From a thread on the Sonarqube forums, What is the alternative to the sonar.language property in the current version of sonarqube?: The alternative is to just not use it. ... Now if, on the other hand, you want to prevent other languages in the project from being analyzed, then you should do one of these things: configure sonar.sources not to include those files, assuming this can be done conveniently set exclusions as a last-ditch hack, there’s also the option of configuring language file suffixes at the project level to not recognize the other languages’ files (e.g. set the Java file suffix to .foo) A reply on that same thread said that this worked for them: so in my case the exclusions property looks like this sonar.exclusions=**/*.java,**/*.jar So, first, change sonar.sources to specify directories. Comma-separated paths to directories containing main source files. sonar.sources=repo/src Then specify sonar.exclusions for the file extensions to ignore. We have the same setup in our projects: sonar.sources=app sonar.exclusions=**/*.html,**/*.js You can check from the sonar scanner logs if some files were excluded: INFO: Indexing files... INFO: Project configuration: INFO: Excluded sources: **/*.html,**/*.js INFO: Excluded sources for coverage: tests/* INFO: 266 files indexed INFO: 24 files ignored because of inclusion/exclusion patterns INFO: 1 file ignored because of scm ignore settings ... (We are using SonarQube server 9.5.0.56709, SonarScanner 4.3.0.2102)
How to analyze only .py files in SonarQube properties file?
I am using SonarQube for my work project. We have a repository with a lot of files and folders in our project. I want to filter out only the Python files (.py files). My goal is to analyze only the .py files in the folder repo/src. I tried using wildcards, but to no avail. Here are the two things I tried: sonar.sources=repo/src/**/*.py sonar.sources=repo/src/**.py The option of just passing the whole folder where the code is located (repo/src/) works, but the SonarQube plan my work pays for doesn't allow us to analyze the whole project, which is why I want to analyze only the Python files. What am I doing wrong? Can someone let me know what I am misunderstanding here?
[ "Try specifying sonar.exclusions in addition to sonar.sources.\nBefore, there was sonar.language, which you could set to py for Python:\n\nSet the language of the source code to analyze. Browse the Plugin Library page to get the list of all available languages. If not set, a multi-language analysis will be triggered\n\nBut that has long been deprecated and now does nothing in latest versions. From a thread on the Sonarqube forums, What is the alternative to the sonar.language property in the current version of sonarqube?:\n\nThe alternative is to just not use it.\n...\nNow if, on the other hand, you want to prevent other languages in the project from being analyzed, then you should do one of these things:\n\nconfigure sonar.sources not to include those files, assuming this can be done conveniently\nset exclusions\nas a last-ditch hack, there’s also the option of configuring language file suffixes at the project level to not recognize the other languages’ files (e.g. set the Java file suffix to .foo)\n\n\nA reply on that same thread said that this worked for them:\n\nso in my case the exclusions property looks like this\nsonar.exclusions=**/*.java,**/*.jar\n\n\nSo, first, change sonar.sources to specify directories.\n\nComma-separated paths to directories containing main source files.\n\nsonar.sources=repo/src\n\nThen specify sonar.exclusions for the file extensions to ignore.\nWe have the same setup in our projects:\nsonar.sources=app\nsonar.exclusions=**/*.html,**/*.js\n\nYou can check from the sonar scanner logs if some files were excluded:\nINFO: Indexing files...\nINFO: Project configuration:\nINFO: Excluded sources: **/*.html,**/*.js\nINFO: Excluded sources for coverage: tests/*\nINFO: 266 files indexed\nINFO: 24 files ignored because of inclusion/exclusion patterns\nINFO: 1 file ignored because of scm ignore settings\n...\n\n(We are using SonarQube server 9.5.0.56709, SonarScanner 4.3.0.2102)\n" ]
[ 1 ]
[]
[]
[ "python", "sonarqube" ]
stackoverflow_0074452047_python_sonarqube.txt
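Putting the answer's two settings together, a minimal sonar-project.properties sketch for the layout described in the question; the project key is a placeholder, and the exclusion list should be extended to cover whatever non-Python file types actually live under repo/src:

# analyse only the Python sources under repo/src
sonar.projectKey=my-project-key
sonar.sources=repo/src
sonar.exclusions=**/*.js,**/*.html,**/*.java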
Q: Applying a function that inverts column values using pandas I'm hoping to get someone's advice on a problem I'm running into trying to apply a function over columns in a dataframe I have that inverses the values in the columns. For example, if the observation is 0 and the max of the column is 7, I subtract the absolute value of the max from the observation: abs(0 - 7) = 7, so the smallest value becomes the largest. All of the columns essentially have a similar range to the above example. The shape of the sliced df is 16984,512 The code I have written creates a bunch of empty columns, that are then replaced with the max values of those columns. The new shape becomes 16984, 1029 including the 5 columns that I sliced off before. Then I use lambda to apply the function over the columns in question: #create max cols col = df.iloc[:, 5:] col_names = col.columns maximum = '_max' for col in df[col_names]: max_value = df[col].max() df[col+maximum] = np.zeros((16984,)) df[col+maximum].replace(to_replace = 0, value = max_value) #for each row and column inverse value of row def invert_col(x, col): """Invert values of a column""" return abs(x[col] - x[col+"_max"]) for col in col_names: new_df = df.apply(lambda x: invert_col(x, col), axis = 1) I've tried this where I includes axis = 1 and when I remove it and the behaviour is quite different. I am fairly new to Python so I'm finding it difficult to troubleshoot why this is happening. When I remove axis = 1, the error I get is a key error: KeyError: 'TV_TIME_LIVE' TV_TIME_LIVE is the first column in col_names, so it's as if it's not finding it. When I include axis = 1, I don't get an error, but all the columns in the df get flattened into a Series, with length equal to the original df. What I'm expecting is a new_df with the same shape (16984,1029) where the values of the 5th to the 517th column have the inverse function applied to them. I would really appreciate any guidance as to what's going on here and how we might get to the desired output. Many thanks A: apply is slow. It is better to use vectorized approaches as below. axis=1 means that your function will work column wise, if you do not specify it will work row wise. When you get key error it means pandas is searching for a column name and it cannot find it. If you really must use apply try searching for a few examples how exactly it works. import pandas as pd import numpy as np df=pd.DataFrame(np.random.randint(0,7,size=(100, 4)), columns=list('ABCD')) col_list=df.columns.copy() for col in col_list: df[col+"inversed"]=abs(df[col]-df[col].max())
Applying a function that inverts column values using pandas
I'm hoping to get someone's advice on a problem I'm running into trying to apply a function over columns in a dataframe I have that inverses the values in the columns. For example, if the observation is 0 and the max of the column is 7, I subtract the absolute value of the max from the observation: abs(0 - 7) = 7, so the smallest value becomes the largest. All of the columns essentially have a similar range to the above example. The shape of the sliced df is 16984,512 The code I have written creates a bunch of empty columns, that are then replaced with the max values of those columns. The new shape becomes 16984, 1029 including the 5 columns that I sliced off before. Then I use lambda to apply the function over the columns in question: #create max cols col = df.iloc[:, 5:] col_names = col.columns maximum = '_max' for col in df[col_names]: max_value = df[col].max() df[col+maximum] = np.zeros((16984,)) df[col+maximum].replace(to_replace = 0, value = max_value) #for each row and column inverse value of row def invert_col(x, col): """Invert values of a column""" return abs(x[col] - x[col+"_max"]) for col in col_names: new_df = df.apply(lambda x: invert_col(x, col), axis = 1) I've tried this where I includes axis = 1 and when I remove it and the behaviour is quite different. I am fairly new to Python so I'm finding it difficult to troubleshoot why this is happening. When I remove axis = 1, the error I get is a key error: KeyError: 'TV_TIME_LIVE' TV_TIME_LIVE is the first column in col_names, so it's as if it's not finding it. When I include axis = 1, I don't get an error, but all the columns in the df get flattened into a Series, with length equal to the original df. What I'm expecting is a new_df with the same shape (16984,1029) where the values of the 5th to the 517th column have the inverse function applied to them. I would really appreciate any guidance as to what's going on here and how we might get to the desired output. Many thanks
[ "apply is slow. It is better to use vectorized approaches as below.\naxis=1 means that your function will work column wise, if you do not specify it will work row wise. When you get key error it means pandas is searching for a column name and it cannot find it. If you really must use apply try searching for a few examples how exactly it works.\nimport pandas as pd\nimport numpy as np\n\ndf=pd.DataFrame(np.random.randint(0,7,size=(100, 4)), columns=list('ABCD'))\ncol_list=df.columns.copy()\nfor col in col_list:\n df[col+\"inversed\"]=abs(df[col]-df[col].max())\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074458144_pandas_python.txt
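Applied to the situation in the question (invert only the columns from the 5th one onward), the same vectorised idea works on a column slice without loops or apply; the column names and sizes below are placeholders:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 8, size=(10, 9)),
                  columns=[f'col{i}' for i in range(9)])

cols = df.columns[5:]                          # the columns to invert
df[cols] = (df[cols].max() - df[cols]).abs()   # max is computed per column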
Q: Right to left text in tkinter I'm using a RTL language and I need my text to be RTL. Is there a way to do it? And How can I justify my text? Example: from tkinter import * from tkinter.constants import * root = Tk() text = Text(root,,font=('Tahoma',8))#I need RTL and Right justified text! text.grid() scrl = Scrollbar(root, command=text.yview) text.config(yscrollcommand=scrl.set) scrl.grid(row=0, column=1, sticky='ns') root.mainloop() A: i modified your code and it's worked!.. from tkinter import * from tkinter.constants import * root = Tk() text = Text(root,,font=('Tahoma',8))#I need RTL and Right justified text! text.tag_configure('tag-right', justify='right') text.insert('end', 'text ' * 10, 'tag-right') text.grid() scrl = Scrollbar(root, command=text.yview) text.config(yscrollcommand=scrl.set) scrl.grid(row=0, column=1, sticky='ns') root.mainloop() in fact i add 2 lines code that set justify=CENTER for a Text widget fails: there is no such option for the widget. What you want is to create a tag with the parameter justify. Then you can insert some text using that tag (or you can insert any text and later apply the tag to a certain region)... Good luck! :) A: 1 Answer will solve that letters inside the word wrote correctly from right to left, but still, the new line position is on the left, and sentences do not begin from right to left like this example. کلمات در فارسی باید حتما از راست شروع شوند نه از میان صفحه ، و خط جدید هم باید از طرف راست شروع شود . if you see the second line begins from the middle of the page, like this. consider next line begins in the middle. A: This is not an direct answer for the current question; but is somehow related. In case of someone needs to write RTL in an Entry (Alternative to Textbox or Input in Tkinter) for farsi (persian), arabic and/or other RTL languages; you can configure your Entry like this: from tkinter import * win = Tk() ent = Entry(win, justify="right") ent.pack() win.mainloop() Note that: Chars of text is broken inside Entry; I don't know what exactly is wrong about this in Ubuntu; but as far as i know there should not be a problem in windows.
Right to left text in tkinter
I'm using an RTL language and I need my text to be RTL. Is there a way to do it? And how can I justify my text? Example: from tkinter import * from tkinter.constants import * root = Tk() text = Text(root, font=('Tahoma', 8))#I need RTL and Right justified text! text.grid() scrl = Scrollbar(root, command=text.yview) text.config(yscrollcommand=scrl.set) scrl.grid(row=0, column=1, sticky='ns') root.mainloop()
[ "i modified your code and it's worked!..\nfrom tkinter import *\nfrom tkinter.constants import *\nroot = Tk()\ntext = Text(root,,font=('Tahoma',8))#I need RTL and Right justified text!\n\ntext.tag_configure('tag-right', justify='right')\ntext.insert('end', 'text ' * 10, 'tag-right')\ntext.grid()\n\nscrl = Scrollbar(root, command=text.yview)\ntext.config(yscrollcommand=scrl.set)\nscrl.grid(row=0, column=1, sticky='ns')\nroot.mainloop()\n\nin fact i add 2 lines code that set justify=CENTER for a Text widget fails: there is no such option for the widget.\nWhat you want is to create a tag with the parameter justify. Then you can insert some text using that tag (or you can insert any text and later apply the tag to a certain region)... Good luck! :)\n", "1 Answer will solve that letters inside the word wrote correctly from right to left, but still, the new line position is on the left, and sentences do not begin from right to left like this example.\nکلمات در فارسی باید حتما از راست شروع شوند نه از میان\nصفحه ، و خط جدید هم باید از طرف\nراست شروع شود .\nif you see the second line begins from the middle of the\npage, like\nthis.\nconsider next line begins in the middle.\n", "This is not an direct answer for the current question; but is somehow related. In case of someone needs to write RTL in an Entry (Alternative to Textbox or Input in Tkinter) for farsi (persian), arabic and/or other RTL languages; you can configure your Entry like this:\nfrom tkinter import *\n\nwin = Tk()\nent = Entry(win, justify=\"right\")\nent.pack()\nwin.mainloop()\n\n\n\nNote that: Chars of text is broken inside Entry; I don't know what exactly is wrong about this in Ubuntu; but as far as i know there should not be a problem in windows.\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "python", "text", "tkinter" ]
stackoverflow_0020306726_python_text_tkinter.txt
Q: pythonanywhere - issues with JavaScript files loading in admin console having issues with js files loading in the admin console, hindering the activities of the admin console. Errors attached. Tried entering the static file path under the static section on the pythonanywhere. home/<username>/<name of the app>/static/js A: If you search for "static" in the PythonAnywhere help pages, you will find a number of help pages about setting up static files and debugging issues with them.
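For reference, the usual shape of the fix for missing admin JS/CSS on a Django app hosted on PythonAnywhere looks roughly like this - the paths below are placeholders, adjust them to your own username and project:

# settings.py  (paths are placeholders)
STATIC_URL = '/static/'
STATIC_ROOT = '/home/<username>/<name of the app>/static'

Then run python manage.py collectstatic in a console so the admin's JavaScript and CSS get copied into that folder, and on the Web tab add a static files entry mapping the URL /static/ to the same directory before reloading the web app.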
pythonanywhere - issues with JavaScript files loading in admin console
having issues with js files loading in the admin console, hindering the activities of the admin console. Errors attached. Tried entering the static file path under the static section on the pythonanywhere. home/<username>/<name of the app>/static/js
[ "If you search for \"static\" in the PythonAnywhere help pages, you will find a number of help pages about setting up static files and debugging issues with them.\n" ]
[ 1 ]
[]
[]
[ "django", "javascript", "python", "pythonanywhere" ]
stackoverflow_0074455219_django_javascript_python_pythonanywhere.txt
Q: How to encode string to hex for sending downlink to iot devices? According to this document I need to send downlink of type 080100ff to open the supply of the socket. However, I can not send 080100ff since I got this error The payload field is not a valid hexadecimal payload in upper case. Here is my so far works in python '080100ff'.encode('utf-8').hex() -> 3038303130306666 I can send it now, but I think it is incorrectly hex encoded because the device is not turned on A: In your example you are converting a hexadecimal string (080100ff) into hexadecimal value. You already have the hexadecimal value, it is: 080100ff Would that work? bytes.fromhex('080100ff') A: Could be maybe you need to try, 080100FF
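A small sketch of the difference between the two operations, using the payload from the question:

payload = '080100ff'

# what the question's code does: hex of the UTF-8 bytes of the characters '0', '8', '0', ...
payload.encode('utf-8').hex()    # '3038303130306666'  <- not what the device expects

# the payload is already hex; at most normalise the case if the API insists on upper case
payload.upper()                  # '080100FF'

# only if the sending library wants raw bytes rather than a hex string
bytes.fromhex(payload)           # b'\x08\x01\x00\xff'

Which of the last two you need depends on the network server's API, so check whether its payload field expects a hex string or raw/base64-encoded bytes.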
How to encode string to hex for sending downlink to iot devices?
According to this document I need to send downlink of type 080100ff to open the supply of the socket. However, I can not send 080100ff since I got this error The payload field is not a valid hexadecimal payload in upper case. Here is my so far works in python '080100ff'.encode('utf-8').hex() -> 3038303130306666 I can send it now, but I think it is incorrectly hex encoded because the device is not turned on
[ "In your example you are converting a hexadecimal string (080100ff) into hexadecimal value. You already have the hexadecimal value, it is: 080100ff\nWould that work?\nbytes.fromhex('080100ff')\n\n", "Could be maybe you need to try, 080100FF\n" ]
[ 1, 0 ]
[]
[]
[ "hex", "iot", "python" ]
stackoverflow_0074458800_hex_iot_python.txt
Q: Why does this code detects images as video and how can I fix it? This method is detecting .jpg pictures as video. Why is that? How can I fix it? def is_video(self) -> bool: try: res = self.video_metadata['codec_type'] == 'video' logger.info(f"Video.is_video() -> {res}") return res except: return False I'm getting the metadata with ffmpeg.probe(self.path, select_streams = stream)['streams'][0] I'm using the metadata for more things, that's why I've used ffmpeg in this method. A: Check for number of frames greater than 1 to distinguish between image and video. def is_video(self) -> bool: try: res = (self.video_metadata['codec_type'] == 'video' and int(self.video_metadata['nb_frames']) > 1) logger.info(f"Video.is_video() -> {res}") return res except: return False
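A standalone sketch of that check with ffmpeg-python, keeping the probe call from the question; note that nb_frames is not reported for every container, so a missing value is simply treated as 0 here:

import ffmpeg

def is_video(path: str) -> bool:
    """Rough check: the file has a video stream with more than one frame."""
    try:
        stream = ffmpeg.probe(path, select_streams='v')['streams'][0]
    except (ffmpeg.Error, KeyError, IndexError):
        return False
    # a single-frame 'video' stream (e.g. a JPEG) is treated as an image
    return stream.get('codec_type') == 'video' and int(stream.get('nb_frames', 0)) > 1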
Why does this code detects images as video and how can I fix it?
This method is detecting .jpg pictures as video. Why is that? How can I fix it? def is_video(self) -> bool: try: res = self.video_metadata['codec_type'] == 'video' logger.info(f"Video.is_video() -> {res}") return res except: return False I'm getting the metadata with ffmpeg.probe(self.path, select_streams = stream)['streams'][0] I'm using the metadata for more things, that's why I've used ffmpeg in this method.
[ "Check for number of frames greater than 1 to distinguish between image and video.\ndef is_video(self) -> bool:\n try:\n res = (self.video_metadata['codec_type'] == 'video'\n and int(self.video_metadata['nb_frames']) > 1)\n logger.info(f\"Video.is_video() -> {res}\")\n return res\n except:\n return False\n\n" ]
[ 1 ]
[]
[]
[ "ffmpeg", "python", "video" ]
stackoverflow_0074437759_ffmpeg_python_video.txt
Q: How I add to a list same number multiple times by count? I've got 2 problems here. my first problem is that the code shows me only one time a factor even though it's multiple x times by the same factor. I don't know how to add it to the factor list. Another problem is I'm not sure in print - how the sep works and how can I write "*" only between elements of factor list. I can't use any import functions here (intertools, maths etc.) Please help me. def factorize(n): prvocisla = [] faktor = [] #prime numbers for num in range(1, 2000): if num > 1: for i in range(2, num): if (num % i) == 0: break else: prvocisla.append(num) count = 0 for i in prvocisla: if n % i == 0: count += 1 faktor.append(i) print(n, " =", *faktor , sep=' *', end='\n') factorize(360) My result: 360 * = *2 *3 *5 The right result: 360 = 2 * 2 * 2 * 3 * 3 * 5 I try the count function with adding same factor to the list "count times" but it shows me an Error. A: The problem is that in your second 'for' loop you evaluate if there is a prime number in your number, but not how many times it is present. To do this you need to repeat the cycle every time you find a prime number and divide the initial number by the prime number. this way you will get to 1 and get all the factors in the array. Here the right code: def factorize(n): prvocisla = [] faktor = [] #prime numbers for num in range(1, 2000): if num > 1: for i in range(2, num): if (num % i) == 0: break else: prvocisla.append(num) count = 0 t = n # <-- a temporary variable which get n value while t>1: for i in prvocisla: if t % i == 0: count += 1 faktor.append(i) t = t/i <-- divide t every time you find a factor break print(f"{n!s} = {' * '.join(str(k) for k in faktor)}") factorize(360) For the print I use the @CreepyRaccoon suggestion.
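For comparison, here is a sketch that skips the prime list entirely: repeated trial division appends the same factor once per time it divides the number, and str.join puts the '*' only between the elements:

def factorize(n):
    faktor = []
    zbytek = n                     # work on a copy so n stays intact for the printout
    d = 2
    while d * d <= zbytek:
        while zbytek % d == 0:     # append d as many times as it divides
            faktor.append(d)
            zbytek //= d
        d += 1
    if zbytek > 1:                 # whatever is left over is itself a prime factor
        faktor.append(zbytek)
    print(n, "=", " * ".join(str(f) for f in faktor))

factorize(360)   # prints: 360 = 2 * 2 * 2 * 3 * 3 * 5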
How I add to a list same number multiple times by count?
I've got 2 problems here. my first problem is that the code shows me only one time a factor even though it's multiple x times by the same factor. I don't know how to add it to the factor list. Another problem is I'm not sure in print - how the sep works and how can I write "*" only between elements of factor list. I can't use any import functions here (intertools, maths etc.) Please help me. def factorize(n): prvocisla = [] faktor = [] #prime numbers for num in range(1, 2000): if num > 1: for i in range(2, num): if (num % i) == 0: break else: prvocisla.append(num) count = 0 for i in prvocisla: if n % i == 0: count += 1 faktor.append(i) print(n, " =", *faktor , sep=' *', end='\n') factorize(360) My result: 360 * = *2 *3 *5 The right result: 360 = 2 * 2 * 2 * 3 * 3 * 5 I try the count function with adding same factor to the list "count times" but it shows me an Error.
[ "The problem is that in your second 'for' loop you evaluate if there is a prime number in your number, but not how many times it is present.\nTo do this you need to repeat the cycle every time you find a prime number and divide the initial number by the prime number. this way you will get to 1 and get all the factors in the array.\nHere the right code:\ndef factorize(n):\n prvocisla = []\n faktor = []\n #prime numbers\n for num in range(1, 2000):\n if num > 1:\n for i in range(2, num):\n if (num % i) == 0:\n break\n else:\n prvocisla.append(num)\n count = 0 \n\n t = n # <-- a temporary variable which get n value\n while t>1:\n for i in prvocisla:\n if t % i == 0:\n count += 1\n faktor.append(i)\n t = t/i <-- divide t every time you find a factor\n break\n\n print(f\"{n!s} = {' * '.join(str(k) for k in faktor)}\")\n\nfactorize(360)\n\nFor the print I use the @CreepyRaccoon suggestion.\n" ]
[ 0 ]
[]
[]
[ "list", "prime_factoring", "python" ]
stackoverflow_0074453084_list_prime_factoring_python.txt
Q: How to keep the same style for a table using openpyxl and python? So my goal was to add data in an already existing table using openpyxl and python. I did it by using .cell(row, column).value method. After doing this I had a problem because the table I was writing the data in was not expanding correctly. So i found this method and it worked fine : from openpyxl import load_workbook #getting the max number of row ok = bs_sheet.max_row #expanding the table bs_sheet.tables['Table1'].ref = "A1:H"+str(ok) What I initially thought is that the format of the table would expand accordingly. What I mean by that is if I had a formula in a column, when expanding using openpyxl it would also expand the formula (or the position etc.). Just like it works when you do it manually. But it doesn't. And this is where I have a problem because I haven't found anything. What I am having trouble with is when extending the table, the shaping that was already done on the existing rows doesn't extend down on the rows of the table. Is there a way I could fix this ? A: Using xlwings helps to keep the same format (including justifications, formulas) of a table. When inserting data, the table will automatically expand. See example below : import xlwings as xw wb = xw.Book('test_book.xlsx') tableau = wb.sheets[0].tables[0] sheet = wb.sheets[0] tableau.name = 'new' sheet.range(3,1).value = 'VAMONOS' wb.save('test_book.xlsx') wb.close() In ths example, it will add value in an alreading existing table (and also change the name of the table). You will see the table already expanded when opening the file again.
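If staying with openpyxl is preferred, a sketch of the manual route is below - the file name, sheet, table and column positions are made up for the example. openpyxl stores formulas as plain text, so the Translator helper is used to shift relative references the way Excel would when the table grows:

from openpyxl import load_workbook
from openpyxl.formula.translate import Translator

wb = load_workbook('template.xlsx')            # hypothetical file
ws = wb['Sheet1']
table = ws.tables['Table1']
formula_col = 3                                # column C holds a formula in the existing rows

for row in [('new name', 2), ('other name', 5)]:   # values for columns A and B
    ws.append(row)
    r = ws.max_row
    src = ws.cell(row=r - 1, column=formula_col)
    dst = ws.cell(row=r, column=formula_col)
    dst.value = Translator(src.value, origin=src.coordinate).translate_formula(dst.coordinate)

table.ref = f"A1:C{ws.max_row}"                # expand the table over the new rows
wb.save('template.xlsx')

Cell styling (fills, number formats) is not carried over automatically by this either, which is one reason the xlwings route above is often the simpler option.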
How to keep the same style for a table using openpyxl and python?
So my goal was to add data in an already existing table using openpyxl and python. I did it by using .cell(row, column).value method. After doing this I had a problem because the table I was writing the data in was not expanding correctly. So i found this method and it worked fine : from openpyxl import load_workbook #getting the max number of row ok = bs_sheet.max_row #expanding the table bs_sheet.tables['Table1'].ref = "A1:H"+str(ok) What I initially thought is that the format of the table would expand accordingly. What I mean by that is if I had a formula in a column, when expanding using openpyxl it would also expand the formula (or the position etc.). Just like it works when you do it manually. But it doesn't. And this is where I have a problem because I haven't found anything. What I am having trouble with is when extending the table, the shaping that was already done on the existing rows doesn't extend down on the rows of the table. Is there a way I could fix this ?
[ "Using xlwings helps to keep the same format (including justifications, formulas) of a table.\nWhen inserting data, the table will automatically expand. See example below :\nimport xlwings as xw\nwb = xw.Book('test_book.xlsx')\n\ntableau = wb.sheets[0].tables[0]\nsheet = wb.sheets[0]\n\ntableau.name = 'new'\nsheet.range(3,1).value = 'VAMONOS'\n\nwb.save('test_book.xlsx')\nwb.close()\n\nIn ths example, it will add value in an alreading existing table (and also change the name of the table). You will see the table already expanded when opening the file again.\n" ]
[ 0 ]
[]
[]
[ "excel", "openpyxl", "python" ]
stackoverflow_0074432531_excel_openpyxl_python.txt
Q: Pandas concat is adding unnamed index When I concat two dataframes which are both 337 columns and then export to CSV, the result become 338 columns with each time a new unnamed index being added. df1 Out[141]: datecreated 1 2 3 4 5 ... 331 332 333 334 335 336 0 2022-11-14 4000 3900 3850 3810 3790 ... 5520 5300 5180 4990 4730 4520 1 2022-11-15 4000 3900 3850 3810 3790 ... 5520 5300 5180 4990 4730 4520 [2 rows x 337 columns] df4 Out[142]: datecreated 1 2 3 ... 333 334 335 336 0 2022-11-16 4080.0 3980.0 3940.0 ... 5510.0 5290.0 4960.0 4700.0 [1 rows x 337 columns] using the concatenation: df5 = pd.concat([df1, df4], ignore_index=True) and then exporting to CSV: csv_buffer = StringIO() df5.to_csv(csv_buffer) file_name = 'outputs.csv' s3_resource.Object(bucket_name, file_name).put(Body=csv_buffer.getvalue()) yields a 338 column with unnamed index after fetching the updated output file: body = s3_client.get_object(Bucket=bucket_name, Key='outputs.csv')['Body'] contents = body.read().decode('utf-8') df1 = pd.read_csv(StringIO(contents), parse_dates=['datecreated']) df1 Out[134]: Unnamed: 0 datecreated 1 2 ... 333 334 335 336 0 0 2022-11-14 4000.0 3900.0 ... 5180.0 4990.0 4730.0 4520.0 1 1 2022-11-15 4000.0 3900.0 ... 5180.0 4990.0 4730.0 4520.0 2 2 2022-11-16 4080.0 3980.0 ... 5510.0 5290.0 4960.0 4700.0 What is causing this? A: The unnamed index is the row index of the dataframe. If you do not want this, you can use index=False as one of the arguments such that : d5.to_csv(csv_buffer,index=False)
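The extra column is not created by concat itself (ignore_index=True already gives a clean 0..n index); it is the row index that to_csv writes out by default and that read_csv then brings back as an unnamed column. Two small ways around it, using the names from the question:

# drop the index when writing
df5.to_csv(csv_buffer, index=False)

# or, if the file was already written with it, swallow it when reading back
df1 = pd.read_csv(StringIO(contents), parse_dates=['datecreated'], index_col=0)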
Pandas concat is adding unnamed index
When I concat two dataframes which are both 337 columns and then export to CSV, the result become 338 columns with each time a new unnamed index being added. df1 Out[141]: datecreated 1 2 3 4 5 ... 331 332 333 334 335 336 0 2022-11-14 4000 3900 3850 3810 3790 ... 5520 5300 5180 4990 4730 4520 1 2022-11-15 4000 3900 3850 3810 3790 ... 5520 5300 5180 4990 4730 4520 [2 rows x 337 columns] df4 Out[142]: datecreated 1 2 3 ... 333 334 335 336 0 2022-11-16 4080.0 3980.0 3940.0 ... 5510.0 5290.0 4960.0 4700.0 [1 rows x 337 columns] using the concatenation: df5 = pd.concat([df1, df4], ignore_index=True) and then exporting to CSV: csv_buffer = StringIO() df5.to_csv(csv_buffer) file_name = 'outputs.csv' s3_resource.Object(bucket_name, file_name).put(Body=csv_buffer.getvalue()) yields a 338 column with unnamed index after fetching the updated output file: body = s3_client.get_object(Bucket=bucket_name, Key='outputs.csv')['Body'] contents = body.read().decode('utf-8') df1 = pd.read_csv(StringIO(contents), parse_dates=['datecreated']) df1 Out[134]: Unnamed: 0 datecreated 1 2 ... 333 334 335 336 0 0 2022-11-14 4000.0 3900.0 ... 5180.0 4990.0 4730.0 4520.0 1 1 2022-11-15 4000.0 3900.0 ... 5180.0 4990.0 4730.0 4520.0 2 2 2022-11-16 4080.0 3980.0 ... 5510.0 5290.0 4960.0 4700.0 What is causing this?
[ "The unnamed index is the row index of the dataframe. If you do not want this, you can use index=False as one of the arguments such that :\nd5.to_csv(csv_buffer,index=False)\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074458652_pandas_python.txt
Q: Walking average based on two matching columns I have a dataframe df of the following format: team1 team2 score1 score2 0 1 2 1 0 1 3 4 3 0 2 1 3 1 1 3 2 4 0 2 4 1 2 3 2 What I want to do is to create a new column that will return rolling average of the score1 column of last 3 games but only when the two teams from team1 and team2 are matching. Expected output: team1 team2 score1 score2 new 0 1 2 1 0 1 1 3 4 3 0 3 2 1 3 1 1 1 3 2 4 0 2 0 4 1 2 3 2 2 I was able to calculate walking average for all games for each team separately like that: df['new'] = df.groupby('team1')['score1'].transform(lambda x: x.rolling(3, min_periods=1).mean() but cannot find a sensible way to expand that to match two teams. I tried the code below that returns... something, but definitely not what I need. df['new'] = df.groupby(['team1','team2'])['score1'].transform(lambda x: x.rolling(3, min_periods=1).mean() I suppose this could be done with apply() but I want to avoid it due to performace issues. A: Not sure what is your exact expected output, but you can first reshape the DataFrame to a long format: (pd.wide_to_long(df.reset_index(), ['team', 'score'], i='index', j='x') .groupby('team')['score'] .rolling(3, min_periods=1).mean() ) Output: team index x 1 0 1 1.0 2 1 1.0 2 3 1 0.0 0 2 0.0 3 1 1 3.0 2 2 2.0 4 1 2 0.0 3 2 1.0 Name: score, dtype: float64 A: The walkaround I've found was to create 'temp' column that merges the values in 'team1' and 'team2' and uses that column as a reference for the rolling average. df['temp'] = df.team1+'_'+df.team2 df['new'] = df.groupby('temp')['score1'].transform(lambda x: x.rolling(3, min_periods=1).mean() Can this be done in one line?
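For what it's worth, the same result should also be reachable without the helper column by grouping on both keys at once - this is equivalent to the concatenated 'temp' key, as long as the parentheses are balanced:

df['new'] = (df.groupby(['team1', 'team2'])['score1']
               .transform(lambda s: s.rolling(3, min_periods=1).mean()))

If the intention is the average of the previous three meetings excluding the current game, add .shift(1) before .rolling(...) inside the lambda.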
Walking average based on two matching columns
I have a dataframe df of the following format: team1 team2 score1 score2 0 1 2 1 0 1 3 4 3 0 2 1 3 1 1 3 2 4 0 2 4 1 2 3 2 What I want to do is to create a new column that will return rolling average of the score1 column of last 3 games but only when the two teams from team1 and team2 are matching. Expected output: team1 team2 score1 score2 new 0 1 2 1 0 1 1 3 4 3 0 3 2 1 3 1 1 1 3 2 4 0 2 0 4 1 2 3 2 2 I was able to calculate walking average for all games for each team separately like that: df['new'] = df.groupby('team1')['score1'].transform(lambda x: x.rolling(3, min_periods=1).mean() but cannot find a sensible way to expand that to match two teams. I tried the code below that returns... something, but definitely not what I need. df['new'] = df.groupby(['team1','team2'])['score1'].transform(lambda x: x.rolling(3, min_periods=1).mean() I suppose this could be done with apply() but I want to avoid it due to performace issues.
[ "Not sure what is your exact expected output, but you can first reshape the DataFrame to a long format:\n(pd.wide_to_long(df.reset_index(), ['team', 'score'], i='index', j='x')\n .groupby('team')['score']\n .rolling(3, min_periods=1).mean()\n)\n\nOutput:\nteam index x\n1 0 1 1.0\n 2 1 1.0\n2 3 1 0.0\n 0 2 0.0\n3 1 1 3.0\n 2 2 2.0\n4 1 2 0.0\n 3 2 1.0\nName: score, dtype: float64\n\n", "The walkaround I've found was to create 'temp' column that merges the values in 'team1' and 'team2' and uses that column as a reference for the rolling average.\ndf['temp'] = df.team1+'_'+df.team2\ndf['new'] = df.groupby('temp')['score1'].transform(lambda x: x.rolling(3, min_periods=1).mean()\n\nCan this be done in one line?\n" ]
[ 0, 0 ]
[]
[]
[ "lambda", "pandas", "python" ]
stackoverflow_0074458608_lambda_pandas_python.txt
Q: Android apk with buildozer problem while compiling I'm trying to compile a python file into an apk on android with Buildozer. I am using the KivyMD library. However, when I run the command "buildozer -v android debug", I get this error: # Check configuration tokens # 1 error(s) found in the buildozer.spec [app] "orientation" have an invalid value I've done this before and it worked. Now I'm trying to make a larger project. My .spec file: [app] # (str) Title of your application title = Kivy Weather For Poland # (str) Package name package.name = KivyWeather # (str) Package domain (needed for android/ios packaging) package.domain = org.test # (str) Source code where the main.py live source.dir = . # (list) Source files to include (let empty to include all the files) source.include_exts = py,png,jpg,kv,atlas # (list) List of inclusions using pattern matching #source.include_patterns = assets/*,images/*.png # (list) Source files to exclude (let empty to not exclude anything) #source.exclude_exts = spec # (list) List of directory to exclude (let empty to not exclude anything) #source.exclude_dirs = tests, bin, venv # (list) List of exclusions using pattern matching # Do not prefix with './' #source.exclude_patterns = license,images/*/*.jpg # (str) Application versioning (method 1) version = 0.1 # (str) Application versioning (method 2) # version.regex = __version__ = ['"](.*)['"] # version.filename = %(source.dir)s/main.py # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3,kivy, kivymd, sqlite3, json, requests # (str) Custom source folders for requirements # Sets custom source for any requirements with recipes # requirements.source.kivy = ../../kivy # (str) Presplash of the application #presplash.filename = %(source.dir)s/data/presplash.png # (str) Icon of the application #icon.filename = %(source.dir)s/data/icon.png # (str) Supported orientation (one of landscape, sensorLandscape, portrait or all) orientation = portrait # (list) List of service to declare #services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY # # OSX Specific # # author = author # change the major version of python used by the app osx.python_version = 3 # Kivy version to use osx.kivy_version = 1.9.1 # # Android specific # # (bool) Indicate if the application should be fullscreen or not fullscreen = 1 # (string) Presplash background color (for android toolchain) # Supported formats are: #RRGGBB #AARRGGBB or one of the following names: # red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray, # darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy, # olive, purple, silver, teal. #android.presplash_color = #FFFFFF # (string) Presplash animation using Lottie format. # see https://lottiefiles.com/ for examples and https://airbnb.design/lottie/ # for general documentation. # Lottie files can be created using various tools, like Adobe After Effect or Synfig. #android.presplash_lottie = "path/to/lottie/file.json" # (str) Adaptive icon of the application (used if Android API level is 26+ at runtime) #icon.adaptive_foreground.filename = %(source.dir)s/data/icon_fg.png #icon.adaptive_background.filename = %(source.dir)s/data/icon_bg.png # (list) Permissions android.permissions = INTERNET # (list) features (adds uses-feature -tags to manifest) #android.features = android.hardware.usb.host # (int) Target Android API, should be as high as possible. #android.api = 27 # (int) Minimum API your APK / AAB will support. 
#android.minapi = 21 # (int) Android SDK version to use #android.sdk = 20 # (str) Android NDK version to use #android.ndk = 23b # (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi. #android.ndk_api = 21 # (bool) Use --private data storage (True) or --dir public storage (False) #android.private_storage = True # (str) Android NDK directory (if empty, it will be automatically downloaded.) #android.ndk_path = # (str) Android SDK directory (if empty, it will be automatically downloaded.) #android.sdk_path = # (str) ANT directory (if empty, it will be automatically downloaded.) #android.ant_path = # (bool) If True, then skip trying to update the Android sdk # This can be useful to avoid excess Internet downloads or save time # when an update is due and you just want to test/build your package # android.skip_update = False # (bool) If True, then automatically accept SDK license # agreements. This is intended for automation only. If set to False, # the default, you will be shown the license when first running # buildozer. # android.accept_sdk_license = False # (str) Android entry point, default is ok for Kivy-based app #android.entrypoint = org.kivy.android.PythonActivity # (str) Full name including package path of the Java class that implements Android Activity # use that parameter together with android.entrypoint to set custom Java class instead of PythonActivity #android.activity_class_name = org.kivy.android.PythonActivity # (str) Extra xml to write directly inside the <manifest> element of AndroidManifest.xml # use that parameter to provide a filename from where to load your custom XML code #android.extra_manifest_xml = ./src/android/extra_manifest.xml # (str) Extra xml to write directly inside the <manifest><application> tag of AndroidManifest.xml # use that parameter to provide a filename from where to load your custom XML arguments: #android.extra_manifest_application_arguments = ./src/android/extra_manifest_application_arguments.xml # (str) Full name including package path of the Java class that implements Python Service # use that parameter to set custom Java class instead of PythonService #android.service_class_name = org.kivy.android.PythonService # (str) Android app theme, default is ok for Kivy-based app # android.apptheme = "@android:style/Theme.NoTitleBar" # (list) Pattern to whitelist for the whole project #android.whitelist = # (str) Path to a custom whitelist file #android.whitelist_src = # (str) Path to a custom blacklist file #android.blacklist_src = # (list) List of Java .jar files to add to the libs so that pyjnius can access # their classes. Don't add jars that you do not need, since extra jars can slow # down the build process. Allows wildcards matching, for example: # OUYA-ODK/libs/*.jar #android.add_jars = foo.jar,bar.jar,path/to/more/*.jar # (list) List of Java files to add to the android project (can be java or a # directory containing the files) #android.add_src = # (list) Android AAR archives to add #android.add_aars = # (list) Put these files or directories in the apk assets directory. # Either form may be used, and assets need not be in 'source.include_exts'. # 1) android.add_assets = source_asset_relative_path # 2) android.add_assets = source_asset_path:destination_asset_relative_path #android.add_assets = # (list) Gradle dependencies to add #android.gradle_dependencies = # (bool) Enable AndroidX support. 
Enable when 'android.gradle_dependencies' # contains an 'androidx' package, or any package from Kotlin source. # android.enable_androidx requires android.api >= 28 #android.enable_androidx = False # (list) add java compile options # this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option # see https://developer.android.com/studio/write/java8-support for further information # android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8" # (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies} # please enclose in double quotes # e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }" #android.add_gradle_repositories = # (list) packaging options to add # see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html # can be necessary to solve conflicts in gradle_dependencies # please enclose in double quotes # e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'" #android.add_packaging_options = # (list) Java classes to add as activities to the manifest. #android.add_activities = com.example.ExampleActivity # (str) OUYA Console category. Should be one of GAME or APP # If you leave this blank, OUYA support will not be enabled #android.ouya.category = GAME # (str) Filename of OUYA Console icon. It must be a 732x412 png image. #android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png # (str) XML file to include as an intent filters in <activity> tag #android.manifest.intent_filters = # (str) launchMode to set for the main activity #android.manifest.launch_mode = standard # (list) Android additional libraries to copy into libs/armeabi #android.add_libs_armeabi = libs/android/*.so #android.add_libs_armeabi_v7a = libs/android-v7/*.so #android.add_libs_arm64_v8a = libs/android-v8/*.so #android.add_libs_x86 = libs/android-x86/*.so #android.add_libs_mips = libs/android-mips/*.so # (bool) Indicate whether the screen should stay on # Don't forget to add the WAKE_LOCK permission if you set this to True #android.wakelock = False # (list) Android application meta-data to set (key=value format) #android.meta_data = # (list) Android library project to add (will be added in the # project.properties automatically.) #android.library_references = # (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag #android.uses_library = # (str) Android logcat filters to use #android.logcat_filters = *:S python:D # (bool) Android logcat only display log for activity's pid #android.logcat_pid_only = False # (str) Android additional adb arguments #android.adb_args = -H host.docker.internal # (bool) Copy library instead of making a libpymodules.so #android.copy_libs = 1 # (list) The Android archs to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64 # In past, was `android.arch` as we weren't supporting builds for multiple archs at the same time. 
android.archs = arm64-v8a, armeabi-v7a # (int) overrides automatic versionCode computation (used in build.gradle) # this is not the same as app version and should only be edited if you know what you're doing # android.numeric_version = 1 # (bool) enables Android auto backup feature (Android API >=23) android.allow_backup = True # (str) XML file for custom backup rules (see official auto backup documentation) # android.backup_rules = # (str) If you need to insert variables into your AndroidManifest.xml file, # you can do so with the manifestPlaceholders property. # This property takes a map of key-value pairs. (via a string) # Usage example : android.manifest_placeholders = [myCustomUrl:\"org.kivy.customurl\"] # android.manifest_placeholders = [:] # (bool) disables the compilation of py to pyc/pyo files when packaging # android.no-compile-pyo = True # (str) The format used to package the app for release mode (aab or apk or aar). # android.release_artifact = aab # (str) The format used to package the app for debug mode (apk or aar). # android.debug_artifact = apk # # Python for android (p4a) specific # # (str) python-for-android URL to use for checkout #p4a.url = # (str) python-for-android fork to use in case if p4a.url is not specified, defaults to upstream (kivy) #p4a.fork = kivy # (str) python-for-android branch to use, defaults to master #p4a.branch = master # (str) python-for-android specific commit to use, defaults to HEAD, must be within p4a.branch #p4a.commit = HEAD # (str) python-for-android git clone directory (if empty, it will be automatically cloned from github) #p4a.source_dir = # (str) The directory in which python-for-android should look for your own build recipes (if any) #p4a.local_recipes = # (str) Filename to the hook for p4a #p4a.hook = # (str) Bootstrap to use for android builds # p4a.bootstrap = sdl2 # (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask) #p4a.port = # Control passing the --use-setup-py vs --ignore-setup-py to p4a # "in the future" --use-setup-py is going to be the default behaviour in p4a, right now it is not # Setting this to false will pass --ignore-setup-py, true will pass --use-setup-py # NOTE: this is general setuptools integration, having pyproject.toml is enough, no need to generate # setup.py if you're using Poetry, but you need to add "toml" to source.include_exts. 
#p4a.setup_py = false # (str) extra command line arguments to pass when invoking pythonforandroid.toolchain #p4a.extra_args = # # iOS specific # # (str) Path to a custom kivy-ios folder #ios.kivy_ios_dir = ../kivy-ios # Alternately, specify the URL and branch of a git checkout: ios.kivy_ios_url = https://github.com/kivy/kivy-ios ios.kivy_ios_branch = master # Another platform dependency: ios-deploy # Uncomment to use a custom checkout #ios.ios_deploy_dir = ../ios_deploy # Or specify URL and branch ios.ios_deploy_url = https://github.com/phonegap/ios-deploy ios.ios_deploy_branch = 1.10.0 # (bool) Whether or not to sign the code ios.codesign.allowed = false # (str) Name of the certificate to use for signing the debug version # Get a list of available identities: buildozer ios list_identities #ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)" # (str) The development team to use for signing the debug version #ios.codesign.development_team.debug = <hexstring> # (str) Name of the certificate to use for signing the release version #ios.codesign.release = %(ios.codesign.debug)s # (str) The development team to use for signing the release version #ios.codesign.development_team.release = <hexstring> # (str) URL pointing to .ipa file to be installed # This option should be defined along with `display_image_url` and `full_size_image_url` options. #ios.manifest.app_url = # (str) URL pointing to an icon (57x57px) to be displayed during download # This option should be defined along with `app_url` and `full_size_image_url` options. #ios.manifest.display_image_url = # (str) URL pointing to a large icon (512x512px) to be used by iTunes # This option should be defined along with `app_url` and `display_image_url` options. #ios.manifest.full_size_image_url = [buildozer] # (int) Log level (0 = error only, 1 = info, 2 = debug (with command output)) log_level = 2 # (int) Display warning if buildozer is run as root (0 = False, 1 = True) warn_on_root = 1 # (str) Path to build artifact storage, absolute or relative to spec file # build_dir = ./.buildozer # (str) Path to build output (i.e. .apk, .aab, .ipa) storage # bin_dir = ./bin # ----------------------------------------------------------------------------- # List as sections # # You can define all the "list" as [section:key]. # Each line will be considered as a option to the list. # Let's take [app] / source.exclude_patterns. # Instead of doing: # #[app] #source.exclude_patterns = license,data/audio/*.wav,data/images/original/* # # This can be translated into: # #[app:source.exclude_patterns] #license #data/audio/*.wav #data/images/original/* # # ----------------------------------------------------------------------------- # Profiles # # You can extend section / key with a profile # For example, you want to deploy a demo version of your application without # HD content. You could first change the title to add "(demo)" in the name # and extend the excluded directories to remove the HD content. # #[app@demo] #title = My Application (demo) # #[app:source.exclude_patterns@demo] #images/hd/* # # Then, invoke the command line with the "demo" profile: # #buildozer --profile demo android debug It worked before in the same configuration. A: You have an incorrect blank space in front of author. A: .spec files are very strict about spaces. 
When hiding the comments, the critical part of the .spec file looks like this: orientation = portrait author = author osx.python_version = 3 As one can see, the author has a blank space in front leading to an error for the value orientation.
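In other words, the fix is just to out-dent that one line so every key in buildozer.spec starts at column 0, for example:

orientation = portrait

author = author

osx.python_version = 3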
Android apk with buildozer problem while compiling
I'm trying to compile a python file into an apk on android with Buildozer. I am using the KivyMD library. However, when I run the command "buildozer -v android debug", I get this error: # Check configuration tokens # 1 error(s) found in the buildozer.spec [app] "orientation" have an invalid value I've done this before and it worked. Now I'm trying to make a larger project. My .spec file: [app] # (str) Title of your application title = Kivy Weather For Poland # (str) Package name package.name = KivyWeather # (str) Package domain (needed for android/ios packaging) package.domain = org.test # (str) Source code where the main.py live source.dir = . # (list) Source files to include (let empty to include all the files) source.include_exts = py,png,jpg,kv,atlas # (list) List of inclusions using pattern matching #source.include_patterns = assets/*,images/*.png # (list) Source files to exclude (let empty to not exclude anything) #source.exclude_exts = spec # (list) List of directory to exclude (let empty to not exclude anything) #source.exclude_dirs = tests, bin, venv # (list) List of exclusions using pattern matching # Do not prefix with './' #source.exclude_patterns = license,images/*/*.jpg # (str) Application versioning (method 1) version = 0.1 # (str) Application versioning (method 2) # version.regex = __version__ = ['"](.*)['"] # version.filename = %(source.dir)s/main.py # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3,kivy, kivymd, sqlite3, json, requests # (str) Custom source folders for requirements # Sets custom source for any requirements with recipes # requirements.source.kivy = ../../kivy # (str) Presplash of the application #presplash.filename = %(source.dir)s/data/presplash.png # (str) Icon of the application #icon.filename = %(source.dir)s/data/icon.png # (str) Supported orientation (one of landscape, sensorLandscape, portrait or all) orientation = portrait # (list) List of service to declare #services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY # # OSX Specific # # author = author # change the major version of python used by the app osx.python_version = 3 # Kivy version to use osx.kivy_version = 1.9.1 # # Android specific # # (bool) Indicate if the application should be fullscreen or not fullscreen = 1 # (string) Presplash background color (for android toolchain) # Supported formats are: #RRGGBB #AARRGGBB or one of the following names: # red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray, # darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy, # olive, purple, silver, teal. #android.presplash_color = #FFFFFF # (string) Presplash animation using Lottie format. # see https://lottiefiles.com/ for examples and https://airbnb.design/lottie/ # for general documentation. # Lottie files can be created using various tools, like Adobe After Effect or Synfig. #android.presplash_lottie = "path/to/lottie/file.json" # (str) Adaptive icon of the application (used if Android API level is 26+ at runtime) #icon.adaptive_foreground.filename = %(source.dir)s/data/icon_fg.png #icon.adaptive_background.filename = %(source.dir)s/data/icon_bg.png # (list) Permissions android.permissions = INTERNET # (list) features (adds uses-feature -tags to manifest) #android.features = android.hardware.usb.host # (int) Target Android API, should be as high as possible. #android.api = 27 # (int) Minimum API your APK / AAB will support. 
#android.minapi = 21 # (int) Android SDK version to use #android.sdk = 20 # (str) Android NDK version to use #android.ndk = 23b # (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi. #android.ndk_api = 21 # (bool) Use --private data storage (True) or --dir public storage (False) #android.private_storage = True # (str) Android NDK directory (if empty, it will be automatically downloaded.) #android.ndk_path = # (str) Android SDK directory (if empty, it will be automatically downloaded.) #android.sdk_path = # (str) ANT directory (if empty, it will be automatically downloaded.) #android.ant_path = # (bool) If True, then skip trying to update the Android sdk # This can be useful to avoid excess Internet downloads or save time # when an update is due and you just want to test/build your package # android.skip_update = False # (bool) If True, then automatically accept SDK license # agreements. This is intended for automation only. If set to False, # the default, you will be shown the license when first running # buildozer. # android.accept_sdk_license = False # (str) Android entry point, default is ok for Kivy-based app #android.entrypoint = org.kivy.android.PythonActivity # (str) Full name including package path of the Java class that implements Android Activity # use that parameter together with android.entrypoint to set custom Java class instead of PythonActivity #android.activity_class_name = org.kivy.android.PythonActivity # (str) Extra xml to write directly inside the <manifest> element of AndroidManifest.xml # use that parameter to provide a filename from where to load your custom XML code #android.extra_manifest_xml = ./src/android/extra_manifest.xml # (str) Extra xml to write directly inside the <manifest><application> tag of AndroidManifest.xml # use that parameter to provide a filename from where to load your custom XML arguments: #android.extra_manifest_application_arguments = ./src/android/extra_manifest_application_arguments.xml # (str) Full name including package path of the Java class that implements Python Service # use that parameter to set custom Java class instead of PythonService #android.service_class_name = org.kivy.android.PythonService # (str) Android app theme, default is ok for Kivy-based app # android.apptheme = "@android:style/Theme.NoTitleBar" # (list) Pattern to whitelist for the whole project #android.whitelist = # (str) Path to a custom whitelist file #android.whitelist_src = # (str) Path to a custom blacklist file #android.blacklist_src = # (list) List of Java .jar files to add to the libs so that pyjnius can access # their classes. Don't add jars that you do not need, since extra jars can slow # down the build process. Allows wildcards matching, for example: # OUYA-ODK/libs/*.jar #android.add_jars = foo.jar,bar.jar,path/to/more/*.jar # (list) List of Java files to add to the android project (can be java or a # directory containing the files) #android.add_src = # (list) Android AAR archives to add #android.add_aars = # (list) Put these files or directories in the apk assets directory. # Either form may be used, and assets need not be in 'source.include_exts'. # 1) android.add_assets = source_asset_relative_path # 2) android.add_assets = source_asset_path:destination_asset_relative_path #android.add_assets = # (list) Gradle dependencies to add #android.gradle_dependencies = # (bool) Enable AndroidX support. 
Enable when 'android.gradle_dependencies' # contains an 'androidx' package, or any package from Kotlin source. # android.enable_androidx requires android.api >= 28 #android.enable_androidx = False # (list) add java compile options # this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option # see https://developer.android.com/studio/write/java8-support for further information # android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8" # (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies} # please enclose in double quotes # e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }" #android.add_gradle_repositories = # (list) packaging options to add # see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html # can be necessary to solve conflicts in gradle_dependencies # please enclose in double quotes # e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'" #android.add_packaging_options = # (list) Java classes to add as activities to the manifest. #android.add_activities = com.example.ExampleActivity # (str) OUYA Console category. Should be one of GAME or APP # If you leave this blank, OUYA support will not be enabled #android.ouya.category = GAME # (str) Filename of OUYA Console icon. It must be a 732x412 png image. #android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png # (str) XML file to include as an intent filters in <activity> tag #android.manifest.intent_filters = # (str) launchMode to set for the main activity #android.manifest.launch_mode = standard # (list) Android additional libraries to copy into libs/armeabi #android.add_libs_armeabi = libs/android/*.so #android.add_libs_armeabi_v7a = libs/android-v7/*.so #android.add_libs_arm64_v8a = libs/android-v8/*.so #android.add_libs_x86 = libs/android-x86/*.so #android.add_libs_mips = libs/android-mips/*.so # (bool) Indicate whether the screen should stay on # Don't forget to add the WAKE_LOCK permission if you set this to True #android.wakelock = False # (list) Android application meta-data to set (key=value format) #android.meta_data = # (list) Android library project to add (will be added in the # project.properties automatically.) #android.library_references = # (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag #android.uses_library = # (str) Android logcat filters to use #android.logcat_filters = *:S python:D # (bool) Android logcat only display log for activity's pid #android.logcat_pid_only = False # (str) Android additional adb arguments #android.adb_args = -H host.docker.internal # (bool) Copy library instead of making a libpymodules.so #android.copy_libs = 1 # (list) The Android archs to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64 # In past, was `android.arch` as we weren't supporting builds for multiple archs at the same time. 
android.archs = arm64-v8a, armeabi-v7a # (int) overrides automatic versionCode computation (used in build.gradle) # this is not the same as app version and should only be edited if you know what you're doing # android.numeric_version = 1 # (bool) enables Android auto backup feature (Android API >=23) android.allow_backup = True # (str) XML file for custom backup rules (see official auto backup documentation) # android.backup_rules = # (str) If you need to insert variables into your AndroidManifest.xml file, # you can do so with the manifestPlaceholders property. # This property takes a map of key-value pairs. (via a string) # Usage example : android.manifest_placeholders = [myCustomUrl:\"org.kivy.customurl\"] # android.manifest_placeholders = [:] # (bool) disables the compilation of py to pyc/pyo files when packaging # android.no-compile-pyo = True # (str) The format used to package the app for release mode (aab or apk or aar). # android.release_artifact = aab # (str) The format used to package the app for debug mode (apk or aar). # android.debug_artifact = apk # # Python for android (p4a) specific # # (str) python-for-android URL to use for checkout #p4a.url = # (str) python-for-android fork to use in case if p4a.url is not specified, defaults to upstream (kivy) #p4a.fork = kivy # (str) python-for-android branch to use, defaults to master #p4a.branch = master # (str) python-for-android specific commit to use, defaults to HEAD, must be within p4a.branch #p4a.commit = HEAD # (str) python-for-android git clone directory (if empty, it will be automatically cloned from github) #p4a.source_dir = # (str) The directory in which python-for-android should look for your own build recipes (if any) #p4a.local_recipes = # (str) Filename to the hook for p4a #p4a.hook = # (str) Bootstrap to use for android builds # p4a.bootstrap = sdl2 # (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask) #p4a.port = # Control passing the --use-setup-py vs --ignore-setup-py to p4a # "in the future" --use-setup-py is going to be the default behaviour in p4a, right now it is not # Setting this to false will pass --ignore-setup-py, true will pass --use-setup-py # NOTE: this is general setuptools integration, having pyproject.toml is enough, no need to generate # setup.py if you're using Poetry, but you need to add "toml" to source.include_exts. 
#p4a.setup_py = false # (str) extra command line arguments to pass when invoking pythonforandroid.toolchain #p4a.extra_args = # # iOS specific # # (str) Path to a custom kivy-ios folder #ios.kivy_ios_dir = ../kivy-ios # Alternately, specify the URL and branch of a git checkout: ios.kivy_ios_url = https://github.com/kivy/kivy-ios ios.kivy_ios_branch = master # Another platform dependency: ios-deploy # Uncomment to use a custom checkout #ios.ios_deploy_dir = ../ios_deploy # Or specify URL and branch ios.ios_deploy_url = https://github.com/phonegap/ios-deploy ios.ios_deploy_branch = 1.10.0 # (bool) Whether or not to sign the code ios.codesign.allowed = false # (str) Name of the certificate to use for signing the debug version # Get a list of available identities: buildozer ios list_identities #ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)" # (str) The development team to use for signing the debug version #ios.codesign.development_team.debug = <hexstring> # (str) Name of the certificate to use for signing the release version #ios.codesign.release = %(ios.codesign.debug)s # (str) The development team to use for signing the release version #ios.codesign.development_team.release = <hexstring> # (str) URL pointing to .ipa file to be installed # This option should be defined along with `display_image_url` and `full_size_image_url` options. #ios.manifest.app_url = # (str) URL pointing to an icon (57x57px) to be displayed during download # This option should be defined along with `app_url` and `full_size_image_url` options. #ios.manifest.display_image_url = # (str) URL pointing to a large icon (512x512px) to be used by iTunes # This option should be defined along with `app_url` and `display_image_url` options. #ios.manifest.full_size_image_url = [buildozer] # (int) Log level (0 = error only, 1 = info, 2 = debug (with command output)) log_level = 2 # (int) Display warning if buildozer is run as root (0 = False, 1 = True) warn_on_root = 1 # (str) Path to build artifact storage, absolute or relative to spec file # build_dir = ./.buildozer # (str) Path to build output (i.e. .apk, .aab, .ipa) storage # bin_dir = ./bin # ----------------------------------------------------------------------------- # List as sections # # You can define all the "list" as [section:key]. # Each line will be considered as a option to the list. # Let's take [app] / source.exclude_patterns. # Instead of doing: # #[app] #source.exclude_patterns = license,data/audio/*.wav,data/images/original/* # # This can be translated into: # #[app:source.exclude_patterns] #license #data/audio/*.wav #data/images/original/* # # ----------------------------------------------------------------------------- # Profiles # # You can extend section / key with a profile # For example, you want to deploy a demo version of your application without # HD content. You could first change the title to add "(demo)" in the name # and extend the excluded directories to remove the HD content. # #[app@demo] #title = My Application (demo) # #[app:source.exclude_patterns@demo] #images/hd/* # # Then, invoke the command line with the "demo" profile: # #buildozer --profile demo android debug It worked before in the same configuration.
[ "You have an incorrect blank space in front of author.\n", ".spec files are very strict about spaces.\nWhen hiding the comments, the critical part of the .spec file looks like this:\norientation = portrait\n\n author = author\n\nosx.python_version = 3\n\nAs one can see, the author has a blank space in front leading to an error for the value orientation.\n" ]
[ 0, 0 ]
[]
[]
[ "android", "apk", "buildozer", "kivymd", "python" ]
stackoverflow_0073601412_android_apk_buildozer_kivymd_python.txt
Q: fastapi: mapping sqlalchemy database model to pydantic geojson feature I just started playing with FastAPI, SQLAlchemy, Pydantic and I'm trying to build a simple API endpoint to return the rows in a postgis table as a geojson feature collection. This is my sqlalchemy model: class Poi(Base): __tablename__ = 'poi' id = Column(Integer, primary_key=True) name = Column(Text, nullable=False) type_id = Column(Integer) geometry = Column(Geometry('POINT', 4326, from_text='ST_GeomFromEWKT'), nullable=False) Using geojson_pydantic the relevant pydantic models are: from geojson_pydantic.features import Feature, FeatureCollection from geojson_pydantic.geometries import Point from typing import List class PoiProperties(BaseModel): name: str type_id: int class PoiFeature(Feature): id: int geometry: Point properties: PoiProperties class PoiCollection(FeatureCollection): features: List[PoiFeature] Desired Output: Ideally I'd like to be able to retrieve and return the database records like so: def get_pois(db: Session, skip: int = 0, limit: int = 100): return db.query(Poi).offset(skip).limit(limit).all() @app.get("/geojson", response_model=PoiCollection) def read_geojson(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): return get_pois(db, skip=skip, limit=limit) Still I'm trying to figure out how to map the name and type_id columns from the db model to the PoiProperties in the PoiFeature object. A: You want to return the PoiCollection schema (response_model=schemas.PoiCollection) except that you return your database response directly without any formatting. So you have to convert your crud response into your schema response. # Different function for translate db response to Pydantic response according to your different schema def make_response_poi_properties(poi): return PoiFeature(name=poi.name, type_id=poi.type_id) def make_response_poi_feature(poi): return PoiFeature(id=poi.id, geometry=poi.geometry,properties=make_response_poi_properties(poi)) def make_response_poi_collection(pois): response = [] for poi in pois: response.append(make_response_poi_feature(poi) return response @app.get("/geojson", response_model=PoiCollection) def read_geojson(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): # Call function for translate db data to pydantic data according to your response_model return make_response_poi_collection(get_pois(db, skip=skip, limit=limit)) or simply use the orm mode inside your different schema class
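A hedged sketch of the second suggestion (orm_mode) applied to this particular model - it assumes GeoAlchemy2/Shapely are available (typical when the column is a Geometry type), because the flat ORM row still has to be split into geometry and properties by hand:

from geoalchemy2.shape import to_shape
from shapely.geometry import mapping

class PoiProperties(BaseModel):
    name: str
    type_id: int

    class Config:
        orm_mode = True            # lets pydantic read the attributes straight off a Poi row

def poi_to_feature(poi: Poi) -> PoiFeature:
    return PoiFeature(
        type="Feature",
        id=poi.id,
        geometry=mapping(to_shape(poi.geometry)),   # WKBElement -> shapely -> GeoJSON dict
        properties=PoiProperties.from_orm(poi),
    )

The endpoint can then return PoiCollection(type="FeatureCollection", features=[poi_to_feature(p) for p in get_pois(db, skip=skip, limit=limit)]); whether the type fields are required or defaulted depends on the geojson_pydantic version in use.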
fastapi: mapping sqlalchemy database model to pydantic geojson feature
I just started playing with FastAPI, SQLAlchemy, Pydantic and I'm trying to build a simple API endpoint to return the rows in a postgis table as a geojson feature collection. This is my sqlalchemy model: class Poi(Base): __tablename__ = 'poi' id = Column(Integer, primary_key=True) name = Column(Text, nullable=False) type_id = Column(Integer) geometry = Column(Geometry('POINT', 4326, from_text='ST_GeomFromEWKT'), nullable=False) Using geojson_pydantic the relevant pydantic models are: from geojson_pydantic.features import Feature, FeatureCollection from geojson_pydantic.geometries import Point from typing import List class PoiProperties(BaseModel): name: str type_id: int class PoiFeature(Feature): id: int geometry: Point properties: PoiProperties class PoiCollection(FeatureCollection): features: List[PoiFeature] Desired Output: Ideally I'd like to be able to retrieve and return the database records like so: def get_pois(db: Session, skip: int = 0, limit: int = 100): return db.query(Poi).offset(skip).limit(limit).all() @app.get("/geojson", response_model=PoiCollection) def read_geojson(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): return get_pois(db, skip=skip, limit=limit) Still I'm trying to figure out how to map the name and type_id columns from the db model to the PoiProperties in the PoiFeature object.
[ "You want to return the PoiCollection schema (response_model=schemas.PoiCollection) except that you return your database response directly without any formatting. So you have to convert your crud response into your schema response.\n# Different function for translate db response to Pydantic response according to your different schema\ndef make_response_poi_properties(poi):\n return PoiFeature(name=poi.name, type_id=poi.type_id) \n\ndef make_response_poi_feature(poi):\n return PoiFeature(id=poi.id, geometry=poi.geometry,properties=make_response_poi_properties(poi)) \n\ndef make_response_poi_collection(pois):\n response = []\n for poi in pois:\n response.append(make_response_poi_feature(poi)\n return response\n\n@app.get(\"/geojson\", response_model=PoiCollection)\ndef read_geojson(skip: int = 0,\n limit: int = 100,\n db: Session = Depends(get_db)):\n \n # Call function for translate db data to pydantic data according to your response_model\n return make_response_poi_collection(get_pois(db, skip=skip, limit=limit))\n\nor simply use the orm mode inside your different schema class\n" ]
[ 3 ]
[]
[]
[ "fastapi", "geojson", "pydantic", "python", "sqlalchemy" ]
stackoverflow_0067419454_fastapi_geojson_pydantic_python_sqlalchemy.txt
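The answer above ends by mentioning ORM mode without showing it. A minimal sketch of that idea, building on the models, app and get_pois from the question: enabling orm_mode on PoiProperties lets Pydantic read the SQLAlchemy attributes directly, while the geometry column still needs an explicit conversion (the geoalchemy2/shapely to_shape call is an assumption about how the PostGIS column is mapped and may need adjusting):

from geoalchemy2.shape import to_shape   # assumption: geoalchemy2 and shapely are installed
from geojson_pydantic.geometries import Point

class PoiProperties(BaseModel):
    name: str
    type_id: int

    class Config:
        orm_mode = True        # allows PoiProperties.from_orm(poi) to read ORM attributes

def poi_to_feature(poi) -> PoiFeature:
    shp = to_shape(poi.geometry)                    # WKBElement -> shapely Point
    return PoiFeature(
        id=poi.id,
        geometry=Point(type="Point", coordinates=[shp.x, shp.y]),
        properties=PoiProperties.from_orm(poi),
    )

@app.get("/geojson", response_model=PoiCollection)
def read_geojson(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    return PoiCollection(features=[poi_to_feature(p) for p in get_pois(db, skip=skip, limit=limit)])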
Q: How can I use context vars in other file in python 3.7 or above? I have a context var in in file a.py and I want to use it in b.py. a.py: import contextvars cntx = contextvars.ContextVar("abcd") b.py: from .a import cntx print(cntx.get()) Error: Traceback (most recent call last): File "/home/user/Desktop/b.py", line 1, in <module> from .a import cntx ImportError: attempted relative import with no known parent package Isn't this how context variables supposed to work? I'm using python 3.9 A: The ImportError that you are getting is because of the invalid file name. .a is a valid file name and would work if you had a file with filename being .a.py The reason you are getting the LookupError: <ContextVar name='abcd' at 0x7f7d6209c5e0> is because you are trying to get() the context which hasn't been set yet. The get() method raises a LookupError if no context was set before. Try with following - a.py: import contextvars cntx = contextvars.ContextVar("abcd") cntx.set("example") b.py: from a import cntx print(cntx.get()) When you run b.py - Output: example
How can I use context vars in other file in python 3.7 or above?
I have a context var in file a.py and I want to use it in b.py. a.py: import contextvars cntx = contextvars.ContextVar("abcd") b.py: from .a import cntx print(cntx.get()) Error: Traceback (most recent call last): File "/home/user/Desktop/b.py", line 1, in <module> from .a import cntx ImportError: attempted relative import with no known parent package Isn't this how context variables are supposed to work? I'm using Python 3.9
[ "The ImportError that you are getting is because of the invalid file name. .a is a valid file name and would work if you had a file with filename being .a.py\nThe reason you are getting the LookupError: <ContextVar name='abcd' at 0x7f7d6209c5e0> is because you are trying to get() the context which hasn't been set yet.\nThe get() method raises a LookupError if no context was set before.\nTry with following -\na.py:\nimport contextvars\n\ncntx = contextvars.ContextVar(\"abcd\")\ncntx.set(\"example\")\n\nb.py:\nfrom a import cntx\nprint(cntx.get())\n\nWhen you run b.py -\nOutput:\nexample\n\n" ]
[ 1 ]
[]
[]
[ "error_handling", "importerror", "python", "python_3.x", "python_contextvars" ]
stackoverflow_0074458949_error_handling_importerror_python_python_3.x_python_contextvars.txt
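A LookupError from cntx.get() can also be avoided by giving the variable (or the get() call) a default, which keeps the cross-module usage from the question working before anything has called set(). A small sketch with plain absolute imports (the relative "from .a import ..." form only works inside a package):

# a.py
import contextvars
cntx = contextvars.ContextVar("abcd", default="not set yet")   # default avoids LookupError

# b.py
from a import cntx

print(cntx.get())            # -> "not set yet" until someone calls cntx.set(...)
cntx.set("example")
print(cntx.get())            # -> "example"
print(cntx.get("fallback"))  # get() also accepts a per-call default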
Q: pandas - dynamically fill the missing row of group by(create a duplicate row if required based on previous record) Need to fill the data accrding to the stage and last stage is the maximum date Input: RecordID ChangeDate Stage 17764 31-08-2021 New 17764 02-09-2021 inprogress 17764 05-09-2021 won 70382 04-01-2022 new 70382 06-01-2022 hold 70382 07-01-2022 lost Expceted output: RecordID ChangeDate Stage 17764 31-08-2021 New 17764 01-09-2021 New 17764 02-09-2021 inprogress 17764 03-09-2021 inprogress 17764 04-09-2021 inprogress 17764 05-09-2021 won 70382 04-01-2022 new 70382 05-01-2022 new 70382 06-01-2022 hold 70382 07-01-2022 lost A: You can use a groupby.resample: df['ChangeDate'] = pd.to_datetime(df['ChangeDate'], dayfirst=True) (df.set_index('ChangeDate') .groupby('RecordID', as_index=False) .resample('1d').ffill() .reset_index('ChangeDate') ) Output: ChangeDate RecordID Stage 0 2021-08-31 17764 New 0 2021-09-01 17764 New 0 2021-09-02 17764 inprogress 0 2021-09-03 17764 inprogress 0 2021-09-04 17764 inprogress 0 2021-09-05 17764 won 1 2022-01-04 70382 new 1 2022-01-05 70382 new 1 2022-01-06 70382 hold 1 2022-01-07 70382 lost A: One option is with complete from pyjanitor, to expose missing rows: # pip install pyjanitor import pandas as pd import janitor # build a list of new dates new_dates = {'ChangeDate': lambda df: pd.date_range(df.min(), df.max(), freq='1D')} df.complete(new_dates, by = 'RecordID').ffill() Out[70]: RecordID ChangeDate Stage 0 17764 2021-08-31 New 1 17764 2021-09-01 New 2 17764 2021-09-02 inprogress 3 17764 2021-09-03 inprogress 4 17764 2021-09-04 inprogress 5 17764 2021-09-05 won 6 70382 2022-01-04 new 7 70382 2022-01-05 new 8 70382 2022-01-06 hold 9 70382 2022-01-07 lost Another option is to build a dataframe, and do a merge with the original dataframe - this is useful for non-unique values: index = (df .set_index('ChangeDate') .drop(columns='Stage') .groupby('RecordID') .apply(lambda df: df.asfreq(freq='1D')) .index) new_df = pd.DataFrame([], index = index) (new_df .merge(df, how = 'outer', left_index = True, right_on = ['RecordID', 'ChangeDate']) .ffill()) RecordID ChangeDate Stage 0 17764 2021-08-31 New 5 17764 2021-09-01 New 1 17764 2021-09-02 inprogress 5 17764 2021-09-03 inprogress 5 17764 2021-09-04 inprogress 2 17764 2021-09-05 won 3 70382 2022-01-04 new 5 70382 2022-01-05 new 4 70382 2022-01-06 hold 5 70382 2022-01-07 lost
pandas - dynamically fill the missing row of group by (create a duplicate row if required based on previous record)
Need to fill the data according to the stage; the last stage is the maximum date Input: RecordID ChangeDate Stage 17764 31-08-2021 New 17764 02-09-2021 inprogress 17764 05-09-2021 won 70382 04-01-2022 new 70382 06-01-2022 hold 70382 07-01-2022 lost Expected output: RecordID ChangeDate Stage 17764 31-08-2021 New 17764 01-09-2021 New 17764 02-09-2021 inprogress 17764 03-09-2021 inprogress 17764 04-09-2021 inprogress 17764 05-09-2021 won 70382 04-01-2022 new 70382 05-01-2022 new 70382 06-01-2022 hold 70382 07-01-2022 lost
[ "You can use a groupby.resample:\ndf['ChangeDate'] = pd.to_datetime(df['ChangeDate'], dayfirst=True)\n\n(df.set_index('ChangeDate')\n .groupby('RecordID', as_index=False)\n .resample('1d').ffill()\n .reset_index('ChangeDate')\n)\n\nOutput:\n ChangeDate RecordID Stage\n0 2021-08-31 17764 New\n0 2021-09-01 17764 New\n0 2021-09-02 17764 inprogress\n0 2021-09-03 17764 inprogress\n0 2021-09-04 17764 inprogress\n0 2021-09-05 17764 won\n1 2022-01-04 70382 new\n1 2022-01-05 70382 new\n1 2022-01-06 70382 hold\n1 2022-01-07 70382 lost\n\n", "One option is with complete from pyjanitor, to expose missing rows:\n# pip install pyjanitor\nimport pandas as pd\nimport janitor\n\n# build a list of new dates\nnew_dates = {'ChangeDate': lambda df: pd.date_range(df.min(), df.max(), freq='1D')}\n\ndf.complete(new_dates, by = 'RecordID').ffill()\nOut[70]: \n RecordID ChangeDate Stage\n0 17764 2021-08-31 New\n1 17764 2021-09-01 New\n2 17764 2021-09-02 inprogress\n3 17764 2021-09-03 inprogress\n4 17764 2021-09-04 inprogress\n5 17764 2021-09-05 won\n6 70382 2022-01-04 new\n7 70382 2022-01-05 new\n8 70382 2022-01-06 hold\n9 70382 2022-01-07 lost\n\nAnother option is to build a dataframe, and do a merge with the original dataframe - this is useful for non-unique values:\nindex = (df\n.set_index('ChangeDate')\n.drop(columns='Stage')\n.groupby('RecordID')\n.apply(lambda df: df.asfreq(freq='1D'))\n.index)\nnew_df = pd.DataFrame([], index = index)\n(new_df\n.merge(df, how = 'outer', left_index = True, right_on = ['RecordID', 'ChangeDate'])\n.ffill())\n\n RecordID ChangeDate Stage\n0 17764 2021-08-31 New\n5 17764 2021-09-01 New\n1 17764 2021-09-02 inprogress\n5 17764 2021-09-03 inprogress\n5 17764 2021-09-04 inprogress\n2 17764 2021-09-05 won\n3 70382 2022-01-04 new\n5 70382 2022-01-05 new\n4 70382 2022-01-06 hold\n5 70382 2022-01-07 lost\n\n" ]
[ 2, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074458068_pandas_python.txt
Q: Get a Variable directly from function Python i want to work with the variable "sets" after the function has been called. How can i do this? sets = 0 def search_str(monatsabrechnung, d1): with open(monatsabrechnung, 'r') as file: content = file.read() if lastDay == heute and ja == 1: sets = 1 else: pass search_str(monatsabrechnung, d1) print(sets) A: IIUC, you are trying to modify a variable in a function, which is originally defined outside a function. This is a variable scoping problem. Do check this awesome article to get an understanding of how variable scopes work in python. Back to your code, the issue here is that even though you run the function to modify the variable sets to 1, python goes back to the outer scope where sets were set to 0. sets = 0 #<-- outer scope def search_str(): ... #do something sets = 1 #<-- inner scope search_str() print(sets) #back to outer scope # 0 Solution 1: Pass as a parameter and return You will have to pass the variable as a parameter to the function and then return it as follows - sets = 0 def search_str(sets): #<-- pass as paramter ... #do something sets = 1 #<-- modify return sets output = search_str(sets) #<-- save the output print(output) # 1 TIP: If your function is already returning another output, you can actually return, store and unpack multiple return values at once - In your return statement, return everything you need return x, y, sets Then, while calling use - X, Y, sets = search_str(...) Solution 2: Set scope to global If passing a parameter and returning is not an option for you, the scope of the variable has to be made global - sets = 0 def search_str(): global sets #<-- set the scope to global ... #do something sets = 1 search_str() print(sets) # 1 EDIT: As an additional pointer, as Matthias points out in his comments correctly, global scoping is usually avoided as it can cause a lot of problems if not careful. Here is a great StackOverflow thread detailing Why are global variables evil? A: Return the new value of sets from the function. If the if statement is true, then return 1, otherwise return the same value as before. To be able to return the old value you should add it as a parameter to the function. sets = 0 def search_str(monatsabrechnung, d1, sets): with open(monatsabrechnung, 'r') as file: content = file.read() if lastDay == heute and ja == 1: current_sets = 1 else: current_sets = sets return current_sets sets = search_str(monatsabrechnung, d1, sets) Don't use global if you don't have to. You get side-effects and the code is less reusable.
Get a Variable directly from function Python
I want to work with the variable "sets" after the function has been called. How can I do this? sets = 0 def search_str(monatsabrechnung, d1): with open(monatsabrechnung, 'r') as file: content = file.read() if lastDay == heute and ja == 1: sets = 1 else: pass search_str(monatsabrechnung, d1) print(sets)
[ "IIUC, you are trying to modify a variable in a function, which is originally defined outside a function. This is a variable scoping problem. Do check this awesome article to get an understanding of how variable scopes work in python.\nBack to your code, the issue here is that even though you run the function to modify the variable sets to 1, python goes back to the outer scope where sets were set to 0.\nsets = 0 #<-- outer scope\ndef search_str():\n ... #do something\n sets = 1 #<-- inner scope \n\nsearch_str()\nprint(sets) #back to outer scope\n\n# 0\n\nSolution 1: Pass as a parameter and return\nYou will have to pass the variable as a parameter to the function and then return it as follows -\nsets = 0\ndef search_str(sets): #<-- pass as paramter\n ... #do something\n sets = 1 #<-- modify\n return sets \n\noutput = search_str(sets) #<-- save the output\nprint(output)\n\n# 1\n\n\nTIP: If your function is already returning another output, you can actually return, store and unpack multiple return values at once -\n\nIn your return statement, return everything you need return x, y, sets\nThen, while calling use - X, Y, sets = search_str(...)\n\n\nSolution 2: Set scope to global\nIf passing a parameter and returning is not an option for you, the scope of the variable has to be made global -\nsets = 0\ndef search_str():\n global sets #<-- set the scope to global\n ... #do something\n sets = 1\n \nsearch_str()\nprint(sets)\n\n# 1\n\n\nEDIT: As an additional pointer, as Matthias points out in his comments correctly, global scoping is usually avoided as it can cause a lot of problems if not careful.\nHere is a great StackOverflow thread detailing Why are global variables evil?\n", "Return the new value of sets from the function. If the if statement is true, then return 1, otherwise return the same value as before. To be able to return the old value you should add it as a parameter to the function.\nsets = 0\n\ndef search_str(monatsabrechnung, d1, sets):\n with open(monatsabrechnung, 'r') as file:\n content = file.read()\n if lastDay == heute and ja == 1:\n current_sets = 1\n else:\n current_sets = sets\n return current_sets\n\nsets = search_str(monatsabrechnung, d1, sets)\n\nDon't use global if you don't have to. You get side-effects and the code is less reusable.\n" ]
[ 1, 0 ]
[]
[]
[ "function", "python", "variables" ]
stackoverflow_0074459125_function_python_variables.txt
Q: Truncate by the minimum of another DataFrame by columns Be the following python pandas DataFrame (df): age money time 10 300 10 8 200 20 20 1800 80 15 200 50 I want to extract the minimum value for each column: age money time 8 200 10 Given this other new dataframe (new_df): age money time 30 -100 15 10 100 50 -2 1800 -20 18 -50 52 All values that are less than the minima of each column are set to the minimum value of the previous dataframe. age money time 30 200 15 10 200 50 8 1800 10 18 200 52 A: You can use min to get the min of df, then clip to clip the values of new_df: out = new_df.clip(lower=df.min(), axis=1) Output: age money time 0 30 200 15 1 10 200 50 2 8 1800 10 3 18 200 52 Restricting to a subset of columns: cols = ['age', 'time'] out = new_df[cols].clip(lower=df.min(), axis=1).combine_first(new_df) output: age money time 0 30 -100 15 1 10 100 50 2 8 1800 10 3 18 -50 52
Truncate by the minimum of another DataFrame by columns
Given the following pandas DataFrame (df): age money time 10 300 10 8 200 20 20 1800 80 15 200 50 I want to extract the minimum value for each column: age money time 8 200 10 Given this other new dataframe (new_df): age money time 30 -100 15 10 100 50 -2 1800 -20 18 -50 52 All values that are less than the minima of each column are set to the minimum value of the previous dataframe. age money time 30 200 15 10 200 50 8 1800 10 18 200 52
[ "You can use min to get the min of df, then clip to clip the values of new_df:\nout = new_df.clip(lower=df.min(), axis=1)\n\nOutput:\n age money time\n0 30 200 15\n1 10 200 50\n2 8 1800 10\n3 18 200 52\n\nRestricting to a subset of columns:\ncols = ['age', 'time']\nout = new_df[cols].clip(lower=df.min(), axis=1).combine_first(new_df)\n\noutput:\n age money time\n0 30 -100 15\n1 10 100 50\n2 8 1800 10\n3 18 -50 52\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074459291_dataframe_pandas_python.txt
Q: Day 9 Secret Auction Program from 100 Days of Code with Dr. Angela Yu Can someone help me with this? What am I doing wrong? I checked Dr. Angela Yu's solution. She solves the problem in a different way but I am not able to understand where I am making mistakes. This program should print the name and the bid of the highest bidder. However, when I run this code, the console prints the bid and the name that was entered at the end. from replit import clear #HINT: You can call clear() to clear the output in the console. from art import logo print (logo) game_end = False bids = {} while not game_end: name = input("What is your name?\n") bid = int(input("How much are you bidding? $")) bids[name] = bid game = input("Are there any other bidders. y or n\n").lower() if game == "n": game_end = True else: clear() highest_bid = 0 for bidder in bids: if bid > highest_bid: highest_bid = bid winner = bidder print(f"The winner is {winner} with a bid of ${highest_bid}") A: You are not considering the value of bid that was stored in the dict bids. You just need to get that value as you are iterating: for bidder, bid in bids.items(): if bid > highest_bid: highest_bid = bid winner = bidder print(f"The winner is {winner} with a bid of ${highest_bid}") By using bids.items(), the dict yields both the name (bidder) and the value of the bid (bid) Alternatively you can sort bids and extract the last (highest) bid: winner,highest_bid = sorted(bids.items(), key=lambda item:item[1])[-1] print(f"The winner is {winner} with a bid of ${highest_bid}") You could change your code to be more like Dr. Angela's like this: for bidder in bids: bid = bids[bidder] # This is what Dr. Angela does if bid > highest_bid: highest_bid = bid winner = bidder print(f"The winner is {winner} with a bid of ${highest_bid}")
Day 9 Secret Auction Program from 100 Days of Code with Dr. Angela Yu
Can someone help me with this? What am I doing wrong? I checked Dr. Angela Yu's solution. She solves the problem in a different way but I am not able to understand where I am making mistakes. This program should print the name and the bid of the highest bidder. However, when I run this code, the console prints the bid and the name that was entered at the end. from replit import clear #HINT: You can call clear() to clear the output in the console. from art import logo print (logo) game_end = False bids = {} while not game_end: name = input("What is your name?\n") bid = int(input("How much are you bidding? $")) bids[name] = bid game = input("Are there any other bidders. y or n\n").lower() if game == "n": game_end = True else: clear() highest_bid = 0 for bidder in bids: if bid > highest_bid: highest_bid = bid winner = bidder print(f"The winner is {winner} with a bid of ${highest_bid}")
[ "You are not considering the value of bid that was stored in the dict bids. You just need to get that value as you are iterating:\nfor bidder, bid in bids.items():\n if bid > highest_bid:\n highest_bid = bid\n winner = bidder\nprint(f\"The winner is {winner} with a bid of ${highest_bid}\")\n\nBy using bids.items(), the dict yields both the name (bidder) and the value of the bid (bid)\nAlternatively you can sort bids and extract the last (highest) bid:\nwinner,highest_bid = sorted(bids.items(), key=lambda item:item[1])[-1]\nprint(f\"The winner is {winner} with a bid of ${highest_bid}\")\n\nYou could change your code to be more like Dr. Angela's like this:\nfor bidder in bids:\n bid = bids[bidder] # This is what Dr. Angela does\n if bid > highest_bid:\n highest_bid = bid\n winner = bidder\nprint(f\"The winner is {winner} with a bid of ${highest_bid}\")\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "python", "replit" ]
stackoverflow_0074459214_dictionary_python_replit.txt
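As a compact alternative to the final loop in the answers above, the built-in max() can pick the winner directly once all bids are collected, because the dictionary maps each bidder to a bid:

bids = {"alice": 120, "bob": 300, "carol": 250}    # example data shaped like the question's dict

winner = max(bids, key=bids.get)                   # bidder whose value is largest
highest_bid = bids[winner]
print(f"The winner is {winner} with a bid of ${highest_bid}")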
Q: After clicking the button the user should jump to a specific message in another text channel I have a problem. I have created a button, if the user presses this button the user should jump to a certain message in another text channel. How do I do that? There is jump_url but how do I refer from the button to the messages and that the user jumps? class MyView(discord.ui.View): # Create a class called MyView that subclasses discord.ui.View @discord.ui.button(label="->", style=discord.ButtonStyle.primary, emoji="") async def button_callback(self, button, interaction): await interaction.response.send_message("You clicked the button!") # Send a message when the button is clicked # link to certian message @commands.has_permissions(administrator=True, ban_members=True) @bot.command() async def button(ctx): await ctx.send(view=MyView()) A: I already answered this question in your other post. There is a special type of button for links, and you can set the url to the jump_url of the message. Other than that, you can send an embed with the link in it. If you don't want either of those then there's no other option. Official example: https://github.com/Rapptz/discord.py/blob/master/examples/views/link.py
After clicking the button the user should jump to a specific message in another text channel
I have a problem. I have created a button, if the user presses this button the user should jump to a certain message in another text channel. How do I do that? There is jump_url but how do I refer from the button to the messages and that the user jumps? class MyView(discord.ui.View): # Create a class called MyView that subclasses discord.ui.View @discord.ui.button(label="->", style=discord.ButtonStyle.primary, emoji="") async def button_callback(self, button, interaction): await interaction.response.send_message("You clicked the button!") # Send a message when the button is clicked # link to certian message @commands.has_permissions(administrator=True, ban_members=True) @bot.command() async def button(ctx): await ctx.send(view=MyView())
[ "I already answered this question in your other post. There is a special type of button for links, and you can set the url to the jump_url of the message.\nOther than that, you can send an embed with the link in it. If you don't want either of those then there's no other option.\nOfficial example: https://github.com/Rapptz/discord.py/blob/master/examples/views/link.py\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074458604_discord_discord.py_python.txt
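The answer above points at a link-style button but does not show one. A rough sketch, assuming discord.py 2.x (or Pycord, which the decorator signature in the question suggests) and that message is the discord.Message the user should jump to; link buttons carry a URL instead of a callback, so Discord itself opens the message:

import discord

class JumpView(discord.ui.View):
    def __init__(self, message: discord.Message):
        super().__init__()
        # A link button has no callback; clicking it opens message.jump_url directly.
        self.add_item(discord.ui.Button(label="Go to message",
                                        style=discord.ButtonStyle.link,
                                        url=message.jump_url))

# hypothetical usage inside a command:
# target = await some_channel.fetch_message(some_message_id)
# await ctx.send("Jump to the message:", view=JumpView(target))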
Q: aiohttp asyncio parsing works fine for a time, then without any error gets no data I need to parse html from list of domains (only main pages) Script works well for a period of time, then it's getting no data with very high speed. Looks like requests doesn't even send. My code: import asyncio import time import aiohttp import pandas as pd import json from bs4 import BeautifulSoup df = pd.read_excel('work_file.xlsx') domains_count = df.shape[0] start_time = time.time() print(start_time) data = {} async def get_data(session, url, j): try: async with session.get(url) as resp: html = await resp.text() rawhtml = BeautifulSoup(html, 'lxml') title = rawhtml.title data[url] = {'url': url, 'resp': resp.status, 'title': str(title), 'html': str(rawhtml)} print(j) print(url) except Exception as e: data[url] = {'url': url, 'resp': str(e), 'title': 'None', 'html': 'None'} print(j) print(url) print(str(e)) async def get_queue(): tasks = [] async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=120)) as session: for j, i in enumerate(df.domain): i = 'http://' + i.lower() task = asyncio.create_task(get_data(session, i, j)) tasks.append(task) await asyncio.gather(*tasks) if __name__ == '__main__': asyncio.run(get_queue()) with open('parsed_data.json', 'a+') as file: file.write(json.dumps(data)) end_time = time.time() - start_time print(end_time) A: That was timeout error. 'resp': str(e) This code prints only error exception message. TimeOut Error has no exception message, so str(e) = empty string. str(repr(e)) helps to see an Error.
aiohttp asyncio parsing works fine for a time, then without any error gets no data
I need to parse html from list of domains (only main pages) Script works well for a period of time, then it's getting no data with very high speed. Looks like requests doesn't even send. My code: import asyncio import time import aiohttp import pandas as pd import json from bs4 import BeautifulSoup df = pd.read_excel('work_file.xlsx') domains_count = df.shape[0] start_time = time.time() print(start_time) data = {} async def get_data(session, url, j): try: async with session.get(url) as resp: html = await resp.text() rawhtml = BeautifulSoup(html, 'lxml') title = rawhtml.title data[url] = {'url': url, 'resp': resp.status, 'title': str(title), 'html': str(rawhtml)} print(j) print(url) except Exception as e: data[url] = {'url': url, 'resp': str(e), 'title': 'None', 'html': 'None'} print(j) print(url) print(str(e)) async def get_queue(): tasks = [] async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=120)) as session: for j, i in enumerate(df.domain): i = 'http://' + i.lower() task = asyncio.create_task(get_data(session, i, j)) tasks.append(task) await asyncio.gather(*tasks) if __name__ == '__main__': asyncio.run(get_queue()) with open('parsed_data.json', 'a+') as file: file.write(json.dumps(data)) end_time = time.time() - start_time print(end_time)
[ "That was timeout error.\n'resp': str(e)\n\nThis code prints only error exception message. TimeOut Error has no exception message, so str(e) = empty string.\nstr(repr(e)) helps to see an Error.\n" ]
[ 0 ]
[]
[]
[ "aiohttp", "python", "python_asyncio" ]
stackoverflow_0074429069_aiohttp_python_python_asyncio.txt
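Since the silent failures above turned out to be timeouts, catching asyncio.TimeoutError separately makes the log explicit instead of storing an empty string. A sketch of the get_data coroutine from the question with only the error handling changed (the parsing body is elided and the module-level data dict is assumed):

import asyncio

async def get_data(session, url, j):
    try:
        async with session.get(url) as resp:
            html = await resp.text()
            ...   # BeautifulSoup parsing as in the question
    except asyncio.TimeoutError:
        # raised when the ClientTimeout(total=120) budget is exceeded
        data[url] = {'url': url, 'resp': 'timeout', 'title': 'None', 'html': 'None'}
    except Exception as e:
        data[url] = {'url': url, 'resp': repr(e), 'title': 'None', 'html': 'None'}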
Q: Python - get number of folder in list if file was found How i can get number of folder in my list when folder was found? - For example: [Folder1, Folder2, Folder3] I open all folders contained in this list by: for root, dirs, files in os.walk(path): pass And in this folders i have something to find, example: file name in Folder2 -> xxx.png When i was found this i wanna get get the number of Folder2 in my list i.e 1 ([Folder1, Folder2, Folder3]) A: Use this example: import os # folder path dir_path = r'E:\account' count = 0 # Iterate directory for path in os.listdir(dir_path): # check if current path is a file if os.path.isfile(os.path.join(dir_path, path)): count += 1 print('File count:', count)
Python - get number of folder in list if file was found
How can I get the number (index) of a folder in my list when the folder is found? For example: [Folder1, Folder2, Folder3] I open all folders contained in this list with: for root, dirs, files in os.walk(path): pass In these folders I have something to find, for example a file named xxx.png in Folder2. When I find it, I want to get the position of Folder2 in my list, i.e. 1 ([Folder1, Folder2, Folder3])
[ "Use this example:\nimport os\n\n# folder path\ndir_path = r'E:\\account'\ncount = 0\n# Iterate directory\nfor path in os.listdir(dir_path):\n # check if current path is a file\n if os.path.isfile(os.path.join(dir_path, path)):\n count += 1\nprint('File count:', count)\n\n" ]
[ 1 ]
[]
[]
[ "for_loop", "python" ]
stackoverflow_0074459268_for_loop_python.txt
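The answer above counts files, while the question asks for the position of the folder in a list once a target file is found. A small stdlib-only sketch (the base path, folder names and target file name are placeholders):

import os

folders = ["Folder1", "Folder2", "Folder3"]
target = "xxx.png"

for root, dirs, files in os.walk("base_path"):
    if target in files:
        folder_name = os.path.basename(root)       # e.g. "Folder2"
        if folder_name in folders:
            index = folders.index(folder_name)     # 0-based position, 1 for "Folder2"
            print(folder_name, index)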
Q: ModuleNotFoundError: No module named 'pytest' After installing the pytest module in a virtual environment, I used the python code to call and run the prompt to find the pytest module. I installed the pytest module outside the virtual environment. I can call it normally with python. import pytest def test_main(): assert 5!=5 if __name__ == "__main__": pytest.main() The error is as follows: [Running] python -u "d:\MyPytest\test_sample.py" Traceback (most recent call last): File "d:\MyPytest\test_sample.py", line 1, in import pytest ModuleNotFoundError: No module named 'pytest' [Done] exited with code=1 in 0.185 seconds A: TLDR: I suspect you installed pytest within your system level python site-packages so when you try to run pytest, within your virtualenv, it's throwing a ModuleNotFoundError since it doesn't have this dependency installed within your virtualenv. Virtual environments give you a sandboxed environment so you can experiment with potential python libraries for your project, but they're self contained and don't have access to your system level python third-party libraries. Typically an ImportError is raised when an import statement has trouble successfully importing the specified module. If the problem is due to an invalid or incorrect path, this will raise a ModuleNotFoundError. From your question it isn't clear where you installed pytest since you said you installed it within your virtualenv then you said you installed it outside your virtualenv on your System level python site-packages.. So I will give my thoughts for getting pytest to work within a virtualenv, since this is probably what you want: Virtualenv are nice because they give you a sandboxed environment to play around with python libraries, safe from messing up your system level python configurations. Now the ModuleNotFoundError is thrown within your virtualenv because it can't find the pytest module for the test you're trying to run. Maybe you could try activating your virtualenv and re-installing pytest within this virtualenv and seeing if this course of action resolves your issue: Activate your virtualenv: # Posix systems source /path/to/ENV/bin/activate # Windows \path\to\env\Scripts\activate Install pytest within your virtualenv: Note: you should see your virtualenv's name listed in parenthesizes before installing pytest. For this example, suppose you created a virtual environment named: env (env) pip install pytest Now pytest will be available to you within your virtualenv. For more information checkout virtualenv's documentation. I would also suggest looking into virtualenvwrapper, which nicely wraps around virtualenv for more convenient commands to activate/deactivate virtualenvs. Hopefully that helps! A: If it's not the virtualenv case, then try python3 -m pytest. A: In my case if I use vscode terminal. I get same error. Then I tried same using CMD and its working. So I suggest to use system terminal instead of any third party terminal. cmd: python -m pytest A: The Python error "ModuleNotFoundError: No module named 'pytest'" occurs for multiple reasons: Not having the pytest package installed by running pip install pytest. Installing the package in a different Python version than the one you're using. Installing the package globally and not in your virtual environment. Your IDE running an incorrect version of Python. See the details here.
ModuleNotFoundError: No module named 'pytest'
After installing the pytest module in a virtual environment, I used the python code to call and run the prompt to find the pytest module. I installed the pytest module outside the virtual environment. I can call it normally with python. import pytest def test_main(): assert 5!=5 if __name__ == "__main__": pytest.main() The error is as follows: [Running] python -u "d:\MyPytest\test_sample.py" Traceback (most recent call last): File "d:\MyPytest\test_sample.py", line 1, in import pytest ModuleNotFoundError: No module named 'pytest' [Done] exited with code=1 in 0.185 seconds
[ "TLDR: I suspect you installed pytest within your system level python site-packages so when you try to run pytest, within your virtualenv, it's throwing a ModuleNotFoundError since it doesn't have this dependency installed within your virtualenv. Virtual environments give you a sandboxed environment so you can experiment with potential python libraries for your project, but they're self contained and don't have access to your system level python third-party libraries.\nTypically an ImportError is raised when an import statement has trouble successfully importing the specified module. If the problem is due to an invalid or incorrect path, this will raise a ModuleNotFoundError.\nFrom your question it isn't clear where you installed pytest since you said you installed it within your virtualenv then you said you installed it outside your virtualenv on your System level python site-packages.. So I will give my thoughts for getting pytest to work within a virtualenv, since this is probably what you want:\nVirtualenv are nice because they give you a sandboxed environment to play around with python libraries, safe from messing up your system level python configurations. Now the ModuleNotFoundError is thrown within your virtualenv because it can't find the pytest module for the test you're trying to run. Maybe you could try activating your virtualenv and re-installing pytest within this virtualenv and seeing if this course of action resolves your issue:\nActivate your virtualenv:\n# Posix systems\nsource /path/to/ENV/bin/activate\n\n# Windows\n\\path\\to\\env\\Scripts\\activate\n\nInstall pytest within your virtualenv:\nNote: you should see your virtualenv's name listed in parenthesizes before installing pytest. For this example, suppose you created a virtual environment named: env\n(env) pip install pytest\n\nNow pytest will be available to you within your virtualenv. For more information checkout virtualenv's documentation. I would also suggest looking into virtualenvwrapper, which nicely wraps around virtualenv for more convenient commands to activate/deactivate virtualenvs.\nHopefully that helps!\n", "If it's not the virtualenv case, then try python3 -m pytest.\n", "In my case if I use vscode terminal. I get same error.\nThen I tried same using CMD and its working.\nSo I suggest to use system terminal instead of any third party terminal.\ncmd: python -m pytest\n", "The Python error \"ModuleNotFoundError: No module named 'pytest'\" occurs for multiple reasons:\n\nNot having the pytest package installed by running pip install\npytest.\nInstalling the package in a different Python version than the one\nyou're using.\nInstalling the package globally and not in your virtual environment.\nYour IDE running an incorrect version of Python.\n\nSee the details here.\n" ]
[ 34, 5, 0, 0 ]
[]
[]
[ "pytest", "python" ]
stackoverflow_0055652866_pytest_python.txt
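When it is unclear which interpreter is actually running the test file, printing the interpreter path and then installing pytest against exactly that interpreter usually resolves this kind of mismatch. A small diagnostic sketch:

import sys

print(sys.executable)   # the interpreter that runs this script
print(sys.path)         # the locations it searches for modules such as pytest

# Then, from a shell, install and run pytest with that exact interpreter, e.g.:
#   <path printed above> -m pip install pytest
#   <path printed above> -m pytest test_sample.py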
Q: Problem installing Python 3.6.5 on macOS 12.6 Monterey with Intel chip I get the following error when I use pyenv to install python version 3.6.5 using the command pyenv install 3.6.5: Error - configure: error: internal configure error for the platform triplet, please file a bug report I've also tried the suggestion in this stackoverflow post, however I ran into the same issue. Command - LDFLAGS="-L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib" pyenv install --patch 3.6.5 < <(curl -sSL https://github.com/python/cpython/commit/8ea6353.patch\?full_index\=1) Detailed Logs - Downloading openssl-1.0.2k.tar.gz... -> https://pyenv.github.io/pythons/6b3977c61f2aedf0f96367dcfb5c6e578cf37e7b8d913b4ecb6643c3cb88d8c0 Installing openssl-1.0.2k... Installed openssl-1.0.2k to /Users/weaver/.pyenv/versions/3.6.5 python-build: use readline from homebrew Downloading Python-3.6.5.tar.xz... -> https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tar.xz Installing Python-3.6.5... patching file Misc/NEWS.d/next/macOS/2020-06-24-13-51-57.bpo-41100.mcHdc5.rst patching file configure Hunk #1 succeeded at 3369 (offset -57 lines). patching file configure.ac Hunk #1 succeeded at 495 (offset -15 lines). python-build: use readline from homebrew python-build: use zlib from xcode sdk BUILD FAILED (OS X 12.6 using python-build 20180424) Inspect or clean up the working tree at /var/folders/vp/qpmxg0vx6dx396bx974srp380000gn/T/python-build.20220928220948.83599 Results logged to /var/folders/vp/qpmxg0vx6dx396bx974srp380000gn/T/python-build.20220928220948.83599.log Last 10 log lines: checking for a sed that does not truncate output... /usr/bin/sed checking for --with-cxx-main=<compiler>... no checking for clang++... no configure: By default, distutils will build C++ extension modules with "clang++". If this is not intended, then set CXX on the configure command line. checking for the platform triplet based on compiler characteristics... darwin configure: error: internal configure error for the platform triplet, please file a bug report Please help! A: After researching the question I could not install python 3.6, the solution I found was that only a few versions could be installed, which is described here https://github.com/pyenv/pyenv/issues/2112, version 3.7.13 was installed without problems
Problem installing Python 3.6.5 on macOS 12.6 Monterey with Intel chip
I get the following error when I use pyenv to install python version 3.6.5 using the command pyenv install 3.6.5: Error - configure: error: internal configure error for the platform triplet, please file a bug report I've also tried the suggestion in this stackoverflow post, however I ran into the same issue. Command - LDFLAGS="-L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib" pyenv install --patch 3.6.5 < <(curl -sSL https://github.com/python/cpython/commit/8ea6353.patch\?full_index\=1) Detailed Logs - Downloading openssl-1.0.2k.tar.gz... -> https://pyenv.github.io/pythons/6b3977c61f2aedf0f96367dcfb5c6e578cf37e7b8d913b4ecb6643c3cb88d8c0 Installing openssl-1.0.2k... Installed openssl-1.0.2k to /Users/weaver/.pyenv/versions/3.6.5 python-build: use readline from homebrew Downloading Python-3.6.5.tar.xz... -> https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tar.xz Installing Python-3.6.5... patching file Misc/NEWS.d/next/macOS/2020-06-24-13-51-57.bpo-41100.mcHdc5.rst patching file configure Hunk #1 succeeded at 3369 (offset -57 lines). patching file configure.ac Hunk #1 succeeded at 495 (offset -15 lines). python-build: use readline from homebrew python-build: use zlib from xcode sdk BUILD FAILED (OS X 12.6 using python-build 20180424) Inspect or clean up the working tree at /var/folders/vp/qpmxg0vx6dx396bx974srp380000gn/T/python-build.20220928220948.83599 Results logged to /var/folders/vp/qpmxg0vx6dx396bx974srp380000gn/T/python-build.20220928220948.83599.log Last 10 log lines: checking for a sed that does not truncate output... /usr/bin/sed checking for --with-cxx-main=<compiler>... no checking for clang++... no configure: By default, distutils will build C++ extension modules with "clang++". If this is not intended, then set CXX on the configure command line. checking for the platform triplet based on compiler characteristics... darwin configure: error: internal configure error for the platform triplet, please file a bug report Please help!
[ "After researching the question I could not install python 3.6, the solution I found was that only a few versions could be installed, which is described here https://github.com/pyenv/pyenv/issues/2112, version 3.7.13 was installed without problems\n" ]
[ 0 ]
[]
[]
[ "macos", "pyenv", "python", "python_3.x" ]
stackoverflow_0073890842_macos_pyenv_python_python_3.x.txt
Q: Extracting date from another column to use for partition I am uploading some CSV files into a big query table. There is a column called filename which is in this format:sales_2021-09-09T21-27-05_010555Z I am trying to upload the data from google cloud storage into a partitioned table in the big query. Could you help me to create the field below there is no date column and I need to extract date from filename: time_partitioning=bigquery.TimePartitioning( type_=bigquery.TimePartitioningType.DAY, field="date", # Name of the column to use for partitioning. expiration_ms=7776000000, # 90 days. ), date column does not exist, above code is from google cloud tutorial of data upload into bigquery. The below code gives me what I want (extract date from string) but I am not sure how to use it in the above code to be considered as partitiontime : import re from datetime import datetime text = 'sales_2022-09-09T21-27-05_010555Z' match = re.search(r'\d{4}-\d{2}-\d{2}', text) date = datetime.strptime(match.group(), '%Y-%m-%d').date() A: To insert a date field, with the Python BigQuery client, you can pass a String date with the following format : 2022-09-09 : import re from datetime import datetime text = 'sales_2022-09-09T21-27-05_010555Z' match = re.search(r'\d{4}-\d{2}-\d{2}', text) res_date = datetime.strptime(match.group(), '%Y-%m-%d').date() # Your date as string to insert to BigQuery and in your partition field. your_date_str = str(res_date)
Extracting date from another column to use for partition
I am uploading some CSV files into a BigQuery table. There is a column called filename which is in this format: sales_2021-09-09T21-27-05_010555Z. I am trying to upload the data from Google Cloud Storage into a partitioned table in BigQuery. Could you help me create the field below? There is no date column, and I need to extract the date from the filename: time_partitioning=bigquery.TimePartitioning( type_=bigquery.TimePartitioningType.DAY, field="date", # Name of the column to use for partitioning. expiration_ms=7776000000, # 90 days. ), The date column does not exist; the code above is from a Google Cloud tutorial on loading data into BigQuery. The code below gives me what I want (extract the date from the string), but I am not sure how to use it in the code above so it is treated as the partition time: import re from datetime import datetime text = 'sales_2022-09-09T21-27-05_010555Z' match = re.search(r'\d{4}-\d{2}-\d{2}', text) date = datetime.strptime(match.group(), '%Y-%m-%d').date()
[ "To insert a date field, with the Python BigQuery client, you can pass a String date with the following format : 2022-09-09 :\nimport re\nfrom datetime import datetime\ntext = 'sales_2022-09-09T21-27-05_010555Z'\nmatch = re.search(r'\\d{4}-\\d{2}-\\d{2}', text)\nres_date = datetime.strptime(match.group(), '%Y-%m-%d').date()\n \n# Your date as string to insert to BigQuery and in your partition field.\nyour_date_str = str(res_date)\n\n" ]
[ 0 ]
[]
[]
[ "extract", "google_bigquery", "google_cloud_platform", "python" ]
stackoverflow_0074458776_extract_google_bigquery_google_cloud_platform_python.txt
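To actually partition on the extracted value, the table needs a real DATE column and every row needs a value for it. A hedged sketch with the google-cloud-bigquery client that creates a day-partitioned table and streams rows whose date comes from the filename; the project/dataset/table id is a placeholder, and this inserts rows from memory rather than using a GCS load job (with load_table_from_uri the date column would have to exist in the CSV itself or be added in a later step):

import re
from datetime import datetime
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.sales"      # placeholder

schema = [
    bigquery.SchemaField("filename", "STRING"),
    bigquery.SchemaField("date", "DATE"),     # the partitioning column
]
table = bigquery.Table(table_id, schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="date",
    expiration_ms=7776000000,                 # 90 days
)
table = client.create_table(table, exists_ok=True)

filename = "sales_2022-09-09T21-27-05_010555Z"
match = re.search(r"\d{4}-\d{2}-\d{2}", filename)
row = {"filename": filename,
       "date": str(datetime.strptime(match.group(), "%Y-%m-%d").date())}
print(client.insert_rows_json(table, [row]))  # [] means the insert succeeded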
Q: Weird python issue - increasing CPU usage overtime? I am making a video player using pygame, it takes numpy arrays of frames from a video file and streams them to a pygame window using a buffer. I'm getting a weird issue where CPU usage is increasing (program is slowing down) over time, then CPU usage is sharply decreasing (program is speeding up). Memory usage stays pretty much constant, so I don't think it's a memory leak. This affects the program as I'm trying to stream a video, and long processing times turn it into a slideshow. I would expect fluctuations, but not that steep. I cannot seem to figure what is going on here, I would really appreciate some help in debugging this! Source code to replicate the issue at the end of the question. You can use any .mp4 file, but the exact one I used is here to download. This also outputs a log.txt file which I am using to debug the issue. My findings are below... In the first few seconds, process time is relatively fast: [12:13:31] (Variable: current_frame) 1 [12:13:31] (Variable: len(frame_buffer)) 3 [12:13:31] (Process time) 0.08 seconds It steadily slows over the duration of about 30 seconds, until it hits peak slowness at ~0.5 seconds of process time: [12:14:07] (Variable: current_frame) 154 [12:14:07] (Variable: len(frame_buffer)) 3 [12:14:07] (Process time) 0.53 seconds Then it dips down to previous levels: [12:14:11] (Variable: current_frame) 164 [12:14:11] (Variable: len(frame_buffer)) 3 [12:14:11] (Process time) 0.1 seconds Here is a graph I made using matplotlib to identify the trend over ~1000 frames: import cv2 import time import pygame import gc from datetime import datetime #Debug code runtime_start = str(datetime.now().strftime("%H-%M-%S")) def log(prefix, message): string = "[" + str(datetime.now().strftime("%H:%M:%S")) + "] " + "(" + str(prefix) + ") " + str(message) print(string) with open(file="log" + runtime_start + ".txt", mode="a", encoding="utf-8") as file: file.write(string + "\n") ### ### ### #Globals buffer_size = 3 frame_buffer = [] ### ### ### #Functions def loadVideoFileToMemory(file): """ Load file to memory. """ global capture capture = cv2.VideoCapture(file) def getFrameFromVideo(file, frame_number): """ Captures a frame from the loaded file, returns a numpy image array. """ #capture = cv2.VideoCapture(file) capture.set(cv2.CAP_PROP_POS_FRAMES, frame_number) ret, frame = capture.read() #image = ImageTk.PhotoImage(image=Image.fromarray(frame)) image = cv2.resize(frame, (90*6, 160*6)) del frame gc.collect() return image def setBuffer(start_frame): """ Fills the buffer at a set starting frame before playing the video. """ frame_buffer.clear() for i in range(buffer_size): frame_buffer.append(getFrameFromVideo("test_video1.mp4", start_frame + i)) def updateBuffer(frame): """ Updates frame_buffer, deals with garabge collection. """ del frame_buffer[0] gc.collect() if len(frame_buffer) < buffer_size: #Limits to buffer size frame_array = getFrameFromVideo("test_video1.mp4", frame + buffer_size) frame_buffer.append(frame_array) del frame_array gc.collect() def displayImageToPygame(image, window): """ Converts a numpy image array to a pygame surface and blits to screen. """ window.blit(pygame.surfarray.make_surface(image), (0,0)) def createPygameWindow(): """ Creates a pygame window, sets the frame_buffer and starts the main thread. 
""" window = pygame.display.set_mode((160*6, 90*6)) pygame.display.flip() #Starting frame current_frame = 0 setBuffer(current_frame) #Set from this position displayImageToPygame(frame_buffer[0], window) #Show starting frame pygameThread(window, current_frame) def pygameThread(window, current_frame): """ Main thread. Plays the video. """ running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False timer_start = time.time() #DEBUG: PROCESSING TIME current_frame += 1 displayImageToPygame(frame_buffer[1], window) updateBuffer(current_frame) pygame.display.flip() #DEBUG: POST TO LOG.TXT log("Variable: current_frame", current_frame) log("Variable: len(frame_buffer)", len(frame_buffer)) timer_end = time.time() #DEBUG: PROCESSING TIME log("Process time", str(round(timer_end - timer_start, 2)) + " seconds") ### ### ### ### video_file = "test_video1.mp4" #Replace this with a path to an .mp4 file. You can download the exact file I'm using here: https://drive.google.com/file/d/1_tfTVHmaoTEYxkLrjVS8NiAvdes3Gd77/view?usp=share_link loadVideoFileToMemory(video_file) createPygameWindow() A: Ok, so i could reproduce what you describe (which was very easy with the sources you provided). Removing the GC also did nothing for me, as you already observed. Now i did the following test: instead of getting each frame in order, i got a random frame from the video and recorded those times. Here's the (trivial) code snippet to do that: from random import randint, seed [..] def pygameThread(window, current_frame): """ Main thread. Plays the video. """ seed() [..] current_frame += 1 rand_frame = randint(1,500) # taking one of the first 500, doesn't really matter displayImageToPygame(frame_buffer[1], window) #updateBuffer(current_frame) updateBuffer(rand_frame) and more interestingly here's the recorded timing: [09:02:37] Current_frame 202 len(frame_buffer)3 Process time 0.06 seconds [09:02:37] Current_frame 423 len(frame_buffer)3 Process time 0.09 seconds [09:02:37] Current_frame 258 len(frame_buffer)3 Process time 0.03 seconds [09:02:37] Current_frame 464 len(frame_buffer)3 Process time 0.1 seconds [09:02:37] Current_frame 176 len(frame_buffer)3 Process time 0.04 seconds [09:02:37] Current_frame 463 len(frame_buffer)3 Process time 0.1 seconds [09:02:37] Current_frame 425 len(frame_buffer)3 Process time 0.09 seconds [09:02:37] Current_frame 154 len(frame_buffer)3 Process time 0.11 seconds [09:02:37] Current_frame 54 len(frame_buffer)3 Process time 0.05 seconds [09:02:37] Current_frame 110 len(frame_buffer)3 Process time 0.08 seconds [09:02:38] Current_frame 133 len(frame_buffer)3 Process time 0.1 seconds [09:02:38] Current_frame 41 len(frame_buffer)3 Process time 0.04 seconds [09:02:38] Current_frame 471 len(frame_buffer)3 Process time 0.11 seconds [09:02:38] Current_frame 458 len(frame_buffer)3 Process time 0.1 seconds [09:02:38] Current_frame 44 len(frame_buffer)3 Process time 0.05 seconds [09:02:38] Current_frame 406 len(frame_buffer)3 Process time 0.07 seconds [09:02:38] Current_frame 66 len(frame_buffer)3 Process time 0.06 seconds [09:02:38] Current_frame 439 len(frame_buffer)3 Process time 0.09 seconds [09:02:38] Current_frame 32 len(frame_buffer)3 Process time 0.04 seconds [09:02:38] Current_frame 89 len(frame_buffer)3 Process time 0.07 seconds [09:02:38] Current_frame 221 len(frame_buffer)3 Process time 0.06 seconds So as we can see, just accessing the frames takes a variable amount of time, independant of processing time. 
My educated guess is that accessing key frames in the stream is fast, while reconstructing the following frames takes longer. Maybe try running some decompression first, before running your algorithm ? You should also run this test for a longer time than i have and plot it again, to make sure that there really is no increase over time. A: Seeking is random access in a media file. Setting CAP_PROP_POS_FRAMES is seeking. Do not seek if you don't have to. For most video codecs, it is more costly than sequential access. It can also be imprecise because seeking may decide, because it's cheaper to do that, to merely jump to the nearest "keyframe", instead of the exact time index you want. Your access pattern appears to be one jump followed by sequential decoding from there. For this entire operation, you should seek once, then decode sequentially. Calling .read() on the VideoCapture object is sequential decoding. It automatically advances in the video.
Weird python issue - increasing CPU usage over time?
I am making a video player using pygame, it takes numpy arrays of frames from a video file and streams them to a pygame window using a buffer. I'm getting a weird issue where CPU usage is increasing (program is slowing down) over time, then CPU usage is sharply decreasing (program is speeding up). Memory usage stays pretty much constant, so I don't think it's a memory leak. This affects the program as I'm trying to stream a video, and long processing times turn it into a slideshow. I would expect fluctuations, but not that steep. I cannot seem to figure what is going on here, I would really appreciate some help in debugging this! Source code to replicate the issue at the end of the question. You can use any .mp4 file, but the exact one I used is here to download. This also outputs a log.txt file which I am using to debug the issue. My findings are below... In the first few seconds, process time is relatively fast: [12:13:31] (Variable: current_frame) 1 [12:13:31] (Variable: len(frame_buffer)) 3 [12:13:31] (Process time) 0.08 seconds It steadily slows over the duration of about 30 seconds, until it hits peak slowness at ~0.5 seconds of process time: [12:14:07] (Variable: current_frame) 154 [12:14:07] (Variable: len(frame_buffer)) 3 [12:14:07] (Process time) 0.53 seconds Then it dips down to previous levels: [12:14:11] (Variable: current_frame) 164 [12:14:11] (Variable: len(frame_buffer)) 3 [12:14:11] (Process time) 0.1 seconds Here is a graph I made using matplotlib to identify the trend over ~1000 frames: import cv2 import time import pygame import gc from datetime import datetime #Debug code runtime_start = str(datetime.now().strftime("%H-%M-%S")) def log(prefix, message): string = "[" + str(datetime.now().strftime("%H:%M:%S")) + "] " + "(" + str(prefix) + ") " + str(message) print(string) with open(file="log" + runtime_start + ".txt", mode="a", encoding="utf-8") as file: file.write(string + "\n") ### ### ### #Globals buffer_size = 3 frame_buffer = [] ### ### ### #Functions def loadVideoFileToMemory(file): """ Load file to memory. """ global capture capture = cv2.VideoCapture(file) def getFrameFromVideo(file, frame_number): """ Captures a frame from the loaded file, returns a numpy image array. """ #capture = cv2.VideoCapture(file) capture.set(cv2.CAP_PROP_POS_FRAMES, frame_number) ret, frame = capture.read() #image = ImageTk.PhotoImage(image=Image.fromarray(frame)) image = cv2.resize(frame, (90*6, 160*6)) del frame gc.collect() return image def setBuffer(start_frame): """ Fills the buffer at a set starting frame before playing the video. """ frame_buffer.clear() for i in range(buffer_size): frame_buffer.append(getFrameFromVideo("test_video1.mp4", start_frame + i)) def updateBuffer(frame): """ Updates frame_buffer, deals with garabge collection. """ del frame_buffer[0] gc.collect() if len(frame_buffer) < buffer_size: #Limits to buffer size frame_array = getFrameFromVideo("test_video1.mp4", frame + buffer_size) frame_buffer.append(frame_array) del frame_array gc.collect() def displayImageToPygame(image, window): """ Converts a numpy image array to a pygame surface and blits to screen. """ window.blit(pygame.surfarray.make_surface(image), (0,0)) def createPygameWindow(): """ Creates a pygame window, sets the frame_buffer and starts the main thread. 
""" window = pygame.display.set_mode((160*6, 90*6)) pygame.display.flip() #Starting frame current_frame = 0 setBuffer(current_frame) #Set from this position displayImageToPygame(frame_buffer[0], window) #Show starting frame pygameThread(window, current_frame) def pygameThread(window, current_frame): """ Main thread. Plays the video. """ running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False timer_start = time.time() #DEBUG: PROCESSING TIME current_frame += 1 displayImageToPygame(frame_buffer[1], window) updateBuffer(current_frame) pygame.display.flip() #DEBUG: POST TO LOG.TXT log("Variable: current_frame", current_frame) log("Variable: len(frame_buffer)", len(frame_buffer)) timer_end = time.time() #DEBUG: PROCESSING TIME log("Process time", str(round(timer_end - timer_start, 2)) + " seconds") ### ### ### ### video_file = "test_video1.mp4" #Replace this with a path to an .mp4 file. You can download the exact file I'm using here: https://drive.google.com/file/d/1_tfTVHmaoTEYxkLrjVS8NiAvdes3Gd77/view?usp=share_link loadVideoFileToMemory(video_file) createPygameWindow()
[ "Ok, so i could reproduce what you describe (which was very easy with the sources you provided).\nRemoving the GC also did nothing for me, as you already observed.\nNow i did the following test: instead of getting each frame in order, i got a random frame from the video and recorded those times.\nHere's the (trivial) code snippet to do that:\nfrom random import randint, seed\n[..]\ndef pygameThread(window, current_frame):\n \"\"\"\n Main thread. Plays the video.\n \"\"\"\n seed()\n[..]\n\n current_frame += 1\n rand_frame = randint(1,500) # taking one of the first 500, doesn't really matter\n displayImageToPygame(frame_buffer[1], window)\n #updateBuffer(current_frame)\n updateBuffer(rand_frame)\n\nand more interestingly here's the recorded timing:\n[09:02:37] Current_frame 202 len(frame_buffer)3 Process time 0.06 seconds\n[09:02:37] Current_frame 423 len(frame_buffer)3 Process time 0.09 seconds\n[09:02:37] Current_frame 258 len(frame_buffer)3 Process time 0.03 seconds\n[09:02:37] Current_frame 464 len(frame_buffer)3 Process time 0.1 seconds\n[09:02:37] Current_frame 176 len(frame_buffer)3 Process time 0.04 seconds\n[09:02:37] Current_frame 463 len(frame_buffer)3 Process time 0.1 seconds\n[09:02:37] Current_frame 425 len(frame_buffer)3 Process time 0.09 seconds\n[09:02:37] Current_frame 154 len(frame_buffer)3 Process time 0.11 seconds\n[09:02:37] Current_frame 54 len(frame_buffer)3 Process time 0.05 seconds\n[09:02:37] Current_frame 110 len(frame_buffer)3 Process time 0.08 seconds\n[09:02:38] Current_frame 133 len(frame_buffer)3 Process time 0.1 seconds\n[09:02:38] Current_frame 41 len(frame_buffer)3 Process time 0.04 seconds\n[09:02:38] Current_frame 471 len(frame_buffer)3 Process time 0.11 seconds\n[09:02:38] Current_frame 458 len(frame_buffer)3 Process time 0.1 seconds\n[09:02:38] Current_frame 44 len(frame_buffer)3 Process time 0.05 seconds\n[09:02:38] Current_frame 406 len(frame_buffer)3 Process time 0.07 seconds\n[09:02:38] Current_frame 66 len(frame_buffer)3 Process time 0.06 seconds\n[09:02:38] Current_frame 439 len(frame_buffer)3 Process time 0.09 seconds\n[09:02:38] Current_frame 32 len(frame_buffer)3 Process time 0.04 seconds\n[09:02:38] Current_frame 89 len(frame_buffer)3 Process time 0.07 seconds\n[09:02:38] Current_frame 221 len(frame_buffer)3 Process time 0.06 seconds\n\nSo as we can see, just accessing the frames takes a variable amount of time, independant of processing time. My educated guess is that accessing key frames in the stream is fast, while reconstructing the following frames takes longer. Maybe try running some decompression first, before running your algorithm ?\nYou should also run this test for a longer time than i have and plot it again, to make sure that there really is no increase over time.\n", "Seeking is random access in a media file.\nSetting CAP_PROP_POS_FRAMES is seeking.\nDo not seek if you don't have to. For most video codecs, it is more costly than sequential access. It can also be imprecise because seeking may decide, because it's cheaper to do that, to merely jump to the nearest \"keyframe\", instead of the exact time index you want.\nYour access pattern appears to be one jump followed by sequential decoding from there. For this entire operation, you should seek once, then decode sequentially.\nCalling .read() on the VideoCapture object is sequential decoding. It automatically advances in the video.\n" ]
[ 1, 1 ]
[]
[]
[ "opencv", "pygame", "python" ]
stackoverflow_0074445827_opencv_pygame_python.txt
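Following the answers above about seeking, a sketch of a purely sequential reading loop that never touches CAP_PROP_POS_FRAMES; read() already advances through the file in order, so the per-frame cost stays flat (file name and resize target match the question):

import cv2

capture = cv2.VideoCapture("test_video1.mp4")

def next_frame():
    ret, frame = capture.read()              # decodes the next frame sequentially
    if not ret:
        return None                          # end of stream
    return cv2.resize(frame, (90*6, 160*6))

frame_buffer = [next_frame() for _ in range(3)]   # initial buffer of 3 frames

def update_buffer():
    frame_buffer.pop(0)
    nxt = next_frame()
    if nxt is not None:
        frame_buffer.append(nxt)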
Q: Not scrolling down in a website having dynamic scroll I'm scraping news-articles from a website where there is no load-more button in a specific category page, the news article links are being generated as I scroll down. I wrote a function which take input category_page_url and limit_page(how many times I want to scroll down) and return me back all the links of the news articles displayed in that page. Category page link = https://www.scmp.com/topics/trade def get_article_links(url, limit_loading): options = webdriver.ChromeOptions() lists = ['disable-popup-blocking'] caps = DesiredCapabilities().CHROME caps["pageLoadStrategy"] = "normal" options.add_argument("--window-size=1920,1080") options.add_argument("--disable-extensions") options.add_argument("--disable-notifications") options.add_argument("--disable-Advertisement") options.add_argument("--disable-popup-blocking") driver = webdriver.Chrome(executable_path= r"E:\chromedriver\chromedriver.exe", options=options) #add your chrome path driver.get(url) last_height = driver.execute_script("return document.body.scrollHeight") loading = 0 while loading < limit_loading: loading += 1 driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") time.sleep(8) new_height = driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height article_links = [] bsObj = BeautifulSoup(driver.page_source, 'html.parser') for i in bsObj.find('div', {'class': 'content-box'}).find('div', {'class': 'topic-article-container'}).find_all('h2', {'class': 'article__title'}): article_links.append(i.a['href']) return article_links Assuming I want to scroll 5 times in this category page, get_article_links('https://www.scmp.com/topics/trade', 5) But even if I change the number of my limit_page it return me back only the links from first page, there is some mistake I've done to write the scrolling part. Please help me with this. A: Instead of scrolling using per body scrollHeight property, I checked to see if there was any appropriate element after the list of articles to scroll to. I noticed this appropriately named div: <div class="topic-content__load-more-anchor" data-v-db98a5c0=""></div> Accordingly, I primarily changed the while loop in your function get_article_links to scroll to this div using location_once_scrolled_into_view after finding the div before the loop starts, as follows: loading = 0 end_div = driver.find_element('class name','topic-content__load-more-anchor') while loading < limit_loading: loading += 1 print(f'scrolling to page {loading}...') end_div.location_once_scrolled_into_view time.sleep(2) If we now call the function with different limit_loading, we get different count of unique news links. Here are couple of runs: >>> ar_links = get_article_links('https://www.scmp.com/topics/trade', 2) >>> len(ar_links) scrolling to page 1... scrolling to page 2... 90 >>> ar_links = get_article_links('https://www.scmp.com/topics/trade', 3) >>> len(ar_links) scrolling to page 1... scrolling to page 2... scrolling to page 3... 120
Not scrolling down in a website having dynamic scroll
I'm scraping news-articles from a website where there is no load-more button in a specific category page, the news article links are being generated as I scroll down. I wrote a function which take input category_page_url and limit_page(how many times I want to scroll down) and return me back all the links of the news articles displayed in that page. Category page link = https://www.scmp.com/topics/trade def get_article_links(url, limit_loading): options = webdriver.ChromeOptions() lists = ['disable-popup-blocking'] caps = DesiredCapabilities().CHROME caps["pageLoadStrategy"] = "normal" options.add_argument("--window-size=1920,1080") options.add_argument("--disable-extensions") options.add_argument("--disable-notifications") options.add_argument("--disable-Advertisement") options.add_argument("--disable-popup-blocking") driver = webdriver.Chrome(executable_path= r"E:\chromedriver\chromedriver.exe", options=options) #add your chrome path driver.get(url) last_height = driver.execute_script("return document.body.scrollHeight") loading = 0 while loading < limit_loading: loading += 1 driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") time.sleep(8) new_height = driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height article_links = [] bsObj = BeautifulSoup(driver.page_source, 'html.parser') for i in bsObj.find('div', {'class': 'content-box'}).find('div', {'class': 'topic-article-container'}).find_all('h2', {'class': 'article__title'}): article_links.append(i.a['href']) return article_links Assuming I want to scroll 5 times in this category page, get_article_links('https://www.scmp.com/topics/trade', 5) But even if I change the number of my limit_page it return me back only the links from first page, there is some mistake I've done to write the scrolling part. Please help me with this.
[ "Instead of scrolling using per body scrollHeight property, I checked to see if there was any appropriate element after the list of articles to scroll to. I noticed this appropriately named div:\n<div class=\"topic-content__load-more-anchor\" data-v-db98a5c0=\"\"></div>\n\nAccordingly, I primarily changed the while loop in your function get_article_links to scroll to this div using location_once_scrolled_into_view after finding the div before the loop starts, as follows:\n loading = 0\n end_div = driver.find_element('class name','topic-content__load-more-anchor')\n while loading < limit_loading:\n loading += 1\n print(f'scrolling to page {loading}...') \n end_div.location_once_scrolled_into_view\n time.sleep(2)\n \n\nIf we now call the function with different limit_loading, we get different count of unique news links. Here are couple of runs:\n>>> ar_links = get_article_links('https://www.scmp.com/topics/trade', 2)\n>>> len(ar_links)\nscrolling to page 1...\nscrolling to page 2...\n\n90\n>>> ar_links = get_article_links('https://www.scmp.com/topics/trade', 3)\n>>> len(ar_links)\nscrolling to page 1...\nscrolling to page 2...\nscrolling to page 3...\n\n120\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "selenium", "selenium_webdriver", "web_crawler" ]
stackoverflow_0074443689_beautifulsoup_python_selenium_selenium_webdriver_web_crawler.txt
Q: Getting NULL values only from get_json_object in PySpark I have a Spark Dataframe (in Palantir Foundry) with the column "c_temperature". This column contains a JSON string in each row with the following schema: {"TempCelsiusEndAvg":"24.33","TempCelsiusEndMax":"null","TempCelsiusEndMin":"null","TempCelsiusStartAvg":"22.54","TempCelsiusStartMax":"null","TempCelsiusStartMin":"null","TempEndPlausibility":"T_PLAUSIBLE","TempStartPlausibility":"T_PLAUSIBLE"} I tried to extract the values (they are sometimes "null" and sometimes wiht values like e.g. "24.33") of the avg temperatures in the new columns "TempCelsiusEndAvg" and "TempCelsiusStartAvg" with the following code: from pyspark.sql import functions as F from pyspark.sql.types import StringType def flat_json(sessions_finished): df = sessions_finished df = df.withColumn("new_temperature", F.col('c_temperature').cast(StringType()) df = df.withColumn("TempCelsiusEndAvg", F.get_json_object("c_Temperature", '$.TempCelsiusEndAvg')) df = df.withColumn("TempCelsiusStartAvg", F.get_json_object("c_Temperature", '$.TempCelsiusStartAvg')) return df I wanted to get the new columns filled with doubles like: ... +-----------------+-------------------+ ... ... |TempCelsiusEndAvg|TempCelsiusStartAvg| ... ... +-----------------+-------------------+ ... ... | 24.33| 22.54| ... ... +-----------------+-------------------+ ... ... | 29.28| 25.16| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... The new dataframe contains the columns but they are only filled with null values. Can anyone help me solving this problem? ... +-----------------+-------------------+ ... ... |TempCelsiusEndAvg|TempCelsiusStartAvg| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... There is also a comment in this thread: [https://stackoverflow.com/questions/46084158/how-can-you-parse-a-string-that-is-json-from-an-existing-temp-table-using-pyspar] that describes my problem, but I have no idea how to use this information. A: You are don't need to do anything, since the column is already a struct. You can create those columns by accessing them with a . df = df.withColumn("TempCelsiusEndAvg", F.col("c_temperature.TempCelsiusEndAvg")) df = df.withColumn("TempCelsiusStartAvg", F.col("c_temperature.TempCelsiusStartAvg"))
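A speculative side note to the answer above: get_json_object expects a column that holds a JSON string, and a struct cast with cast(StringType()) is not valid JSON, which would explain the null results. If the struct first had to be serialized, a sketch (column names follow the question; the to_json step and the double cast are assumptions) could look like this:

from pyspark.sql import functions as F

df = df.withColumn("json_str", F.to_json("c_temperature"))
df = df.withColumn("TempCelsiusEndAvg",
                   F.get_json_object("json_str", "$.TempCelsiusEndAvg").cast("double"))
df = df.withColumn("TempCelsiusStartAvg",
                   F.get_json_object("json_str", "$.TempCelsiusStartAvg").cast("double"))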
Getting NULL values only from get_json_object in PySpark
I have a Spark Dataframe (in Palantir Foundry) with the column "c_temperature". This column contains a JSON string in each row with the following schema: {"TempCelsiusEndAvg":"24.33","TempCelsiusEndMax":"null","TempCelsiusEndMin":"null","TempCelsiusStartAvg":"22.54","TempCelsiusStartMax":"null","TempCelsiusStartMin":"null","TempEndPlausibility":"T_PLAUSIBLE","TempStartPlausibility":"T_PLAUSIBLE"} I tried to extract the values (they are sometimes "null" and sometimes wiht values like e.g. "24.33") of the avg temperatures in the new columns "TempCelsiusEndAvg" and "TempCelsiusStartAvg" with the following code: from pyspark.sql import functions as F from pyspark.sql.types import StringType def flat_json(sessions_finished): df = sessions_finished df = df.withColumn("new_temperature", F.col('c_temperature').cast(StringType()) df = df.withColumn("TempCelsiusEndAvg", F.get_json_object("c_Temperature", '$.TempCelsiusEndAvg')) df = df.withColumn("TempCelsiusStartAvg", F.get_json_object("c_Temperature", '$.TempCelsiusStartAvg')) return df I wanted to get the new columns filled with doubles like: ... +-----------------+-------------------+ ... ... |TempCelsiusEndAvg|TempCelsiusStartAvg| ... ... +-----------------+-------------------+ ... ... | 24.33| 22.54| ... ... +-----------------+-------------------+ ... ... | 29.28| 25.16| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... The new dataframe contains the columns but they are only filled with null values. Can anyone help me solving this problem? ... +-----------------+-------------------+ ... ... |TempCelsiusEndAvg|TempCelsiusStartAvg| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... ... | null| null| ... ... +-----------------+-------------------+ ... There is also a comment in this thread: [https://stackoverflow.com/questions/46084158/how-can-you-parse-a-string-that-is-json-from-an-existing-temp-table-using-pyspar] that describes my problem, but I have no idea how to use this information.
[ "You don't need to do anything, since the column is already a struct. You can create those columns by accessing them with a .\n df = df.withColumn(\"TempCelsiusEndAvg\", F.col(\"c_temperature.TempCelsiusEndAvg\"))\n df = df.withColumn(\"TempCelsiusStartAvg\", F.col(\"c_temperature.TempCelsiusStartAvg\"))\n\n" ]
[ 3 ]
[]
[]
[ "dataframe", "json", "palantir_foundry", "pyspark", "python" ]
stackoverflow_0074457628_dataframe_json_palantir_foundry_pyspark_python.txt
Q: Improve an exponential curve fit of nearly exponential data I have recorded some data about the color of an LED that varies with the 8bit signal sent to the LED driver, the signal can vary between 0 and 255. Exponential curve fitting seems to work very well to represent the LED's behavior. I have had good results with the following formula: x * signal ** ex y * signal ** ey z * signal ** ez In Python, I use the following function: from scipy.optimize import curve_fit def fit_func_xae(x, a, e): # Curve fitting function return a * x**e # X, Y, Z are real colorimetric values that are measured by a physical instrument (aX, eX), cov = curve_fit(fit_func3xa, signal, X) (aY, eY), cov = curve_fit(fit_func3xa, signal, Y) (aZ, eZ), cov = curve_fit(fit_func3xa, signal, Z) Note: In colorimetry, we represent the color of the LED in the CIE XYZ color space, which is a linear space that works in a similar way as a linear RGB color space. Even if it is an aproximation, you can think of XYZ as a synonym of (linear) RGB. So a color can be represented as a triplet of linear values X, Y, Z. here is the data behind the curves. for each 8bit parameter sent to the LED driver, there are 3 measures Signal [ 3. 3. 3. 5. 5. 5. 7. 7. 7. 10. 10. 10. 15. 15. 15. 20. 20. 20. 30. 30. 30. 40. 40. 40. 80. 80. 80. 160. 160. 160. 240. 240. 240. 255. 255. 255.] X, Y, Z [[9.93295448e-05 8.88955748e-04 6.34978556e-04] [9.66399391e-05 8.86031926e-04 6.24680520e-04] [1.06108685e-04 8.99010175e-04 6.41577838e-04] [1.96407653e-04 1.70210146e-03 1.27178991e-03] [1.84965943e-04 1.67927596e-03 1.24985475e-03] [1.83770476e-04 1.67905297e-03 1.24855580e-03] [3.28537613e-04 2.75382195e-03 2.14639821e-03] [3.17804246e-04 2.74152647e-03 2.11730825e-03] [3.19167905e-04 2.74977632e-03 2.11142769e-03] [5.43770342e-04 4.09314433e-03 3.33793380e-03] [5.02493149e-04 4.04392581e-03 3.24784452e-03] [5.00712102e-04 4.03456071e-03 3.26803716e-03] [1.48001671e-03 1.09367632e-02 9.59283037e-03] [1.52082180e-03 1.09920985e-02 9.63624777e-03] [1.50153844e-03 1.09623592e-02 9.61724422e-03] [3.66206564e-03 2.74730946e-02 2.51982924e-02] [3.64074861e-03 2.74283157e-02 2.52187294e-02] [3.68719991e-03 2.75033778e-02 2.51691331e-02] [1.50905917e-02 1.06056566e-01 1.06534373e-01] [1.51370269e-02 1.06091182e-01 1.06790424e-01] [1.51654172e-02 1.06109863e-01 1.06943957e-01] [3.42912601e-02 2.30854413e-01 2.43427207e-01] [3.42217124e-02 2.30565972e-01 2.43529454e-01] [3.41486993e-02 2.30807320e-01 2.43591644e-01] [1.95905112e-01 1.27409867e+00 1.37490536e+00] [1.94923951e-01 1.26934278e+00 1.37751808e+00] [1.95242984e-01 1.26805844e+00 1.37565458e+00] [1.07931878e+00 6.97822521e+00 7.49602715e+00] [1.08944832e+00 7.03128378e+00 7.54296884e+00] [1.07994964e+00 6.96864302e+00 7.44011991e+00] [2.95296087e+00 1.90746191e+01 1.99164655e+01] [2.94254973e+00 1.89524517e+01 1.98158118e+01] [2.95753358e+00 1.90200667e+01 1.98885050e+01] [3.44049055e+00 2.21221159e+01 2.29667049e+01] [3.43817829e+00 2.21225393e+01 2.29363833e+01] [3.43077583e+00 2.21158929e+01 2.29399652e+01]] _ The Problem: Here's a scatter plot of some of my LED's XYZ values, together with a plot of the exponential curve fitting obtained with the code above: It all seems good... until you zoom a bit: On this zoom we can also see that the curve is fitted on multiple measurements: At high values, Z values (blue dots) are always higher than Y values (green dots). But at low values, Y values are higher than Z values. 
The meaning of this is that the LED changes in color depending on the PWM applied, for some reason (maybe because the temperature rises when more power is applied). This behavior cannot be represented mathematically by the formula that I have used for the curve fit, however, the curve fit is so good for high values that I am searching for a way to improve it in a simple and elegant way. Do you have any idea how this could be done? I have tried unsuccessfully to add more parameters, for example I tried to use: x1 * signal ** ex + x2 * signal ** fx instead of: x * signal ** ex but that causes scipy to overflow. My idea was that by adding two such elements I could still have a funtion that equals 0 when signal = 0, but that increases faster at low values than a simple exponential. A: The data shows two steps in the log-log plot so I used an approach already used here. Code is as follows: import matplotlib.pyplot as plt import numpy as np from scipy.optimize import curve_fit signal = np.array( [ 3.0, 3.0, 3.0, 5.0, 5.0, 5.0, 7.0, 7.0, 7.0, 10.0, 10., 10., 15.0, 15., 15., 20.0, 20., 20., 30.0, 30., 30., 40.0, 40., 40., 80.0, 80., 80., 160.0, 160., 160., 240.0, 240., 240., 255.0, 255., 255. ] ) data = np.array( [ [9.93295448e-05, 8.88955748e-04, 6.34978556e-04], [9.66399391e-05, 8.86031926e-04, 6.24680520e-04], [1.06108685e-04, 8.99010175e-04, 6.41577838e-04], [1.96407653e-04, 1.70210146e-03, 1.27178991e-03], [1.84965943e-04, 1.67927596e-03, 1.24985475e-03], [1.83770476e-04, 1.67905297e-03, 1.24855580e-03], [3.28537613e-04, 2.75382195e-03, 2.14639821e-03], [3.17804246e-04, 2.74152647e-03, 2.11730825e-03], [3.19167905e-04, 2.74977632e-03, 2.11142769e-03], [5.43770342e-04, 4.09314433e-03, 3.33793380e-03], [5.02493149e-04, 4.04392581e-03, 3.24784452e-03], [5.00712102e-04, 4.03456071e-03, 3.26803716e-03], [1.48001671e-03, 1.09367632e-02, 9.59283037e-03], [1.52082180e-03, 1.09920985e-02, 9.63624777e-03], [1.50153844e-03, 1.09623592e-02, 9.61724422e-03], [3.66206564e-03, 2.74730946e-02, 2.51982924e-02], [3.64074861e-03, 2.74283157e-02, 2.52187294e-02], [3.68719991e-03, 2.75033778e-02, 2.51691331e-02], [1.50905917e-02, 1.06056566e-01, 1.06534373e-01], [1.51370269e-02, 1.06091182e-01, 1.06790424e-01], [1.51654172e-02, 1.06109863e-01, 1.06943957e-01], [3.42912601e-02, 2.30854413e-01, 2.43427207e-01], [3.42217124e-02, 2.30565972e-01, 2.43529454e-01], [3.41486993e-02, 2.30807320e-01, 2.43591644e-01], [1.95905112e-01, 1.27409867e+00, 1.37490536e+00], [1.94923951e-01, 1.26934278e+00, 1.37751808e+00], [1.95242984e-01, 1.26805844e+00, 1.37565458e+00], [1.07931878e+00, 6.97822521e+00, 7.49602715e+00], [1.08944832e+00, 7.03128378e+00, 7.54296884e+00], [1.07994964e+00, 6.96864302e+00, 7.44011991e+00], [2.95296087e+00, 1.90746191e+01, 1.99164655e+01], [2.94254973e+00, 1.89524517e+01, 1.98158118e+01], [2.95753358e+00, 1.90200667e+01, 1.98885050e+01], [3.44049055e+00, 2.21221159e+01, 2.29667049e+01], [3.43817829e+00, 2.21225393e+01, 2.29363833e+01], [3.43077583e+00, 2.21158929e+01, 2.29399652e+01] ] ) def determine_start_parameters( x , y, edge=9 ): logx = np.log( x ) logy = np.log( y ) xx = logx[ :edge ] yy = logy[ :edge ] (ar1, br1), _ = curve_fit( lambda x, slope, off: slope * x + off, xx , yy ) xx = logx[ edge : -edge ] yy = logy[ edge : -edge] (ar2, br2), _ = curve_fit( lambda x, slope, off: slope * x + off, xx , yy ) xx = logx[ -edge : ] yy = logy[ -edge : ] (ar3, br3), _ = curve_fit( lambda x, slope, off: slope * x + off, xx , yy ) cross1r = ( br2 - br1 ) / ( ar1 - ar2 ) mr = ar1 * cross1r + br1 cross2r = ( 
br3 - br2 ) / ( ar2 - ar3 ) xx0r = [ mr, ar1, ar2 , ar3, cross1r, cross2r, 1 ] return xx0r def func( x, b, m1, m2, m3, a1, a2, p ): """ continuous approxiation for a two-step function used to fit the log-log data p is a sharpness parameter for the transition """ out = b - np.log( ( 1 + np.exp( -m1 * ( x - a1 ) )**abs( p ) ) ) / p + np.log( ( 1 + np.exp( m2 * ( x - a1 ) )**abs( p ) ) ) / p - np.log( ( 1 + np.exp( m3 * ( x - a2 ) )**abs( p ) ) ) / abs( p ) return out def expfunc( x, b, m1, m2, m3, a1, a2, p ): """ remapping to the original data """ xi = np.log( x ) eta = func( xi, b, m1, m2, m3, a1, a2, p) return np.exp(eta) def expfunc2( x, b, m1, m2, m3, a1, a2, p ): """ simplified remapping """ aa1 = np.exp( a1 ) aa2 = np.exp( a2 ) return ( np.exp( b ) * ( ( 1 + ( x / aa1 )**( m2 * p ) ) / ( 1 + ( x / aa2 )**( m3 * p ) ) / ( 1 + ( aa1 / x )**( m1 * p ) ) )**( 1 / p ) ) logsig = np.log( signal ) logred = np.log( data[:,0] ) loggreen = np.log( data[:,1] ) logblue = np.log( data[:,2] ) ### getting initial parameters ### red xx0r = determine_start_parameters( signal, data[ :, 0 ] ) xx0g = determine_start_parameters( signal, data[ :, 1 ] ) xx0b = determine_start_parameters( signal, data[ :, 2 ] ) print( xx0r ) print( xx0g ) print( xx0b ) xl = np.linspace( 1, 6, 150 ) tl = np.linspace( 1, 260, 150 ) solred = curve_fit( func, logsig, logred, p0=xx0r )[0] solgreen = curve_fit( func, logsig, loggreen, p0=xx0g )[0] solblue = curve_fit( func, logsig, logblue, p0=xx0b )[0] print( solred ) print( solgreen ) print( solblue ) fig = plt.figure() ax = fig.add_subplot( 2, 1, 1 ) bx = fig.add_subplot( 2, 1, 2 ) ax.scatter( np.log( signal ), np.log( data[:,0] ), color = 'r' ) ax.scatter( np.log( signal ), np.log( data[:,1] ), color = 'g' ) ax.scatter( np.log( signal ), np.log( data[:,2] ), color = 'b' ) ax.plot( xl, func( xl, *solred ), color = 'r' ) ax.plot( xl, func( xl, *solgreen ), color = 'g' ) ax.plot( xl, func( xl, *solblue ), color = 'b' ) bx.scatter( signal, data[:,0], color = 'r' ) bx.scatter( signal, data[:,1], color = 'g' ) bx.scatter( signal, data[:,2], color = 'b' ) bx.plot( tl, expfunc2( tl, *solred), color = 'r' ) bx.plot( tl, expfunc2( tl, *solgreen), color = 'g' ) bx.plot( tl, expfunc2( tl, *solblue), color = 'b' ) plt.show() Which results in
Improve an exponential curve fit of nearly exponential data
I have recorded some data about the color of an LED that varies with the 8bit signal sent to the LED driver, the signal can vary between 0 and 255. Exponential curve fitting seems to work very well to represent the LED's behavior. I have had good results with the following formula: x * signal ** ex y * signal ** ey z * signal ** ez In Python, I use the following function: from scipy.optimize import curve_fit def fit_func_xae(x, a, e): # Curve fitting function return a * x**e # X, Y, Z are real colorimetric values that are measured by a physical instrument (aX, eX), cov = curve_fit(fit_func3xa, signal, X) (aY, eY), cov = curve_fit(fit_func3xa, signal, Y) (aZ, eZ), cov = curve_fit(fit_func3xa, signal, Z) Note: In colorimetry, we represent the color of the LED in the CIE XYZ color space, which is a linear space that works in a similar way as a linear RGB color space. Even if it is an aproximation, you can think of XYZ as a synonym of (linear) RGB. So a color can be represented as a triplet of linear values X, Y, Z. here is the data behind the curves. for each 8bit parameter sent to the LED driver, there are 3 measures Signal [ 3. 3. 3. 5. 5. 5. 7. 7. 7. 10. 10. 10. 15. 15. 15. 20. 20. 20. 30. 30. 30. 40. 40. 40. 80. 80. 80. 160. 160. 160. 240. 240. 240. 255. 255. 255.] X, Y, Z [[9.93295448e-05 8.88955748e-04 6.34978556e-04] [9.66399391e-05 8.86031926e-04 6.24680520e-04] [1.06108685e-04 8.99010175e-04 6.41577838e-04] [1.96407653e-04 1.70210146e-03 1.27178991e-03] [1.84965943e-04 1.67927596e-03 1.24985475e-03] [1.83770476e-04 1.67905297e-03 1.24855580e-03] [3.28537613e-04 2.75382195e-03 2.14639821e-03] [3.17804246e-04 2.74152647e-03 2.11730825e-03] [3.19167905e-04 2.74977632e-03 2.11142769e-03] [5.43770342e-04 4.09314433e-03 3.33793380e-03] [5.02493149e-04 4.04392581e-03 3.24784452e-03] [5.00712102e-04 4.03456071e-03 3.26803716e-03] [1.48001671e-03 1.09367632e-02 9.59283037e-03] [1.52082180e-03 1.09920985e-02 9.63624777e-03] [1.50153844e-03 1.09623592e-02 9.61724422e-03] [3.66206564e-03 2.74730946e-02 2.51982924e-02] [3.64074861e-03 2.74283157e-02 2.52187294e-02] [3.68719991e-03 2.75033778e-02 2.51691331e-02] [1.50905917e-02 1.06056566e-01 1.06534373e-01] [1.51370269e-02 1.06091182e-01 1.06790424e-01] [1.51654172e-02 1.06109863e-01 1.06943957e-01] [3.42912601e-02 2.30854413e-01 2.43427207e-01] [3.42217124e-02 2.30565972e-01 2.43529454e-01] [3.41486993e-02 2.30807320e-01 2.43591644e-01] [1.95905112e-01 1.27409867e+00 1.37490536e+00] [1.94923951e-01 1.26934278e+00 1.37751808e+00] [1.95242984e-01 1.26805844e+00 1.37565458e+00] [1.07931878e+00 6.97822521e+00 7.49602715e+00] [1.08944832e+00 7.03128378e+00 7.54296884e+00] [1.07994964e+00 6.96864302e+00 7.44011991e+00] [2.95296087e+00 1.90746191e+01 1.99164655e+01] [2.94254973e+00 1.89524517e+01 1.98158118e+01] [2.95753358e+00 1.90200667e+01 1.98885050e+01] [3.44049055e+00 2.21221159e+01 2.29667049e+01] [3.43817829e+00 2.21225393e+01 2.29363833e+01] [3.43077583e+00 2.21158929e+01 2.29399652e+01]] _ The Problem: Here's a scatter plot of some of my LED's XYZ values, together with a plot of the exponential curve fitting obtained with the code above: It all seems good... until you zoom a bit: On this zoom we can also see that the curve is fitted on multiple measurements: At high values, Z values (blue dots) are always higher than Y values (green dots). But at low values, Y values are higher than Z values. 
The meaning of this is that the LED changes in color depending on the PWM applied, for some reason (maybe because the temperature rises when more power is applied). This behavior cannot be represented mathematically by the formula that I have used for the curve fit, however, the curve fit is so good for high values that I am searching for a way to improve it in a simple and elegant way. Do you have any idea how this could be done? I have tried unsuccessfully to add more parameters, for example I tried to use: x1 * signal ** ex + x2 * signal ** fx instead of: x * signal ** ex but that causes scipy to overflow. My idea was that by adding two such elements I could still have a funtion that equals 0 when signal = 0, but that increases faster at low values than a simple exponential.
[ "The data shows two steps in the log-log plot so I used an approach already used here.\nCode is as follows:\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.optimize import curve_fit\nsignal = np.array( [\n 3.0, 3.0, 3.0,\n 5.0, 5.0, 5.0,\n 7.0, 7.0, 7.0,\n 10.0, 10., 10.,\n 15.0, 15., 15.,\n 20.0, 20., 20.,\n 30.0, 30., 30.,\n 40.0, 40., 40.,\n 80.0, 80., 80.,\n 160.0, 160., 160.,\n 240.0, 240., 240.,\n 255.0, 255., 255.\n] )\n\ndata = np.array( [\n [9.93295448e-05, 8.88955748e-04, 6.34978556e-04],\n [9.66399391e-05, 8.86031926e-04, 6.24680520e-04],\n [1.06108685e-04, 8.99010175e-04, 6.41577838e-04],\n [1.96407653e-04, 1.70210146e-03, 1.27178991e-03],\n [1.84965943e-04, 1.67927596e-03, 1.24985475e-03],\n [1.83770476e-04, 1.67905297e-03, 1.24855580e-03],\n [3.28537613e-04, 2.75382195e-03, 2.14639821e-03],\n [3.17804246e-04, 2.74152647e-03, 2.11730825e-03],\n [3.19167905e-04, 2.74977632e-03, 2.11142769e-03],\n [5.43770342e-04, 4.09314433e-03, 3.33793380e-03],\n [5.02493149e-04, 4.04392581e-03, 3.24784452e-03],\n [5.00712102e-04, 4.03456071e-03, 3.26803716e-03],\n [1.48001671e-03, 1.09367632e-02, 9.59283037e-03],\n [1.52082180e-03, 1.09920985e-02, 9.63624777e-03],\n [1.50153844e-03, 1.09623592e-02, 9.61724422e-03],\n [3.66206564e-03, 2.74730946e-02, 2.51982924e-02],\n [3.64074861e-03, 2.74283157e-02, 2.52187294e-02],\n [3.68719991e-03, 2.75033778e-02, 2.51691331e-02],\n [1.50905917e-02, 1.06056566e-01, 1.06534373e-01],\n [1.51370269e-02, 1.06091182e-01, 1.06790424e-01],\n [1.51654172e-02, 1.06109863e-01, 1.06943957e-01],\n [3.42912601e-02, 2.30854413e-01, 2.43427207e-01],\n [3.42217124e-02, 2.30565972e-01, 2.43529454e-01],\n [3.41486993e-02, 2.30807320e-01, 2.43591644e-01],\n [1.95905112e-01, 1.27409867e+00, 1.37490536e+00],\n [1.94923951e-01, 1.26934278e+00, 1.37751808e+00],\n [1.95242984e-01, 1.26805844e+00, 1.37565458e+00],\n [1.07931878e+00, 6.97822521e+00, 7.49602715e+00],\n [1.08944832e+00, 7.03128378e+00, 7.54296884e+00],\n [1.07994964e+00, 6.96864302e+00, 7.44011991e+00],\n [2.95296087e+00, 1.90746191e+01, 1.99164655e+01],\n [2.94254973e+00, 1.89524517e+01, 1.98158118e+01],\n [2.95753358e+00, 1.90200667e+01, 1.98885050e+01],\n [3.44049055e+00, 2.21221159e+01, 2.29667049e+01],\n [3.43817829e+00, 2.21225393e+01, 2.29363833e+01],\n [3.43077583e+00, 2.21158929e+01, 2.29399652e+01]\n] )\n\ndef determine_start_parameters( x , y, edge=9 ):\n logx = np.log( x )\n logy = np.log( y )\n xx = logx[ :edge ]\n yy = logy[ :edge ]\n (ar1, br1), _ = curve_fit( lambda x, slope, off: slope * x + off, xx , yy )\n xx = logx[ edge : -edge ]\n yy = logy[ edge : -edge]\n (ar2, br2), _ = curve_fit( lambda x, slope, off: slope * x + off, xx , yy )\n xx = logx[ -edge : ]\n yy = logy[ -edge : ]\n (ar3, br3), _ = curve_fit( lambda x, slope, off: slope * x + off, xx , yy )\n cross1r = ( br2 - br1 ) / ( ar1 - ar2 )\n mr = ar1 * cross1r + br1\n cross2r = ( br3 - br2 ) / ( ar2 - ar3 )\n xx0r = [ mr, ar1, ar2 , ar3, cross1r, cross2r, 1 ]\n return xx0r\n\ndef func(\n x, b,\n m1, m2, m3,\n a1, a2,\n p\n):\n \"\"\"\n continuous approxiation for a two-step function\n used to fit the log-log data\n p is a sharpness parameter for the transition\n \"\"\"\n out = b - np.log(\n ( 1 + np.exp( -m1 * ( x - a1 ) )**abs( p ) )\n ) / p + np.log(\n ( 1 + np.exp( m2 * ( x - a1 ) )**abs( p ) )\n ) / p - np.log(\n ( 1 + np.exp( m3 * ( x - a2 ) )**abs( p ) )\n ) / abs( p )\n return out\n\ndef expfunc(\n x, b,\n m1, m2, m3,\n a1, a2,\n p\n):\n \"\"\"\n remapping to the original data\n \"\"\"\n xi = np.log( x )\n eta = 
func(\n xi, b,\n m1, m2, m3,\n a1, a2,\n p)\n return np.exp(eta)\n\ndef expfunc2(\n x, b,\n m1, m2, m3,\n a1, a2,\n p\n):\n \"\"\"\n simplified remapping\n \"\"\"\n aa1 = np.exp( a1 )\n aa2 = np.exp( a2 )\n return (\n np.exp( b ) * (\n ( 1 + ( x / aa1 )**( m2 * p ) ) / \n ( 1 + ( x / aa2 )**( m3 * p ) ) /\n ( 1 + ( aa1 / x )**( m1 * p ) )\n )**( 1 / p )\n )\n\nlogsig = np.log( signal )\nlogred = np.log( data[:,0] )\nloggreen = np.log( data[:,1] )\nlogblue = np.log( data[:,2] )\n\n### getting initial parameters\n### red\n\nxx0r = determine_start_parameters( signal, data[ :, 0 ] )\nxx0g = determine_start_parameters( signal, data[ :, 1 ] )\nxx0b = determine_start_parameters( signal, data[ :, 2 ] )\nprint( xx0r )\nprint( xx0g )\nprint( xx0b )\n\nxl = np.linspace( 1, 6, 150 )\ntl = np.linspace( 1, 260, 150 )\n\n\nsolred = curve_fit( func, logsig, logred, p0=xx0r )[0]\nsolgreen = curve_fit( func, logsig, loggreen, p0=xx0g )[0]\nsolblue = curve_fit( func, logsig, logblue, p0=xx0b )[0]\n\nprint( solred )\nprint( solgreen )\nprint( solblue )\n\nfig = plt.figure()\nax = fig.add_subplot( 2, 1, 1 )\nbx = fig.add_subplot( 2, 1, 2 )\n\nax.scatter( np.log( signal ), np.log( data[:,0] ), color = 'r' )\nax.scatter( np.log( signal ), np.log( data[:,1] ), color = 'g' )\nax.scatter( np.log( signal ), np.log( data[:,2] ), color = 'b' )\n\nax.plot( xl, func( xl, *solred ), color = 'r' )\nax.plot( xl, func( xl, *solgreen ), color = 'g' )\nax.plot( xl, func( xl, *solblue ), color = 'b' )\n\nbx.scatter( signal, data[:,0], color = 'r' )\nbx.scatter( signal, data[:,1], color = 'g' )\nbx.scatter( signal, data[:,2], color = 'b' )\nbx.plot( tl, expfunc2( tl, *solred), color = 'r' )\nbx.plot( tl, expfunc2( tl, *solgreen), color = 'g' )\nbx.plot( tl, expfunc2( tl, *solblue), color = 'b' )\n\n\nplt.show()\n\nWhich results in\n\n" ]
[ 1 ]
[]
[]
[ "colors", "curve_fitting", "exponential", "led", "python" ]
stackoverflow_0074440462_colors_curve_fitting_exponential_led_python.txt
Q: How to use modify_transaction after send_raw_transaction in web3.py I am using Infura node, thus I had to sign the transaction with w3.eth.account.sign_transaction and then send it with w3.eth.send_raw_transaction. The gas that I used was too low apparently, and the transaction is pending for 8 hours now. By looking in the docs I noticed there are two methods that could help me w3.eth.modify_transaction and w3.eth.replace_transaction. The idea would be to use one of them (not sure what's the difference between them though) to modify the transaction gas so it gets confirmed. The problem is, I don't see in the docs how to use one of those two methods and sign the modified transaction with my private key because both of them make the RPC call to eth_sendTransaction which isn't supported by the shared Infura node. A: You can use local account signing middleware with Web3.py so you do not need to use send_raw_transaction. A: Example of manually bumping up gas with Web3.py 5 from web3.exceptions import TransactionNotFound tx, receipt = None, None try: tx = w3.eth.get_transaction (tx_hash) # Not 100% reliable! except TransactionNotFound: pass try: receipt = w3.eth.get_transaction_receipt (tx_hash) except TransactionNotFound: pass if not receipt and tx: tx = tx.__dict__ gas_price = tx['maxFeePerGas'] / 1000000000 if gas_price <= 10: tx['maxPriorityFeePerGas'] = 1230000000 tx['maxFeePerGas'] = 12300000000 tx.pop ('blockHash', '') tx.pop ('blockNumber', '') tx.pop ('transactionIndex', '') tx.pop ('gasPrice', '') tx.pop ('hash', '') tx['data'] = tx.pop ('input') signed = w3.eth.account.sign_transaction (tx, pk) tid = w3.eth.send_raw_transaction (signed.rawTransaction) print (tid.hex()) In my experience it seems like both maxFeePerGas and maxPriorityFeePerGas should be increased. There is some discussion here.
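As a rough illustration of the middleware route mentioned in the first answer (the provider URL and private_key are placeholders; this assumes Web3.py 5.x, where construct_sign_and_send_raw_middleware is available):

from web3 import Web3
from web3.middleware import construct_sign_and_send_raw_middleware

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))
acct = w3.eth.account.from_key(private_key)      # private_key is assumed to be defined
w3.middleware_onion.add(construct_sign_and_send_raw_middleware(acct))
w3.eth.default_account = acct.address

# From here on, w3.eth.send_transaction(...) is signed locally and submitted as a
# raw transaction, so helpers that rely on eth_sendTransaction can be used against
# an Infura endpoint.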
How to use modify_transaction after send_raw_transaction in web3.py
I am using Infura node, thus I had to sign the transaction with w3.eth.account.sign_transaction and then send it with w3.eth.send_raw_transaction. The gas that I used was too low apparently, and the transaction is pending for 8 hours now. By looking in the docs I noticed there are two methods that could help me w3.eth.modify_transaction and w3.eth.replace_transaction. The idea would be to use one of them (not sure what's the difference between them though) to modify the transaction gas so it gets confirmed. The problem is, I don't see in the docs how to use one of those two methods and sign the modified transaction with my private key because both of them make the RPC call to eth_sendTransaction which isn't supported by the shared Infura node.
[ "You can use local account signing middleware with Web3.py so you do not need to use send_raw_transaction.\n", "Example of manually bumping up gas with Web3.py 5\nfrom web3.exceptions import TransactionNotFound\n\ntx, receipt = None, None\ntry: tx = w3.eth.get_transaction (tx_hash) # Not 100% reliable!\nexcept TransactionNotFound: pass\ntry: receipt = w3.eth.get_transaction_receipt (tx_hash)\nexcept TransactionNotFound: pass\n\nif not receipt and tx:\n tx = tx.__dict__\n gas_price = tx['maxFeePerGas'] / 1000000000\n if gas_price <= 10:\n tx['maxPriorityFeePerGas'] = 1230000000\n tx['maxFeePerGas'] = 12300000000\n tx.pop ('blockHash', '')\n tx.pop ('blockNumber', '')\n tx.pop ('transactionIndex', '')\n tx.pop ('gasPrice', '')\n tx.pop ('hash', '')\n tx['data'] = tx.pop ('input')\n\n signed = w3.eth.account.sign_transaction (tx, pk)\n tid = w3.eth.send_raw_transaction (signed.rawTransaction)\n print (tid.hex())\n\nIn my experience it seems like both maxFeePerGas and maxPriorityFeePerGas should be increased. There is some discussion here.\n" ]
[ 1, 0 ]
[]
[]
[ "ethereum", "python", "web3py" ]
stackoverflow_0072294891_ethereum_python_web3py.txt
Q: Add sufix on duplicates in pandas dataframe Python i am writing a script to download images. I'm reading a excel file as a pandas dataframe Column A -url links Column B - Name downloaded images will have this name, example "A.jpeg" There will be duplicates in Column B[Name] in that case i would like to add a suffix on the image name. so the output will be A.jpeg A-1.Jpeg .. import requests import pandas as pd df = pd.read_excel(r'C:\Users\exdata1.xlsx') for index, row in df.iterrows(): url = row['url'] file_name = url.split('/') r = requests.get(url) file_name=(row['name']+".jpeg") if r.status_code == 200: with open(file_name, "wb") as f: f.write(r.content) print (file_name) I have been trying cumcount but can't really seem to get it to work.. Apreciate all the help I can get A: You can try: import requests import pandas as pd df = pd.read_excel(r"C:\Users\exdata1.xlsx") cnt = {} for index, row in df.iterrows(): name = row["name"] if name not in cnt: cnt[name] = 0 name = f"{name}.jpeg" else: cnt[name] += 1 name = f"{name}-{cnt[name]}.jpeg" url = row["url"] r = requests.get(url) if r.status_code == 200: with open(name, "wb") as f: f.write(r.content) print(name) This will download the files as A.jpeg, A-1.jpeg, A-2.jpeg, ...
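Since the question mentions trying cumcount, here is a hedged sketch of that route as well (not part of the answer above; column names follow the question):

import pandas as pd

df = pd.read_excel(r"C:\Users\exdata1.xlsx")
dup_no = df.groupby("name").cumcount()   # 0 for the first occurrence, then 1, 2, ...
df["file_name"] = (df["name"] + "-" + dup_no.astype(str) + ".jpeg").where(
    dup_no > 0, df["name"] + ".jpeg")    # first occurrence keeps the plain name

The download loop from the answer can then use df["file_name"] directly.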
Add suffix on duplicates in pandas dataframe Python
I am writing a script to download images. I'm reading an Excel file as a pandas dataframe Column A -url links Column B - Name downloaded images will have this name, example "A.jpeg" There will be duplicates in Column B[Name] in that case I would like to add a suffix to the image name. So the output will be A.jpeg A-1.Jpeg .. import requests import pandas as pd df = pd.read_excel(r'C:\Users\exdata1.xlsx') for index, row in df.iterrows(): url = row['url'] file_name = url.split('/') r = requests.get(url) file_name=(row['name']+".jpeg") if r.status_code == 200: with open(file_name, "wb") as f: f.write(r.content) print (file_name) I have been trying cumcount but can't really seem to get it to work. I appreciate any help I can get.
[ "You can try:\nimport requests\nimport pandas as pd\n\ndf = pd.read_excel(r\"C:\\Users\\exdata1.xlsx\")\ncnt = {}\n\nfor index, row in df.iterrows():\n name = row[\"name\"]\n if name not in cnt:\n cnt[name] = 0\n name = f\"{name}.jpeg\"\n else:\n cnt[name] += 1\n name = f\"{name}-{cnt[name]}.jpeg\"\n\n url = row[\"url\"]\n r = requests.get(url)\n \n if r.status_code == 200:\n with open(name, \"wb\") as f:\n f.write(r.content)\n print(name)\n\n\nThis will download the files as A.jpeg, A-1.jpeg, A-2.jpeg, ...\n" ]
[ 1 ]
[]
[]
[ "dataframe", "duplicates", "pandas", "python", "python_requests" ]
stackoverflow_0074457738_dataframe_duplicates_pandas_python_python_requests.txt
Q: Psycopg2 : Insert multiple values if not exists in the table I need to insert multiple values into a table after checking if it doesn't exist using psycopg2. The query am using: WITH data(name,proj_id) as ( VALUES ('hello',123),('hey',123) ) INSERT INTO keywords(name,proj_id) SELECT d.name,d.proj_id FROM data d WHERE NOT EXISTS (SELECT 1 FROM keywords u2 WHERE u2.name=d.name AND u2.proj_id=d.proj_id) But how to format or add the values section from tuple to ('hello',123),('hey',123) in query. A: As suggested in the comment, assuming that your connection is already established as conn one of the ways would be: from typing import Iterator, Dict, Any def insert_execute_values_iterator(connection, keywords: Iterator[Dict[str, Any]], page_size: int = 1000) -> None: with connection.cursor() as cursor: psycopg2.extras.execute_values( cursor, """ WITH data(name,proj_id) as (VALUES %s) INSERT INTO keywords(name,proj_id) SELECT d.name,d.proj_id FROM data d WHERE NOT EXISTS (SELECT 1 FROM keywords u2 WHERE u2.name=d.name AND u2.proj_id=d.proj_id);""", (( keyword['name'], keyword['proj_id'] ) for keyword in keywords), page_size=page_size) insert_execute_values_iterator(conn,{'hello':123,'hey':123})
Psycopg2 : Insert multiple values if not exists in the table
I need to insert multiple values into a table after checking that they don't already exist, using psycopg2. The query I am using: WITH data(name,proj_id) as ( VALUES ('hello',123),('hey',123) ) INSERT INTO keywords(name,proj_id) SELECT d.name,d.proj_id FROM data d WHERE NOT EXISTS (SELECT 1 FROM keywords u2 WHERE u2.name=d.name AND u2.proj_id=d.proj_id) But how can I format or build the values section of the query from a Python tuple such as ('hello',123),('hey',123)?
[ "As suggested in the comment, assuming that your connection is already established as conn one of the ways would be:\nfrom typing import Iterator, Dict, Any\n\ndef insert_execute_values_iterator(connection, keywords: Iterator[Dict[str, Any]], page_size: int = 1000) -> None:\n with connection.cursor() as cursor:\n psycopg2.extras.execute_values(\n cursor,\n \"\"\" WITH data(name,proj_id) as (VALUES %s)\n INSERT INTO keywords(name,proj_id)\n SELECT d.name,d.proj_id FROM data d \n WHERE NOT EXISTS (SELECT 1 FROM keywords u2 WHERE\n u2.name=d.name AND u2.proj_id=d.proj_id);\"\"\", \n (( keyword['name'],\n keyword['proj_id'] ) for keyword in keywords),\n page_size=page_size)\n\n\ninsert_execute_values_iterator(conn,{'hello':123,'hey':123})\n\n" ]
[ 1 ]
[ "insert_query = \"\"\"WITH data(name, proj_id) as (\n VALUES (%s,%s)\n ) \n INSERT INTO keywords(name, proj_id) \n SELECT d.name,d.proj_id FROM data d \n WHERE NOT EXISTS (\n SELECT 1 FROM keywords u2 \n WHERE u2.name = d.name AND u2.proj_id = d.proj_id)\"\"\"\ntuple_values = (('hello',123),('hey',123))\n \npsycopg2.extras.execute_batch(cursor,insert_query,tuple_values)\n\n" ]
[ -1 ]
[ "postgresql", "psycopg2", "python" ]
stackoverflow_0074450641_postgresql_psycopg2_python.txt
Q: Multiple comparisons on an array of ints I have an array of ints mypos = np.array([10, 20, 30, 40, 50]) and a list of start & end positions, like this: mydelims = [[5, 12], [15,31], [12,16], [22,69]] I'd like to loop through mydelims and count how many positions are within each pair. Instinctively, I would write it like this: for mypair in mydelims: print(sum(mypos>mypair[0] & mypos<mypair[1])) However, python doesn't seem to handle this. I know mypos>42 will return an array of booleans, but what is the most efficient way to apply multiple comparisons to one array of ints? A: You were pretty close and looking for np.logical_and: import numpy as np mypos = np.array([10, 20, 30, 40, 50]) mydelims = [[5, 12], [15, 31], [12, 16], [22, 69]] result = { tuple(mypair): sum(np.logical_and(mypos > mypair[0], mypos < mypair[1])) for mypair in mydelims } print(result) Output: {(5, 12): 1, (15, 31): 2, (12, 16): 0, (22, 69): 3} Note that you're treating the delims as non-inclusive - which I assumed was intentional. I.e. neither 5 nor 12 fall within [5, 12].
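A supplementary note, not from the answer above: the expression in the question also works once each comparison is parenthesised, because & binds more tightly than < and > in Python:

import numpy as np

mypos = np.array([10, 20, 30, 40, 50])
for lo, hi in [[5, 12], [15, 31], [12, 16], [22, 69]]:
    print(np.sum((mypos > lo) & (mypos < hi)))   # parentheses around each comparison are required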
Multiple comparisons on an array of ints
I have an array of ints mypos = np.array([10, 20, 30, 40, 50]) and a list of start & end positions, like this: mydelims = [[5, 12], [15,31], [12,16], [22,69]] I'd like to loop through mydelims and count how many positions are within each pair. Instinctively, I would write it like this: for mypair in mydelims: print(sum(mypos>mypair[0] & mypos<mypair[1])) However, python doesn't seem to handle this. I know mypos>42 will return an array of booleans, but what is the most efficient way to apply multiple comparisons to one array of ints?
[ "You were pretty close and looking for np.logical_and:\nimport numpy as np\n\nmypos = np.array([10, 20, 30, 40, 50])\n\nmydelims = [[5, 12],\n [15, 31],\n [12, 16],\n [22, 69]]\n\nresult = {\n tuple(mypair): sum(np.logical_and(mypos > mypair[0], mypos < mypair[1])) \n for mypair in mydelims\n}\nprint(result)\n\nOutput:\n{(5, 12): 1, (15, 31): 2, (12, 16): 0, (22, 69): 3}\n\nNote that you're treating the delims as non-inclusive - which I assumed was intentional. I.e. neither 5 nor 12 fall within [5, 12].\n" ]
[ 1 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074459479_arrays_python.txt
Q: Matrix[i][j] index out of range for the 2D Matrix 1. Given A of size 4 × 3 and K = 1 with bird habitats at (0, 0), (2, 2) and (3, 1) the function should return 3. Wind turbines can be built in three locations: (0, 2), (1, 1) and (2, 0). In my code i have got an error of index out of range what should i can do to get my desired output.in the given case out put should be 3. def solution(Matrix, K): row=len(Matrix) col=len(Matrix[0]) K = K +1 for i in (row): for j in (col): if Matrix[i][j] == 1: print(Matrix) fun(Matrix,i,j, K), res = 0, for i in (row): for j in (col): if Matrix[i][j] == 0: res += 1, return res, def fun(Matrix,i,j,k): if k == 1: if i-1 >= 0: if Matrix[i-1][j]==0 or Matrix[i-1][j] >=2 and Matrix[i-1][j] <= k-1: Matrix[i-1][j]=k, fun(Matrix,i-1,j,k-1), if i+1 < len(Matrix): if Matrix[i+1][j] == 0 or Matrix[i+1][j] >=2 and Matrix[i+1][j] <= k-1: Matrix[i+1][j]=k, fun(Matrix,i+1,j,k-1) if j-1 >= 0: if Matrix[i][j-1]== 0 or Matrix[i][j+1] >= 2 and Matrix[i][j-1] <= k-1: Matrix[i][j-1]=k, fun(Matrix,i,j-1,k-1) if j+1 < len(Matrix[0]): if Matrix[i][j+1]== 0 or Matrix[i][j-1] >=2 and Matrix[i][j+1] <= k-1: Matrix[i][j+1]=k, fun(Matrix,i,j+1,k-1), return A: In Solution with the code you are making tuples for row and col... row=len(Matrix), col=len(Matrix[0]), K = K +1; Change your Code to actually save only the length as an int in your variable row=len(Matrix) col=len(Matrix[0]) K = K +1 Next Let's focus on this Part of your Mistake... This only works because it seems you don´t know how to create Variables and so it can loop through a tuple of what should be an int. for i in (row): for j in (col): if Matrix[i][j] == 1: print(Matrix) fun(Matrix,i,j, K) for i in number doesnt loop 0-number.... The way you defined your variables row would be a tuple with length 1 containing the length of the rows... col would be a tuple with length 1 containing the length of cols... eg: col = {tuple: 1} 0 = {int} 6 for j in col: # would loop once with value 6 This results an error as your row and col are the length of an array. So you are basically trying to access an array with the index of its length. eg. for a array index 0- 5 (length 6) you are trying array[6] Your proper Solution and the way the Program seems intended would be to loop the length of col and row (using range) Lets take for example: row = 5, col = 6 for i in range(row): # will loop through with 0, 1, 2, 3, 4 for j in range(col): # will loop through with 0, 1, 2, 3, 4, 5 if Matrix[i][j] == 1: print(Matrix) fun(Matrix,i,j, K) A: Finally i solved out for the 2D matrix res = 0, for i in range(len(Matrix)): for j in range(len(Matrix[0])): if Matrix[i][j] == 1: fun(Matrix,i,j, K), for i in range(len(Matrix)): for j in range(len(Matrix[0])): if Matrix[i][j] == 0: res=res[0]+1, ----------
Matrix[i][j] index out of range for the 2D Matrix
1. Given A of size 4 × 3 and K = 1 with bird habitats at (0, 0), (2, 2) and (3, 1) the function should return 3. Wind turbines can be built in three locations: (0, 2), (1, 1) and (2, 0). In my code i have got an error of index out of range what should i can do to get my desired output.in the given case out put should be 3. def solution(Matrix, K): row=len(Matrix) col=len(Matrix[0]) K = K +1 for i in (row): for j in (col): if Matrix[i][j] == 1: print(Matrix) fun(Matrix,i,j, K), res = 0, for i in (row): for j in (col): if Matrix[i][j] == 0: res += 1, return res, def fun(Matrix,i,j,k): if k == 1: if i-1 >= 0: if Matrix[i-1][j]==0 or Matrix[i-1][j] >=2 and Matrix[i-1][j] <= k-1: Matrix[i-1][j]=k, fun(Matrix,i-1,j,k-1), if i+1 < len(Matrix): if Matrix[i+1][j] == 0 or Matrix[i+1][j] >=2 and Matrix[i+1][j] <= k-1: Matrix[i+1][j]=k, fun(Matrix,i+1,j,k-1) if j-1 >= 0: if Matrix[i][j-1]== 0 or Matrix[i][j+1] >= 2 and Matrix[i][j-1] <= k-1: Matrix[i][j-1]=k, fun(Matrix,i,j-1,k-1) if j+1 < len(Matrix[0]): if Matrix[i][j+1]== 0 or Matrix[i][j-1] >=2 and Matrix[i][j+1] <= k-1: Matrix[i][j+1]=k, fun(Matrix,i,j+1,k-1), return
[ "In Solution with the code you are making tuples for row and col...\n row=len(Matrix),\n col=len(Matrix[0]),\n K = K +1;\n\nChange your Code to actually save only the length as an int in your variable\nrow=len(Matrix)\ncol=len(Matrix[0])\nK = K +1\n\nNext Let's focus on this Part of your Mistake... This only works because it seems you don´t know how to create Variables and so it can loop through a tuple of what should be an int.\nfor i in (row):\n for j in (col):\n if Matrix[i][j] == 1:\n print(Matrix)\n fun(Matrix,i,j, K)\n\nfor i in number doesnt loop 0-number.... The way you defined your variables row would be a tuple with length 1 containing the length of the rows...\ncol would be a tuple with length 1 containing the length of cols...\neg:\ncol = {tuple: 1}\n0 = {int} 6\nfor j in col: # would loop once with value 6\n\nThis results an error as your row and col are the length of an array. So you are basically trying to access an array with the index of its length.\neg. for a array index 0- 5 (length 6)\nyou are trying array[6]\nYour proper Solution and the way the Program seems intended would be to loop the length of col and row (using range)\nLets take for example: row = 5, col = 6\nfor i in range(row): # will loop through with 0, 1, 2, 3, 4\n for j in range(col): # will loop through with 0, 1, 2, 3, 4, 5\n if Matrix[i][j] == 1:\n print(Matrix)\n fun(Matrix,i,j, K)\n\n", "Finally i solved out for the 2D matrix\n res = 0,\nfor i in range(len(Matrix)):\n for j in range(len(Matrix[0])):\n if Matrix[i][j] == 1:\n fun(Matrix,i,j, K),\n \nfor i in range(len(Matrix)): \n for j in range(len(Matrix[0])):\n if Matrix[i][j] == 0:\n res=res[0]+1,\n----------\n\n" ]
[ 1, 0 ]
[]
[]
[ "list", "matrix", "python", "python_3.x", "tuples" ]
stackoverflow_0074445124_list_matrix_python_python_3.x_tuples.txt
Q: How to remove NaN values from pivot table only if each column has more than x NaN values? I have a pivot table that I create with the line pivot_table = pd.pivot_table(df, values='trip_duration', index=['day_of_month', 'hour'], columns='location_pair_id', aggfunc=np.mean, dropna=True), which looks like this: pivot table For each column, I want to impute the NaN values, but only if the entire column has less than x NaN values, say x=10. All the other columns having NaN values more than x times, should be removed. Until now, I tried to add a subset of columns into the dropna function: pivot_table = pivot_table.dropna(axis=1, subset=nan_values_idx), where nan_values_idx is calculated as follows: nan_values = pivot_table.isnull().sum() nan_values_idx = list(nan_values[nan_values>10].keys()), which gives a list of location_pair_id's: ['(164, 170)', '(186, 230)', '(186, 48)',...,'(79, 79)'] However, when I say pivot_table = pivot_table.dropna(axis=1, subset=nan_values_idx) I get the error: in DataFrame.dropna(self, axis, how, thresh, subset, inplace) 6548 check = indices == -1 6549 if check.any(): -> 6550 raise KeyError(np.array(subset)[check].tolist()) 6551 agg_obj = self.take(indices, axis=agg_axis) 6553 if thresh is not no_default: KeyError: ['(164, 170)', '(186, 230)', '(186, 48)', '(186, 68)', '(230, 186)', '(230, 230)', '(230, 48)', '(230, 50)', '(263, 141)', '(263, 75)', '(48, 142)', '(48, 164)', '(48, 186)', '(48, 230)', '(48, 48)', '(48, 50)', '(48, 68)', '(68, 246)', '(68, 48)', '(68, 68)', '(79, 107)', '(79, 79)'] I appreciate any hint! A: You can count number of NaN values in each column, and filter out the column if the number is above 10 (or another value) cols = [col for col, no_na in pivot_table.isna().sum().items() if no_na <= 10] pivot_table = pivot_table[cols]
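For the imputation half of the question, a minimal sketch building on the cols selection in the answer (assuming a simple column-mean fill is acceptable; the threshold stays at 10):

cols = [col for col, no_na in pivot_table.isna().sum().items() if no_na <= 10]
pivot_table = pivot_table[cols]
pivot_table = pivot_table.fillna(pivot_table.mean())   # fill the remaining gaps with each column's mean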
How to remove NaN values from pivot table only if each column has more than x NaN values?
I have a pivot table that I create with the line pivot_table = pd.pivot_table(df, values='trip_duration', index=['day_of_month', 'hour'], columns='location_pair_id', aggfunc=np.mean, dropna=True), which looks like this: pivot table For each column, I want to impute the NaN values, but only if the entire column has less than x NaN values, say x=10. All the other columns having NaN values more than x times, should be removed. Until now, I tried to add a subset of columns into the dropna function: pivot_table = pivot_table.dropna(axis=1, subset=nan_values_idx), where nan_values_idx is calculated as follows: nan_values = pivot_table.isnull().sum() nan_values_idx = list(nan_values[nan_values>10].keys()), which gives a list of location_pair_id's: ['(164, 170)', '(186, 230)', '(186, 48)',...,'(79, 79)'] However, when I say pivot_table = pivot_table.dropna(axis=1, subset=nan_values_idx) I get the error: in DataFrame.dropna(self, axis, how, thresh, subset, inplace) 6548 check = indices == -1 6549 if check.any(): -> 6550 raise KeyError(np.array(subset)[check].tolist()) 6551 agg_obj = self.take(indices, axis=agg_axis) 6553 if thresh is not no_default: KeyError: ['(164, 170)', '(186, 230)', '(186, 48)', '(186, 68)', '(230, 186)', '(230, 230)', '(230, 48)', '(230, 50)', '(263, 141)', '(263, 75)', '(48, 142)', '(48, 164)', '(48, 186)', '(48, 230)', '(48, 48)', '(48, 50)', '(48, 68)', '(68, 246)', '(68, 48)', '(68, 68)', '(79, 107)', '(79, 79)'] I appreciate any hint!
[ "You can count number of NaN values in each column, and filter out the column if the number is above 10 (or another value)\ncols = [col for col, no_na in pivot_table.isna().sum().items() if no_na <= 10]\npivot_table = pivot_table[cols]\n\n" ]
[ 0 ]
[]
[]
[ "imputation", "nan", "pandas", "pivot_table", "python" ]
stackoverflow_0074459390_imputation_nan_pandas_pivot_table_python.txt
Q: Can't change system default SQLite binary Using Django and SQLite I want to run most recent SQLite version; most recent SQLite binary, not the SQLite Python library. I have an SQLite binary that is not the system default and can't change the default version. I'm not using Django's ORM but replaced it with a standalone SQLAlchemy version. Related (but has to do with running the most recent Python SQLite library). A: The simplest option if you just need a recent version of SQLite from python is now to install the pysqlite3 package: pip install pysqlite3-binary This comes with a recent version of SQLite statically-linked. You can use it in a venv without affecting any other package. You can use it like this: from pysqlite3 import dbapi2 as sqlite3 print(sqlite3.sqlite_version) If you happen to be using peewee, it will pick it up automatically. If you need a specific version of SQLite, you can build this package, as suggested in Aaron Digulla's answer. A: Python can't use the sqlite3 binary directly. It always uses a module which is linked against the sqlite3 shared library. That means you have to follow the instructions in "How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?" to create a version of the pysqlite module in your virtualenv. You can then use this import from pysqlite2 import dbapi2 as sqlite to shadow the system's default sqlite module with the new one. Another option would be to get Python's source code, compile everything and copy the file sqlite.so into your virtualenv. The drawback of this approach is that it's brittle and hard to repeat by other people. A: Try this easy route first if you're on windows: grab the dll from the sqlite download page (it will be under the "Precompiled Binaries for Windows" heading) and add it to Anaconda's dll path (like C:\Users\YourUserName\Anaconda3\DLLs). The new versions there come with goodies like FTS5 already enabled. If you're on Linux, then refer to Install Python and Sqlite from Source and Compiling SQLite for use with Python Applications
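Because the question mentions a standalone SQLAlchemy setup, one hedged sketch of wiring a newer SQLite module into it (the database path is a placeholder; this assumes the pysqlite3 package from the first answer is installed):

from pysqlite3 import dbapi2 as sqlite3
from sqlalchemy import create_engine

engine = create_engine("sqlite:///db.sqlite3", module=sqlite3)   # use pysqlite3 as the DBAPI module
print(sqlite3.sqlite_version)                                    # version of the bundled SQLite library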
Can't change system default SQLite binary
Using Django and SQLite, I want to run the most recent SQLite version, meaning the most recent SQLite binary, not the SQLite Python library. I have an SQLite binary that is not the system default, and I can't change the default version. I'm not using Django's ORM but have replaced it with a standalone SQLAlchemy setup. Related (but that question is about running the most recent Python SQLite library).
[ "The simplest option if you just need a recent version of SQLite from python is now to install the pysqlite3 package:\npip install pysqlite3-binary\n\nThis comes with a recent version of SQLite statically-linked. You can use it in a venv without affecting any other package.\nYou can use it like this:\nfrom pysqlite3 import dbapi2 as sqlite3\nprint(sqlite3.sqlite_version)\n\nIf you happen to be using peewee, it will pick it up automatically.\nIf you need a specific version of SQLite, you can build this package, as suggested in Aaron Digulla's answer.\n", "Python can't use the sqlite3 binary directly. It always uses a module which is linked against the sqlite3 shared library. That means you have to follow the instructions in \"How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?\" to create a version of the pysqlite module in your virtualenv.\nYou can then use this import\nfrom pysqlite2 import dbapi2 as sqlite\n\nto shadow the system's default sqlite module with the new one.\nAnother option would be to get Python's source code, compile everything and copy the file sqlite.so into your virtualenv. The drawback of this approach is that it's brittle and hard to repeat by other people.\n", "Try this easy route first if you're on windows: grab the dll from the sqlite download page (it will be under the \"Precompiled Binaries for Windows\" heading) and add it to Anaconda's dll path (like C:\\Users\\YourUserName\\Anaconda3\\DLLs). The new versions there come with goodies like FTS5 already enabled. \nIf you're on Linux, then refer to Install Python and Sqlite from Source and Compiling SQLite for use with Python Applications\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "sqlalchemy", "sqlite" ]
stackoverflow_0029282380_python_sqlalchemy_sqlite.txt
Q: How to subtract time in python? I'm using Python 3.10 and I'm trying to subtract two time values from each other. Now I have tried bunch of ways to do that but getting errors. day_time = timezone.now() day_name = day_time.strftime("%Y-%m-%d %H:%M:%S") end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S') print(end_Time- day_name) error: TypeError: unsupported operand type(s) for -: 'str' and 'str' I also tried: day_time = timezone.now() end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S') print(end_Time- day_time) error: TypeError: can't subtract offset-naive and offset-aware datetimes And this as well: day_time = timezone.now() end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S.000000.00+00') print(end_Time- day_time) error: ValueError: time data '2022-11-27 00:00:00' does not match format '%Y-%m-%d %H:%M:%S.000000.00+00' A: Your second approach is almost right, but you should use day_time = datetime.now() When you subtract you will get a a datetime.timedelta object I think the issue with your second approach is that by using timezone you also have the timezone as part of the datetime, instead of just the date and time. A: You can solve this by using a combination of the datetime and timedelta libraries like this: from datetime import datetime,timedelta day_time = datetime.now() end_time = datetime.now() + timedelta(days= 2,hours =7) result = end_time - day_time print(result) # Output :--> 2 days, 7:00:00.000005 A: I have solved this problem by doing the following approach. #first i get the current time day_time = timezone.now() #second converted that into end_time format day_time = day_time.strftime("%Y-%m-%d %H:%M:%S") #third converted that string again into time object day_time = datetime.strptime(day_time, '%Y-%m-%d %H:%M:%S') end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S')
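A further hedged sketch, not from the answers above: since timezone.now() is timezone-aware, the naive/aware error can also be avoided by making the parsed value aware instead of stripping the timezone information (this assumes Django's timezone utilities, as the question's use of timezone.now() suggests):

from datetime import datetime
from django.utils import timezone

end_time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S')
end_time = timezone.make_aware(end_time)    # attach the current default timezone
remaining = end_time - timezone.now()       # both datetimes are aware, so subtraction works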
How to subtract time in python?
I'm using Python 3.10 and I'm trying to subtract two time values from each other. Now I have tried bunch of ways to do that but getting errors. day_time = timezone.now() day_name = day_time.strftime("%Y-%m-%d %H:%M:%S") end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S') print(end_Time- day_name) error: TypeError: unsupported operand type(s) for -: 'str' and 'str' I also tried: day_time = timezone.now() end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S') print(end_Time- day_time) error: TypeError: can't subtract offset-naive and offset-aware datetimes And this as well: day_time = timezone.now() end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S.000000.00+00') print(end_Time- day_time) error: ValueError: time data '2022-11-27 00:00:00' does not match format '%Y-%m-%d %H:%M:%S.000000.00+00'
[ "Your second approach is almost right, but you should use\nday_time = datetime.now()\n\nWhen you subtract you will get a a datetime.timedelta object\nI think the issue with your second approach is that by using timezone you also have the timezone as part of the datetime, instead of just the date and time.\n", "You can solve this by using a combination of the datetime and timedelta libraries like this:\nfrom datetime import datetime,timedelta\nday_time = datetime.now()\nend_time = datetime.now() + timedelta(days= 2,hours =7)\nresult = end_time - day_time\nprint(result)\n\n# Output :--> 2 days, 7:00:00.000005\n\n", "I have solved this problem by doing the following approach.\n #first i get the current time\n day_time = timezone.now()\n #second converted that into end_time format\n day_time = day_time.strftime(\"%Y-%m-%d %H:%M:%S\")\n #third converted that string again into time object\n day_time = datetime.strptime(day_time, '%Y-%m-%d %H:%M:%S')\n end_Time = datetime.strptime(latest_slots.end_hour, '%Y-%m-%d %H:%M:%S')\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "time" ]
stackoverflow_0074459253_python_time.txt
Q: How can I create and initialize a dictionary/JSON equivalent data structure in JSP? I have the following sample data: { "2022-37" : "2022-09-17 00:00:00.0", "2022-38" : "2022-09-24 00:00:00.0", "2022-39" : "2022-10-01 00:00:00.0", "2022-40" : "2022-10-08 00:00:00.0" } If this was python, I would create a dictionary like this, week_to_date_dict = { "2022-37" : "2022-09-17 00:00:00.0", "2022-38" : "2022-09-24 00:00:00.0", "2022-39" : "2022-10-01 00:00:00.0", "2022-40" : "2022-10-08 00:00:00.0" } and access the data like this: print(week_to_date_dict["2022-28"]) How can I achieve the same thing in a JSP file? I did come across Hashmap but wasn't able to figure out how to initialize the data as above. Thank you. A: You can initialze a HashMap and - only in a JSP - use it the same way: var week_to_date_dict = new java.util.HashMap<String, String>() { { put("2022-37", "2022-09-17 00:00:00.0"); put("2022-38", "2022-09-24 00:00:00.0"); put("2022-39", "2022-10-01 00:00:00.0"); put("2022-40", "2022-10-08 00:00:00.0"); } }; <%= week_to_date_dict["2022-37"] %>
How can I create and initialize a dictionary/JSON equivalent data structure in JSP?
I have the following sample data: { "2022-37" : "2022-09-17 00:00:00.0", "2022-38" : "2022-09-24 00:00:00.0", "2022-39" : "2022-10-01 00:00:00.0", "2022-40" : "2022-10-08 00:00:00.0" } If this was python, I would create a dictionary like this, week_to_date_dict = { "2022-37" : "2022-09-17 00:00:00.0", "2022-38" : "2022-09-24 00:00:00.0", "2022-39" : "2022-10-01 00:00:00.0", "2022-40" : "2022-10-08 00:00:00.0" } and access the data like this: print(week_to_date_dict["2022-28"]) How can I achieve the same thing in a JSP file? I did come across Hashmap but wasn't able to figure out how to initialize the data as above. Thank you.
[ "You can initialze a HashMap and - only in a JSP - use it the same way:\nvar week_to_date_dict = new java.util.HashMap<String, String>() {\n {\n put(\"2022-37\", \"2022-09-17 00:00:00.0\");\n put(\"2022-38\", \"2022-09-24 00:00:00.0\");\n put(\"2022-39\", \"2022-10-01 00:00:00.0\");\n put(\"2022-40\", \"2022-10-08 00:00:00.0\");\n }\n};\n\n<%= week_to_date_dict[\"2022-37\"] %>\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "java", "json", "jsp", "python" ]
stackoverflow_0074457914_dictionary_java_json_jsp_python.txt
Q: How to continue a loop while being in a nested loop? I'm using a loop and a nested loop, and i need the outer loop to stop whenever the second reaches a certain value. for first in range(0,10): for second in range(0,10): print(first + second) But i want it to skip to the next 'first' value if the second value is odd. I tried to do something like this: odd = [1,3,5,7,9] for first in range(0,10): for second in range(0.10): if second in odd: continue But it won't work. A: break breaks out of the inner loop only, which jumps to the code after the inner loop. Assuming that there’s no code immediately after the inner loop, then the current iteration of the outer loop will end and the next one will start. So I think using break instead of continue should do exactly what you want. A: To terminate a loop you should use break. Try using break instead of continue
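A tiny runnable sketch of what both answers describe: break leaves the inner loop, and since nothing follows the inner loop, the outer loop simply continues with its next value (the small ranges are just for illustration):

for first in range(3):
    for second in range(3):
        if second % 2 == 1:  # odd value -> leave the inner loop entirely
            break
        print(first, second)
# prints (0, 0), (1, 0), (2, 0): every odd `second` jumps straight to the next `first`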
How to continue a loop while being in a nested loop?
I'm using a loop and a nested loop, and i need the outer loop to stop whenever the second reaches a certain value. for first in range(0,10): for second in range(0,10): print(first + second) But i want it to skip to the next 'first' value if the second value is odd. I tried to do something like this: odd = [1,3,5,7,9] for first in range(0,10): for second in range(0.10): if second in odd: continue But it won't work.
[ "break breaks out of the inner loop only, which jumps to the after the inner loop. Assuming that there’s no code immediately after the inner loop, then the current iteration of the outer loop will end and the next one will start.\nSo I think using break instead of continue should do exactly what you want.\n", "To terminate a loop you should use break. Try using break instead of continue\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074459602_python_python_3.x.txt
Q: Convert rows to columns and enter 0 and 1 for matches in python I have a pandas data frame like this: and I need to convert it into the below: So, the unique values in column 'camps' part of the original data set, gets turned into columns with values 0 and 1 for each id. How can I achieve this using Pandas or anything else in Python? Any help is greatly appreciated. Thanks Below is the code to create the original data set: import pandas as pd cust = {'id': [212175, 286170, 361739, 297438, 415712, 415777, 261700, 314624, 170365, 333254], 'camps': [':_Camp1_CC SC_Statemt_CCSTMT', ':_Camp1_CC SC_Statemt_CCSTMT', ':_Camp1_Free_DS0742203H_BD03', ':_Camp1_Over_EO3982112A_BD07', ':_Camp1_Over_EO4022202A_BD16', ':_Camp1_Over_EO4022202A_BD16', ':_Camp1_AS07_DS0722204H_DD02', ':_Camp1_AS07_DS0722204H_DD02', ':_Camp1_AS07_DS0722204H_DD02', ':_Camp1_AS07_DS0722204H_DD02']} cust_df = pd.DataFrame(cust) A: A possible solution, based on pd.crosstab: pd.crosstab(cust_df.id, cust_df.camps) Output: camps :_Camp1_AS07_DS0722204H_DD02 ... :_Camp1_Over_EO4022202A_BD16 id ... 170365 1 ... 0 212175 0 ... 0 261700 1 ... 0 286170 0 ... 0 297438 0 ... 0 314624 1 ... 0 333254 1 ... 0 361739 0 ... 0 415712 0 ... 1 415777 0 ... 1 A: try pd.get_dummies() for your reference: https://www.geeksforgeeks.org/how-to-create-dummy-variables-in-python-with-pandas/
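For the pd.get_dummies route suggested in the second answer, a hedged sketch that yields the same 0/1 layout as the crosstab; the groupby('id').max() step only matters if an id can occur more than once:

import pandas as pd

dummies = pd.get_dummies(cust_df, columns=['camps'], prefix='', prefix_sep='')
result = dummies.groupby('id').max().reset_index()  # one row per id, 0/1 per camp value
print(result)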
Convert rows to columns and enter 0 and 1 for matches in python
I have a pandas data frame like this: and I need to convert it into the below: So, the unique values in column 'camps' part of the original data set, gets turned into columns with values 0 and 1 for each id. How can I achieve this using Pandas or anything else in Python? Any help is greatly appreciated. Thanks Below is the code to create the original data set: import pandas as pd cust = {'id': [212175, 286170, 361739, 297438, 415712, 415777, 261700, 314624, 170365, 333254], 'camps': [':_Camp1_CC SC_Statemt_CCSTMT', ':_Camp1_CC SC_Statemt_CCSTMT', ':_Camp1_Free_DS0742203H_BD03', ':_Camp1_Over_EO3982112A_BD07', ':_Camp1_Over_EO4022202A_BD16', ':_Camp1_Over_EO4022202A_BD16', ':_Camp1_AS07_DS0722204H_DD02', ':_Camp1_AS07_DS0722204H_DD02', ':_Camp1_AS07_DS0722204H_DD02', ':_Camp1_AS07_DS0722204H_DD02']} cust_df = pd.DataFrame(cust)
[ "A possible solution, based on pd.crosstab:\npd.crosstab(cust_df.id, cust_df.camps)\n\nOutput:\ncamps :_Camp1_AS07_DS0722204H_DD02 ... :_Camp1_Over_EO4022202A_BD16\nid ... \n170365 1 ... 0\n212175 0 ... 0\n261700 1 ... 0\n286170 0 ... 0\n297438 0 ... 0\n314624 1 ... 0\n333254 1 ... 0\n361739 0 ... 0\n415712 0 ... 1\n415777 0 ... 1\n\n", "try\npd.get_dummies()\n\nfor your reference:\nhttps://www.geeksforgeeks.org/how-to-create-dummy-variables-in-python-with-pandas/\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "pandas", "pivot", "python" ]
stackoverflow_0074459402_dataframe_pandas_pivot_python.txt
Q: How to skip a header in the next loop of writing data in a csv file - Python I am writing a script that downloads data from a JSON file (temperature readings together with a datetime) and saves the data to a CSV file. The script is scheduled to run every minute to download new data from the JSON source. I have a problem with one thing. Running the code writes the data correctly, but each time with a header. The question is how to save the header only once? My script: ... with open('Temp.csv', 'a', newline='') as file: fieldnames = ['Date', 'Temp'] writer = csv.DictWriter(file, fieldnames=fieldnames) writer.writeheader() writer.writerow({'Date':now, 'Temp':Temperatura}) My DataFrame looks like this: (screenshot of the current output) but I want: (screenshot of the desired output) Thanks for help, Dawid A: You need to write the header before the first row. So check the existence of the file to decide on writing the header. import os file_exists = os.path.exists('Temp.csv') with open('Temp.csv', 'a', newline='') as file: fieldnames = ['Date', 'Temp'] writer = csv.DictWriter(file, fieldnames=fieldnames) if not file_exists: writer.writeheader() # So the header is written only once just before the first row writer.writerow({'Date':now, 'Temp':Temperatura}) For the rest rows, since the csv file exists, header is not written again.
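A small variation on the accepted answer that also covers the case where Temp.csv already exists but is still empty (for instance after an interrupted run); now and Temperatura are the variables from the question's script:

import csv
import os

path = 'Temp.csv'
write_header = not os.path.exists(path) or os.path.getsize(path) == 0

with open(path, 'a', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=['Date', 'Temp'])
    if write_header:
        writer.writeheader()  # only happens on the very first (or empty-file) run
    writer.writerow({'Date': now, 'Temp': Temperatura})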
How to skip a header in the next loop of writing data in a csv file - Python
I am writing a script that downloads data from a JSON file (temperature readings together with a datetime) and saves the data to a CSV file. The script is scheduled to run every minute to download new data from the JSON source. I have a problem with one thing. Running the code writes the data correctly, but each time with a header. The question is how to save the header only once? My script: ... with open('Temp.csv', 'a', newline='') as file: fieldnames = ['Date', 'Temp'] writer = csv.DictWriter(file, fieldnames=fieldnames) writer.writeheader() writer.writerow({'Date':now, 'Temp':Temperatura}) My DataFrame looks like this: (screenshot of the current output) but I want: (screenshot of the desired output) Thanks for help, Dawid
[ "You need to write the header before the first row. So check the existence of the file to decide on writing the header.\n\nimport os \n\nfile_exists = os.path.exists('Temp.csv')\n\nwith open('Temp.csv', 'a', newline='') as file:\n fieldnames = ['Date', 'Temp']\n writer = csv.DictWriter(file, fieldnames=fieldnames)\n if not file_exists:\n writer.writeheader() # So the header is written only once just before the first row\n writer.writerow({'Date':now, 'Temp':Temperatura})\n\nFor the rest rows, since the csv file exists, header is not written again.\n" ]
[ 0 ]
[]
[]
[ "csv", "python", "schedule" ]
stackoverflow_0074453744_csv_python_schedule.txt
Q: python3 dataclass with **kwargs(asterisk) Currently I used DTO(Data Transfer Object) like this. class Test1: def __init__(self, user_id: int = None, body: str = None): self.user_id = user_id self.body = body Example code is very small, But when object scale growing up, I have to define every variable. While digging into it, found that python 3.7 supported dataclass Below code is DTO used dataclass. from dataclasses import dataclass @dataclass class Test2: user_id: int body: str In this case, How can I allow pass more argument that does not define into class Test2? If I used Test1, it is easy. Just add **kwargs(asterisk) into __init__ class Test1: def __init__(self, user_id: int = None, body: str = None, **kwargs): self.user_id = user_id self.body = body But using dataclass, Can't found any way to implement it. Is there any solution here? Thanks. EDIT class Test1: def __init__(self, user_id: str = None, body: str = None): self.user_id = user_id self.body = body if __name__ == '__main__': temp = {'user_id': 'hide', 'body': 'body test'} t1 = Test1(**temp) print(t1.__dict__) Result : {'user_id': 'hide', 'body': 'body test'} As you know, I want to insert data with dictionary type -> **temp Reason to using asterisk in dataclass is the same. I have to pass dictinary type to class init. Any idea here? A: The basic use case for dataclasses is to provide a container that maps arguments to attributes. If you have unknown arguments, you can't know the respective attributes during class creation. You can work around it if you know during initialization which arguments are unknown by sending them to a catch-all attribute by hand: from dataclasses import dataclass, field @dataclass class Container: user_id: int body: str meta: field(default_factory=dict) # usage: obligatory_args = {'user_id': 1, 'body': 'foo'} other_args = {'bar': 'baz', 'amount': 10} c = Container(**obligatory_args, meta=other_args) print(c.meta['bar']) # prints: 'baz' But in this case you'll still have a dictionary you need to look into and can't access the arguments by their name, i.e. c.bar doesn't work. If you care about accessing attributes by name, or if you can't distinguish between known and unknown arguments during initialisation, then your last resort without rewriting __init__ (which pretty much defeats the purpose of using dataclasses in the first place) is writing a @classmethod: from dataclasses import dataclass from inspect import signature @dataclass class Container: user_id: int body: str @classmethod def from_kwargs(cls, **kwargs): # fetch the constructor's signature cls_fields = {field for field in signature(cls).parameters} # split the kwargs into native ones and new ones native_args, new_args = {}, {} for name, val in kwargs.items(): if name in cls_fields: native_args[name] = val else: new_args[name] = val # use the native ones to create the class ... ret = cls(**native_args) # ... and add the new ones by hand for new_name, new_val in new_args.items(): setattr(ret, new_name, new_val) return ret Usage: params = {'user_id': 1, 'body': 'foo', 'bar': 'baz', 'amount': 10} Container(**params) # still doesn't work, raises a TypeError c = Container.from_kwargs(**params) print(c.bar) # prints: 'baz' A: Dataclass only relies on the __init__ method so you're free to alter your class in the __new__ method. 
from dataclasses import dataclass @dataclass class Container: user_id: int body: str def __new__(cls, *args, **kwargs): try: initializer = cls.__initializer except AttributeError: # Store the original init on the class in a different place cls.__initializer = initializer = cls.__init__ # replace init with something harmless cls.__init__ = lambda *a, **k: None # code from adapted from Arne added_args = {} for name in list(kwargs.keys()): if name not in cls.__annotations__: added_args[name] = kwargs.pop(name) ret = object.__new__(cls) initializer(ret, **kwargs) # ... and add the new ones by hand for new_name, new_val in added_args.items(): setattr(ret, new_name, new_val) return ret if __name__ == "__main__": params = {'user_id': 1, 'body': 'foo', 'bar': 'baz', 'amount': 10} c = Container(**params) print(c.bar) # prints: 'baz' print(c.body) # prints: 'baz'` A: Here's a neat variation on this I used. from dataclasses import dataclass, field from typing import Optional, Dict @dataclass class MyDataclass: data1: Optional[str] = None data2: Optional[Dict] = None data3: Optional[Dict] = None kwargs: field(default_factory=dict) = None def __post_init__(self): [setattr(self, k, v) for k, v in self.kwargs.items()] This works as below: >>> data = MyDataclass(data1="data1", kwargs={"test": 1, "test2": 2}) >>> data.test 1 >>> data.test2 2 However note that the dataclass does not seem to know that is has these new attributes: >>> from dataclasses import asdict >>> asdict(data) {'data1': 'data1', 'data2': None, 'data3': None, 'kwargs': {'test': 1, 'test2': 2}} This means that the keys have to be known. This worked for my use case and possibly others. A: from dataclasses import make_dataclass Clas = make_dataclass('A', ['d'], namespace={ '__post_init__': lambda self: self.__dict__.update(self.d) }) d = {'a':1, 'b': 2} instance = Clas(d) instance.a A: Variation of answer from Trian Svinit: You could use the following approach: Extra attributes are added via a kwargs argument as such: MyDataclass(xx, yy, kwargs={...}) kwargs is a dataclasses.InitVar that is then processed in the __post_init__ of your dataclass You can access all the values with instance.__dict__ (because asdict would not detect the attributes added via kwargs=... This would only use native features from dataclasses and inheriting this class would still work. 
from dataclasses import InitVar, asdict, dataclass from typing import Dict, Optional @dataclass class MyDataclass: data1: Optional[str] = None data2: Optional[Dict] = None data3: Optional[Dict] = None kwargs: InitVar[Optional[Dict[str, Any]]] = None def __post_init__(self, kwargs: Optional[Dict[str, Any]]) -> None: if kwargs: for k, v in kwargs.items(): setattr(self, k, v) data = MyDataclass(data1="data_nb_1", kwargs={"test1": 1, "test2": 2}) print(data, "-", data.data1, "-", data.test1) # MyDataclass(data1='data_nb_1', data2=None, data3=None) - data1 - 1 print(asdict(data)) # {'data1': 'data_nb_1', 'data2': None, 'data3': None} print(data.__dict__) # {'data1': 'data_nb_1', 'data2': None, 'data3': None, 'test1': 1, 'test2': 2} If you really need to use asdict to get the attributes passed as kwargs, you could start to use private attributes in dataclasses to hack asdict: from dataclasses import _FIELD, _FIELDSInitVar, asdict, dataclass, field from typing import Dict, Optional @dataclass class MyDataclass: data1: Optional[str] = None data2: Optional[Dict] = None data3: Optional[Dict] = None kwargs: InitVar[Optional[Dict[str, Any]]] = None def __post_init__(self, kwargs: Optional[Dict[str, Any]]) -> None: if kwargs: for k, v in kwargs.items(): setattr(self, k, v) self._add_to_asdict(k) def _add_to_asdict(self, attr:str) -> None: """Add an attribute to the list of keys returned by asdict""" f = field(repr=True) f.name = attr f._field_type = _FIELD getattr(self, _FIELDS)[attr] = f data = MyDataclass(data1="data_nb_1", kwargs={"test1": 1, "test2": 2}) print(asdict(data)) # {'data1': 'data_nb_1', 'data2': None, 'data3': None, 'test1': 1, 'test2': 2} A: Based on Arnes Answer, I create a class decorator which extends the dataclass decorator with the from_kwargs method. from dataclasses import dataclass from inspect import signature def dataclass_init_kwargs(cls, *args, **kwargs): cls = dataclass(cls, *args, **kwargs) def from_kwargs(**kwargs): cls_fields = {field for field in signature(cls).parameters} native_arg_keys = cls_fields & set(kwargs.keys()) native_args = {k: kwargs[k] for k in native_arg_keys} ret = cls(**native_args) return ret setattr(cls, 'from_kwargs', from_kwargs) return cls A: All of these changes are well-meaning but pretty clearly against the spirit of dataclasses, which is to avoid writing a bunch of boilerplate to set up a class. Python 3.10 introduces the match statement and with it dataclasses get a match_args=True default argument in the constructor (i.e. the decorator). This means that you get a dunder attribute __match_args__ which stores a tuple of the init (kw)args, importantly without runtime inspection. So you can just create a classmethod from dataclasses import dataclass @dataclass class A: a: int b: int = 0 def from_kwargs(cls, **kwargs: dict) -> A: return cls(**{k: kwargs[k] for k in kwargs if k in cls.__match_args__}) It works: >>> A.from_kwargs(a=1, b=2, c=3) A(a=1, b=2) >>> A.from_kwargs(a=1) A(a=1, b=0) However we also have access to these same keys in Python 3.9 thanks to __dataclass_fields__, which is the next best option if you can’t rely on the 3.10 runtime. def from_kwargs(cls, **kwargs: dict) -> A: return cls(**{k: kwargs[k] for k in kwargs if k in cls.__dataclass_fields__}) This gives the same result. For the (unusual but reasonable!) 
use case in the question, you can just change the class method to pop rather than access the kwargs dict when building the init_kw dict, so that the remaining keys will be left over in kwargs and can be passed as their own kwarg, rest. from dataclasses import dataclass @dataclass class A: a: int b: int = 0 rest: dict = {} def from_kwargs(cls, **kwargs: dict) -> A: init_kw = {k: kwargs.pop(k) for k in dict(kwargs) if k in cls.__match_args__} return cls(**init_kw, rest=kwargs) Note that you have to wrap the kwargs in a call to dict (make a copy) to avoid the error of "dict size changed during iteration"
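If the extra keys only need to be dropped, as in the question's **temp example, a very small sketch using dataclasses.fields avoids touching __init__ or __new__ at all:

from dataclasses import dataclass, fields

@dataclass
class Test2:
    user_id: str = None
    body: str = None

temp = {'user_id': 'hide', 'body': 'body test', 'extra': 'ignored'}
known = {f.name for f in fields(Test2)}
t2 = Test2(**{k: v for k, v in temp.items() if k in known})
print(t2)  # Test2(user_id='hide', body='body test')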
python3 dataclass with **kwargs(asterisk)
Currently I used DTO(Data Transfer Object) like this. class Test1: def __init__(self, user_id: int = None, body: str = None): self.user_id = user_id self.body = body Example code is very small, But when object scale growing up, I have to define every variable. While digging into it, found that python 3.7 supported dataclass Below code is DTO used dataclass. from dataclasses import dataclass @dataclass class Test2: user_id: int body: str In this case, How can I allow pass more argument that does not define into class Test2? If I used Test1, it is easy. Just add **kwargs(asterisk) into __init__ class Test1: def __init__(self, user_id: int = None, body: str = None, **kwargs): self.user_id = user_id self.body = body But using dataclass, Can't found any way to implement it. Is there any solution here? Thanks. EDIT class Test1: def __init__(self, user_id: str = None, body: str = None): self.user_id = user_id self.body = body if __name__ == '__main__': temp = {'user_id': 'hide', 'body': 'body test'} t1 = Test1(**temp) print(t1.__dict__) Result : {'user_id': 'hide', 'body': 'body test'} As you know, I want to insert data with dictionary type -> **temp Reason to using asterisk in dataclass is the same. I have to pass dictinary type to class init. Any idea here?
[ "The basic use case for dataclasses is to provide a container that maps arguments to attributes. If you have unknown arguments, you can't know the respective attributes during class creation.\nYou can work around it if you know during initialization which arguments are unknown by sending them to a catch-all attribute by hand:\nfrom dataclasses import dataclass, field\n\n\n@dataclass\nclass Container:\n user_id: int\n body: str\n meta: field(default_factory=dict)\n\n\n# usage:\nobligatory_args = {'user_id': 1, 'body': 'foo'}\nother_args = {'bar': 'baz', 'amount': 10}\nc = Container(**obligatory_args, meta=other_args)\nprint(c.meta['bar']) # prints: 'baz'\n\nBut in this case you'll still have a dictionary you need to look into and can't access the arguments by their name, i.e. c.bar doesn't work.\n\nIf you care about accessing attributes by name, or if you can't distinguish between known and unknown arguments during initialisation, then your last resort without rewriting __init__ (which pretty much defeats the purpose of using dataclasses in the first place) is writing a @classmethod:\nfrom dataclasses import dataclass\nfrom inspect import signature\n\n\n@dataclass\nclass Container:\n user_id: int\n body: str\n\n @classmethod\n def from_kwargs(cls, **kwargs):\n # fetch the constructor's signature\n cls_fields = {field for field in signature(cls).parameters}\n\n # split the kwargs into native ones and new ones\n native_args, new_args = {}, {}\n for name, val in kwargs.items():\n if name in cls_fields:\n native_args[name] = val\n else:\n new_args[name] = val\n\n # use the native ones to create the class ...\n ret = cls(**native_args)\n\n # ... and add the new ones by hand\n for new_name, new_val in new_args.items():\n setattr(ret, new_name, new_val)\n return ret\n\nUsage:\nparams = {'user_id': 1, 'body': 'foo', 'bar': 'baz', 'amount': 10}\nContainer(**params) # still doesn't work, raises a TypeError \nc = Container.from_kwargs(**params)\nprint(c.bar) # prints: 'baz'\n\n", "Dataclass only relies on the __init__ method so you're free to alter your class in the __new__ method.\nfrom dataclasses import dataclass\n\n\n@dataclass\nclass Container:\n user_id: int\n body: str\n\n def __new__(cls, *args, **kwargs):\n try:\n initializer = cls.__initializer\n except AttributeError:\n # Store the original init on the class in a different place\n cls.__initializer = initializer = cls.__init__\n # replace init with something harmless\n cls.__init__ = lambda *a, **k: None\n\n # code from adapted from Arne\n added_args = {}\n for name in list(kwargs.keys()):\n if name not in cls.__annotations__:\n added_args[name] = kwargs.pop(name)\n\n ret = object.__new__(cls)\n initializer(ret, **kwargs)\n # ... 
and add the new ones by hand\n for new_name, new_val in added_args.items():\n setattr(ret, new_name, new_val)\n\n return ret\n\n\nif __name__ == \"__main__\":\n params = {'user_id': 1, 'body': 'foo', 'bar': 'baz', 'amount': 10}\n c = Container(**params)\n print(c.bar) # prints: 'baz'\n print(c.body) # prints: 'baz'`\n\n", "Here's a neat variation on this I used.\nfrom dataclasses import dataclass, field\nfrom typing import Optional, Dict\n\n\n@dataclass\nclass MyDataclass:\n data1: Optional[str] = None\n data2: Optional[Dict] = None\n data3: Optional[Dict] = None\n\n kwargs: field(default_factory=dict) = None\n\n def __post_init__(self):\n [setattr(self, k, v) for k, v in self.kwargs.items()]\n\nThis works as below:\n>>> data = MyDataclass(data1=\"data1\", kwargs={\"test\": 1, \"test2\": 2})\n>>> data.test\n1\n>>> data.test2\n2\n\nHowever note that the dataclass does not seem to know that is has these new attributes:\n>>> from dataclasses import asdict\n>>> asdict(data)\n{'data1': 'data1', 'data2': None, 'data3': None, 'kwargs': {'test': 1, 'test2': 2}}\n\nThis means that the keys have to be known. This worked for my use case and possibly others.\n", "from dataclasses import make_dataclass\nClas = make_dataclass('A', \n ['d'], \n namespace={\n '__post_init__': lambda self: self.__dict__.update(self.d)\n })\nd = {'a':1, 'b': 2}\ninstance = Clas(d)\ninstance.a\n\n", "Variation of answer from Trian Svinit:\nYou could use the following approach:\n\nExtra attributes are added via a kwargs argument as such: MyDataclass(xx, yy, kwargs={...})\nkwargs is a dataclasses.InitVar that is then processed in the __post_init__ of your dataclass\nYou can access all the values with instance.__dict__ (because asdict would not detect the attributes added via kwargs=...\n\nThis would only use native features from dataclasses and inheriting this class would still work.\nfrom dataclasses import InitVar, asdict, dataclass\nfrom typing import Dict, Optional\n\n\n@dataclass\nclass MyDataclass:\n data1: Optional[str] = None\n data2: Optional[Dict] = None\n data3: Optional[Dict] = None\n\n kwargs: InitVar[Optional[Dict[str, Any]]] = None\n\n def __post_init__(self, kwargs: Optional[Dict[str, Any]]) -> None:\n if kwargs:\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n\ndata = MyDataclass(data1=\"data_nb_1\", kwargs={\"test1\": 1, \"test2\": 2})\nprint(data, \"-\", data.data1, \"-\", data.test1)\n# MyDataclass(data1='data_nb_1', data2=None, data3=None) - data1 - 1\nprint(asdict(data))\n# {'data1': 'data_nb_1', 'data2': None, 'data3': None}\nprint(data.__dict__)\n# {'data1': 'data_nb_1', 'data2': None, 'data3': None, 'test1': 1, 'test2': 2}\n\n\nIf you really need to use asdict to get the attributes passed as kwargs, you could start to use private attributes in dataclasses to hack asdict:\nfrom dataclasses import _FIELD, _FIELDSInitVar, asdict, dataclass, field\nfrom typing import Dict, Optional\n\n\n@dataclass\nclass MyDataclass:\n data1: Optional[str] = None\n data2: Optional[Dict] = None\n data3: Optional[Dict] = None\n\n kwargs: InitVar[Optional[Dict[str, Any]]] = None\n\n def __post_init__(self, kwargs: Optional[Dict[str, Any]]) -> None:\n if kwargs:\n for k, v in kwargs.items():\n setattr(self, k, v)\n self._add_to_asdict(k)\n\n def _add_to_asdict(self, attr:str) -> None:\n \"\"\"Add an attribute to the list of keys returned by asdict\"\"\"\n f = field(repr=True)\n f.name = attr\n f._field_type = _FIELD\n getattr(self, _FIELDS)[attr] = f\n\ndata = MyDataclass(data1=\"data_nb_1\", kwargs={\"test1\": 1, 
\"test2\": 2})\nprint(asdict(data))\n# {'data1': 'data_nb_1', 'data2': None, 'data3': None, 'test1': 1, 'test2': 2}\n\n", "Based on Arnes Answer, I create a class decorator which extends the dataclass decorator with the from_kwargs method.\nfrom dataclasses import dataclass\nfrom inspect import signature\n\n\ndef dataclass_init_kwargs(cls, *args, **kwargs):\n cls = dataclass(cls, *args, **kwargs)\n\n def from_kwargs(**kwargs):\n cls_fields = {field for field in signature(cls).parameters}\n native_arg_keys = cls_fields & set(kwargs.keys())\n native_args = {k: kwargs[k] for k in native_arg_keys}\n ret = cls(**native_args)\n return ret\n\n setattr(cls, 'from_kwargs', from_kwargs)\n return cls\n\n\n", "All of these changes are well-meaning but pretty clearly against the spirit of dataclasses, which is to avoid writing a bunch of boilerplate to set up a class.\nPython 3.10 introduces the match statement and with it dataclasses get a match_args=True default argument in the constructor (i.e. the decorator).\nThis means that you get a dunder attribute __match_args__ which stores a tuple of the init (kw)args, importantly without runtime inspection.\nSo you can just create a classmethod\nfrom dataclasses import dataclass\n\n@dataclass\nclass A:\n a: int\n b: int = 0\n \n def from_kwargs(cls, **kwargs: dict) -> A:\n return cls(**{k: kwargs[k] for k in kwargs if k in cls.__match_args__})\n\nIt works:\n>>> A.from_kwargs(a=1, b=2, c=3)\nA(a=1, b=2)\n>>> A.from_kwargs(a=1)\nA(a=1, b=0)\n\nHowever we also have access to these same keys in Python 3.9 thanks to __dataclass_fields__, which is the next best option if you can’t rely on the 3.10 runtime.\n def from_kwargs(cls, **kwargs: dict) -> A:\n return cls(**{k: kwargs[k] for k in kwargs if k in cls.__dataclass_fields__})\n\nThis gives the same result.\nFor the (unusual but reasonable!) use case in the question, you can just change the class method to pop rather than access the kwargs dict when building the init_kw dict, so that the remaining keys will be left over in kwargs and can be passed as their own kwarg, rest.\nfrom dataclasses import dataclass\n\n@dataclass\nclass A:\n a: int\n b: int = 0\n rest: dict = {}\n \n def from_kwargs(cls, **kwargs: dict) -> A:\n init_kw = {k: kwargs.pop(k) for k in dict(kwargs) if k in cls.__match_args__}\n return cls(**init_kw, rest=kwargs)\n\nNote that you have to wrap the kwargs in a call to dict (make a copy) to avoid the error of \"dict size changed during iteration\"\n" ]
[ 21, 10, 4, 1, 1, 0, 0 ]
[]
[]
[ "python", "python_3.7", "python_dataclasses" ]
stackoverflow_0055099243_python_python_3.7_python_dataclasses.txt
Q: youtube upload video, 403 when requesting None returned "Request had insufficient authentication scopes." I am trying to upload a video to youtube using python. Using the example code from https://developers.google.com/youtube/v3/docs/videos/insert I am getting an error while uploading. class YTService(): def __init__(self, credentials): # Youtube Credential self._YOUTUBE_SERVICE = build( YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, http=credentials.authorize(httplib2.Http())) def initialize_upload(self, buffer): if not buffer: return tags = None if _YOUTUBE_OPTIONS.get('keywords'): tags = _YOUTUBE_OPTIONS.get('keywords').split(',') body = dict( snippet=dict( title=_YOUTUBE_OPTIONS.get('title'), description=_YOUTUBE_OPTIONS.get('description'), tags=tags, categoryId=_YOUTUBE_OPTIONS.get('category')), status=dict(privacyStatus=_YOUTUBE_OPTIONS.get('privacyStatus'))) print("{0} in initialize_upload".format(body)) # Call the API's videos.insert method to create and upload the video. insert_request = self._YOUTUBE_SERVICE.videos().insert( part=','.join(body.keys()), body=body, media_body=MediaInMemoryUpload( buffer, mimetype='video/mp4', chunksize=-1, resumable=True)) return self.resumable_upload(insert_request) # This method implements an exponential backoff strategy to resume a # failed upload. def resumable_upload(self, insert_request): response = None error = None retry = 0 youtube_id = None while response is None: try: _, response = insert_request.next_chunk() print("response : {0} in resumable_upload".format(response)) if response is not None: if 'id' in response: youtube_id = response['id'] return youtube_id else: exit("The upload failed with an unexpected response: %s" % response) except HttpError as e: if e.resp.status in RETRIABLE_STATUS_CODES: error = 'A retriable HTTP error {} occurred:\n{}'.format( e.resp.status, e.content) else: raise except RETRIABLE_EXCEPTIONS as e: error = 'A retriable error occurred: {}'.format(e) if error is not None: retry += 1 if retry > MAX_RETRIES: raise ValueError('No longer attempting to retry.') max_sleep = 2 * retry sleep_seconds = random.random() * max_sleep print('YouTube sleeping {} seconds and then retrying...'.format( sleep_seconds)) time.sleep(sleep_seconds) return youtube_id def get_youtube_id(self, buffer, title): if len(title) > 100: title = title[0:100] if title: _YOUTUBE_OPTIONS['title'] = title return self.initialize_upload(buffer) <HttpError 403 when requesting None returned "Request had insufficient authentication scopes.". Details: "[{'message': 'Insufficient Permission', 'domain': 'global', 'reason': 'insufficientPermissions'}] It is accessed using the ouath2.0 client and the scope is ['https://www.googleapis.com/auth/youtube.upload', 'https://www.googleapis.com/auth/youtube' 'https://www.googleapis.com/auth/youtubepartner' 'https://www.googleapis.com/auth/youtube.force-ssl'] I put 4 of them. What could be the problem? hope the video is uploaded A: Request had insufficient authentication scopes. Means that your application has not requested the proper consent from the user The videos.insert method requires that your application use an access token which has been authorized with one of the following scopes Now you have stated that you have those scopes in your code. May i sugest that you changed the scopes in your code and added these after you had already authorized the user, with a lower level scope. You need to remove your old authorization and request authorization of the user again using the new scopes.
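A hedged sketch of redoing the consent step with the broader scopes, assuming the google-auth-oauthlib package and a downloaded client_secret.json; any previously stored token file must be deleted first. (Also note that in the scope list shown above some commas are missing, so Python silently concatenates the adjacent strings into one invalid scope.) build(..., credentials=...) replaces the older credentials.authorize(httplib2.Http()) pattern when google-auth credentials are used:

from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = [
    'https://www.googleapis.com/auth/youtube.upload',
    'https://www.googleapis.com/auth/youtube.force-ssl',
]

flow = InstalledAppFlow.from_client_secrets_file('client_secret.json', SCOPES)
credentials = flow.run_local_server(port=0)  # opens the browser consent screen again

youtube = build('youtube', 'v3', credentials=credentials)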
youtube upload video, 403 when requesting None returned "Request had insufficient authentication scopes."
I am trying to upload a video to youtube using python. Using the example code from https://developers.google.com/youtube/v3/docs/videos/insert I am getting an error while uploading. class YTService(): def __init__(self, credentials): # Youtube Credential self._YOUTUBE_SERVICE = build( YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, http=credentials.authorize(httplib2.Http())) def initialize_upload(self, buffer): if not buffer: return tags = None if _YOUTUBE_OPTIONS.get('keywords'): tags = _YOUTUBE_OPTIONS.get('keywords').split(',') body = dict( snippet=dict( title=_YOUTUBE_OPTIONS.get('title'), description=_YOUTUBE_OPTIONS.get('description'), tags=tags, categoryId=_YOUTUBE_OPTIONS.get('category')), status=dict(privacyStatus=_YOUTUBE_OPTIONS.get('privacyStatus'))) print("{0} in initialize_upload".format(body)) # Call the API's videos.insert method to create and upload the video. insert_request = self._YOUTUBE_SERVICE.videos().insert( part=','.join(body.keys()), body=body, media_body=MediaInMemoryUpload( buffer, mimetype='video/mp4', chunksize=-1, resumable=True)) return self.resumable_upload(insert_request) # This method implements an exponential backoff strategy to resume a # failed upload. def resumable_upload(self, insert_request): response = None error = None retry = 0 youtube_id = None while response is None: try: _, response = insert_request.next_chunk() print("response : {0} in resumable_upload".format(response)) if response is not None: if 'id' in response: youtube_id = response['id'] return youtube_id else: exit("The upload failed with an unexpected response: %s" % response) except HttpError as e: if e.resp.status in RETRIABLE_STATUS_CODES: error = 'A retriable HTTP error {} occurred:\n{}'.format( e.resp.status, e.content) else: raise except RETRIABLE_EXCEPTIONS as e: error = 'A retriable error occurred: {}'.format(e) if error is not None: retry += 1 if retry > MAX_RETRIES: raise ValueError('No longer attempting to retry.') max_sleep = 2 * retry sleep_seconds = random.random() * max_sleep print('YouTube sleeping {} seconds and then retrying...'.format( sleep_seconds)) time.sleep(sleep_seconds) return youtube_id def get_youtube_id(self, buffer, title): if len(title) > 100: title = title[0:100] if title: _YOUTUBE_OPTIONS['title'] = title return self.initialize_upload(buffer) <HttpError 403 when requesting None returned "Request had insufficient authentication scopes.". Details: "[{'message': 'Insufficient Permission', 'domain': 'global', 'reason': 'insufficientPermissions'}] It is accessed using the ouath2.0 client and the scope is ['https://www.googleapis.com/auth/youtube.upload', 'https://www.googleapis.com/auth/youtube' 'https://www.googleapis.com/auth/youtubepartner' 'https://www.googleapis.com/auth/youtube.force-ssl'] I put 4 of them. What could be the problem? hope the video is uploaded
[ "\nRequest had insufficient authentication scopes.\n\nMeans that your application has not requested the proper consent from the user\nThe videos.insert method requires that your application use an access token which has been authorized with one of the following scopes\n\nNow you have stated that you have those scopes in your code. May i sugest that you changed the scopes in your code and added these after you had already authorized the user, with a lower level scope.\nYou need to remove your old authorization and request authorization of the user again using the new scopes.\n" ]
[ 0 ]
[]
[]
[ "python", "youtube_api", "youtube_data_api" ]
stackoverflow_0074459159_python_youtube_api_youtube_data_api.txt
Q: Python arrays for solving equations I am writing a simple solver for the heat equation to get used to the python programming language. The code I have is the following: for i in range(1,m): c=gamma*p*(q[i-1]+q[i]) rhs=np.matmul(B,np.transpose(u[i-1,:]))+np.transpose(c) sol=np.linalg.solve(A,rhs[0]) u[i,:]=np.transpose(sol) print('Simulation Complete!') The problem I have is with understanding the matrix structure. The usual programming language I use, Matlab, there is a very strict way you deal with arrays, just like maths, you have to be careful with your dimensions. It seems that this isn't the case with python. In the final few lines of my code, I've been treating both row and column vectors as different, but this has been too cumbersome, is there a way I can make these things more efficient? At the moment I seem to be treating everything as an array and to properly get a vector from the linear algebra solver, I have to choose the first element. Can I be a bit looser with my inputs, or can I get more rigorous with how I define my matrices, so keep tabs on if they're row of column vectors? A: in numpy you can define column vectors, row vectors and 2D matrices. eg: >>> np.ones((1,4)) array([[1., 1., 1., 1.]]) >>> np.ones((4,1)) array([[1.], [1.], [1.], [1.]]) >>> np.ones((4,4)) array([[1., 1., 1., 1.], [1., 1., 1., 1.], [1., 1., 1., 1.], [1., 1., 1., 1.]]) You can do: >>> np.dot(2*np.ones((1,4)), np.ones((4,4))) array([[8., 8., 8., 8.]]) You can also do >>> np.dot(np.ones((4,4)), 2*np.ones((4,1))) array([[8.], [8.], [8.], [8.]]) But if you try to do >>> np.dot(np.ones((4,4)), 2*np.ones((1,4))) or >>> np.dot(2*np.ones((4,1)), np.ones((4,4))) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<__array_function__ internals>", line 5, in dot ValueError: shapes (4,1) and (4,4) not aligned: 1 (dim 1) != 4 (dim 0) So as you can see, dimensions are taken in account, and some operations are not allowed if the dimensions of the vectors involved don't match
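A hedged illustration of one way to stay rigorous about shapes: keep every vector 1-D. It reuses the names from the question (A, B, u, q, gamma, p, m) and assumes u is a 2-D array whose rows are time steps, so no transposes or rhs[0] indexing are needed:

import numpy as np

for i in range(1, m):
    c = gamma * p * (q[i-1] + q[i])   # 1-D vector
    rhs = B @ u[i-1] + c              # previous row of u has shape (n,), result is 1-D
    u[i] = np.linalg.solve(A, rhs)    # solve returns shape (n,), assign straight back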
Python arrays for solving equations
I am writing a simple solver for the heat equation to get used to the python programming language. The code I have is the following: for i in range(1,m): c=gamma*p*(q[i-1]+q[i]) rhs=np.matmul(B,np.transpose(u[i-1,:]))+np.transpose(c) sol=np.linalg.solve(A,rhs[0]) u[i,:]=np.transpose(sol) print('Simulation Complete!') The problem I have is with understanding the matrix structure. The usual programming language I use, Matlab, there is a very strict way you deal with arrays, just like maths, you have to be careful with your dimensions. It seems that this isn't the case with python. In the final few lines of my code, I've been treating both row and column vectors as different, but this has been too cumbersome, is there a way I can make these things more efficient? At the moment I seem to be treating everything as an array and to properly get a vector from the linear algebra solver, I have to choose the first element. Can I be a bit looser with my inputs, or can I get more rigorous with how I define my matrices, so keep tabs on if they're row of column vectors?
[ "in numpy you can define column vectors, row vectors and 2D matrices.\neg:\n>>> np.ones((1,4))\narray([[1., 1., 1., 1.]])\n\n>>> np.ones((4,1))\narray([[1.],\n [1.],\n [1.],\n [1.]])\n\n>>> np.ones((4,4))\narray([[1., 1., 1., 1.],\n [1., 1., 1., 1.],\n [1., 1., 1., 1.],\n [1., 1., 1., 1.]])\n\nYou can do:\n>>> np.dot(2*np.ones((1,4)), np.ones((4,4)))\narray([[8., 8., 8., 8.]])\n\nYou can also do\n>>> np.dot(np.ones((4,4)), 2*np.ones((4,1)))\narray([[8.],\n [8.],\n [8.],\n [8.]])\n\nBut if you try to do\n>>> np.dot(np.ones((4,4)), 2*np.ones((1,4)))\n\nor \n\n>>> np.dot(2*np.ones((4,1)), np.ones((4,4)))\n\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<__array_function__ internals>\", line 5, in dot\nValueError: shapes (4,1) and (4,4) not aligned: 1 (dim 1) != 4 (dim 0)\n\nSo as you can see, dimensions are taken in account, and some operations are not allowed if the dimensions of the vectors involved don't match\n" ]
[ 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074459636_arrays_numpy_python.txt
Q: How minimize more equality constraints than independant variables in scipy I have this minimization problem: import numpy as np from scipy.optimize import Bounds, minimize, fmin_cobyla, linprog A = \ np.array([[ 0.106667, 0.1333, 0.1333, 0.01], [ 0.02, 0.6667, 0.1333, 0.12], [0.0933, 0.06667, 0.6, 0.01]]) B = \ np.array([[27], [57], [28]]) l = \ np.array([[100], [40], [10], [50]]) u = \ np.array([[200], [80], [20], [150]]) def objfun(x): return abs(np.sum(x - (u+l)/2)) x0 = \ np.array([[150], [60], [15], [100]]) bounds = Bounds(l, u) eq_cons1 = {'type': 'eq', 'fun': lambda x: np.matmul(A[0,:],x)-B[0]} eq_cons2 = {'type': 'eq', 'fun': lambda x: np.matmul(A[1,:],x)-B[1]} eq_cons3 = {'type': 'eq', 'fun': lambda x: np.matmul(A[2,:],x)-B[2]} eq_cons = {'type': 'eq', 'fun': lambda x: x%1} res = minimize(objfun, x0, method='SLSQP', constraints=[eq_cons1, eq_cons2, eq_cons3, eq_cons], bounds=bounds) but adding the last constraint eq_cons = {'type': 'eq', 'fun': lambda x: x%1} makes it fail with the following error More equality constraints than independent variables (Exit mode 2) How can we properly use scipy to solve this kind of problem. I need for example x to be a multiple of a certain constant k. Thats why I'm using modulo. Thanks A: First of all, please note that all your vectors B, l, u, x0 have dimension 2 and should have dimension 1 instead. That's the same mistake as in your last question. Using np.arrays with wrong dimensions for vectors will lead to surprising results due to numpy's broadcasting, so please try to keep an eye on your array dimensions. Some notes in arbitrary order: Assuming your vectors have the right dimension, your first three constraints can be written as a single vectorial constraint function: lambda x: A @ x - B. Here, @ denotes the matrix multiplication operator which calls np.matmul under the hood. This makes your code less verbose. Your initial guess x0 is not feasible and violates your equality constraints. You should be aware that there are some things going wrong from a mathematical point of view. First, neither your objective nor your last constraint function is differentiable. Consequently, your optimization problem is not differentiable and violates the mathematical assumptions of the SLSQP algorithm. This can lead to really odd results in practice. You can easily make at least the objective differentiable by simply minimizing the square of the sum instead of the absolute value. Second, you have an optimization problem of four variables subject to seven equality constraints. Note that your modulo constraint is a vectorial constraint, i.e. it counts as four equality constraints. Hence, there's nothing to optimize since the solution to your problem (given that it exists) is already given by your constraints. Third and most importantly, your modulo constraint implies that your optimization variables should be integers, which turns your problem into a mixed-integer nonlinear optimization problem (MINLP) instead of a contiguous nonlinear optimization problem (NLP). However, scipy.optimize.minimize only supports the latter. Long story short, you basically have two options: Rewrite the problem as an integer optimization problem and solve it by a MINLP solver. Note also that your problem can also be formulated as a mixed-integer linear optimization problem (MILP) since the absolute value in your objective and the modulo constraint can be linearized. You can replace the modulo function with a smooth approximation (e.g. 
this one) and solve the MINLP approximately by an NLP. Here, you could add a penalty term to the objective function that penalizes violations of the modulo constraint and thus tries to push the solution towards integral values. Afterwards, you can round the found solution to the nearest integral values.
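A rough sketch of the second option (smooth penalty plus rounding); it assumes A, B, l, u, x0 are the 1-D versions described above, k is the required step size, and rho is a penalty weight that has to be tuned per problem:

import numpy as np
from scipy.optimize import Bounds, minimize

k = 10.0    # hypothetical step size the variables must be a multiple of
rho = 100.0 # penalty weight, problem dependent

def penalized(x):
    base = (np.sum(x - (u + l) / 2)) ** 2          # smooth stand-in for the abs() objective
    penalty = np.sum(np.sin(np.pi * x / k) ** 2)   # zero exactly when x is a multiple of k
    return base + rho * penalty

res = minimize(penalized, x0, method='SLSQP',
               constraints=[{'type': 'eq', 'fun': lambda x: A @ x - B}],
               bounds=Bounds(l, u))
x_int = k * np.round(res.x / k)  # snap the approximate solution to the nearest multiple of k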
How minimize more equality constraints than independant variables in scipy
I have this minimization problem: import numpy as np from scipy.optimize import Bounds, minimize, fmin_cobyla, linprog A = \ np.array([[ 0.106667, 0.1333, 0.1333, 0.01], [ 0.02, 0.6667, 0.1333, 0.12], [0.0933, 0.06667, 0.6, 0.01]]) B = \ np.array([[27], [57], [28]]) l = \ np.array([[100], [40], [10], [50]]) u = \ np.array([[200], [80], [20], [150]]) def objfun(x): return abs(np.sum(x - (u+l)/2)) x0 = \ np.array([[150], [60], [15], [100]]) bounds = Bounds(l, u) eq_cons1 = {'type': 'eq', 'fun': lambda x: np.matmul(A[0,:],x)-B[0]} eq_cons2 = {'type': 'eq', 'fun': lambda x: np.matmul(A[1,:],x)-B[1]} eq_cons3 = {'type': 'eq', 'fun': lambda x: np.matmul(A[2,:],x)-B[2]} eq_cons = {'type': 'eq', 'fun': lambda x: x%1} res = minimize(objfun, x0, method='SLSQP', constraints=[eq_cons1, eq_cons2, eq_cons3, eq_cons], bounds=bounds) but adding the last constraint eq_cons = {'type': 'eq', 'fun': lambda x: x%1} makes it fail with the following error More equality constraints than independent variables (Exit mode 2) How can we properly use scipy to solve this kind of problem. I need for example x to be a multiple of a certain constant k. Thats why I'm using modulo. Thanks
[ "First of all, please note that all your vectors B, l, u, x0 have\ndimension 2 and should have dimension 1 instead. That's the same mistake\nas in your last question. Using np.arrays with wrong dimensions for vectors\nwill lead to surprising results due to numpy's broadcasting, so please\ntry to keep an eye on your array dimensions.\nSome notes in arbitrary order:\n\nAssuming your vectors have the right dimension, your first three constraints can be written as a single vectorial constraint\nfunction: lambda x: A @ x - B. Here, @ denotes the matrix multiplication operator which calls np.matmul under the hood. This makes your code less verbose.\n\nYour initial guess x0 is not feasible and violates your equality constraints.\n\nYou should be aware that there are some things going wrong from a mathematical point of view. First, neither your objective nor your last constraint function is differentiable. Consequently, your optimization problem is not differentiable\nand violates the mathematical assumptions of the SLSQP algorithm. This can\nlead to really odd results in practice. You can easily make at least the objective\ndifferentiable by simply minimizing the square of the sum instead of the absolute value.\n\nSecond, you have an optimization problem of four variables subject to\nseven equality constraints. Note that your modulo constraint is a vectorial\nconstraint, i.e. it counts as four equality constraints. Hence, there's nothing\nto optimize since the solution to your problem (given that it exists) is\nalready given by your constraints.\n\nThird and most importantly, your modulo constraint implies that your optimization variables should be integers, which turns your problem into a mixed-integer nonlinear optimization problem (MINLP) instead of a contiguous nonlinear optimization problem (NLP). However, scipy.optimize.minimize only supports the latter.\n\n\nLong story short, you basically have two options:\n\nRewrite the problem as an integer optimization problem and solve it by a\nMINLP solver. Note also that your problem can also be formulated as a mixed-integer linear optimization problem (MILP) since the absolute value in your objective and the modulo constraint can be linearized.\n\nYou can replace the modulo function with a smooth approximation (e.g. this one) and solve\nthe MINLP approximately by an NLP. Here, you could add a penalty term to the\nobjective function that penalizes violations of the modulo constraint and thus tries to push the solution towards integral values. Afterwards, you can round the found solution to the nearest integral values.\n\n\n" ]
[ 0 ]
[]
[]
[ "constraints", "minimize", "optimization", "python", "scipy" ]
stackoverflow_0074454084_constraints_minimize_optimization_python_scipy.txt
Q: How to get data from requests-cache I have CachedSession(backend='memory', expire_after=timedelta(days=1)) in my code. It works fine. But I want to use data from my cache, which contain in my memory. I looked for in doc, but unfortunately get nothing. Anybody know how to get cache-data? A: You can use the methods of the CachedSession object(cache_session.cache.url) to loop through all the urls which are currently cached. You can use CachedSession.cache.urls to see all URLs currently in the cache: session = CachedSession() print(session.cache.urls) >>> ['https://httpbin.org/get', 'https://httpbin.org/stream/100'] If needed, you can get more details on cached responses via CachedSession.cache.responses, which is a dict-like interface to the cache backend. See CachedResponse for a full list of attributes available. For example, if you wanted to to see all URLs requested with a specific method: post_urls = [ response.url for response in session.cache.responses.values() if response.request.method == 'POST' ] You can also inspect CachedSession.cache.redirects, which maps redirect URLs to keys of the responses they redirect to. Additional keys() and values() wrapper methods are available on BaseCache to get combined keys and responses. print('All responses:') for response in session.cache.values(): print(response) print('All cache keys for redirects and responses combined:') print(list(session.cache.keys()))
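Two small hedged examples of actually reading the cached data (assuming a recent requests-cache version): the simplest route is just to repeat the request, which requests-cache answers from memory and marks with from_cache; alternatively the stored responses can be read directly from session.cache.responses:

from datetime import timedelta
from requests_cache import CachedSession

session = CachedSession(backend='memory', expire_after=timedelta(days=1))
session.get('https://httpbin.org/get')       # first call goes to the network and is cached

r = session.get('https://httpbin.org/get')   # second call is served from the cache
print(r.from_cache, r.json())

for cached in session.cache.responses.values():
    if cached.url == 'https://httpbin.org/get':
        print(cached.json())                 # cached body, no new network request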
How to get data from requests-cache
I have CachedSession(backend='memory', expire_after=timedelta(days=1)) in my code. It works fine. But I want to use data from my cache, which contain in my memory. I looked for in doc, but unfortunately get nothing. Anybody know how to get cache-data?
[ "You can use the methods of the CachedSession object(cache_session.cache.url) to loop through all the urls which are currently cached.\n\nYou can use CachedSession.cache.urls to see all URLs currently in the cache:\n\nsession = CachedSession()\nprint(session.cache.urls)\n>>> ['https://httpbin.org/get', 'https://httpbin.org/stream/100']\n\n\nIf needed, you can get more details on cached responses via CachedSession.cache.responses, which is a dict-like interface to the cache backend. See CachedResponse for a full list of attributes available.\n\n\nFor example, if you wanted to to see all URLs requested with a specific method:\n\npost_urls = [\n response.url for response in session.cache.responses.values()\n if response.request.method == 'POST'\n]\n\n\nYou can also inspect CachedSession.cache.redirects, which maps redirect URLs to keys of the responses they redirect to.\n\n\nAdditional keys() and values() wrapper methods are available on BaseCache to get combined keys and responses.\n\nprint('All responses:')\nfor response in session.cache.values():\n print(response)\n\nprint('All cache keys for redirects and responses combined:')\nprint(list(session.cache.keys()))\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_requests", "request" ]
stackoverflow_0074459073_python_python_requests_request.txt
Q: Pandas and ValueError: time data '0' does not match format I can not figure out why I get the error "ValueError: time data '0' does not match format '%d.%m.%Y %H:%M' (match)" (or ..'%d.%m.%Y'). So, I have a test dataframe: Date DateCP Time kWh DT 0 01.11.2022 01.11.2022 01:00 0.693 01.11.2022 01:00 1 01.11.2022 01.11.2022 02:00 0.675 01.11.2022 02:00 2 01.11.2022 01.11.2022 03:00 1.044 01.11.2022 03:00 to be absolutely sure (following suggestions here) I run on both DateCP and DT: df['DateCP'] = df['DateCP'].apply(unidecode) df['DT'] = df['DT'].apply(unidecode) but issuing df['DateCP'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y') or df['DT'] = pd.to_datetime(df['DT'], format='%d.%m.%Y %H:%M') or df['Date'] = df['Date'].apply(lambda x:datetime.strptime(x,'%d.%m.%Y')) leads to the mentioned error. The conversion happens only if errors='coerce' is added giving the expected result: Date DateCP Time kWh DT 0 01.11.2022 2022-11-01 01:00 0.693 2022-11-01 01:00:00 1 01.11.2022 2022-11-01 02:00 0.675 2022-11-01 02:00:00 2 01.11.2022 2022-11-01 03:00 1.044 2022-11-01 03:00:00 Why the coercion is required? A: Because wrong data. Here is mixed datetimes with number 0, so if add errors='coerce' parameter pandas convert column to datetimes with NaT for not parseable dates. You can check it: print (df) Date DateCP Time kWh DT 0 01.11.2022 01.11.2022 01:00 0.693 01.11.2022 01:00 1 01.11.2022 1 02:00 0.675 01.11.2022 02:00 2 01.11.2022 01.11.2022 03:00 1.044 01.11.2022 03:00 df['new'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y', errors='coerce') print (df.loc[df['new'].isna(), ['DateCP','new']]) DateCP new 1 1 NaT So finally DataFrames looks: df['DateCP'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y', errors='coerce') print (df) Date DateCP Time kWh DT 0 01.11.2022 2022-11-01 01:00 0.693 01.11.2022 01:00 1 01.11.2022 NaT 02:00 0.675 01.11.2022 02:00 2 01.11.2022 2022-11-01 03:00 1.044 01.11.2022 03:00 print (df.dtypes) Date object DateCP datetime64[ns] Time object kWh float64 DT object dtype: object EDIT: If need remove all rows with misisng values use DataFrame.dropna: df['DateCP'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y', errors='coerce') df['Date'] = pd.to_datetime(df['Date'], format='%d.%m.%Y', errors='coerce') df['DT'] = pd.to_datetime(df['DT'], format='%d.%m.%Y %H:%M', errors='coerce') df = df.dropna(subset=['DateCP','Date','DT'])
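If dropping rows is not acceptable, a short sketch for listing the raw offending values first (so they can be corrected upstream); df is the frame from the question and only the coerce trick from the answer is reused:

import pandas as pd

bad_mask = pd.to_datetime(df['DateCP'], format='%d.%m.%Y', errors='coerce').isna()
print(df.loc[bad_mask, 'DateCP'].unique())  # e.g. the stray '0' values that break strptime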
Pandas and ValueError: time data '0' does not match format
I can not figure out why I get the error "ValueError: time data '0' does not match format '%d.%m.%Y %H:%M' (match)" (or ..'%d.%m.%Y'). So, I have a test dataframe: Date DateCP Time kWh DT 0 01.11.2022 01.11.2022 01:00 0.693 01.11.2022 01:00 1 01.11.2022 01.11.2022 02:00 0.675 01.11.2022 02:00 2 01.11.2022 01.11.2022 03:00 1.044 01.11.2022 03:00 to be absolutely sure (following suggestions here) I run on both DateCP and DT: df['DateCP'] = df['DateCP'].apply(unidecode) df['DT'] = df['DT'].apply(unidecode) but issuing df['DateCP'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y') or df['DT'] = pd.to_datetime(df['DT'], format='%d.%m.%Y %H:%M') or df['Date'] = df['Date'].apply(lambda x:datetime.strptime(x,'%d.%m.%Y')) leads to the mentioned error. The conversion happens only if errors='coerce' is added giving the expected result: Date DateCP Time kWh DT 0 01.11.2022 2022-11-01 01:00 0.693 2022-11-01 01:00:00 1 01.11.2022 2022-11-01 02:00 0.675 2022-11-01 02:00:00 2 01.11.2022 2022-11-01 03:00 1.044 2022-11-01 03:00:00 Why the coercion is required?
[ "Because wrong data. Here is mixed datetimes with number 0, so if add errors='coerce' parameter pandas convert column to datetimes with NaT for not parseable dates.\nYou can check it:\nprint (df)\n Date DateCP Time kWh DT\n0 01.11.2022 01.11.2022 01:00 0.693 01.11.2022 01:00\n1 01.11.2022 1 02:00 0.675 01.11.2022 02:00\n2 01.11.2022 01.11.2022 03:00 1.044 01.11.2022 03:00\n\ndf['new'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y', errors='coerce')\n\nprint (df.loc[df['new'].isna(), ['DateCP','new']])\n\n DateCP new\n1 1 NaT\n\nSo finally DataFrames looks:\ndf['DateCP'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y', errors='coerce')\nprint (df)\n\n Date DateCP Time kWh DT\n0 01.11.2022 2022-11-01 01:00 0.693 01.11.2022 01:00\n1 01.11.2022 NaT 02:00 0.675 01.11.2022 02:00\n2 01.11.2022 2022-11-01 03:00 1.044 01.11.2022 03:00\n\nprint (df.dtypes)\nDate object\nDateCP datetime64[ns]\nTime object\nkWh float64\nDT object\ndtype: object\n\nEDIT: If need remove all rows with misisng values use DataFrame.dropna:\ndf['DateCP'] = pd.to_datetime(df['DateCP'], format='%d.%m.%Y', errors='coerce')\ndf['Date'] = pd.to_datetime(df['Date'], format='%d.%m.%Y', errors='coerce')\ndf['DT'] = pd.to_datetime(df['DT'], format='%d.%m.%Y %H:%M', errors='coerce')\n\ndf = df.dropna(subset=['DateCP','Date','DT'])\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074459574_dataframe_datetime_pandas_python.txt
Q: Change data cutoff frequency over time I have data with this structure Is there a way to change data cutoff frequency over time (on python side, not SQL) from 30-min slice to 1 hour, with an obligatory condition when changing slices, sum the value in the columns 'starts' and 'scooter_on_parking', but the rest of the values ​​should not change. Basic command df.groupby(pd.Grouper(freq='1H', key='time_')).sum() sums all columns, how to leave part unchanged - don't quite understand. Thank. Update: answer new_df = ( df.groupby(pd.Grouper(freq='1H', key='time_')).agg({'starts':'sum', 'scooters_on_parking':'sum'}).reset_index() ) new_df = new_df.merge(df, on='time_' did not fit, the data is incorrect - after aggregation, the data diverges greatly. Example of actual values . The result A: Since after groupby you need an aggregation operation, the easiest way to apply a function to only some columns would be to use merge after groupby: new_df = ( df.groupby(pd.Grouper(freq='1H', key='time_')).agg({'starts':'sum', 'scooters_on_parking':'sum'}).reset_index() ) new_df = new_df.merge(df, on='time_')
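A hedged alternative to the groupby-plus-merge idea: build a single aggregation map so the two count columns are summed while every other column keeps its first value within each hour (the extra column names are taken from whatever else the frame contains):

import pandas as pd

agg_map = {'starts': 'sum', 'scooters_on_parking': 'sum'}
agg_map.update({c: 'first' for c in df.columns
                if c not in agg_map and c != 'time_'})

hourly = df.groupby(pd.Grouper(freq='1H', key='time_')).agg(agg_map).reset_index()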
Change data cutoff frequency over time
I have data with this structure. Is there a way to change the data cutoff frequency over time (on the Python side, not SQL) from a 30-min slice to 1 hour, with the obligatory condition that when merging slices the values in the columns 'starts' and 'scooter_on_parking' are summed, but the rest of the values should not change? The basic command df.groupby(pd.Grouper(freq='1H', key='time_')).sum() sums all columns; how to leave part of them unchanged I don't quite understand. Thanks. Update: the answer

new_df = ( df.groupby(pd.Grouper(freq='1H', key='time_')).agg({'starts':'sum', 'scooters_on_parking':'sum'}).reset_index() )
new_df = new_df.merge(df, on='time_')

did not fit; the data is incorrect, and after aggregation the data diverges greatly. Example of actual values. The result
[ "Since after groupby you need an aggregation operation, the easiest way to apply a function to only some columns would be to use merge after groupby:\nnew_df = ( df.groupby(pd.Grouper(freq='1H', key='time_')).agg({'starts':'sum', \n 'scooters_on_parking':'sum'}).reset_index() )\nnew_df = new_df.merge(df, on='time_')\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074459581_dataframe_pandas_python.txt
Q: Install Detectron2 on Windows 10
I tried to install Facebook's Detectron2 following this official repo. According to that repo, Detectron2 can only be installed on Linux. However, I'm working on a server running the Windows operating system. Does anybody know how to install it on Windows?

A: Answer found through this issue: https://github.com/facebookresearch/detectron2/issues/9
These steps worked for me on my RTX 3070.
Install Anaconda https://docs.anaconda.com/anaconda/install/windows/
Create an environment.yml file containing the following code.

name: detectron2
channels:
  - pytorch
  - conda-forge
  - anaconda
  - defaults
dependencies:
  - python=3.8
  - numpy
  - pywin32
  - cudatoolkit=11.0
  - pytorch==1.7.1
  - torchvision
  - git
  - pip
  - pip:
    - git+https://github.com/facebookresearch/detectron2.git@v0.3

Launch the Anaconda terminal, navigate to the yml file and run conda env create -f environment.yml
Activate the environment: conda activate detectron2
And you're good to go.
Edit: This works without issue if you run your script within the Anaconda terminal, but I was also having the issue ImportError: DLL load failed: The specified module could not be found. with numpy and Pillow when running the script from VS Code, so if you happen to have this issue, I fixed it by uninstalling and reinstalling the troubled modules from within the Anaconda terminal.

pip uninstall numpy
pip install numpy

A: Installation of detectron2 on Windows is somewhat tricky. I struggled a whole week to make it work. For this, I created a new Anaconda environment (to match the version requirements of pytorch and torchvision for detectron2) and started by installing cudatoolkit and cudnn in that environment. This could be the best way not to mess up the existing Anaconda environment.
Here is the step-by-step procedure (verified on my laptop with Windows 10 and an RTX 2070 GPU):

Create an Anaconda environment (say 'detectron_env'): (note: python 3.8 didn't work, 3.7 worked)
conda create -n detectron_env python=3.7

Activate detectron_env:
conda activate detectron_env

Install cudatoolkit: (note: the cuda version number should match the one installed on your computer, in my case 11.3; you can check by typing "nvcc -V" in the Anaconda prompt window. For further information, refer to https://pytorch.org/)
conda install -c anaconda cudatoolkit=11.3

Install cudnn: (note: don't specify the version number, it will be figured out automatically)
conda install -c anaconda cudnn

Install pywin32:
conda install -c anaconda pywin32

Install pytorch, torchvision and torchaudio: (note: the version number of cudatoolkit should match the one in step 3. pytorch will automatically be installed with a version number equal to or higher than 1.8, which is required by detectron2)
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

Check whether the GPU is enabled for pytorch: start python, import torch and type 'torch.cuda.is_available()'. You should get 'True'. However, if you find that the GPU is not enabled for pytorch, go to step 1) and try again with different version numbers for cuda and/or python.

Install some packages: (note: you should install ninja, otherwise the set-up and build procedure will not go smoothly)
conda install -c anaconda cython
pip install opencv-python
pip install git+https://github.com/facebookresearch/fvcore
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
pip install av
conda install -c anaconda scipy
conda install -c anaconda ninja

Go to the directory where you want to install detectron2.

Git clone the following repository: (note: the folder name for detectron2 should be different from 'detectron2'. In my case, I used 'detectron_repo'. Otherwise, the path for pytorch will be confused)
git clone https://github.com/facebookresearch/detectron2.git detectron_repo

Install dependencies: (note: don't enter the cloned detectron_repo directory)
pip install -q -e detectron_repo

Go to the detectron_repo directory:
cd detectron_repo

Build detectron2:
python setup.py build develop

If the above is not successful, you may need to start again from the beginning or reinstall pytorch. If you reinstall pytorch, you need to rebuild detectron2 again. If the above is successful, then

Test: go to the demo/ directory and run the following script, specifying an input path to any of your images (say .jpg):
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input <path_to_your_image_file.jpg> --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl

A: Here are the installation instructions. To install the latest version of Detectron2, the following command works.
Windows 10
Python = 3.7.9
Pytorch = 1.7.1
Torchvision = 0.8.2
Cuda = 11.0
detectron2 = 0.5
In the terminal:
python -m pip install git+https://github.com/facebookresearch/detectron2.git

A: Here is how I managed to use detectron2 on Windows 10:
Determine how to install pytorch using https://pytorch.org/get-started/locally/ (I am using CPU only; install pytorch using the suggested command)
Run python -m pip install git+github.com/facebookresearch/detectron2.git
conda install pywin32

A: I installed this on Windows 10 and 11 successfully using this trick.
First of all, install Visual Studio with its compiler, build tools and runtimes. Install the .NET Runtime.
Then start with:
conda create -n norfair python=3.9
conda install -c anaconda pywin32
conda install -c anaconda cython
pip install git+https://github.com/facebookresearch/fvcore
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
pip install av
conda install -c anaconda scipy
conda install -c anaconda ninja
pip install git+https://github.com/facebookresearch/detectron2.git
DONE

A: I built using Windows 10, Python 3.8, CUDA 11.3, following the steps from @east, and I additionally used steps (1) and (2) from stackoverflow.com/questions/70751751/…
Edited to include details:
1) I made a new env D/jav/learning with python 3.8.3 and activated it using conda activate D:\jav\learning
2) conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.3 -c pytorch
3) Set up environment variables for Microsoft Visual Studio by running vcvars64.bat:
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvars64.bat
4) Add a new system path for cl.exe to my PATH variable:
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64
5) Install the following:
conda install -c anaconda pywin32
conda install -c anaconda cython
pip install opencv-python
pip install git+https://github.com/facebookresearch/fvcore
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
pip install av
conda install -c anaconda scipy
conda install -c anaconda ninja
Then, following comments (9 to 14) from @east:
a) Go to the directory where you want to install detectron2.
git clone https://github.com/facebookresearch/detectron2.git detectron_repo
b) Install dependencies:
pip install -q -e detectron_repo
c) Go to the detectron_repo directory:
cd detectron_repo
Build detectron2:
python setup.py build develop
Test: go to the demo/ directory and run the following script:
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input <path_to_your_image_file.jpg> --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
Successful, just got this warning:
PkgResourcesDeprecationWarning: is an invalid version and will not be supported in a future release

A: Regarding pip installations, it was a nightmare, but I finally managed to create a workaround for me:
First install the torch 1.10 CPU version:
pip install torch==1.10.0+cpu torchvision==0.11.0+cpu torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html --no-cache -I
Install detectron2:
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2
Uninstall torch if you need a newer version/cuda:
pip uninstall -y torch torchvision torchaudio
Finally delete all torch folders in your Lib/site-packages/ and install a different torch version.
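Whichever route you take, a quick way to verify the build afterwards is a short import check; a sketch, assuming a CUDA build (on a CPU-only build the second value will be False):

import torch
import detectron2

print(torch.__version__, torch.cuda.is_available())  # expect True on a working GPU build
print(detectron2.__version__)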
Install Detectron2 on Windows 10
I tried to install Facebook's Detectron2 following this official repo. According to that repo, Detectron2 can only be installed on Linux. However, I'm working on a server running the Windows operating system. Does anybody know how to install it on Windows?
[ "Answer found through this issue: https://github.com/facebookresearch/detectron2/issues/9\nThese steps worked for me on my RTX 3070.\n\nInstall Anaconda https://docs.anaconda.com/anaconda/install/windows/\nCreate a environment.yml file containing the following code.\n\nname: detectron2\nchannels:\n - pytorch\n - conda-forge\n - anaconda\n - defaults\ndependencies:\n - python=3.8\n - numpy\n - pywin32\n - cudatoolkit=11.0\n - pytorch==1.7.1\n - torchvision\n - git\n - pip\n - pip:\n - git+https://github.com/facebookresearch/detectron2.git@v0.3\n\n\nLaunch the Anaconda terminal, navigate to the yml file and run conda env create -f environment.yml\n\nActivate the environment conda activate detectron2\n\n\nAnd you're good to go.\nEdit: This works without issue if you run your script within the anaconda terminal but I was also having this issue ImportError: DLL load failed: The specified module could not be found. with numpy and Pillow when running the script from VS Code so if you happen to have this issue, I fixed it by uninstalling and reinstalling the troubled modules from within the anaconda terminal.\npip uninstall numpy\npip install numpy\n\n", "Installation of detectron2 in Windows is somehow tricky. I struggled a whole week to make it work. For this, I created a new anaconda environment (to match with the version requirement for pytorch and torchvision for detectron2) and started from installing cudatoolkit and cudnn in that environment. This could be the best way not to mess up with the existing anaconda environment.\nHere is the step by step procedure (verified with my laptop with Windows 10 and RTX2070 GPU):\n\nCreate an anaconda environment (say 'detectron_env'):\n(note. python 3.8 didn't work, 3.7 worked)\n\nconda create -n detectron_env python=3.7\n\n\nActivate detectron_env:\n\nconda activate detectron_env\n\n\nInstall cudatoolkit:\n(note. cuda version number should match with the one installed in your computer (in my case 11.3. You can check by typing \"nvcc -V\" in the anaconda prompt window. For further information, refer to https://pytorch.org/)\n\nconda install -c anaconda cudatoolkit=11.3\n\n\nInstall cudnn:\n(note. Don't specify the version number. It will be automatically figured out)\n\nconda install -c anaconda cudnn\n\n\nInstall pywin32:\n\nconda install -c anaconda pywin32\n\n\nInstall pytorch, torchvision and torchaudio:\n(note. Version number of cudatoolkit should match with the one in step 3. pytorch will be automatically installed with version number equal to or higher than 1.8 that is required by detectron2)\n\nconda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch\n\n\nCheck whether GPU is enabled for pytorch:\n\nEnable python, import torch and type 'torch.cuda.is_available()'\n\nYou should get 'True'. However, If you find that GPU is not enabled for pytorch, go to step 1) and try again with different version numbers for cuda and/or python.\n\nInstall some packages:\n(note. You should install ninja. Otherwise, set-up and build procedure will not go smoothly)\n\nconda install -c anaconda cython\n\npip install opencv-python\n\npip install git+https://github.com/facebookresearch/fvcore\n\npip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI\n\npip install av\n\nconda install -c anaconda scipy\n\nconda install -c anaconda ninja\n\n\n\nGo to the directory where you want to install detectron2.\n\nGit clone the following repository:\n(note. The folder name for detectron2 should be different from 'detectron2'. 
In my case, I used 'detectron_repo'. Otherwise, path for pytorch will be confused)\n\ngit clone https://github.com/facebookresearch/detectron2.git detectron_repo\n\n\nInstall dependencies:\n(note. Don't enter the cloned detectron_repo directory)\n\npip install -q -e detectron_repo\n\n\nGo to detectron_repo directory:\n\ncd detectron_repo\n\n\nBuild detectron2:\n\npython setup.py build develop\n\nIf the above is not successful, you may need to start again from the beginning or reinstall pytorch. If you reinstall pytocrh, you need to rebuild detectron2 again.\nIf the above is successful, then\n\nTest:\nGo to demo/ directory and run the following script by specyfing an input path to any of your image (say .jpg):\n\npython demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input <path_to_your_image_file.jpg> --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl\n\n\n\n", "Here is the installation instructions. To install the latest version of Detectron2, following command works.\n\nWindows10\nPython = 3.7.9\nPytorch = 1.7.1\nTorchvision = 0.8.2\nCuda = 11.0\ndetectron2 = 0.5\n\nIn the terminal:\npython -m pip install git+https://github.com/facebookresearch/detectron2.git\n\n", "Here is how I managed to use detectron2 on Windows 10:\n\nDetermine how to install pytorch using https://pytorch.org/get-started/locally/ (I am using CPU only, install pytorch using the suggested command\n\nRun python -m pip install git+github.com/facebookresearch/detectron2.git\n\nconda install pywin32\n\n\n", "I Installed this in Windows 10 and 11 Successfully using this trick.\nFirst of all, Install Visual Studio and Compiler, and build tools, and runtimes. Install .NET Runtime.\nThen start with\nconda create -n norfair python=3.9\nconda install -c anaconda pywin32\nconda install -c anaconda cython\npip install git+https://github.com/facebookresearch/fvcore\npip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI\npip install av\nconda install -c anaconda scipy\nconda install -c anaconda ninja\npip install git+https://github.com/facebookresearch/detectron2.git\nDONE\n", "I build using win 10, py 3.8, cuda 11.3\nfollowing steps from @east as well as I did additionally used the steps (1) and (2) from stackoverflow.com/questions/70751751/…\nEdited to include details:\n1)I made a new env D/jav/learning with python 3.8.3\nand activated it using conda activate D:\\jav\\learning\n\nconda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.3 -c pytorch\n\nSet up environment variables for Microsoft Visual Studio by running vcvars64.bat\n C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat\n\n\n\n4)Add new system path for cl.exe in my PATH variable:\nC:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\Hostx64\\x64\n\n\nInstall following:\nconda install -c anaconda pywin32\n conda install -c anaconda cython\n\n pip install opencv-python\n\n pip install git+https://github.com/facebookresearch/fvcore\n pip install git+https://github.com/philferriere /cocoapi.git#subdirectory=PythonAPI\n\n pip install av\n\n conda install -c anaconda scipy\n\n conda install -c anaconda ninja\n\n\nThen following direction of comments (9 to 14) from @east\na) Go to the directory where you want to install detectron2.\n git clone https://github.com/facebookresearch/detectron2.git 
detectron_repo\n\nb)Install dependencies:\npip install -q -e detectron_repo\n\n\nc)Go to detectron_repo directory:\n cd detectron_repo\n\n Build detectron2:\n\n python setup.py build develop\n\nTest:\nGo to demo/ directory and run the following script\npython demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input <path_to_your_image_file.jpg> --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl\nSuccessful, just got this warning:\nPkgResourcesDeprecationWarning: is an invalid version and will not be supported in a future release\n", "Regarding pip installations, it was a nightmare but I finally managed to create a workaround for me:\nFirst install torch 1.10 cpu version:\npip install torch==1.10.0+cpu torchvision==0.11.0+cpu torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html --no-cache -I\n\nInstall detectron2:\ngit clone https://github.com/facebookresearch/detectron2.git\npython -m pip install -e detectron2\n\nUninstall torch if you need newer version/cuda:\npip uninstall -y torch torchvision torchaudio\n\nFinally delete all torch folders in your Lib/site-packages/ and install a different torch version.\n" ]
[ 8, 5, 2, 1, 1, 0, 0 ]
[]
[]
[ "deep_learning", "object_detection_api", "python", "python_3.x", "pytorch" ]
stackoverflow_0060631933_deep_learning_object_detection_api_python_python_3.x_pytorch.txt
Q: How to convert a row of csv file into column using python
I have this: there are six rows in the csv file and I need to convert it into this format: convert into columns. I tried converting a column into rows and succeeded:

import pandas as pd
x = pd.read_csv('source.csv', header=None)  # reading it as csv for now
columns = x[0]  # convert questions label column to list
columns.tolist()

output:
['Period', 'B_date', 'C_date', 'Year', 'Day of period', 'label']

but I am confused about how to convert rows into columns.

A: As per the comment, try Transpose...

import pandas as pd

df = pd.read_csv('source.csv', header=None)
df.T

Outputs:

     0         1          2     3   4     5
0    1  1-Jan-21  31-Jan-21  2021  31  FY21
1    2  1-Feb-21  28-Feb-21  2021  28  FY22
2    3  1-Mar-21  31-Mar-21  2021  31  FY23
3    4  1-Apr-21  30-Apr-21  2021  30  FY24
4    5  1-May-21  31-May-21  2021  31  FY25
5    6  1-Jun-21  30-Jun-21  2021  30  FY26
6    7  1-Jul-21  31-Jul-21  2021  31  FY27
7    8  1-Aug-21  31-Aug-21  2021  31  FY28
8    9  1-Sep-21  30-Sep-21  2021  30  FY29
9   10  1-Oct-21  31-Oct-21  2021  31  FY30
10  11  1-Nov-21  30-Nov-21  2021  30  FY31
11  12  1-Dec-21  31-Dec-21  2021  31  FY32
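To persist the transposed frame, it can be written back out; a sketch, where 'target.csv' is a hypothetical output name:

import pandas as pd

df = pd.read_csv('source.csv', header=None)
df.T.to_csv('target.csv', header=False, index=False)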
How to convert a row of csv file into column using python
I have this: there are six rows in the csv file and I need to convert it into this format: convert into columns. I tried converting a column into rows and succeeded:

import pandas as pd
x = pd.read_csv('source.csv', header=None)  # reading it as csv for now
columns = x[0]  # convert questions label column to list
columns.tolist()

output:
['Period', 'B_date', 'C_date', 'Year', 'Day of period', 'label']

but I am confused about how to convert rows into columns.
[ "As per the comment, try Transpose...\nimport pandas as pd\n\ndf = pd.read_csv('source.csv', header=None)\ndf.T\n\nOutputs:\n 0 1 2 3 4 5\n0 1 1-Jan-21 31-Jan-21 2021 31 FY21\n1 2 1-Feb-21 28-Feb-21 2021 28 FY22\n2 3 1-Mar-21 31-Mar-21 2021 31 FY23\n3 4 1-Apr-21 30-Apr-21 2021 30 FY24\n4 5 1-May-21 31-May-21 2021 31 FY25\n5 6 1-Jun-21 30-Jun-21 2021 30 FY26\n6 7 1-Jul-21 31-Jul-21 2021 31 FY27\n7 8 1-Aug-21 31-Aug-21 2021 31 FY28\n8 9 1-Sep-21 30-Sep-21 2021 30 FY29\n9 10 1-Oct-21 31-Oct-21 2021 31 FY30\n10 11 1-Nov-21 30-Nov-21 2021 30 FY31\n11 12 1-Dec-21 31-Dec-21 2021 31 FY32\n\n" ]
[ 0 ]
[]
[]
[ "csv", "pandas", "python" ]
stackoverflow_0074455614_csv_pandas_python.txt
Q: How to use Tweepy to retweet with a comment
So I am stuck trying to figure out how to retweet a tweet with a comment; this was added to Twitter recently. This is when you click retweet, add a comment to the retweet, and retweet it. Basically this is what I am talking about:
I was looking at the API and couldn't find a method dedicated to this. And even the retweet method does not have a parameter where I can pass text. So I was wondering, is there a way to do this?

A: Tweepy doesn't have functionality to retweet with your own text, but what you can do is make a url like this https://twitter.com/<user_displayname>/status/<tweet_id> and include it with the text you want to comment. It's not a retweet but you are embedding the tweet in your new tweet.
user_displayname - display name of the person whose tweet you are retweeting
tweet_id - tweet id of the tweet you are retweeting

A: September 2021 Update
Tweepy does have the functionality to quote retweet. Just provide the url of the tweet you want to quote into attachment_url of the API.update_status method.
Python example:

# Get the tweet you want to quote
tweet_to_quote_url = "https://twitter.com/andypiper/status/903615884664725505"

# Quote it in a new status
api.update_status("text", attachment_url=tweet_to_quote_url)

# Done!

A: In the documentation, there is a quote_tweet_id parameter in the create_tweet method. You can create a new tweet with the tweet ID of the tweet you want to quote.

comment = "Yep!"
quote_tweet = 1592447141720780803

client = tweepy.Client(bearer_token=access_token)
client.create_tweet(text=comment, quote_tweet_id=quote_tweet, user_auth=False)
How to use Tweepy to retweet with a comment
So I am stuck trying to figure out how to retweet a tweet with a comment; this was added to Twitter recently. This is when you click retweet, add a comment to the retweet, and retweet it. Basically this is what I am talking about:
I was looking at the API and couldn't find a method dedicated to this. And even the retweet method does not have a parameter where I can pass text. So I was wondering, is there a way to do this?
[ "Tweepy doesn't have functionality to retweet with your own text, but what you can do is make a url like this https://twitter.com/<user_displayname>/status/<tweet_id> and include it with the text you want comment. It's not a retweet but you are embedding the tweet in your new tweet. \nuser_displayname - display name of person, whose tweet you are retweeting\ntweet_id - tweet id of tweet you are retweeting\n", "September 2021 Update\nTweepy does have the functionality to quote retweet. Just provide the url of the tweet you want to quote into attachment_url of the API.update_status method.\nPython example:\n# Get the tweet you want to quote\ntweet_to_quote_url=\"https://twitter.com/andypiper/status/903615884664725505\"\n\n# Quote it in a new status\napi.update_status(\"text\", attachment_url=tweet_to_quote_url)\n\n# Done!\n\n", "In the documentation, there is a quote_tweet_id parameter in create_tweet method.\nYou can create a new tweet with the tweet ID of the tweet you want to quote.\ncomment = \"Yep!\"\nquote_tweet = 1592447141720780803\n\nclient = tweepy.Client(bearer_token=access_token)\nclient.create_tweet(text=comment, quote_tweet_id=quote_tweet, user_auth=False)\n\n" ]
[ 16, 4, 0 ]
[]
[]
[ "python", "tweepy", "twitter" ]
stackoverflow_0033619971_python_tweepy_twitter.txt
Q: how do you run pytest either from a notebook or command line on databricks?
I have created some classes, each of which takes a dataframe as a parameter. I have imported pytest and created some fixtures and simple assert methods. I can call pytest.main([.]) from a notebook and it will execute pytest from the rootdir (databricks/driver). I have tried passing the notebook path but it says not found. Ideally, I'd want to execute this from the command line. How do I configure the rootdir? There seems to be a disconnect between the Spark OS and the user workspace area which I'm finding hard to connect. As a caveat, I don't want to use unittest, as pytest can be used successfully in the CI pipeline by outputting junitxml which Azure DevOps can report on.

A: I've explained the reason why you can't run pytest on Databricks notebooks (unless you export them, and upload them to dbfs as regular .py files, which is not what you want) in the link at the bottom of this post.
However, I have been able to run doctests in Databricks, using the doctest.run_docstring_examples method like so:

import doctest

def f(x):
    """
    >>> f(1)
    45
    """
    return x + 1

doctest.run_docstring_examples(f, globals())

This will print out:

**********************************************************************
File "/local_disk0/tmp/1580942556933-0/PythonShell.py", line 5, in NoName
Failed example:
    f(1)
Expected:
    45
Got:
    2

If you also want to raise an exception, take a further look at: https://menziess.github.io/howto/test/code-in-databricks-notebooks/

A: Taken from Databricks' own repo: https://github.com/databricks/notebook-best-practices/blob/main/notebooks/run_unit_tests.py

# Databricks notebook source
# MAGIC %md Test runner for `pytest`

# COMMAND ----------

!cp ../requirements.txt ~/.
%pip install -r ~/requirements.txt

# COMMAND ----------

# pytest.main runs our tests directly in the notebook environment, providing
# fidelity for Spark and other configuration variables.
#
# A limitation of this approach is that changes to the test will be
# cached by Python's import caching mechanism.
#
# To iterate on tests during development, we restart the Python process
# and thus clear the import cache to pick up changes.
dbutils.library.restartPython()

import pytest
import os
import sys

# Run all tests in the repository root.
notebook_path = dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get()
repo_root = os.path.dirname(os.path.dirname(notebook_path))
os.chdir(f'/Workspace/{repo_root}')
%pwd

# Skip writing pyc files on a readonly filesystem.
sys.dont_write_bytecode = True

retcode = pytest.main([".", "-p", "no:cacheprovider"])

# Fail the cell execution if we have any test failures.
assert retcode == 0, 'The pytest invocation failed. See the log above for details.'
how do you run pytest either from a notebook or command line on databricks?
I have created some classes, each of which takes a dataframe as a parameter. I have imported pytest and created some fixtures and simple assert methods. I can call pytest.main([.]) from a notebook and it will execute pytest from the rootdir (databricks/driver). I have tried passing the notebook path but it says not found. Ideally, I'd want to execute this from the command line. How do I configure the rootdir? There seems to be a disconnect between the Spark OS and the user workspace area which I'm finding hard to connect. As a caveat, I don't want to use unittest, as pytest can be used successfully in the CI pipeline by outputting junitxml which Azure DevOps can report on.
[ "I've explained the reason why you can't run pytest on Databricks notebooks (unless you export them, and upload them to dbfs as regular .py files, which is not what you want) in the link at the bottom of this post.\nHowever, I have been able to run doctests in Databricks, using the doctest.run_docstring_examples method like so:\nimport doctest\n\ndef f(x):\n \"\"\"\n >>> f(1)\n 45\n \"\"\"\n return x + 1\n\ndoctest.run_docstring_examples(f, globals())\n\nThis will print out:\n**********************************************************************\nFile \"/local_disk0/tmp/1580942556933-0/PythonShell.py\", line 5, in NoName\nFailed example:\n f(1)\nExpected:\n 45\nGot:\n 2\n\nIf you also want to raise an exception, take a further look at: https://menziess.github.io/howto/test/code-in-databricks-notebooks/\n", "Taken from Databricks' own repo: https://github.com/databricks/notebook-best-practices/blob/main/notebooks/run_unit_tests.py\n# Databricks notebook source\n# MAGIC %md Test runner for `pytest`\n\n# COMMAND ----------\n\n!cp ../requirements.txt ~/.\n%pip install -r ~/requirements.txt\n\n# COMMAND ----------\n\n# pytest.main runs our tests directly in the notebook environment, providing\n# fidelity for Spark and other configuration variables.\n#\n# A limitation of this approach is that changes to the test will be\n# cache by Python's import caching mechanism.\n#\n# To iterate on tests during development, we restart the Python process \n# and thus clear the import cache to pick up changes.\ndbutils.library.restartPython()\n\nimport pytest\nimport os\nimport sys\n\n# Run all tests in the repository root.\nnotebook_path = dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get()\nrepo_root = os.path.dirname(os.path.dirname(notebook_path))\nos.chdir(f'/Workspace/{repo_root}')\n%pwd\n\n# Skip writing pyc files on a readonly filesystem.\nsys.dont_write_bytecode = True\n\nretcode = pytest.main([\".\", \"-p\", \"no:cacheprovider\"])\n\n# Fail the cell execution if we have any test failures.\nassert retcode == 0, 'The pytest invocation failed. See the log above for details.'\n\n" ]
[ 2, 0 ]
[]
[]
[ "databricks", "pytest", "python", "unit_testing" ]
stackoverflow_0055119153_databricks_pytest_python_unit_testing.txt
Q: How to solve an equation that contains sigma sum to find the upper limit of sigma in Python
I want to find an integer value from a sigma-sum equation with two variables, like this post, where the range of x (a real decimal value) is between two limits, e.g. known_xmin_value <= x < known_xmax_value. 1 is the lower limit of k (which is the integer), but I don't know the upper limit (which is the goal to be derived from the solution); perhaps one can just guess that it will be lower than 2000000. Is it possible to find the integer for k? How?

from sympy import Sum, solve
from sympy.abc import k, x

known_value = 742.231  # just for example
known_xmin_value = 3.652  # just for example
solve(-known_value + Sum((x + (k - 1) * (x - known_xmin_value)) ** 3, (k, 1, integer_unknown_limit)), x)

A: Since you are looking for a numerical solution, a naive approach would be a brute force over a set of integer_unknown_limit and use numerical root finding algorithms. For example:

from sympy import *
from scipy.optimize import root
import matplotlib.pyplot as plt

x, k, integer_unknown_limit = symbols("x, k, u")
known_value = 742.231 # just for example
known_xmin_value = 3.652 # just for example
expr = -known_value + Sum((x + (k - 1) * (x - known_xmin_value)) ** 3, (k, 1, integer_unknown_limit))

f = lambdify([x, integer_unknown_limit], expr)
res = {}

# NOTE: this loop may take a few minutes, depending on the machine.
# I suggest to lower the upper limit
for ul in range(1, 1000):
    print(ul)
    try:
        r = root(f, 2, args=(ul,))
        if r.success:
            res[ul] = r.x[0]
    except:
        pass

plt.figure()
plt.plot(list(res.keys()), res.values())
plt.xlabel("u (upper limit)")
plt.ylabel("x where f(x)=0")
plt.title("f(x) = $%s$" % latex(expr))
plt.show()

As you can see, for each upper limit value in the considered subset, the expression is satisfied at some value of x.

EDIT: note that the same can be done with sympy's nsolve. Just replace the for loop with:

for ul in range(1, 1000):
    print(ul)
    try:
        res[ul] = nsolve(expr.subs(integer_unknown_limit, ul), x, 1)
    except:
        pass

A: SymPy can compute the sum symbolically:

from sympy import *

from sympy import Sum, solve
from sympy.abc import k, x, y

# Use exact rational numbers
known_value = Rational('742.231') # just for example
known_xmin_value = Rational('3.652') # just for example

sum_value = Sum((x + (k - 1) * (x - known_xmin_value)) ** 3, (k, 1, y))

This is the summation and its result computed using doit:

In [22]: sum_value
Out[22]: Sum((x + (k - 1)*(x - 913/250))**3, (k, 1, y))

In [23]: sum_value.doit()
Out[23]: 761048497*y/15625000 + (2500707*x/62500 - 2283145491/15625000)*(y**2/2 + y/2) + (2739*x**2/250 - 2500707*x/31250 + 2283145491/15625000)*(y**3/3 + y**2/2 + y/6) + (x**3 - 2739*x**2/250 + 2500707*x/62500 - 761048497/15625000)*(y**4/4 + y**3/2 + y**2/4)

This is a quartic polynomial in y, so the equation can be solved explicitly in radicals, giving 4 symbolic expressions for the roots:

In [24]: s1, s2, s3, s4 = solve(sum_value.doit() - known_value, y)

In [25]: print(s1)
(-250*x*sqrt(62500*x**2/(62500*x**2 - 456500*x + 833569) - 456500*x/(62500*x**2 - 456500*x + 833569) - 1000*sqrt(833569*x**2 + 185557750*x - 677656903)/(62500*x**2 - 456500*x + 833569) + 833569/(62500*x**2 - 456500*x + 833569)) - 250*x + 913*sqrt(62500*x**2/(62500*x**2 - 456500*x + 833569) - 456500*x/(62500*x**2 - 456500*x + 833569) - 1000*sqrt(833569*x**2 + 185557750*x - 677656903)/(62500*x**2 - 456500*x + 833569) + 833569/(62500*x**2 - 456500*x + 833569)) - 913)/(500*x - 1826)

For any particular value of x, probably none of these 4 roots will be equal to an integer though.
How to solve an equation that contains sigma sum to find the upper limit of sigma in Python
I want to find an integer value from a sigma-sum equation with two variables, like this post, where the range of x (a real decimal value) is between two limits, e.g. known_xmin_value <= x < known_xmax_value. 1 is the lower limit of k (which is the integer), but I don't know the upper limit (which is the goal to be derived from the solution); perhaps one can just guess that it will be lower than 2000000. Is it possible to find the integer for k? How?

from sympy import Sum, solve
from sympy.abc import k, x

known_value = 742.231  # just for example
known_xmin_value = 3.652  # just for example
solve(-known_value + Sum((x + (k - 1) * (x - known_xmin_value)) ** 3, (k, 1, integer_unknown_limit)), x)
[ "Since you are looking for a numerical solution, a naive approach would be a brute force over a set of integer_unknown_limit and use numerical root finding algorithms. For example:\nfrom sympy import *\nfrom scipy.optimize import root\nimport matplotlib.pyplot as plt\n\nx, k, integer_unknown_limit = symbols(\"x, k, u\")\nknown_value = 742.231 # just for example\nknown_xmin_value = 3.652 # just for example\nexpr = -known_value + Sum((x + (k - 1) * (x - known_xmin_value)) ** 3, (k, 1, integer_unknown_limit))\n\nf = lambdify([x, integer_unknown_limit], expr)\nres = {}\n\n# NOTE: this loop may take a few minutes, depending on the machine.\n# I suggest to lower the upper limit\nfor ul in range(1, 1000):\n print(ul)\n try:\n r = root(f, 2, args=(ul,))\n if r.success:\n res[ul] = r.x[0]\n except:\n pass\n\nplt.figure()\nplt.plot(list(res.keys()), res.values())\nplt.xlabel(\"u (upper limit)\")\nplt.ylabel(\"x where f(x)=0\")\nplt.title(\"f(x) = $%s$\" % latex(expr))\nplt.show()\n\nAs you can see, for each upper limit value in the considered subset, the expression is satisfied at some value of x.\n\nEDIT: note that the same can be done with sympy's nsolve. Just replace the for loop with:\nfor ul in range(1, 1000):\n print(ul)\n try:\n res[ul] = nsolve(expr.subs(integer_unknown_limit, ul), x, 1)\n except:\n pass\n\n", "SymPy can compute the sum symbolically:\nfrom sympy import *\n\nfrom sympy import Sum, solve\nfrom sympy.abc import k, x, y\n\n# Use exact rational numbers\nknown_value = Rational('742.231') # just for example\nknown_xmin_value = Rational('3.652') # just for example\n\nsum_value = Sum((x + (k - 1) * (x - known_xmin_value)) ** 3, (k, 1, y))\n\nThis is the summation and its result computed using doit:\nIn [22]: sum_value\nOut[22]: \n y \n ____ \n ╲ \n ╲ 3\n ╲ ⎛ ⎛ 913⎞⎞ \n ╱ ⎜x + (k - 1)⋅⎜x - ───⎟⎟ \n ╱ ⎝ ⎝ 250⎠⎠ \n ╱ \n ‾‾‾‾ \nk = 1 \n\nIn [23]: sum_value.doit()\nOut[23]: \n ⎛ 2 ⎞ ⎛ 2 ⎞ ⎛ 3 2 ⎞ ⎛ 4 3 2⎞ ⎛ 2 ⎞\n761048497⋅y ⎛2500707⋅x 2283145491⎞ ⎜y y⎟ ⎜2739⋅x 2500707⋅x 2283145491⎟ ⎜y y y⎟ ⎜y y y ⎟ ⎜ 3 2739⋅x 2500707⋅x 761048497⎟\n─────────── + ⎜───────── - ──────────⎟⋅⎜── + ─⎟ + ⎜─────── - ───────── + ──────────⎟⋅⎜── + ── + ─⎟ + ⎜── + ── + ──⎟⋅⎜x - ─────── + ───────── - ─────────⎟\n 15625000 ⎝ 62500 15625000 ⎠ ⎝2 2⎠ ⎝ 250 31250 15625000 ⎠ ⎝3 2 6⎠ ⎝4 2 4 ⎠ ⎝ 250 62500 15625000⎠\n\nThis is a quartic polynomial in y so the equation can be solved explicitly in radicals giving 4 symbolic expressions for the roots:\nIn [24]: s1, s2, s3, s4 = solve(sum_value.doit() - known_value, y)\n\nIn [25]: print(s1)\n(-250*x*sqrt(62500*x**2/(62500*x**2 - 456500*x + 833569) - 456500*x/(62500*x**2 - 456500*x + 833569) - 1000*sqrt(833569*x**2 + 185557750*x - 677656903)/(62500*x**2 - 456500*x + 833569) + 833569/(62500*x**2 - 456500*x + 833569)) - 250*x + 913*sqrt(62500*x**2/(62500*x**2 - 456500*x + 833569) - 456500*x/(62500*x**2 - 456500*x + 833569) - 1000*sqrt(833569*x**2 + 185557750*x - 677656903)/(62500*x**2 - 456500*x + 833569) + 833569/(62500*x**2 - 456500*x + 833569)) - 913)/(500*x - 1826)\n\nFor any particular value of x probably none of these 4 roots will be equal to an integer though.\n" ]
[ 1, 1 ]
[]
[]
[ "equation_solving", "python", "solver", "sympy" ]
stackoverflow_0074456819_equation_solving_python_solver_sympy.txt
Q: scikit-learn train and test split returns NaNs
My sample data looks like below:

customer_id  revenue_m10  revenue_m9  revenue_m8  target
1            1234         1231        1256        1239
2            5678         3425        3255        2345

I am trying to split my dataset into train and test based on scikit-learn's train_test_split module. So, I tried the below code:

X_train, X_test, y_train, y_test = train_test_split(
    sample_set_df[all_features],
    sample_set_df[target_var],
    test_size=0.3
)

But when I view y_test, it looks like below, with NaNs. Not sure what the issue is. Is the index number missing, or is there any other issue? If the index is an issue, can I know how we can solve this?

A: y_test is a pandas Series; printing it displays its index and the data. It seems that sample_set_df has NaNs in its index.
Having NaNs in the index does not affect how train_test_split splits the data. You might have an issue with the actual data though. The target is 0 when you have NaNs.
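A quick way to confirm where the NaNs come from, and a common fix; a sketch, assuming the sample_set_df and target_var names from the question:

print(sample_set_df.index.isna().sum())        # NaNs in the index
print(sample_set_df[target_var].isna().sum())  # NaNs in the target column

# drop rows with a missing target and rebuild a clean integer index
sample_set_df = sample_set_df.dropna(subset=[target_var]).reset_index(drop=True)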
scikit-learn train and test split returns NaNs
My sample data looks like below:

customer_id  revenue_m10  revenue_m9  revenue_m8  target
1            1234         1231        1256        1239
2            5678         3425        3255        2345

I am trying to split my dataset into train and test based on scikit-learn's train_test_split module. So, I tried the below code:

X_train, X_test, y_train, y_test = train_test_split(
    sample_set_df[all_features],
    sample_set_df[target_var],
    test_size=0.3
)

But when I view y_test, it looks like below, with NaNs. Not sure what the issue is. Is the index number missing, or is there any other issue? If the index is an issue, can I know how we can solve this?
[ "y_test is a pandas Series, printing it displays its index and the data. It seems that sample_set_df has NaNs in its index.\nHaving NaNs in the index does not affect how train_test_split splits the data. You might have an issue with the actual data though. The target is 0 when you have NaNs.\n" ]
[ 1 ]
[]
[]
[ "data_mining", "machine_learning", "pandas", "python", "scikit_learn" ]
stackoverflow_0074456312_data_mining_machine_learning_pandas_python_scikit_learn.txt
Q: Getting 'Invalid query' error when doing name='test' to Google Drive API
I am using PyDrive to fetch the list of file names from a Google Drive folder:

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

GoogleAuth.DEFAULT_SETTINGS['client_config_file'] = r"client_secrets.json"
gauth = GoogleAuth(settings_file='settings.yaml')
drive = GoogleDrive(gauth)

folder_id = "folder-id"

file_list = drive.ListFile(
    {
        'q': "name = 'test'",
        'supportsAllDrives': True,
        'includeItemsFromAllDrives': True,
    }
).GetList()

The syntax name = 'test' I saw from here. I am getting this error:

googleapiclient.errors.HttpError: <HttpError 400 when requesting https://www.googleapis.com/drive/v2/files?q=name%3D%27test%27&maxResults=1000&alt=json returned "Invalid query". Details: "[{'domain': 'global', 'reason': 'invalid', 'message': 'Invalid query', 'locationType': 'parameter', 'location': 'q'}]">

If I fetch all file names, it works perfectly:

'q': f"'{folder_id}' in parents and trashed=false"

My goal is to check whether a file with a specific name exists or not in a particular folder, something like name = 'file-name' and 'folder-id' in parents and trashed=false

A: When I saw the script of pydrive, it seems that Drive API v2 is used. (Ref) In the case of Drive API v2, the metadata field for the filename is title. I thought that this might be the reason for your current issue of Invalid query. So, how about the following modification?

From:
'q': "name = 'test'",

To:
"q": "title = 'test'",

Note:
About 'q': f"'{folder_id}' in parents and trashed=false": in this case, both Drive API v2 and v3 can be used. By this, I think that no error occurs.

Reference:
PyDrive
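Putting the title fix together with the goal stated in the question, the existence check could look like this; a sketch, where 'file-name' and 'folder-id' are placeholders for your actual values:

file_list = drive.ListFile(
    {
        'q': "title = 'file-name' and 'folder-id' in parents and trashed=false",
        'supportsAllDrives': True,
        'includeItemsFromAllDrives': True,
    }
).GetList()

print(len(file_list) > 0)  # True if a matching file exists in the folder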
Getting 'Invalid query' error when doing name='test' to Google Drive API
I am using PyDrive to fetch the list of file names from a Google Drive folder:

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

GoogleAuth.DEFAULT_SETTINGS['client_config_file'] = r"client_secrets.json"
gauth = GoogleAuth(settings_file='settings.yaml')
drive = GoogleDrive(gauth)

folder_id = "folder-id"

file_list = drive.ListFile(
    {
        'q': "name = 'test'",
        'supportsAllDrives': True,
        'includeItemsFromAllDrives': True,
    }
).GetList()

The syntax name = 'test' I saw from here. I am getting this error:

googleapiclient.errors.HttpError: <HttpError 400 when requesting https://www.googleapis.com/drive/v2/files?q=name%3D%27test%27&maxResults=1000&alt=json returned "Invalid query". Details: "[{'domain': 'global', 'reason': 'invalid', 'message': 'Invalid query', 'locationType': 'parameter', 'location': 'q'}]">

If I fetch all file names, it works perfectly:

'q': f"'{folder_id}' in parents and trashed=false"

My goal is to check whether a file with a specific name exists or not in a particular folder, something like name = 'file-name' and 'folder-id' in parents and trashed=false
[ "When I saw the script of pydrive, it seems that Drive API v2 is used. Ref In the case of Drive API v2, the metadata of filename is title. I thought that this might be the reason of your current issue of Invalid query. So, how about the following modification?\nFrom:\n'q': \"name = 'test'\",\n\nTo:\n\"q\": \"title = 'test'\",\n\nNote:\n\nAbout 'q': f\"'{folder_id}' in parents and trashed=false\", in this case, both Drive API v2 and v3 can be used. By this, I think that no error occurs.\n\nReference:\n\nPyDrive\n\n" ]
[ 1 ]
[]
[]
[ "google_drive_api", "pydrive", "python" ]
stackoverflow_0074458200_google_drive_api_pydrive_python.txt
Q: I'm quite a beginner and I got this error: RuntimeWarning: overflow encountered in exp y = A*np.exp(-1*B*x**2)

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def GaussFit():
    xdata_raw = [0,24,22,20,18,16,14,12,10,8,6,4,2,-24,-22,-20,-18,-16,-14,-12,-10,-8,-6,-4,-2]
    ydata_raw = [0.398,0.061,0.066,0.076,0.095,0.115,0.148,0.183,0.211,0.270,0.330,0.361,0.391,0.061,0.066,0.076,0.095,0.115,0.148,0.183,0.211,0.270,0.330,0.361,0.391]
    y_norm = []
    for i in range(len(ydata_raw)):
        temp = ydata_raw[i]/0.398
        y_norm.append(temp)
    plt.plot(xdata_raw, y_norm, 'o')
    xdata = np.asarray(xdata_raw)
    ydata = np.asarray(y_norm)

    def Gauss(x, A, B):
        y = A*np.exp(-1*B*x**2)
        return y

    parameters, covariance = curve_fit(Gauss, xdata, ydata)
    fit_A = parameters[0]
    fit_B = parameters[1]
    fit_y = Gauss(xdata, fit_A, fit_B)
    plt.plot(xdata, ydata, 'o', label='data')
    plt.plot(xdata, fit_y, '-', label='fit')
    plt.grid(color='grey', linestyle='-', linewidth=0.25, alpha=0.5)
    plt.legend()
    plt.xlabel('Winkel der Auslenkung in °')
    plt.ylabel('Intensität [I]')
    plt.title('vertikale Ausrichtung')

GaussFit()

So this is the function and I got this plot: enter image description here. As you can see, it's not high quality. I'm trying to fit my data to a Gauss curve and normalize it to 1, but it seems numpy has a problem with the range of the numbers. Any ideas how to fix it and get a reasonable plot?

A: y = A*np.exp(-1*B*x**2)

Maybe try

y = A*np.exp(-1*B*np.square(x))

Or look at Python RuntimeWarning: overflow encountered in long scalars for a similar exception. Might be that you have to use a 64 bit type for y.

A: Okay, it's solved; it wasn't an issue in the exp function in particular.
The problem is solved by adding a np.linspace() call that (as I think) provides the limits that are needed to generate the plot, as follows:

fit_y = Gauss(xdata, fit_A, fit_B)
x = np.linspace(-25, 25, 1000)
plt.plot(xdata, ydata, 'o', label='data')
plt.plot(x, Gauss(x, fit_A, fit_B), '-', label='fit')
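The overflow warning itself usually comes from curve_fit probing parameter values that make the exponent huge; giving it a sensible starting guess is often enough to avoid it. A sketch, reusing the names from the code above (the p0 values are illustrative, not from the original post):

parameters, covariance = curve_fit(Gauss, xdata, ydata, p0=[1.0, 0.01])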
I'm quite a beginner and I got this error: RuntimeWarning: overflow encountered in exp y = A*np.exp(-1*B*x**2)
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def GaussFit():
    xdata_raw = [0,24,22,20,18,16,14,12,10,8,6,4,2,-24,-22,-20,-18,-16,-14,-12,-10,-8,-6,-4,-2]
    ydata_raw = [0.398,0.061,0.066,0.076,0.095,0.115,0.148,0.183,0.211,0.270,0.330,0.361,0.391,0.061,0.066,0.076,0.095,0.115,0.148,0.183,0.211,0.270,0.330,0.361,0.391]
    y_norm = []
    for i in range(len(ydata_raw)):
        temp = ydata_raw[i]/0.398
        y_norm.append(temp)
    plt.plot(xdata_raw, y_norm, 'o')
    xdata = np.asarray(xdata_raw)
    ydata = np.asarray(y_norm)

    def Gauss(x, A, B):
        y = A*np.exp(-1*B*x**2)
        return y

    parameters, covariance = curve_fit(Gauss, xdata, ydata)
    fit_A = parameters[0]
    fit_B = parameters[1]
    fit_y = Gauss(xdata, fit_A, fit_B)
    plt.plot(xdata, ydata, 'o', label='data')
    plt.plot(xdata, fit_y, '-', label='fit')
    plt.grid(color='grey', linestyle='-', linewidth=0.25, alpha=0.5)
    plt.legend()
    plt.xlabel('Winkel der Auslenkung in °')
    plt.ylabel('Intensität [I]')
    plt.title('vertikale Ausrichtung')

GaussFit()

So this is the function and I got this plot: enter image description here. As you can see, it's not high quality. I'm trying to fit my data to a Gauss curve and normalize it to 1, but it seems numpy has a problem with the range of the numbers. Any ideas how to fix it and get a reasonable plot?
[ "y = A*np.exp(-1*B*x**2)\n\nMaybe try\ny = A*np.exp(-1*B*np.square(x))\n\nOr look at Python RuntimeWarning: overflow encountered in long scalars for a similar exception. Might be that you have to use a 64 bit type for y.\n", "Okay its solved, it wasnt the issue in the exp-funktion in particular.\nThe problem is solved by addin a np.linspace() funktion that (as i think) is providing the limitations that are needed to generate the plot.\nas followed:\nfit_y = Gauss(xdata, fit_A, fit_B)\nx = np.linspace(-25, 25, 1000)\nplt.plot(xdata, ydata, 'o', label='data')\nplt.plot(x, Gauss(x, fit_A, fit_B), '-', label='fit')\n\n" ]
[ 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074446025_numpy_python.txt
Q: How to compress WAV file in python?
I have converted MP3 files to WAV format, but how can I compress a WAV file to a very small size, less than or equal to the MP3 size, without changing the file format?

from pydub import AudioSegment
import os

# files
src_folder = "D:/projects/data/mp3"
dst_folder = "D:/projects/data/wav"

# get all audio files
files = os.listdir(src_folder)
for name in files:
    # name of the file
    wav_name = name.replace(".mp3", "")
    try:
        # convert mp3 to wav
        sound = AudioSegment.from_mp3("{}/{}".format(src_folder, name))
        sound.export("{}/{}.wav".format(dst_folder, wav_name), format="wav")
    except Exception as e:
        pass

A: s1.export("output.mp3", format='mp3', parameters=["-ac","2","-ar","8000"])
This line of code managed to reduce my audio size by half its previous size. Hope this is helpful to someone.
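Since WAV is an uncompressed format, the only way to shrink it while keeping the .wav extension is to store less data (lower sample rate, fewer channels, smaller sample width); a sketch, assuming mono 8 kHz 16-bit is acceptable for your use case:

sound = AudioSegment.from_mp3("{}/{}".format(src_folder, name))
# downsample to 8 kHz, mono, 16-bit before exporting as wav
sound = sound.set_frame_rate(8000).set_channels(1).set_sample_width(2)
sound.export("{}/{}.wav".format(dst_folder, wav_name), format="wav")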
How to compress WAV file in python?
I have converted MP3 files to WAV format, but how can I compress a WAV file to a very small size, less than or equal to the MP3 size, without changing the file format?

from pydub import AudioSegment
import os

# files
src_folder = "D:/projects/data/mp3"
dst_folder = "D:/projects/data/wav"

# get all audio files
files = os.listdir(src_folder)
for name in files:
    # name of the file
    wav_name = name.replace(".mp3", "")
    try:
        # convert mp3 to wav
        sound = AudioSegment.from_mp3("{}/{}".format(src_folder, name))
        sound.export("{}/{}.wav".format(dst_folder, wav_name), format="wav")
    except Exception as e:
        pass
[ "s1.export(\"output.mp3\", format='mp3', parameters=[\"-ac\",\"2\",\"-ar\",\"8000\"])\nThe line of code managed to reduce my audio size by half its previous size. Hope this is helpful to someone\n" ]
[ 0 ]
[]
[]
[ "mp3", "python", "wav" ]
stackoverflow_0074459969_mp3_python_wav.txt
Q: How to print the numbers for 0 to 10 in python and skip 5 and 8 between them
I want to write a program using a while loop in Python where the numbers go from 0 to 10, but I want to skip 5 and 8, so the final result should be 0,1,2,3,4,6,7,9,10.

i=0 
while i<=10 :
    print(i)
    if i==5 or i==8 :
        break
    print(i)
    i+=1

I have tried this code but it is not working.

A: You want to continue your while loop instead of breaking out of it, therefore you can use continue:

i=0 
while i<=10 :
    if i==5 or i==8 :
        i += 1
        continue
    print(i)
    i+=1
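For comparison, the same output with a for loop and a membership test; a sketch:

for i in range(11):
    if i not in (5, 8):
        print(i)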
How to print the numbers for 0 to 10 in python and skip 5 and 8 between them
I want to write a program using a while loop in Python where the numbers go from 0 to 10, but I want to skip 5 and 8, so the final result should be 0,1,2,3,4,6,7,9,10.

i=0 
while i<=10 :
    print(i)
    if i==5 or i==8 :
        break
    print(i)
    i+=1

I have tried this code but it is not working.
[ "You will continue your while loop instead of break, therefore you can use continue:\ni=0 \nwhile i<=10 :\n if i==5 or i==8 :\n i += 1\n continue\n print(i)\n i+=1\n\n" ]
[ 0 ]
[]
[]
[ "python", "while_loop" ]
stackoverflow_0074460047_python_while_loop.txt
Q: How to add timedelta on a subset of a dataframe
I am trying to add a timedelta of 1 hour to a subset of my dataframe. I use df['2022-06-01 02:00:00':'2022-06-01 04:00:00'] to slice it and add + pd.Timedelta(hours=1), but I get an error. I want to add a timedelta only on '2022-06-01 02:00:00':'2022-06-01 04:00:00'. How can I achieve that? The solution can have the datetime either as index or as a column. This is the datetime in the dataframe:

2022-06-01 00:30:00, 2022-06-01 01:00:00, 2022-06-01 01:30:00, 2022-06-01 02:00:00, 2022-06-01 02:30:00, 2022-06-01 03:00:00, 2022-06-01 03:30:00, 2022-06-01 04:00:00, 2022-06-01 04:30:00, 2022-06-01 05:00:00, 2022-06-01 05:30:00, 2022-06-01 06:00:00, 2022-11-16 06:30:00, 2022-11-16 07:00:00, 2022-11-16 07:30:00, 2022-11-16 08:00:00

A: Solutions for a DatetimeIndex:
You can create a mask and rename values of the index with Index.where:

mask = (df.index >= '2022-06-01 02:00:00') & (df.index <= '2022-06-01 04:00:00')
df.index = df.index.where(~mask, df.index + pd.Timedelta(hours=1))

Or get the indices and use DataFrame.rename with a dictionary:

i = df['2022-06-01 02:00:00':'2022-06-01 04:00:00'].index
df = df.rename(dict(zip(i, i + pd.Timedelta(hours=1))))

Solutions for a date column:
Use Series.between for a boolean mask and DataFrame.loc to set new values:

mask = df['date'].between('2022-06-01 02:00:00','2022-06-01 04:00:00')
df.loc[mask, 'date'] += pd.Timedelta(hours=1)

Or Series.mask:

df['date'] = df['date'].mask(mask, df['date'] + pd.Timedelta(hours=1))
How to add timedelta on a subset of a dataframe
I am trying to add a timedelta of 1 hour to a subset of my dataframe. I use df['2022-06-01 02:00:00':'2022-06-01 04:00:00'] to slice it and add + pd.Timedelta(hours=1), but I get an error. I want to add a timedelta only on '2022-06-01 02:00:00':'2022-06-01 04:00:00'. How can I achieve that? The solution can have the datetime either as index or as a column. This is the datetime in the dataframe:

2022-06-01 00:30:00, 2022-06-01 01:00:00, 2022-06-01 01:30:00, 2022-06-01 02:00:00, 2022-06-01 02:30:00, 2022-06-01 03:00:00, 2022-06-01 03:30:00, 2022-06-01 04:00:00, 2022-06-01 04:30:00, 2022-06-01 05:00:00, 2022-06-01 05:30:00, 2022-06-01 06:00:00, 2022-11-16 06:30:00, 2022-11-16 07:00:00, 2022-11-16 07:30:00, 2022-11-16 08:00:00
[ "Solutions for DatetimeIndex:\nYou can create mask and rename values of index by Index.where:\nmask = (df.index >= '2022-06-01 02:00:00') & (df.index <= '2022-06-01 04:00:00')\ndf.index = df.index.where(~mask, df.index + pd.Timedelta(hours=1))\n\nOr get indices and use DataFrame.rename by dictionary:\ni = df['2022-06-01 02:00:00':'2022-06-01 04:00:00'].index\n\ndf = df.rename(dict(zip(i, i + pd.Timedelta(hours=1))))\n\nSolutions for date column:\nUse Series.between for boolean mask and DataFrame.loc for set new values:\nmask = df['date'].between('2022-06-01 02:00:00','2022-06-01 04:00:00')\n\ndf.loc[mask, 'date'] += pd.Timedelta(hours=1)\n\nOr Series.mask:\ndf['date'] = df['date'].mask(mask, df['date'] + pd.Timedelta(hours=1))\n\n" ]
[ 1 ]
[]
[]
[ "datetime", "pandas", "python" ]
stackoverflow_0074460066_datetime_pandas_python.txt
Q: Text file Converter (replacing unknown words)
I started playing with Python and programming in general like 3 weeks ago, so be gentle ;) What I try to do is convert text files the way I want them to be; the text files have the same pattern, but the words I want to replace are unknown. So the program must first find them, set a pattern and then replace them with the words I want. For example:

xxxxx xxxxx Line3 - word - xxxx xxxx
xxxxx xxxx word
word xxxx word

Legend:
xxxxx = template words, present in every file
word = random word, our target

I am able to localize the first appearance of the word because it always appears in the same place of the file; from then on it appears randomly.
My code:

f1 = open('test.txt', 'r')
f2 = open('file2.txt', 'w')
pattern = ''
for line in f1.readlines():
    if line.startswith('Seat 1'):
        line = line.split(' ', 3)
        pattern = line[2]
        line = ' '.join(line)
        f2.write(line)
    elif pattern in line.strip():
        f2.write(line.replace(pattern, 'NewWord'))
    else:
        f2.write(line)
f1.close()
f2.close()

This code doesn't work. What's wrong?

A: Welcome to the world of Python!
I believe you are on the right track and are very close to the correct solution; however, I see a couple of potential issues which may cause your program to not run as expected.

If you are trying to see if a string equals another, I would use == instead of is (see this answer for more info).
When reading a file, lines end with \n, which means your variable line might never match your word. To fix this you could use strip, which automatically removes leading and trailing "space" characters (like a space or a new line character):

elif line.strip() == pattern:

This is not really a problem but a recommendation, since you are just starting out: when dealing with files it is highly recommended to use the with statement that Python provides (see question and/or tutorial).

Update:
I saw that you might have the word be part of the line, so instead of using == as recommended in point 1, you could use in, but you need to reverse the order, i.e.

elif pattern in line:
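Putting the answer's recommendations together, a compact version of the script could look like this; a sketch that keeps the original logic, with a guard for the case where no pattern has been found yet:

with open('test.txt') as f1, open('file2.txt', 'w') as f2:
    pattern = ''
    for line in f1:
        if line.startswith('Seat 1'):
            # remember the third space-separated token as the target word
            pattern = line.split(' ', 3)[2]
            f2.write(line)
        elif pattern and pattern in line:
            f2.write(line.replace(pattern, 'NewWord'))
        else:
            f2.write(line)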
Text file Converter (replacing unknown words)
I started playing with Python and programming in general like 3 weeks ago, so be gentle ;) What I try to do is convert text files the way I want them to be; the text files have the same pattern, but the words I want to replace are unknown. So the program must first find them, set a pattern and then replace them with the words I want. For example:

xxxxx xxxxx Line3 - word - xxxx xxxx
xxxxx xxxx word
word xxxx word

Legend:
xxxxx = template words, present in every file
word = random word, our target

I am able to localize the first appearance of the word because it always appears in the same place of the file; from then on it appears randomly.
My code:

f1 = open('test.txt', 'r')
f2 = open('file2.txt', 'w')
pattern = ''
for line in f1.readlines():
    if line.startswith('Seat 1'):
        line = line.split(' ', 3)
        pattern = line[2]
        line = ' '.join(line)
        f2.write(line)
    elif pattern in line.strip():
        f2.write(line.replace(pattern, 'NewWord'))
    else:
        f2.write(line)
f1.close()
f2.close()

This code doesn't work. What's wrong?
[ "welcome to the world of Python!\nI believe you are on the right track and are very close to the correct solution, however I see a couple of potential issues which may cause your program to not run as expected.\n\nIf you are trying to see if a string equals another, I would use == instead of is (see this answer for more info)\nWhen reading a file, lines end with \\n which means your variable line might never match your word. To fix this you could use strip, which automatically removes leading and trailing \"space\" characters (like a space or a new line character)\n\nelif line.strip() == pattern:\n\n\nThis is not really a problem but a recommendation, since you are just starting out. When dealing with files it is highly recommended to use the with statement that Python provides (see question and/or tutorial)\n\nUpdate:\nI saw that you might have the word be part of the line, do instead of using == as recommended in point 1, you could use in, but you need to reverse the order, i.e.\nelif pattern in line:\n\n" ]
[ 0 ]
[]
[]
[ "converters", "python", "replace", "text" ]
stackoverflow_0074459798_converters_python_replace_text.txt
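A consolidated sketch of the fixes discussed in the answer above, assuming (as in the asker's code) that the marker line starts with 'Seat 1' and that the target word is the third space-separated token on it. One extra guard is added: while pattern is still the empty string, '' in line is always true, so without the pattern check any line seen before the first marker line would be mangled by replace('', 'NewWord'):

    with open('test.txt', 'r') as f1, open('file2.txt', 'w') as f2:
        pattern = ''
        for line in f1:
            if line.startswith('Seat 1'):
                pattern = line.split(' ', 3)[2]   # third token on the marker line
                f2.write(line)
            elif pattern and pattern in line:
                f2.write(line.replace(pattern, 'NewWord'))
            else:
                f2.write(line)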
Q: How to turn a cell of dataframe into list of list, when another cell does not equals to a certain value? I have the following pandas dataframe Consideration_level | Consideration_value ------------------------------------------------- Car_ID 00111 Car_ID 00222 Car_type, Location Jeep, NYC Car_Color, Location Pink, BOS I want to turn it into Consideration_level | Consideration_value ------------------------------------------------- Car_ID [00111] Car_ID [00222] Car_type, Location [[Jeep], [NYC]] Car_Color, Location [[Pink], [BOS]] So essentially, I want when the Consideration_level does NOT equal to "Car_ID", the Consideration_value should be a list of list; when Consideration_level equals to "Car_ID", the Consideration_value would be one single list. I tried list into a list of lists def extractDigits(lst): return [[el] for el in lst] But I don't know how to do the logic here... Any help is appreciated! A: You can refer below answers : i have used map and lambda function Solution import pandas as pd import numpy as np Consideration_level=['Car_ID','Car_ID','Car_type, Location','Car_type, Location'] Consideration_value=['00111','00222','Jeep, NYC','Pink, BOS'] data_dict = {'Consideration_level':Consideration_level, 'Consideration_value':Consideration_value} df = pd.DataFrame(data_dict) df['Consideration_value'] = list(map(lambda CL : list(CL) if CL == 'Car_ID' else \ np.array(list(CL.split(","))).reshape(-1,1).tolist(),\ df['Consideration_value'])) df Output Consideration_level Consideration_value 0 Car_ID [[00111]] 1 Car_ID [[00222]] 2 Car_type, Location [[Jeep], [ NYC]] 3 Car_type, Location [[Pink], [ BOS]]
How to turn a cell of a dataframe into a list of lists, when another cell does not equal a certain value?
I have the following pandas dataframe Consideration_level | Consideration_value ------------------------------------------------- Car_ID 00111 Car_ID 00222 Car_type, Location Jeep, NYC Car_Color, Location Pink, BOS I want to turn it into Consideration_level | Consideration_value ------------------------------------------------- Car_ID [00111] Car_ID [00222] Car_type, Location [[Jeep], [NYC]] Car_Color, Location [[Pink], [BOS]] So essentially, I want when the Consideration_level does NOT equal to "Car_ID", the Consideration_value should be a list of list; when Consideration_level equals to "Car_ID", the Consideration_value would be one single list. I tried list into a list of lists def extractDigits(lst): return [[el] for el in lst] But I don't know how to do the logic here... Any help is appreciated!
[ "You can refer below answers :\ni have used map and lambda function\nSolution\nimport pandas as pd\nimport numpy as np\nConsideration_level=['Car_ID','Car_ID','Car_type, Location','Car_type, Location']\nConsideration_value=['00111','00222','Jeep, NYC','Pink, BOS']\ndata_dict = {'Consideration_level':Consideration_level,\n 'Consideration_value':Consideration_value}\n\ndf = pd.DataFrame(data_dict)\n\ndf['Consideration_value'] = list(map(lambda CL : list(CL) if CL == 'Car_ID' else \\\n np.array(list(CL.split(\",\"))).reshape(-1,1).tolist(),\\\n df['Consideration_value']))\ndf\n\nOutput\n Consideration_level Consideration_value\n0 Car_ID [[00111]]\n1 Car_ID [[00222]]\n2 Car_type, Location [[Jeep], [ NYC]]\n3 Car_type, Location [[Pink], [ BOS]]\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "list", "pandas", "python" ]
stackoverflow_0074453260_dataframe_list_pandas_python.txt
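One thing to watch in the answer above: the lambda receives values from df['Consideration_value'], so the CL == 'Car_ID' test compares the value (e.g. '00111') rather than the level column. If the intent is to branch on Consideration_level, a small illustrative sketch that uses both columns (not verified against the asker's real data) could look like this:

    import pandas as pd

    df = pd.DataFrame({
        'Consideration_level': ['Car_ID', 'Car_ID', 'Car_type, Location', 'Car_Color, Location'],
        'Consideration_value': ['00111', '00222', 'Jeep, NYC', 'Pink, BOS'],
    })

    def to_lists(level, value):
        # single flat list for Car_ID rows, list of one-element lists otherwise
        if level == 'Car_ID':
            return [value]
        return [[v.strip()] for v in value.split(',')]

    df['Consideration_value'] = [
        to_lists(lvl, val)
        for lvl, val in zip(df['Consideration_level'], df['Consideration_value'])
    ]
    print(df)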
Q: tqdm format remaining time I'm running a very long process, and iterating by with tqdm(total=N) as pbar: time.sleep(1) pbar.update(1) displays something like 0%| | 528912/1.1579208923731618e+77 [00:05<320918211271131291051900907686223146304413317191111137850058393514584:44:48, 100226.38it/s [Quite a big combinatorial process, I'm dealing with :S ] I will certainly try to optimize it and decrease the search-space (which I think I really cannot), but anyway, I'm curious whether if the 320918211271131291051900907686223146304413317191111137850058393514584 number of hours could be expressed as number of years + remaining days + remaining hours + remaining minutes + remaning seconds. Any idea on how can this be achieved? I certainly love tqdm, but it doesn't seem easy to customize. Thanks in advance! A: It's been a while, but for the sake of completiness, here it goes: class TqdmExtraFormat(tqdm): @property def format_dict(self): d = super(TqdmExtraFormat, self).format_dict rate = d["rate"] remaining_secs = (d["total"] - d["n"]) / rate if rate and d["total"] else 0 my_remaining = seconds_to_time_string(remaining_secs) d.update(my_remaining=(my_remaining)) return d b = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{my_remaining}, {rate_fmt}{postfix}]' jobs_list = range(1,1000000) for i in TqdmExtraFormat(jobs_list, bar_format=b): time.sleep(0.01) A: I like to use humanize for this purpose https://pypi.org/project/humanize/ import humanize humanize.naturaldelta(39191835) -> '1 year, 2 months'
tqdm format remaining time
I'm running a very long process, and iterating by with tqdm(total=N) as pbar: time.sleep(1) pbar.update(1) displays something like 0%| | 528912/1.1579208923731618e+77 [00:05<320918211271131291051900907686223146304413317191111137850058393514584:44:48, 100226.38it/s [Quite a big combinatorial process, I'm dealing with :S ] I will certainly try to optimize it and decrease the search-space (which I think I really cannot), but anyway, I'm curious whether if the 320918211271131291051900907686223146304413317191111137850058393514584 number of hours could be expressed as number of years + remaining days + remaining hours + remaining minutes + remaning seconds. Any idea on how can this be achieved? I certainly love tqdm, but it doesn't seem easy to customize. Thanks in advance!
[ "It's been a while, but for the sake of completiness, here it goes:\nclass TqdmExtraFormat(tqdm):\n @property\n def format_dict(self):\n d = super(TqdmExtraFormat, self).format_dict\n rate = d[\"rate\"]\n remaining_secs = (d[\"total\"] - d[\"n\"]) / rate if rate and d[\"total\"] else 0\n my_remaining = seconds_to_time_string(remaining_secs)\n d.update(my_remaining=(my_remaining))\n return d\nb = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{my_remaining}, {rate_fmt}{postfix}]'\njobs_list = range(1,1000000)\nfor i in TqdmExtraFormat(jobs_list, bar_format=b):\n time.sleep(0.01)\n\n", "I like to use humanize for this purpose https://pypi.org/project/humanize/\nimport humanize\nhumanize.naturaldelta(39191835)\n\n-> '1 year, 2 months'\n\n" ]
[ 0, 0 ]
[]
[]
[ "customization", "python", "tqdm" ]
stackoverflow_0070035937_customization_python_tqdm.txt
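The first answer above calls seconds_to_time_string() without defining it. One possible implementation is sketched below — this is an assumption about what that helper was meant to do (years/days plus an HH:MM:SS remainder), not the original author's code:

    def seconds_to_time_string(secs):
        secs = int(secs)
        years, secs = divmod(secs, 365 * 24 * 3600)
        days, secs = divmod(secs, 24 * 3600)
        hours, secs = divmod(secs, 3600)
        minutes, secs = divmod(secs, 60)
        parts = []
        if years:
            parts.append(f'{years}y')
        if days:
            parts.append(f'{days}d')
        parts.append(f'{hours:02d}:{minutes:02d}:{secs:02d}')
        return ' '.join(parts)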
Q: UVa 458 - The Decoder: Python runtime error Here's my code while True: try: a = input() except EOFError: break print(''.join([chr(ord(i) - 7) for i in a])) I saw the same question in this website before. This is the link: UVa problem 458 - The Decoder python runtime error But it seems doesn't work. I've tried every thing I know. It can work on my computer but UVa still give me runtime error. How should I improve my code? Thank you. I've browse all the websites I found, but sadly, it's rare to find the person who solve this problem by Python. I found a weird situation. I solve this problem by C++ formerly. This is my code. #include<iostream> using namespace std; int main(){ string s; while(cin >> s){ string ans; for(int i = 0; i < s.length(); ++i) ans += char(int(s[i] - 7)); cout << ans << endl; } return 0; } And I got "Accept" on online judge. It seems that using C++ to solve this problem doesn't have to consider the details mentioned below. How's that happened? A: So the Python runtime error comes from chr(ord(i) - 7) as ord(i) - 7 might become negative. If you limit the values to the range (0, 128), this should not produce the error. I'd assume that encoding is mod128-based, so chr((ord(i) - 7) % 128). Unfortunately, this gives a "Wrong answer" on "online judge". There's also this part of the task: ... the printable portion of the ASCII character set Which is 32 to 126. But the task doesn't specify what to do exactly in case the value is outside of the range, so I thought that it's mod-this range: while True: try: a = input() except EOFError: break for i in a: decode = (ord(i) - 7 - 32) % 94 + 32 print(chr(decode), end='') print() But that also gives a "Wrong answer". Skipping values that are outside the "printable range" and leaving them unchanged also doesn't work.
UVa 458 - The Decoder: Python runtime error
Here's my code while True: try: a = input() except EOFError: break print(''.join([chr(ord(i) - 7) for i in a])) I saw the same question in this website before. This is the link: UVa problem 458 - The Decoder python runtime error But it seems doesn't work. I've tried every thing I know. It can work on my computer but UVa still give me runtime error. How should I improve my code? Thank you. I've browse all the websites I found, but sadly, it's rare to find the person who solve this problem by Python. I found a weird situation. I solve this problem by C++ formerly. This is my code. #include<iostream> using namespace std; int main(){ string s; while(cin >> s){ string ans; for(int i = 0; i < s.length(); ++i) ans += char(int(s[i] - 7)); cout << ans << endl; } return 0; } And I got "Accept" on online judge. It seems that using C++ to solve this problem doesn't have to consider the details mentioned below. How's that happened?
[ "So the Python runtime error comes from chr(ord(i) - 7) as ord(i) - 7 might become negative. If you limit the values to the range (0, 128), this should not produce the error.\nI'd assume that encoding is mod128-based, so chr((ord(i) - 7) % 128). Unfortunately, this gives a \"Wrong answer\" on \"online judge\".\nThere's also this part of the task:\n\n... the printable portion of the ASCII character set\n\nWhich is 32 to 126. But the task doesn't specify what to do exactly in case the value is outside of the range, so I thought that it's mod-this range:\nwhile True:\n try:\n a = input()\n except EOFError:\n break\n\n for i in a:\n decode = (ord(i) - 7 - 32) % 94 + 32\n print(chr(decode), end='')\n print()\n\nBut that also gives a \"Wrong answer\".\nSkipping values that are outside the \"printable range\" and leaving them unchanged also doesn't work.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074459570_python.txt
Q: How to convert jpg file to tiff file in python? The conversion of .jpg file to .tiff file in python. I have tried the following two approaches but while using the output tiff file in my project, it doesn't support it. import aspose.words as aw doc = aw.Document() builder = aw.DocumentBuilder(doc) shape = builder.insert_image("0.jpg") shape.image_data.save("/TIFFs/0.tiff") from PIL import Image im = Image.open('0.jpg') im.save("/TIFFs/0.tiff", 'TIFF') A: If you go to the PIL documentation about TIFF format https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#tiff You will notice that it says: Note Beginning in version 5.0.0, Pillow requires libtiff to read or write compressed files. Prior to that release, Pillow had buggy support for reading Packbits, LZW and JPEG compressed TIFFs without using libtiff. So make sure you have libtiff installed as specified in the documentation, and update your PIL library to >=5.0.0 if you are using an older version
How to convert jpg file to tiff file in python?
The conversion of .jpg file to .tiff file in python. I have tried the following two approaches but while using the output tiff file in my project, it doesn't support it. import aspose.words as aw doc = aw.Document() builder = aw.DocumentBuilder(doc) shape = builder.insert_image("0.jpg") shape.image_data.save("/TIFFs/0.tiff") from PIL import Image im = Image.open('0.jpg') im.save("/TIFFs/0.tiff", 'TIFF')
[ "If you go to the PIL documentation about TIFF format\nhttps://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#tiff\nYou will notice that it says:\nNote\n\nBeginning in version 5.0.0, Pillow requires libtiff to read or write compressed files. \nPrior to that release, Pillow had buggy support for reading Packbits, \nLZW and JPEG compressed TIFFs without using libtiff.\n\nSo make sure you have libtiff installed as specified in the documentation, and update your PIL library to >=5.0.0 if you are using an older version\n" ]
[ 0 ]
[]
[]
[ "file_conversion", "jpeg", "python", "tiff" ]
stackoverflow_0074459609_file_conversion_jpeg_python_tiff.txt
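Following the answer above, a quick sanity check before retrying the conversion — print the Pillow version (pip install --upgrade Pillow if it is older than 5.0.0) and whether libtiff support was compiled in. The feature name 'libtiff' is taken from Pillow's features module; treat the whole snippet as illustrative:

    import PIL
    from PIL import Image, features

    print(PIL.__version__)            # the answer above suggests >= 5.0.0
    print(features.check('libtiff'))  # True if Pillow was built against libtiff

    im = Image.open('0.jpg')
    im.save('0.tiff', format='TIFF')  # uncompressed TIFF, which does not need libtiff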
Q: Extract Json Data with Python My JSON data looks like this: { "componentId": "SD1:1100047938", "componentType": "Device", "name": "WR50MS15-7938 (WR 33)", "product": "SB 5000TL", "productTagId": 9037, "pvPower": 886, "serial": "1100047938", "specWhOutToday": 3.0909803921568626, "specWhOutYesterday": 2.924313725490196, "state": 307, "totWhOutToday": 15764, "totWhOutYesterday": 14914 } How could I only extract: "state" to a separate file "pv-power" to a separate file ? Thanks !
Extract Json Data with Python
My JSON data looks like this: { "componentId": "SD1:1100047938", "componentType": "Device", "name": "WR50MS15-7938 (WR 33)", "product": "SB 5000TL", "productTagId": 9037, "pvPower": 886, "serial": "1100047938", "specWhOutToday": 3.0909803921568626, "specWhOutYesterday": 2.924313725490196, "state": 307, "totWhOutToday": 15764, "totWhOutYesterday": 14914 } How could I only extract: "state" to a separate file "pv-power" to a separate file ? Thanks !
[ "import requests\nimport json\nimport urllib3\nimport sys\nimport requests\n\nurllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\n\nfehler = '\"state\": 35'\nurltoken = \"https://172.16.63.100/api/v1/token\"\nurldaten = \"https://172.16.63.100/api/v1/overview/Plant:1/devices?todayDate=2022-10-17T12%3A53%3A16.746Z\"\nurlaktuell = \"https://172.16.63.100/api/v1/widgets/gauge/power?componentId=Plant%3A1&type=PvProduction\"\n\ndata = {\n \"grant_type\": \"password\",\n \"username\": \"...\",\n \"password\": \"...\",\n}\n\nresponse = requests.post(urltoken, data, verify=False)\n\n#print(response.json())\n\ndata = json.loads(response.text)\n\nwith open('/usr/lib/nagios/plugins/check_DataManager/token.txt', 'w') as f:\n data = json.dump(data, f, indent = 2)\n\nwith open('/usr/lib/nagios/plugins/check_DataManager/token.txt') as json_file:\n data1 = json.load(json_file)\n\ntoken = data1[\"access_token\"]\n\npayload={}\nheaders = {'Authorization': 'Bearer ' + token }\n\nresponse = requests.request(\"GET\", urldaten, headers=headers, data=payload, verify=False)\n\n\ndata = json.loads(response.text)\n\nwith open('/usr/lib/nagios/plugins/check_DataManager/info.txt', 'w') as f:\n data = json.dump(data, f, indent = 2)\n\n\n\nif fehler in open('/usr/lib/nagios/plugins/check_DataManager/info.txt').read():\n print(\"Mindestens ein Wechselrichter hat einen Fehler!\")\n exit(1)\n\nelse:\n print(\"Alle Wechselrichter Online!\")\n exit(0)\n\n\npayload={}\nheaders = {'Authorization': 'Bearer ' + token }\n\nresponse = requests.request(\"GET\", urlaktuell, headers=headers, data=payload, verify=False)\n\naktuell = json.loads(response.text)\n\ndaten = aktuell[\"value\"]\n\nprint(\"Aktuelle Leistung:\", 0.001*daten , \"KWh\")\n\nI now managed to do all i wanted like this :)\n", " url = \"https://172.16.63.100/api/v1/overview/Plant:1/devices?todayDate=2022-10-17T12%3A53%3A16.746Z\"\n\npayload={}\nheaders = {\n 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2NjYwMTU0MzMsInN1YiI6Ik1iYWNobCIsInVpZCI6Ijc1YmNkNTM2LTFhOTYtNDQ4My05MjQxLWZkNjY5Zjk3M2Y5OCIsImV4cCI6MTY2NjAxOTAzM30.bMMAsD8iPrAXDp7fbnwYL3Y8lj4Ktok3tU9NHZkYq8s'\n}\n\nresponse = requests.request(\"GET\", url, headers=headers, data=payload, verify=False)\n\n#print(response.text)\n\ndata = json.loads(response.text)\n\nwith open('data.json', 'w') as f:\n data = json.dump(data, f, indent = 2) \n\nThis is my Code for gathering the JSON Data.\nI need to exctract the above mentioned Values.\n", "After you run data = json.loads(response.text) , the json is loaded into your data variable as a python dictionary.\nSo state = data.state and pvPower = data.pvPower should give the info you're after.\nI'm not exactly sure what you want to do with that information, as far as extracting to a separate file. But, json.dump() does output data to a json file.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074098501_json_python.txt
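To answer the original question directly — writing "state" and "pvPower" to two separate files — a minimal sketch, assuming the JSON shown has already been saved to data.json as in the second answer. Note that json.load() returns a plain dict, so the values are read with data['state'] / data['pvPower'] (attribute access such as data.state would raise AttributeError); the output file names are placeholders:

    import json

    with open('data.json') as f:
        data = json.load(f)

    with open('state.txt', 'w') as f:
        f.write(str(data['state']))

    with open('pv_power.txt', 'w') as f:
        f.write(str(data['pvPower']))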
Q: How can I intergrate my django app to an alredy made database, probably from a PHP app this is part of the database. The extention is .sql -- phpMyAdmin SQL Dump -- version 3.2.0.1 -- http://www.phpmyadmin.net -- -- Host: localhost -- Generation Time: May 20, 2011 at 05:08 PM -- Server version: 5.1.36 -- PHP Version: 5.2.9-2 SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */; -- -- Database: `bincomphptest` -- -- -------------------------------------------------------- -- -- Table structure for table `agentname` -- DROP TABLE IF EXISTS `agentname`; CREATE TABLE IF NOT EXISTS `agentname` ( `name_id` int(11) NOT NULL AUTO_INCREMENT, `firstname` varchar(255) NOT NULL, `lastname` varchar(255) NOT NULL, `email` varchar(255) DEFAULT NULL, `phone` char(13) NOT NULL, `pollingunit_uniqueid` int(11) NOT NULL, PRIMARY KEY (`name_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ; I was creating a new database on django with mysqlite but I think I'm inexperience and this maybe possible A: Django can read legacy databases and even auto-generate the models for them, connect to the db from settings and then run this command from your terminal python manage.py inspectdb it will print the models and you can use it as normal Django models A: To interpret these values in django you must look carefully at each field type so that you can create the variables that will store these values, such as name_id, firstname, lastname... # settings.py DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'django', 'USER' : 'userid', 'PASSWORD' :'password', 'HOST' : 'localhost' } } this example should work once the table has been defined in models and that you have added the respective app 'exampleApp.apps...' in settings.py. import uuid from django.db import models class Agent(models.Model): name_id = models.UUIDField(primary_key=True, editable=False) firstname = models.CharField(max_length=255) lastname = models.CharField(max_length=255) email = models.EmailField(max_length=255) phone = models.CharField(max_length=25)
How can I integrate my Django app with an already made database, probably from a PHP app
this is part of the database. The extention is .sql -- phpMyAdmin SQL Dump -- version 3.2.0.1 -- http://www.phpmyadmin.net -- -- Host: localhost -- Generation Time: May 20, 2011 at 05:08 PM -- Server version: 5.1.36 -- PHP Version: 5.2.9-2 SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */; -- -- Database: `bincomphptest` -- -- -------------------------------------------------------- -- -- Table structure for table `agentname` -- DROP TABLE IF EXISTS `agentname`; CREATE TABLE IF NOT EXISTS `agentname` ( `name_id` int(11) NOT NULL AUTO_INCREMENT, `firstname` varchar(255) NOT NULL, `lastname` varchar(255) NOT NULL, `email` varchar(255) DEFAULT NULL, `phone` char(13) NOT NULL, `pollingunit_uniqueid` int(11) NOT NULL, PRIMARY KEY (`name_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ; I was creating a new database on django with mysqlite but I think I'm inexperience and this maybe possible
[ "Django can read legacy databases and even auto-generate the models for them,\nconnect to the db from settings and then run this command from your terminal\npython manage.py inspectdb\n\nit will print the models and you can use it as normal Django models\n", "To interpret these values in django you must look carefully at each field type so that you can create the variables that will store these values, such as name_id, firstname, lastname...\n# settings.py \nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'django',\n 'USER' : 'userid',\n 'PASSWORD' :'password',\n 'HOST' : 'localhost'\n }\n}\n\nthis example should work once the table has been defined in models and that you have added the respective app 'exampleApp.apps...' in settings.py.\nimport uuid\nfrom django.db import models\n\nclass Agent(models.Model): \n name_id = models.UUIDField(primary_key=True, editable=False)\n firstname = models.CharField(max_length=255)\n lastname = models.CharField(max_length=255)\n email = models.EmailField(max_length=255)\n phone = models.CharField(max_length=25)\n\n" ]
[ 1, 0 ]
[]
[]
[ "database", "django", "mysql", "python" ]
stackoverflow_0074453409_database_django_mysql_python.txt
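For the agentname table shown in the SQL dump, the model that python manage.py inspectdb generates looks roughly like the sketch below (treat it as illustrative — run inspectdb against the real database for the authoritative version). managed = False tells Django not to create, alter or drop the existing table:

    from django.db import models

    class Agentname(models.Model):
        name_id = models.AutoField(primary_key=True)
        firstname = models.CharField(max_length=255)
        lastname = models.CharField(max_length=255)
        email = models.CharField(max_length=255, blank=True, null=True)
        phone = models.CharField(max_length=13)
        pollingunit_uniqueid = models.IntegerField()

        class Meta:
            managed = False
            db_table = 'agentname'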
Q: AWS Glue Job : An error occurred while calling getCatalogSource. None.get I was using Password/Username in my aws glue conenctions and now I switched to Secret Manager. Now I get this error when I run my etl job : An error occurred while calling o89.getCatalogSource. None.get Even tho the connections and crawlers works : The Connection Image. (I added the connection to the job details) The Crawlers Image. This example of the etl job that used to work before : import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job args = getResolvedOptions(sys.argv, ["JOB_NAME"]) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args["JOB_NAME"], args) # Script generated for node PostgreSQL PostgreSQL_node1663615620851 = glueContext.create_dynamic_frame.from_catalog( database="pg-db", table_name="postgres_schema_table", transformation_ctx="PostgreSQL_node1663615620851", ) this what I see as erros in the logs : 2022-09-19 19:28:19,322 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(73)): Error from Python:Traceback (most recent call last): File "/tmp/FC 2 job.py", line 19, in <module> transformation_ctx="PostgreSQL_node1663615620851", File "/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py", line 629, in from_catalog return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs) File "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", line 186, in create_dynamic_frame_from_catalog makeOptions(self._sc, additional_options), catalog_id), File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__ answer, self.gateway_client, self.target_id, self.name) File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco return f(*a, **kw) File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value format(target_id, ".", name), value) py4j.protocol.Py4JJavaError: An error occurred while calling o89.getCatalogSource. 
: java.util.NoSuchElementException: None.get at scala.None$.get(Option.scala:349) at scala.None$.get(Option.scala:347) at com.amazonaws.services.glue.util.DataCatalogWrapper.$anonfun$getJDBCConf$1(DataCatalogWrapper.scala:208) at scala.util.Try$.apply(Try.scala:209) at com.amazonaws.services.glue.util.DataCatalogWrapper.getJDBCConf(DataCatalogWrapper.scala:199) at com.amazonaws.services.glue.GlueContext.getGlueNativeJDBCSource(GlueContext.scala:485) at com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:320) at com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:185) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:750) and also this : 2022-09-19 19:28:19,348 ERROR [main] glueexceptionanalysis.GlueExceptionAnalysisListener (Logging.scala:logError(9)): [Glue Exception Analysis] { "Event": "GlueETLJobExceptionEvent", "Timestamp": 1663615699344, "Failure Reason": "Traceback (most recent call last):\n File \"/tmp/FC 2 job.py\", line 19, in <module>\n transformation_ctx=\"PostgreSQL_node1663615620851\",\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py\", line 629, in from_catalog\n return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs)\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/context.py\", line 186, in create_dynamic_frame_from_catalog\n makeOptions(self._sc, additional_options), catalog_id),\n File \"/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py\", line 1305, in __call__\n answer, self.gateway_client, self.target_id, self.name)\n File \"/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py\", line 111, in deco\n return f(*a, **kw)\n File \"/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py\", line 328, in get_return_value\n format(target_id, \".\", name), value)\npy4j.protocol.Py4JJavaError: An error occurred while calling o89.getCatalogSource.\n: java.util.NoSuchElementException: None.get\n\tat scala.None$.get(Option.scala:349)\n\tat scala.None$.get(Option.scala:347)\n\tat com.amazonaws.services.glue.util.DataCatalogWrapper.$anonfun$getJDBCConf$1(DataCatalogWrapper.scala:208)\n\tat scala.util.Try$.apply(Try.scala:209)\n\tat com.amazonaws.services.glue.util.DataCatalogWrapper.getJDBCConf(DataCatalogWrapper.scala:199)\n\tat com.amazonaws.services.glue.GlueContext.getGlueNativeJDBCSource(GlueContext.scala:485)\n\tat com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:320)\n\tat com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:185)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat 
java.lang.reflect.Method.invoke(Method.java:498)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)\n\tat py4j.Gateway.invoke(Gateway.java:282)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.GatewayConnection.run(GatewayConnection.java:238)\n\tat java.lang.Thread.run(Thread.java:750)\n", "Stack Trace": [ { "Declaring Class": "get_return_value", "Method Name": "format(target_id, \".\", name), value)", "File Name": "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", "Line Number": 328 }, { "Declaring Class": "deco", "Method Name": "return f(*a, **kw)", "File Name": "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", "Line Number": 111 }, { "Declaring Class": "__call__", "Method Name": "answer, self.gateway_client, self.target_id, self.name)", "File Name": "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", "Line Number": 1305 }, { "Declaring Class": "create_dynamic_frame_from_catalog", "Method Name": "makeOptions(self._sc, additional_options), catalog_id),", "File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", "Line Number": 186 }, { "Declaring Class": "from_catalog", "Method Name": "return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs)", "File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py", "Line Number": 629 }, { "Declaring Class": "<module>", "Method Name": "transformation_ctx=\"PostgreSQL_node1663615620851\",", "File Name": "/tmp/FC 2 job.py", "Line Number": 19 } ], "Last Executed Line number": 19, "script": "FC 2 job.py" } A: I have faced a similar issue. In my case it was not able to find the table at specified location. It looks to be same, Try checking the entities you have provided like db-name, table name etc. Should work !!
AWS Glue Job : An error occurred while calling getCatalogSource. None.get
I was using Password/Username in my aws glue conenctions and now I switched to Secret Manager. Now I get this error when I run my etl job : An error occurred while calling o89.getCatalogSource. None.get Even tho the connections and crawlers works : The Connection Image. (I added the connection to the job details) The Crawlers Image. This example of the etl job that used to work before : import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job args = getResolvedOptions(sys.argv, ["JOB_NAME"]) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args["JOB_NAME"], args) # Script generated for node PostgreSQL PostgreSQL_node1663615620851 = glueContext.create_dynamic_frame.from_catalog( database="pg-db", table_name="postgres_schema_table", transformation_ctx="PostgreSQL_node1663615620851", ) this what I see as erros in the logs : 2022-09-19 19:28:19,322 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(73)): Error from Python:Traceback (most recent call last): File "/tmp/FC 2 job.py", line 19, in <module> transformation_ctx="PostgreSQL_node1663615620851", File "/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py", line 629, in from_catalog return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs) File "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", line 186, in create_dynamic_frame_from_catalog makeOptions(self._sc, additional_options), catalog_id), File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__ answer, self.gateway_client, self.target_id, self.name) File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco return f(*a, **kw) File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value format(target_id, ".", name), value) py4j.protocol.Py4JJavaError: An error occurred while calling o89.getCatalogSource. 
: java.util.NoSuchElementException: None.get at scala.None$.get(Option.scala:349) at scala.None$.get(Option.scala:347) at com.amazonaws.services.glue.util.DataCatalogWrapper.$anonfun$getJDBCConf$1(DataCatalogWrapper.scala:208) at scala.util.Try$.apply(Try.scala:209) at com.amazonaws.services.glue.util.DataCatalogWrapper.getJDBCConf(DataCatalogWrapper.scala:199) at com.amazonaws.services.glue.GlueContext.getGlueNativeJDBCSource(GlueContext.scala:485) at com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:320) at com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:185) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:750) and also this : 2022-09-19 19:28:19,348 ERROR [main] glueexceptionanalysis.GlueExceptionAnalysisListener (Logging.scala:logError(9)): [Glue Exception Analysis] { "Event": "GlueETLJobExceptionEvent", "Timestamp": 1663615699344, "Failure Reason": "Traceback (most recent call last):\n File \"/tmp/FC 2 job.py\", line 19, in <module>\n transformation_ctx=\"PostgreSQL_node1663615620851\",\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py\", line 629, in from_catalog\n return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs)\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/context.py\", line 186, in create_dynamic_frame_from_catalog\n makeOptions(self._sc, additional_options), catalog_id),\n File \"/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py\", line 1305, in __call__\n answer, self.gateway_client, self.target_id, self.name)\n File \"/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py\", line 111, in deco\n return f(*a, **kw)\n File \"/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py\", line 328, in get_return_value\n format(target_id, \".\", name), value)\npy4j.protocol.Py4JJavaError: An error occurred while calling o89.getCatalogSource.\n: java.util.NoSuchElementException: None.get\n\tat scala.None$.get(Option.scala:349)\n\tat scala.None$.get(Option.scala:347)\n\tat com.amazonaws.services.glue.util.DataCatalogWrapper.$anonfun$getJDBCConf$1(DataCatalogWrapper.scala:208)\n\tat scala.util.Try$.apply(Try.scala:209)\n\tat com.amazonaws.services.glue.util.DataCatalogWrapper.getJDBCConf(DataCatalogWrapper.scala:199)\n\tat com.amazonaws.services.glue.GlueContext.getGlueNativeJDBCSource(GlueContext.scala:485)\n\tat com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:320)\n\tat com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:185)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat 
java.lang.reflect.Method.invoke(Method.java:498)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)\n\tat py4j.Gateway.invoke(Gateway.java:282)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.GatewayConnection.run(GatewayConnection.java:238)\n\tat java.lang.Thread.run(Thread.java:750)\n", "Stack Trace": [ { "Declaring Class": "get_return_value", "Method Name": "format(target_id, \".\", name), value)", "File Name": "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", "Line Number": 328 }, { "Declaring Class": "deco", "Method Name": "return f(*a, **kw)", "File Name": "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", "Line Number": 111 }, { "Declaring Class": "__call__", "Method Name": "answer, self.gateway_client, self.target_id, self.name)", "File Name": "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", "Line Number": 1305 }, { "Declaring Class": "create_dynamic_frame_from_catalog", "Method Name": "makeOptions(self._sc, additional_options), catalog_id),", "File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", "Line Number": 186 }, { "Declaring Class": "from_catalog", "Method Name": "return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs)", "File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py", "Line Number": 629 }, { "Declaring Class": "<module>", "Method Name": "transformation_ctx=\"PostgreSQL_node1663615620851\",", "File Name": "/tmp/FC 2 job.py", "Line Number": 19 } ], "Last Executed Line number": 19, "script": "FC 2 job.py" }
[ "I have faced a similar issue. In my case it was not able to find the table at specified location. It looks to be same, Try checking the entities you have provided like db-name, table name etc. Should work !!\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_glue", "aws_glue_data_catalog", "data_lake", "python" ]
stackoverflow_0073778532_amazon_web_services_aws_glue_aws_glue_data_catalog_data_lake_python.txt
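A way to make the "check the entities" advice above concrete — a hedged boto3 sketch that confirms the catalog database/table the job points at still exist and lists the registered connections after the switch to Secrets Manager. The database and table names are the ones from the question; run it with the same credentials and region as the job:

    import boto3

    glue = boto3.client('glue')

    # does the catalog entry the job references actually exist?
    table = glue.get_table(DatabaseName='pg-db', Name='postgres_schema_table')
    print(table['Table'].get('Parameters', {}))                      # crawler-written properties live here
    print(table['Table'].get('StorageDescriptor', {}).get('Location'))

    # is the Secrets-Manager-backed connection still registered?
    for conn in glue.get_connections()['ConnectionList']:
        print(conn['Name'], conn['ConnectionType'])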
Q: How do I split a string into separate lists? I am given this as a string: Orville Wright 21 July 1988 \n Rogelio Holloway 13 September 1988 \n Marjorie Figueroa 9 October 1988 \n I need to separate the names from the dates and print it like this: Birthdate: 21 July 1988 \n 13 September 1988 \n 9 October 1988 \n etc.. I tried to save the string into a variable and split it into a list content = "" temp = content.strip() temp = temp.split() A: You can use regex for this problem: import re str = 'Orville Wright 21 July 1988 \n Rogelio Holloway 13 September 1988 \n Marjorie Figueroa 9 October 1988 \n' Birthdate = re.findall(r'(\d+ \w+ \d+)', str) >>> ['21 July 1988', '13 September 1988', '9 October 1988'] A: Assuming this is how the content will be content = 'Orville Wright 21 July 1988 \n Rogelio Holloway 13 September 1988 \n Marjorie Figueroa 9 October 1988 \n' and then we do content = content.strip().split('\n') BirthDate = '' for a in aa: a = a.strip().split(' ') date = f'{a[2]} {a[3]} {a[4]} \n ' BirthDate = BirthDate + date
How do I split a string into separate lists?
I am given this as a string: Orville Wright 21 July 1988 \n Rogelio Holloway 13 September 1988 \n Marjorie Figueroa 9 October 1988 \n I need to separate the names from the dates and print it like this: Birthdate: 21 July 1988 \n 13 September 1988 \n 9 October 1988 \n etc.. I tried to save the string into a variable and split it into a list content = "" temp = content.strip() temp = temp.split()
[ "You can use regex for this problem:\nimport re\nstr = 'Orville Wright 21 July 1988 \\n Rogelio Holloway 13 September 1988 \\n Marjorie Figueroa 9 October 1988 \\n'\nBirthdate = re.findall(r'(\\d+ \\w+ \\d+)', str)\n>>> ['21 July 1988', '13 September 1988', '9 October 1988']\n\n", "Assuming this is how the content will be\ncontent = 'Orville Wright 21 July 1988 \\n Rogelio Holloway 13 September 1988 \\n Marjorie Figueroa 9 October 1988 \\n'\n\nand then we do\ncontent = content.strip().split('\\n')\n\nBirthDate = ''\n\nfor a in aa:\n a = a.strip().split(' ')\n date = f'{a[2]} {a[3]} {a[4]} \\n '\n BirthDate = BirthDate + date\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "split" ]
stackoverflow_0074460056_python_split.txt
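Note that the second answer above loops over a variable named aa that is never assigned, so it fails as written. A corrected sketch of the same idea — split on newlines and keep the last three space-separated tokens of each line — assuming every line really ends with day, month and year:

    content = 'Orville Wright 21 July 1988 \n Rogelio Holloway 13 September 1988 \n Marjorie Figueroa 9 October 1988 \n'

    print('Birthdate:')
    for line in content.strip().split('\n'):
        parts = line.split()
        if len(parts) >= 3:
            print(' '.join(parts[-3:]))   # last three tokens: day, month, year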
Q: How to load SQL file into dbt macro using jinja include command? I have a dbt macro where I am trying to load a stand-alone sql file. I would then like to execute the SQL statement that I loaded by calling my macro. I am attempting to use jinja's include statement. # test_sql_macro.sql {% macro test_sql_macro() -%} {%- if execute -%} {%- call statement('do_stuff', fetch_result=True) -%} {% include 'select.sql' %} {%- endcall -%} {%- set call_results = load_result('do_stuff') -%} {{ log("Snowflake response: " ~ call_results, info=True) }} {% endif %} {%- endmacro %} # select.sql SELECT * FROM MY_DB.MY_SCHEMA.MY_TABLE I am running the macro with dbt run-operation test_sql_macro --project-dir . --profiles-dir . but I am getting an error Encountered an error while running operation: Compilation Error in macro statement (macros/etc/statement.sql) no loader for this environment specified After some search I came across setting template renderer in Python, however I am wondering if it is possible to set the renderer in the macro itself. Is it possible to accomplish this in dbt? A: dbt doesn't support jinja's include tag. As a workaround, you can use a macro to "import" other sql statements into the jinja context. For example: Use a macro to hold arbitrary sql statements. {% macro my_query() %} drop table if exists films cascade; create table films ( film_id integer, title varchar ); insert into films (film_id, title) values (1, 'The Godfather'), (2, 'The Wizard of Oz'), (3, 'Citizen Kane') ; {% endmacro %} Call the sql statement from another macro. {% macro my_macro() %} {# use a statement block #} {% call statement('films', fetch_result=True, auto_begin=True) %} {# sql query is wrapped in a macro #} {{ my_query() }} {% endcall %} {# verify the results, print to stdout #} {% set results = run_query('SELECT * FROM films') %} {{ results.print_table() }} {% endmacro %} run the operation $ dbt run-operation my_macro Running with dbt=1.2.0 | film_id | title | | ------- | ---------------- | | 1 | The Godfather | | 2 | The Wizard of Oz | | 3 | Citizen Kane |
How to load SQL file into dbt macro using jinja include command?
I have a dbt macro where I am trying to load a stand-alone sql file. I would then like to execute the SQL statement that I loaded by calling my macro. I am attempting to use jinja's include statement. # test_sql_macro.sql {% macro test_sql_macro() -%} {%- if execute -%} {%- call statement('do_stuff', fetch_result=True) -%} {% include 'select.sql' %} {%- endcall -%} {%- set call_results = load_result('do_stuff') -%} {{ log("Snowflake response: " ~ call_results, info=True) }} {% endif %} {%- endmacro %} # select.sql SELECT * FROM MY_DB.MY_SCHEMA.MY_TABLE I am running the macro with dbt run-operation test_sql_macro --project-dir . --profiles-dir . but I am getting an error Encountered an error while running operation: Compilation Error in macro statement (macros/etc/statement.sql) no loader for this environment specified After some search I came across setting template renderer in Python, however I am wondering if it is possible to set the renderer in the macro itself. Is it possible to accomplish this in dbt?
[ "dbt doesn't support jinja's include tag.\nAs a workaround, you can use a macro to \"import\" other sql statements into the jinja context.\nFor example:\nUse a macro to hold arbitrary sql statements.\n{% macro my_query() %}\n\n drop table if exists films cascade;\n\n create table films (\n film_id integer,\n title varchar\n );\n\n insert into films (film_id, title) values\n (1, 'The Godfather'),\n (2, 'The Wizard of Oz'),\n (3, 'Citizen Kane')\n ;\n\n{% endmacro %}\n\nCall the sql statement from another macro.\n{% macro my_macro() %}\n {# use a statement block #}\n {% call statement('films', fetch_result=True, auto_begin=True) %}\n {# sql query is wrapped in a macro #}\n {{ my_query() }}\n {% endcall %}\n\n {# verify the results, print to stdout #}\n {% set results = run_query('SELECT * FROM films') %}\n {{ results.print_table() }}\n\n{% endmacro %}\n\nrun the operation\n$ dbt run-operation my_macro\n \nRunning with dbt=1.2.0\n| film_id | title |\n| ------- | ---------------- |\n| 1 | The Godfather |\n| 2 | The Wizard of Oz |\n| 3 | Citizen Kane |\n\n" ]
[ 2 ]
[]
[]
[ "dbt", "jinja2", "python", "snowflake_cloud_data_platform", "sql" ]
stackoverflow_0074453543_dbt_jinja2_python_snowflake_cloud_data_platform_sql.txt
Q: What is the difference between radd() and add( ) method of data frames in pandas? import pandas as pd student = {'unit test-1':[5,6,8,3,10],'unit Test-2':[7,8,9,6,15]} student1 = {'unit test-1':[3,3,6,6,8],'unit Test-2':[5,9,8,10,5]} print(ds.radd(ds1)) print(ds.add(ds1)) When I am performing addition operation on dataframe by using add() and radd() methods then output is same of both methods. unit test-1 unit Test-2 0 8 12 1 9 17 2 14 17 3 9 16 4 18 20 What is the difference between add() and radd()? A: The result is equivalent. You can say 'A' is added to 'B' or you can say 'B' is added to 'A'. It's really a convenience function depending on how you visualize the element-wise addition taking place. Links to the Panda's Documentation: .add, .radd A: Firstly i got confused with these methods too. The real difference you'll see when performing operations like subtraction or division, since they have no commutative property. Addition operation has a commutative property: a + b = b + a while, for example, division has not: a / b != b / a So: df1 = pd.DataFrame(np.arange(10, 20, 2), columns=['A']) df2 = pd.DataFrame(np.arange(0, 10, 2), columns=['A']) print(df1.div(df2)) print('-' * 10) print(df1.rdiv(df2)) Output: A 0 inf 1 6.000000 2 3.500000 3 2.666667 4 2.250000 ---------- A 0 0.000000 1 0.166667 2 0.285714 3 0.375000 4 0.444444 You can think of this method like it 'reverses' operands: df2.rdiv(df1) is the same as df1.div(df2)
What is the difference between radd() and add( ) method of data frames in pandas?
import pandas as pd student = {'unit test-1':[5,6,8,3,10],'unit Test-2':[7,8,9,6,15]} student1 = {'unit test-1':[3,3,6,6,8],'unit Test-2':[5,9,8,10,5]} print(ds.radd(ds1)) print(ds.add(ds1)) When I am performing addition operation on dataframe by using add() and radd() methods then output is same of both methods. unit test-1 unit Test-2 0 8 12 1 9 17 2 14 17 3 9 16 4 18 20 What is the difference between add() and radd()?
[ "The result is equivalent. You can say 'A' is added to 'B' or you can say 'B' is added to 'A'. It's really a convenience function depending on how you visualize the element-wise addition taking place.\nLinks to the Panda's Documentation: .add, .radd\n", "Firstly i got confused with these methods too.\nThe real difference you'll see when performing operations like subtraction or division, since they have no commutative property.\nAddition operation has a commutative property:\n\na + b = b + a\n\nwhile, for example, division has not:\n\na / b != b / a\n\nSo:\ndf1 = pd.DataFrame(np.arange(10, 20, 2), columns=['A'])\ndf2 = pd.DataFrame(np.arange(0, 10, 2), columns=['A'])\nprint(df1.div(df2))\nprint('-' * 10)\nprint(df1.rdiv(df2))\n\nOutput:\n A\n0 inf\n1 6.000000\n2 3.500000\n3 2.666667\n4 2.250000\n----------\n A\n0 0.000000\n1 0.166667\n2 0.285714\n3 0.375000\n4 0.444444\n\nYou can think of this method like it 'reverses' operands: df2.rdiv(df1) is the same as df1.div(df2)\n" ]
[ 3, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0066182815_pandas_python.txt
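One more place the reflected methods show their meaning, even though addition itself is commutative: when the other operand is a scalar, the r-variant puts the DataFrame on the right-hand side of the operator. A small illustrative sketch:

    import pandas as pd

    df = pd.DataFrame({'unit test-1': [5, 6, 8], 'unit Test-2': [7, 8, 9]})

    print(df.sub(2))    # df - 2
    print(df.rsub(2))   # 2 - df: the DataFrame is the right operand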
Q: PLC , DIGITAL COUNTER newbie here. I work in a factory that produce electric cables. I'm trying to build a web application that will handle production processes. For this i need some real time data. There are multiple types of machines , some of them are older and uses a lot of digital counters like this on: https://mirror2.mixtronica.com/42248-superlarge_default/h7cx-aud1-n-contador-omron.jpg . Some of them are newer and have plc's ( siemens s7-1200 , s7-1500 and others ). I have 0 experience with plc and plc programming. From what I saw from my research is that newer plc like s7 have a option "WEB SERVER", if that's enables i may be able to send data from counters and other stuff to a web page that is automated generated from that option. Folks on the internet uses software like Tia Portal to connect to them. My question is : if i try to connected at them with TIA-PORTAL and ethernet cable is there any possibility that i will corrupt something ? Tia-portal will recognize the program that's running (it was not uplouded by me ) ? Is there any way that i can make that digital counters transmit data on a raspberry pi or similar device. If not , what are my options there , is there any product intergrated with iot that i can use ? I also have some industrial scales but those have rs 232 interface and that was prrety smooth to make them talk with my pi. A: From what I understand you want to read and send data to the machines, correct? Because there are some simpler approaches than having to build an IoT application from scratch. Recently I've been studying solutions like this and the best alternative I've looked at is using Thingsboard (which can be installed locally or using the remote platform) or other platforms like Losant... This avoids creating a lot from scratch (storing data, create charts...), however, in a way, is limited in a few things and a gateway may be needed to handle the conversion of data from devices to platform. I'm still learning how to use it, but I think it's worth leaving it as a tip if your case is more about collecting and dislay data or creating simple actions or logic. About connecting the devices: For machines with PLC I recommend that you research OPC-UA or MQTT, they are protocols that will facilitate communication and it seems that these versions of the Siemens S7 have OPC-UA. For machines without PLC and without any communication port you may be forced to install a "gateway", an Arduino, ESP32 or another Raspberry PI may facilitate things, or even a small PLC, to collect the discrete data and convert them to Modbus, MQTT, OPC-UA, or another way that you can send it to your server... The concern here would be with electromagnetic interference, but that will depend on your equipment (with industrial equipment validated as a PLC you won't have these problems). This is not a complete answer, but I believe it will help you to develop the solution :)
PLC , DIGITAL COUNTER
newbie here. I work in a factory that produce electric cables. I'm trying to build a web application that will handle production processes. For this i need some real time data. There are multiple types of machines , some of them are older and uses a lot of digital counters like this on: https://mirror2.mixtronica.com/42248-superlarge_default/h7cx-aud1-n-contador-omron.jpg . Some of them are newer and have plc's ( siemens s7-1200 , s7-1500 and others ). I have 0 experience with plc and plc programming. From what I saw from my research is that newer plc like s7 have a option "WEB SERVER", if that's enables i may be able to send data from counters and other stuff to a web page that is automated generated from that option. Folks on the internet uses software like Tia Portal to connect to them. My question is : if i try to connected at them with TIA-PORTAL and ethernet cable is there any possibility that i will corrupt something ? Tia-portal will recognize the program that's running (it was not uplouded by me ) ? Is there any way that i can make that digital counters transmit data on a raspberry pi or similar device. If not , what are my options there , is there any product intergrated with iot that i can use ? I also have some industrial scales but those have rs 232 interface and that was prrety smooth to make them talk with my pi.
[ "From what I understand you want to read and send data to the machines, correct? Because there are some simpler approaches than having to build an IoT application from scratch.\nRecently I've been studying solutions like this and the best alternative I've looked at is using Thingsboard (which can be installed locally or using the remote platform) or other platforms like Losant... This avoids creating a lot from scratch (storing data, create charts...), however, in a way, is limited in a few things and a gateway may be needed to handle the conversion of data from devices to platform. I'm still learning how to use it, but I think it's worth leaving it as a tip if your case is more about collecting and dislay data or creating simple actions or logic.\nAbout connecting the devices:\n\nFor machines with PLC I recommend that you research OPC-UA or MQTT, they are protocols that will facilitate communication and it seems that these versions of the Siemens S7 have OPC-UA.\nFor machines without PLC and without any communication port you may be forced to install a \"gateway\", an Arduino, ESP32 or another Raspberry PI may facilitate things, or even a small PLC, to collect the discrete data and convert them to Modbus, MQTT, OPC-UA, or another way that you can send it to your server... The concern here would be with electromagnetic interference, but that will depend on your equipment (with industrial equipment validated as a PLC you won't have these problems).\n\nThis is not a complete answer, but I believe it will help you to develop the solution :)\n" ]
[ 0 ]
[]
[]
[ "automation", "counter", "digital", "plc", "python" ]
stackoverflow_0074459148_automation_counter_digital_plc_python.txt
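To make the gateway idea above a little more concrete, here is a very rough Raspberry Pi sketch that publishes a counter reading over MQTT with the paho-mqtt package. The broker address, topic name and the read_counter() body are all placeholders for whatever the real counter wiring (GPIO pulses, RS-232 polling, ...) provides:

    import json
    import time

    from paho.mqtt import publish

    def read_counter():
        # placeholder: replace with the real read-out (GPIO pulse count, RS-232 query, ...)
        return 42

    while True:
        payload = json.dumps({'machine': 'line-1', 'count': read_counter()})
        publish.single('factory/line-1/counter', payload, hostname='192.168.0.10')
        time.sleep(5)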
Q: I want to remove ' from numpy I want to remove the value of '' while creating the numpy array In the following situation, how can I remove the quotation mark that comes out by multiplying the 'character' by 0 and leave only 'character'? import numpy as np array = np.array(['character'*1,'character'*0]) Expected array(['character'], dtype='<U9') np.delete(array ,"''") IndexError: arrays used as indices must be of integer (or boolean) type A: You can solve this in different ways, here is one example: import numpy as np array = np.array([ele for ele in ['character'*1,'character'*0] if len(ele) > 0]) # or array = np.array([ele for ele in ['character'*1,'character'*0] if ele != '']) And to get your method working: array = np.delete(array, array=='') EDIT And for @S3DEV: import numpy as np array = np.array(['character'*1,'character'*0]) array = array[array != '']
I want to remove ' from numpy
I want to remove the value of '' while creating the numpy array In the following situation, how can I remove the quotation mark that comes out by multiplying the 'character' by 0 and leave only 'character'? import numpy as np array = np.array(['character'*1,'character'*0]) Expected array(['character'], dtype='<U9') np.delete(array ,"''") IndexError: arrays used as indices must be of integer (or boolean) type
[ "You can solve this in different ways,\nhere is one example:\nimport numpy as np\n\narray = np.array([ele for ele in ['character'*1,'character'*0] if len(ele) > 0])\n# or\narray = np.array([ele for ele in ['character'*1,'character'*0] if ele != ''])\n\nAnd to get your method working:\narray = np.delete(array, array=='')\n\nEDIT\nAnd for @S3DEV:\nimport numpy as np\n\narray = np.array(['character'*1,'character'*0])\narray = array[array != '']\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074460299_numpy_python.txt
Q: How to convert current time from a specific format to epoch time in python? My current time format is in "Wednesday, November 16, 2022 4:21:33.082 PM GMT+05:30" format. How can I convert this to epoch time using python? Here in this case the epoch time should be "1668595893082" Note: I always want to get my current time format in the above format and then convert that to epoch. Please guide me. I tried using strftime('%s') but could not get the solution. Its throwing invalid format exception. A: I have used dateutil in the past, it can parse textual dates into datetime.datetime objects (from the inbuilt datetime package) First you need to install it: pip install python-dateutil Then you can use it like so: from dateutil import parser # extract datetime object from string dttm = parser.parse('Wednesday, November 16, 2022 4:21:33.082 PM GMT+05:30') # convert to unix time print(dttm.timestamp()) >>> 1668635493.082 A: from datetime import datetime dt_string = "Wednesday, November 16, 2022 4:21:33.082 PM GMT+05:30" dt_format = '%A, %B %d, %Y %I:%M:%S.%f %p GMT%z' datetime.strptime(dt_string, dt_format).timestamp() * 1_000 See datetime: strftime() and strptime() Format Codes and datetime: datetime.timestamp()
How to convert current time from a specific format to epoch time in python?
My current time is in the "Wednesday, November 16, 2022 4:21:33.082 PM GMT+05:30" format. How can I convert this to epoch time using Python? In this case the epoch time should be "1668595893082". Note: I will always get the current time in the above format and then need to convert it to epoch. Please guide me. I tried using strftime('%s') but could not get it to work; it's throwing an invalid format exception.
[ "I have used dateutil in the past, it can parse textual dates into datetime.datetime objects (from the inbuilt datetime package)\nFirst you need to install it:\npip install python-dateutil\nThen you can use it like so:\nfrom dateutil import parser\n\n# extract datetime object from string\ndttm = parser.parse('Wednesday, November 16, 2022 4:21:33.082 PM GMT+05:30')\n\n# convert to unix time\nprint(dttm.timestamp())\n\n>>> 1668635493.082\n\n", "from datetime import datetime\n\ndt_string = \"Wednesday, November 16, 2022 4:21:33.082 PM GMT+05:30\"\ndt_format = '%A, %B %d, %Y %I:%M:%S.%f %p GMT%z'\ndatetime.strptime(dt_string, dt_format).timestamp() * 1_000\n\nSee datetime: strftime() and strptime() Format Codes and datetime: datetime.timestamp()\n" ]
[ 1, 1 ]
[]
[]
[ "epoch", "python" ]
stackoverflow_0074460090_epoch_python.txt
Q: convert sentences in a column to list in pandas dataframe My current dataframe looks like this A header Another header First i like apple Second alex is friends with jack I am expecting A header Another header First [i, like, apple] Second [alex, is, friends, with, jack] How can I accomplish this efficiently? A: You can use standard str operations on the column: df['Another header'] = df['Another header'].str.split() A: Use Series.str.split: df['Another header'] = df['Another header'].str.split() A: You can use map with a lambda function df['Another header'] = list(map(lambda x: x.split(' '), df['Another header']))
convert sentences in a column to list in pandas dataframe
My current dataframe looks like this A header Another header First i like apple Second alex is friends with jack I am expecting A header Another header First [i, like, apple] Second [alex, is, friends, with, jack] How can I accomplish this efficiently?
[ "You can use standard str operations on the column:\ndf['Another header'] = df['Another header'].str.split()\n\n", "Use Series.str.split:\ndf['Another header'] = df['Another header'].str.split()\n\n", "You can use map with a lambda function\ndf['Another header'] = list(map(lambda x: x.split(' '), df['Another header']))\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074460346_dataframe_pandas_python_python_3.x.txt
Q: subsetting anndata on basis of louvain clusters I want to subset anndata on basis of clusters, but i am not able to understand how to do it. I am running scVelo pipeline, and in that i ran tl.louvain function to cluster cells on basis of louvain. I got around 32 clusters, of which cluster 2 and 4 is of my interest, and i have to run the pipeline further on these clusters only. (Initially i had the loom file which i read in scVelo, so i have now the anndata.) I tried using adata.obs["louvain"] which gave me the cluster information, but i need to write a new anndata with only 2 clusters and process further. Please help on how to subset anndata. Any help is highly appreciated. (Being very new to it, i am finding it difficult to get) A: If your adata.obs has a "louvain" column that I'd expect after running tl.louvain, you could do the subsetting as adata[adata.obs["louvain"] == "2"] if you want to obtain one cluster and adata[adata.obs['louvain'].isin(['2', '4'])] for obtaining cluster 2 & 4. A: Feel free to use this function I wrote for my work. import AnnData import numpy as np def cluster_sampled(adata: AnnData, clusters: list, n_samples: int) -> AnnData: """Sample n_samples randomly from each louvain cluster from the provided clusters Parameters ---------- adata AnnData object clusters List of clusters to sample from n_samples Number of samples to take from each cluster Returns ------- AnnData Annotated data matrix with sampled cells from the clusters """ l = [] adata_cluster_sampled = adata[adata.obs["louvain"].isin(clusters), :].copy() for k, v in adata_cluster_sampled.obs.groupby("louvain").indices.items(): l.append(np.random.choice(v, n_samples, replace=False)) return adata_cluster_sampled[np.concatenate(l)]
subsetting anndata on basis of louvain clusters
I want to subset an anndata object on the basis of clusters, but I am not able to understand how to do it. I am running the scVelo pipeline, and in it I ran the tl.louvain function to cluster cells by louvain. I got around 32 clusters, of which clusters 2 and 4 are of interest to me, and I have to run the pipeline further on these clusters only. (Initially I had the loom file, which I read into scVelo, so I now have the anndata object.) I tried using adata.obs["louvain"], which gave me the cluster information, but I need to write a new anndata object with only those 2 clusters and process it further. Please help on how to subset anndata. Any help is highly appreciated. (Being very new to this, I am finding it difficult to work out.)
[ "If your adata.obs has a \"louvain\" column that I'd expect after running tl.louvain, you could do the subsetting as\nadata[adata.obs[\"louvain\"] == \"2\"]\nif you want to obtain one cluster and\nadata[adata.obs['louvain'].isin(['2', '4'])]\nfor obtaining cluster 2 & 4.\n", "Feel free to use this function I wrote for my work.\nimport AnnData\nimport numpy as np\n\ndef cluster_sampled(adata: AnnData, clusters: list, n_samples: int) -> AnnData:\n \"\"\"Sample n_samples randomly from each louvain cluster from the provided clusters\n\n Parameters\n ----------\n adata\n AnnData object\n clusters\n List of clusters to sample from\n n_samples\n Number of samples to take from each cluster\n\n Returns\n -------\n AnnData\n Annotated data matrix with sampled cells from the clusters\n \"\"\"\n l = []\n adata_cluster_sampled = adata[adata.obs[\"louvain\"].isin(clusters), :].copy()\n for k, v in adata_cluster_sampled.obs.groupby(\"louvain\").indices.items():\n l.append(np.random.choice(v, n_samples, replace=False))\n return adata_cluster_sampled[np.concatenate(l)]\n\n" ]
[ 3, 0 ]
[]
[]
[ "python", "rna_seq", "scanpy" ]
stackoverflow_0063916137_python_rna_seq_scanpy.txt
Q: How to display csv data in tabular form in Flask Python? I'm making a web app using the Flask framework with python, I want to make the web able to upload csv without saving and displaying data in a table with the template I made, I've added the syntax for uploading and processing the data until it's in a table view, but after running the website it goes to the 404 not found error message, how can I fix it? I've made a code in main.py from datetime import datetime from flask import Flask, render_template, request, session from FlaskWebProject2 import app import os import pandas as pd from werkzeug.utils import secure_filename #*** Flask configuration UPLOAD_FOLDER = os.path.join('staticFiles', 'uploads') ALLOWED_EXTENSIONS = {'csv'} app = Flask(__name__, template_folder='templateFiles', static_folder='staticFiles') app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER app.secret_key = 'This is your secret key to utilize session in Flask' @app.route('/') def index(): return render_template('index.html') @app.route('/upload', methods=['POST', 'GET']) def uploadFile(): if request.method == 'POST': # upload file flask uploaded_df = request.files['uploaded-file'] # Extracting uploaded data file name data_filename = secure_filename(uploaded_df.filename) # flask upload file to database (defined uploaded folder in static path) uploaded_df.save(os.path.join(app.config['UPLOAD_FOLDER'], data_filename)) # Storing uploaded file path in flask session session['uploaded_data_file_path'] = os.path.join(app.config['UPLOAD_FOLDER'], data_filename) return render_template('index_upload_and_show_data_page2.html') @app.route('/show_data') def showData(): # Retrieving uploaded file path from session data_file_path = session.get('uploaded_data_file_path', None) # read csv file in python flask (reading uploaded csv file from uploaded server location) uploaded_df = pd.read_csv(data_file_path) # pandas dataframe to html table flask uploaded_df_html = uploaded_df.to_html() return render_template('show_csv_data.html', data_var = uploaded_df_html) I want the website can show the home page and upload, read and show csv data works properly A: If you are requesting /upload and getting a 404. This is natural. You have added a handler for /upload endpoint: @app.route('/upload', methods=['POST', 'GET']) def uploadFile(): ... But uploadFile only supports POST requests. When you enter localhost/upload into the browser, the browser sends a GET request to the web server. There are two solutions for you with the same result: Add another if to uploadFile function and check if incoming request is GET and if it is, show a page(e.g. a form to upload file). Write another function like upload_file_form exclusively for GET requests to show a form. P.S.: per PEP8, you should use camel_case for function names. e.g. upload_file rather than uploadFile
How to display csv data in tabular form in Flask Python?
I'm making a web app using the Flask framework with python, I want to make the web able to upload csv without saving and displaying data in a table with the template I made, I've added the syntax for uploading and processing the data until it's in a table view, but after running the website it goes to the 404 not found error message, how can I fix it? I've made a code in main.py from datetime import datetime from flask import Flask, render_template, request, session from FlaskWebProject2 import app import os import pandas as pd from werkzeug.utils import secure_filename #*** Flask configuration UPLOAD_FOLDER = os.path.join('staticFiles', 'uploads') ALLOWED_EXTENSIONS = {'csv'} app = Flask(__name__, template_folder='templateFiles', static_folder='staticFiles') app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER app.secret_key = 'This is your secret key to utilize session in Flask' @app.route('/') def index(): return render_template('index.html') @app.route('/upload', methods=['POST', 'GET']) def uploadFile(): if request.method == 'POST': # upload file flask uploaded_df = request.files['uploaded-file'] # Extracting uploaded data file name data_filename = secure_filename(uploaded_df.filename) # flask upload file to database (defined uploaded folder in static path) uploaded_df.save(os.path.join(app.config['UPLOAD_FOLDER'], data_filename)) # Storing uploaded file path in flask session session['uploaded_data_file_path'] = os.path.join(app.config['UPLOAD_FOLDER'], data_filename) return render_template('index_upload_and_show_data_page2.html') @app.route('/show_data') def showData(): # Retrieving uploaded file path from session data_file_path = session.get('uploaded_data_file_path', None) # read csv file in python flask (reading uploaded csv file from uploaded server location) uploaded_df = pd.read_csv(data_file_path) # pandas dataframe to html table flask uploaded_df_html = uploaded_df.to_html() return render_template('show_csv_data.html', data_var = uploaded_df_html) I want the website can show the home page and upload, read and show csv data works properly
[ "If you are requesting /upload and getting a 404. This is natural. You have added a handler for /upload endpoint:\n@app.route('/upload', methods=['POST', 'GET'])\ndef uploadFile():\n ...\n\nBut uploadFile only supports POST requests. When you enter localhost/upload into the browser, the browser sends a GET request to the web server. There are two solutions for you with the same result:\n\nAdd another if to uploadFile function and check if incoming request is GET and if it is, show a page(e.g. a form to upload file).\n\nWrite another function like upload_file_form exclusively for GET requests to show a form.\n\n\nP.S.: per PEP8, you should use camel_case for function names. e.g. upload_file rather than uploadFile\n" ]
[ 1 ]
[]
[]
[ "flask", "html", "pandas", "python", "read.csv" ]
stackoverflow_0074458071_flask_html_pandas_python_read.csv.txt
Q: How do I get a value from a json array using a key I'm reading a json and want to get the label field with a specific id. What I currently have is: with open("local_en.json") as json_file: parsed_dict = json.load(json_file) print(parsed_dict) # works print(parsed_dict["interface"]) # works print(parsed_dict["interface"]["testkey"]) My json has data blocks (being "interface" or "settings") and those data blocks contain arrays. { "interface":[ {"id": "testkey", "label": "The interface block local worked!"} {"id": "testkey2", "label": "The interface block local worked, AGAIN!"} ], "settings":[ ], "popup_success":[ ], "popup_error":[ ], "popup_warning":[ ], "other_strings":[ ] } A: You can "find" the element in the interface list via a list-comprehension, and fetch the label from that element. For instance: label = [x['label'] for x in parsed_dict['interface'] if x['id'] == 'testkey'][0] If you cannot assume that the relevant id exists, then you can wrap this in a try-except, or you can get a list of the labels and validate that it isn't of length 0, or whatever you think would work best for you. key = 'testkey' labels = [x['label'] for x in parsed_dict['interface'] if x['id'] == key] assert len(labels) > 0, f"There's no matching element for key {key}" label = labels[0] # Takes the first if there are multiple such elements in the interface array And while you're at it, you might want to explicitly deal with there being multiple elements with the same id, etc. Clarification regarding your error(s): parsed_dict["interface"] is a list, so you can index it with ints (and slices and stuff, but that's besides the point), and not with strings. Each of the list elements is a dict, with two keys: id and label, so even if you were to take a specific element, say - el = parsed_dict["interface"][0] you still couldn't do el['testkey'], because that's a value of the dict, not a key. You could check if the id is the one you're looking for though, via - if el['id'] == 'testkey': print('Yup, this is it') label = el['label'] In fact, the single line I gave above is really just shorthand for running over all the elements with a loop and doing just that... A: You need to browse through all the values and check if it matches expected value. Because values are not guaranteed to be unique in a dictionary, you can't simply refer to them directly like you do with keys. print([el for el in d["interface"] if "testkey" in el.values()])
How do I get a value from a json array using a key
I'm reading a json and want to get the label field with a specific id. What I currently have is: with open("local_en.json") as json_file: parsed_dict = json.load(json_file) print(parsed_dict) # works print(parsed_dict["interface"]) # works print(parsed_dict["interface"]["testkey"]) My json has data blocks (being "interface" or "settings") and those data blocks contain arrays. { "interface":[ {"id": "testkey", "label": "The interface block local worked!"} {"id": "testkey2", "label": "The interface block local worked, AGAIN!"} ], "settings":[ ], "popup_success":[ ], "popup_error":[ ], "popup_warning":[ ], "other_strings":[ ] }
[ "You can \"find\" the element in the interface list via a list-comprehension, and fetch the label from that element. For instance:\nlabel = [x['label'] for x in parsed_dict['interface'] if x['id'] == 'testkey'][0]\n\nIf you cannot assume that the relevant id exists, then you can wrap this in a try-except, or you can get a list of the labels and validate that it isn't of length 0, or whatever you think would work best for you.\nkey = 'testkey'\nlabels = [x['label'] for x in parsed_dict['interface'] if x['id'] == key]\nassert len(labels) > 0, f\"There's no matching element for key {key}\"\nlabel = labels[0] # Takes the first if there are multiple such elements in the interface array\n\nAnd while you're at it, you might want to explicitly deal with there being multiple elements with the same id, etc.\n\nClarification regarding your error(s):\nparsed_dict[\"interface\"] is a list, so you can index it with ints (and slices and stuff, but that's besides the point), and not with strings.\nEach of the list elements is a dict, with two keys: id and label, so even if you were to take a specific element, say -\nel = parsed_dict[\"interface\"][0]\n\nyou still couldn't do el['testkey'], because that's a value of the dict, not a key.\nYou could check if the id is the one you're looking for though, via -\nif el['id'] == 'testkey':\n print('Yup, this is it')\n label = el['label']\n\nIn fact, the single line I gave above is really just shorthand for running over all the elements with a loop and doing just that...\n", "You need to browse through all the values and check if it matches expected value. Because values are not guaranteed to be unique in a dictionary, you can't simply refer to them directly like you do with keys.\nprint([el for el in d[\"interface\"] if \"testkey\" in el.values()])\n\n" ]
[ 1, 1 ]
[]
[]
[ "arrays", "json", "python" ]
stackoverflow_0074460376_arrays_json_python.txt
Q: Conda command not found I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way. A: If you're using zsh and it has not been set up to read .bashrc, you need to add the Miniconda directory to the zsh shell PATH environment variable. Add this to your .zshrc: export PATH="/home/username/miniconda/bin:$PATH" Make sure to replace /home/username/miniconda with your actual path. Save, exit the terminal and then reopen the terminal. conda command should work. A: If you have the PATH in your .bashrc file and are still getting conda: command not found Your terminal might not be looking for the bash file. Type bash in the terminal to ensure you are in bash and then try: conda --version A: Maybe you need to execute "source ~/.bashrc" A: For those experiencing issues after upgrading to MacOS Catalina. Short version: # 1a) Use tool: conda-prefix-replacement - # Restores: Desktop -> Relocated Items -> Security -> anaconda3 curl -L https://repo.anaconda.com/pkgs/misc/cpr-exec/cpr-0.1.1-osx-64.exe -o cpr && chmod +x cpr ./cpr rehome ~/anaconda3 # or if fails #./cpr rehome ~/anaconda3 --old-prefix /Anaconda3 source ~/anaconda3/bin/activate # 1b) Alternatively - reintall anaconda - # brew cask install anaconda # 2) conda init conda init zsh # or # conda init Further reading - Anaconda blog post and Github discussion. A: Sometimes, if you don't restart your terminal after you have installed anaconda also, it gives this error. Close your terminal window and restart it. It worked for me now! A: Maybe you should type add this to your .bashrc or .zshrc export PATH="/anaconda3/bin":$PATH It worked for me. A: To initialize your shell run the below code source ~/anaconda3/etc/profile.d/conda.sh conda activate Your_env It's Worked for me, I got the solution from the below link https://www.codegrepper.com/code-[“CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.][1]examples/shell/CommandNotFoundError%3A+Your+shell+has+not+been+properly+configured+to+use+%27conda+activate%27.+To+initialize+your+shell%2C+run A: conda :command not found Try adding below line to your .bashrc file export PATH=~/anaconda3/bin:$PATH then try: conda --version to see version and then to take affect conda init A: Execute the following command after installing and adding to the path source ~/.bashrc where source is a bash shell built-in command that executes the content of the file passed as argument, in the current shell. It runs during boot up automatically. A: I had the same issue. I just closed and reopened the terminal, and it worked. That was because I installed anaconda with the terminal open. A: I faced this issue on my mac after updating conda. Solution was to run conda mini installer on top of existing conda setup. $ curl https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o ~/miniconda3.sh $ bash ~/miniconda3.sh -bfp ~/miniconda3 On linux, you can use: $ curl https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -o ~/miniconda3.sh $ bash ~/miniconda3.sh -bfp ~/miniconda3 For other versions, you can go to https://repo.continuum.io/miniconda/ For details check: https://github.com/conda/conda/issues/1364 A: Make sure that you are installing the Anaconda binary that is compatible with your kernel. 
I was in the same situation.Turned out I have an x64_86 CPU and was trying to install a 64 bit Power 8 installer.You can find out the same for your CPU by using the following command.It gives you a basic information about a computer's software and hardware.- $ uname -a https://www.anaconda.com/download/#linux The page in the link above, displays 2 different types of 64-Bit installers - 64-Bit (x86) installer and 64-Bit (Power 8) installer. A: export PATH="~/anaconda3/bin":$PATH A: I had to run the following command to activate the shell: eval "$(/home/username/anaconda3/bin/conda shell.bash hook)" A: The brute-force way could be if [ $? -eq 0 ]; then eval "$__conda_setup" else if [ -f "/root/miniconda3/etc/profile.d/conda.sh" ]; then . "/root/miniconda3/etc/profile.d/conda.sh" else export PATH="/root/miniconda3/bin:$PATH" fi fi Then initialize and test Conda. conda init conda -V Which is what Conda tries to do. Take a look at the end of ~/.bashrc with less ~/.bashrc or with cat ~/.bashrc A: Do the same thing as the suggestion given by bash console, but pay attention that there are some errors in the suggestion (the file path format is incorrect). Paste these two commands in the bash console for windows: echo ". C:/Users/mingm/Anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc and echo "conda activate" >> ~/.bashrc After having pasted these two commands, exit the bash console, reload it and then activate the virtual environment by entering "conda activate your_env_name". A: It can be a silly mistake, make sure that you use anaconda3 instead of anaconda in the export path if you installed so. A: If you are using Mac and have installed Conda with homebrew then you need to run this command to export path export PATH="$PATH:/opt/homebrew/anaconda3/bin" A: For Conda > 4.4 follow this: $ echo ". /home/ubuntu/miniconda2/etc/profile.d/conda.sh" >> ~/.bashrc then you need to reload user bash so you need to log out: exit and then log again. A: This worked for me on CentOS and miniconda3. Find out which shell you are using echo $0 conda init bash (could be conda init zsh if you are using zsh, etc.) - this adds a path to ~/.bashrc Reload command line sourc ~/.bashrc OR . ~/.bashrc A: I have encountered this problem lately and I have found a solution that worked for me. It is possible that your current user might not have permissions to anaconda directory, so check if you can read/write there, and if not, then change the files owner by using chown. A: This worked in M1 MAC: to get user name: echo $USER then substitute my_username with the correct one. source /Users/my_username/opt/anaconda3/bin/activate
Conda command not found
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
[ "If you're using zsh and it has not been set up to read .bashrc, you need to add the Miniconda directory to the zsh shell PATH environment variable. Add this to your .zshrc:\nexport PATH=\"/home/username/miniconda/bin:$PATH\"\n\nMake sure to replace /home/username/miniconda with your actual path.\nSave, exit the terminal and then reopen the terminal. conda command should work.\n", "If you have the PATH in your .bashrc file and are still getting\nconda: command not found\n\nYour terminal might not be looking for the bash file.\nType\nbash in the terminal to ensure you are in bash and then try:\nconda --version\n", "Maybe you need to execute \"source ~/.bashrc\"\n", "For those experiencing issues after upgrading to MacOS Catalina.\nShort version:\n# 1a) Use tool: conda-prefix-replacement - \n# Restores: Desktop -> Relocated Items -> Security -> anaconda3\ncurl -L https://repo.anaconda.com/pkgs/misc/cpr-exec/cpr-0.1.1-osx-64.exe -o cpr && chmod +x cpr\n./cpr rehome ~/anaconda3\n# or if fails\n#./cpr rehome ~/anaconda3 --old-prefix /Anaconda3\nsource ~/anaconda3/bin/activate\n\n# 1b) Alternatively - reintall anaconda - \n# brew cask install anaconda\n\n# 2) conda init\nconda init zsh\n# or\n# conda init \n\nFurther reading - Anaconda blog post and Github discussion.\n", "Sometimes, if you don't restart your terminal after you have installed anaconda also, it gives this error.\nClose your terminal window and restart it.\nIt worked for me now!\n", "Maybe you should type add this to your .bashrc or .zshrc \nexport PATH=\"/anaconda3/bin\":$PATH\n\nIt worked for me.\n", "To initialize your shell run the below code\nsource ~/anaconda3/etc/profile.d/conda.sh\nconda activate Your_env\n\nIt's Worked for me, I got the solution from the below link\nhttps://www.codegrepper.com/code-[“CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.][1]examples/shell/CommandNotFoundError%3A+Your+shell+has+not+been+properly+configured+to+use+%27conda+activate%27.+To+initialize+your+shell%2C+run\n", "conda :command not found\nTry adding below line to your .bashrc file\nexport PATH=~/anaconda3/bin:$PATH\n\nthen try:\nconda --version\n\nto see version\nand then to take affect\nconda init \n\n", "Execute the following command after installing and adding to the path \nsource ~/.bashrc\n\nwhere source is a bash shell built-in command that executes the content of the file passed as argument, in the current shell.\nIt runs during boot up automatically.\n", "I had the same issue. I just closed and reopened the terminal, and it worked. That was because I installed anaconda with the terminal open.\n", "I faced this issue on my mac after updating conda. 
Solution was to run conda mini installer on top of existing conda setup.\n$ curl https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o ~/miniconda3.sh\n$ bash ~/miniconda3.sh -bfp ~/miniconda3\n\nOn linux, you can use:\n$ curl https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -o ~/miniconda3.sh\n$ bash ~/miniconda3.sh -bfp ~/miniconda3\n\nFor other versions, you can go to https://repo.continuum.io/miniconda/\nFor details check: \nhttps://github.com/conda/conda/issues/1364\n", "Make sure that you are installing the Anaconda binary that is compatible with your kernel.\nI was in the same situation.Turned out I have an x64_86 CPU and was trying to install a 64 bit Power 8 installer.You can find out the same for your CPU by using the following command.It gives you a basic information about a computer's software and hardware.-\n$ uname -a\nhttps://www.anaconda.com/download/#linux\nThe page in the link above, displays 2 different types of 64-Bit installers -\n\n64-Bit (x86) installer and \n64-Bit (Power 8) installer.\n\n", "export PATH=\"~/anaconda3/bin\":$PATH\n", "I had to run the following command to activate the shell:\neval \"$(/home/username/anaconda3/bin/conda shell.bash hook)\"\n\n", "The brute-force way could be\nif [ $? -eq 0 ]; then\n eval \"$__conda_setup\"\nelse\n if [ -f \"/root/miniconda3/etc/profile.d/conda.sh\" ]; then\n . \"/root/miniconda3/etc/profile.d/conda.sh\"\n else\n export PATH=\"/root/miniconda3/bin:$PATH\"\n fi\nfi\n\nThen initialize and test Conda.\nconda init\nconda -V\n\nWhich is what Conda tries to do. Take a look at the end of ~/.bashrc with less ~/.bashrc or with cat ~/.bashrc\n", "Do the same thing as the suggestion given by bash console, but pay attention that there are some errors in the suggestion (the file path format is incorrect). Paste these two commands in the bash console for windows:\necho \". C:/Users/mingm/Anaconda3/etc/profile.d/conda.sh\" >> ~/.bashrc\n\nand\necho \"conda activate\" >> ~/.bashrc\n\nAfter having pasted these two commands, exit the bash console, reload it and then activate the virtual environment by entering \"conda activate your_env_name\".\n", "It can be a silly mistake, make sure that you use anaconda3 instead of anaconda in the export path if you installed so.\n", "If you are using Mac and have installed Conda with homebrew then you need to run this command to export path\nexport PATH=\"$PATH:/opt/homebrew/anaconda3/bin\" \n\n", "For Conda > 4.4 follow this:\n$ echo \". /home/ubuntu/miniconda2/etc/profile.d/conda.sh\" >> ~/.bashrc\n\nthen you need to reload user bash so you need to log out:\nexit\n\nand then log again.\n", "This worked for me on CentOS and miniconda3. Find out which shell you are using\necho $0\nconda init bash (could be conda init zsh if you are using zsh, etc.) - this adds a path to ~/.bashrc\nReload command line\nsourc ~/.bashrc OR . ~/.bashrc\n", "I have encountered this problem lately and I have found a solution that worked for me. It is possible that your current user might not have permissions to anaconda directory, so check if you can read/write there, and if not, then change the files owner by using chown.\n", "This worked in M1 MAC:\nto get user name:\necho $USER\n\nthen substitute my_username with the correct one.\nsource /Users/my_username/opt/anaconda3/bin/activate\n\n" ]
[ 249, 79, 41, 29, 28, 17, 16, 11, 7, 5, 4, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0 ]
[ "MacOSX: cd /Users/USER_NAME/anaconda3/bin && ./activate \n" ]
[ -1 ]
[ "anaconda", "miniconda", "python", "zsh" ]
stackoverflow_0035246386_anaconda_miniconda_python_zsh.txt
Q: Python Module To Convert JUnit XML Report to Pretty Console Output I am running android gradle tests which outputs a generic JUnit XML Report as shown below. I am trying to ideally find a Python extension or easy method to also print these results to console. I am capable of converting the XML to html with Python plugins which is useful in certain scenarios. But this is not ideal as it requires extra steps to open and requires a GUI which is not always possible. I can print the XML directly but it's not very clean. We want a direct clean printout. For example this is my XML file <?xml version="1.0"?> <testsuites> <testsuite name="My test suite 1" tests="2" failures="0" skipped="0" timedout="0" errors="0" time="316.032" timestamp="2022-11-10 21:43:40 +0000"> <properties> <property name="id" value="28394"/> <property name="device" value="Samsung Galaxy S10"/> </properties> <testcase name="test library" classname="library" result="passed" test_id="1" time="7.775"/> <testcase name="test package" classname="package" result="passed" test_id="2" time="7.986"/> </testsuite> <testsuite name="test suite 2" tests="1" failures="1" skipped="0" timedout="0" errors="0" time="193.795" timestamp="2022-11-10 21:55:10 +0000"> <properties> <property name="id" value="239548"/> <property name="device" value="Samsung Galaxy S10"/> </properties> <testcase name="test API" classname="apiTest" result="failed" test_id="1" time="193.795" > <failure> Failure message will be properly filled in </failure> </testcase> </testsuite> </testsuites> ... And I would like it to print something as follows My test suite 1 timedout=0 timestamp=2022-11-10 21:43:40 +0000 id=28394 device=Samsung Galaxy S10 ✓ test library time=7.775 ✓ test package time=7.986 test suite 2 timedout=0 timestamp=2022-11-10 21:55:10 +0000 id=239548 device=Samsung Galaxy S10 ✗ test API time=193.795 - Failure message will be properly filled in 2 passed, 1 failure What I tried so far is; xunit-viewerworks perfectly with option --console, exactly what I wanted. However I rather have a python package as its easier to assure multiple systems have the same setup by sharing a pipenv and allowing everything to run within python. Instead of having to install npm followed by this package and running externally. Tried using junit2html to create a html file with option to output summary matrix, but this does not include detailed info, like failure logs, only a brief summary. Attempted with junit_xml to do xml=JUnitXml.fromfile(<FILE>) followed by xml.tostring() but this prints it in XML format, and not a pretty output. Tried using import xml.etree.ElementTree as ET and iterating through the elements to custom print them. But I rather have a defined module that does this as I may not cover all possible scenarios, etc. that may not be present in my file. A: Write custom code to handle this situation def main(xml_path): tree = ET.parse(xml_path) root = tree.getroot() for child in root.iter(): if child.tag == "testsuite": # Add code for all cases, via `child.attrib or child.text`
Python Module To Convert JUnit XML Report to Pretty Console Output
I am running android gradle tests which outputs a generic JUnit XML Report as shown below. I am trying to ideally find a Python extension or easy method to also print these results to console. I am capable of converting the XML to html with Python plugins which is useful in certain scenarios. But this is not ideal as it requires extra steps to open and requires a GUI which is not always possible. I can print the XML directly but it's not very clean. We want a direct clean printout. For example this is my XML file <?xml version="1.0"?> <testsuites> <testsuite name="My test suite 1" tests="2" failures="0" skipped="0" timedout="0" errors="0" time="316.032" timestamp="2022-11-10 21:43:40 +0000"> <properties> <property name="id" value="28394"/> <property name="device" value="Samsung Galaxy S10"/> </properties> <testcase name="test library" classname="library" result="passed" test_id="1" time="7.775"/> <testcase name="test package" classname="package" result="passed" test_id="2" time="7.986"/> </testsuite> <testsuite name="test suite 2" tests="1" failures="1" skipped="0" timedout="0" errors="0" time="193.795" timestamp="2022-11-10 21:55:10 +0000"> <properties> <property name="id" value="239548"/> <property name="device" value="Samsung Galaxy S10"/> </properties> <testcase name="test API" classname="apiTest" result="failed" test_id="1" time="193.795" > <failure> Failure message will be properly filled in </failure> </testcase> </testsuite> </testsuites> ... And I would like it to print something as follows My test suite 1 timedout=0 timestamp=2022-11-10 21:43:40 +0000 id=28394 device=Samsung Galaxy S10 ✓ test library time=7.775 ✓ test package time=7.986 test suite 2 timedout=0 timestamp=2022-11-10 21:55:10 +0000 id=239548 device=Samsung Galaxy S10 ✗ test API time=193.795 - Failure message will be properly filled in 2 passed, 1 failure What I tried so far is; xunit-viewerworks perfectly with option --console, exactly what I wanted. However I rather have a python package as its easier to assure multiple systems have the same setup by sharing a pipenv and allowing everything to run within python. Instead of having to install npm followed by this package and running externally. Tried using junit2html to create a html file with option to output summary matrix, but this does not include detailed info, like failure logs, only a brief summary. Attempted with junit_xml to do xml=JUnitXml.fromfile(<FILE>) followed by xml.tostring() but this prints it in XML format, and not a pretty output. Tried using import xml.etree.ElementTree as ET and iterating through the elements to custom print them. But I rather have a defined module that does this as I may not cover all possible scenarios, etc. that may not be present in my file.
[ "Write custom code to handle this situation\ndef main(xml_path):\n tree = ET.parse(xml_path)\n root = tree.getroot()\n\n for child in root.iter():\n\n if child.tag == \"testsuite\":\n # Add code for all cases, via `child.attrib or child.text`\n\n\n" ]
[ 0 ]
[]
[]
[ "junit", "python", "python_3.x", "xml" ]
stackoverflow_0074424877_junit_python_python_3.x_xml.txt
Q: Python Selenium, checking if element is present. and if it was present I want it to return a boolean value of TRUE Python Selenium, checking if element is present. and if it was present I want it to return a boolean value of TRUE Here is the HTML Code: <td data-id="329083" data-property="status" xe-field="status" class="readonly" data-content="Status" style="width: 8%; display: none;" title=" FULL: 0 of 20 seats remain."><div class="status-full"> <span class="status-bold ">FULL</span>: 0 of 20 seats remain.</div></td> I need to check whenever the title changes to " 1 of 20 seats remain." and return a boolean value of TRUE idk if this helps but this code for checking if someone drops the course from my univercity, i'll create a loop that will keep refreshing the page until someone drops a course so i can register. Sorry if my english is bad Thanks in advance I tried these two different methods to check if the title changed: 1st code: status = driver.find_element(By.XPATH,"/html/body/main/div[3]/div/div[2]/div/div[1]/div/div[2]/div[3]/div[2]/div[1]/div[1]/div[1]/div/table/tbody/tr/td[11]").text status.get_attribute(" FULL: 0 of 20 seats remain.") if status == "FULL:0of20seatsremain.": print("Seats Not Avaliable") else: print("Error") 2nd Code: title = " FULL: 0 of 20 seats remain." courseStat = driver.find_element(By.CSS_SELECTOR("[title^='" + title + "']")).text Output for 1st Code: Exception has occurred: AttributeError 'str' object has no attribute 'get_attribute' status.get_attribute(" FULL: 0 of 20 seats remain.") Output for 2st Code: Exception has occurred: TypeError 'str' object is not callable courseStat = driver.find_element(By.CSS_SELECTOR("[title^='" + title + "']")).text I can't share the link because it's the portal for our university, which u will have to log in to view the page source or inspect elements, Here is a screenshot of the HTML code enter image description here A: The element attribute containing desired data is title. So, the first code can be fixed as following: status = driver.find_element(By.XPATH,"/html/body/main/div[3]/div/div[2]/div/div[1]/div/div[2]/div[3]/div[2]/div[1]/div[1]/div[1]/div/table/tbody/tr/td[11]").text status.get_attribute("title") if "FULL" in status: print("No seats available") else: print("Available seat found") While you have to improve the locator there. To correct the second approach we need to see that web page to define a correct locator for that element.
Python Selenium, checking if element is present. and if it was present I want it to return a boolean value of TRUE
Python Selenium, checking if element is present. and if it was present I want it to return a boolean value of TRUE Here is the HTML Code: <td data-id="329083" data-property="status" xe-field="status" class="readonly" data-content="Status" style="width: 8%; display: none;" title=" FULL: 0 of 20 seats remain."><div class="status-full"> <span class="status-bold ">FULL</span>: 0 of 20 seats remain.</div></td> I need to check whenever the title changes to " 1 of 20 seats remain." and return a boolean value of TRUE idk if this helps but this code for checking if someone drops the course from my univercity, i'll create a loop that will keep refreshing the page until someone drops a course so i can register. Sorry if my english is bad Thanks in advance I tried these two different methods to check if the title changed: 1st code: status = driver.find_element(By.XPATH,"/html/body/main/div[3]/div/div[2]/div/div[1]/div/div[2]/div[3]/div[2]/div[1]/div[1]/div[1]/div/table/tbody/tr/td[11]").text status.get_attribute(" FULL: 0 of 20 seats remain.") if status == "FULL:0of20seatsremain.": print("Seats Not Avaliable") else: print("Error") 2nd Code: title = " FULL: 0 of 20 seats remain." courseStat = driver.find_element(By.CSS_SELECTOR("[title^='" + title + "']")).text Output for 1st Code: Exception has occurred: AttributeError 'str' object has no attribute 'get_attribute' status.get_attribute(" FULL: 0 of 20 seats remain.") Output for 2st Code: Exception has occurred: TypeError 'str' object is not callable courseStat = driver.find_element(By.CSS_SELECTOR("[title^='" + title + "']")).text I can't share the link because it's the portal for our university, which u will have to log in to view the page source or inspect elements, Here is a screenshot of the HTML code enter image description here
[ "The element attribute containing desired data is title.\nSo, the first code can be fixed as following:\nstatus = driver.find_element(By.XPATH,\"/html/body/main/div[3]/div/div[2]/div/div[1]/div/div[2]/div[3]/div[2]/div[1]/div[1]/div[1]/div/table/tbody/tr/td[11]\").text\nstatus.get_attribute(\"title\")\nif \"FULL\" in status:\n print(\"No seats available\")\nelse:\n print(\"Available seat found\")\n\nWhile you have to improve the locator there.\nTo correct the second approach we need to see that web page to define a correct locator for that element.\n" ]
[ 0 ]
[]
[]
[ "automation", "python", "selenium", "selenium_chromedriver", "selenium_webdriver" ]
stackoverflow_0074460352_automation_python_selenium_selenium_chromedriver_selenium_webdriver.txt
Q: How to consecutively chain `dropna()` and `to_datetime()` in pandas, accounting for `SettingWithCopyWarning`? In a pandas DataFrame, I'd like to accomplish two clean-up steps: Drop any row with missing values; and Convert a date column from DD.MM.YYYY pattern to standard YYYY-MM-DD I do know the answer for each step separately: dropping missing values can be achieved with pandas.dropna() converting DD.MM.YYYY string to YYYY-MM-DD can be done with pandas.to_datetime(x, format='%d.%m.%Y') However, I'm not sure what would be the "standard" way of processing those two steps consecutively (aka "to chain the procedures"). I've seen this answer which is very on-topic, but too rudimentary. Example import numpy as np import pandas as pd name = ['John', 'Melinda', 'Greg', 'Amanda'] dob = ['20.12.2001', '11.03.1991', '31.12.1999', np.nan] my_df = pd.DataFrame({'name':name,'dob':dob}) my_df #> name dob #> 0 John 20.12.2001 #> 1 Melinda 11.03.1991 #> 2 Greg 31.12.1999 #> 3 Amanda NaN I'm attempting to write a concise code that will be something like: # pseudo code my_df.dropna().to_datetime('dob', format='%d.%m.%Y') # expected output #> name dob #> 0 John 2001-12-20 #> 1 Melinda 1991-03-11 #> 2 Greg 1999-12-31 But I can't make it that simple! In any case, it seems that I must first assign the no-NaN dataframe to another variable. That is: my_df_nona = my_df.dropna() and then process that dataframe with to_datetime(). Second, I'm not sure how I'm supposed to make the assignment to my_df_nona. Should I use copy()? Below are three versions of the same procedure. Each one gives the desired output, but with a different combination of warnings. Option 1 not using .copy() using .loc([:, 'dob']) as suggested here my_df_nona_1 = my_df.dropna() my_df_nona_1.loc[:, 'dob'] = pd.to_datetime(my_df_nona_1.loc[:, 'dob'], format='%d.%m.%Y') #> SettingWithCopyWarning: #> A value is trying to be set on a copy of a slice from a DataFrame. #> Try using .loc[row_indexer,col_indexer] = value instead #> See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy #> FutureWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)` Option 2 using .copy() using .loc([:, 'dob']) my_df_nona_2 = my_df.dropna().copy() my_df_nona_2.loc[:, 'dob'] = pd.to_datetime(my_df_nona_2.loc[:, 'dob'], format='%d.%m.%Y') #> FutureWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)` Option 3 using .copy() not using .loc([:, 'dob']) my_df_nona_3 = my_df.dropna().copy() my_df_nona_3['dob'] = pd.to_datetime(my_df_nona_3['dob'], format='%d.%m.%Y') Summary As a beginner in pandas, I'd like to ask: Is there a way to chain those procedures in a functional fashion? I come from R where one could do # R my_df |> drop_na() |> mutate(across(dob, dmy)) I'm trying to understand whether I should attempt to mimic such syntax in pandas If the answer to (1) is 'No', then what would be the best practice for a workflow that removes missing values, then converts the dob column type, then possibly additional data wrangling / aggregations over the data frame? 
I've seen this detailed answer about SettingWithCopyWarning. It is insightful. However, I'm not sure whether my take from it should be that using .copy() is the panacea for all dataframe assignment issues. One limitation I can think of is that using .copy() all over the place would be bloating memory. A: Use DataFrame.assign for chain to_datetime with DataFrame.dropna: df = my_df.dropna().assign(dob = lambda x: pd.to_datetime(x['dob'], format='%d.%m.%Y')) print (df) name dob 0 John 2001-12-20 1 Melinda 1991-03-11 2 Greg 1999-12-31
How to consecutively chain `dropna()` and `to_datetime()` in pandas, accounting for `SettingWithCopyWarning`?
In a pandas DataFrame, I'd like to accomplish two clean-up steps: Drop any row with missing values; and Convert a date column from DD.MM.YYYY pattern to standard YYYY-MM-DD I do know the answer for each step separately: dropping missing values can be achieved with pandas.dropna() converting DD.MM.YYYY string to YYYY-MM-DD can be done with pandas.to_datetime(x, format='%d.%m.%Y') However, I'm not sure what would be the "standard" way of processing those two steps consecutively (aka "to chain the procedures"). I've seen this answer which is very on-topic, but too rudimentary. Example import numpy as np import pandas as pd name = ['John', 'Melinda', 'Greg', 'Amanda'] dob = ['20.12.2001', '11.03.1991', '31.12.1999', np.nan] my_df = pd.DataFrame({'name':name,'dob':dob}) my_df #> name dob #> 0 John 20.12.2001 #> 1 Melinda 11.03.1991 #> 2 Greg 31.12.1999 #> 3 Amanda NaN I'm attempting to write a concise code that will be something like: # pseudo code my_df.dropna().to_datetime('dob', format='%d.%m.%Y') # expected output #> name dob #> 0 John 2001-12-20 #> 1 Melinda 1991-03-11 #> 2 Greg 1999-12-31 But I can't make it that simple! In any case, it seems that I must first assign the no-NaN dataframe to another variable. That is: my_df_nona = my_df.dropna() and then process that dataframe with to_datetime(). Second, I'm not sure how I'm supposed to make the assignment to my_df_nona. Should I use copy()? Below are three versions of the same procedure. Each one gives the desired output, but with a different combination of warnings. Option 1 not using .copy() using .loc([:, 'dob']) as suggested here my_df_nona_1 = my_df.dropna() my_df_nona_1.loc[:, 'dob'] = pd.to_datetime(my_df_nona_1.loc[:, 'dob'], format='%d.%m.%Y') #> SettingWithCopyWarning: #> A value is trying to be set on a copy of a slice from a DataFrame. #> Try using .loc[row_indexer,col_indexer] = value instead #> See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy #> FutureWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)` Option 2 using .copy() using .loc([:, 'dob']) my_df_nona_2 = my_df.dropna().copy() my_df_nona_2.loc[:, 'dob'] = pd.to_datetime(my_df_nona_2.loc[:, 'dob'], format='%d.%m.%Y') #> FutureWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)` Option 3 using .copy() not using .loc([:, 'dob']) my_df_nona_3 = my_df.dropna().copy() my_df_nona_3['dob'] = pd.to_datetime(my_df_nona_3['dob'], format='%d.%m.%Y') Summary As a beginner in pandas, I'd like to ask: Is there a way to chain those procedures in a functional fashion? I come from R where one could do # R my_df |> drop_na() |> mutate(across(dob, dmy)) I'm trying to understand whether I should attempt to mimic such syntax in pandas If the answer to (1) is 'No', then what would be the best practice for a workflow that removes missing values, then converts the dob column type, then possibly additional data wrangling / aggregations over the data frame? I've seen this detailed answer about SettingWithCopyWarning. It is insightful. 
However, I'm not sure whether my take from it should be that using .copy() is the panacea for all dataframe assignment issues. One limitation I can think of is that using .copy() all over the place would be bloating memory.
[ "Use DataFrame.assign for chain to_datetime with DataFrame.dropna:\ndf = my_df.dropna().assign(dob = lambda x: pd.to_datetime(x['dob'], format='%d.%m.%Y'))\nprint (df)\n name dob\n0 John 2001-12-20\n1 Melinda 1991-03-11\n2 Greg 1999-12-31\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074460147_dataframe_datetime_pandas_python.txt
Q: regex : how to keep relevant words and remove other? The original output looks like this: JOBS column: {"/j/03k50": "Waitress Job", "/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"} {"/j/03k50": "Waitress Job", "/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"} {"/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"} ` And I want something like this, so I want to remove the word "job" and the associated codes: New JOBS column {"Waitress", "Programmer", "Marketing"} {"Waitress", "Programmer", "Marketing"} {"Programmer", "Marketing"} Before using the regex, I converted the column Jobs into a list (df_old) and I tried this: df_new = [re.sub('^/j/', '', doc) for doc in df_old] I had an error: TypeError: expected string or bytes-like object, so I did this df_new = [re.sub('^/j/', '', doc) for doc in str(df_old) I had no errors but the output was horrible and was not conclusive in my objectives. I hope you can help. Thank you in advance. A: As per the comment...there are far better ways of doing this. However, as a rough example direct to the question asked... import pandas as pd data = ['{"/j/03k50": "Waitress Job", "/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"}', '{"/j/03k50": "Waitress Job", "/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"}', '{"/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"} `'] df = pd.DataFrame(data, columns=['JOBS']) df['Cleaned_JOBS'] = df['JOBS'].str.findall(r': (\".*?\sJob\"),?').str.join(', ') df['Cleaned_JOBS'] = df['Cleaned_JOBS'].str.replace(' Job', '') df['Cleaned_JOBS'] = '{' + df['Cleaned_JOBS'] + '}' print(df, '\n\n') Output: JOBS Cleaned_JOBS 0 {"/j/03k50": "Waitress Job", "/j/055qm": "Prog... {"Waitress", "Programmer", "Marketing"} 1 {"/j/03k50": "Waitress Job", "/j/055qm": "Prog... {"Waitress", "Programmer", "Marketing"} 2 {"/j/055qm": "Programmer Job", "/j/02h40lc": "... {"Programmer", "Marketing"}
regex : how to keep relevant words and remove other?
The original output looks like this: JOBS column: {"/j/03k50": "Waitress Job", "/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"} {"/j/03k50": "Waitress Job", "/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"} {"/j/055qm": "Programmer Job", "/j/02h40lc": "Marketing Job"} ` And I want something like this, so I want to remove the word "job" and the associated codes: New JOBS column {"Waitress", "Programmer", "Marketing"} {"Waitress", "Programmer", "Marketing"} {"Programmer", "Marketing"} Before using the regex, I converted the column Jobs into a list (df_old) and I tried this: df_new = [re.sub('^/j/', '', doc) for doc in df_old] I had an error: TypeError: expected string or bytes-like object, so I did this df_new = [re.sub('^/j/', '', doc) for doc in str(df_old) I had no errors but the output was horrible and was not conclusive in my objectives. I hope you can help. Thank you in advance.
[ "As per the comment...there are far better ways of doing this. However, as a rough example direct to the question asked...\nimport pandas as pd\n\ndata = ['{\"/j/03k50\": \"Waitress Job\", \"/j/055qm\": \"Programmer Job\", \"/j/02h40lc\": \"Marketing Job\"}',\n'{\"/j/03k50\": \"Waitress Job\", \"/j/055qm\": \"Programmer Job\", \"/j/02h40lc\": \"Marketing Job\"}',\n'{\"/j/055qm\": \"Programmer Job\", \"/j/02h40lc\": \"Marketing Job\"} `']\n\ndf = pd.DataFrame(data, columns=['JOBS'])\n\ndf['Cleaned_JOBS'] = df['JOBS'].str.findall(r': (\\\".*?\\sJob\\\"),?').str.join(', ')\ndf['Cleaned_JOBS'] = df['Cleaned_JOBS'].str.replace(' Job', '')\n\ndf['Cleaned_JOBS'] = '{' + df['Cleaned_JOBS'] + '}'\n\nprint(df, '\\n\\n')\n\nOutput:\n JOBS Cleaned_JOBS\n0 {\"/j/03k50\": \"Waitress Job\", \"/j/055qm\": \"Prog... {\"Waitress\", \"Programmer\", \"Marketing\"}\n1 {\"/j/03k50\": \"Waitress Job\", \"/j/055qm\": \"Prog... {\"Waitress\", \"Programmer\", \"Marketing\"}\n2 {\"/j/055qm\": \"Programmer Job\", \"/j/02h40lc\": \"... {\"Programmer\", \"Marketing\"}\n\n" ]
[ 0 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074453314_json_pandas_python.txt
Q: Django Get Last Object for each Value in List I have a model called Purchase, with two fields, User and amount_spent. This is models.py: class Purchase(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) amount_spent = models.IntegerField() created_at = models.DateTimeField(auto_now_add=True) I want to get the last purchases from a list of users. On views.py I have a list with some User's objects, and I want to get the last purchase for each user in the list. I can't find a way of doing this in a single query, I checked the latest() operator on QuerySets, but it only returns one object. This is views.py: purchases = Purchase.objects.filter(user__in=list_of_users) # purchases contains all the purchases from users, now I need to get the most recent onces for each user. I now I could group the purchases by user and then get the most recent ones, but I was wondering it there is a way of making this as a single query to DB. A: try this: Purchase.objects.filter(user__in=list_of_users).values("user_id", "amount_spent").order_by("-id").distinct("user_id") A: You can annotate the Users with the last_purchase_pks and then fetch these and adds that to these users: from django.db.models import OuterRef, Subquery users = User.objects.annotate( last_purchase_pk=Subquery( purchase.objects.order_by('-created_at') .filter(user_id=OuterRef('pk')) .values('pk')[:1] ) ) purchases = { p.pk: p for p in Purchase.objects.filter( pk__in=[user.last_purchase_pk for user in users] ) } for user in users: user.last_purchase = purchases.get(user.last_purchase_pk) After this code snippet, the User objects in users will all have a last_purchase attribute that contains the last Purchase for that user, or None in case there is no such purchase.
Django Get Last Object for each Value in List
I have a model called Purchase, with two fields, User and amount_spent. This is models.py: class Purchase(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) amount_spent = models.IntegerField() created_at = models.DateTimeField(auto_now_add=True) I want to get the last purchases from a list of users. In views.py I have a list with some User objects, and I want to get the last purchase for each user in the list. I can't find a way of doing this in a single query; I checked the latest() operator on QuerySets, but it only returns one object. This is views.py: purchases = Purchase.objects.filter(user__in=list_of_users) # purchases contains all the purchases from users, now I need to get the most recent ones for each user. I know I could group the purchases by user and then get the most recent ones, but I was wondering if there is a way of making this a single query to the DB.
[ "try this:\nPurchase.objects.filter(user__in=list_of_users).values(\"user_id\", \"amount_spent\").order_by(\"-id\").distinct(\"user_id\")\n\n", "You can annotate the Users with the last_purchase_pks and then fetch these and adds that to these users:\nfrom django.db.models import OuterRef, Subquery\n\nusers = User.objects.annotate(\n last_purchase_pk=Subquery(\n purchase.objects.order_by('-created_at')\n .filter(user_id=OuterRef('pk'))\n .values('pk')[:1]\n )\n)\n\npurchases = {\n p.pk: p\n for p in Purchase.objects.filter(\n pk__in=[user.last_purchase_pk for user in users]\n )\n}\nfor user in users:\n user.last_purchase = purchases.get(user.last_purchase_pk)\nAfter this code snippet, the User objects in users will all have a last_purchase attribute that contains the last Purchase for that user, or None in case there is no such purchase.\n" ]
[ 2, 1 ]
[]
[]
[ "django", "django_database", "django_models", "python" ]
stackoverflow_0074460191_django_django_database_django_models_python.txt
Q: Creating sum of date ranges in Pandas
I have the following DataFrame, with over 3 million rows:
  VALID_FROM   VALID_TO  VALUE
0 2022-01-01 2022-01-02      5
1 2022-01-01 2022-01-03      2
2 2022-01-02 2022-01-04      7
3 2022-01-03 2022-01-06      3

I want to create one large date_range with a sum of the values for each timestamp. For the DataFrame above that would come out to:
       dates  val
0 2022-01-01    7
1 2022-01-02   14
2 2022-01-03   12
3 2022-01-04   10
4 2022-01-05    3
5 2022-01-06    3

However, as the DataFrame has a little over 3 million rows I don't want to iterate over each row and I'm not sure how to do this without iterating. Any suggestions?
Currently my code looks like this:
new_df = pd.DataFrame()
for idx, row in dummy_df.iterrows():
    dr = pd.date_range(row["VALID_FROM"], end = row["VALID_TO"], freq = "D")
    tmp_df = pd.DataFrame({"dates": dr, "val": row["VALUE"]})
    new_df = pd.concat(objs=[new_df, tmp_df], ignore_index=True)

new_df.groupby("dates", as_index=False, group_keys=False).sum()

The result of the groupby would be my desired output.
A: If performance is important use Index.repeat with DataFrame.loc for new rows, create a date column with a counter by GroupBy.cumcount and lastly aggregate sum:
df['VALID_FROM'] = pd.to_datetime(df['VALID_FROM'])
df['VALID_TO'] = pd.to_datetime(df['VALID_TO'])

df1 = df.loc[df.index.repeat(df['VALID_TO'].sub(df['VALID_FROM']).dt.days + 1)]
df1['dates'] = df1['VALID_FROM'] + pd.to_timedelta(df1.groupby(level=0).cumcount(),unit='d')

df1 = df1.groupby('dates', as_index=False)['VALUE'].sum()
print (df1)
       dates  VALUE
0 2022-01-01      7
1 2022-01-02     14
2 2022-01-03     12
3 2022-01-04     10
4 2022-01-05      3
5 2022-01-06      3

A: One option is to build a list of dates, from the min to the max from the original dataframe, use a non-equi join with conditional_join to get matches, and finally groupby and sum:
# pip install pyjanitor
import pandas as pd
import janitor

# build the date pandas object:
dates = df.filter(like='VALID').to_numpy()
dates = pd.date_range(dates.min(), dates.max(), freq='1D')
dates = pd.Series(dates, name='dates')

# compute the inequality join between valid_from and valid_to,
# followed by the aggregation on a groupby:
(df
.conditional_join(
    dates,
    ('VALID_FROM', 'dates', '<='),
    ('VALID_TO','dates', '>='),
    # if you have numba installed,
    # it can improve performance
    use_numba=False,
    df_columns='VALUE')
.groupby('dates')
.VALUE
.sum()
)
dates
2022-01-01     7
2022-01-02    14
2022-01-03    12
2022-01-04    10
2022-01-05     3
2022-01-06     3
Name: VALUE, dtype: int64
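Another short alternative, not shown in the answers above (a sketch assuming VALID_FROM and VALID_TO are already datetime columns), is to build the per-row date ranges and use DataFrame.explode before the groupby:

# Sketch: assumes VALID_FROM / VALID_TO are datetime64 columns.
df['dates'] = [
    pd.date_range(start, end, freq='D')
    for start, end in zip(df['VALID_FROM'], df['VALID_TO'])
]
out = (df.explode('dates')            # one row per day in each range
         .groupby('dates', as_index=False)['VALUE']
         .sum())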
Creating sum of date ranges in Pandas
I have the following DataFrame, with over 3 million rows:
  VALID_FROM   VALID_TO  VALUE
0 2022-01-01 2022-01-02      5
1 2022-01-01 2022-01-03      2
2 2022-01-02 2022-01-04      7
3 2022-01-03 2022-01-06      3

I want to create one large date_range with a sum of the values for each timestamp. For the DataFrame above that would come out to:
       dates  val
0 2022-01-01    7
1 2022-01-02   14
2 2022-01-03   12
3 2022-01-04   10
4 2022-01-05    3
5 2022-01-06    3

However, as the DataFrame has a little over 3 million rows I don't want to iterate over each row and I'm not sure how to do this without iterating. Any suggestions?
Currently my code looks like this:
new_df = pd.DataFrame()
for idx, row in dummy_df.iterrows():
    dr = pd.date_range(row["VALID_FROM"], end = row["VALID_TO"], freq = "D")
    tmp_df = pd.DataFrame({"dates": dr, "val": row["VALUE"]})
    new_df = pd.concat(objs=[new_df, tmp_df], ignore_index=True)

new_df.groupby("dates", as_index=False, group_keys=False).sum()

The result of the groupby would be my desired output.
[ "If performance is important use Index.repeat with DataFrame.loc for new rows, create date colun with counter by GroupBy.cumcount and last aggregate sum:\ndf['VALID_FROM'] = pd.to_datetime(df['VALID_FROM'])\ndf['VALID_TO'] = pd.to_datetime(df['VALID_TO'])\n\ndf1 = df.loc[df.index.repeat(df['VALID_TO'].sub(df['VALID_FROM']).dt.days + 1)]\ndf1['dates'] = df1['VALID_FROM'] + pd.to_timedelta(df1.groupby(level=0).cumcount(),unit='d')\n\ndf1 = df1.groupby('dates', as_index=False)['VALUE'].sum()\nprint (df1)\n dates VALUE\n0 2022-01-01 7\n1 2022-01-02 14\n2 2022-01-03 12\n3 2022-01-04 10\n4 2022-01-05 3\n5 2022-01-06 3\n\n", "One option is to build a list of dates, from the min to the max from the original dataframe, use a non-equi join with conditional_join to get matches, and finally groupby and sum:\n# pip install pyjanitor\nimport pandas as pd\nimport janitor\n\n# build the date pandas object:\ndates = df.filter(like='VALID').to_numpy()\ndates = pd.date_range(dates.min(), dates.max(), freq='1D')\ndates = pd.Series(dates, name='dates')\n\n# compute the inequality join between valid_from and valid_to, \n# followed by the aggregation on a groupby:\n(df\n.conditional_join(\n dates, \n ('VALID_FROM', 'dates', '<='),\n ('VALID_TO','dates', '>='), \n # if you have numba installed, \n # it can improve performance\n use_numba=False, \n df_columns='VALUE')\n.groupby('dates')\n.VALUE\n.sum()\n) \ndates\n2022-01-01 7\n2022-01-02 14\n2022-01-03 12\n2022-01-04 10\n2022-01-05 3\n2022-01-06 3\nName: VALUE, dtype: int64\n\n" ]
[ 3, 1 ]
[]
[]
[ "dataframe", "date_range", "datetime", "pandas", "python" ]
stackoverflow_0074460294_dataframe_date_range_datetime_pandas_python.txt
Q: How to pull any cells from a table/dataframe into a column if they contain specific string?
I am using Python in CoLab and I am trying to find something that will allow me to move any cells from a subset of a data frame into a new/different column in the same data frame OR sort the cells of the dataframe into the correct columns.
The original column in the CSV looked like this:
and using
Users[['Motorbike', 'Car', 'Bus', 'Train', 'Tram', 'Taxi']] = Users['What distance did you travel in the last month by:'].str.split(',', expand=True)

I was able to split the column into 6 new series to give this
However, now I would like all the cells with 'Motorbike' in the Motorbike column, all the cells with 'Car' in the Car column and so on, without overwriting any other cells OR, if this cannot be done, to just assign any occurrences of Motorbike, Car etc. into the new columns 'Motorbike1', 'Car1' etc. that I have added to the dataframe as shown below. Can anyone help please?
new columns
I have tried to copy the cells in the original columns to the new columns and then get rid of values not containing, say, 'Car'. However, repeating this for the next original column into the same first new column overwrites it.
There are no repeats of any mode of transport in any row, i.e. there is at most one occurrence of each mode of transport in every row.
A: Use list comprehension with split for dictionaries, then pass to DataFrame constructor:
L = [dict([y.split() for y in x.split(',')])
     for x in df['What distance did you travel in the last month by:']]

df = pd.DataFrame(L)
print (df)
        Taxi  Motorbike         Car      Train        Bus       Tram
0   (km)(20)        NaN         NaN        NaN        NaN        NaN
1        NaN  (km)(500)   (km)(500)        NaN        NaN        NaN
2        NaN        NaN  (km)(1000)        NaN        NaN        NaN
3        NaN        NaN   (km)(100)   (km)(20)        NaN        NaN
4   (km)(25)        NaN         NaN  (km)(700)  (km)(150)        NaN
5        NaN    (km)(0)     (km)(0)        NaN   (km)(40)        NaN
6  (km)(100)        NaN   (km)(300)        NaN        NaN        NaN
7        NaN        NaN   (km)(300)        NaN        NaN        NaN
8        NaN        NaN         NaN   (km)(80)        NaN  (km)(300)
9        NaN        NaN   (km)(700)        NaN   (km)(50)   (km)(50)

A: You can use a regex to extract the xxx (yyy)(yyy) parts, then reshape:
out = (df['col_name']
 .str.extractall(r'([^,]+) (\([^,]*\))')
 .set_index(0, append=True)[1]
 .droplevel('match')
 .unstack(0)
)

output:
         Bus         Car  Motorbike       Taxi      Train       Tram
0        NaN         NaN        NaN   (km)(20)        NaN        NaN
1        NaN   (km)(500)  (km)(500)        NaN        NaN        NaN
2        NaN  (km)(1000)        NaN        NaN        NaN        NaN
3        NaN   (km)(100)        NaN        NaN   (km)(20)        NaN
4  (km)(150)         NaN        NaN   (km)(25)  (km)(700)        NaN
5   (km)(40)     (km)(0)    (km)(0)        NaN        NaN        NaN
6        NaN   (km)(300)        NaN  (km)(100)        NaN        NaN
7        NaN   (km)(300)        NaN        NaN        NaN        NaN
8        NaN         NaN        NaN        NaN   (km)(80)  (km)(300)
9   (km)(50)   (km)(700)        NaN        NaN        NaN   (km)(50)

If you only need the numbers, you can change the regex:
(df['col_name'].str.extractall(r'([^,]+)\s+\(km\)\((\d+)\)')
 .set_index(0, append=True)[1]
 .droplevel('match')
 .unstack(0).rename_axis(columns=None)
)

Output:
   Bus   Car Motorbike Taxi Train Tram
0  NaN   NaN       NaN   20   NaN  NaN
1  NaN   500       500  NaN   NaN  NaN
2  NaN  1000       NaN  NaN   NaN  NaN
3  NaN   100       NaN  NaN    20  NaN
4  150   NaN       NaN   25   700  NaN
5   40     0         0  NaN   NaN  NaN
6  NaN   300       NaN  100   NaN  NaN
7  NaN   300       NaN  NaN   NaN  NaN
8  NaN   NaN       NaN  NaN    80  300
9   50   700       NaN  NaN   NaN   50
How to pull any cells from a table/dataframe into a column if they contain specific string?
I am using Python in CoLab and I am trying to find something that will allow me to move any cells from a subset of a data frame into a new/different column in the same data frame OR sort the cells of the dataframe into the correct columns.
The original column in the CSV looked like this:
and using
Users[['Motorbike', 'Car', 'Bus', 'Train', 'Tram', 'Taxi']] = Users['What distance did you travel in the last month by:'].str.split(',', expand=True)

I was able to split the column into 6 new series to give this
However, now I would like all the cells with 'Motorbike' in the Motorbike column, all the cells with 'Car' in the Car column and so on, without overwriting any other cells OR, if this cannot be done, to just assign any occurrences of Motorbike, Car etc. into the new columns 'Motorbike1', 'Car1' etc. that I have added to the dataframe as shown below. Can anyone help please?
new columns
I have tried to copy the cells in the original columns to the new columns and then get rid of values not containing, say, 'Car'. However, repeating this for the next original column into the same first new column overwrites it.
There are no repeats of any mode of transport in any row, i.e. there is at most one occurrence of each mode of transport in every row.
[ "Use list comprehension with split for dictionaries, then pass to DataFrame constructor:\nL = [dict([y.split() for y in x.split(',')])\n for x in df['What distance did you travel in the last month by:']]\n\ndf = pd.DataFrame(L)\nprint (df)\n Taxi Motorbike Car Train Bus Tram\n0 (km)(20) NaN NaN NaN NaN NaN\n1 NaN (km)(500) (km)(500) NaN NaN NaN\n2 NaN NaN (km)(1000) NaN NaN NaN\n3 NaN NaN (km)(100) (km)(20) NaN NaN\n4 (km)(25) NaN NaN (km)(700) (km)(150) NaN\n5 NaN (km)(0) (km)(0) NaN (km)(40) NaN\n6 (km)(100) NaN (km)(300) NaN NaN NaN\n7 NaN NaN (km)(300) NaN NaN NaN\n8 NaN NaN NaN (km)(80) NaN (km)(300)\n9 NaN NaN (km)(700) NaN (km)(50) (km)(50)\n\n", "You can use a regex to extract the xxx (yyy)(yyy) parts, then reshape:\nout = (df['col_name']\n .str.extractall(r'([^,]+) (\\([^,]*\\))')\n .set_index(0, append=True)[1]\n .droplevel('match')\n .unstack(0)\n)\n\noutput:\n Bus Car Motorbike Taxi Train Tram\n0 NaN NaN NaN (km)(20) NaN NaN\n1 NaN (km)(500) (km)(500) NaN NaN NaN\n2 NaN (km)(1000) NaN NaN NaN NaN\n3 NaN (km)(100) NaN NaN (km)(20) NaN\n4 (km)(150) NaN NaN (km)(25) (km)(700) NaN\n5 (km)(40) (km)(0) (km)(0) NaN NaN NaN\n6 NaN (km)(300) NaN (km)(100) NaN NaN\n7 NaN (km)(300) NaN NaN NaN NaN\n8 NaN NaN NaN NaN (km)(80) (km)(300)\n9 (km)(50) (km)(700) NaN NaN NaN (km)(50)\n\nIf you only need the numbers, you can change the regex:\n(df['col_name'].str.extractall(r'([^,]+)\\s+\\(km\\)\\((\\d+)\\)')\n .set_index(0, append=True)[1]\n .droplevel('match')\n .unstack(0).rename_axis(columns=None)\n)\n\nOutput:\n Bus Car Motorbike Taxi Train Tram\n0 NaN NaN NaN 20 NaN NaN\n1 NaN 500 500 NaN NaN NaN\n2 NaN 1000 NaN NaN NaN NaN\n3 NaN 100 NaN NaN 20 NaN\n4 150 NaN NaN 25 700 NaN\n5 40 0 0 NaN NaN NaN\n6 NaN 300 NaN 100 NaN NaN\n7 NaN 300 NaN NaN NaN NaN\n8 NaN NaN NaN NaN 80 300\n9 50 700 NaN NaN NaN 50\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python", "sorting" ]
stackoverflow_0074459952_dataframe_pandas_python_sorting.txt
Q: How to release GPU memory in tensorflow? (opposite of `allow_growth` → `allow_shrink`?)
I'm using a GPU to train quite a lot of models. I want to tune the architecture of the network, so I train different models sequentially to compare their performances (I'm using keras-tuner).
The problem is that some models are very small, and some others are very large. I don't want to allocate all the GPU memory to my trainings, but only the quantity I need. I've set TF_FORCE_GPU_ALLOW_GROWTH to true, meaning that when a model requires a large quantity of memory, then the GPU will allocate it. However, once that big model has been trained, the memory will not be released, even if the next trainings are tiny models.
Is there a way to force the GPU to release unused memory? Something like TF_FORCE_GPU_ALLOW_SHRINK?
Maybe having an automatic shrinking might be difficult to achieve. If so, I would be happy with a manual release that I could add in a callback to be run after each training.
A: You can try limiting GPU memory growth using this code:
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)

The second method is to configure a virtual GPU device with tf.config.set_logical_device_configuration and set a hard limit on the total memory to allocate to the GPU.
Please check this link for more details.
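As a sketch of the second method mentioned in the answer (the 4096 MB limit below is an arbitrary assumption, not a value from the question):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Expose the first GPU as one logical device capped at ~4 GB.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)]
    )

Note this must run before the GPU is initialized (i.e. before any op touches it), and it caps the allocation rather than shrinking it after a large model has trained.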
How to release GPU memory in tensorflow? (opposite of `allow_growth` → `allow_shrink`?)
I'm using a GPU to train quite a lot of models. I want to tune the architecture of the network, so I train different models sequentially to compare their performances (I'm using keras-tuner).
The problem is that some models are very small, and some others are very large. I don't want to allocate all the GPU memory to my trainings, but only the quantity I need. I've set TF_FORCE_GPU_ALLOW_GROWTH to true, meaning that when a model requires a large quantity of memory, then the GPU will allocate it. However, once that big model has been trained, the memory will not be released, even if the next trainings are tiny models.
Is there a way to force the GPU to release unused memory? Something like TF_FORCE_GPU_ALLOW_SHRINK?
Maybe having an automatic shrinking might be difficult to achieve. If so, I would be happy with a manual release that I could add in a callback to be run after each training.
[ "You can try by limiting GPU memory growth using this code:\nimport tensorflow as tf\ngpus = tf.config.experimental.list_physical_devices('GPU')\ntf.config.experimental.set_memory_growth(gpus[0], True)\n\nThe second method is to configure a virtual GPU device with tf.config.set_logical_device_configuration and set a hard limit on the total memory to allocate it to the GPU.\nPlease check this link for more details.\n" ]
[ 0 ]
[]
[]
[ "gpu", "gpu_managed_memory", "python", "tensorflow" ]
stackoverflow_0074190403_gpu_gpu_managed_memory_python_tensorflow.txt
Q: Understanding exception handling in with statement in python
I am trying to understand the with statement in python but I don't get how it does exception handling.
For example, we have this code
file = open('file-path', 'w')
try:
    file.write('Lorem ipsum')
finally:
    file.close()

and then this code
with open('file_path', 'w') as file:
    file.write('hello world !')

Here, when is file.close() called? From this question I think it is because python has entry and exit functions (and the exit function is called by itself when file.write() is over?), but then how are we going to do exception handling (the catch statement) in particular?
Also, what if we don't write finally in the first snippet — won't it close the connection by itself?
file = open('file-path', 'w')
try:
    file.write('Lorem ipsum')

A: When using with statements, __exit__ will be called whenever we leave the with block, regardless of whether we leave it due to an exception or because we just finished executing the contained code normally.
If any code contained in the with block causes an exception, it will cause __exit__ to run and then propagate the exception to the surrounding try/except block.
This snippet (with finally: pass added for the sake of syntax correctness):
file = open('file-path', 'w')
try:
    file.write('Lorem ipsum')
finally:
    pass

will never cause the file to be closed, so it will remain open for the whole run of the program (assuming close is not called anywhere else).
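A minimal sketch (not from the original post) that makes the order of calls visible when an exception is raised inside the with block:

class Demo:
    def __enter__(self):
        print("enter")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs whether or not an exception occurred inside the block.
        print("exit, exception:", exc_type)
        return False   # False -> re-raise the exception to the caller

try:
    with Demo():
        raise ValueError("boom")
except ValueError:
    print("caught after __exit__ ran")

Running it prints "enter", then "exit, exception: <class 'ValueError'>", then "caught after __exit__ ran", which shows that __exit__ (the close step for files) always runs before the surrounding except clause sees the exception.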
Understanding exception handling in with statement in python
I am trying to understand the with statement in python but I don't get how it does exception handling.
For example, we have this code
file = open('file-path', 'w')
try:
    file.write('Lorem ipsum')
finally:
    file.close()

and then this code
with open('file_path', 'w') as file:
    file.write('hello world !')

Here, when is file.close() called? From this question I think it is because python has entry and exit functions (and the exit function is called by itself when file.write() is over?), but then how are we going to do exception handling (the catch statement) in particular?
Also, what if we don't write finally in the first snippet — won't it close the connection by itself?
file = open('file-path', 'w')
try:
    file.write('Lorem ipsum')
[ "When using with statements, __exit__ will be called whenever we leave the with block, regardless of if we leave it due to exception or if we just finished executing contained code normally.\nIf any code contained in the with block causes an exception, it will cause the __exit__ to run and then propagate the exception to the surrounding try/except block.\nThis snippet (added finally: pass for the sake of syntax correctness):\nfile = open('file-path', 'w') \ntry: \n file.write('Lorem ipsum')\nfinally:\n pass\n\nwill never cause the file to be closed, so it will remain open for the whole run of the program (assuming close is not called anywhere else).\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074460466_python.txt
Q: where can I see algorithms/codes used for wavelet transforms in the PyWavelets module?
I would like a clear description of the algorithms in pywavelets for several decompositions (transform & inverse transform if not obvious). Does anyone know, or know where to find that?
Update, November 17, 2022:
Sorry, I wasn't clear enough: I need to try several transformations in Fortran. That's why I was hoping for a CLEAR description of the algorithms I could use easily, not a set of python functions barely documented that I have to understand myself. By the way, in this link, where would you get the information? The doc is very light... I need something saying: "for this kind of transfo, do that".
A: You can find them here:
https://github.com/PyWavelets/pywt
I am not sure if this is what you meant and if you are using this version.
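As a rough illustration of what one decomposition level does (a sketch of the standard filter-bank scheme, not PyWavelets' exact C implementation; boundary handling and sign conventions can differ between libraries):

import numpy as np
import pywt

w = pywt.Wavelet('haar')                      # any built-in wavelet name works here
lo, hi = np.array(w.dec_lo), np.array(w.dec_hi)

x = np.arange(8, dtype=float)
# One analysis level = convolve with each decomposition filter, then downsample by 2.
approx = np.convolve(x, lo)[1::2]
detail = np.convolve(x, hi)[1::2]

# Library call for comparison (edge handling may differ for longer filters):
cA, cD = pywt.dwt(x, 'haar')

The filter taps exposed by pywt.Wavelet (dec_lo, dec_hi, rec_lo, rec_hi) are the coefficients one would hard-code when porting the same convolve-and-downsample scheme to Fortran.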
where can I see algorithms/codes used for wavelet transforms in the PyWavelets module?
I would like a clear description of the algorithms in pywavelets for several decompositions (transform & inverse transform if not obvious). Does anyone know, or know where to find that?
Update, November 17, 2022:
Sorry, I wasn't clear enough: I need to try several transformations in Fortran. That's why I was hoping for a CLEAR description of the algorithms I could use easily, not a set of python functions barely documented that I have to understand myself. By the way, in this link, where would you get the information? The doc is very light... I need something saying: "for this kind of transfo, do that".
[ "You can find them here:\nhttps://github.com/PyWavelets/pywt\nI am not sure if this is what you meant and if you are using this version.\n" ]
[ 0 ]
[]
[]
[ "python", "pywavelets" ]
stackoverflow_0074431540_python_pywavelets.txt
Q: Assigning node names to a graph in networkx
I'm trying to generate a networkx graph from a dataframe using the code below:
import pandas as pd
import numpy as np
import networkx as nx

data = [[0,0,0,1], [1,0,0,1], [1,0,0,1], [0,0,0,0]]
df = pd.DataFrame(data, columns=['S1', 'S2', 'S3', 'S4'])
df.index = ['S1', 'S2', 'S3', 'S4']

G = nx.DiGraph(df.values, with_labels=True)
nx.draw(G)

However, the graph is drawn without the node names being shown, which looks like below
I tried to label the nodes in the graph using networkx's relabel_nodes, however without luck
G = nx.relabel_nodes(G, dict(enumerate(df.columns)))
nx.draw(G)

Is there a way wherein I can label the nodes of the networkx graph as S1, S2, S3, S4?
A: The renaming should be a mapping that has old node identifiers as keys and new node identifiers as values (a dictionary, basically):
mapping = {0: "S1", 1: "S2", 2: "S3", 3: "S4"}
H = nx.relabel_nodes(G, mapping)
print(H.nodes)
# ['S1', 'S2', 'S3', 'S4']
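One additional detail worth noting (an observation, not part of the answer above): nx.draw only renders node labels when with_labels=True is passed to draw itself — passing it to the DiGraph constructor has no effect. So after relabeling, the drawing call would look like:

H = nx.relabel_nodes(G, dict(enumerate(df.columns)))  # same relabeling as above
nx.draw(H, with_labels=True)                          # labels only show when asked for here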
Assigning node names to a graph in networkx
I'm trying to generate a networkx graph from a dataframe using the code below:
import pandas as pd
import numpy as np
import networkx as nx

data = [[0,0,0,1], [1,0,0,1], [1,0,0,1], [0,0,0,0]]
df = pd.DataFrame(data, columns=['S1', 'S2', 'S3', 'S4'])
df.index = ['S1', 'S2', 'S3', 'S4']

G = nx.DiGraph(df.values, with_labels=True)
nx.draw(G)

However, the graph is drawn without the node names being shown, which looks like below
I tried to label the nodes in the graph using networkx's relabel_nodes, however without luck
G = nx.relabel_nodes(G, dict(enumerate(df.columns)))
nx.draw(G)

Is there a way wherein I can label the nodes of the networkx graph as S1, S2, S3, S4?
[ "The renaming should be a mapping that has old node identifiers as keys and new node identifier as values (a dictionary, basically):\nmapping = {0: \"S1\", 1: \"S2\", 2: \"S3\", 3: \"S4\"}\nH = nx.relabel_nodes(G, mapping)\nprint(H.nodes)\n# ['S1', 'S2', 'S3', 'S4']\n\n" ]
[ 1 ]
[]
[]
[ "networkx", "numpy", "pandas", "python", "python_3.x" ]
stackoverflow_0074459925_networkx_numpy_pandas_python_python_3.x.txt
Q: Copy a Azure table (SAS) to a db on Microsoft SQL Server
Just that: Is there a way to copy an Azure table (with a SAS connection) to a db on Microsoft SQL Server? Could it be done with Python?
Thank you all!
I've tried with SSIS in Visual Studio 2019 with no success.
A: You can use Azure Data Factory or Azure Synapse to copy the data from Azure Table storage to an Azure SQL database. Refer to the MS document Introduction to Azure Data Factory - Azure Data Factory | Microsoft Learn if you are new to Data Factory.
Refer to the MS document Copy data to and from Azure Table storage - Azure Data Factory & Azure Synapse | Microsoft Learn.
I tried to repro this in my environment.
Linked services are created for Azure Table storage and the Azure SQL database.
In the Azure Table storage linked service's authentication method, SAS URI is selected and the URL and token are given.
Similarly, the linked service for the Azure SQL database is created by giving server name, database name, username and password.
Then a Copy activity is added, the source dataset for Table storage is created and set in the source settings.
Similarly, the sink dataset is created.
Once source and sink datasets are configured in the Copy activity, the pipeline is run to copy data from Table storage to the Azure SQL DB.
In this way, data can be copied from Azure Table storage with a SAS key to an Azure SQL database.
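Since the question also asks whether this can be done with Python, here is a rough sketch using the azure-data-tables and pyodbc packages; the table URL, connection string, and column layout below are placeholder assumptions, not values from the question:

from azure.data.tables import TableClient
import pyodbc

# Hypothetical SAS URL of the source table (token included in the query string).
table_url = "https://myaccount.table.core.windows.net/MyTable?sv=...&sig=..."
table = TableClient.from_table_url(table_url)

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=pass"
)
cur = conn.cursor()

for entity in table.list_entities():
    # Assumes a target table dbo.MyTable(PartitionKey, RowKey, Value) already exists.
    cur.execute(
        "INSERT INTO dbo.MyTable (PartitionKey, RowKey, Value) VALUES (?, ?, ?)",
        entity["PartitionKey"], entity["RowKey"], entity.get("Value"),
    )
conn.commit()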
Copy a Azure table (SAS) to a db on Microsoft SQL Server
Just that: Is there a way to copy an Azure table (with a SAS connection) to a db on Microsoft SQL Server? Could it be done with Python?
Thank you all!
I've tried with SSIS in Visual Studio 2019 with no success.
[ "You can use **azure data factory ** or azure synapse to copy the data from azure table storage to azure SQL database. Refer MS document on Introduction to Azure Data Factory - Azure Data Factory | Microsoft Learn if you are new to data factory.\nRefer MS document on Copy data to and from Azure Table storage - Azure Data Factory & Azure Synapse | Microsoft Learn.\nI tried to repro this in my environment.\n\nLinked services are for Azure table storage and azure sql database.\nIn the linked service for azure table storage's Authentication method, SAS URI is selected and URL and token is given.\n\n\n\nSimilarly, linked service for Azure Sql databse is created by giving server name, database name, username and password.\n\n\n\nThen Copy activity is taken and source dataset for table storage is created and given the same in source settings.\n\n\n\n\nSimilarly, sink dataset is created.\n\n\n\nOnce source and sink datasets are configured in copy activity, pipeline is run to copy data from table storage to Azure SQL DB.\nBy this way, Data can be copied from azure table storage with SAS key to Azure SQL Database.\n" ]
[ 0 ]
[]
[]
[ "azure", "python", "sql_server" ]
stackoverflow_0074307453_azure_python_sql_server.txt
Q: Pytest dependency - make one function to be dependent on the other
@pytest.mark.parametrize('feed', ['C', 'D'])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
def test_1(feed: Path, file: str):
    assert (Path(feed / file).is_file()), 'Not file'

@pytest.mark.parametrize('feed_C, feed_D', [('C', 'D')])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
@pytest.mark.parametrize('column', ['name', 'surname'])
def test_2(feed_C: Path, feed_D: Path, file: str, column: str):
    df1 = pd.read_csv(Path(feed_C / file), sep="\t")
    df2 = pd.read_csv(Path(feed_D / file), sep="\t")
    assert df1[column].equals(df2[column]), 'data frames are not equal.'

I have two test functions, test_1 and test_2. test_2 should be dependent on test_1, but the iterations in both tests are different.
test_1 iterations =>
foo.txt_C
foo.txt_D
boo.txt_C
boo.txt_D
doo.txt_C
doo.txt_D

test_2 iterations =>
name_foo.txt_C_D
name_boo.txt_C_D
name_doo.txt_C_D
surname_foo.txt_C_D
surname_boo.txt_C_D
surname_doo.txt_C_D

I want, for example, the test iteration name_foo.txt_C_D in test_2 to depend on the results of the corresponding test_1 iterations. For example, if foo.txt_C or foo.txt_D fails (even one of them), then the name_foo.txt_C_D iteration in test_2 should be SKIPPED. The same goes for surname_foo.txt_C_D.
A: You don't need a separate test just to check if the paths are valid; you can do it in the same test. You also shouldn't create dependencies between tests.
@pytest.mark.parametrize('feed_C, feed_D', [('C:', 'D:')])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
def test(feed_C: Path, feed_D: Path, file: str):
    path_c = Path(feed_C / file)
    assert path_c.is_file(), 'Not file'

    path_d = Path(feed_D / file)
    assert path_d.is_file(), 'Not file'

    df1 = pd.read_csv(path_c, sep="\t")
    df2 = pd.read_csv(path_d, sep="\t")
    assert df1.equals(df2), 'data frames are not equal.'
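If the skip-on-missing-file behaviour from the question is still wanted, one simple sketch (not from the answer above, which argues against inter-test dependencies) is to skip inside test_2 when either file is absent, which mirrors a test_1 failure without linking the tests:

import pytest
from pathlib import Path

@pytest.mark.parametrize('feed_C, feed_D', [('C', 'D')])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
@pytest.mark.parametrize('column', ['name', 'surname'])
def test_2(feed_C, feed_D, file, column):
    path_c, path_d = Path(feed_C) / file, Path(feed_D) / file
    if not (path_c.is_file() and path_d.is_file()):
        pytest.skip(f'{file} missing in one of the feeds')  # stands in for a test_1 failure
    ...  # the column comparison from the question goes here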
Pytest dependency - make one function to be dependent on the other
@pytest.mark.parametrize('feed', ['C', 'D'])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
def test_1(feed: Path, file: str):
    assert (Path(feed / file).is_file()), 'Not file'

@pytest.mark.parametrize('feed_C, feed_D', [('C', 'D')])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
@pytest.mark.parametrize('column', ['name', 'surname'])
def test_2(feed_C: Path, feed_D: Path, file: str, column: str):
    df1 = pd.read_csv(Path(feed_C / file), sep="\t")
    df2 = pd.read_csv(Path(feed_D / file), sep="\t")
    assert df1[column].equals(df2[column]), 'data frames are not equal.'

I have two test functions, test_1 and test_2. test_2 should be dependent on test_1, but the iterations in both tests are different.
test_1 iterations =>
foo.txt_C
foo.txt_D
boo.txt_C
boo.txt_D
doo.txt_C
doo.txt_D

test_2 iterations =>
name_foo.txt_C_D
name_boo.txt_C_D
name_doo.txt_C_D
surname_foo.txt_C_D
surname_boo.txt_C_D
surname_doo.txt_C_D

I want, for example, the test iteration name_foo.txt_C_D in test_2 to depend on the results of the corresponding test_1 iterations. For example, if foo.txt_C or foo.txt_D fails (even one of them), then the name_foo.txt_C_D iteration in test_2 should be SKIPPED. The same goes for surname_foo.txt_C_D.
[ "You don't need a separate test just two check if the paths are valid, you can do it in the same test. You also shouldn't create dependency between tests\n@pytest.mark.parametrize('feed_C, feed_D', [('C:', 'D:')])\n@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])\ndef test(feed_C: Path, feed_D: Path, file: str):\n path_c = Path(feed_C / file)\n assert path_c.is_file(), 'Not file'\n\n path_d = Path(feed_D / file)\n assert path_d.is_file(), 'Not file'\n\n df1 = pd.read_csv(path_c, sep=\"\\t\")\n df2 = pd.read_csv(path_d, sep=\"\\t\")\n assert df1.equals(df2), 'data frames are not equal.'\n\n" ]
[ 0 ]
[]
[]
[ "dependencies", "pytest", "python" ]
stackoverflow_0074460436_dependencies_pytest_python.txt
Q: How do I use python to find a number of unique groups such that each subset within the group has at most X elements from a given array of Y?
This is the problem, but I do not fully understand what I need to do, especially the functions of n and m.
I have tried looking for the patterns to use but I am stuck. So far I have written
def howManyGroups(n, m):
    if n >= 2:
        return 2
    else:

This is my first time posting a question so I am sorry if something is wrong.
A: You could try the following:
def how_many_groups(n, m):
    m = min(m, n)
    if n == 0 or m <= 1:
        return 1
    return how_many_groups(n, m - 1) + how_many_groups(n - m, m)

The logic, as far as I understand the requirement:
Base cases: (1) If there are no elements then there's only one way (n == 0). (2) If the maximum of elements in a group is 1 then there's also only one way (m == 1).
Reduction: (1) Reduce the maximum m by 1 and count (m - 1). (2) Then make sure there's at least one group of size m and count the remaining possibilities on the reduced population (n - m). Add both counts.
The m = min(m, n) is to make sure that m is at most n, because a bigger m doesn't matter regarding the count. You could replace it with if m > n: return how_many_groups(n, n).
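As a small usage note (the call below is a made-up example, not from the question): the recursion recomputes the same subproblems many times, so caching it with functools.lru_cache keeps it fast for larger n:

from functools import lru_cache

@lru_cache(maxsize=None)
def how_many_groups(n, m):
    m = min(m, n)
    if n == 0 or m <= 1:
        return 1
    return how_many_groups(n, m - 1) + how_many_groups(n - m, m)

print(how_many_groups(5, 3))   # partitions of 5 into parts of size at most 3 -> 5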
How do I use python to find a number of unique groups such that each subset within the group has at most X elements from a given array of Y?
This is the problem, but I do not fully understand what I need to do, especially the functions of n and m.
I have tried looking for the patterns to use but I am stuck. So far I have written
def howManyGroups(n, m):
    if n >= 2:
        return 2
    else:

This is my first time posting a question so I am sorry if something is wrong.
[ "You could try the following:\ndef how_many_groups(n, m):\n m = min(m, n)\n if n == 0 or m <= 1:\n return 1\n return how_many_groups(n, m - 1) + how_many_groups(n - m, m)\n\nThe logic, as far as I understand the requirement:\n\nBase cases: (1) If there are no elements then there's only one way (n == 0). (2) If the maximum of elements in a group is 1 then there's also only one way (m == 1).\nReduction: (1) Reduce the maximum m by 1 and count (m - 1). (2) Then make sure there's a least one group of size m and count the remaining possibilities on the reduced population (n - m). Add both counts.\nThe m = min(m, n) is to make sure that m is at most n, because a bigger m doesn't matter regarding the count. You could replace it with if m > n: return how_many_groups(n, n).\n\n" ]
[ 0 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0074458831_python_recursion.txt