QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
βŒ€
77,899,656
10,107,805
Use reticulate to create plot using a pandas function within an R script
<p>The below code creates a scatter plot in python using pandas plotting functions</p> <pre><code>import pandas as pd data = {'col1': [1,2,3,4,5], 'col2': [6,7,8,9,10]} df = pd.DataFrame(data) df.plot.scatter(x='col1', y='col2', alpha=0.3) </code></pre> <p>I am trying to recreate the plot function using reticulate. So far I have gotten to:</p> <pre><code>library(reticulate) pd &lt;- reticulate::import('pandas') df &lt;- data.frame(col1=c(1,2,3,4,5), col2=c(6,7,8,9,19)) #how do I recreate the python code for #df.plot.scatter(x='col1', y='col2', alpha=0.3) using reticulate? </code></pre> <p>I know I could do this easily using ggplot2 but the objective is to use the pandas function within an r script (not rmarkdown script) using reticulate.</p> <p>Any suggestions?</p>
<python><r><reticulate>
2024-01-29 12:15:23
1
1,006
Basil
77,899,556
6,387,095
Requests - Cannot download xls file from a link, stuck in loop or timeout?
<p>I am trying to download a file from the following link:</p> <pre><code>url = &quot;https://nsearchives.nseindia.com/content/fo/qtyfreeze.xls&quot; </code></pre> <p>I have tried setting the <code>timeout</code>, <code>stream</code>, <code>headers</code>, <code>allow-redirects</code>. When I don't set the timeout, the response never completes. When I set the <code>timeout</code> I always get timeout error. I am able to download the file without issue in firefox and chrome.</p> <p>The code I am using:</p> <pre><code>import requests as req headers = { 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36', &quot;Accept&quot;: &quot;text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/vnd.ms-excel&quot;, &quot;Accept-Encoding&quot;: &quot;gzip, deflate&quot;} url = &quot;https://nsearchives.nseindia.com/content/fo/qtyfreeze.xls&quot; # and variations of the following resp = req.get(url, timeout=30, headers=headers) # tried urllib #urllib.request.urlretrieve(url,&quot;/home/data/freeze_quantity.xls&quot;) </code></pre>
<python><python-requests>
2024-01-29 11:57:32
1
4,075
Sid
77,899,537
10,972,079
ASK-SDK Python - How do I use multiple permissions in ASK-SDK simultaneously (i.e. reminders and postal code)?
<p>Essentially, I have a skill which already has code for reminders in it: and to check for permissions and ask for them I have the following code:</p> <pre><code> ... permissions = request_envelope.context.system.user.permissions if not (permissions and permissions.consent_token): # No permissions exist - ask for them logger.debug('Reminder permissions not yet granted') directive_response = handler_input.response_builder.add_directive( SendRequestDirective( name=&quot;AskFor&quot;, payload={ &quot;@type&quot;: &quot;AskForPermissionsConsentRequest&quot;, &quot;@version&quot;: &quot;2&quot;, &quot;permissionScopes&quot;: [ { &quot;permissionScope&quot;: &quot;alexa::alerts:reminders:skill:readwrite&quot;, &quot;consentLevel&quot;: &quot;ACCOUNT&quot; } ] }, token=&quot;reminder&quot; ) ).response logger.debug(directive_response) return directive_response </code></pre> <p><code>request_envelope.context.system.user.permissions</code> (I logged them):</p> <pre><code>{ 'consent_token': 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjEifQ.eyJhdWQiOiJodHRwczovL2FwaS5hbWF6b25hbGV4YS5jb20iLCJpc3MiOiJBbGV4YVNraWxsS2l0Iiwic3ViIjoiYW16bjEuYXNrLnNraWxsLjdjNzRjYWQ4LTRlODctNDIyNi04MjI5LWMzMWM0ZjBkNjUyYyIsImV4cCI6MTcwNjUyNTg5NCwiaWF0IjoxNzA2NTIyMjk0LCJuYmYiOjE3MDY1MjIyOTQsInByaXZhdGVDbGFpbXMiOnsiaXNEZXByZWNhdGVkIjoidHJ1ZSIsImNvbnNlbnRUb2tlbiI6IkF0emF8SXdFQklQenBUSFk5em9zY3R2cmpsNEdtanBpbmdQSmhfWWZPb3k5bVptNldkaWZYdHZDX3JRc19lVVRteTUtejQ5UWFVdzNJN0VfMVFQVjZMTktPS1Q3Tm04Vk90ZGgweEwxMGVacTh6R0hDRGhoc3NPQk5HLU5mNFJKZEJlR0VYdFRWcFJqTUVfYnltNE1OZk16WjdGYXNmZmFOSVJzYTBqWDhIcXRIaFVPMzk5TUoyVXBNTTZLdXZpRjZCc1Q0V0U5M1pEZUtZU1VuQzR2aUJGZ0FfOEJhbjNhcE9pcGdaNUdWdE1WS2h3R3psamNBQ2ZXTE5lR01fekpNMDNGVTNnLUl5QW5Yek9XUTl3bnNmUl9CTmdrTVY2ZmVBX01iZzhuaC1DMl9OdHBxTWlWc3JPRFNVMDc0RGFFU19KS01GLURUUTE0IiwiZGV2aWNlSWQiOiJhbXpuMS5hc2suZGV2aWNlLkFNQVZFUFY2NjNGVU40VzRXRUVNMkdIMjZYQUJJS0M2S0gyMzJET00zVU02S1pGVlBaWEYzMllIR1QzUllRREZPNFZRVzNWWDM0NVFETk9JRkpRSk5CN0haNFhNRjNLWlpWSkRDUkRSUDVQUlRWQlNCTElOWkNTRUxRV0ZaMkZRNDNEUjY2T0o0SjRUUzNTS1hWUlg2VksySzMySEQ0N1dOTUpWSFlDNk40UFpFMkU1QzVSNkZCUFIyNUdWQ1M1UEUyWlJIUTZPT0o0NVFWVlQiLCJ1c2VySWQiOiJhbXpuMS5hc2suYWNjb3VudC5BTUE0RkVFSFhZTlQ3VDM0TEk0U0FFWUg2SURTRUlGVDNTNktQRDdZQ0dHS0hPS0lNR1pIQU1DM0pMS1BJTU5DM0JSVjI0WUFUNTJGRVpCNlNTSE5TTlJNSEgzQVBQVkNCVFhGSFE2RzdHNkdUSUs2SEE3S1ZHTkVaSVlENktCWE5ZWFhESjVOUkhHUlhXNFUzNDJPSktJMkRMMkJCU0FRMkRNNkMyTTRXSVA3Q1pWQkhDSlg1RUdDUk9JRDJDN1BVN0hSTDJYVVFEWlBLWjZOVloyV1Q0UktNRExVQjRXSVQ2WDM3NEVKSUxNQSJ9fQ.MGRG4zGyx1PMgti8yG-5SNqpEE7HEW2B1yI3gNL1TCz1Oe7KdmQpbiOqHa8ZW4kEe2ALkWBijwBgFAuW-DMT08j54RpmBUDNdE7gV16W8DYjpGgAk50hBA86pZlCIwrETQx1SvYr1587ttOpsVuH_KyQZ0KdKMQ2h-jCbtct3e2izAzAOGQebR9PEZ2KhtJDQolSlECLCGjg60CUK1bMW7DiEw8jHT2cmSJtlbjma27HSezd0_9n8j5KeuEUFqvGbLNqLABiuhhZ_WZH9bsv2WFb8wQQ0bvauIMiTDdXCBYQDzLG_28D9vNKMIhMLRGjc5HyROdqHeNERffYrRxPkg', 'scopes': None } </code></pre> <p>Directive response:</p> <pre><code>{ 'api_response': None, 'can_fulfill_intent': None, 'card': None, 'directives': [ { 'name': 'AskFor', 'object_type': 'Connections.SendRequest', 'payload': { '@type': 'AskForPermissionsConsentRequest', '@version': '2', 'permissionScopes': [ { 'consentLevel': 'ACCOUNT', 'permissionScope': 'alexa::alerts:reminders:skill:readwrite' } ] }, 'token': '' } ], 'experimentation': None, 'output_speech': None, 'reprompt': None, 'should_end_session': None } </code></pre> <p>I notice that the following code doesn't actually check for the reminder permission specifically, only that any arbitrary permission exists. Now, this would be completely fine in the absence of other permissions, but I am looking to use the postal code permission also - how would this work? My first thought was to use the consent_token in the permissions, but that comes without a key for a token/permission name, and it doesn't come with the response, so I have no way of knowing which consent_token is for which permission. Is there any way to get this working? Help would be much appreciated.</p>
<python><json><alexa-skills-kit><ask-sdk>
2024-01-29 11:53:52
1
1,338
CauseYNot
77,899,471
8,541,953
Google App Engine does not find pip package
<p>I am deployging a python app in Google App Engine and I am facing an error when running <code>google cloud deploy</code></p> <p>My requiremetns.txt file includes the followign python packages and versions:</p> <pre><code>requests==2.31.0 ipyleaflet==0.18.0 ipywidgets==8.1.1 ipyfilechooser jupyterlab==4.0.9 shapely==2.0.2 voila==0.5.5 mercantile==1.2.1 python-dateutil==2.8.2 geojson==3.1.0 geopandas==0.14.1 rasterio==1.3.9 ipydatagrid==1.2.0 </code></pre> <p>The Google shell is prompting this error:</p> <pre><code> ERROR: Could not find a version that satisfies the requirement requests==2.31.0 (from -r requirements.txt (line 2)) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.4.0, 0.4.1, 0.5.0, 0.5.1, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.8, 0.8.9, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.10.4, 0.10.6, 0.10.7, 0.10.8, 0.11.1, 0.11.2, 0.12.0, 0.12.1, 0.13.0, 0.13.1, 0.13.2, 0.13.3, 0.13.4, 0.13.5, 0.13.6, 0.13.7, 0.13.8, 0.13.9, 0.14.0, 0.14.1, 0.14.2, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.2.1, 2.3.0, 2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.8.0, 2.8.1, 2.9.0, 2.9.1, 2.9.2, 2.10.0, 2.11.0, 2.11.1, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 2.12.4, 2.12.5, 2.13.0, 2.14.0, 2.14.1, 2.14.2, 2.15.1, 2.16.0, 2.16.1, 2.16.2, 2.16.3, 2.16.4, 2.16.5, 2.17.0, 2.17.1, 2.17.2, 2.17.3, 2.18.0, 2.18.1, 2.18.2, 2.18.3, 2.18.4, 2.19.0, 2.19.1, 2.20.0, 2.20.1, 2.21.0, 2.22.0, 2.23.0, 2.24.0, 2.25.0, 2.25.1, 2.26.0, 2.27.0, 2.27.1) Step #2: ERROR: No matching distribution found for requests==2.31.0 (from -r requirements.txt (line 2)) Step #2: WARNING: You are using pip version 20.2.2; however, version 21.3.1 is available. Step #2: You should consider upgrading via the '/env/bin/python -m pip install --upgrade pip' command. Step #2: The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1 Finished Step #2 ERROR </code></pre> <p>I wonder why the latest version of requests is not seen as an option? I have put 2.27.1 and then the same error takes place for the next package. Why is that? How can I fix it?</p> <p>ps: I am rather new with App Engine</p> <p>--- EDIT --</p> <p>My app.yaml file looks like this, specifying python to version 3</p> <pre><code>runtime: python env: flex runtime_config: python_version: 3 entrypoint: voila --port=$PORT --Voila.ip=0.0.0.0 --no-browser main.ipynb </code></pre>
<python><google-cloud-platform><google-app-engine>
2024-01-29 11:44:27
2
1,103
GCGM
77,899,264
10,161,091
Call functions within a method with slightly different inputs
<p>I have a py file with multiple functions inside.</p> <pre class="lang-py prettyprint-override"><code># methods.py def method_a(input_1, input_2) ... def method_b(input_1, input_2, input_3) ... </code></pre> <p>I want to call the functions in <code>methods.py</code> in my main by the method name as string.</p> <pre class="lang-py prettyprint-override"><code># main.py import methods input_1 = ... input_2 = ... input_3 = ... method_name = &quot;method_a&quot; method = getattr(methods, method_name) # How to know if I should pass input_3 or not? output = method(???) </code></pre> <p>So my question is how I can determine if I need to pass <code>input_3</code> or not.</p> <p>So far, I have thought about the following options but I am not sure if they are pythonic...</p> <ul> <li>Option A: Use <code>try</code> statement with the two inputs, if it raises exception, call method with <code>input_3</code> added.</li> <li>Option B: Use <code>inspect</code> and check if <code>input_3</code> is an argument of <code>method</code>.</li> <li>Option C: Update <code>method_a</code> to accept <code>input_3</code> or <code>**kwargs</code> but don't use it, or print a warning. (I don't like this since it raises linting warnings.)</li> </ul> <p>Are they other options to handle this?</p>
<python>
2024-01-29 11:07:30
0
2,750
SaTa
77,899,220
7,920,004
Python module not found when importing existing one
<p>I'm getting module not found error despite having <code>__init_.py</code> files where required. My structure is the same as in other sample project where <code>test_...</code> files work as expected.</p> <p>Error that I'm getting appears in <code>test_cicd_in_aws_stack.py</code></p> <p>When doing <code>from cdk_code.cicd_in_aws_stack import CicdInAwsStack</code>, I'm ending with</p> <p><code>ModuleNotFoundError: No module named 'cdk_code'</code></p> <pre><code>. β”œβ”€β”€ README.md β”œβ”€β”€ app.py β”œβ”€β”€ cdk.json β”œβ”€β”€ cdk_code β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  β”œβ”€β”€ cicd_in_aws_stack.py &lt;---importing CicdInAwsStack class from here β”‚Β Β  β”œβ”€β”€ lambda_stack.py β”‚Β Β  └── stage.py β”œβ”€β”€ requirements-dev.txt β”œβ”€β”€ requirements.txt β”œβ”€β”€ source.bat └── tests β”œβ”€β”€ __init__.py └── unit β”œβ”€β”€ __init__.py └── test_cicd_in_aws_stack.py &lt;---importing CicdInAwsStack class here </code></pre>
<python>
2024-01-29 11:01:54
0
1,509
marcin2x4
77,899,196
6,681,932
harvesine vectorization for vector list
<p>I have a code snippet that computes a distance matrix between two lists of coordinates using the haversine function. While the current implementation works, it involves nested loops and can be time-consuming for large datasets. I am looking for a more efficient alternative that avoids the use of a for loop.</p> <pre><code>import numpy as np from haversine import haversine string_list_1 = [(20.00,-100.1),...] # List of vector pair coordinates (lat,long) string_list_2 = [(21.00,-101.1),...] # Another list of pair coordinates dist_mat = np.zeros((len(string_list_1), len(string_list_2))) for i, coord1 in enumerate(string_list_1): dist_mat[i, :] = np.array([haversine(coord1, coord2) for coord2 in string_list_2]) </code></pre> <p>I would appreciate suggestions or code examples for a more efficient and faster implementation that avoids the use of a for loop.</p>
<python><performance><vectorization>
2024-01-29 10:56:57
1
478
PeCaDe
77,899,090
14,640,064
Why can not I see logs in Azure Function app?
<p>I am working on Azure Function App (Python on Linux), but I am unable to view the logs.</p> <p>When I go to web portal to the Function App - tab Log Stream, the page never loads. There is just a spinning circle.</p> <p>The same happens when I open the Function itself and go to &quot;<em>Code + Test</em>&quot;. The console says, <code>Connecting to Application Insights...</code>, but it never loads.</p> <p>The Function App is not linked to &quot;<em>Application Insights</em>&quot;. But to my understanding I do not need to create Application Insights resource to just see simple logs. Or am I wrong?</p> <p>I am quite new to Azure, so I appreciate any help!</p> <p>In <a href="https://stackoverflow.com/questions/71560901/how-can-i-access-my-azure-functions-logs#:%7E:text=Add%20a%20comment-,5,has%20since%20disabled%20log%20streaming%20on%20function%20apps%20as%20shown%20below.,-The%20only%20way">this</a> post one of the users says that log streaming has been limited, so I assume that I am facing the similar issue. I have tried using VS Code extension Azure to stream logs, but it also requires Application Insights. <code>Error: You must configure Application Insights to stream logs on Linux Function Apps</code>.</p> <p><strong>Do I really have to create Application Insights resource?</strong> (It's complicated for me to create new resource due to corporate structure)</p> <p>Edit:</p> <p>As requested I am adding the code. I don't think that the issue is with the code itself. I am unable to view any logging page in Azure web portal as well as I am unable to stream logs to VS Code. The issues occurs even before running the code.</p> <p>I am expecting to see at least something. Any print or log statement, or even error message. But I am unable to anything.</p> <p>Also <a href="https://imgur.com/a/40lAeaF" rel="nofollow noreferrer">here</a> here are some screenshots from Azure Portal and the issues I am facing.</p> <pre><code>&quot;&quot;&quot; function_app.py &quot;&quot;&quot; import azure.functions as func import logging @app.route(route=&quot;http_trigger_test&quot;) def http_trigger_test(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') return func.HttpResponse(f&quot;Code finished&quot;, status_code=200) &quot;&quot;&quot; requirements.txt &quot;&quot;&quot; # DO NOT include azure-functions-worker in this file # The Python Worker is managed by Azure Functions platform # Manually managing azure-functions-worker may cause unexpected issues azure-functions &quot;&quot;&quot; http_trigger_test.json &quot;&quot;&quot; { &quot;name&quot;: &quot;http_trigger_test&quot;, &quot;entryPoint&quot;: &quot;http_trigger_test&quot;, &quot;scriptFile&quot;: &quot;function_app.py&quot;, &quot;language&quot;: &quot;python&quot;, &quot;functionDirectory&quot;: &quot;/home/site/wwwroot&quot;, &quot;bindings&quot;: [ { &quot;direction&quot;: &quot;IN&quot;, &quot;type&quot;: &quot;httpTrigger&quot;, &quot;name&quot;: &quot;req&quot;, &quot;authLevel&quot;: &quot;ANONYMOUS&quot;, &quot;route&quot;: &quot;http_trigger_test&quot; }, { &quot;direction&quot;: &quot;OUT&quot;, &quot;type&quot;: &quot;http&quot;, &quot;name&quot;: &quot;$return&quot; } ] } </code></pre>
<python><azure><logging><azure-functions><azure-application-insights>
2024-01-29 10:39:29
1
705
herdek550
77,899,043
7,027,964
How to catch DateParseError in pandas?
<p>I am running a script that uses <code>pd.to_datetime()</code> on inputs that are sometime not able to be parsed.</p> <p>For example if I try to run <code>pd.to_datetime('yesterday')</code> it results to an error</p> <p><code>DateParseError: Unknown datetime string format, unable to parse: yesterday, at position 0</code></p> <p>I 'd like to catch this exception and process it further in my code.</p> <p>I have tried:</p> <pre><code>try: pd.to_datetime('yesterday') except pd.errors.ParserError: print('exception caught') </code></pre> <p>but the exception is not caught. Does anyone know where <code>DateParseError</code> is defined and how I can catch it? Searching in <a href="https://pandas.pydata.org/docs/search.html?q=DateParseError" rel="noreferrer">pandas documentation</a> doesn't yield any results</p>
<python><pandas><python-datetime>
2024-01-29 10:31:54
2
2,474
kosnik
77,898,940
2,473,382
Alias a type to be specialised later
<p>(Using python 3.12, so the <code>type</code> syntax is available to me.)</p> <p>During testing, I might abuse types (the examples here are contrived. In real life, I am using a custom class subclassing lists, and I would like to give a list instead of the subclass while testing using pydantic validators to avoid a lot of boilerplate)</p> <p>With non-specialisable types, this is easy and typing is happy:</p> <pre class="lang-py prettyprint-override"><code>if TYPE_CHECKING: type MyNumber = int | float else: type MyNumber = int class TstBase: a: MyNumber print(TstBase(a=1.2)) </code></pre> <p>But when using specialisable types:</p> <pre class="lang-py prettyprint-override"><code>if TYPE_CHECKING: type MyContainer = list | set else: type MyContainer = list class TstContainer(BaseModel): a: MyContainer[str] # ERROR HERE </code></pre> <p>There pyright complains with</p> <blockquote> <p>Type &quot;list[Unknown]&quot; is already specialized</p> </blockquote> <blockquote> <p>Type &quot;set[Unknown]&quot; is already specialized</p> </blockquote> <p>How Can I tell pyright that <code>MyContainer</code> will be specialised later? .</p>
<python><python-typing><python-3.12>
2024-01-29 10:16:28
1
3,081
Guillaume
77,898,906
10,232,932
Use the first positive number of a row if a certrain condition in another column is valid, using pandas
<p>I want to have the first non-negative number in a new column, based on three columns (<code>valueA, valueB, valueC</code>).</p> <p>If a condition in <code>columnA</code> is valid. The condition is that <code>columnA!=None</code>.</p> <p>An example of a dataframe (df):</p> <pre><code>columnA valueA valueB valueC None -40 -41 -42 A -5 10 20 B -10 -5 10 C 1 2 3 </code></pre> <p>The <strong>result after the logic</strong> would lead to:</p> <pre><code>columnA valueA valueB valueC valueD None -40 -41 -42 None A -5 10 20 10 B -10 -5 10 10 C 1 2 3 1 </code></pre>
<python><pandas>
2024-01-29 10:11:37
3
6,338
PV8
77,898,765
2,386,930
Schedule a member function to run periodically
<p>I'm trying to schedule a member function to run periodically.</p> <pre><code>class Array: def __init__(self): self.array = [] # run every 10 seconds def periodic(self): [print(a) for a in self.array] def insert(self, a): self.array.append(a) </code></pre> <p>Most of the libraries and sample code I've seen for this type of scheduling don't work well for member functions. Are there any specific libraries or techniques I can use to achieve what I'm after?</p>
<python><scheduled-tasks>
2024-01-29 09:51:43
0
1,587
janovak
77,898,735
8,248,194
Calculate distance matrix for list of coordinates in numpy
<p>I've got a list of coordinates:</p> <pre class="lang-py prettyprint-override"><code>l_coords = [(1, 2), (1.1, 2.2), (1.05, 1.9)] </code></pre> <p>how can I calculate the distance matrix using a vectorized experession in numpy?</p>
<python><numpy>
2024-01-29 09:46:28
1
2,581
David Masip
77,898,710
12,544,391
ginstall: cannot stat 'Modules/_blake2.cpython-312-darwin.so':
<p>I was trying to install python 3.12.1 with asdf on macOS 14.2.1 (Sonoma)</p> <pre><code>python-build 3.12.1 /Users/dorianmariefr/.asdf/installs/python/3.12.1 python-build: use openssl@3 from homebrew python-build: use readline from homebrew Downloading Python-3.12.1.tar.xz... -&gt; https://www.python.org/ftp/python/3.12.1/Python-3.12.1.tar.xz Installing Python-3.12.1... python-build: use readline from homebrew python-build: use ncurses from homebrew python-build: use zlib from xcode sdk BUILD FAILED (OS X 14.2.1 using python-build 2.3.35-11-g9908daf8) Inspect or clean up the working tree at /var/folders/qx/vjzng4rs2pd9w20rh3x_fr3w0000gn/T/python-build.20240129095442.33260 Results logged to /var/folders/qx/vjzng4rs2pd9w20rh3x_fr3w0000gn/T/python-build.20240129095442.33260.log Last 10 log lines: __locale_setlocale in _localemodule.o __locale_localeconv in _localemodule.o __locale_localeconv in _localemodule.o __locale_localeconv in _localemodule.o __locale_localeconv in _localemodule.o _libintl_textdomain, referenced from: __locale_textdomain in _localemodule.o clang: error: linker command failed with exit code 1 (use -v to see invocation) make: *** [Programs/_freeze_module] Error 1 make: *** Waiting for unfinished jobs.... </code></pre> <p>Then I tried to <code>brew remove --ignore-dependencies gettext</code> as per the python bug <a href="https://bugs.python.org/issue46975" rel="nofollow noreferrer">https://bugs.python.org/issue46975</a></p> <p>Then I got:</p> <pre><code>ginstall: cannot stat 'Modules/_blake2.cpython-312-darwin.so': No such file or directory make: *** [sharedinstall] Error 1 </code></pre>
<python><macos>
2024-01-29 09:42:54
1
9,411
Dorian
77,898,671
13,023,478
not able to install bist in python3
<p>I'm trying to install the blist module in python3 but I'm getting the below error I have tried everything like upgrading the setuptools, pip, wheel, and everything but nothing seems to be working Please help. I'm trying to install elastalert but the blist is the dependency I have to install.</p> <pre><code> Γ— python setup.py bdist_wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; [35 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.9-universal2-cpython-39 creating build/lib.macosx-10.9-universal2-cpython-39/blist copying blist/_btuple.py -&gt; build/lib.macosx-10.9-universal2-cpython-39/blist copying blist/_sortedlist.py -&gt; build/lib.macosx-10.9-universal2-cpython-39/blist copying blist/__init__.py -&gt; build/lib.macosx-10.9-universal2-cpython-39/blist copying blist/_sorteddict.py -&gt; build/lib.macosx-10.9-universal2-cpython-39/blist running build_ext building 'blist._blist' extension creating build/temp.macosx-10.9-universal2-cpython-39 creating build/temp.macosx-10.9-universal2-cpython-39/blist clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -Wno-error=unreachable-code -DBLIST_FLOAT_RADIX_SORT=1 -I/Users/dhaval.p/Development/Solo-Alerts/myenv/include -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -c blist/_blist.c -o build/temp.macosx-10.9-universal2-cpython-39/blist/_blist.o blist/_blist.c:4448:25: warning: code will never be executed [-Wunreachable-code] register PyObject *t; ^~~~~~~~~~~~~~~~~~~~~ blist/_blist.c:4583:37: error: call to undeclared function '_PyObject_GC_IS_TRACKED'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] if (leafs_n &gt; 1 &amp;&amp; !_PyObject_GC_IS_TRACKED(leafs[i])) ^ blist/_blist.c:5387:31: warning: comparison of integers of different signs: 'Py_ssize_t' (aka 'long') and 'unsigned long' [-Wsign-compare] for (j = 0; j &lt; NUM_PASSES; j++) { ~ ^ ~~~~~~~~~~ blist/_blist.c:5393:31: warning: comparison of integers of different signs: 'Py_ssize_t' (aka 'long') and 'unsigned long' [-Wsign-compare] for (j = 0; j &lt; NUM_PASSES; j++) { ~ ^ ~~~~~~~~~~ blist/_blist.c:5403:23: warning: comparison of integers of different signs: 'Py_ssize_t' (aka 'long') and 'unsigned long' [-Wsign-compare] for (j = 0; j &lt; NUM_PASSES; j++) { ~ ^ ~~~~~~~~~~ blist/_blist.c:5786:13: error: call to undeclared function '_PyObject_GC_IS_TRACKED'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] if (_PyObject_GC_IS_TRACKED(self)) ^ 4 warnings and 2 errors generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for blist Running setup.py clean for blist Failed to build blist ERROR: Could not build wheels for blist, which is required to install pyproject.toml-based projects </code></pre>
<python><python-3.x><elastalert>
2024-01-29 09:34:52
1
1,218
iamdhavalparmar
77,898,498
126,833
Download file uploaded to Azure Storage in Python
<p>I got the code to upload a document to Azure storage.</p> <pre class="lang-py prettyprint-override"><code>container_client = blob_service_client.get_container_client(container_name) blob_client = blob_service_client.get_blob_client(container_name, blob_name) blob_client.upload_blob(file_content, overwrite=True) absolute_path = f&quot;https://{container_name}.blob.core.windows.net/{blob_name}&quot; </code></pre> <p>And the file gets uploaded to a link like this :</p> <p><code>https://projectName.blob.core.windows.net/attachments/16/document1.pdf</code></p> <p>But how do I download it ? When I try to <code>curl --head</code> the URL I get :</p> <p><code>Could not resolve host: projectName.blob.core.windows.net</code></p>
<python><azure><azure-storage>
2024-01-29 09:08:53
1
4,291
anjanesh
77,898,398
1,581,090
Why does `scipy.optimize.minimize` always return the initial value?
<p>I have a complete working code that uses <code>scipy.optimize.minimize</code> but always returns the initial value as the optimized scalar parameter. Here is the complete code:</p> <pre><code>import sys import random import numpy as np import matplotlib.pyplot as plt import scipy.optimize as opt # Define the real shift shift = 50.5 data1_x = [] data1_y = [] data2_x = [] data2_y = [] for index in range(int(shift)): data2_x.append(index) data2_y.append(0) for index in range(500): x = index if index&lt;100: y = 0 elif index&lt;200: y = (index-100) elif index&lt;300: y = 100 elif index&lt;400: y = 400 - index else: y = 0 data1_x.append(x) data1_y.append(y) data2_x.append(x + shift) data2_y.append(y) index_range = range(len(data2_x)) # The function to minimize, returning a float def overlap(shift, data1_x, data1_y, data2_x, data2_y): sum_ = 0 for index1 in range(len(data1_x)): x1 = data1_x[index1] + shift[0] index2 = min(index_range, key=lambda i: abs(data2_x[i]-x1)) x2 = data2_x[index2] y1 = data1_y[index1] y2 = data2_y[index2] # Ignore x values outside of common range if abs(x2-x1)&gt;5: continue sum_ += abs(y2 - y1) return sum_ # Here chose some other initial value instead of '40'. result = opt.minimize(overlap, 40, args=(data1_x, data1_y, data2_x, data2_y)) # Print message indicating why the process terminated print(result.message) # Print the minimum value of the function print(result.fun) # Print the x-value resulting in the minimum value print(result.x) calculated_shift = result.x[0] # Plot the original and shifted signals along with cross-correlation plt.subplot(2, 1, 1) plt.scatter(data1_x, data1_y, s=20, marker=&quot;o&quot;, c=&quot;b&quot;, label=&quot;Data1&quot;) plt.scatter(data2_x, data2_y, s=5, marker=&quot;o&quot;, c=&quot;g&quot;, label=&quot;Data2&quot;) plt.legend() plt.subplot(2, 1, 2) plt.scatter(data1_x, data1_y, s=20, marker=&quot;o&quot;, c=&quot;b&quot;, label=&quot;Data1&quot;) plt.scatter([x-calculated_shift for x in data2_x], data2_y, s=5, marker=&quot;o&quot;, c=&quot;g&quot;, label=&quot;Data2&quot;) plt.legend() plt.tight_layout() plt.show() </code></pre> <p>Why does <code>optimize</code> not optimize?</p>
<python><scipy>
2024-01-29 08:47:39
1
45,023
Alex
77,898,325
14,721,356
Filter queryset based on values in the other queryset
<p>I have the following model in django:</p> <pre class="lang-py prettyprint-override"><code>class EmailSubscriber(BaseSubscriber): email = models.EmailField() name = models.CharField() </code></pre> <p>Now let's say I made a queryset from <code>EmailSubscriber</code> Like below:</p> <pre class="lang-py prettyprint-override"><code>queryset1 = EmailSubscriber.objects.all() </code></pre> <p>Now I create another queryset like below:</p> <pre class="lang-py prettyprint-override"><code>queryset2 = EmailSubscriber.objects.filter(some filters) </code></pre> <p>I want to exclude those fields in <code>queryset1</code> that have their email and name in <code>queryset2</code>. I mean, if a record in <code>queryset1</code> has <code>{&quot;email&quot;: &quot;john@gmail.com&quot;, &quot;name&quot;: &quot;john&quot;}</code> and there is a record in <code>queryset2</code> with <code>{&quot;email&quot;: &quot;john@gmail.com&quot;, &quot;name&quot;: &quot;john&quot;}</code>, the record in <code>queryset1</code> must be excluded. So they must be unique together. The thing is, I can't just use <code>some filters</code> to exclude from <code>queryset1</code>. Assume I have a function that receives <code>queryset1</code> and <code>queryset2</code> and it returns the desired queryset. How can I do that in an efficient way? Thanks in advance</p>
<python><django><django-queryset>
2024-01-29 08:34:13
2
868
Ardalan
77,898,119
2,469,032
how to properly incorporate early stopping validation in sklearn Pipeline with ColumnTransformer
<p>I want to setup a lightGBM model with early stop validation. I also want to follow the best practice of using Pipeline to combine preprocessing and model fitting and prediction. Code below:</p> <pre><code>coltransformer = ColumnTransformer([ ('cat', OneHotEncoder(sparse_output = False), cat_invar), ('num', 'passthrough', num_invar)]) lgbPipe = Pipeline([ ('preprocess', coltransformer), ('lgb', LGBMClassifier()]) X_learn, X_val, Y_learn, Y_val = train_test_split(X, y, test_size = 0.2) lgbPipe.fit(X_learn, Y_learn, lgb__eval_set = (X_val, Y_val)) </code></pre> <p>However, I got the following error message when running the code:</p> <pre><code>ValueError: DataFrame.dtypes for data must be int, float or bool. Did not expect the data types in the following fields: Geography, Gender, Surname </code></pre> <p>It appears the validation datasets (X_val, Y_val) were not being transformed in the preprocessing step. How should I properly setup the pipeline?</p>
<python><scikit-learn><pipeline>
2024-01-29 07:50:04
3
1,037
PingPong
77,898,085
7,800,726
How to obtain structured results with YOLOv8 similar to YOLOv5's results.pandas().xyxy method?
<p>I am currently working with YOLOv8 and I'm wondering if there is a method similar to <code>results.pandas().xyxy</code> available in YOLOv5 to obtain structured results in tabular form. With YOLOv5, it's possible to get results easily using the following code:</p> <pre class="lang-py prettyprint-override"><code>results = model(img) df = results.pandas().xyxy[0] # Get results in tabular format print(df) # xmin ymin xmax ymax confidence class name # 0 749.50 43.50 1148.0 704.5 0.874023 0 person # 1 433.50 433.50 517.5 714.5 0.687988 27 tie # 2 114.75 195.75 1095.0 708.0 0.624512 0 person # 3 986.00 304.00 1028.0 420.0 0.286865 27 tie </code></pre>
<python><pandas><yolov8>
2024-01-29 07:43:15
1
558
Ian Gallegos
77,898,061
6,502,077
Problem moving lines (one by one) from one text file to another
<p>I try to create a program that moves the first line in a text file to another and then remove said line in the first file (i.e. they are moved one by one). This should continue until there aren't any more lines in the first file to be moved to the second file.</p> <p>The problem I have is that the program freezes and won't quit properly. I have experimented with the code and reached the conclusion that the error probably is in the counting of lines but that's all...</p> <p>Here is the code:</p> <pre><code># Create the second file open(&quot;file2.txt&quot;, 'w', encoding=&quot;utf-8-sig&quot;).close() # Find out how many lines in file one lines = open(&quot;file1.txt&quot;, 'r', encoding=&quot;utf-8-sig&quot;).readlines() # Loop the amount of lines while range(len(lines)) != 0: # Get the first line in the first file first_line = open(&quot;file1.txt&quot;, 'r', encoding=&quot;utf-8-sig&quot;).readline() # Write the line to the second file out = open(&quot;file2.txt&quot;, 'a', encoding=&quot;utf-8-sig&quot;) out.write(first_line) # Remove the first line from the first file all_lines = open(&quot;file1.txt&quot;, 'r', encoding=&quot;utf-8-sig&quot;).readlines() new_file = open(&quot;file1.txt&quot;, 'w', encoding=&quot;utf-8-sig&quot;) new_file.writelines(all_lines[1:]) new_file.close() </code></pre> <p>I would really appreciate some help with the code and critic in general.</p>
<python><python-3.x>
2024-01-29 07:37:48
2
702
Lavonen
77,897,936
1,447,071
How can I prevent or mitigate pybel erroneous addition of hydrogens to a CSO3 group
<p>I read a PDB with a CSO3 group into pybel (The example was edited in Maestro Schrodinger to obscure the original chemistry). Note that since I use a lot of non natural amino acids and other molecule building blocks the amino acid name is inaccurate but easily read by Pymol and Schrodinger. Schrodinger also adds implicit hydrogens without the error! here is an original example:</p> <pre><code>TITLE FullModel_2569-std-man REMARK 4 COMPLIES WITH FORMAT V. 3.0, 1-DEC-2006 REMARK 888 REMARK 888 WRITTEN BY MAESTRO (A PRODUCT OF SCHRODINGER, LLC) ATOM 1 N ALA X 1 17.799 25.136 52.626 1.00 16.34 N1+ ATOM 2 CA ALA X 1 16.591 25.195 51.764 1.00 16.34 C ATOM 3 C ALA X 1 16.975 24.592 50.433 1.00 16.34 C ATOM 4 O ALA X 1 17.063 23.378 50.330 1.00 16.34 O ATOM 5 N ALA X 2 17.423 25.444 49.499 1.00 6.89 N ATOM 6 CA ALA X 2 18.667 25.160 48.776 1.00 6.89 C ATOM 7 C ALA X 2 19.807 25.237 49.806 1.00 6.89 C ATOM 8 O ALA X 2 19.691 26.012 50.771 1.00 6.89 O ATOM 9 N ALA X 3 20.783 24.323 49.735 1.00 10.36 N ATOM 10 CA ALA X 3 21.710 24.047 50.849 1.00 10.36 C ATOM 11 C ALA X 3 20.940 23.401 52.024 1.00 10.36 C ATOM 12 O ALA X 3 19.722 23.200 51.950 1.00 10.36 O ATOM 13 CB ALA X 3 22.871 23.144 50.393 1.00 10.36 C ATOM 14 SG ALA X 3 24.414 23.586 51.244 1.00 20.00 S ATOM 15 OD1 ALA X 3 24.591 24.986 50.847 1.00 20.00 O1- ATOM 16 OD2 ALA X 3 24.087 23.405 52.666 1.00 20.00 O ATOM 17 OD3 ALA X 3 25.366 22.624 50.685 1.00 20.00 O ATOM 18 N ASP X 4 21.584 23.207 53.175 1.00 21.55 N ATOM 19 CA ASP X 4 20.963 22.850 54.454 1.00 21.55 C ATOM 20 C ASP X 4 19.873 23.860 54.847 1.00 21.55 C ATOM 21 O ASP X 4 18.792 23.497 55.298 1.00 21.55 O ATOM 22 N ARG X 5 20.063 25.132 54.464 1.00 29.63 N ATOM 23 CA ARG X 5 19.068 26.199 54.538 1.00 29.63 C ATOM 24 C ARG X 5 17.759 25.922 53.771 1.00 29.63 C ATOM 25 O ARG X 5 16.712 26.441 54.157 1.00 29.63 O TER 26 ARG X 5 CONECT 13 14 CONECT 14 13 15 16 17 CONECT 14 16 17 CONECT 15 14 CONECT 16 14 CONECT 16 14 CONECT 17 14 CONECT 17 14 END </code></pre> <p>I ingest the PDB using the function below [it has to be a PDB and not mol2 or sdf because the original is PDB]</p> <pre><code>def pep_to_formats(data_path, mol_name, add_hydrogen=True): base = f&quot;{data_path}/data/{mol_name}/{mol_name}&quot; pep_pdb = f&quot;{base}.pdb&quot; std_pdb = f&quot;{base}-std.pdb&quot; babel_pdbh = f&quot;{base}-babelh.pdb&quot; babel_sdfh = f&quot;{base}-babelh.sdf&quot; babel_mol2h = f&quot;{base}-babelh.mol2&quot; babel_smileh = f&quot;{base}-babelh.smile&quot; babel_smilesh = f&quot;{base}-babelh.smiles&quot; ##input pepticom chemical expansion pdb file output and output standard hydrogenated pdb file babel_mol = pb.readfile(format=&quot;pdb&quot;, filename=std_pdb).__next__() if add_hydrogen: babel_mol.addh() # add Hs for 3D # dd.make3D() babel_mol.write(format='pdb', filename=babel_pdbh, overwrite=True) babel_mol.write(&quot;sdf&quot;, babel_sdfh, True) babel_mol.write(&quot;mol2&quot;, babel_mol2h, True) babel_mol.write(&quot;smi&quot;, babel_smileh, True) babel_mol.write(&quot;smiles&quot;, babel_smilesh, True) return babel_mol, base </code></pre> <p>When I add hydrogens the output SDF adds hydrogens to one of the oxygens and the sulfur. clearly a mistake.</p> <p><a href="https://i.sstatic.net/0vNxD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0vNxD.png" alt="mol depiction from jupyter" /></a></p> <p>Here is the resulting SDF</p> <pre><code>/Users/gideonbar/devel/pep-chem/md_python/data/FullModel_2569-std-man/FullModel_2569-std-man-std.pdb OpenBabel02022411333D 43 43 0 0 1 0 0 0 0 0999 V2000 17.7990 25.1360 52.6260 N 0 0 0 0 0 0 0 0 0 0 0 0 16.5910 25.1950 51.7640 C 0 0 0 0 0 0 0 0 0 0 0 0 16.9750 24.5920 50.4330 C 0 0 0 0 0 0 0 0 0 0 0 0 17.0630 23.3780 50.3300 O 0 0 0 0 0 0 0 0 0 0 0 0 17.4230 25.4440 49.4990 N 0 0 0 0 0 0 0 0 0 0 0 0 18.6670 25.1600 48.7760 C 0 0 0 0 0 0 0 0 0 0 0 0 19.8070 25.2370 49.8060 C 0 0 0 0 0 0 0 0 0 0 0 0 19.6910 26.0120 50.7710 O 0 0 0 0 0 0 0 0 0 0 0 0 20.7830 24.3230 49.7350 N 0 0 0 0 0 0 0 0 0 0 0 0 21.7100 24.0470 50.8490 C 0 0 1 0 0 0 0 0 0 0 0 0 20.9400 23.4010 52.0240 C 0 0 0 0 0 0 0 0 0 0 0 0 19.7220 23.2000 51.9500 O 0 0 0 0 0 0 0 0 0 0 0 0 22.8710 23.1440 50.3930 C 0 0 0 0 0 0 0 0 0 0 0 0 24.4140 23.5860 51.2440 S 0 0 0 0 0 0 0 0 0 0 0 0 24.5910 24.9860 50.8470 O 0 0 0 0 0 0 0 0 0 0 0 0 24.0870 23.4050 52.6660 O 0 5 0 0 0 0 0 0 0 0 0 0 25.3660 22.6240 50.6850 O 0 0 0 0 0 0 0 0 0 0 0 0 21.5840 23.2070 53.1750 N 0 0 0 0 0 0 0 0 0 0 0 0 20.9630 22.8500 54.4540 C 0 0 0 0 0 0 0 0 0 0 0 0 19.8730 23.8600 54.8470 C 0 0 0 0 0 0 0 0 0 0 0 0 18.7920 23.4970 55.2980 O 0 0 0 0 0 0 0 0 0 0 0 0 20.0630 25.1320 54.4640 N 0 0 0 0 0 0 0 0 0 0 0 0 19.0680 26.1990 54.5380 C 0 0 0 0 0 0 0 0 0 0 0 0 17.7590 25.9220 53.7710 C 0 0 0 0 0 0 0 0 0 0 0 0 16.7120 26.4410 54.1570 O 0 0 0 0 0 0 0 0 0 0 0 0 18.5772 24.5766 52.4008 H 0 0 0 0 0 0 0 0 0 0 0 0 16.2814 26.2108 51.6327 H 0 0 0 0 0 0 0 0 0 0 0 0 15.7766 24.6608 52.2070 H 0 0 0 0 0 0 0 0 0 0 0 0 16.9149 26.2630 49.2980 H 0 0 0 0 0 0 0 0 0 0 0 0 18.8183 25.8854 48.0041 H 0 0 0 0 0 0 0 0 0 0 0 0 18.6312 24.1954 48.3143 H 0 0 0 0 0 0 0 0 0 0 0 0 20.8868 23.8113 48.9004 H 0 0 0 0 0 0 0 0 0 0 0 0 22.1321 24.9731 51.1792 H 0 0 0 0 0 0 0 0 0 0 0 0 23.0086 23.2573 49.3380 H 0 0 0 0 0 0 0 0 0 0 0 0 22.6282 22.1299 50.6330 H 0 0 0 0 0 0 0 0 0 0 0 0 24.6600 24.5520 50.3188 H 0 0 0 0 0 0 0 0 0 0 0 0 25.3568 21.8133 51.2175 H 0 0 0 0 0 0 0 0 0 0 0 0 22.5627 23.3128 53.1652 H 0 0 0 0 0 0 0 0 0 0 0 0 21.7153 22.8355 55.2147 H 0 0 0 0 0 0 0 0 0 0 0 0 20.5093 21.8863 54.3517 H 0 0 0 0 0 0 0 0 0 0 0 0 20.9483 25.3660 54.1024 H 0 0 0 0 0 0 0 0 0 0 0 0 19.5055 27.0896 54.1376 H 0 0 0 0 0 0 0 0 0 0 0 0 18.7981 26.2859 55.5697 H 0 0 0 0 0 0 0 0 0 0 0 0 1 2 1 0 0 0 0 1 24 1 0 0 0 0 1 26 1 0 0 0 0 2 3 1 0 0 0 0 2 27 1 0 0 0 0 2 28 1 0 0 0 0 3 4 2 0 0 0 0 3 5 1 0 0 0 0 5 6 1 0 0 0 0 5 29 1 0 0 0 0 6 7 1 0 0 0 0 6 30 1 0 0 0 0 6 31 1 0 0 0 0 7 8 2 0 0 0 0 7 9 1 0 0 0 0 9 10 1 0 0 0 0 9 32 1 0 0 0 0 10 11 1 0 0 0 0 10 13 1 0 0 0 0 10 33 1 1 0 0 0 11 12 2 0 0 0 0 11 18 1 0 0 0 0 13 14 1 0 0 0 0 13 34 1 0 0 0 0 13 35 1 0 0 0 0 14 15 2 0 0 0 0 14 16 1 0 0 0 0 14 36 1 0 0 0 0 17 14 1 0 0 0 0 17 37 1 0 0 0 0 18 19 1 0 0 0 0 18 38 1 0 0 0 0 19 20 1 0 0 0 0 19 39 1 0 0 0 0 19 40 1 0 0 0 0 20 21 2 0 0 0 0 20 22 1 0 0 0 0 22 23 1 0 0 0 0 22 41 1 0 0 0 0 23 24 1 0 0 0 0 23 42 1 0 0 0 0 23 43 1 0 0 0 0 24 25 2 0 0 0 0 M CHG 1 16 -1 M END $$$$ </code></pre> <p><a href="https://i.sstatic.net/8nlmL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8nlmL.png" alt="babel SDF hydrogens" /></a></p> <p><a href="https://i.sstatic.net/Ft5r4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ft5r4.png" alt="information about the bond distances and how I map them" /></a></p> <p>In my code I artificially add double bonds to two of the sulfur oxygen bonds in the PDB, because PDB doesn't capture resonance.</p> <p>How can I prevent this or just delete the hydrogens. I tried</p> <pre><code>for neighbour_atom in ob.OBAtomAtomIter(at): atom_type = neighbour_atom.GetType() bond = at.GetBond(neighbour_atom) neighbour_index = bond.GetNbrAtomIdx(at) print(bond.GetLength()) # print(bond.GetBondOrder()) print(neighbour_index) # print(bond.IsAromatic()) # print(bond.IsInRing()) # print(bond.IsRotor()) # print(bond.IsAmide()) print(atom_type) if atom_type == 'H': pass # at.GetParent().DeleteAtom(neighbour_atom) mol.OBMol.DeleteAtom(neighbour_atom) </code></pre> <p>The code freezes</p> <p>I converted it to a script and received this error:</p> <pre><code>Process finished with exit code 139 (interrupted by signal 11:SIGSEGV) </code></pre>
<python><jupyter><openbabel><pybel>
2024-01-29 07:09:51
1
3,723
Rubber Duck
77,897,621
17,729,094
Map each row into a list and collect all
<p>I have a dataframe:</p> <pre><code>df = pl.DataFrame( { &quot;t_left&quot;: [0.0, 1.0, 2.0, 3.0], &quot;t_right&quot;: [1.0, 2.0, 3.0, 4.0], &quot;counts&quot;: [1, 2, 3, 4], } ) </code></pre> <p>And I want to map each row into a list, and then collect all values (to be passed into e.g. <code>matplotlib.hist</code>)</p> <p>I can do it by hand like:</p> <pre><code>times = [] for t_left, t_right, counts in df.rows(): times.extend(np.linspace(t_left, t_right, counts + 1)[1:]) </code></pre> <p>but this is painfully slow for my data set.</p> <p>I am completely new to both python and polars, so I was wondering if there is a better way to achieve this.</p> <p><strong>EDIT</strong></p> <p>Complete copy-paste example to reproduce.</p> <pre><code>import polars as pl import numpy as np size = 1000000 df = pl.DataFrame( { &quot;t_left&quot;: np.random.rand(size), &quot;t_right&quot;: np.random.rand(size) + 1, &quot;counts&quot;: [1] * size, } ) times = [] for t_left, t_right, counts in df.rows(): times.extend(np.linspace(t_left, t_right, counts + 1)[1:]) </code></pre>
<python><python-polars>
2024-01-29 05:38:02
2
954
DJDuque
77,897,467
2,975,438
Adding thumbs up/down buttons and user's feedback into Streamlit based chatbot
<p>I am trying to add thumbs up/down buttons and user's feedback into Streamlit based chatbot. I use st.chat_message to create chatbot with Streamlit. For thumbs up/down buttons and user's feedback I use <a href="https://github.com/trubrics/streamlit-feedback" rel="nofollow noreferrer">streamlit-feedback</a> python package becouse I did not find any other way to include it into Streamlit based chatbot.</p> <p>My application code looks like:</p> <pre><code>import streamlit as st from streamlit_feedback import streamlit_feedback ... def handle_feedback(): st.write(st.session_state.fb_k) st.toast(&quot;βœ”οΈ Feedback received!&quot;) if &quot;df&quot; in st.session_state: if prompt := st.chat_input(placeholder=&quot;&quot;): ... with st.form('form'): streamlit_feedback(feedback_type=&quot;thumbs&quot;, optional_text_label=&quot;Enter your feedback here&quot;, align=&quot;flex-start&quot;, key='fb_k') st.form_submit_button('Save feedback', on_click=handle_feedback) </code></pre> <p>For some reason <code>streamlit_feedback</code> works only inside <code>st.form</code>. It creates two problems:</p> <ol> <li>To get it work user needs first click on &quot;SUBMIT&quot; button and only then to &quot;Save feedback&quot; button. <img src="https://github.com/trubrics/streamlit-feedback/assets/136030897/07f81665-193b-49a5-9e62-8ad15e3a8995" alt="image" /></li> </ol> <p>If user click &quot;Save feedback&quot; without using &quot;SUBMIT&quot; button then <code>st.session_state.fb_k</code> will be <code>None</code>.</p> <ol start="2"> <li>Feedback inside <code>st.form</code> does not look very appealing and I am looking to ways to get rid of <code>st.form</code>.</li> </ol> <p>I am looking for a way to resolve those problems with <code>streamlit_feedback</code> package or without it.</p> <p>Note that <code>streamlit_feedback</code> package has <code>on_submit</code> parameter where <code>handle_feedback</code> could be included:</p> <pre><code> streamlit_feedback(feedback_type=&quot;faces&quot;, optional_text_label=&quot;[Optional] Please provide an explanation&quot;, align=&quot;flex-start&quot;, key='fb_k', on_submit = handle_feedback) </code></pre> <p>but function:</p> <pre><code>def handle_feedback(): st.write(st.session_state.fb_k) st.toast(&quot;βœ”οΈ Feedback received!&quot;) </code></pre> <p>does not output anything (i do not see printed <code>st.write</code> or <code>st.toast</code> pop-up). So <code>on_submit</code> does not work for some reason.</p> <p>for reference here is full application code:</p> <pre><code> from langchain.chat_models import AzureChatOpenAI from langchain.memory import ConversationBufferWindowMemory # ConversationBufferMemory from langchain.agents import ConversationalChatAgent, AgentExecutor, AgentType from langchain.callbacks import StreamlitCallbackHandler from langchain.memory.chat_message_histories import StreamlitChatMessageHistory from langchain.agents import Tool from langchain.prompts import PromptTemplate from langchain.chains import LLMChain import pprint import streamlit as st import os import pandas as pd from streamlit_feedback import streamlit_feedback def handle_feedback(): st.write(st.session_state.fb_k) st.toast(&quot;βœ”οΈ Feedback received!&quot;) os.environ[&quot;OPENAI_API_KEY&quot;] = ... os.environ[&quot;OPENAI_API_TYPE&quot;] = &quot;azure&quot; os.environ[&quot;OPENAI_API_BASE&quot;] = ... os.environ[&quot;OPENAI_API_VERSION&quot;] = &quot;2023-08-01-preview&quot; @st.cache_data(ttl=72000) def load_data_(path): return pd.read_csv(path) uploaded_file = st.sidebar.file_uploader(&quot;Choose a CSV file&quot;, type=&quot;csv&quot;) if uploaded_file is not None: # If a file is uploaded, load the uploaded file st.session_state[&quot;df&quot;] = load_data_(uploaded_file) if &quot;df&quot; in st.session_state: msgs = StreamlitChatMessageHistory() memory = ConversationBufferWindowMemory(chat_memory=msgs, return_messages=True, k=5, memory_key=&quot;chat_history&quot;, output_key=&quot;output&quot;) if len(msgs.messages) == 0 or st.sidebar.button(&quot;Reset chat history&quot;): msgs.clear() msgs.add_ai_message(&quot;How can I help you?&quot;) st.session_state.steps = {} avatars = {&quot;human&quot;: &quot;user&quot;, &quot;ai&quot;: &quot;assistant&quot;} for idx, msg in enumerate(msgs.messages): with st.chat_message(avatars[msg.type]): # Render intermediate steps if any were saved for step in st.session_state.steps.get(str(idx), []): if step[0].tool == &quot;_Exception&quot;: continue # Insert a status container to display output from long-running tasks. with st.status(f&quot;**{step[0].tool}**: {step[0].tool_input}&quot;, state=&quot;complete&quot;): st.write(step[0].log) st.write(step[1]) st.write(msg.content) if prompt := st.chat_input(placeholder=&quot;&quot;): st.chat_message(&quot;user&quot;).write(prompt) llm = AzureChatOpenAI( deployment_name = &quot;gpt-4&quot;, model_name = &quot;gpt-4&quot;, openai_api_key = os.environ[&quot;OPENAI_API_KEY&quot;], openai_api_version = os.environ[&quot;OPENAI_API_VERSION&quot;], openai_api_base = os.environ[&quot;OPENAI_API_BASE&quot;], temperature = 0, streaming=True ) prompt_ = PromptTemplate( input_variables=[&quot;query&quot;], template=&quot;{query}&quot; ) chain_llm = LLMChain(llm=llm, prompt=prompt_) tool_llm_node = Tool( name='Large Language Model Node', func=chain_llm.run, description='This tool is useful when you need to answer general purpose queries with a large language model.' ) tools = [tool_llm_node] chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools) executor = AgentExecutor.from_agent_and_tools( agent=chat_agent, tools=tools, memory=memory, return_intermediate_steps=True, handle_parsing_errors=True, verbose=True, ) with st.chat_message(&quot;assistant&quot;): st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False) response = executor(prompt, callbacks=[st_cb, st.session_state['handler']]) st.write(response[&quot;output&quot;]) st.session_state.steps[str(len(msgs.messages) - 1)] = response[&quot;intermediate_steps&quot;] response_str = f'{response}' pp = pprint.PrettyPrinter(indent=4) pretty_response = pp.pformat(response_str) with st.form('form'): streamlit_feedback(feedback_type=&quot;thumbs&quot;, optional_text_label=&quot;[Optional] Please provide an explanation&quot;, align=&quot;flex-start&quot;, key='fb_k') st.form_submit_button('Save feedback', on_click=handle_feedback) </code></pre>
<python><chatbot><streamlit><langchain><large-language-model>
2024-01-29 04:37:51
0
1,298
illuminato
77,897,371
721,998
combinations of list of lists using python
<p>I am given an n-list-of-lists:</p> <pre><code>[ [a, b, c], [p, q, r], .., .., [x, y, z] ] </code></pre> <p>I am supposed to create a result by choosing one element from each lists. For example, the result for the above example should look like:</p> <pre><code>[ [a, p, .., x], [a, p, .., y], [a, p, .., z], .. .. [c, r, .., z] ] </code></pre> <p>How can I implement it without using n-nested for loops? I cannot use n-nested for loops because we do not know the size of input n-list-of-lists until runtime..</p>
<python><recursion><combinations><depth-first-search>
2024-01-29 04:00:25
1
6,511
Darth.Vader
77,897,370
146,077
Building tabular report in Pandas with index aligned concat
<p>I have two DataFrames that I want to merge or concat to build a report.</p> <pre><code>&gt;&gt;&gt; df1 class item value 0 A _1 10 1 A _2 11 2 B _3 12 3 X _4 13 &gt;&gt;&gt; df2 class item value 0 A _5 20 1 B _6 21 2 B _7 22 3 C _8 23 </code></pre> <p>My target is to align these two DataFrames by &quot;class&quot; so that users can inspect items for each class. That is, the &quot;class A&quot; items are aligned and the first &quot;class A&quot; row contains the first item in &quot;class A&quot; from each DataFrame. The second &quot;class A&quot; row only contains an <code>item_1</code> as <code>df2</code> did not contain a second &quot;class A&quot; item.</p> <pre><code>&gt;&gt;&gt; result class item_1 value_1 item_2 value_2 0 A _1 10 _5 20 1 A _2 11 NaN NaN 2 B _3 12 _6 21 3 B NaN NaN _7 22 4 C NaN NaN _8 23 5 X _4 13 NaN NaN </code></pre> <p>I'd normally pass this off as &quot;a presentation issue&quot; and not really relevant to Pandas, but with streamlit I'm simply rendering the DataFrame on the page so it needs to be structured correctly.</p> <p>This isn't a merge, nor a concat - it's almost like a partial merge (joining on the &quot;class&quot; column) followed by a concat operation.</p>
<python><pandas><dataframe>
2024-01-29 04:00:18
1
28,876
Kirk Broadhurst
77,897,298
16,220,410
storing and retrieving hashed password in postgres
<p>I am watching a tutorial on FastAPI, where it switched the database from SQLite to PostgreSQL, before generating a token. It was working before but now it has an error, shown below</p> <p><code>return bcrypt.checkpw(password=password_byte_enc, hashed_password=hashed_password) TypeError: argument 'hashed_password': 'str' object cannot be converted to 'PyBytes'</code></p> <p>I think this is the code concerning the error:</p> <pre><code>def get_password_hash(password): # return bcrypt_context.hash(password) pwd_bytes = password.encode('utf-8') salt = bcrypt.gensalt() hashed_password = bcrypt.hashpw(password=pwd_bytes, salt=salt) return hashed_password def verify_password(plain_password, hashed_password): password_byte_enc = plain_password.encode('utf-8') return bcrypt.checkpw(password=password_byte_enc, hashed_password=hashed_password) </code></pre> <p>The entirity of the auth.py file is here <a href="https://pastebin.com/mHcd0YLU" rel="nofollow noreferrer">https://pastebin.com/mHcd0YLU</a></p> <p>This is where I input the username and password to generate a token but it gets an error: <a href="https://i.sstatic.net/oU8cw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oU8cw.png" alt="username and password input" /></a></p>
<python><postgresql><fastapi><bcrypt>
2024-01-29 03:25:51
1
1,277
k1dr0ck
77,897,286
1,224,363
simple python program with gmpy2 is crashing (but i can do in scala???) -> overflow in mpz type / Aborted (core dumped
<p>I am running into an issue with the above mentioned library, running a very simple example -- which I can get working using similar libraries in JVM land (using Scala).</p> <p>The Scala code snippet below prints 'good', while the python version dies with the error in the title of the post.</p> <p>Wondering if this is a known bug.. or am i doing something silly in the Python version of the calculation ? thnx in advance !</p> <p><strong>Scala</strong></p> <pre><code>import java.math.BigInteger import java.nio.charset.StandardCharsets import java.util.Base64 object Example extends App { val pstr: String =&quot;134078079299425970995740249982058461274793658205923933&quot; + &quot;77723561443721764030073546976801874298166903427690031&quot; + &quot;858186486050853753882811946569946433649006084171&quot; val gstr = &quot;11717829880366207009516117596335367088558084999998952205&quot; + &quot;59997945906392949973658374667057217647146031292859482967&quot; + &quot;5428279466566527115212748467589894601965568&quot; val hstr = &quot;323947510405045044356526437872806578864909752095244&quot; + &quot;952783479245297198197614329255807385693795855318053&quot; + &quot;2878928001494706097394108577585732452307673444020333&quot; val decodedBytes = Base64.getDecoder.decode(&quot;Mzc1Mzc0MjE3ODMw&quot;) val decodedString = new String(decodedBytes, StandardCharsets.UTF_8) // obfuscated ... can't fall into wrong hands val p: BigInteger = new BigInteger(pstr) val g: BigInteger = new BigInteger(gstr) val h: BigInteger = new BigInteger(hstr) val exponent: BigInteger = new BigInteger(decodedString) val recover = g.modPow(exponent, p) if (recover == h) print(&quot;good&quot;) else print(&quot;bad&quot;) } </code></pre> <p><strong>Python version</strong></p> <pre class="lang-py prettyprint-override"><code>import gmpy2 import base64 pstr = &quot;134078079299425970995740249982058461274793658205923933&quot; + \ &quot;77723561443721764030073546976801874298166903427690031&quot; + \ &quot;858186486050853753882811946569946433649006084171&quot; gstr = &quot;11717829880366207009516117596335367088558084999998952205&quot; + \ &quot;59997945906392949973658374667057217647146031292859482967&quot; + \ &quot;5428279466566527115212748467589894601965568&quot; hstr = &quot;323947510405045044356526437872806578864909752095244&quot; + \ &quot;952783479245297198197614329255807385693795855318053&quot; + \ &quot;2878928001494706097394108577585732452307673444020333&quot; p = gmpy2.mpz(pstr) g = gmpy2.mpz(gstr) h = gmpy2.mpz(hstr) secret_exponent_encoded = base64.b64decode(b'Mzc1Mzc0MjE3ODMw').decode('utf-8') exponent = gmpy2.mpz(secret_exponent_encoded) recover = (g ** exponent) % p if (recover == h): print(&quot;true&quot;) else: print(&quot;false&quot;) </code></pre>
<python><largenumber>
2024-01-29 03:22:19
1
2,744
Chris Bedford
77,897,280
13,345,077
Disable linter error of `method is not a known member of module "cv2"` from pylance in VSCode
<ul> <li><p>OS: WSL 2 Ubuntu 20.04.6 LTS</p> </li> <li><p>VSCode version: 1.85.2</p> </li> <li><p>Pylance version: v2023.12.1</p> </li> <li><p>Pyhotn version: 3.10.13</p> </li> <li><p>opencv-python version: 4.6.0.66</p> </li> <li><p>settings.json</p> </li> </ul> <pre class="lang-json prettyprint-override"><code>{ &quot;terminal.integrated.defaultProfile.windows&quot;: &quot;Ubuntu (WSL)&quot;, &quot;workbench.colorTheme&quot;: &quot;Monokai Pro&quot;, &quot;editor.mouseWheelZoom&quot;: true, &quot;python.analysis.autoImportCompletions&quot;: true, &quot;flake8.args&quot;:[ &quot;--ignore=W293,W292,W291&quot;, ], &quot;pylint.args&quot;:[ &quot;--extension-pkg-whitelist=cv2&quot;, &quot;--generated-members=numpy.*, torch.*, cv2.*, cv.*&quot;, &quot;--disable=C0114,C0115,C0116&quot; ], &quot;python.analysis.typeCheckingMode&quot;: &quot;basic&quot;, // Pylance &quot;python.analysis.fixAll&quot;: [], &quot;editor.fontSize&quot;: 16, &quot;workbench.preferredDarkColorTheme&quot;: &quot;Monokai&quot;, &quot;workbench.preferredHighContrastColorTheme&quot;: &quot;Monokai&quot;, &quot;files.autoSave&quot;: &quot;afterDelay&quot;, &quot;[python]&quot;: { &quot;editor.formatOnType&quot;: true, &quot;editor.defaultFormatter&quot;: &quot;ms-python.autopep8&quot; }, &quot;editor.rulers&quot;: [ 79 ], &quot;git.ignoreLimitWarning&quot;: true, &quot;remote.SSH.remotePlatform&quot;: { &quot;192.168.60.229&quot;: &quot;linux&quot;, &quot;192.168.60.230&quot;: &quot;linux&quot; }, &quot;githubPullRequests.pullBranch&quot;: &quot;never&quot;, &quot;vim.useCtrlKeys&quot;: false, &quot;workbench.tree.indent&quot;: 4, &quot;workbench.iconTheme&quot;: &quot;vscode-icons&quot;, &quot;isort.check&quot;: true, &quot;window.zoomLevel&quot;: -1, &quot;remote.autoForwardPortsSource&quot;: &quot;hybrid&quot;, &quot;workbench.editorAssociations&quot;: { &quot;*.md&quot;: &quot;default&quot; }, &quot;markdown-preview-enhanced.previewTheme&quot;: &quot;atom-dark.css&quot;, &quot;git.openRepositoryInParentFolders&quot;: &quot;never&quot; } </code></pre> <p>I have installed the pylance, pylint and flake8 extensions with configuring settings with <code>cv2</code> yet it shows the linter error message <code>&quot;cvtColor&quot; is not a known member of module &quot;cv2&quot;</code>. Could anyone help me with this linter setting problem? many thanks! <a href="https://i.sstatic.net/Aq9nH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Aq9nH.png" alt="enter image description here" /></a></p>
<python><visual-studio-code><pylance>
2024-01-29 03:20:47
1
412
Weber Huang
77,897,033
8,229,029
USing Python selenium to extract attribute with no class, id, or other unique identifier
<p>I need to extract text using Selenium for html inside a table. There are no unique classes, id's or other identifiers I can use. The lines look like this. I need the &quot; Cost Elements&quot; text.</p> <pre><code>&lt;th align=&quot;LEFT&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;350 px&quot; overflow=&quot;HIDDEN&quot;&gt; Cost Elements&lt;/th&gt; </code></pre> <p>Here is the full block of html.</p> <pre><code>&lt;tr&gt; &lt;th align=&quot;LEFT&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;350 px&quot; overflow=&quot;HIDDEN&quot;&gt; Cost Elements&lt;/th&gt; &lt;th align=&quot;CENTER&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;160 px&quot; overflow=&quot;HIDDEN&quot;&gt; Plan&lt;/th&gt; &lt;th align=&quot;CENTER&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;160 px&quot; overflow=&quot;HIDDEN&quot;&gt; Period 6&lt;/th&gt; &lt;th align=&quot;CENTER&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;160 px&quot; overflow=&quot;HIDDEN&quot;&gt; Cumulative Act. &lt;/th&gt; &lt;th align=&quot;CENTER&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;160 px&quot; overflow=&quot;HIDDEN&quot;&gt; Commitments&lt;/th&gt; &lt;th align=&quot;CENTER&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;160 px&quot; overflow=&quot;HIDDEN&quot;&gt; $ Variance&lt;/th&gt; &lt;th align=&quot;CENTER&quot; bgcolor=&quot;7DA6CF&quot; width=&quot;90 px &quot; overflow=&quot;HIDDEN&quot;&gt; % Remain&lt;/th&gt; &lt;/tr&gt; </code></pre> <p>Here is my code, if it helps. Table1_cols is where I'm trying to extract the table column names.</p> <pre><code>from selenium import webdriver from selenium.webdriver.edge.service import Service from selenium.webdriver.common.by import By service = Service(executable_path = 'C:\Program Files\edgedriver_win64\msedgedriver.exe') driver = webdriver.Edge(service=service) driver.get('C:\\Users\\User\\Downloads\\_SAPreport-behnke r-20240102.HTM_.HTM') ne_mesonet_table = driver.find_element(By.LINK_TEXT, &quot;Nebraska Mesonet&quot;) ne_mesonet_table.click() ne_mesonet_xpath1 = '//html//body//table[1]//tbody' table1 = driver.find_element(By.XPATH, ne_mesonet_xpath1) table1_rows = table1.find_elements(By.TAG_NAME, &quot;tr&quot;) table1_cols = table1_rows[0].find_elements(By.TAG_NAME, 'th') </code></pre>
<python><selenium-webdriver><attributes>
2024-01-29 01:29:39
1
1,214
user8229029
77,896,993
13,520,498
Processing time increases with the number of Threads
<p>I'm working with the <code>pymeshlab</code> library. I noticed a weird behavior with <code>multithreading</code> while using one of their filters <code>generate_surface_reconstruction_screened_poisson</code> which can be found here: <a href="https://pymeshlab.readthedocs.io/en/latest/filter_list.html#generate_surface_reconstruction_screened_poisson" rel="nofollow noreferrer">https://pymeshlab.readthedocs.io/en/latest/filter_list.html#generate_surface_reconstruction_screened_poisson</a></p> <p>What it basically does is, create a watertight surface from oriented point sets. For example, if there are holes or missing surfaces in your mesh, it will reconstruct the surface of that area using the adjacent point sets as a reference.</p> <p>The left is the input and the right is the output:</p> <p><a href="https://i.sstatic.net/LAst1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LAst1.png" alt="enter image description here" /></a></p> <p>For this filter, there is an argument <code>threads</code> that can be passed which essentially represents</p> <pre><code>threads : int = 16 (default) Number Threads: Maximum number of threads that the reconstruction algorithm can use. </code></pre> <p>Now I have tried with different numbers of threads but it seems like the best performance in terms of execution time I'm getting while setting the number of threads to two. Increasing or decreasing the number of threads just increases the time elapsed in the execution of the filter. Here's my implementation:</p> <pre><code>import pymeshlab import time for thread_number in [1, 2, 4, 8, 16]: ms = pymeshlab.MeshSet() ms.load_new_mesh('FullHead.obj') start_time = time.perf_counter() ms.apply_filter('generate_surface_reconstruction_screened_poisson', visiblelayer=True, preclean=True, threads=thread_number) elapsed_time = time.perf_counter() - start_time print(f&quot;time taken for running with {thread_number} threads: {elapsed_time} seconds\n&quot;) ms.save_current_mesh(f'recons_output/{thread_number}_reconstructed_mesh.obj') </code></pre> <p>Output:</p> <pre><code>time taken for running with 1 threads: 7.891609824000625 seconds time taken for running with 2 threads: 7.006016070998157 seconds time taken for running with 4 threads: 10.419608506999793 seconds time taken for running with 8 threads: 17.781056181996973 seconds time taken for running with 16 threads: 31.617957017999288 seconds </code></pre> <p>This behavior seems quite strange to me. The number of cores and available threads in my CPU is 4. Here's my CPU specs: <a href="https://www.intel.com/content/www/us/en/products/sku/97456/intel-core-i57300hq-processor-6m-cache-up-to-3-50-ghz/specifications.html" rel="nofollow noreferrer">https://www.intel.com/content/www/us/en/products/sku/97456/intel-core-i57300hq-processor-6m-cache-up-to-3-50-ghz/specifications.html</a></p> <p>Shouldn't increasing the number of threads make the execution faster for at least the number of threads set to four? Also, let me know if there is any other way to achieve faster execution for surface reconstruction with the screened-poisson algorithm.</p> <p>P.S. If you want to try in your system, here's the <code>FullHead.obj</code> file uploaded in Google Drive: <a href="https://drive.google.com/file/d/1XjRq1HFyXvogDTSQtiKD2raIgvof5Tkj/view?usp=drive_link" rel="nofollow noreferrer">https://drive.google.com/file/d/1XjRq1HFyXvogDTSQtiKD2raIgvof5Tkj/view?usp=drive_link</a></p> <p>You can find similar results on google-colab too: <a href="https://colab.research.google.com/drive/1D-44yptui8tVCGU286y6Pi09yi0r4-Ce?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1D-44yptui8tVCGU286y6Pi09yi0r4-Ce?usp=sharing</a></p>
<python><python-3.x><multithreading><pymeshlab>
2024-01-29 01:13:31
0
1,991
Musabbir Arrafi
77,896,948
3,498,864
Formulating a constraint with pyomo
<p>I'm trying to array genes (<code>elements</code>) in a matrix (<code>matrix_a</code>) based on the restriction sites they have (<code>attributes</code>). Restriction sites are recognized by enzymes that cut the DNA at a specific site. This can be represented as a sparse matrix with rows as genes, and columns as restriction enzymes, and values being zero or one, meaning the enzyme cuts the gene or it doesn't cut the gene. Given that information, I want to use ILP to array the genes such that the number of enzymes needed for each gene is minimized.</p> <p><strong>Background:</strong> Given a 3X3 matrix, I'm trying to arrange genes (<code>elements</code>) in the matrix (<code>matrix_a</code>) based on an adjacency rule. For any given <code>position</code> in the matrix, a restriction enzyme (<code>attribute</code>) should be used that cuts genes in all adjacent positions in the matrix, but not the gene at that position. I got help to produce an adjacency constraint that works. I just wanted to provide an overall context for the problem.</p> <p><strong>The problem</strong> is that sometimes it'll be necessary to add more than one restriction enzyme to a gene in order to cut all of the neighboring genes, and I'm stuck trying to formulate this as a constraint. My attempt at doing this immediately follows the <code>CONSTRAINTS</code> section in the code below; the code is commented-out. Ideally, the constraints would be:</p> <ol> <li>Only one gene (<code>element</code>) can be assigned to a position in the matrix, but within that one position, the gene (<code>element</code>) can appear multiple times if multiple restriction enzymes (<code>attributes</code>) are used</li> <li>Once a gene (<code>element</code>) is assigned to a position in the matrix, it must not appear in any other positions</li> </ol> <p>One way of doing this is: when I iterate over all elements and all positions, I need to collapse cases where one element appears at a given position more than once into a single value rather than the taking the sum. I haven't been able to distinguish, via constraints, cases where a gene (<code>element</code>) appears in a <em>single</em> position more than once and cases where a gene (<code>element</code>) appears in <em>multiple</em> positions.</p> <p><strong>Note</strong> I'm not sure if the <code>element_map</code> in my code below actually has a solution given my desired constraints</p> <pre><code>import pyomo.environ as pyo import numpy as np &quot;&quot;&quot; fit elements into matrix based on adjacency rules &quot;&quot;&quot; class Element: &quot;&quot;&quot;a convenience to hold the rows of attribute values&quot;&quot;&quot; def __init__(self, row): self.attributes = tuple(row) def attribute(self, idx): return self.attributes[idx] def __repr__(self): return str(self.attributes) class Position: &quot;&quot;&quot;a convenience for (x, y) positions that must have equality &amp; hash defined for consistency&quot;&quot;&quot; def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return f'({self.x}, {self.y})' def __hash__(self): return hash((self.x, self.y)) def __eq__(self, other): if isinstance(other, Position): return (self.x, self.y) == (other.x, other.y) return False # each 'row' corresponds to an element # each 'column' corresponds to an attribute of the various elements # here, each element has 5 attributes, which are always 0 or 1 element_map = np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1], [1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1], [1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1], [0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1]]) matrix_a_rows = 3 matrix_a_cols = 3 matrix_a = np.zeros((matrix_a_rows, matrix_a_cols)) def adj_xy(mat, p: Position): x, y = p.x, p.y res = [] rows = len(mat) - 1 cols = len(mat[0]) - 1 for i in range(x - 1, x + 2): for j in range(y - 1, y + 2): if all((0 &lt;= i &lt;= rows, 0 &lt;= j &lt;= cols, (i, j) != (x, y))): res.append(Position(i, j)) return res # SET UP ILP m = pyo.ConcreteModel('matrix_fitter') # SETS m.E = pyo.Set(initialize=[Element(row) for row in element_map], doc='elements') m.P = pyo.Set(initialize=[Position(x, y) for x in range(len(matrix_a)) for y in range(len(matrix_a[0]))], doc='positions') m.A = pyo.Set(initialize=list(range(len(element_map[0]))), doc='attribute') #only consider valid element positionings valid_vals = [] for element in m.E: for pos in m.P: for i in m.A: if element.attribute(i) == 0: valid_vals.append((element,pos,i)) m.VS = pyo.Set(initialize=set(valid_vals)) # VARS # place element e in position p based on attribute a being 0... m.place = pyo.Var(m.VS, domain=pyo.Binary, doc='place') # OBJ: m.obj = pyo.Objective(expr=pyo.sum_product(m.place), sense=pyo.minimize) # CONSTRAINTS #each element can only appear in one position, but can appear multiple times inthat position with different attributes #sum across all positions (not within one position) must be == 1 #collapse cases where element is in one position twice to one #something where 0==0, 1==1, 2==1 #m.one_element_per_across_all_positions = pyo.ConstraintList() #for e in m.E: # for p in m.P: # s = 0 # for a in m.A: # if e.attribute(a) == 1: continue # # s += m.place[e,p,a] # # m.one_element_per_across_all_positions.add(s &lt;= 2) #each place must have 1 or more elements m.single = pyo.ConstraintList() for p in m.P: s = 0 for e in m.E: for a in m.A: if e.attribute(a) == 1: continue s += m.place[e,p,a] m.single.add(s &gt;= 1) #each element/attribute combo appears only once m.eacombo = pyo.ConstraintList() for element in m.E: for attribute in m.A: if element.attribute(attribute) == 1: continue s = 0 for p in m.P: s += m.place[element,p,attribute] m.eacombo.add(s &lt;= 1) #adjacency constraint m.attr_constraint = pyo.ConstraintList() for p in m.P: for e in m.E: s = 0 for a in m.A: if e.attribute(a) == 1: continue s += m.place[e,p,a] m.attr_constraint.add(s &lt;= 2) m.adjacency_rule = pyo.ConstraintList() for place in m.P: neighbor_positions = adj_xy(matrix_a, place) for element in m.E: for attribute in m.A: if element.attribute(attribute) == 1 : continue s = 0 for ee in m.E: if ee.attribute(attribute) == 0: continue if ee == element: continue for pp in neighbor_positions: for aa in m.A: if ee.attribute(aa) == 1: continue s += m.place[ee,pp,aa] * ee.attribute(attribute) m.adjacency_rule.add(s &gt;= len(neighbor_positions)*m.place[element,place,attribute]) solver = pyo.SolverFactory('cbc') results = solver.solve(m, tee=True) print(results, 'results') if results.solver.termination_condition == pyo.TerminationCondition.optimal: for idx in m.place.index_set(): if m.place[idx].value == 1: print(idx, 'idx') if pyo.value(m.obj) == matrix_a_rows * matrix_a_cols: # all positions were filled print('success!') else: print(f'number of elements that can be placed is {pyo.value(m.obj)} / {matrix_a_rows * matrix_a_cols}') else: print('Problem with model...see results') </code></pre> <p>The result I get is:</p> <pre><code>((1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1), (2, 2), 1) idx ((1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1), (1, 1), 3) idx ((1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1), (2, 0), 1) idx ((0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1), (1, 0), 0) idx ((1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1), (0, 0), 1) idx ((0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1), (1, 2), 11) idx ((1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0), (2, 1), 5) idx ((1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1), (0, 2), 1) idx ((1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0), (0, 1), 12) idx </code></pre> <p>Notice that the element in position <code>(2,1)</code> is the same as the element in <code>(0,1)</code>. This would be equivalent to having the same gene in two different positions in the matrix, which is what I <strong>don't</strong> want. <strong>I need a constraint that a) prevents the same element from appearing in two different positions and also b) allows for the same element to appear within <em>one</em> position multiple times, with different <code>attributes</code></strong></p>
<python><optimization><sparse-matrix><linear-programming><integer-programming>
2024-01-29 00:51:15
0
3,719
Ryan
77,896,944
13,000,229
Difference between df.plot(kind='bar') and df.plot.bar()
<p>I use Python 3.12.1 and pandas 2.2.0.<br /> I find <code>df.plot(kind='bar')</code> and <code>df.plot.bar()</code> work in a similar (or even the same?) way.</p> <ol> <li>Is there any difference between these two?</li> <li>Which is the preferred way?</li> </ol> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html</a><br /> <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.bar.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.bar.html</a></p>
<python><pandas><dataframe><plot>
2024-01-29 00:49:07
2
1,883
dmjy
77,896,918
3,171,992
Python adding a double backslash when escaping string
<p>I work with devices such as routers and switches. I am trying to set up some banners on the devices based on user input. Since we cant know how the user would give an input(escaped or otherwise), I am trying to add escape characters to ', &quot; and \</p> <pre><code>Input: &quot;switch_mgmt&quot;: { &quot;cli_banner&quot;: '\t\n&quot;test&quot;' } def QUOTED_ESCAPED(name): return r'%s' % name.replace('\&quot;', '\\&quot;') cli_banner = get_str(self.device_mgmt.get('cli_banner'), None) if cli_banner: ensure_dict(dict1, &quot;system&quot;, &quot;login&quot;).update({ &quot;message&quot;: QUOTED_ESCAPED(cli_banner) }) </code></pre> <p>But python returns with \\&quot; and so the final command generated which is pushed to the device has a \\ instead of a \ &quot;.</p> <p>The ensure_dict function basically just makes sure (dict1, &quot;system&quot;, &quot;login&quot;) is an existing dict or if not, it creates it and assigns a {} to it.</p> <p>Any reason why Python is adding the extra escaped character? Is there a way I can just add \&quot;?</p> <p>I want to store '\&quot;test\&quot;' into message. But its storing '\\&quot;test\\&quot;' into message. Hence command generated at the end is 'set system login message '\\&quot;test\\&quot;' '</p> <p>I am looking to get 'set system login message '\&quot;test\&quot;' '</p> <p>Edit: <a href="https://i.sstatic.net/e4ufL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e4ufL.png" alt="enter image description here" /></a></p> <p>I need to use the printed result in the variable. If I use the variable directly, it will keep the \</p> <p>Update:</p> <p>So it seems multiline strings store escaped quotes with single escape characters. But replace or re.sub still involves the python interpreter and it adds the extra \ to the replaced characters</p>
<python><escaping>
2024-01-29 00:33:28
0
436
iceyb
77,896,833
5,965,685
Pandas Resample with Linear Interpolation
<p>I have some hourly data, such as below, with odd sample times.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>#</th> <th>Date Time, GMT-08:00</th> <th>Temp, Β°C</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>10/31/23 15:51</td> <td>13.41</td> </tr> <tr> <td>2</td> <td>10/31/23 16:51</td> <td>7.49</td> </tr> <tr> <td>3</td> <td>10/31/23 17:51</td> <td>7.61</td> </tr> <tr> <td>4</td> <td>10/31/23 18:51</td> <td>7.39</td> </tr> <tr> <td>5</td> <td>10/31/23 19:51</td> <td>7.34</td> </tr> <tr> <td>6</td> <td>10/31/23 20:51</td> <td>7.33</td> </tr> <tr> <td>7</td> <td>10/31/23 21:51</td> <td>7.38</td> </tr> </tbody> </table> </div> <p>I would like to resample with interpolation so the data points occur on the hour. I.e. 1500, 1600, 1700...</p> <p>I assumed the following would work, but I've been unable to make this do what I expected.</p> <p><code>df.resample('60min').first().interpolate('linear')</code></p>
<python><pandas><pandas-resample>
2024-01-28 23:50:39
1
431
mitchute
77,896,751
11,342,139
How to overcome a class with too many dependencies in the constructor?
<p>I have a Python project where I have a Car class that has a main state machine with multiple states. Moreover, The Car class has multiple systems. These systems are Engine, GearBox, Brakes, Dashboard, Stereo and a few more. During the transition of each state, I have entry and exit action which do something with these systems. But aside from this I should be able to control the systems separately as well by checking in which state I am at the moment.</p> <p>As you can see I have multiple systems and passing them as a dependency injection to my Car class leads to constructor bloating which indicates too many responsibilities. This problem I need to solve.</p> <p>I tried to solve this by moving similar systems behind a facade class but the issue I find here is that for some of the systems, the components that make these systems need to be controller separately as well. An example here:</p> <p>We have a system that consists of 2 more systems. In the facade I can call a method activate system and that method will have the two systems behind this facade system be activated but sometimes I need to be able to activate them separately from each other. Now, I could just have 2 more methods activating each system separately in the facade class but that kind of feels to me that I am breaking the principle behind the facade pattern.</p> <p>I guess in general I could explain my problem as such where I have a class that has many dependencies that are needed and I try to move them behind facades but after some facades there is more behaviour I need to call directly in the main class but then that makes my code really nested with multiple facades within facades.</p>
<python><dependency-injection><constructor><facade>
2024-01-28 23:09:41
1
1,046
Angel Hadzhiev
77,896,439
1,609,514
How do you define a Python package's requirements when an older version of pip and setuptools are needed to install?
<p>I have an old <a href="https://github.com/billtubbs/gym-CartPole-bt-v0/tree/master/gym_CartPole_BT/envs" rel="nofollow noreferrer">Open AI gym environment</a> in a GitHub repo which requires gym==0.21.0 and pyglet==1.5.27. Based on <a href="https://stackoverflow.com/a/77205046/1609514">this answer</a> I found I can still install and run these versions provided I use an older version of pip and setuptools (pip==21 and setuptools=65.5.0).</p> <p>How should I update my package and ReadMe file to explain how to install it for other users?</p> <p>Should I put everything in the requirements.txt file as follows:</p> <pre class="lang-none prettyprint-override"><code>pip==21.0 setuptools==65.5.0 wheel==0.38.0 cloudpickle==3.0.0 numpy==1.26.3 scipy==1.12.0 gym==0.21.0 pyglet==1.5.27 </code></pre> <p>I tried this but running</p> <pre><code>pip install -r requirements.txt </code></pre> <p>doesn't work, presumably because the current/default pip version tries to install everything.</p> <p>Is it therefore necessary to tell the user to do it in two steps:</p> <ol> <li>first install the correct pip and setuptools versions (e.g. in a new environment),</li> <li>and then proceed with the installation in the usual way, i.e.</li> </ol> <pre><code>pip install -e . </code></pre> <p>(This works, I'm just not sure it is the best way to do it).</p>
<python><pip><package><openai-gym>
2024-01-28 21:08:01
1
11,755
Bill
77,896,279
15,474,507
Get file information to download files generates HttpError 404 googleapis
<p>With this code <strong>I can</strong> download a file from Google Drive, it downloads me into a file without extension and with a random name in a temporary folder. The file size is correct and the same as the original file.</p> <pre><code>@app.route('/download_from_drive', methods=['POST']) def download_from_drive(): data = flask_request.get_json() url = data.get('url') print(url) # Extract the file ID from the Google Drive URL file_id = url.split('id=')[-1] # Create a temporary file for download with tempfile.NamedTemporaryFile(delete=False) as temp_file: print(&quot;File downloaded to:&quot;, temp_file.name) # Get the contents of the file request = drive_service.files().get_media(fileId=file_id) fh = io.FileIO(temp_file.name, 'wb') downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() print(&quot;Download %d%%.&quot; % int(status.progress() * 100)) return jsonify({'message': 'Download completed successfully!'}) </code></pre> <p>But when I try to download the file with its original name I get a <strong>404</strong> error. This is unexpected behavior since I am the owner of the file and I use credentials.json and token.json (taken from my google cloud project) which allow me to find my files</p> <p>The code that gives me problems is this:</p> <pre><code>def extract_drive_file_id(file_url): try: # Extract the file ID from the Google Drive link file_id = file_url.split('/')[-1] return file_id except IndexError: return None @app.route('/download_from_drive', methods=['POST']) def download_from_drive(): data = flask_request.get_json() url = data.get('url') # Extract the file ID from the Google Drive URL file_id = url.split('id=')[-1] # Get file information file_info = drive_service.files().get(fileId=file_id).execute() file_name = file_info['name'] # Create a file with the original file name for download with open(file_name, 'wb') as file: print(&quot;File downloaded to:&quot;, file_name) # Get the contents of the file request = drive_service.files().get_media(fileId=file_id) fh = io.FileIO(file_name, 'wb') downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() print(&quot;Download %d%%.&quot; % int(status.progress() * 100)) return jsonify({'message': 'Download completed successfully!'}) </code></pre> <p>Errors from Console/Dev</p> <pre><code>https://drive.google.com/open?id=1HAaaT298cUncQ************** loadMore.js:20 POST http://localhost:5004/download_from_drive 500 (INTERNAL SERVER ERROR) uploadToTelegram @ loadMore.js:20 (anonymous) @ loadMore.js:15 VM1322:1 Uncaught (in promise) SyntaxError: Unexpected token '&lt;', &quot;&lt;!doctype &quot;... is not valid JSON Promise.then (async) uploadToTelegram @ loadMore.js:28 (anonymous) @ loadMore.js:15 return wrapped(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\admin\AppData\Roaming\Python\Python311\site-packages\googleapiclient\http.py&quot;, line 938, in execute raise HttpError(resp, content, uri=self.uri) googleapiclient.errors.HttpError: &lt;HttpError 404 when requesting https://www.googleapis.com/drive/v3/files/1HAaaT298cUncQ**************?alt=json returned &quot;File not found: 1HAaaT298cUncQ**************.&quot;. Details: &quot;[{'message': 'File not found: 1HAaaT298cUncQ**************.', 'domain': 'global', 'reason': 'notFound', 'location': 'fileId', 'locationType': 'parameter'}]&quot;&gt; </code></pre> <p>I read <a href="https://developers.google.com/drive/api/reference/rest/v2/files/get?hl=it" rel="nofollow noreferrer">here</a> to understand better</p>
<python><google-apps-script><google-app-engine>
2024-01-28 20:14:18
0
307
Alex Doc
77,896,229
6,871,867
How to color particular cells given row index and column name in pandas?
<p>I have a dataframe df of shape (100,10) and a dictionary columns_to_color for each row.</p> <pre><code>columns_to_color = { 0: ['col1', 'col2', 'col9'], # Columns 1, 2 and 9 should be colored for row 0 1: ['col1', 'col5', 'col8'], # Columns 1, 5 and 8 should be colored for row 1 2: ['col3', 'col4', 'col7'], # Columns 3, 4 and 7 should be colored for row 2 ....... </code></pre> <p>How do I color those particular cell for each row of that column? and then save it to excel?</p>
<python><pandas>
2024-01-28 20:00:08
2
462
Gopal Chitalia
77,895,978
15,494,335
python elegant way to iterate over a buffer whose size is not divisible by the step
<p>Let's say I have a buffer buf=b&quot;...&quot; of length 953. I want to iterate over it using a step size of 33. Now 953 is not divisible by 33, so there will be a remainder of 29.</p> <p>I want to iterate over the buffer, but without exceeding the end on the last iteration. I could have used a:</p> <blockquote> <p>for index in range(0,954,33): print(index)</p> </blockquote> <p>The problem is that the last index (924), if assuming the same step size, would exceed the end of the buffer (924+33=957). Now I could use the following:</p> <blockquote> <p>for index in range(0,954,33): size=min(33,954-index) ...</p> </blockquote> <p>but this is repetitive/verbose and feels unpythonic. I feel like I'm missing a battery here.</p> <p>The problem with mentioned solutions, is that they work with slices/arrays, bzt not general cases where the indices are passed to functions, where the exact values often matter.</p> <p>I've created a helper function to avoid repetition. But I wish there was a range variant that explicitly handles remainder cases as part of the standard library</p>
<python><iterator><buffer>
2024-01-28 18:40:03
2
707
mo FEAR
77,895,908
595,305
Define a class in Python and import into Rust module, but it "cannot be converted"
<p>This may be documented somewhere in the PyO3 documents, but I couldn't find it.</p> <p>I make a class like this in Python:</p> <pre><code>class ProgressData(): def __init__(self): self.start_time_ms = 0 self.last_progress_percent = 0.0 </code></pre> <p>... I instantiate it and pass it to a Rust module with the following signature:</p> <pre><code>#[pyfunction] fn index_documents(py: Python, progress_data: ProgressData) -&gt; PyResult&lt;()&gt; { ... </code></pre> <p>I've defined an &quot;equivalent&quot; <code>struct</code> in the Rust module:</p> <pre><code>#[derive(Clone)] #[pyclass] pub struct ProgressData { start_time_ms: usize, last_progress_percent: f64, } </code></pre> <p>and I've added this as one of the exportable classes:</p> <pre><code>#[pymodule] #[pyo3(name = &quot;populate_index&quot;)] fn my_extension(_py: Python&lt;'_&gt;, m: &amp;PyModule) -&gt; PyResult&lt;()&gt; { m.add_function(wrap_pyfunction!(index_documents, m)?)?; m.add_class::&lt;HandlingFramework&gt;()?; m.add_class::&lt;TextDocument&gt;()?; m.add_class::&lt;ProgressData&gt;()?; Ok(()) } </code></pre> <p>... but it doesn't work: I get</p> <blockquote> <p>argument 'progress_data': 'ProgressData' object cannot be converted to 'ProgressData'</p> </blockquote> <p>No further explanation.</p> <p>The above classes &quot;HandlingFramework&quot; and &quot;TextDocument&quot; are created in Rust and exported to Python where their properties and methods can be used. Is there a way to do it the other way round (make in Python, use in Rust)? Or maybe the only way is to make a factory function in the Rust module to deliver a <code>ProgressData</code> object to the Python, put data in this, and then re-export it to another function in the Rust module?</p> <p>I'd also like to define a couple of methods for <code>ProgressData</code>: again, any way to define this in Python, but have it accepted by Rust as a shoe-in for an equivalent Rust <code>struct</code> + <code>impl</code>?</p>
<python><class><rust><data-conversion><pyo3>
2024-01-28 18:18:49
1
16,076
mike rodent
77,895,517
11,536,058
How to url encode special character ~ in Python?
<p>I am trying to do an url encoding on the following string the following string 'https://bla/ble:bli~' using the urllib standard library on Python 3.10. Problem is that the ~ character is not encoded to %7E as per <a href="https://www.w3schools.com/tags/ref_urlencode.ASP" rel="nofollow noreferrer">url encoding</a>. Is there a way around it?</p> <p>I have tried</p> <pre><code>from urllib.parse import quote input_string = 'https://bla/ble:bli~' url_encoded = quote(input_string) </code></pre> <p>url_encoded takes the value 'https%3A//bla/ble%3Abli~'</p> <p>I have also tried:</p> <pre><code>url_encoded = quote(input_string,safe='~') </code></pre> <p>In this case the url_encoded takes the value 'https%3A%2F%2Fbla%2Fble%3Abli~'</p>
<python><python-3.x><urllib>
2024-01-28 16:27:54
2
873
Stamatis Tiniakos
77,895,366
10,574,250
Azure data tables python SDK does not work on read. "NotImplemented"
<p>I have a Table within Azure data storage that I am trying to read from. I am able to successfully write to this table but unable to read. I have followed the docs here: <a href="https://learn.microsoft.com/en-us/python/api/overview/azure/data-tables-readme?view=azure-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/overview/azure/data-tables-readme?view=azure-python</a></p> <p>I also have a contributor role in IAM which should allow me to read and write.</p> <p>My code is here:</p> <pre><code>credential = AzureNamedKeyCredential(os.environ['storage'], os.environ['azkey']) service = TableServiceClient(endpoint=os.environ['azendpoint'], credential=credential) az_table= service.get_table_client(&quot;az_table&quot;) entities = df.apply(lambda x: json.loads(x.to_json()), axis=1) [az_table.create_entity(entity) for entity in entities] </code></pre> <p>This now has entities in and I have checked this in the storage explorer. Here is the read that fails:</p> <pre><code>table_entities = az_table.query_entities(query_filter=&quot;PartitionKey eq 'somevalue'&quot;) for entity in table_entities : print(entity) for key in entity.keys(): print(f&quot;Key: {key}, Value: {entity[key]}&quot;) </code></pre> <p>I now get this error:</p> <pre><code>azure.core.exceptions.HttpResponseError: The requested operation is not implemented on the specified resource. RequestId:893f445b-6002-0031-6fff-51613d000000 Time:2024-01-28T15:31:45.5859059Z ErrorCode:NotImplemented Content: {&quot;odata.error&quot;:{&quot;code&quot;:&quot;NotImplemented&quot;,&quot;message&quot;:{&quot;lang&quot;:&quot;en-US&quot;,&quot;value&quot;:&quot;The requested operation is not implemented on the specified resource.\nRequestId:893f445b-6002-0031-6fff-51613d000000\nTime:2024-01-28T15:31:45.5859059Z&quot;}}} </code></pre> <p>I don't understand what I'm doing here and any help would be appreciated.</p>
<python><azure><azure-table-storage><azure-tablequery><azure-tableclient>
2024-01-28 15:42:36
1
1,555
geds133
77,895,358
9,669,142
Python geneticalgorithm2 bounds with stepsize
<p>I'm using the geneticalgorithm2 module for minimization. I use mixed variables and my code is similar to the example given by them:</p> <pre><code>import numpy as np from geneticalgorithm2 import geneticalgorithm2 as ga def f(X): return np.sum(X) varbound = [[0.5,1.5],[0,100],[0,1]] vartype = ('real', 'int', 'int') model = ga(function=f, dimension=3, variable_type=vartype, variable_boundaries=varbound) model.run() </code></pre> <p>varbound shows the bounds of the variables and the tool will minimize between those values as expected. My question is: is it possible to choose the stepsize?</p> <p>For example: the code will look between 0 and 100 for integers, but what can I do to make it look for tens only? So 0, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100. And what about a stepsize of 0.001 for the first variable (between 0.5 and 1.5).</p>
<python><python-3.x><genetic-algorithm>
2024-01-28 15:40:04
0
567
Fish1996
77,895,351
14,721,356
How can I fix the metaclass conflict in django-rest-framework Serializer
<p>I've written the following code in django and DRF:</p> <pre class="lang-py prettyprint-override"><code>class ServiceModelMetaClass(serializers.SerializerMetaclass, type): SERVICE_MODELS = { &quot;email&quot;: EmailSubscriber, &quot;push&quot;: PushSubscriber } def __call__(cls, *args, **kwargs): service = kwargs.get(&quot;data&quot;, {}).get(&quot;service&quot;) cls.Meta.subscriber_model = cls.SERVICE_MODELS.get(service) return super().__call__(*args, **kwargs) class InterListsActionsSerializer(serializers.Serializer, metaclass=ServiceModelMetaClass): source_list_id = serializers.IntegerField() target_list_id = serializers.IntegerField() subscriber_ids = serializers.IntegerField(many=True, required=False) account_id = serializers.CharField() service = serializers.ChoiceField(choices=(&quot;email&quot;, &quot;push&quot;)) class Meta: subscriber_model: Model = None def move(self): model = self.Meta.subscriber_model # Rest of the method code. </code></pre> <p>The purpose of this code is that this serializer might need doing operation on different models based on the service that the user wants to use. So I wrote this metaclass to prevent writing duplicate code and simply change the <code>subscriber_model</code> based on user's needs.<br /> Now as you might know, <code>serializers.Serializer</code> uses a metaclass by its own, <code>serializers.SerializerMetaclass</code>. If I don't use this metaclass for creating my metaclass, it results in the following error:<br /> <code>TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases.</code><br /> But when I try making my <code>ServiceModelMetaClass</code> metaclass to inherit from <code>serializers.SerializerMetaclass</code> it gives me this error:</p> <pre><code> File &quot;/project-root/segment_management/serializers/subscriber_list.py&quot;, line 33, in &lt;module&gt; class InterListsActionsSerializer(serializers.Serializer, metaclass=ServiceModelMetaClass): File &quot;/project-root/segment_management/serializers/subscriber_list.py&quot;, line 36, in InterListsActionsSerializer subscriber_ids = serializers.IntegerField(many=True, required=False) File &quot;/project-root/.venv/lib/python3.10/site-packages/rest_framework/fields.py&quot;, line 894, in __init__ super().__init__(**kwargs) TypeError: Field.__init__() got an unexpected keyword argument 'many' </code></pre> <p>What should I do to fix this problem or maybe a better alternative approach that keeps the code clean without using metaclass? Thanks in advance.</p>
<python><django><django-rest-framework>
2024-01-28 15:38:31
1
868
Ardalan
77,895,327
12,285,101
remove duplicates from sorted array - trying to understand an answer (leetcode)
<p>I'm trying to answer the Leetcode question &quot;<a href="https://leetcode.com/problems/remove-duplicates-from-sorted-array/?envType=study-plan-v2&amp;envId=top-interview-150" rel="nofollow noreferrer">Remove duplicates from sorted array</a>&quot; , and I am not sure I understand the logic. This is the challenge :</p> <blockquote> <p>Given an integer array nums sorted in non-decreasing order, remove the duplicates in-place such that each unique element appears only once. The relative order of the elements should be kept the same. Then return the number of unique elements in nums.</p> <p>Consider the number of unique elements of nums to be k, to get accepted, you need to do the following things:</p> <p>Change the array nums such that the first k elements of nums contain the unique elements in the order they were present in nums initially. The remaining elements of nums are not important as well as the size of nums. Return k.</p> </blockquote> <p>And this is one of the answers:</p> <pre><code>class Solution: def removeDuplicates(self, nums: List[int]) -&gt; int: i,j=0,1 while i&lt;=j and j&lt;len(nums): if nums[i]==nums[j]: j+=1 else: nums[i+1]=nums[j] i+=1 return i+1 </code></pre> <p>What I don't manage to get is this :</p> <pre><code> if nums[i]==nums[j]: j+=1 </code></pre> <p><strong>imagine we have this list of nums [1,1,2,3,5,6,7,7,8,7],</strong></p> <p><strong>why if value at index i == value at index j we only add 1 to j? what I understand from this script is that it doesn't remove any duplicate (1,1), it just changes j to be 2 , and i is still smaller than j (while loop), what am I missing here ?</strong></p> <p>I was trying to do something like this but it fails due to index error (I understand why but haven't solved it yet) :</p> <pre><code>class Solution: def removeDuplicate(self, nums: list, ) -&gt; int: k=1 for i in range(1,len(nums)): print(i,i-1) print(nums) if nums[i] == nums[i-1]: del nums[i] return k solution_instance = Solution() nums = [1,1,2,3,5,6,7,7,8,7] solution_instance.removeDuplicate(nums) </code></pre>
<python><list><duplicates>
2024-01-28 15:32:15
2
1,592
Reut
77,895,261
13,987,643
How to remove unwanted gap between bars in a grouped bar plot
<p>I'm trying to a create a grouped bar plot using Matplotlib. This is my code to do the same:</p> <pre><code>label_counts = df.groupby(['labels', 'preds']).size().unstack(fill_value=0) # Calculate percentages label_percentages = label_counts.div(label_counts.sum(axis=1), axis=0) * 100 label_percentages = label_percentages.loc[(label_percentages != 0).any(axis=1)] # Plotting plt.figure(figsize=(10, 6), dpi=400) colors = ['coral', 'wheat', 'y'] label_percentages.plot(kind='bar', width=0.6, figsize=(10, 6), edgecolor='black',color=colors, align='center') plt.title('Distribution of Incorrect Predictions', fontweight='bold') plt.xlabel('Correct Label', fontweight='bold') plt.ylabel('% of incorrect predictions', fontweight='bold') plt.legend(title='Predicted Class', title_fontsize='medium') plt.show() </code></pre> <p>This gives me a graph like this:</p> <p><a href="https://i.sstatic.net/zzAnA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zzAnA.jpg" alt="enter image description here" /></a></p> <p>where there is a clear gap in the center between two bars of the 'Neutral' category.</p> <p>I have tried to remove it by ignoring groups with zero values, and aligning the plot to center, etc. but nothing seems to get rid of it. How do I remove that space between the bars?</p>
<python><pandas><matplotlib>
2024-01-28 15:09:44
0
569
AnonymousMe
77,895,157
13,000,229
How can I solve "UserWarning: set_ticklabels() should only be used with a fixed number of ticks" in Seaborn?
<p>I faced a warning message when trying to rotate xticks of seaborn.lineplot.</p> <p>At first, I tried to run <code>set_ticks</code>, but <code>ax</code> doesn't seem to have this function. I also checked <code>FixedLocator</code>, but haven't figured out how to use it.</p> <p>Can anyone teach me how to remove this warning message?</p> <h3>Environment</h3> <ul> <li>Python 3.12.1</li> <li>matplotlib 3.8.2</li> <li>pandas 2.2.0</li> <li>seaborn 0.13.2</li> </ul> <h3>Sample Data</h3> <p><a href="https://www.kaggle.com/datasets/sumanthvrao/daily-climate-time-series-data?select=DailyDelhiClimateTest.csv" rel="noreferrer">https://www.kaggle.com/datasets/sumanthvrao/daily-climate-time-series-data?select=DailyDelhiClimateTest.csv</a></p> <h3>Code</h3> <pre><code>import matplotlib.pyplot as plt import pandas as pd import seaborn as sns data: pd.DataFrame = pd.read_csv(&quot;DailyDelhiClimateTest.csv&quot;, parse_dates=[&quot;date&quot;], index_col=&quot;date&quot;) ax: plt.Axes = sns.lineplot(data=data) ax.set_xticklabels(ax.get_xticklabels(), rotation=45) </code></pre> <h3>Warning Message</h3> <pre><code>UserWarning: set_ticklabels() should only be used with a fixed number of ticks, i.e. after set_ticks() or using a FixedLocator. ax.set_xticklabels(ax.get_xticklabels(), rotation=45) </code></pre>
<python><pandas><matplotlib><seaborn><xticks>
2024-01-28 14:38:30
0
1,883
dmjy
77,895,092
20,771,478
Python: Use Graph API to send mails with Outlook - Get token with microsoft username and password
<p>I am trying to use the <a href="https://github.com/vgrem/Office365-REST-Python-Client/tree/master?tab=readme-ov-file#Working-with-Outlook-API" rel="nofollow noreferrer">office365</a> lybrary for Python to send mails with Outlook. When I was working with SharePoint the authentication was very simple, as in the example below.</p> <pre><code>from office365.sharepoint.client_context import ClientContext from office365.runtime.auth.user_credential import UserCredential sp_credentials = UserCredential(SHAREPOINT_USER,SHAREPOINT_PASSWORD) sp_con = ClientContext(SHAREPOINT_SITE).with_credentials(sp_credentials) files = sp_con.web.get_folder_by_server_relative_url(SHAREPOINT_PATH).files sp_con.load(files).execute_query() </code></pre> <p>--&gt; With the above I was able to see what files are in a SharePoint folder. But with the same connection I was able to delete, move or add files.</p> <p>I just used the user credentials of my microsoft account and things worked. However, for Outlook the office365 library uses MSAL for authentification. The example given on the GitHub page is below.</p> <pre><code>import msal from office365.graph_client import GraphClient def acquire_token(): &quot;&quot;&quot; Acquire token via MSAL &quot;&quot;&quot; authority_url = 'https://login.microsoftonline.com/{tenant_id_or_name}' app = msal.ConfidentialClientApplication( authority=authority_url, client_id='{client_id}', client_credential='{client_secret}' ) token = app.acquire_token_for_client(scopes=[&quot;https://graph.microsoft.com/.default&quot;]) return token client = GraphClient(acquire_token) client.me.send_mail( subject=&quot;Meet for lunch?&quot;, body=&quot;The new cafeteria is open.&quot;, to_recipients=[&quot;fannyd@contoso.onmicrosoft.com&quot;] ).execute_query() </code></pre> <p>This example forces me to register an application with the Azure Active Directory because I need the Client ID of that application to create a <code>msal.ConfidentialClientApplication</code>. I don't want to do that.</p> <p>Instead I look for an authentification method similar to the one I use for SharePoint.</p> <p>Do you know of a way to get a token for <code>GraphClient()</code> that only requires me to provide username and password?</p> <p>What I already tried:</p> <p>The <code>msal.ConfidentialClientApplication</code> class has a function called <a href="https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/dev/msal/application.py#L1615" rel="nofollow noreferrer"><code>acquire_token_by_username_password</code></a>. I replaced the <code>acquire_token_for_client</code> function with it but still face the problem that the <code>ConfidentialClientApplication</code> requires a Client ID.</p>
<python><microsoft-graph-api><office365><microsoft-graph-mail><msal>
2024-01-28 14:17:41
1
458
Merlin Nestler
77,895,088
12,894,926
Asyncio with boto and for range?
<p>I'm trying first time the <a href="https://github.com/aio-libs/aiobotocore" rel="nofollow noreferrer">asyncio botocore implementation</a>. However, I'm quite sure I'm not getting the expected asynchronicity, likely due to my own lack of experience with it. :)</p> <p>The goal of the bellow method is to <strong>duplicate all files in a bucket while suffixing keys with UUIDs</strong>.</p> <pre class="lang-python prettyprint-override"><code>async def async_duplicate_files_in_bucket(bucket, how_many_times=1): session = get_session() async with session.create_client('s3') as s3_client: s3_client: S3Client paginator = s3_client.get_paginator('list_objects') async for result in paginator.paginate(Bucket=bucket): for file in result[&quot;Contents&quot;]: # it already includes the prefix in the same original_file_name: str = file[&quot;Key&quot;] logger.debug(f&quot;Duplicating file: {original_file_name} &quot;) for _ in range(how_many_times): new_file_name = original_file_name + &quot;_&quot; + uuid.uuid4().__str__() copy_source = { 'Bucket': bucket, 'Key': original_file_name } await s3_client.copy_object(Bucket=bucket, CopySource=copy_source, Key=new_file_name) print(&quot;-&quot;, end=&quot;&quot;) </code></pre> <p>When looking at the terminal:</p> <ol> <li>I see <code>Duplicating file: file_1</code> not moving to the next file until it finishes duplicating <code>file_1</code>. Just then it I get a new log line with <code>Duplicating file: file_2</code>.</li> <li><code>print('-', end=&quot;&quot;)</code> is not printing</li> </ol> <p>Given my little experience with <code>asyncio</code>, I hypothesize that the <code>for _ in range(how_many_times)</code> is blocking the event loop.</p> <p>Appreciate directions to better understand how to make use of <code>asyncio</code> in Python as well as to achieve the goal of the function.</p> <p>Thanks.</p>
<python><amazon-web-services><asynchronous><python-asyncio><boto>
2024-01-28 14:15:15
1
1,579
YFl
77,894,773
7,959,614
Get numpy array that consists of ones and zeros for x combinations
<p>I want to create a <code>numpy.array</code> that consists of unique series of ones and zeros. A row needs to have a length of <code>2*x</code></p> <p>When <code>x=2</code> the output is expected to look as follows:</p> <pre><code>[[1 0 1 0] [1 0 0 1] [0 1 1 0] [0 1 0 1]] </code></pre> <p>When <code>x=3</code>:</p> <pre><code>[[1 0 1 0 1 0] [1 0 0 1 1 0] [0 1 1 0 1 0] [0 1 0 1 1 0] [1 0 1 0 0 1] [1 0 0 1 0 1] [0 1 1 0 0 1] [0 1 0 1 0 1]] </code></pre> <p>I tried <code>np.meshgrid</code> and <code>itertools.combinations</code> without any success sofar.</p> <p>Please advice</p>
<python><numpy>
2024-01-28 12:30:24
2
406
HJA24
77,894,722
16,259,344
CID encoding of font
<p>I'am trying to extrat text from a pdf with python. None of the packages I tried could read it (PyPDF2,pdfminer,fitz etc.), but some of them could return me the cid encodings. (eg. (cid:3) ).</p> <p>Now I read the file the &quot;brute force&quot; way, meaning I managed to found out the cid decoding from some examples. (That notebook can be found <a href="https://www.kaggle.com/code/franciskarajki/cid-decoder" rel="nofollow noreferrer">here</a> on kaggle.)</p> <p>I searched online for the elegant way, and found a lot of mentioning of <em>Registry-Ordering-Supplement</em> and how you should find the encodings by knowing the <em>font</em>.</p> <p>Altough fitz can not interpret the text, it says the font is <em>CourierNewPSMT</em>. Now even with this information, I could not find the ROS info/ CID encoding/ CID mapping / CID collection.</p> <p>Can someone tell me, how to interpret the cid encoded text, knowing the font?</p>
<python><pdf><fonts><pdf-extraction>
2024-01-28 12:11:52
2
505
Franciska
77,894,331
15,222,211
How can I disable a pylint message via pyproject.toml for a specific file?
<p>I'm encountering the <code>too-many-instance-attributes</code> pylint error in multiple files and I want to disable this message only for one file <code>my_project/runner/runner.py</code>. Is it possible to do this in the poetry <code>pyproject.toml</code> file?</p> <pre><code>pylint my_project ************* Module my_project.models_vi.config.hostname my_project\models_vi\config\hostname.py:67:0: R0902: Too many instance attributes (9/7) (too-many-instance-attributes) ************* Module my_project.models_vi.config.port_descr my_project\models_vi\config\port_descr.py:92:0: R0902: Too many instance attributes (11/7) (too-many-instance-attributes) ************* Module my_project.runner.runner my_project\runner\runner.py:22:0: R0902: Too many instance attributes (9/7) (too-many-instance-attributes) </code></pre>
<python><pylint><pyproject.toml>
2024-01-28 09:54:37
1
814
pyjedy
77,894,086
15,144,596
How to use asyncio.as_completed() with a variable size task list
<h1>Challenge</h1> <p>Suppose I have a list L=30 tasks. Each task takes a variable time to complete and has a 10% chance of failing. If a task T fails, add the task back to a queue and prioritize it's completion. Also I can only process at most N=8 number of tasks.</p> <p>I want to use Python's <code>asyncio.as_completed()</code> function to yield the result of a task as soon as it's completed. Once done, add one more task from the task_list L (to make sure we are constantly processing 8 tasks at any given instance).</p> <p>How can I do this in Python?</p> <h1>Current Scenario</h1> <p>In my current scenario, I have a queue of L=1000 tasks and at a time I can solve N=20 tasks like this:</p> <pre class="lang-py prettyprint-override"><code># This approach creates a task of all 1000 items in the queue L and then processes them. while not download_complete(): task_list = [asyncio.create_task(work()) for work in L] for task in asyncio.as_completed(task_list): result = await task yield result </code></pre> <h1>Problem</h1> <p>The problem however with my current approach is when a task fails, I add it back to the task queue L. Now asyncio.as_completed() will first finish processing all the tasks which are currently already in the task_list before creating a task for all items which have failed during the previous iteration. It's because of this, the failed tasks don't get a priority over the tasks which are already in the task_list.</p> <h1>Question</h1> <p>So how can I make it so that I can constantly keep processing N number of tasks at a time and yeild the result of the next earliest completion. Once a task is done, fetch one more task from a queue of 1000 tasks and if a task fails, add it back to asyncio.as_completed and prioritize it completion?</p>
<python><python-3.x><python-asyncio>
2024-01-28 08:31:50
2
549
Shakir
77,894,069
8,414,030
Run python and get result as string in browser
<p>I want to run Python code from a text-area in HTML and save the output in a div. I am using <code>pyodide.js</code> <a href="https://cdn.jsdelivr.net/pyodide/v0.25.0/full/pyodide.js" rel="nofollow noreferrer">https://cdn.jsdelivr.net/pyodide/v0.25.0/full/pyodide.js</a> to run Python. I would also like to use a worker which does not affect the main JS thread.</p> <p>I have a working code that logs the output to the JavaScript console in the browser, and I can't figure out a way to get the result in a string.</p> <p>Can you help me get the code output saved to a variable?</p> <p>The main function <code>runPythonCode</code> outputs the result to the console instead of returning an output to the variable <code>result</code> <code>(result=undefined)</code> in <code>main-worker.js</code>.</p> <p>Find the detailed code below.</p> <pre class="lang-js prettyprint-override"><code>//test-code import { runPythonCode } from &quot;./main-worker.js&quot;; $(element).find(&quot;.run-code:first&quot;).click( async function() { let current_code = &quot;print('Hello, World!')&quot; let result = await runPythonCode(current_code) console.log(&quot;Run clicked&quot;, result) } ) </code></pre> <p>// main-worker.js</p> <pre class="lang-js prettyprint-override"><code>// main-worker.js import { asyncRun } from &quot;./py-worker.js&quot;; console.log(&quot;Python main worker started.&quot;) async function runPythonCode(pythonCode, context) { try { const { result, error } = await asyncRun(pythonCode, context); if (result) { return result.toString() } else if (error) { return error } } catch (e) { console.error(&quot;Error communicating with the worker:&quot;, e); } } export {runPythonCode} </code></pre> <p>// py-worker.js</p> <pre><code>// py-worker.js const pyodideWorker = new Worker(&quot;./worker.js&quot;); const callbacks = {}; pyodideWorker.onmessage = (event) =&gt; { const { id, result, error } = event.data; const onSuccess = callbacks[id]; delete callbacks[id]; if (onSuccess) { if (result) { onSuccess({ result }); } else if (error) { onSuccess({ error }); } } }; const asyncRun = (pythonCode, context = {}) =&gt; { const id = Date.now().toString(); // Unique identifier for each run return new Promise((resolve) =&gt; { callbacks[id] = resolve; pyodideWorker.postMessage({ id, pythonCode, context }); }); }; export { asyncRun }; </code></pre> <p>// worker.js</p> <pre class="lang-js prettyprint-override"><code>// worker.js importScripts(&quot;https://cdn.jsdelivr.net/pyodide/v0.25.0/full/pyodide.js&quot;); async function loadPyodideAndPackages() { self.pyodide = await loadPyodide({ &quot;indexURL&quot; : &quot;https://cdn.jsdelivr.net/pyodide/v0.25.0/full/&quot;, }); // Add any necessary packages here // await self.pyodide.loadPackage([&quot;numpy&quot;]); } let pyodideReadyPromise = loadPyodideAndPackages(); self.onmessage = async (event) =&gt; { await pyodideReadyPromise; const { id, pythonCode, context } = event.data; // Copy the context variables to the worker's scope for (const key in context) { self[key] = context[key]; } try { // Load packages and run Python code await self.pyodide.runPythonAsync(pythonCode); postMessage({ id, result: &quot;Execution complete&quot; }); } catch (error) { postMessage({ id, error: error.message }); } }; </code></pre>
<javascript><python><pyscript><pyodide>
2024-01-28 08:23:06
0
791
inquilabee
77,894,056
4,451,521
matplotlib.image.imread(): Some images are integers while some others are floats
<p>I am trying a script that works with images in python. The first image was working and the second one did not work at all.</p> <p>The script was</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np image = mpimg.imread('test.jpg') # Grab the x and y size and make a copy of the image ysize = image.shape[0] xsize = image.shape[1] color_select = np.copy(image) # Define color selection criteria ###### MODIFY THESE VARIABLES TO MAKE YOUR COLOR SELECTION red_threshold = 200 green_threshold = 200 blue_threshold = 200 ###### rgb_threshold = [red_threshold, green_threshold, blue_threshold] # Do a boolean or with the &quot;|&quot; character to identify # pixels below the thresholds thresholds = (image[:,:,0] &lt; rgb_threshold[0]) \ | (image[:,:,1] &lt; rgb_threshold[1]) \ | (image[:,:,2] &lt; rgb_threshold[2]) color_select[thresholds] = [0,0,0] plt.imshow(color_select) </code></pre> <p>Then I discovered the reason why the second image did not work</p> <pre><code>print(&quot;Red channel - Min:&quot;, np.min(image[:,:,0]), &quot;Max:&quot;, np.max(image[:,:,0])) print(&quot;Green channel - Min:&quot;, np.min(image[:,:,1]), &quot;Max:&quot;, np.max(image[:,:,1])) print(&quot;Blue channel - Min:&quot;, np.min(image[:,:,2]), &quot;Max:&quot;, np.max(image[:,:,2])) </code></pre> <p>For the first image</p> <pre><code>Red channel - Min: 0 Max: 255 Green channel - Min: 10 Max: 255 Blue channel - Min: 0 Max: 255 </code></pre> <p>For the second image that did not work</p> <pre><code>Red channel - Min: 0.0 Max: 1.0 Green channel - Min: 0.0 Max: 1.0 Blue channel - Min: 0.0 Max: 1.0 </code></pre> <p>Can someone explain to me the basics of why some images use 0-255 and other 0.0 to 1.0?</p>
<python><image><matplotlib>
2024-01-28 08:12:10
1
10,576
KansaiRobot
77,894,033
4,590,499
How to publish flask web app which runs C++ subprocess
<p><strong>Current Development Build Architecture</strong>:</p> <ul> <li>Single-Page Flask Web App: c:/code/flask_app/app.py</li> <li>C++ Program: c:/code/flask_app/main.exe</li> </ul> <p>I have a fully functional Flask web application that involves a straightforward HTTP POST request from a client to a server (<code>main.js -&gt; app.py</code>). Additionally, it utilizes the Socket.IO WebSocket library to trigger the execution of a subprocess (<code>main.exe</code>), located in the root directory of the Flask app (<code>app.py</code>), based on user-generated data.</p> <p>During development, this setup worked smoothly without any issues. However, I have encountered several challenges when attempting to deploy the application using Gunicorn on a local Ubuntu/Linux server:</p> <ol> <li><strong>Compatibility of main.exe with Linux:</strong></li> </ol> <p>I have observed that the Linux server fails to execute main.exe. I suspect this issue might require either rebuilding main.exe to make it compatible with the Linux operating system or identifying a suitable alternative solution.</p> <ol start="2"> <li><strong>Socket.IO and Gunicorn Interaction:</strong></li> </ol> <p>Upon deploying the application using Gunicorn, I encountered recurring Socket.IO 400 (BAD REQUEST) errors. I am uncertain whether these errors are caused by the nature of main.exe or if they are a result of Socket.IO's compatibility with Gunicorn.</p> <ol start="3"> <li><strong>Performance Concerns with Gunicorn:</strong></li> </ol> <p>Additionally, I have noticed that the application experiences severe performance degradation when launched via Gunicorn. I am curious to know if the Socket.IO errors contribute to this laggy behavior or if there might be other factors at play.</p> <p>Given the above challenges, I would appreciate any guidance or insights on the best and most straightforward approach to publish my web application, which I have developed over the past five weeks. I am open to suggestions on addressing the compatibility issues with main.exe, resolving Socket.IO-related concerns, or optimizing the application's performance on Gunicorn.</p>
<python><c++><ubuntu><flask><gunicorn>
2024-01-28 08:01:34
1
964
Daqs
77,893,953
23,219,369
Merge dxf files from multiple pages into one page
<p>I tried to merge multiple dxf files into one dxf file.</p> <p>I've used Importer of <a href="https://ezdxf.readthedocs.io/en/stable/addons/importer.html" rel="noreferrer">ezdxf</a> (Python package) but the result was that dxf files were accumulated in layers on one page.</p> <pre><code>def merge(source, target): try: source = ezdxf.readfile(source) except ezdxf.DXFError as e: print(&quot;DXF file not found&quot;) return False try: importer = Importer(source, target) importer.import_modelspace() importer.finalize() except Exception as e: print(&quot;Error: &quot;, e.__class__) </code></pre> <p>This is the input dxf files: <a href="https://i.sstatic.net/6bTIp.png" rel="noreferrer"><img src="https://i.sstatic.net/6bTIp.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/nBFsS.png" rel="noreferrer"><img src="https://i.sstatic.net/nBFsS.png" alt="enter image description here" /></a></p> <p>This is the current output dxf file: <a href="https://i.sstatic.net/dRYTY.png" rel="noreferrer"><img src="https://i.sstatic.net/dRYTY.png" alt="enter image description here" /></a></p> <p>The result I want is continuous dxf files on one page like following: <a href="https://i.sstatic.net/QdDQp.png" rel="noreferrer"><img src="https://i.sstatic.net/QdDQp.png" alt="enter image description here" /></a></p> <p>How can I achieve this?</p>
<python><pdf><merge><dxf><ezdxf>
2024-01-28 07:20:46
1
834
Temunel
77,893,946
5,049,813
Pylance thinks that an index into an H5 file does not create a group
<p>The following minimal example works fine:</p> <pre class="lang-py prettyprint-override"><code>import h5py f = h5py.File(&quot;mytestfile.hdf5&quot;, &quot;w&quot;) f.create_group(&quot;hello&quot;) # Pylance error here f[&quot;hello&quot;].create_dataset(&quot;hello&quot;, data=1) g = f.create_group(&quot;bye&quot;) g.create_dataset(&quot;bye&quot;, data=2) f.close() </code></pre> <p>However, on the indicated, line, Pylance throws an error, saying <code>Cannot access member &quot;create_dataset&quot; for type &quot;Dataset&quot; Member &quot;create_dataset&quot; is unknown</code> (And the same error again on the same line for &quot;Datatype&quot; instead of &quot;Dataset&quot;.) Note that this error does not occur on the later line that runs the same <code>create_dataset</code> function.</p> <p>Even adding an <code>assert isinstance(f[&quot;hello&quot;], h5py.Group)</code> before the erroring line does not solve the issue.</p> <p>Why does Pylance think that <code>f[&quot;hello&quot;]</code> is a <code>Datatype</code> or <code>Dataset</code> and not a <code>Group</code>?</p> <p>I've also looked at <a href="https://github.com/h5py/h5py" rel="nofollow noreferrer">the codebase</a> to see if I can find where the indexing magic method happens and can't easily find it to see if it's typed correctly and/or fix it.</p> <p>How can I get Pylance to understand that <code>f[&quot;hello&quot;]</code> is a group that can create a dataset?</p>
<python><python-typing><h5py><pylance>
2024-01-28 07:18:18
1
5,220
Pro Q
77,893,912
1,568,590
Python & Beautiful Soup/Requests: request return a byte string - how to decode?
<p><strong>Short version:</strong> Scraping two very similar web pages, but the content of one is returned as a byte stream.</p> <p><strong>Long version:</strong> I am just dipping my toes into Beautiful Soup. And I am testing on two web pages; both being lists of conference papers. One list is &quot;invited papers&quot;; the other list is of &quot;regular papers&quot;. They both look the same (aside from the content): the papers are given in a table on each page. And so on the first page I can do:</p> <pre><code>inv_papers = requests.get('https://my_conference.org/invited.html') soup = BeautifulSoup(inv_papers.text, 'html.parser') datas = soup.find_all(&quot;td&quot;) for all data in datas: text = ' '.join(data.text.split()) print(text) </code></pre> <p>There's a lot of extra stuff in there, but I can just delete that by hand (or learn Beautiful Soup to extract from only part of a web page).</p> <p>However, something in the regular papers page trips up requests and returns a string of bytes:</p> <pre><code>reg_papers = requests.get('https://my_conference.org/regular.html') soup = BeautifulSoup(reg_papers.text, 'html.parser') </code></pre> <p>I've found that both <code>inv_papers.header</code> and <code>reg_papers.header</code> are identical except for the date (one day apart) and the size. But the content is different (only the first few lines are shown):</p> <pre><code>inv_papers.content b'&lt;html xmlns:v=&quot;urn:schemas-microsoft-com:vml&quot;\r\nxmlns:o=&quot;urn:schemas-microsoft-com:office:office&quot;\r\nxmlns:w=&quot;urn:schemas-microsoft-com:office:word&quot;\r\nxmlns:dt=&quot;uuid:C2F41010-65B3-11d1-A29F-00AA00C14882&quot;\r\nxmlns:m=&quot;http://schema reg_papers.content b'\xff\xfe&lt;\x00h\x00t\x00m\x00l\x00 \x00x\x00m\x00l\x00n\x00s\x00:\x00v\x00=\x00&quot;\x00u\x00r\x00n\x00:\x00s\x00c\x00h\x00e\x00m\x00a\x00s\x00-\x00m\x00i\x00c\x00r\x00o\x00s\x00o\x00f\x00t\x00-\x00c\x00o\x00m\x00:\x00v\x00m\x00l\x00&quot;\x </code></pre> <p>As you see, <code>inv_papers</code> returns its contents in ascii text; <code>reg_papers</code> as a raw byte stream. I've tried:</p> <pre><code>reg_papers.content.decode('utf-8') </code></pre> <p>to receive the error:</p> <pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte </code></pre> <p>I've also had a look at the page source of each page, and I can't see what might might be tripping the request up like this.</p> <p>How do I get the web page material in useful text form?</p> <p><em>Addendum:</em> If I save the page source of the second page to a file, I can open it up as a local file and parse it fine with Beautiful Soup.</p>
<python><beautifulsoup><encoding><python-requests>
2024-01-28 07:03:57
0
1,414
Alasdair
77,893,827
11,280,068
How do you develop the HTML portion of a Jinja2 template, after you've jinja-ified the file?
<p>I have an HTML email template that I've created and converted to a jinja template. This mean that I've:</p> <ol> <li>Changed it to a <code>.j2</code> file extension</li> <li>Added all the necessary jinja expressions like for-loops, if-statements, and expressions, using <code>{{ ... }}</code> and <code>{% ... %}</code></li> </ol> <p>The question that I see no obvious answer to is:</p> <h3>How do I continue to work on the HTML portion of a Jinja template, after I've already implemented all the Jinja2-specific syntax?</h3> <p>If I were to try and edit it as a regular HTML file, all the curly braces would show up and I don't even know how it would interpret the control-structure elements like for-loops and if-statements.</p> <p>Let me know if I need to share more context!</p>
<python><python-3.x><templates><jinja2><server-side-rendering>
2024-01-28 06:23:25
0
1,194
NFeruch - FreePalestine
77,893,774
5,976,033
Python Azure Functions Identity-Based Connection for Trigger Bindings
<p>I can't seem to find clear documentation on how to set a System-assigned Managed Identity-based connection for my Queue-triggered Azure Function.</p> <p><strong>Steps taken</strong>:</p> <ol> <li>Enabled System-assigned Managed Identity (SAMI) for the Azure Function</li> <li>On the Queue Storage Account, granted the SAMI <code>Storage Queue Data Reader</code> and <code>Storage Queue Data Message Processor</code> Roles per <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference?tabs=queue&amp;pivots=programming-language-csharp#grant-permission-to-the-identity" rel="nofollow noreferrer">this doc</a>.</li> <li>Ensured the Extension Version is <code>5.0.0</code> or later</li> </ol> <pre><code>&quot;extensionBundle&quot;: { &quot;id&quot;: &quot;Microsoft.Azure.Functions.ExtensionBundle&quot;, &quot;version&quot;: &quot;[4.*, 5.0.0)&quot; } </code></pre> <ol start="4"> <li>Added a <code>connection</code> value to the Function's <code>function.json</code> file:</li> </ol> <pre><code>{ &quot;scriptFile&quot;: &quot;__init__.py&quot;, &quot;bindings&quot;: [ { &quot;name&quot;: &quot;msg&quot;, &quot;type&quot;: &quot;queueTrigger&quot;, &quot;direction&quot;: &quot;in&quot;, &quot;queueName&quot;: &quot;my-q&quot;, &quot;connection&quot;: &quot;QUEUE_CONN&quot; } ] } </code></pre> <ol start="5"> <li>Added a <code>QUEUE_CONN__queueServiceUri</code> app setting to the Function's <code>local.settings.json</code> file per <a href="https://stackoverflow.com/questions/77483728/azure-durable-functions-identity-based-queuetrigger-binding-queueserviceuri">this</a> SO question, which references <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=python-v2%2Cisolated-process%2Cnodejs-v4%2Cextensionv5&amp;pivots=programming-language-python#identity-based-connections" rel="nofollow noreferrer">this</a> doc.</li> </ol> <pre><code>{ &quot;IsEncrypted&quot;: false, &quot;Values&quot;: { &quot;FUNCTIONS_WORKER_RUNTIME&quot;: &quot;python&quot;, &quot;AzureWebJobsStorage&quot;: &quot;UseDevelopmentStorage=true&quot;, &quot;QUEUE_CONN__queueServiceUri&quot;: &quot;https://&lt;my-q-storage&gt;.queue.core.windows.net&quot; } } </code></pre> <ul> <li>After <code>func azure functionapp publish &lt;my-function&gt; --publish-local-settings</code>, and writing the appropriate setting to Azure...the function will not trigger when adding a new queue.</li> </ul> <ol start="6"> <li><p>I also tried adding <code>QUEUE_CONN__managedIdentityResourceId</code> per <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference?tabs=queue&amp;pivots=programming-language-csharp#common-properties-for-identity-based-connections" rel="nofollow noreferrer">this</a> (contradicting?) doc. But this didn't seem to trigger the Function upon adding a queue.</p> </li> <li><p>Also tried adding <code>&quot;QUEUE_CONN__credential&quot;: &quot;managedidentity&quot;</code>. Still unable to trigger the function.</p> </li> </ol> <p>I'd really like to get away from dealing with a Key Vault secret when all other connections within the function rely on SAMI auth.</p> <p>Any ideas?</p>
<python><azure><azure-functions><queue>
2024-01-28 05:54:56
2
4,456
SeaDude
77,893,484
3,973,175
Italicizing letters and numbers in matplotlib
<p>I am attempting to italicize text in my plot:</p> <pre><code>import matplotlib.pyplot as plt plt.plot([1,2,3],[1,1,1], label = '$\it{ABC123}$') plt.legend() plt.show() </code></pre> <p>as shown by <a href="https://stackoverflow.com/questions/8376335/styling-part-of-label-in-legend-in-matplotlib">Styling part of label in legend in matplotlib</a></p> <p>but this only italicized <code>ABC</code>, not <code>123</code>, which was also noticed, but left unsolved, by <a href="https://stackoverflow.com/questions/69964702/matplotlib-italic-font-cannot-be-applied-to-numbers-in-the-legend?noredirect=1&amp;lq=1">Matplotlib italic font cannot be applied to numbers in the legend?</a></p> <p>I have also tried <code>usetex</code> <a href="https://stackoverflow.com/questions/32470137/italic-symbols-in-matplotlib">italic symbols in matplotlib?</a> but that <em>changes the fonts</em> which make this figure incompatible with other figures that I'm making.</p> <p>I'm on Python 3.10.12 and</p> <pre><code>matplotlib 3.8.1 matplotlib-inline 0.1.3 </code></pre> <p>how can I italicize both letters and numbers, e.g. in <code>ABC123</code>?</p>
<python><python-3.x><matplotlib>
2024-01-28 02:40:18
1
6,227
con
77,893,416
7,978,112
Empty pandas dataframe with datatypes preserved
<p>I want to make an empty df with preserved datatypes as a template. Code is as follows:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import datetime from dataclasses import dataclass @dataclass class OpenOrder: symbol: str = &quot;Dummy&quot; secType: str = &quot;STK&quot; dt: datetime.datetime = datetime.datetime.now() price: float = 0.0 status: str = None def empty(self): open_ord = self() empty_df = pd.DataFrame([open_ord.__dict__]) return empty_df.iloc[0:0] </code></pre> <p>Instantiation works, but emptying doesn't.</p> <pre><code>open_order = OpenOrder() order_df = open_order.empty() </code></pre> <p>How can I make this work?</p>
<python><python-dataclasses>
2024-01-28 02:01:25
1
1,847
reservoirinvest
77,893,374
21,107,707
How to configure pip to install a script to the environment bin/ folder?
<p>I'm writing a package in Python, and I want possible users to install a command to their system by installing with <code>pip</code>. A package that I want to copy the installation behavior of is <a href="https://github.com/wkentaro/gdown/blob/main/pyproject.toml" rel="nofollow noreferrer"><code>gdown</code></a>. When you install this, a <code>gdown</code> python executable is put inside the environment's (whether it's global or a virtual environment) <code>bin/</code> folder, so that <code>gdown</code> can be called from the command line.</p> <p>How do I replicate this behavior? I tried to look at the code in the repository but I can't seem to find the code that does this.</p>
<python><pip>
2024-01-28 01:26:12
1
801
vs07
77,893,232
651,174
How to properly log an error and then re-raise an exception
<p>Is the following the correct pattern for logging something and re-raising the error -- specifically, just using a blank <code>raise</code> without anything after it? Is there a more explicit way to do that?</p> <pre><code>try: #MySQL conn = pymysql.connect(host=HOST, user=USER, passwd=PASS, db=NAME) except pymysql.err.OperationalError: logger.error('Make sure a separate credentials file has been sourced.') raise </code></pre>
<python>
2024-01-27 23:55:56
0
112,064
David542
77,893,163
1,070,833
how to read/write "bits per sample "ifd tag in a tiff file (python)
<p>I'm struggling with IDF tag for &quot;bits per sample&quot;. According to the documentation the type of this field is <code>short</code></p> <pre><code>Code 258 (hex 0x0102) Name BitsPerSample Type SHORT Count N = SamplesPerPixel Default 1 </code></pre> <p>a tag in a valid tiff file looks like this in hex:</p> <p><code>02 01 03 00 03 00 00 00 E2 01 00 00</code></p> <ul> <li><code>02 01</code> - the tag 258</li> <li><code>03 00</code> - type short</li> <li><code>03 00 00 00</code> - number of values 3</li> <li><code>E2 01</code> - <strong>this is what I have the problem with. how is this 3 shorts?</strong></li> <li><code>00 00</code> - offset to the next ifd tag 0</li> </ul> <p>how are the three shorts encoded to <code>E2 01</code> and how to read them? I expected 3 decimal values of 8,8,8 or 16,16,16. What am I missing? a short is an 8 bit value.</p> <p>I'm using <code>struct</code> to read and write the tags and this one baffles me as this makes no sense with a single value of 482 as unsigned short (<code>H</code>):</p> <p><code>struct.pack('&lt;HHIHH', 258, 3, 3, 482, 0)</code></p> <pre><code>h short integer 2 (2) H unsigned short integer 2 (2) </code></pre> <p>I bet I'm missing something obvious here. Any insights very welcome :-)</p>
<python><image-processing><tiff>
2024-01-27 23:23:36
2
1,109
pawel
77,893,107
525,865
Beautiful soup parse the listing of many entries and saving in data frame
<p>At the moment I'll am gathering data from the dioceses of The world.</p> <p>My approach works with bs4 and pandas. I am currently working on the scrape-logic.</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd url = &quot;http://www.catholic-hierarchy.org/&quot; # Send a GET request to the website response = requests.get(url) #my approach to parse the HTML content of the page soup = BeautifulSoup(response.text, 'html.parser') # Find the relevant elements containing diocese information diocese_elements = soup.find_all(&quot;div&quot;, class_=&quot;diocesan&quot;) # Initialize empty lists to store data dioceses = [] addresses = [] # Extract now data from each diocese element for diocese_element in diocese_elements: # Example: Extracting diocese name diocese_name = diocese_element.find(&quot;a&quot;).text.strip() dioceses.append(diocese_name) # Example: Extracting address address = diocese_element.find(&quot;div&quot;, class_=&quot;address&quot;).text.strip() addresses.append(address) # to save the whole data we create a DataFrame using pandas data = {'Diocese': dioceses, 'Address': addresses} df = pd.DataFrame(data) # Display the DataFrame print(df) </code></pre> <p>At the moment I get some weird things on my pycharm. And I try to find a way to gather the whole data with the <strong>pandas approach</strong>.</p>
<python><pandas><beautifulsoup>
2024-01-27 22:56:17
1
1,223
zero
77,893,034
15,048,981
ImportError: Could not import faiss python package
<p>I am getting this error on python3.12 on M1 air</p> <p>ImportError: Could not import faiss python package. Please install it with <code>pip install faiss-gpu</code> (for CUDA supported GPU) or <code>pip install faiss-cpu</code> (depending on Python version).</p> <p>It suggests installing fairs-cpu and gpu and none of it works. Can someone help?</p> <p>Here's the code that's being used</p> <pre><code>import streamlit as st from PyPDF2 import PdfReader from langchain.text_splitter import RecursiveCharacterTextSplitter import google.generativeai as palm from langchain_community.embeddings import GooglePalmEmbeddings from langchain_community.llms import GooglePalm from langchain_community.vectorstores import FAISS from langchain.chains import ConversationalRetrievalChain from langchain.memory import ConversationBufferMemory import os os.environ[&quot;GOOGLE_API_KEY&quot;] = &quot;***********************&quot; def get_pdf_text(pdf_docs): return '' def get_text_chunks(text): text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20) chunks = text_splitter.split_text(text) return chunks def get_vector_store(text_chunks): embeddings = GooglePalmEmbeddings() vector_store = FAISS.from_texts(text_chunks, embedding=embeddings) return vector_store def get_conversational_chain(vector_store): llm=GooglePalm() memory = ConversationBufferMemory(memory_key = &quot;chat_history&quot;, return_messages=True) conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vector_store.as_retriever(), memory=memory) return conversation_chain def user_input(user_question): response = st.session_state.conversation({'question': user_question}) st.session_state.chatHistory = response['chat_history'] for i, message in enumerate(st.session_state.chatHistory): if i%2 == 0: st.write(&quot;Human: &quot;, message.content) else: st.write(&quot;Bot: &quot;, message.content) def main(): st.set_page_config(&quot;GreenThread Consulting&quot;) st.header(&quot;GreenThread Consulting Chat&quot;) user_question = st.text_input(&quot;Ask a Question on the sustainability of our company?&quot;) if &quot;conversation&quot; not in st.session_state: st.session_state.conversation = None if &quot;chatHistory&quot; not in st.session_state: st.session_state.chatHistory = None if user_question: user_input(user_question) with st.sidebar: st.title(&quot;Settings&quot;) st.subheader(&quot;Upload your Documents&quot;) pdf_docs = st.file_uploader(&quot;Upload your invoice and Click on the Process Button&quot;, accept_multiple_files=True) if st.button(&quot;Process&quot;): with st.spinner(&quot;Processing&quot;): raw_text = get_pdf_text(pdf_docs) text_chunks = get_text_chunks(raw_text) vector_store = get_vector_store(text_chunks) st.session_state.conversation = get_conversational_chain(vector_store) st.success(&quot;Done&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre>
<python>
2024-01-27 22:27:17
1
569
Andy
77,893,032
1,001,581
a forking service blocks ssh connection even if the main process ends
<p>I have a program that should be run from <code>sshd</code>. It should print few lines to stdout and then fork and exit. The forked program should do its stuff detached from <code>sshd</code> without blocking the program. The thing is that the program below runs correctly when executed from a terminal shell or via ssh with option <code>-t</code>. It blocks ssh connection when the terminal is not assigned.</p> <pre class="lang-py prettyprint-override"><code>#! /usr/bin/env python3 import os, time, sys print(&quot;Hello world&quot;, flush=True) if os.fork()==0: # child sys.stdin.close() sys.stdout.close() sys.stderr.close() os.setsid() # just in case make a second fork if os.fork()&gt;0: os.exit() os.setsid() time.sleep(30) os.exit(0) os.wait() print(&quot;bye, world&quot;) </code></pre> <p>How can I make the program release ssh connection even if invoked without terminal?</p> <ol> <li>Direct run (runs correctly, exits immediately leaving forked child):</li> </ol> <pre class="lang-bash prettyprint-override"><code>./test-prog.py </code></pre> <ol start="2"> <li>Via ssh with forced terminal still works correctly:</li> </ol> <pre class="lang-bash prettyprint-override"><code>ssh -t server ./test-prog.py </code></pre> <ol start="3"> <li>Via ssh without forced terminal doesn't work correctly. It prints both lines and then waits 30 seconds to release ssh:</li> </ol> <pre class="lang-bash prettyprint-override"><code>ssh server ./test-prog.py </code></pre> <p>I would run this program from a daemon connecting via ssh and exit leaving forked child detached from ssh. In real application this program just opens few tcp socket and writes their port number on stdout. The client reads the ports and then connects to it.</p>
<python><ssh><tcp><sshd><setsid>
2024-01-27 22:27:04
1
431
ChewbaccaKL
77,892,981
17,275,588
moviepy introducing weird audio artifacts at end of audio files, when creating videos via Python. Why?
<p>These are not present in the original audio files. They get added to the end during the video creation process somehow. See this screenshot for a before and after comparison of the waveforms:</p> <p><a href="https://i.sstatic.net/ziZWL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ziZWL.png" alt="enter image description here" /></a></p> <p>What's weirder is, the artifacts SOUND kind of like the narrator. Like it sounds as if the next sentence is starting, and he just starts to utter the first 0.1 seconds of the first word, which then abruptly gets cut off. That doesn't make much sense as an explanation though, because all this code does is take the one single audio file, and lay the video elements on top of it.</p> <p>While there is more to my code, here are the pertinent sections:</p> <pre><code> current_audio_file = os.path.join(current_video_folder_path, f'{current_video_title_template}-part-{current_video_section}.mp3') audio_clip = AudioFileClip(current_audio_file) final_clip = concatenate_videoclips(clips).subclip(0, audio_clip.duration) final_clip = final_clip.set_audio(audio_clip) final_clip_path = os.path.join(current_video_folder_path, f&quot;video-part-{current_video_section}.mp4&quot;) final_clip.write_videofile(final_clip_path, codec=&quot;libx264&quot;, audio_codec=&quot;aac&quot;) </code></pre> <p>One thing I tried was switching the audio codec to a different one. No difference. In fact it introduced a literally identical audio artifact -- not like a unique one in the same spot, it was literally the identical sound.</p> <p>My current best idea is a poor, yet functional workaround: Simply chop the last 0.2 seconds from the end of every audio file. I did that here like below, and it worked to eliminate the artifact. Which seems to indicate, it might get generated in the process of converting it into an AudioFileClip() element?</p> <pre><code> current_audio_file = os.path.join(current_video_folder_path, f'{current_video_title_template}-part-{current_video_section}.mp3') audio_clip = AudioFileClip(current_audio_file) # Trim the last 0.X seconds from the audio # this is a poor, but functional, workaround to remove those weird artifacts at end of certain audio clips. audio_duration = audio_clip.duration audio_clip = audio_clip.subclip(0, max(0, audio_duration - 0.2)) </code></pre> <p>I ran just this, to debug, and indeed this by itself introduced the audio artifact:</p> <pre><code>current_audio_file = os.path.join(current_video_folder_path, f'{current_video_title_template}-part-{current_video_section}.mp3') audio_clip = AudioFileClip(current_audio_file) # Export the trimmed audio for inspection debug_audio_path = os.path.join(current_video_folder_path, f'debug_{current_video_section}.mp3') audio_clip.write_audiofile(debug_audio_path) exit() </code></pre> <p>Also tried converting it to a WAV audio file, and using that as the input, then also exporting as a WAV audio file -- same issue with the artifact being at the end.</p> <p>Anyway, if anyone has experienced this before, or has any ideas, I'd be curious.</p>
<python><mp3><mp4><aac>
2024-01-27 22:08:03
2
389
king_anton
77,892,980
11,405,455
count frequencies in the values of dictionary in python
<p>I want to create a dictionary from the following dictionary which takes all the values in each round and make it as index and store its frequency across rounds</p> <p>I am maintaining a dictionary which gets bigger in every iteration</p> <pre><code>list_chosen_each_round {0: ['1', '26', '20', '14', '28', '23'], 1: Index(['4', '17', '29', '21', '8', '11'], dtype='object', name='ID'), 2: Index(['11', '9', '3', '1', '27', '28'], dtype='object', name='ID')} </code></pre> <p>output should be like following:</p> <pre><code>'1': 2 '26': 1 '20': 1 '14': 1 '28': 2 '23': 1 '4': 1 '17': 1 '29': 1 '21': 1 '8': 1 '11': 2 '9': 1 '3': 1 '27': 1 </code></pre> <p>I used the following code</p> <pre><code>from collections import Counter all_selections = sum(list_chosen_each_round.values(), []) frequency_counter = Counter(all_selections) print('frequency_counter ',frequency_counter) </code></pre> <p>It works well for the first round and it gives the correct output as follows</p> <pre><code>frequency_counter Counter({'28': 1, '9': 1, '4': 1, '22': 1, '15': 1, '12': 1}) </code></pre> <p>But after first round it gives</p> <pre><code>frequency_counter Counter({'2820': 1, '923': 1, '419': 1, '225': 1, '1517': 1, '1221': 1}) </code></pre> <p>Why does it concatenate the index rather than making the new indices?</p> <p>the above output should have been like following</p> <pre><code>frequency_counter Counter({'28': 1, '9': 1, '4': 1, '2': 1, '15': 1, '12': 1, '20': 1, '23': 1, '19': 1, '5': 1, '17': 1, '21': 1}) </code></pre> <p>and if the index repeated the counter should have increased for that index</p>
<python>
2024-01-27 22:08:00
1
443
Khaned
77,892,926
10,380,766
TypeError in Python 3.11 when Using BasicModelRunner from llama-cpp-python
<p>I'm currently taking the <a href="https://www.coursera.org/learn/finetuning-large-language-models-project/" rel="nofollow noreferrer">DeepAI's Finetuning Coursera course</a> and encountered a bug while trying to run one of their demonstrations locally in a Jupyter notebook.</p> <p><strong>Environment:</strong></p> <ul> <li>Python version: 3.11</li> <li>Required packages: <ul> <li>notebook</li> <li>lamini</li> <li>llama-cpp-python (&gt; 0.1.53)</li> </ul> </li> </ul> <p><strong>Issue:</strong></p> <p>When attempting to run the following code:</p> <pre class="lang-py prettyprint-override"><code>import os import lamini lamini.api_url = os.getenv(&quot;POWERML__PRODUCTION__URL&quot;) lamini.api_key = os.getenv(&quot;POWERML__PRODUCTION__KEY&quot;) from llama import BasicModelRunner non_finetuned = BasicModelRunner(&quot;meta-llama/Llama-2-7b-hf&quot;) non_finetuned_output = non_finetuned(&quot;Tell me how to train my dog to sit&quot;) </code></pre> <p>I receive a <code>TypeError</code> at the last line:</p> <pre><code>TypeError: can only concatenate str (not &quot;NoneType&quot;) to str </code></pre> <p>This error occurs when I try to pass a string to the <code>non_finetuned</code> BasicModelRunner object.</p>
<python><llama><llama-cpp-python>
2024-01-27 21:47:38
2
1,020
Hofbr
77,892,851
17,729,094
How to box a polar plot with a cartesian coordinates axis?
<p>I want the <code>xy</code> axis to perfectly enclose the polar axis without the colorbar. So far I have:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np # Random data r = np.random.randn(1000) theta = np.random.randn(1000) fig = plt.figure() axc = plt.subplot(236) ax = plt.subplot(236, projection='polar') ax.set(xticklabels=[], yticklabels=[]) ax.grid(False) hist, phi_edges, r_edges = np.histogram2d(theta, r, bins=50) axc.set_xlim(-r_edges[-1], r_edges[-1]) axc.set_ylim(-r_edges[-1], r_edges[-1]) X, Y = np.meshgrid(phi_edges, r_edges) pc = ax.pcolormesh(X, Y, hist.T) cbar = fig.colorbar(pc) cbar.set_label(&quot;Counts&quot;, rotation=270, labelpad=15) </code></pre> <p>This produces a plot like:</p> <p><a href="https://i.sstatic.net/b6nit.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b6nit.png" alt="pic" /></a></p> <p>I want the <code>x</code> and <code>y</code> axes to align with the polar plot (and the color bar outside the axes).</p> <p>Does anybody have any suggestion?</p>
<python><matplotlib>
2024-01-27 21:16:51
1
954
DJDuque
77,892,601
1,114,253
Looping over multiple ranges in "BFS" order?
<p>Let's say for <code>k=3</code> and <code>n=4</code>, then one could simply loop over the <code>k</code> different <code>n</code>-ranges like</p> <pre><code>for a in range(4): for b in range(4): for c in range(4): print((a, b, c)) </code></pre> <p>The problem is that this goes all the way through the last <code>n</code>-range quickly, and takes forever to go through the first <code>n</code>-range. I want a more balanced ordering of looping through the same ranges, where each range is gone through at the same pace. Kind of like the difference between DFS and BFS.</p> <p>For example, the ordering of the tuples to be more like</p> <pre><code>(0, 0, 0) (1, 0, 0) (0, 1, 0) (0, 0, 1) (1, 1, 0) (0, 1, 1) (1, 0, 1) (2, 0, 0) (0, 2, 0) (0, 0, 2) (1, 1, 1) (2, 1, 0) (1, 2, 0) (0, 2, 1) (0, 1, 2) (2, 0, 1) (1, 0, 2) (3, 0, 0) (0, 3, 0) (0, 0, 3) etc. </code></pre> <p>To be clear, I'd like a generator function <code>k_tuples_of_range_n(k, n)</code> that spits out the tuples in that &quot;BFS&quot; ordering. And the function should be efficient (taking up at most O(k) space and time to generate each tuple).</p> <p>Anyone know if there's a name for this ordering, or an elegant solution?</p> <p>Edit: To be clear, there's no explicit tree or BFS happening. I just call it &quot;BFS&quot; order to evoke that one is evenly exploring all the ranges (as opposed to going through one range and neglecting the others). If I had to say concretely how the tuples are ordered: each tuple <code>t</code> is ascendingly ordered primarily by <code>sum(t)</code>, and secondarily by <code>max(t)</code>. So <code>(3, 0, 0)</code> comes after <code>(1, 1, 0)</code> because it has a higher sum. <code>(3, 0, 0)</code> comes after <code>(1, 1, 1)</code> because it has a higher max.</p>
<python><algorithm><loops><breadth-first-search>
2024-01-27 19:52:23
1
884
chausies
77,892,410
8,285,840
How to use plotly buttons and plotly.express (px), with the color option?
<p>When using <code>px.scatter(color='key')</code> with the color option, using plotly buttons behaves erractic. The data is after clicking on either of the buttons is jumbled up and not what one expect, certainly does not go back to the original plot when clicking on Plength (the button for showing petal_length on the y-axis), which is what we started with. What is going on?</p> <p>Works fine:</p> <pre class="lang-py prettyprint-override"><code>from plotly import express as px import pandas as pd import seaborn as sns df = sns.load_dataset('iris') # df.head() # &gt; sepal_length sepal_width petal_length petal_width species # &gt; 0 5.1 3.5 1.4 0.2 setosa # &gt; 1 4.9 3.0 1.4 0.2 setosa # &gt; 2 4.7 3.2 1.3 0.2 setosa # &gt; 3 4.6 3.1 1.5 0.2 setosa # &gt; 4 5.0 3.6 1.4 0.2 setosa # df.species.unique() # &gt; array(['setosa', 'versicolor', 'virginica'], dtype=object) fig = px.scatter( df, x=df.index, y=&quot;petal_length&quot;, #color=&quot;species&quot;, title='Iris', ) updatemenus = [ dict( type=&quot;buttons&quot;, buttons=[ dict( label=&quot;Plength&quot;, method=&quot;update&quot;, args=[ { &quot;y&quot;: [df['petal_length']] }, { &quot;title&quot;: &quot;Petal length vs index&quot;, &quot;yaxis&quot;: {&quot;title&quot;: &quot;petal_length&quot;}, }, ], ), dict( label=&quot;Slength&quot;, method=&quot;update&quot;, args=[ { &quot;y&quot;: [df['sepal_length']] }, { &quot;title&quot;: &quot;Sepal length vs index&quot;, &quot;yaxis&quot;: {&quot;title&quot;: &quot;sepal_length&quot;}, }, ], ), ], ) ] # Update layout with the updatemenu fig.update_layout(updatemenus=updatemenus) </code></pre> <p>Weird data, when using the buttons:</p> <pre class="lang-py prettyprint-override"><code>from plotly import express as px import pandas as pd import seaborn as sns df = sns.load_dataset('iris') df.index fig = px.scatter( df, x=df.index, y=&quot;petal_length&quot;, color=&quot;species&quot;, title='Iris', ) updatemenus = [ dict( type=&quot;buttons&quot;, buttons=[ dict( label=&quot;Plength&quot;, method=&quot;update&quot;, args=[ { &quot;y&quot;: [df['petal_length']] }, { &quot;title&quot;: &quot;Petal length vs index&quot;, &quot;yaxis&quot;: {&quot;title&quot;: &quot;petal_length&quot;}, }, ], ), dict( label=&quot;Slength&quot;, method=&quot;update&quot;, args=[ { &quot;y&quot;: [df['sepal_length']] }, { &quot;title&quot;: &quot;Sepal length vs index&quot;, &quot;yaxis&quot;: {&quot;title&quot;: &quot;sepal_length&quot;}, }, ], ), ], ) ] # Update layout with the updatemenu fig.update_layout(updatemenus=updatemenus) </code></pre>
<python><plotly>
2024-01-27 18:57:40
1
760
Matthias Arras
77,892,148
1,102,806
Python concurrent.futures.wait job submission order not preserved
<p>Does python's <code>concurrent.futures.wait()</code> preserve the order of job submission? I submitted two jobs to <code>ThreadPoolExecutor</code> as follows:</p> <pre><code>import concurrent.futures import time, random def mult(n): time.sleep(random.randrange(1,5)) return f&quot;mult: {n * 2}&quot; def divide(n): time.sleep(random.randrange(1,5)) return f&quot;divide: {n // 2}&quot; with concurrent.futures.ThreadPoolExecutor() as executor: mult_future = executor.submit(mult, 200) divide_future = executor.submit(divide, 200) # wait for both task to complete mult_task, divide_task = concurrent.futures.wait( [mult_future, divide_future], return_when=concurrent.futures.ALL_COMPLETED, ).done mult_result = mult_task.result() divide_result = divide_task.result() print(mult_result) print(divide_result) </code></pre> <p>Sometimes I see</p> <pre><code>divide: 50 mult: 400 </code></pre> <p>and sometimes,</p> <pre><code>mult: 400 divide: 50 </code></pre> <p>shouldn't <code>mult_task, divide_task</code> always map to <code>mult_future, divide_future</code> ?</p> <pre><code>python --version &gt;&gt; Python 3.8.16 </code></pre>
<python><concurrent.futures>
2024-01-27 17:35:17
2
4,621
DevEx
77,892,031
6,626,093
array(struct) to array(map)β€”PySpark
<p>I have a <code>df</code> with the following schema,</p> <pre><code> g_hut: string date: date arr_data:array element:struct Id:string Q_Id:string Q_Type:string </code></pre> <p>I want to convert the <code>arr_data</code> column from <code>Array(Struct)</code> to <code>Array(Map)</code>.</p> <pre><code>g_hut: string date: date arr_data:array element:map key:string value:string </code></pre> <p>Original <code>arr_data</code> column's Row looks like this,</p> <pre><code>arr_data: [ {'Id': '12a', 'Q_Id': 'uac', 'Q_Type': 'action'}, {'Id': '', 'Q_Id': '', ''}, {'Id': '76v', 'Q_Id': '', 'Q_Type': 'form'} ] </code></pre> <p>I tried the following,</p> <pre><code>df = df.withColumn(&quot;arr_data_map&quot;, f.array(f.create_map( f.lit(&quot;Id&quot;), f.col(&quot;arr_data.Id&quot;), f.lit(&quot;Q_Id&quot;), f.col(&quot;arr_data.Q_Id&quot;), f.lit(&quot;Q_Type&quot;), f.col(&quot;arr_data.Q_Type&quot;) ))) </code></pre> <p>I get the following result,</p> <pre><code>[ {'Id': ['12a', '', '76v']}, {'Q_Id': ['uac', '','']}, {'Q_Type': ['action', '', 'form']} ] </code></pre> <p>This is not what I want. I want the original <code>arr_data</code> with the <code>Map</code> schema as mentioned above. How can I achieve this?</p> <p>Below to create a sample <code>df</code> (original) with schema that has array(struct),</p> <pre><code>data = [ ('A', datetime.date(2022, 1, 1), [{'Id': '12a', 'Q_Id': 'uac', 'Q_Type': 'action'}, {'Id': '', 'Q_Id': '', 'Q_Type': ''}, {'Id': '76v', 'Q_Id': '', 'Q_Type': 'form'}]), ('B', datetime.date(2022, 1, 2), [{'Id': '34b', 'Q_Id': 'abc', 'Q_Type': 'action'}, {'Id': '56c', 'Q_Id': 'def', 'Q_Type': 'form'}, {'Id': '78d', 'Q_Id': 'ghi', 'Q_Type': 'action'}]) ] # Define the schema schema = t.StructType([t.StructField(&quot;g_hut&quot;, t.StringType()), t.StructField(&quot;date&quot;, t.DateType()), t.StructField(&quot;arr_data&quot;, t.ArrayType( t.StructType([ t.StructField(&quot;Id&quot;, t.StringType()), t.StructField(&quot;Q_Id&quot;, t.StringType()), t.StructField(&quot;Q_Type&quot;, t.StringType())])) ) ]) # Create a DataFrame df = spark.createDataFrame(data, schema=schema) </code></pre>
<python><arrays><apache-spark><pyspark><apache-spark-sql>
2024-01-27 17:00:04
1
3,046
i.n.n.m
77,891,979
4,199,253
add another column as second X axis label in Python
<p>I have a 15minutes interval time series for several camera. each camera can collect 3 types of movement in four different leg. There is a <code>control_factor</code> that was present for an hour for each camera. I want to show the total of counts in each leg for each camera (each camera in separate plots ) for every 15minutes. and in the x axis, which is the time, next to each time, or under, I want to write yes, if the factor is true and no if the factor is false.</p> <p>You can create the data using the following lines.</p> <pre><code>import pandas as pd from datetime import datetime, timedelta import matplotlib.pyplot as plt # Function to create date range with 15-minute intervals def create_date_range(start_date, end_date, interval): date_range = [] current_date = start_date while current_date &lt;= end_date: date_range.append(current_date) current_date += interval return date_range # Function to create DataFrame def create_dataframe(start_date, end_date, interval): date_range = create_date_range(start_date, end_date, interval) data = [] for date_time in date_range: for camera in range(1, 5): for leg in range(1, 5): for movement in range(1, 4): count = 1 # You can set count based on your requirements control_factor = True if date_time.hour == 14 and camera == 1 else False data.append([date_time, f'Cam{camera}', f'Leg{leg}', f'Move{movement}', count, control_factor]) columns = ['DateTime', 'Camera', 'Leg', 'Movement', 'Count', 'control_factor'] df = pd.DataFrame(data, columns=columns) return df # Set start and end dates start_date = datetime(2024, 1, 27, 14, 0, 0) end_date = datetime(2024, 1, 27, 16, 0, 0) # Set time interval interval = timedelta(minutes=15) # Create DataFrame df = create_dataframe(start_date, end_date, interval) # Display DataFrame print(df) </code></pre> <p>I tried the plot:</p> <pre><code># List of unique cameras in your data unique_cameras = df['Camera'].unique() # Iterate over each camera for camera in unique_cameras: # Filter data for the current camera df_camera = df[df['Camera'] == camera] # Group by DateTime, Leg, and Movement, summing the counts and taking the first control_factor value grouped_df = df_camera.groupby(['DateTime', 'Leg', 'Movement'])[['Count', 'control_factor']].agg({'Count': 'sum', 'control_factor': 'first'}).reset_index() # Plot the data plt.figure(figsize=(15, 8)) ax = sns.barplot(x='DateTime', y='Count', hue='Leg', data=grouped_df, ci=None) # Rotate x-axis labels for better visibility ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha='right') plt.title(f'Sum of Counts for Each Leg Over Time ({camera})') plt.xlabel('Time') plt.ylabel('Sum of Counts') # Annotate bars with control_factor values for index, row in grouped_df.iterrows(): plt.text(index, row['Count'], str(row['control_factor']), ha='center', va='bottom') plt.show() </code></pre> <p>it shows true or false over each bar, but I want it under the x axis along with the time. I also want to make the colours like lighter for those bars with control-factor = False. Just same colour as the other bars, but a bit lighter.</p> <p>Here is what I am trying to do!</p> <p><a href="https://i.sstatic.net/Yfizv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yfizv.png" alt="enter image description here" /></a></p> <p>I am open to use other library as well.</p>
<python><pandas><seaborn><bar-chart><axis-labels>
2024-01-27 16:44:24
1
1,034
GeoBeez
77,891,836
2,213,825
Silence JAX initial messages
<p>I am using JAX with CPU. Everytime I use it I get a message:</p> <pre class="lang-bash prettyprint-override"><code>Platform 'METAL' is experimental and not all JAX functionality may be correctly supported! 2024-01-27 15:56:46.164447: W pjrt_plugin/src/mps_client.cc:563] WARNING: JAX Apple GPU support is experimental and not all JAX functionality is correctly supported! Metal device set to: Apple M1 systemMemory: 16.00 GB maxCacheSize: 5.33 GB </code></pre> <p>This is crowding my terminal, how do I disable it?</p>
<python><jax>
2024-01-27 15:58:24
1
4,883
JoΓ£o Abrantes
77,891,386
4,225,430
Get rid of format string conflict when using sympy
<p>When I try to use sympy to find the derivative of the loss function, it raises a conflict with format string.</p> <pre><code>import numpy as np import sympy as sp def predict(X, w, b): return np.dot(X, w) + b def loss(X, w, b, Y): return np.mean((predict(X, w, b) - Y) ** 2) X, Y = np.loadtxt(&quot;code/02_first/pizza.txt&quot;, unpack=True, skiprows=1) # Convert X and Y to sympy symbols X, w, b, Y = sp.symbols(&quot;X w b Y&quot;) def gradient(X, w, b, Y): loss_expr = loss(X, w, b, Y) dw_dX = sp.diff(loss_expr, w) db_dX = sp.diff(loss_expr, b) return dw_dX, db_dX def train(X, Y, iterations, lr): w = sp.symbols('w') b = sp.symbols('b') for i in range(iterations): loss_value = loss(X, w, b, Y) print(f&quot;Iteration: {i:4d}, Loss: {loss_value:.10f}&quot;) dw_dX, db_dX = gradient(X, w, b, Y) w -= dw_dX * lr b -= db_dX * lr return w, b w, b = train(X, Y, iterations=20000, lr=0.001) print(f&quot;\nw = {w:.10f}, b = {b:.10f}&quot;) print(f&quot;Prediction: x = 20 =&gt; y = {predict(20, w, b):.2f}&quot;) </code></pre> <pre><code>TypeError: unsupported format string passed to Pow.__format__ </code></pre> <p>The data is here in txt (or via the <a href="https://media.pragprog.com/titles/pplearn/code/02_first/pizza.txt" rel="nofollow noreferrer">link here</a>):</p> <pre><code>Reservations Pizzas 13 33 2 16 14 32 23 51 13 27 1 16 18 34 10 17 26 29 3 15 3 15 21 32 7 22 22 37 2 13 27 44 6 16 10 21 18 37 15 30 9 26 26 34 8 23 15 39 10 27 21 37 5 17 6 18 13 25 13 23 </code></pre> <p>I can just use numpy but in doing so I need to calculate the loss function myself, which is not effective (and easy to raise error with brackets).</p> <p>Why the error and why sympy is not compatible with format string? Also, how to generate a correct script with sympy?</p>
<python><string><numpy><machine-learning><sympy>
2024-01-27 13:35:43
2
393
ronzenith
77,891,369
9,947,140
Modeling Sigmoid Curve with Time-Dependent Steepness in Python
<p>I have a dataset that follows a sigmoid curve, and I've observed that the sigmoid function becomes steeper over time. I'm looking for guidance on creating a model to fit this curve using Python. This question is similar to <a href="https://stackoverflow.com/questions/55725139/fit-sigmoid-function-s-shape-curve-to-data-using-python">this post</a> expect that the data curve becomes steeper over time. I've attempted to use a logistic regression to fit the data (as the other post suggests), but I need to incorporate a time-dependent parameter for the steepness.</p> <p>The data is across a single day. In the images below, I've time segmented the data by hour. Meaning, each color represents data from a specific hour in the day. First, observe that for each color, the data follows a sigmoid curve (this is especially apparent for the orange data and is a property of the data I'm working with). Second, notice that the different colors follow different sigmoid steepnesses. For example, the blue curve follows a steeper sigmoid curve than the orange one does. From what I observed, the later in the day the data is from, the steeper the sigmoid curves become. I've attempted to show this property by drawing out the sigmoid curves in the second image.</p> <p><a href="https://i.sstatic.net/6gEyv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6gEyv.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/KHlba.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KHlba.png" alt="enter image description here" /></a></p> <p>Here was my attempt to fit this data:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit # Define a logistic function with a time-dependent parameter def time_dependent_logistic(x, k, t): return 1 / (1 + np.exp(-k * (x - t))) # TODO: I replaced the variables below with my actual data X = ... # Example: np.random.rand(100) * 10 time = ... # Example: np.linspace(0, 1, 100) y = time_dependent_logistic(X, k=5 * time, t=2) + 0.05 * np.random.randn(100) # Fit the time-dependent logistic function to the data popt, pcov = curve_fit(time_dependent_logistic, X, y, bounds=([0, 0], [np.inf, np.inf])) # Generate predictions using the fitted parameters X_test = np.linspace(min(X), max(X), 300) y_pred = time_dependent_logistic(X_test, *popt) # Plot the original data and the fitted logistic curve plt.scatter(X, y, label='Original data') plt.plot(X_test, y_pred, label='Fitted time-dependent logistic curve', color='red', linewidth=2) plt.xlabel('Input') plt.ylabel('Output') plt.legend() plt.show() </code></pre> <p>However, the graph was not fitting the data properly. It simply produces a flat line:</p> <p><a href="https://i.sstatic.net/yREHU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yREHU.png" alt="enter image description here" /></a></p> <p>I've researched ways into fitting the data properly so that the curve does not produce a flat line, but this is my first time attempting to model data like this, so I'm unsure if I'm even taking the right approach. Any guidance would be great. Thank you very in advance for your help. I've been very stuck on this problem for a long time, and I'm open to other methods of fitting the data as well.</p> <p>EDIT:</p> <p>I uploaded my data to Google Drive in case anyone wants to play around with it: <a href="https://drive.google.com/file/d/1aDB8U6Cn8lFo1TWFSRXWX-X0aEpCQ4sT/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1aDB8U6Cn8lFo1TWFSRXWX-X0aEpCQ4sT/view?usp=sharing</a>. Just a note, I shifted the X and Time columns to make the minimum 0, so it's not exactly the same data shown in the graphs.</p>
<python><numpy><scipy><curve-fitting><sigmoid>
2024-01-27 13:29:48
1
342
randomrabbit
77,891,032
11,748,924
Create a small control panel on screen for pyautogui monitoring and control purposes
<p>How do I display a small control panel screen that can control and monitoring <code>pyautogui</code> process? I expect there is a pinned window that for monitoring purpose such as displaying current log.txt that has been generated by <code>logging</code> and controlling purpose such as pause and resume button? Actually these are inteded for debugging purposes.</p> <p>Here are my simple code:</p> <p><strong>main.py</strong></p> <pre><code>if __name__ == '__main__': # Get current timestamp current_time: str = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S') # Initialize logging config logging.basicConfig( level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s', filename=f'./log/bot_{current_time}.log' ) logging.info('Start of program.') execute() logging.info('End of program.') </code></pre> <p><strong>executes.py</strong></p> <pre><code>''' Here list of order of actions to be executed that has been defined from actions.py file. ''' from actions import open_firefox, open_new_tab def execute(): &quot;&quot;&quot; This is main program. &quot;&quot;&quot; open_firefox() open_new_tab() </code></pre> <p><strong>actions.py</strong>:</p> <pre><code>''' Here is list of actions defined that wrapped in functions to be used in executes.py file. ''' from time import sleep import logging import pyautogui as pag def open_firefox(): &quot;&quot;&quot; Open Firefox browser. &quot;&quot;&quot; logging.info('Open Firefox browser.') firefox_icon_location = pag.locateCenterOnScreen('./asset/firefox.png', confidence=0.75) pag.moveTo(firefox_icon_location, duration=1) sleep(1) pag.leftClick() sleep(1) logging.info('Firefox browser has been opened.') def open_new_tab(): &quot;&quot;&quot; Open new tab. &quot;&quot;&quot; pag.hotkey('ctrl', 't') sleep(1) </code></pre>
<python><debugging><controls><monitoring><pyautogui>
2024-01-27 11:38:37
1
1,252
Muhammad Ikhwan Perwira
77,891,012
18,493,710
Cannot fetch Camunda job variables in pyzeebe worker
<p>I have written a Python <code>pyzeebe</code> worker that is supposed to interact with my database using the values that the variables of the previous user task in the <code>Camunda BPMN</code> hold. I have defined a user task with a form. I have also created a service task and defined the <code>taskDefinition</code>.</p> <p>It seems like the worker can indeed communicate with my <code>zeebe</code> but it is unable to fetch the variables of the form.</p> <pre class="lang-py prettyprint-override"><code>from pyzeebe import ZeebeWorker, Job, create_insecure_channel import asyncio import psycopg2 import logging async def handle_database_transaction(job: Job) -&gt; dict: variables = job.variables # Log the received variables at INFO level logging.info(f&quot;Received variables: {variables}&quot;) selected_date = variables.get('selected_date') selected_project_id = int(variables.get('selected_project_id')) if variables.get('selected_project_id') else None # Log the extracted field values logging.info(f&quot;Selected date: {selected_date}, Project ID: {selected_project_id}&quot;) # if selected_date and selected_project_id is not None, I perform the insert query with my database. return {} </code></pre> <p>And here is the MappingIO in my BPMN:</p> <pre class="lang-xml prettyprint-override"><code>&lt;zeebe:ioMapping&gt; &lt;zeebe:output source=&quot;=select_0swn49&quot; target=&quot;output_project&quot; /&gt; &lt;zeebe:output source=&quot;=datetime_gporzh&quot; target=&quot;output_date&quot; /&gt; &lt;/zeebe:ioMapping&gt; &lt;zeebe:ioMapping&gt; &lt;zeebe:input source=&quot;=output_project&quot; target=&quot;selected_project_id&quot; /&gt; &lt;zeebe:input source=&quot;=output_date&quot; target=&quot;selected_date&quot; /&gt; &lt;/zeebe:ioMapping&gt; </code></pre> <p>What part of the procedure am I doing wrong?</p>
<python><camunda><zeebe>
2024-01-27 11:28:45
1
418
okaeiz
77,890,704
2,737,779
Just installed Python 3.12 but it doesn't work
<p>I'm trying to clone a repo from GitHub and the 3rd step involves running Python3 pip install to install the requirements.</p> <p>I didn't have python installed so I installed it using Winget with the command &quot;Winget install python.python 3.12&quot;</p> <p>However, I'm still getting an error message telling me Python isn't installed.</p> <p>After installing Python and retrying the command to install the requirement I got the error telling me Python wasn't installed.</p> <p>I opened the start menu and saw the various pieces of Python that were installed, IDLE, Docs and Python 3.12 64bit. I opened both Python and IDLE and saw no errors.</p> <p>So, I restarted, then opened the terminal and ran the command again: <code>python3 -m pip install -r requirements.txt</code> but again it showed me the error with the message I could type &quot;Python&quot; to open the Microsoft Store and install from there.</p> <p>Command and error below:<br /> $&gt; python3 -m pip install -r requirements.txt Python was not found; run without arguments to install from the Microsoft Store or disable this shortcut from Settings &gt; Manage App Execution Aliases.</p> <p>Edit: additional troubleshooting. I tried python3 --version and got the python not installed error so I retried it with python -- version and it returned Python 3.12.1</p> <p>So clearly it is installed but maybe not installed correctly as it doesn't appear under Python3?</p>
<python><windows>
2024-01-27 09:44:03
0
315
Nathaniel
77,890,446
14,098,117
How to call via Google places API(New) Text Search (ID Only) in python?
<p>According to documentation <a href="https://developers.google.com/maps/documentation/places/web-service/usage-and-billing#id-textsearch" rel="nofollow noreferrer">here</a> if I ask, just for place_id, the call should be for free. I tried various versions of call, but all of them either did not work or gave me not just id, but also other basic info such as address, location, rating, etc. Bard told me that it is ok and as I asked just for id, the calls will really cost nothing. However it is not possible to see cost of each call and based on whole day billing I am afraid I paid also for these calls.</p> <p>Should this be really free and even though address, location, etc. should be priced as basic info, it is actually given for free or should I change sth in my code?</p> <p>My code:</p> <pre><code>def find_place(query, api_key): base_url = &quot;https://maps.googleapis.com/maps/api/place/textsearch/json&quot; params = { &quot;query&quot;: query, &quot;inputtype&quot;: &quot;textquery&quot;, &quot;fieldMask&quot;: &quot;place_id&quot;, # Only retrieve the place ID &quot;key&quot;: api_key # Include API key } response = requests.get(base_url, params=params) return response.json() </code></pre>
<python><google-places-api>
2024-01-27 07:57:52
1
844
Emil Haas
77,890,294
10,342,778
TFDS extract file name
<p>In tfds, we do</p> <pre class="lang-py prettyprint-override"><code>def _generate_examples(self, file_paths): for file_path in file_paths: yield file_path, { 'image': image, 'label': label } </code></pre> <p>I am already yielding file_path, how can I get it when i load the dataset? i.e</p> <pre class="lang-py prettyprint-override"><code>data = tfds.load('dataset') # print all file names </code></pre>
<python><tensorflow><tensorflow-datasets>
2024-01-27 06:42:02
0
2,742
Ahmad Anis
77,890,206
1,029,902
Replacing a row value in csv with a value from a scrape function
<p>I have a script that opens two csv files. It scrapes the links in venues.csv for an <code>id</code> value for a beer from a webpage, then uses that <code>id</code> to find corresponding data from the beer.csv and then writes to a separate csv. One of the columns in beer.csv is rating_score. However, I want to use the rating_value that I can fetch using the scrape instead. So I want to replace the value in the rating_score of the row that is eventually written to the thirds csv with the rating_value that I can scrape from the page as it is more current.</p> <p>This is my current script:</p> <pre><code>import pandas as pd import warnings import requests from bs4 import BeautifulSoup import re import os import time import sys def resource_path(relative_path: str) -&gt; str: try: base_path = sys._MEIPASS except Exception: base_path = os.path.dirname(__file__) return os.path.join(base_path, relative_path) warnings.simplefilter(action=&quot;ignore&quot;, category=FutureWarning) df = pd.read_csv(resource_path(&quot;venues.csv&quot;)) all_beer = pd.read_csv(resource_path(&quot;beer_full.csv&quot;), encoding=&quot;ISO-8859-1&quot;) fname = &quot;menu_beers.csv&quot; def get_menu_beers(soup): global bar_beer_ids beers_all = soup.find_all(&quot;ul&quot;, {&quot;class&quot;: &quot;menu-section-list&quot;}) for beer_group in beers_all: beers = beer_group.find_all(&quot;li&quot;) for beer in beers: details = beer.find(&quot;div&quot;, {&quot;class&quot;: &quot;beer-details&quot;}) a_href = details.find(&quot;a&quot;, {&quot;class&quot;: &quot;track-click&quot;}).get(&quot;href&quot;) id_num = re.findall(r&quot;\d+&quot;, a_href) beer_id = int(id_num[-1]) bar_beer_ids.append(beer_id) name_ = details.find(&quot;a&quot;, {&quot;class&quot;: &quot;track-click&quot;}).text rating_value = details.find('div', {'class': 'caps small'})['data-rating'] for index, row in df.iterrows(): url_base = ( &quot;https://example.com/v/&quot; + str(row[&quot;venue_slug&quot;]) + &quot;/&quot; + str(row[&quot;venue_id&quot;]) ) url = url_base + &quot;/beers&quot; bar_beer_ids = [] response = requests.get(url, headers={&quot;User-agent&quot;: &quot;Mozilla/5.0&quot;}) if response.status_code == 200: soup = BeautifulSoup(response.content, &quot;html.parser&quot;) try: try: select_options = soup.find_all(&quot;select&quot;, {&quot;class&quot;: &quot;menu-selector&quot;}) if len(select_options) &gt; 0: options_list = select_options[0].find_all(&quot;option&quot;) menu_ids = [] for option in options_list: menu_ids.append(int(option[&quot;value&quot;])) menu_urls = [] for menu_id in menu_ids: menu_url = str(url_base) + &quot;?menu_id=&quot; + str(menu_id) menu_urls.append(menu_url) for url in menu_urls: res = requests.get(url, headers={&quot;User-agent&quot;: &quot;Mozilla/5.0&quot;}) s = BeautifulSoup(res.text, &quot;html.parser&quot;) get_menu_beers(s) else: get_menu_beers(soup) except: print(&quot;Error at &quot; + str(row[&quot;venue_name&quot;])) bar_beer_ids = set(bar_beer_ids) bar_beer_ids = list(bar_beer_ids) location_distance = row[&quot;distance_from_point&quot;] bar_beers = all_beer.loc[all_beer[&quot;beer_id&quot;].isin(bar_beer_ids)] bar_beers.insert(0, &quot;venue_name&quot;, str(row[&quot;venue_name&quot;])) bar_beers.insert(1, &quot;location_distance&quot;, round(location_distance, 2)) del bar_beers[&quot;beer_id&quot;] bar_beers.to_csv(fname, header=False, index=False, mode=&quot;a&quot;) print(&quot;Fetching menu for &quot; + str(row[&quot;venue_name&quot;])) except: print(&quot;No menu for &quot; + str(row[&quot;venue_name&quot;])) pass elif response.status_code == 429: print(&quot;Server currently receiving too many requests. Pausing for 1 minute&quot;) time.sleep(60) print(&quot;Resuming...&quot;) continue else: print(&quot;Could not process &quot; + str(row[&quot;venue_name&quot;])) print(&quot;\n \n \n \n \n---------- DONE ------------&quot;) print(&quot;Saving results to csv...&quot;) print( &quot;Column headers for csv in order are: beer_id,beer_name,brewery_name,beer_style,abv,rating_score,rating_count&quot; ) </code></pre> <p>I can scrape the rating_value easily with:</p> <pre><code>rating_value = details.find('div', {'class': 'caps small'})['data-rating'] </code></pre> <p>I just don't know how to insert it into the row data that is eventually written to the menu_beers.csv file</p> <p>I have tried using:</p> <pre><code>row['rating_score'] = rating_value </code></pre> <p>right after where I define location_distance but this did not work either.</p> <p>The column headers for beers.csv are</p> <pre><code>beer_id, beer_name, brewery_name,brewery_id, type_name, beer_abv, rating_score, rating_count, in_production </code></pre> <p>The column headers for venues.csv are</p> <pre><code>venue_id,venue_name,venue_slug,venue_full_address,venue_city,venue_state,venue_country,is_verified,lat,lng </code></pre> <p>Any help would be greatly appreciated</p>
<python><pandas><list><csv><web-scraping>
2024-01-27 05:59:24
1
557
Tendekai Muchenje
77,889,994
3,765,883
Python example f"{heading:=^30}"
<p>I ran across this in a basic Python tutorial (<a href="https://docs.python.org/3.10/tutorial/controlflow.html" rel="nofollow noreferrer">https://docs.python.org/3.10/tutorial/controlflow.html</a>). The example is:</p> <pre><code>&gt;&gt;&gt; heading = &quot;Centered string&quot; &gt;&gt;&gt; f&quot;{heading:=^30}&quot; '=======Centered string========' </code></pre> <p>I did some searching but all I could find was a reference to '^' as an X-OR function.</p> <p>Can someone point me in the right direction?</p> <p>TIA</p>
<python><operators><centering>
2024-01-27 03:46:33
0
327
user3765883
77,889,973
23,190,147
How do I get the right installation of python?
<p>I'm having trouble finding the right installation of python, and I am on windows.</p> <p>When I first installed python (IDLE) by going to <a href="https://python.org" rel="nofollow noreferrer">https://python.org</a>, and downloading the version, I did not know that there was a version of python in the microsoft store for windows.</p> <p>For some reason, whenever I needed to run commands in the shell, say, install a python module, I needed to do this: <code>py -m pip install module</code>, instead of the traditional: <code>python -m pip install module</code>.</p> <p>I don't know why python was called &quot;py&quot;, when I ran: <code>where python</code>, it showed a file path that ended with &quot;python.exe&quot;, so clearly python is being installed. And of course, IDLE and the python interpreter and all of python always worked...but I never figured out the reason python was called &quot;py&quot;.</p> <p>Now I saw that there was a version of IDLE, but for windows, in the microsoft store. So I went and installed it, but instead of giving me an IDLE app, like I expected, instead it gave me the windows terminal, with only the shell, not where I can write actual files (multiple lines), just a shell, which is useless to me.</p> <p>Of course, now <code>python -m pip install module</code> works, but I think my system is completely messed up. Because when I tried: <code>python -m pip show keyboard</code> (keyboard is a module that I already have installed), I got this error warning that the package was not found. But when I tried: <code>py -m pip show keyboard</code>, it worked.</p> <p>My assumption is that I now have <em>two</em> versions of python, the regular IDLE one that I installed (that is named py), and the new one that I just installed recently, which is not what was expected. I'll take this moment to note that when I installed the new one, my old one was still installed. Is it possible that the old one interfered with the new one being installed (so that I can write multiple lines and have the usual IDLE format)?</p> <p>I don't just want a shell, I want the regular IDLE, where I can write multiple lines of code and run it, and I also want to know why python was originally named py at the start?</p> <p>UPDATE:</p> <p>Resolved, looks like I can just open IDLE from the terminal, by typing in <code>idle</code>, it's that simple. I would have preferred a faster method to open the app, but it's no big issue.</p>
<python><windows><installation><python-idle>
2024-01-27 03:36:55
0
450
5rod
77,889,955
9,095,603
Selenium uc webdriver chrome instances proliferating, eating up all memory and CPU
<p>I have included steps to close the selenium python webdriver, and even restart it regularly to avoid accumulating problems. But my Windows 10 Task Manager shows a steady increase in memory and CPU usage up to the point of saturation and freezing up my computer, and the number of Chrome instances proliferate, despite my steps to close the webdriver regularly:</p> <pre><code>import csv import json import os import time from selenium import webdriver import undetected_chromedriver as uc from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException import logging from selenium.webdriver.chrome.options import Options # Setup logging current_time = time.strftime(&quot;%Y%m%d_%H%M%S&quot;) logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s', handlers=[ logging.FileHandler(f&quot;log_{current_time}.log&quot;), logging.StreamHandler() ]) logger = logging.getLogger() # Global driver variable global_driver = None def close_driver(): global global_driver if global_driver: try: global_driver.quit() except Exception as e: logger.error(f&quot;Error closing the driver: {e}&quot;) global_driver = None def create_driver(): close_driver() # Ensure any existing driver is closed try: global global_driver chrome_options = Options() chrome_options.add_argument(&quot;--headless&quot;) # Run Chrome in headless mode global_driver = uc.Chrome(options=chrome_options) return global_driver except Exception as e: logger.error(f&quot;Error creating new driver instance: {e}&quot;) return None # Global setting for scraping quantity (1 or 2) SCRAPE_QTY = 2 # Set to 2 if scraping for 2 qty values representative_postcodes = [ (&quot;SYDMET&quot;, &quot;EASTGARDENS&quot;, &quot;NSW&quot;, &quot;2036&quot;), (&quot;NSWTWN&quot;, &quot;BAY VILLAGE&quot;, &quot;NSW&quot;, &quot;2261&quot;), (&quot;NSWREG&quot;, &quot;CATTLE CREEK&quot;, &quot;NSW&quot;, &quot;2339&quot;), (&quot;QLDTWN&quot;, &quot;BOTTLE CREEK&quot;, &quot;QLD&quot;, &quot;2469&quot;), (&quot;MELMET&quot;, &quot;CROSS KEYS&quot;, &quot;VIC&quot;, &quot;3041&quot;), (&quot;VICTWN&quot;, &quot;BELL PARK&quot;, &quot;VIC&quot;, &quot;3215&quot;), (&quot;VICREG&quot;, &quot;TERANG&quot;, &quot;VIC&quot;, &quot;3264&quot;), (&quot;BRIMET&quot;, &quot;ASPLEY&quot;, &quot;QLD&quot;, &quot;4034&quot;), (&quot;QLDREG&quot;, &quot;CARPENDALE&quot;, &quot;QLD&quot;, &quot;4344&quot;), (&quot;ADEMET&quot;, &quot;OAKLANDS PARK&quot;, &quot;SA&quot;, &quot;5046&quot;), (&quot;SAREG&quot;, &quot;CAPE JERVIS&quot;, &quot;SA&quot;, &quot;5204&quot;), (&quot;PERMET&quot;, &quot;KARAKIN&quot;, &quot;WA&quot;, &quot;6044&quot;), (&quot;WAREG&quot;, &quot;BALBARRUP&quot;, &quot;WA&quot;, &quot;6258&quot;), (&quot;TAZTWN&quot;, &quot;CAPE PILLAR&quot;, &quot;TAS&quot;, &quot;7182&quot;), (&quot;TASMAN&quot;, &quot;BLACK HILLS&quot;, &quot;TAS&quot;, &quot;7140&quot;), (&quot;DARMET&quot;, &quot;ANULA&quot;, &quot;NT&quot;, &quot;0812&quot;), (&quot;NTREG&quot;, &quot;ALICE SPRINGS&quot;, &quot;NT&quot;, &quot;0870&quot;) ] def read_source_csv(file_path): try: with open(file_path, newline='', encoding='utf-8') as csvfile: reader = csv.DictReader(csvfile) return list(reader) except Exception as e: logger.error(f&quot;Error reading CSV file: {e}&quot;) return [] def append_to_csv(file_name, data): try: file_exists = os.path.isfile(file_name) with open(file_name, 'a', newline='', encoding='utf-8') as f: writer = csv.writer(f) if not file_exists: writer.writerow(['sku', 'postcode_group', 'suburb', 'state', 'postcode', 'rate']) writer.writerow(data) except Exception as e: logger.error(f&quot;Error writing to CSV file: {e}&quot;) def save_last_processed(sku, postcode_group): try: with open('last_processed.json', 'w') as file: json.dump({'last_processed_sku': sku, 'last_processed_postcode_group': postcode_group}, file) except Exception as e: logger.error(f&quot;Error saving last processed record: {e}&quot;) def load_last_processed(): try: with open('last_processed.json', 'r') as file: return json.load(file) except FileNotFoundError: return {'last_processed_sku': None, 'last_processed_postcode_group': None} except Exception as e: logger.error(f&quot;Error loading last processed record: {e}&quot;) return {'last_processed_sku': None, 'last_processed_postcode_group': None} def scrape_shipping_rates(driver, sku, product_url, postcode_group): try: driver.get(product_url) logger.info(&quot;Page loaded.&quot;) # Debugging print # Check for the 404 error message on the page try: WebDriverWait(driver, 3).until( EC.presence_of_element_located((By.XPATH, &quot;//h1[text()='Whoops, our bad...']&quot;)) ) logger.info(f&quot;404 Error for {sku} at {product_url}&quot;) append_to_csv('404.csv', [sku, *postcode_group, '404 Error']) return '404 Error' except TimeoutException: # If the 404 message is not found, continue with the scraping logger.error(&quot;No 404 Error detected. Continuing scraping.&quot;) # Handling quantity selection if SCRAPE_QTY is set to 2 if SCRAPE_QTY == 2: try: qty_input = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, &quot;qty&quot;)) ) qty_input.clear() qty_input.send_keys(&quot;2&quot;) logger.info(&quot;Quantity set to 2.&quot;) # Debugging print except TimeoutException: logger.error(&quot;Quantity input not found or loading issue.&quot;) # Execute JavaScript to get HTTP status code status_code = driver.execute_script( &quot;return window.performance.getEntriesByType('navigation')[0].responseStart;&quot; ) # Wait for the city input to appear and enter the postcode city_input = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, &quot;city&quot;)) ) logger.info(&quot;City input found.&quot;) # Debugging print city_input.clear() # Clear the input field city_input.send_keys(postcode_group[3]) # Enter the representative postcode logger.info(&quot;Postcode entered.&quot;) # Debugging print time.sleep(2) suggestion = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.CSS_SELECTOR, &quot;#suggetion-box ul li&quot;)) ) logger.info(&quot;Suggestion box found.&quot;) # Debugging print suggestion.click() logger.info(&quot;Suggestion clicked.&quot;) # Debugging print # Click the Get Rate button get_rate_button = driver.find_element(By.ID, &quot;get_rate&quot;) get_rate_button.click() logger.info(&quot;Get rate button clicked.&quot;) # Debugging print # Check for error message try: WebDriverWait(driver, 3).until( EC.visibility_of_element_located((By.ID, &quot;shipping_rate_estimation_error&quot;)) ) append_to_csv('red_errormsg_returned.csv', [sku, *postcode_group, 'Error']) return except TimeoutException as e: logger.error(f&quot;TimeoutException for error message check: {e}&quot;) # No error message, continue # Check for rate table try: WebDriverWait(driver, 3).until( EC.visibility_of_element_located((By.ID, &quot;result-table&quot;)) ) rate_span = driver.find_element(By.CSS_SELECTOR, &quot;#result-table .price&quot;) rate = rate_span.text.replace('$', '').replace(',', '') append_to_csv('fed_scraped_shipping_rates.csv', [sku, *postcode_group, rate]) except TimeoutException as e: logger.error(f&quot;TimeoutException for rate table check: {e}&quot;) append_to_csv('rate_not_detected.csv', [sku, *postcode_group, 'No Rate']) # except Exception as e: # logger.info(f&quot;Error processing SKU {sku} for postcode group {postcode_group[0]}: {e}&quot;) except Exception as e: error_message = str(e) if &quot;Out of Memory&quot; in error_message or &quot;Timed out receiving message from renderer&quot; in error_message: logger.error(f&quot;Out of Memory error processing SKU {sku} for postcode group {postcode_group[0]}&quot;) append_to_csv('out_of_memory_errors.csv', [sku, *postcode_group, 'Out of Memory']) return 'Out of Memory' else: logger.error(f&quot;Error processing SKU {sku} for postcode group {postcode_group[0]}: {e}&quot;) return 'Error' return 'Success' def find_starting_index(source_data, last_processed_sku): for index, row in enumerate(source_data): if row['SKU'] == last_processed_sku: return index return 0 def get_postcode_group_index(postcode_group_name): for index, (name, _, _, _) in enumerate(representative_postcodes): if name == postcode_group_name: return index return -1 def process_sku(driver, row, postcode_group): try: result = scrape_shipping_rates(driver, row['SKU'], row['Product URL'], postcode_group) if result == 'Success': logger.info(f&quot;Successfully processed SKU: {row['SKU']}, Postcode Group: {postcode_group[0]}&quot;) save_last_processed(row['SKU'], postcode_group[0]) elif result == 'Out of Memory': # Don't update last_processed, but return the 'Out of Memory' status logger.error(f&quot;Out of Memory error for SKU: {row['SKU']}, Postcode Group: {postcode_group[0]}&quot;) return 'Out of Memory' else: logger.error(f&quot;Error encountered for SKU: {row['SKU']}, Postcode Group: {postcode_group[0]}&quot;) time.sleep(1) return result except Exception as e: # Close the driver in case of an exception and then re-raise the exception if driver: driver.quit() raise e def read_processed_skus(file_name): processed_skus = set() if os.path.isfile(file_name): with open(file_name, 'r', encoding='utf-8') as file: reader = csv.reader(file) next(reader, None) # Skip header for row in reader: processed_skus.add((row[0], row[1])) # SKU and postcode group return processed_skus def main(): global global_driver logger.info(&quot;Starting script...&quot;) try: source_data = read_source_csv(&quot;Data feed-22.01.24.csv&quot;) if not source_data: logger.info(&quot;No data found in source file, exiting.&quot;) return processed_skus = read_processed_skus(&quot;fed_scraped_shipping_rates.csv&quot;) errors = read_processed_skus(&quot;out_of_memory_errors.csv&quot;) | read_processed_skus(&quot;rate_not_detected.csv&quot;) | read_processed_skus(&quot;red_errormsg_returned.csv&quot;) | read_processed_skus(&quot;404.csv&quot;) process_count = 0 driver = create_driver() # Create a new driver instance for row in source_data: for postcode_group in representative_postcodes: if (row['SKU'], postcode_group[0]) in processed_skus or (row['SKU'], postcode_group[0]) in errors: continue result = process_sku(driver, row, postcode_group) process_count += 1 if result == 'Out of Memory' or process_count &gt;= 25: driver.quit() # Close the current driver driver = create_driver() # Create a new driver instance process_count = 0 except Exception as e: logger.error(f&quot;An error occurred in the main function: {e}&quot;) finally: close_driver() logger.info(&quot;Script finished.&quot;) if __name__ == '__main__': main() </code></pre>
<python><google-chrome><selenium-webdriver><selenium-chromedriver><cpu-usage>
2024-01-27 03:23:09
1
423
ptrcao
77,889,938
3,620,605
Reshaping and Transposing NumPy Array to Match Specific Flattened Order
<p>I have a numpy array of shape <code>(9,768)</code>, it is reshaped into a 3D array and then flattened. I am trying to find the combination of 2D operations like reshaping and transposing that will result in the same order when flattened. Here's an example</p> <pre><code>&gt;&gt;&gt; m = np.random.normal(size=(9,768)) &gt;&gt;&gt; A = m.reshape((9,12,64)) &gt;&gt;&gt; A = A.transpose((1,2,0)) &gt;&gt;&gt; a = m.T.reshape((12, 576)) &gt;&gt;&gt; np.array_equal(a.ravel(), A.ravel()) True </code></pre> <p>A's axes have been transposed and the equivalent in 2D is a transpose followed by a reshape.</p> <p>What would be the combination of 2D operations that can have the same effect as</p> <pre><code>&gt;&gt;&gt; B = m.reshape((9,12,64)) &gt;&gt;&gt; B = B.transpose((1,0,2)) </code></pre> <p>I want to limit myself to 2d reshape, transpose, or flatten and avoid any operation that will create a 3d or higher dimensional tensor</p>
<python><numpy>
2024-01-27 03:14:33
1
1,158
Effective_cellist
77,889,768
10,107,897
Swift PythonKit with error for no module found
<p>I'm trying to run some python code from Xcode but wasn't sure why it can't find the module. All the tutorial I found only shows importing the simple .py file that contains function only no import on the top. I'm using python3. TIA</p> <pre><code>import SwiftUI import PythonKit struct ContentView: View { var body: some View { VStack { Image(systemName: &quot;globe&quot;) .imageScale(.large) .foregroundStyle(.tint) Text(&quot;Hello, world!&quot;) } .onAppear() { runPython() } .padding() } func runPython() { let sys = Python.import(&quot;sys&quot;) sys.path.append(Config.Path.basePath) let test = Python.import(&quot;example&quot;) let response = test.hello() print(response) } } </code></pre> <p>example.py</p> <pre><code>import pandas def hello(): return &quot;Hello Python&quot; </code></pre> <p>Error:</p> <blockquote> <p>PythonKit/Python.swift:706: Fatal error: 'try!' expression unexpectedly raised an error: Python exception: No module named 'pandas'</p> </blockquote>
<python><swift><swift-pythonkit>
2024-01-27 01:40:58
1
427
14079_Z
77,889,719
7,218,062
With Pandas json_normalize how can I select only some of the dictionary fields?
<p>I'm trying to use Pandas json_normalize to import the cik, entityName, end, fp, and val fields from this json:</p> <pre><code>{ &quot;cik&quot;: 320193, &quot;entityName&quot;: &quot;Apple Inc.&quot;, &quot;facts&quot;: { &quot;dei&quot;: { &quot;EntityCommonStockSharesOutstanding&quot;: { &quot;label&quot;: &quot;Entity Common Stock, Shares Outstanding&quot;, &quot;units&quot;: { &quot;shares&quot;: [ { &quot;end&quot;: &quot;2023-07-21&quot;, &quot;val&quot;: 15634232000, &quot;accn&quot;: &quot;0000320193-23-000077&quot;, &quot;fy&quot;: 2023, &quot;fp&quot;: &quot;Q3&quot;, &quot;form&quot;: &quot;10-Q&quot;, &quot;filed&quot;: &quot;2023-08-04&quot;, &quot;frame&quot;: &quot;CY2023Q2I&quot; }, { &quot;end&quot;: &quot;2023-10-20&quot;, &quot;val&quot;: 15552752000, &quot;accn&quot;: &quot;0000320193-23-000106&quot;, &quot;fy&quot;: 2023, &quot;fp&quot;: &quot;FY&quot;, &quot;form&quot;: &quot;10-K&quot;, &quot;filed&quot;: &quot;2023-11-03&quot;, &quot;frame&quot;: &quot;CY2023Q3I&quot; } ] } } } } </code></pre> <p>I would like the end result to look like this:</p> <pre><code> end val fy fp cik entityName 0 2023-07-21 15634232000 2023 Q3 320193 Apple Inc. 1 2023-10-20 15552752000 2023 FY 20320193 Apple Inc. </code></pre> <p>So far I have this code:</p> <pre><code>df = pd.json_normalize(json_data, record_path=['facts', 'dei', 'EntityCommonStockSharesOutstanding', 'units', 'shares'], meta=['cik', 'entityName']) </code></pre> <p>However, this code will normalize all fields in the shares list.</p> <pre><code> end val accn fy fp form filed frame cik entityName 0 2023-07-21 15634232000 0000320193-23-000077 2023 Q3 10-Q 2023-08-04 CY2023Q2I 320193 Apple Inc. 1 2023-10-20 15552752000 0000320193-23-000106 2023 FY 10-K 2023-11-03 CY2023Q3I 320193 Apple Inc. </code></pre> <p>I know that I can delete the columns that I don't want. Is there a cleaner way I can do this by just specifying the fields I care about as part of the json_normalize call?</p>
<python><pandas>
2024-01-27 01:21:49
1
1,591
jmq
77,889,644
1,982,032
How can use higher pandas version when pandas-market-calendars requires lower version?
<p>Upgrade the pandas version:</p> <pre><code>pip install --upgrade pandas Requirement already satisfied: pandas in ./lib/python3.11/site-packages (1.5.3) Collecting pandas Downloading pandas-2.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (19 kB) Requirement already satisfied: numpy&lt;2,&gt;=1.23.2 in ./lib/python3.11/site-packages (from pandas) (1.26.2) Requirement already satisfied: python-dateutil&gt;=2.8.2 in ./lib/python3.11/site-packages (from pandas) (2.8.2) Requirement already satisfied: pytz&gt;=2020.1 in ./lib/python3.11/site-packages (from pandas) (2023.3.post1) Requirement already satisfied: tzdata&gt;=2022.7 in ./lib/python3.11/site-packages (from pandas) (2023.3) Requirement already satisfied: six&gt;=1.5 in ./lib/python3.11/site-packages (from python-dateutil&gt;=2.8.2-&gt;pandas) (1.16.0) Downloading pandas-2.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.0/13.0 MB 289.4 kB/s eta 0:00:00 Installing collected packages: pandas Attempting uninstall: pandas Found existing installation: pandas 1.5.3 Uninstalling pandas-1.5.3: Successfully uninstalled pandas-1.5.3 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. pandas-market-calendars 4.3.3 requires pandas&lt;2.0,&gt;=1.1, but you have pandas 2.2.0 which is incompatible. Successfully installed pandas-2.2.0 </code></pre> <p>Is there no way to use the pandas-market-calendars then?</p> <pre><code>pandas-market-calendars 4.3.3 requires pandas&lt;2.0,&gt;=1.1, but you have pandas 2.2.0 which is incompatible. </code></pre>
<python><python-3.x><pandas><pip><dependencies>
2024-01-27 00:46:43
1
355
showkey
77,889,581
1,509,264
Typing Checking for a value to be any string except one literal
<p>The <code>Union[X, Y]</code> type is for when the type can be either <code>X</code> or <code>Y</code>. I want the opposite when the type must be <code>X</code> but never <code>Y</code> so either:</p> <ul> <li>Any <code>X</code> but not <code>Y</code> where <code>Y</code> is a specific sub-class of <code>X</code>; or</li> <li>Any base type except one particular literal of that base type that should be treated differently.</li> </ul> <p>For example, if there was an <code>Except</code> special form equivalent to the inverse of <code>Union</code> then I would use:</p> <pre class="lang-python prettyprint-override"><code>class Attribute: @overload def __init__( name: Literal[&quot;id&quot;] = ..., required: Literal[True] = ..., null: Literal[False] = ..., ) -&gt; None: pass @overload def __init__( name: Except[str, Literal[&quot;id&quot;]] = ..., required: bool = ..., null: bool = ..., ) -&gt; None: pass def __init__( name: str, required: bool = False, null: bool = False, ) -&gt; None: pass </code></pre> <p>In this case, if the <code>Attribute</code> has the <code>name</code> <code>&quot;id&quot;</code> then it should always have the <code>required</code> property and should never have the <code>null</code> property whereas if it has any other <code>name</code> then those properties are optional.</p> <p>Programmatically, it can be handled within the <code>__init__</code> function that if the properties are not the expected values then exceptions are thrown but I would also like to make it explicitly clear during static type checking that <code>Attribute(&quot;id&quot;, False, False)</code> (and other invalid combinations) would not be valid.</p> <p>Is there any way to specify this during type checking?</p>
<python><python-typing>
2024-01-27 00:17:14
0
172,539
MT0
77,889,482
1,132,708
Writing a custom Keras layer that returns a vector from a dictionary for a given token
<p>I'm working on a problem where I've tokenized a corpus, but there are specific words that have additional features I want to add to an embedding layer's output. The output dimension needs to be the precomputed vectors dimension + the embedding dimension. I'm storing the specific words in a dictionary where <code>word_dict[token_idx]</code> returns the <code>token_vec</code>.</p> <p>I'm trying to build a custom Keras layer that will look up those integers from a passed in dictionary, and this is what I have so far:</p> <pre><code>class SpecialWords(Layer): def __init__(self, words_dict, output_dim, **kwargs): super(SpecialWords, self).__init__(**kwargs) self.words_dict = words_dict self.output_dim = output_dim # pad the words_dict vectors to the output_dim shape for key in self.words_dict.keys(): vec = self.words_dict[key] self.words_dict[key] = np.pad(vec, ((0,output_dim-len(vec)),(0,0))).T def build(self, input_shape): self.words_table = self.add_weight( name='milestone_table', shape=(len(self.words_dict), self.output_dim), initializer=tf.keras.initializers.Constant(np.array(list(self.words_dict.values()))), trainable=False ) super(SpecialWords, self).build(input_shape) def call(self, inputs, **kwargs): # looks up the vectors vecs = tf.nn.embedding_lookup(self.words_table, inputs) return vecs </code></pre> <p>I keep getting errors about, in this case, the <code>vecs</code> in the <code>call</code> function not existing when I try to compile the model using the Keras functional API. The model look like this:</p> <pre><code>input = Input(shape=(None, 1), dtype='int32') embedding_branch = Embedding(input_dim=N_tokens, output_dim=D_embedding+D_words)(input) external_branch = SpecialWords(words_dict=words_dict, output_dim=D_embedding+D_words)(input) merged_model = Add()([embedding_branch.output, external_branch.output]) </code></pre> <p>Any tips on what I'm missing here would be appreciated.</p>
<python><keras>
2024-01-26 23:34:04
0
470
webb
77,889,447
1,214,800
Make Mypy accept a generic dict type with an empty dict default value
<p>I'm trying to make Mypy accept a generic param that conforms to a dict. No matter what I do, it is always complaining with:</p> <blockquote> <p>Incompatible default for argument &quot;overrides&quot; (default has type &quot;dict[str, Any]&quot;, argument has type &quot;T_Overrides&quot;) [Mypyassignment]</p> </blockquote> <p>I know there are some Mypy issues with generics and defaults (this is using 3.11), so maybe there's nothing I can do here?</p> <pre><code>from collections.abc import Hashable from typing import Any, Never, TypeVar K = TypeVar(&quot;K&quot;, bound=Hashable) V = TypeVar(&quot;V&quot;) StrKeyDict = dict[str, Any] EmptyDict = dict[str, Never] # Also tried: # GenericDict = dict[K, V] # StrKeyDict = GenericDict[str, Any] # EmptyDict = GenericDict[K, Never] T = TypeVar(&quot;T&quot;, bound=StrKeyDict | EmptyDict) # Also tried: # T = TypeVar(&quot;T&quot;, bound=dict[str, Any]) class ScriptArgs: def __init__(self, overrides: T = {}): # ~~ Mypy complains here ... </code></pre>
<python><python-typing><mypy>
2024-01-26 23:19:09
0
73,674
brandonscript
77,889,409
8,734,934
Specify browser for login
<p>I am writing a python script to query something from graph. I am using <a href="https://github.com/AzureAD/microsoft-authentication-library-for-python" rel="nofollow noreferrer">MSAL for python</a> to get the login and get the token. When using <em>acquire_token_interactive</em> the login prompt will redirect to my main browser. But I need to use a different browser for multiple reasons.</p> <p>Is there a way to tell the script for example to use a specific browser or something like that?</p>
<python><msal>
2024-01-26 23:07:11
1
475
devil_inside
77,889,205
7,087,604
Loading Python joblib package and sharing it in Flask between WSGI (Phusion Passenger) workers
<p>I have a pre-trained <a href="https://radimrehurek.com/gensim/models/word2vec.html" rel="nofollow noreferrer">Gensim Word2Vec</a> model that IΒ use as a global variable in a Flask web app:</p> <pre class="lang-py prettyprint-override"><code>app = Flask(__name__) engine = gensim.Models.load(&quot;search_engine&quot;) # Python Class instance @app.route('/api', methods=['GET']) def api(): query = requests.args(&quot;query&quot;, &quot;&quot;) # dummy example: output the n closest words from the query in a JSON response. closest = json.dumps(engine.wv.most_similar(query)) return app.response_class( response=closest, status=200, mimetype='application/json') </code></pre> <p>The model is quite heavy and rather slow to load in memory. Once it's loaded, due to internal Numpy matrix representation, it runs quite fast, and is used read-only.</p> <p>Problem is, the Flask app runs on Phusion Passenger WSGI server, into CPanel, which spawns an arbitrary number of processes depending on requests. IΒ have no control on the number of processes given it's a shared hosting.</p> <p>Not only does that make the server hit its I/O limit, but loading the joblib object into memory for newly-spawned processes is actually slower than simply queuing new jobs and waiting for current processes to finish.</p> <p>I would like to have the <code>engine</code> object shared in memory between workers. So far I tried:</p> <ul> <li><a href="https://stackoverflow.com/a/57810915/7087604">this answer</a> (<code>multiprocessing.managers</code>: <code>BaseManager</code>, <code>DictProxy</code>), but setting the Python object into <code>shared_dict[&quot;array&quot;]</code> doesn't seem to take and the value stays <code>None</code> (apparently works only for strings and numbers),</li> <li>using <code>from flask_caching import Cache</code> and <code>cache.set(&quot;engine&quot;, gensim.Models.load(&quot;search_engine&quot;))</code>, but then again <code>cache.get(&quot;engine&quot;)</code> fails with <code>redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.</code> and I'm guessing it is because Redis does not allow that much memory.</li> </ul> <p>I really want to avoid using databases here because anyway, the expensive part is to load the Numpy matrices into memory and, once they are loaded, everything runs suprisingly fast considering it's a shared hosting designed for WordPress blogs (mostly).</p> <p>Note that Word2Vec uses pickle archives to load previously-trained models, I actually use a child class saving to joblib archives for better compression (which, of course, adds to the reading issue here).</p>
<python><flask><multiprocessing>
2024-01-26 22:02:50
0
713
AurΓ©lien Pierre