Columns: QuestionId (int64, 74.8M to 79.8M) · UserId (int64, 56 to 29.4M) · QuestionTitle (string, 15 to 150 chars) · QuestionBody (string, 40 to 40.3k chars) · Tags (string, 8 to 101 chars) · CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) · AnswerCount (int64, 0 to 44) · UserExpertiseLevel (int64, 301 to 888k) · UserDisplayName (string, 3 to 30 chars)
78,543,027
4,249,338
Polars - How to drop inf rows
<p>I have a polars dataframe containing some rows with infinite values (<code>np.inf</code> and <code>-np.inf</code>) which I'd like to drop.</p> <p>I am aware of <code>drop_nans</code> and <code>drop_nulls</code>, but I don't see a similar <code>drop_inf</code>. As I want to handle NaN values separately, I cannot just replace inf with NaN and then call <code>drop_nans</code>.</p> <p>What's an idiomatic way of dropping rows with infinite values?</p>
<python><python-polars>
2024-05-28 09:01:38
1
656
gg99
78,543,026
3,973,269
Azure app function - python, overlapping executions overwrite variables
<p>I have a function app in Azure, in which the app gathers row 0 of a cosmosDB and then sends out some emails with data.</p> <p>When the app function is triggered twice within a short period, I find that content of the first mail is overwritten with data from the 2nd execution, which contains a new row of data.</p> <p>function.json:</p> <pre><code>{ &quot;scriptFile&quot;: &quot;__init__.py&quot;, &quot;bindings&quot;: [ { &quot;name&quot;: &quot;msg&quot;, &quot;type&quot;: &quot;queueTrigger&quot;, &quot;direction&quot;: &quot;in&quot;, &quot;queueName&quot;: &quot;queue_name&quot;, &quot;connection&quot;: &quot;connection_env&quot; }, { &quot;type&quot;: &quot;cosmosDB&quot;, &quot;name&quot;: &quot;DBfile&quot;, &quot;databaseName&quot;: &quot;db_name&quot;, &quot;collectionName&quot;: &quot;collection_name&quot;, &quot;connectionStringSetting&quot;: &quot;cosmos_db_connection_string&quot;, &quot;direction&quot;: &quot;in&quot;, &quot;id&quot;: &quot;{id}&quot;, &quot;PartitionKey&quot;: &quot;{partitionkey}&quot; } ] } </code></pre> <p>Code:</p> <pre><code>def main(msg: func.QueueMessage, DBfile: func.DocumentList) -&gt; bool: file_raw = DBfile[0].to_json() file_json = json.loads(file_raw) users = file_json['users'] for user in users: message = { &quot;content&quot;: { &quot;subject&quot;: &quot;mail subject&quot;, &quot;html&quot;: json.dumps(file_json['data']), }, &quot;recipients&quot;: { &quot;to&quot;: [{&quot;address&quot;: user['emailAddress']}] }, &quot;senderAddress&quot;: &quot;&lt;sender@email.com&gt;&quot; } connection_string_mail = os.getenv(&quot;email_connection_string&quot;) if not connection_string_mail == None: email_client = EmailClient.from_connection_string(connection_string_mail) email_client.begin_send(message) </code></pre> <p>As you might be able to deduct from the code, for each user an email is sent. However, when the first trigger is sending emails, the 2nd trigger is triggered and also DBfile[0] is updated I assume. 
For that reason, I assume the email content is changed during the first execution.</p> <p>Could this assumption be correct? And if so, can I either overcome this issue or make the second trigger wait until the first has finished?</p>
<python><azure-functions>
2024-05-28 09:01:28
1
569
Mart
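If the goal is simply to stop two queue messages from being processed concurrently, the standard Azure Functions knobs are the queue batch settings in host.json, combined with capping scale-out to a single instance (the `WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT` app setting). A hedged sketch; whether serializing executions is the right fix depends on the actual cause of the overwrite:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
```

With `batchSize: 1` and `newBatchThreshold: 0`, each instance processes one queue message at a time.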
78,542,877
1,887,919
Does matrix multiplication time-complexity only apply to large N?
<p>The time complexity of (square, naive) matrix multiplication is O(N<sup>3</sup>), e.g. <a href="https://stackoverflow.com/questions/8546756/matrix-multiplication-algorithm-time-complexity">this answer</a></p> <p>I can run a quick script</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import time import matplotlib.pyplot as plt def time_matmul_n(n): #Create two arrays of dimension n arr1 = np.random.rand(n, n) arr2 = np.random.rand(n, n) t0=time.process_time_ns() #start the clock np.matmul(arr1,arr2) t1=time.process_time_ns() return t1-t0 N = 100 n_values = range(N) t_values = np.zeros_like(n_values) for i,n in enumerate(n_values): t_values[i] = time_matmul_n(n) #Plot the results plt.plot(n_values,t_values) </code></pre> <p>This gives me something like</p> <p><a href="https://i.sstatic.net/wih0GQwY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wih0GQwY.png" alt="enter image description here" /></a></p> <p>Now I appreciate there is a lot of noise here, and the run time will vary depending on the processes that are going on on my laptop, but this doesn't look like it scales anything like O(N<sup>3</sup>).</p> <p>I am clearly not understanding something. Can anyone explain why the graph doesn't look O(N<sup>3</sup>) -like? Does this only matter when N gets large?</p>
<python><algorithm><time-complexity><matrix-multiplication>
2024-05-28 08:34:44
2
923
user1887919
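For what it's worth: O(N³) only describes asymptotic growth, and at N ≤ 100 the per-call overhead (Python call, BLAS dispatch, allocation) dominates the actual multiply, so the curve is mostly noise. For larger N a log-log plot makes an N³ law visible as a line of slope ≈ 3. The sketch below deliberately checks only the arithmetic of the cubic law, since wall-clock times are machine-dependent:

```python
# Naive n x n matmul does ~2*n**3 floating-point operations
# (n multiplies + n-1 adds per output cell, n*n cells).
def matmul_flops(n):
    return 2 * n ** 3

# Doubling n should multiply the work by ~8 once n is large enough
# that fixed per-call overhead is negligible.
ratio = matmul_flops(400) / matmul_flops(200)
print(ratio)  # 8.0
```

A practical experiment is to time a handful of doubling sizes (e.g. 256, 512, 1024) and check that the time ratios approach 8, rather than plotting every N from 0 to 100.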
78,542,765
7,161,082
Get route within azure function and python
<p>I'm searching for a way to get the route of my azure-function from the <code>app</code> or <code>req</code> object. I assume this should be straightforward, but I'm not able to find it.</p> <pre class="lang-py prettyprint-override"><code> import azure.functions as func app = func.FunctionApp() @app.route(route=&quot;myfunction&quot;) def my_function(req: func.HttpRequest) -&gt; func.HttpResponse: route = ?? # should be myfunction </code></pre> <p>Does anyone know how to get the route?</p> <p><strong>Edit</strong></p> <p>When I inspect the <code>route_params</code> of <code>req</code> in debug mode I see this:</p> <pre class="lang-bash prettyprint-override"><code> req.route_params.values() &gt; dict_values([]) req.route_params.keys() &gt; dict_keys([]) </code></pre>
<python><azure-functions>
2024-05-28 08:13:52
1
493
samusa
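`func.HttpRequest` does expose the full request URL as `req.url`, so one hedged workaround is to parse the route out of the path, assuming the standard `/api/` prefix. The helper below is hypothetical and stdlib-only:

```python
from urllib.parse import urlparse

def route_from_url(url: str, prefix: str = "/api/") -> str:
    # e.g. "https://myapp.azurewebsites.net/api/myfunction?x=1" -> "myfunction"
    path = urlparse(url).path
    if path.startswith(prefix):
        path = path[len(prefix):]
    return path.strip("/")

print(route_from_url("https://myapp.azurewebsites.net/api/myfunction?x=1"))  # myfunction
```

Inside the function you would call `route_from_url(req.url)`; `route_params` is empty here because the route template declares no `{parameters}`.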
78,542,760
18,091,040
How to properly "install" the python wrapper of Indy-vdr
<p>I'm following the instructions of <a href="https://github.com/hyperledger/indy-vdr/blob/main/README.md" rel="nofollow noreferrer">Indy-vdr</a> to use its python wrapper. It says:</p> <blockquote> <p>The Python wrapper is located in wrappers/python/indy_vdr. In order for the wrapper to locate the shared library, the latter may be placed in a system shared library directory like /usr/local/lib.</p> </blockquote> <p>I added the contents of the folder wrapper/python/ in the folder /usr/local/ib/python3.8/dist-packages, which is in my system patch:</p> <pre><code>python3.8 -c &quot;import sys; print(sys.path)&quot; ['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/ubuntu/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] </code></pre> <p>And the files are:</p> <pre><code>ubuntu@indy1:/usr/local/lib/python3.8/dist-packages$ ls demo indy_vdr readme.md setup.cfg setup.py </code></pre> <p>When I try to run the demo, or a simple code to get its version I get an error that indy_vdr was not found. But the weird part is that I can do import indy_vdr, and it imports correctly. 
but I can't do anything else:</p> <p>Demo:</p> <pre><code>ubuntu@indy1:~/indy-vdr/wrappers$ python3 -m demo.test Library not loaded from python package Traceback (most recent call last): File &quot;/usr/lib/python3.8/runpy.py&quot;, line 192, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/usr/lib/python3.8/runpy.py&quot;, line 85, in _run_code exec(code, run_globals) File &quot;/usr/local/lib/python3.8/dist-packages/demo/test.py&quot;, line 246, in &lt;module&gt; log(&quot;indy-vdr version:&quot;, version()) File &quot;/usr/local/lib/python3.8/dist-packages/indy_vdr/bindings.py&quot;, line 449, in version lib = get_library() File &quot;/usr/local/lib/python3.8/dist-packages/indy_vdr/bindings.py&quot;, line 88, in get_library LIB = _load_library(&quot;indy_vdr&quot;) File &quot;/usr/local/lib/python3.8/dist-packages/indy_vdr/bindings.py&quot;, line 116, in _load_library raise VdrError(VdrErrorCode.WRAPPER, f&quot;Error loading library: {lib_name}&quot;) indy_vdr.error.VdrError: Error loading library: indy_vdr </code></pre> <p>Simple command:</p> <pre><code>ubuntu@indy1:~/indy-vdr/wrappers$ python3 -c &quot;import indy_vdr; print(indy_vdr.version())&quot; Library not loaded from python package Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.8/dist-packages/indy_vdr/bindings.py&quot;, line 449, in version lib = get_library() File &quot;/usr/local/lib/python3.8/dist-packages/indy_vdr/bindings.py&quot;, line 88, in get_library LIB = _load_library(&quot;indy_vdr&quot;) File &quot;/usr/local/lib/python3.8/dist-packages/indy_vdr/bindings.py&quot;, line 116, in _load_library raise VdrError(VdrErrorCode.WRAPPER, f&quot;Error loading library: {lib_name}&quot;) indy_vdr.error.VdrError: Error loading library: indy_vdr </code></pre>
<python><python-packaging><hyperledger-indy><indy-node>
2024-05-28 08:12:58
0
640
brenodacosta
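The message "Library not loaded from python package" suggests the bindings fell back to a system-wide search: copying wrappers/python only installs the Python bindings, while the compiled native library (libindy_vdr.so) must still be discoverable by the loader, e.g. in /usr/local/lib. A stdlib sketch for checking whether the loader can find a native library (the helper name is made up):

```python
import ctypes.util

# The Python wrapper is only bindings; it still needs the native shared
# library on the loader path. find_library mirrors what the loader sees.
def native_lib_found(name: str) -> bool:
    return ctypes.util.find_library(name) is not None

print(native_lib_found("indy_vdr"))
```

If this prints False, building or downloading libindy_vdr.so and placing it in a standard library directory (then running ldconfig) is the likely missing step.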
78,542,706
955,273
Conflicting instructions between manpages and running help on the command
<p>I was recently trying to understand why running <code>pip install --upgrade</code> wasn't upgrading some packages in my virtualenv.</p> <p>Being on Ubuntu 22.04, I turned to the manpages for help.</p> <p>There I found the following:</p> <pre class="lang-none prettyprint-override"><code>-U, --upgrade Upgrade all packages to the newest available version. This process is recursive regardless of whether a dependency is already satisfied. </code></pre> <p>This didn't seem to be the case, so I was feeling a bit lost.</p> <p>I checked where the manpages were coming from, and they were the system install location</p> <pre class="lang-bash prettyprint-override"><code>$ man -w pip /usr/share/man/man1/pip3.1.gz </code></pre> <p>Eventually I found <code>pip --help</code>, which in turn led me to <code>pip help install</code></p> <p>Here I found different instructions on how to upgrade packages:</p> <pre class="lang-none prettyprint-override"><code> -U, --upgrade Upgrade all specified packages to the newest available version. The handling of dependencies depends on the upgrade-strategy used. --upgrade-strategy &lt;upgrade_strategy&gt; Determines how dependency upgrading should be handled [default: only-if-needed]. &quot;eager&quot; dependencies are upgraded regardless of whether the currently installed version satisfies the requirements of the upgraded package(s). &quot;only-if-needed&quot; are upgraded only when they do not satisfy the requirements of the upgraded package(s). </code></pre> <p>By specifying an eager upgrade strategy, I am now able to replicate the behaviour as described in the manpages.</p> <p>My conclusion is that the manpages are incorrect / out-of-date</p> <p>Questions:</p> <ul> <li>Is this a bug with Ubuntu or Pip? If so, should I report it?</li> <li>Does Pip not use manpages any more?</li> <li>If the manpages are no longer maintained / out-of-date, why are they even there?</li> </ul>
<python><linux><pip><manpage>
2024-05-28 08:02:33
1
28,956
Steve Lorimer
78,542,564
5,567,893
(Pytorch) How to get the tensors by matching values in other list?
<p>I want to select the columns of a two-dimensional tensor that contain any of the values in a given list.</p> <pre class="lang-py prettyprint-override"><code>example #tensor([[61078, 51477, 28492, 4290, 86920, 2216], # [26799, 76684, 23785, 18202, 14552, 98301]]) a # Index([61078, 23785, 2216], dtype='int64', length=3) result #tensor([[61078, 28492, 2216], # [26799, 23785, 98301]]) </code></pre> <p>I tried using <code>torch.where</code> and <code>isin</code>, but they always returned errors. How can I solve the problem?</p>
<python><pytorch>
2024-05-28 07:37:49
2
466
Ssong
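Assuming the goal is to keep every column that contains at least one of the listed values, `torch.isin` (PyTorch ≥ 1.10) plus a column-wise `any` builds the boolean mask. A sketch on the question's data:

```python
import torch

example = torch.tensor([[61078, 51477, 28492,  4290, 86920,  2216],
                        [26799, 76684, 23785, 18202, 14552, 98301]])
wanted = torch.tensor([61078, 23785, 2216])

# Mark positions whose value is in `wanted`, then keep any column
# that contains at least one match (columns 0, 2 and 5 here).
mask = torch.isin(example, wanted).any(dim=0)
result = example[:, mask]
print(result)
```

If `a` really is a pandas Index as in the snippet, `torch.as_tensor(a.to_numpy())` should convert it first (hedged, depending on the dtype).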
78,542,555
1,670,478
403 Forbidden Error when fetching content from a URL hosted in Zendesk
<p>When I try the following Python code to fetch content from a URL hosted in Zendesk, I get the following 403 Forbidden error:</p> <p>Python Code:</p> <pre class="lang-python prettyprint-override"><code>url = 'https://support.abc.com/hc...' html_content = fetch_content(url) if html_content: text_content = html_to_text(html_content.decode('utf-8')) # Output directory and file name output_pdf_file = os.path.join('output', 'output.pdf') if not os.path.exists('output'): os.makedirs('output') save_text_to_pdf(text_content, output_pdf_file) print(f'PDF saved to {output_pdf_file}') </code></pre> <p>Error:</p> <pre><code>Error fetching https://support.abc.com/hc....: HTTP Error 403: Forbidden </code></pre> <p>What can I try next, especially in the code part, as I don't want to change any configuration on the Zendesk side?</p>
<python><fetch-api><zendesk><zendesk-api>
2024-05-28 07:36:38
1
1,956
clint
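A 403 from a Zendesk-hosted help center often means the site's bot protection is rejecting the default Python User-Agent. A hedged first step is to send browser-like headers; the sketch below only builds the request object (no network call), and may not be sufficient if Cloudflare-style challenges are involved — in that case the official Zendesk Help Center REST API with credentials is the more robust route:

```python
import urllib.request

# Browser-like headers; many help centers reject the stock
# "Python-urllib/x.y" agent string outright.
def build_request(url: str) -> urllib.request.Request:
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept": "text/html,application/xhtml+xml",
    }
    return urllib.request.Request(url, headers=headers)

req = build_request("https://support.abc.com/hc/en-us/articles/123")
print(req.get_header("User-agent"))
```

`urllib.request.urlopen(req)` would then perform the actual fetch.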
78,542,549
17,795,398
NumPy: matrix multiplication when coordinates are given separately, is there a shortcut?
<p>I'm working with particle positions in different arrays, something like (three particles):</p> <pre><code>import numpy as np x = np.array([1, 2, 3]) y = np.array([2, 6, 1]) z = np.array([0, 3, 6]) </code></pre> <p>I do it this way because of matplotlib 3D plots, which require the different coordinates to be in different arrays (<code>ax.scatter(x, y, z)</code> for example).</p> <p>But the drawback of this approach is that if I want to multiply matrices:</p> <pre><code>mat = np.array([[10, 0, 0], [0, 10, 0], [0, 0, 10]]) </code></pre> <p>I cannot use <code>np.dot</code> directly because the coordinates are in different arrays, so I have to merge them and unmerge them after multiplication. I know how to do that (<code>np.transpose([x, y, z])</code>), but I'm concerned about efficiency; I don't know whether these operations are expensive for big arrays (I'm working with hundreds of thousands or millions of particles). Maybe there is a numpy function for this that I don't know. Of course, it's not hard to implement it myself.</p>
<python><arrays><numpy>
2024-05-28 07:35:59
1
472
Abel Gutiérrez
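The stack/unstack is a single O(n) copy, comparable in cost to the 3×3 matmul itself, so it is not the bottleneck even for millions of particles. A common pattern is to build a (3, n) stacked array just for the algebra and unpack afterwards:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([2, 6, 1])
z = np.array([0, 3, 6])
mat = np.diag([10, 10, 10])

# Stack coordinates as rows -> shape (3, n); one matmul transforms all
# particles at once, then unpacking restores the separate arrays
# that matplotlib wants.
xyz = np.vstack([x, y, z])
x2, y2, z2 = mat @ xyz
print(x2, y2, z2)
```

`np.einsum("ij,jn->in", mat, xyz)` is an equivalent spelling; either way the heavy lifting stays inside one vectorized call.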
78,542,519
972,647
Add SSL certificate to python requests for all environments
<p>As the title states: my company enabled SSL inspection, which blocks every call made by requests due to SSL failure. Yes, I could go the easy route and just set verify=False, but that isn't really a clean and sustainable solution, especially for modular code.</p> <p>Anyway, from Chrome I can easily get the root cert, and I can add that to the cacert.pem of the certifi package (I just copy&amp;paste it in a text editor). Yes, it works. However, I have multiple machines, each with multiple separate conda environments.</p> <p>So my question is:</p> <p>How can I automate this for the existing environments? For new environments? After updating the certifi package? (I assume that overwrites my changes.)</p>
<python><ssl-certificate><conda><certifi>
2024-05-28 07:31:15
1
7,652
beginner_
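One maintenance-friendly alternative to patching certifi's cacert.pem in every environment: keep the corporate root cert in one shared location, generate a combined bundle, and point requests at it via the `REQUESTS_CA_BUNDLE` environment variable, which requests consults whenever `verify` isn't passed. Set machine-wide (or in conda activate.d scripts), it survives certifi upgrades and covers every environment at once. Sketch with illustrative paths:

```python
import os

# Combined bundle = stock CA file + corporate root cert. Point
# `requests` at the result via REQUESTS_CA_BUNDLE.
def build_bundle(base_bundle: str, corp_cert: str, out_path: str) -> str:
    with open(out_path, "w") as out:
        for src in (base_bundle, corp_cert):
            with open(src) as f:
                out.write(f.read())
            out.write("\n")
    return out_path

# Example wiring (paths are hypothetical):
# os.environ["REQUESTS_CA_BUNDLE"] = build_bundle(
#     certifi.where(), "/etc/pki/corp-root.pem", "/etc/pki/combined.pem")
```

Regenerating the bundle after a certifi update is then a one-liner per machine rather than a manual edit per environment.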
78,542,389
11,198,558
How to customize style of django-autocomplete-light Forms
<p>I need your help on customizing the style of TextInput Box while using django-autocomplete-light.</p> <p>Specifically, I have a data model <code>TextHeader</code>, <code>UserInputDataSearch</code> then I'm creating a view object as</p> <pre><code># views.py class DataSearchAutocomplete(autocomplete.Select2QuerySetView): model = UserInputDataSearch form_class = SearchDataForm success_url = 'data_access' def get_queryset(self): qs = TextHeader.objects.all() if self.q: qs = qs.filter(keywordFinder__istartswith=self.q) return qs </code></pre> <p>The <code>forms.py</code> is as below</p> <pre><code>class SearchDataForm(forms.ModelForm): userSearch = forms.ModelChoiceField( queryset=TextHeader.objects.all(), widget = autocomplete.ModelSelect2( url='dataSearch-autocomplete', attrs={ 'data-html': True, &quot;type&quot;:&quot;search&quot;, &quot;class&quot;:&quot;form-control&quot;, &quot;id&quot;:&quot;autoCompleteData&quot;, &quot;autocomplete&quot;:&quot;off&quot;, }, ) ) dateRange = fields.DateRangeField( input_formats=['%d/%m/%Y'], widget=widgets.DateRangeWidget( format='%d/%m/%Y', ) ) class Meta: model = UserInputDataSearch fields = [&quot;userSearch&quot;] </code></pre> <p>Then, on html template, I write down these lines</p> <pre><code>#htmlTemplate &lt;form method=&quot;POST&quot;&gt; {% csrf_token %} &lt;div class=&quot;row g-3&quot;&gt; &lt;div class=&quot;col-lg-6&quot;&gt; &lt;div&gt; {{ form.userSearch|as_crispy_field }} &lt;/div&gt; &lt;/div&gt;&lt;!--end col--&gt; &lt;div class=&quot;col-lg-6&quot;&gt; &lt;div&gt; {{ form.dateRange|as_crispy_field }} &lt;/div&gt; &lt;/div&gt;&lt;!--end col--&gt; &lt;div class=&quot;col-lg-12&quot;&gt; &lt;div class=&quot;text-left&quot;&gt; &lt;button type=&quot;submit&quot; class=&quot;btn btn-primary&quot;&gt;Find&lt;/button&gt; &lt;/div&gt; &lt;/div&gt;&lt;!--end col--&gt; &lt;/div&gt;&lt;!--end row--&gt; &lt;/form&gt; </code></pre> <p>However, it renders my page with difference style of two boxes as a screenshot below, 
I wonder how to make them share the same style. <a href="https://i.sstatic.net/HlXCRz2O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlXCRz2O.png" alt="enter image description here" /></a></p>
<python><django>
2024-05-28 07:01:21
0
981
ShanN
78,542,211
53,212
What is wrong with using Element.makeelement?
<p>Why does the Python documentation say this about <code>xml.etree.ElementTree</code>'s <code>Element.makeelement()</code> method?</p> <blockquote> <p>Do not call this method, use the SubElement() factory function instead</p> </blockquote> <p>It's specifically not marked as deprecated and has existed in the docs for several versions, with this same warning.</p> <p>Does the method not work as described? Does it break other things?</p> <p>Apart from seeming like awkward API design, it isn't clear why this warning is there, as opposed to putting it on track for deprecation, or simply omitting it from documentation, or explaining the negative consequences should one use it, which may help diagnosing issues if existing code uses it.</p>
<python><elementtree>
2024-05-28 06:13:33
0
117,860
thomasrutter
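One concrete behavioural difference worth noting: `makeelement()` only constructs an element and never attaches it to anything, whereas `SubElement()` constructs and appends in one step. The warning plausibly exists because `makeelement()` is primarily the internal hook that parsers and copy operations use to create elements of the right class, not a user-facing constructor. A quick stdlib demonstration:

```python
import xml.etree.ElementTree as ET

root = ET.Element("root")

# makeelement() only constructs; it does NOT attach to the parent.
orphan = root.makeelement("child", {})
assert len(root) == 0  # root still has no children

# SubElement() constructs *and* appends in one step.
child = ET.SubElement(root, "child")
print(ET.tostring(root))  # b'<root><child /></root>'
```

So code that uses `makeelement()` expecting a `SubElement()`-style attach silently produces detached elements, which may be the kind of misuse the docs are warding off.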
78,542,042
2,443,377
Cannot install ChromaDB on python:3.12.3-alpine3.19 Docker image
<p>I am trying to install Python dependencies on a python:3.12.3-alpine3.19 Docker image. When the requirements.txt file is processed I get the following error:</p> <pre><code>7.932 ERROR: Ignored the following versions that require a different python version: 0.5.12 Requires-Python &gt;=3.7,&lt;3.12; 0.5.13 Requires-Python &gt;=3.7,&lt;3.12; 1.21.2 Requires-Python &gt;=3.7,&lt;3.11; 1.21.3 Requires-Python &gt;=3.7,&lt;3.11; 1.21.4 Requires-Python &gt;=3.7,&lt;3.11; 1.21.5 Requires-Python &gt;=3.7,&lt;3.11; 1.21.6 Requires-Python &gt;=3.7,&lt;3.11 7.932 ERROR: Could not find a version that satisfies the requirement onnxruntime==1.18.0 (from versions: none) 7.933 ERROR: No matching distribution found for onnxruntime==1.18.0 ------ failed to solve: process &quot;/bin/sh -c pip install -r requirements.txt &amp;&amp; pip uninstall </code></pre> <p>Even if I try an older version of Python I still get an error:</p> <p>From Python:3.10.14-alpine3.19:</p> <pre><code>61.58 ERROR: Could not find a version that satisfies the requirement onnxruntime==1.18.0 (from versions: none) 61.58 ERROR: No matching distribution found for onnxruntime==1.18.0 </code></pre> <p>Why is this happening?</p>
<python><docker><onnxruntime><chromadb>
2024-05-28 05:16:44
1
1,743
cw24
78,541,861
1,447,953
Pytest: use reduced parameter matrix unless 'slow' mark is set
<p>I have some pytest tests that run a test using a variety of different parameter combinations, like so:</p> <pre><code>class TestCases: d_prob = [0.1, 0.8] f_dens = [0.001, 0.1] r_scale = [True, False] i_corr = [True, False] # Cartesian product cart = itertools.product(d_prob, f_dens, r_scale, i_corr) @pytest.fixture(params=cart) def params(self, request): return request.param def test_mytest(self, params): d, f, r, i = params # Do stuff... </code></pre> <p>However, this can quickly blow out into a large number of tests and be pretty slow. So, I would like the behaviour to change based on whether the <code>slow</code> mark has been set: if <code>slow</code> is set, use the full parameter matrix; otherwise use a reduced set that is faster but less comprehensive.</p> <p>I'm not sure how to cleanly achieve this though. I am familiar with running tests or not by attaching the <code>slow</code> mark to them, but I have no idea whether I can somehow change my parameter matrix based on such a mark.</p> <p>Edit: I suppose one way is to put all my actual test code in a function that I can re-use, and then just make two different test functions with different marks, i.e.</p> <pre><code>def the_guts(*args): # do stuff def test_fast(fast_params): the_guts(fast_params) @pytest.mark.slow def test_slow(slow_params): the_guts(slow_params) </code></pre> <p>Something like that, I guess. This is not great though if the params are the bottom fixture in a tower of fixtures. I don't want two separate but identical towers of fixtures just to switch the bottom-level parameter matrices.</p>
<python><pytest><pytest-fixtures><pytest-markers>
2024-05-28 03:51:16
1
2,974
Ben Farmer
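One way to keep a single fixture tower is to compute the parameter list from a flag and leave the fixtures unchanged. The pytest wiring would be a `--slow` option read via `metafunc.config.getoption` in a `pytest_generate_tests` hook (sketched only in prose here); the selection logic itself is plain itertools. The reduced set below is a hypothetical one-factor-at-a-time design:

```python
import itertools

d_prob = [0.1, 0.8]
f_dens = [0.001, 0.1]
r_scale = [True, False]
i_corr = [True, False]

def make_params(slow: bool):
    axes = (d_prob, f_dens, r_scale, i_corr)
    if slow:
        # Full Cartesian product: 2*2*2*2 = 16 combinations.
        return list(itertools.product(*axes))
    # Reduced set: a base point plus variations of one axis at a time.
    base = tuple(a[0] for a in axes)
    reduced = {base}
    for i, values in enumerate(axes):
        for v in values:
            p = list(base)
            p[i] = v
            reduced.add(tuple(p))
    return sorted(reduced)

print(len(make_params(True)), len(make_params(False)))  # 16 5
```

In conftest.py, `metafunc.parametrize("params", make_params(metafunc.config.getoption("--slow")))` (hedged: exact hook wiring depends on your fixture layout) then feeds whichever matrix is active into the existing tower.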
78,541,706
678,572
How to compute k-nearest neighbors from a rectangular distance matrix (i.e., scipy.spatial.distance.cdist) in Python?
<p>I want to calculate the k-nearest neighbors using either sklearn, scipy, or numpy but from a rectangular distance matrix that is output from <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow noreferrer"><code>scipy.spatial.distance.cdist</code></a>.</p> <p>I have tried inputting into the <code>kneighbors_graph</code> and <code>KNeighborsTransformer</code> with <code>metric=&quot;precomputed&quot;</code> but have not been successful.</p> <p><strong>How can I achieve this?</strong></p> <pre class="lang-py prettyprint-override"><code>from scipy.spatial.distance import cdist from sklearn.datasets import make_classification from sklearn.neighbors import kneighbors_graph, KNeighborsTransformer X, _ = make_classification(n_samples=15, n_features=4, n_classes=2, n_clusters_per_class=1, random_state=0) A = X[:10,:] B = X[10:,:] A.shape, B.shape # ((10, 4), (5, 4)) # Rectangular distance matrix dist = cdist(A,B) dist.shape # (10, 5) n_neighbors=3 kneighbors_graph(dist, n_neighbors=n_neighbors, metric=&quot;precomputed&quot;) # --------------------------------------------------------------------------- # ValueError Traceback (most recent call last) # Cell In[165], line 17 # 14 # (10, 5) # 16 n_neighbors=3 # ---&gt; 17 kneighbors_graph(dist, n_neighbors=n_neighbors, metric=&quot;precomputed&quot;) # File ~/miniconda3/envs/soothsayer_env/lib/python3.9/site-packages/sklearn/neighbors/_graph.py:117, in kneighbors_graph(X, n_neighbors, mode, metric, p, metric_params, include_self, n_jobs) # 50 &quot;&quot;&quot;Compute the (weighted) graph of k-Neighbors for points in X. # 51 # 52 Read more in the :ref:`User Guide &lt;unsupervised_neighbors&gt;`. # (...) 
# 114 [1., 0., 1.]]) # 115 &quot;&quot;&quot; # 116 if not isinstance(X, KNeighborsMixin): # --&gt; 117 X = NearestNeighbors( # 118 n_neighbors=n_neighbors, # 119 metric=metric, # 120 p=p, # 121 metric_params=metric_params, # 122 n_jobs=n_jobs, # 123 ).fit(X) # 124 else: # 125 _check_params(X, metric, p, metric_params) # File ~/miniconda3/envs/soothsayer_env/lib/python3.9/site-packages/sklearn/neighbors/_unsupervised.py:176, in NearestNeighbors.fit(self, X, y) # 159 &quot;&quot;&quot;Fit the nearest neighbors estimator from the training dataset. # 160 # 161 Parameters # (...) # 173 The fitted nearest neighbors estimator. # 174 &quot;&quot;&quot; # 175 self._validate_params() # --&gt; 176 return self._fit(X) # File ~/miniconda3/envs/soothsayer_env/lib/python3.9/site-packages/sklearn/neighbors/_base.py:545, in NeighborsBase._fit(self, X, y) # 543 # Precomputed matrix X must be squared # 544 if X.shape[0] != X.shape[1]: # --&gt; 545 raise ValueError( # 546 &quot;Precomputed matrix must be square.&quot; # 547 &quot; Input is a {}x{} matrix.&quot;.format(X.shape[0], X.shape[1]) # 548 ) # 549 self.n_features_in_ = X.shape[1] # 551 n_samples = X.shape[0] # ValueError: Precomputed matrix must be square. Input is a 10x5 matrix. </code></pre>
<python><numpy><scipy><distance><nearest-neighbor>
2024-05-28 02:33:20
1
30,977
O.rka
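`kneighbors_graph` with `metric="precomputed"` rejects the input because it assumes the query and index sets are identical, hence a square matrix. For a rectangular cdist-style matrix, the k nearest columns per row can be read off directly with `np.argpartition`. Sketch (distances computed with plain NumPy here so the example is self-contained):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 4))
B = rng.random((5, 4))

# Rectangular distance matrix: dist[i, j] = ||A[i] - B[j]||
dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

k = 3
# argpartition places the k smallest entries of each row first
# (in arbitrary order); sorting just those k gives ascending neighbors.
idx = np.argpartition(dist, k, axis=1)[:, :k]
order = np.argsort(np.take_along_axis(dist, idx, axis=1), axis=1)
idx = np.take_along_axis(idx, order, axis=1)
print(idx.shape)  # (10, 3)
```

`idx[i]` holds the indices into B of the k nearest neighbors of A[i]; a sparse graph in sklearn's format can then be built from these indices if needed.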
78,541,683
11,121,557
Python CFFI `<cdata>` data pointer replace from C land - is it safe?
<p>I am trying to use CFFI to create a python C extension.</p> <p>Suppose I have the following C code:</p> <pre class="lang-c prettyprint-override"><code>void freeSomeType(SomeType_t **ptr) { free(*ptr); *ptr = NULL; } void func(SomeType_t **ptr) { freeSomeType(ptr); *ptr = malloc(...); } </code></pre> <p>Then, suppose I call it from python land like so:</p> <pre class="lang-py prettyprint-override"><code>ptr = ffi.new(&quot;SomeType_t **&quot;) # Init and do stuff with ptr... lib.func(ptr) </code></pre> <p>In essence, the pointer holding the actual <code>SomeType **</code> data inside the <code>ptr</code>'s <code>cdata</code> object has changed its value. What happens if I then do:</p> <pre class="lang-py prettyprint-override"><code>lib.freeSomeType(ptr) del ptr </code></pre> <ul> <li>Is this safe to do?</li> <li>Do I need to worry about pointer data ownership or memory leaks?</li> </ul>
<python><c><cpython><python-cffi>
2024-05-28 02:22:27
1
376
Slav
78,541,610
5,651,575
Create a column aggregating column names based on boolean state of a row in Python Pandas
<p>I am attempting to scan a Pandas dataframe and identify all the columns that are True for a particular row, aggregating the matching column names into an output column.</p> <p>Given a sample input as follows:</p> <pre><code>a b c True False True True False False </code></pre> <p>The goal is to accomplish the following output:</p> <pre><code>a b c output True False True ['a', 'c'] True False False ['a'] </code></pre> <p>My attempt was to use np.where as shown below, but this approach does not scale well.</p> <pre><code>df['output_1'] = np.where(df['a']==True, 'a', '') df['output_2'] = np.where(df['b']==True, 'b', '') df['output_3'] = np.where(df['c']==True, 'c', '') df['output'] = df['output_1'] + df['output_2'] + df['output_3'] </code></pre>
<python><pandas>
2024-05-28 01:35:14
3
617
youngdev
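A column-count-independent alternative to the per-column `np.where` chain: index the array of column names with each boolean row. Sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [True, True], "b": [False, False], "c": [True, False]})

# One pass over the underlying boolean array; works for any number
# of columns without hand-written output_1/output_2/... steps.
cols = df.columns.to_numpy()
df["output"] = [list(cols[row]) for row in df.to_numpy(dtype=bool)]
print(df["output"].tolist())  # [['a', 'c'], ['a']]
```

`df.apply(lambda r: list(df.columns[r]), axis=1)` expresses the same idea, though the comprehension over the NumPy array avoids per-row Series overhead.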
78,541,207
5,568,409
Why are colors displayed so strangely?
<p>When running the following code, I found the two colors <code>red</code> and <code>blue</code> displayed very strangely in the plot; moreover, the legend has the colors mismatched:</p> <pre><code>import matplotlib.pyplot as plt import seaborn as sns maxs = [35, 31, 35, 31, 34, 30, 31, 29, 30, 33] clrs = ['red', 'blue', 'red', 'blue', 'red', 'blue', 'blue', 'blue', 'blue', 'red'] fig, ax = plt.subplots(figsize=(8, 4)) sns.histplot(ax = ax, x = maxs, bins = &quot;auto&quot;, discrete = True, shrink = 0.5, stat = &quot;count&quot;, element = &quot;bars&quot;, kde = False, hue = clrs) plt.show() </code></pre> <p>I don't understand what mistake(s) I made. Could someone provide any help? <a href="https://i.sstatic.net/rEOHctmk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEOHctmk.png" alt="enter image description here" /></a></p>
<python><seaborn><histplot>
2024-05-27 21:52:16
1
1,216
Andrew
78,541,141
896,451
Why does this two-line Python program give variable output under PyCharm?
<p>Using Windows 7, python.exe 3.8.0 and PyCharm 2019.4.5 with:</p> <p><a href="https://i.sstatic.net/JPx3Gh2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JPx3Gh2C.png" alt="enter image description here" /></a></p> <p>this (faulty) program</p> <pre><code>print(&quot;hello&quot;, flush=True) garbage; </code></pre> <p>seemingly at random gives either:</p> <p><a href="https://i.sstatic.net/KUXzMoGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KUXzMoGy.png" alt="" /></a></p> <p>or</p> <p><a href="https://i.sstatic.net/cWFK6Z1g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWFK6Z1g.png" alt="enter image description here" /></a></p> <p>Likewise with &quot;-u&quot;:</p> <p><a href="https://i.sstatic.net/fpsvvP6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fpsvvP6t.png" alt="enter image description here" /></a></p> <p>Why am I not getting the same each time?</p> <p>Note: I am not asking for a remedy. Just asking for the cause.</p>
<python><pycharm>
2024-05-27 21:25:46
2
2,312
ChrisJJ
78,541,066
3,391,549
Langchain HuggingFace embeddings no longer loading for Llama-Index?
<p>The following code worked about a week ago:</p> <pre><code>from langchain_community.embeddings import HuggingFaceEmbeddings from llama_index.embeddings.langchain import LangchainEmbedding embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name=&quot;WhereIsAI/UAE-Large-V1&quot;)) </code></pre> <p>Previously, the <code>embed_model = ...</code> line takes only a few seconds to execute. However, now when I try to run it, it seems to be running indefinitely without an error/warning message, and it does not execute properly.</p> <p>Has anyone else come across this issue?</p>
<python><langchain><embedding><huggingface><llama-index>
2024-05-27 20:58:32
0
9,883
Adrian
78,540,850
1,189,239
Corner plot in log scale
<p>I am using the <code>corner</code> library to make a corner plot. However, when I set the <code>axes_scale</code> parameter to <code>log</code>, nothing changes. I know that I can put everything in log scale before plotting, but I would prefer to do it inside corner.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import corner # Generate data n_samples = 10000 n_parameters = 3 samples = np.random.randn(n_samples, n_parameters) # Generate the corner plot fig = corner.corner(samples, axes_scale=['log','linear','log'], show_titles=True) plt.show() </code></pre> <p>Nothing changes even with <code>axes_scale='log'</code>. Any help is appreciated. <a href="https://i.sstatic.net/gFP2LWIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gFP2LWIz.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2024-05-27 19:44:54
2
933
Alessandro Peca
78,540,765
2,194,119
How to Use Python Hypothesis in Integration Tests
<p>We've got pretty complex builder and factory classes/methods to build test/domain data for our unit tests using Hypothesis. Now for our integration tests, we'd like to use the same functionality, the only difference is that they should run only once regardless of the result of the test. One solution is to call <code>example()</code> on the composite strategy in the test (or test fixture) which works fine, but it prints a warning asking not to do it.</p> <p>I also tried to force Hypothesis to run the test only once:</p> <pre class="lang-py prettyprint-override"><code># This is for demo purposes, the actual factory class is much more complicated. def foo(foo: int): print(&quot;calling foo&quot;) return foo @given(st.builds(foo, st.integers())) @settings(max_examples=1, phases=[Phase.generate]) def test_foo(x: int): print(&quot;running test_foo&quot;) assert False </code></pre> <p>But still runs it twice:</p> <pre><code>------------------ Captured stdout call ------------------ calling foo running test_foo calling foo running test_foo </code></pre> <p>Now my question is:</p> <ol> <li>Is it possible to get an example from a strategy without actually using <code>@given</code>?</li> <li>If not, how can tell Hypothesis to run the test only one time?</li> <li>Also why is it running the test twice anyway, where the max_example is one and there's no shrinking?</li> </ol> <p>Thanks.</p>
<python><python-hypothesis>
2024-05-27 19:19:05
1
5,162
Rad
78,540,702
34,747
Deploying Flet app from a docker container using Uvicorn
<p>I am trying to run my <a href="https://flet.dev" rel="nofollow noreferrer">flet</a> app as a web app from a docker container, but I am getting the following error message:</p> <pre><code>RuntimeError: asyncio.run() cannot be called from a running event loop
2024-05-27T18:46:46.450774857Z sys:1: RuntimeWarning: coroutine 'app_async' was never awaited
</code></pre> <p>I have tried starting the app with and without exposing the ASGI interface:</p> <pre><code>def run_web():
    ft.app(target=main, export_asgi_app=True)
</code></pre> <p>My entrypoint script looks like this:</p> <pre><code>#!/usr/bin/env bash
source .venv/bin/activate
uvicorn --factory reggui.main:run_web --host 0.0.0.0 --port 8060
</code></pre> <p>Any hints about what I may be missing?</p> <h2>UPDATE:</h2> <p>It seems that with uvicorn the entry function (in this case, <code>main</code>) needs to be an async function.</p> <p>Now the app starts, but there is another error which may be related to Docker, but I am not sure:</p> <pre class="lang-py prettyprint-override"><code>INFO: Will watch for changes in these directories: ['/']
2024-05-28T11:57:22.142940833Z INFO: Uvicorn running on http://0.0.0.0:8060 (Press CTRL+C to quit)
2024-05-28T11:57:22.142967464Z INFO: Started reloader process [8] using WatchFiles
2024-05-28T11:57:22.899964258Z WARNING: ASGI app factory detected. Using it, but please consider setting the --factory flag explicitly.
2024-05-28T11:57:22.900006979Z INFO: Started server process [10]
2024-05-28T11:57:22.900026356Z INFO: Waiting for application startup.
2024-05-28T11:57:22.900085509Z INFO: ASGI 'lifespan' protocol appears unsupported.
2024-05-28T11:57:22.900107430Z INFO: Application startup complete.
2024-05-28T11:57:53.724415871Z ERROR: Exception in ASGI application
2024-05-28T11:57:53.724433625Z Traceback (most recent call last):
2024-05-28T11:57:53.724436010Z   File &quot;/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py&quot;, line 411, in run_asgi
2024-05-28T11:57:53.724438114Z     result = await app(  # type: ignore[func-returns-value]
2024-05-28T11:57:53.724439907Z              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-28T11:57:53.724441671Z   File &quot;/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py&quot;, line 69, in __call__
2024-05-28T11:57:53.724443514Z     return await self.app(scope, receive, send)
2024-05-28T11:57:53.724445257Z            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-28T11:57:53.724447001Z   File &quot;/.venv/lib/python3.11/site-packages/uvicorn/middleware/asgi2.py&quot;, line 14, in __call__
2024-05-28T11:57:53.724448804Z     instance = self.app(scope)
2024-05-28T11:57:53.724450548Z                ^^^^^^^^^^^^^^^
2024-05-28T11:57:53.724452281Z TypeError: 'NoneType' object is not callable
2024-05-28T11:57:53.724445979Z INFO: 172.20.0.1:34188 - &quot;GET / HTTP/1.1&quot; 500 Internal Server Error
2024-05-28T11:57:53.745830886Z ERROR: Exception in ASGI application
2024-05-28T11:57:53.745845213Z Traceback (most recent call last):
2024-05-28T11:57:53.745847708Z   File &quot;/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py&quot;, line 411, in run_asgi
2024-05-28T11:57:53.745849792Z     result = await app(  # type: ignore[func-returns-value]
2024-05-28T11:57:53.745851786Z              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-28T11:57:53.745853689Z   File &quot;/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py&quot;, line 69, in __call__
2024-05-28T11:57:53.745852287Z INFO: 172.20.0.1:34198 - &quot;GET /favicon.ico HTTP/1.1&quot; 500 Internal Server Error
2024-05-28T11:57:53.745855603Z     return await self.app(scope, receive, send)
2024-05-28T11:57:53.745871904Z            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-28T11:57:53.745874589Z   File &quot;/.venv/lib/python3.11/site-packages/uvicorn/middleware/asgi2.py&quot;, line 14, in __call__
2024-05-28T11:57:53.745882965Z     instance = self.app(scope)
2024-05-28T11:57:53.745884188Z                ^^^^^^^^^^^^^^^
2024-05-28T11:57:53.745885290Z TypeError: 'NoneType' object is not callable
</code></pre>
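For reference, what uvicorn's `--factory` mode expects is a zero-argument callable that *returns* an ASGI application; returning nothing (i.e. `None`) is exactly what produces the `TypeError: 'NoneType' object is not callable` at request time. A minimal, flet-free sketch of a correct factory:

```python
# What uvicorn's --factory mode expects: a zero-argument callable that
# *returns* an ASGI application. A factory with no return statement hands
# uvicorn None, which then fails with "'NoneType' object is not callable".
def run_web():
    async def app(scope, receive, send):
        # minimal ASGI app: answer every HTTP request with 200 "ok"
        assert scope["type"] == "http"
        await send({
            "type": "http.response.start",
            "status": 200,
            "headers": [(b"content-type", b"text/plain")],
        })
        await send({"type": "http.response.body", "body": b"ok"})

    return app  # the crucial line: the factory must return the app
```

By analogy (an assumption, since this sketch doesn't run flet): the `run_web` in the question calls `ft.app(target=main, export_asgi_app=True)` but never returns its result, which matches the `None` in the traceback.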
<python><docker><uvicorn><flet><asgi>
2024-05-27 18:58:59
1
6,262
fccoelho
78,540,671
1,473,517
How can I use float128 in Cython 3?
<ul> <li>Cython version: 3.0.10</li> <li>Python 3.10.12</li> </ul> <p>I have a file, <em>sum_array.pyx</em>, which is:</p> <pre><code># sum_array.pyx
cimport numpy as np

def sum_array(np.ndarray[np.float128_t, ndim=1] arr):
    cdef int n = arr.shape[0]
    cdef np.float128_t total = 0
    cdef int i
    for i in range(n):
        total += arr[i]
    return total
</code></pre> <p>and <em>setup.py</em>, which is:</p> <pre><code># setup.py
from setuptools import setup
from Cython.Build import cythonize
import numpy as np

setup(
    ext_modules=cythonize(&quot;sum_array.pyx&quot;),
    include_dirs=[np.get_include()]
)
</code></pre> <p>When I run</p> <pre class="lang-none prettyprint-override"><code>python setup.py build_ext --inplace
</code></pre> <p>I get:</p> <pre class="lang-none prettyprint-override"><code>python setup.py build_ext --inplace
Compiling sum_array.pyx because it changed.
[1/1] Cythonizing sum_array.pyx
/home/user/python/mypython3.10/lib/python3.10/site-packages/Cython/Compiler/Main.py:381: FutureWarning: Cython directive 'language_level' not set, using '3str' for now (Py3). This has changed from earlier releases! File: /home/user/python/sum_array.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)

Error compiling Cython file:
------------------------------------------------------------
...
# sum_array.pyx
cimport numpy as np

def sum_array(np.ndarray[np.float128_t, ndim=1] arr):
                            ^
------------------------------------------------------------

sum_array.pyx:4:28: Invalid type.
Traceback (most recent call last):
  File &quot;/home/user/python/setup.py&quot;, line 7, in &lt;module&gt;
    ext_modules=cythonize(&quot;sum_array.pyx&quot;),
  File &quot;/home/iser/python/mypython3.10/lib/python3.10/site-packages/Cython/Build/Dependencies.py&quot;, line 1154, in cythonize
    cythonize_one(*args)
  File &quot;/home/user/python/mypython3.10/lib/python3.10/site-packages/Cython/Build/Dependencies.py&quot;, line 1321, in cythonize_one
    raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: sum_array.pyx
</code></pre> <p>What is the right way to do this?</p>
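Not a definitive fix, but a hedged pointer at the usual direction: `np.float128` is not a distinct type at the C level — on platforms where it exists it is just an alias for `np.longdouble` (C `long double`), and Cython 3's bundled `numpy.pxd` does not reliably expose a `float128_t`. On the Cython side the portable spelling would be a `long double` declaration, e.g. a typed-memoryview signature like `def sum_array(long double[:] arr):` with `cdef long double total = 0` (untested here against Cython 3.0.10 specifically). The NumPy side of that aliasing can be checked directly:

```python
import numpy as np

# np.longdouble is the portable name for the extended-precision float that
# np.float128 aliases on platforms where the latter exists (it does not on
# Windows, where long double == double).
arr = np.arange(5, dtype=np.longdouble)
total = arr.sum()

if hasattr(np, "float128"):
    # where float128 is defined, it is literally the same type object
    assert np.float128 is np.longdouble
```

So arrays passed to such a Cython function would be created with `dtype=np.longdouble` rather than `np.float128`.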
<python><cython>
2024-05-27 18:47:27
2
21,513
Simd
78,540,605
2,153,235
"runfile" not aware of changes to *.py file until I open it in Spyder editor
<p>I have recently started to become familiar with the Spyder editor as a debugger. For purposes other than for debugging, however, I use my favourite external editor (Vim for me, but probably Emacs for others). I am working on a <code>MyScript.py</code> script that is not meant to be a module, in the sense that I don't define functions. I am simply experimenting with and growing some code as a script. I run the script with <code>runfile</code>.</p> <p>Before today, any changes to <code>MyScript.py</code> made with Vim would be reflected in the console output and Variable Explorer when I issued <code>runfile</code>. As of today, any changes made to <code>MyScript.py</code> are <em>not</em> reflected in console output and Variable Explorer. I cannot find any setting in Preferences that determines whether <code>runfile('MyScript.py')</code> refreshes its copy of <code>MyScript.py</code> before it executes the statements therein. The newest <code>MyScript.py</code> isn't seen even if I exit with <code>Ctrl+D</code>, which simply makes the console reconnect to a (hopefully new) kernel.</p> <p>What could possibly have caused this behaviour today and not previously? How can I get the old behaviour back, i.e., changes made to <code>MyScript.py</code> by an external editor are recognized by <code>runfile('MyScript.py')</code>? I am using Spyder 5.4.3 with Python 3.9, installed on Windows 10 with Anaconda.</p> <p><strong>Afternote:</strong> If I open the file in Spyder's editor, it picks up the changes. If I make further changes in Vim, however, they aren't automatically detected. However, if I click in the Spyder editor, the change in focus seems to force it to pick up the latest changes. Since I'm otherwise not using the Spyder editor, I would prefer to use the real estate for the Console window or the Variable Explorer.
Unfortunately, I can only shrink the width so much and there is significant width still taken up by the editor.</p> <p>Since this behaviour started recently, I suspect that Spyder or IPython may cache the <code>*.py</code> file if it has a size that is greater than some threshold. As I develop the file, it crosses this speculated threshold.</p>
<python><ipython><spyder>
2024-05-27 18:29:24
0
1,265
user2153235
78,540,460
2,065,083
Read xml and generate dynamic object properties
<p>Complete the XMLReader class: we pass an XML document as a string to the constructor, and it converts the XML document into a dynamic object in which the nodes are accessible as the object's properties and the inner XML is accessible via the text property.</p> <p>Complete the class so that it works recursively for XMLs of any depth.</p> <pre><code>xml_string = &quot;&lt;note&gt;
&lt;to&gt;Tove&lt;/to&gt;
&lt;from&gt;Jani&lt;/from&gt;
&lt;heading&gt;Reminder&lt;/heading&gt;
&lt;body&gt;Don't forget me this weekend!&lt;/body&gt;
&lt;/note&gt;
&quot;
</code></pre> <p>This is my code:</p> <pre><code>import xml.etree.ElementTree as ET

class XmlReader:
    def __init__(self, xml=None):
        self.text = &quot;&quot;
        if xml is not None:
            doc = ET.fromstring(xml)
            self.text = ET.tostring(doc).decode('ascii')
            xml = XmlReader()
            xml.text = doc.text
            setattr(self, doc.tag, xml)

xml_string = \
    &quot;&lt;?xml version=\&quot;1.0\&quot; encoding=\&quot;utf-8\&quot;?&gt;&quot; + \
    &quot;&lt;note&gt;&quot; + \
    &quot;&lt;to&gt;Tove&lt;/to&gt;&quot; + \
    &quot;&lt;from&gt;Jani&lt;/from&gt;&quot; + \
    &quot;&lt;heading&gt;Reminder&lt;/heading&gt;&quot; + \
    &quot;&lt;body&gt;Don't forget me this weekend!&lt;/body&gt;&quot; + \
    &quot;&lt;/note&gt;&quot;

xmlObj = XmlReader(xml_string)
print(xmlObj.text)
print(xmlObj.note.text)
print(xmlObj.note.to.text)
</code></pre> <p>Expected result is:</p> <pre><code>&quot;&lt;note&gt; &lt;to&gt;Tove&lt;/to&gt;&lt;from&gt;Jani&lt;/from&gt;&lt;heading&gt;Reminder&lt;/heading&gt;&lt;body&gt;Don't forget me this weekend!&lt;/body&gt;&lt;/note&gt;&quot;
&quot;&lt;to&gt;Tove&lt;/to&gt;&quot;
&quot;&lt;from&gt;Jani&lt;/from&gt;&quot;
</code></pre> <p>With my code, it is throwing this error:</p> <pre><code>&lt;note&gt;&lt;to&gt;Tove&lt;/to&gt;&lt;from&gt;Jani&lt;/from&gt;&lt;heading&gt;Reminder&lt;/heading&gt;&lt;body&gt;Don't forget me this weekend!&lt;/body&gt;&lt;/note&gt;
None
Traceback (most recent call last):
  print(xmlObj.note.to.text)
AttributeError: 'XmlReader' object has no attribute 'to'
</code></pre> <p>What is the correct way to solve this problem?</p>
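The posted `__init__` only descends one level: it wraps the root element and stops, so `xmlObj.note` has no child attributes. A hedged sketch of one recursive reading of the exercise (here `.text` on a leaf is the element's text, on an inner node its serialized children, and on the root reader the whole document — one reasonable interpretation, not the only one):

```python
import xml.etree.ElementTree as ET

class XmlReader:
    """Expose an XML document as nested objects: each child element
    becomes an attribute, and .text holds the node's inner XML
    (or, for a leaf element, its text content)."""

    def __init__(self, xml=None):
        self.text = ""
        if xml is not None:
            root = ET.fromstring(xml)
            self.text = ET.tostring(root).decode()
            setattr(self, root.tag, self._from_element(root))

    @classmethod
    def _from_element(cls, elem):
        node = cls()
        # inner XML: the element's own text plus its serialized children
        node.text = (elem.text or "") + "".join(
            ET.tostring(child).decode() for child in elem
        )
        for child in elem:  # recurse, so any nesting depth works
            setattr(node, child.tag, cls._from_element(child))
        return node
```

One caveat: a tag that collides with a Python keyword, such as `from`, is still set as an attribute but must be read with `getattr(node, "from")` rather than dot access.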
<python><xml>
2024-05-27 17:43:30
1
21,515
Learner
78,540,435
20,302,906
Can't send JSON response in requests mock
<p>I'm looking for a way to test a function called <code>fetch_options</code> that basically renders a returned JSONResponse from an internal API. My approach to this test is to mock <code>requests</code>, because the internal API is located in another app which has already been tested in a different test suite.</p> <p>I've got a gut feeling that the function is properly patched with the mock, but for some reason my <code>render</code> call can't get the response data; I might be wrong about this though.</p> <p><em>planner/views/meals.py</em></p> <pre><code>def fetch_options(request, meal, field):
    if field == &quot;ingredients&quot;:
        index = 1
    else:
        index = 0
    try:
        response = requests.get(&quot;log/api/send_&quot; + field)  # Trying to mock requests here, request fetches from /log/views/api.py
    except ConnectionError:
        requests.raise_from_exception()
    else:
        return render(request, {&quot;meal&quot;: meal, &quot;options&quot;: list(response)[index]})  # throws list index out range in tests
</code></pre> <p><em>/log/views/api.py</em></p> <pre><code>def send_data(request, option):
    match option:
        case &quot;ingredients&quot;:
            data = Log.objects.first().recipe_ingredients
        case &quot;areas&quot;:
            data = Log.objects.first().recipe_area
        case &quot;categories&quot;:
            data = Log.objects.first().recipe_categories
        case &quot;activities&quot;:
            data = Log.objects.first().activities
    response = JsonResponse(data, status=200, safe=False)
    return response
</code></pre> <p><em>planner/tests/test_meals_view.py</em></p> <pre><code>from unittest.mock import MagicMock, patch

@patch(&quot;planner.views.meals.requests&quot;)
def test_fetch_options(self, mock_requests):
    mock_response = MagicMock()
    mock_response.status_code = 200
    mock_response.json_data = [{&quot;id&quot;: &quot;357&quot;, &quot;strIngredient&quot;: &quot;Orange&quot;}]
    mock_requests.get.return_value = mock_response
    self.assertContains(
        fetch_options(&quot;request&quot;, &quot;breakfast&quot;, &quot;ingredients&quot;),
        status_code=200
    )
</code></pre> <p>Can anyone tell me what I'm missing or doing wrong, please?</p>
<python><django><python-requests><mocking>
2024-05-27 17:36:51
2
367
wavesinaroom
78,540,370
3,155,240
RelatedObjectDoesNotExist when trying to count number of objects in a Query Set
<p>I know that this error is not unique to me. I have come across many posts asking what the error is and how to fix it. One of the answers I saw was to use <code>hasattr</code> on the returned objects in a for loop before you try to get the attribute of the object. The problem is that I actually want to <code>count()</code> on the QuerySet. I saw some other post use some Django model aggregate function inside of the query, but I don't think that is appropriate or necessary here. The query should be literally as easy as:</p> <pre><code>font_count = Fonts.objects.filter(origin=location, font_name=font_name).count() </code></pre> <p>but it says:</p> <blockquote> <p>my_app.models.Fonts.origin.RelatedObjectDoesNotExist: Fonts has no origin.</p> </blockquote> <p>Seems straightforward enough - I'll just open the interactive Django shell and make sure that I can't get the origin of the Fonts already loaded into the DB... Well, what do you know?! I can access every <code>origin</code> of every <code>font</code> that I have loaded! What is this garbage?!</p> <p>I am on Django version &quot;4.1.dev20211216191317&quot; and Python 3.9. <code>origin</code> is a foreign key (it can't be set to null values) and <code>font_name</code> is a text field.</p> <p>Thank you :)</p>
<python><python-3.x><django>
2024-05-27 17:19:16
0
2,371
Shmack
78,540,300
2,335,020
FastAPI - switching out OpenAPI.json after it has been generated - where to put code?
<p>FastAPI generates the &quot;openapi.json&quot; file and provides an interface to it.</p> <p>For an experiment I need to replace this with a third-party file.</p> <pre><code>from pathlib import Path
import json

app.openapi_schema = json.loads(Path(r&quot;myopenapi.json&quot;).read_text())
</code></pre> <p>When I put this code behind an endpoint, for example &quot;/&quot;:</p> <pre><code>@app.get(&quot;/&quot;, include_in_schema=FULL_SCHEMA)
def read_root():
    # code here
</code></pre> <p>After calling the endpoint once, the loaded myopenapi.json is displayed for the &quot;/docs&quot; interface and the original is overwritten. The functionality has not changed; the old definitions still work.</p> <p>I would like to be able to make the switch directly after FastAPI has completed the setup and all endpoints are created.</p> <p>Putting this in the startup code block doesn't work (<code>async def lifespan(app: FastAPI):</code>) - when this reaches <code>yield</code>, the <code>app.openapi_schema</code> is not created yet.</p> <p>Where is the right place to change the FastAPI app after generation?</p> <p>FastAPI is started with the command:</p> <pre><code>uvicorn.run(app, host=SERVER_HOST, port=SERVER_PORT, workers=1)
</code></pre>
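The behaviour observed above follows from FastAPI building the schema lazily: the first request to `/openapi.json` calls `app.openapi()`, which caches its result in `app.openapi_schema` — so the attribute is still `None` at lifespan startup. The pattern in FastAPI's "Extending OpenAPI" docs is therefore to replace the `app.openapi` *method* right after creating the app; a hedged sketch (the `myopenapi.json` path is the question's):

```python
import json
from pathlib import Path

def install_custom_openapi(app, schema_path="myopenapi.json"):
    """Replace the app's lazy schema builder.

    FastAPI calls app.openapi() on the first /openapi.json request and
    caches the result in app.openapi_schema, so overriding the method
    (no endpoint, no lifespan hook) is enough."""
    def custom_openapi():
        if app.openapi_schema is None:
            app.openapi_schema = json.loads(Path(schema_path).read_text())
        return app.openapi_schema

    app.openapi = custom_openapi
    return app
```

Calling `install_custom_openapi(app)` right after `app = FastAPI()` (and after the routes are registered, if the file depends on them) should make `/docs` serve the替... serve the replacement schema from the first request on.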
<python><fastapi><openapi><startup>
2024-05-27 17:00:21
1
8,442
576i
78,540,291
3,654,852
Why are similar dataframes showing two different index types?
<p>Updates: 1. I got it to work. I created another unique counter column for both dataframes and then used merge instead of concat:</p> <pre><code>terminal_price['counter'] = np.arange(terminal_price.shape[0])
terminal_pnl['counter'] = np.arange(terminal_pnl.shape[0])
pnl_vs_price_df = terminal_price.merge(terminal_pnl, how = 'inner', on='counter')
</code></pre> <p>Now I get the df I wanted: <a href="https://i.sstatic.net/trbPgMNy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/trbPgMNy.jpg" alt="enter image description here" /></a></p> <p>Sorry for wasting everyone's time.....should have used merge before. However, I am curious if there's a more direct way to do this than what I did above. Not sure why df indexes should be of two different types...and even if they are, could I make them the same? Earlier, when I tried to concat on the existing indices, it gave me an empty df.</p> <p>Updates: @wjandrea - thanks for the feedback.</p> <p>I'm not able to reproduce the source of the dataframes - as that comes from a simulation code which is quite massive. However, in terms of a minimum reproducible example, I hope this helps:</p> <ol> <li>I re-ran the simulation with just 5 time steps so it's easier to visualize.</li> <li>Both the dataframes are shown. Now from what I can tell, these df's are similar, they look the same too. But when I do a concat, I get the dataframes stacked on top of each other...instead of columns being placed adjacent to each other against a common index.</li> </ol> <p>Here's what it looks like: <a href="https://i.sstatic.net/pu14LDfg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pu14LDfg.jpg" alt="enter image description here" /></a></p> <p>I also tried merge against the common index - that gave me nothing. This leads me to believe that even though the indexes of the two df's look the same in the picture.....they are actually not - going back to my original post below. One of the indices is an object, the other is a RangeIndex.</p> <p>I'm trying to create a graph of PnL vs price from a simulation of a call option. I have to extract data from the results of the simulation. I wanted to combine the extracted data into a single dataframe, but my resulting dataframe does not join them. I found out that the index of the two extracted dataframes is different... what's going on?</p> <p>In the screenshot below, I see the <code>terminal_price</code> dataframe having an index of type object. The other dataframe of PnL values, however, has a RangeIndex. I checked, and both the df's have the same dimensions and are of the same class; i.e. <code>pandas.core.frame.DataFrame</code>. So I'm not sure exactly what's going on here. Why do I get two different index types? I will try to convert the data extracts into pure series and then convert back into a concatenated dataframe, but in the meantime, why is this even happening?</p> <p>I'm not able to show the entire working (just too much to copy paste), but here's how I extracted the relevant data:</p> <pre><code>terminal_price = dhedge_strat['path'].iloc[-1:,].T
terminal_price.rename(columns = {999 : 'term_price'}, inplace=True)
terminal_pnl = pd.DataFrame(dhedge_err_['hedge_error']).reset_index()
</code></pre> <p><a href="https://i.sstatic.net/8M2vdmqT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8M2vdmqT.jpg" alt="enter image description here" /></a></p>
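On the "more direct way" question, a hedged reading of the extraction code: transposing a row slice (`.iloc[-1:,].T`) turns the column labels into the index, which is why one frame ends up with an object index while the other keeps a RangeIndex after `reset_index()`. Dropping both indexes before concatenating aligns the frames positionally — which is exactly what the `counter` column achieved, without the extra column. A sketch with stand-in data:

```python
import pandas as pd

# stand-ins: one frame with an object index (as left behind by a .T of a
# row slice), one with a plain RangeIndex
terminal_price = pd.DataFrame({"term_price": [101.2, 99.7, 100.4]},
                              index=["a", "b", "c"])  # object index
terminal_pnl = pd.DataFrame({"hedge_error": [0.3, -0.1, 0.2]})  # RangeIndex

pnl_vs_price = pd.concat(
    [terminal_price.reset_index(drop=True),
     terminal_pnl.reset_index(drop=True)],
    axis=1,  # axis=1 places columns side by side instead of stacking rows
)
```

Without `reset_index(drop=True)`, `pd.concat(..., axis=1)` aligns on index *values*, and an object index of labels shares no values with `0, 1, 2, ...`, so every cell of one side or the other comes out missing.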
<python><pandas><dataframe>
2024-05-27 16:56:38
1
803
user3654852
78,540,119
6,875,230
Create binary matrix from dataframe of Indices for each pair of Rows
<p>I have the following Dataframe of indexes:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({
    1: [(), (1, 2), (1, 2, 5, 7), (1, 2), (1, 2, 5), (1, 2)],
    2: [(), (1, 2), (1, 2, 5, 7), (1, 2), (1, 2, 5), (1, 2)],
    3: [(), (3, 4), (3, 4), (3, 5, 6), (3, 4, 6), (3, 4, 5, 7)],
    4: [(), (3, 4), (3, 4), (), (3, 4, 6), (3, 4, 5, 7)],
    5: [(), (), (1, 2, 5, 7), (3, 5, 6), (1, 2, 5), ()],
    6: [(), (), (), (3, 5, 6), (3, 4, 6), ()],
    7: [(), (), (1, 2, 5, 7), (), (), (3, 4, 5, 7)],
    8: [(), (), (), (), (), ()],
    9: [(), (), (), (), (), ()]
})
</code></pre> <p>I want to obtain this output in an efficient way:</p> <pre><code>Pair_IDs, C1, C2, C3, C4, C5, C6
(1,2), 0, 1, 1, 1, 1, 1
(1,3), 0, 0, 0, 0, 0, 0
(1,4), 0, 0, 0, 0, 0, 0
(1,5), 0, 0, 1, 0, 1, 0
(1,6), 0, 0, 0, 0, 0, 0
(1,7), 0, 0, 1, 0, 0, 0
(1,8), 0, 0, 0, 0, 0, 0
(1,9), 0, 0, 0, 0, 0, 0
...
(5,7), 0, 0, 1, 0, 0, 1
(5,8), 0, 0, 0, 0, 0, 0
(5,9), 0, 0, 0, 0, 0, 0
</code></pre> <p>Each row is a representation of all pairs of indexes that are in the same subset for each column. Do you have any suggestions on how I can define an efficient Python function? Do you know if tensorflow, pandas, or Spark have methods that already do this?</p> <p><em>Example</em></p> <p>The pair of values (1,2) is contained in all the sets except the first one, which means:</p> <p>[( ), <strong>(1, 2)</strong>, <strong>(1, 2</strong>, 5, 7), <strong>(1, 2)</strong>, <strong>(1, 2</strong>, 5), <strong>(1, 2)</strong>]</p> <p>(1,2) =&gt; [0, <strong>1</strong>, <strong>1</strong>, <strong>1</strong>, <strong>1</strong>, <strong>1</strong>]</p>
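One way to think about the cost, hedged as a sketch rather than the fastest possible answer: per row you only need the *distinct* subsets, and each subset of size s contributes s·(s-1)/2 pairs, so the total work is proportional to the sum of squared subset sizes rather than to all pairs × all rows. In plain Python (with the DataFrame above you would pass `rows=[list(row) for row in df.itertuples(index=False)]` and `ids=df.columns`):

```python
from itertools import combinations

def pair_membership(rows, ids):
    """rows: one iterable of subset-tuples per set C_k; ids: all element ids.

    Returns {(i, j): [0/1 per C_k]} for every i < j pair, where 1 means
    i and j appear together in some subset of that row."""
    result = {pair: [0] * len(rows) for pair in combinations(sorted(ids), 2)}
    for k, subsets in enumerate(rows):
        for subset in set(subsets):  # visit each distinct subset once
            for pair in combinations(sorted(subset), 2):
                if pair in result:
                    result[pair][k] = 1
    return result
```

The dict keyed by `(i, j)` maps directly onto the `Pair_IDs` column of the desired table; building the pandas frame from it is then just `pd.DataFrame.from_dict(result, orient="index")`.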
<python><pandas><dataframe><pyspark>
2024-05-27 16:05:57
2
528
Stefano
78,540,055
10,816,965
Convert asynchronous iterable to non-list iterable
<p>The following code works as expected:</p> <pre><code>import asyncio
from collections.abc import AsyncIterable, Iterable


# In reality, this is more complicated.
async def create_asynchronous_iterable() -&gt; AsyncIterable[int]:
    for k in range(3):
        yield k


# In reality, this is more complicated.
def do_something(it: Iterable[int]) -&gt; None:
    for k in it:
        print(k, end=&quot; &quot;)
    print()


async def main():
    async_it = create_asynchronous_iterable()
    it = [k async for k in async_it]
    do_something(it)


if __name__ == &quot;__main__&quot;:
    asyncio.run(main())  # Prints &quot;0 1 2 &quot;.
</code></pre> <p>How can I turn <code>it</code> into an iterable that avoids waiting until all items are in memory, e.g. a generator?</p> <p>I tried to replace the declaration of <code>main</code> as follows:</p> <pre><code>async def main():
    async_it = create_asynchronous_iterable()

    def create_iterable() -&gt; Iterable[int]:
        async for k in async_it:
            yield k

    it = create_iterable()
    do_something(it)
</code></pre> <p>However, this leads to:</p> <pre><code>SyntaxError: 'async for' outside async function
</code></pre> <p>Is there a way to circumvent this limitation?</p>
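A hedged sketch of one workaround: a plain generator can never contain `async for`, but the async side can be pumped on its own event loop in a background thread and handed over through a bounded queue, so the consumer stays a lazy, synchronous iterable. Note the trade-off: `q.get()` blocks the calling thread between items, so this suits synchronous callers; calling it from inside an already-running event loop would block that loop.

```python
import asyncio
import queue
import threading

_DONE = object()  # sentinel marking exhaustion of the async side

def sync_iter(async_iterable):
    """Expose an async iterable as a lazy synchronous generator by
    running it on a dedicated event loop in a background thread."""
    q = queue.Queue(maxsize=1)  # maxsize=1 keeps production lazy

    def runner():
        async def pump():
            try:
                async for item in async_iterable:
                    q.put(item)
            finally:
                q.put(_DONE)  # always signal the consumer, even on error
        asyncio.run(pump())

    threading.Thread(target=runner, daemon=True).start()
    while True:
        item = q.get()
        if item is _DONE:
            return
        yield item
```

With this bridge, `do_something(sync_iter(create_asynchronous_iterable()))` consumes items one at a time instead of materializing a list first.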
<python><python-asyncio><iterable>
2024-05-27 15:51:27
0
605
Sebastian Thomas
78,539,937
6,068,731
Computing effective sample size in tensorflow - Complex128 Warning
<p>When computing the effective sample size of a NumPy array as done below</p> <pre><code>import numpy as np
import tensorflow_probability as tfp

x = np.random.randn(1000, 2)  # float64 by default (randn takes no dtype argument)
ess = tfp.mcmc.effective_sample_size(x)
</code></pre> <p>I get the following cryptic warning:</p> <pre><code>WARNING:tensorflow:You are casting an input of type complex128 to an incompatible dtype float64. This will discard the imaginary part and may not be what you intended.
</code></pre> <p>How can I fix this?</p>
<python><numpy><tensorflow>
2024-05-27 15:25:18
1
728
Physics_Student
78,539,906
66,473
Logic behind adding keyboard callback in Flet app
<p>The following code feels a bit counterintuitive (you are deleting something defined later in the code) but works:</p> <pre><code>import flet as ft


def main(page: ft.Page):
    def on_keyboard(e: ft.KeyboardEvent):
        page.remove(cntnr)
        page.update()

    page.on_keyboard_event = on_keyboard

    cntnr = ft.Container(width=100, height=100, bgcolor=ft.colors.RED)
    page.add(cntnr)


if __name__ == &quot;__main__&quot;:
    ft.app(target=main)
</code></pre> <p>If I define and add a container first, and add a keyboard callback to do something to that container later, then the keyboard callback suddenly doesn't work (keypresses are getting ignored and I just hear beep sounds on keypresses):</p> <pre><code>import flet as ft


def main(page: ft.Page):
    cntnr = ft.Container(width=10, height=10, bgcolor=ft.colors.RED)
    page.add(cntnr)

    def on_keyboard(e: ft.KeyboardEvent):
        page.remove(cntnr)
        page.update()

    page.on_keyboard_event = on_keyboard


if __name__ == &quot;__main__&quot;:
    ft.app(target=main)
</code></pre> <p><strong>Why is it like this?</strong> Am I missing something in the Flet documentation?</p> <p>This was only tested on Mac for now, but I think it's reproducible on Linux and Windows as well. Is it a bug?</p> <p><a href="https://flet.dev/docs/cookbook/keyboard-shortcuts" rel="nofollow noreferrer">https://flet.dev/docs/cookbook/keyboard-shortcuts</a></p>
<python><flet>
2024-05-27 15:20:36
1
9,041
Alex Bolotov
78,539,835
1,304,247
Running Python package and unittests do not work at the same time
<p>Error in structuring and running Python code.</p> <p><strong>Directory Structure:</strong></p> <pre><code>project
├── library
│   ├── __init__.py
│   ├── time_calculator.py
│   ├── distance_calculator.py
│   ├── weight_calculator.py
│   ├── calculator.py
│   │
│   └── tests
│       ├── __init__.py
│       └── unittests_calculator.py
│
├── __init__.py
└── README.txt
</code></pre> <p><strong>SCENARIO 1:</strong></p> <p>code inside the file <code>calculator.py</code></p> <pre><code>from time_calculator import TimeCalculator
from distance_calculator import DistanceCalculator
from weight_calculator import weightCalculator

class Calculator:
    pass

if __name__ == &quot;__main__&quot;:
    calculator = Calculator()
    calculator.calculate()
</code></pre> <p>code inside the file <code>tests/unittests_calculator.py</code>...</p> <pre><code>from library.calculator import Calculator
.
.
.
if __name__ == '__main__':
    unittest.main()
</code></pre> <p><strong>Observations for Scenario 1:</strong></p> <p>In case of the above scenario,</p> <ul> <li><p>[<code>python calculator.py</code>] Works!</p> </li> <li><p>[<code>python -m unittest tests/unittests_calculator.py</code>] <strong>DOES NOT WORK!</strong></p> <p><strong>Error</strong>: No module named 'time_calculator'</p> </li> </ul> <p><strong>SCENARIO 2:</strong></p> <p>code inside the file <code>calculator.py</code> (notice that I have added <code>library.</code> - the package name - before the module names)</p> <pre><code>from library.time_calculator import TimeCalculator
from library.distance_calculator import DistanceCalculator
from library.weight_calculator import weightCalculator

class Calculator:
    pass

if __name__ == &quot;__main__&quot;:
    calculator = Calculator()
    calculator.calculate()
</code></pre> <p>code inside the <code>tests/unittests_calculator.py</code></p> <pre><code>from library.calculator import Calculator
.
.
.
if __name__ == '__main__':
    unittest.main()
</code></pre> <p><strong>Observations for Scenario 2:</strong></p> <p>In case of the above scenario,</p> <ul> <li><p>[<code>python calculator.py</code>] <strong>DOES NOT WORK!</strong></p> <p><strong>Error</strong>: No module named 'library.time_calculator'; 'library' is not a package</p> </li> <li><p>[<code>python -m unittest tests/unittests_calculator.py</code>] Works!</p> </li> </ul> <p><strong>QUESTION</strong> I have <code>__init__.py</code>, which represents <code>library</code> as a package, but still something is not right, and I am not sure where I am wrong. How can I resolve it to work for both: running the unittests and running the code?</p> <p><em><strong>NOTE</strong>: I am using <strong>python-3.9.13</strong></em></p>
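A hedged sketch of the usual resolution: keep the package-style imports of scenario 2 but make them *relative* (`from .time_calculator import TimeCalculator`), and launch the script as a module with `python -m library.calculator` from the project root, so the same import resolves for both the script run and `python -m unittest`. A self-contained reproduction (file contents are trimmed stand-ins for the real modules):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Build a minimal copy of the layout in a temp dir.
root = tempfile.mkdtemp()
lib = os.path.join(root, "library")
os.makedirs(lib)
open(os.path.join(lib, "__init__.py"), "w").close()

with open(os.path.join(lib, "time_calculator.py"), "w") as f:
    f.write("class TimeCalculator:\n    pass\n")

with open(os.path.join(lib, "calculator.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from .time_calculator import TimeCalculator  # relative import

        class Calculator:
            def calculate(self):
                return "ok"

        if __name__ == "__main__":
            print(Calculator().calculate())
        """))

# `python -m library.calculator` resolves the relative import because the
# module runs *as part of the package*; plain `python library/calculator.py`
# would not, which is exactly the scenario-2 failure.
result = subprocess.run(
    [sys.executable, "-m", "library.calculator"],
    cwd=root, capture_output=True, text=True,
)
print(result.stdout)
```

The unit tests keep their absolute `from library.calculator import Calculator` import and are likewise run from the project root with `python -m unittest`.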
<python><python-3.x>
2024-05-27 15:07:11
2
1,241
wafers
78,539,834
16,436,095
Correct way of mocking an httpx response
<p>I built an API with Litestar, and I have several methods that internally call functions making asynchronous network requests.</p> <pre class="lang-py prettyprint-override"><code># routs.py
class TokenController(Controller):
    @post(&quot;/check-token&quot;)
    async def check_token(self, data: SomeSchema) -&gt; Response:
        try:
            token = data.token
            response = await get_token(token=token)
            response_json = json.loads(response.text)
            return Response(
                status_code=status_codes.HTTP_200_OK,
                content=SomeResponse(
                    idToken=response_json[&quot;idToken&quot;],
                    refreshToken=response_json[&quot;refreshToken&quot;]
                ),
            )
        except Exception as e:
            raise HTTPException(
                detail=e.detail, status_code=e.status_code, extra=e.extra
            )
</code></pre> <pre class="lang-py prettyprint-override"><code># service.py
async def get_token(token: str) -&gt; Response:
    url = &quot;some-url&quot;
    headers = {&quot;Content-Type&quot;: &quot;application/json&quot;}
    data = {
        &quot;token&quot;: token,
    }
    async with httpx.AsyncClient() as client:
        response = await client.post(url, headers=headers, json=data)
        return response
</code></pre> <p>I'd like to write a test for the <code>check_token</code> function. How can I mock the internal <code>get_token</code> function?</p> <p>UPD. I tried something like this:</p> <pre class="lang-py prettyprint-override"><code># tests.py
@pytest.mark.asyncio
class TestTokenController:
    @mock.patch(&quot;app.service.get_token&quot;, new_callable=mock.AsyncMock)
    async def test_check_token(self, get_token_mock):
        my_mock = AsyncMock()
        my_mock.return_value = TOKEN_RESPONSE
        get_token_mock.side_effect = my_mock
        async with AsyncTestClient(app=app) as client:
            response = await client.post(&quot;/check-token&quot;, json=TOKEN_DATA)
        assert response is None  # I'll fix it later. Now I just examine the response
</code></pre> <p>I'm expecting to receive TOKEN_RESPONSE in the response, but I see that the response looks like a real response (moreover, if I turn the Internet connection off, the test returns a 500 error). My mock doesn't participate in the test.</p> <p>Dirs structure:</p> <pre><code>project
├── app
│   ├── token
│   │   ├── __init__.py
│   │   └── routes.py
│   └── service.py
└── tests
    └── tests_token
        └── test_routes.py
</code></pre>
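Two likely issues, hedged since the import statement in routes.py isn't shown. First, `mock.patch` must target the name *where it is looked up*: if routes.py does `from app.service import get_token`, the binding lives in the routes module, so the target would be `"app.token.routes.get_token"`, not `"app.service.get_token"` — patching the latter leaves the route calling the real function, which matches "my mock doesn't participate". Second, the mocked return value must carry a `.text` string, since the handler calls `json.loads(response.text)`. A framework-free sketch of the mock shape:

```python
import asyncio
import json
from unittest import mock

async def check_token(get_token, token):
    """Simplified stand-in for the handler: depends only on the injected
    get_token coroutine, the way the patched name would be used."""
    response = await get_token(token=token)
    payload = json.loads(response.text)
    return {"idToken": payload["idToken"],
            "refreshToken": payload["refreshToken"]}

# The mock must mimic an httpx.Response closely enough: .text is a str.
fake_response = mock.Mock()
fake_response.text = json.dumps({"idToken": "abc", "refreshToken": "xyz"})

get_token_mock = mock.AsyncMock(return_value=fake_response)
result = asyncio.run(check_token(get_token_mock, "t0k3n"))
```

In the real test that translates to `@mock.patch("app.token.routes.get_token", new_callable=mock.AsyncMock)` with `get_token_mock.return_value = fake_response` — no extra `side_effect` mock needed.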
<python><mocking>
2024-05-27 15:06:45
0
370
maskalev
78,539,830
8,372,455
Fernet key must be 32 url-safe base64-encoded bytes error despite correct key length
<p>I'm working on a Python project using <code>aiohttp</code> and <code>aiohttp_session</code> with <code>EncryptedCookieStorage</code> for session management. I'm generating a Fernet key in a batch script and passing it to my Python application via an environment variable. However, despite the key being of the correct length (44 characters), I'm encountering the error:</p> <pre><code>ValueError: Fernet key must be 32 url-safe base64-encoded bytes.
</code></pre> <p>I am experimenting with just running a PowerShell script to generate the key and save it as an env variable. Any suggestions on best practices for this process are greatly appreciated; not a lot of wisdom here...</p> <pre><code>@echo off

REM Generate the Fernet key using Python and store it in a variable
for /f &quot;delims=&quot; %%i in ('python -c &quot;from cryptography.fernet import Fernet; print(Fernet.generate_key().decode().strip())&quot;') do set FERNET_KEY=%%i

REM Display the generated key (optional, for debugging purposes)
echo Generated Fernet Key: %FERNET_KEY%
echo Fernet Key Length: %FERNET_KEY%

REM Set the environment variable for the current session
set &quot;FERNET_KEY=%FERNET_KEY%&quot;

REM Run your Python application with the environment variable set
cmd /c &quot;set FERNET_KEY=%FERNET_KEY% &amp;&amp; python main.py&quot;
</code></pre> <p>And this is a snip of the <code>aiohttp</code> app:</p> <pre><code>import os
import logging
from aiohttp import web
import aiohttp_cors
import aiohttp_session
from aiohttp_session.cookie_storage import EncryptedCookieStorage
import hashlib
from cryptography.fernet import Fernet

# Set up logging
logging.basicConfig(level=logging.DEBUG)

# Retrieve Fernet key from environment variables
FERNET_KEY = os.getenv(&quot;FERNET_KEY&quot;)
if not FERNET_KEY:
    raise ValueError(&quot;FERNET_KEY environment variable not set&quot;)

FERNET_KEY = FERNET_KEY.strip()
logging.debug(f&quot;FERNET_KEY from environment: '{FERNET_KEY}'&quot;)
logging.debug(f&quot;FERNET_KEY length: {len(FERNET_KEY)}&quot;)

if len(FERNET_KEY) != 44:
    raise ValueError(&quot;FERNET_KEY must be 32 url-safe base64-encoded bytes (44 characters long)&quot;)

# Validate and encode the Fernet key
fernet_key = FERNET_KEY.encode()
logging.debug(f&quot;Encoded Fernet Key: {fernet_key}&quot;)

# Retrieve admin credentials from environment variables
ADMIN_USERNAME = os.getenv(&quot;ADMIN_USERNAME&quot;, &quot;admin&quot;)
ADMIN_PASSWORD = os.getenv(&quot;ADMIN_PASSWORD&quot;, &quot;password&quot;)
USERS = {ADMIN_USERNAME: hashlib.sha256(ADMIN_PASSWORD.encode()).hexdigest()}
</code></pre> <p>and the error I cannot seem to rectify:</p> <pre><code>Generated Fernet Key: oGpGY-WldRjBpAzBMr3AfBQdKqIOZ530KegvDbsXMuk=
Fernet Key Length: oGpGY-WldRjBpAzBMr3AfBQdKqIOZ530KegvDbsXMuk=
DEBUG:root:FERNET_KEY from environment: 'oGpGY-WldRjBpAzBMr3AfBQdKqIOZ530KegvDbsXMuk='
DEBUG:root:FERNET_KEY length: 44
DEBUG:root:Encoded Fernet Key: b'oGpGY-WldRjBpAzBMr3AfBQdKqIOZ530KegvDbsXMuk='
Traceback (most recent call last):
  File &quot;C:\Users\bbartling\Desktop\react-login\server\main.py&quot;, line 114, in &lt;module&gt;
    aiohttp_session.setup(app, EncryptedCookieStorage(fernet_key))
  File &quot;C:\Users\bbartling\AppData\Local\Programs\Python\Python312\Lib\site-packages\aiohttp_session\cookie_storage.py&quot;, line 47, in __init__
    self._fernet = fernet.Fernet(secret_key)
  File &quot;C:\Users\bbartling\AppData\Local\Programs\Python\Python312\Lib\site-packages\cryptography\fernet.py&quot;, line 40, in __init__
    raise
ValueError: Fernet key must be 32 url-safe base64-encoded bytes.
</code></pre> <p>I've verified that the key is 44 characters long, but it still throws this error. The debug logs show the key length and the exact value being used.</p> <p>The Fernet key is being used for session-based authentication in a front-end application made in React. Any tips appreciated for best practices on creating a secure login for an aiohttp and React app.</p>
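A hedged debugging aid rather than a root-cause claim: cryptography's check is not about string length but about the *decoded* value being exactly 32 bytes, so a 44-character string can still fail if it carries a stray quote, carriage return, or space picked up from the shell (note, for instance, that in `cmd /c "set FERNET_KEY=%FERNET_KEY% && python main.py"` the space before `&&` becomes part of the value). Decoding the key yourself before handing it to Fernet turns the opaque error into a precise one:

```python
import base64
import binascii
import os

def load_fernet_key(var="FERNET_KEY"):
    """Fetch the key from the environment and verify it decodes to the
    32 bytes Fernet requires; raise a precise error otherwise."""
    raw = os.environ[var].strip().strip('"')
    try:
        # validate=True rejects any character outside the url-safe alphabet
        decoded = base64.b64decode(raw.encode("ascii"),
                                   altchars=b"-_", validate=True)
    except (binascii.Error, UnicodeEncodeError) as exc:
        raise ValueError(f"{var} is not url-safe base64: {exc!r}") from exc
    if len(decoded) != 32:
        raise ValueError(f"{var} decodes to {len(decoded)} bytes, expected 32")
    return raw.encode("ascii")
```

On the best-practice question: generating the key inside the Python process (or reading it from a secrets store) avoids the shell round-trip entirely, and the batch dance becomes unnecessary.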
<python><session><cryptography><session-cookies><web-development-server>
2024-05-27 15:06:22
1
3,564
bbartling
78,539,704
850,781
Conda update flips between two versions of the same package from the same channel
<p>This is the continuation of <a href="https://stackoverflow.com/q/78512062/850781">my previous question</a>.</p> <p>Now, with both <code>channel_priority: flexible</code> and <code>channel_priority: strict</code>, every <code>conda update -n base --all</code> flips between</p> <pre><code>The following packages will be UPDATED: libarchive 3.7.2-h313118b_1 --&gt; 3.7.4-haf234dc_0 zstd 1.5.5-h12be248_0 --&gt; 1.5.6-h0ea2cb4_0 The following packages will be DOWNGRADED: zstandard 0.22.0-py311he5d195f_0 --&gt; 0.19.0-py311ha68e1ae_0 </code></pre> <p>and</p> <pre><code>The following packages will be UPDATED: zstandard 0.19.0-py311ha68e1ae_0 --&gt; 0.22.0-py311he5d195f_0 The following packages will be DOWNGRADED: libarchive 3.7.4-haf234dc_0 --&gt; 3.7.2-h313118b_1 zstd 1.5.6-h0ea2cb4_0 --&gt; 1.5.5-h12be248_0 </code></pre> <p>and the corresponding <code>conda list -n base --show-channel-urls</code> are</p> <pre><code>libarchive 3.7.2 h313118b_1 XXX zstandard 0.22.0 py311he5d195f_0 XXX zstd 1.5.5 h12be248_0 XXX </code></pre> <p>and</p> <pre><code>libarchive 3.7.4 haf234dc_0 XXX zstandard 0.19.0 py311ha68e1ae_0 XXX zstd 1.5.6 h0ea2cb4_0 XXX </code></pre> <p>where <code>XXX</code> is the exact same (internal/corp) channel (which I do not control).</p> <p>What am I doing wrong? How do I avoid this?</p> <p>Is this a bug in <code>conda 24.5.0</code> or in the configuration of the <code>XXX</code> channel?</p> <p>PS. <a href="https://github.com/conda/conda/issues/13945" rel="nofollow noreferrer">Reported</a></p>
<python><conda><miniconda>
2024-05-27 14:36:18
1
60,468
sds
78,539,623
1,838,726
How to reorder (re-index) the vertices of a graph in graph-tool - efficiently?
<p>I use graph-tool (I love it for its sheer speed) with python as a basis for developing and running own graph algorithms. Sometimes I want to reorder the vertices of the graph, i.e. the indices of the vertices, like swapping vertex indices, without changing the topology of the graph. This doesn't seem to be a standard operation in graph-tool and I'm wondering if I'm using it the wrong way or simply overlooked a fast way to do reorderings. So far, I found two ways to reorder graphs in graph-tool:</p> <ol> <li><p><strong>Recreate Graph</strong><br /> For substantial changes, e.g. for randomly shuffling the vertex order, it's good to recreate a new graph based on the old one and pass a desired vertex order, like so:</p> <pre><code> import numpy as np import graph_tool as gt g = gt.Graph(...) rng = np.random.default_rng(seed=123) vertex_dst_indices = rng.choice(g.num_vertices()) vertex_dst_indices_prop_map = self.gtg.new_vertex_property(&quot;int&quot;, vals=vertex_dst_indices) g_reordered = gt.Graph(g, vorder=vertex_dst_indices_prop_map) </code></pre> </li> <li><p><strong>Swap Vertex Data</strong><br /> For small changes, e.g. for swapping two vertices, it's good to keep the vertices with their indices, but to swap all associated data about them, which are edges and vertex properties for my case.</p> </li> </ol> <p>Both strategies work, but each of them seem unnecessarily complicated. Recreating the graph needs two graphs in the RAM at one point, and swapping all vertex data is way more performance hungry (= time consuming) than it would need to be. Are there better ways to either reorder the indices or else to decorate a graph and use it as if the indices were reordered? Swapping indices in a fast O(1) would be ideal for me, but I couldn't find a way to do this.</p> <p>Of course I can write a vertex mapper myself and wrap the gt.Graph inside, but this feels like a code smell. Is there an existing graph-tool solution?</p>
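For the "decorate the graph" route, one lightweight option (a sketch, independent of graph-tool's own API) is to keep the `gt.Graph` untouched and maintain an explicit permutation between user-facing indices and the graph's internal ones; swaps are then O(1) and the graph is never copied:

```python
import numpy as np

class VertexOrderView:
    """Maintain a mutable vertex ordering over a fixed graph.

    `to_internal[ext]` maps a user-facing index to the graph's real
    vertex index; `to_external` is the inverse. Swaps are O(1) and the
    underlying graph is never rebuilt.
    """

    def __init__(self, num_vertices: int):
        self.to_internal = np.arange(num_vertices)
        self.to_external = np.arange(num_vertices)

    def swap(self, a: int, b: int) -> None:
        ia, ib = self.to_internal[a], self.to_internal[b]
        self.to_internal[a], self.to_internal[b] = ib, ia
        self.to_external[ia], self.to_external[ib] = b, a

view = VertexOrderView(5)
view.swap(0, 3)
print(view.to_internal)  # [3 1 2 0 4]
```

Lookups into the real graph then go through `view.to_internal[i]`; a full shuffle is a single assignment to the two arrays.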
<python><algorithm><graph><graph-tool>
2024-05-27 14:17:53
0
6,700
Daniel S.
78,539,617
3,765,883
'str' object is not callable error in Python/Selenium web-scraping script
<p>I have a very simple Python/Selenium web-scraping script, as follows:</p> <pre><code>from re import L from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By cService = webdriver.ChromeService(executable_path=&quot;C:\\Users\\Frank\\Documents\\Visual Studio 2022\\Projects\\IV3_WebsiteAutomation\\chromedriver.exe&quot;) driver = webdriver.Chrome(service = cService) driver.get(&quot;https://app.iv3.us/login&quot;) print(driver.title) search_bar = driver.find_element(&quot;name&quot;, &quot;userName&quot;) search_bar.send_keys(&quot;paynterf@gmail.com&quot;) search_bar = driver.find_element(&quot;name&quot;, &quot;password&quot;) search_bar.send_keys(&quot;xxxxxxxxxxxxxxx&quot;) search_bar.send_keys(Keys.RETURN) # res = driver.find_element(By.XPATH(&quot;.//a[contains(@href,'View Active')]&quot;)) # res = driver.find_element(By.XPATH(&quot;//a[contains(@href,'View Active')]&quot;)) XXX = driver.find_element(By.XPATH(&quot;//div[./h4[text()='Moved from Registered Address']]//a[text()='View Active']&quot;)) driver.close() </code></pre> <p>When run, the 'XXX = driver.find_element...' line produces the dreaded <strong>'str' object is not callable</strong> error.</p> <p>Googling the error has led me to numerous posts describing how this error is caused by attempting to use a Python reserved word for a variable, but I can't see how that could be happening in my script; I even went so far as to change the 'res' variable name to 'XXX', but this didn't change anything.</p> <p>Could it be that this line:</p> <pre><code>from selenium.webdriver.common.by import By </code></pre> <p>and its subsequent use in the last line of the script is causing the problem? 
I got this usage from the <a href="https://selenium-python.readthedocs.io/locating-elements.html" rel="nofollow noreferrer">Selenium 'Locating Elements' tutorial</a> , so I would be amazed if it was the issue, but....</p> <p>TIA,</p> <p>Frank</p> <p>[edit]: per request from @OneCricketeer, I have included the error traceback:</p> <pre><code>IV3 'str' object is not callable Stack trace: &gt; File &quot;C:\Users\Frank\Documents\Visual Studio 2022\Projects\IV3_WebsiteAutomation\IV3_WebsiteAutomation.py&quot;, line 20, in &lt;module&gt; (Current frame) &gt; XXX = driver.find_element(By.XPATH(&quot;//div[./h4[text()='Moved from Registered Address']]//a[text()='View Active']&quot;)) &gt;TypeError: 'str' object is not callable Loaded 'main' Loaded 'runpy' </code></pre>
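For reference (and this matches the traceback): `By.XPATH` is just the string constant `"xpath"`, so writing `By.XPATH(...)` calls a string — `find_element` wants the strategy and the expression as two separate arguments, `driver.find_element(By.XPATH, "...")`. A stdlib-only reproduction of the failure mode:

```python
# selenium.webdriver.common.by.By.XPATH is literally the string "xpath".
BY_XPATH = "xpath"  # stand-in so the snippet runs without selenium installed

try:
    # Equivalent to By.XPATH("//div[...]") -- calling a str object:
    BY_XPATH("//div[./h4[text()='Moved from Registered Address']]")
except TypeError as exc:
    print(exc)  # 'str' object is not callable

# Working shape (two positional arguments, not a call):
#   XXX = driver.find_element(By.XPATH, "//div[...]//a[text()='View Active']")
```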
<python><selenium-webdriver>
2024-05-27 14:15:18
1
327
user3765883
78,539,509
18,910,865
CLI Tool Fails to Recognize New Arguments During Direct Execution with Python-Fire
<p>I'm working on a CLI in Python, built with <code>python-fire</code>, and I don't know why the main class of the CLI does not pick up newly added arguments.</p> <p>I have mainly 2 scripts that rule the CLI: <code>main.py</code> and <code>cli.py</code></p> <p><code>main.py</code> is structured as follows:</p> <pre class="lang-py prettyprint-override"><code>from src.cli import SearchFunction import fire def main_entrypoint(): fire.Fire(SearchFunction) if __name__ == &quot;__main__&quot;: main_entrypoint() </code></pre> <p>While <code>cli.py</code> has the main class, where I've added the <code>group</code> argument:</p> <pre class="lang-py prettyprint-override"><code>class SearchFunction: @staticmethod def text_search( *keywords: str, type: Optional[str] = None, group: Optional[str] = None, # NEW ARGUMENT ) -&gt; None: &quot;&quot;&quot; Perform a custom text search on the XXX website and save the results. :param keywords: List of keywords to search for :param type: Type of search :param group: group to search # NEW ARGUMENT &quot;&quot;&quot; if True: do_things() </code></pre> <hr /> <p>If I run the tool using <code>poetry</code>:</p> <pre><code>poetry run edgar-tool text_search keywords to search for --type &quot;type_1&quot; --group &quot;group_1&quot; </code></pre> <p>it correctly works with the new argument <code>group</code>.</p> <p>Instead, with <em><strong>direct execution</strong></em>:</p> <pre class="lang-bash prettyprint-override"><code>python main.py text_search keywords to search for --type &quot;type_1&quot; --group &quot;group_1&quot; </code></pre> <p>returns:</p> <pre class="lang-bash prettyprint-override"><code>ERROR: Could not consume arg: --group </code></pre> <p>Even clearing the <code>__pycache__</code>:</p> <pre><code>rm -r __pycache__ </code></pre> <p>leads to the same result.</p> <p>Why does this happen? How can I make direct execution pick up the edits that have been made?</p>
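One likely culprit (an assumption — not visible in the snippets): `python main.py` may be importing a previously installed copy of `src.cli` (e.g. an older `edgar-tool` build in site-packages) while `poetry run` resolves the in-project sources. A quick stdlib check of what actually got imported:

```python
import importlib
import inspect


def whereis(module_name: str, attr: str = "") -> str:
    """Report the file a module was really imported from, plus an optional signature."""
    mod = importlib.import_module(module_name)
    report = str(mod.__file__)
    if attr:
        report += " :: " + str(inspect.signature(getattr(mod, attr)))
    return report


# Hypothetical usage for the question's layout (paths will differ):
#   print(whereis("src.cli"))  # does this point at the file you just edited?
# Demo with a stdlib module so the snippet is runnable anywhere:
print(whereis("json", "dumps"))
```

If the printed path is inside site-packages rather than the project tree, reinstall in editable mode or adjust `sys.path`/the working directory.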
<python><arguments><command-line-interface><python-poetry><python-fire>
2024-05-27 13:57:37
0
522
Nauel
78,539,421
2,667,066
Reorder "ragged" numpy array defined by offsets (i.e. contiguous slices)
<p>I have a long 1D numeric data array of bytes (e.g. chars) along with a set of offsets which define a set of contiguous slices along the array (e.g. these could correspond to words). Here's an example, showing the character equivalent of each byte:</p> <pre><code>data = np.array(['h', 'e', 'l', 'l', 'o', 't', 'h', 'e', 'r', 'e', 'y', 'o', 'u', '!']) offsets = np.array([0, 5, 10, 13, 14]) # defines words [0:5], [5:10], [10:13], [13:14] </code></pre> <p>I want an efficient way to rearrange the slices (&quot;words&quot;) in the original array to make a new reordered data array. For instance,</p> <pre class="lang-py prettyprint-override"><code>new_word_order = np.array([2, 1, 0, 3]) # length == len(offset)-1 # some function that uses the new_word_order to make the following new_data == np.array(['y', 'o', 'u', 't', 'h', 'e', 'r', 'e', 'h', 'e', 'l', 'l', 'o', '!']) new_offsets == np.array([0, 3, 8, 13, 14]) </code></pre> <p>Ideally, I would also like the function to be able to duplicate or delete indexes:</p> <pre><code>new_word_order = np.array([2, 3, 3, 2, 3]) # no longer needs to be the same length as `offsets` # results in new_data == np.array(['y', 'o', 'u', '!', '!', 'y', 'o', 'u', '!']) new_offsets = np.array([0, 3, 4, 5, 8, 9]) </code></pre> <p>EDIT: I have concocted the solution below which uses np.arange repeatedly inside a list comprehension</p> <pre><code>ranges = np.array([offsets[:-1], offsets[1:]]).T reordered_ranges = ranges[new_word_order, :] new_offsets = np.zeros(len(new_word_order)+1, dtype=offsets.dtype) idx = [np.arange(l, r, dtype=offsets.dtype) for l, r in reordered_ranges if l != r] select = [] if len(idx) == 0 else np.concatenate(idx) new_data = data[select] new_offsets = np.insert(np.cumsum(np.diff(reordered_ranges, axis=1)), 0, 0) </code></pre> <p>But I worry that for large arrays (say a million slices) this will be inefficient, so I'm wondering if there's a built-in multiple-slice accessor, or if something with <code>np.r_</code>, 
<code>np.s_</code>, or <code>np.slice</code> would avoid the need for large numbers of np.arange calls. The answer at <a href="https://stackoverflow.com/questions/65430859/convert-list-of-tuples-into-list-of-slices-to-use-with-np-r">Convert list of tuples into list of slices to use with np.r_</a> seems to suggest maybe not.</p>
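The list comprehension of `np.arange` calls can be collapsed into a single vectorized gather with the cumsum/`np.repeat` trick, which also handles duplicated, dropped, and empty slices:

```python
import numpy as np

def reorder_slices(data, offsets, order):
    """Gather the slices data[offsets[k]:offsets[k+1]] in the given order."""
    starts = offsets[:-1][order]
    lengths = (offsets[1:] - offsets[:-1])[order]
    cum = np.cumsum(lengths)
    total = int(lengths.sum())
    # Output position p inside reordered slice k must read from
    # starts[k] + (p - cum[k-1]); np.repeat builds that shift per element.
    shift = np.repeat(starts - np.concatenate(([0], cum[:-1])), lengths)
    new_data = data[np.arange(total) + shift]
    new_offsets = np.concatenate(([0], cum))
    return new_data, new_offsets

data = np.array(list("hellothereyou!"))
offsets = np.array([0, 5, 10, 13, 14])

nd, no = reorder_slices(data, offsets, np.array([2, 1, 0, 3]))
print("".join(nd))   # youtherehello!
print(no.tolist())   # [0, 3, 8, 13, 14]

nd, no = reorder_slices(data, offsets, np.array([2, 3, 3, 2, 3]))
print("".join(nd))   # you!!you!
print(no.tolist())   # [0, 3, 4, 5, 8, 9]
```

This performs O(output size) work in a handful of vectorized calls, with no per-slice Python loop.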
<python><numpy><ragged>
2024-05-27 13:39:39
1
2,169
user2667066
78,539,418
5,568,409
Python, scipy, gamma.rvs and rng
<p>I am trying to sample from the <code>gamma</code> distribution and would like to &quot;fix&quot; the sample I get.</p> <p>What I do is:</p> <pre><code>from scipy.stats import gamma gamma.rvs(a = 20, loc = 0, scale = 1/2, size = 10) </code></pre> <p>and this gives me a different array of values, each time I run the code...</p> <p>I think that using <code>rng</code> could &quot;freeze&quot; the array once sampled, but browsing the maybe(?) appropriate page <a href="https://docs.scipy.org/doc/scipy/tutorial/stats/sampling.html" rel="nofollow noreferrer">HERE</a> confuses me on how to concretely implement the <code>rng</code> instruction in my code lines.</p> <p>Could someone explain what would be the best way?</p>
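`rvs` takes a `random_state` argument; passing a seeded `numpy.random.default_rng` generator (or just an integer seed) makes the sample reproducible across runs:

```python
import numpy as np
from scipy.stats import gamma

def sample_gamma(seed: int, size: int = 10) -> np.ndarray:
    rng = np.random.default_rng(seed)  # the seeded source of randomness
    return gamma.rvs(a=20, loc=0, scale=1/2, size=size, random_state=rng)

first = sample_gamma(42)
second = sample_gamma(42)
print(np.array_equal(first, second))  # True: same seed, identical sample
```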
<python><random><scipy><generator>
2024-05-27 13:38:59
0
1,216
Andrew
78,539,135
7,031,021
How to avoid InterpolationResolutionError with hydra config variables in vsCode notebooks
<p>I know that hydra has limited support when it comes to notebooks, but the reality is that many projects have at least some notebooks that might need the same set of configurations.</p> <p>I would like to know how to load variables in a notebook that use config variables like:</p> <pre><code>conf.yaml paths: root: ${hydra:runtime.cwd}/data/MNIST </code></pre> <p>without getting an error:</p> <pre><code>In notebook # Initialize Hydra and compose the configuration with initialize(version_base=None, config_path=&quot;conf&quot;): cfg = compose(config_name=&quot;config.yaml&quot;) cfg.paths.root </code></pre> <pre><code>InterpolationResolutionError: ValueError raised while resolving interpolation: HydraConfig was not set full_key: paths.root object_type=dict </code></pre>
<python><visual-studio-code><jupyter-notebook><fb-hydra>
2024-05-27 12:39:15
2
510
RSale
78,539,092
7,808,647
How to provide additional information on enum values to an LLM in LangChain
<p>I'm trying to extract structured information with an LLM say GPT-4 using LangChain in python. My goal is to classify companies by associating them with tags.</p> <p>My output class is of the type:</p> <pre><code>from langchain_core.pydantic_v1 import BaseModel class Company(BaseModel): industry: list[Industry] customer: list[Customer] </code></pre> <p>So far so good. Now the problem is, some of the tags might be somewhat specific and I'd like to pass more information to the LLM to help it decide between options. Using <code>Enum</code> from <code>aenum</code> as described <a href="https://stackoverflow.com/questions/52062831/how-do-i-properly-document-python-enum-elements">here</a> I can add e.g. docstrings to the enum values:</p> <pre><code>from aenum import Enum class Industry(Enum): _init_ = 'value __doc__' it = &quot;Information Technology&quot;, &quot;All kinds of computer stuff&quot; agriculture = &quot;Agriculture&quot;, &quot;Farming, irrigation, fertilizers etc.&quot; class Customer(Enum): _init_ = 'value __doc__' B2C = &quot;B2C&quot;, &quot;Companies selling directly to consumers&quot; B2B = &quot;B2B&quot;, &quot;Companies selling to other businesses&quot; </code></pre> <p>Now I have my values and some helpful explanations, however, there's no direct way of passing these to the LLM.</p> <p>If I use the <code>.with_structured_output()</code> or <code>PydanticOutputParser</code> they fail to pass the docstrings from the enum members:</p> <pre><code>from langchain_core.output_parsers import PydanticOutputParser parser = PydanticOutputParser(pydantic_object=Company) parser.get_format_instructions() # 'The output should be formatted as a JSON instance that conforms to the JSON schema below. 
# As an example, for the schema {&quot;properties&quot;: {&quot;foo&quot;: {&quot;title&quot;: &quot;Foo&quot;, &quot;description&quot;: &quot;a list of strings&quot;, &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;type&quot;: &quot;string&quot;}}}, &quot;required&quot;: [&quot;foo&quot;]} # the object {&quot;foo&quot;: [&quot;bar&quot;, &quot;baz&quot;]} is a well-formatted instance of the schema. The object {&quot;properties&quot;: {&quot;foo&quot;: [&quot;bar&quot;, &quot;baz&quot;]}} is not well-formatted. # Here is the output schema: # ``` # {&quot;properties&quot;: {&quot;industry&quot;: {&quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;$ref&quot;: &quot;#/definitions/Industry&quot;}}, &quot;customer&quot;: {&quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;$ref&quot;: &quot;#/definitions/Customer&quot;}}}, &quot;required&quot;: [&quot;industry&quot;, &quot;customer&quot;], &quot;definitions&quot;: {&quot;Industry&quot;: {&quot;title&quot;: &quot;Industry&quot;, &quot;description&quot;: &quot;An enumeration.&quot;, &quot;enum&quot;: [&quot;Information Technology&quot;, &quot;Agriculture&quot;]}, &quot;Customer&quot;: {&quot;title&quot;: &quot;Customer&quot;, &quot;description&quot;: &quot;An enumeration.&quot;, &quot;enum&quot;: [&quot;B2C&quot;, &quot;B2B&quot;]}}} #```' </code></pre> <p>As a workaround, I can of course write a custom prompt that explicitly details the docstrings, but was just curious if anyone has figured out a more straightforward way to do it.</p>
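One straightforward workaround (nothing LangChain-specific; the names below are illustrative) is to render the member explanations into a glossary string yourself and append it to the prompt alongside the parser's format instructions:

```python
from enum import Enum

class Customer(Enum):
    B2C = "B2C"
    B2B = "B2B"

# With aenum these explanations could be pulled from each member's __doc__;
# a plain dict keeps this sketch dependency-free.
CUSTOMER_DOCS = {
    Customer.B2C: "Companies selling directly to consumers",
    Customer.B2B: "Companies selling to other businesses",
}

def enum_glossary(enum_cls, docs) -> str:
    """Render '- "value": explanation' lines to splice into the LLM prompt."""
    return "\n".join(f'- "{m.value}": {docs[m]}' for m in enum_cls)

prompt_extra = "Allowed Customer values:\n" + enum_glossary(Customer, CUSTOMER_DOCS)
print(prompt_extra)
```

The resulting text can simply be concatenated after `parser.get_format_instructions()` in the prompt template.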
<python><enums><large-language-model><py-langchain>
2024-05-27 12:30:36
1
522
antti
78,538,780
12,242,085
How to assign appropriate ZipCode from one table to geographical data from another table in PySpark / Python?
<p>I have in PySpark:</p> <p><strong>Input data:</strong></p> <ol> <li>table location_df with columns:</li> </ol> <ul> <li><p>location <code>(values: Canada, Costa Rica, Mexico, Puerto Rico, United States)</code></p> </li> <li><p>answer_label</p> <p>(values: ['Guadalajara', 'San José', 'Heredia', 'Puntarenas', 'Do not wish to answer', 'Cartago', 'Alajuela', 'Limon', 'Guanacaste', 'Monterrey', 'MidWest', 'Arizona', 'Alaska', 'NorthEast', 'Arkansas', 'Alabama', 'South', 'West', 'Northeast', 'Maryland', 'Michigan', 'Mississippi', 'Maine', 'Kentucky', 'Minnesota', 'Massachusetts', 'Louisiana', 'Iowa', 'Kansas', 'Ohio', 'Rhode Island', 'South Carolina', 'Vermont', 'West Virginia', 'Wisconsin', 'Pennsylvania', 'Washington', 'Washington DC', 'South Dakota', 'Oklahoma', 'Utah', 'Virginia', 'Texas', 'Wyoming', 'Tennessee', 'Oregon', 'Alberta', 'British Colombia', 'Ontario', 'New Brunswick', 'Manitoba', 'Prince Edward Island', 'Quebec', 'Nova Scotia', 'Saskatchewan', 'Newfoundland and Labrador', 'Mayagüez', 'Midwest', 'San Juan Metro', 'Caguas', 'San Juan Sub', 'Otras', 'México D.F.', 'Ponce', 'Arecibo', 'Hawaii', 'Florida', 'California', 'Connecticut', 'Georgia', 'Idaho', 'Illinois', 'Indiana', 'Colorado', 'Delaware', 'Nevada', 'North Carolina', 'Nebraska', 'New York', 'Montana', 'New Mexico', 'North Dakota', 'Missouri', 'New Hampshire', 'New Jersey']</p> </li> </ul> <ol start="2"> <li>table imp_df with column ZipCode with example values: '68364', '30133' and many many more...</li> </ol> <p><strong>My question:</strong></p> <p>How to create pipeline to merge above datasets (location_df and imp_df) based on values in column &quot;answer_label&quot; from location_df and assign to them appropriate ZipCode from column &quot;ZipCode&quot; from table &quot;imp_df&quot; ?</p> <p><strong>Desired output</strong></p> <p>As an output I need all columns from table &quot;location_df&quot; and assigned to each value from the &quot;answer_label&quot; column the corresponding ZipCode from 
the ZipCode column of the imp_df table. Like below:</p> <pre><code>location | answer_label | ZipCode </code></pre> <p>Of course, if more than 1 ZipCode matches a given geographic area, then such a line should be duplicated as many times as there are postal codes assigned to that area.</p> <p>Is there any package in Python or PySpark to assign this according to actual geographical data? An example code snippet would be appreciated :)</p>
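Assuming a lookup table that maps every ZIP code to the region label it belongs to (building that table requires an external ZIP-code database — an assumption here), the duplication behaviour falls out of a plain inner join; sketched in pandas for brevity, and the same shape works as `location_df.join(zip_lookup, "answer_label", "inner")` in PySpark:

```python
import pandas as pd

# Hypothetical region -> ZIP lookup (in reality built from a ZIP-code database):
zip_lookup = pd.DataFrame({
    "answer_label": ["Arizona", "Arizona", "Alabama"],
    "ZipCode": ["85001", "85002", "35004"],
})

location_df = pd.DataFrame({
    "location": ["United States", "United States"],
    "answer_label": ["Arizona", "Alabama"],
})

# Inner join: each location row is repeated once per matching ZIP code.
out = location_df.merge(zip_lookup, on="answer_label", how="inner")
print(out.to_string(index=False))
```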
<python><pyspark><geolocation>
2024-05-27 11:27:01
1
2,350
dingaro
78,538,536
4,271,491
How to parse a huge number of JSON files effectively in PySpark
<p>I'm trying to parse around 100GB of small JSON files using PySpark. The files are stored in a Google Cloud bucket and come zipped: *.jsonl.gz <br> How can I do this effectively?</p>
<python><pyspark><google-cloud-dataproc>
2024-05-27 10:28:24
0
528
Aleksander Lipka
78,538,476
9,134,545
Airflow DockerOperator: argument list too long
<p><strong>The Problem:</strong></p> <p>I have a DAG where I basically launch a Docker container with a bunch of volumes and environment variables.</p> <p>One of these environment variables is a JSON string that must be computed with Python code and then supplied to the DockerOperator. This JSON string serves as a configuration for some application.</p> <pre><code>from datetime import datetime from airflow import DAG from airflow.providers.docker.operators.docker import DockerOperator default_args = { 'owner' : 'airflow', 'description' : 'Test', 'start_date' : datetime(2024, 5, 1), } def some_function(): # Here is the logic to generate the HUGE dynamic JSON String json_string = &quot;...&quot; return json_string with DAG('docker_operator_demo', default_args=default_args, schedule_interval=&quot;5 * * * *&quot;, catchup=False) as dag: example_task = DockerOperator( task_id='run_my_app', image='my_docker_img', container_name='my_app', api_version='auto', auto_remove=True, command=&quot;echo run_my_app&quot;, docker_url=&quot;unix://var/run/docker.sock&quot;, network_mode=&quot;bridge&quot;, environment={ &quot;MY_APP_CONFIG&quot;: some_function() } ) example_task </code></pre> <p>The problem is that this JSON string is sometimes huge (like 200+ keys where each value is 1000+ characters). When the JSON is this huge, I get the error <code>argument list too long</code>.</p> <p>I understand where this error comes from; it's just that the env variable exceeds the UNIX limit.</p> <p><strong>Possible solution:</strong></p> <p>I've thought about supplying this &quot;dynamically computed&quot; JSON string as a volume mount to the DockerOperator but I can't find a proper way to do this.</p> <p><strong>The Question:</strong></p> <p>With Airflow, is it possible to dynamically create a file and supply it as a volume to the DockerOperator? If yes, can you please share an example?</p>
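One approach (a sketch — the `Mount` wiring in the trailing comment is assumed, not tested here): write the computed JSON to a host directory in an upstream step, bind-mount that directory into the container, and pass only a small file path through the environment:

```python
import json
import os
import tempfile

def write_app_config(config: dict, directory: str) -> str:
    """Persist the oversized config as a file; only its path travels further."""
    path = os.path.join(directory, "my_app_config.json")
    with open(path, "w") as fh:
        json.dump(config, fh)
    return path

# Round-trip demo with a deliberately oversized payload:
big = {f"key_{i}": "x" * 1000 for i in range(200)}
with tempfile.TemporaryDirectory() as tmp:
    path = write_app_config(big, tmp)
    with open(path) as fh:
        loaded = json.load(fh)
    print(loaded == big, os.path.getsize(path) > 200_000)  # True True

# Hypothetical DockerOperator wiring (parameter names assumed, untested):
#   mounts=[Mount(source=host_config_dir, target="/config",
#                 type="bind", read_only=True)],
#   environment={"MY_APP_CONFIG_PATH": "/config/my_app_config.json"},
```

The environment then carries only a short path, which stays far below the kernel's argument/environment size limit.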
<python><docker><airflow>
2024-05-27 10:16:08
0
892
Fragan
78,538,178
135,749
Python httpx log all request headers
<p>How can I log all request headers of an httpx request? I use log level DEBUG and response headers are logged fine, but there are no request headers in the log. If it matters, I'm using the async API of the httpx lib.</p>
<python><httpx>
2024-05-27 09:12:58
1
1,034
Grigory
78,538,094
6,221,742
Poor predictions with Prophet
<p>I am trying to make forecasting in my dataset using prophet but my results are very poor. I tried adding seasonality and and seasonality mode but the results on MAE and R-squared are very low. Here's an overview of my dataset</p> <p><a href="https://i.sstatic.net/Lx0x7Kdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lx0x7Kdr.png" alt="enter image description here" /></a></p> <p>and here's my code</p> <pre><code># pandas import pandas as pd # prophet from prophet import Prophet # metrics from sklearn.metrics import r2_score, mean_absolute_error # holoviews import holoviews as hv from holoviews import opts # Create a new column to identify if the date is a weekday or weekend df_prophet['day_type'] = df_prophet['ds'].apply(lambda x: 'Weekend' if x.weekday() &gt;= 5 else 'Weekday') df_prophet threshold_date = pd.to_datetime('2023-10-01') mask = df_prophet['ds'] &lt; threshold_date # Split the data and select `ds` and `y` columns. df_train = df_prophet[mask][['ds', 'y']] df_test = df_prophet[~ mask][['ds', 'y']] def build_model(): &quot;&quot;&quot;Define forecasting model.&quot;&quot;&quot; model = Prophet( yearly_seasonality=True, weekly_seasonality=True, # daily_seasonality=True, # holidays = holidays, interval_width=0.95, mcmc_samples = 1000, # seasonality_mode = 'multiplicative', seasonality_mode = 'additive', growth= 'linear' ) # model.add_seasonality( # name='daily', # period=5, # fourier_order=5 # ) return model model = build_model() model.fit(df_train) # Extend dates and features. horizon = df_test.shape[0] future = model.make_future_dataframe(periods=horizon, freq='D') # daily days predictions # Generate predictions. 
forecast = model.predict(df=future) forecast.loc[:, 'yhat'] = forecast['yhat'].clip(lower=0) forecast.loc[:, 'yhat_lower'] = forecast['yhat_lower'].clip(lower=0) print('r2 train: {}'.format(r2_score(y_true=df_train['y'], y_pred=forecast_train['yhat']))) print('r2 test: {}'.format(r2_score(y_true=df_test['y'], y_pred=forecast_test['yhat']))) print('---'*10) print('mae train: {}'.format(mean_absolute_error(y_true=df_train['y'], y_pred=forecast_train['yhat']))) print('mae test: {}'.format(mean_absolute_error(y_true=df_test['y'], y_pred=forecast_test['yhat']))) </code></pre> <p>The results are quite poor though.</p> <p><a href="https://i.sstatic.net/ZLhCsOVm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLhCsOVm.png" alt="enter image description here" /></a></p> <p>How can I improve my model?</p>
<python><time-series><forecasting><facebook-prophet>
2024-05-27 08:55:33
0
339
AndCh
78,538,033
4,340,985
How to turn a df column into a multiindex column?
<p>I have a dataframe with a column multiindex and various data columns:</p> <pre><code> id value1 value2 valuen name date foo 01-2000 No01 324 6575 ... bar 02-2000 No02 964 0982 ... </code></pre> <p>Now I need to turn <code>id</code> into a number (<code>df['id'] = df['id'].str[1:]</code>; <code>df['id'] = df['id'].astype(int)</code>) and add this as a third column to the multiindex.</p> <p>Of course there are various ways to do that, but I'm wondering if there is one of the many built-in pandas shortcuts for it, i.e. something like <code>pd.make_index_col(df['id'])</code> or <code>df.add_to_index('id')</code> to get a df with, in this case, 3 index columns:</p> <pre><code> value1 value2 valuen name date id foo 01-2000 1 324 6575 ... bar 02-2000 2 964 0982 ... </code></pre>
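pandas does have exactly this shortcut: `set_index` with `append=True` adds one or more columns as new levels on the existing MultiIndex:

```python
import pandas as pd

df = pd.DataFrame(
    {"id": ["N01", "N02"], "value1": [324, 964], "value2": [6575, 982]},
    index=pd.MultiIndex.from_tuples(
        [("foo", "01-2000"), ("bar", "02-2000")], names=["name", "date"]
    ),
)

df["id"] = df["id"].str[1:].astype(int)
# set_index(..., append=True) keeps name/date and adds id as a third level:
df = df.set_index("id", append=True)

print(list(df.index.names))  # ['name', 'date', 'id']
print(df.loc[("foo", "01-2000", 1), "value1"])  # 324
```

The inverse operation is `df.reset_index("id")`, which moves the level back out to a column.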
<python><dataframe><multi-index>
2024-05-27 08:44:08
0
2,668
JC_CL
78,537,817
1,084,174
ImportError: cannot import name 'DILL_AVAILABLE'
<p>I want to work with IMDB datasets. Trying to load using following command:</p> <pre><code>from torchtext.datasets import IMDB train_iter = IMDB(root='~/datasets', split='train') </code></pre> <p>I am getting following error:</p> <pre><code>ImportError: cannot import name 'DILL_AVAILABLE' from 'torch.utils.data.datapipes.utils.common' (/home/user/env_p3.10.12_ml/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py) </code></pre> <p><strong>How to solve it?</strong></p>
<python><pytorch><compiler-errors><dataset><torchtext>
2024-05-27 07:55:39
1
40,671
Sazzad Hissain Khan
78,537,678
3,595,995
Pyspark 3.5.0 fixing vulnerabilities in JAR files
<p>I'm working on a Python project that depends on PySpark 3.5.1.</p> <p>During the CI/CD pipeline, a Nexus scan identified vulnerabilities in PySpark's dependencies:</p> <ol> <li>avro-ipc-1.11.2.jar</li> <li>jackson-mapper-asl-1.9.13.jar</li> </ol> <p>These JAR files are sub-dependencies of PySpark.</p> <p>We use a plain <code>requirements.txt</code> file, and there's no straightforward way to exclude or override these sub-dependencies. I attempted to use Poetry and <code>constraints.txt</code>, but since these are JAR files, it didn't work as expected.</p> <p>Could someone provide guidance on how to address these vulnerabilities?</p>
<python><pyspark><python-poetry>
2024-05-27 07:23:52
0
445
PremKumarR
78,537,509
1,142,881
Postgres connection URL options stopped working when switching from PG11 to PG14
<p>I have a codebase that relies on a connection to Postgres via a full self-contained connection URL. The Python codebase was working on PG11, but after switching to PG14 the options in the connection URL are no longer recognized:</p> <pre><code>postgresql://username:password@host:port/database?&amp;options=-csearch_path=my_schema&amp;target_session_attrs=read-write&amp;connect_timeout=5 </code></pre> <p>specifically this part: <strong>?&amp;options=-csearch_path=my_schema&amp;target_session_attrs=read-write&amp;connect_timeout=5</strong></p> <p>The <code>search_path</code> is not recognized and therefore the schema <code>my_schema</code> is not set by default.</p> <p>I have tried many possibilities, including the following, but none seem to work:</p> <ol> <li>adding encoding to the space like <code>?&amp;options=-c%20search_path=my_schema</code></li> <li>removing the leading &amp; before the options</li> <li>removing the -c before search_path</li> <li>specifying search_path without options</li> </ol>
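One thing worth trying (the PG14-vs-PG11 behaviour difference is an assumption here): libpq's URI parser wants the inner `=` of the `options` value percent-encoded so the whole payload reads as one opaque value. A stdlib sketch that builds the URL with the encoding applied:

```python
from urllib.parse import quote

def pg_url(user, password, host, port, db, search_path):
    # Encode the whole options payload so libpq sees one opaque value:
    opts = quote(f"-csearch_path={search_path}", safe="")
    return (
        f"postgresql://{user}:{password}@{host}:{port}/{db}"
        f"?options={opts}&target_session_attrs=read-write&connect_timeout=5"
    )

url = pg_url("username", "password", "host", 5432, "database", "my_schema")
print(url)
# ...?options=-csearch_path%3Dmy_schema&target_session_attrs=read-write&connect_timeout=5
```

Note the stray `&` right after `?` in the original URL is also dropped, since an empty first query parameter may confuse stricter parsers.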
<python><postgresql><psycopg2><postgresql-14><postgresql-11>
2024-05-27 06:38:51
1
14,469
SkyWalker
78,537,429
12,091,935
Create multiple modified versions of a script for submission to cluster
<p>I am trying to create child scripts from one parent script where a few parameters are modified (48 child scripts, so automation would be preferred). My intention for this is to run different Modelica scripts as individual Slurm jobs. I would normally do this with the Modelica scripting language, but since I need to submit each script individually I will need to create multiple scripts. The pseudocode is as follows. Note: The search strings are unique</p> <pre><code>load('model.mo') # modelica file, parent script # change the control strategy in the script for i in ['control_1', 'control_2', 'control_3', 'control_4']: # change the amount of electricity generation find and replace r'moduleName = &quot;control&quot;' with 'moduleName = control_' + str(i) for j in [3, 7]: find and replace '.CombiTimeTable solar_data(columns = {2}' with '.CombiTimeTable solar_data(columns = {' + str(j) + '}' # change the battery size for k in [2000000000, 4000000000, 6000000000]: find and replace 'Storage.Battery BESS(EMax = 2000000000, SOC_start = 0.5, pf = 0.9)' with 'Storage.Battery BESS(EMax = ' + str(k) + ', SOC_start = 0.5, pf = 0.9)' for l in ['4', '8']: find and replace '.CombiTimeTable ev_data(columns = {2}' with '.CombiTimeTable ev_data(columns = {' + str(l) + '}' export('child_model_#.mo') </code></pre> <p>My goal is to change the actual text of each new script, not just the variables. I am not sure if I should use Python, Bash, or something else for this task, especially since I am modifying a non-<code>.txt</code> file.</p>
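Plain Python string replacement works on `.mo` files just as well as on `.txt` — they are text files. A sketch of the generator (search strings and parameter grids taken from the pseudocode; the exact `replace` targets are shortened and would need to match your real `model.mo`):

```python
import itertools
import pathlib
import tempfile

def make_children(template: str, out_dir: pathlib.Path) -> list:
    """Write one child .mo file per parameter combination (4*2*3*2 = 48)."""
    controls = ["control_1", "control_2", "control_3", "control_4"]
    solar_cols = [3, 7]
    batteries = [2000000000, 4000000000, 6000000000]
    ev_cols = [4, 8]
    written = []
    combos = itertools.product(controls, solar_cols, batteries, ev_cols)
    for n, (ctrl, solar, emax, ev) in enumerate(combos):
        text = (
            template
            .replace('moduleName = "control"', f'moduleName = "{ctrl}"')
            .replace("solar_data(columns = {2}", f"solar_data(columns = {{{solar}}}")
            .replace("BESS(EMax = 2000000000,", f"BESS(EMax = {emax},")
            .replace("ev_data(columns = {2}", f"ev_data(columns = {{{ev}}}")
        )
        path = out_dir / f"child_model_{n}.mo"
        path.write_text(text)  # .mo files are plain text, same as .txt
        written.append(path)
    return written

# Demo on a miniature stand-in for model.mo:
template = (
    'moduleName = "control"\n'
    "CombiTimeTable solar_data(columns = {2})\n"
    "Storage.Battery BESS(EMax = 2000000000, SOC_start = 0.5, pf = 0.9)\n"
    "CombiTimeTable ev_data(columns = {2})\n"
)
with tempfile.TemporaryDirectory() as tmp:
    files = make_children(template, pathlib.Path(tmp))
    print(len(files))  # 48
    print('moduleName = "control_1"' in files[0].read_text())  # True
```

Each child file can then be submitted as its own Slurm job, e.g. by also templating a small sbatch wrapper per file.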
<python><text-manipulation>
2024-05-27 06:14:10
1
435
Luis Enriquez-Contreras
78,537,407
2,702,249
Packaging a Python script with its dependent packages into a single package
<p>I have a situation:</p> <p>On a macOS machine with <code>Python 3.6.8</code>, I wrote a single Python script called <code>publisher.py</code> which uses the <code>pycrypto</code> package. Everything runs as expected.</p> <p>Now, I need to put my code in a shared location of production Linux env, from where hundreds of developers will run the script with just <code>Python 3.6.8</code> in their machine. I don't have rights/privilege to install <code>pycrypto</code> in production Linux env, I can just copy something there.</p> <p>Obviously, I can't force all developers to do <code>pip install pycrypto</code> on their individual env.</p> <p>Is there a way I can package my single script with its dependency (<code>pycrypto</code>) and copy it to production Linux env?</p> <p>Please suggest.</p>
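The stdlib `zipapp` module can bundle the script plus pip-installed dependencies (`pip install <pkg> --target build/`) into one `.pyz` file that runs on any machine with a matching Python. A self-contained demo of the mechanism, using a toy module as a stand-in for the real dependency:

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

def build_and_run() -> str:
    """Bundle a script plus a dependency into one .pyz and execute it."""
    with tempfile.TemporaryDirectory() as tmp:
        build = pathlib.Path(tmp) / "build"
        build.mkdir()
        # Stand-ins: for the real case you would run
        #   pip install pycryptodome --target build/   (maintained pycrypto fork)
        # and copy publisher.py next to the installed packages.
        (build / "fakedep.py").write_text("GREETING = 'hello from dep'\n")
        (build / "publisher.py").write_text(
            "import fakedep\n"
            "def main():\n"
            "    print(fakedep.GREETING)\n"
        )
        target = pathlib.Path(tmp) / "publisher.pyz"
        zipapp.create_archive(build, target, main="publisher:main")
        out = subprocess.run(
            [sys.executable, str(target)],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

print(build_and_run())  # hello from dep
```

One caveat: packages with compiled C extensions (pycrypto included) must be `pip install --target`-ed on a Linux machine matching the production platform, since the bundled `.so` files are platform-specific.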
<python><pip><package>
2024-05-27 06:06:23
1
7,743
Om Sao
78,537,167
126,833
Exception occurred in file 'chrome.py', line 64: 'NoneType' object has no attribute 'split'
<p>I am using pyhtml2pdf for converting HTML to PDF, which has a requirement of headless Chrome.</p> <p>When the server is run as <code>python manage.py runserver</code> there is no issue.</p> <p>But when Django is run as a service, it throws an error regarding split in chrome.py:</p> <pre><code>Exception occurred in file ‘/var/www/myProject/env/lib/python3.11/site-packages/webdriver_manager/drivers/chrome.py’, line 64: ‘NoneType’ object has no attribute ‘split’. </code></pre> <p>Our service:</p> <pre><code>cat /etc/systemd/system/myProject.service [Unit] Description=myProject_0 project After=network.target [Service] User=root Group=root WorkingDirectory=/var/www/myProject Environment=&quot;PATH=/var/www/myProject/env/bin/activate&quot; ExecStart=/bin/bash -c 'source /var/www/myProject/env/bin/activate &amp;&amp; python manage.py runserver' StandardOutput=append:/var/log/myProject/myProject.log StandardError=append:/var/log/myProject/myProject_error.log [Install] WantedBy=multi-user.target </code></pre>
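A possible root cause worth checking (this is an assumption): `webdriver_manager`'s chrome.py shells out to detect the installed browser version and calls `.split()` on the result, so a failed lookup yields `None`. Under systemd, the unit's `Environment="PATH=...activate"` points at the activate *script* rather than at any `bin` directories, so child processes may not find Chrome at all. A sketch of a unit fragment with a real directory PATH:

```ini
[Service]
# PATH must be a colon-separated list of directories,
# not the path of the virtualenv's activate script.
Environment="PATH=/var/www/myProject/env/bin:/usr/local/bin:/usr/bin:/bin"
# Using the venv's python directly removes the need to source activate:
ExecStart=/var/www/myProject/env/bin/python manage.py runserver
```

After editing, `systemctl daemon-reload` and a service restart are needed for the change to take effect.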
<python><django><selenium-chromedriver>
2024-05-27 04:16:39
2
4,291
anjanesh
78,537,131
9,737,855
Dealing with a very large xarray dataset: loading slices consuming too much time
<p>I have a very large netCDF dataset consisting of daily chunks of data from April 1985 to April 2024. As the arrays are divided into daily chunks, I often open them by using <code>ds = xr.open_mfdataset(*.nc)</code>. The entire dataset is up to 1.07TB, far more than I can load into memory:</p> <p><a href="https://i.sstatic.net/3GYAHQQl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GYAHQQl.png" alt="enter image description here" /></a></p> <p>By slicing over lat/lon coordinates <code>ds.sel(latitude=y, longitude=x, method='nearest')</code> I get a single pixel along my timeseries, which is now far lighter than the original dataset and allows me to perform the analysis I need:</p> <p><a href="https://i.sstatic.net/655LE5IB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/655LE5IB.png" alt="enter image description here" /></a></p> <p>However, even though the sliced dataset is now very light, it still takes a long time to load into memory (more than 1h) with <code>ds.load()</code>. This would not be a big deal if I didn't need to perform this operation more than 100,000 times, which would take an incredible 10 years to finish!</p> <p>I don't have a powerful machine, but it's decent enough for the tasks I need. Although I was expecting this task to take some time, I really wish to finish it before becoming a dad. Besides going for a more powerful machine (which I think will still not reduce the required time to the order of days), is there any way I can try to optimize this task?</p>
<python><dask><python-xarray>
2024-05-27 04:01:17
1
311
Gabriel Lucas
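Two things usually help with the workload above: (1) rechunk the daily files into a single store (e.g. zarr) with chunks contiguous along time, so a point time series does not touch ~14,000 files, and (2) extract all points in one vectorized `.sel` with `DataArray` indexers instead of 100,000 separate sel/load round trips. The NumPy stand-in below demonstrates the second idea; the xarray call it mirrors is shown in the comments and assumes 1-D latitude/longitude coordinates:

```python
# Stand-in demo (NumPy only): extract many (lat, lon) time series with ONE
# vectorized gather instead of one .sel()/.load() round trip per point.
# With xarray the equivalent pattern is:
#   ds.sel(latitude=xr.DataArray(qlat, dims="points"),
#          longitude=xr.DataArray(qlon, dims="points"),
#          method="nearest").load()    # one load for ALL points
import numpy as np

rng = np.random.default_rng(0)
ntime, nlat, nlon = 50, 40, 60
data = rng.random((ntime, nlat, nlon))          # toy stand-in for the dataset
lat_axis = np.linspace(-20.0, 5.0, nlat)
lon_axis = np.linspace(-75.0, -45.0, nlon)

# 1,000 query points here; the same gather scales to 100,000
qlat = rng.uniform(-20, 5, 1000)
qlon = rng.uniform(-75, -45, 1000)

# nearest-neighbour index lookup, vectorized over all points at once
ilat = np.abs(lat_axis[None, :] - qlat[:, None]).argmin(axis=1)
ilon = np.abs(lon_axis[None, :] - qlon[:, None]).argmin(axis=1)

series = data[:, ilat, ilon]   # shape (time, n_points), a single gather
print(series.shape)            # (50, 1000)
```

The key point is that the cost of reading the chunks is paid once for the whole batch of points, not once per point.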
78,537,075
2,555,706
Why is "dict[int, int]" incompatible with "dict[int, int | str]"?
<pre class="lang-py prettyprint-override"><code>import typing a: dict[int, int] = {} b: dict[int, int | str] = a c: typing.Mapping[int, int | str] = a d: typing.Mapping[int | str, int] = a </code></pre> <p>Pylance reports an error for <code>b: dict[int, int | str] = a</code>:</p> <pre class="lang-none prettyprint-override"><code>Expression of type &quot;dict[int, int]&quot; is incompatible with declared type &quot;dict[int, int | str]&quot; &quot;dict[int, int]&quot; is incompatible with &quot;dict[int, int | str]&quot; Type parameter &quot;_VT@dict&quot; is invariant, but &quot;int&quot; is not the same as &quot;int | str&quot; Consider switching from &quot;dict&quot; to &quot;Mapping&quot; which is covariant in the value type </code></pre> <p>But <code>c: typing.Mapping[int, int | str] = a</code> is OK.</p> <p>Additionally, <code>d: typing.Mapping[int | str, int] = a</code> also gets an error:</p> <pre class="lang-none prettyprint-override"><code>Expression of type &quot;dict[int, int]&quot; is incompatible with declared type &quot;Mapping[int | str, int]&quot; &quot;dict[int, int]&quot; is incompatible with &quot;Mapping[int | str, int]&quot; Type parameter &quot;_KT@Mapping&quot; is invariant, but &quot;int&quot; is not the same as &quot;int | str&quot; </code></pre> <p>Why are these type hints incompatible?<br /> If a function declares a parameter of type <code>dict[int, int | str]</code>, how can I pass a <code>dict[int, int]</code> object as its parameter?</p>
<python><python-typing><type-theory>
2024-05-27 03:25:43
2
426
keakon
78,536,986
4,451,521
a chat with gradio (how to modify its screen appearance)
<p>I wrote this very simple script</p> <pre><code>import requests import gradio as gr import json def chat_response(message, history, response_limit): return f&quot;You wrote: {message} and asked {response_limit}&quot; with gr.Blocks() as demo: gr.Markdown(&quot;# Data Query!&quot;) with gr.Row(): with gr.Column(scale=3): response_limit = gr.Number(label=&quot;Response Limit&quot;, value=10, interactive=True) with gr.Column(scale=7): chat = gr.ChatInterface(fn=chat_response,additional_inputs=[response_limit]) demo.launch() </code></pre> <p>With this I got something like <a href="https://i.sstatic.net/B0UosUzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B0UosUzu.png" alt="enter image description here" /></a></p> <p>Do you notice the huge blank space below the chat (in red) and how small the window for the chat is?</p> <p>I would like the chat space to extend all the way to the bottom (or if I put something else below, for all the elements in the column to occupy the whole screen (like the blue rectangle)</p> <p>How can I do this?</p> <p>EDIT: I modified the code to</p> <pre><code>import requests import gradio as gr import json def chat_response(message, history, response_limit): return f&quot;You wrote: {message} and asked {response_limit}&quot; css = &quot;&quot;&quot; #chatbot { flex-grow: 1 !important; overflow: auto !important; } &quot;&quot;&quot; with gr.Blocks(css=css) as demo: gr.Markdown(&quot;# Data Query!&quot;) with gr.Row(): with gr.Column(scale=3): response_limit = gr.Number(label=&quot;Response Limit&quot;, value=10, interactive=True) with gr.Column(scale=7): chat = gr.ChatInterface( fn=chat_response, chatbot=gr.Chatbot(elem_id=&quot;chatbot&quot;, render=False), additional_inputs=[response_limit] ) demo.launch() </code></pre> <p>and now the chatbox is a little larger but it does not go all the way to the bottom. 
How can I do this?</p> <p>(Memo) In the beginning I did not add <code>render=False</code> and I got</p> <pre><code> raise DuplicateBlockError( gradio.exceptions.DuplicateBlockError: A block with id: 7 has already been rendered in the current Blocks. </code></pre> <p>Thanks to <a href="https://discuss.huggingface.co/t/clear-chat-interface/49866/4" rel="nofollow noreferrer">this post</a> I could correct that, if anybody is having the same problem</p>
<python><gradio><gradio-chatinterface>
2024-05-27 02:38:10
1
10,576
KansaiRobot
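A pattern that often closes the remaining gap in the edited code above is to make the whole Gradio container viewport-height and let the chatbot flex-grow inside it. The selector names (`.contain`, `.gradio-container`, `#chatbot`) are assumptions that change between Gradio versions, so verify them against your page's DOM; the CSS string is built standalone here and would be passed as `gr.Blocks(css=css)`:

```python
# CSS to pass to gr.Blocks(css=...). Selector names are version-dependent
# assumptions -- inspect the rendered DOM to confirm them for your install.
css = """
.gradio-container { height: 100vh !important; }
.contain { display: flex; flex-direction: column; }
#chatbot { flex-grow: 1 !important; overflow: auto !important; }
"""

# Usage sketch (not run here):
#   with gr.Blocks(css=css) as demo:
#       chat = gr.ChatInterface(
#           fn=chat_response,
#           chatbot=gr.Chatbot(elem_id="chatbot", render=False),
#           additional_inputs=[response_limit],
#       )
print("100vh" in css)  # True
```

The idea is that `#chatbot { flex-grow: 1 }` only helps once an ancestor actually has a fixed height to grow into, which is what the `100vh` rule provides.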
78,536,915
395,857
How to use Gradio's gr.Button.click() so that it passes a local variable that is not a Gradio block to the called function?
<p>Example:</p> <pre><code>import gradio as gr def dummy(a, b): return '{0} {1}'.format(a, b) with gr.Blocks() as demo: a = 'Hello ' txt = gr.Textbox(value=&quot;test&quot;, label=&quot;Query&quot;, lines=1) answer = gr.Textbox(value=&quot;&quot;, label=&quot;Answer&quot;) btn = gr.Button(value=&quot;Submit&quot;) btn.click(dummy, inputs=[txt], outputs=[answer]) gr.ClearButton([answer]) demo.launch() </code></pre> <p>How can I change <code>btn.click</code> so that it passes the variable <code>a</code> to <code>dummy()</code>?</p> <p><a href="https://i.sstatic.net/E4vsfdGZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4vsfdGZ.png" alt="enter image description here" /></a></p>
<python><gradio>
2024-05-27 01:49:19
1
84,585
Franck Dernoncourt
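Two common answers to the question above: bind `a` at wiring time with `functools.partial`, or wrap it in `gr.State(a)` and list it among the `inputs`. The `partial` pattern can be demonstrated without Gradio at all; the commented `btn.click` line shows how it would be wired in the app:

```python
# Pattern 1: bind the extra value at wiring time with functools.partial.
# In the app this becomes:
#   btn.click(partial(dummy, a), inputs=[txt], outputs=[answer])
from functools import partial

def dummy(a, b):
    return '{0} {1}'.format(a, b)

a = 'Hello '
bound = partial(dummy, a)   # Gradio then only needs to supply `b` from inputs=[txt]
print(bound('test'))        # Hello  test

# Pattern 2 (sketch, needs gradio): make the value a component instead:
#   state = gr.State(a)
#   btn.click(dummy, inputs=[state, txt], outputs=[answer])
```

`partial` is the lighter option when `a` never changes; `gr.State` is preferable if the value should be updatable per session.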
78,536,889
4,348,400
How to implement a mixture of gamma distributions in Python without Bayes'?
<p>I am trying to create examples to compare and contrast <a href="https://en.wikipedia.org/wiki/Bayesian_inference" rel="nofollow noreferrer">Bayesian</a> <a href="https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo" rel="nofollow noreferrer">MCMC</a> (e.g. <a href="https://arxiv.org/pdf/1701.02434" rel="nofollow noreferrer">HMC</a>) with non-Bayesian equivalents. One of the cases I am finding difficult is creating a <a href="https://en.wikipedia.org/wiki/Mixture_distribution" rel="nofollow noreferrer">mixture</a> of <a href="https://en.wikipedia.org/wiki/Gamma_distribution" rel="nofollow noreferrer">gamma distributions</a>.</p> <p>I first had some provisional success with a mixture of two distributions:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.stats import gamma, rv_continuous import matplotlib.pyplot as plt from scipy.optimize import minimize class gamma_mixture(rv_continuous): def _pdf(self, x, w, a1, scale1, a2, scale2): return w * gamma.pdf(x, a1, scale=scale1) + (1 - w) * gamma.pdf(x, a2, scale=scale2) def fit(self, data): def log_likelihood(params): w, a1, scale1, a2, scale2 = params mixture = w * gamma.pdf(data, a1, scale=scale1) + (1 - w) * gamma.pdf(data, a2, scale=scale2) return -np.sum(np.log(mixture)) initial_params = [0.8, 2.0, 2.0, 10.0, 1.0] bounds = [(0, 1), (0, None), (0, None), (0, None), (0, None)] result = minimize(log_likelihood, initial_params, bounds=bounds, method='L-BFGS-B') if result.success: self.fitted_params = result.x else: raise RuntimeError(&quot;Optimization failed&quot;) # Generate sample data np.random.seed(2018) data = np.concatenate([ gamma.rvs(a=2.0, scale=2.0, size=100), gamma.rvs(a=20.0, scale=1.0, size=100) ]) # Define and fit the gamma mixture model to the data custom_gamma_mixture = gamma_mixture(name='gamma_mixture') custom_gamma_mixture.fit(data) w, a1, scale1, a2, scale2 = custom_gamma_mixture.fitted_params # Evaluate the PDF of the fitted mixture model x = 
np.linspace(data.min(), data.max(), 1000) pdf_vals = custom_gamma_mixture.pdf(x, w, a1, scale1, a2, scale2) # Plot the fitted PDF against the histogram of the data fig, axes = plt.subplots(2, sharex=True) axes[0].hist(data, bins=30, density=True, alpha=0.6, color='g', label='Data Histogram') axes[0].plot(x, pdf_vals, 'r-', lw=2, label='Fitted Mixture PDF') axes[0].set_title('Original Sample') axes[1].hist(custom_gamma_mixture(*custom_gamma_mixture.fitted_params).rvs(size=200), bins=30, density=True, alpha=0.6, color='b', label='Data Histogram') axes[1].plot(x, pdf_vals, 'r-', lw=2, label='Fitted Mixture PDF') axes[1].set_title('New Sample') plt.tight_layout() plt.show() # Output fitted parameters print(&quot;Fitted Parameters:&quot;) print(f&quot;w: {w:.4f}&quot;) print(f&quot;a1: {a1:.4f}, scale1: {scale1:.4f}&quot;) print(f&quot;a2: {a2:.4f}, scale2: {scale2:.4f}&quot;) </code></pre> <p>I then tried to generalize to multiple distributions and found that either I got failures to converge or the plotted distribution just didn't look right. 
Here is an example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.stats import gamma from scipy.optimize import minimize from typing import Tuple class GammaMixture: def __init__(self, n_components: int): self.n_components = n_components self.weights = np.ones(n_components) / n_components self.alphas = np.ones(n_components) self.scales = np.ones(n_components) def _pdf(self, x: np.ndarray) -&gt; np.ndarray: mixture = np.sum(self.weights[i] * gamma.pdf(x, self.alphas[i], scale=self.scales[i]) for i in range(self.n_components)) return mixture def _negative_log_likelihood(self, params: np.ndarray, data: np.ndarray) -&gt; float: self.weights, self.alphas, self.scales = np.split(params, [self.n_components, 2*self.n_components]) self.weights = np.exp(self.weights) / np.sum(np.exp(self.weights)) # Ensure probabilities sum to 1 neg_log_likelihood = -np.sum(np.log(self._pdf(data))) return neg_log_likelihood def fit(self, data: np.ndarray) -&gt; Tuple[np.ndarray, np.ndarray, np.ndarray]: initial_params = np.concatenate([np.zeros(self.n_components), np.ones(2*self.n_components)]) bounds = [(0, None)] * self.n_components + [(0, None)] * (2*self.n_components) result = minimize(self._negative_log_likelihood, initial_params, args=(data,), bounds=bounds) if result.success: self.weights, self.alphas, self.scales = np.split(result.x, [self.n_components, 2*self.n_components]) self.weights = np.exp(self.weights) / np.sum(np.exp(self.weights)) # Ensure probabilities sum to 1 return self.weights, self.alphas, self.scales else: raise RuntimeError(&quot;Optimization failed&quot;) def sample(self, n_samples: int) -&gt; np.ndarray: components = np.random.choice(self.n_components, size=n_samples, p=self.weights) samples = np.array([gamma.rvs(self.alphas[i], scale=self.scales[i]) for i in components]) return samples # Example usage: np.random.seed(0) data = np.concatenate([ gamma.rvs(a=2.0, scale=2.0, size=100), gamma.rvs(a=20.0, scale=1.0, size=100) ]) 
n_components = 3 gamma_mixture = GammaMixture(n_components) weights, alphas, scales = gamma_mixture.fit(data) print(&quot;Fitted Parameters:&quot;) print(&quot;Weights:&quot;, weights) print(&quot;Alphas:&quot;, alphas) print(&quot;Scales:&quot;, scales) # Generate samples from the fitted model samples = gamma_mixture.sample(n_samples=1000) import matplotlib.pyplot as plt # Plot histograms plt.figure(figsize=(10, 5)) # Histogram of original data plt.subplot(1, 2, 1) plt.hist(data, bins=30, density=True, color='blue', alpha=0.6, label='Original Data') plt.title('Histogram of Original Data') plt.xlabel('Value') plt.ylabel('Density') plt.legend() # Histogram of new samples plt.subplot(1, 2, 2) plt.hist(samples, bins=30, density=True, color='orange', alpha=0.6, label='New Samples') plt.title('Histogram of New Samples') plt.xlabel('Value') plt.ylabel('Density') plt.legend() plt.tight_layout() plt.show() </code></pre> <p>I had wondered if maybe I should use <a href="https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm" rel="nofollow noreferrer">expectation maximization</a>, so I tried that also using KMeans clustering to give the process a warm start:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.stats import gamma from sklearn.cluster import KMeans class GammaMixture: def __init__(self, n_components: int): &quot;&quot;&quot; Initialize the Gamma Mixture Model. Args: n_components (int): Number of gamma distributions (components) in the mixture. &quot;&quot;&quot; self.n_components = n_components self.weights = np.ones(n_components) / n_components self.alphas = np.ones(n_components) self.scales = np.ones(n_components) self.fitted = False def _e_step(self, data: np.ndarray) -&gt; np.ndarray: &quot;&quot;&quot; E-step: Calculate the responsibilities. Args: data (np.ndarray): Observed data. Returns: np.ndarray: Responsibilities of each component for each data point. 
&quot;&quot;&quot; responsibilities = np.zeros((data.shape[0], self.n_components)) for i in range(self.n_components): responsibilities[:, i] = self.weights[i] * gamma.pdf(data, a=self.alphas[i], scale=self.scales[i]) sum_responsibilities = np.sum(responsibilities, axis=1).reshape(-1, 1) if np.any(sum_responsibilities == 0): raise ValueError(&quot;Some data points have zero responsibilities.&quot;) responsibilities /= sum_responsibilities return responsibilities def _m_step(self, data: np.ndarray, responsibilities: np.ndarray): &quot;&quot;&quot; M-step: Update the parameters of the gamma distributions and the weights. Args: data (np.ndarray): Observed data. responsibilities (np.ndarray): Responsibilities of each component for each data point. &quot;&quot;&quot; total_resp = np.sum(responsibilities, axis=0) self.weights = total_resp / data.shape[0] for i in range(self.n_components): resp = responsibilities[:, i] weighted_data_sum = np.sum(resp * data) weighted_log_data_sum = np.sum(resp * np.log(data)) if total_resp[i] == 0 or weighted_data_sum == 0 or weighted_log_data_sum == 0: raise ValueError(f&quot;Invalid weighted sums for component {i}: total_resp={total_resp[i]}, weighted_data_sum={weighted_data_sum}, weighted_log_data_sum={weighted_log_data_sum}&quot;) self.alphas[i] = total_resp[i] / (np.sum(resp * np.log(data)) - np.sum(resp) * np.log(weighted_data_sum / total_resp[i])) self.scales[i] = weighted_data_sum / (total_resp[i] * self.alphas[i]) if np.isnan(self.alphas[i]) or np.isnan(self.scales[i]): raise ValueError(f&quot;NaN encountered in alphas or scales during M-step for component {i}.&quot;) print(f&quot;Component {i}: alpha={self.alphas[i]}, scale={self.scales[i]}, weight={self.weights[i]}&quot;) def _warm_start(self, data: np.ndarray): &quot;&quot;&quot; Warm start the parameters using K-means clustering. Args: data (np.ndarray): Observed data. 
&quot;&quot;&quot; kmeans = KMeans(n_clusters=self.n_components, random_state=0) labels = kmeans.fit_predict(data.reshape(-1, 1)) for i in range(self.n_components): cluster_data = data[labels == i] if len(cluster_data) == 0: continue data_mean = np.mean(cluster_data) data_var = np.var(cluster_data) self.alphas[i] = data_mean ** 2 / data_var self.scales[i] = data_var / data_mean self.weights[i] = len(cluster_data) / len(data) print(f&quot;Warm start Component {i}: alpha={self.alphas[i]}, scale={self.scales[i]}, weight={self.weights[i]}&quot;) def fit(self, data: np.ndarray, tol: float = 1e-6, max_iter: int = 100): &quot;&quot;&quot; Fit the Gamma Mixture Model to the data. Args: data (np.ndarray): Observed data. tol (float): Tolerance for convergence. max_iter (int): Maximum number of iterations. Raises: RuntimeError: If the optimization fails to converge. &quot;&quot;&quot; self._warm_start(data) log_likelihood_prev = -np.inf for iteration in range(max_iter): responsibilities = self._e_step(data) self._m_step(data, responsibilities) log_likelihood = np.sum(np.log(np.sum([w * gamma.pdf(data, a, scale=s) for w, a, s in zip(self.weights, self.alphas, self.scales)], axis=0))) print(f&quot;Iteration {iteration}: log_likelihood={log_likelihood}&quot;) if np.abs(log_likelihood - log_likelihood_prev) &lt; tol: break log_likelihood_prev = log_likelihood if np.any(np.isnan(self.weights)) or np.any(np.isnan(self.alphas)) or np.any(np.isnan(self.scales)): raise ValueError(&quot;NaN encountered in parameters after fitting.&quot;) self.fitted = True def sample(self, n_samples: int) -&gt; np.ndarray: &quot;&quot;&quot; Sample from the fitted Gamma Mixture Model. Args: n_samples (int): Number of samples to generate. Returns: np.ndarray: Samples generated from the model. Raises: RuntimeError: If the model has not been fitted yet. &quot;&quot;&quot; if not self.fitted: raise RuntimeError(&quot;Model has not been fitted yet. 
Fit the model first.&quot;) samples = np.zeros(n_samples) component_samples = np.random.choice(self.n_components, size=n_samples, p=self.weights) for i in range(self.n_components): n_component_samples = np.sum(component_samples == i) if n_component_samples &gt; 0: samples[component_samples == i] = gamma.rvs(a=self.alphas[i], scale=self.scales[i], size=n_component_samples) return samples # Example usage np.random.seed(0) data = np.concatenate([ gamma.rvs(a=2, scale=2, size=300), gamma.rvs(a=5, scale=1, size=300), gamma.rvs(a=9, scale=0.5, size=400) ]) gamma_mixture = GammaMixture(n_components=3) gamma_mixture.fit(data) samples = gamma_mixture.sample(n_samples=1000) import matplotlib.pyplot as plt import seaborn as sns plt.figure(figsize=(8, 6)) sns.histplot(data, color='blue', kde=True, label='Observed', stat='density') sns.histplot(samples, color='red', kde=True, label='Sampled', stat='density') plt.title('Distribution of Observed vs Sampled Data') plt.xlabel('Value') plt.ylabel('Density') plt.legend() plt.show() </code></pre> <p>But that also gave either convergence issues or visually poor agreement with the data. Lastly, I tried a warm start that used method of moments:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.stats import gamma class GammaMixture: def __init__(self, n_components: int): &quot;&quot;&quot; Initialize the Gamma Mixture Model. Args: n_components (int): Number of gamma distributions (components) in the mixture. &quot;&quot;&quot; self.n_components = n_components self.weights = np.ones(n_components) / n_components self.alphas = np.ones(n_components) self.scales = np.ones(n_components) self.fitted = False def _e_step(self, data: np.ndarray) -&gt; np.ndarray: &quot;&quot;&quot; E-step: Calculate the responsibilities. Args: data (np.ndarray): Observed data. Returns: np.ndarray: Responsibilities of each component for each data point. 
&quot;&quot;&quot; responsibilities = np.zeros((data.shape[0], self.n_components)) for i in range(self.n_components): responsibilities[:, i] = self.weights[i] * gamma.pdf(data, a=self.alphas[i], scale=self.scales[i]) sum_responsibilities = np.sum(responsibilities, axis=1).reshape(-1, 1) if np.any(sum_responsibilities == 0): raise ValueError(&quot;Some data points have zero responsibilities.&quot;) responsibilities /= sum_responsibilities return responsibilities def _m_step(self, data: np.ndarray, responsibilities: np.ndarray): &quot;&quot;&quot; M-step: Update the parameters of the gamma distributions and the weights. Args: data (np.ndarray): Observed data. responsibilities (np.ndarray): Responsibilities of each component for each data point. &quot;&quot;&quot; total_resp = np.sum(responsibilities, axis=0) self.weights = total_resp / data.shape[0] for i in range(self.n_components): resp = responsibilities[:, i] weighted_data_sum = np.sum(resp * data) weighted_log_data_sum = np.sum(resp * np.log(data)) if total_resp[i] == 0 or weighted_data_sum == 0 or weighted_log_data_sum == 0: raise ValueError(f&quot;Invalid weighted sums for component {i}: total_resp={total_resp[i]}, weighted_data_sum={weighted_data_sum}, weighted_log_data_sum={weighted_log_data_sum}&quot;) self.alphas[i] = (total_resp[i] / weighted_log_data_sum) self.scales[i] = (weighted_data_sum / total_resp[i]) / self.alphas[i] if np.isnan(self.alphas[i]) or np.isnan(self.scales[i]): raise ValueError(f&quot;NaN encountered in alphas or scales during M-step for component {i}.&quot;) print(f&quot;Component {i}: alpha={self.alphas[i]}, scale={self.scales[i]}, weight={self.weights[i]}&quot;) def _warm_start(self, data: np.ndarray): &quot;&quot;&quot; Warm start the parameters using Method of Moments. Args: data (np.ndarray): Observed data. 
&quot;&quot;&quot; data_mean = np.mean(data) data_var = np.var(data) for i in range(self.n_components): self.alphas[i] = data_mean ** 2 / data_var self.scales[i] = data_var / data_mean self.weights[i] = 1 / self.n_components print(f&quot;Warm start Component {i}: alpha={self.alphas[i]}, scale={self.scales[i]}, weight={self.weights[i]}&quot;) def fit(self, data: np.ndarray, tol: float = 1e-6, max_iter: int = 100): &quot;&quot;&quot; Fit the Gamma Mixture Model to the data. Args: data (np.ndarray): Observed data. tol (float): Tolerance for convergence. max_iter (int): Maximum number of iterations. Raises: RuntimeError: If the optimization fails to converge. &quot;&quot;&quot; self._warm_start(data) log_likelihood_prev = -np.inf for iteration in range(max_iter): responsibilities = self._e_step(data) self._m_step(data, responsibilities) log_likelihood = np.sum(np.log(np.sum([w * gamma.pdf(data, a, scale=s) for w, a, s in zip(self.weights, self.alphas, self.scales)], axis=0))) print(f&quot;Iteration {iteration}: log_likelihood={log_likelihood}&quot;) if np.abs(log_likelihood - log_likelihood_prev) &lt; tol: break log_likelihood_prev = log_likelihood if np.any(np.isnan(self.weights)) or np.any(np.isnan(self.alphas)) or np.any(np.isnan(self.scales)): raise ValueError(&quot;NaN encountered in parameters after fitting.&quot;) self.fitted = True def sample(self, n_samples: int) -&gt; np.ndarray: &quot;&quot;&quot; Sample from the fitted Gamma Mixture Model. Args: n_samples (int): Number of samples to generate. Returns: np.ndarray: Samples generated from the model. Raises: RuntimeError: If the model has not been fitted yet. &quot;&quot;&quot; if not self.fitted: raise RuntimeError(&quot;Model has not been fitted yet. 
Fit the model first.&quot;) samples = np.zeros(n_samples) component_samples = np.random.choice(self.n_components, size=n_samples, p=self.weights) for i in range(self.n_components): n_component_samples = np.sum(component_samples == i) if n_component_samples &gt; 0: samples[component_samples == i] = gamma.rvs(a=self.alphas[i], scale=self.scales[i], size=n_component_samples) return samples # Example usage np.random.seed(0) data = np.concatenate([ gamma.rvs(a=2, scale=2, size=300), gamma.rvs(a=5, scale=1, size=300), gamma.rvs(a=9, scale=0.5, size=400) ]) gamma_mixture = GammaMixture(n_components=3) gamma_mixture.fit(data) samples = gamma_mixture.sample(n_samples=1000) import matplotlib.pyplot as plt import seaborn as sns plt.figure(figsize=(8, 6)) sns.histplot(data, color='blue', kde=True, label='Observed', stat='density') sns.histplot(samples, color='red', kde=True, label='Sampled', stat='density') plt.title('Distribution of Observed vs Sampled Data') plt.xlabel('Value') plt.ylabel('Density') plt.legend() plt.show() </code></pre> <p>How should I actually go about implementing a mixture of gamma distributions without using Bayes'?</p> <hr /> <ul> <li><a href="https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm#Properties" rel="nofollow noreferrer">Wikipedia suggests</a> heuristics like simulated annealing to improve the behaviour of EM algorithm. Perhaps the same could be said for MLE.</li> <li>Something I have not explored yet is the numerical stability of objective function. Perhaps computing the pdfs and then taking their logarithms is worse than some other approach. Unfortunately logarithms will not distribute over a sum, the log of the likelihood of a mixture distribution might not be better. I don't know if this would be an improvement, but I could look at computing Taylor series expansions of the log likelood instead.</li> </ul>
<python><scipy><statistics><distribution>
2024-05-27 01:36:34
1
1,394
Galen
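A working non-Bayesian route for the question above is plain EM in which the M-step uses Minka's closed-form approximation to the weighted gamma MLE: with `s = log(weighted mean) - weighted mean of log`, the shape is `a ≈ (3 - s + sqrt((s - 3)^2 + 24 s)) / (12 s)` and `scale = mean / a`. This avoids the incorrect shape updates in the posted M-steps. The sketch below is not the poster's code; the warm start (splitting sorted data) and iteration count are arbitrary choices:

```python
# EM for a gamma mixture with an approximate-MLE M-step (Minka's closed form).
# No Bayesian machinery involved.
import numpy as np
from scipy.stats import gamma

def fit_gamma_mixture(x, k, n_iter=200):
    # crude warm start: split the sorted data into k equal slices
    parts = np.array_split(np.sort(x), k)
    w = np.full(k, 1.0 / k)
    a = np.array([max(p.mean() ** 2 / p.var(), 1e-2) for p in parts])
    scale = np.array([p.var() / p.mean() for p in parts])
    for _ in range(n_iter):
        # E-step: responsibilities, shape (k, n)
        dens = np.stack([w[j] * gamma.pdf(x, a[j], scale=scale[j])
                         for j in range(k)])
        dens += 1e-300                       # guard against all-zero columns
        r = dens / dens.sum(axis=0)
        # M-step: weights plus Minka's approximate weighted gamma MLE
        nj = r.sum(axis=1)
        w = nj / x.size
        mean = (r * x).sum(axis=1) / nj
        meanlog = (r * np.log(x)).sum(axis=1) / nj
        s = np.log(mean) - meanlog           # s > 0 by Jensen's inequality
        a = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
        scale = mean / a
    return w, a, scale

x = np.concatenate([gamma.rvs(2, scale=2, size=300, random_state=1),
                    gamma.rvs(20, scale=1, size=300, random_state=2)])
w, a, scale = fit_gamma_mixture(x, 2)
print(np.round(w, 2), np.round(np.sort(a * scale), 1))  # weights, component means
```

For a guaranteed-exact M-step one can replace the Minka formula with a one-dimensional root solve of `log(a) - digamma(a) = s`, but the approximation is typically accurate to several digits and keeps each iteration cheap.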
78,536,748
2,276,054
Or-Tools CP-SAT: how to define goal function to minimize number of unique booked rooms?
<p>I'm starting to learn <code>ortools.sat.python</code> / <code>cp_model</code> and I am confused as how to use <code>model.minimize()/maximize()</code>.</p> <p>Let's take an example simple problem of booking meetings to timeslots and rooms:</p> <pre><code>model = cp_model.CpModel() bookings = {} for m in meetings: for t in timeslots: for r in rooms: bookings[(m, t, r)] = model.new_bool_var(f&quot;{m}_{t}_{r}&quot;) # 1. each meeting shall be held once for m in meetings: model.add_exactly_one(bookings[(m, t, r)] for r in rooms for t in timeslots) # 2. no overbooking: no two meetings can be booked for the same timeslot-room pair for t in timeslots: for r in rooms: model.add_at_most_one(bookings[(m, t, r)] for m in meetings) </code></pre> <p>Now, what I would like to further improve, is to minimize the total number of different booked rooms. In other words, if it is possible to create the whole booking schedule using e.g. 3 rooms only (and not all 5 or 6 that are available), then such solution should be preferred.</p> <p>How do I do that?</p> <p>If I wanted to minimize the total number of bookings for room #1, I would simply write:</p> <pre><code>model.minimize(sum(bookings[(m, t, room_1)] for m in meetings for t in timeslots)) </code></pre> <p>To calculate the total number of different booked rooms AFTER finding the solution, I would write:</p> <pre><code>total_rooms_used = 0 for r in rooms: room_r_bookings = sum(solver.value(bookings[(m, t, r)]) for t in timeslots for m in meetings) if room_r_bookings &gt; 0: total_rooms_used += 1 print(f&quot;Total rooms used: {total_rooms_used}&quot;) </code></pre> <p>However, my problem is that I want to minimize <code>total_rooms_used</code>, and I don't know how to put it in <code>model.minimize(...)</code></p>
<python><or-tools><cp-sat>
2024-05-26 23:59:33
1
681
Leszek Pachura
78,536,720
5,284,054
Python MySQL: DatabaseError: 1364 (HY000): Field 'emp_no' doesn't have a default value
<p>I've installed MySQL on my Windows desktop. The installation was successful. I downloaded the <code>employees</code> data from github and added it to my MySql server. I confirmed that through the <strong>MySQL 8.4 Command Line Client</strong></p> <p>I'm working through the Documentation for the MySQL Connector/Python Developer Guide. I got to <a href="https://dev.mysql.com/doc/connector-python/en/connector-python-example-cursor-transaction.html" rel="nofollow noreferrer">inserting data into the employee database</a>. I get this error: <code>DatabaseError: 1364 (HY000): Field 'emp_no' doesn't have a default value</code></p> <p>The answers to this question are about PHP, not python: <a href="https://stackoverflow.com/questions/15438840/mysql-error-1364-field-doesnt-have-a-default-values">mysql error 1364 Field doesn&#39;t have a default values</a></p> <p>This question is about default values for <code>Datetime</code>: <a href="https://stackoverflow.com/questions/168736/how-do-you-set-a-default-value-for-a-mysql-datetime-column">How do you set a default value for a MySQL Datetime column?</a></p> <p>Except for the <code>**config</code>, this code is a cut-n-paste from the dev.mysql.com hyperlink. 
And except for the <code>'password': '****'</code>, here's the code that I used:</p> <pre><code>from __future__ import print_function from datetime import date, datetime, timedelta import mysql.connector config = { 'user': 'root', 'password': '****', 'host': '127.0.0.1', 'database': 'employees' } cnx = mysql.connector.connect(**config) cursor = cnx.cursor() tomorrow = datetime.now().date() + timedelta(days=1) add_employee = (&quot;INSERT INTO employees &quot; &quot;(first_name, last_name, hire_date, gender, birth_date) &quot; &quot;VALUES (%s, %s, %s, %s, %s)&quot;) add_salary = (&quot;INSERT INTO salaries &quot; &quot;(emp_no, salary, from_date, to_date) &quot; &quot;VALUES (%(emp_no)s, %(salary)s, %(from_date)s, %(to_date)s)&quot;) data_employee = ('Geert', 'Vanderkelen', tomorrow, 'M', date(1977, 6, 14)) # # Insert new employee # cursor.execute(add_employee, data_employee) # emp_no = cursor.lastrowid # # Insert salary information # data_salary = { # 'emp_no': emp_no, # 'salary': 50000, # 'from_date': tomorrow, # 'to_date': date(9999, 1, 1), # } # cursor.execute(add_salary, data_salary) # # Make sure data is committed to the database # cnx.commit() cursor.close() cnx.close() </code></pre> <p>Now, I know it connects because if I comment out the two <code>Insert</code> blocks, it runs fine. 
But here is the error message when it runs as posted in this question.</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\User\Documents\Python\Tutorials\.venv\Lib\site-packages\mysql\connector\connection_cext.py&quot;, line 697, in cmd_query self._cmysql.query( _mysql_connector.MySQLInterfaceError: Field 'emp_no' doesn't have a default value The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;c:\Users\User\Documents\Python\Tutorials\Add_to_Employees.py&quot;, line 27, in &lt;module&gt; cursor.execute(add_employee, data_employee) File &quot;C:\Users\User\Documents\Python\Tutorials\.venv\Lib\site-packages\mysql\connector\cursor_cext.py&quot;, line 372, in execute result = self._cnx.cmd_query( ^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\User\Documents\Python\Tutorials\.venv\Lib\site-packages\mysql\connector\opentelemetry\context_propagation.py&quot;, line 102, in wrapper return method(cnx, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\User\Documents\Python\Tutorials\.venv\Lib\site-packages\mysql\connector\connection_cext.py&quot;, line 705, in cmd_query raise get_mysql_exception( mysql.connector.errors.DatabaseError: 1364 (HY000): Field 'emp_no' doesn't have a default value </code></pre> <p>DISCLAIMER: I know not to put my password in the file. 
I'm trying to solve one problem at a time.</p> <p>EDIT:<br /> When I <code>ALTER TABLE employees MODIFY emp_no INT NOT NULL AUTO_INCREMENT;</code> in <strong>MySQL 8.4 Command Line Client</strong>, I get this error:</p> <pre><code>ERROR 1833 (HY000): Cannot change column 'emp_no': used in a foreign key constraint 'dept_manager_ibfk_1' of table 'employees.dept_manager' </code></pre> <p>Here's the <code>TABLE DEFINITION</code> <a href="https://i.sstatic.net/WxZypgfw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxZypgfw.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/EdTz25ZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EdTz25ZP.png" alt="enter image description here" /></a></p>
<python><mysql><mysql-python>
2024-05-26 23:42:15
2
900
David Collins
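The sample `employees` schema declares `emp_no` as a `NOT NULL` primary key with no `AUTO_INCREMENT` and no default, so every `INSERT` must supply it explicitly, e.g. `MAX(emp_no) + 1` fetched first. Since `mysql.connector` needs a live server, the sketch below uses stdlib `sqlite3` as a stand-in for the same pattern (the placeholder style is `?` for sqlite3 versus `%s` for mysql.connector; the schema is a simplified assumption):

```python
# Stand-in demo with sqlite3 (same pattern applies with mysql.connector):
# supply emp_no explicitly, derived from MAX(emp_no) + 1.
import sqlite3
from datetime import date

cnx = sqlite3.connect(":memory:")
cur = cnx.cursor()
cur.execute("""CREATE TABLE employees (
    emp_no INTEGER NOT NULL PRIMARY KEY,
    birth_date TEXT, first_name TEXT, last_name TEXT,
    gender TEXT, hire_date TEXT)""")
cur.execute("INSERT INTO employees VALUES (10001, '1953-09-02', 'Georgi', "
            "'Facello', 'M', '1986-06-26')")

# COALESCE guards the empty-table case
cur.execute("SELECT COALESCE(MAX(emp_no), 0) + 1 FROM employees")
next_emp_no = cur.fetchone()[0]

cur.execute("INSERT INTO employees "
            "(emp_no, first_name, last_name, hire_date, gender, birth_date) "
            "VALUES (?, ?, ?, ?, ?, ?)",
            (next_emp_no, "Geert", "Vanderkelen", str(date(2024, 5, 28)),
             "M", str(date(1977, 6, 14))))
cnx.commit()
print(next_emp_no)  # 10002
```

Note that `MAX(emp_no) + 1` is racy under concurrent writers; on a real MySQL server you would do it inside a transaction, or restructure the schema so the column can be `AUTO_INCREMENT` (which here requires touching the foreign-key constraints first).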
78,536,693
4,730,164
change pandas index value based on column condition
<p>I want to change an index value for a specific column condition.</p> <p>Here is an example</p> <pre><code>import pandas as pd data = { &quot;Product&quot;: [&quot;Computer&quot;, &quot;Printer&quot;, &quot;Monitor&quot;], &quot;Price&quot;: [120, 25, 40], } df = pd.DataFrame(data, index=[&quot;Item_1&quot;, &quot;Item_1&quot;, &quot;Item_1&quot;]) </code></pre> <pre><code> Product Price Item_1 Computer 120 Item_1 Printer 25 Item_1 Monitor 40 </code></pre> <p>I want to modify the index value 'Item_1' to 'TOTO' for df.loc[df.Product=='Monitor'] (and without using reset_index()).</p> <p>The result should be:</p> <pre><code> Product Price Item_1 Computer 120 Item_1 Printer 25 TOTO Monitor 40 </code></pre>
<python><pandas>
2024-05-26 23:17:31
1
753
olivier dadoun
78,536,595
4,445,584
PySide2 QWebEngineView does not play mp4, mkv, avi, mov and wmv video files
<p>I just wrote a simple application for Windows using PySide2 and Python 3.8 that is a reduced web browser (it cannot navigate everywhere, only to some specific web pages) and it has to open html pages embedding audio or video files or just pointing to audio and video files. Everything works except for most of the video files it has to support. In practice, instead of playing mkv, avi, mov and wmv files, the only thing I can do with them is download them. It plays only webm video files well, and it seems that it could also play mp4 video files because it opens them, but it does not start playing them. It seems as if it has no permission to do that (not on my side) or that some video codec is missing. I tried to move to PySide6 but at the moment I am not able to fix other problems not present in PySide2. Has anyone had similar problems? Thanks, Massimo</p>
<python><mp4><pyside2>
2024-05-26 22:09:58
0
475
Massimo Manca
78,536,551
845,210
"Variable not allowed in type expression" - how can I create parameterized typing.Annotated types?
<p>I'm not sure if this is possible in Python, but I'd like to create a series of related <code>typing.Annotated</code> referring to the same underlying type, but where each is constrained to a different set of values (ideally using enums, but that isn't crucial), and annotated with metadata about what it accepts. I could manually specify them all, but it would be cleaner if this could be parameterized.</p> <p>Here's a simplified example of what I'm trying to do:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from enum import Enum from typing import Annotated from annotated_types import Predicate class WildAnimals(Enum): LION = 'lion' BEAR = 'bear' class PetAnimals(Enum): CAT = 'cat' DOG = 'dog' def ConstrainedType(allowed_values: type[Enum]): return Annotated[ str, Predicate(lambda val: val in (m.value for m in allowed_values)), f'Allowed: {list(allowed_values)}', ] def ConstrainedContainer(allowed_values: type[Enum]): CType = ConstrainedType(allowed_values) @dataclass class Container: # error: Variable not allowed in type expression (reportInvalidTypeForm) animal: CType count: int return Container PetCarrier = ConstrainedContainer(PetAnimals) WildCarrier = ConstrainedContainer(WildAnimals) </code></pre> <p>As far as I can tell, this code runs fine (as it should, since type hints are optional) but Pyright objects to line 29 where I'm assigning <code>CType</code> as a type for <code>animal</code>:</p> <blockquote> <p>example.py:29:17 - error: Variable not allowed in type expression (reportInvalidTypeForm)</p> </blockquote> <p>If I instead try to inline <code>ConstrainedType()</code> I get this error:</p> <blockquote> <p>example.py:29:17 - error: Call expression not allowed in type expression (reportInvalidTypeForm)</p> </blockquote> <p>What am I missing here? Do I need to be using <code>TypeVar</code> or <code>Generic</code> somehow? 
Is there a way to have parameterized types that aren't &quot;variable&quot; so the type checker can understand them?</p>
<python><mypy><python-typing><pyright>
2024-05-26 21:46:41
0
3,331
bjmc
78,536,496
3,765,883
Selenium how to find button that is part of a 'flex flex-col md:grid class
<p>Using Selenium in Python, I want to click on a button with text 'View Active', but Selenium can't find it when searching for 'View Active'. Looking at the code with 'inspect' (see below), I see the following for the 'flex-col' class containing the button. I can see the button text, but I don't know how to access it from Selenium.</p> <pre><code>&lt;div class=&quot;flex flex-col md:grid md:grid-cols-2 mt-7 flex-wrap gap-x-7 gap-y-7&quot;&gt; &lt;div class=&quot;bg-white shadow-chart px-8 py-6 w-full&quot;&gt; &lt;h4 class=&quot;font-bold&quot;&gt;Moved from Registered Address&lt;/h4&gt; &lt;p class=&quot;mt-1 mb-6&quot;&gt;Review for consideration of challenge&lt;/p&gt; &lt;div class=&quot;flex gap-3 flex-col md:flex-row&quot;&gt; &lt;a class=&quot; px-2 py-3 rounded-lg text-base hover:shadow-btn disabled:opacity-30 outline-none focus:ring-red focus:ring-2 w-44 bg-red text-white block text-center&quot; href=&quot;/assignedVoters&quot;&gt;View All&lt;/a&gt; &lt;a class=&quot; px-2 py-3 rounded-lg text-base hover:shadow-btn disabled:opacity-30 outline-none focus:ring-red focus:ring-2 w-44 bg-red text-white block text-center&quot; href=&quot;/assignedVoters?activeOnly=true&quot;&gt;View Active&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;bg-white shadow-chart px-8 py-6 w-full&quot;&gt; &lt;h4 class=&quot;font-bold&quot;&gt;Non-Standard Addresses&lt;/h4&gt; &lt;p class=&quot;mt-1 mb-6&quot;&gt;Sorted by largest # of registrations&lt;/p&gt; &lt;div class=&quot;flex gap-3 flex-col md:flex-row&quot;&gt; &lt;a class=&quot; px-2 py-3 rounded-lg text-base hover:shadow-btn disabled:opacity-30 outline-none focus:ring-red focus:ring-2 w-44 bg-red text-white block text-center&quot; href=&quot;/addressWithIssues&quot;&gt;View All&lt;/a&gt; &lt;a class=&quot; px-2 py-3 rounded-lg text-base hover:shadow-btn disabled:opacity-30 outline-none focus:ring-red focus:ring-2 w-44 bg-red text-white block text-center&quot; 
href=&quot;/addressWithIssues?activeOnly=true&quot;&gt;View Active&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;bg-white shadow-chart px-8 py-6 w-full&quot;&gt; &lt;h4 class=&quot;font-bold&quot;&gt;Search by Name&lt;/h4&gt; &lt;p class=&quot;mt-1 mb-6&quot;&gt;Search for records by voter name&lt;/p&gt; &lt;div class=&quot;flex gap-3 flex-col md:flex-row&quot;&gt; &lt;a class=&quot; px-2 py-3 rounded-lg text-base hover:shadow-btn disabled:opacity-30 outline-none focus:ring-red focus:ring-2 w-44 bg-red text-white block text-center&quot; href=&quot;/search/name&quot;&gt;Search&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;bg-white shadow-chart px-8 py-6 w-full&quot;&gt; &lt;h4 class=&quot;font-bold&quot;&gt;Search by Address&lt;/h4&gt; &lt;p class=&quot;mt-1 mb-6&quot;&gt;Search for records by address&lt;/p&gt; &lt;div class=&quot;flex gap-3 flex-col md:flex-row&quot;&gt; &lt;a class=&quot; px-2 py-3 rounded-lg text-base hover:shadow-btn disabled:opacity-30 outline-none focus:ring-red focus:ring-2 w-44 bg-red text-white block text-center&quot; href=&quot;/search/address&quot;&gt;Search&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;bg-white shadow-chart px-8 py-6 w-full&quot;&gt; &lt;h4 class=&quot;font-bold&quot;&gt;My Challenges&lt;/h4&gt; &lt;p class=&quot;mt-1 mb-6&quot;&gt;Review records selected for challenge&lt;/p&gt; &lt;div class=&quot;flex gap-3 flex-col md:flex-row&quot;&gt; &lt;a class=&quot; px-2 py-3 rounded-lg text-base hover:shadow-btn disabled:opacity-30 outline-none focus:ring-red focus:ring-2 w-44 bg-red text-white block text-center&quot; href=&quot;/reviewChallenges&quot;&gt;View Now&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>Here is the python code I am using (for obvious reasons I deleted my password)</p> <pre><code>from re import L from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By cService = 
webdriver.ChromeService(executable_path=&quot;C:\\Users\\Frank\\Documents\\Visual Studio 2022\\Projects\\IV3_WebsiteAutomation\\chromedriver.exe&quot;) driver = webdriver.Chrome(service = cService) driver.get(&quot;https://app.iv3.us/login&quot;) print(driver.title) search_bar = driver.find_element(&quot;name&quot;, &quot;userName&quot;) search_bar.send_keys(&quot;paynterf@gmail.com&quot;) search_bar = driver.find_element(&quot;name&quot;, &quot;password&quot;) search_bar.send_keys(&quot;-----------------&quot;) search_bar.send_keys(Keys.RETURN) res = driver.find_element(By.XPATH(&quot;.//a[contains(@href,'View Active')]&quot;)) driver.close() </code></pre> <p>This code properly opens the web page and enters the username and password. The next page that opens contains the 'View Active' button I am trying to activate</p>
<python><selenium-webdriver>
2024-05-26 21:14:38
3
327
user3765883
78,536,443
20,969,632
Moviepy not updating FFmpeg Version after FFmpeg Install?
<p>I've been toying around with MoviePy, and recently switched a project to a new computer at home to continue messing around with it. However, I tried running what I had previously written (which ran perfectly fine on the other computer) and I get this:</p> <pre><code>OSError: MoviePy error: failed to read the first frame of video file ./Gameplay/minecraft- gameplay2.mp4. That might mean that the file is corrupted. That may also mean that you are using a deprecated version of FFMPEG. On Ubuntu/Debian for instance the version in the repos is deprecated. Please update to a recent version from the website. </code></pre> <p>After reading the error, I did as it instructed, and updated my FFmpeg:</p> <pre class="lang-bash prettyprint-override"><code>$ ffmpeg ffmpeg version N-115387-g8e27bd025f-20240525 Copyright (c) 2000-2024 the FFmpeg developers built with gcc 13.2.0 (crosstool-NG 1.26.0.65_ecc5e41) configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-ffbuild-linux-gnu- --arch=x86_64 --target-os=linux --enable-gpl --enable-version3 --disable-debug --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-openssl --enable-fontconfig --enable-libharfbuzz --enable-libvorbis --enable-opencl --enable-libpulse --enable-libvmaf --enable-libxcb --enable-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enable-chromaprint --enable-libdav1d --enable-libdavs2 --enable-libdvdread --enable-libdvdnav --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libkvazaar --enable-libaribcaption --enable-libass --enable-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librist --enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libvpl --enable-openal --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg 
--enable-libopenmpt --enable-librav1e --enable-librubberband --disable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --enable-libdrm --enable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-libs='-ldl -lgomp' --extra-ldflags=-pthread --extra-ldexeflags=-pie --cc=x86_64-ffbuild-linux-gnu-gcc --cxx=x86_64-ffbuild-linux-gnu-g++ --ar=x86_64-ffbuild-linux-gnu-gcc-ar --ranlib=x86_64-ffbuild-linux-gnu-gcc-ranlib --nm=x86_64-ffbuild-linux-gnu-gcc-nm --extra-version=20240525 libavutil 59. 20.100 / 59. 20.100 libavcodec 61. 5.104 / 61. 5.104 libavformat 61. 3.104 / 61. 3.104 libavdevice 61. 2.100 / 61. 2.100 libavfilter 10. 2.102 / 10. 2.102 libswscale 8. 2.100 / 8. 2.100 libswresample 5. 2.100 / 5. 2.100 libpostproc 58. 2.100 / 58. 2.100 Universal media converter usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}... </code></pre> <p>And I continued getting the same error. So I looked at what version my moviepy was using, and it was still 4.X. I read somewhere that your FFmpeg version is determined on the first use, which made sense, so I uninstalled and reinstalled, to get the same error.</p> <p>I am honestly lost at this point, as I have the newest version of FFmpeg, but I still get this from moviepy:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import moviepy &gt;&gt;&gt; print(moviepy.config.FFPMEG_BINARY) ffmpeg : /home/&lt;username&gt;/.local/lib/python3.9/site-packages/imageio_ffmpeg/binaries/ffmpeg-linux64-v4.2.2 </code></pre> <p>Any Ideas as to what I'm doing wrong?</p> <p>(Note: I'm using Crostini which I believe is using an Ubuntu or Ubuntu-Like shell)</p> <p>Thanks :)</p>
<python><ubuntu><ffmpeg><moviepy><crostini>
2024-05-26 20:48:40
1
355
The_ Game12
78,536,181
25,091,707
why do my discord tasks start at random times?
<p>I built a bot to ping people at certain times of the day (for fun lol), but the bot seems to ping at random times during the day and doesn't print a message to the terminal as it should if it were a normal scheduled ping. This has happened twice already and caused a lot of frustration since obviously people don't really like getting pinged for no reason. I'm hosting this bot on <a href="https://bot_hosting.net" rel="nofollow noreferrer">Bot-Hosting</a>.</p> <p>Here's my code:</p> <pre><code>import discord from datetime import time, datetime, timezone, timedelta, today from discord.ext import tasks, commands from time import sleep def run(): time_role = time(16,7,0, tzinfo=timezone(timedelta(hours=a certain timezone))) time_person = time(16,7,0, tzinfo=timezone(timedelta(hours=another certain timezone))) channel_id = # channel id server_id = # server id #gives the bot message content intent intents = discord.Intents.default() intents.message_content = True TOKEN = &quot;my token obviously&quot; client = discord.Client(intents = intents) #pings @role A at time_role @tasks.loop(time=time_role) async def send_time(): await client.get_channel(channel_id).send(&quot;&lt;@&amp;role A id&gt;&quot;) await client.get_channel(channel_id).send(&quot;get ready guys&quot;) print(&quot;sent role A ping at &quot; + today().strftime('%m-%d %H:%M:%S')) #the above two lines were NOT SENT along with the unexpected pings #pings @person A at time_person @tasks.loop(time=time_person) async def send_person(): await client.get_channel(channel_id).send(&quot;&lt;@person A id&gt;&quot;) await client.get_channel(channel_id).send(&quot;get ready person A&quot;) print(&quot;sent person A ping at &quot; + today().strftime('%m-%d %H:%M:%S')) #the above two lines were NOT SENT along with the unexpected pings @client.event async def on_ready(): print(f&quot;{client.user} is now online!\ntime: &quot;today().strftime('%m-%d %H:%M:%S')) if not send_time.is_running(): send_time.start() print(&quot;started
role ping&quot;) if not send_person.is_running(): send_person.start() print(&quot;started person ping&quot;) client.run(TOKEN) run() </code></pre> <p><a href="https://i.sstatic.net/4hzD6nYL.png" rel="nofollow noreferrer">screenshot of one of the unexpected pings</a></p>
<python><discord><discord.py><bots>
2024-05-26 18:40:31
1
343
Matt
78,536,066
10,426,490
Unable to get Azure Document Intelligence service to process a 1000pg. PDF
<p>I can't find any information on this. Currently using a Premium App Service Plan-hosted Azure Function with <code>functionTimeout</code> set to <code>00:30:00</code> but still timing out.</p> <p><strong>PDF specs</strong>:</p> <ul> <li>1000 pages</li> <li>30MiB</li> <li>Scanned documents, no text</li> </ul> <p><strong>Here is the DocIntel function</strong>:</p> <pre><code>from azure.ai.documentintelligence.aio import DocumentIntelligenceClient from azure.ai.documentintelligence.models import AnalyzeDocumentRequest async def create_doc_intel_client(credential): doc_intel_endpoint = f'https://{DOC_INTEL_RESOURCE}.cognitiveservices.azure.com' try: doc_intel_client = DocumentIntelligenceClient( doc_intel_endpoint, credential) logging.info(f'#### Doc Intel Client Successfully Created') return doc_intel_client except Exception as e: logging.error(f'#### Failed to create Doc Intel Client: {e}') raise async def get_doc_layout(doc_client, doc_url): try: async with doc_client: poller = await doc_client.begin_analyze_document( &quot;prebuilt-layout&quot;, AnalyzeDocumentRequest(url_source=doc_url) ) minutes_elapsed = 0 while not poller.done(): status = poller.status() logging.info(f'#### Polling status: {status} | Minutes elapsed: {minutes_elapsed}') await asyncio.sleep(60) # Sleep for 60 seconds (1 minute) minutes_elapsed += 1 doc_analysis = await poller.result() doc_analysis_dict = doc_analysis.as_dict() doc_analysis_json = json.dumps(doc_analysis_dict) logging.info(f'#### Document successfully analyzed') return doc_analysis_json except Exception as e: logging.error(f'#### Failed to analyze document: {e}') raise </code></pre> <p><strong>Logs</strong>:</p> <pre><code>2024-05-26T17:21:12Z [Information] #### Polling status: InProgress | Minutes elapsed: 1 2024-05-26T17:22:12Z [Information] #### Polling status: InProgress | Minutes elapsed: 2 2024-05-26T17:23:12Z [Information] #### Polling status: InProgress | Minutes elapsed: 3 2024-05-26T17:24:12Z [Information] #### 
Polling status: InProgress | Minutes elapsed: 4 2024-05-26T17:25:12Z [Information] #### Polling status: InProgress | Minutes elapsed: 5 2024-05-26T17:26:12Z [Information] #### Polling status: InProgress | Minutes elapsed: 6 2024-05-26T17:27:12Z [Information] #### Polling status: InProgress | Minutes elapsed: 7 2024-05-26T17:28:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 8 2024-05-26T17:29:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 9 2024-05-26T17:30:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 10 2024-05-26T17:31:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 11 2024-05-26T17:32:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 12 2024-05-26T17:33:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 13 2024-05-26T17:34:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 14 2024-05-26T17:35:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 15 2024-05-26T17:36:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 16 2024-05-26T17:37:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 17 2024-05-26T17:38:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 18 2024-05-26T17:39:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 19 2024-05-26T17:40:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 20 2024-05-26T17:41:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 21 2024-05-26T17:42:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 22 2024-05-26T17:43:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 23 2024-05-26T17:44:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 24 2024-05-26T17:45:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 25 2024-05-26T17:46:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 
26 2024-05-26T17:47:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 27 2024-05-26T17:48:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 28 2024-05-26T17:49:13Z [Information] #### Polling status: InProgress | Minutes elapsed: 29 2024-05-26T17:50:11Z [Error] Timeout value of 00:30:00 exceeded by function 'Functions.doc-process' (Id: '&lt;guid&gt;'). Initiating cancellation. 2024-05-26T17:50:11Z [Error] Executed 'Functions.doc-process' (Failed, Id=&lt;guid&gt;, Duration=1799999ms) </code></pre> <p><strong>Update 1</strong>:</p> <ul> <li>Set <code>functionTimeout: -1</code></li> <li>Let the Function run for 90 mins, still no response.</li> </ul>
<python><azure><azure-form-recognizer>
2024-05-26 17:54:26
1
2,046
ericOnline
78,536,048
12,321,900
Is it possible to install only the Scipy stats module with pip?
<p>For my project I only need to import the stats and optimize modules from Scipy. I would like to know if it is possible to install only these modules with pip. What alternatives do I have?</p>
<python><pip><scipy><pyodide>
2024-05-26 17:47:22
1
307
Sebastian Jose
78,535,999
2,520,186
How to reorganize data to correctly lineplot in Python
<p>I have the following code that plots <code>ydata</code> vs <code>xdata</code> which is supposed to be a circle. The plot has two subplots -- a lineplot with markers and a scatter plot.</p> <pre><code>import matplotlib.pyplot as plt xdata = [-1.9987069285852805, -1.955030386765729, -1.955030386765729, -1.8259096357678795, -1.8259096357678795, -1.6169878720004491, -1.6169878720004491, -1.3373959790579202, -1.3373959790579202, -0.9993534642926399, -0.9993534642926399, -0.6176344077078071, -0.6176344077078071, -0.20892176376743077, -0.20892176376743077, 0.20892176376743032, 0.20892176376743032, 0.6176344077078065, 0.6176344077078065, 0.999353464292642, 0.999353464292642, 1.3373959790579217, 1.3373959790579217, 1.6169878720004487, 1.6169878720004487, 1.8259096357678786, 1.8259096357678786, 1.9550303867657255, 1.9550303867657255, 1.9987069285852832] ydata = (0.0, -0.038801795445724575, 0.038801795445724575, -0.07590776623879933, 0.07590776623879933, -0.10969620340136318, 0.10969620340136318, -0.13869039009450249, 0.13869039009450249, -0.16162314123018345, 0.16162314123018345, -0.1774921855402276, 0.1774921855402276, -0.18560396964016201, 0.18560396964016201, -0.185603969640162, 0.185603969640162, -0.17749218554022747, 0.17749218554022747, -0.16162314123018337, 0.16162314123018337, -0.13869039009450224, 0.13869039009450224, -0.10969620340136294, 0.10969620340136294, -0.0759077662387991, 0.0759077662387991, -0.038801795445725006, 0.038801795445725006, 0.0) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16,6)) fig.suptitle('Plot comparison: line vs scatter'+ 3*'\n') fig.subplots_adjust(wspace=1, hspace=3) fig.supxlabel('x') fig.supylabel('y') ax1.plot(xdata, ydata, 'o-', c='blue') ax1.set_title('Line-point plot', c='blue') for i in range(len(xdata)): ax2.scatter(xdata, ydata, c='orange') ax2.set_title('Scatter plot', c='orange') plt.savefig('line_vs_scatter_plot.png') plt.show() </code></pre> <p>Output: <a href="https://i.sstatic.net/fzORWwi6.png" rel="nofollow 
noreferrer"><img src="https://i.sstatic.net/fzORWwi6.png" alt="enter image description here" /></a></p> <p>From the output, it can be seen that the lineplot does not connect the dots (or points). Can we rearrange the x or y data in someway that fixes the issue? Or do something else?</p>
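One way to make the line plot trace a closed curve (a sketch, assuming NumPy is acceptable): sort the points by their polar angle around the origin before plotting, then repeat the first point to close the loop. The ellipse below is a stand-in for the `xdata`/`ydata` pairs above, deliberately sampled out of order:

```python
import numpy as np

# Stand-in for the (x, y) pairs above: an ellipse sampled in scrambled order.
rng = np.random.default_rng(0)
theta = rng.permutation(np.linspace(0.0, 2.0 * np.pi, 30, endpoint=False))
x = 2.0 * np.cos(theta)
y = 0.2 * np.sin(theta)

order = np.argsort(np.arctan2(y, x))   # sort by angle around the origin
x_sorted, y_sorted = x[order], y[order]

# Repeat the first point so the curve closes, then plot as before:
x_closed = np.append(x_sorted, x_sorted[0])
y_closed = np.append(y_sorted, y_sorted[0])
# ax1.plot(x_closed, y_closed, 'o-', c='blue')
```

Sorting by `arctan2` assumes the curve winds once around the origin, which holds for the circle-like data in the question; for more general curves a nearest-neighbour ordering would be needed.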
<python><list><matplotlib><plot>
2024-05-26 17:30:42
2
2,394
hbaromega
78,535,743
2,845,095
Two Sample Z-Test on Pandas dataframe
<p>I have a pandas dataframe that looks like this. It is a frequency table.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>landing_page</th> <th>0</th> <th>1</th> </tr> </thead> <tbody> <tr> <td>new_page</td> <td>128045</td> <td>17264</td> </tr> <tr> <td>old_page</td> <td>127785</td> <td>17489</td> </tr> </tbody> </table></div> <p>where 0 = not converted and 1 = converted.</p> <p>I am trying to calculate a two-sample Z-test to show whether the new page affects the conversion rate.</p> <p>I am new to Python and I do not know how to use this dataframe to conduct the test.</p> <p>Any help is much appreciated.</p> <p>Thanks a lot.</p>
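A sketch of one way to do this: compute the pooled two-proportion z-test by hand from the counts in the frequency table (SciPy is assumed only for the normal tail probability; `statsmodels.stats.proportion.proportions_ztest` should give the same numbers):

```python
from math import sqrt

from scipy.stats import norm

# Counts read off the frequency table above.
conv_new, n_new = 17264, 128045 + 17264   # new_page: converted, total
conv_old, n_old = 17489, 127785 + 17489   # old_page: converted, total

p_new = conv_new / n_new
p_old = conv_old / n_old
p_pool = (conv_new + conv_old) / (n_new + n_old)   # pooled conversion rate

se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n_new + 1.0 / n_old))
z = (p_new - p_old) / se
p_value = 2.0 * norm.sf(abs(z))   # two-sided p-value

print(f"z = {z:.3f}, p-value = {p_value:.3f}")
```

If the p-value comes out above the chosen significance level (commonly 0.05), the data give no evidence that the new page changes the conversion rate.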
<python><pandas><z-test>
2024-05-26 15:40:15
2
533
user2845095
78,535,555
7,958,562
Python - Stream file
<p>I want to stream upload a file to a webserver. To achieve this I found this library: <a href="https://github.com/requests/toolbelt#multipartform-data-encoder" rel="nofollow noreferrer">https://github.com/requests/toolbelt#multipartform-data-encoder</a></p> <p>The code looks pretty straightforward:</p> <pre><code>from requests_toolbelt import MultipartEncoder import requests m = MultipartEncoder( fields={'field0': 'value', 'field1': 'value', 'field2': ('filename', open('file.py', 'rb'), 'text/plain')} ) r = requests.post('http://httpbin.org/post', data=m, headers={'Content-Type': m.content_type}) </code></pre> <p>But how do I pass a stream to this <code>MultipartEncoder</code> instead of opening a file on disk? The reason is that I am fetching a file using <code>stream=True</code> as I don't want to save the file on disk. Like this:</p> <pre><code>def stream_file(): url = &quot;https://webserver.com/file.zip&quot; with requests.get(url, stream=True) as file_stream: print(file_stream.status_code) </code></pre> <p>My question is how do I pass this <code>file_stream</code> to the <code>MultipartEncoder</code>?</p>
<python>
2024-05-26 14:34:11
0
437
John
78,535,480
15,648,070
SqlAlchemy Autocommit mode not commiting
<p>Having some issues setting up an autocommit session</p> <p>The create_audit_log function doesn't &quot;create&quot; the new record without having <code>s.commit()</code> afterwards</p> <p>SqlAlchemy version <code>2.0.30</code></p> <p>psycopg2-binary version <code>2.9.9</code></p> <p>My code</p> <pre><code>import logging from sqlalchemy import create_engine, inspect from sqlalchemy.orm import sessionmaker from .models import Audit class DBManager: audit_model = Audit def __init__(self, db_url): self.engine = create_engine(db_url, isolation_level=&quot;AUTOCOMMIT&quot;) self.session = sessionmaker(bind=self.engine) if not inspect(self.engine).has_table(self.audit_model.__tablename__): self.audit_model.metadata.create_all(bind=self.engine) def create_audit_log(self, requester, operation, status, started_at, ended_at): with self.session() as s: audit_log = self.audit_model(requester=requester, operation=operation, status=status, started_at=started_at, ended_at=ended_at) s.add(audit_log) </code></pre>
<python><sqlalchemy><autocommit>
2024-05-26 14:04:38
1
636
Eyal Solomon
78,535,453
12,358,733
Google ADCs don't seem to work using Python on Windows 11 / Gcloud SDK 475.0.0
<p>So I just migrated from a Windows 10 to a Windows 11 machine and noticed that Python scripts don't seem to be picking up Google ADCs. I'm repeatedly getting this error:</p> <p><code>Reauthentication is needed. Please run 'gcloud auth application-default login' to reauthenticate.</code></p> <p>The Python code uses <a href="https://google-auth.readthedocs.io/en/master/" rel="nofollow noreferrer">google-auth</a>, and is failing on the credentials.refresh() step in this code:</p> <pre><code>import google.auth import google.auth.transport.requests SCOPES = [&quot;https://www.googleapis.com/auth/cloud-platform.read-only&quot;] credentials, project_id = google.auth.default(scopes=SCOPES, quota_project_id=None) _ = google.auth.transport.requests.Request() credentials.refresh(_) </code></pre> <p>I am noticing that <code>C:\Users\&lt;user&gt;\AppData\Roaming\gcloud\application_default_credentials.json</code> is getting updated upon logging in, and Terraform authenticates without issue.</p> <p>The same code works without issue on Mac and Linux.</p>
<python><google-cloud-platform><google-oauth><gcloud>
2024-05-26 13:50:39
1
931
John Heyer
78,535,203
5,790,653
Find one value between two lists of dicts, then find another same value based on first similarity
<p>This is my code:</p> <pre class="lang-py prettyprint-override"><code> from collections import defaultdict ip_port_device = [ {'ip': '192.168.1.140', 'port_number': 4, 'device_name': 'device1'}, {'ip': '192.168.1.128', 'port_number': 8, 'device_name': 'device1'}, {'ip': '192.168.1.56', 'port_number': 14, 'device_name': 'device1'}, {'ip': '192.168.1.61', 'port_number': 4, 'device_name': 'device1'}, {'ip': '192.168.1.78', 'port_number': 8, 'device_name': 'device1'}, {'ip': '192.168.1.13', 'port_number': 16, 'device_name': 'device1'}, {'ip': '192.168.2.140', 'port_number': 4, 'device_name': 'device2'}, {'ip': '192.168.2.128', 'port_number': 8, 'device_name': 'device2'}, {'ip': '192.168.2.56', 'port_number': 14, 'device_name': 'device2'}, {'ip': '192.168.2.61', 'port_number': 4, 'device_name': 'device2'}, {'ip': '192.168.2.78', 'port_number': 8, 'device_name': 'device2'}, {'ip': '192.168.2.13', 'port_number': 16, 'device_name': 'device2'}, {'ip': '192.168.3.140', 'port_number': 4, 'device_name': 'device3'}, {'ip': '192.168.3.128', 'port_number': 8, 'device_name': 'device3'}, {'ip': '192.168.3.56', 'port_number': 14, 'device_name': 'device3'}, {'ip': '192.168.3.61', 'port_number': 4, 'device_name': 'device3'}, {'ip': '192.168.3.78', 'port_number': 8, 'device_name': 'device3'}, {'ip': '192.168.3.13', 'port_number': 16, 'device_name': 'device3'}, ] ip_per_node = [ {'node_name': 'server9.example.com', 'ip_address': '192.168.1.140'}, {'node_name': 'server19.example.com', 'ip_address': '192.168.1.128'}, {'node_name': 'server11.example.com', 'ip_address': '192.168.2.140'}, {'node_name': 'server21.example.com', 'ip_address': '192.168.2.128'}, {'node_name': 'server17.example.com', 'ip_address': '192.168.3.140'}, {'node_name': 'server6.example.com', 'ip_address': '192.168.3.128'}, ] ips_and_ports_in_switch = [] for compute in ip_per_node: for port in ip_port_device: if compute['ip_address'] == port['ip']: port = port['port_number'] for new_port in ip_port_device: if port == 
new_port['port_number']: ips_and_ports_in_switch.append({ 'port_number': new_port['port_number'], 'ip_address': new_port['ip'], 'node_name': compute['node_name'], 'device_name': new_port['device_name'] }) concatenated = defaultdict(list) for entry in ips_and_ports_in_switch: concatenated[(entry['device_name'], entry['port_number'], entry['node_name'])].append(entry['ip_address']) </code></pre> <p>The logic is:</p> <p>if <code>ip_per_node['ip_address']</code> matches <code>ip_port_device['ip']</code>, then in <code>ip_port_device</code> find all ips that have the same port number.</p> <p>Then save like this (expected output):</p> <pre><code>node server9.example.com, port 4, device device1, ips ['192.168.1.140', '192.168.1.61'] node server19.example.com, port 8, device device1, ips ['192.168.1.128', '192.168.1.78'] node server11.example.com, port 4, device device2, ips ['192.168.2.140', '192.168.2.61'] node server21.example.com, port 8, device device2, ips ['192.168.2.128', '192.168.2.78'] node server17.example.com, port 4, device device3, ips ['192.168.3.140', '192.168.3.61'] node server6.example.com, port 8, device device3, ips ['192.168.3.128', '192.168.3.78'] </code></pre> <p>My current code doesn't work as I expect. It saves one port multiple times for all nodes.</p> <p>I tried to include only the least amount of data needed for the sample.</p>
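One possible fix (a sketch with a reduced data set): the inner match must be restricted to entries on the same device as well as the same port number, otherwise port 4 of device1 also matches port 4 of device2 and device3. Dict lookups also avoid the repeated nested loops:

```python
from collections import defaultdict

# Reduced stand-in for the data above: two devices sharing a port number.
ip_port_device = [
    {'ip': '192.168.1.140', 'port_number': 4, 'device_name': 'device1'},
    {'ip': '192.168.1.61',  'port_number': 4, 'device_name': 'device1'},
    {'ip': '192.168.2.140', 'port_number': 4, 'device_name': 'device2'},
    {'ip': '192.168.2.61',  'port_number': 4, 'device_name': 'device2'},
]
ip_per_node = [
    {'node_name': 'server9.example.com',  'ip_address': '192.168.1.140'},
    {'node_name': 'server11.example.com', 'ip_address': '192.168.2.140'},
]

# Look up a port entry by ip, and group ips by (device, port) -- the device
# must be part of the grouping key so ports do not collide across devices.
port_by_ip = {entry['ip']: entry for entry in ip_port_device}
ips_by_device_port = defaultdict(list)
for entry in ip_port_device:
    ips_by_device_port[(entry['device_name'], entry['port_number'])].append(entry['ip'])

for node in ip_per_node:
    match = port_by_ip.get(node['ip_address'])
    if match is None:
        continue
    device, port = match['device_name'], match['port_number']
    print(f"node {node['node_name']}, port {port}, device {device}, "
          f"ips {ips_by_device_port[(device, port)]}")
```

With the full data set this prints one line per node in the expected-output format above.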
<python>
2024-05-26 12:03:28
2
4,175
Saeed
78,535,147
10,542,284
How do I know when a program is waiting for an input and then give the input through Python?
<p>Sorry, this might look very stupid, but I have a console <code>C++</code> program and I need to pass input to it through <code>Python</code>. It is unpredictable when this <code>C++</code> program waits for input, so I hoped I could handle this programmatically. There's probably a way with the <code>Windows API</code>, but I'm not very knowledgeable in <code>WinAPI</code>. I'm on Windows 11.</p>
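There is no general-purpose way to detect that another process is blocked on stdin, but if Python launches the console program itself, the input can be written to the child's stdin pipe up front and the program will consume it whenever it reads. A minimal standard-library sketch (a small Python one-liner stands in for the C++ executable; for interaction that must react to the program's prompts, tools like `pexpect`/`wexpect` are the usual answer):

```python
import subprocess
import sys

# Stand-in child process: reads one line from stdin and echoes it back.
# Replace [sys.executable, "-c", child_code] with the path to the C++ exe.
child_code = "print('You typed: ' + input())"

proc = subprocess.run(
    [sys.executable, "-c", child_code],
    input="hello\n",        # delivered to the child's stdin
    capture_output=True,
    text=True,
    timeout=10,             # guard against a child that waits forever
)
print(proc.stdout.strip())  # -> You typed: hello
```

This works the same on Windows 11 as on other platforms, since `subprocess` handles the pipe plumbing through the Win32 API internally.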
<python><winapi><stdin>
2024-05-26 11:45:47
0
473
Jugert Mucoimaj
78,534,807
5,686,015
Pandas: Convert rows in a single index dataframe to columns in a multilevel index dataframe
<p>I've the following dataframe:</p> <pre><code> month name product category Metric Flipkart Active 0 April Accessories Stock Quantity NaN 1808.00 1 April Accessories Stock Quantity 0.0 NaN 2 May Accessories Sales Quantity NaN 61.00 3 May Accessories Sales Quantity 0.0 NaN 4 April Anklet Stock Quantity NaN 21861.75 </code></pre> <p>I'd like to convert it into this: <a href="https://i.sstatic.net/cwalmY8g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwalmY8g.png" alt="enter image description here" /></a></p> <p>How do I achieve this? It has to be generic enough for when multiple rows need to be switched to columns. The column names to be switched will be available in a list.Doesn't necessarily have to be with pandas. I'm open to using other libraries as well.</p>
<python><pandas><dataframe><pivot-table>
2024-05-26 09:22:45
1
1,874
Judy T Raj
78,534,514
3,184,779
How to solve grpc error in fetching data from GA4 API (google-analytics-data api) by python
<p>I am using this code snippet to get data from GA4:</p> <pre><code> credentials = service_account.Credentials.from_service_account_info(self.credential_info) client = BetaAnalyticsDataAsyncClient(credentials=credentials) offset = 0 # Start at the first row limit = 100000 # Maximum number of rows to return per request request = RunReportRequest( property=f&quot;properties/{self.property_id}&quot;, date_ranges=[DateRange(start_date=start, end_date=end)], dimensions=[some_dimensions], metrics=[some_metrics], dimension_filter=None, keep_empty_rows=True, limit=limit, offset=offset, ) response = await client.run_report(request) </code></pre> <p>For some property ids it works completely fine, without any problem, but sometimes it shows me this error and crashes:</p> <pre><code> ERROR:grpc._common:Exception deserializing message! Traceback (most recent call last): File &quot;C:\Users\TM-USER\Desktop\henry\datascience\.venv\Lib\site-packages\grpc\_common.py&quot;, line 89, in _transform return transformer(message) ^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\TM-USER\Desktop\henry\datascience\.venv\Lib\site-packages\proto\message.py&quot;, line 370, in deserialize return cls.wrap(cls.pb().FromString(payload)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ google.protobuf.message.DecodeError: Error parsing message </code></pre> <p>In one specific case, if I change the offset to 2000, it works, which suggests there is something wrong in the first 2000 rows of data. I updated google-analytics-data to the latest version but nothing changed.</p> <p>Do you know how I can fix this?</p>
<python><python-3.x><google-analytics><google-analytics-api><google-analytics-4>
2024-05-26 07:06:52
0
387
Mehdi Mirzaei
78,534,458
2,050,158
How deserialize langchain_core.messages.ai.AIMessage
<p>I would like to reconstruct an object of langchain_core.messages.ai.AIMessage from the dict obtained from an object of type langchain_core.messages.ai.AIMessage.</p> <p>Is there a constructor I can use?</p> <p>The documentation of &quot;langchain_core.messages.ai.AIMessage&quot; seems not to include information on the construction of &quot;langchain_core.messages.ai.AIMessage&quot; objects.</p>
<python><langchain>
2024-05-26 06:34:26
1
503
Allan K
78,534,378
6,915,206
No DATABASE_URL environment variable set, and so no databases setup
<p>While deploying a Django website on an AWS EC2 server I am getting the error</p> <blockquote> <p>No DATABASE_URL environment variable set, and so no databases setup.</p> </blockquote> <p>I created a gunicorn.service file on my server at <code>/home/ubuntu/lighthousemedia/LightHouseMediaAgency</code> and set <code>EnvironmentFile=/home/ubuntu/lighthousemedia/LightHouseMediaAgency/.env</code> to load the environment variables, and in the same location I created a <code>.env</code> file which contains <code>DATABASE_URL = postgresql://postgres:password@lighthousemediadb.crte3456.ap-south-1.rds.amazonaws.com:5432/dbname</code>. I believe the <code>.env</code> file is in the correct location, and I have also checked that my database settings are correct.</p> <p><strong>production.py</strong></p> <pre><code>DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'lighthousemediadb', 'USER': &quot;postgres&quot;, 'PASSWORD': os.environ.get('DB_PASSWORD'), # don't keep it hardcoded in production 'HOST': os.environ.get('DB_HOST'), 'PORT': '5432', # add these too; they improve performance slightly 'client_encoding': 'UTF8', 'default_transaction_isolation': 'read committed', 'timezone': 'UTC' } } import dj_database_url db_from_env = dj_database_url.config() DATABASES['default'].update(db_from_env) DATABASES['default']['CONN_MAX_AGE'] = 500 </code></pre> <p><strong>gunicorn.service</strong></p> <pre><code> [Unit] Description=gunicorn daemon Requires=gunicorn.socket After=network.target [Service] User=ubuntu Group=www-data EnvironmentFile=/home/ubuntu/lighthousemedia/LightHouseMediaAgency/.env WorkingDirectory=/home/ubuntu/lighthousemedia/LightHouseMediaAgency ExecStart=/home/ubuntu/lighthousemedia/.venv/bin/gunicorn \ --access-logfile - \ --workers 3 \ --bind unix:/run/gunicorn.sock \ BE.wsgi:application [Install] WantedBy=multi-user.target </code></pre> <p><strong>.env</strong></p> <pre><code> SECRET_KEY = 'epmxxxxxxxxxxxxu$0' DEBUG = True ALLOWED_HOSTS = '3.196.58.181', 'abc.com', 'localhost', '127.0.0.1' DB_PASSWORD = 'password' DB_HOST = 'lighthousemediadb.crte3456.ap-south-1.rds.amazonaws.com' DATABASE_URL = postgresql://postgres:password@lighthousemediadb.crte3456.ap-south-1.rds.amazonaws.com:5432/dbname </code></pre>
<python><django><postgresql><amazon-web-services><gunicorn>
2024-05-26 05:42:00
1
563
Rahul Verma
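One thing worth checking in the setup above (an assumption, not a confirmed diagnosis): systemd's <code>EnvironmentFile</code> parsing is stricter than python-dotenv, and lines written as <code>DATABASE_URL = value</code> with spaces around the equals sign may not be exported to the process at all, which would leave <code>dj_database_url.config()</code> with nothing to read. A quick way to verify what the process actually sees, plus a minimal hand-rolled fallback for the URL parsing (the hostname and credentials below are placeholders):

```python
import os
from urllib.parse import urlparse

def database_from_url(url):
    """Build a Django DATABASES['default']-style dict from a postgres URL."""
    parsed = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parsed.path.lstrip("/"),
        "USER": parsed.username,
        "PASSWORD": parsed.password,
        "HOST": parsed.hostname,
        "PORT": str(parsed.port or 5432),
    }

# Did the variable survive systemd's EnvironmentFile parsing at all?
print("DATABASE_URL set:", "DATABASE_URL" in os.environ)

cfg = database_from_url("postgresql://postgres:secret@db.example.com:5432/mydb")
print(cfg["HOST"], cfg["NAME"])
```

Rewriting the <code>.env</code> lines as <code>DATABASE_URL=postgresql://…</code> (no spaces around <code>=</code>) and restarting the gunicorn service is worth trying first.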
78,534,257
3,628,240
Scraping a site with embedded Google Maps
<p>I'm taking a look at this site: [redacted]</p> <p>Is there a way to scrape the red pins on Google Maps? When you click on a pin, you get the name, address, and phone number. Ideally I would like to be able to create a list of the locations.</p> <p>Is there a way to do this using APIs? I don't see one when I check the network requests.</p> <p>Or is the only option to use selenium to input various zipcodes and manually scrape the text one by one?</p>
<python><selenium-webdriver><web-scraping>
2024-05-26 04:07:52
1
927
user3628240
78,534,246
12,870,750
ezdxf python - How can I set the plot transparency configuration in the plotsettings of a layout?
<p>I have code that plots a layout created with ezdxf, but I am having trouble setting the plot transparency option, because I have an external reference with an image that I want to be 40% transparent.</p> <p>This is the code I use for plotting the whole dxf file, but I don't know how to set the transparency option to True in the plot options, and I can't find the docs for this.</p> <p>It would also be very nice if, when I create my layout in ezdxf, I could set all these plotting options for each layout.</p> <pre><code>doc = ezdxf.readfile(dxf_path) layout = doc.layouts.new(name='layout_name') layout.applyPlotOptions(papersize=&quot;ISO_full_bleed_A1_(841.00_x_594.00_MM)&quot;, plotstyle_table='style.ctb', plottransparency=0.6, etc) </code></pre> <p>I would very much appreciate any help on this.</p> <p>Chelo.</p> <pre><code> def plot_layouts(dwg_path, ctb_file_path, output_folder): if not os.path.exists(output_folder): os.mkdir(output_folder) # Initialize AutoCAD acad = win32com.client.Dispatch(&quot;AutoCAD.Application&quot;) acad.Visible = False # Set to False if you don't want to show AutoCAD # Open the DWG file doc = acad.Documents.Open(dwg_path) # Loop through all layouts for i in range(doc.Layouts.Count): print(i) time.sleep(5) layout = doc.Layouts.Item(i + 1) # Indexing starts from 1 in COM # Set the layout as active doc.ActiveLayout = layout # Configure the plot settings doc.ActiveLayout.ConfigName = &quot;DWG To PDF.pc3&quot; # can be changed to any configured pc3 doc.ActiveLayout.StyleSheet = ctb_file_path # doc.ActiveLayout.CanonicalMediaName = &quot;ISO_expand_A4_(210.00_x_297.00_MM)&quot; # the name must match exactly doc.ActiveLayout.CanonicalMediaName = &quot;ISO_full_bleed_A1_(841.00_x_594.00_MM)&quot; # the name must match exactly # Set the output file path output_file_path = f&quot;{output_folder}/{layout.Name}.pdf&quot; # Plot the layout doc.Plot.PlotToFile(output_file_path) time.sleep(5) # Close the document doc.Close(SaveChanges=True) # Quit AutoCAD acad.Quit() </code></pre>
<python><win32com><autocad><ezdxf>
2024-05-26 03:56:55
1
640
MBV
78,534,210
1,107,474
Plotly, convert epoch timestamps (with ms) to readable datetimes?
<p>The below code uses Python lists to create a Plotly graph.</p> <p>The timestamps are Epoch milliseconds. How do I format the x-axis to readable datetime?</p> <p>I tried <code>fig.layout['xaxis_tickformat'] = '%HH-%MM-%SS'</code> but it didn't work.</p> <pre><code>import plotly.graph_objects as go time_series = [1716693661000, 1716693662000, 1716693663000, 1716693664000] prices = [20, 45, 32, 19] fig = go.Figure() fig.add_trace(go.Scatter(x=time_series, y=prices, yaxis='y')) fig.update_layout(xaxis=dict(rangeslider=dict(visible=True),type=&quot;linear&quot;)) fig.layout['xaxis_tickformat'] = '%Y-%m-%d' fig.show() </code></pre>
<python><plotly>
2024-05-26 03:25:04
1
17,534
intrigued_66
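For the epoch-millisecond question above: a tickformat such as <code>%Y-%m-%d</code> only takes effect on a date axis, and the snippet both passes raw integers and forces <code>type=&quot;linear&quot;</code>. Converting the timestamps to <code>datetime</code> objects first makes the axis a date axis. The sketch below shows just the conversion (plotly itself is omitted so it stands alone):

```python
from datetime import datetime, timezone

time_series = [1716693661000, 1716693662000, 1716693663000, 1716693664000]

# Epoch milliseconds -> timezone-aware datetimes (UTC here; adjust as needed)
x_dates = [datetime.fromtimestamp(ms / 1000, tz=timezone.utc) for ms in time_series]
print(x_dates[0].isoformat())

# These can then be passed directly: go.Scatter(x=x_dates, y=prices), with
# xaxis type="date" (not "linear"), after which a tickformat such as
# "%H:%M:%S" or "%Y-%m-%d" applies.
```

The rangeslider keyword from the question should keep working; only the axis <code>type</code> needs to change.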
78,534,013
7,498,328
Python code for counting the number of Yahoo emails maxes out at 10,000 emails, how to fix?
<p>I have code that uses Python to programmatically count the number of emails in each of the mailboxes of my Yahoo email account. The issue is that it maxes out at 10,000 even though some of my inboxes have over 80,000 emails. Would anyone have any ideas how this can be fixed? My code is as follows. Thanks!</p> <pre><code>import os import getpass import imaplib import logging IMAP_SERVER = 'imap.mail.yahoo.com' MAILBOX_BLACKLIST = ['trash'] def get_credentials(): username = input('Yahoo Mail username: ').strip() psswd = getpass.getpass(prompt='Yahoo Mail app-specific password: ') return username, psswd def login(username, psswd): imap_client = imaplib.IMAP4_SSL(IMAP_SERVER) try: login_status, _ = imap_client.login(username, psswd) if login_status == &quot;OK&quot;: logging.info(&quot;Login successful&quot;) return imap_client except imaplib.IMAP4.error as e: logging.error(f&quot;Login failed: {e}&quot;) return None def count_emails(imap_client): # Get list of all mailboxes list_status, mailboxes = imap_client.list() if list_status != &quot;OK&quot;: logging.error(&quot;Could not retrieve mailboxes&quot;) return 1 # Filter and prepare mailbox names mailbox_names = [get_mailbox_name(mailbox) for mailbox in mailboxes if get_mailbox_name(mailbox).lower() not in MAILBOX_BLACKLIST] for mailbox_name in mailbox_names: try: open_status, _ = imap_client.select(f'&quot;{mailbox_name}&quot;', readonly=True) if open_status == &quot;OK&quot;: search_status, message_set = imap_client.search(None, 'ALL') if search_status == &quot;OK&quot;: message_nums = message_set[0].split() total_messages = len(message_nums) print(f&quot;Mailbox '{mailbox_name}' has {total_messages} emails.&quot;) else: print(f&quot;Search failed for mailbox {mailbox_name}&quot;) else: print(f&quot;Could not open mailbox {mailbox_name}&quot;) except imaplib.IMAP4.error as e: print(f&quot;Error accessing {mailbox_name}: {e}&quot;) def get_mailbox_name(mailbox): mailbox = mailbox.decode('utf-8') mailbox_name = 
mailbox.split(' &quot;/&quot;')[-1].replace('&quot;', '').strip() return mailbox_name if __name__ == '__main__': logging.basicConfig(level=logging.DEBUG) username, psswd = get_credentials() imap_client = login(username, psswd) if not imap_client: logging.error(&quot;Exiting due to login failure&quot;) sys.exit(1) count_emails(imap_client) try: imap_client.logout() except imaplib.IMAP4.error as e: logging.error(f&quot;Error during logout: {e}&quot;) sys.exit(0) </code></pre>
<python><email><yahoo-mail>
2024-05-26 00:35:06
1
2,618
user321627
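A workaround sketch for the 10,000 cap described above. The limit appears to be server-side: Yahoo's IMAP server seems to truncate <code>SEARCH</code> results at around 10,000 message IDs (an assumption based on the observed behaviour, not documented behaviour). <code>SELECT</code>, however, already reports the mailbox's total message count (the <code>EXISTS</code> response), so counting needs no <code>SEARCH</code> at all:

```python
def count_messages(imap_client, mailbox_name):
    """Return the total message count using SELECT's EXISTS response."""
    status, data = imap_client.select(f'"{mailbox_name}"', readonly=True)
    if status != "OK":
        raise RuntimeError(f"Could not open mailbox {mailbox_name}")
    # imaplib returns the EXISTS count as data[0], e.g. [b'83412']
    return int(data[0].decode("ascii"))

# Tiny fake client so the sketch runs without a live IMAP connection:
class FakeIMAP:
    def select(self, mailbox, readonly=False):
        return "OK", [b"83412"]

print(count_messages(FakeIMAP(), "Inbox"))
```

As a side note, the original script calls <code>sys.exit</code> in its <code>__main__</code> block but never imports <code>sys</code>; adding <code>import sys</code> is needed for those error paths to work.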
78,533,984
233,428
Implementing a weighted loss function in SFTTrainer
<p>Currently you can let SFTTrainer teach your models to learn to predict every token in your dataset, or you can let it train on &quot;completions only&quot;, using the <code>DataCollatorForCompletionOnlyLM</code> class.</p> <p>I would like something in between, where certain tokens have a higher weight than others.</p> <p>I thought it would be fairly trivial, but nope.</p> <p>Here's what I currently came up with (using Unsloth, so I can try this out on Google Collab):</p> <pre class="lang-py prettyprint-override"><code>import transformers import torch.nn as nn import torch from datetime import datetime from transformers import PreTrainedTokenizerBase from typing import List, Dict, Any from unsloth import is_bfloat16_supported from trl import SFTTrainer from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger(&quot;transformers.modeling_utils&quot;) class WeightedLossTrainer(SFTTrainer): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def compute_loss(self, model, inputs, return_outputs=False): logger.info(&quot;Compute loss starts&quot;) labels = inputs.get(&quot;labels&quot;) outputs = model(**inputs) logits = outputs.get(&quot;logits&quot;) weight_ranges = inputs.get(&quot;weight_ranges&quot;) batch_size, seq_len, num_classes = logits.shape loss_fct = nn.CrossEntropyLoss(reduction='none') total_weighted_loss = 0.0 total_weights = 0.0 logger.info(f&quot;Doing {batch_size} batch sizes&quot;) for batch_idx in range(batch_size): # Collect weights and losses. 
batch_weighted_losses = [] for start_idx, end_idx, weight in weight_ranges[batch_idx]: logit_chunk = logits[batch_idx, start_idx:end_idx + 1] label_chunk = labels[batch_idx, start_idx:end_idx + 1] loss = loss_fct(logit_chunk.view(-1, num_classes), label_chunk.view(-1)) weighted_loss = loss * weight batch_weighted_losses.append(weighted_loss.sum()) total_weights += weight * (end_idx - start_idx + 1) # Total token count in this range # Sum the weighted losses for the batch. batch_weighted_loss_sum = torch.stack(batch_weighted_losses).sum() total_weighted_loss += batch_weighted_loss_sum.detach() # Compute the mean loss. mean_loss = total_weighted_loss / total_weights mean_loss = torch.tensor(mean_loss, dtype=torch.float32, device=logits.device, requires_grad=True) logger.info(f&quot;Mean loss: {mean_loss}&quot;) return (mean_loss, outputs) if return_outputs else mean_loss class WeightedDataCollator: def __init__(self, tokenizer: PreTrainedTokenizerBase): self.tokenizer = tokenizer def __call__(self, examples: List): all_input_ids = [] all_attention_masks = [] all_weight_ranges = [] for entry in examples: example_input_ids = [] example_attention_masks = [] example_weight_ranges = [] current_length = 0 # Initialize length counter for item in entry['pieces']: tokenized = self.tokenizer(item['text'], truncation=True, padding=False, return_tensors='pt') input_ids = tokenized.input_ids.squeeze() # Get tensor, remove batch dimension attention_mask = tokenized.attention_mask.squeeze() # Get tensor, remove batch dimension start_idx = current_length end_idx = start_idx + len(input_ids) - 1 example_input_ids.append(input_ids) example_attention_masks.append(attention_mask) example_weight_ranges.append((start_idx, end_idx, item['weight'])) current_length = end_idx + 1 # Update current length concatenated_input_ids = torch.cat(example_input_ids, dim=0) if example_input_ids else torch.tensor([], dtype=torch.long) concatenated_attention_masks = torch.cat(example_attention_masks, 
dim=0) if example_attention_masks else torch.tensor([], dtype=torch.long) pad_length = max_seq_length - len(concatenated_input_ids) # Assuming max_length = 512 for padding if needed if pad_length &gt; 0: concatenated_input_ids = torch.cat([concatenated_input_ids, torch.tensor([self.tokenizer.pad_token_id] * pad_length)]) concatenated_attention_masks = torch.cat([concatenated_attention_masks, torch.tensor([0] * pad_length)]) all_input_ids.append(concatenated_input_ids) all_attention_masks.append(concatenated_attention_masks) all_weight_ranges.append(example_weight_ranges) logger.info(f&quot;All ranges: {all_weight_ranges}&quot;) return { &quot;input_ids&quot;: torch.stack(all_input_ids), &quot;attention_mask&quot;: torch.stack(all_attention_masks), &quot;labels&quot;: torch.stack(all_input_ids).clone(), &quot;weight_ranges&quot;: all_weight_ranges } # Define data collator data_collator = WeightedDataCollator(tokenizer=tokenizer) # Prepare dataset for the data collator #collated_data = data_collator(dataset) training_args = transformers.TrainingArguments( per_device_train_batch_size = 2, gradient_accumulation_steps = 4, warmup_steps = 5, max_steps = 60, learning_rate = 2e-4, fp16 = not is_bfloat16_supported(), bf16 = is_bfloat16_supported(), logging_steps = 5, optim = &quot;adamw_8bit&quot;, weight_decay = 0.01, lr_scheduler_type = &quot;linear&quot;, seed = 3407, output_dir = &quot;outputs&quot;, remove_unused_columns=False, ) from trl import SFTTrainer from transformers import TrainingArguments from unsloth import is_bfloat16_supported trainer = WeightedLossTrainer( model = model, tokenizer = tokenizer, train_dataset = dataset, data_collator=data_collator, max_seq_length = max_seq_length, dataset_num_proc = 2, args = training_args, packing=False, dataset_text_field='text', dataset_kwargs={'skip_prepare_dataset': True} ) trainer_stats = trainer.train() </code></pre> <p>Each entry in my dataset is an object that has a single property <code>pieces</code>. 
<code>pieces</code> is an array, and it contains other objects. Each object inside it has a <code>text</code> and a <code>weight</code> property.</p> <p>As soon as it starts to calculate the loss, it seems to take a long while (a few seconds) until it eventually just OOMs: ran out of CUDA memory.</p> <p>So what exactly am I doing wrong, and how can I fix it?</p>
<python><pytorch><huggingface-transformers><huggingface-trainer>
2024-05-26 00:01:05
0
22,155
Jelle De Loecker
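Regarding the weighted-loss question above, two things stand out in the posted trainer. First, <code>compute_loss</code> accumulates <code>total_weighted_loss</code> from <code>.detach()</code>-ed values and then rebuilds the result with <code>torch.tensor(..., requires_grad=True)</code>; that severs the autograd graph, so no gradients from the model can reach the optimizer — the loss should remain a plain tensor expression of <code>logits</code>. Second, the nested per-range Python loop is what makes each step slow. The arithmetic itself is just per-token cross-entropy scaled by a per-token weight and normalised by the total weight mass; here is that formula in NumPy (NumPy only so it runs standalone — in the trainer the same expression would be written with torch ops):

```python
import numpy as np

def weighted_token_ce(logits, labels, weights):
    """Weighted mean cross-entropy over a token sequence.

    logits: (seq_len, vocab); labels, weights: (seq_len,)
    """
    shifted = logits - logits.max(axis=-1, keepdims=True)    # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    token_loss = -log_probs[np.arange(len(labels)), labels]  # CE per token
    return (token_loss * weights).sum() / weights.sum()

logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 2.0]])
labels = np.array([0, 1, 0])                 # last token is mispredicted

uniform = weighted_token_ce(logits, labels, np.array([1.0, 1.0, 1.0]))
upweighted = weighted_token_ce(logits, labels, np.array([1.0, 1.0, 3.0]))
print(uniform, upweighted)                   # upweighting the bad token raises the loss
```

Expanding the per-range weights into one <code>(seq_len,)</code> weight vector per example (in the collator) lets the whole loss be computed in one vectorised expression, with no per-range loop and no graph-breaking rewrap.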
78,533,861
5,306,861
Python silently crashes after building scipy from source code, when `import _ufuncs`
<p>I built scipy from source on Windows according to this <a href="https://docs.scipy.org/doc/scipy/building/index.html#building-from-source" rel="nofollow noreferrer">guide</a>. The build was successful, using the command <code>python dev.py build --with-scipy-openblas</code>.</p> <p>Now I run Python and write the following code:</p> <pre class="lang-py prettyprint-override"><code>import os os.chdir(&quot;C:/Users/codeDom/scipy/build-install/Lib/site-packages&quot;) from scipy.special import _ufuncs </code></pre> <p>Python just crashes without any error message. I ran Python under Visual Studio's debugger and got an obscure error: <a href="https://i.sstatic.net/19jYDkB3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19jYDkB3.png" alt="enter image description here" /></a></p> <p>I tried using <code>faulthandler</code> and didn't get any useful information.</p> <p>I also tried <code>python -m pdb program.py</code>, and nothing helped.</p> <p>I also tried to check in <code>ProcessMonitor</code> if there is any missing DLL and I didn't find anything missing.</p> <p>I think it's related to <code>openblas</code> but I'm not sure.</p> <p>My question is how can I find out what the problem is? Why does Python crash without any error, and how can I debug Python and find the error?</p>
<python><scipy>
2024-05-25 22:24:49
1
1,839
codeDom
78,533,823
5,053,483
Flattening groups of indices in a numpy array
<p>Suppose I have a four-dimensional array and I would like to flatten the first two indices and the last two indices. For example, with a 2x2x2x2 array, this would yield a (2x2)x(2x2)=4x4 array. How can I do this?</p> <p>In other words, I want to reindex an array in the fashion of</p> <p><code>A_new[2*n+m,2*q+p] = A_old[m,n,p,q]</code></p> <p>where the first two indices have been consolidated into one large index and so have the last two indices. To my knowledge, <code>reshape</code> doesn't allow this level of control into the reindexing. How can I achieve it?</p>
<python><numpy><numpy-ndarray>
2024-05-25 22:02:41
2
482
BGreen
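For the reindexing above: <code>reshape</code> alone always flattens with the leftmost index as the major one, so the control comes from a <code>transpose</code> first — reorder the axes so each pair's major index precedes its minor one, then merge with <code>reshape</code>. A sketch:

```python
import numpy as np

A_old = np.arange(16).reshape(2, 2, 2, 2)          # axes ordered (m, n, p, q)

# A_new[2*n + m, 2*q + p] = A_old[m, n, p, q]:
# move the major indices (n, q) ahead of the minor ones (m, p), then merge.
A_new = A_old.transpose(1, 0, 3, 2).reshape(4, 4)  # (n, m, q, p) -> (2n+m, 2q+p)

print(A_new)
```

A plain <code>A_old.reshape(4, 4)</code> would instead realise <code>A_new[2*m+n, 2*p+q]</code>; the transpose step is what decides which index of each merged pair is "major".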
78,533,795
12,139,954
Streamlit multipage app crashes on import pandas or import plotly.express statement
<p><strong>EDIT: Solution that worked for me</strong></p> <p>instead of using conda environment, I created environment using pip and it stopped crashing. Don't know what is the issue with conda based environment.<br /> <strong>EDIT End</strong></p> <p>--</p> <p>the code is more or less a copy paste of sample code given at link <a href="https://docs.streamlit.io/develop/tutorials/multipage/st.page_link-nav" rel="nofollow noreferrer">page_link</a></p> <p>The way to run code will be <code>streamlit run streamlit_app.py </code><br /> <strong>NOTE: The checked in version of the code works</strong></p> <p>Now if you go to file: pages/page2.py and uncomment line<br /> <code># import pandas as pd</code><br /> and run again the code crashes on running when you click on 'backtesting page' on the side bar</p> <p>repository: <a href="https://github.com/alsm6169/streamlit-login" rel="nofollow noreferrer">https://github.com/alsm6169/streamlit-login</a></p> <pre><code>% python --version Python 3.11.9 % streamlit version Streamlit, version 1.35.0 </code></pre> <p>minimum code</p> <blockquote> <p>navigation.py</p> </blockquote> <pre><code>def authenticated_menu(): # Show a navigation menu for authenticated users if st.session_state.get(&quot;logged_in&quot;, False): st.sidebar.page_link(&quot;pages/page1.py&quot;, label=&quot;Landing Page&quot;) st.sidebar.page_link(&quot;pages/page2.py&quot;, label=&quot;Backtesting Page&quot;) st.sidebar.page_link(&quot;streamlit_app.py&quot;, label=&quot;Login Page&quot;, disabled=True) def unauthenticated_menu(): # Show a navigation menu for unauthenticated users st.sidebar.page_link(&quot;streamlit_app.py&quot;, label=&quot;Login Page&quot;, disabled=False) def menu(): # Determine if a user is logged in or not, then show the correct # navigation menu if &quot;logged_in&quot; not in st.session_state or st.session_state.logged_in is None: unauthenticated_menu() return authenticated_menu() </code></pre> <blockquote> <p>app.py</p> </blockquote> 
<pre><code>import streamlit as st import time from navigation import menu menu() # Render the dynamic menu! username = st.text_input(&quot;Username&quot;) password = st.text_input(&quot;Password&quot;, type=&quot;password&quot;) login_button = st.button(&quot;Login&quot;, type=&quot;primary&quot;) if login_button: if username == &quot;test&quot; and password == &quot;test&quot;: st.success(&quot;Logged in as {}&quot;.format(username)) st.session_state.logged_in = True st.switch_page(&quot;pages/page1.py&quot;) </code></pre> <blockquote> <p>pages/page1.py</p> </blockquote> <pre><code>from navigation import menu import streamlit as st menu() st.title('Landing Page (pages/page1.py)') </code></pre> <blockquote> <p>pages/page2.py</p> </blockquote> <pre><code>import streamlit as st # import pandas as pd ###&lt;--uncommenting leads to code crash from navigation import menu menu() st.title('Backtesting Page (pages/page2.py)') </code></pre>
<python><streamlit>
2024-05-25 21:44:51
0
381
Ani
78,533,683
7,318,120
how to inherit from tkinter
<p>I see two methods for using the <code>tkinter</code> module with Python classes:</p> <ul> <li>the first method uses <code>super()</code>.</li> <li>the second method does not.</li> </ul> <p>The example code looks like this:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk class MyClass(tk.Tk): ''' class using super ''' def __init__(self): super().__init__() self.title('class using super') self.mainloop() return class MyOtherClass: ''' class that does not use super ''' def __init__(self): self.win = tk.Tk() self.win.title('class NOT using super') self.win.mainloop() return # main guard idiom if __name__=='__main__': app_1 = MyClass() app_2 = MyOtherClass() </code></pre> <p>Both of the classes work.</p> <p>It looks like I can achieve exactly the same with both (as far as I can tell). The only difference is <code>self</code> compared to <code>self.win</code> for the creation of the actual tkinter window.</p> <p>But which of the two class styles is preferable (if any)?</p>
<python><class><tkinter>
2024-05-25 20:48:01
1
6,075
darren
78,533,563
1,601,580
How do we dispatch many multiprocessing jobs robustly in Python without memory errors?
<p>I'm working on a Python project where I need to run a large grid search optimization using multiprocessing. My challenge is to dispatch a large number of jobs without running into memory issues.</p> <p>Here's my current approach:</p> <ol> <li>Memory Check Function: Checks if the current memory usage is within a specified limit.</li> <li>Optimization Task Function: Runs the optimization using SciPy's minimize on given parameters.</li> <li>Main Function:</li> </ol> <ul> <li>Creates a pool of worker processes.</li> <li>Dispatches subsets of tasks (G') to the pool.</li> <li>Waits for the current tasks to complete.</li> <li>Checks memory usage before dispatching the next subset.</li> <li>Updates the best result and parameters if the current batch yields a better result.</li> <li>Closes the pool after all tasks are processed. Here's my code:</li> </ul> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp import psutil import numpy as np from scipy.optimize import minimize from typing import List, Dict, Any import time # Function to check current memory usage def memory_usage_within_limit(limit: float) -&gt; bool: &quot;&quot;&quot;Check if the current memory usage is within the specified limit.&quot;&quot;&quot; memory_info = psutil.virtual_memory() return memory_info.available / memory_info.total &gt;= limit # Function to run the optimization task def optimization_task(params: Dict[str, Any]) -&gt; Dict[str, Any]: &quot;&quot;&quot;Run optimization using SciPy minimize on the given parameters.&quot;&quot;&quot; def objective_function(x): return np.sum(x**2) # Example: simple quadratic function result = minimize(objective_function, np.array(list(params.values()))) return {'params': params, 'result': result} # Function to run the optimization in parallel while managing memory usage def run_optimization(grid: List[Dict[str, Any]], memory_limit: float, num_workers: int): &quot;&quot;&quot;Run the optimization tasks in parallel, managing memory 
usage.&quot;&quot;&quot; best_result = None best_params = None pool = mp.Pool(num_workers) # Create a pool of workers try: for subset in grid: # Wait until memory usage is within limit while not memory_usage_within_limit(memory_limit): time.sleep(1) # Wait until memory usage is within limit # Dispatch the subset of tasks to the worker pool using .map async_results = pool.map(optimization_task, subset) # Update the best result and params if the current results are better for result in async_results: if best_result is None or result['result'].fun &lt; best_result['result'].fun: best_result = result best_params = result['params'] finally: pool.close() # Close the pool pool.join() # Wait for all worker processes to finish pool.terminate() # Ensure all processes are terminated return best_result, best_params # Define hyperparameter grid hyperparameter_grid = [ {'x1': i, 'x2': j} for i in range(-2, 3) for j in range(-2, 3) # Reduced grid for small example ] # Split grid into subsets subset_size = 3 grid_subsets = [hyperparameter_grid[i:i + subset_size] for i in range(0, len(hyperparameter_grid), subset_size)] # Run the optimization with memory management memory_limit = 0.1 # Allow usage up to 90% of memory num_workers = 1 # Example with 1 worker start_time = time.time() best_result, best_params = run_optimization(grid_subsets, memory_limit, num_workers) end_time = time.time() duration = end_time - start_time print(&quot;Best Result:&quot;, best_result) print(&quot;Best Params:&quot;, best_params) print(&quot;Duration:&quot;, duration) </code></pre> <p>I appreciate any help or suggestions on how to improve the robustness of my multiprocessing job dispatching to avoid memory errors -- especially because this can't be a new problem. I bet it's been solved before. Thank you!</p> <hr /> <p>While this approach <strong>might</strong> works, I wish I could continually dispatch multiprocessing jobs without having memory issues. 
Ideally, I would like to continually dispatch jobs, and after they run for a bit and the memory increase stops, dispatch more jobs while always keeping a safe margin from the total memory limit. Is this possible? If so, how can I implement it?</p> <p>First attempt, but I feel these are amateur attempts to my issues:</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing import psutil import time def example_task(arg): time.sleep(5) # Simulate a task that takes time return f&quot;Processed {arg}&quot; def memory_safe_worker(func, args, memory_limit, sleep_interval=1): &quot;&quot;&quot; Wrapper function to monitor memory usage and manage jobs accordingly. &quot;&quot;&quot; pool = multiprocessing.Pool() jobs = [] def check_memory(): mem = psutil.virtual_memory() return mem.available / mem.total try: while args: current_memory = check_memory() if current_memory &gt; memory_limit: arg = args.pop(0) job = pool.apply_async(func, (arg,)) jobs.append(job) print(f&quot;Dispatched a job for argument {arg}. Available memory: {current_memory * 100:.2f}%&quot;) else: print(f&quot;Waiting for memory to free up. Available memory: {current_memory * 100:.2f}%&quot;) time.sleep(sleep_interval) # Clean up completed jobs jobs = [job for job in jobs if not job.ready()] except KeyboardInterrupt: print(&quot;Terminating all jobs...&quot;) pool.terminate() pool.join() finally: pool.close() pool.join() if __name__ == &quot;__main__&quot;: memory_limit = 0.2 # Adjust this to set how much memory should be available before dispatching new jobs (e.g., 0.2 means 20% of total memory should be free) args = [i for i in range(10)] # Example arguments for tasks memory_safe_worker(example_task, args, memory_limit) </code></pre>
<python><multiprocessing><scipy-optimize>
2024-05-25 19:50:08
0
6,126
Charlie Parker
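On the dispatching question above: rather than pre-slicing the grid into subsets and polling free memory between batches, the pool can simply be fed lazily. <code>imap_unordered</code> pulls tasks from an iterator on demand, so only roughly <code>workers × chunksize</code> pending items exist at once, and results can be reduced as they arrive instead of being collected in a list. A sketch (shown with <code>ThreadPool</code> purely so it runs anywhere without a <code>__main__</code> guard — <code>multiprocessing.Pool</code> exposes the identical API):

```python
from multiprocessing.pool import ThreadPool

def optimization_task(params):
    x1, x2 = params["x1"], params["x2"]
    return {"params": params, "fun": x1 ** 2 + x2 ** 2}   # toy objective

def grid():   # a generator: the full grid never exists in memory at once
    for i in range(-2, 3):
        for j in range(-2, 3):
            yield {"x1": i, "x2": j}

best = None
with ThreadPool(processes=2) as pool:
    for result in pool.imap_unordered(optimization_task, grid(), chunksize=4):
        if best is None or result["fun"] < best["fun"]:
            best = result   # keep only the running best; discard the rest

print(best["params"], best["fun"])
```

If memory still needs guarding, the psutil check fits naturally as a pause inside the consuming loop, rather than as a gate between pre-built batches.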
78,533,165
8,521,346
Selenium Standalone Chrome Allow Multiple Simultaneous Executions
<p>I have an app that allows users to click a button that performs a web-scraping task using selenium. During dev with a webdriver.exe file, this works fine, but we decided to use the <code>selenium/standalone-chrome</code> docker image for production.</p> <p><code>driver = webdriver.Remote(&quot;http://127.0.0.1:4444/wd/hub&quot;, options=options)</code></p> <p>The only issue is that until one user is done with their scraping, the remote WebDriver won't allow a connection from another user, whereas when just using the webdriver binary, it will simply spin up another process.</p> <p>My question is, how do I allow multiple users to run these scraping tasks simultaneously?</p> <ul> <li>Is there a way to just make Selenium Grid spin up a new process for each connection?</li> <li>What about spinning up and destroying docker containers on the fly with a random port number?</li> </ul>
<python><docker><selenium-webdriver>
2024-05-25 17:07:25
1
2,198
Bigbob556677
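For the concurrency question above, one commonly used route is a hub with scalable nodes instead of a single standalone container, raising the per-node session cap. This is a hypothetical config sketch — the image tags and environment variable values should be checked against the Selenium Grid Docker documentation for the version in use:

```yaml
# docker-compose.yml (sketch)
services:
  selenium-hub:
    image: selenium/hub:latest
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:latest
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_NODE_MAX_SESSIONS=4             # parallel sessions per node
      - SE_NODE_OVERRIDE_MAX_SESSIONS=true
```

Running <code>docker compose up --scale chrome=3</code> would then offer 3 × 4 concurrent sessions behind the same <code>webdriver.Remote(&quot;http://127.0.0.1:4444/wd/hub&quot;)</code> endpoint, with the hub queueing any overflow — no per-user containers or random ports needed.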
78,533,162
274,579
How to define separator characters for argparse?
<p>Python's <code>argparse</code> module is a powerful tool for parsing script command line arguments. The problem is that it expects the args to be separated by spaces. Is there a way to let <code>argparse</code> know I am using a comma-separated args list instead of space-separated args list, especially for the positional args?</p> <p>For example:</p> <pre><code>$ python3 myscr.py --arg1 1 --arg2 2 3,4 </code></pre> <p>In this example, <code>3</code> and <code>4</code> are two required positional args separated by a comma, but the parser will get them as a single positional argument, and possibly throw an error for the missing second arg.</p> <p>The help for the <a href="https://docs.python.org/3/library/argparse.html#the-parse-args-method" rel="nofollow noreferrer"><code>parse_args()</code></a> method says it is expecting a list of strings to parse, and where none is provided, it uses <code>sys.argv</code> by default. So a workaround may be to pre-parse <code>sys.argv</code> before calling <code>parse_args()</code>. But this needs explicitly extending the list and making sure not to split comma-separated-substrings that are given inside quotes, etc.</p>
<python><python-3.x><command-line-arguments><argparse>
2024-05-25 17:06:35
2
8,231
ysap
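For the separator question above: <code>argparse</code> indeed has no separator setting, but the pre-parse step the question anticipates can stay small if only non-option tokens are split. (The quoting caveat the question raises remains — a quote-aware split would need something like <code>shlex</code>.) A sketch:

```python
import argparse

def split_commas(argv):
    """Expand comma-separated tokens into separate arguments.

    Tokens that start with '-' (options) are left alone, so only
    positional-looking arguments get split.
    """
    out = []
    for token in argv:
        if token.startswith("-"):
            out.append(token)
        else:
            out.extend(token.split(","))
    return out

parser = argparse.ArgumentParser()
parser.add_argument("--arg1")
parser.add_argument("--arg2")
parser.add_argument("pos1")
parser.add_argument("pos2")

# Equivalent of: python3 myscr.py --arg1 1 --arg2 2 3,4
args = parser.parse_args(split_commas(["--arg1", "1", "--arg2", "2", "3,4"]))
print(args.pos1, args.pos2)
```

One caveat: this also splits option <em>values</em> that happen to contain commas (e.g. <code>--arg1 a,b</code>); restricting the splitting to tokens after a <code>--</code> sentinel, or to the trailing positionals only, avoids that.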
78,533,152
1,128,648
Unable to upload file to gdrive using gdrive api with service account
<p>I am using a service account created in Google Cloud Platform to upload csv files to gdrive using a python script. This script was working fine until yesterday, but now I am getting the error below.</p> <p>Script:</p> <pre><code>scopes = ['https://www.googleapis.com/auth/drive'] gdrive = 'D:/common-modules/gdrive.json' folder_id = &quot;1IxxxxyYxxxx&quot; def upload_option_strike_data(save_location, file_name): try: creds = service_account.Credentials.from_service_account_file(gdrive, scopes=scopes) service = build('drive', 'v3', credentials=creds, cache_discovery=False) file_metadata = { 'name': file_name, 'parents': [folder_id] } file_path = f&quot;{save_location}/{file_name}&quot; media = MediaFileUpload(file_path, mimetype='text/csv') service.files().create( body=file_metadata, media_body=media ).execute() print(f&quot;Successfully uploaded historical data collected for to Gdrive&quot;) except Exception as e: print( f&quot;Failed to upload historical data collected for .Function - upload_option_strike_data.Exception: {str(e)}&quot;) upload_option_strike_data(&quot;D:/daily-data/output&quot;, &quot;FINNIFTY_OptionData_20240514.csv&quot;) </code></pre> <p>Error Message:</p> <pre><code>Failed to upload historical data collected for .Function - upload_option_strike_data.Exception: &lt;HttpError 403 when requesting https://www.googleapis.com/upload/drive/v3/files?alt=json&amp;uploadType=multipart returned &quot;The user's Drive storage quota has been exceeded.&quot;. Details: &quot;[{'message': &quot;The user's Drive storage quota has been exceeded.&quot;, 'domain': 'usageLimits', 'reason': 'storageQuotaExceeded'}]&quot;&gt; </code></pre> <p>I am using a 200GB storage plan, of which 143GB is currently used. Each of my csv files is around 7MB in size. When I upload a small file of around 100KB, files are uploaded without any issues. 
The error points to storageQuotaExceeded, but I am unable to identify this quota setting (I don't see anything in the GCP console).</p> <p>How can I resolve this issue?</p>
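For reference, a hedged sketch rather than a confirmed fix: files created by a service account are owned by that service account and count against the service account's own Drive quota, not against the 200GB plan of the personal account mentioned above, which would explain the 403 even with free personal storage. One commonly suggested direction is to upload into a Shared Drive folder, which requires passing `supportsAllDrives=True` to `files().create()`. The folder id below is a placeholder, not a real value.

```python
# Sketch: build the request pieces for a files().create() call that targets
# a folder inside a Shared Drive (so the service account does not own the
# file). SHARED_DRIVE_FOLDER_ID is a hypothetical placeholder.

def build_shared_drive_upload(file_name, folder_id):
    """Return (body, extra_kwargs) for a Shared Drive upload."""
    body = {"name": file_name, "parents": [folder_id]}
    # Without this flag the Drive v3 API refuses to write to Shared Drives.
    extra_kwargs = {"supportsAllDrives": True}
    return body, extra_kwargs


body, kwargs = build_shared_drive_upload("report.csv", "SHARED_DRIVE_FOLDER_ID")
# The actual call would then look like:
# service.files().create(body=body, media_body=media, **kwargs).execute()
```

The alternative usually mentioned alongside this is domain-wide delegation, where the service account impersonates a real user so uploads count against that user's quota.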
<python><google-cloud-platform><google-drive-api>
2024-05-25 17:03:21
1
1,746
acr
78,533,132
1,132,175
Ray serve error: serve.run throws utf-8 can't decode byte 0xf8 in position 0 invalid byte
<p>I am trying to run <code>serve.run</code> in my test method but when the test runs, it throws an error in this part of the code:</p> <p><code>serve.run(RayFastApiWrapper.bind(), route_prefix=settings.app_root_url)</code></p> <p>FastApiWrapper looks like this:</p> <pre><code>@serve.deployment(ray_actor_options={&quot;num_gpus&quot;: 1, &quot;num_cpus&quot;: 8}) @serve.ingress(app) class RayFastApiWrapper:     pass </code></pre> <p>And the module causing the exception seems to be in module <code>ray/_private/accelerators/nvidia_gpu.py</code></p> <p>StackTrace:</p> <pre><code>serve.run(RayFastApiWrapper.bind(), route_prefix=settings.app_root_url) /home/user/.local/lib/python3.9/site-packages/ray/serve/api.py:578: in run handle = _run( /home/user/.local/lib/python3.9/site-packages/ray/serve/api.py:484: in _run client = _private_api.serve_start( /home/user/.local/lib/python3.9/site-packages/ray/serve/_private/api.py:257: in serve_start client = _get_global_client(_health_check_controller=True) /home/user/.local/lib/python3.9/site-packages/ray/serve/context.py:87: in _get_global_client return _connect(raise_if_no_controller_running) /home/user/.local/lib/python3.9/site-packages/ray/serve/context.py:132: in _connect ray.init(namespace=SERVE_NAMESPACE) /home/user/.local/lib/python3.9/site-packages/ray/_private/client_mode_hook.py:103: in wrapper return func(*args, **kwargs) /home/user/.local/lib/python3.9/site-packages/ray/_private/worker.py:1642: in init _global_node = ray._private.node.Node( /home/user/.local/lib/python3.9/site-packages/ray/_private/node.py:336: in __init__ self.start_ray_processes() /home/user/.local/lib/python3.9/site-packages/ray/_private/node.py:1396: in start_ray_processes resource_spec = self.get_resource_spec() /home/user/.local/lib/python3.9/site-packages/ray/_private/node.py:571: in get_resource_spec self._resource_spec = ResourceSpec( /home/user/.local/lib/python3.9/site-packages/ray/_private/resource_spec.py:215: in resolve 
accelerator_manager.get_current_node_accelerator_type() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ @staticmethod def get_current_node_accelerator_type() -&gt; Optional[str]: import ray._private.thirdparty.pynvml as pynvml try: pynvml.nvmlInit() except pynvml.NVMLError: return None # pynvml init failed device_count = pynvml.nvmlDeviceGetCount() cuda_device_type = None if device_count &gt; 0: handle = pynvml.nvmlDeviceGetHandleByIndex(0) device_name = pynvml.nvmlDeviceGetName(handle) if isinstance(device_name, bytes): &gt; device_name = device_name.decode(&quot;utf-8&quot;) E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf8 in position 0: invalid start byte </code></pre> <p>I am using ray 2.23.0 and gpustat 1.0. However, it seems ray uses its own fork of pynvml, judging from the code in the repo.</p> <p>UPDATE: Following the comment from @furas, trying to decode with the utf-16 codec certainly returns some weird chars:</p> <pre><code>&gt;&gt;&gt; device_name.decode(&quot;utf-16&quot;) '闸膠\uf88e肑要郸膐\uf889낑ꂀ釸膠\uf8a5ꂜ꾁駸膐\uf8a3ꂔꂀ雸膀\uf894낌ꂀ軸肐グ </code></pre> <p>However, if I use the command nvidia-smi directly from the terminal, the device name is correctly obtained:</p> <pre><code>+-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 555.42.03 Driver Version: 555.85 CUDA Version: 12.5 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 3090 On | 00000000:01:00.0 On | N/A | | 0% 35C P8 16W / 370W | 947MiB / 24576MiB | 2% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ </code></pre>
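Not a root-cause fix (the garbage bytes suggest a mismatch between the installed driver and the pynvml fork bundled with ray), but the failure mode in the traceback can be neutralised by decoding defensively instead of assuming valid UTF-8 — a sketch of what a patched decode step might look like:

```python
def safe_decode(device_name):
    """Decode an NVML device name without raising; invalid bytes become
    U+FFFD replacement characters (0xf8 is an invalid UTF-8 start byte,
    exactly the byte from the traceback above)."""
    if isinstance(device_name, bytes):
        return device_name.decode("utf-8", errors="replace")
    return device_name


print(safe_decode(b"\xf8NVIDIA"))  # -> '\ufffdNVIDIA'
```

Since the failing code lives inside ray itself, the cleaner route is still pinning ray and driver versions that agree, rather than monkey-patching the decode.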
<python><fastapi><ray>
2024-05-25 16:54:22
1
597
Jorge Cespedes
78,533,107
590,335
ibis: select map on array value works with table object but not with the underscore operation
<p>For a table with a list field I'm trying to call <code>map</code>:</p> <pre><code>import ibis from ibis import _ t = ibis.memtable({ 'i': [1,2,3], 'x': [[1,2],[3,5],[6,7] ] }) </code></pre> <p>Using the table object works:</p> <pre><code>t.select(t.x.map(_ + 1)) </code></pre> <p>But using the underscore API (<a href="https://ibis-project.org/how-to/analytics/chain_expressions" rel="nofollow noreferrer">chain expressions</a>) fails:</p> <pre><code>t.select(_.x.map(_ + 1)) </code></pre> <blockquote> <p>TypeError: unsupported operand type(s) for +: 'Table' and 'int'</p> </blockquote>
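A likely explanation, illustrated with a toy placeholder rather than ibis internals: a deferred expression is replayed against exactly one target. Inside `t.select(_.x.map(_ + 1))` both underscores are the same object, so when `select` resolves the expression against the table, `_ + 1` is also resolved against the table, giving `Table + int`, which matches the error. A minimal model of such a placeholder:

```python
class Deferred:
    """Toy stand-in for ibis's `_` placeholder (NOT the real implementation).
    It records operations and replays them against a single target."""

    def __init__(self, ops=()):
        self.__dict__["ops"] = ops

    def __getattr__(self, name):
        return Deferred(self.ops + (("attr", name),))

    def __add__(self, other):
        return Deferred(self.ops + (("add", other),))

    def resolve(self, target):
        # Every recorded op is replayed against ONE target, so a single
        # placeholder cannot mean "the table" and "an element of x" at once.
        for op, arg in self.ops:
            target = getattr(target, arg) if op == "attr" else target + arg
        return target


d = Deferred()
print((d + 1).resolve(41))  # -> 42
```

This suggests the workaround of giving the element its own binding, e.g. `t.select(_.x.map(lambda v: v + 1))`, assuming ibis's `map` accepts a plain callable as its documentation indicates.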
<python><ibis>
2024-05-25 16:47:33
1
8,467
Ophir Yoktan
78,532,969
373,121
How to make Python VSCode Intellisense work with dynamically imported versioned API modules
<p>I have a Python client library that supports multiple versions of a server by dynamically loading a different &quot;API&quot; module depending on which version of the server is connected to.</p> <p>Once that module is loaded, the API content is fully determined. The implementing class has complete comments and even has full type hinting implemented. However, because it is dynamically loaded, the type information is lost in VSCode and someone developing against the library does not get any of the suggestions and completions that VSCode would give with other Python code. (I have the Python extension installed so I guess this is really a PyLance question?)</p> <p>Roughly speaking, the actual setup looks something like this:</p> <p>There is a directory containing the API modules:</p> <pre><code> api/ ver_1/ api.py ver_2/ api.py ... </code></pre> <p>Each <code>api.py</code> module exposes a &quot;root API&quot; object that is derived from a common generic type - say, <code>api_base</code>.</p> <p>The client library code looks something like this:</p> <pre><code>class Client: def connect_to_server(...) -&gt; api_base: # Depending on the version of the server, # use importlib.import_module() to import the correct api.py </code></pre> <p>Is there any way that VSCode/PyLance can be coerced into assuming that <code>connect_to_server</code> returns a specific type so that Intellisense would work? 
It would be nice to make this configurable but it wouldn't matter if it were fixed to a specific version - the latest one, say, as this would satisfy the most common use case, and the API versions differ little enough that it would be useful even for users of older versions.</p> <p>In trying to find others who have had similar issues, I have come across suggestions of using <code>.pyi</code> files, but I haven't quite been able to get my head around exactly what I would need to do, nor indeed be sure whether this is the right solution to be pursuing.</p> <p><strong>Update</strong></p> <p>A comment asked me why I couldn't simply use <code>typing.cast</code>. This might in fact offer a solution for certain cases, but I think the point is that we want to avoid any unnecessary explicit mentions of version numbers.</p> <p>The Intellisense support that I am concerned about is for the users of the <code>Client</code> library, who will typically be writing scripts against the API that it exposes. The API is conceptually simple but is quite big, and it involves a lot of named settings exposed as properties. It would be useful for users if code completions and suggestions worked.</p> <p>The expectation is that a given version of the client library is associated with a default server version. One detail that I skipped is that it will often launch as well as connect to the default version of the server process. So, the normal use case for 95% of the time is that the current version is going to be used without having to mention it, but there has to be flexibility to use a different version if required. Moreover, if a script is written against version N, there is an expectation that the same script should almost always work without modification against version N+1. 
We therefore would like to avoid imposing the requirement on users to introduce explicit casts in their code just so that they can get Intellisense to work.</p> <p>Based on some preliminary experiments, I <em>think</em> I might be able to achieve what I want by using <code>.pyi</code> files in a particular way. I need to take it further and will write a solution if it works out. The basic idea is that the <code>.pyi</code> (but no actual code) will refer explicitly to the current default API version. The <code>typing.cast</code> approach then might be a way for users who explicitly want to use a different version to override that default behaviour.</p>
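A hedged, runnable sketch of the <code>typing.cast</code> direction discussed above. The <code>api_ver_N</code> modules fabricated here are stand-ins for the real <code>api/ver_1/api.py</code> and <code>api/ver_2/api.py</code> packages, so the example is self-contained; in real code the cast target would be the statically importable default API class (typically imported under <code>if TYPE_CHECKING:</code>), which is what gives Pylance concrete completions — <code>cast()</code> itself is a runtime no-op, and a <code>.pyi</code> stub would declare the same return type in a stub file instead of in code.

```python
import importlib
import sys
import types
from typing import cast


class ApiBase:
    """Stand-in for the common api_base the versioned APIs derive from."""
    tag = "base"

    def version(self) -> str:
        return self.tag


# Fabricate two hypothetical versioned modules so the sketch runs on its own.
for _ver in ("ver_1", "ver_2"):
    _mod = types.ModuleType(f"api_{_ver}")
    _mod.Api = type("Api", (ApiBase,), {"tag": _ver})
    sys.modules[f"api_{_ver}"] = _mod

DEFAULT_VERSION = "ver_2"


def connect_to_server(version: str = DEFAULT_VERSION) -> ApiBase:
    """Dynamically import the versioned API module, as in the question."""
    module = importlib.import_module(f"api_{version}")
    return module.Api()


# Runtime no-op; statically, the cast target tells the checker which
# concrete API to assume for the common default-version case.
api = cast(sys.modules["api_ver_2"].Api, connect_to_server())
```

Users needing a non-default version could still call <code>connect_to_server("ver_1")</code> and apply their own cast, keeping version numbers out of the 95% case.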
<python><python-typing><pyright>
2024-05-25 15:56:49
0
475
Bob
78,532,840
552,247
Handling Synchronous Socket Reads and Asynchronous Events in Python
<p>As an educational project, I'm working on a Python application that involves multiple threads, each blocked on a synchronous socket read operation. I need these threads to also respond to two types of events: individual thread-specific events and broadcast events that all threads should respond to. The challenge is to handle these events without interrupting the blocking read on the socket.</p> <p>Here's what I'm trying to achieve:</p> <ul> <li>Each thread is continuously reading from its own dedicated socket.</li> <li>I need each thread to handle &quot;personal&quot; events and a &quot;broadcast&quot; event. The broadcast event should be received by all threads simultaneously without any one of them missing it due to others reading it first.</li> </ul> <p>The solution needs to ensure that while a thread is blocked on a socket read, it can still react to the broadcast events. I've considered using select to monitor the sockets and other descriptors, but I'm unsure how to implement the broadcast mechanism properly. Here's the conceptual approach I've thought about:</p> <ul> <li>Use <code>os.pipe()</code> to create communication channels for the personal and broadcast events.</li> <li>Use <code>select.select()</code> to wait on the sockets and these pipes simultaneously.</li> </ul> <p>However, I'm struggling with the scenario where the broadcast event must be handled by all threads without any thread missing it because another has already read it. 
Here's a rough sketch of what I'm thinking:</p> <pre class="lang-py prettyprint-override"><code>import select import socket import os import threading # Setup for sockets and pipes def thread_function(sock, rfd): while True: readable, _, _ = select.select([sock, rfd], [], []) for r in readable: if r == sock: data = sock.recv(1024) # Handle socket data elif r == rfd: os.read(rfd, 1024) # Handle event # Threads setup and event handling logic here </code></pre> <p>My questions are:</p> <ul> <li>How can I implement a broadcast event mechanism that ensures all threads receive the event without one consuming it before the others?</li> <li>Is there a better way to structure this system to handle both dedicated and broadcast events while keeping the socket read operations blocking?</li> </ul> <p>Any insights, suggestions, or examples would be greatly appreciated. Thank you!</p>
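One way the conceptual approach above can be made concrete, sketched under two assumptions: each thread gets its OWN broadcast read end (so broadcasting means writing to every thread's pipe and no thread can consume another's copy), and for the demo the workers stop after one broadcast (a real worker would keep looping). POSIX-only, since <code>select()</code> on pipes does not work on Windows; <code>socketpair()</code> stands in for the real dedicated sockets.

```python
import os
import select
import socket
import threading

N = 3
results = [[] for _ in range(N)]
socks = [socket.socketpair() for _ in range(N)]   # (worker end, feeder end)
personal = [os.pipe() for _ in range(N)]          # one personal pipe each
broadcast = [os.pipe() for _ in range(N)]         # one broadcast pipe each


def worker(idx):
    sock, p_r, b_r = socks[idx][0], personal[idx][0], broadcast[idx][0]
    while True:
        # Block on the socket AND both event pipes simultaneously.
        readable, _, _ = select.select([sock, p_r, b_r], [], [])
        for r in readable:
            if r is sock:
                results[idx].append(("data", sock.recv(1024)))
            elif r is p_r:
                results[idx].append(("personal", os.read(p_r, 1024)))
            else:
                results[idx].append(("broadcast", os.read(b_r, 1024)))
                return  # demo only: stop after the broadcast


threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()

socks[1][1].sendall(b"payload")     # socket data for thread 1 only
os.write(personal[0][1], b"ping")   # personal event for thread 0 only
for _, w in broadcast:              # broadcast: write to EVERY thread's pipe
    os.write(w, b"stop")
for t in threads:
    t.join()
```

The key design choice is fanning the broadcast out to N pipes at send time rather than sharing one pipe at receive time; the sender pays O(N) writes, but no reader can race another for the event.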
<python><multithreading><sockets>
2024-05-25 15:06:08
1
1,598
mastupristi