Dataset columns: QuestionId (int64), UserId (int64), QuestionTitle (string), QuestionBody (string), Tags (string), CreationDate (string date), AnswerCount (int64), UserExpertiseLevel (int64), UserDisplayName (string, nullable)
76,005,122
6,278
Running deferred initialization code and gracefully blocking on requests until initialization is complete
<p>I would like to set up Flask so that it first sets up the listening socket, then runs some initialization code, then holds all incoming request processing until the initialization is complete. How do I set that up?</p> <p>My Flask app takes about 30s of initialization (loading AI models, etc.). These are required for all real endpoints. However, during development, it is frustrating having to wait and watch for this to complete, before beginning with frontend interactions.</p> <p>I remember old Tomcats (~15 years ago, haven't used it since) having this exact behavior: you could already send a request from an early stage of initialization, and then it would keep the connection open and process and send the response as soon as the webapp was fully loaded.</p> <p>I guess the initialization could be deferred to a new thread and the request blocking could be accomplished with some middleware? Is there a library for this somewhere, so I don't have to start from scratch?</p>
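A framework-agnostic sketch of the "gate" pattern the question describes, using only the standard library: a `threading.Event` that initialization sets and request handling waits on. In Flask this wait would live in an `@app.before_request` hook or a small WSGI middleware; the names and the 30s timeout here are illustrative, not a specific library's API.

```python
import threading
import time

# Set once deferred initialization (model loading, etc.) has finished.
ready = threading.Event()

def slow_init():
    time.sleep(0.1)  # stands in for ~30s of model loading
    ready.set()

def handle_request(payload):
    # Hold the request until initialization has finished (bounded wait).
    if not ready.wait(timeout=30):
        raise RuntimeError("initialization did not finish in time")
    return f"processed {payload}"

# Kick off initialization in the background, then serve.
threading.Thread(target=slow_init, daemon=True).start()
print(handle_request("ping"))  # blocks briefly, then prints: processed ping
```

The listening socket can be opened immediately; only the dispatch path waits on the event, which matches the Tomcat behavior described above.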
<python><flask>
2023-04-13 11:55:14
2
10,251
Henrik Heimbuerger
76,005,083
19,770,795
Why is covariant subtyping of mutable members allowed?
<h2>Invariance of mutable collections</h2> <p>The rationale for why built-in <strong>mutable</strong> collection types in Python are <strong>invariant</strong> is explained well enough in both <a href="https://peps.python.org/pep-0483/#covariance-and-contravariance" rel="nofollow noreferrer">PEP 483</a> and <a href="https://peps.python.org/pep-0484/#covariance-and-contravariance" rel="nofollow noreferrer">PEP 484</a> and a nice illustrative example specifically for why <code>list</code> is invariant is given in the <a href="https://mypy.readthedocs.io/en/stable/generics.html#variance-of-generic-types" rel="nofollow noreferrer">Mypy docs</a> on this topic:</p> <pre class="lang-py prettyprint-override"><code>class Shape: ... class Circle(Shape): # The rotate method is only defined on Circle, not on Shape def rotate(self): ... def add_one(things: list[Shape]) -&gt; None: things.append(Shape()) my_circles: list[Circle] = [] add_one(my_circles) # This may appear safe, but my_circles[-1].rotate() # this will fail because that last item is now a Shape, not a Circle </code></pre> <p>We can <code>append</code> to a list; if <code>list[Circle]</code> <em>were</em> considered a subtype of <code>list[Shape]</code>, calling <code>add_one</code> with an argument of the type <code>list[Circle]</code> <em>would</em> be fine. But that function appends a <code>Shape</code> instance to the list, which means it would no longer be of the type <code>list[Circle]</code>, and a subsequent attempt to call the last item's <code>rotate</code> method would fail.</p> <p>So in a nutshell, if a complex<sup>1</sup> type is <strong>mutable</strong>, making it covariant with respect to whatever type(s) are contained in it would <strong>not be type safe</strong>. 
So far so good.</p> <hr /> <h2>Invariance with mutable <em>protocol</em> members</h2> <p>The <a href="https://mypy.readthedocs.io/en/stable/common_issues.html#covariant-subtyping-of-mutable-protocol-members-is-rejected" rel="nofollow noreferrer">Mypy docs</a> also give an explanation along with another example for why covariant subtyping of mutable protocol members is considered <strong>unsafe</strong>:</p> <p>(Slightly modified by me<sup>2</sup>, using dataclasses for convenience.)</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from typing import Protocol class P(Protocol): x: float def fun(arg: P) -&gt; None: arg.x = 3.14 @dataclass class C: x: int c = C(42) fun(c) # This is not safe c.x &lt;&lt; 5 # because this will fail! </code></pre> <p><code>C</code> <em>seems</em> like a structural subtype of <code>P</code> because its <code>x</code> attribute is of the type <code>int</code>, which in turn is a subtype of <code>float</code>. But Mypy rejects this notion and considers calling <code>fun</code> with an instance of <code>C</code> to be an <strong>error</strong> because <code>fun</code> could (and in this example <em>does</em>) <strong>mutate</strong> its argument assigning its <code>x</code> attribute a <code>float</code> value, and a subsequent attempt to shift the instance's <code>x</code> attribute bitwise would fail.</p> <p>Again, the crux of the argument is the <strong>mutability</strong> inherent to regular members of a class.<sup>3</sup> Makes sense.</p> <hr /> <h2>Covariant subtyping in <em>normal</em> classes?</h2> <p>Here is where I run into problems. Why does the same logic <strong>not apply</strong>, when dealing with <em>normal</em> classes? 
Simple example (using dataclasses for convenience):</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass @dataclass class Foo: x: float @dataclass class Bar(Foo): x: int def f(obj: Foo) -&gt; None: obj.x = 3.14 bar = Bar(1) f(bar) bar.x &lt;&lt; 5 # This will fail at runtime </code></pre> <p>That code will pass <code>mypy --strict</code> without errors. This means <strong>covariant</strong> subtyping regarding the type of the <code>x</code> attribute is applied here.</p> <p>I understand that it would make little sense to consider the call <code>f(bar)</code> an error because obviously <code>Bar</code> is a <strong>nominal</strong> subtype of <code>Foo</code>.<sup>4</sup> But why is the <em>definition</em> of <code>Bar</code> allowed <em>as is</em> in the first place?</p> <p>Of course <code>int</code> is a subtype of <code>float</code>; no arguments here. But we are again dealing with a complex type, so variance considerations should apply like they do in the previous cases, shouldn't they?</p> <p>Class members <em>are</em> mutable by default. Shouldn't regular classes therefore be considered <strong>invariant</strong> with respect to their members, just like protocols are?</p> <p>I would therefore expect the line <code>x: int</code> in the definition of <code>Bar</code> to trigger an error using the same logic as before. But it doesn't. Why?</p> <hr /> <h2>Pragmatism and consistency</h2> <p>I understand that there are and should be pragmatic considerations for decisions like these - whether something is covariant/contravariant/invariant. And there are even arguments made actively <em>against</em> the Liskov Substitution Principle in the context of method parameter types (to consider them covariant rather than contravariant). 
Some of these arguments center around the claim that <em>&quot;the world is covariant&quot;</em>, such as in <a href="https://se.inf.ethz.ch/%7Emeyer/ongoing/covariance/recast.pdf#G1040086" rel="nofollow noreferrer">this paper</a> from 2003. Regardless of whether or not that makes sense, these considerations <em>should</em> at least be applied <strong>consistently</strong>.</p> <p>And I struggle to see the consistency in the observations I made above.</p> <hr /> <p><strong>Footnotes:</strong></p> <p><sup>1</sup> Here the term &quot;complex&quot; simply means the type has itself <em>some</em> components with their own distinct type.</p> <p><sup>2</sup> The original code snippet from the Mypy docs simply set <code>x = 42</code> as a class attribute in the definition of <code>C</code>, which is enough to illustrate the point, but to be very precise (see comment by @chepner) I changed it to make <code>C.x</code> a &quot;pure&quot; instance attribute.</p> <p><sup>3</sup> The fact that the <strong>mutability</strong> of the member <code>x</code> is what causes the issue is further driven home in that section of the Mypy docs by suggesting <code>x</code> to be defined as a (non-settable) property to resolve the error.</p> <p><sup>4</sup> In contrast, with protocols the <strong>structural</strong> subtyping relation is obviously only checked at the call/assignment site, and an error can only arise, when an object is determined <em>not</em> to be a subtype of the variable type.</p>
<python><mypy><covariance><python-typing>
2023-04-13 11:50:50
0
19,997
Daniel Fainberg
76,005,070
12,639,940
Find all documents between 2 keys inclusive (ranged key search) - MongoDB
<p>I am emulating Firebase Realtime DB's REST GET endpoint's <code>startAt</code> and <code>endAt</code> filters using MongoDB and FastAPI.</p> <p>I am aware of the $gt and $lt query filters for fields in MongoDB and these work great with numerical values but act kind of weird when comparing values between strings.</p> <p>Firebase's functionality will find all the key-value pairs between 2 keys passed in the <code>startAt</code> &amp; <code>endAt</code> filters. For example,</p> <pre class="lang-json prettyprint-override"><code>{&quot;a&quot;: 1, &quot;b&quot;: 2, &quot;c&quot;: 11, &quot;d&quot;: 43, &quot;ax&quot;: 31} </code></pre> <p>The API Call looks like</p> <pre class="lang-bash prettyprint-override"><code>https://test-5681a-default-rtdb.firebaseio.com/sample.json?orderBy=&quot;$key&quot;&amp;startAt=&quot;b&quot;&amp;endAt=&quot;d&quot; </code></pre> <p>Returns</p> <pre class="lang-json prettyprint-override"><code>{&quot;b&quot;: 2, &quot;c&quot;: 11, &quot;d&quot;: 43} </code></pre> <h2>Replicating this functionality using MongoDB's query</h2> <p>MongoDB collections comprise JSON documents, while Firebase Realtime DB is a JSON document itself. 
When trying to replicate such behavior with MongoDB query for string comparisons with</p> <p>Sample data:</p> <pre class="lang-json prettyprint-override"><code>[ { &quot;_fm_id&quot;: &quot;1&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 1, &quot;firstName&quot;: &quot;Krish&quot;, &quot;lastName&quot;: &quot;Lee&quot;, &quot;phoneNumber&quot;: &quot;123456&quot;, &quot;emailAddress&quot;: &quot;krish.lee@learningcontainer.com&quot; } }, { &quot;_fm_id&quot;: &quot;2&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 2, &quot;firstName&quot;: &quot;racks&quot;, &quot;lastName&quot;: &quot;jacson&quot;, &quot;phoneNumber&quot;: &quot;123456&quot;, &quot;emailAddress&quot;: &quot;racks.jacson@learningcontainer.com&quot; } }, { &quot;_fm_id&quot;: &quot;3&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 3, &quot;firstName&quot;: &quot;denial&quot;, &quot;lastName&quot;: &quot;roast&quot;, &quot;phoneNumber&quot;: &quot;33333333&quot;, &quot;emailAddress&quot;: &quot;denial.roast@learningcontainer.com&quot; } }, { &quot;_fm_id&quot;: &quot;4&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 4, &quot;firstName&quot;: &quot;devid&quot;, &quot;lastName&quot;: &quot;neo&quot;, &quot;phoneNumber&quot;: &quot;222222222&quot;, &quot;emailAddress&quot;: &quot;devid.neo@learningcontainer.com&quot; } }, { &quot;_fm_id&quot;: &quot;5&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 5, &quot;firstName&quot;: &quot;jone&quot;, &quot;lastName&quot;: &quot;mac&quot;, &quot;phoneNumber&quot;: &quot;111111111&quot;, &quot;emailAddress&quot;: &quot;jone.mac@learningcontainer.com&quot; } }, { &quot;_fm_id&quot;: &quot;0605e7bc80884477b71a3675c77e5a80&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 90, &quot;firstName&quot;: &quot;Harish&quot;, &quot;lastName&quot;: &quot;Nathuram&quot;, &quot;phoneNumber&quot;: &quot;7289338473&quot;, &quot;emailAddress&quot;: &quot;harish.nathu@yahoo.com&quot; } } ] </code></pre> <p>Expecting the query to return</p> <pre class="lang-json 
prettyprint-override"><code>[ { &quot;_fm_id&quot;: &quot;3&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 3, &quot;firstName&quot;: &quot;denial&quot;, &quot;lastName&quot;: &quot;roast&quot;, &quot;phoneNumber&quot;: &quot;33333333&quot;, &quot;emailAddress&quot;: &quot;denial.roast@learningcontainer.com&quot; } }, { &quot;_fm_id&quot;: &quot;4&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 4, &quot;firstName&quot;: &quot;devid&quot;, &quot;lastName&quot;: &quot;neo&quot;, &quot;phoneNumber&quot;: &quot;222222222&quot;, &quot;emailAddress&quot;: &quot;devid.neo@learningcontainer.com&quot; } }, { &quot;_fm_id&quot;: &quot;5&quot;, &quot;_fm_val&quot;: { &quot;userId&quot;: 5, &quot;firstName&quot;: &quot;jone&quot;, &quot;lastName&quot;: &quot;mac&quot;, &quot;phoneNumber&quot;: &quot;111111111&quot;, &quot;emailAddress&quot;: &quot;jone.mac@learningcontainer.com&quot; } } ] </code></pre> <p>Can this be done by comparing <code>firstName</code>? - Write a Mongo Query</p> <pre class="lang-py prettyprint-override"><code>db.user.find( {&quot;_fm_val.firstName&quot;: {&quot;$gt&quot;: &quot;denial&quot;, &quot;$lt&quot;: &quot;jone&quot;}}, {&quot;_id&quot;: 0} ) </code></pre>
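One note on the draft query above: Firebase's `startAt`/`endAt` are inclusive, which in MongoDB terms corresponds to `$gte`/`$lte` rather than the exclusive `$gt`/`$lt` (that alone would drop the `"denial"` and `"jone"` endpoint rows). The intended inclusive, lexicographic range semantics can be sketched in plain Python:

```python
def ranged_keys(data, start, end):
    # Firebase-style inclusive key range: lexicographic string order,
    # both endpoints included. In MongoDB this would correspond to
    # {"field": {"$gte": start, "$lte": end}} ($gt/$lt exclude the endpoints).
    return {k: v for k, v in sorted(data.items()) if start <= k <= end}

sample = {"a": 1, "b": 2, "c": 11, "d": 43, "ax": 31}
print(ranged_keys(sample, "b", "d"))  # {'b': 2, 'c': 11, 'd': 43}
```

Note that `"ax"` falls outside the range because `"ax" < "b"` lexicographically, matching Firebase's output for the same data.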
<python><python-3.x><mongodb><pymongo>
2023-04-13 11:49:51
0
516
Kayvan Shah
76,005,058
10,938,315
Testing a method without instantiating the class
<p>I followed this approach <a href="https://stackoverflow.com/questions/27105491/how-can-i-unit-test-a-method-without-instantiating-the-class">here</a>, but the issue is that this requires me to duplicate code.</p> <p>I can override the class in my test, but then I also have to copy and paste the method from my original script to the test script, which defeats the purpose of unit testing. Someone could change the original method, but not the one used in the unit test.</p> <p><code>main.py</code></p> <pre><code>class ClassToTest(object): def __init__(self, x, y, z, many_dependencies): self.x = x self.y = y self.z = z self.many_dependencies = many_dependencies def method_to_test(self, y): self.x = y return 5 </code></pre> <p><code>test_main.py</code></p> <pre><code>from main import ClassToTest class ClassToTest(object): # I want to override this only def __init__(self): pass # I don't want to copy and paste this into my test script def method_to_test(self, y): self.x = y return 5 fake_instance = mock.Mock() ClassToTest.__init__(fake_instance) x = ClassToTest.method_to_test(fake_instance, 3) assert x == 5 assert fake_instance.x == 3 </code></pre> <p>Is there a way to import the class, override the init method and test the original method without pasting it into the test itself?</p>
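One stdlib-only sketch of what the question is reaching for: import the real class and call the method unbound through the class, passing a `Mock` as `self`, so `__init__` never runs and no method is copied. `ClassToTest` is inlined here so the snippet is self-contained; in the real test it would be `from main import ClassToTest`.

```python
from unittest import mock

class ClassToTest:
    # Stand-in for `from main import ClassToTest` -- never redefined in the test.
    def __init__(self, x, y, z, many_dependencies):
        self.x = x
        self.y = y
        self.z = z
        self.many_dependencies = many_dependencies

    def method_to_test(self, y):
        self.x = y
        return 5

fake_instance = mock.Mock()  # __init__ and its dependencies are never touched
result = ClassToTest.method_to_test(fake_instance, 3)

assert result == 5
assert fake_instance.x == 3
```

Because the method is looked up on the class (not an instance), the original implementation is exercised directly, so any change to it is picked up by the test.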
<python><unit-testing>
2023-04-13 11:48:54
1
881
Omega
76,005,035
8,813,699
How do I get n_samples in one call of pm.sample_posterior_predictive() in pymc?
<p>I want to generate multiple samples from one function call of pymc's sample_posterior_predictive(). In the previous version pymc3 there was an argument called 'samples', here is an example <a href="https://www.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Marginal.html" rel="nofollow noreferrer">https://www.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Marginal.html</a>. However this argument is not part of the API anymore in pymc version '5.1.2'.</p> <p>I am further using matplotlib version '3.7.1', numpy version '1.24.2' and python version '3.11.0'.</p> <p>Here is a minimal example (as minimal as possible) showing a workaround, however I really want to avoid the for-loop and generate n_samples from one function call.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import pymc as pm # set the seed np.random.seed(1) # create random data n = 50 # The number of data points X = np.linspace(0, np.pi*2, n)[:, None] # The inputs to the GP, they must be arranged as a column vector y = 2*np.sin(0.25*2*np.pi*X[:,0])*X[:,0] + 2 # setup model with pm.Model() as model: β„“ = pm.Gamma(&quot;β„“&quot;, alpha=2, beta=1) Ξ· = pm.HalfCauchy(&quot;Ξ·&quot;, beta=5) cov = Ξ·**2 * pm.gp.cov.Matern52(1, β„“) gp = pm.gp.Marginal(cov_func=cov) Οƒ = pm.HalfCauchy(&quot;Οƒ&quot;, beta=5) y_ = gp.marginal_likelihood(&quot;y&quot;, X=X, y=y, sigma=Οƒ) mp = pm.find_MAP() # new values from x=0 to x=20 X_new = np.linspace(0, 20, 600)[:, None] # add the GP conditional to the model, given the new X values with model: f_pred = gp.conditional(&quot;f_pred&quot;, X_new) ######################################################################################## # How do I get (n_samples, 600) in one call of pm.sample_posterior_predictive()? 
n_samples = 4 sample_list = [] for i in range(n_samples): with model: pred_samples = pm.sample_posterior_predictive([mp], var_names=[&quot;f_pred&quot;]) sample_list.append(pred_samples) ######################################################################################## # plot result fig, ax = plt.subplots( figsize=(12, 5)) for pred_samples in sample_list: f_pred = pred_samples.posterior_predictive[&quot;f_pred&quot;].sel(chain=0) ax.plot(X_new[:,0], f_pred[0,:], alpha=0.1, color = 'blue') # plot the data and the true latent function ax.plot(X, y, &quot;ok&quot;, ms=3, alpha=0.5, label=&quot;Observed data&quot;) plt.show() </code></pre>
<python><pymc><gaussian-process>
2023-04-13 11:45:46
0
1,855
sehan2
76,004,994
2,123,706
Download zipped CSV from URL and convert to DataFrame
<p>I want to download and read a file from this site: <a href="http://data.gdeltproject.org/events/index.html" rel="nofollow noreferrer">http://data.gdeltproject.org/events/index.html</a></p> <p>each of the files are of the named <code>YYYYMMDD.export.CSV.zip</code></p> <p>I am stuck at this point in my code:</p> <pre><code>import pandas as pd import zipfile import requests from datetime import date, timedelta url = 'http://data.gdeltproject.org/events/index.html' yesterday = (date.today() - timedelta(days=1)).strftime('%Y%m%d') file_from_url = yesterday + '.export.CSV.zip' with open(file_from_url, &quot;wb&quot;) as f: f.write(resp.content) </code></pre> <p>now I am stuck trying to read the contents</p> <p>I tried readlines, but this did not work</p> <p>Any suggestions how I can read my zipped file</p>
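The per-day file lives under the filename pattern the question already builds (`YYYYMMDD.export.CSV.zip` relative to the events directory, not `index.html`), and the bytes would come from something like `resp = requests.get(base_url + file_from_url)`. The unzip-and-parse part can be sketched offline with the standard library, using an in-memory zip to stand in for `resp.content` (assuming, as GDELT's files are, a tab-delimited CSV with no header):

```python
import csv
import io
import zipfile

# Build a tiny zip in memory to stand in for resp.content.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("20230412.export.CSV", "1\t20230412\tUSA\n2\t20230412\tGBR\n")
payload = buf.getvalue()  # this is what resp.content would hold

# Read straight from memory -- no need to write the zip to disk first.
with zipfile.ZipFile(io.BytesIO(payload)) as zf:
    name = zf.namelist()[0]
    with zf.open(name) as f:
        rows = list(csv.reader(io.TextIOWrapper(f, "utf-8"), delimiter="\t"))

print(rows)  # [['1', '20230412', 'USA'], ['2', '20230412', 'GBR']]
```

With pandas installed, `pd.read_csv(io.BytesIO(payload), compression="zip", sep="\t", header=None)` collapses both steps into one call.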
<python><pandas><dataframe><download><zip>
2023-04-13 11:40:59
1
3,810
frank
76,004,898
5,547,553
How to convert polars dataframe column type from float64 to int64?
<p>I have a polars dataframe, like:</p> <pre><code>import polars as pl df = pl.DataFrame({&quot;foo&quot;: [1.0, 2.0, 3.0], &quot;bar&quot;: [11, 5, 8]}) </code></pre> <p>How do I convert the first column to int64 type?<br> I was trying something like:</p> <pre><code>df.select(pl.col('foo')) = df.select(pl.col('foo')).cast(pl.Int64) </code></pre> <p>but it is not working.<br> In Pandas it was super easy:</p> <pre><code>df['foo'] = df['foo'].astype('int64') </code></pre> <p>Thanks.</p>
<python><dataframe><python-polars>
2023-04-13 11:29:46
1
1,174
lmocsi
76,004,759
14,729,820
How to push an HF model to the Hub
<p>I have a related issue; can anyone solve it? The <a href="https://discuss.huggingface.co/t/typeerror-repository-init-got-an-unexpected-keyword-argument-token/33458/4" rel="nofollow noreferrer">problem is described well here</a>.</p> <p>I am expecting to push the model, not only the tokenizer and processor, as here: <a href="https://huggingface.co/AlhitawiMohammed22/test_pushing_to_hu/tree/main" rel="nofollow noreferrer">link</a></p>
<python><deep-learning><nlp><huggingface-transformers><hub>
2023-04-13 11:12:58
1
366
Mohammed
76,004,670
3,645,510
Why does adding an object to an attribute of another object of the same class have the side effect of adding it to both objects in Python?
<p>It's easier to explain my issue with an example so here it goes:</p> <pre><code>#!/usr/bin/env python3 class Node: _name: str = None _parents = [] def __init__(self, name: str): self._name = name def add_parent(self, node): self._parents.append(node) if __name__ == &quot;__main__&quot;: foo = Node(&quot;foo&quot;) bar = Node(&quot;bar&quot;) foo.add_parent(bar) print(&quot;= foo&quot;) print(foo._parents) print(list(map(lambda n: n._name, foo._parents))) print(&quot;= bar&quot;) print(bar._parents) print(list(map(lambda n: n._name, bar._parents))) </code></pre> <p>and this is the result:</p> <pre><code>= foo [&lt;__main__.Node object at 0x7fdd66d9ac90&gt;] ['bar'] = bar [&lt;__main__.Node object at 0x7fdd66d9ac90&gt;] ['bar'] </code></pre> <p>why is the <code>bar</code> object added to itself too?</p> <p>I would expect it to be added only to foo._parents so to me this is a side effect maybe even a bug in Python.</p>
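This is the well-known mutable-class-attribute pitfall rather than a Python bug: `_parents = []` in the class body creates one list object shared by every instance, and `add_parent` mutates that shared list. Giving each instance its own list in `__init__` sketches the fix:

```python
class Node:
    def __init__(self, name: str):
        self._name = name
        self._parents = []  # a fresh list per instance, not shared via the class

    def add_parent(self, node: "Node") -> None:
        self._parents.append(node)

foo = Node("foo")
bar = Node("bar")
foo.add_parent(bar)

print([n._name for n in foo._parents])  # ['bar']
print(bar._parents)                     # []
```

The original `self._name = name` only appeared to behave differently because assignment creates a new instance attribute, while `append` mutates the existing (class-level) object.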
<python><oop>
2023-04-13 11:03:52
2
390
b1zzu
76,004,593
12,243,638
How to reverse year-over-year change to fill the NaN values?
<p>I have a dataframe; the <code>Value Col</code> ends at 2022-12-31.</p> <pre><code> Value Col Factor 2022-01-31 0.021 5% 2022-02-28 0.020 4% 2022-03-31 0.019 3% 2022-04-30 0.018 2% 2022-05-31 0.017 9% 2022-06-30 0.016 7% 2022-07-31 0.015 7% 2022-08-31 0.014 5% 2022-09-30 0.013 -6% 2022-10-31 0.018 4% 2022-11-30 0.020 -8% 2022-12-31 0.015 7% 2023-01-31 NaN 5% 2023-02-28 NaN 4% 2023-03-31 NaN 3% 2023-04-30 NaN 4% 2023-05-31 NaN 9% 2023-06-30 NaN -6% 2023-07-31 NaN 7% 2023-08-31 NaN 5% 2023-09-30 NaN 6% 2023-10-31 NaN -4% 2023-11-30 NaN 2% 2023-12-31 NaN 1% 2024-01-31 NaN 5% 2024-02-28 NaN 4% 2024-03-31 NaN 6% 2024-04-30 NaN 2% 2024-05-31 NaN -9% 2024-06-30 NaN 8% 2024-07-31 NaN 6% 2024-08-31 NaN -7% 2024-09-30 NaN 6% 2024-10-31 NaN 4% 2024-11-30 NaN 2% 2024-12-31 NaN -1% </code></pre> <p>And there is a <code>Factor</code> column which shows by what percentage each NaN value should be filled relative to the same month of the previous year. For example, df.loc['2023-04-30', 'Value Col'] should be 0.01872 (the value on <strong>2022</strong>-04-30 is 0.018 and the factor on <strong>2023</strong>-04-30 is 4%, so 0.018 + 0.018*4% = 0.01872).</p> <p>It seems to me a reverse of the <code>pct_change()</code> function of pandas, but I could not figure out how to solve it. Any hint or suggestion will be appreciated.</p>
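One hedged sketch of the mechanics: because each filled 2023 value feeds the 2024 fill for the same month, a single vectorized `shift(12)` can't do it in one pass, but a simple loop over the NaN rows can. The toy frame below keeps one row per year for the same month (April), so the offset is 1; with the full monthly data above it would be 12.

```python
import pandas as pd

# One April row per year; with full monthly data the offset would be 12.
df = pd.DataFrame(
    {"Value Col": [0.018, None, None], "Factor": [0.02, 0.04, 0.02]},
    index=pd.to_datetime(["2022-04-30", "2023-04-30", "2024-04-30"]),
)
offset = 1
col = df.columns.get_loc("Value Col")

for i in range(len(df)):
    if pd.isna(df["Value Col"].iat[i]):
        # Same month, previous year -- possibly a value filled on an earlier pass.
        prev = df["Value Col"].iat[i - offset]
        df.iat[i, col] = prev * (1 + df["Factor"].iat[i])

print(df["Value Col"].round(5).tolist())  # [0.018, 0.01872, 0.01909]
```

The 0.01872 matches the worked example in the question (0.018 grown by 4%), and 2024 is then grown from the filled 2023 value.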
<python><pandas>
2023-04-13 10:56:05
2
500
EMT
76,004,579
15,358,800
Pandas shift index ignoring passed shift object
<p>Let's say I've a df like this</p> <pre><code>import pandas as pd df = pd.date_range('2023-04-01', '2023-05-01') frequency = df.shift(freq='W') print(frequency) </code></pre> <p>In the output, the shifted index has <code>freq</code> as <code>None</code></p> <pre><code>DatetimeIndex(['2023-04-02', '2023-04-09', '2023-04-09', '2023-04-09', '2023-04-09', '2023-04-09', '2023-04-09', '2023-04-09', '2023-04-16', '2023-04-16', '2023-04-16', '2023-04-16', '2023-04-16', '2023-04-16', '2023-04-16', '2023-04-23', '2023-04-23', '2023-04-23', '2023-04-23', '2023-04-23', '2023-04-23', '2023-04-23', '2023-04-30', '2023-04-30', '2023-04-30', '2023-04-30', '2023-04-30', '2023-04-30', '2023-04-30', '2023-05-07', '2023-05-07'], dtype='datetime64[ns]', freq=None) &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;--------Here------&lt;&lt;&lt;&lt;&lt; </code></pre> <p>According to the <a href="https://pandas.pydata.org/pandas-docs/version/0.9.1/timeseries.html#offset-aliases" rel="nofollow noreferrer">documentation</a>, <code>W</code> stands for week.</p> <p>Am I missing anything here? I was looking for a quick fix. Is there an alternate way to do it?</p> <pre><code>Version: 1.4.2 </code></pre> <p><a href="https://i.sstatic.net/2akOD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2akOD.png" alt="enter image description here" /></a></p>
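A hedged reading of what's happening: `shift(freq='W')` rolls each date forward to the next weekly anchor (Sunday by default), so several days collapse onto the same date; the result is no longer evenly spaced and pandas therefore cannot attach a `freq` to it. `freq=None` describes the irregular output, it isn't a failed shift. A small contrast:

```python
import pandas as pd

idx = pd.date_range("2023-04-01", "2023-04-05")  # daily index, freq='D'

# Rolls each date to the next weekly (Sunday) anchor; duplicates appear,
# so the result cannot carry a regular freq.
rolled = idx.shift(freq="W")
print(rolled.freq)  # None
print(rolled[:2])   # 2023-04-02 (Sat -> Sun), 2023-04-09 (Sun -> next Sun)

# To move every date by exactly one week (keeping the regular daily spacing),
# add a Timedelta instead:
moved = idx + pd.Timedelta(weeks=1)
print(moved[0])     # 2023-04-08
```

Whether the anchor-rolling or the plain seven-day offset is wanted depends on the use case; the question's output suggests the latter was expected.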
<python><pandas><datetime>
2023-04-13 10:54:21
1
4,891
Bhargav
76,004,515
12,236,313
Python-polars: from coefficients, values and (nested) lists to weighted values
<p>Let's say I've got a Polars DataFrame similar to this one:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl from decimal import Decimal df = pl.from_dicts( [ {&quot;id&quot;: 1, &quot;value&quot;: Decimal(&quot;100.0&quot;), &quot;items&quot;: [&quot;A&quot;]}, {&quot;id&quot;: 2, &quot;value&quot;: Decimal(&quot;150.000&quot;), &quot;items&quot;: [&quot;A&quot;, &quot;B&quot;]}, {&quot;id&quot;: 3, &quot;value&quot;: Decimal(&quot;70.0000&quot;), &quot;items&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;]}, ] ) </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>id <em>β†’ pl.Int64</em></th> <th>value <em>β†’ pl.Decimal</em></th> <th>items <em>β†’ pl.List(str)</em></th> </tr> </thead> <tbody> <tr> <td>1</td> <td>100</td> <td><code>[&quot;A&quot;]</code></td> </tr> <tr> <td>2</td> <td>150</td> <td><code>[&quot;A&quot;, &quot;B&quot;]</code></td> </tr> <tr> <td>3</td> <td>70</td> <td><code>[&quot;A&quot;, &quot;B&quot;, &quot;C&quot;]</code></td> </tr> </tbody> </table></div> <p>And the following Python dictionary:</p> <pre class="lang-py prettyprint-override"><code>coef = {&quot;A&quot;: Decimal(&quot;0.2&quot;), &quot;B&quot;: Decimal(&quot;0.35&quot;), &quot;C&quot;: Decimal(&quot;0.45&quot;) } </code></pre> <p>From here, how to <strong>get the following DataFrame</strong> in an efficient manner <strong>using Polars</strong>?</p> <pre><code>β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ value ┆ item β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ decimal[*,10] ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ════════════════β•ͺ══════║ β”‚ 1 ┆ 100.0000000000 ┆ A β”‚ β”‚ 2 ┆ 54.5454000000 ┆ A β”‚ β”‚ 2 ┆ 95.4544500000 ┆ B β”‚ β”‚ 3 ┆ 14.0000000000 ┆ A β”‚ β”‚ 3 ┆ 24.5000000000 ┆ B β”‚ β”‚ 3 ┆ 31.5000000000 ┆ C β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>In this example, <code>54.5454...</code> for 
instance corresponds to <code>0.2 / (0.2 + 0.35) * 150</code>.</p> <p>Note that the length of the <code>coef</code> dictionary isn't always the same. It basically looks like <code>{key1: coef1, key2: coef2, ... }</code> with <code>n</code> keys and <code>n</code> values (whose sum is equal to <code>Decimal(1)</code>).</p> <p>Also, I'm working with <code>Decimal</code> (not <code>float64</code> values).</p>
<python><python-polars>
2023-04-13 10:45:12
1
1,030
scΕ«riolus
76,004,438
12,783,363
How to detect words in text given a set of words (almost similar) in database?
<p>Assuming we have a database containing the following words:</p> <pre><code>Chicken Wings Chicken Chicken Nuggets Super Chicken Nuggets </code></pre> <p>And the text:</p> <pre><code>Instruction: Add super chicken nuggets and chicken wings to the salad </code></pre> <p>What would the logic/algorithm be if we want to extract &quot;Super Chicken Nuggets&quot; and &quot;Chicken Wings&quot; from the text, but not &quot;Chicken&quot; nor &quot;Chicken Nuggets&quot;?</p> <p>I can't express the exact rule as I'm confused myself, but since &quot;Super Chicken Nuggets&quot; and &quot;Chicken Wings&quot; are part of the text, the two should be considered. &quot;Chicken&quot; and &quot;Chicken Nuggets&quot; also exist in the text, but really those two are just subsets of &quot;Super Chicken Nuggets&quot; and &quot;Chicken Wings&quot;.</p> <p>But if we have a text like <code>Add super chicken nuggets and chicken to the salad</code>, then &quot;Super Chicken Nuggets&quot; and &quot;Chicken&quot; should be extracted, but not &quot;Chicken Nuggets&quot;.</p> <p>PS: For simplicity, we may ignore the case.</p>
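The rule being groped for here is longest-match-first: at each position in the text, prefer the vocabulary entry spanning the most words, then skip past what matched so shorter entries can't re-match inside it. A greedy sketch (ignoring punctuation and case, as the question allows):

```python
def extract_terms(text: str, vocab: list[str]) -> list[str]:
    """Greedy longest-match-first scan over the tokenized text."""
    words = text.lower().split()
    # Try multi-word entries before their shorter prefixes/subsets.
    entries = sorted((v.lower().split() for v in vocab), key=len, reverse=True)
    found, i = [], 0
    while i < len(words):
        for entry in entries:
            if words[i:i + len(entry)] == entry:
                found.append(" ".join(entry))
                i += len(entry)  # consume the matched span
                break
        else:
            i += 1  # no vocabulary entry starts here; advance one word
    return found

vocab = ["Chicken Wings", "Chicken", "Chicken Nuggets", "Super Chicken Nuggets"]
print(extract_terms("Add super chicken nuggets and chicken wings to the salad", vocab))
# ['super chicken nuggets', 'chicken wings']
print(extract_terms("Add super chicken nuggets and chicken to the salad", vocab))
# ['super chicken nuggets', 'chicken']
```

For a large vocabulary the same idea is usually implemented with a trie or the Aho-Corasick algorithm instead of a linear scan over entries.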
<python><algorithm><nlp>
2023-04-13 10:35:39
2
916
Jobo Fernandez
76,004,164
12,965,658
Merge dataframes having array
<p>I have two data frames.</p> <h1>DF1</h1> <pre><code>isActive,trackedSearchId True,53436615 True,53434228 True,53434229 </code></pre> <h1>DF2</h1> <pre><code>trackedSearchIds,Primary Keyword Group(s) &quot;[53436613, 53436615, 53437436, 53436506]&quot;,SEO - Directory-Deployment &quot;[53435887, 53437509, 53437441, 53436615, 53438685, 53437636]&quot;,SEO - Other-Glossary &quot;[53437504, 53435090, 53435887, 53434228]&quot;,SEO - Other &quot;[53437504, 53435090, 53434229]&quot;,SEO - Glossary </code></pre> <p>For each row of DF1, I want to take the <code>trackedSearchId</code> value and check every row of DF2's <code>trackedSearchIds</code> array; whenever the DF1 value is present there, append DF2's keyword group to that DF1 row.</p> <h1>The output should look like:</h1> <pre><code>isActive,trackedSearchId,Primary Keyword Group(s) True,53436615,SEO - Directory-Deployment&amp;SEO - Other-Glossary True,53434228,SEO - Other True,53434229,SEO - Glossary </code></pre>
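One pandas sketch of the membership join: explode DF2 so there is one row per (group, id), do an ordinary equi-join, then concatenate the matched group names per id. If the arrays arrive as strings like `"[53436613, ...]"`, `ast.literal_eval` can parse them first; here they're already lists.

```python
import pandas as pd

df1 = pd.DataFrame(
    {"isActive": [True, True, True], "trackedSearchId": [53436615, 53434228, 53434229]}
)
df2 = pd.DataFrame(
    {
        "trackedSearchIds": [
            [53436613, 53436615, 53437436, 53436506],
            [53435887, 53437509, 53437441, 53436615, 53438685, 53437636],
            [53437504, 53435090, 53435887, 53434228],
            [53437504, 53435090, 53434229],
        ],
        "Primary Keyword Group(s)": [
            "SEO - Directory-Deployment",
            "SEO - Other-Glossary",
            "SEO - Other",
            "SEO - Glossary",
        ],
    }
)

# One row per (group, id); cast back to int64 so the merge keys line up.
exploded = df2.explode("trackedSearchIds")
exploded["trackedSearchIds"] = exploded["trackedSearchIds"].astype("int64")

out = (
    df1.merge(exploded, left_on="trackedSearchId", right_on="trackedSearchIds")
    .groupby(["isActive", "trackedSearchId"], as_index=False)["Primary Keyword Group(s)"]
    .agg(lambda groups: "&".join(groups))
)
print(out)
```

An id matching several DF2 rows (like 53436615) ends up with its group names joined by `&`, as in the expected output.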
<python><python-3.x><pandas><dataframe><python-2.7>
2023-04-13 10:03:56
2
909
Avenger
76,003,924
3,377,926
Python typing - TypeVar for something that inherits from multiple base classes
<p>Python's <a href="https://docs.python.org/3/library/typing.html#typing.TypeVar" rel="nofollow noreferrer"><code>TypeVar</code></a> allows setting a <code>bound</code> for a type, like so</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar class A: pass T = TypeVar(&quot;T&quot;, bound=A) def foo(_: T) -&gt; None: pass class B(A): pass class C: pass foo(B()) # OK foo(C()) # error: Value of type variable &quot;T&quot; of &quot;foo&quot; cannot be &quot;C&quot; </code></pre> <p>My question is: how can I express &quot;any type T where T inherits from X <em>and</em> Y&quot;?</p> <p>This was my first naive attempt:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar class X: pass class Y: pass class XY(X, Y): pass T = TypeVar(&quot;T&quot;, bound=XY) def foo(_: T) -&gt; None: pass class MyClass(X, Y): pass foo(MyClass()) # error: Value of type variable &quot;T&quot; of &quot;foo&quot; cannot be &quot;MyClass&quot; </code></pre> <p>This doesn't work, because <code>MyClass</code> isn't a subclass of <code>XY</code>, and mypy correctly rejects the code.</p> <p>In this minimal example I could do <code>class MyClass(XY)</code>, but this isn't a good solution for the actual code I'm working with. Consider what happens if there are classes <code>X1</code>, <code>X2</code>, ..., <code>Xn</code>, and I have functions that expect types that inherit from some subset of these. Making a class for each combination would be infeasible, as <code>MyClass</code> would need to inherit from each &quot;combo-class&quot;.</p> <p>Essentially I'm looking for something equivalent to this fictional code:</p> <pre class="lang-py prettyprint-override"><code>T = TypeVar(&quot;T&quot;, bound=(X, Y)) # Not allowed def foo(x: T): assert isinstance(x, X) assert isinstance(x, Y) </code></pre> <p>Any thoughts on how this could be done, or on how to achieve something to this effect?</p>
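For context: `typing` currently has no intersection type, which is exactly what `bound=(X, Y)` would be; an explicit `Intersection` has been discussed in the typing community but is not in the language. The usual workaround is structural: declare a `Protocol` carrying the members you need from each base and bind to that. A sketch with illustrative method names (the question's empty `X`/`Y` have no members, so the real protocols would list whatever `foo` actually uses):

```python
from typing import Protocol, TypeVar

class SupportsX(Protocol):
    def x_method(self) -> int: ...

class SupportsY(Protocol):
    def y_method(self) -> int: ...

# A protocol may inherit from other protocols, acting as an "intersection".
class SupportsXY(SupportsX, SupportsY, Protocol): ...

T = TypeVar("T", bound=SupportsXY)

def foo(obj: T) -> int:
    return obj.x_method() + obj.y_method()

class MyClass:  # no nominal relationship needed; matching structure suffices
    def x_method(self) -> int:
        return 1

    def y_method(self) -> int:
        return 2

print(foo(MyClass()))  # 3
```

Because matching is structural, any class providing both members type-checks against `SupportsXY` without inheriting from a combo-class, which sidesteps the combinatorial explosion described above.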
<python><mypy><python-typing>
2023-04-13 09:36:55
1
1,167
Bendik
76,003,889
497,517
Adding Dates to the X axis on my graph breaks it
<p>I have a CSV file that stores an upload and download speed of my Internet connection every hour. The columns are Date, Time, Download, Upload &amp; ping. see below...</p> <blockquote> <pre><code>230302,2305,835.89,109.91,11.46 230303,0005,217.97,109.58,5.222 230303,0105,790.61,111.41,5.191 230303,0205,724.59,109.23,9.259 230303,0305,820.04,111.06,4.376 </code></pre> </blockquote> <p>I can display the data fine when I just use 0-x on the x axis like the example below: <a href="https://i.sstatic.net/YULEK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YULEK.png" alt="V1 example" /></a></p> <p>However I want to display dates on the x Axis and when I do this I get this result: <a href="https://i.sstatic.net/3Vftp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Vftp.png" alt="V2 Example" /></a></p> <p>What am I doing wrong here?</p> <pre><code>import csv import matplotlib.pyplot as plt from datetime import datetime filename = 'C:/Users/tim/Documents/p5e/Internet_Speed_Tests.csv' with open(filename) as f: reader = csv.reader(f) dates = [] times = [] downs = [] ups = [] pings = [] for row in reader: #date = int(row[0]) current_date = datetime.strptime(row[0],'%y%m%d') time = int(row[1]) #current_time = datetime.strptime(row[1],'%H%m') #dateandtime = current_date + current_time down = float(row[2]) up = float(row[3]) ping = float(row[4]) dates.append(current_date) #times.append(current_time) downs.append(down) ups.append(up) pings.append(ping) fig, ax = plt.subplots() ax.set_title(&quot;Internet Speed&quot;, fontsize=24) ax.set_ylabel(&quot;Speed&quot;, fontsize=16) ax.set_xlabel('date', fontsize=16) fig.autofmt_xdate() ax.plot(dates, downs) ax.plot(dates, ups) plt.show() </code></pre>
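A likely culprit, hedged: the data is hourly but only the date column is parsed, so all samples from one day share a single x position and the line doubles back vertically, producing the tangle in the second screenshot. (Also, in the commented-out line, `'%H%m'` would parse hour + *month*; minute is `'%M'`.) Parsing date and time together gives strictly increasing x values:

```python
from datetime import datetime

def parse_timestamp(row):
    # Combine YYMMDD and HHMM into one datetime; note %M (minute), not %m (month).
    return datetime.strptime(row[0] + row[1], "%y%m%d%H%M")

row = ["230302", "2305", "835.89", "109.91", "11.46"]
print(parse_timestamp(row))  # 2023-03-02 23:05:00
```

The combined timestamps can then be appended to one list and passed to `ax.plot(...)` in place of `dates`; matplotlib handles datetime x axes natively, so the rest of the plotting code should need no changes.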
<python><matplotlib>
2023-04-13 09:32:59
3
7,957
Entropy1024
76,003,851
9,827,719
Python Weasyprint to Google Bucket
<p>I am using Google Functions in order to generate PDFs.</p> <p>I want to store the PDFs in a Google Bucket.</p> <p>I know that I can store PDFs as a file using the following code:</p> <pre><code># Write PDF to HTML pdf = &quot;&lt;html&gt;&lt;title&gt;Hello&lt;/title&gt;&lt;body&gt;&lt;p&gt;Hi!&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;&quot; # HTML to PDF at local disk document = weasyprint.HTML(string=pdf, encoding='UTF-8') document.write_pdf(f&quot;Hello.pdf&quot;) </code></pre> <p>However I want to store it in a Google Bucket, so I have tried the following code :</p> <pre><code># Write PDF to HTML pdf = &quot;&lt;html&gt;&lt;title&gt;Hello&lt;/title&gt;&lt;body&gt;&lt;p&gt;Hi!&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;&quot; # HTML to PDF in Google Bucket document = weasyprint.HTML(string=pdf, encoding='UTF-8') client = storage.Client() bucket = client.get_bucket(&quot;monthly-customer-reports&quot;) blob = bucket.blob(&quot;Hello.pdf&quot;) with blob.open(&quot;w&quot;) as f: f.write(str(document)) </code></pre> <p>This stored a PDF in my Google Bucket but it was invalid.</p>
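A possible fix, sketched as a helper (the function name is mine): `str(document)` stringifies the weasyprint object rather than rendering it, and a text-mode `blob.open("w")` mangles binary data. `write_pdf()` called with no target returns the raw PDF bytes, which can go straight into `Blob.upload_from_string` (which accepts bytes despite its name).

```python
def upload_pdf(document, bucket, blob_name):
    # write_pdf() with no target returns the raw PDF bytes;
    # str(document) only stringifies the HTML object, producing an invalid file
    pdf_bytes = document.write_pdf()
    blob = bucket.blob(blob_name)
    # upload the bytes directly; a text-mode blob.open("w") corrupts binary data
    blob.upload_from_string(pdf_bytes, content_type="application/pdf")
    return pdf_bytes
```

Usage would look like `upload_pdf(weasyprint.HTML(string=pdf, encoding='UTF-8'), client.get_bucket("monthly-customer-reports"), "Hello.pdf")`.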
<python><weasyprint><google-bucket>
2023-04-13 09:29:10
1
1,400
Europa
76,003,814
52,917
How to run multiple discord.py bots concurrently with different bot tokens?
<p>I'm trying to create a Discord bot application using discord.py, where I need to run 5 different bots concurrently. I have all the bot tokens stored in a list variable named <code>BOT_TOKENS</code>.</p> <p>I've set up my bot instances and event handlers, but I'm unsure about how to run all bots concurrently using their respective tokens. I'm aware that the <code>bot.run(token)</code> method is blocking, so I can't simply call it multiple times sequentially. I've been trying to use <code>asyncio</code> to run the bots concurrently, but I'm having difficulties getting it to work correctly.</p> <p>Can anyone provide an example or guidance on how to achieve running discord.py bots concurrently with different tokens from a list?</p> <p>Here's the basic structure of my bot setup:</p> <pre class="lang-py prettyprint-override"><code>import discord from discord.ext import commands intents = discord.Intents.default() BOT_TOKENS = [&quot;token1&quot;, &quot;token2&quot;, &quot;token3&quot;, &quot;token4&quot;, &quot;token5&quot;] # Define event handlers and commands for all bots def create_bot(token): bot = commands.Bot(command_prefix=f&quot;!&quot;, intents=intents) @bot.event async def on_ready(): print(f&quot;{bot.user.name} has connected to Discord!&quot;) return bot bots = [create_bot(token) for token in BOT_TOKENS] # I need help with running all 5 bots concurrently here. ### cannot call bots[0].run() as this is blocking </code></pre> <p>Any help would be appreciated. Thanks!</p>
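One commonly used approach (a hedged sketch, not tested against a live Discord gateway): `bot.run(token)` is just a blocking convenience wrapper, while `bot.start(token)` is awaitable, so a single event loop can drive all five clients with `asyncio.gather`.

```python
import asyncio

async def run_bots(bots, tokens):
    # bot.start(token) is the awaitable counterpart of the blocking bot.run(token),
    # so one event loop can drive every client at once
    await asyncio.gather(*(bot.start(token) for bot, token in zip(bots, tokens)))

# In the question's setup this would be:
# asyncio.run(run_bots(bots, BOT_TOKENS))
```

Each bot keeps its own event handlers; they just share the loop instead of each claiming it with `run()`.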
<python><discord.py><python-asyncio>
2023-04-13 09:25:12
1
2,449
Aviad Rozenhek
76,003,740
8,849,755
Plotly multiline legend bullets vertical alignment
<p>Consider the following MWE:</p> <pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go fig = go.Figure() for i in range(4): fig.add_trace( go.Scatter( x = [1,2,3], y = [i]*3, name = 'A very long legend&lt;br&gt;which I break in&lt;br&gt;multiple lines' ) ) fig.write_html('deleteme.html') </code></pre> <p>which produces this: <a href="https://i.sstatic.net/1chV7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1chV7.png" alt="enter image description here" /></a></p> <p>Is it possible to change the alignment of the bullets in the legend to produce this instead? (I made it with Paint) <a href="https://i.sstatic.net/EANxY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EANxY.png" alt="enter image description here" /></a></p> <p>In the previous trivial example this looks like a whim, however in more complex cases the alignment of the legend bullets makes it really confusing to know which is which: <a href="https://i.sstatic.net/8n3wL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8n3wL.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-04-13 09:16:35
1
3,245
user171780
76,003,623
6,225,526
How to insert and fill the rows with calculated value in pandas?
<p>I have a pandas dataframe with missing theta steps as below,</p> <pre><code>index name theta r 1 wind 0 10 2 wind 30 17 3 wind 60 19 4 wind 90 14 5 wind 120 17 6 wind 210 18 7 wind 240 17 8 wind 270 11 9 wind 300 13 </code></pre> <p>I need to add the missing theta with values,</p> <pre><code>index name theta r 1 wind 0 10 2 wind 30 17 3 wind 60 19 4 wind 90 14 5 wind 120 17 6 wind 150 null 7 wind 180 null 8 wind 210 18 9 wind 240 17 10 wind 270 11 11 wind 300 13 12 wind 330 null </code></pre> <p>And then fill the null values with linear interpolation. For simplicity here we can consider average of previous and next available value,</p> <pre><code>index name theta r 1 wind 0 10 2 wind 30 17 3 wind 60 19 4 wind 90 14 5 wind 120 17 6 wind 150 17.5 #(17 + 18)/2 7 wind 180 17.5 #(17 + 18)/2 8 wind 210 18 9 wind 240 17 10 wind 270 11 11 wind 300 13 12 wind 330 11.5 #(13 + 10)/2 </code></pre> <p>How can I do this?</p>
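A sketch using `reindex` plus `interpolate`, with two deliberate choices of mine: `method="index"` gives true linear interpolation in theta (so 150Β° comes out as β‰ˆ17.33 rather than the simplified 17.5 average in the example), and a temporary 360Β° point carrying the 0Β° value handles the wrap-around for 330Β°:

```python
import pandas as pd

df = pd.DataFrame({
    "name": "wind",
    "theta": [0, 30, 60, 90, 120, 210, 240, 270, 300],
    "r": [10, 17, 19, 14, 17, 18, 17, 11, 13],
})

full = df.set_index("theta").reindex(range(0, 390, 30))  # grid 0..360 incl. wrap point
full.loc[360, "r"] = full.loc[0, "r"]   # theta is circular: 360 carries the 0 value
full["r"] = full["r"].interpolate(method="index")        # linear in theta
full["name"] = full["name"].fillna("wind")
result = full.drop(index=360).reset_index()              # drop the helper row
```

`result` has the 12 rows 0Β° through 330Β°, with 330Β° interpolated between 300Β° (13) and the wrapped 0Β° value (10), giving 11.5 as in the question.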
<python><pandas>
2023-04-13 09:04:40
2
1,161
Selva
76,003,332
6,737,387
How to visualize segment-anything model with edges?
<p>I'm able to use the segment-anything model and visualize their results but they appear different than the demo online.</p> <p>My results look something like this:</p> <p><a href="https://i.sstatic.net/glc11.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/glc11.jpg" alt="enter image description here" /></a></p> <p>And here's the results from the segment-anything official site:</p> <p><a href="https://i.sstatic.net/1qrMw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1qrMw.jpg" alt="enter image description here" /></a></p> <p>I need something like with the edges around the object, exactly like the official site but I'm unable to find any method or utility that could get the job done. I'm sure I must be missing something.</p> <p>I'm using the following code from their official notebook to show the annotations:</p> <pre><code>def show_anns(anns): if len(anns) == 0: return sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True) ax = plt.gca() ax.set_autoscale_on(False) polygons = [] color = [] for ann in sorted_anns: m = ann['segmentation'] img = np.ones((m.shape[0], m.shape[1], 3)) color_mask = np.random.random((1, 3)).tolist()[0] for i in range(3): img[:,:,i] = color_mask[i] ax.imshow(np.dstack((img, m*0.35))) </code></pre> <p>Does anyone know how to sort this out ?</p>
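The notebook's `show_anns` only fills the masks; the demo site additionally strokes each mask's border. One dependency-free way to get a border (my sketch; `cv2.findContours`/`cv2.drawContours` is the usual heavier-weight alternative) is to mark every mask pixel that touches a non-mask pixel:

```python
import numpy as np

def mask_outline(mask):
    """Pixels of `mask` that touch at least one non-mask neighbour (4-connectivity)."""
    m = np.asarray(mask, dtype=bool)
    interior = m.copy()
    interior[1:, :] &= m[:-1, :]   # neighbour above
    interior[:-1, :] &= m[1:, :]   # neighbour below
    interior[:, 1:] &= m[:, :-1]   # neighbour left
    interior[:, :-1] &= m[:, 1:]   # neighbour right
    return m & ~interior
```

Inside `show_anns`, something like `img[mask_outline(m)] = 1.0` paints the edge pixels white, and using full alpha on those pixels (e.g. `np.where(mask_outline(m), 1.0, m * 0.35)` for the alpha channel) makes the outline opaque like the official demo.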
<python><pytorch><computer-vision><image-segmentation>
2023-04-13 08:33:45
1
2,683
Hisan
76,003,258
1,678,780
Getting codepage / encoding for windows executables called with subprocess.check_output from python
<p>I have a similar issue to <a href="https://stackoverflow.com/questions/21486703/how-to-get-the-output-of-subprocess-check-output-python-module">How to get the output of subprocess.check_output() python module?</a> but the solution there does not work for German Windows.</p> <p>I execute the following script in python 3.10 in wsl2 / ubuntu:</p> <pre class="lang-py prettyprint-override"><code>import subprocess import sys ipconfig = subprocess.check_output([&quot;ipconfig.exe&quot;, &quot;/all&quot;]).decode(sys.stdout.encoding) </code></pre> <p>This leads to</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; UnicodeDecodeError: 'utf-8' codec can't decode byte 0x84 in position 90: invalid start byte </code></pre> <p>The bytes returned from <code>check_output</code> are:</p> <pre><code>b'\r\nWindows-IP-Konfiguration\r\n\r\n Hostname . . . . . . . . . . . . : XXXXXXXXXXXX\r\n Prim\x84res DNS-Suffix . </code></pre> <p>and <code>sys.stdout.encoding</code> is <code>utf-8</code>. But this is wrong, as <code>\x84</code> is invalid under utf-8.</p> <p>The <code>\x84</code> here is an <code>Γ€</code>. According to <a href="https://tripleee.github.io/8bit/#84" rel="nofollow noreferrer">https://tripleee.github.io/8bit/#84</a> this corresponds to e.g. <code>cp850</code> for western european encoding, which makes sense.</p> <p>How can I get the correct encoding programmatically?</p>
<python><windows><character-encoding><subprocess>
2023-04-13 08:25:16
0
1,216
GenError
76,003,183
19,491,471
TensorFlow module is giving errors
<p>I am trying to import some modules but I get errors back. These are what I am trying to import and install:</p> <pre><code>%pip install pandas %pip install numpy %pip install requests %pip install beautifulsoup4 %pip install tensorflow import requests import pandas import numpy import requests from bs4 import BeautifulSoup import tensorflow </code></pre> <p>after running the code, what I get is:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[2], line 12 10 import requests 11 from bs4 import BeautifulSoup ---&gt; 12 import tensorflow 13 import keras File ~/opt/anaconda3/lib/python3.9/site-packages/tensorflow/__init__.py:41 38 import six as _six 39 import sys as _sys ---&gt; 41 from tensorflow.python.tools import module_util as _module_util 42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader 44 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import. File ~/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/__init__.py:40 31 import traceback 33 # We aim to keep this file minimal and ideally remove completely. 34 # If you are adding a new file with @tf_export decorators, 35 # import it in modules_with_exports.py instead. 36 37 # go/tf-wildcard-import 38 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top ---&gt; 40 from tensorflow.python.eager import context 41 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow ... If you cannot immediately regenerate your protos, some other possible workarounds are: 1. Downgrade the protobuf package to 3.20.x or lower. 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower). More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates </code></pre> <p>I am using python 3.9.13 - TensorFlow is 2.12.0, conda 23.3.1 and anaconda custom py29_1</p>
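The truncated traceback ends in the well-known protobuf β‰₯ 4 incompatibility, and its own message lists the fixes: either downgrade (`pip install "protobuf<3.21"`) or switch protobuf to its pure-Python parser via an environment variable. The latter, set from Python before TensorFlow is first imported (slower, but needs no reinstall):

```python
import os

# Must run before the first `import tensorflow`; this selects the pure-Python
# protobuf parser named as workaround 2 in the error message.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
```

In a notebook, restart the kernel first so the variable is set before TensorFlow's initial import.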
<python><tensorflow><jupyter-notebook><jupyter>
2023-04-13 08:17:24
0
327
Amin
76,003,126
726,156
ModuleNotFoundError for 'sklearn' as subdependency of numpy
<p>I am using Docker combined with virtualenv to run a project for a client, but getting the error ModuleNotFound for sklearn.</p> <p>In my Pipfile I have added the numpy dependency</p> <pre><code>numpy = &quot;==1.21.6&quot; </code></pre> <p>The error is thrown from the following line</p> <pre><code>np.load(PATH_TO_NPY_FILE, allow_pickle=True) </code></pre> <p>with the following stack trace:</p> <pre><code>development_1 | File &quot;/root/.local/share/virtualenvs/code-_Py8Si6I/lib/python3.7/site-packages/numpy/lib/npyio.py&quot;, line 441, in load development_1 | pickle_kwargs=pickle_kwargs) development_1 | File &quot;/root/.local/share/virtualenvs/code-_Py8Si6I/lib/python3.7/site-packages/numpy/lib/format.py&quot;, line 748, in read_array development_1 | array = pickle.load(fp, **pickle_kwargs) development_1 | ModuleNotFoundError: No module named 'sklearn' </code></pre> <p>I find this strange, because <code>sklearn</code> should be installed as part of the numpy dependency tree, right?</p> <p>Still I tried the suggestions I found in <a href="https://stackoverflow.com/questions/46113732/modulenotfounderror-no-module-named-sklearn">other posts</a>, like adding the following command explicitly to my Dockerfile</p> <pre><code>python -m pip install scikit-learn scipy matplotlib </code></pre> <p>However, the error still persists.</p> <p>For completeness, I'll provide some extra info below, although the key question remains why installing numpy does not imply its sub dependencies to be in place.</p> <hr /> <p><strong>Project structure</strong></p> <p>The project is sort of a bridge between SQS on one hand and the logic of the client on the other. The code from which the error is thrown comes from a git submodule and the Pipfile is added on the top-level repo. The submodule does not contain a Pipfile. The submodules folder has an <code>__init__.py</code> file because it contains functions that I want to use in my src code. 
In the tree below, my code is in <code>main.py</code> and the error-throwing code is in <code>submodules/module2/bar.py</code>.</p> <pre><code>|- src/ | |- main.py | |- submodules/ | |- module1 | | |- foo.py | | |- setup.py | | | |- module2 | | |- bar.py | | | |- __init__.py | |- .gitmodules |- Pipfile |- Dockerfile </code></pre> <p><strong>Dockerfile contents</strong></p> <p>Note that at this point, it is a bit of an aggregate of solutions I took from the <a href="https://stackoverflow.com/questions/46113732/modulenotfounderror-no-module-named-sklearn">other post on the matter</a>. That's why both <code>pip install scikit-learn</code> and <code>apt-get install python3-sklearn</code> are currently included. I will prune these later once this issue is fixed.</p> <pre><code>FROM python:3.7 WORKDIR code/ COPY Pipfile . COPY submodules/ submodules/ RUN pip install pipenv &amp;&amp; \ pipenv install --deploy &amp;&amp; \ python -m pip install scikit-learn scipy matplotlib &amp;&amp; \ apt-get update &amp;&amp; \ apt-get install -y locales ffmpeg libsm6 libxext6 libxrender-dev python3-sklearn &amp;&amp; \ sed -i -e 's/# nl_BE.UTF-8 UTF-8/nl_BE.UTF-8 UTF-8/' /etc/locale.gen &amp;&amp; \ dpkg-reconfigure --frontend=noninteractive locales ENV LANG nl_BE.UTF-8 ENV LC_ALL nl_BE.UTF-8 COPY .env . COPY src/ . COPY data/ data CMD [ &quot;pipenv&quot;, &quot;run&quot;, &quot;python&quot;, &quot;main.py&quot; ] </code></pre> <p><strong>Pipfile contents</strong></p> <pre><code>[[source]] url = &quot;https://pypi.org/simple&quot; verify_ssl = true name = &quot;pypi&quot; [packages] python-dotenv = &quot;*&quot; boto3 = &quot;*&quot; pySqsListener = &quot;*&quot; xpress = &quot;==9.0.5&quot; module1 = {path = &quot;./submodules/module1&quot;} pandas = &quot;==1.3.4&quot; numpy = &quot;==1.21.6&quot; [dev-packages] [requires] python_version = &quot;3.7&quot; </code></pre>
<python><python-3.x><docker><numpy><scikit-learn>
2023-04-13 08:11:05
1
1,813
gleerman
76,002,994
13,158,157
pyspark fill values with join instead of isin
<p>I want to fill a <a href="/questions/tagged/pyspark" class="post-tag" title="show questions tagged &#39;pyspark&#39;" aria-label="show questions tagged &#39;pyspark&#39;" rel="tag" aria-labelledby="tag-pyspark-tooltip-container">pyspark</a> dataframe on rows where several column values are found in other dataframe columns, but I cannot use <code>.collect().distinct()</code> and <code>.isin()</code> since it takes a long time compared to a join. How can I use join or broadcast when filling values conditionally? In <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged &#39;pandas&#39;" aria-label="show questions tagged &#39;pandas&#39;" rel="tag" aria-labelledby="tag-pandas-tooltip-container">pandas</a> I would do:</p> <pre><code>df.loc[(df.A.isin(df2.A)) | (df.B.isin(df2.B)), 'new_column'] = 'new_value' </code></pre> <p>UPD: so far I tried this approach in pyspark, but it did not work correctly, judging by <code>.count()</code> before and after (the row count artificially decreases)</p> <pre><code>count_first = df.count() dfA_1 = df.join(df2, 'A', 'leftanti') \ .withColumn('new_column', F.lit(None).cast(StringType())) dfA_2 = df.join(df2, 'A', 'inner') \ .withColumn('new_column', F.lit('new_value')) df = dfA_1.unionByName(dfA_2) count_second = df.count() count_first - count_second </code></pre> <p>How can I achieve the same in <a href="/questions/tagged/pyspark" class="post-tag" title="show questions tagged &#39;pyspark&#39;" aria-label="show questions tagged &#39;pyspark&#39;" rel="tag" aria-labelledby="tag-pyspark-tooltip-container">pyspark</a> but with a join?</p>
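The row count shrinks in the leftanti/inner version whenever `df2.A` contains duplicates or nulls. A single left join against *deduplicated* keys plus a conditional fill avoids that. Sketched here with pandas so it runs standalone; in pyspark the same shape is `df.join(keys, "A", "left")` followed by `F.when(F.col("_hit") == 1, F.lit("new_value"))` (for the OR over `A` and `B`, repeat with a second keys frame and combine the flags):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 3], "B": ["x", "y", "z", "z"]})
df2 = pd.DataFrame({"A": [2, 3, 3]})

# Deduplicate the lookup keys first, otherwise the join multiplies matching rows.
keys = df2[["A"]].drop_duplicates().assign(_hit=1)

out = df.merge(keys, on="A", how="left")
out["new_column"] = np.where(out["_hit"].eq(1), "new_value", None)
out = out.drop(columns="_hit")

assert len(out) == len(df)  # row count preserved, unlike the leftanti/inner union
```

The key point is the `drop_duplicates()` on the join keys; without it, the inner-join half of the original approach fans out on duplicate `A` values and the union no longer matches the input row count.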
<python><pyspark>
2023-04-13 07:57:15
1
525
euh
76,002,781
4,791,603
Looping through pandas dataframe to get highest possible total score of football players with unique clubs
<p>I have a dataframe with football players with their <em>team name, position <em>and</em> total score</em> based on a bunch of factors during the games they played. I want to create a team with the highest total score for different line-ups (4-3-3, 4-4-2, etc.) but the clubs can only occur for 1 player in that team. So if I got a goalkeeper from club X then I can't have a defender from club X. Right now I am able to create a team for a 4-3-3 line-up (4 defenders, 3 midfielders, 3 attackers) but there are duplicate teams in there. ( see screenshot ). Also, if the total scores of 2 players are equal then the minutes_played should make the difference. Does anyone have an idea on how to start building this logic?</p> <p><em><strong>(I am not sure if this post is clear enough and acceptable for stackoverflow. If not, please tell me how to improve or what to provide to make it acceptable)</strong></em></p> <p>All help is highly appreciated!</p> <p><a href="https://i.sstatic.net/Qsr4L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qsr4L.png" alt="enter image description here" /></a></p>
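A starting point for the logic (toy data of mine, brute force): enumerate candidate teams, reject any with a repeated club or the wrong position counts, and keep the best total. This illustrates the constraints, but it explodes combinatorially for an 11-player squad over a whole league; there, an integer program (e.g. with `pulp`: one binary variable per player, constraints per position and per club, maximize total score) is the standard tool. The minutes-played tie-break can be handled by comparing `(score, minutes)` tuples instead of bare scores.

```python
from itertools import combinations

# (name, position, club, score) -- made-up rows; real data comes from the dataframe
players = [
    ("gk1", "GK", "A", 7.0), ("gk2", "GK", "B", 6.5),
    ("d1", "DEF", "A", 8.0), ("d2", "DEF", "C", 7.5), ("d3", "DEF", "D", 6.0),
    ("f1", "FWD", "B", 9.0), ("f2", "FWD", "E", 8.5),
]

def best_team(players, formation):
    best, best_score = None, float("-inf")
    size = sum(formation.values())
    for team in combinations(players, size):
        clubs = [p[2] for p in team]
        if len(set(clubs)) != len(clubs):     # at most one player per club
            continue
        counts = {}
        for _, pos, _, _ in team:
            counts[pos] = counts.get(pos, 0) + 1
        if counts != formation:               # must match the line-up exactly
            continue
        score = sum(p[3] for p in team)
        if score > best_score:
            best, best_score = team, score
    return best, best_score

team, score = best_team(players, {"GK": 1, "DEF": 1, "FWD": 1})
```

Here the best valid pick is gk1 (club A), d2 (club C), and f1 (club B): d1 scores higher than d2 but shares club A with gk1.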
<python><pandas><dataframe><loops><logic>
2023-04-13 07:34:02
1
327
jscholten
76,002,771
17,530,552
Normalize a list containing only positive data into a range comprising negative and positive values
<p>I have a list of data that only contains positive values, such as the list below:</p> <pre><code>data_list = [3.34, 2.16, 8.64, 4.41, 5.0] </code></pre> <p>Is there a way to normalize this list into a range that spans from -1 to +1?</p> <p>The value <code>2.16</code> should correspond to <code>-1</code> in the normalized range, and the value <code>8.64</code> should correspond to <code>+1</code>.</p> <p>I found several topics treating the question of how one can normalize a list that contains negative and positive values. But how can one normalize a list of only positive or only negative values into a new normalized list that spans from the negative into the positive range?</p>
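This is plain min–max rescaling to an arbitrary target interval; the sign of the inputs doesn't matter. A sketch (a real version should guard against all-equal inputs, which make the denominator zero):

```python
def rescale(values, new_min=-1.0, new_max=1.0):
    # Map [min(values), max(values)] linearly onto [new_min, new_max]
    old_min, old_max = min(values), max(values)
    scale = (new_max - new_min) / (old_max - old_min)
    return [new_min + (v - old_min) * scale for v in values]

data_list = [3.34, 2.16, 8.64, 4.41, 5.0]
normalized = rescale(data_list)
```

By construction the smallest input (2.16) maps exactly to -1 and the largest (8.64) to +1, with everything else placed proportionally in between.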
<python><normalization><normalize>
2023-04-13 07:32:51
3
415
Philipp
76,002,614
8,849,755
How to determine whether a fit is reasonable in Python
<p>I am fitting a function to data in Python using <a href="https://lmfit.github.io/lmfit-py/" rel="nofollow noreferrer"><code>lmfit</code></a>. I want to tell whether the fit is good or not. Consider this example (which is actually my data):</p> <p><a href="https://i.sstatic.net/frtUu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/frtUu.png" alt="enter image description here" /></a></p> <p>Most humans will agree that the fit in the plot is reasonable. On the other hand, the 'bad fit example' shows a case in which most humans will agree that the fit is not good. As a human, I am capable of performing such a 'statistical eye test' to tell whether the fit is good by looking at the plot.</p> <p>Now I want to automate this process, because I have a lot of data sets and fits and simply cannot look at each of them individually. I am using a chi squared test in the following way:</p> <pre class="lang-py prettyprint-override"><code>result = model.fit(y_values, params, x=x_values) # `model` was previously created using lmfit. degrees_of_freedom = result.nfree significance_alpha = .05 print('Is fit good?', scipy.stats.chi2.ppf(1-significance_alpha, degrees_of_freedom)&gt;result.chisqr) </code></pre> <p>No matter what <code>significance_alpha</code> I choose, it rejects all the fits, even though the fits are 'not that bad'. For example, setting <code>significance_alpha=1e-10</code> rejected the fit from the plot above, which actually looks 'reasonably good' to me and which I don't want to reject.</p> <p>So my specific question is: What am I doing wrong? Or, what other kind of test or procedure is usually done to filter between 'decent fits' and 'bad fits'?</p>
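One likely culprit, offered as a hedged diagnosis: the chi-squared test is only meaningful if `chisqr` is a *weighted* sum of squared residuals, i.e. the fit was given per-point uncertainties (`weights=1/y_err` in lmfit). Without weights, data with y-values in the hundreds produces a raw residual sum that dwarfs any critical value, so every fit is "rejected". With honest uncertainties, the reduced chi-square (lmfit exposes it as `result.redchi`) is a practical proxy for the eye test: β‰ˆ1 means reasonable, ≫1 a bad fit, β‰ͺ1 overestimated errors. A sketch of the statistic itself:

```python
import numpy as np

def reduced_chi_square(y, y_fit, y_err, n_params):
    """Chi-square per degree of freedom; ~1 for a good fit with correct errors."""
    resid = (np.asarray(y) - np.asarray(y_fit)) / np.asarray(y_err)
    dof = len(y) - n_params
    return float(np.sum(resid**2) / dof)
```

For instance, residuals each one error bar in size give a value near 1 for generous degrees of freedom, while systematic misses of several error bars push it far above 1.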
<python><curve-fitting><goodness-of-fit>
2023-04-13 07:12:50
2
3,245
user171780
76,002,506
5,816,253
dictionary from different lists python 3
<p>I have the following lists:</p> <pre class="lang-py prettyprint-override"><code>list1 = [&quot;Hulk&quot;, &quot;Flash&quot;, &quot;Tesla&quot;] list2 = [&quot;green&quot;, &quot;23&quot;, &quot;thunder&quot;] list3 = [&quot;Marvel&quot;, &quot;DC_comics&quot;, &quot;0.0&quot;] </code></pre> <p>and I would like to create a dictionary like this one:</p> <pre class="lang-py prettyprint-override"><code>{ &quot;eo:dict&quot;: [ {&quot;name&quot;: &quot;Hulk&quot;, &quot;color&quot;: green, &quot;company&quot;: Marvel}, {&quot;name&quot;: &quot;Flash&quot;, &quot;color&quot;: red, &quot;company&quot;: DC_comics}, {&quot;name&quot;: &quot;Tesla&quot;, &quot;color&quot;: thunder, &quot;company&quot;: 0.0}, ] } </code></pre> <p>that has to be appended in a specific position within a json file I'm creating.</p> <p>I tried in this way:</p> <pre class="lang-py prettyprint-override"><code>keys = [&quot;name&quot;, &quot;company&quot;, &quot;colours&quot;] eo_dict = dict(zip(keys, name,company, color)) </code></pre> <p>but I got the error <code>&quot;_ValueError: dictionary update sequence element #0 has length 3; 2 is required_&quot;</code></p> <p>Could anybody give me some hints?</p>
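The error comes from zipping everything in one call: `dict()` wants an iterable of `(key, value)` pairs, but zipping the keys together with the three value lists yields longer tuples. Zip the value lists row-wise first, then zip each row with the keys (shown with the list values as given, so Flash gets `"23"`/`"DC_comics"` rather than the `red` in the target snippet):

```python
list1 = ["Hulk", "Flash", "Tesla"]
list2 = ["green", "23", "thunder"]
list3 = ["Marvel", "DC_comics", "0.0"]

keys = ["name", "color", "company"]
# zip(list1, list2, list3) yields one row per entity; zip each row with the keys
eo_dict = {"eo:dict": [dict(zip(keys, row)) for row in zip(list1, list2, list3)]}
```

The resulting `eo_dict` can be placed at the desired position in the JSON structure and serialized with `json.dump`.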
<python><json><list><dictionary>
2023-04-13 06:59:31
2
375
sylar_80
76,002,461
562,769
Where does "AttributeError: 'VendorAlias' object has no attribute 'find_spec'" come from?
<p>I'm currently trying to update a larger codebase from Python 3.8 to Python 3.11. I use <code>pyenv</code> to manage my Python versions and <code>poetry</code> to manage my dependencies:</p> <pre><code>pyenv local 3.11.3 poetry update </code></pre> <p>When I run <code>pytest</code> I immediately get:</p> <pre><code>python -m pytest -n 1 Traceback (most recent call last): File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1074, in _find_spec AttributeError: 'VendorAlias' object has no attribute 'find_spec' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/django/utils/module_loading.py&quot;, line 58, in autodiscover_modules import_module(&quot;%s.%s&quot; % (app_config.name, module_to_search)) File &quot;/home/martin/.pyenv/versions/3.11.1/lib/python3.11/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1206, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1178, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1140, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1076, in _find_spec File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1049, in _find_spec_legacy ImportWarning: VendorAlias.find_spec() not found; falling back to find_module() During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1074, in _find_spec AttributeError: 'VendorAlias' object has no attribute 'find_spec' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;&lt;frozen runpy&gt;&quot;, 
line 198, in _run_module_as_main File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pytest/__main__.py&quot;, line 5, in &lt;module&gt; raise SystemExit(pytest.console_main()) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py&quot;, line 189, in console_main code = main() ^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py&quot;, line 147, in main config = _prepareconfig(args, plugins) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py&quot;, line 328, in _prepareconfig config = pluginmanager.hook.pytest_cmdline_parse( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_hooks.py&quot;, line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_manager.py&quot;, line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_callers.py&quot;, line 55, in _multicall gen.send(outcome) File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/_pytest/helpconfig.py&quot;, line 103, in pytest_cmdline_parse config: Config = outcome.get_result() ^^^^^^^^^^^^^^^^^^^^ File 
&quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_result.py&quot;, line 60, in get_result raise ex[1].with_traceback(ex[2]) File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_callers.py&quot;, line 39, in _multicall res = hook_impl.function(*args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py&quot;, line 1067, in pytest_cmdline_parse self.parse(args) File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py&quot;, line 1354, in parse self._preparse(args, addopts=addopts) File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py&quot;, line 1256, in _preparse self.hook.pytest_load_initial_conftests( File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_hooks.py&quot;, line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_manager.py&quot;, line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_callers.py&quot;, line 60, in _multicall return outcome.get_result() ^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_result.py&quot;, line 60, in get_result raise ex[1].with_traceback(ex[2]) File 
&quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pluggy/_callers.py&quot;, line 39, in _multicall res = hook_impl.function(*args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pytest_django/plugin.py&quot;, line 353, in pytest_load_initial_conftests _setup_django() File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/pytest_django/plugin.py&quot;, line 236, in _setup_django django.setup() File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/django/__init__.py&quot;, line 24, in setup apps.populate(settings.INSTALLED_APPS) File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/django/apps/registry.py&quot;, line 124, in populate app_config.ready() File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/django/contrib/admin/apps.py&quot;, line 27, in ready self.module.autodiscover() File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/django/contrib/admin/__init__.py&quot;, line 50, in autodiscover autodiscover_modules(&quot;admin&quot;, register_to=site) File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/django/utils/module_loading.py&quot;, line 70, in autodiscover_modules if module_has_submodule(app_config.module, module_to_search): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/martin/.cache/pypoetry/virtualenvs/web-backend-4tJQ0X3K-py3.11/lib/python3.11/site-packages/django/utils/module_loading.py&quot;, line 85, in module_has_submodule return importlib_find(full_module_name, package_path) is not None ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;&lt;frozen importlib.util&gt;&quot;, line 103, in find_spec File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1076, in _find_spec File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1049, in _find_spec_legacy ImportWarning: VendorAlias.find_spec() not found; falling back to find_module() </code></pre> <p>I have no clue where that comes from.</p> <p>When I uninstall <code>pytest-django</code>, then <code>pytest</code> at least starts. However, in a related project everything works fine - and that includes <code>pytest-django</code> in the same version.</p> <p>The upgrade to Python 3.9 worked, but to 3.10 (and 3.11) it failed.</p> <p>How can I investigate that issue?</p>
<python><django><pytest><python-3.10>
2023-04-13 06:53:08
1
138,373
Martin Thoma
76,002,447
13,262,787
Scapy: failed to set hardware filter to promiscuous mode: A device attached to the system is not functioning
<p>I am trying to send an ICMP packet with python scapy like this:</p> <pre><code>request_packet = IP(dst=&quot;www.google.com&quot;)/ICMP(type=&quot;echo-request&quot;) send(request_packet) </code></pre> <p>but when running the code the following error appears:</p> <pre><code>OSError: \Device\NPF_{5E5248B6-F793-4AAF-BA07-269A904D1D3A}: failed to set hardware filter to promiscuous mode: A device attached to the system is not functioning. </code></pre> <p>I am on Windows 10 and using a wired internet connection. How can i fix this?</p>
<python><windows><scapy>
2023-04-13 06:51:16
0
4,545
Serket
76,002,185
10,856,988
GoogleSheet Column Download Limit
<p>Is there a limit to the number of columns in a spreadsheet that can be downloaded with the Google Sheets API (Python)? The response only gives the first 7 columns.<br /> When trying to download individual column data beyond Column_7, the API call raises an IndexError (see the Response screenshot).</p> <p><em><strong>SpreadSheet:</strong></em> <a href="https://i.sstatic.net/UsxSN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UsxSN.jpg" alt="enter image description here" /></a></p> <p><em><strong>Response</strong></em> <a href="https://i.sstatic.net/SGScK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SGScK.jpg" alt="enter image description here" /></a></p>
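There is no 7-column download limit in the Sheets API. `values().get()` returns *ragged* rows: trailing empty cells are omitted, so rows that happen to be blank past some column are simply shorter, and indexing a later column raises `IndexError`. Padding the rows (or requesting an explicit range such as `Sheet1!A:Z`) fixes it; a sketch with made-up rows, since the real sheet isn't available:

```python
rows = [  # shape of values().get(...)["values"]: trailing empty cells are dropped
    ["h1", "h2", "h3", "h4", "h5", "h6", "h7", "h8", "h9"],
    ["a", "b", "c", "d", "e", "f", "g"],        # blank in columns 8-9
    ["a", "b", "c", "d", "e", "f", "g", "h"],   # blank in column 9
]

width = max(len(r) for r in rows)
padded = [r + [""] * (width - len(r)) for r in rows]

column_9 = [r[8] for r in padded]  # safe now; no IndexError for short rows
```

After padding, every row has the same width and any column index up to that width is valid.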
<python><google-sheets-api>
2023-04-13 06:13:24
0
1,641
srt111
76,001,985
1,229,966
Overriding Python builtin 'print' using pybind11
<p>I'm embedding Python into a C++ app using pybind11. I'm trying to override the Python builtin 'print' function from C++.</p> <pre><code>mBuiltinsMod = py::module::import(&quot;builtins&quot;); mBuiltinsMod.attr(&quot;print&quot;) = py::cpp_function([](py::object msg) { std::cout &lt;&lt; msg.cast&lt;std::string&gt;(); }); </code></pre> <p>This successfully overrides the 'print' function to call cout, but it crashes on program exit. The crash happes on <code>pybind11/embed.h</code>:</p> <pre><code>void finalize_interpreter() { // ... if (internals_ptr_ptr) { // CRASH HERE delete *internals_ptr_ptr; *internals_ptr_ptr = nullptr; } } </code></pre> <p>Are there other steps needed when overriding the 'print' function to fix the crash?</p>
<python><c++><pybind11>
2023-04-13 05:40:48
2
2,739
Dess
76,001,845
19,491,471
Reading a dataset from Kaggle
<p>I am trying to download a dataset from kaggle, the link is: <a href="https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images" rel="nofollow noreferrer">https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images</a> there is a download button in this link. The link of that is: <a href="https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images/download?datasetVersionNumber=3" rel="nofollow noreferrer">https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images/download?datasetVersionNumber=3</a> I have a Jupyter project on my local machine. When I try to download the dataset using:</p> <pre><code>url = 'https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images/download?datasetVersionNumber=3' response = requests.get(url) with open('cifake-real-and-ai-generated-synthetic-images.zip', 'wb') as f: f.write(response.content) </code></pre> <p>I get an HTML file instead of the zip archive. I read online that I need a Kaggle API token. I got one and put it in the folder, but the same issue persists. So right now the hierarchy of my folders is:</p> <pre><code>project -&gt; [(.kaggle -&gt; [kaggle.json]) and (file.ipynb)]. </code></pre> <p>project has a .kaggle folder and file.ipynb, and inside .kaggle I have kaggle.json. I am also logged in to Kaggle, so I am not sure why it keeps downloading the HTML file instead of the actual zip file.</p>
<python><jupyter-notebook><kaggle>
2023-04-13 05:14:00
1
327
Amin
76,001,795
9,757,174
python fastapi giving incorrect responses
<p>I have a fastapi app connected to my firebase firestore. I am writing a simple endpoint to check if the current user has an admin role or not?</p> <p>I have written the following code for the endpoint</p> <pre><code>@router.get(&quot;/isAdmin&quot;) def is_admin(userId: str): # sourcery skip: merge-nested-ifs &quot;&quot;&quot;Enddpoint to check if the current user is an admin or not Args: email_id (str): email id of the user to be validated &quot;&quot;&quot; # Check if the user exists in our firestore database based on the email ID db = firestore.client() print(userId) user_ref = db.collection(&quot;users&quot;).document(userId).get() print(user_ref, userId) # Check if the user exists and if the user has admin role if user_ref: # If the user exists, check if the user is an admin and return the roles if the user is an admin if user_ref.to_dict()[&quot;hasAdminRole&quot;]: user_id = user_ref[0].id user_roles_ref = ( db.collection(&quot;users&quot;).document(user_id).collection(&quot;roles&quot;) ) user_roles_data = user_roles_ref.stream() roles = {role.id: role.to_dict() for role in user_roles_data} return {&quot;hasAdminRole&quot;: True, &quot;roles&quot;: roles} # If the user doesn't exist or doesn't have admin role, # check the tempAdmins collection to see if the user is a temporary admin temp_admin_ref = db.collection(&quot;tempAdmins&quot;).document(userId).get() temp_admin_data = temp_admin_ref.get() if temp_admin_data: # Get the documentID from the data temp_admin_id = temp_admin_data[0].id # Reference the roles document and get the data temp_admin_roles_ref = ( db.collection(&quot;tempAdmins&quot;).document(temp_admin_id).collection(&quot;roles&quot;) ) temp_admin_roles_data = temp_admin_roles_ref.stream() roles = {role.id: role.to_dict() for role in temp_admin_roles_data} return {&quot;hasAdminRole&quot;: True, &quot;roles&quot;: roles} # return no access message if the user is not an admin return JSONResponse( 
status_code=response_status.HTTP_401_UNAUTHORIZED, content={&quot;message&quot;: NO_ADMIN_ACCESS_ERROR, &quot;hasAdminRole&quot;: False}, ) </code></pre> <p>For any email ID, whether it's an admin or not, I get the following response.</p> <pre><code>{ &quot;message&quot;: &quot;User does not exist&quot; } </code></pre> <p>The above response is very weird because I am not even writing the above message as a response anywhere and I don't know if this a fastapi swagger issue.</p> <p>The endpoint I am hitting is - <code>http://127.0.0.1:8000/users/isAdmin?email=test%40test.com</code></p>
<python><google-cloud-firestore><fastapi><jsonresponse>
2023-04-13 05:04:52
1
1,086
Prakhar Rathi
76,001,787
874,188
How can I read just one line from standard input, and pass the rest to a subprocess?
<p>If you <code>readline()</code> from <code>sys.stdin</code>, passing the rest of it to a subprocess does not seem to work.</p> <pre><code>import subprocess import sys header = sys.stdin.buffer.readline() print(header) subprocess.run(['nl'], check=True) </code></pre> <p>(I'm using <code>sys.stdin.buffer</code> to avoid any encoding issues; this handle returns the raw bytes.)</p> <p>This runs, but I don't get any output from the subprocess;</p> <pre class="lang-none prettyprint-override"><code>bash$ printf '%s\n' foo bar baz | python demo1.py b'foo\n' </code></pre> <p>If I take out the <code>readline</code> etc, the subprocess reads standard input and produces the output I expect.</p> <pre class="lang-none prettyprint-override"><code>bash$ printf '%s\n' foo bar baz | &gt; python -c 'import subprocess; subprocess.run([&quot;nl&quot;], check=True)' 1 foo 2 bar 3 baz </code></pre> <p>Is Python buffering the rest of stdin when I start reading it, or what's going on here? Running with <code>python -u</code> does not remove the problem (and indeed, the documentation for it only mentions that it changes the behavior for <code>stdout</code> and <code>stderr</code>). But if I pass in a larger amount of data, I do get some of it:</p> <pre class="lang-none prettyprint-override"><code>bash$ wc -l /etc/services 13921 /etc/services bash$ python demo1.py &lt;/etc/services | head -n 3 1 27/tcp # NSW User System FE 2 # Robert Thomas &lt;BThomas@F.BBN.COM&gt; 3 # 28/tcp Unassigned (... traceback from broken pipe elided ...) bash$ fgrep -n 'NSW User System FE' /etc/services 91:nsw-fe 27/udp # NSW User System FE 92:nsw-fe 27/tcp # NSW User System FE bash$ sed -n '1,/NSW User System FE/p' /etc/services | wc 91 449 4082 </code></pre> <p>(So, looks like it eats 4096 bytes from the beginning.)</p> <p>Is there a way I can avoid this behavior, though? 
I would like to only read one line off from the beginning, and pass the rest to the subprocess.</p> <p>Calling <code>sys.stdin.buffer.readline(-1)</code> repeatedly in a loop does not help.</p> <p>This is actually a follow-up for <a href="https://stackoverflow.com/questions/75940345/read-line-from-shell-pipe-pass-to-exec-and-keep-to-variable">Read line from shell pipe, pass to exec, and keep to variable</a> but I wanted to focus on this, to me, surprising aspect of the problem in that question.</p>
<python><input><subprocess>
2023-04-13 05:04:19
1
191,551
tripleee
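One workaround for the stdin question above (a sketch, not from the original post; names are illustrative) is to bypass Python's buffered stdin entirely. `sys.stdin.buffer` reads ahead in 4096-byte chunks, which is exactly the data that goes missing; calling `os.read()` on the raw file descriptor one byte at a time leaves everything after the first newline in the pipe for the child process:

```python
import os
import subprocess

def read_first_line(fd: int = 0) -> bytes:
    """Read one line from a raw file descriptor, one byte at a time.

    os.read() bypasses Python's stdin buffering, so everything after the
    first newline stays in the pipe for a child process to consume.
    """
    chunks = []
    while True:
        b = os.read(fd, 1)  # one byte per call: no read-ahead into a buffer
        if not b or b == b"\n":
            break
        chunks.append(b)
    return b"".join(chunks)

def demo() -> None:
    # Hypothetical usage mirroring the question: grab the header line,
    # then let `nl` number the remaining lines on the inherited stdin.
    header = read_first_line(0)
    print(header)
    subprocess.run(["nl"], check=True)
```

Byte-at-a-time reads are slow for large headers, but for a single line the cost is negligible compared with losing a buffer's worth of the child's input.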
76,001,446
10,003,538
My `collate_fn` function got empty data when pass it to collate_fn parameter of Trainer function
<p>I am trying to do fine-tuning an existing hugging face model.</p> <p>The below code is what I collected from some documents</p> <pre><code>from transformers import AutoTokenizer, AutoModelForQuestionAnswering, TrainingArguments, Trainer import torch # Load the Vietnamese model and tokenizer model_name = &quot;vinai/phobert-base&quot; tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) # Define the training data train_dataset = [ { &quot;question&quot;: &quot;What is your name ?&quot;, &quot;context&quot;: &quot;My name is Peter&quot;, &quot;answer&quot;: { &quot;text&quot;: &quot;Peter&quot;, &quot;start&quot;: 7, &quot;end&quot;: 11 } } ] # Define the validation data val_dataset = [ { &quot;question&quot;: &quot;What is your name ?&quot;, &quot;context&quot;: &quot;My name is Peter&quot;, &quot;answer&quot;: { &quot;text&quot;: &quot;Peter&quot;, &quot;start&quot;: 7, &quot;end&quot;: 11 } } ] # Define the training arguments training_args = TrainingArguments( output_dir='./results', evaluation_strategy='epoch', learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=3, weight_decay=0.01, ) # Define the data collator def collate_fn(data): input_ids = torch.stack([item.get('input_ids', None) for item in data if 'input_ids' in item]) attention_mask = torch.stack([item.get('attention_mask', None) for item in data if 'attention_mask' in item]) start_positions = torch.stack([item.get('start_positions', None) for item in data if 'start_positions' in item]) end_positions = torch.stack([item.get('end_positions', None) for item in data if 'end_positions' in item]) return { 'input_ids': input_ids, 'attention_mask': attention_mask, 'start_positions': start_positions, 'end_positions': end_positions } # Define the trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset, data_collator=collate_fn ) # Fine-tune 
the model trainer.train() </code></pre> <p>I keep receiving the error of</p> <pre><code> input_ids = torch.stack([item.get('input_ids', None) for item in data if 'input_ids' in item]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: stack expects a non-empty TensorList </code></pre> <p>I try to do</p> <pre><code>def collate_fn(data): print(data) </code></pre> <p>but I got <code>[]</code></p>
<python><python-3.x><pytorch><huggingface-transformers>
2023-04-13 03:45:24
1
1,225
Chau Loi
76,001,128
188,331
Splitting dataset into Train, Test and Validation using HuggingFace Datasets functions
<p>I can split my dataset into Train and Test split with 80%:20% ratio using:</p> <pre><code>from datasets import load_dataset ds = load_dataset(&quot;myusername/mycorpus&quot;) ds = ds[&quot;train&quot;].train_test_split(test_size=0.2) # my data in HF have 1 train split only print(ds) </code></pre> <p>which outputs:</p> <pre><code>DatasetDict({ train: Dataset({ features: ['translation'], num_rows: 62044 }) test: Dataset({ features: ['translation'], num_rows: 15512 }) }) </code></pre> <p><strong>How can I generate the validation split, with ratio 80%:10%:10%?</strong></p>
<python><huggingface-datasets>
2023-04-13 02:16:50
2
54,395
Raptor
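For the three-way split question above, one common pattern (a sketch, not from the original post) is to split twice: hold out 20% with `train_test_split(test_size=0.2)`, then split that held-out portion in half with `test_size=0.5` to get validation and test. The same two-stage arithmetic in plain Python, with all names illustrative:

```python
import random

def three_way_split(items, val_frac=0.1, test_frac=0.1, seed=42):
    """80/10/10 split via two successive two-way splits.

    Mirrors the Hugging Face pattern: first hold out 20%, then split
    that held-out 20% in half (test_size=0.5) into validation + test.
    """
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    holdout = int(n * (val_frac + test_frac))   # first split: 20% held out
    train, rest = shuffled[holdout:], shuffled[:holdout]
    half = len(rest) // 2                        # second split: 50/50
    return train, rest[:half], rest[half:]
```

With Hugging Face `datasets` the two `train_test_split` calls would be applied to the `Dataset` objects directly and the three pieces collected into a `DatasetDict`; the arithmetic is identical.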
76,000,974
4,521,938
Pytorch predict wrong with small batch
<p>Have code:</p> <pre class="lang-py prettyprint-override"><code>firstItem, first_y = dataset[0] secondItem, second_y = dataset[4444] thirdItem, third_y = dataset[55555] print(f&quot;expected: {[first_y, second_y, third_y]}&quot;) X = firstItem.unsqueeze(0) X = torch.cat((X, secondItem.unsqueeze(0)), 0) X = torch.cat((X, thirdItem.unsqueeze(0)), 0).cuda() with torch.no_grad(): pred = model_ft(X) print(f&quot;predicted: {pred.argmax(1).cpu().numpy()}&quot;) </code></pre> <p>I get 3 items from dataset and process them though model.</p> <p>So output of that code:</p> <pre><code>expected: [0, 12, 89] predicted: [ 0 12 89] </code></pre> <p>Everything is perfect! Expected is equal predicted values.</p> <p>Now let me show magic that I don't understand. Instead of 3 items we process only 2.</p> <pre class="lang-py prettyprint-override"><code>firstItem, first_y = dataset[0] thirdItem, third_y = dataset[55555] print(f&quot;expected: {[first_y, third_y]}&quot;) X = firstItem.unsqueeze(0) X = torch.cat((X, thirdItem.unsqueeze(0)), 0).cuda() with torch.no_grad(): pred = model_ft(X) print(f&quot;predicted: {pred.argmax(1).cpu().numpy()}&quot;) </code></pre> <p>and output:</p> <pre><code>expected: [0, 89] predicted: [202 89] </code></pre> <p>I <strong>don't understand</strong> why in this case expected and predicted items are not the same. If we will precess only one value then result gonna be the same (wrong prediction)</p> <p><strong>Solved!</strong> Thanks @coder00! You need to turn model to evaluation mode by calling model_ft.eval()</p>
<python><deep-learning><pytorch>
2023-04-13 01:30:59
1
358
kirsanv43
76,000,939
1,973,207
How to fuse 4-bit LLAMA weights with LoRA ones into one .pt file?
<p>I followed <a href="https://github.com/s4rduk4r/alpaca_lora_4bit_readme/blob/main/README.md" rel="nofollow noreferrer">this manual</a> and got <code>llama-7b-hf-int4</code> (got <code>llama-7b-4bit.pt </code>) and <code>samwit/alpaca7B-lora</code> (got <code>adapter_model.bin</code>). Now I want to merge them into a single .pt 4bit model. How to do such a thing?</p> <p>Why I need this:</p> <ol> <li>current lama.cpp supports only legacy 4-bit single file models.</li> <li>4-bit fine-tuners generate small alpaca fine-tuned mini models.</li> <li>only 4-bit alpaca tuning is available for my current setup; thus, I need to know how to apply/merge one into another.</li> </ol>
<python><deep-learning><pytorch><alpaca>
2023-04-13 01:19:36
0
880
DuckQueen
76,000,922
595,553
Update Python so that it points to the new version
<p>I installed Python on Amazon Linux 2. Python 2.7.18 was available by default. I installed 3.9.6, but <code>python --version</code> still points to Python 2.</p> <pre><code>[root@AnsibleM Python-3.9.6]# python --version Python 2.7.18 [root@AnsibleM Python-3.9.6]# python3.9 --version Python 3.9.6 [root@AnsibleM Python-3.9.6]# which python /usr/bin/python [root@AnsibleM Python-3.9.6]# which python3.9 /usr/local/bin/python3.9 [root@AnsibleM Python-3.9.6]# </code></pre>
<python><linux>
2023-04-13 01:11:52
1
753
Satyam Pandey
76,000,750
5,091,720
pandas problem when assigning value using loc
<p>So what is happening is the values in column B are becoming NaN. How would I fix this so that it does not override other values?</p> <pre><code>import pandas as pd import numpy as np # %% # df=pd.read_csv('testing/example.csv') data = { 'Name' : ['Abby', 'Bob', 'Chris'], 'Active' : ['Y', 'Y', 'N'], 'A' : [89, 92, np.nan], 'B' : ['eye', 'hand', np.nan], 'C' : ['right', 'left', 'right'] } df = pd.DataFrame(data) df.loc[((df['Active'] =='N') &amp; (df['A'].isna())), ['A', 'B']] = [99, df['C']] df </code></pre> <p>What I want the results to be is:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Active</th> <th>A</th> <th>B</th> <th>C</th> </tr> </thead> <tbody> <tr> <td>Abby</td> <td>Y</td> <td>89.0</td> <td>eye</td> <td>right</td> </tr> <tr> <td>Bob</td> <td>Y</td> <td>92.0</td> <td>hand</td> <td>left</td> </tr> <tr> <td>Chris</td> <td>N</td> <td>99</td> <td>right</td> <td>right</td> </tr> </tbody> </table> </div>
<python><pandas><dataframe>
2023-04-13 00:19:21
1
2,363
Shane S
76,000,509
13,538,030
Interactive visualization does not work in Jupyter notebook
<p>I am a beginner for interactive visualization in Jupyter Notebook. Below is the Python code I have been using.</p> <pre><code>import altair as alt alt.Chart(data=dt).mark_point().encode( x=&quot;seq&quot;, y=&quot;v3&quot;, color=&quot;v2&quot;, size='v1' ) </code></pre> <p><code>dt</code> is the data I have created, and below is the data structure:</p> <pre><code> seq v1 v2 v3 color 0 1100 3 35 11 red 1 1101 5 41 12 red 2 1102 1 19 9 yellow 3 1103 1 19 11 blue 4 1104 5 44 14 red </code></pre> <p>After running the code, nothing is showing.</p> <p>Thank you for your guidance.</p>
<python><jupyter-notebook><visualization>
2023-04-12 23:16:10
2
384
Sophia
76,000,390
5,696,181
Firefox binary issue when deploying Python script to Heroku server
<p>I am deploying a Python script to Heroku. The Python script includes a Selenium script that uses Firefox. Here is a snippet of the code:</p> <pre><code>def runFirefoxSelenium(): options = FirefoxOptions() options.add_argument('--headless') options.add_argument('--no-sandbox') binary = './bin/Firefox.app/Contents/MacOS/firefox-bin' options.binary_location = binary driver = webdriver.Firefox(options=options) ... </code></pre> <p>In the root directory, I created a folder called bin and dumped the Firefox app into the bin.</p> <p>When I run the script locally (MacOS), it works perfectly. It is able to find the Firefox binary in within the Firefox.app directory.</p> <p>However, when I upload it to the Heroku server, I get the following error:</p> <pre><code>selenium.common.exceptions.InvalidArgumentException: Message: binary is not a Firefox executable </code></pre> <p>I tried variations where I only put the firefox binary (called &quot;firefox-bin&quot;) in the bin folder, but that doesn't work even locally.</p> <p>How would you recommend that I resolve this issue?</p>
<python><selenium-webdriver><heroku><firefox>
2023-04-12 22:53:12
1
1,037
Vaibhav Verma
76,000,363
717,231
Python zipfile fails ("compression method not supported") containing deflate64-compressed content
<p>I have a zip file that I can only unzip using the terminal (on Mac). I have not tried Windows. For reference, I tried opening the file by double-clicking in the Finder window and I get &quot;Error - 79 Inappropriate file type or format&quot;.</p> <p>But this command on the terminal works as expected:</p> <pre><code>unzip zip_file.zip &gt; extracted.txt </code></pre> <p>My final goal is to extract this file using Python 3.x. I have tried</p> <pre><code>with py7zr.SevenZipFile(fq_file_name, mode='r') as archive: archive.extractall(file_path) </code></pre> <p>Error:</p> <pre><code> raise Bad7zFile(&quot;not a 7z file&quot;) py7zr.exceptions.Bad7zFile: not a 7z file </code></pre> <p>With this:</p> <pre><code>with zipfile.ZipFile(fq_file_name, 'r') as zip_ref: zip_ref.extractall(file_path) </code></pre> <p>Error:</p> <pre><code>raise NotImplementedError(&quot;That compression method is not supported&quot;) NotImplementedError: That compression method is not supported </code></pre> <p>I even tried shutil</p> <pre><code>shutil.unpack_archive(fq_file_name) </code></pre> <p>Error</p> <pre><code>NotImplementedError: That compression method is not supported </code></pre> <p>Looking inside the zipfile module, it's failing because the file requests compression method 9, which it doesn't support. Apparently method 9 is DEFLATE64. Is there any way to decompress this file in Python?</p>
<python><macos>
2023-04-12 22:46:23
3
958
Bitmask
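For the Deflate64 question above: the stdlib `zipfile` only supports stored, deflate, bzip2, and lzma entries, so method 9 always raises `NotImplementedError`. Third-party packages such as `zipfile-deflate64` or `stream-unzip` add Deflate64 support; alternatively, since the system `unzip` already handles the file, a sketch of a shell-out fallback (names illustrative, not from the original post):

```python
import shutil
import subprocess
import zipfile

def extract_zip(archive: str, dest: str) -> None:
    """Extract with the stdlib, falling back to the system `unzip`.

    Python's zipfile raises NotImplementedError for Deflate64
    (compression method 9); the macOS/Linux `unzip` binary handles it,
    so shell out to it when the stdlib gives up.
    """
    try:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
    except NotImplementedError:
        if shutil.which("unzip") is None:
            raise  # no fallback available on this machine
        subprocess.run(["unzip", "-o", archive, "-d", dest], check=True)
```

The fallback path only runs for unsupported methods; ordinary archives still go through the stdlib.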
76,000,347
10,466,809
Should Optional type annotation be used when an initialized input parameter is immediately set in the function body?
<p>In Python we often have a situation where the default argument for some input parameter is <code>None</code> and, if that is the case, we immediately initialize that variable at the top of the function body for use in the rest of the function. One common use case is for mutable default arguments:</p> <pre><code>def foo(input_list = None): if input_list is None: input_list = [] input_list.append('bar') print(input_list) </code></pre> <p>What would be the appropriate way to type hint this?</p> <pre><code>from typing import List def foo(input_list: List = None): if input_list is None: input_list = [] input_list.append('bar') print(input_list) </code></pre> <p>or</p> <pre><code>from typing import List, Optional def foo(input_list: Optional[List] = None): if input_list is None: input_list = [] input_list.append('bar') print(input_list) </code></pre> <p>The core of the question is this: From the caller's perspective it is ok to pass in either a list or nothing. So from the callers perspective the <code>input_list</code> parameter is <code>Optional</code>. But within the body of the function, it is necessary that <code>input_list</code> is indeed a <code>List</code> and not <code>None</code>. So is the <code>Optional</code> flag meant to be an indicator to the caller that it is optional to pass that parameter in (in which case the second code block would be correct and the type hint in the first block would be too restrictive), or is it actually a signal to the body of the function that <code>input_list</code> may be either a <code>List</code> or <code>None</code> (in which case the second code block would be wrong since <code>None</code> is not acceptable)?</p> <p>I lean towards not including <code>Optional</code> because users aren't really supposed to pass anything other than a list into <code>input_list</code>. It's true that they <em>can</em> pass in <code>None</code>, and nothing bad will happen. 
But that's not really the intent.</p> <p>Note that I am aware that in at least some cases <code>param: type = None</code> will be parsed as if it was equal to <code>param: Optional[type] = None</code>. That doesn't actually give me clarity on how I should use the <code>Optional</code> annotation.</p>
<python><type-hinting>
2023-04-12 22:41:00
3
1,125
Jagerber48
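On the `Optional` question above: PEP 484 originally allowed the implicit form (`input_list: List = None`), but that shorthand has since been deprecated, and current type checkers such as mypy expect the `None` possibility to be spelled out. The explicit annotation describes the parameter as the caller sees it; the `is None` check then narrows the type back to `list` for the rest of the body. A minimal sketch:

```python
from typing import Optional

def foo(input_list: Optional[list] = None) -> list:
    """`Optional[list]` documents that the *parameter* accepts None.

    Inside the body, the `is None` check narrows the type back to
    `list`, which is how type checkers reason about it: after the
    branch, `input_list` is treated as a plain `list` again.
    """
    if input_list is None:
        input_list = []
    input_list.append("bar")
    return input_list
```

On Python 3.10+ the same intent can be written `input_list: list | None = None`.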
76,000,316
4,670,408
What is the most efficient way to normalize values in a single row in pandas?
<p>I have two types of columns in a pandas dataframe, let's say A and B.</p> <p>How can I efficiently normalize the values in each row individually, using the mean for each type of column?</p> <p>I can first calculate the mean for each column type and then divide each column by its respective column-type mean, but it's taking too much time (more than 30 minutes). I have over 300 columns and 500K rows.</p> <pre><code>df = pd.DataFrame({'A1': [1,2,3], 'A2': [4,5,6], 'A3': [7,8,9], 'B1': [11,12,13], 'B2': [14,15,16], 'B3': [17,18,19] }) df['A_mean'] = df.apply(lambda x: x.filter(regex='A').mean(), axis=1) df['A1'] = df['A1']/df['A_mean'] </code></pre> <p>I am expecting the following result.</p> <p><a href="https://i.sstatic.net/PlJCz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PlJCz.png" alt="enter image description here" /></a></p>
<python><pandas><performance>
2023-04-12 22:32:49
2
1,281
Vinay
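The slow part in the question above is the per-row `.apply(lambda x: x.filter(regex=...))`, which re-runs the regex filter 500K times. The usual fix (a sketch, not from the original post) is to compute each group's row-wise mean once in a vectorized pass, e.g. `a = df.filter(regex='^A'); df[a.columns] = a.div(a.mean(axis=1), axis=0)`. The per-row logic that this replaces, in plain Python:

```python
def normalize_row(row: dict) -> dict:
    """Divide each value by the mean of its column group (prefix letter).

    Illustrates the intended result of the vectorized pandas version
        df[a_cols] = df[a_cols].div(df[a_cols].mean(axis=1), axis=0)
    which computes every row's group mean in one pass instead of
    calling .apply() with a regex filter once per row.
    """
    groups = {}
    for name, value in row.items():
        groups.setdefault(name[0], []).append(value)
    means = {g: sum(vs) / len(vs) for g, vs in groups.items()}
    return {name: value / means[name[0]] for name, value in row.items()}
```

The grouping key here (first character of the column name) is an assumption standing in for whatever actually distinguishes the A and B column families.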
76,000,175
6,703,783
How to create a dataframe by appending heterogenous column values
<p>I want to create a <code>dataframe</code> like 2 columns and several rows</p> <pre><code>[ ['text1',[float1, float2, float3]] ['text2',[float4, float5, float6]] . . . ] </code></pre> <p>The names of the columns should be <code>content</code> and <code>embeddings</code>. <code>text1</code>, <code>text2</code> are for <code>content</code> column, the list of floats is in <code>embeddings</code> column.</p> <p>The code I have written is</p> <pre><code>mycontent = [&quot;i live in space&quot;,&quot;i live my life to fullest&quot;, &quot;dogs live in kennel&quot;,&quot;we live to eat and not eat to live&quot;,&quot;cricket lives in heart of every indian&quot;,&quot;live and let live&quot;,&quot;my house is in someplace&quot;,&quot;my office is in someotherplace&quot;,&quot;chair is red&quot;] contents_and_embeddings_df = pd.DataFrame(columns=['content','embeddings']) for content in mycontent: embedding = get_embedding(content,engine='textsearchcuriedoc001mc') #returns list of floats contents_and_embeddings_df.append(pd.DataFrame([content,embedding])) contents_and_embeddings_df </code></pre> <p>In output I get several warnings of <code> contents_and_embeddings_df.append(pd.DataFrame([content,embedding])) /tmp/ipykernel_15879/3971327095.py:8: FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. contents_and_embeddings_df.append(pd.DataFrame([content,embedding]))</code></p> <p>The content of the dataframe is empty. 
I see only two headers - <code>content embeddings</code></p> <p>I tried few other ways as well but am not able to create the desired <code>dataframe</code></p> <pre><code>for content in mycontent: embedding = get_embedding(content,engine='textsearchcuriedoc001mc') #pd.concat(contents_and_embeddings_df,pd.DataFrame([content,embedding])) --&gt; doesn't work #contents_and_embeddings_df.append(pd.DataFrame([content,embedding])) --&gt; doesn't work tempdf = pd.DataFrame([content,embedding]) #doesn't work. # tempdf = pd.DataFrame([content,embedding], columns=['content','embeddings']) --&gt; doesn't compile contents_and_embeddings_df.add(tempdf) # doesn't work. contents_and_embeddings_df #shows empty </code></pre>
<python><pandas><dataframe>
2023-04-12 22:04:21
2
16,891
Manu Chadha
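For the row-appending question above: `DataFrame.append` is deprecated and also returns a new frame rather than mutating in place, which is why the original frame stays empty. The usual pattern (a sketch; `embed` is a stand-in for the question's `get_embedding`) is to accumulate plain dicts and construct the frame once at the end with `pd.DataFrame(rows)`:

```python
def build_rows(contents, embed):
    """Accumulate plain dicts, then hand the whole list to pd.DataFrame.

    `embed` stands in for the question's get_embedding(); any callable
    returning a list of floats works. Constructing the frame once from
    the accumulated `rows` avoids the deprecated per-row .append() and
    gives the desired 'content' / 'embeddings' columns directly.
    """
    rows = []
    for content in contents:
        rows.append({"content": content, "embeddings": embed(content)})
    return rows
```

After the loop, `pd.DataFrame(build_rows(mycontent, get_embedding))` would produce the two-column frame described in the question.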
76,000,163
12,415,855
Google AI Vision / Image labeling?
<p>I am trying to label images with the Google Vision API, and this code generally works fine:</p> <pre><code>from google.cloud import vision import os import sys if __name__ == '__main__': path = os.path.abspath(os.path.dirname(sys.argv[0])) credsFN = os.path.join(path, &quot;creds.json&quot;) os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credsFN imageURL = &quot;https://www.24mantra.com/wp-content/uploads/2019/10/banner-3.jpg&quot; client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = imageURL features = [vision.Feature.Type.LABEL_DETECTION] features = [vision.Feature(type_=feature_type) for feature_type in features] requ = vision.AnnotateImageRequest(image=image, features=features) resp = client.annotate_image(request=requ) for label in resp.label_annotations: print(f&quot;{label.score:4.0%}: {label.description}&quot;) </code></pre> <p>Now I tried it with another URL:</p> <p><a href="https://www.juicer.io/api/posts/459631754/images.jpg?external_id=CiE264cqZcw&amp;s=199f57b090efba5ef332c3ff09d7b14554f84bfc" rel="nofollow noreferrer">https://www.juicer.io/api/posts/459631754/images.jpg?external_id=CiE264cqZcw&amp;s=199f57b090efba5ef332c3ff09d7b14554f84bfc</a></p> <p>(when you click the link you will see that the image opens without problems in the browser)</p> <p>But when I run the above program with this link, it no longer works and I get an empty response. How can I also get results from the Google Vision API with links like this?</p>
<python><google-cloud-platform><artificial-intelligence>
2023-04-12 22:01:50
1
1,515
Rapid1898
76,000,138
11,411,944
How can I change the default path for saving figures from an interactive Jupyter shell for Python in Visual Studio Code?
<p>I often generate figures using <code>matplotlib</code>. They get displayed in the shell and there's a little &quot;Save As&quot; icon that lets me save them. Whenever I click on it the default path is my system's root directory &quot;/&quot;. Every time I want to save a figure I have to click through my file system to arrive at the desired filepath (which would be Desktop). Is there a way to change this? What I've tried so far:</p> <ul> <li>running <code>os.chdir(&quot;/Users/clo/Desktop/&quot;)</code> in the active shell</li> <li>in Settings, changing Terminal &gt; Integrated: Cwd to /Users/clo/Desktop/</li> </ul> <p>EDIT:</p> <p>Posted on github as a feature request: <a href="https://github.com/microsoft/vscode-jupyter/issues/13321" rel="nofollow noreferrer">github.com/microsoft/vscode-jupyter/issues/13321</a></p>
<python><visual-studio-code><jupyter>
2023-04-12 21:56:38
1
490
Haliaetus
76,000,127
12,134,098
Directory of files containerized and deployed in Lambda?
<p>I have a Python package directory which looks like this:</p> <p><a href="https://i.sstatic.net/bsPdA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bsPdA.png" alt="enter image description here" /></a></p> <p>I have a Dockerfile which uses an AWS-provided base image:</p> <pre><code> COPY requirements.txt . RUN pip install --upgrade pip ENV LAMBDA_TASK_ROOT=&quot;/usr/lambda&quot; RUN pip install -r requirements.txt --target &quot;${LAMBDA_TASK_ROOT}&quot; COPY src/b &quot;${LAMBDA_TASK_ROOT}&quot; </code></pre> <p>Now when I deploy a Lambda using an image built with the code above, and I use &quot;b.handler.lambda_handler&quot; as my CMD override, I get this error:</p> <pre><code>Unable to import module 'b': No module named 'b' </code></pre> <p>I have tried using the CMD override &quot;handler.lambda_handler&quot;, but that gives me a similar error.</p> <p>If my code has multiple files like this, can I not copy the entire directory and then call my handler? Can someone point me to the right solution or documentation? AWS's documentation seems to guide me through a one-file setup only.</p> <p>EDIT 1: When I run the image locally, I get this:</p> <pre><code>PS C:\Windows\system32&gt; docker run -p 9000:8080 3cdde6464b082779fec0f6c263b9052514886d4f90d0d0c1a6d8d40f459660c7 12 Apr 2023 23:10:34,654 [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/var/task, handler=) </code></pre>
<python><amazon-web-services><docker><aws-lambda>
2023-04-12 21:55:15
1
434
m00s3
76,000,070
8,188,910
Add a 1D numpy array to a 2D array along a new dimension (dimensions do not match)
<p>I want to add a 1D array to a 2D array along the second dimension of the 2D array using the logic as in the code below.</p> <pre><code>import numpy as np TwoDArray = np.random.randint(0, 10, size=(10000, 50)) OneDArray = np.random.randint(0, 10, size=(2000)) Sum = np.array([(TwoDArray+element).sum(axis=1) for element in OneDArray]).T print(Sum.shape) &gt;&gt; (10000, 2000) </code></pre> <p>This list comprehension is dramatically slow. What is the fastest way to do this? (I guess with array computation).</p> <p><strong>EDIT</strong> I tried the following approach, but the running time was worse</p> <pre><code>Sum = np.sum(TwoDArray[:, np.newaxis, :] + OneDArray[np.newaxis, :, np.newaxis], axis=2) </code></pre> <p>I also tried with Numba, but the running time is the same.</p> <pre><code>@jit(nopython=True) def compute_sum(TwoDArray, OneDArray): return [(TwoDArray+element).sum(axis=1) for element in OneDArray] </code></pre>
<python><arrays><numpy><multidimensional-array>
2023-04-12 21:45:24
3
419
Nicolas
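The broadcasting question above has an algebraic shortcut the broadcasting attempts miss: since sum_k (T[i,k] + v[j]) = rowsum(T)[i] + K*v[j], the whole (10000, 2000) result is just an outer sum of two 1D arrays; in NumPy that would be `TwoDArray.sum(axis=1)[:, None] + TwoDArray.shape[1] * OneDArray[None, :]`, which never materialises the huge (10000, 2000, 50) intermediate. The identity in plain Python (a sketch, not from the original post):

```python
def fast_sum(two_d, one_d):
    """Outer-sum identity: sum_k (T[i][k] + v[j]) == rowsum(T)[i] + K*v[j].

    In NumPy this becomes the one-liner
        T.sum(axis=1)[:, None] + T.shape[1] * v[None, :]
    which avoids the (N, M, K) intermediate array that the np.newaxis
    broadcasting attempt in the question builds and then reduces.
    """
    k = len(two_d[0])
    row_sums = [sum(row) for row in two_d]        # rowsum(T), computed once
    return [[rs + k * v for v in one_d] for rs in row_sums]
```

The result agrees element-for-element with the brute-force list comprehension from the question.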
76,000,033
14,584,978
Python polars has a modulus operator but it throws an attribute error
<p><strong>Update:</strong> This was due to an older version of polars still running on my machine.</p> <hr /> <p>Please see this documentation: <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.mod.html" rel="nofollow noreferrer">Docs</a></p> <p>When I run this code:</p> <pre class="lang-py prettyprint-override"><code>print( pl.scan_csv(openpath) .select('Barcodes','DateTime') .sort('DateTime') .with_row_index().collect().with_columns(pl.col('index').mod(2).alias('oddeven')) ) </code></pre> <p>Throws this error:</p> <blockquote> <p>AttributeError: 'Expr' object has no attribute 'mod'. Did you mean: 'mode'?</p> </blockquote> <p>This might should be a github issue. If so, I will move it there.</p>
<python><modulo><python-polars>
2023-04-12 21:39:23
0
374
Isaacnfairplay
76,000,024
930,675
Kivy: Why is my TextInput cursor/caret defaulting to red?
<p>Note the caret color in the my TextInput is defaulting to red. I'm unsure why this would be happening or how to change it?</p> <p><a href="https://i.sstatic.net/5Gj7G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Gj7G.png" alt="enter image description here" /></a></p> <p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>import kivy kivy.require('2.1.0') # replace with your current kivy version ! from kivy.app import App from kivy.uix.textinput import TextInput class MyApp(App): def build(self): return TextInput(text='Hello world') if __name__ == '__main__': MyApp().run() </code></pre>
<python><kivy>
2023-04-12 21:36:52
1
3,195
Sean Bannister
76,000,010
11,922,765
Python Dataframe check if a name exists in the variable columns
<p>I want my dataframe to have two columns. I don't know what their names are going to be. I want to assign them through a variable. I want to check if one of the columns is not available, and if so, create this new column. Code:</p> <pre><code># Columns dataframe or series. It contains names of the actual columns # I get below information from another source cols_df = pd.Series(index=['main_col'],data=['A']) # This also the dataframe I get from another source df = pd.DataFrame(data={'A':[10,20,30]}) # My job is see if the given dataframe has two columns if (cols_df['main_col'] in df.columns) and (cols_df['aux_col'] not in df.columns): print(&quot;Aux col not in df&quot;) cols_df['aux_col'] = 'B' df[cols_df['aux_col']] = df[cols_df['main_col']]-10 </code></pre> <p>Present output:</p> <pre><code>_check_indexing_error will raise KeyError: 'aux_col' </code></pre> <p>Expected output:</p> <pre><code>df = A B 0 10 0 1 20 10 2 30 20 cols_df = main_col A aux_col B </code></pre>
<python><pandas><dataframe>
2023-04-12 21:35:17
1
4,702
Mainland
75,999,989
7,583,953
Why is float accurate but Decimal is wrong?
<p>We're used to floats being inaccurate for obvious reasons. I thought Decimals were supposed to be accurate though because they represent all the base 10 digits</p> <p>This code gives us the right answer of 9</p> <pre><code>print(27*3/9) </code></pre> <p>So I thought, oh that's integer multiplication followed by division, that's why it's accurate</p> <p>But nope, this gives us the correct 9.0:</p> <pre><code>print(27*(3/9)) </code></pre> <p>So why does</p> <pre><code>print(Decimal(27)*(Decimal(3)/Decimal(9))) </code></pre> <p>give the incorrect 8.999999999999999999999999999</p> <p>I understand that 3/9 is 0.333... which can't be represented as a terminating decimal. But why then is base 2 float accurate?</p>
<python><floating-point><decimal>
2023-04-12 21:31:08
2
9,733
Alec
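The short answer to the question above is where the rounding happens. `Decimal(3) / Decimal(9)` is cut off at the context precision (28 significant digits by default), and multiplying by 27 cannot restore the lost digits; the binary float version merely happens to round back to exactly 9.0. Doing the division last keeps every intermediate value exact, as this sketch shows:

```python
from decimal import Decimal

third = Decimal(3) / Decimal(9)    # rounded to 28 significant digits
product = Decimal(27) * third      # 8.999999999999999999999999999

# Reordered so the inexact division happens last: 27 * 3 = 81 is exact,
# and 81 / 9 = 9 is exact as well.
exact = Decimal(27) * Decimal(3) / Decimal(9)
```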
75,999,781
12,436,050
Fill Pandas column with a fix value from a specific index
<p>I have a dataframe with following columns.</p> <pre><code> id label parent 0 ID LABEL SC% split 1 https://lists/100000000006/terms MedDRA RMS List 2 https://lists/100000000006/terms/2937736 Nausea 3 https://lists/100000000006/terms/2937735 Fever </code></pre> <p>I would like to fill column 'parent' with value 'MedDRA RMS List' from index 2 onwards. How can I achieve this. Till now, the solutions I have seen fill the value to the entire column. Any help is highly appreciated.</p> <p>The expected output is:</p> <pre><code> id label parent 0 ID LABEL SC% split 1 https://lists/100000000006/terms MedDRA RMS List 2 https://lists/100000000006/terms/2937736 Nausea MedDRA RMS List 3 https://lists/100000000006/terms/2937735 Fever MedDRA RMS List </code></pre>
<python><pandas>
2023-04-12 20:58:53
1
1,495
rshar
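A sketch for the question above: `.loc` with a label slice is inclusive and writes in place, so a single assignment fills the column from a given index onward.

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["ID", "https://lists/100000000006/terms",
           "https://lists/100000000006/terms/2937736",
           "https://lists/100000000006/terms/2937735"],
    "label": ["LABEL", "MedDRA RMS List", "Nausea", "Fever"],
    "parent": ["SC% split", "", "", ""],
})

# Fill 'parent' from row label 2 onward; earlier rows are left untouched
df.loc[2:, "parent"] = "MedDRA RMS List"
```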
75,999,762
3,315,629
Issue consistently mapping a python list to dictionary
<p>I am trying to understand why my list (update_features in code below) isn't consistently mapping properly to a dictionary (update_reno_map in code below). There is a bit of code leading up to the mapping, that i've included to give context to the way the list is created.</p> <pre><code>renouv_dict = {} layer = iface.activeLayer() fields = layer.fields() age = fields.indexFromName('elec_moa_hta_2023') year_proj_renew = fields.indexFromName('a_proj_r') provider = layer.dataProvider() with open(&quot;traitements/renouvellement_par_annee.csv&quot;, newline='') as csvfile: year_reader = csv.reader(csvfile, delimiter=';', quotechar='|') for row in year_reader: year = 0 if row[0] != 'annee': renouv_dict[row[0]] = [] for item in row[1:]: renouv_dict[row[0]].append((year,item)) year += 1 for key, value in renouv_dict.items(): year_renew = key print(year_renew) for renew_tuples in value: age_renew = renew_tuples[0] number_renew = renew_tuples[1] print(str(age_renew) +'--&gt;AGE') if int(number_renew) &gt; 0: try: processing.run(&quot;qgis:randomselectionwithinsubsets&quot;, {'INPUT':layer, 'FIELD':&quot;age_a_x&quot;, 'METHOD':0, 'NUMBER':number_renew}) update_features = [] for feature in layer.selectedFeatures(): if feature['age_a_x'] == age_renew and not feature['a_proj_r']: update_features.append(feature) if len(update_features) &lt; int(number_renew): print('ADDING FEATURES---') print(len(update_features)) print(number_renew) features_to_add = int(number_renew) - len(update_features) print(features_to_add) renew_age_features = list(layer.getFeatures('&quot;age_a_x&quot; =' + str(age_renew) + 'and &quot;a_proj_r&quot; is null')) num_avail = len(list(renew_age_features)) print('number of available to add' + str(num_avail)) for x in range (0,features_to_add): try: update_features.append(renew_age_features[x]) except IndexError: print ('no more entities at this age') print('number of features total to update ---' + str(len(update_features))) update_reno_map = {} 
a_proj_r_field = provider.fields().indexFromName('a_proj_r') for feature in update_features: update_reno_map[feature.id()] = {a_proj_r_field:year_renew} print(len(update_reno_map)) print(update_reno_map) provider.changeAttributeValues(update_reno_map) except QgsProcessingException: print('cant run') updateMap = {} age_a_x_field = provider.fields().indexFromName('age_a_x') features = provider.getFeatures() for feature in features: new_age = int(feature['age_a_x']) + 1 updateMap[feature.id()] = {age_a_x_field:new_age} provider.changeAttributeValues(updateMap) </code></pre> <p>This is what renouv_dict is, as the csv isn't included here:</p> <pre><code>{'2023': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '3'), (26, '0'), (27, '0'), (28, '0'), (29, '0'), (30, '5'), (31, '0'), (32, '0'), (33, '5'), (34, '159'), (35, '1'), (36, '19'), (37, '0'), (38, '0'), (39, '0'), (40, '25'), (41, '0'), (42, '0'), (43, '0'), (44, '3'), (45, '1'), (46, '0'), (47, '0'), (48, '1'), (49, '0'), (50, '0'), (51, '222')], '2024': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '0'), (26, '5'), (27, '0'), (28, '0'), (29, '0'), (30, '0'), (31, '6'), (32, '0'), (33, '0'), (34, '4'), (35, '150'), (36, '0'), (37, '11'), (38, '0'), (39, '0'), (40, '0'), (41, '5'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '181')], '2025': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), 
(17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '0'), (26, '1'), (27, '6'), (28, '0'), (29, '0'), (30, '0'), (31, '0'), (32, '6'), (33, '0'), (34, '0'), (35, '3'), (36, '108'), (37, '1'), (38, '6'), (39, '0'), (40, '0'), (41, '0'), (42, '1'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '132')], '2026': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '1'), (24, '0'), (25, '0'), (26, '1'), (27, '1'), (28, '6'), (29, '0'), (30, '0'), (31, '0'), (32, '0'), (33, '5'), (34, '0'), (35, '0'), (36, '2'), (37, '63'), (38, '0'), (39, '3'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '82')], '2027': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '1'), (25, '0'), (26, '0'), (27, '1'), (28, '1'), (29, '7'), (30, '0'), (31, '0'), (32, '0'), (33, '0'), (34, '4'), (35, '0'), (36, '0'), (37, '2'), (38, '32'), (39, '0'), (40, '1'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '49')], '2028': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '1'), (26, '0'), (27, '0'), (28, '1'), (29, '1'), (30, '9'), (31, '0'), (32, '0'), (33, '0'), (34, '0'), (35, '4'), (36, '0'), (37, '0'), (38, '1'), (39, '13'), (40, '0'), (41, '0'), 
(42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '30')], '2029': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '1'), (24, '0'), (25, '0'), (26, '2'), (27, '0'), (28, '0'), (29, '1'), (30, '1'), (31, '10'), (32, '0'), (33, '0'), (34, '0'), (35, '0'), (36, '2'), (37, '1'), (38, '0'), (39, '0'), (40, '6'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '24')], '2030': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '1'), (24, '1'), (25, '0'), (26, '0'), (27, '2'), (28, '0'), (29, '0'), (30, '2'), (31, '1'), (32, '10'), (33, '0'), (34, '0'), (35, '0'), (36, '0'), (37, '2'), (38, '0'), (39, '0'), (40, '0'), (41, '2'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '21')], '2031': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '1'), (25, '1'), (26, '0'), (27, '0'), (28, '2'), (29, '0'), (30, '0'), (31, '2'), (32, '1'), (33, '8'), (34, '0'), (35, '0'), (36, '0'), (37, '0'), (38, '1'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '16')], '2032': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), 
(15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '1'), (26, '2'), (27, '0'), (28, '0'), (29, '3'), (30, '0'), (31, '0'), (32, '2'), (33, '1'), (34, '7'), (35, '0'), (36, '0'), (37, '0'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '16')], '2033': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '1'), (22, '1'), (23, '0'), (24, '0'), (25, '0'), (26, '2'), (27, '3'), (28, '1'), (29, '0'), (30, '3'), (31, '0'), (32, '0'), (33, '1'), (34, '1'), (35, '6'), (36, '0'), (37, '0'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '19')], '2034': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '1'), (23, '1'), (24, '1'), (25, '0'), (26, '0'), (27, '3'), (28, '3'), (29, '1'), (30, '0'), (31, '4'), (32, '0'), (33, '0'), (34, '1'), (35, '1'), (36, '5'), (37, '1'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '22')], '2035': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '1'), (23, '2'), (24, '1'), (25, '1'), (26, '0'), (27, '0'), (28, '3'), (29, '3'), (30, '1'), (31, '0'), (32, '4'), (33, '0'), (34, '0'), (35, '1'), (36, '1'), (37, '3'), (38, '0'), (39, '0'), (40, 
'0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '21')], '2036': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '1'), (24, '2'), (25, '2'), (26, '2'), (27, '1'), (28, '0'), (29, '3'), (30, '4'), (31, '1'), (32, '0'), (33, '3'), (34, '0'), (35, '0'), (36, '1'), (37, '1'), (38, '1'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '22')], '2037': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '1'), (25, '3'), (26, '3'), (27, '2'), (28, '1'), (29, '0'), (30, '4'), (31, '5'), (32, '0'), (33, '1'), (34, '3'), (35, '0'), (36, '0'), (37, '1'), (38, '0'), (39, '1'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '25')], '2038': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '2'), (26, '6'), (27, '4'), (28, '2'), (29, '1'), (30, '0'), (31, '5'), (32, '5'), (33, '1'), (34, '0'), (35, '2'), (36, '0'), (37, '0'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '28')], '2039': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, 
'0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '1'), (26, '3'), (27, '7'), (28, '4'), (29, '2'), (30, '1'), (31, '0'), (32, '4'), (33, '4'), (34, '0'), (35, '0'), (36, '2'), (37, '0'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '28')], '2040': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '1'), (22, '1'), (23, '0'), (24, '0'), (25, '1'), (26, '1'), (27, '4'), (28, '8'), (29, '4'), (30, '3'), (31, '1'), (32, '0'), (33, '4'), (34, '3'), (35, '0'), (36, '0'), (37, '1'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '32')], '2041': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '1'), (23, '1'), (24, '0'), (25, '0'), (26, '1'), (27, '2'), (28, '4'), (29, '9'), (30, '6'), (31, '4'), (32, '1'), (33, '0'), (34, '3'), (35, '3'), (36, '0'), (37, '1'), (38, '1'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '37')], '2042': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '1'), (24, '1'), (25, '0'), (26, '0'), (27, '2'), (28, '2'), (29, '4'), (30, '11'), (31, '7'), (32, '4'), (33, '1'), (34, '0'), (35, '3'), (36, '2'), (37, '1'), (38, 
'0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '39')], '2043': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '2'), (25, '2'), (26, '0'), (27, '0'), (28, '2'), (29, '2'), (30, '6'), (31, '13'), (32, '6'), (33, '3'), (34, '0'), (35, '0'), (36, '2'), (37, '2'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '40')], '2044': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '0'), (23, '0'), (24, '0'), (25, '3'), (26, '3'), (27, '0'), (28, '0'), (29, '2'), (30, '2'), (31, '7'), (32, '12'), (33, '6'), (34, '2'), (35, '0'), (36, '0'), (37, '1'), (38, '1'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '39')], '2045': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '1'), (22, '0'), (23, '0'), (24, '0'), (25, '1'), (26, '5'), (27, '4'), (28, '1'), (29, '0'), (30, '2'), (31, '3'), (32, '6'), (33, '11'), (34, '4'), (35, '2'), (36, '0'), (37, '1'), (38, '1'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '42')], '2046': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, 
'0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '1'), (22, '2'), (23, '0'), (24, '0'), (25, '0'), (26, '1'), (27, '6'), (28, '5'), (29, '1'), (30, '1'), (31, '3'), (32, '3'), (33, '6'), (34, '9'), (35, '4'), (36, '2'), (37, '1'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '45')], '2047': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '1'), (22, '2'), (23, '3'), (24, '0'), (25, '0'), (26, '0'), (27, '1'), (28, '7'), (29, '5'), (30, '1'), (31, '1'), (32, '3'), (33, '2'), (34, '4'), (35, '8'), (36, '3'), (37, '1'), (38, '0'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '42')], '2048': [(0, '0'), (1, '0'), (2, '0'), (3, '0'), (4, '0'), (5, '0'), (6, '0'), (7, '0'), (8, '0'), (9, '0'), (10, '0'), (11, '0'), (12, '0'), (13, '0'), (14, '0'), (15, '0'), (16, '0'), (17, '0'), (18, '0'), (19, '0'), (20, '0'), (21, '0'), (22, '1'), (23, '3'), (24, '4'), (25, '0'), (26, '0'), (27, '0'), (28, '2'), (29, '8'), (30, '6'), (31, '1'), (32, '0'), (33, '2'), (34, '2'), (35, '4'), (36, '6'), (37, '2'), (38, '1'), (39, '0'), (40, '0'), (41, '0'), (42, '0'), (43, '0'), (44, '0'), (45, '0'), (46, '0'), (47, '0'), (48, '0'), (49, '0'), (50, '0'), (51, '42')]} </code></pre> <p>Also, I was hoping to post the console output here, but it's too long. Lmk if that could help anyone help me : )</p>
<python><list><dictionary><qgis><pyqgis>
2023-04-12 20:56:20
1
1,045
user25976
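One hedged observation about the question above: `update_reno_map` is keyed by feature id, so if `update_features` ever holds the same feature twice (the random selection and the top-up loop can overlap), the duplicates silently collapse and the map ends up shorter than the list. A tiny illustration with hypothetical ids:

```python
# Hypothetical feature ids; "f1" appears twice, which can happen when the
# random selection and the top-up loop pick the same feature.
update_features = [("f1", "2023"), ("f2", "2023"), ("f1", "2023")]

# Dict comprehension keyed by id, mirroring update_reno_map in the question
update_reno_map = {fid: {"a_proj_r": year} for fid, year in update_features}

# Three entries went in, but only two distinct keys survive
sizes = (len(update_features), len(update_reno_map))
```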
75,999,656
11,388,321
Is there a better way to strip and get OpenAI responses?
<p>I'm trying to get at least 1 response for each keyword that is looped to be included in the text prompt, however when I run the python code to generate responses I get the following error:</p> <pre><code>PS C:\Users\...\Documents\Article-gen&gt; &amp; C:/Users/.../AppData/Local/Microsoft/WindowsApps/python3.11.exe c:/Users/.../Documents/Article-gen/createArticle.py ChatGPT API replies for Amazon (AMZN) stock: Traceback (most recent call last): File &quot;C:\Users\...\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\openai_object.py&quot;, line 59, in __getattr__ return self[k] ~~~~^^^ KeyError: 'text' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;c:\Users\...\Documents\Article-gen\createArticle.py&quot;, line 29, in &lt;module&gt; outputText = choice.text.strip() ^^^^^^^^^^^ File &quot;C:\Users\...\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\openai_object.py&quot;, line 61, in __getattr__ raise AttributeError(*err.args) AttributeError: text </code></pre> <p>How can I fix this so I can get at least 1 response back per keyword? Here's the Python code I'm working with:</p> <pre><code>import openai import os # Set OpenAI API key openai.api_key = &quot;&quot; # Then, you can call the &quot;gpt-3.5-turbo&quot; model modelEngine = &quot;gpt-3.5-turbo&quot; # set your input text inputText = &quot;Write a 1,500 word that is highly speculative bullish article IN YOUR OWN WORDS on {} stock and why it went up, you must include how it could affect the stock price and future outcome of the business. Include subheadings in your own words and act like you know it all and be an authoritative expert on the topic. 
Now write.&quot; # Array of keywords to generate article on keywords = [&quot;Nio (NIO)&quot;, &quot;Apple (AAPL)&quot;, &quot;Microsoft (MSFT)&quot;, &quot;Tesla (TSLA)&quot;, &quot;Meta (META)&quot;, &quot;Amazon (AMZN)&quot;] # Switches and injects keywords into API request for keyword in keywords: # set input text with the current keyword inputSent = inputText.format(keyword) # Send an API request and get a response, note that the interface and parameters have changed compared to the old model response = openai.ChatCompletion.create( model=modelEngine, messages=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: inputSent }], n = 1 ) print(&quot;ChatGPT API replies for&quot;, keyword, &quot;stock:\n&quot;) for choice in response.choices: outputText = choice.text.strip() print(outputText) print(&quot;------&quot;) print(&quot;\n&quot;) </code></pre>
<python><python-3.x><openai-api>
2023-04-12 20:41:47
1
810
overdeveloping
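For the question above: with chat models such as `gpt-3.5-turbo`, each choice carries its text under `message.content`; the `.text` attribute only exists on the legacy completions endpoint, which is why the `AttributeError` appears. The dict below is a mock shaped like the chat-completions JSON (no API call is made), just to show the access pattern:

```python
# Mock response shaped like the chat-completions JSON; no API call is made here
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "  Example article text.  "}}
    ]
}

# Chat choices expose message.content rather than a .text attribute
texts = [choice["message"]["content"].strip() for choice in mock_response["choices"]]
```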
75,999,612
2,644,016
Why is a pickled object with slots bigger than one without slots?
<p>I'm working on a program that keeps dying because of the OOM killer. I was hoping for some quick wins in reducing the memory usage without a major refactor. I tried adding <code>__slots__</code> to the most common classes but I noticed the pickled size went up. Why is that?</p> <pre class="lang-py prettyprint-override"><code>class Class: def __init__(self, a, b, c): self.a = a self.b = b self.c = c class ClassSlots: __slots__ = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;] def __init__(self, a, b, c): self.a = a self.b = b self.c = c cases = [ Class(1, 2, 3), ClassSlots(1, 2, 3), [Class(1, 2, 3) for _ in range(1000)], [ClassSlots(1, 2, 3) for _ in range(1000)] ] for case in cases: dump = pickle.dumps(case, protocol=5) print(len(dump)) </code></pre> <p>with Python 3.10 prints</p> <pre><code>59 67 22041 25046 </code></pre>
<python><pickle>
2023-04-12 20:33:14
2
651
parched
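A sketch of why the slotted pickle is bigger, for the question above: the default reduce protocol stores a plain `__dict__` for an ordinary instance, but a two-item `(dict_state, slots_state)` pair for a slotted one, and that extra wrapper costs bytes per object. This assumes CPython's default `__reduce_ex__` behaviour:

```python
class Plain:
    def __init__(self):
        self.a = 1

class Slotted:
    __slots__ = ["a"]
    def __init__(self):
        self.a = 1

# The third element of the reduce tuple is the state that gets pickled
state_plain = Plain().__reduce_ex__(2)[2]    # just the instance __dict__
state_slots = Slotted().__reduce_ex__(2)[2]  # (no __dict__, slots mapping) pair
```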
75,999,508
11,310,492
Pass a string with quotation marks as variable in Python
<p>I have a function that takes multiple string arguments in the form of e.g. <code>Subscription(items=[&quot;A&quot;, &quot;B&quot;, &quot;C&quot;])</code> (quotation marks are mandatory, alphabet is just an example here). I don't want to write the arguments manually so I have a loop like</p> <pre><code>string=&quot;&quot; for letter in alphabet.values(): tmp=&quot;\&quot;+letter+&quot;\&quot;,&quot; string=string+tmp print(string) #only for debugging Subscription(items=[string]) </code></pre> <p>Unfortunately, this does not work, nothing happens. However, if I copy the string printed to the console and paste it into the <code>items=[ ]</code>, it works. So the content seems to be fine but I suspect that something about the <code>\&quot;</code> formatting seems to be off when pasting the variable. I feel like I made a thinking mistake but I can't figure it out. Does anybody here have an idea?</p>
<python><string>
2023-04-12 20:20:34
3
335
starsnpixel
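For the question above: the quotation marks in `Subscription(items=["A", "B", "C"])` are Python string syntax, not characters inside the values, so no escaping or string-building is needed; passing the list of strings directly is enough. `Subscription` below is a hypothetical stand-in used only to show the call shape:

```python
# Hypothetical stand-in for the real Subscription call, just to show the shape
def Subscription(items):
    return items

alphabet = {"a": "A", "b": "B", "c": "C"}

# The values are already strings; pass them as a list, no manual quoting
result = Subscription(items=list(alphabet.values()))
```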
75,999,481
15,178,267
Django: can I verify email using otp in Django?
<p>I have a question about Django authentication. Is it possible for a user, before they create an account they pass through these steps?</p> <ol> <li>Enter only their email first</li> <li>I send OTP to their email</li> <li>They verify their email using the OTP</li> <li>I then ask for their username, password 1, password 2</li> </ol> <p>Please, if anybody knows whether something like this is possible using the Django REST framework, I would really appreciate an answer.</p>
<python><django><django-models><django-forms>
2023-04-12 20:16:59
1
851
Destiny Franks
75,999,468
13,132,640
Convert arbitrary-length nested lists to format for pandas dataframe?
<p>I have an optimization script which outputs data in a format similar to the fake lists below:</p> <pre><code>l1 = [1,2,3,4] l2 = [[5,6,7,8], [9,10,11,12]] l3 = [[13,14,15,16], [17,18,19,20]] </code></pre> <p>All lists are of the same length always (at least, the lists which contain values), but some are stored within a larger list container (l2 or l3 in this example). I ultimately want each individual list to be a separate column in a pandas dataframe (e.g., 1,2,3,4 is a column, 5,6,7,8 is a column, etc.). However, the number of lists within l2 or l3 will vary.</p> <p>What is the best way to unpack these lists or otherwise get into a pandas dataframe?</p> <p>What's throwing me off is doing this in a way which will always work regardless of the number of lists in l2 and l3.</p>
<python><pandas><dataframe>
2023-04-12 20:13:56
2
379
user13132640
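A sketch for the question above: wrapping the flat list so every input has the same list-of-lists shape lets the containers be concatenated regardless of how many inner lists `l2` or `l3` hold. The `col{i}` names are placeholders:

```python
import pandas as pd

l1 = [1, 2, 3, 4]
l2 = [[5, 6, 7, 8], [9, 10, 11, 12]]
l3 = [[13, 14, 15, 16], [17, 18, 19, 20]]

# [l1] gives the flat list the same nesting as l2 and l3, so the
# concatenation works however many inner lists l2 and l3 contain.
columns = [l1] + l2 + l3
df = pd.DataFrame({f"col{i}": col for i, col in enumerate(columns)})
```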
75,999,444
2,052,436
How to iterate numpy array (of tuples) in list manner
<p>I am getting an error <code>TypeError: Iterator operand or requested dtype holds references, but the REFS_OK flag was not enabled</code> when iterating numpy array of tuples as below:</p> <pre><code>import numpy as np tmp = np.empty((), dtype=object) tmp[()] = (0, 0) arr = np.full(10, tmp, dtype=object) for a, b in np.nditer(arr): print(a, b) </code></pre> <p>How to fix this?</p>
<python><python-3.x><numpy><iterable-unpacking>
2023-04-12 20:10:02
2
5,087
user2052436
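For the question above: an object array is already an ordinary Python iterable, so the tuples can be unpacked directly without `np.nditer` (and therefore without the `REFS_OK` flag). The sketch below fills the array element by element, which is equivalent to the 0-d `np.full` trick:

```python
import numpy as np

# Build the object array of tuples element by element
arr = np.empty(10, dtype=object)
for i in range(10):
    arr[i] = (0, 0)

# Plain iteration unpacks each stored tuple; no nditer needed
pairs = [(a, b) for a, b in arr]
```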
75,999,436
182,683
Parsing input text in a strange format
<p>I have an input document with data in the following format. The three example target words are 'overlook', 'lettered', and 'resignation'. Each is followed by a list of synonyms or, if none were found, just the word None. Because the target word is not included in the list of synonyms, I've prepended &quot;tgws_&quot; to it for identification purposes. Input document:</p> <pre><code>tgws_overlook ['omit', 'forget', 'ignore', 'discount'] tgws_overlook ['verb'] tgws_lettered None tgws_lettered ['adj.'] tgws_resignation [ 'dejection', 'surrender', 'pessimism', 'defeatism', 'acceptance', 'abdication'] tgws_resignation ['noun'] </code></pre> <p>Note that each target word appears twice; I only want it to appear once in the output. I need to read each line in, and then output a new document with the data looking as follows. Here, though, I'm just printing the output. If the string beginning with tgws_ is a new string, ie if it hasn’t been seen before, then save it in a variable called target_word. If it has been seen before, then ignore it. In the case of None, we just print the target word followed by a hyphen and the part of speech (pos), followed by a dash and the word None, all on one line. Otherwise, we print out the target word on one line, pos on the next line, and synonyms on a third line. Here's what I'm looking for:</p> <pre><code>tgws_overlook POS: verb ['omit', 'forget', 'ignore', 'discount'] tgws_lettered – adj. - None tgws_resignation POS: noun [ 'dejection', 'surrender', 'pessimism', 'defeatism', 'acceptance', 'abdication'] </code></pre> <p>Here's the code I wrote, that isn't quite doing it. It repeats targetwords like 6 times... Something is wrong with the loop. 
And perhaps there is a better way to do this...</p> <pre><code>def main(): wordlist = [] current_word = &quot;&quot; target_word = &quot;&quot; with open(input_filename, &quot;r&quot;) as infile: counter = 0 pos = &quot;&quot; for line in infile: if line.startswith(&quot;tgws_&quot;): target_word = line if line.startswith((&quot;['adv.']&quot;, &quot;['pronoun']&quot;, &quot;['conjunction']&quot;, &quot;['noun']&quot;, &quot;['verb']&quot;, &quot;['adj.']&quot; )): pos = line.strip(&quot;['']&quot;) elif line.startswith(&quot;['&quot;): wordlist = line elif line.startswith(&quot;None&quot;): wordlist = &quot;[None]&quot; print(target_word, pos, wordlist) current_word = target_word if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><string><parsing><line>
2023-04-12 20:08:53
1
773
David
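A hedged sketch of one way to collect the data for the question above: group the lines by target word first, classifying each payload as a part of speech or a synonym list, and only emit output once per word afterwards. `ast.literal_eval` parses the bracketed lists safely:

```python
import ast

# Sample input lines, as in the question's document
lines = [
    "tgws_overlook ['omit', 'forget', 'ignore', 'discount']",
    "tgws_overlook ['verb']",
    "tgws_lettered None",
    "tgws_lettered ['adj.']",
]

POS_TAGS = {"adv.", "pronoun", "conjunction", "noun", "verb", "adj."}
entries = {}  # target word -> {"pos": ..., "synonyms": ...}

for line in lines:
    word, _, rest = line.partition(" ")
    info = entries.setdefault(word, {"pos": None, "synonyms": None})
    if rest.strip() == "None":
        continue  # synonyms stay None
    values = ast.literal_eval(rest)  # safe parse of the bracketed list
    if len(values) == 1 and values[0] in POS_TAGS:
        info["pos"] = values[0]
    else:
        info["synonyms"] = values
```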
75,999,364
12,762,467
Why is it not possible to iterate over pandas dataframes?
<p>Let there be several similar dataframes that an operation is to be performed on, e.g. dropping or renaming columns. One may want to do it in a loop:</p> <pre class="lang-py prettyprint-override"><code>this = pd.DataFrame({'text': ['Hello World']}) that = pd.DataFrame({'text': ['Hello Gurl']}) for df in [this, that]: df = df.rename(columns={'text': 'content'}) </code></pre> <p>No exception is raised, however, the dataframes remain unchanged. Why is that and how can I iterate over dataframes without having to type the same line of code dozens of times?</p> <p>On other hand, operations like creating new columns do work:</p> <pre class="lang-py prettyprint-override"><code>for df in [this, that]: df['content'] = df.text </code></pre>
<python><pandas><dataframe><loops><iteration>
2023-04-12 19:59:19
4
373
Zwiebak
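For the question above: `rename` returns a new DataFrame, and `df = df.rename(...)` only rebinds the loop variable to that copy; the objects named `this` and `that` never change. (Column assignment works because it mutates the existing object in place.) Collecting the results and rebinding the outer names is one fix:

```python
import pandas as pd

this = pd.DataFrame({"text": ["Hello World"]})
that = pd.DataFrame({"text": ["Hello Gurl"]})

# rename returns a copy; rebinding the loop variable discards it, so
# collect the renamed frames and rebind the outer names instead.
this, that = (df.rename(columns={"text": "content"}) for df in (this, that))
```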
75,999,085
16,707,518
Pick random values from a second table based on join in Python / Pandas
<p>Suppose I have a Python dataframe:</p> <pre><code>A B C A B </code></pre> <p>...and a second dataframe</p> <pre><code>A 3 A 2 A 4 B 5 B 2 B 8 B 7 C 1 C 5 </code></pre> <p>I want to join the second dataframe to the first - but for each value in the first frame, the join should be a random selection from the second row of the second dataframe picking only from where the first column is the same value.</p> <p>So, for example, for the first value A in the first dataframe, I'd look in the second table and it would pick randomly from the values in the 2nd row whose first row value is an A - i.e. randomly select one of 3, 2 or 4. For the second value B, I'd pick randomly from 5,2,8 or 7. The end result I'd simply want a dataframe like:</p> <pre><code>A 2 B 8 C 1 B 7 A 4 </code></pre>
<python><pandas><join><random>
2023-04-12 19:21:24
2
341
Richard Dixon
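A sketch for the question above: build one candidate list per key with `groupby`, then draw independently for every row of the first frame, so repeated keys can receive different values:

```python
import random
import pandas as pd

left = pd.DataFrame({"key": ["A", "B", "C", "A", "B"]})
right = pd.DataFrame({"key": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
                      "val": [3, 2, 4, 5, 2, 8, 7, 1, 5]})

# One candidate list per key, then an independent random draw per left row
pools = right.groupby("key")["val"].apply(list)
left["val"] = left["key"].map(lambda k: random.choice(pools[k]))
```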
75,999,041
7,984,318
pandas how to check if column not empty then apply .str.replace in one line code
<p>Code:</p> <pre><code>df['Rep'] = df['Rep'].str.replace('\\n', ' ') </code></pre> <p>Issue: if df['Rep'] is empty or null, there will be an error:</p> <pre><code>Failed: Can only use .str accessor with string values! </code></pre> <p>Is there any way to handle the situation when the column value is empty or null? If it is empty or null, just ignore that row.</p>
<python><pandas><dataframe>
2023-04-12 19:15:40
1
4,094
William
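For the question above: the `.str` accessor fails when the column holds non-string values (for example a column that is entirely NaN). One hedged fix is to apply the replacement only where the value actually is a string and leave the other rows untouched:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Rep": ["a\\nb", np.nan, None]})

# Apply the replacement only to rows that hold real strings;
# NaN/None rows pass through untouched.
mask = df["Rep"].apply(lambda x: isinstance(x, str))
df.loc[mask, "Rep"] = df.loc[mask, "Rep"].str.replace("\\n", " ", regex=False)
```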
75,998,941
4,092,142
Databricks Python wheel based on Databricks Workflow. Acces job_id & run_id
<p>I'm using Python (as a Python wheel application) on <strong>Databricks</strong>.</p> <p>I deploy &amp; run my jobs using <strong>dbx</strong>.</p> <p>I defined some <strong>Databricks Workflows</strong> using <strong>Python wheel tasks</strong>.</p> <p>Everything is working fine, but I'm having issues extracting <strong>&quot;databricks_job_id&quot;</strong> &amp; <strong>&quot;databricks_run_id&quot;</strong> for logging/monitoring purposes.</p> <p>I'm used to defining <strong>{{job_id}}</strong> &amp; <strong>{{run_id}}</strong> as parameters in a &quot;Notebook Task&quot; or other task types (see <a href="https://stackoverflow.com/questions/63018871/how-do-you-get-the-run-parameters-and-runid-within-databricks-notebook">How do you get the run parameters and runId within Databricks notebook?</a>), but with a Python wheel I'm not able to define these:</p> <p>With a Python wheel task, parameters are basically an array of strings:</p> <blockquote> <p>[&quot;/dbfs/Shared/dbx/projects/myproject/66655665aac24e748d4e7b28c6f4d624/artifacts/myparameter.yml&quot;,&quot;/dbfs/Shared/dbx/projects/myproject/66655665aac24e748d4e7b28c6f4d624/artifacts/conf&quot;]</p> </blockquote> <p>Adding &quot;<strong>{{job_id}}</strong>&quot; &amp; &quot;<strong>{{run_id}}</strong>&quot; in this array doesn't seem to work...</p> <p>Do you have any ideas? I don't want to use any REST API during my workload just to extract these ids...</p>
<python><pyspark><databricks><azure-databricks><dbutils>
2023-04-12 19:01:00
1
1,286
Gohmz
75,998,924
3,970,738
What is the difference between manager.Pool and Pool in python multiprocessing?
<p>Say I want to share a dictionary between processes. If I have defined a manager, what is the difference between instantiating a pool using <code>manager.Pool()</code> and <code>multiprocessing.Pool()</code>?</p> <p>Ex: What is the difference between the two <code>with</code> statements in <code>main_1</code> and <code>main_2</code>?</p> <pre><code>import multiprocessing as mp import time from random import random def func(d): pid = mp.current_process().pid time.sleep(3 * random()) d[pid] = 'I added this' def main_1(): print('Start of main 1') with mp.Manager() as manager: d = manager.dict() with manager.Pool() as pool: # using manager pool.map(func, [d] * 4) print(d) def main_2(): print('Start of main 2') manager = mp.Manager() try: d = manager.dict() with mp.Pool() as pool: # using multiprocessing pool.map(func, [d] * 4) print(d) finally: manager.shutdown() if __name__ == '__main__': main_1() main_2() </code></pre> <h3>More generally</h3> <p>Are all processes started after a Manager exists in the scope automatically served by it?</p>
<python><multiprocessing>
2023-04-12 18:58:17
1
501
Stalpotaten
75,998,669
15,439,115
OutOfBoundsDatetime issue handle in pandas
<p>I am running this code and on a date older then 1677 I guess there will be issue of <code>OutOfBoundsDatetime</code> .</p> <p><strong>my code is</strong></p> <pre><code>import pandas as pd df = pd.DataFrame({'datetime_str': ['2011-01-17 23:20:00' ,'0031-01-17 23:20:00']}) df['datetime_str'] = (pd.to_datetime(df['datetime_str']).astype(int) / 10 ** 6).astype(int) </code></pre> <p>and now I want to assign the minimum date in case this error happens .and I am achieving this using this code</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'datetime_str': ['2011-01-17 23:20:00', '0031-01-17 23:20:00']}) # convert the datetime string to epoch time epoch_time = [] for dt_str in df['datetime_str']: try: epoch_time.append(int(pd.to_datetime(dt_str).timestamp())) except pd.errors.OutOfBoundsDatetime: epoch_time.append(int(pd.Timestamp('1970-09-21 00:12:43.145225').timestamp())) df['epoch_time'] = epoch_time print(df['epoch_time']) </code></pre> <p>i am able to achieve my goal but I think this is not best way to do with panda as iterating over all and I want to save epoch in milliseconds. Is there any better way ?</p>
<python><pandas><datetime><epoch>
2023-04-12 18:21:08
0
309
Ninja
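A vectorized alternative to the per-row loop above: `errors='coerce'` turns out-of-bounds dates into `NaT` instead of raising, so the fallback can be filled in one pass and the result converted straight to milliseconds.

```python
import pandas as pd

df = pd.DataFrame({'datetime_str': ['2011-01-17 23:20:00',
                                    '0031-01-17 23:20:00']})

# errors='coerce' maps any OutOfBoundsDatetime value to NaT instead of
# raising, keeping the whole conversion vectorized.
dt = pd.to_datetime(df['datetime_str'], errors='coerce')

# Substitute the chosen fallback timestamp for every NaT, then convert
# nanoseconds-since-epoch to milliseconds.
fallback = pd.Timestamp('1970-09-21 00:12:43.145225')
df['epoch_ms'] = dt.fillna(fallback).astype('int64') // 10**6
```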
75,998,560
10,918,680
Pairwise plot of 2D heatmap in Plotly Express
<p>I have a Pandas dataframe like the following:</p> <pre><code> df = pd.DataFrame({'gender': [1,2,1,1,2,1], 'rating': [2,1,1,3,4,5], 'speed': [1,5,5,3,2,4], 'value':[4,4,3,2,2,1], 'appearance':[1,2,3,3,1,1], 'will_buy': [2,2,1,5,2,3]}) </code></pre> <p>This is for a consumer study where columns are all categorical and only takes on a limited set of fixed values. For instance, in 'gender', 1 might denote 'male', and 2 might denote 'female'. In 'value', 1 might mean 'poor', and 5 might mean 'excellent'.</p> <p>It would be useful to see a pairwise plot of the data to notice any trend.</p> <p>I tried to use Plotly Express to create a pair plot, this is for a Streamlit dashboard:</p> <pre><code>pairplot_fig = px.scatter_matrix(df, dimensions = df.columns) st.plotly_chart(pairplot_fig) </code></pre> <p><a href="https://i.sstatic.net/pO9g4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pO9g4.png" alt="enter image description here" /></a></p> <p>As you can see, due to the categorical nature of the data, the pair plot does not tell a lot of information. For instance, there might be many observations at a certain location, but it only shows up as one dot. Moreover, the column names on the left edge are cluttered due to lack of spacing.</p> <p>I then tried to create a 2D heatmap that will show the number of observations at each location. This will help uncover insights like &quot;many people who selected 5 for value also tend to select 5 for speed.&quot;</p> <pre><code> heatmap_fig = px.density_heatmap(df, x= 'gender', y='rating', marginal_x=&quot;histogram&quot;, marginal_y=&quot;histogram&quot;) st.plotly_chart(heatmap_fig, theme = None) </code></pre> <p><a href="https://i.sstatic.net/X38PZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X38PZ.png" alt="enter image description here" /></a></p> <p>Unfortunately, I can only figure out how to generate the heatmap of 1 column VS 1 column. 
It would be ideal to generate a heatmap that is many columns to many columns, just like the pair plot.</p> <p>I hope to do this in Plotly Express as it's interactive. But if that's not possible, a solution in other plotting packages like Seaborn would also be helpful.</p>
<python><pandas><heatmap><plotly>
2023-04-12 18:04:11
2
425
user173729
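Plotly Express has no built-in categorical pair grid, but the counts behind each panel can be precomputed with `pd.crosstab` for every column pair and then fed to `px.density_heatmap` (or any heatmap) one subplot at a time. A minimal sketch of the counting step, using the question's frame; the plotting call is left as a hedged comment:

```python
import itertools
import pandas as pd

df = pd.DataFrame({'gender': [1, 2, 1, 1, 2, 1],
                   'rating': [2, 1, 1, 3, 4, 5],
                   'speed': [1, 5, 5, 3, 2, 4],
                   'value': [4, 4, 3, 2, 2, 1],
                   'appearance': [1, 2, 3, 3, 1, 1],
                   'will_buy': [2, 2, 1, 5, 2, 3]})

# One contingency table per unordered column pair: cell (i, j) holds the
# number of respondents who answered i on the row variable and j on the
# column variable, which is exactly what each heatmap panel should show.
pair_counts = {
    (x, y): pd.crosstab(df[x], df[y])
    for x, y in itertools.combinations(df.columns, 2)
}

# Hedged plotting sketch (assumes plotly is installed; not run here):
# import plotly.express as px
# fig = px.density_heatmap(df, x='gender', y='rating')
```

Each table in `pair_counts` can be drawn into one cell of a `make_subplots` grid, giving the many-columns-to-many-columns view the question asks for.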
75,998,534
495,990
Understanding the difference between these two python requests POST calls (data vs. json args)
<p>This is a toy, non-reproducible example as I can't share the original. I think it's answerable, and might help others. From other SO posts like <a href="https://stackoverflow.com/questions/26685248/difference-between-data-and-json-parameters-in-python-requests-package">this</a> and <a href="https://stackoverflow.com/questions/34051500/python-requests-getting-back-unexpected-character-encountered-while-parsing-va">this</a>, my understanding is that given some dictionary <code>d</code> of params, these are equivalent:</p> <pre><code>requests.post(url, data=json.dumps(d)) requests.post(url, json=d) </code></pre> <p>The parameters for a token endpoint were defined in documentation like so:</p> <ul> <li>url: {{base_url}}/token</li> <li>parameters <ul> <li>grant_type={{password}}</li> <li>username={{username}}</li> <li>password={{password}}</li> <li>scope={&quot;account&quot;:&quot;{{account}}&quot;, &quot;tenant&quot;:&quot;{{tenant}}&quot;}</li> </ul> </li> </ul> <p>I started with this, with variables loaded from a .env file:</p> <pre><code>resp = requests.post(f'{base_url}/token', json={'grant_type': 'password', 'username': uname, 'password': pwd, 'scope': {'account': account, 'tenant': tenant}}) resp.text # '{&quot;error&quot;:&quot;unsupported_grant_type&quot;}' </code></pre> <p>I tried changing to the data argument, and got a more sane error:</p> <pre><code>resp = requests.post(f'{base_url}/token', data={'grant_type': 'password', 'username': uname, 'password': pwd, 'scope': {'account': account, 'tenant': tenant}}) resp.text # '{&quot;error&quot;:&quot;invalid_grant&quot;,&quot;error_description&quot;:&quot;{\\&quot;ErrorMessage\\&quot;:\\&quot;Error trying to Login - User [username] Account [] Unexpected character encountered while parsing value: a. </code></pre> <p>I tried a few other things like forcing quotes around args (e.g. 
<code>{'account': f&quot;{account}&quot;}</code>) without success, and ultimately succeeded with this &quot;hybrid&quot; method:</p> <pre><code>resp = requests.post(f'{base_url}/token', data={'grant_type': 'password', 'username': uname, 'password': pwd, 'scope': json.dumps({'account': account, 'tenant': tenant})}) </code></pre> <p>My questions:</p> <ul> <li>is this nuance &quot;real&quot; vs. the straightforward reading of the linked questions? Namely, it seemed like one <em>either</em> uses <code>data=json.dumps(d)</code> or <code>json=d</code>, but I have not found an answer mixing the two (and wrapping the entire <code>data</code> arg in <code>json.dumps()</code> breaks my working final version)</li> <li>as a relative noob in APIs/network things, would this be discernible to me from the documentation arguments listed above, or was trial and error the only way to discover this?</li> <li>given my final solution was there a better/more correct way to pass these params?</li> </ul>
<python><python-requests>
2023-04-12 18:00:03
1
10,621
Hendy
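The asymmetry in the question can be reproduced without any network call: `data=` form-encodes each value with `str()`, so a nested dict arrives at the server as a Python `repr` (single quotes) that no JSON parser accepts, while `json.dumps`-ing just the nested part sends valid JSON inside an otherwise ordinary form body. (Using `json=d` instead would JSON-encode the *entire* body, which a form-based token endpoint then fails to read at all, hence `unsupported_grant_type`.) A sketch with only the standard library; the field values are placeholders:

```python
import json
from urllib.parse import urlencode, parse_qs

scope = {'account': 'acct-123', 'tenant': 'ten-456'}  # placeholder values

# What requests does for data={...}: every value is stringified, so the
# nested dict becomes its Python repr with single quotes.
naive = urlencode({'grant_type': 'password', 'scope': scope})
naive_scope = parse_qs(naive)['scope'][0]

# The "hybrid": the body is still form-encoded (so grant_type is read
# normally), but the nested scope travels as a real JSON string.
hybrid = urlencode({'grant_type': 'password', 'scope': json.dumps(scope)})
hybrid_scope = parse_qs(hybrid)['scope'][0]

try:
    json.loads(naive_scope)
    naive_is_valid_json = True
except json.JSONDecodeError:
    naive_is_valid_json = False
```

This is why the server reported "Unexpected character encountered while parsing value": it tried to JSON-parse the repr-style `scope` string from the plain `data=` attempt.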
75,998,511
3,505,206
polars - memory allocation of N bytes failed
<p>Trying to execute a statement on a file of 30 Gbs containing 26 million records with 33 columns. When we execute on a sample with 10,000 rows, works without any errors. problem exists on the full file. Noticed that I was able to execute the code in jupyter, however this fails when running from VS Code on Windows 10 64 bit.</p> <pre class="lang-py prettyprint-override"><code>( df .group_by(person_id_key, person_source_id_key, &quot;person_source_type&quot;) .agg( # All Names pl.concat_list( pl.col(full_name_key).fill_null(&quot;&quot;), pl.col(nickname_key).fill_null(&quot;&quot;).str.split(&quot;|&quot;), pl.col(first_name_key).fill_null(&quot;&quot;) + &quot; &quot; + pl.col(last_name_key).fill_null(&quot;&quot;) ) .list.eval(pl.element().filter(~pl.element().is_in(special_removal))) .flatten() .unique() .alias(&quot;name&quot;), # All Companies pl.concat_list(org_name_key, master_org_name_key) .flatten() .unique() .alias(&quot;company&quot;) ) .with_columns(pl.col('name').list.len().alias(&quot;size_of_name&quot;)) .filter( pl.col(&quot;size_of_name&quot;) &gt;= 1 ) .drop(&quot;size_of_name&quot;) ) </code></pre> <p>is it a problem with my script or should I try a different version of polars than 0.17.2?</p> <p>Let me know in comments if there is any other information that will be helpful, the dataset is confidential but I can share a synthetic version if necessary.</p>
<python><python-polars>
2023-04-12 17:57:34
0
456
Jenobi
75,998,501
21,420,742
Adding a new column with count in Python
<p>I have a dataset that looks at reports to a manager and would like to create a new column that shows those counts.</p> <p>What I have:</p> <pre><code> ID ManagerID 101 105 102 103 103 105 104 103 105 110 </code></pre> <p>Output I want:</p> <pre><code>ID ManagerID Count 101 105 0 102 103 0 103 105 2 104 103 0 105 110 2 </code></pre> <p>I have tried by doing:<code>df['count'] = df.groupby(['ID'])['ManagerID'].transform('nunique')</code> this gives me numbers that don't actually equal any of the value s. Any suggestions?</p>
<python><python-3.x><pandas><dataframe><group-by>
2023-04-12 17:54:52
2
473
Coding_Nubie
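One hedged way to get the expected output above: count how often each ID appears in the ManagerID column, then map those counts back onto ID, defaulting to 0 for people who manage no one. (The question's `groupby('ID')...transform('nunique')` counts managers per employee, which is why its numbers looked wrong.)

```python
import pandas as pd

df = pd.DataFrame({'ID': [101, 102, 103, 104, 105],
                   'ManagerID': [105, 103, 105, 103, 110]})

# value_counts() tallies direct reports per manager; .map() looks each
# ID up in that tally, and IDs that never appear as a manager get NaN,
# which fillna(0) turns into the desired zero.
df['Count'] = df['ID'].map(df['ManagerID'].value_counts()).fillna(0).astype(int)
```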
75,998,434
52,917
how can I customize the way pytest prints objects?
<p>Is there a way I can control the way pytest creates string representations from objects?</p> <p>specifically I am using <code>mongoengine</code> where the objects of type <code>Document</code> are just lazily outputed to console as <code>&quot;&lt;Document: Documment object&gt;&quot;</code> and I'd like to create a more meaningful conversion to string (probably utilizing the <code>Document.to_json()</code> function )</p> <p>can I get pytest to call a custom function whenever it needs to print out an object?</p>
<python><pytest><mongoengine>
2023-04-12 17:45:56
0
2,449
Aviad Rozenhek
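pytest asks plugins for a custom comparison rendering through the `pytest_assertrepr_compare` hook, normally placed in `conftest.py`. The sketch below uses a stand-in `Document` class (mongoengine is not imported here) whose `to_json()` plays the role of the real one; the hook signature itself is pytest's documented one.

```python
# conftest.py (sketch)
import json


class Document:                      # stand-in for mongoengine.Document
    def __init__(self, **fields):
        self.fields = fields

    def to_json(self):               # mimics Document.to_json()
        return json.dumps(self.fields, sort_keys=True)


def pytest_assertrepr_compare(config, op, left, right):
    # pytest calls this when an assert like `left == right` fails;
    # returning a list of lines replaces the unhelpful default
    # "<Document: Document object>" representation.
    if isinstance(left, Document) and isinstance(right, Document):
        return [f"{left.to_json()} {op} {right.to_json()}"]
    return None  # fall back to pytest's default rendering
```

For plain `print`-style output outside failed asserts, defining `__repr__` on the document class (for example, delegating to `to_json()`) also changes what pytest echoes, since pytest uses `repr()` there.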
75,998,423
7,025,033
numpy replace array elements with numpy arrays, according to condition
<pre><code>subst1 = numpy.array([2, 2, 2, 2]) subst2 = numpy.array([3, 3, 3, 3]) a = numpy.array([[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0,]]) b = numpy.where(0==a, subst1, subst2) </code></pre> <p><strong>Result:</strong></p> <pre><code>&gt;&gt;&gt; a array([[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) &gt;&gt;&gt; b array([[3, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2]]) </code></pre> <p><strong>What I want:</strong></p> <pre><code>array([[[3,3,3,3], [2,2,2,2], [2,2,2,2], [2,2,2,2]], [[2,2,2,2], [2,2,2,2], [2,2,2,2], [2,2,2,2]], [[2,2,2,2], [2,2,2,2], [2,2,2,2], [2,2,2,2]]]) </code></pre> <p>I know this does not work because the <code>subst*</code> arrays are used elementwise.</p> <p>It may not be possible with where, alternative solutions are also welcome.</p> <p>I <em>want</em> to use numpy arrays as replacements, I know something similar can be done, if I replace the <code>subst*</code> arrays with <code>bytes</code>. I want an efficient solution, I am doing this for performance comparison with another solution - which has its own issues.</p> <p>I guess this would make a 3D array out of a 2D, but I am not sure.</p>
<python><numpy><numpy-ndarray>
2023-04-12 17:42:20
2
1,230
Zoltan K.
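`np.where` broadcasts, so the whole-array substitution asked for above needs only an extra trailing axis on the condition: a (3, 4, 1) mask against (4,)-shaped replacements yields the desired (3, 4, 4) result.

```python
import numpy as np

subst1 = np.array([2, 2, 2, 2])
subst2 = np.array([3, 3, 3, 3])
a = np.array([[1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])

# (a == 0) has shape (3, 4); adding a length-1 last axis makes it
# (3, 4, 1), which broadcasts against the (4,)-shaped substitution
# arrays, so every scalar in `a` is replaced by a whole row of 4 values.
b = np.where((a == 0)[..., None], subst1, subst2)
```

This makes a 3-D array out of the 2-D input, exactly as the question guesses, and stays fully vectorized.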
75,998,304
13,689,939
How to Convert Pandas fillna Function with mean into SQL (Snowflake)?
<p><strong>Problem</strong></p> <p>I'm converting a Python Pandas data pipeline into a series of views in Snowflake. The transformations are mostly straightforward, but some of them seem to be more difficult in SQL. I'm wondering if there are straightforward methods.</p> <p><strong>Question</strong></p> <p>How can I write a Pandas <code>fillna(df['col'].mean())</code> as simply as possible using SQL?</p> <p><strong>Example</strong></p> <p>Here's a sample dataframe with the result I'm looking for:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; df = pd.DataFrame({'somenull':[0, 0, 1, 3, 5, np.nan, np.nan]}) &gt;&gt;&gt; df['nonull']=df['somenull'].fillna(df['somenull'].mean()) &gt;&gt;&gt; df somenull nonull 0 0.0 0.0 1 0.0 0.0 2 1.0 1.0 3 3.0 3.0 4 5.0 5.0 5 NaN 1.8 6 NaN 1.8 </code></pre>
<python><sql><pandas><migration><snowflake-cloud-data-platform>
2023-04-12 17:27:54
1
986
whoopscheckmate
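The Pandas `fillna(mean)` maps to `COALESCE` plus a window-function `AVG` in SQL; Snowflake supports both. The sketch below runs the statement against SQLite purely so it can execute locally (SQLite 3.25+ has window functions); the SELECT itself is plain ANSI SQL that should carry over to a Snowflake view.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (somenull REAL)')
con.executemany('INSERT INTO t VALUES (?)',
                [(0,), (0,), (1,), (3,), (5,), (None,), (None,)])

# AVG(...) OVER () computes the whole-column mean (NULLs are ignored,
# just like Pandas' .mean()); COALESCE swaps that mean in only where the
# original value is NULL: the SQL analogue of fillna(df['col'].mean()).
rows = con.execute(
    'SELECT somenull, '
    '       COALESCE(somenull, AVG(somenull) OVER ()) AS nonull '
    'FROM t'
).fetchall()
```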
75,998,242
5,561,058
Python.exe in incorrect path despite adding path to Windows
<p>I added the following python paths to the Path in Windows for system variables.</p> <pre><code>C:\Project1\python\Python310-32\Scripts C:\Project1\python\Python310-32 </code></pre> <p>When I run pip upgrade I get the following error:</p> <pre><code>Unable to create process using '&quot;C:\Project0\python\Python310-32\python.exe&quot; &quot;C:\Project1\python\Python310-32\Scripts\pip.exe&quot; upgrade': The system cannot find the file specified. </code></pre> <p>My question is despite me adding the path to the system variables, restarting the computer multiple times, why is my python.exe still pointed to an old project in project0 instead of project 1.</p> <p>I am trying to install/get a particular package under site-packages to install/work for my project. But that particular library under site-packages isn't being found.</p>
<python><site-packages>
2023-04-12 17:19:26
1
471
Yash Jain
75,998,241
9,937,874
Pandas read_excel parameter sheet_name
<p>I am building a pipeline that unfortunately requires a data hand off from another team. We have found that the sheet name for a particular piece of data suffers from formatting issues. The sheet is supposed to be named by the month corresponding to the data in all lowercase. However we have received the file multiple times now with all uppercase and mixed case. I believe that this file is generated manually so the sheet is not always in the same position (most of the time it is the first sheet but occasionally it is second). Is there any way to programmatically use Pandas read_excel function to read a sheet name in a case insensitive way?</p>
<python><pandas>
2023-04-12 17:19:23
1
644
magladde
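A hedged sketch for the case-insensitive lookup: read the workbook's sheet names first, match the wanted month ignoring case (and stray whitespace), and pass the *actual* name to the parser. The file name in the usage comment is an assumption.

```python
def find_sheet(sheet_names, wanted):
    """Return the real sheet name matching `wanted`, ignoring case and
    surrounding whitespace; raise if the month sheet is missing."""
    for name in sheet_names:
        if name.strip().lower() == wanted.strip().lower():
            return name
    raise ValueError(f'no sheet matching {wanted!r} in {sheet_names}')


# Hedged usage (file name assumed; not executed here):
# import pandas as pd
# xl = pd.ExcelFile('handoff.xlsx')
# df = xl.parse(find_sheet(xl.sheet_names, 'april'))
```

Because the sheet is matched by name rather than position, it no longer matters whether the hand-off team put it first or second in the workbook.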
75,998,227
11,065,874
How to define query parameters using Pydantic model in FastAPI?
<p>I am trying to have an endpoint like <code>/services?status=New</code></p> <p><code>status</code> is going to be either <code>New</code> or <code>Old</code></p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import APIRouter, Depends from pydantic import BaseModel from enum import Enum router = APIRouter() class ServiceStatusEnum(str, Enum): new = &quot;New&quot; old = &quot;Old&quot; class ServiceStatusQueryParam(BaseModel): status: ServiceStatusEnum @router.get(&quot;/services&quot;) def get_services( status: ServiceStatusQueryParam = Query(..., title=&quot;Services&quot;, description=&quot;my desc&quot;), ): pass #my code for handling this route..... </code></pre> <p>The result is that I get an error that seems to be relevant to this issue <a href="https://github.com/tiangolo/fastapi/issues/884" rel="noreferrer">here</a></p> <p>The error says <code>AssertionError: Param: status can only be a request body, using Body()</code></p> <hr /> <p>Then I found another solution explained <a href="https://stackoverflow.com/a/67581078/11065874">here</a>.</p> <p>So, my code will be like this:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import APIRouter, Depends from pydantic import BaseModel from enum import Enum router = APIRouter() class ServiceStatusEnum(str, Enum): new = &quot;New&quot; old = &quot;Old&quot; class ServicesQueryParam(BaseModel): status: ServiceStatusEnum @router.get(&quot;/services&quot;) def get_services( q: ServicesQueryParam = Depends(), ): pass #my code for handling this route..... </code></pre> <p>It is working (and I don't understand why) - but the question is how and where do I add the description and title?</p>
<python><fastapi><openapi><pydantic>
2023-04-12 17:17:46
1
2,555
Amin Ba
75,997,756
11,167,163
In oracledb How to retrieve the column names of the REF CURSOR output from cursor.execute?
<p>Below is the code I tried which is working fin if I change</p> <pre><code>column_names by column_names = ['Col1','Col2','Col3'] </code></pre> <p>But I need it to be dynamic because the number and the name of the columns can change depending on the procedure I want to execute.</p> <pre><code>cursor.execute(GET_Transaction_History, date_value=date_value, cursor=ref_cursor) column_names = [desc[0] for desc in ref_cursor.description] df = pd.DataFrame(ref_cursor.getvalue(), columns=column_names) </code></pre> <p>The below line throw the following error :</p> <pre><code>column_names = [desc[0] for desc in ref_cursor.description] </code></pre> <p>AttributeError: 'Var' object has no attribute 'description'</p> <p>So I wonder how to retrieve column names properly.</p>
<python><oracle-database><plsql><python-oracledb>
2023-04-12 16:20:41
1
4,464
TourEiffel
75,997,720
12,576,581
Can't use aws_cdk.Fn.conditionIf in AutoScalingGroup
<p>I'm currently making a Stack using python aws cdk V2 and I want to make certain conditions be ran on the template instead in CDK synth so by updating a parameter in cloudformation the template can adapt and not have to be re-synthesised.</p> <p>Having that said, I currently have this code to make the AutoScaling Group:</p> <pre><code>autoscaling.AutoScalingGroup( self, &quot;MagentoAutoScalingInstance&quot;, auto_scaling_group_name=f&quot;MagentoAutoScalingGroup{self._parameters.environment.value_as_string}&quot;, vpc=self.vpc, vpc_subnets=ec2.SubnetSelection( subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS ), instance_type=ec2.InstanceType(self._parameters.auto_scaling_instance_type.value_as_string), instance_monitoring=aws_cdk.Fn.condition_if( self._conditions.is_production.logical_id, autoscaling.Monitoring.DETAILED, autoscaling.Monitoring.BASIC ), new_instances_protected_from_scale_in=True, machine_image=ec2.AmazonLinuxImage( generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2 ), role=self.auto_scaling_role, security_group=self.auto_scaling_sg ) </code></pre> <p>But when I try <code>cdk synth</code> I get the following type error:</p> <pre><code>TypeError: type of argument instance_monitoring must be o ne of (aws_cdk.aws_autoscaling.Monitoring, NoneType); got jsii._reference_map.InterfaceDynamicProxy instead </code></pre> <p>The option <code>Fn.condition_if</code> exists so I suppose this should be possible. Am I missing anything?</p>
<python><aws-cdk><aws-auto-scaling>
2023-04-12 16:15:57
1
888
DeadSec
75,997,702
4,621,513
What is the rationale for not allowing to change field types in TypedDict, even to a subtype, when using inheritance?
<p>PEP 589 <a href="https://peps.python.org/pep-0589/#inheritance" rel="nofollow noreferrer">states</a>:</p> <blockquote> <p>Changing a field type of a parent TypedDict class in a subclass is not allowed.</p> <p>Example:</p> <pre><code>class X(TypedDict): x: str class Y(X): x: int # Type check error: cannot overwrite TypedDict field &quot;x&quot; </code></pre> </blockquote> <p>But it gives no further explanation.</p> <p>And in fact, this is an error e.g. when using this example with Mypy (here, version 0.950):</p> <blockquote> <p>error: Overwriting TypedDict field &quot;x&quot; while extending</p> </blockquote> <p>While it makes sense to me to not allow changing the type entirely (presumably due to the Liskov substitution principle), it seems it is not even allowed to narrow the type of <code>x</code> in <code>Y</code> to a <em>subtype</em> of <code>x</code> in <code>X</code>:</p> <p>Changing <code>str</code> in <code>X</code> to <code>Any</code>, <code>Optional[int]</code> or <code>Union[int, str]</code> all raise the same error.</p> <p>However, when changing <code>X</code> from a <code>TypedDict</code> to a <code>NamedTuple</code> or a <code>dataclass</code>, this then <em>is</em> allowed.</p> <p>Why is this handled more strictly in <code>TypedDict</code>?</p>
<python><inheritance><python-typing>
2023-04-12 16:13:52
1
24,148
mkrieger1
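A hedged runtime illustration of why even narrowing is rejected: TypedDicts are structural and mutable, so if `Y` could narrow `x`, any function typed against `X` could still legally write an `X`-valid value through a `Y`-typed dict and silently break `Y`'s contract; this is the same aliasing problem that makes `list` invariant. (`NamedTuple` fields are read-only and dataclass fields are looked up on the concrete object, which is why those cases differ.)

```python
from typing import TypedDict


class X(TypedDict):
    x: str

# Pretend the forbidden narrowing were allowed:
#   class Y(X):
#       x: int          # hypothetically a "subtype" refinement
# Here a Y value is simulated by a plain dict holding an int.
y = {"x": 1}


def reset(d: X) -> None:
    d["x"] = "reset"    # perfectly legal for any X


# Y would be (structurally) an X, so nothing stops this call...
reset(y)  # type: ignore[arg-type]
# ...and now the value a Y-typed reader expects to be an int is a str.
```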
75,997,634
14,637,258
Class variable and __dict__
<p>Why <code>print(b.__dict__)</code> prints<code>{'state': 'Init'}</code> I understand, but why does <code>print(b._di)</code> print<code>{'state': 'Init'}</code>?</p> <pre><code>class A: _di = {} def __init__(self): self.__dict__ = self._di class B(A): def __init__(self): super().__init__() self.state = &quot;Init&quot; b = B() print(b.__dict__) # {'state': 'Init'} print(b._di) # {'state': 'Init'} </code></pre>
<python>
2023-04-12 16:05:24
0
329
Anne Maier
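The behavior in the question becomes visible with a second instance: `self.__dict__ = self._di` does not copy anything, it rebinds the instance's attribute storage to the one class-level dict `A._di`. Every later attribute write (like `self.state = "Init"`) therefore lands in that shared dict, which is why `b._di` (resolved through the class, since the instance dict has no `_di` key) prints `{'state': 'Init'}`.

```python
class A:
    _di = {}

    def __init__(self):
        # Rebind the instance's attribute storage to the CLASS dict:
        # no copy is made, so every instance now shares A._di.
        self.__dict__ = self._di


class B(A):
    def __init__(self):
        super().__init__()
        self.state = "Init"      # actually written into A._di


b = B()
shared = b.__dict__ is A._di     # True: one dict object, several names
b2 = B()
b2.state = "Changed"             # visible through b (and A._di) as well
```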
75,997,627
10,282,088
Ace editor is showing the error: Line too long
<p>I use ace-linters for ace editor. It is showing the relevant linter warnings for each line on the left bar.</p> <p><a href="https://i.sstatic.net/AkjWh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AkjWh.png" alt="enter image description here" /></a></p> <p>I have this <em>line too long</em> error. but I do not need this to show. Is it possible to remove this error?</p> <p>Is there any way to have more control on the rules used by ace linters for python language. I noticed that it uses ruff for python linting. <a href="https://github.com/mkslanc/ace-linters" rel="nofollow noreferrer">https://github.com/mkslanc/ace-linters</a></p>
<python><angular><ace-editor><linter>
2023-04-12 16:04:32
1
538
Abdul K Shahid
75,997,543
10,271,487
Pandas getting nsmallest avg for each column
<p>I have a dataframe of values and I'm trying to curvefit a lower bound based on avg of the nsmallest values.</p> <p>The dataframe is organized with theline data (y-vals) in each row, and the columns are ints from 0 to end (x-vals) and I need to return the nsmallest y-vals for each x value ideally to avg out and return as a series if possible with xy-vals.</p> <p>DataFrame nsmallest() doesn't return nsmallest in each column individually which is what I want/need.</p>
<python><pandas>
2023-04-12 15:54:35
1
309
evan
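`DataFrame.nsmallest` ranks whole rows by one column, but the per-column version the question needs is a one-liner with `apply`: each column is ranked independently and the mean of its n smallest values comes back as one entry of a Series indexed by the column labels (the x-values). The sample frame is an assumption matching the described layout.

```python
import pandas as pd

# Assumed layout from the question: rows are lines (y-values),
# integer columns are the x-positions.
df = pd.DataFrame({0: [5.0, 1.0, 4.0, 2.0],
                   1: [9.0, 7.0, 8.0, 6.0],
                   2: [3.0, 3.5, 2.5, 9.0]})

n = 2
# For every column independently: take its n smallest values and average
# them, giving a lower-bound y for each x, ready for curve fitting.
lower_bound = df.apply(lambda col: col.nsmallest(n).mean())
```

`lower_bound.index` supplies the x-values and `lower_bound.values` the y-values for the fit.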
75,997,515
657,693
Get All Combinations of N-Length For Different Sized Input Lists
<p>I have seen other questions on using <code>itertools</code> to generate combinations from a single list &amp; even a list of lists, but I am looking for something slightly different.</p> <p>I have a list of lists of differing lengths (some are 2-attributes long, some are 4-attributes long). I need to be able to generate all combinations of lists that contain all elements from any of the lists that ADD up to 6 final elements total.</p> <p>Here is my source data -</p> <pre><code>A = [&quot;A1&quot;, &quot;A2&quot;, &quot;A3&quot;, &quot;A4&quot;] B = [&quot;B1&quot;, &quot;B2&quot;] C = [&quot;C1&quot;, &quot;C2&quot;] D = [&quot;D1&quot;, &quot;D2&quot;] E = [&quot;E1&quot;, &quot;E2&quot;] all = [A,B,C,D,E] </code></pre> <p>My ideal (sample) output would be -</p> <pre><code>[A1, A2, A3, A4, B1, B2] [A1, A2, A3, A4, C1, C2] [A1, A2, A3, A4, D1, D2] [A1, A2, A3, A4, E1, E2] [B1, B2, C1, C2, D1, D2] [B1, B2, C1, C2, E1, E2] ... </code></pre> <p>Is there a utility in <code>itertools</code> that would allow me to do this or would I need to write a custom loop to achieve this and if so what would be the right way to accomplish this?</p>
<python><combinations>
2023-04-12 15:52:10
1
1,366
mattdonders
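itertools has no single call for "pick whole groups until the flattened length hits 6", but `combinations` over the *lists themselves* plus a length filter gets there. A sketch using the question's data (the variable `all` is renamed, since it shadows the builtin):

```python
from itertools import combinations

A = ["A1", "A2", "A3", "A4"]
B = ["B1", "B2"]
C = ["C1", "C2"]
D = ["D1", "D2"]
E = ["E1", "E2"]
groups = [A, B, C, D, E]   # renamed from `all` to avoid shadowing the builtin

TARGET = 6
result = [
    [item for grp in combo for item in grp]       # flatten the chosen groups
    for r in range(1, len(groups) + 1)
    for combo in combinations(groups, r)          # choose whole lists only
    if sum(len(grp) for grp in combo) == TARGET   # keep exactly 6 elements
]
```

For this input that yields 8 combinations: A paired with each 2-element list, plus every choice of three 2-element lists.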
75,997,482
14,333,315
python variable as a name of list item
<p>I have a list:</p> <pre><code> topics = ['topic1', 'topic2', 'topic3'] </code></pre> <p>and a dict with links to topics values index:</p> <pre><code> values = {'my_value' : &quot;topics[2]&quot;, 'your_value': &quot;topics[0]&quot;} </code></pre> <p>of cause, that statement will not work, but the idea is to have a dictionary where values will be values from required list.</p> <p>Yes, it looks like I have to write simple statement:</p> <pre><code>values = {'my_value' : topics[2], 'your_value': topics[0]} </code></pre> <p>but unfortunately, i don't have a ref to the list initially. it is better to say that at the begining, i have only &quot;values&quot; and then, list &quot;topics&quot; will be set up.</p> <p>as result, I would like to have dictionary:</p> <pre><code> values = {'my_value': 'topic3', 'your_value': 'topic1'} </code></pre>
<python><list><dictionary>
2023-04-12 15:48:00
2
470
OcMaRUS
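Since `topics` does not exist when `values` is written, one safe pattern is to store plain indices first and resolve them once the list arrives; no string evaluation or globals tricks are needed.

```python
# Step 1: only the mapping is known; remember *indices*, not code strings.
values = {'my_value': 2, 'your_value': 0}

# Step 2: the topics list is set up later...
topics = ['topic1', 'topic2', 'topic3']

# ...and the stored indices are resolved into the actual topic names.
resolved = {key: topics[i] for key, i in values.items()}
```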
75,997,169
2,228,592
Django Queryset filtering against list of strings
<p>Is there a way to combine the django queryset filters <code>__in</code> and <code>__icontains</code>.</p> <p>Ex: given a list of strings <code>['abc', 'def']</code>, can I check if an object contains anything in that list.</p> <p><code>Model.objects.filter(field__icontains=value)</code> combined with <code>Model.objects.filter(field__in=value)</code>.</p>
<python><django>
2023-04-12 15:18:02
1
9,345
cclloyd
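Django has no `__icontains`-against-a-list lookup, but OR-ing one `Q(field__icontains=...)` per term is the usual route. The ORM part is shown as a comment (Django is not importable here), and the any-term-matches predicate those Q objects express is demonstrated on plain strings:

```python
# Django version (hedged sketch, not executed here):
#   import operator
#   from functools import reduce
#   from django.db.models import Q
#   terms = ['abc', 'def']
#   query = reduce(operator.or_, (Q(field__icontains=t) for t in terms))
#   Model.objects.filter(query)

# The same predicate on plain data: keep a row if ANY term appears in it,
# case-insensitively (which is what icontains means).
terms = ['abc', 'def']
rows = ['xxABCxx', 'nothing here', 'prefix-def', 'zzz']
matches = [r for r in rows if any(t.lower() in r.lower() for t in terms)]
```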
75,997,144
14,269,252
Find the common columns between a list of Data frames
<p>I want to find the common columns between a list of Data frames, the way that I started is, I defined x1 that is a lists of list ( for each data frames columns name), then I extract each sub list to a separate list.</p> <p>I have the output as follows:</p> <pre><code>lst_1=['a1,a2,a3'] </code></pre> <p>which has to be as follows, to be able to use set(lst_1) &amp; set(lst_2)&amp; etc :</p> <pre><code> lst_1=[&quot;a1&quot;,&quot;a2&quot;,&quot;a3&quot;'] </code></pre> <p>The code</p> <pre><code> x1=[] dfs = [list of df] for i, df in enumerate(dfs): x1.append([&quot;,&quot;.join((str(i) for i in df.columns))]) globals()[f'lst_{i}']= x1[i] </code></pre>
<python><pandas>
2023-04-12 15:15:43
3
450
user14269252
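There is no need for `globals()` or per-frame comma-joined strings: set intersection over the column indexes gives the common columns in one expression. The sample frames are assumptions for illustration.

```python
import pandas as pd

# Assumed sample frames for illustration.
dfs = [pd.DataFrame(columns=['a1', 'a2', 'a3', 'b']),
       pd.DataFrame(columns=['a1', 'a3', 'c']),
       pd.DataFrame(columns=['a3', 'a1', 'd'])]

# Intersect the column sets of every frame; sorted() just produces a
# deterministic list instead of an unordered set.
common = sorted(set.intersection(*(set(df.columns) for df in dfs)))
```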
75,997,054
6,728,100
Trying to visualize topics using pyldavis but it is giving drop error
<p>I am trying to visualize topics using PyLDAVis but the following code is giving error. Not sure what the issue is.</p> <pre><code>import pyLDAvis.gensim_models pyLDAvis.enable_notebook() vis = pyLDAvis.gensim_models.prepare(lda_model, corpus, id2word) pyLDAvis.show(vis) File ~/PycharmProjects/KeyWordExtractor/venv/lib/python3.9/site-packages/pyLDAvis/_prepare.py:228, in _topic_info(topic_term_dists, topic_proportion, term_frequency, term_topic_freq, vocab, lambda_step, R, n_jobs) 225 saliency = term_proportion * distinctiveness 227 # Order the terms for the &quot;default&quot; view by decreasing saliency: --&gt; 228 default_term_info = pd.DataFrame({'saliency': saliency, 'Term': vocab, \ 229 'Freq': term_frequency, 'Total': term_frequency, \ 230 'Category': 'Default'}). \ 231 sort_values(by='saliency', ascending=False). \ 232 head(R).drop('saliency', 1) 233 # Rounding Freq and Total to integer values to match LDAvis code: 234 default_term_info['Freq'] = np.floor(default_term_info['Freq']) TypeError: drop() takes from 1 to 2 positional arguments but 3 were given </code></pre>
<python><topic-modeling><pyldavis>
2023-04-12 15:05:08
3
505
Prince Modi
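The traceback above is a pandas compatibility break rather than an LDA problem: pyLDAvis calls `DataFrame.drop('saliency', 1)` with a positional axis argument, which pandas 2.x removed. Upgrading pyLDAvis to a release that supports pandas 2, or pinning `pandas<2.0`, is the usual fix; the API difference itself is easy to demonstrate:

```python
import pandas as pd

df = pd.DataFrame({'saliency': [0.4, 0.9], 'Term': ['cat', 'dog']})

# The call inside older pyLDAvis (fails on pandas >= 2.0):
#   df.drop('saliency', 1)   # TypeError: drop() takes 1-2 positional args

# Keyword form accepted by every recent pandas:
trimmed = df.drop(columns='saliency')
```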
75,996,969
5,983,691
Arrays differ while using numpy hstack
<p>I have over 1200 images that I resize to be the same (256, 256) size:</p> <pre class="lang-py prettyprint-override"><code>filenames = glob('data/*.png') for filename in filenames: im = skimage.io.imread(filename) im = skimage.transform.resize(im, (256, 256), anti_aliasing=True) im = skimage.util.img_as_ubyte(im) skimage.io.imsave(filename, im) </code></pre> <p>And when I combine certain images via the following:</p> <pre class="lang-py prettyprint-override"><code>filenames = sorted(glob('data/pic_f.*.png')) k = 0 for filename in tqdm(filenames): images = [filename, filename.replace('_f', '_r')] images = [Image.open(image) for image in images] min_shape = sorted([(np.sum(image.size), image.size) for image in images])[0][1] imgs_comb = np.hstack((np.asarray(image.resize(min_shape)) for image in images)) imgs_comb = Image.fromarray(imgs_comb) imgs_comb.save('data/mugshot_comp.{}.png'.format(k)) k += 1 </code></pre> <p>I get the following error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-9-ea68e88f7337&gt; in &lt;module&gt; 7 8 min_shape = sorted([(np.sum(image.size), image.size) for image in images])[0][1] ----&gt; 9 imgs_comb = np.hstack((np.asarray(image.resize(min_shape)) for image in images)) 10 11 imgs_comb = Image.fromarray(imgs_comb) /anaconda3/lib/python3.7/site-packages/numpy/core/shape_base.py in hstack(tup) 286 return _nx.concatenate(arrs, 0) 287 else: --&gt; 288 return _nx.concatenate(arrs, 1) 289 290 ValueError: all the input arrays must have same number of dimensions </code></pre> <p>Anything I am missing here? Any help is highly appreciated.</p>
<python><arrays><numpy>
2023-04-12 14:57:23
1
457
osterburg
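That `ValueError` is usually a mode mismatch: one image decodes as a 2-D grayscale array and its partner as a 3-D RGB(A) array, so the pair cannot be stacked. With Pillow, calling `image.convert('RGB')` before `np.asarray` normalizes every input; the same fix expressed in pure NumPy terms (and note that `np.hstack` should be given a list rather than a generator, which newer NumPy no longer accepts):

```python
import numpy as np

gray = np.zeros((4, 4), dtype=np.uint8)        # 2-D, e.g. a grayscale PNG
rgb = np.full((4, 4, 3), 255, dtype=np.uint8)  # 3-D, an RGB PNG

# np.hstack((gray, rgb)) -> ValueError: inputs differ in dimensions.
# Give the grayscale image a channel axis and repeat it 3 times so both
# arrays are (H, W, 3); with PIL, Image.open(f).convert('RGB') achieves
# the same normalization before np.asarray.
gray3 = np.repeat(gray[..., None], 3, axis=2)
combined = np.hstack([gray3, rgb])             # a list, not a generator
```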
75,996,948
6,552,836
Bucket Constraint Optimization
<p>I'm trying to design a constraint that is based on optimizing groups/buckets of elements rather than just individual elements. Here is a list of all the constraints I have:</p> <ol> <li>Total sum of all the elements must equal a certain value</li> <li>Total sum of each product (column sum of elements) must equal a certain value</li> <li>Bucket of size 'x', where every element in the bucket must be the same</li> </ol> <p>Here is an example of what I'm after:</p> <p><strong>Constraints</strong></p> <pre><code>Total Budget = 10000 Product-A Budget = 2500 Product-B Budget = 7500 Product-A Bucket Size = 3 Product-B Bucket Size = 2 Below is an illustration of the buckets (have placed them randomly, optimizer should decide where to best place the bucket within each column): Product-A Product-B _______ Week 1 | | Week 2 | | ________ Week 3 |_______| | | Week 4 |________| Week 5 </code></pre> <p>After passing the constraints to the optimizer, I want to the optimizer to allocate elements like this (every element inside the bucket should be equal):</p> <p><strong>Desired Optimizer Output</strong></p> <pre><code> Product-A Product-B _______ Week 1 | 500 | 150 Week 2 | 500 | __2350__ Week 3 |__500__| | 1000 | Week 4 700 |__1000__| Week 5 300 3000 </code></pre> <p>Here is my attempt at trying to create the bucket constraint. 
I've used a simple, dummy objective function for demo purposes:</p> <pre><code># Import Libraries import pandas as pd import numpy as np import scipy.optimize as so import random # Define Objective function (Maximization) def obj_func(matrix): return -np.sum(matrix) # Create optimizer function def optimizer_result(tot_budget, col_budget_list, bucket_size_list): # Create constraint 1) - total matrix sum range constraints_list = [{'type': 'eq', 'fun': lambda x: np.sum(x) - tot_budget}, {'type': 'eq', 'fun': lambda x: (sum(x[i] for i in range(0, 10, 5)) - col_budget_list[0])}, {'type': 'eq', 'fun': lambda x: (sum(x[i] for i in range(1, 10, 5)) - col_budget_list[1])}, {'type': 'eq', 'fun': lambda x, bucket_size_list[0]: [item for item in x for i in range(bucket_size_list[0])]}, {'type': 'eq', 'fun': lambda x, bucket_size_list[1]: [item for item in x for i in range(bucket_size_list[1])]}] # Create an inital matrix start_matrix = [random.randint(0, 3) for i in range(0, 10)] # Run optimizer optimizer_solution = so.minimize(obj_func, start_matrix, method='SLSQP', bounds=[(0, tot_budget)] * 10, tol=0.01, options={'disp': True, 'maxiter': 100}, constraints=constraints_list) return optimizer_solution # Initalise constraints tot_budget = 10000 col_budget_list = [2500, 7500] bucket_size_list = [3, 2] # Run Optimizer y = optimizer_result(tot_budget, col_budget_list, bucket_size_list) print(y) </code></pre>
<python><optimization><scipy-optimize><constraint-programming>
2023-04-12 14:55:25
1
439
star_it8293
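A possible approach sketch for the scipy bucket question above (not part of the original post). SLSQP cannot choose the bucket's position, since that choice is combinatorial, so one workable pattern is to enumerate candidate bucket starts and solve a continuous subproblem for each, expressing "every element in the bucket is equal" as linear equalities between consecutive elements. The function name `solve_with_buckets` and the column-major layout are illustrative assumptions, not from the question:

```python
import numpy as np
import scipy.optimize as so

def solve_with_buckets(tot_budget, col_budgets, bucket_sizes, starts, n_weeks=5):
    """Solve the allocation with each column's bucket fixed at starts[c].

    x is flattened column-major: x[c * n_weeks + week].
    """
    n_cols = len(col_budgets)
    n = n_weeks * n_cols
    cons = [{'type': 'eq', 'fun': lambda x: np.sum(x) - tot_budget}]
    for c in range(n_cols):
        # column budget; bind the loop variable via a default argument
        cons.append({'type': 'eq',
                     'fun': lambda x, c=c: np.sum(x[c * n_weeks:(c + 1) * n_weeks]) - col_budgets[c]})
        # equality inside the bucket: consecutive bucket elements must match
        for k in range(bucket_sizes[c] - 1):
            cons.append({'type': 'eq',
                         'fun': lambda x, i=c * n_weeks + starts[c] + k: x[i] - x[i + 1]})
    # with the dummy objective -sum(x) and the total fixed, any feasible point
    # is optimal; a real objective would make the bucket placement matter
    return so.minimize(lambda x: -np.sum(x), np.full(n, tot_budget / n),
                       method='SLSQP', bounds=[(0, tot_budget)] * n,
                       constraints=cons)

# enumerate every legal bucket start per column, keep the best feasible solve
best = None
for start_a in range(5 - 3 + 1):
    for start_b in range(5 - 2 + 1):
        res = solve_with_buckets(10000, [2500, 7500], [3, 2], [start_a, start_b])
        if res.success and (best is None or res.fun < best.fun):
            best = res
```

For a genuinely combinatorial objective, a mixed-integer tool (e.g. a CP/MIP solver, as the question's `constraint-programming` tag hints) would avoid the enumeration loop entirely.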
75,996,861
11,277,108
Exception with custom Retry class to set BACKOFF_MAX
<p>I've built a helper function and custom <code>Retry</code> class so I can set <code>BACKOFF_MAX</code> for a <code>requests</code> session as per <a href="https://stackoverflow.com/a/48198708/11277108">this solution</a>:</p> <pre><code>from requests import Session from requests.adapters import HTTPAdapter, Retry class RetryRequest(Retry): def __init__(self, backoff_max: int, **kwargs): super().__init__(**kwargs) self.BACKOFF_MAX = backoff_max def create_session( retries: int, backoff_factor: float, backoff_max: int, user_agent: str = &quot;*&quot;, referer: str = None, ) -&gt; Session: session = Session() retries_spec = RetryRequest( total=retries, backoff_factor=backoff_factor, backoff_max=backoff_max, ) session.mount(&quot;https://&quot;, HTTPAdapter(max_retries=retries_spec)) headers = {&quot;User-Agent&quot;: user_agent, &quot;Referer&quot;: referer} session.headers.update(headers) return session </code></pre> <p>The <code>session</code> has been working fine for weeks but today it raised the following exception after it had been online for a few hours:</p> <pre><code> File &quot;/Users/username/GitHub/polgara/apis/base_endpoint.py&quot;, line 54, in _get response = self.client.session.get(url=url, headers=headers, params=params) File &quot;/Users/username/miniconda3/envs/capra_production/lib/python3.10/site-packages/requests/sessions.py&quot;, line 600, in get return self.request(&quot;GET&quot;, url, **kwargs) File &quot;/Users/username/miniconda3/envs/capra_production/lib/python3.10/site-packages/requests/sessions.py&quot;, line 587, in request resp = self.send(prep, **send_kwargs) File &quot;/Users/username/miniconda3/envs/capra_production/lib/python3.10/site-packages/requests/sessions.py&quot;, line 701, in send r = adapter.send(request, **kwargs) File &quot;/Users/username/miniconda3/envs/capra_production/lib/python3.10/site-packages/requests/adapters.py&quot;, line 489, in send resp = conn.urlopen( File 
&quot;/Users/username/miniconda3/envs/capra_production/lib/python3.10/site-packages/urllib3/connectionpool.py&quot;, line 787, in urlopen retries = retries.increment( File &quot;/Users/username/miniconda3/envs/capra_production/lib/python3.10/site-packages/urllib3/util/retry.py&quot;, line 581, in increment new_retry = self.new( File &quot;/Users/username/miniconda3/envs/capra_production/lib/python3.10/site-packages/urllib3/util/retry.py&quot;, line 338, in new return type(self)(**params) TypeError: RetryRequest.__init__() missing 1 required positional argument: 'backoff_max' </code></pre> <p>It looks like something has gone awry in <code>requests</code> and it has tried to create a new session with the <code>RetryRequest</code> class but because it usually only deals with a <code>Retry</code> class it didn't pass the custom parameter of <code>backoff_max</code>.</p> <p>I could get around this by setting a default argument for <code>backoff_max</code> but this defeats the object of being able to customise it when I create the session to begin with.</p> <p>Would anyone have any idea on how I'd be able to solve this?</p> <p><code>requests</code> version is 2.28.2</p>
<python><python-requests>
2023-04-12 14:47:11
1
1,121
Jossy
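A possible fix sketch for the `Retry` question above (not part of the original post). The traceback shows urllib3 re-creating the retry object via `Retry.new()`, which calls `type(self)(**params)` without the custom `backoff_max`. Giving the parameter a default and overriding `new()` so copies carry the configured value forward keeps the customisation without the default ever silently winning on retries; the try/except also covers urllib3 >= 2.0, where `backoff_max` became a real constructor argument:

```python
from urllib3.util.retry import Retry

class RetryRequest(Retry):
    def __init__(self, *args, backoff_max: int = 120, **kwargs):
        try:
            # urllib3 >= 2.0 accepts backoff_max as a normal parameter
            super().__init__(*args, backoff_max=backoff_max, **kwargs)
        except TypeError:
            # urllib3 1.x: no such parameter; fall back to the attribute
            # its backoff computation consults
            super().__init__(*args, **kwargs)
        self.BACKOFF_MAX = backoff_max

    def new(self, **kwargs):
        # urllib3 re-instantiates the retry object on every increment via
        # new() -> type(self)(**params); forward the custom parameter so
        # those copies neither crash nor silently reset it
        kwargs.setdefault("backoff_max", self.BACKOFF_MAX)
        return super().new(**kwargs)
```

The default of 120 matches urllib3's own `BACKOFF_MAX` default, so it only applies when the caller does not customise it; `create_session` from the question can mount this class unchanged.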
75,996,750
9,580,325
Inject dependencies in Flask blueprint
<p>I'm trying to inject dependencies to a flask blueprint:</p> <p>blueprints.py</p> <pre><code>from flask import Blueprint, Response from dependency_injector.wiring import inject, Provide from container import Container from controllers import ServiceController bp = Blueprint(&quot;event&quot;, __name__) @bp.route(&quot;/event&quot;, methods=[&quot;GET&quot;]) @inject def handle_event(service_controller: ServiceController = Provide[Container.service_controller]) -&gt; Response: service_controller.handle_event() return Response(&quot;OK&quot;) </code></pre> <p>I setup a container with dependencies for <code>ServiceController</code>:</p> <p>container.py</p> <pre><code>from dependency_injector import containers, providers from controllers import ServiceController class Container(containers.DeclarativeContainer): service_controller = providers.Singleton(ServiceController) </code></pre> <p>and wire this to the blueprint:</p> <p>wiring.py</p> <pre><code>from container import Container from blueprints import bp Container().wire(modules=[bp]) </code></pre> <p>controller for now has an empty constructor:</p> <p>controllers.py</p> <pre><code>class ServiceController: def handle_event(self): print(&quot;event arrived.&quot;) </code></pre> <p>and flask initialization:</p> <p>app.py</p> <pre><code>from flask import Flask from blueprints import bp app = Flask(__name__) app.register_blueprint(bp) if __name__ == &quot;__main__&quot;: app.run(debug=True) </code></pre> <p>after running above code and requesting the <code>/event</code> endpoint I get the following error:</p> <pre><code>AttributeError: 'Provide' object has no attribute 'handle_event' </code></pre> <p>What am I missing here?</p> <p>I've referenced this documentation: <a href="https://python-dependency-injector.ets-labs.org/examples/flask-blueprints.html" rel="nofollow noreferrer">https://python-dependency-injector.ets-labs.org/examples/flask-blueprints.html</a></p>
<python><flask><dependency-injection>
2023-04-12 14:36:37
1
1,145
Domin
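A note and sketch for the Flask dependency-injection question above (not part of the original post). Two things in the question's setup likely leave the `Provide` marker unresolved: `Container().wire(modules=[bp])` is handed a `Blueprint` object where `wire()` expects modules (e.g. `container.wire(modules=["blueprints"])`), and `wiring.py` is never imported by `app.py`, so wiring never runs before requests. Below is a library-free sketch of the same injection idea using Flask's own `extensions` registry; `create_app` and the `"service_controller"` key are illustrative names, not part of any API:

```python
from flask import Blueprint, Flask, Response, current_app

bp = Blueprint("event", __name__)

class ServiceController:
    def handle_event(self):
        return "event arrived."

@bp.route("/event", methods=["GET"])
def handle_event():
    # look the dependency up on the running app instead of a default argument,
    # so no wiring step has to patch the view function
    controller = current_app.extensions["service_controller"]
    controller.handle_event()
    return Response("OK")

def create_app(controller=None):
    app = Flask(__name__)
    # register the dependency once, at app construction time
    app.extensions["service_controller"] = controller or ServiceController()
    app.register_blueprint(bp)
    return app
```

Passing a stub controller into `create_app()` also makes the endpoint easy to test without the container.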
75,996,731
3,505,206
Polars groupby concat on multiple cols returning a list of unique values
<p>I have a Polars dataframe with the following data.</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({&quot;small&quot;: [&quot;Apple Inc.&quot;, &quot;Baidu Inc.&quot;, &quot;Chevron Global&quot;, &quot;Chevron Global&quot;, &quot;Apple Inc.&quot;], &quot;person&quot;: [10, 20, 30, 10, 10], &quot;comp_big&quot;: [&quot;Apple&quot;, &quot;Baidu&quot;, &quot;Chevron&quot;, &quot;Chevron&quot;, &quot;Apple&quot;]}) </code></pre> <p>I am able to group by person, but this returns a dataframe of 2 lists.</p> <pre class="lang-py prettyprint-override"><code>( df .group_by(&quot;person&quot;) .agg( pl.col(&quot;small&quot;, &quot;comp_big&quot;).unique() ) ) </code></pre> <p>Returns</p> <p><a href="https://i.sstatic.net/URWgc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/URWgc.png" alt="Groupby Result" /></a></p> <p>This is close, but I want to merge the result between small and comp_big into a single list.</p> <pre class="lang-py prettyprint-override"><code>( df .group_by(&quot;person&quot;) .agg( pl.col(&quot;small&quot;) + &quot;|&quot; + pl.col(&quot;comp_big&quot;) ) ) </code></pre> <p>This combines both into a single list, but I would want to split on the pipe and get the unique values.</p> <p>How can this be achieved?</p>
<python><python-polars>
2023-04-12 14:34:35
2
456
Jenobi
75,996,695
19,003,861
How to dynamically generate optimal zoom on folium/leaflet?
<p>I am using <code>leaflet</code> and <code>folium</code> to map out locations.</p> <p>These locations can be filtered out and therefore require something a bit dynamic.</p> <p>I want to achieve two things:</p> <ol> <li>center the user on the map between the different locations (that works);</li> <li>Now I also want to regulate the zoom level to capture all the locations on the screen - sometimes this zoom might be at street level, sometimes it might be at a country level.</li> </ol> <p>I feel my problem can be solved by using <code>fitBounds</code>, which according to documentation <em>automatically calculates the zoom to fit a rectangular area on the map</em>.</p> <p>That sounds ideal and this post here seems to be giving an answer about a similar question: <a href="https://stackoverflow.com/questions/58162200/pre-determine-optimal-level-of-zoom-in-folium">pre-determine optimal level of zoom in folium</a></p> <p>Slight problem, I just don't understand it.</p> <p>I am thinking I should be able to use the min and max longitude and latitude to generate the rectangular area leaflet documentation refers to.</p> <p>But how does that translate into the zoom levels provided by leaflet?</p> <pre><code>def function(request): markers = Model.objects.filter(location_active=True) #Min latitude &amp; longitude min_latitude = Model.objects.filter(location_active=True).aggregate(Min('latitude'))['latitude__min'] min_longitude = Model.objects.filter(location_active=True).aggregate(Min('longitude'))['longitude__min'] #Max latitude &amp; longitude max_latitude = Model.objects.filter(location_active=True).aggregate(Max('latitude'))['latitude__max'] max_longitude = Model.objects.filter(location_active=True).aggregate(Max('longitude'))['longitude__max'] # sum latitude and longitude sum_latitude = max_latitude + min_latitude sum_longitude = max_longitude + min_longitude #average position for latitude and longitude average_latitude = sum_latitude/2
print(f'average_latitude - {average_latitude} ') average_longitude = sum_longitude/2 print(f'average_longitude - {average_longitude} ') center_location = [average_latitude,average_longitude] center_zoom_start= 12 tiles_style = 'Stamen Terrain' #create Folium map m = folium.Map(location=center_location,zoom_start=center_zoom_start,tiles=tiles_style) m_access = folium.Map(location=center_location,zoom_start=center_zoom_start,tiles=tiles_style) context = {'markers':markers,'map_access':m_access._repr_html_,'map':m._repr_html_} return render(request,'template.html',context) </code></pre>
<python><django><django-views><leaflet><folium>
2023-04-12 14:31:32
1
415
PhilM
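A possible answer sketch for the folium zoom question above (not part of the original post). In folium the simplest route is to let Leaflet do the calculation client-side with <code>m.fit_bounds([[min_latitude, min_longitude], [max_latitude, max_longitude]])</code>. If an explicit <code>zoom_start</code> integer is wanted server-side, the translation the question asks about is: at zoom <code>z</code> the world is <code>256 * 2**z</code> pixels wide/high in Web Mercator, so the zoom is the largest <code>z</code> at which the bounding box's pixel extent still fits the viewport. A stdlib approximation of Leaflet's <code>getBoundsZoom</code> (the viewport size is an assumed default):

```python
import math

def zoom_for_bounds(min_lat, min_lon, max_lat, max_lon,
                    map_px=(768, 1024), tile_px=256, max_zoom=18):
    """Largest integer zoom at which the bounding box fits a map_px
    (height, width) viewport; mirrors Leaflet's getBoundsZoom logic."""
    def merc_y(lat):
        # Web-Mercator y as a fraction of total world height, in [0, 1]
        s = math.sin(math.radians(lat))
        return 0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)

    lon_frac = abs(max_lon - min_lon) / 360.0          # fraction of world width
    lat_frac = abs(merc_y(min_lat) - merc_y(max_lat))  # fraction of world height

    def fit(px, frac):
        if frac == 0:
            return max_zoom
        # world size at zoom z is tile_px * 2**z pixels; solve for z
        return math.floor(math.log2(px / (tile_px * frac)))

    return max(0, min(fit(map_px[0], lat_frac), fit(map_px[1], lon_frac), max_zoom))
```

The result can be passed straight to <code>folium.Map(location=center_location, zoom_start=...)</code>; a small bounding box yields a street-level zoom, a continent-sized one a low zoom.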
75,996,408
696,712
Get json data from incoming Flask request.json case insensitive
<p>I'm implementing an API (proxy) that accepts incoming JSON POST data in Flask. I need to process this JSON and after that send it on to a backend API, which was written in another language.</p> <p>The JSON data will be sent by end-users, and they are used to sending this JSON data case-insensitive. This means that the incoming JSON data will sometimes have uppercase keys/nodes, sometimes lowercase, and sometimes maybe camelcase or pascalcase.</p> <p>I'm using Flask's <code>request.json</code> to get the data from the request. It is parsed into a Python object, but this object will have case-sensitive keys and values. These will also be nested.</p> <p>A specific example of how I currently get data is:</p> <p><code>data['ORDERS']['authentication']['AccountToken']</code></p> <p>But my users might POST:</p> <pre><code>{ &quot;Orders&quot;: { &quot;Authentication&quot;: { &quot;AccountToken&quot;: &quot;C3485D7B&quot; }, ... </code></pre> <p>Is there a way to get <code>data['ORDERS']['authentication']['AccountToken']</code> in such a way that the complete path to that value is case-insensitive? I understand I can check for each part of the path case-insensitive separately, but that requires a lot of overhead code to get to the right child-nodes.</p> <p>I saw other solutions: <a href="https://stackoverflow.com/questions/2082152/case-insensitive-dictionary">Case insensitive dictionary</a></p> <p>I have also tried using <code>CaseInsensitiveDict</code> from the <code>requests</code> library like this: <code>data = CaseInsensitiveDict(request.json)</code>, but that only makes the first level of the object case-insensitive.</p> <p>The problem with these solutions is that they deal with dicts, while the JSON data is a dict of objects that can be lists or other objects. The solutions provided either don't work recursively or work only on dicts.</p> <p>Any help is appreciated.</p>
<python><json><flask>
2023-04-12 14:04:15
2
4,414
Erik Oosterwaal
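A possible answer sketch for the case-insensitive JSON question above (not part of the original post). Rather than a case-insensitive mapping type, a small recursive normaliser can lowercase every key (descending through lists as well) in a copy used for lookups, while the untouched original is forwarded to the backend; in Flask this would be <code>data = lower_keys(request.json)</code>. One caveat to be aware of: keys that differ only by case collide after normalisation.

```python
def lower_keys(obj):
    """Return a copy of a parsed-JSON structure with all dict keys
    lowercased, recursing through nested dicts and lists; values
    (including string values) are left untouched."""
    if isinstance(obj, dict):
        return {k.lower(): lower_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [lower_keys(v) for v in obj]
    return obj

# sample payload shaped like the question's example; "Lines" is an
# invented extra branch to show list handling
payload = {
    "Orders": {
        "Authentication": {"AccountToken": "C3485D7B"},
        "Lines": [{"SKU": "A-1"}, {"Sku": "B-2"}],
    }
}
data = lower_keys(payload)
token = data["orders"]["authentication"]["accounttoken"]
```

Every lookup path is then written in lowercase once, with no per-level case checks.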
75,996,371
10,164,750
Format one column with another column in Pyspark dataframe
<p>I have a business case where one column is to be updated based on the values of two other columns. I have given an example as below:</p> <pre><code>+-------------------------------+------+------+---------------------------------------------------------------------+ |ErrorDescBefore |name |value |ErrorDescAfter | +-------------------------------+------+------+---------------------------------------------------------------------+ |The error is in %s value is %s.|xx |z |The error is in xx value is z. | |The new cond is in %s is %s. |y |ww |The new cond is in y is ww. | +-------------------------------+------+------+---------------------------------------------------------------------+ </code></pre> <p>The <code>ErrorDescBefore</code> <code>column</code> has 2 <code>placeholders</code>, i.e. <code>%s</code>, which are to be filled by the <code>columns</code> <code>name</code> and <code>value</code>. The output is in <code>ErrorDescAfter</code>.</p> <p>Can this be achieved in PySpark? I tried <code>string_format</code> and realized that is not the right approach. Any help would be greatly appreciated.</p> <p>Thank You</p>
<python><dataframe><apache-spark><pyspark><apache-spark-sql>
2023-04-12 14:00:36
3
331
SDS
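A possible answer sketch for the PySpark question above (not part of the original post). The Python API's <code>F.format_string(format, *cols)</code> requires the format as a literal string, but Spark SQL's <code>format_string</code> can take the template from a column when invoked through <code>F.expr</code>; worth verifying on your Spark version. The substitution itself is C-style <code>%s</code> formatting, demonstrated row-wise in plain Python:

```python
# PySpark sketch (assumes Spark SQL's format_string accepts a column template):
#   from pyspark.sql import functions as F
#   df = df.withColumn("ErrorDescAfter",
#                      F.expr("format_string(ErrorDescBefore, name, value)"))

# the same substitution, row by row, in plain Python to show what it computes
rows = [
    ("The error is in %s value is %s.", "xx", "z"),
    ("The new cond is in %s is %s.", "y", "ww"),
]
# %s placeholders are filled left to right, exactly as format_string would
filled = [template % (name, value) for template, name, value in rows]
```

If the cluster's Spark version rejects a non-literal format, a UDF wrapping the same <code>template % args</code> expression is a straightforward fallback.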