Dataset columns (name: dtype, observed min to max):

QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
75,407,085
1,609,428
How to keep the continental U.S. from a shapefile at the ZIP code level?
<p>I have downloaded the large shapefile at the <code>zip code</code> level from the Census.</p> <p>The link is here: cb_2017_us_zcta510_500k.shp (<a href="https://www2.census.gov/geo/tiger/TIGER_RD18/LAYER/ZCTA520/" rel="nofollow noreferrer">https://www2.census.gov/geo/tiger/TIGER_RD18/LAYER/ZCTA520/</a>). The problem is that reading it into <code>geopandas</code> shows that, obviously, it includes Alaska and all the small islands around it.</p> <p><a href="https://i.sstatic.net/5s9t6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5s9t6.png" alt="enter image description here" /></a></p> <pre><code>gg.head(1) Out[709]: ZCTA5CE20 GEOID20 CLASSFP20 MTFCC20 FUNCSTAT20 ALAND20 \ 0 35592 35592 B5 G6350 S 298552385 AWATER20 INTPTLAT20 INTPTLON20 \ 0 235989 +33.7427261 -088.0973903 geometry 0 POLYGON ((-88.24735 33.65390, -88.24713 33.65415, -88.24656 33.65454, -88.24658 33.65479, -88.24672 33.65497, -88.24672 33.65520, -88.24626 33.65559, -88.24601 33.65591, -88.24601 33.65630, -88.24... </code></pre> <p>I know there is an easy solution in R (that uses the area of a polygon, see <a href="https://stackoverflow.com/questions/50375619/how-to-remove-all-the-small-islands-from-the-census-shapefile-zip-code-level">how to remove all the small islands from the Census Shapefile (zip code level)?</a>), but what can I do here in Python?</p> <p>Thanks!</p>
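One common approach is to keep only geometries whose centroid falls inside a rough bounding box for the lower 48 states. The box limits below are approximate, not authoritative; a minimal pure-Python sketch of the idea (with geopandas itself, the same filter is one line via the `.cx` spatial indexer, e.g. `gg = gg.cx[-125:-66.5, 24.5:49.5]`):

```python
# Approximate bounding box for the continental US (hypothetical limits).
CONUS = {"min_lon": -125.0, "max_lon": -66.5, "min_lat": 24.5, "max_lat": 49.5}

def in_conus(lon, lat, box=CONUS):
    """True if a (lon, lat) point falls inside the continental-US box."""
    return (box["min_lon"] <= lon <= box["max_lon"]
            and box["min_lat"] <= lat <= box["max_lat"])

# Applied to representative ZCTA centroids (Alabama, Alaska, Puerto Rico):
centroids = [(-88.10, 33.74), (-149.90, 61.22), (-66.11, 18.47)]
kept = [c for c in centroids if in_conus(*c)]  # only the Alabama point remains
```

With a GeoDataFrame you would apply the same test to `gg.geometry.centroid.x` / `.y`, or skip the helper entirely and use `.cx` slicing.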
<python><geopandas><shapefile>
2023-02-10 04:51:17
1
19,485
ℕʘʘḆḽḘ
75,407,052
3,368,722
Installing test files with pyproject.toml and setuptools
<p>I'm migrating an old python project to the new <code>pyproject.toml</code> based system and am having trouble with getting files that are required by tests to install. Inside the <code>pyproject.toml</code> I have:</p> <pre><code>[tool.setuptools] package-data = {&quot;my_pkg_name&quot; = [&quot;tests/*.sdf&quot;, &quot;tests/*.urdf&quot;, &quot;tests/*.xml&quot;, &quot;tests/meshes/*.obj&quot;]} [build-system] requires = [&quot;setuptools&gt;=43.0.0&quot;, &quot;wheel&quot;] build-backend = &quot;setuptools.build_meta&quot; </code></pre> <p>The tests that are run with <code>pytest</code> require the files described under <code>package-data</code>. After I build and install the build, the test files are not there. How do I get those files to be installed? <a href="https://stackoverflow.com/questions/7522250/how-to-include-package-data-with-setuptools-distutils">How to include package data with setuptools/distutils?</a> may be related, but things have changed, and I would rather not have to create a manifest file.</p> <p>The project structure looks something like:</p> <pre><code>. ├── LICENSE.txt ├── pyproject.toml ├── README.md ├── src │   ├── my_pkg_name │   │   ├── __init__.py └── tests ├── ant.xml ├── humanoid.xml ├── __init__.py ├── kuka_iiwa.urdf ├── meshes │   ├── link_0.obj │   ├── link_1.obj │   ├── link_2.obj │   ├── link_3.obj │   ├── link_4.obj │   ├── link_5.obj │   ├── link_6.obj │   └── link_7.obj └── test_transform.py </code></pre> <p>The <code>pyproject.toml</code> has no specific package discovery related settings.</p>
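One possible reason, reading the layout above: `tests/` sits at the project root, outside both `src/` and the `my_pkg_name` package, so `package-data` keyed on `"my_pkg_name"` can never match those files. A hedged sketch of declaring `tests` as its own top-level package instead (directory names taken from the question; the table-based keys assume setuptools >= 61, which added full `pyproject.toml` support):

```toml
# Sketch only: assumes setuptools >= 61 for pyproject.toml-based discovery.
[tool.setuptools.packages.find]
where = ["src", "."]                  # my_pkg_name under src/, tests at the root
include = ["my_pkg_name*", "tests*"]

[tool.setuptools.package-data]
tests = ["*.sdf", "*.urdf", "*.xml", "meshes/*.obj"]
```

The `tests/__init__.py` already present in the layout is what lets `find` treat the directory as a package in the first place.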
<python><setuptools><python-packaging><pyproject.toml>
2023-02-10 04:46:38
1
1,226
LemonPi
75,407,048
1,028,270
Is it possible to programmatically generate a pyi file from an instantiated class (for autocomplete)?
<p>I'm creating a class from a dictionary like this:</p> <pre><code>class MyClass: def __init__(self, dictionary): for k, v in dictionary.items(): setattr(self, k, v) </code></pre> <p>I'm trying to figure out how I can get Intellisense for this dynamically generated class. Most IDEs can read pyi files for this sort of thing.</p> <p>I don't want to write out a pyi file manually though.</p> <p>Is it possible to instantiate this class and programmatically write a pyi file to disk from it?</p> <p>mypy has the stubgen tool, but I can't figure out if it's possible to use it this way.</p> <p>Can I import stubgen from mypy and feed it <code>MyClass(&lt;some dict&gt;)</code> somehow?</p>
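As far as I can tell, stubgen works on modules and source files rather than live instances, but a homemade generator covering this narrow case is short: render a `.pyi` body from the instance's `__dict__`, using each value's runtime type as the annotation. A sketch (the helper name is made up):

```python
def make_stub(obj, class_name=None):
    """Render minimal .pyi text from a live object's instance attributes."""
    name = class_name or type(obj).__name__
    attrs = vars(obj)  # the instance __dict__, i.e. whatever setattr() created
    body = [f"    {k}: {type(v).__name__}" for k, v in attrs.items()] or ["    ..."]
    return "\n".join([f"class {name}:"] + body)

class MyClass:
    def __init__(self, dictionary):
        for k, v in dictionary.items():
            setattr(self, k, v)

stub = make_stub(MyClass({"name": "widget", "count": 3}))
# 'class MyClass:\n    name: str\n    count: int'
```

Writing `stub` to `my_class.pyi` next to the module (`open("my_class.pyi", "w").write(stub)`) is then enough for most IDEs to pick up the annotations.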
<python><python-3.x><mypy>
2023-02-10 04:45:58
1
32,280
red888
75,406,876
1,277,488
Issue using nested serializer with django-rest-framework
<p>I'm trying to create a nested serializer, <code>UserLoginSerializer </code>, composed of a <code>UserSerializer</code> and a <code>NotificationSerializer</code>, but I'm getting this error when it tries to serialize:</p> <blockquote> <p>AttributeError: Got AttributeError when attempting to get a value for field <code>email</code> on serializer <code>UserSerializer</code>. The serializer field might be named incorrectly and not match any attribute or key on the <code>UserSerializer</code> instance. Original exception text was: 'UserSerializer' object has no attribute 'email'.</p> </blockquote> <p>Here is my models.py:</p> <pre><code>class Notification(models.Model): kind = models.IntegerField(default=0) message = models.CharField(max_length=256) class User(AbstractUser): username = models.CharField(max_length=150, blank=True, null=True) email = models.EmailField(unique=True) customer_id = models.UUIDField(default=uuid.uuid4, editable=False) </code></pre> <p>And my serializers.py:</p> <pre><code>class UserSerializer(serializers.ModelSerializer): class Meta: model = User fields = [ &quot;id&quot;, &quot;first_name&quot;, &quot;last_name&quot;, &quot;email&quot;, &quot;customer_id&quot; ] class NotificationSerializer(serializers.ModelSerializer): class Meta: model = Notification fields = [ &quot;id&quot;, &quot;kind&quot;, &quot;message&quot;, ] class UserLoginSerializer(serializers.Serializer): user_info = UserSerializer(read_only=True) notifications = NotificationSerializer(many=True, read_only=True) </code></pre> <p>The error occurs at the last line in this endpoint:</p> <pre><code>def get_login_info(self, request): notifications = Notification.objects.filter(recipient=request.user) serializer = UserLoginSerializer( { &quot;user_info&quot;: UserSerializer(request.user), &quot;notifications&quot;: NotificationSerializer(notifications, many=True), } ) return Response(serializer.data) </code></pre> <p>What am I doing wrong?</p>
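My reading of the AttributeError (an assumption, not confirmed by the question): the nesting is fine, but the endpoint passes serializer *instances* where DRF expects the raw objects, and serializers resolve each declared field with `getattr` on whatever they are handed. A framework-free sketch of that mechanism reproduces the same failure:

```python
# Mimic how a serializer reads declared fields off the object it wraps.
class User:
    def __init__(self, email):
        self.email = email

class UserSerializer:
    fields = ["email"]
    def __init__(self, instance):
        self.instance = instance
    @property
    def data(self):
        # DRF-like behavior: read each declared field off the instance
        return {f: getattr(self.instance, f) for f in self.fields}

user = User("a@example.com")
ok = UserSerializer(user).data                  # works: {'email': 'a@example.com'}
try:
    UserSerializer(UserSerializer(user)).data   # a serializer wrapped in a serializer
except AttributeError as exc:
    err = str(exc)  # "'UserSerializer' object has no attribute 'email'"
```

If that diagnosis holds, the fix would be `UserLoginSerializer({"user_info": request.user, "notifications": notifications})`, letting the nested serializers do the wrapping themselves.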
<python><django><serialization><django-rest-framework>
2023-02-10 04:13:39
1
2,385
Dylan
75,406,821
13,176,726
How to assign the latest terms and conditions to users upon sign-in in a Django project
<p>I have the following model for the terms and conditions:</p> <pre><code>class TermsAndConditions(models.Model): text = models.TextField() date_modified = models.DateTimeField(auto_now=True) def __str__(self): return f'Terms and conditions, last modified on {self.date_modified}' class UserAgreement(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, blank=True, null=True) agreed = models.BooleanField(default=False) agreed_at = models.DateTimeField(auto_now_add=True, blank=True, null=True) terms_and_conditions = models.ForeignKey(TermsAndConditions, on_delete=models.PROTECT) def __str__(self): return f'Agreement for user {self.user.username}' </code></pre> <p>I have the following views:</p> <pre><code>@login_required def show_terms_and_conditions(request): try: user_agreement = UserAgreement.objects.get(user=request.user) except UserAgreement.DoesNotExist: user_agreement = UserAgreement.objects.create(user=request.user) terms_and_conditions = user_agreement.terms_and_conditions return render(request, 'tac/termsandconditions.html', {'terms_and_conditions': terms_and_conditions}) @login_required def agree(request): if request.method == &quot;POST&quot;: user_agreement = UserAgreement.objects.get(user=request.user) user_agreement.agreed = True user_agreement.save() return redirect(reverse('........')) else: return redirect(reverse('.......')) </code></pre> <p>My objective is to assign the latest terms and conditions to every user who signs in and, if they have not yet agreed, to show the terms to them.</p> <p>Here is what I tried:</p> <pre><code># @login_required # def show_terms_and_conditions(request): # latest_tac = TermsAndConditions.objects.latest('date_modified') # user_agreement, created = UserAgreement.objects.get_or_create(user=request.user, terms_and_conditions=latest_tac) # if created: # # If the user agreement was created in this request, set `agreed` to False and update the `agreed_at` field # user_agreement.agreed = False # user_agreement.agreed_at = 
timezone.now() # user_agreement.save() # terms_and_conditions = user_agreement.terms_and_conditions # # return render(request, 'tac/termsandconditions.html', {'terms_and_conditions': terms_and_conditions, 'user_agreement': user_agreement}) </code></pre> <p>The reason I am doing this is because I keep getting <code>NOT NULL constraint failed: tac_useragreement.terms_and_conditions_id</code></p> <p>Here is the traceback:</p> <pre><code>sqlite3.IntegrityError: NOT NULL constraint failed: tac_useragreement.terms_and_conditions_id The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\core\handlers\exception.py&quot;, line 55, in inner response = get_response(request) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\core\handlers\base.py&quot;, line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\views\generic\base.py&quot;, line 103, in view return self.dispatch(request, *args, **kwargs) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\views\generic\base.py&quot;, line 142, in dispatch return handler(request, *args, **kwargs) File &quot;C:\Users\User\Desktop\Project\my_gym\views.py&quot;, line 61, in get user_agreement = UserAgreement.objects.create(user=request.user) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\manager.py&quot;, line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\query.py&quot;, line 671, in create obj.save(force_insert=True, using=self.db) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\base.py&quot;, line 812, in save self.save_base( File 
&quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\base.py&quot;, line 863, in save_base updated = self._save_table( File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\base.py&quot;, line 1006, in _save_table results = self._do_insert( File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\base.py&quot;, line 1047, in _do_insert return manager._insert( File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\manager.py&quot;, line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\query.py&quot;, line 1790, in _insert return query.get_compiler(using=using).execute_sql(returning_fields) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\models\sql\compiler.py&quot;, line 1660, in execute_sql cursor.execute(sql, params) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\backends\utils.py&quot;, line 103, in execute return super().execute(sql, params) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\backends\utils.py&quot;, line 67, in execute return self._execute_with_wrappers( File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\backends\utils.py&quot;, line 80, in _execute_with_wrappers return executor(sql, params, many, context) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\backends\utils.py&quot;, line 89, in _execute return self.cursor.execute(sql, params) File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\utils.py&quot;, line 91, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File &quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\backends\utils.py&quot;, line 89, in _execute return self.cursor.execute(sql, params) File 
&quot;C:\Users\User\Desktop\Project\venv\lib\site-packages\django\db\backends\sqlite3\base.py&quot;, line 357, in execute return Database.Cursor.execute(self, query, params) django.db.utils.IntegrityError: NOT NULL constraint failed: tac_useragreement.terms_and_conditions_id [10/Feb/2023 20:41:47] &quot;GET /favicon.ico/ HTTP/1.1&quot; 500 177365 </code></pre>
<python><django>
2023-02-10 04:03:26
1
982
A_K
75,406,645
90,338
How to compute expectancy in a dataframe across rows
<p>I have a dataframe that contains day, symbol, strategy, and pnl. I want to analyze and compare pnl in a couple of ways.</p> <p>I'd like to get the win-rate &amp; expectancy when grouped by symbol and strategy. So I've done this:</p> <pre><code>def stats(s): winrate = s['isWinner']['count'] / (s['isWinner']['count'] + s['isLoser']['count']) expectancy = s['isWinner']['mean'] * winrate - s['isLoser']['mean'] * (1.0 - winrate) df[&quot;isWinner&quot;] = df['pnl'] &gt;= 0 df[&quot;isLoser&quot;] = df['pnl'] &lt; 0 df2 = df.groupby(['day', 'symbol', 'strategy', 'isWinner']).agg({'pnl': ['count', 'mean', 'std', 'min', 'max']}) df2.groupby(['day', 'symbol', 'strategy']).agg(stats) </code></pre> <p>Apparently, I can't do <code>s['isWinner']</code> in the <code>stats</code> function. What am I doing wrong?</p> <p>Once the stats function works, how do I add winrate and expectancy to df2?</p> <p>Am I going about this the right way? Is it necessary to create df2 from df, or is there a better way?</p>
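One way to sidestep both problems (the multi-level column lookup and the "how do I add the result back" question) is to compute both statistics in a single `groupby().apply()` that returns a labelled `Series` per group. A sketch with made-up numbers (not the asker's data):

```python
import pandas as pd

df = pd.DataFrame({
    "symbol":   ["A", "A", "A", "B", "B"],
    "strategy": ["s1", "s1", "s1", "s1", "s1"],
    "pnl":      [10.0, 20.0, -10.0, 5.0, -5.0],
})

def stats(g):
    """Win rate and expectancy for one (symbol, strategy) group's pnl Series."""
    wins, losses = g[g >= 0], g[g < 0]
    winrate = len(wins) / len(g)
    avg_win = wins.mean() if len(wins) else 0.0
    avg_loss = -losses.mean() if len(losses) else 0.0  # magnitude of the mean loss
    return pd.Series({"winrate": winrate,
                      "expectancy": avg_win * winrate - avg_loss * (1 - winrate)})

out = df.groupby(["symbol", "strategy"])["pnl"].apply(stats).unstack()
```

`out` is a DataFrame indexed by (symbol, strategy) with `winrate` and `expectancy` columns, so no second groupby over pre-aggregated columns is needed.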
<python><pandas>
2023-02-10 03:26:39
1
990
greymatter
75,406,634
10,613,037
Sort dates in mm/dd/yy and dd/mm/yy where I know the month they are from
<p>I have a column of date <strong>strings</strong> that I know are from a single month; in this case the dates are all between January and February 2020. I want to sort them in ascending order. However, they are in different formats: some in mm/dd/yy, some in dd/mm/yy. How can I sort them?</p> <pre class="lang-py prettyprint-override"><code>data = { 'date': ['1/1/2020','20/1/2020', '1/1/2020', '1/28/2020','21/1/2020', '1/25/2020', '29/1/2020'], } df = pd.DataFrame(data) print(df) </code></pre> <p>Edit</p> <p>Another sample of dates I'd like to have sorted:</p> <hr /> <pre class="lang-py prettyprint-override"><code>import pandas as pd data = {'Tgl': { 1: '1/1/2023', 2: '1/1/2023', 3: '1/3/2023', 4: '1/5/2023', 5: '1/5/2023', 6: '1/9/2023', 7: '10/1/2023', 8: '12/1/2023', 9: '16/1/2023'}} df = pd.DataFrame(data) df = pd.to_datetime(df['Tgl']) df = pd.to_datetime(df['Tgl'], dayfirst = True) </code></pre>
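Since the month is known, an ambiguous string can be disambiguated by trying both formats and keeping the parse whose month matches. A sketch with a hypothetical helper (stdlib only; in pandas the same key can drive `df.sort_values('date', key=lambda col: col.map(lambda s: parse_known_month(s, 1)))`):

```python
from datetime import datetime

def parse_known_month(s, month):
    """Parse a date string that is either m/d/Y or d/m/Y, given its known month."""
    candidates = []
    for fmt in ("%m/%d/%Y", "%d/%m/%Y"):
        try:
            candidates.append(datetime.strptime(s, fmt))
        except ValueError:
            pass  # this format cannot produce a valid date for s
    for dt in candidates:
        if dt.month == month:
            return dt
    raise ValueError(f"{s!r} cannot fall in month {month}")

dates = ['1/1/2020', '20/1/2020', '1/28/2020', '21/1/2020', '1/25/2020', '29/1/2020']
ordered = sorted(dates, key=lambda s: parse_known_month(s, month=1))
# ['1/1/2020', '20/1/2020', '21/1/2020', '1/25/2020', '1/28/2020', '29/1/2020']
```

Strings like `'1/1/2020'` parse the same way under both formats, so the ambiguity is harmless there.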
<python><pandas>
2023-02-10 03:24:28
1
320
meg hidey
75,406,630
9,386,819
Do built-in Python classes have attributes, and if so, how do I find those that are available to the class?
<p>This feels like a simple question, but I can't seem to figure out the answer after much searching. I'm wondering if, for instance, lists have attributes. By attributes I mean values that are accessed by dot notation (not methods). Do strings have them?</p> <p>If I assign a string value to a variable:</p> <p><code>test = 'test'</code></p> <p>I tried <code>dir(test)</code>, which returned a long list that included stuff like:</p> <pre><code>['__add__', '__class__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', 'capitalize', 'casefold', 'center', 'count', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'format_map', 'index', ...] </code></pre> <p>(Note that I cut items off this list to abridge it.) What are the items with underscores? The other items seem to be methods. Are there any attributes? How would I identify an attribute?</p> <p>Same question for instances of the <code>list</code> class. How do I see all the available attributes of the class?</p>
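The double-underscore ("dunder") names are special methods and attributes Python itself uses (`__add__` implements `+`, `__len__` implements `len()`, and so on). One way to answer the question empirically is to split `dir()` output by `callable()`: callables are methods, the rest are plain data attributes. A small sketch:

```python
# Split dir() output into (non-dunder) methods vs plain data attributes.
def members(obj):
    names = [n for n in dir(obj) if not (n.startswith("__") and n.endswith("__"))]
    methods = [n for n in names if callable(getattr(obj, n))]
    data_attrs = [n for n in names if not callable(getattr(obj, n))]
    return methods, data_attrs

str_methods, str_attrs = members("test")   # strings: all methods, no data attributes
cx_methods, cx_attrs = members(1 + 2j)     # complex numbers do have .real and .imag
```

Running this shows that `str` (and `list`) instances expose only methods, while numeric types such as `complex` carry genuine dot-accessed attributes like `(1 + 2j).real`.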
<python><class><attributes>
2023-02-10 03:23:26
2
414
NaiveBae
75,406,627
14,293,020
Python streamline algorithm
<p><strong>Goal:</strong></p> <p>I have 2 arrays <code>vx</code> and <code>vy</code> representing velocity components. I want to write a streamline algorithm:</p> <ol> <li>Input the coordinates of a point (<code>seed</code>)</li> <li>Evaluate which pixels are on the path of the input point based on its velocity components</li> <li>Return the indices of the points in the path of the <code>seed</code> point</li> </ol> <p><strong>Issue/Question:</strong></p> <p>I initially wrote an Euler-forward algorithm that solved my problem very poorly. I was advised to consider my problem as an Ordinary Differential Equation (ODE) where dx/dt = v_x(t) and dy/dt = v_y(t). I can <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html" rel="nofollow noreferrer">interpolate</a> my velocities but struggle with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html" rel="nofollow noreferrer">solving</a> the ODE with Scipy. How could I do that?</p> <p><strong>Homemade algorithm:</strong></p> <p>I have 2 arrays <code>vx</code> and <code>vy</code> representing velocity components. When one has a NaN, the other has one too. I have a point from which I start, the <code>seed</code> point. I want to track which cells this point went through based on the velocity components. I interpolate the velocity components <code>vx</code> and <code>vy</code> in order to input them in an ODE solver.</p> <p><strong>Example:</strong></p> <p>This code tests the algorithm for a 10x11 velocities array. 
I am blocked at the ODE solver.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import RegularGridInterpolator from scipy.integrate import odeint # Create coordinates x = np.linspace(0, 10, 100) y = np.linspace(11, 20, 90) Y, X = np.meshgrid(x, y) # Create velocity fields vx = -1 - X**2 + Y vy = 1 + X - Y**2 # Seed point J = 5 I = 14 # Interpolate the velocity fields interpvx = RegularGridInterpolator((y,x), vx) interpvy = RegularGridInterpolator((y,x), vy) # Solve the ODE to get the point's path, but I don't know what to put for the parameter t #solx = odeint(interpvx, interpvx((I,J)), np.linspace(0,1,501)) #soly = odeint(interpvy, interpvx((I,J)), np.linspace(0,1,501)) </code></pre>
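One way to finish the wiring is `scipy.integrate.solve_ivp`, treating the state as `(x, y)` and evaluating both interpolators inside the right-hand side. A sketch under a few assumptions: the meshgrid orientation is chosen to match `RegularGridInterpolator`'s `(y, x)` axis order, `fill_value=0` freezes trajectories that leave the grid, and the seed and time span are illustrative:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.integrate import solve_ivp

x = np.linspace(0, 10, 100)
y = np.linspace(11, 20, 90)
X, Y = np.meshgrid(x, y)  # shape (len(y), len(x)), matching the (y, x) grid axes
vx = -1 - X**2 + Y
vy = 1 + X - Y**2

# fill_value=0 stops the trajectory once it leaves the grid instead of erroring.
interp_vx = RegularGridInterpolator((y, x), vx, bounds_error=False, fill_value=0.0)
interp_vy = RegularGridInterpolator((y, x), vy, bounds_error=False, fill_value=0.0)

def rhs(t, p):
    """dx/dt = vx(x, y), dy/dt = vy(x, y); the interpolators take (y, x) order."""
    px, py = p
    return [interp_vx((py, px)).item(), interp_vy((py, px)).item()]

seed = [5.0, 14.0]                       # (x, y) seed point inside the grid
sol = solve_ivp(rhs, (0.0, 0.01), seed)  # short span: this field is very fast
path = sol.y                             # shape (2, n): x and y along the streamline
```

To recover pixel indices from the continuous path, `np.searchsorted(x, path[0])` and `np.searchsorted(y, path[1])` (or a nearest-cell lookup) would be a natural next step.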
<python><scipy><interpolation><ode>
2023-02-10 03:22:45
2
721
Nihilum
75,406,542
8,584,998
Pandas is Reading .xlsx Column as Datetime rather than float
<p>I obtained an Excel file with complicated formatting for some cells. Here is a sample:</p> <p><a href="https://i.sstatic.net/ZkzJ0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZkzJ0.png" alt="enter image description here" /></a></p> <p>The &quot;USDC Amount USDC&quot; column has formatting of &quot;General&quot; for the header cell, and the following for cells C2 through C6:</p> <p><a href="https://i.sstatic.net/1VoPr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1VoPr.png" alt="enter image description here" /></a></p> <p>I need to read this column into pandas as a float value. However, when I use</p> <pre><code>import pandas df = pandas.read_excel('Book1.xlsx') print(['USDC Amount USDC']) print(df['USDC Amount USDC']) </code></pre> <p>I get</p> <pre><code>['USDC Amount USDC'] 0 NaT 1 1927-06-05 05:38:32.726400 2 1872-07-25 18:21:27.273600 3 NaT 4 NaT Name: USDC Amount USDC, dtype: datetime64[ns] </code></pre> <p>I do not want these as datetimes, I want them as floats! If I remove the complicated formatting in the Excel document (change it to &quot;general&quot; in column C), they are read in as float values, like this, which is what I want:</p> <pre><code>['USDC Amount USDC'] 0 NaN 1 10018.235101 2 -10018.235101 3 NaN 4 NaN Name: USDC Amount USDC, dtype: float64 </code></pre> <p>The problem is that I have to download these Excel documents on a regular basis, and cannot modify them from the source. I have to get Pandas to understand (or ignore) this formatting and interpret the value as a float on its own.</p> <p>I'm on Pandas 1.4.4, Windows 10, and Python 3.8. Any idea how to fix this? I cannot change the source Excel file, all the processing must be done in the Python script.</p> <p>EDIT:</p> <p>I added the sample Excel document in my comment below to download for reference. Also, here are some other package versions in case these matter:</p> <pre><code>openpyxl==3.0.3 xlrd==1.2.0 XlsxWriter==1.2.8 </code></pre>
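The float is still recoverable after the fact, because Excel's 1900 date system stores a date cell as days since 1899-12-30, and that is the serial number pandas turned into a datetime. A hedged post-processing sketch (stdlib only; apply it with something like `df[col].map(lambda d: excel_serial(d) if isinstance(d, datetime) else d)`):

```python
from datetime import datetime, timedelta

EXCEL_EPOCH = datetime(1899, 12, 30)  # day zero of Excel's 1900 date system

def excel_serial(dt):
    """Recover the float Excel stored for a cell pandas decoded as a datetime."""
    return (dt - EXCEL_EPOCH) / timedelta(days=1)

# The question's spurious datetimes map back to the expected amounts:
v_pos = excel_serial(datetime(1927, 6, 5, 5, 38, 32, 726400))    # ~ 10018.235101
v_neg = excel_serial(datetime(1872, 7, 25, 18, 21, 27, 273600))  # ~ -10018.235101
```

This works for negative amounts too, since dates before the epoch come out as negative serials, matching the `-10018.235101` row above.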
<python><python-3.x><excel><pandas><openpyxl>
2023-02-10 03:03:27
2
1,310
EllipticalInitial
75,406,426
8,076,158
How do I retrieve the input parameters for the current object
<p>How do I get the raw value which was passed to MyClass by Factory Boy?</p> <pre><code>class MyClass: def __init__(self, raw): self.processed = f'***{raw}***' class MyClassFactory(factory.Factory): class Meta: model = MyClass raw = factory.fuzzy.FuzzyChoice(['a', 'b']) o = MyClassFactory.create() </code></pre>
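One straightforward option, independent of Factory Boy, is to keep the raw input on the instance itself, so whatever value the fuzzer generated is readable afterwards:

```python
class MyClass:
    def __init__(self, raw):
        self.raw = raw                   # keep the original input around
        self.processed = f'***{raw}***'

o = MyClass('a')
```

Alternatively, Factory Boy lets you pin the value at call time (`MyClassFactory(raw='a')`), in which case you already know what was passed in.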
<python><factory-boy>
2023-02-10 02:39:08
1
1,063
GlaceCelery
75,406,249
5,567,893
How can I map the node index in pytorch geometric graph?
<p>I'd like to mapping node index to the original name in pytorch geometric graph for extracting node embedding.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import torch import pandas as pd data = {'source': ['123', '2323', '545', '4928', '398'], 'target': ['2323', '398', '958', '203', '545']} df = pd.DataFrame(data) df #source target #0 123 2323 #1 2323 398 #2 545 958 #3 4928 203 #4 398 545 </code></pre> <p>Note that I inserted additional nodes without edges.</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx G = nx.from_pandas_edgelist(df, 'source', 'target') G = nx.relabel_nodes(G, { n:str(n) for n in G.nodes()}) G.add_nodes_from(['1', '309', '6749']) G.number_of_nodes() </code></pre> <p>Then I converted the networkx graph to pytorch geometric graph.</p> <pre class="lang-py prettyprint-override"><code>from torch_geometric.utils.convert import from_networkx pyg_graph = from_networkx(G) print(pyg_graph) #Data(edge_index=[2, 10], num_nodes=10) </code></pre> <p>Finally, I get the below edge index that need to mapping index to name.<br /> (It is normal situation that PyG graph has bidirectional graph automatically)</p> <pre class="lang-py prettyprint-override"><code>pyg_graph.edge_index # tensor([[0, 1, 1, 2, 2, 3, 3, 4, 5, 6], # [1, 0, 2, 1, 3, 2, 4, 3, 6, 5]]) # Expected result # 123 0 # 2323 1 # 398 2 # 545 3 # 958 4 # 4928 5 # 203 6 # 1 7 # 309 8 # 6749 9 </code></pre>
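As I understand it, `from_networkx` numbers nodes by their position in `G.nodes()` iteration order, so capturing `nodes = list(G.nodes())` just before the conversion gives the mapping for free. For the graph above that order is edge-list nodes in first-seen order followed by the three added nodes, which matches the expected result. A pure-Python sketch of the bookkeeping:

```python
# list(G.nodes()) captured before from_networkx(G); hard-coded here for clarity.
nodes = ['123', '2323', '398', '545', '958', '4928', '203', '1', '309', '6749']
name_to_idx = {name: i for i, name in enumerate(nodes)}
idx_to_name = {i: name for name, i in name_to_idx.items()}

# Relabel a slice of edge_index back to the original node names:
edge_index = [[0, 1, 1, 2], [1, 0, 2, 1]]
named_edges = [(idx_to_name[s], idx_to_name[t])
               for s, t in zip(edge_index[0], edge_index[1])]
```

The same `idx_to_name` lookup applies to rows of a node-embedding matrix: row `i` is the embedding of `idx_to_name[i]`.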
<python><networkx><pytorch-geometric>
2023-02-10 01:58:28
1
466
Ssong
75,406,196
1,050,648
Tensorflow Keras Model subclassing -- call function
<p>I am experimenting with self supervised learning using tensorflow. The example code I'm running can be found in the Keras examples website. <a href="https://keras.io/examples/vision/nnclr/" rel="nofollow noreferrer">This is the link</a> to the NNCLR example. The Github link to download the code can be <a href="https://github.com/keras-team/keras-io/blob/master/examples/vision/nnclr.py" rel="nofollow noreferrer">found here</a>. While I have no issues running the examples, I am running into issues when I try to save the pretrained or the finetuned model using <code>model.save()</code>. The error I'm getting is this:</p> <pre><code> f&quot;Model {model} cannot be saved either because the input shape is not &quot; ValueError: Model &lt;__main__.NNCLR object at 0x7f6bc0f39550&gt; cannot be saved either because the input shape is not available or because the forward pass of the model is not defined. To define a forward pass, please override `Model.call()`. To specify an input shape, either call `build(input_shape)` directly, or call the model on actual data using `Model()`, `Model.fit()`, or `Model.predict()`. If you have a custom training step, please make sure to invoke the forward pass in train step through `Model.__call__`, i.e. `model(inputs)`, as opposed to `model.call()`. </code></pre> <p>I am unsure how to override the Model.call() method. Appreciate some help.</p>
<python><tensorflow><keras>
2023-02-10 01:44:56
1
1,529
HuckleberryFinn
75,406,182
4,505,601
pyexcel get_book and get_records functions throw exceptions for XLSX files
<p>I'm trying to open an XLSX file using <code>pyexcel</code>. But it fails for both <code>get_book</code> and <code>get_records</code> with the following error. However if I try to read the same file converted to <code>xls</code> it does work. I get the files uploaded by users: so can not restrict uploading files in XLSX format.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import pyexcel &gt;&gt;&gt; workbook = pyexcel.get_book(file_name='Sample_Employee_data_xls.xlsx') Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel/core.py&quot;, line 47, in get_book book_stream = sources.get_book_stream(**keywords) File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel/internal/core.py&quot;, line 38, in get_book_stream sheets = a_source.get_data() File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel/plugins/sources/file_input.py&quot;, line 38, in get_data sheets = self.__parser.parse_file(self.__file_name, **self._keywords) File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel/plugins/parsers/excel.py&quot;, line 19, in parse_file return self._parse_any(file_name, **keywords) File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel/plugins/parsers/excel.py&quot;, line 40, in _parse_any sheets = get_data(anything, file_type=file_type, **keywords) File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel_io/io.py&quot;, line 86, in get_data data, _ = _get_data( File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel_io/io.py&quot;, line 105, in _get_data return load_data(**keywords) File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel_io/io.py&quot;, line 205, in load_data result = reader.read_all() File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel_io/reader.py&quot;, line 95, in read_all content_dict = self.read_sheet_by_index(sheet_index) File 
&quot;/home/me/env/lib/python3.10/site-packages/pyexcel_io/reader.py&quot;, line 84, in read_sheet_by_index sheet_reader = self.reader.read_sheet(sheet_index) File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel_xlsx/xlsxr.py&quot;, line 148, in read_sheet sheet = SlowSheet(native_sheet, **self.keywords) File &quot;/home/me/env/lib/python3.10/site-packages/pyexcel_xlsx/xlsxr.py&quot;, line 72, in __init__ for ranges in sheet.merged_cells.ranges[:]: TypeError: 'set' object is not subscriptable &gt;&gt;&gt; workbook = pyexcel.get_book(file_name='Sample_Employee_data_xls.xls') # working </code></pre> <p>Here is my requirements file.</p> <pre><code>asgiref==3.6.0 asttokens==2.2.1 autopep8==2.0.1 backcall==0.2.0 certifi==2022.12.7 chardet==5.1.0 charset-normalizer==2.1.1 decorator==5.1.1 Django==3.2.16 django-cors-headers==3.13.0 django-filter==22.1 djangorestframework==3.13.1 et-xmlfile==1.1.0 executing==1.2.0 idna==3.4 ipython==8.8.0 jedi==0.18.2 lml==0.1.0 matplotlib-inline==0.1.6 openpyxl==3.1.0 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 prompt-toolkit==3.0.36 ptyprocess==0.7.0 pure-eval==0.2.2 pycodestyle==2.10.0 pyexcel==0.7.0 pyexcel-io==0.6.6 pyexcel-xls==0.7.0 pyexcel-xlsx==0.6.0 Pygments==2.14.0 pytz==2022.7 requests==2.28.1 six==1.16.0 sqlparse==0.4.3 stack-data==0.6.2 texttable==1.6.7 tomli==2.0.1 traitlets==5.8.1 urllib3==1.26.13 wcwidth==0.2.5 xlrd==2.0.1 xlwt==1.3.0 </code></pre>
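The traceback dies slicing `sheet.merged_cells.ranges[:]`. My reading (an assumption from the pinned versions): openpyxl 3.1 changed that container to a set-like type, which `pyexcel-xlsx` 0.6.0 predates, so pinning `openpyxl<3.1` in the requirements is one likely workaround. A minimal reproduction of the failure mode itself:

```python
# A list supports slicing; a set does not -- the exact TypeError from the trace.
ranges_as_list = ["A1:B2", "C3:D4"]
copied = ranges_as_list[:]          # what pyexcel-xlsx expects to be able to do

try:
    set(ranges_as_list)[:]          # what an openpyxl >= 3.1 sheet hands back
except TypeError as exc:
    message = str(exc)              # "'set' object is not subscriptable"
```

If downgrading openpyxl is acceptable, `pip install 'openpyxl<3.1'` (and adjusting the requirements file accordingly) should restore the old list behavior.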
<python><excel><xlsx><pyexcel>
2023-02-10 01:42:27
1
1,178
Indika Rajapaksha
75,406,055
10,853,071
Weird behavior on String vs Categorical Dtypes
<p>I've been facing something very weird, or could I say a bug, when handling some categorical vs. string dtypes. Take a look at this simple example dataframe:</p> <pre><code>import pandas as pd import numpy as np data = pd.DataFrame({ 'status' : ['pending', 'pending','pending', 'canceled','canceled','canceled', 'confirmed', 'confirmed','confirmed'], 'partner' : ['A', np.nan,'C', 'A',np.nan,'C', 'A', np.nan,'C'], 'product' : ['afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard'], 'brand' : ['brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3'], 'gmv' : [100,100,100,100,100,100,100,100,100]}) data = data.astype({'partner':'category','status':'category','product':'category', 'brand':'category'}) </code></pre> <p>When I execute a single <code>loc</code> selection</p> <pre><code>test = data.loc[(data.partner !='A') | ((data.brand == 'A') &amp; (data.status == 'confirmed'))] </code></pre> <p>this is the output:</p> <p><a href="https://i.sstatic.net/4Jf9L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Jf9L.png" alt="First output" /></a></p> <p>Now, just convert my categorical columns to string (I am moving them back to string due to a bug related to groupby issues with categoricals, as described <a href="https://github.com/pandas-dev/pandas/issues/50100" rel="nofollow noreferrer">here</a>):</p> <pre><code>data = data.astype({'partner':'string','status':'string','product':'string', 'brand':'string'}) </code></pre> <p>And let's run the same <code>loc</code> command.</p> <pre><code>test2 = data.loc[(data.partner !='A') | ((data.brand == 'A') &amp; (data.status == 'confirmed'))] </code></pre> <p>But take a look at the output!</p> <p><a href="https://i.sstatic.net/s3Q18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s3Q18.png" alt="second output" /></a></p> <p>I am really lost as to why it does not work. I've figured out that it is something related to the categorical NaN values being converted back to strings, but I don't see why that would be a problem.</p>
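The difference likely comes down to how the two dtypes compare missing values: a categorical NaN compares as "not equal to 'A'" (True under `!=`), while the nullable `string` dtype yields `pd.NA`, which boolean indexing treats as False, silently dropping those rows. A minimal reproduction of just that:

```python
import pandas as pd
import numpy as np

s_cat = pd.Series(['A', np.nan, 'C'], dtype='category')
s_str = pd.Series(['A', np.nan, 'C'], dtype='string')

mask_cat = s_cat != 'A'  # NaN counts as "not A" -> True
mask_str = s_str != 'A'  # pd.NA propagates -> <NA>, treated as False by .loc
```

If that is the cause, adding `| data.partner.isna()` to the string-dtype condition should bring the missing-partner rows back.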
<python><pandas>
2023-02-10 01:12:49
1
457
FábioRB
75,406,037
5,091,720
Python Pandas SQLAlchemy how to make connection to a local SQL Server
<p>I am trying to connect to a local network SQL Server using SQLAlchemy. I don't know how to use SQLAlchemy for this. Other examples I have seen do not use the more modern Python (3.6+) f-string. I need to have the data in a Pandas dataframe &quot;df&quot;. I'm not 100% sure, but this local server does not seem to have a username and password requirement.</p>
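One common pattern for a password-less local SQL Server is the `mssql+pyodbc` URL with Windows authentication (`trusted_connection=yes`). The server and database names below are hypothetical placeholders, and the driver string assumes ODBC Driver 17 is installed:

```python
# Hypothetical server/database names; Windows auth means no username/password.
server, database = r"MYSERVER\SQLEXPRESS", "mydb"
conn_str = (
    f"mssql+pyodbc://@{server}/{database}"
    "?driver=ODBC+Driver+17+for+SQL+Server&trusted_connection=yes"
)

# Then, with sqlalchemy and pyodbc installed:
# from sqlalchemy import create_engine
# import pandas as pd
# engine = create_engine(conn_str)
# df = pd.read_sql("SELECT TOP 5 * FROM some_table", engine)
```

The f-string only builds the URL; `create_engine` and `pd.read_sql` do the actual work once the ODBC driver is in place.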
<python><sql-server><pandas><sqlalchemy>
2023-02-10 01:09:07
1
2,363
Shane S
75,405,951
10,620,003
Put a bar as background of bars in histogram plot
<p>I have a histogram plot where different bars have different colors. I want to show a background bar (the same width as the bar) behind each bar, and also change the bars from rectangular to oval. I attached an image that shows the way I want it to look.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pandas as pd rating = [9, 5, 7, 6] objects = ('h', 'b', 'c', 'a') y_pos = np.arange(len(objects)) cmap = plt.get_cmap('cool') norm = plt.Normalize(vmin=min(rating), vmax=max(rating)) plt.barh(y_pos, rating, align='center', color=cmap(norm(np.array(rating)))) plt.show() </code></pre> <p>The maximum length of the bars is 10. In the following image, we can see a bar with two colors (blue on a black background). Can you please help me with that?</p> <p><a href="https://i.sstatic.net/San0O.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/San0O.jpg" alt="enter image description here" /></a></p>
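The background track can be faked by drawing a second set of full-length bars first, so the value bars sit on top of them. A sketch of that part (the rounded "oval" ends would need something like `matplotlib.patches.FancyBboxPatch`, which is not shown here):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np

rating = [9, 5, 7, 6]
y_pos = np.arange(len(rating))
cmap = plt.cm.cool
norm = plt.Normalize(vmin=min(rating), vmax=max(rating))

fig, ax = plt.subplots()
# Background "track": full-length bars (max rating = 10) drawn first, in grey.
ax.barh(y_pos, [10] * len(rating), color='0.85', height=0.6)
# Value bars drawn on top, same height so they sit exactly inside the track.
bars = ax.barh(y_pos, rating, color=cmap(norm(np.array(rating))), height=0.6)
```

Drawing the track with a slightly larger `height` than the value bars would instead produce the bordered look from the attached image.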
<python><matplotlib>
2023-02-10 00:53:30
1
730
Sadcow
75,405,795
944,849
Google Cloud Buildpack custom source directory for Python app
<p>I am experimenting with Google Cloud Platform buildpacks, specifically for Python. I started with the <a href="https://github.com/GoogleCloudPlatform/buildpack-samples/tree/master/sample-functions-framework-python" rel="nofollow noreferrer">Sample Functions Framework Python</a> example app, and got that running locally, with commands:</p> <pre><code>pack build --builder=gcr.io/buildpacks/builder sample-functions-framework-python docker run -it -ePORT=8080 -p8080:8080 sample-functions-framework-python </code></pre> <p>Great, let's see if I can apply this concept on a legacy project (Python 3.7 if that matters).</p> <p>The legacy project has a structure similar to:</p> <pre><code>.gitignore source/ main.py lib helper.py requirements.txt tests/ &lt;test files here&gt; </code></pre> <p>The <code>Dockerfile</code> that came with this project packaged the <code>source</code> directory contents without the &quot;source&quot; directory, like this:</p> <pre><code>COPY lib/ /app/lib COPY main.py /app WORKDIR /app ... rest of Dockerfile here ... </code></pre> <p>Is there a way to package just the <em>contents</em> of the <code>source</code> directory using the buildpack?</p> <p>I tried to add this config to the <code>project.toml</code> file:</p> <pre><code>[[build.env]] name = &quot;GOOGLE_FUNCTION_SOURCE&quot; value = &quot;./source/main.py&quot; </code></pre> <p>But the Python modules/imports aren't set up correctly for that, as I get this error:</p> <pre><code>File &quot;/workspace/source/main.py&quot;, line 2, in &lt;module&gt; from source.lib.helper import mymethod ModuleNotFoundError: No module named 'source' </code></pre> <p>Putting both <code>main.py</code> and <code>/lib</code> into the project root dir would make this work, but I'm wondering if there is a better way.</p> <p>Related question, is there a way to see what project files are being copied into the image by the buildpack? 
I tried using verbose logging but didn't see anything useful.</p> <hr /> <p>Update:</p> <p>The python module error:</p> <pre><code>File &quot;/workspace/source/main.py&quot;, line 2, in &lt;module&gt; from source.lib.helper import mymethod ModuleNotFoundError: No module named 'source' </code></pre> <p>was happening because I moved the <code>lib</code> dir into <code>source</code> in my test project, and when I did this, Intellij updated the import statement in <code>main.py</code> without me catching it. I fixed the import, then applied the solution listed below and it worked.</p>
<python><google-cloud-functions><buildpack>
2023-02-10 00:24:13
1
15,101
user944849
75,405,790
4,348,400
Why doesn't my PriorityQueue waitlist preserve the lexicographical order of surgeries?
<p>This is a toy example of a waitlist (<code>PriorityQueue</code>) in which each surgery on the waitlist should have a lexicographical order on the pairs (p, date). The p is an integer and the date is a datetime object.</p> <p>Clearly integers in Python have an order, but so do these datetime objects. And I had thought that <a href="https://stackoverflow.com/questions/75022266/is-there-a-lexicographic-priorityqueue-in-pythons-standard-library">Is there a Lexicographic PriorityQueue in Python&#39;s standard library?</a> had taught me that I just need to implement <code>__lt__</code> for my <code>Surgery</code> object.</p> <p>But the following minimal working example shows that the order of the surgeries on the waitlist is wrong.</p> <pre class="lang-py prettyprint-override"><code>from queue import PriorityQueue as PQ
import numpy as np
import pandas as pd

np.random.seed(123)
waitlist = PQ()

class Surgery:
    def __init__(self, a, b):
        self.priority = (a, b)
    def __lt__(self, other):
        return self.priority &lt; other.priority
    def __repr__(self):
        return f'Surgery({self.priority})'

# Some fake data
x = np.random.randint(1, 3, size=10)
y = pd.date_range('2022-01-01', '2022-01-10')

# Instantiate objects and put them on the queue
for i, j in zip(x, y):
    waitlist.put(Surgery(i, j))

for s in waitlist.queue:
    print(s)
</code></pre> <p>Which outputs:</p> <pre class="lang-py prettyprint-override"><code>Surgery((1, Timestamp('2022-01-01 00:00:00', freq='D')))
Surgery((1, Timestamp('2022-01-04 00:00:00', freq='D')))
Surgery((1, Timestamp('2022-01-03 00:00:00', freq='D')))
Surgery((2, Timestamp('2022-01-02 00:00:00', freq='D')))
Surgery((1, Timestamp('2022-01-05 00:00:00', freq='D')))
Surgery((1, Timestamp('2022-01-06 00:00:00', freq='D')))
Surgery((1, Timestamp('2022-01-07 00:00:00', freq='D')))
Surgery((2, Timestamp('2022-01-08 00:00:00', freq='D')))
Surgery((2, Timestamp('2022-01-09 00:00:00', freq='D')))
Surgery((1, Timestamp('2022-01-10 00:00:00', freq='D')))
</code></pre> <p>Printing the surgeries in
order of the queue shows that the relative position of the <code>p</code> is not satisfied. I don't understand why this is the case. One guess is that the <code>PriorityQueue</code> is not actually satisfying the order, or that the order of elements in <code>waitlist.queue</code> is not the true order represented by the underlying heap.</p> <p>What is going on with the apparent queue order (and how do I fix it)?</p>
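Here is a smaller, deterministic version showing that <code>.queue</code> is just the raw binary heap (only partially ordered: only the first element is guaranteed smallest), while draining with <code>get()</code> does come out in lexicographic order:

```python
from datetime import date
from queue import PriorityQueue

class Surgery:
    def __init__(self, p, d):
        self.priority = (p, d)
    def __lt__(self, other):
        return self.priority < other.priority
    def __repr__(self):
        return f'Surgery({self.priority})'

waitlist = PriorityQueue()
for p, d in [(1, date(2022, 1, 1)), (2, date(2022, 1, 2)),
             (1, date(2022, 1, 3)), (2, date(2022, 1, 4)),
             (1, date(2022, 1, 5))]:
    waitlist.put(Surgery(p, d))

# .queue exposes the underlying heap list: only heap[0] is the minimum
print(list(waitlist.queue))

# Consuming with get() yields the true lexicographic order
drained = [waitlist.get().priority for _ in range(waitlist.qsize())]
print(drained == sorted(drained))  # True
```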
<python><pandas><datetime><priority-queue><lexicographic>
2023-02-10 00:24:01
1
1,394
Galen
75,405,785
12,424,975
pipenv and OS X Monterey - can't install dependencies from Pipfile
<p>I am running OS X Monterey 12.6.3 and I installed pipenv 2023.2.4 via brew with <code>brew install pipenv</code>.</p> <p>I have the following <a href="https://github.com/Evo-Learning-project/sai_evo_backend/blob/master/Pipfile" rel="nofollow noreferrer">Pipfile</a> and <a href="https://github.com/Evo-Learning-project/sai_evo_backend/blob/master/Pipfile.lock" rel="nofollow noreferrer">Pipfile.lock</a>, which I have been able to run <code>pipenv install</code> on on other machines.</p> <p>If I run <code>pipenv install</code> on the repository containing those files, the output ends with a bunch of failed dependencies which all look like this:</p> <pre><code>An error occurred while installing zipp==3.11.0 ; python_version &gt;= '3.7' --hash=sha256:a7a22e05929290a67401440b39690ae6563279bced5f314609d9d03798f56766 --hash=sha256:83a28fcb75844b5c0cdaf5aa4003c2d728c77e05f5aeabe8e95e56727005fbaa! Will try again. An error occurred while installing zope.interface==5.5.2 ; python_version &gt;= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4' --hash=sha256:8a2ffadefd0e7206adc86e492ccc60395f7edb5680adedf17a7ee4205c530df4 --hash=sha256:0217a9615531c83aeedb12e126611b1b1a3175013bbafe57c702ce40000eb9a0 --hash=sha256:e1574980b48c8c74f83578d1e77e701f8439a5d93f36a5a0af31337467c08fcf --hash=sha256:6e972493cdfe4ad0411fd9abfab7d4d800a7317a93928217f1a5de2bb0f0d87a --hash=sha256:4087e253bd3bbbc3e615ecd0b6dd03c4e6a1e46d152d3be6d2ad08fbad742dcc --hash=sha256:7e66f60b0067a10dd289b29dceabd3d0e6d68be1504fc9d0bc209cf07f56d189 --hash=sha256:40f4065745e2c2fa0dff0e7ccd7c166a8ac9748974f960cd39f63d2c19f9231f --hash=sha256:765d703096ca47aa5d93044bf701b00bbce4d903a95b41fff7c3796e747b1f1d --hash=sha256:a16025df73d24795a0bde05504911d306307c24a64187752685ff6ea23897cb0 --hash=sha256:fb68d212efd057596dee9e6582daded9f8ef776538afdf5feceb3059df2d2e7b --hash=sha256:f0980d44b8aded808bec5059018d64692f0127f10510eca71f2f0ace8fb11188 
--hash=sha256:959697ef2757406bff71467a09d940ca364e724c534efbf3786e86eee8591452 --hash=sha256:7579960be23d1fddecb53898035a0d112ac858c3554018ce615cefc03024e46d --hash=sha256:604cdba8f1983d0ab78edc29aa71c8df0ada06fb147cea436dc37093a0100a4e --hash=sha256:0fb497c6b088818e3395e302e426850f8236d8d9f4ef5b2836feae812a8f699c --hash=sha256:d692374b578360d36568dd05efb8a5a67ab6d1878c29c582e37ddba80e66c396 --hash=sha256:5334e2ef60d3d9439c08baedaf8b84dc9bb9522d0dacbc10572ef5609ef8db6d --hash=sha256:008b0b65c05993bb08912f644d140530e775cf1c62a072bf9340c2249e613c32 --hash=sha256:17ebf6e0b1d07ed009738016abf0d0a0f80388e009d0ac6e0ead26fc162b3b9c --hash=sha256:d169ccd0756c15bbb2f1acc012f5aab279dffc334d733ca0d9362c5beaebe88e --hash=sha256:d514c269d1f9f5cd05ddfed15298d6c418129f3f064765295659798349c43e6f --hash=sha256:a2ad597c8c9e038a5912ac3cf166f82926feff2f6e0dabdab956768de0a258f5 --hash=sha256:dc26c8d44472e035d59d6f1177eb712888447f5799743da9c398b0339ed90b1b --hash=sha256:f98d4bd7bbb15ca701d19b93263cc5edfd480c3475d163f137385f49e5b3a3a7 --hash=sha256:e74a578172525c20d7223eac5f8ad187f10940dac06e40113d62f14f3adb1e8f --hash=sha256:e945de62917acbf853ab968d8916290548df18dd62c739d862f359ecd25842a6 --hash=sha256:dbaeb9cf0ea0b3bc4b36fae54a016933d64c6d52a94810a63c00f440ecb37dd7 --hash=sha256:696f3d5493eae7359887da55c2afa05acc3db5fc625c49529e84bd9992313296 --hash=sha256:9d783213fab61832dbb10d385a319cb0e45451088abd45f95b5bb88ed0acca1a --hash=sha256:6373d7eb813a143cb7795d3e42bd8ed857c82a90571567e681e1b3841a390d16 --hash=sha256:65c3c06afee96c654e590e046c4a24559e65b0a87dbff256cd4bd6f77e1a33f9 --hash=sha256:404d1e284eda9e233c90128697c71acffd55e183d70628aa0bbb0e7a3084ed8b --hash=sha256:3218ab1a7748327e08ef83cca63eea7cf20ea7e2ebcb2522072896e5e2fceedf --hash=sha256:311196634bb9333aa06f00fc94f59d3a9fddd2305c2c425d86e406ddc6f2260d --hash=sha256:bfee1f3ff62143819499e348f5b8a7f3aa0259f9aca5e0ddae7391d059dce671 --hash=sha256:655796a906fa3ca67273011c9805c1e1baa047781fca80feeb710328cdbed87f! Will try again. 
Installing initially failed dependencies... </code></pre> <p>I tried uninstalling pipenv and reinstalling it, upgrading brew, deleting and recreating the environment... nothing has worked.</p> <p>So I tried deleting the Pipfile.lock file and recreating it.</p> <p>Running <code>pipenv lock</code> on the given Pipfile ends with this:</p> <pre><code>Locking [packages] dependencies... Building requirements... Resolving dependencies... ✘ Locking Failed! ⠧ Locking... ERROR:pip.subprocessor:[present-rich] python setup.py egg_info exited with 1 [ResolutionFailure]: File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 811, in _main [ResolutionFailure]: resolve_packages( [ResolutionFailure]: File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 759, in resolve_packages [ResolutionFailure]: results, resolver = resolve( [ResolutionFailure]: File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 738, in resolve [ResolutionFailure]: return resolve_deps( [ResolutionFailure]: File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 1102, in resolve_deps [ResolutionFailure]: results, hashes, markers_lookup, resolver, skipped = actually_resolve_deps( [ResolutionFailure]: File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 899, in actually_resolve_deps [ResolutionFailure]: resolver.resolve() [ResolutionFailure]: File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 687, in resolve [ResolutionFailure]: raise ResolutionFailure(message=str(e)) [pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies. 
You can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation. Hint: try $ pipenv lock --pre if it is a pre-release dependency. ERROR: metadata generation failed </code></pre> <p>Running any of <code>pipenv lock --pre</code>, <code>pipenv lock --clear</code>, and <code>pipenv lock --pre --clear</code> yields the same results.</p> <p>Running <code>pipenv lock --pre --clear --verbose</code> yields an output which ends like this: (I cut the first several lines which look all like the first ones here, just with different dependencies)</p> <pre><code>Reporter.adding_requirement(SpecifierRequirement('django-silk'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('django-silk'), None) Reporter.adding_requirement(SpecifierRequirement('django-stubs'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('django-stubs'), None) Reporter.adding_requirement(SpecifierRequirement('djangochannelsrestframework'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('djangochannelsrestframework'), None) Reporter.adding_requirement(SpecifierRequirement('djangorestframework~=3.13'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('djangorestframework~=3.13'), None) Reporter.adding_requirement(SpecifierRequirement('djangorestframework-stubs'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('djangorestframework-stubs'), None) Reporter.adding_requirement(SpecifierRequirement('drf-access-policy~=1.0'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('drf-access-policy~=1.0'), None) 
Reporter.adding_requirement(SpecifierRequirement('drf-nested-routers'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('drf-nested-routers'), None) Reporter.adding_requirement(SpecifierRequirement('drf-social-oauth2'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('drf-social-oauth2'), None) Reporter.adding_requirement(SpecifierRequirement('drf-viewset-profiler'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('drf-viewset-profiler'), None) Reporter.adding_requirement(SpecifierRequirement('drf-yasg'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('drf-yasg'), None) Reporter.adding_requirement(SpecifierRequirement('gunicorn'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('gunicorn'), None) Reporter.adding_requirement(SpecifierRequirement('markdown'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('markdown'), None) Reporter.adding_requirement(SpecifierRequirement('opencv-python'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('opencv-python'), None) Reporter.adding_requirement(SpecifierRequirement('pillow'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('pillow'), None) Reporter.adding_requirement(SpecifierRequirement('prompt-toolkit'), None) INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('prompt-toolkit'), None) Reporter.adding_requirement(SpecifierRequirement('psycopg2'), None) 
INFO:pipenv.patched.pip._internal.resolution.resolvelib.reporter:Reporter.adding_requirement(SpecifierRequirement('psycopg2'), None) ERROR:pip.subprocessor: python setup.py egg_info exited with 1 Traceback (most recent call last): File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/operations/build/metadata_legacy.py&quot;, line 64, in generate_metadata call_subprocess( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/utils/subprocess.py&quot;, line 224, in call_subprocess raise error pipenv.patched.pip._internal.exceptions.InstallationSubprocessError: python setup.py egg_info exited with 1 The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 685, in resolve results = resolver.resolve(self.constraints, check_supported_wheels=False) File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/resolver.py&quot;, line 92, in resolve result = self._result = resolver.resolve( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_vendor/resolvelib/resolvers.py&quot;, line 481, in resolve state = resolution.resolve(requirements, max_rounds=max_rounds) File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_vendor/resolvelib/resolvers.py&quot;, line 348, in resolve self._add_to_criteria(self.state.criteria, r, parent=None) File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_vendor/resolvelib/resolvers.py&quot;, line 172, in _add_to_criteria if not criterion.candidates: File 
&quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_vendor/resolvelib/structs.py&quot;, line 151, in __bool__ return bool(self._sequence) File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 155, in __bool__ return any(self) File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 143, in &lt;genexpr&gt; return (c for c in iterator if id(c) not in self._incompatible_ids) File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 47, in _iter_built candidate = func() File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/factory.py&quot;, line 206, in _make_candidate_from_link self._link_candidate_cache = LinkCandidate( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/candidates.py&quot;, line 301, in __init__ super().__init__( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/candidates.py&quot;, line 163, in __init__ self.dist = self._prepare() File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/candidates.py&quot;, line 232, in _prepare dist = self._prepare_distribution() File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/resolution/resolvelib/candidates.py&quot;, line 312, in _prepare_distribution return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True) File 
&quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/operations/prepare.py&quot;, line 491, in prepare_linked_requirement return self._prepare_linked_requirement(req, parallel_builds) File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/operations/prepare.py&quot;, line 577, in _prepare_linked_requirement dist = _get_prepared_distribution( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/operations/prepare.py&quot;, line 69, in _get_prepared_distribution abstract_dist.prepare_distribution_metadata( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/distributions/sdist.py&quot;, line 61, in prepare_distribution_metadata self.req.prepare_metadata() File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/req/req_install.py&quot;, line 542, in prepare_metadata self.metadata_directory = generate_metadata_legacy( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/patched/pip/_internal/operations/build/metadata_legacy.py&quot;, line 71, in generate_metadata raise MetadataGenerationFailed(package_details=details) from error pipenv.patched.pip._internal.exceptions.MetadataGenerationFailed: metadata generation failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 845, in &lt;module&gt; main() File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 831, in main _main( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 811, in _main resolve_packages( File 
&quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 759, in resolve_packages results, resolver = resolve( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 738, in resolve return resolve_deps( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 1102, in resolve_deps results, hashes, markers_lookup, resolver, skipped = actually_resolve_deps( File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 899, in actually_resolve_deps resolver.resolve() File &quot;/usr/local/Cellar/pipenv/2023.2.4/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 687, in resolve raise ResolutionFailure(message=str(e)) pipenv.exceptions.ResolutionFailure: [31m[1mERROR[0m: [33mmetadata generation failed[0m ✘ Locking Failed! ⠹ Locking... </code></pre> <p>Finally, <code>pipenv install --skip-lock</code> and then <code>pipenv graph</code> yields: <a href="https://gist.github.com/samul-1/513e6ee5a554748ed2e72f916227319c" rel="nofollow noreferrer">https://gist.github.com/samul-1/513e6ee5a554748ed2e72f916227319c</a></p> <p>I also tried using different versions of python, forcing them with pyenv... I also browsed all github issues and SO questions I could find on the matter.</p> <p>I have no idea what else to try. Any input is highly appreciated.</p>
<python><python-3.x><pip><homebrew><pipenv>
2023-02-10 00:22:37
1
668
Samuele B.
75,405,766
5,920,741
Do tasks within a Celery chord always execute in order?
<p>Are tasks within a Celery chord guaranteed to execute in the order that they were started? I repeated this example many times and the order was identical. From the docs, I interpreted that the tasks would occur concurrently and so the order would be unpredictable. I added a <code>time.sleep(random.random())</code> and the result still remained <code>True</code>.</p> <pre><code># pip install celery
# docker run -d -p 6379:6379 redis

# app.py
import time
import random
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379')
app.conf.update(
    result_backend='redis://localhost:6379/0',
)

@app.task
def add(x, y):
    return x + y

@app.task()
def check_task_order(numbers):
    time.sleep(random.random())
    return numbers == sorted(numbers)

# start.py
from celery import chord
from app import add, check_task_order

print(chord(
    add.subtask((i, i)) for i in range(100)
)(check_task_order.subtask()).get())

# terminal 1: celery -A app worker --loglevel=info
# terminal 2: python start.py
&gt;&gt; True
</code></pre>
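My current understanding (please correct me if wrong): the chord's callback receives the header results in the order the tasks were declared, because each result is written into its position in the group rather than appended on completion, so the sorted output does not tell us anything about execution order. The same submission-order vs. completion-order distinction can be seen with the stdlib:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def work(i: int) -> int:
    time.sleep(random.random() * 0.05)  # tasks finish in unpredictable order
    return i

with ThreadPoolExecutor(max_workers=8) as ex:
    # map() returns results in *submission* order, regardless of which
    # task actually finished first
    results = list(ex.map(work, range(20)))

print(results == sorted(results))  # True
```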
<python><redis><celery>
2023-02-10 00:18:51
1
732
AmourK
75,405,637
504,877
PerformanceWarning for Pandas dataframe decoding URL string
<p>I have a dataframe with a URL-encoded column &quot;event_properties_searchterm&quot;.</p> <p>I use the following code to clean it up. It works exactly as I expected, but I got this warning:</p> <blockquote> <p>PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling <code>frame.insert</code> many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()</p> </blockquote> <pre><code>from urllib.parse import unquote_plus

def clean_search_term(text: str) -&gt; str:
    &quot;&quot;&quot;Decode URL encodings back to Unicode string&quot;&quot;&quot;
    if not text:  # empty string or None
        return &quot;empty_search_term&quot;
    return unquote_plus(text).lower().strip()

df[&quot;search_term&quot;] = df[&quot;event_properties_searchterm&quot;].apply(clean_search_term)
</code></pre> <p>I also tried applying <code>unquote_plus</code> directly (although it doesn't do everything the function above does), as follows, but still got the warning.</p> <pre><code>df[&quot;event_properties_searchterm&quot;].apply(unquote_plus)
</code></pre> <p>What can I do to improve it? Python: 3.9.2, Pandas: 1.5.0</p>
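A sketch of the usual remedy, on made-up sample data: the fragmentation warning is about however the frame was built upstream (many column insertions), not about the <code>apply</code> itself, so taking a <code>.copy()</code> first consolidates the blocks before the new column is added:

```python
from urllib.parse import unquote_plus

import pandas as pd

df = pd.DataFrame({"event_properties_searchterm":
                   ["Hello%20World", "caf%C3%A9+au+lait", "", None]})

def clean_search_term(text) -> str:
    """Decode URL encodings back to a Unicode string."""
    if text is None or text == "":
        return "empty_search_term"
    return unquote_plus(text).lower().strip()

df = df.copy()  # consolidates a fragmented frame into contiguous blocks
df["search_term"] = df["event_properties_searchterm"].apply(clean_search_term)
print(df["search_term"].tolist())
```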
<python><pandas>
2023-02-09 23:50:38
0
765
Tyn
75,405,521
2,228,155
python pandas resample monthly aggregated data to daily, then aggregate back to weekly
<p>Here is my example data:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>team</th> <th>sales</th> <th>month</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>100</td> <td>1/1/2023</td> </tr> <tr> <td>a</td> <td>200</td> <td>2/1/2023</td> </tr> <tr> <td>b</td> <td>600</td> <td>1/1/2023</td> </tr> <tr> <td>b</td> <td>300</td> <td>2/1/2023</td> </tr> </tbody> </table> </div> <p>load in pandas like so:</p> <pre><code>mydata = pd.DataFrame([ ['team','sales','month'], ['a', 100, '1/1/2023'], ['a', 200, '2/1/2023'], ['b', 600, '1/1/2023'], ['b', 300, '2/1/2023'] ]) mydata.columns = mydata.iloc[0] mydata = mydata[1:] mydata['month'] = pd.to_datetime(mydata['month']) </code></pre> <p>My desired outcome for team &quot;a&quot; is this data aggregated by each week as starting on Monday, like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>team</th> <th>sales</th> <th>Monday Week</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>22.58</td> <td>1/2/2023</td> </tr> <tr> <td>a</td> <td>22.58</td> <td>1/9/2023</td> </tr> <tr> <td>a</td> <td>22.58</td> <td>1/16/2023</td> </tr> <tr> <td>a</td> <td>22.58</td> <td>1/23/2023</td> </tr> <tr> <td>a</td> <td>42.17</td> <td>1/30/2023</td> </tr> <tr> <td>a</td> <td>50</td> <td>2/6/2023</td> </tr> <tr> <td>a</td> <td>50</td> <td>2/13/2023</td> </tr> <tr> <td>a</td> <td>50</td> <td>2/20/2023</td> </tr> <tr> <td>a</td> <td>14.29</td> <td>2/27/2023</td> </tr> </tbody> </table> </div> <p>So the logic on the calculated sales per week is:<br /> $100 of sales in January, so avg sales per day is 100/31 = 3.23 per day, * 7 days in a weeks = $22.58 for each week in January.<br /> February is $200 over 28 days, so ($200/28)*7 = $50 a week in Feb.</p> <p>The calculation on the week starting 1/30/2023 is a little more complicated. I need to carry the January rate the first 2 days of 1/30 and 1/31, then start summing the Feb rate for the following 5 days in Feb (until 2/5/2023). 
So it would be 5*(200/28)+2*(100/31) = 42.17</p> <p>Is there a way to do this in Pandas? I believe the logic that may work is taking each monthly total, decomposing that into daily data with an average rate, then using pandas to aggregate back up to weekly data starting on Monday for each month, but I'm lost trying to chain together the date functions.</p>
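Here is one self-contained way to express the decompose-then-reaggregate idea for team &quot;a&quot; (the partial week containing Sunday 1/1 comes out labeled with the prior Monday; whether to keep or merge it is a separate choice):

```python
import pandas as pd

mydata = pd.DataFrame({'team': ['a', 'a'],
                       'sales': [100, 200],
                       'month': pd.to_datetime(['2023-01-01', '2023-02-01'])})

s = mydata.set_index('month')['sales']

# 1) Expand to daily: forward-fill each monthly total onto its days,
#    then divide by the length of the month each day belongs to
days = pd.date_range('2023-01-01', '2023-02-28', freq='D')
daily = s.reindex(days, method='ffill') / days.days_in_month

# 2) Aggregate into weeks that *start* on Monday: 'W-MON' bins on Mondays,
#    closed/label 'left' makes each bin Monday..Sunday labeled by its Monday
weekly = daily.resample('W-MON', closed='left', label='left').sum().round(2)
print(weekly)
```

The week of 2023-01-30 comes out as 2*(100/31) + 5*(200/28) = 42.17, matching the hand calculation above.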
<python><pandas><date><time-series>
2023-02-09 23:29:23
1
1,055
barker
75,405,220
4,770,906
Why is PySpark logger not logging INFO statements?
<p>In the PySpark code below, I am attempting to ensure INFO statements are logged. However, I am only seeing WARN, ERROR and FATAL messages. How do I update the logger <code>('Example Processor')</code> to have a log level of INFO, and log everything out?</p> <p>Note: DEBUG logging works, and logs everything for DEBUG, INFO, WARN, ERROR, and FATAL. Thanks!</p> <pre><code>self.spark = SparkSession.builder \
    .master(&quot;local[1]&quot;) \
    .appName(&quot;DemoProcessor&quot;) \
    .getOrCreate()

log4jLogger = self.spark.sparkContext._jvm.org.apache.log4j
self.log = log4jLogger.LogManager.getLogger('Example Processor')
self.log.setLevel(log4jLogger.Level.INFO)
# self.log.setLevel(log4jLogger.Level.DEBUG)

self.log.trace(&quot;Trace Message!&quot;)
self.log.debug(&quot;Debug Message!&quot;)
self.log.info(&quot;Info Message!&quot;)
self.log.warn(&quot;Warn Message!&quot;)
self.log.error(&quot;Error Message!&quot;)
self.log.fatal(&quot;Fatal Message!&quot;)
</code></pre> <p>INFO-level log:</p> <pre><code>Setting default log level to &quot;WARN&quot;.
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/02/09 17:22:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/02/09 17:22:44 WARN Example Processor: Warn Message!
23/02/09 17:22:44 ERROR Example Processor: Error Message!
23/02/09 17:22:44 FATAL Example Processor: Fatal Message!
</code></pre> <p>DEBUG-level log:</p> <pre><code>Setting default log level to &quot;WARN&quot;.
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/02/09 17:35:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/02/09 17:35:01 INFO Example Processor: Info Message!
23/02/09 17:35:01 DEBUG Example Processor: Debug Message!
23/02/09 17:35:01 WARN Example Processor: Warn Message!
23/02/09 17:35:01 ERROR Example Processor: Error Message!
23/02/09 17:35:01 FATAL Example Processor: Fatal Message!
</code></pre>
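For context, the startup banner above says the default level is WARN, which is set on the root logger. One thing I have not yet verified is whether raising the root level (rather than the child logger's level) changes the behavior, either at runtime via <code>self.spark.sparkContext.setLogLevel(&quot;INFO&quot;)</code> or in the config. For Spark versions that still ship the log4j 1.x template (Spark &lt;= 3.2), that config change would look like the following <code>conf/log4j.properties</code> fragment (newer Spark uses <code>log4j2.properties</code> with different syntax):

```properties
# conf/log4j.properties -- stock Spark (log4j 1.x) template with the
# root category raised from WARN to INFO
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```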
<python><java><logging><pyspark>
2023-02-09 22:44:29
1
514
mikey8989
75,405,194
317,460
How to forward headers using FastAPI - Tracing use cases
<p>Below is a simple server written with FastAPI and running with Uvicorn.</p> <p>In order to send the value to the next hop, the '/destination' URL, I need to pass the value to the forward_request method.</p> <p>In this implementation, passing the value is easy, because the call depth is just one function more.</p> <p>But if I have a function that calls a function that calls a function..., I need to pass the value again and again and...</p> <p>Is there a simpler way to share the value downstream without passing it?</p> <p>Is there a way, without using inspect or some dark magic, to understand what scope the function forward_request lives in?</p> <hr /> <p><strong>Why am I asking this question?</strong></p> <p>I am using Jaeger for tracing and I need to forward the header x-request-id that I receive in the 1st server (/source in this example) to the 2nd server (/destination in this example).</p> <p>If FastAPI/Uvicorn processed just one request at a time, I could have shared the value as a singleton class and accessed it from anywhere, but since requests are handled in parallel, the function forward_request doesn't have the context of who called it.</p> <hr /> <p><strong>Some black magic</strong></p> <p>Function forward_request can inspect the call stack and figure out from it what value was received, but that is one ugly way to do things.</p> <hr /> <p><strong>Server Code</strong></p> <pre><code>import asyncio
from datetime import datetime

import aiohttp
import uvicorn
from fastapi import FastAPI, Request

app = FastAPI()

@app.get(&quot;/source&quot;)
async def route1(value: int, request: Request):
    print(f'source {value}')
    start = datetime.now().strftime(&quot;%H:%M:%S&quot;)
    await asyncio.sleep(5)
    resp: dict = await forward_request(value)
    end = datetime.now().strftime(&quot;%H:%M:%S&quot;)
    return {
        &quot;start&quot;: start,
        &quot;end&quot;: end,
        &quot;value&quot;: value,
        &quot;resp&quot;: resp
    }

@app.get(&quot;/destination&quot;)
async def route2(value, request: Request):
    print(f'destination {value}')
    start = datetime.now().strftime(&quot;%H:%M:%S&quot;)
    await asyncio.sleep(5)
    end = datetime.now().strftime(&quot;%H:%M:%S&quot;)
    return {
        &quot;start&quot;: start,
        &quot;end&quot;: end,
        &quot;value&quot;: value
    }

async def forward_request(value: int) -&gt; dict:
    async with aiohttp.ClientSession() as session:
        async with session.get('http://127.0.0.1:5000/destination', params={'value': value}) as resp:
            resp = await resp.json()
            return resp

if __name__ == &quot;__main__&quot;:
    uvicorn.run(&quot;main2:app&quot;, host=&quot;127.0.0.1&quot;, port=5000, log_level=&quot;info&quot;, workers=1)
</code></pre>
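A stdlib sketch of the <code>contextvars</code> approach I am considering (I believe this is also what context-propagation middlewares build on): each request sets the value once at the top; anything deeper in the call chain reads it without parameter passing, and concurrent asyncio tasks each see their own copy of the context, so they do not interfere.

```python
import asyncio
import contextvars

# Each asyncio task runs in its own copy of the context, so concurrent
# requests never see each other's value.
request_id: contextvars.ContextVar[str] = contextvars.ContextVar("request_id")

async def forward_request() -> str:
    # Deep in the call chain: read the value without it being passed down
    return request_id.get()

async def middle() -> str:
    return await forward_request()

async def handle(header_value: str) -> str:
    request_id.set(header_value)   # set once at the top of the "request"
    await asyncio.sleep(0.01)      # simulate interleaved concurrent work
    return await middle()

async def main() -> list:
    return list(await asyncio.gather(handle("req-1"), handle("req-2")))

results = asyncio.run(main())
print(results)  # ['req-1', 'req-2']
```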
<python><http-headers><fastapi><trace><jaeger>
2023-02-09 22:40:42
0
3,627
RaamEE
75,405,100
11,405,455
create a new folder every day as per UTC time in my S3 bucket and save JSON files in it
<p>I want to create a new folder everyday as per UTC time in my s3 bucket, the dict should be saved in json format in that folder</p> <p><strong>My_Attempt</strong></p> <pre><code> import pytz import datetime import botocore import boto3 s3 = session.resource('s3') config['S3_BUCKET'] = 'my_data' # Get the current date and time in UTC utc_now = datetime.datetime.now(pytz.utc) today = utc_now.strftime(&quot;%Y-%m-%d&quot;) folder_name = &quot;json_files_&quot; + today my_array = '{&quot;id&quot;: &quot;0&quot;}' # Check if the folder for today already exists try: s3.head_bucket(Bucket=config['S3_BUCKET'], Key=folder_name + &quot;/&quot;) #head_object(Bucket=config['S3_BUCKET'], Key=folder_name + &quot;/&quot;) except botocore.exceptions.ClientError as e: #boto3.exceptions.ClientError as e: #s3.exceptions.ClientError as e: # If the folder does not exist, create it if e.response[&quot;Error&quot;][&quot;Code&quot;] == &quot;404&quot;: s3.put_object(Bucket=config['S3_BUCKET'], Key=folder_name + &quot;/&quot;) else: # Raise any other errors raise key = folder_name + &quot;/&quot; + str(my_array['id']) + &quot;.json&quot; print('key -&gt; ',key) s3.put_object(Bucket=&quot;my-bucket&quot;, Key=key, Body=my_array) </code></pre> <p><strong>ERROR</strong> I tried both 'head_bucket' and 'head_object'</p> <pre><code>'s3.ServiceResource' object has no attribute 'head_bucket' 's3.ServiceResource' object has no attribute 'head_object' </code></pre> <p>How can I improve the code and debug the code?</p>
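Two points worth noting: S3 has no real folders (a "folder" is just a key prefix, so no existence check is needed before writing), and `head_bucket`/`head_object`/`put_object` are methods of the low-level client from `boto3.client("s3")`, not of the `ServiceResource`, which is what the error message is saying. A sketch that builds a per-day key; the actual upload is left commented out because it needs AWS credentials, and the bucket name is an assumption:

```python
import datetime
import json

def daily_key(item_id, now=None):
    """Build an S3 key under a per-day prefix. Writing any object with
    this prefix implicitly "creates" the folder in the S3 console."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return f"json_files_{now:%Y-%m-%d}/{item_id}.json"

record = {"id": "0"}
body = json.dumps(record)
key = daily_key(record["id"],
                datetime.datetime(2023, 2, 9, tzinfo=datetime.timezone.utc))

# Hypothetical upload (requires AWS credentials; bucket name assumed):
# import boto3
# boto3.client("s3").put_object(Bucket="my_data", Key=key, Body=body)
```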
<python><amazon-web-services><amazon-s3>
2023-02-09 22:27:16
1
443
Khaned
75,404,979
10,001,610
Why does my context manager not exit on exception
<p>I am learning about context managers and was trying to build one myself. The following is a dummy context manager that opens a file in read mode (I know I can just do <code>with open(...): ...</code>. this is just an example I built to help me understand how to make my own context managers):</p> <pre class="lang-py prettyprint-override"><code>@contextmanager def open_read(path: str): f = open(path, 'r') print('open') yield f f.close() print('closed') def foo(): try: with open_read('main.py') as f: print(f.readline()) raise Exception('oopsie') except Exception: pass print(f.readline()) foo() </code></pre> <p>I expect this code to print:</p> <pre><code>open &lt;line 1 of a.txt&gt; closed ValueError: I/O operation on closed file. </code></pre> <p>But instead it prints:</p> <pre><code>open &lt;line 1 of a.txt&gt; &lt;line 2 of a.txt&gt; </code></pre> <p>It didn't close the file!</p> <p>This seems to contradict python's docs which state that <code>__exit__</code> will be called whether the <code>with</code> statement exited successfully or with an exception:</p> <blockquote> <p>object.<strong>exit</strong>(self, exc_type, exc_value, traceback)</p> <p>Exit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be None.</p> </blockquote> <p>Interestingly, when I reimplemented the context manager as shown below, it worked as expected:</p> <pre class="lang-py prettyprint-override"><code>class open_read(ContextDecorator): def __init__(self, path: str): self.path = path self.f = None def __enter__(self): self.f = open(self.path, 'r') print('open') return self.f def __exit__(self, exc_type, exc_val, exc_tb): self.f.close() print('closed') </code></pre> <p>Why didn't my original implementation work?</p>
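For generator-based context managers, the exception raised in the `with` body is thrown into the generator at the `yield`, so any cleanup written after the `yield` is skipped unless it sits in a `finally` block. That is also why the class-based version behaves as expected: its `__exit__` always runs. A minimal reproduction of the fix, using a generic resource and a log list instead of a real file:

```python
from contextlib import contextmanager

log = []

@contextmanager
def managed():
    log.append("open")
    try:
        yield "resource"          # a with-body exception is raised here
    finally:
        log.append("closed")      # now runs even when the body raises

try:
    with managed() as r:
        raise RuntimeError("oopsie")
except RuntimeError:
    pass
```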
<python><python-decorators><contextmanager>
2023-02-09 22:11:43
1
310
Eyal Kutz
75,404,886
8,869,570
What is the conventional way in Python for defining attributes in an abstract base class?
<p>In C++, I often declare attributes in an abstract base class without instantiating them in the constructor (it doesn't make much sense to do that for my use cases) because the instantiation is meant to occur in the subclasses.</p> <p>In Python, I am wondering how I can achieve the analogous behavior, since declarations without instantiation aren't really a thing in Python.</p> <p>One approach I saw posted was using a type annotation</p> <pre><code>from abc import ABC, abstractmethod class abstract_base_class(ABC): var : int # define some abstract methods class inherited(abstract_base_class): def __init__(self): self.var = ..... </code></pre> <p>I am wondering if there's a standard way to achieve what I want?</p> <p>I may also be trying to apply some C++ practices that shouldn't be applied to Python. Would it make more sense to just not &quot;declare&quot; <code>var</code> at all in this case in <code>abstract_base_class</code>?</p> <p>I don't think the linked question provides an answer for my second question. It provides a way to accomplish what I want to do, but I don't know if that's the standard way to do it or not.</p>
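A bare class-level annotation is the common Python idiom for "declared but not instantiated" (it creates no attribute at runtime, only an entry in `__annotations__`); when subclasses must actually provide the value, an abstract property enforces it at instantiation time. A small sketch combining both approaches (all names are illustrative):

```python
from abc import ABC, abstractmethod

class Base(ABC):
    # Option 1: a bare annotation documents the attribute without
    # creating it; subclasses are expected to assign it.
    var: int

    # Option 2: an abstract property forces every subclass to provide it.
    @property
    @abstractmethod
    def name(self): ...

class Impl(Base):
    def __init__(self):
        self.var = 42

    @property
    def name(self):
        return "impl"

obj = Impl()

try:
    Base()                 # abstract members make the base uninstantiable
    instantiable = True
except TypeError:
    instantiable = False
```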
<python><inheritance><abstract-class>
2023-02-09 22:01:47
0
2,328
24n8
75,404,738
10,620,003
Select the color of the bar in histogram plot based on its value
<p>I have thousands of data that I want to plot the histogram of them. I want to put the different colors based on the values of the histogram. My values are between <strong>0-10</strong>. So, I want to put the color of the bar from red to green. And if it is close to zero, the color should be red and if it is close to 10, the color should be green. Like the image I attached. In the following example, I want to set the color of row h as close to green, and the b is close to red. Here is a simple example, I have multiple bars and values.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pandas as pd rating = [8, 4, 5,6] objects = ('h', 'b', 'c','a') y_pos = np.arange(len(objects)) plt.barh(y_pos, rating, align='center', alpha=0.5) plt.yticks(y_pos, objects) plt.show() </code></pre> <p>Could you please help me with this? Thank you. <a href="https://i.sstatic.net/1LDYI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1LDYI.png" alt="enter image description here" /></a></p>
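One dependency-free approach is to interpolate each bar's color from red (at 0) to green (at 10) and pass the resulting list via `plt.barh(..., color=colors)`; with matplotlib, applying a colormap such as `matplotlib.cm.RdYlGn` to the normalized values achieves the same effect. A sketch of the interpolation, with the plotting call left commented out so the logic stands alone:

```python
def rating_color(value, vmin=0.0, vmax=10.0):
    """Linearly interpolate from red (vmin) to green (vmax).
    Returns an (r, g, b) tuple with components in 0..1, as
    matplotlib accepts for per-bar colors."""
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)        # clamp out-of-range values
    return (1.0 - t, t, 0.0)

ratings = [8, 4, 5, 6]
colors = [rating_color(r) for r in ratings]

# Hypothetical plotting (assumes matplotlib as in the question):
# plt.barh(y_pos, ratings, align='center', color=colors)
```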
<python><matplotlib>
2023-02-09 21:45:32
1
730
Sadcow
75,404,668
10,088,007
Parse datetime from CSV, assign timezone and convert to another timezone - Polars Python
<p>I have a column of timestamps in a CSV file, like <code>2022-01-03 17:59:16.254</code>. As an external information, I know this time is in JST.</p> <p>I am trying to parse this string into datetime, assign JST timezone (without changing the timestamp), and convert it to CET.</p> <p>An attempt:</p> <pre><code>new = pl.scan_csv('test.csv').with_columns( [pl.col(&quot;timestamp&quot;).str.strptime(pl.Datetime, &quot;%Y-%m-%d %H:%M:%S.%f&quot;, strict=True), ] ).select( [pl.col(&quot;timestamp&quot;).cast(pl.Date).alias(&quot;Date&quot;), pl.col(&quot;timestamp&quot;).dt.with_time_zone(&quot;Asia/Tokyo&quot;).alias(&quot;WithTZ&quot;), pl.col(&quot;timestamp&quot;).dt.with_time_zone(&quot;Asia/Tokyo&quot;).dt.cast_time_zone(&quot;Europe/Berlin&quot;).alias(&quot;WithCastTZ&quot;), pl.all(), ] ) new.fetch(10).write_csv(&quot;testOut.csv&quot;) </code></pre> <p>as a result, I was expecting the datetime part to not change in WithTZ. However, this is my first line. Casting also did not have any impact.</p> <pre><code>WithTZ |WithCastTZ |timestamp 2022-01-04 02:59:16.213 JST|2022-01-04 02:59:16.213 CET|2022-01-03T17:59:16.213000000 </code></pre> <p>I think I am missing something obvious..</p>
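The intended semantics are: attach a zone without shifting the wall-clock time, then convert the instant to another zone. In stdlib terms that looks like the sketch below; in recent Polars releases the matching expressions are, to the best of my knowledge, `dt.replace_time_zone("Asia/Tokyo")` followed by `dt.convert_time_zone("Europe/Berlin")`, while the older `with_time_zone`/`cast_time_zone` pair treated the naive timestamps as UTC first, which would explain the shifted output shown above:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

raw = "2022-01-03 17:59:16.254"
naive = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f")

# Attach JST without shifting the wall-clock time...
jst = naive.replace(tzinfo=ZoneInfo("Asia/Tokyo"))

# ...then convert the same instant to Berlin time.
cet = jst.astimezone(ZoneInfo("Europe/Berlin"))
```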
<python><datetime><python-polars>
2023-02-09 21:37:27
1
332
Kocas
75,404,631
200,985
Python Postgres Connections with Green Threads
<p>I am using psycopg2 and I have more than one green thread in my application. Each thread gets connections using <code>psycopg2.connect</code>. Sometimes I get the following error:</p> <pre class="lang-none prettyprint-override"><code>error: Second simultaneous read on fileno 14 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_prevent_multiple_readers(False) - MY THREAD=&lt;built-in method switch of GreenThread object at 0x7fbf6aafc048&gt;; THAT THREAD=FdListener('read', 14, &lt;built-in method switch of greenlet.greenlet object at 0x7fbf6aafc470&gt;, &lt;built-in method throw of greenlet.greenlet object at 0x7fbf6aafc470&gt;) </code></pre> <p>I don't have connection pooling configured in this project as far as I know. (<code>grep -ri pool .;</code> returns nothing.)</p> <p>Does <code>psycopg2.connect</code> reuse connections in some sort of implicit connection pool?</p> <p>How do I get a new connection without reusing old connections (or sockets)?</p>
<python><postgresql><multithreading><psycopg2><green-threads>
2023-02-09 21:33:18
1
7,855
lmat - Reinstate Monica
75,404,602
8,667,016
Using an `if` statement inside a Pandas DataFrame's `assign` method
<h2>Intro and reproducible code snippet</h2> <p>I'm having a hard time performing an operation on a few columns that requires the checking of a condition using an <code>if/else</code> statement.</p> <p>More specifically, I'm trying to perform this check within the confines of the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>assign</code></a> method of a Pandas Dataframe. Here is an example of what I'm trying to do</p> <pre class="lang-py prettyprint-override"><code># Importing Pandas import pandas as pd # Creating synthetic data my_df = pd.DataFrame({'col1':[1,2,3,4,5,6,7,8,9,10], 'col2':[11,22,33,44,55,66,77,88,99,1010]}) # Creating a separate output DataFrame that doesn't overwrite # the original input DataFrame out_df = my_df.assign( # Successfully creating a new column called `col3` using a lambda function col3=lambda row: row['col1'] + row['col2'], # Using a new lambda function to perform an operation on the newly # generated column. bleep_bloop=lambda row: 'bleep' if (row['col3']%8 == 0) else 'bloop') </code></pre> <p>The code above yeilds a <code>ValueError</code>:</p> <p><code>ValueError: The truth value of a Series is ambiguous</code></p> <p>When trying to investigate the error, I found <a href="https://stackoverflow.com/questions/50941472/python-3-lambda-error-the-truth-value-of-a-series-is-ambiguous">this SO thread</a>. 
It seems that <code>lambda</code> functions don't always work very nicely with conditional logic in a DataFrame, mostly due to the DataFrame's attempt to deal with things as Series.</p> <h2>A few dirty workarounds</h2> <h3>Use <code>apply</code></h3> <p>A dirty workaround would be to make <code>col3</code> using the <code>assign</code> method as indicated above, but then create the <code>bleep_bloop</code> column using an <code>apply</code> method instead:</p> <pre class="lang-py prettyprint-override"><code>out_sr = (my_df.assign( col3=lambda row: row['col1'] + row['col2']) .apply(lambda row: 'bleep' if (row['col3']%8 == 0) else 'bloop', axis=1)) </code></pre> <p>The problem here is that the code above returns only a Series with the results of the <code>bleep_bloop</code> column instead of a new DataFrame with both <code>col3</code> and <code>bleep_bloop</code>.</p> <h3>On the fly vs. multiple commands</h3> <p>Yet another approach would be to break one command into two:</p> <pre class="lang-py prettyprint-override"><code>out_df_2 = (my_df.assign(col3=lambda row: row['col1'] + row['col2'])) out_df_2['bleep_bloop'] = out_df_2.apply(lambda row: 'bleep' if (row['col3']%8 == 0) else 'bloop', axis=1) </code></pre> <p>This also works, but I'd really like to stick to the on-the-fly approach where I do everything in one chained command, if possible.</p> <h2>Back to the main question</h2> <p>Given that the workarounds I showed above are messy and don't really get the job done like I need, is there any other way I can create a new column that's based on using a conditional <code>if/else</code> statement?</p> <p>The example I gave here is pretty simple, but consider that the real world application would likely involve applying custom-made functions (e.g.: <code>out_df=my_df.assign(new_col=lambda row: my_func(row))</code>, where <code>my_func</code> is some complex function that uses several other columns from the same row as inputs).</p>
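The vectorized idiom for this is `numpy.where` (or `Series.where`/`Series.mask`) inside `assign`, which keeps everything in one chained call; for genuinely row-wise custom functions, `lambda d: d.apply(my_func, axis=1)` inside the same `assign` also stays chainable. A sketch of the `np.where` version on a smaller frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [1, 2, 3, 4], "col2": [11, 22, 33, 44]})

out = df.assign(
    col3=lambda d: d["col1"] + d["col2"],
    # np.where evaluates the condition element-wise over the whole
    # Series, avoiding the ambiguous truth-value error of a plain
    # Python if/else on a Series.
    bleep_bloop=lambda d: np.where(d["col3"] % 8 == 0, "bleep", "bloop"),
)
```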
<python><pandas><dataframe><if-statement><lambda>
2023-02-09 21:30:28
3
1,291
Felipe D.
75,404,552
1,226,649
KeOps (Kernel Operations on the GPU) on Windows
<p>When installing <a href="https://github.com/getkeops/keops" rel="nofollow noreferrer">KeOps</a> on Windows 10 with:</p> <pre><code>pip install pykeops </code></pre> <p>I get a very famous error:</p> <pre><code>ModuleNotFoundError: No module named 'fcntl' </code></pre> <p>I know that <code>fcntl</code> does not exist on Windows, but how can I substitute it for KeOps?</p>
<python><windows><gpu>
2023-02-09 21:23:57
0
3,549
dokondr
75,404,497
11,755,414
How do I replace part of a string with various combinations from a lookup in Python?
<p>I have the following code replacing every element with it's short form in the lookup:</p> <pre><code>case = [&quot;MY_FIRST_RODEO&quot;] lookup = {'MY': 'M', 'FIRST': 'FRST', 'RODEO' : 'RD', 'FIRST_RODEO': 'FRD', 'MY_FIRST': 'MF', 'MY_FIRST_RODEO': 'MFR'} case_mod = [] for string in case: words = string.split(&quot;_&quot;) new_string = [lookup[word] for word in words] case_mod.append(&quot;_&quot;.join(new_string)) print(case_mod) </code></pre> <p>This returns:</p> <pre><code>['M_FRST_RD'] </code></pre> <p>However, I want it to additionally return all possibilities since in the lookup, I have short words for all MY_FIRST, FIRST_RODEO, and MY_FIRST_RODEO. So, I want the following returned:</p> <pre><code>['M_FRST_RD', 'MF_RD', 'M_FRD', 'MFR'] </code></pre> <p>I was able to write code to break the original list into all possibilities as follows:</p> <pre><code>case = [&quot;MY_FIRST_RODEO&quot;] result = [] for string in case: words = string.split(&quot;_&quot;) n = len(words) for i in range(n): result.append(&quot;_&quot;.join(words[:i + 1])) for j in range(i + 1, n): result.append(&quot;_&quot;.join(words[i:j + 1])) result.extend(words) result = list(dict.fromkeys(result)) print(result) </code></pre> <p>to return:</p> <pre><code>['MY', 'MY_FIRST', 'FIRST', 'RODEO', 'MY_FIRST_RODEO', 'FIRST_RODEO'] </code></pre> <p>But somehow can't make the connection between the two solutions. Any help will be greatly appreciated.</p>
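This is a word-segmentation problem: recursively peel off every leading run of words whose joined form has a lookup entry, then abbreviate the remainder the same way. A sketch that produces all four expected variants (the enumeration order may differ from the listing above):

```python
lookup = {"MY": "M", "FIRST": "FRST", "RODEO": "RD",
          "FIRST_RODEO": "FRD", "MY_FIRST": "MF", "MY_FIRST_RODEO": "MFR"}

def abbreviations(words):
    """Yield every abbreviation list built by splitting `words` into
    contiguous runs that each have an entry in `lookup`."""
    if not words:
        yield []
        return
    for i in range(1, len(words) + 1):
        chunk = "_".join(words[:i])       # leading run of i words
        if chunk in lookup:
            for rest in abbreviations(words[i:]):
                yield [lookup[chunk]] + rest

case = "MY_FIRST_RODEO"
results = ["_".join(parts) for parts in abbreviations(case.split("_"))]
```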
<python><string><lookup><python-itertools>
2023-02-09 21:18:46
1
774
flying_fluid_four
75,404,491
7,388,758
How to fix a mypy "type contains Any" error for SQLAlchemy Table.columns
<p>I'm still new to type hints. Here's the minimal code example of the error I'm getting:</p> <pre><code>import sqlalchemy as sa t = sa.Table(&quot;a&quot;, sa.MetaData(), sa.Column(&quot;id_&quot;, sa.Integer)) cols = t.columns </code></pre> <p>This raises the following error when I run mypy:</p> <pre><code>error: Expression type contains &quot;Any&quot; (has type &quot;ReadOnlyColumnCollection[str, Column[Any]]&quot;) [misc] </code></pre> <p>I'm running mypy with the following configuration turned on (<a href="https://mypy.readthedocs.io/en/stable/command_line.html#cmdoption-mypy-disallow-any-expr" rel="nofollow noreferrer">link</a>):</p> <pre><code>disallow_any_expr = true </code></pre> <p>I've looked at the SQLAlchemy source code, and the <code>.columns</code> attribute of the <code>Table</code> class does indeed have the type that mypy states.</p> <p>However, I don't know how I could go about altering that to remove the <code>Any</code>. Would that even be the correct approach?</p>
<python><sqlalchemy><type-hinting><mypy>
2023-02-09 21:17:36
2
433
Miguel
75,404,470
17,696,880
How to make a list of lists with each of the possible combinations of the elements of a previous list?
<pre class="lang-py prettyprint-override"><code>import re input_info_list = [['corre', 'salta', 'dibuja'], ['en el bosque', 'en el patio'], ['2023-02-05 00:00 am', '2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']] print(reordered_input_indo_list) # --&gt; output new list </code></pre> <p>I want to create a list that separates all possible combinations of the elements in the <code>input_info_list</code> list into separate lists inside a new list.</p> <p>This should be the list you should get:</p> <pre class="lang-py prettyprint-override"><code>[[['corre'], ['en el bosque'], ['2023-02-05 00:00 am']] [['corre'], ['en el patio'], ['2023-02-05 00:00 am']], [['corre'], ['en el bosque'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']], [['corre'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']], [['salta'], ['en el bosque'], ['2023-02-05 00:00 am']], [['salta'], ['en el patio'], ['2023-02-05 00:00 am']], [['salta'], ['en el bosque'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']], [['salta'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']], [['dibuja'], ['en el bosque'], ['2023-02-05 00:00 am']], [['dibuja'], ['en el patio'], ['2023-02-05 00:00 am']], [['dibuja'], ['en el bosque'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']], [['dibuja'], ['en el patio'], ['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm']]] </code></pre>
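`itertools.product` generates exactly these cartesian combinations; each element then just needs wrapping in its own single-item list to match the requested shape. Note that `product` varies the last list fastest, so the order differs slightly from the listing above:

```python
from itertools import product

input_info_list = [
    ["corre", "salta", "dibuja"],
    ["en el bosque", "en el patio"],
    ["2023-02-05 00:00 am", "2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm"],
]

# product(*lists) yields one tuple per combination of one element
# from each sub-list; wrap each element in its own list.
reordered = [[[a], [b], [c]] for a, b, c in product(*input_info_list)]
```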
<python><python-3.x><list><loops><data-structures>
2023-02-09 21:14:33
0
875
Matt095
75,404,376
11,092,636
Mapping a pandas dataframe to a n-dimensional array, where each dimension corresponds to one of the x columns
<p>I have a dataframe with columns <code>x0 x1 x2 x3 x4</code> and <code>y0 y1 y2 y3 y4</code>.</p> <p>First ten rows:</p> <pre class="lang-py prettyprint-override"><code> Id x0 x1 x2 x3 x4 y0 y1 y2 y3 y4 0 0 -5.0 -5.0 -5.0 -5.0 -5.0 268035854.2037072 0.94956508069182 3520.7568220782514 -412868933.038522 242572043.87727848 1 1 -5.0 -5.0 -5.0 -5.0 -4.5 268035883.40390667 0.94956508069182 3482.0382462663074 -412868933.038522 242572043.87727848 2 2 -5.0 -5.0 -5.0 -5.0 -4.0 268035901.1170006 0.94956508069182 3443.3196704543634 -412868933.038522 242572043.87727848 3 3 -5.0 -5.0 -5.0 -5.0 -3.5 268035911.8642905 0.94956508069182 3404.6010946424194 -412868933.038522 242572043.87727848 4 4 -5.0 -5.0 -5.0 -5.0 -3.0 268035918.38904288 0.94956508069182 3365.882518830476 -412868933.038522 242572043.87727848 5 5 -5.0 -5.0 -5.0 -5.0 -2.5 268035922.35671327 0.94956508069182 3327.163943018532 -412868933.038522 242572043.87727848 6 6 -5.0 -5.0 -5.0 -5.0 -2.0 268035924.7800574 0.94956508069182 3288.445367206588 -412868933.038522 242572043.87727848 7 7 -5.0 -5.0 -5.0 -5.0 -1.5 268035926.27763835 0.94956508069182 3249.726791394644 -412868933.038522 242572043.87727848 8 8 -5.0 -5.0 -5.0 -5.0 -1.0 268035927.2317166 0.94956508069182 3211.0082155827004 -412868933.038522 242572043.87727848 9 9 -5.0 -5.0 -5.0 -5.0 -0.5 268035927.8858225 0.94956508069182 3172.2896397707564 -412868933.038522 242572043.87727848 </code></pre> <p>I did this:</p> <pre><code>values = df_train[['y0', 'y1', 'y2', 'y3', 'y4']].values values.shape </code></pre> <p>I now have <code>shape (4084101, 5)</code></p> <p>I would like to have shape <code>(21, 21, 21, 21, 21, 5)</code> (so that the first shape is <code>x0</code>, the second <code>x1</code>, like if we had a 5D graph). 
Basically, it should be <code>values[1, 0, 0, 0, 0]</code> to access the tuple <code>(y0, y1, y2, y3, y4)</code> corresponding to <code>x0=-4.5</code>, <code>x1=-5</code>, ..., <code>x4=-5</code>.</p> <p>21 because values go from -5 to 5 for the <code>x0, ..., x4</code> with step <code>0.5</code> and 5 because <code>y0, y1, y2, y3, y4</code> I did <code>values = values.reshape(21, 21, 21, 21, 21, 5)</code> But when I do <code>values[1][0][0][0][0]</code>, I expected to have the value corresponding to <code>x1=-4.5, x2=-5, ..., x4=-5 </code> but I don't.</p> <p>One bad idea that I had (complexity wise) was to make a dictionary in which keys are tuples (x0, x1, x2, x3, x4) and attributes the index where to find the y values. And then fill a <code>np.zeros((21, 21, 21, 21, 21, 5))</code> dataframe.</p> <pre class="lang-py prettyprint-override"><code># Get the values values = df_train[['y0', 'y1', 'y2', 'y3', 'y4']].values # Create a dictionary to map the x0, x1, x2, x3, x4 values to indices grid = {} for i, row in df_train.iterrows(): x0, x1, x2, x3, x4 = [int((x + 5) / 0.5) for x in [row['x0'], row['x1'], row['x2'], row['x3'], row['x4']]] grid[(x0, x1, x2, x3, x4)] = i # Create the reshaped array reshaped_values = np.zeros((21, 21, 21, 21, 21, 5)) for key, index in grid.items(): reshaped_values[key[0]][key[1]][key[2]][key[3]][key[4]] = values[index] </code></pre> <p>but it takes almost a minute on my computer ... and looks like the worst idea ever.</p>
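A plain `reshape` lines the axes up correctly only if the rows are sorted lexicographically by `(x0, ..., x4)` first, e.g. `df_train.sort_values(["x0", "x1", "x2", "x3", "x4"])` before taking `.values`; after that, index `int((x + 5) / 0.5)` addresses each axis directly and no dictionary is needed. A downsized sketch of the same idea with two x-variables and a 3x3 grid:

```python
import numpy as np

# Mini grid: x0, x1 each take values {0, 1, 2}; y = 10*x0 + x1
# so we can verify the mapping by eye.
rows = np.array([(x0, x1, 10 * x0 + x1)
                 for x0 in range(3) for x1 in range(3)], dtype=float)
np.random.shuffle(rows)              # simulate unordered input rows

# Sort lexicographically by (x0, x1); np.lexsort treats its LAST
# key as the primary sort key.
order = np.lexsort((rows[:, 1], rows[:, 0]))

# After sorting, a plain reshape lines the axes up: grid[i, j] is
# the y value for x0 == i, x1 == j.
grid = rows[order][:, 2].reshape(3, 3)
```

The full-size version is the same pattern with five keys and `values.reshape(21, 21, 21, 21, 21, 5)`.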
<python><pandas><dataframe>
2023-02-09 21:00:37
1
720
FluidMechanics Potential Flows
75,404,321
4,570,472
PyCharm: Stuck on Introspecting SSH server because two-factor login required for rsync
<p>I work on a cluster that requires two-factor login for every new connection. When I try creating a remote python interpreter in PyCharm, I can successfully connect, which requires 1 round of two-factor login via Duo. But then, PyCharm tries to test whether rsync works, which requires another round of two-factor authentication:</p> <pre><code>Successfully connected to rschaef@login.sherlock.stanford.edu:22 &gt; pwd Failed to execute command Checking rsync connection... /usr/bin/rsync -n -e &quot;ssh -p 22 &quot; rschaef@login.sherlock.stanford.edu: rschaef@login.sherlock.stanford.edu's password: (rschaef@login.sherlock.stanford.edu) Duo two-factor login for rschaef Enter a passcode or select one of the following options: 1. Duo Push to XXX-XXX-3013 2. Phone call to XXX-XXX-3013 3. SMS passcodes to XXX-XXX-3013 Passcode or option (1-3): </code></pre> <p>The problem is that PyCharm gives me no way to specify 1, 2 or 3, so I cannot authenticate again for rsync to complete. Consequently, I cannot move past this rsync step. Can someone please help me?</p>
<python><pycharm><remote-debugging>
2023-02-09 20:53:10
1
2,835
Rylan Schaeffer
75,404,289
12,394,134
Simulating different data with decorators in python
<p>I am trying to force myself to understand how decorators work and how I might use them to run a function multiple times.</p> <p>I am trying to simulate datasets with three variables, but they vary on their sample size and whether the sampling was conditional or not.</p> <p>So I create the population distribution that I am sampling from:</p> <pre><code>from numpy.random import normal, negative_binomial, binomial import pandas as pd population_N = 100000 data = pd.DataFrame({ &quot;Variable A&quot;: normal(0, 1, population_N), &quot;Variable B&quot;: negative_binomial(1, 0.5, population_N), &quot;Variable C&quot;: binomial(1, 0.5, population_N) }) </code></pre> <p>Rather than doing the following:</p> <pre><code>sample_20 = data.sample(20) sample_50 = data.sample(50) condition = data[&quot;Variable B&quot;] != 0 sample_20_non_random = data[condition].sample(20) sample_50_non_random = data[condition].sample(50) </code></pre> <p>I wanted to simplify things and make it more efficient. So I started with a super simple function where I can pass whether or not the sample will be random or not.</p> <pre><code>def simple_function(data_frame, type = &quot;random&quot;): if (type == &quot;random&quot;): sample = data_frame.sample(sample_size) else: condition = data_frame[&quot;Variable B&quot;] != 0 sample = data_frame[condition].sample(sample_size) return sample </code></pre> <p>But, I want to do this for more than one sample size. So I thought that rather than writing a for-loop that can be slow, I could maybe just use a decorator. 
I also have tried but have failed to understand their logic, so I thought this could be good practice to try to understand them better.</p> <pre><code>import functools def decorator(cache = {}, **case): def inner(function): function_name = function.__name__ if function_name not in cache: cache[function_name] = function @functools.wraps(function) def wrapped_function(**kwargs): if cache[function_name] != function: cache[function_name](**case) else: function(**case) return wrapped_function return inner @decorator(sample_size = [20, 50]) def sample(data_frame, type = &quot;random&quot;): if (type == &quot;random&quot;): sample = data_frame.sample(sample_size) else: condition = data_frame[&quot;Variable B&quot;] != 0 sample = data_frame[condition].sample(sample_size) return sample </code></pre> <p>I guess what I am not understanding is how the inheritance of the arguments works and how that then affects the iteration over the function in the decorator.</p>
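One way to see the argument flow: the decorator factory captures its own arguments (here, the sample sizes) in a closure, `inner` receives the function being decorated, and the wrapper can then call that function once per size. A toy sketch with plain lists standing in for DataFrames; with pandas, the slicing line would become `data_frame.sample(sample_size)`:

```python
import functools

def for_each_size(*sizes):
    """Decorator factory: the wrapped function is called once per
    sample size, and the results are collected in a dict keyed by size."""
    def inner(function):
        @functools.wraps(function)
        def wrapped(*args, **kwargs):
            return {n: function(*args, sample_size=n, **kwargs)
                    for n in sizes}
        return wrapped
    return inner

@for_each_size(2, 3)
def sample(data, sample_size, kind="random"):
    # Stand-in for data_frame.sample(sample_size); "conditional"
    # mimics filtering on Variable B != 0 first.
    pool = data if kind == "random" else [x for x in data if x != 0]
    return pool[:sample_size]

result = sample([0, 5, 6, 7, 8], kind="conditional")
```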
<python><decorator>
2023-02-09 20:48:34
0
326
Damon C. Roberts
75,404,232
6,077,239
How to use apply better in Polars?
<p>I have a polars dataframe illustrated as follows.</p> <pre><code>import polars as pl df = pl.DataFrame( { &quot;a&quot;: [1, 4, 3, 2, 8, 4, 5, 6], &quot;b&quot;: [2, 3, 1, 3, 9, 7, 6, 8], &quot;c&quot;: [1, 1, 1, 1, 2, 2, 2, 2], } ) </code></pre> <p>The task I have is</p> <ol> <li>groupby column &quot;c&quot;</li> <li>for each group, check whether all numbers from column &quot;a&quot; is less than corresponding values from column &quot;b&quot;. <ul> <li>If so, just return a column same as &quot;a&quot; in the groupby context.</li> <li>Otherwise, apply a third-party function called <strong>&quot;convert&quot;</strong> which takes two numpy arrays and return a single numpy array with the same size, so in my case, I can first convert column &quot;a&quot; and &quot;b&quot; to numpy arrays and supply them as inputs to <strong>&quot;convert&quot;</strong>. Finally, return the array returned from <strong>&quot;convert&quot;</strong> (probably need to transform it to polars series before returning) in the groupby context.</li> </ul> </li> </ol> <p>So, for the example above, the output I want is as follows (exploded after groupby for better illustration).</p> <pre><code>shape: (8, 2) ┌─────┬─────┐ │ c ┆ a │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 1 │ │ 1 ┆ 3 │ │ 1 ┆ 1 │ │ 1 ┆ 2 │ │ 2 ┆ 8 │ │ 2 ┆ 4 │ │ 2 ┆ 5 │ │ 2 ┆ 6 │ └─────┴─────┘ </code></pre> <p>With the assumption,</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; convert(np.array([1, 4, 3, 2]), np.array([2, 3, 1, 3])) np.array([1, 3, 1, 2]) # [1, 4, 3, 2] is from column a of df when column c is 1, and [2, 3, 1, 3] comes from column b of df when column c is 1. # I have to apply my custom python function 'convert' for the c == 1 group, because not all values in a are smaller than those in b according to the task description above. 
</code></pre> <p>My question is <strong>how am I supposed to implement this logic in a performant or polars idiomatic way without sacrificing so much speed gained from running Rust code and parallelization?</strong></p> <p>The reason I ask is because from my understanding, using apply with custom python function will slow down the program, but in my case, in certain scenarios, I will not need to resort to a third-party function for help. So, is there any way I can get the best of worlds somehow? (for scenarios where no third-party function is required, get full benefits of polars, and only apply third-party function when necessary).</p>
<python><python-polars>
2023-02-09 20:42:27
1
1,153
lebesgue
75,404,177
5,924,264
Is "Dict" a keyword in python?
<p>In the codebase I'm looking at at work, there's a line</p> <pre><code>values = Dict[int, List[QuantityClass]] </code></pre> <p>I cannot find where <code>Dict</code> is defined anywhere in the codebase when I did a grep, and I don't think it's a keyword in Python? This isn't a keyword right? It looks like some kind of custom <code>dictionary</code>, analogous to the native <code>dict</code>, but I'm not able to find its definition with a grep.</p>
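`Dict` is not a keyword: it is almost certainly `typing.Dict`, imported somewhere (possibly via `from typing import *`, which a grep for a `Dict` definition will not reveal). Note also that the line as written assigns the type alias itself to `values`; if a dictionary value was intended, it would presumably read `values: Dict[int, List[QuantityClass]] = {}`. A quick illustration:

```python
from typing import Dict, List

# Dict[...] is a *type alias* from the typing module, not a dict value.
values = Dict[int, List[str]]

# The alias is meant for annotations, like so:
annotated: Dict[int, List[str]] = {1: ["a"]}
```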
<python><dictionary>
2023-02-09 20:36:50
1
2,502
roulette01
75,403,887
8,297,745
Cannot use a decorator on a child class inheriting from a parent class but can use it on the object itself
<p>I am creating a class inheriting from a parent class. The parent class is Telebot and the child class is my own Telegram BOT class. I am doing this to create a standard BOT that multiple scripts will call and implement with default actions etc.</p> <p>Then I am trying to implement a decorator to treat default incoming messages on this child class. Here is how I am doing it (file is <code>telegram.py</code>):</p> <pre><code># Importo a biblioteca que vou usar pra criar meu BOT import telebot class TelegramBot(telebot.TeleBot): &quot;&quot;&quot; Essa classe é chamada para criar uma instância de um BOT do Telegram. Ela herda a classe Telebot do módulo telebot. Doc: https://pytba.readthedocs.io/en/latest/sync_version &quot;&quot;&quot; def __init__(self, token): &quot;&quot;&quot;Esta função é chamada no momento que a classe é instanciada. Herda os atributos do telebot&quot;&quot;&quot; super().__init__(token) def polling(self): &quot;&quot;&quot;Este método está sendo incrementado com algumas funcionalidades próprias do bot, ao invés de ser feito o override/polimorfismo.&quot;&quot;&quot; super().polling() @telebot.TeleBot.message_handler(commands=[&quot;testar_bot&quot;]) def resposta_teste(self, mensagem): &quot;&quot;&quot;Esse método utiliza do Decorator para ter uma nova caracteristica e funcionalidade, que define o padrão para cada mensagem recebida. O método testa a resposta padrão para fazer o teste no BOT. &quot;&quot;&quot; self.reply_to(mensagem, &quot;Olá! Teste passou. 
Tudo funcionando :)&quot;) </code></pre> <p>When doing it like this, I receive the error: <code>TypeError: TeleBot.message_handler() missing 1 required positional argument: 'self'</code></p> <p>The weirdest thing is that if I remove the decorator alongside the method and implement in my bot object on my <code>main.py</code> file, it works like a charm.</p> <p>Here is the code that works - I remove the decorator from the class and put it on <code>main.py</code>:</p> <pre><code> import telegram bot = telegram.TelegramBot(&quot;blablabla_mytokenhidden&quot;) @bot.message_handler(commands=[&quot;testar_bot&quot;]) def resposta_teste(mensagem): &quot;&quot;&quot;Esse método utiliza do Decorator para ter uma nova caracteristica e funcionalidade, que define o padrão para cada mensagem recebida. O método testa a resposta padrão para fazer o teste no BOT. &quot;&quot;&quot; bot.reply_to(mensagem, &quot;Olá! Teste passou. Tudo funcionando :)&quot;) bot.polling() </code></pre> <p>This does not gives me any error, and the decorator works like a charm. I really don't understand the logic behind this.</p> <p>Why the decorator I used on my object works but on my child class (telegram.py with TelegramBot class) doesn't?</p> <p><strong>Edit #1</strong></p> <p>Following the object logic, I tried replacing <code>telebot.TeleBot</code> with <code>self</code>, also tested with <code>object</code> and it still did not work...</p> <p>So I am clearly NOT getting the logic here.</p>
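The root cause: `message_handler` is an *instance* method, so at class-definition time there is no instance (no `self`) to register the handler on, which is exactly why decorating a function with the already-created `bot` object works. The usual pattern is to register the bound method inside `__init__`, calling the decorator factory explicitly. A simplified stand-in model of this mechanism (not the real telebot API):

```python
class Bot:
    """Stand-in for telebot.TeleBot: message_handler is an *instance*
    method that returns a decorator which registers the handler."""
    def __init__(self):
        self.handlers = {}

    def message_handler(self, commands):
        def register(func):
            for command in commands:
                self.handlers[command] = func
            return func
        return register

class MyBot(Bot):
    def __init__(self):
        super().__init__()
        # Register at instance-creation time, when `self` finally exists:
        self.message_handler(commands=["testar_bot"])(self.resposta_teste)

    def resposta_teste(self, mensagem):
        return f"reply to {mensagem}"

bot = MyBot()
out = bot.handlers["testar_bot"]("oi")
```

In the real subclass, the equivalent `__init__` line would be `self.message_handler(commands=["testar_bot"])(self.resposta_teste)` after removing the class-level decorator.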
<python><telegram>
2023-02-09 20:05:10
1
849
Raul Chiarella
75,403,882
1,658,105
Add a data directory outside Python package directory
<p>Given the following directory structure for a package <code>my_package</code>:</p> <pre><code>/ ├── data/ │ ├── more_data/ │ └── foo.txt ├── my_package/ │ ├── __init__.py │ └── stuff/ │ └── __init__.py ├── README.md ├── setup.cfg ├── setup.py </code></pre> <p>How can I make the <code>data/</code> directory accessible (in the most Pythonic way) from within code, without using <code>__file__</code> or other hacky solutions? I have tried using <code>data_files</code> in <code>setup.py</code> and the <code>[options.package_data]</code> in <code>setup.cfg</code> to no avail.</p> <p>I would like to do something like:</p> <pre class="lang-py prettyprint-override"><code>dir_data = importlib.resources.files(data) csv_files = dir_data.glob('*.csv') </code></pre> <p><strong>EDIT:</strong></p> <p>I'm working with an editable installation and there's already a <code>data/</code> directory in the package (for source code unrelated to the top-level data).</p>
<python><setuptools><setup.py><python-packaging><python-importlib>
2023-02-09 20:04:26
2
1,150
Androvich
75,403,841
11,940,250
How do I create a 16-bit grayscale image from my array dataset
<p>I want to convert a height map from a NASA database into an image file. There is already a bit about this on the net, and that helped me to read the file into an array, which looks like this:</p> <pre><code>data = [(113.0, 39.0, 1242), (113.00027777777778, 39.0, 1231), (113.00055555555555, 39.0, 1232), (113.00083333333333, 39.0, 1239), (113.00111111111111, 39.0, 1244), ...] </code></pre> <p>So I have an array with all the data according to the data pattern</p> <pre><code>data[width][height][tupel] </code></pre> <pre><code>tupel = [longitude, latitude, height] </code></pre> <p>the width and height are both 3601 long.</p> <pre><code>print (data[1800][1800]) </code></pre> <p>returns the tuple: (113.5, 39.5, 2032)</p> <p>and that's fine. It is exactly the center of the dataset, which goes from latitude 39 to 40 and from longitude 113 to 114. I don't think I will need longitude and latitude because I know that the data set is 3601 x 3601 in size. The valuable information is in the last value, the height. In my example here, the 2032.</p> <p>My question now is: how do I get the data set data = [rows, columns, [longitude, latitude, height] ] into a 16-bit grayscale image? As mentioned, longitude and latitude are not relevant. Do I have to first make the dataset something like data = [rows, columns, heights], i.e. filter out longitude and latitude, before I can further process the image file?</p> <p>And how exactly do I create a 16-bit grayscale image file in PNG from this?</p>
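A hedged sketch of one way to do this (Pillow is assumed to be available, and the small synthetic grid stands in for the 3601×3601 tile): keep only the third element of each tuple, reshape into a 2-D array, and hand the raw 16-bit buffer to Pillow's `I;16` mode, which PNG can store as 16-bit grayscale. Native little-endian byte order is assumed for `tobytes()`.

```python
import io
import numpy as np
from PIL import Image  # assumes Pillow is installed

# Synthetic stand-in for the (longitude, latitude, height) tuples.
size = 4
data = [(113 + x / size, 39 + y / size, 1000 + 100 * x + y)
        for y in range(size) for x in range(size)]

# Drop longitude/latitude, keep heights, reshape row-major into a grid.
heights = np.array([h for _, _, h in data],
                   dtype=np.uint16).reshape(size, size)

# "I;16" is Pillow's 16-bit grayscale mode; feed it the raw uint16 buffer.
img = Image.frombytes("I;16", (size, size), heights.tobytes())

# Round-trip through PNG to confirm the values survive at 16-bit depth.
buf = io.BytesIO()
img.save(buf, format="PNG")
restored = Image.open(io.BytesIO(buf.getvalue()))
```

For the real tile, `heights` would simply be the `(3601, 3601)` array of the third tuple elements, written out once with `img.save("tile.png")`.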
<python><numpy><image-processing><arraylist>
2023-02-09 20:00:16
1
419
Spiri
75,403,840
2,726,900
Why does HVAC not see secrets in HashiCorp Vault?
<p>I'm trying to use HashiCorp Vault with the HVAC Python client.</p> <p>I've run a Vault docker container (development mode config) on localhost, created a KV secret engine <code>kv1</code> (with the version 1 API), added a secret <code>mega_secret</code>, added a key/value (<code>&quot;hell&quot; --&gt; &quot;yeah&quot;</code>) to it, and tried to read it with HVAC.</p> <p>First, let's go to the docker container terminal and check that the secret is alive:</p> <pre><code># vault kv get kv1/mega_secret ==== Data ==== Key Value --- ----- hell yeah </code></pre> <p>And now I'm trying to read it with HVAC.</p> <pre><code>import hvac client = hvac.Client(url=&quot;http://localhost:8200&quot;, token=&quot;hvs.4MzADdB9pIHAggqaQWQZASx0&quot;, namespace=&quot;&quot;) assert client.is_authenticated() assert not client.sys.is_sealed() print(client.kv.v1.read_secret(path=&quot;kv1/mega_secret&quot;)) # Crashes here </code></pre> <p>Error message:</p> <blockquote> <p>hvac.exceptions.InvalidPath: no handler for route &quot;secret/kv1/mega_secret&quot;.<br /> route entry not found., on get http://localhost:8200/v1/secret/kv1/mega_secret</p> </blockquote> <p>How can it be fixed?</p>
<python><hashicorp-vault><hvac>
2023-02-09 20:00:11
1
3,669
Felix
75,403,816
6,251,742
Why does `.decode("utf-16")` on an ASCII-encoded string sometimes crash?
<p>I wanted to show how we can reduce the number of characters required to code a script in Python using encoding conversion, and I took the <a href="https://docs.python.org/3/faq/programming.html#is-it-possible-to-write-obfuscated-one-liners-in-python" rel="nofollow noreferrer">Mandelbrot set obfuscated example</a> from the <a href="https://docs.python.org/3/faq/programming.html#programming-faq" rel="nofollow noreferrer">Python programming FAQ</a> as an example.</p> <pre class="lang-py prettyprint-override"><code>code = b&quot;&quot;&quot;print((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+'\n'+y,map(lambda y, Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,Sy=Sy,L=lambda yc,Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,i=IM, Sx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro, i=i,Sx=Sx,F=lambda xc,yc,x,y,k,f=lambda xc,yc,x,y,k,f:(k&lt;=0)or (x*x+y*y &gt;=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr( 64+F(Ru+x*(Ro-Ru)/Sx,yc,0,0,i)),range(Sx))):L(Iu+y*(Io-Iu)/Sy),range(Sy ))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24))&quot;&quot;&quot; shorter_code = code.decode(&quot;u16&quot;) # crash here print(shorter_code) code_back = shorter_code.encode(&quot;u16&quot;)[2:] print(code_back) print(code_back == code) </code></pre> <p>However, the code crashed unexpectedly during execution.</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\lancet\AppData\Roaming\JetBrains\PyCharm2022.3\scratches\scratch_24.py&quot;, line 9, in &lt;module&gt; shorter_code = code.decode(&quot;u16&quot;) ^^^^^^^^^^^^^^^^^^ UnicodeDecodeError: 'utf-16-le' codec can't decode byte 0x29 in position 472: truncated data </code></pre> <p>I have already done this kind of trick for challenges in CodinGame code golf mode with success. 
So I tried with another example from the documentation, the <code>First 10 Fibonacci numbers</code> example, with success.</p> <pre class="lang-py prettyprint-override"><code>code = b&quot;&quot;&quot;print(list(map(lambda x,f=lambda x,f:(f(x-1,f)+f(x-2,f)) if x&gt;1 else 1: f(x,f), range(10))))&quot;&quot;&quot; shorter_code = code.decode(&quot;u16&quot;) print(shorter_code) # 牰湩⡴楬瑳洨灡氨浡摢⁡ⱸ㵦慬扭慤砠昬⠺⡦⵸ⰱ⥦昫砨㈭昬⤩椠⁦㹸‱汥敳ㄠ਺⡦ⱸ⥦‬慲杮⡥〱⤩⤩ code_back = shorter_code.encode(&quot;u16&quot;)[2:] print(code_back) # b'print(list(map(lambda x,f=lambda x,f:(f(x-1,f)+f(x-2,f)) if x&gt;1 else 1:\nf(x,f), range(10))))' print(code_back == code) # True </code></pre> <p>Why is the first string considered <code>truncated</code>?</p>
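The root cause can be shown without the Mandelbrot snippet: UTF-16 consumes bytes in pairs, so only byte strings of even length decode completely — the failing source happens to have an odd number of bytes, while the Fibonacci one is even. A hedged sketch, including one possible golf-friendly fix (padding with a trailing space, which is usually harmless at the end of Python source):

```python
# UTF-16 consumes bytes two at a time, so only even-length byte
# strings can decode fully; an odd trailing byte is "truncated data".
ok = b"ABCD"   # 4 bytes -> two UTF-16 code units
bad = b"ABC"   # 3 bytes -> the last byte has no partner

assert len(ok) % 2 == 0 and len(bad) % 2 == 1

two_chars = ok.decode("utf-16-le")

try:
    bad.decode("utf-16-le")
    truncated = False
except UnicodeDecodeError:
    truncated = True

# One possible fix for the code-golf trick: pad odd-length source with
# a trailing space before decoding, so the byte count becomes even.
padded = bad + b" " if len(bad) % 2 else bad
padded_ok = padded.decode("utf-16-le")
```

`len(code) % 2` on the two snippets from the question would show the same asymmetry: the Mandelbrot bytes are odd-length, the Fibonacci bytes even.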
<python><decode><utf-16>
2023-02-09 19:58:12
1
4,033
Dorian Turba
75,403,728
11,809,811
Check if a Toplevel window was closed?
<p>I have a tkinter app, created with customtkinter:</p> <pre><code>import customtkinter class App(customtkinter.CTk): def __init__(self): super().__init__() Extra() self.mainloop() class Extra(customtkinter.CTkToplevel): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.geometry(&quot;400x300&quot;) self.label = customtkinter.CTkLabel(self, text=&quot;ToplevelWindow&quot;) self.label.pack(padx=20, pady=20) App() </code></pre> <p>I am trying to figure out code that checks if the Extra window has been closed. I've been looking around and cannot seem to find anything useful. Is there a way of doing this?</p>
<python><tkinter><customtkinter>
2023-02-09 19:48:37
1
830
Another_coder
75,403,654
1,795,357
How do I connect to a MySQL5 DB using mysql-connector-python?
<p>I believe the crux of the issue is the version mismatch, but I'm not sure how to get around this.</p> <p>This is my conda environment file:</p> <pre><code>channels: - anaconda - defaults - conda-forge dependencies: - _libgcc_mutex=0.1=conda_forge - _openmp_mutex=4.5=2_gnu - bzip2=1.0.8=h7f98852_4 - ca-certificates=2022.12.7=ha878542_0 - cffi=1.15.1=py311h409f033_3 - cryptography=39.0.1=py311h9b4c7bb_0 - dnspython=2.3.0=pyhd8ed1ab_0 - greenlet=2.0.2=py311hcafe171_0 - idna=3.3=pyhd3eb1b0_0 - ld_impl_linux-64=2.39=hc81fddc_0 - libblas=3.9.0=16_linux64_openblas - libcblas=3.9.0=16_linux64_openblas - libffi=3.4.2=h7f98852_5 - libgcc-ng=12.2.0=h65d4601_19 - libgfortran-ng=11.2.0=h00389a5_1 - libgfortran5=11.2.0=h1234567_1 - libgomp=12.2.0=h65d4601_19 - liblapack=3.9.0=16_linux64_openblas - libnsl=2.0.0=h7f98852_0 - libopenblas=0.3.21=h043d6bf_0 - libprotobuf=3.21.12=h3eb15da_0 - libsqlite=3.40.0=h753d276_0 - libstdcxx-ng=12.2.0=h46fd767_19 - libuuid=2.32.1=h7f98852_1000 - libzlib=1.2.13=h166bdaf_4 - lz4-c=1.9.3=h295c915_1 - mysql-common=8.0.32=ha901b37_0 - mysql-connector-python=8.0.31=py311h0cf059c_2 - mysql-libs=8.0.32=hd7da12d_0 - ncurses=6.3=h27087fc_1 - numpy=1.24.2=py311h8e6699e_0 - openssl=3.0.8=h0b41bf4_0 - pandas=1.5.3=py311h2872171_0 - protobuf=4.21.12=py311hcafe171_0 - pycparser=2.21=pyhd3eb1b0_0 - python=3.11.0=ha86cf86_0_cpython - python-dateutil=2.8.2=pyhd3eb1b0_0 - python_abi=3.11=3_cp311 - pytz=2021.3=pyhd3eb1b0_0 - readline=8.1.2=h0f457ee_0 - six=1.16.0=pyhd3eb1b0_1 - sqlalchemy=2.0.2=py311h2582759_0 - tk=8.6.12=h27826a3_0 - typing-extensions=4.4.0=hd8ed1ab_0 - typing_extensions=4.4.0=pyha770c72_0 - tzdata=2022f=h191b570_0 - xz=5.2.6=h166bdaf_0 - zlib=1.2.13=h166bdaf_4 - zstd=1.5.2=ha4553b6_0 - pip: - pip==22.3.1 - setuptools==65.5.1 - wheel==0.38.4 </code></pre> <p>and this is my code:</p> <pre><code>import pandas as pd import mysql.connector database = 'db' host = 'host' port = '3306' user = 'user' password = 'pass' con = 
mysql.connector.connect(user=user, password=password, host=host, database=database, ssl_disabled=True) </code></pre> <p>I get the following error:</p> <pre><code>mysql.connector.errors.OperationalError: 1043 (08S01): Bad handshake </code></pre> <p>I've tried with <code>ssl_disabled</code> and with it enabled.</p>
<python><mysql><python-3.x>
2023-02-09 19:40:43
1
2,179
ajoseps
75,403,349
13,382,982
Use a class variable across inheritance in Python
<p>In Python, I want to define a top-level class that can depend on a class variable. Then I want to be able to change that variable at the class level, for children of the class, but still inherit the functionality that uses that variable.</p> <p>In general, my Parent class has some functions that depend on configuration variables. All my child classes use those same functions, but with different parameters. I would like to be able to change the parameters at the class level.</p> <p>As the simplest example, here are two classes where the <code>Parent</code> defines functions in terms of <code>my_global</code>, then the <code>Child</code> attempts to change that variable (but fails):</p> <pre><code>class Parent(): my_global = &quot;parent&quot; def __init__(self): pass def printmg(self): print(Parent.my_global) class Child(Parent): my_global = &quot;child&quot; my_parent = Parent() my_parent.printmg() my_child = Child() my_child.printmg() </code></pre> <p>This outputs</p> <pre><code>parent parent </code></pre> <p>While I would like it to output</p> <pre><code>parent child </code></pre> <p>I don't want to keep the variables at the object level (i.e. <code>self.my_global = &quot;child&quot;</code>), or to rewrite the function for the child.</p>
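The sketch below illustrates the fix: `printmg` hard-codes `Parent.my_global`, so the lookup never reaches the child's class attribute; going through the instance (`self.my_global`, or `type(self).my_global` to skip any instance attributes) lets normal attribute resolution find the override. (`printmg` returns instead of printing here, purely so the result can be checked.)

```python
class Parent:
    my_global = "parent"

    def printmg(self):
        # Look the attribute up through the instance's class rather than
        # naming Parent explicitly, so child overrides are honored.
        return type(self).my_global


class Child(Parent):
    my_global = "child"


parent_value = Parent().printmg()
child_value = Child().printmg()
```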
<python><inheritance>
2023-02-09 19:10:41
2
405
Santiago Cuellar
75,403,301
12,043,946
Scipy.Stats Error. Why is the function stats.combine_pvalues not accepting my weights for the Stouffer method?
<p>I have two dictionaries:</p> <pre><code> from scipy import stats import numpy as np import pandas as pd exp_pvalues={'SC_1': np.array([0.96612999, 0.30348366]), 'SC_2': np.array([0.66871158, 0.0011381 ]), 'SC_3': np.array([0.66871158, 0.0011381 , 0.96612999, 0.30348366]), 'SC_4': np.array([0.66871158, 0.0011381 , 0.46018094, 0.30348366]), 'SC_5': np.array([0.66871158, 0.0011381 , 0.18113085, 0.04860657]), 'SC_6': np.array([6.68711583e-01, 1.13809558e-03, 0.00000000e+00, 8.54560803e-07]), 'SC_7': np.array([6.68711583e-001, 1.13809558e-003, 8.47561031e-131, 1.28484156e-018])} weights_final={'SC_1': np.array([0.5, 0.5]), 'SC_2': np.array([0.5, 0.5]), 'SC_3': np.array([0.25, 0.25, 0.25, 0.25]), 'SC_4': np.array([0.49751244, 0.49751244, 0.00248756, 0.00248756]), 'SC_5': np.array([0.47619048, 0.47619048, 0.02380952, 0.02380952]), 'SC_6': np.array([0.32786885, 0.32786885, 0.01639344, 0.32786885]), 'SC_7': np.array([0.38461538, 0.38461538, 0.07692308, 0.15384615])} </code></pre> <p>The following function works for every single other method, but not for Stouffer's. The outputs changed as well when weights were included, so why is it failing for the Stouffer method?</p> <pre><code>combined_pvalues_Stouffer_weighted = {} for key, values in exp_pvalues.items(): test_stat, combined_pval= stats.combine_pvalues(values, method='stouffer', weights = weights_final) combined_pvalues_Stouffer_weighted[key] = combined_pval ValueError: pvalues and weights must be of the same size. </code></pre>
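The error message is literal: `combine_pvalues` wants a weight vector the same length as the p-value vector, but the loop passes the entire `weights_final` dict every time. Indexing the dict per key fixes it — a hedged sketch on two of the keys:

```python
import numpy as np
from scipy import stats

exp_pvalues = {
    "SC_1": np.array([0.96612999, 0.30348366]),
    "SC_4": np.array([0.66871158, 0.0011381, 0.46018094, 0.30348366]),
}
weights_final = {
    "SC_1": np.array([0.5, 0.5]),
    "SC_4": np.array([0.49751244, 0.49751244, 0.00248756, 0.00248756]),
}

combined = {}
for key, pvals in exp_pvalues.items():
    # Pass the weight vector for *this* key: it must match pvals in length.
    stat, pval = stats.combine_pvalues(
        pvals, method="stouffer", weights=weights_final[key])
    combined[key] = pval
```

Only Stouffer's method uses `weights` at all, which is why every other method silently ignored the mismatched argument.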
<python><arrays><p-value><probability-distribution>
2023-02-09 19:05:38
1
392
d3hero23
75,403,127
5,268,999
How to negate a predicate in Python?
<p>I have a predicate function that accepts a string and evaluates it to <code>bool</code>:</p> <pre><code>def pred(line): return someval in line </code></pre> <p>Now I have a strings list and want to select only those which don't match the predicate. I'd expect code like below:</p> <pre><code>my_list = [&quot;Thanks&quot;,&quot;in&quot;,&quot;advance!&quot;] not_pred = negate(pred) new_list = filter(not_pred, my_list) </code></pre> <p>I suppose Python has something similar to a <code>negate</code> function but I could not find any. What is a convenient Python way for this?</p> <p>I know it can be achieved with lambda but I feel there's an easier standard way.</p>
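The standard library does ship the negated filter directly: `itertools.filterfalse(pred, xs)` keeps the items for which `pred` is false. A generic `negate` combinator is also a three-liner. Sketch (the concrete `someval` substring is an assumption, just to make the predicate runnable):

```python
from itertools import filterfalse

def pred(line, someval="an"):
    # Stand-in predicate: substring test ("an" is assumed here).
    return someval in line

my_list = ["Thanks", "in", "advance!"]

# Option 1: stdlib -- filter with the predicate's negation built in.
kept_a = list(filterfalse(pred, my_list))

# Option 2: a generic negate() combinator, as sketched in the question.
def negate(p):
    def not_p(*args, **kwargs):
        return not p(*args, **kwargs)
    return not_p

kept_b = list(filter(negate(pred), my_list))
```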
<python><functional-programming>
2023-02-09 18:47:08
2
334
Antonio
75,403,048
19,675,781
Python calculate correlation of a column against entire dataframe grouped by index
<p>I have a dataframe of size (109049, 29184) that looks like this:</p> <pre><code>df: Ford Honda GM index Sedan 4 1 8 Sedan 5 2 7 Sedan 6 3 6 Sedan 7 4 5 SUV 8 5 7 SUV 1 6 6 SUV 2 7 5 SUV 3 8 4 </code></pre> <p>This data frame has 22 different indexes. I want to calculate the correlation of column Ford against all the other columns, broken down by index, in this way:</p> <pre><code>index SUV Sedan Ford Ford Ford 1.00 1.0 Honda -0.58 1.0 GM 0.58 -1.0 </code></pre> <p>I tried to calculate the correlation across the entire data using this:</p> <pre><code>df.groupby('index').corr(method = 'spearman').reset_index() </code></pre> <p>But due to the huge data size, I am unable to calculate it even after running the code for more than 10 hours. How can I calculate the correlation of one column against the rest of the columns, broken down by index, in a quick way?</p> <p>Your help is appreciated!</p>
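Since only the `Ford` column matters, `DataFrame.corrwith` — one column against all columns in a single pass per group — avoids building the full pairwise `corr()` matrix, which is what makes the original approach blow up on 29,184 columns. A hedged sketch on the toy data:

```python
import pandas as pd

df = pd.DataFrame(
    {"Ford": [4, 5, 6, 7, 8, 1, 2, 3],
     "Honda": [1, 2, 3, 4, 5, 6, 7, 8],
     "GM": [8, 7, 6, 5, 7, 6, 5, 4]},
    index=["Sedan"] * 4 + ["SUV"] * 4,
)
df.index.name = "index"

# corrwith() correlates one Series against every column in a single
# pass, so per group this is O(columns) rather than the O(columns^2)
# work that the full corr() matrix does.
result = df.groupby(level="index").apply(
    lambda g: g.corrwith(g["Ford"], method="spearman")
)
```

`result` is one row per index value with a Spearman correlation against `Ford` for each column.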
<python><pandas><dataframe><statistics>
2023-02-09 18:38:41
1
357
Yash
75,403,031
15,363,250
How to compare two JSONs and add the missing keys as null to the one with fewer elements?
<p>I have an error in the data entry in one of my tables, so the data I use is sometimes incomplete, as you can see in the example below, where some users have more questions answered than others.</p> <pre><code>| user_id| user_name | client_preferences +--------+-----------------+-------------------------------------------------------------------------------+ | 1020 | John Greene | [{&quot;fav_book&quot;: &quot;1984&quot;, &quot;fav_food&quot;: &quot;Pizza&quot;}] +--------+-----------------+-------------------------------------------------------------------------------+ | 3002 | Albert Onestone | [{&quot;fav_food&quot;: &quot;Fried Chicken&quot;}] +--------+-----------------+-------------------------------------------------------------------------------+ | 2334 | Luis Ville | [{&quot;fav_book&quot;: &quot;Harry Potter&quot;, &quot;fav_food&quot;: &quot;Tacos&quot;, &quot;fav_holiday&quot;:&quot;christmas&quot;}] +--------+-----------------+-------------------------------------------------------------------------------+ </code></pre> <p>As you can see, some users have more preferences than others. And this is a problem, because even if the client didn't answer a question, we need it present as null. 
Now we have a perfect example of a user with all possible preferences in her profile:</p> <pre><code>| user_id| user_name | client_preferences +--------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 0001 | Emma Write | [{&quot;fav_book&quot;: &quot;Alice In the wonderland&quot;, &quot;fav_food&quot;: &quot;Hamburger&quot;, &quot;fav_holiday&quot;:&quot;christmas&quot;,&quot;fav_desert&quot;:&quot;ice cream&quot;, &quot;fav_pet&quot;:&quot;dog&quot;, &quot;fav_season&quot;:&quot;fall&quot;}] +--------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ </code></pre> <p>How can I check whether all the users have the questions the user above has? And, if they are missing some of the questions, how can I insert them as null in their profiles?</p> <p>Thanks!</p>
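The core of the fix is independent of Spark: take a reference set of question keys (e.g. from the complete profile) and rebuild every profile with `dict.get`, which yields `None` — serialized back as JSON `null` — for anything missing. A plain-Python sketch; in PySpark the same function could be applied through a UDF, which is noted here as an assumption:

```python
import json

rows = [
    (1020, "John Greene", '[{"fav_book": "1984", "fav_food": "Pizza"}]'),
    (3002, "Albert Onestone", '[{"fav_food": "Fried Chicken"}]'),
]

# Reference set of questions, e.g. taken from the "complete" profile.
all_keys = ["fav_book", "fav_food", "fav_holiday", "fav_desert",
            "fav_pet", "fav_season"]

def normalize(prefs_json, keys=all_keys):
    prefs = json.loads(prefs_json)[0]
    # Missing answers become None, i.e. null once re-serialized.
    return json.dumps([{k: prefs.get(k) for k in keys}])

normalized = {uid: normalize(p) for uid, _, p in rows}
```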
<python><pandas><pyspark><databricks>
2023-02-09 18:37:17
0
450
Marcos Dias
75,403,024
832,490
mypy complains about classmethod
<p>I have a trivial dataclass (from pydantic):</p> <pre><code>from typing import Optional from pydantic.dataclasses import dataclass from abc import ABCMeta from abc import abstractmethod @dataclass class BaseEntity(metaclass=ABCMeta): @classmethod @abstractmethod def from_dict(cls, other: dict): ... @abstractmethod def dict(self): ... @dataclass class UserEntity(BaseEntity): id: Optional[str] name: str email: str avatar: str @classmethod def from_dict(cls, other: dict): return cls( id=other.get(&quot;id&quot;), name=other.get(&quot;name&quot;), email=other.get(&quot;email&quot;), avatar=other.get(&quot;avatar&quot;), ) </code></pre> <p>When I run mypy, I get this set of errors:</p> <blockquote> <p>app/entities/user.py:25: error: Unexpected keyword argument &quot;id&quot; for &quot;UserEntity&quot; [call-arg]</p> </blockquote> <blockquote> <p>app/entities/user.py:25: error: Unexpected keyword argument &quot;name&quot; for &quot;UserEntity&quot; [call-arg]</p> </blockquote> <blockquote> <p>app/entities/user.py:25: error: Unexpected keyword argument &quot;email&quot; for &quot;UserEntity&quot; [call-arg]</p> </blockquote> <blockquote> <p>app/entities/user.py:25: error: Unexpected keyword argument &quot;avatar&quot; for &quot;UserEntity&quot; [call-arg]</p> </blockquote> <p>What am I doing wrong? The code is fine; it runs. Or is it a mypy bug?</p> <pre><code>$ mypy --version mypy 1.0.0 (compiled: yes) </code></pre>
<python><mypy><pydantic>
2023-02-09 18:36:31
2
1,009
Rodrigo
75,403,007
3,084,842
Annotating top of stacked barplot in matplotlib
<p>I made a stacked barplot in matplotlib and want to print the total of each bar at the top,</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame() df['year'] = ['2012','2013','2014'] df['test 1'] = [4,17,5] df['test 2'] = [1,4,1] df['test 2a'] = [1,1,2] df['test 3'] = [2,1,8] df['test 4'] = [2,1,5] df['test 4a'] = [2,1,7] df = df.set_index('year') df['total'] = df.sum(axis=1) fig, ax = plt.subplots(1,1) df.drop(columns=['total']).plot(kind='bar', stacked=True, ax=ax) # iterate through each group of container (bar) objects for c in ax.containers: # annotate the container group ax.bar_label(c, label_type='center') ##for p in ax.patches: ## width, height = p.get_width(), p.get_height() ## x, y = p.get_xy() ## ax.text(x+width/2, ## y+height/2, ## '{:.0f}'.format(height), ## horizontalalignment='center', ## verticalalignment='center') </code></pre> <p>I tried using the answers in other posts, eg <a href="https://stackoverflow.com/q/21397549">here</a> and <a href="https://stackoverflow.com/q/63135395">here</a> but they print the values for every section. I'm looking for a graph like below with black text showing <code>12</code>, <code>15</code>, <code>18</code> values for each stacked bar. Ideally I could print the numbers in <code>df['total']</code> above each stack, but I'm not sure how to do this.</p> <p><a href="https://i.sstatic.net/tBHHD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tBHHD.png" alt="enter image description here" /></a></p>
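A hedged sketch of one way to get the totals on top (matplotlib ≥ 3.4 assumed, with the `Agg` backend so it runs headless): the bars of the *last* container end at the top of each stack, so calling `ax.bar_label` on `ax.containers[-1]` with an explicit `labels=` places the precomputed totals there instead of the last segment's own value.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"year": ["2012", "2013", "2014"],
                   "test 1": [4, 17, 5], "test 2": [1, 4, 1],
                   "test 2a": [1, 1, 2], "test 3": [2, 1, 8],
                   "test 4": [2, 1, 5], "test 4a": [2, 1, 7]}).set_index("year")
df["total"] = df.sum(axis=1)

fig, ax = plt.subplots()
df.drop(columns=["total"]).plot(kind="bar", stacked=True, ax=ax)

# The last container's bars end at the top of each stack, so labeling
# its edges with the precomputed totals annotates the stack tops.
labels = ax.bar_label(ax.containers[-1],
                      labels=df["total"].astype(int).astype(str),
                      label_type="edge")
```

Without `labels=`, `bar_label` would print only the last segment's value at that position, which is why the other answers annotate every section instead.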
<python><pandas><matplotlib><formatting>
2023-02-09 18:34:53
1
3,997
Medulla Oblongata
75,402,849
9,861,647
Pandas reformat and Melt
<p>I have this pandas data frame which has 65 columns (a full month):</p> <pre><code>ID Name Date 2022-12-1-IN 2022-12-1-OUT 2022-12-2-IN 2022-12-2-OUT ... 2022-12-31-IN 2022-12-31-OUT n_cols = df.shape[1] # Create a list of the new column names new_col_names = [] for i in range(3, n_cols): if (i - 3) % 2 == 0: new_col_names.append('2022-12-0' + str(((i - 4) // 2) + 1) + '-IN') else: new_col_names.append('2022-12-0' + str(((i - 4) // 2) + 1) + '-OUT') df.columns = new_col_names </code></pre> <p>My expected results:</p> <pre><code>ID Name Date 2022-12-01-IN 2022-12-01-OUT 2022-12-02-IN 2022-12-02-OUT ... 2022-12-31-IN 2022-12-31-OUT if new_len != old_len: ---&gt; 70 raise ValueError( 71 f&quot;Length mismatch: Expected axis has {old_len} elements, new &quot; 72 f&quot;values have {new_len} elements&quot; 73 ) ValueError: Length mismatch: Expected axis has 65 elements, new values have 62 elements </code></pre> <p>What am I doing wrong?</p> <p>Or is there a better solution to rename all my date columns to the format yyyy-mm-dd-IN and yyyy-mm-dd-OUT?</p> <p>Ex: 2022-12-01-IN, 2022-12-01-OUT,... 2022-12-02-IN ...</p> <p>I have a range of dates from 2022-1-1 to 2022-12-1</p>
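The 65-vs-62 mismatch comes from hand-rolling the names and overwriting *all* columns with a list built only for the date columns. Generating the names with `pd.date_range` (whose `%d` formatting also zero-pads) and reassigning only the date columns keeps the lengths consistent by construction. Hedged sketch on a 2-day toy frame:

```python
import pandas as pd

# Toy frame: 3 fixed columns plus an IN/OUT pair per day (2 days here).
df = pd.DataFrame([[1, "A", "x", 0, 1, 2, 3]],
                  columns=["ID", "Name", "Date",
                           "2022-12-1-IN", "2022-12-1-OUT",
                           "2022-12-2-IN", "2022-12-2-OUT"])

days = pd.date_range("2022-12-01", periods=(df.shape[1] - 3) // 2, freq="D")
date_cols = [f"{d:%Y-%m-%d}-{suffix}"
             for d in days for suffix in ("IN", "OUT")]

# Replace only the date columns; the first three keep their names,
# so 3 + 2 * len(days) always equals the total column count.
df.columns = list(df.columns[:3]) + date_cols
```

Changing the `date_range` start and `periods` covers any month in the 2022-01-01 to 2022-12-01 span mentioned above.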
<python><dataframe>
2023-02-09 18:19:18
1
1,065
Simon GIS
75,402,818
10,714,156
PyTorch: `DataLoader()` for aggregated/clustered/panel data
<p>Say I have a data set with multiple observations per individual (also known as <strong>panel data</strong>). Hence, I want to sample them together; that is to say <strong>I want to sample my dataset at the level of the <em>individuals</em>, not at the level of the observations</strong> (or rows).</p> <p>That being said, imagine we have the following data, where I identify my <em><strong>individuals</strong></em> in column <code>id_ind</code>, so the first two rows have a <code>1</code> in <code>id_ind</code> since these two observations belong to the <strong>first individual</strong>. Then, the 3rd and the 4th rows belong to the <strong>second individual</strong> (<code>id_ind == 2</code>), and so forth..</p> <pre><code>import pandas as pd X = pd.DataFrame.from_dict({'x1_1': {0: -0.1766214634108258, 1: 1.645852185286492, 2: -0.13348860101031038, 3: 1.9681043689968933, 4: -1.7004428240831382, 5: 1.4580091413853749, 6: 0.06504113741068565, 7: -1.2168493676768384, 8: -0.3071304478616376, 9: 0.07121332925591593}, 'x1_2': {0: -2.4207773498298844, 1: -1.0828751040719462, 2: 2.73533787008624, 3: 1.5979611987152071, 4: 0.08835542172064115, 5: 1.2209786277076156, 6: -0.44205979195950784, 7: -0.692872860268244, 8: 0.0375521181289943, 9: 0.4656030062266639}, 'x1_3': {0: -1.548320898226322, 1: 0.8457342014424675, 2: -0.21250514722879738, 3: 0.5292389938329516, 4: -2.593946520223666, 5: -0.6188958526077123, 6: 1.6949245117526974, 7: -1.0271341091035742, 8: 0.637561891142571, 9: -0.7717170035055559}, 'x2_1': {0: 0.3797245517345564, 1: -2.2364391598508835, 2: 0.6205947900678905, 3: 0.6623865847688559, 4: 1.562036259999875, 5: -0.13081282910947759, 6: 0.03914373833251773, 7: -0.995761652421108, 8: 1.0649494418154162, 9: 1.3744782478849122}, 'x2_2': {0: -0.5052556836786106, 1: 1.1464291788297152, 2: -0.5662380273138174, 3: 0.6875729143723538, 4: 0.04653136473130827, 5: -0.012885303852347407, 6: 1.5893672346098884, 7: 0.5464286050059511, 8: -0.10430829457707284, 9: 
-0.5441755265313813}, 'x2_3': {0: -0.9762973303149007, 1: -0.983731467806563, 2: 1.465827578266328, 3: 0.5325950414202745, 4: -1.4452121324204903, 5: 0.8148816373643869, 6: 0.470791989780882, 7: -0.17951636294180473, 8: 0.7351814781280054, 9: -0.28776723200679066}, 'x3_1': {0: 0.12751822396637064, 1: -0.21926633684030983, 2: 0.15758799357206943, 3: 0.5885412224632464, 4: 0.11916562911189271, 5: -1.6436210334529249, 6: -0.12444368631987467, 7: 1.4618564171802453, 8: 0.6847234328916137, 9: -0.23177118858569187}, 'x3_2': {0: -0.6452955690715819, 1: 1.052094761527654, 2: 0.20190339195326157, 3: 0.6839430295237913, 4: -0.2607691613858866, 5: 0.3315513026670213, 6: 0.015901139336566113, 7: 0.15243420084881903, 8: -0.7604225072161022, 9: -0.4387652927008854}, 'x3_3': {0: -1.067058994377549, 1: 0.8026914180717286, 2: -1.9868531745912268, 3: -0.5057770735303253, 4: -1.6589569342151713, 5: 0.358172252880764, 6: 1.9238983803281329, 7: 2.2518318810978246, 8: -1.2781475121874357, 9: -0.7103081175166167}}) Y = pd.DataFrame.from_dict({'CHOICE': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 2.0, 6: 1.0, 7: 1.0, 8: 2.0, 9: 2.0}}) Z = pd.DataFrame.from_dict({'z1': {0: 2.4196730570917233, 1: 2.4196730570917233, 2: 2.822802255159467, 3: 2.822802255159467, 4: 2.073171091633643, 5: 2.073171091633643, 6: 2.044165101485163, 7: 2.044165101485163, 8: 2.4001241292606275, 9: 2.4001241292606275}, 'z2': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0, 5: 1.0, 6: 1.0, 7: 1.0, 8: 0.0, 9: 0.0}, 'z3': {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 2.0, 5: 2.0, 6: 2.0, 7: 2.0, 8: 3.0, 9: 3.0}}) id = pd.DataFrame.from_dict({'id_choice': {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0, 5: 6.0, 6: 7.0, 7: 8.0, 8: 9.0, 9: 10.0}, 'id_ind': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 3.0, 6: 4.0, 7: 4.0, 8: 5.0, 9: 5.0}} ) # Create a dataframe with all the data data = pd.concat([id, X, Z, Y], axis=1) print(data.head(4)) id_choice id_ind x1_1 x1_2 x1_3 x2_1 x2_2 \ 0 1.0 1.0 -0.176621 -2.420777 -1.548321 0.379725 -0.505256 1 2.0 1.0 
1.645852 -1.082875 0.845734 -2.236439 1.146429 2 3.0 2.0 -0.133489 2.735338 -0.212505 0.620595 -0.566238 3 4.0 2.0 1.968104 1.597961 0.529239 0.662387 0.687573 x2_3 x3_1 x3_2 x3_3 z1 z2 z3 CHOICE 0 -0.976297 0.127518 -0.645296 -1.067059 2.419673 0.0 1.0 1.0 1 -0.983731 -0.219266 1.052095 0.802691 2.419673 0.0 1.0 1.0 2 1.465828 0.157588 0.201903 -1.986853 2.822802 0.0 1.0 2.0 3 0.532595 0.588541 0.683943 -0.505777 2.822802 0.0 1.0 2.0 </code></pre> <p>Now, I have written the <code>ChoiceDataset</code> class around the primitive <code>torch.utils.data.Dataset</code>. Unfortunately, it is sampling at the level of the observations.</p> <pre><code># Create a dictionary with the data data_dict = {'idx': id, 'X': X, 'Z': Z, 'Y': Y} # Create a pytorch.Dataset class from torch.utils.data import Dataset from torch.utils.data import DataLoader import torch class ChoiceDataset(Dataset): def __init__(self, data): self.Y = torch.LongTensor(data['Y'].values -1).reshape(len(data['Y'].index),1) self.J = torch.unique(self.Y).shape[0] self.id = torch.LongTensor(data['idx']['id_ind'].values).reshape(len(data['idx']['id_ind'].index),1) self.N = torch.unique(self.id).shape[0] # Total number of individuals _,self.t_n = self.id.unique(return_counts=True) self.N_t = self.t_n.sum(axis=0).item() #total number of observations self.X_wide = torch.DoubleTensor(data['X'].values) self.K = int(self.X_wide.shape[1] / self.J) self.Z = torch.DoubleTensor(data['Z'].values) self.X = self.X_wide.reshape(self.N_t ,self.K, self.J) def __len__(self): # return a dictionary with the data dimensions # __len__ is equal to the number of individual here (not the observations) # since this is the level at which I want to sample from return self.N def __getitem__(self, idx): # return a dictionary with the data return {'Y': self.Y[idx], 'X': self.X[idx], 'id': self.id[idx], 'Z': self.Z[idx]} </code></pre> <p>As you can see below, it is sampling at the level of the observations. 
Could you please suggest some changes to make it sample at the level of <strong>individuals</strong>?</p> <pre><code>df_train = ChoiceDataset(data_dict) data_train = DataLoader(df_train, batch_size=3, shuffle=False, num_workers=0) for batch_idx, data in enumerate(data_train): print('batch_idx:',batch_idx) print(data['Y'].shape) #batch_idx: 0 #torch.Size([3, 1]) # takes first 3 observations #batch_idx: 1 #torch.Size([2, 1]) # takes the last 2 observations </code></pre> <hr /> <p><strong>Update</strong></p> <p>By selecting the observations that belong to each individual with the following <code>__getitem__()</code> function, I was expecting to solve the problem.</p> <pre><code> def __getitem__(self, idx): # return a dictionary with the data # Get the position of individual idx in the dataset ind_position = torch.where(self.id == idx)[0] return {'Y': self.Y[ind_position], 'X': self.X[ind_position], 'id': self.id[ind_position], 'Z': self.Z[ind_position]} </code></pre> <p>However, I am getting the following error, which, if I am reading it correctly, is telling me that, internally, <code>torch.stack()</code> is meant to receive only tensors of the same size (probably 1 row?) when putting the batches together. 
Unfortunately, I am still stuck with this.</p> <pre><code>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) c:\Users\u0133260\Documents\_local_git_repos\MixTasteNet_project\MixTasteNet_local\CODE\SO_Q\dataloader.py in line 53 52 df_train =ChoiceDataset(data_dict) 53 data_train = DataLoader(df_train, batch_size=2, shuffle=False, num_workers=0) ---&gt; 54 for batch_idx, data in enumerate(data_train): 55 print('batch_idx:',batch_idx) File c:\Users\u0133260\Anaconda3\envs\pyt\lib\site-packages\torch\utils\data\dataloader.py:681, in _BaseDataLoaderIter.__next__(self) 678 if self._sampler_iter is None: 679 # TODO(https://github.com/pytorch/pytorch/issues/76750) 680 self._reset() # type: ignore[call-arg] --&gt; 681 data = self._next_data() 682 self._num_yielded += 1 683 if self._dataset_kind == _DatasetKind.Iterable and \ 684 self._IterableDataset_len_called is not None and \ 685 self._num_yielded &gt; self._IterableDataset_len_called: File c:\Users\u0133260\Anaconda3\envs\pyt\lib\site-packages\torch\utils\data\dataloader.py:721, in _SingleProcessDataLoaderIter._next_data(self) 719 def _next_data(self): 720 index = self._next_index() # may raise StopIteration --&gt; 721 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 722 if self._pin_memory: 723 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) File c:\Users\u0133260\Anaconda3\envs\pyt\lib\site-packages\torch\utils\data\_utils\fetch.py:52, in _MapDatasetFetcher.fetch(self, possibly_batched_index) 50 else: 51 data = self.dataset[possibly_batched_index] ---&gt; 52 return self.collate_fn(data) File c:\Users\u0133260\Anaconda3\envs\pyt\lib\site-packages\torch\utils\data\_utils\collate.py:160, in default_collate(batch) 158 elif isinstance(elem, collections.abc.Mapping): 159 try: --&gt; 160 return elem_type({key: default_collate([d[key] for d in batch]) for key in elem}) 161 except TypeError: 162 # The mapping type may 
not support `__init__(iterable)`. 163 return {key: default_collate([d[key] for d in batch]) for key in elem} File c:\Users\u0133260\Anaconda3\envs\pyt\lib\site-packages\torch\utils\data\_utils\collate.py:160, in &lt;dictcomp&gt;(.0) 158 elif isinstance(elem, collections.abc.Mapping): 159 try: --&gt; 160 return elem_type({key: default_collate([d[key] for d in batch]) for key in elem}) 161 except TypeError: 162 # The mapping type may not support `__init__(iterable)`. 163 return {key: default_collate([d[key] for d in batch]) for key in elem} File c:\Users\u0133260\Anaconda3\envs\pyt\lib\site-packages\torch\utils\data\_utils\collate.py:141, in default_collate(batch) 139 storage = elem.storage()._new_shared(numel, device=elem.device) 140 out = elem.new(storage).resize_(len(batch), *list(elem.size())) --&gt; 141 return torch.stack(batch, 0, out=out) 142 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ 143 and elem_type.__name__ != 'string_': 144 if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap': 145 # array of string classes and object RuntimeError: stack expects each tensor to be equal size, but got [0, 1] at entry 0 and [2, 1] at entry 1 </code></pre>
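A torch-free sketch of the usual fix (PyTorch itself is assumed unavailable here): keep the `Dataset` at the observation level and feed `DataLoader` a *batch sampler* that yields row indices covering whole individuals. Because every batch is then a plain list of same-shaped rows, the default collate's `torch.stack` no longer sees ragged tensors.

```python
from collections import defaultdict

# Observation-level ids: two rows per individual, as in the question.
id_ind = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]

def individual_batches(ids, individuals_per_batch):
    """Yield lists of *row* indices that cover whole individuals."""
    groups = defaultdict(list)
    for row, ind in enumerate(ids):
        groups[ind].append(row)
    batch = []
    for count, rows in enumerate(groups.values(), start=1):
        batch.extend(rows)
        if count % individuals_per_batch == 0:
            yield batch
            batch = []
    if batch:
        yield batch

batches = list(individual_batches(id_ind, individuals_per_batch=3))

# With PyTorch installed, this plugs in as (sketch, untested here):
#   DataLoader(dataset, batch_sampler=list(individual_batches(id_ind, 3)))
```

Each yielded batch then contains every observation of its individuals, so sampling happens at the individual level while the rows stay uniformly shaped.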
<python><pandas><dataframe><pytorch><dataloader>
2023-02-09 18:16:25
1
1,966
Álvaro A. Gutiérrez-Vargas
75,402,817
2,989,642
How to properly call a multithreaded method inside a class in Python?
<p>I've got a method to download a bunch of files, and then do things with them.</p> <p>The multithreaded download methods worked when not in a class, but when I put them inside the class, they cease processing immediately after initiating the first file in the list. There are no errors thrown; the URL call is good, etc. So I am probably missing something related to OOP in Python.</p> <pre><code>from multiprocessing import cpu_count from multiprocessing.pool import ThreadPool import os from requests import Session import time class OSM: def __init__(self): self.url_root = &quot;https://my.site/index.html&quot; self.s = self._mount_session() self.data = None # zip object of download links and associated local paths self.download_path = &quot;C:/Temp&quot; def _download_parallel(self, args): results = ThreadPool(cpu_count() - 1).imap_unordered(self._download_url, args) for result in results: print(f&quot;URL: {result[0]} | Time (s): {result[1]}&quot;) def _download_url(self, args): t0 = time.time() url, fn = args[0], args[1] try: r = self.s.get(url) with open(fn, 'wb') as f: f.write(r.content) return (url, time.time() - t0) except Exception as e: print(f&quot;Exception in _download_url(): {e}&quot;) pass def _mount_session(self): return Session() # placeholder, the session is negotiated in here def download(self): # expose this to the user if not os.path.exists(self.download_path): os.makedirs(self.download_path) return self._download_parallel(self.data) def do_stuff_with_files(self): # process files, etc return def get_file_list(self): dl_links = [] local_files = [] # check the website, get list of links, create list of local files, zip together self.data = zip(dl_links, local_files) if __name__ == &quot;__main__&quot;: o = OSM() o.get_file_list() o.download() </code></pre>
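Two things are worth checking here, shown in a runnable sketch with a stand-in for the network call (the real `requests` download is assumed): first, `self.data` is a `zip` object — a one-shot iterator that is empty after a single pass, so materialize it with `list(...)` if it may be consumed more than once; second, `concurrent.futures` surfaces worker exceptions at `.result()` instead of letting them vanish inside the pool, which makes "stops silently after the first file" problems much easier to diagnose.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in for the requests-based download so the sketch runs anywhere.
def download_one(args):
    url, fn = args
    if "bad" in url:
        raise ValueError(f"failed: {url}")
    return url, len(fn)

# Materialize the (url, local_path) pairs up front -- a zip() object
# would be exhausted after one pass.
jobs = [("https://ok/a", "a.bin"), ("https://bad/b", "b.bin"),
        ("https://ok/c", "c.bin")]

results, errors = [], []
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(download_one, job): job for job in jobs}
    for fut in as_completed(futures):
        try:
            # .result() re-raises the worker's exception here, so
            # failures are visible instead of silently swallowed.
            results.append(fut.result())
        except Exception as exc:
            errors.append((futures[fut], exc))
```

Bound methods like `self._download_url` work fine as the submitted callable, so the same pattern drops into the class unchanged.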
<python><multithreading><multiprocessing>
2023-02-09 18:16:04
0
549
auslander
75,402,809
21,182,228
Connection pool exhausted error (psycopg2)
<p>I am making an api that listens to webhook events, and every few hours I get a &quot;connection pool exhausted&quot; error, which does not make any sense, since I am returning the connection to the pool at the end of each request.</p> <p>This is the python code:</p> <pre class="lang-py prettyprint-override"><code>POOL = SimpleConnectionPool(10, 20,database=&quot;chat&quot;, user='postgres', password='postgres', host='99.99.99.99', port='4567') @csrf_exempt def post(request): jned = json.loads(request.body) with POOL.getconn() as connection: with connection.cursor() as cursor: try: proxy_request(connection,cursor,jned) except Exception as e: print(e) POOL.putconn(connection) return HttpResponse('OK') </code></pre> <p>NOTE: the webhook that I am listening to has webhook-retries enabled.</p> <p>NOTE: the proxy_request() function calls external APIs which sometimes take a few seconds to respond.</p> <p>NOTE: once the connections-exhausted message comes up, it does not go away, and the server stops working. I think this is happening because the external APIs that I am using are not fast, which causes the &quot;connection pool exhausted&quot; error.</p> <p>ANOTHER QUESTION: can I use max_lifetime when creating the connection pool to temporarily fix the problem?</p> <p>Once the &quot;connections exhausted&quot; message comes up, I get a notification on WhatsApp, and I manually restart the server, which is not scalable.</p>
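One pattern worth noting here (sketched with a stand-in pool class rather than psycopg2 itself, so it runs anywhere): returning the connection in a `finally` block guarantees it goes back to the pool even if the handler, or anything around it, raises. With psycopg2 the same shape would be `conn = POOL.getconn()` / `try: ... finally: POOL.putconn(conn)`.

```python
class FakePool:
    """Stand-in for psycopg2's SimpleConnectionPool (demo only)."""

    def __init__(self):
        self.checked_out = 0

    def getconn(self):
        self.checked_out += 1
        return object()

    def putconn(self, conn):
        self.checked_out -= 1


def handle_event(pool, work):
    conn = pool.getconn()
    try:
        work(conn)             # may be slow or raise -- doesn't matter
    finally:
        pool.putconn(conn)     # the connection always goes back


def failing_work(conn):
    raise RuntimeError("external API died")


pool = FakePool()
try:
    handle_event(pool, failing_work)
except RuntimeError:
    pass
print(pool.checked_out)        # -> 0: nothing leaked
```

Whether this is the actual source of the exhaustion also depends on slow external calls holding connections, but the `finally` at least rules out leaks on unexpected exceptions.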
<python><database><postgresql><psycopg2>
2023-02-09 18:15:20
0
1,126
Mike Lennon
75,402,713
9,878,135
How to access a FastAPI Depends value from a Pydantic validator?
<p>Let's say I have a route that allows clients to create a new user</p> <p>(pseudocode)</p> <pre class="lang-py prettyprint-override"><code>@app.route(&quot;POST&quot;) def create_user(user: UserScheme, db: Session = Depends(get_db)) -&gt; User: ... </code></pre> <p>and my <code>UserScheme</code> accepts a field such as an <code>email</code>. I would like to be able to set some settings (for example <code>max_length</code>) globally in a different model <code>Settings</code>. How do I access that inside a scheme? I'd like to access the <code>db</code> inside my scheme.</p> <p>So basically my scheme should look something like this (the given code does not work):</p> <pre class="lang-py prettyprint-override"><code>class UserScheme(BaseModel): email: str @validator(&quot;email&quot;) def validate_email(cls, value: str) -&gt; str: settings = get_settings(db) # `db` should be set somehow if len(value) &gt; settings.email_max_length: raise ValueError(&quot;Your mail might not be that long&quot;) return value </code></pre> <p>I couldn't find a way to somehow pass <code>db</code> to the scheme. I was thinking about validating such fields (that depend on <code>db</code>) inside my route. While this approach works somehow, the error message itself is not raised on the specific field but rather on the entire form, but it should report the error for the correct field so that frontends can display it correctly.</p>
<python><fastapi><pydantic><starlette>
2023-02-09 18:06:18
1
1,328
Myzel394
75,402,602
7,706,354
python executing powershell script but no effect
<pre><code> import subprocess command = r&quot;[System.Windows.Forms.Clipboard]::SetFileDropList('E:\xampp\htdocs\tadiya\webpack.mix.js')&quot; result = subprocess.Popen( [&quot;powershell.exe&quot;, command], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = result.communicate() print(stdout) print(stderr) </code></pre> <h2>output</h2> <pre><code> b'' b&quot;Unable to find type [System.Windows.Forms.Clipboard].\r\nAt line:1 char:1\r\n+ [System.Windows.Forms.Clipboard]::SetFileDropList('E:\\xampp\\htdocs\\ta ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n + CategoryInfo : InvalidOperation: (System.Windows.Forms.Clipboard:TypeName) [], RuntimeException\r\n + FullyQualifiedErrorId : TypeNotFound\r\n \r\n&quot; </code></pre> <p>The script runs correctly in PowerShell, but when run through Python I can't see any error and it doesn't work. What is wrong with it?</p> <h5>Expected Result:</h5> <p>The script should just copy the file to the clipboard.</p> <p>NB: I can execute basic PowerShell commands like dir, get-process, etc. with this Python code.</p>
<python>
2023-02-09 17:57:08
0
7,616
lava
75,402,584
1,815,710
String interpolation with Python AND Javascript from server to client
<p>Is there a way to pass a string to be interpolated to Javascript coming from Python?</p> <p>For example, to string interpolate a Python string, I use <code>f-strings</code> like so</p> <pre><code>&gt;&gt;&gt; friend = &quot;Bob&quot; &gt;&gt;&gt; f&quot;Hey there, {friend}&quot; Hey there, Bob </code></pre> <p>However, I want to send a string that the client (Javascript code) can also string interpolate.</p> <pre><code>friend1 = &quot;Bob&quot; friend2 = &quot;Jill&quot; f&quot;Hi here {friend1}! When are you going to the {LOCATION}? {friend2} is going at 8AM.&quot; </code></pre> <p>In the example above, I only want to fill in the values for <code>friend1</code> and <code>friend2</code> but I want <code>LOCATION</code> to be filled in by the client.</p>
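One detail that may help here (standard f-string behaviour, not specific to any framework): doubled braces escape to literal braces, so Python can substitute its own placeholders now while leaving `{LOCATION}` intact for the client to fill in later. A sketch:

```python
friend1 = "Bob"
friend2 = "Jill"

# {{...}} escapes to a literal {...}, so {LOCATION} survives for the client.
message = f"Hi here {friend1}! When are you going to the {{LOCATION}}? {friend2} is going at 8AM."

print(message)
# -> Hi here Bob! When are you going to the {LOCATION}? Jill is going at 8AM.
```

The Javascript side could then do something like `message.replace('{LOCATION}', 'park')` (placeholder name and value here are just illustrative).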
<javascript><python><string><ecmascript-6>
2023-02-09 17:54:37
1
16,539
Liondancer
75,402,537
1,765,579
How to specify the path of Python Script located locally OR within Karate project in karate.exec() command?
<p>I want to execute a python script which is located either:</p> <ol> <li>On my Local System or,</li> <li>Within the Karate project itself</li> </ol> <p>My question is how to specify the paths of the above script locations in the karate.exec() command. I tried giving the path, but while executing it says working directory: null and python: can't open file + No such file or directory.</p>
<python><karate>
2023-02-09 17:49:32
1
1,417
Thonas1601
75,402,454
15,176,150
Can Anaconda tell you Conda-Forge url of an installed package?
<p>I'm trying to build a dependency tree for a series of python package installs, and I'd like to include the conda-forge url for each package.</p> <p>Is there a way to find the url of an installed package using the command line interface?</p>
<python><installation><package><anaconda><conda>
2023-02-09 17:40:52
1
1,146
Connor
75,402,315
7,658,051
Ansible: How to access a stored csv, order its rows by certain columns and then save it
<p>In my Ansible playbook there is a task which saves data into a csv at path /tmp/.</p> <p>The generated csv has 4 columns: 'idm', 'idc', 'day', 'time'.</p> <p>data example:</p> <pre><code>idm idc day time 34 3 2023-02-09 12:57:34 56 6 2023-02-10 20:25:12 78 2 2023-02-11 04:01:00 </code></pre> <p>Now, I would like to build a task to read the csv and edit it, so that I can order its columns like</p> <pre><code>ORDER BY day, time, idm, idc </code></pre> <p>and then overwrite the file.</p> <p>How can I do it?</p> <p><strong>Note:</strong> If possible, I would like to do that by using python, so calling python from inside the playbook.</p> <p>Here below is the code which generates the csv.</p> <pre><code>- name: Dump data into csv copy: dest: /tmp/data_{{ '%Y-%m-%d' | strftime }}.csv content: | {{ ['idm', 'idc', 'day', 'time'] | map('trim') | join(';') }} {% for host in hosts_list %} {% set idm = host.inventory_hostname.split('_')[0].split('-')[1] %} {% set idm_padded = '%03d' % idm|int %} {% set idm_padded = '&quot;' + idm_padded + '&quot;' %} {% set idc = host.inventory_hostname.split('_')[1].upper() %} {% if host.grep_output.stdout_lines %} {% for line in host.grep_output.stdout_lines %} {% set day = line.strip().split(' ')[0] %} {% set time = line.strip().split(' ')[1] %} {{ [idm_padded, idc, day, time] | map('trim') | join(';') }} {% endfor %} {% endif %} {% endfor %} vars: hosts_list: &quot;{{ ansible_play_hosts | map('extract', hostvars) | list }}&quot; delegate_to: localhost register: csv_content run_once: yes </code></pre> <h2>question update</h2> <p>I found out by reading <a href="https://stackoverflow.com/a/43992216/7658051">this thread</a> that csv can be ordered py using pure python like this</p> <pre><code>with open('unsorted.csv',newline='') as csvfile: spamreader = csv.DictReader(csvfile, delimiter=&quot;;&quot;) sortedlist = sorted(spamreader, key=lambda row:(row['column_1'],row['column_2']), reverse=False) with open('sorted.csv', 'w') as f: 
fieldnames = ['column_1', 'column_2', 'column_3'] writer = csv.DictWriter(f, fieldnames=fieldnames) writer.writeheader() for row in sortedlist: writer.writerow(row) </code></pre> <p>My question is just: how do I write an ansible task that runs these operations on the saved file?</p>
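For reference, here is a self-contained, runnable version of that sorting snippet, using the four column names from the dump above and an in-memory file (the semicolon delimiter matches the playbook's `join(';')`). An Ansible task could then invoke such a script with, for example, the `script` or `command` module (module choice left open):

```python
import csv
import io

# In-memory stand-in for /tmp/data_YYYY-MM-DD.csv
raw = """idm;idc;day;time
056;6;2023-02-10;20:25:12
034;3;2023-02-09;12:57:34
078;2;2023-02-11;04:01:00
"""

reader = csv.DictReader(io.StringIO(raw), delimiter=";")
# ORDER BY day, time, idm, idc
ordered = sorted(reader, key=lambda r: (r["day"], r["time"], r["idm"], r["idc"]))

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["idm", "idc", "day", "time"], delimiter=";")
writer.writeheader()
writer.writerows(ordered)

print(out.getvalue())
```

For the real file, replace the `io.StringIO` objects with `open(path)` / `open(path, "w", newline="")` on the same path, writing to a temporary file first and renaming it over the original.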
<python><csv><ansible>
2023-02-09 17:29:23
2
4,389
Tms91
75,402,084
13,696,853
PyTorch Not Recognizing CUDA (Error 803: system has unsupported display driver / cuda driver combination)
<p>I'm running PyTorch on Linux.</p> <p>The OS is Ubuntu 22.04.1. The GPU is NVIDIA RTX A4000. CUDA Version 11.5.</p> <p>Additionally, I am running the CUDA 11.7 version of PyTorch.</p> <p>When I run Python3 on the terminal, PyTorch is unable to detect CUDA. I have tried rebooting but the issue persists.</p> <pre><code>Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import torch &gt;&gt;&gt; torch.cuda.is_available() [PATH]/.local/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.) return torch._C._cuda_getDeviceCount() &gt; 0 False </code></pre>
<python><pytorch><cuda><nvidia>
2023-02-09 17:08:50
1
321
Raiyan Chowdhury
75,401,921
11,167,163
with pandas read_sql, how do I run procedure with parameters?
<p>I have a stored procedure which as been created like this :</p> <pre><code>create or replace PROCEDURE MyTestProcedure(p_date IN DATE) AS OUTPUT SYS_REFCURSOR; BEGIN END </code></pre> <p>I would like to execute it doing :</p> <pre><code>pd.read_sql(&quot;&quot;&quot;EXECUTE MyTestProcedure @p_date=(TO_DATE('2022-02-01','YYYY-MM-DD'))&quot;&quot;&quot; </code></pre> <p>but this does throw an error :</p> <pre><code>DatabaseError: Execution failed on sql 'EXECUTE MyTestProcedure @p_date=(TO_DATE('2022-02-01','YYYY-MM-DD'))': ORA-00900: invalid SQL statement </code></pre> <p>what am I missing there ?</p>
<python><pandas><plsql>
2023-02-09 16:54:18
0
4,464
TourEiffel
75,401,900
9,840,684
Labelling legend values for Axes3D chart
<p>I have a data frame with the following values and would like to create a 3D plot showing Recency, Frequency, and Monetary values labelled by the categories/loyalty levels (bronze, silver, gold, platinum) assigned to them. The relevant data looks as follows:</p> <p><code>RFMScores.head()</code></p> <p><a href="https://i.sstatic.net/seLdP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/seLdP.png" alt="enter image description here" /></a></p> <p>the <code>RFM_Catagory_Level</code> are numeric scores associated with the loyalty level</p> <p>In attempting to make the chart, I used</p> <pre><code>figrfm2 = plt.figure() ax = Axes3D(figrfm2) xs = RFMScores.Recency ys = RFMScores.Frequency zs = RFMScores.Monetary scores = RFMScores.RFM_Catagory_Level scatter = ax.scatter(xs, ys, zs,c=scores,cmap='tab20b') ax.set_title(&quot;3D plot&quot;) ax.set_xlabel('Recency') ax.set_ylabel('Frequency') ax.set_zlabel('Monetary') ax.legend(*scatter.legend_elements()) plt.show() </code></pre> <p>But the legend has the numeric values instead of the actual labels.</p> <p><a href="https://i.sstatic.net/IqMxg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IqMxg.png" alt="enter image description here" /></a></p> <p>When I attempt to use the <code>RFM_Loyalty_Level</code> such as this:</p> <pre><code>figrfm2 = plt.figure() ax = Axes3D(figrfm2) xs = RFMScores.Recency ys = RFMScores.Frequency zs = RFMScores.Monetary scatter = ax.scatter(xs, ys, zs,c=RFMScores.RFM_Loyalty_Level,cmap='tab20b') ax.set_title(&quot;3D plot&quot;) ax.set_xlabel('Recency') ax.set_ylabel('Frequency') ax.set_zlabel('Monetary') ax.legend(*scatter.legend_elements()) plt.show() </code></pre> <p>The chart is blank with no data. How do I fix this so that I have a chart, but with the label categories &quot;<strong>platinum, gold, silver, bronze</strong>&quot; instead of the numeric values that are in the legend?</p>
<python><matplotlib>
2023-02-09 16:52:21
1
373
JLuu
75,401,881
8,869,570
How to use multiple variable annotations for an abstract class instance variable?
<p>I need to declare an abstract instance variable in an abstract base class. After looking at <a href="https://stackoverflow.com/questions/51055750/python-abstract-instance-variable">Python: abstract instance variable?</a>, it seems the one way to do that is with type annotations, e.g.,,</p> <pre><code>class base(ABC): var : str </code></pre> <p>If <code>var</code> here can take on multiple types, e.g., <code>str</code> or <code>int</code>, how would that work?</p> <p>I tried</p> <pre><code>class base(ABC): var : str | int </code></pre> <p>but that gives the error</p> <pre><code> val : int | str TypeError: unsupported operand type(s) for |: 'type' and 'type' </code></pre> <p>Also, I'm not tied to annotations, so if there's a better way to accomplish what I need, I'm definitely open to switching.</p>
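For what it's worth, the `X | Y` annotation syntax needs Python 3.10+ at runtime (the `TypeError` above is the pre-3.10 behaviour); on older interpreters `typing.Union` — or `from __future__ import annotations`, which defers annotation evaluation — expresses the same thing. A sketch:

```python
from abc import ABC
from typing import Union


class Base(ABC):
    # Equivalent to ``str | int`` but valid on Python < 3.10 as well.
    var: Union[str, int]


class Impl(Base):
    def __init__(self) -> None:
        self.var = 42


print(Base.__annotations__["var"])
```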
<python><python-typing>
2023-02-09 16:50:45
0
2,328
24n8
75,401,855
10,296,584
Pandas add a total column after every year column in a multilevel dataframe
<p>I have created the below pivotable in pandas</p> <pre><code>&gt;&gt;&gt; out Year 2021 2022 2023 Month Feb Mar Sep Oct Dec Jan Jun Aug Oct Jun Sep Nov Dec ID 1 0 8 1.5 6.5 6 8 8 2 7.0 9 9 3 0 2 4 4 0.0 0.0 0 0 0 2 8.5 0 0 0 3 </code></pre> <p>Code:</p> <pre><code>months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] months = pd.CategoricalDtype(months, ordered=True) rng = np.random.default_rng(2023) df = pd.DataFrame({'ID': rng.integers(1, 3, 20), 'Year': rng.integers(2021, 2024, 20), 'Month': rng.choice(months.categories, 20), 'Value': rng.integers(1, 10, 20)}) out = (df.astype({'Month': months}) .pivot_table(index='ID', columns=['Year', 'Month'], values='Value', aggfunc='mean', fill_value=0)) </code></pre> <p>Now I would like add a total column after each year:</p> <pre><code>Year 2021 Total 2022 Total 2023 Total Month Feb Mar Sep Oct Dec Jan Jun Aug Oct Jun Sep Nov Dec ID 1 0 8 1.5 9.5 6.5 6 8 8 2 31.5 7.0 9 9 3 0 28 2 4 4 0.0 8 0.0 0 0 0 2 2 8.5 0 0 0 3 11.5 </code></pre> <p>How could I get this? Thanks!</p>
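One possible shape for this (a sketch on a small hand-made frame with the same `(Year, Month)` column MultiIndex, not the exact data above): sum across the Month level per year, label those sums "Total", and reorder the columns so each year's Total comes last.

```python
import pandas as pd

out = pd.DataFrame(
    [[0, 8, 6, 8],
     [4, 4, 0, 2]],
    index=pd.Index([1, 2], name="ID"),
    columns=pd.MultiIndex.from_tuples(
        [(2021, "Feb"), (2021, "Mar"), (2022, "Jan"), (2022, "Jun")],
        names=["Year", "Month"]))

# One total per year: transpose, group the (now row) Year level, transpose back.
totals = out.T.groupby(level="Year").sum().T
totals.columns = pd.MultiIndex.from_product(
    [totals.columns, ["Total"]], names=["Year", "Month"])

# Glue on the totals, then order columns so each Total follows its year.
combined = pd.concat([out, totals], axis=1)
order = []
for year in out.columns.get_level_values("Year").unique():
    order += [(year, m) for m in out[year].columns] + [(year, "Total")]
result = combined.loc[:, order]
```

Note the pivot above uses `aggfunc='mean'`, so whether the "Total" should be a sum of the displayed means or of the underlying values is a design choice; the sketch sums the displayed columns.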
<python><pandas><dataframe><sum><pivot-table>
2023-02-09 16:48:38
2
597
Atharva Katre
75,401,834
10,620,003
Create a dataframe with 1 rows and n*m column with an existing dataframe with n rows and m columns
<p>I have a dataframe and I want to stick the values and create one row with 6 columns. Here is my dataframe.</p> <pre><code> time val1 val2 0 2020-01-01 1 4 1 2020-01-02 2 5 2 2020-01-03 3 6 </code></pre> <p>I want to create the following dataframe:</p> <pre><code> val1 val2 val3 val4 val5 val6 0 1 2 3 4 5 6 </code></pre> <p>Here is the code;</p> <pre><code>import pandas as pd df = pd.DataFrame() df['time'] = ['2020-01-01', '2020-01-02', '2020-01-03'] df['val1'] = [1,2,3] df['val2'] = [4,5,6] df </code></pre> <p>I tried to use the pivot, but I got this error; The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p> <p>Can you help me with that?</p>
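One way to sketch this (assuming the goal is all of `val1`'s values first, then all of `val2`'s, as in the desired output): flatten the value columns column-by-column and rebuild a one-row frame with generated column names.

```python
import pandas as pd

df = pd.DataFrame({
    "time": ["2020-01-01", "2020-01-02", "2020-01-03"],
    "val1": [1, 2, 3],
    "val2": [4, 5, 6],
})

# .T.ravel() walks the array column-major: val1's values, then val2's.
flat = df[["val1", "val2"]].to_numpy().T.ravel()
wide = pd.DataFrame([flat], columns=[f"val{i}" for i in range(1, len(flat) + 1)])

print(wide.iloc[0].tolist())  # -> [1, 2, 3, 4, 5, 6]
```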
<python><dataframe>
2023-02-09 16:46:46
2
730
Sadcow
75,401,761
12,276,162
Change Verbosity of Keras train_on_batch()?
<p>I am training a GAN using Keras's <code>train_on_batch()</code> command. This is very similar to Keras's <code>fit()</code>. However, in the documentation for <code>fit()</code>, there is a parameter for <code>verbose</code>, which changes how often a progress bar is printed to the console.</p> <p>My model has many batches, and so it is printing tons of progress bars to the command line. Unfortunately, <code>train_on_batch()</code> does not have a <code>verbose</code> parameter. Is there a workaround for this? Is there a Keras global variable/environment variable that I can set? I don't want to disable my program from printing to the console, I just want to change the verbosity of specifically <code>train_on_batch()</code>.</p> <p>For clarify, I am using Keras directly from the Keras package, I am not using tf.keras.</p>
<python><tensorflow><machine-learning><keras><artificial-intelligence>
2023-02-09 16:41:49
1
499
lowlyprogrammer
75,401,738
6,382,526
DatetimeRangeSlide updating network graph with bokeh
<p>I would like to update a network graph using a datetime range slider with Bokeh. So appearing/disappearing nodes depending on the datetime range, and also width of edges is proportional to the number of connections between sources and target within datetime range.</p> <p>So far here is my code:</p> <pre><code>nb_conn = df.groupby(['src','dst'])['src'].count() nb_conn = nb_conn.rename(&quot;nb_conn&quot;) nb_conn_tot = nb_conn.sum() ratio_nb_conn = (nb_conn/nb_conn_tot)*100 netflow_feat = ( df.merge(ratio_nb_conn, on=[&quot;src&quot;, 'dst']) ) G = nx.from_pandas_edgelist(netflow_feat, source='src', target='dst' ,edge_attr='nb_conn') degrees = dict(nx.degree(G)) nx.set_node_attributes(G, name='degree', values=degrees) number_to_adjust_by = 5 adjusted_node_size = dict([(node, degree+number_to_adjust_by) for node, degree in nx.degree(G)]) nx.set_node_attributes(G, name='adjusted_node_size', values=adjusted_node_size) number_to_adjust_by = 5 adjusted_node_size = dict([(node, degree+number_to_adjust_by) for node, degree in nx.degree(G)]) nx.set_node_attributes(G, name='adjusted_node_size', values=adjusted_node_size) #Choose attributes from G network to size and color by — setting manual size (e.g. 10) or color (e.g. 'skyblue') also allowed size_by_this_attribute = 'adjusted_node_size' color_by_this_attribute = 'adjusted_node_size' #Pick a color palette — Blues8, Reds8, Purples8, Oranges8, Viridis8 color_palette = Blues8 #Choose a title! 
title = 'Cibles Network' #Establish which categories will appear when hovering over each node HOVER_TOOLTIPS = [ (&quot;IP&quot;, &quot;@index&quot;), ] #Create a plot — set dimensions, toolbar, and title plot = figure(tooltips = HOVER_TOOLTIPS, tools=&quot;pan,wheel_zoom,save,reset&quot;, active_scroll='wheel_zoom', x_range=Range1d(-20.1, 20.1), y_range=Range1d(-20.1, 20.1), title=title) #Create a network graph object # https://networkx.github.io/documentation/networkx-1.9/reference/generated/networkx.drawing.layout.spring_layout.html\ network_graph = from_networkx(G, nx.spring_layout, scale=20, center=(0, 0)) #Set node sizes and colors according to node degree (color as spectrum of color palette) minimum_value_color = min(network_graph.node_renderer.data_source.data[color_by_this_attribute]) maximum_value_color = max(network_graph.node_renderer.data_source.data[color_by_this_attribute]) network_graph.node_renderer.glyph = Circle(fill_color=linear_cmap(color_by_this_attribute, color_palette, minimum_value_color, maximum_value_color)) #Set edge opacity and width network_graph.edge_renderer.data_source.data[&quot;line_width&quot;] = [G.get_edge_data(a,b)['nb_conn'] for a, b in G.edges()] network_graph.edge_renderer.glyph = MultiLine(line_alpha=0.5) network_graph.edge_renderer.glyph.line_width = {'field': 'line_width'} plot.renderers.append(network_graph) backup_edge_data = copy.deepcopy(network_graph.edge_renderer.data_source.data) code = &quot;&quot;&quot; # print out array of date from, date to console.log(cb_obj.value); # dates returned from slider are not at round intervals and include time; const date_from = Date.parse(new Date(cb_obj.value[0]).toDateString()); const date_to = Date.parse(new Date(cb_obj.value[1]).toDateString()); const old_Weight = df[&quot;nb_conn&quot;]; const old_start = df.loc[start]; const old_end = df.loc[end]; const df_filtered = df[(df['timestamp'] &gt;= date_from) &amp; (df['timestamp'] &lt;= date_to)] What should I do here??? 
graph_setup.edge_renderer.data_source.data = new_data_edge; graph_setup.edge_renderer.data_source.change.emit(); &quot;&quot;&quot; callback = CustomJS(args = dict(graph_setup = network_graph, df=netflow_feat, start = netflow_feat['timestamp'].min, end = netflow_feat['timestamp'].max), code = code) datetime_range_slider = DatetimeRangeSlider(value=(datetime(2023, 1, 5, 12), datetime(2022, 1, 6, 18)), start=datetime(2023, 1, 5), end=datetime(2023, 1, 7)) datetime_range_slider.js_on_change(&quot;value&quot;, callback) layout = Column(plot, datetime_range_slider) show(layout) </code></pre> <p>In the callback function, is it supposed to be only javascript? I guess my callback function is not correct but I don't know how to do what I'd like to do, or is it even possible?</p>
<python><slider><networkx><bokeh><bokehjs>
2023-02-09 16:39:20
0
897
Laure D
75,401,587
6,068,731
Cannot import from module inside the same package, from a subpackage
<p>I have the following structure</p> <pre><code>folder/ ├─ subfolder/ │ ├─ __init__.py │ ├─ script.py ├─ __init__.py ├─ module.py </code></pre> <p>Inside <code>script.py</code> I want to import the function <code>my_function</code> from <code>module.py</code>. I have tried various variants</p> <pre class="lang-py prettyprint-override"><code>from ..module import my_function from ...folder.module import my_function from .. import module # and then use module.my_function from ... import folder.module # and then use folder.module.my_function </code></pre> <p>However, whenever I am in the terminal inside <code>folder</code> and run <code>python3 subfolder/script.py</code> I get the error</p> <pre class="lang-py prettyprint-override"><code>ImportError: attempted relative import with no known parent package </code></pre>
<python>
2023-02-09 16:29:18
0
728
Physics_Student
75,401,420
8,869,570
"class attribute" terminology in python
<p>This is a pedantic question, but I feel I've been reading some conflicting uses of the term &quot;class attribute&quot; in Python.</p> <p>My understanding and usage of &quot;attribute&quot; of a class has always been to mean some variable associated with the class, whether it is a class or instance variable.</p> <p>I've seen some others use &quot;attribute&quot; to mean class/instance variables of a class, as well as methods of that class.</p> <p>So I wanted to ask if there is a formal definition of what &quot;attribute&quot; means as it pertains to classes in Python?</p>
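For what it's worth, the runtime behaviour supports the broader usage: class variables and methods both live in the class namespace and are reached through the same attribute-access machinery, and instance variables are attributes of the instance. A quick check (the example class is made up):

```python
class Dog:
    species = "canine"        # class variable -- an attribute of the class

    def __init__(self):
        self.name = "Rex"     # instance variable -- an attribute of the instance

    def bark(self):           # method -- also an attribute of the class
        return "woof"


# Class variables and methods sit side by side in the class namespace...
print(sorted(k for k in vars(Dog) if not k.startswith("__")))  # -> ['bark', 'species']

# ...and all of them are reached via the same attribute protocol (getattr).
d = Dog()
print(getattr(d, "name"), getattr(d, "species"), getattr(d, "bark")())  # -> Rex canine woof
```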
<python><attributes><terminology>
2023-02-09 16:16:31
1
2,328
24n8
75,401,348
4,935,567
Selenium Chrome driver headless mode not working
<p>My code worked perfectly until yesterday when I updated <em>Google Chrome</em> to version <strong>110.0.5481.77</strong>. Now it's not working in headless mode:</p> <pre class="lang-py prettyprint-override"><code>options.add_argument(&quot;--headless&quot;) </code></pre> <p>I even tried adding <code>options.add_argument(&quot;--window-size=1280,700&quot;)</code> but still not working. Although if I remove the headless option it again works correctly!</p>
<python><selenium><selenium-webdriver><selenium-chromedriver><google-chrome-headless>
2023-02-09 16:10:40
1
2,618
Masked Man
75,401,318
10,623,489
Django append to FileField
<p>Is there any way to append directly to a FileField in django? Doing something like this:</p> <pre class="lang-py prettyprint-override"><code>class ChunkedUpload(models.Model): id = models.CharField(max_length=128) data = models.FileField(upload_to='chunked_uploads/') def append_chunk(self, chunk, create): if create: self.data.save(self.id, ContentFile(chunk.encode())) else: self.data.append(ContentFile(chunk.encode())) </code></pre> <p>I'm working on an existing solution that sent data by chunks (base64 encoded strings) in a TextField. But some data are now too big to be handled in a TextField (250++ Mb). I can't change the other parts of the application, so I'm trying to find a way to handle this situation.</p>
<python><django><append><filefield>
2023-02-09 16:08:46
1
367
Borhink
75,401,309
10,296,584
Pandas sort multilevel columns with year and month
<p>I have created a pivotable in pandas with multilevel columns, but the order of columns are not sorted -</p> <pre><code>Year 2022 2021 2023 Month Jan Feb Mar Jan Dec Jun </code></pre> <p>What I want:</p> <pre><code>Year 2021 2022 2023 Month Jan Mar Jan Feb Jun Dec </code></pre> <p>How can I get the above order?</p>
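One approach that may work here (pandas >= 1.1, where `sort_index` accepts a `key` callable that is applied per level of a MultiIndex): map the Month level to its calendar position and leave the Year level's natural numeric order alone. Sketched on a tiny made-up frame:

```python
import pandas as pd

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

df = pd.DataFrame(
    [[1, 2, 3, 4, 5, 6]],
    columns=pd.MultiIndex.from_tuples(
        [(2022, "Jan"), (2022, "Feb"), (2022, "Mar"),
         (2021, "Jan"), (2021, "Dec"), (2023, "Jun")],
        names=["Year", "Month"]))

# ``key`` is applied per level: map months to their calendar position,
# leave the Year level untouched.
out = df.sort_index(
    axis=1,
    key=lambda idx: idx.map(months.index) if idx.name == "Month" else idx)
```

Using a `CategoricalDtype` with ordered month categories (as in the pivot question just above) is an alternative that bakes the order into the data itself.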
<python><pandas><dataframe><sorting><pivot-table>
2023-02-09 16:08:08
1
597
Atharva Katre
75,401,247
6,077,239
How can I use polars.when based on whether a column name is None?
<p>I have a python function which takes a polars dataframe, a column name and a default value. The function will return a polars series (with length the same as the number of rows of the dataframe) based on the column name and default value.</p> <ul> <li>When the column name is None, just return a series of default values.</li> <li>When the column name is not None, return that column from dataframe as a series.</li> </ul> <p>And, I want to achieve this with just oneline polars expression.</p> <p>Below is an example for better illustration.</p> <p>The function I want has the following signature.</p> <pre><code>import polars as pl def f(df, colname=None, value=0): pass </code></pre> <p>And below are the behaviors I want to have.</p> <pre><code>&gt;&gt;&gt; df = pl.DataFrame({&quot;a&quot;: [1, 2, 3], &quot;b&quot;: [2, 3, 4]}) &gt;&gt;&gt; f(df) shape: (3,) Series: '' [i64] [ 0 0 0 ] &gt;&gt;&gt; f(df, &quot;a&quot;) shape: (3,) Series: '' [i64] [ 1 2 3 ] </code></pre> <p>This is what I tried, basically use polars.when.</p> <pre><code>def f(df, colname=None, value=0): return df.select(pl.when(colname is None).then(pl.lit(value)).otherwise(pl.col(colname))).to_series() </code></pre> <p>But the code errors out when colname is None, with the error message: <strong>TypeError: argument 'name': 'NoneType' object cannot be converted to 'PyString'</strong>.</p> <p>Another problem is that the code below runs successfully, but it returns a dataframe with shape (1, 1),</p> <pre><code>&gt;&gt;&gt; colname = None &gt;&gt;&gt; value = 0 &gt;&gt;&gt; df.select(pl.when(colname is None).then(pl.lit(value)).otherwise(100)) shape: (1, 1) ┌─────────┐ │ literal │ │ --- │ │ i32 │ ╞═════════╡ │ 0 │ └─────────┘ </code></pre> <p>the result I want is a dataframe with shape (3, 1), e.g.,</p> <pre><code>shape: (3, 1) ┌─────────┐ │ literal │ │ --- │ │ i32 │ ╞═════════╡ │ 0 │ │ 0 │ │ 0 │ └─────────┘ </code></pre> <p>What am I supposed to do?</p>
<python><dataframe><python-polars>
2023-02-09 16:03:15
1
1,153
lebesgue
75,401,217
1,442,881
Python UDP socketserver returning empty message
<p>I have a UDP socketserver program that I use to demonstrate how UDP works (code for the server and client are below). I run this on a server, then have the <code>client.py</code> program send a message and receive a reply. I am unfortunately running into an issue that seems to only occur on campus Wifi. On campus wifi, the client does not receive a response.</p> <p>Troubleshooting with Wireshark shows the issue. For some reason the UDP server is responding with two UDP messages - one empty, and one containing the response message. These messages are recorded in Wireshark as coming in approximately 0.000002 seconds apart. On a wired network, the one with the response consistently comes first, and on Wifi, the empty message consistently comes first. Since the client is waiting for a single messages response, when the empty message returns, the client prints and exits, and the actual response is never seen.</p> <p>I know I could write the client to listen for both messages and print out whichever one has the data, but I would rather try to figure out what's going on. Why is the socketserver responding with two messages in the first place, and how can I get it to only send one? 
OR at least to send the data first.</p> <p><code>server.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import socketserver class MyUDPRequestHandler(socketserver.DatagramRequestHandler): def handle(self): data = self.request[0].strip() socket = self.request[1] # just send back the same data, but lower-cased socket.sendto(data.lower(), self.client_address) if __name__ == &quot;__main__&quot;: with socketserver.UDPServer((&quot;0.0.0.0&quot;, 9091), MyUDPRequestHandler) as server: server.serve_forever() </code></pre> <p><code>client.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import socket HOST, PORT = &quot;localhost&quot;, 9091 message = &quot;NOW I AM SHOUTING&quot; # The UDP server will lowercase the message sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.sendto(bytes(message + &quot;\n&quot;, &quot;utf-8&quot;), (HOST, PORT)) received = str(sock.recv(1024), &quot;utf-8&quot;) print(&quot;Sent: {}&quot;.format(message)) print(&quot;Received: {}&quot;.format(received)) </code></pre>
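One explanation consistent with the symptoms: `DatagramRequestHandler` exposes buffered `rfile`/`wfile` objects and, when the handler finishes, sends whatever is in `wfile` back as a datagram of its own. Since `handle()` above replies via `socket.sendto` directly and never writes to `wfile`, that buffer is empty, which would produce exactly the stray empty packet. A sketch of the same server on `BaseRequestHandler`, which has no such buffer, so only the explicit reply goes out:

```python
import socketserver


class MyUDPRequestHandler(socketserver.BaseRequestHandler):
    """UDP handler that sends exactly one reply datagram.

    With BaseRequestHandler there is no wfile buffer for the framework
    to flush after handle(), so no second (empty) packet is emitted.
    """

    def handle(self):
        data, sock = self.request          # (raw bytes, server socket)
        sock.sendto(data.strip().lower(), self.client_address)


if __name__ == "__main__":
    with socketserver.UDPServer(("0.0.0.0", 9091), MyUDPRequestHandler) as server:
        server.serve_forever()
```

Capturing the exchange in Wireshark again after this change would confirm whether the empty datagram disappears.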
<python><sockets><udp><socketserver>
2023-02-09 16:01:17
1
918
Ryan
75,401,114
10,115,878
Reading text from image, facing problem because of the font
<p><a href="https://i.sstatic.net/ietDJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ietDJ.png" alt="enter image description here" /></a></p> <p>I am trying to read this image and do the arithmetic operation in the image. For some reason i am not able to read 7 because of the font it has. I am relatively new to image processing. Can you please help me with solution. I tried pixeliating the image, but that did not help.</p> <pre><code>import cv2 import pytesseract from PIL import Image img = cv2.imread('modules/visual_basic_math/temp2.png', cv2.IMREAD_GRAYSCALE) thresh = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1] print(pytesseract.image_to_string(img, config='--psm 6')) </code></pre> <p>Response i am getting is -</p> <pre><code>+44 849559 +46653% 14 +7776197 +6415995 +*9156346 x4463310 +54Q%433 +1664 20% </code></pre>
<python><ocr><tesseract><python-tesseract>
2023-02-09 15:51:21
1
560
Anish Jain
75,400,855
3,713,835
How to concat 2 or more sqlalchemy rowproxy
<p>I have 2 tables with same structure but in different db instance &amp; schema. So I run 2 queries:</p> <pre><code>rows1 = conn1.execute(query1) rows2 = conn2.execute(query2) </code></pre> <p>is there any magic to concat/combine these 2 result?(rows1 &amp; row2)</p> <p>is it possible like list: <code>rows = rows1 + rows2</code> ?</p>
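Since each `execute()` result is an iterable of row objects, materialising and concatenating them is one straightforward option (sketched below with plain tuples standing in for the row proxies; note that a real result can typically only be iterated once, so materialise each before combining):

```python
from itertools import chain

# Stand-ins for rows1 = conn1.execute(query1) and rows2 = conn2.execute(query2).
rows1 = [(1, "alice"), (2, "bob")]
rows2 = [(3, "carol")]

rows = list(chain(rows1, rows2))   # or: list(rows1) + list(rows2)
print(rows)                        # -> [(1, 'alice'), (2, 'bob'), (3, 'carol')]
```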
<python><sqlalchemy>
2023-02-09 15:31:46
1
424
Niuya
75,400,685
7,655,151
Python Protocols: cannot understand 'class objects vs protocols"
<p>I'm reading PEP 544, specifically the <a href="https://peps.python.org/pep-0544/#type-and-class-objects-vs-protocols" rel="nofollow noreferrer">section for class objects vs protocols</a>, and I cannot understand the last example given there, I'll copy paste it here:</p> <blockquote> <p>A class object is considered an implementation of a protocol if accessing all members on it results in types compatible with the protocol members. For example:</p> </blockquote> <pre class="lang-py prettyprint-override"><code>from typing import Any, Protocol class ProtoA(Protocol): def meth(self, x: int) -&gt; int: ... class ProtoB(Protocol): def meth(self, obj: Any, x: int) -&gt; int: ... class C: def meth(self, x: int) -&gt; int: ... a: ProtoA = C # Type check error, signatures don't match! b: ProtoB = C # OK </code></pre> <p>I can get the rest of the PEP, but this example seems counterintuitive to me. The way I would think it is that the class <code>C</code> implements the method <code>meth</code> with the same signature as <code>ProtoA</code>, so why the heck is an error in line <code>a: ProtoA = C</code>?</p> <p>And why <code>b: ProtoB = C</code> is correct? The signature of <code>C.meth</code> is different than the signature of <code>ProtoB.meth</code> (the latter includes an extra argument <code>obj: Any</code>.</p> <p>Can someone explain this by expanding the concept so I can understand it?</p>
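The runtime behaviour mirrors what the type checker is doing here: accessed on the class object `C` (rather than on an instance), `meth` is the plain function, so its call shape still includes `self` as a leading argument — which lines up with `ProtoB.meth`'s `(obj, x)` and not with `ProtoA.meth`'s `(x)`. A quick check:

```python
import inspect


class C:
    def meth(self, x: int) -> int:
        return x


# On an instance, ``self`` is already bound: the callable takes just (x).
print(list(inspect.signature(C().meth).parameters))   # -> ['x']

# On the class object, nothing is bound: the callable takes (self, x),
# i.e. an extra leading argument -- ProtoB's shape.
print(list(inspect.signature(C.meth).parameters))     # -> ['self', 'x']
```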
<python><typing><pep>
2023-02-09 15:19:23
1
440
danielcaballero88
75,400,650
572,575
Python BeautifulSoup cannot read data from div tag
<p>I am trying to read data from this div tag on this <a href="https://finance.yahoo.com/" rel="nofollow noreferrer">website</a>.</p> <pre><code>&lt;div class=&quot;Bgc($lv2BgColor) Bxz(bb) Ovx(a) Pos(r) Maw($newGridWidth) Miw($minGridWidth) Miw(a)!--tab768 Miw(a)!--tab1024 Mstart(a) Mend(a) Px(20px) Py(10px) D(n)--print&quot;&gt; </code></pre> <p><a href="https://i.sstatic.net/evhul.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/evhul.png" alt="enter image description here" /></a></p> <pre><code>from bs4 import BeautifulSoup import requests import re from urllib.request import urlopen url = &quot;https://finance.yahoo.com/&quot; urlpage=urlopen(url).read() bswebpage=BeautifulSoup(urlpage) t = bswebpage.find_all(&quot;div&quot;,{'class':&quot;Bgc($lv2BgColor) Bxz(bb) Ovx(a) Pos(r) Maw($newGridWidth) Miw($minGridWidth) Miw(a)!--tab768 Miw(a)!--tab1024 Mstart(a) Mend(a) Px(20px) Py(10px) D(n)--print&quot;}) print(t) </code></pre> <p>I use <code>find_all</code> with BeautifulSoup but the output does not show anything. It shows only this</p> <pre><code>[] </code></pre> <p>How can I fix it?</p>
<python><web-scraping><beautifulsoup>
2023-02-09 15:15:57
2
1,049
user572575
75,400,589
16,414,611
Open multiple tabs in a browser using Playwright
<p>I am trying to open multiple tabs from the same browser using the mentioned playwright code. The browser I'm trying to use is Firefox. Firefox is opening multiple windows instead of tabs. With Chrome it is successfully opening multiple tabs from the same browser. I have searched for different solutions but the result was the same.</p> <pre class="lang-py prettyprint-override"><code>import time from playwright.sync_api import sync_playwright def run(playwright): firefox = playwright.firefox browser = firefox.launch(headless=False) context = browser.new_context() page = context.new_page() page.goto(&quot;https://google.com&quot;) new_tab = context.new_page() new_tab.goto(&quot;https://playwright.dev&quot;) time.sleep(5) browser.close() with sync_playwright() as playwright: run(playwright) </code></pre>
<python><python-3.x><playwright><playwright-python>
2023-02-09 15:11:12
1
329
Raisul Islam
75,400,306
6,579,048
How to print the "original name" of an input
<p>I'm trying to do a comparison of multiple numbers and print out the result in a string type. Here's a simplified version.</p> <pre><code>a = input() b = input() c = input() Com = [a, b, c] print (min(Com) + &quot; is the smallest, the value is %s&quot; %min(Com)) </code></pre> <p>For example, when I input 1,2,3 to a,b,c the output will be</p> <pre><code>&quot;1 is the smallest, the value is 1&quot; </code></pre> <p>but what I really want is</p> <pre><code>&quot;a is the smallest, the value is 1&quot; </code></pre> <p>Is there any function I can use to find out the original name of 1, which is <code>a</code>?</p>
<python><list>
2023-02-09 14:49:07
1
321
Chu
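A hedged sketch for the question above: plain values don't remember which variable held them, so the usual workaround is to store the inputs in a dict keyed by their labels. The literal values below stand in for the three `input()` calls.

```python
# Store inputs in a dict so each value keeps a label
values = {"a": 1, "b": 2, "c": 3}  # stand-ins for the three input() calls

# min() over a dict iterates its keys; key=values.get compares by value
smallest_name = min(values, key=values.get)
message = f"{smallest_name} is the smallest, the value is {values[smallest_name]}"
```

This sidesteps trying to recover a variable's "original name" at runtime, which Python does not support in any robust way.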
75,400,210
14,667,788
How to get full html body of google search using requests
<p>I have the following problem. I would like to get the full HTML body of a Google search output.</p> <p>Suppose I would like to google <code>Everton stadium address</code>. This is my Python code:</p> <pre class="lang-py prettyprint-override"><code>import urllib.request as urllib2 user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7' url = &quot;https://www.google.com/search?q=Everton+stadium+address&quot; headers={'User-Agent':user_agent,} request=urllib2.Request(url,None,headers) response = urllib2.urlopen(request) data = response.read() </code></pre> <p>But when I print my <code>data</code> I can see that the HTML of the right part of the page is missing; see the missing red area:</p> <p><a href="https://i.sstatic.net/syx7D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/syx7D.png" alt="enter image description here" /></a></p> <p>How can I get the full HTML body, including the red part, please?</p>
<python><python-requests>
2023-02-09 14:41:53
1
1,265
vojtam
75,400,186
6,367,971
Total count of spaces in specific dataframe column
<p>I have a dataframe and I want to count the number of spaces present for all strings in <code>Col1</code>.</p> <pre><code> Col1 Col2 file_name 0 AAA A XYZ test1.csv 1 B BBB XYZ test1.csv 2 CC CC RST test1.csv 3 DDDDD XYZ test2.csv 4 AAAAX WXY test3.csv </code></pre> <p>So I want the output to simply be something like:</p> <p><code>num_of_spaces = 3</code></p>
<python><pandas><dataframe>
2023-02-09 14:40:04
3
978
user53526356
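A hedged one-liner sketch for the space-counting question above: `Series.str.count` counts a pattern per row, and `sum()` totals it over the column. The frame below reproduces the `Col1` values from the question.

```python
import pandas as pd

# Col1 values copied from the question's example frame
df = pd.DataFrame({"Col1": ["AAA A", "B BBB", "CC CC", "DDDDD", "AAAAX"]})

# Count the spaces in each string, then sum over the column
num_of_spaces = int(df["Col1"].str.count(" ").sum())
```

This should yield 3 for the sample data, matching the desired output in the question.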
75,400,176
11,462,274
Drop duplicate columns when they have the same name and same values in a DataFrame with different types, for example dictionaries
<p>Example of my DataFrame (I'm putting it with a divider in commas because the print in the terminal is too big):</p> <pre class="lang-none prettyprint-override"><code>_item_name_to_attribute_name_overrides,_datetime_created,_datetime_updated,event,elapsed_time,market_count,_data,marketCount,event.id,event.name,event.countryCode,event.timezone,event.openDate {},2023-02-09 14:25:03.184838,2023-02-09 14:25:03.184838,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E020E0&gt;,0.30626821517944336,9,&quot;{'event': {'id': '32092781', 'name': 'Al-Sadd v Al Arabi (QAT)', 'countryCode': 'QA', 'timezone': 'GMT', 'openDate': '2023-02-09T13:40:50.000Z'}, 'marketCount': 9}&quot;,9,32092781,Al-Sadd v Al Arabi (QAT),QA,GMT,2023-02-09T13:40:50.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02650&gt;,0.30626821517944336,12,&quot;{'event': {'id': '32062604', 'name': 'Scheveningen v Katwijk', 'countryCode': 'NL', 'timezone': 'GMT', 'openDate': '2023-01-28T13:32:48.000Z'}, 'marketCount': 12}&quot;,12,32062604,Scheveningen v Katwijk,NL,GMT,2023-01-28T13:32:48.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02440&gt;,0.30626821517944336,9,&quot;{'event': {'id': '32091628', 'name': 'Al-Oruba v Al Nahdha', 'timezone': 'GMT', 'openDate': '2023-02-09T13:11:14.000Z'}, 'marketCount': 9}&quot;,9,32091628,Al-Oruba v Al Nahdha,,GMT,2023-02-09T13:11:14.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E026E0&gt;,0.30626821517944336,9,&quot;{'event': {'id': '32091663', 'name': 'Busoga United v Maroons FC', 'countryCode': 'UG', 'timezone': 'GMT', 'openDate': '2023-02-09T13:05:11.000Z'}, 'marketCount': 9}&quot;,9,32091663,Busoga United v Maroons FC,UG,GMT,2023-02-09T13:05:11.000Z {},2023-02-09 
14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E027A0&gt;,0.30626821517944336,8,&quot;{'event': {'id': '32091662', 'name': 'Wakiso Giants v Kampala City Council', 'countryCode': 'UG', 'timezone': 'GMT', 'openDate': '2023-02-09T13:00:29.000Z'}, 'marketCount': 8}&quot;,8,32091662,Wakiso Giants v Kampala City Council,UG,GMT,2023-02-09T13:00:29.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02890&gt;,0.30626821517944336,5,&quot;{'event': {'id': '32093038', 'name': 'Bodo Glimt v Silkeborg', 'timezone': 'GMT', 'openDate': '2023-02-09T13:00:37.000Z'}, 'marketCount': 5}&quot;,5,32093038,Bodo Glimt v Silkeborg,,GMT,2023-02-09T13:00:37.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02920&gt;,0.30626821517944336,7,&quot;{'event': {'id': '32085575', 'name': 'Al Ittihad (EGY) v Aswan FC', 'countryCode': 'EG', 'timezone': 'GMT', 'openDate': '2023-02-09T12:45:00.000Z'}, 'marketCount': 7}&quot;,7,32085575,Al Ittihad (EGY) v Aswan FC,EG,GMT,2023-02-09T12:45:00.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E029E0&gt;,0.30626821517944336,8,&quot;{'event': {'id': '32094567', 'name': 'Chabab Ben Guerir v Rapide Club Oued Zem', 'countryCode': 'MA', 'timezone': 'GMT', 'openDate': '2023-02-09T13:01:31.000Z'}, 'marketCount': 8}&quot;,8,32094567,Chabab Ben Guerir v Rapide Club Oued Zem,MA,GMT,2023-02-09T13:01:31.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02AD0&gt;,0.30626821517944336,10,&quot;{'event': {'id': '32094023', 'name': 'Hapoel Kfar Shelem v Hapoel Marmorek', 'countryCode': 'IL', 'timezone': 'GMT', 'openDate': 
'2023-02-09T14:00:00.000Z'}, 'marketCount': 10}&quot;,10,32094023,Hapoel Kfar Shelem v Hapoel Marmorek,IL,GMT,2023-02-09T14:00:00.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02B60&gt;,0.30626821517944336,9,&quot;{'event': {'id': '32090014', 'name': 'Al-Batin v Al-Shabab (KSA)', 'countryCode': 'SA', 'timezone': 'GMT', 'openDate': '2023-02-09T13:00:00.000Z'}, 'marketCount': 9}&quot;,9,32090014,Al-Batin v Al-Shabab (KSA),SA,GMT,2023-02-09T13:00:00.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02C50&gt;,0.30626821517944336,7,&quot;{'event': {'id': '32093329', 'name': 'Al Madina Al Monawara SC v La Viena FC', 'countryCode': 'EG', 'timezone': 'GMT', 'openDate': '2023-02-09T13:00:09.000Z'}, 'marketCount': 7}&quot;,7,32093329,Al Madina Al Monawara SC v La Viena FC,EG,GMT,2023-02-09T13:00:09.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02E60&gt;,0.30626821517944336,6,&quot;{'event': {'id': '32091633', 'name': 'Bani Sweif v MS Tamia', 'countryCode': 'EG', 'timezone': 'GMT', 'openDate': '2023-02-09T13:01:15.000Z'}, 'marketCount': 6}&quot;,6,32091633,Bani Sweif v MS Tamia,EG,GMT,2023-02-09T13:01:15.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02DA0&gt;,0.30626821517944336,9,&quot;{'event': {'id': '32091632', 'name': 'El Alominiom v Dayrout', 'countryCode': 'EG', 'timezone': 'GMT', 'openDate': '2023-02-09T13:03:50.000Z'}, 'marketCount': 9}&quot;,9,32091632,El Alominiom v Dayrout,EG,GMT,2023-02-09T13:03:50.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02EF0&gt;,0.30626821517944336,12,&quot;{'event': {'id': 
'32091344', 'name': 'AC Paradou U21 v NC Magra U21', 'countryCode': 'DZ', 'timezone': 'GMT', 'openDate': '2023-02-09T14:08:25.000Z'}, 'marketCount': 12}&quot;,12,32091344,AC Paradou U21 v NC Magra U21,DZ,GMT,2023-02-09T14:08:25.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02170&gt;,0.30626821517944336,12,&quot;{'event': {'id': '32091347', 'name': 'HB Chelghoum Laid U21 v Cr Belouizdad U21', 'countryCode': 'DZ', 'timezone': 'GMT', 'openDate': '2023-02-09T14:00:00.000Z'}, 'marketCount': 12}&quot;,12,32091347,HB Chelghoum Laid U21 v Cr Belouizdad U21,DZ,GMT,2023-02-09T14:00:00.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E02FB0&gt;,0.30626821517944336,12,&quot;{'event': {'id': '32091346', 'name': 'MC Alger U21 v JS Saoura U21', 'countryCode': 'DZ', 'timezone': 'GMT', 'openDate': '2023-02-09T14:05:27.000Z'}, 'marketCount': 12}&quot;,12,32091346,MC Alger U21 v JS Saoura U21,DZ,GMT,2023-02-09T14:05:27.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E030A0&gt;,0.30626821517944336,8,&quot;{'event': {'id': '32091637', 'name': 'Misr El Makasa v Kema Aswan', 'countryCode': 'EG', 'timezone': 'GMT', 'openDate': '2023-02-09T13:05:00.000Z'}, 'marketCount': 8}&quot;,8,32091637,Misr El Makasa v Kema Aswan,EG,GMT,2023-02-09T13:05:00.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E03160&gt;,0.30626821517944336,12,&quot;{'event': {'id': '32091349', 'name': 'USM Khenchela U21 v CS Constantine U21', 'countryCode': 'DZ', 'timezone': 'GMT', 'openDate': '2023-02-09T14:02:19.000Z'}, 'marketCount': 12}&quot;,12,32091349,USM Khenchela U21 v CS Constantine U21,DZ,GMT,2023-02-09T14:02:19.000Z {},2023-02-09 
14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E03220&gt;,0.30626821517944336,10,&quot;{'event': {'id': '32091348', 'name': 'MC Oran U21 v RC Arba U21', 'countryCode': 'DZ', 'timezone': 'GMT', 'openDate': '2023-02-09T14:05:15.000Z'}, 'marketCount': 10}&quot;,10,32091348,MC Oran U21 v RC Arba U21,DZ,GMT,2023-02-09T14:05:15.000Z {},2023-02-09 14:25:03.186839,2023-02-09 14:25:03.186839,&lt;betfairlightweight.resources.bettingresources.Event object at 0x000001EDC6E032E0&gt;,0.30626821517944336,8,&quot;{'event': {'id': '32091638', 'name': 'Wadi Degla v Eastern Company SC', 'countryCode': 'EG', 'timezone': 'GMT', 'openDate': '2023-02-09T13:03:29.000Z'}, 'marketCount': 8}&quot;,8,32091638,Wadi Degla v Eastern Company SC,EG,GMT,2023-02-09T13:03:29.000Z </code></pre> <p>To remove columns that have the <strong>same name</strong> and <strong>same values</strong>, I do this here:</p> <pre class="lang-python prettyprint-override"><code>df = df.T.drop_duplicates().T </code></pre> <p>But, one of my DataFrame columns has dictionaries and another column has an object instance of the &quot;Event&quot; class from the &quot;betfairlightweight.resources.bettingresources&quot; package., I am getting this error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;c:\Users\Computador\Desktop\Testes VSCODE\b.py&quot;, line 59, in &lt;module&gt; matches_df(trading) File &quot;c:\Users\Computador\Desktop\Testes VSCODE\b.py&quot;, line 55, in matches_df df = df.T.drop_duplicates().T File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\util\_decorators.py&quot;, line 311, in wrapper return func(*args, **kwargs) File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\frame.py&quot;, line 6116, in drop_duplicates duplicated = self.duplicated(subset, keep=keep) File 
&quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\frame.py&quot;, line 6253, in duplicated labels, shape = map(list, zip(*map(f, vals))) File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\frame.py&quot;, line 6226, in f labels, shape = algorithms.factorize(vals, size_hint=len(self)) File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\algorithms.py&quot;, line 763, in factorize codes, uniques = factorize_array( File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\algorithms.py&quot;, line 560, in factorize_array uniques, codes = table.factorize( File &quot;pandas\_libs\hashtable_class_helper.pxi&quot;, line 5394, in pandas._libs.hashtable.PyObjectHashTable.factorize File &quot;pandas\_libs\hashtable_class_helper.pxi&quot;, line 5310, in pandas._libs.hashtable.PyObjectHashTable._unique TypeError: unhashable type: 'dict' </code></pre> <p>How can I proceed in this scenario where I don't have just common strings and numbers?</p> <p>Just for context, in the example there are two columns called <code>market_count</code> and their values are exactly the same, so I want to keep only one. But it may happen that the <code>_data</code> column (which is where the dictionaries are) is exactly the same</p>
<python><pandas><datetime>
2023-02-09 14:39:14
1
2,222
Digital Farmer
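A hedged workaround sketch for the duplicate-column question above: instead of transposing and calling `drop_duplicates` (which requires hashable cells and so chokes on the dict column), compare columns by a hashable fingerprint such as the `repr` of their values. The tiny frame below is a stand-in for the real one; it mirrors the value-based behavior of the original `df.T.drop_duplicates().T`.

```python
import pandas as pd

df = pd.DataFrame({
    "market_count": [9, 12],
    "_data": [{"id": "1"}, {"id": "2"}],  # dicts are unhashable
    "marketCount": [9, 12],               # same values as market_count
})

# Build a hashable fingerprint per column and keep only the first column
# of each fingerprint group -- no dict cell is ever hashed directly.
seen = set()
keep = []
for col in df.columns:
    key = repr(df[col].tolist())
    if key not in seen:
        seen.add(key)
        keep.append(col)
deduped = df[keep]
```

To drop only columns with both the same name and the same values, the fingerprint could be extended to include the column name, e.g. `key = (col, repr(df[col].tolist()))`.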
75,400,165
825,489
Pandas DataFrame - Best way to transform multiple input columns to multiple output columns?
<p>I have a simple function that takes a sequence of numbers, and returns a tuple of 4 values, eg the (min, max, first, last) elements of the sequence. <em>(This is just an example; I can't just use builtins to get each value; I need to actually use my function).</em> I'd like to apply this function to all columns of a DataFrame and return the results in a new DataFrame, preserving the index.</p> <p>A couple days ago I did the following: Convert the initial DataFrame into a Series of input tuples; <code>apply()</code> my function to create a Series of output tuples; and create a new DataFrame from this output Series, expanding the output tuples into individual columns. And it works fine ...</p> <pre><code>def fn(a): return (min(a), max(a), a[0], a[-1]) df = pd.DataFrame([(2, 4, 5, 1, 3), (12, 14, 15, 11, 13), (20, 40, 50, 10, 30)], index=['a', 'b', 'c']) sr = pd.Series(df.to_numpy().tolist(), index=df.index) sr2 = sr.apply(fn) print(pd.DataFrame(sr2.values.tolist(), index=df.index)) 0 1 2 3 a 1 5 2 3 b 11 15 12 13 c 10 50 20 30 </code></pre> <p>However, now that a couple of days have passed and the thrill of victory has faded, it feels like this code can't be right, it's too many steps, it's insufficiently vectorized, and I'm sure I'm missing something.</p> <p>Is there a simpler/faster/better way of doing this?</p>
<python><pandas><dataframe>
2023-02-09 14:38:05
1
1,269
Bean Taxi
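A hedged sketch of a shorter route for the multi-column transform above: `DataFrame.apply` with `axis=1` and `result_type="expand"` calls the function once per row and expands each returned tuple into output columns, preserving the index, so the intermediate Series of tuples is unnecessary.

```python
import pandas as pd

def fn(a):
    # Same dubiously useful function as in the question
    return (min(a), max(a), a[0], a[-1])

df = pd.DataFrame(
    [(2, 4, 5, 1, 3), (12, 14, 15, 11, 13), (20, 40, 50, 10, 30)],
    index=["a", "b", "c"],
)

# apply over rows; result_type="expand" turns each tuple into columns
out = df.apply(lambda row: fn(row.to_numpy()), axis=1, result_type="expand")
```

Note this is still a Python-level loop over rows, not true vectorization, but it collapses the three-step pipeline into one call.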
75,400,040
1,323,992
Why sqlalchemy postgres engine connection commits automatically?
<p>For some reason engine connection commits automatically, which is not desired. I want to commit explicitly. Here's my code (<a href="https://docs.sqlalchemy.org/en/14/tutorial/index.html" rel="nofollow noreferrer">it repeats official tutorial verbatim</a>):</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import Table, Column, ForeignKey, Integer, String from sqlalchemy.orm import relationship from sqlalchemy.orm import registry from sqlalchemy import create_engine from sqlalchemy import ForeignKey engine = create_engine('postgresql+psycopg2://user:password@localhost:5432/test_db', echo=True) mapper_registry = registry() Base = mapper_registry.generate_base() user_table = Table( &quot;user_account&quot;, mapper_registry.metadata, Column(&quot;id&quot;, Integer, primary_key=True), Column(&quot;name&quot;, String(30)), Column(&quot;fullname&quot;, String), ) mapper_registry.metadata.create_all(engine) stmt = insert(user_table).values(name=&quot;spongebob&quot;, fullname=&quot;Spongebob Squarepants&quot;) print(stmt) with engine.connect() as conn: result = conn.execute(stmt) # conn.commit() </code></pre> <p>Even without commit my logs are:</p> <pre><code>INSERT INTO user_account (name, fullname) VALUES (:name, :fullname) 2023-02-09 16:12:05,033 INFO sqlalchemy.engine.Engine INSERT INTO user_account (name, fullname) VALUES (%(name)s, %(fullname)s) RETURNING user_account.id 2023-02-09 16:12:05,034 INFO sqlalchemy.engine.Engine [generated in 0.00123s] {'name': 'spongebob', 'fullname': 'Spongebob Squarepants'} 2023-02-09 16:12:05,037 INFO sqlalchemy.engine.Engine COMMIT </code></pre> <p>And new object appears in the database. 
I tried to set <code>conn.execution_options(isolation_level=&quot;SERIALIZABLE&quot;)</code> and <code>create_engine(...).execution_options(isolation_level=&quot;SERIALIZABLE&quot;)</code> with no success.</p> <p>However, when I set <code>isolation_level</code> to <code>&quot;AUTOCOMMIT&quot;</code>, the logs are slightly different (so I'm sure the isolation_level option is interpreted correctly):</p> <pre><code>INSERT INTO user_account (name, fullname) VALUES (:name, :fullname) 2023-02-09 16:17:35,889 INFO sqlalchemy.engine.Engine INSERT INTO user_account (name, fullname) VALUES (%(name)s, %(fullname)s) RETURNING user_account.id 2023-02-09 16:17:35,890 INFO sqlalchemy.engine.Engine [generated in 0.00132s] {'name': 'spongebob', 'fullname': 'Spongebob Squarepants'} 2023-02-09 16:17:35,893 INFO sqlalchemy.engine.Engine COMMIT using DBAPI connection.commit(), DBAPI should ignore due to autocommit mode </code></pre> <pre><code>sqlalchemy.__version__ '1.4.46' </code></pre> <p>What's the reason for such behavior?</p> <p>What do I not understand?</p> <p>What am I missing?</p> <p>Thanks</p> <p><strong>IMPORTANT UPDATE</strong></p> <p>As @snakecharmerb pointed out, with <code>future=True</code> it works as expected (and this is the only significant difference from the docs).</p> <pre class="lang-py prettyprint-override"><code>engine = create_engine('...url', echo=True, future=True) </code></pre> <p>So this flag changes the autocommit behavior.</p> <pre><code>INSERT INTO user_account (name, fullname) VALUES (:name, :fullname) 2023-02-09 17:42:08,499 INFO sqlalchemy.engine.Engine BEGIN (implicit) 2023-02-09 17:42:08,499 INFO sqlalchemy.engine.Engine INSERT INTO user_account (name, fullname) VALUES (%(name)s, %(fullname)s) RETURNING user_account.id 2023-02-09 17:42:08,500 INFO sqlalchemy.engine.Engine [generated in 0.00085s] {'name': 'spongebob', 'fullname': 'Spongebob Squarepants'} &lt;sqlalchemy.engine.cursor.CursorResult object at 0x7fef30f76ac0&gt; 2023-02-09 17:42:10,511 INFO
sqlalchemy.engine.Engine ROLLBACK </code></pre> <p>It answers my question mostly, but I'm still curious if I can have similar behavior without <code>future=True</code>, because I cannot turn it on (for the project I work on) yet</p>
<python><postgresql><sqlalchemy><autocommit>
2023-02-09 14:27:11
1
846
yevt
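A small hedged sketch of the behavior difference discussed above, using an in-memory SQLite database instead of Postgres: with `future=True` (the 2.0-style engine), work done in a plain `connect()` block is rolled back unless you commit, while `engine.begin()` commits for you on success. The table and data here are illustrative.

```python
from sqlalchemy import create_engine, text

# future=True opts in to 2.0-style behavior: no legacy implicit autocommit,
# so uncommitted work is rolled back when the connection block exits.
engine = create_engine("sqlite://", future=True)

with engine.begin() as conn:  # begin() commits on success, rolls back on error
    conn.execute(text("CREATE TABLE t (x INTEGER)"))

with engine.connect() as conn:
    conn.execute(text("INSERT INTO t (x) VALUES (1)"))
    # no conn.commit() here -> the INSERT is rolled back on exit

with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM t")).scalar()
```

Without `future=True`, a 1.4 engine keeps the legacy "autocommit" behavior the question observed, where DML statements are committed for you at the end of `execute()`.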
75,399,983
2,371,684
Python file read write issue
<p>I am trying to add course data to a course, I enter id, name, start and end date, credits. Then I load a file with educations to connect the course, I choose one with the index nr, then I add the education name data to course and finally save the data back to the courses.json file.</p> <p>But when I look at the course.json file, it lists all the education, not just the course I added with the education name as one of the attributes.</p> <p>This is my code:</p> <pre><code># import packages import json from datetime import datetime def courses(): # Json File Name filename = 'courses.json' # Read Json file and load content in data variable try: with open(&quot;courses.json&quot;, &quot;r&quot;) as file: # load content content = file.read() # if json file has content save in data variable if content: data = json.loads(content) # else load empty json object in data variable else: data = json.loads('{&quot;Courses&quot;: [] }') # exception handling except FileNotFoundError: data = json.loads('{&quot;Courses&quot;: [] }') except json.decoder.JSONDecodeError: data = json.loads('{&quot;Courses&quot;: [] }') # While User wants to add more records while True: # input course id courseId = input(&quot;Enter course id: &quot;) # input course name course_name = input(&quot;Enter course name: &quot;) # input start date start_input_date = input(&quot;Enter the start date in the format (yyyy-mm-dd): &quot;) start_date = datetime.strptime(start_input_date, &quot;%Y-%m-%d&quot;) start_date_str = start_date.strftime(&quot;%Y-%m-%d&quot;) # input end date end_input_date = input(&quot;Enter the end date in the format (yyyy-mm-dd): &quot;) end_date = datetime.strptime(end_input_date, &quot;%Y-%m-%d&quot;) end_date_str = end_date.strftime(&quot;%Y-%m-%d&quot;) # input course credits credits = int(input(&quot;Enter amount of course credits: &quot;)) # Load the JSON file with open(&quot;educations.json&quot;, &quot;r&quot;) as file: # Load the JSON data data = json.load(file) # Try to 
convert the loaded JSON data into a list try: educations = list(data[&quot;Educations&quot;]) # If the &quot;Educations&quot; key does not exist in the &quot;data&quot; dictionary, assign an empty list to the &quot;educations&quot; variable except KeyError: educations = [] # Get the number of educations num_educations = len(educations) print(&quot;Number of Educations:&quot;, num_educations) # Enumerate through the list of educations and print the index and education name for index, education in enumerate(educations, start=1): print(index, education[&quot;education_name&quot;]) # Get the index of the education to be chosen chosen_index = int(input(&quot;Enter the index of the education you want to choose: &quot;)) # Check if the chosen index is within the range if chosen_index &gt; 0 and chosen_index &lt;= num_educations: # Print the chosen education print(&quot;Chosen Education: &quot;, educations[chosen_index - 1]) else: print(&quot;Invalid index. Please try again.&quot;) education_name = educations[chosen_index - 1][&quot;education_name&quot;] entry = {'courseId': courseId, 'course_name': course_name, 'start_date': start_date_str, 'end_date': end_date_str, 'credits': credits, 'education_name': education_name} # Try to append new row in the &quot;Courses&quot; key of the &quot;data&quot; dictionary try: data[&quot;Courses&quot;].append(entry) # If the &quot;Courses&quot; key does not exist in the &quot;data&quot; dictionary, create the &quot;Courses&quot; key with an empty list as its value except KeyError: data[&quot;Courses&quot;]=[] data[&quot;Courses&quot;].append(entry) # Store all courses in a list courses = data[&quot;Courses&quot;] print(&quot;All courses: &quot;) for index, course in enumerate(courses): print(f&quot;{index + 1}. 
Course:&quot;) for key, value in course.items(): print(f&quot;\t{key}: {value}&quot;) # append data in json file with open(filename, 'a') as outfile: json.dump(courses, outfile, indent=4) # using indent to make json more readable outfile.write('\n') print(&quot;Course added to file successfully!\n&quot;) # ask users for inserting more records or exit program add_more = input(&quot;Do you want to add more courses? (y/n)&quot;) if add_more.lower() == 'n': break courses() </code></pre> <p>The educations.json file has the following content:</p> <pre><code>{ &quot;Educations&quot;: [ { &quot;education_name&quot;: &quot;itsak&quot;, &quot;start_date&quot;: &quot;2022-08-22&quot;, &quot;end_date&quot;: &quot;2024-05-20&quot;, &quot;education_id&quot;: &quot;itsak2023&quot; }, { &quot;education_name&quot;: &quot;Jamstack&quot;, &quot;start_date&quot;: &quot;2023-08-22&quot;, &quot;end_date&quot;: &quot;2025-05-20&quot;, &quot;education_id&quot;: &quot;Jamst2023&quot; }, { &quot;education_name&quot;: &quot;Backend developer&quot;, &quot;start_date&quot;: &quot;2023-08-22&quot;, &quot;end_date&quot;: &quot;2025-05-20&quot;, &quot;education_id&quot;: &quot;Backe2023&quot; } ] } </code></pre> <p>The outcome of the file is:</p> <pre><code>[ { &quot;courseId&quot;: &quot;itstrapi&quot;, &quot;course_name&quot;: &quot;Strapi&quot;, &quot;start_date&quot;: &quot;2022-08-22&quot;, &quot;end_date&quot;: &quot;2022-09-21&quot;, &quot;credits&quot;: 45, &quot;education_name&quot;: &quot;Backend developer&quot; } ] [ { &quot;courseId&quot;: &quot;itgatsby&quot;, &quot;course_name&quot;: &quot;Gatsby&quot;, &quot;start_date&quot;: &quot;2022-09-22&quot;, &quot;end_date&quot;: &quot;2022-10-21&quot;, &quot;credits&quot;: 30, &quot;education_name&quot;: &quot;Jamstack&quot; } ] </code></pre> <p>But the desired outcome should be the following:</p> <pre><code>{ &quot;Courses&quot;: [ { &quot;courseId&quot;: &quot;itstrapi&quot;, &quot;course_name&quot;: &quot;Strapi&quot;, 
&quot;start_date&quot;: &quot;2022-08-22&quot;, &quot;end_date&quot;: &quot;2022-09-21&quot;, &quot;credits&quot;: 45, &quot;education_name&quot;: &quot;Backend developer&quot; }, { &quot;courseId&quot;: &quot;itgatsby&quot;, &quot;course_name&quot;: &quot;Gatsby&quot;, &quot;start_date&quot;: &quot;2022-09-22&quot;, &quot;end_date&quot;: &quot;2022-10-21&quot;, &quot;credits&quot;: 30, &quot;education_name&quot;: &quot;Jamstack&quot; } ] } </code></pre> <p>I am getting output now, but not in the proper format. What have I done wrong?</p>
<python>
2023-02-09 14:22:15
1
1,575
user2371684
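A hedged sketch of the read-modify-write pattern the course question above needs: opening the file in `'a'` (append) mode writes a brand-new JSON document after the old one each time, which is what produces the several side-by-side lists in the observed output. Rewriting the whole structure with `'w'` keeps a single valid document. The helper name and sample entries below are illustrative.

```python
import json
import os
import tempfile

def add_course(filename, entry):
    # Load the existing structure (or start fresh), append the new course,
    # then rewrite the WHOLE file with "w" -- appending with "a" produces
    # multiple JSON documents back to back, which is malformed.
    try:
        with open(filename) as f:
            data = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        data = {"Courses": []}
    data.setdefault("Courses", []).append(entry)
    with open(filename, "w") as f:
        json.dump(data, f, indent=4)

path = os.path.join(tempfile.mkdtemp(), "courses.json")
add_course(path, {"courseId": "itstrapi", "course_name": "Strapi"})
add_course(path, {"courseId": "itgatsby", "course_name": "Gatsby"})
with open(path) as f:
    result = json.load(f)  # still one valid JSON document
```

The resulting file keeps the desired shape: a single object with a `"Courses"` list that grows on each call.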
75,399,909
17,561,414
try except AttributeError bugs
<p>I'm trying to add <code>try</code> and <code>except</code> to my code but it does not work.</p> <pre class="lang-py prettyprint-override"><code>def _get_table_df(self, table): l = module.send_command(name='Read', attributes={'type': table, 'method':'all', 'limit':'1', 'enable_custom':1}) so = BeautifulSoup(l.text,'xml') columns = [] try: for x in list(so.find(table).children): columns.append(x.name) df = pd.DataFrame(columns=columns) except AttributeError: return &quot;no data&quot; return df </code></pre> <p>The error, pointing to the <code>try</code> line, says</p> <blockquote> <p>AttributeError: 'NoneType' object has no attribute 'children'</p> </blockquote>
<python><except>
2023-02-09 14:16:23
0
735
Greencolor
75,399,810
5,040,775
Cannot upgrade pandas on Pycharm
<p>When I enter the command <code>pip install pandas==1.3.0</code> on a virtual environment from PyCharm, I get the following error.</p> <blockquote> <p>ERROR: Could not find a version that satisfies the requirement pandas==1.3.0 (from versions: 0.1, 0.2, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.5.0, 0.6.0, 0.6.1, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0 .14.0, 0.14.1, 0.15.0, 0.15.1, 0.15.2, 0.16.0, 0.16.1, 0.16.2, 0.17.0, 0.17.1, 0.18.0, 0.18.1, 0.19.0, 0.19.1, 0.19.2, 0.20.0, 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.21.1, 0.22.0, 0.23.0, 0.23.1, 0.23.2, 0.23.3, 0.23.4, 0.24.0, 0.24.1, 0.24.2, 0.25.0, 0.25.1, 0.25.2, 0 .25.3, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5) ERROR: No matching distribution found for pandas==1.3.0</p> </blockquote>
<python><pip>
2023-02-09 14:08:56
1
3,525
JungleDiff
75,399,746
1,185,081
Certificate issue when installing Apache Airflow
<p>Trying to install Airflow on a Windows server, I receive lots of certificate errors. Is there a way to bypass certificate checking while installing?</p> <p><strong>For GitPython:</strong></p> <pre><code>C:\apache-airflow-2.5.1&gt;pip install GitPython WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:992)'))': /simple/gitpython/ </code></pre> <p><strong>For AirFlow:</strong></p> <pre><code>C:\apache-airflow-2.5.1&gt;python setup.py install gitpython not found: Cannot compute the git version. C:\Python311\Lib\site-packages\setuptools\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. warnings.warn( WARNING: The wheel package is not available. WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:992)'))': /simple/wheel/ </code></pre> <p>Is this deprecation warning due to Python 3.11 not being validated by AirFlow?</p> <p>Is there a simple way of installing AirFlow?</p>
<python><airflow>
2023-02-09 14:04:51
1
2,168
user1185081
75,399,734
3,406,193
Type hint a tuple whose length is the same as *args but different element types
<p>Python supports defining functions that take a variable-length sequence of arguments with <code>*args</code>. The type annotation in those cases is expected to describe the type of the individual elements in the sequence. But how can one describe the return type of the function if its length is supposed to depend on the length of <code>args</code> and the type of the elements is arbitrary?</p> <p>Let's consider how we could annotate the output type in the following dubiously useful function:</p> <pre class="lang-py prettyprint-override"><code>def add_one_and_to_str(*args): return tuple(str(i+1) for i in args) </code></pre> <h4>Attempt 1: <code>Sequence</code></h4> <p>One option might be to use <code>Sequence</code>. For example, one can do</p> <pre class="lang-py prettyprint-override"><code>from typing import Sequence def add_one_and_to_str(*args: int) -&gt; Sequence[str]: return tuple(str(i+1) for i in args) </code></pre> <p>This kind of works, but it is losing information: we know that the return type will not be just any sequence of strings but specifically a tuple, and precisely of the same length as the number of arguments provided.</p> <h4>Attempt 2: <code>TypeVarTuple</code></h4> <p>As an alternative, Python 3.11 introduces <a href="https://docs.python.org/3/library/typing.html#typing.TypeVarTuple" rel="noreferrer">TypeVarTuple</a>, which, conceptually, can be thought of as a tuple of type variables (T1, T2, ...).</p> <p>So you can use it like</p> <pre class="lang-py prettyprint-override"><code>from typing import Tuple, TypeVarTuple Ts = TypeVarTuple('Ts') def make_tuple(*args: *Ts) -&gt; Tuple[*Ts]: return args </code></pre> <p>This correctly propagates the information about the length of the tuple.
However, you cannot restrict the input types because <a href="https://peps.python.org/pep-0646/#variance-type-constraints-and-type-bounds-not-yet-supported" rel="noreferrer">type bounds are still not supported</a> for <code>TypeVarTuple</code> (so you would accept <code>Any</code>). But more importantly this seems to be useful in this case only to use the exact same types in the outputs than in the input.</p> <p>Thus, it doesn't seem to be useful in the case discussed her.</p> <h4>Open question</h4> <p>How could the first function be correctly annotated specifying that the output will be a tuple of strings with the same number of elements than integers were provided as function arguments? Any idea?</p> <p>Thanks!</p>
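One partial workaround (a sketch, not a general solution — it only gives precise lengths for a fixed, finite set of arities) is to enumerate the common cases with `typing.overload` and keep an imprecise `tuple[str, ...]` catch-all for longer calls:

```python
from typing import overload


@overload
def add_one_and_to_str(a: int, /) -> tuple[str]: ...
@overload
def add_one_and_to_str(a: int, b: int, /) -> tuple[str, str]: ...
@overload
def add_one_and_to_str(a: int, b: int, c: int, /) -> tuple[str, str, str]: ...
@overload
def add_one_and_to_str(*args: int) -> tuple[str, ...]: ...
def add_one_and_to_str(*args: int) -> tuple[str, ...]:
    # Runtime behaviour is unchanged; only the declared types differ per arity.
    return tuple(str(i + 1) for i in args)
```

A type checker then infers `tuple[str, str]` for `add_one_and_to_str(1, 2)`, and calls with more arguments than the enumerated overloads fall through to the catch-all with element count information lost.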
<python><python-typing>
2023-02-09 14:04:02
0
4,044
mgab
75,399,706
2,836,172
pip can't find a matching version when installing flask-user in a docker image
<p>I have several packages in my <code>requirements.txt</code> file, but only <code>Flask-User</code> can't be installed:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement Flask-User (from versions: 0.3, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 0.3.7, 0.3.8, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.6, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.6.7, 0.6.8, 0.6.9, 0.6.10, 0.6.12, 0.6.13, 0.6.14, 0.6.15, 0.6.16, 0.6.17, 0.6.19, 0.6.20, 0.6.21, 1.0.1.1, 1.0.1.2, 1.0.1.3, 1.0.1.4, 1.0.1.5, 1.0.2.0, 1.0.2.1, 1.0.2.2) ERROR: No matching distribution found for Flask-User </code></pre> <p>I didn't even specify a version. How can I install this package in my Docker image?</p> <p>The base image I'm using is <code>python:3.9-bullseye</code></p>
<python><docker><pip><package>
2023-02-09 14:01:45
1
1,522
Standard
75,399,697
12,993,568
Parametrize input datasets in kedro
<p>I'm trying to move my project into a Kedro pipeline but I'm struggling with the following step:</p> <p>My prediction pipeline is being run by a scheduler. The scheduler supplies all the necessary parameters (dates, country codes etc.). Up until now I had a CLI which would take input parameters such as the ones below:</p> <p><code>python predict --date 2022-01-03 --country UK</code></p> <p>The code would then read the input dataset for a given date and for a given country, so the query would be something like:</p> <pre><code>SELECT * FROM input_data_{country} WHERE date = {date} </code></pre> <p>and this would be formatted using the input variables passed in the CLI.</p> <p>Important note: the code has to run on any arbitrary date passed by the scheduler, and not only on &quot;today&quot;.</p> <p>How would I parametrize Kedro's data catalog using CLI arguments?</p> <p>I tried the examples in the Kedro documentation but they seem mainly geared towards using templates from config when reading the data. The key issue I'm struggling with is passing CLI arguments to the data catalog, and I haven't found a working solution. I looked into <code>PartitionedDataSet</code> but I don't see an option to have CLI arguments as inputs there.</p>
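One direction worth checking (a sketch only — the dataset type and resolver syntax depend on the Kedro and kedro-datasets versions in use): recent Kedro releases let you pass runtime parameters on the CLI, e.g. `kedro run --params=country=UK,date=2022-01-03` (older releases use the `key:value` form), and with the `OmegaConfigLoader` those values can be interpolated into catalog entries via the `runtime_params` resolver:

```yaml
# conf/base/catalog.yml -- hypothetical entry; assumes OmegaConfigLoader
# and the pandas.SQLQueryDataSet from kedro-datasets are available
input_data:
  type: pandas.SQLQueryDataSet
  sql: "SELECT * FROM input_data_${runtime_params:country} WHERE date = '${runtime_params:date}'"
  credentials: db_credentials
```

If your Kedro version predates this resolver, the usual fallback is a custom `after_context_created` hook (or the older `TemplatedConfigLoader`) that injects the CLI values into the catalog configuration.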
<python><kedro>
2023-02-09 14:01:26
1
352
w_sz
75,399,617
12,814,680
Filtering an array on nested values by using values from another array
<p>I have this array of 3 objects. The parameter that interests me is &quot;id&quot;, which is nested inside the &quot;categories&quot; attribute.</p> <pre><code>list = [ { &quot;title&quot;: &quot;\u00c9glise Saint-Julien&quot;, &quot;distance&quot;: 1841, &quot;excursionDistance&quot;: 1575, &quot;categories&quot;: [ { &quot;id&quot;: &quot;300-3200-0030&quot;, &quot;name&quot;: &quot;\u00c9glise&quot;, &quot;primary&quot;: True }, { &quot;id&quot;: &quot;300-3000-0025&quot;, &quot;name&quot;: &quot;Monument historique&quot; } ] }, { &quot;title&quot;: &quot;Sevdec&quot;, &quot;distance&quot;: 2250, &quot;excursionDistance&quot;: 301, &quot;categories&quot;: [ { &quot;id&quot;: &quot;700-7600-0322&quot;, &quot;name&quot;: &quot;Station de recharge&quot;, &quot;primary&quot;: True } ] }, { &quot;title&quot;: &quot;SIEGE 27&quot;, &quot;distance&quot;: 2651, &quot;excursionDistance&quot;: 1095, &quot;categories&quot;: [ { &quot;id&quot;: &quot;700-7600-0322&quot;, &quot;name&quot;: &quot;Station de recharge&quot;, &quot;primary&quot;: True } ] } ] </code></pre> <p>Then I have these two arrays that contain ids:</p> <pre><code>mCat1 = [&quot;300-3000-0000&quot;,&quot;300-3000-0023&quot;,&quot;300-3000-0030&quot;,&quot;300-3000-0025&quot;,&quot;300-3000-0024&quot;,&quot;300-3100&quot;] # macro cat1 = tourism mCat2 = [&quot;400-4300&quot;,&quot;700-7600-0322&quot;] </code></pre> <p>I need to filter &quot;list&quot; on &quot;mCat1&quot; in order to extract into a new variable the object(s) that have at least one &quot;id&quot; that matches those in &quot;mCat1&quot;. 
Then I need to do the same with &quot;mCat2&quot;.</p> <p>In this example the expected result would be:</p> <pre><code>mCat1Result = [{ &quot;title&quot;: &quot;\u00c9glise Saint-Julien&quot;, &quot;distance&quot;: 1841, &quot;excursionDistance&quot;: 1575, &quot;categories&quot;: [ { &quot;id&quot;: &quot;300-3200-0030&quot;, &quot;name&quot;: &quot;\u00c9glise&quot;, &quot;primary&quot;: True }, { &quot;id&quot;: &quot;300-3000-0025&quot;, &quot;name&quot;: &quot;Monument historique&quot; } ] }] mCat2Result = [{ &quot;title&quot;: &quot;Sevdec&quot;, &quot;distance&quot;: 2250, &quot;excursionDistance&quot;: 301, &quot;categories&quot;: [ { &quot;id&quot;: &quot;700-7600-0322&quot;, &quot;name&quot;: &quot;Station de recharge&quot;, &quot;primary&quot;: True } ] }, { &quot;title&quot;: &quot;SIEGE 27&quot;, &quot;distance&quot;: 2651, &quot;excursionDistance&quot;: 1095, &quot;categories&quot;: [ { &quot;id&quot;: &quot;700-7600-0322&quot;, &quot;name&quot;: &quot;Station de recharge&quot;, &quot;primary&quot;: True } ] }] </code></pre> <p>What would be the most efficient way to do this? I am able to do it using loops but it is very resource-intensive on large datasets.</p>
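A straightforward approach is to turn each id list into a `set` (so membership tests are O(1) instead of scanning the list each time) and keep the objects whose categories intersect it — a sketch with hypothetical minimal data mirroring the structure above:

```python
def filter_by_category_ids(objects, category_ids):
    """Return the objects having at least one category id in category_ids."""
    wanted = set(category_ids)  # O(1) membership tests
    return [
        obj for obj in objects
        if any(cat["id"] in wanted for cat in obj.get("categories", []))
    ]


# Hypothetical minimal data with the same shape as the question's objects
places = [
    {"title": "A", "categories": [{"id": "300-3000-0025"}]},
    {"title": "B", "categories": [{"id": "700-7600-0322"}]},
]
m_cat1_result = filter_by_category_ids(places, ["300-3000-0025"])
m_cat2_result = filter_by_category_ids(places, ["700-7600-0322"])
```

This is still a single pass over the objects per id list, but the set lookup removes the inner list scan, which is usually the dominant cost on large datasets.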
<python>
2023-02-09 13:55:08
0
499
JK2018
75,399,522
19,828,155
How should I send a POST request to create an object that has a foreign key relationship
<p>Hey y'all, I'm trying to learn the Django REST framework and I'm getting stuck trying to create a new product in my DB via a POST request.</p> <p>I have a model called Product which has a bunch of fields for all the product details; one of those fields (brand) is a foreign key to another table containing all my brands.</p> <p>When sending a POST request to create a new product, I want to be able to pass in an ID for a brand and have it save.</p> <p>The serializer for the product model has the brand nested in it, and I can't figure out how to properly structure the product info in the POST request.</p> <p>serializers.py</p> <pre><code>class ProductSerializer(serializers.ModelSerializer): brand = BrandSerializer(required=False, read_only=False) vendors = VendorSerializer(many=True, required=False) class Meta: model = Product fields = &quot;__all__&quot; def create(self, validated_data): product_brand = validated_data.pop(&quot;brand&quot;) print(product_brand) product_instance = Product.objects.create(**validated_data) Brand.objects.create(product_brand) return product_instance </code></pre> <p>models.py</p> <pre><code>class Brand(models.Model): name = models.CharField(max_length=50) def __str__(self): return self.name class Product(models.Model): name = models.CharField(max_length=200) description = models.CharField(max_length=500) sku = models.CharField(max_length=50) brand = models.ForeignKey( Brand, on_delete=models.CASCADE, null=True, blank=True ) added = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) def __str__(self): return self.name + &quot;: &quot; + self.description </code></pre> <p>example JSON of POST request</p> <pre><code>{ &quot;id&quot;: 1, &quot;brand&quot;: { &quot;id&quot;: 2, &quot;name&quot;: &quot;Dewalt&quot; }, &quot;name&quot;: &quot;Product1&quot;, &quot;description&quot;: &quot;the first product&quot;, &quot;sku&quot;: &quot;111111111&quot;, &quot;added&quot;: &quot;2022-12-28T19:09:30.007480Z&quot;, 
&quot;updated&quot;: &quot;2022-12-29T15:10:36.432685Z&quot; } </code></pre> <p>I don't even know if I should be using nested serializers or if there is a better way to do this. I want the brand info to be contained in the response when I GET a product, not just the brand's ID.</p> <p>I read somewhere that I have to override the default create() method, but I'm not sure what that's supposed to be doing.</p> <p>I don't want to create a new brand when I do a POST; I just want to be able to choose an existing brand when creating a new product.</p> <p>If someone can point me in the right direction, that would be incredible.</p>
<python><django><django-rest-framework>
2023-02-09 13:47:09
2
404
spaceCabbage