Column             | Type          | Min                 | Max
-------------------|---------------|---------------------|--------------------
QuestionId         | int64         | 74.8M               | 79.8M
UserId             | int64         | 56                  | 29.4M
QuestionTitle      | string (len)  | 15                  | 150
QuestionBody       | string (len)  | 40                  | 40.3k
Tags               | string (len)  | 8                   | 101
CreationDate       | string (date) | 2022-12-10 09:42:47 | 2025-11-01 19:08:18
AnswerCount        | int64         | 0                   | 44
UserExpertiseLevel | int64         | 301                 | 888k
UserDisplayName    | string (len)  | 3                   | 30
76,204,868
21,420,742
Finding differences in values with strings in Python
<p>I am trying to create a new column that will describe when someone is a manager or a stand in while a manager is away.</p> <p>DF</p> <pre><code> ID Name Emp_Level 101 Adam Emp 102 Betty Mgr 103 Chris(Stand in) Mgr 104 Dave Emp 105 Emily(Stand in) Mgr 106 Frank Mgr </code></pre> <p>desired output</p> <pre><code> ID Name Emp_Level Status 101 Adam Emp Emp 102 Betty Mgr Full 103 Chris(Stand in) Mgr Partial 104 Dave Mgr Full 105 Emily(Stand in) Mgr Partial 106 Frank Mgr Full </code></pre> <p>I want to identify the differences between manager and those that are stand in managers.</p> <p>I tried</p> <pre><code> np.where(df['Name'].str.contains('(Stand in)', regex = True), 'Partical', np.where(df['Emp_Level'] == 'Mgr', 'Manager', 'Employee')) </code></pre>
<python><python-3.x><pandas><dataframe><numpy>
2023-05-08 22:53:27
3
473
Coding_Nubie
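A minimal sketch of one way to produce the desired `Status` column (an editor's illustration, not a verified answer). The parentheses in `(Stand in)` form a regex group when `regex=True`, which pandas warns about; matching the text literally with `regex=False` avoids that, and `np.select` maps the three cases to the labels shown in the desired output:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": [101, 102, 103, 104, 105, 106],
    "Name": ["Adam", "Betty", "Chris(Stand in)", "Dave",
             "Emily(Stand in)", "Frank"],
    "Emp_Level": ["Emp", "Mgr", "Mgr", "Emp", "Mgr", "Mgr"],
})

# Match the search text literally (regex=False) so the parentheses are not
# treated as a regex group; np.select handles the three-way choice.
df["Status"] = np.select(
    [df["Name"].str.contains("(Stand in)", regex=False),
     df["Emp_Level"].eq("Mgr")],
    ["Partial", "Full"],
    default="Emp",
)
print(df["Status"].tolist())  # ['Emp', 'Full', 'Partial', 'Emp', 'Partial', 'Full']
```

Conditions in `np.select` are evaluated in order, so a stand-in manager hits "Partial" before the "Mgr" check can label them "Full".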
76,204,704
13,119,730
FastAPI get exception stacktrace
<p>I have a problem with my Exception handler in my FastAPI app. Here's my current code:</p> <pre><code>@app.exception_handler(Exception) async def server_error(request: Request, error: Exception): logger.error(f&quot;Internal server error: {error} \n{traceback.format_exc()}&quot;) return JSONResponse( status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, content=jsonable_encoder({ &quot;message&quot;: &quot;...&quot;, &quot;exception&quot;: [str(error)], &quot;endpoint&quot;: request.url }) ) </code></pre> <p>I would like to add a logging message that prints the stacktrace of the exception that triggered this function. Currently, the output doesn't point to the stacktrace of the exception.</p> <p>I've tried many ways to fix this, but the only result I ended up with was creating a pull request into the FastAPI repository.</p> <p>If I missed something, please let me know. Thank you!</p>
<python><exception><logging><fastapi><stack-trace>
2023-05-08 22:15:07
0
387
Jakub Zilinek
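A framework-independent sketch of the usual fix: `traceback.format_exc()` relies on `sys.exc_info()`, which can be empty by the time an exception handler runs outside the original `except` block. Building the trace from the exception object itself avoids that:

```python
import traceback

def format_error(error: Exception) -> str:
    # Build the stack trace from the exception object itself, using its
    # __traceback__ attribute, instead of relying on sys.exc_info().
    return "".join(
        traceback.format_exception(type(error), error, error.__traceback__)
    )

# Demonstrate outside any `except` block, as in an async handler:
try:
    1 / 0
except ZeroDivisionError as exc:
    caught = exc

trace = format_error(caught)
print("ZeroDivisionError" in trace)  # True
```

In the FastAPI handler above, `logger.error(f"... \n{format_error(error)}")` would then log the full trace of the exception that triggered it.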
76,204,675
8,849,755
Changing colorbar title angle in plotly
<p>Consider the following code taken from <a href="https://plotly.com/python/heatmaps/" rel="nofollow noreferrer">the documentation</a></p> <pre class="lang-py prettyprint-override"><code>import plotly.express as px data=[[1, 25, 30, 50, 1], [20, 1, 60, 80, 30], [30, 60, 1, 5, 20]] fig = px.imshow(data, labels=dict(x=&quot;Day of Week&quot;, y=&quot;Time of Day&quot;, color=&quot;Productivity&quot;), x=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'], y=['Morning', 'Afternoon', 'Evening'] ) fig.update_xaxes(side=&quot;top&quot;) fig.show() </code></pre> <p>which produces the following plot <a href="https://i.sstatic.net/4Bazw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Bazw.png" alt="enter image description here" /></a></p> <p>Is it possible to rotate the 'Productivity' title so that it shows up like this? <a href="https://i.sstatic.net/coqh2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/coqh2.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-05-08 22:08:37
1
3,245
user171780
76,204,615
11,141,816
Python md5 same image files with different md5
<p>From <a href="https://stackoverflow.com/questions/16874598/how-do-i-calculate-the-md5-checksum-of-a-file-in-python">How do I calculate the MD5 checksum of a file in Python?</a> , I wrote a script to remove the duplicate files in the folder <code>dst_dir</code> with md5. However, for many files(.jpg and .mp4), the md5 was not able to remove the duplicate files. I checked that the methods mentioned in <a href="https://stackoverflow.com/questions/54202929/python-3-same-text-but-different-md5-hashes">Python 3 same text but different md5 hashes</a> did not work. I suspect if might be the property file(the &quot;modification date&quot; etc.) that's attached to the image files that's changed.</p> <pre><code>import os dst_dir=&quot;/&quot; import hashlib directory=dst_dir; #list of file md5 md5_list=[]; md5_file_list=[]; for root, subdirectories, files in os.walk(directory): if &quot;.tresorit&quot; not in root: for file in files: file_path =os.path.abspath( os.path.join(root,file) ); print(file_path) # Open,close, read file and calculate MD5 on its contents with open(file_path, 'rb') as file_to_check: # read contents of the file data = file_to_check.read() # pipe contents of the file through md5_returned = hashlib.md5(data).hexdigest() if md5_returned not in md5_list: md5_list.append(md5_returned); md5_file_list.append(file_path); else: # remove duplicate file print([&quot;Duplicate file&quot;, file_path, md5_returned] ) if &quot;-&quot; not in file: os.remove(file_path); print(&quot;Duplicate file removed 01&quot;) else: file_list_index=md5_list.index(md5_returned); if &quot;-&quot; not in md5_file_list[file_list_index]: os.remove(md5_file_list[file_list_index]); del md5_list[file_list_index] del md5_file_list[file_list_index] print(&quot;Duplicate file removed 02&quot;) md5_list.append(md5_returned) md5_file_list.append(file_path) else: os.remove(file_path); print(&quot;Duplicate file removed 03&quot;) </code></pre> <p>How to fix Python md5 calculation such that the 
same image files could be returned with the same md5 values?</p>
<python><md5><hashlib>
2023-05-08 21:55:04
0
593
ShoutOutAndCalculate
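A sketch of the metadata hypothesis the question raises: MD5 over raw file bytes changes whenever *any* byte differs, including EXIF or comment chunks, so visually identical images can hash differently. Hashing the decoded pixels instead (via Pillow) makes such copies compare equal. The demo below fabricates two PNGs that differ only in a text chunk:

```python
import hashlib
import os
import tempfile

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def file_md5(path):
    # Digest of the raw bytes: sensitive to metadata differences.
    with open(path, "rb") as fh:
        return hashlib.md5(fh.read()).hexdigest()

def pixel_md5(path):
    # Digest of the decoded pixel data: identical images with different
    # embedded metadata produce the same value.
    with Image.open(path) as im:
        return hashlib.md5(im.convert("RGB").tobytes()).hexdigest()

tmp = tempfile.mkdtemp()
a, b = os.path.join(tmp, "a.png"), os.path.join(tmp, "b.png")

img = Image.new("RGB", (8, 8), "red")
img.save(a)
meta = PngInfo()
meta.add_text("Comment", "second copy")
img.save(b, pnginfo=meta)

print(file_md5(a) == file_md5(b))    # False: metadata differs
print(pixel_md5(a) == pixel_md5(b))  # True: pixels identical
```

For .mp4 files there is no equivalent cheap "decoded content" hash, so byte-level MD5 plus stripping container metadata first is the more practical route there.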
76,204,576
15,569,921
Filtering on the positive values in scatterplot
<p>Imagine a nested dictionary, where the first level is the dates, and the second level dictionary has the number as keys and the value corresponding to that number as values. I want to do a scatterplot on the positive <strong>values</strong> of each date without losing the dates as the x-axis. What I mean is that if the nested dictionary (one specific date), has no positive values (all zero or negative), I don't want to lose the date, but have nothing showing up as the y-axis value for the date.</p> <p>Here's a working example for clarification. I plot everything in here. I want to keep the same labels and distance on x-axis but don't have the zero or negative values in y-axis show up. In this example, I don't want to see the dots on dates '2006-02-03' and '2006-04-05'. Any help would be appreciated.</p> <pre><code>import matplotlib.pyplot as plt dictt = {('2005-01-04', '2006-01-04'): {0: 0, 1: 3, 2: -1, 3: 5}, ('2005-02-03', '2006-02-03'): {0: 0, 1: 0, 2: 0, 3: 0}, ('2005-03-07', '2006-03-07'): {0: -3, 1: 0, 2: 3, 3: 5}, ('2005-04-06', '2006-04-05'): {0: -2, 1: -3, 2: -1, 3: -2}} for k in list(dictt.keys()): for i in range(4): plt.scatter(k[1], dictt[k[0], k[1]][i]) </code></pre> <p><a href="https://i.sstatic.net/7q8Ki.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7q8Ki.png" alt="enter image description here" /></a></p>
<python><dictionary><matplotlib><filter>
2023-05-08 21:48:22
4
390
statwoman
76,204,504
5,424,117
Django Custom API Endpoint with two parameters
<p>I want to build a fairly simple API endpoint in a Django app I inherited.</p> <p>I'm new to python and Django</p> <p>The app uses views.py with classes that extend ModelViewSet.</p> <p>It also uses a urls.py with entries like this:</p> <pre class="lang-py prettyprint-override"><code>router.register('agent_licenses', views.AgentLicenseViewSet) </code></pre> <p>and this one I came up with:</p> <pre class="lang-py prettyprint-override"><code>router.register(r'agent_licenses/(?P&lt;agent_id&gt;\d+)/&lt;state_id&gt;\d+/', views.AgentLicenseViewSet) </code></pre> <p>I need to build the fastest-responding endpoint possible and so I plan to use a custom SQL query that returns a True/False.</p> <p>Is there some straightforward way to build an endpoint (hopefully in views.py, although if that's not possible, that's ok.) that will show up in Swagger without having to wrestle with Models, ModelViewSet, urls.py, Serializers, etc?</p> <p>I have managed to get close, using the AgentLicenseViewSet, like this: (Ignore the actual code in the endpoint method - it's just test code at this point)</p> <pre class="lang-py prettyprint-override"><code>class AgentLicenseViewSet(viewsets.ModelViewSet): &quot;&quot;&quot;Agent license REST view &quot;&quot;&quot; queryset = models.AgentLicense.objects.all() serializer_class = serializers.AgentLicenseSerializer filter_backends = (django_filters.DjangoFilterBackend,) filterset_class = filters.AgentLicenseFilter @action(methods=[&quot;GET&quot;], detail=True) # def has_license_in_state(self, agent_id: int, state_id: int) -&gt; Response: def has_license_in_state(self, *_args, **_kwargs ) -&gt; Response: #print(state_id) #print(pk) # return Response(data={'message': False}) return Response(data=False) # return False </code></pre> <p>But - when I run this, I get an endpoint that appears to be ignoring the \d+ in the router.register line I made above ( it asks for strings instead of ints) and it insists that the primary key on the agent_license 
table MUST be included.</p> <p>I just want to build an endpoint that shows up in Swagger, and takes two ints (but not the primary key on the table) so that I can use them to populate my custom query and get a true/false response, but if that's not possible, any advice on how to make DjangoSwagger see the two parameters (agent_id and state_id) as ints, and eliminate the requirement that the endpoint submit a primary key would also be welcomed.</p> <p>Many thanks.</p> <p>Here is a picture of the endpoint that I currently get in Swagger.</p> <p><a href="https://i.sstatic.net/vNHMm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vNHMm.png" alt="Swagger Endpoint showing strings (instead of ints on the two fields) and primary key required" /></a></p> <p>Update: I find that although a breakpoint in my API method does not get hit when I exercise the endpoint via Swagger, the endpoint disappears when I remove it. I can't explain that.</p>
<python><django><django-rest-framework><django-views>
2023-05-08 21:33:01
2
2,474
jb62
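A hedged sketch of one alternative (names and URL layout illustrative, and this fragment needs a Django/DRF project around it to run): the `router.register` pattern above mixes regex syntax (`(?P<agent_id>\d+)`) with path-converter syntax (`<state_id>\d+`), which may be why the `\d+` is ignored. Registering a plain function-based view beside the router, with `<int:...>` converters, gets two integer parameters into Swagger without a primary key:

```python
# urls.py -- a plain path() entry beside the router; <int:...> converters
# make the schema generator render both parameters as integers, with no
# primary key involved:
#
#   path("agent_licenses/<int:agent_id>/states/<int:state_id>/has_license/",
#        views.has_license_in_state),

# views.py
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(["GET"])
def has_license_in_state(request, agent_id: int, state_id: int):
    licensed = False  # placeholder for the custom SQL true/false check
    return Response({"has_license": licensed})
```

This sidesteps ModelViewSet, serializers, and the router's detail-route machinery entirely, which matches the stated goal of a minimal, fast true/false endpoint.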
76,204,225
17,101,330
Python Kivy cannot use Video Player ffpyplayer ignored
<p>I am building my first app with kivy but cannot make the VideoPlayer work.</p> <p>There is a error-message stating: <strong>Provider: ffpyplayer(['video_ffmpeg'] ignored)</strong> which seems to be the problem.</p> <p>I have already installed ffpyplayer (aswell as Pillow) using pip install.</p> <p>When I type:</p> <pre><code>import kivy print(kivy.kivy_options['VIDEO']) </code></pre> <p>It says: <strong>('gstplayer', 'ffmpeg', 'ffpyplayer', 'null')</strong></p> <p>However, here is a minimal example:</p> <pre><code>from kivy.app import App from kivy.uix.widget import Widget from kivy.uix.videoplayer import VideoPlayer class VideoPlayerWindow(Widget): def __init__(self, **kwargs): super(VideoPlayerWindow, self).__init__(**kwargs) self.player = VideoPlayer( source=r&quot;D:\kivy-thelab\resources\commercial_free\videos\pilates1.mp4&quot;, state='play', ) class MainApp(App): def build(self): return VideoPlayerWindow() if __name__ == &quot;__main__&quot;: MainApp().run() </code></pre> <p>And the full debugging output:</p> <pre><code>[INFO ] [Logger ] Record log in C:\Users\Home\.kivy\logs\kivy_23-05-08_10.txt [INFO ] [deps ] Successfully imported &quot;kivy_deps.angle&quot; 0.3.3 [INFO ] [deps ] Successfully imported &quot;kivy_deps.glew&quot; 0.3.1 [INFO ] [deps ] Successfully imported &quot;kivy_deps.sdl2&quot; 0.4.5 [INFO ] [Kivy ] v2.1.0 [INFO ] [Kivy ] Installed at &quot;C:\Users\Home\miniconda3\envs\kivy_env\lib\site-packages\kivy\__init__.py&quot; [INFO ] [Python ] v3.9.0 | packaged by conda-forge | (default, Nov 26 2020, 07:53:15) [MSC v.1916 64 bit (AMD64)] [INFO ] [Python ] Interpreter at &quot;C:\Users\Home\miniconda3\envs\kivy_env\python.exe&quot; [INFO ] [Logger ] Purge log fired. Processing... [INFO ] [Logger ] Purge finished! 
[INFO ] [Factory ] 189 symbols loaded [INFO ] [ImageLoaderFFPy] Using ffpyplayer 4.5.0 [INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_pil, img_ffpyplayer [INFO ] [Text ] Provider: sdl2 [INFO ] [VideoFFPy ] Using ffpyplayer 4.5.0 [INFO ] [Video ] Provider: ffpyplayer(['video_ffmpeg'] ignored) [INFO ] [Window ] Provider: sdl2 [INFO ] [GL ] Using the &quot;OpenGL&quot; graphics system [INFO ] [GL ] GLEW initialization succeeded [INFO ] [GL ] Backend used &lt;glew&gt; [INFO ] [GL ] OpenGL version &lt;b'4.6.0 - Build 27.20.100.8682'&gt; [INFO ] [GL ] OpenGL vendor &lt;b'Intel'&gt; [INFO ] [GL ] OpenGL renderer &lt;b'Intel(R) HD Graphics 620'&gt; [INFO ] [GL ] OpenGL parsed version: 4, 6 [INFO ] [GL ] Shading version &lt;b'4.60 - Build 27.20.100.8682'&gt; [INFO ] [GL ] Texture max size &lt;16384&gt; [INFO ] [GL ] Texture max units &lt;32&gt; [INFO ] [Window ] auto add sdl2 input provider [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked [INFO ] [Base ] Start application main loop [ERROR ] [Image ] Error loading &lt;D:\kivy-thelab\resources\commercial_free\videos\pilates1.mp4&gt; [INFO ] [GL ] NPOT texture support is available </code></pre> <p>What am I doing wrong?</p>
<python><kivy><video-player>
2023-05-08 20:40:31
1
530
jamesB
76,204,154
1,123,336
Passing keyword arguments to NumPy __array__ functions
<p>In the past, a few questions have asked for help when getting the following exception:</p> <pre><code>TypeError: __array__() takes 1 positional argument but 2 were given </code></pre> <p>These have usually been in the context of using another package, in which the <code>__array__</code> function has been defined without allowing for additional arguments. However, the underlying problem seems to be that it is impossible to allow for additional arguments even if you wanted to.</p> <p>For example, defining the following class doesn't work:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; class MyArray: def __init__(self, arr): self.arr = arr def __array__(self, **kwargs): return np.array(self.arr, **kwargs) &gt;&gt;&gt; a=MyArray([1,2,3]) &gt;&gt;&gt; np.array(a) array([1, 2, 3]) &gt;&gt;&gt; np.array(a, dtype=np.float32) TypeError: MyArray.__array__() takes 1 positional argument but 2 were given </code></pre> <p>Does anyone know why the usual mechanism for passing on keyword arguments doesn't seem to work for the <code>np.array</code>?</p> <p>To add to the confusion, the following does work:</p> <pre><code>&gt;&gt;&gt; class MyArray: def __init__(self, arr): self.arr = arr def __array__(self, dtype=None): return np.array(self.arr, dtype) &gt;&gt;&gt; a=MyArray([1,2,3]) &gt;&gt;&gt; np.array(a, dtype=np.float32) array([1., 2., 3.], dtype=float32) </code></pre> <p>It looks as if I could add all the remaining keyword arguments that <code>np.array</code> accepts, but that doesn't seem very Pythonic.</p> <p>Am I missing something here?</p>
<python><numpy><numpy-ndarray>
2023-05-08 20:28:10
2
582
Ray Osborn
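The confusion has a concrete cause: `np.array` invokes `obj.__array__(dtype)` with `dtype` as a *positional* argument, and a `**kwargs`-only signature accepts no positional arguments at all, hence "takes 1 positional argument but 2 were given". A sketch (with `copy=None` added defensively, since NumPy 2 may also pass a `copy` keyword to the protocol):

```python
import numpy as np

class MyArray:
    def __init__(self, arr):
        self.arr = arr

    # dtype must be acceptable positionally; **kwargs alone cannot
    # receive it. copy=None keeps the signature compatible with NumPy 2.
    def __array__(self, dtype=None, copy=None):
        return np.asarray(self.arr, dtype=dtype)

a = MyArray([1, 2, 3])
out = np.array(a, dtype=np.float32)
print(out.dtype)  # float32
```

So the second class in the question works not because of the keyword name but because `dtype` can be bound positionally; listing the protocol's parameters explicitly is the intended pattern, not a workaround.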
76,204,004
5,507,389
Unpacking dictionary in for loop with indices
<p>I have the following code which iterate over a dictionary to unpack its values:</p> <pre><code>my_dict = {&quot;brand&quot;: &quot;A&quot;, &quot;product&quot;: [&quot;X&quot;, &quot;Y&quot;, &quot;Z&quot;], &quot;color&quot;: [&quot;red&quot;, &quot;green&quot;, &quot;blue&quot;]} def my_fnc(product, index): print(f&quot;Product {product} at index {index}&quot;) if __name__ == &quot;__main__&quot;: for i, product in enumerate(my_dict[&quot;product&quot;], 1): my_fnc(product, i) </code></pre> <p>The above code returns the following output:</p> <pre><code>Product X at index 1 Product Y at index 2 Product Z at index 3 </code></pre> <p>I'm now trying to retrieve the <code>color</code> values across the iteration process. The result I'm looking for is the following:</p> <pre><code>Product X at index 1 is colored red Product Y at index 2 is colored green Product Z at index 3 is colored blue </code></pre> <p>and <code>my_func</code> should be defined like this:</p> <pre><code>def my_fnc(product, index, color): print(f&quot;Product {product} at index {index} is colored {color}&quot;) </code></pre> <p>I should note that it's important that I retrieve the index in this way. I tried a couple of things (like iterating through <code>my_dict.items()</code>, but I couldn't make it work together with the index.</p>
<python><loops><dictionary>
2023-05-08 20:02:57
2
679
glpsx
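A minimal sketch of the usual pattern: `zip()` walks the `product` and `color` lists in lockstep, while `enumerate(..., 1)` still supplies the 1-based index:

```python
my_dict = {"brand": "A",
           "product": ["X", "Y", "Z"],
           "color": ["red", "green", "blue"]}

def my_fnc(product, index, color):
    return f"Product {product} at index {index} is colored {color}"

# enumerate wraps zip: i counts from 1, and each zip item unpacks into
# (product, color).
lines = [my_fnc(product, i, color)
         for i, (product, color) in enumerate(zip(my_dict["product"],
                                                  my_dict["color"]), 1)]
for line in lines:
    print(line)
```

This prints exactly the three desired lines; no iteration over `my_dict.items()` is needed because only the two parallel lists participate.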
76,203,943
13,741,789
Running a dash app from multidirectory structure
<p>I seem to have trouble running dash/(probably also flask?) apps from a multiple directory structure with python.</p> <p>Basically I have a large dash app I'm writing which would be annoying to maintain if everything was in one giant &quot;app.py&quot; file. I'd like to have a few folders to organize types of visualizations, accessing multiple sources of data, etc. As soon as I try to break it up into multiple scripts and folders I get a weird 'No module named...&quot; error which seems to have nothing to do with the imports. Here is the simplest example which recreates the error:</p> <p>Folder structure:</p> <pre><code>big_dashboard/ __init__.py __main__.py src/ __init__.py does_things/ __init__.py do_a_thing.py </code></pre> <p>Here are the contents of __main__.py</p> <pre class="lang-py prettyprint-override"><code>import dash from src.does_things.do_a_thing import do do() app = dash.Dash(__name__) app.layout = dash.html.H1('Hello World!') if __name__ == '__main__': app.run(debug = True, host = 'localhost', port = 5000) </code></pre> <p>Here are the contents of do_a_thing.py:</p> <pre class="lang-py prettyprint-override"><code>def do(): print('Thing successfully did') </code></pre> <p>Here is what prints to the console (not even an error):</p> <pre><code>Thing successfully did Dash is running on http://localhost5000/ * Serving Flask app &quot;__main__&quot; (lazy loading)  * Environment: production    WARNING: This is a development server. Do not use it in a production deployment.    Use a production WSGI server instead.  * Debug mode: on C:\...\Anaconda3\python.exe: No module named /big_dashboard </code></pre> <p>And the app immediately crashes. The printed message looks like an import error, but as do() works fine my relative imports seem to all function. Even the app successfully launches. I'm sure I'm missing something super basic and fundamental here, but that &quot;error&quot; doesn't give me much to go on.</p> <p>What am I doing wrong?</p>
<python><flask><plotly-dash>
2023-05-08 19:53:40
1
312
psychicesp
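A hedged reading of the symptom (not a verified diagnosis): the imports clearly work, since `do()` prints, so the "No module named /big_dashboard" message most plausibly comes from the werkzeug debug *reloader* re-executing the entry point and rebuilding the command line incorrectly for a `python big_dashboard/__main__.py`-style invocation. Two workarounds to try:

```python
import dash
from src.does_things.do_a_thing import do

do()
app = dash.Dash(__name__)
app.layout = dash.html.H1('Hello World!')

if __name__ == '__main__':
    # Workaround 1: launch as `python -m big_dashboard` from the directory
    # *containing* big_dashboard/, so the reloader can re-invoke the same
    # module path. Workaround 2: keep the debugger but skip the reloader:
    app.run(debug=True, use_reloader=False, host='localhost', port=5000)
```

With `use_reloader=False` code changes require a manual restart, so the `-m` invocation is the nicer long-term fix for a package-structured app.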
76,203,925
3,413,699
Custom Logging Module not working in Python
<p>I want to use same logging mechanism in all python scripts. Developed a class below class:</p> <p>File Name: ms_logging.py</p> <pre><code>import logging class ms_logging: @staticmethod def logging(): logging.basicConfig( level=logging.DEBUG, format='%(asctime)s | %(levelname)s | [%(filename)s:%(lineno)d] | %(message)s', datefmt=&quot;%Y-%m-%d %H:%M:%S&quot;, filename=&quot;/home/etl_developer/ms_de/log_directory/master_log_file.log&quot; ) logger = logging.getLogger() return logger </code></pre> <p>Then I'm trying to load that module in another Python script and use that <code>logging</code> function to insert logs. The purpose of this is to write one configuration and use them in all Python scripts to have a centralized logs</p> <p>Here is the code:</p> <pre><code>import sys sys.path.append('/home/etl_developer/ms_de/module_library/') from ms_logging import ms_logging log = ms_logging.logging() log.error('test') log.info('hello there') </code></pre> <p>This above code doesn't give any error but logging is also not working. The log file is empty.</p> <p>What is the mistake that I'm making? Any clues would be very much appreciated.</p>
<python><logging><python-logging>
2023-05-08 19:49:53
1
478
Milon Sarker
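A sketch of the likely culprit and a more robust pattern: `logging.basicConfig()` silently does *nothing* if the root logger already has handlers (for example because some other imported module configured logging first). Attaching an explicit `FileHandler` to a named logger avoids that trap; the path below is a stand-in for the real log directory:

```python
import logging
import os
import tempfile

def get_logger(name, log_file):
    # Configure an explicit handler instead of logging.basicConfig(),
    # which is a no-op when the root logger is already configured.
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on re-import
        logger.setLevel(logging.DEBUG)
        handler = logging.FileHandler(log_file)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s | %(levelname)s | [%(filename)s:%(lineno)d] | %(message)s",
            datefmt="%Y-%m-%d %H:%M:%S"))
        logger.addHandler(handler)
    return logger

log_path = os.path.join(tempfile.mkdtemp(), "master_log_file.log")
log = get_logger("ms_de", log_path)
log.error("test")
log.info("hello there")

with open(log_path) as fh:
    contents = fh.read()
print("test" in contents and "hello there" in contents)  # True
```

Every script then calls `get_logger("ms_de", ...)` and shares the same handler, giving the centralized log the class was aiming for.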
76,203,884
8,032,148
Inconsistent Flask request behavior
<p>I have a simple flask app that receives an encoded json file in a url, prints it out and imports it in to a dictionary object my_dict:</p> <pre><code>from flask import request import json @app.route('/test', methods=['POST', 'GET']) def test: rqa = request.args.get('data', '') print(rqa) my_dict = json.loads(rqa) </code></pre> <p>I need to pass input from a form field that may contain quotes, etc., which I first encode using <code>encodeURIComponent()</code></p> <pre><code>var dict = { 'key': encodeURIComponent('red (&quot;old&quot;) fox') }; var dict_en = encodeURIComponent(JSON.stringify(dict)); </code></pre> <p>Using dict_en, I create a url for the flask app:</p> <pre><code>http://localhost:5000/test?data=%7B%22key%22%3A%22red%2520(%2522old%2522)%2520fox%22%7D </code></pre> <p>When I run this locally using flask, it works fine and the result from request.args can be turned in to a dictionary object using json.loads()</p> <pre><code>{&quot;key&quot;:&quot;red%20(%22old%22)%20fox&quot;} </code></pre> <p>The problem is when I run this same code on AWS Lambda, request.args decodes my string twice, which then cannot be converted to a dictionary object due to the unescaped double quotes:</p> <pre><code>{&quot;key&quot;:&quot;red (&quot;old&quot;) fox&quot;} </code></pre> <p>How do I ensure flask.request works consistently in the two environments?</p> <p>Both environments use Python 3.8.3, Flask 1.1.2, Werkzeug 0.16.1</p>
<python><flask><aws-lambda>
2023-05-08 19:42:32
1
314
FarNorth
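A sketch of the likely root cause: the value is percent-encoded *twice* client-side (`encodeURIComponent` on the inner string and again on the whole JSON), so environments that decode a different number of times disagree. Encoding exactly once, then decoding exactly once server-side, round-trips cleanly; shown here with stdlib pieces standing in for the JS client and the Flask request parsing:

```python
import json
from urllib.parse import quote, parse_qs, urlsplit

payload = {"key": 'red ("old") fox'}

# Client side: JSON-serialize, then percent-encode the whole value ONCE.
url = "http://localhost:5000/test?data=" + quote(json.dumps(payload), safe="")

# Server side: one percent-decode (parse_qs does it) recovers valid JSON.
query = urlsplit(url).query
data = parse_qs(query)["data"][0]
print(json.loads(data))  # {'key': 'red ("old") fox'}
```

In the JS snippet above that means dropping the inner `encodeURIComponent` on the dictionary value and keeping only the outer one on `JSON.stringify(dict)`; whether API Gateway also decodes once on top of that is worth checking, but single-encoding removes the ambiguity the quotes expose.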
76,203,575
14,676,485
How to extract first value from each array-like element in column?
<p>I have a column with single elements as well as array-like elements but I don't think it looks like an array:</p> <p><a href="https://i.sstatic.net/fxcse.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fxcse.png" alt="enter image description here" /></a></p> <p>I need to extract first value from this array-like element. First, I check if it's array. If so I extract first elemnt, if not I return default value. I use this function:</p> <pre><code>def get_top_element(x): if isinstance(x, np.ndarray): return x[0] return x df_no_outliers['brand'] = df_no_outliers['brand'].apply(get_top_element) </code></pre> <p>Following error araises:</p> <blockquote> <p>IndexError: index 0 is out of bounds for axis 0 with size 0</p> </blockquote> <p>How can I make it work?</p> <p><strong>EDIT:</strong></p> <p>If I convert column to tuple then I get this:</p> <pre><code>tpl=tuple(df_no_outliers['brand']) tpl </code></pre> <p><a href="https://i.sstatic.net/tIcrj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tIcrj.png" alt="enter image description here" /></a></p>
<python><pandas>
2023-05-08 18:57:45
0
911
mustafa00
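A sketch of the probable cause: the `IndexError: index 0 is out of bounds for axis 0 with size 0` means some cells hold *empty* arrays, so `x[0]` fails. Guarding on `x.size` (with a default for the empty case) fixes it; the sample data below is fabricated to reproduce the error:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"brand": [np.array(["nike", "adidas"]),
                             "puma",
                             np.array([])]})  # empty array triggers the error

def get_top_element(x, default=None):
    # x[0] on a zero-length array raises IndexError; check size first.
    if isinstance(x, np.ndarray):
        return x[0] if x.size else default
    return x

df["brand"] = df["brand"].apply(get_top_element)
print(df["brand"].tolist())  # ['nike', 'puma', None]
```

Swap `default=None` for whatever the column's default value should be.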
76,203,482
21,420,742
Creating a column that indicates when a change in values happens by ID
<p>I know this is simple but I am having a little trouble trying to see when someone switche s roles. df</p> <pre><code> ID Date Job 101 05/2022 Sales 101 06/2022 Sales 102 12/2021 Tech 102 1/2022 Tech 102 2/2022 Finance 103 4/2022 HR 103 5/2022 Sales 103 6/2022 Tech </code></pre> <p>Desired output:</p> <pre><code>ID Date Job Switch 101 05/2022 Sales No 101 06/2022 Sales No 102 12/2021 Tech No 102 01/2022 Tech No 102 02/2022 Finance Yes 103 04/2022 HR No 103 05/2022 Sales Yes 103 06/2022 Tech Yes </code></pre> <p>I believe the right approach would be starting with <code>df.groupby('ID')[Job']</code> Any suggestions?</p>
<python><python-3.x><pandas><dataframe><group-by>
2023-05-08 18:45:39
4
473
Coding_Nubie
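A minimal sketch building on the suggested `groupby` start: shift `Job` within each `ID` and compare each row to its predecessor; the first row of a group has no predecessor (`shift()` yields NaN), so it counts as "No":

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": [101, 101, 102, 102, 102, 103, 103, 103],
    "Date": ["05/2022", "06/2022", "12/2021", "01/2022",
             "02/2022", "04/2022", "05/2022", "06/2022"],
    "Job": ["Sales", "Sales", "Tech", "Tech", "Finance",
            "HR", "Sales", "Tech"],
})

# Previous Job within the same ID; NaN marks each group's first row.
prev = df.groupby("ID")["Job"].shift()
df["Switch"] = np.where(df["Job"].ne(prev) & prev.notna(), "Yes", "No")
print(df["Switch"].tolist())
```

This reproduces the desired output column: changes at Finance (102) and at Sales and Tech (103) flag "Yes", everything else "No".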
76,203,454
653,379
Correctly sort RSS items by time
<p>I'm getting RSS items from different RSS channels. And I'd like to sort them correctly by time and take into account the time zone, from the latests to the oldests. So far, I have the following code:</p> <pre><code>import feedparser import dateutil.parser rss_channels = [ &quot;https://www.novinky.cz/rss&quot;, &quot;https://news.ycombinator.com/rss&quot;, &quot;https://unix.stackexchange.com/feeds&quot;, &quot;https://www.lupa.cz/rss/clanky/&quot;, &quot;https://www.lupa.cz/rss/n/digizone/&quot;, &quot;https://www.zive.cz/rss/sc-47/&quot;, &quot;https://bitcoin.stackexchange.com/feeds&quot;, &quot;https://vi.stackexchange.com/feeds&quot;, &quot;https://askubuntu.com/feeds&quot;, ] latest_items = [] for url in rss_channels: feed = feedparser.parse(url) for entry in feed.entries: pub_date_str = entry.published try: pub_date = dateutil.parser.parse(pub_date_str, ignoretz=True, fuzzy=True) if pub_date.tzinfo is None: pub_date = pub_date.replace(tzinfo=dateutil.tz.tzutc()) latest_items.append((entry.title, pub_date, entry.link)) except Exception as e: print(str(e)) latest_items.sort(key=lambda x: x[1], reverse=True) for title, pub_date, url in latest_items: print(f&quot;{pub_date.strftime('%Y-%m-%d %H:%M:%S %z')} - {title} - {url}&quot;) </code></pre> <p>I'm not sure if the code is correct. Could you assure me or refute and show me what's wrong? The code is very slow as well, so if it's possible to make faster, it would be great.</p>
<python><time><timezone><rss><feedparser>
2023-05-08 18:42:25
1
3,742
xralf
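A sketch of the one correctness issue worth flagging: `ignoretz=True` throws away each feed's timezone and then the code relabels every timestamp as UTC, so items from non-UTC feeds sort in the wrong place. Keeping the timezone and normalising to UTC (assuming UTC only as a fallback for genuinely naive dates) compares feeds correctly; shown offline with two fabricated pub dates:

```python
from datetime import timezone

import dateutil.parser

def parse_pub_date(pub_date_str):
    # Keep the feed's own timezone (do NOT pass ignoretz=True), then
    # normalise to UTC so items from different feeds compare correctly.
    dt = dateutil.parser.parse(pub_date_str, fuzzy=True)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # fallback assumption: UTC
    return dt.astimezone(timezone.utc)

items = [
    ("a", parse_pub_date("Mon, 08 May 2023 18:42:25 +0200")),  # 16:42 UTC
    ("b", parse_pub_date("Mon, 08 May 2023 17:30:00 GMT")),    # 17:30 UTC
]
items.sort(key=lambda x: x[1], reverse=True)
print([title for title, _ in items])  # ['b', 'a']
```

On speed: parsing is cheap; the slowness is almost certainly the sequential network fetches, so fetching the feeds concurrently (e.g. `concurrent.futures.ThreadPoolExecutor`) is where the time is won.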
76,203,356
17,243,835
Plotly - Use custom Font
<p>I am trying to use a custom installed font in Plotly, specifically Plotly Express, however my fonts do not seem to be recognized by Plotly.</p> <p>My font is being recognized by Tkinter:</p> <pre><code>from tkinter import Tk, font root = Tk() print(font.families()) </code></pre> <p>Output:</p> <pre><code> ('System', 'Terminal', 'Fixedsys', 'Modern', 'Roman', 'Script', 'Courier', 'MS Serif', ..., '**HaboroSans-NormRegular**', ...) </code></pre> <p>However: When I try to use this font in Plotly:</p> <pre><code>px.line(x=[1,2,3], y=[1,2,3], title = 'Plot 123').update_layout(titlefont_family = 'HaboroSans-NormRegular') </code></pre> <p>I get a graph returned with the default plotly font.</p> <p>Any Ideas?</p>
<python><fonts><plotly>
2023-05-08 18:28:13
0
355
drew wood
76,203,330
181,098
Backslash gets stripped from string when trying to insert into mysql column using python
<p>As per title, when I try to insert a string containing backslashes they get removed. Say I have the following statement:</p> <p>filename=&quot;D:\myfiles\log files\2022-12-05 11-33-31.dat&quot;</p> <p>then I insert it into my MySQL table with the following:</p> <pre><code> sql=&quot;INSERT INTO df_log (log_file_name,file_hash) VALUES ('{log_file_name}');&quot;.format(log_file_name=(log_file_name)) cursor.execute(sql) conn.commit() </code></pre> <p>It executes happily but then when I do a select from the database, I see:</p> <pre><code>D:myfileslog files2022-12-05 11-33-31.dat </code></pre> <p>All the \ are missing. I've tried double backslash to try to escape it, but still no luck.</p> <p>This is similar to <a href="https://stackoverflow.com/questions/68327214/convert-a-string-to-literal-type">Convert a string to literal type</a> and <a href="https://stackoverflow.com/questions/7553200/storing-windows-path-in-mysql-without-escaping-backslashes">Storing windows path in MySQL without escaping backslashes</a> but nobody seems to understand the question or able to replicate the issue.</p>
<python><mysql>
2023-05-08 18:24:08
0
3,367
Hein du Plessis
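A sketch of the standard remedy: never interpolate values into SQL with `str.format()`; a parameterised query lets the driver handle escaping, so backslashes survive untouched (and SQL injection is closed off as a bonus). Demonstrated with sqlite3 so it runs anywhere; MySQL connectors use `%s` placeholders instead of `?` but the pattern is identical:

```python
import sqlite3

filename = r"D:\myfiles\log files\2022-12-05 11-33-31.dat"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE df_log (log_file_name TEXT)")
# The driver binds the value; no string formatting, no escaping by hand.
conn.execute("INSERT INTO df_log (log_file_name) VALUES (?)", (filename,))
stored, = conn.execute("SELECT log_file_name FROM df_log").fetchone()
print(stored == filename)  # True: backslashes intact
```

With the original `format()` approach, MySQL's string-literal parser consumes `\m`, `\l`, etc. as (empty) escape sequences, which is exactly the disappearing-backslash symptom described.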
76,203,293
183,315
Presenting an unnamed query_string parameter in OpenAPI docs using FastAPI
<p>I'm building a Python 3.10.9 FastAPI application. On my, HTTP GET single resource API endpoints, I'm accepting multiple query string elements, like this small sample:</p> <pre><code>amount__gt=1000&amp;start_date__lt=2023-05-10 </code></pre> <p>To get a query_string placeholder on the OpenAPI docs created by FastAPI I have a <code>qs</code> parameter. When I call the endpoint using the OpenAPI docs I get this back:</p> <pre><code>qs=amount__gt=1000&amp;start_date__lt=2023-05-10 </code></pre> <p>Which comes back as a single parameter and value. However, when calling the endpoint with PostMan, I can build up the query string with individual elements, and get back the individual parameters and values. This is how the endpoint would be called by a program. Is there a way to get FastAPI to present an unnamed parameter in the OpenAPI interface?</p>
<python><fastapi><query-string>
2023-05-08 18:20:38
1
1,935
writes_on
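A hedged sketch (endpoint name illustrative, and OpenAPI has no first-class notion of a truly unnamed parameter): the individual query-string elements are always available from the request object, so programmatic callers can send them separately without any `qs` placeholder:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/invoices")
async def list_invoices(request: Request):
    # amount__gt=1000&start_date__lt=2023-05-10 arrives as separate
    # elements; no single named parameter is required.
    filters = dict(request.query_params)
    return {"filters": filters}
```

For the docs page, two common compromises are declaring the known filter names as `Optional` typed query parameters (they then render individually in Swagger) or post-processing the schema returned by `app.openapi()` to describe the free-form filtering convention.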
76,203,192
7,097,192
Stop a list in Python being shared between function invocations when multiprocessing
<p>I'm passing a function into a queue for parallel processing via <code>multiprocessing</code>, and that function takes 2 lists as inputs. I'm doing this in a loop 6 times, where each time I append the loops counter to the end of the list (see output section below). I would have expected each list to be a unique local variable for each function execution, but it seems like they're sharing memory pointers, and each time the function runs the list just gets longer. How can I make each list be a distinct variable? I've found many questions on here about trying to <em>make</em> a shared varaible, but none about <em>unsharing</em> one</p> <p>Or alternatively, is there a better/different way I should be doing this in the first place? I'm largely copy/pasting code snippits for the multiprocessing part, as I'm A) Not great at scripting in general; and B) never tried parallel processing before</p> <p>Background: I'm trying to test every possible die combination using the rules of Risk and output the probability of the attacker or defender winning, but instead of the normal &quot;Attacker rolls 3 die, defender rolls 2&quot;, I'm looking at larger numbers of die for both, and am trying to add parallel processing to the script to combat the exponential increase in time per extra die.</p> <p>Code snippit (simplified for testing)</p> <pre><code>#!/usr/bin/python3.5 #good break down of logic for traditional 3v2 in here: http://www.datagenetics.com/blog/november22011/index.html #multithreading logic from: https://superfastpython.com/parallel-nested-for-loops-in-python/ import threading import multiprocessing.pool import queue def compare(attack, defence): aWon = 0 print(defence) for i in range(len(defence)): if attack[i] &gt; defence[i]: aWon += 1 aWins[aWon] += 1 #aWins is a shared global array to keep track of how many attacker die won a given encounter def d1(attack, defence): global queue defence.sort(reverse=True) for i in range(1, 7): defence.append(i+10) 
queue.put((compare(attack, defence), (i,))) aWins = [0,0,0,0,0,0] #0 index incremented when attacker won 0 die in a set; 1 index when they won 1, etc global queue queue = queue.Queue() queue.put((d1([1,1,1,1,1,1], [1,1]))) total_tasks = 216 with multiprocessing.pool.ThreadPool(total_tasks) as pool: for _ in range(total_tasks): task, args = queue.get() pool.apply_async(task, args) pool.close() pool.join() </code></pre> <p>debug/print output:</p> <pre><code>[1, 1, 11] [1, 1, 11, 12] [1, 1, 11, 12, 13] [1, 1, 11, 12, 13, 14] [1, 1, 11, 12, 13, 14, 15] </code></pre> <p>Expected debug/print output</p> <pre><code>[1, 1, 11] [1, 1, 12] [1, 1, 13] [1, 1, 14] [1, 1, 15] </code></pre>
<python><list><multiprocessing><queue><python-multiprocessing>
2023-05-08 18:06:35
0
781
Shahad
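A sketch of the core bug, separate from the multiprocessing machinery: `d1` keeps appending to the *same* `defence` list object on every loop iteration, so every queued task sees the grown list. Giving each task its own copy produces the expected output; distilled to the list-handling alone:

```python
import copy

def build_defences(defence):
    # Each task gets its OWN copy; mutating the shared `defence` object
    # is what made every queued list keep growing.
    tasks = []
    for i in range(1, 7):
        local = copy.copy(defence)  # shallow copy suffices for ints
        local.append(i + 10)
        tasks.append(local)
    return tasks

print(build_defences([1, 1]))
# [[1, 1, 11], [1, 1, 12], [1, 1, 13], [1, 1, 14], [1, 1, 15], [1, 1, 16]]
```

A second, independent bug: `queue.put((compare(attack, defence), (i,)))` *calls* `compare` immediately and queues its return value (`None`); to defer work to the pool, queue the callable and its arguments separately, e.g. `queue.put((compare, (attack, local_defence)))`.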
76,203,141
5,959,685
Pandas rowwise partial matching with any item from a list
<p>I am trying to do a rowwise partial match in pandas dataframe so that if any of the terms from a list matches any word from a column. For example, I have a dataframe df_x</p> <pre><code> name terms_list 0 acupuncturist [acupuncturist, acupuncture] 1 seafood restaurant [seafood, fish] 2 pet daycare [pet, dog, k9] 3 seafood restaurant [restaurant] 4 seafood grocery [restaurant] </code></pre> <p>which should have an output like below</p> <pre><code> match 0 yes 1 yes 2 yes 3 yes 4 no </code></pre> <p>I tried</p> <pre><code>df_x[df_x.apply(lambda x: df_x['name'] in df_x['terms_list'], axis=1)] </code></pre> <p>But getting an error as <code>TypeError: unhashable type: 'Series' </code>. How can I resolve this error? Is there a better way to do this.</p> <p>Reproducible input:</p> <pre><code>df = pd.DataFrame({'name': ['acupuncturist', 'seafood restaurant', 'pet daycare', 'seafood restaurant', 'seafood grocery'], 'terms_list': [['acupuncturist', 'acupuncture'], ['seafood', 'fish'], ['pet', 'dog', 'k9'], ['restaurant'], ['restaurant']] }) </code></pre>
<python><pandas><dataframe>
2023-05-08 17:59:27
4
423
Dutt
76,203,128
14,963,549
How to translate a query executed by a cursor in a pandas data frame? (Databricks)
<p>I have a script that executes a query through a cursor (this is the only way to query that database). I have to analyse its data, but that is hard to do with a raw cursor, so I tried to translate the result into a pandas data frame, without success. The script is as follows:</p> <pre><code>from databricks import sql import os connection = sql.connect( server_hostname = &quot;hostname&quot;, http_path = &quot;http_path&quot;, access_token = &quot;token&quot;) cursor = connection.cursor() cursor.execute(&quot;SELECT * from schema.table limit 10&quot;) print(cursor.fetchall()) cursor.close() connection.close() </code></pre> <p>Could you help me or guide me on how to translate this cursor result into a Pandas DataFrame?</p>
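One approach, assuming the Databricks connector follows the DB-API 2.0 cursor interface (so `cursor.description` holds the column names after `execute`): build the frame from `fetchall()` plus those names. The demo below uses the stdlib `sqlite3` driver purely as a stand-in cursor.

```python
import sqlite3
import pandas as pd

def cursor_to_dataframe(cursor):
    """Build a DataFrame from any DB-API 2.0 cursor after execute()."""
    columns = [col[0] for col in cursor.description]
    return pd.DataFrame.from_records(cursor.fetchall(), columns=columns)

# Demo with sqlite3; a Databricks cursor exposing description/fetchall
# should work the same way with this helper.
cur = sqlite3.connect(':memory:').cursor()
cur.execute("SELECT 1 AS id, 'alice' AS name")
df = cursor_to_dataframe(cur)
```

In the original script this would replace the `print(cursor.fetchall())` line, before the cursor is closed.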
<python><sql><pandas><dataframe><databricks>
2023-05-08 17:57:52
1
419
Xkid
76,202,975
1,187,968
GLIBC_2.28 not found in Docker image amazonlinux
<p>I am using docker image <code>amazonlinux</code> to build my aws-lambdas in Python. Now, I'm getting the following build error.</p> <pre><code>/lib64/libc.so.6: version `GLIBC_2.28' not found </code></pre> <p>It was triggered by the Python <code>cryptography</code> package. I can do version locking <code>cryptography==3.4.8</code> to fix the problem.</p> <p>However, I want to install GLIBC_2.28 into the image. How should I do it?</p>
<python><linux><amazon-web-services><docker>
2023-05-08 17:36:29
0
8,146
user1187968
76,202,829
4,879,688
Why is there no speed benefit from in-place multiplication when returning a numpy array?
<p>I have defined two functions as a minimal working example.</p> <pre><code>In [2]: A = np.random.random(10_000_000) In [3]: def f(): ...: return A.copy() * np.pi ...: In [4]: def g(): ...: B = A.copy() ...: B *= np.pi ...: return B </code></pre> <p>Both of them return the same result:</p> <pre><code>In [5]: assert all(f() == g()) </code></pre> <p>but I would expect <code>g()</code> to be faster, as augmented assignment is (for <code>A</code>) more than 4 times as fast as multiplication:</p> <pre><code>In [7]: %timeit B = A.copy(); B * np.pi 82.2 ms ± 301 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [8]: %timeit B = A.copy(); B *= np.pi 55 ms ± 174 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [9]: %timeit B = A.copy() 46.3 ms ± 664 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>Sadly, there is no speedup:</p> <pre><code>In [10]: %timeit f() 54.5 ms ± 150 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [11]: %timeit g() 54.6 ms ± 46.1 µs per loop (mean ± std. dev. 
of 7 runs, 10 loops each) </code></pre> <p>Of course <code>dis.dis(g)</code> shows some overhead when compared to <code>dis.dis(f)</code> (2 * STORE_FAST + 2 * LOAD_FAST):</p> <pre><code>In [26]: dis.dis(f) 2 0 LOAD_GLOBAL 0 (A) 2 LOAD_METHOD 1 (copy) 4 CALL_METHOD 0 6 LOAD_GLOBAL 2 (np) 8 LOAD_ATTR 3 (pi) 10 BINARY_MULTIPLY 12 RETURN_VALUE In [27]: dis.dis(g) 2 0 LOAD_GLOBAL 0 (A) 2 LOAD_METHOD 1 (copy) 4 CALL_METHOD 0 6 STORE_FAST 0 (B) 3 8 LOAD_FAST 0 (B) 10 LOAD_GLOBAL 2 (np) 12 LOAD_ATTR 3 (pi) 14 INPLACE_MULTIPLY 16 STORE_FAST 0 (B) 4 18 LOAD_FAST 0 (B) 20 RETURN_VALUE </code></pre> <p>but for <code>A = np.random.random(1)</code> the overhead (difference in execution time) is less than 2 µs.</p> <p>To make things even more confusing I defined a third function <code>h()</code> which behaves as expected (is slower than <code>f()</code>):</p> <pre><code>In [19]: def h(): ...: B = A.copy() ...: return B * np.pi ...: In [20]: %timeit h() 81.9 ms ± 171 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>but <code>dis.dis(h)</code> gives me no insight why:</p> <pre><code>In [28]: dis.dis(h) 2 0 LOAD_GLOBAL 0 (A) 2 LOAD_METHOD 1 (copy) 4 CALL_METHOD 0 6 STORE_FAST 0 (B) 3 8 LOAD_FAST 0 (B) 10 LOAD_GLOBAL 2 (np) 12 LOAD_ATTR 3 (pi) 14 BINARY_MULTIPLY 16 RETURN_VALUE </code></pre> <p>Why there is no speed benefit of in-place multiplication when returning a numpy array, or maybe why <code>f()</code> gets the speed benefit despite of binary multiplication?</p> <p>I use Python 3.7.12 and numpy 1.21.6.</p>
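One way to sidestep the copy-then-multiply pattern entirely is to let NumPy write the product straight into a preallocated buffer via the `out=` parameter, so no intermediate copy of `A` is ever materialized. This is a sketch of the idea, not an explanation of the timing anomaly itself (which is commonly attributed, with some hedging, to NumPy eliding the temporary in `A.copy() * np.pi` when the operand's refcount is 1):

```python
import numpy as np

A = np.random.random(1_000_000)
B = np.empty_like(A)

# One pass over memory: read A, write A * pi into B.
# Avoids both the separate copy and a second full write.
np.multiply(A, np.pi, out=B)
```

Since the whole workload is memory-bandwidth bound at this array size, saving one full read/write pass is where any real speedup would come from.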
<python><numpy><performance><optimization><augmented-assignment>
2023-05-08 17:15:04
1
2,742
abukaj
76,202,643
14,790,056
Total time elapsed between the first fund addition and fund withdrawal
<p>I have the following dataframe:a positive value of LIQUIDITY_adj means fund addition and negative implies withdrawal. I want to compute time elapsed between the first liquidity addition until the whole funds are withdrawn.</p> <pre><code> BLOCK_TIMESTAMP NF_TOKEN_ID LIQUIDITY_adj LIQUIDITY_cumsum 0 2021-05-04 22:11:00+00:00 33.0 3.470648e+17 3.470648e+17 1 2021-05-04 22:42:08+00:00 33.0 -3.470648e+17 0.000000e+00 2 2021-05-05 02:10:21+00:00 41.0 2.299857e+14 2.299857e+14 3 2021-05-05 02:21:03+00:00 41.0 1.373000e+10 2.299995e+14 4 2021-05-05 02:30:25+00:00 41.0 -2.299995e+14 0.000000e+00 5 2021-05-05 17:57:32+00:00 141.0 1.163028e+23 1.163028e+23 6 2021-05-05 20:24:07+00:00 141.0 -1.163028e+23 0.000000e+00 7 2021-05-05 18:05:11+00:00 151.0 4.127229e+16 4.127229e+16 8 2021-05-05 18:10:41+00:00 151.0 -4.127229e+16 0.000000e+00 9 2021-05-05 18:06:06+00:00 152.0 9.695097e+20 9.695097e+20 10 2021-05-05 18:08:55+00:00 152.0 -9.695097e+20 0.000000e+00 11 2021-05-05 18:08:16+00:00 157.0 1.377613e+20 1.377613e+20 12 2021-05-05 18:09:23+00:00 157.0 -1.377613e+20 0.000000e+00 13 2021-05-05 18:18:36+00:00 157.0 1.199888e+19 1.199888e+19 14 2021-05-05 23:31:48+00:00 157.0 -1.199888e+19 0.000000e+00 </code></pre> <p>I tried</p> <pre><code>lp_shortterm['DURATION_HR'] = lp_shortterm.groupby('NF_TOKEN_ID')['BLOCK_TIMESTAMP'].diff().dt.total_seconds() / 3600 </code></pre> <p>and got this</p> <pre><code> BLOCK_TIMESTAMP NF_TOKEN_ID LIQUIDITY_adj LIQUIDITY_cumsum DURATION_HR 0 2021-05-04 22:11:00+00:00 33.0 3.470648e+17 3.470648e+17 NaN 1 2021-05-04 22:42:08+00:00 33.0 -3.470648e+17 0.000000e+00 0.518889 2 2021-05-05 02:10:21+00:00 41.0 2.299857e+14 2.299857e+14 NaN 3 2021-05-05 02:21:03+00:00 41.0 1.373000e+10 2.299995e+14 0.178333 4 2021-05-05 02:30:25+00:00 41.0 -2.299995e+14 0.000000e+00 0.156111 5 2021-05-05 17:57:32+00:00 141.0 1.163028e+23 1.163028e+23 NaN 6 2021-05-05 20:24:07+00:00 141.0 -1.163028e+23 0.000000e+00 2.443056 7 2021-05-05 18:05:11+00:00 151.0 4.127229e+16 
4.127229e+16 NaN 8 2021-05-05 18:10:41+00:00 151.0 -4.127229e+16 0.000000e+00 0.091667 9 2021-05-05 18:06:06+00:00 152.0 9.695097e+20 9.695097e+20 NaN 10 2021-05-05 18:08:55+00:00 152.0 -9.695097e+20 0.000000e+00 0.046944 11 2021-05-05 18:08:16+00:00 157.0 1.377613e+20 1.377613e+20 NaN 12 2021-05-05 18:09:23+00:00 157.0 -1.377613e+20 0.000000e+00 0.018611 13 2021-05-05 18:18:36+00:00 157.0 1.199888e+19 1.199888e+19 0.153611 14 2021-05-05 23:31:48+00:00 157.0 -1.199888e+19 0.000000e+00 5.220000 </code></pre> <p>but the issue with this is, for instance, <code>NF_TOKEN_ID = 41</code>, there are two values for <code>DURATION_HR</code> but I just want one value, which should be the difference between <code>2021-05-05 02:10:21+00:00</code> and <code>2021-05-05 02:30:25+00:00</code>. For <code>NF_TOKEN_ID = 157</code> there should be two values since funds are added/withdrawn twice. Basically, <code>DURATION_HR</code> values should only exist where the value of <code>LIQUIDITY_cumsum</code> is 0.</p>
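One way to get a single duration per add-to-full-withdrawal cycle: label each cycle per token (a cycle ends where `LIQUIDITY_cumsum` returns to zero), then take last-minus-first timestamp within each cycle. The sketch below uses a trimmed-down version of the frame (tokens 41 and 157 only); column names follow the question.

```python
import pandas as pd

df = pd.DataFrame({
    'BLOCK_TIMESTAMP': pd.to_datetime([
        '2021-05-05 02:10:21', '2021-05-05 02:21:03', '2021-05-05 02:30:25',
        '2021-05-05 18:08:16', '2021-05-05 18:09:23',
        '2021-05-05 18:18:36', '2021-05-05 23:31:48',
    ]),
    'NF_TOKEN_ID': [41, 41, 41, 157, 157, 157, 157],
    'LIQUIDITY_cumsum': [2.3e14, 2.3e14, 0.0, 1.4e20, 0.0, 1.2e19, 0.0],
})

# A cycle ends where cumulative liquidity returns to zero. Label rows by
# counting zero rows strictly *before* each row, per token, so every row
# of one add->withdraw cycle shares a label.
is_zero = df['LIQUIDITY_cumsum'].eq(0).astype(int)
cycle = is_zero.groupby(df['NF_TOKEN_ID']).cumsum() - is_zero

# One duration per (token, cycle): last timestamp minus first, in hours.
duration_hr = (
    df.groupby([df['NF_TOKEN_ID'], cycle])['BLOCK_TIMESTAMP']
      .agg(lambda s: (s.iloc[-1] - s.iloc[0]).total_seconds() / 3600)
)
```

For token 41 this yields a single value (02:10:21 to 02:30:25); for token 157 it yields two (0.018611 and 5.22 hours), matching the desired behavior.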
<python><pandas><dataframe>
2023-05-08 16:49:12
1
654
Olive
76,202,551
11,475,651
Converting a JSON to a Pandas data frame in Python: why is it not working?
<p>I am trying to get a very straightforward bit of information from an API released by NREL (the National Renewable Energy Lab in the US):</p> <p><a href="https://developer.nrel.gov/docs/transportation/alt-fuel-stations-v1/all/" rel="nofollow noreferrer">https://developer.nrel.gov/docs/transportation/alt-fuel-stations-v1/all/</a></p> <p>My script in Python looks like this:</p> <pre><code>import requests import csv import pandas as pd full_request = &quot;https://developer.nrel.gov/api/alt-fuel-stations/v1.json?api_key=[TADA]&amp;fuel_type=E85,ELEC&amp;location=NewWestminster&quot; gotten = requests.get(full_request) print(gotten.status_code) print(type(gotten)) json_type = gotten.json() dicty = json.loads(gotten) </code></pre> <p>In the very last line, informed by previous questions here on Stack Exchange, I thought that maybe I would be converting it to a dictionary that I could then convert to a dataframe, just something plain and readable. This is what my console output looks like, however:</p> <pre><code>200 &lt;class 'requests.models.Response'&gt; &lt;class 'dict'&gt; Traceback (most recent call last): File &quot;new_west_request.py&quot;, line 21, in &lt;module&gt; dicty = json.loads(gotten) NameError: name 'json' is not defined </code></pre> <p>So I'm just looking for a quick way to get this into a form where I can put it into a spreadsheet. I confirmed that there was a response. Are some of the functions deprecated or something?</p> <p>Thank you!</p>
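The `NameError` is simply a missing `import json`, but the call is also unnecessary: `gotten.json()` already returns parsed Python objects (a dict here), so there is nothing left to `json.loads`. A sketch of the remaining step, with a local stand-in dict instead of a live API call; the `fuel_stations` key is an assumption from the NREL docs, so verify it against your actual response:

```python
import pandas as pd

# Stand-in for gotten.json(); the real NREL payload nests the station
# records under a list (assumed key: 'fuel_stations').
payload = {
    'total_results': 2,
    'fuel_stations': [
        {'id': 1, 'fuel_type_code': 'ELEC', 'city': 'New Westminster'},
        {'id': 2, 'fuel_type_code': 'E85', 'city': 'New Westminster'},
    ],
}

# Flatten the list of dicts into a frame, then write a spreadsheet-ready CSV.
df = pd.json_normalize(payload['fuel_stations'])
df.to_csv('stations.csv', index=False)
```

In the original script this would replace the `dicty = json.loads(gotten)` line, with `payload = gotten.json()`.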
<python>
2023-05-08 16:38:12
2
317
Abed
76,202,535
9,328,846
How to replace character in the index column of a Pandas Dataframe
<p>I have the following code piece:</p> <pre><code>d = {'col1': [0, 1, 2, 3], 'col2': [&quot;AAAD&quot;, &quot;BBC&quot;, &quot;KLMA&quot;, &quot;ABC&quot;]} df = pd.DataFrame(data=d) df </code></pre> <p>Output:</p> <pre><code> col1 col2 0 0 AAAD 1 1 BBC 2 2 KLMA 3 3 ABC </code></pre> <p>Then I set the index to &quot;col2&quot;, and after that I need to replace any occurrence of &quot;.&quot; with &quot;-&quot; in the index column (if there is any). Here is what I do:</p> <pre><code>df.index = df['col2'] df.index = df.index.str.replace('.','-', regex=True) df </code></pre> <p>Output:</p> <pre><code> col1 col2 col2 AAAD 0 AAAD BBC 1 BBC KLMA 2 KLMA ABC 3 ABC </code></pre> <p>Since there is no &quot;.&quot; in the index column, nothing is changed.</p> <p>This was just an example I produced. I have the same code in a production environment and it produces the following dataframe at the end:</p> <pre><code> col1 col2 col2 ---- 0 AAAD --- 1 BBC ---- 2 KLMA --- 3 ABC </code></pre> <p>I just cannot understand why. Is it related to the Python version? In the production environment, which I am not able to share here due to its complexity, it replaces all characters with &quot;-&quot;. Why am I having this problem?</p>
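The all-dashes output is consistent with `'.'` being treated as a regex: with `regex=True`, `.` is the "any character" wildcard, so every character gets replaced. A minimal sketch of both fixes, treating the dot literally:

```python
import pandas as pd

df = pd.DataFrame({'col1': [0, 1], 'col2': ['A.B', 'KLMA']}).set_index('col2')

# '.' is a regex wildcard, so regex=True would replace *every* character.
# Either treat the pattern as a literal string ...
fixed_literal = df.index.str.replace('.', '-', regex=False)
# ... or escape the dot if you stay in regex mode.
fixed_escaped = df.index.str.replace(r'\.', '-', regex=True)
```

A likely reason the two environments behave differently is a pandas version difference: the default of `regex` in `str.replace` changed across versions, so the same call can be literal in one environment and regex in another.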
<python><python-3.x><pandas><dataframe>
2023-05-08 16:35:18
1
2,201
edn
76,202,510
9,430,855
dictionary to "long" dataframe
<p>Is it possible to take a dictionary and transform it into a &quot;long&quot; dataframe in polars? basically have input that looks like</p> <pre><code>input_ = {&quot;a&quot;: 1, &quot;b&quot;: 2, &quot;c&quot;: 3} </code></pre> <p>and I want output like</p> <pre><code>┌───────┬───────┐ │ col_0 ┆ col_1 │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═══════╡ │ a ┆ 1 │ │ b ┆ 2 │ │ c ┆ 3 │ └───────┴───────┘ </code></pre> <p>With pandas I use:</p> <pre><code>pd.DataFrame.from_dict(input_, orient=&quot;index&quot;).reset_index() </code></pre>
<python><dataframe><python-polars>
2023-05-08 16:32:08
2
3,736
dave-edison
76,202,488
11,246,056
"MockerFixture" has no attribute "assert_called_once" [attr-defined]
<p>In <code>module.py</code>, I have:</p> <pre class="lang-py prettyprint-override"><code>def func(x: int) -&gt; str | None: if x &gt; 9: return &quot;OK&quot; return None def main(x: int) -&gt; str | None: return func(x) </code></pre> <p>In <code>test_module.py</code>, I have:</p> <pre class="lang-py prettyprint-override"><code>import pytest from pytest_mock import MockerFixture from module import main @pytest.fixture(name=&quot;func&quot;) def fixture_func(mocker: MockerFixture) -&gt; MockerFixture: return mocker.patch(&quot;module.func&quot;, autospec=True) def test_main(func: MockerFixture) -&gt; None: _ = main(11) func.assert_called_once() </code></pre> <p><code>pytest</code> says:</p> <pre><code>1 passed in 0.01s [100%] </code></pre> <p>But <code>mypy</code> generates the following error:</p> <pre class="lang-py prettyprint-override"><code>test_module.py:13:5: error: &quot;MockerFixture&quot; has no attribute &quot;assert_called_once&quot; [attr-defined] Found 1 error in 1 file </code></pre> <p>What am I missing?</p>
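One likely fix is in the annotations: `mocker.patch(...)` returns a `MagicMock`, while `MockerFixture` is only the type of the helper that *creates* patches, so annotating the fixture and test parameter as `MagicMock` gives mypy the `assert_*` API. A sketch of the idea (the fixture/test signatures are shown in comments; the runnable part below demonstrates that the assert methods live on `MagicMock`):

```python
from unittest.mock import MagicMock

# Suggested annotations for the original test file:
#     def fixture_func(mocker: MockerFixture) -> MagicMock:
#         return mocker.patch("module.func", autospec=True)
#     def test_main(func: MagicMock) -> None: ...
# The patched callable itself carries the assert_* API:
mock = MagicMock(return_value="OK")
result = mock(11)
mock.assert_called_once_with(11)
```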
<python><pytest><mypy><python-typing><pytest-mock>
2023-05-08 16:28:51
1
13,680
Laurent
76,202,360
10,133,797
Why is complex conjugation so slow?
<p>It's on par with complex multiplication, which boggles my mind:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np def op0(x): return np.conj(x) def op1(x0, x1): return x0 * x1 def op2(x0, x1): x0[:] = x1 for N in (50, 500, 5000): print(f&quot;\nshape = ({N}, {N})&quot;) x0 = np.random.randn(N, N) + 1j*np.random.randn(N, N) x1 = np.random.randn(N, N) + 1j*np.random.randn(N, N) %timeit op0(x0) %timeit op1(x0, x1) %timeit op2(x0, x1) </code></pre> <pre><code>shape = (50, 50) 3.55 µs ± 143 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 4.85 µs ± 261 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 1.85 µs ± 116 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) shape = (500, 500) 1.52 ms ± 60.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 1.96 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 299 µs ± 50.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) shape = (5000, 5000) 163 ms ± 4.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 185 ms ± 11.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 39.8 ms ± 399 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>Why is flipping the sign of <code>x.imag</code> so expensive? Surely, at low level, it's much easier than several multiplications and additions (<code>(a + j*b)*(c + j*d)</code>)?</p> <p>Windows 10 x64, numpy 1.23.5, Python 3.10.4</p>
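A plausible reading of the timings is that all three operations are memory-bound at these sizes: `np.conj(x)` allocates and fills a fresh output array, so its cost is dominated by memory traffic, not the sign flip. One way to test that hypothesis is to conjugate in place with the ufunc's `out=` parameter, which skips the allocation and one full write of a new buffer; a minimal sketch:

```python
import numpy as np

x = np.random.randn(1000, 1000) + 1j * np.random.randn(1000, 1000)
expected = x.conj().copy()

# In-place conjugation: flips the sign of x.imag without
# allocating a second (N, N) complex array.
np.conjugate(x, out=x)
```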
<python><numpy><performance>
2023-05-08 16:09:18
3
19,954
OverLordGoldDragon
76,202,346
3,453,901
Recommended approach for retrieving rotating RDS credentials stored in Secrets Manager for python web app?
<p>I have a <code>flask</code> app running on <code>ECS</code> with <code>EC2 instance type</code> that accesses a <code>RDS</code> database. I'm storing the database credentials in <code>Secrets Manager</code>. I'd like to enable the built-in rotation provided by <code>Secrets Manager</code>, but I'm unsure about the best way to approach it.</p> <p>I don't want to poll <code>Secrets Manager</code> for the credentials every time I need to make a request because of the cost. I also can't hard-code the credentials with a single call to <code>Secrets Manager</code> to use for the lifecycle of the app since the credentials will eventually be rotated.</p> <p>Thoughts I have are to use a caching mechanism and a background task that polls <code>Secrets Manager</code> for changes in <code>Secrets Manager</code> credentials at a fixed interval. This still seems costly to me though since you're constantly polling <code>Secrets Manager</code>. The other issue is if the credentials are rotated between the fixed interval then they will be invalid until the background task runs again. I then have to worry about securing the credentials in the cache as well.</p> <p>My other thought is to have an CloudWatch event tied to when <code>Secrets Manager</code> rotates the credentials. However, I'm having a hard time on how to trigger an event in my running <code>flask</code> app on <code>ECS</code> to update the credentials from <code>Secrets Manager</code>.</p>
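One common pattern that avoids fixed-interval polling: cache the secret locally with a TTL, and refresh on demand when the database rejects the cached credentials (i.e. when rotation actually happened). AWS also publishes a caching client for Secrets Manager (`aws-secretsmanager-caching`) that implements this idea; the hand-rolled sketch below is illustrative, with a hypothetical `fetch` callable standing in for the Secrets Manager call:

```python
import time

class RotatingCredentialCache:
    """Cache a secret locally; refresh on TTL expiry or when the caller
    reports that the cached value stopped working (rotation happened)."""

    def __init__(self, fetch, ttl_seconds=3600):
        self._fetch = fetch          # e.g. a boto3 get_secret_value call
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self, force_refresh=False):
        stale = time.monotonic() - self._fetched_at > self._ttl
        if force_refresh or stale or self._value is None:
            self._value = self._fetch()
            self._fetched_at = time.monotonic()
        return self._value

# Demo with a fake fetcher that counts calls.
calls = []
cache = RotatingCredentialCache(lambda: calls.append(1) or f"pw{len(calls)}")
first = cache.get()
second = cache.get()                        # served from cache, no new fetch
refreshed = cache.get(force_refresh=True)   # e.g. after a DB auth failure
```

On a database auth error, retry once with `force_refresh=True`; that covers rotations between refreshes without polling, and Secrets Manager's two-active-versions rotation window usually keeps the old credentials valid briefly anyway.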
<python><amazon-web-services><flask><aws-secrets-manager>
2023-05-08 16:07:11
2
2,274
Alex F
76,202,342
13,488,243
works in postman but python request failing error":"Invalid access token
<p>scratching my head around this simple code giving error 401:invalid access token. could it be due to some endpoint security software tampering the request? code is actually generated by postman, believe no issue with it!</p> <pre><code> import requests url = &quot;&lt;url&gt;&quot; payload = {} headers = { 'Authorization': 'Bearer &lt;token&gt;', 'Cookie': 'xxxx' } response = requests.request(&quot;GET&quot;, url, headers=headers, data=payload) print(response.text) </code></pre>
<python><client>
2023-05-08 16:07:00
2
459
uman dev
76,202,255
2,451,520
How can I reenable the warning about unsupported Python version by VS Code debugger
<p>I'm using Visual Studio Code 1.78.0 and version v2023.8.0 of the Python extension on Windows 10. When I tried to debug an old project requiring Python 2.7.13 (provided by a virtual environment), I got a warning saying <em>The debugger in the python extension no longer supports python version minor than 3.7</em>. I selected <em>Do not show again</em> in the dialog and switched to an older version of the extension.</p> <p>Now I would like that dialog to show again, but can't find a setting to do that. How can I reenable the warning?</p>
<python><visual-studio-code><debugging><windows-10>
2023-05-08 15:56:51
1
353
maf
76,202,002
3,136,710
turtle.TurtleGraphicsError: bad color sequence
<p>I'm getting a colorfill error for my Python turtle graphics code, but I don't understand why. The method <code>t.fillcolor()</code> should accept 3 values as the r,g,b values for the fillcolor of a turtle shape, where <code>t</code> is a turtle. However, any three values in the method produce an error.</p> <p>This is how the <code>t.fillcolor()</code> is explained in my book:</p> <p><a href="https://i.sstatic.net/COmRd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/COmRd.png" alt="enter image description here" /></a></p> <p>Here is the code:</p> <pre><code>from turtle import Turtle,Screen #might want to use: #from turtle import * from random import randint def hexagon(t, length): &quot;&quot;&quot;Draws a hexagon with the given length.&quot;&quot;&quot; for count in range(6): t.forward(length) t.left(60) def radialPattern(t, n, length, shape): &quot;&quot;&quot;Draws a radial pattern of n shapes with the given length.&quot;&quot;&quot; for count in range(n): shape(t, length) t.left(360 / n) screen = Screen() #create a screen object screen.setup(width=600,height=400) #size screen.bgcolor(&quot;lightblue&quot;) screen.title(&quot;My turtle&quot;) t = Turtle() #added code here: t.fillcolor(20,75,153) t.begin_fill() radialPattern(t,5,30,hexagon) t.end_fill() screen.exitonclick() #keep the screen open until user closes it </code></pre> <p>The code execution starts at line <code>screen = Screen() #create a screen object</code></p> <p>Here is the error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;D:\Turtle for final.py&quot;, line 25, in &lt;module&gt; t.fillcolor(20,75,153) File &quot;C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\turtle.py&quot;, line 2289, in fillcolor color = self._colorstr(args) File &quot;C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\turtle.py&quot;, line 2697, in _colorstr return self.screen._colorstr(args) File 
&quot;C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\turtle.py&quot;, line 1167, in _colorstr raise TurtleGraphicsError(&quot;bad color sequence: %s&quot; % str(color)) turtle.TurtleGraphicsError: bad color sequence: (20, 75, 153) </code></pre> <p>I've gotten this method to work properly plenty of times. It's just not working here and I don't understand why.</p> <p>If someone could help me out, I would appreciate it.</p>
<python><turtle-graphics><python-turtle>
2023-05-08 15:23:00
1
652
Spellbinder2050
76,201,908
1,082,349
Serializing, compressing and writing large object to file in one go takes too much memory
<p>I have a list of very large objects <code>objects</code> that I want to compress and save to the hard drive.</p> <p>My current approach is</p> <pre><code>import brotli import dill # serialize list of objects objects_serialized = dill.dumps(objects, pickle.HIGHEST_PROTOCOL) # compress serialized string objects_serialized_compressed = brotli.compress(data=objects_serialized, quality=1) # write compressed string to file output.write(objects_serialized_compressed) </code></pre> <p>However, if <code>objects</code> is very large, this leads to a memory error, since -- for some time -- I simultaneously carry <code>objects</code>, <code>objects_serialized</code>, <code>objects_serialized_compressed</code> around in their entirety.</p> <p>Is there a way to do this chunk-wise? Presumably the first step -- serializing the objects -- has to be done in one go, but perhaps the compression and writing to file can be done chunk-wise?</p>
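One way to stream both steps: pickle (or dill, via `dill.dump`) accepts a file object, so writing through a compressed file object serializes and compresses incrementally, and the full serialized string and full compressed string never coexist in memory. The sketch below swaps brotli for stdlib `gzip` to stay self-contained; the same shape should work with a brotli file-object wrapper if one is available.

```python
import gzip
import pickle

objects = [list(range(1000)) for _ in range(100)]  # stand-in payload

# Streaming write: pickle pushes bytes into the gzip file object as it
# walks the object graph, so only small buffers are held at a time.
with gzip.open("objects.pkl.gz", "wb", compresslevel=1) as f:
    pickle.dump(objects, f, protocol=pickle.HIGHEST_PROTOCOL)

# Streaming read back.
with gzip.open("objects.pkl.gz", "rb") as f:
    restored = pickle.load(f)
```

Only the original `objects` (which you hold anyway) stays fully in memory; `dumps`/`compress` intermediates disappear.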
<python><dill><brotli>
2023-05-08 15:09:45
1
16,698
FooBar
76,201,871
14,468,588
How to speed up my code executing a complex formula in Python
<p>I have implemented a complex problem in python. In fact, My code does not have any error. Here is my code (if it seems too long, you can skip it. in last part of question, I have mentioned a short part in my code that I think it makes the simulation time-consuming):</p> <pre><code>from random import uniform import numpy as np from numpy.random import randint SOLUTION_SEQUENCE = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] TOURNAMENT_SIZE = 20 MAX_FITNESS = 10 FREQUENCY = 28e9 LIGHT_SPEED = 3e8 WAVELENGTH = LIGHT_SPEED / FREQUENCY K = 2 * np.pi / WAVELENGTH PHI = 0 RING_ELEMENT_NUM = [6, 12, 18, 25, 31, 37, 43, 50, 56] RING_NUM = 9 CHROMOSOME_LENGTH = sum(RING_ELEMENT_NUM) THETA_range_mul10 = range(-900, 901) class Individual: def __init__(self): self.genes = [randint(0, 1) for _ in range(CHROMOSOME_LENGTH)] def get_fitness(self): fitness = self.fitness_function(self.genes, self.fn_uniform(), self.fn_com(self.genes)) return fitness def fitness_function(self, Ipop, fn_uni, fn_com): WF1 = 0.8 WF2 = 0.2 CF = WF1 * np.abs(fn_com[1] + fn_com[2]) / np.abs(self.AF(Ipop, 0)) + WF2 * np.abs(fn_com[0] - fn_uni) return CF def fn_uniform(self): AF_uni = self.AF_uniform() null1 = 0 null2 = 0 for i in range(899): if AF_uni[int(i + 901) + 1] &gt; AF_uni[int(i + 901)]: null1 = i / 10 break for i in range(899): if AF_uni[int(-i + 901) - 1] &gt; AF_uni[int(i + 901)]: null2 = i / 10 break fn = np.abs(null2 - null1) return [fn] def fn_com(self, I_pop): msl1 = 0 msl2 = 0 AF_com = list(range(-900, 901)) for theta in range(-900, 901): AF_com[theta + 900] = self.AF(I_pop, theta / 10) null1 = 0 null2 = 0 for i in range(899): if AF_com[int(i + 901) + 1] &gt; AF_com[int(i + 901)]: index_msl1 = AF_com.index(max(AF_com[i:])) msl1 = max(AF_com[i:]) null1 = i / 10 break for i in range(899): if AF_com[int(-i + 901) - 1] &gt; AF_com[int(i + 901)]: index_msl2 = AF_com.index(max(AF_com[0:i])) msl2 = max(AF_com[i:]) null2 = i / 10 break fn = np.abs(null2 - null1) return [fn, msl1, msl2] def AF_uniform(self): 
AF_uni = list(range(-900, 901)) cur = np.ones(CHROMOSOME_LENGTH) for theta in list(range(-900, 901)): AF_uni[theta + 900] = self.AF(cur, theta / 10) return AF_uni def AF(self, cur, theta): AF_out = 1 AF_out = 1 + sum(sum(cur[i] * np.exp(1j * K * (m * WAVELENGTH / 2) * np.sin(theta) * np.cos(PHI - 2 * np.pi * (i - 1) / RING_ELEMENT_NUM[m])) for i in range(RING_ELEMENT_NUM[m])) for m in range(RING_NUM)) return AF_out def set_gene(self, index, value): self.genes[index] = value def __repr__(self): return ''.join(str(e) for e in self.genes) class Population: def __init__(self, population_size): self.population_size = population_size self.individuals = [Individual() for _ in range(population_size)] def get_fittest(self): fittest = self.individuals[0] for individual in self.individuals[1:]: if individual.get_fitness() &gt; fittest.get_fitness(): fittest = individual return fittest def get_fittest_elitism(self, n): self.individuals.sort(key=lambda ind: ind.get_fitness(), reverse=True) print(self.individuals) print(self.individuals[:n]) return self.individuals[:n] def get_size(self): return self.population_size def get_individual(self, index): return self.individuals[index] def save_individual(self, index, individual): self.individuals[index] = individual class GeneticAlgorithm: def __init__(self, population_size=100, crossover_rate=0.65, mutation_rate=0.1, elitism_param=5): self.population_size = population_size self.crossover_rate = crossover_rate self.mutation_rate = mutation_rate self.elitism_param = elitism_param def run(self): pop = Population(self.population_size) generation_counter = 0 while pop.get_fittest().get_fitness() != MAX_FITNESS: generation_counter += 1 print('Generation #%s - fittest is: %s with fitness value %s' % ( generation_counter, pop.get_fittest(), pop.get_fittest().get_fitness())) pop = algorithm.evolve_population(pop) print('Solution found...') print(pop.get_fittest()) def evolve_population(self, population): next_population = 
Population(self.population_size) # elitism: the top fittest individuals from previous population survive # so we copy the top 5 individuals to the next iteration (next population) # in this case the population fitness can not decrease during the iterations next_population.individuals.extend(population.get_fittest_elitism(self.elitism_param)) # crossover for index in range(self.elitism_param, next_population.get_size()): first = self.random_selection(population) second = self.random_selection(population) next_population.save_individual(index, self.crossover(first, second)) # mutation for individual in next_population.individuals: self.mutate(individual) print(next_population) return next_population def crossover(self, offspring1, offspring2): cross_individual = Individual() start = randint(CHROMOSOME_LENGTH) end = randint(CHROMOSOME_LENGTH) if start &gt; end: start, end = end, start cross_individual.genes = offspring1.genes[:start] + offspring2.genes[start:end] + offspring1.genes[end:] return cross_individual def mutate(self, individual): for index in range(CHROMOSOME_LENGTH): if uniform(0, 1) &lt;= self.mutation_rate: individual.genes[index] = randint(CHROMOSOME_LENGTH) # this is called tournament selection def random_selection(self, actual_population): new_population = Population(TOURNAMENT_SIZE) for i in range(new_population.get_size()): random_index = randint(new_population.get_size()) new_population.save_individual(i, actual_population.get_individual(random_index)) return new_population.get_fittest() if __name__ == '__main__': algorithm = GeneticAlgorithm(100, 0.8, 0.015) algorithm.run() </code></pre> <p>Here is part of my code likely causing the problem and making the simulation time-consuming:</p> <pre><code>def AF(self, cur, theta): AF_out = 1 AF_out = 1 + sum(sum(cur[i] * np.exp(1j * K * (m * WAVELENGTH / 2) * np.sin(theta) * np.cos(PHI - 2 * np.pi * (i - 1) / RING_ELEMENT_NUM[m])) for i in range(RING_ELEMENT_NUM[m])) for m in range(RING_NUM)) return AF_out 
</code></pre> <p>If you ask how I determined that this part takes a lot of time: every time I started the simulation and then stopped it, execution was paused at this line.</p>
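The nested Python generators inside `AF` recompute the same angle-independent geometry for every call. One way to speed this up is to flatten the double loop once, precompute the geometric factor per element, and reduce each `AF` call to a single vectorized `exp`/`sum`. This sketch reproduces the original formula verbatim, including its use of `cur[i]` with the within-ring counter `i` (which restarts each ring; if that is a bug in the original, fix the indexing there first):

```python
import numpy as np

FREQUENCY = 28e9
WAVELENGTH = 3e8 / FREQUENCY
K = 2 * np.pi / WAVELENGTH
PHI = 0
RING_ELEMENT_NUM = [6, 12, 18, 25, 31, 37, 43, 50, 56]

# Precompute, once, g[(m, i)] = (m * lambda/2) * cos(PHI - 2*pi*(i-1)/N_m)
# together with the element index the original code uses (cur[i]).
geom, idx = [], []
for m, n in enumerate(RING_ELEMENT_NUM):
    for i in range(n):
        geom.append((m * WAVELENGTH / 2) * np.cos(PHI - 2 * np.pi * (i - 1) / n))
        idx.append(i)
geom = np.array(geom)
idx = np.array(idx)

def AF_vec(cur, theta):
    # One vectorized exp + sum replaces the nested Python generators.
    return 1 + np.sum(cur[idx] * np.exp(1j * K * geom * np.sin(theta)))

def AF_loop(cur, theta):  # original formulation, kept for comparison
    return 1 + sum(
        sum(cur[i] * np.exp(1j * K * (m * WAVELENGTH / 2) * np.sin(theta)
                            * np.cos(PHI - 2 * np.pi * (i - 1) / RING_ELEMENT_NUM[m]))
            for i in range(RING_ELEMENT_NUM[m]))
        for m in range(len(RING_ELEMENT_NUM)))

cur = np.ones(sum(RING_ELEMENT_NUM))
```

The 1801-point theta sweeps in `fn_com`/`AF_uniform` then call `AF_vec` instead; caching fitness values per individual (they are recomputed many times per generation) would likely help even more.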
<python><runtime><genetic-algorithm>
2023-05-08 15:06:25
0
352
mohammad rezza
76,201,836
11,278,478
list S3 files in Pyspark
<p>I am new to Pyspark and trying to use the spark.read method to read S3 files into a dataframe. I was able to successfully read one file from S3. Now I need to iterate and read all the files in a bucket.</p> <p>My question is how to iterate and get all the files one by one.</p> <p>I used to do this in Python using boto3 (s3_client.list_objects); is there something similar in Pyspark?</p>
<python><apache-spark><amazon-s3><pyspark><boto3>
2023-05-08 15:02:37
4
434
PythonDeveloper
76,201,745
7,053,813
How do I use a protocol for an attribute that shows up after running a method?
<p>I am working on a package that intimately uses scikit-learn objects. I wanted to use a protocol to identify the interface for certain aspects of sklearn functionality. One example is that an object gains an attribute called <code>feature_names_in_</code> once its <code>fit</code> method has run. However, the attribute does not exist before the object is <code>fit</code>. How do I use structural typing here?</p> <pre><code>_TransformerSelf = TypeVar('_TransformerSelf', bound='TransformerMixin') @runtime_checkable class P(Protocol): # Causes error because it doesn't exist # at instantiation time. feature_names_in_: Sequence[str] def fit( self: _TransformerSelf, X: pd.DataFrame, y: pd.DataFrame | None = None ) -&gt; _TransformerSelf: ... </code></pre>
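One point worth sketching: a bare annotation in a Protocol body is a declaration, not an assignment, so it does not require the attribute to exist at instantiation time; and for `@runtime_checkable` protocols, `isinstance` checks attribute *presence* on the instance, so the same object can fail the check before `fit` and pass it after. A minimal, self-contained sketch (note `issubclass`, unlike `isinstance`, is not allowed for protocols with data members):

```python
from typing import Protocol, Sequence, runtime_checkable

@runtime_checkable
class FittedTransformer(Protocol):
    # Declared on the protocol; implementers need it only at runtime,
    # and only once fit() has set it.
    feature_names_in_: Sequence[str]

    def fit(self, X, y=None): ...

class MyTransformer:  # structural match; never inherits the protocol
    def fit(self, X, y=None):
        self.feature_names_in_ = list(X[0])
        return self

t = MyTransformer()
before = isinstance(t, FittedTransformer)   # attribute not there yet
t.fit([["a", "b"]])
after = isinstance(t, FittedTransformer)    # now it matches
```

So the protocol can describe the post-`fit` interface; code that needs the attribute can narrow with the runtime check (or an explicit `hasattr` guard) instead of assuming it exists.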
<python><python-typing>
2023-05-08 14:51:33
1
759
Collin Cunningham
76,201,672
15,001,463
Single SciPy CSR Matrix from Large Dask Array
<p>I have a sparse dask array with 200k rows and 150k columns. I want to convert this sparse dask array into a <strong>single</strong> scipy sparse CSR matrix for use with collaborative filtering models from the <a href="https://github.com/benfred/implicit" rel="nofollow noreferrer">implicit</a> library. How can I do this?</p> <p>Current attempt/ideas:</p> <p>(1) Simply using compute.</p> <pre class="lang-py prettyprint-override"><code>import dask.array as da import sparse nusers = 200_000 nitems = 150_000 rng = da.random.default_rng() x = rng.random((nusers, nitems)) x[x &lt; 0.95] = 0 s = x.map_blocks(sparse.COO) # this kills program due to out of memory s_csr = s.compute().tocsr() </code></pre> <p>(2) Incrementally creating the sparse array, converting it to a scipy csr format, writing that to disk, and then resuming computation. Note, the <code>sys.getsizeof(sp_matrix)</code> when <code>sp_matrix</code> is the result of <code>sparse.concatenate()</code> will also increasingly require more RAM; however, casting this matrix to a scipy csr matrix results in an object requiring only 48 bytes of memory. Note, I have only partially tested this strategy, which is how I experimentally determined these byte sizes. 
This strategy is <strong>extremely</strong> slow and would likely require manual garbage collection (<code>import gc; gc.collect()</code>) as well.</p> <pre class="lang-py prettyprint-override"><code>def get_next_chunk(s: DaskArray): &quot;&quot;&quot;Gets next chunk of DaskArray&quot;&quot;&quot; raise NotImplementedError # rechunk the data such that concatenation # along the first axis will incrementally # create the entire matrix s = s.rechunk((some_value, nitems)) # get the first chunk # this will have shape (some_value, nitems) sp_matrix = get_next_chunk(s) # for all remaining chunks, # concatenate said chunks to the sp # matrix, then cast that to csr while (chunk := get_next_chunk(s)): sp_matrix = sparse.concatenate((sp_matrix, chunk)) # note: could intermittently write the # sparse matrix to disk in case of failure if (sys.getsizeof(sp_matrix)/2**30) &gt; threshold_GB: # or maybe sp_matrix_tmp = sp_matrix.toscr() # del sp_matrix; sp_matrix = sp_matrix_tmp sp_matrix = sp_matrix.tocsr() save_npz(fpath, sp_matrix) sp_matrix = sparse.COO.from_scipy_sparse(sp_matrix) sp_matrix = sp_matrix.tocsr() save_npz(final, sp_matrix) </code></pre> <p>My hardware: Lubuntu 22.04 LTS laptop with 8 GB RAM and 8 GB swap space.</p>
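A sketch of the incremental idea that avoids repeated `sparse.concatenate` (which re-copies the growing matrix each time): compress each row-chunk to CSR as it arrives and stack all the small CSR parts once at the end, so at most one chunk is ever dense in memory. With dask one would iterate the row chunks via `s.blocks` or `s.to_delayed()`; the chunk generator below is a stand-in for that.

```python
import numpy as np
import scipy.sparse as sp

def chunks_to_csr(chunk_iter):
    """Stack dense (or COO) row-chunks into one CSR matrix, keeping at
    most one chunk un-compressed in memory at a time."""
    parts = []
    for chunk in chunk_iter:
        parts.append(sp.csr_matrix(chunk))   # compress each chunk immediately
    return sp.vstack(parts, format="csr")

# Stand-in for iterating a row-chunked dask array; each yielded
# chunk is one (rows, nitems) slab with ~5% density.
rng = np.random.default_rng(0)
def fake_chunks(n_chunks=4, rows=10, cols=8):
    for _ in range(n_chunks):
        x = rng.random((rows, cols))
        x[x < 0.95] = 0
        yield x

csr = chunks_to_csr(fake_chunks())
```

At 200k x 150k with 5% density, the final CSR still needs roughly 1.5e9 nonzeros' worth of data/indices (well over the 8 GB of RAM here), so the chunked build helps with peak overhead but the density/dtype budget is worth checking first.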
<python><dask><sparse-matrix>
2023-05-08 14:42:20
1
714
Jared
76,201,572
16,383,578
How to deal with overlapping IP ranges in Python?
<p>I have downloaded the raw text file of IPFire location database from <a href="https://git.ipfire.org/?p=location/location-database.git;a=tree" rel="nofollow noreferrer">here</a>, I tried to convert the data to a sqlite3 database and I succeeded in doing so.</p> <p>However there are lots of overlapping IP ranges, and I want to merge the IP overlapping ranges that share the same country and ASN, and split the overlapping IP ranges into discrete IP ranges if they don't.</p> <p>For example, if I am not mistaken, 1.0.0.0/8 denotes a network, the starting address is 1.0.0.0 and the end is 1.255.255.255, and there are 16777216 addresses in the network. However immediately after the first example, which is the first entry in the data, the second entry is 1.0.0.0/24, which is completely contained in the first range.</p> <p>But they don't share same ASN and country. I think all IP addresses in the larger range is assigned to the first party unless they are assigned to the second party, in which case the second party overrides the first in the ranges assigned to the second party.</p> <p>There are lots of cases like this.</p> <p>Given the example input:</p> <pre class="lang-py prettyprint-override"><code>data = [ (16777216, 33554431, '1.0.0.0', '1.255.255.255', 16777216, '1.0.0.0/8', 'Unknown', 'Australia'), (16777216, 16777471, '1.0.0.0', '1.0.0.255', 256, '1.0.0.0/24', 'AS13335', 'Australia'), (16777472, 16777727, '1.0.1.0', '1.0.1.255', 256, '1.0.1.0/24', 'Unknown', 'China'), (16777728, 16778239, '1.0.2.0', '1.0.3.255', 512, '1.0.2.0/23', 'Unknown', 'China'), (16778240, 16779263, '1.0.4.0', '1.0.7.255', 1024, '1.0.4.0/22', 'AS38803', 'Australia'), (16778496, 16778751, '1.0.5.0', '1.0.5.255', 256, '1.0.5.0/24', 'AS38803', 'Australia'), (16779264, 16781311, '1.0.8.0', '1.0.15.255', 2048, '1.0.8.0/21', 'Unknown', 'China'), (16781312, 16785407, '1.0.16.0', '1.0.31.255', 4096, '1.0.16.0/20', 'Unknown', 'Japan'), (16781312, 16781567, '1.0.16.0', '1.0.16.255', 256, 
'1.0.16.0/24', 'AS2519', 'Japan'), (16785408, 16793599, '1.0.32.0', '1.0.63.255', 8192, '1.0.32.0/19', 'Unknown', 'China'), (16785408, 16785663, '1.0.32.0', '1.0.32.255', 256, '1.0.32.0/24', 'AS141748', 'China'), (16793600, 16809983, '1.0.64.0', '1.0.127.255', 16384, '1.0.64.0/18', 'AS18144', 'Japan'), (16809984, 16842751, '1.0.128.0', '1.0.255.255', 32768, '1.0.128.0/17', 'AS23969', 'Thailand'), (16809984, 16826367, '1.0.128.0', '1.0.191.255', 16384, '1.0.128.0/18', 'AS23969', 'Thailand'), (16809984, 16818175, '1.0.128.0', '1.0.159.255', 8192, '1.0.128.0/19', 'AS23969', 'Thailand'), (16809984, 16810239, '1.0.128.0', '1.0.128.255', 256, '1.0.128.0/24', 'AS23969', 'Thailand'), (16810240, 16810495, '1.0.129.0', '1.0.129.255', 256, '1.0.129.0/24', 'AS23969', 'Thailand'), (16810496, 16811007, '1.0.130.0', '1.0.131.255', 512, '1.0.130.0/23', 'AS23969', 'Thailand'), (16811008, 16811263, '1.0.132.0', '1.0.132.255', 256, '1.0.132.0/24', 'AS23969', 'Thailand'), (16811264, 16811519, '1.0.133.0', '1.0.133.255', 256, '1.0.133.0/24', 'AS23969', 'Thailand'), (16812032, 16812287, '1.0.136.0', '1.0.136.255', 256, '1.0.136.0/24', 'AS23969', 'Thailand'), (16812288, 16812543, '1.0.137.0', '1.0.137.255', 256, '1.0.137.0/24', 'AS23969', 'Thailand'), (16812544, 16812799, '1.0.138.0', '1.0.138.255', 256, '1.0.138.0/24', 'AS23969', 'Thailand'), (16812800, 16813055, '1.0.139.0', '1.0.139.255', 256, '1.0.139.0/24', 'AS23969', 'Thailand'), (16813312, 16813567, '1.0.141.0', '1.0.141.255', 256, '1.0.141.0/24', 'AS23969', 'Thailand'), (16814080, 16818175, '1.0.144.0', '1.0.159.255', 4096, '1.0.144.0/20', 'AS23969', 'Thailand'), (16818176, 16826367, '1.0.160.0', '1.0.191.255', 8192, '1.0.160.0/19', 'AS23969', 'Thailand'), (16818176, 16819199, '1.0.160.0', '1.0.163.255', 1024, '1.0.160.0/22', 'AS23969', 'Thailand'), (16819200, 16819455, '1.0.164.0', '1.0.164.255', 256, '1.0.164.0/24', 'AS23969', 'Thailand'), (16819456, 16819711, '1.0.165.0', '1.0.165.255', 256, '1.0.165.0/24', 'AS23969', 
'Thailand'), (16819712, 16819967, '1.0.166.0', '1.0.166.255', 256, '1.0.166.0/24', 'AS23969', 'Thailand'), (16819968, 16820223, '1.0.167.0', '1.0.167.255', 256, '1.0.167.0/24', 'AS23969', 'Thailand') ] </code></pre> <p>How to get the desired output:</p> <pre class="lang-py prettyprint-override"><code>[ (16777216, 16777471, '1.0.0.0', '1.0.0.255', 256, '1.0.0.0/24', 'AS13335', 'Australia'), (16777472, 16777727, '1.0.1.0', '1.0.1.255', 256, '1.0.1.0/24', 'Unknown', 'China'), (16777728, 16778239, '1.0.2.0', '1.0.3.255', 512, '1.0.2.0/23', 'Unknown', 'China'), (16778240, 16779263, '1.0.4.0', '1.0.7.255', 1024, '1.0.4.0/22', 'AS38803', 'Australia'), (16779264, 16781311, '1.0.8.0', '1.0.15.255', 2048, '1.0.8.0/21', 'Unknown', 'China'), (16781312, 16781567, '1.0.16.0', '1.0.16.255', 256, '1.0.16.0/24', 'AS2519', 'Japan'), (16781568, 16783615, '1.0.17.0', '1.0.24.255', 2048, '1.0.17.0/21', 'Unknown', 'Japan'), (16783616, 16784639, '1.0.25.0', '1.0.28.255', 1024, '1.0.25.0/22', 'Unknown', 'Japan'), (16784640, 16785151, '1.0.29.0', '1.0.30.255', 512, '1.0.29.0/23', 'Unknown', 'Japan'), (16785152, 16785407, '1.0.31.0', '1.0.31.255', 256, '1.0.31.0/24', 'Unknown', 'Japan'), (16785408, 16785663, '1.0.32.0', '1.0.32.255', 256, '1.0.32.0/24', 'AS141748', 'China'), (16785664, 16789759, '1.0.33.0', '1.0.48.255', 4096, '1.0.33.0/20', 'Unknown', 'China'), (16789760, 16791807, '1.0.49.0', '1.0.56.255', 2048, '1.0.49.0/21', 'Unknown', 'China'), (16791808, 16792831, '1.0.57.0', '1.0.60.255', 1024, '1.0.57.0/22', 'Unknown', 'China'), (16792832, 16793343, '1.0.61.0', '1.0.62.255', 512, '1.0.61.0/23', 'Unknown', 'China'), (16793344, 16793599, '1.0.63.0', '1.0.63.255', 256, '1.0.63.0/24', 'Unknown', 'China'), (16793600, 16809983, '1.0.64.0', '1.0.127.255', 16384, '1.0.64.0/18', 'AS18144', 'Japan'), (16809984, 16842751, '1.0.128.0', '1.0.255.255', 32768, '1.0.128.0/17', 'AS23969', 'Thailand'), (16842752, 25231359, '1.1.0.0', '1.128.255.255', 8388608, '1.1.0.0/9', 'Unknown', 'Australia'), 
(25231360, 29425663, '1.129.0.0', '1.192.255.255', 4194304, '1.129.0.0/10', 'Unknown', 'Australia'), (29425664, 31522815, '1.193.0.0', '1.224.255.255', 2097152, '1.193.0.0/11', 'Unknown', 'Australia'), (31522816, 32571391, '1.225.0.0', '1.240.255.255', 1048576, '1.225.0.0/12', 'Unknown', 'Australia'), (32571392, 33095679, '1.241.0.0', '1.248.255.255', 524288, '1.241.0.0/13', 'Unknown', 'Australia'), (33095680, 33357823, '1.249.0.0', '1.252.255.255', 262144, '1.249.0.0/14', 'Unknown', 'Australia'), (33357824, 33488895, '1.253.0.0', '1.254.255.255', 131072, '1.253.0.0/15', 'Unknown', 'Australia'), (33488896, 33554431, '1.255.0.0', '1.255.255.255', 65536, '1.255.0.0/16', 'Unknown', 'Australia') ] </code></pre> <p>I used the following functions to assist me in creating the output, but I generated the output by hand:</p> <pre class="lang-py prettyprint-override"><code>import re MAX_IPV4 = 2**32-1 le255 = r'(25[0-5]|2[0-4]\d|[01]?\d\d?)' IPV4_PATTERN = re.compile(rf'^({le255}\.){{3}}{le255}$') def parse_ipv4(ip: str) -&gt; int: assert IPV4_PATTERN.match(ip) a, b, c, d = ip.split('.') return (int(a) &lt;&lt; 24) + (int(b) &lt;&lt; 16) + (int(c) &lt;&lt; 8) + int(d) def to_ipv4(number: int) -&gt; str: assert 0 &lt;= number &lt;= MAX_IPV4 return &quot;.&quot;.join(str(number &gt;&gt; i &amp; 255) for i in range(24, -1, -8)) def parse_network(network): start_ip, slash = network.split('/') start = parse_ipv4(start_ip) slash = int(slash) count = 2 ** (32 - slash) end = start + count - 1 return start, end, start_ip, to_ipv4(end), count, network def split_network(start, end, asn='Unknown', country='Unknown'): count = end - start + 1 start = to_ipv4(start) binary = f'{count:032b}' i = binary.index('1') + 1 ones = binary.count('1') subnets = [] for n in range(ones): item = parse_network(f'{start}/{i+n}') subnets.append((*item, asn, country)) start = to_ipv4(item[1]+1) return subnets </code></pre> <p>I tried the following code and it isn't working:</p> <pre><code>indices = [(e[0], 
e[1], i) for i, e in enumerate(data)] for k, e in enumerate(data): start, end = e[:2] for i_start, i_end, i in indices.copy(): if i_start &lt;= start &lt; i_end and (start, end) != (i_start, i_end): if e[-2:] == data[i][-2:]: print(True) if (start, end, k) in indices: indices.remove((start, end, k)) else: indices.remove((i_start, i_end, i)) indices.append((end+1, i_end, i)) if start &gt; i_start: indices.append((i_start, start-1, i)) </code></pre> <p>What is a smart way to split overlapping IP ranges that are different and merge them if they are the same? I want a smart algorithm, better than brute-force, because I literally have millions of entries to process.</p> <hr /> <p>Update:</p> <p>I tried to nest the overlapping ranges, and my code gives wrong output, and I don't know how to fix it:</p> <pre><code>indices = [ (16777216, 33554431, 0), (16777216, 16777471, 1), (16777472, 16777727, 2), (16777728, 16778239, 3), (16778240, 16779263, 4), (16778496, 16778751, 5), (16779264, 16781311, 6), (16781312, 16785407, 7), (16781312, 16781567, 8), (16785408, 16793599, 9), (16785408, 16785663, 10), (16793600, 16809983, 11), (16809984, 16842751, 12), (16809984, 16826367, 13), (16809984, 16818175, 14), (16809984, 16810239, 15), (16810240, 16810495, 16), (16810496, 16811007, 17), (16811008, 16811263, 18), (16811264, 16811519, 19), (16812032, 16812287, 20), (16812288, 16812543, 21), (16812544, 16812799, 22), (16812800, 16813055, 23), (16813312, 16813567, 24), (16814080, 16818175, 25), (16818176, 16826367, 26), (16818176, 16819199, 27), (16819200, 16819455, 28), (16819456, 16819711, 29), (16819712, 16819967, 30), (16819968, 16820223, 31) ] def nest_overlaps(indices): if not indices: return first = indices[:][0] if len(indices) == 1: return [first] tree = {first: {}} top_start, top_end = first[:2] for i, (start, end, j) in enumerate(indices[1:]): if start &gt; top_end: break if top_start &lt;= start &lt; top_end and (start, end) != (top_start, top_end): tree[first][(start, end, 
j)] = nest_overlaps(indices[i+1:]) return tree </code></pre>
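A hedged sketch of one way to do the overlay step (not a fix of the code above; an illustrative interval sweep under the stated assumption that a more specific, i.e. smaller, covering range overrides a larger one, with inclusive end values): split the number line at every range boundary, give each elementary segment the attributes of the smallest covering range, then merge adjacent segments whose attributes agree.

```python
def flatten(ranges):
    """ranges: iterable of (start, end, asn, country) with inclusive ends.
    A smaller (more specific) range overrides any larger range covering it."""
    ranges = sorted(ranges, key=lambda r: r[0])
    # boundaries of the elementary segments
    points = sorted({p for s, e, *_ in ranges for p in (s, e + 1)})
    out = []
    i = 0
    active = []  # ranges covering the current segment
    for a, b in zip(points, points[1:]):
        while i < len(ranges) and ranges[i][0] <= a:
            active.append(ranges[i])
            i += 1
        active = [r for r in active if r[1] >= a]  # drop expired ranges
        if not active:
            continue  # gap between ranges
        # the most specific (shortest) covering range wins this segment
        best = min(active, key=lambda r: r[1] - r[0])
        attrs = best[2:]
        if out and out[-1][1] == a - 1 and out[-1][2:] == attrs:
            out[-1] = (out[-1][0], b - 1) + attrs  # merge with previous
        else:
            out.append((a, b - 1) + attrs)
    return out
```

The resulting integer intervals could then be cut into CIDR blocks with the `split_network` helper from the question. The per-segment filtering of `active` is not optimized; for millions of entries the active set would be kept in a heap keyed by end address, but the sweep structure stays the same.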
<python><python-3.x><algorithm>
2023-05-08 14:30:30
1
3,930
Ξένη Γήινος
76,201,512
3,729,714
Python - Variable won't print? (Trying to get variable outside of while/if, etc.)
<p>I have this code:</p> <pre><code>global dir_name dir_name = '' def generateDirectoryName(newpath, x=0): while True: dir_name = (newpath + ('_' + str(x) if x != 0 else '')).strip() if not os.path.exists(dir_name): os.mkdir(dir_name) #1 #print(dir_name) return dir_name else: x = x + 1 def createDirectory(): generateDirectoryName(newpath) def main(): cwd = os.getcwd() createDirectory() #2 print(dir_name) main() #3 print(dir_name) </code></pre> <p>When I try the code, the two <code>print</code>s at the end (labelled with comments <code>2</code> and <code>3</code>) don't appear to have any effect. However, if I uncomment the <code>print</code> inside the function (at comment <code>1</code>), it will show the <code>dir_name</code> value. Why does this happen - why can't I access <code>dir_name</code> outside the function? How can I fix the problem?</p>
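For reference, the underlying rule is that assigning to a name inside a function makes that name local unless the function itself declares it `global`; `generateDirectoryName` assigns `dir_name`, so it writes to a fresh local and the module-level `dir_name` stays `''`. A minimal sketch of the usual fix, which is to use the returned value instead of a global (names are illustrative):

```python
import os

def generate_directory_name(newpath, x=0):
    """Create and return the first free name: newpath, newpath_1, newpath_2, ..."""
    while True:
        candidate = (newpath + (f'_{x}' if x != 0 else '')).strip()
        if not os.path.exists(candidate):
            os.mkdir(candidate)
            return candidate
        x += 1

# The caller captures the return value instead of reading a global, e.g.:
# created = generate_directory_name('output')
```

Declaring `global dir_name` at the top of the original function would also make the prints work, but returning the value keeps the function reusable and easier to test.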
<python>
2023-05-08 14:22:25
1
1,735
JM1
76,201,369
1,624,552
How can I get the commit where the head is pointing to?
<p>My Git HEAD is pointing to a commit that is not the last one committed, so I would like to get that commit. Is this possible through the GitLab API? If so, how can I do it from Python or with a curl command?</p> <p>In case GitLab does not provide any mechanism for this, how can I do it using a Git command from Python?</p>
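For the pure-Git route, `git rev-parse HEAD` prints the commit HEAD currently points to, whether or not it is a branch tip; a small sketch wrapping it from Python (assumes the git CLI is installed):

```python
import subprocess

def head_commit(repo_path="."):
    """Return the full SHA of the commit HEAD points to in repo_path."""
    result = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Note that HEAD, and in particular a detached HEAD, is local checkout state, so GitLab's server-side API has no way to see it; the API can only report branch tips and pushed commits, which is why the local git command is the reliable route here.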
<python><git><gitlab><gitlab-api>
2023-05-08 14:05:34
0
10,752
Willy
76,201,333
5,002,658
How can time spent in asynchronous generators be measured?
<p>I want to measure the time spent by a generator (the time it blocks the main loop).</p> <p>Let's say I have the following two generators:</p> <pre><code>async def run(): for i in range(5): await asyncio.sleep(0.2) yield i return async def walk(): for i in range(5): time.sleep(0.3) yield i return </code></pre> <p>I want to measure that <code>run</code> spent around <code>0.0s</code> per iteration, while <code>walk</code> used at least <code>0.3s</code>.</p> <p>I wanted to use something similar to <a href="https://stackoverflow.com/questions/73028924/how-to-measure-time-spent-in-blocking-code-while-using-asyncio-in-python">this</a>, but wasn't able to make it work for me.</p> <p>Clarification:</p> <p>I want to exclude the time spent in any <code>await</code> section. If the coroutine is suspended for any reason, I don't want to take that time into account.</p>
<python><python-3.x><python-asyncio><generator>
2023-05-08 13:59:54
2
340
Artur Laskowski
76,201,183
4,495,790
How to forecast with FB's Prophet with one common model for multiple users' time series data?
<p>I have the following (Pandas) data frame with daily records of purchase amounts (<code>y</code>) of different users (<code>user_id</code>) with other independent features (<code>v1, v2</code>) like this:</p> <pre><code>ds user_id v1 v2 y 2013-01-01 00:00:00 user00 6 100 8 2013-01-01 00:00:00 user01 5 0 8 2013-01-01 00:00:00 user02 4 1 12 2013-01-01 00:00:00 user03 3 200 2 ... 2013-12-31 00:00:00 user99 1 0 1 </code></pre> <p>I want to make multivariate predictions (forecasts) for the next 30 days for <code>y</code> with Facebook's Prophet. I'm new to Prophet, and the tutorials I've seen so far train models on the whole training set without accounting for the possible existence of multiple users (or other entities):</p> <pre><code>model = Prophet() model.add_regressor('v1', standardize=False) model.add_regressor('v2', standardize=False) model.fit(df_train) </code></pre> <p>This evidently doesn't fit my problem above. The closest examples I've found so far train a separate model per entity, like this:</p> <pre><code>for user in set(df_train.user_id): model = Prophet() model.add_regressor('v1', standardize=False) model.add_regressor('v2', standardize=False) model.fit(df_train.loc[df_train.user_id == user]) forecast = model.predict(...) </code></pre> <p>However, this also seems unfit, as I don't want as many forecasting models as there are users (in reality the number of users is in the tens of thousands, and users come and go constantly). My goal is to train one single model for all users (present and future), something like a single common LSTM neural network fit on multiple time series. But how can I build this with Prophet?</p> <p>(Additional question: is explicit standardization needed on features/target with Prophet?)</p>
<python><forecasting><facebook-prophet>
2023-05-08 13:41:06
0
459
Fredrik
76,201,055
610,569
How to skip bad lines when loading csv/tsv file in HuggingFace dataset? ParserError: Error tokenizing data
<p>Sometimes the default csv engine setup in the Python and/or pandas environment raises errors on bad lines, e.g.</p> <pre><code>dataset = load_dataset(&quot;alvations/aymara-english&quot;) </code></pre> <p>[out]:</p> <pre><code>--------------------------------------------------------------------------- ParserError Traceback (most recent call last) Cell In[7], line 1 ----&gt; 1 dataset = load_dataset(&quot;alvations/aymara-english&quot;) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1691, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1688 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1690 # Download and prepare data -&gt; 1691 builder_instance.download_and_prepare( 1692 download_config=download_config, 1693 download_mode=download_mode, 1694 ignore_verifications=ignore_verifications, 1695 try_from_hf_gcs=try_from_hf_gcs, 1696 use_auth_token=use_auth_token, 1697 ) 1699 # Build dataset for splits 1700 keep_in_memory = ( 1701 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1702 ) File /opt/conda/lib/python3.10/site-packages/datasets/builder.py:605, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 603 logger.warning(&quot;HF google storage unreachable.
Downloading and preparing it from source&quot;) 604 if not downloaded_from_gcs: --&gt; 605 self._download_and_prepare( 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 607 ) 608 # Sync info 609 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File /opt/conda/lib/python3.10/site-packages/datasets/builder.py:694, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 690 split_dict.add(split_generator.split_info) 692 try: 693 # Prepare split will record examples associated to the split --&gt; 694 self._prepare_split(split_generator, **prepare_split_kwargs) 695 except OSError as e: 696 raise OSError( 697 &quot;Cannot find data file. &quot; 698 + (self.manual_download_instructions or &quot;&quot;) 699 + &quot;\nOriginal error:\n&quot; 700 + str(e) 701 ) from None File /opt/conda/lib/python3.10/site-packages/datasets/builder.py:1151, in ArrowBasedBuilder._prepare_split(self, split_generator) 1149 generator = self._generate_tables(**split_generator.gen_kwargs) 1150 with ArrowWriter(features=self.info.features, path=fpath) as writer: -&gt; 1151 for key, table in logging.tqdm( 1152 generator, unit=&quot; tables&quot;, leave=False, disable=True # not logging.is_progress_bar_enabled() 1153 ): 1154 writer.write_table(table) 1155 num_examples, num_bytes = writer.finalize() File /opt/conda/lib/python3.10/site-packages/tqdm/notebook.py:259, in tqdm_notebook.__iter__(self) 257 try: 258 it = super(tqdm_notebook, self).__iter__() --&gt; 259 for obj in it: 260 # return super(tqdm...) will not catch exception 261 yield obj 262 # NB: except ... [ as ...] 
breaks IPython async KeyboardInterrupt File /opt/conda/lib/python3.10/site-packages/tqdm/std.py:1183, in tqdm.__iter__(self) 1180 # If the bar is disabled, then just walk the iterable 1181 # (note: keep this check outside the loop for performance) 1182 if self.disable: -&gt; 1183 for obj in iterable: 1184 yield obj 1185 return File /opt/conda/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:156, in Csv._generate_tables(self, files) 154 csv_file_reader = pd.read_csv(file, iterator=True, dtype=dtype, **self.config.read_csv_kwargs) 155 try: --&gt; 156 for batch_idx, df in enumerate(csv_file_reader): 157 pa_table = pa.Table.from_pandas(df, schema=schema) 158 # Uncomment for debugging (will print the Arrow table size and elements) 159 # logger.warning(f&quot;pa_table: {pa_table} num rows: {pa_table.num_rows}&quot;) 160 # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) File /opt/conda/lib/python3.10/site-packages/pandas/io/parsers/readers.py:1698, in TextFileReader.__next__(self) 1696 def __next__(self) -&gt; DataFrame: 1697 try: -&gt; 1698 return self.get_chunk() 1699 except StopIteration: 1700 self.close() File /opt/conda/lib/python3.10/site-packages/pandas/io/parsers/readers.py:1810, in TextFileReader.get_chunk(self, size) 1808 raise StopIteration 1809 size = min(size, self.nrows - self._currow) -&gt; 1810 return self.read(nrows=size) File /opt/conda/lib/python3.10/site-packages/pandas/io/parsers/readers.py:1778, in TextFileReader.read(self, nrows) 1771 nrows = validate_integer(&quot;nrows&quot;, nrows) 1772 try: 1773 # error: &quot;ParserBase&quot; has no attribute &quot;read&quot; 1774 ( 1775 index, 1776 columns, 1777 col_dict, -&gt; 1778 ) = self._engine.read( # type: ignore[attr-defined] 1779 nrows 1780 ) 1781 except Exception: 1782 self.close() File /opt/conda/lib/python3.10/site-packages/pandas/io/parsers/c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows) 228 try: 229 if 
self.low_memory: --&gt; 230 chunks = self._reader.read_low_memory(nrows) 231 # destructive to chunks 232 data = _concatenate_chunks(chunks) File /opt/conda/lib/python3.10/site-packages/pandas/_libs/parsers.pyx:820, in pandas._libs.parsers.TextReader.read_low_memory() File /opt/conda/lib/python3.10/site-packages/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows() File /opt/conda/lib/python3.10/site-packages/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows() File /opt/conda/lib/python3.10/site-packages/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error() ParserError: Error tokenizing data. C error: Expected 1 fields in line 625, saw 2 add </code></pre> <p><strong>How to pass in the csv reader / pandas arguments into the <code>load_dataset</code> function for Huggingface datasets?</strong></p>
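The traceback shows the CSV builder forwarding reader options to pandas.read_csv (via self.config.read_csv_kwargs), so, depending on the installed datasets version, extra keyword arguments passed to load_dataset may reach pandas. The pandas-level behaviour those kwargs control can be sketched on its own (on_bad_lines requires pandas >= 1.3):

```python
import io
import pandas as pd

# A row with too many fields normally raises ParserError in the C engine;
# on_bad_lines="skip" drops such rows instead of failing.
raw = "a,b\n1,2\n3,4,5\n6,7\n"  # second data row is malformed
df = pd.read_csv(io.StringIO(raw), on_bad_lines="skip")
```

The presumable datasets-side equivalent would be `load_dataset("csv", data_files=..., on_bad_lines="skip")`; whether the kwarg is accepted and forwarded depends on the datasets version in use, so treat that part as an assumption to verify locally.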
<python><pandas><csv><huggingface-datasets>
2023-05-08 13:25:19
1
123,325
alvas
76,200,996
2,606,240
Pylance showing error for @classmethod which works during execution
<p>The following code works exactly as expected:</p> <pre><code>from abc import ABCMeta from typing import Dict, Any class Singleton(ABCMeta): __instances: Dict[Any, Any] = {} def __call__(cls, *args: Any, **kwargs: Any) -&gt; Any: if cls not in cls.__instances: cls.__instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) return cls.__instances[cls] @classmethod def delete_all_instances(cls: Any) -&gt; None: cls.__instances = {} print(&quot;Deleted&quot;) class Class1(metaclass=Singleton): def __init__(self) -&gt; None: print(&quot;Constructor&quot;) self.__name = 'Class1' def say_name(self) -&gt; str: return self.__name if __name__ == &quot;__main__&quot;: obj1 = Class1() print(obj1.say_name()) Singleton.delete_all_instances() obj1 = Class1() print(obj1.say_name()) </code></pre> <p>Output:</p> <pre><code>Constructor Class1 Deleted Constructor Class1 </code></pre> <p>But Pylance (v2023.5.10), installed in Visual Studio Code 1.78.0, complains on line 29, <code>Singleton.delete_all_instances()</code>, that <code>Argument missing for parameter &quot;cls&quot;</code>. It doesn't matter whether the <code>Any</code> annotation on the <code>cls</code> parameter in the function definition is there or not.<br><br> Is this a bug in Pylance, or am I just too blind to see the issue?</p>
<python><pylance>
2023-05-08 13:18:17
1
681
user2606240
76,200,793
13,002,906
Unable to redirect non-www to www in django Nginx project on Digital Ocean
<p>Hello I would be grateful if someone could please help with the following Nginx configuration:</p> <p>/etc/nginx/sites-available/example</p> <pre><code>server { server_name *.example.com; location = /favicon.ico { access_log off; log_not_found off; } location /static/ { alias /home/admin/pyapps/example/example/static/; } location /static/admin/ { alias /home/admin/pyapps/example/example/static/admin/; } location /media/ { root /home/admin/pyapps/example; } location / { include proxy_params; proxy_pass http://unix:/run/gunicorn.sock; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { if ($host = admin.example.com) { return 301 https://$host$request_uri; } # managed by Certbot if ($host = backoffice.example.com) { return 301 https://$host$request_uri; } # managed by Certbot if ($host = www.example.com) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; server_name example.com *.example.com; return 404; # managed by Certbot } </code></pre> <p>I would like to create a redirection for example.com to <a href="https://www.example.com" rel="nofollow noreferrer">https://www.example.com</a>.</p> <p>Note that I am using django-hosts.</p> <p>My configuration for django-hosts below:</p> <p>settings.py</p> <pre><code>ROOT_URLCONF = 'example.urls' ROOT_HOSTCONF = 'example.hosts' DEFAULT_HOST = 'www' # DEFAULT_REDIRECT_URL = &quot;http://www.example.com:8000&quot; DEFAULT_REDIRECT_URL = &quot;http://www.example.com&quot; PARENT_HOST = 'example.com' # HOST_PORT = '8000' MIDDLEWARE = [ 'django_hosts.middleware.HostsRequestMiddleware', .... 
'django_hosts.middleware.HostsResponseMiddleware', ] </code></pre> <p>hosts.py</p> <pre><code>from django.conf import settings from django_hosts import patterns, host from example.hostsconf import urls as redirect_urls from . import admin_urls host_patterns = patterns('', host(r'www', settings.ROOT_URLCONF, name='www'), host(r'admin', admin_urls, name='admin'), host(r'backoffice', 'backoffice.urls', name=&quot;backoffice&quot;), host(r'(?!www).*', redirect_urls, name='wildcard'), ) </code></pre> <p>hostsconf/urls.py</p> <pre><code>from django.urls import path from .views import wildcard_redirect urlpatterns = [ path(r'^?P&lt;path&gt;.*)', wildcard_redirect), ] </code></pre> <p>hostsconf/views.py</p> <pre><code>from django.conf import settings from django.http import HttpResponseRedirect DEFAULT_REDIRECT_URL = getattr(settings, &quot;DEFAULT_REDIRECT_URL&quot;, &quot;http://www.example.com&quot;) def wildcard_redirect(request, path=None): new_url = DEFAULT_REDIRECT_URL if path is not None: new_url = DEFAULT_REDIRECT_URL + '/' + path return HttpResponseRedirect(new_url) </code></pre> <p>I have tried pretty much everything, but I think it might have to do with the django-hosts config.</p> <p>When I go to <a href="https://example.com" rel="nofollow noreferrer">https://example.com</a>, it shows me the 'Welcome to nginx' screen instead of redirecting to <a href="https://www.example.com" rel="nofollow noreferrer">https://www.example.com</a>.</p> <p><a href="https://i.sstatic.net/33icK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/33icK.png" alt="enter image description here" /></a></p> <p>When I go to <a href="http://www.example.com" rel="nofollow noreferrer">www.example.com</a> or to admin.example.com or to backoffice.example.com, the website loads fine.</p> <p>Someone please assist me. 
I would like to redirect <a href="https://example.com" rel="nofollow noreferrer">https://example.com</a> to <a href="https://www.example.com" rel="nofollow noreferrer">https://www.example.com</a></p> <p>Let me know if you need any other configuration from me.</p>
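One thing worth checking before the django-hosts layer: in the HTTPS server block above, `server_name *.example.com;` matches subdomains only, never the bare `example.com`, so requests to https://example.com fall through to Nginx's default server and show the "Welcome to nginx" page. A hedged sketch of a dedicated redirect block (the certificate paths are assumed to match the existing ones, and the certificate must cover the bare domain):

```nginx
# Hypothetical sketch: redirect the bare domain to www over HTTPS.
server {
    listen 443 ssl;
    server_name example.com;  # bare domain only; *.example.com won't match it

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    return 301 https://www.example.com$request_uri;
}
```

The plain-HTTP block could get a matching line as well, e.g. `if ($host = example.com) { return 301 https://www.example.com$request_uri; }`, so port-80 requests for the bare domain redirect instead of hitting the `return 404`.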
<python><django><ubuntu><nginx><digital-ocean>
2023-05-08 12:52:20
2
389
pH4nT0M
76,200,786
10,489,887
How to place images on top of each other in Python?
<p>I'm currently working on a project that saves the parts of a garment image, and for one part I need to combine them so that they look like a full sweater. Most things work except the masking. I'm building an API that returns the combined view as an image.</p> <pre class="lang-py prettyprint-override"><code>from rembg import remove from PIL import Image, ImageDraw, ImageFilter import cv2 import numpy as np </code></pre> <p>I have 3 images: front piece, left and right arms:</p> <pre class="lang-py prettyprint-override"><code> background = Image.open('Front.png') foreground = Image.open('SleeveR.png') foreground1 = Image.open('SleeveL.png') </code></pre> <p>I have used <code>rembg</code> to remove the backgrounds, which works fine.</p> <p>I then get the <code>width</code> and <code>height</code> of the arm piece, create a new mask for placing it, and paste the arm piece onto the front piece:</p> <pre class="lang-py prettyprint-override"><code> width, height = foreground.size mask_im = Image.new(&quot;L&quot;, foreground.size, 0) draw = ImageDraw.Draw(mask_im) draw.rectangle((130, 70, width/2, height), fill=255) mask_im = mask_im.rotate(-20, resample=Image.NEAREST, expand=False) mask_im.save('mask_circle.jpg', quality=95) background.paste(foreground, (-120, 0), mask_im) background.save('result.png', quality=100) </code></pre> <p>I'm getting the following result:</p> <p><a href="https://i.sstatic.net/eqCqn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eqCqn.png" alt="Image output" /></a></p> <p>But I want the output to look like this (made in Photoshop with the same &quot;cut&quot; arm images):</p> <p><a href="https://i.sstatic.net/CgwdU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CgwdU.png" alt="Image desired" /></a></p> <p>The problem appears to be with the masking, I assume, but I couldn't figure out why the transparent background gives that kind of result.</p> <p>You can find images here:</p> <p>Front: <a href="https://i.sstatic.net/ofhYR.jpg" rel="nofollow
noreferrer">https://i.sstatic.net/ofhYR.jpg</a></p> <p>Left: <a href="https://i.sstatic.net/2KXgU.jpg" rel="nofollow noreferrer">https://i.sstatic.net/2KXgU.jpg</a></p> <p>Right: <a href="https://i.sstatic.net/JqJKj.jpg" rel="nofollow noreferrer">https://i.sstatic.net/JqJKj.jpg</a></p>
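One likely culprit is that the hand-drawn rectangular mask ignores the transparency rembg produced. When the foreground is RGBA, Pillow accepts the image itself as the mask argument to paste(), so its own alpha channel controls which pixels land. A self-contained sketch with stand-in solid-colour images in place of the real photos:

```python
from PIL import Image

# Stand-ins for the real photos: a red "front" and a sleeve image whose
# left half is opaque green and whose right half is fully transparent.
front = Image.new("RGBA", (100, 100), (255, 0, 0, 255))
sleeve = Image.new("RGBA", (100, 100), (0, 0, 0, 0))
sleeve.paste(Image.new("RGBA", (50, 100), (0, 255, 0, 255)), (0, 0))

# Passing the RGBA image itself as the third argument makes its own
# alpha channel act as the mask, so transparent pixels are not pasted.
front.paste(sleeve, (0, 0), sleeve)
```

For the actual files, the presumable call would be `background.paste(foreground, (-120, 0), foreground)` after `foreground = foreground.convert("RGBA")`, with the rotation applied to the foreground image (rotating it with `expand=False` keeps its alpha) rather than to a separate hand-drawn mask. Note also that saving a mask as JPEG loses exact values; PNG is safer for masks.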
<python><image-processing><python-imaging-library>
2023-05-08 12:51:23
1
2,184
mrconcerned
76,200,707
2,835,640
Use numpy.vectorize on array of arrays
<p>How do I apply numpy.vectorize in order to have it act on an array of arrays where each array is an input to the function? For instance, below I am looking for the return value to be the list [4,90]:</p> <pre><code>import numpy as np import operator import functools vidx = np.vectorize(operator.index) functools.partial(vidx,0)([[4,3,2],[90,7,6]]) vidx = np.vectorize(functools.partial(operator.index,0)) vidx([[4,3,2],[90,7,6]]) </code></pre> <p>None of these setups seem to interpret the intention correctly. I know you can easily do this with a list comprehension; I'm just looking to expand the idea to custom functions of this type.</p>
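If the intent is to hand each inner array, rather than each scalar, to the function, np.vectorize supports that through its signature argument; a sketch using a plain indexing lambda in place of operator.index (which expects a single integer-like object, so under the default element-wise broadcasting it never sees a whole row):

```python
import numpy as np

# signature="(n)->()" tells vectorize to pass whole length-n rows to the
# function and collect one scalar result per row.
first_of_each = np.vectorize(lambda row: row[0], signature="(n)->()")
result = first_of_each([[4, 3, 2], [90, 7, 6]])
```

Any custom function of one row works in the lambda's place, as long as it returns a scalar to match the `()` output core in the signature.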
<python><arrays><numpy><vectorization><array-broadcasting>
2023-05-08 12:40:47
1
2,526
crogg01
76,200,674
13,102,609
How to use argument with dash in name in ffmpeg-python?
<p>Following is a simple ffmpeg command line for encoding an input video into a AV1 output with CRF 30:</p> <pre><code>ffmpeg -i input.mkv -c:v libsvtav1 -crf 30 output.mkv </code></pre> <p>Converting that command line into the syntax of <a href="https://github.com/kkroening/ffmpeg-python" rel="nofollow noreferrer">ffmpeg-python</a> is pretty straight-forward:</p> <pre class="lang-py prettyprint-override"><code>( ffmpeg .input('input.mkv') .output('output.mkv', vcodec='libsvtav1', crf=30) .run() ) </code></pre> <p>However, what happens if we want to specify the fast-decode option? In ffmpeg that would mean extending our command line to include <code>-svtav1-params fast-decode=1</code>, i.e.:</p> <pre><code>ffmpeg -i input.mkv -c:v libsvtav1 -crf 30 -svtav1-params fast-decode=1 output.mkv </code></pre> <p>How do we specify the same thing in ffmpeg-python? Adding <code>svtav1-params='fast-decode=1'</code> into the output arguments results in invalid Python code since variables are not allowed to have dash in the name:</p> <pre class="lang-py prettyprint-override"><code>( ffmpeg .input('input.mkv') .output('output.mkv', vcodec='libsvtav1', crf=30, svtav1-params='fast-decode=1') .run() ) # Error: Expected parameter name </code></pre> <p>Replacing the dash with an underscore in the name makes ffmpeg-python read it literally which doesn't translate into a valid ffmpeg command line.</p> <p>How is it possible to specify fast-decode and other <code>svtav1-params</code> specific arguments in ffmpeg-python?</p>
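Since Python identifiers cannot contain dashes, the usual workaround with keyword-collecting APIs such as ffmpeg-python's output() is dictionary unpacking: `**{'svtav1-params': 'fast-decode=1'}` is valid at a call site even though `svtav1-params=...` is not, because `**kwargs` accepts arbitrary string keys. A stdlib sketch of the mechanism, with `fake_output` standing in for ffmpeg-python's `output()`:

```python
def fake_output(filename, **kwargs):
    # Stand-in for ffmpeg.output(): collects arbitrary option names,
    # including names that are not valid Python identifiers.
    return filename, kwargs

name, opts = fake_output(
    "output.mkv",
    vcodec="libsvtav1",
    crf=30,
    **{"svtav1-params": "fast-decode=1"},  # dash-named option via unpacking
)
```

So the presumable ffmpeg-python call becomes `.output('output.mkv', vcodec='libsvtav1', crf=30, **{'svtav1-params': 'fast-decode=1'})`.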
<python><ffmpeg><video-encoding><ffmpeg-python>
2023-05-08 12:37:24
1
663
orderlyfashion
76,200,579
3,674,674
AWS Cloud Development Kit (CDK) No Container Instances were found in your cluster
<p>I want to deploy a FastAPI application as a service using AWS Elastic Container Service (ECS) with the EC2 launch type. While deploying the application, it raises the error <em><strong>No Container Instances were found in your cluster</strong></em>. Here is the AWS Cloud Development Kit code in Python.</p> <pre><code>from pathlib import Path from aws_cdk import ( aws_autoscaling as autoscaling, aws_ec2 as ec2, aws_ecs as ecs, App, Stack, ) app = App() stack = Stack( app, &quot;sample-aws-ec2-integ-ecs&quot;, env={&quot;region&quot;: &quot;ap-south-1&quot;, &quot;account&quot;: &quot;xxxxxxxxx&quot;}, ) # Create a VPC with public subnets vpc = ec2.Vpc.from_lookup(stack, &quot;dev-vpc&quot;, vpc_id=&quot;vpc-xxxxxxxx&quot;) # Create a cluster with the VPC and enable CloudWatch Container Insights cluster = ecs.Cluster(stack, &quot;EcsCluster&quot;, vpc=vpc, container_insights=True) web_server_sg = ec2.SecurityGroup.from_security_group_id( stack, &quot;Dev-Pub-SG&quot;, security_group_id=&quot;sg-xxxxxxxx&quot; ) # !
creating auto scaling group auto_scaling_group = autoscaling.AutoScalingGroup( stack, &quot;asg&quot;, vpc=vpc, instance_type=ec2.InstanceType(&quot;t2.micro&quot;), machine_image=ecs.EcsOptimizedImage.amazon_linux2(), security_group=web_server_sg, associate_public_ip_address=True, vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC), key_name=&quot;asg-key&quot;, desired_capacity=1, ) capacity_provider = ecs.AsgCapacityProvider( stack, &quot;capacity_provider&quot;, auto_scaling_group=auto_scaling_group ) cluster.add_asg_capacity_provider(provider=capacity_provider) # Create a task definition with a container listening on port 80 task_definition = ecs.Ec2TaskDefinition(stack, &quot;TaskDef&quot;) container = task_definition.add_container( &quot;web&quot;, image=ecs.AssetImage(str(Path(&quot;.&quot;).resolve().parent)), memory_limit_mib=512 ) port_mapping = ecs.PortMapping( container_port=8000, host_port=8000, protocol=ecs.Protocol.TCP ) container.add_port_mappings(port_mapping) # Create a service with the task definition and a desired count of 1 service = ecs.Ec2Service( stack, &quot;Service&quot;, cluster=cluster, task_definition=task_definition, desired_count=1 ) app.synth() </code></pre> <p>I had tried the sample examples provided in <a href="https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ecs/ecs-service-with-advanced-alb-config/app.py" rel="nofollow noreferrer">aws-examples</a> as well. It has the same problem.</p>
<python><amazon-web-services><aws-cdk>
2023-05-08 12:24:28
1
1,612
c__c
76,200,535
21,346,793
How to call a callback for each type of authorization?
<p>I have a project with authorization via VK and Google. I need to call functions like vk_callback or google_callback when a user authorizes with one of these providers. How can I do it? views.py:</p> <pre><code>def home(request): return HttpResponse('Home Page') def vk_callback(request): user_info_getter = UserInfoGetter(request.user, 'vk-oauth2') take_info = user_info_getter.get_user_info_vk() if take_info: name, last_name, photo = take_info print(name, last_name, photo, sep='\n') return redirect(home) def google_callback(request): user_info_getter = UserInfoGetter(request.user, 'google-oauth2') take_info = user_info_getter.get_user_info_google() if take_info: name, last_name, photo = take_info print(name, last_name, photo, sep='\n') return redirect(home) </code></pre> <p>I tried to do it via URL patterns, but it doesn't work: urls.py:</p> <pre><code>urlpatterns = [ path('vk/callback/', views.vk_callback, name='vk_callback'), path('google/callback/', views.google_callback, name='google_callback') ] </code></pre> <p>After successful authorization, the server log shows:</p> <pre><code>&quot;GET /complete/google-oauth2/?state=Qw2iyDHL9w6ueu4aKU8cfx7lC4VKjEgN&amp;code=4%2F0AbUR2VMMND7R2atmHBSbfGxtlbRBzTwiMnsiSpcYAmLI2cQeKuK7D1b_iUJ6uc2Pgp863w&amp;scope=email+profile+openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile&amp;authuser=0&amp;prompt=none HTTP/1.1&quot; 302 0 </code></pre>
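The `/complete/google-oauth2/` log line suggests python-social-auth (social-auth-app-django) is handling the login. Not from the original post, but a hedged sketch of the usual approach: instead of separate callback URLs, append a custom step to SOCIAL_AUTH_PIPELINE and branch on the backend name inside it. The entries below are the documented social_core defaults; the module path `myapp.pipeline` is hypothetical:

```python
# settings.py
SOCIAL_AUTH_PIPELINE = (
    'social_core.pipeline.social_auth.social_details',
    'social_core.pipeline.social_auth.social_uid',
    'social_core.pipeline.social_auth.auth_allowed',
    'social_core.pipeline.social_auth.social_user',
    'social_core.pipeline.user.get_username',
    'social_core.pipeline.user.create_user',
    'social_core.pipeline.social_auth.associate_user',
    'social_core.pipeline.social_auth.load_extra_data',
    'social_core.pipeline.user.user_details',
    'myapp.pipeline.fetch_profile_info',  # custom step (hypothetical path)
)

# myapp/pipeline.py -- runs after every successful social login
def fetch_profile_info(backend, user=None, response=None, *args, **kwargs):
    # branch on the backend that actually handled the login
    if backend.name == 'vk-oauth2':
        return {'provider': 'vk'}      # vk-specific handling goes here
    elif backend.name == 'google-oauth2':
        return {'provider': 'google'}  # google-specific handling goes here
```

The pipeline step sees the backend, the user and the raw provider response, so the UserInfoGetter logic from views.py could move here unchanged.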
<python><django>
2023-05-08 12:18:10
0
400
Ubuty_programmist_7
76,200,452
7,979,645
Error while iterating over dataframe column's entries: "AttributeError: 'Series' object has no attribute 'iteritems'"
<p>Using pandas version 2, I get an error when calling <code>iteritems</code>.</p> <pre class="lang-py prettyprint-override"><code>for event_id, region in column.iteritems(): pass </code></pre> <p>The following error message appears:</p> <pre><code>Traceback (most recent call last): File &quot;/home/analyst/anaconda3/envs/outrigger_env/lib/python3.10/site- packages/outrigger/io/gtf.py&quot;, line 185, in exon_bedfiles for event_id, region in column.iteritems()) AttributeError: 'Series' object has no attribute 'iteritems' </code></pre>
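`Series.iteritems()` was deprecated in pandas 1.5 and removed in 2.0; `Series.items()` yields the same (index, value) pairs. A minimal self-contained sketch (the series contents here are made up):

```python
import pandas as pd

column = pd.Series(["chr1", "chr2"], index=["ev1", "ev2"])

# pandas 2.x replacement for the removed Series.iteritems()
for event_id, region in column.items():
    print(event_id, region)
```

Note that the traceback points inside the installed `outrigger` package, not user code, so the practical fix may be pinning `pandas<2.0` until the library switches to `items()`.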
<python><pandas><dataframe><series><iteritems>
2023-05-08 12:07:34
1
319
jordimaggi
76,200,377
3,521,180
Why is my unit test coverage so low?
<p>I have written a small class and function and a unit test case for them.</p> <pre><code>from flask_restful import Resource class Calculator: def __init__(self, a, b ): self.a = a self.b = b def add(self): return self.a + self.b class CalcResource(Resource): def get(self): cal = Calculator(2, 3) tr = cal.add() return tr </code></pre> <p>Below is the unit test for the above class:</p> <pre><code>class TestCalcResource: def test_get(self): calc_res = CalcResource() res = calc_res.get() assert res == 5 </code></pre> <p>On running the Python coverage module on the test file, I am only getting 65% coverage. My aim is to reach 90%; that is the requirement. Please suggest.</p> <p><a href="https://i.sstatic.net/2S4oD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2S4oD.png" alt="enter image description here" /></a></p> <p>I have added the screenshot.</p>
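The missed lines are usually easiest to find with `coverage report -m`, which lists them per file; common culprits are module-level app wiring that the tests never execute. A hedged, self-contained sketch (flask_restful stubbed out here purely so the snippet runs on its own) of tests that touch every line of both classes:

```python
class Resource:  # stand-in for flask_restful.Resource, for this sketch only
    pass

class Calculator:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def add(self):
        return self.a + self.b

class CalcResource(Resource):
    def get(self):
        return Calculator(2, 3).add()

def test_calculator_add():
    # cover Calculator directly, not only through the resource
    assert Calculator(2, 3).add() == 5

def test_resource_get():
    assert CalcResource().get() == 5

test_calculator_add()
test_resource_get()
```

If the uncovered lines turn out to be Flask app-creation boilerplate, a `.coveragerc` with `omit`/`exclude_lines` entries is the usual way to keep them out of the percentage.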
<python><python-3.x><pytest><coverage.py>
2023-05-08 11:57:22
0
1,150
user3521180
76,200,256
4,336,593
Vectorizing the FOR loop while computing rows of a DataFrame
<p>Following is the sample dataframe:</p> <pre><code>df = pd.DataFrame({ 'time': pd.date_range(start='2023-03-15 00:00:00', periods=15, freq='1min'), 'binary': [0,0,0,0,1,0,0,0,0,1,1,1,1,1,0], 'nonbinary': np.random.randint(1, 10, 15) }) </code></pre> <p>My task is:</p> <p>To find the presence of <code>1s</code> in column 'binary' (either a single occurrence or in a consecutive manner) and set the rows (that have <code>0s</code>) up to two minutes back (previous) to <code>1s</code>. For example, the resultant 'binary' column should be:</p> <pre><code> [0,0,0,0,1,0,0,0,0,1,1,1,1,1,0] --&gt; Original [0,0,1,1,1,0,0,1,1,1,1,1,1,1,0] --&gt; New </code></pre> <p>My following snippet works fine and gives the expected result.</p> <pre><code>import pandas as pd import numpy as np # create the DataFrame df = pd.DataFrame({ 'time': pd.date_range(start='2023-03-15 00:00:00', periods=15, freq='1min'), 'binary': [0,0,0,0,1,0,0,0,0,1,1,1,1,1,0], 'nonbinary': np.random.randint(1, 10, 15) }) # create a copy of the binary column to store the updated values updated_binary = df['binary'].copy() # iterate through the rows of the DataFrame for i in range(len(df)): # check if the current row has a binary value of 1 if df.loc[i, 'binary'] == 1: # update the corresponding rows in binary to 1 updated_binary[max(0, i-2):i+1] = 1 # update the binary column with the updated values df['binary'] = updated_binary # print the resulting DataFrame print(df) </code></pre> <p>The problem is, my actual <code>dataframe</code> consists of thousands of rows. I want to make my computation faster by using <a href="https://www.geeksforgeeks.org/vectorization-in-python/" rel="nofollow noreferrer">vectorization</a>.
Following is my attempt to solve it using vectorization:</p> <pre><code>import pandas as pd import numpy as np # create the DataFrame df = pd.DataFrame({ 'time': pd.date_range(start='2023-03-15 00:00:00', periods=15, freq='1min'), 'binary': [0,0,0,0,1,0,0,0,0,1,1,1,1,1,0], 'nonbinary': np.random.randint(1, 10, 15) }) # create a copy of the binary column to store the updated values updated_binary = df['binary'].copy() # create a boolean array to identify the positions of 1s in binary column Ones_positions = df['binary'].eq(1).values # create a sliding window of size 3 to find the positions of 0s two minutes before the 1s window = np.zeros(3, dtype=bool) window[:2] = True zero_positions = np.logical_and.reduce(( np.roll(Ones_positions, 2), np.logical_not(Ones_positions), np.convolve(window, np.ones(3, dtype=bool), mode='valid') == 2 ), axis=0) # update the corresponding positions in the updated_binary to 1 updated_binary[zero_positions] = 1 # update the binary column with the updated values df['binary'] = updated_binary # print the resulting DataFrame print(df) </code></pre> <p>But I get the following error:</p> <pre><code>zero_positions = np.logical_and.reduce(( ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre>
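Not part of the original post: one vectorized alternative that reproduces the loop's result without `np.convolve` is a reversed rolling maximum, since "set the two rows before each 1" is the same as "take the max of each row and the two rows after it":

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'time': pd.date_range(start='2023-03-15 00:00:00', periods=15, freq='1min'),
    'binary': [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0],
    'nonbinary': np.random.randint(1, 10, 15),
})

# Reverse the column so the backward-looking window becomes a forward
# rolling max of width 3 (the 1 itself plus the two rows before it),
# then reverse back. min_periods=1 handles the edges without NaNs.
df['binary'] = (
    df['binary'][::-1]
    .rolling(window=3, min_periods=1)
    .max()[::-1]
    .astype(int)
)
print(df['binary'].tolist())
```

This matches the output of the loop version while staying entirely inside pandas' C-level rolling machinery.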
<python><pandas><dataframe><vectorization>
2023-05-08 11:42:21
1
858
santobedi
76,200,044
1,624,552
How to check if a release exists on a GitLab repo
<p>I have a repo in GitLab and now I need to check if a release based on a tag exists using Python.</p> <p>I have tried the Python code below, but I always get the response &quot;301 Moved Permanently&quot; regardless of whether the release exists or not.</p> <p>I am using http.client instead of requests, as I do not want to depend on any third party and also because http.client is faster.</p> <pre><code>conn = http.client.HTTPConnection(&quot;my.git.space&quot;) headers = { 'PRIVATE-TOKEN': &quot;XXX&quot; } conn.request(&quot;GET&quot;, &quot;/api/v4/projects/497/releases/1.0.0.0&quot;, headers=headers) res = conn.getresponse() print(f&quot;{res.status} {res.reason}&quot;) conn.close() </code></pre> <p>If I use Postman it works: if the release exists it returns 200, otherwise 404.</p> <p>Any ideas on how to do this?</p>
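Not from the original post, but one hedged explanation: `http.client` never follows redirects, and GitLab instances commonly 301 plain HTTP over to HTTPS, which would produce 301 for every path regardless of the release. A sketch using HTTPSConnection (host, project id and token are the question's placeholders):

```python
import http.client

def release_path(project_id: int, tag: str) -> str:
    # if tags may contain special characters, urllib.parse.quote(tag) first
    return f"/api/v4/projects/{project_id}/releases/{tag}"

def release_exists(host: str, project_id: int, tag: str, token: str) -> bool:
    conn = http.client.HTTPSConnection(host)  # TLS instead of plain HTTP
    try:
        conn.request("GET", release_path(project_id, tag),
                     headers={"PRIVATE-TOKEN": token})
        return conn.getresponse().status == 200  # 404 -> no such release
    finally:
        conn.close()

# release_exists("my.git.space", 497, "1.0.0.0", "XXX")
```

Postman follows the redirect transparently, which would explain why it sees 200/404 while the raw client sees 301.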
<python><git><gitlab><gitlab-api><http.client>
2023-05-08 11:13:51
1
10,752
Willy
76,199,893
13,236,421
How can I use sklearn's Binarizer in a Pipeline with multiple, custom thresholds?
<p>Say I have a dataset with several features that I would like to binarize. Each feature has its own (manual) threshold. I have tried this:</p> <pre><code>import pandas as pd from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from sklearn.preprocessing import Binarizer X = pd.DataFrame({&quot;age&quot;: [20, 35, 67, 85, 98, 33, 28], &quot;BMI&quot;: [21.2, 24.2, 19.8, 28.1, 18.6, 31.3, 22.3]}) y = [1,0,0,0,0,0,0] thresholds = {&quot;age&quot;: 67, &quot;BMI&quot;: 25} pipe = Pipeline([('binarize', ColumnTransformer([(&quot;binarizer&quot;, Binarizer(threshold=list(thresholds.values())), list(thresholds.keys()))]))]) pipe.fit_transform(X, y) </code></pre> <p>However, the <code>Binarizer</code> class in <code>sklearn</code> does not support specifying multiple thresholds.</p> <pre><code>sklearn.utils._param_validation.InvalidParameterError: The 'threshold' parameter of Binarizer must be an instance of 'float'. Got [67, 25] instead </code></pre> <p>How can I binarize these variables in a pipeline using different thresholds for each variable?</p>
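Not in the original post: since each Binarizer takes a single float, one workaround is one Binarizer per column inside the ColumnTransformer, each carrying its own threshold. A self-contained sketch with the question's data:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import Binarizer

X = pd.DataFrame({"age": [20, 35, 67, 85, 98, 33, 28],
                  "BMI": [21.2, 24.2, 19.8, 28.1, 18.6, 31.3, 22.3]})
thresholds = {"age": 67, "BMI": 25}

# one transformer per column, each with a scalar threshold
binarize = ColumnTransformer(
    [(f"bin_{col}", Binarizer(threshold=float(thr)), [col])
     for col, thr in thresholds.items()]
)
result = binarize.fit_transform(X)
```

This ColumnTransformer can be dropped into the question's Pipeline as the 'binarize' step unchanged; values strictly greater than each threshold become 1.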
<python><scikit-learn>
2023-05-08 10:57:49
1
365
Larsq
76,199,745
1,711,271
What is the best way to iterate over rows of a pandas dataframe whose elements are arrays of different length?
<p>I have a weird dataframe, where each row contains numpy arrays of different length from row to row. Below an example (actual df has more columns and waaaay more rows):</p> <pre><code>import pandas as pd import numpy as np import random import string def create_funny_string(): adjectives = [&quot;brilliant&quot;, &quot;esoteric&quot;, &quot;amazing&quot;, &quot;curious&quot;, &quot;enthusiastic&quot;, &quot;lazy&quot;, &quot;mysterious&quot;, &quot;tricky&quot;] nouns = [&quot;hippo&quot;, &quot;mandala&quot;, &quot;penguin&quot;, &quot;cat&quot;, &quot;unicorn&quot;, &quot;giraffe&quot;, &quot;monkey&quot;, &quot;spaceship&quot;] return f&quot;{random.choice(adjectives)}_{random.choice(nouns)}&quot; def create_random_np_array(arr_length): return np.random.rand(arr_length) def create_row(): tag = create_funny_string() num = random.randint(0, 10) array_length = random.randint(1, 10) a = create_random_np_array(array_length) b = create_random_np_array(array_length) c = create_random_np_array(array_length) row = {&quot;Tag&quot;: tag, &quot;Num&quot;: num, &quot;a&quot;: a, &quot;b&quot;: b, &quot;c&quot;: c} return row nrows = 10 rows = [create_row() for _ in range(nrows)] df = pd.DataFrame(rows) print(df) </code></pre> <p>For each row, I want to create two plots, one of <code>a</code> vs <code>b</code> and another one of <code>a</code> vs <code>c</code>. This is clearly a &quot;per element&quot; iteration, meaning that it doesn't make sense to see each column as a single <code>np.ndarray</code> (also because it would be a jagged array, which isn't allowed in <code>numpy</code>). I was thinking to use <code>df.iterrows()</code>, but the Internet (and specifically this website) is full of scary stories about not using it. What should I use?</p>
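Not part of the original question: for per-row plotting, a plain `zip` over just the needed columns avoids the per-row Series boxing (and dtype coercion) that makes `iterrows` slow; `itertuples` is the other common option. A seeded, self-contained sketch:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Tag": ["brilliant_hippo", "lazy_cat"],
    "a": [rng.random(3), rng.random(5)],
    "b": [rng.random(3), rng.random(5)],
    "c": [rng.random(3), rng.random(5)],
})

# zip yields the raw np.ndarray objects untouched, jagged lengths and all
for tag, a, b, c in zip(df["Tag"], df["a"], df["b"], df["c"]):
    # here you would plot a vs b and a vs c (e.g. with matplotlib)
    assert len(a) == len(b) == len(c)
```

Since the loop body here does plotting (inherently per-element work), the usual "never iterate" advice does not really apply; the concern with `iterrows` is overhead, and `zip`/`itertuples` keep that overhead minimal.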
<python><arrays><pandas><numpy><iteration>
2023-05-08 10:37:23
0
5,726
DeltaIV
76,199,739
3,946,307
Deploy a flask selenium restful app in production
<p>I am trying to deploy a <code>Flask REST app</code> that primarily does two tasks:</p> <ol> <li>Normal CRUD Operations</li> <li>Using Python <code>selenium</code> to scrape web pages and fetch appropriate results (it opens up Chrome and does the task)</li> </ol> <p>Can someone suggest to me the best way to deploy such an application in production?</p> <p>I tried the following ways but was unable to achieve the task:</p> <p><strong>1. AWS Ubuntu:</strong> used <code>gunicorn</code> to deploy the app; the deployment succeeded, but Selenium was only able to open a small window and hence could not give proper results.</p> <p><strong>2. AWS Windows Server:</strong> used <code>uvicorn</code> but got an error that the application should be ASGI.</p> <p>I understand the above-mentioned errors can be fixed, but I am not confident enough to host the application in production, so I am asking for suggestions: what is the correct way to handle a <code>Flask REST API</code> app which internally uses <code>Selenium</code>?</p> <p>P.S. Also, if more than one request comes in while Selenium is busy with another task, what are the best practices to handle such scenarios?</p>
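Not from the original post: the "small window" on the Ubuntu box is typically fixed by running Chrome headless with an explicit window size, which also removes the need for a display on the server. A hedged sketch (flags assume a reasonably recent Chrome; `--headless=new` needs Chrome 109+):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")           # no X display required
options.add_argument("--window-size=1920,1080")  # full-size viewport
options.add_argument("--no-sandbox")             # often needed when run as root
options.add_argument("--disable-dev-shm-usage")  # small /dev/shm on servers

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()
```

For the concurrency concern in the P.S., a common pattern is to keep the Flask endpoints thin and push each Selenium job onto a task queue (e.g. Celery or RQ) so web requests never block on a browser.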
<python><flask><selenium-webdriver>
2023-05-08 10:36:52
1
797
Navitas28
76,199,659
5,371,102
How to access form data as a dict in a Streamlit app
<p>I want to be able to access the form data of a streamlit form as a collected dictionary object on the form. I know I can put it into separate variables, but my code will be much nicer and easier to generalize if I can access it directly on the form object, and it seems strange if it is not possible.</p> <pre><code>form = st.form(key=&quot;my_form&quot;) form.text_input(&quot;input 1&quot;) # A lot more inputs def submit(): requests.post(&quot;localhost:5000/form-submit&quot;, data=prepare_for_backend(form.data)) form.form_submit_button(&quot;Submit&quot;, on_click=submit) </code></pre>
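The form object has no data attribute, but every widget created with a `key=` is mirrored into `st.session_state`, which yields the collected dictionary. A hedged sketch (the field names are invented):

```python
import streamlit as st

FIELDS = ["input 1", "input 2", "input 3"]  # generalizes to many inputs

with st.form(key="my_form"):
    for name in FIELDS:
        st.text_input(name, key=name)
    submitted = st.form_submit_button("Submit")

if submitted:
    # widgets with key= show up in st.session_state after submission
    form_data = {name: st.session_state[name] for name in FIELDS}
    st.write(form_data)  # e.g. requests.post(..., data=form_data)
```

Keeping the field names in one list means both the widget creation and the dict collection generalize without repeating yourself.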
<python><streamlit>
2023-05-08 10:24:53
1
1,989
Peter Mølgaard Pallesen
76,199,653
13,314,132
ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents']
<p>Getting the error while trying to run a langchain code.</p> <pre><code>ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents']. Traceback: File &quot;c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py&quot;, line 565, in _run_script exec(code, module.__dict__) File &quot;D:\Python Projects\POC\Radium\Ana\app.py&quot;, line 49, in &lt;module&gt; answer = question_chain.run(formatted_prompt) File &quot;c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\langchain\chains\base.py&quot;, line 106, in run f&quot;`run` not supported when there is not exactly one input key, got ['question', 'documents'].&quot; </code></pre> <p>My code is as follows.</p> <pre><code>import os from apikey import apikey import streamlit as st from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain, SequentialChain #from langchain.memory import ConversationBufferMemory from docx import Document os.environ['OPENAI_API_KEY'] = apikey # App framework st.title('🦜🔗 Colab Ana Answering Bot..') prompt = st.text_input('Plug in your question here') # Upload multiple documents uploaded_files = st.file_uploader(&quot;Choose your documents (docx files)&quot;, accept_multiple_files=True, type=['docx']) document_text = &quot;&quot; # Read and combine Word documents def read_docx(file): doc = Document(file) full_text = [] for paragraph in doc.paragraphs: full_text.append(paragraph.text) return '\n'.join(full_text) for file in uploaded_files: document_text += read_docx(file) + &quot;\n\n&quot; with st.expander('Contextual Prompt'): st.write(document_text) # Prompt template question_template = PromptTemplate( input_variables=['question', 'documents'], template='Given the following documents: {documents}. 
Answer the question: {question}' ) # Llms llm = OpenAI(temperature=0.9) question_chain = LLMChain(llm=llm, prompt=question_template, verbose=True, output_key='answer') # Show answer if there's a prompt and documents are uploaded if prompt and document_text: formatted_prompt = question_template.format(question=prompt, documents=document_text) answer = question_chain.run(formatted_prompt) st.write(answer['answer']) </code></pre> <p>I have gone through the documentations and even then I am getting the same error. I have already seen demos where multiple prompts are being taken by langchain.</p>
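Not from the original post: a chain with two `input_variables` cannot be called as `run(single_string)`; the usual fix is to pass a dict (or keyword arguments) and let the chain apply the PromptTemplate itself. A hedged sketch reusing the question's variable names (LangChain API as of the version in the traceback):

```python
# Replaces the formatted_prompt + run() lines: hand the raw values to the
# chain and it formats the prompt internally via the PromptTemplate.
result = question_chain({"question": prompt, "documents": document_text})
st.write(result["answer"])

# Equivalent keyword form; run() then returns just the output string:
# answer_text = question_chain.run(question=prompt, documents=document_text)
# st.write(answer_text)
```

Note the original code also pre-formats the prompt and then indexes `answer['answer']` on `run()`'s return value, which is a plain string; the dict-call form above returns a dict containing the `output_key`.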
<python><langchain>
2023-05-08 10:24:17
2
655
Daremitsu
76,199,569
7,775,166
Python import function/class using full path or base path
<p>What is recommended when importing something from a subpackage?</p> <p>Option A:</p> <pre><code>from sklearn.model_selection import train_test_split from sklearn import preprocessing train_test_split() preprocessing() </code></pre> <p>Option B:</p> <pre><code>import sklearn sklearn.model_selection.train_test_split() sklearn.preprocessing() </code></pre> <p>In my opinion, using option A you may not know where the function comes from when you see it many lines after it is imported. In option B, you always know where it comes from because it is more verbose. However, you always need to write the full path function. Is that a disadvantage?</p> <p>What are your recommendations?</p>
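One caveat worth noting: after a bare `import sklearn`, attribute access like `sklearn.model_selection` may fail anyway, because importing a package does not automatically import its submodules. A middle-ground sketch that keeps call sites short but still traceable:

```python
# import the subpackages once, keep qualified names at the call sites
from sklearn import model_selection, preprocessing

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X, y, test_size=0.25, random_state=0
)
scaler = preprocessing.StandardScaler().fit(X_train)
```

With this style, a reader many lines below still sees `model_selection.train_test_split` and knows where the function lives, without the full `sklearn.` prefix everywhere.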
<python><import>
2023-05-08 10:13:13
3
732
girdeux
76,199,563
5,672,673
TypeError: Input 'y' of 'Sub' Op has type float32 that does not match type uint8 of argument 'x'
<p>I'm working on a GAN with generator and discriminator.</p> <pre><code>@tf.function def train_step(input_image, target, step): with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: gen_output = generator(input_image, training=True) disc_real_output = discriminator([input_image, target], training=True) disc_generated_output = discriminator([input_image, gen_output], training=True) gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(disc_generated_output, gen_output, target) disc_loss = discriminator_loss(disc_real_output, disc_generated_output) generator_gradients = gen_tape.gradient(gen_total_loss, generator.trainable_variables) discriminator_gradients = disc_tape.gradient(disc_loss, discriminator.trainable_variables) generator_optimizer.apply_gradients(zip(generator_gradients, generator.trainable_variables)) discriminator_optimizer.apply_gradients(zip(discriminator_gradients, discriminator.trainable_variables)) with summary_writer.as_default(): tf.summary.scalar('gen_total_loss', gen_total_loss, step=step//1000) tf.summary.scalar('gen_gan_loss', gen_gan_loss, step=step//1000) tf.summary.scalar('gen_l1_loss', gen_l1_loss, step=step//1000) tf.summary.scalar('disc_loss', disc_loss, step=step//1000) </code></pre> <p>This function throws an error:</p> <pre><code>TypeError: in user code: File &quot;/tmp/ipykernel_34/3224399777.py&quot;, line 9, in train_step * gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(disc_generated_output, gen_output, target) File &quot;/tmp/ipykernel_34/3072633757.py&quot;, line 5, in generator_loss * l1_loss = tf.reduce_mean(tf.abs(target - gen_output)) TypeError: Input 'y' of 'Sub' Op has type float32 that does not match type uint8 of argument 'x'. 
</code></pre> <p>But when I try the subtraction manually, it works just fine; they are both <code>float32</code>:</p> <pre><code>target - gen_output </code></pre> <pre><code>&lt;tf.Tensor: shape=(1, 256, 256, 3), dtype=float32, numpy= array([[[[185.98402 , 151.92749 , 81.13361 ], [186.15788 , 151.78894 , 80.930176], [185.86765 , 151.81358 , 80.65687 ], ..., [183.64613 , 151.91382 , 87.36469 ], [183.17218 , 152.08833 , 86.43396 ], [183.51439 , 152.04149 , 87.40147 ]], ... </code></pre>
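A guess at the discrepancy, not stated in the original post: the manual check probably ran on tensors that had already been converted, while inside the `@tf.function` graph the dataset still yields `uint8` images. A minimal standalone repro and fix (values are arbitrary):

```python
import tensorflow as tf

target = tf.constant([[200, 150, 80]], dtype=tf.uint8)        # as images load
gen_output = tf.constant([[14.0, -1.9, -0.6]], dtype=tf.float32)

target = tf.cast(target, tf.float32)  # explicit cast fixes the Sub op
l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
```

The cleanest place for the cast is usually the input pipeline, e.g. `dataset.map(lambda x, y: (tf.cast(x, tf.float32), tf.cast(y, tf.float32)))`, so eager experiments and the graphed train_step see the same dtypes.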
<python><tensorflow><typeerror><generative-adversarial-network>
2023-05-08 10:12:43
1
1,177
Linh Chi Nguyen
76,199,404
10,966,677
Django admin showing a white box instead of toggle theme icon
<p>As I open Django admin page, a wide white box appears instead of the theme icon (using Django 4.2.1).</p> <p><a href="https://i.sstatic.net/dXqnL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dXqnL.png" alt="admin page as appears on production" /></a></p> <p>While testing on Docker container, everything seems ok.</p> <p><a href="https://i.sstatic.net/zoDtk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zoDtk.png" alt="admin page while testing on Docker container" /></a></p> <p>I have been looking at the <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#theming-support" rel="nofollow noreferrer">documentation</a> on overriding admin/base.html, but I'm not sure this is the issue.</p> <p>Checking the log after deployment (on EC cloud), nothing comes to my attention.</p> <p>I am letting serve the static content by nginx (1.23).</p> <pre><code>python manage.py collectstatic --noinput </code></pre> <p>Overall, all static files are working properly on the rest of the site.</p> <p>I tried inspecting the element <code>&lt;button class=&quot;theme-toggle&quot;&gt;</code>. Nothing anomalous.</p> <p>Everything looks <em>normal</em> to me. Apart from this, production looks identical to the Docker container.</p> <p><strong>Note as of Jan 2025</strong>: Currently, I don't have access to this EC platform anymore, so that I cannot debug. In Docker, everything is OK. I hope this is helpful to anybody though.</p>
<python><django>
2023-05-08 09:51:19
15
459
Domenico Spidy Tamburro
76,199,337
610,569
How to add new language to NLLB tokenizer in Huggingface?
<p>No Language Left Behind (NLLB) is the machine translation model available on <a href="https://huggingface.co/facebook/nllb-200-distilled-600M" rel="nofollow noreferrer">https://huggingface.co/facebook/nllb-200-distilled-600M</a></p> <p>It supports a list of languages, but when adding a new language to the tokenizer, the following code runs successfully, yet the language token doesn't get added to the tokenizer object.</p> <pre><code>from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(&quot;facebook/nllb-200-distilled-600M&quot;) tokenizer.additional_special_tokens.append('aym_Latn') print('aym_Latn' in tokenizer.additional_special_tokens) tokenizer </code></pre> <p>[out]:</p> <pre><code>False NllbTokenizerFast(name_or_path='facebook/nllb-200-distilled-600M', vocab_size=256204, model_max_length=1024, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '&lt;s&gt;', 'eos_token': '&lt;/s&gt;', 'unk_token': '&lt;unk&gt;', 'sep_token': '&lt;/s&gt;', 'pad_token': '&lt;pad&gt;', 'cls_token': '&lt;s&gt;', 'mask_token': AddedToken(&quot;&lt;mask&gt;&quot;, rstrip=False, lstrip=True, single_word=False, normalized=True), 'additional_special_tokens': ['ace_Arab', 'ace_Latn', 'acm_Arab', 'acq_Arab', 'aeb_Arab', 'afr_Latn', 'ajp_Arab', 'aka_Latn', 'amh_Ethi', 'apc_Arab', 'arb_Arab', 'ars_Arab', 'ary_Arab', 'arz_Arab', 'asm_Beng', 'ast_Latn', 'awa_Deva', 'ayr_Latn', 'azb_Arab', 'azj_Latn', 'bak_Cyrl', 'bam_Latn', 'ban_Latn', 'bel_Cyrl', 'bem_Latn', 'ben_Beng', 'bho_Deva', 'bjn_Arab', 'bjn_Latn', 'bod_Tibt', 'bos_Latn', 'bug_Latn', 'bul_Cyrl', 'cat_Latn', 'ceb_Latn', 'ces_Latn', 'cjk_Latn', 'ckb_Arab', 'crh_Latn', 'cym_Latn', 'dan_Latn', 'deu_Latn', 'dik_Latn', 'dyu_Latn', 'dzo_Tibt', 'ell_Grek', 'eng_Latn', 'epo_Latn', 'est_Latn', 'eus_Latn', 'ewe_Latn', 'fao_Latn', 'pes_Arab', 'fij_Latn', 'fin_Latn', 'fon_Latn', 'fra_Latn', 'fur_Latn', 'fuv_Latn', 'gla_Latn', 'gle_Latn', 'glg_Latn', 'grn_Latn', 'guj_Gujr', 'hat_Latn',
'hau_Latn', 'heb_Hebr', 'hin_Deva', 'hne_Deva', 'hrv_Latn', 'hun_Latn', 'hye_Armn', 'ibo_Latn', 'ilo_Latn', 'ind_Latn', 'isl_Latn', 'ita_Latn', 'jav_Latn', 'jpn_Jpan', 'kab_Latn', 'kac_Latn', 'kam_Latn', 'kan_Knda', 'kas_Arab', 'kas_Deva', 'kat_Geor', 'knc_Arab', 'knc_Latn', 'kaz_Cyrl', 'kbp_Latn', 'kea_Latn', 'khm_Khmr', 'kik_Latn', 'kin_Latn', 'kir_Cyrl', 'kmb_Latn', 'kon_Latn', 'kor_Hang', 'kmr_Latn', 'lao_Laoo', 'lvs_Latn', 'lij_Latn', 'lim_Latn', 'lin_Latn', 'lit_Latn', 'lmo_Latn', 'ltg_Latn', 'ltz_Latn', 'lua_Latn', 'lug_Latn', 'luo_Latn', 'lus_Latn', 'mag_Deva', 'mai_Deva', 'mal_Mlym', 'mar_Deva', 'min_Latn', 'mkd_Cyrl', 'plt_Latn', 'mlt_Latn', 'mni_Beng', 'khk_Cyrl', 'mos_Latn', 'mri_Latn', 'zsm_Latn', 'mya_Mymr', 'nld_Latn', 'nno_Latn', 'nob_Latn', 'npi_Deva', 'nso_Latn', 'nus_Latn', 'nya_Latn', 'oci_Latn', 'gaz_Latn', 'ory_Orya', 'pag_Latn', 'pan_Guru', 'pap_Latn', 'pol_Latn', 'por_Latn', 'prs_Arab', 'pbt_Arab', 'quy_Latn', 'ron_Latn', 'run_Latn', 'rus_Cyrl', 'sag_Latn', 'san_Deva', 'sat_Beng', 'scn_Latn', 'shn_Mymr', 'sin_Sinh', 'slk_Latn', 'slv_Latn', 'smo_Latn', 'sna_Latn', 'snd_Arab', 'som_Latn', 'sot_Latn', 'spa_Latn', 'als_Latn', 'srd_Latn', 'srp_Cyrl', 'ssw_Latn', 'sun_Latn', 'swe_Latn', 'swh_Latn', 'szl_Latn', 'tam_Taml', 'tat_Cyrl', 'tel_Telu', 'tgk_Cyrl', 'tgl_Latn', 'tha_Thai', 'tir_Ethi', 'taq_Latn', 'taq_Tfng', 'tpi_Latn', 'tsn_Latn', 'tso_Latn', 'tuk_Latn', 'tum_Latn', 'tur_Latn', 'twi_Latn', 'tzm_Tfng', 'uig_Arab', 'ukr_Cyrl', 'umb_Latn', 'urd_Arab', 'uzn_Latn', 'vec_Latn', 'vie_Latn', 'war_Latn', 'wol_Latn', 'xho_Latn', 'ydd_Hebr', 'yor_Latn', 'yue_Hant', 'zho_Hans', 'zho_Hant', 'zul_Latn']}, clean_up_tokenization_spaces=True) </code></pre> <p>There's some solution on <a href="https://github.com/huggingface/tokenizers/issues/247" rel="nofollow noreferrer">https://github.com/huggingface/tokenizers/issues/247</a> but note that if you do something like overriding the additional special tokens, the original ones will be lost, i.e.</p> 
<pre><code>from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(&quot;facebook/nllb-200-distilled-600M&quot;) tokenizer.add_special_tokens({'additional_special_tokens': ['aym_Latn']}) print('aym_Latn' in tokenizer.additional_special_tokens) tokenizer </code></pre> <p>[out]:</p> <pre><code>True NllbTokenizerFast(name_or_path='facebook/nllb-200-distilled-600M', vocab_size=256204, model_max_length=1024, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '&lt;s&gt;', 'eos_token': '&lt;/s&gt;', 'unk_token': '&lt;unk&gt;', 'sep_token': '&lt;/s&gt;', 'pad_token': '&lt;pad&gt;', 'cls_token': '&lt;s&gt;', 'mask_token': AddedToken(&quot;&lt;mask&gt;&quot;, rstrip=False, lstrip=True, single_word=False, normalized=True), 'additional_special_tokens': ['aym_Latn']}, clean_up_tokenization_spaces=True) </code></pre> <p><strong>How to add new language to NLLB tokenizer in Huggingface?</strong></p> <p>My questions in parts are:</p> <ul> <li>(part1) How to add the special tokens for new languages? (without forgetting all the other languages it's trained on)</li> <li>(part2) After adding the special tokens, are there additional steps to properly tokenize inputs? E.g. change/set the language token assignment function</li> <li>(part3) After adding the special tokens and any additional steps, when processing the inputs, should the special token be pre-pended in the raw string? 
Or is there a special function in NLLB tokenizer to automatically add it in when initializing the tokenizer?</li> </ul> <hr /> <p>The desired goal is to be able to do this with pipeline automatically detecting the new added language after fine-tuning the model.</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline model = AutoModelForSeq2SeqLM.from_pretrained(&quot;facebook/nllb-200-distilled-600M&quot;) tokenizer = AutoTokenizer.from_pretrained(&quot;facebook/nllb-200-distilled-600M&quot;) translator = pipeline(‘translation’, model=model, tokenizer=tokenizer, src_lang=&quot;aym_Latn&quot;, tgt_lang=&quot;spa_Latn&quot;, max_length = 512 ) pipeline(&quot;Phisqha alwa pachaw sartapxta ukatx utaj jak’an 3 millas ukaruw muytir sarapxta.&quot;) </code></pre> <p>The pipeline method might not be possible since there might be some implicit function controlling how the tokenizer interacts with the languages, in that case, at least this should work:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline model = AutoModelForSeq2SeqLM.from_pretrained(&quot;facebook/nllb-200-distilled-600M&quot;) tokenizer = AutoTokenizer.from_pretrained(&quot;facebook/nllb-200-distilled-600M&quot;) # In this case, how do we add the `src_lang` and `tgt_lang`? text = &quot;Phisqha alwa pachaw sartapxta ukatx utaj jak’an 3 millas ukaruw muytir sarapxta.&quot; model.generate(**tokenizer([text], return_tensors=&quot;pt&quot;, padding=True)) </code></pre> <p><strong>In the case, how do we add the <code>src_lang</code> and <code>tgt_lang</code>?</strong></p>
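A hedged sketch for part 1: extend the existing list instead of replacing it, so the original codes survive next to the new one. Parts 2 and 3 are noted as comments; whether the `pipeline(src_lang=...)` call accepts the new code after fine-tuning is not verified here:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# (part 1) append, don't overwrite: keeps all 200+ original language codes
tokenizer.add_special_tokens(
    {"additional_special_tokens":
         tokenizer.additional_special_tokens + ["aym_Latn"]}
)

# (part 2) the model needs an embedding row for the new token:
# model.resize_token_embeddings(len(tokenizer))

# (part 3) NLLB tokenizers prepend the tag taken from src_lang themselves,
# so the raw string should not need manual prepending:
tokenizer.src_lang = "aym_Latn"
```

The direct `.append(...)` in the question silently has no effect because `additional_special_tokens` is a property returning a fresh list; `add_special_tokens` is the supported mutation path.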
<python><nlp><huggingface-tokenizers><machine-translation>
2023-05-08 09:42:27
2
123,325
alvas
76,199,262
6,495,199
Send BackgroundTask out of controller
<p>The documentation explains how to send background tasks from controllers, like</p> <pre><code>@app.get(&quot;/mypath&quot;) async def send_notification(email: str, background_tasks: BackgroundTasks): pass </code></pre> <p>I wonder if there's a way to emit a background task out of controller scope (I mean, without needing to pass the <code>BackgroundTasks</code> object from the controller through all my function calls).</p> <p>I did try this, but it didn't work:</p> <pre><code>from fastapi import BackgroundTasks def print_key(key: str): print(f&quot;test bg task: {key}&quot;) def send_stats(key: str): BackgroundTasks().add_task(print_key, key) </code></pre>
<python><python-3.x><fastapi><starlette>
2023-05-08 09:33:23
1
354
Carlos Rojas
76,199,165
1,826,066
Overlay average of data in Plotly plot inside a Streamlit app
<p>In this simplified <code>streamlit</code> app, I am using <code>plotly</code> to plot some dummy data:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import plotly.express as px import streamlit as st n = 100 df1 = pd.DataFrame({&quot;x&quot;: np.arange(n), &quot;y&quot;: np.random.randn(n), &quot;file&quot;: [&quot;f1&quot;] * n}) df2 = pd.DataFrame({&quot;x&quot;: np.arange(n), &quot;y&quot;: np.random.randn(n), &quot;file&quot;: [&quot;f2&quot;] * n}) df = pd.concat([df1, df2]) fig = px.scatter(df, x=&quot;x&quot;, y=&quot;y&quot;, color=&quot;file&quot;) st.plotly_chart(fig) </code></pre> <p>The app looks like this: <a href="https://i.sstatic.net/R8lTg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R8lTg.jpg" alt="enter image description here" /></a></p> <p>I'd like to overlay two horizontal lines inside this plot that represent the average of the data in the selected x-range. That means that this average would have to be recomputed on every zoom event. Additionally, I want to write that average using <code>st.write(...)</code>. I am not sure how to add a callback (which I assume is necessary) in the <code>streamlit</code> context. Is there a simple way to achieve an interactive computation of the chart's data based on zoom events?</p>
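Not from the original post: `st.plotly_chart` does not report zoom events back to Python (a custom component such as streamlit-plotly-events would be needed for true zoom capture), so this hedged sketch drives the x-range from a slider instead and draws the per-file averages with `fig.add_hline`; every slider change reruns the script and recomputes the means:

```python
import numpy as np
import pandas as pd
import plotly.express as px
import streamlit as st

n = 100
df1 = pd.DataFrame({"x": np.arange(n), "y": np.random.randn(n), "file": ["f1"] * n})
df2 = pd.DataFrame({"x": np.arange(n), "y": np.random.randn(n), "file": ["f2"] * n})
df = pd.concat([df1, df2])

# slider stands in for the zoom interaction
x_min, x_max = st.slider("x-range", 0, n - 1, (0, n - 1))
window = df[(df["x"] >= x_min) & (df["x"] <= x_max)]

fig = px.scatter(window, x="x", y="y", color="file")
for name, grp in window.groupby("file"):
    mean_y = grp["y"].mean()
    fig.add_hline(y=mean_y, line_dash="dash")   # one average line per file
    st.write(f"Average of {name} in [{x_min}, {x_max}]: {mean_y:.3f}")
st.plotly_chart(fig)
```

This keeps everything in plain Streamlit reruns with no explicit callback machinery.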
<python><plotly><streamlit>
2023-05-08 09:19:32
0
1,351
Thomas
76,199,139
7,822,387
How to call a Databricks notebook from an external Python notebook
<p>I want to create a Python notebook on my desktop that passes an input to another notebook in Databricks and then returns the output of the Databricks notebook. For example, my local Python file will pass a string into a Databricks notebook, which will reverse the string and then output the result back to my local Python file. What would be the best way to achieve this?</p> <p><a href="https://docs.databricks.com/dev-tools/python-api.html#:%7E:text=In%20your%20Python%20code%20file,get%20the%20environment%20variable%20values.&amp;text=Import%20the%20ApiClient%20class%20from,with%20the%20Databricks%20REST%20API" rel="nofollow noreferrer">https://docs.databricks.com/dev-tools/python-api.html#:~:text=In%20your%20Python%20code%20file,get%20the%20environment%20variable%20values.&amp;text=Import%20the%20ApiClient%20class%20from,with%20the%20Databricks%20REST%20API</a>.</p> <p>I used the REST API detailed in this link. I added my workspace instance https://adb-################.##.azuredatabricks.net and created a token, which is working now.</p> <p>Edit: When I try to create a new run, I get this error. Is my JSON formatted incorrectly, or am I missing something else?
Thanks</p> <pre><code>import os from databricks_cli.sdk.api_client import ApiClient from databricks_cli.clusters.api import ClusterApi os.environ['DATABRICKS_HOST'] = &quot;https://adb-################.##.azuredatabricks.net/&quot; os.environ['DATABRICKS_TOKEN'] = &quot;token-value&quot; api_client = ApiClient(host=os.getenv('DATABRICKS_HOST'), token=os.getenv('DATABRICKS_TOKEN')) runJson = &quot;&quot;&quot; {     &quot;name&quot;: &quot;test job&quot;,     &quot;max_concurrent_runs&quot;: 1,     &quot;tasks&quot;: [     {         &quot;task_key&quot;: &quot;test&quot;,         &quot;description&quot;: &quot;test&quot;,         &quot;notebook_task&quot;: {             &quot;notebook_path&quot;: &quot;/Users/user@domain.com/api_test&quot;         },         &quot;existing_cluster_id&quot;: &quot;cluster_name&quot;,         &quot;timeout_seconds&quot;: 3600,         &quot;max_retries&quot;: 3,         &quot;retry_on_timeout&quot;: true     }     ] } &quot;&quot;&quot; runs_api = RunsApi(api_client) runs_api.submit_run(runJson) Error: Response from server: { 'error_code': 'MALFORMED_REQUEST', 'message': 'Invalid JSON given in the body of the request - expected a map'} </code></pre>
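Not from the original post: databricks-cli's `submit_run` expects a Python dict (the client serializes it itself), so passing the JSON string raises "expected a map". Also note the snippet never imports `RunsApi`, and `existing_cluster_id` must be an actual cluster ID, not a cluster name. A self-contained sketch of the parsing step (the cluster id below is a made-up placeholder):

```python
import json

run_json = """
{
  "run_name": "test job",
  "tasks": [
    {
      "task_key": "test",
      "notebook_task": {"notebook_path": "/Users/user@domain.com/api_test"},
      "existing_cluster_id": "0508-123456-abcde123",
      "timeout_seconds": 3600
    }
  ]
}
"""

payload = json.loads(run_json)   # dict, as the client expects
# from databricks_cli.runs.api import RunsApi
# runs_api = RunsApi(api_client)
# runs_api.submit_run(payload)   # instead of submit_run(run_json)
```

Alternatively, build the payload as a dict literal in the first place and skip the JSON string entirely.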
<python><rest><azure-databricks>
2023-05-08 09:15:54
1
311
J. Doe
76,198,778
343,159
Retrieving the context (sentences) from Pinecone
<p>I am building an open book abstractive QA model using Pinecone and Sentence Transformers, but I am having trouble later recalling the sentences that produced the matching vectors.</p> <p>I thought I could store this context as metadata, like this:</p> <pre><code>with open(&quot;content.processed.json&quot;, &quot;r&quot;) as f: documents = json.load(f) model = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1') qa = [{'id': str(uuid.uuid4()), 'content': doc['content'], 'encodings': model.encode(doc['content']).tolist()} for doc in documents] pinecone.init(API_KEY, environment='us-west1-gcp-free') index = pinecone.Index('httyl-index') upserts = [(v['id'], v['encodings'], {'content': v['content']}) for v in qa] for i in tqdm(range(0, len(upserts), 50)): i_end = i + 50 if i_end &gt; len(upserts): i_end = len(upserts) index.upsert(vectors=upserts[i:i_end]) </code></pre> <p>Later, on retrieving these contexts, I am unable to get access to the sentences:</p> <pre><code>xq = model.encode(query).tolist() xc = index.query(xq, top_k=5) ids = [x['id'] for x in xc['matches']] contexts = index.fetch(ids=ids) model_name = 'deepset/electra-base-squad2' nlp = pipeline(tokenizer=model_name, model=model_name, task='question-answering') for context in contexts: print(nlp( f&quot;question: {query} context: {context.content}&quot;, do_sample=True, temperature=0.8, max_length=64 )) </code></pre> <p>The error is actually around an index, so perhaps this syntax is entirely incorrect, but this is like my 1,011th iteration of this code. The error it produces is:</p> <blockquote> <p>pinecone.core.client.exceptions.ApiAttributeError: FetchResponse has no attribute '0' at ['['received_data']']</p> </blockquote> <p>I need to iterate over those contexts and provide the original sentences to the pipeline. I intend to split this into two at some point, so the original sentences won't necessarily be available.</p> <p>How do I accomplish this? 
I imagined the metadata, including the original sentences, would be returned as metadata.</p> <p>(I should point out I have never done Python in my life)</p>
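As a hedged sketch of the retrieval side: passing `include_metadata=True` to `index.query(...)` should return the stored metadata inside each match, so a separate `fetch` call isn't needed. The response below is a hand-written stand-in shaped like what Pinecone returns (verify the exact shape against your client version):

```python
# Stand-in for: xc = index.query(xq, top_k=5, include_metadata=True)
xc = {
    "matches": [
        {"id": "a1", "score": 0.92, "metadata": {"content": "first stored sentence"}},
        {"id": "b2", "score": 0.88, "metadata": {"content": "second stored sentence"}},
    ]
}

# The original sentences come straight out of the match metadata
contexts = [m["metadata"]["content"] for m in xc["matches"]]
```

Each `context` string could then be fed to the QA pipeline in place of `context.content`.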
<python><huggingface-transformers><sentence-transformers><vector-database>
2023-05-08 08:30:22
0
12,750
serlingpa
76,198,488
17,980,931
Why is numpy.add.at slower than simple grouping and summation?
<p>I am improving the implementation of my kmeans algorithm and currently have the following ndarrays:</p> <ol> <li>Dataset <code>X</code> with shape <code>(n_samples, n_features)</code> and dtype <code>np.float64</code></li> <li>Labels <code>y</code> with shape <code>(n_samples,)</code> and dtype <code>np.int64</code>, satisfying <code>np.logical_and(0 &lt;= y, y &lt; n_clusters).all()</code></li> </ol> <p>Now I need to group the dataset based on labels and calculate the average value to obtain the new cluster centers. To simplify some content, only the summation calculation is performed here. A simple implementation is as follows:</p> <pre><code>new_centers = np.array([X[y == i].sum(0) for i in range(n_clusters)]) </code></pre> <p>Note that this requires iterating <code>n_clusters</code> times and comparing each time to obtain a mask with a shape of <code>(n_samples,)</code>, then copying data from <code>X</code> and calculating the sum, and finally stacking them into an array.</p> <p>I naturally believed that there would be more efficient ways, so I tried using <a href="https://numpy.org/devdocs/reference/generated/numpy.ufunc.at.html" rel="nofollow noreferrer"><code>numpy.add.at</code></a>:</p> <pre><code>new_centers = np.zeros((n_clusters, n_features)) np.add.at(new_centers, y, X) </code></pre> <p>Here, almost pure C is running without Python loops and unnecessary array generation, but surprisingly, under the conditions of <code>n_clusters, n_samples, n_features = 100, 100_000, 10</code>, its performance is worse than the naive implementation:</p> <pre><code>In [_]: n_clusters, n_samples, n_features = 100, 100_000, 10 In [_]: rng = np.random.default_rng() ...: X = rng.random((n_samples, n_features)) ...: y = rng.integers(0, n_clusters, n_samples) ...: In [_]: %timeit np.array([X[y == i].sum(0) for i in range(n_clusters)]) 29.5 ms ± 852 µs per loop (mean ± std. dev. 
of 7 runs, 10 loops each) In [_]: %timeit np.add.at(np.zeros((n_clusters, n_features)), y, X) 58.7 ms ± 708 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>I want to know why <code>numpy.add.at</code> performed so poorly. As a comparison, I also found two similar methods in <code>torch</code> that perform much better than <code>numpy.add.at</code> (I have tested that their outputs are very close or the same):</p> <pre><code>In [_]: %%timeit ...: new_centers = np.zeros((n_clusters, n_features)) ...: torch.from_numpy(new_centers).index_put_([torch.from_numpy(y)], ...: torch.from_numpy(X), accumulate=True) ...: ...: 1.74 ms ± 3.93 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [_]: %%timeit ...: new_centers = np.zeros((n_clusters, n_features)) ...: torch.from_numpy(new_centers).index_add_(0, torch.from_numpy(y), torch.from_numpy(X)) ...: ...: 8.97 ms ± 51.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p>Some version information:</p> <pre><code>In [_]: sys.version, np.__version__, torch.__version__ Out[_]: ('3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)]', '1.24.3', '2.0.0+cu118') </code></pre>
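For comparison, one alternative that sidesteps `np.add.at`'s generic buffered inner loop is `np.bincount` with per-feature weights — a sketch at smaller sizes than in the question, purely to show the two produce the same result:

```python
import numpy as np

n_clusters, n_samples, n_features = 5, 1_000, 3
rng = np.random.default_rng(0)
X = rng.random((n_samples, n_features))
y = rng.integers(0, n_clusters, n_samples)

# Reference: the scatter-add from the question
ref = np.zeros((n_clusters, n_features))
np.add.at(ref, y, X)

# Alternative: one bincount pass per feature column
alt = np.stack(
    [np.bincount(y, weights=X[:, j], minlength=n_clusters) for j in range(n_features)],
    axis=1,
)

assert np.allclose(ref, alt)
```

Whether this is faster than `np.add.at` at the sizes in the question would need timing on the same machine.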
<python><numpy><sum>
2023-05-08 07:51:48
1
7,781
Mechanic Pig
76,198,429
7,128,910
Enforce App Check for Firebase functions in Python
<p>I am trying to enforce firebase app-check for my android app. Specifically, I want to restrict access to my firebase cloud functions written in python. I found instructions on how to <a href="https://firebase.google.com/docs/app-check/cloud-functions?authuser=0" rel="noreferrer">do this on node.js</a> based firebase cloud functions, but none for Python. I'm using the <a href="https://firebase.google.com/docs/reference/admin/python/firebase_admin" rel="noreferrer">firebase_admin module</a>. The module has a <a href="https://firebase.google.com/docs/reference/admin/python/firebase_admin.app_check" rel="noreferrer">verify_token</a> function <code>firebase_admin.app_check.verify_token(token: str, app=None)</code>, however I don't see where I am getting the token from in my HTTP Flask-based cloud function.</p> <p>I define my cloud function like this:</p> <pre><code>def my_cloud_function(request): .... </code></pre> <p>Is the app-check token supposed to be somewhere in the request header or arguments? Is this documented in some place that I missed?</p>
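A hedged sketch of the verification flow, assuming the client SDKs attach the token in the `X-Firebase-AppCheck` request header (that header name is an assumption worth confirming against real traffic; `verify_token` itself needs an initialized Admin app, so the call is left commented):

```python
def extract_app_check_token(headers):
    # Assumption: Firebase client SDKs send the App Check token in this header;
    # the value would then go to firebase_admin.app_check.verify_token()
    return headers.get("X-Firebase-AppCheck")


def my_cloud_function(request_headers):
    token = extract_app_check_token(request_headers)
    if token is None:
        return ("Unauthorized", 401)
    # claims = app_check.verify_token(token)  # raises on an invalid/expired token
    return ("OK", 200)
```

With Flask-style requests, `request_headers` would be `request.headers`.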
<python><firebase><firebase-app-check>
2023-05-08 07:43:35
2
1,261
Nik
76,198,291
5,072,705
Missing custom headers in Google Cloud Storage
<ol> <li>I've created Standard Storaget Type bucket in my GCS console with public access restriction</li> <li>I've put a little file inside this bucket and add custom metadata (&quot;name&quot;:&quot;jack&quot;) using GUI</li> <li>I've created simple Python script for downloading this file:</li> </ol> <p>from pathlib import Path import requests</p> <pre><code>def url_retrieve(url: str, outfile: Path): R = requests.get(url, allow_redirects=True) if R.status_code != 200: raise ConnectionError('could not download {}\nerror code: {}'.format(url, R.status_code)) print(R.history) print(R.headers) print(R.status_code) outfile.write_bytes(R.content) url_retrieve(url, path) </code></pre> <p>I pass url from &quot;Authenticated URL&quot; field in object inside GCS Console GUI.</p> <p>Problem: There is no my custom metadata inside headers. What am i doing wrong? :)</p> <p>Example Response Headers:</p> <p>Content-Type: text/html; charset=utf-8 X-Frame-Options: DENY Set-Cookie: __Host-GAPS=1:YbfLYYNRk-YoqA4WaishOZQsoWZyKg:Mgn4RZDSaVcAMUG4; Expires=Wed, 07-May-2025 08:51:52 GMT; Path=/; Secure; HttpOnly; Priority=HIGH x-auto-login: realm=com.google&amp;args=service%3Dcds%26continue%3Dhttps://storage.cloud.google.com/kravtsov-test-bucket/tox_01.tox x-ua-compatible: IE=edge Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache Expires: Mon, 01 Jan 1990 00:00:00 GMT Date: Mon, 08 May 2023 08:51:52 GMT Strict-Transport-Security: max-age=31536000; includeSubDomains Cross-Origin-Opener-Policy-Report-Only: same-origin; report-to=&quot;AccountsSignInUi&quot; Content-Security-Policy: script-src 'report-sample' 'nonce-4gxVLMlbGh5kkUnB0MDL4g' 'unsafe-inline';object-src 'none';base-uri 'self';report-uri /v3/signin/<em>/AccountsSignInUi/cspreport;worker-src 'self' Content-Security-Policy: require-trusted-types-for 'script';report-uri /v3/signin/</em>/AccountsSignInUi/cspreport Accept-CH: Sec-CH-UA-Arch, Sec-CH-UA-Bitness, Sec-CH-UA-Full-Version, 
Sec-CH-UA-Full-Version-List, Sec-CH-UA-Model, Sec-CH-UA-WoW64, Sec-CH-UA-Platform, Sec-CH-UA-Platform-Version Cross-Origin-Resource-Policy: same-site Report-To: {&quot;group&quot;:&quot;AccountsSignInUi&quot;,&quot;max_age&quot;:2592000,&quot;endpoints&quot;:[{&quot;url&quot;:&quot;https://csp.withgoogle.com/csp/report-to/AccountsSignInUi&quot;}]} Permissions-Policy: ch-ua-arch=<em>, ch-ua-bitness=</em>, ch-ua-full-version=<em>, ch-ua-full-version-list=</em>, ch-ua-model=<em>, ch-ua-wow64=</em>, ch-ua-platform=<em>, ch-ua-platform-version=</em> Server: ESF X-XSS-Protection: 0 X-Content-Type-Options: nosniff Alt-Svc: h3=&quot;:443&quot;; ma=2592000,h3-29=&quot;:443&quot;; ma=2592000 Accept-Ranges: none Vary: Sec-Fetch-Dest, Sec-Fetch-Mode, Sec-Fetch-Site,Accept-Encoding Connection: close Transfer-Encoding: chunked</p>
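Worth noting: the response headers above (Set-Cookie for GAPS, `x-auto-login`) are from a Google sign-in redirect — `storage.cloud.google.com` is the browser-auth endpoint, so the script never reached the object. When the object is fetched from an authenticated API endpoint, custom metadata is (as far as I know — verify against a real response) exposed as `x-goog-meta-*` headers; a sketch of extracting it from a headers mapping:

```python
def custom_metadata(headers):
    # Assumption: custom object metadata comes back as x-goog-meta-<key>
    # headers on the storage.googleapis.com (XML API) endpoint
    prefix = "x-goog-meta-"
    return {
        k[len(prefix):]: v
        for k, v in headers.items()
        if k.lower().startswith(prefix)
    }
```

Used as `custom_metadata(R.headers)` after an authenticated `requests.get` against the API endpoint; the `google-cloud-storage` client's `blob.metadata` property is the other route.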
<python><google-cloud-storage>
2023-05-08 07:18:42
1
1,213
Евгений Кравцов
76,198,210
11,801,298
How to have a smooth transition in an animation
<p>I want to create animation using matplotlib.</p> <p>But there is one problem related to the beauty of visualization. The next value appears abruptly after the previous one. There is no smooth transition. Such animation is good for own use, but it does not look good in a public demonstration.</p> <p>Example from official documentation</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation np.random.seed(19680801) HIST_BINS = np.linspace(-4, 4, 100) data = np.random.randn(1000) n, _ = np.histogram(data, HIST_BINS) def prepare_animation(bar_container): def animate(frame_number): # simulate new data coming in data = np.random.randn(1000) n, _ = np.histogram(data, HIST_BINS) for count, rect in zip(n, bar_container.patches): rect.set_height(count) return bar_container.patches return animate fig, ax = plt.subplots() _, _, bar_container = ax.hist(data, HIST_BINS, lw=1, ec=&quot;yellow&quot;, fc=&quot;green&quot;, alpha=0.5) ax.set_ylim(top=55) ani = animation.FuncAnimation(fig, prepare_animation(bar_container), 50, repeat=False, blit=True) plt.show() </code></pre> <p><a href="https://i.sstatic.net/WZ70R.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WZ70R.gif" alt="enter image description here" /></a></p> <p>Can matplotlib provide a smooth transition from one value to another? Here's an example using the <code>flet</code> library.</p> <p><img src="https://flet.dev/img/docs/getting-started/animations/animate-offset.gif" alt="GIF with animation made by flet library" /></p> <p>Can this be done in matplotlib?</p>
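matplotlib has no built-in tweening, but the usual workaround is to split every data update into several interpolation sub-frames and call `rect.set_height()` with the blended value on each one. The blend itself is just linear interpolation — a minimal sketch of the per-sub-frame heights (the animation loop would iterate over `frames`):

```python
import numpy as np

old_heights = np.array([10.0, 25.0, 5.0])
new_heights = np.array([30.0, 0.0, 15.0])

n_sub = 5  # sub-frames per data update; more sub-frames = smoother motion
frames = [old_heights + (new_heights - old_heights) * t
          for t in np.linspace(0.0, 1.0, n_sub)]

# inside animate(frame_number):
#   for count, rect in zip(frames[frame_number % n_sub], bar_container.patches):
#       rect.set_height(count)
```

With `FuncAnimation`, the total frame count becomes `n_updates * n_sub` and the `interval` is divided accordingly.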
<python><matplotlib><matplotlib-animation>
2023-05-08 07:08:09
0
877
Igor K.
76,198,051
610,569
How to add new tokens to an existing Huggingface tokenizer?
<h1>How to add new tokens to an existing Huggingface AutoTokenizer?</h1> <p>Canonically, there's this tutorial from Huggingface <a href="https://huggingface.co/learn/nlp-course/chapter6/2" rel="noreferrer">https://huggingface.co/learn/nlp-course/chapter6/2</a> but it ends on the note of &quot;quirks when using existing tokenizers&quot;. And then it points to the <code>train_new_from_iterator()</code> function in Chapter 7 but I can't seem to find reference to how to use it to extend the tokenizer without re-training it.</p> <p><strong>I've tried</strong> the solution from <a href="https://stackoverflow.com/questions/71974438/training-new-autotokenizer-hugging-face">Training New AutoTokenizer Hugging Face</a> that uses <code>train_new_from_iterator()</code> but that will re-train a tokenizer, but it is not extending it, the solution would replace the existing token indices. <a href="https://stackoverflow.com/questions/71974438/training-new-autotokenizer-hugging-face">Training New AutoTokenizer Hugging Face</a></p> <pre><code>import pandas as pd def batch_iterator(batch_size=3, size=8): df = pd.DataFrame({&quot;note_text&quot;: ['foobar', 'helloworld']}) for x in range(0, size, batch_size): yield df['note_text'].to_list() old_tokenizer = AutoTokenizer.from_pretrained('roberta-base') training_corpus = batch_iterator() new_tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 32000) print(len(old_tokenizer)) print(old_tokenizer( ['foobarzz', 'helloworld'] )) print(new_tokenizer( ['foobarzz', 'hello world'] )) </code></pre> <p>[out]:</p> <pre><code>50265 {'input_ids': [[0, 21466, 22468, 7399, 2], [0, 20030, 1722, 39949, 2]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]} {'input_ids': [[0, 275, 2], [0, 276, 2]], 'attention_mask': [[1, 1, 1], [1, 1, 1]]} </code></pre> <p><strong>Note:</strong> The reason why the new tokens starts from 275 and 276 is because there are reserved tokens from ids 0-274.</p> <p>The expected behavior of <code>new_tokenizer( 
['foo bar', 'hello word'] )</code> is to have IDs beyond the tokenizer vocab size (i.e. 50265 for the <code>roberta-base</code> model) and it should look like this:</p> <pre><code>{'input_ids': [[0, 50265, 2], [0, 50266, 2]], 'attention_mask': [[1, 1, 1], [1, 1, 1]]} </code></pre>
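For reference, what the question asks for is extension rather than retraining: with `transformers` this is normally `tokenizer.add_tokens([...])` followed by `model.resize_token_embeddings(len(tokenizer))`, which appends the new ids after the existing vocabulary. A toy sketch of that append-only behaviour, using plain dicts so it runs without `transformers` installed (the vocab entries below are illustrative, not real roberta-base ids except by coincidence):

```python
def add_tokens(vocab, new_tokens):
    """Append unseen tokens after the current vocabulary, like tokenizer.add_tokens."""
    next_id = max(vocab.values()) + 1
    added = 0
    for tok in new_tokens:
        if tok not in vocab:
            vocab[tok] = next_id  # id lands past the existing vocab, as desired
            next_id += 1
            added += 1
    return added

vocab = {"<s>": 0, "</s>": 2, "hello": 20030, "world": 50264}
n_added = add_tokens(vocab, ["foobarzz", "hello"])  # "hello" already exists
```

With the real API, `add_tokens` likewise returns the number of tokens actually added, which is what `resize_token_embeddings` must account for.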
<python><nlp><huggingface-transformers><huggingface-tokenizers><large-language-model>
2023-05-08 06:41:32
1
123,325
alvas
76,197,932
13,403,510
Handling timestamps in time series data and making predictions
<p>So I have data which looks like this:</p> <pre><code>import pandas as pd data = {'Date': ['16-08-2021', '16-08-2021', '17-08-2021', '18-08-2021', '19-08-2021'], 'Reason no': ['R13', 'R2', 'R5', 'R2', 'R3'], 'Minutes': [115, 625, 625, 1364, 1440], 'Issues': ['Not meeting the hourly target output', 'Air leak issue', 'other problem', 'Air leak issue', 'Air leak issue']} df = pd.DataFrame(data) </code></pre> <p><a href="https://i.sstatic.net/gtwrY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gtwrY.png" alt="enter image description here" /></a> We can see that on 16-08-2021 we have 2 downtimes of different categories, so we can modify this data into 2 separate timestamps, like '16-08-2021 1AM' and '16-08-2021 4AM', and also assign time features to all other dates. Minutes is the <strong>downtime</strong>. My goal is to forecast the next downtime for the next 2 days, which could look like, e.g.:</p> <pre><code>'Date': ['20-08-2021 5AM', '20-08-2021 6PM', '21-08-2021 12AM'], 'Reason no': ['R5', 'R2', 'R2'], 'Minutes': [655, 142, 425], 'Issues': [ 'Air leak issue', 'other problem', 'Air leak issue']} </code></pre> <p>What I have seen is that <strong>most tutorials just treat datetime data as an id and ignore it</strong>, but in my case the date and timestamp are important features. I want to train my model using LSTM and other hybrid techniques.</p> <p>So, how can I deal with multiple records on the same date using pandas and Python, and keep date-time as a feature variable? Kindly looking for help. Thank you. Also looking for any additional insights and suggestions.</p>
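The "two downtimes on the same date" step can be handled by parsing the dates and offsetting duplicates with an hourly rank, so every row gets a distinct timestamp that remains usable as a model feature. A sketch (the one-hour offset is an arbitrary choice, since the true event times aren't in the data):

```python
import pandas as pd

data = {"Date": ["16-08-2021", "16-08-2021", "17-08-2021"],
        "Reason no": ["R13", "R2", "R5"],
        "Minutes": [115, 625, 625]}
df = pd.DataFrame(data)

df["Date"] = pd.to_datetime(df["Date"], format="%d-%m-%Y")
# rank within each date (0, 1, ...) becomes an hour offset -> unique timestamps
df["Date"] += pd.to_timedelta(df.groupby("Date").cumcount(), unit="h")

# datetime-derived columns a model can consume as numeric features
df["dayofweek"] = df["Date"].dt.dayofweek
df["hour"] = df["Date"].dt.hour
```

For an LSTM, these derived columns (or sine/cosine encodings of them) would go into the feature matrix alongside lagged `Minutes` values.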
<python><pandas><datetime><deep-learning><lstm>
2023-05-08 06:15:42
0
1,066
def __init__
76,197,920
5,834,711
How to sum the values of multiple rows, group by id and keep the rows of all unique ids with pandas in python?
<p>I have a file with the following columns: 'id','H1','H2','H3','H4','H5','H6','H7','H8','H9','H10','H11','H12','H13','H14','H15','H16','H17','H18','H19','H20','H21','H22','H23','H24'</p> <p>see an example: <a href="https://i.sstatic.net/74NOd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/74NOd.png" alt="enter image description here" /></a></p> <p>I want to sum the values of columns but grouping by id and keep only the rows of the unique id (i.e., 1,2,3,4,5,6) I tried the following but the sum does not work.</p> <pre><code> col_names=['id','H1','H2','H3','H4','H5','H6','H7','H8','H9','H10','H11','H12','H13','H14','H15','H16','H17','H18','H19','H20','H21','H22','H23','H24'] df = pd.read_csv(i,sep='\t',names=col_names) cols = df.filter(regex='^H').columns df.groupby(['id'])[cols].sum() df_final = df.sort_index(by=['id'], ascending=[True]) df_final.to_csv(outfile,sep='\t',index=False,header=False) </code></pre> <p>Edited/Update:</p> <p>I am trying the following, since I have multiple files with thousands of rows (in case the file has 150000 rows):</p> <pre><code>col_names=['id','H1','H2','H3','H4','H5','H6','H7','H8','H9','H10','H11','H12','H13','H14','H15','H16','H17','H18','H19','H20','H21','H22','H23','H24'] df = pd.read_csv(i,sep='\t',names=col_names) cols = df.filter(regex='^H').columns grouped=df.groupby(['id'])[cols].sum() unique_ids = [range(1,150000)] grouped = grouped.loc[unique_ids] </code></pre> <p>I get the error</p> <pre><code>key=key, axis=self.obj._get_axis_name(axis))) KeyError: u&quot;None of [Index([(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, ...)], dtype='object', name=u'id')] are in the 
[index]&quot; </code></pre>
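One thing to note about the attempt above: `df.groupby(['id'])[cols].sum()` returns a new frame, and the result is never assigned, so `df` is written out unchanged. A hedged sketch of the intended aggregation with toy data in place of the real file (`as_index=False` keeps `id` as a column, and `sort_values` replaces the deprecated `sort_index(by=...)`):

```python
import pandas as pd

df = pd.DataFrame({"id": [2, 1, 1, 3],
                   "H1": [7, 10, 5, 1],
                   "H2": [0, 1, 2, 3]})

cols = df.filter(regex="^H").columns
# assign the groupby result instead of discarding it
out = df.groupby("id", as_index=False)[list(cols)].sum().sort_values("id")
```

`out` then has one row per unique id, with the H-columns summed, and can be passed to `to_csv` as before.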
<python><pandas><group-by><sum>
2023-05-08 06:14:07
2
345
Nat
76,197,800
7,921,635
Filter a list of dictionaries based on two values
<p>I have a list of dictionaries:</p> <p><code>[{&quot;score&quot;:-10, &quot;name&quot;: &quot;Tom&quot;, &quot;etc&quot;: 1}, {&quot;score&quot;:-20, &quot;name&quot;: &quot;Tom&quot;, &quot;etc&quot;: 2}, {&quot;score&quot;:-110, &quot;name&quot;: &quot;Jerry&quot;, &quot;etc&quot;: 3}, {&quot;score&quot;:-210, &quot;name&quot;: &quot;Jerry&quot;, &quot;etc&quot;: 5}]</code></p> <p>How can I filter the list so that I keep only one entry per <code>unique</code> name, the one with the lowest score?</p> <p>Desired result:</p> <p><code>[{&quot;score&quot;:-20, &quot;name&quot;: &quot;Tom&quot;, &quot;etc&quot;: 2}, {&quot;score&quot;:-210, &quot;name&quot;: &quot;Jerry&quot;, &quot;etc&quot;: 5}]</code></p> <p>How can I accomplish that easily, with built-in libraries, without using something like pandas?</p>
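A minimal pure-standard-library sketch: keep a dict keyed by name and replace an entry only when a lower score shows up — one pass, no pandas (grouping with `min(..., key=...)` per name would work too):

```python
data = [{"score": -10, "name": "Tom", "etc": 1},
        {"score": -20, "name": "Tom", "etc": 2},
        {"score": -110, "name": "Jerry", "etc": 3},
        {"score": -210, "name": "Jerry", "etc": 5}]

best = {}
for d in data:
    # keep the record with the lowest score for each name
    if d["name"] not in best or d["score"] < best[d["name"]]["score"]:
        best[d["name"]] = d

result = list(best.values())
```

Since dicts preserve insertion order, names come out in first-seen order.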
<python>
2023-05-08 05:43:31
1
427
Nir Vana
76,197,715
6,005,206
Pandas rolling max by skipping NaN
<p>Trying to get the rolling max in presence of NaN's. For some reason, not getting the expected output. Any thoughts!</p> <pre><code># Import library import pandas as pd # Data x = pd.Series([1,2,np.nan,4,np.nan]) # Get rolling max out = x.rolling(window=2).max() print(out) print() # Get rolling max out = x.rolling(window=2).max(skipna=True) #&lt;-- deprecation warning print(out) </code></pre> <p>Output</p> <pre><code>0 NaN 1 2.0 2 NaN 3 NaN 4 NaN dtype: float64 0 NaN 1 2.0 2 NaN 3 NaN 4 NaN dtype: float64 </code></pre> <p>Warning:</p> <pre><code>FutureWarning: Passing additional kwargs to Rolling.max has no impact on the result and is deprecated. This will raise a TypeError in a future version of pandas. </code></pre> <p>Need the following output:</p> <pre><code>0 NaN 1 2.0 2 2.0 3 4.0 4 4.0 dtype: float64 </code></pre> <p><strong>Edited</strong></p> <p>Thank you for answers below. However, I need it to work in situation such as below as well. Would <code>.rolling().max()</code> get the desired output?</p> <p>New data:</p> <pre><code>x = pd.Series([1,2,np.nan,np.nan,4,np.nan]) </code></pre> <p>Current output</p> <pre><code>0 NaN 1 2.0 2 NaN 3 NaN 4 NaN 5 NaN </code></pre> <p>Desired output</p> <pre><code>0 NaN 1 2.0 2 2.0 3 NaN 4 4.0 5 4.0 </code></pre>
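One combination that reproduces the desired output for both series is `min_periods=1` — lone NaNs inside a full window are then skipped, while an all-NaN window still yields NaN — plus masking the leading partial windows by hand, since `min_periods=1` would otherwise produce a value at position 0. A sketch (`skipna` is not a supported rolling-max argument, hence the warning):

```python
import numpy as np
import pandas as pd

x = pd.Series([1, 2, np.nan, np.nan, 4, np.nan])
w = 2

out = x.rolling(w, min_periods=1).max()  # NaNs skipped; all-NaN window -> NaN
out.iloc[:w - 1] = np.nan                # keep the incomplete leading windows as NaN
```

This yields `[NaN, 2, 2, NaN, 4, 4]`, matching the second desired output (and the first, when run on the original series).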
<python><pandas><max><rolling-computation><skip>
2023-05-08 05:23:16
2
1,893
Nilesh Ingle
76,197,453
3,713,835
How to pick values from a list if the gap (diff) is larger than a defined number
<p><code>a = [44, 52, 59, 62, 174, 175, 178, 288, 292, 404, 408, 518, 521, 633, 637, 747, 751, 863, 867, 977, 980, 987]</code></p> <p>For example, in the above list, if the gap is 100, then the first 3 items should be filtered out.</p> <p>The final result should be:</p> <p><code>b = [62, 174, 288, 404, 518, 633, 747, 863, 977]</code></p> <p>Here is my code:</p> <pre><code>def pick(lst, gap): result = [] first_item_picked = False for idx, x in enumerate(lst): try: if not first_item_picked and idx &lt; len(lst)-1 and lst[idx+1]-x &gt; gap: first_item_picked = True result.append(x) elif x- lst[idx-1] &gt; gap: result.append(x) except: print(idx,x) return result b = pick(a,100) print(b) </code></pre> <p>How can I write a simpler algorithm to do this?</p>
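A vectorised equivalent of the loop: `np.diff` locates the large gaps once, and the result is the element just before the first large gap followed by every element just after one (matching what the code above computes):

```python
import numpy as np

def pick(lst, gap):
    arr = np.asarray(lst)
    big = np.flatnonzero(np.diff(arr) > gap)  # indices i where arr[i+1] - arr[i] > gap
    if big.size == 0:
        return []
    # element just before the first large gap, then each element just after one
    return [int(arr[big[0]])] + [int(v) for v in arr[big + 1]]

a = [44, 52, 59, 62, 174, 175, 178, 288, 292, 404, 408, 518, 521,
     633, 637, 747, 751, 863, 867, 977, 980, 987]
```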
<python>
2023-05-08 04:04:29
2
424
Niuya
76,197,265
21,107,707
Why is squaring a number slower than multiplying it by itself?
<p>I was curious and decided to run this code in python:</p> <pre class="lang-py prettyprint-override"><code>import time def timeit(function): strt = time.time() for _ in range(100_000_000): function() end = time.time() print(end-strt) @timeit def function1(): return 1 * 1 @timeit def function2(): return 1_000_000_000_000_000_000_000_000_000_000 * 1_000_000_000_000_000_000_000_000_000_000 @timeit def function3(): return 1_000_000_000_000_000_000_000_000_000_000 ** 2 </code></pre> <p>Here are my results:</p> <pre><code>4.712368965148926 9.684480905532837 11.74640703201294 </code></pre> <p>Why is the third function (squaring the number) slower than the second function (multiplying the number by itself)? What's going on in the computer, as I thought that doing exponents was simply multiplying a number by itself directly anyways?</p>
<python>
2023-05-08 03:04:46
1
801
vs07
76,197,241
10,452,700
Debugging RandomForestRegressor() producing mainly constant forecast results over time-series data
<p>Let's say I have a <a href="https://drive.google.com/file/d/18PGLNnOI44LVFignYriBWQFW9WBkTX5c/view?usp=share_link" rel="nofollow noreferrer">dataset</a> containing a timestamp (a <em>non-standard</em> timestamp column without datetime format) as a single feature and <code>count</code> as the label/target to predict, in the following <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged &#39;pandas&#39;" aria-label="show questions tagged &#39;pandas&#39;" rel="tag" aria-labelledby="tag-pandas-tooltip-container">pandas</a> dataframe format:</p> <pre><code> X y Timestamp label +--------+-----+ |TS_24hrs|count| +--------+-----+ |0 |157 | |1 |334 | |2 |176 | |3 |86 | |4 |89 | ... ... |270 |192 | |271 |196 | |272 |251 | |273 |138 | +--------+-----+ 274 rows × 2 columns </code></pre> <p>I have already implemented RF regression within an <a href="/questions/tagged/sklearn" class="post-tag" title="show questions tagged &#39;sklearn&#39;" aria-label="show questions tagged &#39;sklearn&#39;" rel="tag" aria-labelledby="tag-sklearn-tooltip-container">sklearn</a> <code>pipeline()</code> after splitting the data with the following strategy for 274 records:</p> <blockquote> <ul> <li>split data into [<em>training-set</em> + <em>validation-set</em>] <a href="https://stackoverflow.com/a/59452915/10452700">Ref.</a> <em>e.g. The first 200 records [160 +40]</em></li> <li>keeping <strong>unseen</strong> [<em>test-set</em>] hold-on for final forecasting <em>e.g. 
The last 74 records (after 200th rows\event)</em></li> </ul> </blockquote> <pre class="lang-py prettyprint-override"><code>#print(train.shape) #(160, 2) #print(validation.shape) #(40, 2) #print(test.shape) #(74, 2) </code></pre> <p>I tried default pipeline as well as optimized one by tuning Hyper-parameters to get optimum results by equipping the RF pipeline with <a href="https://scikit-learn.org/stable/tutorial/statistical_inference/putting_together.html" rel="nofollow noreferrer">GridSearchCV()</a>, however results didn't improve as follow:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.metrics import r2_score print(f&quot;r2 (defaults): {r2_score(test['count'], rf_pipeline2.predict(X_test))}&quot;) print(f&quot;r2 (opt.): {r2_score(test['count'], rf_pipeline2o.predict(X_test))}&quot;) #r2 (defaults): 0.025314471951056405 #r2 (opt.): 0.07593841572721849 </code></pre> <p><img src="https://i.imgur.com/4wF0lv9.jpg" alt="img" /></p> <p>Full code for reproducing the example:</p> <pre class="lang-py prettyprint-override"><code># Load the time-series data as dataframe import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('/content/U2996_24hrs_.csv', sep=&quot;,&quot;) # The first 200 records slice for training-set and validation-set df200 = df[:200] # The rest records = 74 events (after 200th event) kept as hold-on unseen-set for forecasting test = df[200:] #test (keep it unseen) # Split the data into training and testing sets from sklearn.model_selection import train_test_split X = df200[['TS_24hrs']] y = df200['count'] X_train, X_val, y_train, y_val = train_test_split(X, y , test_size=0.2, shuffle=False, random_state=0) #train + validat X_test = test['count'].values.reshape(-1,1) # Train and fit the RF model from sklearn.ensemble import RandomForestRegressor #rf_model = RandomForestRegressor(random_state=10).fit(train, train['count']) #X, y # build an end-to-end pipeline, and supply the data into a regression model 
and train within pipeline. It avoids leaking the test\val-set into the train-set from sklearn.preprocessing import MinMaxScaler from sklearn.ensemble import RandomForestRegressor from sklearn.pipeline import Pipeline, make_pipeline # Pipeline (defaults) rf_pipeline2 = Pipeline([('scaler', MinMaxScaler()),('RF', RandomForestRegressor(random_state=10))]).fit(X_train,y_train) #Approach 2 train-set excludes label # Pipeline (optimum) # Parameters of pipelines can be set using '__' separated parameter names: from sklearn.model_selection import GridSearchCV from sklearn.model_selection import TimeSeriesSplit tscv = TimeSeriesSplit(n_splits = 5) param_grid = { &quot;RF__n_estimators&quot;: [10, 50, 100], &quot;RF__max_depth&quot;: [1, 5, 10, 25], &quot;RF__max_features&quot;: [*np.arange(0.1, 1.1, 0.1)],} rf_pipeline2o = Pipeline([('scaler', MinMaxScaler()),('RF', GridSearchCV(rf_pipeline2, param_grid=param_grid, n_jobs=2, cv=tscv, refit=True))]).fit(X_train,y_train) #Approach 2 train-set excludes label # Displaying a Pipeline with a Preprocessing Step and Regression from sklearn import set_config set_config(display=&quot;text&quot;) #print(rf_pipeline2) #print(rf_pipeline2o) # Use the pipeline to predict over the validation-set and test-set y_predictions_test2 = rf_pipeline2.predict(X_test) y_predictions_test2o = rf_pipeline2o.predict(X_test) y_predictions_val2 = rf_pipeline2.predict(X_val) y_predictions_val2o = rf_pipeline2o.predict(X_val) # Convert forecast result over the test-set into dataframe for plot issue with ease df_pre_test_rf2 = pd.DataFrame({'TS_24hrs':test['TS_24hrs'], 'count_forecast_test':y_predictions_test2}) df_pre_test_rf2o = pd.DataFrame({'TS_24hrs':test['TS_24hrs'], 'count_forecast_test':y_predictions_test2o}) # Convert predict result over the validation-set into dataframe for plot issue with ease df_pre_val_rf2 = pd.DataFrame({'TS_24hrs':X_val['TS_24hrs'], 'count_prediction_val':y_predictions_val2}) df_pre_val_rf2o = 
pd.DataFrame({'TS_24hrs':X_val['TS_24hrs'], 'count_prediction_val':y_predictions_val2o}) # evaluate performance with MAE # Evaluate performance by calculate the loss and metric over unseen test-set from sklearn.metrics import mean_absolute_error, mean_squared_error, mean_absolute_percentage_error, explained_variance_score, r2_score rf_mae_test2 = mean_absolute_error(test['count'], df_pre_test_rf2['count_forecast_test']) rf_mae_test2o = mean_absolute_error(test['count'], df_pre_test_rf2o['count_forecast_test']) #visulize forecast or prediction of RF pipleine import matplotlib.pyplot as plt fig, ax = plt.subplots( figsize=(10,4)) pd.Series(y_train).plot(label='Training-set', c='b') pd.Series(y_val).plot(label='Validation-set', linestyle=':', c='b') test['count'].plot(label='Test-set (unseen)', c='cyan') #predict plot over validation-set df_pre_val_rf2['count_prediction_val'].plot(label=f'RF_predict_val (defaults) ', linestyle='--', c='green', marker=&quot;+&quot;) df_pre_val_rf2o['count_prediction_val'].plot(label=f'RF_predict_val (opt.) ', linestyle='--', c='purple', marker=&quot;+&quot;, alpha= 0.4) #forecast plot over test-set (unseen) df_pre_test_rf2['count_forecast_test'].plot(label=f'RF_forecast_test (defaults) MAE={rf_mae_test2:.2f}', linestyle='--', c='green', marker=&quot;*&quot;) df_pre_test_rf2o['count_forecast_test'].plot(label=f'RF_forecast_test (opt.) MAE={rf_mae_test2o:.2f}', linestyle='--', c='purple', marker=&quot;*&quot;, alpha= 0.4) plt.legend() plt.title('Plot of comparioson results of used implementation approaches trained RF pipeline ') plt.ylabel('count', fontsize=15) plt.xlabel('Timestamp [24hrs]', fontsize=15) plt.show() </code></pre> <p>I have implemented different approaches, but I have not figured out to debug the problem so far. 
In the past time, I had some issues with another regressor that outputs constant prediction, and I resolved them with Hyper-parameters tuning like this <a href="https://stackoverflow.com/q/71285022/10452700">post</a>.</p> <p>Furthermore, based on this <a href="https://stats.stackexchange.com/a/235233/240550">answer</a>:</p> <blockquote> <p><em>&quot;...<strong>random forest</strong> for regression / regression trees doesn't produce expected predictions for data points <strong>beyond the scope of training data range</strong> because they cannot extrapolate (well).&quot;</em></p> </blockquote> <p>Despite this could explain constant forecast concerning <strong>out-of-sample prediction</strong> over test-set (unseen), but still, I believe that even if it is the case, it should show a non-constant prediction over the validation-set in my case while it is constant.</p> <hr /> <p>Some related posts:</p> <ul> <li><a href="https://neptune.ai/blog/random-forest-regression-when-does-it-fail-and-why" rel="nofollow noreferrer">Random Forest Regression: When Does It Fail and Why?</a></li> <li><a href="https://stackoverflow.com/q/74673426/10452700">Why did the prediction results of Random Forest and Gradient Boosting change into a straight line after a certain time point?</a></li> <li><a href="https://stackoverflow.com/q/44095921/10452700">XGBoost and Random Forest lead to constant predictions on test set when training data are centered</a></li> <li><a href="https://stackoverflow.com/q/67625762/10452700">Forecasts are constant all the time in time series data</a></li> <li><a href="https://stackoverflow.com/q/57927187/10452700">LSTM Time-Series produces shifted forecast?</a></li> <li><a href="https://stackoverflow.com/q/69495910/10452700">Time series forecasting- wrong result</a></li> <li><a href="https://stackoverflow.com/q/57960616/10452700">facebook prophet produces constant forecasts for time series with less than 25 observations</a></li> <li><a 
href="https://stackoverflow.com/q/59617202/10452700">Time Series Forecasting Python | Constant Increasing Predictions</a></li> <li><a href="https://stackoverflow.com/q/60000473/10452700">Time Series Forecasting model with LSTM in Tensorflow predicts a constant</a></li> </ul>
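One debugging direction not covered by the links above: with a single monotonically increasing timestamp as the only feature, every out-of-range test value falls into the last leaf of every tree, so the forecast is necessarily (near-)constant — trees cannot extrapolate. A hedged sketch of reframing the problem with lagged targets as features instead (synthetic data stands in for the CSV; real multi-step forecasting would feed predictions back recursively):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 274
series = 100 * np.sin(np.arange(n) / 10.0) + rng.normal(0, 5, n)
df = pd.DataFrame({"count": series})

# lagged copies of the target become the features, so the model learns
# "next value given recent values" instead of "value given clock time"
for lag in (1, 2, 3):
    df[f"lag_{lag}"] = df["count"].shift(lag)
df = df.dropna()

train, test = df.iloc[:200], df.iloc[200:]
model = RandomForestRegressor(random_state=10).fit(
    train.drop(columns="count"), train["count"])
pred = model.predict(test.drop(columns="count"))
```

Because the lag features stay inside the training range even for future rows, the predictions vary instead of collapsing to a constant.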
<python><machine-learning><scikit-learn><time-series><random-forest>
2023-05-08 02:56:12
2
2,056
Mario
76,197,174
5,601,663
AWS Elastic Beanstalk: can't install mysqlclient
<p>I am going through the <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-rds.html" rel="nofollow noreferrer">adding database</a> to elastic beanstalk python documentation but following the steps leads to the following error</p> <pre><code>2023/05/07 14:06:36.847596 [ERROR] An error occurred during execution of command [app-deploy] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 1. Stderr: error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─&gt; [18 lines of output] /bin/sh: line 1: mysql_config: command not found /bin/sh: line 1: mariadb_config: command not found /bin/sh: line 1: mysql_config: command not found Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; File &quot;&lt;pip-setuptools-caller&gt;&quot;, line 34, in &lt;module&gt; File &quot;/tmp/pip-install-7hpz1pvs/mysqlclient_f18de82744fa43e9b3b8706c3e581791/setup.py&quot;, line 15, in &lt;module&gt; metadata, options = get_config() ^^^^^^^^^^^^ File &quot;/tmp/pip-install-7hpz1pvs/mysqlclient_f18de82744fa43e9b3b8706c3e581791/setup_posix.py&quot;, line 70, in get_config libs = mysql_config(&quot;libs&quot;) ^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-install-7hpz1pvs/mysqlclient_f18de82744fa43e9b3b8706c3e581791/setup_posix.py&quot;, line 31, in mysql_config raise OSError(&quot;{} not found&quot;.format(_mysql_config_path)) OSError: mysql_config not found mysql_config --version mariadb_config --version mysql_config --libs [end of output] </code></pre> <p>I did some digging and came to the conclusion that mysql or mariadb is not installed to the ec2 instance. 
So I started browsing <a href="https://docs.aws.amazon.com/linux/al2023/release-notes/all-packages-al2023-20230501.html" rel="nofollow noreferrer">the available packages</a> for the Amazon Linux 2023 AMI since my ec2 is running that version and modified my config in <code>.ebextensions</code> folder to look like this:</p> <pre class="lang-yaml prettyprint-override"><code>packages: yum: mysql-selinux: [] mariadb105: [] mariadb-connector-c: [] container_commands: 01_migrate: command: &quot;source /var/app/venv/*/bin/activate &amp;&amp; python3 manage.py migrate&quot; leader_only: true 02_createsuperuser: command: &quot;source /var/app/venv/*/bin/activate &amp;&amp; python3 manage.py createsuperuser --noinput --username aradipatrik2 --email ap@gmail.com&quot; leader_only: true 03_save_data_to_db: command: &quot;source /var/app/venv/*/bin/activate &amp;&amp; python3 manage.py save_data_to_db&quot; leader_only: true option_settings: aws:elasticbeanstalk:application:environment: DJANGO_SETTINGS_MODULE: env_info.settings </code></pre> <p>The packages install successfully but I still get the same exact error. 
When I ssh into the ec2 device to install mysql or mariadb manually the terminal tells me:</p> <blockquote> <p>Changes made via SSH WILL BE LOST if the instance is replaced by auto-scaling</p> </blockquote> <p><em>So my question is</em>: How can I install mysql or mariadb onto the ec2 instance when deploying the application using elastic beanstalk, so I can have <code>mysqlclient==2.0.3</code> inside my <code>requirements.txt</code></p> <p>My <code>requirements.txt</code></p> <pre><code>asgiref==3.6.0 branca==0.6.0 certifi==2022.12.7 charset-normalizer==3.1.0 Django==4.2.1 folium==0.14.0 idna==3.4 Jinja2==3.1.2 MarkupSafe==2.1.2 numpy==1.24.3 python-dotenv==1.0.0 requests==2.29.0 sqlparse==0.4.4 tzdata==2023.3 urllib3==1.26.15 mysqlclient==2.0.3 </code></pre> <p>edit: Received feedback to install mariadb package with a name ending in -dev or -devel but there's no such package listed at <a href="https://docs.aws.amazon.com/linux/al2023/release-notes/all-packages-al2023-20230501.html" rel="nofollow noreferrer">the available packages list</a> for the Amazon Linux 2023 AMI any further pointers would be appreciated!</p>
<python><mysql><amazon-web-services><amazon-ec2><amazon-elastic-beanstalk>
2023-05-08 02:29:44
2
1,732
A. Patrik
76,196,843
10,610,620
1B pseudo-random boolean permutations of vector set N by M
<p>Let's say we have a set of N pseudo-random boolean/binary vectors, each of length M.</p> <p>e.g. {[0,1,0,1], [0,0,0,1]}</p> <p>We need one billion unique vectors.</p> <p><strong>Now, the question is, what is the fastest way to do this?</strong> <em>(also please keep in mind we need to keep the vector set in memory, so that later we can use the same set to do other stuff)</em></p> <p>For example... here are some approaches I have tried...</p> <pre><code>## where N = 10 and M = 10,000 def idx_v(idx, ivals=vector_set(N,M), ival=np.zeros(10_000)): divs = idx // 10 remains = idx % 10 a = [divs,remains] hv = str( abs( hash( str(a) ) ) ) ival = ivals[remains] for i in hv: ival = np.roll(ival, int(i)) return ival, divs, remains for idx in range(1_000_000_000): ival = idx_v(idx) </code></pre> <p>that takes about 0.006 seconds per generation of 10 vectors</p> <pre><code>## where N = 100 and M = 10,000 def idx_v(idx, ivals=vector_set(N,M), ival=np.zeros(10_000)): divs = idx // 100 remains = idx % 100 ival = np.roll(ivals[remains], divs ) return ival, divs, remains for idx in range(1_000_000_000): ival = idx_v(idx) </code></pre> <p>that takes about 0.005 seconds per generation of 10 vectors</p>
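One way to avoid hashing and rolling altogether (a sketch with deliberately small `M` and `count`, both stand-ins for the real sizes): write the running index's own bits into a fixed slice of each vector. Uniqueness then holds by construction rather than probabilistically, at the cost of the first 32 bits not being random.

```python
import numpy as np

M = 64          # vector length (the question uses M = 10_000)
count = 1_000   # the question needs 1e9; kept small for the sketch

rng = np.random.default_rng(0)
base = rng.integers(0, 2, size=M, dtype=np.uint8)

# Unpack each uint32 index into 32 bits: vectors with different indices
# then differ by construction, no hashing needed.
idx = np.arange(count, dtype=np.uint32)
prefix = np.unpackbits(idx.view(np.uint8).reshape(-1, 4), axis=1)  # (count, 32)

vecs = np.tile(base, (count, 1))
vecs[:, :32] = prefix
print(vecs.shape)  # (1000, 64)
```

For one billion vectors this remains a single vectorised allocation of shape `(count, M)` — about 10 TB of uint8 at M = 10,000, so in practice packing eight bits per byte with `np.packbits` (or generating in chunks) would likely still be required.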
<python><performance><optimization><random><mathematical-optimization>
2023-05-08 00:29:33
2
446
Yume
76,196,828
2,081,511
"NameError: global name 'open' is not defined" Error in __init__.py File
<p>I'm trying to modify an existing Python script even though I'm very inexperienced in Python.</p> <p>The script is a <code>__init__.py</code> file and is a Plex Plugin for a custom Agent.</p> <p>The existing relevant section of code looks like:</p> <pre><code>class CustomAgent(Agent.Movies): def update(self, metadata, media, lang): uri = &quot;http://domain.tld/file?id=%s&quot; % (urllib.quote(metadata.id)) data = JSON.ObjectFromURL(uri) </code></pre> <p>Instead of reaching out to a website I'm trying to have the agent open up a tsv file to get values for various fields.</p> <pre><code>class CustomAgent(Agent.Movies): def update(self, metadata, media, lang): with open(&quot;db.tsv&quot;, &quot;r&quot;) as fp: line = fp.readline() while line: ... </code></pre> <p>My problem is I'm getting the &quot;NameError: global name 'open' is not defined&quot; error. At first I thought I didn't import something that I needed to import but couldn't find any relevant libraries(?).</p> <p>Searching online suggests (if I'm interpreting it right) that it's because the script is shutting down before the relevant code gets executed. But I don't understand how to resolve the issue.</p> <p>I think the issue might be because it's an <code>__init__.py</code> file, but I didn't choose the name; it's what the Plex Plugin Framework has as the name and I don't think it's something I can change.</p> <p>How can I open another file in Python?</p>
<python><python-3.x>
2023-05-08 00:22:49
1
1,486
GFL
76,196,817
20,122,390
How can I filter by date range in mongo db with python?
<p>I have a mongodb database for a python and fastapi web application. The data stored in the collection has the following structure:</p> <pre><code>[ { &quot;_id&quot;: &quot;6457dc8740f35d494c0c76a4&quot;, &quot;operator&quot;: &quot;example&quot;, &quot;department&quot;: &quot;deparment_example&quot;, &quot;city&quot;: &quot;MIAMI&quot;, &quot;locality&quot;: &quot;ASD&quot;, &quot;neighborhood&quot;: &quot;BROKILIN&quot;, &quot;start_date&quot;: &quot;26/04/2024&quot;, &quot;final_date&quot;: &quot;26/04/2025&quot;, &quot;reason&quot;: &quot;programED&quot;, &quot;description&quot;: &quot;example description&quot;, &quot;status&quot;: &quot;not_validated&quot;, &quot;customers_affected&quot;: [ &quot;BOG-00035&quot;, &quot;BOG-00023&quot;, &quot;BOG-00085&quot;, &quot;BOG-00048&quot;, &quot;BOG-00043&quot; ] },... more data ] </code></pre> <p>(start date and end date are in str and D/M/YEAR format) as you can see. And I want to apply a filter like the following:</p> <pre><code>start_date__lte=final_filter and final_date__gte=start_filter </code></pre> <p>From the api I receive the date in YEAR-MONTH-DAY format. 
So I tried to transform it into day/month/year to match the database and filter as I show below:</p> <pre><code>@app.post(&quot;/filter&quot;, response_model=List[Alert]) async def filter_alerts(filter: FilterAlert): filter_dict = { key: value for key, value in filter.dict(exclude={&quot;skip&quot;, &quot;limit&quot;}).items() if value is not None } if filter.start_date is not None and filter.final_date is not None: start_date_unf = datetime.strptime(filter.start_date, '%Y-%m-%d') final_date_unf = datetime.strptime(filter.final_date, '%Y-%m-%d') start_date = datetime.strftime(start_date_unf, '%d/%m/%Y') final_date = datetime.strftime(final_date_unf, '%d/%m/%Y') filter_dict[&quot;start_date&quot;] = {&quot;$lte&quot;: start_date} filter_dict[&quot;final_date&quot;] = {&quot;$gte&quot;: final_date} skip = filter.skip limit = filter.limit db_objs = await ( mongoclient.client[alerts_database][alerts_collection] .find(filter_dict) .skip(skip) .limit(limit) .to_list(length=None) ) return db_objs </code></pre> <p>But I don't get the expected results (it returns empty lists when there are documents that meet the condition). What am I doing wrong? I am using AsyncIOMotorClient from the Motor library for the queries.</p>
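One likely culprit, separate from the endpoint logic: the documents store dates as `"DD/MM/YYYY"` strings, and MongoDB's `$lte`/`$gte` compare strings lexicographically, which does not match chronological order in that format. A minimal sketch of the pitfall (plain Python string comparison behaves the same way):

```python
from datetime import datetime

# "$lte"/"$gte" on strings is lexicographic; in DD/MM/YYYY that order
# has nothing to do with chronology:
a = "26/04/2024"   # chronologically earlier
b = "01/05/2024"   # chronologically later
print(a <= b)      # False: '2' sorts after '0'

# ISO-formatted strings sort chronologically, so the same operators work:
iso_a = datetime.strptime(a, "%d/%m/%Y").strftime("%Y-%m-%d")
iso_b = datetime.strptime(b, "%d/%m/%Y").strftime("%Y-%m-%d")
print(iso_a <= iso_b)  # True
```

So converting the incoming `YYYY-MM-DD` values to `DD/MM/YYYY` moves in the wrong direction: storing ISO strings (or real BSON dates) is what makes range filters behave.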
<python><mongodb><date><mongodb-query><fastapi>
2023-05-08 00:16:58
0
988
Diego L
76,196,816
104,950
Peculiar behavior with Python metaclasses
<p>I'm working on a coding puzzle shown <a href="https://www.codewars.com/kata/54bebed0d5b56c5b2600027f/train/python" rel="nofollow noreferrer">here</a>. My solution is shown and I've included the test cases which pass on my local computer but fail when submitted. It appears as though none of the instrumentation for the attribute accesses occur, but only when submitted online. Can someone help me?</p> <pre><code>import functools import codewars_test as test class Debugger(object): attribute_accesses = [] method_calls = [] class AttributeAccessDescriptor: def __init__(self, attr_name): self.attr_name = attr_name def __get__(self, instance, owner): if instance is None: return self value = instance.__dict__[self.attr_name] print(f&quot;Attribute '{self.attr_name}' accessed with value: {value}&quot;) Debugger.attribute_accesses.append( {'action': 'get', 'class': instance.__class__, 'attribute': self.attr_name, 'value': value } ) return value def __set__(self, instance, value): print(f&quot;Attribute '{self.attr_name}' set with value: {value}&quot;) Debugger.attribute_accesses.append( {'action': 'set', 'class': instance.__class__, 'attribute': self.attr_name, 'value': value } ) instance.__dict__[self.attr_name] = value class Meta(type): def __new__(cls, name, bases, attrs): for key, value in attrs.items(): if callable(value): attrs[key] = cls.wrap_method(value) elif not key.startswith(&quot;__&quot;): attrs[key] = AttributeAccessDescriptor(key) print(f&quot;__new__: cls = {cls}, name = {name}, bases = {bases}, attrs = {attrs}&quot;) return super().__new__(cls, name, bases, attrs) @staticmethod def wrap_method(method): @functools.wraps(method) def wrapped(*args, **kwargs): print(f'method = {method}') if method.__name__ != '__init__': Debugger.attribute_accesses.append( {'action': 'get', 'class': args[0].__class__, 'attribute': method.__name__, } ) Debugger.method_calls.append( {'class': args[0].__class__.__name__, 'method': method.__name__, 'args': args, 'kwargs': 
kwargs}) return method(*args, **kwargs) return wrapped class Foo(object, metaclass=Meta): x = 0 def __init__(self, x): self.x = x def bar(self, v): return self.x, v a = Foo(1) a.bar(2) print(Debugger.method_calls) print(Debugger.attribute_accesses) calls = Debugger.method_calls test.assert_equals(len(calls), 2) test.describe(&quot;Test collected method calls&quot;) test.it(&quot;Call to init should be collected&quot;) test.assert_equals(calls[0]['args'], (a, 1)) test.it(&quot;Call to bar should be collected&quot;) test.assert_equals(calls[1]['args'], (a, 2)) test.describe(&quot;Test collected attribute accesses&quot;) accesses = Debugger.attribute_accesses test.assert_equals(len(accesses), 3) test.it(&quot;Attribute set in init should be collected&quot;) test.assert_equals(accesses[0]['action'], 'set') test.assert_equals(accesses[0]['attribute'], 'x') test.assert_equals(accesses[0]['value'], 1) test.it(&quot;Method get should be collected too&quot;) test.assert_equals(accesses[1]['action'], 'get') test.assert_equals(accesses[1]['attribute'], 'bar') test.it(&quot;Attribute get should be collected&quot;) test.assert_equals(accesses[2]['action'], 'get') test.assert_equals(accesses[2]['attribute'], 'x') </code></pre>
<python><metaclass>
2023-05-08 00:16:55
1
38,691
Amir Afghani
76,196,742
3,728,184
Can I leave out the `with` statement when opening a file with no pointer?
<p>In Python, it is recommended to use <code>with</code> when opening files, e.g.:</p> <pre><code>with open('x.json') as f: data = json.load(f) </code></pre> <p>But is it okay to shorten it as follows?</p> <pre><code>data = json.load(open('x.json')) </code></pre> <p>The <code>with</code> statement makes sure that the file is closed, but would Python close the file in my second example as well, as there's no pointer to the file and it's clear that the file won't be accessed anymore?</p>
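To make the difference observable, here is a small runnable sketch (the file is a throwaway temp file): the context manager closes deterministically, while the shorthand leans on the file object becoming unreachable — which CPython's reference counting usually collects promptly, but that is an implementation detail rather than a language guarantee.

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "x.json")
with open(path, "w") as f:
    json.dump({"a": 1}, f)

# with: the file is guaranteed closed as soon as the block exits,
# even if json.load raises.
with open(path) as f:
    data = json.load(f)
print(f.closed)  # True

# Shorthand: nothing references the file object after the call, so
# CPython typically closes it right away via refcounting -- but e.g.
# PyPy may keep it open until a later garbage-collection pass.
data2 = json.load(open(path))
print(data == data2)  # True
```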
<python>
2023-05-07 23:47:20
1
2,005
Markus
76,196,612
11,235,680
Deploy HuggingFace models to AWS: configuration file needed on local machine
<p>I'm trying to deploy this model to AWS through Sagemaker: <a href="https://huggingface.co/mosaicml/mpt-7b-chat" rel="nofollow noreferrer">https://huggingface.co/mosaicml/mpt-7b-chat</a></p> <p>I generated the code from the same page which is as follow:</p> <pre><code>from sagemaker.huggingface import HuggingFaceModel import sagemaker role = sagemaker.get_execution_role() # Hub Model configuration. https://huggingface.co/models hub = { 'HF_MODEL_ID':'mosaicml/mpt-7b-chat', 'HF_TASK':'text-generation' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version='4.17.0', pytorch_version='1.10.2', py_version='py38', env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.m5.xlarge' # ec2 instance type ) predictor.predict({ 'inputs': &quot;Can you please let us know more details about your &quot; }) </code></pre> <p>The code crash at the &quot;predict&quot; call with this error: Loading /.sagemaker/mms/models/mosaicml__mpt-7b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option <code>trust_remote_code\u003dTrue</code> to remove this error.</p> <p>how can I deploy the model through AWS?</p>
<python><amazon-web-services><amazon-sagemaker><huggingface-transformers>
2023-05-07 22:58:11
1
316
Bouji
76,196,491
14,729,820
How to avoid stripping double-quote escapes when saving in JSON Lines?
<p>I have a file in <code>jsonl</code> format as shown below:</p> <p><code>input.jsonl</code></p> <pre><code>{&quot;file_name&quot;: &quot;input_image.jpg&quot;, &quot;text&quot;: &quot;II. Firtos. Jelige: \&quot;Vándor daruid V betűje szállt.\&quot;&quot;} </code></pre> <p>I am reading this file using pandas:</p> <pre><code>def load_jsonl1(): return pd.read_json( path_or_buf = 'input.jsonl', lines=True, ) df1= load_jsonl1() df1.head(1) </code></pre> <p>After showing the first row we see the <code>\</code> is missing: <code>II. Firtos. Jelige: &quot;Vándor daruid V betűje szállt.&quot;</code>. I wrote the following:</p> <pre><code>new_df = pd.DataFrame(columns=['file_name', 'text']) output_path = 'output.jsonl' # create output.jsonl file with open(output_path, 'w') as outfile: # tqdm to see progress for idx in tqdm(range(len(df1))): image_path = os.path.join(path, df['file_name'][idx]) image = Image.open(image_path).convert(&quot;RGB&quot;) for i in range(sample_amount): augmented_image = augment_img(image) augmented_image_path = os.path.join(aug_imgs, 'aug_' + str(idx * sample_amount + i) + '.jpg') augmented_image.save(augmented_image_path) # write updated label to output.jsonl file outfile.write('{&quot;file_name&quot;: &quot;' + os.path.basename(augmented_image_path) + '&quot;, &quot;text&quot;: &quot;' + df['text'][idx] + '&quot;}\n') </code></pre> <p>I want to keep all the text with all its symbols exactly as it is in <code>input.jsonl</code>, since it is needed later, so what I am expecting after reading the file with <code>pandas</code> and saving to <code>output.jsonl</code> is the same content as the original file:</p> <p><code>{&quot;file_name&quot;: &quot;input_image.jpg&quot;, &quot;text&quot;: &quot;II. Firtos. Jelige: \&quot;Vándor daruid V betűje szállt.\&quot;&quot;} </code></p>
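The `\"` sequences in the file are JSON escaping, which `pd.read_json` correctly decodes into plain quote characters; they disappear from the output because the writing loop rebuilds the JSON line by string concatenation. Serialising each record with `json.dumps` re-inserts the escapes — a sketch using the sample row:

```python
import json

record = {
    "file_name": "input_image.jpg",
    "text": 'II. Firtos. Jelige: "Vándor daruid V betűje szállt."',
}

# json.dumps escapes the inner quotes again, so the written line
# round-trips to the identical text; ensure_ascii=False keeps the
# accented characters readable instead of \uXXXX escapes.
line = json.dumps(record, ensure_ascii=False)
print(line)
```

In the loop this would replace the hand-built `outfile.write('{...}')` string with `outfile.write(json.dumps({...}, ensure_ascii=False) + '\n')`.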
<python><json><pandas>
2023-05-07 22:11:16
0
366
Mohammed
76,196,094
6,223,346
How to parse a CSV file that does not have a separator in Python?
<p>My CSV files, after decryption, do not have any separator, and all the data is within quotes.</p> <p>Sample CSV file data:</p> <pre><code>&quot;text&quot;&quot;1&quot;&quot;0.2&quot;&quot;true&quot; &quot;text&quot;&quot;1&quot;&quot;0.2&quot;&quot;true&quot; </code></pre> <p>How can I parse these lines into their fields in Python?</p>
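Since every field sits in its own pair of quotes with nothing between them, the lines can be split with a small regex — a sketch (it assumes no field itself contains a quote character; if embedded quotes can occur, a real state machine is needed instead):

```python
import re

lines = [
    '"text""1""0.2""true"',
    '"text""1""0.2""true"',
]

# Grab every quoted run; adjacent quotes simply delimit consecutive fields.
rows = [re.findall(r'"([^"]*)"', line) for line in lines]
print(rows[0])  # ['text', '1', '0.2', 'true']
```

From here `pd.DataFrame(rows)` or the `csv` module's writer can take over.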
<python><csv><parsing>
2023-05-07 20:18:44
4
613
Harish
76,196,068
8,849,755
How to use plotly.express.imshow facet_row argument?
<p>Consider the following code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import plotly.express as px # Create the index for the data frame x = np.linspace(-1,1, 6) y = np.linspace(-1,1,6) n_channel = [1, 2, 3, 4] xx, yy = np.meshgrid(x, y) zzz = np.random.randn(len(y)*len(n_channel),len(x)) df = pd.DataFrame( zzz, columns = pd.Index(x, name='x (m)'), index = pd.MultiIndex.from_product([y, n_channel], names=['y (m)', 'n_channel']), ) print(df) fig = px.imshow( df.reset_index('n_channel'), facet_col = 'n_channel', ) fig.write_html( 'plot.html', include_plotlyjs = 'cdn', ) </code></pre> <p>The data frame looks like this:</p> <pre><code>x (m) -1.0 -0.6 -0.2 0.2 0.6 1.0 y (m) n_channel -1.0 1 -0.492584 0.599464 0.097405 -0.177793 -0.027311 1.468527 2 0.202147 0.449809 -2.047460 -1.392223 0.245228 1.220419 3 0.139111 -0.699596 1.754103 -0.141732 -1.494373 -0.003184 4 0.124390 0.245113 -0.031949 1.938560 1.418563 -0.787295 -0.6 1 1.112547 0.307750 -1.206242 -0.739546 0.038905 -0.923485 2 -0.900733 -1.094717 0.770876 -1.973305 2.677651 3.072124 3 -0.279864 -1.341024 2.750811 -1.401604 0.929714 0.658087 4 -1.038905 -1.038625 0.112878 1.112139 -0.799305 -0.934813 -0.2 1 0.332704 1.321129 0.241799 -1.100657 -0.927649 -1.928624 2 -0.576210 0.257960 -0.196699 -0.245751 0.575648 -0.703353 3 -0.549881 -1.208282 0.959120 1.852333 1.452697 -0.562802 4 -0.433256 -0.339644 -1.636592 -1.022501 -0.614497 1.085253 0.2 1 0.378474 -0.829495 -1.313322 -0.654698 -0.644115 2.175938 2 0.567393 -0.340301 1.304942 0.197879 0.309288 -0.126187 3 0.209954 0.161299 -0.362754 -0.328356 -0.106934 -0.238329 4 -0.284447 -0.367920 -0.275830 -0.776649 0.656279 0.056389 0.6 1 1.174153 -1.112658 1.245117 -0.395144 0.471050 0.165074 2 -0.220246 1.063194 0.292873 0.266250 -0.175274 0.225985 3 0.301462 0.737581 0.271691 0.936558 1.007112 1.857389 4 -0.689441 3.369569 0.675700 0.077706 0.152062 -0.533258 1.0 1 0.732183 0.041873 1.156681 0.841262 -0.984433 
1.313900 2 0.157533 0.723356 -0.786721 0.150939 0.164049 -0.351816 3 -0.390037 -1.513096 0.255813 -1.365759 0.570145 1.630885 4 0.318037 -1.103191 1.472340 -0.218038 0.990673 -1.565340 </code></pre> <p>and I expect it to produce 4 heatmaps, each of them similar to this one:</p> <p><a href="https://i.sstatic.net/QdpLy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QdpLy.png" alt="enter image description here" /></a></p> <p>but instead I get <code>AttributeError: 'DataFrame' object has no attribute 'dims'</code>. If instead I do like this</p> <pre class="lang-py prettyprint-override"><code>for n in n_channel: fig = px.imshow( df.query(f'n_channel=={n}').reset_index('n_channel', drop=True), ) fig.write_html( f'plot_{n}.html', include_plotlyjs = 'cdn', ) </code></pre> <p>then this produces the 4 plots, but separated and (of course) with the axes not connected.</p> <p>Is it possible to use the <code>facet_row</code> argument based on one column, similarly as it can be done e.g. with <code>px.scatter</code>?</p>
<python><plotly>
2023-05-07 20:12:24
1
3,245
user171780
76,195,989
10,122,822
opencv2 stitching images together failed
<p>I want to stitch together a bunch of images with OpenCV2 with python. Since it is very time consuming to stitch together 30+ images I modified my code to just stitch together 2 images and after that stitch together the next image with the previous stitched image.</p> <p>But when running the code even at the first stitch it will run into an error with the status code 1 and I dont know why. I even tested it with only 2 images in the folder but with the same result so it seems like I made an error somewhere in the code.</p> <p>Full code:</p> <pre><code>from imutils import paths import numpy as np import imutils import cv2 print(&quot;[INFO] loading images...&quot;) imagePaths = sorted(list(paths.list_images(&quot;./images/&quot;))) images = [] for imagePath in imagePaths: image = cv2.imread(imagePath) images.append(image) i = 0 currentImage = None while len(images) &gt; i + 1: #loop through all images. Use i+1 because we used i+1 to get the next image down below. If we get to the last image there is no next image imagesToCombine = [] if i == 0: imagesToCombine.append(images[i]) #Use first image at the first run else: imagesToCombine.append(currentImage) #Use the last stiched image imagesToCombine.append(images[i + 1]) print(&quot;[INFO] stitching images...&quot;) stitcher = cv2.createStitcher() if imutils.is_cv3() else cv2.Stitcher_create() (status, stitched) = stitcher.stitch(imagesToCombine) if status == 0: currentImage = stitched print(&quot;Done&quot; + str(i)) i += 1 else: print(&quot;[INFO] image stitching failed ({})&quot;.format(status)) break if currentImage is not None: cv2.imwrite(&quot;./stiched.png&quot;, currentImage) print(&quot;Done&quot;) </code></pre> <p>Console output:</p> <pre><code>[INFO] loading images... [INFO] stitching images... [INFO] image stitching failed (1) Done </code></pre> <p>Edit:</p> <p>After I wanted to create another set of example images I saw that with those images it was working! 
So the images I actually want to combine somehow don't work, even though they have a good portion of overlap.</p> <p>These images don't work: <a href="https://i.sstatic.net/pIjrs.jpg" rel="nofollow noreferrer">https://i.sstatic.net/pIjrs.jpg</a></p> <p>But these images do work: <a href="https://i.sstatic.net/0zG9U.jpg" rel="nofollow noreferrer">https://i.sstatic.net/0zG9U.jpg</a></p> <p>Any idea why the other images don't work?</p>
<python><opencv><imutils>
2023-05-07 19:53:36
0
748
sirzento
76,195,972
9,218,849
Aspect sentiment analysis using Hugging Face
<p>I am new to transformers models and trying to extract the aspects and sentiments for a sentence, but I am having issues.</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSequenceClassification model_name = &quot;yangheng/deberta-v3-base-absa-v1.1&quot; tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) text = &quot;The food was great but the service was terrible.&quot; inputs = tokenizer(text, return_tensors=&quot;pt&quot;) outputs = model(**inputs) </code></pre> <p>I am able to get the tensors; what I need is to extract the aspects and their sentiments for the overall sentence from the output.</p> <p>I tried this, however I am getting an error:</p> <pre><code>sentiment_scores = outputs.logits.softmax(dim=1) aspect_scores = sentiment_scores[:, 1:-1] aspects = [tokenizer.decode([x]) for x in inputs[&quot;input_ids&quot;].squeeze()][1:-1] sentiments = ['Positive' if score &gt; 0.5 else 'Negative' for score in aspect_scores.squeeze()] for aspect, sentiment in zip(aspects, sentiments): print(f&quot;{aspect}: {sentiment}&quot;) </code></pre> <p>I am looking for the output below, or something similar; I am unable to write the logic for how to extract the aspect and sentiment:</p> <pre><code>text -The food was great but the service was terrible aspect- food ,sentiment positive aspect - service, sentiment negative or at overall level aspect - food, sentiment positive </code></pre>
<python><nlp><huggingface-transformers><sentiment-analysis>
2023-05-07 19:50:01
1
572
Dexter1611
76,195,686
7,643,958
How to declare a range in the datetime while creating a column in SQLalchemy?
<p>I have a table named &quot;Employee&quot;, where I have a requirement to <strong>set the column value of &quot;hire date&quot; to a datetime object in the format of 'YYYY-MM-DD HH:MM:SS', with a range from 01-01-2020 00:00:00 to today</strong>. I did the following:</p> <pre><code>class Employee(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(50)) department = db.Column(db.String(50)) salary = db.Column(db.Float, db.CheckConstraint('salary &gt; 0 AND salary &lt; 100')) hire_date =db.Column(DateTimeRange(&quot;2020-01-01 00:00:00&quot;, datetime.date.today())) def __init__(self, name, department, salary, hire_date): self.name = name self.department = department self.salary = salary self.hire_date = hire_date # Employee Schema class EmployeeSchema(ma.Schema): class Meta: fields = ('id', 'name', 'department', 'salary', 'hire_date') # Init schema employee_schema = EmployeeSchema(strict = True) employees_schema = EmployeeSchema(many=True) </code></pre> <p>So when I am trying to create the table in sqlite, I'm getting this error:</p> <blockquote> <p>in _init_items raise exc.ArgumentError( sqlalchemy.exc.ArgumentError: 'SchemaItem' object, such as a 'Column' or a 'Constraint' expected, got 2020-01-01T00:00:00 - 2023-05-07T00:00:00</p> </blockquote> <p>Can anyone help me with this problem? How do I define the constraint?</p> <p><strong>Update</strong>: I tried putting the current datetime in a variable to do something like this:</p> <pre><code>current_datetime = datetime.date.today() hire_date =db.Column(db.DateTime, db.CheckConstraint('hire_date =&gt; &quot;2020-01-01 00:00:00&quot; AND hire_date &lt;= current_datetime')) </code></pre> <p>Which is giving me this error now:</p> <blockquote> <p>sqlite3.OperationalError: near &quot;&gt;&quot;: syntax error</p> </blockquote>
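Two separate issues seem to be in play: `=>` is not valid SQL (the operator is `>=`), and a CHECK constraint is static DDL, so it can only hold a fixed literal — it cannot see a Python variable like `current_datetime`, and "today" moves every day anyway. A plain-SQLAlchemy sketch (not Flask-SQLAlchemy; names are illustrative) with the fixed lower bound in the constraint and the moving upper bound left to application-level validation:

```python
import datetime

from sqlalchemy import (CheckConstraint, Column, DateTime, Integer,
                        MetaData, Table, create_engine)
from sqlalchemy.exc import IntegrityError

metadata = MetaData()
employee = Table(
    "employee", metadata,
    Column("id", Integer, primary_key=True),
    # Only the fixed bound lives in the DDL; validate the "<= today"
    # half in Python before inserting.
    Column("hire_date", DateTime,
           CheckConstraint("hire_date >= '2020-01-01 00:00:00'")),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(employee.insert().values(
        hire_date=datetime.datetime(2021, 6, 1)))
    try:
        conn.execute(employee.insert().values(
            hire_date=datetime.datetime(2019, 1, 1)))
        violated = False
    except IntegrityError:
        violated = True
print(violated)  # True: the pre-2020 row is rejected by the CHECK
```

The same `CheckConstraint(...)` call works as a positional argument to `db.Column` in Flask-SQLAlchemy; the upper bound (`hire_date <= today`) belongs in the constructor or a `@validates` hook rather than in the schema.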
<python><sqlite><datetime><sqlalchemy><flask-sqlalchemy>
2023-05-07 18:40:44
1
1,179
Proteeti Prova
76,195,685
11,913,986
How to replicate value based on distinct column values from a different df pyspark
<p>I have a df like:</p> <pre><code>df1 = AA BB CC DD 1 X Y Z 2 M N O 3 P Q R </code></pre> <p>I have another df like:</p> <pre><code>df2 = BB CC DD G K O H L P I M Q </code></pre> <p>I want to copy all the columns and rows of df2 for every distinct value of 'AA' column of df1 and get the resultant df as:</p> <pre><code>df = AA BB CC DD 1 X Y Z 1 G K O 1 H L P 1 I M Q 2 M N O 2 G K O 2 H L P 2 I M Q 3 P Q R 3 G K O 3 H L P 3 I M Q </code></pre> <p>What I am doing right now is:</p> <pre><code>AAs = df1.select(&quot;AA&quot;).distinct().rdd.flatMap(lambda x: x).collect() out= [] for i in AAs: dff = df1.filter(col('AA')==i) temp_df = (df1.orderBy(rand()) .withColumn('AA', lit(i)) ) out.append(temp_df) df = reduce(DataFrame.unionAll, out) </code></pre> <p>Which is taking extremely long time and failing the cluster as these are mock dataframes, actual dataframes are quite large in dimension. Any Pysparky way of doing it? Thanks in advance.</p>
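The per-value loop (and the `collect()`) can be replaced by a single cross join: pair the distinct `AA` values with every row of `df2`, then union with `df1`. In PySpark that is `df1.select('AA').distinct().crossJoin(df2)` followed by `unionByName`; the same reshaping is sketched below with pandas (which the question's mock data fits) so the result is easy to verify.

```python
import pandas as pd

df1 = pd.DataFrame({"AA": [1, 2, 3],
                    "BB": list("XMP"),
                    "CC": list("YNQ"),
                    "DD": list("ZOR")})
df2 = pd.DataFrame({"BB": list("GHI"),
                    "CC": list("KLM"),
                    "DD": list("OPQ")})

# Every distinct AA gets a full copy of df2; no Python-side loop.
replicated = df1[["AA"]].drop_duplicates().merge(df2, how="cross")
out = (pd.concat([df1, replicated], ignore_index=True)
         .sort_values("AA", kind="stable")
         .reset_index(drop=True))
print(len(out))  # 12 rows: 3 originals + 3 copies of df2's 3 rows
```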
<python><pandas><dataframe><apache-spark><pyspark>
2023-05-07 18:40:33
1
739
Strayhorn
76,195,648
5,766,416
Efficient way to divide a pandas col by a groupby df
<p>Easier to explain with an example: say I have an example dataframe here with <code>year</code>, <code>cc_rating</code> and <code>number_x</code>.</p> <pre><code>df = pd.DataFrame({&quot;year&quot;:{&quot;0&quot;:2005,&quot;1&quot;:2005,&quot;2&quot;:2005,&quot;3&quot;:2006,&quot;4&quot;:2006,&quot;5&quot;:2006,&quot;6&quot;:2007,&quot;7&quot;:2007,&quot;8&quot;:2007},&quot;cc_rating&quot;:{&quot;0&quot;:&quot;2&quot;,&quot;1&quot;:&quot;2a&quot;,&quot;2&quot;:&quot;2b&quot;,&quot;3&quot;:&quot;2&quot;,&quot;4&quot;:&quot;2a&quot;,&quot;5&quot;:&quot;2b&quot;,&quot;6&quot;:&quot;2&quot;,&quot;7&quot;:&quot;2a&quot;,&quot;8&quot;:&quot;2b&quot;},&quot;number_x&quot;:{&quot;0&quot;:9368,&quot;1&quot;:21643,&quot;2&quot;:107577,&quot;3&quot;:10069,&quot;4&quot;:21486,&quot;5&quot;:110326,&quot;6&quot;:10834,&quot;7&quot;:21566,&quot;8&quot;:111082}}) df year cc_rating number_x 0 2005 2 9368 1 2005 2a 21643 2 2005 2b 107577 3 2006 2 10069 4 2006 2a 21486 5 2006 2b 110326 6 2007 2 10834 7 2007 2a 21566 8 2007 2b 111082 </code></pre> <p><strong>Problem</strong></p> <p>How can I get the % of number_x per year? Meaning:</p> <p><a href="https://i.sstatic.net/k39HF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k39HF.png" alt="enter image description here" /></a></p> <p>Straight division won't work, as year can't be set as the index in the original df because it is not unique.</p> <p>Right now I'm doing the following, but it's quite inefficient and I'm sure there's a better way.</p> <pre><code>df= pd.merge(df, df.groupby('year').sum(), left_on='year',right_index=True) df['%'] = round((df['number_x'] / df['number_y'])*100 , 2) df = df.drop('number_y', axis=1) </code></pre> <p>Thanks!</p>
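A merge-free alternative (a sketch on a trimmed copy of the sample data): `groupby(...).transform('sum')` broadcasts each year's total back onto its own rows, so the division works without touching the index or creating a `number_y` column.

```python
import pandas as pd

df = pd.DataFrame({"year": [2005, 2005, 2005, 2006, 2006, 2006],
                   "cc_rating": ["2", "2a", "2b", "2", "2a", "2b"],
                   "number_x": [9368, 21643, 107577,
                                10069, 21486, 110326]})

# transform('sum') returns a Series aligned row-for-row with df,
# holding each row's per-year total.
totals = df.groupby("year")["number_x"].transform("sum")
df["%"] = (df["number_x"] / totals * 100).round(2)
print(df["%"].tolist())  # [6.76, 15.62, 77.62, 7.1, 15.14, 77.76]
```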
<python><pandas><dataframe>
2023-05-07 18:32:27
1
2,572
Wboy
76,195,623
230,468
"Show hover" fails inside function arguments (vscode python intellisense)
<p>I'm working in python using vscode.<br /> My current vscode setup shows very nice, useful information when my cursor is on a function and I trigger <code>editor.action.showHover</code> (for me this is mapped to <code>Shift+Tab</code>). However, when I'm inside the function arguments and I hit <code>Shift+Tab</code> it doesn't work. I checked the <code>Developer: Toggle Keyboard Shortcuts Troubleshooting</code> and it shows me the error:</p> <pre><code>2023-05-07 11:13:06.535 [info] [KeybindingService]: | Resolving shift+[Tab] 2023-05-07 11:13:06.535 [info] [KeybindingService]: \ From 8 keybinding entries, matched editor.action.showHover, when: editorTextFocus, source: user. 2023-05-07 11:13:06.535 [info] [KeybindingService]: + Invoking command editor.action.showHover. 2023-05-07 11:13:06.535 [error] Unhandled method getIdAtPosition: Error: Unhandled method getIdAtPosition at /Users/USER/.vscode/extensions/visualstudioexptteam.intellicode-api-usage-examples-0.2.7/dist/extension.js:2:406156 at /Users/USER/.vscode/extensions/visualstudioexptteam.intellicode-api-usage-examples-0.2.7/dist/extension.js:2:406450 at Immediate.&lt;anonymous&gt; (/Users/USER/.vscode/extensions/visualstudioexptteam.intellicode-api-usage-examples-0.2.7/dist/extension.js:2:406812) at process.processImmediate (node:internal/timers:466:21) </code></pre> <p>Sometimes it does seem to work, except using <code>Trigger Parameter Hints</code>, instead of <code>Show Hover</code>. This is what's described in <a href="https://stackoverflow.com/a/62680065/230468">this question</a>, but at least for my configuration the <code>Trigger Parameter Hints</code> is really unhelpful, especially compared to the <code>Show Hover</code> output, and it still doesn't show up consistently.</p> <p>Is there a way to get <code>Show Hover</code> to trigger from within function parenthesis?</p> <p>Edit: It's also unclear why Parameter Hints are being triggered at all. 
Under keyboard shortcuts I see a <code>pylance.triggerParameterHints</code> entry that has no keybinding, and nothing else relevant is set to <code>Shift+Tab</code> besides <code>showHover</code>.</p>
<python><visual-studio-code><intellisense>
2023-05-07 18:27:09
0
18,777
DilithiumMatrix
76,195,514
15,755,176
Detect original color regardless of light/scanning conditions
<p>I have the following color palette, which consists of eight RGB colors that are quite distinct from one another:</p> <pre><code>color_palette = [ (255, 255, 255), # White (0, 0, 0), # Black (0, 255, 255), # Cyan (255, 0, 255), # Magenta (255, 255, 0), # Yellow (0, 0, 255), # Blue (255, 0, 0), # Red (0, 255, 0) # Green ] </code></pre> <p>What I am interested in doing is print a bunch of pixels with those colors on a piece of paper and then scan the paper back and be able to compute the closest match of each color to a color from the palette regardless of scanning/light conditions or if the colors have faded. (due to ink exposed to light over time)</p> <p>I came across color spaces such as Lab but they seem to be aiming at human perception, not scanning.</p> <p>Is there a color space/method that works well especially for computing this?</p>
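Whatever colour space is chosen, the classification step itself is a nearest-neighbour match against the palette. A vectorised sketch using plain Euclidean distance in RGB (the pixel values below are made-up "faded" scans); converting both palette and pixels to Lab first — e.g. via OpenCV's `cv2.cvtColor` — and running the identical `argmin` is the usual refinement, and a gray-world white-balance normalisation before matching helps with lighting casts:

```python
import numpy as np

palette = np.array([
    [255, 255, 255], [0, 0, 0], [0, 255, 255], [255, 0, 255],
    [255, 255, 0], [0, 0, 255], [255, 0, 0], [0, 255, 0],
], dtype=float)

# Scanned pixels: faded / colour-cast versions of palette entries.
pixels = np.array([[220, 210, 215],   # washed-out white
                   [40, 30, 35],      # dark grey-ish black
                   [200, 40, 50]],    # faded red
                  dtype=float)

# Nearest-neighbour match, fully vectorised:
# dists[i, j] = distance from pixel i to palette colour j.
dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
labels = dists.argmin(axis=1)
print(labels)  # [0 1 6] -> white, black, red
```

Because these eight palette colours sit at the corners of the RGB cube, even this baseline tolerates substantial fading; the Lab variant mainly buys robustness when the scanner shifts hue rather than brightness.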
<python><opencv><computer-vision>
2023-05-07 18:05:09
0
376
mangotango
76,195,463
13,231,537
Shorthand index to get a cycle of a numpy array
<p>I want to index all elements in a numpy array and also include the first index at the end. So if I have the array <code>[2, 4, 6]</code> , I want to index the array such that the result is <code>[2, 4, 6, 2]</code>.</p> <pre><code>import numpy as np a = np.asarray([2,4,6]) # One solution is cycle = np.append(a, a[0]) # Another solution is cycle = a[[0, 1, 2, 0]] # Instead of creating a list, can indexing type be combined? cycle = a[:+0] </code></pre>
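Two compact spellings (a sketch): `np.r_` concatenates in a single indexing-like expression, and a modulo over an extended `arange` achieves the same with pure fancy indexing.

```python
import numpy as np

a = np.asarray([2, 4, 6])

# np.r_ concatenates slices/arrays in one bracketed expression.
cycle = np.r_[a, a[:1]]
print(cycle)  # [2 4 6 2]

# Pure fancy indexing: wrap an extended index range back around with %.
cycle2 = a[np.arange(len(a) + 1) % len(a)]
print(np.array_equal(cycle, cycle2))  # True
```

Both return a new array; there is no single-slice syntax like `a[:+0]` for this, because a slice cannot revisit an element.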
<python><numpy>
2023-05-07 17:56:13
1
858
nikost
76,195,422
7,685,367
In Python with Tkinter, how can I use a grid to display a PandasTable instead of pack?
<p>I am researching how to use Tkinter (CustomTkinter) and I would like to display a pandastable using the Tkinter grid layout instead of the pack layout. The code below will show the table, but it takes up the entire frame.</p> <p>The entire project is very large and complex, but here is the relevant code:</p> <pre><code>import customtkinter as ctk import pandas as pd from tkinter import END from pandastable import Table class DisplayTable(ctk.CTkFrame): def __init__(self, parent): ctk.CTkFrame.__init__(self, parent) label = ctk.CTkLabel(self, text=&quot;DisplayTable&quot;) label.grid(row=0, column=1, padx=10, pady=10, columnspan=4) df = pd.read_csv(&quot;data/data_points.csv&quot;) self.table = Table(self, dataframe=df, showtoolbar=True, showstatusbar=True) self.table.grid(row=1, column=1, padx=10, pady=10, columnspan=4) self.table.show() </code></pre> <p>My question is: how do I apply the grid layout to the pandastable so that I have a label at the top of the screen and the pandastable below it?</p>
<python><pandas><tkinter><layout><pandastable>
2023-05-07 17:47:46
1
1,057
vscoder
76,195,331
76,701
Plotly: Make a plot that doesn't change proportions when zoomed in
<p>I'm making a custom plot using Plotly for representing a <a href="https://math.stackexchange.com/questions/4693604/succinct-represention-of-a-condensed-complete-directed-graph">condensed complete directed graph</a>. I've worked hard to get the shapes right and it's still a work-in-progress. However, I noticed a problem: The scale between pixel and data coordinates isn't fixed.</p> <p>For example, here is a screenshot of such a plot that my code made (excuse the ugliness):</p> <p><a href="https://i.sstatic.net/zQ6EW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zQ6EW.jpg" alt="" /></a></p> <p>Here's the same plot, but zoomed into the area of the &quot;0&quot; node:</p> <p><a href="https://i.sstatic.net/KdER5.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KdER5.jpg" alt="" /></a></p> <p>The shapes are out of proportion. This is because Plotly seems to have two different worlds of coordinates:</p> <ol> <li>Pixels, which are used for width of lines, sizes of fonts and sizes of markers, and</li> <li>Data coordinates, which are used for placing scatter points on the figure.</li> </ol> <p>Because zooming in or out only pertains to the data coordinates and not the pixels, the picture changes when you zoom in and out.</p> <p><strong>I want to design my plot so it doesn't change any of its proportions when you zoom in or out. Is that possible with Plotly? If not, what other tool would you recommend?</strong></p> <p>(Note: I know that the <code>fixedrange</code> argument exists, but I would not like to disable zoom, just make it zoom everything equally.)</p>
<python><plot><plotly><drawing><graph-theory>
2023-05-07 17:24:11
1
89,497
Ram Rachum