Dataset columns (dtype, observed min/max):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, length 15 to 150
- QuestionBody: string, length 40 to 40.3k
- Tags: string, length 8 to 101
- CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, length 3 to 30, nullable
75,893,753
10,565,820
How to write a decorator without syntactic sugar in Python?
<p>This question is rather specific, and I believe there are many similar questions but not exactly like this.</p> <p>I am trying to understand syntactic sugar. My understanding of it is that by definition the code can always be written in a more verbose form, but the sugar exists to make it easier for humans to handle. So there is always a way to write syntactic sugar &quot;without sugar&quot;, so to speak?</p> <p>With that in mind, how precisely do you write a decorator without syntactic sugar? I understand it's basically like:</p> <pre><code># With syntactic sugar @decorator def foo(): pass # Without syntactic sugar def foo(): pass foo = decorator(foo) </code></pre> <p>An excerpt from <a href="https://peps.python.org/pep-0318/" rel="nofollow noreferrer">PEP 318</a>:</p> <blockquote> <p>Current Syntax</p> <p>The current syntax for function decorators as implemented in Python 2.4a2 is:</p> </blockquote> <pre><code>@dec2 @dec1 def func(arg1, arg2, ...): pass </code></pre> <blockquote> <p>This is equivalent to:</p> </blockquote> <pre><code>def func(arg1, arg2, ...): pass func = dec2(dec1(func)) </code></pre> <blockquote> <p><strong>without the intermediate assignment to the variable func</strong>. (emphasis mine)</p> </blockquote> <p>In the example I gave above, which is how the syntactic sugar is commonly explained, there is an intermediate assignment. But how does the syntactic sugar work without the intermediate assignment? A lambda function? But I also thought those could only be one line. Or is the name of the decorated function changed? It seems like that could conflict with another method if the user coincidentally created one with that same name. But I don't know, which is why I'm asking.</p> <p>To give a specific example, I'm thinking of how a property is defined. 
When defining a property's setter, the naive desugaring cannot work: once the setter function has been defined under the same name, that name no longer refers to the property, so assigning <code>name = name.setter(name)</code> afterwards would destroy it.</p> <pre><code>class Person: def __init__(self, name): self.name = name @property def name(self): return self._name # name = property(name) # This would work @name.setter def name(self, value): self._name = value.upper() # name = name.setter(name) # This would not work as name is no longer a property but the immediately preceding method </code></pre>
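One possible desugaring (an editorial sketch, not part of the original question): the decorator expression `name.setter` is evaluated before the new function is bound, so without the `@` syntax you can capture the property under a temporary name before redefining the function.

```python
class Person:
    def __init__(self, name):
        self.name = name  # goes through the property's setter

    def name(self):
        return self._name
    name = property(name)  # desugared @property

    # Desugared @name.setter: save the property under a temporary name
    # first, because the next `def name` rebinds `name` to a plain function.
    _name_property = name
    def name(self, value):
        self._name = value.upper()
    name = _name_property.setter(name)
    del _name_property  # clean up the temporary class attribute
```

With this, `Person("alice").name` returns `"ALICE"`, matching the decorated version; the temporary `_name_property` name is the only difference from the `@`-based form.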
<python><properties><decorator><syntactic-sugar>
2023-03-30 23:41:29
2
644
geckels1
75,893,730
4,635,580
Can multiple processes creating the same intermediate directories get into a race condition in UNIX?
<p>Given this piece of Python code</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path def create_dir(last_component: str): path = Path(&quot;l1/l2&quot;) / last_component path.mkdir(parents=True, exist_ok=True) </code></pre> <p>suppose two concurrent processes, not threads, call <code>create_dir</code> defined above at the same time, as</p> <pre class="lang-py prettyprint-override"><code># The call by process 1 create_dir('sub1') </code></pre> <p>and</p> <pre class="lang-py prettyprint-override"><code># The call by process 2 create_dir('sub2') </code></pre> <p>Assume that both processes have the same current working directory, which is empty, i.e. it has no subdirectory named <code>l1</code>. Is it possible that these two processes enter a race condition that corrupts the file system or produces an unexpected result? (I only need the answer for UNIX systems.)</p> <p><strong>Background information for the above question:</strong> I am running a remote <a href="https://docs.celeryq.dev/en/stable/userguide/workers.html" rel="nofollow noreferrer">Celery worker</a> that has many parallel processes. These processes are responsible for putting contents into directories in the file system. Each unit of content is put into a directory 3 levels deep.</p> <p>For example,</p> <p>content1 would be put into <code>l1_dir_name1/l2_dir_name1/content1_dir</code>, and content2 would be put into <code>l1_dir_name2/l2_dir_name2/content2_dir</code>. It is possible that <code>l1_dir_name1</code> is the same as <code>l1_dir_name2</code> and <code>l2_dir_name1</code> is the same as <code>l2_dir_name2</code>.</p>
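For context (an editorial sketch, not from the question): `exist_ok=True` makes the mkdir race-tolerant by swallowing the "already exists" error internally; a hand-rolled EAFP equivalent, with a hypothetical helper name, looks roughly like this.

```python
import errno
import os
import tempfile

def make_dirs_race_tolerant(path: str) -> None:
    """mkdir -p that tolerates another process creating the path first."""
    try:
        os.makedirs(path)
    except OSError as exc:
        # EEXIST means some other process won the race; the directory
        # exists, which is exactly what we wanted, so swallow the error.
        if exc.errno != errno.EEXIST:
            raise

base = tempfile.mkdtemp()
# Both calls share the intermediate l1/l2 components, as in the question.
make_dirs_race_tolerant(os.path.join(base, "l1", "l2", "sub1"))
make_dirs_race_tolerant(os.path.join(base, "l1", "l2", "sub2"))
```

Either process may create `l1` or `l2` first; the loser of each per-component race just sees `EEXIST` and moves on, so no corruption occurs.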
<python><linux><unix><concurrency>
2023-03-30 23:35:48
0
739
Isaac To
75,893,723
12,561,168
Sum of numbers "around" a number with NumPy
<p>Say I have a NumPy array like this:</p> <pre><code>[[100. 100. 100. 100. 100.] [100. 0. 0. 0. 100.] [100. 0. 0. 0. 100.] [100. 0. 0. 0. 100.] [100. 100. 100. 100. 100.]] </code></pre> <p>I'd like to take the middle values:</p> <pre><code>0. 0. 0. 0. 0. 0. 0. 0. 0. </code></pre> <p>And set them equal to the sum of the numbers above, left, right, and below the respective number. For example, the first <code>0.</code> would become <code>100. + 100. + 0. + 0. = 200.</code>.</p> <p>I'm generally new to vectorization, so I'm not sure how to go about doing this <strong>using vectorization only</strong>. Any tips?</p>
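One possible vectorized approach (an editorial sketch, not part of the original question): four shifted slices of the array line up with the up/down/left/right neighbours of every interior cell, so a single addition covers them all.

```python
import numpy as np

# Rebuild the array from the question: 100s on the border, 0s inside.
a = np.zeros((5, 5))
a[0, :] = a[-1, :] = a[:, 0] = a[:, -1] = 100.0

# Each slice is a view shifted by one cell; summing them adds the four
# orthogonal neighbours of every interior cell at once.
neighbour_sum = (
    a[:-2, 1:-1]    # cell above
    + a[2:, 1:-1]   # cell below
    + a[1:-1, :-2]  # cell to the left
    + a[1:-1, 2:]   # cell to the right
)

result = a.copy()
result[1:-1, 1:-1] = neighbour_sum
```

For the first interior cell this gives `100 + 100 + 0 + 0 = 200`, matching the example in the question, and the border values are left untouched.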
<python><numpy><vectorization>
2023-03-30 23:33:47
3
603
jianmin-chen
75,893,685
1,185,242
Why is this shapely polygon invalid?
<pre><code>from shapely.wkt import loads polygon = loads(&quot;POLYGON Z ((11.2704142515539 8.935877943566396 -16.74699224693797, 9.552838743503662 9.966533722939751 -16.77349242463657, 7.038910673892143 11.41232301137523 -16.7241410803892, 5.216749880203796 12.46026879775946 -16.68836982474948, 8.001220401932684 17.16509576267924 -16.77314058305908, 13.96493422834735 13.65121729926299 -16.69268953378357, 11.2704142515539 8.935877943566396 -16.74699224693797), (9.548433717869264 9.955660336966762 -16.73313716982752, 12.17899487645576 14.57302531902789 -16.73313714768369, 9.676190207235408 15.98563829772018 -16.7331371381166, 7.056180834251874 11.41237681923199 -16.733137145912, 9.64504281597692 15.6700635337078 -16.73313699214635, 11.91573358793374 14.30274366995717 -16.73313697544633, 9.548433717869264 9.955660336966762 -16.73313716982752))&quot;) print(polygon.is_valid) </code></pre> <p><a href="https://i.sstatic.net/Uluza.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uluza.png" alt="enter image description here" /></a></p>
<python><shapely>
2023-03-30 23:22:18
1
26,004
nickponline
75,893,669
3,894,471
Creating a list of outputs from a function with an unknown number of variables and possible variable values
<p>Let me start with a quick example:</p> <pre><code>def my_fun(dict): # do stuff return dict_1 = { &quot;A&quot;: np.arange(0,4,1), &quot;B&quot;: np.arange(10, 20, 1), &quot;C&quot;: np.arange(3,13,2) } for a in dict_1[&quot;A&quot;]: for b in dict_1[&quot;B&quot;]: for c in dict_1[&quot;C&quot;]: temp_dict = {&quot;A&quot;: a, &quot;B&quot;: b, &quot;C&quot;: c} my_fun(temp_dict) </code></pre> <p>The problem is that I have almost 200 different dictionaries: <code>dict_2</code>, <code>dict_3</code>, etc.</p> <p>Each dict has different numbers of different keys and the arrays in them are all different.</p> <p>I'm hoping to get a single script that loops through any arbitrary dictionary (of the structure <code>{key1: array1, key2: array2, ...}</code>) and runs my_fun on each possible iteration of the different values of the different arrays, so I don't have to write 200 different loops.</p>
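One way to replace the nested loops (an editorial sketch; the `run_grid` helper name is hypothetical, and plain `range` stands in for `np.arange` to keep the snippet dependency-free): `itertools.product` expands any number of value arrays into their Cartesian product, regardless of how many keys the dictionary has.

```python
import itertools

def run_grid(param_dict, fn):
    """Call fn once for every combination of values in param_dict."""
    keys = list(param_dict)
    return [
        fn(dict(zip(keys, combo)))
        for combo in itertools.product(*param_dict.values())
    ]

dict_1 = {"A": range(0, 4), "B": range(10, 20), "C": range(3, 13, 2)}
outputs = run_grid(dict_1, lambda d: d["A"] + d["B"] + d["C"])
```

The same call works unchanged for `dict_2`, `dict_3`, and so on, whatever their keys, which removes the need for 200 hand-written loop nests.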
<python><python-3.x><loops>
2023-03-30 23:19:09
1
537
John
75,893,577
10,998,672
How to add folders in sharepoint using office365-rest-python-api
<p>I have registered an application in SharePoint with read and write access to the site. I want to create folders with subfolders, and I found a solution in the official documentation:</p> <pre><code>def create_folder(dir_path): ctx = auth() target_folder_url = dir_path target_folder = ctx.web.ensure_folder_path(target_folder_url).execute_query() print(target_folder.serverRelativeUrl) </code></pre> <p>But after the call:</p> <pre><code>create_folder(&quot;/Shared Documents/Archive/2022/09/01&quot;) </code></pre> <p>I got:</p> <pre><code>('-2147024891, System.UnauthorizedAccessException', 'Access denied.', &quot;403 Client Error: Forbidden for url: https://&lt;tenant&gt;.sharepoint.com/sites/test_site/_api/Web/RootFolder/Folders/Add('Shared%20Documents')) </code></pre> <p>and when I paste this into the browser:</p> <pre><code>https://&lt;tenant&gt;.sharepoint.com/sites/test_site/_api/Web/RootFolder/Folders/Add('Shared%20Documents') </code></pre> <p>I get an error like this:</p> <pre><code>&lt;m:error xmlns:m=&quot;http://schemas.microsoft.com/ado/2007/08/dataservices/metadata&quot;&gt; &lt;m:code&gt;-1, Microsoft.SharePoint.Client.ClientServiceException&lt;/m:code&gt; &lt;m:message xml:lang=&quot;en-US&quot;&gt;The HTTP method 'GET' cannot be used to access the resource 'Add'. The operation type of the resource is specified as 'Default'. Please use correct HTTP method to invoke the resource.&lt;/m:message&gt; &lt;/m:error&gt; </code></pre> <p>Can someone tell me what I'm doing wrong?</p> <h1>EDIT:</h1> <p>I figured it out. With this permission:</p> <pre><code>&lt;AppPermissionRequests AllowAppOnlyPolicy=&quot;true&quot;&gt; &lt;AppPermissionRequest Scope=&quot;http://sharepoint/content/sitecollection&quot; Right=&quot;Read&quot;/&gt; &lt;AppPermissionRequest Scope=&quot;http://sharepoint/content/sitecollection&quot; Right=&quot;Write&quot;/&gt; &lt;/AppPermissionRequests&gt; </code></pre> <p>it's not enough to create a folder that way. 
I had to change it to Full Control:</p> <pre><code>&lt;AppPermissionRequests AllowAppOnlyPolicy=&quot;true&quot;&gt; &lt;AppPermissionRequest Scope=&quot;http://sharepoint/content/sitecollection&quot; Right=&quot;FullControl&quot;/&gt; &lt;/AppPermissionRequests&gt; </code></pre>
<python><azure><sharepoint><office365api>
2023-03-30 22:55:44
1
1,185
martin
75,893,562
417,678
Flask-SocketIO in Webpack, can't get messages from server back to client
<p>When I'm using just plain Flask <code>FLASK_APP=web_server.py flask run --host=0.0.0.0</code> for development purposes, I can receive messages from the client (http and websocket). When I want to send messages back after an event, they don't seem to be received by the browser. I'm not sure where they're going. I have a feeling it's my Webpack config: I'm using npm to start the front-end development server on port 3000, and I'm wondering whether Flask-SocketIO knows how to send messages back through it. Does anyone see anything wrong with this off the top of their head?</p> <pre><code> devServer: { host: 'localhost', port: port, host: '0.0.0.0', historyApiFallback: true, open: true, proxy: { '**': 'http://0.0.0.0:5001/' } }, </code></pre> <pre><code>app = Flask(__name__) cors = CORS(app, supports_credentials=True, resources={r&quot;/*&quot;: {&quot;origins&quot;: &quot;*&quot;}}) app.config['CORS_HEADERS'] = 'Content-Type' app.config['SECRET_KEY'] = os.environ['FLASK_LOGIN_SECRET_KEY'] socketio = SocketIO(app, cors_allowed_origins='*', async_mode='eventlet') socketio.run(app, debug=True) </code></pre>
<python><flask><webpack><websocket><webpack-dev-server>
2023-03-30 22:53:13
1
6,469
mj_
75,893,532
10,530,575
Convert dataframe to list of records WITHOUT brackets
<p>I have a dataframe</p> <pre><code>import pandas as pd d = { &quot;letter&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]} df = pd.DataFrame(d) </code></pre> <p><a href="https://i.sstatic.net/U9kPD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U9kPD.png" alt="enter image description here" /></a></p> <p>If I use <code>df.values.tolist()</code>, I get</p> <p><a href="https://i.sstatic.net/kNjRn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kNjRn.png" alt="enter image description here" /></a></p> <p>But my expected result is the following,</p> <pre><code>letter =['a','b','c'] </code></pre> <p>which is without the nested brackets.</p>
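An editorial note on why the brackets appear, with a minimal sketch: `df.values.tolist()` converts each row to a list, so a one-column frame yields one-element inner lists; selecting the column first yields a flat list.

```python
import pandas as pd

df = pd.DataFrame({"letter": ["a", "b", "c"]})

# Row-wise conversion: every row becomes its own list, hence the brackets.
rows = df.values.tolist()       # [['a'], ['b'], ['c']]

# Column-wise conversion: a Series flattens to a plain list.
letter = df["letter"].tolist()  # ['a', 'b', 'c']
```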
<python><pandas><dataframe>
2023-03-30 22:45:40
3
631
PyBoss
75,893,429
4,061,339
A widget disappears when a button is clicked twice in streamlit in Python
<h1>What I want to achieve</h1> <p>To show a widget only when a button is clicked in streamlit. The return value of a time-consuming operation will be loaded into the widget. While the operation is being done, the button is disabled. Every time the button is clicked, the new value of the operation will be loaded into it.</p> <h1>Issue</h1> <p>When the button is clicked once, the widget is shown. However, when the button is clicked again, it disappears. It looks like the entire webpage was reset. When the button is clicked one more time, the widget is shown again. This cycle repeats.</p> <h1>Environment</h1> <ul> <li>Windows 8.1</li> <li>Python 3.9.13</li> <li>streamlit 1.20.0</li> </ul> <h1>Minimal reproducible example</h1> <pre><code>import streamlit as st import streamlit.components.v1 as stc def very_heavy_operation(): return 'result' def main(): process_btn_ph = st.empty() process_btn = process_btn_ph.button('process', disabled=False, key='1') text_wait = st.empty() print(process_btn) # for debug if process_btn: process_btn_ph.button('process', disabled=True, key='2') text_wait.markdown('**processing...**') result = very_heavy_operation() stc.html(f'&lt;div&gt;{result}&lt;/div&gt;', height=100, scrolling=True) process_btn_ph.button('process', disabled=False, key='3') text_wait.empty() if __name__ == '__main__': main() </code></pre> <h1>What I tried</h1> <p>I googled &quot;streamlit button placeholder push twice reset&quot; but no useful information was found.</p> <p>When the button is clicked once, <code>print(process_btn) # for debug</code> yields <code>True</code>. But when it is clicked again, it yields <code>False</code>. If the two <code>process_btn_ph.button()</code> in <code>if process_btn:</code> are removed, the code works as I want it to. I believe <code>process_btn_ph.button()</code> does something wrong but I have no idea how to solve it.</p> <p>Any suggestions are greatly appreciated. Thanks in advance.</p>
<python><html><button><widget><streamlit>
2023-03-30 22:24:05
1
3,094
dixhom
75,893,268
2,801,404
Is it possible to modify YOLOv8 to use it as a feature extractor for other tasks?
<p>I'm reading through the documentation of YOLOv8 <a href="https://docs.ultralytics.com/#where-to-start" rel="nofollow noreferrer">here</a>, but I fail to see an easy way to do what I suggest in the title. What I want to do is to load a pretrained YOLOv8 model, create a bigger model that will contain YOLOv8 as a submodule, and modify the forward function of YOLOv8 so that I may have access to the object detection loss plus the convolutional features, so that they can be used to feed subsequent layers for other custom tasks.</p> <p>To make things clearer, this is how yolov8 is intended to be used originally according to <a href="https://docs.ultralytics.com/quickstart/#use-with-python" rel="nofollow noreferrer">https://docs.ultralytics.com/quickstart/#use-with-python</a>:</p> <pre class="lang-py prettyprint-override"><code>from ultralytics import YOLO # Create a new YOLO model from scratch model = YOLO('yolov8n.yaml') # Load a pretrained YOLO model (recommended for training) model = YOLO('yolov8n.pt') # Train the model using the 'coco128.yaml' dataset for 3 epochs results = model.train(data='coco128.yaml', epochs=3) # Evaluate the model's performance on the validation set results = model.val() # Perform object detection on an image using the model results = model('https://ultralytics.com/images/bus.jpg') # Export the model to ONNX format success = model.export(format='onnx') </code></pre> <p>Instead, I want to do something like this:</p> <pre class="lang-py prettyprint-override"><code>import torch import torch.nn as nn from ultralytics import YOLO class Yolov8Wrapper(nn.Module): def __init__(self, yolov8_feature_dim, n1, n2, n3): super().__init__() self.yolov8 = YOLO('yolov8n.pt') self.fc1 = nn.Linear(yolov8_feature_dim, n1) self.fc2 = nn.Linear(yolov8_feature_dim, n2) self.fc3 = nn.Linear(yolov8_feature_dim, n3) def forward(self, images, gt_boxes): features, loss = self.yolov8(images, gt_boxes) logits1 = self.fc1(features) logits2 = self.fc2(features) logits3 = self.fc3(features) return { 'logits1': logits1, 'logits2': logits2, 'logits3': logits3, 'yolov8_loss': loss, } </code></pre> <p>The code above is a very simplified sketch and of course is not going to work, but more or less that's the idea. Moreover, by creating this ad-hoc wrapper I won't be able to use the out-of-the-box functionality to train, validate and predict that comes with the YOLO library, since it would be a custom architecture (YOLOv8 being just a submodule of it). Thus, I also need to figure out how to write a custom dataloader in order to provide YOLOv8 with the input it expects plus the additional stuff required by my wrapper (there could be different additional layers in the wrapper predicting different outputs based on YOLOv8's features, think of it as multitask learning).</p> <p>Is this possible? How can this be done?</p>
<python><deep-learning><pytorch><computer-vision><yolo>
2023-03-30 21:48:50
1
451
Pablo Messina
75,893,026
3,788,557
uploading a png/plot to s3 bucket?
<p>I can't figure out how to write images to my s3 bucket. I use matplotlib and Plotly:</p> <pre><code>import pandas as pd import boto3 import plotly.express as px path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv' data = pd.read_csv(path) data.rename(columns = {'Month':'ds', 'Sales':'y'}, inplace=True) fig1 = data.plot() #type(fig1) ### fig2=px.line(data, x='ds', y='y') type(fig2) </code></pre> <p>I tried to do <code>fig1.savefig('s3://my_bucket_name/fig1.png')</code> and I get</p> <blockquote> <p>'no such file or directory s3://my_bucket...'</p> </blockquote> <p>If I do something like <code>data.to_csv('s3://my_bucket_name/data.csv')</code> my file gets written just fine. I've tried a variety of things with plotly using</p> <pre><code>s3_client = boto3.client('s3') s3_client.upload_file(fig1, key='s3://my_bucket_name/fig1.png') </code></pre> <p>But I can't get this to work either.</p>
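A sketch of the usual pattern (editorial addition; the function name is hypothetical, and the bucket/key values are the question's placeholders): unlike pandas, `savefig` has no s3:// support, so render the figure into an in-memory buffer and hand that buffer to boto3. `upload_fileobj` takes a file-like object plus plain bucket and key names, not an s3:// URL.

```python
import io

def upload_figure_png(fig, s3_client, bucket, key):
    """Render a matplotlib figure to an in-memory PNG and stream it to S3."""
    buf = io.BytesIO()
    fig.savefig(buf, format="png")  # savefig accepts any file-like object
    buf.seek(0)                     # rewind so the upload reads from the start
    s3_client.upload_fileobj(buf, bucket, key)

# Intended usage (assumes AWS credentials are configured):
#   upload_figure_png(fig1, boto3.client("s3"), "my_bucket_name", "fig1.png")
```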
<python><amazon-web-services><amazon-s3><plot><plotly>
2023-03-30 21:09:50
3
6,665
runningbirds
75,892,988
5,495,134
Imputation of mixed data types with pandas and Scikit-Learn
<p>I have to create a pre-processing pipeline dynamically to impute missing values. That is, I want to go through all the columns in a pandas data frame (which I don't know beforehand) and impute their missing values. To impute the missing values I use <code>sklearn.preprocessing.SimpleImputer</code>.</p> <p>I use a different imputer depending on whether the column is numerical or not, like this:</p> <pre class="lang-py prettyprint-override"><code> numerical_imputer = SimpleImputer(strategy='median') categorical_imputer = SimpleImputer(missing_values=None,strategy='most_frequent') </code></pre> <p>My problem is that sometimes pandas encodes the missing values as one of <code>np.nan</code>, <code>None</code>, or <code>pd.NA</code>, and it's not always the same. If I force the missing-value encoding, it changes the whole column dtype, which is something I don't want to do.</p> <p>Is there any way to make this work with any data type and missing value encoding (of the possible ones for pandas)?</p>
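One common way to do this (an editorial sketch under the assumption that routing by dtype is acceptable): a `ColumnTransformer` with `make_column_selector` dispatches numeric and non-numeric columns to the two imputers without knowing the column names in advance.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.impute import SimpleImputer

# Hypothetical frame with one numeric and one object column, both with NaNs.
df = pd.DataFrame({
    "age": [25.0, np.nan, 40.0],
    "city": ["NY", np.nan, "NY"],
})

# Select columns by dtype at fit time; SimpleImputer's default
# missing_values=np.nan matches NaN in both numeric and object columns.
imputer = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"),
     make_column_selector(dtype_include=np.number)),
    ("cat", SimpleImputer(strategy="most_frequent"),
     make_column_selector(dtype_exclude=np.number)),
])
imputed = imputer.fit_transform(df)
```

The selectors are re-evaluated on whatever frame is passed to `fit`, so the same pipeline object works across frames with different columns.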
<python><pandas><scikit-learn><scikit-learn-pipeline>
2023-03-30 21:06:03
0
787
Rodrigo A
75,892,941
2,809,512
How to use IPython.notebook.kernel.execute in Azure databricks?
<p>I have built an HTML/JS-based UI within an Azure Databricks notebook.<br> While trying to call a Python function through the kernel, I keep getting an error that IPython isn't available.</p> <pre><code>%%javascript function call_python_fn(cmdstr){ return new Promise((resolve,reject) =&gt; { var callbacks = { iopub: { output: (data) =&gt; resolve(data.content.text.trim()) }, shell: { reply: (data) =&gt; resolve(data.content.status) } }; IPython.notebook.kernel.execute(cmdstr, callbacks); }); } var tmp = 2; call_python_fn('print('+tmp+')').then((resolve)=&gt;alert(resolve));` </code></pre> <p>The snippet above works fine in a conda Jupyter notebook; what is the alternative to <code>kernel.execute</code> in Azure Databricks?</p>
<python><azure><jupyter-notebook><databricks><azure-databricks>
2023-03-30 20:58:50
1
1,384
Hari Prasad
75,892,892
731,351
python polars: df partition with pivot and concat
<p>my goal was to groupby/partition by one column (a below), create a string identifier (b and c columns) then use this b_c identifier as a name for a column in a pivoted data frame. Code below works OK as far as I can tell, but the path to get the result is a bit twisted. So my question is: can this be done in a simpler way? BTW, at this tiny scale (max 1k of rows so far) I am not obsessed to make it faster.</p> <pre class="lang-py prettyprint-override"><code>data = { &quot;a&quot;: [1, 1, 1, 2, 2, 3], &quot;b&quot;: [11, 12, 13, 11, 12, 11], &quot;c&quot;: [&quot;x1&quot;, &quot;x2&quot;, &quot;x3&quot;, &quot;x1&quot;, &quot;x2&quot;, &quot;x1&quot;], &quot;val&quot;: [101, 102, 102, 201, 202, 301], } df = pl.DataFrame(data) print(df) counter = 0 for tmp_df in df.partition_by(&quot;a&quot;): grp_df = ( tmp_df.with_columns((pl.col(&quot;b&quot;).cast(pl.String) + &quot;_&quot; + pl.col(&quot;c&quot;)).alias(&quot;col_id&quot;)) .drop(&quot;b&quot;, &quot;c&quot;) .pivot(&quot;col_id&quot;, index=&quot;a&quot;) ) if counter == 0: result_df = grp_df.select(pl.all()) else: result_df = pl.concat([result_df, grp_df], how=&quot;diagonal&quot;) counter += 1 print(result_df) </code></pre> <p>Output:</p> <pre><code>shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ 11_x1 ┆ 12_x2 ┆ 13_x3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════║ β”‚ 1 ┆ 101 ┆ 102 ┆ 102 β”‚ β”‚ 2 ┆ 201 ┆ 202 ┆ null β”‚ β”‚ 3 ┆ 301 ┆ null ┆ null β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><python-polars>
2023-03-30 20:51:10
1
529
darked89
75,892,874
5,212,614
How can we melt a dataframe and list words under columns?
<p>I have a dataframe that looks like this.</p> <pre><code>import pandas as pd data = {'clean_words':['good','evening','how','are','you','how','can','i','help'], 'start_time':[1900,2100,2500,2750,2900,1500,1650,1770,1800], 'end_time':[2100,2500,2750,2900,3000,1650,1770,1800,1950], 'transaction':[1,1,1,1,1,2,2,2,2]} df = pd.DataFrame(data) df </code></pre> <p><a href="https://i.sstatic.net/h52md.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h52md.png" alt="enter image description here" /></a></p> <p>If I try a basic melt, like so...</p> <pre><code>df_melted = df.pivot_table(index='clean_words', columns='transaction') df_melted.tail() </code></pre> <p>I get this...</p> <p><a href="https://i.sstatic.net/kYiUn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kYiUn.png" alt="enter image description here" /></a></p> <p>What I really want is the transaction number as columns and the words listed down. So, if transaction1 was the column, these words would be listed in rows, under that column:</p> <pre><code>`'good','evening','how','are','you'` </code></pre> <p>Under transaction2, these words would be listed in rows, under that column:</p> <pre><code>'how','can','i','help' </code></pre> <p>How can I do that? The start_time and end_time are kind of superfluous here.</p>
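One possible approach (an editorial sketch, not part of the question): `cumcount` gives each word its position within its transaction, and that position becomes the row index of a pivot, so words line up under their transaction column in order.

```python
import pandas as pd

df = pd.DataFrame({
    "clean_words": ["good", "evening", "how", "are", "you",
                    "how", "can", "i", "help"],
    "transaction": [1, 1, 1, 1, 1, 2, 2, 2, 2],
})

# pos = 0, 1, 2, ... within each transaction; pivot turns transactions
# into columns and pads shorter transactions with NaN.
wide = (
    df.assign(pos=df.groupby("transaction").cumcount())
      .pivot(index="pos", columns="transaction", values="clean_words")
)
```

Column `1` holds `good, evening, how, are, you` and column `2` holds `how, can, i, help` (plus one NaN pad), which matches the layout asked for; `start_time` and `end_time` are simply dropped.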
<python><python-3.x><dataframe><pandas-melt>
2023-03-30 20:48:02
2
20,492
ASH
75,892,836
2,897,989
Huggingface Windows SYMLINK(?)/OSError - file that exists shows as missing
<p>Trying to run <code>train.py</code> to train a new language on <a href="https://github.com/clovaai/donut" rel="nofollow noreferrer">donut</a> based on a corpus I generated with their synthDOG tool, running the command <code>python train.py --config config/base.yaml --exp_version &quot;base&quot;</code> on up-to-date Windows 11 inside a conda virtualenv. Dev mode in Windows is activated, and I've launched Anaconda as admin; cmd also shows Administrator: at the top.</p> <p>The error is as follows:</p> <pre><code>Traceback (most recent call last): File &quot;C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\tokenization_utils_base.py&quot;, line 1958, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File &quot;C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\models\xlm_roberta\tokenization_xlm_roberta.py&quot;, line 168, in __init__ self.sp_model.Load(str(vocab_file)) File &quot;C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\sentencepiece\__init__.py&quot;, line 905, in Load return self.LoadFromFile(model_file) File &quot;C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\sentencepiece\__init__.py&quot;, line 310, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) OSError: Not found: &quot;C:\Users\me\.cache\models--naver-clova-ix--donut-base\snapshots\a959cf33c20e09215873e338299c900f57047c61\sentencepiece.bpe.model&quot;: No such file or directory Error #2 During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\me\Documents\Kontron\donut-master\train.py&quot;, line 149, in &lt;module&gt; train(config) File &quot;C:\Users\me\Documents\Kontron\donut-master\train.py&quot;, line 57, in train model_module = DonutModelPLModule(config) File &quot;C:\Users\me\Documents\Kontron\donut-master\lightning_module.py&quot;, line 30, in __init__ self.model = DonutModel.from_pretrained( File 
&quot;C:\Users\me\Documents\Kontron\donut-master\donut\model.py&quot;, line 594, in from_pretrained model = super(DonutModel, cls).from_pretrained(pretrained_model_name_or_path, revision=&quot;official&quot;, *model_args, **kwargs) File &quot;C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\modeling_utils.py&quot;, line 2498, in from_pretrained model = cls(config, *model_args, **model_kwargs) File &quot;C:\Users\me\Documents\Kontron\donut-master\donut\model.py&quot;, line 390, in __init__ self.decoder = BARTDecoder( File &quot;C:\Users\me\Documents\Kontron\donut-master\donut\model.py&quot;, line 159, in __init__ self.tokenizer = XLMRobertaTokenizer.from_pretrained( File &quot;C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\tokenization_utils_base.py&quot;, line 1804, in from_pretrained return cls._from_pretrained( File &quot;C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\tokenization_utils_base.py&quot;, line 1960, in _from_pretrained raise OSError( OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted. </code></pre> <p>When running by default, the cache link is incorrect, as it generates something like &quot;C:\User\me/.cache\etc.&quot;. I manually changed the cache, but still after downloading about a GB of model files, the folder only contains SYMLINK files that are 0 KB. Even when I manually download all the files and add them in, the path won't be recognized (please do note the path is now NOT mixing / and \ together like it did by default). Copy-pasting the apparently erroneous file path from the cmd stack trace to a Python <code>with open()</code> statement, it seems to open it just fine. I have no idea what's wrong, but I'd really like some help, I'm going crazy. It's saying files aren't there, but they are.</p>
<python><huggingface-transformers>
2023-03-30 20:42:05
0
7,601
lte__
75,892,786
875,737
Process() fails macOS swift
<p>I am trying to run a Python script from my macOS app, so I added the following code:</p> <pre><code>public class func run(name: String, arguments: [String] = [], environment: [String: String] = [:]) -&gt; ScriptResponse { let result = self.runResolved(path: &quot;/usr/bin/which&quot;, arguments: [&quot;python&quot;], environment: environment) return result } private class func runResolved(path: String, arguments: [String], environment: [String: String]) -&gt; ScriptResponse { let outputPipe = Pipe() let errorPipe = Pipe() let outputFile = outputPipe.fileHandleForReading let errorFile = errorPipe.fileHandleForReading let task = Process() task.launchPath = path task.arguments = arguments var env = ProcessInfo.processInfo.environment environment.enumerated().forEach { entry in env[entry.element.key] = entry.element.value } task.environment = env task.standardOutput = outputPipe task.standardError = errorPipe task.launch() task.waitUntilExit() let terminationStatus = Int(task.terminationStatus) let output = self.stringFromFileAndClose(file: outputFile) let error = self.stringFromFileAndClose(file: errorFile) return (terminationStatus, output, error) } </code></pre> <p>I think the code is correct, yet it always produces the following result:</p> <p><code>(terminationStatus: 1, standardOutput: &quot;&quot;, standardError: &quot;&quot;)</code></p> <p>My Python location, after I type <code>/usr/bin/which python</code>:</p> <p><code>/Users/me/.pyenv/shims/python</code></p> <p>Does anyone know why Process() fails, and can you help me understand why?</p>
<python><swift><macos><process>
2023-03-30 20:36:10
0
1,065
PiotrDomo
75,892,777
14,328,019
Plot error bars on a horizontal bar plot in seaborn
<p>How do you plot error bars, corresponding to 95% confidence intervals, on a horizontal bar plot in seaborn (i.e. bars from left to right, rather than bottom to top)?</p> <p>Adding the <code>errorbar=&quot;ci&quot;</code> argument does not work for me.</p>
<python><seaborn><bar-chart><confidence-interval><errorbar>
2023-03-30 20:34:20
1
349
RossB
75,892,537
8,537,770
How to put a JSON string representation inside a JSON string: AWS EventBridge
<p>I hope the title makes sense. What I am trying to do is create the AWS cdk code to have an AWS eventbridge rule trigger a lambda function with static input. What the input needs to look like is this</p> <pre><code>&quot;&quot;&quot;{ &quot;resource&quot;: &quot;/{proxy+}&quot;, &quot;path&quot;: &quot;/jobs/sendRepoJob&quot;, &quot;httpMethod&quot;: &quot;POST&quot;, &quot;requestContext&quot;: &quot;{}&quot;, &quot;queryStringParameters&quot;: null, &quot;multiValueQueryStringParameters&quot;: null, &quot;pathParameters&quot;: null, &quot;stageVariables&quot;: null, &quot;body&quot;: &quot;{\&quot;Configs\&quot;:{\&quot;_url\&quot;:\&quot;&lt;some_url&gt;&quot;,\&quot;name\&quot;:\&quot;&lt;some_name&gt;\&quot;},\&quot;Configs2\&quot;:{\&quot;_url\&quot;:\&quot;&lt;some_url&gt;&quot;,\&quot;name\&quot;:\&quot;&lt;some_name&gt;\&quot;}}&quot; &quot;isBase64Encoded&quot;: null }&quot;&quot;&quot; </code></pre> <p>Basically, I need the json string as the body in the static input for the eventbridge rule. 
However, when I try to create the rule with AWS CDK, the <code>events.CfnRule.TargetProperty</code> object takes a JSON string as input, as seen in the documentation: <a href="https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_events/CfnRule.html#aws_cdk.aws_events.CfnRule.TargetProperty" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_events/CfnRule.html#aws_cdk.aws_events.CfnRule.TargetProperty</a></p> <p>My code for creating the event in AWS CDK is shown below.</p> <pre><code>rule = events.CfnRule(self, &quot;InvokeApiRule&quot;, description=&quot;test&quot;, schedule_expression=&quot;rate(2 minutes)&quot;, state=&quot;ENABLED&quot;, role_arn=role.role_arn, targets=[events.CfnRule.TargetProperty( arn=&quot;&lt;lambda_arn&gt;&quot;, id=&quot;&lt;someID&gt;&quot;, dead_letter_config=events.CfnRule.DeadLetterConfigProperty( arn=&quot;&lt;dlq_arn&gt;&quot; ), input = tmp_input )]) </code></pre> <p>When I build the JSON string for <code>tmp_input</code> in the script, it is automatically converted to JSON when the stack is created. I need it to be a string, and to stay a string, as I showed above.</p> <p>Does anyone know how to do this?</p>
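A minimal sketch of the double-encoding idea (editorial addition; the inner payload values are the question's placeholders): serialize the inner dict to a string first, then serialize the outer event. The second `json.dumps` escapes the quotes inside the already-serialized body, so the body stays a string.

```python
import json

# Inner payload; the real URLs/names are elided in the question.
body = {"Configs": {"_url": "<some_url>", "name": "<some_name>"}}

event = {
    "resource": "/{proxy+}",
    "path": "/jobs/sendRepoJob",
    "httpMethod": "POST",
    # Serialize the inner dict FIRST so it becomes a plain string value.
    "body": json.dumps(body),
}

# Serializing the outer dict escapes the body's quotes (\" in the output),
# which is exactly the shape shown at the top of the question.
tmp_input = json.dumps(event)
```

After a round trip, `json.loads(tmp_input)["body"]` is still a string, which is what the Lambda-proxy-style event requires.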
<python><amazon-web-services><aws-event-bridge>
2023-03-30 20:03:03
1
663
A Simple Programmer
75,892,498
20,102,061
Output to Tkinter textbox does not appear
<p>I am trying to log some output instead to output it to the console, I want it to write it inside a textbox that I have defined.</p> <p>The output should be in the <code>match-case</code> of the <code>proccess_input</code> function.</p> <p>Problem is, I don't know why but nothing appears anywhere... It does not appear in the console or in the textbox. Not a single error is being raised.</p> <p>I did a little bit of research and got to this code:</p> <pre><code>from tkinter import * import sys import time import multiprocessing class Window: def __init__(self, WIDTH, HEIGHT, WIN) -&gt; None: &quot;&quot;&quot; Parameters ---------- self.width : int The width of )the window created. self.height : int The height of the window created. self.window : tk.TK The window object. Variables --------- self.data : dict A dictonary containing all information about buttons creates and titles displayed on screen. &quot;&quot;&quot; self.width = WIDTH self.height = HEIGHT self.window = WIN self.data : dict = { &quot;Title&quot; : None, &quot;btns&quot; : {} } def constract(self, title : str, background_color : str) -&gt; None: &quot;&quot;&quot; Creates the main window adds a title and a background color. Parameters ---------- title : str A string which will serve as the window's title. backgound_color : str A string represents a hex code color (e.g. #FFFFFF). &quot;&quot;&quot; self.window.title(title) self.window.geometry(f'{self.width}x{self.height}') self.window.configure(bg=background_color) def header(self, text : str, fill : str = &quot;black&quot;, font : str = 'Arial 28 bold', background : str ='#E1DCDC') -&gt; None: &quot;&quot;&quot; Displays a title on the screen. Parametes --------- text : str The text which will be displayed on the screen. fill : str The color of the text, can be an existing color in tkinter or a custom hex code color (e.g. #FFFFFF). font : str The font type, the size of the letters and other designs for the text. 
backgound : str The color of the box around the text, can be an existing color in tkinter or a custom hex code color (e.g. #FFFFFF). &quot;&quot;&quot; T = Label(self.window, text=text, bg=background ,fg=fill, font=font) T.pack() self.data[&quot;Title&quot;] = text def logger(self, BG_color : str, state : bool = False, side ='BOTTOM', anchor='se', area : tuple = ()) -&gt; object: # anchors - n s e w ne nw se sw center # state - TOP BOTTOM LEFT RIGHT if not area: t = Text(self.window, bg=BG_color,bd=1, width=self.width//50, height=self.height//40) else: t = Text(self.window, bg=BG_color,bd=1, width=area[0], height=area[1]) # if not state: # t.config(state=DISABLED) match side: case 'TOP': t.pack(side=TOP, anchor=anchor) case 'BOTTOM': t.pack(side=BOTTOM, anchor=anchor) case 'LEFT': t.pack(side=LEFT, anchor=anchor) case 'RIGHT': t.pack(side=RIGHT, anchor=anchor) case _: raise(ValueError(&quot;Value should be a string and one of those: \&quot;LEFT\&quot;, \&quot;RIGHT\&quot; \&quot;TOP\&quot;, \&quot;BOTTOM\&quot;.&quot;)) return t class PrintLogger: def __init__(self, textbox): self.textbox = textbox def write(self, text): self.textbox.config(state='normal') self.textbox.insert(END, text) self.textbox.config(state='disabled') def flush(self): pass def Move(direction, q): match direction: case 'left': q.enqueue('left') case 'right': q.enqueue('right') def get_input(seconds, q): win = Tk() WIDTH, HEIGHT = win.winfo_screenwidth(), win.winfo_screenheight() main_window = Window(WIDTH, HEIGHT, win) main_window.constract(&quot;AntiBalloons system controls&quot;, &quot;#E1DCDC&quot;) main_window.header(&quot;AntiAir system controls&quot;) l = main_window.logger(BG_color='#E1DCDC') sys.stdout = PrintLogger(l) # pass the Text widget to PrintLogger while True: try: win.bind('&lt;A&gt;', lambda event: Move('left', q)) win.bind('&lt;a&gt;', lambda event: Move('left', q)) win.bind('&lt;D&gt;', lambda event: Move('right', q)) win.bind('&lt;d&gt;', lambda event: Move('right', q)) 
except Exception: print('err') break #main_window.window.after(100, stdout_not_moving) main_window.window.mainloop() time.sleep(seconds) def process_input(seconds, q): while True: if not q.is_empty(): command = q.dequeue() match command: case 'left': sys.stdout.write('Moving left') case 'right': sys.stdout.write('Moving right') else: sys.stdout.write('Not moving') time.sleep(seconds) # def stdout_not_moving(): # print('Not moving') if __name__ == '__main__': multiprocessing.freeze_support() manager = multiprocessing.Manager() q = manager.Queue() p1 = multiprocessing.Process(target=get_input, args=[0.1, q]) p2 = multiprocessing.Process(target=process_input, args=[0.03015, q]) p1.start() p2.start() p1.join() p2.join() </code></pre>
<python><python-3.x><tkinter>
2023-03-30 20:00:59
1
402
David
75,892,493
6,654,451
Weird python sys.path module resolution behavior
<p>I produced a minimal example of behavior that I thought was weird.</p> <pre><code>β”œβ”€β”€ a β”‚  └── com β”‚  β”œβ”€β”€ __init__.py β”‚  └── x β”‚  β”œβ”€β”€ __init__.py β”‚  └── test1.py └── b └── com β”œβ”€β”€ __init__.py └── x β”œβ”€β”€ __init__.py └── test2.py </code></pre> <p>In the prior file structure I set my working directory to the one that contains 'a' and 'b'. Then I run</p> <pre><code># Works python3 -c &quot;import sys; sys.path = ['a', 'b']; import com.x.test1&quot; </code></pre> <pre><code># Error python3 -c &quot;import sys; sys.path = ['a', 'b']; import com.x.test2&quot; </code></pre> <p>For the second example, it seems that the first sys.path entry matches the common prefix, but when it ultimately can't find module <code>test2</code>, it errors out. I would have expected Python to resolve it by matching the module prefix in the next sys.path entry and finding the module in the b folder. Why doesn't it work like this? It seems like this makes it really hard for multiple packages in an organization with the same namespace to collaborate.</p>
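<p>One experiment that did change the behavior: removing the <code>__init__.py</code> files entirely, which (if I understand PEP 420 correctly) turns <code>com</code> and <code>com.x</code> into namespace packages whose portions get merged across sys.path entries. A self-contained reproduction that builds the tree in a temp directory:</p>

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
for branch, mod in (("a", "test1"), ("b", "test2")):
    pkg = os.path.join(root, branch, "com", "x")
    os.makedirs(pkg)
    # note: no __init__.py anywhere -> 'com' and 'com.x' become namespace packages
    with open(os.path.join(pkg, mod + ".py"), "w") as f:
        f.write("VALUE = %r\n" % mod)

sys.path[:0] = [os.path.join(root, "a"), os.path.join(root, "b")]

import com.x.test1
import com.x.test2  # found under b/ because namespace portions merge across sys.path

print(com.x.test1.VALUE, com.x.test2.VALUE)
```

<p>Both imports succeed here, which suggests the regular-package <code>__init__.py</code> is what pins <code>com</code> to the first sys.path entry in my original layout.</p>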
<python><sys.path>
2023-03-30 20:00:15
0
331
Alex Kyriazis
75,892,442
1,872,400
Connecting and Authenticating to Delta Lake on Azure Data Lake Storage Gen 2 using delta-rs Python API
<p>I am trying to connect and authenticate to an existing Delta Table in Azure Data Lake Storage Gen 2 using the Delta-rs Python API. I found the Delta-rs library from this StackOverflow question: <a href="https://stackoverflow.com/questions/67181870/delta-lake-independent-of-apache-spark">Delta Lake independent of Apache Spark?</a></p> <p>However, the documentation for Delta-rs (<a href="https://delta-io.github.io/delta-rs/python/usage.html" rel="nofollow noreferrer">https://delta-io.github.io/delta-rs/python/usage.html</a> and <a href="https://docs.rs/object_store/latest/object_store/azure/enum.AzureConfigKey.html#variant.SasKey" rel="nofollow noreferrer">https://docs.rs/object_store/latest/object_store/azure/enum.AzureConfigKey.html#variant.SasKey</a>) is quite vague regarding the authentication and connection process to Azure Data Lake Storage Gen 2. I am having trouble finding a clear example that demonstrates the required steps.</p> <p>Can someone provide a step-by-step guide or example on how to connect and authenticate to an Azure Data Lake Storage Gen 2 Delta table using the Delta-rs Python API?</p>
<python><authentication><azure-storage><azure-data-lake><delta-lake>
2023-03-30 19:54:36
5
573
user__42
75,892,343
9,536,233
eBay webscraping from different geographical locations is returning inconsistent number of results
<p>I have a small eBay webscraping project in which I am collecting data on Pokemon Trading Card Game (TCG) items. I have a friendly scraper which only issues a request every 20 seconds or more. I was wondering whether eBay shows me all items available, or a subset thereof, and unfortunately the latter is the case. I reviewed their documentation and alternative resources, but am unable to find in-depth information regarding their algorithm.</p> <p>Based on geographical location, eBay filters the result set and returns only items they regard as relevant for me. For this specific example, some countries receive an additional 40.000 items, which is frustrating and gives me a fear of missing out.</p> <p>Preferably, I would not retrieve this additional data by means of a proxy, but directly by means of an altered URL which returns the complete result set irrespective of geographical location, allowing me to use their front-end interface.</p> <p>For example, I have tried to add the Region tag to my URL, but with no marked effect: <code>&amp;Region=Europe%7CAustralia%252C%2520Oceania%7CAsia%7CAntarctica%7CAfrica%7CMiddle%2520East%7CNorth%2520America%7CSouth%2520America</code> Similarly, I have tried to add the Location tag (worldwide), but this instead decreased the number of retrieved results: <code>&amp;LH_PrefLoc=2</code></p> <p>Is there a way to overcome their result set limitation? Perhaps by setting specific request headers or different URL modifications? I feel like I'm hitting a wall.</p> <pre><code># Below is a code snippet showing the general methodology for the retrieved result sets. # Let's iterate over proxies, retrieve a specific eBay URL, and check how many results are returned per proxy. # We will scrape the Pokemon Trading Card Game (TCG) items. 
base_url = 'https://www.ebay.com/b/Pokemon-TCG/2536/bn_7117595258?LH_Auction=1&amp;rt=nc&amp;_sop=5' adj_url_1 = 'https://www.ebay.com/b/Pokemon-TCG/2536/bn_7117595258?LH_Auction=1&amp;rt=nc&amp;Region=Europe%7CAustralia%252C%2520Oceania%7CAsia%7CAntarctica%7CAfrica%7CMiddle%2520East%7CNorth%2520America%7CSouth%2520America&amp;_sop=5' adj_url_2 = 'https://www.ebay.com/b/Pokemon-TCG/2536/bn_7117595258?LH_Auction=1&amp;rt=nc&amp;LH_PrefLoc=2&amp;_sop=5' cookies={} country_results=[] #Iterate over proxies for proxy in tqdm(ebay_proxies[1:]): try: header=header_generator() proxy_country=get_location(proxy[0][&quot;https&quot;].split(&quot;:&quot;)[0], header, proxy[0], cookies)[&quot;country&quot;] SP={} for key, val in {&quot;base_url&quot;: base_url, &quot;adj_url_1&quot; : adj_url_1, &quot;adj_url_2&quot; : adj_url_2}.items(): soup, header, session = retrieve_soup(URL=val ,proxy=proxy[0], headers=header, cookies=cookies) results = soup.find('h2', {'class': 'srp-controls__count-heading'}).text SP[key] = results SP[&quot;country_proxy&quot;] = proxy_country country_results.append(SP) except: print(&quot;proxy failed&quot;) </code></pre> <p>After filtering for duplicated countries, this resulted in the following resultsets:</p> <pre><code> base_url adj_url_1 adj_url_2 country_proxy 0 168,113 Results 168,112 Results 169,021 Results Germany 1 209,539 Results 209,533 Results 72,828 Results Finland 13 203.499 resultados 203.502 resultados 25.995 resultados Argentina 16 205.693 resultados 205.689 resultados 52.273 resultados Bolivia 17 203.051 resultados 203.053 resultados 30.790 resultados Peru 19 179,187 Results 179,188 Results 180,092 Results United States 25 164,360 Results 164,358 Results 165,297 Results India </code></pre> <p>I can't place full source code, as I do not want to share my proxies for the obvious reasons. However, you can test the <code>base_url</code>, and see yourself how many results are returned and if it can be improved!</p>
<python><web-scraping><proxy><request><get>
2023-03-30 19:41:08
0
799
Rivered
75,892,309
15,474,270
How can I script entering cmd as administrator?
<p>I'm writing a script in Python which needs administrator privileges so it can start and stop services in Windows. I've found this line</p> <pre><code>runas /user:Administrator cmd </code></pre> <p>But it opens a prompt, which I can't fill with print, and even if I could, it opens cmd as administrator in a new window. The PowerShell line</p> <pre><code>start-process powershell -verb runas </code></pre> <p>does the same: it opens a window to enter a user and password.</p> <p>I need the script to run all the way through, so that it starts a service whenever it detects that one is down. How can I do that, knowing I need administrator privileges and that it needs to be a pipelined script?</p>
<python><windows><service><command-line-interface><elevated-privileges>
2023-03-30 19:36:22
1
493
Claudio Torres
75,892,172
19,253,406
What is the difference between exhausting a generator using "for i in generator" and next(generator)
<p>I want to learn how to use the return value of a generator (but this not what I'm concerned with now).</p> <p>After searching, they said that I can get the return value from <code>StopIteration</code> when the generator is exhausted, so I tested it with the following code:</p> <pre><code>def my_generator(): yield 1 yield 2 yield 3 return &quot;done&quot; def exhaust_generator(_gen): print(&quot;===============================================\n&quot;) print(&quot;exhaust_generator&quot;) try: while True: print(next(_gen)) except StopIteration as e: print(f&quot;Return value: '{e.value}'&quot;) def exhaust_generator_iter(_gen): print(&quot;===============================================\n&quot;) print(&quot;exhaust_generator_iter&quot;) try: for i in _gen: print(i) print(next(_gen)) except StopIteration as e: print(f&quot;Return value: {e.value}&quot;) gen = my_generator() gen2 = my_generator() exhaust_generator(gen) exhaust_generator_iter(gen2) </code></pre> <pre><code>=============================================== exhaust_generator 1 2 3 Return value: 'done' =============================================== exhaust_generator_iter 1 2 3 Return value: None </code></pre> <p>As you can see, the return value is different between the two versions of exhausting the generator and I wonder why.</p> <p>Searched google but it has not been helpful.</p>
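<p>After posting, I reduced it further, and it seems the <code>for</code> loop itself consumes the <code>StopIteration</code> that carries the return value, so my extra <code>next()</code> call raises a brand-new <code>StopIteration</code> whose value defaults to <code>None</code>. At least this minimal check is consistent with that reading:</p>

```python
def my_generator():
    yield 1
    yield 2
    yield 3
    return "done"

# Path 1: drive the generator manually -- the generator's own StopIteration
# carries the return value.
gen = my_generator()
via_next = None
try:
    while True:
        next(gen)
except StopIteration as e:
    via_next = e.value          # "done"

# Path 2: the for loop silently catches that same StopIteration internally,
# so the extra next() afterwards raises a fresh one with no value.
gen2 = my_generator()
for _ in gen2:
    pass
try:
    next(gen2)
except StopIteration as e:
    via_for = e.value           # None

print(via_next, via_for)
```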
<python><generator>
2023-03-30 19:19:30
2
338
Chicky
75,892,092
1,045,755
Replace rows in one data frame with rows from another
<p>I have a data frame that looks something like:</p> <pre><code>df1 = name code country type --------------------------------- ben 100 de l mic 200 nl s dan NaN NaN NaN bro NaN NaN NaN </code></pre> <p>I then have another data frame that looks like:</p> <pre><code>df2 = name code country type --------------------------------- dan 400 be l bro 500 cz m </code></pre> <p>So in this example <code>df1</code> has some rows with some missing values, or in some cases wrong values. However, luckily I have another data frame <code>df2</code> which has the correct values.</p> <p>So basically, what I would like to do is move the rows from <code>df2</code> to <code>df1</code> based on the <code>name</code> column, i.e. the resulting data frame should look like:</p> <pre><code>df_final = name code country type --------------------------------- ben 100 de l mic 200 nl s dan 400 be l bro 500 cz m </code></pre>
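<p>For context, the closest I have gotten is <code>set_index</code> + <code>update</code>, which does fill in the values (though the <code>code</code> column comes back as floats because of the earlier NaNs). I am not sure whether this is the idiomatic way:</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    "name": ["ben", "mic", "dan", "bro"],
    "code": [100, 200, np.nan, np.nan],
    "country": ["de", "nl", np.nan, np.nan],
    "type": ["l", "s", np.nan, np.nan],
})
df2 = pd.DataFrame({
    "name": ["dan", "bro"],
    "code": [400, 500],
    "country": ["be", "cz"],
    "type": ["l", "m"],
})

# Align both frames on "name", overwrite df1's values with df2's, restore the column
df_final = df1.set_index("name")
df_final.update(df2.set_index("name"))
df_final = df_final.reset_index()
print(df_final)
```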
<python><pandas>
2023-03-30 19:09:13
2
2,615
Denver Dang
75,892,091
6,400,443
Polars - Glob read Parquet from S3 only read first file
<p>I am trying to read some Parquet files from S3 using Polars.</p> <p>Those files are generated by Redshift using UNLOAD with PARALLEL ON.</p> <p>The 4 files are: <code>0000_part_00.parquet</code>, <code>0001_part_00.parquet</code>, <code>0002_part_00.parquet</code>, <code>0003_part_00.parquet</code></p> <p>When I use <code>pl.read_parquet(&quot;s3://my_bucket/my_folder/*.parquet&quot;)</code>, it returns the result for only the first file (<code>0000_part_00.parquet</code>) -&gt; 340 rows.</p> <p>The weird thing is that running the same command locally, <code>pl.read_parquet(&quot;*.parquet&quot;)</code>, returns all the rows -&gt; 1239 rows.</p> <p>Is this normal behavior, or am I missing something here?</p>
<python><amazon-s3><python-polars>
2023-03-30 19:09:09
2
737
FairPluto
75,891,977
2,270,422
Type hint pytz timezone
<p>I have a function that returns a <code>pytz.timezone('...')</code> object. For example for the function below, what should be the type hint for the return?</p> <pre><code>def myfunc(tz_str: str) -&gt; ????: return pytz.timezone(tz_str) </code></pre> <p>And in general, how should we type hint objects from installed modules?</p>
<python><python-typing><pytz>
2023-03-30 18:53:43
4
685
masec
75,891,862
5,403,987
How to get pytest to not delete successful test directories
<p>I'm using pytest and it's been great. My tests use the <code>tmp_path</code> fixture to hold a bunch of outputs (log files, plots, etc.). I want to keep these so I have a record of what the performance of all tests are, not just pass/fail. That way as I make changes I can see whether things are faster or slower, diff the logs, etc. Currently pytest only keeps the directories for tests that fail. I've been through the documentation and can't find any flags that will convince it to do otherwise.</p> <p>Here's a sample test:</p> <pre><code>from pathlib import Path def test_file_to_keep(tmp_path): # write some outputs from the test. my_file_to_keep = Path(tmp_path) / &quot;my_file.txt&quot; with open(my_file_to_keep, 'w') as out_file: out_file.writelines(&quot;some good output.&quot;) assert my_file_to_keep.exists() </code></pre> <p>I want to be able to look at the tmp_path directory and examine the contents of <code>my_file.txt</code>.</p>
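<p>The fallback I am considering is to stop using <code>tmp_path</code> for these artifacts and write them to a stable, per-test output directory myself. The names below (<code>make_output_dir</code>, <code>output_path</code>, <code>test_outputs</code>) are my own invention, not pytest API:</p>

```python
from datetime import datetime
from pathlib import Path

def make_output_dir(base: Path, test_name: str) -> Path:
    """Create (and never delete) a per-test, per-run output directory."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    out = base / test_name / stamp
    out.mkdir(parents=True, exist_ok=True)
    return out

# In conftest.py this could be wrapped as a fixture, e.g.:
#
# @pytest.fixture
# def output_path(request):
#     return make_output_dir(Path("test_outputs"), request.node.name)
#
# so tests take output_path instead of tmp_path and nothing gets cleaned up,
# and each run's logs/plots stay around for diffing against earlier runs.
```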
<python><pytest>
2023-03-30 18:39:43
1
2,224
Tom Johnson
75,891,839
10,620,003
Use parameters defined in another file without redefining them in the main file
<p>I have two .py files, one is Setting.py and one is main.py.</p> <p>In the Setting.py file I have the following code:</p> <pre><code>a= 2 def foo(a): return a+2 </code></pre> <p>In the main file I have this:</p> <pre><code>import Setting if __name__ == '__main__': X = foo(a) </code></pre> <p>So, when I run the main file, I get an error that <code>a</code> is not defined. I don't want to define <code>a</code> again in the main file. Could you please help me with this? This is a minimal version of my actual code; I have multiple .py files and parameters. Thanks</p>
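<p>To make sure I am describing the problem correctly, here is a self-contained reproduction of the fix I am testing — qualifying both names with the module instead of redefining <code>a</code>. (The snippet writes Setting.py to a temp dir only so it runs on its own.)</p>

```python
import os
import sys
import tempfile

# Recreate Setting.py in a temp dir so this snippet is self-contained
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "Setting.py"), "w") as f:
    f.write("a = 2\n\ndef foo(a):\n    return a + 2\n")
sys.path.insert(0, tmp)

import Setting

# Qualify both names with the module -- no need to redefine a in main.py
X = Setting.foo(Setting.a)
print(X)  # 4
```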
<python>
2023-03-30 18:37:04
1
730
Sadcow
75,891,657
1,586,200
Maximum of an array at indices provided in another array in Numpy
<p>Given an array <code>a = [0.51, 0.6, 0.8, 0.65, 0.7, 0.75, 0.9]</code> and an indices array <code>ind = [0, 1, 3, 1, 2, 2, 3]</code>, find the maximum value for each distinct index and write it back at the corresponding positions in <code>a</code>. So here the output should be <code>out = [0.51, 0.65, 0.9, 0.65, 0.75, 0.75, 0.9]</code>.</p> <p>Explanation: Consider the value 1 in the <code>ind</code> array. The values at the corresponding positions are <code>[0.6, 0.65]</code>. The maximum value is 0.65. Replace it at the corresponding positions (1 and 3) in array <code>a</code>.</p> <p>I am interested in vectorized code. The code using a for loop is pretty simple.</p> <pre><code>import numpy as np a = np.array([0.51, 0.6, 0.8, 0.65, 0.7, 0.75, 0.9]) ind = np.array([0, 1, 3, 1, 2, 2, 3]) # Get unique indices unique_indices = np.unique(ind) # Loop through unique indices and find max value for each index for index in unique_indices: max_value = np.max(a[ind == index]) a[ind == index] = max_value out = a print(out) </code></pre> <p>What I have explored: I think we can use <code>np.maximum.reduceat</code> here, but I am still not sure how to write working code using it.</p>
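<p>While looking for a <code>reduceat</code>-based solution, I also came across <code>np.maximum.at</code> (the unbuffered ufunc scatter), which seems to do the grouped max without sorting; posting it here in case it clarifies what I am after:</p>

```python
import numpy as np

a = np.array([0.51, 0.6, 0.8, 0.65, 0.7, 0.75, 0.9])
ind = np.array([0, 1, 3, 1, 2, 2, 3])

# One slot per distinct index, seeded with -inf so any real value wins
group_max = np.full(ind.max() + 1, -np.inf)
np.maximum.at(group_max, ind, a)   # unbuffered scatter-max per index

out = group_max[ind]               # broadcast each group's max back to its positions
print(out)                         # matches the expected out above
```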
<python><numpy><vectorization>
2023-03-30 18:15:47
0
9,075
Autonomous
75,891,653
4,162,689
How do you transform a Pandas data frame into categories and lists of items?
<p>I am trying to output a task list as PDF. The output should be</p> <ul> <li>Hallway <ul> <li>Task</li> <li>Task</li> <li>Task</li> </ul> </li> <li>Front Room <ul> <li>Task</li> <li>Task</li> <li>Task</li> </ul> </li> <li>[Room name] <ul> <li>Task</li> <li>...</li> </ul> </li> </ul> <p>I have the following Pandas data frame which represents the cleaning tasks that need to occur in each room on a given week.</p> <pre><code> Room Task Weeks 0 Hallway Vacuum floor ABCD 2 Hallway Mop floor ABCD 4 Front Room Vacuum floor ABCD 5 Front Room Empty Vacuum Cleaner ABCD 6 Front Room Wipe skirting boards with slightly damp cloth A 7 Front Room Mop floor ABCD 8 Front Room If tablecloth is dirty, change tablecloth ABCD 10 Living Room Vacuum floor ABCD 11 Living Room Wipe skirting boards with slightly damp cloth A 12 Living Room Tidy sofa ABCD </code></pre> <p>I have tried groupby</p> <pre><code>today_tasks = ( task_data[task_data[&quot;Weeks&quot;].str.contains(schedule_week)] .groupby(&quot;Room&quot;, group_keys=True) .apply(lambda x: x) ) print(today_tasks.head(10)) </code></pre> <p>Which returns the following</p> <pre><code> Room Task Weeks Room Bathroom 62 Bathroom Vacuum floor ABCD 63 Bathroom Mop floor ABCD 64 Bathroom Clean bath ABCD 65 Bathroom Clean shower heads AC 66 Bathroom Clean shower glass AC 68 Bathroom Clean toilet ABCD 69 Bathroom Clean sink AC Front Bedroom Upstairs 45 Front Bedroom Upstairs Vacuum floor ABCD 46 Front Bedroom Upstairs Mop floor ABCD 47 Front Bedroom Upstairs Remove sheets from bed ABCD </code></pre> <p>I appreciate that I could <code>for</code> through and put the data frame into a dictionary, but this does not feel very Pandas-native. Equally, however, the set is only 70 rows long and only needs to be executed weekly, so performance probably does not matter too much. Nonetheless, I do prefer to abide by best practices.</p> <p>Is there a better way to do it?</p>
<python><pandas><dataframe>
2023-03-30 18:15:26
1
972
James Geddes
75,891,632
1,562,489
Issues with running Elastic in Bitbucket pipelines
<p>Hi can anyone help me debug an issue with Elastic in my CI/CD environment?</p> <p>Elastic is running locally (for development) and on their managed SaaS platform for staging and production.</p> <p>I have the following step in bitbucket pipelines</p> <pre><code> - step: &amp;unit_tests image: python:3.9 name: Testing caches: - pip script: - pip install -r requirements.txt - export GC_API_DB_PORT=5432 - export CS_DEPLOYMENT=development - docker run -d -p 9200:9200 -e &quot;discovery.type=single-node&quot; docker.elastic.co/elasticsearch/elasticsearch:8.6.2 - python manage.py search_index --rebuild -f - python -m pytest --cov-report term --cov=gc_api --cov=gc_api_auth --cov-fail-under=85 services: - postgres - docker </code></pre> <p>However it seems I cannot connect to the elastic instance (I've also tried a configuration where elastic runs separately)</p> <pre><code> elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:8.6.2 container_name: elasticsearch cap_add: - IPC_LOCK volumes: - elasticsearch-data:/usr/share/elasticsearch/data ports: - 9200:9200 - 9300:9300 .... 
- step: &amp;unit_tests image: python:3.9 name: Testing caches: - pip script: - pip install -r requirements.txt - export GC_API_DB_PORT=5432 - export ELASTIC_SEARCH_HOST=$BITBUCKET_DOCKER_HOST_INTERNAL - python -m pytest --cov-report term --cov=gc_api --cov=gc_api_auth --cov-fail-under=85 services: - postgres - elasticsearch </code></pre> <p>Here is my error</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.9/site-packages/requests/adapters.py&quot;, line 439, in send resp = conn.urlopen( File &quot;/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py&quot;, line 787, in urlopen retries = retries.increment( File &quot;/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py&quot;, line 550, in increment raise six.reraise(type(error), error, _stacktrace) File &quot;/usr/local/lib/python3.9/site-packages/urllib3/packages/six.py&quot;, line 769, in reraise raise value.with_traceback(tb) File &quot;/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py&quot;, line 703, in urlopen httplib_response = self._make_request( File &quot;/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py&quot;, line 449, in _make_request six.raise_from(e, None) File &quot;&lt;string&gt;&quot;, line 3, in raise_from File &quot;/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py&quot;, line 444, in _make_request httplib_response = conn.getresponse() File &quot;/usr/local/lib/python3.9/http/client.py&quot;, line 1377, in getresponse response.begin() File &quot;/usr/local/lib/python3.9/http/client.py&quot;, line 320, in begin version, status, reason = self._read_status() File &quot;/usr/local/lib/python3.9/http/client.py&quot;, line 289, in _read_status raise RemoteDisconnected(&quot;Remote end closed connection without&quot; urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception 
occurred: Traceback (most recent call last): File &quot;/usr/local/lib/python3.9/site-packages/elasticsearch/connection/http_requests.py&quot;, line 166, in perform_request response = self.session.send(prepared_request, **send_kwargs) File &quot;/usr/local/lib/python3.9/site-packages/requests/sessions.py&quot;, line 655, in send r = adapter.send(request, **kwargs) File &quot;/usr/local/lib/python3.9/site-packages/requests/adapters.py&quot;, line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/opt/atlassian/pipelines/agent/build/manage.py&quot;, line 15, in &lt;module&gt; execute_from_command_line(sys.argv) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py&quot;, line 419, in execute_from_command_line utility.execute() File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py&quot;, line 413, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/base.py&quot;, line 354, in run_from_argv self.execute(*args, **cmd_options) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/base.py&quot;, line 398, in execute output = self.handle(*args, **options) File &quot;/usr/local/lib/python3.9/site-packages/django_elasticsearch_dsl/management/commands/search_index.py&quot;, line 301, in handle for index in self.es_conn.indices.get_alias().values(): File &quot;/usr/local/lib/python3.9/site-packages/elasticsearch/client/utils.py&quot;, line 347, in _wrapped return func(*args, params=params, headers=headers, **kwargs) File &quot;/usr/local/lib/python3.9/site-packages/elasticsearch/client/indices.py&quot;, line 642, in get_alias return self.transport.perform_request( 
File &quot;/usr/local/lib/python3.9/site-packages/elasticsearch/transport.py&quot;, line 417, in perform_request self._do_verify_elasticsearch(headers=headers, timeout=timeout) File &quot;/usr/local/lib/python3.9/site-packages/elasticsearch/transport.py&quot;, line 606, in _do_verify_elasticsearch raise error File &quot;/usr/local/lib/python3.9/site-packages/elasticsearch/transport.py&quot;, line 569, in _do_verify_elasticsearch _, info_headers, info_response = conn.perform_request( File &quot;/usr/local/lib/python3.9/site-packages/elasticsearch/connection/http_requests.py&quot;, line 194, in perform_request raise ConnectionError(&quot;N/A&quot;, str(e), e) elasticsearch.exceptions.ConnectionError: ConnectionError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))) caused by: ConnectionError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))) </code></pre> <p>Any input you have would be appreciated</p>
<python><elasticsearch><cicd><bitbucket-pipelines>
2023-03-30 18:12:43
1
1,198
David Sigley
75,891,327
5,069,105
DataFrame downsampling
<p>In a DataFrame with too much resolution in column <code>H</code>, the goal is to downsample that column, and sum the values of the other columns.</p> <p>My column <code>H</code> is a float and does not represent time. The other columns are counters of events. So when downsampling H, the values from other columns must be added.</p> <pre><code>&gt; df = pd.DataFrame( data=[ [1.0, 4, 2], [1.5, 3, 2], [2.0, 3, 3], [2.5, 2, 5] ], columns=['H', 'A', 'B'] ) &gt; df H A B 0 1.0 4 2 1 1.5 3 2 2 2.0 3 3 3 2.5 2 5 </code></pre> <p>I'd like column <code>H</code> to have an interval of 1.0 rather than 0.5, adding the values of the other columns:</p> <pre><code> H A B 0 1.0 7 4 1 2.0 5 8 </code></pre> <p>Which I can do by:</p> <pre><code>&gt; def downsample(x): return int(x) &gt; df2 = df.groupby(df.H.apply(downsample)).sum() &gt; df2 H A B H 1 2.5 7 4 2 4.5 5 8 </code></pre> <p>But then I'm left with garbage which must be cleaned:</p> <pre><code>&gt; del df2['H'] &gt; df2.reset_index(inplace=True) &gt; df2 H A B 0 1 7 4 1 2 5 8 </code></pre> <p>Is there an easier way to do this?</p>
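<p>For comparison, the most compact variant I have found so far selects the summed columns up front and floors <code>H</code> inside the <code>groupby</code>, which avoids the cleanup step — still curious whether there is something more idiomatic:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    data=[[1.0, 4, 2], [1.5, 3, 2], [2.0, 3, 3], [2.5, 2, 5]],
    columns=["H", "A", "B"],
)

# Group on the floored H, sum only A and B, then turn the grouper back into a column
df2 = df.groupby(np.floor(df["H"]))[["A", "B"]].sum().reset_index()
print(df2)
```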
<python><pandas><dataframe>
2023-03-30 17:34:08
2
1,789
Raf
75,891,269
4,623,227
How can I use Pytest with a local module not installed, without import errors
<p>I've tried other solutions with no luck (<code>pytest.ini</code>, <code>__init__.py</code> inside <code>test\</code>, <code>conftest.py</code>...)</p> <p>I have the following project structure:</p> <pre><code>. β”œβ”€β”€ module β”‚ β”œβ”€β”€ bar.py β”‚ β”œβ”€β”€ foo.py β”‚ β”œβ”€β”€ __init__.py β”‚ └── __main__.py └── tests β”œβ”€β”€ __init__.py └── test_module.py </code></pre> <h2>Module works, test fails</h2> <p>and the files are:</p> <p><code>module/bar.py</code>:</p> <pre class="lang-py prettyprint-override"><code>def bar(): return &quot;bar&quot; </code></pre> <p><code>module/foo.py</code>: <em>I suspect this import should be done in other way... * more on this later</em></p> <pre class="lang-py prettyprint-override"><code>from bar import bar def foo(): value = bar() print(value) return value </code></pre> <p><code>module/__init__.py</code>: empty</p> <p><code>module/__main__.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from foo import foo foo() </code></pre> <p>And my module works, so if I call it:</p> <pre class="lang-bash prettyprint-override"><code>python module </code></pre> <p>I get <code>bar</code></p> <p>so far so good. Now the test, <code>tests/test_module.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from module import foo def test_foo(): assert foo.bar() == &quot;bar&quot; </code></pre> <p><code>tests/__init__.py</code>: empty</p> <p>If I try to execute the tests, either with <code>pytest .</code> or <code>python -m pytest</code> I get</p> <pre><code>_________________________________________________________________________________________________________________________________ ERROR collecting tests/test_module.py __________________________________________________________________________________________________________________________________ ImportError while importing test module '/home/susensio/Projects/sandbox/pytest/tests/test_module.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.11/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_module.py:1: in &lt;module&gt; from module import foo E ModuleNotFoundError: No module named 'module' </code></pre> <p>So, at this point <strong>module works, test does not</strong></p> <hr /> <h2>Module fails, test works</h2> <p>I've tried several things, the only one that changes something is the following: changing <code>foo.py</code> to (note the relative import):</p> <pre class="lang-py prettyprint-override"><code>from .bar import bar def foo(): value = bar() print(value) return value </code></pre> <p>causes the tests to work, but I cannot call the module with <code>python module</code>, it throws an error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;frozen runpy&gt;&quot;, line 198, in _run_module_as_main File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code File &quot;/home/susensio/Projects/sandbox/pytest/module/__main__.py&quot;, line 1, in &lt;module&gt; from foo import foo File &quot;/home/susensio/Projects/sandbox/pytest/module/foo.py&quot;, line 1, in &lt;module&gt; from .bar import bar ImportError: attempted relative import with no known parent package </code></pre> <p>As I've said, other questions didn't help to solve this issue.</p>
<python><import><pytest>
2023-03-30 17:26:41
1
870
Susensio
75,891,080
9,157,212
Python: Optimization with Exponential Objective Function
<p>I am working on assignment problems. So far, I have been using <code>Gurobi</code> via gurobipy. In my current problem, I have an objective function with an exponential term. In gurobipy this would be</p> <pre><code>m.setObjective(ratings.prod(assign) ** 0.5, sense=GRB.MAXIMIZE) </code></pre> <p>but <code>**</code> (or <code>pow()</code>) is not supported in Gurobi. What are other options (with other tools than Gurobi) or workarounds?</p> <p>Below is a minimal working example:</p> <pre><code>import pandas as pd import gurobipy as gp from gurobipy import GRB import random def solve(): m = gp.Model() assign = m.addVars(permutations, vtype=gp.GRB.BINARY, name=&quot;assign&quot;) m.addConstrs((assign.sum(i, &quot;*&quot;) == 1 for i in individuals)) capacity = { 'P1':40, 'P2':30, 'P3':30 } m.addConstrs( (assign.sum(&quot;*&quot;, p) &lt;= capacity[p] for p in groups), name=&quot;limit&quot; ) # objective function m.setObjective(ratings.prod(assign) ** 0.5, sense=GRB.MAXIMIZE) m.optimize() return m, assign def get_results(assign): # create df with placeholder results = pd.DataFrame(-1, index=individuals, columns=groups) # fill in the results for (i, j), x_ij in assign.items(): results.loc[i, j] = int(x_ij.X) return results # minimal example outcomes = [] for i in range(0,100): x = random.uniform(0, 1) outcomes.append(x) data = {'P1': outcomes, 'P2': random.sample(outcomes, len(outcomes)), 'P3': random.sample(outcomes, len(outcomes))} df = pd.DataFrame(data) individuals = list(df.index) groups = ['P1', 'P2', 'P3'] # prepare permutations, ratings = gp.multidict({ (i, j): df.loc[i, j] for i in individuals for j in groups }) # run m, assign = solve() results = get_results(assign) print(results) </code></pre>
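One observation (mine, not from the thread): since the assignment variables are binary and the ratings are nonnegative, `x ** 0.5` is a monotonically increasing function of the linear objective `x` on `[0, inf)`, so maximizing `ratings.prod(assign)` directly selects the same assignment; the square root can be applied to the optimal value afterwards. A brute-force sketch of that equivalence:

```python
import itertools

# Brute-force check that maximizing x and maximizing sqrt(x) pick the same
# assignment when x >= 0, because sqrt is monotonically increasing there.
# (Toy data; not the question's Gurobi model.)
ratings = {("A", "P1"): 0.9, ("A", "P2"): 0.4,
           ("B", "P1"): 0.2, ("B", "P2"): 0.7}
individuals, groups = ["A", "B"], ["P1", "P2"]

def total(assignment):
    # assignment maps each individual to exactly one group
    return sum(ratings[i, g] for i, g in assignment.items())

candidates = [dict(zip(individuals, choice))
              for choice in itertools.product(groups, repeat=len(individuals))]

best_linear = max(candidates, key=total)
best_sqrt = max(candidates, key=lambda a: total(a) ** 0.5)

assert best_linear == best_sqrt  # same argmax; sqrt only rescales the value
print(best_linear, total(best_linear) ** 0.5)
```

So in the original model, `m.setObjective(ratings.prod(assign), sense=GRB.MAXIMIZE)` should give the same optimal `assign`, with `obj ** 0.5` computed after `m.optimize()`.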
<python><optimization><gurobi>
2023-03-30 17:05:10
1
631
jkortner
75,891,046
18,769,241
combinations across columns not yielding all possible values
<p>I have this dataframe:</p> <pre><code>12 2 17 16 4 16 2 19 11 </code></pre> <p>I want to get, across its columns, the following output</p> <pre><code>[ [12,16,2],[2,4,19],[17,16,11],[[12,16,2],[2,4,19]],[[2,4,19],[17,16,11]],[[12,16,2],[2,4,19],[17,16,11]] ] </code></pre> <p>I have this code which yields the first 3 possibilities only:</p> <pre><code> from itertools import combinations resultTmp2 = [] for j in range(1, len(dataframe.columns) + 1): resultTmp = [] for xVal in list(combinations(dataframe.iloc[:len(dataframe) + 1,j-1], len(dataframe) )): resultTmp.append(list(xVal)) resultTmp2.append(resultTmp) print(resultTmp2) </code></pre> <p>How can I update my code so that it yields the mentioned output?</p>
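A sketch of one interpretation (mine): take each column as a list of values, then build every combination of one or more columns with `itertools.combinations`. Note this also produces the (first, third) column pair, which the expected output above omits, giving 7 results in total for 3 columns:

```python
from itertools import combinations

import pandas as pd

# Sketch (my interpretation, not from the post): each column becomes a value
# list, then every combination of 1..n of those lists is collected.
df = pd.DataFrame([[12, 2, 17], [16, 4, 16], [2, 19, 11]])

cols = [df[c].tolist() for c in df.columns]  # [[12,16,2],[2,4,19],[17,16,11]]
result = []
for size in range(1, len(cols) + 1):
    for combo in combinations(cols, size):
        # a single column stays a flat list; larger combos become nested lists
        result.append(combo[0] if size == 1 else [list(c) for c in combo])

print(result)
```

If only "consecutive" column groups are wanted (matching the question's expected output exactly), slices like `cols[i:j]` would replace `combinations`.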
<python><pandas>
2023-03-30 17:00:04
2
571
Sam
75,890,891
4,151,075
How to make a compound generic type
<p>I'd like to make a generic compound type that will set different (but related to each other) type hints for different attributes of the class.</p> <p>I got this:</p> <pre><code>from typing import Generic, Literal, NewType, TypeVar AccessTokenType = TypeVar(&quot;AccessTokenType&quot;, bound=str) RefreshTokenType = TypeVar(&quot;RefreshTokenType&quot;, bound=str) XAccessToken = NewType(&quot;XAccessToken&quot;, str) YAccessToken = NewType(&quot;YAccessToken&quot;, str) XRefreshToken = NewType(&quot;XRefreshToken&quot;, str) YRefreshToken = NewType(&quot;YRefreshToken&quot;, str) class AuthorizationData(Generic[AccessTokenType, RefreshTokenType]): access_token: AccessTokenType refresh_token: RefreshTokenType class AuthorizationDataSet: x: AuthorizationData[XAccessToken, XRefreshToken] y: AuthorizationData[YAccessToken, YRefreshToken] </code></pre> <p>I expect that tokens from the same group will always be set for one type inside <code>AuthorizationDataSet</code>.</p> <p>I was wondering if there is a way to make an inseparable compound type, e.g., one where all X-related types go together, so that <code>AuthorizationData[TokenGroup.X]</code> would make <code>access_token: XAccessToken</code> &amp; <code>refresh_token: XRefreshToken</code>.</p>
<python><python-typing>
2023-03-30 16:42:33
0
1,269
Marek
75,890,876
8,713,442
How to maintain hash value even if order of column changes while calculating hash
<p>When we change the order of columns while computing the hash, the result changes. Is there a way to maintain this value even when the column order changes? We are using CDC in PySpark with a hash value. This is just an example with 2 columns. In reality we have 50-60 columns on which we need to check whether any change has happened.</p> <pre><code>import oracledb from pyspark import SparkConf, SparkContext from awsglue.context import GlueContext from pyspark.sql import Row from functools import reduce from pyspark.sql import DataFrame from pyspark.sql.types import StructType,StructField, StringType, IntegerType from pyspark.sql.functions import array, concat_ws, sha2 class JobBase(object): spark=None arr_list=['curr_col1','curr_col2'] arr_list2=['curr_col2','curr_col1'] Oracle_Username=None Oracle_Password=None Oracle_jdbc_url=None firmographic_cdc_dataframe =None winner_org_calculations_attributes=['curr_col2','curr_col3','curr_col4','curr_col45'] def __start_spark_glue_context(self): conf = SparkConf().setAppName(&quot;python_thread&quot;) self.sc = SparkContext(conf=conf) self.glueContext = GlueContext(self.sc) self.spark = self.glueContext.spark_session def execute(self): self.__start_spark_glue_context() new_dict={} print('hello') schema = StructType([ \ StructField(&quot;curr_col1&quot;,StringType(),True), \ StructField(&quot;curr_col2&quot;,StringType(),True), \ ]) d = [{&quot;curr_col1&quot;: '75757', &quot;curr_col2&quot;: &quot;fgsjdfd&quot;}] # columnarray = array(self.arr_list) columnarray = array(self.arr_list2) # ,&quot;curr_col3&quot;: 79,&quot;curr_col4&quot;: 'pb',&quot;curr_col45&quot;: &quot;a&quot;,&quot;E_N&quot;: &quot;b&quot; df = self.spark.createDataFrame(data=d,schema=schema) df=df.withColumn(&quot;hash_value&quot;, sha2(concat_ws(&quot;||&quot;, columnarray), 256)) #df = self.spark.createDataFrame(d) df.show() def main(): job = JobBase() job.execute() if __name__ == '__main__': main() </code></pre> <pre class="lang-none prettyprint-override"><code>result of df=df.withColumn(&quot;hash_value&quot;, sha2(concat_ws(&quot;||&quot;, columnarray), 256)) when
columnarray=['curr_col1','curr_col2'] +---------+---------+--------------------+ |curr_col1|curr_col2| hash_value| +---------+---------+--------------------+ | 75757| fgsjdfd|3162cb81bf242c4d9...| +---------+---------+--------------------+ when columnarray=['curr_col2','curr_col1'] +---------+---------+--------------------+ |curr_col1|curr_col2| hash_value| +---------+---------+--------------------+ | 75757| fgsjdfd|1528bbcb98b92a2ba...| +---------+---------+--------------------+ </code></pre>
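A plain-Python sketch of one common fix (a suggestion, not from the post): hash the values in a canonical column order, for example sorted column names, so the digest no longer depends on the order the columns are passed in:

```python
import hashlib

# Plain-Python analogue of the PySpark hashing: fix a canonical column order
# before concatenating, so shuffling the input column list cannot change the
# digest.
def row_hash(row: dict, columns: list) -> str:
    canonical = sorted(columns)  # fixed, order-independent ordering
    joined = "||".join(str(row[c]) for c in canonical)
    return hashlib.sha256(joined.encode()).hexdigest()

row = {"curr_col1": "75757", "curr_col2": "fgsjdfd"}
h1 = row_hash(row, ["curr_col1", "curr_col2"])
h2 = row_hash(row, ["curr_col2", "curr_col1"])
assert h1 == h2  # same digest regardless of how the column list is ordered
print(h1)
```

In PySpark the same idea is to sort the list before building the column expression, e.g. `sha2(concat_ws("||", *sorted(self.arr_list)), 256)`, so `arr_list` and `arr_list2` yield identical `hash_value`s.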
<python><python-3.x><pyspark><hash>
2023-03-30 16:40:41
0
464
pbh
75,890,758
12,657,753
Access denied when uninstalling matplotlib with pip
<p>I want to uninstall matplotlib library with pip</p> <pre><code>$pip uninstall matplotlib </code></pre> <p>But I receive the following error:</p> <pre><code>ERROR: Exception: Traceback (most recent call last): File &quot;C:\Users\user\anaconda3\envs\work\lib\shutil.py&quot;, line 791, in move os.rename(src, real_dst) PermissionError: [WinError 5] Access is denied: 'c:\\users\\user\\appdata\\roaming\\python\\python38\\site-packages\\matplotlib\\' -&gt; 'C:\\Users\\user\\AppData\\Roaming\\Python\\Python38\\site-packages\\~atplotlib' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\user\anaconda3\envs\work\lib\site-packages\pip\_internal\cli\base_command.py&quot;, line 167, in exc_logging_wrapper status = run_func(*args) File &quot;C:\Users\user\anaconda3\envs\work\lib\site-packages\pip\_internal\commands\uninstall.py&quot;, line 98, in run uninstall_pathset = req.uninstall( File &quot;C:\Users\user\anaconda3\envs\work\lib\site-packages\pip\_internal\req\req_install.py&quot;, line 642, in uninstall uninstalled_pathset.remove(auto_confirm, verbose) File &quot;C:\Users\user\anaconda3\envs\work\lib\site-packages\pip\_internal\req\req_uninstall.py&quot;, line 373, in remove moved.stash(path) File &quot;C:\Users\user\anaconda3\envs\work\lib\site-packages\pip\_internal\req\req_uninstall.py&quot;, line 271, in stash renames(path, new_path) File &quot;C:\Users\user\anaconda3\envs\work\lib\site-packages\pip\_internal\utils\misc.py&quot;, line 311, in renames shutil.move(old, new) File &quot;C:\Users\user\anaconda3\envs\work\lib\shutil.py&quot;, line 809, in move rmtree(src) File &quot;C:\Users\user\anaconda3\envs\work\lib\shutil.py&quot;, line 740, in rmtree return _rmtree_unsafe(path, onerror) File &quot;C:\Users\user\anaconda3\envs\work\lib\shutil.py&quot;, line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File &quot;C:\Users\user\anaconda3\envs\work\lib\shutil.py&quot;, line 618, in 
_rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File &quot;C:\Users\user\anaconda3\envs\work\lib\shutil.py&quot;, line 616, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 5] Access is denied: 'c:\\users\\user\\appdata\\roaming\\python\\python38\\site-packages\\matplotlib\\backends\\_backend_agg.cp38-win_amd64.pyd' </code></pre> <p>I have no idea how to solve this. I want to install another version of matplotlib, and to do that I must first uninstall the current one.</p>
<python><matplotlib><pip>
2023-03-30 16:29:05
1
663
TheGainadl
75,890,681
9,773,920
How to list all csv files from an s3 bucket using Pandas?
<p>I have the below code that pulls in all the folder names as well as file names from a specific S3 bucket. How can I modify this so it reads only files that end with &quot;.csv&quot; and not all the folder names?</p> <pre><code>def lambda_handler(event, context): s3_client = boto3.client(&quot;s3&quot;) bucket_name = &quot;dump&quot; response = s3_client.list_objects_v2(Bucket=bucket_name) files = response.get(&quot;Contents&quot;) for file in files: print(f&quot;file_name: {file['Key']}&quot;) </code></pre> <p>Current output:</p> <pre><code>file_name: 2023/ file_name: 2023/Feb/ file_name: 2023/Feb/file1.csv file_name: 2023/Jan/ file_name: 2023/Jan/file2.csv file_name: 2023/Mar/ file_name: 2023/Mar/file3.csv </code></pre> <p>However, I want to list only the csv files. So I want the output to be:</p> <pre><code>file_name: 2023/Feb/file1.csv file_name: 2023/Jan/file2.csv file_name: 2023/Mar/file3.csv </code></pre> <p>How can I do that? I tried &quot;endswith&quot; but it's not working. Any help?</p>
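`endswith` does work on the key strings; a sketch of the filter against keys shaped like the sample output above (the boto3 call itself is unchanged and not reproduced here):

```python
# Sketch: filtering the listed keys.  The keys below mirror the question's
# sample output; in the lambda they come from response["Contents"].
keys = [
    "2023/",
    "2023/Feb/",
    "2023/Feb/file1.csv",
    "2023/Jan/",
    "2023/Jan/file2.csv",
    "2023/Mar/",
    "2023/Mar/file3.csv",
]

# "Folders" in S3 are just zero-byte keys ending in "/", so they drop out
# automatically once only ".csv" keys are kept.
csv_keys = [k for k in keys if k.endswith(".csv")]
for key in csv_keys:
    print(f"file_name: {key}")
```

Inside `lambda_handler`, the same filter is `if file["Key"].endswith(".csv"): print(...)` in the existing loop.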
<python><pandas><amazon-web-services><amazon-s3>
2023-03-30 16:19:23
1
1,619
Rick
75,890,628
4,436,572
dask_cudf/dask read_parquet failed with NotImplementedError: large_string
<p>I am a new user of <code>dask</code>/<code>dask_cudf</code>. I have parquet files of various sizes (11GB, 2.5GB, 1.1GB), all of which failed with <code>NotImplementedError: large_string</code>. My <code>dask.dataframe</code> backend is <code>cudf</code>. When the backend is <code>pandas</code>, <code>read_parquet</code> works fine.</p> <p>Here's an excerpt of what my data looks like in <code>csv</code> format:</p> <pre><code>Symbol,Date,Open,High,Low,Close,Volume AADR,17-Oct-2017 09:00,57.47,58.3844,57.3645,58.3844,2094 AADR,17-Oct-2017 10:00,57.27,57.2856,57.25,57.27,627 AADR,17-Oct-2017 11:00,56.99,56.99,56.99,56.99,100 AADR,17-Oct-2017 12:00,56.98,57.05,56.98,57.05,200 AADR,17-Oct-2017 13:00,57.14,57.16,57.14,57.16,700 AADR,17-Oct-2017 14:00,57.13,57.13,57.13,57.13,100 AADR,17-Oct-2017 15:00,57.07,57.07,57.07,57.07,200 AAMC,17-Oct-2017 09:00,87,87,87,87,100 AAU,17-Oct-2017 09:00,1.1,1.13,1.0832,1.121,67790 AAU,17-Oct-2017 10:00,1.12,1.12,1.12,1.12,100 AAU,17-Oct-2017 11:00,1.125,1.125,1.125,1.125,200 AAU,17-Oct-2017 12:00,1.1332,1.15,1.1332,1.15,27439 AAU,17-Oct-2017 13:00,1.15,1.15,1.13,1.13,8200 AAU,17-Oct-2017 14:00,1.1467,1.1467,1.14,1.1467,1750 AAU,17-Oct-2017 15:00,1.1401,1.1493,1.1401,1.1493,4100 AAU,17-Oct-2017 16:00,1.13,1.13,1.13,1.13,100 ABE,17-Oct-2017 09:00,14.64,14.64,14.64,14.64,200 ABE,17-Oct-2017 10:00,14.67,14.67,14.66,14.66,1200 ABE,17-Oct-2017 11:00,14.65,14.65,14.65,14.65,600 ABE,17-Oct-2017 15:00,14.65,14.65,14.65,14.65,836 </code></pre> <p>What I did was really simple:</p> <pre><code>import dask.dataframe as dd import cudf import dask_cudf # Failed with large_string error dask_cudf.read_parquet('path/to/my.parquet') # Failed with large_string error dd.read_parquet('path/to/my.parquet') </code></pre> <p>The only large string I could think of is the timestamp string.</p> <p>Is there a way around this in <code>cudf</code> as it is not implemented yet? The format is <code>2023-03-12 09:00:00+00:00</code>.</p>
<python><dask-dataframe><rapids><cudf>
2023-03-30 16:12:41
0
1,288
stucash
75,890,606
13,325,186
Resample pandas df with multiple groupbys so each condition has the same number of total days of data
<p>I have been going round in circles with this and haven't been able to figure it out.</p> <p>Suppose I have the following dataframe:</p> <pre><code>df = pd.DataFrame({ &quot;person_id&quot;: [&quot;1&quot;, &quot;1&quot;, &quot;1&quot;, &quot;1&quot;, &quot;2&quot;, &quot;2&quot;, &quot;2&quot;, &quot;2&quot;, &quot;3&quot;, &quot;3&quot;, &quot;3&quot;, &quot;3&quot;], &quot;event&quot;: [&quot;Alert1&quot;, &quot;Alert1&quot;, &quot;Alert1&quot;, &quot;Alert2&quot;, &quot;Alert1&quot;, &quot;Alert1&quot;, &quot;Alert1&quot;, &quot;Alert2&quot;, &quot;Alert2&quot;, &quot;Alert2&quot;, &quot;Alert2&quot;, &quot;Alert2&quot;], &quot;mode&quot;: [&quot;Manual&quot;, &quot;Manual&quot;, &quot;Auto&quot;, &quot;Manual&quot;, &quot;Auto&quot;, &quot;Auto&quot;, &quot;Auto&quot;, &quot;Manual&quot;, &quot;Manual&quot;, &quot;Manual&quot;, &quot;Auto&quot;, &quot;Manual&quot;], &quot;date&quot;: [&quot;2020-01-01&quot;, &quot;2020-01-01&quot;, &quot;2020-01-03&quot;, &quot;2020-01-03&quot;, &quot;2020-01-03&quot;, &quot;2020-01-03&quot;, &quot;2020-01-04&quot;, &quot;2020-01-04&quot;, &quot;2020-01-04&quot;, &quot;2020-01-04&quot;, &quot;2020-01-05&quot;, &quot;2020-01-05&quot;] } ) df </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>person_id</th> <th>event</th> <th>mode</th> <th>date</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-01</td> </tr> <tr> <td>1</td> <td>1</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-01</td> </tr> <tr> <td>2</td> <td>1</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-03</td> </tr> <tr> <td>3</td> <td>1</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-03</td> </tr> <tr> <td>4</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-03</td> </tr> <tr> <td>5</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-03</td> </tr> <tr> <td>6</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-04</td> </tr> <tr> <td>7</td> <td>2</td> 
<td>Alert2</td> <td>Manual</td> <td>2020-01-04</td> </tr> <tr> <td>8</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-04</td> </tr> <tr> <td>9</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-04</td> </tr> <tr> <td>10</td> <td>3</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-05</td> </tr> <tr> <td>11</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-05</td> </tr> </tbody> </table> </div> <p>What I want is the count of <em>each possible combination</em> per possible day (the minimum date would be the first date appearing in the dataset, in this case 2020-01-01 and the maximum date would be the last date appearing in the dataset, in this case 2020-01-05). For example, in the case of the df above, the output would look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>person_id</th> <th>event</th> <th>mode</th> <th>date</th> <th>count</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-01</td> <td>2</td> </tr> <tr> <td>1</td> <td>1</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>2</td> <td>1</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>3</td> <td>1</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>4</td> <td>1</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>5</td> <td>1</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>6</td> <td>1</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>7</td> <td>1</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>8</td> <td>1</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>9</td> <td>1</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-03</td> <td>1</td> </tr> <tr> <td>10</td> <td>1</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-03</td> <td>1</td> 
</tr> <tr> <td>11</td> <td>1</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>12</td> <td>1</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>13</td> <td>1</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>14</td> <td>1</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>15</td> <td>1</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>16</td> <td>1</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>17</td> <td>1</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>18</td> <td>1</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>19</td> <td>1</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>20</td> <td>2</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>21</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>22</td> <td>2</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>23</td> <td>2</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>24</td> <td>2</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>25</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>26</td> <td>2</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>27</td> <td>2</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>28</td> <td>2</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>29</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-03</td> <td>2</td> </tr> <tr> <td>30</td> <td>2</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>31</td> <td>2</td> <td>Alert2</td> <td>Auto</td> 
<td>2020-01-03</td> <td>0</td> </tr> <tr> <td>32</td> <td>2</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>33</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-04</td> <td>1</td> </tr> <tr> <td>34</td> <td>2</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-04</td> <td>1</td> </tr> <tr> <td>35</td> <td>2</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>36</td> <td>2</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>37</td> <td>2</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>38</td> <td>2</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>39</td> <td>2</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>40</td> <td>3</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>41</td> <td>3</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>42</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>43</td> <td>3</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-01</td> <td>0</td> </tr> <tr> <td>44</td> <td>3</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>45</td> <td>3</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>46</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>47</td> <td>3</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-02</td> <td>0</td> </tr> <tr> <td>48</td> <td>3</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>49</td> <td>3</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>50</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>51</td> <td>3</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-03</td> <td>0</td> </tr> <tr> <td>52</td> <td>3</td> 
<td>Alert1</td> <td>Manual</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>53</td> <td>3</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>54</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-04</td> <td>2</td> </tr> <tr> <td>55</td> <td>3</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-04</td> <td>0</td> </tr> <tr> <td>56</td> <td>3</td> <td>Alert1</td> <td>Manual</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>57</td> <td>3</td> <td>Alert1</td> <td>Auto</td> <td>2020-01-05</td> <td>0</td> </tr> <tr> <td>58</td> <td>3</td> <td>Alert2</td> <td>Manual</td> <td>2020-01-05</td> <td>1</td> </tr> <tr> <td>59</td> <td>3</td> <td>Alert2</td> <td>Auto</td> <td>2020-01-05</td> <td>1</td> </tr> </tbody> </table> </div> <p>Importantly, each combination should have the exact same number of unique datetimes at the end, so if I run the following line of code:</p> <p><code>df_summarized.groupby(['person_id', 'event', 'mode'])['date'].nunique().reset_index()</code></p> <p>The result should clearly show that each combination has 5 unique days of data.</p> <p>How could I achieve this?</p> <p>Thanks in advance</p>
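One way to do this (a sketch of my approach, not the poster's code): count the observed rows, then `reindex` against the full cartesian product of person ids, events, modes, and the complete date range, filling absent combinations with 0:

```python
import pandas as pd

# Count each (person, event, mode, date) and reindex against the full
# cartesian product, so absent combinations appear with count 0 and every
# combination covers the same date range.
df = pd.DataFrame({
    "person_id": ["1", "1", "1", "1", "2", "2", "2", "2", "3", "3", "3", "3"],
    "event": ["Alert1", "Alert1", "Alert1", "Alert2", "Alert1", "Alert1",
              "Alert1", "Alert2", "Alert2", "Alert2", "Alert2", "Alert2"],
    "mode": ["Manual", "Manual", "Auto", "Manual", "Auto", "Auto", "Auto",
             "Manual", "Manual", "Manual", "Auto", "Manual"],
    "date": pd.to_datetime(
        ["2020-01-01", "2020-01-01", "2020-01-03", "2020-01-03", "2020-01-03",
         "2020-01-03", "2020-01-04", "2020-01-04", "2020-01-04", "2020-01-04",
         "2020-01-05", "2020-01-05"]),
})

full_index = pd.MultiIndex.from_product(
    [df["person_id"].unique(), ["Alert1", "Alert2"], ["Manual", "Auto"],
     pd.date_range(df["date"].min(), df["date"].max(), freq="D")],
    names=["person_id", "event", "mode", "date"],
)

counts = (df.groupby(["person_id", "event", "mode", "date"])
            .size()
            .reindex(full_index, fill_value=0)
            .reset_index(name="count"))

print(len(counts))  # 3 persons * 2 events * 2 modes * 5 days = 60 rows
```

The check from the question, `counts.groupby(['person_id', 'event', 'mode'])['date'].nunique()`, then reports 5 unique days for every combination.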
<python><pandas><datetime><resample>
2023-03-30 16:10:00
1
473
Dr Wampa
75,890,601
19,051,091
Error at custom image classification with PyTorch weight of size?
<p>Edit Question:</p> <p>I'm using my own black-and-white TIFF image dataset, and I created <code>model_0</code> as shown in the videos. I will put my code below; this is the error I got:</p> <blockquote> <p>Given groups=1, weight of size [10, 1, 3, 3], expected input[32, 3, 128, 128] to have 1 channels, but got 3 channels instead</p> </blockquote> <p>Here is the full code:</p> <pre><code># Create simple transform simple_transform = transforms.Compose([ transforms.Resize(size=(128, 128)), transforms.ToTensor(), ]) # Load data and transform data train_data_simple = datasets.ImageFolder(root=train_dir, transform=simple_transform) test_data_simple = datasets.ImageFolder(root=test_dir, transform=simple_transform) # Turn dataset into DataLoader BATCH_SIZE = 32 NUM_WORKERS = 2 train_dataloader_simple = DataLoader(dataset=train_data_simple, batch_size=BATCH_SIZE, num_workers=NUM_WORKERS, shuffle=True) test_dataloader_simple = DataLoader(dataset=test_data_simple, batch_size=BATCH_SIZE, num_workers=NUM_WORKERS, shuffle=False) class TingVGG(nn.Module): def __init__(self, input_shape: int, hidden_units: int, output_shape: int) -&gt; None: super().__init__() self.conv_block1 = nn.Sequential(nn.Conv2d(in_channels=input_shape,out_channels=hidden_units,kernel_size=3,stride=1,padding=1), nn.ReLU(), nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2) ) self.conv_block2 = nn.Sequential(nn.Conv2d(in_channels=hidden_units,out_channels=hidden_units,kernel_size=3,stride=1,padding=1), nn.ReLU(), nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2) ) self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(in_features=hidden_units ,out_features=output_shape)) def forward(self, x: torch.Tensor): x = self.conv_block1(x) #print(x.shape) x = self.conv_block2(x) #print(x.shape) x = self.classifier(x) #print(x.shape) return x
torch.manual_seed(42) model_0 = TingVGG(input_shape=1, # Number of Color channel in the input image (c, h, w) hidden_units=10, output_shape=len(train_data.classes)).to(device) model_0 image_batch, label_batch = next(iter(train_dataloader_simple)) image_batch.shape, label_batch.shape </code></pre> <p>The output is:</p> <pre><code>(torch.Size([32, 3, 128, 128]), torch.Size([32])) </code></pre> <p>I think it should be</p> <pre><code>(torch.Size([32, 1, 128, 128]), torch.Size([32])) </code></pre> <p>When I run this code, I get the error:</p> <pre><code>model_0(image_batch.to(device)) </code></pre> <p>I don't know where the fault is in my code. I have just begun to learn PyTorch; please help me, and excuse me if my question is not that good.</p>
<python><pytorch><pytorch-dataloader>
2023-03-30 16:08:39
1
307
Emad Younan
75,890,548
5,056,011
Getting an error when I run Python and scripts in LibreOffice\program directory from C# using IronPython
<p>I want to run the python version and scripts that are in the libreoffice\program directory on Windows from C# in Visual Studio 2022 using the latest version of IronPython. The .py script runs fine at the command line. When I use the following code it wants to use the default version, 3.9, installed by Visual Studio. The C# code looks like:</p> <pre><code> string pythonPath = @&quot;D:\Program Files\LibreOffice\program\python.exe&quot;; Environment.SetEnvironmentVariable(&quot;PythonHome&quot;, pythonPath); // Initialize the Python engine with the correct Python environment var engine = Python.CreateEngine(new Dictionary&lt;string, object&gt;() { { &quot;PythonHome&quot;, pythonPath } }); // Create a new scope for executing the Python script var scope = engine.CreateScope(); // Set any necessary variables in the scope scope.SetVariable(&quot;param1&quot;, fileName); scope.SetVariable(&quot;param2&quot;, outputPath); engine.ExecuteFile(scriptFile, scope); </code></pre> <p>After a long pause, it errors out with can't find module &quot;Uno&quot; which is in the libreoffice\program path. In the course of trying so many things, I also see Python39 popping up from the Visual Studio install and LibreOffice is 3.8.</p> <p>So, how can I make it run my script from Python and the scripts in libreoffice\program directory?</p>
<python><c#><ironpython><libreoffice>
2023-03-30 16:02:59
1
1,474
Velocedge
75,890,540
3,595,907
OpenCV Python PIP install broken?
<p>Win 10 x64 Python 3.6</p> <p>I'm trying to install the latest version of <a href="https://pypi.org/project/opencv-contrib-python/" rel="nofollow noreferrer"><code>opencv-contrib-python</code></a> from PyPi into a <code>conda</code> environment. I did this very thing yesterday with no issues but today it's broken.</p> <p>From the conda commandline (as Admin) in my activated env I have tried</p> <pre><code>pip install opencv-contrib-python pip install opencv-contrib-python --user pip install opencv-python pip install opencv-python --user </code></pre> <p>All throw the same pages &amp; pages full of errors. Some highlights:</p> <pre><code>ERROR: Command errored out with exit status 1: command: 'C:\ProgramData\Anaconda3\envs\genicam\python.exe' 'C:\ProgramData\Anaconda3\envs\genicam\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\NICBWT~1\AppData\Local\Temp\tmp98_vz8u5' cwd: C:\Users\NICBWT~1\AppData\Local\Temp\pip-install-71z9rl7t\opencv-contrib-python_387a51900d48422f8ed4ed222402118d Complete output (1205 lines): -------------------------------------------------------------------------------- -- Trying 'Ninja (Visual Studio 17 2022 x64 v143)' generator -------------------------------- --------------------------- ---------------------- ----------------- ------------ ------- -- Not searching for unused variables given on the command line. CMake Error at CMakeLists.txt:2 (PROJECT): Generator Ninja does not support platform specification, but platform x64 was specified. Then LOADS of stuff finally ending with... ERROR: Could not build wheels for opencv-contrib-python, which is required to install pyproject.toml-based projects </code></pre> <p>There are literally pages of this. Most of it looks like CMake-generated output, searching &amp; finding stuff, but I've never seen anything like that with any other OpenCV install before.</p> <p>I have updated <code>pip</code> and <code>setuptools</code>; still the same errors.
I have attempted all fixes I could find on here, still the same errors. I have tried installing directly from a cloned git repo, still the same errors.</p> <p>Anyone seen this kind of behaviour before?</p>
<python><opencv><pip>
2023-03-30 16:02:14
1
3,687
DrBwts
75,890,415
11,261,546
How to set just one default argument pybind11
<p>I have a function:</p> <pre><code>void my_functions(int a, int b = 42); </code></pre> <p>And I want to bind it using only the default argument for <code>b</code>:</p> <pre><code> m.def(&quot;my_functions&quot;, &amp;my_functions, pb::arg(&quot;b&quot;) = 42); // just default for b </code></pre> <p>This doesn't work; I get:</p> <pre><code>/cache/venv/include/pybind11/pybind11.h:219:40: error: static assertion failed: The number of argument annotations does not match the number of function arguments 219 | expected_num_args&lt;Extra...&gt;( | ~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 220 | sizeof...(Args), cast_in::args_pos &gt;= 0, cast_in::has_kwargs), </code></pre> <p>What's the right way of doing it?</p>
<python><c++><pybind11>
2023-03-30 15:49:58
3
1,551
Ivan
75,890,407
11,901,834
Python regex to remove string which may contain additional character
<p>I've got a string in python that sometimes starts with either <code>{txt -</code> or <code>{txt</code>.</p> <p>These do not always appear at the start of the string, but if they do, I want to remove them.</p> <p>I know I can do it like this:</p> <pre><code>string = string.strip('{txt -').strip('{txt') </code></pre> <p>But I'm thinking there is surely a better solution (maybe using regex). Is it possible to add a potential extra character to a regex (in this case <code>-</code>)?</p>
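Worth noting: `str.strip('{txt -')` treats its argument as a set of characters, not a literal prefix, so it can also eat unrelated leading or trailing `t`, `x`, `-`, or space characters. A regex anchored at the start removes only the real marker; the `(?: -)?` group makes the `" -"` part optional:

```python
import re

# Anchored pattern: "{txt" at the very start, an optional " -", and any
# trailing whitespace after the marker.
pattern = re.compile(r"^\{txt(?: -)?\s*")

for s in ["{txt - hello", "{txt hello", "hello {txt"]:
    print(pattern.sub("", s))
```

Strings that do not start with the marker pass through unchanged, since `^` prevents a match anywhere else.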
<python><string><replace>
2023-03-30 15:49:11
2
1,579
nimgwfc
75,890,327
681,548
Create an alembic upgrade process from configuration taken by another package
<p>I'm extending a 3rd-party application which internally uses SQLAlchemy and Alembic. This is good, because I need to extend the database model of this application but I don't want to fork the project.</p> <p>So, I'm trying to use alembic in my extension package.</p> <ol> <li><p>alembic recognizes that the database is already using alembic, so I cannot just start a new alembic configuration in my package, as it tries to find the hash of the last alembic upgrade.</p> </li> <li><p>I tried to assert that my step depends on the last alembic upgrade:</p> </li> </ol> <pre class="lang-py prettyprint-override"><code>revision = &quot;my-autogenerated-revision&quot; down_revision = &quot;last-revision-of-3rd-party-package&quot; branch_labels = None depends_on = None </code></pre> <p>But alembic then raises an error, as it cannot find the <code>last-revision-of-3rd-party-package</code> definition.</p> <ol start="3"> <li>I need a way to configure alembic to look for version upgrades in additional folders.</li> </ol> <p>Looking at the .ini file, I found this section:</p> <pre><code># version location specification; This defaults # to alembic/versions. When using multiple version # directories, initial revisions must be specified with --version-path. # The path separator used here should be the separator specified by &quot;version_path_separator&quot; below. # version_locations = %(here)s/bar:%(here)s/bat:alembic/versions </code></pre> <p>This works; I can use something like this:</p> <pre><code>version_locations = %(here)s/alembic/versions:/path/to/the/other/package/alembic/versions </code></pre> <p>But this is a hardcoded path.</p> <p>Is it possible to identify version upgrades from a package generically, in an OS-independent way? Something like:</p> <pre><code>version_locations = %(here)s/alembic/versions:%(package:xxx.yyy)/alembic/versions </code></pre>
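One possible approach (an assumption; a `%(package:...)` interpolation like the one sketched above is not alembic syntax as far as I know): resolve the installed package's location at runtime with `importlib` and feed the resulting path to alembic programmatically, for example in `env.py` via `config.set_main_option("version_locations", ...)`. A sketch of the lookup, demonstrated on a stdlib package:

```python
from importlib import util
from pathlib import Path

# Resolve a package's on-disk location in an OS-independent way, then append
# the conventional alembic versions subdirectory.  The "alembic/versions"
# layout inside the 3rd-party package is an assumption here.
def package_versions_dir(package: str, subdir: str = "alembic/versions") -> str:
    spec = util.find_spec(package)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(package)
    return str(Path(spec.origin).parent / subdir)

# Using a stdlib package purely to demonstrate the lookup:
print(package_versions_dir("json"))
```

The returned string can then be joined with the local versions directory using the configured `version_path_separator` before handing it to alembic.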
<python><sqlalchemy><alembic>
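A note on the last point: the hardcoded path can be avoided by resolving the installed package's location at runtime (for example from `env.py`) and feeding it into `version_locations` programmatically. A minimal standard-library sketch — the `set_main_option` call is left commented out, and the third-party package name is illustrative:

```python
import os
import importlib.util

def package_versions_dir(package_name: str, subpath: str = "alembic/versions") -> str:
    """Resolve the on-disk location of an installed package and return the
    path to its Alembic versions directory, OS-independently."""
    spec = importlib.util.find_spec(package_name)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(f"cannot locate package {package_name!r}")
    package_root = os.path.dirname(spec.origin)
    return os.path.join(package_root, subpath)

# e.g. in env.py, before running migrations (names are illustrative):
# config.set_main_option(
#     "version_locations",
#     os.pathsep.join(["alembic/versions", package_versions_dir("third_party_pkg")]),
# )
```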
2023-03-30 15:41:13
0
7,839
keul
75,890,304
16,389,095
Kivy: How to switch between interfaces defined in different classes
<p>I'm developing an app in Kivy/KivyMD - Python. I defined three different UI in three different classes. Each interface contains a button to switch between them. When the app starts, the first interface is displayed. Here is the code:</p> <pre><code>from kivy.lang import Builder from kivymd.app import MDApp from kivymd.uix.relativelayout import MDRelativeLayout Builder.load_string( &quot;&quot;&quot; &lt;View3&gt;: MDRaisedButton: text: 'GO TO VIEW 1' pos_hint: {'center_x': 0.7, 'center_y': 0.7} #on_release: &lt;View2&gt;: MDRaisedButton: text: 'GO TO VIEW 3' pos_hint: {'center_x': 0.5, 'center_y': 0.5} #on_release: &lt;View1&gt;: MDRaisedButton: text: 'GO TO VIEW 2' pos_hint: {'center_x': 0.3, 'center_y': 0.3} #on_release: &quot;&quot;&quot; ) class View3(MDRelativeLayout): def __init__(self, **kwargs): super().__init__(**kwargs) class View2(MDRelativeLayout): def __init__(self, **kwargs): super().__init__(**kwargs) class View1(MDRelativeLayout): def __init__(self, **kwargs): super().__init__(**kwargs) class MainApp(MDApp): def __init__(self, **kwargs): super().__init__(**kwargs) self.view = View1() def build(self): return View1() if __name__ == '__main__': MainApp().run() </code></pre> <p>How can I switch between them?</p>
<python><kivy><kivy-language><kivymd>
2023-03-30 15:38:42
1
421
eljamba
75,890,022
325,809
pylsp can't find installed editable packages
<p>The problem is fairly straightforward: <code>pylsp</code> can't deal with editable packages. To create an environment that reproduces my problem:</p> <pre><code>$ mkdir /tmp/pyslp_test $ cd /tmp/pylsp_test $ echo &quot;import jaxtyping&quot; &gt; script.py $ mkdir editable_packages $ git clone https://github.com/google/jaxtyping editable_packages/jaxtyping $ pip3 install -e editable_packages/jaxtyping/ $ python3 -c 'import jaxtyping; print(jaxtyping.__path__)' ['/private/tmp/pylsp_test/editable_packages/jaxtyping/jaxtyping'] </code></pre> <p>I open <code>script.py</code> from my editor and try to jump to the definition with my cursor on <code>jaxtyping</code>, but it claims it can't find the definition even though python is demonstrably aware of the package.</p> <p>the relevant parts of my editor log:</p> <pre><code>[Trace - 04:05:16 PM] Sending request 'initialize - (2148)'. Params: { &quot;processId&quot;: null, &quot;rootPath&quot;: &quot;/tmp&quot;, &quot;clientInfo&quot;: { &quot;name&quot;: &quot;emacs&quot;, &quot;version&quot;: &quot;GNU Emacs 28.2 (build 1, aarch64-apple-darwin21.1.0, NS appkit-2113.00 Version 12.0.1 (Build 21A559))\n of 2022-09-12&quot; }, &quot;rootUri&quot;: &quot;file:///tmp&quot;, &quot;capabilities&quot;: { &quot;workspace&quot;: { &quot;workspaceEdit&quot;: { &quot;documentChanges&quot;: true, &quot;resourceOperations&quot;: [ &quot;create&quot;, &quot;rename&quot;, &quot;delete&quot; ] }, &quot;applyEdit&quot;: true, &quot;symbol&quot;: { &quot;symbolKind&quot;: { &quot;valueSet&quot;: [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26 ] } }, &quot;executeCommand&quot;: { &quot;dynamicRegistration&quot;: false }, &quot;didChangeWatchedFiles&quot;: { &quot;dynamicRegistration&quot;: true }, &quot;workspaceFolders&quot;: true, &quot;configuration&quot;: true, &quot;fileOperations&quot;: { &quot;didCreate&quot;: false, &quot;willCreate&quot;: false, &quot;didRename&quot;: false, 
&quot;willRename&quot;: false, &quot;didDelete&quot;: false, &quot;willDelete&quot;: false } }, &quot;textDocument&quot;: { &quot;declaration&quot;: { &quot;linkSupport&quot;: true }, &quot;definition&quot;: { &quot;linkSupport&quot;: true }, &quot;implementation&quot;: { &quot;linkSupport&quot;: true }, &quot;typeDefinition&quot;: { &quot;linkSupport&quot;: true }, &quot;synchronization&quot;: { &quot;willSave&quot;: true, &quot;didSave&quot;: true, &quot;willSaveWaitUntil&quot;: true }, &quot;documentSymbol&quot;: { &quot;symbolKind&quot;: { &quot;valueSet&quot;: [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26 ] }, &quot;hierarchicalDocumentSymbolSupport&quot;: true }, &quot;formatting&quot;: { &quot;dynamicRegistration&quot;: true }, &quot;rangeFormatting&quot;: { &quot;dynamicRegistration&quot;: true }, &quot;rename&quot;: { &quot;dynamicRegistration&quot;: true, &quot;prepareSupport&quot;: true }, &quot;codeAction&quot;: { &quot;dynamicRegistration&quot;: true, &quot;isPreferredSupport&quot;: true, &quot;codeActionLiteralSupport&quot;: { &quot;codeActionKind&quot;: { &quot;valueSet&quot;: [ &quot;&quot;, &quot;quickfix&quot;, &quot;refactor&quot;, &quot;refactor.extract&quot;, &quot;refactor.inline&quot;, &quot;refactor.rewrite&quot;, &quot;source&quot;, &quot;source.organizeImports&quot; ] } }, &quot;resolveSupport&quot;: { &quot;properties&quot;: [ &quot;edit&quot;, &quot;command&quot; ] }, &quot;dataSupport&quot;: true }, &quot;completion&quot;: { &quot;completionItem&quot;: { &quot;snippetSupport&quot;: false, &quot;documentationFormat&quot;: [ &quot;markdown&quot;, &quot;plaintext&quot; ], &quot;resolveAdditionalTextEditsSupport&quot;: true, &quot;insertReplaceSupport&quot;: true, &quot;deprecatedSupport&quot;: true, &quot;resolveSupport&quot;: { &quot;properties&quot;: [ &quot;documentation&quot;, &quot;details&quot;, &quot;additionalTextEdits&quot;, &quot;command&quot; ] }, 
&quot;insertTextModeSupport&quot;: { &quot;valueSet&quot;: [ 1, 2 ] } }, &quot;contextSupport&quot;: true }, &quot;signatureHelp&quot;: { &quot;signatureInformation&quot;: { &quot;parameterInformation&quot;: { &quot;labelOffsetSupport&quot;: true } } }, &quot;documentLink&quot;: { &quot;dynamicRegistration&quot;: true, &quot;tooltipSupport&quot;: true }, &quot;hover&quot;: { &quot;contentFormat&quot;: [ &quot;markdown&quot;, &quot;plaintext&quot; ] }, &quot;foldingRange&quot;: null, &quot;callHierarchy&quot;: { &quot;dynamicRegistration&quot;: false }, &quot;publishDiagnostics&quot;: { &quot;relatedInformation&quot;: true, &quot;tagSupport&quot;: { &quot;valueSet&quot;: [ 1, 2 ] }, &quot;versionSupport&quot;: true }, &quot;linkedEditingRange&quot;: { &quot;dynamicRegistration&quot;: true } }, &quot;window&quot;: { &quot;workDoneProgress&quot;: true, &quot;showMessage&quot;: null, &quot;showDocument&quot;: { &quot;support&quot;: true } } }, &quot;initializationOptions&quot;: null, &quot;workDoneToken&quot;: &quot;1&quot; } [Trace - 04:05:16 PM] Received response 'initialize - (2148)' in 747ms. 
Result: { &quot;capabilities&quot;: { &quot;codeActionProvider&quot;: true, &quot;codeLensProvider&quot;: { &quot;resolveProvider&quot;: null }, &quot;completionProvider&quot;: { &quot;resolveProvider&quot;: true, &quot;triggerCharacters&quot;: [ &quot;.&quot; ] }, &quot;documentFormattingProvider&quot;: true, &quot;documentHighlightProvider&quot;: true, &quot;documentRangeFormattingProvider&quot;: true, &quot;documentSymbolProvider&quot;: true, &quot;definitionProvider&quot;: true, &quot;executeCommandProvider&quot;: { &quot;commands&quot;: [] }, &quot;hoverProvider&quot;: true, &quot;referencesProvider&quot;: true, &quot;renameProvider&quot;: true, &quot;foldingRangeProvider&quot;: true, &quot;signatureHelpProvider&quot;: { &quot;triggerCharacters&quot;: [ &quot;(&quot;, &quot;,&quot;, &quot;=&quot; ] }, &quot;textDocumentSync&quot;: { &quot;change&quot;: 2, &quot;save&quot;: { &quot;includeText&quot;: true }, &quot;openClose&quot;: true }, &quot;workspace&quot;: { &quot;workspaceFolders&quot;: { &quot;supported&quot;: true, &quot;changeNotifications&quot;: true } }, &quot;experimental&quot;: {} }, &quot;serverInfo&quot;: { &quot;name&quot;: &quot;pylsp&quot;, &quot;version&quot;: &quot;1.4.1&quot; } } [Trace - 04:05:16 PM] Sending notification 'initialized'. Params: {} [Trace - 04:05:16 PM] Sending notification 'workspace/didChangeConfiguration'. 
Params: { &quot;settings&quot;: { &quot;pylsp&quot;: { &quot;plugins&quot;: { &quot;rope_rename&quot;: { &quot;enabled&quot;: false }, &quot;autopep8&quot;: { &quot;enabled&quot;: false }, &quot;yapf&quot;: { &quot;enabled&quot;: false }, &quot;rope_completion&quot;: { &quot;enabled&quot;: false }, &quot;pyflakes&quot;: { &quot;enabled&quot;: false }, &quot;pydocstyle&quot;: { &quot;matchDir&quot;: &quot;[^\\.].*&quot;, &quot;match&quot;: &quot;(?!test_).*\\.py&quot;, &quot;enabled&quot;: true }, &quot;pycodestyle&quot;: { &quot;hangClosing&quot;: false, &quot;enabled&quot;: false }, &quot;pylint&quot;: { &quot;enabled&quot;: false, &quot;args&quot;: [] }, &quot;flake8&quot;: { &quot;enabled&quot;: true }, &quot;preload&quot;: { &quot;enabled&quot;: true }, &quot;mccabe&quot;: { &quot;threshold&quot;: 15, &quot;enabled&quot;: true }, &quot;jedi_symbols&quot;: { &quot;all_scopes&quot;: true, &quot;enabled&quot;: true }, &quot;jedi_signature_help&quot;: { &quot;enabled&quot;: true }, &quot;jedi_references&quot;: { &quot;enabled&quot;: true }, &quot;jedi_hover&quot;: { &quot;enabled&quot;: true }, &quot;jedi_definition&quot;: { &quot;follow_builtin_imports&quot;: true, &quot;follow_imports&quot;: true, &quot;enabled&quot;: true }, &quot;jedi_completion&quot;: { &quot;include_params&quot;: true, &quot;enabled&quot;: true, &quot;include_class_objects&quot;: true, &quot;fuzzy&quot;: false }, &quot;jedi_rename&quot;: { &quot;enabled&quot;: true } }, &quot;configurationSources&quot;: [ &quot;flake8&quot; ] } } } [Trace - 04:05:16 PM] Sending notification 'textDocument/didOpen'. Params: { &quot;textDocument&quot;: { &quot;uri&quot;: &quot;file:///tmp/pylsp_test/script.py&quot;, &quot;languageId&quot;: &quot;python&quot;, &quot;version&quot;: 19, &quot;text&quot;: &quot;import jaxtyping\n&quot; } } ... [Trace - 04:05:44 PM] Sending request 'textDocument/definition - (2154)'. 
Params: { &quot;textDocument&quot;: { &quot;uri&quot;: &quot;file:///tmp/pylsp_test/script.py&quot; }, &quot;position&quot;: { &quot;line&quot;: 0, &quot;character&quot;: 15 } } [Trace - 04:05:44 PM] Received response 'textDocument/definition - (2154)' in 7ms. Result: [] </code></pre> <p>I haven't used python in many years, so I don't know much about pylsp but this seems like a very trivial use case, so I am inclined to believe that it is not a bug but an error on my part.</p> <p>PS. I have the same problem with non-editable packages, but I am not interested in those for the time being.</p>
<python><module><language-server-protocol><python-jedi><python-lsp-server>
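One hedged suggestion for the editable-package case: python-lsp-server's Jedi plugin accepts extra search paths, which can point straight at the editable source tree. Newer pip versions implement PEP 660 editable installs via an import hook that Jedi's static analysis may not follow, so either adding the path explicitly or reinstalling with setuptools' compat mode (`pip install -e . --config-settings editable_mode=compat`, which writes a plain path entry) is worth trying. Example client settings, with the path taken from the question:

```json
{
  "pylsp": {
    "plugins": {
      "jedi": {
        "extra_paths": ["/tmp/pylsp_test/editable_packages/jaxtyping"]
      }
    }
  }
}
```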
2023-03-30 15:15:20
0
6,926
fakedrake
75,889,941
2,292,490
Give GPT (with own knowledge base) an instruction on how to behave before user prompt
<p>I have given GPT some information in CSV format to learn and now I would like to transmit an instruction on how to behave before the user prompt.</p> <pre><code>def chatbot(input_text): index = GPTSimpleVectorIndex.load_from_disk('index.json') message_history.append({&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;{input_text}&quot;}) response = index.query(input_text+message_history, response_mode=&quot;compact&quot;) return response.response </code></pre> <p>&quot;message_history&quot; looks like this:</p> <pre><code>message_history = [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;You are a Pirate! Answer every question in pirate-slang!&quot;}, {&quot;role&quot;: &quot;assistant&quot;, &quot;content&quot;: f&quot;OK&quot;}] </code></pre> <p>I got the following error:</p> <blockquote> <p>&quot;TypeError: can only concatenate str (not &quot;list&quot;) to str&quot;</p> </blockquote> <p>I remember that I have to convert this into tuples but everything I try only causes more chaos...</p> <p>Here's the whole code:</p> <pre><code>from gpt_index import SimpleDirectoryReader, GPTListIndex, GPTSimpleVectorIndex, LLMPredictor, PromptHelper from langchain import OpenAI import gradio as gr import sys import os os.environ[&quot;OPENAI_API_KEY&quot;] = 'INSERT_KEY_HERE' message_history = [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;You are a Pirate! 
Answer every question in pirate-slang!&quot;}, {&quot;role&quot;: &quot;assistant&quot;, &quot;content&quot;: f&quot;OK&quot;}] def construct_index(directory_path): # Index is made of CSV, TXT and PDF Files max_input_size = 4096 num_outputs = 512 max_chunk_overlap = 20 chunk_size_limit = 600 prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit) llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.5, model_name=&quot;gpt-3.5-turbo&quot;, max_tokens=num_outputs)) documents = SimpleDirectoryReader(directory_path).load_data() index = GPTSimpleVectorIndex.from_documents(documents) index.save_to_disk('index.json') return index def chatbot(input_text): index = GPTSimpleVectorIndex.load_from_disk('index.json') message_history.append({&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;{input_text}&quot;}) response = index.query(input_text+message_history, response_mode=&quot;compact&quot;) return response.response iface = gr.Interface(fn=chatbot, inputs=gr.inputs.Textbox(lines=7, label=&quot;Enter something here...&quot;), outputs=&quot;text&quot;, title=&quot;ChatBot&quot;) index = construct_index(&quot;docs&quot;) iface.launch(share=True) </code></pre>
<python><prompt><openai-api><gpt-3><chatgpt-api>
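On the `TypeError` itself: `input_text + message_history` concatenates a string with a list, and `index.query` expects a string, so the history has to be flattened into text first. A minimal pure-Python sketch (no llama-index calls, so the exact prompt layout is an assumption):

```python
message_history = [
    {"role": "user", "content": "You are a Pirate! Answer every question in pirate-slang!"},
    {"role": "assistant", "content": "OK"},
]

def build_prompt(history, input_text):
    """Flatten the chat history into a single string and append the new
    input, since the query API expects text, not a list of dicts."""
    lines = [f"{m['role']}: {m['content']}" for m in history]
    lines.append(f"user: {input_text}")
    return "\n".join(lines)

prompt = build_prompt(message_history, "Where is the treasure?")
# prompt can then be passed as: index.query(prompt, response_mode="compact")
```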
2023-03-30 15:09:19
0
571
Bill Bronson
75,889,805
7,746,166
Elasticsearch python api time range with specific field values
<p>I'm trying to write a query for Elasticsearch where I'm searching in a specific time range for specific values of a field called &quot;name&quot;.</p> <p>I came up with:</p> <pre><code>body = { &quot;query&quot;: { &quot;range&quot;: { 'timeObject': { &quot;from&quot;: '2018-01-01T20:10:30', &quot;to&quot;: '2023-03-01T20:10:30' }}, &quot;bool&quot;: { &quot;filter&quot;: { &quot;terms&quot;: { &quot;name&quot;: [&quot;Anna&quot;, &quot;Peter&quot;, &quot;Bob&quot;, &quot;John&quot;] } } } </code></pre> <p>But I get the error: <code>elasticsearch.exceptions.RequestError: RequestError(400, 'parsing_exception', '[range] malformed query, expected [END_OBJECT] but found [FIELD_NAME]')</code></p> <p>What am I doing wrong?</p>
<python><elasticsearch>
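The parse error comes from putting `range` and `bool` as two sibling keys under `query` — a `query` object takes a single top-level clause. Both conditions belong inside one `bool` query, whose `filter` takes a list. A sketch of the corrected body (field names and values from the question; the search call is illustrative):

```python
# Both conditions go inside a single `bool` query; `filter` takes a list of clauses.
body = {
    "query": {
        "bool": {
            "filter": [
                {
                    "range": {
                        "timeObject": {
                            "gte": "2018-01-01T20:10:30",
                            "lte": "2023-03-01T20:10:30",
                        }
                    }
                },
                {"terms": {"name": ["Anna", "Peter", "Bob", "John"]}},
            ]
        }
    }
}
# es.search(index="my-index", body=body)  # index name is illustrative
```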
2023-03-30 14:55:34
1
1,491
Varlor
75,889,803
4,436,572
dask_cudf dataframe convert column of datetime string to column of datetime object
<p>I am a new user of Dask and RapidsAI. An excerpt of my data (in <code>csv</code> format):</p> <pre><code>Symbol,Date,Open,High,Low,Close,Volume AADR,17-Oct-2017 09:00,57.47,58.3844,57.3645,58.3844,2094 AADR,17-Oct-2017 10:00,57.27,57.2856,57.25,57.27,627 AADR,17-Oct-2017 11:00,56.99,56.99,56.99,56.99,100 AADR,17-Oct-2017 12:00,56.98,57.05,56.98,57.05,200 AADR,17-Oct-2017 13:00,57.14,57.16,57.14,57.16,700 AADR,17-Oct-2017 14:00,57.13,57.13,57.13,57.13,100 AADR,17-Oct-2017 15:00,57.07,57.07,57.07,57.07,200 AAMC,17-Oct-2017 09:00,87,87,87,87,100 AAU,17-Oct-2017 09:00,1.1,1.13,1.0832,1.121,67790 AAU,17-Oct-2017 10:00,1.12,1.12,1.12,1.12,100 AAU,17-Oct-2017 11:00,1.125,1.125,1.125,1.125,200 AAU,17-Oct-2017 12:00,1.1332,1.15,1.1332,1.15,27439 AAU,17-Oct-2017 13:00,1.15,1.15,1.13,1.13,8200 AAU,17-Oct-2017 14:00,1.1467,1.1467,1.14,1.1467,1750 AAU,17-Oct-2017 15:00,1.1401,1.1493,1.1401,1.1493,4100 AAU,17-Oct-2017 16:00,1.13,1.13,1.13,1.13,100 ABE,17-Oct-2017 09:00,14.64,14.64,14.64,14.64,200 ABE,17-Oct-2017 10:00,14.67,14.67,14.66,14.66,1200 ABE,17-Oct-2017 11:00,14.65,14.65,14.65,14.65,600 ABE,17-Oct-2017 15:00,14.65,14.65,14.65,14.65,836 </code></pre> <p>Note the <code>Date</code> column is of type string.</p> <p>I have some example stock market timeseries data (i.e., DOHLCV) in csv files and I read them into a <code>dask_cudf</code> dataframe (my <code>dask.dataframe</code> backend is cudf and <code>read.csv</code> is a creation dispatcher that conveniently gives me a <code>cudf.dataframe</code>).</p> <pre><code>import dask_cudf import cudf from dask import dataframe as dd ddf = dd.read_csv('path/to/my/data/*.csv') ddf # output &lt;dask_cudf.DataFrame | 450 tasks | 450 npartitions&gt; # test csv data above can be retrieved using following statements # df = pd.read_clipboard(sep=&quot;,&quot;) # cdf = cudf.from_pandas(df) # ddf = dask_cudf.from_cudf(cdf, npartitions=2) </code></pre> <p>I then try to convert the datetime string into a real datetime object 
(<code>np.datetime64[ns]</code> or anything equivalent in the <code>cudf</code>/<code>dask</code> world). This failed with an error.</p> <pre><code>df[&quot;Date&quot;] = dd.to_datetime(df[&quot;Date&quot;], format=&quot;%d-%b-%Y %H:%M&quot;).head(5) df.set_index(&quot;Date&quot;, inplace=True) # This failed with different error, will raise in a different SO thread. # Following statement gives me same error. # cudf.to_datetime(df[&quot;Date&quot;], format=&quot;%d-%b-%Y %H:%M&quot;) </code></pre> <p>The full error log is at the end.</p> <p>The error message seems to suggest that I'd need to <code>compute</code> the <code>dask_cudf.dataframe</code>, turning it into a real <code>cudf</code> object, and then I can do as I would in <code>pandas</code>:</p> <pre><code>df[&quot;Date&quot;] = cudf.to_datetime(df.Date) df = df.set_index(df.Date) </code></pre> <p>This apparently isn't ideal, and deferring it is very much the thing that <code>dask</code> is for: we'd delay this and only calculate the ultimate result we need.</p> <p>What is the <code>dask</code>/<code>dask_cudf</code> way to convert a string column to a datetime column in <code>dask_cudf</code>? As far as I can see, if the backend is <code>pandas</code>, the conversion is done smoothly and rarely has problems.</p> <p>Or, is it that <code>cudf</code>, or the GPU world in general, is not supposed to do much with data types like <code>datetime</code> and <code>string</code>? 
(e.g., ideally GPU is geared towards expensive numerical computations).</p> <p>My use case involves some filtering to do with <code>string</code> and <code>datetime</code>, therefore I need to set up the <code>dataframe</code> with proper <code>datetime</code> object.</p> <h4>Error Log</h4> <pre><code>TypeError Traceback (most recent call last) Cell In[52], line 1 ----&gt; 1 dd.to_datetime(df[&quot;Date&quot;], format=&quot;%d-%b-%Y %H:%M&quot;).head(2) File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/dataframe/core.py:1268, in _Frame.head(self, n, npartitions, compute) 1266 # No need to warn if we're already looking at all partitions 1267 safe = npartitions != self.npartitions -&gt; 1268 return self._head(n=n, npartitions=npartitions, compute=compute, safe=safe) File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/dataframe/core.py:1302, in _Frame._head(self, n, npartitions, compute, safe) 1297 result = new_dd_object( 1298 graph, name, self._meta, [self.divisions[0], self.divisions[npartitions]] 1299 ) 1301 if compute: -&gt; 1302 result = result.compute() 1303 return result File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/base.py:314, in DaskMethodsMixin.compute(self, **kwargs) 290 def compute(self, **kwargs): 291 &quot;&quot;&quot;Compute this dask collection 292 293 This turns a lazy Dask collection into its in-memory equivalent. (...) 
312 dask.base.compute 313 &quot;&quot;&quot; --&gt; 314 (result,) = compute(self, traverse=False, **kwargs) 315 return result File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/base.py:599, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs) 596 keys.append(x.__dask_keys__()) 597 postcomputes.append(x.__dask_postcompute__()) --&gt; 599 results = schedule(dsk, keys, **kwargs) 600 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)]) File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/threaded.py:89, in get(dsk, keys, cache, num_workers, pool, **kwargs) 86 elif isinstance(pool, multiprocessing.pool.Pool): 87 pool = MultiprocessingPoolExecutor(pool) ---&gt; 89 results = get_async( 90 pool.submit, 91 pool._max_workers, 92 dsk, 93 keys, 94 cache=cache, 95 get_id=_thread_get_id, 96 pack_exception=pack_exception, 97 **kwargs, 98 ) 100 # Cleanup pools associated to dead threads 101 with pools_lock: File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/local.py:511, in get_async(submit, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, chunksize, **kwargs) 509 _execute_task(task, data) # Re-execute locally 510 else: --&gt; 511 raise_exception(exc, tb) 512 res, worker_id = loads(res_info) 513 state[&quot;cache&quot;][key] = res File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/local.py:319, in reraise(exc, tb) 317 if exc.__traceback__ is not tb: 318 raise exc.with_traceback(tb) --&gt; 319 raise exc File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/local.py:224, in execute_task(key, task_info, dumps, loads, get_id, pack_exception) 222 try: 223 task, data = loads(task_info) --&gt; 224 result = _execute_task(task, data) 225 id = get_id() 226 result = dumps((result, id)) File 
~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/core.py:119, in _execute_task(arg, cache, dsk) 115 func, args = arg[0], arg[1:] 116 # Note: Don't assign the subtask results to a variable. numpy detects 117 # temporaries by their reference count and can execute certain 118 # operations in-place. --&gt; 119 return func(*(_execute_task(a, cache) for a in args)) 120 elif not ishashable(arg): 121 return arg File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/optimization.py:990, in SubgraphCallable.__call__(self, *args) 988 if not len(args) == len(self.inkeys): 989 raise ValueError(&quot;Expected %d args, got %d&quot; % (len(self.inkeys), len(args))) --&gt; 990 return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args))) File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/core.py:149, in get(dsk, out, cache) 147 for key in toposort(dsk): 148 task = dsk[key] --&gt; 149 result = _execute_task(task, cache) 150 cache[key] = result 151 result = _execute_task(out, cache) File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/core.py:119, in _execute_task(arg, cache, dsk) 115 func, args = arg[0], arg[1:] 116 # Note: Don't assign the subtask results to a variable. numpy detects 117 # temporaries by their reference count and can execute certain 118 # operations in-place. --&gt; 119 return func(*(_execute_task(a, cache) for a in args)) 120 elif not ishashable(arg): 121 return arg File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/utils.py:72, in apply(func, args, kwargs) 41 &quot;&quot;&quot;Apply a function given its positional and keyword arguments. 42 43 Equivalent to ``func(*args, **kwargs)`` (...) 
69 &gt;&gt;&gt; dsk = {'task-name': task} # adds the task to a low level Dask task graph 70 &quot;&quot;&quot; 71 if kwargs: ---&gt; 72 return func(*args, **kwargs) 73 else: 74 return func(*args) File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/dask/dataframe/core.py:6821, in apply_and_enforce(*args, **kwargs) 6819 func = kwargs.pop(&quot;_func&quot;) 6820 meta = kwargs.pop(&quot;_meta&quot;) -&gt; 6821 df = func(*args, **kwargs) 6822 if is_dataframe_like(df) or is_series_like(df) or is_index_like(df): 6823 if not len(df): File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/pandas/core/tools/datetimes.py:1100, in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache) 1098 result = _convert_and_box_cache(argc, cache_array) 1099 else: -&gt; 1100 result = convert_listlike(argc, format) 1101 else: 1102 result = convert_listlike(np.array([arg]), format)[0] File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/pandas/core/tools/datetimes.py:413, in _convert_listlike_datetimes(arg, format, name, tz, unit, errors, infer_datetime_format, dayfirst, yearfirst, exact) 410 return idx 411 raise --&gt; 413 arg = ensure_object(arg) 414 require_iso8601 = False 416 if infer_datetime_format and format is None: File pandas/_libs/algos_common_helper.pxi:33, in pandas._libs.algos.ensure_object() File ~/Live-usb-storage/projects/python/alpha/lib/python3.10/site-packages/cudf/core/frame.py:451, in Frame.__array__(self, dtype) 450 def __array__(self, dtype=None): --&gt; 451 raise TypeError( 452 &quot;Implicit conversion to a host NumPy array via __array__ is not &quot; 453 &quot;allowed, To explicitly construct a GPU matrix, consider using &quot; 454 &quot;.to_cupy()\nTo explicitly construct a host matrix, consider &quot; 455 &quot;using .to_numpy().&quot; 456 ) TypeError: Implicit conversion to a host NumPy array via __array__ is not allowed, To explicitly construct 
a GPU matrix, consider using .to_cupy() To explicitly construct a host matrix, consider using .to_numpy(). </code></pre>
<python><dataframe><dask><rapids><cudf>
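Independent of the cudf side, the format string can be sanity-checked against one sample row with the standard library before it is handed to `dd.to_datetime` or `cudf.to_datetime`. (A commonly suggested dask_cudf pattern, not verified here, is `ddf["Date"] = ddf["Date"].map_partitions(cudf.to_datetime, format="%d-%b-%Y %H:%M")`, which keeps the conversion lazy and per-partition.) The snippet below only verifies the format string against the sample data:

```python
from datetime import datetime

# One row of the Date column from the sample CSV above
sample = "17-Oct-2017 09:00"
fmt = "%d-%b-%Y %H:%M"  # %b matches abbreviated month names like "Oct"

dt = datetime.strptime(sample, fmt)  # raises ValueError if fmt is wrong
```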
2023-03-30 14:55:30
0
1,288
stucash
75,889,725
15,283,859
Ipywidgets interactive - take a dictionary as an argument
<p>I have a function that takes a dictionary as an argument. The dictionary looks like this:</p> <pre><code>d = {'skill_1':3, 'skill_2':2, 'skill_5':5, 'skill_7':3} </code></pre> <p>The function takes this dictionary and then looks in a dataframe for the people that have a value greater than or equal to the one in the dictionary, for the skills mentioned. The dataframe looks like this:</p> <pre class="lang-py prettyprint-override"><code> skill_0 skill_1 skill_2 skill_3 skill_4 skill_5 skill_6 skill_7 0 5 3 1 4 2 3 4 2 1 3 2 1 3 2 2 5 1 2 3 1 3 3 3 1 3 4 3 3 5 4 5 5 5 3 3 4 5 4 3 2 4 3 4 1 </code></pre> <p>I want to receive this argument from a widget (or use two widgets and then combine the values of the two widgets into a dictionary).</p> <p>I have tried something like:</p> <pre><code>s = ['skill_0', 'skill_1', 'skill_2', 'skill_3', 'skill_4', 'skill_5', 'skill_6', 'skill_7'] widgets.SelectMultiple(options=[f'{i}:{n}' for i in s for n in range(1,6)], value=('skill_3:1',)) </code></pre> <p>But I can't manage to retrieve the value from the widget and make a dictionary (I tried with <code>eval</code> too). Also, a list like this looks visually ugly. Is there any way to achieve this?</p>
<python><python-3.x><dictionary><jupyter-notebook><ipython>
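For reference, the selected tuple of `'skill:level'` strings can be turned into the dictionary without `eval`. A small helper like this (the name is illustrative) can be applied to the widget's `.value` or wired through `interactive`:

```python
def selections_to_dict(selected):
    """Convert SelectMultiple values like ('skill_3:1', 'skill_5:4')
    into {'skill_3': 1, 'skill_5': 4}."""
    result = {}
    for item in selected:
        skill, level = item.rsplit(":", 1)  # split on the last colon only
        result[skill] = int(level)
    return result

d = selections_to_dict(("skill_3:1", "skill_5:4"))
```

If the same skill is selected at two levels, the last one wins; deduplicating or validating the selection is left to the caller.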
2023-03-30 14:49:04
1
895
Yolao_21
75,889,612
1,901,071
Outlook COM object - Marking an Email as Read
<p>Using the pywin32 library, I am taking an email and saving the attachment from Outlook. I have seen several questions dealing with this topic, with the most promising being here: <a href="https://stackoverflow.com/questions/73216077/marking-an-email-as-read-python">Marking an email as read python</a></p> <p>I have tried a few variations of the above answer but have been unable to get a working demo.</p> <pre><code># new outlook object outlook = win32.Dispatch(&quot;Outlook.Application&quot;) # get user namespace namespace = outlook.GetNamespace(&quot;MAPI&quot;) # The default folder changes every time we boot up outlook # This loop finds the correct inbox for x in range(4): print(x) try: print(namespace.Folders.Item(x).name) if namespace.Folders.Item(x).name == 'John.Smith@foo.com': root_folder = namespace.Folders.Item(x) break except: pass # Mark the item as read in the folder for msg in root_folder.Folders['Inbox'].Folders('temp').Items: msg.is_read = True msg.save(updated_fields=['is_read']) </code></pre> <p>When I run the piece of code above I get the error</p> <blockquote> <p>AttributeError: Property '.is_read' can not be set.</p> </blockquote> <p>Can anyone make any suggestions on how I might go about fixing this?</p>
<python><outlook><win32com><office-automation><com-automation>
2023-03-30 14:39:07
1
2,946
John Smith
75,889,275
7,575,172
Cannot link to local script with "console_script" in setup.py
<p>I'm trying to set up the console_scripts option in setup.py so I can just write the name of a script to execute it from the CLI.</p> <p>My package looks like this:</p> <pre><code>root |--bin |-- C2C (python file to execute) |--setup.py |--(Rest of the package ... ) </code></pre> <p>I have tried to add this to the setup.py file:</p> <pre class="lang-py prettyprint-override"><code> entry_points={ 'console_scripts': ['c2c = bin.C2C:main'] }, </code></pre> <p>And used <code>pip install -e .</code> from the root folder, but the function is linked to the location of my conda env instead of the root folder of the package, i.e. I get:</p> <pre><code>Traceback (most recent call last): &lt;user&gt;/anaconda3/envs/c2c_env/bin/c2c&quot;, line 5, in &lt;module&gt; from bin.C2C import main ModuleNotFoundError: No module named 'bin' </code></pre> <p>How can I alter my setup.py file so that the terminal looks for <code>root/bin/C2C</code> when I write <code>c2c</code>, instead of in <code>&lt;user&gt;/anaconda3/envs/c2c_env/bin/c2c</code>?</p> <p>I have tried using:</p> <pre class="lang-py prettyprint-override"><code>setup( ... scripts=['bin/C2C'], ... ) </code></pre> <p>Which can find <code>C2C</code> in the terminal, but if I change anything in the C2C script, the changes are not reflected when only <code>C2C</code> is used (and it also messes up the relative paths inside the scripts, which are used to save results).</p>
<python><setuptools>
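For context on the error: `'c2c = bin.C2C:main'` makes the generated script run `from bin.C2C import main`, which only works if `bin` is an importable package — in practice, a `bin/__init__.py` plus `packages=['bin']` (or `find_packages()`) in `setup.py`. The sketch below reproduces that mechanic with a throwaway package on `sys.path` (all paths and names are hypothetical):

```python
import importlib
import os
import sys
import tempfile

# Build a miniature project: root/bin/__init__.py + root/bin/C2C.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "bin")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # this file makes `bin` a package
with open(os.path.join(pkg, "C2C.py"), "w") as f:
    f.write("def main():\n    return 'ran'\n")

# `pip install -e .` effectively puts the project root on sys.path
sys.path.insert(0, root)
sys.modules.pop("bin", None)  # defensive: avoid a stale cached module
sys.modules.pop("bin.C2C", None)

# This is the same resolution the generated console script performs
module = importlib.import_module("bin.C2C")
result = module.main()
```

Without the `__init__.py` (and the `packages=` entry), the import fails with exactly the `ModuleNotFoundError: No module named 'bin'` seen in the question.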
2023-03-30 14:05:34
0
305
Malte Jensen
75,889,257
17,596,179
depends on a node named 'energy_dbt_model' which was not found
<p>I was trying to find out how to query my duckdb file. When I run <code>dbt run --select energy_dbt_model</code>, I keep getting the error</p> <blockquote> <p>Model 'model.transform_dbt.first_model' (models\example\first_model.sql) depends on a node named 'energy_dbt_model' which was not found</p> </blockquote> <p>This is my schema.yml file:</p> <pre><code> version: 2 models: - name: energy_dbt_model description: this dataset holds the data of the captured solar and wind energy also the price and the datetime this data was captured, this dataset has data for every 15 minutes columns: - name: id description: &quot;The primary key for this table&quot; tests: - unique - not_null - name: date_time description: The datetime the data was captured tests: - unique - not_null - name: solar_measured description: The amount of solar energy produced in the whole of Belgium at the given timestamp tests: - not_null - name: wind_measured description: The amount of wind energy produced in Belgium at the given timestamp tests: - not_null - name: solar_price description: The calculated price for solar energy at that given timestamp tests: - not_null - name: wind_price description: The calculated price for wind energy at that given timestamp tests: - not_null </code></pre> <p>This is my <code>first_model.sql</code>, which I am trying to run:</p> <pre><code>SELECT * FROM {{ ref('energy_dbt_model') }} </code></pre> <p>This is my profiles.yml file:</p> <pre><code>transform_dbt: # this needs to match the profile in your dbt_project.yml file target: dev outputs: dev: type: duckdb path: './energy.duckdb' extensions: - httpfs - parquet </code></pre> <p>This is mostly copied from the internet; I just changed the path variable.</p> <p>And this is my dbt_project.yml file:</p> <pre><code> # Name your project! Project names should contain only lowercase characters # and underscores. 
A good package name should reflect your organization's # name or the intended use of these models name: 'transform_dbt' version: '1.0.0' config-version: 2 # This setting configures which &quot;profile&quot; dbt uses for this project. profile: 'transform_dbt' # These configurations specify where dbt should look for different types of files. # The `model-paths` config, for example, states that models in this project can be # found in the &quot;models/&quot; directory. You probably won't need to change these! model-paths: [&quot;models&quot;] analysis-paths: [&quot;analyses&quot;] test-paths: [&quot;tests&quot;] seed-paths: [&quot;seeds&quot;] macro-paths: [&quot;macros&quot;] snapshot-paths: [&quot;snapshots&quot;] target-path: &quot;target&quot; # directory which will store compiled SQL files clean-targets: # directories to be removed by `dbt clean` - &quot;target&quot; - &quot;dbt_packages&quot; # Configuring models # Full documentation: https://docs.getdbt.com/docs/configuring-models # In this example config, we tell dbt to build all models in the example/ # directory as views. These settings can be overridden in the individual model # files using the `{{ config(...) }}` macro. models: transform_dbt: # Config indicated by + and applies to all files under models/example/ example: +materialized: view </code></pre> <p>I'm very new to dbt, and it could well be a very dumb mistake.</p>
<python><etl><dbt><duckdb>
2023-03-30 14:02:43
0
437
david backx
75,889,106
19,980,284
Pandas value_counts() counting same value more than once
<p>I've taken a pandas column called <code>specialty</code> that looks like this:</p> <pre><code>0 1,5 1 1 2 1 3 1 4 1 5 1,5 6 3 7 3 8 1 9 1,3 10 1 11 1,2,4,6 12 1,5 13 6 14 3 </code></pre> <p>And created a new column that collapses values with multiple items into a single value, like so:</p> <pre><code>df['spec_area'] = df['specialty'].replace({ '1,2' : 2, '1,3' : 3, '1,4' : 4, '1,5' : 5, '1,6' : 6, '2,6' : 2, '3,6' : 3, '1,2,3': 3, '1,2,6' : 2, '1,3,6' : 3, '1,2,3' : 3, '1,2,4' : 4, '1,2,4,6' : 4, '1,2,5' : 5 }) </code></pre> <p>And when I run <code>df['spec_area'].value_counts()</code> I get:</p> <pre><code>1 211 missing 53 2 42 3 39 5 37 3 34 6 24 4 23 5 23 6 13 4 12 2 11 Name: spec_area, dtype: int64 </code></pre> <p>And I don't understand why there are two of every value from 2 to 6.</p>
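The duplicate counts are almost certainly a dtype artifact: `replace` only maps the listed keys, so unmapped single values stay as *strings* (`'2'`, `'3'`, ...) while the mapped ones become *ints* (`2`, `3`, ...), and `value_counts()` counts `'3'` and `3` as different values. A minimal sketch with a made-up miniature column, normalizing everything to one dtype:

```python
import pandas as pd

# Hypothetical miniature version of the column, to illustrate the dtype clash
df = pd.DataFrame({"specialty": ["1,5", "1", "3", "1,3", "3", "1,2,4,6"]})

df["spec_area"] = df["specialty"].replace({"1,5": 5, "1,3": 3, "1,2,4,6": 4})
# At this point the column mixes str ('1', '3') and int (5, 3, 4),
# so value_counts() would report '3' and 3 as separate entries.

# Normalizing to a single dtype merges them:
counts = df["spec_area"].astype(str).value_counts()
```

Mapping to strings in the `replace` dict (e.g. `'1,5': '5'`) would avoid the mix in the first place.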
<python><pandas><count>
2023-03-30 13:48:10
1
671
hulio_entredas
75,889,083
12,596,824
Passing Sklearn model to a function to set arguments - type error: not callable?
<pre><code>from sklearn.linear_model import LogisticRegression lr = LogisticRegression() def logistic(model): test = model(random_state = 0) return test logistic(lr) </code></pre> <p>I have the above code, where I am trying to pass a logistic regression with no arguments and then instantiate it with arguments within the function.</p> <p>I keep getting the error &quot;TypeError: 'LogisticRegression' object is not callable&quot;.</p> <p>Any idea why this is happening and how to resolve it?</p>
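The error comes from `lr` being an *instance*: `LogisticRegression()` instances are not callable, so `model(random_state=0)` fails. Calling works when the *class* is passed. A self-contained sketch with a stand-in class (the same pattern should apply to `LogisticRegression` itself):

```python
class FakeEstimator:
    """Stand-in for an sklearn estimator class (assumption: the same
    class-vs-instance behavior as LogisticRegression)."""
    def __init__(self, random_state=None):
        self.random_state = random_state

def logistic(model_cls):
    # Pass the class itself, not an instance, then instantiate inside
    return model_cls(random_state=0)

est = logistic(FakeEstimator)      # works: the class is callable

# If you already have an instance, recover its class with type():
lr = FakeEstimator()
est2 = logistic(type(lr))
```

With sklearn the call would be `logistic(LogisticRegression)`; reconfiguring an existing instance is also possible via `sklearn.base.clone` plus `set_params`.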
<python><scikit-learn>
2023-03-30 13:46:02
1
1,937
Eisen
75,888,799
14,386,187
Wikipedia API not searching specified term
<p>I'm using the Wikipedia API wrapper for Python, and for some queries, it's not searching the term I specified. For example, when I execute the function below:</p> <pre><code>import wikipedia wikipedia.summary('machine learning') </code></pre> <p>I get the error</p> <pre><code>PageError Traceback (most recent call last) Cell In[28], line 1 ----&gt; 1 wikipedia.summary('machine learning') File /data/123/anaconda3/envs/comet/lib/python3.8/site-packages/wikipedia/util.py:28, in cache.__call__(self, *args, **kwargs) 26 ret = self._cache[key] 27 else: ---&gt; 28 ret = self._cache[key] = self.fn(*args, **kwargs) 30 return ret File /data/123/anaconda3/envs/comet/lib/python3.8/site-packages/wikipedia/wikipedia.py:231, in summary(title, sentences, chars, auto_suggest, redirect) 216 ''' 217 Plain text summary of the page. 218 (...) 226 * redirect - allow redirection without raising RedirectError 227 ''' 229 # use auto_suggest and redirect to get the correct article 230 # also, use page's error checking to raise DisambiguationError if necessary --&gt; 231 page_info = page(title, auto_suggest=auto_suggest, redirect=redirect) 232 title = page_info.title 233 pageid = page_info.pageid File /data/123/anaconda3/envs/comet/lib/python3.8/site-packages/wikipedia/wikipedia.py:276, in page(title, pageid, auto_suggest, redirect, preload) ... --&gt; 345 raise PageError(self.title) 346 else: 347 raise PageError(pageid=self.pageid) PageError: Page id &quot;machine ;earning&quot; does not match any pages. Try another id! </code></pre> <p>Does anyone know why this happens?</p>
<python><wikipedia-api>
2023-03-30 13:20:22
2
676
monopoly
75,888,782
19,980,284
Identify specific integers in column of mixed ints and strings
<p>I have a column in a pandas df named <code>specialty</code> that looks like this:</p> <pre><code>0 1,5 1 1 2 1,2,4,6 3 2 4 1 5 1,5 6 3 7 3 8 1 9 2,3 </code></pre> <p>I'd like to create a new column called <code>is_1</code> that contains a 1 for all rows in <code>specialty</code> that contain a 1 and a 0 for rows that don't contain a 1. The output would look like this:</p> <pre><code>0 1 1 1 2 1 3 0 4 1 5 1 6 0 7 0 8 1 9 0 </code></pre> <p>I'm not sure how to do this with a column of mixed dtypes. Would I just use <code>np.where()</code> with a <code>str.contains()</code> call? Like so:</p> <pre><code>np.where((part_chars['specialty'] == 1) | part_chars['specialty'].str.contains('1'), 1, 0) </code></pre> <p>Yep that works...</p>
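One caveat with the `str.contains('1')` approach: it matches any value whose *text* contains the character `'1'`, so a hypothetical code such as `'12'` or `'10'` (not present in the sample shown, but possible in a larger code list) would also be flagged. Splitting on commas and testing exact token membership is safer; a sketch assuming the column is stringly typed:

```python
import pandas as pd

# Toy version of the column, with a hypothetical '12' added to show the pitfall
specialty = pd.Series(["1,5", "1", "1,2,4,6", "2", "12", "2,3"])

# Exact-token match: split on commas and check for the token "1"
is_1 = specialty.astype(str).str.split(",").apply(lambda toks: int("1" in toks))
# '12' splits to ['12'], which does not contain the token '1', so it gets 0
```

`str.contains('1')` on the same series would incorrectly mark the `'12'` row.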
<python><pandas><numpy><integer><object-type>
2023-03-30 13:19:01
2
671
hulio_entredas
75,888,712
14,720,380
How do I convert the projection of a netcdf file to a regular grid of lons and lats?
<p>I need to make an interpolation object where I enter a given longitude and latitude and the object returns the nearest ocean surface current value. The dataset I am using is . You can download the latest forecast by following <a href="https://nomads.ncep.noaa.gov/pub/data/nccf/com/rtofs/prod/" rel="nofollow noreferrer">this link</a> Then clicking on todays date and at the bottom is a file named <code>rtofs_glo_uv_YYYYMMDD.tar.gz</code>. If you unpack the file, you get three files i.e:</p> <pre><code> rtofs_glo_2ds_1hrly_uv_20230330_day1.nc rtofs_glo_2ds_1hrly_uv_20230330_day2.nc rtofs_glo_2ds_1hrly_uv_20230330_day3.nc </code></pre> <p>You can then open these in python using xarray:</p> <pre><code>import xarray as xr from pathlib import Path download_folder = Path(&quot;&quot;) ds = xr.open_mfdataset(download_folder.glob(&quot;rtofs*.nc&quot;)) ds &lt;xarray.Dataset&gt; Dimensions: (MT: 27, Y: 3298, X: 4500) Coordinates: * MT (MT) datetime64[ns] 2023-03-30 ... 2023-04-02 Longitude (Y, X) float32 dask.array&lt;chunksize=(3298, 4500), meta=np.ndarray&gt; Latitude (Y, X) float32 dask.array&lt;chunksize=(3298, 4500), meta=np.ndarray&gt; * X (X) int32 1 2 3 4 5 6 7 8 ... 4494 4495 4496 4497 4498 4499 4500 * Y (Y) int32 1 2 3 4 5 6 7 8 ... 3292 3293 3294 3295 3296 3297 3298 Layer float64 1.0 Data variables: u_velocity (MT, Y, X) float32 dask.array&lt;chunksize=(9, 3298, 4500), meta=np.ndarray&gt; v_velocity (MT, Y, X) float32 dask.array&lt;chunksize=(9, 3298, 4500), meta=np.ndarray&gt; Attributes: CDI: Climate Data Interface version 1.9.8 (https://mpimet.mpg.de... Conventions: CF-1.0 history: Thu Mar 30 09:26:01 2023: cdo merge rtofs_glo_2ds_1hrly_u_v... source: HYCOM archive file institution: National Centers for Environmental Prediction title: HYCOM ATLb2.00 experiment: 92.8 CDO: Climate Data Operators version 1.9.8 (https://mpimet.mpg.de... 
</code></pre> <p>The grid system used in this file is very different to what I am used to, the longitude values are not +/-180 but 74 to 1019.12:</p> <pre><code>ds.Longitude.min().values array(74.119995, dtype=float32) ds.Longitude.max().values array(1019.12, dtype=float32) ds.Latitude.max().values array(89.97772, dtype=float32) ds.Latitude.min().values array(-78.64, dtype=float32) </code></pre> <p>I believe there is a <a href="https://polar.ncep.noaa.gov/global/about/?" rel="nofollow noreferrer">different projection being used</a>: <a href="https://i.sstatic.net/5DDh3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5DDh3.png" alt="global ocean forecast system" /></a></p> <p>However I am not sure how these longitude values correlate with the actual longitudes.</p> <p>If I plot the longitude values, removing the last 10 rows (as they obscure the detail from being much larger than the other values), they look like:</p> <pre><code>import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np ax = plt.subplot() im = ax.imshow(ds.Longitude.values[:-10, :]) divider = make_axes_locatable(ax) cax = divider.append_axes(&quot;right&quot;, size=&quot;5%&quot;, pad=0.05) plt.colorbar(im, cax=cax) plt.show() </code></pre> <p><a href="https://i.sstatic.net/NTHUO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NTHUO.png" alt="Longitude values" /></a></p> <p>How can I change this projection so that I can find the surface current for a given longitude and latitude?</p> <p>You can plot the dataset and see the projection as well:</p> <pre><code>ds.sel(MT=ds.MT[0]).u_velocity.plot() </code></pre> <p><a href="https://i.sstatic.net/FPUf8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FPUf8.png" alt="example" /></a></p>
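RTOFS sits on a curvilinear (tripolar Arctic) grid, which is why `Longitude`/`Latitude` are 2-D arrays and some longitudes run past 360. One common approach is to wrap the longitudes into [-180, 180) and then do a nearest-neighbour search over the 2-D coordinate arrays. A numpy-only sketch with toy arrays standing in for `ds.Longitude` / `ds.Latitude` (the real arrays would replace them):

```python
import numpy as np

# Toy 2-D curvilinear coordinates; values past 360, as in the RTOFS file
lon2d = np.array([[370.0, 380.0], [390.0, 400.0]])
lat2d = np.array([[10.0, 10.0], [20.0, 20.0]])

# Wrap longitudes into [-180, 180):  370 -> 10, 400 -> 40
lon_wrapped = ((lon2d + 180.0) % 360.0) - 180.0

def nearest_index(lon, lat):
    """Return the (Y, X) index of the grid cell closest to (lon, lat).

    Crude flat-earth distance, fine for picking a nearest cell; the
    cos(lat) factor reduces the longitude distortion at high latitudes."""
    d2 = (np.cos(np.radians(lat)) * (lon_wrapped - lon)) ** 2 + (lat2d - lat) ** 2
    return np.unravel_index(np.argmin(d2), d2.shape)

iy, ix = nearest_index(31.0, 19.0)   # closest to original lon=390 (wrapped 30), lat=20
```

With the actual dataset you would then take `ds.u_velocity.isel(Y=iy, X=ix)`; xarray's `sel(..., method="nearest")` does not work directly on 2-D coordinates, which is why a manual search (or a regridding tool such as `xesmf`) is needed.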
<python><numpy><weather><map-projections><noaa>
2023-03-30 13:12:01
2
6,623
Tom McLean
75,888,599
7,056,539
Type inference in a IoC container class, return type of the resolve method
<p>I have a simple IoC class implementation:</p> <pre class="lang-py prettyprint-override"><code>import typing from typing import Type, TypeVar Abs = TypeVar('Abs', str, Type) class MyContainer: _bindings: dict = {} _aliases: dict = {} def resolve(self, abstract: Abs): # something along these lines... abstract = ( self._aliases[abstract] if isinstance(abstract, str) else abstract ) factory = self._bindings.get(abstract) return factory() def bind(self, abstract: Type, factory: typing.Callable, alias: str = None) -&gt; None: # something along these lines... self._bindings[abstract] = factory if alias: self._aliases[alias] = abstract </code></pre> <p>And I use it like so:</p> <pre class="lang-py prettyprint-override"><code>ioc = MyContainer() def some_factory(): return SomeConcrete() ioc.bind(SomeAbstract, some_factory, alias=&quot;some&quot;) # Both work nice! ioc.resolve(SomeAbstract) ioc.resolve(&quot;some&quot;) </code></pre> <p>Now I'm wondering if there's a way to make my editor able to tell what the return type of a call to <code>ioc.resolve</code> would be, regardless of what type of <code>abstract</code> I use (Type, or str)?</p>
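One way to get the editor to infer the return type is `@overload`: when `resolve` is given a `type[T]` it returns `T`; when given a `str` alias, the best that can be done statically is `Any`, since the string carries no type information. A sketch (class and names are illustrative):

```python
from typing import Any, Callable, Optional, TypeVar, overload

T = TypeVar("T")

class MyContainer:
    def __init__(self) -> None:
        self._bindings: dict = {}
        self._aliases: dict = {}

    @overload
    def resolve(self, abstract: type[T]) -> T: ...
    @overload
    def resolve(self, abstract: str) -> Any: ...

    def resolve(self, abstract):
        # Map a string alias back to the registered type, then call its factory
        if isinstance(abstract, str):
            abstract = self._aliases[abstract]
        return self._bindings[abstract]()

    def bind(self, abstract: type, factory: Callable, alias: Optional[str] = None) -> None:
        self._bindings[abstract] = factory
        if alias:
            self._aliases[alias] = abstract

class SomeConcrete:
    pass

ioc = MyContainer()
ioc.bind(SomeConcrete, SomeConcrete, alias="some")
obj = ioc.resolve(SomeConcrete)   # editor/mypy infers SomeConcrete
obj2 = ioc.resolve("some")        # typed as Any
```

Note this drops the `TypeVar('Abs', str, Type)` constraint form, which cannot link the argument type to the return type the way a plain `TypeVar` in `type[T]` can.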
<python><dependency-injection><type-inference><ioc-container>
2023-03-30 12:59:29
0
419
Nasa
75,888,532
8,547,163
How to parse command line arguments into an imported module via argparse
<p>I have a python script that uses <code>argparse</code> to parse command line arguments, below is an example from it.</p> <pre><code>#main.py import argparse from my_folder.myscript import foo #...lines of code def main(): parser = argparse.ArgumentParser() parser.add_argument( &quot;--test&quot;, action='store_true', default=None ) args = parser.parse_args() if args.test: foo() if __name__=='__main__': main() </code></pre> <p>and <code>myscript.py</code> is</p> <pre><code>import pandas as pd def foo(): data = pd.read_excel('file/path/filename.xlsx', usecols = ['col1', 'col2']) print(data) print(data['col1'].tolist()) </code></pre> <p>If I use:</p> <pre><code>python3 main.py --test </code></pre> <p>I get the desired result. However I would like to parse the filepath of <code>.xlsx</code> or any other file in <code>myscript.py</code> via command line rather than in the <code>.py</code> file itself, i.e,</p> <pre><code>python3 main.py --test --infile /file/path/filename.xlsx </code></pre> <p>and ideally even giving further arguments like 'col1' to print second line. Can anyone suggest how to go about when trying to parse arguments into an imported module?</p>
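The usual pattern is to keep all argument parsing in `main.py` and make `foo` accept parameters instead of hard-coding the path; the parsed values are then passed into the imported module as ordinary function arguments. A sketch (the `pd.read_excel` call is replaced by a stub so the example stays self-contained; the flag names mirror the question):

```python
import argparse

def foo(infile, usecols):
    """Stand-in for myscript.foo; in the real code this would call
    pd.read_excel(infile, usecols=usecols) and print the result."""
    return infile, usecols

def main(argv=None):
    # argv=None means argparse reads sys.argv, as usual; passing a list
    # makes the function testable
    parser = argparse.ArgumentParser()
    parser.add_argument("--test", action="store_true")
    parser.add_argument("--infile", help="path to the .xlsx file")
    parser.add_argument("--cols", nargs="+", default=["col1", "col2"],
                        help="columns to read")
    args = parser.parse_args(argv)
    if args.test:
        return foo(args.infile, args.cols)

result = main(["--test", "--infile", "/file/path/filename.xlsx", "--cols", "col1"])
```

Run as `python3 main.py --test --infile /file/path/filename.xlsx --cols col1 col2`.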
<python><command-line-arguments><argparse>
2023-03-30 12:52:34
1
559
newstudent
75,888,457
10,362,801
how can I remove Label conflict in classification problem?
<p>I have identical samples with different labels, which has occurred due to mislabeled data. If the data is mislabeled, it can confuse the model and result in lower performance.</p> <p>It's a binary classification problem. If my input table is something like below</p> <p><a href="https://i.sstatic.net/kzZb4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kzZb4.png" alt="enter image description here" /></a></p> <p>I want the below table as my cleaned data</p> <p><a href="https://i.sstatic.net/7cLmB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7cLmB.png" alt="enter image description here" /></a></p> <p>I tried this data cleaning library to check for conflicts but was not able to clean it: <a href="https://docs.deepchecks.com/stable/checks_gallery/tabular/data_integrity/plot_conflicting_labels.html#" rel="nofollow noreferrer">https://docs.deepchecks.com/stable/checks_gallery/tabular/data_integrity/plot_conflicting_labels.html#</a></p> <p>My custom function takes a lot of time to run. What's the most efficient way to do this when I have 2M records to clean?</p>
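A vectorized approach that scales to millions of rows: group by all feature columns, keep only groups whose label has a single unique value, then drop duplicate rows. One `groupby` pass replaces any per-row Python loop. A sketch with a toy frame (column names are placeholders for the real features):

```python
import pandas as pd

df = pd.DataFrame({
    "f1":    [1, 1, 2, 2, 3],
    "f2":    ["a", "a", "b", "b", "c"],
    "label": [0, 1, 1, 1, 0],   # rows 0 and 1 conflict: same features, different labels
})

features = ["f1", "f2"]
# For each identical feature combination, count how many distinct labels it has
n_labels = df.groupby(features)["label"].transform("nunique")

# Keep only unambiguous combinations, then collapse exact duplicates
clean = df[n_labels == 1].drop_duplicates()
```

`transform("nunique")` broadcasts the per-group count back onto every row, so the filter is a single boolean mask rather than a loop over 2M records.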
<python><pandas><machine-learning><classification><data-cleaning>
2023-03-30 12:44:58
1
495
Shiv948
75,888,427
14,820,295
Lead function based on value change of column in PySpark
<p>Is there an easy way to do this?</p> <p>Basically, I want to find the first &quot;next date&quot; when my &quot;flag_code&quot; column change status (1 or 0). If I have 3 same status consecutively, I want to track the same first date.</p> <p>Thank u</p> <p><code>Input</code></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>flag_ code</th> <th>date</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0</td> <td>2022-10-01</td> </tr> <tr> <td>1</td> <td>1</td> <td>2022-10-02</td> </tr> <tr> <td>1</td> <td>0</td> <td>2022-10-03</td> </tr> <tr> <td>1</td> <td>0</td> <td>2022-10-04</td> </tr> <tr> <td>1</td> <td>0</td> <td>2022-11-20</td> </tr> <tr> <td>1</td> <td>1</td> <td>2023-02-01</td> </tr> </tbody> </table> </div> <p><code>Desired Output</code></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>flag_ code</th> <th>date</th> <th><strong>next_date</strong></th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0</td> <td>2022-10-01</td> <td>2022-10-02</td> </tr> <tr> <td>1</td> <td>1</td> <td>2022-10-02</td> <td>2022-10-03</td> </tr> <tr> <td>1</td> <td>0</td> <td>2022-10-03</td> <td>2023-02-01</td> </tr> <tr> <td>1</td> <td>0</td> <td>2022-10-04</td> <td>2023-02-01</td> </tr> <tr> <td>1</td> <td>0</td> <td>2022-11-20</td> <td>2023-02-01</td> </tr> <tr> <td>1</td> <td>1</td> <td>2023-02-01</td> <td>NULL</td> </tr> </tbody> </table> </div><hr />
<python><apache-spark><pyspark><apache-spark-sql><window>
2023-03-30 12:43:02
1
347
Jresearcher
75,888,406
17,973,259
Space shooter player not becoming inactive while shooting
<p>I have this method that manages the Last Bullet game mode in my game. It keeps track of how many enemies are alive, the number of bullets that every player has available and the number of flying bullets and, if there are no remaining bullets, flying bullets and more than one enemy, the player becomes inactive. But here is the problem, regardless of how many enemies are on the screen, if the player keeps shooting, the remaining bullets are going negative and as long as there are flying bullets on screen, the player stays active, which I don t want to happen. Any ideas of how can I prevent this scenario from happening? I have a variable <code>bullets_allowed</code> in the game that increases or decreases the number of bullets that the player can have on the screen and I know that setting that to always be 1 would solve my issue but I don't want to be able to shoot only 1 at a time.</p> <pre class="lang-py prettyprint-override"><code>def last_bullet(self, thunderbird, phoenix): &quot;&quot;&quot;Starts the Last Bullet game mode in which the players must fight aliens but they have a limited number of bullets, when a player remains with no bullets he dies, when both players are out of bullets, the game is over.&quot;&quot;&quot; aliens_remaining = len(self.game.aliens.sprites()) flying_thunder_bullets = sum( bullet.rect.left &gt; 0 and bullet.rect.right &lt; self.settings.screen_width and bullet.rect.top &gt; 0 and bullet.rect.bottom &lt; self.settings.screen_height for bullet in self.game.thunderbird_bullets.sprites() ) flying_phoenix_bullets = sum( bullet.rect.left &gt; 0 and bullet.rect.right &lt; self.settings.screen_width and bullet.rect.top &gt; 0 and bullet.rect.bottom &lt; self.settings.screen_height for bullet in self.game.phoenix_bullets.sprites() ) if thunderbird.remaining_bullets &lt;= 0 and flying_thunder_bullets &lt;= 0 \ and aliens_remaining &gt; 0: thunderbird.state.alive = False if phoenix.remaining_bullets &lt;= 0 and flying_phoenix_bullets &lt;= 0 
\ and aliens_remaining &gt; 0: phoenix.state.alive = False if all(not player.state.alive for player in [thunderbird, phoenix]): self.stats.game_active = False </code></pre>
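Independent of the end-of-round bookkeeping, `remaining_bullets` going negative suggests the firing code never checks the budget before spawning a bullet. A guard at the point of shooting keeps the counter at zero or above, so the player goes inactive as soon as the last flying bullet leaves the screen, without restricting `bullets_allowed` to 1. A stripped-down sketch (the class is a stand-in for the real player object):

```python
class Player:
    """Minimal stand-in for the game's player state (assumption: the real
    shooting handler can call try_shoot() before creating a Bullet sprite)."""
    def __init__(self, remaining_bullets):
        self.remaining_bullets = remaining_bullets
        self.fired = 0

    def try_shoot(self):
        # Refuse to fire once the budget is spent, so the counter can
        # never go negative no matter how fast the fire key is pressed
        if self.remaining_bullets <= 0:
            return False
        self.remaining_bullets -= 1
        self.fired += 1
        return True

p = Player(remaining_bullets=2)
results = [p.try_shoot() for _ in range(5)]   # only the first two succeed
```

In the pygame event handler this would mean: only append a bullet sprite when `try_shoot()` returns `True`.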
<python><pygame>
2023-03-30 12:41:47
1
878
Alex
75,888,343
4,377,559
How to Pact test a dictionary object
<p>The below test will pass correctly, but if I post a body of <code>{&quot;a different key&quot; : 4.56}</code> it will fail as &quot;key&quot; is expected. In other words, the dictionary key is not flexible, only the float value.</p> <p>How can I define a pact where only the dictionary types matter, i.e. keys must be strings, values must be floats? The docs don't make this clear: <a href="https://github.com/pact-foundation/pact-python" rel="nofollow noreferrer">https://github.com/pact-foundation/pact-python</a></p> <pre><code>def test_case_1(pact, client): ( pact.given(&quot;object does not exist&quot;) .upon_receiving(&quot;a new post request&quot;) .with_request( &quot;post&quot;, &quot;/url/post/endpoint&quot;, body=Like({&quot;key&quot;: 1.23}), ) .will_respond_with(200, body={}) ) with pact: client.post(body={&quot;key&quot;: 4.56}) </code></pre>
<python><pact><pact-python>
2023-03-30 12:33:55
1
603
Conor
75,888,319
7,692,855
Python 3 SQLAlchemey UnicodeDecodeError: 'ascii' codec can't decode byte
<p>I have some code which I'm finally getting around to updating to Python 3.</p> <p>It runs in Docker using ubuntu:20.04 as the base image.</p> <p>I am using the SQLAlchemy query <code>.all()</code> method, <code>sqlalchemy.orm.Query.all()</code>.</p> <p>The code works with Python 2; however, it fails on Python 3 with the following error:</p> <blockquote> <p>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 1: ordinal not in range(128)</p> </blockquote> <p>I am struggling to understand why the code is trying to decode to ascii.</p> <p>I've set the locale in the Dockerfile, and running <code>locale.getlocale()</code> on the line above the SQLAlchemy line prints <code>('en_US', 'UTF-8')</code>.</p> <p>The SQLAlchemy connection url specifies utf8:</p> <p><code>sqlalchemy.url='mysql+pymysql:{server}&amp;charset=utf8&amp;binary_prefix=true</code></p> <p>I've read all the other similar questions but still cannot get this working.</p> <p>Update:</p> <p>I've tracked it down to one column:</p> <pre><code>from sqlalchemy import PickleType class Schedule(OrgRefMixin, DeclarativeBase): __tablename__ = 'flights' ... routing = Column(PickleType()) ... 
</code></pre> <p>This is a blob column in the database.</p> <pre><code>Traceback (most recent call last): File &quot;/opt/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 2446, in wsgi_app response = self.full_dispatch_request() File &quot;/opt/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1951, in full_dispatch_request rv = self.handle_user_exception(e) File &quot;/opt/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1820, in handle_user_exception reraise(exc_type, exc_value, tb) File &quot;/opt/venv/lib/python3.8/site-packages/flask/_compat.py&quot;, line 39, in reraise raise value File &quot;/opt/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1949, in full_dispatch_request rv = self.dispatch_request() File &quot;/opt/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1935, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File &quot;/code/flights/flights/json.py&quot;, line 182, in view_wrapper return view_func(*args, **kwargs) File &quot;/code/flights/flights/blueprints/flights.py&quot;, line 62, in get_flights flights = Flights.get(filters, **valid_args) File &quot;/code/flights/flights/resources/flights.py&quot;, line 254, in get response = query.all() File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/orm/query.py&quot;, line 2643, in all return list(self) File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/orm/loading.py&quot;, line 90, in instances util.raise_from_cause(err) File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py&quot;, line 202, in raise_from_cause reraise(type(exception), exception, tb=exc_tb, cause=cause) File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py&quot;, line 186, in reraise raise value File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/orm/loading.py&quot;, line 77, in instances rows = [keyed_tuple([proc(row) for proc in process]) File 
&quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/orm/loading.py&quot;, line 77, in &lt;listcomp&gt; rows = [keyed_tuple([proc(row) for proc in process]) File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/orm/loading.py&quot;, line 77, in &lt;listcomp&gt; rows = [keyed_tuple([proc(row) for proc in process]) File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/engine/result.py&quot;, line 93, in __getitem__ return processor(self._row[index]) File &quot;/opt/venv/lib/python3.8/site-packages/sqlalchemy/sql/sqltypes.py&quot;, line 1478, in process return loads(value) UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 1: ordinal not in range(128) </code></pre>
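The traceback ends in `loads(value)`, i.e. unpickling the blob. Pickles written by Python 2 store `str` data as raw bytes, and Python 3's unpickler decodes them as ASCII by default, which is exactly where the 0xe7 failure comes from. A sketch demonstrating this with a hand-built protocol-0 pickle of the Python 2 string `'\xe7a'`, plus a drop-in pickler for `PickleType` (the class name is made up; the `latin1` choice assumes the old blobs contain byte strings, where latin-1 round-trips losslessly):

```python
import pickle

# A protocol-0 pickle as Python 2 would write it for the str '\xe7a'
py2_pickle = b"S'\\xe7a'\n."

try:
    pickle.loads(py2_pickle)            # default encoding is ASCII
    raised = False
except UnicodeDecodeError:
    raised = True                       # same error class as in the traceback

# Decoding the old bytes as latin-1 succeeds for any byte value
value = pickle.loads(py2_pickle, encoding="latin1")

class Py2CompatPickle:
    """Hypothetical 'pickler' object for SQLAlchemy's PickleType, which
    accepts any object exposing dumps()/loads()."""
    @staticmethod
    def dumps(obj, protocol=pickle.HIGHEST_PROTOCOL):
        return pickle.dumps(obj, protocol)

    @staticmethod
    def loads(data):
        return pickle.loads(data, encoding="latin1")
```

In the model this would plug in as something like `routing = Column(PickleType(pickler=Py2CompatPickle))`; a longer-term fix is to re-pickle the data under Python 3.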
<python><sqlalchemy>
2023-03-30 12:31:59
1
1,472
user7692855
75,888,196
357,313
Why doesn't restore_best_weights=True update results?
<p>I found that <code>restore_best_weights=True</code> does not actually restore the best behavior. A simplified example with some dummy data:</p> <pre><code>import numpy as np from tensorflow.keras.utils import set_random_seed from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import RMSprop from tensorflow.keras.callbacks import EarlyStopping np.random.seed(1) set_random_seed(2) x = np.array([1., 2., 3., 4., 5.]) y = np.array([1., 3., 4., 2., 5.]) model = Sequential() model.add(Dense(2, input_shape=(1, ), activation='tanh')) model.add(Dense(4, activation='relu')) model.add(Dense(1)) model.compile(optimizer=RMSprop(learning_rate=0.1), loss='mse') stopmon = EarlyStopping(monitor='loss', patience=2, restore_best_weights=True, verbose=1) history = model.fit(x, y, epochs=100, verbose=2, callbacks=[stopmon]) res = model.evaluate(x, y, verbose=1) print(f'best={stopmon.best:.4f}, loss={res:.4f}') </code></pre> <p>The output (on my system) is:</p> <pre><code>Epoch 1/100 1/1 - 0s - loss: 11.8290 - 434ms/epoch - 434ms/step Epoch 2/100 1/1 - 0s - loss: 1.9091 - 0s/epoch - 0s/step Epoch 3/100 1/1 - 0s - loss: 1.5159 - 16ms/epoch - 16ms/step Epoch 4/100 1/1 - 0s - loss: 1.3921 - 0s/epoch - 0s/step Epoch 5/100 1/1 - 0s - loss: 1.6787 - 0s/epoch - 0s/step Epoch 6/100 Restoring model weights from the end of the best epoch: 4. 1/1 - 0s - loss: 2.0629 - 33ms/epoch - 33ms/step Epoch 6: early stopping 1/1 [==============================] - 0s 100ms/step - loss: 1.6787 best=1.3921, loss=1.6787 </code></pre> <p>It looks like the weights <em>are</em> set to those from epoch 4. Then why does the loss still evaluate to the higher value from epoch 6? Is there anything extra I should do to update the model or something?</p> <p>I use an up-to-date TensorFlow (version 2.12.0) on Windows x64 (Intel), <code>tf.version.COMPILER_VERSION == 'MSVC 192930140'</code>.</p>
<python><keras><tf.keras><early-stopping>
2023-03-30 12:18:25
1
8,135
Michel de Ruiter
75,888,185
128,618
Merge the list of dictionary if there are some condition is match?
<pre><code>input = [ {'date':'2023-03-1', 'item':'i1', 'balance': 11, 'warehouse' : 'W1'}, {'date':'2023-03-2', 'item':'i1', 'balance': 12, 'warehouse' : 'W1'}, {'date':'2023-03-3', 'item':'i1', 'balance': 13, 'warehouse' : 'W1'}, {'date':'2023-04-2', 'item':'i2', 'balance': 11, 'warehouse' : 'W2'}, {'date':'2023-04-3', 'item':'i2', 'balance': 10, 'warehouse' : 'W2'}, {'date':'2023-03-3', 'item':'i1', 'balance': 10, 'warehouse' : 'W3'}, ] </code></pre> <p>If I look up: <code>item='i1' and date='2023-03-3'</code></p> <p>I want the output like this:</p> <p><code>[{'date': '2023-03-3', 'item': 'i1', 'W1': 13, 'W2': 0, 'W3': 10}]</code></p> <p>The goal is to display this in the format:</p> <pre><code> Date | item | W1 | W2 | W3 2023-03-3 | i1 | 13 | 0 | 10 </code></pre>
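A plain-Python sketch of the lookup-and-pivot: collect the set of warehouses from the data, default each to 0, then overwrite with the balances of the matching rows (`lookup` is a made-up helper name):

```python
rows = [
    {'date': '2023-03-1', 'item': 'i1', 'balance': 11, 'warehouse': 'W1'},
    {'date': '2023-03-2', 'item': 'i1', 'balance': 12, 'warehouse': 'W1'},
    {'date': '2023-03-3', 'item': 'i1', 'balance': 13, 'warehouse': 'W1'},
    {'date': '2023-04-2', 'item': 'i2', 'balance': 11, 'warehouse': 'W2'},
    {'date': '2023-04-3', 'item': 'i2', 'balance': 10, 'warehouse': 'W2'},
    {'date': '2023-03-3', 'item': 'i1', 'balance': 10, 'warehouse': 'W3'},
]

def lookup(rows, item, date):
    # All warehouse names seen anywhere in the data become columns
    warehouses = sorted({r['warehouse'] for r in rows})
    out = {'date': date, 'item': item}
    out.update({w: 0 for w in warehouses})        # default every warehouse to 0
    for r in rows:
        if r['item'] == item and r['date'] == date:
            out[r['warehouse']] = r['balance']    # fill in matched balances
    return [out]

result = lookup(rows, item='i1', date='2023-03-3')
# [{'date': '2023-03-3', 'item': 'i1', 'W1': 13, 'W2': 0, 'W3': 10}]
```

From that merged dict, the tabular display is just a matter of printing the keys as a header row.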
<python>
2023-03-30 12:17:47
5
21,977
tree em
75,888,099
20,051,041
how to add subtitles in video with ffmpeg filter?
<p>I am having a hard time adding .srt subtitles to the newly created video. I am using Python.</p> <p>subtitles:</p> <pre><code>f&quot;{PROJECT_PATH}/data/subtitles/final_subtitle_srt/all_slides.srt&quot; </code></pre> <p>I have checked that they are correct.</p> <p>The pieces of my code that do not work:</p> <pre><code>audio = f'{PROJECT_PATH}/data/ppt-elements/audio_{file_id}.txt' images = f'{PROJECT_PATH}/data/ppt-elements/images_{file_id}.txt' image_input = ffmpeg.input(images, f='concat', safe=0, t=seconds).video audio_input = ffmpeg.input(audio, f='concat', safe=0, t=seconds).audio inputs = [image_input, audio_input] command = ( ffmpeg.filter('subtitles', f&quot;{PROJECT_PATH}/data/subtitles/final_subtitle_srt/all_slides.srt&quot;) .output(*inputs,f&quot;{PROJECT_PATH}/data/subtitles/final_subtitle_srt_all_slides.srt&quot;, f&quot;{PROJECT_PATH}/data/final-{file_id}.mp4&quot;, vf=&quot;fps=10,format=yuv420p&quot;, preset=&quot;veryfast&quot;, shortest=None, r=10, max_muxing_queue_size=4000, **additional_parameters, ) ) </code></pre> <p>Am I using the subtitles filter correctly?</p>
<python><ffmpeg><video-subtitles>
2023-03-30 12:10:06
1
580
Mr.Slow
75,887,939
12,297,666
Plotting CV Indices for Multiclass in Sklearn
<p>Following this <a href="https://scikit-learn.org/stable/auto_examples/model_selection/plot_cv_indices.html" rel="nofollow noreferrer">guide</a> from Sklearn, i have modified the code a bit to also show the classes in the legend:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import KFold from matplotlib.patches import Patch from sklearn.datasets import make_classification x_train, y_train = make_classification(n_samples=1000, n_features=10, n_classes=2) cmap_data = plt.cm.Paired cmap_cv = plt.cm.coolwarm def plot_cv_indices(cv, x_input, y_output, axis, n_folds, lw=10): &quot;&quot;&quot;Create a sample plot for indices of a cross-validation object.&quot;&quot;&quot; # Generate the training/testing visualizations for each CV split ii = 0 for ii, (tr, tt) in enumerate(cv.split(X=x_input, y=y_output)): # Fill in indices with the training/test groups indices = np.array([np.nan] * len(x_input)) indices[tt] = 1 indices[tr] = 0 # Visualize the results axis.scatter( range(len(indices)), [ii + 0.5] * len(indices), c=indices, marker=&quot;_&quot;, lw=lw, cmap=cmap_cv, vmin=-0.2, vmax=1.2, ) # Plot the data classes at the end axis.scatter( range(len(x_input)), [ii + 1.5] * len(x_input), c=y_output, marker=&quot;_&quot;, lw=lw ) # Formatting yticklabels = list(range(n_folds)) + [&quot;Class&quot;] axis.set( yticks=np.arange(n_folds + 1) + 0.5, yticklabels=yticklabels, xlabel=&quot;Samples&quot;, ylabel=&quot;CV Iterations&quot;, ylim=[n_folds + 1.2, -0.2], xlim=[0, len(x_input)], ) axis.set_title(&quot;{}&quot;.format(type(cv).__name__), fontsize=15) axis.legend( [Patch(color=cmap_cv(0.19)), Patch(color=cmap_cv(0.87)), Patch(color='darkslateblue'), Patch(color='yellow')], [&quot;Training&quot;, &quot;Validation&quot;, &quot;Class 0&quot;, &quot;Class 1&quot;], loc=(0.9, 1.02), ) return axis n_splits = 5 kfold = KFold(n_splits=n_splits) fig, ax = plt.subplots() plot_cv_indices(kfold, x_train, y_train, ax, n_splits) 
plt.tight_layout() fig.show() </code></pre> <p>So, for a binary problem, we get a figure like this:</p> <p><a href="https://i.sstatic.net/La9f0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/La9f0.png" alt="enter image description here" /></a></p> <p>It is &quot;easy&quot; to change the colors in the following part:</p> <pre><code>axis.legend( [Patch(color=cmap_cv(0.19)), Patch(color=cmap_cv(0.87)), Patch(color='darkslateblue'), Patch(color='yellow')], [&quot;Training&quot;, &quot;Validation&quot;, &quot;Class 0&quot;, &quot;Class 1&quot;], loc=(0.9, 1.02), ) </code></pre> <p>But if we have a multiclass problem:</p> <pre><code>x_train, y_train = make_classification(n_samples=1000, n_features=20, n_informative=10, n_classes=10) </code></pre> <p>We get the following figure:</p> <p><a href="https://i.sstatic.net/f5ma5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5ma5.png" alt="enter image description here" /></a></p> <p>So, we have two problems here:</p> <p>1 - Classes don't become clearly visible in the bottom bar;</p> <p>2 - And even if they were, we would have to manually change the colors for each one of them in the <code>axis.legend()</code> part of the code.</p> <p>Does anyone have a solution for this? To make a better visualization and to automatically get the colors from the class bar to match the ones in the legend?</p>
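One way to keep the class colors in the legend in sync with the class bar is to sample a single colormap for both: normalize each class id into [0, 1] with the same `Normalize`, pass the same `cmap`/`norm` pair to the scatter call, and build one legend patch per class from it. A sketch with toy labels (a qualitative map like `tab10` keeps up to 10 classes visually distinct):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Patch

y = np.array([0, 1, 2, 1, 0, 3, 2, 3])          # toy labels; 4 classes
classes = np.unique(y)
cmap_data = plt.cm.tab10                         # qualitative map, distinct colors
norm = plt.Normalize(classes.min(), classes.max())

fig, ax = plt.subplots()
# Class bar: pass the same cmap + norm that the legend will use
ax.scatter(range(len(y)), [0.5] * len(y), c=y, marker="_", lw=10,
           cmap=cmap_data, norm=norm)

# Legend patches generated from the same cmap/norm, one per class
patches = [Patch(color=cmap_data(norm(c)), label=f"Class {c}") for c in classes]
ax.legend(handles=patches, loc="upper right")
```

In `plot_cv_indices` this would mean deriving `classes = np.unique(y_output)` and replacing the hard-coded `darkslateblue`/`yellow` patches with the generated list, so 2 and 10 classes are handled by the same code.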
<python><scikit-learn>
2023-03-30 11:54:39
0
679
Murilo
75,887,761
1,150,683
Typing to mark return values of current class type or any of its subclasses
<p>I want to make sure that the <code>from_dict</code> in the following method works well in its subclasses as well. Currently, its typing does not work (mypy error &quot;Incompatible return value type&quot;). I think because the subclass is returning an instance of the subclass and not an instance of the super class.</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations from abc import ABC from dataclasses import dataclass from typing import ClassVar, Type @dataclass class Statement(ABC): @classmethod def from_dict(cls) -&gt; Statement: return cls() @dataclass class Parent(ABC): SIGNATURE_CLS: ClassVar[Type[Statement]] def say(self) -&gt; Statement: # Initialize a Statement through a from_dict classmethod return self.SIGNATURE_CLS.from_dict() @dataclass class ChildStatement(Statement): pass @dataclass class Child(Parent, ABC): SIGNATURE_CLS = ChildStatement def say(self) -&gt; ChildStatement: # Initialize a ChildStatement through a from_dict classmethod # that ChildStatement inherits from Statement return self.SIGNATURE_CLS.from_dict() </code></pre> <p>The code above yields this MyPy error:</p> <pre><code>Incompatible return value type (got &quot;Statement&quot;, expected &quot;ChildStatement&quot;) [return-value] </code></pre> <p>I think this is a use case for <code>TypeVar</code> in <code>Statement</code> but I am not sure how to implement and - especially - what the meaning behind it is.</p>
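The standard fix is a bound `TypeVar` on the `cls` parameter, so mypy types `from_dict` as returning whichever class it is called on. A runnable sketch (the `ABC` bases are omitted here for brevity, and the naming is illustrative):

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import TypeVar

# bound= ties the variable to Statement and all of its subclasses
TStatement = TypeVar("TStatement", bound="Statement")

@dataclass
class Statement:
    @classmethod
    def from_dict(cls: type[TStatement]) -> TStatement:
        # mypy now infers ChildStatement.from_dict() -> ChildStatement
        return cls()

@dataclass
class ChildStatement(Statement):
    pass

stmt = ChildStatement.from_dict()
```

On Python 3.11+ (or via `typing_extensions`) the same thing is expressed more directly with PEP 673's `Self`: `def from_dict(cls) -> Self`. The `Child.say` override then type-checks once `SIGNATURE_CLS` is annotated as `ClassVar[type[ChildStatement]]` in the subclass.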
<python><oop><subclass><mypy><python-typing>
2023-03-30 11:38:34
1
28,776
Bram Vanroy
75,887,656
10,155,536
botocore package in lambda python 3.9 runtime return error: "cannot import name "'DEPRECATED_SERVICE_NAMES'" from 'botocore.docs'"
<p>I am using the Lambda Python 3.9 runtime. I also use the default boto3 and botocore packages in Lambda.</p> <p>Today, I suddenly got this error: &quot;cannot import name &quot;'DEPRECATED_SERVICE_NAMES'&quot; from 'botocore.docs'&quot;. I only succeeded in fixing it when I added the botocore package to the Lambda runtime myself. I want to avoid that since it increases the size of the layer by 10 MB.</p> <p>Any help? Thanks.</p>
<python><lambda><boto3>
2023-03-30 11:29:38
5
401
Elad Hazan
75,887,646
9,859,642
Drop zeroes from xarray DataArray
<p>I have a DataArray with three coordinates and I wanted to drop arrays containing all zeroes from one of the dimensions, but I can't seem to get it right. The data array (old_array) looks as follows, and as you can see there are quite a few arrays that are filled with zeroes.</p> <pre><code>&lt;xarray.DataArray (dim1: 100, dim2: 100, dim3: 100)&gt; array([[[81.20, 13.20, 12.24, ..., 81.24, 14.61, 85.58], [31.34, 83.96, 02.97, ..., 46.88, 01.81, 48.24], [43.54, 12.50, 35.33, ..., 39.23, 91.93, 01.21], [37.95, 75.59, 71.00, ..., 15.14, 68.76, 44.56], [54.31, 48.99, 85.59, ..., 15.58, 86.48, 81.19]], [[ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , ... 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]], [[ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]]]) Coordinates: * dim1 (dim1) &lt;U7 'x0 y0 z0' 'x1 y0 z0' 'x2 y0 z0' ... 'x10 y10 z8' 'x10 y10 z9' 'x10 y10 z10' * dim2 (dim2) float64 0.0 0.1 0.2 ... 9.8 9.9 10.0 * dim3 (dim3) float64 0.5 1.0 1.5 ... 49.0 49.5 50.0 </code></pre> <p>I tried</p> <pre><code>new_array = old_array.drop_sel(dim1 = np.zeros(100)) </code></pre> <p>But it gives me KeyError: &quot;not all values found in index 'dim1'&quot;</p> <p>I also tried</p> <pre><code>new_array = old_array.drop(np.zeros(100), dim1) </code></pre> <p>But then I have a ValueError: dimension 'x1 y5 z3' does not have coordinate labels.</p> <p>I wanted to tell xarray &quot;If 'x2 y3 z7' is an array with all zeroes, then delete it&quot;, but I don't know how to do it.</p>
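`drop_sel` expects coordinate *labels* (e.g. specific `dim1` strings), not data values, which is why both attempts fail. The usual approach is a boolean mask over `dim1` saying "this slice has at least one nonzero value" and positional indexing with it. A numpy sketch of the mask logic (with xarray the same idea would be `mask = (old_array != 0).any(dim=("dim2", "dim3"))` followed by `old_array.isel(dim1=mask)`, hedged since I cannot run it against the real file):

```python
import numpy as np

# Toy (3, 2, 2) array standing in for the (100, 100, 100) DataArray
a = np.array([
    [[1.0, 2.0], [3.0, 4.0]],
    [[0.0, 0.0], [0.0, 0.0]],     # all-zero slice along dim1 -> should be dropped
    [[5.0, 0.0], [0.0, 6.0]],
])

# True where a dim1 slice has at least one nonzero value
keep = (a != 0).any(axis=(1, 2))
trimmed = a[keep]
```

Using `.any` on the negated comparison (rather than `.all(... == 0)` plus an inversion) keeps the mask aligned with the slices you want to keep.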
<python><numpy><python-xarray>
2023-03-30 11:28:51
1
632
Anavae
75,887,378
839,497
requests.get connection refused error (exception) not caught
<p>I'm trying to catch a connection error when calling an HTTP GET in a Python script.<br /> I found this <a href="https://stackoverflow.com/questions/55414137/how-can-i-catch-a-connection-refused-error-in-a-proper-way">post</a> discussing the exact same issue.</p> <p>For me, 99% of the calls go smoothly, but every once in a while it seems the server refuses the connection, and I am not able to catch that specific error when it happens. Other exceptions are caught.</p> <pre><code>import requests import sys params = { &quot;pair&quot;: &quot;xbtusd&quot;, &quot;interval&quot;: 5 } url = &quot;https://api.kraken.com/0/public/OHLC&quot; try: response = requests.get(url=url, params=params) except requests.exceptions.ConnectionError as e: print(&quot;get requests.ConnectionError: &quot; + str(sys.exc_info()[0]) + e.args[0].reason.errno) except requests.exceptions.RequestException as e: print(&quot;get requests.RequestException: &quot; + str(sys.exc_info()[0]) + e.args[0].reason.errno) except Exception as e: print(&quot;get general exception: &quot; + str(sys.exc_info()[0]) + e.args[0].reason.errno) except: # if no connection exception is raised - catch and try again print(&quot;get general exception2: &quot; + str(sys.exc_info()[0])) print(response.status_code) if response.status_code in (200, 201, 202): print(response.json()) </code></pre> <p>This is the output when the exception occurs.
The prints I added are not written.</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\Oren\anaconda3\envs\v38\lib\runpy.py&quot;, line 194, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;C:\Users\Oren\anaconda3\envs\v38\lib\runpy.py&quot;, line 87, in _run_code exec(code, run_globals) File &quot;c:\Users\Oren\.vscode\extensions\ms-python.python-2023.4.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher\__main__.py&quot;, line 91, in &lt;module&gt; main() File &quot;c:\Users\Oren\.vscode\extensions\ms-python.python-2023.4.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher\__main__.py&quot;, line 47, in main launcher.connect(host, port) File &quot;c:\Users\Oren\.vscode\extensions\ms-python.python-2023.4.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher/../..\debugpy\launcher\__init__.py&quot;, line 27, in connect sock.connect((host, port)) ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it </code></pre>
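Reading the traceback closely, it is raised inside VS Code's debugpy launcher (`sock.connect` in `launcher/__init__.py`), not inside the `requests.get` call, which would explain why none of the `except` blocks fire; that reading is an inference, not verified. For the requests side itself, a hedged sketch of a retry wrapper; the `get` parameter is injectable purely so the sketch can be exercised without a network:

```python
import time
import requests

def get_with_retry(url, params=None, retries=3, delay=1.0, get=requests.get):
    """Call `get`, retrying on connection-level failures before giving up."""
    last_exc = None
    for _ in range(retries):
        try:
            return get(url, params=params)
        except requests.exceptions.ConnectionError as exc:  # refused, reset, DNS...
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Note also that `"..." + e.args[0].reason.errno` in the prints above would likely raise its own `TypeError` (str + int), hiding the message it is trying to log.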
<python><error-handling><python-requests>
2023-03-30 10:59:20
0
818
OJNSim
75,887,280
145,413
Avoiding python ints being truncated to 32 bits when sent through a Qt signal-slot connection
<h2>Issue</h2> <p>I have some <code>QObject</code>s living on different threads that communicate using Qt Signals and Slots. I'm using <em>PyQt5</em> (through <em>qtpy</em>), but I don't think it's specific to that API or version.</p> <p>One of the values being sent is an integer on the order of 2^40, but as the C++ type <code>int</code> is only 32 bits wide, this is silently being truncated.</p> <p>I've replaced the Signal&amp;Slot types with <code>'long'</code>, but that only works on some machines (see results below), so I've fallen back on wrapping the value in a 1-element tuple, as Qt will pass Python objects on unmodified.</p> <h2>Testcase</h2> <pre class="lang-py prettyprint-override"><code>from qtpy import QtCore class Sender(QtCore.QObject): intsig = QtCore.Signal(int) longsig = QtCore.Signal('long') tuplesig = QtCore.Signal(tuple) done = QtCore.Signal() def __init__(self, thread): super().__init__() self.moveToThread(thread) thread.started.connect(self.start) self.done.connect(thread.quit) def start(self): QtCore.QTimer.singleShot(1000, self.send) def send(self, value=123456789012345): self.intsig.emit(value) self.tuplesig.emit((value,)) self.longsig.emit(value) self.done.emit() class Receiver(QtCore.QObject): def __init__(self, thread): super().__init__() self.moveToThread(thread) @QtCore.Slot(int) def intslot(self, val): print(f&quot;intslot: {val}&quot;) @QtCore.Slot('long') def longslot(self, val): print(f&quot;longslot: {val}&quot;) @QtCore.Slot(tuple) def tupleslot(self, val): print(f&quot;tupleslot: {val[0]}&quot;) if __name__ == &quot;__main__&quot;: qa = QtCore.QCoreApplication(['foo']) sthread = QtCore.QThread() rthread = QtCore.QThread() sender = Sender(sthread) receiver = Receiver(rthread) sender.intsig.connect(receiver.intslot) sender.longsig.connect(receiver.longslot) sender.tuplesig.connect(receiver.tupleslot) sthread.finished.connect(rthread.quit) rthread.finished.connect(qa.quit) rthread.start() sthread.start() qa.exec() sthread.wait() 
rthread.wait() </code></pre> <h2>Results</h2> <p>Result on my development machine (Gentoo Linux (updated in March 2023), amd64):</p> <pre><code>$ python3.8 --version &amp;&amp; python3.8 -m test Python 3.8.16 intslot: -2045911175 tupleslot: 123456789012345 longslot: 123456789012345 $ python3.9 --version &amp;&amp; python3.9 -m test Python 3.9.16 intslot: -2045911175 tupleslot: 123456789012345 longslot: 123456789012345 $ python3.10 --version &amp;&amp; python3.10 -m test Python 3.10.10 intslot: -2045911175 tupleslot: 123456789012345 longslot: 123456789012345 </code></pre> <p>Result on the target machine (Raspberry Pi OS (based on Debian 11.6), RPi 4 Model B Rev 1.4, armv7l):</p> <pre><code>$ python --version &amp;&amp; python -m test Python 3.9.2 intslot: -2045911175 tupleslot: 123456789012345 Traceback (most recent call last): File &quot;/tmp/test.py&quot;, line 21, in send self.longsig.emit(value) TypeError: Sender.longsig[long].emit(): argument 1 has unexpected type 'int' Aborted </code></pre> <h2>Question</h2> <p>Am I missing something with regard to coercing python primitives to the right C++ types? Or is the type mapping subtly architecture-/machine-/version-dependent? Is wrapping things that might not fit in the basic C++ types in python objects the best way to deal with this issue?</p>
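The `intslot` value in the results is consistent with the C `int` hypothesis: -2045911175 is exactly the low 32 bits of 123456789012345 reinterpreted as a signed 32-bit integer, which can be checked in pure Python (this only confirms the truncation mechanism; it says nothing about which Qt type names are portable):

```python
def as_int32(value: int) -> int:
    """Reinterpret the low 32 bits of a Python int as a signed C int."""
    low = value & 0xFFFFFFFF
    return low - 0x100000000 if low >= 0x80000000 else low

print(as_int32(123456789012345))  # -2045911175, matching intslot's output
```

Given that, declaring the signal as `Signal(object)` may be a less surprising spelling than the 1-element tuple, since Qt passes Python objects through unmodified either way; that is a suggestion, not something verified on the armv7l target.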
<python><qt><pyqt5><qt-signals>
2023-03-30 10:51:16
0
396
AI0867
75,887,199
9,468,092
How to optimize a Python script for inserting data into a database and pushing data to an AWS SQS queue with large datasets?
<p>I have a python script which reads data from a CSV and performs 2 tasks for each row:</p> <ol> <li>Inserts the row into a table in a PostgreSQL database.</li> <li>Pushes the row to an AWS SQS queue.</li> </ol> <p>This script performs well when I have a small dataset of ~1-10k records, but as the size of my dataset increases, the script execution time increases exponentially.</p> <p>I did some benchmarking and realised that when the dataset contains ~1M records, my script takes &gt;100 minutes to insert into the database and push to the queue.</p> <p>I did some digging and found that my DB inserts are much faster than pushing to the queue.</p> <p>I thought <code>asyncio</code> would solve my problem if I synchronously inserted into the database and asynchronously pushed to the queue. But what it does is something that I can't use: it batches all the asynchronous requests in an array and, after the synchronous task is completed, starts executing the async tasks. This is a no-go for me because I will be running this script in a serverless environment, where the execution will be stopped after the database insertions are finished.</p> <p>An ideal scenario for me would be to run database insertions on one thread and queue pushes on another, which I know is not possible in Python.</p> <p>Is there any other way or pattern that would help in my case? Any kind of suggestions are appreciated.</p> <p>Thank you in advance.</p>
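Since the queue pushes dominate, one commonly suggested lever (an assumption here, not something benchmarked against this exact workload) is SQS's `send_message_batch`, which accepts up to 10 messages per API call and so cuts the number of round trips roughly tenfold. A sketch of the batching helper; the boto3 call itself is left commented since it needs real AWS credentials:

```python
import json

def to_sqs_batches(rows, batch_size=10):
    """Group rows into SQS send_message_batch payloads (max 10 entries per call)."""
    batch = []
    for i, row in enumerate(rows):
        batch.append({"Id": str(i), "MessageBody": json.dumps(row)})
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# With a real queue (credentials required), each batch becomes one API call:
# for entries in to_sqs_batches(rows):
#     sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
```

Batching also keeps per-call latency bounded, which matters in a serverless runtime with a hard execution deadline.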
<python><amazon-web-services><concurrency>
2023-03-30 10:42:13
1
890
Dipanshu Chaubey
75,886,959
497,517
Installing Plotty on Python
<p>I'm trying to install Plotty in PyCharm. When I try to add it in the 'Python Interpreter' Install section it can only find plottyprint.</p> <p>How can I get Plotty to install?</p> <p>Many thanks for any help.</p>
<python>
2023-03-30 10:20:33
2
7,957
Entropy1024
75,886,674
5,312,965
How to compute sentence level perplexity from hugging face language models?
<p>I have a large collection of documents, each consisting of ~ 10 sentences. For each document, I wish to find the sentence that maximises perplexity, or equivalently the loss from a fine-tuned causal LM. I have decided to use Hugging Face and the <code>distilgpt2</code> model for this purpose. I have 2 problems when trying to do this in an efficient (vectorized) fashion:</p> <ol> <li><p>The tokenizer requires padding to work in batch mode, but when computing the loss on padded <code>input_ids</code> those pad tokens are contributing to the loss. So the loss of a given sentence depends on the length of the longest sentence in the batch, which is clearly wrong.</p> </li> <li><p>When I pass a batch of input IDs to the model and compute the loss, I get a scalar, as it (mean?) pools across the batch. I instead need the loss per item, not the pooled one.</p> </li> </ol> <p>I made a version that operates on a sentence-by-sentence basis and, while correct, it is extremely slow (I want to process ~ 25m sentences total).
Any advice?</p> <pre><code># Init import spacy from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained(&quot;distilgpt2&quot;) tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained(&quot;clm-gpu/checkpoint-138000&quot;) segmenter = spacy.load('en_core_web_sm') # That's the part I need to vectorise, surely within a document (bsize ~ 10) # and ideally across documents (bsize as big as my GPU can handle) def select_sentence(sentences): &quot;&quot;&quot;We pick the sentence that maximizes perplexity&quot;&quot;&quot; max_loss, best_index = 0, 0 for i, sentence in enumerate(sentences): encodings = tokenizer(sentence, return_tensors=&quot;pt&quot;) input_ids = encodings.input_ids loss = model(input_ids, labels=input_ids).loss.item() if loss &gt; max_loss: max_loss = loss best_index = i return sentences[best_index] for document in documents: # documents and write() defined elsewhere sentences = [sentence.text.strip() for sentence in segmenter(document).sents] best_sentence = select_sentence(sentences) write(best_sentence) </code></pre>
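Both symptoms above, pad tokens leaking into the loss and the scalar mean over the batch, go away if the token-level cross-entropy is computed with `reduction="none"` and masked with the attention mask before averaging per row. A hedged sketch of that core step against plain tensors, so it runs without any checkpoint; in real use `logits` would come from `model(input_ids, attention_mask=attention_mask).logits`:

```python
import torch
import torch.nn.functional as F

def per_item_loss(logits, input_ids, attention_mask):
    """Mean causal-LM loss per batch item, ignoring padded positions."""
    shift_logits = logits[:, :-1, :]           # token t predicts token t+1
    shift_labels = input_ids[:, 1:]
    shift_mask = attention_mask[:, 1:].float()
    tok_loss = F.cross_entropy(
        shift_logits.transpose(1, 2),          # (batch, vocab, seq) for CE
        shift_labels,
        reduction="none",
    )
    # Zero out pad positions, then average over each row's real length only.
    return (tok_loss * shift_mask).sum(dim=1) / shift_mask.sum(dim=1)
```

An `argmax` over the returned vector then selects the highest-loss sentence per batch, replacing the Python-level loop.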
<python><nlp><huggingface-transformers><large-language-model><huggingface-evaluate>
2023-03-30 09:53:14
1
830
pilu
75,886,663
673,600
Driving a colab python script from Google sheets
<p>Here is what I need to do.</p> <ol> <li>Populate a drop-down list from a python function in <code>colab</code>.</li> <li>Allow the user to select one item and, after hitting go, fire another python function.</li> </ol> <p>I can write to and read from a Google Sheet from colab, but I have no way to drive Python this way. I cannot seem to find any examples.</p>
<python><google-sheets><google-colaboratory>
2023-03-30 09:52:07
0
6,026
disruptive
75,886,387
8,477,952
Python equivalent for a C# Object
<p>We have a python script which can interact with a couple of Bosch Rexroth servo drivers. We can successfully read data from their parameters, but we cannot write to parameters. We use the Bosch Easy Automation Library (EAL.dll).</p> <pre><code>import clr clr.AddReference(&quot;EAL&quot;) import EAL # imports EAL.dll from EAL.EALConnection import * from System import Console from System.Threading import Thread </code></pre> <p>The most probable cause is that we are using the <code>WriteData()</code> method wrong. The drives should accept new parameters; we can do so in Indraworks. <a href="https://apps.boschrexroth.com/docs/oci/eal/html/75fb2cf5-0741-ef4c-7e4c-8582551d8baa.htm" rel="nofollow noreferrer">Documentation</a> dictates that the 'data' must be an object type. So far everything has been a string: it is how we read data in, how we stuff it into JSON, etc.</p> <p>This is the code we were using:</p> <pre><code> x = json.loads( message ) servoid = x[&quot;servoid&quot;] parameter = x[&quot;parameter&quot;] value = x[&quot;value&quot;] #parameter = &quot;P-0-4006.0.0&quot; # TEMPORAL OVERRIDE print(&quot;servoid: &quot; + servoid ) print(&quot;parameter: &quot; + parameter ) print(&quot;value: &quot; + value ) print() try: if servoid == &quot;0&quot;: conn1.Parameter.Axes[0].WriteData( value, parameter ) if servoid == &quot;1&quot;: conn2.Parameter.Axes[0].WriteData( value, parameter ) toWrite = True except EAL.Exceptions.EALException: print(&quot;The given data type and system type are not compatible.&quot;) </code></pre> <p>To solve the problem we tried hardcoding value as:</p> <pre><code>value = [0.0000, 0.0000, 3000.0000, 0.0000] </code></pre> <p>and</p> <pre><code>value = {0.0000, 0.0000, 3000.0000, 0.0000} </code></pre> <p>but they yield the same result.
<code>The given data type and system type are not compatible.</code></p> <p>We have not tried to make an actual class and create such an object.</p> <p>What is the Python equivalent for these signatures?</p> <pre><code>C#: void WriteData( Object data, string idn ) Visual basic: Sub WriteData ( data As Object, idn As String ) Visual C++: void WriteData( Object^ data, String^ idn ) </code></pre> <p>EDIT: In order to read the data we use <code>ReadDataAsString()</code>. Needless to say, this returns a string.</p> <p>The read data is captured like &quot;0.0000 0.0000 3000.0000 0.0000&quot;, but this is heavily dependent on which parameter you read. There is also a parameter which returns 64 binary values, like this:</p> <pre><code>print(&quot;reading value from axis 0&quot;) value = conn1.Parameter.Axes[0].ReadDataAsString(parameter) print(&quot;read value from axis 0: {}&quot;.format(value)) </code></pre>
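Since EAL is driven through pythonnet (`import clr`), the `Object` parameter most plausibly wants a real .NET array: pythonnet will not coerce the Python list or set literals tried above into one. A hedged, untested sketch (no drive hardware here to verify against, and `System.Double` is an assumption; the parameter may expect `System.Single` or strings instead):

```python
import clr  # pythonnet, already in use by the script above
from System import Array, Double

def to_clr_doubles(values):
    """Marshal a Python iterable into a System.Double[] for WriteData."""
    return Array[Double]([float(v) for v in values])

value = to_clr_doubles([0.0, 0.0, 3000.0, 0.0])
# conn1.Parameter.Axes[0].WriteData(value, parameter)
```

This sketch requires pythonnet and a loaded CLR, so it cannot run standalone; the single point it illustrates is the `Array[Double](...)` construction.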
<python><object>
2023-03-30 09:24:46
0
407
bask185
75,886,385
6,281,366
pydantic - copy object fields to another, while excluding several fields
<p>I have a pydantic class:</p> <pre><code>class SomeData(BaseModel): id: int x: str y: str z: str </code></pre> <p>And let's say I have two objects of this class, obj1 and obj2.</p> <p>Is there any simple way I can copy obj2 into obj1 while ignoring a subset of fields? For example, copy all SomeData fields except [id, z].</p>
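One way this is commonly done (pydantic v1 method names below; v2 spells them `model_copy` and `model_dump`) is to dump the source model excluding the unwanted fields, then pass that as the `update` of a copy of the destination:

```python
from pydantic import BaseModel

class SomeData(BaseModel):
    id: int
    x: str
    y: str
    z: str

def copy_from(dst: SomeData, src: SomeData, exclude: set) -> SomeData:
    """Return a copy of dst taking every field from src except those in `exclude`."""
    return dst.copy(update=src.dict(exclude=exclude))

obj1 = SomeData(id=1, x="a", y="b", z="c")
obj2 = SomeData(id=2, x="X", y="Y", z="Z")
obj3 = copy_from(obj1, obj2, {"id", "z"})  # id and z stay obj1's
```

Note `copy(update=...)` skips validation of the updated values, which is usually fine when both sides are already-validated instances of the same model.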
<python><pydantic>
2023-03-30 09:24:36
3
827
tamirg
75,886,353
13,518,907
Python - Two different bar charts next to each other
<p>I am using matplotlib to analyze my data. For this I created a dataframe with the following structure:</p> <pre><code>merge.set_index('index', inplace=True) print(merge) username mentioned_user index matthiashauer 73 10 derya_tn 67 5 renatekuenast 36 9 ralf_stegner 35 73 mgrossebroemer 33 12 ... ... ... katrinhelling 1 1 gydej 1 2 martingassner 1 2 daniludwigmdb 1 3 philipphartewig 1 1 </code></pre> <p>Now I want to plot two bar charts in one row. On the left side, there should be the bar chart with the &quot;username&quot; column (ascending, first 10 biggest values) and on the right side there should be the bar chart with the &quot;mentioned_user&quot; column (ascending, first 10 biggest values). As the values of the columns are different, the y-axis label for each bar chart has to be different. Here is the plot that I have so far:</p> <pre><code>merges = merge[:30] font_color = '#525252' hfont = {'fontname':'Calibri'} facecolor = '#eaeaf2' color_red = '#fd625e' color_blue = '#01b8aa' index = merges.index column0 = merges['username'] column1 = merges['mentioned_user'] title0 = 'Spreading Hate' title1 = 'Receiving Hate' fig, axes = plt.subplots(figsize=(10,5), facecolor=facecolor, ncols=2, sharey=True) fig.tight_layout() axes[0].barh(index, column0, align='center', color=color_red, zorder=10) axes[0].set_title(title0, fontsize=18, pad=15, color=color_red, **hfont) axes[1].barh(index, column1, align='center', color=color_blue, zorder=10) axes[1].set_title(title1, fontsize=18, pad=15, color=color_blue, **hfont) # To show data from highest to lowest plt.gca().invert_yaxis() axes[0].set(yticks=merges.index, yticklabels=merges.index) axes[0].yaxis.tick_left() axes[1].yaxis.tick_right() axes[0].tick_params(axis='y', colors='black') # tick color for label in (axes[0].get_xticklabels() + axes[0].get_yticklabels()): label.set(fontsize=13, color=font_color, **hfont) for label in (axes[1].get_xticklabels() + axes[1].get_yticklabels()): label.set(fontsize=13, color=font_color,
**hfont) plt.subplots_adjust(wspace=0, top=0.85, bottom=0.1, left=0.18, right=0.95) filename = 'politicians_spread_vs_receive_hate' plt.savefig(filename+'.png', facecolor=facecolor) </code></pre> <p><a href="https://i.sstatic.net/on5hK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/on5hK.png" alt="enter image description here" /></a></p> <p>For the left plot, I get the correct order and y-axis labels. However for the right plot, I would need to order the data as well and would also need another y-axis label on the right side.</p> <p>How can I do this?</p> <p>Thanks in advance!</p>
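A plausible culprit for the shared ordering is `sharey=True`: both panels then live on one categorical axis, so the right panel inherits the left panel's order and labels. A sketch (column names mirror the question's) that gives each panel its own top-10 ordering and its own side of labels by dropping the share; styling is stripped down to the structural part:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

def plot_two_ranked(merge, left_col="username", right_col="mentioned_user", top=10):
    """One barh panel per column, each with its own descending order and labels."""
    left = merge[left_col].nlargest(top)
    right = merge[right_col].nlargest(top)
    fig, axes = plt.subplots(ncols=2, figsize=(10, 5))  # note: no sharey
    axes[0].barh(left.index, left.values, color="#fd625e")
    axes[1].barh(right.index, right.values, color="#01b8aa")
    for ax in axes:
        ax.invert_yaxis()          # biggest bar on top
    axes[1].yaxis.tick_right()     # right panel gets labels on its right side
    return fig, left, right
```

The return of the two sorted Series is only there so the ordering can be checked; in the real script the titles, fonts, and `subplots_adjust` tuning from above would be layered back on.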
<python><matplotlib><plot><bar-chart>
2023-03-30 09:21:22
1
565
Maxl Gemeinderat
75,886,320
3,007,402
Extend psycopg2 cursor class' __iter__ method
<p>I am creating server-side cursors at several places during the course of a long ETL process. I want to automatically close the db connection once all rows are fetched from a server-side cursor.</p> <p>Instead of doing that manually at every place the cursor is created, I want to define a custom <code>psycopg2</code> cursor class that closes the connection on exhausting iteration so that the client code(calling function) doesn't have to worry about open connections.</p> <p>I tried the below but it's giving <code>RecursionError</code>. The statement <code>yield from super().__iter__()</code> works as expected when I tried extending other iterables, for e.g. <code>dict</code>.</p> <pre class="lang-py prettyprint-override"><code>class AutoCloseCur(psycopg2.extensions.cursor): def __iter__(self): yield from super().__iter__() self.close() self.connection.close() </code></pre> <p>Am I extending the right <code>class</code>? Do I need to override any other dunder method? Or does psycopg2 provide any function for achieving my goal?</p>
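The `RecursionError` is consistent with psycopg2's C-level `__iter__` returning the cursor object itself: `yield from super().__iter__()` then calls `iter()` on that returned cursor, which re-enters the overridden generator `__iter__`, and so on (this is an inference from the symptom, not from psycopg2's source). The mechanism, and a fetch-based workaround that never re-enters `iter()`, can be reproduced without a database:

```python
class FakeCursor:
    """Stands in for psycopg2's C cursor: __iter__ returns the cursor itself."""
    def __init__(self, rows):
        self._rows = iter(rows)
        self.closed = False
    def __iter__(self):
        return self
    def __next__(self):
        return next(self._rows)
    def fetchone(self):
        return next(self._rows, None)
    def close(self):
        self.closed = True

class AutoCloseCursor(FakeCursor):
    def __iter__(self):
        # `yield from super().__iter__()` would call iter() on the returned
        # cursor, re-entering this generator and recursing until the limit.
        while (row := self.fetchone()) is not None:
            yield row
        self.close()
```

Against real psycopg2 the same shape should apply: loop on `self.fetchone()` (or call the parent's `__next__` directly) instead of `yield from`, then close `self.connection` in the exhaustion path; connection handling is left out of this sketch.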
<python><psycopg2>
2023-03-30 09:18:03
0
2,868
Shiva
75,886,111
602,117
String manipulation of cell contents in polars
<p>In polars, I am trying to perform selection of rows and create a new content based on string manipulation. However, the python string manipulation commands below don't work. I've seen polars uses regular expressions, but am unsure how to use this to create a number from the option_type column using everything before the '_'.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl columns = ['2022-03-01_bid', '2022-03-01_ask', 'option_type'] data = [ [100.0, 110.0, '100_P'], [100.0, 110.0, '100_C'], [100.0, 110.0, '200_P'], [100.0, 110.0, '200_C'], [100.0, 110.0, '300_P'], [100.0, 110.0, '300_C'], [100.0, 110.0, '400_P'], [100.0, 110.0, '400_C'], [100.0, 110.0, '500_P'], [100.0, 110.0, '500_C'], ] df = pl.DataFrame(data, orient=&quot;row&quot;, schema=columns) # Filter rows where option_type ends with &quot;P&quot; df_filtered = df.filter(pl.col(&quot;option_type&quot;).str.ends_with(&quot;_P&quot;)) </code></pre> <pre><code>shape: (5, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ 2022-03-01_bid ┆ 2022-03-01_ask ┆ option_type β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════════β•ͺ═════════════║ β”‚ 100.0 ┆ 110.0 ┆ 100_P β”‚ β”‚ 100.0 ┆ 110.0 ┆ 200_P β”‚ β”‚ 100.0 ┆ 110.0 ┆ 300_P β”‚ β”‚ 100.0 ┆ 110.0 ┆ 400_P β”‚ β”‚ 100.0 ┆ 110.0 ┆ 500_P β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I get an error with my next step:</p> <pre class="lang-py prettyprint-override"><code># Create a new column &quot;strike&quot; df_filtered = df_filtered.with_columns( pl.col(&quot;option_type&quot;).str.split(&quot;_&quot;).str[0].astype(int) ) </code></pre> <blockquote> <p>TypeError: 'ExprStringNameSpace' object is not subscriptable</p> </blockquote> <p>I'm trying to get 
the following output:</p> <pre><code>shape: (5, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ 2022-03-01_bid ┆ 2022-03-01_ask ┆ option_type ┆ strike β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ str ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════════β•ͺ═════════════β•ͺ════════║ β”‚ 100.0 ┆ 110.0 ┆ 100_P ┆ 100 β”‚ β”‚ 100.0 ┆ 110.0 ┆ 200_P ┆ 200 β”‚ β”‚ 100.0 ┆ 110.0 ┆ 300_P ┆ 300 β”‚ β”‚ 100.0 ┆ 110.0 ┆ 400_P ┆ 400 β”‚ β”‚ 100.0 ┆ 110.0 ┆ 500_P ┆ 500 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><dataframe><python-polars>
2023-03-30 08:57:24
2
1,019
mr_js
75,885,945
14,912,118
How to pass shell script variable to python in databricks
<p>I am currently trying to use a shell script variable in Python.</p> <p>Below is the code.</p> <pre><code>%sh export MY_VAR=&quot;my_value&quot; echo $MY_VAR </code></pre> <pre><code>%python import os my_var = os.environ.get(&quot;MY_VAR&quot;) print(my_var) </code></pre> <p>Code screenshot</p> <p><a href="https://i.sstatic.net/VL9hN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VL9hN.png" alt="Code screenshot" /></a></p> <p>I am getting the value in the shell script, but when I try to get the value in Python with <code>os.environ.get(&quot;MY_VAR&quot;)</code> I get <code>None</code>.</p> <p>Could anyone help me resolve the above issue?</p>
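A `%sh` cell runs in a child shell process, and variables exported there die when that process exits, so a later `%python` cell's `os.environ` cannot see them; that is the usual explanation for the `None`, though not verified against every Databricks runtime. Two workarounds sketched below, with a `subprocess` call standing in for the `%sh` cell: set the variable from Python directly, or persist it through a file:

```python
import os
import subprocess
import tempfile

# Workaround 1: set the variable in the Python process itself.
os.environ["MY_VAR"] = "my_value"

# Workaround 2: the shell step writes a file, Python reads it back.
path = os.path.join(tempfile.gettempdir(), "my_var.txt")
subprocess.run(["sh", "-c", f"printf my_value > {path}"], check=True)
with open(path) as fh:
    my_var = fh.read()

print(os.environ.get("MY_VAR"), my_var)
```

In a real notebook the `subprocess.run` line would simply be the `%sh` cell ending in `printf "$MY_VAR" > /tmp/my_var.txt`.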
<python><shell><databricks>
2023-03-30 08:43:20
1
427
Sharma
75,885,793
20,102,061
How to set a tkinter textbox to be in the bottom right of the window
<p>I am working on a simple UI for my electronics project application, I am using tkinter as my window and want to add a text box that will be used as a logger. My screen size is the window's initial size and I like the textbox to be at the bottom right of the window.</p> <p>this is a test code that I am using to try and figure it out:</p> <pre><code>class Window: def __init__(self, WIDTH, HEIGHT, WIN) -&gt; None: &quot;&quot;&quot; Parameters ---------- self.width : int The width of )the window created. self.height : int The height of the window created. self.window : tk.TK The window object. Variables --------- self.data : dict A dictonary containing all information about buttons creates and titles displayed on screen. &quot;&quot;&quot; self.width = WIDTH self.height = HEIGHT self.window = WIN self.data : dict = { &quot;Title&quot; : None, &quot;btns&quot; : {} } def constract(self, title : str, background_color : str) -&gt; None: &quot;&quot;&quot; Creates the main window adds a title and a background color. Parameters ---------- title : str A string which will serve as the window's title. backgound_color : str A string represents a hex code color (e.g. #FFFFFF). &quot;&quot;&quot; self.window.title(title) self.window.geometry(f'{self.width}x{self.height}') self.window.configure(bg=background_color) def header(self, text : str, fill : str = &quot;black&quot;, font : str = 'Arial 28 bold', background : str ='#E1DCDC') -&gt; None: &quot;&quot;&quot; Displays a title on the screen. Parametes --------- text : str The text which will be displayed on the screen. fill : str The color of the text, can be an existing color in tkinter or a custom hex code color (e.g. #FFFFFF). font : str The font type, the size of the letters and other designs for the text. backgound : str The color of the box around the text, can be an existing color in tkinter or a custom hex code color (e.g. #FFFFFF). 
&quot;&quot;&quot; T = Label(self.window, text=text, bg=background ,fg=fill, font=font) T.pack() self.data[&quot;Title&quot;] = text class PrintLogger(): # create file like object def __init__(self, textbox): # pass reference to text widget self.textbox = textbox # keep ref def write(self, text): self.textbox.insert(tk.END, text) # write text to textbox # could also scroll to end of textbox here to make sure always visible def flush(self): # needed for file like object pass if __name__ == '__main__': from tkinter import * import sys win = Tk() WIDTH, HEIGHT = win.winfo_screenwidth(), win.winfo_screenheight() main_window = Window(WIDTH, HEIGHT, win) main_window.constract(&quot;AntiBalloons system controls&quot;, &quot;#E1DCDC&quot;) main_window.header(&quot;AntiAir system controls&quot;) t = Text(win, bg=&quot;#E1DCDC&quot;,bd=1) t.tag_configure(&quot;&quot;, justify='right') t.pack() pl = PrintLogger(t) sys.stdout = pl main_window.window.mainloop() </code></pre>
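For the bottom-right placement itself, `pack()` alone will not get there once other widgets are packed from the top; `place()` with a south-east anchor (or `t.pack(side="bottom", anchor="e")`) is the usual tool. A hedged sketch replacing the `t.pack()` call in the `__main__` block above; the width/height values are arbitrary, and this is untested here since Tk needs a display:

```python
# Replaces `t.pack()` in the __main__ block above.
t = Text(win, bg="#E1DCDC", bd=1, width=60, height=12)
# relx/rely = 1.0 put the reference point at the window's bottom-right corner;
# anchor="se" makes that point the widget's own south-east corner.
t.place(relx=1.0, rely=1.0, anchor="se")
```

`place` keeps the widget glued to that corner even as the window resizes, which suits a fixed logger pane.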
<python><python-3.x><tkinter>
2023-03-30 08:24:03
1
402
David
75,885,670
18,140,022
Polars: adding a column to an empty dataframe with set schema
<p>I created an empty dataframe with a set schema. The schema sets the columns' data types. I want to add a single name-matching column (series) to the empty dataframe, but it seems not to like it.</p> <pre><code># Empty dataframe with a given schema df = pl.DataFrame(schema={&quot;my_string_col&quot;:str, &quot;my_int_col&quot;: int, &quot;my_bool_col&quot;: bool}) # Now I try to add a series to it df = df.with_columns(pl.Series(name=&quot;my_int_col&quot;, values=[1, 2, 3, 4, 5])) </code></pre> <p>But I get the following error:</p> <pre><code>exceptions.ShapeError: unable to add a column of length 5 to a dataframe of height 0 </code></pre> <p>It looks as if Polars isn't able to fill the rest of the columns (i.e. my_string_col &amp; my_bool_col) with null values. In Pandas you can do this in multiple ways, and I wonder if I am missing something or there's no implementation yet?</p>
<python><python-polars>
2023-03-30 08:13:07
3
405
user18140022
75,885,628
247,696
Can I count on the "python3" executable being available on Python 3 installations? Or "python"?
<p>Let's say I have a Python command that I want to share with people widely. Let's take it as a given that Python 3 is installed. Which binary should I expect users to have, <code>python</code> or <code>python3</code>?</p> <p>For example, which is better, to tell users to run this:</p> <pre><code>python -m http.server </code></pre> <p>or to run this</p> <pre><code>python3 -m http.server </code></pre> <p>?</p> <p>In Ubuntu 22.10, the <code>python3</code> binary is available, but not <code>python</code>, so the latter would be better. Can I expect the same for all Python 3 installations on different operating systems?</p> <p>Please note that I am not trying to support Python 2 at all, <a href="https://www.python.org/doc/sunset-python-2/" rel="nofollow noreferrer">as it has been sunset</a>, and Python 3 has been available since 2006.</p>
<python><python-3.x>
2023-03-30 08:08:13
1
153,921
Flimm
75,885,451
9,970,706
How to insert values into a nested python dictionary
<p>I am trying to insert values into a nested python dictionary. The response I am getting is from a postgresql database where the table has four columns:</p> <pre><code>coe, coe_type, count, coe_status Author 1, Open, 10, Published Author 2, Closed, 20, Not-Published etc.... </code></pre> <p>Currently my response looks like this</p> <pre class="lang-json prettyprint-override"><code> &quot;data&quot;: { &quot;Author 1&quot;: { &quot;Open&quot;: {}, &quot;Closed&quot;: {}, &quot;All&quot;: { &quot;Published&quot;: 1, &quot;Non-Published&quot;: 1 } }, </code></pre> <p>The problem I am having is that I want to insert all the counts for each specific type. For instance, Open should have its own Published and Non-Published count. This is the same for Closed as well.</p> <p>So the response should look like is this</p> <pre class="lang-json prettyprint-override"><code> &quot;data&quot;: { &quot;beryl&quot;: { &quot;Open&quot;: { &quot;Published&quot;: 1, &quot;Non-Published&quot;: 1 }, &quot;Closed&quot;: { &quot;Published&quot;: 1, &quot;Non-Published&quot;: 1 }, &quot;All&quot;: { &quot;Published&quot;: 1, &quot;Non-Published&quot;: 1 } }, </code></pre> <p>This is how the current code is written:</p> <pre class="lang-py prettyprint-override"><code>category_headers = [&quot;Open&quot;, &quot;Published&quot;] book_main_headers = [&quot;Owner&quot;, &quot;Published&quot;, &quot;Non-Published&quot;] book_type_headers=[&quot;Open&quot;, &quot;Closed&quot;, &quot;All&quot;] response_result = {} last_author = None response = execute_query(sql_query, transaction_id, False) current_list = ast.literal_eval(row) current_author, book, book_count, book_published_non_published = (current_list[0], current_list[1], current_list[2], current_list[3]) if last_author is None or last_author != current_author: interim_dictionary[str(current_author)] = {} last_author = current_author for book_type in book_type_headers: interim_dictionary[str(current_author)][str(book_type)] = {} for 
coe_category in category_headers: interim_dictionary[str(current_author)][str(book_type)][str(book_category)] = {} if book_category not in interim_dictionary[str(current_author)]: interim_dictionary[str(current_author)][str(book_type)][book_category] = 0 if book == 'Open': if book_type_headers == 'Published': interim_dictionary[str(current_author)][str(book_type)][book_category] = book_count else: interim_dictionary[str(current_author)][str(book_type)][book_category] = book_count if book == 'Closed': if book_type_headers == 'Published': interim_dictionary[str(current_author)][str(book_type)][book_category] = book_count else: interim_dictionary[str(current_author)][str(book_type)][book_category] = book_count else: if book_type_headers == 'Published': interim_dictionary[str(current_author)][str(book_type)][book_category] = book_count else: interim_dictionary[str(current_author)][str(book_type)][book_category] = book_count </code></pre>
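The shape being chased here, counts per author and per book type plus an "All" rollup, falls out naturally from defaultdicts keyed in the same nesting order as the desired JSON, which avoids the header pre-initialisation loops entirely. A sketch with hypothetical rows standing in for the SQL result (coe, coe_type, count, coe_status):

```python
from collections import defaultdict

rows = [
    ("Author 1", "Open", 10, "Published"),
    ("Author 1", "Closed", 4, "Not-Published"),
    ("Author 2", "Closed", 20, "Not-Published"),
]

def nest(rows):
    """author -> book type ('Open'/'Closed'/'All') -> status -> count."""
    data = defaultdict(
        lambda: {t: defaultdict(int) for t in ("Open", "Closed", "All")}
    )
    for author, book_type, count, status in rows:
        data[author][book_type][status] += count  # per-type bucket
        data[author]["All"][status] += count      # rollup bucket
    return data
```

Because every bucket defaults to 0, no branching on whether a key exists is needed; serialising with `json.dumps` works directly since defaultdict subclasses dict.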
<python><postgresql><dictionary>
2023-03-30 07:48:52
1
781
Zubair Amjad
75,885,359
10,197,418
How to convert float to string with specific number of decimal places in Python polars?
<p>I have a polars DataFrame with multiple numeric (float dtype) columns. I want to write some of them to a csv with a certain number of decimal places. The number of decimal places I want is column-specific.</p> <p><code>polars</code> offers <a href="https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.format.html#polars-format" rel="noreferrer">format</a>:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({&quot;a&quot;: [1/3, 1/4, 1/7]}) df.select( [ pl.format(&quot;as string {}&quot;, pl.col(&quot;a&quot;)), ] ) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ literal β”‚ β”‚ --- β”‚ β”‚ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ as string 0.3333333333333333 β”‚ β”‚ as string 0.25 β”‚ β”‚ as string 0.14285714285714285 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>However, if I try to set a directive to specify number of decimal places, it fails:</p> <pre class="lang-py prettyprint-override"><code>df.select( [ pl.format(&quot;{:.3f}&quot;, pl.col(&quot;a&quot;)), ] ) </code></pre> <blockquote> <p>ValueError: number of placeholders should equal the number of arguments</p> </blockquote> <p>Is there an option to have &quot;real&quot; f-string functionality without using an <code>apply</code>?</p> <ul> <li><code>pl.__version__: '0.16.16'</code></li> <li>related: <a href="https://stackoverflow.com/q/71790235/10197418">Polars: switching between dtypes within a DataFrame</a></li> <li>to set the decimal places of <em>all</em> output columns, <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.DataFrame.write_csv.html#polars-dataframe-write-csv" rel="noreferrer">pl.DataFrame.write_csv</a> offers the <code>float_precision</code> keyword</li> </ul>
<python><format><python-polars><f-string>
2023-03-30 07:37:32
2
26,076
FObersteiner
75,885,162
6,916,032
Patching child TestCase class affects parent TestCase class
<p>I need to test a function which uses feature toggles to turn some functionality on and off. Say, a function like this:</p> <pre><code>def func_to_test(hello_value: str): if toggle_is_active('only_hello_and_hi'): if hello_value not in ('hello', 'hi'): return print(hello_value) </code></pre> <p>And now I want to test this function for both feature toggle states. For the off toggle I would do something like this:</p> <pre><code>class InactiveToggleTestCase(unittest.TestCase): hello_values = ['hello', 'hi', 'bonjour', 'sup'] def test_func(self): for hello_value in self.hello_values: with self.subTest(hello_value=hello_value): func_to_test(hello_value) # assert print was called with expected values </code></pre> <p>And for the active state I want to just inherit the class and patch the toggle state:</p> <pre><code>@patch('toggle_is_active', lambda x: True) class ActiveToggleTestCase(InactiveToggleTestCase): hello_values = ['hello', 'hi'] </code></pre> <p>This way I don't need to rewrite the test itself. The thing is, the parent class also gets the patch from the child class, and the test case for the inactive toggle state doesn't pass anymore.</p> <p>How can I avoid this effect? I could just duplicate the test, of course, but that doesn't seem right, and with plenty of tests the inheritance would be really helpful.</p>
<python><python-unittest><patch><python-unittest.mock>
2023-03-30 07:13:58
1
417
Artem Ilin
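A sketch of one way around the problem above: decorating the subclass with `@patch` wraps the test methods it looks up via `getattr`, which are the very function objects the parent class shares. Starting the patch in `setUp` instead scopes it to instances of the subclass only. The `toggles` container and the return-value convention below are self-contained stand-ins, not the asker's real code.

```python
import unittest
from unittest.mock import patch

# Self-contained stand-in for the real feature-toggle module (hypothetical).
class toggles:
    @staticmethod
    def toggle_is_active(name: str) -> bool:
        return False

def func_to_test(hello_value: str):
    """Returns the value that would be printed, or None when filtered out."""
    if toggles.toggle_is_active("only_hello_and_hi"):
        if hello_value not in ("hello", "hi"):
            return None
    return hello_value

class InactiveToggleTestCase(unittest.TestCase):
    hello_values = ["hello", "hi", "bonjour", "sup"]

    def expected(self, value):
        # Every value passes while the toggle is off.
        return value

    def test_func(self):
        for hello_value in self.hello_values:
            with self.subTest(hello_value=hello_value):
                self.assertEqual(func_to_test(hello_value), self.expected(hello_value))

class ActiveToggleTestCase(InactiveToggleTestCase):
    def setUp(self):
        # Patch per test instance instead of decorating the class, so the
        # parent TestCase keeps the real (inactive) toggle behaviour.
        patcher = patch.object(toggles, "toggle_is_active", lambda name: True)
        patcher.start()
        self.addCleanup(patcher.stop)

    def expected(self, value):
        return value if value in ("hello", "hi") else None
```

`addCleanup(patcher.stop)` undoes the patch even if a test fails, so running the active-toggle case first cannot leak into the inactive one.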
75,885,115
16,250,404
Convert Datetime Objects as per Time zones
<p>I have tried two different scenarios:</p> <ol> <li>I fetched the current datetime in <code>UTC</code> and <code>Europe/Paris</code> and then converted both to strings, which shows a gap of <code>02 hours</code>, which is correct.</li> </ol> <pre><code>from datetime import datetime import datetime as dt import pytz from dateutil import tz current_utc = datetime.utcnow() current_europe = datetime.now(pytz.timezone('Europe/Paris')) current_utc_str = datetime.strftime(current_utc, &quot;%Y-%m-%d %H:%M&quot;) current_europe_str = datetime.strftime(current_europe, &quot;%Y-%m-%d %H:%M&quot;) print('current_utc',current_utc_str) print('current_europe',current_europe_str) </code></pre> <p>results:</p> <pre><code>current_utc 2023-03-30 07:01 current_europe 2023-03-30 09:01 </code></pre> <ol start="2"> <li>I created a custom UTC datetime object and then converted it to the Europe/Paris timezone, and here the results show a gap of only <code>01 hour</code>.</li> </ol> <pre><code>from datetime import datetime import datetime as dt import pytz from dateutil import tz utc = datetime(2023, 3, 21, 23, 45).replace(tzinfo=dt.timezone.utc) utc_str = datetime.strftime(utc, &quot;%Y-%m-%d %H:%M&quot;) print(&quot;utc_str&quot;, utc_str) from_zone = tz.gettz(&quot;UTC&quot;) to_zone = tz.gettz('Europe/Paris') utc = utc.replace(tzinfo=from_zone) new_time = utc.astimezone(to_zone) new_time_str = datetime.strftime(new_time, &quot;%Y-%m-%d %H:%M&quot;) print(&quot;new_time_str&quot;, new_time_str) </code></pre> <p>results:</p> <pre><code>utc_str 2023-03-21 23:45 new_time_str 2023-03-22 00:45 </code></pre> <p>What is the reason behind this <code>01-hour variation</code> between fetching the current datetime and creating a custom one?</p> <p><strong>Edit</strong> How can we handle Daylight Saving Time (DST) for custom-created datetime objects?</p> <p>I think <a href="https://stackoverflow.com/q/1398674/16250404">Display the time in a different time zone</a> doesn't answer how to handle Daylight Saving Time (DST).</p>
<python><datetime><timezone><python-datetime>
2023-03-30 07:09:18
0
933
Hemal Patel
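The 01-hour result in the question above is not a bug: the custom datetime (2023-03-21) falls before the EU switch to summer time on 2023-03-26, when Paris was at UTC+1, while "now" (2023-03-30) falls after it, at UTC+2. A sketch using the stdlib `zoneinfo` module (Python 3.9+, replacing the pytz/dateutil mix) shows DST being applied automatically for both dates:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

paris = ZoneInfo("Europe/Paris")

# 2023-03-21 is before the EU DST switch (2023-03-26): Paris is UTC+1.
winter = datetime(2023, 3, 21, 23, 45, tzinfo=timezone.utc).astimezone(paris)

# 2023-03-30 is after the switch: Paris is UTC+2.
summer = datetime(2023, 3, 30, 7, 1, tzinfo=timezone.utc).astimezone(paris)

print(winter)  # 2023-03-22 00:45:00+01:00
print(summer)  # 2023-03-30 09:01:00+02:00
```

`astimezone` consults the IANA rules for the given wall-clock date, so a custom-created aware datetime gets the correct DST offset with no extra handling.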
75,885,078
1,780,761
python - split query results into multiple objects based on one column
<p>My program is querying a sqlite database, and the result is like this (simplified) in the cursor, ready to be fetched.</p> <pre><code>connection = sqlite3.connect(IMAGE_LOG_DB_PATH) connection.isolation_level = None cur = connection.cursor() sql_query = &quot;Select date, name, count(*) as sells from sellers group by date, name order by date asc;&quot; cur.execute(sql_query) result = cur.fetchall() </code></pre> <hr /> <pre><code>2023-01-01 | John | 5 2023-01-01 | Mark | 10 2023-01-01 | Alex | 7 2023-01-02 | John | 4 2023-01-02 | Alex | 3 2023-01-03 | John | 3 2023-01-03 | Mark | 4 2023-01-03 | Alex | 3 </code></pre> <p>I would need to split this into separate objects for each name.</p> <pre><code>Object 'John': 2023-01-01 | John | 5 2023-01-02 | John | 4 2023-01-03 | John | 3 Object 'Mark': 2023-01-01 | Mark | 10 2023-01-03 | Mark | 4 Object 'Alex': 2023-01-01 | Alex | 7 2023-01-02 | Alex | 3 2023-01-03 | Alex | 3 </code></pre> <p>It would be easy to do with a loop: if the object exists, add the entry; if not, create a new object. But what I have learned so far is that in Python there is a handy tool for almost everything, which does things automatically and usually much faster than my own code. I have been reading into ORM, but it's my understanding (correct me if I am wrong) that an ORM also replaces the connection/query to the database and handles everything on its own, and it appears to be slower than the approach I have right now.</p> <p>What would be a proper way to do this?</p>
<python><sqlite><object><orm>
2023-03-30 07:04:52
1
4,211
sharkyenergy
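For the grouping asked about above, the stdlib already covers it: a `collections.defaultdict` keyed on the name column does the split in a single O(n) pass over `fetchall()`, no ORM required. The rows below mirror the sample output in the question.

```python
from collections import defaultdict

# The fetchall() result from the question, as (date, name, sells) tuples.
rows = [
    ("2023-01-01", "John", 5),
    ("2023-01-01", "Mark", 10),
    ("2023-01-01", "Alex", 7),
    ("2023-01-02", "John", 4),
    ("2023-01-02", "Alex", 3),
    ("2023-01-03", "John", 3),
    ("2023-01-03", "Mark", 4),
    ("2023-01-03", "Alex", 3),
]

by_name = defaultdict(list)  # missing keys start out as empty lists
for row in rows:
    by_name[row[1]].append(row)

print(by_name["Mark"])  # [('2023-01-01', 'Mark', 10), ('2023-01-03', 'Mark', 4)]
```

`itertools.groupby` is the other common tool here, but it only groups *consecutive* equal keys, so the rows would first have to be sorted by name; the `defaultdict` version keeps the original date ordering within each group for free.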
75,885,077
6,447,123
Searching for sequence of bits in an integer in Python
<p>I have two integers, let's call them <code>haystack</code> and <code>needle</code>. I need to check whether the binary representation of <code>needle</code> occurs in <code>haystack</code> [and <strong>OPTIONALLY</strong> find the position of the first occurrence].</p> <h2>Example</h2> <pre class="lang-py prettyprint-override"><code>haystack = 0b10101111010110010101010101110 needle = 0b1011001 # occurred in position 13 needle = 0b111011 # not occurred </code></pre> <p>I am looking for the lowest possible time complexity; I cannot write code with time complexity better than <code>O(h)</code>, where <code>h</code> is the number of bits in the haystack. You can see my code below.</p> <p>I need to check occurrence of a predefined <code>needle</code> (which never changes and is an <strong>odd</strong> number) in billions of random <code>haystack</code> integers (so we cannot preprocess <code>haystack</code> to optimize speed).</p> <p>As finding the position is optional, code with better time complexity that just returns a Boolean indicating occurrence would be perfect too, because in billions of checks I know that it does not occur, and when it does occur I can use the following code to find the position.</p> <p>A good probabilistic algorithm with false-positive results is fine too.</p> <pre class="lang-py prettyprint-override"><code>def find_needle_in_haystack(haystack, needle): n = needle.bit_length() # calculate the number of bits in needle mask = (1 &lt;&lt; n) - 1 # create a mask with n bits i = 0 while haystack != 0: x = haystack &amp; mask # bitwise AND with mask to get first n bits if x == needle: return i i += 1 haystack &gt;&gt;= 1 # shift haystack to the right by 1 bit return -1 # needle not found in haystack </code></pre>
<python><algorithm><binding><substring><binary-data>
2023-03-30 07:04:45
2
4,309
A.A
75,885,039
8,868,327
Test Django settings with pytest
<p>I am trying to set some environment variables that drive the configuration of a Django project in some tests, so that I can mock some of the values and make assertions easier and explicit.</p> <p>I want to set an environment variable holding the path of a file that stores some configuration, which is loaded if the variable is set and the file exists, i.e.:</p> <pre class="lang-py prettyprint-override"><code># proj.settings.py CONFIG = None if _filename := os.environ.get('FILE_PATH', None): with open(_filename) as f: CONFIG = json.load(f) </code></pre> <p>I have tried a fixture that sets an environment variable (see <code>set_env</code>), so my tests look like this:</p> <pre class="lang-py prettyprint-override"><code># some_app.tests.test_settings.py import os @fixture def data_raw(): return dict( foo=&quot;bar&quot; ) @fixture def data_file(data_raw): with NamedTemporaryFile(mode=&quot;w+&quot;) as f: json.dump(data_raw, f) yield f.name @fixture def set_env(data_file): os.environ['FILE_PATH'] = data_file def test_it_loads_data(set_env, data_raw, settings): assert settings.CONFIG == data_raw </code></pre> <p>But <code>set_env</code> doesn't execute before Django's configuration, so <code>CONFIG</code> is never set.</p>
<python><django>
2023-03-30 06:59:37
1
992
EDG956
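A sketch of one workaround for the question above: pytest imports the root `conftest.py` before pytest-django triggers `django.setup()`, so module-level code there runs early enough to plant the variable before `proj.settings` is evaluated. The file name, payload, and `FILE_PATH` wiring below are assumptions mirroring the question, not a definitive recipe.

```python
# conftest.py (project root) -- module-level code runs when pytest imports
# this file, before pytest-django initializes Django, so FILE_PATH is
# already in the environment when proj.settings is first imported.
import json
import os
import tempfile

_tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False)
json.dump({"foo": "bar"}, _tmp)  # keep in sync with the data_raw fixture
_tmp.close()
os.environ["FILE_PATH"] = _tmp.name
```

The downside is that the value is fixed for the whole session; for per-test variation, overriding `settings.CONFIG` directly via pytest-django's `settings` fixture (as the test already receives it) is the usual alternative, since the settings module will not be re-imported per test anyway.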
75,884,946
10,748,412
How to get coordinates of all vertical lines using OpenCV
<p>How can I get the coordinates of all these vertical white lines as a list, using OpenCV or any other method?</p> <p><a href="https://i.sstatic.net/EdPhD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EdPhD.png" alt="enter image description here" /></a></p> <p>This is what I did to get these lines:</p> <pre><code>import cv2 import numpy as np from google.colab.patches import cv2_imshow from PIL import Image document_img = cv2.imread(&quot;2.jpg&quot;) table_list = [np.array(document_img, copy=True)] for each_table in table_list: img = cv2.cvtColor(each_table, cv2.COLOR_BGR2GRAY) img_height, img_width = img.shape thresh, img_bin = cv2.threshold(img, 180, 255, cv2.THRESH_BINARY) img_bin_inv = 255 - img_bin kernel_len_ver = max(10, img_height // 50) kernel_len_hor = max(10, img_width // 50) ver_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, kernel_len_ver)) # shape (kernel_len, 1) inverted! xD hor_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_len_hor, 1)) # shape (1,kernel_ken) xD kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2)) image_1 = cv2.erode(img_bin_inv, ver_kernel, iterations=3) vertical_lines = cv2.dilate(image_1, ver_kernel, iterations=4) cv2_imshow(vertical_lines) </code></pre>
<python><opencv><machine-learning><deep-learning>
2023-03-30 06:50:09
1
365
ReaL_HyDRA
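For the coordinate extraction asked about above, the usual OpenCV route is `cv2.findContours` on `vertical_lines` followed by `cv2.boundingRect` per contour. Since the lines are axis-aligned, a NumPy-only sketch also works and is easy to verify: count white pixels per column and merge adjacent qualifying columns into one entry per line. The `min_height_frac` threshold is an assumption to tune per image.

```python
import numpy as np

def vertical_line_columns(binary_img: np.ndarray, min_height_frac: float = 0.5):
    """Return (start_x, width) for each vertical white line in a binary image.

    A column qualifies when at least min_height_frac of its pixels are
    white; consecutive qualifying columns are merged into one line.
    """
    h, w = binary_img.shape
    col_white = (binary_img > 0).sum(axis=0)          # white pixels per column
    cols = np.flatnonzero(col_white >= min_height_frac * h)
    lines = []
    for c in cols:
        if lines and c == lines[-1][0] + lines[-1][1]:
            lines[-1][1] += 1                          # extend the current run
        else:
            lines.append([c, 1])                       # start a new run
    return [(x, width) for x, width in lines]

# Synthetic example: two lines, at x=3 (2 px wide) and x=10 (1 px wide).
img = np.zeros((10, 20), dtype=np.uint8)
img[:, 3:5] = 255
img[:, 10] = 255
print(vertical_line_columns(img))  # [(3, 2), (10, 1)]
```

With OpenCV instead: `contours, _ = cv2.findContours(vertical_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)` followed by `cv2.boundingRect(c)` per contour yields full `(x, y, w, h)` boxes, which also captures each line's vertical extent.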