Dataset columns (schema from the viewer header):
QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (15 to 150 chars)
QuestionBody: string (40 to 40.3k chars)
Tags: string (8 to 101 chars)
CreationDate: date string (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (3 to 30 chars)
75,323,839
2,814,378
Write ORC using Pandas with all values of sequence None
<p>I want to write a simple dataframe as an ORC file. The only sequence is of an integer type. If I set all values to <code>None</code>, an exception is raised on <code>to_orc</code>.</p> <p>I understand that <code>pyarrow</code> cannot infer datatype from <code>None</code> values but what can I do to fix the datatype for output? Attempts to use <code>.astype()</code> only brought <code>TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'</code></p> <p>Bonus points if the solution also works for</p> <ol> <li>empty dataframes</li> <li>nested types</li> </ol> <p>Script:</p> <pre><code>data = {'a': [1, 2]} df = pd.DataFrame(data=data) print(df) df.to_orc('a.orc') # OK df['a'] = None print(df) df.to_orc('a.orc') # fails </code></pre> <p>Output:</p> <pre><code> a 0 1 1 2 a 0 None 1 None Traceback (most recent call last): File ... line 9, in &lt;module&gt; ... File &quot;pyarrow/_orc.pyx&quot;, line 443, in pyarrow._orc.ORCWriter.write File &quot;pyarrow/error.pxi&quot;, line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unknown or unsupported Arrow type: null </code></pre>
<python><pandas><pyarrow><orc>
2023-02-02 13:24:04
1
2,380
Blaf
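A sketch of one possible workaround (my assumption, not from the question): give the column pandas' nullable `Int64` extension dtype instead of an object column full of `None`, so pyarrow sees a concrete integer type with nulls rather than the unsupported `null` type. The same idea, an explicit dtype per column, also covers the empty-dataframe case; the `to_orc` call is left commented here.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
# instead of df["a"] = None (an object column of NoneType, which pyarrow maps
# to the unsupported "null" type), give the column an explicit nullable dtype:
df["a"] = pd.Series([None, None], dtype="Int64")

print(df["a"].dtype)  # Int64
# df.to_orc("a.orc")  # should now succeed: pyarrow sees int64 with nulls
```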
75,323,826
20,443,541
How to upload an image to google-lens using Python?
<p>I'm trying to scrape <strong><a href="https://images.google.com/" rel="nofollow noreferrer">google-lens</a></strong> with python <code>requests</code> but can't find the request where it uploads the image or how it is decoded.</p> <p>The request (which the answer is the image-analysis) is as following:</p> <pre class="lang-py prettyprint-override"><code>import requests cookies = { 'CONSENT': 'PENDING+XXX', 'SOCS': 'XXXXXXXXXXXXXXXXXXXXXXXXXXX', 'HSID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'SSID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'APISID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'SAPISID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', '__Secure-1PAPISID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'SID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx.', '__Secure-1PSID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'SIDCC': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', '__Secure-1PSIDCC': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'AEC': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'NID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'OTZ': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', '__Secure-ENID': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', } headers = { 'authority': 'lens.google.com', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'accept-language': 'de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7', 'referer': 'https://lens.google.com/upload?hl=de-CH&amp;re=df&amp;st=1675340672651&amp;plm=ChAIARIMCIDX7p4GEMDxtbYC&amp;ep=gisbubb', 'sec-ch-ua': '&quot;Not_A Brand&quot;;v=&quot;99&quot;, &quot;Google Chrome&quot;;v=&quot;109&quot;, &quot;Chromium&quot;;v=&quot;109&quot;', 'sec-ch-ua-arch': '&quot;x86&quot;', 'sec-ch-ua-bitness': '&quot;64&quot;', 'sec-ch-ua-full-version': '&quot;109.0.5414.120&quot;', 'sec-ch-ua-full-version-list': '&quot;Not_A Brand&quot;;v=&quot;99.0.0.0&quot;, &quot;Google 
Chrome&quot;;v=&quot;109.0.5414.120&quot;, &quot;Chromium&quot;;v=&quot;109.0.5414.120&quot;', 'sec-ch-ua-mobile': '?0', 'sec-ch-ua-model': '&quot;&quot;', 'sec-ch-ua-platform': '&quot;Windows&quot;', 'sec-ch-ua-platform-version': '&quot;10.0.0&quot;', 'sec-ch-ua-wow64': '?0', 'sec-fetch-dest': 'document', 'sec-fetch-mode': 'navigate', 'sec-fetch-site': 'same-origin', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36', 'x-client-data': 'CJe2yQEIorbJAQjBtskBCKmdygEIte3KAQiTocsBCPKCzQEIv4TNAQiAjM0BCIiMzQEI14zNAQiGjc0BCMeNzQEI1Y7NAQj2js0BCNLhrAII8fStAg==', } p = &quot;AfVzNa_TGIdeDaL6ZaPXF7Wx8FCDSF8grbjYLUPXuk5_7Ia3vUCoQ5BUa8slWojngiUp-88dvc59Ohx3_22wAH3GXJHgaT-bLnpAm0r-5YjYIErXRCYJJ0ndUQUxxdF1JptYTdjqaEXXRR87igdc_xBCpxGpdXkXrf7Nf226SST0MdF3vF7mmtvJyklqA8494byV6bj_I92D3vihWglO3OV6phVD1zsqVyfSU_qZvtuEPEA59LETwQ4SKlztDy0fMWmBGgCsXiCuz2bWH2bOIRqUFo0stSVAvscHpY0iIVcEyRYQhXBxRkibV6UvnSIK2w_JQZV7TP4AkRRBPCwy2iKu-KJS6R28OZ3ABqIth7IPDLGymZKQ20vl_HPjXBHAgHzZgFLTs-AfR7zkmsnyWQ9FB77YVA&quot; response = requests.get( 'https://lens.google.com/search?p='+p+&quot;%3D%3D&amp;ep=gisbubb&amp;hl=en-US&amp;re=df&amp;st=1675340672651&amp;plm=ChAIARIMCIDX7p4GEMDxtbYCCg8IFRILCIDX7p4GENCgvHUKDwgWEgsIgNfungYQkM3CdQoPCBMSCwiA1%2B6eBhCA/MJ1ChAIFBIMCIDX7p4GEOjKj7MC&quot;, cookies=cookies, headers=headers, ) </code></pre> <p>The <code>p</code> parameter in the url seems to me like data, but:</p> <ul> <li>Maybe too short for a image?</li> <li>I can't decode the string as <strong>Base64</strong> to an image. 
Any ideas?</li> </ul> <p><code>p</code> in my case is:</p> <pre><code>AfVzNa_TGIdeDaL6ZaPXF7Wx8FCDSF8grbjYLUPXuk5_7Ia3vUCoQ5BUa8slWojngiUp-88dvc59Ohx3_22wAH3GXJHgaT-bLnpAm0r-5YjYIErXRCYJJ0ndUQUxxdF1JptYTdjqaEXXRR87igdc_xBCpxGpdXkXrf7Nf226SST0MdF3vF7mmtvJyklqA8494byV6bj_I92D3vihWglO3OV6phVD1zsqVyfSU_qZvtuEPEA59LETwQ4SKlztDy0fMWmBGgCsXiCuz2bWH2bOIRqUFo0stSVAvscHpY0iIVcEyRYQhXBxRkibV6UvnSIK2w_JQZV7TP4AkRRBPCwy2iKu-KJS6R28OZ3ABqIth7IPDLGymZKQ20vl_HPjXBHAgHzZgFLTs-AfR7zkmsnyWQ9FB77YVA== </code></pre> <p>In the network tab when uploading the image, I can't find any other request with data.</p> <p>Also, how would I encode a image to such a string using python?</p>
<python><encryption><cryptography><binary><base64>
2023-02-02 13:23:16
1
1,159
kaliiiiiiiii
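One quick sanity check (my own sketch, not from the question): a URL-safe base64 string of this length can only hold a few hundred bytes, far too small for a photo, which suggests `p` is a server-issued token rather than the encoded image itself. The `==` suffix is ordinary base64 padding.

```python
import base64
import os

# ~330 bytes of binary data encode to roughly the length of the p parameter
payload = os.urandom(330)
encoded = base64.urlsafe_b64encode(payload).decode()
print(len(encoded))  # 440 -- about the size of p, i.e. far too small for an image

# decoding goes the other way; '-' and '_' mark the URL-safe alphabet
assert base64.urlsafe_b64decode(encoded) == payload
```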
75,323,747
1,901,071
Polars looping through the rows in a dataset
<p>I am trying to loop through a Polars recordset using the following code:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ &quot;start_date&quot;: [&quot;2020-01-02&quot;, &quot;2020-01-03&quot;, &quot;2020-01-04&quot;], &quot;Name&quot;: [&quot;John&quot;, &quot;Joe&quot;, &quot;James&quot;] }) for row in df.rows(): print(row) </code></pre> <pre><code>('2020-01-02', 'John') ('2020-01-03', 'Joe') ('2020-01-04', 'James') </code></pre> <p>Is there a way to specifically reference 'Name' using the named column as opposed to the index? In Pandas this would look something like:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({ &quot;start_date&quot;: [&quot;2020-01-02&quot;, &quot;2020-01-03&quot;, &quot;2020-01-04&quot;], &quot;Name&quot;: [&quot;John&quot;, &quot;Joe&quot;, &quot;James&quot;] }) for index, row in df.iterrows(): df['Name'][index] </code></pre> <pre><code>'John' 'Joe' 'James' </code></pre>
<python><dataframe><loops><python-polars>
2023-02-02 13:15:45
3
2,946
John Smith
75,323,597
3,575,623
Reorder dataframe groupby medians following custom order
<p>I have a dataset containing a bunch of data in the columns <code>params</code> and <code>value</code>. I'd like to count how many values each <code>params</code> contains (to use as labels in a boxplot), so I use <code>mydf['params'].value_counts()</code> to show this:</p> <pre><code>slidingwindow_250 11574 hotspots_1k_100 8454 slidingwindow_500 5793 slidingwindow_100 5366 hotspots_5k_500 3118 slidingwindow_1000 2898 hotspots_10k_1k 1772 slidingwindow_2500 1160 slidingwindow_5000 580 Name: params, dtype: int64 </code></pre> <p>I have a list of all of the entries in <code>params</code> in the order I wish to display them in a boxplot. I try to use <code>sort_index(level=myorder)</code> to get them in my custom order, but the function ignores <code>myorder</code> and just sorts them alphabetically.</p> <pre><code>myorder = [&quot;slidingwindow_100&quot;, &quot;slidingwindow_250&quot;, &quot;slidingwindow_500&quot;, &quot;slidingwindow_1000&quot;, &quot;slidingwindow_2500&quot;, &quot;slidingwindow_5000&quot;, &quot;hotspots_1k_100&quot;, &quot;hotspots_5k_500&quot;, &quot;hotspots_10k_1k&quot;] sizes_bp_log_df['params'].value_counts().sort_index(level=myorder) hotspots_10k_1k 1772 hotspots_1k_100 8454 hotspots_5k_500 3118 slidingwindow_100 5366 slidingwindow_1000 2898 slidingwindow_250 11574 slidingwindow_2500 1160 slidingwindow_500 5793 slidingwindow_5000 580 Name: params, dtype: int64 </code></pre> <p>How can I get the index of my value counts in the order I want them to be in?</p> <p>In addition, I'll be using the median of each distribution as coordinates for the boxplot labels too, which I retrieve using <code>sizes_bp_log_df.groupby(['params']).median()</code>; hopefully your suggested sort methods will also work for that task.</p>
<python><pandas>
2023-02-02 13:02:58
1
507
Whitehot
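A sketch of one way to do this (assuming pandas): `reindex` imposes an explicit label order on a Series, unlike `sort_index`, whose `level` argument refers to MultiIndex levels, not a custom order. The same call works on the `groupby(...).median()` result.

```python
import pandas as pd

counts = pd.Series(
    {"hotspots_10k_1k": 1772, "slidingwindow_100": 5366, "slidingwindow_250": 11574}
)
myorder = ["slidingwindow_100", "slidingwindow_250", "hotspots_10k_1k"]

# reindex reorders by the given labels instead of sorting alphabetically
ordered = counts.reindex(myorder)
print(ordered.index.tolist())  # ['slidingwindow_100', 'slidingwindow_250', 'hotspots_10k_1k']
```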
75,323,511
14,351,788
How to call class properties in a function with Python
<p>Here is an example:</p> <p>I have a student class like this</p> <pre><code>class student(): def __init__(self, x, y, z): self.name = x self.age = y self.ID = z </code></pre> <p>and a function to print the corresponding property:</p> <pre><code>def printer(student, parameter_name): print(student.parameter_name) </code></pre> <p>My goal is to print the property I want via the function:</p> <pre><code>s1 = student('John', '14', '9927') printer(s1, age) 14 print(s1, name) John </code></pre> <p>But, actually, my function raises an error: &quot;AttributeError: 'student' object has no attribute 'parameter_name'&quot;</p> <p>So, how to fix the error and complete my function?</p>
<python><class>
2023-02-02 12:55:02
1
437
Carlos
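The standard fix for this pattern is `getattr`, which looks an attribute up by its string name at runtime (a sketch; the attribute name must be passed as a string):

```python
class Student:
    def __init__(self, name, age, student_id):
        self.name = name
        self.age = age
        self.ID = student_id

def printer(student, parameter_name):
    # getattr resolves the attribute from its name at runtime
    print(getattr(student, parameter_name))

s1 = Student("John", "14", "9927")
printer(s1, "age")   # 14
printer(s1, "name")  # John
```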
75,323,510
3,423,825
How to sort querysets from different models based on two fields?
<p>I have querysets from different models which have only two fields in common: <code>datetime</code> and <code>dt_created</code>, and I would like to sort the objects first on <code>datetime</code> and then on <code>dt_created</code>, so that objects with the same <code>datetime</code> are sorted based on the field <code>dt_created</code>.</p> <p>How can I do that?</p> <p>Until now I was able to combine and sort the querysets on <code>datetime</code> like this:</p> <pre><code>lst_qs = list(qs_trades) + list(qs_deposits) + list(qs_withdrawals) sorted_lst = sorted(lst_qs, key=lambda x: x.datetime) </code></pre>
<python><django>
2023-02-02 12:54:54
2
1,948
Florent
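`sorted` accepts a tuple key, which gives exactly this two-level ordering; a minimal sketch with stand-in objects (the real code would pass the combined queryset list from the question):

```python
from datetime import datetime
from types import SimpleNamespace

rows = [
    SimpleNamespace(datetime=datetime(2023, 1, 1), dt_created=datetime(2023, 1, 5)),
    SimpleNamespace(datetime=datetime(2023, 1, 1), dt_created=datetime(2023, 1, 2)),
    SimpleNamespace(datetime=datetime(2022, 12, 31), dt_created=datetime(2023, 1, 9)),
]

# primary key: datetime, tie-breaker: dt_created
sorted_lst = sorted(rows, key=lambda x: (x.datetime, x.dt_created))
print([r.dt_created.day for r in sorted_lst])  # [9, 2, 5]
```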
75,323,506
9,182,743
seaborn stop figure from being visualized
<p>A figure generated with seaborn is displayed even without f.show().</p> <p>I want the figure to be displayed only when I call it.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt import seaborn as sns def plot_kde_x (x): sns.set(style=&quot;ticks&quot;) f, (ax_box, ax_hist) = plt.subplots(2, sharex=True, gridspec_kw={&quot;height_ratios&quot;: (.15, .85)}) sns.boxplot(x, ax=ax_box) sns.kdeplot(x, ax=ax_hist) ax_box.set(yticks=[]) sns.despine(ax=ax_hist) sns.despine(ax=ax_box, left=True) return f x = np.random.randint(1,10,100) # figure should not be displayed f = plot_kde_x(x) </code></pre> <p>OUT, figure still displayed<br /> <a href="https://i.sstatic.net/qXNLH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qXNLH.png" alt="enter image description here" /></a></p>
<python><seaborn>
2023-02-02 12:54:39
2
1,168
Leo
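This usually comes down to matplotlib's interactive mode (or a notebook's inline backend) drawing every figure as it is created; seaborn only builds matplotlib figures. A sketch of the usual workarounds, under that assumption:

```python
import matplotlib
matplotlib.use("Agg")  # non-GUI backend: figures are never shown automatically
import matplotlib.pyplot as plt

plt.ioff()  # additionally disable interactive auto-drawing

def plot_something():
    f, ax = plt.subplots()
    ax.plot([1, 2, 3])
    plt.close(f)  # deregister the figure so a later plt.show() won't render it
    return f      # the Figure object can still be saved/displayed on demand

f = plot_something()
# e.g. f.savefig("out.png"), or in a notebook display(f) when actually wanted
```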
75,323,400
15,934,951
Squared Error Relevance Area (SERA) implementation in Python as custom evaluation metric
<p>I'm facing an imbalanced regression problem and I've already tried several ways to solve this problem. Eventually I came a cross this new metric called SERA (Squared Error Relevance Area) as a custom scoring function for imbalanced regression as mentioned in this paper. <a href="https://link.springer.com/article/10.1007/s10994-020-05900-9" rel="nofollow noreferrer">https://link.springer.com/article/10.1007/s10994-020-05900-9</a></p> <p>In order to calculate SERA you have to compute the relevance function phi, which is varied from 0 to 1 in small steps. For each value of relevance (phi) (e.g. 0.45) a subset of the training dataset is selected where the relevance is greater or equal to that value (e.g. 0.45). And for that selected training subset sum of squared errors is calculated i.e. sum(y_true - y_pred)**2 which is known as squared error relevance (SER). Then a plot us created for SER vs phi and area under the curve is calculated i.e. SERA.</p> <p>Here is my implementation, inspired by <a href="https://stackoverflow.com/questions/71058327/scikit-learn-with-a-custom-scoring-function-using-a-feature">this</a> other question here in StackOverflow:</p> <pre><code>import pandas as pd from scipy.integrate import simps from sklearn.metrics import make_scorer def calc_sera(y_true, y_pred,x_relevance=None): # creating a list from 0 to 1 with 0.001 interval start_range = 0 end_range = 1 interval_size = 0.001 list_1 = [round(val * interval_size, 3) for val in range(1, 1000)] list_1.append(start_range) list_1.append(end_range) epsilon = sorted(list_1, key=lambda x: float(x)) df = pd.concat([y_true,y_pred,x_relevance],axis=1,keys= ['true', 'pred', 'phi']) # Initiating lists to store relevance(phi) and squared-error relevance (ser) relevance = [] ser = [] # Converting the dataframe to a numpy array rel_arr = x_relevance # selecting a phi value for phi in epsilon: relevance.append(phi) error_squared_sum = 0 error_squared_sum = sum((df[df.phi&gt;=phi]['true'] - 
df[df.phi&gt;=phi]['pred'])**2) ser.append(error_squared_sum) # squared-error relevance area (sera) # numerical integration using simps(y, x) sera = simps(ser, relevance) return sera sera = make_scorer(calc_sera, x_relevance=X['relevance'], greater_is_better=False) </code></pre> <p>I implemented a simple GridSearch using this score as an evaluation metric to select the best model:</p> <pre><code>model = CatBoostRegressor(random_state=0) cv = KFold(n_splits = 5, shuffle = True, random_state = 42) parameters = {'depth': [6,8,10],'learning_rate' : [0.01, 0.05, 0.1],'iterations': [100, 200, 500,1000]} clf = GridSearchCV(estimator=model, param_grid=parameters, scoring=sera, verbose=0,cv=cv) clf.fit(X=X.drop(columns=['relevance']), y=y, sample_weight=X['relevance']) print(&quot;Best parameters:&quot;, clf.best_params_) print(&quot;Lowest SERA: &quot;, clf.best_score_) </code></pre> <p>I also added the relevance function as weights to the model so it could apply this weights in the learning task. However, what I am getting as output is this:</p> <pre><code>Best parameters: {'depth': 6, 'iterations': 100, 'learning_rate': 0.01} Lowest SERA: nan </code></pre> <p>Any clue on why SERA value is returning nan? Should I implement this another way?</p>
<python><machine-learning><metrics><loss-function><catboost>
2023-02-02 12:45:25
1
517
Daniel Aben-Athar Bemerguy
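As a cross-check it may help to compute SERA on plain, position-aligned arrays first. One plausible cause of the `nan` (an assumption): `make_scorer` hands the metric y_true/y_pred as arrays for each CV fold, whose positions no longer match the original index of `X['relevance']`, so the `pd.concat` misaligns and produces NaNs. A self-contained sketch using only NumPy and a manual trapezoidal rule:

```python
import numpy as np

def sera(y_true, y_pred, phi, steps=1001):
    """Squared Error Relevance Area via trapezoidal integration over phi thresholds."""
    y_true, y_pred, phi = map(np.asarray, (y_true, y_pred, phi))
    thresholds = np.linspace(0.0, 1.0, steps)
    sq_err = (y_true - y_pred) ** 2
    # SER(t): squared error summed over samples whose relevance >= t
    ser = np.array([sq_err[phi >= t].sum() for t in thresholds])
    dt = thresholds[1] - thresholds[0]
    return float(np.sum((ser[:-1] + ser[1:]) * 0.5) * dt)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.0, 2.0])
phi = np.array([0.2, 0.6, 1.0])
print(sera(y_true, y_pred, phi))  # ~1.0: only the phi=1.0 sample has any error
```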
75,323,383
7,599,292
How to deal with DataCollator and DataLoaders in Huggingface?
<p>I have issues combining a DataLoader and DataCollator. The following code with DataCollatorWithPadding results in a <code>ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.</code> when I want to iterate through the batches.</p> <pre><code>from torch.utils.data.dataloader import DataLoader from transformers import DataCollatorWithPadding data_collator = DataCollatorWithPadding(tokenizer) train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=16, collate_fn=data_collator) eval_dataloader = DataLoader(eval_dataset, batch_size=16, collate_fn=data_collator) for epoch in range(2): model.train() for step, batch in enumerate(train_dataloader): outputs = model(**batch) loss = outputs.loss </code></pre> <p>However, I found another approach where I changed the DataCollator to <code>lambda x: x</code>. Then it gives me a <code>TypeError: DistilBertForSequenceClassification object argument after ** must be a mapping, not list</code></p> <pre><code>from torch.utils.data.dataloader import DataLoader train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=16, collate_fn=lambda x: x ) eval_dataloader = DataLoader(eval_dataset, batch_size=16, collate_fn=lambda x: x) for epoch in range(2): model.train() for step, batch in enumerate(train_dataloader): outputs = model(**batch) loss = outputs.loss </code></pre> <p>For reproducibility and for the rest of the code I provide a Jupyter Notebook on Google Colab. You find the errors at the bottom of the notebook. <a href="https://colab.research.google.com/drive/1UboXyiL8Iovg-5ikRoSC-at7fWlFtPIW?usp=sharing" rel="nofollow noreferrer">Link to Colab Notebook</a></p>
<python><pytorch><huggingface-transformers><dataloader><huggingface-datasets>
2023-02-02 12:43:53
1
396
3r1c
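The first error usually means the dataset rows still carry raw, variable-length columns (e.g. the original text) that cannot be stacked into tensors; `DataCollatorWithPadding` only pads tokenizer outputs such as `input_ids`. A pure-Python sketch of what a padding collator does, a hypothetical minimal version and not the Hugging Face implementation, which also shows why `collate_fn=lambda x: x` fails (`model(**batch)` needs a mapping, not a list):

```python
def pad_collate(batch, pad_id=0):
    """Pad each sequence in a batch of {'input_ids': [...]} dicts to the max length."""
    max_len = max(len(ex["input_ids"]) for ex in batch)
    input_ids, attention_mask = [], []
    for ex in batch:
        ids = ex["input_ids"]
        pad = [pad_id] * (max_len - len(ids))
        input_ids.append(ids + pad)
        attention_mask.append([1] * len(ids) + [0] * len(pad))
    # returns a mapping, so model(**batch) works -- a plain list would not
    return {"input_ids": input_ids, "attention_mask": attention_mask}

batch = pad_collate([{"input_ids": [5, 6, 7]}, {"input_ids": [8]}])
print(batch["input_ids"])  # [[5, 6, 7], [8, 0, 0]]
```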
75,323,365
1,581,090
Is there a python package - import mapping available?
<p>When I run Python code that imports a missing library, an error is printed that the library cannot be imported, for example:</p> <pre><code>import cv2 import PIL </code></pre> <p>In order to install these two example packages, however, you have to install them as follows:</p> <pre><code>pip install opencv-python pip install pillow </code></pre> <p>So the names of the import and the package do not match.</p> <p>Is there a central database/file etc. somewhere that contains the name of the package given the name of the import?</p>
<python><pypi>
2023-02-02 12:41:38
0
45,023
Alex
75,323,297
6,256,859
Django form not populating with POST data
<p><strong>Problem:</strong> Django form is populating with list of <strong>objects</strong> rather than <strong>values</strong></p> <p><strong>Summary:</strong> I have 2 models <em>Entities</em> and <em>Breaks</em>. <em>Breaks</em> has a FK relationship to the <em>entity_id</em> (not the PK) on the <em>Entities</em> model.</p> <p>I want to generate an empty form for all the fields of <em>Breaks</em>. Generating a basic form populates all the empty fields, but for the FK it generates a dropdown list of all <strong>objects</strong> of the <em>Entities</em> table. This is not helpful so I have excluded this in the ModelForm below and tried to replace with a list of all the <strong>entity_ids</strong> of the <em>Entities</em> table. This form renders as expected.</p> <pre><code>class BreakForm(ModelForm): class Meta: model = Breaks #fields = '__all__' exclude = ('entity',) def __init__(self, *args, **kwargs): super(BreakForm, self).__init__(*args, **kwargs) self.fields['entity_id'] = ModelChoiceField(queryset=Entities.objects.all().values_list('entity_id', flat=True)) </code></pre> <p>The below FormView is the cbv called by the URL. As the below stands if I populate the form, and for the FK column <strong>entity_id</strong> choose one of the values, the form will not submit. By that field on the form template the following message appears <em>Select a valid choice. That choice is not one of the available choices</em>.</p> <pre><code>class ContactFormView(FormView): template_name = &quot;breaks/test/breaks_form.html&quot; form_class = BreakForm </code></pre> <p>My initial thoughts were either that the datatype of this field (string/integer) was wrong or that Django needed the PK of the row in the <em>Entities</em> table (for whatever reason).</p> <p>So I added a post function to the FormView and could see that the request.body was populating correctly. 
However I can't work out how to populate this into the ModelForm and save to the database, or overcome the issue mentioned above.</p> <p>Addendum:</p> <p>Models added below:</p> <pre><code>class Entity(models.Model): pk_securities = models.AutoField(primary_key=True) entity_id = models.CharField(unique=True) entity_description = models.CharField(blank=True, null=True) class Meta: managed = False db_table = 'entities' class Breaks(models.Model): pk_break = models.AutoField(primary_key=True) date = models.DateField(blank=True, null=True) entity = models.ForeignKey(Entity, on_delete= models.CASCADE, to_field='entity_id') commentary = models.CharField(blank=True, null=True) active = models.BooleanField() def get_absolute_url(self): return reverse( &quot;item-update&quot;, args=[str(self.pk_break)] ) def __str__(self): return f&quot;{self.pk_break}&quot; class Meta: managed = False db_table = 'breaks' </code></pre>
<python><django><django-models><django-views><django-forms>
2023-02-02 12:36:43
2
1,080
Andy
75,323,079
19,079,397
How to extract coordinates falling in a bbox from a geopandas data frame?
<p>I have a geopandas dataframe with coordinates and long with data frame I have a bbox. Now I want to apply the bbox on the data frame and extract the coordinates that's falling in that bbox. I tried using <code>gpd.clip</code> to extract but it is returning an empty data frame. What is the best way to extract the coordinates that are falling inside a bbox?</p> <pre><code>from shapely.geometry import box import geopandas as gpd from shapely.geometry import Point print(df) lat lon geometry 0 30.302228 -87.475474 POINT (-87.4754735 30.3022278) 1 30.302249 -87.475305 POINT (-87.4753053 30.3022495) 2 30.302268 -87.475203 POINT (-87.4752034 30.3022676) 3 30.302284 -87.475118 POINT (-87.4751181 30.3022838) 4 30.299260 -87.473474 POINT (-87.473474 30.2992603) 5 30.299501 -87.473526 POINT (-87.473526 30.299501) 6 30.299285 -87.481937 POINT (-87.4819365 30.299285) 7 31.176753 -86.579765 POINT (-86.5797648 31.1767528) 8 31.176670 -86.579352 POINT (-86.5793519 31.1766701) 9 31.176644 -86.579243 POINT (-86.5792434 31.1766441) 10 31.176596 -86.579159 POINT (-86.5791589 31.1765959) 11 31.176503 -86.579115 POINT (-86.5791153 31.1765032) 12 31.173518 -86.578724 POINT (-86.578724 31.173518) 13 31.170868 -86.578374 POINT (-86.578374 31.170868) 14 31.170122 -86.578270 POINT (-86.57827 31.170122) 15 31.161356 -86.577077 POINT (-86.577077 31.161356) 16 31.160598 -86.576931 POINT (-86.576931 31.160598) 17 31.160147 -86.576831 POINT (-86.576831 31.160147) 18 31.109081 -85.516056 POINT (-85.516056 31.109081) 19 31.109327 -85.515871 POINT (-85.515871 31.109327) 20 31.161638 -85.736218 POINT (-85.736218 31.161638) 21 31.169062 -85.741498 POINT (-85.7414983 31.1690619) 22 31.109349 -85.056092 POINT (-85.0560924 31.1093492) 23 27.713963 -82.679369 POINT (-82.6793689 27.7139633) 24 27.714265 -82.679379 POINT (-82.6793793 27.7142646) 25 30.299501 -81.619310 POINT (-81.61931 30.299501) bbox = box(*[30.902576115004003,-85.72642861167968,31.072530650777363,-85.57774194396336]) geometry = 
[Point(xy) for xy in zip(nodes.lon, nodes.lat)] gdf = gpd.GeoDataFrame(nodes, crs=&quot;EPSG:4326&quot;, geometry=geometry) df_clipped = gpd.clip(gdf, mask=bbox) print(df_clipped) lon lat geometry </code></pre>
<python><mask><geopandas><bounding-box><clip>
2023-02-02 12:19:00
2
615
data en
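One likely culprit (an assumption from the values shown): shapely's `box` takes `(minx, miny, maxx, maxy)`, i.e. longitude first for geographic data, while the bbox above passes latitude first, so the mask and the points can never overlap and `clip` returns an empty frame. A stdlib sketch of the intended containment test with the axes in x/y order:

```python
# bbox in (minx, miny, maxx, maxy) order: x = longitude, y = latitude
minx, miny = -85.72642861167968, 30.902576115004003
maxx, maxy = -85.57774194396336, 31.072530650777363

# (lon, lat) pairs; the first is a made-up point inside the box, the second is row 0
points = [(-85.65, 31.0), (-87.475474, 30.302228)]
inside = [
    (lon, lat)
    for lon, lat in points
    if minx <= lon <= maxx and miny <= lat <= maxy
]
print(inside)  # [(-85.65, 31.0)] -- with lat/lon swapped, nothing would match
```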
75,322,989
4,753,897
Getting NameError in unittests for testing a parameter from argparse
<p>I normally use Python for scripts but I am trying to write a unit test and am having a lot of issues. I would like to test a method that creates a parameter <code>--users</code>. The value is how many occurred.</p> <pre><code>count_users(df, args.metrics) </code></pre> <p>It is a spark dataframe and the metrics are set like so:</p> <pre><code>if __name__ == &quot;__main__&quot;: parser = argparse.ArgumentParser(&quot;Processing args&quot;) parser.add_argument(&quot;--metrics&quot;, required=True) main(parser.parse_args()) </code></pre> <p>The method looks like this:</p> <pre><code>def count_users(df, metrics): users = df.where(df.users &gt; 0).count() temp_df = df.withColumn(&quot;user_count_values&quot;, F.lit(users)) temp_df.write.json(metrics) </code></pre> <p>Now I am trying to write my test, and this is where I am not sure about:</p> <pre><code>def test_count_users(self): df = ( SparkSession.builder.appName(&quot;test&quot;) .getOrCreate() .createDataFrame( data=[ (Decimal(0),), (Decimal(22),), ], schema=StructType( [ StructField(&quot;users&quot;, DecimalType(38, 4), True), ] ), ) ) ap = argparse.ArgumentParser(&quot;Test args&quot;) ap.add_argument(&quot;metrics&quot;) args = {_.dest: _ for _ in ap._actions if isinstance(_, _StoreAction)} assert args.keys() == {&quot;metrics&quot;} count_users(df, args.metrics) self.assertTrue(args[&quot;metrics&quot;], 1) </code></pre> <p>Right now I get an error that reads</p> <pre><code> count_users(df, args.metrics) AttributeError: 'dict' object has no attribute 'metrics' </code></pre>
<python><python-3.x>
2023-02-02 12:12:02
1
12,145
Mike3355
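The dict comprehension in the test collects argparse `Action` objects, not parsed values, hence the attribute error. The usual pattern in tests is to call `parse_args` with an explicit argv list, or to build an `argparse.Namespace` directly; a sketch:

```python
import argparse

ap = argparse.ArgumentParser("Test args")
ap.add_argument("--metrics", required=True)

# parse_args accepts an argv list, so no real command line is needed in a test
args = ap.parse_args(["--metrics", "out.json"])
print(args.metrics)  # out.json

# or skip the parser entirely and hand the function a ready-made namespace
args2 = argparse.Namespace(metrics="out.json")
print(args2.metrics)  # out.json
```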
75,322,860
6,805,754
Executing commands incorrectly in Jenkins pipeline shell script
<p>I set pyenv to an enabled state. And if there is no python version corresponding to the pipfile through pipenv, I tried to install the corresponding python version automatically through pyenv.</p> <blockquote> <p>[Reference] Automatically install required Python version when pyenv is available. at <a href="https://pipenv.pypa.io/en/latest/" rel="nofollow noreferrer">https://pipenv.pypa.io/en/latest/</a></p> </blockquote> <p>This attempt works fine when I connect to the server via ssh and run it as an interactive command.</p> <p>Please refer to the example below. A case of connecting to the server with ssh and using the command</p> <pre><code>&lt;Pipfile&gt; [[source]] name = &quot;pypi&quot; url = &quot;https://pypi.org/simple&quot; verify_ssl = true [dev-packages] [packages] [requires] python_version = &quot;3.8.13&quot; </code></pre> <p>I run pipenv install and get the following message.</p> <pre><code>&gt; pipenv install Warning: Python 3.8.13 was not found on your system… Would you like us to install CPython 3.8.13 with pyenv? [Y/n]: Y Installing CPython 3.8.13 with pyenv (this may take a few minutes)… </code></pre> <p>However, if Jenkins is installed on the same server and the command is executed with a shell script in the Jenkins pipeline, the server gives different results than with the interactive command.</p> <p>Please refer to the example below. This is an example of using a shell script in the jenkins pipeline.</p> <pre><code># Before starting this pipeline, pyenv version 3.8.13 was uninstalled and the pipenv environment was also removed. 
pipeline { agent any stages { stage('Example') { steps { sh 'pipenv install' } } } }​ </code></pre> <p>When the above pipeline was executed, a message asking whether to install Python 3.8.13 did not appear, and a virtualenv was created using the system Python 3.7 version on the server.</p> <pre><code>Warning: Python 3.8.13 was not found on your system… Creating a virtualenv for this project… Using /usr/bin/python3 (3.7.3) to create virtualenv… </code></pre> <p>I don't know why it behaves differently than when I entered an interactive command on the server.</p> <p>The PATH and ENV of Jenkins and the server are the same, and the pyenv and pipenv commands in the Jenkins pipeline shell script work well.</p> <p>Same as the server Is there a way to install the 3.8.13 version when the pipenv install command is executed in the Jenkins pipeline?</p>
<python><jenkins><jenkins-pipeline><pipenv><pyenv>
2023-02-02 11:59:52
0
611
S.Kang
75,322,509
3,394,021
Convert Blocking python function to async function
<p>I currently have 4 API requests which are called synchronously using a third-party lib which is synchronous. What I want is to run them in parallel, so that the total time to call all 4 APIs is reduced.</p> <p>I am using FastAPI as the micro framework.</p> <p>utilities.py</p> <pre><code>async def get_api_1_data(): data = some_third_party_lib() return data async def get_api_2_data(): data = some_third_party_lib() return data async def get_api_3_data(): data = some_third_party_lib() return data async def get_api_4_data(): data = some_third_party_lib() return data </code></pre> <p>my main.py looks something like this:</p> <pre><code>import asyncio from fastapi import FastAPI app = FastAPI() @app.get(&quot;/&quot;) async def fetch_new_exposure_api_data(node: str): functions_to_run = [get_api_1_data(), get_api_2_data(), get_api_3_data(), get_api_4_data()] r1, r2, r3, r4 = await asyncio.gather(*functions_to_run) return [r1, r2, r3, r4] </code></pre> <p>The issue is that I cannot put await in front of <code>some_third_party_lib()</code> as it's not an async lib, it's a sync lib. Is there any way I can convert these functions to async so they run in parallel?</p>
<python><asynchronous><async-await><python-asyncio><fastapi>
2023-02-02 11:28:46
1
995
Abhishek Sachan
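Since Python 3.9, `asyncio.to_thread` wraps a blocking call in a worker thread and awaits it, which lets `gather` overlap the four calls; a sketch with a stand-in for `some_third_party_lib` (the real function would be the third-party call):

```python
import asyncio
import time

def some_third_party_lib(n):  # stand-in for the real blocking API call
    time.sleep(0.1)
    return n

async def get_api_data(n):
    # runs the sync function in a thread pool, keeping the event loop free
    return await asyncio.to_thread(some_third_party_lib, n)

async def main():
    return await asyncio.gather(*(get_api_data(n) for n in range(1, 5)))

results = asyncio.run(main())
print(results)  # [1, 2, 3, 4], fetched concurrently rather than one after another
```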
75,322,357
648,044
How to run unittest tests from multiple directories
<p>I have 2 directories containing tests:</p> <pre><code>project/ | |-- test/ | | | |-- __init__.py | |-- test_1.py | |-- my_submodule/ | |-- test/ | |-- __init__.py |-- test_2.py </code></pre> <p>How can I run all tests?</p> <p><code>python -m unittest discover .</code> only runs <code>test_1.py</code></p> <p>and obviously <code>python -m unittest discover my_submodule</code> only runs <code>test_2.py</code></p>
<python><unit-testing><python-unittest>
2023-02-02 11:16:43
2
2,561
Guglie
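unittest discovery only descends into directories that are importable packages, so one fix (an assumption about the layout shown) is to add `my_submodule/__init__.py`; a self-contained sketch that builds the tree in a temp directory and shows discovery then finds both tests:

```python
import os
import tempfile
import textwrap
import unittest

TEST_BODY = textwrap.dedent("""
    import unittest

    class T(unittest.TestCase):
        def test_ok(self):
            self.assertTrue(True)
""")

root = tempfile.mkdtemp()
layout = {
    "test/__init__.py": "",
    "test/test_1.py": TEST_BODY,
    "my_submodule/__init__.py": "",  # the fix: make my_submodule importable
    "my_submodule/test/__init__.py": "",
    "my_submodule/test/test_2.py": TEST_BODY,
}
for rel, body in layout.items():
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as fh:
        fh.write(body)

# equivalent to: python -m unittest discover  (run from the project root)
suite = unittest.TestLoader().discover(root)
print(suite.countTestCases())
```

Alternatively, run discovery twice with different start directories and combine the suites with `unittest.TestSuite`.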
75,322,244
11,714,087
Appending python dictionary values to a python list
<p>I am trying to process my api result string the result string format is given below. My goal is to add these values to a list where each element of list will be a dictionary.</p> <pre><code>result_str = '['{&quot;abc.xyz&quot;: &quot;80983098429842&quot;,&quot;dev.uvw&quot;: 898420920},' \ '{&quot;abc.xyz&quot;: &quot;80983098429843&quot;,&quot;dev.uvw&quot;: 898420921},' \ '{&quot;abc.xyz&quot;: &quot;80983098429844&quot;,&quot;dev.uvw&quot;: 898420922}]' </code></pre> <p>However my code is returning a list that has only last element multiple times. Rather than having each element once.</p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import json def format_api_value(result_str, split_char, label_map): results = json.loads(result_str) d = dict() output = [] for item in results: output.append(d) print(f&quot;clearing d after appending {d} \n&quot;) d.clear() for k, v in item.items(): if split_char in k: key = k.split(split_char)[len(k.split(split_char))-1] if key in label_map: key = label_map[key] d[key] = v else: d[k] = v print(f&quot;printing output intermediate {output}&quot;) print(f&quot;returning final list output&quot;) print(output) return d if __name__ == &quot;__main__&quot;: result_str = '[' \ '{&quot;abc.xyz&quot;: &quot;80983098429842&quot;,&quot;dev.uvw&quot;: 898420920},' \ '{&quot;abc.xyz&quot;: &quot;80983098429843&quot;,&quot;dev.uvw&quot;: 898420921},' \ '{&quot;abc.xyz&quot;: &quot;80983098429844&quot;,&quot;dev.uvw&quot;: 898420922}]' split_char = &quot;.&quot; label_map = {&quot;xyz&quot;: &quot;xyz_1&quot;, &quot;uvw&quot;: &quot;uvw_1&quot;} format_api_value(result_str, split_char, label_map) </code></pre> <p>Expected Output:</p> <pre><code>[{'xyz_1': '80983098429842', 'uvw_1': 898420920}, {'xyz_1': '80983098429843', 'uvw_1': 898420921}, {'xyz_1': '80983098429844', 'uvw_1': 898420922}] </code></pre> <p>Current Output:</p> <pre><code>[{'xyz_1': '80983098429844', 'uvw_1': 898420922}, {'xyz_1': '80983098429844', 
'uvw_1': 898420922}, {'xyz_1': '80983098429844', 'uvw_1': 898420922}] </code></pre>
<python><python-3.x>
2023-02-02 11:05:51
1
377
palamuGuy
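The repeated last element comes from appending the same dict object and then clearing it: every list slot references one shared dictionary. A sketch of the loop creating a fresh dict per item, appending after the keys are filled (and returning the list rather than the last dict):

```python
import json

result_str = (
    '[{"abc.xyz": "80983098429842","dev.uvw": 898420920},'
    '{"abc.xyz": "80983098429843","dev.uvw": 898420921}]'
)
label_map = {"xyz": "xyz_1", "uvw": "uvw_1"}

output = []
for item in json.loads(result_str):
    d = {}  # new object each iteration, so earlier entries are never overwritten
    for k, v in item.items():
        key = k.rsplit(".", 1)[-1]  # part after the last split character
        d[label_map.get(key, k)] = v
    output.append(d)

print(output)
# [{'xyz_1': '80983098429842', 'uvw_1': 898420920}, {'xyz_1': '80983098429843', 'uvw_1': 898420921}]
```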
75,322,177
7,422,352
ERROR: Failed building wheel for sentencepiece while installing flair on python 3.10
<p>While installing <code>flair</code> using <code>pip install flair</code> in a Python 3.10 virtual environment on <code>mac-os</code> Ventura, I get the following error:</p> <p><code>ERROR: Failed building wheel for sentencepiece</code></p> <p>Separately installing <code>sentencepiece</code> using <code>pip install sentencepiece</code> did not work.</p> <p>Upgrading <code>pip</code> did not work.</p> <p>I am using an Intel MacBook.</p>
<python><python-3.x><flair>
2023-02-02 11:00:08
6
5,381
Deepak Tatyaji Ahire
75,322,169
859,227
Decreasing column values with Pandas
<p>I would like to decrease values in a column. The following code</p> <pre><code>df['ID'] = df.index df.loc[df['ID'].astype(int)] -= 1 </code></pre> <p>should convert</p> <pre><code> ID Process ID Name Time 0 0 NaN NaN msecond 1 1 32434.0 X1 1.4 2 2 32434.0 X2 1.3 </code></pre> <p>to</p> <pre><code> ID Process ID Name Time 0 -1 NaN NaN msecond 1 0 32434.0 X1 1.4 2 1 32434.0 X2 1.3 </code></pre> <p>How can I fix that? I need to keep the first row because of the code I have written.</p>
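The posted `df.loc[df['ID'].astype(int)] -= 1` selects rows by label and tries to subtract 1 from *every* column, which fails on the string `Time` values. If the goal is only to shift the `ID` column, operate on that single column — a sketch reproducing the example frame:

```python
import pandas as pd

df = pd.DataFrame({"Process ID": [None, 32434.0, 32434.0],
                   "Name": [None, "X1", "X2"],
                   "Time": ["msecond", 1.4, 1.3]})

df["ID"] = df.index - 1          # shift only the ID column, not every column
df = df[["ID", "Process ID", "Name", "Time"]]
print(df)
```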
<python><python-3.x><pandas>
2023-02-02 10:59:36
1
25,175
mahmood
75,322,094
10,795,473
SQLAlchemy relationship fields name constructor
<p>I'm using SQLAlchemy 1.4 to build my database models (posgresql).</p> <p>I've stablished relationships between my models, which I follow using the different SQLAlchemy capabilities. When doing so, the fields of the related models get aliases which don't work for me.</p> <p>Here's an example of one of my models:</p> <pre><code>from sqlalchemy import Column, DateTime, ForeignKey, Integer, func from sqlalchemy.orm import relationship class Process(declarative_model()): &quot;&quot;&quot;Process database table class. Process model. It contains all the information about one process iteration. This is the proces of capturing an image with all the provided cameras, preprocess the images and make a prediction for them as well as computing the results. &quot;&quot;&quot; id: int = Column(Integer, primary_key=True, index=True, autoincrement=True) &quot;&quot;&quot;Model primary key.&quot;&quot;&quot; petition_id: int = Column(Integer, ForeignKey(&quot;petition.id&quot;, ondelete=&quot;CASCADE&quot;)) &quot;&quot;&quot;Foreign key to the related petition.&quot;&quot;&quot; petition: &quot;Petition&quot; = relationship(&quot;Petition&quot;, backref=&quot;processes&quot;, lazy=&quot;joined&quot;) &quot;&quot;&quot;Related petition object.&quot;&quot;&quot; camera_id: int = Column(Integer, ForeignKey(&quot;camera.id&quot;, ondelete=&quot;CASCADE&quot;)) &quot;&quot;&quot;Foreign key to the related camera.&quot;&quot;&quot; camera: &quot;Camera&quot; = relationship(&quot;Camera&quot;, backref=&quot;processes&quot;, lazy=&quot;joined&quot;) &quot;&quot;&quot;Related camera object.&quot;&quot;&quot; n: int = Column(Integer, comment=&quot;Iteration number for the given petition.&quot;) &quot;&quot;&quot;Iteration number for the given petition.&quot;&quot;&quot; image: &quot;Image&quot; = relationship( &quot;Image&quot;, back_populates=&quot;process&quot;, uselist=False, lazy=&quot;joined&quot; ) &quot;&quot;&quot;Related image object.&quot;&quot;&quot; datetime_init: datetime = 
Column(DateTime(timezone=True), server_default=func.now()) &quot;&quot;&quot;Datetime when the process started.&quot;&quot;&quot; datetime_end: datetime = Column(DateTime(timezone=True), nullable=True) &quot;&quot;&quot;Datetime when the process finished if so.&quot;&quot;&quot; </code></pre> <p>The model works perfectly and joins the data by default as expected, so far so good.</p> <p>My problem comes when I make a query and I extract the results through <code>query.all()</code> or through <code>pd.read_sql(query.statement, db)</code>. Reading the <a href="https://docs.sqlalchemy.org/en/14/orm/loading_relationships.html#joined-eager-loading" rel="nofollow noreferrer">documentation</a>, I should get aliases for my fields like &quot;{table_name}.{field}&quot; but instead of that I'm getting like &quot;{field}_{counter}&quot;. Here's an example of a <code>query.statement</code> for my model:</p> <pre><code>SELECT process.id, process.petition_id, process.camera_id, process.n, process.datetime_init, process.datetime_end, asset_quality_1.id AS id_2, asset_quality_1.code AS code_1, asset_quality_1.name AS name_1, asset_quality_1.active AS active_1, asset_quality_1.stock_quality_id, pit_door_1.id AS id_3, pit_door_1.code AS code_2, petition_1.id AS id_4, petition_1.user_id, petition_1.user_code, petition_1.load_code, petition_1.provider_code, petition_1.origin_code, petition_1.asset_quality_initial_id, petition_1.pit_door_id, petition_1.datetime_init AS datetime_init_1, petition_1.datetime_end AS datetime_end_1, mask_1.id AS id_5, mask_1.camera_id AS camera_id_1, mask_1.prefix_path, mask_1.position, mask_1.format, camera_1.id AS id_6, camera_1.code AS code_3, camera_1.pit_door_id AS pit_door_id_1, camera_1.position AS position_1, image_1.id AS id_7, image_1.prefix_path AS prefix_path_1, image_1.format AS format_1, image_1.process_id FROM process LEFT OUTER JOIN petition AS petition_1 ON petition_1.id = process.petition_id LEFT OUTER JOIN asset_quality AS asset_quality_1 
ON asset_quality_1.id = petition_1.asset_quality_initial_id LEFT OUTER JOIN stock_quality AS stock_quality_1 ON stock_quality_1.id = asset_quality_1.stock_quality_id LEFT OUTER JOIN pit_door AS pit_door_1 ON pit_door_1.id = petition_1.pit_door_id LEFT OUTER JOIN camera AS camera_1 ON camera_1.id = process.camera_id LEFT OUTER JOIN mask AS mask_1 ON camera_1.id = mask_1.camera_id LEFT OUTER JOIN image AS image_1 ON process.id = image_1.process_id </code></pre> <p>Does anybody know how I can change this behavior and make it alias the fields like &quot;{table_name}_{field}&quot;?</p>
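The `{field}_{counter}` labels are SQLAlchemy's default disambiguation style; 1.4 exposes `LABEL_STYLE_TABLENAME_PLUS_COL` to get `{table_name}_{field}` instead. A minimal Core sketch (the same `set_label_style()` call applies to 1.4-style ORM `select()` statements as well):

```python
from sqlalchemy import (Column, Integer, MetaData, Table, select,
                        LABEL_STYLE_TABLENAME_PLUS_COL)

metadata = MetaData()
process = Table("process", metadata,
                Column("id", Integer, primary_key=True),
                Column("n", Integer))

# every column renders as <table>_<column> instead of <column>_<counter>
stmt = select(process).set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
print(stmt)   # SELECT process.id AS process_id, process.n AS process_n ...
```

With these labels, `pd.read_sql(stmt, db)` column names become unambiguous without manual renaming.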
<python><postgresql><sqlalchemy>
2023-02-02 10:52:36
2
309
aarcas
75,322,024
4,199,496
How set a C char** pointer in Python
<p>I want to set a C <code>char**</code> pointer, called <code>results</code>, in Python. The variable is in a dll I have loaded. I want to set results so that it points to a string in Python. I want to get the string I created in Python (or at least a copy of it since ctypes does a lot of copying) to be pointed to by the C variable <code>results</code>. So I have in Python <code>product_class = (ctypes.c_char_p)(b&quot;321&quot;)</code>. I want to set results to the value &quot;321&quot;.</p> <p>Here is the code I have written. It does not work. It does not even change the C-variable <code>results</code>.</p> <pre class="lang-py prettyprint-override"><code># py_parse_pdl_func function is a callback which is called from a c dll which has been loaded into the python prorgram. # Here is the declaration of the callback in c # typedef int (*tsl_pdl_cb_t)(void *pz_prv, const char **results, const char* query); # so am trying to set results to point to a string &quot;321&quot; def py_parse_pdl_func(pz_prv, py_results, query): global product_class_void product_class = (ctypes.c_char_p)(b&quot;321&quot;) product_class_void = ctypes.cast(product_class, ctypes.c_void_p) py_results.contents = ctypes.c_long(product_class_void.value) return 1 </code></pre>
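Two things go wrong in the posted callback: assigning to `py_results.contents` rebinds a Python-side attribute rather than writing through the C pointer, and nothing keeps the bytes buffer alive after the callback returns. A sketch assuming the callback is registered with a `CFUNCTYPE` whose second argument is `POINTER(c_char_p)` (an assumption — the question never shows the Python-side callback type):

```python
import ctypes

# module-level keepalive: the C side must not outlive this buffer
_result = b"321"

def py_parse_pdl_func(pz_prv, py_results, query):
    # py_results is a char**: assign through it with item assignment,
    # not by rebinding .contents
    py_results[0] = _result
    return 1

# hypothetical callback type mirroring the C typedef:
# typedef int (*tsl_pdl_cb_t)(void *pz_prv, const char **results, const char *query);
TSL_PDL_CB = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_void_p,
                              ctypes.POINTER(ctypes.c_char_p), ctypes.c_char_p)
parse_cb = TSL_PDL_CB(py_parse_pdl_func)   # pass parse_cb to the DLL
```

Keep a reference to `parse_cb` (and `_result`) for as long as the DLL may call or read them, or the garbage collector will free them.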
<python><c><dll><ctypes>
2023-02-02 10:47:31
1
302
drlolly
75,321,840
14,291,703
How to update PySpark df on the basis of other PySpark df?
<pre><code>df1 +------------------------------------------------------ |ID| NAME|ADDRESS|DELETE_FLAG|INSERT_DATE|UPDATE_DATE| +------------------------------------------------------ | 1|sravan|delhi |false |25/01/2023 |25/01/2023| | 2|ojasvi|patna |false |25/01/2023 |25/01/2023| | 3|rohith|jaipur |false |25/01/2023 |25/01/2023| df2 +---------- |ID| NAME| +---------- | 1|sravan| | 2|ojasvi| </code></pre> <p>Suppose I have two pyspark df's (df1 and df2)</p> <p>How can I get the result df3 like below given ID and NAME are the keys?</p> <pre><code>df3 +------------------------------------------------------ |ID| NAME|ADDRESS|DELETE_FLAG|INSERT_DATE|UPDATE_DATE| +------------------------------------------------------ | 1|sravan|delhi |true |25/01/2023 |02/02/2023| | 2|ojasvi|patna |true |25/01/2023 |02/02/2023| | 3|rohith|jaipur |false |25/01/2023 |25/01/2023| </code></pre> <p>I am looking more for a generic answer where I state the keys within a list or store it as a string.</p>
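One generic approach (a sketch, not verified against a live Spark session): left-join `df1` to the distinct keys of `df2`, flag the matches, and update `DELETE_FLAG`/`UPDATE_DATE` conditionally. `keys` stays a plain list, so any key set works:

```python
from pyspark.sql import functions as F

keys = ["ID", "NAME"]                       # generic: any list of key columns
today = "02/02/2023"

hits = df2.select(*keys).dropDuplicates().withColumn("_hit", F.lit(True))

df3 = (df1.join(hits, on=keys, how="left")
          .withColumn("DELETE_FLAG",
                      F.when(F.col("_hit"), F.lit(True))
                       .otherwise(F.col("DELETE_FLAG")))
          .withColumn("UPDATE_DATE",
                      F.when(F.col("_hit"), F.lit(today))
                       .otherwise(F.col("UPDATE_DATE")))
          .drop("_hit"))
```

Rows whose keys appear in `df2` get the new flag and date; all other rows pass through unchanged.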
<python><pyspark>
2023-02-02 10:33:42
1
512
royalewithcheese
75,321,833
6,629,309
how to add new tag value to a exiting yaml file using python
<p>I want to add additional tag value to below yaml contents.</p> <p><strong>Base Yaml</strong></p> <pre><code>infra: etcd: container: replica_count: 3 resource: limit_memory: 1000Mi limit_cpu: 1000m requests_memory: 1000Mi requests_cpu: 1000m volume: storageClaim: 5Gi storageCapacity: 5Gi kafka: container: replica_count: 3 resource: limit_memory: 2000Mi limit_cpu: 1000m requests_memory: 2000Mi requests_cpu: 1000m volume: storageClaim: 10Gi storageCapacity: 10Gi zk: container: replica_count: 3 resource: limit_memory: 500Mi limit_cpu: 1000m requests_memory: 500Mi requests_cpu: 1000m volume: storageClaim: 10Gi storageCapacity: 10Gi </code></pre> <p><strong>After Update</strong></p> <pre><code>infra: etcd: container: **image: tag: etcd-21.3.4** replica_count: 3 resource: limit_memory: 1000Mi limit_cpu: 1000m requests_memory: 1000Mi requests_cpu: 1000m volume: storageClaim: 5Gi storageCapacity: 5Gi kafka: container: **image: tag: kafka-21.3.4** replica_count: 3 resource: limit_memory: 2000Mi limit_cpu: 1000m requests_memory: 2000Mi requests_cpu: 1000m volume: storageClaim: 10Gi storageCapacity: 10Gi zk: container: **image: tag: zk-21.3.4** replica_count: 3 resource: limit_memory: 500Mi limit_cpu: 1000m requests_memory: 500Mi requests_cpu: 1000m volume: storageClaim: 10Gi storageCapacity: 10Gi </code></pre> <p>I am new to python and yaml handling, Any reference will help. I am able open &amp; close the files but not able to get specific guideline to add/remove/update the new tag &amp; value. even contents.update is removing the data after the first image tag update.</p> <pre><code>import yaml # Read the YAML file with open ('in.yaml', 'r') as read_file: contents = yaml.safe_load(read_file) contents['infra']['etcd'] = 'Image' # Write the YAML file with sort_keys=False to retain same order with open('in.yaml', 'w') as write_file: yaml.dump(contents, write_file, sort_keys=False) </code></pre>
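Once parsed, the YAML is plain nested dicts, so adding `image: {tag: ...}` under a `container` mapping is ordinary dict assignment. The posted `contents['infra']['etcd'] = 'Image'` replaces the entire `etcd` subtree with a string, which is why data disappeared. A self-contained sketch on a trimmed document (for the real file, repeat the assignment for `etcd`, `kafka` and `zk`, then dump back with `sort_keys=False`):

```python
import yaml

doc = """\
infra:
  etcd:
    container:
      replica_count: 3
"""

contents = yaml.safe_load(doc)
# adding a nested mapping is just dict assignment on the parsed tree
contents["infra"]["etcd"]["container"]["image"] = {"tag": "etcd-21.3.4"}
out = yaml.dump(contents, sort_keys=False)
print(out)
```

Note that PyYAML will not preserve comments or exact formatting; if that matters, `ruamel.yaml` is the usual alternative.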
<python><python-3.x><yaml>
2023-02-02 10:33:21
1
345
samir
75,321,775
7,800,760
Python and pylint: conflicting errors
<p>My <strong>rssita.py</strong> python code has the following lines:</p> <pre><code>from feeds import RSS_FEEDS from termcolors import PC </code></pre> <p>and this is the corresponding directory tree:</p> <pre><code>(rssita-py3.10) (base) bob@Roberts-Mac-mini rssita % tree . ├── README.md ├── poetry.lock ├── pyproject.toml ├── setup.cfg ├── src │   └── rssita │   ├── __init__.py │   ├── feeds.py │   ├── rssita.py │   └── termcolors.py └── tests ├── __init__.py └── test_feeds.py </code></pre> <p>With this setup I can run rssita.py fine from both the command line (from the activated poetry venv) and from Visual Studio Code (also using the right venv).</p> <p>On the other end with this setup, pylint fails:</p> <pre><code>(rssita-py3.10) (base) bob@Roberts-Mac-mini rssita % pylint src ************* Module rssita.rssita src/rssita/rssita.py:12:0: E0401: Unable to import 'feeds' (import-error) src/rssita/rssita.py:13:0: E0401: Unable to import 'termcolors' (import-error) </code></pre> <p>Also in Visual Studio Code, those imports are flagged as:</p> <pre><code>Unable to import 'feeds' (pylint(import-error) </code></pre> <p>As a last issue, pre-commit runs passing all, including pylint, peraphs because I set 8 as a threshold under which it fails and the score is 8.47 (but the errors are there). Here is the relevant .pre-commit-configuration snippet:</p> <pre><code>- repo: local hooks: - id: pylint name: pylint entry: pylint language: python types: [python] args: [--fail-under=8, --enable=&quot;W&quot;, --recursive=y, -rn,] </code></pre> <p>What do I need to do to fix and reconcile running the script from command line and Studio, and pylint from command line and pre-commit?</p>
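pylint resolves imports from the working directory and `sys.path`, not from the src-layout Poetry installs, so `src/rssita` has to be put on its path explicitly — and doing it in the config file (rather than the shell) also covers the pre-commit hook and the VS Code integration. One common fix (hedged: the section name shown is for a `.pylintrc`; in `pyproject.toml` the same key lives under `[tool.pylint.MASTER]`):

```ini
# .pylintrc
[MASTER]
init-hook='import sys; sys.path.append("src/rssita")'
```

After this, `pylint src` should resolve `from feeds import RSS_FEEDS` the same way the activated venv does.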
<python><pylint><pre-commit-hook><pre-commit>
2023-02-02 10:28:41
0
1,231
Robert Alexander
75,321,652
1,045,755
Remove duplicated rows based on a Series of lists in which item order doesn't matter
<p>I have a data frame with a lot of columns. But one column has values/lists that look like this:</p> <pre><code>df = val1 --------------- 0 [&quot;hej&quot;, &quot;hello&quot;] 1 [&quot;mus&quot;, &quot;mouse&quot;] 2 [&quot;hest&quot;, &quot;horse&quot;] 3 [&quot;hello&quot;, &quot;hej&quot;] 4 [&quot;mouse&quot;, &quot;mus&quot;] </code></pre> <p>If I just do:</p> <pre><code>df.drop_duplicates(subset=[&quot;val1&quot;], keep=&quot;first&quot;) </code></pre> <p>Nothing will get dropped in this case. However, in my case <code>[&quot;hej&quot;, &quot;hello&quot;]</code> is the same as <code>[&quot;hello&quot;, &quot;hej&quot;]</code> and <code>[&quot;mus&quot;, &quot;mouse&quot;]</code> is the same as <code>[&quot;mouse&quot;, &quot;mus&quot;]</code>. So in this case it should just keep <code>[&quot;hej&quot;, &quot;hello&quot;]</code> and <code>[&quot;mus&quot;, &quot;mouse&quot;]</code>, ending up with a data frame with the rows:</p> <pre><code>df_final = val1 --------------- 0 [&quot;hej&quot;, &quot;hello&quot;] 1 [&quot;mus&quot;, &quot;mouse&quot;] 2 [&quot;hest&quot;, &quot;horse&quot;] </code></pre> <p>Is there any way to accomplish this?</p>
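Mapping each list to a `frozenset` gives a hashable, order-insensitive key that `duplicated` can compare (caveat: this also ignores repeated items within one list, e.g. `["a", "a", "b"]` equals `["a", "b"]`, which seems acceptable here):

```python
import pandas as pd

df = pd.DataFrame({"val1": [["hej", "hello"], ["mus", "mouse"], ["hest", "horse"],
                            ["hello", "hej"], ["mouse", "mus"]]})

# frozenset is hashable and unordered, so duplicated() can compare rows
mask = df["val1"].map(frozenset).duplicated(keep="first")
df_final = df[~mask].reset_index(drop=True)
print(df_final)
```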
<python><pandas>
2023-02-02 10:19:38
1
2,615
Denver Dang
75,321,543
13,174,189
Why is my hungarian algorithm application function not working?
<p>I want to solve this combinatorial optimization task using Hungarian algorithm:</p> <p>Different teams of the same company may have employees in different cities. If employees have the same position, we can swap them. And the goal is to distribute employees to teams in such a way as to maximize the number of teams that have employees which are together in the same city. Data we have consists of 4 columns: employee's id, employee's team, employee's position, employee's city. The goal is to redistribute employees across teams and get a table with 4 columns: employee's id, employee's team, employee's position, employee's city</p> <p>Example: input:</p> <pre><code> employee_id team_id position city 1 0.0 Manager New York 2 0.0 Manager San Francisco 3 1.0 Engineer Boston 4 1.0 Engineer Boston 5 2.0 Engineer London 6 2.0 Engineer London 7 2.0 Manager New York </code></pre> <p>output:</p> <pre><code> employee_id team_id position city 1 0.0 Manager New York 7 0.0 Manager New York 3 1.0 Engineer Boston 4 1.0 Engineer Boston 5 2.0 Engineer London 6 2.0 Engineer London 2 2.0 Manager San Francisco </code></pre> <p>as you see employees with id 7 and 2 were swapped and now all employees of team 0.0 are in New York.</p> <p>i wrote this code:</p> <pre><code>import numpy as np import scipy.optimize as opt def hungarian(cost_matrix): row_ind, col_ind = opt.linear_sum_assignment(cost_matrix) return row_ind, col_ind def redistribute_employees(employee_data, cost_matrix): n = len(employee_data) row_ind, col_ind = hungarian(cost_matrix) new_teams = np.zeros(n) for i in range(n): new_teams[i] = col_ind[int(employee_data[i, 1])] return np.column_stack((employee_data[:, 0], new_teams, employee_data[:, 2], employee_data[:, 3])) employee_data = np.array([[1, 0, 'Manager', 'New York'], [2, 0, 'Manager', 'San Francisco'], [3, 1, 'Engineer', 'Boston'], [4, 1, 'Engineer', 'Boston'], [5, 2, 'Engineer', 'London'], [6, 2, 'Engineer', 'London'], [7, 2, 'Manager', 'New York']]) city_map = {'New 
York': 0, 'San Francisco': 1, 'Boston': 2, 'London': 3} n = len(employee_data) cost_matrix = np.zeros((n, n)) for i in range(n): for j in range(n): if employee_data[i, 2] == employee_data[j, 2] and employee_data[i, 3] != employee_data[j, 3]: cost_matrix[i, j] = 1 redistributed_employees = redistribute_employees(employee_data, cost_matrix) print(redistributed_employees) </code></pre> <p>The condition <code>employee_data[i, 2] == employee_data[j, 2] and employee_data[i, 3] != employee_data[j, 3]</code> checks if two employees have the same position <code>(employee_data[i, 2] == employee_data[j, 2])</code> but work in different cities <code>(employee_data[i, 3] != employee_data[j, 3])</code>. In other words, it checks if two employees can be swapped without affecting the position but making sure that they are not working in the same city.</p> <p>print of cost_matrix is: <code>array([[0., 1., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 1.], [0., 0., 0., 0., 1., 1., 0.], [0., 0., 0., 0., 1., 1., 0.], [0., 0., 1., 1., 0., 0., 0.], [0., 0., 1., 1., 0., 0., 0.], [0., 1., 0., 0., 0., 0., 0.]])</code></p> <p>But the output is:</p> <pre><code>[['1' '0.0' 'Manager' 'New York'] ['2' '0.0' 'Manager' 'San Francisco'] ['3' '1.0' 'Engineer' 'Boston'] ['4' '1.0' 'Engineer' 'Boston'] ['5' '2.0' 'Engineer' 'London'] ['6' '2.0' 'Engineer' 'London'] ['7' '2.0' 'Manager' 'New York']] </code></pre> <p>nothing changed and I don't understand why. the steps of algorithm I use are:</p> <ol> <li><p>Create a cost matrix where the entries represent the cost of assigning an employee to a team. In this case, the cost could be defined as 1 if the employees are in different cities, and 0 if they are in the same city.</p> </li> <li><p>Run the Hungarian algorithm on the cost matrix to find a minimum-cost matching between employees and teams. 
The algorithm will find the minimum number of swaps needed to achieve a maximum number of employees in the same city.</p> </li> <li><p>Based on the minimum-cost matching, reassign employees to teams to maximize the number of employees in the same city.</p> </li> <li><p>The final output is a table with the reassigned employee ids, teams, positions, and cities</p> </li> </ol> <p>What am I doing wrong? How to fix my code?</p>
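The posted cost matrix never penalizes the do-nothing solution: `linear_sum_assignment` *minimizes*, the diagonal is all zeros, so the identity assignment has cost 0 and nothing moves; the `col_ind[int(team_id)]` lookup also indexes by team rather than by row. One way to recast the problem (a sketch under the assumption that each team should gather in the city where most of its current members already sit — the question does not pin down the objective): assign employees to the original (team, position) slots, cost 0 when the employee's city matches the team's target city, 1 otherwise, and a large cost to forbid position changes:

```python
from collections import Counter

import numpy as np
from scipy.optimize import linear_sum_assignment

employees = [(1, 0, "Manager", "New York"),
             (2, 0, "Manager", "San Francisco"),
             (3, 1, "Engineer", "Boston"),
             (4, 1, "Engineer", "Boston"),
             (5, 2, "Engineer", "London"),
             (6, 2, "Engineer", "London"),
             (7, 2, "Manager", "New York")]

# one slot per original (team, position) pair
slots = [(team, pos) for _, team, pos, _ in employees]

# assumed target city per team: where most of its members currently sit
team_city = {team: Counter(city for _, t, _, city in employees
                           if t == team).most_common(1)[0][0]
             for _, team, _, _ in employees}

n = len(employees)
cost = np.full((n, n), 1000.0)          # effectively forbids position mismatches
for i, (_, _, pos, city) in enumerate(employees):
    for j, (team, slot_pos) in enumerate(slots):
        if pos == slot_pos:
            cost[i, j] = 0.0 if city == team_city[team] else 1.0

rows, cols = linear_sum_assignment(cost)
assignment = {employees[i][0]: slots[j][0] for i, j in zip(rows, cols)}
print(assignment)   # employee id -> new team
```

On the example data this moves employee 7 into team 0 and employee 2 into team 2, matching the expected output.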
<python><python-3.x><algorithm><function><hungarian-algorithm>
2023-02-02 10:11:01
0
1,199
french_fries
75,321,526
12,193,952
How to pass arguments to function (FastAPI endpoint) loaded from JSON?
<p>I have a FastAPI endpoint which takes up to 70 parameters and they are often added or removed. I have also list of arguments sent to this API stored in <code>JSON</code> format (extracted using <code>json.dumps(locals())</code>), so I can easily reproduce the API call in the future.</p> <p>I would like to call EP <code>/another-endpoint</code> which is gonna load arguments from <code>JSON</code> and call another EP <code>/my-endpoint</code>, how should I do it?</p> <p>My code so far</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI import asyncio app = FastAPI() app.mount(&quot;/&quot;, app) @app.get('/my-endpoint') def my_endpoint( arg1: int, arg2: str, ... arg70: str ): print(&quot;Doing some action with those 70 arguments...&quot;) ... return &quot;Some fake response&quot; # Example arguments in JSON format arguments_json = { &quot;arg1&quot;: 12, &quot;arg2&quot;: &quot;extract&quot;, .... } @app.get('/another-endpoint') async def another_endpoint(): # I have tried using asyncio since I am calling another async function # but this does not work obviously response = asyncio.run(my_endpoint(arguments_json)) </code></pre> <p>It does not work, obviously.</p>
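Since `my_endpoint` is a plain (non-async) function, the other endpoint can call it directly and unpack the saved JSON as keyword arguments; `asyncio.run` is not needed (and raises inside FastAPI's already-running event loop anyway). A trimmed sketch with two arguments standing in for the 70:

```python
# stand-in for the real 70-parameter endpoint function
def my_endpoint(arg1: int, arg2: str) -> str:
    return f"some fake response for {arg1}/{arg2}"

arguments_json = {"arg1": 12, "arg2": "extract"}

def another_endpoint() -> str:
    # ** unpacks the stored dict into keyword arguments; if my_endpoint were
    # declared `async def`, this would be `return await my_endpoint(**arguments_json)`
    return my_endpoint(**arguments_json)

print(another_endpoint())
```

Unknown keys in the JSON raise a `TypeError`, which is useful feedback when the 70 parameters are added or removed over time.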
<python><json><arguments><fastapi>
2023-02-02 10:09:42
1
873
FN_
75,321,353
6,218,501
Searching string among 5Gb of text files
<p>I have several CSV files (~25k in total) with a total size of ~5Gb. These files are on a network path, and I need to search for several strings inside all of them and save the names of the files where these strings are found (in an output file, for example).</p> <p>I've already tried two things:</p> <ol> <li>With Windows I've used <em>findstr</em>: <code>findstr /s &quot;MYSTRING&quot; *.csv &gt; Output.txt</code></li> <li>With Windows PowerShell: <code>gci -r &quot;.&quot; -filter &quot;*.csv&quot; | Select-String &quot;MYSTRING&quot; -list &gt; .\Output.txt</code></li> </ol> <p>I can also use Python, but I don't really think it'll be faster.</p> <p>Is there any other way to speed up this search?</p> <hr /> <p>To be more precise: the structure of the files varies. They are CSV files, but they could just as well be plain TXT files.</p>
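With 25k files on a network share, I/O dominates: each file should be read once, sequentially, in binary (no per-line decoding), and copying the tree to a local disk first often beats any search tweak. A hedged Python sketch along those lines, which stops reading a file at the first hit:

```python
from pathlib import Path

def files_containing(root: str, needle: bytes, pattern: str = "*.csv",
                     chunk_size: int = 1 << 20):
    """Yield paths whose contents contain `needle` (binary, chunked read)."""
    for path in Path(root).rglob(pattern):
        tail = b""
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                if needle in tail + chunk:     # `tail` catches matches straddling chunks
                    yield path
                    break
                tail = chunk[-len(needle):]
```

For several strings, search them in the same pass (e.g. a compiled `re` alternation over each chunk) rather than re-reading the tree per string; a process pool over file batches can then hide network latency.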
<python><powershell><batch-file>
2023-02-02 09:55:50
2
461
Ralk
75,321,312
5,868,293
Create an indicator column if a column contains many string values in pandas
<p>I have a pandas dataframe that looks like this:</p> <pre><code>import pandas as pd pd.DataFrame({'id': [1,1,1,2,2,3,3,3], 'col': ['a','a','a','a','b','c','b','a']}) id col 0 1 a 1 1 a 2 1 a 3 2 a 4 2 b 5 3 c 6 3 b 7 3 a </code></pre> <p>I would like to create an indicator column which will tell me, if an <code>id</code> has both &quot;a&quot; and &quot;b&quot; in the <code>col</code></p> <p>The output should look like this:</p> <pre><code>pd.DataFrame({'id': [1,1,1,2,2,3,3,3], 'col': ['a','a','a','a','b','c','b','a'], 'indicator': [0,0,0,1,1,1,1,1]}) id col indicator 0 1 a 0 1 1 a 0 2 1 a 0 3 2 a 1 4 2 b 1 5 3 c 1 6 3 b 1 7 3 a 1 </code></pre> <p>How can I do that in pandas ?</p>
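A `groupby`/`transform` keeps the per-`id` result aligned with the original rows: the scalar computed for each group is broadcast back onto every row of that group:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 2, 2, 3, 3, 3],
                   "col": ["a", "a", "a", "a", "b", "c", "b", "a"]})

wanted = {"a", "b"}
# 1 when the id's set of values contains both "a" and "b", else 0
df["indicator"] = df.groupby("id")["col"].transform(
    lambda s: int(wanted <= set(s)))
print(df)
```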
<python><pandas>
2023-02-02 09:50:58
2
4,512
quant
75,321,159
7,522,285
Google Auth sign in - Redirect URI Mismatch
<p>I am trying to add Google OAuth login/register to my app, first testing it locally then on the web.</p> <p>Google OAuth has been set up. Redirect URLs as below: <a href="https://i.sstatic.net/zfOX9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zfOX9.png" alt="enter image description here" /></a></p> <p>A 'sign in' button on the login page loads the login route.</p> <p>The code in the <strong>routes.py</strong> file is:</p> <pre><code># Create a LoginManager and Flask-OAuthlib object login_manager = LoginManager() oauth = OAuth() # Configure Flask-OAuthlib to use the Google OAuth API google = oauth.remote_app( 'google', consumer_key='377916639662-b3hlrf0tqbr4ib13bg8jgu1dsltfin8s.apps.googleusercontent.com', consumer_secret='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', request_token_params={ 'scope': 'email' }, base_url='https://www.googleapis.com/oauth2/v1/', request_token_url=None, access_token_method='POST', access_token_url='https://accounts.google.com/o/oauth2/token', authorize_url='https://accounts.google.com/o/oauth2/auth', ) @login_manager.user_loader def load_user(google_id): return User.query.get(google_id) # Login @accounts_bp.route('/login') def login(): return render_template('login.html') @accounts_bp.route('/google-login') def google_login(): callback = url_for( 'accounts_bp.authorized', _external=True, next=request.args.get('next') or request.referrer or None ) return google.authorize(callback=callback) @accounts_bp.route('/authorized') def authorized(): resp = google.authorized_response() if resp is None: return 'Access denied: reason=%s error=%s' % ( request.args['error_reason'], request.args['error_description'] ) session['google_token'] = (resp['access_token'], '') me = google.get('userinfo') user = User.query.filter_by(google_id=me.data['id']).first() if not user: user = User(google_id=me.data['id'], name=me.data['name'], email=me.data['email']) db.session.add(user) db.session.commit() login_user(user) return 
redirect(url_for('dashboard_bp.app_home')) </code></pre> <p>The error during Google sign in is &quot;Request Invalid: redirect_uri_mismatch&quot;: <a href="https://i.sstatic.net/QE4Iq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QE4Iq.png" alt="enter image description here" /></a></p> <p><strong>Question:</strong> What is causing the redirect uri mismatch and how to resolve it?</p>
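Google compares the `redirect_uri` it receives character-for-character against the registered list — scheme, host (`localhost` vs `127.0.0.1`), port and path all count. So the exact callback produced by `url_for('accounts_bp.authorized', _external=True)` (a URL ending in `/authorized`, not the site root) must itself appear in the console; printing that value and registering it verbatim usually resolves the mismatch. A tiny illustration of the exact-match rule:

```python
from urllib.parse import urlsplit

def matches(registered: str, generated: str) -> bool:
    # OAuth redirect URIs must match exactly; no prefix or host aliasing
    return urlsplit(registered) == urlsplit(generated)

print(matches("http://localhost:8000/authorized",
              "http://localhost:8000/authorized"))   # exact match
print(matches("http://localhost:8000/authorized",
              "http://127.0.0.1:8000/authorized"))   # host differs -> mismatch
```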
<python><flask><google-cloud-platform><google-oauth>
2023-02-02 09:38:46
1
1,359
TimothyAURA
75,321,138
12,928,363
How to allow 'filter' query parameter Django REST Framework JSON API?
<p>I am using Django REST Framework with <code>djangorestframework-jsonapi</code></p> <p>When I query with <code>filter[name]=THEOS</code> DRF raise an error into my browser. I tried to query with this URL</p> <pre><code>http://localhost:8000/api/space_objects/?filter[name]=THEOS </code></pre> <p>THe other parameter from JSON API I can use it without any problem.</p> <pre><code>ValidationError at /api/space_objects/ [ErrorDetail(string='invalid filter[name]', code='invalid')] </code></pre> <p>And this is my DRF JSON API settings <a href="https://django-rest-framework-json-api.readthedocs.io/en/stable/getting-started.html" rel="nofollow noreferrer">DRF JSON API Documentation</a></p> <pre><code>REST_FRAMEWORK = { 'PAGE_SIZE': 10, 'EXCEPTION_HANDLER': 'rest_framework_json_api.exceptions.exception_handler', 'DEFAULT_PAGINATION_CLASS': 'rest_framework_json_api.pagination.JsonApiPageNumberPagination', 'DEFAULT_PARSER_CLASSES': ( 'rest_framework_json_api.parsers.JSONParser', 'rest_framework.parsers.FormParser', 'rest_framework.parsers.MultiPartParser' ), 'DEFAULT_RENDERER_CLASSES': ( 'rest_framework_json_api.renderers.JSONRenderer', # If you're performance testing, you will want to use the browseable API # without forms, as the forms can generate their own queries. 
# If performance testing, enable: # 'example.utils.BrowsableAPIRendererWithoutForms', # Otherwise, to play around with the browseable API, enable: 'rest_framework_json_api.renderers.BrowsableAPIRenderer' ), 'DEFAULT_METADATA_CLASS': 'rest_framework_json_api.metadata.JSONAPIMetadata', 'DEFAULT_SCHEMA_CLASS': 'rest_framework_json_api.schemas.openapi.AutoSchema', 'DEFAULT_FILTER_BACKENDS': ( 'rest_framework_json_api.filters.QueryParameterValidationFilter', 'rest_framework_json_api.filters.OrderingFilter', 'rest_framework_json_api.django_filters.DjangoFilterBackend', 'rest_framework.filters.SearchFilter', ), 'SEARCH_PARAM': 'filter[search]', 'TEST_REQUEST_RENDERER_CLASSES': ( 'rest_framework_json_api.renderers.JSONRenderer', ), 'TEST_REQUEST_DEFAULT_FORMAT': 'vnd.api+json' } </code></pre> <p>My model</p> <pre><code>class SpaceObject(models.Model): class Meta: ordering = ['norad'] norad = models.IntegerField(primary_key=True) object_type = models.CharField(max_length=200, null=True) name = models.CharField(max_length=200, null=True) period = models.FloatField(null=True) inclination = models.FloatField(null=True) apogee = models.FloatField(null=True) perigee = models.FloatField(null=True) rcs_size = models.CharField(max_length=200, null=True) tle_1 = models.CharField(max_length=200, null=True) tle_2 = models.CharField(max_length=200, null=True) last_updated = models.DateTimeField(max_length=6, default=timezone.now) </code></pre> <p>My serializer</p> <pre><code>class SpaceObjectSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = SpaceObject fields = ['norad', 'object_type', 'name', 'period', 'inclination', 'apogee', 'perigee', 'rcs_size', 'tle_1', 'tle_2', 'last_updated'] </code></pre> <p>My view</p> <pre><code>class SpaceObjectViewSet(viewsets.ModelViewSet): queryset = SpaceObject.objects.all() serializer_class = SpaceObjectSerializer permission_classes = [permissions.AllowAny] </code></pre> <p>I tried to run this but still got the problem</p> 
<pre><code>pip install djangorestframework-jsonapi['django-filter'] </code></pre> <p>Is there a way to fix the problem? I tried to follow the documentation, but was not successful.</p>
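With `QueryParameterValidationFilter` and `DjangoFilterBackend` enabled, any `filter[...]` key the view has not declared is rejected as invalid. The usual fix (a sketch, assuming `django-filter` is installed) is to declare the filterable fields on the viewset:

```python
class SpaceObjectViewSet(viewsets.ModelViewSet):
    queryset = SpaceObject.objects.all()
    serializer_class = SpaceObjectSerializer
    permission_classes = [permissions.AllowAny]
    # declares which filter[...] query parameters are valid,
    # e.g. /api/space_objects/?filter[name]=THEOS
    filterset_fields = {
        "name": ("exact", "icontains"),
    }
```

With this in place, `?filter[name]=THEOS` filters on exact name and `?filter[name.icontains]=theos` on a substring.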
<python><json><django><django-rest-framework><json-api>
2023-02-02 09:37:16
1
377
Paweenwat Maneechai
75,321,022
5,336,651
Efficient way of requiring that a certain portion of elements from a Hypothesis strategy must be unique
<p>What is the most efficient way of requiring that at least a certain portion of the elements generated by a <code>hypothesis</code> strategy are unique, rather than all of them?</p> <p>Strategies such as the following appear to be inefficient when multiply composed, partly because of the <code>unique=True</code> condition. In this case, it matters that at least a certain portion of the elements are unique rather than that all are.</p> <pre><code>def base_float( min_value=None, max_value=None, *, allow_nan=False, allow_infinity=False, allow_subnormal=False, ): &quot;&quot;&quot;Strategy for returning a float.&quot;&quot;&quot; return st.floats( min_value=min_value, max_value=max_value, allow_nan=allow_nan, allow_infinity=allow_infinity, allow_subnormal=allow_subnormal, ) @st.composite def plaus_arr(draw, size=None, bounds=ARR_LEN): size = draw(st.integers(*bounds)) if size is None else size areas = draw(st_np.arrays(float, size, elements=base_float(), unique=True)) return areas </code></pre> <p>Is the best approach some modification of the approach <a href="https://stackoverflow.com/questions/73737073/create-hypothesis-strategy-that-returns-unique-values">suggested here</a>?</p>
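One way to relax `unique=True` (a sketch — whether it is actually faster depends on how often the rejection triggers for your element strategy): draw without the uniqueness constraint and `assume` only the required fraction of distinct values, so Hypothesis discards just the rare draws that fall short:

```python
import hypothesis.strategies as st
from hypothesis import assume

@st.composite
def mostly_unique_floats(draw, size=10, min_unique_frac=0.5):
    xs = draw(st.lists(
        st.floats(allow_nan=False, allow_infinity=False, allow_subnormal=False),
        min_size=size, max_size=size))
    # reject only draws with too few distinct values, instead of forcing
    # every element to be unique up front
    assume(len(set(xs)) >= min_unique_frac * size)
    return xs
```

The same idea transfers to `hypothesis.extra.numpy.arrays` by dropping `unique=True` and filtering on `len(np.unique(a))`.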
<python><pytest><python-hypothesis>
2023-02-02 09:26:14
0
401
curlew77
75,320,937
14,193,797
Why doesn't float() throw an exception when the argument is outside the range of a Python float?
<p>I'm using <code>Python 3.10</code> and I have:</p> <pre class="lang-py prettyprint-override"><code>a = int(2 ** 1023 * (1 + (1 - 2 ** -52))) </code></pre> <p>Now, the value of <code>a</code> is the biggest integer value in <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow noreferrer">double precision floating point format</a>.</p> <p>So, I'm expecting <code>float(a + 1)</code> to give an <code>OverflowError</code> error, as noted in <a href="https://docs.python.org/3/library/functions.html#float" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>If the argument is outside the range of a Python float, an OverflowError will be raised.</p> </blockquote> <p>But, to my surprise, it doesn't throw the error, instead, it happily returns:</p> <pre><code>1.7976931348623157e+308 </code></pre> <p>which seems like <code>sys.float_info.max</code>.</p> <p>I also do <code>float(a + 2)</code>, <code>float(a + 3)</code>, <code>float(a + 4)</code>, etc but it still returns <code>1.7976931348623157e+308</code>. Only until I do <code>float(a + a)</code> then it throws the expected exception:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; OverflowError: int too large to convert to float </code></pre> <p>It seems like the smallest number that fails is <code>a + 2 ** 970</code>, as noted in <a href="https://stackoverflow.com/questions/75320937/why-doesnt-float-throw-an-exception-when-the-argument-is-outside-the-range-of#comment132907424_75320937">this comment</a>.</p> <p>So, what could be the reason for this?</p>
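The apparent tolerance is just round-to-nearest: `float(int)` rounds to the nearest representable double, and every integer below the midpoint between the largest double (`2**1024 - 2**971`) and `2**1024` still rounds *down* to the maximum. The midpoint itself, `a + 2**970`, is the first value that rounds up and overflows — matching the comment referenced in the question:

```python
a = 2**1024 - 2**971      # largest integer exactly representable as a double
half_gap = 2**970         # half the gap between that value and 2**1024

assert float(a + half_gap - 1) == float(a)   # still rounds down: no error
try:
    float(a + half_gap)                       # midpoint rounds up to 2**1024
except OverflowError as exc:
    print("OverflowError:", exc)
```

So the docs are accurate: the argument is only "outside the range of a Python float" once rounding would produce a value above the maximum double.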
<python><floating-point>
2023-02-02 09:17:23
1
4,663
justANewb stands with Ukraine
75,320,562
5,775,358
Python polars speed issue
<p>I have python code that runs fast on my laptop, but terrible slow on a desktop. The desktop is new with a better cpu and more ram. Why is the same code slower?</p> <p>I use polars and this is my code:</p> <pre><code>def remove_timezone(df: pl.DataFrame, col: str = 't'): return df.with_column( pl.col(col).apply( lambda x: x.astimezone( # type: ignore pytz.utc).replace(tzinfo=None) # type: ignore ).alias(col)) df = remove_timezone(df, 't') </code></pre> <p>On the laptop, with a 11th Gen intel core i7, 4 cores 8 logical processors and 16 gb of ram this takes 6 seconds. On the desktop with an AMD Ryzen Threadripper Pro 24 cores 48 logical processors and 128 gb of ram it takes 134.1 second.</p> <p>To reproduce this problem:</p> <pre><code>def setup(): t = [datetime.datetime(2022, 1, 1, 1, 0, tzinfo=datetime.timezone(datetime.timedelta(seconds=3600)))] * 51824 return pl.from_pandas(pd.DataFrame({'t': t})) remove_timezone(setup()) </code></pre> <p>Using pandas:</p> <pre><code>def remove_timezone(df): df['t'].apply(lambda x: x.astimezone(pytz.utc).replace(tzinfo=None)) </code></pre> <p>the pandas solution it takes 5.1 seconds on the laptop and 0.2 seconds on the desktop.</p> <p>EDIT:</p> <p>To make it possible to compare the results a new environment is created. For now python 3.9.12 is used and polars version 0.15.8. The results are still the same.</p> <p>I noticed that there is some &quot;problem&quot; in polars when using the date type <code>[datetime[ns, +01:00]</code>. For example when I do <code>df['t'].to_list()</code> it takes also a long time. If I do it after removing the timezone it is super fast.</p>
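The slowdown pattern (wildly different timings for the same code on two machines) is characteristic of `.apply`, which leaves polars' Rust engine and pays per-row Python and timezone-lookup overhead. Dropping the timezone with native datetime expressions avoids it entirely — a sketch against a recent polars API (older 0.15-era releases spelled these `dt.with_time_zone`/`dt.cast_time_zone`, so the exact method names are version-dependent):

```python
import polars as pl

def remove_timezone(df: pl.DataFrame, col: str = "t") -> pl.DataFrame:
    # native expression: runs in Rust, no per-row Python callback
    return df.with_columns(
        pl.col(col).dt.convert_time_zone("UTC").dt.replace_time_zone(None)
    )
```

The same reasoning explains the slow `df['t'].to_list()` on the timezone-aware column: each element crosses into Python with timezone conversion attached.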
<python><pandas><datetime><python-polars>
2023-02-02 08:44:25
0
2,406
3dSpatialUser
75,320,508
2,642,356
Apply `torch.nn.MultiheadAttention`’s heads to same input
<p>My question surely has a simple answer, but I couldn't find it. I wish to apply <code>MultiheadAttention</code> to the same sequence without copying the sequence. My data is temporal data with dimensions (batch, time, channels). I treat the &quot;channels&quot; dimension as the embedding, and the time dimension as the sequence dimension. For example:</p> <pre class="lang-py prettyprint-override"><code>N, C, T = 2, 3, 5 n_heads = 7 X = torch.rand(N, T, C) </code></pre> <p>Now, I want to apply 7 different heads as self-attention to the same input <code>X</code>, but as far as I understand, it requires me to copy the data 7 times:</p> <pre class="lang-py prettyprint-override"><code>attn = torch.nn.MultiheadAttention(C * n_heads, n_heads, batch_first=True) X_ = X.repeat(1, 1, n_heads) attn(X_, X_, X_) </code></pre> <p>Is there any way to do this without copying the data 7 times? Thanks!</p>
<python><deep-learning><pytorch><nlp><attention-model>
2023-02-02 08:39:06
0
1,864
EZLearner
75,320,399
2,919,052
Qml, Reference error <signalName> is not defined when signal defined
<p>I have a very simple qml+python app to play and test signal/slot communication.</p> <p>All works fine so far, but when I run the app, a <code>ReferenceError</code> is reported on the QML side.</p> <p>However, all works fine, it is so simple code:</p> <p><strong>QML:</strong></p> <pre><code>import QtQuick 2.0 import QtQuick.Window 2.0 Window { width: 1000 height: 480 visible: true title: qsTr(&quot;Hello World&quot;) Connections { target: signalEmitter ignoreUnknownSignals : true function onSignal() { console.log(&quot;HELLO QML&quot;) } } Rectangle{ height: 100 width: 100 color: &quot;green&quot; MouseArea { anchors.fill: parent onClicked: { signalEmitter.sayHello() } } } Rectangle{ anchors.fill: parent color: &quot;transparent&quot; border.color: &quot;black&quot; } } </code></pre> <p><strong>Python:</strong></p> <pre><code>from PySide6.QtCore import QObject, Signal, Slot from PySide6.QtGui import QGuiApplication from PySide6.QtQml import QQmlApplicationEngine import sys class PythonSignalEmitter(QObject): signal = Signal(str) @Slot() def sayHello(self): print(&quot;HELLO PYTHON&quot;) self.signal.emit(&quot;HELLO&quot;) if __name__ == '__main__': app = QGuiApplication([]) engine = QQmlApplicationEngine() engine.load(&quot;main.qml&quot;) signal_emitter = PythonSignalEmitter() engine.rootContext().setContextProperty(&quot;signalEmitter&quot;, signal_emitter) sys.exit(app.exec()) </code></pre> <p>Why do I keep getting the error:</p> <pre><code>ReferenceError: signalEmitter is not defined </code></pre> <p>on line 12 in qml file. (app runs and signal/slot works as expected)</p>
<python><qt><qml><pyside6>
2023-02-02 08:29:04
1
5,778
codeKiller
75,320,243
419,399
conda disappeared, command not found - corrupted .zshrc
<p>All of the sudden, my terminal stopped recognizing the 'conda'. Also the VS Code stopped seeing my environments.</p> <p>All the folders, with my precious environments are there (<code>/opt/anaconda3</code>), but when I type conda I get:</p> <pre><code>conda zsh: command not found: conda </code></pre> <p>I tried install conda again (from <code>.pkg</code>) but it fails at the end of installation (no log provided).</p> <p>How can I clean it without losing my envs?</p> <p>I use Apple M1 MacBookPro with Monterey.</p>
<python><macos><conda>
2023-02-02 08:11:22
3
1,230
Intelligent-Infrastructure
75,320,160
18,273,129
pytest-html extras customizing code understanding
<p>I'm trying to customize report.html of pytest using the pytest-html plugin.</p> <p>I searched many sites (including the pytest-html documentation) and found that the code below is commonly used. (The code is in conftest.py)</p> <p>(<a href="https://pytest-html.readthedocs.io/en/latest/user_guide.html#extra-content" rel="nofollow noreferrer">https://pytest-html.readthedocs.io/en/latest/user_guide.html#extra-content</a>)</p> <pre class="lang-py prettyprint-override"><code>@pytest.hookimpl(hookwrapper = True) def pytest_runtest_makereport(item, call): pytest_html = item.config.pluginmanager.getplugin(&quot;html&quot;) outcome = yield report = outcome.get_result() extra = getattr(report, &quot;extra&quot;, []) if report.outcome == &quot;call&quot;: #always add url to report xfail = hasattr(report, &quot;wasxfail&quot;) if (report.skipped and xfail) or (report.failed and not xfail): extra.append(pytest_html.extras.url(&quot;http://www.google.com/&quot;)) extra.append(pytest_html.extras.text('Hi', name = 'TEXT')) # only add additional html on failure # extra.append(pytest_html.extras.html(&quot;&lt;div&gt;Additional HTML&lt;/div&gt;&quot;)) report.extra = extra </code></pre> <p>However, I have no idea what each line does.</p> <p>No one has explained what each line actually does.</p> <p>Why does the script assign the <strong>yield</strong> keyword to outcome without any value (e.g.
yield 1), and what does <code>outcome.get_result()</code> actually do?</p> <p>Also, I have no idea what xfail (&quot;wasxfail&quot;) means.</p> <p>I found that @pytest.xfail makes the test function fail in the pytest run, but I think it has nothing to do with the above code.</p> <p>Why do we use 'xfail' and not 'fail'?</p> <p>Anyway, what I need is:</p> <p><strong>First</strong>, the meaning of each line and what it does.</p> <p><strong>Second</strong>, I want to set a different message in the report.html depending on pass/fail.</p> <p>I tried <code>report.outcome == 'failed'</code> and <code>report.outcome == 'passed'</code> to divide the conditions, but it didn't work.</p> <p><strong>Third</strong>, when adding text rather than a url, it becomes a link tag that redirects to a page containing the text.</p> <p>However, if I click the link in the html, it opens an <strong>about:blank</strong> page, not the desired one.</p> <p>Using right click and open in new tab redirects to the desired one.</p> <p>Any help is welcome. Thanks.</p> <hr /> <p>+ I have more questions. I tried</p> <pre class="lang-py prettyprint-override"><code>if report.passed: extra.append(pytest_html.extras.url(&quot;https://www.google.com/&quot;) report.extra = extra </code></pre> <p>It attaches 3 identical links in the report.html (Results table). How can I handle it?</p> <p>+ I could log a message when a test fails like <code>msg = 'hi', pytest.fail(msg)</code>. However, I cannot figure out how to do it when the test passes.</p>
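On the bare `yield`: a hookwrapper is just a generator. `outcome = yield` pauses the wrapper, the hook machinery runs the real hook, then resumes the generator by *sending* an outcome object back in, which is why `yield` appears without a value. A minimal stand-in in plain Python (no pytest; `Outcome` here is a hypothetical stand-in for pluggy's result wrapper) mimicking what the plugin machinery does:

```python
class Outcome:
    """Stand-in for the result object pytest sends back into the wrapper."""
    def __init__(self, result):
        self._result = result

    def get_result(self):
        return self._result

seen = []

def my_wrapper():
    # Code before `yield` runs before the wrapped hook.
    outcome = yield                    # paused here while the real hook runs
    seen.append(outcome.get_result())  # code after runs with the hook's result

# Roughly what the hook caller does internally:
gen = my_wrapper()
next(gen)                      # advance the wrapper up to its yield
report = "fake-report"         # ...the real hook executes here...
try:
    gen.send(Outcome(report))  # resume the wrapper, handing it the outcome
except StopIteration:
    pass

print(seen)  # ['fake-report']
```

So `outcome` receives whatever the caller `send()`s in, and `get_result()` unwraps the actual hook return value (a test report, in the conftest.py code above).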
<python><pytest><pytest-html><pytest-html-reporter>
2023-02-02 08:02:18
1
341
jaemmin
75,320,104
75,612
svgwrite rotate issues causes spiral result
<p>I'm trying to create a canvas filled with triangles of random rotation and size, but as soon as I try rotation everything gets out of whack. If rotation is turned off the resulting svg looks as expected.</p> <p><a href="https://i.sstatic.net/XViPH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XViPH.png" alt="enter image description here" /></a></p> <p>As soon as I try to get the triangles to be rotated it all spirals out of control.</p> <p><a href="https://i.sstatic.net/WqTW8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WqTW8.png" alt="enter image description here" /></a></p> <pre><code> def create_triangle(size, rotation, color): triangle = svgwrite.shapes.Polygon(points=[ (0, size), (size / 2, 0), (size, size),], fill=color) triangle.rotate(rotation, center=(size / 2, size / 2)) return triangle dwg = svgwrite.Drawing(size=(canvas_width, canvas_height), profile='tiny') y = 0 x = 0 i = 0 for y in range(0, canvas_height, distance): i += 1 rotation = random.uniform(0, rotation_range) random_size = random.uniform(min_size, size) tri = create_triangle(random_size, i, colors[i*2]) tri.translate(x, y) dwg.add(tri) dwg.save() </code></pre>
<python><svgwrite>
2023-02-02 07:54:39
0
494
NccWarp9
75,320,065
19,556,055
How do I quickly drop rows based on the max value in a groupby?
<p>I have a large dataframe containing information on people and their job change history. Sometimes, someone had multiple changes to their record on one day, each of which is assigned a transaction sequence number. I just want to keep the rows with the highest transaction sequence number of that day. Currently, I'm using the for loop below to do this, but it takes forever.</p> <pre><code>list_indexes_to_drop = [] for (associate_id, date), df in df_job_his.groupby([&quot;Employee ID&quot;, &quot;Event Date&quot;]): if len(df) &gt; 1: list_indexes_to_drop += list(df.index[df[&quot;Transaction Sequence Number&quot;] != df[&quot;Transaction Sequence Number&quot;].max()]) </code></pre> <p>I also have this code below, but I'm not sure how to use it to filter the dataframe.</p> <pre><code>df_job_his.groupby([&quot;Employee ID&quot;, &quot;Event Date&quot;])[&quot;Transaction Sequence Number&quot;].max() </code></pre> <p>Is there a more efficient way to go about this?</p> <p>Here's an example of some random data in the same format:</p> <pre><code>df_job_his = pd.DataFrame({&quot;Employee ID&quot;: [1, 1, 1, 2, 3, 3, 4, 4, 5, 6, 6, 6, 7, 8, 9, 9, 10], &quot;Event Date&quot;: [&quot;2020-04-05&quot;, &quot;2020-06-08&quot;, &quot;2020-06-08&quot;, &quot;2022-09-01&quot;, &quot;2022-02-15&quot;, &quot;2022-02-15&quot;, &quot;2021-07-29&quot;, &quot;2021-07-29&quot;, &quot;2021-08-14&quot;, &quot;2021-09-14&quot;, &quot;2022-01-04&quot;, &quot;2022-01-04&quot;, &quot;2022-01-04&quot;, &quot;2022-04-04&quot;, &quot;2020-08-13&quot;, &quot;2020-08-13&quot;, &quot;2020-03-17&quot;], &quot;Transaction Sequence Number&quot;: [1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 1, 1, 1, 2, 1]}).groupby([&quot;Employee ID&quot;, &quot;Event Date&quot;]) </code></pre>
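The second snippet in the question is most of the answer: `groupby(...).idxmax()` returns the row label of the max per group, and `.loc` keeps exactly those rows, with no Python-level loop. A sketch on a tiny stand-in frame:

```python
import pandas as pd

df = pd.DataFrame({
    "Employee ID": [1, 1, 2, 3, 3],
    "Event Date": ["2020-06-08", "2020-06-08", "2022-09-01",
                   "2022-02-15", "2022-02-15"],
    "Transaction Sequence Number": [1, 2, 1, 1, 2],
})

# Row label of the max-sequence row within each (employee, date) group.
idx = df.groupby(["Employee ID", "Event Date"])["Transaction Sequence Number"].idxmax()
latest = df.loc[idx]
print(latest["Transaction Sequence Number"].tolist())  # [2, 1, 2]
```

An equivalent alternative is `df.sort_values("Transaction Sequence Number").drop_duplicates(["Employee ID", "Event Date"], keep="last")`; both avoid the per-group Python loop, which is where the time goes.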
<python><pandas><group-by>
2023-02-02 07:50:46
1
338
MKJ
75,320,011
2,091,247
Point out of curve
<p>I have the secp256k1 elliptic curve and I would like to print a Dot on that curve. However, the dot is out of curve and I can not see why.</p> <ul> <li><p>python 3.10.7</p> </li> <li><p>manim 0.17.2</p> </li> </ul> <p>Thanks for any help.</p> <pre><code>from manim import * class eliptic_curves(MovingCameraScene): def secp256k1(self, x): return x ** 3 + 7 def construct(self): ax = Axes( x_range=[-10, 10] ) # plot the x^3 + 7 = y^2 curve graph = ax.plot_implicit_curve(lambda x, y : x ** 3 + 7 - y ** 2, color = BLUE) self.add(ax, graph) y = np.sqrt(self.secp256k1(1)) dA = Dot([1, y, 0], color = RED) self.add(dA) with tempconfig({&quot;quality&quot;: &quot;medium_quality&quot;, &quot;preview&quot;: True}): scene = eliptic_curves() scene.render() </code></pre> <p><a href="https://i.sstatic.net/NGhtT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NGhtT.png" alt="Dot out of eliptic curve" /></a></p>
<python><manim>
2023-02-02 07:44:08
1
10,731
Radim Bača
75,319,944
2,385,132
How are keras tensors connected to layers that create them
<p>In the book &quot;Machine Learning with scikit-learn and Tensorflow&quot; there's a code fragment I can't wrap my head around. Until that chapter, their models were only explicitly using layers - be it in a sequential fashion, or functional. But in the chapter 16, there's this:</p> <pre><code>import tensorflow_addons as tfa encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32) embeddings = keras.layers.Embedding(vocab_size, embed_size) encoder_embeddings = embeddings(encoder_inputs) decoder_embeddings = embeddings(decoder_inputs) encoder = keras.layers.LSTM(512, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_embeddings) encoder_state = [state_h, state_c] sampler = tfa.seq2seq.sampler.TrainingSampler() decoder_cell = keras.layers.LSTMCell(512) output_layer = keras.layers.Dense(vocab_size) decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler, output_layer=output_layer) final_outputs, final_state, final_sequence_lengths = decoder( decoder_embeddings, initial_state=encoder_state, sequence_length=sequence_lengths) Y_proba = tf.nn.softmax(final_outputs.rnn_output) model = keras.models.Model( inputs=[encoder_inputs, decoder_inputs, sequence_lengths], outputs=[Y_proba]) </code></pre> <p>And then he just runs the model in a standard way:</p> <pre><code>model.compile(loss=&quot;sparse_categorical_crossentropy&quot;, optimizer=&quot;adam&quot;) X = np.random.randint(100, size=10*1000).reshape(1000, 10) Y = np.random.randint(100, size=15*1000).reshape(1000, 15) X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]] seq_lengths = np.full([1000], 15) history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2) </code></pre> <p>I have trouble understanding the code starting at line 7. 
The author is creating an <code>Embedding</code> layer which he immediately calls on <code>encoder_inputs</code> and <code>decoder_inputs</code>, then he does basically the same with the <code>LSTM</code> layer that he calls on the previously created <code>encoder_embeddings</code> and tensors returned by this operation are used in the code slightly below. What I don't get here is how are those tensors trained? It looks like he's not using the layers creating them in the model, but if so, then how come the embeddings are learned and the whole model converges?</p>
<python><tensorflow><keras>
2023-02-02 07:37:01
1
3,930
Marek M.
75,319,924
10,689,857
Switching PyCharm project between Python versions
<p>I have a project in PyCharm which I am currently running under Python 3.9. However, I want to compare how it would behave under Python 3.11.</p> <p>I have tried File - Settings - Python Interpreter - Add Interpreter - Add Local Interpreter and tried adding 3.11 from there, but it says &quot;Environment location directory is not empty&quot; and prevents me from doing this.</p> <p>How can I make my project run under Python 3.11?</p>
<python><pycharm>
2023-02-02 07:33:08
2
854
Javi Torre
75,319,803
806,160
how Install the Delta Lake package on the on-premise environment?
<p>I want to make a data lake for myself without using any cloud service. I have a Debian server and I want to create this data lake with the Databricks solution, Delta Lake.</p> <p>All the samples I have found establish Delta Lake in a cloud service.</p> <p>How can I do this on my own server?</p> <p>Later I may want to create a cluster to store data and do machine learning. And I want to use only Python to create the Delta Lake.</p>
<python><pyspark><debian><delta-lake><data-lake>
2023-02-02 07:17:03
1
1,423
Tavakoli
75,319,736
20,646,427
How to check user for 2 fields from another model with ManyToMany fields
<p>How am i suppose to check request.user for 2 fields in the same time</p> <p>I want to show different pages in header if user is customer or contractor by another many to many field or if he is contractor and customer in same time show both pages</p> <p>Im not sure how am i suppose to do that</p> <p>Models.py:</p> <pre><code>class CounterParty(models.Model): GUID = models.UUIDField(default=uuid.uuid4, editable=True, unique=True) name = models.CharField(max_length=150, verbose_name='Name') customer = models.BooleanField(default=False, verbose_name='customer') contractor = models.BooleanField(default=False, verbose_name='contractor') counter_user = models.ManyToManyField(User, blank=True, related_name='counter_user') </code></pre> <p>views.py:</p> <pre><code>@login_required def home_view(request): counter_party = CounterParty.objects.all() context = { 'counter_party': counter_party } return render(request, 'common/home.html', context) </code></pre> <p>header.html:</p> <pre><code>{% for counter in counter_party %} {% if request.user in counter.counter_user.all %} &lt;li class=&quot;nav-item&quot;&gt;&lt;a class=&quot;nav-link&quot; href=&quot;{% url 'contractor_view' %}&quot;&gt;&lt;span class=&quot;nav-link-title&quot;&gt;Contractor&lt;/span&gt;&lt;/a&gt;&lt;/li&gt; {% endif %} {% endfor %} {% if request.user.is_staff %} &lt;li class=&quot;nav-item&quot;&gt;&lt;a class=&quot;nav-link&quot; href=&quot;{% url 'admin:index' %}&quot;&gt;&lt;span class=&quot;nav-link-title&quot;&gt;Admin&lt;/span&gt;&lt;/a&gt;&lt;/li&gt; {% endif %} </code></pre>
<python><django>
2023-02-02 07:09:11
1
524
Zesshi
75,319,170
1,896,222
azure log analytics InsufficientAccessError
<p>I am trying to read log analytics in python. Here is my code:</p> <pre><code>AZURE_CLIENT_ID = '' AZURE_CLIENT_SECRET = '' AZURE_TENANT_ID = '' workspace_id = '' from azure.identity import ClientSecretCredential from datetime import datetime from azure.monitor.query import LogsQueryClient, LogsQueryStatus start_time = datetime(2022, 1, 1) end_time = datetime(2023, 1, 2) credential = ClientSecretCredential( client_id = AZURE_CLIENT_ID, client_secret = AZURE_CLIENT_SECRET, tenant_id = AZURE_TENANT_ID ) client = LogsQueryClient(credential) query = &quot;ContainerLog&quot; response = client.query_workspace(workspace_id=workspace_id, query=query, timespan=(start_time, end_time - start_time)) if response.status == LogsQueryStatus.PARTIAL: error = response.partial_error print('Results are partial', error.message) elif response.status == LogsQueryStatus.SUCCESS: results = [] for table in response.tables: for row in table.rows: results.append(dict(zip(table.columns, row))) print(convert_azure_table_to_dict(results)) </code></pre> <p>and it is failing:</p> <pre><code>Traceback (most recent call last): File &quot;c:\temp\x.py&quot;, line 24, in &lt;module&gt; response = client.query_workspace(workspace_id=workspace_id, File &quot;C:\kourosh\venv\lib\site-packages\azure\core\tracing\decorator.py&quot;, line 78, in wrapper_use_tracer return func(*args, **kwargs) File &quot;C:\kourosh\venv\lib\site-packages\azure\monitor\query\_logs_query_client.py&quot;, line 136, in query_workspace process_error(err, LogsQueryError) File &quot;C:\kourosh\venv\lib\site-packages\azure\monitor\query\_helpers.py&quot;, line 141, in process_error raise HttpResponseError(message=error.message, response=error.response, model=model) azure.core.exceptions.HttpResponseError: (InsufficientAccessError) The provided credentials have insufficient access to perform the requested operation Code: InsufficientAccessError Message: The provided credentials have insufficient access to perform the requested 
operation </code></pre> <p>I have added Log Analytics API -&gt; Data.Read permission to the registered app that I'm using. Any idea what is causing this?</p>
<python><azure><azure-log-analytics>
2023-02-02 05:52:43
1
10,564
max
75,319,159
18,758,062
Get Cloudflare R2 Public URL after uploading file using boto3
<p>Is there a way to upload a file to Cloudflare R2 storage using <code>boto3</code> in Python, and get the public URL that it can be accessed at?</p> <p>The files are being uploaded by doing</p> <pre><code>s3 = boto3.resource( &quot;s3&quot;, endpoint_url=endpoint_url, aws_access_key_id=aws_access_key_id, aws_secret_access_key=secret_access_key, ) bucket = s3.Bucket(&quot;my_bucket&quot;) res = bucket.Object(&quot;foo.png&quot;).put(Body=f.read()) </code></pre> <p>but the <code>res</code> response object returned does not appear to contain anything that looks like it can be used to construct the public URL.</p> <p>Thanks!</p>
<python><amazon-s3><boto3><cloudflare><cloudflare-r2>
2023-02-02 05:51:11
0
1,623
gameveloster
75,319,024
10,844,937
How to merge dataframe faster?
<p>I have a <code>df</code> as following</p> <pre><code>import pandas as pd df = pd.DataFrame( {'number_1': ['1', '2', None, None, '5', '6', '7', '8'], 'fruit_1': ['apple', 'banana', None, None, 'watermelon', 'peach', 'orange', 'lemon'], 'name_1': ['tom', 'jerry', None, None, 'paul', 'edward', 'reggie', 'nicholas'], 'number_2': [None, None, '3', None, None, None, None, None], 'fruit_2': [None, None, 'blueberry', None, None, None, None, None], 'name_2': [None, None, 'anthony', None, None, None, None, None], 'number_3': [None, None, '3', '4', None, None, None, None], 'fruit_3': [None, None, 'blueberry', 'strawberry', None, None, None, None], 'name_3': [None, None, 'anthony', 'terry', None, None, None, None], } ) </code></pre> <p>Here what I'd like to do is:</p> <ol> <li>find columns which has the same item. <code>name_1</code>, <code>name_2</code>, <code>name_3</code> for example.</li> <li>combine the columns to get rid of the <code>None</code> values.</li> </ol> <p>The desired result is</p> <pre><code> number fruit name 0 1 apple tom 1 2 banana jerry 2 3 blueberry anthony 3 4 strawberry terry 4 5 watermelon paul 5 6 peach edward 6 7 orange reggie 7 8 lemon nicholas </code></pre> <p>Here is how I do it.</p> <pre><code># Get the first column merge_df = pd.DataFrame(df.iloc[:, 0]) merge_df.columns = [merge_df.columns[0].split('_')[0]] item_list = [column_list[0].split('_')[0]] column_list = df.columns.to_list() for i in range(len(column_list)): for j in range(i + 1, len(column_list)): first_item = column_list[i].split('_')[0] second_item = column_list[j].split('_')[0] # change series name df_series = df.iloc[:, j] df_series.name = second_item if first_item != second_item and second_item not in item_list: merge_df = pd.concat([merge_df, df_series], axis=1) item_list.append(column_list[j].split('_')[0]) if first_item == second_item: # combine df and series if second_item in merge_df.columns: merge_df = merge_df.assign( **{f'{second_item}': 
merge_df[second_item].combine(df_series, lambda x, y: x if x is not None else y)}) print(merge_df) </code></pre> <p>Problem is it is very slow if <code>df</code> has multiple columns.</p> <p>Anyone has an advice to optimize this?</p> <hr /> <p>Edit:</p> <p>The accepted answer has given a perfect way to use a regex. Here I had a more complicated issue which is similar to this. I put it here instead of creating a new answer.</p> <p>Here the <code>df</code> is</p> <pre><code>import pandas as pd df = pd.DataFrame( {'number_C1_E1': ['1', '2', None, None, '5', '6', '7', '8'], 'fruit_C11_E1': ['apple', 'banana', None, None, 'watermelon', 'peach', 'orange', 'lemon'], 'name_C111_E1': ['tom', 'jerry', None, None, 'paul', 'edward', 'reggie', 'nicholas'], 'number_C2_E2': [None, None, '3', None, None, None, None, None], 'fruit_C22_E2': [None, None, 'blueberry', None, None, None, None, None], 'name_C222_E2': [None, None, 'anthony', None, None, None, None, None], 'number_C3_E1': [None, None, '3', '4', None, None, None, None], 'fruit_C33_E1': [None, None, 'blueberry', 'strawberry', None, None, None, None], 'name_C333_E1': [None, None, 'anthony', 'terry', None, None, None, None], } ) </code></pre> <p>Here the rule is: <strong>if a column removes <code>_C{0~9}</code> or <code>_C{0~9}{0~9}</code> or <code>_C{0~9}{0~9}{0~9}</code> is equal to another column, these two columns can be combined</strong>. Let's take <code>number_C1_E1</code> <code>number_C2_E2</code> <code>number_C3_E1</code> as an example, here <code>number_C1_E1</code> and <code>number_C3_E1</code> can be combined because they are both <code>number_E1</code> after removing <code>_C{0~9}</code>. 
In this way, the desired result is</p> <pre><code> number_E1 fruit_E1 name_E1 number_E2 fruit_E2 name_E2 0 1 apple tom None None None 1 2 banana jerry None None None 2 3 blueberry anthony 3 blueberry anthony 3 4 strawberry terry None None None 4 5 watermelon paul None None None 5 6 peach edward None None None 6 7 orange reggie None None None 7 8 lemon nicholas None None None </code></pre>
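For the first (simpler) layout, one possible vectorized approach: group the columns by their prefix and take the first non-null value across each group via `bfill(axis=1)`. This is a sketch, and the suffix pattern `_\d+$` is an assumption read off the sample column names:

```python
import pandas as pd

df = pd.DataFrame({
    "number_1": ["1", None, None],
    "number_2": [None, "3", None],
    "number_3": [None, "3", "4"],
    "name_1": ["tom", None, None],
    "name_2": [None, "anthony", None],
    "name_3": [None, "anthony", "terry"],
})

# Strip the trailing _<digits> to get each column's group key.
prefixes = df.columns.str.replace(r"_\d+$", "", regex=True)

# For each prefix, back-fill across its columns and keep the first one,
# i.e. the first non-null value in that group per row.
merged = pd.DataFrame({
    prefix: df.loc[:, (prefixes == prefix).tolist()].bfill(axis=1).iloc[:, 0]
    for prefix in prefixes.unique()
})
print(merged["number"].tolist())  # ['1', '3', '4']
```

For the more complicated `_C{digits}_E{n}` layout, the same idea applies with a different key, e.g. `df.columns.str.replace(r"_C\d{1,3}", "", regex=True)`, which maps `number_C1_E1` and `number_C3_E1` to the same group `number_E1`.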
<python><pandas>
2023-02-02 05:29:34
1
783
haojie
75,318,798
7,218,871
In a 2D Numpy Array find max streak of consecutive 1's
<p>I have a 2d numpy array like so. I want to find the maximum consecutive streak of 1's for every row.</p> <pre><code>a = np.array([[1, 1, 1, 1, 1], [1, 0, 1, 0, 1], [1, 1, 0, 1, 0], [0, 0, 0, 0, 0], [1, 1, 1, 0, 1], [1, 0, 0, 0, 0], [0, 1, 1, 0, 0], [1, 0, 1, 1, 0], ] ) </code></pre> <p>Desired Output: <code>[5, 1, 2, 0, 3, 1, 2, 2]</code></p> <p>I have found the solution to above for a 1D array:</p> <pre><code>a = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0]) d = np.diff(np.concatenate(([0], a, [0]))) np.max(np.flatnonzero(d == -1) - np.flatnonzero(d == 1)) &gt; 4 </code></pre> <p>On similar lines, I wrote the following but it doesn't work.</p> <pre><code>d = np.diff(np.column_stack(([0] * a.shape[0], a, [0] * a.shape[0]))) np.max(np.flatnonzero(d == -1) - np.flatnonzero(d == 1)) </code></pre>
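The 1-D trick extends to 2-D by zero-padding each row, diffing along axis 1, and pairing run starts with run ends row by row; `np.argwhere` returns hits in row-major order, so within each row the k-th start matches the k-th end. A sketch:

```python
import numpy as np

def max_streaks(a):
    # Pad each row with a zero on both sides so every run of 1s has a
    # detectable start (+1 in the diff) and end (-1 in the diff).
    p = np.zeros((a.shape[0], a.shape[1] + 2), dtype=a.dtype)
    p[:, 1:-1] = a
    d = np.diff(p, axis=1)
    starts = np.argwhere(d == 1)   # (row, col) of each run start
    ends = np.argwhere(d == -1)    # (row, col) just past each run end
    lengths = ends[:, 1] - starts[:, 1]
    out = np.zeros(a.shape[0], dtype=int)
    np.maximum.at(out, starts[:, 0], lengths)  # row-wise max run length
    return out

a = np.array([[1, 1, 1, 1, 1],
              [1, 0, 1, 0, 1],
              [1, 1, 0, 1, 0],
              [0, 0, 0, 0, 0],
              [1, 1, 1, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0]])
print(max_streaks(a).tolist())  # [5, 1, 2, 0, 3, 1, 2, 2]
```

Rows with no 1s at all (like row 3) simply contribute no starts, so their entry in `out` stays 0.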
<python><arrays><numpy>
2023-02-02 04:50:20
1
620
Abhishek Jain
75,318,347
7,946,143
Updating an 2d array created using 2 * operator produces unexpected side effect in python
<p>I tried making a 2d array using 2 approaches in python. Why does approach 1 provide an unexpected side effect where all the rows get updated?</p> <pre><code>dp1 = [ [0] * 5 ] * 8 dp1[2][2] = dp1[1][1] + 1 print(&quot;DP1&quot;) for dp in dp1: print(dp) dp2 = [[0] * 5 for _ in range(8)] dp2[2][2] = dp2[1][1] + 1 print(&quot;DP2&quot;) for dp in dp2: print(dp) </code></pre> <p>Output:</p> <pre><code>DP1 [0, 0, 1, 0, 0] [0, 0, 1, 0, 0] [0, 0, 1, 0, 0] [0, 0, 1, 0, 0] [0, 0, 1, 0, 0] [0, 0, 1, 0, 0] [0, 0, 1, 0, 0] [0, 0, 1, 0, 0] DP2 [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] [0, 0, 1, 0, 0] [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] </code></pre>
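The outer `* 8` does not copy the inner list; it stores eight references to the same object, which the `is` operator makes visible. A minimal check:

```python
rows_shared = [[0] * 5] * 8                   # approach 1
rows_fresh = [[0] * 5 for _ in range(8)]      # approach 2

# All eight "rows" in the first version are literally the same list object.
print(rows_shared[0] is rows_shared[7])  # True
print(rows_fresh[0] is rows_fresh[7])    # False

rows_shared[2][2] = 1
print(rows_shared[0])  # [0, 0, 1, 0, 0] -- every row sees the write
```

The list comprehension builds a fresh inner list on every iteration, which is why approach 2 behaves as expected. (`[0] * 5` alone shows no such effect only because the repeated element is an immutable int.)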
<python><arrays><matrix>
2023-02-02 03:21:19
0
3,110
Dracula
75,318,191
2,655,127
How to select dropdown in the dynamic table Selenium webdriver
<p>I have HTML tables like</p> <pre><code>&lt;table id='table1'&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td colspan='2'&gt;Food&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Burger&lt;/td&gt; &lt;td&gt; &lt;select class=&quot;form-control input-sm&quot; id=&quot;xx2&quot; onchange=&quot;count(this)&quot;&gt; &lt;option value=&quot;1&quot;&gt;Burger&lt;/option&gt; &lt;option value=&quot;2&quot;&gt;spaghetti&lt;/option&gt; &lt;option value=&quot;3&quot;&gt;Kebab&lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td colspan='2'&gt;Drink&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Burger&lt;/td&gt; &lt;td&gt; &lt;select class=&quot;form-control input-sm&quot; id=&quot;kj2&quot; onchange=&quot;count(this)&quot;&gt; &lt;option value=&quot;1&quot;&gt;Coffe&lt;/option&gt; &lt;option value=&quot;2&quot;&gt;Tea&lt;/option&gt; &lt;option value=&quot;3&quot;&gt;Milk&lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td colspan='2'&gt; ... &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt; .......... &lt;/td&gt; &lt;td&gt; &lt;select class=&quot;form-control input-sm&quot; id=&quot;jj&quot; onchange=&quot;count(this)&quot;&gt; &lt;option value=&quot;1&quot;&gt; ... &lt;/option&gt; &lt;option value=&quot;2&quot;&gt; ..... &lt;/option&gt; &lt;option value=&quot;3&quot;&gt; .... &lt;/option&gt; &lt;/select&gt; &lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; </code></pre> <p>In selenium web driver, how to select first value at dropdownlist in the dynamic table (total rows is varry and there is colspan)</p> <p>I have get table, and I want to get all and fill dropdownlist in the table</p> <pre><code>WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id=&quot;table1&quot;]/tbody'))) table = driver.find_element(By.XPATH, '//*[@id=&quot;table1&quot;]/tbody') </code></pre>
<python><python-3.x><selenium><selenium-webdriver><xpath>
2023-02-02 02:51:25
4
853
jack
75,317,985
7,188,929
problem in lists content in DNN work by python
<p>I have three lists [weights, n, org_weigths] in the code below. It is assumed that only list [n] changes, but when viewing the other lists we find that they change along with list [n].</p> <pre><code>for i in range(size): for j in range(len(weights[i])): print(&quot;the index is [&quot;,i,&quot;,&quot;,j,&quot;]&quot;) n[i][j]=weights[i][j]*3 model.set_weights(n) test_result = model.test_on_batch(X_test,y_test) print(test_result) n=org_weigths </code></pre> <p>You must change one item at a time in list [n] and then fetch back the original data from list [B].</p>
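If the three lists were created from one another by plain assignment (e.g. <code>n = weights</code>), they are aliases of the same nested lists, and writes through one name show up in all of them. A hedged sketch with the DNN parts stripped out, since the aliasing alone reproduces the symptom, showing <code>copy.deepcopy</code> as the fix:

```python
import copy

weights = [[1.0, 2.0], [3.0, 4.0]]

n_alias = weights                 # same object: writes show up in weights
n_copy = copy.deepcopy(weights)   # independent nested copy

n_copy[0][0] = n_copy[0][0] * 3
print(weights[0][0])  # 1.0 -- the original is untouched

n_alias[0][0] = n_alias[0][0] * 3
print(weights[0][0])  # 3.0 -- the alias write is visible everywhere
```

Note that `copy.copy` (a shallow copy) would still share the inner lists; only a deep copy makes element-wise writes independent.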
<python><list><deep-learning><artificial-intelligence>
2023-02-02 02:06:28
1
321
maath frman
75,317,799
14,291,703
How to update a delta table with the missing row using PySpark?
<p>I need to update delta table on the basis of update delta table rows.</p> <pre><code>Update table (source_df) +---------------------------------------------------------- |ID| NAME|ADDRESS|DELETE_FLAG|INSERT_DATE|UPDATE_DATE| +---------------------------------------------------------- | 1|sravan|delhi |false |02/02/2023 |02/02/2023| | 3|rohith|jaipur |false |02/02/2023 |02/02/2023| Delta table (delta_df) +---------------------------------------------------------- |ID| NAME|ADDRESS|DELETE_FLAG|INSERT_DATE|UPDATE_DATE| +---------------------------------------------------------- | 1|sravan|delhi |false |25/01/2023 |25/01/2023| | 2|ojasvi|patna |false |25/01/2023 |25/01/2023| | 3|rohith|jaipur |false |25/01/2023 |25/01/2023| </code></pre> <p>I want to update the delta table (both are delta table instances and not dataframes)</p> <p>Final table would look like this,</p> <pre><code>Delta table (Updated) +---------------------------------------------------------- |ID| NAME|ADDRESS|DELETE_FLAG|INSERT_DATE|UPDATE_DATE| +---------------------------------------------------------- | 1|sravan|delhi |false |25/01/2023 |25/01/2023| | 2|ojasvi|patna |true |25/01/2023 |02/02/2023| | 3|rohith|jaipur |false |25/01/2023 |25/01/2023| </code></pre> <p>Let the keys for both tables be ID and NAME, I tried the following,</p> <pre><code>delta_df.merge( source_df, f&quot;delta_df.DELETE_FLAG= 0 AND (delta_df.ID &lt;&gt; source_df.ID AND delta_df.NAME &lt;&gt; source_df.NAME)&quot; ).whenMatchedUpdate( set = { &quot;DELETE_FLAG&quot; : &quot;1&quot;, &quot;UPDATE_DATE&quot; : &quot;source_df.UPDATE_DATE&quot; } ).execute() </code></pre> <p>However, the above code is taking infinite time to execute. I am new to PySPark and not sure what is the most efficient way to deal with the problem at hand.</p> <p>Note: They are not dataframes.</p>
<python><pyspark><databricks><delta-lake><databricks-sql>
2023-02-02 01:25:00
0
512
royalewithcheese
75,317,762
6,498,757
Pandas: find interval distance from N consecutive to M consecutive
<h1>TLDR version:</h1> <p>I have a column like below,</p> <pre><code>[2, 2, 0, 0, 0, 2, 2, 0, 3, 3, 3, 0, 0, 2, 2, 0, 0, 0, 0, 2, 2, 0, 0, 0, 3, 3, 3] # There is the probability that has more sequences, like 4, 5, 6, 7, 8... </code></pre> <p>I need a function that has parameters n,m, if I use n=2, m=3, I will get a distance between 2 and 3, and then final result after the group could be :</p> <pre><code>[6, 9] </code></pre> <h1>Detailed version</h1> <p>Here is the test case. And I'm writing a function that will give n,m then generate a list of distances between each consecutive. Currently, this function can only work with one parameter <code>N</code> (which is the distance from N consecutive to another N consecutive). I want to make some changes to this function to make it accept <code>M</code>.</p> <pre class="lang-py prettyprint-override"><code>dummy = [1,1,0,0,0,1,1,0,1,1,1,0,0,1,1,0,0,0,0,1,1,0,0,0,1,1,1] df = pd.DataFrame({'a': dummy}) </code></pre> <p>What I write currently,</p> <pre class="lang-py prettyprint-override"><code>def get_N_seq_stat(df, N=2, M=3): df[&quot;c1&quot;] = ( df.groupby(df.a.ne(df.a.shift()).cumsum())[&quot;a&quot;] .transform(&quot;size&quot;) .where(df.a.eq(1), 0) ) df[&quot;c2&quot;] = np.where(df.c1.ne(N) , 1, 0) df[&quot;c3&quot;] = df[&quot;c2&quot;].ne(df[&quot;c2&quot;].shift()).cumsum() result = df.loc[df[&quot;c2&quot;] == 1].groupby(&quot;c3&quot;)[&quot;c2&quot;].count().tolist() # if last N rows are not consequence shouldn't add last. 
if not (df[&quot;c1&quot;].tail(N) == N).all(): del result[-1] if not (df[&quot;c1&quot;].head(N) == N).all(): del result[0] return result </code></pre> <p>if I set N=2, M=3 ( from 2 consecutive to 3 consecutive), Then the ideal value return from this would be [6,9] because below.</p> <pre><code>dummy = [1,1,**0,0,0,1,1,0,**1,1,1,0,0,1,1,**0,0,0,0,1,1,0,0,0,**1,1,1] </code></pre> <p>Currently, if I set N =2, the return list would be [3, 6, 4] that because</p> <pre><code>dummy = [1,1,**0,0,0,**1,1,**0,1,1,1,0,0,**1,1,**0,0,0,0,**1,1,0,0,0,1,1,1] </code></pre>
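One possible reading of the n-to-m rule, done without pandas at all: collapse the sequence into runs with <code>itertools.groupby</code>, then scan the runs, anchoring at the first length-n run and emitting the element count up to the next length-m run. This is a sketch matched against the expected [6, 9]; how anchors should reset when n-runs overlap in the gap is an assumption:

```python
from itertools import groupby

def run_distances(seq, n, m, value=1):
    # Collapse the sequence into (key, start_index, run_length) triples.
    runs, pos = [], 0
    for key, grp in groupby(seq):
        length = sum(1 for _ in grp)
        runs.append((key, pos, length))
        pos += length

    out, anchor_end = [], None
    for key, start, length in runs:
        if anchor_end is None and key == value and length == n:
            anchor_end = start + length     # index just past the n-run
        elif anchor_end is not None and key == value and length == m:
            out.append(start - anchor_end)  # elements strictly in between
            anchor_end = None               # look for the next n-run
    return out

dummy = [1,1,0,0,0,1,1,0,1,1,1,0,0,1,1,0,0,0,0,1,1,0,0,0,1,1,1]
print(run_distances(dummy, n=2, m=3))  # [6, 9]
```

For the TLDR variant, where the runs are made of different values (2s and 3s) rather than run lengths, the same scan works by anchoring on runs whose *key* is n and emitting on runs whose key is m.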
<python><pandas><dataframe>
2023-02-02 01:16:51
1
351
Yiffany
75,317,640
10,270,590
How to save down Django user's updated social media post?
<h2>Goal</h2> <ul> <li>A user can edit the post that they made, by clicking edit, editing, then pressing save.</li> </ul> <h2>Problem</h2> <ul> <li>When I edit the social media post it does not get saved.</li> </ul> <h2>Description</h2> <ul> <li>I can make a new post like in social media</li> <li>Post it to a list with all the other users' posts (only a shortened 200 characters visible)</li> <li>Then I can click on a &quot;Details button&quot; that jumps me to another page where I can see the full length of the post</li> <li>There is a button here called &quot;edit&quot;; it should only appear to the post creator</li> <li>If you click edit, a window pops up where your existing post is already copied into an input field</li> <li>Here you can edit your post</li> <li>The goal is that if you click save, the edit should be saved, but that does not happen</li> <li>Interestingly, if I close the pop-up window with the small [X] button or the &quot;cancel&quot; button and go back, it remembers my edit there</li> </ul> <h3>View function</h3> <pre><code>@login_required def social_post_detail(request, pk): social_post = get_object_or_404(social_post, pk=pk) form = None if request.user == social_post.created_by: if request.method == 'POST': print(request.POST) form = social_postForm(request.POST, instance=social_post) if form.is_valid(): form.save() return redirect('social_post_list') else: form = social_postForm(instance=social_post) return render(request, 'social_post_detail.html', {'social_post': social_post, 'form': form}) ### new edit from django.shortcuts import render, redirect from .models import social_post from .forms import social_postForm def social_post_edit(request, pk): social_post = social_post.objects.get(pk=pk) if request.method == 'POST': form = social_postForm(request.POST, instance=social_post) if form.is_valid(): form.save() return redirect('social_post_detail', pk=social_post.pk) else: form = 
social_postForm(instance=social_post) return render(request, 'social_post/social_post_edit.html', {'form': form}) </code></pre> <h3>View function unified 1 functions insted of 2</h3> <p>I have tried it 1by one but non of them worked</p> <pre><code>########## ALL IN 1 FUNCTION #1 ########## @login_required def social_post_detail(request, pk): social_post = get_object_or_404(social_post, pk=pk) if request.user != social_post.created_by: return redirect('social_post_list') if request.method == 'POST': form = social_postForm(request.POST, instance=social_post) if form.is_valid(): form.save() return redirect('social_post_list') else: form = social_postForm(instance=social_post) return render(request, 'social_post_detail.html', {'social_post': social_post, 'form': form}) ######### ALL IN 1 FUNCTION #2 ########## 2023.02.01 @login_required def social_post_detail(request, id): social_post = get_object_or_404(social_post, id=id) social_post = social_post.objects.get(pk=pk) if request.method == &quot;POST&quot;: form = social_postForm(request.POST, instance=social_post) if form.is_valid(): form.save() return redirect('social_post_list') else: form = social_postForm(instance=social_post) return render(request, 'social_post/social_post_detail.html', {'social_post': social_post, 'form': form}) </code></pre> <h3>HTML</h3> <p>social_post.html details</p> <pre><code>{% extends 'base.html' %} {% block content %} &lt;h1&gt;{{ social_post.title }}&lt;/h1&gt; &lt;p&gt;{{ social_post.description }}&lt;/p&gt; &lt;a href=&quot;{% url 'social_post_list' %}&quot; class=&quot;btn btn-primary&quot;&gt;Back to social_post List&lt;/a&gt; &lt;button type=&quot;button&quot; class=&quot;btn btn-primary&quot; id=&quot;editsocial_postButton&quot;&gt;Edit social_post&lt;/button&gt; &lt;script&gt; document.getElementById(&quot;editsocial_postButton&quot;).addEventListener(&quot;click&quot;, function() { $('#editModal').modal('show'); }); &lt;/script&gt; &lt;div class=&quot;modal fade&quot; 
id=&quot;editModal&quot; tabindex=&quot;-1&quot; role=&quot;dialog&quot; aria-labelledby=&quot;exampleModalLabel&quot; aria-hidden=&quot;true&quot;&gt; &lt;div class=&quot;modal-dialog&quot; role=&quot;document&quot;&gt; &lt;div class=&quot;modal-content&quot;&gt; &lt;div class=&quot;modal-header&quot;&gt; &lt;h5 class=&quot;modal-title&quot; id=&quot;exampleModalLabel&quot;&gt;Edit social_post&lt;/h5&gt; &lt;button type=&quot;button&quot; class=&quot;close&quot; data-dismiss=&quot;modal&quot; aria-label=&quot;Close&quot;&gt; &lt;span aria-hidden=&quot;true&quot;&gt;&amp;times;&lt;/span&gt; &lt;/button&gt; &lt;/div&gt; &lt;div class=&quot;modal-body&quot;&gt; &lt;form method=&quot;post&quot;&gt; {% csrf_token %} &lt;div class=&quot;form-group&quot;&gt; &lt;label for=&quot;title&quot;&gt;social_post Title&lt;/label&gt; &lt;input type=&quot;text&quot; class=&quot;form-control&quot; name=&quot;title&quot; value=&quot;{{ social_post.title }}&quot;&gt; &lt;/div&gt; &lt;div class=&quot;form-group&quot;&gt; &lt;label for=&quot;description&quot;&gt;social_post Description&lt;/label&gt; &lt;textarea class=&quot;form-control&quot; name=&quot;description&quot;&gt;{{ social_post.description }}&lt;/textarea&gt; &lt;/div&gt; {{ form.as_p }} &lt;button type=&quot;submit&quot; class=&quot;btn btn-primary&quot;&gt;Save Changes&lt;/button&gt; &lt;button type=&quot;button&quot; class=&quot;btn btn-secondary&quot; data-dismiss=&quot;modal&quot;&gt;Cancel&lt;/button&gt; &lt;/form&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endblock %} </code></pre> <h2>No ERROR message</h2> <ul> <li>I get no error message</li> <li>Just the following terminal print out after I press the save button</li> </ul> <pre><code> [02/Feb/2023 00:43:52] &quot;GET /social_post/social_post/1/ HTTP/1.1&quot; 200 7142 [02/Feb/2023 00:44:13] &quot;POST /social_post/social_post/1/ HTTP/1.1&quot; 200 7142 </code></pre> <h2>My guesses</h2> <ul> <li>I am struggling with the JS and CSS imports they might 
cause the error.</li> </ul> <h2>Tried Solutions</h2> <p>view.py</p> <pre><code>### new edit from django.shortcuts import render, redirect from .models import social_post from .forms import social_postForm from django.contrib import messages def social_post_edit(request, pk): social_post = social_post.objects.get(pk=pk) if request.method == 'POST': form = social_postForm(request.POST, instance=social_post) if form.is_valid(): form.save() return redirect('social_post_detail', pk=social_post.pk) else: messages.error(request, messages.INFO, str(form.errors)) return redirect('social_post_detail', pk=social_post.pk) else: form = social_postForm(instance=social_post) return render(request, 'social_post/social_post_edit.html', {'form': form}) </code></pre> <p>Terminal output after clicking edit and then saving:</p> <pre><code>[02/Feb/2023 21:44:45] &quot;POST /social_post/social_post/1/ HTTP/1.1&quot; 200 8474 </code></pre>
<javascript><python><html><css><django>
2023-02-02 00:49:50
1
3,146
sogu
75,317,584
11,501,160
pypdf gives output with incorrect PDF format
<p>I am using the following code to resize pages in a PDF:</p> <pre><code>from pypdf import PdfReader, PdfWriter, Transformation, PageObject, PaperSize from pypdf.generic import RectangleObject reader = PdfReader(&quot;input.pdf&quot;) writer = PdfWriter() for page in reader.pages: A4_w = PaperSize.A4.width A4_h = PaperSize.A4.height # resize page to fit *inside* A4 h = float(page.mediabox.height) w = float(page.mediabox.width) scale_factor = min(A4_h/h, A4_w/w) transform = Transformation().scale(scale_factor,scale_factor).translate(0, A4_h/2 - h*scale_factor/2) page.add_transformation(transform) page.cropbox = RectangleObject((0, 0, A4_w, A4_h)) # merge the pages to fit inside A4 # prepare A4 blank page page_A4 = PageObject.create_blank_page(width = A4_w, height = A4_h) page.mediabox = page_A4.mediabox page_A4.merge_page(page) writer.add_page(page_A4) writer.write('output.pdf') </code></pre> <p>Source: <a href="https://stackoverflow.com/a/75274841/11501160">https://stackoverflow.com/a/75274841/11501160</a></p> <p>While this code works fine for the resizing part, I have found that most input files work fine but some input files do not work fine.</p> <p>I am providing download links to <a href="https://drive.google.com/file/d/1aQ8xEeQUq4gA3Ugn73Ed-dy_Dt5TFC-a/view?usp=sharing" rel="nofollow noreferrer">input.pdf</a> and <a href="https://drive.google.com/file/d/1sV9P5m0_ItPzF0VL_lkCHzMZPE2LkpRJ/view?usp=sharing" rel="nofollow noreferrer">output.pdf</a> files for testing and review. The output file is completely different from the input file. The images are missing, the background colour is different, even the pure text on first page has only the first line visible.</p> <p>What is interesting is that these difference are only seen when I open the output pdf in Adobe Acrobat, or look at the physically printed pages. 
The PDF looks perfect when I open it in Preview (on macOS) or in my Chrome browser.</p> <p><a href="https://i.sstatic.net/AURyy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AURyy.jpg" alt="input file" /></a></p> <p>and</p> <p><a href="https://i.sstatic.net/akLGN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/akLGN.png" alt="output file" /></a></p> <p>The origin of the input PDF is that I created it in Preview (on macOS) by mixing pages from different PDFs and dragging image files into the thumbnails, as per these instructions: <a href="https://support.apple.com/en-ca/HT202945" rel="nofollow noreferrer">https://support.apple.com/en-ca/HT202945</a>. I've never had a problem making PDFs like this before, and even Adobe Acrobat reads the input PDF properly. Only the output PDF is problematic in Acrobat and in printers.</p> <p>Is this a bug in pypdf, or am I doing something wrong? How can I get the output PDF to render properly in Adobe Acrobat, printers, etc.?</p>
<python><pypdf>
2023-02-02 00:33:24
2
305
Zain Khaishagi
75,317,575
1,470,373
read keyboard/stdin asynchronously
<p>I have a loop where I would like to do something until a key is pressed. Is there a way to read input from stdin in another thread until a specific key is pressed? Is there a better way to do this?</p> <pre><code>def capture_loop(dev_path, breakout_key=&quot;a&quot;): captured_devs = [] capture = True def wait_for_key(): global capture while True: i, o, e = select.select([sys.stdin], [], [], 10) if len(i) &gt; 0: input_string = i[0].read() if breakout_key in str(input_string): capture = False break else: print(&quot;You said nothing!&quot;) t = threading.Thread(target=wait_for_key) t.setDaemon(False) t.start() while capture: print(&quot;Doing Stuff until the 'a' key is pressed&quot;) time.sleep(10) print(&quot;Done doing stuff&quot;) </code></pre>
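One possible pattern (a sketch, not the only way): share a `threading.Event` instead of a plain boolean, which avoids the `global` scoping problem in the question's code (where `capture` is a local of `capture_loop`, so `global capture` in the inner function refers to a different name). The reader takes an explicit input source so it can be exercised without a real keyboard.

```python
import io
import sys
import threading
import time

def wait_for_key(stop_event, breakout_key="a", source=None):
    """Read lines from `source` (default: stdin) and set `stop_event`
    once a line containing `breakout_key` arrives."""
    if source is None:
        source = sys.stdin
    for line in source:
        if breakout_key in line:
            stop_event.set()
            break

def capture_loop(stop_event, max_iterations=1000):
    """Do work until the event is set (or a safety cap is reached)."""
    done = []
    while not stop_event.is_set() and len(done) < max_iterations:
        done.append("did stuff")  # placeholder for real work
        time.sleep(0.001)
    return done

# Simulate the keyboard with an in-memory stream for demonstration:
stop = threading.Event()
reader = threading.Thread(
    target=wait_for_key, args=(stop, "a", io.StringIO("x\na\n")), daemon=True
)
reader.start()
work = capture_loop(stop)
reader.join()
```

In real use you would pass `sys.stdin` (the default) and press the breakout key followed by Enter; note that reading raw keypresses without Enter requires terminal-mode changes (e.g. `termios`/`tty` on POSIX), which this sketch does not attempt.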
<python><multithreading><stdin>
2023-02-02 00:29:52
1
1,592
Rusty Weber
75,317,356
8,222,791
Unable to share global variable between threads with flask
<h2>Context</h2> <p>I'm trying to write a python flask server that answers a simple request. The data to be returned in the response is queried from a backend service. Since this query may take some time to complete, I don't want to do it synchronously, I want to have it run periodically, in a background thread, where I can explicitly control the frequency. It should update some data structure that is shared with the flask view function, so the GET requests gets its answer from the shared data.</p> <p>I'm including two sample codes below. In both codes, <code>cnt</code> is a global variable, starting at 0, and increased inside a separate thread. The <code>index.html</code> file displays the value of <code>cnt</code>:</p> <pre><code>&lt;h1&gt;cnt = {{ cnt }}&lt;/h1&gt; </code></pre> <h2>The issue I'm facing</h2> <p>When the view function is inside the same module, it works: every time I refresh the page with F5, the value of <code>cnt</code> has changed, I can see it increasing.</p> <p>But when I put the view functions in a separate <code>routes</code> module (which I import at the end of my <code>hello.py</code> file), it no longer works: I can see in the server traces that <code>cnt</code> is being increased by the background thread, but when I refresh the page I always see</p> <pre><code>cnt = 1 </code></pre> <p>It's as if I now have two different copies of the <code>cnt</code> variable, even though the variable has been imported into the <code>routes</code> module.</p> <h3>Note</h3> <p>I've found countless question on SO on this topic, but none that really addresses this specific concern. Also, I'm perfectly aware that in my examples below, there is no <em>lock</em> protecting the shared data (which is a simple <code>cnt</code> variable) and I'm not handling <em>thread termination</em>. 
This is being deliberately ignored for now, in order to keep the sample code minimal.</p> <p>Here are the codes.</p> <h2>Single module, it works</h2> <p>Here's the main <code>hello.py</code> file, with everything inside the same module:</p> <pre><code>from flask import Flask, render_template import threading as th from time import sleep cnt = 0 app = Flask(__name__) # Run in background thread def do_stuff(): global cnt while True: cnt += 1 print(f'do_stuff: cnt={cnt}') sleep(1) # Create the separate thread t = th.Thread(target=do_stuff) t.start() @app.route(&quot;/&quot;) def hello(): return render_template('index.html', cnt=cnt) </code></pre> <p>The variable is indeed shared between the background thread and the view function, as expected.</p> <h2>Separate modules, it no longer works</h2> <p>Here's the main <code>hello.py</code> module, without the view function :</p> <pre><code>from flask import Flask import threading as th from time import sleep cnt = 0 app = Flask(__name__) # Run in background thread def do_stuff(): global cnt while True: cnt += 1 print(f'do_stuff: cnt={cnt}') sleep(1) # Create the separate thread t = th.Thread(target=do_stuff) t.start() import routes </code></pre> <p>And here is the separate <code>routes.py</code> file (see import at the end of <code>hello.py</code> above):</p> <pre><code># routes.py from hello import app, cnt from flask import render_template @app.route(&quot;/&quot;) def hello(): return render_template('index.html', cnt=cnt) </code></pre> <p>With this code, the web page always displays <code>cnt = 1</code>, as if the two modules had two distinct instances of the <code>cnt</code>variable.</p> <p>I feel like I'm missing some basic insight into python modules, or threads, or their interaction. Any help will be appreciated, and my apologies for such a long question.</p>
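A likely culprit (worth verifying in the actual project): `from hello import cnt` binds the name `cnt` in `routes` to the integer object that exists at import time. When the background thread later rebinds `hello.cnt` to a new int, `routes.cnt` still points at the old object — integers are immutable, so the two modules really do end up with different values. Reading through the module (`import hello` then `hello.cnt`) or sharing one mutable holder object avoids this. A minimal single-file sketch of the mutable-holder idea:

```python
import threading
import time

class Counter:
    """Mutable holder: every importer shares this same object,
    so in-place mutation is visible everywhere."""
    def __init__(self):
        self.value = 0

cnt = Counter()

def do_stuff(counter, iterations=3, delay=0.01):
    for _ in range(iterations):
        counter.value += 1  # mutate in place; no name rebinding involved
        time.sleep(delay)

t = threading.Thread(target=do_stuff, args=(cnt,))
t.start()
t.join()
```

In the two-module layout, `routes.py` would do `from hello import cnt` and render `cnt.value`; because only the attribute changes, not the binding, both modules observe the updates. (A lock would still be advisable for anything less trivial than a counter.)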
<python><multithreading><flask><module>
2023-02-01 23:44:54
1
2,303
joao
75,317,209
16,011,842
AccessDenied when calling Boto3 assume_role from EC2 even with service principal
<p>I'm running into an issue when trying to have a Python script running on an EC2 instance assume a role to perform S3 tasks. Here's what I have done.</p> <ol> <li>Created an IAM role with <code>AmazonS3FullAccess</code> permissions and got the following <code>ARN</code>:</li> </ol> <p><code>arn:aws:iam::&lt;account_number&gt;:role/&lt;role_name&gt;</code></p> <p>The trust policy is set so the principal is the EC2 service. I interpret this as allowing any EC2 instance within the account to assume the role.</p> <blockquote> <pre><code>{ &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Principal&quot;: { &quot;Service&quot;: &quot;ec2.amazonaws.com&quot; }, &quot;Action&quot;: &quot;sts:AssumeRole&quot; } ] } </code></pre> </blockquote> <ol start="2"> <li><p>I launched an EC2 instance and attached the above IAM role.</p> </li> <li><p>I attempt to call <code>assume_role()</code> using <code>Boto3</code></p> </li> </ol> <blockquote> <pre><code>session = boto3.Session() sts = session.client(&quot;sts&quot;) response = sts.assume_role( RoleArn=&quot;arn:aws:iam::&lt;account_number&gt;:role/&lt;role_name&gt;&quot;, RoleSessionName=&quot;role_session_name&quot; ) </code></pre> </blockquote> <p>But it throws the following error:</p> <blockquote> <p>botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::&lt;account_number&gt;:assumed-role/&lt;role_name&gt;/i-&lt;instance_id&gt; is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::&lt;account_number&gt;:role/&lt;role_name&gt;</p> </blockquote> <p>All other StackOverflow questions about this talk about the Role's trust policy, but mine is set to allow EC2. So either I'm misinterpreting what the policy should be or there is some other error I can't figure out.</p>
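A note on the error, as a sketch to verify against the IAM documentation: trusting `ec2.amazonaws.com` lets EC2 hand the role's credentials to the instance, but those credentials are then the caller shown in the error (`assumed-role/<role_name>/i-...`), and the trust policy does not name *that* principal, so the explicit `AssumeRole` call is denied. Two commonly cited options are to skip `assume_role()` entirely (boto3's default credential chain already uses the instance-profile role, so `boto3.client("s3")` just works), or — if re-assuming really is needed — to also trust the role/account in the trust policy, roughly like this (the account number and role name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com",
        "AWS": "arn:aws:iam::<account_number>:role/<role_name>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```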
<python><amazon-web-services><amazon-ec2><boto3>
2023-02-01 23:21:25
1
329
barneyAgumble
75,317,183
17,553,278
How to display code as code with color using python django
<p>Context: I'm developing a web application using Python Django, and I want to display code snippets in the browser with syntax highlighting. For example, I would like to display the following code with different colors:</p> <pre><code>if True: print(&quot;Hello World&quot;) </code></pre> <p>I want to achieve this effect using Django's built-in functionality or any recommended libraries or techniques.</p> <p>Question: How can I display code as code in the browser with syntax highlighting using Python Django? Are there any specific libraries or techniques that I can use to achieve this effect?</p> <p>Thank you.</p>
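The usual library for this in the Python world is Pygments (assuming it is installed, e.g. `pip install Pygments`); a Django view or template filter can convert the source text to styled HTML, which the template then renders with the `safe` filter. A minimal framework-independent sketch:

```python
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import PythonLexer

code = 'if True:\n    print("Hello World")\n'

# Produce an HTML fragment whose tokens are wrapped in <span> elements
# carrying CSS classes:
formatter = HtmlFormatter()
html_snippet = highlight(code, PythonLexer(), formatter)

# The matching stylesheet (include once per page, e.g. in a <style> tag):
css = formatter.get_style_defs(".highlight")
```

In a Django template you would output `{{ html_snippet|safe }}` and serve the generated CSS alongside it; `pygments.lexers.guess_lexer` can pick a lexer when the language is not known in advance.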
<python><html><css><django><colors>
2023-02-01 23:15:25
2
325
Baboucarr Badjie
75,317,104
15,724,084
Python pandas lower data AttributeError: 'Series' object has no attribute 'lower'
<p>I want to lowercase data taken from a pandas sheet and trim all spaces, then look for an equality.</p> <pre><code>df['ColumnA'].loc[lambda x: x.lower().replace(&quot; &quot;, &quot;&quot;) == var_name] </code></pre> <p>The code is above. It says a pandas Series has no lower method, but I need to search for data inside column A via pandas while lowercasing all letters and trimming whitespace. Any idea how I can achieve this in pandas?</p>
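The error arises because `.lower()` is being called on the Series itself; the element-wise string methods live under the `.str` accessor. A sketch of the lookup (the column and variable names mirror the question; the sample data is made up):

```python
import pandas as pd

df = pd.DataFrame({"ColumnA": ["Some Value", "Other Value"]})
var_name = "somevalue"

# Element-wise lowercase + whitespace removal via the .str accessor
mask = df["ColumnA"].str.lower().str.replace(" ", "", regex=False) == var_name
matches = df.loc[mask, "ColumnA"]
```

`mask` is a boolean Series, so `df.loc[mask]` returns only the rows whose normalized value equals `var_name`.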
<python><pandas>
2023-02-01 23:02:50
1
741
xlmaster
75,316,866
9,235,704
Get partial output from nbconvert.preprocessors.ExecutePreprocessor
<p>Is there a way to get partial output from nbconvert.preprocessors.ExecutePreprocessor? Currently, I use the ExecutePreprocessor to execute my Jupyter notebook programmatically, and it returns the output after executing the entire notebook. However, it would be great to be able to get and save the partial results while running the notebook. For example, if I have a progress bar in the Jupyter notebook, is there a way to continuously read the updated execution output so that I can see it updating?</p> <p>This is my current code:</p> <pre><code>import nbformat from nbconvert.preprocessors import ExecutePreprocessor with open('./test2.ipynb') as f: nb = nbformat.read(f, as_version=4) ep = ExecutePreprocessor(timeout=600, kernel_name='python3') ep.preprocess(nb) print(nb) with open('executed_notebook.ipynb', 'w', encoding='utf-8') as f: nbformat.write(nb, f) </code></pre> <p>However, it would be great to be able to continuously read the nb variable and write it to a file while it executes.</p>
<python><jupyter-notebook><jupyter><ipython>
2023-02-01 22:28:17
1
2,090
Julian Chu
75,316,822
12,574,341
Python annotate type as difference of two TypedDicts
<p>In set theory you can subtract set B from set A to derive a new set consisting of members that are in A but not in B.</p> <pre><code>A = {x,y,z} B = {z} C = A - B = {x,y} </code></pre> <p>I want to achieve this in Python by subtracting two <code>TypedDicts</code> to derive a new type annotation</p> <pre class="lang-py prettyprint-override"><code>class A(TypedDict): x: int y: int z: int class B(TypedDict): z: int C = type(A - B) # &lt;- fake code, what I'm looking for def f() -&gt; C: return {'x':0,'y':0} </code></pre> <p>Languages like TypeScript <a href="https://stackoverflow.com/a/50918777/12574341">seem to support this</a> operation. Does Python?</p>
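Python's typing system has no built-in difference operator, and there is no direct static equivalent of what TypeScript's mapped types allow. What does work — though static checkers like mypy will not treat the result as a precise, checkable type the way TypeScript would — is building the difference at runtime from the annotations with the functional `TypedDict` syntax:

```python
from typing import TypedDict

class A(TypedDict):
    x: int
    y: int
    z: int

class B(TypedDict):
    z: int

# Runtime-only "A - B": keep the keys of A that B does not declare.
remaining = {key: tp for key, tp in A.__annotations__.items()
             if key not in B.__annotations__}
C = TypedDict("C", remaining)
```

`C` is a real `TypedDict` at runtime (`C.__annotations__` is `{'x': int, 'y': int}`), which can be useful for introspection or validation, but a function annotated `-> C` will only be fully understood by a type checker if `C` is declared statically.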
<python><python-typing>
2023-02-01 22:22:46
0
1,459
Michael Moreno
75,316,811
1,231,560
How can I parse a row's column value passed to a UDF when mapping a column?
<p>I have a dataframe like this, for the sake of simplicity i'm just showing 2 columns both columns are <code>string</code>, but in real life it will have more columns each of different types other than <code>string</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">SQLText</th> <th style="text-align: center;">TableName</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">select * from sourceTable;</td> <td style="text-align: center;">NewTable</td> </tr> <tr> <td style="text-align: center;">select * from sourceTable1;</td> <td style="text-align: center;">NewTable1</td> </tr> </tbody> </table> </div> <p>I also have a custom Function where i want to iterate over the dataframe and get the <code>sql</code> and run it to create a table, however I'm not passing each column individually, but rather the whole row:</p> <pre><code>def CreateTables(rowp): df = spark.sql(rowp.SQLText) #code to create table using rowp.TableName </code></pre> <p>This is my code, I first clean up <code>SQLText</code> because it's stored in another table and then I run the UDF on the column:</p> <pre><code>l = l.withColumn(&quot;SQLText&quot;, F.lit(F.regexp_replace(F.col(&quot;SQLText&quot;).cast(&quot;string&quot;), &quot;[\n\r]&quot;, &quot; &quot;))) nt = l.select(l[&quot;*&quot;]).withColumn(&quot;TableName&quot;,CreateTables(F.struct(*list(l.columns)) )).select(&quot;TableName&quot;,&quot;SQLText&quot;) nt.show(truncate=False) </code></pre> <p>So when I'm running the function, and I try to run the code above, it errors out because instead of parsing the <code>rowp.SQLText</code> into its literal value, it passes its type?:</p> <pre><code>Column&lt;'struct(SourceSQL, TableName)[SourceSQL]'&gt; </code></pre> <p>So in the <code>CreateTables</code> function, when <code>spark.sql(rowp.SQLText)</code> is executed I expect the following:</p> <p><code>df = spark.sql(&quot;select * from sourceTable;&quot;)</code></p> <p>but 
instead this is happening: the Column expression itself is being passed rather than its <em>value</em></p> <p><code>df = spark.sql(&quot;Column&lt;'struct(SourceSQL, TableName)[SourceSQL]'&gt;&quot;)</code></p> <p>I've tried numerous solutions: <code>getItem</code>, <code>getField</code>, <code>get</code>, <code>getAs</code>, but no luck yet.</p> <p>I've also tried using indexes like <code>rowp[0]</code>, but that only changes the Column expression passed to the <code>spark.sql</code> function:</p> <pre><code>Column&lt;'struct(SourceSQL, TableName)[0]'&gt; </code></pre> <p>If I try <code>rowp(0)</code> it gives me a <code>Column is not callable</code> error.</p>
<python><sql><apache-spark><pyspark>
2023-02-01 22:21:44
1
826
Jose R
75,316,741
10,007,302
AttributeError: 'Engine' object has no attribute 'execute' when trying to run sqlalchemy in python to manage my SQL database
<p>I have the following line of code that keeps giving me an error that the Engine object has no attribute execute. I think I have everything right but have no idea what keeps happening. It seemed others had this issue and restarting their notebooks worked. I'm using PyCharm and have restarted it without any resolution. Any help is greatly appreciated!</p> <pre><code>import pandas as pd from sqlalchemy import create_engine, text import sqlalchemy import pymysql masterlist = pd.read_excel('Masterlist.xlsx') user = 'root' pw = 'test!*' db = 'hcftest' engine = create_engine(&quot;mysql+pymysql://{user}:{pw}@localhost:3306/{db}&quot; .format(user=user, pw=pw, db=db)) results = engine.execute(text(&quot;SELECT * FROM companyname;&quot;)) for row in results: print(row) </code></pre>
<python><mysql><sql><sqlalchemy>
2023-02-01 22:11:04
3
1,281
novawaly
75,316,580
12,821,675
Django - FileField - PDF vs octet-stream - AWS S3
<p>I have a model with a file field like so:</p> <pre class="lang-py prettyprint-override"><code>class Document(models.Model): file = models.FileField(...) </code></pre> <p>Elsewhere in my application, I am trying to download a pdf file from an external url and upload it to the file field:</p> <pre class="lang-py prettyprint-override"><code>import requests from django.core.files.base import ContentFile ... # get the external file: response = requests.get('&lt;external-url&gt;') # convert to ContentFile: file = ContentFile(response.content, name='document.pdf') # update document: document.file.save('document.pdf', content=file, save=True) </code></pre> <p>However, I have noticed the following behavior:</p> <ul> <li>files uploaded via the django-admin portal have the <code>content_type</code> &quot;application/pdf&quot;</li> <li>files uploaded via the script above have the <code>content_type</code> &quot;application/octet-stream&quot;</li> </ul> <p>How can I ensure that files uploaded via the script have the &quot;application/pdf&quot; <code>content_type</code>? Is it possible to set the <code>content_type</code> on the <code>ContentFile</code> object? This is important for the frontend.</p> <p>Other notes:</p> <ul> <li>I am using AWS S3 as my file storage system.</li> <li>Uploading a file from my local file storage via the script (i.e. using <code>with open(...) as file:</code>) still uploads the file as &quot;application/octet-stream&quot;</li> </ul>
<python><django><amazon-web-services>
2023-02-01 21:51:44
1
3,537
Daniel
75,316,579
12,436,050
Explode pandas dataframe column values to separate rows
<p>I have the following pandas dataframe:</p> <pre><code>id term code 2445 | 2716 abcd | efgh 2345 1287 hgtz 6567 </code></pre> <p>I would like to explode the <code>id</code> and <code>term</code> columns. How can I explode multiple columns while keeping the values across the columns <code>id</code>, <code>term</code> and <code>code</code> together?</p> <p>The expected output is:</p> <pre><code>id term code 2445 abcd 2345 2716 efgh 2345 1287 hgtz 6567 </code></pre> <p>What I have tried so far is: <code>df.assign(id=df['id'].str.split(' | ')).explode('id')</code></p>
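Since pandas 1.3, `DataFrame.explode` accepts a list of columns, so both list-columns can be exploded together as long as their lists have matching lengths per row. A sketch (note `regex=False` so the literal `' | '` separator is not interpreted as a pattern; that keyword needs pandas >= 1.4):

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["2445 | 2716", "1287"],
    "term": ["abcd | efgh", "hgtz"],
    "code": [2345, 6567],
})

out = (
    df.assign(
        id=df["id"].str.split(" | ", regex=False),
        term=df["term"].str.split(" | ", regex=False),
    )
    .explode(["id", "term"])  # pandas >= 1.3: explode several columns at once
    .reset_index(drop=True)
)
```

The `code` value is repeated for each exploded row, which matches the expected output above.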
<python><pandas><dataframe>
2023-02-01 21:51:37
2
1,495
rshar
75,316,508
7,158,458
Access to XMLHttpRequest has been blocked by CORS even though CORS set up in backend
<p>I have a flask backend and a React frontend. I am trying to do a POST request to my backend but get the error:</p> <pre><code>Access to XMLHttpRequest at 'http://127.0.0.1:5000/predict' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. </code></pre> <p>Here is my flask code:</p> <pre><code>from flask_cors import CORS app = Flask(__name__) CORS(app) cors = CORS(app, resource={ r&quot;/*&quot;:{ &quot;origins&quot;: &quot;*&quot; } }) @app.route('/predict', methods=['GET','POST']) def home(): df = pd.DataFrame(request.json[&quot;challenge&quot;]) ... return pd.DataFrame.to_json(result) app.run(port=5000) </code></pre> <p>This is my axios request from the frontend:</p> <pre><code>function getPrediction(data) { const sending = {} sending['challenge'] = fighters axios.post('http://127.0.0.1:5000/predict', sending, { headers: { 'Access-Control-Allow-Origin': '*' } }) .then((res) =&gt; { console.log(res) }) } </code></pre>
<javascript><python><reactjs><flask>
2023-02-01 21:42:53
1
2,515
Emm
75,316,353
1,839,674
tf.data.datasets set each batch (prefetch)
<p>I am looking for help thinking through this.</p> <p>I have a function (that is not a generator) that will give me any number of samples. Let's say that getting all the data I want to train (1000 samples) can't fit into memory. So I want to call this function 10 times to get a smaller number of samples that fit into memory.</p> <p>This is a dummy example for simplicity.</p> <pre><code>def get_samples(num_samples: int, random_seed=0): np.random.seed(random_seed) x = np.random.randint(0,100, num_samples) y = np.random.randint(0,2, num_samples) return np.array(list(zip(x,y))) </code></pre> <p>Again, let's say get_samples(1000,0) won't fit into memory.</p> <p>So in theory I am looking for something like this:</p> <pre><code>batch_size = 100 total_num_samples = 1000 batches = [] for i in range(total_num_samples//batch_size): batches.append(get_samples(batch_size, i)) </code></pre> <p>But this still loads everything into memory.</p> <p>Again, this function is a dummy representation; the real one is already defined and not a generator.</p> <p>In tf land, I was hoping that:</p> <pre><code>tf.data.Dataset.batch[0] would equal to the output of get_data(100,0) tf.data.Dataset.batch[1] would equal to the output of get_data(100,1) tf.data.Dataset.batch[2] would equal to the output of get_data(100,2) ... tf.data.Dataset.batch[9] would equal to the output of get_data(100,9) </code></pre> <p>I understand that I can use tf.data.Datasets with a generator (and I think you can set a generator per batch). But the function I have gives more than a single sample, and the setup is too expensive to do for every single sample.</p> <p>I want to use tf.data.Dataset.prefetch() to run the get_batch function on every batch. And of course, it would call get_batch with the same parameters on every epoch.</p> <p>Sorry if the explanation is convoluted. Trying my best to describe the problem.</p>
<python><python-3.x><tensorflow><tf.data.dataset><tf.dataset>
2023-02-01 21:25:30
1
620
lr100
75,316,207
12,574,341
Python equivalent to `as` type assertion in TypeScript
<p>In TypeScript you can override type inferences with the <code>as</code> keyword</p> <pre class="lang-none prettyprint-override"><code>const canvas = document.querySelector('canvas') as HTMLCanvasElement; </code></pre> <p>Are there similar techniques in Python3.x typing without involving runtime casting? I want to do something like the following:</p> <pre class="lang-py prettyprint-override"><code>class SpecificDict(TypedDict): foo: str bar: str res = request(url).json() as SpecificDict </code></pre>
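The closest standard tool is `typing.cast`, which tells the checker to treat a value as the given type and is a no-op at runtime — no conversion or validation happens, just like TypeScript's `as`:

```python
from typing import TypedDict, cast

class SpecificDict(TypedDict):
    foo: str
    bar: str

payload = {"foo": "a", "bar": "b"}   # e.g. the result of request(url).json()

res = cast(SpecificDict, payload)    # runtime no-op; the checker sees SpecificDict
```

Because `cast` performs no checking, a library such as pydantic or a hand-written validator is needed if the shape of the JSON should actually be verified at runtime.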
<python><python-typing>
2023-02-01 21:08:23
2
1,459
Michael Moreno
75,316,073
10,400,238
How to format numerical labels into datetime labels on a plotly xaxis in python?
<p>I have a plotly scatter plot like this example:</p> <pre><code>x = [1,2,3,4,5] y = [3,2,3,5,6] fig = go.Figure() fig.add_trace(go.Scatter(x=x, y=y)) fig.show() </code></pre> <p>Now I want to label the x-axis numbers as dates, with these dates for example:</p> <pre><code>dates = pd.date_range('2022-01-01', periods=len(x), freq='1H') dates.values </code></pre> <blockquote> <p>array(['2022-01-01T00:00:00.000000000', '2022-01-01T01:00:00.000000000', '2022-01-01T02:00:00.000000000', '2022-01-01T03:00:00.000000000', '2022-01-01T04:00:00.000000000'], dtype='datetime64[ns]')</p> </blockquote> <p>How can I do this?</p> <p>The background here is that I want to add vlines and other things based on the numerical x-axis; I only want to show the labels as dates. Is this possible? Is the title of the question understandable?</p>
<python><datetime><plotly><axis-labels><x-axis>
2023-02-01 20:53:16
1
488
till Kadabra
75,315,828
3,937,811
How to resolve ModuleNotFoundError: No module named 'gluonts.torch.modules.distribution_output'?
<p>I am working on a project and I am trying to run the code from this repository: <a href="https://github.com/jc-audet/WOODS" rel="nofollow noreferrer">https://github.com/jc-audet/WOODS</a></p> <p>I was able to get past this step:</p> <pre><code>python3 -m woods.scripts.download_datasets {dataset} \ --data_path /path/to/data/directory </code></pre> <p>Then when I tried to run this step:</p> <pre class="lang-bash prettyprint-override"><code>python3 -m woods.scripts.main train \ --dataset Spurious_Fourier \ --objective ERM \ --test_env 0 \ --data_path /path/to/data/directory </code></pre> <p>I get the following error:</p> <pre><code>File &quot;/Users/evangertis/development/UGA/Research/WOODS/woods/datasets.py&quot;, line 1839, in &lt;module&gt; from gluonts.torch.modules.distribution_output import ( ModuleNotFoundError: No module named 'gluonts.torch.modules.distribution_output </code></pre> <p>I was expecting the code to Train a model using one objective on one dataset with one test environment. 
I get the following error:</p> <pre><code> warnings.warn( Traceback (most recent call last): File &quot;/usr/local/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/usr/local/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/Users/evangertis/development/UGA/Research/WOODS/woods/scripts/hparams_sweep.py&quot;, line 14, in &lt;module&gt; from woods import utils File &quot;/Users/evangertis/development/UGA/Research/WOODS/woods/utils.py&quot;, line 13, in &lt;module&gt; from woods.scripts import hparams_sweep File &quot;/Users/evangertis/development/UGA/Research/WOODS/woods/scripts/hparams_sweep.py&quot;, line 17, in &lt;module&gt; from woods import datasets File &quot;/Users/evangertis/development/UGA/Research/WOODS/woods/datasets.py&quot;, line 1839, in &lt;module&gt; from gluonts.torch.modules.distribution_output import ( ModuleNotFoundError: No module named 'gluonts.torch.modules.distribution_output' </code></pre>
<python><machine-learning><pytorch><gluonts>
2023-02-01 20:25:31
1
2,066
Evan Gertis
75,315,827
4,385,647
Read a concatenation of JSON texts in Python
<p>I have a text file containing multiple valid JSON texts, something like this:</p> <pre><code>{ &quot;items&quot;: [ { &quot;id&quot;: &quot;someID&quot;, &quot;vale&quot;: &quot;someValue&quot; }, { &quot;id&quot;: &quot;otherID&quot;, &quot;vale&quot;: &quot;otherValue&quot; } ] } { &quot;items&quot;: [ { &quot;id&quot;: &quot;yetAnotherID&quot;, &quot;vale&quot;: &quot;yetAnotherValue&quot; }, { &quot;id&quot;: &quot;lastID&quot;, &quot;vale&quot;: &quot;lastValue&quot; } ] } </code></pre> <p>but with many more items per JSON and many more JSONs.</p> <p>My attempt to read it <em>(not surprisingly)</em> failed:</p> <pre><code>import os import json jsonFile = open(jsonName) jsonList=json.load(jsonFile) </code></pre> <p>Any suggestions on how to get the concatenation of all the lists in a Python object?</p>
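One way to read such a file (a hedged sketch, not from the original question) is `json.JSONDecoder.raw_decode`, which parses one document at a time and reports where it stopped:

```python
import json

def parse_concatenated_json(text):
    """Yield each top-level JSON document found in a concatenated string."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        if text[idx].isspace():      # skip whitespace between documents
            idx += 1
            continue
        # raw_decode returns the parsed value and the index where it ended
        obj, idx = decoder.raw_decode(text, idx)
        yield obj

text = '{"items": [{"id": "someID"}]}\n{"items": [{"id": "lastID"}]}'
docs = list(parse_concatenated_json(text))
all_items = [item for doc in docs for item in doc["items"]]
```

With a real file, `text = open(jsonName).read()` would feed the same function.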
<python><json>
2023-02-01 20:25:23
0
3,855
Dirk Horsten
75,315,788
12,282,349
Response model as list of strings instead of objects
<p>I am trying to return a list of items in FastAPI via a Pydantic model.</p> <p>Currently I have the route:</p> <pre class="lang-py prettyprint-override"><code>from typing import List from fastapi import Depends from sqlalchemy.orm.session import Session ... @router.get('/search-job-types', response_model=List[JobTypeDisplay]) def job_types(search_word: str, db: Session = Depends(get_db)): return db_dims.search_job_types(search_word, db) #db_dims: def search_job_types(search_word: str, db: Session): s_word = search_word.capitalize() s_word2 = &quot;%{}%&quot;.format(s_word) all = db.query(DbJobType).filter(DbJobType.name.like(s_word2)).all() #list_jobs = [] #for item in all: #list_jobs.append(item.name) return all </code></pre> <p>And my schema is the following:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class JobTypeDisplay(BaseModel): name: str class Config: orm_mode = True </code></pre> <p>I am getting a list of objects like this:</p> <pre><code>[ { &quot;name&quot;: &quot;Something3&quot; }, { &quot;name&quot;: &quot;Somethin2&quot; }, { &quot;name&quot;: &quot;Something1&quot; } ] </code></pre> <p>But I would like something like this:</p> <pre><code>['Something3', 'Somethin2', 'Something1'] </code></pre> <p>What is the best way to achieve this and do I really need a loop for it?</p>
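A possible simplification (sketched here; `Row` and `job_type_names` are hypothetical stand-ins for the SQLAlchemy objects): declare `response_model=List[str]` on the route and return the names directly. A comprehension is technically still a loop, but a one-line one:

```python
from typing import List

class Row:
    """Hypothetical stand-in for a SQLAlchemy row object with a .name column."""
    def __init__(self, name: str):
        self.name = name

def job_type_names(rows) -> List[str]:
    # With response_model=List[str] on the route, FastAPI serializes this
    # plain list of strings instead of a list of objects.
    return [row.name for row in rows]

names = job_type_names([Row("Something3"), Row("Somethin2"), Row("Something1")])
```

In the route this would look like `@router.get('/search-job-types', response_model=List[str])`; alternatively, querying only the column (`db.query(DbJobType.name)`) pushes part of the work into the database, at the cost of unpacking result tuples.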
<python><sqlalchemy><fastapi><pydantic>
2023-02-01 20:22:20
2
513
Tomas Am
75,315,748
9,235,704
How do I build something similar to a jupyter notebook code cell with rich outputs
<p>How do I construct a script such that given a code block, it will return and display rich output like that of a cell of a Jupyter notebook? For example, is there a way to get the output section of the ipynb file given the code for the cell?</p> <p><a href="https://i.sstatic.net/1MgSx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1MgSx.png" alt="Example cell" /></a></p>
<javascript><python><html><jupyter-notebook><ipython>
2023-02-01 20:18:28
1
2,090
Julian Chu
75,315,701
7,267,480
Linear interpolation between points of a dataframe using the nearest points of the dataframe
<p>I need a quick solution to interpolate between the nearest points of a data frame without adding new points to the data frame, even when there is a lot of data (millions of points, without NaNs). The dataframe is sorted by x values.</p> <p>E.g. I have a dataframe with the following columns:</p> <pre><code>x | y ----- 0 | 1 1 | 2 2 | 3 ... </code></pre> <p>I need a function that, for a given x_input value, returns the linearly interpolated value between the nearest points, something like this:</p> <p><code>calc_linear(df, input_col='x', input_val=1.5, output_col='y')</code> will output 2.5, the interpolated y value for the given x</p> <p>Maybe there are some pandas functions for that?</p>
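One candidate (a sketch, not from the original post): `np.interp`, which performs exactly this lookup-plus-linear-interpolation over sorted x values without adding any rows:

```python
import numpy as np
import pandas as pd

def calc_linear(df, input_col, input_val, output_col):
    # np.interp requires the x column to be sorted ascending, which the
    # question already guarantees; it returns the interpolated y value.
    return float(np.interp(input_val,
                           df[input_col].to_numpy(),
                           df[output_col].to_numpy()))

df = pd.DataFrame({'x': [0, 1, 2], 'y': [1, 2, 3]})
result = calc_linear(df, 'x', 1.5, 'y')  # 2.5
```

`np.interp` also accepts an array of query points, so many lookups can be done in one vectorized call.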
<python><pandas><interpolation>
2023-02-01 20:13:51
1
496
twistfire
75,315,505
7,320,048
Removing Duplicate Domain URLs From the Text File Using Bash
<p>Text file:</p> <pre><code>https://www.google.com/1/ https://www.google.com/2/ https://www.google.com https://www.bing.com https://www.bing.com/2/ https://www.bing.com/3/ </code></pre> <p>Expected Output:</p> <pre><code>https://www.google.com/1/ https://www.bing.com </code></pre> <p>What I Tried:</p> <pre><code>awk -F'/' '!a[$3]++' $file; </code></pre> <p>Output:</p> <pre><code>https://www.google.com/1/ https://www.google.com https://www.bing.com https://www.bing.com/2/ </code></pre> <p>I already tried various commands and none of them work as expected. I just want to keep only one URL per unique domain from the list.</p> <p>Please tell me how I can do it using a Bash script or Python.</p> <p>PS: I want to filter and save full URLs from the list and not only the root domain.</p>
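A Python sketch of the same idea (`first_url_per_domain` is a name invented here): key each URL by its domain via `urllib.parse.urlparse` and keep only the first hit per domain. Stripping each line also guards against stray `\r` characters from CRLF files, a common reason an awk dedup key silently differs between lines:

```python
from urllib.parse import urlparse

def first_url_per_domain(lines):
    seen = set()
    kept = []
    for line in lines:
        url = line.strip()             # also drops a trailing \r, if any
        domain = urlparse(url).netloc  # e.g. 'www.google.com'
        if domain and domain not in seen:
            seen.add(domain)
            kept.append(url)
    return kept

urls = [
    "https://www.google.com/1/",
    "https://www.google.com/2/",
    "https://www.google.com",
    "https://www.bing.com",
    "https://www.bing.com/2/",
    "https://www.bing.com/3/",
]
result = first_url_per_domain(urls)
# ['https://www.google.com/1/', 'https://www.bing.com']
```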
<python><bash><awk>
2023-02-01 19:53:47
2
959
Praveen Kumar
75,315,492
1,114,453
subprocess to print stderr and stdout in real-time
<p>Say I have this program <code>printer.py</code>:</p> <pre><code>#!/usr/bin/env python3 import sys import time sys.stdout.write(&quot;STDOUT 1\n&quot;) time.sleep(1) sys.stderr.write(&quot;STDERR 2\n&quot;) time.sleep(1) sys.stdout.write(&quot;STDOUT 3\n&quot;) time.sleep(1) sys.stderr.write(&quot;STDERR 4\n&quot;) time.sleep(1) </code></pre> <p>It prints to stdout and stderr to produce:</p> <pre><code>./printer.py STDOUT 1 STDERR 2 STDOUT 3 STDERR 4 </code></pre> <p>I would like to execute <code>printer.py</code> inside another Python script, <code>runner.py</code>, and print in real time both stderr and stdout. The following version of <code>runner.py</code> does not work:</p> <pre><code>#!/usr/bin/env python3 import sys import subprocess def run_command(command): process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE) while True: output = process.stdout.readline().decode() if output == '' and process.poll() is not None: break if output: print(output.strip()) rc = process.poll() return rc rc = run_command('./printer.py') </code></pre> <p>because it prints the stderr lines first in real-time and the stdout lines later all at once:</p> <pre><code>./runner.py STDERR 2 STDERR 4 STDOUT 1 STDOUT 3 </code></pre> <p>How can I fix it to have the correct order 1, 2, 3, and 4 in real-time? 
The closest I could get is by using:</p> <pre><code>rc = run_command('./printer.py 1&gt;&amp;2') </code></pre> <p>which is kind of OK, but I wonder whether I could make it do the proper thing and print to stdout and stderr in the same way as <code>printer.py</code>.</p> <hr /> <p><code>sys.stdout.flush()</code> as suggested in comments makes no difference:</p> <pre><code>#!/usr/bin/env python3 import sys import subprocess def run_command(command): process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE) while True: output = process.stdout.readline().decode() if output == '' and process.poll() is not None: break if output: sys.stdout.write(output.strip() + '\n') sys.stdout.flush() rc = process.poll() return rc rc = run_command('./printer.py') </code></pre> <pre><code>./runner.py STDERR 2 STDERR 4 STDOUT 1 STDOUT 3 </code></pre> <p>The same for <code>print(..., flush=True)</code>. Am I doing something wrong?</p>
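One hedged sketch of a fix: merge stderr into stdout with `stderr=subprocess.STDOUT`, so both streams share a single pipe and lines arrive in the order the child wrote them (at the cost of no longer telling the streams apart). For a Python child like `printer.py`, running it unbuffered (`python3 -u` or `PYTHONUNBUFFERED=1`) also matters, because a pipe switches the child's stdout to block buffering:

```python
import subprocess
import sys

def run_command(command):
    # One pipe for both streams: ordering is preserved as written by the child.
    process = subprocess.Popen(command, shell=True,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
    for line in process.stdout:
        sys.stdout.write(line.decode())
        sys.stdout.flush()
    return process.wait()

# With the question's script this would be: run_command('python3 -u ./printer.py')
rc = run_command("echo 'STDOUT 1'; echo 'STDERR 2' 1>&2")
```

Keeping the two streams separate while preserving global ordering is much harder, since each pipe delivers its data independently.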
<python><subprocess><stdout><real-time><stderr>
2023-02-01 19:53:07
1
9,102
dariober
75,315,443
17,696,880
How to capture all the substrings inside a string that start with an element of a list and end with another element of another list?
<pre class="lang-py prettyprint-override"><code>import re #input string capture_where_capsule = &quot;alrededor ((NOUN)del auto rojizo, algo grande y completamente veloz). Luego dentro del baúl rápidamente abajo de una caja por sobre ello vimos una caña.&quot; #list of adverbs indicating the start of the capture list_all_adverbs_of_place = [&quot;adentro&quot;, &quot;dentro&quot;, &quot;al rededor&quot;, &quot;alrededor&quot;, &quot;abajo&quot;, &quot;hacía&quot;, &quot;hacia&quot;, &quot;por sobre&quot;, &quot;sobre&quot;] #It should cut everything in place_reference_string after '((NOUN) )' descriptive_noun_pattern = r&quot;\(\(NOUN\)&quot; + r'(?:[\w,;.]\s*)+' + r&quot;\)&quot; list_verbs = [&quot;vimos&quot;, &quot;hemos visto&quot;, &quot;encontramos&quot;, &quot;hemos encontrado&quot;] list_adverbs_of_manner = [&quot;rápidamente&quot;, &quot;rapidamente&quot;, &quot;intensamente&quot;, &quot;gradualmente&quot;, &quot;completamente&quot;] list_adverbs_of_time = [&quot;durante&quot;, &quot;luego&quot;, &quot;ahora&quot;, &quot;mientras tanto&quot;] list_limitant_words = [&quot;a las&quot;, &quot;a los&quot;, &quot;a la&quot;, &quot;a el&quot;, &quot;a los&quot;, &quot;a las&quot;, &quot;a mí&quot;, &quot;a mi&quot;, &quot;a sus&quot;, &quot;a su&quot;, &quot;a él&quot;, &quot;a ella&quot;, &quot;talvez&quot;, &quot;tal vez&quot;, &quot;tal&quot;, &quot;al&quot;, &quot;los&quot;, &quot;las&quot;, &quot;él&quot;, &quot;el&quot;, &quot;la&quot;, &quot;cómo&quot;, &quot;como&quot; , &quot;con&quot;, &quot;en su&quot;, &quot;en mi&quot;, &quot;en&quot;, &quot;.&quot;, &quot;:&quot;, &quot;;&quot;, &quot;,&quot;, &quot;(&quot;, &quot;)&quot;, &quot;[&quot;, &quot;]&quot;, &quot;¿&quot;, &quot;?&quot;, &quot;¡&quot;, &quot;!&quot;, &quot;&amp;&quot;, &quot;=&quot;] #list that combines all the elements that act as limits, indicating when the captures should end list_limiting_elements = list_verbs + list_adverbs_of_manner + list_adverbs_of_time + list_limitant_words 
print(repr(capture_where_capsule)) #--&gt; output </code></pre> <p>I must capture, within the encapsulation standard <code>((PL_ADVB)the text)</code>, those strings that come after one of the elements of the list <code>list_all_adverbs_of_place</code>, which is basically a small list of adverbs indicating place (the adverb must also be captured).</p> <p>The end of the capture can be after a pattern <code>((NOUN)some text here)</code>; if that pattern is not present, the capture has to end when any of the elements of the list <code>list_limiting_elements</code> appears.</p> <p>In this way, the input string should be restructured so that it looks like this <strong>output</strong>, after using a <code>re.sub(, , capture_where_capsule, flags = re.IGNORECASE)</code></p> <pre><code>&quot;((PL_ADVB)alrededor ((NOUN)del auto rojizo, algo grande y completamente veloz)). Luego ((PL_ADVB)dentro del baúl) rápidamente ((PL_ADVB)abajo de una caja) ((PL_ADVB)por sobre ello) vimos una caña.&quot; </code></pre> <p>Keep in mind that an adverb from the <code>list_all_adverbs_of_place</code> list represents the beginning of the capture, but if there is a second adverb within the captured text, then it will act as an end limit.</p>
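One building block for this (a sketch of the general technique only, not the full `((PL_ADVB)...)` rewrite): compile the adverb list into a longest-first alternation, so that multi-word adverbs like `por sobre` win over their substrings:

```python
import re

adverbs = ["alrededor", "dentro", "por sobre", "sobre", "abajo"]

# Sort longest-first so "por sobre" is tried before "sobre".
alternation = "|".join(sorted(map(re.escape, adverbs), key=len, reverse=True))
pattern = re.compile(rf"\b(?:{alternation})\b", re.IGNORECASE)

found = pattern.findall("Luego dentro del baúl por sobre ello")
# ['dentro', 'por sobre']
```

The same alternation (and one built from `list_limiting_elements`) could then anchor a `re.sub` that wraps each adverb plus the following text up to the first limiting element.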
<python><python-3.x><regex><replace><regex-group>
2023-02-01 19:48:59
0
875
Matt095
75,315,390
3,286,743
Construct date column from year, month and day in Polars
<p>Consider the following Polars dataframe:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({'year': [2023], 'month': [2], 'day': [1]}) </code></pre> <p>I want to construct a date column from <code>year</code>, <code>month</code> and <code>day</code>. I know this can be done by first concatenating into a string and then parsing this:</p> <pre class="lang-py prettyprint-override"><code>df.with_column( pl.concat_str([pl.col('year'), pl.col('month'), pl.col('day')], sep='-') .str.strptime(pl.Date).alias('date') ) </code></pre> <p>But that seems like a detour. Is it possible to construct it directly with the three inputs? Something like this (that doesn't work):</p> <pre class="lang-py prettyprint-override"><code>import datetime df.with_column( datetime.date(pl.col('year'), pl.col('month'), pl.col('day')).alias('date') ) </code></pre>
<python><python-polars>
2023-02-01 19:43:07
1
1,177
robertdj
75,315,365
2,458,922
With Pandas calculate diff, but with different columns
<p>Given data as</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({&quot;Start&quot;: [1, 4, 8, 12], &quot;Stop&quot;: [2, 6, 9, 13]}) # Calculate df['lag'] = This[Start] - Previous[Stop] </code></pre> <p>Lag = NA for the first row. Else Lag = current row Start - previous row Stop.</p> <p>Output:</p> <pre><code>df = pd.DataFrame({&quot;Start&quot;: [1, 4, 8, 12], &quot;Stop&quot;: [2, 6, 9, 13],&quot;lag&quot;:[np.nan,2,2,3]}) </code></pre>
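`shift()` expresses this directly (a sketch of the expected answer):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Start": [1, 4, 8, 12], "Stop": [2, 6, 9, 13]})

# shift() moves Stop down one row, so each row sees the previous row's Stop;
# the first row has no predecessor and naturally becomes NaN.
df["lag"] = df["Start"] - df["Stop"].shift()
# lag: [NaN, 2.0, 2.0, 3.0]
```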
<python><pandas>
2023-02-01 19:40:39
1
1,731
user2458922
75,314,899
5,520,444
pytorch's augmented assignment and requires_grad
<p>Why does:</p> <pre><code>with torch.no_grad(): w = w - lr*w.grad print(w) </code></pre> <p>result in:</p> <pre><code>tensor(0.9871) </code></pre> <p>and</p> <pre><code>with torch.no_grad(): w -= lr*w.grad print(w) </code></pre> <p>result in:</p> <pre><code>tensor(0.9871, requires_grad=True) </code></pre> <p>Aren't both operations the same?</p> <p>Here is some test code:</p> <pre><code>def test_stack(): np.random.seed(0) n = 50 feat1 = np.random.randn(n, 1) feat2 = np.random.randn(n, 1) X = torch.tensor(feat1).view(-1, 1) Y = torch.tensor(feat2).view(-1, 1) w = torch.tensor(1.0, requires_grad=True) epochs = 1 lr = 0.001 for epoch in range(epochs): for i in range(len(X)): y_pred = w*X[i] loss = (y_pred - Y[i])**2 loss.backward() with torch.no_grad(): #w = w - lr*w.grad # DOESN'T WORK!!!! #print(w); return w -= lr*w.grad print(w); return w.grad.zero_() </code></pre> <p>Remove the comments and you'll see the requires_grad disappearing. Could this be a bug?</p>
<python><pytorch><augmented-assignment>
2023-02-01 18:54:33
1
1,228
Tony Power
75,314,891
5,243,291
Python MySQL update syntax error using dynamic variables
<p>I am trying to update a MySQL table with dynamic column names and values. I am always getting a syntax error when doing the update. I have similar INSERT and SELECT statements which work as expected. I have already tried different variants of the query to get the right syntax.</p> <pre><code> cursor.execute(&quot;&quot;&quot;UPDATE movies SET %s=&quot;%s&quot; WHERE Rank_id = %s&quot;&quot;&quot;,(column.replace(&quot;'&quot;,&quot;&quot;),str(movie[key]).replace(&quot;'&quot;,&quot;&quot;),rank)) or query = &quot;&quot;&quot;UPDATE movies SET %s=&quot;%s&quot; WHERE Rank_id = %s&quot;&quot;&quot; values = (column,movie[key],rank) cursor.execute(query,(column,str(movie[key]).replace(&quot;'&quot;,&quot;&quot;),rank)) </code></pre> <blockquote> <p>mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''Title'=&quot;'fsdsd'&quot; WHERE Rank_id = '1'' at line 1</p> </blockquote> <p>I tried to execute the SQL command on DBeaver and this works:</p> <pre><code>update movies set Title=&quot;djhdfghgfjhjfg&quot; where Rank_id=1 </code></pre> <p>What am I doing wrong here?</p> <pre><code>Code that works query = &quot;INSERT INTO movies(Title,Genre,Description,Director,Actors,Release_Year,Runtime_Minutes,Rating,Votes,Revenue_Millions,Metascore) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)&quot; values =(title, genre, description, director,actors, release_year,runtime_minut, rating,votes,revenue_millions,metascore) cursor.execute(query,values) query = &quot;SELECT Title,Genre,Description FROM movies WHERE Rank_id = %s&quot;; cursor.execute(query,(rank,)) </code></pre> <p>Update</p> <p>I am able to get it working when I hardcode the column name in the query (though not the desired solution).</p>
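The quoted error shows why: `%s` placeholders are escaped as string *literals*, so the column name arrives as `'Title'`, which is invalid where an identifier is expected. A hedged sketch of the usual fix: validate the column name against a whitelist and interpolate it as an identifier, keeping placeholders only for values (`ALLOWED_COLUMNS` and `build_update` are names invented here):

```python
ALLOWED_COLUMNS = {"Title", "Genre", "Description"}  # whitelist; an assumption

def build_update(column, value, rank):
    # Identifiers cannot travel through %s placeholders -- the driver would
    # quote them as string literals ('Title' = ...), the syntax error above.
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"unexpected column: {column}")
    query = f"UPDATE movies SET `{column}` = %s WHERE Rank_id = %s"
    return query, (value, rank)

query, values = build_update("Title", "fsdsd", 1)
# cursor.execute(query, values)  # on a live connection
```

The whitelist check is what keeps the f-string interpolation safe from SQL injection.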
<python><mysql><python-3.x>
2023-02-01 18:53:51
0
2,144
Vini
75,314,866
3,486,773
How to convert string into list of integers in Python?
<p>I have this:</p> <pre><code> &quot;1,2,3,4,5,6,7,8,9,0&quot; </code></pre> <p>And need this:</p> <pre><code> [1,2,3,4,5,6,7,8,9,0] </code></pre> <p>Everything I search for has the example where the string is a list of strings like this:</p> <pre><code> &quot;'1','2','3'...&quot; </code></pre> <p>Those solutions do not work for the conversion I need.</p> <p>What do I need to do? I need the easiest solution for a beginner to understand.</p>
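A beginner-friendly sketch: split the string on commas, then convert each piece with `int`:

```python
s = "1,2,3,4,5,6,7,8,9,0"

# split(",") gives ['1', '2', ...]; int() turns each piece into a number
numbers = [int(part) for part in s.split(",")]
print(numbers)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
```

The equivalent without a comprehension is `list(map(int, s.split(",")))`.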
<python><string><list>
2023-02-01 18:51:53
3
1,278
user3486773
75,314,863
3,357,935
How do I safely iterate over an optional list in Python?
<p>I have a function which compares a file path against two lists of folder paths. It returns true if the start of the path matches any folder from list A (<code>include</code>) but doesn't match an optional list B (<code>exclude</code>).</p> <pre><code>from typing import List, Optional def match(file: str, include: List[str], exclude: Optional[List[str]]=None): return any(file.startswith(p) for p in include) and not any( file.startswith(p) for p in exclude ) </code></pre> <p>The function works as expected if all parameter values are provided, but fails with a TypeError if no <code>exclude</code> folders are given.</p> <pre><code># True match(file=&quot;source/main.py&quot;, include=[&quot;source/&quot;, &quot;output/&quot;], exclude=[&quot;output/debug/&quot;]) # False match(file=&quot;output/debug/output.log&quot;, include=[&quot;source/&quot;, &quot;output/&quot;], exclude=[&quot;output/debug/&quot;]) # Expected result - True # Actual result - TypeError: 'NoneType' object is not iterable match(file=&quot;output/debug/output.log&quot;, include=[&quot;source/&quot;, &quot;output/&quot;]) </code></pre> <p>There was a PEP proposal to <a href="https://peps.python.org/pep-0505/" rel="nofollow noreferrer">introduce null-aware operators</a> that may have helped, but it has not been implemented as of now.</p> <p>How can I safely iterate over an Optional list in Python without running into null errors?</p>
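One common idiom (a sketch of the function with the guard added): substitute an empty iterable when the optional argument is `None`, e.g. `exclude or ()`:

```python
from typing import List, Optional

def match(file: str, include: List[str], exclude: Optional[List[str]] = None) -> bool:
    # `exclude or ()` yields an empty tuple when exclude is None (or empty),
    # so the generator simply iterates zero times instead of raising.
    return any(file.startswith(p) for p in include) and not any(
        file.startswith(p) for p in (exclude or ())
    )
```

An equivalent, slightly more explicit spelling is a default in the body: `if exclude is None: exclude = []`.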
<python>
2023-02-01 18:51:43
3
27,724
Stevoisiak
75,314,836
2,030,532
How to "tell" the decorator function that the return type is the same as the wrapped function in Python?
<p>I have a decorator whose return type is the same as the wrapped function return type. Here is a simple example. The code works as it stands, however, it does not provide type hinting for the output value <code>res</code>. Is there a way to specify that <code>my_decorator</code> return the same type as the wrapped function <code>foo</code> (e.g. both returning variables of type <code>Foo</code>)</p> <pre><code>@dataclasses.dataclass class Foo: x: str def my_decorator(func): def wrapped(x): print(&quot;decorated&quot;) return func(x) return wrapped @my_decorator def foo(x: int) -&gt; Foo: return Foo(str(x)) if __name__ == '__main__': res = foo(1) # no type hinting for res </code></pre>
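A hedged sketch using a `TypeVar`: annotating the decorator so that the wrapper's return type is tied to the wrapped function's lets type checkers infer `res` as `Foo`:

```python
import dataclasses
import functools
from typing import Callable, TypeVar

T = TypeVar("T")

def my_decorator(func: Callable[..., T]) -> Callable[..., T]:
    # The shared TypeVar T ties the wrapper's return type to func's.
    @functools.wraps(func)
    def wrapped(*args, **kwargs) -> T:
        print("decorated")
        return func(*args, **kwargs)
    return wrapped

@dataclasses.dataclass
class Foo:
    x: str

@my_decorator
def foo(x: int) -> Foo:
    return Foo(str(x))

res = foo(1)  # type checkers now infer res: Foo
```

On Python 3.10+, `ParamSpec` (`Callable[P, T]` in and out) additionally preserves the wrapped function's parameter types, not just the return type.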
<python><types><pycharm>
2023-02-01 18:49:01
1
3,874
motam79
75,314,822
10,386,434
In Python, io.StringIO codec writer gets string in input but gives "TypeError: string argument expected, got 'bytes'"
<p>Simple example:</p> <pre><code>import codecs import io buffer = io.StringIO() writer = codecs.getwriter('UTF-8') buffer_writer = writer(buffer) buffer_writer.write('[]') </code></pre> <p>Output:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/lib/python3.10/codecs.py&quot;, line 378, in write self.stream.write(data) TypeError: string argument expected, got 'bytes' </code></pre> <p>I didn't understand the error. I need <code>buffer</code> to be an <code>io.StringIO</code> object.</p>
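The codecs writer encodes `str` to bytes before writing, so its underlying stream has to accept bytes. If the wrapper is kept, the sink should be `io.BytesIO` (sketch below); if a `str` buffer is really required, writing to `io.StringIO` directly, without the codecs wrapper, avoids the mismatch entirely:

```python
import codecs
import io

# codecs.getwriter('UTF-8') turns str into UTF-8 bytes on write,
# so the underlying buffer must be a bytes sink.
buffer = io.BytesIO()
buffer_writer = codecs.getwriter('UTF-8')(buffer)
buffer_writer.write('[]')
print(buffer.getvalue())  # b'[]'
```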
<python><io>
2023-02-01 18:47:11
0
451
vinicius de novaes
75,314,658
3,557,485
Can a bug in a Python library produce a ModuleNotFoundError?
<p>I have to run a simple code snippet which uses the <a href="https://pypi.org/project/mexc-sdk/0.0.1/#description" rel="nofollow noreferrer">mexc_sdk</a> library. I installed it properly, it is in the proper folder, but somehow I get the <code>ModuleNotFoundError: No module named 'mexc_sdk'</code> message when I run the below code. (Visual Studio throws an <code>Import &quot;mexc_sdk&quot; could not be resolved</code> warning as well.)</p> <pre><code>from mexc_sdk.src.mexc_sdk import Spot client = Spot() info = client.exchange_info(options={ &quot;symbol&quot;: &quot;BTCUSDT&quot;}) print(info) </code></pre> <p>I already installed it on my server which is Linux while my PC is a Windows machine with different setup and I get the same error there. So I have no idea what can be the problem. My Python version is 3.8 and the lib is compatible from 3.6.</p> <p>I also checked the folder of the installed lib and it doesn't contain any Python files, but as I am not familiar with libraries I don't know if it's normal or not. My question is, how should I debug this? Is it possible that the library is not installed correctly?</p>
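A quick, generic way to debug this (a sketch, independent of `mexc_sdk` itself) is to ask the interpreter that actually runs the script what it can see:

```python
import importlib.util
import sys

# Which interpreter is running matters: pip may have installed the package
# into a different environment than the one executing the script.
print(sys.executable)

# find_spec returns None when this interpreter cannot locate the package.
spec = importlib.util.find_spec("mexc_sdk")
print(spec)
```

If `find_spec` returns `None`, running `pip` via the same interpreter (`python -m pip install ...`) is a common fix; if it returns a spec, `spec.origin` shows which files were actually installed.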
<python>
2023-02-01 18:33:36
1
3,360
rihekopo
75,314,624
4,784,683
PyQt QTreeView - simulate selection of new item without using Squish
<p>I have existing PySide6 code.</p> <p>My class <code>PyProto</code> inherits <code>QTreeView</code> and implements <code>currentChanged()</code>.</p> <p>I am doing automated testing and I need to move the focus to any item in the <code>QTreeView</code>.</p> <p>When I put a breakpoint on <code>PyProto.currentChanged()</code> and click a new item in the tree view, the breakpoint is hit and the call stack only contains two items: <code>app.exec()</code> and <code>currentChanged()</code>.</p> <p><strong>QUESTIONS</strong></p> <ol> <li><p>How do I simulate the selection of a new item in the tree view in a way that is transparent to the rest of the app?<br /> Is it enough to call <code>PyProto.currentChanged()</code> from my test code?</p> </li> <li><p>How do I obtain a list of the current items in the tree view so that I can call <code>currentChanged()</code>?</p> </li> </ol> <p><a href="https://doc.qt.io/qtforpython/PySide6/QtWidgets/QAbstractItemView.html?highlight=qabstractitemview#PySide6.QtWidgets.PySide6.QtWidgets.QAbstractItemView.currentChanged" rel="nofollow noreferrer">Link:</a></p> <blockquote> <p>PySide6.QtWidgets.QAbstractItemView.currentChanged(current, previous) PARAMETERS current – PySide6.QtCore.QModelIndex</p> <p>previous – PySide6.QtCore.QModelIndex</p> <p>This slot is called when a new item becomes the current item. The previous current item is specified by the previous index, and the new item by the current index.</p> </blockquote>
<python><qt><pyqt>
2023-02-01 18:29:42
0
5,180
Bob
75,314,508
4,856,421
Disable pylint missing docstring warning for protected classes
<p>Is it possible to disable pylint warnings related to missing docstrings within a module for protected classes (those starting with an underscore character) only? It is possible to disable warnings for methods or classes that fulfill a certain regex precondition (by default everything which starts with _ is omitted), but what about cases where a given protected class contains a non-protected method?</p>
<python><pylint>
2023-02-01 18:18:40
0
781
Mati
75,314,436
13,359,498
Can't concatenate texture_features and resnet50_features
<p>I have extracted ResNet features (array) and texture features (list) of my image dataset. My idea is to concatenate both features and then use the merged feature to fit the model. Code snippet:</p> <pre><code># Extract features from the dataset using ResNet50 resnet50_features = ResNet50.predict(X_test) for category in categories: path = os.path.join(data_dir,category) # print(path) class_num = categories.index(category) for img in os.listdir(path): try: image = cv2.imread(os.path.join(path,img))[:,:,::-1] gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) gabor_kernel = cv2.getGaborKernel((21, 21), 8.0, 0, 10.0, 0.5, 0, ktype=cv2.CV_32F) filtered_image = cv2.filter2D(gray, cv2.CV_8UC3, gabor_kernel) texture_features.append(filtered_image) except Exception as e: pass </code></pre> <p>The dimensions of resnet50_features are (98, 7, 7, 2048); I converted texture_features from a list to an array and its dimension is (766, 224, 224).</p> <p>I tried concatenating both, but it is giving an error. My question is: how do I merge these two, and how can I use them to classify? A simple code snippet would be great!</p>
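A hedged numpy sketch of the merge itself, using toy random arrays: both feature sets must describe the *same* images, so the first (sample) dimension has to match before merging; in the question it does not (98 vs 766), which needs fixing upstream, e.g. by extracting both kinds of features from one shared image list. Given matching sample counts, flatten each set to 2-D and join along axis 1:

```python
import numpy as np

# Toy stand-ins with a shared sample count of 4.
n_samples = 4
resnet50_features = np.random.rand(n_samples, 7, 7, 2048)
texture_features = np.random.rand(n_samples, 224, 224)

# Flatten everything after the sample axis, then join side by side.
resnet_flat = resnet50_features.reshape(n_samples, -1)   # (4, 100352)
texture_flat = texture_features.reshape(n_samples, -1)   # (4, 50176)
merged = np.concatenate([resnet_flat, texture_flat], axis=1)
```

The resulting `(n_samples, n_features)` matrix can then be fed to a classifier head such as a Dense layer or an sklearn model.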
<python><keras><deep-learning><tensor><feature-extraction>
2023-02-01 18:10:02
0
578
Rezuana Haque
75,314,435
5,769,814
Combining flags in argparse
<p>I have the following subclass that extends <code>argparse.ArgumentParser</code> with some helper methods for adding common argument types:</p> <pre><code>import argparse class ArgParser(argparse.ArgumentParser): def add_bool_arg(self, default, help, *flags, **kw): &quot;&quot;&quot; Add a boolean argument to the parser. This method adds: - long flags that store true into the destination - long flags with `--no-` prefix that store false into the destination - short flags that: - store true into the destination if 'True' or 'Yes' is passed as an argument - store false into the destination if 'False' or 'No' is passed as an argument - toggle the default value if no argument is passed If `dest` is not explicitly provided, it is inferred from the first long flag or the first short flag if no long flags are passed. &quot;&quot;&quot; short_flags = [flag for flag in flags if flag[0] == '-' and flag[1] != '-'] long_flags = [flag for flag in flags if flag[:2] == '--'] dest = kw.get('dest', long_flags[0] if long_flags else flags[0]).strip('-').replace('-', '_') no_f = lambda arg: '--no-' + arg.strip('-') str_to_bool = lambda s: s.lower() in {'true', 'yes', 't', 'y', '1'} group = self.add_mutually_exclusive_group() group.add_argument(*short_flags, dest=dest, nargs='?', default=default, const=not default, type=str_to_bool, help=help, **kw) group.add_argument(*long_flags, dest=dest, action='store_true', help=help, **kw) group.add_argument(*map(no_f, long_flags), dest=dest, action='store_false', help=&quot;do not &quot; + help, **kw) </code></pre> <p>I've also set up a <a href="https://replit.com/@matedevita/Combining-Argparse-Flags" rel="nofollow noreferrer">REPL</a> with this class for easier running.</p> <p>The <code>add_bool_arg</code> method adds flags for a boolean argument with the functionality described in the docstring:</p> <pre><code>ap = ArgParser() ap.add_bool_arg(True, &quot;default true bool arg&quot;, '-t', '--true-arg') ap.add_bool_arg(False, &quot;default 
false bool arg&quot;, '-f', '--false-arg') print(ap.parse_args(['-t', 'True'])) # true_arg = True print(ap.parse_args(['-t', 'False'])) # true_arg = False print(ap.parse_args(['-t'])) # true_arg = False print(ap.parse_args(['--true-arg'])) # true_arg = True print(ap.parse_args(['--no-true-arg'])) # true_arg = False print(ap.parse_args(['-f', 'False'])) # false_arg = False print(ap.parse_args(['-f', 'True'])) # false_arg = True print(ap.parse_args(['-f'])) # false_arg = True print(ap.parse_args(['--false-arg'])) # false_arg = True print(ap.parse_args(['--no-false-arg'])) # false_arg = False </code></pre> <p>However, the default boolean arguments in <code>argparse.ArgumentParser</code> also allow you to combine flags (for example <code>-ab</code> instead of <code>-a -b</code>). My approach, however, does not work like this; only the first flag is processed:</p> <pre><code>ap.add_argument('-a', action='store_true') ap.add_argument('-b', action='store_true') print(ap.parse_args(['-ab'])) # a = True, b = True print(ap.parse_args(['-tf'])) # true_arg = False, false_arg = True </code></pre> <p>Is there a way to implement this functionality, so that all so-combined flags will be processed (i.e. toggled)? How does argparse even handle this case of splitting combined flags?</p>
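The observed behavior follows from how argparse consumes a combined token: it peels off single-dash options character by character only while each one takes no argument; as soon as it reaches an option that *can* take a value (such as `nargs='?'`), the remaining characters become that option's argument. A small demonstration of the mechanism (hedged: inferred from argparse's documented handling, not from the question's full subclass):

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument('-a', action='store_true')
ap.add_argument('-t', nargs='?', const='toggled')

# -a consumes no argument, so parsing continues into 't' as another flag:
ns1 = ap.parse_args(['-at'])   # a=True, t='toggled'

# -t can take a value, so the trailing 'a' becomes its argument instead:
ns2 = ap.parse_args(['-ta'])   # t='a', a stays False
```

So combining only works as expected while every bundled flag has `nargs=0`; any `nargs='?'` flag in the bundle swallows the rest of the token.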
<python><command-line-arguments><argparse>
2023-02-01 18:09:58
1
1,324
Mate de Vita
75,314,250
5,991,368
Python WeakKeyDictionary for unhashable types
<p>As raised in <a href="https://github.com/python/cpython/issues/88306" rel="nofollow noreferrer">cpython issue 88306</a>, python <a href="https://docs.python.org/3/library/weakref.html#weakref.WeakKeyDictionary" rel="nofollow noreferrer">WeakKeyDictionary</a> fails for non hashable types. According to the discussion in the python issue above, this is an unnecessary restriction, using <code>id</code>s of the keys instead of <code>hash</code> would work just fine: In this special case <code>id</code>s are unique identifiers for the keys in the WeakKeyDictionary, because the keys are automatically removed when the original object is deleted. It is important to be aware that using ids instead of hashes is only feasible in this very special case.</p> <p>We can tweak <code>weakref.WeakKeyDictionary</code> (<a href="https://gist.github.com/ptrba/b198f0cf6c22047df77483e8aa28f408" rel="nofollow noreferrer">see gist</a>) to achieve the desired behaviour. In summary, this implementation wraps the <code>weakref</code> keys as follows:</p> <pre class="lang-py prettyprint-override"><code>class _IdKey: def __init__(self, key): self._id = id(key) def __hash__(self): return self._id def __eq__(self, other: typing_extensions.Self): return self._id == other._id def __repr__(self): return f&quot;&lt;_IdKey(_id={self._id})&gt;&quot; class _IdWeakRef(_IdKey): def __init__(self, key, remove: typing.Callable[[typing.Any], None]): super().__init__(key) # hold weak ref to avoid garbage collection of the remove callback self._ref = weakref.ref(key, lambda _: remove(self)) def __call__(self): # used in weakref.WeakKeyDictionary.__copy__ return self._ref() def __repr__(self): return f&quot;&lt;_IdKey(_id={self._id},{self._ref})&gt;&quot; class WeakKeyIdDictionary(weakref.WeakKeyDictionary): &quot;&quot;&quot; overrides all methods involving dictionary access key &quot;&quot;&quot; ... 
https://gist.github.com/barmettl/b198f0cf6c22047df77483e8aa28f408 </code></pre> <p>However, this depends on the details of the implementation of <code>weakref.WeakKeyDictionary</code> (using python3.10 here) and is likely to break in future (or even past) versions of python. Of course, alternatively one can just rewrite an entirely new class.</p> <p>It is also possible to implement a custom <code>__hash__</code> method for all classes, but this won't work when dealing with external code and will give unreliable hashes for use cases beyond <code>weakref.WeakKeyDictionary</code>. We can also monkey patch <code>__hash__</code>, but this is not possible in particular for built in classes and will have unintended effects in other parts of the code.</p> <p>Thus the following question: How should one store non hashable items in a WeakKeyDictionary?</p>
<python>
2023-02-01 17:53:04
1
661
Peter Barmettler
75,314,234
5,469,184
How to convert a list of JSON objects to a PySpark DataFrame?
<p>I want to convert a JSON string in a variable to a PySpark DataFrame on Databricks.</p> <p>I have a payload coming from an API. It is a list of JSON objects held in a variable called <code>response_list</code>. The variable is a JSON string with type <code>&lt;class 'str'&gt;</code>:</p> <pre class="lang-json prettyprint-override"><code>[{&quot;sentiment&quot;:&quot;neutral&quot;,&quot;sentiment_confidence_score&quot;:0.8585},{&quot;sentiment&quot;:&quot;neutral&quot;,&quot;sentiment_confidence_score&quot;:0.7861}] </code></pre> <p>I am trying to parse this into a PySpark dataframe so each object here is a single row. The desired output is below.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>pyspark_column</th> </tr> </thead> <tbody> <tr> <td>{&quot;sentiment&quot;:&quot;neutral&quot;,&quot;sentiment_confidence_score&quot;:0.8585}</td> </tr> <tr> <td>{&quot;sentiment&quot;:&quot;neutral&quot;,&quot;sentiment_confidence_score&quot;:0.7861}</td> </tr> </tbody> </table> </div> <p>What I tried is:</p> <pre class="lang-py prettyprint-override"><code>dfJson = sc.parallelize(response_list).map(lambda x: json.dumps(x)) dfJson = spark.read.json(dfJson) dfJson.show(truncate = False) </code></pre> <p>It throws me this error of missing argument:</p> <pre><code>File &quot;&lt;command-3646528696964905&gt;&quot;, line 79, in json_parse dfJson = sc.parallelize(response_list).map(lambda x: json.dumps(x)) TypeError: parallelize() missing 1 required positional argument: 'c' </code></pre> <p>I have spent nearly a whole day on this. When I copy-paste that list of JSON into a JSON validator, it says this JSON is valid, so I assume the format is correct. But I couldn't figure out how to convert this to a dataframe.</p>
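Two observations, sketched here with the question's payload: the `TypeError` (`parallelize() missing 1 required positional argument: 'c'`) suggests `sc` is the `SparkContext` *class* rather than an instance, so the string got bound to `self`. Independently, since the payload is already valid JSON, `json.loads` turns it into a list of dicts that `spark.createDataFrame` accepts one-row-per-object (Spark calls left as comments because they need a live session):

```python
import json

response_list = (
    '[{"sentiment":"neutral","sentiment_confidence_score":0.8585},'
    '{"sentiment":"neutral","sentiment_confidence_score":0.7861}]'
)

# json.loads parses the payload into a list of dicts -- one dict per row.
records = json.loads(response_list)

# On Databricks, each dict then becomes a DataFrame row:
# df = spark.createDataFrame(records)
# df.show(truncate=False)
```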
<python><json><pyspark><databricks><azure-databricks>
2023-02-01 17:51:45
1
434
mmustafaicer
75,314,154
19,155,645
YOLOv7 segmentation: error with using pre-trained weights
<p>I would like to use YOLOv7 for segmentation on my custom dataset with custom classes. <br> I am already able to run the 'normal' YOLO version with my data, using the <code>yolov7.pt</code> weights.<br> But when I use the <code>yolov7-mask.pt</code> weights, I end up with an error:</p> <pre><code>Traceback (most recent call last): File &quot;train.py&quot;, line 616, in &lt;module&gt; train(hyp, opt, device, tb_writer) File &quot;train.py&quot;, line 71, in train run_id = torch.load(weights, map_location=device).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None File &quot;/usr/local/lib/python3.8/dist-packages/torch/serialization.py&quot;, line 789, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File &quot;/usr/local/lib/python3.8/dist-packages/torch/serialization.py&quot;, line 1131, in _load result = unpickler.load() File &quot;/usr/local/lib/python3.8/dist-packages/torch/serialization.py&quot;, line 1124, in find_class return super().find_class(mod_name, name) AttributeError: Can't get attribute 'Merge' on &lt;module 'models.common' from '/content/yolov7/models/common.py'&gt; </code></pre> <p>I also saw that <a href="https://github.com/WongKinYiu/yolov7/issues/1076" rel="nofollow noreferrer">this error is not specific to me</a>, but that issue offers no solution.<br> Also, <a href="https://medium.com/augmented-startups/train-yolov7-segmentation-on-custom-data-b91237bd2a29" rel="nofollow noreferrer">this tutorial</a> does not use pre-trained weights and does not mention why.</p> <p>When I do not use pretrained weights, the code runs, but I have not yet checked how good the results are (I assume training will take much longer).</p> <p>Any advice will be appreciated.</p>
<python><deep-learning><computer-vision><image-segmentation><yolo>
2023-02-01 17:44:46
0
512
ArieAI
75,314,041
169,947
Check whether timezone is dateutil.tz instance
<p>There are several Python packages that implement the <code>datetime.tzinfo</code> interface, including <code>pytz</code> and <code>dateutil</code>. If someone hands me a timezone object and wants me to apply it to a datetime, the procedure is different depending on what kind of timezone object it is:</p> <pre class="lang-py prettyprint-override"><code>def apply_tz_to_datetime(dt: datetime.datetime, tz: datetime.tzinfo, ambiguous, nonexistent): if isinstance(tz, dateutil.tz._common._tzinfo): # do dt.replace(tz, fold=...) elif isinstance(tz, pytz.tzinfo.BaseTzInfo): # do tz.localize(dt, is_dst=...) # other cases here </code></pre> <p>(The <code>dateutil.tz</code> case is a lot more complicated than I've shown, because there are a lot of cases to consider for non-existent or ambiguous datetimes, but the gist is always to either call <code>dt.replace(tz, fold=...)</code> or raise an exception.)</p> <p>Checking <code>dateutil.tz._common._tzinfo</code> seems like a no-no, though. Is there a better way?</p>
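One way to avoid reaching into `dateutil.tz._common` is duck typing: pytz zones are, by convention rather than any documented guarantee, the ones exposing `localize()` and `normalize()`, while stdlib, `zoneinfo`, and `dateutil` zones follow PEP 495 and work with `dt.replace(tzinfo=..., fold=...)`. A sketch (`is_pytz_like` and `apply_tz` are names invented here, and the pytz branch omits `is_dst` handling):

```python
import datetime


def is_pytz_like(tz: datetime.tzinfo) -> bool:
    """Heuristic: pytz zones expose localize()/normalize();
    stdlib, zoneinfo and dateutil zones do not."""
    return hasattr(tz, "localize") and hasattr(tz, "normalize")


def apply_tz(dt: datetime.datetime, tz: datetime.tzinfo, fold: int = 0) -> datetime.datetime:
    if is_pytz_like(tz):
        return tz.localize(dt)                   # pytz path (is_dst omitted)
    return dt.replace(tzinfo=tz, fold=fold)      # PEP 495 path
```

This keys the dispatch on the behavior needed (`localize`) instead of a private class, so it also works for third-party zones that imitate either convention.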
<python><timezone><pytz><python-dateutil>
2023-02-01 17:35:07
2
24,277
Ken Williams
75,314,032
1,096,660
How to grab video frames fast?
<p>For my project I want to extract frames from a video to make thumbnails. However, every method I find is super slow because it goes through the entire video. Here is what I have tried:</p> <pre><code>with iio.imopen(v.file.path, &quot;r&quot;) as vObj: metadata = iio.immeta(v.file.path, exclude_applied=False) frame_num = int(metadata['fps']*metadata['duration']-metadata['fps']) for i in range(10): x = int((frame_num/100)*(i*10)) frame = vObj.read(index=x) path = v.get_thumbnail_path(index=i) os.makedirs(os.path.dirname(path), exist_ok=True) iio.imwrite(path, frame) logger.info('Written video thumbnail: {}'.format(path)) </code></pre> <p>For a long video this takes extremely long. I know videos are compressed across multiple frames; however, if I manually open a video and jump to a point, it does not need to go through the video from the first frame to the last.</p> <p>I don't care about specific frames, just roughly every 10%, so sticking to keyframes is fine if it makes it faster.</p> <p>How can I quickly grab a frame at every 10% of the video?</p> <p>Thank you.</p>
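When sequential decoding is the bottleneck, one option is to seek by frame index instead. A hedged sketch using OpenCV rather than imageio (the function names here are invented, `opencv-python` is assumed to be installed, and how accurately `CAP_PROP_POS_FRAMES` lands on a frame depends on the container/codec):

```python
def thumbnail_indices(frame_count: int, n: int = 10) -> list[int]:
    """Frame indices at 0%, 10%, ..., (n-1)*10% of the video."""
    return [int(frame_count * i / n) for i in range(n)]


try:
    import cv2  # assumes opencv-python is available

    def grab_thumbnails(path: str, n: int = 10):
        cap = cv2.VideoCapture(path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        frames = []
        for idx in thumbnail_indices(total, n):
            cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek; skips decoding everything
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
        cap.release()
        return frames
except ImportError:
    pass  # OpenCV not installed; the index helper above still works
```

Seeking this way jumps to the nearest keyframe and decodes only forward from there, which matches the "roughly every 10%, keyframes are fine" requirement.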
<python><python-3.8><python-imageio>
2023-02-01 17:34:15
1
2,629
JasonTS
75,313,984
3,120,129
pywin32 alternative in C# for SAP GUI automation?
<p>I have got Python and Visual Basic working. I am struggling to find a working solution for C#, although I understand that it should be available and relatively similar to Visual Basic, right? I don't have the SAP GUI DLLs on my development machine, so I can't just add a reference to them.</p> <p>So in Python this is working fine:</p> <pre><code>#-Begin----------------------------------------------------------------- #-Includes-------------------------------------------------------------- import sys, win32com.client #-Sub Main-------------------------------------------------------------- def Main(): SapGuiAuto = win32com.client.GetObject(&quot;SAPGUI&quot;) if not type(SapGuiAuto) == win32com.client.CDispatch: return application = SapGuiAuto.GetScriptingEngine if not type(application) == win32com.client.CDispatch: SapGuiAuto = None return connection = application.Children(0) if not type(connection) == win32com.client.CDispatch: application = None SapGuiAuto = None return session = connection.Children(1) if not type(session) == win32com.client.CDispatch: connection = None application = None SapGuiAuto = None return #session.findById(&quot;wnd[0]&quot;).resizeWorkingPane 173, 36, 0 session.findById(&quot;wnd[0]&quot;).resizeWorkingPane(173, 36, 0) </code></pre> <p>In Visual Basic the equivalent would be:</p> <pre><code>Sub Main Set SapGuiAuto = GetObject(&quot;SAPGUI&quot;) Set Application = SapGuiAuto.GetScriptingEngine Set Connection = Application.openConnection(&quot;EQ2&quot;) Set Session = Connection.Children(0) Session.ActiveWindow.Iconify() Session.findById(&quot;wnd[0]/usr/txtRSYST-MANDT&quot;).Text = &quot;410&quot; Session.findById(&quot;wnd[0]/usr/txtRSYST-BNAME&quot;).Text = &quot;XYZ&quot; Session.findById(&quot;wnd[0]/usr/pwdRSYST-BCODE&quot;).Text = &quot;Pwd1&quot; Session.findById(&quot;wnd[0]/usr/txtRSYST-LANGU&quot;).Text = &quot;EN&quot; Session.findById(&quot;wnd[0]&quot;).SendVKey 0 End Sub </code></pre> <p>Or this:</p> <pre><code>Sub SAP_OpenSessionFromLogon() Dim SapGui Dim Applic Dim connection Dim session Shell (&quot;C:\Program Files (x86)\SAP\FrontEnd\SAPgui\sapfewcp.exe&quot;) Set SapGui = GetObject(&quot;SAPGUI&quot;) Set Applic = SapGui.GetScriptingEngine Set connection = Applic.OpenConnection(&quot;QLA - ECC Project One Quality System&quot;, True) '&lt;=== here you need to fill in your connection description End Sub </code></pre> <p>How can I do the same in C# without adding a reference to the SAP GUI COM DLL, or is it even possible without one? I mean, just open an exe on the computer, bring its window to the foreground, and automate the GUI?</p>
<python><c#><vba><vb.net><sap-gui>
2023-02-01 17:28:50
1
2,422
10101
75,313,920
3,798,897
How can I accelerate the matrix multiplication XAX^T when A is sparse?
<p>Suppose <code>X</code> has <code>r</code> rows and <code>c</code> columns, so that <code>A</code> is a <code>c</code> by <code>c</code> matrix. If the total count of non-zero elements in <code>A</code> (call it <code>z</code>) is small then the following Python/pseudocode is plenty fast enough. If <code>c</code> is large though, and if <code>z</code> is bigger than <code>c</code>, then I don't have any reasonable ideas for how to accelerate the 3-way product, even approximately.</p> <pre><code>from itertools import product def naive_threeway_matmul(X, A): &quot;&quot;&quot; X: (r,c) dense nested arrays A: (c,c) sparse matrix, stored as {(i,j): value} rtn: (r,r) dense nested arrays: X @ A @ X.transpose() &quot;&quot;&quot; r = len(X) rtn = [[0]*r for _ in range(r)] for (u,v) in product(range(r), repeat=2): rtn[u][v] = sum( X[u][i] * value * X[v][j] for (i,j), value in A.items() ) return rtn </code></pre> <p>So we have a reasonable <code>O(r^2 z)</code> solution, and by computing a sparse-dense product followed by a dense-dense product we can get an <code>O(r^2 c)</code> solution (if <code>z ~ c</code>), but both of those are still effectively some cubic function. Is there an algorithm to bring the runtime down closer to <code>O(r^2)</code> when <code>c</code> is large and <code>z ~ c</code>, even approximately?</p> <p>I don't really care about numpy or jax or other accelerators right now; I can optimize the code later.</p>
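For reference, the two-step route mentioned above can be written out in the same plain-Python style: first the sparse-dense product `Y = X A` in `O(r z)`, then the dense-dense product `Y X^T` in `O(r^2 c)`. This only restates the known cubic bound with the same `{(i, j): value}` sparse format, not the sub-cubic algorithm being asked for:

```python
def twostep_threeway_matmul(X, A):
    """X @ A @ X.T via Y = X @ A (sparse, O(r*z)) then Y @ X.T (dense, O(r^2*c))."""
    r, c = len(X), len(X[0])
    # Sparse-dense step: Y[u][j] = sum_i X[u][i] * A[i][j]
    Y = [[0.0] * c for _ in range(r)]
    for (i, j), value in A.items():
        for u in range(r):
            Y[u][j] += X[u][i] * value
    # Dense-dense step: rtn[u][v] = sum_j Y[u][j] * X[v][j]
    return [
        [sum(Y[u][j] * X[v][j] for j in range(c)) for v in range(r)]
        for u in range(r)
    ]
```

With `z ~ c` this trades the `O(r^2 z)` triple product for `O(r c + r^2 c)`, which is the `O(r^2 c)` variant named in the question.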
<python><sparse-matrix><matrix-multiplication>
2023-02-01 17:22:11
0
7,191
Hans Musgrave
75,313,835
10,976,654
What is the difference between creating tox environment for linting and using pre-commit hooks
<p>I am learning CI/CD for Python packages and have worked through the text and examples in Dane Hillard's book <a href="https://www.manning.com/books/publishing-python-packages" rel="nofollow noreferrer">Publishing Python Packages</a>. I know there are a lot of different tools and approaches, but I am new to this and have a novice understanding at best. I'm having trouble understanding the different ways code is cleaned/checked. For example, the book uses separate environments created through tox and runs black, flake8, etc., but it also uses pre-commit hooks. Are these redundant (i.e., alternatives), or do they do different things?</p> <p>Example <code>setup.cfg</code></p> <pre><code>[metadata] name = first-python-package version = 0.0.1 [options] package_dir = =src packages = find: include_package_data = True [options.packages.find] where = src exclude = test* ###################### # Tool configuration # ###################### [mypy] python_version = 3.10 warn_unused_configs = True show_error_context = True pretty = True namespace_packages = True check_untyped_defs = True [flake8] max-line-length = 120 [tool:pytest] testpaths = test addopts = --cov --strict-markers xfail_strict = True [coverage:run] source = imppkg branch = True [coverage:report] show_missing = True skip_covered = True [coverage:paths] source = src/imppkg */site-packages/imppkg [tox:tox] envlist = py39,py310 isolated_build = True [testenv] deps = pytest pytest-cov commands = pytest {posargs} [testenv:typecheck] deps = mypy pytest types-termcolor commands = mypy --ignore-missing-imports {posargs:src test} [testenv:format] skip_install = True deps = black commands = black {posargs:--check --diff src test} [testenv:lint] skip_install = True deps = flake8 flake8-bugbear commands = flake8 {posargs:src test} </code></pre> <p>Example GitHub Actions workflow <code>.github/packaging.yml</code></p> <pre><code>name: Packaging on: - push jobs: format: name: Check formatting runs-on: ubuntu-latest
steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v4.0.0 with: python-version: &quot;3.10&quot; - name: Install tox run: python -m pip install tox - name: Run black run: tox -e format lint: name: Lint runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v4.0.0 with: python-version: &quot;3.10&quot; - name: Install tox run: python -m pip install tox - name: Run flake8 run: tox -e lint typecheck: name: Type check runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v4.0.0 with: python-version: &quot;3.10&quot; - name: Install tox run: python -m pip install tox - name: Run mypy run: python -m tox -e typecheck test: name: Test runs-on: ubuntu-latest strategy: matrix: python: - version: &quot;3.10&quot; toxenv: &quot;py310&quot; - version: &quot;3.9&quot; toxenv: &quot;py39&quot; steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v4.0.0 with: python-version: ${{ matrix.python.version }} - name: Install tox run: python -m pip install tox - name: Run pytest run: tox -e ${{ matrix.python.toxenv }} </code></pre> <p>Example <code>.pre-commit-config.yaml</code></p> <pre><code>repos: - repo: https://github.com/asottile/pyupgrade rev: v2.31.0 hooks: - id: pyupgrade args: ['--py39-plus'] - repo: https://github.com/psf/black rev: 22.1.0 hooks: - id: black language_version: python3.10 args: ['--config=pyproject.toml'] - repo: https://github.com/pycqa/flake8 rev: 4.0.1 hooks: - id: flake8 </code></pre> <p><strong>Questions:</strong> (1) Do I need to do the tox env actions for black and flake8, or can I just do the pre-commit hooks? (2) Do I need to do the tox env actions for pytest, or is there a pre-commit hook for that? (it is not provided in the book)</p>
<python><continuous-integration><github-actions><pre-commit-hook><pre-commit.com>
2023-02-01 17:14:33
0
3,476
a11